Digital...wait for it...delay
This article is based on a problem that was put to me at the pub this evening: can I build a device which delays an analogue signal by an accurate, controlled, and variable time (specifically 0-1000ms, with 1ms accuracy)?
The answer is that I can; this I already knew, because I know such devices exist. At that point, it's merely a matter of finding the schematics and working it out from there. However, the pub and the internet don't mix too well, and I'd already started doodling electronics, so I set about trying to work out the solution myself.
It's complicated.
The basic principles are simple enough: by far the easiest way to implement a frequency-independent linear delay is a clocked register. Or, in fact, a chain of clocked registers. However, these only take digital inputs, and we've got an analogue signal, so you also need an analogue-to-digital converter (ADC) on the way in and a digital-to-analogue converter (DAC) on the way out.
So, we sample the signal[1], and feed the quantised data through a long chain of registers / flip-flops to delay it the required amount. The current delay can be stored in another register, and edited via a couple of physical buttons. The challenge is then taking this information and selecting the correct register to output. This appears to require a bitwise comparison of the current delay against every constant in the range (which would actually be implemented by ANDing together all the bits, inverted as appropriate; for example, in a 4-bit system, 6 would be s[3]' . s[2] . s[1] . s[0]'), and then multiplexing each inter-register value based on this comparison. A quick calculation showed why this gets out of hand: taking 10-bit samples at 1kHz[1] with a maximum delay of 1s means 1000 × 10 = 10,000 flip-flops before you even start on the comparison and multiplexing logic, putting the total in excess of 40,000 gates. So, definitely something that's going to require a bit more thought and design, or a custom-made chip.
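To make that selection logic concrete, here's a sketch in C of what those gates would compute. The function, the 4-bit width, and all the names are mine, purely for illustration; in hardware the loop is unrolled into one compare-and-mask slice per tap.

#include <stdint.h>

#define NUM_TAPS 16   /* a 4-bit delay selector, as in the example above */

/* Pick out the value at tap `delay`. Each iteration is one slice of
   hardware: a one-hot compare (a 4-input AND gate with the appropriate
   inputs inverted, e.g. s[3]'.s[2].s[1].s[0]' for 6) feeding an AND-OR
   multiplexer tree. */
uint16_t select_tap(const uint16_t taps[NUM_TAPS], unsigned delay)
{
    uint16_t out = 0;
    for (unsigned k = 0; k < NUM_TAPS; k++) {
        uint16_t mask = (delay == k) ? 0xFFFFu : 0u;  /* one-hot select line */
        out |= taps[k] & mask;                        /* the mux */
    }
    return out;
}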
So, I decided to jump to the other end of the spectrum, and see how I'd go about it on an FPGA. I've done some work in the past with the Altera DE2, the 'University / Training' board, which is really rather shiny and fun to work with (except that we don't have all the drivers at Imperial, for reasons best known to the staff here). Of course, defining what I've described is really quite easy in Handel-C:
int 10 registers[CLOCK_RATE * MAX_DELAY]; // one 10-bit entry per sample, i.e. clock speed * max time
unsigned int total_delay;                 // current delay, measured in samples

while (1) {
    par {
        // everything in a par block happens on the same clock edge,
        // so each register latches its neighbour's previous value
        registers[0] = get_input();
        registers[1] = registers[0];
        registers[2] = registers[1];
        ...
        registers[n] = registers[n-1];
        output(registers[total_delay]);
    }
}
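Expressed in software terms (this C is my own sketch, not part of the design), that par block is doing the following every sample period:

#include <string.h>

/* The naive shift: all n stored samples move along one slot per tick.
   That's n writes per cycle just to add a single new value. */
void naive_shift(int regs[], int n, int input)
{
    memmove(&regs[1], &regs[0], (size_t)(n - 1) * sizeof regs[0]);
    regs[0] = input;
}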
However, this is still really the same naive implementation I was talking about earlier; it requires data to propagate through a long series of flip-flops. Of course, I can get a chip with a large number of addressable flip-flops on it really easily: there are hundreds of millions in the computer you're reading this on, as they form the basis of all random access memory. So, how can I cut down the number of writes I'm making to the flip-flops? I'm only actually adding one value each cycle, so rather than shifting everything around in memory, I just need some way of tracking where my current value is.
The best way to handle this is to loop over a block of memory the same size as the delayed data: a circular buffer. Each cycle I then only need to read from the current memory location, which holds the oldest sample in the buffer, and then write the new value over it.
int curr_address = 0;
int max_entries = ???;   // buffer length in samples: sample rate * current delay

void process_signal(void);

void main(void) {
    while (1) {
        par {
            wait_for_delay_value_update();  // the buttons adjusting the stored delay
            process_signal();
        }
    }
}

void process_signal(void) {
    output(get_RAM_value(curr_address));       // read the oldest sample out...
    set_RAM_value(curr_address, get_input());  // ...and overwrite it with the newest
    // advance, wrapping at the end of the buffer (valid indices are 0 to max_entries - 1)
    curr_address = (curr_address == max_entries - 1) ? 0 : curr_address + 1;
}
Now, this has reached the point at which I can implement it on anything with an ADC, a DAC, and a RAM block. It's amazing what you can derive with some pen and paper, some time, and a handy pint.
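For anyone who'd like to convince themselves it works before finding hardware, here's a minimal C simulation of the circular buffer. The names, the 1kHz rate, and the ramp test input are all my own scaffolding, not part of any real device.

#include <stdio.h>

#define MAX_ENTRIES 1000   /* 1 s at the 1 kHz sample rate from earlier */

static int ram[MAX_ENTRIES];
static int curr_address = 0;

/* Delay by `entries` samples (1 <= entries <= MAX_ENTRIES): read the
   oldest value, overwrite it with the newest, advance with wrap-around. */
static int process_sample(int input, int entries)
{
    int out = ram[curr_address];
    ram[curr_address] = input;
    curr_address = (curr_address == entries - 1) ? 0 : curr_address + 1;
    return out;
}

int main(void)
{
    /* Feed in a ramp; once the buffer has filled, out trails in by 5. */
    for (int t = 0; t < 12; t++)
        printf("in=%2d out=%2d\n", t, process_sample(t, 5));
    return 0;
}

A buffer of n entries delays the signal by exactly n samples, which at 1kHz is n milliseconds, so the two buttons only ever need to change the effective buffer length (with a little care over curr_address when shrinking it, which I've glossed over here).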
Improvements to this are left as an exercise for the reader. Leave them in the comments!
[1] The minimum requirement here is 44kHz, 16-bit. Studio quality is ~96kHz, 32-bit (I believe).