About the Origins of Pulse Code Modulation

In the way most digital sound gets reproduced today, a sample-value that has 8, 12, 16, or 24 bits of precision gets sent to a Digital / Analog Converter, and is transformed into an analog voltage. This gets repeated at a constant sample-rate, resulting in an analog signal. Usually, the D/A converter treats the bits of precision in parallel. And this poses the question, ‘Why is this format named Pulse Code Modulation?’

The reason this is sometimes referred to as PCM is the fact that a technology once existed, in which the sample-bits were sent to a kind of analog circuit sequentially.

It was observed that when a capacitor was allowed to discharge through a simple resistor, the voltage decay curve was exponential, so that within a constant amount of time, the capacitor voltage would halve. That interval of time was then used as the timing constant for pulses, which were either present or absent in a digital stream. If a pulse was present, a constant amount of charge was pumped into the capacitor, thus increasing its voltage again by a fixed difference.

This resulted in a signal-format in which the least significant bit was also the earliest, so that the most significant bit was the one represented by the last pulse. The circuit was so simple that it could be implemented with a vacuum tube. And thereby, some form of digital sound already existed for military use in the early 1960s. However, the precision was limited to 6 bits. While suitable for military use, this form of digital sound might have sounded quite distorted. It was certainly not meant for music, but it was suitable for telling troops in a battlefield situation what their orders were.
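The decoding scheme just described can be sketched in a few lines of Python – a model of my own, not of any documented circuit – in which each pulse interval halves the capacitor voltage, and a present pulse then adds a fixed charge:

```python
# Sketch of the serial PCM decoding described above (hypothetical
# parameter q; the real circuit values are not given in the text).

def decode_serial_pcm(bits_lsb_first, q=1.0):
    """Decode a pulse train, least-significant bit arriving first."""
    v = 0.0
    for bit in bits_lsb_first:
        v = v / 2.0        # capacitor discharges to half its voltage per interval
        if bit:
            v += q         # a present pulse pumps a fixed charge into the capacitor
    return v

# A 6-bit word, value 45 = 0b101101, sent LSB first:
v = decode_serial_pcm([1, 0, 1, 1, 0, 1])
print(v * 32)              # → 45.0
```

Because each earlier bit has been halved one more time by the end of the word, the final capacitor voltage comes out proportional to the binary value of the sample, with the most significant bit carrying the full weight.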

(Edit : )

As to how circuit-designers assured that the amount of charge added to the capacitor would be constant, I think they just relied on the anode voltage of the tube being much higher than the signal-amplitudes, so that if this voltage was allowed to flow through a cathode-resistor ‘down to the voltage of the capacitor’, a relatively constant amount of current would result. And the analog pulse-width of each pulse was also made uniform somehow.



Successive Approximation

While Successive Approximation is generally an accurate approach to Analog-to-Digital conversion, it is not a panacea. Its main flaw lies in the fact that the D/A converter within it will eventually show inconsistencies. When that happens, some of the least-significant bits output will either be an overestimated one followed by nothing but zeroes, or an underestimated zero followed by nothing but ones.
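That failure mode can be simulated – with bit-weights I have invented purely for illustration – by running a successive-approximation loop around a D/A converter whose weights are made slightly inconsistent:

```python
# Successive Approximation sketch: the internal DAC's bit weights
# (weights[0] = MSB) can be made inconsistent to show the flaw.

def sar_adc(v_in, n_bits, weights):
    """Return the n_bits code found by successive approximation."""
    code = 0
    for i in range(n_bits):
        trial = code | (1 << (n_bits - 1 - i))
        # Internal DAC output for the trial code:
        v_dac = sum(weights[j] for j in range(n_bits)
                    if trial & (1 << (n_bits - 1 - j)))
        if v_dac <= v_in:
            code = trial                  # keep the trial bit
    return code

n = 8
ideal = [2.0 ** -(i + 1) for i in range(n)]   # 1/2, 1/4, ... of full scale
bad = list(ideal)
bad[3] *= 1.20                                # one bit weight is 20% too heavy

v = 0.07                                      # input, as a fraction of full scale
print(bin(sar_adc(v, n, ideal)))              # → 0b10001 (17, the correct code)
print(bin(sar_adc(v, n, bad)))                # → 0b1111
```

With the inconsistent weight, the converter refuses a bit it should have kept, and then compensates by setting every remaining bit: exactly the underestimated zero followed by nothing but ones.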

Although circuit specialists do what they can to make this device consistent, there are quantitative limits to how successful they can be. And whether 24 bits can be achieved depends mainly on frequency. In analog circuits, voltages tend to zero in on an ideal voltage exponentially, even when no signal-processing is taking place. So the real question should be, ‘Can 24 bits still be achieved, far above 48kHz?’

And, if we insist that the low-pass filter should be purely numeric, we are also implying that one A/D conversion must take place at the highest sample-rate, such as 192kHz, while if the low-pass filter could be partially analog, this would not be required.



I feel that standards need to be reestablished.

When 16-bit / 44.1kHz Audio was first developed, it implied a very capable system for representing high-fidelity sound. But I think that today, we live in a pseudo-16-bit era. Manufacturers have taken 16-bit components, but designed devices which do not deliver the full power or quality of what this format once promised.

It might be a bit of an exaggeration, but I would say that out of those indicated 16 bits of precision, the last 4 are not accurate. And one main reason this has happened is compressed sound. Admittedly, signal compression – which is often a euphemism for data reduction – is necessary in some areas of signal processing. But when the first forms of data-reduction were devised, one reason why it was applied to sound had more to do with dialup-modems and their lack of signal-speed, and with the need to be able to download songs onto small amounts of HD space, than with any other purpose.

Even though compressed streams caused this, I would not say that the solution lies in getting rid of compressed streams. But I think that a necessary part of the solution would be consumer awareness.

If I tell people that I own a sound device, that it uses 2x over-sampling, but that I fear the interpolated samples are simply generated as a linear interpolation of the two adjacent, original samples, and if those people answer, “So what? Can anybody hear the difference?”, then this is not an example of consumer awareness. I can hear the difference between very-high-pitch sounds that are approximately correct, and ones which are greatly distorted.
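How badly linear interpolation fails near the top of the audible band can be checked numerically. The following sketch – using a hypothetical 20kHz test tone at 44.1kHz, not a measurement of any real device – compares the midpoint sample produced by linear interpolation against the true waveform value:

```python
# Error of linearly-interpolated midpoint samples, for a 20kHz sine
# sampled at 44.1kHz (hypothetical test signal, for illustration).

import math

fs = 44100.0
f = 20000.0
errs = []
for n in range(1000):
    t0 = n / fs
    t1 = (n + 1) / fs
    true_mid = math.sin(2 * math.pi * f * (t0 + t1) / 2)
    lin_mid = (math.sin(2 * math.pi * f * t0) +
               math.sin(2 * math.pi * f * t1)) / 2
    errs.append((lin_mid - true_mid) ** 2)

rms_err = math.sqrt(sum(errs) / len(errs))
print(rms_err)      # a large fraction of the full signal amplitude
```

At this frequency, almost half a cycle elapses between adjacent samples, so averaging them shrinks the interpolated sample to a small fraction of its correct value; the RMS error is on the order of 60% of full amplitude.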

Also, if we were to accept for a moment that out of the indicated 16 bits, only the first 12 are accurate, while there exist sound experts who tell us that by dithering the least-significant bit, we can extend the dynamic range of this sound beyond 96dB, then I do not really believe that those experts know any less about digital sound. Those experts have just remained so entirely surrounded by their high-end equipment, that they have not yet noticed the standards slip in other parts of the world.
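What those experts mean can be sketched as follows – with signal levels I have chosen purely for illustration. A sine wave of only 0.4 LSB in amplitude quantizes to pure silence without dither, but survives as signal-plus-noise when triangular (TPDF) dither is added before rounding:

```python
# Quantizing a sub-LSB sine to 16-bit levels, without and with
# TPDF dither (all amplitudes expressed in units of one LSB).

import math, random

random.seed(0)

def quantize16(x, dither=False):
    """Round x (in LSB units) to an integer 16-bit level."""
    d = (random.random() - random.random()) if dither else 0.0  # TPDF, ±1 LSB
    return round(x + d)

fs, f, n = 48000, 997.0, 48000
amp = 0.4                          # sine amplitude: only 0.4 LSB

sig = [amp * math.sin(2 * math.pi * f * k / fs) for k in range(n)]

plain = [quantize16(x) for x in sig]
dithered = [quantize16(x, True) for x in sig]

# Undithered, a sub-LSB signal quantizes to pure silence:
print(all(v == 0 for v in plain))       # → True

# Dithered, the tone survives in the noise; correlating recovers it:
a_hat = 2.0 / n * sum(q * math.sin(2 * math.pi * f * k / fs)
                      for k, q in enumerate(dithered))
print(round(a_hat, 1))                  # → 0.4
```

The dither trades the signal-correlated quantization distortion for a benign noise floor, beneath which the original tone remains recoverable.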

Also, I do not believe that the answer to this problem lies in consumers downloading 24-bit, 192kHz sound-files, because my assumption would again be that only a few of those indicated 24 bits will be accurate. I do not believe humans hear ultrasound. But I think that with great effort, we may be able to hear 15-18kHz sound from our actual playback devices again – in the not-so-distant future.


About The Applicability of Over-Sampling Theory

One fact which I have described in my blog is that when Audio Engineers set the sampling rate at 44.1kHz, they were taking into account a maximum perceptible frequency of 20kHz, but that if the signal was converted from analog to digital format, or the other way around, directly at that sampling rate, they would obtain strong aliasing as its main feature. And so the concept of ‘over-sampling’ arose, in which the sample-rate was originally quadrupled, and by now can simply be doubled, so that all the analog filters still have to do is suppress a frequency which is twice as high as the frequencies which they need to pass.

The interpolation of the added samples exists digitally as a low-pass filter, the highest-quality variety of which would be a sinc-filter.
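As a sketch of what such a filter does – with a tap count and window that are my own choices, not anything prescribed by real hardware – here is 2x over-sampling by zero-stuffing, followed by a windowed-sinc, half-band low-pass:

```python
# 2x over-sampling with a windowed-sinc (half-band) interpolation
# filter. Filter length and Hamming window are illustrative choices.

import math

def sinc_kernel(n_taps=33):
    """Half-band windowed-sinc low-pass for 2x interpolation."""
    m = n_taps // 2
    h = []
    for k in range(-m, m + 1):
        x = k / 2.0                                    # cutoff at 1/4 of the new rate
        s = 1.0 if k == 0 else math.sin(math.pi * x) / (math.pi * x)
        w = 0.54 + 0.46 * math.cos(math.pi * k / m)    # Hamming window
        h.append(s * w)
    return h

def oversample_2x(samples, h):
    """Zero-stuff to twice the rate, then convolve with the kernel."""
    up = []
    for s in samples:
        up.extend([s, 0.0])
    m = len(h) // 2
    out = []
    for i in range(len(up)):
        acc = 0.0
        for k, hk in enumerate(h):
            j = i + k - m
            if 0 <= j < len(up):
                acc += hk * up[j]
        out.append(acc)
    return out
```

Because the kernel is half-band, the original samples pass through untouched at the even output positions, while the odd positions receive properly band-limited in-between values, rather than the crude averages a linear interpolator would produce.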

All of this fun and wonderful technology has a main weakness. It actually needs to be incorporated into the devices, in order to have any bearing on them. That MP3-player, which you just bought at the dollar-store? It has no sinc-filter. And therefore, whatever a sinc-filter would have done, gets lost on the consumer.
