I have heard innumerable people tell me, that the way to design a Digital / Analog converter, is to assume some sort of zero-voltage current-output, and to connect the pins of a digital input, representing the bits from least-significant to most-significant, to a 2kΩ resistor, a 1kΩ resistor, a 500Ω resistor, a 250Ω resistor, etc. And one of the biggest problems with that sort of D/A converter is, that by the time we want to perform a 16-bit conversion, the lowest resistance becomes way too low, for the digital inputs to handle.
If a D/A converter were to be designed with resistors at all – which is not guaranteed – then the following approach would make much more sense, which assumes a high-input-impedance voltage-follower as its output:
The principle behind this approach is, that two 2kΩ resistors in-series form a voltage-divider, but taken in-parallel, they also form a 1kΩ resistance, in-series with which the next 1kΩ resistor forms another virtual 2kΩ resistance, with which the next 2kΩ resistor forms the next voltage-divider… This can be repeated for as many bits as needed.
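As a small aside, the stage-by-stage reduction just described can be sketched numerically. The function below is my own illustration, not any standard library; it assumes ideal resistors, ideal 0V / 5V logic levels, and a voltage-follower that draws no current:

```python
# Hypothetical sketch: collapse an R-2R ladder stage by stage, exactly as
# described above (2k ∥ 2k -> 1k, plus the series 1k -> 2k again).

def r2r_output(bits, v_high=5.0):
    """bits: list of 0/1, most-significant bit first."""
    v_th = 0.0                         # terminating 2 kΩ resistor, tied to ground
    for bit in reversed(bits):         # work from the LSB end upward
        v_bit = v_high if bit else 0.0
        # Two equal 2 kΩ resistors: the Thevenin voltage is the average of the
        # two source voltages, and the Thevenin resistance is 1 kΩ; the series
        # 1 kΩ restores 2 kΩ without changing the open-circuit voltage.
        v_th = (v_bit + v_th) / 2.0
    return v_th

# An 8-bit code with only the MSB set should give half of 5 V:
print(r2r_output([1, 0, 0, 0, 0, 0, 0, 0]))   # -> 2.5
print(r2r_output([1, 1, 1, 1, 1, 1, 1, 1]))   # -> 255/256 of 5 V ≈ 4.98
```

Each pass through the loop halves the contribution of everything below it, which is exactly the binary weighting that the ladder is meant to produce.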
Whether it yields ‘sufficiently good results’ is another question. Given imperfections in the components, this could lead to non-linearity, where linearity is desired. And then, if this D/A converter is to be used within an A/D converter, this non-linearity can lead to further problems down the road.
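To put a number on that concern, one can simulate the same ladder with slightly mismatched resistors. The sketch below is hypothetical – the ±1% tolerance and the random seed are merely assumptions for illustration – and collapses the ladder Thevenin-style, this time keeping track of actual resistance values:

```python
import random

def ladder_output(code, n_bits, r2s, r1s, v_high=5.0):
    """Thevenin-collapse an R-2R ladder with per-resistor values.
    r2s: n_bits+1 'vertical' resistors (index 0 = terminator, then LSB upward).
    r1s: n_bits-1 'horizontal' series resistors."""
    v_th, r_th = 0.0, r2s[0]                  # terminating resistor to ground
    for i in range(n_bits):                   # LSB first
        v_bit = v_high if (code >> i) & 1 else 0.0
        r2 = r2s[i + 1]
        r_p = r2 * r_th / (r2 + r_th)         # parallel combination
        v_th = (v_bit / r2 + v_th / r_th) * r_p
        r_th = r_p + (r1s[i] if i < n_bits - 1 else 0.0)
    return v_th

random.seed(1)
n = 8
tol = 0.01                                    # assumed ±1 % resistors
r2s = [2000.0 * (1 + random.uniform(-tol, tol)) for _ in range(n + 1)]
r1s = [1000.0 * (1 + random.uniform(-tol, tol)) for _ in range(n - 1)]

worst = max(abs(ladder_output(c, n, r2s, r1s) - 5.0 * c / 2**n)
            for c in range(2**n))
print(f"worst-case deviation: {worst * 1000:.2f} mV")
print(f"one ideal LSB step:   {5000 / 2**n:.2f} mV")
```

Runs of this kind tend to show deviations on the order of an LSB or more at 8 bits, and the situation only worsens as bits are added, which is the non-linearity being warned about.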
(Updated 07/11/2018, 8h00 … )
(As Of 07/10/2018 : )
I suppose that because there exist people who don’t understand the difference between precision and accuracy, and because according to first inspection, the circuit above can be given as many (digital) input-bits as wanted – thus increasing precision endlessly – I should explain the main reason why, past a certain number of input-bits, the accuracy still won’t correspond to the precision.
If somebody were to try to implement such a D/A converter using discrete components, aside from the fact that the real resistors don’t match, the main problem is that digital outputs, which act as inputs to this circuit, are expected to produce completely reliable high and low voltages, in response to binary 1’s and 0’s. And logic circuits don’t typically provide such a guarantee. What 5V TTL-compatible logic circuits normally ‘guarantee’, is that input-voltages greater than 2V will be read as 1’s, while voltages lower than 0.8V will be read as 0’s – with the range in-between left undefined.
But a long time ago, TTL logic chips even used pull-up resistors, to achieve their logical-High output voltages. Such a pull-up resistor would simply end up in-series with the connected load, when the logic state was High, so that the connected resistance also affected the output-voltage.
But because of the way D/A conversion works, the digital output-voltage corresponding to the most-significant bit input above, will also have the greatest effect on the overall output of the converter. If this were a converter with 32 input-bits, the overall output-voltage would need to be correct, to one part in 4 billion, for the exercise to be a success! And so, the output-voltage fed to this circuit as input, as the most-significant bit, would also need to be correct, to one part in 2 billion. Where would one find discrete components that can do that?
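The arithmetic behind that claim is trivial but worth stating: the required relative accuracy scales as one part in 2 to the power of the bit-count.

```python
# How accurate the full-scale output must be, per bit-depth:
for n_bits in (8, 16, 24, 32):
    print(f"{n_bits:2d} bits: 1 LSB = 1 part in {2**n_bits:>13,} "
          f"({1e6 / 2**n_bits:.4g} ppm of full scale)")
```

At 32 bits, one LSB is roughly 0.23 parts per billion of full scale – far beyond what any discrete resistor tolerance offers.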
In practice, accurate D/A converters need to be implemented on an IC, where the components match almost-perfectly.
I suppose that one more observation which I could add, is that the output-voltages serving as the digital inputs are allowed to be inaccurate, as long as they are all inaccurate in the same way. I.e., they could all have logical ‘High’ voltages of 4.5V, and logical ‘Low’ voltages of 1.22V, for the sake of argument. In that case, to connect the last 2kΩ resistor to ground, where ground is assumed to have a voltage of exactly 0V, would actually be an error. In such a case, according to the assumption that all the input-voltages are incorrect, but on the same scale, that last, least-significant 2kΩ resistor would actually need to be connected to an output, the voltage-range of which is the same as the voltage-range of the active digital inputs, but the value of which is always (Low == 1.22V). Then, some sort of linearity might be achieved.
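This suggestion can actually be checked numerically. In the sketch below – my own illustration, assuming equal, ideal resistors – terminating the ladder with the same ‘Low’ voltage that the digital outputs produce makes the output an exact affine function of the input code:

```python
# Hypothetical check: if every digital output swings between the same
# (inaccurate) Low and High voltages, terminating the ladder's last 2 kΩ
# resistor at that same Low voltage keeps the transfer exactly linear
# in the input code, apart from a constant offset of v_low.

def ladder(code, n_bits, v_high, v_low, v_term):
    v_th = v_term                      # far end of the last 2 kΩ resistor
    for i in range(n_bits):            # LSB upward; equal resistors halve each stage
        v_th = ((v_high if (code >> i) & 1 else v_low) + v_th) / 2.0
    return v_th

v_high, v_low, n = 4.5, 1.22, 8
for code in (0, 1, 128, 255):
    v = ladder(code, n, v_high, v_low, v_term=v_low)
    ideal = v_low + (v_high - v_low) * code / 2**n
    print(code, round(v - ideal, 12))  # -> difference is 0.0 for every code
```

Terminating to true 0V instead would shift every output by a constant (v_low divided by 2 to the n), which a follower stage could still remove, but tying the terminator to the outputs’ own Low rail avoids the issue entirely.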
In the case of 32 hypothetical input-bits, the last suggestion above would translate into always setting the least-significant bit ‘Low’ – corresponding to a binary (0) – and connecting the least-significant, 2kΩ resistor to it, instead of to ground. In the case of 16-bit converters, I don’t know if the circuit designers would be willing to sacrifice 1 bit of precision, and in the case of 8-bit converters, I’m sure they would not. Instead, they could use a symmetrical set of digital outputs as their inputs, that have an unused 9th, least-significant bit. This could be part of a 16-bit output-set, only the most-significant 8 bits of which are meant to be used, and the least-significant 8 bits of which remain zeroes.
Alternatively, the circuit-designers could just connect that last, least-significant 2kΩ resistor to a voltage of zero, exactly as I diagrammed it above, and acknowledge that their least-significant bit will not contribute accurately to the analog output-voltages.
Another approach which circuit-designers could take, would be to compute the output-impedance of their CMOS logic circuits – which follows from whatever type of transistor is being used, and which would then ideally be the same for the n-Channel and p-Channel MOSFET transistors, whether the output is High or Low – and then to subtract this Source-Drain resistance-value from all the 2kΩ resistors’ values, except from the last one, since the last one is just connected to zero…
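A numeric sketch can show what such trimming would accomplish. The 50Ω on-resistance below is purely an assumed value; each active bit is modeled as driving through that resistance in series with its external resistor, while the terminating 2kΩ resistor goes straight to ground:

```python
def ladder(code, n_bits, r_vert, r_on, v_high=5.0):
    """Thevenin-collapse; each bit drives through r_on (the transistor's
    channel) plus its external vertical resistor r_vert; the terminating
    2 kΩ resistor has no transistor and connects straight to 0 V."""
    v_th, r_th = 0.0, 2000.0
    for i in range(n_bits):                   # LSB first
        v_bit = v_high if (code >> i) & 1 else 0.0
        r2 = r_vert + r_on                    # channel in series with resistor
        r_p = r2 * r_th / (r2 + r_th)
        v_th = (v_bit / r2 + v_th / r_th) * r_p
        r_th = r_p + (1000.0 if i < n_bits - 1 else 0.0)
    return v_th

r_on = 50.0                                   # assumed MOSFET on-resistance
for r_vert, label in ((2000.0, "uncorrected"), (2000.0 - r_on, "trimmed")):
    worst = max(abs(ladder(c, 8, r_vert, r_on) - 5.0 * c / 256)
                for c in range(256))
    print(f"{label}: worst error {worst * 1000:.4f} mV")
```

With 1950Ω external resistors the 50Ω channel restores each branch to exactly 2kΩ, and the worst-case error collapses to floating-point noise – provided, of course, that the on-resistance really is identical for both transistor polarities.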
The concept of 1950Ω resistors, sitting next to 2000Ω and 1000Ω resistors, may be difficult to implement on an actual IC, because the manufacture of ICs’ photo-resist masks is pixellated.
There exist certain people who suggest, that in the case of 16-bit Audio, software which “Dithers” the least-significant bit, can make it appear, as though the output was more-than-16-bits accurate. Specifically, algorithms are known to exist, in which the least-significant (output) bit (out of 16) is (1) instead of (0), 25%, 50%, or 75% of the time, as a result of the same internally-represented (32-bit) sample-value. And at frequencies well below the Nyquist Frequency, this would seem to add a virtual 2 bits of precision, resulting in, effectively, 18-bit Audio.
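The effect those people describe is easy to demonstrate with a toy example – here, a signal level sitting exactly 1/4 of an LSB above zero, with rectangular dither of one LSB added before truncation. (The dither shape is my own assumption for illustration; real audio dithering typically uses triangular dither, often with noise shaping.)

```python
import random
random.seed(0)

lsb = 1.0                       # work in units of one 16-bit LSB
true_value = 0.25 * lsb         # a level that falls between 16-bit steps

# Plain rounding loses the quarter-LSB entirely:
plain = [round(true_value) for _ in range(10000)]
# Additive rectangular dither of one LSB, applied before truncation,
# makes the output 1 about 25% of the time:
dithered = [int(true_value + random.random()) for _ in range(10000)]

print(sum(plain) / len(plain))        # -> 0.0 : the quarter-LSB is lost
print(sum(dithered) / len(dithered))  # ≈ 0.25 : recovered on average
```

Averaged (or low-pass filtered) over many samples, the dithered stream carries the sub-LSB information that plain truncation throws away.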
My question in response to this has been, ‘What good will that do me, if I cannot assume that the physical, 16-bit D/A conversion, is in fact accurate to 16 bits?’
(Update 07/11/2018, 8h00 : )
Wikipedia shows a basic diagram of a logic inverter, implemented with CMOS technology. I have an observation to add about this circuit.
If the input is either Low or High, and the output voltage has correspondingly gone High or Low, the Source-Drain voltage of the active transistor is quite low, with respect to the applied gate voltage, and with respect to how much each gate voltage surpasses the threshold at which each transistor ‘turns on’. This would suggest that in the steady, equilibrium state – not at high speeds – these MOSFETs are operating in their Triode Region, where current depends on the gate-voltage, and approximately linearly, on the Source-Drain voltage.
But, as soon as current is linearly dependent on Source-Drain voltage, the transistor can be said to have some calculable amount of resistance.
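Using the standard square-law model of a MOSFET, that resistance can be estimated. The parameter values below (the transconductance parameter k and the threshold voltage) are invented purely for illustration:

```python
# Square-law MOSFET model, triode region (valid for v_ds < v_gs - v_t):
#   I_D = k * ((v_gs - v_t) * v_ds - v_ds**2 / 2)
# Near v_ds = 0 the curve is linear, so the channel looks resistive,
# with R_on ≈ 1 / (k * (v_gs - v_t)).

def triode_current(v_gs, v_ds, k=2e-3, v_t=1.0):
    return k * ((v_gs - v_t) * v_ds - v_ds**2 / 2.0)

v_gs = 5.0
r_on = 0.01 / triode_current(v_gs, 0.01)   # slope taken at a tiny v_ds
print(f"approximate on-resistance: {r_on:.1f} ohms")   # -> 125.2 ohms
print(1.0 / (2e-3 * (v_gs - 1.0)))                     # -> 125.0 (analytic)
```

Note that the resistance falls as the gate overdrive rises, which is why the on-resistance depends on the supply voltage the logic is run at.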
I know by now, that this mode in which a MOSFET can operate is not very common. But in this circuit, the only way the MOSFETs would not be operating in triode mode, but rather in their saturation region, is if the output-voltage had not reached – either for lack of time or due to a heavy load – the steady-state output-voltage defined by a valid input-voltage.