I’ve personally known people who believed, from the depths of their souls, that conventional light-dimmers are rheostats – which would mean that they have one active component, a load-bearing variable resistor, in series with the lightbulb they were designed to regulate.
In reality, the light-dimmers that were mass-produced in the 1970s were of the same design as those still mass-produced today. Their main component is a triac, which is effectively a pair of silicon-controlled rectifiers (SCRs) facing in opposite directions – so that the pair can conduct in both directions, thereby satisfying the need to regulate an A/C power-supply – but the SCRs that make up a triac share a common gate.
I will describe the device as if a triac were just the fancier, A/C equivalent of an SCR.
A silicon-controlled rectifier has an essential property: it will fail to ‘turn on’ – i.e., become conductive between its two main electrodes – until the voltage applied to the gate, through which a much smaller current flows, exceeds a certain threshold. Once triggered, an SCR tends to stay turned on until the voltage across its main electrodes returns to zero – or becomes very low – at which point it switches off.
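This latching behaviour can be sketched as a tiny state-machine in Python. The threshold values below are purely illustrative, not taken from any datasheet:

```python
# A minimal sketch (not a physical model) of SCR latching behaviour:
# the device turns on when the gate voltage exceeds a trigger threshold,
# and then stays on until the main-electrode voltage falls to (nearly) zero.

GATE_TRIGGER_V = 0.7   # hypothetical gate trigger voltage
HOLD_V = 0.05          # voltage below which the SCR drops out

def scr_step(conducting, gate_v, anode_v):
    """Return the SCR's conduction state after one time-step."""
    if not conducting:
        # Off: only a sufficient gate voltage can turn it on.
        return gate_v >= GATE_TRIGGER_V
    # On: it latches, regardless of the gate, until the
    # main-electrode voltage is approximately zero.
    return abs(anode_v) > HOLD_V

state = False
state = scr_step(state, gate_v=1.0, anode_v=170.0)   # gate fires: turns on
state = scr_step(state, gate_v=0.0, anode_v=120.0)   # gate removed: stays on
state = scr_step(state, gate_v=0.0, anode_v=0.0)     # zero-crossing: turns off
```
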
A/C house current in Canada and the USA is supplied as a voltage-sine-wave with a frequency of 60 Hz, while in some other countries, a frequency of 50 Hz is used instead.
What a light-dimmer does is delay the turn-on point of this triac within each half-wave of the supplied A/C voltage – i.e., within each 1/120 of a second at 60 Hz, 120 times per second. The triac switches off each time the A/C feeding it crosses zero, cutting off the voltage supplied to the lightbulb. So different settings produce conduction durations that vary between 100% and 0%, causing the lightbulb to glow at different levels of brightness.
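Assuming a purely resistive load such as a lightbulb, the mapping from firing delay to delivered power can be sketched in Python. This is an idealized model – it just integrates sin² over the conducting part of each half-cycle – not a description of any particular dimmer:

```python
# Fraction of full power delivered when the triac fires at angle `alpha`
# (in radians) within each half-cycle. The triac conducts from alpha to
# pi, so the delivered power is the integral of sin^2 over [alpha, pi],
# normalized against the full half-cycle.

import math

def power_fraction(alpha):
    """Fraction of full power delivered, for firing angle alpha in [0, pi]."""
    return ((math.pi - alpha) + math.sin(2 * alpha) / 2) / math.pi

print(round(power_fraction(0.0), 3))           # fires immediately: 1.0
print(round(power_fraction(math.pi / 2), 3))   # fires at the peak: 0.5
print(round(power_fraction(math.pi), 3))       # never fires: 0.0
```

Note that firing exactly at the waveform’s peak delivers 50% of full power, which is the limit mentioned below for a resistor-only gate circuit.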
The way this was done in the 1970s – and it remains the cheapest way to manufacture a dimmer – is that the dimmer possesses a smaller variable resistor, a.k.a. a potentiometer, through which a much smaller amount of current flows than is meant to flow through the load. The potentiometer’s wiper additionally has a capacitor connected to it, and is also connected to the shared gate of the triac. Obviously, the resistance and capacitance need to be calculated somewhat precisely, taking into account the A/C voltage as well as its frequency.
(Last Updated 08/12/2017, 15h20 : )
If only a variable resistor were used to turn the triac on, then the lowest-working sensitivity setting would wait until the input A/C was at its full amplitude before the triac turned on, resulting in the load being fed a chopped-up sine-wave with a conduction duration of 50%. I.e., brightness levels below 50% could not be set. A real dimmer can go much lower than that.
And the reason for this capability, as it was implemented in the 1970s, is that together with the capacitor, the variable resistor forms a phase-shifter, which can theoretically delay the sine-wave fed to the gate of the triac by as much as 90⁰ of phase. So as the dimmer setting is lowered, the voltage at the gate decreases and also becomes more delayed, which means that in theory, the dimmer can be turned down close to 0%.
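This follows from standard RC-network math: the voltage tapped off the capacitor lags the supply by arctan(ωRC) and is attenuated by 1/√(1 + (ωRC)²). A small Python sketch, with component values that are illustrative only:

```python
# The gate signal produced by the potentiometer-capacitor pair:
# it lags the supply by atan(w*R*C) and shrinks by 1/sqrt(1 + (w*R*C)^2).

import math

def rc_gate_signal(r_ohms, c_farads, freq_hz=60.0):
    """Return (phase_delay_degrees, relative_amplitude) at the capacitor."""
    wrc = 2 * math.pi * freq_hz * r_ohms * c_farads
    return math.degrees(math.atan(wrc)), 1 / math.sqrt(1 + wrc * wrc)

# Raising the resistance both delays the gate waveform (toward 90 degrees)
# and attenuates it:
for r in (1e3, 50e3, 500e3):
    phase, amp = rc_gate_signal(r, 0.1e-6)
    print(f"R={r:>8.0f} ohm: delay={phase:5.1f} deg, amplitude={amp:.3f}")
```
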
In practice, the behavior is not optimal: with only such a limited circuit, the way the gate-voltage and its phase-position change together is slightly sub-optimal at settings close to zero.
Further, a serious problem has always existed when such a dimmer was used to feed a variable amount of juice to an A/C motor-winding: being an inductive load, the winding tends to phase-shift the current flowing through it with respect to the voltage applied to it. What this means is that when the supply voltage crosses zero, the current through an inductive load is still non-zero, but the triac nevertheless shuts off.
And if considerable current is flowing through an inductor, and this current is interrupted suddenly, a voltage spike results, which can cause power-line noise (interference), and can potentially even damage the triac.
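For a series resistance-inductance model of a motor winding, the current lags the voltage by arctan(ωL/R), so the fraction of peak current still flowing at the voltage zero-crossing can be sketched as follows. The winding values are purely illustrative:

```python
# Why an inductive load is a problem: in a series R-L load the current
# lags the supply voltage by atan(w*L/R), so at the instant the voltage
# crosses zero, the current is still sin(phi) of its peak value.

import math

def current_at_voltage_zero(r_ohms, l_henries, freq_hz=60.0):
    """Fraction of peak current still flowing when the voltage hits zero."""
    phi = math.atan(2 * math.pi * freq_hz * l_henries / r_ohms)
    return math.sin(phi)

print(round(current_at_voltage_zero(10.0, 0.0), 2))   # purely resistive: 0.0
print(round(current_at_voltage_zero(10.0, 0.05), 2))  # inductive: 0.88
```

So with the illustrative winding above, interrupting conduction at the voltage zero-crossing would chop off nearly 90% of the peak current in one instant – hence the spike.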
And so a more-complicated circuit was dreamt up – as early as the 1970s – such that the triac would truly shut off at the point in the waveform where the load-current crossed zero, instead of where the voltage-curve crossed zero. That type of more-complicated dimmer could be used to feed variable current to a simple A/C motor.
But one reason motor-dimmers did not really catch on in the 1970s was that consumers seldom had motors to regulate, mainly only wanting to regulate lightbulbs. Their slightly more complex circuit additionally made motor-dimmers uncompetitive to sell, at their slightly higher manufacturing cost.
I’ve often seen the situation in which a wall-mounted dimmer acts as an on/off switch, but the light-fixture it controls is always at 100% brightness when switched on, and trying to vary it does not change that.
This situation implies that the owner of the light-dimmer did not understand how it works, assumed it was just a rheostat – which would be difficult to burn out – and short-circuited the load at some point in time, maybe while playing with a lightbulb with the dimmer turned on. The way in which solid-state components sometimes fail under high current is that their silicon momentarily melts, so that it is no longer differentiated into P-doped and N-doped regions, instead becoming just a piece of fused silicon. And then, with its structure destroyed, the component enters a mode of failure in which it stays conductive 100% of the time.
(An observation about SCRs and Triacs : )
One thought that the reader might have could be:
‘The triac only requires a small amount of gate-current to turn on, in relationship to how much diode-current will flow. Therefore, the amount of gate-current must always be kept low, to avoid burning out the triac.’
There is a reason why this is not so. It has to do with certain monolithic (silicon) devices having a different physical form from what their schematic suggests.
A Silicon-Controlled Rectifier could be represented by a schematic as ‘Two transistors, connected so that the base of each is also the collector of the other. That way, feeding a small amount of base-current into one transistor will cause them both to turn on. And, feeding slightly too much base-current into any transistor can burn it out.’
In reality, SCRs are built as 4 layers of silicon, in a sequence such as: P-N-P-N.
The sandwiched P layer is used as the gate, depending on the direction in which the SCR is ultimately supposed to conduct, and on whether a more-positive voltage, as opposed to, let’s say, a more-negative voltage, is supposed to turn it on. Two transistors form implicitly: a PNP transistor out of the first three layers, and an NPN transistor out of the last three, so that each of the two middle layers acts as the base of one transistor and the collector of the other.
What this means is that when switched on, the full load-current also passes through one of the layers, which will eventually act as the (amplified) gate. So these gates are rather difficult to burn out.
Well, in a triac, again, the layers used as gates will also be load-bearing when the component is in its switched-on state, and thus difficult to burn out. Only, the SCR conducting in one direction is activated by a positive gate-voltage, while the SCR conducting in the other direction is activated by a negative gate-voltage. And then the correct way to read the schematic becomes: the gate-voltage must consistently be brought closer to one of the ‘diode’-voltages to advance the turn-on point, regardless of which direction the A/C is oscillating in at the given moment.
This means that in practice, the design of a light-dimmer does not need to avoid connecting the gate directly to one of the diode-ends – let’s say, because the potentiometer is turned fully up. The only result will be a triac that stays constantly turned on.
An astute reader might observe:
‘By definition, an SCR turns off when the current flowing through it becomes zero, which should also be the point in time at which the voltage between its load-bearing electrodes is zero. So what gives, with this point in time differing, when a light-dimmer has erroneously been used to regulate current fed to a motor, or to any other inductive load?’
The answer to that question is that, because of the way the gate of the triac is hard-wired, the gate-voltage will cross zero when the supply-voltage crosses zero, regardless of whether the load is still drawing current. And when the gate-voltage is forced to zero, the triac stops conducting – i.e., current flows backwards through one of the gates.
Alternatively, a potentiometer-capacitor pair can see to it that the zero-crossing of the gate-voltage is delayed – but then it would be delayed by a different amount of time than the inductive load-current is delayed by.
If we wanted to design a ‘motor-dimmer’, then a good starting point might be not to use one triac, but rather two SCRs, and to put a discrete diode on the gate of each SCR, allowing each SCR truly to stop conducting only once the current drawn through it is zero…
Yet, Electrical Engineers today would probably consider this type of 1970s technology a waste of time, considering that beautiful motors exist under the heading of “Brushless DC Motors”. At the low end of the current range, these don’t even require SCRs to drive; they can actually be driven by CMOS transistor-pairs, from a chip embedded in the stator. That chip includes a Hall-effect sensor, to sync with the actual rotation of the permanent-magnet core, but also includes whatever amplifier and driver-transistors the motor-coils require. This is how we design mundane, low-powered DC motors today!
(Update 08/12/2017, 15h20 :
One possible misconception which some people might have is that the only way to drive a motor-coil is to keep one of its leads connected to neutral, and to apply positive and negative voltages to its active lead. This misconception would stem from past-century thinking.
Another possible misconception which some people might have is that the output of a logic gate can only be used to drive the inputs of other logic gates. I don’t really know where the latter idea comes from, but I have encountered people who hold it.
One way in which motor-coils can be driven, particularly when they are low-current coils, is for a chip to receive a constant DC voltage on a pair of supply leads, those voltages maybe being 0V and +5V. Each of the leads from one coil can then be connected to its own separate pin of the chip.
What the driver-transistors of the chip then do is connect each end of the coil, in alternation, to 0V and +5V. Depending on which lead of the coil is at +5V at any one instant, conventional thinking could see this coil as receiving a positive or a negative voltage, causing its current to reverse slowly. And a CMOS-inverter circuit could thus form a driver. There would be energy losses in this approach, which can be minimized by keeping short the amount of time the transistor-pair spends transiting between connecting its output to the 0V and the +5V supply-voltages.
A realistic, small motor would have 2 sets of coils, with the 2 coils in each set connected in series, and would therefore take up 4 pins on the driver-chip. Hence, driver-chip-packages with only 6 pins would start to work. )
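The alternating drive described in that update can be sketched in a few lines of Python: each coil lead is switched between 0V and +5V, and the coil sees only the difference, so it can receive an effectively negative voltage even though no negative supply rail exists anywhere in the circuit. The function name and supply value here are illustrative:

```python
# Each coil lead sits at either 0 V or +5 V; the coil experiences the
# difference between its two leads, measured from lead A to lead B.

SUPPLY_V = 5.0

def coil_voltage(lead_a_high, lead_b_high):
    """Voltage across the coil for a given pair of driver-pin states."""
    v_a = SUPPLY_V if lead_a_high else 0.0
    v_b = SUPPLY_V if lead_b_high else 0.0
    return v_a - v_b

print(coil_voltage(True, False))   # 5.0: current flows one way
print(coil_voltage(False, True))   # -5.0: current reverses
print(coil_voltage(True, True))    # 0.0: coil effectively off
```
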
And, the electronic control of motors – maybe still requiring SCRs – extends to high-performance motors. In this article I once emailed to my friends, I wrote about such appliances in 2015.
In 2017, Montreal hosted a “Formula E” race-car event, where the race-cars were electric, and sure enough, their motors were also electronically controlled.
So I don’t mean to waste anybody’s time further with motor-dimmer wall-plates from the 1970s.