I have run into people who believe that a signal cannot be phase-advanced in real-time, only phase-delayed. As far as I can tell, this idea stems from the misconception that, in order for a signal to be given a phase-advance, some form of *prediction* would be needed. The fact that this is not true can best be visualized if we take an analog signal, and derive another signal from it, which would be the short-term derivative of the first signal. ( :1 ) Because the derivative would be most-positive at points in its waveform where the input had the most-positive slope, and zero where the input was at its peak, we would already have derived a sine-wave, for example, that is phase-advanced 90⁰ with respect to an input sine-wave.
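This can be sketched numerically. In the minimal check below, the sample rate and test frequency are arbitrary assumptions of mine; a finite difference stands in for the "short-term derivative," and its output is compared against a sine-wave advanced by 90⁰, i.e. a cosine:

```python
import numpy as np

h = 48000.0                     # sample rate (assumed)
f = 1000.0                      # test frequency (assumed)
t = np.arange(4800) / h
x = np.sin(2.0 * np.pi * f * t)

# First difference, scaled so a unit-amplitude sine stays near unit amplitude.
d = np.diff(x) * h / (2.0 * np.pi * f)

# Compare against the same sine advanced 90 degrees, i.e. a cosine.
ref = np.cos(2.0 * np.pi * f * t[:-1])
print(np.max(np.abs(d - ref)))  # small; limited only by the finite step
```

No prediction was involved: each output sample depends only on the current and previous input samples.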

But the main reason this is not done is the fact that a short-term derivative also acts as a high-pass filter, whose output amplitude doubles for every octave by which the frequency increases.

What can be done in the analog domain, however, is that a signal can be phase-*delayed* 90⁰ with the frequency-response kept uniform, and then *simply inverted*. The phase-diagram of each of the signal's frequency-components will then show that the entire signal has been phase-*advanced* 90⁰.
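The equivalence can be confirmed with one line of phasor arithmetic: delaying a component by 90⁰ and then inverting it is the same as advancing it by 90⁰.

```python
import numpy as np

delay_90 = np.exp(-1j * np.pi / 2)      # multiply by this: 90-degree delay
advance_90 = np.exp(+1j * np.pi / 2)    # multiply by this: 90-degree advance
print(np.isclose(-1.0 * delay_90, advance_90))  # inversion flips the sign
```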

(Updated 11/29/2017 : )

(As It Stood 11/22/2017 : )

1: ) The fact needs to be acknowledged that, according to Calculus, which is pure Math, there is only the derivative of a function with respect to one or more parameters, and the predicate ‘short-term’ has no meaning.

But one problem with the purely Mathematical definition of derivatives is that it defines the slope of a continuous function at an infinitesimal distance from any point of that function. This is a somewhat abstract concept which people Study, and it is of limited use in signal-processing, because here, any given electronic signal will have some amount of high-frequency noise – i.e. hiss. And because high-frequency noise perturbs the signal at points progressively closer to any point in time, according to pure Calculus, the slope of the signal with respect to time could theoretically be any value.

And so, if one designs a circuit to differentiate an input-voltage, it’s important to limit the highest frequencies which its active amplifier will amplify, since, in addition, we know that the overall gain keeps increasing with frequency.

Failing to set some constraints will lead to an unstable circuit.

Now, if we wanted to phase-delay an arbitrary input signal by integrating it, there is the converse problem: the signal-gain doubles every time the frequency *goes down* one octave. But there is less of a problem with the stability of the circuit. So another way one could look at the two sine-waves I included above could be, that the red sine-wave follows the integral of the blue sine-wave, only in the opposite direction from that in which the blue sine-wave is pointing at any one time.

For most signal-processing, what we’d want is uniform frequency-response over a given, finite frequency-band, and yet to phase-shift all the frequency-components. And indeed, analog circuits which accomplish this have existed for some time. Logically, they have a *lower* frequency-limit (instead of ever achieving infinite gain). ( :5 )

It was my impression that the corresponding implementation in the digital domain would also prefer to exploit the superior frequency-response that digital signal formats offer. Because of this, DAW software – i.e. Digital Audio Workstations – offers a wide variety of effects which require that many overlapping Fourier Transforms be computed of the original signal, which then apply fancier modifications to each Fourier Transform – thereby manipulating the frequency-components in the desired, arbitrary way – and which are then inverted, to reproduce the modified time-domain waveforms. ( :6 )

The highest-quality phase-shifting can also be accomplished in this way, but not in real-time.

2: ) In the diagram I linked to above, which involves a single transistor as its active stage, there is a potentiometer. When this potentiometer is at one extreme, the transistor merely acts as a voltage-follower, thus achieving zero phase-shift.

(Erratum 11/29/2017 —

As is typical for such stages, when the potentiometer is at its opposite extreme, the stage offers a maximum phase-delay ~~of~~ (closer to) 180⁰. But at the ‘-90⁰’ setting, the circuit problematically just acts as ~~an integrator~~ (a negative differentiator) again, so that the ~~gain~~ (phase-shift) will ~~halve~~ (increase), for every octave by which the frequencies increase(, until C2 just passes deviations in the collector-voltage through to the output, in a way that overrides the conductivity of VR1, so that the ~~negative gain~~ (phase-shift) levels off).

— End Of Erratum. )

If we needed roughly uniform frequency-response over a certain band of frequencies, yet a 90⁰ phase-delay, the way to accomplish that with the old-style analog methods would be to chain two stages together, the diagram of each being as shown in the link, and to set each stage’s potentiometer mid-way, for a 45⁰ phase-delay each time.

And this need, to extract more than one order of integration from any point of the input-waveform in order to achieve approximately-uniform gain, *cannot be bypassed*. The same would happen if we chose to compute our phase-advance using a differentiating approach – i.e., we’d eventually need to base the output on the 1st-order, the 2nd-order, the 3rd-order derivative, etc..

3: ) What follows is the conclusion that, if our goal was to compute a 90⁰ phase-delay in real-time, *as cheaply as possible*, while accepting that doing so also brings certain quality-limitations in the result, we could easily derive an algorithm which does so.

The algorithm would chain two digital, 1st-order low-pass filters that have the same corner-frequency, but in such a way that the output-amplitude of each is 1/4 the input-amplitude at one frequency in the falloff-curve of each, at which we’d want the circuit to be most-accurate.

Then, we’d tap the overall input, as well as output 1 and output 2, and we’d compute a weighted summation of the 3 signals, with the intent that at the ideal frequency, output 2 will be ~180⁰ out-of-phase with the input, and weighted such that their non-phase-shifted contribution to the final output counters that of output 1 and cancels it. At that frequency, output 1 will remain to define the final result, with its 90⁰ phase-delayed component… Since the phase-shift of each stage falls short of 90⁰, this can be compensated for by emphasizing output 2 more than the input.

If we did that, then the plan would benefit from having used low-pass filters instead of actual integrators, in that the phase-shift will eventually stop happening, and in that the output-gain will never become infinite as a result of low input-frequencies.
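The weighted-summation plan can be sketched numerically. This is only a sketch under assumptions of mine: the sample rate, target frequency, and the shared one-pole coefficient `k` are arbitrary, and the three tap weights are solved for directly from the complex gains of the stages, rather than derived by hand:

```python
import numpy as np

h = 48000.0                 # sample rate (assumed)
F_target = 3000.0           # frequency at which an exact -90 degrees is wanted
k = 0.6                     # shared one-pole coefficient (arbitrary choice)

theta = 2.0 * np.pi * F_target / h
H1 = (1.0 - k) / (1.0 - k * np.exp(-1j * theta))   # gain of one low-pass stage
H2 = H1 * H1                                       # two chained stages

# Want c_in*1 + c1*H1 + c2*H2 == -1j at theta: two real equations in
# three unknowns, so take the minimum-norm least-squares solution.
A = np.array([[1.0, H1.real, H2.real],
              [0.0, H1.imag, H2.imag]])
b = np.array([0.0, -1.0])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

combined = c[0] + c[1] * H1 + c[2] * H2
print(np.allclose(combined, -1j))                  # exact at F_target only
```

At any other frequency, the same weights give only an approximation, which is the quality-limitation being accepted.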

When I do the Math (in my head), I get:

```
c_{in} = -( (1 / sqrt(5)) - (3 / 20) )
c_{in} ~= -0.2972
c_{l1} = 4
c_{l2} = 4
α_{90⁰} = ( (2 / sqrt(5)) + (1 / 5) )
α_{90⁰} ~= 1.0944
```

~~One of my assumptions is~~ that the *digital* low-pass filter-algorithm will have |α| = 1/2 at *its* ‘corner-frequency’, i.e. that: ( :4 )

```
N = h / 2
F_{0} <= N
F_{1/2} = F_{0} / 2
ω = 2 sin( πF_{1/2} / h )
k = 1 / (ω + 1)
l_{n} = k l_{n-1} + (1 - k) in_{n}
```

This behavior is counter to that of an *analog filter*, which some sources even *define* as having a ‘corner-frequency’ such that |α| = sqrt(1/2), which should come at half the previously-defined frequency for either type of filter.

(Edit 11/23/2017 : )

Because each filter-output has two components:

- An in-phase component that might otherwise be referred to as the ‘real component’ of a phasor.
- A 90⁰ -shifted component that might otherwise be referred to as the ‘imaginary component’ of a phasor.

It follows that we’d want two results each to equal a phasor, each of which has two components: always 0 for the in-phase part, and always 1 for the out-of-phase part. Thus, the system as I described it above was overdetermined, because it was attempting to create 4 real-numbered results from 3 known inputs.

This exercise can be improved and made systematic if our hypothetical filter-series had 3 filters, plus the original input, to work from, producing 4 predetermined outputs, and achieving exact, 90⁰-delayed output at frequencies for which the absolute filter-gain would have been (1/2) as well as (1/4). If this is done, then solving this problem becomes an exercise in Linear Algebra.

*According to what I wrote above*, a digital filter differs from an analog filter in that its corner-point has a gain of 1/2. If this is true, then the following work-sheet describes what the optimum multipliers for the 4 taps should be:

However, should I have been in error, and should *the digital filters also exhibit a 45⁰ phase-shift where they have a gain of sqrt(1/2)*, then this work-sheet describes the correct multipliers:

One result to be wary of, though, would be the excessively high multiplier for the output of the 3rd low-pass filter. The reader should keep in mind that, in the event of a D.C. signal, all these multipliers would just add, and we’d get an output approximately 11 times as high as the input D.C. voltage.

(Edit 11/24/2017 :

I suppose that an important question which the reader might have would be: What, exactly, do the elements of the matrix which I composed, and on which the Algebraic exercises are based, represent?

Each pair of rows of my main matrix describes what the filters would do at one frequency. I used the first row in any pair to describe the in-phase component, and the second row to define the 90⁰ phase-delayed component of what comes out. Then, as we progress from one column to the next along one row, each element describes the signal that would be present, first as the input, then as the output of the first low-pass filter, then as the output of the second low-pass filter that follows the first, etc..

This is a screen-shot of one example of what that process looked like:

Additionally, the strong multipliers which the first exercise derived suggest that, as soon as the frequency – or any other parameter – goes outside the assigned domain, the results will also become wildly inaccurate. I won’t need to prove *this*, as my word should be sufficient. But for this reason, I would not suggest that any of the coefficients I’ve computed so far be used in actual applications. )

One fact which I had written about quite some time ago is that there exists a type of Statistical Analysis called Non-Linear Regression Analysis, of which Polynomial Regression Analysis is a special case. The big capability of this technique is that a number of results can be known in advance which exceeds the number of predictors, possibly by a large factor. Yet, a fixed set of functions of the predictors can be found, which in turn can be named the matrix (X), from which the result-vector (Y) supposedly emerged; and an Algebraic trick exists which will find a vector of multipliers (A), which, when multiplied by *unknown* predictors’ functions, will reproduce values of (Y) that are as close as possible to the known values of (Y).
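The "Algebraic trick" can be sketched as ordinary least squares via the normal equations, A = (XᵀX)⁻¹ Xᵀ Y. The numbers in (X) and (Y) below are made up purely to illustrate the mechanics; they are not taken from the filter exercise:

```python
import numpy as np

X = np.array([[1.0, 0.5, 0.25],
              [1.0, 1.0, 1.00],
              [1.0, 2.0, 4.00],
              [1.0, 3.0, 9.00]])       # fixed functions of one predictor
Y = np.array([1.1, 1.9, 4.2, 8.8])     # more known results than multipliers

# Normal equations: solve (X^T X) A = X^T Y for the multipliers A.
A = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.allclose(X @ A, Y, atol=1.0)) # close, not exact: overdetermined
```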

To remind myself of what the Algebra was that does this, I keep the following sheet on-hand:

Based on this sheet, I have determined that the best-possible way to compute the multipliers, for the input plus the outputs of 2 low-pass filters, so that they best approximate phase-shifts at 2 frequencies, resulting in 4 real numbers, can also be by using Non-Linear Regression Analysis. And the results of my labor are:

Work-Sheet, Overdetermined – 1

Work-Sheet, Overdetermined – 2

Where again, the results change depending on what the expectations of the actual filter-algorithms are. I know that if analog filters are used in each case, ‘Work-Sheet 2’ would be the correct one.

This sure beats trying to guess at the Math in my head.

(Edit 11/24/2017 : )

The only realistic approach might be to use 3 low-pass filters, but to calibrate them for approximate results at 3 frequencies, instead of calibrating them for exact results at 2 frequencies.

Further, in the work-sheets below, I decided to perform a test in which I used the multipliers that are known to produce approximate results over the widest-possible range. I re-applied those multipliers to the matrix with which I’ve defined each filter. And the Results should read as following from the approximation, alternating the real component and the imaginary component, with both components given for each of the 3 frequencies used: F_{sqrt(1/2)}, F_{1/2}, F_{1/4} :

~~Work-Sheet, Overdetermined, Verified – 3~~

(Assuming *incorrectly*-degraded filter behavior.)

Work-Sheet, Overdetermined, Verified – 4

(Assuming analog-filter behavior.)

(Edit 11/28/2017 : )

4: )

I suppose that an idiosyncrasy of mine which I should comment on is how I compute the constant (k), in order to devise a discretized version of a low-pass (or of the corresponding high-pass) filter:

```
ω = 2 sin( πF_{1/2} / h )
k = 1 / (ω + 1)
l_{n} = k l_{n-1} + (1 - k) in_{n}
```

There exist concepts in *analog* circuit-design, according to which (ω) simply consists of

```
ω = 2 π F
```

Which is also the definition of angular velocity – or, of the derivative of a sine-wave at its zero-crossing. My problem with this is that plugging this into (k) only works when (F) is much lower than the Nyquist Frequency (N). At one point in my past, I wanted a way to calibrate this filter so that it would produce exact amplitudes when (F) is any frequency, even one close to (N).

Well, if F == N == h/2 , then the expressions above will yield

```
ω = 2 sin( πF / h )
== 2 sin( π / 2 )
== 2
k = 1 / (ω + 1) == (1/3)
```

Which turns the discretization of the filter into

```
l_{n} = (1/3) l_{n-1} + (2/3) in_{n}
```

In order to test whether the value of (k) given produces α == 1/2 , we need to assume a previous value of (l), such that that value would have been (-1/2). We get

```
l_{n} = (1/3)(-1/2) + (2/3) in_{n}
```

And,

```
l_{n} == (2/3)(+1) - (1/6) == (+1/2)
```

Which means, that the end-condition will repeat itself.
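This steady-state can also be confirmed by brute force. A small sketch, assuming nothing beyond the recurrence itself, feeds in an alternating signal – the only signal a digital format can carry at the Nyquist Frequency – and watches the output settle:

```python
# Run l_n = (1/3) l_{n-1} + (2/3) in_n on a Nyquist-frequency input.
k = 1.0 / 3.0
l = 0.0
for n in range(200):
    x = 1.0 if n % 2 == 0 else -1.0   # alternating +1, -1 at Nyquist
    l = k * l + (1.0 - k) * x
print(round(abs(l), 6))               # settles at amplitude 1/2
```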

Yet, for very small values of ε,

```
sin(ε) ~= ε
```

Which means that my version also reduces to

```
ω = 2 π F
```

when F is small.

As of 11/28/2017, I have a tentative explanation for this apparent contradiction. According to analog filter parameters, the absolute value of the gain of the filter, where its phase-shift is 45⁰, is supposed to be the square root of (1/2). But at the Nyquist Frequency, as shown above, the gain is just (1/2).

The fact which I’ve been overlooking all along is that, at the Nyquist Frequency, the digital signal format is unable to represent any component that corresponds to the sine-function as opposed to the cosine-function – or that corresponds to the imaginary component, or to the 90⁰ phase-shifted component. Only the non-phase-shifted, ‘real’ component of the signal can be represented at that frequency!

Therefore, if |α| == sqrt(1/2) , and if real(α) == (sqrt(1/2) * |α|) , corresponding to the assumed, untold phase-shift of 45⁰, then it will follow that real(α) == (1/2) !

Hence, it would follow that these filters will work as they should, even if their corner-frequency is at 1/4 the Nyquist Frequency, except when the frequency has surpassed that corner-frequency by two octaves. Therefore, I need to change the way the filters supposedly behave in my work-sheet for degraded filters, in that case:

And so I arrive at:

Work-Sheet, Overdetermined, Verified – 5

The last entry of this work-sheet simply reflects that the phase-shifted component of the output, however desired, cannot be expressed at the Nyquist Frequency.
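The explanation can be checked numerically. The sketch below (sample rate an assumption of mine) evaluates the complex gain of the recurrence at its own corner-frequency, for corners placed low, in mid-band, and at the Nyquist Frequency; the magnitude starts near sqrt(1/2) with a phase-delay near 45⁰, and ends at exactly (1/2) with no imaginary part:

```python
import numpy as np

h = 48000.0   # sample rate (assumed)

def gain_at_corner(F):
    """Complex gain of l_n = k l_{n-1} + (1-k) in_n, at its own corner F."""
    w = 2.0 * np.sin(np.pi * F / h)
    k = 1.0 / (w + 1.0)
    z = np.exp(1j * 2.0 * np.pi * F / h)
    return (1.0 - k) / (1.0 - k / z)

for F in (100.0, 6000.0, 24000.0):   # low corner ... corner at Nyquist
    a = gain_at_corner(F)
    print(round(abs(a), 4), round(np.degrees(np.angle(a)), 1))
```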

(Edit 11/26/2017 : )

5: )

The correct way to read the diagram I linked to above would be first to observe that only the input is really being connected to the base of the transistor, via coupling-capacitor C1. But this does not mean that no feedback is taking place. In a standard way, the transistor is assumed to have an arbitrarily high amount of current-gain, so that the emitter-follower resistor R4 provides negative feedback. I.e., small differences in base-emitter voltage will cause large, positively-correlating differences in collector current, which flows through R4, so that the emitter-voltage will follow the base-voltage changes at 1:1.

Normally, if this transistor was just a gain-stage, then the voltage-gain at the collector would be defined as

```
-( R3 / R4 )
```

But, because in this case R3 == R4 , the transistor simply assures that the signal at its collector will be the negative version of the input-signal, while the emitter will carry the positive, voltage-following version of the input-signal, both at unit gain.

This means that the combination of C2 and VR1 mixes together the inverted signal, modified by the high-pass filter that forms between C2 and VR1, and the positive signal, the significance of which decreases as VR1 increases.

We could say that given a suitable value for VR1, which might match the reactance of C2, a -45⁰ phase-shift results. Only, the reactance of C2 is frequency-dependent, so that if it matches VR1 at one frequency, it won’t do so at another frequency…

Now, if this circuit was assumed to be followed by an operational-amplifier / voltage-follower circuit with high input-impedance, then it would not be necessary for C2 to be so high, nor for VR1 to be so low. The main detail to observe is that the current-load these two components place on the transistor, and on R3 and R4, should not change the behavior of this active stage. A choice of C2 and VR1 which draws less current would presumably load the active component less, and therefore also modify the voltages present at the emitter and collector less.

But, because it’s assumed that the combination of C2 and VR1 will itself be loaded by the second coupling-capacitor C3, and whatever it connects to at the output – in effect, that op-amps have not been invented yet – a need stems from this assumption to make C2 high, and the range offered by VR1 correspondingly low, so that any load present at the output will modify the voltages produced by C2 and VR1 as little as possible.

So this circuit is already an example of numerous approximations being applied at once.

(Edit 11/27/2017 : )

6: )

I’ve been describing phase-shifts in the context of signal-processing. Some readers may ask themselves what relationship this has, if any, with phase-shifts used for recreational / musical composition.

My short answer would be that, in musical composition, uniform properties, or Mathematically exact properties, are not needed, because subjective human hearing can appreciate effects which are frequency-dependent, even if so in accidental ways.

And so a better question to ask might be: what would the analysis be, of how the program ‘Audacity’, for example, computes its “Phaser” effect?

An important caveat for me, in trying to second-guess what their programmers might have done, is that this would still be a guess on my part. But an important clue to me is that this effect can be monitored in real-time. Another is the observation that Audacity is not big on effects that require a track to be Fourier-analyzed 80 times per second.

I would guess that for every two stages, their programmers combined a low-pass filter and a high-pass filter, so that their corner-frequencies are equal at any point in time.

And they probably computed their parameters, so that each filter would phase-shift its signal +/- 45⁰ at the shared corner-frequency.

Far down along the rising slope of the high-pass filter, *it* would phase-advance the signal ~90⁰, while far down along the descending slope of the low-pass filter, that one would phase-delay the signal ~90⁰.

I would guess that the trick here might be to form a subtraction between the low-pass and the high-pass filter outputs, so that the combination always phase-delays the signal by some degree. It would do so maximally at the shared corner-frequency.
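One standard building-block that matches this guess is the first-order digital all-pass section. Whether Audacity's programmers did exactly this remains a guess, as stated above, and the sample rate and corner-frequency below are arbitrary assumptions of mine; but the section behaves as described: unit gain at every frequency, with a phase-delay that passes through 90⁰ at the corner.

```python
import numpy as np

h = 48000.0                # sample rate (assumed)
F0 = 1000.0                # shared corner-frequency (assumed)
t = np.tan(np.pi * F0 / h)
a = (1.0 - t) / (1.0 + t)  # all-pass coefficient

def H(F):
    """Complex gain of the section y[n] = -a*x[n] + x[n-1] + a*y[n-1]."""
    z = np.exp(1j * 2.0 * np.pi * F / h)
    return (-a + 1.0 / z) / (1.0 - a / z)

for F in (125.0, F0, 8000.0):
    g = H(F)
    print(round(abs(g), 6), round(np.degrees(np.angle(g)), 1))
# the magnitude stays 1.0 everywhere; the phase-delay is 90 degrees at F0
```

Sweeping F0 over time, and mixing this output back with the dry signal, is the classic recipe for a phaser's moving notches.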

Dirk
