One concept that exists in modern digital signal processing is that a simple algorithm can often be written to perform what old-fashioned, analog filters were able to do.

But then, one place where I find progress lacking – at least, where I can find the information posted publicly – is in how to discretize slightly more complicated analog filters. Specifically, if one wants to design 2nd-order low-pass or high-pass filters, one approach which is often recommended is just to chain the primitive low-pass or high-pass filters. The problem with that is the highly damped frequency-response curve that follows, which is evident in the attenuated voltage gain at the cutoff frequency itself.

In analog circuitry, a solution to this problem exists in the “Sallen-Key Filter“, which naturally has a gain at the corner frequency of (-6dB) – the same gain that would result if two primitive filters were simply chained. But beyond that, the analog filter can be given (positive) feedback gain, in order to increase its Q-factor.

I set out to write some pseudo-code, for how such a filter could also be converted into algorithms…

```
Second-Order Filters:

LP:
  for i from 1 to n:
    Y[i] := (k * Y[i-1]) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i] := (k * Z[i-1]) + ((1 - k) * Y[i])
    Feedback[i] := (Z[i] - Z[i-1]) * k * α
    (output Z[i])

BP:
  for i from 1 to n:
    Y[i] := (k * Y[i-1]) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i] := k * (Z[i-1] + Y[i] - Y[i-1])
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])

HP:
  for i from 1 to n:
    Y[i] := k * (Y[i-1] + X[i] - X[i-1]) + Feedback[i-1]
    Z[i] := k * (Z[i-1] + Y[i] - Y[i-1])
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])

Where:
  k is the constant that defines the corner frequency via ω, and
  α is the constant that peaks the Q-factor.

  ω = 2 * sin(π * F0 / h)
  k = 1 / (1 + ω),  F0 < (h / 4)

  h is the sample rate.
  F0 is the corner frequency.

To achieve a Q-factor (Q):
  α = 2 + (sin^2(π * F0 / h) * 2) - (1 / Q)

'Damping Factor' (ζ) = 1 / (2 * Q)

Critical Damping:
  ζ = 1 / sqrt(2)
  (...)
  Q = 1 / sqrt(2)
```
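For concreteness, the LP variant can be transcribed into Python as follows. This is a sketch of my own pseudo-code above, nothing more: the α formula is the conjecture stated there, and the state variables simply replace the arrays.

```python
import math

def second_order_lp(x, f0, h, q):
    """Second-order low-pass, transcribed from the pseudo-code above.

    x  : list of input samples
    f0 : corner frequency, which must satisfy f0 < h/4
    h  : sample rate
    q  : desired Q-factor (the alpha formula is the post's conjecture)
    """
    w = 2.0 * math.sin(math.pi * f0 / h)                     # omega
    k = 1.0 / (1.0 + w)
    alpha = 2.0 + 2.0 * math.sin(math.pi * f0 / h) ** 2 - 1.0 / q
    y = z = fb = 0.0                                          # Y, Z, Feedback
    out = []
    for xi in x:
        y = k * y + (1.0 - k) * xi + fb                       # first stage, plus feedback
        z_new = k * z + (1.0 - k) * y                         # second stage
        fb = (z_new - z) * k * alpha                          # feedback for the next sample
        z = z_new
        out.append(z)
    return out
```

Fed a unit step at, say, f0 = 1000, h = 48000 and Q = 1/sqrt(2), the output settles to a DC gain of 1, as one would expect of a low-pass filter.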

(Algorithm Revised 2/08/2021, 23h40. )

(Computation of parameters Revised 2/09/2021, 2h15. )

(Updated 2/10/2021, 18h25… )

(As of 2/07/2021, 9h05… )

I would just like to acknowledge that Wikipedia defines the Low-Pass Filter (…algorithm) differently from the way I do, and in a way which is not symmetrical with their definition of the High-Pass Filter:

My explanation for the difference in how Wikipedia defines their Low-Pass Filter lies in the fact that they use *a different value of (“α”)* to define the same corner frequency, from the value of (“α”) which they use for their High-Pass Filter. Accounting for the fact that I define the corresponding symbol, (‘k’), the same way both times, should yield the same type of Low-Pass Filter as Wikipedia yields.
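To spell the correspondence out (a sketch, using the notation Wikipedia's article typically uses, where Δt is the sampling interval and RC the analog time constant, and treating my ω as playing the role of Δt/RC):

```latex
\alpha_{\mathrm{LP}} \;=\; \frac{\Delta t}{RC + \Delta t} \;=\; \frac{\Delta t / RC}{1 + \Delta t / RC},
\qquad
1 - k \;=\; 1 - \frac{1}{1 + \omega} \;=\; \frac{\omega}{1 + \omega}
```

So (1 - k) here is the same quantity as Wikipedia's low-pass (α), which is why defining (k) once, symmetrically, still yields the same type of Low-Pass Filter.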

(Update Removed 2/08/2021, 14h45, because inaccurate.)

(Update 2/09/2021, 1h30: )

I found the following Web-article useful, in deciphering how second-order low-pass filters generally work:

https://www.electronics-tutorials.ws/filter/second-order-filters.html

(Update 2/09/2021, 3h15: )

One observation which I made about the primitive building-blocks of discretized filters was that, while the algorithms shown yield accurate frequencies when the corner frequency is low compared with the Nyquist Frequency, as the corner frequency approaches the Nyquist Frequency, a strange behaviour sets in: a gain of (1/2), not of (1/sqrt(2)). I have explained this away in different parts of my blog, as the inability of the sampling interval to capture any signal component which has been phase-shifted 90°. In short, what any analog low-pass filter is supposed to do at its corner frequency is to yield an amplitude of (1/sqrt(2)), but with a 45° phase-shift. Such a phase-shift can also be represented as two components, one in-phase and the other 90° out of phase, each at (1/2) amplitude. As if the sampling interval were ignoring one of the components, these algorithms will seem to yield (1/2) amplitude, at 0° phase-shift.

The problematic aspects of this behaviour are twofold:

- This cannot be setting in abruptly when the corner frequency reaches or exceeds (h / 4), and
- The open-loop gain in these filters, applied in pairs, needs to be known exactly, so that high Q-factors can be implemented, and the inverse of the gain approached but never exceeded.

Thus, I felt that the best way to visualize this over-damping would be that a 90° phase-shifted vector starts to get added in to the in-phase component, as the corner frequency increases, and that the expected gain which analog filters would attain results as a hypotenuse. But this hypotenuse will be missing from the algorithmic filters’ gain, so that it can be multiplied in, in the determination of the parameter that leads to the Q-factor. By itself, this would yield the term:

sqrt(2 + 2*x^2)

However, since the algorithmic filters form a chain of pairs, the square root can be omitted. Further, two consecutive halves form a quarter.

This is how my conjecture forms, as to the correct way to achieve the desired Q-factors.

(Update 2/10/2021, 18h25: )

There is an observation about this discretization of the filters which should really be more obvious than the one I described above. It suffers from the main problem that feedback samples are not added instantaneously, while feedback in an analog circuit would be instantaneous.

In this emulation, the feedback is effectively added with a delay of 1 sampling interval, after having passed through a High-Pass and a Low-Pass Filter ‘normally’. That, in turn, would also make it useless above half the Nyquist Frequency (= h / 4), where each sample deviates from the previous sample in the opposite direction, so that ‘positive’ feedback would only lead to greater cancellation. At half the Nyquist Frequency, feedback samples are being added in with an effective 90° phase-delay (compared to the actual input samples).

What this would seem to suggest is that, in order for the filters to be applied over the span of the original Nyquist Frequency, the inputs must first be up-sampled 2x, perhaps with a linear interpolation, the intended filters applied according to the Math of the new sample-rate, and then half the resulting output samples actually used as output, at the original sample-rate.

In this special case, there should be no complicated requirements on how to down-sample, which would come into effect, if there could be (relevant) frequency-components present above half the (derived) Nyquist Frequency. The reason such frequency-components would not be present would be the fact, that the derived sample-rate would ~~only~~ contain signal components present at the original sample-rate.
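The up-sample / down-sample framing just described can be sketched in a few lines of Python. This is a minimal illustration only; the linear interpolation and the trivial decimation are as described above, but the function names are my own.

```python
def upsample2x_linear(x):
    """2x up-sampling via linear interpolation: each original sample is
    followed by the midpoint between it and the next sample (the last
    sample is simply repeated)."""
    out = []
    for a, b in zip(x, x[1:] + [x[-1]]):
        out.append(a)
        out.append(0.5 * (a + b))
    return out

def decimate2x(x):
    """Keep every second sample, returning to the original sample rate.
    No anti-alias step is taken, for the reason given in the text: the
    derived rate only carries components present at the original rate."""
    return x[::2]
```

The intended filters would then be run, at the doubled sample-rate, between these two steps.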

But then, this observation would also have an effect on what the consequence will be, if a fed-back sample were added with a phase-delay of 90° to the input sample. According to how *Resonance* is formally described, *If the Q-factor is high, its phase-delay will get close to 90° but not quite reach it*. And, at a phase-delay of exactly 90°, a physical resonator will fail to receive any more energy from the excitor, thereby having no reason to become active. At a phase-delay *exceeding 90°*, the resonator will also be losing energy to the excitor, when both are physical components.

According to what I’ve already written, I will leave the exercise up to the reader, to compute an adjusted value of (α), which should compensate for this latter, stronger phenomenon, than the phenomenon which I adjusted for earlier.

One thing I can tell the reader *not* to do is ever to allow (α) to reach or exceed (4.0). The reason for this is as follows… If (F0) is chosen to be (h / 6)…

(k = (1-k) = 0.5), and, (k * (1-k) = 0.25),

Which will be the maximum open-loop gain, not accounting for (α). Therefore, if (α >= 4.0), a runaway sample-value can and will result…
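This runaway is easy to probe numerically. The sketch below takes the parameter choice F0 = h/6 from above (hence k = 0.5), runs the LP recurrence on an impulse input, and reports the largest output magnitude seen; the choice of test values of α, the impulse input and the sample count are my own.

```python
def lp_peak(alpha, n=2000):
    """Run the second-order LP recurrence from the post at F0 = h/6
    (so k = 0.5), excite it with a unit impulse, and return the
    largest absolute output sample observed over n samples."""
    k = 0.5
    y = z = fb = 0.0
    peak = 0.0
    for i in range(n):
        x = 1.0 if i == 0 else 0.0
        y = k * y + (1.0 - k) * x + fb
        z_new = k * z + (1.0 - k) * y
        fb = (z_new - z) * k * alpha
        z = z_new
        peak = max(peak, abs(z))
    return peak

print(lp_peak(2.5))   # remains bounded
print(lp_peak(4.1))   # grows without limit: a runaway sample-value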

The reader will need to enable JavaScript from my site, as well as from ‘mathjax.org’, in order to view the comparative plot shown below…

Dirk


In recent weeks I’ve been noticing some rather odd behaviour of the Linux version of the most up-to-date Chrome browser. In short, every time I launched the browser on my Debian 9 / Stretch computer, which has the Plasma 5.8 Desktop Manager, certain malfunctions would set in, specific to the desktop manager. I waited for several Chrome version upgrades, but the malfunctions persisted. And, as far as I can tell, the problem boils down to the following:

Google will only distribute the latest Chrome version, and when they tag the line which one is supposed to have in one’s Sources.list with ‘stable’, apparently they mean both the Stable version of Chrome And the Stable version of Debian. According to Google, Debian 10 happens to exist right now, because that is the “stable” version (of Debian), but Debian 9 and Debian 8 don’t exist anymore. Except for the fact that many people could still have either installed.

And so, rather than go the insecure route and install some outdated, non-official version of Chrome, I’d say that the best thing to do was to install “Chrom*ium*” instead, from the Debian repositories, which has always been the debranded version of Chrome, but the version that is most compatible.

On my Debian 9 box, that would be the (corresponding) version ‘`73.0.3683.75-1~deb9u1`’, as of the time of this posting. It’s a retro version, but not so deeply retro that I’d fear for the security of my data. While I was at it, I installed ‘`chromium-l10n`’ and ‘`chromium-widevine`’, the last of which I happen to have the luxury of *being allowed* to install, because that last package actually allows the browser to play certain DRM-ed content.

Now, I also have Debian 8 computers (running ‘Debian Jessie’), and am inferring that what was too recent for Debian 9 was also too recent for Debian 8. I’m inferring this, even though the same, recent Chrome versions showed *no obvious* signs of malfunctioning under Debian 8. So, I kiboshed that as well. However, I think that the Chromium version that was up-to-date with Debian 8 was *version 57 (+ something)*. That just struck me as too early a version to revert Chrome back to, and so what I did *on that computer* instead was to install ‘Vivaldi 3.6‘.

What this does is, to put me back into the situation, in which each of my main Linux computers has two mainstream Web-browsers installed, because I feel more secure with two of those.

Dirk


In Calculus, one of the most basic things that can be solved for is the case where a principal function receives a parameter, multiplies it by a multiplier, and then passes the product to a nested function, of which either the derivative or the integral can subsequently be found. But what needs to be done with the multiplier is opposite for integration from what it was for differentiation. The following two work-sheets illustrate:
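As a one-line worked statement of the rule (with a as the multiplier and F an antiderivative of f):

```latex
\frac{d}{dx}\,f(ax) = a\,f'(ax),
\qquad
\int f(ax)\,dx = \frac{1}{a}\,F(ax) + C
```

Differentiation multiplies by the multiplier; integration divides by it.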

PDF File for Desktop Computers

Please pardon the poor typesetting of the EPUB File. It’s the result of some compatibility issues (with EPUB readers which do not support EPUB3 that uses MathML.)

This realization also explains how, when the sinc function has been discretized in a certain way and applied as a low-pass filter, I can know that its Nominal Gain, or its D.C. gain, will be close to (2). The assumption which I was making about the low-pass filter was that the sinc function will make its first zero-crossings near the centre-point, two input samples away, and that it will have an additional zero-crossing every two input samples after that.

This is not how every filter based on the sinc function will be designed; it was only how one specific filter would have been designed.

This means that a phenomenon which would normally happen over an interval of (π) happens over an interval of (2). Additionally, I have read that when the sinc function is Mathematically pure, and has not been translated into Engineering equivalents, its integral approaches (π). Just to be obtuse, the interval of the (Engineering) function’s parameter has been multiplied by (π/2), to arrive at the value which must be fed to the true trig function.

Thus, that half-band filter, that employs 2x over-sampling, will have an integral that approaches (2), not (π).
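That claim is easy to check numerically: summing the sinc function sampled at half-sample steps (so that zero-crossings fall every two samples, as described above) gives a total that approaches (2). The sample count below is an arbitrary choice of mine, large enough for the slowly-converging alternating tail to settle.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Sample the sinc at half-sample steps, as the half-band filter above would:
N = 10001
total = sum(sinc(n / 2.0) for n in range(-N, N + 1))
print(total)  # approaches 2, not pi
```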

Dirk


I can sometimes describe a way of using certain tools – such as, in this case, one of the Discrete Cosine Transforms – which is correct in principle, but which has an underlying flaw that needs to be corrected, from my first approximation of how it can be applied.

One of the things which I had said was possible was to take a series of frequency-domain ‘equalizer settings’, which would be at one per unit of frequency, not at so many per octave, compute whichever DCT was relevant, such that the result had the lowest frequency as its first element, and then to apply that result as a convolution, in order finally to apply the computed equalizer to a signal.

One of the facts which I’m only realizing recently is that, if the DCT is computed in a one-sided way, the results are ‘completely non-ideal’, because it gives no control over what the phase-shifts will be at any frequency! Similarly, such a one-sided convolution can also not be applied as the sinc function, because the amount of sine-wave output, in response to a cosine-wave input, will approach infinity when the frequency is actually at the cutoff frequency.

What I have found instead is, that if such a cosine transform is mirrored around a centre-point, the amount of sine response, to an input cosine-wave, will cancel out and become zero, thus giving phase-shifts of zero.

But a result which some people might like is, to be able to apply controlled phase-shifts, differently for each frequency, such that those people specify a cosine as well as a sine component, for an assumed input cosine-wave.

The way to accomplish that is, to add-in the corresponding (normalized) sine-transform, of the series of phase-shifted response values, and to observe that the sine-transform will actually be zero at the centre-point. Then, the thing to do is, to apply the results negatively on the other side of the centre-point, which were to be applied positively on one side.

I have carried out a certain experiment with the Computer Algebra System named “wxMaxima”, in order first to observe what happens if a set of equal, discrete frequency-coefficients belonging to a series is summed. And then, I plotted the result of the *definite* integral of the sine function, over a short interval. Just as with the sinc function, the averaged definite integral of the cosine function was (sin(x) – sin(0)) / x; the averaged definite integral of the sine function will be (1 – cos(x)) / x. And, because the derivative of cos(x) is zero at (x = 0), the limit equation based on the divide-by-zero will actually approach zero, and be well-behaved.
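Stated as limits, the two averaged integrals just mentioned behave as:

```latex
\frac{1}{x}\int_0^x \cos t \, dt \;=\; \frac{\sin x - \sin 0}{x} \;\xrightarrow{\;x \to 0\;}\; 1,
\qquad
\frac{1}{x}\int_0^x \sin t \, dt \;=\; \frac{1 - \cos x}{x} \;\xrightarrow{\;x \to 0\;}\; 0
```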

(Update 1/31/2021, 13h35: )

There is an underlying truth about Integral Equations in general, which people who studied Calculus 2 generally know, but, I have no right just to assume that any reader of my blog did so. There exist certain standard Integrals, which behave in the reverse way of how the standard Derivatives behave, just because ‘Integrals’ are ‘Antiderivatives’…

When one solves the Derivatives of certain trig functions repeatedly, one obtains the sequence:

sin(x) -> cos(x) -> -sin(x) -> -cos(x) -> sin(x)

Solving the Indefinite Integrals of the same trig functions yields the result:

sin(x) -> -cos(x) -> -sin(x) -> cos(x) -> sin(x)

Hence, the Indefinite Integral of sin(x) is in fact -cos(x), and:

( -(-cos(0)) = +1 )

(End of Update, 1/31/2021, 13h35.)

(Updated 2/04/2021, 17h10…)

(As of 1/30/2021: )

I achieved the desired result, that both plots are similar, and find that doing so supports my conjecture, even though it does not constitute Mathematical proof that will stand up to any rigour… The reader will need to enable JavaScript from my blog, as well as from ‘mathjax.org’, in order to be able to view the following work-sheet:

Notes:

*If the sample-rate for the continuous functions plotted above was changed to 16kHz, consistently with over-sampled telephone frequencies, then they should be effective down to 500Hz*. And, over-sampling is required for 90° phase-shifting at the initial Nyquist Frequencies.

If this concept was indeed used to implement some sort of equalizer, and not just, ‘a universal phase-shifter’, then there is an additional caveat. The *continuous* functions above are plotted such, that their nominal gain will be 2. With discrete transforms, a question will inevitably come up, as to whether the Type 1 or the Type 2 transforms should be used. The Type 2 transforms will apply a half-sample shift, that corresponds to an offset of a quarter-wave, to the input, even though in this case, the input was in the frequency domain and the output in the time-domain (resulting in an arbitrary convolution as already stated).

One property that the Type 1 DCT has is, non-zero values at both endpoints, which also assures that any signal can be reconstructed. The Type 2 DCT will have as property, that it starts with a non-zero value at the origin of the output, but that because all the indices have odd products of quarter-wave frequencies, they will naturally tend to have zero, as the value of the last (output) element.

This would be ideal for equalizers and filters, because it would mean that no windowing function needs to be applied, to avoid spikes in the output, due to spikes in the input, entering the window.

Well, with ‘Discrete Sine Transforms’, this set of properties is reversed. The Type 2 will have as property, a value of zero at the origin, and non-zero values at the endpoints. Yet, presumably, the coefficients of both cosine and sine transforms, applied simultaneously, are supposed to define the same set of frequencies. What this means is that, If the Type 2 are to be applied, then some sort of windowing function should also be applied *to the sine* transform.

(Update 2/04/2021, 17h10: )

After pondering this question more closely, I find that the type of windowing function best-suited would be ‘a half-sine wave’, which begins and ends with multipliers of zero, at the beginning and end of the entire interval of the convolution. This should best preserve the way equalizers and filters are supposed to behave.
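Such a half-sine window is simple to construct; the sketch below is my own, and the length of 320 in the usage line merely matches the convolution size used elsewhere in this posting.

```python
import math

def half_sine_window(n):
    """Half-sine window: zero at both ends, peaking mid-interval."""
    return [math.sin(math.pi * i / (n - 1)) for i in range(n)]

# Applied multiplicatively to the sine-transform part of the kernel,
# this tapers the non-zero endpoints described in the text.
window = half_sine_window(320)
```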

(End of Update, 2/04/2021, 17h10.)

An idea which was once voiced was that, because a system of representation already exists, in which the real part of a complex number states a cosine-wave, and the imaginary part states a sine-wave, it should be possible just to apply the Discrete Fourier Transform to a set of ‘phasors’, and end up with a usable result. But one fact which can be overlooked in this way of thinking is that, outside this system of representation, the instantaneous voltage or current through a wire can only be a real number, at least as far as circuit theory goes.

What will happen though is that, wherever the phasors have *positive* imaginary parts, the corresponding *real* parts of the Discrete Fourier Transform will be phase-*advanced* 90°. Therefore, if one just ignores the imaginary parts of the DFT, one should also obtain the results stated above. Further, if one additionally only states ‘~~phasors~~‘ with imaginary parts equal to zero, then one is back to computing a DCT.

(Update 1/31/2021, 23h25: )

One question which a reader could have would be concerning the way I proposed to normalize the (custom) Discrete Transform, as this snip repeats:

```
for k: 0 thru 159 do
for n: 0 thru 67 do block (
conv[160 + k]: conv[160 + k] + (sin((n*k*%pi)/160) / 160),
conv[160 - k]: conv[160 - k] - (sin((n*k*%pi)/160) / 160)
);
```

I clearly divided the individual products by 160, even though the array has 320 elements. The way I justified that was, to observe that when two (synchronous) sine-waves are multiplied, or more correctly, if a sine-wave is squared, the product will average as *half* the maximum amplitude (effectively, with a frequency twice that of the non-multiplied sine-waves). Thus, a single frequency-coefficient will result in a sine-wave within the computed convolution, the peak amplitude of which is equal to the value of the coefficient. But then, when this wavelet is multiplied by a stream, because the convolution is being applied as an equalizer, half that amplitude will naturally result and oscillate, at the desired frequency.

What this would lead me to do as a first approximation is, to double the amplitudes generated by the transform, as part of its normalization. However, since the wavelet has two halves, each of which is the same transform, no doubling should be necessary. Given 160 coefficients, each product should indeed be divided by 160, that’s all.

It would be no different for a centre output coefficient, which would only occur once, but which would be the *cosine* zero product with all the frequency coefficients. In that case, *it* would already contain the average of the frequency coefficients, with no concept of getting halved, when applied as part of the wavelet.
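As a cross-check, a direct Python transcription of the wxMaxima loop shown earlier confirms both the odd symmetry about the centre-point and the zero centre element (each of the 68 frequency coefficients is taken as 1, as in the snippet):

```python
import math

# 320-element convolution kernel, indices 0..319; the centre is index 160.
conv = [0.0] * 320
for k in range(160):
    for n in range(68):
        term = math.sin(n * k * math.pi / 160) / 160
        conv[160 + k] += term   # applied positively on one side of the centre
        conv[160 - k] -= term   # applied negatively, mirrored, on the other

# By construction, conv[160 + k] == -conv[160 - k], and conv[160] stays 0,
# which is what gives the zero sine response described above.
```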

Enjoy,

Dirk


I take the unusual approach of hosting this blog and site on a server that is running on my personal computer at home. I don’t recommend that everybody do it this way; this is only how I do it. That makes the availability of my blog and site no better than the reliability with which I can keep my PC running, as well as that of my Internet connection.

Unfortunately, I experienced a brief power failure this morning, between 8h45 and 9h00. As a result, this site was down until about 9h40. I apologize for any inconvenience to my readers.

BTW, There have been remarkably few failures in the recent 3 months or so.

Dirk


I own a Samsung Galaxy S9 smart-phone, and have discovered that, in its tethering settings, there is a new setting named “Auto Hotspot”. What this setting aims to do, if activated, is that on other Samsung devices, which normally only have WiFi, when the user is roaming along with his phone, there should appear an additional access point for them to connect to. The following screen-shots show where this can be enabled *on the phone*…

I believe that this explains a fact which I’ve already commented on elsewhere, which is that, when I try to set up Google Instant Tethering, the negotiation between my ‘Asus Flip C213 Chromebook’ and this phone no longer adds Instant Tethering to the list of features which are enabled. My Samsung S9 phone will now only unlock the Chromebook. What I am guessing is that, because the feature I’m showing in this posting is a Samsung feature, with which Samsung wants to compete with the other companies, Samsung probably removed the offer of Instant Tethering from their phone.

Obviously, this is only a feature which I will now be able to use, between my S9 phone, and my Samsung Galaxy TAB S6 tablet.

The reader may ask what the advantages of this feature might be, over ‘regular WiFi tethering’, or ‘a WiFi hotspot’. The advantage could be this: even though a regular hotspot remains an option compatible with all clients, to have the phone constantly offer a WiFi hot-spot could drain the battery more. Supposedly, if Samsung’s Auto Hotspot is being used, it can be kept enabled on the phone, yet not drain the battery *overly*, as long as client devices do not connect. The decision could then be made directly from the client device, whether to connect or not… This is similar to what Google’s system offers.

Also, the Samsung phones with Android 10 have as feature, that their ‘regular hotspots’ will time out, say after 20 minutes of inactivity, again, to save battery drain. Yet, if the user is carrying a tablet with him that has been configured to connect to the mobile hotspot Automatically, the phone which is serving out this hotspot will never detect inactivity.

Further, I’ve been able to confirm that, as long as I have Auto Hotspot turned on on my phone, indeed it does *not* show up as an available WiFi connection, on devices that are *not* joined to my Samsung account. This is as expected. But it also adds hope that, as long as I don’t connect to the phone’s Auto Hotspot from another device, the battery drain due to my leaving this feature enabled on my phone constantly, may not be very high. I will comment by the end of this day, after having left my phone with its own WiFi Off, which means that my phone will be using its Mobile Data, but, *not* connecting my Samsung TAB S6, whether doing this seems to incur any unusually high amount of battery drain, on the phone…

(The view from my Galaxy TAB S6: )

(Update: )

What some readers may be asking themselves could be, ‘Why not just add an existing Samsung account to the Chromebook?’ After all, it would be playing within the rules to add a Samsung account to any Android device. But alas, there is a purely technological reason why trying to do so from a Chromebook either won’t work, or won’t work within the rules.

Under ChromeOS, Android is just a subsystem. Android doesn’t manage a Chromebook’s WiFi or anything else, other than the Android subsystem itself. In fact, ChromeOS even has an internal router, on which Android has a different IP address, from the IP address that the Chromebook has at any time. Therefore, while it might still be possible to add a Samsung account, to the Android subsystem within ChromeOS, doing so *should not really* give Internet access, to ChromeOS.

*The thought has occurred to me,* however, just to give my old ‘Google Pixel C’ (tablet) a Samsung account, hoping to use Samsung’s Auto Hotspot from my Pixel C, since that one is in fact an Android (8) device. But whether the old Pixel C is actually worth doing so is not definite in my mind. Additionally, I do *not* know what the minimum system requirements are *on the client device* (for the Auto Hotspot to display, that is being served out by the phone). *I do know* that the phone which is serving out the Auto Hotspot needs to have Android *10* (Q) running on it.

And to close that topic, when I go into the Accounts page within the Pixel C’s Settings, a ‘Samsung Account’ is just *not* a type of account which it offers to Add…

(Update 12/20/2020, 18h25: )

I got up before 6h00 this morning, and as I’m writing this, my battery level is down to 35%. This means that today, I spent slightly more juice than I would on any day all-on-mobile-data. But, I cannot really tell whether this is due to the Auto Hotspot feature remaining enabled, or whether it’s just due to my playing with the phone more.

I think I can keep the feature enabled.

Dirk


One of the possessions which I have is a USB MIDI Keyboard, which I’d like to be able to play, so that my computer’s software synthesizers actually generate the sound…

I know that this can be done because I’ve done it before. But in the past, when I set this up, I was either using an ‘ALSA’ MIDI input, belonging to an ‘ALSA’ or ‘PulseAudio’ application such as “Linux Multimedia Studio”, or I was using ‘QSynth’, which is a graphical front-end to ‘fluidsynth’, but in such a way that QSynth was listening for *ALSA* MIDI, and outputting *JACK* audio. This is actually a very common occurrence. I can switch between using the ‘PulseAudio’ and using the ‘JACK’ sound daemon, through a carefully set-up configuration of ‘QJackCtl’, which suspends PulseAudio when I activate JACK, and which also resumes PulseAudio, when I shut down JACK again.

But there is a basic obstacle, as soon as I want to play my MIDI Keyboard through ‘Ardour’. Ardour v6 *can* be run with the PulseAudio sound system, *but only for playback*, or, Ardour can be run with its JACK sound back-end, after JACK has been launched. Ardour cannot be run with its ALSA back-end, when PulseAudio is running.

The default behaviour of the Debian kernel modules, when I plug in a USB MIDI Keyboard, is, to make that MIDI connection visible within my system as an *ALSA* MIDI source, even though some applications, such as Ardour, will insist on only taking input from *JACK* MIDI sources, when in fact running in JACK mode. And so, this problem needed to be solved this morning…

The solution which I found was, to feed the Keyboard, which happens to be an “Oxygen 61″, to the ‘MIDI *Through* Port’ that’s visible in the ALSA Tab of QJackCtl’s Connections window. When MIDI sequences are fed there, they are also output from the System *JACK* MIDI sources, visible in the MIDI Tab of QJackCtl’s Connections window:

I should also note that, in many cases, the JACK clients can ask the JACK sound daemon to be connected to various inputs and outputs from within, without absolutely requiring that the QJackCtl Connections window be used. This explains why the audio output of Ardour was already routed properly to my PC’s speakers. But I found that I could only keep track of the MIDI connection, through QJackCtl’s Connections window. As the screen-shots above show, the second step is, to feed one of the System Sources to the appropriate Ardour MIDI input, in the MIDI Tab of QjackCtl’s Connections window.

The result was, that the synthesizer which I have available as an Ardour plug-in, played beautifully, in response to my pressing keys on the actual MIDI Keyboard, and no longer just, when I clicked on the graphical keyboard within the Ardour application window:

This on-screen keyboard can be made visible by double-Alt-Clicking on the icon of the instrument, with Ardour in its Mixer view, and then expanding the resulting window’s MIDI Keyboard fly-out. Yet, the on-screen keyboard was only useful for setup and testing purposes.

Tada!

(Updated 12/07/2020, 17h20… )

(As of 12/07/2020, 8h20: )

One fact which I should also mention is that there exists the package ‘`a2jmidid`’, which will solve the same sort of problem. When that package is installed – according to its package description – it causes a daemon to run, which will react to every ALSA MIDI Input or Output port, by connecting to it and creating a corresponding JACK MIDI Input or Output port.

That package was mainly meant to be used by people who will entirely make MIDI connections from within their applications, and not using the ‘QJackCtl’ Connections window. And another big drawback of having that package installed will be, that it will automatically tie up all existing ALSA MIDI clients, to make them available to JACK… Therefore, that package can be counterproductive to install, if the user wants to switch back and forth between having JACK running and not having it running.

(Update 12/07/2020, 17h20: )

I suppose that if a user is determined to be switching back and forth between running ‘PulseAudio’ and ‘JACK’, and yet wants the package ‘`a2jmidid`’ to make all the ALSA-MIDI ports available as (forwarded) JACK-MIDI ports, then he or she can extrapolate on what I, myself, have done, by adding more commands to ‘QJackCtl’. In the ‘Options’ tab, one can check the box named ‘Execute script **after** Startup’, and then, in the field next to that box, type:

```
a2jmidid -e &
```

(Note: The full path name of the executable may not be entered here.) Correspondingly, the user would also check the box named ‘Execute script **on** Shutdown’, and, to the right of that box, type:

```
killall -w a2jmidid
```

However, I have never tried this, as I don’t even have ‘`a2jmidid`’ installed.

Dirk


There exists a maxim in the publishing world, which is, ‘Publish or Perish.’ I guess it’s a good thing I’m not a publisher, then. In any case, it’s been a while since I posted anything, so I decided to share with the community some wisdom that existed in the early days of computing, and when I say that, *it really means*, ‘back in the early days’. This is something that might have been used on mini-computers, or, on the computers in certain special applications, before PCs as such existed.

A standard capability which should exist is, to compute a decently accurate sine function. Even one of the most mundane reasons could call for it: audio files may have been encoded with a peak amplitude, while a decoder, or a speech-synthesis chip, might only need to play back a sine-wave that has that encoded peak amplitude. However, it’s not always a given that a ‘CPU’ (“Central Processing Unit”) actually possesses an ‘FPU’ (a “Floating-Point Unit”). In such situations, programmers back then devised a trick.

It’s already known that a table of pre-computed sine values, numbering maybe 256, can be made part of a program. But, if all a program did was to look up sine values from such a table once, ridiculously poor accuracy would result. It was also known that, as long as the interval of 1 sine-wave runs from (zero) to (two-times-pi), the derivative of the sine function is the cosine function. So the trick, really, was to make not one lookup into the table, but *at least* two: one, to fetch an approximate sine value, and the next, to fetch an approximate cosine value, the latter of which is the derivative of the sine at the same point. The fractional part of the parameter, between table entries, could then be multiplied by this derivative, and the result added to the sine value, thus yielding a closer approximation to the real sine value. (:3)

But, a question which readers might have about this next could be, ‘Why does Dirk not just look up two adjacent sine-values, subtract to get the delta, and then multiply the fractional part by this delta?’ And the answer is, ‘Because one can apply not only the first derivative, but also *the second derivative*, by squaring the fractional part and halving it (:1), before multiplying the result by the negative of the sine function!’ One obtains a section of a parabola, and results from a 256-element table that are close to 16 bits accurate!

The source code can be found in my binaries folder, which is:

https://dirkmittler.homeip.net/binaries/

And, in that folder, the compressed files of interest are ‘IntSine.tar.gz’ and ‘IntSine.zip’. They are written in C. The variance that I get, from established values, in (16-bit) integer units squared, is “~~0.811416~~” “~~0.580644~~” (:2). Any variance lower than (1.0) should be considered ‘good’, since (±1) is actually the smallest-possible per-result error.

(Updated 12/04/2020, 11h50… )

(As of 12/01/2020, 19h10: )

Hint: In order to link both source files under Linux, the flag ‘-lm’ needs to be given. One source file legitimately uses the true, double-precision library function, in order to build the table of integer constants, while the other only uses it as a comparison, to score the accuracy of the integer arithmetic, which itself does not require the Math library.

Further, when compiling on a platform as old as Debian 8 / Jessie, the flag ‘-std=gnu11’ must be used.

(Update 12/02/2020, 5h55: )

One thing which I did was, to multiply 2 times Pi by 65536, in order to arrive at an integer constant, which the fractional part between two table-entries was to be multiplied by. This could also raise questions from the reader. My format really is, to take the 8 LSBs of a 16-bit integer value as the fractional position between two table-entries, in such a way that the entire 16-bit modulus defines 1 cycle of the sine wave. From that, it follows that the first derivative is actually 2 times Pi, times the cosine function, etc…

I can know that 411774 is a 19-bit integer in two ways:

- I know that 2 times Pi does not exceed 7, so that only 3 extra bits are being added, to what would otherwise be a 16-bit fraction, And
- While 2 to the power of 20 can be estimated as being approximately 1 million, what this really means is, that most 20-bit values don’t actually reach 1 million, only spanning approximately from 500,000 to 1,000,000. Since my integer is closer to within the range from 250,000 to 500,000, it’s most probably a 19-bit number.

Further, while I like the format of giving certain numbers a 16-LSB fractional part, I found that I could reduce round-off errors slightly, if I made the fractional part of certain other numbers 17 bits wide. I apologize if this makes my code less readable. But what this means is that, theoretically, if I were to square such a value, it would have a 34-bit fractional part, even though it’s only a signed, 32-bit integer! I suppose it’s fortunate, then, that the maximum starting value (in the corresponding part of the program) was only a positive 12-bit number. But this means that I’d need to right-shift the value by 18 bits, to arrive at a representation which, again, has a 16-bit fractional field.

I (*was*) right-shifting the value by 19 bits (*in an earlier version of the code*), because I additionally needed to halve it, not because doing so stems directly from the format of the number.

(Update 12/02/2020, 6h15: )

An additional question which could be asked would be, whether there would be any benefit, to also computing the third derivative, cubing the fractional component, and dividing by ~~three~~ six.

But even before dividing by six, an observation speaks against doing so. If the fractional component by itself, within a 16-bit fraction, only has 11 bits max, and if the fractional component squared, again within a 16-bit fraction, only has 6 bits max, then their product will have 17 bits max. After that has been divided by six, it will not have more than 15 bits. What this means is, that after being right-shifted one more time, to align it with a 16-bit fractional field, the result should be *zero*. Doing so cannot offer any improvements, within a 16-bit fractional format.

(Update 12/02/2020, 8h20: )

**1:)**

A question which some readers may not know the answer to would be of the form: ‘The second derivative of a function can be multiplied by a small difference in the parameter ~~squared~~; why does it need to be multiplied by that small difference, *squared and halved*?’ And the answer is as follows:

(x) times a constant is assumed to give the (first) derivative of a certain value. And the reason *this* is so, is that the integral of a constant over (x), is that constant times (x). What this means is, that the value itself needs to be *the integral of* (x), times that constant, in order for the constant also to be the multiplier of the second integral of (1). Well, the integral of (x) is (½ x^{2}), hence the (½) term. (x) was also the fractional part between two table-entries.
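In other words, the correction terms are just the Taylor expansion of the sine about the nearest table entry θ, with (x) as the small offset:

```
\sin(\theta + x) \;\approx\; \sin\theta \;+\; x\cos\theta \;-\; \tfrac{1}{2}\,x^{2}\sin\theta
```

The halved-square term is exactly the (½ x²) multiplier derived above, applied to the second derivative, which for the sine is the negative of the sine itself.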

(Edit 12/03/2020, 21h05…

If the cubic term was also to be expanded, from the third derivative, it would follow as (1/6 x^{3})…

(…End Of Edit, 12/03/2020, 21h05. )

(Update 12/02/2020, 14h10: )

**2:)**

The way in which I computed the variances, assumed first of all, that integers were to be assessed, in which each unit corresponded to:

1 / 65536

Where, 65536 is also 2^{16}. The exact calculation used was:

(mean of x^{2}) – (mean of x)^{2}

The problem when reporting variances in this way, especially since the mean is supposed to be zero, is that the information is incomplete, unless the mean was also reported. And so, oddly, the mean equals (-~~0.111511~~).

I don’t know the explanation for this, and chalk it up to, ‘an odd conspiracy of round-off errors’.

This was calculated for all valid input-values, that are 16-bit integers, and where the modulus of 65536 completes 1 full cycle.

(Update 12/03/2020, 12h55: )

There’s another observation which can be made about my integer arithmetic, which has to do with the rule in Calculus 1 that is also known as the “chain rule”. What that rule implies, in this case, is the following: According to pure Math, a sine function of (x) completes 1 cycle over the domain [0 .. 2*Pi). If the parameter is to be multiplied by (two-times-Pi), so as to complete 1 cycle over the interval [0 .. 1.0), then the first derivative of the sine function will be the cosine function of (x), *times two-times-Pi*. Additionally, the derivative of the cosine function, given the same assumption, will be the negative of the sine function, *times two-times-Pi*. Thus, to compute the second derivative, I need to end up multiplying (x) by two-times-Pi *twice*.

What I did instead was, to multiply (x) by two-times-Pi *once*, both before using it to expand the first derivative, and *also before squaring it*, to expand the second derivative. What this accomplishes, with fewer CPU operations, is to multiply by two-times-Pi *twice*, when expanding the second derivative.

(Update 12/03/2020, 19h05: )

By tweaking the code a little further, I was able to bring the variance down to (0.411991), and the DC offset down to (-0.077118).

There exists another implementation detail, which my programs depend*ed* on, and about which I received a surprise several months ago, when I was writing some simple ‘Qt5’ / GUI-building exercises. In C++, under the Debian / GNU Architectures, the following two function-calls differ in two ways, and not one:

```
int(-1.5);
floor(-1.5);
```

One important difference is, that the ‘int()’ call is really a type-conversion to an integer, which C programs should never invoke in the above form. The other, more important difference is the fact that these two calls return *numerically different* results. ‘floor(-1.5)’ does exactly what I expected it to do, which is, to return (-2.0). But to my surprise back then, ‘int(-1.5)’ actually returned (-1)! In other words, when a floating-point number is converted to type ‘int’, the absolute value is rounded down, ‘towards zero’, and, if the value was negative, the negative sign is put back in front of it. (In fact, this truncation towards zero is what the C and C++ standards specify for the conversion.)

The way I corrected for this in my Qt5 exercise was, to avoid casting directly from a floating-point number to an integer, instead, using the ‘floor()’ function whenever possible.

But a previous version of my two programs depended on the cast. Correct programming practice is, first to compute the ‘floor()’ function of (the floating-point number + 0.5), and then to assign the result to an integer, the second of which operations should just ‘extract the integer part, when the fractional part is zero.’

The present version of my two programs uses the (correct) ‘floor()’ function, as described.

(Update 12/04/2020, 11h50: )

**3:)**

On the subject of Computing History: When I read about one of my predecessors using such a trick, his trick differed from mine in two key ways:

- He was only shooting for 8 bits of precision, not 16, And
- His lookup table only had 128 entries, not 256.

And both those facts had a common reason: Although programmable chips existed as early as the *late* 1970s, and on into the 1980s, *those* programmable chips had far less *ROM* than even embedded CPUs have today. They simply did not have enough *ROM* to store a constant array of 256, 16-bit integers, let alone 32-bit integers, as my suggested algorithm does.

I vaguely seem to recall that programmable chips with approximately 1KB of ROM *in total* existed at that time, but their programmers also had to use that for the actual programming, including for everything else that a programmable chip was expected to do, as part of an appliance.

Now, my exercise could simply be modified, so that the peak amplitude in the sine-value table reaches ±2^{14}, let’s say, just so that the statement could be made, that all its constants are 16-bit constants. But if one did that, one would also no longer achieve ~16-bit accuracy in the results, as the output amplitude would simply scale down as well.

Dirk

]]>

One of the things which I will never be, is a Musician. However, I think I have some skills in Math and Technology. This allows me to play with open-source software such as the “Linux Multimedia Studio”, and also, to try testing a subwoofer which I bought back in 2019. However, my lack of skill in the field actually caused me to underrate the performance of the subwoofer considerably. Why? Let me explain.

I once had an acquaintance, *who was* a Musician, and who told me that the musical scale had intentionally been detuned, or retuned, in a specific way. I had already heard that the note ‘A below Middle C’ was meant, since olden times, to refer to a frequency of 440Hz. Yet, according to this Musician, in more recent times that same note had been retuned to *441*Hz. My own personal guess as to why would be, to force at least one note on the Chromatic Scale to have a rational relationship with the sample-rate of Audio CDs, which is 44.1kHz.

According to Western music of several centuries ago, the Diatonic Scale had been invented so that its notes had frequencies with rational relationships. But it was missing the so-called ‘black keys’. What the Chromatic Scale, which was invented later, did, was to detune the existing Diatonic notes, so that all the key-positions, including the black keys, became a sort of logarithm of frequency. According to modern realities, the exact ratio between two adjacent Chromatic notes (the twelfth root of two) is the same over an entire octave, so that each octave, again, represents an exact doubling or halving of frequency. According to that, a maximum of one note, ‘A’, can have a rational frequency in Hertz, and all the other frequencies, in Hertz, end up being irrational.

But, that same note can have slightly greater clarity when sampled, if one sine-wave occupies exactly 100 samples. Thus, my presumed reason for retuning A-below-Middle-C to 441Hz. Theoretically, a sine-wave can be sampled at a frequency which is irrational in relationship to the sample rate. But, when that is done, the clarity with which it gets played back *depends strongly* on the quality of ‘the low-pass filter’, which is also known as the quality of ‘the interpolation’.

(Edit 10/28/2020, 8h20: )

Actually, playing back a frequency of 440Hz at that sample rate differs from the rational situation by *1Hz*. Because it’s factually untrue that the period of time taken into account by the low-pass filter would be *as long as 1 second*, what should result is non-participation by the low-pass filter, but, possibly audibly for people who have very fine hearing, some sort of ‘beating’ or ‘interference effect’, with its own frequency of 1Hz.

*To make things worse*, what the listener will seem to hear is one frequency at 441Hz *anyway*, even though the scale might be tuned to place that note at 440Hz, but with that one note being modulated in some non-specific way.

(End of Edit 10/28/2020, 8h20. )

(Edit 10/28/2020, 19h00: )

In addition to that, *a third frequency* will physically be present, at ~439Hz, to account for this modulation. This would be similar to the concept of ‘amplitude modulation’.
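The analogy can be made explicit. As my own illustration, treating the 440Hz tone as a carrier that is amplitude-modulated at the 1Hz beat rate (with some modulation depth m), the standard AM identity produces exactly the three frequencies in question:

```
\left(1 + m\cos(2\pi \cdot 1 \cdot t)\right)\sin(2\pi \cdot 440\,t)
\;=\; \sin(2\pi \cdot 440\,t)
\;+\; \tfrac{m}{2}\sin(2\pi \cdot 441\,t)
\;+\; \tfrac{m}{2}\sin(2\pi \cdot 439\,t)
```

That is, sidebands appear at the carrier frequency plus and minus the modulating frequency, here 441Hz and ~439Hz.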

(End of Edit 10/28/2020, 19h00. )

So, here is the mistake I made. I did not know that the note ‘C’ always begins a new octave. Thus, I was able to play the ‘A’ note which is below ‘Middle C’, as well as the ‘A’ note which is below ‘C1’. But, even when playing with this software, I had failed to notice that the numbering of ‘Middle C’ was ‘C5’, not ‘~~C4~~’. But, ‘A below Middle C’ was *still ‘A4’*.

Therefore, naively, I played ‘A below C1’. I thought that I was sending a frequency of 55.125Hz to my newly purchased subwoofer, when in fact I was sending it a frequency of 27.5625Hz. The subwoofer reproduced the lower of the two frequencies with excellence, causing my walls to shake, and making it impossible for me to discern that it was in fact an ‘A’. But a person with musically trained ears would have noticed that the musical tone of ‘A below C2’ is easy to discern, while this musical tone of ‘A below C1’ is almost impossible to discern.

So, the subwoofer performs much better, than I gave it credit for doing.

Dirk

]]>

I have one of those Chromebooks which allow a Linux subsystem to be installed, that subsystem being referred to in the Google world as “Crostini”. It takes the form of a Virtual Machine, which mounts a Container. That container provides the logical hard drive of the VM’s Guest System. What Google had done at some point in the past was, to install Debian 9 / Stretch as the Linux version, in a simplified, automated way. But, because Debian Stretch is being replaced by Debian 10 / Buster, the option also exists to upgrade the Linux Guest System to Buster. Only, while the option to do so manually was always available to knowledgeable users, with the recent update *of ChromeOS*, Google insists that the user perform the upgrade, and provides ‘an easy script’ to do so. The user is prompted to click on something in his ChromeOS settings panel.

What happened to me, and what may also happen to my readers, is that this script crashes, and leaves the user with a ChromeOS window that has a big red symbol displayed, to indicate that the upgrade failed. (I failed to take a screen-shot of what this looks like.) The button offering the upgrade is thankfully taken away at that point. But, upon reaching that point, the user will need to decide what to do next, out of essentially two options:

- Delete the Linux Container, and set up a new one from scratch. In that case, everything that was installed to, or stored within Linux will be lost. Or,
- Try to complete the upgrade in spite of the failed script.

I chose to do the latter. The Linux O/S has its own method of performing such an upgrade. I would estimate that the reason for which the script crashed on me might have been Google’s expectation, that my Linux Guest System would have 200-300 packages installed, when in fact I have a much more complete Linux system, with over 1000 packages installed, including services and other packages that ask for configuration options. At some point, the Google script hangs, because the Linux O/S is asking an unexpected question. Also, while the easy button has a check-box checked by default, to back up the Guest System’s files before performing the upgrade, *I intentionally unchecked that*, simply over the knowledge that I do not have sufficient storage on the Chromebook to back up the Guest System.

I proceeded on the assumption that what the Google script did first was, to change the contents of the file ‘/etc/apt/sources.list’, as well as of the directory ‘/etc/apt/sources.list.d’, to specify the new software sources, associated with Debian Buster as opposed to Debian Stretch. At that point, the Google script should also have set up whatever it is that makes Crostini different from stock Linux. Only, once in the middle of the upgrade that follows, the Google script hung.

(Updated 10/25/2020, 22h55… )

(As of 10/25/2020, 13h40: )

On those assumptions, all I really needed to do was, to open a Linux terminal, in case one was not already open, and type in the following commands:

```
$ sudo su
# apt-get dist-upgrade
```

In some cases, ‘`apt-get`’ can be replaced with ‘`apt`’ – especially, *after* the upgrade to Debian Buster – and, I did not need to give the command ‘`apt-get update`’, because presumably, the failed Google script had already given that command.

Aside from answering a few prompts, which the Google script had not expected, I really didn’t have to do anything else. However, the whole process took me 5 hours, and not the 30 minutes that the Google window had suggested it would take. On larger Linux installs, doing a dist-upgrade can take 6-12 hours. **A so-called dist-upgrade must not be interrupted at any point in the process.**

After the upgrade, I found that all the applications I tested, work as before, including the (updated) ‘TigerVNC server’, that actually allows me to create an LXDE desktop, which I can then access via a ChromeOS-provided VNC *Viewer*. However, there is one more detail which I should mention:

When I set up Linux systems, I often install packages in a careful way, that ‘pull in’ libraries belonging to unused desktop managers, such as ‘GNOME’, either on my Plasma 5 -based computer, or on an LXDE -based computer. I’m careful *not* actually to install GNOME.

Well, during a dist-upgrade, this habit of mine can bite me in the ass. A dist-upgrade can, and will, perform a full install of GNOME in that case. What this does is maximize the amount of storage the container uses, with the danger being, that at some point, the amount of storage available on the ChromeOS Host System might no longer accommodate the grown container. This creates a ‘hump’ which I had to wait through, before reversing the problem. After I had cleared this hump, I gave the following commands:

```
# apt clean
# sync
# apt autoremove
```

Amazingly, the ‘autoremove’ command removed the unnecessary packages again, so that again, ‘GNOME’ is not installed.

If one ignores the amount of time this took, the process was as reliable and easy as if I had just done a smaller update. However, as Linux systems become larger and more complex, a dist-upgrade may not work as easily for the reader. Yet, as long as, like me, the reader only has 1000 packages or so in his Linux subsystem, there is a good chance that, like me, he or she can simply rescue a failed dist-upgrade in this way.

The hardest part of a dist-upgrade is usually, to get all the repositories right, in the file ‘/etc/apt/sources.list’… The second hardest part of a dist-upgrade is usually, whatever customization the user did to his system prior to the dist-upgrade, and then, trying to make sure that customized features work afterwards. As ingenious as the Linux dist-upgrade process is, it cannot take into account, what the user may have done, outside the route of the package manager. Out-of-tree installs are most likely to break.

I did observe that the Google script had added sources first, that offer Crostini packages, where stock Linux would not have those.

Before the upgrade, my Linux container took up 9.2GB, while afterwards, it was taking up 11.3GB. I am assuming that ChromeOS can shrink this container, in addition to being able to grow it.

There is another fact which readers should be aware of, that can cause the update to seem to have failed, when in fact it may not have.

The way Linux is generally organized is, into a system directory tree and a user home directory tree. Software from the package manager is generally installed to the system directory tree, not the user home directory tree, the latter of which is usually left alone. During an upgrade, several applications are upgraded to newer versions, as it should be. But, the applications still store their settings per-user, in the user’s home directory tree.

This does not change if the Linux system is running as a Guest System.

In some cases, applications with higher version numbers become incompatible with the settings that the previous version saved per-user. If a single application no longer seems to work, then one thing to do would be, either to delete or rename its per-user settings file – which could also be a sub-directory – and then to relaunch that application, with the understanding that it behaves from then on, as though ‘a first run’ were being carried out.

~~I needed to do this~~ with ‘Bluefish’ specifically, before it would connect directly to my FTPS Server again.

I now have LibreOffice 6 where I previously had LibreOffice 5, and v6 works just fine, after translating its per-user settings automatically, to the version required.

(Update 10/25/2020, 22h55: )

*A general note on using Bluefish to connect directly to an FTPS Server in this way*:

The preferred way to do this would be such, that a coder does not need to retype the URL each time. But, in some cases, the only way I was able to connect to the server was, to open the ‘Seahorse’ application, unlock the FTPS password from there, close that application, open Bluefish, type in the URL once, acknowledge that an unverifiable certificate was being used by my (self-signed) server, close Bluefish again, open Bluefish a second time, and then click on the URL in the History Pane.

A way to speed up ‘connecting to FTPS URLs’ is, once the FTPS Server’s contents are being displayed, to pick a file at random and create a bookmark, *which is then* a bookmark to one file, within an FTPS login. After that, even in a new session, once the FTPS Server password has been unlocked in Seahorse, it became possible for me just to open Bluefish once, *navigate the Pane to its Bookmarks*, click on the one bookmark, acknowledge the self-signed certificate, and I am logged on to the server, able to browse its files, and able to close that one file as well, without losing my login.

This just seems to be some weakness Bluefish has, in remembering FTPS URLs, and it has the weakness across Debian 9 and Debian 10 computers.

Dirk

]]>