One phenomenon known in Psychology is that, as the years pass, the memories which we have of a single past event will change, so that, 10 or 20 years later, it becomes hard to trust those memories.

A modern phenomenon exists, by which many Baby Boomers tend to recall their old vinyl records as having had better sound than so-called modern, digital sound. On the whole, I’d say this recollection is partially true and partially false.

When “digital sound” first became popular (in the early to mid-1980s), it did so in the form of Audio CDs, the sound of which was uncompressed, 16-bit PCM, at a sample-rate of 44.1kHz. Depending on how expensive a person’s CD player actually was, I felt that the sound was quite good. But soon after that, PCs became popular, and many eager people were advised to transfer the recordings which they still had on LPs to their PCs, by way of the PCs’ built-in sound devices, and then to compress the recordings *to MP3 Format for Archiving*. A bit-rate which people might have used for those MP3 Files was 128kbps. People *had to* compress the audio *in some way*, because early hard drives did not have the capacity to store a person’s collection of music as uncompressed WAV or AIFF Files. Further, if the exercise had been to burn uncompressed audio onto CD-*R*s (from LPs), this would also have missed the point in some way. (:2)
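To illustrate the storage pressure with some rough arithmetic of my own (the figures below assume Audio-CD-quality PCM, and are not taken from any period source):

```python
# Rough storage arithmetic, assuming Audio-CD-quality PCM:
# 44.1 kHz sample-rate, stereo, 16 bits (2 bytes) per sample.
pcm_bytes_per_sec = 44100 * 2 * 2      # = 176400 bytes/s
mp3_bytes_per_sec = 128_000 // 8       # 128 kbps MP3 = 16000 bytes/s

hour = 3600
wav_mb_per_hour = pcm_bytes_per_sec * hour / 1_000_000
mp3_mb_per_hour = mp3_bytes_per_sec * hour / 1_000_000

print(wav_mb_per_hour)  # ~635 MB per hour, uncompressed
print(mp3_mb_per_hour)  # ~57.6 MB per hour, at 128 kbps
```

On a 1990s hard drive of a few hundred megabytes, one hour of uncompressed stereo would have filled most of the disk, while the 128kbps MP3 version takes roughly one-eleventh the space.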

What some people might be forgetting is the fact that many LPs which were re-recorded in this way had strong sound defects before being transcribed, the most important of which was frequent scratches. I think the second-most-common sound defect in the LPs was that, unless the listener had a high-end turntable, with a neutrally counterweighted tonearm and a calibrated spring that defined stylus force, an LP that was listened to many, many times would actually have its higher-frequency sound content become distorted, due to wear of the groove.

(Updated 3/02/2021, 18h05… )

(As of 11h00… )

And so, when many recordings were transferred to this specific digital format, software was available early on which would remove the clicks and pops from the sound, *and which added its own distortion*. My memory of this software was just recently revived, because I downloaded some older software that still has these capabilities. Mind you, the software which I downloaded was not so old as to be from the 1990s, and precisely because it is more recent, it embodies ways of removing ‘clicks’ from the audio that actually distort the sound less while doing so.

But I think that one other perceptual phenomenon has largely taken place: a newer set of faults replaced an older set of faults. When the scratchy sound was gone, and gone consistently, from the music that people listen to, people began to listen to the sound more carefully. At that point, they began to recognize that modern music seems to have a ‘uniformly smooth’ quality to it, which not all songs would have for any particular reason in their production. In addition, many forms of compressed sound simply have too few frequency components.

I think that true audiophiles always knew what to listen for in music, always appreciated spectrally rich, polyphonic sounds, and always had the expensive turntables, from which the sound reproduction was truly clear and sharp. But in the era when common people were transcribing their old recordings to digital formats (the 1990s), there was a category of turntable being sold for that purpose which, due to its very design, could never have reproduced the sound with the level of quality that expensive turntables from the 1980s could in the first place. Those expensive turntables had direct-drive motors, because the wobble of a rubber wheel, or even that of a belt around the platter, was enough to drive true audiophiles up the wall. Many of the later turntables meant for transcription had belts, etc.; they seldom had direct drive. And I seem to recall that, back in the days of expensive turntables, some even had features such as special bearings for the platter, to reduce rumble, and tonearms made of ‘space-age materials’, as the materials were called back then, such as carbon composites, because *resonance of the tonearm* could interfere with an audiophile’s experience. They had an anti-skating adjustment…

So, if a person is listening to recordings that consistently seem to have the same deficiencies, that person becomes quite annoyed with those. And then, if the person switches to another type of recording, where those deficiencies are just magically gone, there is an initial sense of relief and pleasure. But then, a new set of deficiencies becomes apparent, maybe after a few years of listening.

Also, I’d say that if modern listeners do buy LPs today, those won’t come from the store with scratches already in them. But I could be wrong.

But the existence of those scratches confirms to the listener that he or she is hearing sound from an LP, because they generate pulses of high-frequency sound that have almost-perfect immediacy. It’s a pet theory of mine that, even if no compression or lossless compression is being used, the amount of time it takes pulses of sound to reach full amplitude can physiologically reduce the quality of the experience of listening to that sound. One way in which even the early sound of Audio CDs was just too smooth, was in the fact that the brick-wall low-pass filters that were used spread out high-frequency pulses over time, so that they were no longer instantaneous. Well, when a scratch passes underneath a phonograph stylus, its reproduction of that pulse is virtually instant.

So, listeners can pick their poison. Do they prefer sound which has ‘rectangular frequency response’ all the way to 20kHz, but poor temporal resolution, or do they prefer sound that has non-uniform frequency response and near-perfect temporal resolution, but frequent sound-pulses that were never part of the original recording? (:1)

I also remember that any stereo receiver from before the 1980s which had a phonograph input had a preamplifier built in, which was fed the raw voltage-swings of the phonograph’s cartridge, and which applied a standard (frequency) equalization curve – the RIAA curve – that was decided before the records were in fact recorded. This preamp did *not* have today’s standard 3.5mm stereo jack as its input terminal, and was very sensitive.

The turntables which were sold in the 1990s for transcribing records to digital format had an amplified output similar to a line output, with a 3.5mm jack as the terminal-type. They needed to have this, because *consumer sound cards* did not have the sensitivity required to accept input from the cartridge directly.

I would rather *not* try to guess whether these internal amplifiers were as good as the preamps that were once built in to stereo receivers. What I can conclude is that, if a person still had a genuine turntable from before 1980 (by itself), he was in no position to use it to transcribe his LPs, simply because the genuine turntables lacked an internal amp and, as I just wrote, did not output the voltages required for a sound card to recognize a signal.

(Update 3/02/2021, 17h50: )

**1:)**

The slap-dash way in which I described the choices available to ‘consumers’ today was a bit over-simplified. For example, ‘Super Audio CDs’ (invented in the 1990s) are still being mastered and sold, which have approximately uniform frequency response all the way up to ~40kHz, above which the frequency response starts to roll off more gently, at -12 dB /Octave, aka -40 dB /Decade, just so that those high-frequency sound pulses will have their temporal resolution preserved.

But, the equipment today which plays Super Audio CDs has strong DRM, to prevent listeners from ripping the disks at their full resolution – meaning that the only way to listen to them today is with a player, a receiver, and either speakers or headphones – and often the price-tags are close to US$ 400 just for the player, to which the price of the other Home Theatre components would need to be added… Hardly a price that *average* consumers would be prepared to pay, for casual listening.

(Update 3/02/2021, 18h05: )

**2:)**

Some of my readers might not immediately understand the logical meaning of my statement that, in the 1990s, people may have compressed their music collections to 128kbps MP3 Files. This would be because, according to today’s standards, that would be a bad thing to do.

If people subscribe to a music streaming service that still uses MP3 sound compression – and I think there has been a revival in MP3… – the service may stream the stereo at 256kbps, or at an even higher bit-rate. But, in my own experience I’ve found that increasing the bit-rate of MP3s beyond 192kbps fails to overcome their inherent weaknesses. These days, this would be my list of recommended bit-rates for several encodings:

```
MP3        -> 192 kbps
Ogg Vorbis -> 192-256 kbps
AAC        -> 128 kbps
Ogg Opus   -> 96 kbps
```

But, simply having provided such a list still doesn’t mean that I’d just apply lossy compression to any sort of music. In cases where I’ve compressed so-called Classical Music, I’ve used FLAC.

Dirk

]]>

I take the unusual approach of hosting my Web-site, including this blog, on a private PC I have at home, acting as a server. I’m not suggesting that everybody do it this way; it’s only how I do it. For that reason, the visibility of my site and blog is only as good as the quality of my Internet connection, as well as the continuity of my power supply.

Unfortunately, my immediate neighbourhood was subject to multiple power failures this evening, from 16h25 until 18h15. For that reason, my Web-server was offline until about 18h50. I apologize for any inconvenience to my readers.

BTW, this last server-session ran uninterrupted for over 60 days, as a sign of how rare such failures have been in recent months.

Dirk

]]>

One concept that exists in modern digital signal processing is that a simple algorithm can often be written to perform what old-fashioned, analog filters were able to do.

But then, one place where I find progress lacking – at least, where I can find the information posted publicly – is in how to discretize slightly more complicated analog filters. Specifically, if one wants to design 2nd-order low-pass or high-pass filters, one approach which is often recommended is just to chain the primitive low-pass or high-pass filters. The problem with that is the highly damped frequency-response curve which follows, evident in the attenuated voltage gain at the cutoff frequency itself.

In analog circuitry, a solution to this problem exists in the “Sallen-Key Filter”, which naturally has a gain at the corner frequency of (-6dB), the same as would result if two primitive filters were simply chained. But beyond that, the analog filter can be given (positive) feedback gain, in order to increase its Q-factor.

I set out to write some pseudo-code, for how such a filter could also be converted into an algorithm…

```
Second-Order...

LP:
for i from 1 to n
    Y[i] := (k * Y[i-1]) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i] := (k * Z[i-1]) + ((1 - k) * Y[i])
    Feedback[i] := (Z[i] - Z[i-1]) * k * α
    (output Z[i])

BP:
for i from 1 to n
    Y[i] := (k * Y[i-1]) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i] := k * (Z[i-1] + Y[i] - Y[i-1])
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])

HP:
for i from 1 to n
    Y[i] := (k * (Y[i-1] + X[i] - X[i-1])) + Feedback[i-1]
    Z[i] := k * (Z[i-1] + Y[i] - Y[i-1])
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])

Where:
    k Is the constant that defines the corner frequency via ω, and
    α Is the constant that peaks the Q-factor.

    ω = 2 * sin(π * F0 / h)
    k = 1 / (1 + ω),  F0 < (h / 4)

    h Is the sample-rate.
    F0 Is the corner frequency.

To achieve a Q-factor (Q):
    α = 2 + (sin^2(π * F0 / h) * 2) - (1 / Q)
    'Damping Factor' (ζ) = 1 / (2 * Q)

Critical Damping:
    ζ = 1 / sqrt(2)
    (...)
    Q = 1 / sqrt(2)
```
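As a sanity check, I can offer a minimal Python sketch of the LP variant above (the function name is my own invention; with (α = 0), the two chained stages should simply pass DC at unity gain, because the feedback term vanishes at steady state):

```python
import math

def second_order_lp(x, f0, h, alpha):
    """Discretized 2nd-order low-pass, following the pseudo-code above.

    k sets the corner frequency f0 at sample-rate h;
    alpha is the feedback constant that peaks the Q-factor.
    """
    omega = 2.0 * math.sin(math.pi * f0 / h)
    k = 1.0 / (1.0 + omega)
    y_prev = z_prev = fb_prev = 0.0
    out = []
    for xi in x:
        y = (k * y_prev) + ((1.0 - k) * xi) + fb_prev
        z = (k * z_prev) + ((1.0 - k) * y)
        fb = (z - z_prev) * k * alpha
        y_prev, z_prev, fb_prev = y, z, fb
        out.append(z)
    return out

# A DC input should emerge at unity gain, since the feedback term
# (proportional to Z[i] - Z[i-1]) vanishes at steady state.
steady = second_order_lp([1.0] * 2000, 1000.0, 44100.0, 0.0)
print(steady[-1])  # ~1.0
```

The BP and HP variants would follow the same pattern, only with the differenced update equations shown above.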

(Algorithm Revised 2/08/2021, 23h40. )

(Computation of parameters Revised 2/09/2021, 2h15. )

(Updated 2/10/2021, 18h25… )

(As of 2/07/2021, 9h05… )

I would just like to acknowledge that Wikipedia defines the Low-Pass Filter (…algorithm) differently from the way I do, and in a way which is not symmetrical with its definition of the High-Pass Filter:

My explanation for the difference in how Wikipedia defines its Low-Pass Filter lies in the fact that it uses *a different value of (“α”)* to define the same corner frequency than the value of (“α”) it uses for its High-Pass Filter. Accounting for the fact that I define the corresponding symbol, (‘k’), the same way both times should yield the same type of Low-Pass Filter as Wikipedia’s.

(Update Removed 2/08/2021, 14h45, because inaccurate.)

(Update 2/09/2021, 1h30: )

I found the following Web-article useful in deciphering how second-order low-pass filters generally work:

https://www.electronics-tutorials.ws/filter/second-order-filters.html

(Update 2/09/2021, 3h15: )

One observation which I made about the primitive building-blocks of discretized filters was that, while the algorithms shown yield accurate frequencies when the corner frequency is low compared with the Nyquist Frequency, as the corner frequency approaches the Nyquist Frequency, a strange behaviour sets in: a gain of (1/2), not of (1/sqrt(2)). I have explained this away in different parts of my blog as the inability of the sampling interval to capture any signal component which has been phase-shifted 90⁰. In short, what any analog low-pass filter is supposed to do at its corner frequency is to yield an amplitude of (1/sqrt(2)), but with a 45⁰ phase-shift. Such a phase-shift can also be represented as two components, one in-phase and the other 90⁰ out of phase, each at (1/2) amplitude. As if the sampling interval were ignoring one of the components, these algorithms will seem to yield (1/2) amplitude, at 0⁰ phase-shift.
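This behaviour is easy to probe numerically. The following sketch (my own test harness, with arbitrarily chosen frequencies) drives the primitive one-pole low-pass with a sine-wave at its own corner frequency, and estimates the gain from the RMS amplitude of the steady-state output:

```python
import math

def corner_gain(f0, h, n=8192):
    # One-pole low-pass: y[i] = k*y[i-1] + (1-k)*x[i],
    # with k = 1/(1+w), w = 2*sin(pi*F0/h), as defined above.
    w = 2.0 * math.sin(math.pi * f0 / h)
    k = 1.0 / (1.0 + w)
    y, out = 0.0, []
    for i in range(n):
        y = (k * y) + ((1.0 - k) * math.sin(2.0 * math.pi * f0 * i / h))
        out.append(y)
    tail = out[n // 2:]  # discard the transient
    rms = math.sqrt(sum(v * v for v in tail) / len(tail))
    return rms * math.sqrt(2.0)  # peak amplitude of the steady-state sine

print(corner_gain(441.0, 44100.0))    # low corner: close to 1/sqrt(2)
print(corner_gain(11025.0, 44100.0))  # corner at h/4: closer to 1/2
```

At a low corner frequency, the measured gain lands near (1/sqrt(2)); at (F0 = h/4), it falls noticeably, toward (1/2).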

The problematic aspects of this behaviour are twofold:

- This cannot set in abruptly when the corner frequency reaches or exceeds (h / 4), and
- The open-loop gain of these filters, applied in pairs, needs to be known exactly, so that high Q-factors can be implemented, with the inverse of the gain approached but never exceeded.

Thus, I felt that the best way to visualize this over-damping would be that a 90⁰ phase-shifted vector starts to get added in to the in-phase component, as the corner frequency increases, and that the expected gain which analog filters would attain results as a hypotenuse. But this hypotenuse will be missing from the algorithmic filters’ gain, so that it can be multiplied in the determination of the parameter that leads to the Q-factor. By itself, this would yield the term:

sqrt(2 + 2*x^2)

However, since the algorithmic filters form a chain of pairs, the square root can be omitted. Further, two consecutive halves form a quarter.

This is how my conjecture forms, as to the correct way to achieve the desired Q-factors.

(Update 2/10/2021, 18h25: )

There is an observation about this discretization of the filters which should really be more obvious than the one I described above. The discretization suffers from the main problem that feedback samples are not being added instantaneously, while feedback in an analog circuit would be instantaneous.

In this emulation, the feedback is effectively being added with a delay of 1 sampling interval, after having passed through a High-Pass and a Low-Pass Filter ‘normally’. That, in turn, would also make it useless above half the Nyquist Frequency (= h / 4), where each sample deviates from the previous sample in the opposite direction, so that ‘positive’ feedback would only lead to greater cancellation. At half the Nyquist Frequency, feedback samples are being added in with an effective 90⁰ phase-delay (compared to the actual input samples).

What this would seem to suggest is that, in order for the filters to be applied over the span of the original Nyquist Frequency, the inputs must first be up-sampled 2x, perhaps with a linear interpolation; the intended filters applied according to the Math of the new sample-rate; and then only half the resulting output-samples actually used as output, at the original sample-rate.
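A minimal sketch of what that up-sampling step could look like, with linear interpolation as suggested (the function names are mine):

```python
def upsample_2x_linear(x):
    # Insert one linearly interpolated sample between each input pair.
    out = []
    for a, b in zip(x, x[1:]):
        out.append(a)
        out.append((a + b) / 2.0)
    out.append(x[-1])
    return out

def downsample_2x(x):
    # Keep every second sample, returning to the original rate.
    return x[::2]

print(upsample_2x_linear([0.0, 1.0, 0.0, -1.0]))
# [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]
```

The intended filter would then be applied to the up-sampled stream, with (h) doubled, before every second output sample is discarded.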

In this special case, there should be no complicated requirements on how to down-sample, which would come into effect if there could be (relevant) frequency-components present above half the (derived) Nyquist Frequency. The reason such frequency-components would not be present is the fact that the derived sample-rate would contain signal components present at the original sample-rate.

But then, this observation would also have an effect on what the consequence will be, if a fed-back sample is added to the input sample with a phase-delay of 90⁰. According to how *Resonance* is formally described, *if the Q-factor is high, its phase-delay will get close to 90⁰ but not quite reach it*. And, at a phase-delay of exactly 90⁰, a physical resonator will fail to receive any more energy from the exciter, thereby having no reason to become active. At a phase-delay *exceeding 90⁰*, the resonator will also be losing energy to the exciter, when both are physical components.

Following what I’ve already written, I will leave it as an exercise for the reader to compute an adjusted value of (α), which should compensate for this latter, stronger phenomenon, rather than the phenomenon I adjusted for earlier.

One thing I can tell the reader *not* to do is ever to allow (α) to reach or exceed (4.0). The reason for this is as follows… If (F0) is chosen to be (h / 6)…

(k = (1-k) = 0.5), and, (k * (1-k) = 0.25),

That product will be the maximum open-loop gain, not accounting for (α). Therefore, if (α >= 4.0), a runaway sample-value can and will result…

The reader will need to enable JavaScript from my site, as well as from ‘mathjax.org’, in order to view the comparative plot shown below…

Dirk

]]>

In recent weeks I’ve been noticing some rather odd behaviour of the Linux version of the most up-to-date Chrome browser. In short, every time I launched the browser on my Debian 9 / Stretch computer, which has the Plasma 5.8 Desktop Manager, certain malfunctions would set in, specific to the desktop manager. I waited through several Chrome version upgrades, but the malfunctions persisted. As far as I can tell, the problem boils down to the following:

Google will only distribute the latest Chrome version, and when they tag the line which one is supposed to have in one’s Sources.list with ‘stable’, apparently they mean both the Stable version of Chrome And the Stable version of Debian. According to Google, only Debian 10 exists right now, because that is the “stable” version (of Debian); Debian 9 and Debian 8 don’t exist anymore. Except for the fact that many people could still have either installed.

And so, rather than go the insecure route and install some outdated, non-official version of Chrome, I’d say that the best thing to do was to install “Chrom*ium*” instead, from the Debian repositories. It has always been the debranded version of Chrome, but it is the version that is most compatible.

On my Debian 9 box, that would be the (corresponding) version ‘`73.0.3683.75-1~deb9u1`’, as of the time of this posting. It’s a retro version, but not so deeply retro that I’d fear for the security of my data. While I was at it, I installed ‘`chromium-l10n`’ and ‘`chromium-widevine`’, the last of which I happen to have the luxury of *being allowed* to install, because that last package actually allows the browser to play certain DRM-ed content.

Now, I also have Debian 8 computers (the version which was called ‘Debian Jessie’), and I infer that what was too recent for Debian 9 was also too recent for Debian 8. I infer this even though the same, recent Chrome versions showed *no obvious* signs of malfunctioning under Debian 8. So, I kiboshed that as well. However, I think that the Chromium version that was up-to-date with Debian 8 was *version 57 (+ something)*. That just struck me as too early a version to revert Chrome back to, and so what I did *on that computer* instead was to install ‘Vivaldi 3.6’.

What this does is put me back into the situation in which each of my main Linux computers has two mainstream Web-browsers installed, because I feel more secure with two of those.

Dirk

]]>

In Calculus, one of the most basic things that can be solved for is the case where a principal function receives a parameter, multiplies it by a multiplier, and then passes the product to a nested function, of which either the derivative or the integral can subsequently be found. But what needs to be done with the multiplier for integration is the opposite of what is done for differentiation. The following two work-sheets illustrate:

PDF File for Desktop Computers

Please pardon the poor typesetting of the EPUB File. It’s the result of some compatibility issues (with EPUB readers which do not support EPUB3 that uses MathML).
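The rule being illustrated can also be confirmed with a Computer Algebra System. The following snippet assumes that the ‘SymPy’ library is installed, and uses sin(kx) as the nested function:

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', positive=True)
f = sp.sin(k * x)

# Differentiation multiplies by the inner multiplier...
print(sp.diff(f, x))        # k*cos(k*x)

# ...while integration divides by it instead.
print(sp.integrate(f, x))   # -cos(k*x)/k
```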

This realization also explains how, when the sinc function has been discretized in a certain way and applied as a low-pass filter, I can know that its Nominal Gain, or its D.C. gain, will be close to (2). The assumption which I was making about the low-pass filter was that the sinc function makes its first zero-crossings two input samples away from the centre-point, and has an additional zero-crossing every two input samples after that.

This is not how every filter based on the sinc function will be designed; it was only how one specific filter would have been designed.

This means that a phenomenon which would normally happen over an interval of (π) happens over an interval of (2). Additionally, I have read that, when the sinc function is Mathematically pure and has not been translated into Engineering equivalents, its integral approaches (π). Just to be obtuse: the interval of the (Engineering) function’s parameter has been multiplied by (π/2), to arrive at the value which must be fed to the true trig function.

Thus, that half-band filter, which employs 2x over-sampling, will have an integral that approaches (2), not (π).
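That value of (2) can be checked numerically, by summing a sinc kernel whose zero-crossings fall every two samples (the truncation length below is arbitrary; the alternating tails of the kernel converge slowly):

```python
import math

def sinc(x):
    # Normalized ("Engineering") sinc: sin(pi*x) / (pi*x), with sinc(0) = 1.
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

# Half-band low-pass kernel: zero-crossings every 2 samples from the centre.
N = 20001
dc_gain = sum(sinc(n / 2.0) for n in range(-N, N + 1))
print(dc_gain)  # approaches 2, not pi
```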

Dirk

]]>

I can sometimes describe a way of using certain tools – such as, in this case, one of the Discrete Cosine Transforms – which is correct in principle, but which has an underlying flaw that needs to be corrected, relative to my first approximation of how the tool can be applied.

One of the things which I had said was possible was to take a series of frequency-domain ‘equalizer settings’ – at one per unit of frequency, not at so many per octave – compute whichever DCT was relevant, such that the result had the lowest frequency as its first element, and then to apply that result as a convolution, in order finally to apply the computed equalizer to a signal.

One of the facts which I’m only realizing recently is that, if the DCT is computed in a one-sided way, the results are ‘completely non-ideal’, because that gives no control over what the phase-shifts will be at any frequency! Similarly, such a one-sided convolution can also not be applied as the sinc function, because the amount of sine-wave output, in response to a cosine-wave input, will approach infinity when the frequency is actually at the cutoff frequency.

What I have found instead is that, if such a cosine transform is mirrored around a centre-point, the amount of sine response to an input cosine-wave will cancel out and become zero, thus giving phase-shifts of zero.

But a result which some people might like is to be able to apply controlled phase-shifts, differently for each frequency, such that those people specify a cosine as well as a sine component, for an assumed input cosine-wave.

The way to accomplish that is to add in the corresponding (normalized) sine-transform of the series of phase-shifted response values, and to observe that the sine-transform will actually be zero at the centre-point. Then, the thing to do is to apply negatively, on the other side of the centre-point, the results which were applied positively on the one side.

I have carried out a certain experiment with the Computer Algebra System named “wxMaxima”, in order first to observe what happens if a set of equal, discrete frequency-coefficients belonging to a series is summed. And then, I plotted the result of the *definite* integral of the sine function, over a short interval. Just as with the sinc function, the definite integral of the cosine function was (sin(x) – sin(0)) / x; the definite integral of the sine function will be (1 – cos(x)) / x, and, because the derivative of cos(x) is zero at (x = 0), the limit equation based on the division by zero will actually approach zero, and be well-behaved.
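That limit can be sanity-checked numerically, simply by evaluating the expression at ever-smaller arguments (my own quick check, nothing more):

```python
import math

# (1 - cos(x)) / x tends to 0 as x -> 0, roughly like x/2.
for x in (1.0, 0.1, 0.001, 0.00001):
    print(x, (1.0 - math.cos(x)) / x)
```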

(Update 1/31/2021, 13h35: )

There is an underlying truth about Integral Equations in general which people who have studied Calculus 2 generally know, but I have no right just to assume that any reader of my blog has done so. There exist certain standard Integrals which behave in the reverse way from the standard Derivatives, just because ‘Integrals’ are ‘Antiderivatives’…

When one solves the Derivatives of certain trig functions repeatedly, one obtains the sequence:

sin(x) -> cos(x) -> -sin(x) -> -cos(x) -> sin(x)

Solving the Indefinite Integrals of the same trig functions yields the result:

sin(x) -> -cos(x) -> -sin(x) -> cos(x) -> sin(x)

Hence, the Indefinite Integral of sin(x) is in fact -cos(x), and:

( -(-cos(0)) = +1 )

(End of Update, 1/31/2021, 13h35.)

(Updated 2/04/2021, 17h10…)

(As of 1/30/2021: )

I achieved the desired result that both plots are similar, and I find that doing so supports my conjecture, even though it does not constitute Mathematical proof that will stand up to any rigour… The reader will need to enable JavaScript from my blog, as well as from ‘mathjax.org’, in order to be able to view the following work-sheet:

Notes:

*If the sample-rate for the continuous functions plotted above were changed to 16kHz, consistently with over-sampled telephone frequencies, then they should be effective down to 500Hz*. And, over-sampling is required for 90⁰ phase-shifting at the initial Nyquist Frequencies.

If this concept were indeed used to implement some sort of equalizer, and not just ‘a universal phase-shifter’, then there is an additional caveat. The *continuous* functions above are plotted such that their nominal gain will be 2. With discrete transforms, a question will inevitably come up as to whether the Type 1 or the Type 2 transforms should be used. The Type 2 transforms will apply a half-sample shift to the input, which corresponds to an offset of a quarter-wave, even though in this case the input was in the frequency domain and the output in the time-domain (resulting in an arbitrary convolution, as already stated).

One property that the Type 1 DCT has is non-zero values at both endpoints, which also assures that any signal can be reconstructed. The Type 2 DCT has the property that it starts with a non-zero value at the origin of the output, but that, because all the indices have odd products of quarter-wave frequencies, it will naturally tend to have zero as the value of the last (output) element.

This would be ideal for equalizers and filters, because it would mean that no windowing function needs to be applied, to avoid spikes in the output due to spikes in the input entering the window.

Well, with ‘Discrete Sine Transforms’, this set of properties is reversed. The Type 2 DST has the property of a value of zero at the origin, and non-zero values at the endpoints. Yet, presumably, the coefficients of both cosine and sine transforms, applied simultaneously, are supposed to define the same set of frequencies. What this means is that, if the Type 2 transforms are to be applied, then some sort of windowing function should also be applied *to the sine* transform.

(Update 2/04/2021, 17h10: )

After pondering this question more closely, I find that the type of windowing function best suited would be ‘a half-sine wave’, with multipliers of zero at the beginning and end of the entire interval of the convolution. This should best preserve the way equalizers and filters are supposed to behave.

(End of Update, 2/04/2021, 17h10.)

An idea which was once voiced was that, because a system of representation already exists in which the real part of a complex number states a cosine-wave and the imaginary part states a sine-wave, it should be possible just to apply the Discrete Fourier Transform to a set of ‘phasors’, and end up with a usable result. But one fact which can be overlooked in this way of thinking is that, outside this system of representation, the instantaneous voltage or current through a wire can only be a real number, at least as far as circuit theory goes.

What will happen, though, is that wherever the phasors have *positive* imaginary parts, the corresponding *real* parts of the Discrete Fourier Transform will be phase-*advanced* 90⁰. Therefore, if one just ignores the imaginary parts of the DFT, one should also obtain the results stated above. Further, if one additionally only states ‘~~phasors~~‘ with imaginary parts equal to zero, then one is back to computing a DCT.

(Update 1/31/2021, 23h25: )

One question which a reader could have would be concerning the way I proposed to normalize the (custom) Discrete Transform, as this snip repeats:

```
for k: 0 thru 159 do
for n: 0 thru 67 do block (
conv[160 + k]: conv[160 + k] + (sin((n*k*%pi)/160) / 160),
conv[160 - k]: conv[160 - k] - (sin((n*k*%pi)/160) / 160)
);
```

I clearly divided the individual products by 160, even though the array has 320 elements. The way I justified that was to observe that when two (synchronous) sine-waves are multiplied – or, more correctly, when a sine-wave is squared – the product averages to *half* the maximum amplitude (effectively, with a frequency twice that of the non-multiplied sine-waves). Thus, a single frequency-coefficient results in a sine-wave within the computed convolution, the peak amplitude of which is equal to the value of the coefficient. But then, when this wavelet is multiplied by a stream, because the convolution is being applied as an equalizer, half that amplitude naturally results and oscillates, at the desired frequency.
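The observation that a squared sine-wave averages to half its peak value can be verified in a couple of lines of Python (my own check, over one whole period):

```python
import math

n = 1000  # samples per period
avg = sum(math.sin(2.0 * math.pi * i / n) ** 2 for i in range(n)) / n
print(avg)  # 0.5 over a whole number of periods
```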

What this would lead me to do as a first approximation is to double the amplitudes generated by the transform, as part of its normalization. However, since the wavelet has two halves, each of which is the same transform, no doubling should be necessary. Given 160 coefficients, each product should indeed be divided by 160; that’s all.

It would be no different for a centre output coefficient, which occurs only once, but which is the *cosine* zero product with all the frequency coefficients. In that case, *it* would already contain the average of the frequency coefficients, with no concept of getting halved when applied as part of the wavelet.

Enjoy,

Dirk

]]>

I take the unusual approach of hosting this blog and site on a server that is running on my personal computer at home. I don’t recommend that everybody do it this way; this is only how I do it. That makes the availability of my blog and site no better than the reliability with which I can keep my PC running, as well as that of my Internet connection.

Unfortunately, I experienced a brief power failure this morning, between 8h45 and 9h00. As a result, this site was down until about 9h40. I apologize for any inconvenience to my readers.

BTW, there have been remarkably few failures in the recent 3 months or so.

Dirk

]]>

I own a Samsung Galaxy S9 smart-phone, and have discovered that, in its tethering settings, there is a new setting named “Auto Hotspot”. What this setting aims to do, if activated, is that on other Samsung devices, which normally only have WiFi, an additional access point should appear for them to connect to, when the user is roaming along with his phone. The following screen-shots show where this can be enabled *on the phone*…

I believe that this explains a fact which I’ve already commented on elsewhere, which is, that when I try to set up Google Instant Tethering, the negotiation between my ‘Asus Flip C213 Chromebook’ and this phone no longer adds Instant Tethering to the list of features which are enabled. My Samsung S9 phone will now only unlock the Chromebook. What I am guessing is that, because the feature I’m showing in this posting is a Samsung feature, with which Samsung wants to compete with the other companies, Samsung probably removed the offer of Instant Tethering from their phone.

Obviously, this is a feature which I will now only be able to use between my S9 phone and my Samsung Galaxy TAB S6 tablet.

The reader may ask what the advantages of this feature might be, over ‘regular WiFi tethering’, or ‘a WiFi hotspot’. The advantage could be that, even though having the phone constantly offer a WiFi hot-spot remains an option compatible with all clients, doing so could drain the battery more. Supposedly, if Samsung’s Auto Hotspot is being used, it can be kept enabled on the phone, yet not drain the battery *overly*, as long as client devices do not connect. The decision could then be made directly from the client device, whether to connect or not… This is similar to what Google’s system offers.

Also, the Samsung phones with Android 10 have as a feature, that their ‘regular hotspots’ will time out, say after 20 minutes of inactivity, again, to save battery drain. Yet, if the user is carrying a tablet with him that has been configured to connect to the mobile hotspot automatically, the phone which is serving out this hotspot will never detect inactivity.

Further, I’ve been able to confirm that, as long as I have Auto Hotspot turned on on my phone, it does indeed *not* show up as an available WiFi connection, on devices that are *not* joined to my Samsung account. This is as expected. But it also adds hope that, as long as I don’t connect to the phone’s Auto Hotspot from another device, the battery drain due to my leaving this feature enabled on my phone constantly may not be very high. I will comment by the end of this day (after having left my phone with its own WiFi off, meaning that my phone will be using its Mobile Data, but *not* connecting my Samsung TAB S6), on whether doing this seems to incur any unusually high amount of battery drain, on the phone…

(The view from my Galaxy TAB S6: )

(Update: )

What some readers may be asking themselves could be, ‘Why not just add an existing Samsung account to the Chromebook?’ After all, it would be playing within the rules, to add a Samsung account to any Android device. But alas, there is a purely technological reason, why trying to do so from a Chromebook either won’t work, or won’t work within the rules.

Under ChromeOS, Android is just a subsystem. Android doesn’t manage a Chromebook’s WiFi or anything else, other than the Android subsystem itself. In fact, ChromeOS even has an internal router, on which Android has a different IP address, from the IP address that the Chromebook has at any time. Therefore, while it might still be possible to add a Samsung account, to the Android subsystem within ChromeOS, doing so *should not really* give Internet access, to ChromeOS.

*The thought has occurred to me,* however, just to give my old ‘Google Pixel C’ (tablet) a Samsung account, hoping to use Samsung’s Auto Hotspot from my Pixel C, since that one is in fact an Android (8) device. But whether the old Pixel C is actually worth doing so for, is not definite in my mind. Additionally, I do *not* know what the minimum system requirements are, *on the client device* (for the Auto Hotspot to display, that is being served out by the phone). *I do know* that the phone which is serving out the Auto Hotspot, needs to have Android *10* (Q) running on it.

And to close that topic, when I go into the Accounts page within the Pixel C’s Settings, a ‘Samsung Account’ is just *not* a type of account which it offers to Add…

(Update 12/20/2020, 18h25: )

I got up before 6h00 this morning, and as I’m writing this, my battery level is down to 35%. This means that today, I spent slightly more juice than I would on any day all-on-mobile-data. But, I cannot really tell whether this is due to the Auto Hotspot feature remaining enabled, or whether it’s just due to my playing with the phone more.

I think I can keep the feature enabled.

Dirk

]]>

One of the possessions which I have is a USB MIDI Keyboard, which I’d like to be able to play, so that my computer’s software synthesizers actually generate the sound…

I know that this can be done because I’ve done it before. But in the past, when I set this up, I was either using an ‘ALSA’ MIDI input, belonging to an ‘ALSA’ or ‘PulseAudio’ application such as “Linux Multimedia Studio”, or I was using ‘QSynth’, which is a graphical front-end to ‘fluidsynth’, but in such a way that QSynth was listening for *ALSA* MIDI, and outputting *JACK* audio. This is actually a very common occurrence. I can switch between using the ‘PulseAudio’ and using the ‘JACK’ sound daemon, through a carefully set-up configuration of ‘QJackCtl’, which suspends PulseAudio when I activate JACK, and which also resumes PulseAudio, when I shut down JACK again.
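For readers who want to reproduce that part of the configuration, the relevant QJackCtl fields can be given commands similar to the following. This is only a sketch: the ‘pacmd’ utility belongs to classic PulseAudio, and the exact commands could differ between systems:

```shell
# 'Execute script on Startup' (i.e., before JACK is launched):
# ask PulseAudio to suspend its sound devices, so that JACK can
# claim the hardware:
pacmd suspend true

# 'Execute script after Shutdown': let PulseAudio resume:
pacmd suspend false
```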

But there is a basic obstacle, as soon as I want to play my MIDI Keyboard through ‘Ardour’. Ardour v6 *can* be run with the PulseAudio sound system, *but only for playback*, or, Ardour can be run with its JACK sound back-end, after JACK has been launched. Ardour cannot be run with its ALSA back-end, when PulseAudio is running.

The default behaviour of the Debian kernel modules, when I plug in a USB MIDI Keyboard, is, to make that MIDI connection visible within my system as an *ALSA* MIDI source, even though some applications, such as Ardour, will insist on only taking input from *JACK* MIDI sources, when in fact running in JACK mode. And so, this problem needed to be solved this morning…

The solution which I found was, to feed the Keyboard, which happens to be an “Oxygen 61”, to the ‘MIDI *Through* Port’ that’s visible in the ALSA Tab of QJackCtl’s Connections window. When MIDI sequences are fed there, they are also output from the System *JACK* MIDI sources, visible in the MIDI Tab of QJackCtl’s Connections window:
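Alternatively, the same ALSA-level connection can be made from a terminal, using the ‘aconnect’ utility from the ‘alsa-utils’ package. The client names below are only examples; the reader would need to substitute the names that ‘aconnect -l’ reports on his or her own system:

```shell
# List the ALSA MIDI clients and ports which currently exist:
aconnect -l

# Feed the keyboard's ALSA MIDI output to the 'Midi Through' port
# (client names are examples - substitute your own):
aconnect 'Oxygen 61':0 'Midi Through':0
```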

I should also note that, in many cases, the JACK clients can ask the JACK sound daemon to be connected to various inputs and outputs from within, without absolutely requiring that the QJackCtl Connections window be used. This explains why the audio output of Ardour was already routed properly to my PC’s speakers. But I found that I could only keep track of the MIDI connection, through QJackCtl’s Connections window. As the screen-shots above show, the second step is, to feed one of the System Sources to the appropriate Ardour MIDI input, in the MIDI Tab of QJackCtl’s Connections window.

The result was, that the synthesizer which I have available as an Ardour plug-in, played beautifully, in response to my pressing keys on the actual MIDI Keyboard, and no longer just, when I clicked on the graphical keyboard within the Ardour application window:

This on-screen keyboard can be made visible, by double-Alt-Clicking on the icon of the instrument, with Ardour in its Mixer view, and then expanding the resulting window’s MIDI Keyboard fly-out. Yet, the on-screen keyboard was only useful for setup and testing purposes.

Tada!

(Updated 12/07/2020, 17h20… )

(As of 12/07/2020, 8h20: )

One fact which I should also mention is that there exists the package ‘`a2jmidid`’, which will solve the same sort of problem. When that package is installed – according to its package description – it causes a daemon to run, which will react to every ALSA MIDI Input or Output port, by connecting to it, and creating a corresponding JACK MIDI Input or Output port.

That package was mainly meant to be used by people, who will entirely make MIDI connections from within their applications, and not using the ‘QJackCtl’ Connections window. And another big drawback of having that package installed will be, that it will automatically tie up all existing ALSA MIDI clients, to make them available to JACK… Therefore, that package can be counterproductive to install, if the user wants to switch back and forth, between having JACK running and not having it running.

(Update 12/07/2020, 17h20: )

I suppose that if a user is determined to be switching back and forth between running ‘PulseAudio’ and ‘JACK’, and yet wants the package ‘`a2jmidid`’ to make all the ALSA-MIDI ports available as (forwarded) JACK-MIDI ports, then he or she can extrapolate on what I myself have done, by adding more commands to ‘QJackCtl’. In the ‘Options’ tab, one can check the box named ‘Execute script **after** Startup’, and then in the field next to that box, type:

```
a2jmidid -e &
```

(Note: The full path name of the executable may not be entered here.) Correspondingly, the user would also check the box named ‘Execute script **on** Shutdown’, and, to the right of that box, type:

```
killall -w a2jmidid
```

However, I have never tried this, as I don’t even have ‘`a2jmidid`’ installed.

Dirk

]]>

There exists a maxim in the publishing world, which is, ‘Publish or Perish.’ I guess it’s a good thing I’m not a publisher, then. In any case, it’s been a while since I posted anything, so I decided to share with the community some wisdom that existed in the early days of computing, and when I say that, *it really means*, ‘back in the early days’. This is something that might have been used on mini-computers, or, on the computers in certain special applications, before PCs as such existed.

A standard capability which should exist, is to compute a decently accurate sine function. And one of the lamest reasons could be, the fact that audio files have been encoded with an amplitude, but that a decoder, or speech-synthesis chip, might only need to be able to play back a sine-wave that has that encoded peak amplitude. However, it’s not always a given that any ‘CPU’ (“Central Processing Unit”) actually possesses an ‘FPU’ (a “Floating-Point Unit”). In such situations, programmers back-when devised a trick.

It’s already known that a table of pre-computed sine values could be made part of a program, numbering maybe 256 entries, but that, if all a program did was to look up sine values from such a table once, ridiculously poor accuracies would result. But it was also known that, as long as the interval of 1 sine-wave was from (zero) to (two-times-pi), the derivative of the sine function was the cosine function. So, the trick, really, was to make not one lookup into the table, but *at least* two: one to fetch an approximate sine value, and the next, to fetch an approximate cosine value, the latter of which is also the derivative of the sine function at the same point. What could be done was, that the fractional part of the parameter, between table entries, could be multiplied by this derivative, and the result added to the sine value, thus yielding a closer approximation to the real sine value. (:3)

But, a question which readers might have next could be, ‘Why does Dirk not just look up two adjacent sine-values, subtract to get the delta, and then, multiply the fractional part by this delta?’ And the answer is, ‘Because one can not only apply the first derivative, but also *the second derivative*, by squaring the fractional part and halving it (:1), before multiplying the result from that, by the negative of the sine function!’ One obtains a section of a parabola, and results from a 256-element table, that are close to 16 bits accurate!

The source code can be found in my binaries folder, which is:

https://dirkmittler.homeip.net/binaries/

And, in that folder, the compressed files of interest would be, ‘IntSine.tar.gz’ and ‘IntSine.zip’. They are written in C. The variance that I get, from established values, in (16-bit) integer units squared, is “~~0.811416~~” “~~0.580644~~” (:2). Any variance lower than (1.0) should be considered ‘good’, since (±1) is actually the smallest-possible, per-result error.

(Updated 12/04/2020, 11h50… )

(As of 12/01/2020, 19h10: )

Hint: In order to link both source files, the flag ‘-lm’ needs to be given to the compiler, when using Linux. One source file legitimately uses the true, double-precision sine function, in order to build the table of integer constants, while the other only uses it as a comparison, to score the accuracy of the integer arithmetic, which itself does not require the Math library.

Further, when compiling on a platform as old as Debian 8 / Jessie, the flag ‘-std=gnu11’ must be used.
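In other words, a compile command could look as follows, where the source-file names are only hypothetical stand-ins – the reader should substitute the actual names from the archive:

```shell
# '-lm' links in the Math library; '-std=gnu11' is only needed on
# older platforms such as Debian 8:
gcc -std=gnu11 -o IntSine main.c sine_table.c -lm
```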

(Update 12/02/2020, 5h55: )

Given that one thing which I did was, to multiply 2 times Pi by 65536, in order to arrive at an integer constant, with which the fractional part between two table-entries was to be multiplied, this could also raise questions from the reader. Because my format really is, to take the 8 LSBs of a 16-bit integer value, as the fractional position between two table-entries, in such a way that the entire 16-bit modulus defines 1 cycle of the sine wave, it follows that the first derivative is actually 2 times Pi times the cosine function, etc…

I can know that 411774 is a 19-bit integer in two ways:

- I know that 2 times Pi does not exceed 7, so that only 3 extra bits are being added, to what would otherwise be a 16-bit fraction, And
- While 2 to the power of 20 can be estimated as being approximately 1 million, what this really means is, that most 20-bit values don’t actually reach 1 million, only spanning approximately from 500,000 to 1,000,000. Since my integer falls within the range from approximately 250,000 to 500,000, it’s most probably a 19-bit number.

Further, while I like the format of having a 16-LSB fractional part to certain numbers, I found that I could reduce round-off errors slightly, if I made the fractional part of certain other numbers 17 bits wide. I apologize if this makes my code less readable. But what this means is that, theoretically, if I was to square such a value, it would have a 34-bit fractional part, even though it’s only a signed, 32-bit integer! I suppose the fact is fortunate then, that the maximum starting value (in the corresponding part of the program) was only a positive 12-bit number. But this means that I’d need to right-shift the value by 18 bits, to arrive at a representation, which again, has a 16-bit fractional field.

I (*was*) right-shifting the value by 19 bits (*in an earlier version of the code*), because I additionally needed to halve it, not, because to do so directly stems from the format of the number.

(Update 12/02/2020, 6h15: )

An additional question which could be asked would be, whether there would be any benefit, to also computing the third derivative, cubing the fractional component, and dividing by ~~three~~ six.

But even before dividing by three, an observation speaks against doing so. If the fractional component by itself, within a 16-bit fraction, only had 11 bits max, and if the fractional component squared, again within a 16-bit fraction, only had 6 bits max, then their product will have 17 bits max. After that has been divided by three, it will not have more than 16 bits. What this means is, that after being right-shifted one more time, to align it with a 16-bit fractional field, the result should be *zero*. Doing so cannot offer any improvements, within a 16-bit fractional format.

(Update 12/02/2020, 8h20: )

**1:)**

A question which some readers may not know the answer to would be of the form: ‘The second derivative of a function can be multiplied, by a small difference in the parameter ~~squared~~; why does it need to be multiplied by that small difference, *squared and halved*?’ And the answer is as follows:

(x) Times a constant is assumed to give the (first) derivative of a certain value, where the constant here is the second derivative, treated as locally unchanging. And the reason *this* is so, is because the integral of a constant over (x) is that constant times (x). What this means is, that the value itself needs to be that constant, times *the integral of* (x). Well, the integral of (x) is (½ x^{2}), hence the (½) term. (x) Was also the fractional part between two table-entries.
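Another way to see the same fact is, that the whole trick is really just the Taylor expansion of the sine function, about the table-entry, truncated after the quadratic term:

```latex
\sin(x + d) \;\approx\; \sin(x) + d\,\cos(x) - \frac{d^{2}}{2}\,\sin(x)
```

Where (d) is the small difference in the parameter, and the coefficient of the quadratic term is the second derivative, (-sin(x)), times (½).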

(Edit 12/03/2020, 21h05…

If the cubic term was also to be expanded, from the third derivative, it would follow as (1/6 x^{3})…

(…End Of Edit, 12/03/2020, 21h05. )

(Update 12/02/2020, 14h10: )

**2:)**

The way in which I computed the variances, assumed first of all, that integers were to be assessed, in which each unit corresponded to:

1 / 65536

Where, 65536 is also 2^{16}. The exact calculation used was:

(mean of x^{2}) – (mean of x)^{2}

The problem when reporting variances in this way, especially since the mean is supposed to be zero, is that the information is incomplete, unless the mean was also reported. And so, oddly, the mean equals (-~~0.111511~~).

I don’t know the explanation for this, and chalk it up to, ‘an odd conspiracy of round-off errors’.

This was calculated for all valid input-values, that are 16-bit integers, and where the modulus of 65536 completes 1 full cycle.

(Update 12/03/2020, 12h55: )

There’s another observation which can be made about my integer arithmetic, which has to do with the rule in Calculus 1 that is also known as the “chain rule”. What that rule implies, in this case, is that if the parameter of a sine function is (x), then according to pure Math, that sine function completes 1 cycle over the domain from [0 .. 2*Pi). If the parameter is to be multiplied by (two-times-Pi) – to complete 1 cycle over the interval [0 .. 1.0) – then the first derivative of the sine function will be the cosine function of (x), *times two-times-Pi*. Additionally, the derivative of the cosine function, given the same assumption, will be the negative of the sine function, *times two-times-Pi*. Thus, to compute the second derivative, I need to end up multiplying (x) by two-times-Pi *twice*.

What I did instead was, to multiply (x) by two-times-Pi *once*, both, before using it to expand the first derivative, and *also, before squaring it*, to expand the second derivative. What this accomplishes with fewer CPU operations is, to multiply by two-times-Pi *twice*, when expanding the second derivative.

(Update 12/03/2020, 19h05: )

By tweaking the code a little further, I was able to bring the variance down to (0.411991), and the DC offset down to (-0.077118).

There exists another platform-specific implementation detail, which my programs depend*ed* on, and which I received a surprise about, several months ago, when I was writing some simple ‘Qt5’ / GUI-building exercises. In C++, under the Debian / GNU Architectures, the following two function-calls differ in two ways and not one:

```
int(-1.5);
floor(-1.5);
```

One important difference is, that the ‘int()’ call is really a type-conversion to an integer, which C programs could never invoke in the above (C++) form. The other, more important difference is the fact that these two function-calls return *numerically different* results. ‘floor(-1.5)’ does exactly what I expected it to do, which is, to return (-2.0). But to my surprise back then, ‘int(-1.5)’ actually returned (-1) ! In other words, when assigning a floating-point number to type ‘int’, the absolute value of the number is rounded down ‘towards zero’, and then, if the value was negative, the negative sign is put back in front of it.

The way I corrected for this in my Qt5 exercise was, to avoid casting directly from a floating-point number to an integer, instead, using the ‘floor()’ function whenever possible.

But a previous version of my two programs depended on the cast. Correct programming practice is, first to compute the ‘floor()’ function of (the floating-point number + 0.5), and then to assign the result to an integer, the second of which operations should just ‘Extract the integer part, when the fractional part is zero.’

The present version of my two programs uses the (correct) ‘floor()’ function, as described.

(Update 12/04/2020, 11h50: )

**3:)**

On the subject of Computing History, when I was reading about the fact that one of my predecessors was using such a trick, his trick differed from mine in two key ways:

- He was only shooting for 8 bits of precision, not 16, And
- His lookup table only had 128 entries, not 256.

And both those facts had a common reason: the fact that although programmable chips existed as early as the *late* 1970s, and on into the 1980s, *those* programmable chips had far less *ROM* than even embedded CPUs have today. They simply did not have enough *ROM*, to store a constant array of 256, 16-bit integers, let alone 32-bit integers, as my suggested algorithm does.

I vaguely seem to recall, that programmable chips with approximately 1KB of ROM *in total* existed at that time, but their programmers also had to use that, for the actual programming, including, for everything else that a programmable chip was expected to do, as part of an appliance.

Now, my exercise can simply be modified, so that the peak amplitude in the sine-value table reaches ±2^{14}, let’s say, just so that the statement can be made, that all its constants be 16-bit constants. But if one did that, one would also no longer achieve ~16-bit accuracy in the results, as the output amplitude would also just scale down.

Dirk

]]>