Power Failure Today, Downtime

I take the unusual approach of hosting this site and this blog on a PC at home. I don’t say that other people should do this; it’s only what I do.

This implies that the visibility of this blog is only as reliable as the operation of my PC here at home.

Today we had a power failure, from approximately 16h15 until 17h35. As a result, this blog was also offline until about 18h00.

I apologize for any inconvenience to my readers.

Dirk

Comparing two Bose headphones, both of which use active technology.

In this posting I’m going to do something I rarely do, which is something like a product review. I have purchased the following two sets of headphones within the past few months:

  1. Bose QuietComfort 25 Noise Cancelling
  2. Bose AE2 SoundLink

The first set of headphones has an analog 3.5mm stereo input cable with a dual-purpose mic / headphone jack, and comes in versions compatible with either Samsung or Apple phones, while the second uses Bluetooth to connect to either brand of phone. I should add that the phone I use with both sets of headphones is a Samsung Galaxy S9, which supports Bluetooth 5.

The first set of headphones requires a single AAA alkaline battery to work properly. This battery not only fuels the active noise cancellation, but also an equalizer chip that has become standard in many similar, middle-price-range headphones. The second has a built-in, rechargeable lithium-ion battery, which is rumoured to be good for 10-15 hours of play-time, a claim I have not yet tested. Like the first, the second has an equalizer chip, but no active noise cancellation.

I think that right off the bat, I should point out that I don’t approve of this use of an equalizer chip, effectively to compensate for the sound oddities of the internal voice-coils. More properly, the voice-coils should be designed to deliver the best frequency response possible by themselves. But the reality in the year 2019 is that many headphones come with an internal equalizer chip instead.

What I’ve found is that the first set of headphones, while having excellent noise cancellation, has two main drawbacks:

  • The jack into which the analog cable fits is poorly designed, and can cause bad connections,
  • A single AAA battery can only deliver a voltage of 1.5V, and if the actual voltage is any lower (either because a Ni-MH battery was used in place of an alkaline cell, or because the battery is just plain low), the low-voltage equalizer chip will no longer work fully, resulting in sound that reveals the deficiencies in the voice-coils.

The second set of headphones overcomes both these limitations, and I fully expect that its equalizer chip will behave uniformly, in a way that my ears will be able to adjust to in the long term, even when I use the headphones for hours or days. Also, I’d tend to say that the way the equalizer arrangement worked in the first set of headphones was not complete in fulfilling its job, even when the battery was fully charged. Therefore, if I only had the money to buy one of the two, I’d choose the second set, which I just received today.

But having said that, I should also add that I have two 12,000-BTU air conditioners running in the summer months, which really require the noise cancellation of the first set of headphones, which the second set does not provide.

Also, I have an observation about why the EQ chip in the second set of headphones may work better than the similarly purposed chip in the first set…

(Updated 9/28/2019, 19h05 … )


Certain wrong places to put recursion into a program?

One of the subjects I was pursuing recently was not just why, before March of 2019, I had gotten some error messages when trying to compile a certain program, but also why, or whether, other people using other types of computers might continue to obtain error messages long after I was no longer obtaining them.

And a basic concept I had referenced was that C++ compilers, when not given an exact match between the data types in a function prototype and the data types of the arguments passed to that function, will first try to perform a type conversion referred to as a “promotion”, and if that fails, will attempt what’s referred to as a “standard conversion”, the latter of which can transform a ‘higher’ type of built-in number to a ‘lower’ type, etc. But there was a basic question I had not provided any sort of answer to, nor even acknowledged explicitly could exist: What happens when more than one type conversion has the ability to go from the argument type to the parameter type of a function prototype?
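As a minimal sketch of my own (the two overloads of the function ‘f()’ below are made up for illustration, not taken from any real code-base), a ‘float’ argument could legally convert to either parameter type, and the compiler prefers the promotion:

#include <iostream>

void f(double x)      { std::cout << "f(double)" << std::endl; }
void f(long double x) { std::cout << "f(long double)" << std::endl; }

int main() {
    float y = 1.0f;
    f(y);    // 'float' -> 'double' is a promotion, while
             // 'float' -> 'long double' is a standard conversion,
             // so this call prints "f(double)".
    return 0;
}

And if ‘f(double)’ were removed, the same call would silently select ‘f(long double)’ via the standard conversion instead.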

Theoretically, it would be possible to design a compiler such that, every time a type conversion is being sought, both kinds are tried, in a pattern referred to as ‘recursion’, but which can also just be referred to as an ‘exhaustive search’. There’s every possibility that this is not how the compiler is programmed in fact. What can happen instead is that the compiler is willing to perform a promotion in the service of a potential standard conversion, but that it will go no further.

And in that context, as well as in the mentioned context of recursive template definitions, casting a derived class to one of its parent classes counts as a promotion.

There’s every possibility that, if recursion were placed in the code of the compiler, so as to re-attempt one standard conversion prior to another standard conversion that will not fit yet, the result could become some sort of endless loop, while the real behaviour of a compiler needs to be more stable. And this would be a valid reason for certain standard template declarations to first try to instantiate the templates using a ‘float’, then a ‘double’, and then a ‘long double’, in the form of specializations. The programmers will assume that a standard conversion needs to receive a value that can be the result of a promotion. And in that case, the first template specialization may not work while a later one might, just because to convert, say, a ‘double_t’ to a ‘long double’ will be a promotion, while to convert a ‘double_t’ to a ‘float’ would not be, but would in fact be a standard conversion in the service of another standard conversion (which should not happen).
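Again as a sketch of my own, with plain overloads of a hypothetical ‘g()’ standing in for the template specializations, providing all three floating-point widths means that a ‘double_t’ argument finds an exact match, so that the question of which conversion to attempt never even arises:

#include <cmath>
#include <iostream>

void g(float)       { std::cout << "g(float)" << std::endl; }
void g(double)      { std::cout << "g(double)" << std::endl; }
void g(long double) { std::cout << "g(long double)" << std::endl; }

int main() {
    std::double_t x = 1.0;  // 'double_t' is usually an alias for 'double'
                            // on x86-64, but can alias 'long double' on
                            // older, x87-based builds.
    g(x);                   // Whichever overload matches exactly is chosen,
                            // without any conversion being attempted.
    return 0;
}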

Dirk

One weakness of my algorithm, regarding polynomials.

Several months ago, I wrote a C++ program (in a C coding style) that computes numerical approximations of the roots of an arbitrary polynomial. Even though I’ve changed the code slightly since March, mainly to improve the chances that it will compile on other people’s computers, the basic algorithm hasn’t changed. Yet, because I wrote the algorithm, I know that it has certain weaknesses, and I’m about to divulge one of them…

The main search algorithm isn’t able to determine whether (x) is sufficiently close to a root. It’s really only able to determine whether (y) is sufficiently close to zero. And so an idea that an implementer could have would be to name the maximum allowable error in (x) ‘epsilon’, to compute the derivative of the polynomial for the given value of (x), and then to multiply ‘epsilon’ by this derivative, perhaps naming the derived, allowable error in (y) ‘epsilond’ (sketched after the list below). But I consciously chose not to go that route because:

  • Computing the real derivative of the polynomial each time, though certainly doable, would have made my code more complex than I was willing to tolerate,
  • Even had I done so, certain polynomials have a uselessly low derivative near one of their roots. This can happen because of a doubled root, or simply because two roots are too close together. And ‘epsilon’ is already as small as is practical, given the number of bits the variables have to work with, namely a 52-bit significand.
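For clarity, here is a minimal sketch of that rejected idea; the Horner-style evaluation and the function names are my illustration here, not my program’s actual code:

#include <cmath>

/* Evaluate the polynomial and its true derivative at x, using
   Horner's rule.  coeffs[0] is the leading coefficient. */
static void eval_poly(const double *coeffs, int degree, double x,
                      double *y, double *dy)
{
    double p = coeffs[0];
    double dp = 0.0;
    for (int i = 1; i <= degree; i++) {
        dp = dp * x + p;           /* accumulate the derivative first */
        p  = p  * x + coeffs[i];
    }
    *y = p;
    *dy = dp;
}

/* The rejected test: scale the allowable error in (x), 'epsilon',
   by |p'(x)|, to obtain the allowable error in (y), 'epsilond'. */
static int close_enough(const double *coeffs, int degree, double x,
                        double epsilon)
{
    double y, dy;
    eval_poly(coeffs, degree, x, &y, &dy);
    double epsilond = epsilon * fabs(dy);
    return fabs(y) <= epsilond;
}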

So my program instead computes a ‘notional derivative’ of the polynomial near (x), which is really just the derivative of the highest-exponent term by itself. This little detail could actually make it difficult for some observers to understand my code. At the same time, my program limits the lower extent of this derivative to an arbitrary, modest fraction, just so that (x) won’t need to be impossibly precise, just to satisfy the computation of how precise (y) would need to be.
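Again, what follows is an illustration only; the clamping fraction of (0.05) is a placeholder, not the constant my program actually uses:

#include <cmath>

/* 'Notional derivative': differentiate only the leading term,
   a_n * x^n, giving n * a_n * x^(n-1), and clamp its magnitude
   from below, so that the tolerance on (y) never becomes
   impossibly tight. */
static double notional_derivative(double leading_coeff, int degree,
                                  double x)
{
    double d = degree * leading_coeff * pow(x, degree - 1);
    const double floor_frac = 0.05;    /* placeholder lower limit */
    if (fabs(d) < floor_frac)
        d = (d < 0.0) ? -floor_frac : floor_frac;
    return d;
}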

But then a predictable problem becomes that the real derivative of a given polynomial might be much smaller than just the derivative of this highest-exponent term. And if it is, then satisfying … for (y) will no longer satisfy … for (x).

The following output illustrates an example of this behaviour:

"Maxima" Preparation:

(%i1) expand((x-100)*(x-101)*(x-102));
(%o1) x^3-303*x^2+30602*x-1030200

(%i2) expand((x-100)*(x-110)*(x-120));
(%o2) x^3-330*x^2+36200*x-1320000

(%i3) expand((x-100)*(x-101));
(%o3) x^2-201*x+10100


Approximations from Program:

dirk@Phosphene:~$ poly_solve 1 -303 30602 -1030200

101.00000000013  +  0I
99.999999999938  +  0I
101.99999999993  +  0I


dirk@Phosphene:~$ poly_solve 1 -330 36200 -1320000

100  +  0I
110  +  0I
120  +  0I


dirk@Phosphene:~$ poly_solve 1 -201 10100

100  +  0I
101  +  0I


dirk@Phosphene:~$

(Updated 9/22/2019, 5h40 … )
