I have written numerous postings about sound-compression, in which I acknowledged that certain forms of it are based on time-domain signal-processing, while several important sound-compression techniques are based in the frequency-domain. Given so many postings from me, a reader might ask, ‘Which posting best summarizes the blogger’s understanding of the concept?’

And while many people go directly to a posting which, as I explicitly stated, describes something that will not work – but which displays that concept as a point-of-view, against which working concepts can be compared – instead of recommending that posting again, I would recommend this one.

Dirk

]]>

I have run into people who believe that a signal cannot be phase-advanced in real-time, only phase-delayed. As far as I can tell, this idea stems from the misconception that in order for a signal to be given a phase-advance, some form of *prediction* would be needed. The fact that this is not true can best be visualized if we take an analog signal, and derive another signal from it, which would be the short-term derivative of the first signal. ( :1 ) Because the derivative is most-positive at the points in its waveform where the input has its most-positive slope, and zero where the input is at its peak, we would already have derived a sine-wave, for example, that is phase-advanced 90⁰ with respect to an input sine-wave.

But the main reason this is not done, is the fact that a short-term derivative also acts as a high-pass filter, whose output amplitude doubles for every octave the frequencies increase.

What can be done in the analog domain however, is that a signal can be phase-*delayed* 90⁰, and the frequency-response kept uniform, and then *simply inverted*. The phase-diagram of each of the signal’s frequency-components will then show, the entire signal has been phase-*advanced* 90⁰.

(Updated 11/23/2017 : )

(As It Stood 11/22/2017 : )

1: ) The fact needs to be acknowledged that, according to Calculus, which is pure Math, there is only the derivative of a function with respect to one or more parameters, and the predicate ‘short-term’ has no meaning.

But one problem with the purely Mathematical definition of the derivative is that it defines the slope of a continuous function, at an infinitesimal distance from any point of that function. This is a somewhat abstract concept which people Study, but it implies that the concept is of limited use in signal-processing, because here, any given electronic signal will have some amount of high-frequency noise – i.e. hiss. And because high-frequency noise perturbs the signal at points progressively closer to any point in time, according to pure Calculus, the slope of the signal with respect to time could theoretically be any value.

And so if one designs a circuit to differentiate an input-voltage, it’s important to limit the highest frequencies which its active amplifier will amplify, since in addition, we know that the overall gain keeps increasing, with frequency.

Failing to set some constraints, will lead to an unstable circuit.

Now, if we wanted to phase-delay an arbitrary input signal by integrating it, there is the converse problem, of the signal-gain doubling, every time the frequencies *go down* one octave. But there is less of a problem, with the stability of the circuit. So another way one could look at the two sine-waves I included above, could be, that the red sine-wave follows the integral of the blue sine-wave, only in the opposite direction in which the blue sine-wave is pointing at any one time.

For most signal-processing, what we’d want is uniform frequency-response, over a given, finite frequency-band, and yet to phase-shift all the frequency components. And indeed, there existed analog circuits for some time, which accomplish this. Logically, they have a *lower* frequency-limit (instead of ever achieving infinite gain).

It was my impression that the corresponding implementation in the digital domain would also prefer to exploit the superior frequency-response that digital signal formats offer. Because of this, DAW software – i.e. Digital Audio Workstations – offers a wide variety of effects, which require that many overlapping Fourier Transforms be computed of the original signal, then apply fancier modifications to each Fourier Transform – thereby manipulating the frequency-components in the desired, arbitrary way – and then invert the transforms, to reproduce the modified time-domain waveforms.

The highest-quality phase-shifting can also be accomplished in this way, but not in real-time.

2: ) In the diagram I linked to above, which involves a single transistor as its active stage, there is a potentiometer. When this potentiometer is at one extreme, the transistor merely acts as a voltage-follower, thus achieving zero phase-shift.

As is typical for such stages, when the potentiometer is at its opposite extreme, the stage offers a maximum phase-delay of 90⁰. But at that setting, the circuit problematically just acts as an integrator again, so that the gain will halve, for every octave by which the frequencies increase.

If we needed roughly uniform frequency response over a certain band of frequencies, yet a 90⁰ phase-delay, the way to accomplish that with the old-style analog methods, would be to chain two stages together, the diagram of each being as shown in the link, and to set each stage with its potentiometer mid-way, for a 45⁰ phase-delay each time.

And this need, to extract more than one order of integration from any point of the input-waveform, in order to achieve approximately-uniform gain, *cannot be bypassed*. The same would happen if we chose to compute our phase-advance, using a differentiating approach – i.e., we’d eventually need to base the output, on the 1st-order, the 2nd-order, the 3rd-order derivative, etc..

3: ) What follows is the conclusion that, if our goal was to compute a 90⁰ phase-delay in real-time, *as cheaply as possible*, while accepting that doing so brings certain quality-limitations in the result, we could easily derive an algorithm which does so.

The algorithm would chain two, digital, 1st-order low-pass filters, that have the same corner-frequency, but in such a way, that the output-amplitude of each is 1/4 the input-amplitude, at one frequency in the falloff-curve of each, at which we’d want the circuit to be most-accurate.

Then, we’d tap the overall input, as well as output 1 and output 2, and we’d compute a weighted summation of the 3 signals, with the intent that at the ideal frequency, output 2 will be ~180⁰ out-of-phase with the input, and weighted such that their non-phase-shifted contribution to the final output counters that of output 1 and cancels it. At that frequency, output 1 will remain to define the final result, with its 90⁰ phase-delayed component… Since the phase-shift of each stage falls short of 90⁰, this can be compensated for, by emphasizing output 2 more than the input.

If we did that, then the plan would benefit from having used low-pass filters, instead of actual integrators, in that the phase-shift will eventually stop happening, but in that the output-gain will never become infinite, as a result of low input-frequencies.

When I do the Math, I get:

```
c_in  = -( (1 / sqrt(5)) - (3 / 20) )
c_in ~= -0.2972
c_l1  = 4
c_l2  = 4
α_90⁰  = ( (2 / sqrt(5)) + (1 / 5) )
α_90⁰ ~= 1.0944
```

One of my assumptions is, that the *digital*, low-pass filter-algorithm will have |α|=1/2 at *its* ‘corner-frequency’, i.e. that:

```
N = h / 2
F_0 <= N
F_1/2 = F_0 / 2
ω = 2 sin( π F_1/2 / h )
k = 1 / (ω + 1)
l_n = k l_(n-1) + (1 - k) in_n
```

This behavior is counter to that of an *analog filter*, which some sources even *define* as having a ‘corner-frequency’, such that |α| = sqrt(1/2) , which should come at half the previously-defined frequency for either type of filter.

(Edit 11/23/2017 : )

Because each filter-output has two components:

- An in-phase component that might otherwise be referred to as the ‘real component’ of a phasor.
- A 90⁰ -shifted component that might otherwise be referred to as the ‘imaginary component’ of a phasor.

It follows that we’d want two results – one per target frequency – each equal to a phasor with two components: always 0 for the in-phase part, and always 1 for the out-of-phase part. Thus, the system as I described it above was overdefined, because it was attempting to create 4 real-numbered results from 3 known inputs.

This exercise can be improved and made systematic, if our hypothetical filter-series had 3 filters, plus the original input, to work from, to produce 4 predetermined outputs, and to achieve exact, 90⁰ -delayed output at frequencies for which the absolute filter-gain would have been (1/2) as well as (1/4). If this is done, then solving this problem becomes an exercise in Linear Algebra.

According to what I wrote above, a digital filter differs from an analog filter, in that its corner-point has a gain of 1/2. If this is true, then the following work-sheet describes what the optimum multiples for the 4 taps should be:

However, should I have been in error, and should the digital filters also exhibit a 45⁰ phase-shift, where they have a gain of sqrt(1/2), then this work-sheet describes the correct multipliers:

One result to be wary of though, would be the excessively high multiplier for the output of the 3rd low-pass filter. The reader should keep in mind that in the event of a D.C. signal, all these multipliers would just add, and we’d get an output approximately 11 times as high as the input D.C. voltage.

One fact which I had written about quite some time ago, is that there exists a type of Statistical Analysis called Non-Linear Regression Analysis, of which Polynomial Regression Analysis is a special case. The big capability of this technique is that a number of results can be known in advance which exceeds the number of predictors, possibly by a large factor. Yet, a fixed set of functions of the predictors can be found, which in turn can be named the matrix (X), from which the result-vector (Y) supposedly emerged; and an Algebraic trick exists which will find a vector of multipliers (A), such that, when multiplied by the functions of *unknown* predictors, they will reproduce values of (Y) as close as possible to the known values of (Y).

To remind myself of what the Algebra was that does this, I keep the following sheet on-hand:

Based on this sheet, I have determined that the best-possible way to compute the multipliers, for the input, plus the outputs of 2 low-pass filters, so that they best approximate phase-shifts at 2 frequencies, resulting in 4 real numbers, can also be, by using Non-Linear Regression Analysis. And the results of my labor are:

Work-Sheet, Overdetermined – 1

Work-Sheet, Overdetermined – 2

Where again, the results change, depending on what the expectations of the actual filter-algorithms are. I know that if analog filters are used, in each case, ‘Work-Sheet 2’ would be the correct one.

This sure beats trying to guess at the Math in my head.

Dirk

]]>

In the bottom left-hand corner of each of my postings, the reader will see a little icon that has a printer-symbol on it, and the word “Print”. This icon can be used either to print the posting in question, or to save it to a PDF File, on the reader’s computer. In fact, the reader can even delete specific paragraphs from his local copy of my posting – since plausibly, the reader might find some of my postings too long to be printed in their entirety.

Some time ago, I had encountered a situation where not code belonging to this plug-in, but rather the URL which hosts the service, was showing me a warning in my browser, that ‘unknown URLs’ were trying to run scripts.

My own Web-browser has a script-blocker, which will show me scripts that a page is trying to run, but which I did not authorize.

Certain features which I use on my blog, are actually hosted by the Web-site of 3rd parties, whose scripts may run, just because my page includes their widget.

The first time I noticed this I went into an alarm-mode, and removed the button from my blog quickly, thinking that maybe it was malware. But some time after that, I installed an additional extension to my blogging engine, called “WordFence”. This is an extension that can not only scan the (PHP-) code present on my own server for viruses and other malware, but that can also just scan whatever HTML my blogging engine outputs, for the possible presence of URLs to sites that are black-listed, regardless of how those URLs ended up being generated by my blogging engine.

Once I had WordFence installed, I decided that a more-Scientific way to test the PrintFriendly plug-in, would be to reactivate it, while WordFence is scanning my site. If any of the URLs produced by this plug-in were malicious, surely WordFence would catch this.

As it stands, the PrintFriendly button again displays URLs which belong to parties unknown to me. But as it stands, none of those URLs seem to suggest the presence of malware. I suppose, that the hosts of PrintFriendly rely on some of those scripts, to generate income? Since I’m not required to pay, to use their button.

Dirk

]]>

In a previous posting, I explained that I am able to write C++ programs that test some aspect of how the CPU performs simplistic computations, even though C++ is a high-level language, which its compiler tries to optimize before generating a machine-language program – the latter of which is called the run-time.

As I pointed out in that posting, there exists a danger that the computation may not take place the way the code reads to plain inspection – more specifically, that certain computations will be performed at compile-time, instead of at run-time. In reality, both the compile-time and the run-time computations still take place on the CPU; but in addition, a compiler can parse and then examine larger pieces of code, while a CPU needs to produce output based on just the contents of a few registers – in this case, up to two input-registers.

I feel that in contrast with the example which I wrote will work, I should actually provide an example, which will not work as expected. The following example is based on the fact that in C++, there exist three basic types of constants:

- Literals,
- Declared constants,
- Constant Expressions.

Constant expressions are any expressions whose terms are never variables – only operators, functions, or other constants. While they can increase the efficiency with which the program works, the fact about constant expressions which needs to be remembered, is that by definition, the compiler will compute their values at compile-time, so that the run-time can make use of them. The example below is one in which this happens, and it exists as an antithesis of the earlier example:

```
// This code exemplifies how certain types of
// computations will be performed by the compiler,
// and not by the compiled program at run-time.
// It uses 3 types of constants that exist in C++ .
#include <cstdio>
int main () {
    // A variable, initialized with a literal
    double x = 0;
    // A declared constant
    const double Pi = 3.141592653589;
    // A constant expression, being assigned to the variable
    x = ( Pi / 6 );
    printf("This is approximately Pi / 6 : %1.12f\n", x);
    return 0;
}
```

```
dirk@Plato:~/Programs$ ./simp_const
This is approximately Pi / 6 : 0.523598775598
dirk@Plato:~/Programs$
```

If this was practical code, there would be no problem with it, purely because it outputs the correct result. But if this example was used to test anything about how the run-time behaves, its innocent Math will suffer from one main problem:

When a computed value is finally assigned to the double-precision floating-point variable ‘x’, the value on the right-hand side of the assignment operator – the ‘=’ – is computed by the compiler, before a machine-language program has even been generated, because all of its terms are either literals or declared constants.

This program really only tests, how well the ‘printf()’ function is able to print a double-precision value, when instructed exactly in what format to do so. That value is contained in the machine-code of the run-time, and not computed by it.

BTW, there exists a type of object in *C*, called a Compound Literal, which is written as an Initializer List preceded by a parenthesized data-type, as if to type-cast that initializer list to that data-type. But, because compound literals are a C99 feature which standard C++ never adopted, I’ve never given much attention to them.

(Edit 11/13/2017 : )

In other words, if the code above contained a term of the form

`( 1 / ( 1 / Pi ))`

Then chances are high, that *because this also involves a constant expression*, the compiler may simplify it to

`( Pi )`

Without giving the programmer any opportunity, to test the ability of the CPU, to compute any reciprocals.

Dirk

]]>

There exist WiKiPedia pages that do explain how single- and double-precision floating-point numbers are formatted – both used by computers – but which are so heavily bogged down with tedious details, that the reader would already need to be a Computer Scientist, to be able to understand them. In that case, those articles can act as a quick reference. But they do little to explain the subject to laypeople. The mere fact that single- and double-precision numbers are explained on the WiKi in two separate articles, could already deter most people from trying to understand the basic concepts.

I will try to explain this subject in basic terms.

While computers store data in bits that are organized into words – those words either being 32-bit or 64-bit words on most popular architectures – even by the CPU, those words are interpreted as representing numbers in different ways. One way is either as signed or as unsigned ‘integers’, which is another way of saying ‘whole numbers’. And another is either as 32-bit or as 64-bit floating-point numbers. Obviously, the floating-point numbers are used to express fractions, as well as very large or very small values, as well as fractions which are accurate to a high number of digits ‘after the decimal point’.

A CPU must be given an exact opcode, to perform Math on the different representations of numbers, where what type of number they are is already reflected at compile-time, by which opcode has been encoded to use the numbers. So obviously, some non-trivial Math goes into defining how these different number-formats work. I’ll focus on the two most-popular floating-point formats.

Understanding how floating-point numbers work on computers, first requires understanding how Scientists use Scientific Notation. In the Engineering world, as well as in the Household, what most people are used to, is that the number of digits a number has to the left of the decimal-point, be grouped in threes, and that the magnitude of the number is expressed with prefixes such as kilo- , mega- , giga- , tera- , peta- , or, going in the other direction, with milli- , micro- , nano- , pico- , femto- or atto- .

In Science, this notation is so encumbering that the Scientists try to avoid it. What Scientists will do, is state a field of decimal digits, which will always begin with a single (non-zero) digit, followed by the decimal point, followed by an arbitrary number of fractional digits, followed by a multiplication-symbol, followed by the base of 10 raised to either a positive or a negative power. This power also states, how many places further right, or how many places further left, the reader should visualize the decimal point. For example, Avogadro’s number is expressed as

6.022 × 10^{23}

IF we are told to limit our precision to 3 places after the decimal point. If we were told to give 6 places behind the decimal point, we would give it as

6.022141 × 10^{23}

What this means, is that relative to where it is written, the decimal point would need to be *shifted to the right 23 places*, to arrive at a number, that has the correct order of magnitude.

When I went to High-School, we were drilled to use this notation ad nauseam, so that even if it seemed ridiculous, we would answer in our sleep that to express ‘a dozen’ using Scientific Notation yields

1.2 × 10^{+1}

More importantly, Scientists feel comfortable using the format, because they can express such ideas as ‘how many atoms of regular matter are thought to exist in the known universe’, as long as they were not ashamed to write a ridiculous power of ten:

1 × 10^{80}

Or, how many stars are thought to exist in our galaxy:

( 2 × 10^{11} … 4 × 10^{11} )

The latter of which should read, ‘from 200 billion to 400 billion’.

When Computing started, its Scientists had the idea to adapt Scientific Notation to the Binary Number System. What they did was to break down the available word-sizes, essentially, into three fields:

- A so-called “Significand”, which would correspond to the Mantissa,
- An Exponent,
- A Sign-bit for the entire number.

The main difference to Scientific Notation however was, that floating-point numbers on computers, would do everything in powers of two, rather than in powers of ten.

A standard, 32-bit floating-point number reserves 23 bits for the fraction, and 8 bits for the exponent of 2, while a standard, 64-bit floating-point number reserves 52 bits for the fraction, and 11 bits for the exponent of 2. This assignment is arbitrary, but sometimes necessary to know, for implementing certain types of subroutines or hardware.

But one thing that works as well in binary as it does in decimal, is that bits could occur after a point, as easily as they could occur before a point.

Hence, this would be the number 3 in binary:

11

While this would be the fraction 3/4 in binary:

0.11

(Updated 11/13/2017 : )

Thus, if the fractional part of a 32-bit floating-point number was to store 23 binary digits, equivalent to standard expectations in decimal form, then a bit of weirdness that needs to be taken care of, is that in effect, there would also be 23 different possible ways to store the number (1). Each of them would have a single bit equal to (1), all the other bits equal to (0), and the required exponent that repositions the non-zero bit, as required, to yield a product of (1).

Such oddities do not exist in Computing for very long, because at the very least, they’d lead to a decrease in efficiency. And so a little trick which takes place in Computing, is that *an unstated bit of (1) is assumed to precede the stored fraction*. That way, there is exactly one way to store the value

1.0 × 10^{0}

That being

0 01111111 0000 0000 0000 0000 0000 000

The way the exponent is stored reflects the fact that Computer Science wants the format to work well when these numbers are multiplied. This means that exponents must be easy *to add*. And so in principle, the exponent *could be* stored in two’s complement. But in practice, it actually gets stored as an integer whose value is offset, which would almost be the same thing as two’s complement, except for the fact that the offset can be arbitrary, and is chosen to maximize efficiency. Typically, either

10000000000

or

10000000

are used to denote 2^{+1}.

But one fact which programmers must deal with every time they write source code, that uses floating-point Math, is that in the source code, they write the constants *in Base-10*. While the compiler can do the work of translating between binary and decimal forms, the programmer must at least know what his available ranges are. And to do that, there exist two coarse approximations, of how binary numbers can be visualized in decimal:

- 2^{10} == 1024 ~= 1000
- 4 bits ~= 1 decimal digit.

Hence, if we knew that the field of bits of actual precision were equal to 24, then we could estimate that the number of decimal digits this would give us, is a disappointing 6 decimal places.

And if we could say that the exponent ranged from -127 to +127, then this would roughly correspond to the powers of ten

10^{-32} … 10^{+32}

But because the analogy is only approximate, the actual values that result just from these exponents are

1.2 × 10^{-38} … 1.7 × 10^{+38}

(Edited 11/08/2017 , Commented on 11/11/2017 … )

So obviously, this “single-precision” format needed replacement early in the History of Computing, with a longer format, and so the 64-bit format, which is also referred to as “double-precision”, is strongly favored today, because its field of significant bits is approximately

53 / 4 ~= 13 decimal digits

and its powers of ten are approximately

+/- 1000 / 4 ~= 250

There remains the unanswered question of how one would actually store the number zero, since what I have written so far would imply that the assumed digit of (1) needs to be right-shifted an infinite number of times, so that its ‘real value’ diminishes towards zero.

The convention that gets used is, that the most-negative exponent, which would normally signal the smallest order of magnitude that can be represented, actually signals either that the numeral zero is meant, ~~or that some anomalous result has been obtained~~. And the highest-possible exponent effectively, signals an overflow.

According to the way I was taught Computing, the CPU was able to distinguish between an underflow, and a valid representation of the number (0). The latter (did) occur when all the fractional bits were set to zeroes.

But according to the way I was taught Computing, there was no analogous way to distinguish between an overflow, and some other, corresponding, ‘meaningful result’. According to the WiKiPedia *today*, if the fractional bits are all set to zeroes, and the exponent is its maximum-possible value, this actually signals ‘the symbol infinity’.

If that were true, then I’d expect that this symbol behaves, exactly as the Algebraic symbol would behave, if nothing else was known, than the values passed to one operation – i.e., if no further context was given.

This means that operations between infinity and ‘ordinary numbers’ would have predictable results, while operations between opposing infinities would continue to result in ~~error-messages~~, because according to Math, those can only be resolved – if at all – using Computer Algebra Systems, and if given an entire system of equations. A CPU is generally only fed the contents of a small number of registers – usually two – and not an entire equation. Based on only those two terms, the answer remains undefined.

It has always been possible, for the CPU to signal an error due to numeric values it was instructed to perform an operation on, and for that error message *not* to have been an overflow, nor an underflow. I.e., traditionally, dividing by zero simply resulted in one such ‘illegal operation’ message. It did not result in an ‘overflow’ message, because the CPU would not attempt to compute its value.

If the symbol ‘infinity’ was recognized by the CPU, then those sort of messages would become less frequent, although I’m not sure how useful it would be, if code which was expected to return a numeric value, was allowed to return ‘infinity’ instead.

But then, actually dividing by zero would result in infinity, and in code that for the time being, continues to run. But, trying to multiply infinity by zero, would finally result in an ‘illegal operation’.

~~I find this version of ‘the concept of infinity’ to be a misguided effort on the part of the WiKiPedia, because~~ Infinity is not a number; it’s just an Algebraic symbol.

(Erratum 11/08/2017 : )

The fact that Infinity is one out of many existing Algebraic symbols, caused me to misinterpret what the latest IEEE standard means, when it states that the bits of a floating-point number can stand for “Not A Number”. Those bits would include the most-positive exponent possible, plus a field of fractional bits, *not* all of which are zeroes.

According to the IEEE standard, this is equivalent to having an error-code, which carries forward through a series of operations.

Apparently, one main reason the IEEE did this, was the fact that for the CPU to throw an exception is a big problem in massively parallel computing – in fact, it is largely unsupported there. Instead, a core can write to its output that an error has taken place, and keep running. If either of the two input operands already states this condition, the next output is also set to this condition. And finally, when some final output is examined and contains this code, humans – or more code – can analyze where in the attempted computations the problem took place.

Therefore, according to the new guidelines:

- Underflows are not supposed to take place anymore. Instead, what used to lead to underflows now leads to *Signed Zero*.
- What used to lead to overflows now leads to *Not A Number*.
- Other operations that cannot be resolved now lead to *Not A Number*…
- If the exponent is at its most-negative, but the fractional bits are *not* all zeroes, then those fractional bits now represent a *Denormalized Number*, which means that there is no longer a preceding, unstated (1), which in turn can lead to even-smaller values within the mantissa, where leading zeroes become possible.

I suppose that one question this leaves unanswered, concerns the fact that to subtract one floating-point number from another, can sometimes lead to an apparent zero, and that the representation now prefers to know whether this leads to a positive or a negative zero.

This scenario is aggravated by the fact that by default, each floating-point number has a non-zero error-margin, so that even if the known bits did cancel, we could not assume safely that the real value left, should actually become zero. Instead, the real value which the operation fails to find, could be another real number, several orders of magnitude smaller than either operand, but a number that could nevertheless be expressed accurately by itself, in the same format, if it had been found.

If this was treated as Not A Number, then innocent Math could lead to error messages, since subtraction may take place naively. According to the WiKi, this is resolved as *Positive Zero*, *unless* rounding takes place negatively, in which case it gets resolved as *Negative Zero*. IMHO, this should be resolved as *Negative Zero*, *if* the rounding that led to the zero was positive.

(Erratum 11/10/2017 : )

A personal friend of mine has pointed out to me, that my recent version of how floating-point numbers work, still contained an error:

Apparently, when a numerical result is obtained which is too large to be expressed – but which was not necessarily a division of an ordinary number by zero – this can still be referred to as a ‘Regular Overflow’, but is in fact treated by the CPU as equivalent to Infinity. Meaning, that this result can be used in later operations, as this posting describes the usage of ‘Infinity’ – and not taken out of the computation, as this posting describes the usage of ‘Not A Number’.

On such a fine detail, I thought that the best way to test this person’s claim would be, just to try it out. Because, even the WiKiPedia could be in error, and, the actual, formal documents, are harder for me to analyze, than it was just to write a few lines of code. So this was the result:

```
// This exercise is to test, whether a general overflow simply
// leads to infinity, and whether my CPU supports
// denormalized numbers.
#include <iostream>

using std::cout;
using std::endl;

int main() {
    float num1 = 0.0F;
    float num2 = 1.0e+20F;
    double num3 = 1.0e+20;
    float infinity = 1 / num1;
    float overflow = num2 * num2;
    double reg_num = num3 * num3;
    float denorm = 1.0e-20F / num2;
    cout << "Result 1: " << ( 1 / infinity ) << endl;
    cout << "Result 2: " << ( 1 / overflow ) << endl;
    cout << "Result 3: " << ( 1 / reg_num ) << endl;
    cout << "Result 4: " << denorm << endl;
    return 0;
}
```

```
dirk@Plato:~/Programs$ ./infin_test_2
Result 1: 0
Result 2: 0
Result 3: 1e-40
Result 4: 9.99995e-41
dirk@Plato:~/Programs$
```

(Exercise Augmented 11/13/2017 . )

And as the reader can see, my friend was correct.

This usage of the code ‘infinity’ could be contested, because according to certain logic, a number could just be ridiculously large, and not stand for infinity in real life. My example above, of

1 × 10^{80}

described how this applies to certain problems in Physics and Astronomy. But apparently, what is more practical in Computing, is that the result of a number becoming

1 × 10^{40}

is just too large to be represented – *If we make the mistake of using single-precision, floating-point numbers* – and that to keep using the result ‘makes more sense’.

Caveat:

If the reader chooses as I did, to test certain low-level behaviors of the CPU by writing a program in a high-level language such as C++, the fact needs to be considered, that any modern compiler worth its salt will optimize our C++. This also means, that if we write numeric literals, these will be expressions within the source-code, whose values a compiler may recognize before the program even runs, in which case the compiler will simplify, before generating machine-code.

My reason for putting the upper-case letter ‘F’ at the end of the single-precision numeric literals was, the fact that by default, the compiler will start by taking the literals to be double-precision. This means that by default, the compiler will already convert these double-precision constants into single-precision, just because I declared the variables on the left-hand side of the initialization as single-precision. Putting an ‘F’ actually forces *the numeric value in the source-code* to be read as single-precision by the compiler. ( :1 )

What I found was that if I declared variables of type ‘float’ or ‘double’, and *If I initialized those variables from right-hand values that are themselves variables and not constants*, this will fool the compiler into allowing the CPU to compute the right-hand side of those definitions *At run-time*. But if the compilers become much more intelligent than they currently are, their translation of C++ into machine-language could just as easily thwart my future attempts to test what the CPU does.
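Another way to discourage this pre-computation – just a sketch, where the function names are my own invention and not taken from the exercise above – is to mark the operands with the keyword ‘volatile’, which forces the compiler to emit genuine loads, so that the arithmetic really is performed by the CPU at run-time:

```cpp
// Sketch: using 'volatile' to keep the compiler from pre-computing
// floating-point results at compile-time. The function names here are
// my own invention, not taken from the exercise above.
#include <cassert>
#include <cmath>

float runtime_overflow() {
    volatile float big = 1.0e+20F;  // 'volatile' forces a genuine load
    return big * big;               // 1e+40 overflows a float, giving +inf
}

float runtime_divide_by_zero() {
    volatile float zero = 0.0F;
    return 1.0F / zero;             // +inf, by IEEE 754 default behavior
}
```

Of course, ‘volatile’ only defeats constant-folding; it does not otherwise change what the CPU computes.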

(Comment 11/11/2017 : )

The above posting states, that if the exponent is at its most-negative value possible, but the fractional bits *not* all zero, a denormalized number results, in which the preceding (1) is no longer assumed, before the stored fractional bits.

Even though this detail might seem trivial, I should point out, that this needs to take place in a particular way.

In the case of a 32-bit, single-precision, floating-point number, the smallest-possible exponent, that still leads to ‘an ordinary number’, is actually (-126), and, the smallest-possible, positive number that can exist in this form, will be represented as

0 00000001 0000 0000 0000 0000 0000 000

What we need to watch out for, is that even though

1.0 × 2^{-126}

was still an ordinary number, the next range of *denormalized numbers* which need to be possible, would be of the form

0.5 × 2^{-126}

which would be represented in binary, as

0 00000000 1000 0000 0000 0000 0000 000

What this effectively means is that in practice, an exponent-field of ‘0’ instead of ‘1’ still implies the same power of two, which stays at (-126) and does not become (-127), since (0.5) will still need to be multiplied by (2^{-126}), to allow the full range of possible numbers to be represented. The range of denormalized numbers needs to be continuous with the smallest-possible ‘ordinary number’.

This could run counter to what the reader might expect, since the numeric value of (0) is still smaller than the numeric value of (1). But the way the binary, floating-point format works, the (power of 2) that results, is the same.

The equivalent phenomenon takes place with 64-bit, double-precision floating-point numbers, when those in turn lead to *their* denormalized numbers. The most-negative power of two they can express will be (-1022), even though the corresponding bit-field could express the number (-1023).
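This continuity can actually be verified by inspecting the bit-patterns on either side of the boundary – again just a sketch, where the helper function ‘float_bits()’ is my own invention, and which assumes that the CPU’s denormal support has not been disabled by some flush-to-zero mode:

```cpp
// Sketch: inspecting the bit-patterns at the boundary between ordinary
// ("normal") and denormalized single-precision numbers. The helper
// function 'float_bits()' is my own invention.
#include <cassert>
#include <cfloat>
#include <cmath>
#include <cstdint>
#include <cstring>

std::uint32_t float_bits(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);  // a well-defined way to view the bits
    return u;
}

// FLT_MIN     is 1.0 x 2^-126  ->  0 00000001 00000000000000000000000
// FLT_MIN / 2 is 0.5 x 2^-126  ->  0 00000000 10000000000000000000000
```

Halving the smallest ordinary number clears the exponent field to all zeroes, while the implied power of two stays at (-126).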

1: ) By default, a C or a C++ compiler will allow a numeric literal to initialize a variable, even if the data-type of the variable is not as precise, as the literal was, without generating any messages.

More specifically, a double-precision literal, or even an integer, can be used to initialize a single-precision floating-point variable in this way, because those glyphs may be the easiest way for the programmer to write, what he wants the variable to be initialized to.

- In C or C++, any built-in computation performed between an integer and a floating-point number will lead to a floating-point output, which has the highest precision already specified in the input-values, that are called parameters. And this is loosely called a ‘promotion’, which will take place silently. I used it in the code above.

But the convention above is implemented by the compiler, and not by the CPU itself. Hence, a compiler will convert each of the parameters to the required data-type, before putting opcodes into the machine-language representation of the program, that finally perform the computation between the intended parameters, which will already be of the same data-type as the output. Thus, the CPU’s instruction-set only needs to include opcodes, that convert a single parameter of one type, to the equivalent of another type.

But these fine details are best learned, by taking courses in C or in C++ .

Dirk

]]>

I have already mentioned in this blog, that I use an application named ‘Celestia‘, which basically anybody can download and ‘play with’. This is an Astronomy application, which displays to the user graphically, not only what the sky above the Earth would have looked like at some arbitrary point in time, but also what the sky – i.e. the star field – would look like, as seen from elsewhere in the near regions of space, such as anywhere from within our own solar system, or from the closest neighbors to our own sun.

In fact, I even uploaded a YouTube Video, which explains to anybody, the basic usage of this program.

This is another video which I uploaded at 1920×1080 resolution, but which the viewer may have to play with somewhat, even after he has made it full-screen, to switch its resolution to True HD.

(Edit 11/07/2017 :

When recording the above screen-cast, I nearly sidetracked the main subject – of how to navigate the program – with the question, of how to change the Field Of View, the ‘FOV’, indicated in the bottom-right-hand corner of the screen. I do know from experience, that when ‘in synchronous orbit around the Moon’, and looking back at the Earth, using the scroll-wheel of the mouse does not make the earth look any larger or smaller, because using the scroll-wheel will then only change the distance with which my camera-position is orbiting the Moon.

The way to adjust the FOV is finally, to hold down the <Shift> Key, and then to click-and-drag with the Left Mouse Button.

Also, the distinction is known to me, between how this program defines ‘a synchronous orbit’, and what a synchronous orbit would be, correct to Physics. A synchronous orbit needs to have one specific altitude, at which a stable, circular orbit has the same rotational period, as the body we’re orbiting. In the case of the moon, this may not even be achievable. Yet, the way ‘Celestia’ defines a synchronous orbit, is as my screen-cast shows. )

But if this program is to be used for anything aside from pure entertainment, the question should ultimately be asked, how accurate the model is, by which planets are animated, at whatever time-period the user is observing. Basically, a program would be possible, which simply extrapolates Kepler’s Laws about the movements of Planets, according to which their orbits are purely elliptical, and according to which the periods of each orbit stay perfectly the same, over the Eons.

The problem with Kepler’s Laws is, that they not only assume Newtonian Gravity, but *They also assume that each orbit is only affected by the gravitational interaction of two bodies*: A body assumed to be orbiting, and the body around which the first is assumed to be orbiting. The problem with this is the fact that in the real Universe, every body that causes gravitation, eventually exerts that gravitation on any other body – up to infinite distance. The fact that each and every person standing on the surface of the Earth, experiences the gravitation of the Earth, is a kind of proof of that. In theory, the gravitation of the Earth not only affects the orbit of the Moon, but to a lesser extent, also the orbits of Mars and Venus – even though at the distance of Mars, for example, the gravitation of the Earth can be assumed negligible in strength. The effect of a car that passes on the street in front of an Inhabitant of Earth, is in fact stronger, than the effect of the gravitation of Mars on the same Inhabitant. And this is simply because the strength of gravitation decreases as the reciprocal of distance squared.

But what this means is that over longer time-frames, the orbits of the planets become ‘perturbed.’ Mostly, this is due to the effects of the gravitation of Gas Giants, on the smaller planets of our solar system, but it really applies generally, wherever gravitation exists.

Well, the programmers of Celestia took this into consideration, when they created that program. What they did, was to assume that Kepler’s Laws generally apply, when they are fed a single parameter – time – and that they predict elliptical orbits, as a linear function of time. But, because real orbits are perturbed, it was at some point thought best, that time first be fed through a polynomial, to arrive at the parameters which are then fed into Kepler’s Model, such as the one parameter that states, in what phase of its orbit a planet is, as it orbits our sun.

In reality, this method of applying a polynomial, does not reflect any physical realities, of ‘how the orbits work’. What it reflects is that Real Astronomers at some time in the past, used very powerful computers in order to compute gravitational interactions, and that the result of their simulation was a continuous sequence of planetary positions, which next needed to be stated somehow. The reader might think, that nothing but a series of numerical values would be needed, except that one problem with that idea would be, that effectively an infinite series of numerical values would be needed, because any time-interval can be magnified, and the motion of planets is supposed to remain continuous.

And so what Astronomers did, was to express the results of their simulation as a polynomial approximation, which consumers’ computers can next turn into real values, for what the position of a planet is supposed to be, at any precise point in time.

In other words, the use of a polynomial approximation served essentially, as a type of data-compression, and not as a physical model of gravity.

This approximation is also known as “vsop87“.

(Updated 11/08/2017 : )

Any polynomial approximation has as its main weakness, that while it stays accurate over a finite interval, outside the assigned interval, it goes off the rails – quickly. I think I recall reading somewhere, that Celestia Users were trying to emulate what our Solar System would have looked like, in the year 2000 B.C. – i.e., 4000 years ago. And those users were experiencing anomalies. To the best of my knowledge, ‘vsop87′ should still be accurate, near the year 1 A.D. – i.e. merely 2000 years ago, especially according to the article above.

I suppose that a second question could be asked, as to *how faithfully Celestia applies ‘vsop87’*. In theory, each of the parameters of an orbit, such as the radial distance of its perihelion and its aphelion, as well as the angle of inclination in the solar system, could be made a polynomial function of time. But what Celestia may have done instead, could have been to keep the geometry of each orbit constant, and to make only the phase of the planet’s motion within its orbit the required polynomial. Additionally, I believe that their model of the rate at which bodies spin, is a simple, constant function of time – which may again, not be completely accurate.

This type of simplification would actually ‘make sense’, for two practical reasons I can think of:

- It simplifies coding,
- If the user was to take the application Celestia outside the epoch within which ‘vsop87′ is deemed accurate, he will experience that the planets’ positions within their orbits are not correct anymore, but the user may not even be interested in this. And then, such a user would find that at least, the geometries of the orbits would still be correct – more or less – rather than to have each orbit assume ‘a wild geometry’, which it never had.

So according to the article which I just linked to above, *‘vsop87′ is accurate +/- 4000 years from the modern epoch*, and if the other user was placing his time-pointer at the edge of the valid interval, the polynomial would indeed have started to become inaccurate – directly at the edge.

(Edit 11/06/2017 : )

I have some added observations, that eventually have implications about the accuracy, of the application, ‘Celestia’.

This application is scriptable. What this means, is that using the language ‘Lua’, any user can install an arbitrary 3D model, and can decide that this model is supposed to be orbiting the Sun (‘Sol’), the Earth, the Moon, or anything else. This type of addition to Celestia’s collection of known objects does not require, that the user in question know how to apply polynomial approximations. According to his script, such objects can be made to have much simpler movement – i.e., completely linear parameters governing movement.

It was only to simulate our own Solar System, that the very-precise model was used, which is named ‘vsop87′, and that uses polynomial approximation.

Further, some time ago I found myself analyzing the question, of how many bits of precision should be used to evaluate the polynomial. And the conclusion that I reached at the time was, that the coefficient of x^1 needs to be more precise than any of the other coefficients, including that of x^0 , the constant term.

My reasoning for this result was rather straightforward. Floating-point numerals have been engineered in such a way, that they remain accurate over many multiplications. Hence, the term of x^8 can become hugely positive far-away from the modern epoch, but can then be multiplied by a 32-bit, floating-point number, which has a correspondingly negative order of magnitude, so that this coefficient can be as minuscule, as the term itself has become huge.

None of this would really change the fact, that we’d obtain a numerical result, which is still approximately 24-bits accurate overall, the way 32-bit floating-point numbers are stored.

But when it comes to the coefficient of x^1 , we find that, the way the equations can be set up, the part of the result which is to the left of the decimal-point may not interest us, because it simply expresses how many times a body has orbited the sun. When observing a body, this cannot be seen. But the part of the result which is important, is the part that’s to the right of the decimal point, which expresses what fraction of an orbit a body has completed, and which can in fact be observed.

Well if we simply add or subtract two 32-bit, floating-point numbers, the magnitudes of which are very different, one number can become much smaller, than the margin of error, of the fractional part of the other. Therefore, 32-bit floating-point numbers do not maintain their accuracy, for addition or subtraction, if their magnitudes are strongly different.

I.e., we could perform the following two series of additions, and obtain different results:

- 1.0e+00 + 1.0e-20 – 1.0e+00 ~= 0.0e+00 (?)
- 1.0e+00 – 1.0e+00 + 1.0e-20 ~= 1.0e-20
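These two orderings can be written out as two trivial functions – my own illustration, not part of any larger program:

```cpp
// Sketch: two orderings of the same three-term sum, in single precision.
#include <cassert>

float small_term_lost() {
    float a = 1.0e+00F;
    float b = 1.0e-20F;
    return (a + b) - a;  // b disappears into a's margin of error
}

float small_term_kept() {
    float a = 1.0e+00F;
    float b = 1.0e-20F;
    return (a - a) + b;  // the cancellation happens first, so b survives
}
```

The first function returns exactly zero, while the second returns the small term intact.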

Now, there is an argument against the apparent, visual clarity of what I have written above:

- (1.0e+00) was not an exact value to begin with. It has an implicit margin of error, which is (1 / 2^24) times its magnitude, if that is a standard, single-precision floating-point number. Therefore, its margin of error may make the value of (1.0e-20) irrelevant.

This argument places an ultimate limit, on how accurately high-order polynomials can be evaluated at all, as soon as the error present in any one term, is larger than another term.

But when expressing the movement of a body within its orbit, what we’re most interested in, is the accuracy of the fractional component.

I’ve actually seen source code, due to which ‘Celestia’ performs certain crucial computations in a type of 128-bit, fixed-point format, that has been defined using object-oriented programming, but for which only basic computations, such as addition, subtraction, multiplication and division, have been defined. Other operations are computed within ‘Celestia’, using very mundane 32-bit or 64-bit floating-point Math, for which every conceivable function has been defined.

The custom, 128-bit numeral has been set up, as having 64 (largely-unused) bits to the left of the decimal, plus 64 bits to the right of the decimal. And certain member-functions will extract either of the two fields, or set either of the two fields, using native data-types.
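Without quoting Celestia’s actual class, the idea can be sketched as follows, where the type, field and member names are all my own invention, and only unsigned addition is shown:

```cpp
// Sketch of a 64.64 fixed-point numeral (not Celestia's actual class):
// 64 bits to the left of the point, 64 bits to the right.
#include <cassert>
#include <cmath>
#include <cstdint>

struct Fixed128 {
    std::uint64_t whole;  // bits left of the point
    std::uint64_t frac;   // bits right of the point: value = frac / 2^64

    Fixed128 operator+(const Fixed128 &o) const {
        Fixed128 r;
        r.frac = frac + o.frac;
        // detect the carry out of the fractional field, by noticing
        // that unsigned addition wrapped around
        r.whole = whole + o.whole + (r.frac < frac ? 1u : 0u);
        return r;
    }

    std::uint64_t whole_part() const { return whole; }
    double frac_as_double() const {
        return std::ldexp(static_cast<double>(frac), -64);  // frac / 2^64
    }
};
```

The point of such a type is that additions never lose the fractional bits, no matter how large the whole part becomes – which is exactly where floating-point falls down.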

We can ask ourselves whether the coefficient of x^0 needs to be more precise than 24-bits. The answer would be, ‘Only if we want the phase of the position of a planet, within its orbit, to be more precise than 24-bits behind the decimal-point.’ (Edit 11/08/2017 : In other words, ‘Not if we only want to display an entire orbit, on a 1920×1080 display.’ )

What remains important however, is that the fractional part of this term needs to be preserved, when added to the higher-order parts of the polynomial, even if the non-fractional parts of that polynomial are numbers in the thousands – i.e. even if the planet has orbited thousands of times.

Giving the first-order term this kind of 128-bit, fixed-point precision, ‘makes sense’, then.

But what about the higher terms? How precise does the coefficient of x^2 , x^3 , … x^8 need to be? And how precise can they be? The answer that I find is, that *Those terms can only ever be as precise, as the number of bits of precision, which their coefficients have been written in*. They are not responsible for the fact that on average, the Earth orbits once per year, only for the more-subtle perturbations to that orbit, which the Earth has undergone over the range of ( -4000 … +4000 ) years or orbits.

Assuming that the magnitude of the actual perturbations is not huge, and that these coefficients are not written as having a huge number of digits, they can be computed most-efficiently, as a series of multiplications. I.e.,

x^3 == x*x*x

x^8 == x*x*x*x*x*x*x*x

Hence, unless we include a type of numeral which has (24 × 8), or 192, bits of precision, we have no way to make those terms more precise than their stated coefficients would be. And because such exaggeratedly-long numbers would need to be in fixed-point format, it would actually make sense to have (192 + 192 == 384) bits then (code for which is not evident).

So simply due to practical limitations, we’d get the idea only to compute the x^1 term in the especially-long format, and to compute all the other terms using ~~either 32-bit or~~ 64-bit floating-point numbers, but also to pay attention to the order with which we apply computations.

Also, when evaluating the polynomial in question, we would find that the lower terms start earlier to become very large, but that the highest terms only start to become significant, at the outer limits of the interval over which the polynomial is deemed valid. Eventually these terms need to oppose each other *when added*, to result in an overall result that’s to be accurate.

This last observation seems to suggest that the term x^2 should be computed before the term x^3 , and that the term x^7 should be computed before the term x^8 , and that the result from each term can be added to a cumulative value, in which large positive and negative contributions will eventually cancel, to lead to a correspondingly smaller final value.

It would make most sense, for this cumulative value – the result of additions – to be stored in the custom, 128-bit format. But in fact, ‘Celestia’ might not use this format here.

It is trivially possible, that a constant, average rate of orbit is indicated by the x^1 term, but that deviations of the orbital position from this value exceed 1 orbit. If thousands of years have gone by, the deviations of the orbital phase due to perturbation may reach 10 orbits, and for academic completeness, a deviation of up to 100 orbits should be planned for.

Effectively, this would reduce the precision by 1 or 2 further decimals, i.e., powers of ten, with respect to what each term was computed to, assuming that we’re still only interested in the fractional part of the final result. Hence, if it was a valid standard that the x^0 term only needed to be as accurate as single-precision floating-point Math, then by the same token, double-precision Math for the higher terms would seem appropriate.
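The resulting strategy can be sketched as follows – this is my own illustration, with invented coefficients, not code from ‘Celestia’ – evaluating the terms lowest-power first, in double precision, while whole orbits are discarded from the running sum as we go:

```cpp
// Sketch: evaluating the polynomial term-by-term, lowest powers first,
// in double precision, while discarding whole orbits from the running
// sum as we go. This is my own illustration, not code from 'Celestia'.
#include <cassert>
#include <cmath>

// Returns the fractional part of (c[0] + c[1]*x + ... + c[n]*x^n),
// i.e. the phase within the orbit, in the range [0, 1).
double orbital_phase(const double *c, int n, double x) {
    double sum = 0.0;
    double power = 1.0;             // x^0, then x^1, x^2, ...
    for (int i = 0; i <= n; ++i) {
        sum += c[i] * power;
        sum -= std::floor(sum);     // keep only the fraction of an orbit
        power *= x;
    }
    return sum;
}
```

Working modulo 1 orbit at each step is valid, because only the phase is observable; but as explained above, each term can only ever be as precise, as the precision its coefficient was written in.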

(Edit 11/07/2017 : )

I’ve observed that even though the source code defines a custom-data-type, which corresponds to a 128-bit numeral, the same source-code fails to apply it, when computing polynomials.

A curious reader might ask, ‘In what situation, then, did the developers think it appropriate to apply this type of numeral?’

AFAICT, they applied it when an object orbits another object, which in turn orbits another object.

In other words, for example, the moons of Jupiter could be an orbital system, which needs to be scaled, before it gets added to the position of Jupiter. This entails a multiplication, followed by an addition.

And yet, a user may eventually want to know, whether as seen from Earth, Ganymede transits Neptune, which is of course another planet that orbits the Sun, potentially transiting at one point in time. In order for the program to reveal an answer to this question, the position of Ganymede within our solar system, needs to be as precise, as the orbit of Neptune, as well as the orbit of Earth. And yet, the position of Ganymede has been scaled and transformed…

Dirk

]]>

There is a phenomenon which is not extremely interesting to modern Astronomy, but which was important to ancient Astrologers for a long time: as seen from that tiny point in space known as the Planet Earth, two distant celestial bodies can seem to transit, which means that their positions in the sky can seem to become superimposed, even though the angular size of celestial bodies is typically very narrow.

Some people might expect, that this will happen about as often, as the orbits coincide within our Solar System. But one reason why actual transits are not that frequent, is the fact that the planets’ orbits are not truly coplanar. The orbits come close to lying in the same plane – which is loosely referred to as the plane of the solar system – but each orbit is tilted slightly with respect to the orbit of another planet. This means that while some transits occur regularly, others only do so after hundreds, or even after thousands of years.

Jupiter “crowning Regulus” is a common phenomenon, which is tied to the period of the orbit of Jupiter. (This is also so common, because nobody expects Jupiter *to pass directly in front of* Regulus. )

But please, ask any Astronomer who you can get your hands on, ‘How often does Venus transit Jupiter?’ That should be good for a laugh.

Well in the year 2 B.C., both these events took place. Actually, Jupiter crowning Regulus is a dance which takes several months.

But, because Venus and Jupiter are the two brightest natural objects in the sky – unless we count the Moon and the Sun – If they become superimposed, and if Civilization is in a primitive, superstitious state, then Astrologers will take great note of a star seeming to form, which is twice as bright as any star they were familiar with.

Now, several years ago, I created a video using an Application called “Celestia“, which depicted this event. But I later felt that this video was so poorly orchestrated, that I deleted it myself. The actual act of Venus transiting Jupiter only took place within the last two seconds of that video, and the video had no sound. So I doubt that anybody was able to recognize what I was really trying to show the viewer.

But only today, I redid that video, and posted it to YouTube, as visible here:

This time, I narrated the Video with Audio. I invite the reader to watch it.

(Edit 11/02/2017 : )

One fact which I’ve noticed about YouTube Videos and the way they play back, is that they could have been uploaded as 1920×1080 videos, but even after the viewer makes them full-screen, their resolution stays quite blurry for a good 30 seconds, until their real-time resolution keeps up with the viewer’s format-change, and switches to true High-Def.

The above video is an example of one, which I uploaded as a 1920×1080, and which should start playing back for the reader as such, *about 30 seconds after* he makes it full-screen.

(Edit 11/04/2017 : )

Obviously, in order for this information to be taken seriously, the accuracy of the software needs to be questioned.

Questioning the accuracy myself, I have come across this information, not only in general, about how ‘Celestia’ works, but also, what some of the limitations of the software are.

My conclusion so far has been, that in order to depict the Year 1 A.D., its accuracy should be sufficient.

Dirk

]]>

One of the facts which I’ve been blogging about, is that I have erased Windows from a computer I had, which at the time was named ‘Mithral’, and that I had then installed Debian / Stretch on it, at which point I also changed its name to ‘Plato’. Debian / Stretch is the successor to Debian / Jessie, the latter of which I still have installed on two of my computers.

One of the main differences between the Debian / Stretch and the Debian / Jessie code-repositories is, that the newer Debian / Stretch is based on the desktop manager ‘Plasma 5′ – assuming we choose that desktop-manager – while Debian / Jessie was still based on ‘KDE 4′ as its desktop manager. And so one aspect of my ‘new’ Debian / Stretch system which I’ve been curious about – and anticipating – was how I’ll like Plasma 5 as opposed to KDE 4.

One of the facts which should be noted, is that although Plasma 5 has been altered enough to create major version issues with KDE 4 builds of applications, Plasma is not really that different, finally, from KDE 4.

The developers have focused on simplifying the experience. KDE 4 had almost unlimited options by which the user could fine-tune the appearance of his desktop, while Plasma 5 has reduced the number of settings. And yet I find, I can still do everything under Plasma 5, that I was used to doing under KDE 4. I do not necessarily need to be able to fine-tune, how translucent the Task-Bar is – which under Plasma 5 or KDE 4 specifically is named a ‘Panel’ – while its center-region, where entries exist for applications currently running in user-space, is actually named the ‘Task-*Switcher*‘. Linux people are sometimes particular about not wanting to seem to be copying the conventions of some other O/S.

Under Linux, we have a variety of methods, to display what processes are occupying the CPU, one of those being the command-line ‘top’, and another being the slightly-more-colorful, but still text-based command ‘htop’. We refer to htop as a ‘Process Viewer’.

One detail which went a bit far for me however, was the degree with which the default Theme – named ‘Breeze’ – made the icons and widgets seem uninteresting. From a package-manager, we can still install a throwback ‘Oxygen’ Theme, the appearance of which is more-similar to how KDE 4 looked. But if we choose the Oxygen Theme, then the default assumption would be that we want our desktop to have a dark look. I actually bypassed this result, by choosing my Look And Feel to be Oxygen, but by choosing my Desktop Theme to be ‘Air’, from the System Settings center. Air is what’s keeping the background-colors of most of my desktop bright-looking.

Also, I always took care to keep wallpapers which I had chosen, and not to allow any switch in Themes to replace those, since a bright-looking wallpaper is also necessary, for obtaining a Desktop Appearance which is bright-looking.

I suppose that one loss which I mourned at first, was that ‘Apper’ no longer works with Plasma 5. Instead we have a new suggested package-manager named ‘Discover’, which I do not like as much as I did Apper. But then, we can still use Synaptic as the GUI for our package-manager, which still works well, and which I actually use to pick and choose software to install.

Aside from that, re-installing capabilities which I already had on my KDE 4, Debian / Jessie -based computers, is often just a question of installing applications from the package-manager, the names of which have not changed, since the switch to Plasma 5.

One change which I also noticed, is that the application which I used to use to preview images, called ‘gwenview’, no longer previews all the images which I need to preview. Specifically, I ran into some TIFF-Images that the new gwenview could not display, and a possible reason could be the fact, that these TIFF-Images have an alpha-channel – although having an alpha-channel did not prevent similar TIFF-Images from displaying in gwenview, under KDE 4. And so what I needed to do, was change the default program with which I preview images, to ‘ImageMagick’.

I discovered that changing such things as File Properties, and under that, File-Type Options, is as easy under Plasma 5 as it was under KDE 4. We can select whichever installed application we like, as our default for opening a File-Type – a feature which is still important enough, to be implemented fully.

I suppose that I should add another observation about the newer way of doing things. Under Debian / Jessie, the Display Manager – i.e., the log-in manager – was a time-proven piece of software named ‘kdm’. This is a program that runs with System / Root privileges when we boot the computer, which assumes the responsibility of launching the X-server, and which then led to a KDE 4 log-in, that benefits from being displayed in an X-window environment.

I believe that under Debian / Stretch, if we want to run Plasma 5, kdm is replaced with a similar program named ‘sddm’. From a purely practical standpoint, I’ve observed, that if we perform an accustomed log-out, followed by a log-in, sddm no longer goes so far as to restart the X-server. The X-server just keeps running. Not only that, but doing this will also cause disconnected dbus-daemons to keep running, that were once associated with expired user-sessions.

This means that if we did a log-out, log-in, and then ran the following command from a terminal-window:

kde-mv somefile trash:/

Apparently, that file will still get moved to the trash correctly, but only after several error-messages are printed, which we would not get to see, if we were just moving the file to the trash, by pointing and clicking, from the GUI. Supposedly, similar error-messages will then also be spamming our syslog.

What this currently does, is make the maneuver of logging out and then logging in again useless to me. I was kind of relying on this also restarting the X-server, and fully cleaning up what was left of the previous session, when I was doing this on my Debian / Jessie systems. And so on ‘Plato’, I need to replace a log-out, log-in maneuver with an actual reboot, in general.

I regard this as a bug, because it may mess up a future ability to use ‘Plato’ as a Web-server.

Further, Debian / Stretch is not so old. It was only released on June 17 this year. This actually puts the Debian Team behind, because various other Linux distributions have come out, also sporting Plasma 5.

One of the consequences of this is, that *Debian / Stretch has somehow been tied to Plasma 5.8* , even though Plasma 5.9 or Plasma 5.11 did come out, with *specific features that Plasma 5.8 does not have*:

- A Global Menu Feature,
- A ‘Night Light‘ Feature.

Because Plasma 5.8 is the ‘long-term support’ sub-version of Plasma 5, and because Debian Team has decided that their priority for Stretch is to make it stable, what needs to be expected is that they will stick with Plasma 5.8 .

Now, I have never installed a blue-light-filter on any of my desktop managers specifically. If I did feel that these computer-monitors were causing insomnia, I’d need to take into account that supposedly, only brief exposures to that nasty, short-wavelength light are required, to re-awaken a person who’s supposed to be going to sleep, and to re-awaken his insomnia. I mean, less than a full minute of exposure can wake *me* back up, after which I am usually able to get to sleep anyway.

So I would need to do something about every source of blue light in my home, starting with all my monitors, but also including my smart-phone, since it, too, can give off blue light briefly. And so, I’d be looking at a cross-platform solution that installs and configures quickly.

And in that case, I can still find the package ‘redshift’ in my package-manager, to do so. I have not really tested that it still works, but would assume that it does. It does not depend on Plasma 5.
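
As a sketch of how I might put it to use, assuming the ‘redshift’ package is in fact installed – the wrapper function is my own, and the 4500 K figure is just an example:

```shell
# set_night_light: apply a one-shot colour temperature with redshift,
# or report that it is not installed. 'redshift -O' sets the colour
# temperature once, and 'redshift -x' resets the screen to normal.
set_night_light() {
    if command -v redshift >/dev/null 2>&1; then
        redshift -O "${1:-4500}"
    else
        echo "redshift not installed"
    fi
}

# Typical use:
#   set_night_light 4500   # warm the screen
#   redshift -x            # undo it later
```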

And there is an aspect of ‘Plato’, which I don’t feel 100% enthusiastic about. It happens to have a 1920×1080 computer-monitor, because that hardware never changed, and it responds to this resolution by choosing a 48×48 icon-size automatically, as well as by applying some unknown, high DPI value for the font-sizes, which it might be reading from the monitor via the H/W connection.

This leaves me with *less* space on the desktop, to place large icons and widgets.

Now, just as it was with KDE 4, Plasma 5 allows us to micro-manage these things, if we have the ‘kscreen’ package installed, and so I could decide to make my fonts and icons smaller, or to change display-resolution to something other than what the system chose for me. But as it stands, I’d risk creating an effect that looks even worse than what I started with, which are settings that are presumably correct, because they were read directly from the hardware.

Also, an unexpected side-effect of having a high resolution in pixels, but large icons and widgets, is that this makes the screen-shots look slightly better, when viewed in this blog. Imagine trying to examine the windows above on an iPhone, but with tiny features… A reader who just happened to be reading my blog on an iPhone would certainly need to pinch-zoom a lot.

As it was with KDE 4, a basic way in which we customize our desktop, is first to right-click on the desktop, and to “Unlock Widgets” – in case they’re not already unlocked – and then to choose a Widget, which we can either Add To the Desktop, or which we can Add To any given Panel.

In the box used to select a widget, from which we drag it, each available widget has a number in its upper-left-hand corner, that indicates how many instances of the widget are already showing. Depending on whether we Drag it to a Panel or the Desktop, we get a slightly different result. And the (1) in the upper-left corner of the widget which is named “Task Manager” in the above screen-shot, denotes the fact that this widget has one instance, currently in the center of my one panel. I could choose to have a panel along the bottom of the screen, as well as another along the left-hand side…

There are two ways in which I find this to be better-managed under Plasma 5, than it was under KDE 4:

- *There were always* only these two basic ways, to build up our desktop. Only, under Plasma 5, the layout makes the choices more-obvious, whereas under KDE 4, I myself was more prone to look for additional ways, which did not exist.
- Under Plasma 5, if I delete a widget from either location, the number in the upper-left corner, of the selection box, gets updated reliably and immediately. Under KDE 4, the danger was very substantial, that this number would not get updated or removed, because in terms of code running in the background, the widgets would continue to be running, even though deleted, until we started a new user-session.

(Edit : )

A feature of Plasma 5, which has stayed the same as it was with KDE 4, is a possible “Folder View Widget”, that is only meant to be Added to the Desktop. Rather than having the special user-folder ‘~/Desktop’ appear as the entire desktop, we’d typically create a special widget, that continuously displays a given folder, that folder usually being ‘~/Desktop’.

This way, we can actually reserve more space on the desktop itself for widgets, and for individual launchers which we may also drag there.

I had overlooked the fact that this widget has its own Icon-Size Settings, under Plasma 5. On ‘Plato’, the icon-size was set rather large, and it was a simple matter just to reduce the icon size there, not system-wide.

It would have been a huge mistake on my part, to change the size of icons and/or fonts system-wide.

Dirk

]]>

I have run across the idea several times, that unknown people in the Industry, wish to block the users’ ability to create Non-DRMed content – that being either Video-DVDs or Video-Blu-rays – and to play back that content on standard playback-devices.

According to my own experiences, this idea is an unsubstantiated rumor. In addition to such a measure seeming pointless – wanting to block content of which nobody claims intellectual ownership – in every case I’ve encountered myself, there was some sort of technical bug or incompatibility, arising from the actual data and its formats.

In my experience, playback devices do what my computers do, which is to play back content as best their logic allows, as long as the content is *Not* DRMed. Just as with computers, the logic of these machines does not always follow ‘Human Common Sense’.

What this also suggests, is that the claim is largely hype, according to which UDF content urgently needs to be of version *2.50* . Higher UDF versions do serve a purpose, but that purpose was never meant to be, to restrict access to technology. Accordingly, I still find the concept plausible, that many consumers have content on Hybrid ISO9660 / UDF 1.02 disks, even if those disks were bought in a store, and that standalone playback-devices simply play them.

Dirk

]]>

I have written numerous postings, to guide myself as well as anybody else who might be interested, on the subject of Video-DVD burning, as well as on the subject of Video-Blu-ray burning. According to advice which I gave, it’s possible to use a program named “tsMuxerGUI”, to create the Blu-ray File Structure, which a Blu-ray playback-device will play.

According to additional advice I gave, it’s possible to burn these Blu-rays using some version of ISO-13346, which is also known as ‘UDF’, as opposed to burning them with ISO-9660, as the File System with which data is encoded on the disk.

But what I have noticed, is that certain Blu-rays which were burned in this way, will not play back using the application “VLC”. Normally, the open-source player named VLC can play back Blu-rays, which were commercially produced. So, it would seem natural, that we’d want to test our Blu-rays on the computer we used to create them, with the VLC application as the playback system.

My own experience has been, that the Blu-rays which result play back fine on my Sony Blu-ray playback-device, but do not open on VLC, on my computers.

As unlikely as this may seem, I did eventually arrive at the conclusion, that I’ve created two UDF-encoded Blu-rays which VLC cannot read, because of the customized UDF-encoding.

Apparently, when we instruct VLC to play a disk inserted into a specific Blu-ray drive, such as perhaps ‘/dev/sr1’, VLC expects to connect directly with the drive, rather than relying exclusively on the mount-point which Linux can create for us.

This is somewhat bewildering, because by default, I need to mount the disk in question as a regular user, which we can do from the notification tray, before VLC is capable of playing it. But then, whether VLC can in fact read the Blu-ray becomes a question independent of whether Linux was able to mount it, for the rest of the computer to use.
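
One way to see which of the two situations applies, is to check whether Linux currently has the drive mounted at all. This sketch assumes the drive is ‘/dev/sr1’, as above, and uses ‘findmnt’ from the util-linux package:

```shell
# mount_point_of: print the mount-point of a block device, or the
# words "not mounted" if Linux has not mounted it anywhere.
mount_point_of() {
    findmnt -n -o TARGET "$1" 2>/dev/null || echo "not mounted"
}

# Typical use:
#   mount_point_of /dev/sr1
```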

(Edit 10/23/2017 :

There is an even more improbable-sounding possibility, as to why this actually happens. It may be that VLC expects to be able to access the Media Key Block of an inserted Blu-ray Disk, in order to decrypt that, and to start playing back DRM-ed Blu-rays. This would require not only raw access to the disk, but also that such a Block be present on the disk.

If I translate this problem into Human Logic, I’ll get, that ‘VLC’ is only capable of playing Blu-rays that have DRM, when those Blu-rays are also ISO9660-compatible. This may be unfortunate, because even though UDF 2.50 is still not ‘the law of the land’, ISO9660-compatibility may be phased out one day, while DRM likely will not be. )

But there is a workaround. VLC includes in its menus, the ability to Play A Directory. We can choose this option, and can navigate to the mount-point which we created, when we mounted the disk from the notification tray. That mount-point should exist under the directory ‘/media’ , and have ‘BDMV’ as one of its sub-folders. And when we then direct VLC to play the folder that is the parent folder to the ‘BDMV’ sub-folder, we are in fact directing it to play the root-folder of the disk.
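
The check involved can be sketched as a small shell helper – the function name is my own: given a candidate directory, confirm that it contains a ‘BDMV’ sub-folder, before handing it to VLC.

```shell
# find_bluray_root: succeed, and echo the given directory, only if it
# looks like the root of a Blu-ray file structure - i.e., only if it
# has a BDMV sub-folder.
find_bluray_root() {
    if [ -d "$1/BDMV" ]; then
        printf '%s\n' "$1"
    else
        return 1
    fi
}

# Typical use, once the disk has been mounted from the notification tray
# (the mount-point path below is hypothetical):
#   dir="$(find_bluray_root /media/USER/DISKLABEL)" && vlc "$dir"
```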

And in my own, recent experience, VLC is then able to play the disk. I specifically took care, not to direct VLC to play the folder on my HD, from which I created one of the Blu-rays, but rather the folder that is the mount-point of an actually-inserted disk. Because, it would be pointless to conduct a test, which physically bypasses the disk.

But apparently, when we direct VLC to Play such a Directory, we are forcing it to use whatever ability Linux has to mount the disk, rather than whatever ability VLC would otherwise have, to read the disk directly. And then, if our Linux kernel allows us to mount UDF v2.50 File-Systems in read-only form, this will also allow us to preview or fully watch, any Non-DRMed Blu-ray which may have been encoded with that format.

(Edit 10/23/2017 : )

There is some uncertainty, as to whether VLC will also display our Disk Menu correctly. When we select, from the dialog-box belonging to VLC, that we wish to play back a Blu-ray, normally a check-box sets itself so as Not to show the Disk Menu, presumably because there are some issues with how reliably VLC is able to do so.

When we tell VLC to Play A Directory, I see no such check-box.

Dirk

]]>