Touching Up Some More Source Code

In a previous posting, I had already described a problem that can exist with my amateur attempts to build “Graphical User Interfaces” – GUIs – where the application window could lack a basic feature: when the user resizes it – or, when the O/S resizes it because the desktop is too small for the application window’s default size – certain behaviours of the window are supposed to update, but did not.

Specifically, it is not automatic that the Layout Widget, which spaces out the sub-widgets in the main application window, also resizes when the application window does so.

And so, there was a second application which I had written, for which I thought it would be meaningful to update the source code to the correct behaviour.

In this case, that very minor update to the source code brought on greater complexity, because I did, in fact, compile Windows 32-bit and Windows 64-bit binaries for it. Additionally, I provided an updated Linux 64-bit AppImage. The affected files can be found here:

https://dirkmittler.homeip.net/binaries/

And the downloadable files are the ones whose names begin with ‘Dirk_Roots_GUI_1...’.

Enjoy,
Dirk

Polynomials: I’ve just made a text-based UI slightly more user-friendly.

One of the facts which I’ve written about before is that I had written a C++ program which finds numerical approximations to the roots of polynomials, preferably difficult polynomials to which there can be no exact solution. And one of the things about my program which I was never completely satisfied with had less to do with the actual algorithm, which gets closer and closer to each root, and more to do with how my code could be inaccessible to those readers who do not already know how to use a compiler.

Specifically, I had written a version of the program which somebody executes from ‘the Command Prompt’, as it’s called under Windows, because under Linux, power-users often access features with one-line commands. Yet, if a user is not such a power-user, then what they might do under Windows is just to double-click on the .EXE file (the Application icon), observe that a console window briefly appears, and then watch it disappear again. There could be users who do not know how to navigate their Command Prompt window to the folder into which they unzipped the program, and then to execute that program from within their Command Prompt / Terminal window.

And so, even though I kept the code roughly the same – it’s still text-based – I changed it subtly, to help out potential readers who may have had that problem. You may try it and see if you like it. The compressed files can be found at the following location:

https://dirkmittler.homeip.net/binaries/

And, within the table of contents which gets displayed there, the reader would either be interested in the file ‘Roots_Dirk_Mittler.tar.gz’, or the file ‘Roots_Dirk_Mittler.zip’ – in either case, the file-name that begins with a single ‘R’.

Enjoy,

Dirk

Revisiting the subject of approximating roots of polynomials.

In an earlier posting, I had written about an approach for how to find approximations of the roots of polynomials of arbitrary degree, but in such a way as also to find all the complex roots. (:1)

But with such strategies, there are issues. One concept was that a home-grown search algorithm would get close to the actual root. Next, polishing would make the result more accurate. And then, an augmented division would be computed – which is also referred to as “Deflation” – resulting in a deflated polynomial, repeated as many times as the original polynomial’s degree, minus one.
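The deflation step just described can be sketched as synthetic division of the polynomial by (x − r), for a found root r. This is only a minimal illustration of the technique, not the program’s actual code, and the function name is hypothetical:

```cpp
#include <vector>
#include <complex>

// Deflate a polynomial (coefficients in descending order of degree)
// by a known root r, using synthetic division. The quotient's degree
// is one less than the input's. (Hypothetical helper, for illustration.)
std::vector<std::complex<double>>
deflate(const std::vector<std::complex<double>> &coeffs,
        std::complex<double> r)
{
    std::vector<std::complex<double>> quotient;
    std::complex<double> carry = 0.0;
    // All but the last coefficient contribute to the quotient;
    // the final carry would be the remainder, which is ~0 only if
    // r is an accurate root.
    for (std::size_t i = 0; i + 1 < coeffs.size(); ++i) {
        carry = carry * r + coeffs[i];
        quotient.push_back(carry);
    }
    return quotient;
}
```

For example, deflating x³ − 303x² + 30602x − 1030200 by its root 100 yields x² − 203x + 10302, i.e. (x − 101)(x − 102). Notably, any error in the supplied root carries directly into the quotient’s coefficients.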

Pondering this issue today, I realized that there was still a conceptual weakness in that approach: the fact that some small amount of error is tolerated in the first root found, so that each successive deflated polynomial contains a progressively greater degree of error. What effectively happens next is that accurate roots are found of increasingly inaccurate polynomials, and there appeared to be few ways to detect and correct the resulting errors in the roots found afterwards. Theoretically, this problem could progress to the point where doubt is evoked as to whether the roots found later are even roots of the original polynomial, since by that time, the object whose roots are being found is no longer that original polynomial.

(Update 6/08/2020, 18h35… )


One weakness of my algorithm, regarding polynomials.

Several months ago, I wrote a C++ program (in a C coding style) that computes numerical approximations of the roots of an arbitrary polynomial. Even though I have changed the code slightly since March, mainly to improve the chances that it will compile on other people’s computers, the basic algorithm hasn’t changed. Yet, because I wrote the algorithm, I know that it has certain weaknesses, and I’m about to divulge one of those…

The main search algorithm isn’t able to determine whether (x) is sufficiently close to a root. It’s really only able to determine whether (y) is sufficiently close to zero. And so, one idea an implementer could have would be to name the maximum allowable error in (x) ‘epsilon’, to compute the derivative of the polynomial at the given value of (x), and then to multiply ‘epsilon’ by this derivative, perhaps naming the derived, allowable error in (y) ‘epsilond’. But I consciously chose not to go that route because:

  • Computing the real derivative of the polynomial each time, though certainly doable, would have made my code complex beyond what I was willing to tolerate,
  • Even had I done so, certain polynomials will have a uselessly low derivative near one of their roots. This can happen because of a doubled root, or simply because two roots are too close together. ‘epsilon’ is already as small as is practical, given the number of bits the variables have to work with – that is, a 52-bit significand.
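For clarity, the rejected scheme could be sketched like this – a hypothetical illustration only, not code from my program – evaluating the true derivative by Horner’s rule and scaling the allowable (x)-error into an allowable (y)-error:

```cpp
#include <vector>
#include <cmath>

// Evaluate the derivative of a real polynomial (coefficients in
// descending order of degree, non-empty) at x, via Horner's rule
// applied to the derived polynomial.
double poly_derivative_at(const std::vector<double> &c, double x)
{
    double acc = 0.0;
    std::size_t n = c.size() - 1;              // the polynomial's degree
    for (std::size_t i = 0; i < n; ++i)
        acc = acc * x + c[i] * double(n - i);  // coefficient times old exponent
    return acc;
}

// The rejected idea: convert a tolerance on x into a tolerance on y,
// using the magnitude of the local slope.
double epsilond(const std::vector<double> &c, double x, double epsilon)
{
    return epsilon * std::fabs(poly_derivative_at(c, x));
}
```

The second bullet point can be seen at work here: for x³ − 303x² + 30602x − 1030200, whose roots 100, 101 and 102 are close together, the true derivative at x = 100 is only 2, so the derived tolerance on (y) would become uselessly tight.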

So my program, instead, computes a ‘notional derivative’ of the polynomial near (x), which is really just the derivative of the highest-exponent term by itself. This little detail could actually make it difficult for some observers to understand my code. At the same time, my program limits the lower extent of this derivative to an arbitrary, modest fraction, just so that (x) won’t need to be impossibly precise, just to satisfy the computation of how precise (y) would need to be.
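The ‘notional derivative’ just described can be sketched as follows. Again, this is not my program’s actual code; in particular, the clamping constant shown is only a placeholder, since the actual fraction used isn’t stated here:

```cpp
#include <cmath>
#include <algorithm>

// 'Notional derivative': the derivative of the leading term alone,
// i.e. of a*x^n, which is a*n*x^(n-1), clamped from below so that
// (x) never needs to be impossibly precise.
// (Sketch only; kFloor is a placeholder value.)
double notional_derivative(double lead_coeff, int degree, double x)
{
    const double kFloor = 0.0625;  // placeholder for the 'arbitrary, modest fraction'
    double d = std::fabs(lead_coeff * degree * std::pow(x, degree - 1));
    return std::max(d, kFloor);
}
```

For the cubic above, this notional derivative at x = 100 is 3·100² = 30000, even though the real derivative there is only 2 – which is exactly the discrepancy that the next paragraph describes.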

But then, a predictable problem is that the real derivative of a given polynomial might be much smaller than just the derivative of this highest-exponent term. And if it is, then satisfying … for (y) will no longer satisfy … for (x).

The following output illustrates an example of this behaviour:


"Maxima" Preparation:

(%i1) expand((x-100)*(x-101)*(x-102));
(%o1) x^3-303*x^2+30602*x-1030200

(%i2) expand((x-100)*(x-110)*(x-120));
(%o2) x^3-330*x^2+36200*x-1320000

(%i3) expand((x-100)*(x-101));
(%o3) x^2-201*x+10100


Approximations from Program:

dirk@Phosphene:~$ poly_solve 1 -303 30602 -1030200

101.00000000013  +  0I
99.999999999938  +  0I
101.99999999993  +  0I


dirk@Phosphene:~$ poly_solve 1 -330 36200 -1320000

100  +  0I
110  +  0I
120  +  0I


dirk@Phosphene:~$ poly_solve 1 -201 10100

100  +  0I
101  +  0I


dirk@Phosphene:~$ 

(Updated 9/22/2019, 5h40 … )
