How 3D-plotted implicit functions are often inferior to ISO-Surfaces rendered for 3D Gaming.

One of the subjects which I've revisited in recent weeks is the fact that Computer Algebra Systems, as well as other numeric toolboxes, can plot functions. And a fact that should be pointed out is that plotting a function, whether as a 2D or a 3D plot, is always a numeric operation, even when it's being offered as part of what a 'CAS' (a "Computer Algebra System") can do. And so, a subcategory of what is sometimes offered is a 3D plot of an implicit function, kind of like this one:

(Figure: hyperboloid)

This is a plot of complementary hyperboloids, which are the 3D counterparts to 2D hyperbolas.

What some people might wonder is how the refined toolbox that plots this type of implicit function actually works. One way in which this can be done is by generating an ISO-Surface, which is a derived mesh along which a density, computed from the X, Y and Z parameters, crosses a threshold value, which can just be named (H) for the sake of this posting.
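As a concrete illustration, and only as my own sketch rather than the code behind the plot above, the pair of complementary hyperboloids can be described by the density D(x, y, z) = x^2 + y^2 - z^2, with the surface extracted where D crosses the threshold (H = +1) for the one-sheet hyperboloid, or (H = -1) for the two-sheet hyperboloid:

#include <cstdio>

// Density field whose ISO-Surfaces at H = +1 and H = -1 are the one-sheet
// and two-sheet hyperboloids, respectively. (Illustrative only; the actual
// field used by a given plotting toolbox may be defined differently.)
static double density(double x, double y, double z)
{
    return x * x + y * y - z * z;
}

int main()
{
    const double H = 1.0;   // threshold for the one-sheet hyperboloid

    // Sample two points and report on which side of the ISO-Surface they lie.
    double a = density(0.5, 0.5, 0.0);   // D = 0.5, below H
    double b = density(2.0, 0.0, 0.0);   // D = 4.0, above H

    std::printf("D(0.5, 0.5, 0.0) = %.2f (below H = %.2f)\n", a, H);
    std::printf("D(2.0, 0.0, 0.0) = %.2f (above H = %.2f)\n", b, H);
    return 0;
}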

And, in turn, such an ISO-Surface can be computed by using the 'Marching Cubes' algorithm. When it's used for real-time rendering, this algorithm can be implemented as a geometry shader, which accepts one Point as its input topology, and which outputs a number of triangles per grid cell, from (0) up to (5) in the classic lookup table.
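To give a feel for what one cell of the Marching Cubes pass does, the following sketch, again mine and greatly simplified, since the real algorithm also needs its 256-entry edge and triangle lookup tables, only forms the 8-bit case index from one cube's corner densities. That case index is what determines which triangles, if any, get emitted for the cell:

#include <array>
#include <cstdio>

// One cube of the sampling grid: the density sampled at its 8 corners.
// The corner ordering here is arbitrary; a real implementation has to match
// the ordering assumed by its edge and triangle lookup tables.
using CubeCorners = std::array<double, 8>;

// Build the Marching Cubes case index: one bit per corner, set when that
// corner's density lies below the threshold H. The resulting index (0..255)
// would normally select which cube edges the surface crosses, and which
// triangles, possibly none, to emit for this cell.
static unsigned caseIndex(const CubeCorners &d, double H)
{
    unsigned index = 0;
    for (unsigned corner = 0; corner < 8; ++corner)
        if (d[corner] < H)
            index |= (1u << corner);
    return index;
}

int main()
{
    const double H = 1.0;

    // A cube straddling the ISO-Surface: some corners below H, some above.
    CubeCorners d = { 0.2, 0.4, 1.3, 1.8, 0.1, 0.9, 1.1, 2.0 };

    std::printf("case index = %u\n", caseIndex(d, H));

    // An index of 0 or 255 means the cell lies entirely on one side of the
    // surface, and contributes no triangles at all.
    return 0;
}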

The question which this posting poses is whether the mesh output by such an algorithm will always include vertex normals. And the short answer is No. Applications exist in which normals are computed, and applications exist where they are not. And so, because some users are used to high-end gaming, and to seeing shaded surfaces, which can only really be shaded if normals have been made available to a fragment shader, those users might find themselves asking why Mathematical plotting algorithms exist which never compute real normals.
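For the sake of completeness, one common way to obtain per-vertex normals from such a density field, though I'm not claiming that any specific plotting toolbox does it this way, is to take the normalized gradient of the density at each vertex, approximated with central differences. The sketch below reuses the hyperboloid density from above and is only an illustration:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// The same sort of scalar density field that the ISO-Surface is extracted from.
static double density(double x, double y, double z)
{
    return x * x + y * y - z * z;
}

// Approximate the gradient of the density with central differences, then
// normalize it. On an ISO-Surface, this gradient points along the surface
// normal, which is exactly what a fragment shader needs for shading.
static Vec3 vertexNormal(double x, double y, double z, double h = 1e-4)
{
    Vec3 g = {
        (density(x + h, y, z) - density(x - h, y, z)) / (2.0 * h),
        (density(x, y + h, z) - density(x, y - h, z)) / (2.0 * h),
        (density(x, y, z + h) - density(x, y, z - h)) / (2.0 * h)
    };
    double len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 0.0) { g.x /= len; g.y /= len; g.z /= len; }
    return g;
}

int main()
{
    // A point lying on the one-sheet hyperboloid (x^2 + y^2 - z^2 = 1).
    Vec3 n = vertexNormal(1.0, 0.0, 0.0);
    std::printf("normal = (%.3f, %.3f, %.3f)\n", n.x, n.y, n.z);
    return 0;
}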

(Updated 5/07/2020, 16h15… )


Secondary Polishing

When the project is undertaken to write programs that act as sub-components of Computer Algebra Systems, but that produce floating-point numerical outputs, an unwanted side effect of how those work is that values which should be integers (whole numbers) can be output differing from those integers by some very small fractional amount. Thus, instead of outputting (1) exactly, such a program might output:

0.9999999999999994

The problem is that such output can be visually misleading and confusing, because a Human user wants to know that the answer to the problem was (1). And so a possible step in the refinement of such programs is "Secondary Polishing", which does not change the actual computations, but which makes the output 'look nicer'.

I recently completed a project that approximates the roots of arbitrary polynomials, and I also looked into the need for secondary polishing. There was one specific situation in which it was not required: when the root's real or imaginary component had an absolute value of (1/10) or greater. In that case, the fact that I had set the precision of the printed output to (14) significant digits, while the roots found are accurate to within (10^-14), at least after the actual, primary polishing that affects the computed values, together with the way the standard output functions work in C++, will cause the example above to be printed as a single-digit (1), even though what was stored internally might differ from that by less than (10^-14). But a special case exists within the norms of C++, when the absolute value of the numerical term to be output is less than (1/10): because the precision counts significant digits rather than decimal places, such a term can still be printed with its tiny residue visible.
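Just to make the idea concrete, here is a minimal sketch of what secondary polishing could look like. It is not the code from my actual project, and the tolerance of (10^-11), as well as the left-over residue of (3.1 * 10^-15), are only examples. Before printing, a value that already lies within the tolerance of a whole number is snapped to that whole number, while the value stored internally is left alone:

#include <cmath>
#include <iostream>
#include <iomanip>

// Secondary polishing, as a cosmetic step only: the stored value is never
// changed; only the copy handed to the output routine gets snapped to the
// nearest integer, when it is already within 'tol' of one.
// (Illustrative sketch; the tolerance of 1e-11 is an arbitrary example.)
static double polishForOutput(double value, double tol = 1e-11)
{
    double nearest = std::round(value);
    return (std::fabs(value - nearest) < tol) ? nearest : value;
}

int main()
{
    std::cout << std::setprecision(14);

    // At 14 significant digits, this already prints as (1), unpolished:
    double realPart = 0.9999999999999994;

    // But a tiny residue, which should really be (0), still shows up,
    // because 14 *significant* digits reach far below (1/10):
    double imagPart = 3.1e-15;   // made-up example of such a residue

    std::cout << "unpolished: " << realPart << " + " << imagPart << "i\n";
    std::cout << "polished:   " << polishForOutput(realPart)
              << " + " << polishForOutput(imagPart) << "i\n";
    return 0;
}

The unpolished line still betrays the residue in the imaginary component, while the polished line reads as (1 + 0i), which is what a Human user actually wants to see.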

(Updated 2/11/2019, 19h35 … )


Noticing when SageMath is using IPython, instead of Maxima.

One of the subjects of my recent postings has been a Computer Algebra System called "SageMath", which I was able to install on my Debian / Stretch (Debian 9) computer named 'Plato'. One of the distinctions which I left slightly blurred was the distinction between Computer Algebra and Numerical Tools. The former refers to the ability of a computer to manipulate symbols, in the way Algebra manipulates them, in order to solve equations which Humans might find tedious or too time-consuming to solve. This can lead to answers that are theoretically exact, but that can sometimes be useless, because the numerical equivalent is only available indirectly.

Numerical Tools are more numerous under Linux, and offer theoretically inexact solutions to equations, simply because the numerical answers have a limited number of decimal places after the point or comma. Yet, the numerical answers can sometimes be much more useful than Algebraic answers, for reasons that I think are self-explanatory.

SageMath offers both. In order to do Algebra, SageMath uses “Maxima” as its back-end. And under Debian Linux, installing SageMath actually installs a separate version of Maxima, which users are not supposed to use directly.


A Hypothetical Algorithm…

One of the ideas which I've written about often is that, when certain Computer Algebra Software needs to compute the root of an equation, such as of a polynomial, an exact Algebraic solution, which is also referred to as the analytical solution, or as symbolic Math, may not be at hand, and that therefore the software uses numerical approximation, in a way that never churned out the Algebraic solution in the first place. And while it might sound disappointing, often the numerical solution is what Engineers really need.

But one subject which I haven't analyzed in depth before is how this art might work. This is a subject which some people may study in University, and I never studied it. I can see that in certain cases, an obvious pathway suggests itself. For example, if somebody knows an interval for (x), and if the polynomial function of (x), that being (y), happens to be positive at one end of the interval and negative at the other end, then it becomes feasible to keep bisecting the interval: if (y) is positive at the point of bisection, that point's value of (x) replaces the 'positive' end of the interval, while if (y) is negative there, its value of (x) replaces the 'negative' end. This can be repeated until the interval has become smaller than some amount, by which the root is allowed to be inaccurate, as the sketch below illustrates.
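To make that pathway concrete, here is a minimal sketch of the bisection just described. It's only an illustration, not code from any project of mine, and the polynomial y = x^3 - 2x - 5, as well as the tolerance of (10^-12), are arbitrary choices:

#include <cstdio>
#include <functional>

// Bisection: given an interval [lo, hi] across which f changes sign, keep
// halving it, always retaining one endpoint where f is negative and one
// where f is positive, until the interval is narrower than 'tol'.
static double bisect(const std::function<double(double)> &f,
                     double lo, double hi, double tol = 1e-12)
{
    double fLo = f(lo);
    while ((hi - lo) > tol)
    {
        double mid = 0.5 * (lo + hi);
        double fMid = f(mid);

        // Keep whichever half-interval still brackets the sign change.
        if ((fLo < 0.0) == (fMid < 0.0)) { lo = mid; fLo = fMid; }
        else                             { hi = mid; }
    }
    return 0.5 * (lo + hi);
}

int main()
{
    // An arbitrary example polynomial: y = x^3 - 2x - 5, which is negative
    // at x = 2 and positive at x = 3, so a real root lies between them.
    auto poly = [](double x) { return x * x * x - 2.0 * x - 5.0; };

    std::printf("approximate real root: %.12f\n", bisect(poly, 2.0, 3.0));
    return 0;
}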

But there exist certain cases in which the path forward is not as obvious, such as what one should do if one is given a polynomial of even degree that only has complex roots, yet those complex roots nevertheless need to be found. Granted, in practical terms such a problem may never present itself in the lifetime of the reader. But if it does: I just had lots of idle time, and have contemplated an answer.

(Updated 1/30/2019, 13h00 … )
