One of the subjects I have revisited in recent weeks is the fact that Computer Algebra Systems, as well as other numeric toolboxes, can plot functions. A fact that should be pointed out is that plotting a function, either as a 2D or a 3D plot, is always a numeric operation, even when it’s offered as part of what a ‘CAS’ (a “Computer Algebra System”) can do. And a subcategory of what is sometimes offered is a 3D plot of an implicit function, kind of like this one:

This is a plot of complementary hyperboloids, which are the 3D counterparts to 2D hyperbolas.

What some people might wonder is how the refined toolbox works, that plots this type of implicit function. One way in which this can be done is by generating an ISO-Surface, which is a derived mesh along which a Density, computed from the X, Y and Z parameters, crosses a threshold value, which can just be named (H) for the sake of this posting.

And, in turn, such an ISO-Surface *can be* computed by using the ‘Marching Cubes algorithm’. If implemented as a geometry shader, this algorithm accepts one *Point* as input topology, and outputs from (0) to (4) triangles.

The question which this posting poses is whether the mesh output by such an algorithm will always include vertex normals. The short answer is No. Applications exist in which normals are computed, and applications exist where they are not. And so, because some users are used to high-end gaming, and to seeing shaded surfaces, which can only really be shaded if normals have been made available to a fragment shader, those users might find themselves asking why Mathematical plotting algorithms exist, which never compute real normals.

(Updated 5/07/2020, 16h15… )

(As of 5/06/2020: )

The synopsis I’d give to this answer is that the vertex normals must not only be approximated in some way, but must always come out as the same vector, regardless of which tessellated output triangle the vertex is a member of, *and regardless of which Geometry Shader invocation it results from*. Only then will the resulting surface look smooth rather than tessellated.

Hypothetically, Computer Algebra might try to compute the derivative of the density function, as a way to compute the normals of the surface as well. But, because the plotting tool is numerical, and because not all functions are continuously differentiable over their entire domain, this is not commonly done.

Instead, if one assumed that a volume texture was to supply a density (D), as part of what is really a 3D array, and if one wanted to output triangles with vertex normals as well, then one would additionally need to supply a volume texture, each element of which is a gradient vector (G), mapped exactly as the densities were mapped.

Then, if we fast-forward past the part of the Marching Cubes algorithm that just determines which set of triangles should be output from any one input cube, we’d find that an additional task the algorithm *could* perform is to compute the exact position of each output vertex, as a function of the constant positions (Pn) of the input cube’s vertices. If we simplify the problem, so that one output position (Pout) is to follow from two input positions (P1, P2), then an approach which will work is first to compute a parameter (t), which describes by how much (P2) is supposed to replace (P1):

```
float Interpolate(int D1, int D2, float H = 0.0) {
    float t = 0.0;
    if (D1 != D2) {
        t = (H - D1) / (D2 - D1);
    } else {
        // Degenerate edge: both densities already sit on the threshold.
        assert(H == float(D1));
    }
    // Optionally:
    assert(0.0 <= t && t <= 1.0);
    return t;
}
```

Then, this parameter can easily be applied to compute the position of one output vertex, like so:

```
Pout := (t * P2) + ((1 - t) * P1);
```

If we simply had a matching array supplying gradient vectors (G1, G2), then the output normal could also be derived, using the previously-computed parameter:

```
N := normalize( (t * G2) + ((1 - t) * G1) );
```
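Putting the last three snippets together, here is a minimal, self-contained C++ sketch of how one edge of a cube could yield both a vertex position and a vertex normal. The `Vec3` type and the `EdgeVertex()` function are hypothetical names of mine, not taken from any particular implementation:

```cpp
#include <cassert>
#include <cmath>

// A minimal 3-component vector, standing in for whatever vector type
// the real shader or host code would use.
struct Vec3 {
    float x, y, z;
};

static Vec3 Lerp(const Vec3 &a, const Vec3 &b, float t) {
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

static Vec3 Normalize(const Vec3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    assert(len > 0.0f);
    return { v.x / len, v.y / len, v.z / len };
}

// One edge of the cube: densities D1, D2 at positions P1, P2,
// with gradients G1, G2, crossing the threshold H.
static void EdgeVertex(float D1, float D2, float H,
                       const Vec3 &P1, const Vec3 &P2,
                       const Vec3 &G1, const Vec3 &G2,
                       Vec3 &Pout, Vec3 &Nout) {
    float t = (D1 != D2) ? (H - D1) / (D2 - D1) : 0.0f;
    Pout = Lerp(P1, P2, t);
    Nout = Normalize(Lerp(G1, G2, t));
}
```

For example, with D1 = 0, D2 = 1 and H = 0.5, the output vertex lands exactly at the edge’s midpoint.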

One obstacle here is that volume textures are already so exotic that, *if they are to be rendered by the GPU* and not by the CPU, there is usually some limitation to their per-element bit-depth. We might find that 8 bits per channel are available, but that 16 bits per channel are not. And so, to maximize efficiency in encoding an array of gradient vectors, it would seem that (G1) and (G2) also need to be normalized, before they are given an (8+8+8 -bit, R,G,B) colour value, and before they are stored, by the CPU, into the second volume texture.

Well, in some cases, the programmers who actually write the code that renders the ISO-Surface may simply say, ‘Skip that.’ They may choose an arbitrary value that is a single real number, colour-code it, and render *it* as the surface colour of the output mesh, which, again, is not supposed to reveal its original granularity: it offers 1 output colour value per fragment, not one output colour per mesh facet. That way, even though the mesh is faceted, that fact will not seem *as* obvious.

And in some cases, the one real number that gets output through a surface colour, could simply be the Z-value of the current fragment. The example above happens to be like that.

In order to simplify things further, that real number might only be *computed once per vertex*, but then interpolated, to result in per-fragment values, as the output triangles’ Fragment Shader invocations have their (input) varyings interpolated.

(Update 5/06/2020, 18h20: )

I suppose that an additional fact I should point out is that Geometry Shaders *that run on the GPU* face a limitation which Fragment and Vertex Shaders also face: each shader invocation cannot look beyond the values it has been given as one unit of input, in order to generate all the units of output it generates.

Hence, if a GS *running on the GPU* receives ‘one cube of variables’ as input, that’s all it can use, to generate all the properties, of the Points that it will Emit, including any Normal Vectors.

The way the CPU works is much more versatile, in that it can look at any of the elements of a 3D array, to make whatever computations it needs to make, and can also look beyond that array… This is why I suggested that the CPU would need to pack gradient vectors into a matching 3D array, so that those will have exactly the same values for every GS invocation which will later run on the GPU, regardless of which cube is being processed as input.

Numerical methods to approximate these gradient vectors certainly exist, that will be accurate enough to fool human vision into seeing a smooth but accurate surface, which can also be shiny, etc. One such method, based on central differences, can be:

```
G[X, Y, Z].x := D[X-1, Y, Z] - D[X+1, Y, Z];
G[X, Y, Z].y := D[X, Y-1, Z] - D[X, Y+1, Z];
G[X, Y, Z].z := D[X, Y, Z-1] - D[X, Y, Z+1];
```

(Assuming, that the normal vectors to result, face from ‘a more-positive inside’ region, towards a more-negative region that forms ‘the outside of the ISO-Surface’.)

```
G[X, Y, Z] := normalize(G[X, Y, Z]);
```
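As a sketch of how the CPU might fill the second volume texture, the following C++ fragment applies the central differences shown above to a small, flat array standing in for the 3D density texture. The dimensions and names are illustrative assumptions of mine, and boundary elements are simply skipped:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Dimensions of a hypothetical density volume (small, for illustration).
const int NX = 8, NY = 8, NZ = 8;

struct Grad { float x, y, z; };

// Index into a flat array that stands in for the 3D texture.
inline int Idx(int x, int y, int z) { return (z * NY + y) * NX + x; }

// Central differences, exactly as in the pseudocode above; only interior
// points are computed, so boundary handling is left out of this sketch.
void ComputeGradients(const std::vector<float> &D, std::vector<Grad> &G) {
    for (int z = 1; z < NZ - 1; ++z)
    for (int y = 1; y < NY - 1; ++y)
    for (int x = 1; x < NX - 1; ++x) {
        Grad g = { D[Idx(x - 1, y, z)] - D[Idx(x + 1, y, z)],
                   D[Idx(x, y - 1, z)] - D[Idx(x, y + 1, z)],
                   D[Idx(x, y, z - 1)] - D[Idx(x, y, z + 1)] };
        // Normalize, in preparation for 8-bit storage.
        float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (len > 0.0f) { g.x /= len; g.y /= len; g.z /= len; }
        G[Idx(x, y, z)] = g;
    }
}
```

With a density that simply decreases along (+X), every interior gradient comes out as the unit vector (+1, 0, 0), pointing from the more-positive region towards the more-negative one.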

This is very different from what a Geometry Shader running on the GPU could do, which would be limited to looking at 8 points at a time, not at (8 + (8*3) = 32) points at once. (Each cube corner has 6 axis-neighbours, but only 3 of them lie outside the cube itself.)

Yet, since ‘real software’ still consists of code that primarily runs on the CPU, which in turn sets up what the GPU should do, the following refinement might make ‘ISO-Surfaces with Normal-Vectors’ feasible:

Instead of every element of the volume texture consisting of a single grey-scale value, *or* of an (R,G,B) value, each element could be a familiar, 32-bit (R,G,B,A) value. The Alpha-channel could be used to store an unsigned Density, and as suggested earlier, the R,G, and B channels could store the normalized Gradient-vectors. That way, only a single volume-texture would need to be loaded at once…
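A sketch of such a packing might look like this, in C++, with each gradient component biased from [-1, 1] into an 8-bit channel and the density stored in the low byte. The function names and the channel order are assumptions of mine, not a standard layout:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack a normalized gradient (components in [-1, 1]) plus an unsigned
// density into a single 32-bit R,G,B,A texel.
uint32_t PackTexel(float gx, float gy, float gz, uint8_t density) {
    auto enc = [](float v) -> uint32_t {
        // Bias [-1, 1] into [0, 255].
        return (uint32_t)std::lround((v * 0.5f + 0.5f) * 255.0f);
    };
    return (enc(gx) << 24) | (enc(gy) << 16) | (enc(gz) << 8) | density;
}

// Recover one approximate gradient component from the packed texel.
float UnpackComponent(uint32_t texel, int shift) {
    uint8_t byte = (texel >> shift) & 0xFF;
    return ((float)byte / 255.0f) * 2.0f - 1.0f;
}
```

Because of the 8-bit quantization, the recovered components are only accurate to roughly 1/127, which is the price paid for fitting gradient and density into one familiar texel format.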

A drawback to doing things this way would be, the 8-bit limitation on Density values. For Gaming and CGI, 8-bit, unsigned Density values might be just fine, but for Scientific Computing, those might be too limiting.

(Update 5/06/2020, 18h40 : )

I suppose that another refinement is possible, that would allow the GS, *still running on the GPU*, to have only a volume texture as input, offering a single Density value for every X,Y,Z coordinate set, and at a bit-depth greater than 8. But the same GS could only have a single Point as its input topology (from which to Emit 0-4 triangles), and that Point could have an X,Y,Z coordinate set as its only property to pass along to the GS.

But, the GS could compute from this X,Y,Z input set where to sample the volume texture. The downside could be that the GS would need to sample the volume texture 32 times, as described above, for every Point of input topology, and certain points in the volume texture would then be sampled multiple times, as belonging to multiple invocations of the GS.

This might sound enormously inefficient. But the initial lack of efficiency could be offset, by the fact that modern GPUs also have a considerable number of cores, that run concurrently, beyond what the main CPU of a PC usually has.

(Update 5/06/2020, 19h10 : )

One must always be on the lookout, for how much the implementation of some core idea may have been dumbed-down.

What actual implementations of the Marching Cubes algorithm may do, instead of recomputing (t) many times, and in awkward combinations of ways, is, just to assume that (t) is always equal to (0.5).

(Update 5/06/2020, 21h05: )

I also wanted to comment on how Computer Algebra Systems generally work with implicit functions. The following would be such a function, in two variables, that defines a circle:

```
x^2 + y^2 == 1
```

What Computer Algebra Systems tend to do with these, is to derive the following expression from the equation shown:

```
(x^2 + y^2 - 1)
```

Then, if told to solve, a CAS will essentially try to find its roots. However, if told to plot this function, a domain of (x) and (y) will get rasterised, and ideally, there would be some combinations of (x) and (y) at which this expression equals zero. However, because such combinations often come close to zero without exactly equalling it, what a plot of an implicit function will do instead is to mark any pixels at which, when scanning, the sign of the expression switches from positive to negative, or vice-versa. If for some pixel combinations of the plot the expression is exactly equal to zero, then those pixels can be marked as well, but pixels at which the value went from being zero, to being positive or negative, are not marked.

In order to deal better with functions that form tangents with the X or Y axis, two sign comparisons can be done instead of one: once incrementing according to (X), and again incrementing according to (Y). If a switch in sign takes place according to at least one of these comparisons, the current pixel can be marked. If the function actually equals zero at the current pixel, that can be marked too.
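The two sign comparisons just described can be sketched as follows, in C++, using the circle expression from above. The grid size and the domain are arbitrary choices of mine, for the sake of illustration:

```cpp
#include <cassert>
#include <vector>

// The implicit expression derived above: f(x, y) = x^2 + y^2 - 1.
float F(float x, float y) { return x * x + y * y - 1.0f; }

const int NX = 64, NY = 64;   // pixel grid; the size is arbitrary

// Mark a pixel whenever the sign of the expression flips between it and
// its neighbour along X or along Y, or when the expression is exactly zero.
std::vector<bool> MarkSignChanges() {
    std::vector<bool> marked(NX * NY, false);
    auto sample = [](int i, int j) {
        // Map pixel (i, j) into the domain [-2, 2] x [-2, 2].
        float x = -2.0f + 4.0f * i / (NX - 1);
        float y = -2.0f + 4.0f * j / (NY - 1);
        return F(x, y);
    };
    for (int j = 0; j < NY; ++j)
    for (int i = 0; i < NX; ++i) {
        float v = sample(i, j);
        bool flipX = (i + 1 < NX) && (v * sample(i + 1, j) < 0.0f);
        bool flipY = (j + 1 < NY) && (v * sample(i, j + 1) < 0.0f);
        if (v == 0.0f || flipX || flipY)
            marked[j * NX + i] = true;
    }
    return marked;
}
```

The marked pixels trace the circle, while pixels deep inside or outside it, where the expression keeps one sign, stay unmarked.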

When this gets translated to 3 dimensions, it could well be the case that each corner of a cube only receives a binary (1) or a binary (0). If that’s the case, then the reader should also be able to see that, if the approach I wrote above were applied, it would make most sense for the CPU to put either zero or some constant value into the colour channel used to define density, just so that a Geometry Shader can test this channel in 8 voxels, and arrive at a simple, 1-byte integer, with which to select the set of Points to Emit.

In such a case, the corresponding constant to use for (H) would become zero, but (t) may already be a constant.
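For what it’s worth, the 1-byte integer just mentioned can be sketched like this. The corner numbering, and the convention that densities above (H) count as ‘inside’, are assumptions of mine; a real implementation would fix both in accordance with its edge- and triangle-tables:

```cpp
#include <cassert>
#include <cstdint>

// Classify the 8 corners of one cube against the threshold (H), packing
// the results into one byte. Each bit is 1 if that corner's density lies
// 'inside' the surface. The bit order shown is only a convention.
uint8_t CubeIndex(const float D[8], float H) {
    uint8_t index = 0;
    for (int corner = 0; corner < 8; ++corner)
        if (D[corner] > H)
            index |= (uint8_t)(1u << corner);
    return index;
}
```

The resulting byte, with values from 0 to 255, is what selects one of the precomputed sets of triangles to Emit.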

(Update 5/07/2020, 13h40: )

FWIW, if the reader is interested in Free, Open-Source software that can plot Mathematical functions in 3D, *and that can shade the resulting surfaces*, an example which does this is Euler Math Toolbox. A version of this software is also available from the Linux package manager. However, under Debian, the maintainers tend to drop support for the CAS named ‘Maxima’, which the Windows version has. So, instead of installing it from the package manager, I usually install it just once, under Wine. A disadvantage of doing so is that the Wine-installed version has no access to Python, nor to *POV-Ray*. *Yet, the Wine-installed version of this application does have access to the native installations of ‘Maxima’ as well as ‘LaTeX’*.

What needs to be remembered, though, is that the 3D plots that Euler will shade in this way are limited to *parametric* plots, and do not include implicit plots. What happens here is that Euler steps two parameters through a predefined set of increments, under the assumption that doing so will generate a mesh. These parameters are then translated into combinations of X, Y and Z values, which become vertex coordinates. Under those conditions, Euler is also able:

- First, to compute face-normals,
- Then, to average those face-normals for the vertices,
- Next, to shade the triangles, with interpolated normals.

The big difference here, compared to what an ISO-Surface would bring, is that it’s easy for the algorithm that generated the mesh to identify which of its quads is adjacent to any current quad. In the case of the Marching Cubes algorithm, the assumption is made that each shader invocation, as I wrote earlier, is unaware of whatever triangles were output by the other shader invocations, and so cannot average the face-normals that resulted.

Also, if the Marching Cubes algorithm has in fact been simplified so that (t) is always equal to (0.5), which means, in more concrete terms, that the vertices of the output triangles are always positioned exactly at the midpoints of the input cubes’ edges, the standard approach would also imply that only a small set of vertex normals can ever result, depending on which type of cube neighbours which other type of cube… Computing vertex normals only becomes worthwhile when the possible results are not severely constrained.

These two tests represent what the results can look like (from parametrically generated meshes, the vertices of which can occupy any out of a continuous interval of positions):

It immediately results in an error message, if I try to apply the “>hue” parameter to a 3D, *implicit* plot.

The capability still works in the simpler case, where a function for a 3D-plot is merely defined as depending on (X) and (Y) as arguments, and where the generated value turns into an implied (Z) coordinate.

(Update 5/07/2020, 16h15: )

When “Euler Math Toolbox” is installed under Windows, the optional software also installed to Windows, which ‘EMT’ can make use of, includes ‘Maxima’, ‘LaTeX’, ‘Python’ and ‘POV-Ray’, the last of which is a ray-tracing, 3D-rendering program, whose graphics are much superior to what ‘EMT’ can render by itself.

Yet, when Linux users install ‘EMT’ from their package manager, they have a bare-bones application, that does not give access to these additional tools. And what the reader of this posting might wonder is, ‘Considering that EMT is open-source software either way, why would the Debian maintainers not include all the features which this application can support?’

I would speculate that the answer has to do with a concept that exists under Linux, which I could just call ‘Dependency Hell’, and which Debian specifically tries to avoid. As soon as the main application draws on the abilities of 4 other software packages, or of an even larger collection of packages, a type of complaint which the maintainers often receive is: ‘I installed the application, but some of its features don’t work out-of-the-box.’ And what this has led the maintainers to do in most cases, is either:

1. Make the additional, system-installed features dependencies of the current package, or
2. Compile the application without support for the given feature.

Option (1) above can lead to the result that, because a user wants to install one application, he ends up pulling in maybe 1000 additional packages. And this is what I’d call ‘Dependency Hell’. As an alternative, what can sometimes be done is that certain features will not activate when the user tries to use them, but in a way that gently reminds him that he needs certain packages to be installed… But Debian maintainers don’t usually like that option.

And so, a packaged version of ‘EMT’ exists, and in my experience, the Windows version, installed under Wine, has many additional features, though not ‘Python’ or ‘POV-Ray’.

I suppose that another possibility is that the version of ‘EMT’ that has been ported to Debian is very old, compared to what we can obtain from the original author. In any case, the Debian version should really be viewed as a fork.

Dirk
