One of the things I did in the past was to write a C++, Qt5 application, which was meant as a test of how to incorporate certain GUI elements into a program. Those elements extended as far as forcing the program to display with my own desktop theme, even on target computers that do not have this theme set or installed.

I was so focused on that aspect of the project that I had overlooked a simpler aspect of it: What would happen if the user decided to resize the window? What happened so far was ugly but, as of tonight’s update, the behaviour has been made much nicer.

The relevant files can be found in the following directory on my site:

https://dirkmittler.homeip.net/binaries/

The files are the ones that begin with ‘`Creator_Test6...`’.

There is also a compiled AppImage that will run on decently up-to-date Linuxes, but no binary to run under Windows. Sorry for that last omission. However, readers who have the Qt SDK installed should be able to compile under Windows as well. What needs to be done, when a Qt project is being recompiled with another computer’s SDK, is that the file ‘`Creator_Test6.pro.user`’ typically needs to be deleted, as it contains details specific to one version of the SDK. Because of this, the Project Configuration – i.e., the Kits that are to be compile targets – will then also need to be redefined by any interested power-user.

There’s a small observation to add about this software project. The concept that its inherent desktop theme should display only really works when running the AppImage under Linux. That is because I linked the theme engines in, using the command-line tool ‘`linuxdeployqt`’. Those were binary plug-ins that I do not even possess the source code for. Hence, if the reader custom-compiles the same project, they will find that those plug-ins have not been compiled, and that, for this reason, the app will display with the local theme at best.

Enjoy,

Dirk


One of my recent undertakings has been to extend my knowledge of Python, in which I was previously only capable of writing procedural code, to include how to write Object-Oriented Python.

In the process, I began to think of what advantages I might now have, with that ability. And one answer which presented itself was of the form: ‘I already know enough about the Qt Library to use it for some C++ programs. It has a Python binding sometimes referred to as PyQt. With the ability to write Object-Oriented Python, I should also gain the ability to write GUI applications in Python – eventually.’

The result of my recent exercise can be found at this URL:

https://dirkmittler.homeip.net/binaries/

The compressed files which contain my first project using PyQt are named ‘`PyQt_Test_1_s.gz`’ and ‘`PyQt_Test_1_s.zip`’. Either of those compressed archives needs to be unzipped to a folder, in which there should be a total of 4 Python scripts. Python 3 would need to be run on the script named ‘`AppStart.py`’.

I’m sorry to start so small.

Oh, yes… In order for these scripts to run, the reader’s Python installation would need to include PyQt5. Not all do.

Dirk

The following link should not be taken as the complete solution to anybody else’s computing task, in which my readers might want to wrap Python code for use in SageMath. In fact, I don’t even consider myself an expert in either of these subjects.

For people who do not know, “SageMath” is a combination of a “Computer Algebra System” – ‘CAS’ – and a ‘Numerical Toolbox’. SageMath is written using multiple languages, mainly Python, and its ‘Maxima’ CAS back-end was written in LISP. ‘mpmath’ is a specific Python package that allows multi-precision arithmetic.

The integration of ‘mpmath’ is particularly straightforward, because Sage already uses this package. But the principles that would be used for other Python packages are similar. An interface must be established, by which Sage objects can be translated into the objects specific to the external Python packages, and back into objects that Sage can recognize again. Objects that Sage finds particularly useful, are ‘Symbolic Functions’ – that are to be manipulated algebraically – and ditto for ‘Symbolic Expressions’.

If worse comes to worst, the data generated by the external, ‘wrapped’ code may be converted into native Python objects such as strings. However, Sage does *not* recognize strings as valid Symbolic Expressions. So, one way around that could be to call Sage’s ‘`sage_eval()`’ function on the strings. (:1)

http://dirkmittler.homeip.net/LambertW%20Test%203%20–%20Sage.html

(Updated 7/28/2021, 12h15… )

**1:)**

One of the concepts implied by this exercise is that, unlike how it is with the ‘mpmath’ Python package, Numerical Functions, which are really just arbitrary Python functions, will need to be substituted into a (SageMath) Symbolic Expression without the ‘`mpmathify()`’ function being available. And, in order for that to work, SageMath needs to be told, *within the Python function*, to compute an approximation of a kind of function only SageMath understands, and to store that approximation in a string.

Only recently, I made some adjustments to my script / notebook, that finally accomplish this.

What was slowing me down was the fact, that an approximation was being asked for, inside another, parent approximation. This requires special consideration.

Enjoy,

Dirk


My recent postings have rambled at some length about the open-source program ‘FreeFem’, the purpose of which is to solve Partial Differential Equations that are strictly defined, but which often won’t have exact, analytical solutions. FreeFem approximates their solutions numerically.

My own *formal* background doesn’t extend much beyond Calculus 2, such that I wasn’t even taught “Ordinary Differential Equations” – aka ‘ODE’s – in a classroom. But what that really means is just that I can’t solve one manually. I can still comprehend what problem is being defined and, given computers that can solve those problems, can also feed them to my computers to solve.

Of course, PDEs are more difficult than ODEs, because PDEs are multi-variable.

Long story short, my recent postings have had two main subjects: They have asserted that real-world PDEs, like real-world ODEs, are usually more complicated than just a form that can be converted directly into an integral. And secondly, I’ve mused over how, then, FreeFem will go about solving them anyway. That second part is speculative.

But, just to make my point, the following is a PDE which only makes full use of 1 of its 2 available variables, and which happens to be simple enough to state a simple integral, albeit implicitly. Here is the FreeFem script:

```
// For label definition.
int Bottom=1;
int Right=2;
int Top=3;
int Left=4;
// The triangulated domain Th is on the left side of its boundary.
mesh Th = square(10, 10, [(x*2)-1., (y*2)-1.]);
plot(Th, ps="ThRectX.eps", wait = true);
// Define a function f.
func f = x * y;
// The finite element space defined over Th is called Vh here.
fespace Vh(Th, P2);
Vh u, v; // Declare u and v as piecewise P2-continuous functions.
// Get the clock in seconds.
real cpu=clock();
// Define a PDE, that just integrates f with respect to Y...
solve SimpleXInt(u, v)
    = int2d(Th)(
        dy(u) * v // Let this be valid syntax.
    )
    - int2d(Th)(
        f*v
    )
    + on(Bottom, u=0); // The boundary condition at the start of the integral.
// Plot the result...
plot(u, cmm="Is this an example of the Poisson Equation? f=x*y",
    ps="SimpIntX.eps", value=true, wait = true);
// Display the total computational time.
cout << "CPU time = " << (clock()-cpu) << endl;
```

The way this script works hinges on a simple idea: ‘`dx(u)`’ can accurately be computed by FreeFem as ‘the derivative of (u) with respect to (x)’, and evaluates to a real number. Since it was possible just to multiply a function that also evaluates to a real number by ‘`v`’, and thus form the RHS of the equation, it should be just as easy to write ‘`dx(u) * v`’ as the LHS. And, after having fixed some minor technicalities peculiar to computing first-order integrals, one can see that this valid syntax computes ‘`f(x,y):=x*y`’, and then integrates it once, in the direction in which the Y-axis is positive.

Predictable.

(Updated 7/25/2021, 22h00… )

One question which a person might ask, who is curious as to how FreeFem accomplishes that *two* boundary conditions be met, with an actual PDE and not just a simple integral, would be: ‘What would happen if, on Line 29, the same boundary value were set for `Top` as well as `Bottom`?’ We know that, in the case of a simple integral, the problem would end up overdefined. But, because FreeFem is based on numerical methods, it cannot really recognize this fact. It produces no compile-time errors but, when solving, produces an anomalous plot, where (u) repeatedly hits extreme positive values. Why?

To the best of my understanding, the effect that ‘`dy(u)`’ has on the expression will have the opposite sign as belonging to ‘`Top`’, from what it has as belonging to ‘`Bottom`’… This will cause changes in (u) to cancel out when just multiplied by ‘`v`’ (which in turn leads to the situation that attempts to balance the expression fail, with all values of (u) applied). With more than one boundary set, firstly, the integral of the Test Function changes, and secondly, ‘`dy(v)`’ would also have opposing signs, as belonging to each boundary, so that the results from the product ‘`dy(u)*dy(v)`’ will obtain consistency again. Only then will changes accumulated into (u) have negative feedback on the value of the expression, and some form of result become stable.

Dirk


One of the subjects which has been fascinating me in recent days and weeks has been the program ‘FreeFem’: an open-source program that approximates solutions to “Partial Differential Equations” – ‘PDE’s – such that the solutions result as Finite Element functions, the interpolations of which are also continuous.

A stipulation is to be solved for each time: that an equation which subtracts some sort of ‘regular function’ from the gradients of the solution-function results in values that converge on zero. Presumably, one of the many strategies which FreeFem applies to achieve this result is successive approximation. I’ve written, in numerous postings, what my hypotheses are as to the detailed calculations which FreeFem might be performing. But in reality, the program achieves its goals so well that the underlying Math can be difficult if not impossible to reveal, in a scripting language which is supposed to solve the PDE instead, in such a way that evidence of the methodology cancels out of the solution. Thus, I have no real proof for most of my hypotheses.

One *recent observation* which I did want to follow up my own postings about was that the default situation for the PDEs is one in which a Test Function is to be matched by the gradients of the solution, and in which Dirichlet boundaries are defined, at which the value of the solution must be one exact value for each boundary. In contrast with this situation, FreeFem allows PDEs to be defined using the ‘`- int1d(...) (...)`’ expression, which then replaces such a Test Function, and which effectively tells the solver, ‘Please don’t integrate this function.’

And, because the developers became ambitious with their goal that very complicated meshes can be constructed and then solved over – meshes that have numerous boundaries and not just 2 or 4 – I suppose this also means that a PDE of the sort which I just described will receive values of some sort, as long as at least 1 boundary receives a Dirichlet value.

The following script generates and then plots such a PDE:

```
// Load Cubic polynomial interpolator
load "Element_P3";
// Caution:
// Cubic interpolations are often unnecessary, and may
// overshoot their endpoints.
// They're usually only needed, for second-degree
// Derivatives, which lead to second-order Differential Equations.
// A P3-derived fespace will nevertheless be used here.
// For label definition.
int Outer = 1;
int Inner = 2;
// Define mesh boundary.
border C1(t=0, 2*pi){x=cos(t); y=sin(t); label=Outer;}
border C2(t=0, 2*pi){x=cos(t) * 0.5 + 0.2; y=sin(t) * 0.5 + 0.2;
label=Inner;}
// The triangulated domain Th is on the left side of its boundary.
mesh Th = buildmesh(C1(100) + C2(-40));
plot(Th, ps="ThWithHole.eps", wait = true);
// Define a function f.
real Pi = 3.1415926536;
func f = sin(Pi * x) * cos(Pi * y);
// The finite element space defined over Th is called Vh here.
fespace Vh(Th, P3);
Vh u, v; // Declare u and v as piecewise P3-continuous functions.
// Get the clock in seconds.
real cpu=clock();
// Define a PDE, in which the values of the outer boundary are defined by f...
solve Poisson(u, v, solver=LU)
    = int2d(Th)( // The bilinear part
        (dx(u) * dx(v))
        + (dy(u) * dy(v))
    )
    - int1d(Th, Outer)( // Applying the 1-D boundary function.
        f * v
    )
    + on(Inner, u=0); // The Dirichlet boundary condition
// Plot the result...
plot(u, ps="VariableBoundaries.eps", value = true);
// Display the total computational time.
cout << "CPU time = " << (clock()-cpu) << endl;
```

(Revised 7/19/2021, 19h30. )

And, the following is the relevant plot that results:

(Update 7/21/2021, 16h40: )

If I combine this observation with what I wrote in a previous posting, I surmise that the solver will first process the boundary ‘`Outer`’, by applying the function ‘`f()`’, with interpolation weights, over *all of* (v), negatively, *but in a way that ultimately depends on what ‘`f()`’ was on that boundary*. And then, when the boundary ‘`Inner`’ is to be processed, the solver will apply the value (0.) over ‘`Inner`’. Then, (u) will be adjusted, by subtracting (the values of the expression) from (u), until those values become sufficiently close to zero, for all combinations of (x,y).

Why not?

(Update 7/20/2021, 11h30: )

In order for changes which are being written to (u), to cause ‘the values of the expression’ to converge on zero, the contribution which the term starting on Line 38 makes to the value of the expression, must still be an integral.

Presumably, FreeFem stores the values of the expression, as *two* hidden properties of the mesh, which the user, who is writing and testing the script, has no access to. One of those properties would simply act as ‘a temporary buffer’, which allows interpolation between actual, computed values, while the other will receive the result from the interpolation.

If the optimization is desired that the term starting on Line 42 is only to be computed once – since what is being computed there does not, itself, depend on (u) – the mesh would need to possess a third hidden property, where that result can be stored (or, to which constant values from multiple linear terms in the expression can be added).

And, it’s *because* Line 38 is still being computed as an integral, that the boundary values *of (u)* in the upper-right-hand quadrant of the plot (along the non-constant boundary ‘`Outer`’) do not quite reach the (negative) amplitudes which the boundary values *of (u)* in the lower-right-hand quadrant of the plot reach.

(Update 7/20/2021, 20h50… )

*Note*:

Throughout this blog, I’ve been referring to PDE boundaries that have a constant value as ‘the Dirichlet Boundaries’, while in this posting, referring to ‘the boundary that has a non-constant value’, as such. This is really just an arbitrary way I have of differentiating between two types of FreeFem syntax, that does not have much formal validity.

Be that as it may, FreeFem requires *at least 1 boundary to have a constant value*, in order to generate values for the plot, since this is also its starting point of an integration.

Enjoy,

Dirk


One of the programs which I’ve been experimenting with is called “FreeFem”. Actually, it exists both as a library and as a set of executables, the latter of which are meant to facilitate its use, in that scripts can be written in a loose but C++-like syntax that will already produce interesting plots. But plots can also be meaningless, unless the user understands how they were generated, as well as the underlying Math.

In an earlier posting, I already wrote that a basic weakness of the standard 2D integrals which FreeFem will solve for was that those do not correspond closely to “Double Integrals”, as I was taught those in Calculus 2.

Just to recap basic Calculus 2: If an integral is supposed to be plotted over two dimensions, then the standard way to do so is, first, to declare a variable that defines one of the dimensions, and a second variable that defines the other. Next, a 1-dimensional integral is defined over the inner variable, which results in a continuous function. That continuous function is then integrated a second time, over the outer variable, thereby resulting in a double integral.
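
Just to make that inner-then-outer recipe concrete, the following is a minimal sketch in plain Python (not FreeFem), using the midpoint rule; the function names are my own, hypothetical ones:

```python
def integrate_1d(f, a, b, n=200):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def integrate_2d(f, ax, bx, ay, by, n=200):
    # The inner integral runs over x, producing a function of y;
    # that function is then integrated a second time, over y.
    inner = lambda y: integrate_1d(lambda x: f(x, y), ax, bx, n)
    return integrate_1d(inner, ay, by, n)

# f(x, y) = x over the symmetric square [-1, 1] x [-1, 1]
# integrates to 0. by symmetry.
print(integrate_2d(lambda x, y: x, -1.0, 1.0, -1.0, 1.0))
```

With ‘f(x, y) = x’ over the symmetric square, the double integral comes out as (approximately) zero, by symmetry.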

Even though this could be extended to more dimensions than 2 or 3, there is usually little practical value in actually doing so, at least, in my limited life.

Further, the fact is clear to me that integrals exist, which go beyond this basic definition. For example, “Curl Integrals” also exist… However, for the moment I’m focusing on how to overcome some of the limitations which are imposed by FreeFem. And while working on this problem, I have found a way to force FreeFem to compute the sort of double integral which I was taught. The script below shows how I did this…

```
// For label definition.
int Bottom=1;
int Right=2;
int Top=3;
int Left=4;
// The triangulated domain Th is on the left side of its boundary.
mesh Th = square(10, 10, [(x*2)-1., (y*2)-1.]);
plot(Th, ps="ThRectX.eps", wait = true);
// Define a function f.
func f = x;
// The finite element space defined over Th is called Vh here.
fespace Vh(Th, P2);
Vh u, v; // Declare u and v as piecewise P2-continuous functions.
// Get the clock in seconds.
real cpu=clock();
// Define a simple PDE...
solve SimpleXInt(u, v, solver=LU)
    = int2d(Th)(dx(u)*dx(v)+dy(u)*dy(v)) // Show me a kludge.
    - int2d(Th)(
        f*v
    )
    + on(Bottom, u=0) // The Dirichlet boundary condition
    + on(Right, u=0)
    + on(Top, u=0)
    + on(Left, u=0);
// Plot the result...
plot(u, ps="RectX.eps", value=true, wait = true);
// Now, let's try to plot a double integral...
Vh A, Fh;
func real f2(real xx, real yy) {
    // return xx * yy;
    return xx;
}
func real Fx() {
    real s = 0.;
    for (int i = -5; (i * 0.2) < x; ++i) {
        s += int1d(Th, Left) ( f2(i * 0.2, y) );
    }
    return s;
}
// Compute the x-integrals for reference purposes...
Fh = Fx();
solve Double(A, v)
    = int2d(Th) (
        dy(A) * dy(v)
    )
    - int2d(Th) (
        Fx() * v
    )
    + on(Bottom, A=0) // The Dirichlet boundary condition
    + on(Right, A=0)
    + on(Top, A=0)
    + on(Left, A=0);
plot(A, ps="DoubleIntX.eps", value=true, wait=true);
plot(Fh, value=true);
// Display the total computational time.
cout << "CPU time = " << (clock()-cpu) << endl;
```

This script actually outputs 4 plots and saves the first 3. But, for the sake of this posting, I’m going to assume that there is no reason to show the reader a plot, of a plain, rectangular mesh… The following plot shows the result, when ‘the Poisson Equation’ is simply given for a function of (x), such that:

f(x) = x

A simple linear function was given. It can be integrated once or twice, and the plot above shows how FreeFem usually does so, resulting in u(x,y).

One interesting fact about FreeFem is that this program can be made to compute a 1-dimensional integral explicitly, i.e., without imposing any need to solve an equation. The resulting sum (a real number) can then simply be returned in a variable, for every maximum value the function was summed to. So, what would be tempting to do next is to integrate this quantity a second time, which might result in a 2-dimensional array. However, there is a basic limitation in how FreeFem works, that would next stand in our way… (:1)
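
The idea of returning that sum, for every maximum value summed to, can be sketched in plain Python (not FreeFem); the step size of 0.2 mirrors the loop in my script above, and the names are hypothetical:

```python
def running_integral(f, a, x, step=0.2, max_steps=1000):
    # A left-endpoint running sum of f, from a up to (but not
    # including) x, analogous to the loop inside Fx() in the script.
    s = 0.0
    for i in range(max_steps):
        t = a + i * step
        if t >= x:
            break
        s += f(t) * step
    return s

# For f(t) = t, summed from -1.0 up to 1.0, the left-endpoint sum
# approximates the exact answer of 0., with a bias of about -0.2
# owing to the coarse step size.
print(running_integral(lambda t: t, -1.0, 1.0))
```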

And so, in order for the second (outer) integral actually to be with respect to (y), I used the little trick in the script above. I defined the gradient of the Finite Element function (A) with respect to (y) only, and stipulated that it must equal the 1-dimensional integral which was computed before, for all combinations of (x,y). Hence, this second step of my solution is similar to a degenerate Poisson equation, in that the (x) gradient term has simply been dropped.

When this approach gets applied, the following plot results:

Now, there is still something which FreeFem does, which is supposed to be a feature. That is, to impose that, at the boundaries, a certain exact value needs to exist for (A), just as was needed for (u). Documentation states that, by merely defining an equality which the (2D) derivative of a function is supposed to have with some other function, one has not fully defined that function. A boundary value must also be given, and then the resulting “Partial Differential Equation” (‘PDE’) actually defines either the FE function (u), or the FE function (A). (:2) Further, the way ‘`int1d()`’ is implemented also applies such boundaries.

For that reason, I can also just shut off anything which I feel FreeFem might be doing wrong, every time it tries to solve a PDE. What I can do is request that this boundary occupy the left-hand side of the rectangular plot, and additionally, make sure that I begin my summation with the same value, thus resulting in a definite and not an indefinite integral every time. Also, I had suspected that, in order to resolve this issue with 2D plots, it really only needs to be resolved along one inconspicuous axis. In this case, I resolved it explicitly for the X-axis, and the behaviour of the Y-axis fell in line.

Yet, what I can do is observe, that the two plots shown above don’t match. And as long as they don’t, what FreeFem computed in its first plot, was also not a double integral.

(Updated 7/25/2021, 17h30… )

**1:)**

There is a largely undocumented implementation detail to FreeFem, which slowed down my attempt to produce this double integral considerably. Apparently, the function ‘`int1d()`’ has as its behaviour just to evaluate the function it has been fed, at the current values of (x,y). The reason the developers did this was probably the fact that doing so fits in well with how FreeFem solves its 2D PDEs. But, for a user who wants to generate 1-dimensional integrals, what this does is force that user to do the work himself: to loop through the domain of x-values that lead to the current x-value, and to perform the summation of *outputs of* ‘`int1d()`’, to arrive at that 1-D integral, and to do what any Scientific Computing platform should automate.

Furthermore, this function has an evil quirk: if the convention is used which works elsewhere within FreeFem, of just passing in a function reference, no function call will actually take place! The name of the function must be written with a set of parentheses, in order actually to cause a function call. (Which is also how it usually works with pure programming languages. It’s just another inconsistency, that FreeFem will sometimes allow the user to pass in the name of the function by itself, and cause function calls.)

Eventually, one discovers these little details.

(Update 7/15/2021, 22h05: )

Astute readers will notice that, even though I’ve complained about FreeFem developers ‘cheating’, *I also cheated*. Code which truly forces a double integral to be computed for a 10×10 mesh requires that about 1,000,000 vertices be computed in total, and takes about 400 seconds of CPU time on my machine to complete. Further, the results ‘look wrong’, because when:

```
f2(xx, yy) := xx ->
dy(f2) == 0.
```

This means that, if the ‘outer integration’ has been computed accurately, *vertical lines that correspond to individual values of (x)* will slope in a completely uniform way along the Y-axis. And the following is the *evil* code that does this:

```
// For label definition.
int Bottom=1;
int Right=2;
int Top=3;
int Left=4;
// The triangulated domain Th is on the left side of its boundary.
mesh Th = square(10, 10, [(x*2)-1., (y*2)-1.]);
plot(Th, ps="ThRectX.eps", wait = true);
// Define a function f.
func f = x;
// The finite element space defined over Th is called Vh here.
fespace Vh(Th, P2);
Vh u, v; // Declare u and v as piecewise P2-continuous functions.
// Get the clock in seconds.
real cpu=clock();
// Define a simple PDE...
solve SimpleXInt(u, v, solver=LU)
    = int2d(Th)(dx(u)*dx(v)+dy(u)*dy(v)) // Show me a kludge.
    - int2d(Th)(
        f*v
    )
    + on(Bottom, u=0) // The Dirichlet boundary condition
    + on(Right, u=0)
    + on(Top, u=0)
    + on(Left, u=0);
// Plot the result...
plot(u, ps="RectX.eps", value=true, wait = true);
// Now, let's try to plot a double integral...
Vh A, Fh;
func real f2(real xx, real yy) {
    // return xx * yy;
    return xx;
}
func real Fx(real xx, real yy) {
    real s = 0.;
    for (int i = -5; (i * 0.2) < xx; ++i) {
        s += int1d(Th, Left) ( f2(i * 0.2, yy) );
    }
    return s;
}
// Compute the x-integrals for reference purposes...
Fh = Fx(x, y);
func real Fy(real xx, real yy) {
    real s = 0.;
    for (int i = -5; (i * 0.2) < yy; ++i) {
        s += int1d(Th, Bottom) ( Fx(xx, i * 0.2) );
    }
    return s;
}
// This is where the double integral will be computed,
// with ugly results...
A = Fy(x, y);
// This takes about 400 seconds of CPU time on my machine.
plot(A, ps="DoubleIntX.eps", fill=true, value=true, wait=true);
// Notice how a plot, that has straight edges
// along the y-axis, is 'true', but also 'looks wrong'.
// Those straight edges really follow from the fact,
// that f2(x,y) = (x) has a slope of 0. with respect to (y).
plot(Fh, value=true);
// Display the total computational time.
cout << "CPU time = " << (clock()-cpu) << endl;
```

The resulting Y-axis slope will be strongest, where the X-axis *integral* was already, conspicuously negative and asymmetrical. (:3)

(Update 7/16/2021, 5h40: )

I suppose that a relevant question which readers might have, could be, whether it should always take 1,000,000 vertex-computations, to compute the true, double integral, on a 10×10 mesh. And the answer is, ‘Absolutely Not!’

One strategy which can and will be used is to save temporary summations – such as of one line while computing one x-axis integral, or of one array after having computed a full set of those and before computing the y-axis integrals – so that the summation can be continued from the saved temporary summations. It’s a perfectly legitimate thing to do.
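
The following is a sketch, in plain Python rather than FreeFem, of that caching strategy: each row’s running x-integral is computed once and stored, and a single running y-integral then reuses the stored rows. All names here are hypothetical:

```python
def double_integral_cached(f, n=10, step=0.2, a=-1.0):
    # Pass 1: for each y-row, compute a running (prefix) x-integral once.
    rows = []
    for j in range(n):
        y = a + j * step
        s = 0.0
        prefix = []
        for i in range(n):
            s += f(a + i * step, y) * step
            prefix.append(s)
        rows.append(prefix)
    # Pass 2: a running y-integral that reuses the cached rows.
    # Total work: O(n^2), versus O(n^4) for fully nested summations.
    total = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = 0.0
        for j in range(n):
            s += rows[j][i] * step
            total[j][i] = s
    return total

# total[j][i] approximates the double integral of f, summed up to
# the point (a + i*step, a + j*step).
result = double_integral_cached(lambda x, y: x)
```

With the cached rows, each grid point is visited a small, constant number of times, rather than being revisited inside nested loops.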

However, a basic limitation which FreeFem has is that it stores its meshes in a way that’s easy to write to, but next to impossible to read information from. For example, *even in its rectangular meshes*, the vertices have *only one index*, while, to perform integrals, what’s really called for is an x-index and a y-index. And, it’s because one cannot easily use these meshes as input that I decided to solve the problem by writing nested loops, that perform a ridiculously excessive number of summations.

Each of my vertices in this example, was being visited up to 10,000 times.

(Update 7/17/2021, 13h10: )

*Conclusion*:

After this amount of experimentation, I have decided that I will trust the output of FreeFem, with two caveats:

- The fact needs to be understood that, even though integrals over 2 variables were always taught as second-order integrals in Calculus 2 – i.e., double integrals – FreeFem computes them as a sort of first-order integral each time, putting a first-order integral form on both the right-hand and the left-hand side of the equation to be solved for. By forcing FreeFem actually to have a second-order integral on the right-hand side, I’m creating an equation which, Mathematically, is less symmetrical than what I had before, but which FreeFem will be able to solve anyway. And,
- In order to solve the PDE, FreeFem ‘feels free’ to flip the signs of the bilinear terms, as is convenient. This may in fact be a necessity inherent in the problem that FreeFem solves. And, this peculiarity may result passively, from the notion that the ‘`int2d()`’ function itself ignores ‘`dx(v)`’ and ‘`dy(v)`’ as it’s computing its summation on both sides of the equation – including, in the summation of the bilinear terms. (:4)

Also, I did have some concerns over the possibility that the ‘`int2d()`’ function computes more of an ad-hoc summation than an integral. But, what I’m noticing is that the FreeFem developers put enough attention into their ad-hoc summation, that it will truly pass for an integral, because of one consideration:

- The same type of ad-hoc summation is being performed on both sides of the equation, matching for each vertex.

(Update 7/17/2021, 14h25: )

**2:)**

Even though the answer to this question has already been stated elsewhere, using tough Mathematical expressions, the answer could be sought in layman’s terms, or in ‘horse sense’, to: ‘Why is the Dirichlet problem valid, in that the PDE is underdefined without a boundary condition being stated?’

The first basic notion needed, to answer that question, is, that if we are given an equation to solve, in which both (x) and (y) are the variables, but only given that:

x + y = A

Where, (A) is a given parameter, the problem is in fact underdefined, because there could be an infinite number of (x,y) combinations that result in (A). Nothing seems new there.

But, when a PDE is loosely stated as the stipulation that the derivative of a function of 2 variables must equal some other function of the same 2 variables – that function being referred to as ‘The Test Function’ – what one is really saying is that, for any combination of (x) and (y), that Test Function generates *one real number*, and that the derivative must therefore also exist as one real number.

This poses a problem with most PDEs, such as for example, with the Poisson Equation, in which this derivative is simply given as:

```
(dx(u) * dx(v)) + (dy(u) * dy(v))
```

Because here, two gradients are being added to arrive at one supposed derivative, it’s another form of the first expression I gave above, which was underdefined. Granted, given a field of ‘`u(x,y)`’, both the x-axis gradient and the y-axis gradient will follow. But those remain 2 distinct gradients, which are to be summed to arrive at one real number per (x,y).

The Dirichlet Problem also states itself to be solvable, as long as the boundary function is “sufficiently smooth”, which I take to mean, that ‘The derivative of the boundary function along the boundary must not exceed some small value.’ Well, FreeFem puts a narrower constraint on the problem, by stating that each boundary must have one exact value. This makes its derivative zero, so that it should qualify as ‘sufficiently smooth’. (:3)

But, what this observation also tells me is that, when I dropped the ‘`dx(u)`’ term from the Poisson Equation, to arrive at my degenerate equation above, the Dirichlet boundary condition resulted in an overdefinition. Because, in my second plot above, the derivative of the function ‘`u(x,y)`’ was truly dependent on one variable – the y-axis gradient – this equation already defined that derivative fully. By adding *2* boundary conditions, I was really opening myself to the possibility that the problem might become unsolvable.

It was really just the fact that my explicit boundary conditions for ‘u(x,y)’ coincided, at ‘`Top`’ and ‘`Bottom`’, with the way I had defined the ‘outer integral’ of my problem with respect to (y), at both extremes of (y), that no contradiction resulted, and that my overdetermined problem could still be solved.

I take the fact that ‘FreeFem’ *was able to generate an approximate solution*, as independent of the question, of whether *an exact* solution exists in fact.

(Update 7/17/2021, 19h35: )

**3:)**

I have made two discoveries, important to serious use of FreeFem:

- The ‘true’ second-order integral, which I showed a plot of above, the Y-axis lines of which just continuously slope away from the ‘`Bottom`‘ boundary, can be achieved efficiently from my first script, by just commenting out Line 64. And,
- It’s not actually a fixed behaviour of FreeFem, to impose the boundaries as one value each. Documentation states, that the way the ‘`int1d()`‘ function is supposed to behave inside a ‘`solve() ...`‘ clause is, to impose boundary conditions as a function. In other words, the Dirichlet, ‘`+ on(...)`‘ statements can also be replaced in some plots, by putting code as follows…

```
- int1d(Th, Bottom) ( g(x, y) )
... ;
```

If ‘`g(real x, real y)`‘ is a defined function, its returned real numbers would get mapped along the ‘`Bottom`‘ boundary in the plot. However, ~~I have not tested this feature~~. It’s said to exist, because FreeFem doesn’t *officially* support 1-dimensional integration.

(Update 7/18/2021, 3h30: )

**4:)**

Just to make my thinking clear, I’d like to elaborate on how I imagine, FreeFem may perform a first-order integration, of a mesh, which has ‘a number of boundaries’, in general.

What the ‘`int2d()`‘ function may do is, to set all the vertices that belong to a boundary, to their Dirichlet value, if one is set. This will initially assure, that there is a set of triangles, each of which has 2 of its 3 vertex values defined. The 3rd vertex of each triangle is then ~integrated~, by computing the mean of the 2 preceding vertices, and always adding the local value of the function to be integrated in the positive sense, meaning, that if the function happened to be negative, the 3rd vertex value will also result as more-negative.

This is essentially what allows complex meshes to be built with many boundaries.
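If my guess is right, the per-triangle step could be rendered in Python roughly as follows. To be clear, this update rule is only my hypothesis about FreeFem’s behaviour, and ‘`f`‘ is an arbitrary example function:

```python
# Hypothesized update: the 3rd vertex of a triangle receives the mean
# of the 2 already-defined vertices, plus the local value of the
# function being integrated, added in the positive sense.

def f(x, y):
    return x - y            # example function to integrate

def solve_third_vertex(u1, u2, x3, y3):
    """Guessed rule: mean of the known vertex values + local f."""
    return (u1 + u2) / 2.0 + f(x3, y3)

# Two boundary vertices already hold their Dirichlet value of 0.:
print(solve_third_vertex(0.0, 0.0, 0.5, 0.25))   # 0.25

# Where f is negative, the 3rd vertex results as more-negative:
print(solve_third_vertex(0.0, 0.0, 0.0, 1.0))    # -1.0
```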

When the user writes a script which computes a gradient, the onus is on him, to multiply the derivative of the function, which is accurately stated by ‘`dx(u)`‘ for the derivative with respect to (x) of the Finite Element function (u), with the amount of change in coordinate (x), which is written as ‘`dx(v)`‘ if the domain of integration was (v), for example. ‘`dx(v)`‘ will then be negative, if the solver, by way of the ‘`int2d()`‘ function, stepped from a more-positive element of (v) to a more-negative element of (v), according to the X-axis.

What this means is that, for example, if integration is starting from the ‘`Right`‘ boundary, where (x) is considered to be positive, leftward, the way the script computes a delta in any function (u) that happens to be positive, is by multiplying a negative step revealed by ‘`dx(v)`‘, with the eventual negative derivative ‘`dx(u)`‘, to arrive at a positive value, that will ultimately balance with the positive value of the Test Function.

This solution (u) to the equation may only be reached, after the solver has made more than one pass over the expressions to be solved for. But (u) slopes to more-positive *right-to-left*, from the boundary ‘`Right`‘, if integration was to start there, and, if the Test Function was positive there. Hence, this is opposite to how classical integration would integrate the definite integral of ‘`f(x,y) = x`‘, if the indefinite integral at the boundary ‘`Right`‘, was also the start of the interval of the definite integral.

This just happens to work well, if people want to build complex meshes, with numerous boundaries. Also, which boundaries integration starts from can be modified by stating the Dirichlet values for only some of them, instead of for all of them, or, by setting ‘regions’ to existing boundary labels. Those labels become the non-default starting points of the discrete integration.

Nowhere in my exercises have I worked with FreeFem Regions yet. But it seems, according to the documentation, that their existence is a minor detail, that happens to put labels on triangles (or, on tetrahedra in the case of ‘`int3d()`‘), so that regions of either can be integrated in their own subdomains. The fact that this needs to be done in explicit scripting, means that it does not kick in by itself. The ad-hoc integration can still proceed, as was just described.

When, instead of a gradient, ‘`int2d()`‘ is integrating a linear value such as the Test Function, the script must multiply that value by the area of a quad, which is identified by just ‘`v`‘ if that is the domain of the integration. And, because the use of cross-products can switch sign over something as trivial, as whether edges were counted clockwise or counter-clockwise, their absolute value is used, and in this case, ‘`v`‘ is always positive. Hence, if the value of the Test Function was negative, the delta will also be negative, regardless of whether progress is right-to-left, or left-to-right…
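The role of the absolute value can be demonstrated with a few lines of Python (my own illustration, not FreeFem code): the signed 2D cross-product flips when the winding order of a triangle’s vertices flips, but its absolute value, which is twice the triangle’s area, does not:

```python
# The 2D cross-product of a triangle's edge vectors flips sign when
# the vertices are listed clockwise instead of counter-clockwise,
# but the absolute value, twice the triangle's area, does not.

def cross2d(ax, ay, bx, by):
    """Signed 2D cross-product (the z-component of a 3D cross)."""
    return ax * by - ay * bx

def triangle_area(p0, p1, p2):
    """Unsigned area, via the absolute value of the cross-product."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return abs(cross2d(x1 - x0, y1 - y0, x2 - x0, y2 - y0)) / 2.0

ccw = triangle_area((0, 0), (1, 0), (0, 1))   # counter-clockwise
cw = triangle_area((0, 0), (0, 1), (1, 0))    # clockwise
print(ccw, cw)   # both 0.5
```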

Hence, when (u) finally balances with the Test Function at the ‘`Left`‘ boundary, if the Test Function was negative there, a negative derivative of (u) must be multiplying with a positive ‘`dx(v)`‘.

(Update 7/25/2021, 17h30: )

I’d say that the explanation so far leaves an important observation unexplained: ‘If the sense of the ad-hoc integration is as stated, close to each boundary that has a Dirichlet value, why does the sense reverse near the centre of the plot, into the direction which seems more natural, such that the gradient of (u) agrees with the actual gradient of the Test Function? After all, most users did not program the script, to multiply those two variables.’

And, after much experimentation with the software, I’d say that the reason is the fact that something special happens, when two opposing boundaries are given Dirichlet values…

What I’ve concluded FreeFem does is,

- First, to interpolate its Linear Integral term, which does not depend on (u), for all boundaries that have a Dirichlet value set.
- To save the resulting, first-order integral in a hidden property belonging to (v). And then,
- Generally, to compute the terms ‘`dx(v)`‘ and ‘`dy(v)`‘ as linear combinations of the (Δx) and (Δy) respectively, of the *two* triangle-edges, joining *with already-solved vertices*. And,
- The concept should be inferred, that the ‘`int2d()`‘ function works with the same methodology, on the LHS of the expression, *recomputing* ‘`dx(v)`‘ and ‘`dy(v)`‘ *each time*.
- Further, I have concluded that in order for (u) to have Dirichlet values *maintained*, each integration which ‘`int2d()`‘ performs, can more simply start from the value (0.), which will take place on both sides of the expression and lead to an error value of (0.) *at the boundaries themselves*. However, the values of (u) which those error-values accumulate into, need to be initialized to their Dirichlet values, and then, doing so also needs to take place with an interpolation, such as the one I described below, in case there is more than one such boundary.
- When the ‘`int1d(...) (...)`‘ term is setting ‘`v`‘, it must do so with a simple interpolation of boundary values, the weight of which diminishes as the vertices get close to any Boundaries that have a Dirichlet value set.

(Update 7/19/2021, 15h25: )

I can extend my hypothesizing a little further, about what numerical methods the developers might have used (without actual proof), but in a way that will yield the observed results, preserve accuracy as much as possible, and remain as simple as the problem permits.

The form which was used to solve PDEs was ‘`solve PDEName(Vh u, Vh v) = expr;`‘, where (u) and (v) are Finite Element functions, and (expr) was the expression, which was supposed to be made as close to zero as possible. Previously I had described this form such, that (u) simply receives output values, and that ~(v) was just along for the ride~, doing nothing specifically, other than to provide a vertex-by-vertex map, of how (u) was to be written to. Of course, this would have made (v) redundant.

One fact which certainly exists is that (v) can have properties, which the user has no access to, but which FreeFem uses internally. And, according to my recent ruminations, those properties should include:

(dx), (dy), (wv)

Where ‘`dx(v)`‘ already states a possibly-interpolated X-coordinate delta, ‘`dy(v)`‘ states a possibly-interpolated Y-coordinate delta, and according to me, something corresponding to (wv) states, ‘with how much weight the vertex has been written to’, as belonging both to (u) and to (v).

The way I visualize the ad-hoc integration is such, that each boundary replaces values into (v), but with ever-decreasing weight, as the vertices become a number of triangles more distant from the boundary. That weight could just be called (wb). One way to compute it, written in my pseudo-code, might be:

```
int d;    // d > 0
wb = 1. / d;
wv += wb;
new_value += wb * contributed_value;
```

Where (d) could be the number of triangles that the current vertex is distanced from the current boundary. This weight effectively reaches zero, ‘directly in front of’ the other boundaries.
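Written out as runnable Python, the weighting scheme I am hypothesizing might look like this (the names ‘`wb`‘, ‘`wv`‘ and ‘`d`‘ are carried over from my pseudo-code above, and the whole routine is a guess, not FreeFem’s documented behaviour):

```python
# Accumulate boundary contributions into one vertex, each weighted by
# the reciprocal of its triangle-distance d from that boundary.

def accumulate(contributions):
    """contributions: list of (d, contributed_value) pairs, d >= 1.
    Returns (weighted-average value, total weight wv)."""
    wv = 0.0
    new_value = 0.0
    for d, contributed_value in contributions:
        wb = 1.0 / d            # weight falls off with distance
        wv += wb
        new_value += wb * contributed_value
    return new_value / wv, wv

# A vertex 1 triangle away from a boundary held at 1.0, and 4
# triangles away from an opposing boundary held at 0.0:
value, weight = accumulate([(1, 1.0), (4, 0.0)])
print(value, weight)   # 0.8 1.25
```

The nearer boundary dominates, and the influence of the farther boundary shrinks toward zero, directly in front of it.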

Dirk

]]>

In my previous posting, I listed several (Open-Source) platforms for Computing, available under Linux at no cost, which have emphasis on Technical and Scientific applications. These platforms differ from conventional programming languages, in that conventional languages mainly specialize in allowing applications to be built, that perform a highly specialized function, while technically oriented platforms allow a user to define Math problems to be solved, to do so, and then to define a whole new Math problem to be solved…

My previous posting had also hinted that, when it comes to Computing tools of this kind, I prefer ‘the lean and mean approach’, in which the learning of specialized scripting languages would be kept to a minimum, but where, through his or her own resourcefulness, the User / Scientist knows how to apply Math, to solve their problem…

Yet, solutions do exist which go entirely in a different direction, and I’d say that “Scilab” is one of them. Under Debian Linux, one usually installs it as a collection of packages, from standard repositories, using the package manager.

Scilab is an application – and a workbench – with a rich GUI. It combines many features. But again, if somebody wanted to use it for real problem-solving, what would really count is, to learn its scripting language (which I have not done). Yet, Scilab typically comes with many Demos that tend to work reliably out-of-the-box, so that, even without knowing the scripting language, users can treat themselves to some amount of eye-candy, just by clicking on those…

As I’ve stated repeatedly, sometimes I cannot gauge whether certain Scientific Computing platforms are really worth their Salt – especially since in this case, they won’t cost much more than a household quantity of salt does. But, if the reader finds that he or she needs a powerful GUI, then maybe, Scilab would be the choice for them?

Dirk

]]>

One of my Computing habits is, to acquire many frameworks, for performing Scientific or Analytical Computing, even though, in all honesty, I have little practical use for them, *most of the time*. They are usually of some Academic curiosity to me.

Some of the examples familiar to me are, ‘wxMaxima‘ (which can also be installed under Linux, directly from the package manager), ‘Euler Math Toolbox‘ (which, under Linux, is best run using Wine), and ‘SageMath‘ (which IMHO, is best installed under Linux, as a lengthy collection of packages, from the standard repositories, using the package manager, that include certain ‘Jupyter’ packages). In addition to that, I’d say that ‘Python‘ can also be a bit of a numerical toolbox, beyond what most programming languages can be, such as C++, yet, a programming language primarily, which under Linux, is best installed as a lengthy collection of packages through the package manager. And a single important reason is the fact that a Python script can perform arbitrary-precision integer arithmetic natively, and, with a small package named ‘python3-gmpy2′, can also perform arbitrary-precision floating-point arithmetic *easily*. If a Linux user wanted to do the same, using *C*, he or she would need to learn the library ‘GMP’ first, and that’s not an easy library to use. Also, there exists IPython, although I don’t know how to use that *well*. AFAICT, this consists mainly of an alternative shell, for interacting with Python, which makes it available through the Web-interface called “Jupyter”. Under Debian Linux, it is best installed as the packages ‘ipython3′, ‘python3-ipython-genutils’, ‘python3-ipykernel’, ‘python3-nbconvert’, and ‘python3-notebook’, although simply installing those packages, does not provide a truly complete installation… Just as one would want a ‘regular’ Python installation to have many additional packages, one would want ‘IPython’ to benefit from many additional packages as well.

But then, that previous paragraph also touches on an important issue. Each Scientific Computing platform I learn, represents yet-another scripting language I’d need to learn, and if I had to learn 50 scripting languages, ultimately, my brain capacity would become diluted, so that I’d master none of them. So, too much of a good thing can actually become a bad thing.

As a counter-balance to that, it can attract me to a given Scientific Computing platform, if it can be made to output good graphics. And, another Math platform which can, is called “FreeFem“. What is it? It’s a platform for solving Partial Differential Equations. Those equations need to be distinguished from simple derivatives, in that they are generally equations, in which a derivative of a variable is being stated on one side (the “left, bilinear side”), but in which a non-derivative function of the same variable is being stated on the other (the “right side”). What this does, is to make the equation a kind of recursive problem, the complexity of which really exceeds that of simple integrals. (:2) Partial Differential Equations, or ‘PDE’s, are to multi-variable Calculus, as Ordinary Differential Equations, or ‘ODE’s, are to single-variable Calculus. Their being “partial” derives from their derivatives being “partial derivatives”.

In truth, Calculus at any level should first be studied at a University, before computers should be used as a simplified way of solving its equations.

FreeFem is a computing package, that solves PDEs using the “Finite Element Method”. This is a fancy way of saying, that the software foregoes finding an exact analytical solution-set, instead providing an approximation, in return for which, it will guarantee some sort of solution, in situations, where an exact, analytical solution-set could not even be found. There are several good applications. (:1)

But I just found myself following a false idea tonight. In search of getting FreeFem to output its results graphically, instead of just running in text mode, I next wasted a lot of my time, custom-compiling FreeFem, with linkage to my many libraries. In truth, such custom-compilation is only useful under Linux, if the results are also going to be installed to the root file-system, where the libraries of the custom-compile are also going to be linked to at run-time. Otherwise, a lot of similar custom-compiled software simply won’t run.

What needs to be understood about FreeFem++ – the executable and not the libraries – is, that *it’s a compiler*. It’s not an application with a GUI, from which features could be explored and evoked. And this means that a script, which FreeFem can execute, is written much like a C++ program, except that it has no ‘`main()`‘ function, and isn’t entirely procedural in its semantics.

And, all that a FreeFem++ script needs, to produce a good 2D plot, is the use of the ‘`plot()`‘ function! The example below shows what I mean:

I was able to use *an IDE*, which I’d normally use to write my C++ programs, and which is named “Geany”, to produce this – admittedly, plagiarized – visual. The only thing I needed to change in my GUI was, the command that should be used, to execute the program, *without* compiling it first. I simply changed that command to ‘`FreeFem++ "./%f"`‘.

Of course, if the reader wants in-depth documentation on how to use this – additional – scripting language, then This would be a good link to find that at, provided by the developers of FreeFem themselves. Such in-depth information will be needed, before FreeFem will solve any PDEs which may come up within the course of the reader’s life.

But, what is not really needed would be, to compile FreeFem with support for many back-ends, or to display itself as a GUI-based application. In fact, the standard Debian version was compiled by its package maintainers, to have as few dependencies as possible (‘X11′), and thus, only to offer *a minimal* back-end.

(Updated 7/14/2021, 21h45… )

(As of 7/09/2021, 0h10: )

Surprisingly, I discovered that even the bare-bones Debian 9 / Stretch version of FreeFem++ has ‘ffmedit’ and ‘VTK 2′ (file-writer) support. However, the way the ffmedit viewer displays a mesh ~~is disappointing, because it doesn’t colour-code the mesh. This makes the display unworthy to be shown in public~~. (:3) However, how well I can view VTK Files, really only depends, on how much I want to play with the settings, of the stock version of ‘Paraview’…

(Update 7/10/2021, 20h40: )

**1:)**

An approach that can be used *for polynomials*, which will find scalar roots sequentially, each time zeroing in on a root that ‘works by itself’, *cannot be used for PDEs*. Because PDEs state that a Partial Derivative, a function of perhaps 3 variables, needs to equal another value, also computed from the same variables, it follows that the solution of such an example is inherently multi-dimensional, perhaps consisting of a membrane occupying 3 dimensions, with an inequality instead, for all parts of a volume not exactly on that membrane. Or, with similar probability, the solution could exist as one unique combination of the 3 variables, none of which can be found, without also finding the other 2.

I have to admit, however, that the way PDEs are “solved” in practice, differs, from how I had imagined they would be solved. My previous idea was, that each *variable* would be subdivided into increments within its domain, but that *a fixed function* would be fed *values* of these variables, to determine whether those values satisfy the PDE.

What happens instead is, that *an approximation of a function* is computed – essentially as an array of output-values – in such a way as to get as close as possible to satisfying the PDE. This array is not an analytically defined function, but then acts as a replacement for one. (:4)
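A one-dimensional analogue can make that last idea concrete. The Python sketch below, which is my own illustration and not FreeFem code, ‘solves’ (-u″ = 1) on the interval [0, 1], with (u) equalling zero at both ends, and the result is nothing but an array of nodal values, standing in for the analytical function (u(x) = x(1 - x) / 2):

```python
# Solve -u'' = 1 on [0,1], u(0) = u(1) = 0, by finite differences:
# the "solution" is just an array of nodal values, acting as a
# replacement for an analytically defined function.

def solve_poisson_1d(n=10):
    h = 1.0 / n
    m = n - 1                       # number of interior unknowns
    # Tridiagonal system: (-u[i-1] + 2u[i] - u[i+1]) / h^2 = 1
    a = [-1.0 / h**2] * m           # sub-diagonal
    b = [2.0 / h**2] * m            # main diagonal
    c = [-1.0 / h**2] * m           # super-diagonal
    d = [1.0] * m                   # right-hand side
    # Thomas algorithm: forward elimination...
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ...then back substitution.
    x = [0.0] * m
    x[m - 1] = d[m - 1] / b[m - 1]
    for i in range(m - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return [0.0] + x + [0.0]        # re-attach the Dirichlet ends

u = solve_poisson_1d(10)
print(u[5])   # midpoint; analytically 0.5 * 0.5 / 2 = 0.125
```

The returned list *is* the approximate function; nothing analytical ever gets constructed.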

The Debian version of the packages, also offers a solver named ‘`ff3d`‘, which accepts a script *as well as a POV-Ray File* as input. AFAIK, that solver simply takes the defined mesh of the POV-Ray File as its domain – which is referred to as “a fictitious domain”.

**2:)**

A question which the uninitiated may not know the answer to could be, ‘How do differential equations imply integral equations, or vice-versa?’ And the answer which I would offer is, that if a person has been given ‘an integral equation’ to solve, that person can just differentiate both sides of it. What was the integral side of the equation will become non-integral, and what was the non-integral side, will become the derivative side.

It’s just that, because derivatives are generally slightly easier to solve, than integrals, transforming integral equations into differential equations in this way, may ease solving either form.

Also, an integral equation which states fluid flow, can also be differentiated.

Another piece of insight which I can offer, about the subject of ‘Only differentiating an implicit function on both sides of the equation, and then solving’, is that in my own experience, when one does this, the solution one will find, is the derivative of the original, implicit function, not, the solution of the original, implicit function.

The reason for which the PDE is formally written as an equality-or-inequality between integrals, of derivatives, would seem to be the fact that in order for the function’s derivative to satisfy a constraint, the function itself must be the (negative) integral of that constraint.

**3:)**

Above, I had written, that the way ‘`ffmedit`‘ works, where that is supposed to be the preferred way to view ‘FreeFem++’ meshes, was disappointing, because it was not colour-coding the meshes.

Further, something which I had written about ‘`ff3d`‘ left the question unanswered, whether that tool, when outputting data for a POV-Ray Mesh, would be able to generate a visual.

To my great satisfaction, I’ve discovered that both goals can, after all, be achieved!

The actual executable ‘`ff3d`‘ (under Debian 9) has no dependency on ‘X11’, which means, that it will not display anything directly. However, it can be scripted to output 1 or more .MESH Files, which in turn can be viewed with ‘`ffmedit`‘.

*When doing this, it may be helpful to type the name of the mesh to be viewed on the ‘ffmedit’ command-line, without the ‘.mesh’ file-name extension, just so that the viewer will load both the ‘.mesh’ and the ‘.bb’ Files with the same base-name*.

This viewer, in turn, has a context-menu which I was not aware of before, which one obtains by right-clicking on the displayed mesh, and that menu notably has a ‘Data’ sub-menu, which allows colour-coding to be set as desired. So, it should be possible to obtain everything from these two executables, that one might desire, from 3D PDEs specifically, which are being computed along a POV-Ray Mesh:

Tada!

**4:)**

I have just been studying the FreeFem documentation, to dig deeper into how it can be used. And one fact which I learned was, that the Finite Element Method I had believed in earlier, falls short, of what the actual software package can do. Looking at the tutorial, what I found was, that the following code:

```
fespace Vh(Th, P1);
Vh u, v; // Define u and v as piecewise-P1 continuous functions
```

Does not truly “define” the functions (u) and (v), but rather, ‘declares’ them to be (continuous) functions, which are to receive one value for each of the elements belonging to ‘Th’. What the scripting language offers next, goes further than just, to compute what the derivative of *defined* functions (u) and (v) would be. The following code:

```
solve Poisson(u, v, solver=LU)
= int2d(Th)( // The bilinear part
dx(u)*dx(v)
+ dy(u)*dy(v)
)
- int2d(Th)( // The right hand side
f*v
)
+ on(C, u=0); // The Dirichlet boundary condition
```

Actually populates the elements of (u), with values that satisfy the condition, that the expression to be solved for will be close to zero in value. This same block of code does not populate (v) with elements. But it is only due to this second block of code ‘solving’, that (u) can actually be plotted.

(Update 7/11/2021, 15h55: )

There is something specific, which FreeFem does, which can make it harder for people like me, who have zero experience with FreeFem, to learn its basic usage. I tend to be a person, who needs to have some idea of what, approximately, a software package is doing procedurally, before I’d be able to set up my own problems for it. And in the case of public FreeFem documentation, there is no mention of that.

As can be seen in the code block above, the user is allowed – and required – to make use of the terms ‘`dx(v)`‘ as well as ‘`dy(v)`‘, even though (v) has not been initialized. And, after the solver exits, (v) still has no values. (v) is simply mapped over the same domain of (x) and (y), which (u) is mapped over, except that (u) ends up receiving the values, that will form the solution.

What, exactly, the solver does, could be primitive or complex. If the solver is primitive, what it can do is, to keep cycling through the elements of (u) and (v) concurrently, obtain non-zero values from the expression to be solved for, and subtract those values from the elements of (u), until the final pass over the domain yields results from the expression that are desirably close to zero…

At the very least, this would require that the solver be able to obtain a numerical result from the expression to be solved for.

It seems to me, that when the solver encounters the term ‘`dx(v)`‘, for an element of (v) that has no parameter, it simply evaluates that to (1.), without setting the parameter. Similarly, when the solver encounters ‘`f*v`‘, it simply treats (v) as being equal to (1.).

However, attempting to plot (v) afterwards simply generates an error message, over the same issue.

The user can do many things with FreeFem. What the user *may not* do is, to leave out the (v) term, or alternatively, the ‘`dx(v)`‘ and ‘`dy(v)`‘ terms, from the integrals to be solved for. And the reason seems to be the fact that, in a fixed way, the solver looks for those terms, to define the interval over which the integrals are to be computed (numerically).

(Update 7/11/2021, 22h50: )

What I have also determined by experimentation is that, if the user initializes (v) in the example above, before calling ‘`solve ...`‘, what FreeFem will do is, just ignore whatever those elements were initialized to…

Hence, the following code is fully legal, although it gives a completely different result, from what the Poisson code gave:

```
// Load Cubic polynomial interpolator
load "Element_P3";
// Caution:
// Cubic interpolations are often unnecessary, and may
// overshoot their endpoints.
// They're usually only needed, for second-degree
// Derivatives, which lead to second-order Differential Equations.
// A P3-derived fespace will nevertheless be used here.
// For label definition.
int Dirichlet=1;
// Define mesh boundary.
border C1(t=0, 2*pi){x=cos(t); y=sin(t); label=Dirichlet;}
border C2(t=0, 2*pi){x=cos(t) * 0.2 + 0.3; y=sin(t) * 0.2 + 0.3;
label=Dirichlet;}
// The triangulated domain Th is on the left side of its boundary.
mesh Th = buildmesh(C1(100) + C2(-40));
plot(Th, ps="ThWithHole.eps", wait = true);
// Define a function f.
func f = x * y;
// The finite element space defined over Th is called Vh here.
fespace Vh(Th, P3);
Vh u, v = f; // Declare u and define v as piecewise P3-continuous functions.
// Define a Vh-fespace function equivalent to f.
Vh fh = f;
// Get the clock in seconds.
real cpu=clock();
// Define a simple PDE...
solve Poisson(u, v, solver=LU)
= int2d(Th)( // The bilinear part
(dx(u) * dx(v))
+ (dy(u) * dy(v))
)
- int2d(Th)(
v
)
+ on(Dirichlet, u=0); // The Dirichlet boundary condition
// Plot the result...
plot(u, ps="PoissonForUnity.eps", wait = true);
plot(v, ps="VDoesNothing.eps", wait = true);
u[] = 0.;
// Define a twisted PDE...
solve TwistedPDE(u, v, solver=LU)
= int2d(Th)( // The bilinear part
(dx(fh) * dx(u) * dx(v))
+ (dy(fh) * dy(u) * dy(v))
)
- int2d(Th)(
f*v
)
+ on(Dirichlet, u=0); // The Dirichlet boundary condition
// What I had hoped, the first plot would show...
plot(u, ps="TwistedPDE.eps", fill=true, nbiso=64);
// Display the total computational time.
cout << "CPU time = " << (clock()-cpu) << endl;
```

Also, just to practice, my hypothetical code put a hole in the mesh, which it is solving for. It generates the following two plots…

So far, my code has only achieved that the second of these two plots, which is of (v), no longer generates an error message. Yet, inspection of the first plot reveals, that it will behave as though ‘`v`‘, which was *supposed to be equivalent to* ‘`f`‘, had been defined as (1.). Hence, to have initialized (v), had no effect whatsoever, on the first plot.

Now, the following plot shows what I would have expected the first plot to show, which, translated into English, would mean, ‘Find the fespace function (u) such that its gradients with respect to (x) and (y), *multiplied with the corresponding gradients of (f)*, and the results added, will yield the values of (f).’ …

And there I have that twisted PDE, which has no practical value, but solved non-trivially.

(Update 7/12/2021, 8h20: )

I suppose that I have yet another observation to add. The way the Finite-Element Space function (u) is being colour-coded, seems somewhat lacklustre, in the bottom-left and top-right quadrants of the plot…

I find it to be revealing, that the plot of (u) is in fact negative, where (f) is negative. The gradient of (f) itself is positive – at least, going from left to right, along the top half of the plot… But, the gradient of (f) is negative, going from bottom to top, along the left half of the plot. Along the bottom half of the plot, the gradient of (f) should then be negative again, going from left to right… I guess that the PDE’s plot has proved, that its Y-axis is facing upward, and not, facing downward.

What this results in is the fact, that the X- and Y-gradients of (f) oppose each other in the top-left and bottom-right quadrants, while agreeing with each other, in the top-right and bottom-left quadrants. And this, in turn, seems to require less amplitude from (u), than is required in the top-left and bottom-right quadrants.

(Update 7/12/2021, 18h00: )

There is an admission which I must make about this posting, and what it fails to explain about the integration which FreeFem can perform, on a disk. In Calculus 2, what Students are generally taught is twofold:

- There exist definite and indefinite integrals, where, the definite variety is derived from the indefinite variety, by computing the indefinite integral at a point in its domain, where the definite integral is predetermined to equal zero, and then subtracting this value from all the other indefinite integrals, to arrive at the definite integrals, And
- There exists integration over 2 dimensions, in which each dimension is categorized as an orthogonal interval, defined by one coordinate-variable.

The problem which this posting requests that FreeFem solve, poses two issues with respect to those teachings:

- The mesh is a disk-shaped mesh, not a rectangle, And
- The value of (u), in this case, is not just supposed to equal zero at one point, but is actually supposed to equal zero, everywhere along the boundaries of the disk, in addition to satisfying its other constraints.

In truth, I did not study Calculus beyond Calculus 2, for which reason I can only give a partial answer, to how these two issues are probably resolved.

Issue 1, which is also referred to as “The Dirichlet Problem“, has as added challenge, that the vertices of the mesh generated by this script are poorly ordered; however, each vertex has an (x) and a (y) coordinate. And I think that it gets resolved, in the way the mesh was built, which was, as a set of concentric rings, the angle of which is supposedly orthogonal to the radius. Hence, the problem ~~can be~~ broken down into a classical, 2-variable problem, due to how the mesh was created. The natural order of the elements, which only have one official index in this case, completes one ring after the other. Each ring passively receives a different weight, according to the circumference it represents.

This issue would have gotten sidestepped completely, had I instructed FreeFem to generate a rectangular mesh instead.

The reason for which FreeFem is able to compute ‘`dx(u)`‘ and ‘`dy(u)`‘ correctly, which translate into ‘the derivative of (u) with respect to (x)’ and ‘the derivative of (u) with respect to (y)’, is the fact that a mesh has in fact been built, so that vertices additionally belong to triangles.

The following plot displays the mesh, which FreeFem created for me:

(Update 7/12/2021, 18h30: )

According to my latest ruminations, the terms ‘`dx(v)`‘ and ‘`dy(v)`‘ should really represent numerically, by how much the x- and y-coordinates have changed, from the previous vertices’.

What’s confusing about that is the fact that it’s inconsistent with the way ‘`dx(u)`’ and ‘`dy(u)`’ are parsed.

(Content removed 7/13/2021, 1h45 because inaccurate. )

(Update 7/14/2021, 14h30: )

There is a hypothetical computation which I can imagine FreeFem to be performing, which would act as a surrogate for the 2-dimensional integrals which I learned in Calculus 2, and which could act as a point of reference, when using the program. Given that the disk’s vertices only have 1 official index, but, that the vertices all belong to a triangular mesh, if the definite integral of 2 points of each triangle is already known, a sort of definite integral of the 3rd point can also be computed.

But, in order for such an approach really to work, what would first need to be done is, that the ‘`dx(v)`’ and ‘`dy(v)`’ of each edge must form the basis of a linear equation, so that two resulting linear combinations will yield:

```
ax * edge1 + bx * edge2 ->
dx(v) == 1., dy(v) == 0.
ay * edge1 + by * edge2 ->
dx(v) == 0., dy(v) == 1.
```

Of course, such a solution would generate error messages as soon as any two edges are parallel, or as soon as the length of any edge is zero. (:5)
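To make the Linear Algebra concrete, the following is a minimal C++ sketch of how such multipliers could be computed, using Cramer’s rule on the resulting 2×2 system. The function and struct names are my own inventions, for illustration, and have nothing to do with FreeFem’s internals:

```cpp
#include <cmath>
#include <stdexcept>

// Hypothetical helper: given two mesh edges e1 = (dx1, dy1) and
// e2 = (dx2, dy2), find multipliers (ax, bx) and (ay, by) such that
//   ax*e1 + bx*e2 == (1, 0)   and   ay*e1 + by*e2 == (0, 1).
// This is Cramer's rule on a 2x2 system; it fails exactly when the
// edges are parallel, or an edge has zero length (determinant == 0).
struct Multipliers { double ax, bx, ay, by; };

Multipliers solveEdgeMultipliers(double dx1, double dy1,
                                 double dx2, double dy2) {
    double det = dx1 * dy2 - dx2 * dy1;
    if (std::fabs(det) < 1e-12)
        throw std::runtime_error("edges parallel or degenerate");
    // These are the columns of the inverse of [[dx1, dx2], [dy1, dy2]]:
    return { dy2 / det, -dy1 / det,   // (ax, bx) maps to (1, 0)
            -dx2 / det,  dx1 / det }; // (ay, by) maps to (0, 1)
}
```

The determinant test is exactly where the error condition just mentioned would come from: a zero determinant means parallel or zero-length edges.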

In order to arrive at ‘`f*v`’, the function that can be computed at the unsolved vertex (x0,y0) would be:

```
2DCross( (s1,t1), (s2,t2) ) :=
(s2 * t1) - (s1 * t2)
a_sign = sign( sign(dx(v)) + (0.5 * sign(dy(v))) )
AreaQuad( (x0,y0), (x1,y1), (x2,y2) ) :=
|2DCross( ((x0-x1),(y0-y1)), ((x0-x2),(y0-y2)) )|
u0 = (u1 + u2) / 2 +
( a_sign *
f(x0,y0) * AreaQuad( (x0,y0), (x1,y1), (x2,y2) ) )
```

(Revised 7/14/2021, 19h10. )

The result could simply be added to the mean of the already-solved vertices’ solutions, and the sum offered as the solution for the 3rd vertex.

In order to arrive at either partial gradient by itself, only 1 out of the 2 available linear combinations would get used.

```
dx(v) = x0 - ((x1 + x2) / 2)
dy(v) = y0 - ((y1 + y2) / 2)
( ax * (f(x0,y0) - f(x1,y1)) + bx * (f(x0,y0) - f(x2,y2)) ) * dx(v) OR
( ay * (f(x0,y0) - f(x1,y1)) + by * (f(x0,y0) - f(x2,y2)) ) * dy(v)
```

This ‘sounds nice’, especially since the boundary value of the disk’s outer boundary is set, and constitutes 2 vertices of a triangle each time (see plot above). A ring of vertices could then be derived, ‘1 step inside’ each ring of vertices that has already been solved.

Such a kludge would additionally remain ~~consistent~~ with the way in which I was taught integrals, in that the values computed in this way for ‘`f*v`’ would end up having amplitudes, ~~proportional~~ both to those of ‘`f(x,y)`’ and to the area of the plot.

(Update 7/13/2021, 18h20: )

**5:)**

Breaking down this thought process into baby-steps, the methodology of computing the multipliers (ax), (bx), (ay) and (by), which I mentioned above, involves a simple application of Linear Algebra. But it is included in the following work-sheet, so that readers who do not wish to perform the work themselves can see some sort of results:

http://dirkmittler.homeip.net/FreeFem_1.html

On most browsers, the reader will need to enable JavaScript from my site, as well as from ‘mathjax.org’, to be able to view the sheet.

(Update 7/14/2021, 21h45: )

FreeFem cheats, *at least, when using a disk-shaped 2D plot* !

There was a basic question going around my head, about how the developers of FreeFem solved a problem, that would be inherent, in outputting a numerically computed integral to a 2D Mesh, in the form of a disk with unordered vertices, that have a single index number. And the answer which I have found was, that *they did not*, in fact, solve the following problem…

Let’s say that the function to be integrated, f(x), is just (x). This is the simplest linear function, I’d say. The indefinite integral is:

0.5 * x^{2} + C

Where (C) is the arbitrary constant.

It’s important to understand the significance of this arbitrary constant, which also characterizes all indefinite integrals as distinct from definite integrals. These integrals would each be a parabola, concave upwards.

If it was our intent to make the endpoints of this parabola equal a specific boundary value, such as zero, the classical way to do that would be, to set the arbitrary constant to (-0.5). The following plot shows what I mean:

‘The problem’ to be considered, if this function needed to be computed as a discrete summation *from both ends*, is that the summation from the right-hand side of the plot, where (x = +1), towards the origin, would need to be done with inverted sign, because the integral is becoming increasingly negative, even though the function being summed is positive. OTOH, the summation coming from the left-hand side, where (x = -1), also needs to lead to symmetrically negative values, because the function being summed was itself negative.

What this means is that, where a summation from left to right took place positively, the same summation going from right to left needs to be negated, in order to preserve the sense of a true integral.

I’ve known this from the start of this posting. And, one of the questions I’ve been breaking my head over was, ‘Where does FreeFem derive the negation?’

The answer is, ‘It does **NOT**!’ Given a disk to be summed from the outside in, FreeFem applies *each summation with the same, positive sign*. The following plot shows the result:

What’s more, if an attempt *were* made, as I found out, to introduce negations to the summations in any systematic way, doing so would lead to artifacts in the way the plot ‘looks’. It would just never seem to come out right.

For this reason, when plotting a disk, even though by naming, FreeFem appears to be doing something ~~in 2D~~, it’s *really just a 1-dimensional integral*, radially inwards.

Consequently, its amplitudes will actually reach those of the original function squared, but never those of the original function cubed.

Dirk

]]>

A scenario which often happens in computing is that there exists a quantity, call it (a), which results accurately from squaring the quantities (x), (y) and (z) first, and then computing the square root of the sum. It could then also be said, that the following *explicit function* has been defined:

```
F(x, y, z) := sqrt(x^2 + y^2 + z^2)
```

Further, the idea exists in Computing that, when all one wants to compute is (x^{2}) for example, it takes fewer CPU cycles to compute (x*x) than to compute a real power function.

But, the object of the exercise could actually be, not to derive (a) from (x), (y) and (z), but rather, to compare two instances of F(x, y, z).

The biggest issue with actually computing F(x, y, z) is that computing the square root is even slower than it was to compute (x^{2}), (y^{2}) and (z^{2}). Therefore, *if one has the luxury of knowing* what (a) is *in advance*, what one can do, for real-number comparisons, is just to square (a), and then *not* to compute the square root that would otherwise exist within the function F(). Hence, when two known quantities are simply being compared, the following way to do it will run slightly faster:

```
a^2 < (x^2 + y^2 + z^2)
```
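As an illustration, the two comparison styles could be written in C++ as follows; the function names are mine, for the sake of the sketch only:

```cpp
#include <cmath>

// Sketch of the squared-quantity comparison described above.
// slowCompare() computes the square root; fastCompare() avoids it,
// by squaring the known quantity (a) instead.
bool slowCompare(double a, double x, double y, double z) {
    return a < std::sqrt(x * x + y * y + z * z);
}

bool fastCompare(double a, double x, double y, double z) {
    // Valid as long as (a) is non-negative, since squaring is only
    // order-preserving for non-negative numbers.
    return a * a < (x * x + y * y + z * z);
}
```

Both versions agree whenever (a) is non-negative, which is the usual case when (a) itself originated as a square root.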

In Modern Computing, actual CPU usage is often ignored, to make the task of writing complex code easier, and it may not always be recognizable that two values being compared would *both* have been computed as the square root of some other value. And so, to avoid having to stare at some code cross-eyed, it can be just as valid to compute two instances of F(x, y, z) *with* the square root function *in each case*, and somewhere later in the code execution, simply to compare the two resulting values.

Dirk

]]>

One of the behaviours which has been trending for several years now, is to take an arbitrary piece of information, and just to call it AI.

This should not be done.

As an example of what I mean, I can give the following image:

This is an image, which has been palettized. That means that the colours of individual pixels have been reduced to ‘some palette of 256 colours’. What any conventional software allows me to do – including ‘GIMP’ – is next, to assign a new colour palette to this image, so that all these colours get remapped, as follows:

What I could do – although I won’t – is claim somehow that this has ‘artistic merit’. But what I cannot legitimately do is, to claim that the second image is ‘an example of AI’.

(Updated 7/06/2021, 16h30… )

(As of 6/30/2021, 15h50: )

Now, I suppose that a question which the reader could ask, which would be closer to legitimate, would be, ‘By what means can an image, perhaps represented by triples of (R,G,B) values, also called a TrueColor Image, be simplified into a representation, in which each pixel has exactly 1 value out of 256, and, so that the colours in this palette represent the original image *optimally*?’ Hence, the question could next be, ‘Why was the *first* image not an example of AI?’

And the answer I’d give is that, in theory, it would be possible to devise machine learning methodologies, to do what is already accessible through conventional methodologies. But why do so, if the results obtained through conventional methodologies are already as close to optimal as possible?

(Update 7/03/2021, 1h15: )

The standard method used to palettize such images is the Median Cut Algorithm.
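For readers unfamiliar with it, the following is a highly compressed C++ sketch of the Median Cut idea – my own simplification, not the code from my source files: split the box of colours repeatedly along the channel with the widest range, at the median, and average each resulting box into one palette entry. Real implementations add many refinements:

```cpp
#include <algorithm>
#include <array>
#include <utility>
#include <vector>

using RGB = std::array<int, 3>;

// Minimal sketch of Median Cut: recursively split the set of colours
// along the channel with the greatest range, at the median, until the
// requested number of boxes is reached; each box then contributes one
// averaged palette colour.
static void medianCut(std::vector<RGB> px, int boxes, std::vector<RGB>& out) {
    if (boxes <= 1 || px.size() <= 1) {
        // Average this box into a single palette entry.
        long s[3] = {0, 0, 0};
        for (const RGB& p : px) { s[0] += p[0]; s[1] += p[1]; s[2] += p[2]; }
        long n = static_cast<long>(px.size());
        if (n > 0) out.push_back({int(s[0]/n), int(s[1]/n), int(s[2]/n)});
        return;
    }
    // Find the channel with the widest range of values.
    int widest = 0, bestRange = -1;
    for (int c = 0; c < 3; ++c) {
        auto [lo, hi] = std::minmax_element(px.begin(), px.end(),
            [c](const RGB& a, const RGB& b) { return a[c] < b[c]; });
        int range = (*hi)[c] - (*lo)[c];
        if (range > bestRange) { bestRange = range; widest = c; }
    }
    // Partition at the median of that channel, then recurse on each half.
    auto mid = px.begin() + px.size() / 2;
    std::nth_element(px.begin(), mid, px.end(),
        [widest](const RGB& a, const RGB& b) { return a[widest] < b[widest]; });
    std::vector<RGB> left(px.begin(), mid), right(mid, px.end());
    medianCut(std::move(left), boxes / 2, out);
    medianCut(std::move(right), boxes - boxes / 2, out);
}
```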

(Update 7/03/2021, 9h50: )

Based on what I have read in the past few days, the following two source files are the notional C++ which, I’d say, best describes how a system of (R,G,B,A) colours gets translated into a palettized format, in real applications today. This is only approximate, but writing and testing this code for syntax satisfied my own curiosity…

http://dirkmittler.homeip.net/text/Palette.h

http://dirkmittler.homeip.net/text/Palette.cpp

(Update 7/04/2021, 19h55: )

I have just made some revisions to the header file and source file linked to above, some of which were minor, but some of which were critical. The critical changes fixed issues which, at run-time, would have caused infinite recursion…

I am ~~finally~~ satisfied with the code.

(Update 7/05/2021, 0h25: )

I have just added a little exercise to the source-code, which populates a bogus 1280×720 pixel image with 24 colour-values, and then palettizes the resulting set. What I found was that the earlier versions of my code contained a grave error. To create STL sets, the keys of which can be any datum, the programmer must define a comparison operator for that datum, that will sort it linearly.

If the datum consists of 6 values, then this operator must compare each of them, in case the previous comparisons revealed equality. Failure to do this will cause odd behaviour – in my case, a set failing to hold elements that have the same colour, but that differ in screen-position. *Those will just be ignored as duplicates then*.
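To illustrate the pitfall, the following C++ sketch (with hypothetical field names, not my actual project code) defines the kind of 6-value datum just described, together with a comparison operator that does fall through all six fields, via `std::tie`:

```cpp
#include <set>
#include <tuple>

// A pixel datum with 6 values: colour (r, g, b, a) plus screen position.
struct Pixel {
    int r, g, b, a;
    int x, y;
};

// A correct strict weak ordering must compare ALL six fields, falling
// through to the next field only when the previous ones compare equal.
// std::tie does exactly that, lexicographically. Comparing the colour
// channels alone would make std::set treat two pixels of the same
// colour at different positions as duplicates, silently dropping one.
bool operator<(const Pixel& lhs, const Pixel& rhs) {
    return std::tie(lhs.r, lhs.g, lhs.b, lhs.a, lhs.x, lhs.y)
         < std::tie(rhs.r, rhs.g, rhs.b, rhs.a, rhs.x, rhs.y);
}
```

With this operator, two same-coloured pixels at different screen positions occupy two distinct set elements, while a true duplicate is still rejected.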

In some, more-optimized version of the code, this might in fact be useful. But in its current version, this type of set operation just bogs the program down tremendously. Running the current version of this program causes it to consume about 180 MBytes of RAM, and the test completes after about 18 seconds.

I think that, for some future purpose, it might actually be better, to generate the palette in a way that only keeps track of distinct colours – which was in fact, how my original code was malfunctioning – but, which then assigns the original image’s pixels to one of the palette colours, according to closeness.

(Update 7/05/2021, 8h20: )

The current version of my algorithm now greatly reduces the total number of pixel-values in each set, into values only differentiated according to (R), (G) and (B). It ‘works’ in that, when indexing a 1280×720 image, it ‘only’ takes up 13MB of RAM, and requires ~2 seconds of time to complete everything, from creating a bogus image to indexing it.

Most of the RAM which my algorithm is allocating, stems from the fact that a TrueColor Image is really being stored, at a pixel-depth of 5×16 bits. Yes, I’m storing an Alpha channel, but not organizing the palette according to it. I’m also storing per pixel, what the maximum channel-values will be, just in case they are greater than 255.

(Update 7/06/2021, 16h30: )

While working on improving my little code exercise, I discovered quite by accident that there was an instruction which the previous version of the code was executing 1280×720×24 times, when in fact that instruction only needed to be executed 1280×720 times. In other words, I could not quite figure out why my code was determining the supposedly optimal palette so quickly, even from a 1280×720 pixel image, yet taking so long afterwards to index the same image, given a palette.

Now that I have corrected this mistake, which was in no way required by the nature of the exercise, my whole test program finishes in 1.4 seconds, no longer in 2.

Dirk

]]>