An Observation about NURBS

Given that the reader probably understands more about the Math of NURBS than I do, the reader will also understand that, in general, true NURBS Curves will not touch their Control-Points. My own earlier misperception on that subject came from using the (now-defunct) application ‘TrueSpace’, which arbitrarily labeled objects as NURBS that were really nothing of the sort.

It should also be understood that classic cubic splines are not NURBS either.

Given this understanding, a question might arise as to how the actual application “Ayam” is capable of taking an arbitrary CSG-primitive and converting it into a NURBS-Patch.

(Screenshot: the application ‘Ayam’.)

And there are really several separate cases in which this is possible:

(Last Updated 08/20/2017, 22h05)

  1. The Order of the NURBS may be only 2, even if the application in question has fully-general Math to work with NURBS. In that case, by default, the position of a point along the curve is determined by the 2 adjacent Control-Points, and at one exact parameter-value, is controlled by only 1 Control-Point. This is a simpler type of NURBS, which shows as straight lines between the CPs, and which touches each CP. It’s useful for converting a Cube.
  2. True NURBS have a Knot-Vector, which states possible repetitions (“Multiplicity”), and which the application in question does implement. When a Knot has a Multiplicity equal to the Order of the NURBS Curve, it takes over the position of the Curve completely. However, a set of three NURBS Control-Points can form a linear, equidistant sequence, with the Knot at each CP having a Multiplicity equal to the Order minus one. The two NCurve CPs that form the endpoints can correspond to the ICurve Handles, while the NCurve CP in the middle can correspond to the ICurve CP. What happens then is that the NCurve touches the CP in the middle, but not the ones at the ends. This is similar to how a circle is formed, with an Order of 3. (Both this case and case 1 are illustrated in the sketch after this list.)
  3. A NURBS-Surface – i.e., a -Patch – is a special case derived from a Curve, which has the two parameters U and V, where a Curve only had one parameter. The CPs are then conceived to form a grid with rectangular topology. It seems entirely plausible to me that the Order along U may differ from the Order along V, and that having a linear Order along one parameter may then be the correct way to convert a Cylinder or a Cone. In that case, a Disk may simply be a special case of a Cone, where one parameter forms a circle, and the other, linear parameter defines either the radius or the height – with a single CP defining either the center or the vertex.
  4. Most plausibly, if the NURBS-Patch is to have an Order along U that differs from its Order along V, it should also have two separate Knot-Vectors.
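
To make cases 1 and 2 above more concrete, what follows is a minimal sketch, in Python, of how such curves could be evaluated, using the Cox-de Boor recursion. It omits the weights – making it a plain B-Spline rather than a true NURBS – and the Knot-Vectors are my own example values, not anything taken from ‘Ayam’:

    def basis(i, k, t, knots):
        # Cox-de Boor recursion: N(i,k) is the basis function of Order k
        # (degree k - 1) attached to Control-Point i.
        if k == 1:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + k - 1] != knots[i]:
            left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                    * basis(i, k - 1, t, knots))
        if knots[i + k] != knots[i + 1]:
            right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                     * basis(i + 1, k - 1, t, knots))
        return left + right

    def curve_point(t, order, cps, knots):
        # Weighted sum of the Control-Points; all weights implicitly 1.
        x = sum(basis(i, order, t, knots) * cp[0] for i, cp in enumerate(cps))
        y = sum(basis(i, order, t, knots) * cp[1] for i, cp in enumerate(cps))
        return (x, y)

    # Case 1:  Order 2 -- straight lines, touching every CP:
    cps2 = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
    knots2 = [0.0, 0.0, 1.0, 2.0, 2.0]
    print(curve_point(1.0, 2, cps2, knots2))   # -> (1.0, 1.0), the middle CP

    # Case 2:  Order 3, interior Knot with Multiplicity = Order - 1:
    cps3 = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0), (3.0, 2.0), (4.0, 0.0)]
    knots3 = [0.0, 0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 2.0]
    print(curve_point(1.0, 3, cps3, knots3))   # -> (2.0, 1.0), the middle CP

(Because of the half-open interval in the recursion, the curve should be evaluated at parameter-values strictly below the last Knot.)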

Dirk

A way to compute single, shared Handles automatically could be:

H1 = P2 - (1/4)( P3 - P1 )

If each ICurve CP is to have two Handles of its own, a way to compute those automatically could be:

H2- = P2 - (1/6)( P3 - P1 )

H2+ = P2 + (1/6)( P3 - P1 )
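
Assuming that the points are given as numpy arrays, a minimal sketch of those two computations, in Python, could look as follows (the function names are my own):

    import numpy as np

    def shared_handle(p1, p2, p3):
        # Single, shared Handle next to P2:  H1 = P2 - (1/4)( P3 - P1 )
        return p2 - 0.25 * (p3 - p1)

    def dual_handles(p1, p2, p3):
        # Two Handles belonging to P2:
        #   H2- = P2 - (1/6)( P3 - P1 ),  H2+ = P2 + (1/6)( P3 - P1 )
        d = (p3 - p1) / 6.0
        return p2 - d, p2 + d

    p1, p2, p3 = np.array([0., 0.]), np.array([1., 1.]), np.array([2., 0.])
    print(shared_handle(p1, p2, p3))    # -> [0.5  1. ]
    print(dual_handles(p1, p2, p3))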



Widening Our 3D Graphics Capabilities under FOSS

Just so that I can say that my 3D Graphics / Model-Editing capabilities are not strictly limited to “Blender”, I have just installed the following Model Editors, which are not available through my package-manager, on the Linux computer I name ‘Klystron’:

I felt that it might help others for me to note the URLs above, since correct and useful URLs can be hard to find.

In addition, I installed the following Ray-Tracing Software-Rendering Engines, which do not come with their own Model Editors:

Finally, the following were always available through my package manager:

  • Blender
  • K-3D
  • MeshLab
  • Wings3D


  • PovRay


In order to get ‘Ayam’ to run properly – i.e., to be able to load its plugins, and therefore to load ‘Aqsis’ shaders – I needed to create a number of symlinks.

( Last Updated on 08/19/2017, 19h55 )



Particle-Based Fluids

One of the subjects which captured my imagination several years ago, when it started to appear in professionally-authored CGI content – i.e., movies – was how fluids could be emulated graphically. And the state of the art is such that particle-based fluids can be rendered on high-end, consumer graphics cards, where the particles’ motion is defined by density, pressure, and resistance to compression.

Sadly, I still see no case in which consumer devices can simulate fluids as volumes – and do so in real-time.

But once the software has been set up to compute the positions of swarms of particles, which collectively define a fluid, a logical question which the power-user will ask is, ‘Now what? A surface of water reflects and refracts light, depending on its normal-vectors, but particles lack any normal-vectors.’

And the answer for what to do next is to render the particles using deferred rendering. In other words, it’s still alright if the particles are point-sprites, as long as the Fragment Shader renders a depth-map of these individual entities. That depth-map will correspond to the map which deferred rendering produces, and which is subject to post-processing.

What needs to happen next is that this depth-map needs to be smoothed, in a way that leaves no holes in the fluid, but which also leaves surfaces at a tangent to the virtual camera-position, where the edge of the virtual fluid is supposed to exist. This means that a special smoothing function is needed, one that offsets the depth of individual particles according to a spherical function:

K = SQRT( Radius^2 - X^2 - Y^2 )

Z’ = Z - K
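
In a real implementation this arithmetic would live in the Fragment Shader, but purely as a sketch of what the function does, here it is in Python / numpy form (the function name and the sprite resolution are my own example):

    import numpy as np

    def sprite_depth_offsets(radius, size):
        # For each texel of a point-sprite, the sprite's flat depth Z is to
        # be displaced toward the camera by K = SQRT( R^2 - X^2 - Y^2 ),
        # so that the sprite writes the depth of a sphere's front surface.
        xs = np.linspace(-radius, radius, size)
        X, Y = np.meshgrid(xs, xs)
        r2 = radius * radius - X * X - Y * Y
        K = np.sqrt(np.maximum(r2, 0.0))   # a real shader would discard
        return -K                          # texels outside the footprint

    offsets = sprite_depth_offsets(1.0, 9)
    print(offsets[4, 4])   # center texel bulges by the full Radius: -1.0

Adding these offsets to each sprite’s Z yields the Z’ above.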

And then, the normal-vector can be computed from the resulting, modified depth-map. This normal-vector can be used to reflect and/or refract an environment-map, but in the case of refraction, the density of the virtual fluid must also be computed realistically, since most real fluids are not perfectly transparent. This could be done using alpha-blending.
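
Again purely as a sketch of the arithmetic, a normal-vector could be recovered from the smoothed depth-map by finite differences, for example:

    import numpy as np

    def normals_from_depth(depth):
        # The depth gradient is approximated by finite differences; the
        # normal is then n = normalize( (-dZ/dX, -dZ/dY, 1) ), pointing
        # toward the camera.
        dzdy, dzdx = np.gradient(depth)
        n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
        return n / np.linalg.norm(n, axis=2, keepdims=True)

    # A perfectly flat depth-map yields normals facing the camera:
    print(normals_from_depth(np.full((4, 4), 5.0))[0, 0])   # -> [0. 0. 1.]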

Now, there is an extension to this approach, which uses ‘Surfels’…



More about Framebuffer Objects

In the past, when I was writing about hardware-accelerated graphics – i.e., graphics rendered by the GPU – such as in this article, I chose the phrasing according to which the Fragment Shader eventually computes the color-values of pixels ‘to be sent to the screen’. I felt that this over-simplification could make my topics a bit easier to understand at the time.

A detail which I had deliberately left out was that the rendering target may not be the screen in any given context. What happens is that memory-allocation, even the allocation of graphics-memory, is still carried out by the CPU, not the GPU. And ‘a shader’ is just another way to say ‘a GPU program’. In the case of a “Fragment Shader”, what this GPU program does can better be visualized as shading, whereas in the case of a “Vertex Shader”, it just consists of computations that affect coordinates, and may therefore be referred to just as easily as ‘a Vertex Program’. Separately, there exists a graphics-card extension that allows for the language to be the ARB-language, which may also be referred to as defining a Vertex Program. ( :4 )

The CPU sets up the context within which the shader is supposed to run, and one of the elements of this context is to set up a buffer, to which the given Fragment Shader is to render its pixels. The CPU sets this up, just as it sets up the 2D texture images from which the shader fetches texels.

The rendering target of a given shader-instance may be, ‘what the user finally sees on his display’, or it may not. Under OpenGL, the rendering target could just be a Framebuffer Object (an ‘FBO’), which has also been set up by the CPU as an available texture-image, from which another shader-instance samples texels. The result of that would be Render To Texture (‘RTT’).
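
As a minimal sketch of how such an FBO could be set up for RTT – here using PyOpenGL, assuming that a current GL context already exists, and with sizes and names that are my own example:

    from OpenGL.GL import *

    def create_rtt_target(width, height):
        # The CPU allocates a texture-image, which some later
        # shader-instance will be able to sample from:
        tex = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, None)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

        # ...And a Framebuffer Object, to which the Fragment Shader is to
        # render its pixels, instead of to the screen:
        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0)
        assert (glCheckFramebufferStatus(GL_FRAMEBUFFER) ==
                GL_FRAMEBUFFER_COMPLETE)
        glBindFramebuffer(GL_FRAMEBUFFER, 0)   # back to the default target
        return fbo, tex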

