Particle-Based Fluids

One of the subjects that captured my imagination several years ago, when it started to appear in professionally-authored CGI content – movies – was how fluids could be emulated graphically. And the state of the art is such that particle-based fluids can be rendered on high-end, consumer graphics cards, where the particles’ motion is defined by density, pressure, and resistance to compression.

Sadly, I have yet to see consumer devices simulate fluids as volumes – and do so in real-time.

But once the software has been set up to compute the positions of swarms of particles, which collectively define a fluid, a logical question the power-user will ask is, ‘Now what? A surface of water reflects and refracts light, depending on its normal-vectors, but particles have no normal-vectors.’

And the answer for what to do next is to render the particles using deferred rendering. In other words, it’s still alright if the particles are point-sprites, as long as the Fragment Shader renders a depth-map of these individual entities. That depth-map corresponds to the map which is produced with deferred rendering, and which is subject to post-processing.

What needs to happen next is that this depth-map needs to be smoothed, in a way that leaves no holes in the fluid, but which also leaves surfaces at a tangent to the virtual camera-position, where the edge of the virtual fluid is supposed to exist. This means that a special smoothing function is needed, one that offsets the depth of the individual particles according to a spherical function:

K = SQRT( Radius^2 - X^2 - Y^2 )

Z' = Z - K
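
To make that concrete, below is a minimal, CPU-side sketch of the idea, under my own assumptions: X and Y are a pixel’s offsets from the centre of a particle’s point-sprite, Z is the particle’s view-space depth, pixels outside the sprite’s radius are discarded, and the nearest value per pixel wins, just as a depth-test would decide. The DepthMap structure and the function name are purely illustrative; in a real renderer the same arithmetic would presumably live in the point-sprite’s Fragment Shader, which would write the modified depth out as its result.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// A toy depth-map: one float per pixel, initialized to 'infinitely far away'.
struct DepthMap {
    int width, height;
    std::vector<float> z;
    DepthMap(int w, int h)
        : width(w), height(h),
          z(static_cast<std::size_t>(w) * h, std::numeric_limits<float>::infinity()) {}
};

// Splat one particle into the depth-map, as if it were a point-sprite.
// (cx, cy) is the sprite's centre in pixels, 'radius' its radius in pixels,
// and 'depth' is the particle's view-space depth, Z.
void splatParticleDepth(DepthMap& map, float cx, float cy, float radius, float depth)
{
    const int x0 = std::max(0, (int)std::floor(cx - radius));
    const int x1 = std::min(map.width  - 1, (int)std::ceil(cx + radius));
    const int y0 = std::max(0, (int)std::floor(cy - radius));
    const int y1 = std::min(map.height - 1, (int)std::ceil(cy + radius));

    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            const float dx = (float)x - cx;               // X in the formula above
            const float dy = (float)y - cy;               // Y in the formula above
            const float k2 = radius * radius - dx * dx - dy * dy;
            if (k2 <= 0.0f)
                continue;                                 // outside the sphere: discard

            // K = SQRT( Radius^2 - X^2 - Y^2 ),  Z' = Z - K
            const float zPrime = depth - std::sqrt(k2);

            // Keep the nearest surface, as a depth-test would.
            float& dst = map.z[(std::size_t)y * map.width + x];
            if (zPrime < dst)
                dst = zPrime;
        }
    }
}
```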

Then the normal-vector can be computed from the resulting, modified depth-map. This normal-vector can be used to reflect and/or refract an environment-map, but in the case of refraction, the density of the virtual fluid must also be computed realistically, since most real fluids are not perfectly transparent. This could be done using alpha-blending.
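
One common way to recover that normal-vector – again, a sketch under assumptions, not anybody’s official method – is to take finite differences of the smoothed depth-map and to cross the two resulting tangent vectors. The helper below reuses the DepthMap structure from the previous sketch and treats screen X, Y and depth Z as if they were in the same units, which glosses over the unprojection through the camera’s projection matrix that a real renderer would perform:

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Approximate a view-space normal at interior pixel (x, y) of the smoothed
// depth-map, using central differences.
std::array<float, 3> normalFromDepth(const DepthMap& map, int x, int y)
{
    auto depthAt = [&](int px, int py) {
        return map.z[(std::size_t)py * map.width + px];
    };

    // Partial derivatives of depth with respect to screen X and screen Y.
    const float dzdx = (depthAt(x + 1, y) - depthAt(x - 1, y)) * 0.5f;
    const float dzdy = (depthAt(x, y + 1) - depthAt(x, y - 1)) * 0.5f;

    // The tangent vectors (1, 0, dzdx) and (0, 1, dzdy) cross to give the
    // un-normalized normal (-dzdx, -dzdy, 1).
    const float len = std::sqrt(dzdx * dzdx + dzdy * dzdy + 1.0f);
    return { -dzdx / len, -dzdy / len, 1.0f / len };
}
```

The resulting normal is what would then feed the reflection and refraction look-ups against the environment-map.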

Now, there is an extension to this approach that uses ‘Surfels’…


More about Framebuffer Objects

In the past, when I was writing about hardware-accelerated graphics – i.e., graphics rendered by the GPU – such as in this article, I chose phrasing according to which the Fragment Shader eventually computes the color-values of pixels ‘to be sent to the screen’. I felt that this over-simplification could make my topics a bit easier to understand at the time.

A detail which I had deliberately left out was that the rendering target may not be the screen in any given context. What happens is that memory-allocation, even the allocation of graphics-memory, is still carried out by the CPU, not the GPU. And ‘a shader’ is just another way to say ‘a GPU program’. In the case of a “Fragment Shader”, what this GPU program does can better be visualized as shading, whereas in the case of a “Vertex Shader”, it just consists of computations that affect coordinates, and may therefore be referred to just as easily as ‘a Vertex Program’. Separately, there exists a graphics-card extension that allows the language to be the ARB-language, which may also be referred to as defining a Vertex Program. ( :4 )

The CPU sets up the context within which the shader is supposed to run, and one element of this context is a buffer, to which the given Fragment Shader is to render its pixels. The CPU sets this up, just as it sets up the 2D texture images from which the shader fetches texels.

The rendering target of a given shader-instance may be ‘what the user finally sees on his display’, or it may not. Under OpenGL, the rendering target could just be a Framebuffer Object (an ‘FBO’), which has also been set up by the CPU as an available texture-image, from which another shader-instance samples texels. The result of that would be Render To Texture (‘RTT’).
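
To give a concrete picture of that, here is roughly what the CPU-side setup looks like under OpenGL – a minimal sketch only, leaving out the depth attachment, error handling, and the extension loader (glad, GLEW, or similar) that actually provides these function pointers:

```cpp
#include <GL/gl.h>   // in practice, a loader such as glad or GLEW supplies these entry points

// Create a Framebuffer Object whose color attachment is an ordinary 2D texture,
// so that one rendering pass can draw into it and a later pass can sample from it.
GLuint setupRenderToTexture(int width, int height, GLuint* outColorTexture)
{
    // The texture that will receive the rendered image.
    GLuint colorTex = 0;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // The FBO itself, with the texture attached as its color buffer.
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // Handle incompleteness in real code.
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer (the screen)
    *outColorTexture = colorTex;
    return fbo;
}

// Per frame, roughly:
//   glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // first pass renders into the texture
//   ... draw ...
//   glBindFramebuffer(GL_FRAMEBUFFER, 0);     // second pass targets the screen
//   glBindTexture(GL_TEXTURE_2D, colorTex);   // and samples the first pass's result
```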
