Some Observations about Roots with Multiplicities.

One of the subjects I had visited in my past was writing a C++ program that approximates the roots of polynomials. That program needed to deal with ‘Doubled Roots’ in a special way, because the search algorithm within it, which searches for the roots, cannot deal with doubled roots directly. And what I found was that roots cannot only be doubled, but can have multiplicities higher than (2). After not having pondered that programming project for some time, I now come back to rethink it, and find that it can be given a lecture of sorts, all to itself.

So, this is a link to a work-sheet in which I try to explain Roots with Multiplicities, perhaps to readers with limited knowledge of Calculus:

Link to Letter-Sized PDF

Link to Reduced-Size PDF, for viewing on Mobile Devices

And as those two documents mention, the following is a link to the earlier blog posting, from which readers can download the C++ program, as well as find instructions on how to compile it on Linux computers:

Link to Earlier Blog Posting

Hence, the C++ program linked to in the earlier blog posting needed to make use of the subject that the two PDF files describe.
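To illustrate why multiplicities cause trouble, here is a hedged sketch of my own, not necessarily what the linked program does: at a root of multiplicity (m), the polynomial and its first (m − 1) derivatives all vanish, which makes plain Newton's method converge only linearly there. One standard remedy is to apply Newton's method to u(x) = f(x) / f′(x) instead, because u has only simple roots:

```cpp
#include <cmath>
#include <vector>

// Evaluate p(x), p'(x) and p''(x) in one Horner pass.
// Coefficients run from the highest-degree term down to the constant.
void horner3(const std::vector<double>& c, double x,
             double& p, double& dp, double& ddp) {
    p = dp = ddp = 0.0;
    for (double a : c) {
        ddp = ddp * x + dp;   // accumulates p''(x) / 2
        dp  = dp  * x + p;
        p   = p   * x + a;
    }
    ddp *= 2.0;
}

// Newton's method applied to u(x) = p(x)/p'(x).  Since u has only
// simple roots, convergence stays quadratic even at a multiple root.
// The step is u/u' = p*p' / (p'^2 - p*p'').
double newton_multiple(const std::vector<double>& c, double x,
                       int iters = 50, double tol = 1e-12) {
    for (int i = 0; i < iters; ++i) {
        double p, dp, ddp;
        horner3(c, x, p, dp, ddp);
        double denom = dp * dp - p * ddp;
        if (std::fabs(denom) < 1e-300) break;   // avoid division blow-up
        double step = p * dp / denom;
        x -= step;
        if (std::fabs(step) < tol) break;
    }
    return x;
}
```

For example, with p(x) = (x − 1)²(x + 2), whose coefficients are {1, 0, −3, 2}, this iteration converges rapidly to the doubled root at x = 1, where plain Newton's method would crawl.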

N.B.:

Continue reading Some Observations about Roots with Multiplicities.

A bit of my personal history, experimenting in 3D game design.

I was wide-eyed and curious. Well before the year 2000, I only owned Windows-based computers, purchased most of my software for money, and also purchased a license of 3D Game Studio, some version of which is still being sold today. The version that I purchased well before 2000 was using the ‘A4’ game engine, where all the 3DGS versions have a game engine designated by the letter ‘A’ followed by a number.

That version of 3DGS was based on Microsoft's DirectX 7. Although DirectX 7 was perhaps one of the earliest APIs that offered hardware rendering (provided, that is, that the host machine had a graphics card capable of it), it still had, as one of its capabilities, the ability to switch back into software mode.

I created a simplistic game using that engine, which had no real title, but which I simply referred to as my ‘Defeat The Guard Game’. And in so doing I learned a lot.

The API which is referred to as OpenGL offers what the DirectX versions offer. But because DirectX has tended to have the primary say in how the graphics hardware is designed, OpenGL versions are frequently just catching up to what the latest DirectX versions have to offer. There is a loose correspondence in version numbers.

Shortly after the year 2000, I upgraded to a 3D Game Studio version with their ‘A6’ game engine. This was a game engine based on DirectX 9.0c, which was also standard with Windows XP. It no longer offered any possibility of software rendering, but it gave the customers of this software product their first opportunity to program shaders. And because I was playing with the ‘A6’ game engine for a long time, in addition to owning a PC that ran Windows XP for a long time, the capabilities of DirectX 9.0c became etched in my mind. However, as fate would have it, I never actually created anything significant with this game engine version, only snippets of code designed to test various capabilities.

Continue reading A bit of my personal history, experimenting in 3D game design.

Video Compression

A friend of mine once suggested that ‘a good way’ to compress a (2D) video stream would be to compute the per-pixel difference of each frame with respect to the previous frame, and then to JPEG-compress the result. That is close in spirit to how the inter-frame coding of the MPEG family of codecs begins (although Motion JPEG, i.e. MJPEG, actually compresses each frame independently, with no differencing). However, the up-to-date, state-of-the-art compression schemes go further than that, in order to achieve smaller file sizes, and are often based on Macroblocks.

Also, my friend failed to notice that at some point within 2D video compression, ‘reference frames’ are needed, which are also sometimes referred to as key-frames. These key-frames should not be confused, however, with the key-frames that are used in video-editing software, 2D and 3D, to control animations which the power-user wants to create. Reference frames are needed within 2D video compression if for no other reason than the fact that, given the small errors with which ‘comparison frames’ are decompressed, the present frame's contents would otherwise deviate further and further from the intended, original content, beyond what is acceptable for a lossily compressed stream.
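As a toy illustration of that drift (my own sketch, not any real codec's design): suppose a single pixel's value is transmitted as a coarsely quantized difference against the previous original frame. The quantization error then accumulates at the decoder, and periodically inserting an intra-coded key-frame bounds the drift:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Coarse uniform quantizer, standing in for lossy residual coding.
double quantize(double v, double step = 0.5) {
    return std::round(v / step) * step;
}

// Decode a stream of per-frame differences taken against the ORIGINAL
// previous frame.  Quantization error accumulates at the decoder;
// inserting a key-frame every 'gop' frames resets it.  Returns the
// worst deviation of the reconstruction from the true signal.
double max_drift(const std::vector<double>& src, int gop) {
    double recon = src[0];        // frame 0 is always intra-coded
    double worst = 0.0;
    for (std::size_t t = 1; t < src.size(); ++t) {
        if (gop > 0 && t % gop == 0)
            recon = quantize(src[t]);                 // key-frame: coded absolutely
        else
            recon += quantize(src[t] - src[t - 1]);   // residual vs. previous frame
        worst = std::max(worst, std::fabs(recon - src[t]));
    }
    return worst;
}
```

Running this on a slowly ramping signal shows the reconstruction wandering arbitrarily far from the source when no key-frames are inserted, while a key-frame every 10 frames keeps the deviation bounded.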

The concept behind Macroblocks can be stated quite easily. Any frame of a video stream can be subdivided into so-called “Transform Blocks”, which are typically 8×8 pixel-groups, and of which the Discrete Cosine Transform can be computed, in what would amount to a simple compression of each frame. The DCT coefficients are then quantized, as is familiar from JPEG. Because the video is also encoded in a Y′UV colour scheme, there are two possible resolutions at which the DCT can be computed: one for the Luminance values, and a lower resolution, in which each sample spans a larger group of pixels (typically 2×2, with 4:2:0 chroma subsampling), for the Chroma values. However, it is in the comparison of each frame with the previous frames that ‘good’ 2D video compression has an added aspect of complexity, which my friend did not foresee.
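For concreteness, here is a naive (deliberately unoptimized) 2D DCT-II of one 8×8 Transform Block, with a single uniform quantizer standing in for a codec's actual quantization matrices. Real codecs use fast, often integer-exact transforms; this sketch only shows the principle, namely that smooth blocks end up with few non-zero coefficients:

```cpp
#include <array>
#include <cmath>

using Block = std::array<std::array<double, 8>, 8>;

// Naive 2-D DCT-II of one 8x8 transform block, as used for
// intra coding in JPEG and the MPEG family.
Block dct8x8(const Block& in) {
    const double pi = std::acos(-1.0);
    Block out{};
    for (int u = 0; u < 8; ++u)
        for (int v = 0; v < 8; ++v) {
            double cu = (u == 0) ? std::sqrt(0.5) : 1.0;
            double cv = (v == 0) ? std::sqrt(0.5) : 1.0;
            double s = 0.0;
            for (int x = 0; x < 8; ++x)
                for (int y = 0; y < 8; ++y)
                    s += in[x][y]
                       * std::cos((2 * x + 1) * u * pi / 16.0)
                       * std::cos((2 * y + 1) * v * pi / 16.0);
            out[u][v] = 0.25 * cu * cv * s;
        }
    return out;
}

// Uniform quantization: most small coefficients collapse to zero,
// which is what makes the entropy-coded output compact.
int quant(double coeff, int q) {
    return static_cast<int>(std::lround(coeff / q));
}
```

A completely flat block (all pixels equal) transforms to a single non-zero DC coefficient, with every AC coefficient quantizing to zero.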

The preceding frame is first translated in 2D, by a vector that is encoded with each Macroblock, as an estimate of motion on the screen. Only after this translation of the subdivided image, by an integer number of pixels in X and in Y, is a sub-result formed, with which the per-pixel difference of the present frame is computed. This results in per-pixel values that may or may not be non-zero, and in the possibility that an entire Transform Block has DCT coefficients which are all zeroes.
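A minimal sketch of that motion-compensated differencing follows, with a hypothetical Frame type of my own, and with clamping at the frame borders standing in for a codec's actual edge handling. The point it demonstrates: when the encoded motion vector matches the actual motion, the residual for the whole Macroblock collapses to zeros, and its Transform Blocks then cost almost nothing to encode:

```cpp
#include <algorithm>
#include <vector>

struct Frame {
    int w, h;
    std::vector<int> px;               // row-major luma samples
    int at(int x, int y) const {       // clamp reads at the borders
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(h - 1, y));
        return px[y * w + x];
    }
};

// Residual of one 16x16 macroblock at (bx,by) in 'cur', predicted from
// 'prev' shifted by the integer motion vector (mvx,mvy).
std::vector<int> residual(const Frame& cur, const Frame& prev,
                          int bx, int by, int mvx, int mvy) {
    std::vector<int> out;
    out.reserve(16 * 16);
    for (int y = 0; y < 16; ++y)
        for (int x = 0; x < 16; ++x)
            out.push_back(cur.at(bx + x, by + y)
                        - prev.at(bx + x + mvx, by + y + mvy));
    return out;
}
```

An encoder's motion-estimation stage would search over candidate vectors for the one minimizing this residual; real codecs also refine the vector to half- or quarter-pixel precision, which this integer sketch omits.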

Continue reading Video Compression

A fact about how software-rendering is managed in practice, today.

People of my generation – and I'm over 50 years old as I'm writing this – first learned about CGI – computer-generated imagery – in the form of ‘ray-tracing’. What my contemporaries are slow to find out is that meanwhile, an important additional form of CGI has come into existence, which is referred to as ‘raster-based rendering’.

Ray-tracing has, as an advantage over raster-based rendering, better optical accuracy, which leads to photo-realism. Ray-tracing therefore still gets used a lot, especially in Hollywood-originated CGI for movies, etc. But ray-tracing still has a big drawback, which is that it's slow to compute. Typically, ray-tracing cannot be done in real time, needs to be performed on a farm of computers, and an hour of CPU time may be needed to render a sequence which might play for 10 seconds.

But, in order for consumers to be able to play 3D games, the CGI needs to be computed in real time, for which reason the other type of rendering was invented in the 1990s; this form of rendering is carried out by the graphics hardware, in real time.

What this dichotomy has led to is model- and scene-editors such as “Blender”, which allow complex editing of 3D content, often with the purpose that the content eventually be rendered by arbitrary, external methods, including software-based ray-tracing. But such editing applications still themselves possess an editing-preview window / rectangle, in which their power-users can see the technical details of what they're editing, in real time. And those editing-preview windows are hardware-rendered, using raster-based methods, even when the final result is to be rendered by ray-tracing.

Continue reading A fact about how software-rendering is managed in practice, today.