A caveat in using ‘ffmpeg’ to produce consumer-ready streams from individual frame-files.

It recently happened to me that I had used ‘Blender’ to create a very short animation of 144 frames, but where I had instructed Blender to output this animation as a series of numbered .PNG files, which I would next compile, using an ‘ffmpeg’ command, into a compressed stream: an .MP4 file, using H.264 video compression. ( :1 )

But unexpectedly, I obtained an .MP4 file which would play in some of my player applications, but not in others. And when I investigated this problem, I found that player applications which used a Linux feature named ‘VDPAU’ were not able to play the stream, while player applications which used software to decompress the stream were able to play it.

The very first assumption some people might make in such a situation is that they do not have their graphics drivers set up correctly, and that VDPAU may not be working correctly on their Linux-based computers. But when I looked at my NVidia settings panel, it indicated that VDPAU support included support for H.264-encoded streams specifically:

[Screenshot: NVidia settings panel, listing H.264 among the supported VDPAU codecs]

BTW, it’s not necessary for the computer to have an NVidia graphics card, with the associated NVidia GUI, to possess graphics acceleration. It’s just that NVidia makes it particularly easy, for users who are used to Windows, to obtain information about their graphics card.
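Incidentally, the same information can be read from the command line on any Linux system on which VDPAU is set up. A minimal sketch, assuming the small diagnostic package ‘vdpauinfo’ has been installed:

vdpauinfo | grep -A 20 "Decoder capabilities"

If H.264 profiles appear in the section which this prints, then the driver side of VDPAU should, in principle, be able to decode such a stream.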

Rather than believe next that VDPAU was broken due to the graphics driver, I began to look for my problem elsewhere. And I was able to find the true cause of the playback problem. Ultimately, if we want to compress a stream into an .MP4 file, and if we want the recipient of that stream to be able to play it back using hardware acceleration, which is the norm for high-definition streams, then an ‘ffmpeg’ command similar to the one below would be the correct command:

ffmpeg -framerate 24 -i infile_%04d.png -an -vcodec libx264 -pix_fmt yuv420p -crf 20 outfile.mp4


But I feel that I should explain how my first attempt to compress this stream failed. It did not contain the parameter shown above, namely ‘-pix_fmt yuv420p’. There are two Wikipedia articles which explain what ‘YUV’ means and what chroma subsampling is, which may explain the subject better than I can, and which I recommend my reader read:

https://en.wikipedia.org/wiki/YUV

https://en.wikipedia.org/wiki/Chroma_subsampling
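As an aside, the pixel format of an already-compressed stream can be inspected directly, using ‘ffprobe’, which ships with ‘ffmpeg’. A minimal sketch, assuming the output file is named ‘outfile.mp4’ as above:

ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=noprint_wrappers=1 outfile.mp4

A file which refuses hardware-accelerated playback will often report ‘yuv444p’ here, which is what ‘libx264’ tends to choose for RGB .PNG input when no pixel format is forced, and which many hardware decoders cannot handle.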

I am now going to paraphrase what the above articles explain in detail.


I’m impressed with the Mesa drivers.

Before we install Linux on our computers, we usually try to make sure that we have either an NVIDIA or an AMD / Radeon GPU – the graphics chip-set – so that we can use the proprietary NVIDIA drivers designed by that company to run under Linux, or the proprietary ‘fglrx’ drivers provided by AMD, or the ‘Mesa‘ drivers, which are open-source, and which are designed by Linux specialists. Because each proprietary driver only covers one of the available families of chip-sets, after we have installed Linux, our choice boils down to a choice between proprietary and Mesa drivers.

I think that the main advantage of the proprietary drivers remains that they will offer our computers the highest version of OpenGL possible from the hardware – which could go up to 4.5! But obviously, there are also advantages to using Mesa, one of which is the fact that installing those drivers does not install a ‘blob’ – an opaque piece of binary code which nobody can analyze. Another is the fact that the Mesa drivers provide ‘VDPAU‘, which the ‘fglrx’ drivers fail to implement. This last detail has to do with the hardware-accelerated playback of 2D video streams that have been compressed with one out of a very short list of codecs.

But I would add to the possible reasons for choosing Mesa the fact that its stated OpenGL version number does not set a real limit on what the graphics chip-set can do. Officially, Mesa offers OpenGL 3.0, and this could make it look on the surface as though its implementation of OpenGL is somewhat lacking, as a trade-off against its other benefits.

One way in which ‘OpenGL’ seems to differ from its real-life competitor, ‘DirectX’, is in the system by which certain DirectX drivers and hardware offer a numeric compute-level, where, if that compute-level has been achieved, the game designer can count on a specific set of features being implemented. What seems to happen with OpenGL instead is that version 3.0 must first be satisfied. And if it is, the 3D application next checks individually whether the OpenGL system available offers specific OpenGL extensions, by name. If the application is very well written, it will test for the existence of every extension it needs, before giving the command to load that extension. But in certain cases, a failure to test this can lead to the graphics card crashing, because the graphics card itself may not have the extension requested.
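That sort of test can be approximated from the command line as well. A minimal sketch using ‘glxinfo’ – under Debian, part of the ‘mesa-utils’ package – which checks whether the driver advertises one extension by name, taking Geometry Shaders as the example:

glxinfo | grep -q "GL_ARB_geometry_shader4" && echo present || echo absent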

As an example of what I mean, my KDE / Plasma compositor settings allow me to choose ‘OpenGL 3.1’ as an available back-end, and when I select it, it works, in spite of my Mesa drivers ‘only’ achieving 3.0. I think that if the drivers had been stated to be 3.1, this could actually mean they lose backward-compatibility with 3.0, whereas in fact they preserve that backward-compatibility as much as possible.
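The reader can observe this sort of discrepancy directly, because ‘glxinfo’ reports more than one version string, and under Mesa, the numbers on the following two lines need not agree:

glxinfo | grep "OpenGL core profile version string"
glxinfo | grep "OpenGL version string"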

[Screenshots: KDE Plasma compositor settings, with ‘OpenGL 3.1’ chosen as the rendering back-end]


Understanding That the GPU Is Real

A type of graphics hardware which once existed was an arrangement by which a region of memory was formatted to correspond directly to screen pixels, and by which a primitive set of chips would rasterize that memory region, sending the analog equivalent of pixel values to an output device, such as a monitor, even while the CPU was writing changes to the same memory region. This type of graphics arrangement is currently referred to as “a framebuffer device”. Since the late 1990s, these types of graphics have been replaced by graphics hardware that possesses a ‘GPU’ – a Graphics Processing Unit. The acronym GPU is formed similarly to the acronym ‘CPU’, the latter of which stands for Central Processing Unit.
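On Linux systems which still expose a legacy framebuffer device, usually as ‘/dev/fb0’, the directness of that old arrangement can be demonstrated. A minimal sketch, to be run from a text console, with suitable permissions, and on the assumption that ‘/dev/fb0’ exists:

cat /dev/urandom > /dev/fb0

The screen fills with noise immediately, because the memory region being written to is, quite literally, the display.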

A GPU is essentially a kind of co-processor, which does a lot of the graphics work that the CPU once needed to do, back in the days of framebuffer devices. The GPU has been optimized, where present, to give real-time, 2D perspective-renderings of 3D scenes, which are fed to it in a language that is either some version of DirectX or some version of OpenGL. But modern GPUs are also capable of performing certain 2D tasks, such as accelerating the playback of compressed video streams at very high resolutions, and performing Desktop Compositing.


What they do is called raster-based rendering, as opposed to ray-tracing, which cannot usually be accomplished in real-time.

And modern smart-phones and tablets also typically have GPUs, which give them some of their smooth home-screen effects and animations, all of which would be prohibitive to program under software-based graphics.

The fact that some phone or computer has been designed and built by Apple does not mean that it has no GPU. Apple presently uses OpenGL as its main language to communicate 3D to its GPUs.

DirectX is totally owned by Microsoft.

The GPU of a general-purpose computing device often possesses additional protocols for accepting data from the CPU, other than DirectX or OpenGL. The accelerated decompression of 2D video streams would be an example of that, which is possible under Linux if a graphics driver supports ‘VDPAU‘ …
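A player application can also be told explicitly to use that protocol. A minimal sketch using ‘mplayer’, assuming a VDPAU-capable driver and a hypothetical H.264-encoded file named ‘stream.mp4’:

mplayer -vo vdpau -vc ffh264vdpau, stream.mp4

The trailing comma after ‘ffh264vdpau’ tells ‘mplayer’ that it may fall back to its other codecs, should the hardware-accelerated one fail.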

Dirk


OGRE 1.10 Compiled On Laptop ‘Klystron’

One of the projects which I had been working on, while my Hewlett-Packard laptop was running Windows 8.1 and was named ‘Maverick’, was to compile “OGRE 1.10” on it to the best of my ability. And one mistake which I was adhering to was to insist on using the ‘MinGW’ compiler suite. OGRE developers had already tried to convince me to use the MS compiler, since that was a Windows computer, but I did not comply. This was particularly pedantic of me, since by now a free version of Visual Studio is available that can compile OGRE.

So now that the same hardware has Linux installed on it, I recommenced compiling OGRE, with native compilers and tools. But the results were not exactly spectacular.

One reason for the lackluster results is the fact that ‘Klystron’ currently has ‘Mesa’ drivers loaded for its Radeon graphics card, instead of the proprietary, binary ‘fglrx’ driver. Mesa will give it OpenGL 3.3 tops, while ‘fglrx’ would have given it OpenGL 4.5. And the latest OGRE samples include samples with Geometry Shaders and other OpenGL 3 features, and even some Tessellators, which would be OpenGL 4 features.

Apparently, when one pushes the Mesa drivers to their limits, they will bug out and even cause the X-server to freeze. Thus, when I switched from testing the OGRE OpenGL 2 rendering engine to its OpenGL 3+ rendering engine, I ran into an X-server freeze.

This did not force me to hard-boot, because often, during an X-server lockup, I can press <Ctrl>+<Alt>+F1 to reach a console window, from there do a user and a root login, and then issue an ‘init 6‘ command, which will usually do a controlled reboot, in which all file systems are unmounted correctly before the restart.
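To put the same recovery sequence in compact form, assuming a SysV-style init system (on systemd-based systems, ‘systemctl reboot’ would be the equivalent):

# At the console reached with <Ctrl>+<Alt>+F1, after logging in and becoming root:
init 6    # runlevel 6 = controlled reboot, with file systems unmounted cleanly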

There is one detail of what the Mesa drivers do, which I like a whole lot. They allow shader code written in the language Cg to run, even though Cg is a legacy toolkit developed by nVidia, for use on nVidia graphics cards and not on Radeon.

The fact that the Mesa drivers allow me to do that, in contrast to the limitations which were imposed on me under Windows 8.1, means that with OGRE 1.10, the Terrain System finally works 100%. OGRE 1.10 uses GPU-generated terrain, whereas most graphics engines rely entirely on the CPU to create terrain. The earlier inability to get terrain to work with this system was more crippling than anything else.

But as long as I am not using the ‘fglrx’ drivers, all attempts to get OpenGL 3 features to work with OGRE utterly fail, including any hope of isosurfaces, which rely on Geometry Shaders, and any hope of GS-based particles. My particles will be limited to Point Sprites, then.

What one does in a situation such as this is not just to throw out OGRE 1.10, but rather to disable modules. And so I disabled the GL3+ rendering engine, as well as one ‘HLMS Sample’, and am now able to get many of the samples to run, including, importantly, the Terrain Samples.
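For reference, in OGRE’s CMake-based build system, disabling a rendering engine amounts to switching off the corresponding option at configuration time. A minimal sketch, assuming the option name which OGRE 1.10 uses (option names can differ between OGRE versions):

cmake -DOGRE_BUILD_RENDERSYSTEM_GL3PLUS=FALSE ..

After re-running ‘make’, the GL3+ engine should simply be absent from the build.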


Also, there remains an advantage to using the Mesa drivers, which was already pointed out to me on the Kanotix site. The Mesa drivers allow hardware-acceleration of high-bandwidth 2D video streams via ‘VDPAU’, while if I were to use ‘fglrx’, the decoding of MP4 videos would be limited to CPU decoding, which is in itself lame, if we ever wanted to watch serious video streams. And since that laptop has a screen resolution of 1600×900, wanting to watch videos on it eventually remains a very realistic prospect.

Dirk


(Edit : ) I suppose that one question which I should be asking myself, about why the OGRE 1.10 GL3+ Rendering Engine might not work, is whether this could be due to some incompatibility with the OpenGL 3.1 Desktop Compositing which I am already running on the same machine. There have been past cases where OpenGL 2 from an application did not agree with OpenGL 2 Desktop Compositing, but those cases have generally been solved by the developers of the desktop managers.

On ‘Klystron’, I have rich desktop effects running, which use GL 3.1. So it does not seem obvious that the Mesa drivers as such have problems implementing GL 3.

Also, there is a follow-up thought, as to why Cg may not have been working before. Whether or not our graphics cards support Cg, it is a necessary component of OGRE, in order to build its Cg Program Manager. Under Windows 8.1, I was always unsure of how to provide the OGRE Dependencies when building OGRE. But among those dependencies, I always linked in a file named ‘Cg.dll‘, the origin of which was unknown to me.

It is exactly the sort of goofy mistake I would make: perhaps to have taken this DLL file from the install folders of Cg on ‘Maverick’, but for some reason to have taken a 64-bit DLL into my 32-bit OGRE build, or to have taken a DLL from somewhere which was not compatible for some other reason.

At least when we install dependencies under Linux from the package manager, such issues as linkage of code and location of folders are also taken care of by the package manager. So I am sure that the Cg Program Manager belonging to OGRE recognized the nVidia Cg packages when compiling this time. It is just a bit odd that those were native Cg libraries with header files, while my graphics drivers remain the Mesa drivers.
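For what it’s worth, on a Debian-derived distribution such as Kanotix, that dependency is ordinarily satisfied with a single command, assuming the package carries its usual Debian name, ‘nvidia-cg-toolkit’:

apt-get install nvidia-cg-toolkit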