## VDPAU Playback Issue (Problem Solved).

One of the facts which apply to Linux computing is that NVIDIA created an API which allows certain video streams to be played back in a way accelerated by the GPU, instead of all the video decoding taking place on the CPU. And, users don’t necessarily need to have an NVIDIA graphics card in order for certain graphics drivers to offer this feature, which is called ‘VDPAU’, an acronym that stands for “Video Decode and Presentation API for Unix”. Simultaneously, what some Linux users can do is to experiment with shell-scripts that allow us to click on a specific Application Window, in order to perform screen-capture on that Window for a specified number of seconds, and then to compress the resulting stream into MP4, AVI, or MPG Files, once the screen-capture has finished. This latter piece of magic can be performed using elaborate ‘ffmpeg’ commands, which would need to be a part of the script in question. And in recent days, I’ve been tweaking such scripts.
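Just to sketch the kind of command such a script revolves around: the fragment below is my own illustration, in Python, with entirely hypothetical window-geometry values and file names. It assembles an ‘ffmpeg’ invocation that uses the ‘x11grab’ input device to record a region of the screen for a fixed number of seconds. A real script would first obtain the geometry by letting the user click on a Window, e.g. with ‘xwininfo’.

```python
# Hypothetical sketch: assemble the core 'ffmpeg' command that a
# screen-capture script might run. The geometry values and file names
# are placeholders; a real script would obtain them by letting the
# user click on a window (e.g. via 'xwininfo').

def build_capture_command(x, y, width, height, seconds, outfile):
    """Return an ffmpeg argument list that grabs a region of the
    X11 display for a fixed duration and compresses it to H.264."""
    return [
        "ffmpeg",
        "-f", "x11grab",                     # X11 screen-capture input device
        "-video_size", f"{width}x{height}",  # size of the region to record
        "-i", f":0.0+{x},{y}",               # display, plus window offset
        "-t", str(seconds),                  # stop after this many seconds
        "-vcodec", "libx264",                # H.264 compression
        "-pix_fmt", "yuv420p",               # widely decodable pixel format
        outfile,
    ]

cmd = build_capture_command(100, 200, 1280, 720, 30, "capture.mp4")
print(" ".join(cmd))
```

Printing the list rather than executing it keeps the sketch harmless; an actual script would hand it to `subprocess.run()`.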

But then an odd behaviour crept up. My NVIDIA graphics card supports the real-time playback of MPEG-1, MPEG-2, DivX and H.264-encoded streams, with GPU-acceleration. Yet, when I clicked on the resulting animations, depending on which player I chose to play them with, I’d either obtain the video stream, or just a grey rectangle replacing it. And what I do know is that which of these results I obtain depends on whether I’m playing back the video stream using a purely software decoder, or choosing to have the stream played back with GPU-acceleration.

I’ve run into the same problem before, but this time, the cause was elsewhere.

Basically, this result will often mean that the player application first asks the graphics card whether the latter can decode the stream in question and, when the VDPAU API responds ‘Yes’, hands the relevant processing over to the GPU, but that for some unknown reason, the GPU fails to decode the stream. This result can sometimes have a different meaning, but I knew I needed to focus my attention on this interpretation.

Linux users will often need to have some sort of file-format in which they can store arbitrary video-clips that do not need to conform to strict broadcasting and distribution standards, even when the goal is ‘just to monkey around with video clips’.

I finally found what the culprit was…

(Updated 8/15/2019, 22h15 … )

## An Observation about the Modern Application of Sinc Filters

One of the facts which I have blogged about before is that an important type of filter, which was essentially digital except for its first implementations, is called a ‘Sinc Filter’. This filter was once presented as an ideal low-pass filter that was also a brick-wall filter, meaning that, as the series was made longer, near-perfect cutoff was achieved.

Well, while the use of this filter in its original form has largely been deprecated, there is a modern version of it that has captured some popularity. The Sinc Filter is nowadays often combined with a ‘Kaiser Window’, and doing so accomplishes two goals:

• The Kaiser Window puts an end to the series being an infinite series, which many coders had issues with,
• It also makes possible Sinc Filters with cutoff-frequencies that are not negative powers of two times the Nyquist Frequency.
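To make the second point concrete, the coefficients of such a filter can be written as follows. This is a standard textbook form, stated in my own notation rather than taken from any particular implementation; here, \(f_c\) is the cutoff frequency expressed as a fraction of the sampling rate, and nothing restricts it to being a negative power of two:

```latex
% Kaiser-windowed Sinc Filter of length M+1, with cutoff f_c expressed
% as a fraction of the sampling rate, and shape parameter beta:
\[
h[n] \;=\; 2 f_c \,\operatorname{sinc}\!\bigl( 2 f_c (n - \tfrac{M}{2}) \bigr)\, w[n],
\qquad
\operatorname{sinc}(x) \;=\; \frac{\sin(\pi x)}{\pi x},
\]
\[
w[n] \;=\; \frac{ I_0\!\left( \beta \sqrt{ 1 - \left( \tfrac{2n}{M} - 1 \right)^{2} } \right) }{ I_0(\beta) },
\qquad n = 0, 1, \dots, M,
\]
% where I_0 is the zeroth-order modified Bessel function of the first kind.
```

For example, 4.3x over-sampling corresponds to \(f_c = 1/(2 \cdot 4.3)\), i.e. a cutoff at \(1/4.3\) of the Nyquist Frequency, a value that no power-of-two scheme could produce.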

It has always been possible to design a Sinc Filter with 2x or 4x over-sampling, and in some frivolous examples, with 8x over-sampling. But if a Circuit Designer ever tried to design one that has 4.3x over-sampling, for example, thereby resulting in a cutoff-frequency which is just lower than 1/4 the Nyquist Frequency, the sticky issue would always remain of what would take place with the last zero-crossing of the Sinc Function, farthest from the origin. It could create a mess in the resulting signal as well.

Because the Kaiser Windowing Function actually goes to zero gradually, it suppresses the zero-crossings of the Sinc Function that are farthest from the origin, without preventing the filter from working essentially as the Math of the Sinc Function would suggest.
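As a small self-check of this claim, the sketch below (pure standard-library Python, with an arbitrarily chosen filter length and beta value) computes the taps of a Kaiser-windowed Sinc Filter at 4.3x over-sampling, and shows that the outermost taps have been driven to nearly zero:

```python
# A minimal sketch of a Kaiser-windowed Sinc Filter, using only the
# standard library. 'fc' is the cutoff as a fraction of the sampling
# rate; the filter length (101) and beta (8.0) are arbitrary examples.
import math

def bessel_i0(x):
    """Zeroth-order modified Bessel function, via its power series."""
    total, term = 1.0, 1.0
    for k in range(1, 30):
        term *= (x / (2.0 * k)) ** 2
        total += term
    return total

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def kaiser_sinc_taps(num_taps, fc, beta):
    """Low-pass windowed-sinc coefficients, normalized to unity DC gain."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        arg = max(0.0, 1.0 - (2.0 * n / m - 1.0) ** 2)
        w = bessel_i0(beta * math.sqrt(arg)) / bessel_i0(beta)
        taps.append(2.0 * fc * sinc(2.0 * fc * (n - m / 2.0)) * w)
    scale = sum(taps)                     # normalize DC gain to 1
    return [t / scale for t in taps]

# Cutoff of 1/(2 * 4.3): an over-sampling factor that is NOT a power of two.
taps = kaiser_sinc_taps(101, 1.0 / (2.0 * 4.3), beta=8.0)
print(abs(taps[0]), taps[50])  # edge tap versus centre tap
```

With beta = 8.0, the edge taps come out roughly four orders of magnitude smaller than the centre tap, whereas the un-windowed Sinc Function would have left them only modestly attenuated.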

(Updated 8/06/2019, 15h35 … )

## Caveats when using ‘avidemux’ (under Linux).

One of the applications available under Linux that can help edit video / audio streams, and that has been popular for many years – if not decades – is called ‘avidemux’. But in recent years it has become questionable whether this GUI- and command-line-based application is still useful.

One of the rather opaque questions surrounding its main use is simply highlighted in the official usage Wiki for Avidemux. The task can befall a Linux user that he either wants to split off the audio track from a video / audio stream, or that he wants to give the stream a new audio track. What the user is expected to do is to navigate to the Menu entries ‘Audio -> Save Audio’ or ‘Audio -> Select Track’, respectively.

What makes the usage of the GUI not straightforward is what the manual entries state next, and what my personal experiments confirm:

• External Audio can only be added from ‘AC3’, ‘MP3’, or ‘WAV’ streams by default,
• The audio track that gets Saved cannot be played back, if in the format of an ‘OGG Vorbis’, an ‘OGG Opus’, or an ‘AAC’ track, as such exported audio tracks lack any header information, which playback apps would need in order to play them. In those cases specifically, only the raw bit-stream is saved.

The first problem with this sort of application is that the user needs to perform a memorization exercise, about which non-matching formats he may or may not export to or import from. I don’t like to have to memorize meaningless details about every GUI-application I have, and in this case the details can only be read through detailed research on the Web. They are not hinted at anywhere within the application.
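In fact, one way for a user to verify this behaviour for himself is to look at the first few bytes of whatever file was saved. The sketch below is my own illustration (not part of avidemux), and simply tests for a few well-known signatures; a raw, headerless bit-stream will match none of them:

```python
# My own illustration (not part of avidemux): peek at a file's first
# bytes to see whether a recognizable container/stream header is present.
# A raw, headerless bit-stream will match none of these signatures.

def identify_audio_header(data: bytes) -> str:
    if data.startswith(b"OggS"):
        return "Ogg container (Vorbis / Opus with headers intact)"
    if data.startswith(b"RIFF") and data[8:12] == b"WAVE":
        return "WAV (RIFF) file"
    if data.startswith(b"ID3"):
        return "MP3 with an ID3 tag"
    if len(data) >= 2 and data[0] == 0xFF and (data[1] & 0xF6) == 0xF0:
        return "AAC in an ADTS wrapper"
    return "no recognized header (possibly a raw bit-stream)"

print(identify_audio_header(b"OggS" + b"\x00" * 20))
print(identify_audio_header(b"\x01\x02\x03\x04"))
```

One would read the first dozen or so bytes of the exported track, with `open(path, 'rb').read(16)`, and pass them to this function.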

(Updated 3/23/2019, 15h35 … )

## A caveat in using ‘ffmpeg’ to produce consumer-ready streams from individual frame-files.

It recently happened to me that I had used ‘Blender’ to create a very short animation, with 144 frames, but where I had instructed Blender to output this animation as a series of numbered .PNG-Files, which I would next use an ‘ffmpeg’ command to compile into a compressed stream, the latter being an .MP4-File using H.264 video compression. ( :1 )

But unexpectedly, I had obtained an .MP4-File which would play on some of my player applications, but not on others. And when I investigated this problem, I found that player-applications which used a feature under Linux named ‘VDPAU’ were not able to play the stream, while player-applications which used software to decompress the stream were able to play it.

The very first assumption that some people could make in such a situation would be that they do not have their graphics drivers set up correctly, and that VDPAU may not be working correctly on their Linux-based computers. But, when I looked at my NVIDIA settings panel, it indicated that VDPAU support included support for H.264-encoded streams specifically.

BTW, it’s not necessary for the computer to have an NVIDIA graphics-card, with the associated NVIDIA GUI, to possess graphics-acceleration. It’s just that NVIDIA makes it particularly easy, for users who are used to Windows, to obtain information about their graphics card.

Rather than believe next that VDPAU was broken due to the graphics driver, I began to look for my problem elsewhere. And I was able to find the true cause of the playback-problem. Ultimately, if we want to compress a stream into an .MP4-File, and if we want the recipient of that stream to be able to play it back using hardware-acceleration, which is the norm for high-definition streams, then an ‘ffmpeg’ command similar to the one below would be the correct command:


ffmpeg -framerate 24 -i infile_%04d.png -an -vcodec libx264 -pix_fmt yuv420p -crf 20 outfile.mp4

But I feel that I should explain how my first attempt to compress this stream failed. It did not contain the parameter shown above, namely ‘-pix_fmt yuv420p’. There are two Wikipedia articles which explain the subject of what ‘YUV’ means, that may explain the subject better than I can, and that I recommend my reader read:

https://en.wikipedia.org/wiki/YUV

https://en.wikipedia.org/wiki/Chroma_subsampling

I am now going to paraphrase, what the above articles explain in detail.
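But as a quick numeric illustration ahead of those details: with ‘yuv420p’, i.e. 4:2:0 chroma subsampling at 8 bits per sample, the two colour-difference planes are stored at half resolution in each direction, which is also why this pixel format insists on even frame dimensions. The sketch below is my own back-of-envelope arithmetic:

```python
# A quick numeric sketch of 4:2:0 chroma subsampling (8 bits per sample):
# the two chroma planes are stored at half resolution in both directions,
# which is also why yuv420p wants even frame dimensions.

def yuv420p_frame_bytes(width, height):
    assert width % 2 == 0 and height % 2 == 0, "yuv420p needs even dimensions"
    luma = width * height                   # Y plane: full resolution
    chroma = (width // 2) * (height // 2)   # each of U and V: quarter size
    return luma + 2 * chroma

full_rgb = 1920 * 1080 * 3                  # 24-bit RGB, for comparison
print(yuv420p_frame_bytes(1920, 1080), full_rgb)
```

For a 1920x1080 frame, the 4:2:0 layout needs exactly half the bytes of uncompressed 24-bit RGB, before any H.264 compression has even been applied.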