A way of visualizing YUV color-representation.

In the past I have linked to this Wikipedia article, to explain what YUV color-encoding is. But I find that even though the article does a good job of explaining, its ‘UV Chroma Map’ is flawed. This is the square image the article uses, together with an assumed Luminance value of 0.5, just to give a basic impression of how the colors get mapped.

The reason I see the above visual as flawed has to do with the fact that at any one value of Y’ – i.e., at any one value of Luminance – the full range of UV-chroma values is not in fact available, in a way that would lead to full color saturation.

I think that any such visual should highlight that many Y’UV combinations do not correspond to possible RGB values, for which reason even a full gamut of RGB colors will suffer losses when encoded into practical, integer-based YUV.
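A short sketch can make this concrete. Assuming the standard BT.601 inverse transform (not necessarily the exact constants of my worksheet linked below), a Y’UV triple that looks perfectly legal on paper can decode to RGB components outside the range [0, 1]:

```python
# Sketch, assuming BT.601 constants: decode a Y'UV triple back to RGB,
# to show that mid-grey luminance combined with fully saturated chroma
# does not correspond to any real RGB color.

def yuv_to_rgb(y, u, v):
    # Standard BT.601 inverse transform (Umax = 0.436, Vmax = 0.615).
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return (r, g, b)

# Luminance 0.5, with U = +Umax and V = +Vmax:
r, g, b = yuv_to_rgb(0.5, 0.436, 0.615)
print(r, g, b)  # r and b exceed 1.0, g is negative: out of gamut
```

So the Wikipedia-style square at Y’ = 0.5 necessarily shows colors that cannot all exist at that luminance.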

I would propose that a modified method be used to exhibit what the YUV encoding does – specifically the ‘UV’ part – when the Y’ part is allowed to vary arbitrarily, adapting to the full range of UV-chroma values. The visual which I obtain is as follows:



I realize that the Math with which I created the visual above is somewhat fudged, but I also know the more-precise Math which gets used for YUV or Y’UV color-encoding. Here is my worksheet on that:


The reader would need to allow JavaScript from ‘MathJax.org’ to run in his or her browser, for the Math to display correctly.

It would be correct to infer that in this one Y’UV profile, the Android developers chose ‘Umax’ and ‘Vmax’ to exceed 0.5 deliberately, which effectively ‘over-modulates’ U and V, and that therefore Android devices will not be able to encode the fully saturated primary colors Red and Blue when using this profile. And one reason why the developers may have done this would be at least to improve the available resolution somewhat, for ‘natural colors’ that do not correspond to saturated Red or Blue.
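The clipping effect can be sketched as follows. The exact Android constants are not quoted here, so this uses Vmax = 0.615 purely as an assumed, illustrative amplitude greater than 0.5; the point is only what happens when the analog amplitude exceeds the range the integer channel spans:

```python
# Hypothetical illustration: if Vmax exceeds 0.5 but the 8-bit channel
# only spans V in [-0.5, +0.5], the chroma of fully saturated Red clips.

def encode_v_8bit(v):
    # Map v in [-0.5, +0.5] onto an unsigned byte, clipping outside that range.
    v = max(-0.5, min(0.5, v))
    return round((v + 0.5) * 255)

vmax = 0.615           # assumed over-modulated amplitude, NOT a quoted Android value
y_red = 0.299          # BT.601 luma of pure Red
v_red = vmax * (1.0 - y_red) / (1.0 - 0.299)   # equals vmax: the maximal V

print(encode_v_8bit(v_red))   # 255 -- the byte saturates, so detail near pure Red is lost
```

In exchange, the sub-range of V that ‘natural colors’ actually occupy gets spread over more of the 256 available byte values.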

(Updated 08/02/2018, 20h05 : )

It’s a basic fact in (U,V)-chroma encoding that the (U) and (V) components can become negative by the same amount by which they can become positive – an amount which in today’s syntax is referred to as ‘Umax’ and ‘Vmax’ respectively.

But the fact did occur to me that, when the Green primary color, which I’ve named (cGreen), has an amplitude of (1.0), while my (cRed) and (cBlue) each have an amplitude of zero, the magnitudes of the resulting negative (U) and (V) should be less than (Umax) and (Vmax), even though my first visual above shows the ranges for both (U) and (V) as (±1.0) …
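Under the standard BT.601 weights – an assumption, since my worksheet’s constants may differ slightly – this expectation checks out numerically:

```python
# Sketch, assuming BT.601: forward transform of pure Green, to confirm that
# its U and V are both negative but never reach -Umax or -Vmax.

def rgb_to_yuv(r, g, b, umax=0.436, vmax=0.615):
    y = 0.299 * r + 0.587 * g + 0.114 * b       # BT.601 luma weights
    u = umax * (b - y) / (1.0 - 0.114)          # scaled so pure Blue gives u = umax
    v = vmax * (r - y) / (1.0 - 0.299)          # scaled so pure Red gives v = vmax
    return (y, u, v)

y, u, v = rgb_to_yuv(0.0, 1.0, 0.0)             # pure Green
print(y, u, v)  # u is roughly -0.29 and v roughly -0.51: negative, but short of the maxima
```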

(Updated 08/05/2018, 14h10 … )

Continue reading A way of visualizing YUV color-representation.

A caveat in using ‘ffmpeg’ to produce consumer-ready streams, from individual frame-files.

It recently happened to me that I had used ‘Blender’ to create a very short animation, of 144 frames, but where I had instructed Blender to output this animation as a series of numbered .PNG files, which I would next compile into a compressed stream using an ‘ffmpeg’ command – the latter being an .MP4 file, using H.264 video compression. ( :1 )

But unexpectedly, I had obtained an .MP4 file which would play in some of my player applications, but not in others. And when I investigated this problem, I found that player applications which used a feature under Linux named ‘VDPAU’ were not able to play the stream, while player applications which used software to decompress the stream were able to play it.

The very first assumption that some people might make in such a situation would be that they do not have their graphics drivers set up correctly, and that VDPAU may not be working correctly on their Linux-based computers. But when I looked at my NVidia settings panel, it indicated that VDPAU support included support for H.264-encoded streams specifically:


BTW, it’s not necessary for the computer to have an NVidia graphics card, with the associated NVidia GUI, to possess graphics acceleration. It’s just that NVidia makes it particularly easy, for users who are used to Windows, to obtain information about their graphics card.

Rather than believe next that VDPAU is broken due to the graphics driver, I began to look for my problem elsewhere. And I was able to find the true cause of the playback problem. Ultimately, if we want to compress a stream into an .MP4 file, and if we want the recipient of that stream to be able to play it back using hardware acceleration – which is the norm for high-definition streams – then an ‘ffmpeg’ command similar to the one below would be the correct command:


ffmpeg -framerate 24 -i infile_%4d.png -an -vcodec libx264 -pix_fmt yuv420p -crf 20 outfile.mp4


But I feel that I should explain how my first attempt to compress this stream failed. It did not contain the parameter shown above, namely ‘-pix_fmt yuv420p’. There are two Wikipedia articles which explain what ‘YUV’ means, which may explain the subject better than I can, and which I recommend my reader read:



I am now going to paraphrase what the above articles explain in detail.
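One detail worth sketching here is what ‘yuv420p’ actually implies about the stored data. In 4:2:0 planar format, the U and V planes each have half the width and half the height of the Y’ plane, so an 8-bit frame averages 1.5 bytes per pixel – a minimal illustration, not a description of anything specific to ffmpeg’s internals:

```python
# Sketch: storage cost of a 'yuv420p' frame at 8 bits per sample.
# The Y' plane is full resolution; U and V are each subsampled 2x2.

def yuv420p_frame_bytes(width, height):
    y_plane = width * height                      # one byte per pixel of luma
    chroma_plane = (width // 2) * (height // 2)   # quarter-size U (and V) plane
    return y_plane + 2 * chroma_plane             # Y' + U + V

print(yuv420p_frame_bytes(1920, 1080))  # 3110400 bytes, i.e. 1.5 * 1920 * 1080
```

Hardware decoders such as those behind VDPAU commonly expect this 4:2:0 layout, which is part of why specifying ‘-pix_fmt yuv420p’ matters for compatibility.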

Continue reading A caveat in using ‘ffmpeg’ to produce consumer-ready streams, from individual frame-files.

Rotation Reversal, by Inverting Only One Axis.

I could engage in some more speculative thinking.

I could be trying to design a hypothetical analog scheme for modulating color information belonging to the Y’UV system of representing colors. But I’d like my system to have, as an advantage over NTSC, that if the chroma sub-carrier gets phase-shifted due to inaccuracies in the analog circuits, the result should be a shift in hue which reverses itself from one TV scan-line to the next, as well as from one frame to the next. Just as viewers don’t normally see dot-crawl when they watch an NTSC-modulated signal on a monochrome receiver, they should also not be able to see the hue-shift due to analog-circuit issues with my hypothetical modulation scheme.

Consequently, the receivers for this type of signal should not have a Hue potentiometer.

But I discover a problem in my scheme. The U and V components are to be modulated onto a chroma sub-carrier using quadrature modulation, just as in NTSC. And yet, I find that I can only get the clockwise-versus-counter-clockwise reversal to take place if I invert either the U or the V signal component, but not if I invert both, nor if I just invert the sub-carrier, thereby inverting both U and V:


The problem follows because every signal which gets modulated onto a sub-carrier using quadrature modulation throws sidebands. Hence, if I were to place the sub-carrier frequency just beyond the frequencies already being used to encode luminance, I would also need to invert both U and V by default, to eliminate all the dot-crawl. What can I do?
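The asymmetry described above can be sketched by treating the (U, V) pair as a complex phasor c = U + iV, where a sub-carrier phase error rotates c by e^{iφ}. This is only an abstract model of the quadrature pair, not a simulation of any actual circuit:

```python
# Sketch: inverting one axis conjugates the chroma phasor, which reverses
# the sense of a phase-error rotation; inverting both axes merely negates
# the phasor, and the rotation keeps its sense.
import cmath

phi = 0.2                        # some sub-carrier phase error, in radians
c = complex(0.3, 0.1)            # an arbitrary (U, V) chroma point

rotated = c * cmath.exp(1j * phi)          # the hue-shift, unmitigated

# Invert only V (conjugate), apply the phase error, then undo the inversion:
conj_path = (c.conjugate() * cmath.exp(1j * phi)).conjugate()
# conj_path == c * e^{-i*phi}: the hue-shift has reversed direction.

# Invert both U and V (negate), apply the phase error, then undo the negation:
neg_path = -((-c) * cmath.exp(1j * phi))
# neg_path == rotated: no reversal, because -(-c) cancels exactly.
```

This is the algebra behind the observation: only the conjugation (one inverted axis) turns e^{iφ} into e^{−iφ}, while inverting the whole sub-carrier is a 180° rotation that commutes with the phase error.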

Continue reading Rotation Reversal, by Inverting Only One Axis.