## Some Issues with Multiple Input Ports on One PulseAudio Sound Device

My Linux computers run elaborate Graphical User Interfaces, as well as the PulseAudio sound server, for use in multimedia. Yet, to adapt them to my own needs, I have done some customizing of the audio server, specifically, giving it a ‘Loopback’ module, so that when I play music from my FM Receiver into the Line Input jack of the PC in question, that music is looped back to my speaker amp, for my enjoyment. Such configuration changes can have consequences for other multimedia tasks, such as Screen Recording, which, on the computer named ‘Phosphene’, requires that I plug a microphone into the Front Mic Jack, because that Tower-PC has no internal Web-Cam or Mic.

Ever since I made the ‘loopback’ configuration change, I had never retested the ability to plug a Mic into the front jack, until only a few days ago. When I finally did, it was with the initial purpose of testing whether certain Screen-Recording software works: different software from what I would normally use, a browser extension that was not designed primarily for Linux, but whose ability to connect to my sound inputs is only as good as that of the Web browser (Chrome) the extension runs within.

I got a rude surprise when running this test, in that plugging an input into the Front Mic Jack borked my sound, at least until I restarted PulseAudio. But beyond that, I learned that my sound issue was not due to this browser extension, but rather due to the way I had configured PulseAudio, as well as, perhaps, due to hardware limitations of the ‘Creative Sound Blaster X-Fi Xtreme’ sound card. I.e., even if I did nothing to relaunch the extension I was first testing, but only plugged a mike into the Front Mic Jack during a clean PulseAudio session, the malfunction came back.

So, what was the nature of this malfunction?

According to PulseAudio, a computer can have one or more Sound Devices, which, in the case of capture devices, corresponds to the number of Analog-to-Digital Converters it has. But each of those devices can have more than one Port, a Port being the specific analog input that feeds the Sound Device in question. And PulseAudio supports the hardware feature of modern sound cards that detects whether a plug has in fact been inserted into each jack, thus making that Port available. By default, what PulseAudio does is simple: switch the input of the Sound Device to the newly plugged analog input. But, it doesn’t work.
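This Device-and-Ports layout can be inspected from the command line, with `pactl list sources`. As a minimal sketch, here is one way to pull each source’s Active Port out of that output; the sample text below is only illustrative of the format, not a transcript from my actual card:

```python
def active_ports(pactl_output: str) -> dict:
    """Map each source's Name: to its Active Port:, given `pactl list sources` text."""
    ports = {}
    name = None
    for line in pactl_output.splitlines():
        line = line.strip()
        if line.startswith("Name:"):
            name = line.split("Name:", 1)[1].strip()
        elif line.startswith("Active Port:") and name:
            ports[name] = line.split("Active Port:", 1)[1].strip()
    return ports

# Illustrative sample: one Sound Device, two Ports.  The jack-detection
# feature is what decides whether each Port is marked "available".
SAMPLE = """\
Source #1
\tName: alsa_input.pci-0000_03_00.0.analog-stereo
\tPorts:
\t\tanalog-input-linein: Line In (priority: 8100, available)
\t\tanalog-input-front-mic: Front Microphone (priority: 8500, not available)
\tActive Port: analog-input-linein
"""

print(active_ports(SAMPLE))
```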

When I plugged my external mike into the Front Jack for any reason, while the Line Input in the back of the computer was also active (which it is by default, just so that I can turn on my FM Receiver and hear music):

• I got no sound input from the Front Mic, even though the GUI showed that the input had been switched,
• Switching the input manually produced no difference in behaviour,
• The loopback module went into a corrupted state, in which it looped back sound from the Line Input, but with a 10-30 second time delay. I was hearing music long after I had turned my FM Receiver off…

I took numerous, lengthy steps to find out why this was happening, but my conclusion was that the actual hardware is unable to activate the Front Mic Jack as long as the (rear) Line Input is plugged in simultaneously, and that this cannot be corrected from the software side, with my present hardware. And so, what I needed to do was to develop a workflow that would ease switching from ‘Music Enjoyment Mode’ to ‘Screen Recording Mode’, and back again, in the fastest time possible and without requiring excessive restarts…

In other words, even if I were setting up a Screen-Recording using the native application I would normally use, I would need to follow the same steps to make it work, because either solution operates at the application level.
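The mode switch itself can be scripted. The sketch below wraps the `pactl` commands I would expect such a workflow to need; the source and port names are hypothetical placeholders (check yours with `pactl list sources`), and unloading a module by name, rather than by index, assumes a reasonably recent `pactl`:

```python
import subprocess

# Hypothetical device and port names; substitute your own.
SOURCE = "alsa_input.pci-0000_03_00.0.analog-stereo"

def mode_commands(mode: str) -> list:
    """Return the pactl invocations for each mode, in order."""
    if mode == "music":
        return [
            ["pactl", "set-source-port", SOURCE, "analog-input-linein"],
            ["pactl", "load-module", "module-loopback", f"source={SOURCE}"],
        ]
    if mode == "recording":
        # Unloading by module name, not index, needs a fairly recent pactl.
        return [
            ["pactl", "unload-module", "module-loopback"],
            ["pactl", "set-source-port", SOURCE, "analog-input-front-mic"],
        ]
    raise ValueError(mode)

def switch(mode: str) -> None:
    for cmd in mode_commands(mode):
        subprocess.run(cmd, check=False)  # tolerate "module not loaded" etc.

print(mode_commands("recording"))
```

Bound to two shell aliases or desktop launchers, this would get the switchover down to one command in either direction, without restarting the whole server.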

## Dealing with a Picture Frame that Freezes

I recently bought myself a (1920×1080 pixel) digital picture frame that had rave reviews from other customers, but that developed the habit of freezing after about 12 hours of continuous operation, with my JPEG images on its SD Card.

This could signal some sort of hardware error, either in the frame’s internal logic or in the SD Card itself. One of the steps I took to troubleshoot the problem was to try saving the ‘.jpg’ files to different SD Cards, and once, even to save those pictures to a USB key, since the picture frame in question accepts a USB memory stick. All these efforts resulted in the same behaviour. This brought me back to the possibility of some sort of data error, i.e., of the JPEG files in question already being corrupted as they were stored on my hard drives. I had known of this possibility, and so I had already tried the following:


```
find . -type f -name '*.jpg' | jpeginfo -c -f - | grep -v 'OK'
```

Note: Running this command requires that the Debian package ‘jpeginfo’ be installed, which was not installed out of the box on my computer.

This is the Linux way to find JPEG files that Linux deems to be corrupted. But aside from some trivial issues, which this command found and which I was easily able to correct, Linux deemed all the relevant JPEG files to be clean.

And this is where my thinking became more difficult. I was not looking for a quick refund on the picture frame, and continued to operate on the assumption that mine was working as well as the frames other users had given such good reviews. And so another type of problem came to my attention, one I had run into previously, in a way that I could be sure of: sometimes Linux will find media files to be ‘OK’ that non-Linux software (or embedded firmware) deems unacceptable. And with my collection of 253 photos, all it would take is one such photo which, as soon as the frame selected it to be viewed, could still have caused the frame to crash.
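One cheap check that goes beyond what `jpeginfo` reported for me is to verify that each file both begins with the JPEG SOI marker (FF D8) and ends with the EOI marker (FF D9). A truncated file, missing its EOI, will often still display in tolerant desktop viewers while choking stricter embedded firmware. This is only a heuristic of my own (some software legitimately appends data after the EOI), not something the frame’s vendor recommends:

```python
from pathlib import Path

def jpeg_markers_ok(path) -> bool:
    """True if the file starts with the JPEG SOI marker (FF D8) and ends
    with the EOI marker (FF D9).  Catches truncated files that lenient
    viewers tolerate but stricter firmware may not.  Heuristic only:
    a valid file can carry trailing data after its EOI."""
    data = Path(path).read_bytes()
    return data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

if __name__ == "__main__":
    # Print every suspect .jpg below the current directory.
    for p in Path(".").rglob("*.jpg"):
        if not jpeg_markers_ok(p):
            print(p)
```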

(Updated 1/16/2020, 17h15 … )

One of the subjects that fascinates me is Computer-Generated Imagery, CGI, specifically, the rendering of a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first specifying an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is Rendered To Texture, but in such a way that, in addition to surface colours, separate output images also hold normal vectors and a distance value (a depth value) for each fragment of this initial rendering. The resulting ‘G-Buffer’ can then be put through post-processing, which produces the final 2D image. What advantages can this bring?

• It allows for a virtually unlimited number of dynamic lights,
• It allows for ‘SSAO’ – “Screen Space Ambient Occlusion” – to be implemented,
• It allows for more-efficient reflections to be implemented, in the form of ‘SSR’s – “Screen-Space Reflections”.
• (There could be more benefits.)

One fact people should be aware of, given traditional strategies for computing lighting, is that, by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception to this has been possible with some game engines in the past, where a virtually unlimited number of static lights can be incorporated into a level map, by being baked in, as additional shadow-maps. But when it comes to computing dynamic lights, lights that can move and change intensity during a 3D game, there have traditionally been limits to how many of those may illuminate a given surface simultaneously. This limit was defined by how complex a fragment shader could be made, procedurally.
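To make the idea concrete, the following pure-Python sketch shades one G-Buffer fragment under a list of dynamic point lights, using simple Lambert diffuse with inverse-square falloff. A real engine would run the equivalent as a fragment shader over the whole G-Buffer; the numbers here are invented purely for illustration:

```python
import math

# One G-buffer "fragment": what the first (geometry) pass wrote out.
# Position would normally be reconstructed from the depth value;
# it is stored directly here for brevity.
fragment = {
    "albedo":   (0.8, 0.2, 0.2),
    "normal":   (0.0, 0.0, 1.0),   # unit surface normal
    "position": (0.0, 0.0, 0.0),   # view-space position
}

# Dynamic point lights; note the geometry is never rasterized again.
lights = [
    {"pos": (0.0, 0.0, 2.0), "colour": (1.0, 1.0, 1.0), "intensity": 4.0},
    {"pos": (3.0, 0.0, 1.0), "colour": (0.2, 0.2, 1.0), "intensity": 2.0},
]

def shade(frag, lights):
    """Deferred lighting pass: Lambert diffuse accumulated over all lights,
    reading only the G-buffer, never the original meshes."""
    r = g = b = 0.0
    px, py, pz = frag["position"]
    nx, ny, nz = frag["normal"]
    for light in lights:
        lx, ly, lz = (light["pos"][0] - px,
                      light["pos"][1] - py,
                      light["pos"][2] - pz)
        dist = math.sqrt(lx*lx + ly*ly + lz*lz)
        lx, ly, lz = lx/dist, ly/dist, lz/dist          # normalize
        lambert = max(0.0, nx*lx + ny*ly + nz*lz)       # N . L
        atten = light["intensity"] / (dist*dist)        # inverse-square falloff
        r += frag["albedo"][0] * light["colour"][0] * lambert * atten
        g += frag["albedo"][1] * light["colour"][1] * lambert * atten
        b += frag["albedo"][2] * light["colour"][2] * lambert * atten
    return (r, g, b)

print(shade(fragment, lights))
```

The cost of the loop scales with (fragments × lights), independently of how much geometry the scene contains, which is where the “virtually unlimited dynamic lights” claim comes from.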

(Updated 1/15/2020, 14h45 … )

## A Little Trick Needed to Get Blender to Smooth-Shade an Object

I was recently working on a project in Blender, something I have little experience doing, and noticed that, after my project was completed, the exported results showed flat-shading of my mesh-approximations of spheres. My intent was to use mesh-approximations of spheres, but to have them smooth-shaded, such as Phong-Shaded.

Because I was exporting the results to WebGL, my next suspicion was that the WebGL platform was somehow handicapped into always flat-shading the surfaces of its models. But a problem with that very suspicion was that, according to something I had already posted, converting a model designed to be smooth-shaded into a flat-shaded one is not only bad practice in modelling, but also difficult to do. Hence, whatever WebGL malfunction might have been taking place would also need to be accomplishing something computationally difficult.

As it turns out, when one wants an object to be smooth-shaded in Blender, there is an extra setting one needs to select to make it so: with the object selected, the ‘Shade Smooth’ option, found in the Object menu or in the right-click context menu.

Once that setting has been applied to every object that is to be smooth-shaded, they will turn out to be so. Not only that, but the exported file size actually became smaller, once I had done this for my 6 spheroids, than it had been when they were to be flat-shaded. And this last observation reassures me that:
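For many objects, clicking the setting on each one gets tedious, and the same effect can be scripted. The sketch below assumes the Blender 2.8x Python API, in which ‘Shade Smooth’ amounts to setting `use_smooth` on every polygon of a mesh:

```python
def smooth_shade(mesh_objects):
    """Mark every polygon of every given mesh object as smooth-shaded;
    the per-polygon equivalent of Blender's Shade Smooth command."""
    for obj in mesh_objects:
        for poly in obj.data.polygons:
            poly.use_smooth = True

try:
    import bpy  # only importable inside Blender itself
    smooth_shade(o for o in bpy.data.objects if o.type == 'MESH')
except ImportError:
    pass  # running outside Blender; nothing to shade
```

Run from Blender’s Python console or Scripting tab, this smooth-shades every mesh in the scene in one go.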