Why the Atmosphere is Thermally Asymmetric.

One concept that exists in Physics is that, if a surface has a greater coefficient of absorption at one specific electromagnetic wavelength, then the degree to which this surface will emit EM radiation at that same wavelength, due to incandescence, will also increase in proportion. This is known as Kirchhoff’s Law of thermal radiation.
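
Stated as a formula – which the reader can check against any standard account of thermal radiation – the law says that, for a body in thermal equilibrium, the spectral emissivity equals the spectral absorptivity at each wavelength:

    \varepsilon(\lambda, T) = \alpha(\lambda, T)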

This fact seems to contradict what has been observed in the Earth’s atmosphere, where an increase in CO₂ levels has led to warming of the planet. It would be tempting just to ask casually, ‘The Earth’s radiation of heat into space is low-temperature incandescence; why does it not increase in step with thermal absorption?’ And while there are several answers to this question, all of which require a more-complex analysis of what happens in the atmosphere, or of how the Sun is different from the Earth, this posting of mine will focus on one of the answers, which may also be the easiest to understand.

The temperature of the atmosphere at sea level may be around 20°C wherever it’s Summer. But at an altitude of 15km, the atmospheric temperature is close to -70°C. What this means is that, along with the actual surfaces of the planet, the CO₂ in the lower layers of the atmosphere may be radiating electromagnetic radiation – i.e., deep-infrared light – towards space. But because the atmosphere at an altitude of 15km also contains a matching concentration of CO₂, those molecules in the upper atmosphere will mainly just catch this radiation again, and transform it back into stored heat. ( :1 )
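
A rough, back-of-the-envelope figure – my own arithmetic, assuming for simplicity that both layers radiate with the same emissivity – follows from the Stefan–Boltzmann law, according to which the power radiated per unit area goes as the fourth power of the absolute temperature:

    P \propto T^4 , \qquad
    \frac{P_{15\,\mathrm{km}}}{P_{\mathrm{sea\ level}}}
    \approx \left( \frac{203\,\mathrm{K}}{293\,\mathrm{K}} \right)^4
    \approx 0.23

So the cold layer ‘which space sees’ gives off only about a quarter of the power, per unit area, that the warm layer near the surface does.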

So what happens, among other things, is that an atmosphere differs from a surface of matter – and from the basic principle mentioned above – in being asymmetric. And what makes it asymmetric is gravity: it’s much more difficult for heat to escape than it is for heat to be absorbed. Now, if there were some way to make the temperatures in the upper atmosphere no different from those at lower altitudes, then the effect of global warming might not even take place. But the atmosphere’s CO₂ ‘which space sees’ is in the upper atmosphere, which is constantly very cold, while the atmosphere’s CO₂ ‘that humans see’ is at sea level, which has the (increasing) temperatures we witness in everyday life.

Now, I have never been asked to provide this information. But seeing as there seems to be a question in Physics, which nobody else provides an answer for, and which nobody else seems to recognize as existing, I felt I should spontaneously provide an answer here.

(Updated 04/21/2018 : )


One way in which my earlier description of CUDA was out of touch with the real-world implementation.

One of the subjects which many programmers have been studying is not only how to write highly parallel code, but how to write that code for the GPU, since the GPU is also the most readily available highly-parallel processor. In fact, any user with a powerful graphics card may already have the basis to program using CUDA or using OpenCL.

I had written an earlier posting, in which I ended up trying to devise a way by which the compiler of the synthesized C or C++ would detect whether each variable is being used as an ‘rvalue’ or an ‘lvalue’ in different parts of a loop, and by which the compiler would then choose to allocate a local register, to allocate a shared register, or to make a local copy of a value once provided in a shared register.

According to what I think I’ve learned, this thinking was erroneous, simply because a CUDA or an OpenCL compiler does not take this responsibility off the coder’s hands. In other words, the coder needs to declare explicitly, each time, whether a variable is to be allocated in a local or a shared register, and must also keep track of how his code can change the value in a shared register from threads other than the current thread, which may produce errors in how the current thread computes.
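
As a minimal sketch – my own hypothetical kernel, not code from any real project, and assuming a launch configuration of 256 threads per block – the explicit declarations look like this in CUDA, where an undecorated variable ends up in a per-thread register and the ‘__shared__’ qualifier places a variable in storage visible to the whole thread block:

    __global__ void copy_kernel(const float *in, float *out, int n)
    {
        // Undecorated local variable: a per-thread register,
        // invisible to every other thread.
        float x = 0.0f;

        // Explicitly shared array: one copy per thread block, readable and
        // writable by every thread in that block. It is up to the coder to
        // keep track of which threads write which elements, and when.
        __shared__ float tile[256];

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            x = in[i];                 // purely private computation
            tile[threadIdx.x] = x;     // publish the value to the rest of the block
            out[i] = tile[threadIdx.x];
        }
    }

Because each thread above only reads back its own element of ‘tile’, no synchronization is needed yet; as soon as one thread reads an element written by another, the barrier described next becomes necessary.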

But a command which CUDA offers, and which needs to exist, is a ‘__syncthreads()’ function, which suspends the current thread until all of the threads running in one core-group – a thread block, in CUDA’s terminology – have executed the ‘__syncthreads()’ instruction, after which point they may all resume again.
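
A short, hedged example of how ‘__syncthreads()’ typically gets used – again a hypothetical kernel of mine, assuming 256 threads per block: one phase writes to shared memory, the barrier guarantees that those writes have completed block-wide, and only then does a second phase read values that were written by neighbouring threads:

    __global__ void neighbour_avg(const float *in, float *out, int n)
    {
        __shared__ float tile[256];
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        // Phase 1: every thread stores one value into shared memory.
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;

        // Barrier: no thread in this block proceeds until all of them
        // have reached this point, so every store above is now visible.
        __syncthreads();

        // Phase 2: each thread may now safely read its neighbour's value.
        if (i < n) {
            int j = (threadIdx.x + 1 < blockDim.x) ? threadIdx.x + 1 : threadIdx.x;
            out[i] = 0.5f * (tile[threadIdx.x] + tile[j]);
        }
    }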

One fact which disappoints about the real ‘__syncthreads()’ instruction is that it offers little in the way of added capabilities. One thing which I had written that this function might do, however, is actually give the CPU a chance to run briefly, in a way not obvious to the CUDA code.

But then there exist capabilities which a CUDA or an OpenCL programmer might want, which have no direct support from the GPU, and one of those capabilities might be to lock an arbitrary object, so that the current thread can perform some computation which reads the object – after having obtained a lock on it – and which then writes changes to the object, before giving up its lock on it.
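
What the GPU does offer directly is a set of atomic instructions, and one pattern I’m aware of – sketched below with hypothetical names, and with the caveat that crude spin-locks of this kind can livelock or deadlock within a warp on some hardware, so this is an illustration rather than production code – builds such a lock out of ‘atomicCAS()’ and ‘atomicExch()’:

    __device__ void add_to_object(int *lock, float *object, float delta)
    {
        bool done = false;
        while (!done) {
            // Try to acquire the lock: atomically replace 0 with 1,
            // succeeding only if the old value was 0 (unlocked).
            if (atomicCAS(lock, 0, 1) == 0) {
                // Critical section: read, modify and write the object.
                *object += delta;

                __threadfence();        // make the write visible to other threads
                atomicExch(lock, 0);    // release the lock
                done = true;
            }
        }
    }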

(Updated 04/19/2018 : )


Alpha Blending and Multisampling Revisited.

I recently read a Wikipedia article about the subject of Alpha-Blending, and found it to be of good quality. Yet, there is a major inconsistency between how this article explains the subject, and how I once explained it myself:

According to the article, alpha-entities – i.e., models with an alpha-channel, which therefore have per-pixel translucency – are rendered starting with the background first, and ending with the most-superimposed models last. Hence, formally, alpha-blended models should be rendered ‘back-to-front’. Well, in computer graphics, much rendering is done ‘front-to-back’, i.e., starting with the closest model and ending with the farthest.

More specifically, the near-to-far rendering order applies to non-alpha entities, which therefore also don’t need to be alpha-blended, and it only exists as an optimization. Non-alpha entities in CGI can also be rendered back-to-front, except that doing so usually requires our graphics hardware to shade a much larger number of fragments. By rendering opaque, closer models first, the graphics engine allows their triangles to occlude much of what lies behind them, so that the occluded geometry does not consume Fragment-Shader invocations, which in turn leads to higher frame-rates.

One fact which should be observed about the equations in the Wiki article is that they are asymmetric, and that they will therefore only work when rendering back-to-front. What the reader should be aware of is that a complementary set of equations can be written, which will produce optically-correct results when the rendering order is nearest-to-farthest. In fact, each reader should have done the mental exercise of writing the complementary equations.
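
For reference, one standard way of writing both pairs of equations – my own notation, with C standing for a colour, α for its alpha, ‘src’ for the fragment being blended in, ‘dst’ for what is already in the frame-buffer, and ‘acc’ for a front-to-back accumulator that holds premultiplied colour – is:

    % Back-to-front ('over'):
    C_{out}      = \alpha_{src} C_{src} + (1 - \alpha_{src}) C_{dst}
    \alpha_{out} = \alpha_{src} + (1 - \alpha_{src}) \alpha_{dst}

    % Front-to-back ('under'):
    C_{acc}'      = C_{acc} + (1 - \alpha_{acc}) \alpha_{src} C_{src}
    \alpha_{acc}' = \alpha_{acc} + (1 - \alpha_{acc}) \alpha_{src}

In the second pair, the closest fragment is blended first, and each new, farther fragment only contributes through whatever transparency (1 - α_acc) has been left open so far.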

When routine rendering gets done, the content-developer – aka the game-dev, not the game-engine-designer – has the capacity to change the rendering order. So what will probably get done in game-design is that models are grouped, so that non-alpha entities are rendered first, and only after this has been done are the alpha-entities rendered, as a separate rendering-group.

(Edit 04/14/2018, 21h35 :

But there is one specific situation which would require that the mentioned, complementary set of equations be used. Actually, according to my latest thoughts, I’d only suggest that a modified set of equations be used. And that would be when “Multi-Sampling” is implemented in a way that treats the fraction of sub-samples which belong to one real sample, and which actually get rendered to, as if that fraction were just a multiplier to be applied to the “Source-Alpha” which the entity’s textures already possess. In that case, the alpha-blending must actually be adapted to the rendering order that is more common in game-design.

The reason for which I’d say so is the simple observation that, according to conventional alpha-blending, if a pixel is rendered to with 0.5 opacity twice, it not only ends up 0.75 opaque, but its resulting color favors the second color rendered to it twice as strongly as it favors the first color. For alpha-blending this is correct, because alpha-blending just mirrors the optics which successive ‘layers’ would cause.
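
Just to spell that arithmetic out – my own numbers, starting from an empty frame-buffer, with colours C₁ and C₂ each blended in at α = 0.5, back-to-front – the accumulated (premultiplied) result is:

    \alpha = 0.5 + (1 - 0.5) \cdot 0.5 = 0.75
    C = 0.5\, C_2 + (1 - 0.5) \cdot 0.5\, C_1 = 0.5\, C_2 + 0.25\, C_1

So C₂ carries exactly twice the weight that C₁ does, in the final colour.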

But with multi-sampling, a real pixel could have half of its sub-samples rendered to, twice, and there would be no reason why the second color rendered to it should contribute more strongly than the first did… )

You see, this subject has given me reason to rethink the somewhat overly-complex ideas I once had, on how best to achieve multi-sampling. And the cleanest way would be to treat the fraction of sub-pixels rendered to, as just a component of the source-alpha.

(Updated 04/14/2018, 21h35 … )


Photon Polarization / Superposition of States

We can ask ourselves what the subject of polarized light ‘looks like’ at the single-particle level. We know that at the level of wave-mechanics, both plane-polarized and circularly-polarized light are easy to understand: either way, the dipole-moments are at right angles to the direction of propagation, all the time, even if randomly so. But there also needs to be a particle- / photon-based explanation for all the properties of light, in order to satisfy the demands of Quantum Mechanics.

And so a key question could be phrased as, ‘If we pass randomly-polarized light through a simple linear polarizer, which consists of a gel-block, and which absorbs EM vibrations along one disfavored axis – maybe because it has been made ohmic along that axis – why is the maximum intensity of plane-polarized light that comes out, in fact, so close to 50% of the intensity of the randomly-polarized beam that went in?’ Using wave-mechanics, the answer is easy to see, but using particle-physics, the answer is not so obvious.
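
Spelling out the wave-mechanical bookkeeping: an ideal polarizer transmits the component of the field that lies along its favoured axis, so a wave polarized at angle θ to that axis passes with intensity proportional to cos²θ (Malus’s Law), and averaging this over the uniformly random angles of an unpolarized beam gives exactly one half:

    \frac{1}{2\pi} \int_0^{2\pi} \cos^2\theta \; d\theta = \frac{1}{2}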

And one reason why the answer may not be obvious, is that we might be visualizing each photon as being plane-polarized at an angle unique to itself. In that case, if the polarizer only transmits light which is polarized to an extremely pure degree, the number of photons whose plane of polarization lines up with the favored angle perfectly should be few to none. Each photon could then have an angle of polarization which is not exactly lined up with the axis the polarizer favors, and would thus be filtered out. And yet, the strength of the electric dipole-moment which comes out of the polarizer, along the disfavored axis, could be close to zero, while the total amount of light that comes out could be close to 50% of how much light came in.

If each incident photon had been plane-polarized in one random direction, then surely fewer than 50% of them would have been polarized in any one exact direction.

(Updated 04/10/2018 … )
