One of the facts about modern computing is that the hardware can include a multi-core CPU, whose number of virtual cores differs from its number of full cores. Such CPUs were once marketed as “Hyper-Threaded”; the generic term today is Simultaneous Multi-Threading, or SMT.
If the CPU has 8 virtual cores but is threaded as only 4 full cores, then running 4 fully busy processes already captures most of the speed advantage. But because processes are sometimes multi-threaded, each of those 4 processes could consist of 2 fully busy threads, and gain a further, smaller speedup because each full core presents 2 virtual cores. (Hyper-Threading typically adds a fraction of a core’s throughput, not a full doubling.)
Exploiting this fully is really something Windows is designed to do, while Linux tends to pay less attention to it. When Linux runs on such a CPU, what user-space mostly ‘sees’ is the maximum number of virtual cores, as the logical core count of the hardware, without it being obvious that pairs of them could be sharing a full core.
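For what it’s worth, the count that user-space sees can be queried directly. This is a minimal sketch using only Python’s standard library; on a Hyper-Threaded CPU like the one described above, the number reported is the virtual-core count, not the full-core count:

```python
import os

# Number of logical cores the kernel reports. On a 4-core / 8-thread
# Hyper-Threaded CPU, this will say 8, not 4.
logical = os.cpu_count()
print(logical)
```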
And to a certain extent, the Linux kernel is justified in doing so, because unlike under Windows, it’s about as cheap on a Linux computer to run a high number of separate processes as it is to run one process with the same number of threads. Two threads share a code segment as well as a data segment (the heap), but have two separate stack segments as well as separate register values. This makes them something like ‘lightweight processes’. They only really run dramatically faster than separate processes under Windows (or perhaps under OS X).
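The sharing just described can be illustrated in any language; here is a minimal Python sketch (the variable names are my own), in which two threads write into one shared heap object while each keeps its own stack-local variables:

```python
import threading

shared = []  # one heap object, visible to both threads

def worker(name):
    local = name * 2        # each thread gets its own stack frame and locals
    shared.append(local)    # but both write into the same shared heap object

t1 = threading.Thread(target=worker, args=("a",))
t2 = threading.Thread(target=worker, args=("b",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(shared))  # ['aa', 'bb'] -- both threads' results, in one object
```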
Under Linux it’s fully feasible just to create many processes instead, so the bulk of the programming work does not make as much use of multi-threading. Of course, even under Linux, code is sometimes written to be multi-threaded, for reasons I won’t go into here.
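As a sketch of the many-processes alternative, Python’s standard library can farm work out to fully separate processes; the function and the process count here are only illustrative:

```python
import multiprocessing as mp

def square(x):
    return x * x

def run():
    # Four separate worker processes -- one per full core in the example above.
    # Each has its own address space, unlike the threads in the earlier sketch.
    with mp.Pool(processes=4) as pool:
        return pool.map(square, range(8))

if __name__ == "__main__":
    print(run())  # [0, 1, 4, 9, 16, 25, 36, 49]
```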
But then, under Linux, correspondingly less effort has gone into having user-space treat two of the logical cores as belonging to the same full core.
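That said, on Linux kernels that export CPU topology, the sysfs file `/sys/devices/system/cpu/cpu0/topology/thread_siblings_list` does list which logical cores share a full core, in a compact format such as `0,4` or `0-1`. A small parser for that format might look like this (the helper name is my own invention):

```python
def parse_siblings(text):
    """Parse a sysfs cpulist string such as '0,4' or '0-1' into a set of ints."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

print(parse_siblings("0,4"))  # logical cores 0 and 4 share one full core
print(parse_siblings("0-1"))  # logical cores 0 and 1 share one full core
```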
(Updated 2/19/2019, 17h30 … )
What I find in practice, however, is that when compiling oodles of code, I can give the command:
make -j 4
This means that as many as 4 separate compile jobs will run in parallel, and I’ll tend to get the maximum benefit. If instead I were to write:
make -j 8
Then the CPU will heat up more, but the entire project won’t compile any faster than it did using 4 jobs.
However, there do exist some programs which just detect 8 logical cores, automatically assign work to all of them, and can become inefficient in ways I can’t override in the settings. One such program is the LuxCore renderer, which I wrote about before. It is also an example of code that, under Linux, benefits from being multi-threaded instead of multi-process.
Also, I tend to suffer from a problem in how the cooling system is designed on the computer which I now name ‘Phosphene’, but which was once the computer ‘Plato’. In place of a simpler heat-sink on the CPU, it has a factory-sealed liquid cooler: a coolant pump mounted on the CPU, small hoses running from that module to a heat exchanger, and the heat exchanger, with its own fan, mounted to the back of the case, where it gives up its waste heat to the air. All the components present an opaque, dark-grey plastic exterior. As long as no leaks were introduced in the factory manufacturing of such a sealed unit, it should function well.
This in itself has never been problematic. But a problem which my setup does have lies in how the BIOS is supposed to rev the liquid-coolant pump to higher RPMs when the CPU actually gets hot. This is meant to parallel how, with certain other hardware, the CPU fan revs when just the CPU gets hot. On my machine, this fails to happen unless the GPU also gets hot. It’s a bug that just happens to work out correctly when I’m running ‘LuxCore’, because running that program causes both components to get hot.
I made sure that my liquid cooler has ceramic bearings inside the coolant-pump unit that’s mounted to the actual CPU, because ceramic bearings tend to last longer than other types of bearings. But I can actually hear this pump when it revs, possibly due to those ceramic bearings.
(Update 2/19/2019, 17h30 : )
What some users might notice about the LuxCore rendering engine is its setting in the ‘Blender’ GUI, with which the user can decide how many CPU cores the rendering is to run on. When we have chosen to use the GPU, and when LuxCore recognizes the graphics hardware, this (CPU-related) setting doesn’t have the effect the user may expect it to have. In my experience, running this rendering engine on the GPU still causes 8/8 of my CPU cores to work hard.
I believe the main reason for this is that when we allow GPU computing to run, the GPU does not work autonomously; keeping it fed requires much participation from the actual CPU. Therefore, this setting appears to have no effect when the rendering is done on the GPU.