Understanding the long long data-type with C++

One of the conventions which I was taught when first learning C++ was that, when declaring an integer variable, size designations such as these were to be used:

short int a = 0;  // A 16-bit integer
int b = 0;  // A 32-bit integer
long int c = 0;  // Another 32-bit integer

But, since those ancient days, many developments have come to computers, including 64-bit CPUs. And so, the way the plain-looking declarations now work is:

short int a = 0;  // A 16-bit integer
int b = 0;  // A 32-bit integer
long int c = 0;  // Either a 32-bit or a 64-bit integer, depending on the platform and compiler
long long d = 0;  // Almost always a 64-bit integer (the standard guarantees at least 64 bits)
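
To see what these sizes actually work out to on a given machine (just a quick check of my own; the exact numbers depend on the compiler and platform), one can print the results of sizeof():


#include <iostream>

int main() {
	// sizeof() reports sizes in bytes (more exactly, in multiples of CHAR_BIT bits).
	std::cout << "short int: " << sizeof(short int) << " bytes" << std::endl;
	std::cout << "int:       " << sizeof(int) << " bytes" << std::endl;
	std::cout << "long int:  " << sizeof(long int) << " bytes" << std::endl;
	std::cout << "long long: " << sizeof(long long) << " bytes" << std::endl;
	return 0;
}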

It is not so surprising that, even on 32-bit CPUs, modern C++ runtimes will support 64-bit values at least partially, because support for fields longer than the CPU’s registers can be achieved using subroutines. These treat longer numbers much as we were taught in Elementary School to perform multiplication, long division, etc., except that, where people were once taught to carry out these operations on Base-10 digits, the subroutine breaks the longer fields into 32-bit words.
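
Purely as an illustration of that principle (a minimal sketch of my own, not how any particular runtime actually implements it), a 64-bit addition can be carried out on a 32-bit machine by adding the low-order words first, and then carrying into the high-order words:


#include <cstdint>
#include <iostream>

// A 64-bit quantity held as two 32-bit words (a hypothetical layout).
struct Words64 {
	uint32_t lo;
	uint32_t hi;
};

// Add two such quantities word by word, propagating the carry by hand.
Words64 add64(Words64 a, Words64 b) {
	Words64 sum;
	sum.lo = a.lo + b.lo;                      // Low-order words first
	uint32_t carry = (sum.lo < a.lo) ? 1 : 0;  // Unsigned wrap-around signals a carry
	sum.hi = a.hi + b.hi + carry;              // Then the high-order words
	return sum;
}

int main() {
	Words64 a = {0xFFFFFFFFu, 0u};  // 4294967295
	Words64 b = {1u, 0u};           // 1
	Words64 c = add64(a, b);        // Expect hi == 1, lo == 0
	std::cout << c.hi << " " << c.lo << std::endl;
	return 0;
}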

In order to avoid any such confusion, the code is sometimes written in C or C++ using the fixed-width types from <stdint.h> (<cstdint> in C++), like so:

#include <cstdint>  // Declares the fixed-width integer types

uint16_t a = 0;  // Exactly 16 bits, on every platform
uint32_t b = 0;  // Exactly 32 bits, on every platform
uint64_t c = 0;  // Exactly 64 bits, on every platform

The ability to put these variables into the source code does not pose as much of a problem as the need, which sometimes exists, to state literals that can be assigned to these variables. Hence, a developer on a 64-bit machine might be tempted to write something like this:

uint64_t c = 1L << 32;

This literally means, ‘Compute a constant expression, in which a value of type (long) is left-shifted by 32 bits.’ The problem here is that, if compiled on a 32-bit platform, the literal ‘1L’ just stands for a 32-bit long integer, and usually some type of compiler warning will ensue, about the fact that the constant expression is being left-shifted beyond the size of the word in question. Formally, such a shift is undefined behavior; in practice, the intended value of 2^32 cannot result, and the variable is liable simply to end up holding ‘0’.

If we are to write source code that can be compiled equally well on 32-bit and 64-bit machines, we really need to write:

uint64_t c = 1ULL << 32;

That way, the literal ‘1ULL’ starts out with a compatible word size, since (unsigned long long) is at least 64 bits wide on every platform. Hence, the following source code becomes plausible, in that at least it will always compile:

#include <cstdlib>
#include <iostream>

using std::cout;
using std::endl;

int main(int argc, char* argv[]) {

	long int c = 1234567890UL;  // Small enough to fit in a 32-bit (long int)

	// sizeof() is measured in bytes, so this tests whether (long int)
	// is at least 64 bits wide, while still compiling on 32-bit platforms.
	if (sizeof(long int) > 7) {
		c = (long int) 1234567890123456ULL;
	}

	cout << c << endl;

	return 0;
}

The problem that needed to be mitigated was not so much whether the CPU has 32-bit or 64-bit registers at run-time, but rather, that the source code needs to compile either way.
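
As a side note, and only as a suggestion of mine which the program above does not depend on, <cstdint> also defines macros such as UINT64_C(), which turn an integer literal into one that is at least 64 bits wide, regardless of what the platform’s plain (long) happens to be:


#include <cstdint>
#include <iostream>

int main() {
	// UINT64_C(1) expands to a literal of at least 64 bits,
	// so the shift below is well-defined on 32-bit platforms as well.
	uint64_t c = UINT64_C(1) << 32;
	std::cout << c << std::endl;  // Prints 4294967296
	return 0;
}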

Dirk

Some Suggested Code

In This Earlier Posting, I had at first written some observations about Bluetooth pairing, but then branched out onto the subject of whether a Diffie-Hellman key exchange could be easier to compute, if it was somehow simplified into using a 32-bit modulus. Obviously, my assumption was that a 64-bit by 32-bit divide instruction would be cheap on the CPU, while arbitrary-precision integer operations are relatively expensive, and actually cause some observable lag on CPUs which I’ve used.
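
To make that assumption concrete (this is just a sketch of my own, not code taken from the linked posting), with a 32-bit modulus every intermediate product fits into 64 bits, so each modular step comes down to a single 64-bit by 32-bit division:


#include <cstdint>

// Multiply two residues modulo a 32-bit m. The product needs at most 64 bits,
// so a single 64-bit by 32-bit division brings it back into range.
uint32_t mul_mod(uint32_t a, uint32_t b, uint32_t m) {
	return (uint32_t)(((uint64_t) a * b) % m);
}

// Square-and-multiply exponentiation, as a Diffie-Hellman-style exchange
// would use it; the base, exponent and modulus are all 32-bit fields.
uint32_t pow_mod(uint32_t base, uint32_t exp, uint32_t m) {
	uint32_t result = 1 % m;
	uint32_t factor = base % m;
	while (exp != 0) {
		if (exp & 1)
			result = mul_mod(result, factor, m);
		factor = mul_mod(factor, factor, m);
		exp >>= 1;
	}
	return result;
}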

And so, because I don’t want to present theory only in a form that some people may not be able to visualize, what I did next was to write a C++ program, which actually only uses C, which assumes the user only has a 32-bit CPU, and which nevertheless performs a 64-bit by 64-bit division.

This has now been tested and verified.

One problem in writing this code is the fact that, depending on whether the divisor, which is formatted as a 64-bit field, actually contains a 64-bit, 32-bit, 24-bit or 16-bit value, a different procedure needs to be selected; even this fixed-precision format cannot assume that the significant bits are always positioned in the same place.
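
Just to illustrate that classification step (a hypothetical helper of my own, not an excerpt from the linked sample code), the significant width of the divisor can be measured by examining its two 32-bit halves:


#include <cstdint>

// Count the significant bits of a 64-bit divisor, by scanning its two
// 32-bit halves, so that the appropriate division procedure can be chosen.
int significant_bits(uint64_t d) {
	uint32_t hi = (uint32_t)(d >> 32);
	uint32_t lo = (uint32_t) d;
	uint32_t word = (hi != 0) ? hi : lo;
	int bits = (hi != 0) ? 32 : 0;
	while (word != 0) {
		++bits;
		word >>= 1;
	}
	return bits;  // 0 means the divisor itself is zero, i.e. the division is undefined
}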

I invite people to look at this sample-code:

http://dirkmittler.homeip.net/text/divide64.cpp

(Update 06/10/2018, 23h30 : )

I needed to correct mistakes which I had made in that same piece of code. However, I presently know the code to be correct.

Just to test my premises, I’m going to assume that the following division is to be carried out, erroneously, as a simple division that assumes a word size of 32 bits:

Continue reading Some Suggested Code

Getting Steam to run with proprietary nVidia.

According to this earlier posting, I had switched my Debian / Stretch (Debian 9) -based computer named ‘Plato’ from the open-source ‘Nouveau’ drivers, which are delivered via ‘Mesa’ packages, to the proprietary nVidia drivers, because the latter offer more power, in several ways.

But then, one question which we’d want an answer to is how to get “Steam” to run. Just from the Linux package manager, the available games are slim pickings, whereas through a Steam membership we can buy Linux versions of at least some powerful games, meaning, pay for them with money.

But, when I tried to launch Steam naively, which used to launch, I only got a message-box which said that Steam could not find the 32-bit version of ‘libGL.so’ – and then Steam died. This temporary result ‘makes sense’, because I had only installed the default, 64-bit libraries that go with the proprietary packages. Steam is a 32-bit application by default, and a multi-arch setup, which I have, is a prerequisite.

And so my next project became to create a 32-bit interface to the rendering system, in addition to the existing 64-bit one.

The steps that I took assume that I had previously chosen to install the ‘GLVND’ version of the GLX binaries, and unless the reader has done the same, the following recipe will be incorrect. Only, the ‘GLVND’ packages which I initially installed are not listed in the posting linked to above; they belonged to the suggested packages which, as I wrote, I had written down on paper and then added to the command-line that transformed my graphics system.

When I installed the additional 32-bit libraries, I did get a disturbing error message, but my box still runs.

Continue reading Getting Steam to run with proprietary nVidia.

Why OpenShot will Not Run on my Linux Tablet

In This earlier posting, I had written that, although I had already deemed it improbable that this sort of Linux application would run on my Linux tablet, I would nevertheless try, and see whether I could get such a thing to run. And as I wrote, I had considerable problems with ‘LiVES’: even if I had gotten the stuttering preview-playback under control, I could not have put LiVES into multi-tracking mode, thereby rendering the effort futile. I had also written that, on my actual Linux laptop, LiVES just runs ~perfectly~.

And so a natural question which might come next would be, ‘Could OpenShot be made to run in that configuration?’ And the short answer is No.

‘OpenShot’, as well as ‘KDEnlive’, uses a media library named ‘mlt’, which is also referred to as ‘MeLT’, to perform its video compositing actions. I think that the main problem with my Linux tablet, when asked to run such applications, is that it is only a 32-bit quad-core, and an ARM CPU at that. ARM CPUs are designed in such a way that they are at their best when running Dalvik Bytecode (which I just learned has been succeeded by ART) through the interpreter and compiler that Android provides, and, in certain cases, when running specialized codecs in native code. Being RISC chips, they do not have ‘MMX’ or the other x86-style SIMD extensions.

When we try to run CPU-intensive applications that have been compiled in native code on an ARM CPU, we suffer from an additional performance penalty.

The entire ‘mlt’ library is already famous for requiring a high amount of CPU usage, in order to be versatile in applying effects to video time-lines. There have been stuttering issues when trying to get it to run on ‘real Linux computers’, though not on mine. My Linux laptop has a 64-bit quad-core, AMD-family CPU, with many extensions. That CPU can handle what I throw at it.

What I also observe when trying to run OpenShot on my Linux tablet is that, if I right-click on an imported video-clip and then left-click on Preview, the CPU usage is what it is, and I already get some mild stuttering / crackling of the audio. But if I then drag that clip onto a time-line, and ask the application to preview the time-line, the CPU usage is double what it would otherwise be, and I get severe playback-slowdown, as well as audio-stuttering.

In itself, this result ‘makes sense’, because even if we have not elected to put many effects into the time-line, the processing that takes place when we preview it is as it would be if we had put in an arbitrary number of effects. I.e., the processing is inherently slow, to allow for the eventuality that we’d put in many effects. So slow, that the application doesn’t run usably on a 32-bit ARM quad-core, when compiled in native code.

(Updated 10/09/2017 : )

Continue reading Why OpenShot will Not Run on my Linux Tablet