How the Obsolescence of Dalvik Does Not Even Represent an Inconsistency

There was a development in Android which slipped my attention, and which took place during or after the advent of Android 4.4 (KitKat).

In general, it has always been the case that Android application-programmers were encouraged to write source-code identical in its syntax to Java, which was compiled by their IDE into a kind of bytecode. Only, for many years this bytecode was not the standard Java form, instead being the Dalvik format. Exactly as it goes with Java, this bytecode was then interpreted by a central component of the Android O/S (although there exist computers on which the JVM is not central to the O/S).
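As a sketch of that pipeline: the developer writes ordinary Java-syntax source like the (hypothetical, minimal) class below; the tool-chain compiles it to bytecode, and the device's VM then runs the bytecode, never seeing the source.

```java
// Hypothetical minimal example of what the developer actually writes.
// On desktop Java, `javac` compiles this file into JVM bytecode (Hello.class);
// on Android, the build tools further translate that bytecode into the
// Dalvik Executable (.dex) format, which the VM on the device then runs.
public class Hello {
    static String greeting() {
        return "Hello from bytecode";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```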

This means that devs were discouraged, but not forbidden, from writing their apps in C++, which would have been compiled directly into native code. In fact, the Native Development Kit has always been an (optional) part of the official Android tool-chain.

But since Android 5 (Lollipop), this interpreter has been replaced with something called the Android Runtime (ART). This fact does not present me with much of an inconsistency, only a late awareness of progress.

Even when I was learning some of the basics of programming in Java, one piece of technology which had a name was a so-called “Flash Compiler”. This was in fact distinct from a “JIT Compiler”, in that the JIT compiler translates parts of the bytecode into native code on the fly, while the program is running, whereas a flash-compiler translates the bytecode into native code ahead of time, before the program runs. Neither needs the original source-code in order to derive native code.

So, if the newer Android Runtime flash-compiles the bytecode, this does not change the fact that devs are writing source-code which is, by default, still Java, and which is initially compiled by their IDE into bytecode.
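The distinction can be made concrete with a toy sketch (this is not Android's actual machinery, and all the names here are invented): both an interpreter and a flash-compiler start from the same bytecode. The interpreter decodes every instruction on every run, while the flash-compiler decodes the program once, ahead of time, into a directly executable form.

```java
import java.util.function.IntUnaryOperator;

// Toy illustration only: a made-up, three-opcode bytecode, run two ways.
public class FlashDemo {
    // Each instruction is {opcode, operand}.
    static final int PUSH_X = 0, ADD_CONST = 1, MUL_CONST = 2;

    // Interpreter: pays the decode cost on every single run.
    static int interpret(int[][] program, int x) {
        int acc = 0;
        for (int[] ins : program) {
            switch (ins[0]) {
                case PUSH_X:    acc = x;          break;
                case ADD_CONST: acc += ins[1];    break;
                case MUL_CONST: acc *= ins[1];    break;
            }
        }
        return acc;
    }

    // "Flash" compiler: decodes the bytecode once, producing a composed
    // function; later runs pay no per-instruction decode cost.
    static IntUnaryOperator compile(int[][] program) {
        IntUnaryOperator f = x -> 0;
        for (int[] ins : program) {
            IntUnaryOperator prev = f;
            int operand = ins[1];
            switch (ins[0]) {
                case PUSH_X:    f = x -> x;                              break;
                case ADD_CONST: f = x -> prev.applyAsInt(x) + operand;   break;
                case MUL_CONST: f = x -> prev.applyAsInt(x) * operand;   break;
            }
        }
        return f;
    }

    public static void main(String[] args) {
        int[][] program = { {PUSH_X, 0}, {ADD_CONST, 3}, {MUL_CONST, 2} };
        System.out.println(interpret(program, 5));          // (5 + 3) * 2
        System.out.println(compile(program).applyAsInt(5)); // same result, decoded once
    }
}
```

Note that neither path above ever touches source-code; both consume only the bytecode, which is the point being made.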

Clearly though, flash-compiling the bytecode and then executing the resulting native code is faster than interpreting the bytecode.


Yet, the speed-improvement which was once thought to exist in RISC CPUs has largely vanished over the history of computing. One original premise behind RISC-Machines was that they could be made to run at a higher clock-speed than Complex Instruction Set Computers (CISC), and that due to this increase in clock-speed, the RISC-Machines could eventually be made faster.

In addition, early CISC-Machines failed to exploit concurrency well, i.e., to execute a number of operations simultaneously. By exploiting it, modern CISC-Machines also obtain low numbers of clock-cycles per instruction. But I think that this subject will remain in debate, as long as practical CISC-Machines have not exploited concurrency as much as theory should permit.

Since an optimizing compiler generally has the option of compiling source-code into simpler instructions even when targeting a CISC-Machine, it would follow that realistic source-code needs to be compiled into longer sequences of instructions for a RISC-Machine.
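This trade-off can be put into arithmetic using the classic CPU performance equation, execution time = instruction-count × cycles-per-instruction ÷ clock-rate. The numbers below are entirely made up for illustration; they show how a longer RISC instruction sequence at a lower CPI can come out even with a shorter CISC sequence at a higher CPI.

```java
public class IronLaw {
    // Classic CPU performance equation: seconds = instructions * CPI / Hz.
    static double seconds(long instructions, double cpi, double hz) {
        return instructions * cpi / hz;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: the RISC-Machine executes 50% more, simpler
        // instructions, but at a lower CPI; the CISC-Machine executes fewer,
        // more complex instructions, at a higher CPI. Same clock assumed.
        double risc = seconds(1_500_000_000L, 1.0, 3.0e9);
        double cisc = seconds(1_000_000_000L, 1.5, 3.0e9);
        System.out.printf("RISC: %.2f s, CISC: %.2f s%n", risc, cisc);
    }
}
```

With these (invented) figures, both machines finish in the same time, which is one way of seeing why the debate never produced a clear winner.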

This debate began before the days when a CPU had become one chip. Since the CPU is by now a single chip, on-chip communication at close to the speed of light permits a CISC-Machine to have as high a clock-speed as a RISC-Machine. OTOH, the power consumption of a RISC chip may still be slightly better.

And, as long as the CPU of an Android device only needed to execute a bytecode-interpreter, its instruction set could be optimized, on a RISC-Machine, for doing exactly that. But if we compare the speeds of modern RISC- and CISC-Machines running optimized native code (i.e., compiled C++), I think it’s clear that the CISC-Machines, including the ones meant to run Windows or Linux, outperform the RISC-Machines.

(Edit 10/09/2017 : )

I believe that there exist specific situations in which a RISC-Machine runs faster, and that a lot of that depends on which language of source-code is being compiled. In the case of RISC-Machines, the compiler has fewer options for optimizing the code than it would have with CISC-Machines.


Why OpenShot will Not Run on my Linux Tablet

In this earlier posting, I had written that, although I had already deemed it improbable that this sort of Linux application would run on my Linux tablet, I would nevertheless try, and see whether I could get such a thing to run. And as I wrote, I had considerable problems with ‘LiVES’, where, even if I had gotten the stuttering preview-playback under control, I could not have put LiVES into multi-tracking mode, thereby rendering the effort futile. I had also written that on my actual Linux laptop, LiVES just runs perfectly.

And so a natural question which might come next would be, ‘Could OpenShot be made to run in that configuration?’ And the short answer is No.

‘OpenShot’, as well as ‘KDEnlive’, uses a media library named ‘MLT’, whose command-line front end is named ‘melt’, to perform its video compositing actions. I think that the main problem with my Linux tablet, when asked to run such applications, is that it has only a 32-bit quad-core, and an ARM CPU at that. ARM CPUs are designed in such a way that they are optimal when running Dalvik bytecode (which I just learned has been succeeded by ART) through the interpreter or compiler that Android provides, and, in certain cases, at running specialized Codecs in native code. They do not have x86-style ‘MMX’ extensions etc., because they are RISC-Chips.

When we try to run CPU-intensive applications, compiled into native code, on an ARM CPU, we suffer an additional performance penalty.

The entire ‘MLT’ library is already famous for requiring a high amount of CPU usage, in order to be versatile in applying effects to video time-lines. There have been stuttering issues when trying to get it to run on ‘real Linux computers’, though not on mine. My Linux laptop has a 64-bit quad-core, AMD-family CPU, with many extensions. That CPU can handle whatever I throw at it.

What I also observe when trying to run OpenShot on my Linux tablet, is that if I right-click on an imported video-clip and then left-click on Preview, the CPU usage is already considerable, and I get some mild stuttering / crackling of the audio. But if I then drag that clip onto a time-line, and ask the application to preview the time-line, the CPU usage is double what it would otherwise be, and I get severe playback-slowdown, as well as audio-stuttering.

In itself, this result ‘makes sense’, because even if we have not elected to put many effects into the time-line, the processing that takes place when we preview it is the same as it would be if we had put in an arbitrary number of effects. I.e., the processing is made inherently slow, to allow for the eventuality that we’d put in many effects. So slow, that the application doesn’t run on a 32-bit, ARM quad-core, when compiled into native code.

(Updated 10/09/2017 : )


SSDs do not cooperate well with Swap.

On all computers that have Solid-State Drives (aka SSDs), anything corresponding to a Linux Swap Partition, or to a Windows Swap File, will end up killing the SSD eventually, because each cell of an SSD can only be rewritten a finite number of times. Thus, if we were to install Linux on such a device, we would probably want Swap to be off.
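For what it's worth, on a Linux system one can verify whether swap is actually off by reading the kernel's own list of active swap areas. The sketch below assumes a Linux system with the standard `/proc/swaps` pseudo-file, which consists of one header line plus one line per active swap area; the class name `SwapCheck` is my own invention.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Sketch, assuming Linux: /proc/swaps lists active swap areas.
// A header line followed by nothing else means swap is effectively off.
public class SwapCheck {
    static boolean swapActive() throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("/proc/swaps"));
        return lines.size() > 1; // the first line is only the column header
    }

    public static void main(String[] args) throws Exception {
        System.out.println(swapActive() ? "Swap is on" : "Swap is off");
    }
}
```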

But then this realization also poses the question of whether Android kernels have specifically been compiled with their configuration flags set to disable swapping. Linux kernels traditionally have the option of being compiled without swap support (the ‘CONFIG_SWAP’ option), even though virtual memory as such remains in use. Since the ARM CPU is a RISC-chip, one might even wonder whether such a CPU has a TLB, although in practice, the ARM cores that run Android do include an MMU and a TLB.

Dirk