How the Obsolescence of Dalvik Does Not Even Represent an Inconsistency

There was a development in Android which slipped my attention, and which took place during and after the advent of Android 4.4 (KitKat).

In general, it has always been a fact that Android application-programmers were encouraged to write a kind of source-code which was identical in its syntax to Java, and which was compiled by their IDE into a kind of bytecode. Only, for many years this bytecode, and the virtual machine which executed it, were not officially named Java, instead officially being named Dalvik. Exactly as it goes with Java, this bytecode was then interpreted by a central component of the Android O/S (although there exist computers on which the JVM is not central to the O/S).
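As a minimal sketch of that pipeline (the class name and message below are placeholders of mine, not anything out of Android’s sources): an ordinary Java-syntax file is first compiled into JVM bytecode, and the Android build tools then convert that into Dalvik bytecode, a classes.dex file, for the on-device VM to execute:

```java
// Hello.java - an ordinary Java-syntax source file, as an Android dev would write it.
//
// Desktop step:   javac Hello.java          ->  Hello.class   (JVM bytecode)
// Android build:  dx / d8 over the .class files  ->  classes.dex  (Dalvik bytecode)
// On the device, the VM then executes the contents of classes.dex.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from bytecode!");
    }
}
```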

This means that devs were discouraged, but not forbidden, from writing their apps in C++, which would have been compiled directly into native code. In fact, the Native Development Kit (NDK) has always been available as an optional component, installable alongside “Android Studio”.
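As a rough sketch of what that option looks like in practice: assuming a hypothetical native library named ‘mathlib’, built with the NDK from C++ sources, the Java side would only declare the native method and load the library, while the method body itself would be compiled directly into machine code. The class and method names here are inventions of mine, purely for illustration:

```java
// NativeSquare.java - a sketch of calling NDK-built C++ from Java via JNI.
// The library name "mathlib" and the method are hypothetical examples;
// the matching C++ function would live in libmathlib.so, compiled by the NDK.
public class NativeSquare {
    static {
        System.loadLibrary("mathlib"); // loads libmathlib.so at run-time
    }

    // Declared in Java, but implemented in C++ and compiled to native code.
    public static native long square(long x);

    public static void main(String[] args) {
        System.out.println(square(12)); // executes the native implementation
    }
}
```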

But since Android 5+ (Lollipop), this interpreter has been replaced by something called the Android Runtime (ART). This fact does not present me with much of an inconsistency, only a late awareness of progress.

Even when I was learning some of the basics of programming in Java, one piece of technology which had a name was a so-called “Flash Compiler”. This was in fact distinct from a “JIT Compiler”, in that the JIT compiler translates parts of the program into native code at run-time, while the program is already executing, whereas a flash-compiler only needs the bytecode, which it translates into native code ahead of time, before the program is run.

So, if the newer Android Runtime flash-compiles the bytecode, this does not change the fact that devs are writing source-code which is, by default, still Java, and which is initially compiled by their IDE into bytecode.

Clearly though, there is a speed-improvement in flash-compiling the bytecode and then executing the resulting native code, over interpreting the bytecode.
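To make the source of that speed difference concrete, here is a small, contrived Java sketch of my own, and not anything taken from Dalvik or ART: a loop that dispatches toy ‘bytecodes’ one at a time, next to the same arithmetic written out directly. The interpreter pays for a fetch, a decode (the switch) and a dispatch on every single operation, and that recurring overhead is exactly what compiling the bytecode to native code removes:

```java
// InterpretVsDirect.java - a contrived illustration, not real Dalvik/ART code.
public class InterpretVsDirect {
    // Toy instruction set: 0 = add 1, 1 = multiply by 2.
    static final int[] PROGRAM = {0, 1, 0, 1, 0};

    // "Interpreted" version: every operation pays for fetch + decode + dispatch.
    static long interpret(long x) {
        for (int pc = 0; pc < PROGRAM.length; pc++) {
            switch (PROGRAM[pc]) {
                case 0: x = x + 1; break;
                case 1: x = x * 2; break;
            }
        }
        return x;
    }

    // "Compiled" version: the same work, with the dispatch overhead gone.
    static long direct(long x) {
        return ((x + 1) * 2 + 1) * 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(interpret(3)); // 19
        System.out.println(direct(3));    // 19, but without the per-op dispatch
    }
}
```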


 

Yet, the speed-improvement which was once thought to exist in RISC-Chip CPUs has largely vanished over the history of computing. One original premise behind RISC-Machines was that they could be made to run at a higher clock-speed than Complex Instruction Set Computers (CISC), and that due to this increase in clock-speed, the RISC-Machines could eventually be made faster.

In addition, early CISC-Machines failed to exploit concurrency well, i.e. to execute a number of operations simultaneously. By doing so, modern CISC-Machines also obtain low numbers of clock-cycles per instruction. But I think that this subject will remain in debate, as long as practical CISC-Machines have not exploited concurrency as much as theory should permit.

Since an optimizing compiler generally has the option of compiling source-code into simpler instructions, even when targeting a CISC-Machine, while a compiler targeting a RISC-Machine has only simple instructions to choose from, it follows that realistic source-code needs to be compiled into longer sequences of RISC-Machine instructions.
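To spell out the trade-off in the last two paragraphs, the standard accounting is the so-called “iron law” of processor performance, shown below. The numbers underneath it are made up by me, purely to illustrate how a longer RISC instruction sequence can still win or lose, depending on cycles-per-instruction and clock period:

```latex
% The "iron law" of processor performance:
\[
  T_{\text{exec}} \;=\; N_{\text{instr}} \times \text{CPI} \times T_{\text{clk}}
\]
% Illustrative, made-up numbers (not measurements):
%   CISC: 100 instructions, CPI = 2.0, T_clk = 1 ns  ->  T_exec = 200 ns
%   RISC: 150 instructions, CPI = 1.2, T_clk = 1 ns  ->  T_exec = 180 ns
% The longer RISC sequence only comes out ahead if its lower CPI
% (and / or shorter clock period) more than compensates for the
% extra instructions.
```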

This debate began before the days when a CPU had become a single chip. Since the CPU is by now a single chip, on-chip communication (at close to the speed of light, over tiny distances) permits a CISC-Machine to have as high a clock-speed as a RISC-Machine. OTOH, the power consumption of a RISC Chip may still be slightly better.

And, as long as the CPU of Android devices only needed to execute a Bytecode Interpreter, that one, fixed set of instructions could itself be optimized for running on a RISC-Machine. But, if we compare the speeds of modern RISC and CISC-Machines running optimized, native code (i.e., compiled C++), I think it’s clear that the CISC-Machines, including the ones that were meant to run Windows or Linux, outperform the RISC-Machines.

(Edit 10/09/2017 : )

I believe that there exist specific situations in which a RISC-Machine runs faster, and that a lot of that depends on what language of source-code is being compiled. In the case of RISC-Machines, the compiler has fewer options for optimizing the code than it would have with CISC-Machines.

Yet, if we were always executing LISP, for example, and if the LISP interpreter had been optimized to run on a RISC-Machine, then this could yield faster execution.

OTOH, if our source-code consisted of randomly-constructed C++, then I don’t believe that there would be any advantage in compiling it to run on a RISC-Machine.

And if the source-code was actually written in pure C, then the developer would benefit from the fact that a lot of attention has been given specifically to compiling C to run on unusual architectures, as well as to how to vectorize and / or optimize it. And some of that experience could also spill over into targeting a RISC-Machine.

In my own experience, (interpreted) Python programs, running on a Python interpreter that had simply been compiled into an ‘armhf’ application, with no specific consideration for how to port the application in question, did not perform well. One way to improve on that would actually be to rewrite the Python interpreter with that consideration in mind…


 

A context in which I had first read about a Java flash-compiler, in the 1990s, was an embedded system. But a fact which needs to be taken into consideration is that that embedded system ran at very limited clock speeds by today’s terms. That embedded system might have run at ?2MHz? . And so, if it had not been able to flash-compile its bytecode, then to interpret that bytecode ‘would have gotten nowhere fast’. And so it is a bit different today, when CPUs easily run at gigahertz speeds.
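Just to put a rough, made-up number on ‘nowhere fast’: if interpreting each bytecode instruction had cost on the order of 20 machine cycles of fetch-and-dispatch overhead, the arithmetic at 2MHz would have looked roughly as follows (all figures here are illustrative assumptions of mine, not measurements of that system):

```latex
% Purely illustrative arithmetic, not measurements:
\[
  \frac{2\,000\,000\ \text{cycles/s}}{20\ \text{cycles per interpreted bytecode}}
  \;\approx\; 100\,000\ \text{bytecode ops/s}
\]
% Flash-compiled, each bytecode becomes only a few native instructions,
% so the same 2MHz clock would sustain several times as many operations
% per second.
```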

This is also why I wrote that flash-compilers existed by name.

Dirk

 
