How the Obsolescence of Dalvik Does Not Even Represent an Inconsistency

There was a development in Android which slipped my attention, and which took place during or after the advent of Android 4.4 (KitKat).

In general, it has always been a fact that Android application-programmers were encouraged to write a kind of source-code which was identical in its syntax to Java, and which was compiled by their IDE into a kind of bytecode. Only, for many years this bytecode was not the standard Java bytecode, instead being the format named Dalvik, after the virtual machine designed to run it. Exactly as it goes with Java, this bytecode was then interpreted by a central component of the Android O/S (although there exist computers on which the JVM is not central to the O/S).
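
To make concrete what kind of source-code is meant, here is a minimal sketch (the class name and the displayed text are only placeholders): ordinary Java syntax, using the Android framework classes, which the IDE first compiles into Java class files and then into Dalvik bytecode (a .dex file) for the device to run.

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// A minimal Activity, written in ordinary Java syntax.
// The IDE compiles this source into .class files, and then into
// Dalvik bytecode (a .dex file), which is what the device executes.
public class HelloActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView view = new TextView(this);
        view.setText("Hello from bytecode!");
        setContentView(view);
    }
}
```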

This means that devs were discouraged, but not forbidden, from writing their apps in C++, which would be compiled directly into native code. In fact, the Native Development Kit has long been available as an (optional) add-on to “Android Studio”.
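
As a sketch of what this alternative looks like from the Java side, assuming a hypothetical native library named “native-lib”, built from C++ with the NDK (the C++ source and the build files are not shown here):

```java
// Java side of a JNI binding: the actual work is done by a C++ function,
// which the NDK has compiled into a native shared library (libnative-lib.so).
public class NativeMath {
    static {
        // "native-lib" is a placeholder name for the NDK-built library.
        System.loadLibrary("native-lib");
    }

    // Declared here on the Java side, but implemented in C++ and
    // compiled directly into native code by the NDK toolchain.
    public static native long fibonacci(int n);
}
```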

But as of Android 5+ (Lollipop), this interpreter has been replaced with something called the Android Runtime (ART). This fact does not present me with much of an inconsistency, only with a late awareness of progress.

Even when I was learning some of the basics of programming in Java, one piece of technology which had a name was the so-called “Flash Compiler” – in more current terminology, an ahead-of-time (AOT) compiler. This was in fact distinct from a “JIT Compiler”, in that the JIT compiler translates parts of the bytecode into native code while the program is already running, as those parts prove worth compiling, while a flash-compiler translates the bytecode into native code ahead of time, before the program runs. Neither needs the original source-code in order to derive native code.

So, if the newer Android Runtime flash-compiles the bytecode ahead of time, this does not change the fact that devs are writing source-code which is by default still Java, and which is initially compiled by their IDE into bytecode.

Clearly though, there is a speed-improvement in flash-compiling the bytecode and then executing the resulting native code, over interpreting the bytecode.

Yet, the speed-improvement which was once thought to exist in RISC-Chip CPUs has largely vanished over the History of Computing. One original premise behind RISC-Machines was that they could be made to run at a higher clock-speed than Complex Instruction Set (CISC) Computers, and that due to this increase in clock-speed, the RISC-Machines could eventually be made faster.

In addition, early CISC-Machines failed to use concurrency well, in order to execute a number of operations simultaneously. By exploiting concurrency better, modern CISC-Machines also obtain low numbers of clock-cycles per instruction. But I think that this subject will remain in debate, as long as practical CISC-Machines have not exploited concurrency as much as theory should permit.

Since an optimizing compiler generally has the option of compiling source-code into simpler instructions even when targeting a CISC-Machine – while the reverse option, of using more-complex instructions, does not exist when targeting a RISC-Machine – it would follow that realistic source-code needs to be compiled into sequences of RISC-Machine instructions that are at least as long, and usually longer.

This debate began before the days when a CPU had become one chip. Since the CPU is by now a single chip, on-chip communication at the speed of light permits a CISC-Machine to have as high a clock-speed as a RISC-Machine. OTOH, the power consumption of a RISC Chip may still be slightly better.

And, as long as the CPU of Android devices only needed to execute a Bytecode Interpreter, that interpreter could itself be optimized for running on a RISC-Machine. But if we compare the speeds of modern RISC- and CISC-Machines running optimized, native code – i.e., compiled C++ – I think it’s clear that the CISC-Machines, including the ones that were meant to run Windows or Linux, outperform the RISC-Machines.

(Edit 10/09/2017 : )

I believe that there exist specific situations in which a RISC-Machine runs faster, and that a lot of that depends on which language of source-code is being compiled. In the case of RISC-Machines, the compiler has fewer options to optimize the code than it would have with CISC-Machines.


Android Permissions Control

One fact which I had written about before was that Android differs from Linux, in that under Android, every installed app has its own username. Also, because different users install a different set of apps, in a different order, the UID – an actual number – for any given username will differ from one device to the next. And then I also wrote that whether a username belongs to a given group or not can be used to manage access control, just like under Linux.
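
This per-app identity is easy to observe from inside an app. Below is a minimal sketch (the class, method, and log-tag names are placeholders) which reads the numeric UID that the kernel assigned to the running app; the same app installed on another device will usually report a different number.

```java
import android.content.Context;
import android.os.Process;
import android.util.Log;

// A small probe: every Android app runs under its own username, and the
// kernel knows that username by a numeric UID, assigned at install time.
public final class UidProbe {
    private static final String TAG = "UidProbe";

    public static void logOwnUid(Context context) {
        int uid = Process.myUid();  // e.g. 10123 for an ordinary app
        // A name the framework associates with this UID, typically the package name.
        String name = context.getPackageManager().getNameForUid(uid);
        Log.i(TAG, "Running as UID " + uid + " (" + name + ")");
    }
}
```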

There is a reason for which things are not so simple with Android. Most Android apps are written in source code whose syntax is identical to “Java”, and which must be compiled into “Bytecode” – specifically, the bytecode format of the “Dalvik” virtual machine. The bytecode in turn runs on a bytecode interpreter, which is also referred to as a Virtual Machine.

The reason for which this affects permissions is that, as far as the kernel can see, the VM itself runs under one username. This is similar to how a Java VM would run under one username. And so a much more complex security model is put in place by the VM itself, presumably because this VM’s username has far-reaching capabilities on the device, which no single app should inherit in full.
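
A sketch of how that VM-level model shows itself to the app programmer, as opposed to kernel permission bits (the class and method names are placeholders; the permission check itself is the standard framework call):

```java
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;

// Instead of asking the kernel about the permission bits of some file,
// the app asks the Android framework whether it has been granted a
// named permission, here the INTERNET permission from its manifest.
public final class PermissionProbe {
    public static boolean mayUseInternet(Context context) {
        int result = context.checkCallingOrSelfPermission(Manifest.permission.INTERNET);
        return result == PackageManager.PERMISSION_GRANTED;
    }
}
```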

The actual use of groups to control access under Android is simpler, and applies at first glance to processes which have been compiled with the ‘NDK’ – with the “Native Development Kit” – and which therefore run directly as native code, say, having been compiled from C++ source.

Further, the Dalvik VM is capable of reading the permissions of actual files, and of applying its own security model in a way that takes into account the permission bits which the Linux kernel has assigned to those files. So for most purposes, the security model of the VM is more important than the actual permission bits, as assigned and enforced by the kernel, because most Android source code is effectively written in a Java-like language.

Dirk