Pursuing the question of, whether a Linux subsystem, that runs under Android, due to the UserLAnd app, can be used for Web development.

Several months or years ago, I wrote that I had installed the “UserLAnd” app on my Google Pixel C Tablet, so that I could install Debian Linux on it. A question which one reader asked me was whether such an arrangement could be used to carry out Web development. In fact, some question existed as to whether proprietary software could be made to run, and my answer was that it would be preferable to run only Free, Open-Source Software.

In the meantime, I’ve uninstalled Linux from the Pixel C, and installed it on my Samsung Galaxy Tab S6, which has 256GB of internal storage, so that this question can be examined more seriously.

The answer I’d give to this question is, that Web-development can be done in this way, as long as the developer accepts some severe restrictions.

  • Successful development of any kind will depend on whether the user has a real keyboard to type on.
  • The Open-Source application “Bluefish” runs out-of-the-box, which is more than I can say for any sort of Python IDE.
  • Because there is little possibility to run a Web-server on the tablet, the features which Bluefish would normally have, to edit PHP Scripts as well, will simply need to be ignored. The ability to preview the Web-pages written, depends on the Guest System’s Firefox browser following the ‘prooted’ Guest System’s Filename-Paths, so that, when Bluefish opens Firefox, the HTML File will essentially be opened as if from the hard drive. And the feature works…
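What that preview feature amounts to can be sketched briefly: Bluefish effectively hands Firefox a ‘file://’ URL built from the Guest System’s path of the saved page. The helper function and the path below are my own, hypothetical examples:

```shell
#!/bin/sh
# Build the file:// URL that the browser receives, from a Guest-System
# file name. A minimal sketch; the actual path depends on where the
# page was saved.
to_file_url() {
    printf 'file://%s\n' "$1"
}

to_file_url "$HOME/public_html/index.html"
# The preview then amounts to something like:
#   firefox "$(to_file_url "$HOME/public_html/index.html")" &
```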


[Screenshot: Screenshot_20200924-052525_VNC Viewer]

[Screenshot: Screenshot_20200924-052618_VNC Viewer]


The main reason I would say not to invest in paid-for software on this platform is that its full potential will not be realized.

The HTML and CSS Files created in this way will next need to be transferred to an actual Web-server, and some of the ways in which Bluefish would be set up on a real Linux box, would make this easier.


(Updated 10/03/2020, 4h00: )

(As of 9/29/2020, 6h55: )

I have given further thought to this concept, and concluded that the Android app “SyncThing” could make the prospect slightly easier. This is an app which I already use, together with its Linux companion app, to synchronize arbitrary user folders between my Android devices and my Linux computers.

The recipe which I have tried is something as follows:

  • Create a sub-folder named ‘mobile’ within my static HTML, Web-server folder tree, corresponding to the URL ‘http://dirkmittler.homeip.net/mobile‘.
  • Create a sub-folder on the tablet, which has the location: ‘root > sdcard > Android > data > tech.ula > files > storage > html’.
  • The above location will appear within the UserLAnd, Linux Guest System at the location ‘/storage/internal/html’.
  • Use SyncThing according to its own, detailed instructions, to sync the Android folder with the user folder on whatever PC is to be used, in order eventually to transfer HTML to the Web-server.
  • Use whatever tools exist on the PC – i.e., under Linux, the “FileZilla” GUI application – finally to upload the contents of the sub-folder to the location on the actual Web-server that corresponds to its URL.
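The correspondence between the Android-side folder and its location inside the UserLAnd Guest System, from the recipe above, is a simple prefix substitution, which the following sketch illustrates. The two prefixes are taken from the recipe; the helper function is my own:

```shell
#!/bin/sh
# Map an Android-side path to where it surfaces inside the UserLAnd
# Guest System, by swapping the storage prefix.
android_prefix="/sdcard/Android/data/tech.ula/files/storage"
guest_prefix="/storage/internal"

to_guest_path() {
    # strip the Android prefix, prepend the Guest-System prefix
    printf '%s\n' "${guest_prefix}${1#"$android_prefix"}"
}

to_guest_path "$android_prefix/html"   # prints /storage/internal/html
```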

This should allow more straightforward editing of Web content on the tablet. But the drawback is then that, on whichever PC was being used, the added step needs to be taken manually, of uploading the changes which were automatically synchronized to its user folder, to the Web-server in turn.


(Update 9/29/2020, 12h55: )

In this later posting, I ruminated over the question of whether using the Linux command-line program ‘curlftpfs’, which mounts a remote FTP server’s directory as a local, virtual file system with write-access, actually has advantages over using the GUI-based program “FileZilla”. And for the moment, my conclusion seemed to be, ‘Not really.’ But I think I see a way in which it could, after all, be advantageous.

It’s possible to write a script which not only mounts the Web-server’s directory, but which, immediately afterwards, also syncs changes in a user directory to a subdirectory of the mounted HTML directory. And so, if I were to assume that only the sub-directory ‘mobile’ was to be synced up to the server, I can create a shell script that does this with a single click:



#!/bin/sh

# Make sure the mount point exists, and unmount any stale mount:
mkdir -p ~/mnt/ftp
fusermount -u ~/mnt/ftp

# Mount the server's 'html' directory as a local file system, over FTPS:
curlftpfs -o ssl,tlsv1,cacert=/home/dirk/vsftpd.pem ftp://dirkmittler.homeip.net/html ~/mnt/ftp

# Empty the target first, because curlftpfs will not let rsync
# overwrite files that already exist on the server:
rm -rf ~/mnt/ftp/mobile/*

# Copy whole files (-W), writing them in place:
rsync -rW --inplace ~/public_html/mobile/ ~/mnt/ftp/mobile


This assumes that the user keeps his copy of the Web-site in the directory ‘~/public_html’.

I have posted this script in spite of the fact that it has issues. One issue with the ‘curlftpfs’ program is that its time-stamps tend to be inaccurate. Another is a known bug, by which certain types of write operations are allowed, such as deleting the contents of a directory recursively, but overwriting a file which already exists on the FTP-server is not presently permitted to certain programs, including ‘rsync’. (:1) This is why my script recursively empties the directory ‘~/mnt/ftp/mobile’. The reader should beware that putting an incorrect path-name here will cause much damage to the contents stored on the FTP-server, and do so within seconds, without asking for any confirmation! Please see footnote (:2) below for the safer version of the script…
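Given that warning, a guard could be placed ahead of the destructive ‘rm -rf’ step, so that the script refuses to run if the path is not actually underneath the mount point. The following is a sketch of such a check; the helper name is my own:

```shell
#!/bin/sh
# Refuse the destructive delete unless the target lies under the
# expected mount point.
is_under_mount() {
    # $1 = mount point, $2 = candidate path
    case "$2" in
        "$1"/*) return 0 ;;
        *)      return 1 ;;
    esac
}

mount_point="$HOME/mnt/ftp"
target="$mount_point/mobile"

if is_under_mount "$mount_point" "$target"; then
    : # safe to proceed here with: rm -rf "$target"/*
else
    echo "Refusing: '$target' is outside '$mount_point'." >&2
    exit 1
fi
```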



(Update 10/01/2020, 9h45: )


I have analyzed more deeply than before the question of why certain programs, including ‘rsync’, will fail to save files to such a locally-mounted, virtual folder, if a file by the given name already exists. What I have found is that the problem lies with the ‘rsync’ program using the ‘mkstemp()’ C function to create temporary files. This function locks the file.

The way many programs save documents to files that already exist by their original names is to create a temporary file first, using ‘mkstemp()’, and, in the case of ‘rsync’, then to rename that file to the already-existing file, thereby overwriting it. Certain other programs, such as “Bluefish”, may even open files in Append-mode, which ‘curlftpfs’ no longer permits. For this reason, even mounting the FTP File System with the ‘-o umask=0000’ Option will not be enough to allow “Bluefish” to edit the virtual file directly. However, I’ve found that doing so permits ‘vi’ and ‘gvim’ to open, edit, and then save the documents.
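That save pattern can be sketched in shell, using the ‘mktemp’ command in place of the ‘mkstemp()’ C call: write the new contents to a temporary file in the same directory, then rename it over the original. On a normal file system the final rename is atomic; under ‘curlftpfs’, it is the temporary-file step that fails. The function name is my own:

```shell
#!/bin/sh
# Atomic save: write to a temp file first, then rename it over the
# destination, so that a crash never leaves a half-written file.
save_atomically() {
    dest="$1"
    dir=$(dirname "$dest")
    # create the temp file in the same directory as the destination,
    # so that the final 'mv' is a same-file-system rename
    tmp=$(mktemp "$dir/.tmp.XXXXXX") || return 1
    cat >"$tmp" || { rm -f "$tmp"; return 1; }
    mv "$tmp" "$dest"
}

printf 'new contents\n' | save_atomically /tmp/demo.txt
```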

‘mkstemp()’ is not supported by ‘curlftpfs’ by default.

‘curlftpfs’ cannot guarantee that such a file will be locked, as long as its physical existence depends on an entry on an FTP-server. Instead, it responds with an I/O Error in the virtual file system.

And, the most recent modifications to ‘curlftpfs’ make it difficult if not impossible, for any programs to edit files in-place…



(Update 10/01/2020, 9h35: )


I have now come up with a safer version of the script:



#!/bin/sh

# Keep rsync's temporary files on the local disk, outside the mount:
mkdir -p /tmp/curlftpfs

# Unmount any stale mount:
fusermount -u ~/mnt/ftp

# 'Fake' being located at GMT+0, so that the timestamps reported by
# the FTP-server are interpreted consistently:
export TZ="Etc/GMT+0"

curlftpfs -o ssl,tlsv1,cacert=/home/dirk/vsftpd.pem,umask=0000 ftp://dirkmittler.homeip.net/html ~/mnt/ftp

# Only transfer files that are newer (-u), and delete server-side
# files that no longer exist locally:
rsync -ru --temp-dir=/tmp/curlftpfs --delete ~/public_html/mobile/ ~/mnt/ftp/mobile


There exist three connected points of interest with the second version of this script:

  • The ‘-u’ option can be used with versions of ‘vsftpd’ which, when configured to display local modification timestamps to FTP clients, actually take Daylight Savings Time into account. But the Debian 8 / Jessie version of ‘vsftpd’ still has a bug that prevents it from doing so. Therefore, I really needed to disable this feature on my FTP-server, leaving it to report the timestamps according to ‘UTC’.
  • To compensate for that, the script above ‘fakes’ being located at ‘GMT+0’, which results in correct behaviour. Likewise, when faking GMT+0 in this way, the script can be used with other FTP-servers that report timestamps at GMT+0.
  • The assignment to the environment variable ‘TZ’ must not be in effect, when the attempt is made to list directory contents, in this case belonging to ‘~/mnt/ftp’, because if it were, the timestamps would also be printed to the screen at ‘GMT+0′ time.
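One way to honour that last point, instead of ‘export’-ing the variable for the whole session, is to prefix the assignment to just the one command that needs it: the child process inherits the value, while the shell’s own environment stays clean. A sketch, using ‘date’ as a stand-in for the real command (the same prefix form would, hypothetically, work for the ‘curlftpfs’ invocation, whose daemon inherits the variable):

```shell
#!/bin/sh
# A variable assignment prefixed to a command applies to that command
# only; the shell's own TZ is left untouched.
unset TZ

TZ="Etc/GMT+0" date -u >/dev/null   # stand-in for the real command

# Afterwards, TZ is still unset, so a later 'ls -l ~/mnt/ftp' would
# print timestamps in the local zone:
echo "TZ is now: '${TZ:-unset}'"
```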



(Update 10/01/2020, 10h30: )

A concern which some readers might have could be that, if the FUSE-mount is set with ‘umask=0000’, this could allow all users on the computer every type of access to the file system. However, when a FUSE-mount is performed without root privileges, the default behaviour is that only the user who performed the mount can access its files, and this remains in effect even if the individual file permissions state otherwise. What setting ‘umask=0000’ really does, then, is remove the permission-checking that is built in to Linux by default, according to which a user could not write to a file system which he had mounted, even if the actual file I/O operations were permitted by the kernel (and by the user-space code).
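For reference, what a ‘umask’ value does can be reduced to one line of arithmetic: the permission bits that get reported are full access (0777), with the mask’s bits cleared. A sketch, with a helper function of my own:

```shell
#!/bin/sh
# Compute the permission bits that remain after applying a umask:
# full access (0777) with the masked bits cleared.
mode_for_umask() {
    # prepend '0' so the argument is read as octal in the arithmetic
    printf '%04o\n' $(( 0777 & ~$(( 0$1 )) ))
}

mode_for_umask 0000   # prints 0777: no bits masked off
mode_for_umask 0022   # prints 0755: the usual default
```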

This behaviour can generally be changed with the ‘-o allow_other’ Option. However, like so many options, that one can only be issued with root privileges. Therefore, if a ‘curlftpfs’ file system is being mounted by root, then the details of what UIDs and GIDs the individual files and directories have, as well as what the ‘umask’ value needs to be, become much more critical for this FS-type.



(Update 10/02/2020, 14h45: )

One of the facts which some readers may be reluctant to accept is that strangely-mounted file systems can exist under Linux, which do not permit some of the I/O operations that a full-fledged file system supports. This is especially the case with FUSE-mounts, where the application developer could have written the user-space code to implement the file system in any way they saw fit. And so, the session below assumes that a ‘curlftpfs’ file system has mounted the contents of a remote FTP-server to ‘~/mnt/ftp’ locally. The behaviour evident in the session demonstrates that only a partial subset of I/O operations has been implemented, corresponding to how the author felt his file system should behave:


dirk@Phosphene:~$ cd ~/mnt/ftp/mobile
dirk@Phosphene:~/mnt/ftp/mobile$ ls -al
total 8
drwxrwxrwx 2 root root 4096 Oct  1 10:44 .
drwxrwxrwx 1 root root 1024 Dec 31  1969 ..
-rwxrwxrwx 1 root root  831 Oct  1 10:44 index.html
-rwxrwxrwx 1 root root    0 Oct  1 10:43 .stfolder
dirk@Phosphene:~/mnt/ftp/mobile$ cat index.html >index2.html
dirk@Phosphene:~/mnt/ftp/mobile$ cat index.html >index2.html
dirk@Phosphene:~/mnt/ftp/mobile$ ls -al
total 12
drwxrwxrwx 2 root root 4096 Oct  2 13:30 .
drwxrwxrwx 1 root root 1024 Dec 31  1969 ..
-rwxrwxrwx 1 root root  831 Oct  2 13:30 index2.html
-rwxrwxrwx 1 root root  831 Oct  1 10:44 index.html
-rwxrwxrwx 1 root root    0 Oct  1 10:43 .stfolder
dirk@Phosphene:~/mnt/ftp/mobile$ cat index.html >>index2.html
bash: index2.html: Operation not supported
dirk@Phosphene:~/mnt/ftp/mobile$ rm index2.html
rm: remove regular file 'index2.html'? y
dirk@Phosphene:~/mnt/ftp/mobile$ ls -al
total 8
drwxrwxrwx 2 root root 4096 Oct  2 13:31 .
drwxrwxrwx 1 root root 1024 Dec 31  1969 ..
-rwxrwxrwx 1 root root  831 Oct  1 10:44 index.html
-rwxrwxrwx 1 root root    0 Oct  1 10:43 .stfolder
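The session above can be generalized into a small probe script, of my own devising, which tries the same I/O operations in any given directory and reports which of them the mounted file system accepts. Run against ‘~/mnt/ftp/mobile’, it would hypothetically reproduce the failures shown above; against a normal directory, every operation succeeds:

```shell
#!/bin/sh
# Probe which basic I/O operations a (possibly FUSE-mounted)
# directory supports: create, overwrite, append, delete.
probe_fs() {
    d="$1"
    f="$d/.probe.$$"
    if true >"$f" 2>/dev/null;    then echo "create:    ok"; else echo "create:    not supported"; fi
    if echo x >"$f" 2>/dev/null;  then echo "overwrite: ok"; else echo "overwrite: not supported"; fi
    if echo y >>"$f" 2>/dev/null; then echo "append:    ok"; else echo "append:    not supported"; fi
    if rm -f "$f" 2>/dev/null;    then echo "delete:    ok"; else echo "delete:    not supported"; fi
}

probe_fs /tmp
```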


I can go just a little further, to try to make the findings of this posting less confusing to some readers, especially readers who may not be used to Linux. ‘curlftpfs’ and ‘rsync’ are two separate programs, and neither was written with any consideration for the fact that it might be used with the other. Differently from how ‘curlftpfs’ behaves, ‘rsync’ is a very stable program that will sync two folders – as well as, in some cases, a local folder with a remote folder – but which assumes that every sort of I/O operation is permitted in both folders. Therefore, if it suddenly happens that the FS on one side does not permit I/O operations which are normally permitted, ‘rsync’ doesn’t know what to do. In that case, ‘rsync’ may need to be invoked with special flags in order to achieve what the user expects, or may just become unusable.

In my example, ‘rsync’ is simply being called upon to sync two directories, both of which have been made to look like local directories, but one of them so, by ‘curlftpfs’. For this reason in particular, ‘rsync’ can be called with the options shown, and work in itself.



What the reader might ask next could be, ‘What would need to change on the Web-server, to allow Linux-centric SSHFS-access, so that rsync could be used over SSHFS, which is also sometimes referred to as SFTP? (SFTP is to be treated differently from FTPS.)’

The most important thing I’d need to change on the server I run, to allow this, would be to give the user-name ‘www-data’ a working log-in shell (which is set to ‘/bin/false’ right now), because SSH connections in general require that some specific program be executed on the server’s side, to complete the connection from the local computer. And, I have limited what that user can do on the Web-server, specifically in order to maximize security. Usually, the SSH log-in from the local computer runs a command on the remote computer.

Also, just because I happen to be running a Linux machine as my server, this does not imply that every Web-server out there is Linux-based. If it is not, then whatever overtures the user makes to his Web-hosting admin may be moot, because only Linux servers really run SSHD correctly. There might be some MacOS-based admins who could direct the user to do so as well…

And what can be asked after that would be, ‘If the server allowed the user-name in question to run SSH, what would the advantage be of connecting to it from the local machine using rsync?’ This posting already explains that, even though ‘rsync’ is more limited when run in this way than when syncing two entirely local directories, it would actually run with fewer limitations than it does when used with ‘curlftpfs’, the latter of which can practically be used with any FTP-server, including commercial Web-hosting servers. But, if the server did allow a user-name to run SSH, I personally could bypass the use of ‘rsync’ on the local machine completely, because my graphical file-browser, which happens to be called “Dolphin”, has the ability out-of-the-box (if the correct additional packages are installed) to connect to remote SSHFS file systems. Dolphin will actually add as a feature, to store the log-in credentials to the remote machine in the local user’s Wallet. Similar capabilities exist, with a different file-browser, under “GNOME” as well as under “LXDE”.

For that reason, I already have the ability to sync files to and from my server-box, securely, using point-and-click methods. But, because of how my server is locked down, this just does not work with the user-name ‘www-data’.



(Update 10/03/2020, 4h00: )

One of the command-line options which can be given to ‘curlftpfs’ is ‘-o no_remote_lock‘. This is a FUSE option which is not documented in the manpage on my machine but which, if given, also causes no error messages, for which reason I should assume that it still gets applied. As it happens, file-locking on an FTP-server does not have the same effect as file-locking within a mounted file system. On an FTP-server, if the feature is supported, files are locked while being uploaded, so that, if other users try to download them while they are still incomplete, the download will be made to wait… This is similar to what the ‘mkstemp()’ function was meant to do locally. However, the reality is that ‘curlftpfs’ just does not implement ‘fcntl()’, other than with a dummy function that always returns the same error code.


(Update 10/02/2020, 23h40: )

Actually, “Bluefish” has its own, internal method of accessing remote FTPS-servers, in addition to SFTP-hosts. The following little tutorial explains how it can be used:


And, when I try this from my tower computer ‘Phosphene’, it works, as the two screen-shots below show:



According to the first shot, I have opened the FTPS-server’s folder, and saved the file ‘COMMON.TXT’ onto itself, unmodified. And according to the second shot, the FTP-client ‘FileZilla’ shows the existence of two files on the server, one actual and the other a backup, both saved today, on October 2.

Hence, why do I not just plunge ahead, and start authoring Web-content on the tablet finally, using Bluefish, installed there? Because:

  • My Router does not have loop-back, And
  • The fact that I can access the site, using the domain-name that the rest of the Internet uses, depends on my having set up an alias for that domain-name, that points to an IP Address directly on my LAN, from the true Linux computers belonging to my LAN, And
  • When Linux is running in a ‘proot’-ed jail on the tablet, I have no control over the network configuration being used. The Guest System unwittingly just uses the network configuration of the Host System, and files that would normally control networking, belonging to the Guest System, just get ignored. And
  • Hypothetically, I could access my domain name in this way, once outside my LAN, especially, since the connection is encrypted by my server, BUT
  • My FTP-server’s port is not exposed to the WAN, And
  • This ability that Bluefish has from the GUI, to open SFTP URLs, depends on the GVFS Daemon, which does not launch as part of the tablet’s Guest Session, And
  • The ability that GVFS does have to open an FTPS URL, actually depends on having ‘curlftpfs’ installed.




3 thoughts on “Pursuing the question of, whether a Linux subsystem, that runs under Android, due to the UserLAnd app, can be used for Web development.”

  1. I also installed the UserLAnd App, and Ubuntu seems to be installed and running well. I tried different commands also, and they too are working. But when I install software like Python or gcc, and return after ending the session, these apps are lost. I have to install them again. From your webpages, though, it seems that new programs are permanently installed. Please answer

    1. This is a curious problem. Usually, if ‘apt-get’ was used to install packages, they should remain installed. One question which I’d have for you is, ‘What sort of desktop manager are you using?’ I am using LXDE. What can happen is, that if we are not using a desktop manager, and the corresponding VNC viewer, to access our Linux sub-system, we may find it hard to detect, whether software that we had previously installed, is in fact still installed.

      When using ‘apt-get’ to install software, it’s important to become root first, using ‘su’, even though, at the file-system level, UserLANd does not change the ownership of any files. ‘apt-get’ must still be run with (faked) root, in order to install software.

      Another plausible answer could be that, having installed Ubuntu, you might be using Ubuntu’s software manager to install software. This would mean that your desktop manager is either ‘GNOME’ or ‘Unity’. If you are doing that, then there may be a basic inability of Ubuntu to acquire root privileges – again, because within UserLANd, nothing can really become root – and this could be the reason for which your software installs don’t stick.


    2. There’s another observation which I’d like to repeat, even though my postings already stated it. Many desktop managers, including ‘GNOME’ and ‘LXDE’, usually require that numerous daemons be running, in order to be fully functional. As I already posted, this cannot really be done under UserLANd. This is also why my desktop has no Recycle Bin.

      Yet, when people use the graphical applications within their favourite desktop manager, to install software, then in many cases, doing so also requires that daemons be running – specifically, under ‘GNOME’, the ‘Packagekit’ daemon – which are responsible for allowing user-space applications to request that certain actions be run with elevated privileges. Under UserLANd, these daemons will not be running.

