ChromeOS Upgrade from Debian 9 to Debian 10 – aka Buster – Google Script crashed.

I have one of those Chromebooks that allow a Linux subsystem to be installed, that subsystem being referred to in the Google world as “Crostini”. It takes the form of a Virtual Machine, which mounts a Container. That container provides the logical hard drive of the VM’s Guest System. What Google had done at some point in the past was to install Debian 9 / Stretch as the Linux version, in a simplified, automated way. But, because Debian Stretch is being replaced by Debian 10 / Buster, the option also exists to upgrade the Linux Guest System to Buster. Only, while the option to do so manually was always available to knowledgeable users, with the recent update of ChromeOS, Google insists that the user perform the upgrade, and provides ‘an easy script’ to do so. The user is prompted to click on something in his ChromeOS settings panel.

What happened to me, and what may also happen to my readers, is that this script crashes, leaving the user with a ChromeOS window that displays a big red symbol, to indicate that the upgrade failed. I failed to take a screen-shot of what this looks like. The button offering the upgrade is thankfully taken away at that point. But, if he or she reaches that point, the user will need to decide what to do next, out of essentially two options:

  • Delete the Linux Container, and set up a new one from scratch. In that case, everything that was installed to, or stored within Linux will be lost. Or,
  • Try to complete the upgrade in spite of the failed script.

I chose to do the latter. The Linux O/S has its own method of performing such an upgrade. I would estimate that the reason the script crashed on me was Google’s expectation that my Linux Guest System might have 200-300 packages installed, when in fact I have a much more complete Linux system, with over 1000 packages installed, including services and other packages that ask for configuration options. At some point, the Google script hangs, because the Linux O/S is asking an unexpected question. Also, while the easy button has a check-mark checked by default, to back up the Guest System’s files before performing the upgrade, I intentionally unchecked that, simply because I knew I did not have sufficient storage on the Chromebook to back up the Guest System.

I proceeded on the assumption that what the Google script did first was to change the contents of the file ‘/etc/apt/sources.list’, as well as of the directory ‘/etc/apt/sources.list.d’, to specify the new software sources, associated with Debian Buster as opposed to Debian Stretch. At that point, the Google script should also have set up whatever it is that makes Crostini different from stock Linux. Only, once in the middle of the upgrade that follows, the Google script hung.
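Completing the upgrade by hand then amounts to letting Debian’s own tools finish what the script had started, from a terminal inside the container. The following is only a sketch of that procedure, which assumes, as just stated, that the script had already rewritten the sources to name Buster before it hung:

```shell
# Verify that the software sources now name 'buster' rather than 'stretch':
$ grep -r buster /etc/apt/sources.list /etc/apt/sources.list.d/

# Let dpkg finish configuring any packages the crashed script left
# half-installed, answering the configuration questions by hand this time:
$ sudo dpkg --configure -a

# Then resume and complete the distribution upgrade:
$ sudo apt update
$ sudo apt full-upgrade
```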

Continue reading ChromeOS Upgrade from Debian 9 to Debian 10 – aka Buster – Google Script crashed.

Garmin fenix 5x auto-update stuck at 50%.

I just recently purchased a ‘Garmin fenix 5x’ smart-watch, with emphasis on sports features. And, to make the watch work as smoothly as possible, it was necessary to allow it to install its latest firmware version, which happens to be 20.0 at the time of this posting. But there was a problem. I was intending to use this watch mainly with the ‘Garmin Connect’ app, available on Google Play, which communicates with the watch only via Bluetooth. (:1) Installing a firmware upgrade to an unknown, generic device via Bluetooth depends on how large a file-size the F/W upgrade is supposed to have. Bluetooth tends to be a slow interface, in a day and age when WiFi as fast as 802.11n is possible. And in this case, the update was apparently stuck at 50%, perhaps not even due to how slow the file transfer would have been, but rather, due to Garmin preferring that we install the update via the ‘Garmin Express’ application, of which there is a Windows as well as a macOS version.

Problem? I have neither a Windows nor a macOS device. I depend on installing that update otherwise. But, as indicated below, I found a solution that seems to work for me…

The ‘fenix 5x’ allows the watch itself to be connected to a WiFi network. The possibility should therefore exist to restart the update process, but using the watch’s WiFi link, not the Bluetooth upgrade that somehow got broken. To accomplish that, the first thing I needed to do was to add my home WiFi network to the watch’s list of WiFi networks. Fortunately for me, because the actual Bluetooth pairing between the watch and the Android app works 100%, I was able to set up WiFi for the watch, using the Android app.

My phone is a Samsung Galaxy S9, with its latest firmware, and my Garmin Connect app is up-to-date.

The next step needed to be done manually, because the watch did not just abandon its discontinued Bluetooth upgrade in favour of a new, WiFi-based upgrade:

  • On the watch, press the ‘Menu’ and the ‘Up’ buttons simultaneously,
  • Next, scroll down to the ‘Settings’ entry,
  • Press the ‘Activate’ button once,
  • Scroll down to the ‘System’ entry,
  • Press the ‘Activate’ button once, again,
  • Scroll down to the (last) ‘Software Update’ entry,
  • Press the ‘Activate’ button once, again,
  • There should be an ‘Auto-Update’ entry set to ‘On’, as well as an ‘Update Available’ entry… Scroll to this last entry, if there is one after the ‘Auto-Update’ entry,
  • Press the ‘Activate’ button again, once,
  • Press ‘Continue’ if so instructed,
  • Repeat until doing this no longer reveals an ‘Update Available’ entry. At that point, the last menu should only have the ‘Auto-Update (== On)’ entry.

Because the watch had been set up to connect to my WiFi without problems, it was able to install a major and a minor update quickly and without issues. After its major update, it restarted.

 

(Update 10/12/2020, 16h05: )

Continue reading Garmin fenix 5x auto-update stuck at 50%.

Resolving ‘curlftpfs’ – Error 530 …I/O Error.

One of the ideas which I recently saw another enthusiastic Linux user comment about was that the program “Bluefish”, which can be installed from Linux package-managers, is almost an ideal program for Web-design, and one of the reasons given was the fact that, additionally, under Linux it’s possible to mount a remote FTP-server’s directory to a local mount-point, and then to edit the HTML, CSS and PHP files directly within this mount-point. While I like enthusiasm for a good thing, I would prefer not to lead my readers into a situation where they could run into error messages, and then become frustrated because they might not be able to resolve those. Instead, I’d prefer to have run into some error messages myself, to have found a solution, and then to explain to the community what worked for me.

For starters, even though I do some minimal Web-design, my way of doing it is to maintain a set of folders within my home directory, which stays synced between my computers, but then to use “FileZilla” to upload changes I made to those folders’ files, to an FTPS-server, with point-and-click methods. FileZilla can also be installed from most Linux package managers, has a GUI, and supports secure connections to the FTP-server. If the reader is paying attention, he or she will notice that, by implication, I have an FTP-server running on the PC that acts as my Web-server, that I force the exchange of passwords to be encrypted, and that my FTP-server and -client reside side-by-side, on computers in the same room, on my private LAN.

The reader may infer that I’m not the most trusting person.


 

 

In this context, Linux users differentiate between two methods of file-sharing:

  • File-Sharing over SSH, which is sometimes referred to as ‘sftp’. If this is set up, then the local mounting of the remote FS becomes dead-simple. Or,
  • FTP File-Sharing over TLS, which is referred to differently, as ‘ftps’. My assumption for this posting is that ‘ftps’ is set up, but that ‘sftp’, or ‘ssh’, is not, for the account in question…
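Just to illustrate how simple that first case is: assuming the client-side package ‘sshfs’ is installed, and using a hypothetical account and host-name, the entire mount is one command:

```shell
# FUSE-mount a remote directory over SSH, as an unprivileged user.
# ('user', 'host.example.com' and both paths are placeholders here.)
$ sshfs user@host.example.com:/var/www/html ~/mnt
```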

 

 

But, just to verify whether the option can be forced to work, of mounting the actual FTP-server’s directories locally, and then, theoretically, also editing them in-place, I next discovered that virtually the only solution available is a package called ‘curlftpfs’, which performs a FUSE-mount. Lately, I’ve begun to like FUSE-mounts, especially without any of the mount options that would require root. But what I found next was that ‘curlftpfs’ is particularly temperamental. Whatever the client-side mount doesn’t like about the server-side configuration results in a short error message, and perhaps, a frustrated Web-author.

And so, I’m happy to share what it was that made the error message go away in my situation. The first thing that the reader would need to visualize is how the system directories are set up, using ‘vsftpd’, in this case for the user ‘www-data’, which owns all the Web-content, with Port 21 exposed to the LAN, but not to the WAN. This is the root of any static HTML I might want to host:

 


Directory               Owner
------------            --------
/var                    root
  +-www                 root
    +-html              www-data

 

While this detail may strike the reader as irrelevant, in fact it’s not, as I found out. That would be because the Home Directory of the user ‘www-data’ was set up, by me, several years ago, as ‘/var/www’, not ‘/var/www/html’. What this means is that, unless some directive was set up on the client-side to navigate to the sub-directory ‘html’ directly when logging in, from wherever this FTP user’s home directory was, the simple diagram which I just drew above would cause the Error 530 message.
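The home directory an FTP account will land in can be confirmed by querying the account database; it’s the sixth colon-separated field of the account’s ‘passwd’ entry. On my server, this reflects the choice I just described:

```shell
# Print the account's home directory (field 6 of its passwd entry):
$ getent passwd www-data | cut -d: -f6
/var/www
```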

Further, in my configuration file ‘/etc/vsftpd.conf‘, I have the following lines set:

 


userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO

 

What this means is that, in the additional file ‘/etc/vsftpd.userlist’, I have the list of users who are allowed to use FTP. And ‘www-data’ is the only user in the list.

I should also mention that the user-name ‘www-data’ has its login shell set to ‘/bin/false’, just to further reduce the risk of any sort of hacking.
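As a sketch, that part of the server-side setup could be reproduced with the following commands, assuming a Debian-style system on which ‘vsftpd’ is already installed:

```shell
# Allow only 'www-data' to log in over FTP:
$ echo 'www-data' | sudo tee /etc/vsftpd.userlist

# Deny that same account an interactive login shell:
$ sudo chsh -s /bin/false www-data

# Apply the changes:
$ sudo systemctl restart vsftpd
```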

But there was a simultaneous problem causing the same error message: When the ‘curlftpfs’ command is given with the ‘tlsv1’ option, at least on a Debian 9 / Stretch client, this just causes the authentication error, regardless of how sure the user has been that his username and password were provided correctly. What’s peculiar about that is the fact that, on my FTP-server, the file ‘/etc/vsftpd.conf’ does also have the following lines set:

 


ssl_enable=YES
ssl_tlsv1=YES

 

Further, setting ‘Explicit TLS handshake’ in FileZilla never caused any sort of error message. What I take this to mean is that either:

  • ‘curlftpfs’ tries to set a version of TLS which would be higher than 1.0, Or
  • Trying to set this option both on the client and server, causes the obscure error…
  • (See footnote :1 at the bottom of this posting, for the correct explanation.)

I should also note that ‘the correct way’ to get this mount to work would be for the user to set up the file ‘~/.netrc’, that being a file in his home directory, on the client computer, which stores his username and password in plain-text form. It’s important to give the command:

 


$ chmod go-r ~/.netrc

 

I.e., access to this file should be limited to reading, and perhaps writing, by the user in question.

If the user enters his password on the command-line, then it will also go into the command-history of his shell interpreter, and additionally, be visible in the process list. This is one possible, correct ‘.netrc’ file, for use with ‘curlftpfs’:

 


machine dirkmittler.homeip.net
login www-data
password <my-password>

 

What I eventually found out was, that ‘curlftpfs’ does in fact read and use this file correctly. But, once I got my FTPS-mount to succeed, I also deleted this file, and continued my experiment with the following command-line:

 


$ curlftpfs -o ssl,no_verify_peer,user=www-data ftp://dirkmittler.homeip.net/html ~/mnt

 

What the ‘user=www-data’ option does is to cause the command to ignore the ‘.netrc’ file, and to prompt the user to type in his password (invisibly) by hand. I found that using this feature eventually also works. And, at the end of the (remote computer’s) host-name, in my case, the characters ‘/html’ are important, for the reason I gave above.

The following command also works:

 


$ curlftpfs -o ssl,cacert=/home/dirk/vsftpd.pem,user=www-data ftp://dirkmittler.homeip.net/html ~/mnt

 

The last command assumes that the file ‘~/vsftpd.pem’ contains the self-signed, public certificate that my FTP-server uses, and causes the mount command to check the actual server’s public key for equality.
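One redeeming feature of such FUSE-mounts is that the same, unprivileged user can also unmount the share again, when finished editing:

```shell
# Release the FUSE-mount, again without requiring root:
$ fusermount -u ~/mnt
```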

 

Conclusion:

Although this mode of operation can be forced to work, I wouldn’t prefer it as my main way of uploading files, because it’s just too rickety. (:2) I’ll just go back to using FileZilla.

 

(Updated 9/28/2020, 22h45… )

Continue reading Resolving ‘curlftpfs’ – Error 530 …I/O Error.