Resolving ‘curlftpfs’ – Error 530 …I/O Error.

One of the ideas which I recently saw another enthusiastic Linux user comment about was that the program “Bluefish”, which can be installed from Linux package managers, is almost an ideal program for Web-design. One of the reasons given was the fact that, additionally, under Linux, it’s possible to mount a remote FTP-server’s directory to a local mount-point, and then to edit the HTML, CSS and PHP files directly within this mount-point. While I like enthusiasm for a good thing, I would prefer not to lead my readers into a situation where they could run into error messages and then become frustrated, because they might not be able to resolve those. Instead, I’d prefer to have run into some error messages myself, found a solution, and then to explain to the community what worked for me.

For starters, even though I do some minimal Web-design, my way of doing it is to maintain a set of folders within my home directory, which stays synced between my computers, and then to use “FileZilla” to upload any changes I’ve made to those folders’ files to an FTPS-server, with point-and-click methods. FileZilla can also be installed from most Linux package managers, has a GUI, and supports secure connections to the FTP-server. If the reader is paying attention, he or she will notice that, by implication, I have an FTP-server running on the PC that acts as my Web-server, that I force the exchange of passwords to be encrypted, and that my FTP-server and -client reside side-by-side on computers in the same room, on my private LAN.

The reader may infer that I’m not the most trusting person.

In this context, Linux users differentiate between two methods of file-sharing:

  • File-sharing over SSH, which is sometimes referred to as ‘sftp’. If this is set up, then locally mounting the remote file system becomes dead-simple; or
  • FTP file-sharing over TLS, which is referred to as ‘ftps’. My assumption for this posting is that ‘ftps’ is set up, but that ‘sftp’, or ‘ssh’, is not, for the account in question…
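
For comparison, a minimal sketch of the first case, assuming that ‘ssh’ were enabled for the account, and that the ‘sshfs’ package is installed (the user-name and host-name here are only placeholders):

```shell
# Mount a remote directory over SSH, into an empty local directory:
mkdir -p ~/mnt
sshfs user@example.com:/var/www/html ~/mnt

# ... edit files in ~/mnt, as though they were local ...

# Unmount again; neither step requires root:
fusermount -u ~/mnt
```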

But, just to verify whether the option can be forced to work, that the actual FTP-server’s directories can be mounted locally, and then, theoretically, also edited in-place, I next discovered that virtually the only solution available is a package called ‘curlftpfs’, which performs a FUSE-mount. Lately, I’ve begun to like FUSE-mounts, especially those without any mount options that would require root. But what I found next was that ‘curlftpfs’ is particularly temperamental. Whatever the client-side mount doesn’t like about the server-side configuration results in a short error message, and perhaps, a frustrated Web-author.

And so, I’m happy to share what it was that made the error message go away in my situation. The first thing the reader would need to visualize is how the system directories are set up, using ‘vsftpd’, in this case for the user ‘www-data’, which owns all the Web-content, and exposing Port 21 to the LAN, but not to the WAN. This is the root of any static HTML I might want to host:

Directory               Owner
------------            --------
/var                    root
  +-www                 root
    +-html              www-data

While this detail may strike the reader as irrelevant, in fact it’s not, as I found out. That’s because the home directory of the user ‘www-data’ was set up, by me, several years ago, as ‘/var/www’, not ‘/var/www/html’. What this means is that, unless some directive is set up on the client side to navigate directly to the sub-directory ‘html’ when logging in, from wherever this FTP user’s home directory is, the simple diagram which I just drew above would cause the Error 530 message.
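
A server-side alternative, which I’m only offering as a sketch and did not use myself, would be to have vsftpd change into the sub-directory itself after login, using the ‘local_root’ directive in ‘/etc/vsftpd.conf’:

```
# Change into this directory after a local user logs in,
# instead of that account's literal home directory:
local_root=/var/www/html
```

With that set, appending ‘/html’ on the client side should no longer be needed.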

Further, in my configuration file ‘/etc/vsftpd.conf‘, I have the following lines set:

userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO

What this means is that the additional file ‘/etc/vsftpd.userlist’ holds the list of users who are allowed to use FTP. And ‘www-data’ is the only user in the list.
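
For completeness, a sketch of how such a one-line userlist could be created (the paths are the ones from my configuration above):

```shell
# Create the userlist, containing only 'www-data':
echo 'www-data' | sudo tee /etc/vsftpd.userlist

# Restart vsftpd, so that it re-reads its configuration:
sudo systemctl restart vsftpd
```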

I should also mention that the user-name ‘www-data’ has its login shell set to ‘/bin/false’, just to further reduce the risk of any sort of hacking.
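
That login shell can be inspected and set with standard commands; again, just a sketch:

```shell
# Inspect the account's current login shell (last field of the output):
getent passwd www-data

# Set it to /bin/false, so that interactive logins fail:
sudo usermod -s /bin/false www-data
```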

But there was a simultaneous problem causing the same error message: When the ‘curlftpfs’ command is given with the ‘tlsv1’ option, at least on the Debian 9 / Stretch client, this just causes the authentication error, regardless of how sure the user has been that his username and password were provided correctly. What’s peculiar about that is the fact that, on my FTP-server, the file ‘/etc/vsftpd.conf’ does also have the following lines set:

ssl_enable=YES
ssl_tlsv1=YES

Further, setting ‘Explicit TLS handshake’ in FileZilla never caused any sort of error messages. What I take this to mean is that either:

  • ‘curlftpfs’ tries to set a version of TLS higher than 1.0, or
  • Trying to set this option on both the client and the server causes the obscure error…
  • (See footnote :1 at the bottom of this posting, for the correct explanation.)

I should also note that ‘the correct way’ to get this mount to work would be for the user to set up the file ‘~/.netrc’, that being a file in his home directory, on the client computer, that stores his username and password in plain-text form. It’s important to give the command:

$ chmod go-r ~/.netrc

I.e., access to this file should be limited to reading, and perhaps writing, by the user in question.
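
The effect of that ‘chmod’ can be sketched on a throw-away file (the ‘/tmp’ path here is only for demonstration, so as not to disturb a real ‘.netrc’):

```shell
# Create a stand-in for ~/.netrc, starting from the common default mode, 644:
touch /tmp/netrc-demo
chmod u=rw,go=r /tmp/netrc-demo

# The command from this posting: strip read access from group and other:
chmod go-r /tmp/netrc-demo

# Show the resulting mode; only the owner can read or write the file:
stat -c '%A' /tmp/netrc-demo    # prints: -rw-------
```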

If the user enters his password on the command-line, then it will also go into the command-history of his shell interpreter, and additionally be visible in the process list. This is one possible, correct ‘.netrc’ file, for use with ‘curlftpfs’:

machine dirkmittler.homeip.net
login www-data
password <my-password>

What I eventually found out was that ‘curlftpfs’ does in fact read and use this file correctly. But, once I got my FTPS-mount to succeed, I also deleted this file, and continued my experiment with the following command-line:

$ curlftpfs -o ssl,no_verify_peer,user=www-data ftp://dirkmittler.homeip.net/html ~/mnt

What the ‘user=www-data’ option does is cause the command to ignore the ‘.netrc’ file, and to prompt the user to type in his password (invisibly) by hand. I found that using this feature eventually also works. And, at the end of the (remote computer’s) host-name, in my case, the characters ‘/html’ are important, for the reason I gave above.
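
And, since this is a FUSE-mount, unmounting it again afterwards also requires no root:

```shell
fusermount -u ~/mnt
```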

The following command also works:

$ curlftpfs -o ssl,cacert=/home/dirk/vsftpd.pem,user=www-data ftp://dirkmittler.homeip.net/html ~/mnt

The last command assumes that the file ‘~/vsftpd.pem’ contains the public certificate that my self-signed FTP-server uses, and it causes the mount command to check the actual server’s public key for equality.
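
In case the reader wonders how to obtain such a ‘.pem’ file in the first place, one way, which I’m only offering as a sketch, would be to capture the certificate during an explicit-TLS handshake, using ‘openssl’ (host-name as in my examples above):

```shell
# Connect to port 21, upgrade the session to TLS via AUTH TLS, and
# save the certificate which the server presents, in PEM form:
openssl s_client -connect dirkmittler.homeip.net:21 -starttls ftp < /dev/null \
    | openssl x509 -outform PEM > ~/vsftpd.pem
```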

Conclusion:

Although this mode of operation can be forced to work, I wouldn’t prefer it as my main way of uploading files, because it’s just too rickety. (:2) I’ll just go back to using FileZilla.

(Updated 9/28/2020, 22h45… )


How to run my AppImages using FireJail.

First of all, I’d like to do some basic defining:

What is FireJail? It’s a type of sandboxing program, which can be run from the command-line, and which allows users to run programs that they do not trust, even though those programs would otherwise run with full user-level access to their Linux computers.

What is an AppImage? It’s a special kind of file that has execute permissions set, but that is also linked to its own, local versions of many libraries, thereby circumventing compatibility issues. But the way an AppImage is made consists of a file-system image, which the kernel actually needs to mount, within which these libraries and executables exist, and which gets mounted read-only by default. The security of the computer really depends on the kernel making sure that none of the files in this virtual FS can be mounted with their SETUID bits taking effect. (:1) (:2)

However, even under Linux, there is some risk in ‘arbitrary code’ running under the username of the person who launched it, since most of that user’s personal data is also stored under his or her username. Further, modern Linux computers sometimes accept requests coming from user-space, which result in actions carried out by the O/S with elevated privileges.

The problem in running certain AppImages using FireJail is the fact that the most recent AppImages use ‘SquashFS’ as their (compressed) image, while older AppImages used an ISO container, which did not offer (much) compression. The version of FireJail that I can install under Debian 9 / Stretch is still of a variety that can only run AppImages built with ISO images, not with SquashFS. The following bulletin-board posting correctly identified the problem, at least for me:

https://github.com/netblue30/firejail/issues/1143

Following what was suggested there, and wanting this to work, I next uninstalled the version of FireJail that ships with my distro, and cloned the following repository, after which I custom-compiled FireJail from there:

https://github.com/netblue30/firejail
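
For readers wanting to repeat this, the usual sequence for that repository is roughly the following (assuming the usual build tools, such as ‘gcc’ and ‘make’, are installed):

```shell
git clone https://github.com/netblue30/firejail.git
cd firejail
./configure
make
sudo make install-strip
```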

I got version 0.9.63 to work, apparently fully.

This latest version of FireJail does in fact run my AppImages just fine. Not only that, but now I know that other, more recent AppImages can also be run with FireJail (on my main computer).

If people want to obtain the accompanying GUI, then the way to do that is to custom-compile the accompanying ‘firetools’ project:

https://github.com/netblue30/firetools

This parallels how the Debian ‘firetools’ package enhances the Debian ‘firejail’ package…

(Screenshot of the Firetools GUI, taken 2020-08-28.)

But, if people are only willing to use the version of FireJail that comes with their package manager, then my AppImages will never run, for the reason I just explained.

N.B.:

When running AppImages using FireJail, one precedes the name of the AppImage with the ‘--appimage‘ command-line option, because they are a special case.
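
In other words, assuming a hypothetical file-name, the invocation looks like this:

```shell
# Run the AppImage inside the sandbox:
firejail --appimage ./Some_Program.AppImage
```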

(Updated 8/29/2020, 13h05: )
