Resolving ‘curlftpfs’ – Error 530 …I/O Error.

One of the ideas which I recently saw another enthusiastic Linux user comment on was that the program “Bluefish”, which can be installed from Linux package-managers, is almost an ideal program for Web-design. One of the reasons given was that, additionally, under Linux, it’s possible to mount a remote FTP-server’s directory to a local mount-point, and then to edit the HTML, CSS and PHP Files directly within this mount-point. While I like enthusiasm for a good thing, I would prefer not to lead my readers into a situation where they could run into error messages and then become frustrated, because they might not be able to resolve those. Instead, I’d prefer to have run into some error messages myself, to have found a solution, and then to explain to the community what worked for me.

For starters, even though I do some minimal Web-design, my way of doing it is to maintain a set of folders within my home directory, which stays synced between my computers, and then to use “FileZilla” to upload changes I’ve made to those folders’ files to an FTPS-server, with point-and-click methods. FileZilla can also be installed from most Linux package managers, has a GUI, and supports secure connections to the FTP-server. If the reader is paying attention, he or she will notice that, by implication, I have an FTP-server running on the PC that acts as my Web-server, that I force the exchange of passwords to be encrypted, and that my FTP-server and -client reside side-by-side, on computers in the same room, on my private LAN.

The reader may infer that I’m not the most trusting person.

In this context, Linux users differentiate between two methods of file-sharing:

  • File-Sharing over SSH, which is sometimes referred to as ‘sftp’. If this is set up, then the local mounting of the remote FS becomes dead-simple (a brief sketch follows this list), Or
  • FTP File-Sharing over TLS, which is referred to, by contrast, as ‘ftps’. My assumption for this posting is that ‘ftps’ is set up, but that ‘sftp’, or ‘ssh’, is not, for the account in question…
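
As an aside, if ‘sftp’ over SSH were available, the mount really would be a one-liner. The following is only a minimal sketch, assuming that the ‘sshfs’ package is installed, and using a placeholder user-name and host-name that are not mine:

$ mkdir -p ~/mnt
$ sshfs someuser@example.com:/var/www/html ~/mnt
$ fusermount -u ~/mnt    # To detach the share again.

But again, this posting assumes that only ‘ftps’ is available.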

But, just to verify whether the option can be forced to work (that the actual FTP-server’s directories can be mounted locally and then, theoretically, also edited in-place), I next discovered that virtually the only solution available is a package called ‘curlftpfs’, which performs a FUSE-mount. Lately, I’ve begun to like FUSE-mounts, especially when given without any of the mount options that would require root. But what I found next was that ‘curlftpfs’ is particularly temperamental. Whatever the client-side mount doesn’t like about the server-side configuration results in a short error message, and perhaps, a frustrated Web-author.
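
On a Debian-based client such as mine, obtaining the package and preparing an ordinary, empty directory as the mount-point would look roughly like this (the location ‘~/mnt’ is just my choice, not anything mandatory):

$ sudo apt-get install curlftpfs
$ mkdir -p ~/mnt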

And so, I’m happy to share what it was that made the error message go away in my situation. The first thing that the reader would need to visualize is how the system directories are set up, using ‘vsftpd’, in this case for the user ‘www-data’, which owns all the Web-content, and exposing Port 21 to the LAN, but not to the WAN. This is the root of any static HTML I might want to host:

Directory               Owner
------------            --------
/var                    root
  +-www                 root
    +-html              www-data
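
Just as an illustration of that ownership, and not as a record of the exact commands I gave years ago, handing the ‘html’ sub-directory to ‘www-data’ could be done like so (the group is my assumption; only the owner is shown above):

$ sudo chown www-data:www-data /var/www/html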

While this detail may strike the reader as irrelevant, in fact it’s not, as I found out. That would be because the Home Directory of the user ‘www-data’ was set up, by me, several years ago, as ‘/var/www’, not ‘/var/www/html’. What this means is that, unless some directive is given on the client-side, to navigate to the sub-directory ‘html’ directly when logging in, from wherever this FTP user’s home directory is, the simple setup which I just diagrammed above would cause the Error 530 Message.
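
In principle, ‘vsftpd’ also has a server-side directive that changes into a given directory after a local user logs in, which could have been used to point the session at ‘html’ directly. I have not reconfigured my own server this way; the following line is only a sketch of that alternative:

local_root=/var/www/html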

Further, in my configuration file ‘/etc/vsftpd.conf’, I have the following lines set:

userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO

What this means is that the additional file ‘/etc/vsftpd.userlist’ holds the list of users who are allowed to use FTP. And ‘www-data’ is the only user in the list.
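
For reference, since ‘www-data’ is the only user allowed, the entire file amounts to a single line:

$ cat /etc/vsftpd.userlist
www-data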

I should also mention that the user-name ‘www-data’ has its login shell set to ‘/bin/false’, just to reduce the risk of any sort of hacking further.
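
Setting and then verifying such a shell is straightforward; using ‘getent’ below is just one way to confirm it, and the exact numeric IDs and fields will vary from system to system:

$ sudo usermod -s /bin/false www-data
$ getent passwd www-data
www-data:x:33:33:www-data:/var/www:/bin/false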

But there was a simultaneous problem causing the same error message: When the ‘curlftpfs’ command is given with the ‘tlsv1’ option, at least on the Debian 9 / Stretch client, this just causes the authentication error, regardless of how sure the user has been that his username and password were provided correctly. What’s peculiar about that is the fact that, on my FTP-server, the file ‘/etc/vsftpd.conf’ does also have the following lines set:

ssl_enable=YES
ssl_tlsv1=YES

Further, setting ‘Explicit TLS handshake’ in FileZilla never caused any sort of error messages (a quick way to test that handshake from the command-line is sketched after the list below). What I take this to mean is that either:

  • ‘curlftpfs’ tries to set a version of TLS which would be higher than 1.0, Or
  • Trying to set this option both on the client and server, causes the obscure error…
  • (See footnote :1 at the bottom of this posting, for the correct explanation.)
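
One way to see the explicit-TLS handshake and the server’s actual reply codes, without involving ‘curlftpfs’ at all, is to use plain ‘curl’ from the client. This is only a diagnostic sketch; the ‘--insecure’ flag is there because my certificate is self-signed, and ‘curl’ will prompt for the password:

$ curl -v --ssl-reqd --insecure --user www-data ftp://dirkmittler.homeip.net/html/

Any ‘530’ response is then visible in the verbose output, instead of being reduced to a terse I/O error.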

I should also note that ‘the correct way’ to get this mount to work would be for the user to set up the file ‘~/.netrc’, that being a file in his home directory, on the client computer, that stores his username and password in plain-text form. It’s important to give the command:

$ chmod go-r ~/.netrc

I.e., access to this file should be limited to reading, and perhaps writing, by the user in question.

If the user enters his password on the command-line, then it will also go into the command-history of his shell interpreter, and additionally, be visible in the process list. This is one possible, correct ‘.netrc’ File, for use with ‘curlftpfs’:

machine dirkmittler.homeip.net
login www-data
password <my-password>
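
With such a ‘.netrc’ File in place, the mount command itself can omit the credentials entirely. The following is only my restatement of the command that appears further down, minus the ‘user=’ option, on the understanding that ‘curlftpfs’ will then look the password up in ‘~/.netrc’:

$ curlftpfs -o ssl,no_verify_peer ftp://dirkmittler.homeip.net/html ~/mnt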

What I eventually found out was that ‘curlftpfs’ does in fact read and use this file correctly. But, once I got my FTPS-mount to succeed, I also deleted this file, and continued my experiment with the following command-line:

$ curlftpfs -o ssl,no_verify_peer,user=www-data ftp://dirkmittler.homeip.net/html ~/mnt

What the ‘user=www-data’ option does is to cause the command to ignore the ‘.netrc’ File, and to prompt the user to type in his password (invisibly) by hand. I found that using this feature eventually also works. And, at the end of the (remote computer’s) host-name, in my case, the characters ‘/html’ are important, for the reason I gave above.
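
Once such a mount has succeeded, a quick way to convince oneself of that, and then to detach it again cleanly without root, is:

$ ls ~/mnt
$ mount | grep curlftpfs
$ fusermount -u ~/mnt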

The following command also works:

$ curlftpfs -o ssl,cacert=/home/dirk/vsftpd.pem,user=www-data ftp://dirkmittler.homeip.net/html ~/mnt

The last command assumes that the file ‘~/vsftpd.pem’ contains the self-signed public certificate that my FTP-server uses, and it causes the mount command to check the actual server’s public key for equality.
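
As for where a file like ‘~/vsftpd.pem’ could come from, if one didn’t simply copy it from the server: one possibility, assuming the ‘openssl’ command-line tool is present, is to fetch the certificate over the explicit-TLS handshake itself, and to inspect it before trusting it:

$ openssl s_client -connect dirkmittler.homeip.net:21 -starttls ftp </dev/null 2>/dev/null | openssl x509 > ~/vsftpd.pem
$ openssl x509 -in ~/vsftpd.pem -noout -subject -dates

Obviously, fetching the certificate this way and then telling ‘curlftpfs’ to check against it only proves that the same machine answered both times; copying the ‘.pem’ File directly from the server is more trustworthy.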

Conclusion:

Although this mode of operation can be forced to work, I wouldn’t prefer it as my main way of uploading files, because it’s just too rickety. (:2) I’ll just go back to using FileZilla.

(Updated 9/28/2020, 22h45… )


File System Corruption !

I use a home computer as my Web-host, which I named ‘Phoenix’. It was actually built in 2008, and it is one of the oldest computers still working for me. So far it has continued to work reliably.

Until several years ago, this computer was actually named ‘Thunderbox’, and was running Kanotix / Thorhammer. But when I wiped it completely and installed Kanotix / Spitfire on it, I not only renamed it, but rebuilt its software from zero.

What just happened to me today on ‘Phoenix’ is that I fired up a routine K3b session, in order to back up some music from Audio CDs which I had actually purchased in the 1980s, and the GUI application showed me a message saying that it could not locate the utility named ‘dvd+rw-tools’, and that I should try installing the package (even though it’s a dependency and was already installed). After I reinstalled that package, K3b told me that it could not locate ‘cdrdao’ (even though it’s also a dependency and was installed). I needed to reinstall that as well, after which I explicitly needed to tell K3b to rescan the hard drive, to find all its back-end utilities again.
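
For anyone reading along who wants the concrete commands, reinstalling those two back-ends on a Debian-based system amounts to something like the following (the package names are the standard Debian ones):

$ sudo apt-get install --reinstall dvd+rw-tools cdrdao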

Because on this computer I had never customized the back-end utilities which K3b uses – unlike what I had done on ‘Klystron’ – and because I had never changed the variables in question, this actually points to file-system corruption. I had last used K3b on ‘Phoenix’ several months ago, with no error messages or warnings. Further, those utilities were among the first packages I ever installed when I resurrected this computer as ‘Phoenix’, and the idea that the oldest data on the hard drive would be the first data forgotten, even though never deleted, is also consistent with file-system corruption.

Usually, I’d expect FS corruption to stem from power outages. But we haven’t had a power outage in a long time, and the last time we did, the Extension 4 (ext4) File-System on the machine just seemed to repair itself correctly. Because ext4 is such a tightly-meshed file-system, the report that it has been repaired can be believed. If it were not repaired on boot, the computer would refuse to boot.

And so, even though this is not typical, I’d say that in this case, the FS corruption is actually secondary to the fact that the actual hard drive is aging, and has caused some I/O errors. We only get to see I/O error messages if we’re running something from the command-line; I believe that when we’re running a GUI application, those just stay buried in a system log somewhere.
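
For what it’s worth, two ways of digging those buried messages out, and of asking the drive itself how healthy it thinks it is, are sketched below. The second command assumes that the ‘smartmontools’ package is installed, and that the drive in question really is ‘/dev/sda’:

$ dmesg | grep -i 'i/o error'
$ sudo smartctl -H -a /dev/sda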

But what this seems to spell is that eventually – sooner rather than later – ‘Phoenix’ will die completely, and this time, there will be no resurrecting her, because the problem will be in the hardware and not in the software.

(Update 1/16/2018: But, there’s another possible explanation… )
