How I use SSHFS to access remote filesystems


Last Updated on October 14, 2024 by David Both

I’ve taken a lot of digital pictures over the years, and scanned a bunch more prints into digital format. I’ve also collected a good number of images from other sources — no, not those; get your mind out of the gutter.

I have almost 19GB of digital pictures on my computer in ~/Pictures, and I’ve been meaning to place them on a server to make them available to the multiple computers in my home. Yesterday, my partner suggested once again that I actually work on this project. So today I did, and it didn’t turn out the way I first expected.

I started with one objective: to make all the pictures I have available on any of the many computers I have. That sounds simple, but in reality — not so much.

Looking at NFS

I started with NFS, but as I looked at its functionality I found some things I didn’t like. The most important of those, and the one that started me looking in other directions, was the large number of inconsistently assigned TCP ports it requires. All of my computers, even the ones not directly connected to the Internet, have individual firewalls, most of which only allow SSH on port 22 inbound. I really didn’t want to open my firewalls any more than that.

I discarded NFS as a solution for my project.

Finding SSHFS

I like all of the functions made available by SSH and thought it must have some feature I could use. My explorations turned up the SSH File System (SSHFS).

SSHFS is a user space protocol for sharing files over SSH and the secure file transfer protocol (SFTP) using port 22, so I don’t need to open any more holes in my firewall. It’s also simple to set up, once I figured it out, which took some time. Because it’s a user space sharing system, once configured it can be mounted by the user. Kernel.org has a good explanation of user space filesystems and their configuration.
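Before any of the automation described below, SSHFS can also be driven entirely by hand, which is a useful way to test it. Here’s a minimal sketch using the hostname and paths that appear later in this article; the fusermount3 unmount command assumes FUSE 3, which is what current Fedora ships.

$ sshfs dboth@bunkerhill:/var/Pictures ~/Pictures
$ ls ~/Pictures
$ fusermount3 -u ~/Pictures

If that works, everything needed for the persistent setup is already in place.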

Getting started with the server

I started by installing a new 500GB SSD storage device in bunkerhill, the host I’d picked out as the new server. The server needs to have SSHD configured and running, and this server does, as do all my systems.

I used Logical Volume Management (LVM) on the device, so I created a physical volume, a volume group, a logical volume, and an EXT4 filesystem. I created a new mount point at /var/Pictures, added a line in /etc/fstab for this mount, and mounted the filesystem. The fstab entry on the server is an ordinary one that you would use to mount any filesystem.

/dev/mapper/vg02-Pictures    /var/Pictures      ext4    defaults        1 3
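For reference, the LVM and filesystem preparation went something like the following sketch. The device name /dev/sdb is an assumption; check lsblk for the name on your system. The vg02 and Pictures names match the fstab entry above.

# pvcreate /dev/sdb
# vgcreate vg02 /dev/sdb
# lvcreate -l 100%FREE -n Pictures vg02
# mkfs.ext4 /dev/mapper/vg02-Pictures
# mkdir /var/Pictures
# mount /var/Pictures

The final mount command only works after the fstab entry above is in place.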

I ran a command to reload all the server daemon configurations. This reloads the fstab so that the system recognizes the new entry.

# systemctl daemon-reload

The last thing I did on the server was to create accounts for the users, myself and my partner. I used identical usernames to those on our own workstations to help prevent confusion.
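Account creation on the server is just standard user administration. A sketch, using my own account as the example; repeat for each user, with the same username that person has on their workstation:

# useradd -c "David Both" dboth
# passwd dboth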

That’s all that needs to be done on the server.

The clients

The clients take more work than the server. I started on the clients by installing the SSHFS user space filesystem.

# dnf install -y fuse-sshfs

Each user on a client needs an SSH RSA public/private keypair (PPKP), so I created one for my personal non-root user account.

$ ssh-keygen -t rsa

I then copied the public key for my user account from my workstation to the server.

$ ssh-copy-id bunkerhill
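At this point it’s worth verifying that key-based authentication actually works, because the fstab mount below depends on it. Logging in should no longer prompt for a password.

$ ssh bunkerhill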

As root on the client, I edited /etc/fuse.conf to uncomment ‘user_allow_other’. This allows users other than the one that mounted the remote filesystem to access it with all privileges. In this case, I’ve already configured the remote filesystem to be mounted by root.

# mount_max = 1000
user_allow_other

Each client needs an entry in the /etc/fstab file to allow mounting the remote filesystem. Note that the entry shown is wrapped; it should be entered all on one line with changes for your local network and host configuration.

dboth@bunkerhill:/var/Pictures /home/dboth/Pictures     fuse.sshfs allow_other,noexec,default_permissions,noauto,_netdev,reconnect,identityfile=/home/dboth/.ssh/id_rsa  0 0

This entry in the fstab file has the same structure as all other entries; it’s just a bit more complex than most. First is the remote filesystem to be mounted, then the local mountpoint. Notice that the specified mount point is my own ~/Pictures directory. Once the remote filesystem is mounted, any pictures already located in that directory are inaccessible. They’re still there, but they can only be accessed when the remote filesystem is not mounted. Next is the filesystem type of the remote filesystem, fuse.sshfs.

The next thing we encounter is a long string of mount options, some of which are rather esoteric. These options, as well as many others, are all tersely described in the mount.fuse manual page, but here’s a quick explanation of each.

  • allow_other: Overrides security restrictions so that all users can access the mounted filesystem.
  • noexec: As a security precaution, this prevents executable programs from being run.
  • default_permissions: Enables permissions checking and access based on the filemodes of the remote files.
  • noauto: Does not automount the filesystem at boot time.
  • _netdev: Declares this as a network filesystem so that it’s not mounted until the network is up and running.
  • reconnect: Allows the mount to be reconnected automatically in the event of a temporary network disconnect.
  • identityfile=: Points to the user’s RSA id file, thus providing identification and authentication.

The last two columns are numeric. The entries for our mount are both zero. The first number is used by the dump command, which is one possible option for making backups. The dump command is seldom used for backups any more, so this column is usually ignored. If by some chance someone is still using dump to make backups, a one (1) in this column means to back up this entire filesystem, and a zero means to skip this filesystem.

The last column is also numeric. It specifies the sequence in which fsck is run against filesystems during startup. Zero (0) means do not run fsck on the filesystem. One (1) means to run fsck on this filesystem first. The root partition is always checked first.

Setting both columns to zero prevents the system from trying to dump or check a remote filesystem that’s not yet mounted, which would otherwise slow down the boot process.

I then ran systemctl daemon-reload to reload the fstab file. At this point it’s possible to manually mount the remote filesystem on the local workstation.

$ mount Pictures/

After copying some content to the remote host, I could see the files and directories in the mount.

$ ll Pictures/
total 2048
drwxr-xr-x 1 dboth  dboth    4096 Oct  9 15:11  1987-08-15-XXXXX
drwxr-xr-x 1 dboth  dboth    4096 Oct  9 15:11  2000-09-Beach
drwxr-xr-x 1 dboth  dboth    4096 Oct  9 15:11  2000-09-XXXXXX
drwxr-xr-x 1 dboth  dboth    4096 Oct  9 15:11  2000-11-XXXXXXXXX

Be aware that the lsblk command does not show the remote filesystems mounted using SSHFS. You can see these mounts using the mount command.

$ mount
<SNIP>
dboth@bunkerhill:/var/Pictures on /home/dboth/Pictures type fuse.sshfs (rw,nosuid,nodev,noexec,relatime,user_id=1000,group_id=1000,default_permissions,allow_other)
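If you’d rather not scan through the full mount output, the findmnt command from util-linux can filter by filesystem type:

$ findmnt -t fuse.sshfs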

User mounting

Mounting the remote directory on the clients required some thought. It doesn’t need to be mounted at boot time, or at all while the user is not logged in. Nor did I want the user to have to perform a manual mount.

The ideal solution is to use the user’s own .bash_profile to mount the remote directory. The .bash_profile file is the first Bash configuration file that’s run when the user logs into the desktop, or starts a login shell. It’s the perfect place to add the mount command.

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
#
# Mount the Picture directory from bunkerhill
mount ~/Pictures

This mounts the remote filesystem when the user logs into the desktop, but we don’t want to leave it mounted after the user logs out. We can address that by adding a simple command to the .bash_logout file. As its name implies, this file is run when the user logs out of the desktop, and it unmounts the remote filesystem.

# ~/.bash_logout
# when leaving console clear the screen to increase privacy

if [ "$SHLVL" = 1 ]; then
            [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
fi

# Unmount the link to bunkerhill
umount ~/Pictures

Your Bash configuration files may be different from mine.

Trying it

I tried several logins and logouts on the two hosts I’ve configured for this, as well as on a VM. It works brilliantly every time. Subjectively, on my Gigabit network it seems almost as fast as a local directory.

My partner and I are both happy with this solution.

Security

Because this method of file sharing is configured entirely from the client side, any host that allows inbound SSH connections and has an accessible user ID can be accessed without restriction by a knowledgeable user who can escalate to root privilege and add a line to fstab. I consider this a low risk, so I’m willing to use it for file sharing. It certainly seems less risky than opening more ports in the firewall and enabling additional services that could be cracked.