Keywords: systemd sshfs mount

Recently I had a bit of a headache trying to set up a systemd service that mounts specific resources at boot time. Since it was my first self-written systemd service (over the years I had only been familiar with SysV init), there were many attempts, many failures, and tons of reboots. But one major problem turned out to be the root of all the trouble:

I did not know that systemd tears down all mounts set up by a service as soon as the service's script exits: once a oneshot service's main process has finished, systemd considers the unit dead and kills everything still running in its control group, including the background sshfs processes that keep the mounts alive. I still can't figure out the philosophy behind a behavior like this, but at least it can be switched off. To do so, though, you need at least an idea of what's going on behind the scenes, right?

Anyway, the solution is this directive in the [Service] section:

RemainAfterExit=yes

If you leave this out, systemd will rigorously kill all your fine mounts and leave you with the impression you’d made a mistake.

Here is an example of a service which works:

[Unit]
Description=mountsth service, mounts available network storage
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
User=user
ExecStart=/usr/bin/mountsth.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

If you ever plan to use

systemctl disable ...

or

systemctl enable ...

with this service, then you need the [Install] section as shown above! For the sake of completeness, here is a barebones script that works fine with this service:

#!/usr/bin/bash
/usr/bin/sshfs -o allow_other,default_permissions user@11.22.33.44:/data /mnt/data

Any external command, like sshfs here, must be called with its full path: the service does not run in your login shell's environment, so you cannot rely on your usual PATH. Otherwise it won't work.

If the service fails due to missing services, network interfaces or the like, you should include appropriate checks within your shell script.
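Such checks could look like the following sketch. The retry helper, the wait times, and the guard against double mounting are my own illustration of the idea, not part of my original setup; adapt host, paths, and counts to your environment:

```shell
#!/usr/bin/bash

# Retry a command up to N times, pausing one second between attempts.
# Returns 0 on the first success, 1 if all attempts fail.
retry() {
    tries=$1; shift
    i=1
    while [ "$i" -le "$tries" ]; do
        "$@" && return 0
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Wait until the server answers, then mount -- but only if the mount
# point is not already in use:
# retry 10 ping -c 1 -W 2 11.22.33.44 || exit 1
# mountpoint -q /mnt/data || \
#     /usr/bin/sshfs -o allow_other,default_permissions user@11.22.33.44:/data /mnt/data
```

With Type=oneshot, a non-zero exit status of this script also marks the service as failed, which is exactly what you want when the network is not up yet.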

While figuring out how to get this service to do what I expected, i.e., mount the resources so that I could use them after login, I of course also thought about /etc/fstab. Everybody does, right? I tried it out and ran into a very similar problem: I was missing one silly option I had never heard of before, simply because sshfs had always done what I wanted when I used it in user space. So I put this line in my /etc/fstab:

user@11.22.33.44:/data /mnt/data fuse.sshfs allow_other,default_permissions 0 0

And you know what? It did not work! The missing option was

delay_connect

So, to make the fstab line work, it must look like this:

user@11.22.33.44:/data /mnt/data fuse.sshfs delay_connect,allow_other,default_permissions 0 0

If you wonder how this stuff works without entering passwords somewhere in the script or in /etc/fstab: I work with SSH keys. So there’s no need to enter sensitive passwords in text files.
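If you need to set up such a key, it could look like the following sketch. The key file name, the host address, and the paths are placeholders of my choosing; since an unattended mount needs a key without a passphrase, keep the key file readable only by its owner:

```shell
#!/usr/bin/bash

# Create a dedicated key pair without a passphrase (-N "") for the
# unattended mount; the file name id_mountsth is just an example.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_mountsth -C "mountsth"

# Install the public key on the server (one-time, asks for the password):
# ssh-copy-id -i ~/.ssh/id_mountsth.pub user@11.22.33.44

# sshfs passes unknown -o options on to ssh, so the key can be selected
# like this (use an absolute path; ~ may not expand in a service context):
# /usr/bin/sshfs -o allow_other,default_permissions,IdentityFile=/home/user/.ssh/id_mountsth \
#     user@11.22.33.44:/data /mnt/data
```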

PS (2021-12-28):

Warning!

I have experienced strange, not to say downright weird, problems with the fstab lines mentioned above! At the moment I have no idea what goes wrong here, but I have seen destroyed mount points and failed login attempts that led to the servers blocking access to the remote resources. As a consequence, I have switched all machines back to the systemd service solution.