Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.
poc
On 01/03/2021 01:12, Patrick O'Callaghan wrote:
Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.
Well, using systemd units that should be possible. I use systemd to automount nfs. For one in particular I have....
[egreshko@meimei system]$ cat aux.automount
[Unit]
Description=Automount Aux

[Automount]
Where=/aux
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target
[egreshko@meimei system]$ cat aux.mount
[Unit]
Description=nfs mount aux

[Mount]
What=nas:/volume1/aux
Where=/aux
Options=rw,soft,fg,x-systemd.mount-timeout=30
Type=nfs4

[Install]
WantedBy=multi-user.target
I would think using "ExecStartPre=" and "ExecStartPost=" would get you what you desire. I'd have to search/experiment to find where those options would go... :-)
On 01/03/2021 03:27, Ed Greshko wrote:
I would think using "ExecStartPre=" and "ExecStartPost=" would get you what you desire.
Oh, there is also "ExecStop=" and "ExecStart=". So, more "research" would be needed. But not at 03:40. :-)
On Feb 28, 2021, at 12:13, Patrick O'Callaghan pocallaghan@gmail.com wrote:
Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.

Your .mount unit would need a Before= and After= section referring to a systemd .service that you will create, one that is Type=oneshot, where ExecStart= powers on the device and ExecStop= powers it off. I think you'll need RemainAfterExit=yes in that service too, so systemd thinks it is "up" while the device is powered up.

I don't know how the power-up sequence works, but if the power-up command exits before the device is actually powered on, you might need to add a sleep or wait so the service only comes "up" once the device is ready; that way the mount won't be attempted until then.
— Jonathan Billings
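A minimal sketch of the service Jonathan describes, with placeholder unit name and commands (the power-drive script is hypothetical):

# drive-power.service (hypothetical sketch)
[Unit]
Description=Power the external drive up and down

[Service]
Type=oneshot
# RemainAfterExit keeps the unit "active" after ExecStart finishes,
# so ExecStop (the power-down) only runs when the unit is stopped
RemainAfterExit=yes
ExecStart=/usr/local/bin/power-drive on
ExecStop=/usr/local/bin/power-drive off

Per Jonathan's description, the .mount unit would then reference this service (e.g. with Requires= and After= lines) so the power-up happens before mounting.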
On Sun, 2021-02-28 at 16:18 -0500, Jonathan Billings wrote:
On Feb 28, 2021, at 12:13, Patrick O'Callaghan pocallaghan@gmail.com wrote:
Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.

Your .mount unit would need a Before= and After= section referring to a systemd .service that you will create, one that is Type=oneshot, where ExecStart= powers on the device and ExecStop= powers it off. I think you'll need RemainAfterExit=yes in that service too, so systemd thinks it is "up" while the device is powered up.

OK.

I don't know how the power-up sequence works, but if the power-up command exits before the device is actually powered on, you might need to add a sleep or wait so the service only comes "up" once the device is ready; that way the mount won't be attempted until then.
The powerup script already handles that.
poc
On Sun, Feb 28, 2021 at 10:13 AM Patrick O'Callaghan pocallaghan@gmail.com wrote:
Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.
If you don't need much sophistication, you can add to fstab with options: noauto,x-systemd.automount
Since it's not a startup time dependency, startup won't hang waiting for it if it's not present. If present, it won't be automatically mounted until the mountpoint is accessed.
If you want it to automatically unmount when not being used, also include: x-systemd.idle-timeout=10m

https://www.freedesktop.org/software/systemd/man/systemd.mount.html
https://www.freedesktop.org/software/systemd/man/systemd.automount.html
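Putting Chris's options together, a complete fstab line might look like this (the device, mountpoint, and filesystem are illustrative):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/external ext4 rw,noauto,x-systemd.automount,x-systemd.idle-timeout=10m 0 0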
On Sun, 2021-02-28 at 18:03 -0700, Chris Murphy wrote:
On Sun, Feb 28, 2021 at 10:13 AM Patrick O'Callaghan pocallaghan@gmail.com wrote:
Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.
If you don't need much sophistication, you can add to fstab with options: noauto,x-systemd.automount
Since it's not a startup time dependency, startup won't hang waiting for it if it's not present. If present, it won't be automatically mounted until the mountpoint is accessed.
If you want it to automatically unmount when not being used, also include: x-systemd.idle-timeout=10m

Yes, I get that; however, unmounting is only part of the story. I also want to invoke my power-down script (and conversely the power-up script when mounting again).
poc
On Mon, 2021-03-01 at 03:27 +0800, Ed Greshko wrote:
On 01/03/2021 01:12, Patrick O'Callaghan wrote:
Is there a way to invoke scripts before auto-mounting and after auto-unmounting? I want to be able to power an external drive up and down as needed.
Well, using systemd units that should be possible. I use systemd to automount nfs. For one in particular I have....
[egreshko@meimei system]$ cat aux.automount
[Unit]
Description=Automount Aux

[Automount]
Where=/aux
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target
[Sorry for the long delay, but other stuff intervened.]
I've come back to this now and started experimenting incrementally. As a first step, I just want to automount/unmount, ignoring the power-on/off part for now, i.e. the device is already powered on. This is my /etc/fstab entry:
UUID=6cb66da2-147a-4f3c-a513-36f6164ab581 /raid ext4 rw,noauto,user,x-systemd.automount 0 0
and /etc/systemd/system/raid.automount (copied from your example):
[Unit]
Description=Automount /raid

[Automount]
Where=/raid
TimeoutIdleSec=10

[Install]
WantedBy=multi-user.target
I've rebooted, and the device starts up mounted:
# findmnt /raid
TARGET SOURCE FSTYPE OPTIONS
/raid systemd-1 autofs rw,relatime,fd=52,pgrp=1,timeout=10,minproto=5,maxproto=5,direct,pipe_ino=24910
though I'm not sure why as nothing is accessing it. It also remains mounted, despite the timeout.
Any thoughts?
poc
On 10/03/2021 20:57, Patrick O'Callaghan wrote:
[Sorry for the long delay, but other stuff intervened.]
I've come back to this now and started experimenting incrementally. As a first step, I just want to automount/unmount, ignoring the power-on/off part for now, i.e. the device is already powered on. This is my /etc/fstab entry:

UUID=6cb66da2-147a-4f3c-a513-36f6164ab581 /raid ext4 rw,noauto,user,x-systemd.automount 0 0

and /etc/systemd/system/raid.automount (copied from your example):

[Unit]
Description=Automount /raid

[Automount]
Where=/raid
TimeoutIdleSec=10

[Install]
WantedBy=multi-user.target

I've rebooted, and the device starts up mounted:

# findmnt /raid
TARGET SOURCE FSTYPE OPTIONS
/raid systemd-1 autofs rw,relatime,fd=52,pgrp=1,timeout=10,minproto=5,maxproto=5,direct,pipe_ino=24910

though I'm not sure why as nothing is accessing it. It also remains mounted, despite the timeout.
Any thoughts?
My first thought is that you're doing things a bit differently from what I did.

You have an entry for your disk in the fstab. My understanding is that when an entry is in fstab, mount unit files will be auto-generated. I don't have fstab entries for my automounts. I can't say that I know it is wise to mix the methods.
Using my /aux as the example. When unmounted I see...
[root@meimei ~]# findmnt /aux
TARGET SOURCE FSTYPE OPTIONS
/aux systemd-1 autofs rw,relatime,fd=53,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=26036

But when mounted:

[root@meimei ~]# findmnt /aux
TARGET SOURCE FSTYPE OPTIONS
/aux systemd-1 autofs rw,relatime,fd=53,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=26036
/aux nas:/volume1/aux nfs4 rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,proto=tcp6,timeo=600
I think I would first try removing (commenting out) the fstab entry and creating a raid.mount file in /etc/systemd/system/ with these contents:

[Unit]
Description=mount raid

[Mount]
What=/dev/disk/by-uuid/6cb66da2-147a-4f3c-a513-36f6164ab581
Where=/raid
Options=rw,user
Type=ext4

[Install]
WantedBy=multi-user.target

At this time I've only used automounts for nfs mounts. I'd try it with a local disk if it weren't for the late hour.
Also, due to the late hour, I seem to recall seeing something about working with automounts and external devices and conflicts arising due to the interference of udev. But I may be mixing things. So, I'd first see if making the above changes has any effect.
On 10/03/2021 22:35, Ed Greshko wrote:
Also, due to the late hour, I seem to recall seeing something about working with automounts and external devices and conflicts arising due to the interference of udev. But I may be mixing things. So, I'd first see if making the above changes has any effect.
Another thing along that line. My "seeing something" was after my initial answers in the thread and I came across it looking into something else. Anyway,
I know that one can use systemctl for this, but I've been a bit lazy since KDE's system settings displays systemd units.
In my case, when /aux isn't mounted there is only an aux.automount unit shown, and it has a status of waiting.

When /aux is mounted there are both an aux.automount and an aux.mount, and the status of aux.automount becomes running.
On Wed, 2021-03-10 at 22:35 +0800, Ed Greshko wrote:
On 10/03/2021 20:57, Patrick O'Callaghan wrote:
[Sorry for the long delay, but other stuff intervened.]
I've come back to this now and started experimenting incrementally. As a first step, I just want to automount/unmount, ignoring the power-on/off part for now, i.e. the device is already powered on. This is my /etc/fstab entry:
UUID=6cb66da2-147a-4f3c-a513-36f6164ab581 /raid ext4 rw,noauto,user,x-systemd.automount 0 0
and /etc/systemd/system/raid.automount (copied from your example):
[Unit]
Description=Automount /raid

[Automount]
Where=/raid
TimeoutIdleSec=10

[Install]
WantedBy=multi-user.target
I've rebooted, and the device starts up mounted:
# findmnt /raid
TARGET SOURCE FSTYPE OPTIONS
/raid systemd-1 autofs rw,relatime,fd=52,pgrp=1,timeout=10,minproto=5,maxproto=5,direct,pipe_ino=24910
though I'm not sure why as nothing is accessing it. It also remains mounted, despite the timeout.
Any thoughts?
My first thought is that you're doing things a bit differently from what I did.

You have an entry for your disk in the fstab. My understanding is that when an entry is in fstab, mount unit files will be auto-generated. I don't have fstab entries for my automounts. I can't say that I know it is wise to mix the methods.
Using my /aux as the example. When unmounted I see...
[root@meimei ~]# findmnt /aux
TARGET SOURCE FSTYPE OPTIONS
/aux systemd-1 autofs rw,relatime,fd=53,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=26036

But when mounted:

[root@meimei ~]# findmnt /aux
TARGET SOURCE FSTYPE OPTIONS
/aux systemd-1 autofs rw,relatime,fd=53,pgrp=1,timeout=60,minproto=5,maxproto=5,direct,pipe_ino=26036
/aux nas:/volume1/aux nfs4 rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,proto=tcp6,timeo=600
I think I would first try removing (commenting out) the fstab entry and creating a raid.mount file in /etc/systemd/system/ with these contents:

[Unit]
Description=mount raid

[Mount]
What=/dev/disk/by-uuid/6cb66da2-147a-4f3c-a513-36f6164ab581
Where=/raid
Options=rw,user
Type=ext4

[Install]
WantedBy=multi-user.target

At this time I've only used automounts for nfs mounts. I'd try it with a local disk if it weren't for the late hour.
Also, due to the late hour, I seem to recall seeing something about working with automounts and external devices and conflicts arising due to the interference of udev. But I may be mixing things. So, I'd first see if making the above changes has any effect.
A couple of things:
1) I used fstab because the systemd.mount man page suggests it's the preferred method, but that may be my misunderstanding. As I said before, I find the systemd docs a real challenge to read. No doubt they are correct, but they remind me of something we used to say about the UNIX man pages back in the 70s: in order to understand one of them, you just have to read all the other ones first.
2) I've only now noticed that findmnt gives an output line even when the device is not mounted (the filesystem type is listed as 'autofs'). My fault for not looking more carefully.
Finally, without me doing anything at all except leave the system alone for a few hours, it has suddenly started working. Go figure.
I now have to work out where to put the ExecStart/Stop lines.
poc
On Wed, 2021-03-10 at 23:04 +0800, Ed Greshko wrote:
On 10/03/2021 22:35, Ed Greshko wrote:
Also, due to the late hour, I seem to recall seeing something about working with automounts and external devices and conflicts arising due to the interference of udev. But I may be mixing things. So, I'd first see if making the above changes has any effect.
Another thing along that line. My "seeing something" was after my initial answers in the thread and I came across it looking into something else. Anyway,
I know that one can use systemctl for this, but I've been a bit lazy since KDE's system settings displays systemd units.
In my case, when /aux isn't mounted there is only an aux.automount unit shown, and it has a status of waiting.

When /aux is mounted there are both an aux.automount and an aux.mount, and the status of aux.automount becomes running.
Yes, that seems to be correct, thanks.
poc
On 11/03/2021 01:40, Patrick O'Callaghan wrote:
I now have to work out where to put the ExecStart/Stop lines.
The answer may be "none of the above".
Diving into systemd documentation it seems that the sections included in the mount and automount unit files don't define those lines. :-(
On 11/03/2021 09:17, Ed Greshko wrote:
On 11/03/2021 01:40, Patrick O'Callaghan wrote:
I now have to work out where to put the ExecStart/Stop lines.
The answer may be "none of the above".
Diving into systemd documentation it seems that the sections included in the mount and automount unit files don't define those lines. :-(
Oh, one route you can consider is to post the query to
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
I've had a few questions answered in the past.
On Thu, 2021-03-11 at 15:38 +0800, Ed Greshko wrote:
On 11/03/2021 09:17, Ed Greshko wrote:
On 11/03/2021 01:40, Patrick O'Callaghan wrote:
I now have to work out where to put the ExecStart/Stop lines.
The answer may be "none of the above".
Diving into systemd documentation it seems that the sections included in the mount and automount unit files don't define those lines. :-(
Oh, one route you can consider is to post the query to
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
I've had a few questions answered in the past.
Thanks, I may try that.
poc
On Mar 10, 2021, at 20:18, Ed Greshko ed.greshko@greshko.com wrote:
The answer may be "none of the above".
Diving into systemd documentation it seems that the sections included in the mount and automount unit files don't define those lines. :-(
There isn’t anything about exec lines in a .mount unit, which is why I said to have a .service unit that is a requirement and is triggered to start / stop when the mount is mounted / unmounted.
Here’s my post: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org/...
-- Jonathan Billings
On Thu, 2021-03-11 at 07:47 -0500, Jonathan Billings wrote:
On Mar 10, 2021, at 20:18, Ed Greshko ed.greshko@greshko.com wrote:
The answer may be "none of the above".
Diving into systemd documentation it seems that the sections included in the mount and automount unit files don't define those lines. :-(
There isn’t anything about exec lines in a .mount unit, which is why I said to have a .service unit that is a requirement and is triggered to start / stop when the mount is mounted / unmounted.
Here’s my post: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org/...
Yes, I had noticed that. What isn't clear to me is how this would work on system reboot. I need to be able to power down the drive not just after it's unmounted, but when the system is rebooted and the drive hasn't been mounted in the first place, i.e. a non-event.
poc
On Thu, 2021-03-11 at 13:13 +0000, Patrick O'Callaghan wrote:
On Thu, 2021-03-11 at 07:47 -0500, Jonathan Billings wrote:
On Mar 10, 2021, at 20:18, Ed Greshko ed.greshko@greshko.com wrote:
The answer may be "none of the above".
Diving into systemd documentation it seems that the sections included in the mount and automount unit files don't define those lines. :-(
There isn’t anything about exec lines in a .mount unit, which is why I said to have a .service unit that is a requirement and is triggered to start / stop when the mount is mounted / unmounted.
Here’s my post: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org/...
Yes, I had noticed that. What isn't clear to me is how this would work on system reboot. I need to be able to power down the drive not just after it's unmounted, but when the system is rebooted and the drive hasn't been mounted in the first place, i.e. a non-event.
Someone on the SystemD list suggested using an @reboot line in crontab for this, as a special case.
poc
On Thu, Mar 11, 2021 at 01:13:08PM +0000, Patrick O'Callaghan wrote:
Yes, I had noticed that. What isn't clear to me is how this would work on system reboot. I need to be able to power down the drive not just after it's unmounted, but when the system is rebooted and the drive hasn't been mounted in the first place, i.e. a non-event.
Nothing stopping you from having a separate systemd unit that just runs on boot and shutdown, too.
Cron's @reboot is just going to be doing the equivalent of running a systemd unit after the crond service starts.
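For reference, the @reboot special case is a single crontab line; the script path here is illustrative (a 'dock' script appears later in the thread):

@reboot /usr/local/bin/dock down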
On 11/03/2021 21:14, Patrick O'Callaghan wrote:
Someone on the SystemD list suggested using an @reboot line in crontab for this, as a special case.
While I didn't think of that option, I somehow got the impression that you needed/wanted to run a script of some sort each time the share was mounted and unmounted.
On Fri, 2021-03-12 at 17:01 +0800, Ed Greshko wrote:
On 11/03/2021 21:14, Patrick O'Callaghan wrote:
Someone on the SystemD list suggested using an @reboot line in crontab for this, as a special case.
While I didn't think of that option, I somehow got the impression that you needed/wanted to run a script of some sort each time the share was mounted and unmounted.
I do. The @reboot suggestion is only a partial solution. Given that the drive is automatically powered up on reboot (there seems to be no way to prevent this as it's triggered by the system scanning the USB bus) I need to be able to power it down, but currently there's no systemd mount event to cause this to happen. No doubt there's a more elegant way around this, but baby steps ...
poc
On 12/03/2021 19:18, Patrick O'Callaghan wrote:
On Fri, 2021-03-12 at 17:01 +0800, Ed Greshko wrote:
On 11/03/2021 21:14, Patrick O'Callaghan wrote:
Someone on the SystemD list suggested using an @reboot line in crontab for this, as a special case.
While I didn't think of that option, I somehow got the impression that you needed/wanted to run a script of some sort each time the share was mounted and unmounted.
I do. The @reboot suggestion is only a partial solution. Given that the drive is automatically powered up on reboot (there seems to be no way to prevent this as it's triggered by the system scanning the USB bus) I need to be able to power it down, but currently there's no systemd mount event to cause this to happen. No doubt there's a more elegant way around this, but baby steps ...
Well, if it can't be done with systemd then a possible inelegant solution is to have a "watcher" background process (or a cron job that runs periodically) that checks to see if the share has gone from a mounted to an unmounted state and then runs the appropriate script?
Does something need running when the share goes from unmounted to mounted?
I suppose this kind of thing is one reason I'm happy I opted for a NAS. It runs a RAID configuration and can be configured to power down disks when idle. :-) :-)
On Fri, 2021-03-12 at 19:36 +0800, Ed Greshko wrote:
On 12/03/2021 19:18, Patrick O'Callaghan wrote:
On Fri, 2021-03-12 at 17:01 +0800, Ed Greshko wrote:
On 11/03/2021 21:14, Patrick O'Callaghan wrote:
Someone on the SystemD list suggested using an @reboot line in crontab for this, as a special case.
While I didn't think of that option, I somehow got the impression that you needed/wanted to run a script of some sort each time the share was mounted and unmounted.
I do. The @reboot suggestion is only a partial solution. Given that the drive is automatically powered up on reboot (there seems to be no way to prevent this as it's triggered by the system scanning the USB bus) I need to be able to power it down, but currently there's no systemd mount event to cause this to happen. No doubt there's a more elegant way around this, but baby steps ...
Well, if it can't be done with systemd then a possible inelegant solution is to have a "watcher" background process (or a cron job that runs periodically) that checks to see if the share has gone from a mounted to an unmounted state and then runs the appropriate script?
Of course. In the pre-systemd days that would have been the obvious choice. I just thought systemd was supposed to make things more organised, but I'm starting to wonder. I don't want to come across as a systemd sceptic, but IMHO its highly modular structure is reflected in the scattered nature of the documentation, which is a major impediment to really understanding it.
Does something need running when the share goes from unmounted to mounted?
Just a timer. Once it's powered up and mounted, it should stay that way until an idle timeout is triggered.
I suppose this kind of thing is one reason I'm happy I opted for a NAS. It runs a RAID configuration and can be configured to power down disks when idle. :-) :-)
The disks I'm using were actually cannibalised from a NAS that died (after about 10 years' use). I stuck them in a dock, and aside from this issue they work well. The dock itself does power down after 30 minutes of disconnection, but of course while it's mounted that won't happen. I need a script to unmount and then force the disconnection. The script itself works, I just want it to happen automatically.
poc
On 12/03/2021 20:40, Patrick O'Callaghan wrote:
On Fri, 2021-03-12 at 19:36 +0800, Ed Greshko wrote:
On 12/03/2021 19:18, Patrick O'Callaghan wrote:
On Fri, 2021-03-12 at 17:01 +0800, Ed Greshko wrote:
On 11/03/2021 21:14, Patrick O'Callaghan wrote:
Someone on the SystemD list suggested using an @reboot line in crontab for this, as a special case.
While I didn't think of that option, I somehow got the impression that you needed/wanted to run a script of some sort each time the share was mounted and unmounted.
I do. The @reboot suggestion is only a partial solution. Given that the drive is automatically powered up on reboot (there seems to be no way to prevent this as it's triggered by the system scanning the USB bus) I need to be able to power it down, but currently there's no systemd mount event to cause this to happen. No doubt there's a more elegant way around this, but baby steps ...
Well, if it can't be done with systemd then a possible inelegant solution is to have a "watcher" background process (or a cron job that runs periodically) that checks to see if the share has gone from a mounted to an unmounted state and then runs the appropriate script?
Of course. In the pre-systemd days that would have been the obvious choice. I just thought systemd was supposed to make things more organised, but I'm starting to wonder. I don't want to come across as a systemd sceptic, but IMHO its highly modular structure is reflected in the scattered nature of the documentation, which is a major impediment to really understanding it.
To be frank, I still don't know/understand all of your requirements.
I understand the one requirement at system boot time and can see how the answer to that is probably best handled with an @reboot cron job.
I don't know, or maybe just don't understand, what needs doing during "normal operation". By "normal operation" I mean the system being up but the share only being accessed periodically.
It may very well be that your use case isn't something systemd was designed to do. Or, maybe can be done but would need multiple services defined.
I think, since you accepted the one response on the systemd list, you effectively ended the thread. If you needed more, I would have thought you'd inquire further.
Does something need running when the share goes from unmounted to mounted?
Just a timer. Once it's powered up and mounted, it should stay that way until an idle timeout is triggered.
That statement confuses me. Isn't that what happens with automount and the TimeoutIdleSec= parameter?
Again, I somehow got the impression that something more than simple automount was needed.
I suppose this kind of thing is one reason I'm happy I opted for a NAS. It runs a RAID configuration and can be configured to power down disks when idle. :-) :-)
The disks I'm using were actually cannibalised from a NAS that died (after about 10 years' use). I stuck them in a dock, and aside from this issue they work well. The dock itself does power down after 30 minutes of disconnection, but of course while it's mounted that won't happen. I need a script to unmount and then force the disconnection. The script itself works, I just want it to happen automatically.
Well, I suppose I understand (sort of) my confusion. I don't know what "disconnection" means. And, I don't know why a script is needed to unmount if that is handled by automount and its timeout.
Not that it matters, but if my NAS died I'd just go out and replace it. :-)
Also, FWIW, my nfs automounts get mounted at boot time too. The addition of the noauto option has no effect.
On Fri, 2021-03-12 at 22:39 +0800, Ed Greshko wrote:
Does something need running when the share goes from unmounted to mounted?
Just a timer. Once it's powered up and mounted, it should stay that way until an idle timeout is triggered.
That statement confuses me. Isn't that what happens with automount and the TimeoutIdleSec= parameter?
The automount correctly unmounts the drive. That isn't the problem. The problem is in getting the drive to power down after it has been unmounted. This means invoking a script.
The power-up script does this:

echo "- - -" > $SCAN

where SCAN is set to /sys/class/scsi_host/host7/scan. The host number can change (but seems fairly stable in practice), so this is not robust in itself. The "- - -" string is pure black magic I found with a Google search. I have no idea what it means but it does work.

The power-down script does this:

echo 1 > /sys/block/$SLOT1/device/delete

where SLOT1 is sd<N>.
Note that "udisksctl power-off ..." is not an option (I tested it) because that deletes the kernel's knowledge of the bus, meaning it can't be powered on again. I'm fairly sure I asked about this some months ago but didn't get a useful answer.
Of course it's possible that sending some command to the dock would power down the drives, but the thing has no useful documentation. It just comes with (of course) a Windows driver.
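A sketch of how the two snippets above might be combined into one dock script; the host number and device name are illustrative and machine-specific:

#!/bin/bash
# Hypothetical consolidation of the power-up/power-down snippets above
SCAN=/sys/class/scsi_host/host7/scan   # host number can change
SLOT1=sdb                              # the docked drive's kernel name

case "$1" in
up)
    # rescan the SCSI host so the kernel re-detects the docked drive
    echo "- - -" > "$SCAN"
    ;;
down)
    # delete the block device so the dock can power itself down
    echo 1 > "/sys/block/$SLOT1/device/delete"
    ;;
*)
    echo "usage: $0 up|down" >&2
    exit 1
    ;;
esac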
[...]
Well, I suppose I understand (sort of) my confusion. I don't know what "disconnection" means.
The only way to get the drive to power down is by programmatically disconnecting it from USB, as above.
And, I don't know why a script is needed to unmount if that is handled by automount and its timeout.
It isn't needed to unmount. It's needed to power down after unmounting.
Not that it matters, but if my NAS died I'd just go out and replace it. :-)
That's a valid option of course, but as I already have the hardware I need I dislike spending the money unnecessarily. Call it a hobby project.
poc
On 13/03/2021 01:29, Patrick O'Callaghan wrote:
On Fri, 2021-03-12 at 22:39 +0800, Ed Greshko wrote:
Does something need running when the share goes from unmounted to mounted?
Just a timer. Once it's powered up and mounted, it should stay that way until an idle timeout is triggered.
That statement confuses me. Isn't that what happens with automount and the TimeoutIdleSec= parameter?
<snip>
Of course it's possible that sending some command to the dock would power down the drives, but the thing has no useful documentation. It just comes with (of course) a Windows driver.
OK.... My understanding is that the HW you have doesn't power down or power up (wake up) in response to an unmount or mount operation.

So, in the case of automount/auto-unmount you need "something" to:

issue the power-up command, followed by the mount

and

unmount, followed by the power-down command.

So, it does sound to me like your configuration is very "non-standard" and not common. It also sounds like you're trying to work around deficiencies in HW that seems to have been designed with Windows in mind.

I don't think systemd was meant to solve these sorts of issues, and a kludge is more fitting than trying to shoehorn in a standard tool. Of course you then, potentially, have to maintain the kludge. I used to do that. But no longer. I found other "hobbies". :-)
On Sat, 2021-03-13 at 08:04 +0800, Ed Greshko wrote:
Of course it's possible that sending some command to the dock would power down the drives, but the thing has no useful documentation. It just comes with (of course) a Windows driver.
OK.... My understanding is that the HW you have doesn't power down or power up (wake up) in response to an unmount or mount operation.
Does any hardware do this when connected via USB?
So, in the case of automount/auto-unmount you need "something" to:

issue the power-up command, followed by the mount

and

unmount, followed by the power-down command.
Exactly.
So, it does sound to me like your configuration is very "non-standard" and not common. It also sounds like you're trying to work around deficiencies in HW that seems to have been designed with Windows in mind.
Not surprising in itself of course, but I have to wonder if there exist external docks which work better with Linux in this regard.
I don't think systemd was meant to solve these sorts of issues, and a kludge is more fitting than trying to shoehorn in a standard tool. Of course you then, potentially, have to maintain the kludge. I used to do that. But no longer. I found other "hobbies". :-)
Thanks Ed.
poc
On Mar 12, 2021, at 19:05, Ed Greshko ed.greshko@greshko.com wrote:
I don't think systemd was meant to solve these sorts of issues
Honestly, systemd is more equipped to handle this kind of issue than any init system before it. Being able to attach dependencies in a .mount unit to a .service unit is something that would have required a bunch of hacks or the autofs service.
I use a network filesystem every day that has a .mount unit that requires a .service to be launched before it can be mounted (kAFS, see kafs-utils package). It isn’t as complicated as the usb power on/off, but not outside of the realm of possibility.
— Jonathan Billings
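For reference, the kind of wiring Jonathan describes might look like the excerpts below. The thread never shows a final working version of this, so treat it as an untested sketch:

# raid.mount (hypothetical excerpt)
[Unit]
# start dock.service before mounting; stopping dock.service
# would also take the mount down
Requires=dock.service
After=dock.service

# dock.service (hypothetical excerpt)
[Unit]
# PartOf= propagates a stop of raid.mount to this service, so in
# principle its ExecStop (the power-down) runs on unmount
PartOf=raid.mount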
On Sat, 2021-03-13 at 08:49 -0500, Jonathan Billings wrote:
On Mar 12, 2021, at 19:05, Ed Greshko ed.greshko@greshko.com wrote:
I don't think systemd was meant to solve these sorts of issues
Honestly, systemd is more equipped to handle this kind of issue than any init system before it. Being able to attach dependencies in a .mount unit to a .service unit is something that would have required a bunch of hacks or the autofs service.
I use a network filesystem every day that has a .mount unit that requires a .service to be launched before it can be mounted (kAFS, see kafs-utils package). It isn’t as complicated as the usb power on/off, but not outside of the realm of possibility.
I assume you mean kafs-client (there is no kafs-utils in the standard repo). I'll have a look at that as a model.
poc
On Sat, 2021-03-13 at 08:49 -0500, Jonathan Billings wrote:
On Mar 12, 2021, at 19:05, Ed Greshko ed.greshko@greshko.com wrote:
I don't think systemd was meant to solve these sorts of issues
Honestly, systemd is more equipped to handle this kind of issue than any init system before it. Being able to attach dependencies in a .mount unit to a .service unit is something that would have required a bunch of hacks or the autofs service.
I use a network filesystem every day that has a .mount unit that requires a .service to be launched before it can be mounted (kAFS, see kafs-utils package). It isn’t as complicated as the usb power on/off, but not outside of the realm of possibility.
I modified the .mount and .service files from kafs-client, and added a .automount file. I also commented out the appropriate line in /etc/fstab.
# cat raid.mount
[Unit]
Description=External raid mount
ConditionPathExists=/raid
Wants=dock.service

[Mount]
What=none
Where=/raid
Type=ext4

[Install]
WantedBy=local-fs.target

# cat dock.service
[Unit]
Description=Power the dock up or down
After=local-fs.target
DefaultDependencies=no

[Service]
Type=oneshot
#ExecStartPre=/sbin/modprobe -q kafs
ExecStart=/usr/local/bin/dock up
ExecStop=/usr/local/bin/dock down

# cat raid.automount
[Unit]
Description=Automount /raid

[Automount]
Where=/raid
TimeoutIdleSec=300

[Install]
WantedBy=multi-user.target
I then rebooted:

# findmnt /raid
#
# systemctl list-units | egrep 'dock|raid'
raid-check.timer loaded active waiting Weekly RAID setup health check

(not relevant here)
So nothing I did appears to have had any effect. The /usr/local/bin/dock script, which logs its activity, is not being called. The raid.automount is not running, and attempting to access the /raid directory does nothing. There is nothing related to either 'dock' or 'raid' in the journal.
I assume there must be a basic error here, but I'm at a loss.
poc
On 14/03/2021 03:02, Patrick O'Callaghan wrote:
I assume there must be a basic error here, but I'm at a loss.
I like to crawl before I walk and walk before I run.
Even then, I see a pitfall in your plan. So, I created a test /etc/systemd/system/dock.service.
[root@f33k system]# cat dock.service
[Unit]
Description=Power the dock up or down
After=local-fs.target
DefaultDependencies=no

[Service]
Type=oneshot
ExecStartPre=/usr/bin/mkdir -p /var/tmp/auto
ExecStart=/usr/bin/touch /var/tmp/auto/start
ExecStop=/usr/bin/touch /var/tmp/auto/stop
And notice the following......
[root@f33k ~]# ll /var/tmp/auto
ls: cannot access '/var/tmp/auto': No such file or directory

[root@f33k ~]# systemctl status dock.service
● dock.service - Power the dock up or down
     Loaded: loaded (/etc/systemd/system/dock.service; static)
     Active: inactive (dead)

[root@f33k ~]# systemctl start dock.service

[root@f33k ~]# ll /var/tmp/auto
total 0
-rw-r--r--. 1 root root 0 Mar 14 06:35 start
-rw-r--r--. 1 root root 0 Mar 14 06:35 stop
See the issue?
Your dock.service, if run, would power-up and then immediately power-down the dock.
On Sun, 2021-03-14 at 06:38 +0800, Ed Greshko wrote:
[root@f33k ~]# systemctl status dock.service
● dock.service - Power the dock up or down
     Loaded: loaded (/etc/systemd/system/dock.service; static)
     Active: inactive (dead)

[root@f33k ~]# systemctl start dock.service

[root@f33k ~]# ll /var/tmp/auto
total 0
-rw-r--r--. 1 root root 0 Mar 14 06:35 start
-rw-r--r--. 1 root root 0 Mar 14 06:35 stop
See the issue?
Your dock.service, if run, would power-up and then immediately power-down the dock.
Indeed. Does that mean I need separate dock-up and dock-down services?
Also, the .service is not being invoked by the .mount (I shouldn't have to start it manually), presumably because the .automount isn't running.
BTW, is there a recommended way to re-run this kind of test cleanly, without having to reboot the system, e.g. after modifying one of the various unit files? I know about systemctl daemon-reload but that doesn't seem to be enough.
poc
On 14/03/2021 07:14, Patrick O'Callaghan wrote:
On Sun, 2021-03-14 at 06:38 +0800, Ed Greshko wrote:
[root@f33k ~]# systemctl status dock.service
● dock.service - Power the dock up or down
     Loaded: loaded (/etc/systemd/system/dock.service; static)
     Active: inactive (dead)

[root@f33k ~]# systemctl start dock.service

[root@f33k ~]# ll /var/tmp/auto
total 0
-rw-r--r--. 1 root root 0 Mar 14 06:35 start
-rw-r--r--. 1 root root 0 Mar 14 06:35 stop
See the issue?
Your dock.service, if run, would power-up and then immediately power-down the dock.
Indeed. Does that mean I need separate dock-up and dock-down services?
I suppose. But, I can't say that there is a way to start a service when the auto-unmount occurs.
Also, the .service is not being invoked by the .mount (I shouldn't have to start it manually), presumably because the .automount isn't running.
I modified my aux.mount unit to be
[Unit]
Description=nfs mount aux
Wants=dock.service

[Mount]
What=[2001:b030:112f::19]:/volume1/aux
Where=/aux
Options=rw,noauto,soft,fg,x-systemd.mount-timeout=30
Type=nfs4

[Install]
WantedBy=multi-user.target
I then rebooted and.....

[egreshko@f33k ~]$ ll /var/tmp/auto/
total 0
-rw-r--r--. 1 root root 0 Mar 14 07:26 start
-rw-r--r--. 1 root root 0 Mar 14 07:26 stop
I then waited until it unmounted and accessed the share.
[egreshko@f33k ~]$ ll /var/tmp/auto/
total 0
-rw-r--r--. 1 root root 0 Mar 14 07:33 start
-rw-r--r--. 1 root root 0 Mar 14 07:33 stop
So, it is working as expected for me.
You may wish to try "crawling" first?
BTW, is there a recommended way to re-run this kind of test cleanly, without having to reboot the system, e.g. after modifying one of the various unit files? I know about systemctl daemon-reload but that doesn't seem to be enough.

I used "systemctl daemon-reload" in this test without a problem.
On Fri, 2021-03-12 at 11:18 +0000, Patrick O'Callaghan wrote:
I do. The @reboot suggestion is only a partial solution. Given that the drive is automatically powered up on reboot (there seems to be no way to prevent this as it's triggered by the system scanning the USB bus) I need to be able to power it down, but currently there's no systemd mount event to cause this to happen. No doubt there's a more elegant way around this, but baby steps ...
A post-boot script?
On Sun, 2021-03-14 at 18:11 +1030, Tim via users wrote:
On Fri, 2021-03-12 at 11:18 +0000, Patrick O'Callaghan wrote:
I do. The @reboot suggestion is only a partial solution. Given that the drive is automatically powered up on reboot (there seems to be no way to prevent this as it's triggered by the system scanning the USB bus) I need to be able to power it down, but currently there's no systemd mount event to cause this to happen. No doubt there's a more elegant way around this, but baby steps ...
A post-boot script?
That's essentially what the @reboot crontab entry does.
poc
On Sun, 2021-03-14 at 07:37 +0800, Ed Greshko wrote:
Your dock.service, if run, would power-up and then immediately power-down the dock.
Indeed. Does that mean I need separate dock-up and dock-down services?
I suppose. But, I can't say that there is a way to start a service when the auto-unmount occurs.
That is starting to look like the show-stopper. The only way round it may be a separate monitoring process outside of systemd.
Anyway, taking one thing at a time, I have the dock permanently on while I try to get the automounting to work:
Also, the .service is not being invoked by the .mount (I shouldn't have to start it manually), presumably because the .automount isn't running.
I modified my aux.mount unit to be
[Unit]
Description=nfs mount aux
Wants=dock.service

[Mount]
What=[2001:b030:112f::19]:/volume1/aux
Where=/aux
Options=rw,noauto,soft,fg,x-systemd.mount-timeout=30
Type=nfs4

[Install]
WantedBy=multi-user.target
My raid.mount now looks like this (the previous version was missing the 'What' and 'Options' lines):
# cat raid.mount
[Unit]
Description=External /raid mount
Wants=dock.service

[Mount]
What=/dev/md0p1
Where=/raid
Options=rw,noauto,soft,fg,x-systemd.mount-timeout=30
Type=ext4

[Install]
WantedBy=multi-user.target
Everything else is the same. Recall that the .automount file is:
# cat raid.automount
[Unit]
Description=Automount /raid

[Automount]
Where=/raid
TimeoutIdleSec=300

[Install]
WantedBy=multi-user.target
and dock.service is:
# cat dock.service
[Unit]
Description=Power the dock up
After=multi-user.target
DefaultDependencies=no

[Service]
#Type=oneshot
ExecStart=/usr/local/bin/dock up

I commented out the 'Type=oneshot' line as this is the default. I could add 'RemainAfterExit=on' in order to keep the state as 'running', but I don't see that this makes any material difference here.
I then rebooted and.....

[egreshko@f33k ~]$ ll /var/tmp/auto/
total 0
-rw-r--r--. 1 root root 0 Mar 14 07:26 start
-rw-r--r--. 1 root root 0 Mar 14 07:26 stop
I rebooted and got:
# findmnt /raid
# ls /raid
# findmnt /raid
#
IOW nothing happens.
poc
On 15/03/2021 00:31, Patrick O'Callaghan wrote:
I rebooted and got:
# findmnt /raid
# ls /raid
# findmnt /raid
#
IOW nothing happens.
systemctl status raid.mount
systemctl status raid.automount
systemctl status dock.service
and finally
mount /raid
On Mon, 2021-03-15 at 05:09 +0800, Ed Greshko wrote:
On 15/03/2021 00:31, Patrick O'Callaghan wrote:
I rebooted and got:
# findmnt /raid
# ls /raid
# findmnt /raid
#
IOW nothing happens.
systemctl status raid.mount
systemctl status raid.automount
systemctl status dock.service
# systemctl status raid.mount
● raid.mount - External /raid mount
     Loaded: loaded (/etc/systemd/system/raid.mount; disabled; vendor preset: disabled)
     Active: inactive (dead)
      Where: /raid
       What: /dev/md0p1

# systemctl status raid.automount
● raid.automount - Automount /raid
     Loaded: loaded (/etc/systemd/system/raid.automount; disabled; vendor preset: disabled)
     Active: inactive (dead)
   Triggers: ● raid.mount
      Where: /raid

# systemctl status dock.service
● dock.service - Power the dock up
     Loaded: loaded (/etc/systemd/system/dock.service; static)
     Active: inactive (dead)
and finally
mount /raid
# mount /raid
mount: /raid: can't find in /etc/fstab.
#
poc
On 15/03/2021 06:10, Patrick O'Callaghan wrote:
On Mon, 2021-03-15 at 05:09 +0800, Ed Greshko wrote:
On 15/03/2021 00:31, Patrick O'Callaghan wrote:
I rebooted and got:
# findmnt /raid
# ls /raid
# findmnt /raid
#
IOW nothing happens.
systemctl status raid.mount
systemctl status raid.automount
systemctl status dock.service
# systemctl status raid.mount
● raid.mount - External /raid mount
     Loaded: loaded (/etc/systemd/system/raid.mount; disabled; vendor preset: disabled)
     Active: inactive (dead)
      Where: /raid
       What: /dev/md0p1

# systemctl status raid.automount
● raid.automount - Automount /raid
     Loaded: loaded (/etc/systemd/system/raid.automount; disabled; vendor preset: disabled)
     Active: inactive (dead)
   Triggers: ● raid.mount
      Where: /raid

# systemctl status dock.service
● dock.service - Power the dock up
     Loaded: loaded (/etc/systemd/system/dock.service; static)
     Active: inactive (dead)
systemctl enable raid.mount
systemctl enable raid.automount
On Mon, 2021-03-15 at 06:52 +0800, Ed Greshko wrote:
On 15/03/2021 06:10, Patrick O'Callaghan wrote:
On Mon, 2021-03-15 at 05:09 +0800, Ed Greshko wrote:
On 15/03/2021 00:31, Patrick O'Callaghan wrote:
I rebooted and got:
# findmnt /raid
# ls /raid
# findmnt /raid
#
IOW nothing happens.
systemctl status raid.mount
systemctl status raid.automount
systemctl status dock.service
# systemctl status raid.mount
● raid.mount - External /raid mount
     Loaded: loaded (/etc/systemd/system/raid.mount; disabled; vendor preset: disabled)
     Active: inactive (dead)
      Where: /raid
       What: /dev/md0p1

# systemctl status raid.automount
● raid.automount - Automount /raid
     Loaded: loaded (/etc/systemd/system/raid.automount; disabled; vendor preset: disabled)
     Active: inactive (dead)
   Triggers: ● raid.mount
      Where: /raid

# systemctl status dock.service
● dock.service - Power the dock up
     Loaded: loaded (/etc/systemd/system/dock.service; static)
     Active: inactive (dead)
systemctl enable raid.mount
systemctl enable raid.automount
OK, I hadn't realised that was necessary. Now (after a reboot) I get:

# findmnt /raid
TARGET SOURCE FSTYPE OPTIONS
/raid systemd-1 autofs rw,relatime,fd=51,pgrp=1,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=728

which is good, but still:

# ls /raid
ls: cannot access '/raid': No such device
#

It turned out that mount was showing:

mount: /raid: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.
which I finally narrowed down to the NFS 'soft' and 'fg' mount options, which of course don't apply in my case. After removing those it now appears to work.
The only remaining problem (touch wood) is to get the power-down script to run after a timeout. I'll consider writing a special script to monitor the mount status independently of systemd.
Hopefully that's all for now. Many thanks Ed, and thanks also to Jonathan for useful pointers.
poc
On 15/03/2021 07:37, Patrick O'Callaghan wrote:
The only remaining problem (touch wood) is to get the power-down script to run after a timeout. I'll consider writing a special script to monitor the mount status independently of systemd.
You could consider using a systemd timer/service pair.

Add another Wants= to raid.mount, say "raid.timer". Then....

1. Upon mount, raid.timer is started.
2. After a defined time, raid.timer calls raid.service.
3. raid.service checks the mount status.
4. If still mounted, it exits, meaning we go back to waiting and #2.
5. If now unmounted, it powers off the hub and stops raid.timer. So, basically, raid.service no longer runs and you're back to condition #1, which starts the timer again on the next mount.

A sketch of such a pair follows.
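A minimal sketch of such a pair, assuming the 'dock cdown' conditional power-down described later in the thread (unit names are illustrative):

# raid.timer (hypothetical; started via Wants=raid.timer in raid.mount)
[Unit]
Description=Periodically check whether /raid is still mounted

[Timer]
# fire one minute after activation, then once a minute thereafter
OnActiveSec=1min
OnUnitActiveSec=1min

# raid.service (hypothetical; triggered by raid.timer)
[Unit]
Description=Power the dock down once /raid is unmounted

[Service]
Type=oneshot
# 'cdown' is assumed to power down only when /raid is no longer
# mounted, and to stop raid.timer when it does
ExecStart=/usr/local/bin/dock cdown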
On Mon, 2021-03-15 at 10:40 +0800, Ed Greshko wrote:
On 15/03/2021 07:37, Patrick O'Callaghan wrote:
The only remaining problem (touch wood) is to get the power-down script to run after a timeout. I'll consider writing a special script to monitor the mount status independently of systemd.
You could consider using a systemd timer/service pair.

Add another Wants= to raid.mount, say "raid.timer". Then....

1. Upon mount, raid.timer is started.
2. After a defined time, raid.timer calls raid.service.
3. raid.service checks the mount status.
4. If still mounted, it exits, meaning we go back to waiting and #2.
5. If now unmounted, it powers off the hub and stops raid.timer. So, basically, raid.service no longer runs and you're back to condition #1, which starts the timer again on the next mount.
OK, I'm attempting to do that, but it's not quite there. The main problem seems to be getting the timer to fire more than once (did I mention how obscure the systemd docs are?). As it stands, the raid.service unit checks the mounted status immediately rather than waiting, and of course it always succeeds, but then just finishes.
I attach the current systemd files. The 'dock' script is now considerably simplified (it turns out that all the bus scanning is unnecessary as I can just use 'hdparm -y' to spin down, and spinning up is automatic). The 'cdown' argument powers down the dock conditionally, i.e. if it's not mounted.
poc
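The attachments aren't preserved in the archive; a plausible reconstruction of the simplified dock script, based on the description above (the device node is illustrative):

#!/bin/bash
# hypothetical reconstruction of the simplified dock script
DEV=/dev/sdb   # illustrative; the docked drive's device node

case "$1" in
up)
    : # nothing to do: the drive spins up automatically on access
    ;;
down)
    hdparm -y "$DEV"   # put the drive into standby (spin down)
    ;;
cdown)
    # conditional power-down: only if /raid is not mounted
    mountpoint -q /raid || hdparm -y "$DEV"
    ;;
esac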
On 17/03/2021 22:22, Patrick O'Callaghan wrote:
On Mon, 2021-03-15 at 10:40 +0800, Ed Greshko wrote:
On 15/03/2021 07:37, Patrick O'Callaghan wrote:
The only remaining problem (touch wood) is to get the power-down script to run after a timeout. I'll consider writing a special script to monitor the mount status independently of systemd.
You could consider using a systemd timer/service pair.

Add another Wants= to raid.mount, say "raid.timer". Then....

1. Upon mount, raid.timer is started.
2. After a defined time, raid.timer calls raid.service.
3. raid.service checks the mount status.
4. If still mounted, it exits, meaning we go back to waiting and #2.
5. If now unmounted, it powers off the hub and stops raid.timer. So, basically, raid.service no longer runs and you're back to condition #1, which starts the timer again on the next mount.
OK, I'm attempting to do that, but it's not quite there. The main problem seems to be getting the timer to fire more than once (did I mention how obscure the systemd docs are?). As it stands, the raid.service unit checks the mounted status immediately rather than waiting, and of course it always succeeds, but then just finishes.
I attach the current systemd files. The 'dock' script is now considerably simplified (it turns out that all the bus scanning is unnecessary as I can just use 'hdparm -y' to spin down, and spinning up is automatic). The 'cdown' argument powers down the dock conditionally, i.e. if it's not mounted.

I'll have a look at this tomorrow....
On 17/03/2021 23:10, Ed Greshko wrote:
On 17/03/2021 22:22, Patrick O'Callaghan wrote:
On Mon, 2021-03-15 at 10:40 +0800, Ed Greshko wrote:
On 15/03/2021 07:37, Patrick O'Callaghan wrote:
The only remaining problem (touch wood) is to get the power-down script to run after a timeout. I'll consider writing a special script to monitor the mount status independently of systemd.
You could consider using a systemd timer/service pair.

Add another Wants= to raid.mount, say "raid.timer". Then....

1. Upon mount, raid.timer is started.
2. After a defined time, raid.timer calls raid.service.
3. raid.service checks the mount status.
4. If still mounted, it exits, meaning we go back to waiting and #2.
5. If now unmounted, it powers off the hub and stops raid.timer. So, basically, raid.service no longer runs and you're back to condition #1, which starts the timer again on the next mount.
OK, I'm attempting to do that, but it's not quite there. The main problem seems to be getting the timer to fire more than once (did I mention how obscure the systemd docs are?). As it stands, the raid.service unit checks the mounted status immediately rather than waiting, and of course it always succeeds, but then just finishes.
I attach the current systemd files. The 'dock' script is now considerably simplified (it turns out that all the bus scanning is unnecessary as I can just use 'hdparm -y' to spin down, and spinning up is automatic). The 'cdown' argument powers down the dock conditionally, i.e. if it's not mounted.

I'll have a look at this tomorrow...
Before I drifted off I decided to adhere to the KISS principle. So, I decided against the timer/service pair in favor of a simpler approach.
See the attached examples.
This results in....
[root@f33k auto]# pwd
/var/tmp/auto

[root@f33k auto]# ll
total 0

[root@f33k auto]# ls /aux
backups linux-releases qemu-images

[root@f33k auto]# ll
total 0
-rw-r--r--. 1 root root 0 Mar 18 06:46 dock-up

And one minute later:

[root@f33k auto]# ll
total 0
-rw-r--r--. 1 root root 0 Mar 18 06:47 dock-down
-rw-r--r--. 1 root root 0 Mar 18 06:46 dock-up
I don't know if the power up/down of the dock takes time...so it may be necessary to adjust sleep times.
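Ed's attached examples aren't preserved either; from the output above, the dock-down half presumably amounts to a polling loop of roughly this shape (a guess, with the touch commands standing in for real power commands):

#!/bin/sh
# hypothetical sketch of a while-loop dock-down watcher
sleep 60                       # give the mount time to settle
while mountpoint -q /aux; do   # still mounted: check again shortly
    sleep 60
done
touch /var/tmp/auto/dock-down  # stand-in for the real power-down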
On Thu, 2021-03-18 at 06:50 +0800, Ed Greshko wrote:
On 17/03/2021 23:10, Ed Greshko wrote:
On 17/03/2021 22:22, Patrick O'Callaghan wrote:
On Mon, 2021-03-15 at 10:40 +0800, Ed Greshko wrote:
On 15/03/2021 07:37, Patrick O'Callaghan wrote:
The only remaining problem (touch wood) is to get the power-down script to run after a timeout. I'll consider writing a special script to monitor the mount status independently of systemd.

You could consider using a systemd timer/service pair.

Add another Wants= to raid.mount, say "raid.timer". Then....

1. Upon mount, raid.timer is started.
2. After a defined time, raid.timer calls raid.service.
3. raid.service checks the mount status.
4. If still mounted, it exits, meaning we go back to waiting and #2.
5. If now unmounted, it powers off the hub and stops raid.timer. So, basically, raid.service no longer runs and you're back to condition #1, which starts the timer again on the next mount.
OK, I'm attempting to do that, but it's not quite there. The main problem seems to be getting the timer to fire more than once (did I mention how obscure the systemd docs are?). As it stands, the raid.service unit checks the mounted status immediately rather than waiting, and of course it always succeeds, but then just finishes.
I attach the current systemd files. The 'dock' script is now considerably simplified (it turns out that all the bus scanning is unnecessary as I can just use 'hdparm -y' to spin down, and spinning up is automatic). The 'cdown' argument powers down the dock conditionally, i.e. if it's not mounted.
I'll have a look at this on my tomorrow...
Before I drifted off I decided to adhere to the KISS principle. So, I decided against the timer/service pair in favor of a simpler approach.
See the attached examples.
This results in....
[root@f33k auto]# pwd
/var/tmp/auto

[root@f33k auto]# ll
total 0

[root@f33k auto]# ls /aux
backups linux-releases qemu-images

[root@f33k auto]# ll
total 0
-rw-r--r--. 1 root root 0 Mar 18 06:46 dock-up

And one minute later:

[root@f33k auto]# ll
total 0
-rw-r--r--. 1 root root 0 Mar 18 06:47 dock-down
-rw-r--r--. 1 root root 0 Mar 18 06:46 dock-up
Right. I had to stare at that for a while but I get it. The dock-down script is doing the timeout, not systemd itself. I'll try that and see what happens.
I don't know if the power up/down of the dock takes time...so it may be necessary to adjust sleep times.
Sure.
I'll report back once I've tested for a few days.
poc
On 18/03/2021 23:01, Patrick O'Callaghan wrote:
Sure.
I'll report back once I've tested for a few days.
Today was a slow day. So I implemented the "dock-down" with a timer instead of a while loop. Let me know if you're interested in it.
On Tue, 2021-03-23 at 14:08 +0800, Ed Greshko wrote:
On 18/03/2021 23:01, Patrick O'Callaghan wrote:
Sure.
I'll report back once I've tested for a few days.
Today was a slow day. So I implemented the "dock-down" with a timer instead of a while loop. Let me know if you're interested in it.
Sure, I'd like to see it. I've been testing various mods to the whole thing because I couldn't get it to work reliably, i.e. the loop waiting for the unmount would work when run directly from the shell, but not when run from a systemd service. I think it's working now (or nearly) but all contributions are welcome.
I attach the current version for comparison.
poc
On 23/03/2021 18:57, Patrick O'Callaghan wrote:
Sure, I'd like to see it. I've been testing various mods to the whole thing because I couldn't get it to work reliably, i.e. the loop waiting for the unmount would work when run directly from the shell, but not when run from a systemd service. I think it's working now (or nearly) but all contributions are welcome.
Well, I saw no difference running things from a shell or as a service. So, I can't comment.
Anyway, see the included tar. As usual, mine is an nfs mount.

The mount starts dock-down2.timer, which runs once a minute. You can change that. When the unmount is noticed, the "touch" will record it and the timer will be stopped.
I attach the current version for comparison.
I really didn't look too much at it. Seems like you are doing more in the dock script than I thought was needed.
On Wed, 2021-03-24 at 06:02 +0800, Ed Greshko wrote:
On 23/03/2021 18:57, Patrick O'Callaghan wrote:
Sure, I'd like to see it. I've been testing various mods to the whole thing because I couldn't get it to work reliably, i.e. the loop waiting for the unmount would work when run directly from the shell, but not when run from a systemd service. I think it's working now (or nearly) but all contributions are welcome.
Well, I saw no difference running things from a shell or as a service. So, I can't comment.
Anyway, see the included tar. As usual, mine is an nfs mount.

The mount starts dock-down2.timer, which runs once a minute. You can change that. When the unmount is noticed, the "touch" will record it and the timer will be stopped.
I attach the current version for comparison.
I really didn't look too much at it. Seems like you are doing more in the dock script than I thought was needed.
I'll take a look. In the meantime, I've done some tests using inotifywait on the mounted directory (/raid) and it seems like the periodic checking for an active mount can be avoided.
poc
On Tue, 2021-03-23 at 22:40 +0000, Patrick O'Callaghan wrote:
On Wed, 2021-03-24 at 06:02 +0800, Ed Greshko wrote:
On 23/03/2021 18:57, Patrick O'Callaghan wrote:
Sure, I'd like to see it. I've been testing various mods to the whole thing because I couldn't get it to work reliably, i.e. the loop waiting for the unmount would work when run directly from the shell, but not when run from a systemd service. I think it's working now (or nearly) but all contributions are welcome.
Well, I saw no difference running things from a shell or as a service. So, I can't comment.
Anyway, see the included tar. As usual, mine is an nfs mount.

The mount starts dock-down2.timer, which runs once a minute. You can change that. When the unmount is noticed, the "touch" will record it and the timer will be stopped.
I attach the current version for comparison.
I really didn't look too much at it. Seems like you are doing more in the dock script than I thought was needed.
I'll take a look. In the meantime, I've done some tests using inotifywait on the mounted directory (/raid) and it seems like the periodic checking for an active mount can be avoided.
Just to not leave this hanging: I've implemented the unmount detection using "inotifywait -e unmount /raid", which does what I need without using a polling loop.
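A sketch of the watcher this implies, assuming the inotify-tools package provides inotifywait (the dock invocation is illustrative):

#!/bin/sh
# block until /raid is unmounted, then power the dock down;
# inotifywait exits when the watched unmount event fires
inotifywait -e unmount /raid && /usr/local/bin/dock down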
There is currently an additional problem with the actual device control, but I'll describe that in a separate thread.
Thanks again for your interest Ed.
poc
On Wed, 2021-03-24 at 06:02 +0800, Ed Greshko wrote:
On 23/03/2021 18:57, Patrick O'Callaghan wrote:
Sure, I'd like to see it. I've been testing various mods to the whole thing because I couldn't get it to work reliably, i.e. the loop waiting for the unmount would work when run directly from the shell, but not when run from a systemd service. I think it's working now (or nearly) but all contributions are welcome.
Well, I saw no difference running things from a shell or as a service. So, I can't comment.
Anyway, see the included tar. As usual, mine is an nfs mount.

The mount starts dock-down2.timer, which runs once a minute. You can change that. When the unmount is noticed, the "touch" will record it and the timer will be stopped.
I attach the current version for comparison.
Just a quick comment: I think the x-systemd.mount-timeout entry in aux.mount should be x-systemd.idle-timeout, at least according to the systemd.mount man page.
poc
On 27/03/2021 07:24, Patrick O'Callaghan wrote:
Just a quick comment: I think the x-systemd.mount-timeout entry in aux.mount should be x-systemd.idle-timeout, at least according to the systemd.mount man page.
Actually, it should not be there at all. I had copied it from the fstab, and it was there as a relic from when I was transitioning between two NAS boxes and sometimes forgot that one was unavailable, causing me grief.
I didn't think to check the document....not uncommon. :-)
"Note that this option can only be used in /etc/fstab, and will be ignored when part of the Options= setting in a unit file."
I have "TimeoutIdleSec=60" in the automount unit.
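To summarise the placement that does work, per the quoted man page: the idle timeout goes either in the [Automount] section of the unit or in fstab, not in a mount unit's Options= (values here are illustrative):

# in the .automount unit:
[Automount]
Where=/aux
TimeoutIdleSec=60

# or as an fstab option:
nas:/volume1/aux /aux nfs4 rw,noauto,x-systemd.automount,x-systemd.idle-timeout=60 0 0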
The duration of this thread has me laughing at the systemd "Biggest Myths" web page again:
http://0pointer.de/blog/projects/the-biggest-myths.html
Especially the claims that it is easily scriptable :-).
On Sat, 2021-03-27 at 07:36 +0800, Ed Greshko wrote:
On 27/03/2021 07:24, Patrick O'Callaghan wrote:
Just a quick comment: I think the x-systemd.mount-timeout entry in aux.mount should be x-systemd.idle-timeout, at least according to the systemd.mount man page.
Actually, it should not be there at all. I had copied it from the fstab, and it was there as a relic from when I was transitioning between two NAS boxes and sometimes forgot that one was unavailable, causing me grief.
I didn't think to check the document....not uncommon. :-)
"Note that this option can only be used in /etc/fstab, and will be ignored when part of the Options= setting in a unit file."
I have "TimeoutIdleSec=60" in the automount unit.
Yes, that's what I did too. The man page for systemd.mount is unclear on this. It seems to imply you can set an idle-timeout (not the mount-timeout) in either the mount or automount units, or in the fstab (not an option in my case, and the automount unit seems more logical to me).
poc