# Fedora Quality Assurance Meeting
# Date: 2016-01-04
# Time: 16:00 UTC
# Location: #fedora-meeting on irc.freenode.net
Fedoraland seems to have been pretty quiet since the last meeting
(while RH was on holiday shutdown), so there's not a lot of new
business, but there were a couple of things we couldn't cover fully at
the last meeting since some folks weren't present, so I'm suggesting we
go ahead with Monday's meeting and try again to hit those topics.
Please do suggest any other agenda topics I might have missed!
== Proposed Agenda Topics ==
1. Previous meeting follow-up
2. Non-media blocker status update
3. Two release upgrades
4. Open floor
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
test-announce mailing list
I've been using Fedora with a "simple" LVM setup with no problems for the last 3 years. Recently I decided to set up my laptop with LVM on top of LUKS in F23. While migration from the previous setup was relatively painless, I've been noticing issues at shutdown: I consistently see logs reporting failure to properly deactivate the logical volumes and the LUKS device (as reported by others in bug 1097322, which unfortunately has been closed as EOL). I don't know whether these failures are spurious, which led me to investigate a bit how things work, and I'm failing to make sense of it.
I've noticed the existence of `blk-availability.service` in systemd. It's a service that does nothing on start and calls the `blkdeactivate` executable on system shutdown, after the "special block-device" services (LVM, iSCSI, etc.) have stopped. `blkdeactivate` is called with the option to unmount devices in use. But I don't see how it can ever succeed for the system root: other services will still be shutting down, and systemd's unmounting phase will not have been reached yet. The same may hold for non-system-root mounts as well, if services that depend on them are in the same situation.
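For context, the start/stop asymmetry described above is visible in the unit file itself. This is an approximate sketch from memory, not a verbatim copy of the F23 unit (`systemctl cat blk-availability.service` shows the real one); exact dependencies and flags vary by release:

```ini
[Unit]
Description=Availability of block devices
After=lvm2-activation.service iscsi.service iscsi-shutdown.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Nothing to do at startup...
ExecStart=/usr/bin/true
# ...the real work happens at stop: unmount (-u) and deactivate.
ExecStop=/usr/sbin/blkdeactivate -u -l wholevg
```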
My understanding was that special block-device handling is a task performed by dracut in the initramfs. It does have a shutdown hook called `dm-shutdown.sh` that uses the `dmsetup` executable to remove any device-mapper devices still active. I don't see any shutdown hooks for the LVM module, so I assume the DM hook takes care of LVM volumes as well (they are device-mapper devices, after all). Is my understanding correct?
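To illustrate, here is a hedged sketch of the kind of teardown loop such a hook performs: enumerate remaining device-mapper devices and remove each one. The parsing assumes `dmsetup ls` output of the form `name<TAB>(maj, min)`; the real hook's logic and retries may differ, and on a live system the loop body would call `dmsetup remove` (which needs root), so this sketch only prints what it would do against simulated input:

```shell
#!/bin/sh
# Sketch of a dm-shutdown-style teardown loop (illustration only).

remove_dm_devices() {
    # In the real hook this input would come from `dmsetup ls`.
    awk '{print $1}' | while read -r name; do
        # On a live system this would be: dmsetup remove "$name"
        echo "would remove: $name"
    done
}

# Simulated `dmsetup ls` output for demonstration:
printf 'luks-1234\t(253, 0)\nvg-root\t(253, 1)\n' | remove_dm_devices
# → would remove: luks-1234
# → would remove: vg-root
```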
Wouldn't it be possible to replace the custom DM hook with a call to `blkdeactivate`, and remove the `blk-availability` service from the "normal root" shutdown? Could that work better than the current setup, given that `blkdeactivate` claims to be capable of handling nested device-mapper setups and of using LVM commands more intelligently (for example, deactivating whole volume groups at once)? Shouldn't `blkdeactivate` at least be told not to unmount the root filesystem, since that will always fail?
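For reference, this is the kind of invocation being discussed; the flag names are taken from the blkdeactivate man page as I remember it, so verify locally with `blkdeactivate -h` before relying on them. It needs root and real block devices, so it's shown for illustration only:

```shell
#   -u          unmount any mounted filesystem on a device being deactivated
#   -l wholevg  have the LVM commands deactivate whole volume groups at once
blkdeactivate -u -l wholevg
```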
Apologies if I said anything egregiously wrong, and I'd be glad to be corrected in that case.
Thanks and happy holidays,