On Tue, Jun 30, 2020 at 03:18:57PM -0400, Ben Cotton wrote:
> == Benefit to Fedora ==
> The main benefit is centralizing the solution to issues that storage
> subsystem maintainers have been hitting with udev, that is:
> * providing a central infrastructure for storage event processing,
> currently targeted at udev events
> * improving the recognition of storage events and their sequences,
> which previously required complex udev rules
> * a single notion of device readiness shared among various storage
> subsystems (a single API to set the state instead of various
> variables set by different subsystems)
> * providing more enhanced possibilities to store and retrieve
> storage-device-related records than the udev database offers
> * direct support for generic device grouping (matching
> subsystem-related groups like LVM, multipath, MD... or creating
> arbitrary groups of devices)
> * a centralized solution for scheduling triggers with associated
> actions defined on groups of storage devices
This sounds interesting. Assembling complex storage from udev rules is
not easy, in particular because while it is easy to collect devices
and handle the case where all awaited devices have been detected, it's
much harder to do timeouts, partial assembly, or conditional handling.
A daemon can listen to hotplug events and maintain internal state,
taking decisions based on configuration, time, and events.
OTOH, based on this description, SID seems to want to take on some
bigger role, e.g. by providing an alternate execution and device
description mechanism. That sounds unnecessary (since udev does that
part reasonably well) and complex (also because support would have to
be added to consumers who currently get this data from udev). I would
love to see a daemon to handle storage devices, but with close
cooperation with udev and filling in the bits that udev cannot provide.
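To illustrate the kind of stateful decision-making that is hard to express in udev rules but natural in a daemon, here is a minimal sketch (not SID's actual design; the class, device names, and thresholds are all hypothetical) of tracking expected member devices and choosing between full assembly, degraded assembly after a timeout, or failure:

```python
import time

class AssemblyTracker:
    """Toy model of stateful device assembly with a timeout."""

    def __init__(self, expected, min_required, timeout):
        self.expected = set(expected)      # devices we hope to see
        self.min_required = min_required   # enough for a degraded start
        self.timeout = timeout             # seconds to wait for the rest
        self.seen = set()
        self.started = time.monotonic()

    def device_appeared(self, name):
        # Would be fed from hotplug events (e.g. a udev monitor).
        if name in self.expected:
            self.seen.add(name)

    def decision(self, now=None):
        """Return 'assemble', 'wait', 'assemble-degraded', or 'fail'."""
        now = time.monotonic() if now is None else now
        if self.seen == self.expected:
            return "assemble"              # everything arrived
        if now - self.started < self.timeout:
            return "wait"                  # keep collecting events
        if len(self.seen) >= self.min_required:
            return "assemble-degraded"     # timed out, but enough members
        return "fail"                      # timed out with too few members
```

Keeping the decision a pure function of collected state and the clock is what makes policies like "start degraded after 30 s with at least 2 of 3 legs" straightforward here, and so awkward in per-event udev rules.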
> * adding a centralized solution for delayed actions on storage devices
> and groups of devices (avoiding unnecessary work done within udev
> context and hence avoiding frequent udev timeouts when processing
> events for such devices)
I don't think such timeouts are common. Currently the default worker
timeout is 180s, and this should be enough to handle any device hotplug
event. If there are things that need to be executed that take a
long time (for example some health check), then systemd units should be
used for this. Udev already has a mechanism to schedule long-running
systemd jobs in response to events, so I don't think we should add
anything new here. Or maybe I'm misunderstanding what this point is
about?
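For reference, the mechanism meant here is the SYSTEMD_WANTS device property: a udev rule can tag a device for systemd and request a unit to be started for it, so the slow work runs as a systemd job instead of blocking a udev worker. A rules fragment might look like this (the match conditions, file name, and service name are illustrative, not from any real package):

```
# /etc/udev/rules.d/99-disk-health.rules (illustrative)
# Instead of launching a slow check via RUN (which ties up a udev
# worker until the event timeout), ask systemd to start a templated
# service for the device; %k expands to the kernel device name.
ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
  TAG+="systemd", ENV{SYSTEMD_WANTS}+="disk-health-check@%k.service"
```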
> == How To Test ==
> * Basic testing involves (assuming we also have at least the multipath
> and/or LVM module present):
> ** installing the new 'sid' package
> ** installing the device-mapper-multipath and/or LVM module (presumably
> named device-mapper-multipath-sid-module and lvm2-sid-module)
> ** creating a device stack including device-mapper-multipath and/or LVM volumes
> ** booting with 'sid.enabled=1' on the kernel command line
> ** checking that device-mapper-multipath and/or LVM volumes are correctly activated
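The quoted steps could be sketched as a shell session like the following; the package and module names are the presumed ones from the proposal, and the grubby invocation is just one way to add the kernel argument:

```
# install SID and the presumed subsystem modules (names not confirmed)
dnf install sid
dnf install device-mapper-multipath-sid-module lvm2-sid-module

# after creating a multipath/LVM device stack, enable SID and reboot
grubby --update-kernel=ALL --args="sid.enabled=1"
reboot

# after boot, verify that multipath maps and LVM volumes are active
multipath -ll
lvs
```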
Do you plan to handle multi-device btrfs?
Zbyszek