Re: plans for user-backed device support
by Andy Grover
On 02/04/2015 12:21 PM, Alex Elsayed wrote:
>> Maybe we step back and define a DBus interface
>
> I'd be quite happy with that - I considered suggesting it, in fact, but
> wasn't sure of the prevailing opinion re: dbus around here.
>
> What would you do re: discovery, though?
>
> (Explaining some DBus things, which readers may already be aware of)
>
> In DBus, there's a two-level hierarchy: busnames hold objects. Busnames
> are either the inherent, connection-level kind of the :\d+\.\d+ form
> (e.g. ':1.42'), or the human-readable well-known kind (reverse DNS).
> Objects, then, implement interfaces.
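>
> For illustration, with dbus-python (the well-known name here is just an
> example):
>
>     import dbus
>     bus = dbus.SessionBus()
>     print(bus.get_unique_name())   # connection-level name, e.g. ':1.42'
>     # claim a human-readable, reverse-DNS well-known name on top of it
>     bus.request_name('org.example.Demo')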
>
> However, well-known names can only have a single owner - so discovering
> which busnames have objects which implement an interface is non-trivial.
>
> The approach taken by KDE is to suffix the well-known name with the PID
> (org.kde.StatusNotifierItem-2055 or whatever), call ListNames, and filter in
> the client. This has the drawback of making DBus activation impossible.
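>
> Roughly, with dbus-python (StatusNotifierItem is just the KDE example):
>
>     import dbus
>     bus = dbus.SessionBus()
>     dbus_proxy = bus.get_object('org.freedesktop.DBus',
>                                 '/org/freedesktop/DBus')
>     names = dbus_proxy.ListNames(dbus_interface='org.freedesktop.DBus')
>     # the filtering has to happen client-side
>     items = [n for n in names
>              if n.startswith('org.kde.StatusNotifierItem-')]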
>
> Another approach is for every implementor to try to claim the well-known
> name, and on failure contact the existing owner to republish their objects
> (possibly under a namespaced object path). This has the drawback of
> complicating the implementation somewhat, as well as making bus activation
> only able to activate a single 'default' implementation.
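>
> A sketch of that dance with dbus-python (the Republish method and all
> the names are hypothetical):
>
>     import dbus
>     bus = dbus.SessionBus()
>     reply = bus.request_name('org.example.Service',
>                              dbus.bus.NAME_FLAG_DO_NOT_QUEUE)
>     if reply == dbus.bus.REQUEST_NAME_REPLY_EXISTS:
>         # the name is taken; ask the current owner to republish our
>         # objects under a namespaced path
>         owner = bus.get_object('org.example.Service',
>                                '/org/example/Service')
>         owner.Republish('/org/example/Service/backends/mine',
>                         dbus_interface='org.example.Service')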
>
> A third approach would be to explicitly define a multiplexor, which backends
> ask to republish their objects. This simplifies implementations, and it
> could also provide its own API that requests a backend by name, and ensures
> that backend's object is available. This could be driven by something as
> simple as a key-value mapping from backend name to a well-known DBus name
> specific to that backend, which the multiplexor calls to trigger service
> activation.
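>
> For instance, the multiplexor's mapping could be as simple as this (all
> names hypothetical):
>
>     import dbus
>     bus = dbus.SessionBus()
>     # backend name -> backend-specific well-known busname
>     BACKENDS = {'gluster': 'org.example.Backend.Gluster',
>                 'qcow':    'org.example.Backend.Qcow'}
>     def ensure_backend(name):
>         # asks the bus to service-activate the backend if not running
>         bus.start_service_by_name(BACKENDS[name])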
>
> Thoughts?
It really seems to come down to: will multiple independent user-handler
daemons be needed? Because I'm trying really hard to make tcmu-runner
good enough so that the answer is no :-)
tcmu-runner supports multiple handler modules, so it's extensible. It is
permissively licensed, so there are no issues with non-FOSS handlers
needing their own daemon. It could also be replaced entirely (either with
a modified version of itself or from scratch) without giving up the
single-busname, service-activation approach.
So that would be my current preference. (The fact that the kernel API
doesn't preclude multiple handler daemons does not mean we need to
*support* those right away, or ever.)
If there are likely use cases that tcmu-runner is unsuited to solving by
itself, then of course that would change things, so let's please talk
about them!
-- Andy
Re: plans for user-backed device support
by Andy Grover
On 02/04/2015 09:50 AM, Alex Elsayed wrote:
>>> Here's a proof-of-concept of what targetcli integration might look like:
>>> https://github.com/agrover/targetcli-fb/tree/user-backstore-poc
>>
>> I have a couple questions about this.
>>
>> 1.) What about alternate frontend implementations that still want to use
>> TCMU? By making the config module Python, not only do you impose a
>> language choice on the implementations, you also likely end up with
>> duplicated logic (the TCMU backend will need to validate parameters
>> anyway, after all.)
>>
>> 2.) Why not have a dynbs_tcmu.py that talks to TCMU somehow, and have a
>> few additional functions (explicit param validation, etc) added to the API
>> that TCMU expects a backend to expose? That saves the backend implementer
>> from having to care about rtslib's API; they just worry about the TCMU API
>> they already were working with.
>
> (Or heck, even a tiny python lib that dlopen's the handler_*.so and calls
> the TCMU-defined API bits via FFI)
Very good points. So you're saying we don't want to tie the user-handler
discovery mechanism to our current configtool or its language.
It would also be nice to allow an alternate implementation of the TCMU
handler daemon, which dictates a degree of abstraction going the other
way. The current user-kernel interface is of course not dependent on
tcmu-runner, and it also lets unrelated processes handle different sets
of user-backed backstores.
Maybe we step back and define a DBus interface that
$tcmu-handler-daemons would implement, which would allow $configtools to
enumerate handlers and pre-validate parameters. This would allow tight
integration of user-handled backstores in targetcli, but also keep
things loosely coupled enough to allow alternate implementations of
either side.
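Something like this shape, perhaps, using dbus-python (the interface and
method names are placeholders, not a proposal set in stone):

    import dbus.service

    class HandlerDaemon(dbus.service.Object):
        # enumerate the handlers this daemon provides
        @dbus.service.method('org.example.TCMUHandler1',
                             in_signature='', out_signature='as')
        def ListHandlers(self):
            return ['gluster', 'qcow']

        # pre-validate a config string; returns (ok, reason)
        @dbus.service.method('org.example.TCMUHandler1',
                             in_signature='ss', out_signature='bs')
        def CheckConfig(self, handler, config):
            return (True, '')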
-- Andy
plans for user-backed device support
by Andy Grover
Hi all, I just wanted to share my current thoughts on support for
user-backed backstores in rtslib and targetcli, hopefully to be developed
and adopted in common across the -fb and Datera versions.
As background, user-backed backstores (also known as TCMU) allow the
processing of a LUN's commands to be passed through to a user process,
instead of being handled by one of LIO's kernel backstore modules. This
might be needed to work with a userspace-only API, or to implement
less-common SCSI command sets such as streaming (tape) emulation that
are not supported in the kernel backstores.
I think we want to strive to make user backstores appear as much as
possible like the built-in kernel ones, and hide as much of their
increased complexity as we can. We can make targetcli and/or rtslib
extensible so the installation of a new userspace handler will result in
that handler being listed as a backstore in targetcli.
For example, a Gluster-backed handler could consist of two parts:
1) handler_gluster.so, part of the tcmu-runner daemon. This would
actually handle converting the SCSI commands received for a
userspace-backed LUN into Gluster API calls.
2) dynbs_gluster.py. This would be packaged along with
handler_gluster.so, but to a different directory where targetcli instead
of tcmu-runner would find it. Targetcli discovers and execs the file,
which defines a UIGlusterBackstore class and instantiates it. This puts
'gluster' in targetcli's tree, right alongside the built-in kernel-based
backstores. Its ui_command_create() does arg validation specific to
Gluster. If the args are valid, it then creates an instance of rtslib
UserBackedStorageObject, and this starts the chain of events that
results in handler_gluster.so being ready to accept commands for the new
storage object.
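As a rough sketch (the base class, import paths, and the
UserBackedStorageObject arguments are illustrative; none of this API is
settled yet):

    # dynbs_gluster.py
    from rtslib import UserBackedStorageObject
    from targetcli.ui_backstore import UIBackstore   # assumed location

    class UIGlusterBackstore(UIBackstore):
        def ui_command_create(self, name, size, volume, path):
            # Gluster-specific validation would go here, e.g. checking
            # that 'volume' exists and 'path' is well-formed
            config = 'glfs/%s@localhost/%s' % (volume, path)
            # this kicks off the chain of events that readies
            # handler_gluster.so for the new storage object
            return UserBackedStorageObject(name, size=size, config=config)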
Here's a proof-of-concept of what targetcli integration might look like:
https://github.com/agrover/targetcli-fb/tree/user-backstore-poc
Regards -- Andy