Grouping service units for bulk stop/start?

Lennart Poettering mzerqung at 0pointer.de
Tue Jun 26 09:11:43 UTC 2012


On Mon, 25.06.12 15:27, Daniel P. Berrange (berrange at redhat.com) wrote:

> With OpenStack there are quite a large number of daemons per host, each
> of which has their own .service unit file.
> 
>   openstack-glance-api.service
>   openstack-glance-registry.service
>   openstack-keystone.service
>   openstack-nova-api.service
>   openstack-nova-cert.service
>   openstack-nova-compute.service
>   openstack-nova-network.service
>   openstack-nova-objectstore.service
>   openstack-nova-scheduler.service
>   openstack-nova-volume.service
>   openstack-swift-account.service
>   openstack-swift-container.service
>   openstack-swift-object.service
>   openstack-swift-proxy.service
> 
> Currently our OpenStack instructions have such fun commands as:
> 
>  # for svc in api registry; do sudo systemctl start openstack-glance-$svc.service; done
>  # for svc in api objectstore compute network volume scheduler cert; do sudo systemctl start openstack-nova-$svc.service; done
> 
> What I'd like to be able to do is set up some kind of grouping, so that
> you can just start/stop/check status of everything in simple commands
> like:
> 
>  # sudo systemctl start openstack-nova.target
>  # sudo systemctl status openstack-nova.target
>  # sudo systemctl stop openstack-nova.target
> 
> My naive attempt to make this work was to do
> 
>  - Create an openstack-nova.target containing
> 
>    [Unit]
>    Description=OpenStack Nova
>    WantedBy=multi-user.target

WantedBy= is not supported in [Unit]; it belongs in the [Install] section.
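
If you do want the target itself pulled in at boot, the corrected version
would look something like this:

[Unit]
Description=OpenStack Nova

[Install]
WantedBy=multi-user.target

i.e. the WantedBy= line moves into [Install], and it takes effect when the
target is enabled.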

>  - Edit each of openstack-nova-XXX.service to change
>    WantedBy=multi-user.target to WantedBy=openstack-nova.target

I'd recommend simply dropping symlinks into
/usr/lib/systemd/system/openstack-nova.target.wants/ for the services
that shall be components of your target.
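
Something along these lines should do (unit names taken from your list;
extend to the rest of the component services as appropriate):

  # mkdir -p /usr/lib/systemd/system/openstack-nova.target.wants
  # cd /usr/lib/systemd/system/openstack-nova.target.wants
  # ln -s ../openstack-nova-api.service .
  # ln -s ../openstack-nova-compute.service .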

> But after doing this, stopping/starting the target has no effect on the
> running state of units I associated with it. Also I'd like starting/stopping
> XXX.target to take account of the enablement state of the individual
> XXX-YYY.service files. eg so I can disable say, openstack-nova-network.service
> on a host, but still use  openstack-nova.target to bulk stop/start all the
> other services that are enabled.

You'd need BindTo=openstack-nova.target in all your service units to
make sure that if the target goes away the services do too.
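
For example, the [Unit] section of openstack-nova-api.service would gain
a line like this (the Description= is shown only for context):

[Unit]
Description=OpenStack Nova API
BindTo=openstack-nova.target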

> Either I'm missing some config change, or what I'm attempting is just not
> the kind of functionality that .target files are intended to offer ?

So far they haven't, actually. They tend to have slightly different
semantics than what you are looking for here.

Anyway, to summarize what I am suggesting:

Create your target openstack-nova.target like this:

[Unit]
Description=OpenStack Nova

Stick that in /usr/lib/systemd/system/openstack-nova.target. Then, add
symlinks from /usr/lib/systemd/system/openstack-nova.target.wants/ to
your individual services. This bit will make sure that when the target
is activated the service units are pulled in too.

And then, to make sure that if the target goes away your services do too,
you need to add BindTo=openstack-nova.target lines to the [Unit]
sections of all your services.
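
The resulting layout would hence look roughly like this (one symlink per
component service; the names simply follow your example):

/usr/lib/systemd/system/openstack-nova.target
/usr/lib/systemd/system/openstack-nova.target.wants/
        openstack-nova-api.service -> ../openstack-nova-api.service
        openstack-nova-compute.service -> ../openstack-nova-compute.service
        ...

plus a BindTo=openstack-nova.target line in the [Unit] section of each
openstack-nova-*.service.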

With this all in place starting the target will start the services, and
stopping the target will stop the services. However, this isn't perfect
yet, because if an individual service is started the target is also
pulled in and hence all other services, too. And that is most likely not
what you want.
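
In other words, sketched with your nova units:

  # systemctl start openstack-nova.target
    (brings up every service linked into the .wants/ dir)
  # systemctl stop openstack-nova.target
    (takes them all down again, via BindTo=)
  # systemctl start openstack-nova-api.service
    (also pulls in the target, and with it all the other services)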

This is a new use case, but a valid one. We'll add a new dependency type
to make this work nicely, so that if the target is started all services go
up, if the target goes down all services go down, but if individual
services are started/stopped they don't influence the target or any
other services. In the meantime, please use what I suggested above; it
comes pretty close to the desired behaviour, and is very similar to
what we'll add for you.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

