[HEADS-UP] The systemd unit files I'll post

Lennart Poettering mzerqung at 0pointer.de
Mon Jul 19 14:59:30 UTC 2010


On Mon, 19.07.10 10:10, Stephen Gallagher (sgallagh at redhat.com) wrote:

> We already do this. The log messages you're seeing are actually
> debugging messages. You probably want to use --debug-to-files to ensure
> that these debug messages aren't printed to the console (and instead go
> to /var/log/sssd/*.log)

Ah, great. Fixed that now; it makes the sssd.service file even shorter.

> Could you explain "socket activation" to me here? The way our SSS
> clients work is that they communicate over a local socket in
> /var/lib/sss/pipes/[nss|pam] to the appropriate responder service
> (sssd_nss or sssd_pam, respectively).

Socket activation is one of the key features of systemd: it pulls the
creation of the listening socket out of the daemons and into the init
system. You basically tell systemd that it should listen for you on a
specific socket, and then when traffic arrives on the socket systemd
makes sure to spawn your daemon and passes the listening socket to
it. This is a bit like inetd, except that support for local (AF_UNIX)
sockets is what really matters here, and the suggested mode of
operation is that a single daemon is started which then handles all
further connections, whereas with inetd the most common way to do
things was to spawn one instance for each connection.
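
As a rough sketch of how this could look for the NSS pipe you mention,
a socket unit along these lines might do it (the unit name and the
exact directives are illustrative assumptions on my part, not
something I have tested against sssd):

  # sssd_nss.socket -- hypothetical socket unit for the SSSD NSS pipe
  [Unit]
  Description=SSSD NSS Responder Socket

  [Socket]
  # systemd creates and listens on this AF_UNIX socket during early boot
  ListenStream=/var/lib/sss/pipes/nss

  [Install]
  WantedBy=sockets.target

By default a unit named sssd_nss.socket would activate a matching
sssd_nss.service as soon as the first client connects.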

This has multiple advantages:

1) It allows race-free on-demand starting of services. In your case
   this is probably not particularly important though, since unless I
   am mistaken the service would be requested very early during boot
   anyway.

2) It allows race-free parallel starting of the service providing a
   socket and the services using it: since the socket is created early
   during boot, it is possible to spawn sssd at the same time as any
   client using it, and the clients' requests will automatically be
   queued in the socket by the kernel. When sssd has finished starting
   up it will then go on to process the queued requests. This is the
   key feature systemd uses to maximize the parallelization of bootup.

3) It allows us to get rid almost entirely of user-configured
   dependencies. Since all sockets are established in a single step
   early during bootup, dependencies are handled completely
   automatically: if a service needs another service it just connects
   to its socket and can use it, without having to explicitly wait for
   that dependency or for the socket to be established.

4) It makes things more robust, because it allows us to restart
   services without losing a single client request: the socket stays
   installed the whole time, so it is always connectable. Especially
   with stateless protocols this allows us to restart services without
   the clients even noticing. If the daemon crashes, the socket stays
   intact, and when we restart the process it can just continue where
   it left off (see the example after this list).
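
To illustrate that last point: with a socket unit in place, something
like the following should restart the daemon without the listening
socket ever going away, so connections made in the meantime are simply
queued by the kernel (the unit name is again just an assumption):

  systemctl restart sssd.service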

This is inspired by Apple's launchd. For a longer explanation see the
original blog novel http://0pointer.de/blog/projects/systemd.

Adding support for socket activation to a daemon is usually very easy:
it requires a minimal patch that replaces the socket creation code
already in the daemon with a little code that takes the socket from
systemd instead -- but only if spawned from systemd.
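
Here's a minimal sketch of that pattern, assuming the sd-daemon.h
helper header that ships with systemd; the error handling and the
socket path are illustrative only, not taken from the actual sssd
sources:

  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  #include "sd-daemon.h" /* sd_listen_fds(), SD_LISTEN_FDS_START */

  /* Returns a listening AF_UNIX socket: either the one systemd passed
   * in, or one we create ourselves when started the traditional way. */
  static int get_listening_socket(void) {
          union {
                  struct sockaddr sa;
                  struct sockaddr_un un;
          } addr;
          int fd, n;

          n = sd_listen_fds(0);
          if (n > 1) {
                  fprintf(stderr, "Too many file descriptors passed.\n");
                  return -1;
          }
          if (n == 1)
                  /* Spawned by systemd: reuse the socket it passed us. */
                  return SD_LISTEN_FDS_START + 0;

          /* Not socket activated: create the socket as before. */
          if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
                  return -1;

          memset(&addr, 0, sizeof(addr));
          addr.un.sun_family = AF_UNIX;
          strncpy(addr.un.sun_path, "/var/lib/sss/pipes/nss",
                  sizeof(addr.un.sun_path) - 1);
          unlink(addr.un.sun_path);

          if (bind(fd, &addr.sa, sizeof(addr.un)) < 0 ||
              listen(fd, SOMAXCONN) < 0) {
                  close(fd);
                  return -1;
          }
          return fd;
  }

The daemon then uses the returned fd in its accept() loop unmodified;
the only difference between the two code paths is who created the
socket.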

For details of the interfaces involved see:

http://0pointer.de/public/systemd-man/sd_listen_fds.html
http://0pointer.de/public/systemd-man/daemon.html (see the parts about
socket activation, and the end)

It would be great if as many daemons as possible supported this kind
of activation, so that we can parallelize our boot as much as possible.

Lennart

-- 
Lennart Poettering - Red Hat, Inc.

