I apologise for my absence lately. I've finally found some time (and
energy) to have a look at some tickets and noticed some changes Mark
and others have made to dscreate and related parts. Now, these changes
on their own are good, and I really want to thank everyone for picking
up my work in my "time off". Thank you!
But (there is always a but), these changes are missing a few details of
context which perhaps only live in my brain (sorry!). So this is my
attempt to brain dump the "why" of dscreate and some of the decisions I
made for it to support containers.
-- containers --
Containers are the new hotness (aka docker, cri-o, etc). They provide a
number of features which may not be apparent at first:
* Static linking and distribution channels - containers are statically
linked applications and runtimes, along with a method for their
distribution. This has the ability to provide A/B deployment and
rollback, and lets you build and ship in interesting ways.
* OS abstraction - Linux is fundamentally an H. P. Lovecraftian horror
in OS form. One only needs to read my slapi abstraction code for memory
detection to realise this. Linux became popular because it is easy to
access, but it is also a nightmare (complex to use and understand).
Containers provide a trivial Linux abstraction to things like RAM, CPU,
storage and networking. Rather than needing to understand how to format
and manage all 13000 facets of a system, you only need a minimal
surface understanding of the container abstraction instead.
* Build system - Surprisingly, containers are a replacement for things
like RPM or Automake. They have the ability to build and link parts of
the software together.
Now as with all abstractions and features, there are limitations or
tradeoffs:
* Containers are ephemeral - they run for a short time and must not
contain stateful information outside of the network or a disk storage
volume that is a subset of the filesystem (i.e. /var/lib/dirsrv).
* Containers have limited access points - you can't ssh in to view the
process, or simply attach GDB. This means you have limited ingress and
egress. You need to configure via environment variables or "network
based" registries, and your container visibility comes from logging to
stdout or via the network and health checks.
* And more ...
For our discussion these are the only two important ones.
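As a small sketch of the stdout rule above (the logger name and message
are mine, purely illustrative), a containerised process routes all of
its logging to stdout so the container runtime can collect it:

```python
import logging
import sys

# In a container, logs must go to stdout so the runtime (docker,
# cri-o) can collect them; a file under /var/log/dirsrv would vanish
# with the container. Illustrative sketch only.
log = logging.getLogger("ds-container")
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(sys.stdout))

log.info("ns-slapd starting; all output to stdout for the runtime")
```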
-- Shut up William and get to the point --
Okay okay. It's a lot of info up to this point. Let's talk about 389-ds
and its integration.
389-ds was designed for the paradigms of 20 years ago - hand-reared,
free-range Solaris servers that roamed the DC. That means:
* DS has lots of state on disk in many places (/etc/dirsrv,
/var/lib/dirsrv, /var/log/dirsrv, perhaps custom plugins ...)
* DS has customisation - named instances, live configuration (dse.ldif)
* DS expects SSH access - instances are created interactively, you can
edit dse.ldif, you need to tail logs, and such.
* DS can have configuration divergence, i.e. refint enabled only on a
single server.
* DS expects to live foreverrrrrrrrr...... But it also expects to have
help getting started (dscreate). You can't just start "ns-slapd" and
have an instance.
In fact, most of the things that we have in 389-ds in some way conflict
with the ideas of containers. Let's explore a few.
On a long lived server we'll set up the instance, patch/reboot,
restart, and eventually decommission.
But in a container, the instance (and its hostname) always changes. The
"instance" doesn't know if it's new (empty /etc, /var, etc) or has data
(attaching storage with a dse.ldif/backends).
As a result, a containerised DS deploys an instance *during* the
container build. This instance MUST have a fixed name so that when you
deploy a new container version, it can "find" the same instance. I
chose localhost, but it may as well be "snuffleufugus" for all I care.
What matters is that an instance must be statically named.
-- In place upgrade --
The concept of upgrade scripts can't work in a container. On a host,
when we run rpm scripts, they act on the data on the server. But in a
container, the rpms are installed OUTSIDE of the production
environment. As a result, the upgrade scripts are NOT run on startup of
the container.
This is why there is some code in fedse.c to add elements to dse.ldif
(rather than scripts) and why "upgrade" code should be added (it's in
the new plugin patch!) for data migrations. We need to treat every ns-
slapd execution as a potential chance to upgrade our data and
configuration.
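To illustrate the "every execution may upgrade" idea, here's a hedged
sketch (not the actual plugin patch, all names are mine) of an
idempotent migration runner that could run on each startup:

```python
CURRENT_DATA_VERSION = 2  # the version this build of the server writes

def maybe_upgrade(stored_version, migrations):
    """Apply each pending migration exactly once, in order.

    `migrations` maps a version number to a callable that upgrades
    data from that version to the next. Running this on every startup
    is safe: when stored_version is already current, nothing happens.
    """
    applied = []
    while stored_version < CURRENT_DATA_VERSION:
        migrations[stored_version]()
        applied.append(stored_version)
        stored_version += 1
    return stored_version, applied
```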
-- Environment variables --
Not added yet, but we should support changing the password and other
settings from environment variables (TBH I want to disallow dynamic
configuration in containers so that dse.ldif becomes RO). Now, this
raises a question when we deploy a new container today: "what's the
directory manager password?" Because we have to deploy an instance "as
part" of the build, we need to ship a default password today. The
build can't expect to have one provided!
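A sketch of that tension (the variable name and default value here are
illustrative, not the real ones): the image bakes in a known default,
and startup prefers whatever the orchestrator injects through the
environment.

```python
import os

# Shipped in the image because the build can't be handed a secret;
# it MUST be replaced at first startup. Value is illustrative.
DEFAULT_DM_PASSWORD = "Directory_Manager_Password"

def dm_password(env=None):
    """Prefer a password injected by the container orchestrator,
    falling back to the (known, insecure) baked-in default."""
    env = os.environ if env is None else env
    return env.get("DS_DM_PASSWORD", DEFAULT_DM_PASSWORD)
```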
-- Replication --
Because containers are spun up/down in clusters, the number of read-
only/hub/master replicas may change at any time. Thinking about how to
automatically allocate replica ids, replicate data, and more, is
important for automation and automatic scaling of clusters.
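One naive scheme for the replica id question, purely as food for
thought (a real cluster would need an atomic claim against shared
state, and the 65534 upper bound is my assumption):

```python
def allocate_replica_id(taken, max_id=65534):
    """Claim the lowest replica id not already in use. Sketch only -
    no locking and no shared registry, so two containers starting at
    once could race; the point is just the allocation shape."""
    for rid in range(1, max_id + 1):
        if rid not in taken:
            return rid
    raise RuntimeError("no free replica ids")
```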
-- Resource limiting
Most of this is a solved problem already in the automatic tuning code,
but it's worth mentioning that we need to be well behaved in a variety
of resource limiting scenarios.
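As an example of what "well behaved under resource limits" involves, a
best-effort read of the container memory limit from cgroups (the two
paths cover cgroup v2 and v1; treat this as a sketch, not the server's
actual detection code):

```python
from pathlib import Path

def container_memory_limit():
    """Return the cgroup memory limit in bytes, or None if unlimited
    or undetectable. Sketch of the probing autotuning must do."""
    candidates = (
        "/sys/fs/cgroup/memory.max",                    # cgroup v2
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
    )
    for path in candidates:
        f = Path(path)
        if f.exists():
            raw = f.read_text().strip()
            if raw == "max":
                return None  # v2 spelling of "no limit"
            return int(raw)
    return None
```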
I hope this gives some ideas and food for thought about DS in
containers. I think it's an important use case, and dscreate was
designed to support this. It would be great if someone else were to
"share" the work in making this a reality.