A simple way to consider this: if every 389 instance in a container is a
read-only replica, you simplify your system a lot (RO instances have a replica ID of
65535, I think). This way, on startup/shutdown you just re-initialise the RO replica
from an external hub or similar, so you don't care if you delete the volume associated
with the container.
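The re-init step could look roughly like this, run against the hub. This is a sketch
only: the instance name "hub", the agreement name "to-container", and the suffix are
all placeholders, and the dsconf sub-commands should be checked against your
389-ds-base version:

```shell
# Run against the hub/supplier, not the container.
# "hub", "to-container", and the suffix are placeholder names.
dsconf hub repl-agmt init to-container --suffix dc=example,dc=com

# Poll until the online re-initialisation of the consumer completes.
dsconf hub repl-agmt init-status to-container --suffix dc=example,dc=com
```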
If you plan to make your container instances writeable, you should probably not
autoscale
- consider a container addition/removal the same as adding/removing a host, requiring a
clean RUV and other maintenance tasks to be performed. Consider each persistent volume,
with its replica ID, db, and changelog, as the "instance"; the container just
enables access to it.
So every time you add another container when scaling, you need to add another
persistent
volume with its own unique replica ID, db, and changelog, and then set up
replication between them.
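In Kubernetes terms that pattern maps more naturally onto a StatefulSet with
volumeClaimTemplates than onto a Deployment: each pod ordinal gets its own stable PVC,
which matches the "one volume = one instance" idea. A minimal sketch, with the image
and StorageClass names as placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dirsrv
spec:
  serviceName: dirsrv
  replicas: 2
  selector:
    matchLabels:
      app: dirsrv
  template:
    metadata:
      labels:
        app: dirsrv
    spec:
      containers:
      - name: dirsrv
        image: my-registry/389-ds:latest   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data                 # all stateful data lives here
  volumeClaimTemplates:                    # one PVC per pod ordinal
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: my-storageclass    # placeholder
      resources:
        requests:
          storage: 10Gi
```

Note that this only gives each container a stable volume; the unique replica IDs and
the agreements between them still have to be configured per volume.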
Perhaps what could help me is a diagram of your planned infrastructure?
To help with this, let's assume:
[ Container 1 ]
|
[ Volume ID abcd ]
Now you destroy container 1 and upgrade to a newer version. So long as all your
stateful data is in the volume (dse.ldif, db, changelog db), this is fine:
[ Container NEW! ]
|
[ Volume ID abcd ]
It would act like container 1 did, with the same replica ID etc.
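With plain docker that flow looks something like the following (the image tags and
volume name are placeholders; the key point is that the SAME named volume is
re-attached):

```shell
# Old container with its state on the named volume "abcd".
docker run -d --name container1 -v abcd:/data my-389-image:old

# Destroy the container - the volume survives.
docker rm -f container1

# New container re-attaches the same volume, so it inherits dse.ldif,
# db, and changelog, and therefore the same replica ID.
docker run -d --name container-new -v abcd:/data my-389-image:new
```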
The docker image just has the 389-ds packages installed. I'm running an init
script, which checks whether there is already any data present in the attached volume,
and either starts the existing instance or creates a new one using dscreate and .inf
files. So I think I have auto-scaling covered, as the volumes are also created on
demand, using a kubernetes StorageClass.
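For anyone following along, the .inf that dscreate consumes looks roughly like this.
The values below are illustrative, not Aravind's actual config; dscreate
create-template will emit a fully commented version to start from:

```ini
; Example dscreate .inf - all values here are placeholders.
[general]
config_version = 2

[slapd]
instance_name = localhost
root_password = ChangeMe123
port = 389
secure_port = 636

[backend-userroot]
suffix = dc=example,dc=com
```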
It would be great to have some more testing of the dscontainer tool too, so please see
how
that goes. You can use the latest with opensuse/tumbleweed:latest as a docker base
image,
and just zypper in 389-ds-base. If you want even NEWER versions, you can look at
network:ldap as a repo - I'm happy to help provide dockerfile advice for these
cases.
These assume all your state is in /data, so provided you have that you can work as per
the
example above.
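A minimal sketch of such a Dockerfile follows. The dscontainer binary path and the
exposed ports should be verified against the installed package, and the network:ldap
repo would be added before the zypper install if you want the newer builds:

```dockerfile
# Sketch only - verify the path 389-ds-base installs dscontainer to.
FROM opensuse/tumbleweed:latest

RUN zypper --non-interactive install 389-ds-base && zypper clean --all

# All stateful data (dse.ldif, db, changelog) lives under /data.
VOLUME /data
EXPOSE 3389 3636

# dscontainer runs the server in the foreground as the container entrypoint.
CMD ["/usr/sbin/dscontainer", "-r"]
```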
I'll also spin up a separate set for testing dscontainer, from the repos
you've mentioned.
—
Sincerely,
William Brown
Senior Software Engineer, 389 Directory Server
SUSE Labs
Thanks,
Aravind