Followup on koji staging setup

Ralph Bean rbean at redhat.com
Fri Jun 5 20:12:04 UTC 2015


This is partly a product of discussion at the FAD:
https://fedoraproject.org/wiki/FAD_Release_Tools_and_Infrastructure_2015

and partly a response to the questions about requirements from April:
https://lists.fedoraproject.org/pipermail/infrastructure/2015-April/016121.html

> We now would like to add some requirements I think. What are they? :)
>
> Do we need to do rawhide/branched composes? Daily?
>
> Do we need to be able to do run-pungi (RC/TC) test composes?
>
> Something else?
>
> Lets find out our requirements before we adjust things.

So, here are some requirements coming out of the FAD:

1) to do rawhide/branched and RC/TC test composes.  Hopefully, rawhide
   composes will soon look more like RC/TC composes, so we won't need to make
   the distinction.
2) specifically, we want to do this so we can develop and iterate on the
   compose tools
3) therefore, we need to be able to run them relatively rapidly
4) furthermore, we need staging to be as production-like as possible, so we
   know what we're testing.

At the FAD, we came up with four different strategies to get there, with
different pros/cons:

# Leverage koji's secondary volume capabilities #

Apparently, koji can support mounting multiple volumes.  We could:

  - Mount the prod /mnt/koji read-only in staging and tell staging koji that
    this will be its "secondary volume".

  - Dump the prod DB and import it into staging.

  - Run a sql script that re-points all the rpm entries from what was the
    prod primary volume to the new staging secondary volume (which is just
    the prod primary volume, mounted read-only).  A rough sketch follows.
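
Something like this, assuming koji's usual postgres schema (the volume
table and the build.volume_id column; names from memory, so treat it as
pseudocode) and made-up hostnames:

    # dump prod and load it into a fresh staging db
    pg_dump -h db-koji01 -U koji koji > koji-prod.sql
    dropdb -h db-koji01.stg koji && createdb -h db-koji01.stg koji
    psql -h db-koji01.stg -U koji koji < koji-prod.sql

    # re-point builds from the old prod primary volume to the staging
    # secondary volume (named "prod" here for illustration)
    psql -h db-koji01.stg -U koji koji <<'EOF'
    INSERT INTO volume (name) VALUES ('prod');
    UPDATE build
       SET volume_id = (SELECT id FROM volume WHERE name = 'prod')
     WHERE volume_id = (SELECT id FROM volume WHERE name = 'DEFAULT');
    EOF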

We could run full composes in staging here, since all the rpm data would be
available just like in prod.  The one problem is that we could not create
hardlinks on the read-only mount, so we would need to make that portion of the
compose process optional... or work around it some other way.  For what it's
worth, I personally like this option the most.
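
That workaround might be as small as a copy fallback wherever the compose
tooling currently hardlinks, something like:

    # creating a hardlink whose source lives on a read-only mount fails
    # (EROFS), and hardlinks can't cross filesystems anyway, so fall
    # back to a plain copy
    ln "$src" "$dst" 2>/dev/null || cp -p "$src" "$dst"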

Staging koji would drift out of sync with production over time, but we could
re-sync it every week or month or so.  We could automate that with ansible and
cron.
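
Something as small as a cron job dropped in place by ansible would do; the
resync script named here is hypothetical:

    # /etc/cron.d/koji-stg-resync -- re-import the prod db and re-point
    # volumes every Monday at 03:00
    0 3 * * 1  root  /usr/local/bin/koji-stg-resync >> /var/log/koji-stg-resync.log 2>&1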

# Leverage writable snapshots from the netapp #

The netapp we use for storage could potentially provide a "writable snapshot",
which might make for an easy-win solution.  If we could snapshot the prod
mount, remount it in staging as a writable snapshot, and take a dump of the
prod database and import it into staging, then staging could just write on top
of that snapshot without affecting prod.  We would just need to take a new
snapshot every so often to avoid growing beyond our bounds in staging.
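
On the staging side, consuming it should just be a mount change; the export
path here is made up:

    # mount the writable clone where prod mounts the real volume
    mount -t nfs netapp01:/vol/koji_stg_clone /mnt/koji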

If this is even possible (we have to ask), we wouldn't be able to easily
automate syncing from prod, since it seems like getting a new snapshot would
require filing a ticket each time.  Correct me if I'm wrong here.

A nice aspect of this solution is that it requires zero koji-specific or
compose-specific knowledge and tooling to pull off.

# Write a custom "koji snapshotter" tool #

This approach involves writing a custom tool that knows about koji.  It would
inspect prod koji and copy over a subset of the content and db relations
required to do a compose in staging.

This involves no mount fanciness, but it does involve a good deal of custom
development against koji's API.  It could work great, but we'd have to redo
it when koji 2.0 comes out (eventually, eventually).
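
To give a flavor, here is a minimal sketch with today's koji CLI, assuming
a configured "stg" client profile and glossing over the db relations, which
are the hard part:

    # copy the latest builds of one tag from prod into staging
    for nvr in $(koji list-tagged f22 --latest --quiet | awk '{print $1}'); do
        koji download-build "$nvr"
    done
    # a real tool would also recreate tags, inheritance, and buildroot
    # history through the API, not just import the rpms
    koji -p stg import ./*.rpm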

# Just fully copy all 30T of prod koji #

imcleod says that for less than the cost of the FAD, we could get a storage pod
full of disks and have all the space in staging that we want:
https://www.backblaze.com/blog/backblaze-storage-pod-4/

With this, our test composes could potentially be faster than in prod, which is
nice for testing.  However, I can't speak to how easy it would be to get,
install, and maintain that hardware in our environment.
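
If we went that route, the initial copy itself would be simple, just slow
(hostname made up):

    # -a preserves metadata; -H preserves hardlinks, which keep the
    # compose trees under /mnt/koji from ballooning in size
    rsync -aH /mnt/koji/ storagepod01:/mnt/koji/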

Any input on which of these options is preferable would be appreciated.