hosted reproducible package building with multiple developers?

Daniel P. Berrange berrange at redhat.com
Fri Dec 10 15:06:59 UTC 2010


On Fri, Dec 10, 2010 at 09:17:27AM -0500, seth vidal wrote:
> On Fri, 2010-12-10 at 14:02 +0000, Daniel P. Berrange wrote:
> > On Wed, Dec 08, 2010 at 01:07:32PM -0500, seth vidal wrote:
> > > On Wed, 2010-12-08 at 13:03 -0500, James Ralston wrote:
> > > > Riddle me this.
> > > > 
> > > > We want to provide a server for developers within our organization to
> > > > build RPM packages for use within our organization.
> > > > 
> > > > These are our requirements:
> > > > 
> > > >     1.  The developers must not be able to leverage the package build
> > > >         process to obtain root access on the server.
> > > > 
> > > >     2.  If a package has a build dependency that is not explicitly
> > > >         specified, the build must fail.
> > > > 
> > > >     3.  If two developers are building packages simultaneously, their
> > > >         builds must not conflict.
> > > > 
> > > > The only way to satisfy requirements #2 and #3 is to use a chroot'ed
> > > > build environment.
> > > > 
> > > > mock(1) uses a chroot'ed build environment, but mock fails requirement
> > > > #1, as anyone in the "mock" group can trivially root the box.
> > > > 
> > > > I think that koji would satisfy all three requirements, because koji
> > > > uses mock to build, but doesn't allow developers to interface with
> > > > mock directly.  But setting up a koji infrastructure seems like a
> > > > highly non-trivial task.
> > > > 
> > > > Is there really no way to meet all three of these requirements without
> > > > going the full-blown koji route?
> > > > 
> > > 
> > > the mock chroots that koji uses could still be rooted by someone who can
> > > submit their own build-requirement-providing packages.
> > > 
> > > in order to protect the builders they must be:
> > > 1. disposable
> > > 2. in a vm
> > > 
> > > or possibly both.
> > 
> > I'm not familiar with what attacks you can do on mock's
> > chroot setup offhand, but perhaps it is possible to
> > avoid them by also leveraging some of the new kernel
> > container features, which allow you to build a stronger
> > virtual root without going to the extreme of a full
> > VM.
> 
> Since the pkgs have to be installed in the chroot as root, if a user can
> specify their own dependencies then they can buildrequire a pkg which
> has a %pre or %post script that breaks out of the chroot and can then get
> to the real system root. The 'easy' solution was to have throw-away vms,
> so even if they got out they couldn't get far and the system wouldn't
> last long.

Hmm, it sounds very much like you ought to be able to prevent
this kind of attack with clone+pivot_root, though it is likely
going to be more work than a traditional chroot() based setup.

The theory is as follows, though:

 1. clone() with the CLONE_NEWNS set
 2. Remount / with MS_PRIVATE|MS_REC flags

These two steps ensure the new process has a totally private
filesystem hierarchy, where mount/unmount changes cannot leak
back into the host OS and new mounts in the host can't
propagate down into the container process. Then the complex
bit is to set up a new root:

 3. Create $MOCKROOTPATH
 4. Create $MOCKROOTPATH/.oldroot
 5. Mount tmpfs on $MOCKROOTPATH/.oldroot
 6. Create $MOCKROOTPATH/.oldroot/new
 7. Bind mount $MOCKROOTPATH $MOCKROOTPATH/.oldroot/new
 8. chdir $MOCKROOTPATH/.oldroot/new
 9. pivot_root(".", ".oldroot")

At this point the container process has a new / filesystem
that is based on what the main OS sees as $MOCKROOTPATH.
The main OS's root filesystem is still (temporarily) visible
to the container process as /.oldroot.
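
Roughly, steps 1-9 might look like this in C (an untested sketch:
error checking is omitted, the $MOCKROOTPATH value is just an example
of where mock might have prepared its chroot tree, and step 3 is
assumed to have been done already by mock when it populated the tree):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <signal.h>
  #include <sys/mount.h>
  #include <sys/stat.h>
  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  #define MOCKROOTPATH "/var/lib/mock/devel-x86_64/root"  /* example path */

  /* glibc has no wrapper for pivot_root(2), so call it directly */
  static int do_pivot_root(const char *new_root, const char *put_old)
  {
      return syscall(SYS_pivot_root, new_root, put_old);
  }

  static int container_main(void *arg)
  {
      /* 2. stop mount/umount events propagating in either direction */
      mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL);

      /* 4-7. prepare the new root, with somewhere to stash the old one */
      mkdir(MOCKROOTPATH "/.oldroot", 0755);
      mount("tmpfs", MOCKROOTPATH "/.oldroot", "tmpfs", 0, NULL);
      mkdir(MOCKROOTPATH "/.oldroot/new", 0755);
      mount(MOCKROOTPATH, MOCKROOTPATH "/.oldroot/new", NULL, MS_BIND, NULL);

      /* 8-9. swap the roots over */
      chdir(MOCKROOTPATH "/.oldroot/new");
      do_pivot_root(".", ".oldroot");
      chdir("/");

      /* ...steps 10-12 and the actual package build go here... */
      return 0;
  }

  int main(void)
  {
      static char stack[1024 * 1024];

      /* 1. child gets its own mount namespace */
      pid_t child = clone(container_main, stack + sizeof(stack),
                          CLONE_NEWNS | SIGCHLD, NULL);
      waitpid(child, NULL, 0);
      return 0;
  }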

Now you'd need to bind mount the directory containing the RPMs
to install & any other data you want, and also set up the
generic special mounts you might need:

 10. Bind mount /.oldroot/path/to/rpms /tmp/rpms
 11. Mount /proc, /dev, /sys, /dev/pts etc as normal

NB, devpts should have the 'newinstance' flag set to ensure
there's no access to /dev/pts/N nodes from the main OS.
Also the main OS should have been using 'newinstance' for
its own mount of devpts (/me wonders if Fedora is doing
that yet... it should be, to ensure LXC containers have a
properly isolated /dev/pts).
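
In C that might look something like the following (again only a
sketch; the RPM path is the placeholder from step 10, and the
chroot's /dev is assumed to have been populated by mock already):

  #include <sys/mount.h>
  #include <sys/stat.h>

  static void setup_mounts(void)
  {
      /* 10. expose the RPMs (and any other data) via bind mounts */
      mkdir("/tmp/rpms", 0755);
      mount("/.oldroot/path/to/rpms", "/tmp/rpms", NULL, MS_BIND, NULL);

      /* 11. the usual pseudo filesystems */
      mount("proc", "/proc", "proc", 0, NULL);
      mount("sysfs", "/sys", "sysfs", 0, NULL);

      /* a private devpts instance, so the host's ptys are not visible */
      mount("devpts", "/dev/pts", "devpts", MS_NOSUID | MS_NOEXEC,
            "newinstance,ptmxmode=0666");
      /* with 'newinstance', /dev/ptmx must point at this instance */
      mount("/dev/pts/ptmx", "/dev/ptmx", NULL, MS_BIND, NULL);
  }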

Finally, iterate over all the mount points under /.oldroot,
killing them off longest path first:

 12. foreach mountpoint under /.oldroot
       umount $mountpoint
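
A sketch of that loop in C, reading the mount table from the /proc
instance mounted in step 11 (untested, error handling omitted):

  #include <mntent.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mount.h>

  /* sort descending by path length, so nested mounts go before parents */
  static int longest_first(const void *a, const void *b)
  {
      size_t la = strlen(*(char *const *)a);
      size_t lb = strlen(*(char *const *)b);
      return (la < lb) - (la > lb);
  }

  static void unmount_old_root(void)
  {
      char *paths[256];
      int n = 0, i;
      struct mntent *ent;

      FILE *mounts = setmntent("/proc/self/mounts", "r");
      if (!mounts)
          return;
      while ((ent = getmntent(mounts)) != NULL && n < 256) {
          if (strncmp(ent->mnt_dir, "/.oldroot", 9) == 0)
              paths[n++] = strdup(ent->mnt_dir);
      }
      endmntent(mounts);

      qsort(paths, n, sizeof(paths[0]), longest_first);

      /* 12. longest path first; /.oldroot itself goes last */
      for (i = 0; i < n; i++) {
          umount(paths[i]);
          free(paths[i]);
      }
  }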

The process now has a private root filesystem where it
can only see files from $MOCKROOTPATH, the magic
filesystems (proc, devfs, sysfs, etc.), a private devpts,
and the specific data directories you bind mounted for it.
Kernel bugs aside, there should be no means of escape
from this private filesystem, as there is with a plain chroot().

Adding CLONE_NEWPID would be worthwhile to stop the
mock process seeing any other PIDs on the machine.

The cgroups device ACL controller could be used to
block the cloned mock process from doing mknod in /dev
to access other device nodes (i.e. whitelist just /dev/null,
/dev/zero, /dev/urandom and any /dev/pts/*).
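
For example, with the devices controller mounted (the mount point and
group name here are purely illustrative), something along these lines
would deny everything and then whitelist a handful of nodes:

  #include <stdio.h>
  #include <sys/types.h>

  static void write_str(const char *path, const char *val)
  {
      FILE *fp = fopen(path, "w");
      if (fp) {
          fputs(val, fp);
          fclose(fp);
      }
  }

  static void setup_device_acl(pid_t builder)
  {
      char pid[32];

      /* deny everything... */
      write_str("/cgroup/devices/mock/devices.deny", "a");
      /* ...then whitelist a handful of character devices */
      write_str("/cgroup/devices/mock/devices.allow", "c 1:3 rwm");   /* null */
      write_str("/cgroup/devices/mock/devices.allow", "c 1:5 rwm");   /* zero */
      write_str("/cgroup/devices/mock/devices.allow", "c 1:9 rwm");   /* urandom */
      write_str("/cgroup/devices/mock/devices.allow", "c 136:* rwm"); /* pts */

      /* move the builder process into the group */
      snprintf(pid, sizeof(pid), "%d", (int)builder);
      write_str("/cgroup/devices/mock/tasks", pid);
  }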

There are various other CLONE flags that lock down more
things if desired, e.g. to hide all host network interfaces.
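
E.g., building on the hypothetical clone() call sketched earlier, the
flag set might grow to something like:

  /* each CLONE_NEW* flag detaches another namespace from the host:
   * mounts, PIDs, network interfaces, SysV IPC and the hostname */
  pid_t child = clone(container_main, stack + sizeof(stack),
                      CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWNET |
                      CLONE_NEWIPC | CLONE_NEWUTS | SIGCHLD, NULL);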

Of course a lot of this setup complexity could be avoided
by using either libvirt's LXC driver or the alternative
LXC command line tools. Both of these would require you to
copy some of the host yum/rpm/mock binaries inside the
chroot, because they'd require you to exec() an initial
binary when starting the container, instead of just doing
a clone() and continuing to run in your existing process
address space.

Regards,
Daniel

