Proposed F19 Feature: High Availability Container Resources

David Vossel dvossel at redhat.com
Fri Feb 1 20:55:12 UTC 2013



----- Original Message -----
> From: "Daniel J Walsh" <dwalsh at redhat.com>
> To: "Development discussions related to Fedora" <devel at lists.fedoraproject.org>
> Sent: Friday, February 1, 2013 10:09:27 AM
> Subject: Re: Proposed F19 Feature: High Availability Container Resources
> 
> On 01/29/2013 03:17 PM, Glauber Costa wrote:
> >>>> = Features/ High Availability Container Resources =
> >>>> https://fedoraproject.org/wiki/Features/High_Availability_Container_Resources
> >>>> 
> >>>> Feature owner(s): David Vossel <dvossel at redhat.com>
> >>>> 
> >>>> The Container Resources feature allows the HA stack (Pacemaker +
> >>>> Corosync) residing on a host machine to extend management of
> >>>> resources into virtual guest instances (KVM/LXC).
> >>> 
> >>> Is this about LXC or libvirt-lxc? These two are entirely different
> >>> projects, sharing no code, which makes me wonder which project is
> >>> meant here?
> >> 
> >> Yep, I left that vague and should have used the term "Linux
> >> containers" instead of LXC.  I'm going to update the page to
> >> reflect this.
> >> 
> >> This feature architecturally doesn't care which project
> >> manages/initiates the container.  All we care about is that the
> >> container has its own isolated network namespace that is reachable
> >> from the host (or whatever node is remotely managing the resources
> >> within the container).  I intentionally chose to use tcp/tls as the
> >> first transport we will support to avoid locking this feature into
> >> use with any specific virt technology.
> >> 
> >> With that said, I'm likely going to be focusing my test cases on
> >> libvirt-lxc just because it seems like it has better Fedora support.
> >> The LXC project appears to be moving all over the place.  Part of
> >> the project is really to identify good use-cases for Linux
> >> containers in an HA environment.  The KVM use-case is fairly
> >> straightforward and well understood though.  I'll update the page
> >> to list the Linux container use-case as a possible risk.
> > 
> > Please also keep in mind that LXC usually refers to a specific
> > project, either the original "lxc" code or "libvirt-lxc".  We have
> > other Container Solutions in Fedora, like OpenVZ.
> > 
> > You may be able to reach a broader base by making your solution work
> > on those too (and of course, I'd be more than happy to help trim any
> > issues you may find).
> > 
> > -- E Mare, Libertas
> > 
> I would also like to understand how we can work together with
> virt-sandbox (Secure Linux Containers).

Really interesting idea.

Integrating with virt-sandbox would allow the cluster to dynamically launch resources in a contained environment.

My understanding is that this contained environment would give users the ability to automatically set CPU and memory usage limits for a resource as well as isolate that resource's access to the rest of the system.  Everywhere that resource gets launched in the cluster, it gets the exact same environment.

For the HA config we could do this in a really slick way.  We could just allow people to start defining environment details (number of CPUs, memory usage, network settings) in the resource definition.  Then, when it's time to launch the resource, if we have environment details associated with it, we'll launch the resource in a dynamically created guest sandbox environment instead of directly on the host.  This is really brilliant... Conceptually, it's like creating a virtual machine image on the fly for the resource to start in, one that follows the resource wherever it goes in the cluster.
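
To make that concrete, here is a rough sketch of the resource-limit half of the idea (purely illustrative; none of these attribute names or helpers exist in Pacemaker today, and it assumes a cgroup v1 layout mounted under /sys/fs/cgroup): pull hypothetical environment details out of a resource definition and apply them as cgroup limits before the resource agent is started.  The actual sandboxing would of course come from virt-sandbox/libvirt rather than raw cgroups.

    #!/usr/bin/env python
    # Illustrative sketch only: map hypothetical "environment details" from a
    # resource definition onto cgroup v1 CPU/memory limits, then start the
    # resource agent inside that limited environment.
    import os
    import subprocess

    # Hypothetical environment details attached to a resource definition.
    resource = {
        "name": "webserver",
        "agent": ["/usr/sbin/httpd", "-DFOREGROUND"],
        "cpu_shares": 512,                    # relative CPU weight
        "memory_limit": 256 * 1024 * 1024,    # bytes
    }

    def apply_limits(res):
        """Create per-resource cgroups and write the CPU/memory limits."""
        for subsys, knob, value in [
            ("cpu", "cpu.shares", res["cpu_shares"]),
            ("memory", "memory.limit_in_bytes", res["memory_limit"]),
        ]:
            path = os.path.join("/sys/fs/cgroup", subsys, "ha", res["name"])
            if not os.path.isdir(path):
                os.makedirs(path)
            with open(os.path.join(path, knob), "w") as f:
                f.write(str(value))
            # Move this process into the group; the resource agent started
            # below inherits the cgroup membership.
            with open(os.path.join(path, "tasks"), "w") as f:
                f.write(str(os.getpid()))

    def launch(res):
        """Apply the limits, then start the resource agent."""
        apply_limits(res)
        subprocess.call(res["agent"])

    if __name__ == "__main__":
        launch(resource)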

This would be fun to talk through sometime.  The remote LRMD daemon I'm working on would be the piece of the puzzle that allows the HA stack to reach into the contained environment to start/stop/monitor the resource living in the container.
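
And for the transport piece, a minimal host-side sketch of the idea (not the real lrmd wire protocol; the address, port, certificate paths, framing, and command names here are all invented for illustration): open a TLS connection to a small daemon listening inside the container's network namespace and ask it to run a monitor action.

    #!/usr/bin/env python
    # Host-side sketch: connect over TLS to a remote execution daemon inside
    # the container and request a "monitor" action for a resource.  The
    # address, port, cert paths, and one-line protocol are invented here.
    import socket
    import ssl

    CONTAINER_ADDR = ("192.168.122.50", 3121)   # hypothetical container IP/port

    def remote_action(resource, action):
        """Send one action request over TLS and return the daemon's reply."""
        ctx = ssl.create_default_context(cafile="/etc/pacemaker/ca.crt")
        ctx.load_cert_chain("/etc/pacemaker/host.crt", "/etc/pacemaker/host.key")
        ctx.check_hostname = False   # container is addressed by IP in this sketch
        raw = socket.create_connection(CONTAINER_ADDR)
        conn = ctx.wrap_socket(raw)
        try:
            conn.sendall(("%s %s\n" % (resource, action)).encode())
            return conn.recv(4096).decode()
        finally:
            conn.close()

    if __name__ == "__main__":
        print(remote_action("webserver", "monitor"))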

-- Vossel

