On 10/31/2012 12:40 PM, Robyn Bergeron wrote:
> ----- Original Message -----
>> From: "Mo Morsi" <mmorsi(a)redhat.com>
>> To: "aeolus-devel" <aeolus-devel(a)lists.fedorahosted.org>,
>> "Fedora Cloud SIG" <cloud(a)lists.fedoraproject.org>,
>> "Development discussions related to Fedora"
>> Sent: Wednesday, October 31, 2012 2:22:47 AM
>> Subject: Deploying fedora infrastructure (koji) across clouds
>>
>> // Deploying fedora infrastructure (koji) across clouds
>> // As promised, steps to deploy kojihub to an openstack instance
>> // communicating with builders on ec2
>> // Any provider supported by deltacloud will work
>> // (lots of em: http://deltacloud.apache.org/drivers.html)
>> // A short video of these steps in action can be seen here:
>> // http://youtu.be/qF2ctg7ItNc
> A few questions:
>
> #1: How does this take things like ARM or other archs into the picture - ie: I am
> guessing we can't really build ARM on ec2? :)
No ARM on EC2, but you can mix and match clouds as you like. Providers
are now just config options, so you can easily add or change the
environments and resources you leverage at any time. There's really a
world of possibilities available; we just have to identify the correct
level of abstraction the Fedora community needs.
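To give a concrete feel for what "providers are just config options" could look like, here's a hypothetical sketch (not an actual Aeolus or Deltacloud config file; the field names, hosts, and credentials are made up). Each entry points at a Deltacloud endpoint fronting a different cloud:

```yaml
# Hypothetical provider config: each entry is a Deltacloud endpoint
# plus credentials for the cloud behind it.  Adding a provider means
# adding an entry here, not changing any code.
providers:
  openstack-local:
    driver: openstack
    api_url: http://deltacloud.example.org:3002/api
    username: koji-hub
  ec2-builders:
    driver: ec2
    api_url: http://deltacloud.example.org:3005/api
    username: AKIAEXAMPLEKEY
```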
> #2: Could there be a way to take a (working) nightly build, build one's package
> against that nightly in a personal build of some sort, and somehow have a verification
> process that it built in that "personal build" before it goes into rawhide, etc?
> (or even... unit tests, etc.)?
Absolutely, repositories and packages can be added on the fly, and we
can incorporate any image already pushed to the cloud as well as build
new ones for our purposes.
Even what I demoed in the screencast (just the tip of the iceberg
concerning Deltacloud's capabilities) can be automated such that new
cloud instances w/ any stack we want can be provisioned on demand
anywhere we want.
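For example, provisioning a builder through Deltacloud comes down to a POST to /api/instances with an image_id (and optionally a hardware profile). Here's a minimal Python sketch of building such a request; the host, image ID, and profile name are placeholders:

```python
# Sketch: build the URL and form body for a Deltacloud "create instance"
# call (POST /api/instances).  The endpoint shape follows the Deltacloud
# REST API; the host and image ID below are placeholders.
from urllib.parse import urlencode

def create_instance_request(api_base, image_id, hwp_id=None, name=None):
    """Return (url, form_body) for launching an instance via Deltacloud."""
    params = {"image_id": image_id}
    if hwp_id:
        params["hwp_id"] = hwp_id   # hardware profile, e.g. an EC2 size
    if name:
        params["name"] = name
    return api_base.rstrip("/") + "/instances", urlencode(params)

url, body = create_instance_request("http://localhost:3002/api",
                                    "ami-00000000", hwp_id="m1.small")
# POST `body` to `url` with HTTP basic auth (your cloud credentials)
# to actually launch the instance.
print(url)
print(body)
```

Since Deltacloud speaks the same REST API regardless of the backend driver, the same request works whether the endpoint fronts OpenStack, EC2, or any other supported provider.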
> I'm mostly thinking about things like "how to not have perpetually broken
> rawhide" (avoid checking in things that will likely break the build in the first
> place).
Really, the workflow is the only thing that's vague to me at this point;
once that's down, I'm sure it can be implemented.

Perhaps people can send their thoughts, or we can discuss it at the next
Cloud SIG meeting? (Wednesdays at 10AM EST on #fedora-meeting-1, for
those that are interested)
> Re: the endless anaconda-killing-f18 thread: I know there's been some discussion
> about whether or not the devel process really accommodates what needs to happen with
> something like a full-blown anaconda re-write - and while I know that "THE
> CLOUD" is not the entire solution to that (there are obviously, as others have
> graciously pointed out, many other feature/fesco/planning/etc processes intertwined here
> that also need love) - it seems like having these capabilities might fit into a solution,
> or at least, something on the road to a better devel/build process.
>
> There are plenty of projects doing CI/CD - and having cloud infra makes this
> significantly easier to enable - though obviously not a lot of cases of people doing an
> entire distribution this way.
I've said it before and I'll say it again, the cloud is _not_ the be-all
end-all. :-) It's a great tool, and brings a powerful computational
resource to the table, but it's never going to completely mitigate the
need for local infrastructure.

I think hybrid technologies are the future; being able to leverage cloud
resources in addition to and alongside local ones, seamlessly and in an
open manner, will enable us to do some really cool things. I believe
Fedora can make great headway and lead on this front, we just have to
find what works for people and do it! :-)