Hi fedora-cloud people.
I just posted some code on GitHub that I thought would be of interest
here. It's a port of Amazon's ec2-net-utils RPM to Fedora 20. This adds
proper support for elastic network interfaces (ENIs) to F20 instances on EC2.
There's a README there with some gory details as well as a link to a copr
repository for builds.
This work may not be appropriate for Fedora's base AMI (which I believe
is cross-cloud), but in the future it would be nice if the official
AMI could support ENIs out of the box. I find ENIs very useful for
migrating services between instances with minimal downtime.
You are kindly invited to the meeting:
Fedora Cloud Workgroup on 2014-10-24 from 17:00:00 to 18:00:00 UTC
The meeting will be about:
Standing meeting for the Fedora Cloud Workgroup.
Right now, Cloud is the default environment in the Anaconda UI. This is
clearly not right, and there's an urgent ticket to do Something Else.
I know there was general agreement that we want it there, but I wonder
if it would be better to defer to F22. That's partly because I still
think it might be confusing to people who don't realize that it's for a
guest image, but mostly because there are enough kickstart hacks
(including the one that should keep growpart from breaking reboots)
that the comps environment doesn't really provide a good starting point
for someone making a cloud image.
Getting those hacks out of kickstart seems like a good goal for F22,
but I'd like to drop it for now if you agree. What do you think?
Fedora Project Leader
At this point, I thought I'd send out an email with my current understanding of the processes we need to add to the releng scripts for ostree, as well as some questions regarding these compose scripts, specifically:
- buildbranched [https://git.fedorahosted.org/cgit/releng/tree/scripts/buildbranched]
- buildrawhide [https://git.fedorahosted.org/cgit/releng/tree/scripts/buildrawhide]
Disclaimer: I have a limited understanding of releng processes, so feel free to correct me on anything. I'm simply hoping to help move along our ostree work faster.
It seems to me that we should add the bulk of the ostree processing after the compose completes, but before the rsync. In buildbranched, this is at line 180; in buildrawhide, line 172. This is where we'd init an ostree repo somewhere like /srv/ostree/repo, use our treefile to run the compose (which captures the RPMs to build the tree), and then generate a summary of the repo with `ostree summary -u`. For our purposes, the summary file resulting from this process would serve a purpose comparable to the repomd.xml file we use for our "standard" builds. My thinking is that these scripts could easily accomplish this and get the summary file to where it needs to be for the MirrorManager metalink work that needs to be tackled next.
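Concretely, the steps above might look something like this in the build scripts. This is only a dry-run sketch: the repo path, treefile name, and repo mode are my assumptions rather than settled releng decisions, and the `run` wrapper prints each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of the proposed ostree steps for buildbranched/buildrawhide.
# OSTREE_REPO and TREEFILE are assumed values, not settled releng paths.
OSTREE_REPO=/srv/ostree/repo
TREEFILE=fedora-rawhide.json   # hypothetical treefile name

# Print each command rather than executing it, since this is a sketch.
run() { echo "+ $*"; }

# One-time init; archive-z2 is the repo mode intended for serving over HTTP.
run ostree init --repo="$OSTREE_REPO" --mode=archive-z2

# Compose the tree from the treefile's package list.
run rpm-ostree compose tree --repo="$OSTREE_REPO" "$TREEFILE"

# Regenerate the summary file (our analogue of repomd.xml).
run ostree summary -u --repo="$OSTREE_REPO"
```

The init step would really only run on the first compose; subsequent composes reuse the same repo and just refresh the summary.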
That's my perception of what needs to be done after poking around. Next, some questions:
1. n00b question: I'm not sure how what needs to go *in* an image is decided. In order to run the ostree compose, we need to generate a treefile that contains -- among other things -- a list of RPMs that need to be installed. What's the best way to get content for that list?
2. The treefile also needs a branch name for the content. Any input on the naming scheme?
3. The treefile can also take a number of optional values. I'm not sure if any are needed for this process, but they are listed here: https://github.com/projectatomic/rpm-ostree/blob/master/doc/treefile.md Perhaps `gpg_key`, `boot_location`, and/or `units`?
4. What's the best way for me to test the changes I make to the scripts? Can I set up some sort of local build environment, or get access to a testing machine? Perhaps I should just send a patch to someone or a list?
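For what it's worth, a minimal treefile along the lines of the rpm-ostree treefile docs might look like the sketch below. The ref, repo name, package list, and GPG key ID are all placeholders; picking the real values is exactly what questions 1 through 3 are asking.

```json
{
    "ref": "fedora-atomic/rawhide/x86_64/base",
    "repos": ["fedora-rawhide"],
    "packages": ["kernel", "ostree", "rpm-ostree", "lvm2"],
    "units": ["docker.service"],
    "boot_location": "both",
    "gpg_key": "XXXXXXXX"
}
```

If I understand the docs correctly, the `repos` entries name .repo files that rpm-ostree expects to find alongside the treefile, so whoever answers question 1 would also need to decide which repo definitions to drop in there.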
I think that's it for now. I can pop in with MirrorManager stuff after we get this compose process working to the point that we're getting a good summary file.