Google Compute Engine Fedora Images (status update)
by Vaidas Jablonskis
Hi People,
I am pretty new to contributing to Fedora, especially the cloud group.
I have mentioned a few times on IRC (nickname: zooz) that I have started
working on a GCE cloud image.
So I thought I'd give you an update on my progress so other people can
jump in or help me figure out the process.
So far so good: the GCE image does not look much different from the already
existing OpenStack/EC2 images.
- kickstart files for appliance-creator:
https://github.com/vaijab/fedora-gce-image
- gcimagebundle spec/srpm/copr: https://github.com/vaijab/gcimagebundle-rpm
- google-compute-daemon spec/srpm/copr:
https://github.com/vaijab/google-compute-daemon-rpm
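For anyone unfamiliar with the layout, an appliance-creator kickstart for a cloud image generally looks something like the fragment below. This is an illustrative sketch, not copied from the fedora-gce-image repo; the package selection, partition sizing, and console settings here are my assumptions:

```
# illustrative kickstart fragment (assumed values, not from the actual repo)
lang en_US.UTF-8
timezone --utc Etc/UTC
rootpw --lock                     # cloud images rely on injected keys, not passwords
bootloader --timeout=0 --append="console=ttyS0,115200"
part / --size=2048 --fstype ext4

%packages
@core
google-compute-daemon             # the daemon packaged above
%end
```

appliance-creator would then consume this with something like `appliance-creator --config fedora-gce.ks --name fedora-gce --format raw` (exact flags vary by version).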
I understand that you might not want to use the Google-provided daemons
for managing ssh keys, users and forwarded IP addresses, but I thought I'd
package them up just in case.
Current blocker: https://bugzilla.redhat.com/show_bug.cgi?id=1055181. My
patch has been accepted and submitted to updates-testing, so it should be
resolved soon.
With what I currently have, I am able to build an image without any
issues, and it runs on GCE perfectly.
It would be great if anyone could have a quick poke at my RPM spec files
and let me know if there are any major issues before I submit them for
review for the first time :-) Also, if anyone wants to sponsor my Fedora
packager membership, I'd really appreciate it.
So what are the next steps? How do we actually start a process of getting
appliance KS files upstream?
Any pointers and feedback are welcome. If anyone wants to jump in and
help, please do so!
Thanks,
Vaidas
--
Vaidas Jablonskis
9 years, 6 months
Cloud SIG group Todo list
by Matthew Miller
People keep asking me what they can do to help. This is awesome, but I feel
like sometimes my answers are too wide open. Plus, I think we could benefit
from some more overall planning anyway.
I threw this together... about 5 minutes of effort total so far. I will
expand further, but really I don't want this to be _my_ list... it should be
_our_ list (Cloud WG members and the whole cloud SIG) -- basically, the
things that we want or need to get done.
https://fedoraproject.org/wiki/Cloud/Cloud_ToDo
Possibly it would be useful to separate ongoing work from one-time work, and
prioritize a bit. But initially, I wanted to just start getting things on
(virtual) paper. Feel free to pitch in! What's there right now is literally
what I thought of off the top of my head to kind of get started. We should
systematically go through the PRD and generate work areas and work items.
And possibly, get people's names attached to some of them. :)
--
Matthew Miller -- Fedora Project -- <mattdm(a)fedoraproject.org>
9 years, 7 months
lessons to be learned from centos cloud image process?
by Matthew Miller
Take a look at this, and particularly the "kickstart git -> image generation
-> smoketest" workflow.
----- Forwarded message from Karanbir Singh <mail-lists(a)karan.org> -----
> Date: Sun, 26 Jan 2014 21:36:53 +0000
> From: Karanbir Singh <mail-lists(a)karan.org>
> To: centos-devel(a)centos.org
> Subject: [CentOS-devel] Cloud Instance SIG Hackathon @ CentOS Dojo 31st Jan
> 2014
>
> hi,
>
> We are organising a hack session to try and build, test and deliver a
> set of CentOS-5/6 32bit/64bit images usable by various onpremise cloud
> setups. This email aims to give everyone an overview of what to expect
> on the day, so we can jump right in on the day and get productive. This
> is a bit of a wordy email, so feel free to skip details - I will have
> most of the important stuff on paper to hand out on the day as well.
>
> The Hack session is expected to start just after lunch, and will run
> through to the end of the day ( ~ 17:30 );
>
> On the day, I will have a local Wifi network with SSID DojoHackathon
> running at the time; everyone wanting to participate will need to get
> onto that. DHCP on the network will hand out 172.30.30.100 - 250 IPs.
> There is a gateway on .1 that will NAT requests to the upstream internet
> ( but I'm told it's slow, so don't rely on it being there ). If anyone
> needs content to pull, please let either me or Johnny know, we will
> mirror it down before the event and make sure it's on the mirror host on
> the network at the time. We are going to have :
> - CentOS 5/6 on both 32/64 bit x86
> - EPEL 5/6
> - EPEL-Testing 5/6
>
> Various people representing projects have offered to bring pre-setup
> cloud infra on their laptops, thanks for that. Let's try and target
> every one of those on the day. So far the list is :
> - OpenNebula
> - CloudStack
> - OpenStack ( the HPCloud edition )
> - OpenStack ( the RDO edition )
>
> A rather basic idea of what to expect in terms of infra/network on the
> day : http://bit.ly/1ffXr4G ; Workflow anticipated:
>
> - git.centos.org ( hosted locally ) will have the git repos that host
> kickstarts and metadata files that have some info around the kickstarts.
>
> - anyone can clone the git repos ( I will make sure it's pretty clear as
> to what repo to get for what task, ideally there will only be one git
> repo with all the kickstarts ).
>
> - make changes / edits / push back to git.centos.org ( please ensure git
> user.name and user.email are sane )
>
> - git post-receive triggers kick off the actual image builds on the
> image-builder node, which will then push the resulting file to
> cloud.centos.org ( both image-builder and cloud.centos.org mirrors will
> be hosted locally ).
>
> - cloud images from cloud.centos.org can then be downloaded and
> instantiated on the cloud infra people are running;
>
> - once satisfied that the image does everything that is 'required', git
> clone the t_functional repo, and run the test suite. PASS on that would
> indicate that the image is good to ship. For the day, we will trim
> the test suite down to just the basic stuff that runs in 10 min or less.
>
> - Indicate pass with a comment in the metadata file and a git commit
> message which starts with 'RELEASEABLE ', then git push.
>
> rinse & repeat for 32bit and 64bit.
>
> Worth noting here that the reason I have all the various components
> setup to work with real world urls ( faked by dnsmasq on the .1 machine
> ) is that post hackathon the exact same infra will go live on the same
> urls. With one major change : we will have little or no ACLs on the
> git repos at the hackathon, to make life easier and encourage
> participation. Post Hackathon, we'll need to establish a mechanism for
> people to request commit access.
>
> If we still have time at the end of the day, we can shoot to deliver
> something that works for vmware, ovirt and docker. I am still waiting to
> hear back from the Eucalyptus guys if someone from their side is going
> to be at the hack session.
>
> please note: all kickstarts and images will need to only consume content
> hosted in mirror.centos.org and epel ( or if you need something else,
> let us know before Wed 29th ).
>
> See you there,
>
> --
> Karanbir Singh
> +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
> GnuPG Key : http://www.karan.org/publickey.asc
> _______________________________________________
> CentOS-devel mailing list
> CentOS-devel(a)centos.org
> http://lists.centos.org/mailman/listinfo/centos-devel
>
----- End forwarded message -----
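The "push to git, hook fires, image builds" step above could be sketched as a minimal post-receive hook. This is my own illustration, not CentOS's actual hook; `queue-image-build` is a hypothetical command standing in for whatever the image-builder node really runs:

```shell
# post-receive receives "oldrev newrev refname" lines on stdin, one per
# updated ref; here we turn each into a (hypothetical) build-queue command
handle_push() {
    while read -r oldrev newrev refname; do
        branch=${refname#refs/heads/}
        echo "queue-image-build --kickstart kickstarts/${branch}.ks --rev ${newrev}"
    done
}

# example: what the hook would emit for a push to master
printf '%s %s %s\n' 0000000 abc1234 refs/heads/master | handle_push
# → queue-image-build --kickstart kickstarts/master.ks --rev abc1234
```

The real hook would presumably ssh to the image-builder node rather than echo, but the stdin format is standard git post-receive behavior.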
--
Matthew Miller -- Fedora Project -- <mattdm(a)fedoraproject.org>
9 years, 8 months
Using 'fedora' in a github organization name
by Sam Kottler
Greetings legal@,
The Fedora cloud SIG has set up an organization called 'fedora-cloud' on
GitHub for mirroring some dist-git repos in the hopes of getting more
contributions. The question was asked on the cloud list (cc'ed) about
whether we need usage approval for the Fedora name in
the organization name. fedora-infra has its own GitHub organization, so
there is at least some precedent here.
Do we need to get board approval for usage of 'fedora' in our GitHub org?
Thanks for your help.
-S
9 years, 8 months
Re: fedora-cloud organization on GitHub
by Haïkel
Hi,
I discussed with the infra team using GitHub for packaging and what
alternatives we have.
A few points:
* rel-eng insisted that we won't ever connect our build system to external
repositories
* if we use mirroring to keep dist-git synchronized, it won't be supported
by infra, but they won't forbid us from using a GitHub hook
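The mirroring option above boils down to a one-way `clone --mirror` / `push --mirror` sync. A minimal local sketch, with bare repos standing in for dist-git (`upstream.git`) and the GitHub side (`mirror.git`); the repo names and commit are placeholders:

```shell
# stand-ins for dist-git and the GitHub repo
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git
git init -q --bare mirror.git

# put a commit in "dist-git"
git clone -q upstream.git work
git -C work -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial import'
git -C work push -q origin HEAD

# the mirroring side: clone --mirror + push --mirror copies every ref,
# which is what a GitHub sync hook would do on each dist-git update
git clone -q --mirror upstream.git sync.git
git -C sync.git push -q --mirror "$tmp/mirror.git"
```

In practice the `sync.git` clone would live wherever the hook runs and push to the real github.com remote; since it is strictly one-way, the build system never has to trust the GitHub copy.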
Steven Gallagher has offered to integrate Review Board into dist-git.
From my POV, this is a fair proposal:
* we gain a user-friendly peer-review system
* it will lower the entry barrier for new contributors
Are you willing to give Steven's proposal a shot?
Since Fedora.next is still in its early stages, it's probably a good time
to discuss how we could improve our workflow; integrating a
peer-review system is definitely something I'd appreciate.
H.
9 years, 8 months
fedora-cloud organization on GitHub
by Lars Kellogg-Stedman
Hello all,
The recent discussions regarding cloud-init -- here, and on
centos-devel, and on irc, and elsewhere -- suggest that in some cases
it would be nice to have package sources hosted in a common location
that is more amenable to collaboration than the various "official"
repositories.
Sam Kottler and I have created the "Fedora Cloud" organization on
GitHub:
https://github.com/fedora-cloud
...and for starters we've put the cloud-init package sources here.
Our hope is that by hosting package sources here, we'll be able to
take advantage of things like pull requests to make it easier for
people to suggest changes/fixes/etc.
Cheers,
--
Lars Kellogg-Stedman <lars(a)redhat.com> | larsks @ irc
Cloud Engineering / OpenStack | " " @ twitter
9 years, 8 months
Using cloud image with Xen
by Eric V. Smith
This is likely the wrong place to ask this question, but it's the best I
could come up with. I've searched high and low for an answer, but to no
avail. If there's a better venue, please direct me there.
I want to use one of the Fedora 20 images to run a VM on a locally
installed Xen server. Dom0 is also Fedora 20.
I've been trying to use virt-install. When I install the instance with:
virt-install --graphics=none --name=test --ram=1024 --import
--disk=Fedora-x86_64-20-20131211.1-sda.raw --network=bridge:br0 --debug
I get:
...
[Thu, 23 Jan 2014 16:05:41 virt-install 10918] DEBUG (virt-install:664)
Connecting to text console
[Thu, 23 Jan 2014 16:05:41 virt-install 10918] DEBUG (virt-install:574)
Running: /usr/bin/virsh --connect xen:/// console 15
Connected to domain test
Escape character is ^]
error: internal error: cannot find character device (null)
In /var/log/xen/qemu-dm-test.log I see:
domid: 15
Warning: vlan 0 is not connected to host network
-videoram option does not work with cirrus vga device model. Videoram
set to 4M.
/builddir/build/BUILD/xen-4.3.1/tools/qemu-xen-traditional/hw/xen_blktap.c:628:
Init blktap pipes
Could not open /var/run/tap/qemu-read-15
xs_read(): target get error. /local/domain/15/target.
The bridge br0 exists. I've run a similar setup in Fedora 16, but there
I created the image files myself with boxgrinder. I can do something
similar here, but I'd rather just use the stock image.
The instructions for the default cloud image all seem geared to EC2 or
OpenStack. How can I find out what's required for using this image with Xen?
I've tried --extra-args="console=ttyS0,115200", but it looks like that
won't work for a local image file. But maybe I'm misunderstanding the
error "--extra-args only work if specified with --location".
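One more data point, though I'm not sure it's the whole story: my understanding is that the stock cloud image relies on cloud-init finding a datasource (EC2 or OpenStack metadata), so on a plain Xen host nothing ever injects an ssh key or password. I'm planning to try a "NoCloud" seed disk along these lines (the hostname and password are just test values I picked):

```shell
# files for a cloud-init NoCloud seed; values are illustrative test data
mkdir -p seed
cat > seed/meta-data <<'EOF'
instance-id: fedora20-xen-test
local-hostname: fedora20
EOF
cat > seed/user-data <<'EOF'
#cloud-config
password: fedora
chpasswd: { expire: false }
ssh_pwauth: true
EOF
# then (untested on my side):
#   genisoimage -output seed.iso -volid cidata -joliet -rock \
#       seed/user-data seed/meta-data
# and attach seed.iso as a second --disk to virt-install
```

If that works, cloud-init should set the password on first boot and let me in on the console.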
Thanks for reading this far!
--
Eric.
9 years, 8 months