As we get closer to putting disposable clients into production, we need a way to have updated images for those clients. I don't think this is news to anyone since the topic has come up several times before but now there's a bit more urgency :)
In my mind, we have the following requirements:
- Produces qcow2 images that work with testcloud
- can be run in an automated way
- allows adding/changing/customizing packages contained in image
- allows arbitrary repos to be specified
and the following "nice to have" things:
- can build branched and rawhide images
- builds images from scratch using only things provided by releng
- written in python
- builds more than qcow2 for some future-proofing
- can run well in a VM
Is there anything that I missed?
As far as I know, we're looking at two options right now: taskotron-vmbuilder and imagefactory. I've put together a list of the pros and cons that I know of for both tools. Thoughts on which direction to take would be appreciated.
Tim
taskotron-vmbuilder [1] is a PoC system kparal built around virt-builder [2]. Images are specified in a yaml file and instead of building those images from scratch "It takes cleanly prepared, digitally signed OS templates and customizes them".
[1] https://bitbucket.org/fedoraqa/taskotron-vmbuilder [2] http://libguestfs.org/virt-builder.1.html
pros:
- already does almost everything we need
- fits all requirements
- builds quickly
- well supported
cons:
- requires blobs which are out of our control
  * yes, I know who does the work behind virt-builder. My concern isn't with him, it's the general concept that I don't like. This also gets into the fact that we would have pretty much no control over timing of release for the base images.
- limited support for rawhide and branched releases
- limited support for non-server spins
- output images are large in size
- virt-builder is not written in python
imagefactory [3] is a system for building os images and potentially shipping those images to various cloud systems. Images are specified with a kickstart file and an xml template descriptor. Imagefactory builds images from scratch, essentially using the kickstart to run an install inside a VM and processing that install into the desired image type.
pros:
- used by releng to create Fedora cloud images
- builds images from packages: no blobs that we don't have control over
- already has a mostly-complete RESTful api that can list images and trigger new builds
- can support almost all spins; anything that can be represented in a kickstart
- written in python
cons:
- not as fast as virt-builder
- somewhat more complex than virt-builder
- when something goes wrong, debugging can be difficult due to how the tool works
- we may be somewhat on our own to fix issues if releng is not hitting similar problems
- may not run well in a VM (would need nested virt)
On Mon, 9 Nov 2015 22:02:35 -0700 Tim Flink tflink@redhat.com wrote:
As we get closer to putting disposable clients into production, we need a way to have updated images for those clients. I don't think this is news to anyone since the topic has come up several times before but now there's a bit more urgency :)
In my mind, we have the following requirements:
- Produces qcow2 images that work with testcloud
- can be run in an automated way
- allows adding/changing/customizing packages contained in image
- allows arbitrary repos to be specified
and the following "nice to have" things:
- can build branched and rawhide images
- builds images from scratch using only things provided by releng
- written in python
- builds more than qcow2 for some future-proofing
- can run well in a VM
Is there anything that I missed?
As far as I know, we're looking at two options right now: taskotron-vmbuilder and imagefactory. I've put together a list of the pros and cons that I know of for both tools. Thoughts on which direction to take would be appreciated.
Tim
My vote would be for imagefactory. In my mind, it comes down to the blob vs. from-scratch thing, using the same tools that releng does and the fact that imagefactory is completely written in python.
Whether we use the API or not is a different question, though :)
Tim
My vote would be for imagefactory. In my mind, it comes down to the blob vs. from-scratch thing, using the same tools that releng does and the fact that imagefactory is completely written in python.
Whether we use the API or not is a different question, though :)
I think both tools are capable of doing what we need, and the amount of required work will be similar. The blob argument disappears if we create and host our own templates. The collaboration factor is important, and having some kind of API is also a good thing (even though I haven't seen it yet and I don't know what we would use it for). I don't think we will patch either tool much (if we need to submit nontrivial patches, we're likely using the wrong tool; I don't think we should be spreading ourselves even thinner than we are now), so the implementation language is not a big deal, I think. Hosting and downloading might be easier with virt-builder, because it already supports querying a remote index of image templates and downloading selected files.
There is one angle we should consider as well, I think, which is the task-developer use case. For our production use case, it doesn't really matter how fast the compose process is or how large the disk images are. But how are the task developers going to use them? If they're going to download them (and periodically update them, even several times per week), we want the images as small as possible. If they're going to build them locally, we want them built fast and with small download sizes (the virt-builder template saves some data from downloading, since only updates are downloaded; there's also a question of whether we could cache the downloaded packages somehow with some of these tools). Or will we offer both approaches and let them pick what works best for them?
If we intended to mostly build those images on dev's computers, I'd probably prefer virt-builder. But my current impression is that local building will be a secondary option, and we'll primarily offer pre-created images for download (even downloading them automatically). Which makes sense, it's easier for the dev, and less error-prone. So in that light (assuming no one has different plans in mind), it doesn't really matter which technology we choose to build it. Image size is a factor here, though. I don't have any real numbers here, it would be very interesting to see the same image (same package set, same filesystem size) built by both tools and compare the output size (ideally even after running zerofree on them and compressing them). My guess is that they should have the same size (what would cause a difference?), but we might be surprised. Do we have a volunteer to test this? :)
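To make the comparison concrete, here's roughly how I'd imagine normalizing the two images before comparing; the file names and the partition device are made up, and zerofree only works on unmounted ext2/3/4 filesystems:

```shell
# Hypothetical file names; both images built from the same kickstart/package set.
qemu-img info vmbuilder-f22.qcow2
qemu-img info imagefactory-f22.qcow2

# Zero out free blocks so they compress well (ext2/3/4 only, fs must not be
# mounted; /dev/sda1 is an assumption about the partition layout):
guestfish -a vmbuilder-f22.qcow2 run : zerofree /dev/sda1
guestfish -a imagefactory-f22.qcow2 run : zerofree /dev/sda1

# Compress and compare real (non-sparse) sizes:
gzip -k vmbuilder-f22.qcow2 imagefactory-f22.qcow2
du -h vmbuilder-f22.qcow2.gz imagefactory-f22.qcow2.gz
```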
On Fri, 13 Nov 2015 10:02:20 -0500 (EST) Kamil Paral kparal@redhat.com wrote:
My vote would be for imagefactory. In my mind, it comes down to the blob vs. from-scratch thing, using the same tools that releng does and the fact that imagefactory is completely written in python.
Whether we use the API or not is a different question, though :)
I think both tools are capable of doing what we need, and the amount of required work will be similar. The blob argument disappears if we create and host our own templates. The collaboration factor is important, and having some kind of API is also a good thing (even though I haven't seen it yet and I don't know what we would use it for). I don't think we will patch either tool much (if we need to submit nontrivial patches, we're likely using the wrong tool; I don't think we should be spreading ourselves even thinner than we are now), so the implementation language is not a big deal, I think.
Agreed on not spreading ourselves any thinner than we already are. From my testing of imagefactory's api, it will need a small patch to make it work for our needs, but it's not a big patch; I've just been hoping for some input from upstream before submitting anything.
Hosting and downloading might be easier with virt-builder, because it already supports querying a remote index of image templates and downloading selected files.
Unless we're running virt-builder on each virthost, I don't see how this is relevant.
My thought behind using imagefactory's API was that we could build the images in one location and each virthost would query that API, looking for a newer image and downloading the newer image if found.
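As a sketch of what I mean (the host name, port, endpoint paths and JSON field names here are all guesses for illustration, not necessarily what imagefactory's API actually exposes):

```shell
# Hypothetical virthost-side check for a newer base image.
BUILDER=http://imagebuilder.example.org:8075/imagefactory
current=$(cat /var/lib/taskotron/current-image-id 2>/dev/null)

# Ask the builder for its image list and pick the newest one (assumed JSON shape):
latest=$(curl -s "$BUILDER/base_images" | python -c '
import json, sys
imgs = json.load(sys.stdin)
print(sorted(imgs, key=lambda i: i["created"])[-1]["id"])')

# Download only if it differs from what we already have:
if [ "$latest" != "$current" ]; then
    curl -s -o /var/lib/taskotron/base.qcow2.new "$BUILDER/base_images/$latest/raw_image"
    mv /var/lib/taskotron/base.qcow2.new /var/lib/taskotron/base.qcow2
    echo "$latest" > /var/lib/taskotron/current-image-id
fi
```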
There is one angle we should consider as well, I think, which is the task-developer use case. For our production use case, it doesn't really matter how fast the compose process is or how large the disk images are. But how are the task developers going to use them? If they're going to download them (and periodically update them, even several times per week), we want the images as small as possible. If they're going to build them locally, we want them built fast and with small download sizes (the virt-builder template saves some data from downloading, since only updates are downloaded; there's also a question of whether we could cache the downloaded packages somehow with some of these tools). Or will we offer both approaches and let them pick what works best for them?
I think that the bigger question here is what the process for non-taskotron-devs will look like. For disposable client execution, are we going to have them use base cloud images, our custom images or just discourage disposable client execution?
There aren't so many devs and so far, they're all capable of figuring out either way. I'm more worried about what'll happen when we start telling folks to install libtaskotron and "go nuts".
If we intended to mostly build those images on dev's computers, I'd probably prefer virt-builder. But my current impression is that local building will be a secondary option, and we'll primarily offer pre-created images for download (even downloading them automatically). Which makes sense, it's easier for the dev, and less error-prone. So in that light (assuming no one has different plans in mind), it doesn't really matter which technology we choose to build it. Image size is a factor here, though. I don't have any real numbers here, it would be very interesting to see the same image (same package set, same filesystem size) built by both tools and compare the output size (ideally even after running zerofree on them and compressing them). My guess is that they should have the same size (what would cause a difference?), but we might be surprised. Do we have a volunteer to test this? :)
I can give this a try later today - I have both tools installed on a machine here.
Tim
On Fri, 13 Nov 2015 09:23:43 -0700 Tim Flink tflink@redhat.com wrote:
<snip>
If we intended to mostly build those images on dev's computers, I'd probably prefer virt-builder. But my current impression is that local building will be a secondary option, and we'll primarily offer pre-created images for download (even downloading them automatically). Which makes sense, it's easier for the dev, and less error-prone. So in that light (assuming no one has different plans in mind), it doesn't really matter which technology we choose to build it. Image size is a factor here, though. I don't have any real numbers here, it would be very interesting to see the same image (same package set, same filesystem size) built by both tools and compare the output size (ideally even after running zerofree on them and compressing them). My guess is that they should have the same size (what would cause a difference?), but we might be surprised. Do we have a volunteer to test this? :)
I can give this a try later today - I have both tools installed on a machine here.
I've created images for taskotron using both taskotron-vmbuilder and imagefactory. They're similar but not identical: I based the imagefactory image on the F22 cloud image instead of specifying the server group install, and virt-builder has some restrictions on what you can do with disk space that imagefactory does not have.
I've put all the files up for review: the kickstart for imagefactory, the yaml file for vmbuilder and both created images, gzipped.
https://tflink.fedorapeople.org/taskotron/testimages/
Time of creation operation
--------------------------
imagefactory: 16m6.596s
vmbuilder:    7m19.273s
Image sizes
-----------
20151113-taskotron_server-22.qcow2.gz     818M
20151113-taskotron_server-22.qcow2         11G
20151113-imagebuilder-taskotron.qcow      3.0G
20151113-imagebuilder-taskotron.qcow.gz   358M
I've created images for taskotron using both taskotron-vmbuilder and imagefactory. They're similar but not identical: I based the imagefactory image on the F22 cloud image instead of specifying the server group install, and virt-builder has some restrictions on what you can do with disk space that imagefactory does not have.
I've put all the files up for review: the kickstart for imagefactory, the yaml file for vmbuilder and both created images, gzipped.
https://tflink.fedorapeople.org/taskotron/testimages/
Time of creation operation
imagefactory: 16m6.596s
vmbuilder:    7m19.273s
Image sizes
20151113-taskotron_server-22.qcow2.gz     818M
20151113-taskotron_server-22.qcow2         11G
20151113-imagebuilder-taskotron.qcow      3.0G
20151113-imagebuilder-taskotron.qcow.gz   358M
I hoped you would compare these two approaches with the same "disk recipe". If you create a 3GB disk and install a heavily stripped cloud package set using imagefactory, and then create a 10GB disk and install the full server package set using virt-builder, there's no point in comparing times or sizes. Could you please try both with, e.g., a 10GB disk and the server package set?
Also, were you using "ls" when printing file sizes? That doesn't work for qcow2 images; they are sparse. You need to use "du" for that.
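For anyone who hasn't hit this before: qcow2 files are typically sparse, so the two commands disagree. You can see the difference with any sparse file:

```shell
# Create a file with a 1 GiB apparent size but (almost) no allocated blocks:
truncate -s 1G sparse.img
ls -lh sparse.img                 # reports the apparent size: 1.0G
du -h sparse.img                  # reports allocated blocks: ~0
du -h --apparent-size sparse.img  # matches ls again: 1.0G
rm sparse.img
```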
As we get closer to putting disposable clients into production, we need a way to have updated images for those clients. I don't think this is news to anyone since the topic has come up several times before but now there's a bit more urgency :)
In my mind, we have the following requirements:
- Produces qcow2 images that work with testcloud
- can be run in an automated way
- allows adding/changing/customizing packages contained in image
- allows arbitrary repos to be specified
and the following "nice to have" things:
- can build branched and rawhide images
- builds images from scratch using only things provided by releng
- written in python
- builds more than qcow2 for some future-proofing
qemu-img can convert between many image formats, so native support in that tool is not that important, I think.
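For example (file names illustrative):

```shell
# Convert an existing qcow2 image to other formats after the fact:
qemu-img convert -f qcow2 -O raw  taskotron-base.qcow2 taskotron-base.raw
qemu-img convert -f qcow2 -O vmdk taskotron-base.qcow2 taskotron-base.vmdk

# -c compresses the output when converting *to* qcow2:
qemu-img convert -c -f raw -O qcow2 taskotron-base.raw taskotron-small.qcow2
```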
- can run well in a VM
Is there anything that I missed?
The image should be compatible with guestfish, so that we can e.g. copy in some files without rebuilding the image from scratch. Might be useful for e.g. additional ssh keys (we have cloud-init for that at the moment, but if we had some troubles with it or we needed something it doesn't support, this would be an alternative way). I'm not fully sure what the requirements are, but I think guestfish can work with almost anything, including LVM, so unless the tool creates some crazy partition layout, it should work with everything.
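For instance, injecting an extra ssh key into an existing image without rebuilding it might look something like this (image name and key are illustrative; `-i` lets guestfish inspect and mount the guest filesystems itself):

```shell
guestfish -a taskotron-base.qcow2 -i <<'EOF'
# Append a key to root's authorized_keys, creating the dir if needed:
mkdir-p /root/.ssh
write-append /root/.ssh/authorized_keys "ssh-ed25519 AAAA-example-key extra\n"
chmod 0700 /root/.ssh
chmod 0600 /root/.ssh/authorized_keys
EOF
```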
As far as I know, we're looking at two options right now: taskotron-vmbuilder and imagefactory. I've put together a list of the pros and cons that I know of for both tools. Thoughts on which direction to take would be appreciated.
Tim
taskotron-vmbuilder [1] is a PoC system kparal built around virt-builder [2]. Images are specified in a yaml file and instead of building those images from scratch "It takes cleanly prepared, digitally signed OS templates and customizes them".
[1] https://bitbucket.org/fedoraqa/taskotron-vmbuilder [2] http://libguestfs.org/virt-builder.1.html
pros:
- already does almost everything we need
To be fair, there have been some issues regarding SELinux, and I'm not sure they are sorted out yet. The SELinux contexts of files inside the image were not set properly, and one more reboot with autorelabel was needed. It might be fixed by now, or not; I haven't tried in a long time. With anaconda, we're not likely to hit these kinds of issues (we'll hit different ones).
- fits all requirements
- builds quickly
- well supported
cons:
- requires blobs which are out of our control
- yes, I know who does the work behind virt-builder. My concern isn't with him, it's the general concept that I don't like. This also gets into the fact that we would have pretty much no control over timing of release for the base images.
All the tools required to create that "blob" - or image template (as they call it) - are open source and in Fedora, from what I see, so we can host our own. virt-builder man page says: "For serious virt-builder use, you may want to create your own repository of templates."
This is how to create the template: http://libguestfs.org/virt-builder.1.html#create-the-templates For new stable Fedora releases, we can do a single install manually and use virt-sysprep on the image to have it ready. For Rawhide and maybe even for Branched, we might want to prepare a fresh new template more often, i.e. automate that. This is how the libguestfs project does it: https://github.com/libguestfs/libguestfs/blob/master/builder/website/fedora....
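The rough shape of hosting our own repo, going by that man page section (file names, sizes and the index fields here are illustrative, not a tested recipe):

```shell
# After one clean anaconda install into fedora-23.img:
virt-sysprep -a fedora-23.img        # strip ssh keys, logs, machine-id, ...
xz --best --block-size=16777216 fedora-23.img

# Publish fedora-23.img.xz next to an index file describing it:
cat >> index <<'EOF'
[taskotron-fedora-23]
name=Fedora 23 (taskotron)
arch=x86_64
file=fedora-23.img.xz
format=raw
size=6442450944
expand=/dev/sda3
EOF

# Clients then build from our repo instead of the upstream one:
virt-builder --source http://example.org/templates/index taskotron-fedora-23
```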
So you might see this as a combination of the imagefactory and virt-builder-style processes. The image is installed clean using anaconda once in a while (but very rarely), and most of the time just the prepared template is adjusted (updated with new packages), because that's much faster.
I'm not saying this is better or worse than the alternatives, I just don't think this "blob" argument is quite right - we'd probably create and host our own templates, not rely on the upstream ones.
- limited support for rawhide and branched releases
There's limited (or no) support for it in the upstream repo, that's correct.
But if we host our own repo, according to the documentation and source code, it seems that as long as anaconda can install it, it should be possible to create an image for it. Which sounds like the same situation as with imagefactory. (Of course with the additional requirement that the virt-* tools have to work in Rawhide/Branched.)
- limited support for non-server spins
I'm not really sure what you mean; we can install any package set we want, so the only difference would be in the filesystem layout? The upstream templates seem to have only @core installed; in our own images we could adjust even that.
- output images are large in size
This is interesting. Theoretically I see no reason why the official Cloud images should be smaller than the same package set installed using virt-builder. I guess they are simply more stripped down, and the filesystem much smaller? It could use some investigation. It's also a question what imagefactory-created images will look like (once we use our custom kickstarts).
By the way, since we seem to agree that we'll need several package set templates for each release (minimal, server, workstation), we're going to distribute pretty big disk images anyway. (Which concerns me a bit in itself).
- virt-builder is not written in python
Yeah, there are some parts of it in OCaml. Scary. I wouldn't want to patch that :)
imagefactory [3] is a system for building os images and potentially shipping those images to various cloud systems. Images are specified with a kickstart file and an xml template descriptor. Imagefactory builds images from scratch, essentially using the kickstart to run an install inside a VM and processing that install into the desired image type.
pros:
- used by releng to create Fedora cloud images
Collaborating on the tool with another team would be a big win from my POV.
- builds images from packages: no blobs that we don't have control over
- already has a mostly-complete RESTful api that can list images and trigger new builds
- can support almost all spins; anything that can be represented in a kickstart
- written in python
cons:
- not as fast as virt-builder
I haven't tried imagefactory, but we all know how long an anaconda installation takes. I don't think it's a problem for our production environment; we don't care about a 10-minute difference. But if we consider building the same image on a task developer's machine, speed gets more important.
- somewhat more complex than virt-builder
That's true, but if we start preparing our own virt-builder templates, I think it quickly reaches parity in complexity.
- when something goes wrong, debugging can be difficult due to how the tool works
Do you know whether they have something like real-time monitoring of the anaconda logs? Otherwise I guess it's quite hard to learn what went wrong.
- we may be somewhat on our own to fix issues if releng is not hitting similar problems
- may not run well in a VM (would need nested virt)
This is the same as with virt-builder; it also needs virt support. Originally I thought it didn't, but it does. It can still be used without hw virt support (unlike anaconda, where that would just be impossible performance-wise), but it's much, much slower and I don't think we would want to go that route (building an image in 30 minutes instead of 3).
On Fri, 13 Nov 2015 06:29:08 -0500 (EST) Kamil Paral kparal@redhat.com wrote:
As we get closer to putting disposable clients into production, we need a way to have updated images for those clients. I don't think this is news to anyone since the topic has come up several times before but now there's a bit more urgency :)
In my mind, we have the following requirements:
- Produces qcow2 images that work with testcloud
- can be run in an automated way
- allows adding/changing/customizing packages contained in image
- allows arbitrary repos to be specified
and the following "nice to have" things:
- can build branched and rawhide images
- builds images from scratch using only things provided by releng
- written in python
- builds more than qcow2 for some future-proofing
qemu-img can convert between many image formats, so native support in that tool is not that important, I think.
- can run well in a VM
Is there anything that I missed?
The image should be compatible with guestfish, so that we can e.g. copy in some files without rebuilding the image from scratch. Might be useful for e.g. additional ssh keys (we have cloud-init for that at the moment, but if we had some troubles with it or we needed something it doesn't support, this would be an alternative way). I'm not fully sure what the requirements are, but I think guestfish can work with almost anything, including LVM, so unless the tool creates some crazy partition layout, it should work with everything.
As far as I know, we're looking at two options right now: taskotron-vmbuilder and imagefactory. I've put together a list of the pros and cons that I know of for both tools. Thoughts on which direction to take would be appreciated.
Tim
taskotron-vmbuilder [1] is a PoC system kparal built around virt-builder [2]. Images are specified in a yaml file and instead of building those images from scratch "It takes cleanly prepared, digitally signed OS templates and customizes them".
[1] https://bitbucket.org/fedoraqa/taskotron-vmbuilder [2] http://libguestfs.org/virt-builder.1.html
pros:
- already does almost everything we need
To be fair, there have been some issues regarding SELinux, and I'm not sure they are sorted out yet. The SELinux contexts of files inside the image were not set properly, and one more reboot with autorelabel was needed. It might be fixed by now, or not; I haven't tried in a long time. With anaconda, we're not likely to hit these kinds of issues (we'll hit different ones).
I suspect we'll hit issues with either tool, honestly.
- fits all requirements
- builds quickly
- well supported
cons:
- requires blobs which are out of our control
  - yes, I know who does the work behind virt-builder. My concern isn't with him, it's the general concept that I don't like. This also gets into the fact that we would have pretty much no control over timing of release for the base images.
All the tools required to create that "blob" - or image template (as they call it) - are open source and in Fedora, from what I see, so we can host our own. virt-builder man page says: "For serious virt-builder use, you may want to create your own repository of templates."
This is how to create the template: http://libguestfs.org/virt-builder.1.html#create-the-templates For new stable Fedora releases, we can do a single install manually and use virt-sysprep on the image to have it ready. For Rawhide and maybe even for Branched, we might want to prepare a fresh new template more often, i.e. automate that. This is how the libguestfs project does it: https://github.com/libguestfs/libguestfs/blob/master/builder/website/fedora....
So you might see this as a combination of the imagefactory and virt-builder-style processes. The image is installed clean using anaconda once in a while (but very rarely), and most of the time just the prepared template is adjusted (updated with new packages), because that's much faster.
I'm not saying this is better or worse than the alternatives, I just don't think this "blob" argument is quite right - we'd probably create and host our own templates, not rely on the upstream ones.
Looks like I didn't do quite enough research. I agree that the "blob" argument is pretty much moot.
- limited support for rawhide and branched releases
There's limited (or no) support for it in the upstream repo, that's correct.
But if we host our own repo, according to the documentation and source code, it seems that as long as anaconda can install it, it should be possible to create an image for it. Which sounds like the same situation as with imagefactory. (Of course with the additional requirement that the virt-* tools have to work in Rawhide/Branched.)
- limited support for non-server spins
I'm not really sure what you mean; we can install any package set we want, so the only difference would be in the filesystem layout? The upstream templates seem to have only @core installed; in our own images we could adjust even that.
This is pretty much the "blob" argument - if we can create our own templates, this is also a non-point.
- output images are large in size
This is interesting. Theoretically I see no reason why the official Cloud images should be smaller than the same package set installed using virt-builder. I guess they are simply more stripped down, and the filesystem much smaller? It could use some investigation. It's also a question what imagefactory-created images will look like (once we use our custom kickstarts).
By the way, since we seem to agree that we'll need several package set templates for each release (minimal, server, workstation), we're going to distribute pretty big disk images anyway. (Which concerns me a bit in itself).
- virt-builder is not written in python
Yeah, there are some parts of it in OCaml. Scary. I wouldn't want to patch that :)
virt-builder upstream seems pretty responsive, though, so I don't think it'd be an issue unless we want new features.
imagefactory [3] is a system for building os images and potentially shipping those images to various cloud systems. Images are specified with a kickstart file and an xml template descriptor. Imagefactory builds images from scratch, essentially using the kickstart to run an install inside a VM and processing that install into the desired image type.
pros:
- used by releng to create Fedora cloud images
Collaborating on the tool with another team would be a big win from my POV.
To be honest, I don't know how much collaboration we'd be doing. The COPR folks aren't really interested in custom images (they solve the problem with snapshots) and releng doesn't spend a whole lot of time working on imagefactory.
The advantage I do see is that we wouldn't be the only ones looking to have imagefactory work regularly with rawhide. Regardless of which group hits issues first, it'd be more people interested in the same tool set.
- builds images from packages: no blobs that we don't have control over
- already has a mostly-complete RESTful api that can list images and trigger new builds
- can support almost all spins; anything that can be represented in a kickstart
- written in python
cons:
- not as fast as virt-builder
I haven't tried imagefactory, but we all know how long an anaconda installation takes. I don't think it's a problem for our production environment; we don't care about a 10-minute difference. But if we consider building the same image on a task developer's machine, speed gets more important.
Yeah, I think that where we're going to be running the tools remains a bit of an open question.
- somewhat more complex than virt-builder
That's true, but if we start preparing our own virt-builder templates, I think it quickly reaches parity in complexity.
- when something goes wrong, debugging can be difficult due to how the tool works
Do you know whether they have something like real-time monitoring of the anaconda logs? Otherwise I guess it's quite hard to learn what went wrong.
No, there's no real-time monitoring of those logs. You can attach to the VM while the install is running to see what's going on, but the imagefactory process does not log what's happening inside the install.
We could try writing patches to fix that but I don't think that would be trivial at all.
- we may be somewhat on our own to fix issues if releng is not hitting similar problems
- may not run well in a VM (would need nested virt)
This is the same as with virt-builder; it also needs virt support. Originally I thought it didn't, but it does. It can still be used without hw virt support (unlike anaconda, where that would just be impossible performance-wise), but it's much, much slower and I don't think we would want to go that route (building an image in 30 minutes instead of 3).
It sounds like we're going to need a bare metal solution either way, then.
Tim
- may not run well in a VM (would need nested virt)
This is the same as with virt-builder; it also needs virt support. Originally I thought it didn't, but it does. It can still be used without hw virt support (unlike anaconda, where that would just be impossible performance-wise), but it's much, much slower and I don't think we would want to go that route (building an image in 30 minutes instead of 3).
It sounds like we're going to need a bare metal solution either way, then.
I bet my old socks that we'll end up with nested virt eventually. It's not a years-proven technology, but it worked well for me locally, openQA is now running on it inside Fedora Infra as well, and many projects (not just ours) seem to gravitate towards it, because bare metal is just "too much trouble"™.
On Mon, 9 Nov 2015 22:02:35 -0700 Tim Flink tflink@redhat.com wrote:
As we get closer to putting disposable clients into production, we need a way to have updated images for those clients. I don't think this is news to anyone since the topic has come up several times before but now there's a bit more urgency :)
In my mind, we have the following requirements:
- Produces qcow2 images that work with testcloud
- can be run in an automated way
- allows adding/changing/customizing packages contained in image
- allows arbitrary repos to be specified
and the following "nice to have" things:
- can build branched and rawhide images
- builds images from scratch using only things provided by releng
- written in python
- builds more than qcow2 for some future-proofing
- can run well in a VM
Is there anything that I missed?
As far as I know, we're looking at two options right now: taskotron-vmbuilder and imagefactory. I've put together a list of the pros and cons that I know of for both tools. Thoughts on which direction to take would be appreciated.
After talking with adam more about how openqa works, it turns out that there is a significant overlap between what we need for taskotron images and the installed images that openqa uses for some of its tests.
From what I can see, it would make sense to have one tool which is capable of creating images for both systems. I'm not sure it makes sense to support non-installed image creation (ks, update, shrink and the other bare disks), though.
I'll list what I can think of from looking at the openQA image creation, but more feedback from the folks who've dealt with it more than I have would be appreciated.
Requirements:
- make post-install changes to the images before distribution
- specify partition table type
- create users on the image
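The first and third items look like they map fairly directly onto virt-customize (libguestfs); a sketch, with the image name and user details made up. The partition table type, though, would have to be set at install time (kickstart) or with guestfish/parted:

```shell
# Post-install tweaks and user creation on an existing image
# (image name, user and key path are illustrative):
virt-customize -a openqa-minimal.qcow2 \
    --run-command 'useradd -m -G wheel test' \
    --password test:password:weakpassword \
    --ssh-inject root:file:/path/to/id_ed25519.pub \
    --selinux-relabel
```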
Are there other requirements for openqa images? Any thoughts on whether it makes sense to look into doing both images in the same place?
Tim
On Tue, 2015-11-17 at 17:23 -0700, Tim Flink wrote:
Requirements:
- make post-install changes to the images before distribution
- specify partition table type
- create users on the image
Are there other requirements for openqa images?
For some of the images what we primarily need is some specific partition table / layout, not any particular contents; as you say it may not make sense to do that with the same tool, but that is one of the openQA requirements.
On Tue, 17 Nov 2015 17:50:13 -0800 Adam Williamson adamwill@fedoraproject.org wrote:
On Tue, 2015-11-17 at 17:23 -0700, Tim Flink wrote:
Requirements:
- make post-install changes to the images before distribution
- specify partition table type
- create users on the image
Are there other requirements for openqa images?
For some of the images what we primarily need is some specific partition table / layout, not any particular contents; as you say it may not make sense to do that with the same tool, but that is one of the openQA requirements.
Just to make sure I was being clear - I was referring to the empty disk images (embedded updates.img, embedded ks, freespace etc. - the stuff that's only using guestfish) that are made with createhdds.sh, not the installed image. If I'm understanding correctly, it's the installed images (minimal, desktop) that would be most useful to rebuild on a regular basis - did I misunderstand something? Do all the images for openqa need to come from the same tool? I suspect that the guestfish "(mostly) empty disk" methods could be run just about anywhere at most once a release.
That being said, it sounds like precise partitioning is another requirement.
Tim
On Tue, 2015-11-17 at 19:33 -0700, Tim Flink wrote:
On Tue, 17 Nov 2015 17:50:13 -0800 Adam Williamson adamwill@fedoraproject.org wrote:
On Tue, 2015-11-17 at 17:23 -0700, Tim Flink wrote:
Requirements:
- make post-install changes to the images before distribution
- specify partition table type
- create users on the image
Are there other requirements for openqa images?
For some of the images what we primarily need is some specific partition table / layout, not any particular contents; as you say it may not make sense to do that with the same tool, but that is one of the openQA requirements.
Just to make sure I was being clear - I was referring to the empty disk images (embedded updates.img, embedded ks, freespace etc. - the stuff that's only using guestfish) that are made with createhdds.sh, not the installed image. If I'm understanding correctly, it's the installed images (minimal, desktop) that would be most useful to rebuild on a regular basis - did I misunderstand something? Do all the images for openqa need to come from the same tool? I suspect that the guestfish "(mostly) empty disk" methods could be run just about anywhere at most once a release.
That all sounds accurate. No, they don't need to come out of the same tool, though of course it's a bit more complex if we have to handle two different image creation processes.
That being said, it sounds like precise partitioning is another requirement.
Yeah, and it's at least feasible we might need to combine the two, though I don't think that's the case for any of the current images.
qa-devel@lists.fedoraproject.org