ARM as a primary architecture
awilliam at redhat.com
Wed Mar 21 02:36:00 UTC 2012
On Tue, 2012-03-20 at 13:39 -0400, Peter Jones wrote:
> >> 4) when milestones occur, arm needs to be just as testable as other
> >> primary architectures
> > So we have a new hire (hi Paul) who is looking at autoqa, and we're
> > going to pull together as much as we can here. It would help me to know
> > (and we're reaching out to QE separately - per my other mail) what you
> > would consider "testable" to mean, in terms of what you'd want to see.
> I'd largely defer to adamw for specific criteria regarding testing, both
> in terms of criteria we're testing for (i.e. #3) and in terms of establishing
> appropriate testing procedures for the platform. I've largely listed those
> because there's not really any indication in the proposal as it stands
> that this is well-considered at this point in time. There's a brief section
> on how to test, but it appears to be largely pro-forma.
> >> 5) installation methods must be in place. I'm not saying it has to be
> >> using the same model as x86, but when we get to beta, if it can't be
> >> installed, it can't meet similar release criteria to existing or prior
> >> primary arches. Where possible, we should be using anaconda for
> >> installation, though I'd be open to looking at using it to build
> >> installed images for machines with severe resource constraints.
> > So we feel it more appropriate to use image creation tools at this
> > point, for the 32-bit systems that we have in mind.
So, my take on this is that if we're to do release validation for ARM,
at a stage where there is no anaconda-for-ARM and our official ARM
deployment method is 'download the image file for your hardware and
flash it' (or however the image file gets written exactly), then we're
going to wind up with release criteria and validation tests for ARM
which look very different from what we have for x86.
I suppose the picture that forms in my mind is that I'd expect
generation of the images to be fully scripted and automated, and for
validation to essentially consist of testing that the generated images
in fact work on each of the 'supported' platforms.
So what I'd expect to be happening is we'd have a list of supported ARM
devices, and we'd want QA to have access to at least one of each of
those devices. Then 'validation testing' for ARM would consist of just
throwing the images at the devices and seeing what stuck.
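To make that concrete, a fully scripted build-and-validate loop could look
something like the sketch below. Everything here is hypothetical: the device
list, the image-creation step (stood in for by a plain `echo`; in practice it
would be something like appliance-creator or a kickstart-driven tool), and the
"validation" check are placeholders for whatever the real ARM tooling and QA
harness end up being.

```shell
#!/bin/sh
# Hypothetical sketch: generate one image per supported ARM device,
# then run a trivial sanity check on each result. Names are assumptions,
# not actual Fedora ARM tooling.

DEVICES="pandaboard trimslice kirkwood"   # hypothetical supported-device list
OUTDIR=$(mktemp -d)

for dev in $DEVICES; do
    img="$OUTDIR/fedora-arm-$dev.img"

    # Stand-in for the real, fully automated image-creation step
    # (e.g. appliance-creator with a per-device kickstart):
    echo "image payload for $dev" > "$img"

    # Stand-in for 'throw the image at the device and see if it sticks':
    # here we only check that a non-empty image was produced.
    if [ -s "$img" ]; then
        echo "PASS $dev"
    else
        echo "FAIL $dev"
    fi
done
```

The point of the sketch is the shape, not the contents: once image generation
is scripted per device, validation reduces to iterating the device list and
checking each generated image against that device.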
Desktop validation would be broadly the same as x86, if desktop is
something we'd actually be expecting to work on ARM. I would rather
expect that to be the case, if we were calling it a primary
architecture. By the same token, though, I would also expect it to
track very closely with x86, as it's all relatively high-level code that
ought to behave the same way on both, give or take graphics drivers.
I suppose I'd expect it to be something less of a heroic undertaking
than x86 validation testing, so long as we have this model of a
relatively small set of images for deployment to a relatively small set
of relatively non-customizable bits of hardware. Almost all the
difficulty and complexity in x86 validation comes from the fact that we
definitely don't have that.
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora