FC5T2 - not ready for prime time.

Peter Jones pjones at redhat.com
Fri Jan 20 19:06:12 UTC 2006


On Fri, 2006-01-20 at 12:11 -0500, Jeff Spaleta wrote:

> If only there were a sane way to track hardware coverage being
> actively tested in-house and then later on in the public test2 phase.
> A little more transparency as to the variety of hardware that's
> participating in the testing phases might head off some of the more
> knee-jerk complaints.

Problem is, in many cases we really can't say what we're testing
in-house.  While you may have a Dual-Core Xeon with an i945 chipset, and I
may have a Dual-Core Xeon with an i945 chipset, and they may be *very*
similar hardware, mine is almost certainly a pre-production machine
covered by NDA, and I can't tell you about it.  This is basically the
case for 2/3 of the machines I've done the dmraid installer work on, and
the same is true of other developers and many other features.

That's not something that lends itself to transparency, and I don't
expect it to get better in any way.

> But there really isn't a sane way to track all
> the hardware variations even if we got system data back from testers
> in an automated fashion... organizing it or mining it would be a pain.

We actually did this hardware tracking in RHN[0], and it's one of the
worst things we lost when RHN stopped supporting Fedora.  But yes,
organizing this kind of data is extraordinarily painful, and mining the
data isn't really any better.

Even with all the hardware data, we need more than that -- we need to
know essentially all of the info in anaconda-ks.cfg, as well as the full
package set (think NVREA) selected.  Even with a really good data model,
this explodes in size very quickly.  And guess what?  In nearly all test
cases that aren't fresh installs, we're going to find out that there's
some package Google doesn't know about on the box.  Sometimes that
package will matter, sometimes it won't.
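
To make the explosion concrete, here's a rough sketch in Python of what
a single test-install report would have to carry.  Every class and field
name below is made up for illustration; it is not an anaconda or RHN
data model.

    # Hypothetical shape of one test-install report; illustrative only.
    from typing import Dict, List, Tuple

    # One installed package as a Name-Version-Release-Epoch-Arch tuple.
    NVREA = Tuple[str, str, str, str, str]

    class InstallReport:
        def __init__(self, hardware, ks_options, packages):
            # Probed devices: CPU, chipset, disks, video, monitor, ...
            self.hardware: Dict[str, str] = hardware
            # The install-time choices recorded in anaconda-ks.cfg
            # (partitioning, bootloader, language, package groups, ...).
            self.ks_options: Dict[str, str] = ks_options
            # The full installed package set -- easily 1000+ entries,
            # and different on nearly every upgraded box.
            self.packages: List[NVREA] = packages

Multiply a thousand-plus NVREA entries by every hardware variation and
every install-time choice, and the number of distinct configurations you
have to store and mine gets unmanageable very quickly.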

Even having all that won't help for cases like Reg's video card problems
-- the problem there is almost certainly DDC failing.  If the hardware
probe fails, we're going to have missing (or worse) data in the list of
tested hardware.  And you really can't distinguish between a monitor
that isn't giving us DDC and one that's just not plugged in.
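
For what it's worth, the ambiguity is easy to demonstrate.  A minimal
sketch, assuming a current kernel's sysfs EDID interface
(/sys/class/drm/*/edid), which is not how the installer probed DDC in
2006:

    # Print EDID sizes per connector.  An empty blob looks identical
    # whether the monitor won't answer DDC or nothing is plugged in.
    import glob

    for path in glob.glob("/sys/class/drm/card*-*/edid"):
        with open(path, "rb") as f:
            data = f.read()
        print(path, "no EDID" if not data else "%d bytes" % len(data))

Either way you get the same answer -- no EDID -- so the tested-hardware
list can't tell the two cases apart.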

[0] Remember the bit about using the data up2date/rhn_register send us
in aggregate?  We were big fans of telling certain hardware vendors real
numbers about how many RH customers really were using their hardware in
Linux.  Sadly, it didn't seem to have much effect.
-- 
  Peter