Very rough storage validation matrix draft

Kamil Paral kparal at redhat.com
Fri Dec 20 14:14:22 UTC 2013


> I've made a few abortive tries at re-doing the storage tests and
> basically given up because it's just a hideous thing to try and cover,
> but I thought while I'm still on a momentum roll from F20 and remember
> some of the issues that came up during F20 validation, I'd take another
> cut at it.
> 
> Here's what I came up with:
> 
> https://fedoraproject.org/wiki/User:Adamwill/Draft_storage_matrix

When I first saw this, my reaction was "Holy cow!"

After a while... I think it's doable. But... we need to take a completely different approach.

As you say, there is no way we could test this on a regular basis (every TC). But we could try a different approach, one that I have been considering for a long time. You mention testcase_stats could help if it tracked all dimensions. It's quite hard to do that, and it's even harder to visualize it afterwards, but we could achieve the same thing as part of our editing process without needing any tools at all. Instead of just providing {{result|pass|kparal}} in the fields and then wiping the matrix clean on every new TC, we could input something like {{result|pass|Beta TC1 kparal}} and leave the matrix on a separate static page (no cleanups).

Once in a while it could be helpful to prune old results. Let's say I have the following in a single cell:
{{result|pass|Alpha RC2 adamw}}
{{result|pass|Beta TC1 kparal}}
{{result|pass|Beta TC3 mkrizek}}

Now, if you want to put in yet another pass from Beta RC1, you might just wipe out the previous results (or leave only the most recent one), because they add no further value. Many of your proposed test cases are so specific that we can be fairly sure a single walk-through guarantees they work correctly (this doesn't apply to all of them, of course). We could add some guidelines to the wiki page, or just prune the matrix from time to time if it becomes bloated (however, can you imagine _that_ matrix bloated with results? I can't, except in a few very specific, often-tested test cases. And people probably won't bother to add yet another pass once there are enough results already).
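To make that pruning step concrete, here's a rough sketch of what a cell could look like before and after (the compose names and usernames below are just placeholders, not real results):

```
Before pruning:
{{result|pass|Alpha RC2 adamw}}
{{result|pass|Beta TC1 kparal}}
{{result|pass|Beta TC3 mkrizek}}

After recording a newer pass and pruning, keeping only the most recent result:
{{result|pass|Beta RC1 kparal}}
```

The older passes carry no extra information once a newer pass exists, so dropping them keeps the cell readable while preserving the "when was this last verified" signal.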

With this approach, we can clearly see what was tested and when it was last tested. It also encourages people to test the blank spaces, which is not the case for our usual matrices - without the help of testcase_stats and some serious investigation (which I suspect hardly anyone does), people just blindly pick test cases and complete them, often wasting their time on areas that were already more than sufficiently covered in previous composes. This "timestamping" approach works much better in this respect. I'm a bit afraid of clutter (some cells just growing too big), but I think it should be manageable as described above.

What do you think?


A few extra notes:
1. Let's get rid of QA:Testcase_install_to_SATA_device and QA:Testcase_install_to_PATA_device. Seriously, when was the last time we encountered a bug in either the SATA or the PATA driver? We need to get rid of all unnecessary cruft. People would notice very soon if it were not working; we don't need a test case for that. For the same reason we don't need test cases for monitors or keyboards or whatever - those are things that are automatically spotted when broken.

2. The current Go/No-Go requirements state that all QA matrices must be filled out. If we used this new "timestamping" approach combined with extremely detailed installation matrices (as you proposed), we would not be able to satisfy that. But we would be able to say that, e.g., this feature worked reliably two weeks ago. Also, since there are so many combinations in the matrix (and this is just a first draft), it's very likely that there will be a lot of blank spaces. We simply can't test it all; this is just a tool to make us more efficient and to better track what was done. So, our Go/No-Go requirement will likely need to be adjusted if we go this route.


Btw, thanks for moving all of this forward. Creating such proposals is time-consuming and anything but fun.
