Test Maps
Adam Williamson
awilliam at redhat.com
Sat Apr 5 00:53:53 UTC 2014
On 2014-04-03 17:13, Mike Ruckman wrote:
> Greetings testers!
>
> I've recently been working on a _rough_ proof of concept for my Test
> Maps [0] idea - it's just far enough along to get across what I have
> in mind. As I outlined on my blog, our test matrices are currently
> large, and test cases are only logically grouped together on the
> wiki. This leads to two issues I can see: 1) you have to look through
> the given matrix to find something you can/want to test, and 2) only
> those who have worked through the matrix several times (or happened
> to write the matrix) can easily know when and what to test.
>
> The current testing workflow requires a lot of organic knowledge for
> new
> testers to learn. A new contributor joins in on IRC, says "What can I
> test?" and those in channel will ask about the h/w the contributor
> has, their DE preferences, etc., to give them a starting point
> somewhere on the wiki.
>
> What I envision is a simple web-based tool that new contributors can go
> to, answer some questions (What arch do you have? What DE do you use?
> What install media do you have available? etc., etc.) and be handed a
> list of tests they can do in a sensible order (which would aid with the
> testing of the multiple WG products - each could make their own test
> maps). The issue with this is that we don't have an easy way to
> determine the h/w and software requirements for each of our test
> cases.
>
> Until that data exists, we have to hand-write lists of what tests make
> sense to do one after the other (while updating testcase requirements
> as we go). This brings us to the proof of concept, which lives here:
> http://188.226.194.38/
>
> With this proof of concept, I'm hoping to give people a more concrete
> sense of what exactly I have in mind. Then we can determine whether
> this would be a useful idea for us to pursue going forward - or
> whether it isn't as nifty an idea as I think it could be.
>
> There is currently only one "test map" to click through, and a plethora
> of features aren't yet implemented. The key features that would need to
> be written before we could *actually use it* include:
>
> - Create new "test maps" without hard-coding them
> - Select which map to follow
> - Easily add or update test cases
> - Add/remove h/w and software requirements
> - Dynamically find tests to run based on the user's h/w
>
> If this idea proves to be something we want to work on, I can put
> together a more complete road-map/feature-list for review.
>
> Here are some of the cooler things we could potentially do with a
> system
> like this:
>
> - FAS Integration (keep track of hardware profiles and post results,
> control edits to testcases)
> - Track test results
> -- See results in real time
> -- Stats on testing and hardware usage
> - Edit Testcases (and push them back to the wiki)
> - Badges integration
>
> I'm sure there's plenty of stuff I haven't thought of or had pointed
> out to me - so if you have any thoughts or questions, reply!
>
> Thanks!
>
> [0] http://roshi.fedorapeople.org/testing-efficiently.html
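(As a rough illustration of the requirement-matching Mike describes - filtering and ordering test cases against a tester's declared hardware/software profile - a sketch might look like the following. All function names, test case names, and data here are hypothetical, not taken from the actual proof of concept:)

```python
# Hypothetical sketch of requirement-based test matching: each test
# case declares requirements (e.g. arch, desktop), and a tester's
# profile is used to select the tests they can run, in map order.

def find_tests(testcases, profile):
    """Return test cases whose requirements are all met by the profile,
    sorted by their declared position in the test map."""
    matched = [
        tc for tc in testcases
        if all(profile.get(key) in allowed
               for key, allowed in tc["requires"].items())
    ]
    return sorted(matched, key=lambda tc: tc["order"])

# Illustrative data only - not real test map contents.
testcases = [
    {"name": "QA:Testcase_boot_default_install",
     "order": 1, "requires": {"arch": {"x86_64", "i386"}}},
    {"name": "QA:Testcase_desktop_login",
     "order": 2, "requires": {"arch": {"x86_64"}, "desktop": {"GNOME"}}},
    {"name": "QA:Testcase_arm_image_boot",
     "order": 1, "requires": {"arch": {"armhfp"}}},
]

profile = {"arch": "x86_64", "desktop": "GNOME"}
for tc in find_tests(testcases, profile):
    print(tc["name"])
```

A real tool would presumably pull the requirements from stored test map definitions (and eventually from per-testcase metadata) rather than hard-coded dictionaries, but the matching step itself could stay this simple.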
I do like the idea in general, but I'm not sure it's *quite* the most
logical order of attack. At least in my conception of The Problem, this
is more of a second-order thing.
What I guess I'd say is The Fundamental Problem here is that we don't
have a great way of presenting our test cases and results. Our
'wiki-as-a-TCMS' approach really involves representing *three* things as
mediawiki elements, when you break it down:
i) test cases
ii) test plans
iii) test results
I've said in the past that I think the wiki-as-TCMS approach is
surprisingly good for being an obvious hack, but thinking about it more,
I really only want to stand by that opinion in the case of i). The wiki
is only a barely-passable medium for presenting test plans -
especially as they grow more complex, as ours have - and frankly a
pretty *bad* way of entering and representing test results. We're really
reaching the point where we need to do at least ii) and iii) in a rather
more sophisticated way.
We now have a couple of efforts coming at ii) and iii): Test
Maps and testcase_stats -
http://testdays.qa.fedoraproject.org/testcase_stats/ . They're both
pretty neat little hacks, I reckon, and they're both things we ought to
have. But I think in an ordered conception of things, we really ought to
be thinking about coming up with a more robust *framework* for test
plans and test results, which would allow us to build things like test
maps and testcase_stats as fairly thin 'presentation layers' on top of
the robust underlying framework.
Do you (actual code-writey types) think I'm thinking along the right
lines there, or getting too platform-y or ambitious?
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net