[Test-Announce] Call for reviewing TCMS use cases and comparison!
greenfeld at laptop.org
Fri Jan 21 19:02:33 UTC 2011
On Fri, Jan 21, 2011 at 11:40 AM, He Rui <rhe at redhat.com> wrote:
> > > Also: It might be useful to add an "unclear" testcase result, similar
> > > to how Mozilla's Litmus system does it (https://litmus.mozilla.org).
> > I do like litmus! It's a nice evolution from testopia for upstream
> > mozilla. We don't currently have an 'unclear' test result. I'm not
> > opposed to it, but would need to better understand how that field is
> > used, and the process around it, in litmus.
> Agree with James.
What I believe Mozilla is doing (since I have not had a chance to work with
their QA team yet) is flagging test cases with a form of soft failure: the
result of a test case neither clearly passed nor clearly failed.
So in addition to "Passed", "Failed", and any other common states (Blocked,
In Progress, etc.) you have an "Unclear" result state.
This forces each such result (along with a comment, which I presume is
required for all non-passing items) to be evaluated during the release cycle
to determine whether it was an actual failure, or whether the test case
needs to be updated and/or clarified.
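To make the idea concrete, here is a minimal sketch of such a result model. This is purely illustrative and hypothetical, not Litmus's actual data model: the state names and the rule that every non-passing result must carry a comment are assumptions based on the description above.

```python
from dataclasses import dataclass
from enum import Enum

class ResultState(Enum):
    PASSED = "passed"
    FAILED = "failed"
    BLOCKED = "blocked"
    IN_PROGRESS = "in progress"
    UNCLEAR = "unclear"      # neither clearly passed nor clearly failed

@dataclass
class TestResult:
    case_id: int
    state: ResultState
    comment: str = ""

    def __post_init__(self):
        # Require an explanatory comment for every non-passing result,
        # so "unclear" entries can be triaged during the release cycle.
        if self.state is not ResultState.PASSED and not self.comment.strip():
            raise ValueError(f"result for case {self.case_id} needs a comment")
```

With a rule like this, an "unclear" result cannot be filed silently; the comment gives the core QA team something to evaluate later.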
See the "recently unclear" and "testcases most frequently marked as unclear"
links at the bottom of pretty much every page of litmus.mozilla.org for
examples. On the former, you can click on the dialog icon beneath each
result ID to see the comment without switching pages.
Personally I like this approach because project management (in my
experience) rarely budgets time for test cases to be updated; they prefer
that you start testing the next release instead. Projects I have been on
have required me to use test cases last meaningfully updated years ago, if
not 5-10+ years ago, by simply acting as if the inaccurate information were
not there. Treating potentially incorrect test cases as a form of failure
ideally forces them to be updated and rerun promptly.
The downside is that if a tester is unfamiliar with an area or needs
hand-holding, they might mark a lot of results unclear that are actually
rather straightforward. Alternatively, no one looks at them, and you end
up with muddy results at the end of the release. Either outcome puts
additional effort on the core QA team to remedy the situation.