> If we implement a 'scenario' key, we can then just teach Bodhi to look
> for that and combine it with the item, then maybe if it can't find one,
> treat 'testname-arch' as the scenario (as a fallback for old results,

We don't have that many tests yet, and all of them are basically under our control, so
if we end up doing this, I think we should simply push a patch to all of them (should be
fairly easy) and avoid any legacy handling.

> results from Taskotron until we teach it to write 'scenario' items into
> its results, and a general fallback for cases where 'scenario' is
> missing for some reason). That could be the general convention, I
> guess: first look for a 'scenario' key, otherwise try and construct one
> from very commonly available keys.

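For illustration, the convention described above could look roughly like this on the
consumer side. This is only a sketch: the dict shape and the field names ('data',
'item', 'arch', 'scenario') are assumptions based on the discussion, not actual Bodhi
or ResultsDB code.

    # Sketch of the quoted lookup convention, not actual Bodhi code. Assumes
    # a result shaped loosely like a ResultsDB result, e.g.:
    #   {'testcase': 'some.test.name', 'outcome': 'PASSED',
    #    'data': {'item': 'foo-1.0-1.fc26', 'arch': 'x86_64'}}

    def result_identity(result):
        """Return the (item, scenario) pair a gating consumer would match on."""
        data = result.get('data', {})
        item = data.get('item')
        scenario = data.get('scenario')
        if scenario is None:
            # Fallback for results that predate 'scenario': construct one from
            # commonly available keys, here 'testname-arch'.
            scenario = '{0}-{1}'.format(result.get('testcase'), data.get('arch'))
        return item, scenario
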
I'd like to avoid the guessing game here. The system will be reliable only if it is
obvious and simple. If 'scenario' turns out to work well for us, let's go all the way.
If there's no 'scenario', results should be unique when identified by testcase name +
item. If that's not enough, 'scenario' needs to be there, period. Of course people can
submit whatever results they want, but once we want to employ them in gating, the
results need to adhere to the rules. We're open source; I believe we should send
patches to fix the task reporting if needed, rather than extending the guessing
algorithms in our consumers.
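
To make that concrete, a consumer following this stricter rule would construct nothing
at all. A minimal hypothetical sketch, assuming the same dict-shaped results as in the
earlier snippet:

    # Hypothetical helper illustrating the stricter rule, not real code.

    def matches_requirement(result, testcase, item, scenario=None):
        """Match a result by testcase name + item, plus an explicit
        'scenario' when the gating policy demands one; no guessing."""
        data = result.get('data', {})
        if result.get('testcase') != testcase or data.get('item') != item:
            return False
        if scenario is None:
            # testcase name + item is expected to identify the result uniquely.
            return True
        # When a scenario is required, the result must carry it explicitly;
        # a missing 'scenario' means the reporter should be patched, not
        # second-guessed by the consumer.
        return data.get('scenario') == scenario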