submitters +1ing their own packages
nils at redhat.com
Mon Sep 12 15:56:45 UTC 2011
On Fri, 2011-09-09 at 12:22 +0200, Vít Ondruch wrote:
> Sorry, you are mixing two things:
> 1) One is testing environment and it can be probably well defined,
> clean, etc.
And thus incomparable to real-life environments. Mind, I'm not arguing
against some testing (e.g. automated regression tests, AutoQA) being
done in defined environments. But I doubt that it's enough if all
testing is done under these conditions, because this implies we'd need
to define, and test against, all (or at least the majority of)
environments under which our software could be run, which frankly is
unrealistic. Therefore I'd rather have people also test in their
"naturally grown" environments to better cover real life situations that
deviate from defaults.
> 2) The other thing is maintainer mindset. You can try to convince
> yourself to take a different look but I doubt it will work. It reminds
> me like if you do patch review of your patches, which doesn't make
> sense. You, as a developer, are not able to spot weak points.
This is quite condescending, and a straw man to boot. Testing a piece of
software the way we do it is in no way comparable to reviewing
patches: In the one case I'd be using the software _like any other user
or tester_, trying out some functionality, partly related to what was
intended to be fixed, partly a few basic-functionality ("smoke") tests;
in the other case I'd be looking at code changes which I worked on for
let's say the last few hours or even days. I grant you that the latter
case invites, say, a less skeptical approach than is warranted when
reviewing patches, but it's also much harder to do and much less well
defined (thus much easier to trick yourself into biased behavior) than
the former: with testing it's clear what you have to do (to a maintainer
or developer possibly even more than John Random Tester) and it's clear
how results should be interpreted. Either it does what it's supposed to
do, or it doesn't. If it doesn't, and it did before, it's a regression.
If I can describe to a tester how something should be tested, I can test
it that way myself.
> And it is
> expected that every developer delivers well tested and well behaving
> code from his side (i.e. automatic +1 karma from his side).
And that's wrong whichever way I look at it: If I submit an update only
after I've done testing in the way I described above, I've wasted
precious time in which others could have tested as well. If submitting
an update would automatically imply that level of testing, I couldn't
submit updates for Fedora releases which I don't use.
When I submit an update it usually means that I've tried out the code
(not necessarily the package, probably rather from a checked out source
tree), checked it against bugs supposed to be fixed and done minimal
smoke tests. Nothing more.
> If there is
> not enough karma for his package to bring it into the stable, then there
> is probably time to ask somebody (probably on fedora-devel), to test
> this package.
We have a default of +3 karma for automatic pushes to stable, so a +1
from the maintainer by itself isn't enough to push an update to stable.
Non-critical-path updates can be pushed to stable within 7 days of
having been pushed to testing, without any karma at all, which seems a
much lower hurdle if I wanted to dump broken software onto users.
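The two paths to stable described above (enough karma, or enough time in
testing for non-critical-path updates) can be sketched roughly like this.
This is a minimal illustration only: the function and field names are
hypothetical, not Bodhi's actual API, and real critical-path handling has
its own rules beyond what's shown here.

```python
from datetime import datetime, timedelta

# Default values as mentioned above; Bodhi lets these be configured.
KARMA_THRESHOLD = 3                  # karma needed for an automatic push
TESTING_PERIOD = timedelta(days=7)   # minimum time in updates-testing

def can_push_stable(karma: int, pushed_to_testing: datetime,
                    now: datetime, critical_path: bool = False) -> bool:
    """Hypothetical sketch: an update may go stable either via enough
    karma, or (for non-critical-path updates) via time in testing."""
    if karma >= KARMA_THRESHOLD:
        return True
    if not critical_path and now - pushed_to_testing >= TESTING_PERIOD:
        return True
    return False
```

For example, a non-critical-path update with no karma at all would still
qualify once it has sat in testing for a week, while a +1 from the
maintainer alone would not push anything.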
> BTW no policy can stop some "evil maintainer" who will create other
> Fedora accounts and give karma to his packages under different
> identities.
And that's supposed to mean... what?
> You can even add karma without Fedora identity, but I am not sure if
> that counts.
Nils Philippsen
Red Hat
nils at redhat.com
PGP fingerprint: C4A8 9474 5C4C ADE3 2B8F 656D 47D8 9B65 6951 3011

"Those who would give up Essential Liberty to purchase a little Temporary
Safety, deserve neither Liberty nor Safety." -- Benjamin Franklin, 1759