We've had openQA testing of updates for stable and branched releases,
and gating based on those tests, enabled for a while now. I believe
this is going quite well, and I think we addressed the issues reported
when we first enabled gating - Bodhi's gating status updates work more
smoothly now, and openQA respects Bodhi's "re-run tests" button so
failed tests can be re-triggered.
A few weeks ago, I enabled testing of Rawhide updates in the openQA
lab/stg instance. This was to see how smoothly the tests run, how often
we run into unexpected failures or problems, and whether the hardware
resources we have are sufficient for the extra load.
So far this has been going more smoothly than I anticipated, if
anything. The workers seem to keep up with the test load, even though
one out of three worker systems for the stg instance is currently out
of commission (we're using it to investigate a bug). We do get
occasional failures which seem to be related to Rawhide kernel slowness
(e.g. operations timing out that usually don't otherwise time out), but
on the whole, the level of false failures is (I would say) acceptably
low, enough that my current regime of checking the test results daily
and restarting failed ones that don't seem to indicate a real bug
should be sufficient.
So, I'd like to propose that we enable Rawhide update testing on the
production openQA instance also. This would cause results to appear on
the Automated Tests tab in Bodhi, but they would be only informational
(and unless the update was gated by a CI test, or somehow otherwise
configured not to be pushed automatically, updates would continue to be
pushed 'stable' almost immediately on creation, regardless of the test
results).
More significantly, I'd also propose that we turn on gating on openQA
results for Rawhide updates. This would mean Rawhide updates would be
held from going 'stable' (and included in the next compose) until the
gating openQA tests had run and passed. We may want to do this a bit
after turning on the tests; perhaps Fedora 37 branch point would be a
natural time to do it.
Currently this would usually mean a wait from update submission to
'stable push' (which really means that the build goes into the
buildroot, and will go into the next Rawhide compose when it happens)
of somewhere between 45 minutes and a couple of hours. It would also
mean that if Rawhide updates for inter-dependent packages are not
correctly grouped, the dependent update(s) will fail testing and be
gated until the update they depend on has passed testing and been
pushed. The tests for the dependent update(s) would then need to be re-
run, either by someone hitting the button in Bodhi or an openQA admin
noticing and restarting them, before the dependent update(s) could be
pushed.
In the worst case, if updated packages A and B both need the other to
work correctly but the updates are submitted separately, both updates
may fail tests and be blocked. This could only be resolved by waiving
the failures, or replacing the separate updates with an update
containing both packages.
All of those considerations are already true for stable and branched
releases, but people are probably more used to grouping updates for
stable and branched than doing it for Rawhide, and the typical flow of
going from a build to an update provides more opportunity to create
grouped updates for branched/stable. For Rawhide, the easiest way to
group updates when you need to is to do the builds in a side tag and use
Bodhi's ability to create updates from a side tag.
As with branched/stable, only critical path updates would have the
tests run and be gated on the results. Non-critpath updates would be
unaffected. (There's a small allowlist of non-critpath packages for
which the tests are also run, but they are not currently gated on the
results.)
I think doing this could really help us keep Rawhide solid and avoid
introducing major compose-breaking bugs, at minimal cost. But it's a
significant change and I wanted to see what folks think. In particular,
if you find the existing gating of updates for stable/branched releases
to cause problems in any way, I'd love to hear about it.
IRC: adamw | Twitter: adamw_ha
cross-posting test@ and desktop@
Fedora Workstation 32 (upgraded from f31)
I set the laptop aside on battery power; 12 hours later it was dead
instead of sleeping. On F31 it would reliably sleep after 20 minutes.
Sleep still happens when pressing the power button and closing the
lid. It seems to be a GNOME automatic suspend timer problem.
Using dconf editor, I changed the automatic suspend timeout to a custom
value of 30, and the problem doesn't happen. Is there a way to
increase debug messages somehow to find out whether this timeout is
being reached? And what process or policy is causing it to be reset?
With the available information I can't figure out what's preventing
suspend from happening.
Greetings, my name is Alejandro Lopez from Slimbook Computers and I'm
writing this email following Matthew's advice.
We are a hardware company committed to the flawless integration and
improvement of the end user experience with the Linux OS and the
hardware it runs on.
We were given the chance to work together with the KDE team back in 2016
when other laptop brands didn't even think about it, and after getting
the approval from both parties, the objective of our collaboration was
the birth of a modern and sleek device that excelled in performance and
provided a flawless user experience for the community.
The intent of this email is to let you know that demand for your amazing
Fedora distribution currently accounts for 2% of all our orders, and we
think that this should change. I've sent Matthew some general reports on
which distros are currently in demand from our customers; KDE Neon has
shown exponential growth compared to the other distros.
What are your thoughts on this situation?
I believe that we should work something out together to fix this.
I think we should do more than offer Fedora as just another operating
system (the user can choose up to 12 distributions).
*The BEST GNU/Linux computers since 2015*
To what extent should we test Desktop applications, and how should we
automate that testing?
Currently, we are testing all Desktop applications as required by the
release criteria, but we have found that different testers take
different approaches, which has resulted in several problematic bugs in
the very last moments of the release cycle. We would like to avoid such
a situation in the future, so we would like to discuss this matter a bit
more.
The release criteria state that the applications must *withstand basic
functionality tests*, but it is not very clear what the basic functionality
is and how we shall test it.
Therefore, we would like your view on the following questions:
- What should be tested in the scope of basic functionality in general?
- Are there specific features or workflows you would like us to test
with specific applications?
- What could be set as the absolute required minimum for the Desktop
applications so as not to block Fedora (ergo, to make Fedora pass the
Go/No-Go meeting)? I would like to work on automating this so that we
could have it tested very frequently. Otherwise the automation is
difficult and requires a lot of time if we want to test the overall
functionality.
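As a starting point for automating such a required minimum, a crude
smoke check could simply verify that each application launches and stays
running (a sketch only: the application names, command form, and grace
period are assumptions, and real openQA tests drive the GUI rather than
just spawn a process):

```python
import subprocess
import time

def app_survives_launch(cmd, grace=2.0):
    """Launch `cmd` and report whether the process is still alive after
    `grace` seconds -- a minimal stand-in for 'the app starts at all'."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
    )
    time.sleep(grace)
    alive = proc.poll() is None  # None means it has not exited yet
    if alive:
        proc.terminate()
        proc.wait()
    return alive

# Hypothetical app list; in practice this would come from the release
# criteria / critpath definition:
# for app in ["gnome-text-editor", "nautilus"]:
#     print(app, app_survives_launch([app]))
```

Anything deeper (opening files, exercising specific workflows) would
build on top of a baseline like this.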
Thank you very much for your help.
FEDORA QE, RHCE
612 45 Brno - Královo Pole