I mentioned this in a QA meeting, and have given it enough testing that I think it's broadly usable. If desired it can be copied out of my user account and put up somewhere where QA folks will see it and can modify it as issues or improvements are discovered.
What is it? The idea is to produce a system that can confidently be
used for bare-metal testing, without risking the primary operating
system. While VMs are a great way to test, they are also a really
idealized environment that tends not to expose an assortment of bugs
that affect particular hardware. And quite a lot of folks reasonably
don't want to upgrade their daily-use hardware early on, because they
don't want to be constantly debugging things, or have to figure out how
to undo the upgrade if it goes really badly.
Therefore, I present a dual-boot setup offering:
* no re-partitioning;
* no installation step; a system upgrade is used instead;
* reversibility, or undoability, i.e. with just a few steps you can delete the "test OS".
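To give an idea of the mechanics, here is a rough sketch of how such a
setup could work. This is not the actual script: it assumes a Btrfs
root, and the subvolume name, release version, and kernel paths are all
placeholders.

```shell
# Illustrative sketch only, not the actual script. Assumes a Btrfs root;
# subvolume name, release, and kernel versions are placeholders.

# 1. Snapshot the running root into a new subvolume -- no re-partitioning.
sudo btrfs subvolume snapshot / /testos

# 2. Upgrade just that snapshot to the release under test -- no installer run.
sudo dnf --installroot=/testos --releasever=rawhide distro-sync -y

# 3. Add a boot entry that selects the snapshot subvolume at boot.
sudo grubby --add-kernel=/testos/boot/vmlinuz-<version> \
    --initrd=/testos/boot/initramfs-<version>.img \
    --title="Test OS (Rawhide)" --args="rootflags=subvol=testos"

# Undo: remove the boot entry and delete the subvolume.
sudo grubby --remove-kernel=/testos/boot/vmlinuz-<version>
sudo btrfs subvolume delete /testos
```

The real setup may differ in the details, but this is the shape of it:
the "test OS" lives in a subvolume, so deleting it is a two-command
operation.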
On Fri, 2022-09-02 at 08:37 +0000, Zbigniew Jędrzejewski-Szmek wrote:
> > Now, because I glued openQA to the critpath because it was handy, there
> > are two sets of consequences to a package being in critical path:
> > 1. Tighter Bodhi requirements
> > 2. openQA tests are run, and results gate the update (except Rawhide)
> > So, one of the implicit questions here is, is it OK to keep twinning
> > these two sets of consequences, or should we split them up? Splitting
> > them up kinda implies answer 2) from my original mail: "Keep the
> > current "critical path" concept but define a broader group
> > of "gated packages" somewhere". Because we would then need some new
> > concept that isn't "critical path". As I said, that's more *work* -
> > it'd require us to write new code in several places. Even if we
> > decide it'd be nice to do this, is it nice *enough* to be worth doing
> > that work?
> I'd still vote for keeping a single critpath list and using it as
> "the list of packages that require extra care and testing".
> As you describe, the original meaning of critpath has shifted, but
> it's because the way we do updates and QA has also shifted. Doing
> gating tests for a package seems much more useful than just keeping
> it longer in 'updates-testing' in hope that somebody discovers an
> important regression in the second week.
Well, there's a caveat there - openQA doesn't test everything. On the
whole we cover quite a lot with the set of tests that gets run on
updates, but there's certainly lots of potential for there to be
important bugs it misses, that a human tester might catch. So I think
there is still a case for the higher karma requirements too.
> So yeah, I don't think it makes sense to do the extra work to split
> the concepts. Also because we have way too many concepts and processes
> in Fedora already.
On the whole, though, I agree with you. I just don't trust my own
opinion because it's obviously biased by what's convenient for me. :D
> > If we don't think it's worth doing that work, then we're kinda stuck
> > with openQA glomming onto the critpath definition to decide which
> > updates to test and gate, because I don't have any other current viable
> > choices for that, really. And we'd have to figure out a critpath
> > definition that's as viable as possible for both purposes.
> > BTW, one other thought I've had in relation to all this is that we
> > could enhance the current critpath definition somewhat. Right now, it's
> > built out of package groups in comps which are kinda topic-separated:
> > there's a critpath-kde, a critpath-gnome, a critpath-server, and so on.
> > But the generated critical path package list is a monolith: it doesn't
> > distinguish between a package that's on the GNOME critpath and a
> > package that's on the KDE critpath, you just get a big list of all
> > critpath packages. It might be nice if we actually did distinguish
> > between those - the critpath definition could keep track of which
> > critpath topic(s) a package is included in, and Bodhi could display
> > that information in the web UI and provide it via the API. That way
> > manual testers could get a bit more info on why a package is critpath
> > and what areas to test, and openQA could potentially target its test
> > runs to conserve resources a bit, though this might require a bit more
> > coding work on the gating stuff now I think about it.
> That sounds useful. We only need a volunteer to figure out the details
> and do the work ;)
I actually did a huge rewrite of the thing that generates the critpath
data this week, and it probably wouldn't be tooooo much work, honestly.
The most annoying bit would be the Bodhi frontend stuff, but that's
because I'm bad at frontend dev in general. :P But yeah, this is
definitely off in sky-castle land. I'll add it to my ever-growing list
of sky-castle projects to do when I get a couple of years of spare time.
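To illustrate the per-topic idea, here is a toy sketch; the package and
topic names are made up, and this is not the real critpath generator's
data model, just the shape of the bookkeeping: given "package topic"
membership pairs, collect the topics that pull each package in.

```shell
# Toy sketch of per-topic critpath membership (illustrative names only,
# not the real critpath generator's data structures).
critpath_topics() {
    # $1 = package name; reads "package topic" pairs on stdin,
    # prints the space-separated list of topics that include it.
    awk -v pkg="$1" '$1 == pkg { printf "%s ", $2 }' | sed 's/ $//'
}

# comps-style membership data: package -> critpath topic
data='glib2 critpath-gnome
glib2 critpath-kde
gnome-shell critpath-gnome
openssh-server critpath-server'

# A flat critpath list would only say "glib2 is critpath"; keeping the
# topics tells testers (and openQA) *which* areas to exercise:
echo "$data" | critpath_topics glib2            # -> critpath-gnome critpath-kde
echo "$data" | critpath_topics openssh-server   # -> critpath-server
```

Bodhi would then just need to carry this topic list through to the web
UI and API instead of flattening it away.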
IRC: adamw | Twitter: adamw_ha
We've had openQA testing of updates for stable and branched releases,
and gating based on those tests, enabled for a while now. I believe
this is going quite well, and I think we addressed the issues reported
when we first enabled gating - Bodhi's gating status updates work more
smoothly now, and openQA respects Bodhi's "re-run tests" button so
failed tests can be re-triggered.
A few weeks ago, I enabled testing of Rawhide updates in the openQA
lab/stg instance. This was to see how smoothly the tests run, how often
we run into unexpected failures or problems, and whether the hardware
resources we have are sufficient for the extra load.
So far this has been going more smoothly than I anticipated, if
anything. The workers seem to keep up with the test load, even though
one out of three worker systems for the stg instance is currently out
of commission (we're using it to investigate a bug). We do get
occasional failures which seem to be related to Rawhide kernel slowness
(e.g. operations timing out that usually don't), but
on the whole, the level of false failures is (I would say) acceptably
low, enough that my current regime of checking the test results daily
and restarting failed ones that don't seem to indicate a real bug
should be sufficient.
So, I'd like to propose that we enable Rawhide update testing on the
production openQA instance also. This would cause results to appear on
the Automated Tests tab in Bodhi, but they would be only informational
(and unless the update was gated by a CI test, or somehow otherwise
configured not to be pushed automatically, updates would continue to be
pushed 'stable' almost immediately on creation, regardless of the test
results).
More significantly, I'd also propose that we turn on gating on openQA
results for Rawhide updates. This would mean Rawhide updates would be
held from going 'stable' (and included in the next compose) until the
gating openQA tests had run and passed. We may want to do this a bit
after turning on the tests; perhaps Fedora 37 branch point would be a
natural time to do it.
Currently this would usually mean a wait from update submission to
'stable push' (which really means that the build goes into the
buildroot, and will go into the next Rawhide compose when it happens)
of somewhere between 45 minutes and a couple of hours. It would also
mean that if Rawhide updates for inter-dependent packages are not
correctly grouped, the dependent update(s) will fail testing and be
gated until the update they depend on has passed testing and been
pushed. The tests for the dependent update(s) would then need to be re-
run, either by someone hitting the button in Bodhi or an openQA admin
noticing and restarting them, before the dependent update(s) could be
pushed.
In the worst case, if updated packages A and B both need the other to
work correctly but the updates are submitted separately, both updates
may fail tests and be blocked. This could only be resolved by waiving
the failures, or replacing the separate updates with an update
containing both packages.
All of those considerations are already true for stable and branched
releases, but people are probably more used to grouping updates for
stable and branched than doing it for Rawhide, and the typical flow of
going from a build to an update provides more opportunity to create
grouped updates for branched/stable. For Rawhide, if you need to group
updates, the easiest way is to do the builds in a side tag and use
Bodhi's ability to create updates from a side tag.
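For illustration, a side-tag workflow for two inter-dependent packages
might look like the following; the tag name, package names, and update
notes are placeholders, not real values.

```shell
# Hypothetical walkthrough: grouping inter-dependent Rawhide builds in a
# side tag. Tag and package names below are placeholders.

# Request a side tag (run inside a cloned package repo); the command
# prints the name of the new tag, e.g. f38-build-side-1234:
fedpkg request-side-tag

# Build both packages into that tag:
cd packageA && fedpkg build --target=f38-build-side-1234
cd ../packageB && fedpkg build --target=f38-build-side-1234

# Create a single Bodhi update containing both builds:
bodhi updates new --from-tag f38-build-side-1234 \
    --notes "Grouped update for inter-dependent packageA and packageB"
```

Because both builds land in one update, the gating tests see them
together and neither is blocked waiting on the other.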
As with branched/stable, only critical path updates would have the
tests run and be gated on the results. Non-critpath updates would be
unaffected. (There's a small allowlist of non-critpath packages for
which the tests are also run, but they are not currently gated on the
results.)
I think doing this could really help us keep Rawhide solid and avoid
introducing major compose-breaking bugs, at minimal cost. But it's a
significant change and I wanted to see what folks think. In particular,
if you find the existing gating of updates for stable/branched releases
to cause problems in any way, I'd love to hear about it.
I'd like to bring something to your attention. I just downloaded Server
Boot Media (Netinstall) from:
I made a USB boot disk out of that and it failed to boot.
Here is what I get after choosing to install at the GRUB Menu:
error: ../../grub-core/fs/fshelp.c:257:file '/images/pxeboot/vmlinuz'
not found.
error: ../../grub-core/loader/i386/efi/linux.c:258:you need to load the
kernel first.
Press any key to continue ...
The ISO was registered as "last known good". This can't be good.
Which one really works?
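Before blaming the image itself, it may be worth ruling out a bad
download or a bad write to the USB stick. A sketch of the usual checks
(file and device names are placeholders):

```shell
# Placeholders throughout; adjust names to the actual download.

# 1. Verify the download against the published CHECKSUM file:
sha256sum -c Fedora-Server-netinst-x86_64-*-CHECKSUM

# 2. Or verify the checksum embedded in the ISO itself:
checkisomd5 Fedora-Server-netinst-x86_64-*.iso

# 3. Write the image as a plain byte copy (sdX = the whole USB device,
#    not a partition):
sudo dd if=Fedora-Server-netinst-x86_64-*.iso of=/dev/sdX \
    bs=4M status=progress conv=fsync
```

If all of those pass and it still fails to boot, that points at the
image or the firmware rather than the media.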
This morning, when I saw the call for Test Days email, I had an idea
that I want to get some feedback on before I propose it as a Test Day.
I manage multiple systems, and for each new release of Fedora
Workstation I do installs (to clean storage) rather than upgrades. This
gets rid of the trash in the system space, and users get a chance to
clean the trash out of their home directory before it's copied to their
new home directory. The automation (script) I use for configuring these
systems after the base Anaconda install not only installs and removes
software packages, but also sets desktop preferences, service settings,
and user settings such as groups. I start using the script with what I
call "As Deployed" testing, which typically starts a little before
branching of the new version. This happens for pre-release drops as
testing is called for, or for other drops when something piques my
interest.
Testing with this as-deployed configuration can reveal a lot about the
readiness of the new release for deployment. For instance:
Unreported bugs can sometimes show up. Besides reporting the bug, I
can understand its impact and plan for a workaround or other measure,
given that the bug may not be a priority. An application may be retired,
or have a new version that is delayed. Such situations can lead to the
deployment being delayed, or to working with users to decide whether we
need to tolerate a bug or pick a new application.
A GNOME setting (gsettings) may work differently or no longer be
available. There may be new settings available.
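One simple way to surface such gsettings changes between releases is to
dump the full settings tree on each release and diff the dumps (the
file names here are just illustrative):

```shell
# Illustrative sketch: compare GNOME settings across releases.
gsettings list-recursively | sort > ~/gsettings-old.txt   # on the old release
gsettings list-recursively | sort > ~/gsettings-new.txt   # on the new release
diff ~/gsettings-old.txt ~/gsettings-new.txt              # new/changed/removed keys
```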
I would guess that most folks that deploy new Fedora Workstation
releases for a group of users do something similar. So why then have a
test day for this? I see the following benefits:
On the Test Day results page, each tester can report the number of users
they support, and list by number any new bugs or issues they found that
were seen with their as-deployed configuration but not with the base
Anaconda-installed configuration. This gives an indication of the
impact of the Bugzilla bugs and GNOME issues that were found.
They can be encouraged to post on Fedora Discussion about any new or
different gsettings or service configurations they found. This will
raise the visibility of new, changed, and removed settings. They can
also describe replacement application(s) they are using in place of ones
that are retired or otherwise unavailable.
Hi Fedora users, developers, and friends!
It's time to start thinking about Test Days for Fedora 38.
For anyone who isn't aware, a Test Day is an event usually focused
around IRC for interaction and a Wiki page for instructions and results,
with the aim being to get a bunch of interested users and developers
together to test a specific feature or area of the distribution. You can
run a Test Day on just about anything for which it would be useful to do
some fairly focused testing in 'real time' with a group of testers; it
doesn't have to be code, for instance, we often run Test Days for
l10n/i18n topics. For more information on Test Days, see
Anyone who wants to can host their own Test Day, you can request that
the QA group helps you out with organization, or any combination of the
two. To propose a Test Day, just file a ticket in the fedora-qa Pagure - here's
an example: https://pagure.io/fedora-qa/issue/624. For
instructions on hosting a Test Day, see
You can see the schedule at https://pagure.io/fedora-qa/issues?tags=test+days .
There are many slots open right now. Consider the development
schedule, though, in deciding when you want to run your Test Day - for
some topics you may want to avoid
the time before the Beta release or the time after the feature freeze
or the Final Freeze.
We normally aim to schedule Test Days on Thursdays; however, if you want
to run a series of related Test Days, it's often a good idea to do
something like Tuesday / Wednesday / Thursday of the same week (this is
how we usually run the X Test Week, for instance). If all the Thursday
slots fill up but more people want to run Test Days, we will open up
Tuesday slots as overflows. And finally, if you really want to run a
Test Day in a specific time frame due to the development schedule, but
the Thursday slot for that week is full, we can add a slot on another
day. We're flexible! Just put in your ticket the date or time frame you'd
like, and we'll figure it out from there.
If you don't want to run your own Test Day, but you are willing to
help with another, feel free to join one or more of the already accepted
Test Days:
GNOME Test Day*
i18n Test Day*
Kernel Test Week(s)*
Upgrade Test Day*
IoT Test Week*
Cloud Test Day*
Fedora CoreOS Test Week*
And don't be afraid - there are a lot more slots available for your
own Test Day!
[*] These are the Test Days we generally run to make sure everything
is working fine; the dates get announced as we move into the release
cycle.
If you have any questions about the Test Day process, please don't
hesitate to contact me or any member of the Fedora QA team on test at
lists.fedoraproject.org or in #fedora-qa on IRC. Thanks!
TRIED AND PERSONALLY TESTED, ERGO TRUSTED
I've got 4 Rawhide VMs across 2 different hosts, and none of them will
boot with any 6.2 kernel. Last working bootable kernel for me is
I'm guessing at this point this is a VirtualBox issue only?
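As a workaround while this gets debugged, one option (assuming a
standard Fedora install managed by grubby; the kernel version below is
a placeholder, not the actual last-good version) is to pin the last
known-good kernel as the default:

```shell
# List the installed kernels and their boot-entry indexes:
sudo grubby --info=ALL | grep -E '^(index|kernel)'

# Pin the last known-good kernel as the default boot entry
# (version string is a placeholder):
sudo grubby --set-default /boot/vmlinuz-6.1.11-200.fc37.x86_64
```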
FAS: nixuser | IRC: nixuser
I am considering testing. The only source of info I can find is
docs.fedoraproject.org. Can fedpkg be used for this testing? Where are
the sources of packages and such that need testing? Are there specific
tools needed?
Finally got a chance to install Fedora 37 on most of the machines I have
around here. Interestingly I cannot get Fedora 37 Workstation Live to
boot in UEFI mode on a desktop with an Intel DH87RL motherboard. What
happens is I select the USB drive from the boot selection menu, the
screen goes blank for a few seconds, and then I get dumped back at the
boot device selection menu.
The same USB stick works perfectly on multiple other UEFI and BIOS
systems. Media tests pass.
Legacy (BIOS) boot works fine. Secure Boot enabled / disabled makes no
difference.
I also tested a 20221221 Workstation compose, and it does not boot either.
I suspect this is related to the BIOS ISO w/ GRUB2 change, but I don't
know that for sure.
I am at a loss for next troubleshooting steps, or even what component to
file a bug against. Suggestions welcome!
 - https://fedoraproject.org/wiki/Changes/BIOSBootISOWithGrub2