Hi folks!
We've had openQA testing of updates for stable and branched releases,
and gating based on those tests, enabled for a while now. I believe
this is going quite well, and I think we addressed the issues reported
when we first enabled gating - Bodhi's gating status updates work more
smoothly now, and openQA respects Bodhi's "re-run tests" button so
failed tests can be re-triggered.
A few weeks ago, I enabled testing of Rawhide updates in the openQA
lab/stg instance. This was to see how smoothly the tests run, how often
we run into unexpected failures or problems, and whether the hardware
resources we have are sufficient for the extra load.
So far this has been going more smoothly than I anticipated, if
anything. The workers seem to keep up with the test load, even though
one out of three worker systems for the stg instance is currently out
of commission (we're using it to investigate a bug). We do get
occasional failures which seem to be related to Rawhide kernel slowness
(e.g. operations timing out that usually complete well within their
timeouts), but
on the whole, the level of false failures is (I would say) acceptably
low, enough that my current regime of checking the test results daily
and restarting failed ones that don't seem to indicate a real bug
should be sufficient.
So, I'd like to propose that we enable Rawhide update testing on the
production openQA instance also. This would cause results to appear on
the Automated Tests tab in Bodhi, but they would be only informational
(and unless the update was gated by a CI test, or somehow otherwise
configured not to be pushed automatically, updates would continue to be
pushed 'stable' almost immediately on creation, regardless of the
openQA results).
More significantly, I'd also propose that we turn on gating on openQA
results for Rawhide updates. This would mean Rawhide updates would be
held from going 'stable' (and included in the next compose) until the
gating openQA tests had run and passed. We may want to do this a bit
after turning on the tests; perhaps the Fedora 37 branch point would be
a natural time to do it.
Currently this would usually mean a wait from update submission to
'stable push' (which really means that the build goes into the
buildroot, and will go into the next Rawhide compose when it happens)
of somewhere between 45 minutes and a couple of hours. It would also
mean that if Rawhide updates for inter-dependent packages are not
correctly grouped, the dependent update(s) will fail testing and be
gated until the update they depend on has passed testing and been
pushed. The tests for the dependent update(s) would then need to be re-
run, either by someone hitting the button in Bodhi or an openQA admin
noticing and restarting them, before the dependent update(s) could be
pushed.
In the worst case, if updated packages A and B both need the other to
work correctly but the updates are submitted separately, both updates
may fail tests and be blocked. This could only be resolved by waiving
the failures, or replacing the separate updates with an update
containing both packages.
All of those considerations already apply to stable and branched
releases, but people are probably more used to grouping updates for
stable and branched than doing it for Rawhide, and the typical flow of
going from a build to an update provides more opportunity to create
grouped updates for branched/stable. For Rawhide, the easiest way to
group updates, when you need to, is to do the builds in a side tag and
use Bodhi's ability to create updates from a side tag.
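For reference, that side-tag flow can be sketched roughly as follows; the
tag name and the update notes here are invented examples, and the exact
options may differ depending on your fedpkg/bodhi versions:

```shell
# Request a side tag for the target release; this prints the tag name
# (e.g. f39-build-side-12345, an invented example used below):
fedpkg request-side-tag

# In each inter-dependent package's dist-git checkout, build into the tag:
fedpkg build --target=f39-build-side-12345

# Once all builds are done, create one grouped Bodhi update from the tag:
bodhi updates new --from-tag f39-build-side-12345 --notes "Grouped update"
```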
As with branched/stable, only critical path updates would have the
tests run and be gated on the results. Non-critpath updates would be
unaffected. (There's a small allowlist of non-critpath packages for
which the tests are also run, but they are not currently gated on the
results).
I think doing this could really help us keep Rawhide solid and avoid
introducing major compose-breaking bugs, at minimal cost. But it's a
significant change and I wanted to see what folks think. In particular,
if you find the existing gating of updates for stable/branched releases
to cause problems in any way, I'd love to hear about it.
Thanks folks!
--
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net
Hi, I am planning to change how we support BIOS RAID (sometimes also
called Firmware or Fake RAID) in the installer in the future. I plan
to go through the official Fedora change process for Fedora 38, but
I'd like to get some feedback first.
We are currently using dmraid to support these types of RAIDs in
blivet[1] (storage library the Anaconda installer uses) and we would
like to replace it with mdadm. The main reason is that dmraid is no
longer actively maintained, but it will also mean one less dependency
for the installer (we use mdadm for the software RAID support) and one
less service running during boot (dmraid-activation.service).
The potential issue here is that mdadm doesn't support all BIOS RAID
types. mdadm supports only Common RAID Disk Data Format standard[2]
(DDF) and Intel Matrix Storage Technology (IMSM) so by switching to
mdadm we would remove support for some of the older formats that
existed before DDF was standardized. I am not sure how many people are
still using these older RAIDs and the main reason for sending this
email is to find out. So if you are using a BIOS RAID on your system,
can you check what kind? You can find out simply by checking the
filesystem type reported on the underlying disk(s) by, for example,
`lsblk -f`. Types supported by mdadm are "ddf_raid_member" and
"isw_raid_member". Types supported only by dmraid are
"adaptec_raid_member", "hpt***_raid_member", "jmicron_raid_member",
"lsi_mega_raid_member", "nvidia_raid_member",
"silicon_medley_raid_member" and "via_raid_member". So if you have one
of the latter ones and you'd be impacted by this change, please let me
know so we can reconsider this change. Note that this would affect
only the installation process. I know some external and NAS drives use
BIOS RAID; these won't be affected. dmraid is not being removed from
the repositories (at least not that I am aware of right now, although
some distributions are already planning to remove dmraid completely).
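As a concrete illustration, the check described above might look like this
(the device name in the second command is an example, not from any
particular system):

```shell
# Show filesystem/metadata types on all block devices; look at the FSTYPE
# column for the member disks of your RAID set:
lsblk -f

# Or query a single suspected member disk directly:
blkid -o value -s TYPE /dev/sda
```

If the reported type is one of the dmraid-only values listed above, that
system would be affected by the change.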
[1] https://github.com/storaged-project/blivet
[2] https://www.snia.org/tech_activities/standards/curr_standards/ddf
Regards
Vojtech Trefny
vtrefny(a)redhat.com
Hi all! I just got back from Open Source Summit; several of the talks I
found interesting were on RISC-V -- a high-level one about the
organizational structure, and Drew Fustini's more technical talk.
In that, he noted that there's a Fedora build *, but it isn't an official
Fedora arch. As I understand it, the major infrastructure blocker is simply
that there isn't server-class hardware (let alone hardware that will build
fast enough that it isn't a frustrating bottleneck).
So, one question is: if we used, say, ARM or x86_64 Amazon cloud instances
as builders, could we build fast enough under QEMU emulation to work? We
have a nice early advantage, but if we don't keep moving, we'll lose that.
But beyond that: What other things might be limits? Are there key bits of
the distro which don't build yet? Is there a big enough risc-v team to
respond to arch-specific build failures? And, do we have enough people to do
QA around release time?
* see http://fedora.riscv.rocks/koji/
--
Matthew Miller
<mattdm(a)fedoraproject.org>
Fedora Project Leader
Hello Fedora developers,
I'm Andreas from Stuttgart in Germany. I'm a system administrator and
software developer, who moved his computers to Fedora about a year ago.
I've written a handful of Perl modules that I package at the Open Build
Service and Copr. I'd like to maintain some of these modules directly in
Fedora. In the past, I maintained ports of other software at
SlackBuilds.org and OpenBSD. Occasionally, I contribute patches to free
software projects. I enjoy programming in C, Perl and recently Kotlin.
Kind regards,
Andreas
Earlier discussion:
https://www.mail-archive.com/devel@lists.fedoraproject.org/msg169800.html
Current memtest86+ 5.x requires booting in legacy BIOS (non-UEFI) mode,
which makes it increasingly irrelevant to modern hardware. memtest86
forked into a proprietary
product some time ago. However there is hope because upstream
memtest86+ 6.00 is (a) open source and (b) seems to work despite the
large warnings on the website:
https://memtest.org/
Note this new version is derived from pcmemtest mentioned in the
thread above which is only indirectly derived from memtest86+ 5.x and
removes some features.
So my question is are we planning to move to v6.00 in future?
I did attempt to build a Fedora RPM, but it basically involves
removing large sections of the existing RPM (eg. the downstream script
we add seems unnecessary now and the downstream README would need to
be completely rewritten). It's probably only necessary to install
memtest.efi as /boot/memtest.efi; although it won't appear
automatically in the grub menu, it can be started with a trivial
two line command.
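For the record, the two line command is presumably something along these
lines, entered at the grub command prompt (press 'c' at the menu); this is
my guess at the intended invocation, not something taken from the package:

```
grub> chainloader /memtest.efi
grub> boot
```

You may first need to set the root to the partition holding /boot (e.g.
with grub's `search` command) for the path to resolve.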
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
Hi list,
Re. https://bugzilla.redhat.com/show_bug.cgi?id=1896901
Since haxe-4.1.3-4 and nekovm-2.3.0-4, both the nekovm and haxe packages contain "/usr/lib/.build-id/b0/aed4ddf2d45372bcc79d5e95d2834f5045c09c".
The nekovm one is a symlink to "/usr/bin/neko"; the haxe one, to "/usr/bin/haxelib".
Both the neko and haxelib binaries are built with libneko, from a nearly identical main.c, the only difference being the presence of neko bytecode embedded as a byte array (neko: the byte array is null; haxelib: the byte array is the haxelib neko bytecode).
I'm not sure how to resolve it.
Please advise.
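A quick way to confirm the duplication, assuming both packages are
installed (eu-readelf comes from elfutils):

```shell
# Print the GNU build ID note of each binary; if the two IDs are identical,
# both packages will generate the same /usr/lib/.build-id/ symlink path:
eu-readelf -n /usr/bin/neko | grep 'Build ID'
eu-readelf -n /usr/bin/haxelib | grep 'Build ID'

# Show which installed packages claim the colliding file:
rpm -qf /usr/lib/.build-id/b0/aed4ddf2d45372bcc79d5e95d2834f5045c09c
```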
Best regards,
Andy
Hi
I'll be landing tesseract 5.3.0 in rawhide, building it in the
f38-build-side-61405 side tag, along with the following dependent packages:
ffmpeg
gimagereader
mupdf
opencv
python-PyMuPDF
qpdfview
R-tesseract
zathura-pdf-mupdf
Thanks
Sandro
https://fedoraproject.org/wiki/Changes/Noto_CJK_Variable_Fonts
This document represents a proposed Change. As part of the Changes
process, proposals are publicly announced in order to receive
community feedback. This proposal will only be implemented if approved
by the Fedora Engineering Steering Committee.
== Summary ==
Switch the default Noto CJK fonts for Chinese, Japanese and Korean
from static to variable fonts.
== Owner ==
* Name: [[User:pwu| Peng Wu]]
* Email: pwu(a)redhat.com
== Detailed Description ==
In order to reduce the font size in Noto CJK fonts, we plan to switch
to using the variable fonts by default.
# Split the google-noto-cjk-fonts package into
google-noto-sans-cjk-fonts and google-noto-serif-cjk-fonts, and
provide the variable fonts in google-noto-sans-cjk-vf-fonts and
google-noto-serif-cjk-vf-fonts.
# Drop several sub packages which are not installed by default from
the google-noto-cjk-fonts package.
## For example: google-noto-sans-cjk-*-fonts, google-noto-sans-*-fonts,
google-noto-sans-mono-cjk-*-fonts, google-noto-serif-cjk-*-fonts and
google-noto-serif-*-fonts
# Install the Noto CJK Variable Fonts by default.
Fedora Copr for testing: https://copr.fedorainfracloud.org/coprs/pwu/noto-cjk/
== Feedback ==
== Benefit to Fedora ==
The variable fonts will reduce the disk space usage and live image
size compared to the static fonts.
{| class="wikitable"
|+ RPM Size
|-
! Size (bytes) !! Noto Sans CJK !! Noto Serif CJK
|-
| Static Fonts || 130674365 || 181621033
|-
| Variable Fonts || 64613100 || 56924710
|}
== Scope ==
* Proposal owners:
** Package four font packages for Noto CJK fonts
** Retire google-noto-cjk-fonts in Fedora rawhide
** Switch to install variable fonts by default in fedora-comps and langpacks
** Submit pull request to lorax templates to use
google-noto-sans-cjk-fonts in the boot.iso
* Other developers:
* Release engineering:
* Policies and guidelines: N/A (not needed for this Change)
* Trademark approval: N/A (not needed for this Change)
* Alignment with Objectives:
== Upgrade/compatibility impact ==
When upgrading, the variable fonts will be installed by default.
== How To Test ==
* Please upgrade to Fedora 38 or rawhide to get the latest fonts
* Install the variable fonts: google-noto-sans-cjk-vf-fonts and
google-noto-serif-cjk-vf-fonts
** Check that the google-noto-sans-cjk-ttc-fonts and
google-noto-serif-cjk-ttc-fonts packages have been replaced
* Then use CJK locales to check whether the new fonts have any problems
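One way to sanity-check the result after upgrading (the family name "Noto
Sans CJK SC" is just an example; substitute the one for your locale):

```shell
# Verify the variable-font packages are installed:
rpm -q google-noto-sans-cjk-vf-fonts google-noto-serif-cjk-vf-fonts

# Check which font file fontconfig resolves for a CJK family; the path
# should point into the new variable-font package:
fc-match -v "Noto Sans CJK SC" | grep -i file
```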
== User Experience ==
The new variable fonts will reduce the disk space usage and live image size.
== Dependencies ==
== Contingency Plan ==
* Contingency mechanism: Use the static fonts by default -
google-noto-sans-cjk-fonts and google-noto-serif-cjk-fonts
* Contingency deadline: N/A
* Blocks release? N/A
== Documentation ==
N/A (not a System Wide Change)
== Release Notes ==
The new variable fonts will reduce the disk space usage and live image size.
--
Ben Cotton
He / Him / His
Fedora Program Manager
Red Hat
TZ=America/Indiana/Indianapolis
Hi,
Development of Bottles is moving fast and we have been struggling to
keep up with upstream releases, especially since the introduction of
Rust components.
Upstream has approached the maintainers [1,2] and asked us to retire the
package in favor of the Flatpak packages provided by upstream.
I'm planning to move forward with retiring Bottles in the coming days. I
will add a comment in all open bug reports, letting users know they
should switch to the Flatpak release.
Bottles in F36 and F37 will not receive any further updates unless
security-related issues surface.
[1] https://github.com/bottlesdevs/Bottles/issues/2345
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2160007
Cheers,
Sandro
--
Sandro
FAS: gui1ty
IRC: Penguinpee
Elsewhere: [Pp]enguinpee