I just ran into this: https://bugzilla.redhat.com/show_bug.cgi?id=1309175
It's not a huge deal (and there are several workarounds, for git and for
other tools which default to using 'gpg'), but it highlights the mismatch
between the default /usr/bin/gpg running gpg1, when other tools, like
gpg-agent, are tailored for gpg2.
RHEL/CentOS has shipped /usr/bin/gpg with gnupg2 since at least sometime in
I'm not saying we shouldn't continue to ship gnupg1, but can we at least
rename things, so the gnupg package is version 2 and gnupg1 provides
/usr/bin/gpg1 instead? This seems overdue. Is there any reason not to do this?
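For example, one of the git workarounds alluded to above is pointing git at
gpg2 explicitly via the gpg.program setting, e.g. in ~/.gitconfig:

```ini
[gpg]
	program = gpg2
```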
I am one of the maintainers of the ntl package, which is used by some
numeric applications (e.g., Macaulay2 and sagemath). Upstream
supports use of the PCLMUL instruction, the AVX instructions, and the
FMA instructions to speed up various computations. We can't use any
of those in Fedora, since we have to support a baseline x86_64.
Well, that's kind of a downer. I could advertise that people with
newer CPUs ought to rebuild the ntl package for their own CPUs, but
what's a distribution for if people have to rebuild packages? I've
been looking for a way to automatically support more recent CPUs.
Yesterday I sent a patch upstream that uses gcc's indirect function
support together with __attribute__((target ...)) to build vanilla
x86_64, PCLMUL-enabled, AVX-enabled, and FMA-enabled varieties of
several functions. Upstream was initially excited about this but
then, on further reflection, offered the opinion that this approach is
dangerous. The problem is that some of the types involved may change
ABI depending on the instruction set in use, and therefore it would be
necessary to build larger portions of the library for each supported
CPU variant. At that point, as upstream said, we might as well just
build the entire library for each variant. The problem then is how to
choose which version of the library to use at load time.
On some platforms, ld.so offers "hardware capabilities", such as sse2
on i386. By dropping a vanilla library into /usr/lib and an
SSE2-enabled build into /usr/lib/sse2, applications can get the
version of the library appropriate for the CPU in use. But there
don't seem to be any defined hardware capabilities for x86_64.
Has anybody already thought this through? What's the best approach to
take? For this package, the speedups are substantial, so this is
worth doing, if it can be done well.
It should be possible to touch /.autorelabel and have the SELinux
labels on the filesystem fixed at next boot.
Fedora 24 shipped with a couple of nasty bugs in /.autorelabel
This is not particularly a new thing. This bug against systemd was
filed a couple of years ago and is still not fixed, although the problem
is understood and a fix exists:
The general issues are:
(1) Autorelabelling requires that the system is booted up "enough" to
run the fedora-autorelabel.service.
(2) If SELinux is enabled during the boot, then services may fail to
start up correctly because of mislabelled files.
(3) fedora-autorelabel.service requires local-fs.target. This is a
correct dependency, but it also happens quite late -- if you look at
the attached chart you can see that dozens of services need to be
started successfully before we even get to local-fs.target.
(4) If we don't reach the fedora-autorelabel.service then we can be
dumped into a rescue shell, or worse still go into a boot loop.
(5) The fedora-autorelabel.service itself can fail to be run because
SELinux stops systemd from working properly (the cause of
(6) A related problem is that the autorelabel doesn't stop other
services from attempting to start while the relabel is happening.
I'm not sure what's a good way to fix it. Some ways I can think of:
(a) Configure /etc/selinux/config to set SELinux permissive, and
modify the fedora-autorelabel.service so it edits /etc/selinux/config
to re-enable SELinux next time. This editing would have to be
conditional, and the details are up in the air. Maybe there could be
a "/.autorelabel-enforce-after-boot" file to do this?
[Note these are for VM images, so we cannot have "special" boot flags
that the user must set and modify, it must all happen automatically]
(b) Introduce some shortcut, low level, very minimal default target
which systemd uses when it sees the /.autorelabel file. This was
basically what sysvinit used to do - the /.autorelabel file was
processed specially very early in the boot scripts.
(c) Instead of touching the file, set the default.target to some
special target. The problem with this is we want to replace
default.target with the normal one after the autorelabel completes
successfully, and I've no idea how to do that.
(d) Combine setting SELinux to enforcing with checking for
/.autorelabel. If whatever it is that reads /etc/selinux/config
notices that the /.autorelabel file exists, it should do the
autorelabel before setting SELinux to enforcing.
(e) Insert your idea here ...
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
Right now the retirement guidelines state that you should only retire in
branched (prior to freeze) and up to master...
But I just had a user bitten by a change in behaviour between dnf and yum
that was discovered here:
This is the bug raised with my package:
It seems very unintuitive to the user, and wasn't initially apparent to me
until I looked at all open dnf bugs and did a "find on page" for "obsolete".
For now I've opened a rel-eng ticket to get the letsencrypt packages
properly removed from the F23 repos, so that a "dnf install letsencrypt"
will install certbot, matching the F24 behaviour.
I guess the real question is - is the dnf behaviour correct, and if the dnf
behaviour isn't going to change should we allow packagers to retire from a
I would like to hear your opinion/need your help!
I am working on a component of the Fedora Modularity project, called
Build Pipeline Overview (BPO). It will be a single user interface (probably
both web and API) that would give you information about "everything". And
I would like your help with defining what that "everything" means.
To make the definition of "everything" easier, we are using a concept of
personas. These are basically groups of people that would use the BPO UI
that will help us to identify possible use-cases.
@threebean has identified four personas:
- Release Engineering
- Project Management
I have put them into an Etherpad document.
What I'm asking from you: could you please discuss, here or in the document,
what you would like to see in the BPO UI, or what you think should be
there? I would like to get as much input as possible.
Associate Software Engineer
In the past, we have had a tradition of sponsoring EMEA contributors who would like to attend Flock but are not going to receive funding as speakers.
The allocated budget for this year was only $800, initially. The adjusted budget is even less. As we wish to assist as many contributors in need as possible, we are going to try and raise that budget. However, no promises made at this point.
This program is intended for contributors that currently reside in EMEA. Other regions may feel free to come up with similar programs for their contributors as they see fit. Priority will be given to contributors that have recently maintained a record of activity and also have some kind of work agenda for the conference (e.g. meet with fellow contributors, actively contribute on site).
In any case, the budget we are going to end up with will be limited. Most likely we will be able to offer only -partial- subsidies. Do not apply if you are not okay with that. If you are able to receive funding from other sources (e.g. your employer), please do so.
To request sponsorship, file a ticket on the EMEA Trac, stating the reason(s) for applying and what you expect to accomplish by attending the conference. The deadline is July 5, 23:59:59 UTC+0. All tickets will be evaluated later by FAmSCo, same as last time.
We will get back to you once we have more information regarding the budget. Please spread the news to your fellow Fedorians.
Thank you and good luck!
giannisk on irc.freenode.net
At a recent QA meeting I raised the idea of a better way for
maintainers to find out when their package is a release blocking bug.
Better is vaguely defined by me as: not email based, and not adamw
based (Adam Williamson is in fact a person not a bot).
Currently, the ways a maintainer finds out a bug is release blocking:
1. Bugzilla email. When QA determines a bug is a blocker, it's noted
in the bug as a comment, and bugzilla emails (most) everyone on the
The problem with email is self-explanatory. If the bugzilla
notification email isn't being registered in a useful way, probably
more emails won't help either.
2. The very nifty Fedora Blocker Bug Tracking app
The problem with this is, it's passive. You need to check it. So it's
mainly used by QA folks to get a bird's eye view of the status of
blocker bugs, and freeze exceptions.
3. The illustrious, humorous, verbose, would have been cloned by now
were it affordable and timely enough, adamw, who sends out an email
summary of blocking bugs to devel@.
Problem, more email.
4. Adamw (or less often another human within QA) takes it upon
themselves to inquire via IRC. These are effective. It's unknown whether
slips would have resulted if they didn't happen, but it seems at least
plausible that slips would increase without this form of nagging.
The problem is, I think it's inappropriate for any one person to have
to nag other people about their bugs. It's also tedious and manual.
The time and interest for any QA person to do this is low.
The questions then, are:
- Have we reached the pinnacle of notification methods for blocker bugs
to maintainers? Or is there a better way to do this?
- Would it help to have a nagbot (or enhance zodbot) to ping
maintainers on IRC? Is the nagbot more or less likely to be ignored,
or would it be about the same? Of course there are lower level
questions about whether it's possible, what work it entails, would it
be opt in or opt out, could notifications happen outside IRC, but for
now I think the "in general" high level context is more useful.
Per the Build Root Without Perl Fedora 25 change
<https://fedoraproject.org/wiki/Changes/Build_Root_Without_Perl>, I'm ready to
implement the most visible part of this change.
I'm going to inject perl-devel and perl-generators build-requires into Fedora
specification files tomorrow. That's necessary so that package builds don't
break once Perl has been removed from the build root.
There are 3292 packages that should build-require perl-generators,
there are 491 packages that should build-require perl-devel.
Some of them already contain the dependency.
There are 3129 packages that are missing one of them, and they will be edited.
The edit will consist of one commit with this commit message:
Mandatory Perl build-requires added <https://fedoraproject.org/wiki/Changes/Build_Root_Without_Perl>
The commit will add:

    BuildRequires:  perl-generators

or

    BuildRequires:  perl-devel

or both into a spec file. There will be no revision bump, no RPM changelog
entry, no rebuild in the Koji.
I expect pulling the source repositories, committing the changes, and pushing
them back to dist-git will take at least 2 hours. Package maintainers and
people with the watch-commit role will receive an e-mail notification about the
commit as usual.
Thank you for your understanding.
To keep this off-list as much as possible, the rant is here:
(The blame lies elsewhere. I wish I had the network and social cred to
get a real movement started, away from the current faceless CA system
and towards a different identity assurance system that depends on
actual, existing day-to-day trust relationships.)
we were talking about this item for some time, so let's start a thread
for it to have the discussion and hopefully also a solution documented.
This is meant as discussion initiation based on the situation in
Fedora on POWER. I would like to ask the bigger experts than me to fill
the missing details and options.
Currently we set a minimal CPU level for an architecture (or use the
toolchain default) on the distribution level
(/usr/lib/rpm/redhat/rpmrc, owned by redhat-rpm-config). It allows
the distro to run on such CPU and any newer evolution of it (omitting
any kernel or hw issues), but it also means it doesn't generally take
advantage of features and improvements in the newer CPUs.
For ppc64 (the big endian POWER) the base is set by the toolchain
default, which is Power4/ppc970. When Power7 arrived, we were asked what the
options were for taking advantage of these CPUs, 3 generations newer
than the base. The solution was the introduction of the ppc64p7 subarch into
the packaging and release-engineering tools. But it turned out to be more of
a hack than a solution, touching rpm, yum, koji, .... The list of packages
is manually maintained, requires manual updates to the buildsystem for
new releases, and newer packaging tools (dnf) seem not to understand it
correctly. Is there a way to make the subarch mechanism generic and
integrated with the other tools? So we could have ppc64p8 and ppc64p9
inside Fedora for POWER ...
Now I'm getting to an area where others are the experts :-) Glibc
allows building and installing multiple per-CPU optimized runtimes that
are selected based on the CPU. There is the IFUNC mechanism, and
function multi-versioning in GCC (user-friendly IFUNC), to allow
multiple implementations inside one library/binary.
Some packages do the CPU detection during runtime and select the
optimized functions themselves.
There is also an option to introduce a "tertiary architecture" and
rebuild everything for the new CPU, but keeping the rpm arch the same.
But it has its own costs too.
What do you think? Are there any recommendations for both the distro
and package maintainers and upstream developers? I suppose even primary
architectures are facing the same problem.