Neal Gompa wrote:
> This is true for building locally for i686. You cannot do 32-bit
> development with multilib x86_64 content.
Yes, that is my point. Well, actually, you can do some amount of 32-bit
development with multilib -devel packages, but mock RPM builds require a
complete 32-bit chroot. So multilib is not even self-hosting (i.e., the
removal of the 32-bit repositories makes Fedora no longer a self-hosting
distribution).
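To illustrate why multilib is not enough: a mock rebuild for the 32-bit
target needs a pure i686 buildroot, selected by a 32-bit chroot config. A
sketch (the release number and SRPM name are placeholders):

```
# Rebuild a source RPM in a complete 32-bit buildroot; "fedora-40-i386"
# is the usual name of mock's 32-bit x86 config (adjust the release).
mock -r fedora-40-i386 --rebuild foo-1.0-1.fc40.src.rpm

# Contrast: multilib only lets you compile/link 32-bit binaries on an
# x86_64 host, e.g. with the .i686 -devel packages installed:
#   gcc -m32 hello.c -o hello32
```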
> The difference was that Modularity only managed to make it in
> originally by being described as a purely add-on concept. It was
> individual packagers that started breaking the world by churning core
> packages into modules.
That could have been prevented by policy, but deliberately was not, because
the people behind Modularity had exactly this (moving random packages to
module-only) as their hidden agenda. I had warned about this part, too, and
asked for policies ensuring that modules remain purely optional. All I got
were excuses (of the kind "it is not planned anyway, but we are strictly
against actually banning it" – ha ha, guess why) and no such policy. FESCo
failed to require the requested policy and took the Modularity people's word
for it – a word that was obviously dishonest: again, why on Earth would they
have been against such a policy if they truly had no plans to do the exact
opposite?
> Everyone knew up front that the hardware change was bad and that there
> was no way to sneak that in without breaking people. It's not going in [...]
>
> Become maintainers of the package? This is sort of the point of the
> system. Someone needs to take care of it, and if there's a user, they
> can become a maintainer. Ideally, most consumers of the package
> (dependency-wise) would consider at least being co-maintainers of
> their direct dependencies.
It is absolutely not realistic to expect all end users to become package
maintainers.
> It appears quick because we've had the FTBFS orphaning process
> broken for three years. There's a lot of breakage to catch up on.
For the orphaned packages, the process is quick by policy: it allows only
6 weeks! And I still do not see why the lack of a maintainer is by itself
(without actual issues with the package being reported by actual users) a
reason to remove a package. From a user standpoint, it is better to have an
unmaintained package of the software I need than none at all!
The FTBFS process has slightly more reasonable time frames, but I disagree
that FTBFS is by itself an issue worth orphaning or retiring packages for to
begin with. A failure to build only becomes an issue if I need to change
something in the package or rebuild it due to some soname bump, and that is
the point where I will fix it anyway. Otherwise, those FTBFS bugs are only
an annoyance forcing me to spend time on irrelevant issues rather than on
actual problems.
> Regular orphanings are happening because for some reason we allow
> orphaned packages to be used as inputs for modules, so now we have
> this giant mess of dead packages that aren't dead. This is a very
> broken policy and should be fixed, but I suspect that it won't be,
> because that would force module packages to always have a non-modular
> counterpart in the distribution, based on how our tools work.
Forcing all packages to have a non-modular version would fix a lot of the
insanity of Modularity. I would be all in favor of that as well!
> > See qt (4) and kdelibs (4), and qt3 and kdelibs3, for how this
> > should work.
> Those transitions had the benefit of the major consumer (KDE) moving
> forward relatively quickly afterwards. Python 2 is not in the same state.
> In many cases, Fedora is the driver for packages getting ported to
> Python 3, and if we hadn't done it, it likely wouldn't have ever
> happened. This is one of the major things I consider valuable about
> Fedora. If we don't do it, I do not believe anyone else would.
But there is just no way that Fedora can get all upstream software ported to
Python 3, and so, radically removing Python 2 will deprive users of software
they need and that has no supported alternative.
There is the FESCo exception process, but it is clearly not working, because
FESCo can veto any package from being provided, see e.g.:
It should be the call of the maintainer of the package whether they still
want to provide the package, and if not, they should orphan it and leave
somebody else the chance to pick it up. But in no way should it require the
approval of a committee that can (and does, see above) say "no" arbitrarily.
> Look, if something is actually declared dead and unmaintained and
> there is an upgrade path, then it is on us to help everyone get
> through that upgrade path. That is literally the point of a
> distribution. We have a set of opinions on how the distribution is put
> together, maintained, and evolved. While I disagree with the removal
> of i686 content from the mirror network, I do not have the bandwidth
> to commit to helping the x86 SIG. This is why I'm not complaining
> about it.
In the case of hardware, the "upgrade path" actually requires shelling out
money, unless by "upgrade path", you mean migrating to a distribution not
practicing the same kind of planned obsolescence.
In the case of software, I really do not see what benefit it brings us to
have a committee ban some compatibility packages and/or their use.
Compatibility packages have near zero impact on the people maintaining and
using the current versions. I do think we should try to get packages to
agree on one version of a system library (and a programming language
interpreter is a special case of a "library") where possible, but for major
changes such as Qt 5 or Python 3, it is just not possible to get everything
ported and a compatibility package is obviously necessary.
> The folks that care about i686 should come together and revive the
> SIG and fix bugs. I personally know that we have issues with Go- and
> Rust-based code on i686 because we keep hitting memory exhaustion
> during compilation. It also happens on armv7hl, but the ARM people add
> hacks to make their platform work. No one is doing the same for i686.
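For context, the kind of per-architecture hack referred to here is typically
a spec-file tweak that caps build parallelism so each compiler job has enough
address space. If I remember correctly, Fedora's redhat-rpm-config ships a
%limit_build macro for exactly this; a rough sketch (the memory figure is an
illustrative assumption, not taken from any particular package):

```
%build
# Cap the number of parallel jobs so that each one can assume roughly
# 2 GB of memory; this is the sort of workaround applied on
# memory-constrained 32-bit builders (armv7hl, and it would help i686).
%limit_build -m 2048
%make_build
```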
The problem then is really that we have Go and Rust based code to begin
with. :-) C and C++ forever!