On Sun, 14 Jul 2019 at 18:16, Neal Gompa <ngompa13@gmail.com> wrote:
On Sun, Jul 14, 2019 at 5:21 PM Kevin Fenzi <kevin@scrye.com> wrote:
>
> On 7/14/19 1:15 PM, Neal Gompa wrote:
>
> > This will also make it impossible for people to locally do multilib
> > build/installs. It will remove COPR’s ability to do the same. For that
> > reason alone, I don’t particularly want this change to happen.
>
> Can you expand on what you mean by 'locally do' ?
>
> Current multilib packages will still be available in the x86_64 repo.
> Users can still install them from there just fine.
>
> If a user wants to locally build a i686 package, they can use mock
> against the koji i686 buildroot repo to do so. They could then put that
> package in a local repo with x86_64 packages and run createrepo on it.
>
> It's true there would be no easily mirrored i386 repo for them to copy
> to avoid hitting the internet, but is that really a big use case?
>
> Finally, if you would prefer this not happen now, is there a point
> further down the road when you would? What's the criteria/goalpost/cutoff?
>

Building library packages and making your own multilib repo is
impossible without having both the i686 repo and the x86_64 repo, as
you need to build for both and then munge them together for a multilib
repo.
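As a rough sketch of what that local workflow looks like today (the mock config names, package name, and result paths here are illustrative, not exact; adjust for your release):

```shell
# Build the same source package for both arches with mock.
# Config names like fedora-rawhide-i386 / fedora-rawhide-x86_64 are the
# usual mock defaults, but check /etc/mock/ on your system.
mock -r fedora-rawhide-i386 --rebuild foo-1.0-1.src.rpm
mock -r fedora-rawhide-x86_64 --rebuild foo-1.0-1.src.rpm

# Munge the results together into one directory...
mkdir -p ~/multilib-repo
cp /var/lib/mock/fedora-rawhide-i386/result/*.rpm ~/multilib-repo/
cp /var/lib/mock/fedora-rawhide-x86_64/result/*.rpm ~/multilib-repo/

# ...and generate repo metadata over the combined set.
createrepo_c ~/multilib-repo
```

The point being: without an i686 repo (or buildroot) to build against, the first step has nothing to resolve its BuildRequires from.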

Historically, we really haven’t wanted people to pull from the Koji
repo, and we probably still don’t, since it’s not mirrored and
stressing it could cause more problems for our already overtaxed build
system environment.

From my point of view, I don’t think it’s worth getting rid of the
32-bit x86 repo until we’re at the point where people would not need
to build their own multilib repositories. The cost of generating that
for mirroring isn’t that high relative to the amount of pain we’ll
cause for external folks trying to build off Fedora.

Think, for example, of the repo that shall not be named. That project’s
Koji instance pulls Fedora in through the mirrored content as an
external source, which feeds its ability to do multilib builds.

I’m sorry, but I don’t see us getting rid of this for the foreseeable
future without breaking virtually all of our downstreams.



The problem is:
1. We currently have no idea who these downstreams are.
2. We have no idea what they want. [Do they need every package, or just N packages?]

The flip side of this argument is that we can never allow ANY change/update in our infrastructure or to any package, because it would break some downstream sometime. And yet we do allow updates to gcc/kernel as that is built into what we do.. aka we have already decided there is a cut-off where people are going to be 'broken/left behind' for some reason all the time. The expectation is that sometimes we will lose them permanently, but a lot of the time they will catch up or get ahead of us.

We do stop some changes when we have a way to get a feeling that the number of people affected by the change is too large to deal with. But we have to have a way to measure what that is.. even if it is just a back-of-the-envelope estimate. How can we get an idea on these?

Because if we are going to start down the 'we can't change something because it could break some unknown-sized group' path... it will quickly morph into Fedora 31 being our last release, because we have enough rules lawyers in Fedora to keep any and every update from happening on the grounds that it could break someone. 'Sorry, you can't push that CVE fix; it breaks my hacking group, which uses these holes to break into systems.' will become the releng or fesco ticket of the hour.

That said, I think the problem with this change is that everyone's mental image of it is different. From various other threads.. people have different ideas of what multilib does, what it provides, who uses it, how it gets built, what is required to build it, and what an x86_64 user gets from the i686 packages.

--
Stephen J Smoogen.