On Thu, Nov 25, 2021 at 8:26 AM Neal Gompa ngompa13@gmail.com wrote:
On Thu, Nov 25, 2021 at 6:19 AM Nico Kadel-Garcia nkadel@gmail.com wrote:
On Thu, Nov 25, 2021 at 3:05 AM Miroslav Suchý msuchy@redhat.com wrote:
On 22 Nov 2021 at 15:00, Pavel Raiskup wrote:
Hello Fedora EPEL maintainers!
First, I don't feel comfortable announcing this; I'm not happy about the situation and I don't want to be the lightning rod :-). But I believe we can come to an acceptable Copr/Mock solution, and this needs to be discussed... so here we are.
By the end of the year 2021 we have to fix our default EPEL 8 Mock configuration (mock-core-configs.rpm, /etc/mock/epel-8-*.cfg) as CentOS 8 goes EOL by then.
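For context, the files in question are the default EPEL 8 chroot configs shipped by mock-core-configs. A minimal sketch of what a repointed config might look like, assuming the chroot were switched to AlmaLinux (the template names and keys below are illustrative, not the actual mock-core-configs content):

```python
# /etc/mock/epel-8-x86_64.cfg -- hypothetical sketch only.
# Assumes mock-core-configs ships AlmaLinux 8 and EPEL 8 templates
# with these names; the real filenames may differ.
include('templates/almalinux-8.tpl')
include('templates/epel-8.tpl')

config_opts['root'] = 'epel-8-x86_64'
config_opts['description'] = 'EPEL 8 (AlmaLinux 8 base)'
```

The open question in the linked document is precisely which base distribution those includes should point at once CentOS 8 is gone.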
I wrote down the possible options with their pros and cons, and I did my best to capture all the feedback here:
https://docs.google.com/document/d/1wF7-7_y6Ac_oB-kCFdE6VBWPW8o8zjXd2Z0SGy4V...
Miroslav
That seems to be a succinct listing, but I think you left out my suggestion of "support people re-inventing point releases for CentOS", which is what major CentOS users will do with internal mirrors, out of concern about unexpected and unwelcome updates from CentOS Stream, while they assess whether AlmaLinux or Rocky are reliable and stable enough to use. It's not uncommon behavior for EPEL itself, partly because of EPEL's bad habit of deleting RPMs without warning and stripping out all previous releases. That has caused me problems with chromium and firefox when updates were incompatible with contemporary regression-testing systems.
It's not a "bad habit"; it happens because when packages are retired, keeping them there does a disservice to the community by effectively forcing a maintenance burden when there's no maintainer. As for stripping out previous releases, that's just how Pungi and Bodhi do update composes at the moment. Someday that'll be fixed, but then we'd have to come up with a policy on how many versions to keep, because there are storage concerns for mirrors if we kept everything published forever.
It causes problems and confusion for people who need to lock down existing versions for deployment. And it happens for packages that are not retired, but merely updated. I was bitten by it myself with chromium updates last year. It forces users of EPEL to maintain internal repos, or out-of-band access to previously accessible RPMs. It's destabilizing, and it breaks bill-of-materials based deployments that pin complete lists of all desired RPMs.
Storage and bandwidth are legitimate concerns, as is continuing to publish older releases with known vulnerabilities or bugs. But neither Fedora nor RHEL simply discards previously published versions this way; they accumulate new releases alongside the old. I consider this a longstanding bug for EPEL, and one of the reasons I set up internal mirrors in large deployments.
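To make the breakage concrete, here is a minimal sketch (with hypothetical package data) of the kind of bill-of-materials check such deployments run: if an update compose replaces the old NVRs instead of keeping them, every pinned entry in the BOM simply vanishes from the repo. In a real setup the "available" set would come from repo metadata (e.g. dnf repoquery output), not a hard-coded list:

```python
# Hypothetical sketch: verify a pinned bill-of-materials against
# the package versions a repo currently publishes.

def missing_from_repo(bom, available):
    """Return BOM entries (name-version-release) no longer published."""
    return sorted(set(bom) - set(available))

# Pinned NVRs recorded at deployment time (illustrative values):
bom = ["chromium-90.0.4430.93-1.el8", "firefox-78.10.0-1.el8"]

# After an update compose drops old versions, only new NVRs remain:
available = ["chromium-91.0.4472.77-1.el8", "firefox-78.11.0-2.el8"]

# Every pinned package is now unavailable, so the locked deployment
# can no longer be reproduced from the public repo.
print(missing_from_repo(bom, available))
```

When the repo keeps old versions alongside new ones (as Fedora and RHEL composes do), the same check returns an empty list and the pinned deployment keeps working.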
The difficulty with switching mock to AlmaLinux or Rocky is that there is likely to be significant phase lag with new point releases by Red Hat, and that it will inflict quite a bandwidth burden for all the "mock" setups in the field. Can they scale up to handle that?
Insofar as "phase lag with new point releases", AlmaLinux made their release 48 hours after Red Hat did with RHEL. So, frankly, I'm not worried there with AlmaLinux.
48 hours is pretty danged good. I hesitate to rely on a new OS publisher with so little track record, even if they've been very good in their first year. I'm very curious how well they'll do with RHEL 8.6, without a published CentOS example to compare with. And I'd be very, very wary of the "planning fallacy", where people underestimate the time of future tasks, despite knowledge that previous tasks have sometimes taken far longer.
For bandwidth burdens, mirror networks are designed to alleviate that burden and both have those in place.
Sure, and they're very helpful. But for whoever manages such a network, a massive increase in bandwidth use, or in the *breadth* of content requested from a particular mirrored repo for a particular product, is something you very much want to plan for, to avoid local choke points. I'm thinking especially of the caches on the relevant proxy servers.