Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
The Modularity WG has generally agreed that we want and need to support the following use cases:
Use Case 1:
On Fedora 30, user Alice runs
yum install Foo
The package "Foo" is provided by a module "foo" with a default stream "v1.0". Because it's available in a default stream, the package is installed and the module stream "foo:v1.0" is implicitly enabled for the system.
Fedora 31 is released. On Fedora 31, the module "foo" has a new default stream "v1.1". When upgrading from Fedora 30 to Fedora 31, Alice expects the package Foo she installed to be upgraded to version 1.1, because that's what would have happened if it was provided as a package from the non-modular repositories.
Use Case 2:
On Fedora 30, user Bob runs
yum module enable foo:v1.0
In this case, the "v1.0" stream of the "foo" module has a dependency on the "v2.4" stream of the "bar" module. So when enabling "foo:v1.0", the system also implicitly enables "bar:v2.4".
Fedora 31 is released. On Fedora 31, the module stream "foo:v1.0" now depends on "bar:v2.5" instead of "bar:v2.4". The user, caring only about "foo:v1.0", would expect the upgrade to complete, adjusting the dependencies as needed.
At Flock and other discussions, we've generally come up with a solution, but it's not yet recorded anywhere. I'm sending it out for wider input, but this is more or less the solution we intend to run with, barring someone finding a severe flaw.
Proposed Solution:
What happens today is that once the stream is set, it is fixed and unchangeable except by user decision. Through discussions with UX folks, we've more or less come to the decision that the correct behavior is as follows:
* The user's "intention" should be recorded at the time of module enablement. Currently, module streams can exist in four states: "available, enabled, disabled, default". We propose that there should be two additional states (names TBD) representing implicit enablement. The state "enabled" would be reserved for any stream that at some point was enabled by name. For example, a user who runs `yum install freeipa:DL1` is making a conscious choice to install the DL1 stream of freeipa. A user who runs `yum install freeipa-client` is instead saying "give me whatever freeipa-client is the default".
* The state `dep_enabled` would be set whenever a stream becomes enabled because some other module stream depended on it. This state must be entered only if the previous state was `default` or `available`. (We don't want `enabled` or `disabled` streams being able to transition to this state.)
* The state `default_enabled` would be set whenever a stream becomes enabled because a transaction pulled in a package from a default stream, causing it to be enabled. This state must only be entered if the previous state was `default` or `dep_enabled`. We don't want `enabled` or `disabled` to be able to transition to `default_enabled`. If a user requests installation of a package provided by a stream currently in the `dep_enabled` state, that stream should transition to the `default_enabled` state (meaning that now the user would expect it to be treated the same as any other default-enabled stream).
* When running `dnf update`, if a module stream's dependency on another module changes to another stream, the transaction should cause that new stream to be enabled (replacing the current stream) if it is in the `dep_enabled` state. When running `dnf update` or `dnf system-upgrade`, if the default stream for a module installed on the system changes and the module's current state is `default_enabled`, then the transaction should cause the new default stream to be enabled. (A sketch of the allowed transitions follows after the constraints below.)
* If stream switching during an update or upgrade would result in other module dependency issues, that MUST be reported and returned to the user.
This requires some constraints to be placed on default and dependency changes:
* Any stream upgrade such as this must guarantee that any artifacts of the stream that are exposed as "API" MUST support RPM-level package upgrades from any previous stream in this stable release. (Example: "freeipa:DL1" depends on the "pki-core:3.8" stream at Fedora 30 launch. Later updates move this to depend on "pki-core:3.9" and even later "pki-core:3.10". In this case the packages from "pki-core:3.10" must have a safe upgrade path from both "pki-core:3.8" and "pki-core:3.9", since we cannot guarantee or force our users to update regularly and they might miss some of the intermediate ones.)
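To make the transition rules above easier to follow, here is a minimal sketch of them. It is an illustration only, not DNF or libdnf code; the class, function, and state names are placeholders.

from enum import Enum, auto

class StreamState(Enum):
    AVAILABLE = auto()        # stream exists in repo metadata but is not active
    DEFAULT = auto()          # marked as the default stream of its module
    ENABLED = auto()          # explicitly enabled by name by the user
    DISABLED = auto()         # explicitly disabled by the user
    DEP_ENABLED = auto()      # proposed: enabled only because another stream depends on it
    DEFAULT_ENABLED = auto()  # proposed: enabled by installing a package from a default stream

# Which previous states are allowed to enter each of the proposed implicit states.
ALLOWED_FROM = {
    StreamState.DEP_ENABLED: {StreamState.DEFAULT, StreamState.AVAILABLE},
    StreamState.DEFAULT_ENABLED: {StreamState.DEFAULT, StreamState.DEP_ENABLED},
}

def transition(current: StreamState, target: StreamState) -> StreamState:
    """Return the new state, refusing the transitions the proposal forbids
    (explicitly enabled or disabled streams never become implicit)."""
    if target in ALLOWED_FROM and current not in ALLOWED_FROM[target]:
        raise ValueError(f"{current.name} may not become {target.name}")
    return target

def follows_dependency_change(state: StreamState) -> bool:
    """On `dnf update`, a changed module-level dependency may switch the
    stream only if it was enabled implicitly as a dependency."""
    return state is StreamState.DEP_ENABLED

def follows_default_change(state: StreamState) -> bool:
    """On `dnf update` or `dnf system-upgrade`, a changed distribution
    default may switch the stream only if it was enabled implicitly via a
    default stream."""
    return state is StreamState.DEFAULT_ENABLED

Under these rules, anything the user enabled or disabled by name is never switched automatically, while the two implicit states are allowed to follow dependency and default changes across updates and upgrades.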
Stephen Gallagher wrote:
- The state `dep_enabled` would be set whenever a stream becomes enabled because some other module stream depended on it. This state must be entered only if the previous state was `default` or `available`. (We don't want `enabled` or `disabled` streams being able to transition to this state.)
- The state `default_enabled` would be set whenever a stream becomes enabled because a transaction pulled in a package from a default stream, causing it to be enabled. This state must only be entered if the previous state was `default` or `dep_enabled`. We don't want `enabled` or `disabled` to be able to transition to `default_enabled`. If a user requests installation of a package provided by a stream currently in the `dep_enabled` state, that stream should transition to the `default_enabled` state (meaning that now the user would expect it to be treated the same as any other default-enabled stream).
It looks like a non-default stream could go through the transitions available → dep_enabled → default_enabled. Is that desirable?
Björn Persson
On 04. 10. 19 16:57, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
1. (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
2. (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
WDYT?
Miro Hrončok wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
+1
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
In other words, this proposal would ban module-only packages, allowing modules only for alternate versions? IMHO, that is the most reasonable approach, but:
- (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
If that works out, that sounds acceptable to me as well. As long as I am not forced to use or care about modules, this works for me.
WDYT?
I think either of your 2 proposals needs to be implemented ASAP. They are both much saner than the overengineered and error-prone solutions the Modularity team is proposing for this blocker (see Stephen Gallagher's mail that started this thread). Seeing how DNF already has trouble meeting user expectations in corner cases (e.g., with Obsoletes, with weak dependencies, etc., and now also with Modularity), I don't think expecting more complex behavior from it is going to work out well.
Kevin Kofler
On Fri, Oct 4, 2019 at 9:32 PM Miro Hrončok mhroncok@redhat.com wrote:
On 04. 10. 19 16:57, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
I think this is the best way forward, and I don't think it's "drastic" for maintainers. They *already* maintain a default branch, for which Packaging / Update Guidelines similar to those for non-modular packages apply. They could even use a package.cfg file to automatically build stuff for multiple fedora branches ...
In my opinion, this is also the most "fair" approach, because it doesn't shift the maintenance burden from one packager to *everyone else*.
It results in the least disruption for users who want default versions of things. It means no more disruptions for packagers who rely on the packages in question - they can just target the default ("ursine") versions.
Another benefit of this would be that the "*-modular" repositories could be disabled by default, which would eliminate a whole lot of upgrade issues for "normal users". Only people who *actually want* alternate versions would enable modules (and *-modular repos?), and they can then be expected to know how to handle upgrade issues.
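For illustration only, here is a minimal sketch of what "modular repos disabled" would look like from the client side, using the DNF Python API. It assumes the stock Fedora repo IDs ending in "-modular" and only changes the running session, not the repo files.

import dnf

base = dnf.Base()
base.read_all_repos()
# Disable every configured repository whose ID ends in "-modular"
# (e.g. fedora-modular, updates-modular) for this session.
base.repos.get_matching("*-modular").disable()
base.fill_sack()

# Only non-modular repositories contribute packages from here on.
print(sorted(repo.id for repo in base.repos.iter_enabled()))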
- (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
I see some issues with this approach. For example, the modularized Java packages dropped a whole lot of "cruft", which now makes these packages slightly incompatible with the non-modular packages - this includes dropping Epoch from packages, dropping subpackages without obsoleting them, etc. This is probably not a problem for module streams, but it definitely *is* a problem if such modular packages start to get treated like "normal packages".
WDYT?
I think that letting people drop ownership and maintenance of "normal packages" and *only* doing modules instead was a mistake. But that's just my opinion ;)
Fabio
Fabio Valentini wrote:
Another benefit of this would be that the "*-modular" repositories could be disabled by default, which would eliminate a whole lot of upgrade issues for "normal users". Only people who *actually want* alternate versions would enable modules (and *-modular repos?), and they can then be expected to know how to handle upgrade issues.
+1. I don't understand why Modularity was not implemented that way to begin with.
I think that letting people drop ownership and maintenance of "normal packages" and *only* doing modules instead was a mistake. But that's just my opinion ;)
+1, not just yours. ;-)
Kevin Kofler
On 04. 10. 19 at 21:31, Miro Hrončok wrote:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
This will improve the current situation, and it will resolve upgrades from F30->F31.
However, I fail to see how this generally resolves dep_enabled modules, or upgrades of modules in general. For example, Alice runs:
dnf module install foo:1
Fedora N has only foo:1. Fedora N+1 has only foo:2. Alice cannot do:
dnf module disable foo:1
dnf module enable foo:2
because foo:2 is available only in Fedora N+1 and the baseurl is https://mirrors.fedoraproject.org/metalink?repo=fedora-modular-$releasever&a...
And she cannot do:
dnf module disable foo:1
dnf system-upgrade
because of broken deps in Fedora N+1 and module foo:1 (definitely because of the dependency on module_platform(platform:fN)).
On 07. 10. 19 10:05, Miroslav Suchý wrote:
On 04. 10. 19 at 21:31, Miro Hrončok wrote:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
This will improve the current situation, and it will resolve upgrades from F30->F31.
However, I fail to see how this generally resolves dep_enabled modules, or upgrades of modules in general. For example, Alice runs:
dnf module install foo:1
Fedora N has only foo:1. Fedora N+1 has only foo:2. Alice cannot do:
dnf module disable foo:1
dnf module enable foo:2
because foo:2 is available only in Fedora N+1 and the baseurl is https://mirrors.fedoraproject.org/metalink?repo=fedora-modular-$releasever&a...
And she cannot do:
dnf module disable foo:1
dnf system-upgrade
because of broken deps in Fedora N+1 and module foo:1 (definitely because of the dependency on module_platform(platform:fN)).
The "modularity gets enabled without an explicit enablement" approach was IMHO a mistake, especially since it breaks upgrades. The original proposal in this thread is trying to invent a very complicated workaround to this feature's quirks. Instead, my proposal removes the problem, while keeping all the benefits for the users.
My proposal doesn't solve the problem for Alice in your example. Alice has run "dnf module ..." and hence she opted into the problem. The problems here indeed need to be fixed, but it's not something I care deeply about. If we pile workarounds on workarounds for this problem, I won't care. It will only affect the users who opt in.
My proposal solves the problem for every other user, who has never actually run "dnf module ...", who has no idea how modules work or how to disable or reset them. That's the problem I care deeply about.
On Fri, Oct 04, 2019 at 09:31:55PM +0200, Miro Hrončok wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
Part of the "hybrid modularity" proposal was that the default stream could _literally_ be tagged into the base repo as non-modular. That has a lot of appeal to me still!
[...]
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Yeah, like this. :)
On Mon, Oct 7, 2019 at 8:02 PM Matthew Miller mattdm@fedoraproject.org wrote:
On Fri, Oct 04, 2019 at 09:31:55PM +0200, Miro Hrončok wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
Part of the "hybrid modularity" proposal was that the default stream could _literally_ be tagged into the base repo as non-modular. That has a lot of appeal to me still!
To quote you from the other ongoing thread: "The default stream for a package shouldn't be updated in disruptive ways in shipped releases" If that's the case, then what *is* the benefit of abandoning the non-modular version of packages, if default streams need to basically be maintained separately for different branches anyway? 🤔
Fabio
[...]
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Yeah, like this. :)
On Mon, Oct 07, 2019 at 08:13:17PM +0200, Fabio Valentini wrote:
To quote you from the other ongoing thread: "The default stream for a package shouldn't be updated in disruptive ways in shipped releases" If that's the case, then what *is* the benefit of abandoning the non-modular version of packages, if default streams need to basically be maintained separately for different branches anyway? 🤔
To me, most packages would benefit from having two streams: fast and slow. That's the essential problem I want solved anyway. (Maybe with CentOS Streams: fast, slow, very slow.)
The "slow" version would be updated on a careful cadence with big updates aligned with release boundaries. The fast version would be rolling latest. And for applications, you can pick which you want.
On 10/7/19 4:34 PM, Matthew Miller wrote:
To me, most packages would benefit from having two streams: fast and slow. That's the essential problem I want solved anyway. (Maybe with CentOS Streams: fast, slow, very slow.)
The "slow" version would be updated on a careful cadence with big updates aligned with release boundaries. The fast version would be rolling latest. And for applications, you can pick which you want.
That sounds very reasonable from the user point of view. I think the burden on maintainers would then be to decide which path each release belongs in--I guess the rules would be similar to those for API bumps.
Having said that, I am not sure it will solve the problem with ecosystems requiring a specific collection of component versions (*): what is the expected number of required versions for each module in those environments? If it is much more than 2, then the fast/slow scheme might not work.
I tried to get a handle on this by counting the available versions in the Fedora Modular repos (**):
yum module list --releasever=30 --disable-repo updates-modular,update,fedora | cut -d' ' -f1 | sort | uniq -c | colrm 9 | sort | uniq -c
and there seem to be 33 modules with one version, 9 modules with 2 versions, 5 modules with 3 versions and 2 modules with four versions (avocado, dwm and postgresql), with similar results for F29 and rawhide.
(*) Java, Ruby, npm, maybe even Python
(**) I am not sure if this is the way to get a comprehensive list of all modules---there's fewer modules than I thought, and too many one-version modules; please suggest a better way.
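As a rough alternative to parsing `yum module list` output, the same count can be taken from the repository's modulemd metadata directly. This is only a sketch: it assumes you have already downloaded and decompressed the repo's modules.yaml (the path below is a placeholder) and that PyYAML is installed.

import yaml
from collections import Counter

def stream_counts(modules_yaml_path="modules.yaml"):
    # Each "modulemd" document describes one (module name, stream) build;
    # "modulemd-defaults" documents and empty documents are skipped.
    with open(modules_yaml_path) as f:
        pairs = {
            (doc["data"]["name"], str(doc["data"]["stream"]))
            for doc in yaml.safe_load_all(f)
            if doc and doc.get("document") == "modulemd"
        }
    streams_per_module = Counter(name for name, _ in pairs)
    # Map "number of streams" -> "how many modules have that many streams".
    return Counter(streams_per_module.values())

if __name__ == "__main__":
    print(stream_counts())

Counting unique (name, stream) pairs per module name avoids double-counting the multiple builds and contexts a single stream can have.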
On Tue, Oct 08, 2019 at 02:09:24PM -0400, Przemek Klosowski via devel wrote:
Having said that, I am not sure it will solve the problem with ecosystems requiring a specific collection of component versions (*): what is the expected number of required versions for each module in those environments? If it is much more than 2, then the fast/slow scheme might not work.
Yeah, I don't think modularity solves the individual component version explosion of doom well. Ideally, we'd get developers to not do that, but ... we've tried that for 25 years with less and less success over time.
I mean, really, nothing we're doing really solves this.
So... I think the solution there is really: automated bundling, automated detection of that bundling, and where possible, automated patching. But that's _another_ major project.
We could simply stop doing projects that throw wildly different versions of software into a single installation, which causes this issue.
On October 8, 2019 6:23:47 PM UTC, Matthew Miller mattdm@fedoraproject.org wrote:
On Tue, Oct 08, 2019 at 02:09:24PM -0400, Przemek Klosowski via devel wrote:
Having said that, I am not sure it will solve the problem with ecosystems requiring a specific collection of component versions (*): what is the expected number of required versions for each module in those environments? If it is much more than 2, then the fast/slow scheme might not work.
Yeah, I don't think modularity solves the individual component version explosion of doom well. Ideally, we'd get developers to not do that, but ... we've tried that for 25 years with less and less success over time.
I mean, really, nothing we're doing really solves this.
So... I think the solution there is really: automated bundling, automated detection of that bundling, and where possible, automated patching. But that's _another_ major project.
On Tue, 8 Oct 2019 at 15:32, John M. Harris, Jr. johnmh@splentity.com wrote:
We could simply stop doing projects that throw wildly different versions of software into a single installation, which causes this issue.
We could also just all quit and join potato farming cults... they are next to the yak farms we seem to have been shaving on this since 1997.
The issue has been the same since before we combined Core and Extras: how do you get N amount of software for M different needs with Y contributors? When Y is growing or has a large influx of people to replace short-term contributors, you can make N large. When it isn't, because people either found a different solution to their problem or just don't find OS work sexy at the moment, you have to make N smaller. However, in making N smaller, you also end up making M smaller, which will negatively affect Y. You just hope you choose the right N's so that M stays large enough that Y settles at a positive value.
On October 8, 2019 6:23:47 PM UTC, Matthew Miller mattdm@fedoraproject.org wrote:
On Tue, Oct 08, 2019 at 02:09:24PM -0400, Przemek Klosowski via devel wrote:
Having said that, I am not sure it will solve the problem with ecosystems requiring a specific collection of component versions (*): what is the expected number of required versions for each module in those environments? If it is much more than 2, then the fast/slow scheme might not work.
Yeah, I don't think modularity solves the individual component version explosion of doom well. Ideally, we'd get developers to not do that, but ... we've tried that for 25 years with less and less success over time.
I mean, really, nothing we're doing really solves this.
So... I think the solution there is really: automated bundling, automated detection of that bundling, and where possible, automated patching. But that's _another_ major project.
On 10/8/19 3:30 PM, John M. Harris, Jr. wrote:
We could simply stop doing projects that throw wildly different versions of software into a single installation, which causes this issue.
There's a word for this that I can't remember at the moment---'producting'? I think it's related to the monorepo approach from large companies like Google and Facebook, so that developers just push out a bundle containing all the pieces from the monorepo that their software uses. It causes headaches for everyone else, but works for the original developers and their friends, so I'm afraid that it's hard to avoid.
An OS distro like Fedora also can be seen as a monorepo, containing the latest distributed versions. It's just that our versions don't correspond to the versions of the developer. If only we could all agree on one gigantic, universal monorepo of all software in the universe that everyone uses :)
On Tue, Oct 8, 2019 at 3:42 PM John M. Harris, Jr. johnmh@splentity.com wrote:
We could simply stop doing projects that throw wildly different versions of software into a single installation, which causes this issue.
What you don't seem to appreciate, based on your comments in this thread and others over the past couple of months, is that Fedora is a fast-moving distribution by default. That's (the distribution) Fedora's niche. Fedora (the community) is also largely driven by volunteers, who each have their own reasons for contributing to the project.
Finding the right balance of forward-leaning and features-first, while still focusing on usability, is a hard problem to solve. The proverbial pendulum does swing back and forth between "things are moving too quickly, and things are broken because of it" and "things aren't moving quickly enough" quite a bit. But to "stop doing projects", to use your words, is not what Fedora is known for.
-- Jared Smith
On Mon, Oct 07, 2019 at 04:34:21PM -0400, Matthew Miller wrote:
On Mon, Oct 07, 2019 at 08:13:17PM +0200, Fabio Valentini wrote:
To quote you from the other ongoing thread: "The default stream for a package shouldn't be updated in disruptive ways in shipped releases" If that's the case, then what *is* the benefit of abandoning the non-modular version of packages, if default streams need to basically be maintained separately for different branches anyway? 🤔
To me, most packages would benefit from having two streams: fast and slow. That's the essential problem I want solved anyway. (Maybe with CentOS Streams: fast, slow, very slow.)
The "slow" version would be updated on a careful cadence with big updates aligned with release boundaries. The fast version would be rolling latest. And for applications, you can pick which you want.
IIUC, Modules make this particular problem worse: Let's consider this: a new version of ook came out a month ago, got released in F31, and it seems nice and stable and fully backwards compatible. The maintainer decides it's time to push it out to stable releases.
- non-modular: git checkout f30 && git merge f31 && fedpkg build && fedpkg update (OK, consider this pseudo-code)
- modular: the "stable" stream gets updated, and now F29 users get an update just before the release is to be retired. This is at best a waste of compilation cycles and download bits, and increases chances of breakage in a release that is supposed to have a minimum of disruptions.
The obvious solution is to *not* update in F29, which means that as Fabio wrote,
default streams need to basically be maintained separately for different branches
Zbyszek
On 04/10/2019 21:31, Miro Hrončok wrote:
On 04. 10. 19 16:57, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
Wouldn't it be easier if the "default stream" would just behave like a regular package?
+1
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
That sounds very reasonable to me! We would have a clean "core" again without the complexity of Modularity enabled by default. Having it as an optional solution for additional streams still gives the maintainers the power to support different versions and the power users the choice.
- (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
Also sounds reasonable to me, although I prefer 1. This should also solve the current issues.
WDYT?
First of all: Thank you very much for your proposal, I think it is a very important point! I think we should stop developing Modularity in the overly complex way we have now and move to a less complex and more optional (for both maintainers and users) implementation. As I wrote above, I prefer option 1 as it should be the easier, less complex design. But your second option should also improve things.
Greetings, Christian
On 04. 10. 19 21:31, Miro Hrončok wrote:
On 04. 10. 19 16:57, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
- (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
WDYT?
So despite providing zero feedback here, this was voted on at the Modularity meeting:
* Tagging Module Defaults into non-modular repo (sgallagh, 15:41:37)
* AGREED: We disagree with merging default streams into the main repo as non-modular packages. Our approach is to implement a mechanism of following default streams to give people the experience they want. (+4 0 -0) (asamalik, 16:07:40)
https://meetbot.fedoraproject.org/fedora-meeting-3/2019-10-08/modularity.201...
I disagree strongly with the reasons provided in the logs, but clearly, we should aim for solution 1 if solution 2 is not negotiable with the Modularity WG.
On Wed, Oct 9, 2019, 12:29 Miro Hrončok mhroncok@redhat.com wrote:
On 04. 10. 19 21:31, Miro Hrončok wrote:
On 04. 10. 19 16:57, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
- (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
WDYT?
So despite providing zero feedback here, this was voted on at the Modularity meeting:
- Tagging Module Defaults into non-modular repo (sgallagh, 15:41:37)
- AGREED: We disagree with merging default streams into the main repo as non-modular packages. Our approach is to implement a mechanism of following default streams to give people the experience they want. (+4 0 -0) (asamalik, 16:07:40)
https://meetbot.fedoraproject.org/fedora-meeting-3/2019-10-08/modularity.201...
I disagree strongly with the reasons provided in the logs, but clearly, we should aim for solution 1 if solution 2 is not negotiable with the Modularity WG.
Why can't I get rid of the feeling that Modularity is getting shoved down our throats no matter what objections we raise?
On Wed, Oct 9, 2019 at 6:32 AM Fabio Valentini decathorpe@gmail.com wrote:
On Wed, Oct 9, 2019, 12:29 Miro Hrončok mhroncok@redhat.com wrote:
On 04. 10. 19 21:31, Miro Hrončok wrote:
On 04. 10. 19 16:57, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
1. Once a user has selected a stream, updates should follow that stream and not introduce incompatibilities. Selected streams should not be changed without direct action from the user.
2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Pros: Users who don't want alternate versions won't be affected by imperfections of our modular design. No Ursa Major/Prime needed in the buildroot.
Cons: Modular maintainers who do modules with just one stream because it is easier for them would need to roll back to what's easier for everybody else but them. Modular maintainers who do multiple modular streams would need to maintain both the alternate streams and ursine packages.
- (potentially dangerous consequences?)
We put the default modular stream into our regular repos, similarly to what we try to do in the buildroot. "dnf install Foo" would install the Foo package and would not enable any streams or modules. The modular maintainers would keep maintaining the modules as now, the infrastructure would compose the defaults into the regular repo (or an additional but default-enabled one).
Pros: Maintainers would keep doing what they desire.
Cons: We would need to make this technically possible and figure out all the corner cases (however AFAIK this needs to be done for the "modules in buildroot" thing as well).
WDYT?
So despite providing zero feedback here, this was voted on at the Modularity meeting:
- Tagging Module Defaults into non-modular repo (sgallagh, 15:41:37)
- AGREED: We disagree with merging default streams into the main repo as non-modular packages. Our approach is to implement a mechanism of following default streams to give people the experience they want. (+4 0 -0) (asamalik, 16:07:40)
https://meetbot.fedoraproject.org/fedora-meeting-3/2019-10-08/modularity.201...
I disagree strongly with the reasons provided in the logs, but clearly, we should aim for solution 1 if solution 2 is not negotiable with the Modularity WG.
Why can't I get rid of the feeling that Modularity is getting shoved down our throats no matter what objections we raise?
It's being pushed so hard because it has been promoted as a top level objective, and because it's in RHEL now, no one can afford to let it fail. It *has* to succeed for RHEL, and for Fedora to remain a natural upstream for RHEL, it *must* succeed here too.
The problem is that the RHEL approach to modules only works because RHEL is centrally developed and can be correctly coordinated to overcome issues in the design. This is not true in Fedora, and there doesn't seem to be allowances for this difference.
On Wed, Oct 09, 2019 at 06:39:07AM -0400, Neal Gompa wrote:
It's being pushed so hard because it has been promoted as a top level objective, and because it's in RHEL now, no one can afford to let it fail. It *has* to succeed for RHEL, and for Fedora to remain a natural upstream for RHEL, it *must* succeed here too.
Yes; Modularity was created in response to the too-fast/too-slow issue we see from opposite sides of the coin in both Fedora and RHEL -- and work on it was funded by Red Hat. I'm happy to encourage work towards this problem from basically any quarter, because I think it's a fundamental one we need to solve in order to continue to be relevant not just as an upstream for RHEL but in general.
The problem is that the RHEL approach to modules only works because RHEL is centrally developed and can be correctly coordinated to overcome issues in the design. This is not true in Fedora, and there doesn't seem to be allowances for this difference.
This seems *partly* fair. It's in some ways a natural consequence of Red Hat funding the work and having to fit into RHEL release schedules. But I think we can also get attention and work towards Fedora's needs -- especially with 8 out the door and 9 just a twinkle in product management's eye.
On 09. 10. 19 18:30, Matthew Miller wrote:
The problem is that the RHEL approach to modules only works because RHEL is centrally developed and can be correctly coordinated to overcome issues in the design. This is not true in Fedora, and there doesn't seem to be allowances for this difference.
This seems *partly* fair. It's in some ways a natural consequence of Red Hat funding the work and having to fit into RHEL release schedules. But I think we can also get attention and work towards Fedora's needs -- especially with 8 out the door and 9 just a twinkle in product management's eye.
And this is exactly the best time to stop and plan a little before we implement the very fragile workaround proposed at the beginning of the thread just to approach the ideal state of "default modular packages behave just like regular packages".
In RHEL, we put some packages in modules to have the ability to declare: This module is only supported for X years, unlike the rest of RHEL.
In Fedora, we plan to maintain and treat the default modular streams the same way we do regular packages. We have the ability to keep them as regular packages. This approach has clearly been received positively by the community in this thread so far. Let's keep Modularity in Fedora to do what it was promised to do: make it possible to install alternate versions of software. Instead, the majority of Fedora's modules have only one stream. I seriously think that brings no benefit to the users and makes everything needlessly more complicated.
So despite providing zero feedback here, this was voted on at the Modularity meeting:
- Tagging Module Defaults into non-modular repo (sgallagh, 15:41:37)
- AGREED: We disagree with merging default streams into the main repo as non-modular packages. Our approach is to implement a mechanism of following default streams to give people the experience they want. (+4 0 -0) (asamalik, 16:07:40)
Well, based on this discussion, pushing content in modular defaults is not the experience that people want. I have been a bit ill for some time and before I could add my point to the discussion, everything has been more or less said. Just for illustration, this is what I wanted to say about it:
1. Modularity should stay away from my system until I call for it. Right now that is not the case, because Modularity sneaks into users' computers through modular defaults that override the non-modular packages. Gimp is the first such "horse" that jumps onto almost everybody's desktop, and they are modular without even knowing it.
2. Modularity should provide alternative content, if I need it and when I need it. Modules should be installable only through the "dnf module" command and not through the regular dnf command, so that I explicitly need to allow Modularity on my system.
3. The naming conventions of the streams should be *obligatory* for every module packager. So, if we decide that we want a "latest" stream, then all modules should have a "latest" stream for rolling updates. Currently, they all have various stream names, from which I cannot tell anything. If there should be a "slow" path, then again, all modules should have a "slow" path.
4. Non-modular Fedora must be a valid use case and remain an option.
5. If I decide to go modular, there must be a way to go non-modular again without breaking the system. Or, if modular is the only option, and I go into specific streams, there must be a way to go back to the defaults without breaking the system. With non-modular defaults, this seems easy. With modules? I am not sure.
6. We need to expect that once there are hundreds of modules, people will install all possible combinations, and they all will need to work. I am not sure we will be able to test something like that.
Seeing the reaction of the Modularity WG ... I do not understand how it is possible that such important decisions are taken by 4 people without any Fedora-wide discussions like this. And yet, it seems a little bit that even opinions on this list will not fall on fertile ground.
I wish the communication improved in the first place. Community means togetherness.
should aim for solution 1 if solution 2 is not negotiable with the Modularity WG.
+1
Lukas Ruzicka wrote:
Just for illustration, this is what I wanted to say about it:
1. Modularity should stay away from my system until I call for it. Right now that is not the case, because Modularity sneaks into users' computers through modular defaults that override the non-modular packages. Gimp is the first such "horse" that jumps onto almost everybody's desktop, and they are modular without even knowing it.
2. Modularity should provide alternative content, if I need it and when I need it. Modules should be installable only through the "dnf module" command and not through the regular dnf command, so that I explicitly need to allow Modularity on my system.
3. The naming conventions of the streams should be *obligatory* for every module packager. So, if we decide that we want a "latest" stream, then all modules should have a "latest" stream for rolling updates. Currently, they all have various stream names, from which I cannot tell anything. If there should be a "slow" path, then again, all modules should have a "slow" path.
4. Non-modular Fedora must be a valid use case and remain an option.
5. If I decide to go modular, there must be a way to go non-modular again without breaking the system. Or, if modular is the only option, and I go into specific streams, there must be a way to go back to the defaults without breaking the system. With non-modular defaults, this seems easy. With modules? I am not sure.
6. We need to expect that once there are hundreds of modules, people will install all possible combinations, and they all will need to work. I am not sure we will be able to test something like that.
+1 to all of the above.
Seeing the reaction of the Modularity WG ... I do not understand how it is possible that such important decisions are taken by 4 people without any Fedora-wide discussions like this. And yet, it seems a little bit that even opinions on this list will not fall on fertile ground.
Indeed, this is a real issue.
I think what needs to happen is that the people who allowed this to happen get voted out of FESCo, at least if they still refuse to act on the mailing list feedback. They no longer seem to have a majority behind them, if they even ever did. But for that to happen, we need to have people actually running for FESCo and taking a clear position against forced Modularity (i.e., either make Modularity fully optional as proposed in this thread or axe it entirely, no third option). Democracy can only possibly work if there is an actual choice of candidates with non-uniform positions. In the last few elections, pretty much all the candidates uniformly claimed that Modularity was great and the way to go; only a few (like Miro) had some reservations about it (but still did not dare to actually declare themselves AGAINST Modularity in the election campaign – Miro's proposal in this thread definitely goes the right way though).
Kevin Kofler
On Thu, Oct 10, 2019 at 10:41 AM Lukas Ruzicka lruzicka@redhat.com wrote:
Seeing the reaction of the Modularity WG ... I do not understand how it is possible that such important decisions are taken by 4 people without any Fedora-wide discussions like this. And yet, it seems a little bit that even opinions on this list will not fall on fertile ground.
To be clear, I am reading every single reply to this thread very carefully. We *will* be taking all of this feedback into consideration, but please understand that we're also trying to balance things. As Neal noted upthread, we do have a responsibility to our downstream to make sure that we do not break the ability to manage default streams. This becomes much more difficult if we cannot have them in Fedora, because the testing of them is lost. Additionally, no one on the WG disagrees with you that the current state of things is undesirable. I take a moderate amount of offense to the repeated insinuations that the solutions we are building are "hacks". Yes, there's a proposal to work around the upgrade issue to F31 that is absolutely a one-off hack to buy time. But our plans for how upgrades should work long-term as well as how defaults need to behave in the distro are being considered very carefully. We are trying to avoid breakage and to make the process simpler, but we are also shoring up the bridge while crossing it.
We are absolutely considering the option of disallowing default streams in Fedora, but we *really* don't want to rush into that. For one thing, we do have a number of packages that have moved to modules-only that would have to convert back. For some projects, this is probably just an annoyance, but for others this may be a major impediment. In particular, one of the advantages of Modularity is the ability to have buildroot-only packages that are different from the base operating system (and don't end up delivered as artifacts from the module). There are likely modules out there that rely on this behavior because their build requires a newer or older version of some package than the non-modular buildroot provides. This is not the sole problem to address if we go the "no defaults" route, just the first that came to mind. It's unclear to me right now if forcing everyone back to the old behavior is less effort than fixing the remaining Modularity issues. And since we need to fix them for RHEL as well anyway, it's worth considering carefully if the added work is worthwhile.
I'm wary of assuming that this thread represents the whole of Fedoran opinions, however. As we all know, it's generally the set of people who are upset that speak up the loudest. I'm not discounting your concerns (far from it!), but if we only base development decisions on "make sure no one is upset about it", we'd never accomplish anything new at all. This is why when I've been sending out these emails to discuss ideas, I've been trying to carefully describe both the use-cases and the technical limitations (both intrinsic to the design and those that are the result of imperfect implementation) each time. It's somewhat disheartening to hear responses that largely boil down to "If you can't get it perfectly right, stop trying!".
On Fri, Oct 11, 2019 at 8:50 AM Stephen Gallagher sgallagh@redhat.com wrote:
On Thu, Oct 10, 2019 at 10:41 AM Lukas Ruzicka lruzicka@redhat.com wrote:
Seeing the reaction of the Modularity WG ... I do not understand how it is possible that such important decisions are taken by 4 people without any Fedora-wide discussions like this. And yet, it seems a little bit that even opinions on this list will not fall on fertile ground.
To be clear, I am reading every single reply to this thread very carefully. We *will* be taking all of this feedback into consideration, but please understand that we're also trying to balance things. As Neal noted upthread, we do have a responsibility to our downstream to make sure that we do not break the ability to manage default streams. This becomes much more difficult if we cannot have them in Fedora, because the testing of them is lost. Additionally, no one on the WG disagrees with you that the current state of things is undesirable. I take a moderate amount of offense to the repeated insinuations that the solutions we are building are "hacks". Yes, there's a proposal to work around the upgrade issue to F31 that is absolutely a one-off hack to buy time. But our plans for how upgrades should work long-term as well as how defaults need to behave in the distro are being considered very carefully. We are trying to avoid breakage and to make the process simpler, but we are also shoring up the bridge while crossing it.
Two years into this, I am currently not confident that modularity will be adapted to support community distributions well, especially fast-paced ones like Fedora. My fears about it encouraging Fedora to slow down have also seemingly borne fruit. Java is proof positive of this.
Since the implosion of Fedora Java in the regular distribution and its move to modules, the traditional effort to move to newer Java versions has basically disappeared. Java 11 LTS was released last year, and to this day our default Java is still Java 8 (which is EOL!). Clearly, we're developing a new antipattern that we need to nip in the bud sooner rather than later.
My disappointment in this became even greater when openSUSE beat us to switching to Java 11. Their packaging is derived from ours! They've demodularized Java for openSUSE and then did the work to move everything forward. Meanwhile, we've now failed at our "first" and "features" pillars because the incentive is now *gone*.
We are absolutely considering the option of disallowing default streams in Fedora, but we *really* don't want to rush into that. [...]
The buildroot-only packages thing should have been banned in Fedora. In my view, this feature is very much an anti-community feature, because it heavily discourages shared maintainership and permits even more orphanings than should be allowed. The more we do this, the less value the distribution itself actually provides.
For example, it's pretty painful to package Rust software because you cannot rely on the existence of Rust components in the distribution. Everything *must* get built and integrated for each package. This not only defeats one of the major virtuous outcomes of Fedora participating in ecosystems (maintainers helping upstreams keep their software fresh and secure), but also makes it functionally impossible to distribute the workload of maintaining Rust packages in the same way we have for Python, C/C++, and Perl.
I'm wary of assuming that this thread represents the whole of Fedoran opinions, however. [...]
At least this Fedora packager is getting super burned out by the number of problems caused in his day-to-day work by the creation of module-only software in Fedora. I've never really had a problem with the idea of modules for alternative software, but I deeply despise the dependency on modularity for "default" software (per modularity parlance).
I still don't have a good grasp of what to do anymore for packaging. I've edged away from packaging anything that involves modularity in Fedora proper because it's just too complex for me to grok.
And as a third party packager, I really don't want to deal with modules for "default distro" setups. How am I supposed to make my software compatible with all of the potential module filters imposed on me by DNF? I don't know how to deal with depending on content existing in default modules either...
-- 真実はいつも一つ!/ Always, there's only one truth!
----- Original Message -----
From: "Neal Gompa" ngompa13@gmail.com To: "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Friday, October 11, 2019 2:36:58 PM Subject: Re: Modularity and the system-upgrade path
...snip...
Since the implosion of Fedora Java in the regular distribution and its move to modules, the traditional effort to move to newer Java versions has basically disappeared. Java 11 LTS was released last year, and to this day our default Java is still Java 8 (which is EOL!). Clearly, we're developing a new antipattern that we need to nip in the bud sooner rather than later.
Not going to argue that we're well behind here, but to my knowledge, the Stewardship SIG maintains just about everything you'd find in a useless modular repo (e.g., packages that are outside of the default module stream's limited API) as an ursine package. We try not to duplicate too much of what's provided in the default module streams. So the claim that Java has imploded in the regular distribution is a little bit of a stretch.
Then again, I don't use eclipse, and most of my projects use CMake, not maven, so I don't miss either of those major projects. I'm mostly talking about the vast swathes of Java libraries... :)
My disappointment in this became even greater when openSUSE beat us to switching to Java 11. Their packaging is derived from ours! They've demodularized Java for openSUSE and then did the work to move everything forward. Meanwhile, we've now failed at our "first" and "features" pillars because the incentive is now *gone*.
The Stewardship SIG does its best to update packages, but we don't have the resources to fully switch to JDK 11 ourselves. That's really up to the Java SIG. Also, there's really nothing to do to demodularize a package. Just choose a branch and build it as an ursine package...
On Fri, Oct 11, 2019 at 2:38 PM Neal Gompa ngompa13@gmail.com wrote:
...snip...
For whatever it is worth, I agree with everything Neal wrote too.
Before things are rolled out further, I'd like to see some policies agreed upon for what modularity is and isn't allowed for in Fedora: what are the rules for default streams, buildroot-only modules, modularizing non-leaf packages, etc. It feels like we haven't agreed on what we actually want to use modularity for *in Fedora*. There could very well be things that modularity should support for RHEL that don't make sense for Fedora... and I think there's fear that this distinction isn't being made at the moment. Or that the decisions have already been made.
It's certainly true that the loudest and most unhappy voices tend to dominate discussions, but so far I haven't seen many people speak up who are enthusiastic about modularity who aren't also involved in it in some way.
Granted, that could well change over time as improvements are made (I think the Java situation has left a bad taste in everyone's mouth), but it still seems like this is a good time to reflect on the current implementation of modularity: what its benefits are, what we want it to do, and whether it's doing what we want it to do.
Ben Rosser
Ben Rosser wrote:
Before things are rolled out further, I'd like to see some policies agreed upon for what modularity is and isn't allowed for in Fedora: what are the rules for default streams, buildroot only modules, modularizing non-leaf packages, etc.
So, to start that discussion, I think all 3 of those should be no-gos in Fedora. In other words, I propose the following rules:
- no default streams, use "ursine" (non-modular) packages for the default versions instead (you may ALSO ship the same version as a module, if that makes it easier for you, i.e., if it means you don't have to retire and unretire module versions at every release, but the "ursine" version must exist),
- no buildroot-only modules nor buildroot-only packages in modules, everything used to build packages must be shipped along with them,
- no non-leaf modules, since those unavoidably lead to version hell due to the non-parallel-installability of different versions of the same module.
Kevin Kofler
On 13. 10. 19 19:38, Kevin Kofler wrote:
...snip...
The third rule is unnecessary with the first. We can keep the integrity of the default and provide non-defaults that may violate it if properly documented (you might want to enable a nondefault modular stream to install libfoo:0.27 in a container, even if it makes various packages you don't need noninstallable).
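As a concrete illustration of that container case, a minimal sketch using Miro's hypothetical libfoo:0.27 stream (module and stream names are placeholders, not real Fedora content):

# inside a container build or other throwaway environment
dnf -y module enable libfoo:0.27
dnf -y install libfoo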
On Sun, Oct 13, 2019 at 08:01:45PM +0200, Miro Hrončok wrote:
...snip...
I was hoping to have some of the folks who would be saddled with tons more work if this policy was enacted chime in, but I don't think any of them have (i.e., the people who have moved their packages to modules and have retired or are going to retire their non-modular versions). We may want to ask them directly what they would do if this policy is enacted.
I understand that people want to go back to the last known "good" state for them and regroup, but keep in mind that has its price also. One that I don't think too many in this thread will have to pay, so it's easy to just say 'revert it all'.
kevin
On Sun, Oct 13, 2019 at 10:48 PM Kevin Fenzi kevin@scrye.com wrote:
...snip...
From what I can tell, the only two package groups that are really affected by a move to "modules only" are java and eclipse. If that's correct, "revert it all" would only affect eclipse so far, because it now has broken dependencies in non-modular fedora.
But we (the Stewardship SIG) have been maintaining over 200 packages in the Java stack since their original maintainers either were declared unresponsive or abandoned things for "greener" pastures in modular branches. We've managed to cut the number of outdated packages from over 60% to under 40%, and still-pending updates bring that down to about 25%. So I'd wager that the non-modular Java packages are now (or will soon be) in better shape than their modular counterparts ...
Fabio
On 13. 10. 19 23:01, Fabio Valentini wrote:
...snip...
From what I can tell, the only two package groups that are really affected by a move to "modules only" are java and eclipse. If that's correct, "revert it all" would only affect eclipse so far, because it now has broken dependencies in non-modular fedora.
Don't forget rust, but rust is covered by https://pagure.io/releng/issue/8767, where the maintainer has asked for the modules to be retired a month ago, and https://pagure.io/releng/issue/8265.
On Mon, Oct 14, 2019 at 10:46:50AM +0200, Miro Hrončok wrote:
On 13. 10. 19 23:01, Fabio Valentini wrote:
...snip...
From what I can tell, the only two package groups that are really affected by a move to "modules only" are java and eclipse. If that's correct, "revert it all" would only affect eclipse so far, because it now has broken dependencies in non-modular fedora.
So, I see the following modules (in rawhide anyhow) that don't seem to have non-modular versions:
avocado cri-o django dwm eclipse gimp jmc lizardfs mysql ninja perl-bootstrap stratis
So all of those would need to come back as regular packages under this proposal, right? So they are affected too, no?
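For anyone who wants to double-check an individual entry in that list, a hedged sketch of one way to do it (ninja used as the example; repoquery comes from dnf-plugins-core, and the repo glob may need adjusting for rawhide):

dnf module list ninja
dnf repoquery --disablerepo='*modular*' ninja
# if the second command prints nothing, there is no non-modular build in the enabled repos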
Don't forget rust, but rust is covered by https://pagure.io/releng/issue/8767, where the maintainer has asked for the modules to be retired a month ago, and https://pagure.io/releng/issue/8265.
I was hoping I could convince them to not do that, but I guess the jury is still out.
kevin
On 15. 10. 19 19:13, Kevin Fenzi wrote:
So, I see the following modules (in rawhide anyhow) that don't seem to have non modular versions:
avocado cri-o django dwm eclipse gimp jmc lizardfs mysql ninja perl-bootstrap stratis
Do all of those have default streams? I don't know about all of them, but I think that at least gimp, django, and mysql have non-modular "default" versions.
In the case of gimp, it has both a non-modular version and a default modular stream.
perl-bootstrap is probably needed only to bootstrap Perl.
Eclipse has a non-modular version that fails to install because other stuff has default streams.
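As a hedged aside, one way to see which gimp a given system actually ended up with, and whether the default stream got enabled (exact output varies by dnf version):

rpm -q gimp
dnf module list gimp    # stream list marks the default [d] and enabled [e] streams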
Yes, some of the modules would need to be "converted back". Better to do it when we can still count them on our fingers.
On 15. 10. 19 19:25, Miro Hrončok wrote:
...snip...
Yes, some of the modules would need to be "converted back". Better to do it when we can still count them on our fingers.
+1
Kevin Fenzi wrote:
I was hoping to have some of the folks who would be saddled with tons more work if this policy was enacted chime in, but I don't think any of them have. (ie, the people who have moved their packages to modules and have or are going to retire their non modular versions). We may want to ask them directly what they would do if this policy is enacted.
It is their own fault that they did this controversial move despite the objections we have been uttering from day one. So they only have themselves to blame for any extra work. It will teach them an important lesson to not jump the gun.
That said, judging from Fabio's reply, there probably won't even be that much extra work, just a handful of packages to sync ("git merge") from the modular branches to the regular ("ursine") ones and "fedpkg build" there.
The current Eclipse FTBFS in non-modular F31 is also a non-issue for this proposal, because it is exclusively caused by dependencies having become module-only, so requiring them to have a default non-modular ("ursine") version will also instantly fix non-modular Eclipse.
Kevin Kofler
Thank you for clarifications.
It's somewhat disheartening to hear responses that largely boil down to "If you can't get it perfectly right, stop trying!".
I am sorry if this is what you feel about my comments. I never wanted to say that you should stop trying if you cannot get it perfectly right. I agree that trying (and perhaps failing or winning) is the motor of development and evolution. What I wanted to say was: "Until we can get it perfectly right, let's not make it the default." Do you think this is a stupid approach from a QE perspective?
You have said a couple of times that users should not switch off the modular repos, because doing so could break their system. What if, until we know Modularity is perfectly safe, we made sure that users could turn the modular repos off without breaking their system? In that case, Modularity could be an opt-in for the time being, and we would buy ourselves time to test default streams and whatever else you need to test.
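For what it's worth, a minimal sketch of what that opt-out/opt-in looks like today with the stock repo ids (fedora-modular, updates-modular) and dnf-plugins-core; this only illustrates the mechanics and is not a claim that it is currently a safe or supported configuration:

dnf config-manager --set-disabled fedora-modular updates-modular
# and to opt back in later:
dnf config-manager --set-enabled fedora-modular updates-modular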
Stephen Gallagher wrote:
To be clear, I am reading every single reply to this thread very carefully. We *will* be taking all of this feedback into consideration, but please understand that we're also trying to balance things. As Neal noted upthread, we do have a responsibility to our downstream to make sure that we do not break the ability to manage default streams. This becomes much more difficult if we cannot have them in Fedora, because the testing of them is lost.
It has been repeatedly stated that Fedora is NOT a beta version of RHEL. So it must not be treated as one.
Fedora needs to ship what makes sense for Fedora, not what makes sense for RHEL.
We are absolutely considering the option of disallowing default streams in Fedora, but we *really* don't want to rush into that. For one thing, we do have a number of packages that have moved to modules-only that would have to convert back.
Well, yes, they would. But it is better to do it now than to wait until even more packages are affected.
If the wrong decision to use default streams had not been made to begin with, we would not have to do this extra work now! And the longer we wait, the worse it will become. So let's fix things as quickly as possible!
No matter how far down the wrong road you have gone, turn back. (That sentence is frequently quoted on the Internet; it is allegedly a Turkish proverb.)
For some projects, this is probably just an annoyance, but for others this may be a major impediment. In particular, one of the advantages of Modularity is the ability to have buildroot-only packages that are different from the base operating system (and don't end up delivered as artifacts from the module). There are likely modules out there that rely on this behavior because their build requires a newer or older version of some package than the non-modular buildroot provides.
The whole concept of buildroot-only packages is incompatible with the definition of Fedora as a self-hosting system and should never have been allowed. I agree with Neal Gompa that it is absolutely anti-community. In addition to the points he already stated, that misfeature makes it painful for users to rebuild the packages, or to compile other software with the same build requirements.
If the package truly needs a different version of the dependency (and cannot be fixed to work with the system version), compatibility packages with a versioned name can be introduced.
But in most cases, buildroot-only packages are actually being abused to hide the only version of a package used at build time from users, because the maintainer does not want to "support" the package for some reason. This is obviously the worst in RHEL, where the decision to not support something is probably taken by management, but reportedly, this situation (packages private to some module) also exists in Fedora, where there is just no valid reason to do that.
I'm wary of assuming that this thread represents the whole of Fedoran opinions, however. As we all know, it's generally the set of people who are upset that speak up the loudest.
But that does not imply that they are a minority. It is too easy to discount criticism as coming from a "vocal minority" with no evidence whatsoever. And as far as I can tell, the only people speaking out in favor of default-enabled Modularity in this thread are directly involved with the Modularity WG; all other packagers who replied support Miro's proposal.
I'm not discounting your concerns (far from it!), but if we only base development decisions on "make sure no one is upset about it", we'd never accomplish anything new at all.
That wrongly assumes that you cannot innovate without breaking things. Innovation can and ought to be done in a way that does not upset people.
I've been trying to carefully describe both the use-cases and the technical limitations (both intrinsic to the design and those that are the result of imperfect implementation) each time. It's somewhat disheartening to hear responses that largely boil down to "If you can't get it perfectly right, stop trying!".
If, just from the design, it is possible to prove by simple logic that the system will not work no matter the implementation (which is the case with Modularity, because the design allows modules to require a specific non-default version of another module while not allowing 2 versions of the same module to coexist, so it is a recipe for version conflicts), then yes, it is better to stop trying.
Kevin Kofler
On Thu, 2019-10-10 at 16:40 +0200, Lukas Ruzicka wrote:
So despite providing zero feedback here, this was voted on at the Modularity meeting:
- Tagging Module Defaults into non-modular repo (sgallagh, 15:41:37)
- AGREED: We disagree with merging default streams into the main repo as non-modular packages. Our approach is to implement a mechanism of following default streams to give people the experience they want. (+4 0 -0) (asamalik, 16:07:40)
Well, based on this discussion, pushing content in modular defaults is not the experience that people want. I have been a bit ill for some time, and before I could add my point to the discussion, everything has been more or less said. Just for illustration, this is what I wanted to say about it:
- Modularity should stay away from my system until I call for it. Right now that is not the case, because Modularity sneaks onto users' computers through modular defaults that take precedence over the non-modular packages. Gimp is the first such "horse" that jumps onto almost everybody's desktop, and people are running modular content without even knowing it.
- Modularity should provide alternative content, if I need it and when I need it. Modules should be installable only through the "dnf module" command and not through the regular dnf command, so that I explicitly need to allow Modularity on my system.
- The naming conventions of the streams should be obligatory for every module packager. So, if we decide that we want a "latest" stream, then all modules should have a "latest" stream for rolling updates. Currently they all have various stream names, from which I cannot tell anything. If there should be a "slow" path, then again, all modules should have a "slow" path.
- Non-modular Fedora must be a valid use case and remain an option.
I can imagine not having the non-modular content at all, with everything packaged as modules, but it would have to be in a different shape than it is now. That approach would simplify things from the other direction. I don't care that I'm using modules as long as they work as expected and I'm not dealing with broken upgrade paths or conflicts thanks to this feature. It would even be interesting to have (as someone mentioned here) fast and slow streams. So if you are running Fedora Server you can default to the slow one, and if you like rolling-update distributions (but fear Rawhide) then go to the fast stream.
If I decide to go modular, there must be a way to go non-modular again without breaking the system. Or, if modular is the only option, then if I go into specific streams, there must be a way to go back to the defaults without breaking the system. With non-modular defaults, this seems easy. With modules? I am not sure.
We need to expect that once there are hundreds of modules, people will install all possible combinations, and they all will need to work. I am not sure we will be able to test something like that.
Seeing the reaction of the Modularity WG ... I do not understand how it is possible that such important decisions are taken by 4 people without any Fedora-wide discussion like this. And yet, it seems that even the opinions on this list will not fall on fertile ground.
I wish the communication improved in the first place. Community means togetherness.
should aim for solution 1. if solution 2. is not negotiable by the modularity WG.
+1
On 2019-10-04, Miro Hrončok mhroncok@redhat.com wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions of that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Big con:
That effectively bans modules with multiple dependencies where at least one is a default version.
Example: I have Perl 5.26 as a default version. I have Perl 5.30 as an alternative version. Now I want to package Bugzilla, which is written in Perl. How do you package Bugzilla so that it works with Perl 5.26 as well as with Perl 5.30?
If each of the Perls is a stream of a module, you will put Bugzilla into a module and let it depend on any of the Perls. A user can then install any of the Perls and Bugzilla.
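A hedged sketch of the user side of that, assuming a hypothetical bugzilla module built (via stream expansion) against both Perl streams; the bugzilla stream name here is a placeholder:

dnf module enable perl:5.30
dnf module enable bugzilla:5.0
dnf install bugzilla    # resolves to the bugzilla build made against perl 5.30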
With your proposal, the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
-- Petr
On Friday, November 15, 2019 6:32:21 AM MST Petr Pisar wrote:
Example: I have Perl 5.26 as a default version. I have Perl 5.30 as an alternative version. Now I want to package Bugzilla, which is written in Perl. How do you package Bugzilla so that it works with Perl 5.26 as well as with Perl 5.30?
This sounds like a bug in Modularity.
If each of the Perls is a stream of a module, you will put Bugzilla into a module and let it depend on any of the Perls. User can install any of the Perls and Bugzilla.
I'm guessing that Perl from a module doesn't meet a Require on perl? That's not a policy issue, nor an issue with traditional, non-modular packages.
With your proposal Bugzilla packager would have to package Bugzilla twice: as a normal package for default Perl 5.26 and as a module for Perl 5.30. Then a user would have hard time to select the right combinations of Perl and Bugzilla. It would double fork work pacakgers and and make the system more dificult for users.
I don't believe that's the case. The packager would choose how they want to handle it, most likely just not bothering with modules. The user would just `dnf install bugzilla`, and use the version that is packaged as a non-modular package.
On 2019-11-15, John M. Harris Jr johnmh@splentity.com wrote:
This sounds like a bug in Modularity.
Modularity can achieve it when both Perls are packaged as a module. I'm only showing why we need default stream if we want modules.
I'm guessing that Perl from a module doesn't meet a Require on perl?
It meets the RPM-level "Require on perl". But that's not sufficient, because the Perl versions are not binary compatible with each other. You need to track which Perl Bugzilla was built against. That means you need to build Bugzilla twice and keep the two Bugzilla builds distinct, so that DNF can install the right build depending on the Perl the user has already installed. Modularity supports that, but you need both Perls as modules.
With your proposal, the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
I don't believe that's the case. The packager would choose how they want to handle it, most likely just not bothering with modules. The user would just `dnf install bugzilla`, and use the version that is packaged as a non-modular package.
If the packager does not build Bugzilla for the modular Perl, then of course the user has no choice. But we are talking about the case where the user and the packager want to have a choice.
-- Petr
On Fri, Nov 15, 2019 at 02:53:08PM -0000, Petr Pisar wrote:
...snip...
Modularity can achieve it when both Perls are packaged as a module. I'm only showing why we need default stream if we want modules.
I'm interested in how this should work when two different modules interact, and we need a language binding across both modules.
Consider if we move the virtualization stack (QEMU & Libvirt) into a module with two streams, one libvirt 5.8.0 and one libvirt 6.1.0.
Now we want to build Perl bindings for libvirt. We'll need the corresponding version of perl-Sys-Virt, either 5.8.0 or 6.1.0, built for each virt module stream, but also built for each Perl module stream 5.26 / 5.30, e.g. the combinatorial expansion:
- perl-Sys-Virt 5.8.0 with libvirt 5.9.0 with perl 5.26
- perl-Sys-Virt 5.8.0 with libvirt 5.9.0 with perl 5.30
- perl-Sys-Virt 6.1.0 with libvirt 6.1.0 with perl 5.26
- perl-Sys-Virt 6.1.0 with libvirt 6.1.0 with perl 5.30
which module would the perl-Sys-Virt builds live in ?
If perl-Sys-Virt is part of the virt module, IIUC we'd only be able to build it for one specific perl module stream.
If perl-Sys-Virt is part of the perl module, IIUC we'd only be able to build it for one specific virt module stream
It looks to me like we have to create a new module just to hold the perl-Sys-Virt package, and give this 4 streams, to cover the combinatorial expansion of the perl & virt module streams. Is this right ?
And we'd have to create more modules for every other language binding we ship (ocaml, python, ruby, etc.) if the language runtime uses modules.
Regards, Daniel
On 2019-11-15, Daniel P Berrangé berrange@redhat.com wrote:
...snip...
Now we want to build Perl bindings for libvirt. We'll need the corresponding version of perl-Sys-Virt either 5.8.0 or 6.1.0, built for each virt module stream, but also built for each Perl module stream 5.26 / 5.30. eg the combinatorial expansion
- perl-Sys-Virt 5.8.0 with libvirt 5.9.0 with perl 5.26
- perl-Sys-Virt 5.8.0 with libvirt 5.9.0 with perl 5.30
- perl-Sys-Virt 6.1.0 with libvirt 6.1.0 with perl 5.26
- perl-Sys-Virt 6.1.0 with libvirt 6.1.0 with perl 5.30
True, you have 4 combinations.
which module would the perl-Sys-Virt builds live in ?
If perl-Sys-Virt is part of the virt module, IIUC we'd only be able to build it for one specific perl module stream.
If perl-Sys-Virt is part of the perl module, IIUC we'd only be able to build it for one specific virt module stream
It looks to me like we have to create a new module just to hold the perl-Sys-Virt package, and give this 4 streams, to cover the combinatorial expansion of the perl & virt module streams. Is this right ?
No. Modularity solves this combination problem with "stream expansion". The sources for such a module exist only once, and you submit them for building with fedpkg only once, but the build system computes all the combinations (this is the stream expansion) and schedules a build for each combination. That results in multiple module builds with the same module name, stream, and version, differing only in a special discriminator called "context".
Example: Let's say you have a libvirt module with 5.8.0 and 6.1.0 streams and a perl module with 5.26 and 5.30 streams. If you add perl-Sys-Virt into a new module, you write a modulemd file for it like this:
- buildrequires:
    libvirt: [5.8.0, 6.1.0]
    perl: [5.26, 5.30]
    platform: [f32]
  requires:
    libvirt: [5.8.0, 6.1.0]
    perl: [5.26, 5.30]
    platform: [f32]
"fedpkg module-build" on it will spawn 4 builds. Even you don't have to enumerate the streams and let the module build system to figure out it automatically and expand for all existing:
- buildrequires:
    libvirt: []
    perl: []
    platform: []
  requires:
    libvirt: []
    perl: []
    platform: []
Or you can put perl-Sys-Virt into the libvirt module and write into the libvirt modulemd of each of the libvirt streams:
- buildrequires:
    perl: []
    platform: []
  requires:
    perl: []
    platform: []
You can see these modules in RHEL or CentOS. E.g. perl-DBD-Pg module https://git.centos.org/modules/perl-DBD-Pg/blob/c8-stream-3.7/f/perl-DBD-Pg.yaml.
When installing the module, DNF makes sure to select a proper context for the libvirt and perl streams you have already selected. If it happened that the module was built only for Perl 5.26, but you have already enabled Perl 5.24 on your system, DNF will report an error that the 5.26 module is needed but it is disabled.
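A hedged sketch of that install-time flow, reusing the hypothetical module and stream names from this example:

dnf module enable libvirt:6.1.0 perl:5.30
dnf module enable perl-Sys-Virt
dnf install perl-Sys-Virt
# DNF picks the perl-Sys-Virt context built against libvirt 6.1.0 + perl 5.30,
# or reports an error if that combination was never built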
And we'd have to do create more modules for every other language binding we ship (ocaml, python, ruby, etc) if the language runtime uses modules.
You can put all the bindings into one module. Or each binding into its own module. Whatever fits your needs better.
-- Petr
On Fri, Nov 15, 2019, 17:38 Petr Pisar ppisar@redhat.com wrote:
...snip...
No. Modularity solves this combination problem with "stream expansion". Sources for such module exists only once, you submit them for building with fedpkg only once, but a build systems computes all combinations (this the stream expansion) and schedules a build for each of the combination. That will result in multiple module builds with the same module name, stream, version, but differing with a special discriminator called "context".
The problem described by Daniel was that the perl-Sys-Virt module itself should be a different version when built against 5.x libvirt.
On 2019-11-15, Igor Gnatenko i.gnatenko.brain@gmail.com wrote:
[...]
The problem described by Daniel was that the perl-Sys-Virt module itself should be a different version when built against 5.x libvirt.
You are right. Then either you put perl-Sys-Virt 5.8.0 into a perl-Sys-Virt:5.8.0 stream with this dependency specification:
- buildrequires:
    libvirt: [5.9.0]
    perl: [5.26, 5.30]
    platform: [f32]
  requires:
    libvirt: [5.9.0]
    perl: [5.26, 5.30]
    platform: [f32]
and the perl-Sys-Virt 6.1.0 package into a perl-Sys-Virt:6.1.0 stream with:
- buildrequires:
    libvirt: [6.1.0]
    perl: [5.26, 5.30]
    platform: [f32]
  requires:
    libvirt: [6.1.0]
    perl: [5.26, 5.30]
    platform: [f32]
Or you put the perl-Sys-Virt package into the libvirt module, and for the libvirt:5.9.0 stream you write:
- buildrequires:
    perl: [5.26, 5.30]
    platform: [f32]
  requires:
    perl: [5.26, 5.30]
    platform: [f32]
and for libvirt:6.1.0 stream you do the same.
Which approach you choose probably depends on the compatibility among perl-Sys-Virt package versions and among libvirt versions, and on how often they are released.
I.e. if you can rebase perl-Sys-Virt inside a libvirt stream because perl-Sys-Virt does not break ABI, then it makes sense to keep it inside the libvirt module. That's because the public ABI of a module should not change inside a stream.
You can also consider how expensive it is to build, test, and deliver the libvirt module. If, e.g., building perl-Sys-Virt were much quicker than building libvirt, and there were plenty of perl streams, then it would make sense to move the perl-Sys-Virt package into its own module.
I think it's a similar problem to deciding when to bundle all dependencies into one package and when to aim for splitting them into multiple independent packages.
-- Petr
Yes, but what you have described is basically creating 2 streams of the perl-Sys-Virt module, which is probably not what normal people want. Creating a module for one package is the worst idea ever.
Sure, bundling perl-Sys-Virt into the libvirt module would solve the problem, but then what's the point of modules? You will then be building libvirt itself multiple times due to the stream expansion.
On Tue, Nov 19, 2019 at 11:38 AM Petr Pisar ppisar@redhat.com wrote:
...snip...
On 2019-11-19, Igor Gnatenko ignatenkobrain@fedoraproject.org wrote:
Yes, but what you have described is basically to create 2 streams of perl-Sys-Virt module. Which is probably not what normal people want.
Having two different perl-Sys-Virt packages was requested by Daniel. That was not my choice.
Creating module for one package is the worst idea ever.
Matter of opinion.
Sure, bundling perl-Sys-Virt into the libvirt module would solve the problem, but then what's the point of modules? You will be building libvirt itself then multiple times due to the stream expansion.
Then put perl-Sys-Virt into a separate module.
And please do not top-post.
-- Petr
On 11/15/19 11:27 AM, Petr Pisar wrote:
No. Modularity solves this combination problem with "stream expansion". The sources for such a module exist only once, and you submit them for building with fedpkg only once, but the build system computes all the combinations (this is the stream expansion) and schedules a build for each combination. That results in multiple module builds with the same module name, stream, and version, differing only in a special discriminator called "context".
so for one module with two versions, we will have 2 builds, for 2 modules with two versions we'll have four builds, and in general for N modules with M versions on average, we will have N^M builds? This is a textbook combinatorial explosion: 100 modules with average 3 versions each is a million builds and tests, with million resulting versions to be picked from.
Of course in practice the combinatorial behavior only happens within the subsets of software that depend on each other, but, nevertheless, it seems to me that this means that we have to control and limit the number of interdependent modules drastically, like to single digits.
BTW, it always bothered me that in some sense the prime case for modules is the kernel---but the kernel has always been treated specially and is not being subsumed into modules. I think that is because we are thinking about the whole thing wrong; we haven't found the right abstraction for dealing with software versioning yet.
Przemek Klosowski via devel wrote:
so for one module with two versions, we will have 2 builds, for 2 modules with two versions we'll have four builds, and in general for N modules with M versions on average, we will have N^M builds?
M^N actually.
1 module with M versions = M builds
2 modules with M versions each = M*M = M^2 builds
3 modules with M versions each = M*M*M = M^3 builds
…
N modules with M versions each = M^N builds
This is a textbook combinatorial explosion: 100 modules with average 3 versions each is a million builds and tests
It is actually 3^100 > 5*10^47 builds. That's more than 500 000 000 000 000 000 000 000 000 000 000 000 000 000 million builds, not 1 million.
Kevin Kofler
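A quick way to double-check that figure with any arbitrary-precision calculator:

    echo '3^100' | bc    # prints a 48-digit number, roughly 5.15 * 10^47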
On 2019-11-15, Przemek Klosowski via devel devel@lists.fedoraproject.org wrote:
Of course in practice the combinatorial behavior only happens within the subsets of software that depend on each other, but, nevertheless, it seems to me that this means that we have to control and limit the number of interdependent modules drastically, like to single digits.
When it matters, maintainers can limit the number of combinations. E.g. you can restrict to 2 combinations like this:
- buildrequires:
    libvirt: [5.8.0]
    perl: [5.26]
    platform: [f32]
  requires:
    libvirt: [5.8.0]
    perl: [5.26]
    platform: [f32]
- buildrequires:
    libvirt: [6.1.0]
    perl: [5.30]
    platform: [f32]
  requires:
    libvirt: [6.1.0]
    perl: [5.30]
    platform: [f32]
But don't forget that if a built module can actually work with many streams at run-time, you can simplify it like this:
- buildrequires:
    libvirt: [5.8.0]
    perl: [5.26, 5.30]
    platform: [f32]
  requires:
    libvirt: [5.8.0, 6.1.0]
    perl: [5.26, 5.30]
    platform: [f32]
This declares that you want to make two builds and each of the builds will be compatible with both libvirt streams. This is what Java modules often do.
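Seen from the user side, a run-time requires list with several streams means the same build installs against whichever of those streams happens to be enabled; a hedged sketch (the perl-Sys-Virt module here is hypothetical, stream names taken from the example above):

    dnf module enable libvirt:6.1.0          # or libvirt:5.8.0 - either satisfies the requires above
    dnf module install perl-Sys-Virt:6.1.0   # the same module build works with both libvirt streams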
we haven't found the right abstraction for dealing with software versioning yet.
I'm pessimistic. Fedora, like any binary distribution, distributes binaries. ABI changes usually proliferate quicker than API incompatibilities. That's why e.g. Gentoo does not have this issue: there you simply put "spec files" for multiple versions into a repository and the exact binary combination is formed at installation time on a user's machine. There is no "Koji" in the distribution chain.
-- Petr
On Friday, November 15, 2019 7:53:08 AM MST Petr Pisar wrote:
Modularity can achieve it when both Perls are packaged as a module. I'm only showing why we need default streams if we want modules.
If that's the case, why not build a (separate) Modularity distro? If Modularity cannot work with non-modular packages, and that is not a bug with Modularity, it is fundamentally incompatible with Fedora as a traditional distribution.
If each of the Perls is a stream of a module, you will put Bugzilla into a module and let it depend on any of the Perls. A user can then install any of the Perls and Bugzilla.
I'm guessing that Perl from a module doesn't meet a Require on perl?
It meets the RPM-level "Require on perl". But that's not sufficient, because the Perl versions are not binary compatible with each other. You need to track which Perl Bugzilla was built against. That means you need to build Bugzilla twice and keep these two Bugzilla builds distinct so that DNF can install the right build depending on which Perl the user has already installed. Modularity supports that, but you need both Perls packaged as a module.
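A hedged sketch of what that looks like from the user side once both are modules (the bugzilla module is hypothetical; the perl streams follow the example):

    dnf module enable perl:5.30     # pick the non-default interpreter stream
    dnf module install bugzilla     # hypothetical module; dnf would select the bugzilla
                                    # build whose context matches the enabled perl 5.30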
That would depend on how the Perl packages are actually handled, which I honestly haven't checked, and so I will make no claims as to compatibility.
With your proposal the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
I don't believe that's the case. The packager would choose how they want to handle it, most likely just not bothering with modules. The user would just `dnf install bugzilla`, and use the version that is packaged as a non-modular package.
If the packager does not build Bugzilla for the modular Perl, then of course the user has no choice. But we are talking about a case where the user and the packager want to have a choice.
It seems, based on what you've said, that Modules remove this choice. If somebody chooses for something in the dependency tree to be a module, it all has to be a module, otherwise it doesn't work. Please do correct me if I'm wrong.
On 15. 11. 19 14:32, Petr Pisar wrote:
On 2019-10-04, Miro Hrončok mhroncok@redhat.com wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Big con:
That effectively bans modules with multiple dependencies where at least one is a default version.
Example: I have Perl 5.26 as a default version. I have Perl 5.30 as an alternative version. Now I want to package Bugzilla that's written in Perl. How do you package Bugzilla so that it works with Perl 5.26 as well as with Perl 5.30?
I don't understand why the user would care about the Perl version when they want Bugzilla. How is Bugzilla different from e.g. Slic3r (an app that happens to be written in Perl)? Do we want to modularize all such apps to solve the "no parallel instability" feature?
If each of the Perls is a stream of a module, you will put Bugzilla into a module and let it depend on any of the Perls. A user can then install any of the Perls and Bugzilla.
With your proposal the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
With my proposal, the Bugzilla packager would package Bugzilla in non-modular Fedora unless they also want to package it as a module. If I see it correctly, this is exactly the case today.
On 2019-11-15, Miro Hrončok mhroncok@redhat.com wrote:
On 15. 11. 19 14:32, Petr Pisar wrote:
On 2019-10-04, Miro Hrončok mhroncok@redhat.com wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Big con:
That effectively bans modules with multiple dependencies where at least one is a default version.
Example: I have Perl 5.26 as a default version. I have Perl 5.30 as an alternative version. Now I want to package Bugzilla that's written in Perl. How do you package Bugzilla so that it works with Perl 5.26 as well as with Perl 5.30?
I don't understand why the user would care about the Perl version when they want Bugzilla. How is Bugzilla different from e.g. Slic3r (an app that happens to be written in Perl)? Do we want to modularize all such apps to solve the "no parallel instability" feature?
I don't know. Ask the user why he needs a different Perl version than the default one. Maybe he has some other applications that work only with that particular version.
If you believe that users do not care about the version of software they use, then we can drop modularity and all Fedora releases and deliver only Rawhide. Or we can stop integrating new versions of software and deliver Fedora 32 and nothing else forever.
If each of the Perls is a stream of a module, you will put Bugzilla into a module and let it depend on any of the Perls. A user can then install any of the Perls and Bugzilla.
With your proposal the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
With my proposal, the Bugzilla packager would package Bugzilla in non-modular Fedora unless they also want to package it as a module. If I see it correctly, this is exactly the case today.
And do you know that the packager does not want to package Bugzilla as a module? Because in current Fedora, without default streams in the build root, he has to package and maintain it twice.
-- Petr
On 15. 11. 19 16:11, Petr Pisar wrote:
On 2019-11-15, Miro Hrončok mhroncok@redhat.com wrote:
On 15. 11. 19 14:32, Petr Pisar wrote:
On 2019-10-04, Miro Hrončok mhroncok@redhat.com wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Big con:
That effectively bans modules with multiple dependencies where at least one is a default version.
Example: I have Perl 5.26 as a default version. I have Perl 5.30 as an alternative version. Now I want to package Bugzilla that's written in Perl. How do you package Bugzilla so that it works with Perl 5.26 as well as with Perl 5.30?
I don't understand why the user would care about the Perl version when they want Bugzilla. How is Bugzilla different from e.g. Slic3r (an app that happens to be written in Perl)? Do we want to modularize all such apps to solve the "no parallel instability" feature?
I don't know. Ask the user why he needs a different Perl version than the default one. Maybe he has some other applications that work only with that particular version.
What I was implying is that I don't understand why the user of Bugzilla wants a different Perl version to run it. I was not implying that users don't want various Perl versions generally.
If you believe that users do not care about the version of software they use, then we can drop modularity and all Fedora releases and deliver only Rawhide. Or we can stop integrating new versions of software and deliver Fedora 32 and nothing else forever.
I believe that the purpose of a distribution is to create an integrated environment, where we simply make sure that Bugzilla works and runs on a Perl version we support. And we move forward and integrate with newer Perl versions.
Note that I don't necessarily mean that the use case doesn't exist, I just say I don't really get it. And why is Bugzilla any different than all other Perl applications?
Either the strategy should be:
"We offer alternate Perl versions for containers etc. they conflict with the default Perl version and with the non-modular apps. That is known and accepted."
Or the strategy should be:
"We build all our Perl applications for all our Perl versions, so users who choose their Perl stream can still keep their applications from the distribution."
I fail to see what we are trying to achieve here exactly. It was said several times that parallel instability is a non-goal of Modularity and that means certain apps won't install if certain streams are selected. Or did I get that wrong?
If each of the Perls is a stream of a module, you will put Bugzilla into a module and let it depend on any of the Perls. A user can then install any of the Perls and Bugzilla.
With your proposal the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
With my proposal, the Bugzilla packager would package Bugzilla in non-modular Fedora unless they also want to package it as a module. If I see it correctly, this is exactly the case today.
And do you know that the packager does not want to package Bugzilla as a module? Because in current Fedora, without default streams in the build root, he has to package and maintain it twice.
No, I don't. But I know there are packagers of applications who don't want to do that. And we should make a distro-scale decision whether we want the whole distro to work this way, or whether we only allow this - and those who choose to modularize will do so in addition to the non-modular default packages, not instead.
On 2019-11-15, Miro Hrončok mhroncok@redhat.com wrote:
On 15. 11. 19 16:11, Petr Pisar wrote:
On 2019-11-15, Miro Hrončok mhroncok@redhat.com wrote:
On 15. 11. 19 14:32, Petr Pisar wrote:
On 2019-10-04, Miro Hrončok mhroncok@redhat.com wrote:
Wouldn't it be easier if the "default stream" would just behave like a regular package?
I can think of two solutions to that:
- (drastic for modular maintainers)
We keep maintaining the default versions of things as ursine packages. We only modularize alternate versions.
Big con:
That effectively bans modules with multiple dependencies where at least one is a default version.
Example: I have Perl 5.26 as a default version. I have Perl 5.30 as an alternative version. Now I want to package Bugzilla that's written in Perl. How do you package Bugzilla so that it works with Perl 5.26 as well as with Perl 5.30?
I don't understand why the user would care about the Perl version when they want Bugzilla. How is Bugzilla different from e.g. Slic3r (an app that happens to be written in Perl)? Do we want to modularize all such apps to solve the "no parallel instability" feature?
I don't know. Ask the user why he needs a different Perl version than the default one. Maybe he has some other applications that work only with that particular version.
What I was implying is that I don't understand why the user of Bugzilla wants a different Perl version to run it. I was not implying that users don't want various Perl versions generally.
If the user is interested only in Bugzilla and nothing else, then of course he does not care about the Perl version. Unfortunately, people run more applications on their systems, and then they have multiple requirements that can conflict with each other.
If you believe that users do not care about the version of software they use, then we can drop modularity and all Fedora releases and deliver only Rawhide. Or we can stop integrating new versions of software and deliver Fedora 32 and nothing else forever.
I believe that the purpose of a distribution is to create an integrated environment, where we simply make sure that Bugzilla works and runs on a Perl version we support. And we move forward and integrate with newer Perl versions.
I understand. I also don't care about versions as long as my system is compatible and supported. But the problem emerges when you start to care because you need a new feature, or when you postpone an upgrade because the new version is broken for you.
Note that I don't necessarily mean that the use case doesn't exist, I just say I don't really get it. And why is Bugzilla any different than all other Perl applications?
Bugzilla is not any different. It was only an example.
Either the strategy should be:
"We offer alternate Perl versions for containers etc. they conflict with the default Perl version and with the non-modular apps. That is known and accepted."
Or the strategy should be:
"We build all our Perl applications for all our Perl versions, so users who choose their Perl stream can still keep their applications from the distribution."
I fail to see what we are trying to achieve here exactly.
I would love this second option, but with current modularity it's not feasible. Not because of the juvenile defects we have now (like switching streams on distribution upgrades) but because it would require a module for each package. And the current implementation does not scale so well and cannot describe all the needed relations we have readily available on an RPM level. A true solution would be blending modularity into RPM. At build time as well as at installation time.
So the first option is more realistic. You correctly write "alternate [...] versions [...] conflict [...] with the non-modular apps".
However, my intuition says that nobody will use the alternative versions for exactly this reason. And I think I'm right. Look at me. I maintain Perl modules but I don't use them. I cannot because I would have to uninstall all the other Perl packages from my system. And not only Perl packages. fedpkg transitively depends on non-modular Perl. Anybody who wants a different Perl cannot be a Fedora packager.
Therefore I think it's desirable to modularize applications. Because that way we diminish the conflicts, and that increases the value of the distribution. Look how few modules we have now in Fedora.
Now you can think I'm another modularity fanatic who wants to modularize everything and have all modules in multiple versions. Not at all. I believe most of the modules will exist only in one stream. But the reason why we need to modularize everything is to actually enable multiple streams for the "few" modules that can actually take benefit from the multiple streams. Because once everything is a module, it's trivial to add a new stream to a module in the middle of the dependency tree. It's trivial to test the new stream. Any user can enable it and test how it works on his system.
But as I wrote, this is probably not doable with current modularity (modules above packages). I'm sorry, I write too much.
It was said several times that parallel instability is a non-goal of Modularity and that means certain apps won't install if certain streams are selected. Or did I get that wrong?
The problem here is parallel availability. If you do not build, to return to the example, Bugzilla for all available Perls, the user will have no choice. Once he needs Bugzilla, he cannot select a Perl version. That's what I don't like.
-- Petr
On 15. 11. 19 19:10, Petr Pisar wrote:
Now you can think I'm another modularity fanatic who wants to modularize everything and have all modules in multiple versions. Not at all.
Don't worry, I don't consider anybody a modular fanatic. Yet anyway.
Thanks for your answer, it has been valuable to me and I hope that it was valuable for others as well.
A true solution would be blending modularity into RPM. At build time as well as at installation time.
I agree this would be the best. Basically, final product of a module build should be an rpm. modulemd file should be kind of a meta-spec file which can be built distributively and that contains parts which are then inserted into the standard resulting spec file, which can contain multiple applications or just a single one. "Stream" can be a new rpm property that could be used by the user as an extra specifier during installation. In other words, modularity should be all about build time. In the end, a standard rpm should be the result. That way, there are no collisions between modular and non-modular on the user side, because everything is rpm in the end and only the way the rpm was produced differs.
On Saturday 16 November 2019 at 03:38 +0100, clime wrote:
A true solution would be blending modularity into RPM. At build time as well as at installation time.
I agree this would be the best. Basically, final product of a module build should be an rpm. modulemd file should be kind of a meta-spec file
There should be no need for a modulemd *at* *all*.
Specifying a module stream target for a build should be just an rpm variable, in ~/.config/rpm/macros or passed on the cli to rpmbuild, etc.
Exactly like we do with dist. We managed to handle multiple distribution releases with dist, even if it was not a core rpm concept, without needing to change rpm at all. And it works. And it didn't cause half the havoc of modules.
Sure, do some rpm fixing if necessary so the result feels less like a kludge than %dist. But, don’t rely on an external framework to do things for you instead of doing the necessary work (if any) at the component format level.
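A minimal sketch of that dist-style approach, assuming a hypothetical `module_stream` macro (nothing here is an existing Fedora convention; `rpm --eval` and `rpmbuild --define` are the only real interfaces used):

    # local default, analogous to how %dist is injected per build target:
    echo '%module_stream perl5.30' >> ~/.config/rpm/macros
    rpm --eval '%{?module_stream}'      # -> perl5.30
    # or set per invocation, exactly like build systems do for %dist:
    rpmbuild -ba perl-Sys-Virt.spec --define 'module_stream perl5.30' --define 'dist .fc32.perl5.30'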
Module upgrade and conflict resolution strategies should be just dnf modules over this basic rpm format. So we can change the strategy over time without changing the packages and specs themselves.
And *then*, you can build all kinds of templating management frameworks over those basic format changes. In fact people being people they will build multiple frameworks, in all kind of languages, and argue endlessly which one is best. That won’t matter because the low-level module info and format will exist directly in rpm.
Modules started with the end-user management framework (porcelain) part, and got lost somewhere trying to decide how to map it to low-level concepts. That does not work. Start from the foundations before arguing about the roof decorations.
Regards,
On Saturday 16 November 2019 at 08:37 +0100, Nicolas Mailhot wrote:
Sure, do some rpm fixing if necessary so the result feels less like a kludge than %dist. But, don’t rely on an external framework to do things for you instead of doing the necessary work (if any) at the component format level.
Hell, there’s no even any need to change rpm itself, Group exists and has been liberated from previous use..
Just shove the module name into Group, write all the necessary macros and templates to expand the %{group} info into the necessary parts of the spec, and write the necessary dnf plugins to do smart things with the Group info.
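For illustration only, a sketch of what that could look like at the rpm level (the Group value scheme is made up; the query itself is real):

    # a spec could carry something like:  Group: modules/perl/5.30
    # and any tool can read it back from the built rpm:
    rpm -q --qf '%{NAME} %{GROUP}\n' perl-Sys-Virt
    # a dnf plugin could then group, prefer or upgrade packages by that value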
And then, when that is shown working (as in, upgrade conflict resolution works) and the correct way to do modules is finally understood, the behaviour of the macros and dnf plugins can be streamlined, merged and hardcoded into the rpm and dnf cores.
(which may never happen, as dist showed, but that's not a problem with rpm the application, that's a problem with Red Hat and rpm the project not investing in cleaning up technical debt)
Regards,
On Sat, 16 Nov 2019 at 08:38, Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
On Saturday 16 November 2019 at 03:38 +0100, clime wrote:
A true solution would be blending modularity into RPM. At build time as well as at installation time.
I agree this would be the best. Basically, final product of a module build should be an rpm. modulemd file should be kind of a meta-spec file
There should be no need for a modulemd *at* *all*.
modulemd + related infrastructure gives you distributed building, which is cool if you want to build a "solution" i.e. multiple software packages all combined to serve a particular use-case. Also, passing compile-time options to tweak a given build is allowed by modulemd. You don't get something like that when building a spec file and I think it would be hard to achieve.
On Saturday 16 November 2019 at 18:42 +0100, clime wrote:
On Sat, 16 Nov 2019 at 08:38, Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
On Saturday 16 November 2019 at 03:38 +0100, clime wrote:
A true solution would be blending modularity into RPM. At build time as well as at installation time.
I agree this would be the best. Basically, final product of a module build should be an rpm. modulemd file should be kind of a meta-spec file
There should be no need for a modulemd *at* *all*.
modulemd + related infrastructure gives you distributed building, which is cool if you want to build a "solution" i.e. multiple software packages all combined to serve a particular use-case.
Yes it is wickedly cool as a distributed building solution.
It is not cool *at* all* as a replacement for spec declarations. Just put the correct variables in the spec files themselves, and have the distributed building solutions set them during builds (as is done for dist)
Regards
On Sat, 16 Nov 2019 at 18:54, Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
On Saturday 16 November 2019 at 18:42 +0100, clime wrote:
On Sat, 16 Nov 2019 at 08:38, Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
On Saturday 16 November 2019 at 03:38 +0100, clime wrote:
A true solution would be blending modularity into RPM. At build time as well as at installation time.
I agree this would be the best. Basically, final product of a module build should be an rpm. modulemd file should be kind of a meta-spec file
There should be no need for a modulemd *at* *all*.
modulemd + related infrastructure gives you distributed building, which is cool if you want to build a "solution" i.e. multiple software packages all combined to serve a particular use-case.
Yes it is wickedly cool as a distributed building solution.
It is not cool *at* all* as a replacement for spec declarations. Just put the correct variables in the spec files themselves, and have the distributed building solutions set them during builds (as is done for dist)
Yes, but the point is, the product of the distributed build should be a single rpm so that you don't need to handle two kinds of objects during installation time and inter-dependencies between them. So there should be a single spec file generated partially automatically (i.e. by collecting what sources were built and putting them into the corresponding Source: statements) and partially manually (i.e. scriptlets need to be pulled from somewhere).
Regards
-- Nicolas Mailhot
On Saturday 16 November 2019 at 19:05 +0100, clime wrote:
On Sat, 16 Nov 2019 at 18:54, Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
On Saturday 16 November 2019 at 18:42 +0100, clime wrote:
On Sat, 16 Nov 2019 at 08:38, Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
On Saturday 16 November 2019 at 03:38 +0100, clime wrote:
A true solution would be blending modularity into RPM. At build time as well as at installation time.
I agree this would be the best. Basically, final product of a module build should be an rpm. modulemd file should be kind of a meta-spec file
There should be no need for a modulemd *at* *all*.
modulemd + related infrastructure gives you distributed building, which is cool if you want to build a "solution" i.e. multiple software packages all combined to serve a particular use-case.
Yes it is wickedly cool as a distributed building solution.
It is not cool *at* all* as a replacement for spec declarations. Just put the correct variables in the spec files themselves, and have the distributed building solutions set them during builds (as is done for dist)
Yes, but the point is, the product of the distributed build should be a single rpm so that you don't need to handle two kinds of objects during installation time and inter-dependencies between them.
Really, this is the kind of technical choice that made modules fail in practical terms. Just generate rpms with no magic except a module marker. If you build n versions of an rpm, generate n rpms. Autoreqs will register what you built against (and if they cannot disambiguate different versions of the same object, they should be fixed).
Choosing to source a particular rpm in a particular module should not be more than a user hint. The packages themselves, should just be packages as usual.
Mass rebuilds are cool because the user does not need to know that the resulting packages were built by a single all-encompassing build command (usually, several releng iterations as problems are found and fixed).
Modules, as they exist today, leak build-command grouping into install grouping. That's why the install part of modules is breaking right and left.
Regards,
Either the strategy should be:
"We offer alternate Perl versions for containers etc. they conflict with the default Perl version and with the non-modular apps. That is known and accepted."
Or the strategy should be:
"We build all our Perl applications for all our Perl versions, so users who choose their Perl stream can still keep their applications from the distribution."
Exactly. While I think that the *first strategy is easily achievable*, even with what we already have, the *second strategy is very complicated* to achieve, because we cannot predict what applications users want to install and in which versions. They would all have to work in Fedora, otherwise the distro does not make sense any more. Let me explain.
If I know that installing an alternative version of Perl could break Perl bindings to other applications, I can create a container to use that alternative version of Perl and be happy, having the standard Perl on the system and another version in the container. Or I just install a system with that alternative version, because I only need a single-purpose operating system, such as a LAMP or other server. That's fine.
For desktop users, however, this is not good because you place limitations on them. While I believe it is fairly ok to build a server solution around containers (or even virtual machines), this is overly complicated for the Desktop. I do not understand why we would like to make the Desktop complicated, when the majority of Red Hat employees use Fedora as their desktop solution. Also, there are spins that basically are Workstations (or other desktops) that have certain packages pre-installed, and we expect them to work flawlessly.
The question is: *With modularity ... can we make sure that everything works with everything, as it does nowadays?*
I believe that having non-modular defaults will make sure the distro works in its entirety, while having alternative versions in modules will help developers and sysadmins install what they want and need, if it is not the default. For me, this is a win-win situation. I understand that it is more work for the packager, but it is more convenient for the users, and we should think about the users in the first place.
I have been following this discussion since it started and all I am getting is "We are having issues, but we are working on them.", but nobody has ever explained why it is bad to use Miro's approach.
On Saturday 16 November 2019 at 09:35 +0100, Lukas Ruzicka wrote:
Either the strategy should be:
"We offer alternate Perl versions for containers etc. they conflict with the default Perl version and with the non-modular apps. That is known and accepted."
That won't work.
You *can* ship modular components as alternatives
You *can* *not* ship modular components that directly compete with non-modular ones as defaults, because the maintainer of the non-modular version will have made the effort to integrate with the rest of the distro, and not only will the maintainer of the modular content not have made this effort, he will offload the integration problems *he* created onto the maintainer of the default non-modular version.
The root reason for the current mess is that modular content was allowed to override things without owning the problems that this created.
Modular should *not* override anything by default in dep resolution. Dep resolution *may* use modular preferences as hints, but those hints should be weak at best, and be ignored by the dep resolver as soon as they conflict with dep graph resolution.
And yes, that means maintainers of modular things will get the rug pulled out from under them whenever the default stream changes in incompatible ways. Tough luck. The only reliable way we have to coordinate Fedora activity is this default stream.
Regards,
Petr Pisar wrote:
With your proposal the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
The Bugzilla rebuild for the non-default Perl actually belongs IN the Perl module. Otherwise, enabling the non-default stream for the Perl module will break the user's Bugzilla and force them to manually enable the corresponding non-default stream for the Bugzilla module. Plus, since there are many Perl applications, having a module for each of them (each tracking Perl's module streams) just does not scale.
But what this example really shows is that it is a horrible idea to have a Perl module to begin with. The non-default Perl needs to be packaged as a parallel-installable compatibility package (or as an SCL, but that opens its own can of worms) instead. You cannot just replace a language interpreter (especially not one as widely used as Perl) with a different version. (As you pointed out yourself, that breaks even fedpkg. Even though fedpkg itself is not even written in Perl!) The parallel-installable approach is also the only reasonable way to support applications that require a non-default version of Perl, without conflicting with the rest of the distribution.
Kevin Kofler
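For what it's worth, a sketch of how the parallel-installable route Kevin describes would look to a user (no perl5.30 compatibility package actually exists in Fedora; the names are purely illustrative):

    dnf install perl          # the system interpreter, /usr/bin/perl
    dnf install perl5.30      # hypothetical compat package providing /usr/bin/perl5.30
    # both are installed side by side; only software that needs 5.30 points at it explicitly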
On 2019-11-16, Kevin Kofler kevin.kofler@chello.at wrote:
Petr Pisar wrote:
With your proposal the Bugzilla packager would have to package Bugzilla twice: as a normal package for the default Perl 5.26 and as a module for Perl 5.30. Then a user would have a hard time selecting the right combination of Perl and Bugzilla. It would double the work for packagers and make the system more difficult for users.
The Bugzilla rebuild for the non-default Perl actually belongs IN the Perl module. Otherwise, enabling the non-default stream for the Perl module will break the user's Bugzilla and force them to manually enable the corresponding non-default stream for the Bugzilla module. Plus, since there are many Perl applications, having a module for each of them (each tracking Perl's module streams) just does not scale.
Adding Bugzilla into the Perl module does not scale either. You have many Perl applications and you would have to put them all into the Perl module. As a result you would have a Perl module containing all Perl applications. We now have more than 3000 such packages. You can imagine that you would never be able to build such a giant module, because there is always a package that fails to build. Also, whenever you want to provide a new incompatible application, you would have to fork the giant Perl module. Do you want to have perl:5.30-with-Bugzilla-5.1.2-with-slic3r-1.3.0-...?
I believe that a better approach is to meld modularity into RPM, so that you can have plenty of small independent packages aware of parallel availability.
But what this example really shows is that it is a horrible idea to have a Perl module to begin with. The non-default Perl needs to be packaged as a parallel-installable compatibility package (or as an SCL, but that opens its own can of worms) instead. You cannot just replace a language interpreter (especially not one as widely used as Perl) with a different version. (As you pointed out yourself, that breaks even fedpkg. Even though fedpkg itself is not even written in Perl!) The parallel-installable approach is also the only reasonable way to support applications that require a non-default version of Perl, without conflicting with the rest of the distribution.
That's a nice theory that will never come true because it would require making all Perl code parallel-installable. And Perl code is not only libraries as in the Python language. That's also a myriad of Perl scripts that you want to have in PATH. If you start fiddling with things in PATH, you have the problem of SCL. And as you wrote, SCL is terrible. And that was the reason why we have modularity: We do not want to relocate code to non-standard paths.
I think it's inevitable that there will be conflicts and it's better to have them manageable with a package manager (i.e. having default streams) rather than creating a few modules that silently overlay non-modular packages when a user enables them.
The parallel installability is nice, but it's way more difficult than parallel availability.
-- Petr
Petr Pisar wrote:
That's a nice theory that will never come true because it would require making all Perl code parallel-installable. And Perl code is not only libraries as in the Python language. That's also a myriad of Perl scripts that you want to have in PATH.
But the scripts do not need to care about the version of Perl you are running, do they? It matters for compiled code, but why for Perl scripts? Those can just run with the default version of Perl if they support it, or with the shebang line changed to something like #!/usr/bin/perl5.30 if that's what they require.
If you start fiddling with things in PATH, you have the problem of SCL. And as you wrote, SCL is terrible. And that was the reason why we have modularity: We do not want to relocate code to non-standard paths.
I agree that the SCL approach is not optimal, but letting the versions just conflict is much worse!
The best way to deal with conflicts in PATH is to suffix the binaries, not to move them. But that is only needed when it makes a difference for the end user which version they run. If the executable script "foo" does the exact same thing when run under Perl 5.28 or 5.30, then you need only one /usr/bin/foo set up to run against the distribution default Perl, the other one is redundant (which is the nice thing about parallel installation: you do not have to support running all the executables under a non-default Perl, only those that actually need it).
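A small illustration of that split (all paths and outputs hypothetical):

    head -1 /usr/bin/foo        # -> #!/usr/bin/perl       (runs on the default interpreter)
    head -1 /usr/bin/foo5.30    # -> #!/usr/bin/perl5.30   (suffixed variant, shipped only where
                                #    behaviour actually differs between Perl versions)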
I think it's inevitable that there will be conflicts and it's better to have them manageable with a package manager (i.e. having default streams) rather than creating a few modules that silently overlay non-modular packages when a user enables them.
The parallel installability is nice, but it's way more difficult than parallel availability.
I think that any model that has conflicts is not workable for the Fedora user base. Desktops and small servers are not normally containerized, so being able to install different applications without conflict is a non-negotiable requirement.
I see only 2 ways to provide a newer Perl for Fedora:
1. as a parallel-installable compatibility package, or
2. as a grouped official Bodhi update including a rebuild of all packages depending on the old Perl ABI
(and only the first one is suitable if you wish to provide an older Perl, because you should not be downgrading the system Perl). Failing those, the only option is really:
3. just don't do it.
Providing a perl:5.30 module replacing the system Perl (and breaking everything in the distro depending on it) is essentially useless and does not provide much value over option 3.
Kevin Kofler
On Tuesday, November 19, 2019 9:20:29 AM MST Kevin Kofler wrote:
But the scripts do not need to care about the version of Perl you are running, do they? It matters for compiled code, but why for Perl scripts? Those can just run with the default version of Perl if they support it, or with the shebang line changed to something like #!/usr/bin/perl5.30 if that's what they require.
There are certain edge cases where the version really does matter for scripts, but for most scripts you would be correct. I also agree with your solution for scripts that actually do require a specific version, however.
The best way to deal with conflicts in PATH is to suffix the binaries, not to move them. But that is only needed when it makes a difference for the end user which version they run. If the executable script "foo" does the exact same thing when run under Perl 5.28 or 5.30, then you need only one /usr/bin/foo set up to run against the distribution default Perl, the other one is redundant (which is the nice thing about parallel installation: you do not have to support running all the executables under a non-default Perl, only those that actually need it).
While that would work well in the Perl context, there are cases where it wouldn't work, for example there are several programs which hard-code paths which we would need to come up with an alternative path for and patch.
I think that any model that has conflicts is not workable for the Fedora user base. Desktops and small servers are not normally containerized, so being able to install different applications without conflict is a non-negotiable requirement.
Agreed, in fact most workstations and servers with RHEL are not containerized either. Before RHEL 8, which was essentially just released, nothing even recommended containerizing RHEL.
On Tuesday, November 19, 2019 3:52:27 AM MST Petr Pisar wrote:
If you start fiddling with things in PATH, you have the problem of SCL. And as you wrote, SCL is terrible. And that was the reason why we have modularity: We do not want to relocate code to non-standard paths.
I may be a bit confused here, but I thought Modularity was not a replacement for SCLs? Clearly, it can't be; it doesn't provide even similar functionality. With SCLs, as annoying as they are, you do get parallel installations, which Modularity cannot provide.
If parallel availability, without parallel installation, is all you want, I can show you how to do that with RPM right now, no Modularity required.
On 2019-11-20, John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, November 19, 2019 3:52:27 AM MST Petr Pisar wrote:
If you start fiddling with things in PATH, you have the problem of SCL. And as you wrote, SCL is terrible. And that was the reason why we have modularity: We do not want to relocate code to non-standard paths.
I may be a bit confused here, but I thought Modularity was not a replacement for SCLs? Clearly, it can't be; it doesn't provide even similar functionality. With SCLs, as annoying as they are, you do get parallel installations, which Modularity cannot provide.
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/5PFUA7VAS7QOBDFGBPDQG2XQP3DZXYOS/. Start reading from "SCLs" keyword.
If parallel availability, without parallel installation, is all you want, I can show you how to do that with RPM right now, no Modularity required.
I would be happy with parallel availability without Modularity. And it's not a big deal on the installation part (except for ugly package names). The issue is the part where we build packages. One of the reasons why Modularity is the way it is is that RPM and Koji stated no interest in accepting any changes. Therefore Modularity is a layer above them.
-- Petr
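A hedged sketch of what RPM-only parallel availability tends to look like, ugly names included (all package names are hypothetical):

    dnf install bugzilla             # built against the default perl
    dnf install bugzilla-perl5.30    # hypothetical rebuild against perl 5.30; it would carry
                                     # Conflicts: bugzilla, so only one of the two can be installed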
I think we've already discussed and documented [1] this, although we didn't discuss module dependencies back then.
A) If a user selects a default (or doesn't make any selection), the default is followed.
B) If a user selects a specific stream, that stream is followed.
So, basically, respecting user choices.
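In dnf terms, a minimal illustration of the two cases (module and package names reused from the use cases above):

    # A) no explicit stream choice: the default stream is enabled implicitly
    #    and may follow the distribution default across upgrades
    dnf install Foo
    # B) an explicit stream choice is recorded and kept across upgrades
    dnf module enable foo:v1.0
    dnf install Foo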
There is also a dnf bug [2].
[1] https://pagure.io/modularity/working-documents/blob/master/f/lifecycles-upgr...
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1664427
On Fri, Oct 4, 2019 at 4:58 PM Stephen Gallagher sgallagh@redhat.com wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
- Once a user has selected a stream, updates should follow that
stream and not introduce incompatiblities. Selected streams should not be changed without direct action from the user. 2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
The Modularity WG has generally agreed that we want and need to support behavior of the following use-cases:
Use Case 1:
On Fedora 30, user Alice runs
yum install Foo
The package "Foo" is provided by a module "foo" with a default stream "v1.0". Because it's available in a default stream, the package is installed and the module stream "foo:v1.0" is implicitly enabled for the system.
Fedora 31 is released. On Fedora 31, the module "foo" has a new default stream "v1.1". When upgrading from Fedora 30 to Fedora 31, Alice expects the package Foo she installed to be upgraded to version 1.1, because that's what would have happened if it was provided as a package from the non-modular repositories.
Use Case 2:
On Fedora 30, user Bob runs
yum enable foo:v1.0
In this case, the "v1.0" stream of the "foo" module has a dependency on the "v2.4" stream of the "bar" module. So when enabling "foo:v1.0", the system also implicitly enables "bar:v2.4".
Fedora 31 is released. On Fedora 31, the module stream "foo:v1.0" now depends on "bar:v2.5" instead of "bar:v2.4". The user, caring only about "foo:v1.0" would expect the upgrade to complete, adjusting the dependencies as needed.
At Flock and other discussions, we've generally come up with a solution, but it's not yet recorded anywhere. I'm sending it out for wider input, but this is more or less the solution we intend to run with, barring someone finding a severe flaw.
Proposed Solution:
What happens today is that once the stream is set, it is fixed and unchangeable except by user decision. Through discussions with UX folks, we've more or less come to the decision that the correct behavior is as follows:
- The user's "intention" should be recorded at the time of module
enablement. Currently, module streams can exist in four states: "available, enabled, disabled, default". We propose that there should be two additional states (names TBD) representing implicit enablement. The state "enabled" would be reserved for any stream that at some point was enabled by name. For example, a user who runs `yum install freeipa:DL1` is making a conscious choice to install the DL1 stream of freeipa. A user who runs `yum install freeipa-client` is instead saying "give me whatever freeipa-client is the default".
- The state `dep_enabled` would be set whenever a stream becomes
enabled because some other module stream depended on it. This state must be entered only if the previous state was `default` or `available`. (We don't want `enabled` or `disabled` streams being able to transition to this state.)
- The state `default_enabled` would be set whenever a stream becomes
enabled because a transaction pulled in a package from a default stream, causing it to be enabled. This state must only be entered if the previous state was `default` or `dep_enabled`. We don't want `enabled` or `disabled` to be able to transition to `default_enabled`. If a user requests installation of a package provided by a stream currently in the `dep_enabled` state, that stream should transition to the `default_enabled` state (meaning that now the user would expect it to be treated the same as any other default-enabled stream).
- When running `dnf update`, if a module stream's dependency on
another module changes to another stream, the transaction should cause that new stream to be enabled (replacing the current stream) if it is in the `dep_enabled` state. When running `dnf update` or `dnf system-upgrade`, if the default stream for a module installed on the system changes and the module's current state is `default_enabled`, then the transaction should cause the new default stream to be enabled.
- If stream switching during an update or upgrade would result in
other module dependency issues, that MUST be reported and returned to the user.
This requires some constraints to be placed on default and dependency changes:
- Any stream upgrade such as this must guarantee that any artifacts of
the stream that is exposed as "API" MUST support RPM-level package upgrades from any previous stream in this stable release. (Example: "freeipa:DL1" depends on the "pki-core:3.8" stream at Fedora 30 launch. Later updates move this to depending on "pki-core:3.9" and even later "pki-core:3.10". In this case the packages from "pki-core:3.10" must have a safe upgrade path from both "pki-core:3.8" and "pki-core:3.9", since we cannot guarantee or force our users to update regularly and they might miss some of the intermediate ones.)
On Fri, Oct 04, 2019 at 10:57:39AM -0400, Stephen Gallagher wrote: [snip]
- The state `dep_enabled` would be set whenever a stream becomes
enabled because some other module stream depended on it. This state must be entered only if the previous state was `default` or `available`. (We don't want `enabled` or `disabled` streams being able to transition to this state.)
- The state `default_enabled` would be set whenever a stream becomes
enabled because a transaction pulled in a package from a default stream, causing it to be enabled. This state must only be entered if the previous state was `default` or `dep_enabled`. We don't want `enabled` or `disabled` to be able to transition to `default_enabled`. If a user requests installation of a package provided by a stream currently in the `dep_enabled` state, that stream should transition to the `default_enabled` state (meaning that now the user would expect it to be treated the same as any other default-enabled stream).
- When running `dnf update`, if a module stream's dependency on
another module changes to another stream, the transaction should cause that new stream to be enabled (replacing the current stream) if it is in the `dep_enabled` state. When running `dnf update` or `dnf system-upgrade`, if the default stream for a module installed on the system changes and the module's current state is `default_enabled`, then the transaction should cause the new default stream to be enabled.
Hmmm, maybe I'm not thinking straight today, but what happens when you cross the streams? Correct me if I'm wrong in the following scenario:
Release N:
- foo: available streams: 1.0, 2.0, default: 2.0
- bar: depends on foo 1.0
- user installs foo and bar, gets bar and both foo 1.0 (default_enabled) and foo 2.0 (dep_enabled)
Release N + 1: (cross the streams)
- bar: depends on foo 2.0 now
- foo 1.0 gets uninstalled(?), foo 2.0... what happens to foo 2.0? does it move to default_enabled or does it remain in dep_enabled? or does it move to "enabled"?
Release N + 2: (the streams diverge again)
- foo: 1.0 is removed, 3.0 appears and is made default
- what happens on the user's machine? foo 2.0 needs to remain installed, since bar explicitly depends on it, but will there also be a foo 3.0 module installed (since the user requested the default way back when, still in release N)?
Of course, it is completely possible that this case is indeed handled in the proposal and I am the one at fault for not parsing it properly. Anyway, thanks to you all for your work on this!
G'luck, Peter
On Mon, Oct 7, 2019 at 9:58 AM Peter Pentchev roam@ringlet.net wrote:
Hmmm, maybe I'm not thinking straight today, but what happens when you cross the streams? Correct me if I'm wrong in the following scenario:
You're wrong :)
Release N:
- foo: available streams: 1.0, 2.0, default: 2.0
- bar: depends on foo 1.0
- user installs foo and bar, gets bar and both foo 1.0 (default_enabled) and foo 2.0 (dep_enabled)
This state is impossible. DNF will not allow you to install both foo:1.0 and foo:2.0. It would have generated a conflict. So you can't get into any of the further situations.
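For context, the invariant behind that conflict is that at most one stream of a given module can be enabled at a time; in sketch form (an illustration, not DNF code):

    # Illustration of the "one enabled stream per module" invariant; not DNF code.
    enabled = {"foo": "2.0"}   # module name -> currently enabled stream

    def enable(module, stream):
        current = enabled.get(module)
        if current and current != stream:
            raise RuntimeError(
                f"cannot enable {module}:{stream}: {module}:{current} is already enabled")
        enabled[module] = stream

    enable("foo", "2.0")       # no-op, already enabled
    try:
        enable("foo", "1.0")   # the situation in the scenario above
    except RuntimeError as err:
        print(err)             # DNF reports this as a module conflict instead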
I would like to open the discussion more widely, because we are talking about the future of software distribution, and discussing only one particular issue is not the way to deliver a solid and stable architecture.
What issues do I have in mind?
1. Fedora system upgrade (libgit2, axa, bat)
2. Adding a new stream to the distribution - will result in an error
3. Removal of a stream from the distribution - a module dependent on the removed stream will create a modular dependency error
4. Changing a stream dependency - dependency issue
5. Removal of a module from the distribution (replaced by non-modular packages) - fail-safe data will persist forever
6. Upgrade path between contexts - this will help with modular dependency resolution
7. Upgrade path between module streams - so far this is not described or part of the design
8. Module switching - at present this is completely disabled for stability reasons
9. Changing defaults / redesign of defaults - defaults have the same behavior as enabled modules; the problems with defaults are just less critical
Some of these requirements are in conflict: for example, the user must not be able to switch a stream by accident, yet in another case the stream must be changed automatically. Resolving each of these points has consequences for the behavior of other parts of Modularity and the RPM environment, therefore any change must be planned well.
Some issues could be resolved by additional metadata, such as obsoletes or information about substreams. Others could be resolved by changing the implementation in dnf/libdnf. The last important part is packaging restrictions and guidelines (people must know the limitations of the technology).
Let's also ask the question: what can we change for Fedora 31, 32, or 33?
Jaroslav
On Mon, Oct 7, 2019 at 11:33 AM Jaroslav Mracek jmracek@redhat.com wrote:
I would like to open the discussion more widely, because we are talking about the future of software distribution, and discussing only one particular issue is not the way to deliver a solid and stable architecture.
What issues do I have in mind?
Your list of issues lacks sufficient context.
- Fedora system upgrade (libgit2, axa, bat)
In this case, are we talking about F30->F31 where the default stream changes or also considering when the dependencies need to change?
- Adding new stream into distribution
- will result in an error
Uhh, why? Maybe there are some missing words here indicating under what conditions adding a stream to the distro would cause problems?
- Removal of stream from distribution
- a module dependent on the removed stream will create a modular dependency error
Assuming we only allow this to happen at the release boundary, I think this is desirable behavior. If someone has locked themselves to a stream, we should disallow upgrades of the platform if the new platform cannot support that stream.
- Changing stream dependency
- dependency issue
I try to address that in my design proposal.
- Removal of module from distribution (replaced by non-modular packages)
Fail safe data will persist forever
Please provide more information here, because I don't see where you're coming from.
- Upgrade path between contexts
This will help with modular dependency resolution
Can you state the problem more thoroughly? I *think* what you're asking is this:
"Module A" can function with either "http:2.4" or "http:2.6" as its dependency. "Module B" can only function with "http:2.6" as a dependency. Installing Module A results in DNF selecting "http:2.4" to resolve the transaction. Later, the user wants to install "Module B": How should DNF proceed?
There are two possible routes we could take:
1) Raise a dependency error when trying to install "Module B". Optionally advise the user that they might be able to manually switch streams from "http:2.4" to "http:2.6" before attempting to install "Module B".
2) Automatically perform an "upgrade" step where all software installed from "http:2.4" is replaced by content from "http:2.6" as part of the transaction. This is probably the more user-friendly approach, but it may have some subtle complexities in the dependency resolution (dealing with nested dependencies) that I can't predict offhand.
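As a toy illustration of those two routes (hypothetical module and stream names, nothing like DNF's real resolver):

    # Toy sketch of the two routes above.  "acceptable" records which http
    # streams each installed module can work with; names are hypothetical.
    acceptable = {
        "module-a": {"2.4", "2.6"},   # works with either http stream
        "module-b": {"2.6"},          # only works with http:2.6
    }
    currently_enabled = "2.4"         # picked when only module-a was installed

    candidates = set.intersection(*acceptable.values())
    if not candidates:
        # Route 1: nothing satisfies everyone -- report a dependency error.
        raise SystemExit("no http stream satisfies all installed modules")
    if currently_enabled in candidates:
        print(f"keep http:{currently_enabled}")
    else:
        # Route 2: switch the implicitly enabled stream as part of the transaction.
        new_stream = max(candidates)  # trivial tie-break, good enough for a sketch
        print(f"switch http:{currently_enabled} -> http:{new_stream}")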
- Upgrade path between module streams
So far this is not described or part of the design
This actually is part of the design. Or, rather, it is explicitly not specified. The idea was that we would deal with upgrade path between module streams on a case-by-case basis, since not all streams actually can transition between them. For the proposal of following the default streams, we'd need to ensure that rules are in place that such streams need to behave similarly to how they would have in non-modular Fedora (meaning a clean upgrade path, possibly with automatic migration tools).
- Module switching
At the present time this is completely disabled for stability reasons
Acknowledged. See above.
- Changing defaults / redesign of defaults
- defaults have the same behavior as enabled modules; the problems with defaults are just less critical
Some of these requirements are in conflict: for example, the user must not be able to switch a stream by accident, yet in another case the stream must be changed automatically. Resolving each of these points has consequences for the behavior of other parts of Modularity and the RPM environment, therefore any change must be planned well.
Yes, which is why I consulted with user experience designers before proposing the approach in the original post. It's difficult to strike a balance between "keeping the user safe" and "keeping the user happy". We decided that the only reasonable line we could draw was "did the user ask for this directly or did they just take what was handed to them".
Some issues could be resolved by additional metadata, such as obsoletes or information about substreams. Others could be resolved by changing the implementation in dnf/libdnf. The last important part is packaging restrictions and guidelines (people must know the limitations of the technology).
Obsoletes are something we may want to consider, but I think they should follow whatever behavior we settle on for tracking changes in the default stream. Meaning: they should only be allowed for cases where an upgrade can be performed safely/automatically or a special case like fedora-obsolete-packages to forcibly remove things from the user's system.
Let's also ask the question: what can we change for Fedora 31, 32, or 33?
I assume this actually means "Let's figure out what schedule to deliver these things on".
On 2019-10-07, Jaroslav Mracek jmracek@redhat.com wrote:
I would like to open the discussion more widely, because we are talking about the future of software distribution, and discussing only one particular issue is not the way to deliver a solid and stable architecture.
What issues do I have in mind?
- Fedora system upgrade (libgit2, axa, bat)
I guess this is about changing the platform stream without rebuilding modules. I.e. once relengs create the Fedora 32 compose, no modules (except for the virtual "platform" module) can be installed, because they require the previous platform:31.
How does it work in classical Fedora? Relengs take the Fedora 31 content and copy it into the Fedora 32 repository. And suddenly you have packages for Fedora 32.
How could it work in modular Fedora? Relengs take the Fedora 31 content, including modules, and copy it into the Fedora 32 repository. Then they take the modulemd data and rewrite all modular dependencies from platform:31 to platform:32. Voila, you have modules for Fedora 32. These modulemds should be imported into MBS so they are available at build time. We don't have to rebuild the modules, because identical packages can belong to multiple module builds.
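In rough code, the rewrite described above might look something like this (a sketch only, using PyYAML and field names assumed from the modulemd v2 format; real releng tooling would use libmodulemd and handle far more cases):

    # Sketch of the platform retargeting described above -- NOT actual releng
    # tooling.  Assumes the modulemd v2 layout (data.dependencies entries with
    # buildrequires/requires maps) and Fedora's platform stream names.
    import sys
    import yaml  # PyYAML

    OLD, NEW = "f31", "f32"

    def retarget(doc):
        for entry in doc.get("data", {}).get("dependencies", []):
            for key in ("buildrequires", "requires"):
                platforms = entry.get(key, {}).get("platform")
                if platforms:
                    entry[key]["platform"] = [NEW if s == OLD else s for s in platforms]
        return doc

    with open(sys.argv[1]) as f:                      # a modules.yaml dump
        docs = [retarget(d) for d in yaml.safe_load_all(f) if d]

    yaml.safe_dump_all(docs, sys.stdout, sort_keys=False)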
-- Petr
I have to ask: given that containers are so popular and can deal with any dependency without conflicting with system-installed binaries, should we really continue with this very complicated modular design?
Shouldn't we go back to having default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from the system defaults for any reason?
Simo.
On Mon, Oct 7, 2019 at 2:56 PM Simo Sorce simo@redhat.com wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system installed binaries, should we really continue with this very complicated modular design ?
Shouldn't we go back to have default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason ?
And where is the software for those containers coming from? Some container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
----- Original Message -----
From: "Stephen Gallagher" sgallagh@redhat.com To: "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Monday, October 7, 2019 2:59:37 PM Subject: Re: Modularity and the system-upgrade path
On Mon, Oct 7, 2019 at 2:56 PM Simo Sorce simo@redhat.com wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system installed binaries, should we really continue with this very complicated modular design ?
Shouldn't we go back to have default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason ?
And where is the software for those containers coming from? Some container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
I never really understood this argument. Could you help me understand it?
In what way do ursine RPMs not already do this? And more importantly, what benefits does Modularity bring, based on an earlier thread with Modularity use cases?
As far as I can see:
- Modularity doesn't bring parallel-installability. You'd have to support it at the RPM level, which means ursine RPMs would support it too. [0]
- Any size reduction in modular RPMs can be made to ursine RPMs.
- Modules rely on RPMs as their source of trust and don't provide any new trust models.
- To have container-only content (container-preferred content?) you'd need the maintainer of the package to build separate "desktop/server" and "container" streams. And I'm not sure what benefit anyone would see there that better structuring of sub-packages wouldn't give, especially since most modular content (build systems, eclipse, ...) isn't exactly suited for production server containers. Application and development containers, sure. [1]
I think, from the user and maintainer point of view, you could handle most of the use cases of modules by:
- Spending a little time ensuring packages are divided up in a way that better behaves with modules (to reduce the installation footprint... say, {pkg}-man only gets installed when man is present, saving the space on containers). I'd imagine this is a goal of the minimization team that I've seen mentions of. But perhaps not. :)
- Focusing on guidelines for parallel installability for library and applications versions.
But perhaps I just never understood Modularity after fighting with it for so long in Fedora and ending up duplicating what it has undone in the ursine world... Is there something obvious I'm missing about why Modularity is more suited for containers than ursine RPMs?
- Alex
[0]: AFAICS, Modularity only gives you parallel availability, that is, multiple versions are available to be selected from, if the maintainer wishes. But you can't go install the same package at two different versions.
[1]: My implicit assumption here is that there's very little we'd do for container support besides divide down RPMs to make things better for a layer's disk footprint... Most upstream projects will either support running in containers, or they won't. I'd think having lots of container-specific content would be a very minimal edge case that I'm not sure is worth handling at this point in time.
To be clear, the above deals with *packages* installed inside other container *images*, not an upstream deciding to, say, ship an RPM and a full *container image* of their own.
On Mon, Oct 07, 2019 at 03:20:21PM -0400, Alexander Scheel wrote:
And where is the software for those containers coming from? Some container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
I never really understood this argument. Could you help me understand it? In what way do ursine RPMs not already do this? And more importantly, what benefits does Modularity bring, based on an earlier thread with Modularity use cases?
I'm going to avoid the word "ursine" because I think it's more confusing than helpful. It's all the same RPMs, after all.
Without modularity, RPM doesn't offer a good way to choose between different versions of the same thing. One can squash version numbers into the name, which covers some use cases, but also becomes unwieldy and loses the _idea_ that these things are different branches of the same basic software.
- Modularity doesn't bring parallel-installability. You'd have to support it at the RPM level, which means ursine RPMs would support it too. [0]
Well, the idea is: if you need parallel install, don't mess with it at the RPM level. Separate at the container level.
- Any size reduction in modular RPMs can be made to ursine RPMs.
Maybe. But what if it reduces functionality? Modularity allows there to be a reduced version or a full version which can be swapped in.
On 07. 10. 19 22:31, Matthew Miller wrote:
- Any size reduction in modular RPMs can be made to ursine RPMs.
Maybe. But what if it reduces functionality? Modularity allows there to be a reduced version or a full version which can be swapped in.
In reality, what we see is that the reduced version is maintained only in the module and the fat version is orphaned. I fail to see the benefit of that approach.
----- Original Message -----
From: "Matthew Miller" mattdm@fedoraproject.org To: "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Monday, October 7, 2019 4:31:18 PM Subject: Re: Modularity and the system-upgrade path
On Mon, Oct 07, 2019 at 03:20:21PM -0400, Alexander Scheel wrote:
And where is the software for those containers coming from? Some container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
I never really understood this argument. Could you help me understand it? In what way do ursine RPMs not already do this? And more importantly, what benefits does Modularity bring, based on an earlier thread with Modularity use cases?
I'm going to avoid the word "ursine" because I think it's more confusing than helpful. It's all the same RPMs, after all.
Ok... But we need some word to describe "RPMs without weird context that behave like they're supposed to and somebody maintains them" instead of "RPMs in a module somewhere". Otherwise the discussion gets confusing fast. So if you don't mind, I'll stick with "ursine RPMs" vs "modular RPMs" for now. :)
Without modularity, RPM doesn't offer a good way to choose between different versions of the same thing. One can squash version numbers into the name, which covers some use cases, but also becomes unwieldy and loses the _idea_ that these things are different branches of the same basic software.
This is not true at all.
For starters, if you have parallel packages available [0], `rpm -i` will let you install them all just fine and track each individually [1]. When you go to uninstall it (`rpm -e rpm-test`), it'll complain that you didn't specify which one [2], so you'll of course have to specify a version [3].
If you then go stick it in a repo, DNF will show you the highest version, which is expected since DNF generally concerns itself with the updated-ness of your system [4]. But you can always pass --showduplicates to show the older versions. And nothing prevents you from selecting a different version of the package if they exist in the repo [5]. The one place this fails is that DNF will perform an upgrade, removing the older version, even if you choose install [6].
So really, the only thing missing here is an option to make `dnf install` mean "install" and not "install or upgrade, whatever DNF feels is best". And then you can do all your user intent on the side: I installed "rpm-test-1.0", so I should always keep it installed... but if I also install "rpm-test", I should take whatever version that was and upgrade it... etc.
To be fair, the packages in [0] are parallel installable too. But for the case where you only want parallel availability, you already have that, and can use dnf version locking [7] to prevent unintended upgrades if you manually specified a version.
So I really think this is a non-issue in ursine RPMs. Modularity only gives you a way to group packages together, like software collections would've. It seems like the problem isn't in RPM or DNF, but is instead in Fedora's package tooling not letting you ship multiple versions of ursine RPMs without modularity. :)
But while it might work better, it wouldn't be as cool and sexy as modules...
- Modularity doesn't bring parallel-installability. You'd have to support it at the RPM level, which means ursine RPMs would support it too. [0]
Well, the idea is: if you need parallel install, don't mess with it at the RPM level. Separate at the container level.
See above; RPM actually supports parallel install just fine. We just need packaging guidelines we all agree on to ensure packages at different versions don't conflict and what that means. And lots of upstream work to make that happen for the packages we care about. And a little bit of alternatives magic to choose a default version for us and let us switch conveniently.
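For the "don't conflict" part, one crude way to check an installed system is just to compare file lists; a sketch (the rpm-test NVRs are the hypothetical packages from the footnotes below):

    # Crude file-overlap check between two parallel-installed versions, using
    # plain `rpm -ql`.  Note rpm itself tolerates shared files whose contents
    # and metadata are identical (licenses, docs, ...).
    import subprocess

    def files_of(nvr):
        out = subprocess.run(["rpm", "-ql", nvr],
                             capture_output=True, text=True, check=True)
        return set(out.stdout.splitlines())

    overlap = files_of("rpm-test-1.0-1.x86_64") & files_of("rpm-test-2.0-1.x86_64")
    if overlap:
        print("paths shipped by both versions (must be identical, or moved/versioned):")
        for path in sorted(overlap):
            print(" ", path)
    else:
        print("no shared paths -- the two versions parallel-install cleanly")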
- Any size reduction in modular RPMs can be made to ursine RPMs.
Maybe. But what if it reduces functionality? Modularity allows there to be a reduced version or a full version which can be swapped in.
-- Matthew Miller mattdm@fedoraproject.org Fedora Project Leader
[0]: https://github.com/cipherboy/testbed/rpm-versions
[1]: # rpm -qa | grep -i 'rpm-test' | sort
rpm-test-1.0-1.x86_64
rpm-test-1.1-1.x86_64
rpm-test-2.0-1.x86_64
[2]: # rpm -e rpm-test
error: "rpm-test" specifies multiple packages:
  rpm-test-1.0-1.x86_64
  rpm-test-1.1-1.x86_64
  rpm-test-2.0-1.x86_64
[3]: # rpm -e rpm-test-1.0
# rpm -e rpm-test-1.1
# rpm -e rpm-test-2.0
[4]: # dnf info rpm-test
Available Packages
Name         : rpm-test
Version      : 2.0
Release      : 1
Architecture : x86_64
Size         : 6.8 k
Source       : rpm-test-2.0-1.src.rpm
Repository   : Test
Summary      : A test package for seeing how RPM handles different RPM versions.
URL          : https://github.com/cipherboy/testbed/rpm-versions
License      : AGPLv3
Description  : A test package for seeing how RPM handles different RPM versions.
             :
             : This is version 2.0
[5]: # dnf install rpm-test-1.0
Last metadata expiration check: 0:01:01 ago on Mon 07 Oct 2019 07:36:10 PM EDT.
Dependencies resolved.
==========================================================================
 Package        Architecture    Version       Repository           Size
==========================================================================
Installing:
 rpm-test       x86_64          1.0-1         Test                6.7 k

Transaction Summary
==========================================================================
Install  1 Package

Total size: 6.7 k
Installed size: 12
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing   :                                                       1/1
  Installing  : rpm-test-1.0-1.x86_64                                 1/1
  Verifying   : rpm-test-1.0-1.x86_64                                 1/1

Installed:
  rpm-test-1.0-1.x86_64

Complete!
[6]: # dnf install rpm-test-1.1
Last metadata expiration check: 0:02:21 ago on Mon 07 Oct 2019 07:36:10 PM EDT.
Dependencies resolved.
==========================================================================
 Package        Architecture    Version       Repository           Size
==========================================================================
Upgrading:
 rpm-test       x86_64          1.1-1         Test                6.8 k

Transaction Summary
==========================================================================
Upgrade  1 Package

Total size: 6.8 k
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing   :                                                       1/1
  Upgrading   : rpm-test-1.1-1.x86_64                                 1/2
  Cleanup     : rpm-test-1.0-1.x86_64                                 2/2
  Verifying   : rpm-test-1.1-1.x86_64                                 1/2
  Verifying   : rpm-test-1.0-1.x86_64                                 2/2

Upgraded:
  rpm-test-1.1-1.x86_64

Complete!
[7]: https://dnf-plugins-core.readthedocs.io/en/latest/versionlock.html
On Mon, Oct 07, 2019 at 08:08:56PM -0400, Alexander Scheel wrote:
Without modularity, RPM doesn't offer a good way to choose between different versions of the same thing. One can squash version numbers into the name, which covers some use cases, but also becomes unwieldy and loses the _idea_ that these things are different branches of the same basic software.
This is not true at all.
For starters, if you have parallel packages available [0], `rpm -i` will let you install them all just fine and track each individually [1]. When you go to uninstall it (`rpm -e rpm-test`), it'll complain that you didn't specify which one [2], so you'll of course have to specify a version [3].
If you then go stick it in a repo, DNF will show you the highest version, which is expected since DNF generally concerns itself with the updated-ness of your system [4]. But you can always pass --showduplicates to show the older versions. And nothing prevents you from selecting a different version of the package if they exist in the repo [5]. The one place this fails is that DNF will perform an upgrade, removing the older version, even if you choose install [6].
What if you want to apply a bugfix (or security update) to both of those packages? How would that work?
----- Original Message -----
From: "Matthew Miller" mattdm@fedoraproject.org To: "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Tuesday, October 8, 2019 9:18:29 AM Subject: Re: Modularity and the system-upgrade path
On Mon, Oct 07, 2019 at 08:08:56PM -0400, Alexander Scheel wrote:
Without modularity, RPM doesn't offer a good way to choose between different versions of the same thing. One can squash version numbers into the name, which covers some use cases, but also becomes unwieldy and loses the _idea_ that these things are different branches of the same basic software.
This is not true at all.
For starters, if you have parallel packages available [0], `rpm -i` will let you install them all just fine and track each individually [1]. When you go to uninstall it (`rpm -e rpm-test`), it'll complain that you didn't specify which one [2], so you'll of course have to specify a version [3].
If you then go stick it in a repo, DNF will show you the highest version, which is expected since DNF generally concerns itself with the updated-ness of your system [4]. But you can always pass --showduplicates to show the older versions. And nothing prevents you from selecting a different version of the package if they exist in the repo [5]. The one place this fails is that DNF will perform an upgrade, removing the older version, even if you choose install [6].
What if you want to apply a bugfix (or security update) to both of those packages? How would that work?
I'm not saying it is completely solved, just that what we have left to do is a lot less work than trying to fix modularity. [0] :)
Since you asked... :)
That's where you apply the dnf intent mechanism (iirc) this thread brought up, except it is a lot simpler because it operates on packages, which are single, relatively atomic units instead of modules, which are globs of special packages.
There are four cases here (assuming a two-part version scheme):
- dnf install rpm-test            # just a package name
  ---> no versionlock under the hood
- dnf install rpm-test-1          # partial version -- major version 1
  dnf versionlock rpm-test>=1
  dnf versionlock rpm-test<2
  ^ the user has said: I want any major version 1 of this package, and I'll take any minor releases or patch updates.
- dnf install rpm-test-1.0        # full version -- major version 1, minor version 0
  dnf versionlock rpm-test>=1.0
  dnf versionlock rpm-test<1.1
  ^ under a two-part version scheme, this locks the user into only receiving patch updates. This is strict user intent, but we might want to confirm that they agree to receive only patch updates and might miss important bug fixes and some security fixes.
- dnf install rpm-test-1.0-1      # full name, version, and release
  dnf versionlock rpm-test-1.0-1
  ^ this is even stricter and prevents any new patch versions.
I'd probably add a message to DNF update showing what packages won't be upgraded by a specific policy. And, since it all lives in the versionlock DNF plugin, there's a very well-known, easy way to modify these locks, which you'd also have to provide for modularity intents. I'd probably make it a message with confirmation about all new version locks introduced in a transaction and have the user manually confirm the batch separately from confirming the install.
If the user wants to lock dependencies, once they're installed, they can of course use versionlock here as well. Entirely up to them.
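In sketch form, the intent-to-lock mapping above could be as small as this (a two-part version scheme is assumed, and the range-style versionlock constraints are the hypothetical extension from footnote [0], not a current plugin feature):

    # Sketch of the four intent cases above; the range constraints are the
    # hypothetical versionlock extension from footnote [0].
    import re

    def locks_for(spec):
        m = re.fullmatch(r"(?P<name>[\w-]+?)(?:-(?P<ver>\d+(?:\.\d+)?)(?:-(?P<rel>\d+))?)?", spec)
        name, ver, rel = m.group("name"), m.group("ver"), m.group("rel")
        if ver is None:
            return []                                   # bare name: no lock at all
        if rel is not None:
            return [f"{name}-{ver}-{rel}"]              # exact NVR: strictest lock
        if "." in ver:                                  # major.minor: patch updates only
            major, minor = ver.split(".")
            return [f"{name}>={ver}", f"{name}<{major}.{int(minor) + 1}"]
        return [f"{name}>={ver}", f"{name}<{int(ver) + 1}"]   # major only

    for spec in ("rpm-test", "rpm-test-1", "rpm-test-1.0", "rpm-test-1.0-1"):
        print(spec, "->", locks_for(spec) or "no versionlock")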
---
Bugfixes and security updates are the easiest to do. You'd have a version-based dist-git branch for each of your packages:
- rpm-test:v1.0
- rpm-test:v1.1
- rpm-test:v2.0
Some policy dictates which releases ship where. So, a bugfix comes in as a patch. The maintainer decides which branches to check it into (depends on the bug, severity, and how well the patch applies to older versions). Since it's a patch release, everyone will get it.
Now let's say we get a bigger security flaw. It results in a partial rewrite upstream, and isn't conducive to patching. Upstream releases v1.2 and v2.1. I'd:
- Annotate it as a security fix in Bodhi,
- When dnf update runs:
  - non-version-locked systems get an update to v2.1,
  - systems version-locked to any v2 get an update to v2.1,
  - systems version-locked to any v1 get an update to v1.2,
  - any other version-locked system gets a warning on DNF upgrade saying that they locked themselves out of receiving a security update. Then, the community member can either loosen their versionlock and get the update, or continue to opt out of receiving the update at their own risk.
- System upgrade time will work just like a regular DNF update; if a package versionlock is known to conflict, we can then decide on a policy... removing versionlocks to old versions, requiring parallel versions, &c. Ultimately, you'll probably encourage major-version locking and have a major-version overlap on most critical packages. It'll probably work like the orphaned & retired package lists, where things get blacklisted. Or we can design some other mechanism. Up to the community here. But think about it now. :)
Since the only packages with explicit versionlocks are those locked by very explicit user intent, most packages will continue to upgrade as expected. You could add heuristics based on "if I version lock @ firefox-1.0, I probably also want to version lock firefox-data-1.0" (and lock all subpackages of a single SRPM to the same version), but that can be a later feature.
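The update-selection policy sketched above is, at its core, just "pick the newest available version inside the lock range, and warn when the range excludes everything"; roughly (toy code, not the real versionlock plugin logic):

    # Toy model of lock-aware update selection; not the real plugin logic.
    def parse(v):
        return tuple(int(x) for x in v.split("."))

    def satisfies(version, lock):
        """lock is (lo, hi) meaning lo <= version < hi, or None for unlocked."""
        if lock is None:
            return True
        lo, hi = lock
        return parse(lo) <= parse(version) < parse(hi)

    available = ["1.1", "1.2", "2.0", "2.1"]     # 1.2 and 2.1 carry the security fix
    for lock in (None, ("2", "3"), ("1", "2"), ("1.0", "1.1")):
        candidates = sorted((v for v in available if satisfies(v, lock)), key=parse)
        if candidates:
            print(f"lock {lock}: update to {candidates[-1]}")
        else:
            print(f"lock {lock}: nothing in range -- warn that a security fix is being skipped")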
Point is, to get this to work requires:
- Minor changes to an existing DNF plugin,
- Lots of policy discussions,
- Changes to how composes &c work,
- Minor changes to tooling,
- No changes to packages in dist-git other than letting the maintainer move their existing fedora branches to version-based branches. No modulemd for one. :)
In particular, you don't need:
- A completely separate build system (MBS),
- A completely separate package grouping system (dist-git modules/),
- A way of integrating the two package systems (Ursa Prime),
- A new subcommand under DNF that behaves differently and breaks system upgrade (dnf module ...),
- ...
Hell, you could probably even extend versionlock to be the database for module streams and fixing upgrades. :D
Anyhow, that's my 2c pipe dream,
- Alex
[0]: From what I can tell, the MVP here would be to allow the DNF versionlock plugin to support Requires-like constructs, e.g., `dnf versionlock 'rpm-test<2'`. Currently it only supports locking to a specific NVR. You could probably invent a tilde operator for saying "around this version spec": `dnf versionlock 'rpm-test~2'` would lock you to `rpm-test>=2` and `rpm-test<3`.
Then you need policy around what versions can get into a compose, how new package versions get added to a repository (Bodhi), how older versions get retired, &c.
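The tilde operator suggested in footnote [0] would expand to a pair of range constraints, roughly (hypothetical syntax, not an existing versionlock feature):

    # Sketch of the hypothetical '~' ("around this version") operator from
    # footnote [0]: 'rpm-test~2' would mean >=2 and <3.
    def expand_tilde(spec):
        name, _, ver = spec.partition("~")
        major = int(ver.split(".")[0])
        return [f"{name}>={ver}", f"{name}<{major + 1}"]

    print(expand_tilde("rpm-test~2"))   # ['rpm-test>=2', 'rpm-test<3']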
-- Matthew Miller mattdm@fedoraproject.org Fedora Project Leader
On Tue, Oct 08, 2019 at 10:17:06AM -0400, Alexander Scheel wrote:
What if you want to apply a bugfix (or security update) to both of those packages? How would that work?
I'm not saying it is completely solved, just that what we have left to do is a lot less work than trying to fix modularity. [0] :)
Having watched Modularity develop, *this* is the point at which I'm skeptical. There's a lot which _sounds_ easy but which turns out to be difficult once you go into all of the corner cases. If someone has a solution which *really* does it better, I'm open to experimenting with it, but I don't think "it's easier" really sticks.
I know there are a lot of decisions made which could have been a different color, but I'm also sympathetic to work done over work theorized. I do appreciate the time you've put into the write-up, but a prototype would speak louder -- and there are a _lot_ of details that would need to be implemented in what you describe -- like something implementing the idea of a "partial version".
Matthew Miller wrote:
Without modularity, RPM doesn't offer a good way to choose between different versions of the same thing. One can squash version numbers into the name, which covers some use cases, but also becomes unwieldy and loses the _idea_ that these things are different branches of the same basic software.
On the other hand, paired with a suitable file naming/placement scheme, the versioned package name pattern brings parallel installability, so it is actually the superior approach from a user standpoint. At least when we are talking about libraries or programming language interpreters, or any other non-leaf package. (Incidentally, those are also the ones triggering the main defect of Modularity, the version conflict hell.)
Well, the idea is: if you need parallel install, don't mess with it at the RPM level. Separate at the container level.
Ewww, no thanks!
A container is essentially a distro within the distro, so a whole separate installation to maintain and update. It is also a huge waste of disk space. And integration with the rest of the system will always be restricted by design due to the container technology (even though things such as Portals try to improve on the situation to a limited extent). And the containers have to be set up and maintained by hand, since the idea presented in the early Modularity talks that DNF would automatically containerize modules in the presence of version conflicts was never implemented, and I strongly doubt it will ever be, because it is just not practical.
So containers are not a practical solution for parallel installation.
Kevin Kofler
Matthew Miller mattdm@fedoraproject.org writes:
On Mon, Oct 07, 2019 at 03:20:21PM -0400, Alexander Scheel wrote:
And where is the software for those containers coming from? Some container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
I never really understood this argument. Could you help me understand it? In what way do ursine RPMs not already do this? And more importantly, what benefits does Modularity bring, based on an earlier thread with Modularity use cases?
I'm going to avoid the word "ursine" because I think it's more confusing then helpful. It's all the same RPMs, after all.
Without modularity, RPM doesn't offer a good way to choose between different versions of the same thing. One can squash version numbers into the name, which covers some use cases, but also becomes unwieldy and loses the _idea_ that these things are different branches of the same basic software.
(Alex covered this, but I also think that part is present at the RPM level.)
What's missing from a more Debian-style solution [1] (for instance) is a fuller understanding of dependencies. We could implement "Provides:" (or something like it) and be done with it. This could also have the side effect of making package version upgrades cleaner.
- Any size reduction in modular RPMs can be made to ursine RPMs.
Maybe. But what if it reduces functionality? Modularity allows there to be a reduced version or a full version which can be swapped in.
So does having "foo-full" and "foo-minimal" both provide "foo" :)
More seriously, I think doing reduced functionality versions is usually a mistake. The general expectation of users seems to be that things work "the same" inside and outside containers. With a very few exceptions (e.g., early boot, installer, ...), I think foo-minimal is misguided and will only cause pain in the long run.
Thanks, --Robbie
1: https://www.debian.org/doc/debian-policy/ch-relationships.html#s-virtual
Robbie Harwood wrote:
What's missing from a more Debian-style solution [1] (for instance) is a fuller understanding of dependencies. We could implement "Provides:" (or something like it) and be done with it. This could also have the side effect of making package version upgrades cleaner.
[snip]
So does having "foo-full" and "foo-minimal" both provide "foo" :)
This is already possible now. "Provides: foo" has been implemented in RPM for decades.
Kevin Kofler
On Mon, 2019-10-07 at 14:59 -0400, Stephen Gallagher wrote:
On Mon, Oct 7, 2019 at 2:56 PM Simo Sorce simo@redhat.com wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system installed binaries, should we really continue with this very complicated modular design ?
Shouldn't we go back to have default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason ?
And where is the software for those containers coming from? Some container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
We can definitely build it as part of Fedora; that doesn't mean we need to expose regular users to "modularity". It would be a build artifact that only people building the "fedora containers" need to deal with. If you look at how Flatpaks are done, that comes close: there is a Flatpak runtime that can be reused by multiple containers, but you can also be using multiple runtimes on the same machine if different applications pull in different runtimes. In all cases, what is in the runtime, how it is built, and how it ends up on your machine are transparent to the user, but clearly visible if you *want* to look, and also clearly under the control of the container provider.
Simo.
On Mon, Oct 07, 2019 at 02:59:37PM -0400, Stephen Gallagher wrote:
On Mon, Oct 7, 2019 at 2:56 PM Simo Sorce simo@redhat.com wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system installed binaries, should we really continue with this very complicated modular design ?
Shouldn't we go back to have default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason ?
And where is the software for those containers coming from? Some
From distribution repositories? Like it always did?
container registry like Docker Hub? One of the main points of Modularity is to provide a trusted source of software to install into containers.
We had this (FROM fedora:30…) before Modularity. Yes, there is a problem when you run Fedora N with a specific software version Y, and you want to build a container with software version Y-2, which was shipped in Fedora N-4. You would need to create the container from an unsupported, insecure Fedora N-4. But you have no guarantee that the maintainer will provide software version Y-2 built as a module on top of Fedora N. At the moment, Modularity has broken the most basic functionality – upgrading from Fedora N to N+1.
On 10/7/19 8:55 PM, Simo Sorce wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system installed binaries, should we really continue with this very complicated modular design ?
There are only a few people who fully understand how modularity works today (contyk, who designed modulemd; jmracek and me, who implemented the DNF part; and a few others). I agree that the modular design should be simplified. If we don't lower the bar, the complexity might prevent wider adoption.
As a former release engineer, I'm personally unhappy about the lack of upgrade paths between module contexts, and I believe that fixing this part of the Modularity design could lead to the desired simplification. Unfortunately, based on a discussion I had with contyk yesterday, I don't believe it's achievable without making *huge* changes to the Modularity design and the build infrastructure/process.
Shouldn't we go back to have default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason ?
I don't think containers can replace modularity. They need to coexist. If we want to create containers built on top of a distribution (no randomly picked bits from the internet, reproducible builds, security, ...), we need a way to distribute multiple versions of the software (module streams) and they frequently need to be built against each other (module contexts).
Simo.
On Fri, 2019-10-04 at 10:57 -0400, Stephen Gallagher wrote:
Right now, there are two conflicting requirements in Fedora Modularity that we need to resolve.
- Once a user has selected a stream, updates should follow that
stream and not introduce incompatiblities. Selected streams should not be changed without direct action from the user. 2. So far as possible, Modularity should be invisible to those who don't specifically need it. This means being able to set default streams so that `yum install package` works for module-provided content.
Where this becomes an issue is at system-upgrade time (moving from Fedora 30->31 or continuously tracking Rawhide). Because of requirement 1, we cannot automatically move users between streams, but in the case of release upgrades we often want to move to a new default for the distribution.
The Modularity WG has generally agreed that we want and need to support behavior of the following use-cases:
Use Case 1:
On Fedora 30, user Alice runs
yum install Foo
The package "Foo" is provided by a module "foo" with a default stream "v1.0". Because it's available in a default stream, the package is installed and the module stream "foo:v1.0" is implicitly enabled for the system.
Fedora 31 is released. On Fedora 31, the module "foo" has a new default stream "v1.1". When upgrading from Fedora 30 to Fedora 31, Alice expects the package Foo she installed to be upgraded to version 1.1, because that's what would have happened if it was provided as a package from the non-modular repositories.
Use Case 2:
On Fedora 30, user Bob runs
yum enable foo:v1.0
In this case, the "v1.0" stream of the "foo" module has a dependency on the "v2.4" stream of the "bar" module. So when enabling "foo:v1.0", the system also implicitly enables "bar:v2.4".
Fedora 31 is released. On Fedora 31, the module stream "foo:v1.0" now depends on "bar:v2.5" instead of "bar:v2.4". The user, caring only about "foo:v1.0" would expect the upgrade to complete, adjusting the dependencies as needed.
At Flock and other discussions, we've generally come up with a solution, but it's not yet recorded anywhere. I'm sending it out for wider input, but this is more or less the solution we intend to run with, barring someone finding a severe flaw.
Proposed Solution:
What happens today is that once the stream is set, it is fixed and unchangeable except by user decision. Through discussions with UX folks, we've more or less come to the decision that the correct behavior is as follows:
- The user's "intention" should be recorded at the time of module
enablement. Currently, module streams can exist in four states: "available, enabled, disabled, default". We propose that there should be two additional states (names TBD) representing implicit enablement. The state "enabled" would be reserved for any stream that at some point was enabled by name. For example, a user who runs `yum install freeipa:DL1` is making a conscious choice to install the DL1 stream of freeipa. A user who runs `yum install freeipa-client` is instead saying "give me whatever freeipa-client is the default".
- The state `dep_enabled` would be set whenever a stream becomes
enabled because some other module stream depended on it. This state must be entered only if the previous state was `default` or `available`. (We don't want `enabled` or `disabled` streams being able to transition to this state.)
- The state `default_enabled` would be set whenever a stream becomes
enabled because a transaction pulled in a package from a default stream, causing it to be enabled. This state must only be entered if the previous state was `default` or `dep_enabled`. We don't want `enabled` or `disabled` to be able to transition to `default_enabled`. If a user requests installation of a package provided by a stream currently in the `dep_enabled` state, that stream should transition to the `default_enabled` state (meaning that now the user would expect it to be treated the same as any other default-enabled stream).
- When running `dnf update`, if a module stream's dependency on
another module changes to another stream, the transaction should cause that new stream to be enabled (replacing the current stream) if it is in the `dep_enabled` state. When running `dnf update` or `dnf system-upgrade`, if the default stream for a module installed on the system changes and the module's current state is `default_enabled`, then the transaction should cause the new default stream to be enabled.
- If stream switching during an update or upgrade would result in
other module dependency issues, that MUST be reported and returned to the user.
This requires some constraints to be placed on default and dependency changes:
- Any stream upgrade such as this must guarantee that any artifacts of the stream that are exposed as "API" MUST support RPM-level package upgrades from any previous stream in this stable release. (Example: "freeipa:DL1" depends on the "pki-core:3.8" stream at Fedora 30 launch. Later updates move this to depending on "pki-core:3.9" and even later on "pki-core:3.10". In this case the packages from "pki-core:3.10" must have a safe upgrade path from both "pki-core:3.8" and "pki-core:3.9", since we cannot guarantee or force our users to update regularly and they might miss some of the intermediate streams.)
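To make the proposed rules easier to follow, here is a minimal, purely illustrative sketch of the stream states and the update-time switching logic described above. The names and data structures are hypothetical and do not correspond to actual libdnf code; this only encodes the transition rules from the proposal.

    # Purely illustrative sketch -- hypothetical names, not actual libdnf code.
    from enum import Enum

    class StreamState(Enum):
        AVAILABLE = "available"
        DEFAULT = "default"
        ENABLED = "enabled"                  # explicitly enabled by name
        DISABLED = "disabled"
        DEP_ENABLED = "dep_enabled"          # enabled to satisfy another module's dependency
        DEFAULT_ENABLED = "default_enabled"  # enabled by installing from a default stream

    def enable_as_dependency(state):
        # dep_enabled may only be entered from default or available;
        # explicitly enabled or disabled streams are left alone.
        if state in (StreamState.DEFAULT, StreamState.AVAILABLE):
            return StreamState.DEP_ENABLED
        return state

    def enable_from_default(state):
        # default_enabled may only be entered from default or dep_enabled.
        if state in (StreamState.DEFAULT, StreamState.DEP_ENABLED):
            return StreamState.DEFAULT_ENABLED
        return state

    def may_switch_stream_on_update(state, reason):
        # On 'dnf update' / 'dnf system-upgrade', only implicitly enabled
        # streams may be moved automatically: dep_enabled streams follow a
        # changed module-level dependency, default_enabled streams follow a
        # changed distribution default. Explicitly enabled (or disabled)
        # streams are never switched without user action.
        if reason == "dependency_changed":
            return state == StreamState.DEP_ENABLED
        if reason == "default_changed":
            return state == StreamState.DEFAULT_ENABLED
        return False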
I don't think containers can replace modularity. They need to coexist.
If we want to create containers built on top of a distribution (no randomly picked bits from the internet, reproducible builds, security, ...), we need a way to distribute multiple versions of the software (module streams) and they frequently need to be built against each other (module contexts).
Once I was told that the purpose of modularity is to enable people to create containers with any software version they long for.
This made total sense to me, because containers usually serve only a specific purpose, so it is easier to pick the correct software versions and limit their combinations (for example for a LAMP container). It is not so crucial that some other application might not be fully compatible with that, because I could create another container for the other application.
System-wide? That is another story, because there we need 100% compatibility for everything; we can never think of all the combinations of applications and use cases that Workstation users long for.
With modularity, we could make Fedora a great distro for developers. But I would like to see Fedora be the distro for everyone, not just developers. Therefore, we need to be as versatile as possible. I also have a slogan:
WHATEVER YOU HAVE EXPECTED FROM AN OS ... YOU FIND IT WITH FEDORA.
Let's make it perfect.
Simo.
On Fri, 2019-10-11 at 09:56 +0200, Daniel Mach wrote:
On 10/7/19 8:55 PM, Simo Sorce wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system-installed binaries, should we really continue with this very complicated modular design?
There are only a few people who fully understand how modularity works today (contyk, who designed modulemd, jmracek and me, who implemented the DNF part, and a few others). I agree that the modular design should be simplified. If we don't lower the bar, the complexity might prevent wider adoption.
Yes, at the moment it is too complex a system.
As a former release engineer, I'm personally unhappy about the lack of upgrade paths between module contexts, and I believe that fixing this part of the modularity design could lead to the desired simplification. Unfortunately, based on a discussion I had with contyk yesterday, I don't believe it's achievable without making *huge* changes in the modularity design and the build infrastructure/process.
Well, the way I see it, if it is not usable we shouldn't inflict it on users unless there is a clear and overwhelming technical advantage in doing it. So far it eludes me what advantage modularity gives that is so important.
Shouldn't we go back to having default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason?
I don't think containers can replace modularity. They need to coexist. If we want to create containers built on top of a distribution (no randomly picked bits from the internet, reproducible builds, security, ...), we need a way to distribute multiple versions of the software (module streams) and they frequently need to be built against each other (module contexts).
If modules were just an infrastructure artifact used to build containers (or spins, or any other useful delivery mechanism) I wouldn't be concerned, as then the people exposed to their complexity would be people who know what they are doing and can decide whether or not to buy into the modules ecosystem.
My main gripe is the current situation where users are thrown under the bus and then we give them a business card and say: read these instructions to figure out how to save yourself. I think this is unacceptable.
Simo.
On Fri, Oct 11, 2019 at 6:57 AM Simo Sorce simo@redhat.com wrote:
On Fri, 2019-10-11 at 09:56 +0200, Daniel Mach wrote:
On 10/7/19 8:55 PM, Simo Sorce wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system-installed binaries, should we really continue with this very complicated modular design?
There are only a few people who fully understand how modularity works today (contyk, who designed modulemd, jmracek and me, who implemented the DNF part, and a few others). I agree that the modular design should be simplified. If we don't lower the bar, the complexity might prevent wider adoption.
Yes, at the moment it is too complex a system.
Do you gain something out of that complexity that's worth it? Or is that an open question? And is it a design requirement?
As a former release engineer, I'm personally unhappy about the lack of upgrade paths between module contexts, and I believe that fixing this part of the modularity design could lead to the desired simplification. Unfortunately, based on a discussion I had with contyk yesterday, I don't believe it's achievable without making *huge* changes in the modularity design and the build infrastructure/process.
Well, the way I see it, if it is not usable we shouldn't inflict it on users unless there is a clear and overwhelming technical advantage in doing it. So far it eludes me what advantage modularity gives that is so important.
As a contrarian, I'd be suspicious if there's complete agreement on any new thing. Do you disagree with the stated advantages of modularity? Or do you not understand the advantages of modularity? Do you think modularity is a solution in search of a problem? That is, you don't even agree with or understand the stated problem modularity is intended to solve, even before the question of whether modularity adequately addresses the problem(s)?
Shouldn't we go back to having default packages and then defer to "containers" for applications (and their dependencies) that need to deviate from system defaults for any reason?
I don't think containers can replace modularity. They need to coexist. If we want to create containers built on top of a distribution (no randomly picked bits from the internet, reproducible builds, security, ...), we need a way to distribute multiple versions of the software (module streams) and they frequently need to be built against each other (module contexts).
If modules were just an infrastructure artifact used to build containers (or spins, or any other useful delivery mechanism) I wouldn't be concerned, as then the people exposed to their complexity would be people who know what they are doing and can decide whether or not to buy into the modules ecosystem.
I rather like the idea of them in flatpaks, where any problems are limited to a particular flatpak, and can't affect the local system.
My main gripe is the current situation where users are thrown under the bus and then we give them a business card and say: read these instructions to figure out how to save yourself. I think this is unacceptable.
Ordinary everyday users, or do you mean packagers being thrown under the bus? And really, is there a difference? Neither the mortal user nor the immortal packager should be thrown under the bus, when we get right down to it. That's certainly not a design tolerance. Too much trust in dnf has been built up for that to happen. It's natural and necessary that any possibility of that be resisted.
And I don't actually know any of the parameters under discussion. I don't even understand RPM let alone modularity. Every time I approach packaging I find too many barriers to entry, or at least something else that seems more interesting. I'm quite content with dnf and packagers doing the work. Is there a shrinking packager problem?
Chris Murphy lists@colorremedies.com writes:
On Fri, Oct 11, 2019 at 6:57 AM Simo Sorce simo@redhat.com wrote:
On Fri, 2019-10-11 at 09:56 +0200, Daniel Mach wrote:
On 10/7/19 8:55 PM, Simo Sorce wrote:
I have to ask, given containers are so popular and can deal with any dependency without conflicting with system-installed binaries, should we really continue with this very complicated modular design?
There are only a few people who fully understand how modularity works today (contyk, who designed modulemd, jmracek and me, who implemented the DNF part, and a few others). I agree that the modular design should be simplified. If we don't lower the bar, the complexity might prevent wider adoption.
Yes, at the moment it is too complex a system.
Do you gain something out of that complexity that's worth it? Or is that an open question? And is it a design requirement?
As a former release engineer, I'm personally unhappy about the lack of upgrade paths between module contexts, and I believe that fixing this part of the modularity design could lead to the desired simplification. Unfortunately, based on a discussion I had with contyk yesterday, I don't believe it's achievable without making *huge* changes in the modularity design and the build infrastructure/process.
Well, the way I see it, if it is not usable we shouldn't inflict it on users unless there is a clear and overwhelming technical advantage in doing it. So far it eludes me what advantage modularity gives that is so important.
As a contrarian, I'd be suspicious if there's complete agreement on any new thing. Do you disagree with the stated advantages of modularity? Or do you not understand the advantages of modularity? Do you think modularity is a solution in search of a problem? That is, you don't even agree with or understand the stated problem modularity is intended to solve, even before the question of whether modularity adequately addresses the problem(s)?
I believe the point most of us are struggling with is: there's no definition of what the advantages of modularity are. There may or may not be some idea of what the advantages could be, which is a different thing. This makes it really hard to argue whether it is or isn't succeeding when there are no criteria for success.
Lack of such information places it firmly in the class of "solution in search of a problem".
Thanks, --Robbie
On Fri, 2019-10-11 at 14:42 -0400, Robbie Harwood wrote:
I believe the point most of us are struggling with is: there's no definition of what the advantages of modularity are. There may or may not be some idea of what the advantages could be, which is a different thing. This makes it really hard to argue whether it is or isn't succeeding when there are no criteria for success.
Well, there are various places that provide a fairly 'official' definition of what the advantages are supposed to be. E.g. the Modularity docs site has a FAQ section where this is the first question: "Exactly what problem are you trying to solve?"
https://docs.fedoraproject.org/en-US/modularity/faq/
"Deploying software has many solutions, but what gets deployed often plays out as a fight between developers and operators. Developers want the latest (or at least later) features. Operators want software in packages, certified, with a known period of support. Fedora Modularity provides multiple versions of packages in a Linux distribution with the qualities expected from a Linux distribution: transparently built and delivered, actively maintained, and easy to install — making both happy."
The "Modules for Everyone" Change also had a "Benefit to Fedora" section, as it was required to:
https://fedoraproject.org/wiki/Changes/ModulesForEveryone#Benefit_to_Fedora
"Fedora users will have access to a wider range of software choices than they had previously. Fedora Packagers will be able to use modules and module defaults to build each stream once and have it available for any supported Fedora release they wish. They will no longer need to duplicate that work for both the modular and non-modular repositories."
The 'Fedora Modularization' objective also defines a goal:
https://fedoraproject.org/wiki/Objectives/Fedora_Modularization_%E2%80%94_Th...
"Modularity will transform the all-in-one Fedora OS into an operating system plus a module repository, which will contain a wide selection of software easily maintained by packagers. This iteration of the Objective focuses on the second part — providing a wide selection software in various versions — while laying the groundwork for the first."
I think those texts taken together give a reasonable account of what modularity is *supposed* to be doing for us, so we can at least attempt to then ask and answer the question "is it actually achieving these things"?
On Fri, Oct 11, 2019 at 11:54:02AM -0700, Adam Williamson wrote:
Well, there are various places that provide a fairly 'official' definition of what the advantages are supposed to be. E.g. the Modularity docs site has a FAQ section where this is the first question: "Exactly what problem are you trying to solve?"
Thanks Adam -- this is a helpful summary and gathering of various bits of info.
On Fri, 11 Oct 2019 at 14:29, Chris Murphy lists@colorremedies.com wrote:
My main gripe is the current situation where users are thrown under the bus and then we give them a business card and say: read these instructions to figure out how to save yourself. I think this is unacceptable.
Ordinary everyday users, or do you mean packagers being thrown under the bus? And really, is there a difference? Neither the mortal user nor the immortal packager should be thrown under the bus, when we get right down to it. That's certainly not a design tolerance. Too much trust in dnf has been built up for that to happen. It's natural and necessary that any possibility of that be resisted.
And I don't actually know any of the parameters under discussion. I don't even understand RPM let alone modularity. Every time I approach packaging I find too many barriers to entry, or at least something else that seems more interesting. I'm quite content with dnf and packagers doing the work. Is there a shrinking packager problem?
There has been a shrinking packager problem for years, due to multiple problems:
1. A lot of packagers were doing this on volunteer time, and that is a limited resource. You get sick, you get tired, you have a work deadline, etc., and then you find yourself 2-3 releases behind and not feeling like looking at 200 bugzillas.
2. A lot of packages needed more development work than a packager could pursue. If I packaged up xtank because I loved the game and was able to fix a couple of things, that is OK. However, when it turns out that it needs a lot of forward-porting work because Fedora moved to X11R20 and the maintainers want to stick to Xorg for a bit longer, you just let the bugzillas pile up hoping it will sort itself out.
3. For years there was a cross-distro 'arms race' of 'our distro needs as many packages as possible or no one will use it'. So you ended up with some set of packages being pulled in which people were 'sort of' interested in versus invested in.
4. Packaging is like making cheese: there are 2000 different ways to do it, and it is hard to know which ones are good. Trying to come up with a consensus, or getting people to follow the consensus, ends up with tyres burning in the streets.
5. Getting reviews takes time and energy from reviewers. When you have 100 packages you absorbed from 4 different packagers, you have little time to look at someone else's and mentor them. So instead you have a backlog of package reviews, probably tied to people who are no longer interested.
6. Package upstreams are a lot faster and change greatly. There is a lot of software which will suddenly add a dozen new dependencies, and the packager has to rip out the embedded versions or find the versions which work. Of course those versions rarely work with everything else, and you end up with conflicts between things. That is tiring, and people age out.
As packagers left for one reason or another, other packagers would find that something they needed was going away and would take over that package. And like death by a thousand papercuts, that meant that those packagers also started to find their time eaten up, so they might have less time to mentor others. Then some of those people burned out, and you ended up with even more 'well, I will take your 100 packages' piled onto people.
Just to be clear, this isn't a problem with only Fedora. Debian, OpenSuSE, etc. are all facing the same problems, as volunteers interested in working on distributions are not a growing percentage. It also doesn't mean that it is the end of the world. It does mean we have to be clearer about our limits and stick to them. Trying to package up a lot (or all) of the software out there does not make sense for multiple reasons. The primary one is that most software was never written to be integrated into an operating system. It was written as a universe of its own, and the developers don't see it as their job or vision to be tied to an OS. We can force all kinds of things to try to make it work, but each one takes time and energy from people who are volunteering that effort and could be doing something else.
On Fri, Oct 11, 2019 at 05:53:26PM -0400, Stephen John Smoogen wrote:
There has been a shrinking packager problem for years due to multiple problems
Has there? I'm not really seeing a significant change in the number of people making changes in dist-git over time. See attached svg. Maybe a little down from 2013-2016, but seems mostly flat 2016-2019.
It's not significantly growing, which may be a problem itself, but let's not get over-dramatic without evidence.
https://mattdm.org/fedora/fedora-contributor-trends/git.user.count.svg
Also see the Contributors by Week graph here
https://mattdm.org/fedora/fedora-contributor-trends/active-contributors-by-w...
although note I don't have that broken out with _just_ dist-git activity (it also includes bodhi karma and wiki edits).
Again, not seeing the growth I'd love, but I also don't see a contributor crisis.
Based on those graphs, I'd say there's slow shrinking. It appears to be linear since ~2016. But what is interesting is that the shrinking mostly affects "the occasional contributor": the green top 1% appear unchanged, the yellow top 10% barely budge, but the last 50% is clearly shrinking.
Zbyszek
Zbigniew Jędrzejewski-Szmek wrote:
Based on those graphs, I'd say there's slow shrinking. It appears to be linear since ~2016. But what is interesting is that the shrinking mostly affects "the occasional contributor": the green top 1% appear unchanged, the yellow top 10% barely budge, but the last 50% is clearly shrinking.
That is because your recent changes such as Modularity, Silverblue, endorsement of Flatpak, etc., are driving new packagers away.
Some changes (e.g., Silverblue and the more or less related rush to Flatpaks for everything) falsely make potential new contributors believe that RPM packaging skills are no longer needed or wanted here. Others such as Modularity actively make it harder for potential RPM packagers to contribute. (Modules add complexity and cause problems when a dependency is module-only.)
Kevin Kofler
On Sun, 13 Oct 2019 at 16:28, Matthew Miller mattdm@fedoraproject.org wrote:
On Fri, Oct 11, 2019 at 05:53:26PM -0400, Stephen John Smoogen wrote:
There has been a shrinking packager problem for years due to multiple problems
Has there? I'm not really seeing a significant change in the number of people making changes in dist-git over time. See attached svg. Maybe a little down from 2013-2016, but seems mostly flat 2016-2019.
It's not significantly growing, which may be a problem itself, but let's not get over-dramatic without evidence.
https://mattdm.org/fedora/fedora-contributor-trends/git.user.count.svg
I think we are talking about 2 different things, and I should have been clearer in my first reply to explain that.
You are showing that we have a constant number of contributors. I was thinking about active contributors per artifact. Using the charts you provided, a conservative number would be that we have had a near-constant 300 active maintainers from 2012 to 2019. There have been dips and peaks, but that seems to be about what we keep.
On this day in 2012, there were 3 releases (16, 17, and 18) that these 300 maintainers would have been covering:
16: 10929 src.rpms
17: 11614 src.rpms
18: 12614 src.rpms
(10929+11614+12614) / 300 avg active git users = 117 packages per active git user. If we assume 16 and 17 were not getting much attention as 18 was trying to get out the door, we have 42 packages per active participant.
On this day (2019-10-14) in 2019, there are 3 releases which look like the following:
29: 21847 (Everything Srpms), 708 modular srpms
30: 21292 (Everything Srpms), 472 modular srpms
31: 21191 (Everything Srpms), 911 modular srpms
(21847+21292+21191+708+472+911) / 300 avg active git users = 221 packages per active git user. Again assuming people are just looking at 31, it is 73 packages / active git user. If we had a steady state growth in maintainers we would need 226 more active git contributors at the moment.
If we look at deliverables instead of src.rpms (i.e. how many arch.rpm/noarch.rpm files we ship), in F18 we had 40156 packages in x86_64. In F31 we have 89218 rpms + 3188 mod-rpms (92406). So we moved from 133 rpms per active git maintainer to 308 rpms per active git maintainer. To keep the old ratio, we would currently need to be at 670 active participants.
So from that perspective it "feels" like we have a shrinking packager problem. If we had a growing packager set, we would still be near 133 rpms per maintainer or less. And again, I realize that this isn't a rigorous count. We would need to work out how many packages get changed per release and what the average number of packages is for the people actually active in a given period. Someone might have taken 100 packages in 2015, worked really hard until 2017, and now is no longer involved. Other factors change what the count means as packages are broken down and remerged into different things (e.g. TeX).
-- Stephen J Smoogen.
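For anyone who wants to double-check the arithmetic in the message above, here is a quick back-of-the-envelope reproduction. The package counts and the roughly 300 active git users are taken from that message; they are rough estimates, not authoritative figures.

    # Back-of-the-envelope check of the packages-per-maintainer ratios above.
    active_git_users = 300  # rough constant taken from the graphs

    srpms_2012 = {"f16": 10929, "f17": 11614, "f18": 12614}
    print(sum(srpms_2012.values()) / active_git_users)  # ~117 src.rpms per user
    print(srpms_2012["f18"] / active_git_users)         # ~42 if only F18 counts

    srpms_2019 = {"f29": 21847 + 708, "f30": 21292 + 472, "f31": 21191 + 911}
    print(sum(srpms_2019.values()) / active_git_users)  # ~221 src.rpms per user
    print(srpms_2019["f31"] / active_git_users)         # ~73 if only F31 counts

    # Extra maintainers needed to keep the 2012 per-release ratio for F31:
    ratio_2012 = srpms_2012["f18"] / active_git_users
    print(srpms_2019["f31"] / ratio_2012 - active_git_users)  # ~226 more people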
One thing that came up in this thread was the lack of a concept of "Obsoletes" in the modular metadata. This was initially done intentionally, because we had the original constraint 1) above ("Selected streams should not be changed without direct action from the user.").
However, given that we're talking about the need to migrate defaults anyway, I think it may be worth considering adding something like an Obsoletes mechanism, but with a little more nuance.
Alternate Proposal:
Most things from the original proposal in the first message of this thread remains the same except:
Module stream metadata would gain two new optional attributes, "upgrades:" and "obsoletes:".
If the "upgrades: <older_stream>" field exists in the metadata, libdnf should switch to this stream if the following conditions are met: 1) Changing the stream would not introduce conflicts. 2) The stream is marked as `default_enabled` or `dep_enabled`.
The "obsoletes: <older_stream>" field would be stronger. Its use should require a special exemption (with a strong justification) and it would cause libdnf to switch from that stream to this one *unconditionally* (failing the transaction if that transition would cause conflicts). This would essentially be an "emergency escape" if we need it.
This would obviate the need for handling changes to the default stream in favor of having explicit transitions encoded by the packager into the module metadata.
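To make the alternate proposal concrete, here is a rough sketch of how a resolver might interpret the proposed "upgrades:" and "obsoletes:" attributes. Both attributes are hypothetical at this point (they do not exist in modulemd today), and the function and data structures below are invented for illustration only; they do not correspond to any existing libdnf API.

    # Rough sketch of the proposed semantics -- hypothetical, not real libdnf code.

    def plan_stream_switch(new_stream_meta, current_state, would_conflict):
        """Decide whether to move from an older stream to the new one.

        new_stream_meta -- metadata of the new stream, possibly carrying the
                           proposed 'upgrades' / 'obsoletes' keys
        current_state   -- state of the currently enabled older stream
        would_conflict  -- True if switching would introduce dependency issues
        """
        if "obsoletes" in new_stream_meta:
            # Strong form: switch unconditionally; a conflict fails the whole
            # transaction rather than silently keeping the old stream.
            if would_conflict:
                raise RuntimeError("obsoleting stream switch would conflict; abort")
            return True
        if "upgrades" in new_stream_meta:
            # Weak form: only implicitly enabled streams follow the upgrade,
            # and only if no conflicts would be introduced.
            return (current_state in ("default_enabled", "dep_enabled")
                    and not would_conflict)
        return False

    # Example: metadata saying that foo:v1.1 upgrades foo:v1.0
    foo_v11 = {"name": "foo", "stream": "v1.1", "upgrades": "v1.0"}
    print(plan_stream_switch(foo_v11, "default_enabled", would_conflict=False))  # True
    print(plan_stream_switch(foo_v11, "enabled", would_conflict=False))          # False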
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
On Wed, Oct 16, 2019 at 12:05 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
I think we can pretty much guarantee that's not going to happen. Unfortunately, modularization is a one-way road, given how modularity is implemented in DNF and how our distribution policies are currently structured.
It just means that people need to *really* think of the consequences of modularizing content, because there's basically no going back after that. We have no escape hatches or transition mechanisms to go from modular to non-modular variants of the same RPMs.
-- 真実はいつも一つ!/ Always, there's only one truth!
On Tuesday, October 15, 2019 9:07:51 PM MST Neal Gompa wrote:
On Wed, Oct 16, 2019 at 12:05 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
I think we can pretty much guarantee that's not going to happen. Unfortunately, modularization is a one-way road, given how modularity is implemented in DNF and how our distribution policies are currently structured.
It just means that people need to *really* think of the consequences of modularizing content, because there's basically no going back after that. We have no escape hatches or transition mechanisms to go from modular to non-modular variants of the same RPMs.
That's not what the proposal is. The proposal is to require a non-modular version, an "ursine package", for modular packages, instead of default modules.
On Wed, Oct 16, 2019 at 12:11 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 9:07:51 PM MST Neal Gompa wrote:
On Wed, Oct 16, 2019 at 12:05 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
I think we can pretty much guarantee that's not going to happen. Unfortunately, modularization is a one-way road, given how modularity is implemented in DNF and how our distribution policies are currently structured.
It just means that people need to *really* think of the consequences of modularizing content, because there's basically no going back after that. We have no escape hatches or transition mechanisms to go from modular to non-modular variants of the same RPMs.
That's not what the proposal is. The proposal is to require a non-modular version, an "ursine package", for modular packages, instead of default modules.
We cannot remove already existing default modules without further breaking things. Moreover, DNF will refuse to expose non-modular RPMs if it's aware of modular ones that have existed at some point. The best we can do is stop people from making more.
We have no process for de-modularization and I fully expect us to not have one ever, as the end goal of the modularity project is to enable a fully modularized distribution. Even RHEL 8 isn't a full realization of that vision.
On Tuesday, October 15, 2019 9:13:40 PM MST Neal Gompa wrote:
the end goal of the modularity project is to enable a fully modularized distribution
Was this ever clarified anywhere? I highly doubt that it would have been able to even begin if that goal had been communicated, especially considering that's not even possible.
On Wed, Oct 16, 2019 at 12:21 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 9:13:40 PM MST Neal Gompa wrote:
the end goal of the modularity project is to enable a fully modularized distribution
Was this ever clarified anywhere? I highly doubt that it would have been able to even begin if that goal had been communicated, especially considering that's not even possible.
That was how this project started. Fedora Server Boltron was the first attempt at it, but it was a lot more than they could do at once, so it was scaled back. However, I don't think there will be any more scaling back of the modularity project. It's far too important for that to happen.
And to be fair, while it is a hard problem to solve, it's a worthy one. It makes sense and if done well, could really distinguish Fedora from the rest in providing a way for codifying individual lifecycles separately from the distribution. Moreover, with all the container circus stuff going on, it's become even more important to enable some kind of parallel availability.
Sadly, I think a lot of people are learning that investing so little in infrastructure tooling (especially build and release tooling) for the past decade has really hurt them. Koji living off 1.5 people for several years, no *real* attempt to improve packager workflows since the move to Dist-Git in 2010, and a generally growing package collection with complex dependency chains have led to a situation where all of our bandages have to come off at once, and we can see that the wounds didn't heal as well as we thought.
Even the work to port our tooling to Python 3 has shown how badly Fedora's tools have been maintained. What's worse, opportunities to build communities around those tools to broaden the user and contributor base clearly weren't taken, which allowed them to devolve into Fedora-specific tooling or just plain rot. There are a lot of corrective actions happening, some of them potentially overreactions, but a lot of them are very justified.
It's going to be a long, hard road to get a good quality of life for Fedora contributors again. There's more for table stakes, we've had serious UX regressions in the past five years, and we have to start seriously examining contributor pain points and dealing with them.
On Tuesday, October 15, 2019 9:40:31 PM MST Neal Gompa wrote:
And to be fair, while it is a hard problem to solve, it's a worthy one. It makes sense and if done well, could really distinguish Fedora from the rest in providing a way for codifying individual lifecycles separately from the distribution. Moreover, with all the container circus stuff going on, it's become even more important to enable some kind of parallel availability.
If "parallel availability" is the problem Modularity is trying to solve, it seems that Modularity is a failure. You can't install more than one version of a package at once.
Anyway, this is off topic. In my eyes, the best course of action is to simply require that all modules have a non-modular version in Fedora. This can also be done for things that are currently default modules. Sure, those who have existing installs with modules won't get their install fixed with the current code, but new installations would. That's a start.
On Tue, 15 Oct 2019, John M. Harris Jr wrote:
On Tuesday, October 15, 2019 9:40:31 PM MST Neal Gompa wrote:
And to be fair, while it is a hard problem to solve, it's a worthy one. It makes sense and if done well, could really distinguish Fedora from the rest in providing a way for codifying individual lifecycles separately from the distribution. Moreover, with all the container circus stuff going on, it's become even more important to enable some kind of parallel availability.
If "parallel availability" is the problem Modularity is trying to solve, it seems that Modularity is a failure. You can't install more than one version of a package at once.
You are mixing up parallel availability and parallel installability. These aren't the same. Modularity does solve the parallel availability problem. It was never designed to solve the parallel installability problem.
Anyway, this is off topic. In my eyes, the best course of action is to simply require that all modules have a non-modular version in Fedora. This can also be done for things that are currently default modules. Sure, those who have existing installs with modules won't get their install fixed with the current code, but new installations would. That's a start.
I think this requirement is not only unreasonable but also detrimental to the project, because it basically doubles the amount of work volunteers have to do. Simply providing the content of default modules in a non-modular way ignores the fact that you somehow need to be able to rebuild those packages, and they might depend in their build dependencies on packages from other modules, including non-default streams.
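As a toy illustration of the distinction Alexander draws between parallel availability and parallel installability (this is not how libdnf actually models modules, and the module name and stream versions below are just examples):

    # Toy model only -- not libdnf's actual data structures.
    repo_metadata = {
        # Parallel availability: several streams ship in the same repository.
        "postgresql": {"streams": ["9.6", "10"], "default": "10"},
    }

    enabled_streams = {}  # module name -> the single enabled stream

    def enable(module, stream):
        # No parallel installability: enabling another stream of the same
        # module replaces the previous one instead of adding to it.
        if stream not in repo_metadata[module]["streams"]:
            raise ValueError(f"{module}:{stream} is not available")
        enabled_streams[module] = stream

    enable("postgresql", "9.6")
    enable("postgresql", "10")
    print(enabled_streams)  # {'postgresql': '10'} -- only one stream at a time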
On Wed, Oct 16, 2019 at 11:50 AM Alexander Bokovoy abokovoy@redhat.com wrote:
On ti, 15 loka 2019, John M. Harris Jr wrote:
On Tuesday, October 15, 2019 9:40:31 PM MST Neal Gompa wrote:
And to be fair, while it is a hard problem to solve, it's a worthy one. It makes sense and if done well, could really distinguish Fedora from the rest in providing a way for codifying individual lifecycles separately from the distribution. Moreover, with all the container circus stuff going on, it's become even more important to enable some kind of parallel availability.
If "parallel availability" is the problem Modularity is trying to solve, it seems that Modularity is a failure. You can't install more than one version of a package at once.
You are mixing up parallel availability and parallel installability. These aren't the same. Modularity does solve the parallel availability problem. It was never designed to solve the parallel installability problem.
And that is, in my opinion, the root source of all the issues that are currently plaguing Modularity. Parallel availability without parallel installability can only lead to problems. This is just a new, shiny version of DLL hell. Thanks, I hate it.
Fabio
On Wed, Oct 16, 2019 at 11:58 AM Fabio Valentini decathorpe@gmail.com wrote:
On Wed, Oct 16, 2019 at 11:50 AM Alexander Bokovoy abokovoy@redhat.com wrote:
On ti, 15 loka 2019, John M. Harris Jr wrote:
On Tuesday, October 15, 2019 9:40:31 PM MST Neal Gompa wrote:
And to be fair, while it is a hard problem to solve, it's a worthy one. It makes sense and if done well, could really distinguish Fedora from the rest in providing a way for codifying individual lifecycles separately from the distribution. Moreover, with all the container circus stuff going on, it's become even more important to enable some kind of parallel availability.
If "parallel availability" is the problem Modularity is trying to solve, it seems that Modularity is a failure. You can't install more than one version of a package at once.
You are mixing up parallel availability and parallel installability. These aren't the same. Modularity does solve the parallel availability problem. It was never designed to solve the parallel installability problem.
And that is, in my opinion, the root source of all the issues that are currently plaguing Modularity. Parallel availability without parallel installability can only lead to problems. This is just a new, shiny version of DLL hell. Thanks, I hate it.
+1 I totally agree
Alexander Bokovoy wrote:
You are mixing up parallel availability and parallel installability. These aren't the same. Modularity does solve the parallel availability problem. It was never designed to solve the parallel installability problem.
… which is exactly why it causes version hell.
I think this requirement is not only unreasonable but also detrimental to the project, because it basically doubles the amount of work volunteers have to do.
Merging the modular specfile into the non-modular branches (with a fast-forward merge) is almost no work (it takes only seconds). If, for whatever reason, there need to be specfile differences between the modular and the non-modular versions, they can be handled with %if conditionals.
Simply providing the content of default modules in a non-modular way ignores the fact that you somehow need to be able to rebuild those packages, and they might depend in their build dependencies on packages from other modules, including non-default streams.
The default version of a package should NEVER depend on a non-default version of another package. That is just a recipe for version hell.
If you really cannot fix your package to build with the default version of some other package foo, then you should package the version N you need as a fooN compatibility package (where at least the runtime MUST be parallel-installable with the default foo, and the -devel parts SHOULD if possible), not as a module.
Kevin Kofler
On Thu, 17 Oct 2019, Kevin Kofler wrote:
Alexander Bokovoy wrote:
You are mixing up parallel availability and parallel installability. These aren't the same. Modularity does solve the parallel availability problem. It was never designed to solve the parallel installability problem.
… which is exactly why it causes version hell.
It does not cause version hell in a system where you cannot install multiple versions at the same time.
I think this requirement is not only unreasonable but also detrimental to the project, because it basically doubles the amount of work volunteers have to do.
Merging the modular specfile into the non-modular branches (with a fast-forward merge) is almost no work (it takes only seconds). If, for whatever reason, there need to be specfile differences between the modular and the non-modular versions, they can be handled with %if conditionals.
In a module stream, yaml definition for the module itself is what drives the choice of dependencies for individual packages at large. You can have a module stream that requires packages from a specific subset of streams of intermodular dependencies that cannot be easily reproduced with 'almost no work' as you claim. Individual packages might simply have no references to those specific version requirements at all; all setup is done by then MBS at the moment when you do a preparation of a build environment for that specific module stream build.
An individual package specfile might simply carry no details of the above, and thus, while it technically might not look very different from the non-modular branch, the result of building off the modular branch might be vastly different.
Simply providing content of default modules in non-modular way ignores the fact that you somehow need to be able to rebuild those packages and they might depend in their build dependencies on packages from other modules, including non-default streams.
The default version of a package should NEVER depend on a non-default version of another package. That is just a recipe for version hell.
Nope. Modules have dependencies, and build dependencies between modules effectively create the required build environment for the packages included in them. Where a top-level module has a default stream provided for users, the particular set of streams that dnf/yum implicitly enables in order to install packages from that stream does not need to consist of the 'default' streams of those other modules. If it did, we would never be able to build anything interesting out of such a modular structure.
If you really cannot fix your package to build with the default version of some other package foo, then you should package the version N you need as a fooN compatibility package (where at least the runtime MUST be parallel-installable with the default foo, and the -devel parts SHOULD if possible), not as a module.
There is no requirement for every single package variant to be parallel-installable, and it certainly will not be a requirement in the future for quite a lot of software. The MUST requirement above is what I do not accept, especially for complex server solutions. It is your choice of words, not a real requirement, and not even bound to a real-life need in many of the situations in the area I'm working in.
Alexander Bokovoy wrote:
On to, 17 loka 2019, Kevin Kofler wrote:
… which is exactly why it causes version hell.
It does not cause version hell in a system where you cannot install multiple versions at the same time.
You do not necessarily do that explicitly. The conflicting versions can be dragged in by versioned requirements in other modules, which are also allowed by design.
In a module stream, the YAML definition of the module itself is what drives the choice of dependencies for the individual packages. You can have a module stream that requires packages from a specific subset of streams of inter-module dependencies, and that setup cannot be reproduced with 'almost no work' as you claim. The individual packages might have no references to those specific version requirements at all; all of that setup is done by the MBS at the moment it prepares the build environment for that specific module stream build.
An individual package specfile might simply carry no details of the above, and thus, while it technically might not look very different from the non-modular branch, the result of building off the modular branch might be vastly different.
Building against the distribution's version of libraries instead of some arbitrarily picked version is pretty much the whole point of non-modular packages.
Where this is not possible, compatibility libraries have to be used, but that should be the exception rather than the rule.
The default version of a package should NEVER depend on a non-default version of another package. That is just a recipe for version hell.
Nope. Modules have dependencies, and build dependencies between modules effectively create the required build environment for the packages included in them. Where a top-level module has a default stream provided for users, the particular set of streams that dnf/yum implicitly enables in order to install packages from that stream does not need to consist of the 'default' streams of those other modules. If it did, we would never be able to build anything interesting out of such a modular structure.
I understand how this works, so there was no need to explain it again.
The issue is that building against an arbitrary version of a library will also lead to a runtime dependency on that version. Now if module A needs libfoo.so.1 and module B needs libfoo.so.2, and if those are packaged as a libfoo module with streams libfoo-1 and libfoo-2, modules A and B conflict and cannot be installed together.
This is why building against arbitrary versions of non-leaf modules is a recipe for version hell.
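A rough sketch of how such a conflict shows up in practice, reusing the hypothetical libfoo module and its two streams from the paragraph above:

dnf module enable libfoo:1    # pulled in for module A
dnf module enable libfoo:2    # needed by module B; DNF refuses, because only one
                              # stream of a module may be enabled at a time, and it
                              # asks you to run 'dnf module reset libfoo' first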
If you really cannot fix your package to build with the default version of some other package foo, then you should package the version N you need as a fooN compatibility package (where at least the runtime MUST be parallel-installable with the default foo, and the -devel parts SHOULD if possible), not as a module.
There is no requirement for every single package variant to be parallel-installable, and it certainly will not be a requirement in the future for quite a lot of software. The MUST requirement above is what I do not accept, especially for complex server solutions. It is your choice of words, not a real requirement, and not even bound to a real-life need in many of the situations in the area I'm working in.
Without that parallel installability requirement, version conflicts are an unavoidable reality.
Kevin Kofler
On to, 17 loka 2019, Kevin Kofler wrote:
Alexander Bokovoy wrote:
On to, 17 loka 2019, Kevin Kofler wrote:
… which is exactly why it causes version hell.
It does not cause version hell in a system where you cannot install multiple versions at the same time.
You do not necessarily do that explicitly. The conflicting versions can be dragged in by versioned requirements in other modules, which are also allowed by design.
In a module stream, the YAML definition of the module itself is what drives the choice of dependencies for the individual packages. You can have a module stream that requires packages from a specific subset of streams of inter-module dependencies, and that setup cannot be reproduced with 'almost no work' as you claim. The individual packages might have no references to those specific version requirements at all; all of that setup is done by the MBS at the moment it prepares the build environment for that specific module stream build.
An individual package specfile might simply carry no details of the above, and thus, while it technically might not look very different from the non-modular branch, the result of building off the modular branch might be vastly different.
Building against the distribution's version of libraries instead of some arbitrarily picked version is pretty much the whole point of non-modular packages.
Right, and building against a carefully chosen collection of dependencies is the whole point of modular packages. These are just two normal requirements that don't contradict each other most of the time.
Modular builds treat non-modular packages as a base environment to build on top of. Sure, maintainers of modular streams need to take care not to conflict with that base, but sometimes a conflict is intentional, as long as it also covers all the dependencies it breaks. See, for example, some of the scenarios in https://lists.centos.org/pipermail/centos-devel/2019-September/017774.html
Where this is not possible, compatibility libraries have to be used, but that should be the exception rather than the rule.
The default version of a package should NEVER depend on a non-default version of another package. That is just a recipe for version hell.
Nope. Modules have dependencies, and build dependencies between modules effectively create the required build environment for the packages included in them. Where a top-level module has a default stream provided for users, the particular set of streams that dnf/yum implicitly enables in order to install packages from that stream does not need to consist of the 'default' streams of those other modules. If it did, we would never be able to build anything interesting out of such a modular structure.
I understand how this works, so there was no need to explain it again.
The issue is that building against an arbitrary version of a library will also lead to a runtime dependency on that version. Now if module A needs libfoo.so.1 and module B needs libfoo.so.2, and if those are packaged as a libfoo module with streams libfoo-1 and libfoo-2, modules A and B conflict and cannot be installed together.
This is why building against arbitrary versions of non-leaf modules is a recipe for version hell.
You seem to be implying that whoever maintains a modular stream cannot be trusted to be doing reasonable work.
Dependencies aren't arbitrary; if they were, there would probably be no need to waste our time working on the whole build part. Whether that is useful to you or to some other subset of Fedora maintainers is not guaranteed. However, modular streams make it possible to solve problems, for some use cases, that you cannot easily solve otherwise within the same distribution. That is one part of the value they bring, and it seems to be constantly ignored with an overly negative tone.
If you really cannot fix your package to build with the default version of some other package foo, then you should package the version N you need as a fooN compatibility package (where at least the runtime MUST be parallel-installable with the default foo, and the -devel parts SHOULD if possible), not as a module.
There is no requirement for every single package variant to be parallel-installable, and it certainly will not be a requirement in the future for quite a lot of software. The MUST requirement above is what I do not accept, especially for complex server solutions. It is your choice of words, not a real requirement, and not even bound to a real-life need in many of the situations in the area I'm working in.
Without that parallel installability requirement, version conflicts are an unavoidable reality.
Sure, for those things that can be installed in parallel. That is not true for a vast amount of software; we have other means to deal with it beyond what is being discussed in this thread.
Alexander Bokovoy wrote:
On to, 17 loka 2019, Kevin Kofler wrote:
Building against the distribution's version of libraries instead of some arbitrarily picked version is pretty much the whole point of non-modular packages.
Right, and building against a carefully chosen collection of dependencies is the whole point of modular packages. These are just two normal requirements that don't contradict each other most of the time.
Building against one shared distribution version of the library foo or building against a packager-chosen module stream version of the library foo are requirements that are very much contradicting each other by definition.
Modular builds treat non-modular packages as a base environment to build on top of. Sure, maintainers of modular streams need to take care not to conflict with that base, but sometimes a conflict is intentional, as long as it also covers all the dependencies it breaks. See, for example, some of the scenarios in https://lists.centos.org/pipermail/centos-devel/2019-September/017774.html
Those are scenarios that are very specific to a long-term distribution such as RHEL or CentOS and do not commonly apply in Fedora.
In Fedora, you would typically ship a new FreeIPA in one of 2 ways: 1. as an official update to the existing Fedora release, if it is suitably compatible for that, OR 2. in the next Fedora release, which is, at any point in time, at most 6 months away. Users who really cannot wait can get the update from a Copr.
And in fact, FreeIPA in Fedora is not currently a module, as you pointed out in your mail.
You would also likely not need to build against a newer krb5 than what Fedora ships. Or if you do, points 1 and 2 above also apply for krb5.
That whole "too fast, too slow" thing is really an issue specific to LTS distributions and not a pressing issue for a fast-moving distribution such as Fedora at all.
This is why building against arbitrary versions of non-leaf modules is a recipe for version hell.
You seem to be implying that whoever maintains a modular stream cannot be trusted to be doing reasonable work.
This is not a trust thing. No amount of "reasonable work" can prevent a module depending on libfoo-1 and a (from the user's point of view entirely unrelated) module depending on libfoo-2 from conflicting. The only "reasonable work" to do there is to package libfoo1 and libfoo2 as parallel-installable packages (one of which will probably be called just libfoo, the other the suffixed name) instead of module streams to prevent the client applications from conflicting.
Dependencies aren't arbitrary; if they were, there would probably be no need to waste our time working on the whole build part. Whether that is useful to you or to some other subset of Fedora maintainers is not guaranteed. However, modular streams make it possible to solve problems, for some use cases, that you cannot easily solve otherwise within the same distribution. That is one part of the value they bring, and it seems to be constantly ignored with an overly negative tone.
[snip]
Sure, for those things that can be installed in parallel. That is not true for a vast amount of software; we have other means to deal with it beyond what is being discussed in this thread.
Everything can be installed in parallel if appropriately packaged.
Having done the packaging tricks to allow kdelibs3-devel and kdelibs4-devel to coexist (in the same /usr prefix, something upstream did not support), I know exactly what I am talking about. (And for the next major version, kf5-*-devel, we actually got upstream to care about this, so it is parallel-installable with kdelibs3-devel and kdelibs4-devel out of the box. That is really the ideal state to reach.)
Kevin Kofler
On to, 17 loka 2019, Kevin Kofler wrote:
Dependencies aren't arbitrary; if they were, there would probably be no need to waste our time working on the whole build part. Whether that is useful to you or to some other subset of Fedora maintainers is not guaranteed. However, modular streams make it possible to solve problems, for some use cases, that you cannot easily solve otherwise within the same distribution. That is one part of the value they bring, and it seems to be constantly ignored with an overly negative tone.
[snip]
Sure, for those things that can be installed in parallel. That is not true for a vast amount of software; we have other means to deal with it beyond what is being discussed in this thread.
Everything can be installed in parallel if appropriately packaged.
Having done the packaging tricks to allow kdelibs3-devel and kdelibs4-devel to coexist (in the same /usr prefix, something upstream did not support), I know exactly what I am talking about. (And for the next major version, kf5-*-devel, we actually got upstream to care about this, so it is parallel-installable with kdelibs3-devel and kdelibs4-devel out of the box. That is really the ideal state to reach.)
This does not work for server components and is not generalizable. For example, you cannot have multiple versions of Samba running on the same system. You cannot have multiple versions of FreeIPA running on the same system either. These server components have requirements beyond package installability.
We have an answer for those use cases with VMs and containers and they aren't requiring parallel installability.
Alexander Bokovoy wrote:
This does not work for server components and is not generalizable. For example, you cannot have multiple versions of Samba running on the same system. You cannot have multiple versions of FreeIPA running on the same system either. These server components have requirements beyond package installability.
Technically, you can, on a different port. Of course, this kind of service is probably more or less useless on a non-default port though.
But you would not be running multiple versions of the server at once. Why would you want to do that? You would possibly parallel-install the client libraries, if you have software linked to different versions of it, but why the server?
Servers are typically pretty much leaf applications and as such can be handled as any other leaf application, by shipping a default version in the distribution and alternate versions in a module. Of course, if the server links to the client library (e.g., MySQL and early versions of MariaDB used to do that, before the separate MariaDB Connector/C was introduced), then the module must include a version of the client library packaged in a way that does not conflict with the system version that client applications are linked to. But this can always be done.
We have an answer for those use cases with VMs and containers and they aren't requiring parallel installability.
Parallel installability of leaf software is not what I am proposing. It is only needed for libraries.
Kevin Kofler
On pe, 18 loka 2019, Kevin Kofler wrote:
Alexander Bokovoy wrote:
This does not work for server components and is not generalizable. For example, you cannot have multiple versions of Samba running on the same system. You cannot have multiple versions of FreeIPA running on the same system either. These server components have requirements beyond package installability.
Technically, you can, on a different port. Of course, this kind of service is probably more or less useless on a non-default port though.
But you would not be running multiple versions of the server at once. Why would you want to do that? You would possibly parallel-install the client libraries, if you have software linked to different versions of it, but why the server?
That's my point -- requiring parallel installability is not really a MUST, especially in my area. You are driving this requirement as if nothing else could solve your issues.
Servers are typically pretty much leaf applications and as such can be handled as any other leaf application, by shipping a default version in the distribution and alternate versions in a module. Of course, if the server links to the client library (e.g., MySQL and early versions of MariaDB used to do that, before the separate MariaDB Connector/C was introduced), then the module must include a version of the client library packaged in a way that does not conflict with the system version that client applications are linked to. But this can always be done.
Exactly, and this is what we do (in RHEL). We were thinking on making a similar setup for Samba AD in modules in Fedora, but didn't go too far because $TIME.
We have an answer for those use cases with VMs and containers and they aren't requiring parallel installability.
Parallel installability of leaf software is not what I am proposing. It is only needed for libraries.
Libraries I can understand.
Alexander Bokovoy wrote:
That's my point -- requiring parallel installability is not really a MUST, especially in my area. You are driving this requirement as if nothing else could solve your issues.
I am not. This is a strawman.
What I am saying is that modules on which other modules have versioned dependencies cause version conflicts if they are not parallel-installable (which is sadly the case now). There is no need for different versions of leaf modules to be parallel-installable in most cases. (It could be useful in some special cases, such as the game Battle for Wesnoth where savegames are only usable with a specific release branch, but that is not the common case. For the typical server application, I agree that it would be useless.)
Kevin Kofler
On pe, 18 loka 2019, Kevin Kofler wrote:
Alexander Bokovoy wrote:
That's my point -- requiring parallel installability is not really a MUST, especially in my area. You are driving this requirement as if nothing else could solve your issues.
I am not. This is a strawman.
What I am saying is that modules on which other modules have versioned dependencies cause version conflicts if they are not parallel-installable (which is sadly the case now). There is no need for different versions of leaf modules to be parallel-installable in most cases. (It could be useful in some special cases, such as the game Battle for Wesnoth where savegames are only usable with a specific release branch, but that is not the common case. For the typical server application, I agree that it would be useless.)
OK, so your argument is for having module streams installable in parallel in cases where they provide content that doesn't conflict at the RPM level.
I've been told this is not possible so far, due to the lack of the right metadata connections, top to bottom, between all the layers involved (rpm, libsolv, dnf, repositories, etc.). I don't know how far that holds, but I would imagine the same issue would apply to whatever technology is used to aggregate sets of packages in the collection of repositories.
This does not work for server components and is not generalizable. For example, you cannot have multiple versions of Samba running on the same system. You cannot have multiple versions of FreeIPA running on the same system either. These server components have requirements beyond package installability.
We have an answer for those use cases with VMs and containers and they aren't requiring parallel installability.
I am glad that this works for you.
<personal> I have been using Linux for 17 years as my only operating system, since I deleted the Windows partitions from my desktop in 2002. I started with Mandrake, moved on through Gentoo and Arch Linux, and am now on Fedora. And I am really proud that I can serve the Fedora community.
In return, I would love to use Fedora as my primary (and my only) operating system, because you use what you love, and what you love you want to make better. But if I cannot use it to run my graphical, musical, and typesetting software nicely from one single machine, I will not be able to use it as my primary system. Does that mean I will have to keep at least two PCs, one for testing Fedora and another one for having fun? Most bugs I have noticed came from using Fedora daily, not from looking into test environments.
I understand that server people need stuff, but we, the desktop people, need stuff too. Let's try to make everyone happy. I know we can do it if we want to. But still, I fear the day when I will have to put several containers together to stitch together a modern solution that any Linux distro provided 20 years ago.
I hope those days won't come. </personal>
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
On 10/16/19 7:36 PM, Kevin Kofler wrote:
It was never designed to solve parallel installability problem.
… which is exactly why it causes version hell.
Could you expand on that? Since a modular system currently prevents parallel version installation, it may provide suboptimal/obsolete versions, but I wouldn't call it 'version hell', which in my mind describes a system with multiple versions installed all over the place, causing confusion and uncertainty about ABI compatibility and patching status.
Perhaps you are saying that a hypothetical system allowing packaged parallel installed versions provides the authoritative registry that tracks such dependencies, and therefore does not have these problems?
On Thu, 17 Oct 2019 at 10:06, Przemek Klosowski via devel devel@lists.fedoraproject.org wrote:
On 10/16/19 7:36 PM, Kevin Kofler wrote:
It was never designed to solve parallel installability problem.
… which is exactly why it causes version hell.
Could you expand on that? Since a modular system currently prevents parallel version installation, it may provide suboptimal/obsolete versions, but I wouldn't call it 'version hell', which in my mind describes a system with multiple versions installed all over the place, causing confusion and uncertainty about ABI compatibility and patching status.
[For the sake of this strawman I am using libreoffice and evolution, chosen only because they are widely used; they could be any list of things, like httpd and varnish, or other things you would use together but which aren't being modularized together.]
People are going to add things to their modules to build whatever software they need. If I find that I need libfoo2-2.34 in libreoffice and you need libfoo2-2.40 in evolution, then only one of the two modules can be installed: you can have either libreoffice or evolution. Furthermore, whatever packages were built against the non-modular libfoo2-2.30 will either not work or be uninstalled, because the user decided they wanted libreoffice or evolution.
Each module 'owner' would have been making the best decision, with the time and energy they have, to use modules for their problem: get the latest software out for the most releases as easily as possible. They will probably have tried to get libfoo2 updated in the core, but found that the libfoo2 maintainer was not responding, or that a CVE came out that had to be fixed right away and the fix libreoffice needed required a particular version.
So after this happens, the libreoffice and evolution maintainers decide to coordinate their modules. That eventually turns into just one module... until they run into a problem with the firefox module, which is now pulling in a new libfoo2. They get fed up and retire all of libreoffice and evolution (and whatever else got pulled in as modular versions of X), or they combine with firefox.
Packages are like a supersaturated liquid below the freezing point. There are 20,000+ packages and ~400 active packagers. Little events are going to cause either tiny crystals to grow around a package (a module of one package, like the RHEL perl-CGI) or a giant crystal (the rust and java modules will probably grow until anything built with those languages has to be in the module), and it will happen very, very fast, much faster than expected; and like crystal growth it will have lots of faults and crack open in unexpected ways too.
Perhaps you are saying that a hypothetical system allowing packaged parallel installed versions provides the authoritative registry that tracks such dependencies, and therefore does not have these problems?
On 10/17/19 12:27 PM, Stephen John Smoogen wrote:
People are going to add things to their modules to build whatever software they need. If I find that I need libfoo2-2.34 in libreoffice and you need libfoo2-2.40 in evolution, then only one of the two modules can be installed: you can have either libreoffice or evolution.
Cap't Obvious here, but I think the logic is like this:
1. In an ideal world, software would build and run with the latest-greatest versions of everything as a default.
2. ...but in the real world we sometimes have to choose non-default versions. There's enough of this happening that we can't just say we'll work hard until we reach point 1.
3. Modularity allows choosing non-default versions, which is great for a particular application, but conflicts with other apps, forcing us to choose only one of them. This provides a working solution for at least some people, so it's useful for e.g. Red Hat, but it makes life hard for an end user who just wants a system with a complete set of software.
4. Such modularized solutions can be combined into usable systems by either containers or cooperating VMs, but again, that's harder for end users and has other undesirable consequences, e.g. it complicates security management.
The logical conundrum of modularity is that when we require non-default modules, it logically follows that there will be conflicts (if there weren't, we wouldn't need modules), and so we are forced all the way to point 4, unless we're lucky and happen not to need the packages that depend on conflicting modules.
The bottom line is that modularity is useful, but in the sense of insurance or fire extinguishers: it's good to have them but we should really hope that we won't have to use them.
If only there were a way to limit the scope of non-default modules to their dependencies, by using private library directories or something like that. I think it would solve the problem of parallel installation, would simplify upgrades by making it explicit what pulled them in in the first place, and would place joint responsibility for updates on these subsystems. This is essentially bundling, but exposed in the packaging system so it's more manageable.
Przemek Klosowski via devel wrote:
- Modularity allows choosing non-default versions, which is great for a particular application, but conflicts with other apps, forcing us to choose only one of them. This provides a working solution for at least some people, so it's useful for e.g. Red Hat, but it makes life hard for an end user who just wants a system with a complete set of software
Exactly. And we already have a solution for that (allowing users to choose non-default versions of libraries without introducing this type of conflict): it is called compatibility packages.
Kevin Kofler
On 16.10.2019 06:13, Neal Gompa wrote:
We cannot remove already existing default modules without further breaking things. Moreover, DNF will refuse to expose non-modular RPMs if it's aware of modular ones that have existed at some point. The best we can do is stop people from making more.
1. Require all modules to provide a non-modular version. 2. Disable modular repositories by default.
On system upgrade (or distro-sync), dnf will replace the modular versions with regular ones. Problem solved.
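A rough sketch of what that replacement step might look like on an upgraded system, with "foo" standing in for whichever module had been enabled (the exact behaviour would depend on how DNF ends up handling dropped default streams):

dnf module reset foo    # repeat per module: forget the stream enablement
dnf distro-sync         # sync installed packages to the versions in the regular repos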
enable a fully modularized distribution.
I hope this will never happen.
On Wed, Oct 16, 2019 at 12:15 AM Neal Gompa ngompa13@gmail.com wrote:
On Wed, Oct 16, 2019 at 12:11 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 9:07:51 PM MST Neal Gompa wrote:
On Wed, Oct 16, 2019 at 12:05 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
I think we can pretty much guarantee that's not going to happen. Unfortunately, modularization is a one-way road, given how modularity is implemented in DNF and how our distribution policies are currently structured.
It just means that people need to *really* think of the consequences of modularizing content, because there's basically no going back after that. We have no escape hatches or transition mechanisms to go from modular to non-modular variants of the same RPMs.
That's not what the proposal is. The proposal is to require a non-modular version, an "ursine package", for modular packages, instead of default modules.
We cannot remove already existing default modules without further breaking things. Moreover, DNF will refuse to expose non-modular RPMs if it's aware of modular ones that have existed at some point. The best we can do is stop people from making more.
This is currently accurate.
We have no process for de-modularization and I fully expect us to not have one ever, as the end goal of the modularity project is to enable a fully modularized distribution. Even RHEL 8 isn't a full realization of that vision.
This is not true. It should be *possible* to have a fully modularized distribution, but that isn't a specific goal for Fedora or RHEL.
Also, we *are* investigating ways that we could move RPMs out of modules, because this may be important for many reasons (such as moving a common dependency out of a module and back to the non-modular repo to be shared). We haven't figured this one out yet, but it's on the queue.
On Wed, Oct 16, 2019 at 08:31:10AM -0400, Stephen Gallagher wrote:
This is not true. It should be *possible* to have a fully modularized distribution, but that isn't a specific goal for Fedora or RHEL.
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are:
1. Users should have alternate streams of software available.
2. Those alternate streams should be able to have different lifecycles.
3. Packaging an individual stream for multiple outputs should be easier than before.
The idea of modularizing the whole distro isn't a bad vision, but we're aiming a little closer to home for now.
I'm perfectly happy with a lot of different ways to get to that goal. I think the modularity team has done a lot of amazing, hard work _even if we're not there yet_.
On Wed, Oct 16, 2019 at 02:48:13PM -0400, Matthew Miller wrote:
On Wed, Oct 16, 2019 at 08:31:10AM -0400, Stephen Gallagher wrote:
This is not true. It should be *possible* to have a fully modularized distribution, but that isn't a specific goal for Fedora or RHEL.
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are: 2. Those alternate streams should be able to have different lifecycles.
Hmm, it sounds like the Council hasn't taken into account the constraints on lifecycle of modules that we have slowly discovered during the last two years, constraints that are now part of FESCo-approved policy.
Essentially, modules in Fedora are only allowed to EOL at EOL of Fedora release. And to preserve stability for users, a.k.a. following the Update Policy, modules should only change to new major version at Fedora releases. This is exactly the same as for "normal" rpms.
The lifecycle of modules in Fedora must be the same as lifecycle of Fedora releases, so no "different lifecycle" is possible.
- Users should have alternate streams of software available.
- Packaging an individual stream for multiple outputs should be easier than before.
Those *are* useful goals, but they should not be tied to a specific technology; we should only care about the end result.
Thus, please replace "Our goals for modularity are" with "What we hope to achieve with modularity" or even "Our goal is for users to be able to".
Zbyszek
On Sun, Oct 20, 2019 at 11:09 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Wed, Oct 16, 2019 at 02:48:13PM -0400, Matthew Miller wrote:
On Wed, Oct 16, 2019 at 08:31:10AM -0400, Stephen Gallagher wrote:
This is not true. It should be *possible* to have a fully modularized distribution, but that isn't a specific goal for Fedora or RHEL.
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are: 2. Those alternate streams should be able to have different lifecycles.
Hmm, it sounds like the Council hasn't taken into account the constraints on lifecycle of modules that we have slowly discovered during the last two years, constraints that are now part of FESCo-approved policy.
Essentially, modules in Fedora are only allowed to EOL at EOL of Fedora release. And to preserve stability for users, a.k.a. following the Update Policy, modules should only change to new major version at Fedora releases. This is exactly the same as for "normal" rpms.
The lifecycle of modules in Fedora must be the same as lifecycle of Fedora releases, so no "different lifecycle" is possible.
Ok, just to be sure that I understand this correctly:
- module EOL dates must align with fedora release EOL dates,
- Update Policy is the same for modules as for normal packages,
- major package updates can only occur at "release upgrade" time
If I'm not suffering from too low blood levels of caffeine right now, then from these 3 constraints follows:
- default streams are basically useless (since they cannot target multiple fedora releases in most cases, due to the Update Policy), - flexible lifecycle advantages of modules do not apply to fedora, since module EOL dates must align with fedora release EOL dates.
Then, what *is* the benefit of using modules for "default" versions of fedora packages, if "default" streams have to usually be maintained separately for different fedora branches, just like normal packages, but with the *additional* overhead of Modularity - and additional work for maintainers of dependent packages?
Fabio
- Users should have alternate streams of software available.
- Packaging an individual stream for multiple outputs should be easier than before.
Those *are* useful goals, but they should not be tied to a specific technology; we should only care about the end result.
Thus, please replace "Our goals for modularity are" with "What we hope to achieve with modularity" or even "Our goal is for users to be able to".
Zbyszek
On Sun, Oct 20, 2019 at 11:35:37AM +0200, Fabio Valentini wrote:
On Sun, Oct 20, 2019 at 11:09 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Wed, Oct 16, 2019 at 02:48:13PM -0400, Matthew Miller wrote:
On Wed, Oct 16, 2019 at 08:31:10AM -0400, Stephen Gallagher wrote:
This is not true. It should be *possible* to have a fully modularized distribution, but that isn't a specific goal for Fedora or RHEL.
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are: 2. Those alternate streams should be able to have different lifecycles.
Hmm, it sounds like the Council hasn't taken into account the constraints on lifecycle of modules that we have slowly discovered during the last two years, constraints that are now part of FESCo-approved policy.
Essentially, modules in Fedora are only allowed to EOL at EOL of Fedora release. And to preserve stability for users, a.k.a. following the Update Policy, modules should only change to new major version at Fedora releases. This is exactly the same as for "normal" rpms.
The lifecycle of modules in Fedora must be the same as lifecycle of Fedora releases, so no "different lifecycle" is possible.
Ok, just to be sure that I understand this correctly:
- module EOL dates must align with fedora release EOL dates,
Yes, this was voted in https://pagure.io/modularity/issue/112#comment-553234
Allow maintainers to specify that a module stream will live until the EOL date of a particular Fedora release or EPEL minor release, with special cases for "just keep building until I say otherwise"?
and approved in https://pagure.io/modularity/issue/112#comment-562677. (I'm providing exact links because it's hard to find.)
- Update Policy is the same for modules as for normal packages,
- major package updates can only occur at "release upgrade" time
I'm not sure if that is specified in plain text anywhere. The last image in https://pagure.io/modularity/working-documents/blob/master/f/lifecycles-upgr... shows that at least.
But the gist of https://docs.fedoraproject.org/en-US/fesco/Updates_Policy/#philosophy applies to modules too: if there's a module that has been released for some Fedora version, a major user-visible change would be just as disruptive for users as a major user-visible change in any package. This certainly applies to streams like "/stable" and "/version-nnn".
Maybe somebody from the Modularity team can provide clarification here and links to policy.
If I'm not suffering from too low blood levels of caffeine right now, then from these 3 constraints follows:
- default streams are basically useless (since they cannot target
multiple fedora releases in most cases, due to the Update Policy),
In general, yes. If the package versions have incompatibilities and/or user-visible changes, a different stream is needed for each Fedora release. There was a subthread about this recently, starting at https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/....
- flexible lifecycle advantages of modules do not apply to fedora,
since module EOL dates must align with fedora release EOL dates.
Then, what *is* the benefit of using modules for "default" versions of fedora packages, if "default" streams have to usually be maintained separately for different fedora branches, just like normal packages, but with the *additional* overhead of Modularity - and additional work for maintainers of dependent packages?
That is one of the questions we are trying to answer in this thread ;)
Zbyszek
On Sun, Oct 20, 2019 at 10:47:15AM +0000, Zbigniew Jędrzejewski-Szmek wrote:
On Sun, Oct 20, 2019 at 11:35:37AM +0200, Fabio Valentini wrote:
On Sun, Oct 20, 2019 at 11:09 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Wed, Oct 16, 2019 at 02:48:13PM -0400, Matthew Miller wrote:
On Wed, Oct 16, 2019 at 08:31:10AM -0400, Stephen Gallagher wrote:
This is not true. It should be *possible* to have a fully modularized distribution, but that isn't a specific goal for Fedora or RHEL.
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are: 2. Those alternate streams should be able to have different lifecycles.
Hmm, it sounds like the Council hasn't taken into account the constraints on lifecycle of modules that we have slowly discovered during the last two years, constraints that are now part of FESCo-approved policy.
Essentially, modules in Fedora are only allowed to EOL at EOL of Fedora release. And to preserve stability for users, a.k.a. following the Update Policy, modules should only change to new major version at Fedora releases. This is exactly the same as for "normal" rpms.
The lifecycle of modules in Fedora must be the same as lifecycle of Fedora releases, so no "different lifecycle" is possible.
Ok, just to be sure that I understand this correctly:
- module EOL dates must align with fedora release EOL dates,
Yes, this was voted in https://pagure.io/modularity/issue/112#comment-553234
Allow maintainers to specify that a module stream will live until the EOL date of a particular Fedora release or EPEL minor release, with special cases for "just keep building until I say otherwise"?
and approved in https://pagure.io/modularity/issue/112#comment-562677. (I'm providing exact links because it's hard to find.)
- Update Policy is the same for modules as for normal packages,
- major package updates can only occur at "release upgrade" time
I'm not sure if that is specified in plain text anywhere. The last image in https://pagure.io/modularity/working-documents/blob/master/f/lifecycles-upgr... shows that at least.
But the gist of https://docs.fedoraproject.org/en-US/fesco/Updates_Policy/#philosophy applies to modules too: if there's a module that has been released for some Fedora version, a major user-visible change would be just as disruptive for users as a major user-visible change in any package. This certainly applies to streams like "/stable" and "/version-nnn".
Maybe somebody from the Modularity team can provide clarification here and links to policy.
If I'm not suffering from too low blood levels of caffeine right now, then from these 3 constraints follows:
- default streams are basically useless (since they cannot target
multiple fedora releases in most cases, due to the Update Policy),
In general, yes. If the package versions have incompatibilities and/or user-visible changes, a different stream is needed for each Fedora release. There was a subthread about this recently, starting at https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/....
Wait, did I just reply to you quoting your earlier mail? Oops, I did. Smooge is right, this thread has started looping.
Zbyszek
- flexible lifecycle advantages of modules do not apply to fedora,
since module EOL dates must align with fedora release EOL dates.
Then, what *is* the benefit of using modules for "default" versions of fedora packages, if "default" streams have to usually be maintained separately for different fedora branches, just like normal packages, but with the *additional* overhead of Modularity - and additional work for maintainers of dependent packages?
That is one of the questions we are trying to answer in this thread ;)
On Sun, Oct 20, 2019 at 10:47:15AM +0000, Zbigniew Jędrzejewski-Szmek wrote:
In general, yes. If the package versions have incompatibilities and/or user-visible changes, a different stream is needed for each Fedora release. There was a subthread about this recently, starting at
In this case, of course, there needs to be a good way for users to be moved from stream to stream at upgrade time in a non-disruptive way. I know work is in progress on that.
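Until such tooling exists, the manual way to hop streams at upgrade time looks roughly like this, using hypothetical module and stream names:

dnf module reset foo          # drop the old stream choice
dnf module enable foo:v1.1    # opt in to the new stream
dnf distro-sync 'foo*'        # move the installed packages over to it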
On Sun, Oct 20, 2019 at 09:07:27AM +0000, Zbigniew Jędrzejewski-Szmek wrote:
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are: 2. Those alternate streams should be able to have different lifecycles.
Hmm, it sounds like the Council hasn't taken into account the constraints on lifecycle of modules that we have slowly discovered during the last two years, constraints that are now part of FESCo-approved policy.
Essentially, modules in Fedora are only allowed to EOL at EOL of Fedora release. And to preserve stability for users, a.k.a. following the Update Policy, modules should only change to new major version at Fedora releases. This is exactly the same as for "normal" rpms.
This seems appropriate for default streams, but modules should be able to have alternate, opt-in streams which either a) update on a rolling or other cadence, or b) keep building the older version across the release boundary.
The tooling should make this clear to the users.
The lifecycle of modules in Fedora must be the same as lifecycle of Fedora releases, so no "different lifecycle" is possible.
- Users should have alternate streams of software available.
- Packaging an individual stream for multiple outputs should be easier than before.
Those *are* useful goals, but they should not be tied to a specific technology; we should only care about the end result.
Yes, that's true from a Council point of view.
However, I also have a pretty strong bias towards people who showed up to do the work, and the decisions they've made. That doesn't mean we're stuck and can't adjust -- in fact, adjusting as we've gone along is a lot of why we're where we are now. But unless someone shows up with people-power and funding to do it, I take kind of a skeptical view of proposals to start a whole new approach from scratch.
Thus, please replace "Our goals for modularity are" with "What we hope to achieve with modularity" or even "Our goal is for users to be able to".
I don't really see a meaningful difference there.
On Wed, Oct 23, 2019 at 12:56:41PM -0400, Matthew Miller wrote:
However, I also have a pretty strong bias towards people who showed up to do the work, and the decisions they've made. That doesn't mean we're stuck and can't adjust -- in fact, adjusting as we've gone along is a lot of why we're where we are now. But unless someone shows up with people-power and funding to do it, I take kind of a skeptical view of proposals to start a whole new approach from scratch.
Sure, people who show up to do the work get to choose what happens. But we can't take it to an extreme: there must always be an option to back out of an idea. Every Change page is required to fill out a Contingency Plan, and yes, we do occasionally execute those. There are decisions which need to be implemented for us to see all the benefits and drawbacks, and in those cases a lot of work is wasted when the contingency plan is enacted. See, for example, the recent proposal by Ben Cotton to use Taiga: it was 90% implemented before some drawbacks became visible, and Ben and Manas took the high road and yanked it.
In fact, the amount of work that has gone into a project is not a reason to keep trying. If anything, the opposite is true — the more person-hours have been "consumed" the more that indicates that the idea is not workable. As we learn from the implementation, we understand our goals and limitations better, and sometimes we need to take a hard look and say the expected *remaining* amount of work is too big. The fact that people showed up and put in work is not the final consideration.
Because this keeps coming up, we talked about this at the Fedora Council meeting today. Our goals for modularity are: 2. Those alternate streams should be able to have different lifecycles.
Hmm, it sounds like the Council hasn't taken into account the constraints on lifecycle of modules that we have slowly discovered during the last two years, constraints that are now part of FESCo-approved policy.
Essentially, modules in Fedora are only allowed to EOL at EOL of Fedora release. And to preserve stability for users, a.k.a. following the Update Policy, modules should only change to new major version at Fedora releases. This is exactly the same as for "normal" rpms.
This seems appropriate for default streams, but modules should be able to have alternate, opt-in streams which either a) update on a rolling or other cadence, or b) keep building the older version across the release boundary.
The tooling should make this clear to the users.
Yes, default streams, but also streams that other streams or packages depend on. Having rolling updates or updates with an independent cadence is OK if you are a leaf module and the users opt in to those changes. But as soon as people try to build other packages on top, or try to use such modules in production as dependencies of other things, this breaks down. The general rule in Fedora is that you get version bumps and non-backwards-compatible behaviour changes between releases, giving users and other packagers a clear point in time to expect this. Packages (and streams) can also only be retired at Fedora release EOL.
This is a policy choice, not a technical matter. If modules became more popular, and the dependencies between modules grew, we'd need to settle on similar rules, where bigger changes are done with a certain cadence. This is why I think that the "independent lifecycles for modules" are illusory, made possible by current scarcity of modules.
(Or to look at this from another POV: if we want to give users access to a rolling version of some package, we can do it just as well without modules. In fact, we already do, with the kernel, with firefox, and probably a bunch of other packages where this makes sense. For leaf packages this works. If we want to give users e.g. rolling postgresql, we could provide postgresql-rolling package. Maybe we should.)
Zbyszek
This is a policy choice, not a technical matter. If modules became more
popular, and the dependencies between modules grew, we'd need to settle on similar rules, where bigger changes are done with a certain cadence. This is why I think that the "independent lifecycles for modules" are illusory, made possible by current scarcity of modules.
Currently (F31), there are about 63 modules listed with *dnf module list*. I have attempted to install all modules and all streams. Between individual installations I always reverted the system to its default state, i.e. modular repos enabled but no modules installed, enabled, disabled, or anything else.
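A rough sketch of such a per-module cycle, with nodejs:12 standing in for whichever module and stream was being checked:

dnf module install nodejs:12    # install the stream's default profile
dnf history undo last           # remove the packages that transaction installed
dnf module reset nodejs         # drop the enablement, back to the default state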
From those 63 modules,
- about 18 are not correctly defined according to the criteria that I had agreed on with Stephen, https://pagure.io/modularity/issue/149.
- about 8 modules cannot be installed because of dependency problems: #1764546, #1764616, #1764623, #1764624, #1764606, #1764606, #1764611, #1764604
In some of the cases, the packagers themselves report that a particular module should NOT be included in that particular version of Fedora, currently 31, but it still IS.
So it is not just the tooling; the content is problematic as well. It is not ready and nobody seems to care, as there are reported bugs that have not been resolved for several weeks. And since we do not currently block on modular sanity, we cannot enforce anything.
As far as tooling is concerned, I have been seeing complaints about DNF doing a bad job, but from the perspective of acceptance testing, it's the DNF operations that usually work fine with installing, enabling, disabling, removing, resetting and switching modules and streams.
I believe that if modularity were opt-in, we would be able to use it just fine as it is designed now, with some small tweaks, such as DNF providing enough information on retired or discontinued streams and offering the possibility to choose a different stream to switch to on upgrades. The longing for a default modular Fedora is what makes it more problematic, because then we need to hide everything from the users and make everything work automatically. If modularity were a matter of personal choice, we would not have to hide anything from anybody, because those users would be able to read the necessary documentation and tweak their systems just fine.
---
Lukáš Růžička
FEDORA QE, RHCE
Red Hat
Purkyňova 115
612 45 Brno - Královo Pole
lruzicka@redhat.com TRIED AND PERSONALLY TESTED, ERGO TRUSTED. https://redhat.com/trusted
On Thu, Oct 24, 2019 at 4:31 AM Lukas Ruzicka lruzicka@redhat.com wrote:
As far as tooling is concerned, I have been seeing complaints about DNF doing a bad job, but from the perspective of acceptance testing, it's the DNF operations that usually work fine with installing, enabling, disabling, removing, resetting and switching modules and streams.
I believe that if modularity were opt-in, we would be able to use it just fine as it is designed now, with some small tweaks, such as DNF providing enough information on retired or discontinued streams and offering the possibility to choose a different stream to switch to on upgrades. The longing for a default modular Fedora is what makes it more problematic, because then we need to hide everything from the users and make everything work automatically. If modularity were a matter of personal choice, we would not have to hide anything from anybody, because those users would be able to read the necessary documentation and tweak their systems just fine.
Unfortunately there have also been major performance regressions because of the additional work to handle modules being default enabled. The current handling of modules in DNF is not cheap. I'm not sure if this is because it uses libmodulemd1 vs libmodulemd2 or if it's because modules aren't implemented at the libsolv layer and can't be computed as part of the initial constraint set through the base solver. But whatever the reason, it is markedly slower than on systems that don't have modular repositories at all.
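For what it's worth, the slowdown is easy to check by turning the modular repositories off and comparing; a minimal sketch, assuming the stock Fedora repo IDs and dnf-plugins-core:

time dnf -q repoquery bash                                         # with modular metadata loaded
dnf config-manager --set-disabled fedora-modular updates-modular   # disable the modular repos
time dnf -q repoquery bash                                         # same query without them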
Neal Gompa wrote:
On Thu, Oct 24, 2019 at 4:31 AM Lukas Ruzicka lruzicka@redhat.com wrote:
I believe that if modularity were opt-in, we would be able to use it just fine as it is designed now, with some small tweaks, such as DNF providing enough information on retired or discontinued streams and offering the possibility to choose a different stream to switch to on upgrades. The longing for a default modular Fedora is what makes it more problematic, because then we need to hide everything from the users and make everything work automatically. If modularity were a matter of personal choice, we would not have to hide anything from anybody, because those users would be able to read the necessary documentation and tweak their systems just fine.
Unfortunately there have also been major performance regressions because of the additional work to handle modules being default enabled. The current handling of modules in DNF is not cheap. I'm not sure if this is because it uses libmodulemd1 vs libmodulemd2 or if it's because modules aren't implemented at the libsolv layer and can't be computed as part of the initial constraint set through the base solver. But whatever the reason, it is markedly slower than on systems that don't have modular repositories at all.
All these are convincing technical arguments for disabling Modularity by default from F32 onwards. (It is unfortunate that these have not been considered for the existing releases, but we cannot turn the time back, we have to focus on improving things for the upcoming releases, hence "from F32 onwards". I would even propose it from F31 onwards, but I do not think FESCo is willing to delay the F31 release long enough to implement the required demodularization upgrade path in DNF, hence F32.)
Kevin Kofler
On Thu, Oct 24, 2019 at 12:42:27PM +0200, Kevin Kofler wrote:
Neal Gompa wrote:
On Thu, Oct 24, 2019 at 4:31 AM Lukas Ruzicka lruzicka@redhat.com wrote:
I believe that if modularity were opt-in, we would be able to use it just fine as it is designed now, with a few small tweaks, such as DNF providing enough information on retired or discontinued streams and offering the possibility to choose a different stream to switch to on upgrades. The desire for a default modular Fedora is what makes it more problematic, because we need to hide everything from the users and make everything work automatically. If modularity were a matter of personal choice, we would not have to hide anything from anybody, because those users would be able to read the necessary documentation and tweak their systems just fine.
Unfortunately there have also been major performance regressions because of the additional work to handle modules being default enabled. The current handling of modules in DNF is not cheap. I'm not sure if this is because it uses libmodulemd1 vs libmodulemd2 or if it's because modules aren't implemented at the libsolv layer and can't be computed as part of the initial constraint set through the base solver. But whatever the reason, it is markedly slower than on systems that don't have modular repositories at all.
All these are convincing technical arguments for disabling Modularity by default from F32 onwards. (It is unfortunate that these have not been considered for the existing releases, but we cannot turn the time back, we have to focus on improving things for the upcoming releases, hence "from F32 onwards". I would even propose it from F31 onwards, but I do not think FESCo is willing to delay the F31 release long enough to implement the required demodularization upgrade path in DNF, hence F32.)
Yes, F32. Doing any significant changes to F31 at this point is out of question. Please remember that if this path is taken, we'll have to demodularize various packages, and this will take time too.
Zbyszek
On Wed, Oct 16, 2019 at 12:05 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
It may be more work to go backwards than forwards at this point. Modularity does provide some useful feature additions, so to my mind it makes more sense to properly fix the issues we have with it rather than expend enormous amounts of energy to remove those features and revert to the old way of doing things. And, yes, reduce Fedora's value to Red Hat in the process.
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
On Wednesday, October 16, 2019, Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 12:05 AM John M. Harris Jr johnmh@splentity.com wrote:
On Tuesday, October 15, 2019 6:26:31 PM MST Stephen Gallagher wrote:
given that we're talking about the need to migrate defaults
To clarify, that has not been decided, and a prominent option mentioned in this thread is the option to simply require that there is a non-modular package.
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
It may be more work to go backwards than forwards at this point. Modularity does provide some useful feature additions, so to my mind it makes more sense to properly fix the issues we have with it rather than expend enormous amounts of energy to remove those features and revert to the old way of doing things. And, yes, reduce Fedora's value to Red Hat in the process.
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
That's just oversimplified - if you find that the path you are on is the wrong one, then just moving forward is not necessarily the correct thing to do.
Going backwards to get to a saner state is a worthwhile thing to do. I have yet to see an argument for how replacing existing packages with modules, or providing default streams by default, helps to reach the objective of 'parallel availability' - by dropping default modules you get pretty much that without the downsides.
Stephen Gallagher wrote:
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
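A rough sketch of the manual equivalent of that second approach, using commands that already exist today (the module name is only an example; config-manager comes from dnf-plugins-core):

# Disable the modular repositories so their metadata is no longer read:
sudo dnf config-manager --set-disabled fedora-modular updates-modular
# Clear the enablement record of a stream that is no longer available
# (nodejs is only an illustrative module name):
sudo dnf module reset nodejs
# Let distro-sync reconcile installed packages against the remaining repos:
sudo dnf distro-sync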
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
If you are standing in front of a cliff, moving forward is just not the answer. Not all changes are improvements. Sometimes, you have to realize that you made a mistake and move back before things only get worse.
The overwhelmingly negative feedback that you are getting is a clear indication that something is wrong. You should not ignore it or summarily file it off as luddites wanting to return to the past. There are real issues with modules, and the Modularity WG is only offering partial workarounds (adding more and more complexity) and no real fixes.
I have provided above 2 possible approaches to address the "migrating back" issue.
Kevin Kofler
On Wed, Oct 16, 2019 at 7:58 PM Kevin Kofler kevin.kofler@chello.at wrote:
Stephen Gallagher wrote:
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
Given that many of the modules in the distribution currently are there specifically to provide newer, less-stable versions of the versions in the non-modular/default stream, this would be fairly disastrous. For example, Node.js 10.x is the default stream in Fedora 30 because it's the LTS branch. It also provides a non-default stream for 8.x (the previous LTS branch that a lot of applications still rely on) and 12.x, which will be the next LTS branch in November, but is not guaranteed stable yet. With the approach you're describing, everyone would be forcibly updated to the unstable 12.x release.
The only way we could assure ourselves that this wouldn't happen would be to do another mass-rebuild, bumping the epoch of every package that exists in a module. That's a lot of work.
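To make the EVR point concrete, here is a small illustration with made-up version strings; rpmdev-vercmp comes from the rpmdevtools package:

# A modular 12.x build with Epoch 0 outranks a non-modular 10.x build
# unless the non-modular package bumps its Epoch:
rpmdev-vercmp 0:12.11.1-1.fc30 1:10.16.3-1.fc30
# reports 0:12.11.1-1.fc30 < 1:10.16.3-1.fc30, because Epoch outranks Version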
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire. Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
If you are standing in front of a cliff, moving forward is just not the answer. Not all changes are improvements. Sometimes, you have to realize that you made a mistake and move back before things only get worse.
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
The overwhelmingly negative feedback that you are getting is a clear indication that something is wrong. You should not ignore it or summarily file it off as luddites wanting to return to the past. There are real issues with modules, and the Modularity WG is only offering partial workarounds (adding more and more complexity) and no real fixes.
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit. And while they add some additional complexity on the *infrastructure*, a primary goal is to not make the users or packagers' lives harder. We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
I have provided above 2 possible approaches to address the "migrating back" issue.
No, you've decided on the outcome you want to see and have invented a path to get there that doesn't align with the realities of the present.
On Wed, Oct 16, 2019 at 8:27 PM Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 7:58 PM Kevin Kofler kevin.kofler@chello.at wrote:
Stephen Gallagher wrote:
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
Given that many of the modules in the distribution currently are there specifically to provide newer, less-stable versions of the versions in the non-modular/default stream, this would be fairly disastrous. For example, Node.js 10.x is the default stream in Fedora 30 because it's the LTS branch. It also provides a non-default stream for 8.x (the previous LTS branch that a lot of applications still rely on) and 12.x, which will be the next LTS branch in November, but is not guaranteed stable yet. With the approach you're describing, everyone would be forcibly updated to the unstable 12.x release.
The only way we could assure ourselves that this wouldn't happen would be to do another mass-rebuild, bumping the epoch of every package that exists in a module. That's a lot of work.
We could let "dnf distro-sync" take care of it. Rebuilds to remove RPMTAG_MODULARITYLABEL from the package headers would be necessary, but otherwise nothing else should need to change.
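For reference, the header tag in question can be inspected on an installed package like this (nodejs is just an example package name):

rpm -q --qf '%{MODULARITYLABEL}\n' nodejs
# a modular build prints a name:stream:version:context label;
# a non-modular (or rebuilt) package prints (none)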
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire. Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
It was damaging when it was happening before we have a way to depend on modules from non-modular content. It essentially forces other packagers to move to modules too. It's a snowball effect. And *right now* modularization is a one way road. I'm pleased to hear that we will get a way to demodularize, but currently we don't have it.
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
If you are standing in front of a cliff, moving forward is just not the answer. Not all changes are improvements. Sometimes, you have to realize that you made a mistake and move back before things only get worse.
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
The overwhelmingly negative feedback that you are getting is a clear indication that something is wrong. You should not ignore it or summarily file it off as luddites wanting to return to the past. There are real issues with modules, and the Modularity WG is only offering partial workarounds (adding more and more complexity) and no real fixes.
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit. And while they add some additional complexity on the *infrastructure*, a primary goal is to not make the users or packagers' lives harder. We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
I think we must have a good UX for handling module transitions. And I know you've mentioned upthread that you want this to be a painful experience, but I would argue that it should not be. We have gone to great lengths to smoothly handle transitions for non-modular content for the past five years. I think we should consider that modules should grow the same facilities. The moment you made modules have dependencies, you basically set it up for requiring the full dependency expression model that RPM has.
At minimum, we need the following:
* Provides
* Requires
* Obsoletes
Without these three, we can't do modular transitions cleanly across releases.
We also need the platform module runtime dependency to become an optional property. In a "modular" world, it's going to be impossible to get rid of modules to upgrade across system releases, so we need modules to not be tightly bound to the platform when they don't need to be. The underlying property here should be split in two, effectively BuildRequires and Requires.
It needs to be possible to have orphaned modules on the system. Without that, smooth and seamless system upgrades are going to be *very* hard. We've never done this to non-modular packages, it's kind of insane to do that for modular ones.
On Wed, Oct 16, 2019 at 8:44 PM Neal Gompa ngompa13@gmail.com wrote:
On Wed, Oct 16, 2019 at 8:27 PM Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 7:58 PM Kevin Kofler kevin.kofler@chello.at wrote:
Stephen Gallagher wrote:
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
Given that many of the modules in the distribution currently are there specifically to provide newer, less-stable versions of the versions in the non-modular/default stream, this would be fairly disastrous. For example, Node.js 10.x is the default stream in Fedora 30 because it's the LTS branch. It also provides a non-default stream for 8.x (the previous LTS branch that a lot of applications still rely on) and 12.x, which will be the next LTS branch in November, but is not guaranteed stable yet. With the approach you're describing, everyone would be forcibly updated to the unstable 12.x release.
The only way we could assure ourselves that this wouldn't happen would be to do another mass-rebuild, bumping the epoch of every package that exists in a module. That's a lot of work.
We could let "dnf distro-sync" take care of it. Rebuilds to remove RPMTAG_MODULARITYLABEL from the package headers would be necessary, but otherwise nothing else should need to change.
That would still lead to upgrading to the highest NEVRA though. Which is problematic as I mentioned above.
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire. Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
It was damaging when it was happening before we have a way to depend on modules from non-modular content. It essentially forces other packagers to move to modules too. It's a snowball effect. And *right now* modularization is a one way road. I'm pleased to hear that we will get a way to demodularize, but currently we don't have it.
That's a fair observation. I can only plead my team's lack of omnipotence and our willingness to correct our mistakes.
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
If you are standing in front of a cliff, moving forward is just not the answer. Not all changes are improvements. Sometimes, you have to realize that you made a mistake and move back before things only get worse.
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
The overwhelmingly negative feedback that you are getting is a clear indication that something is wrong. You should not ignore it or summarily file it off as luddites wanting to return to the past. There are real issues with modules, and the Modularity WG is only offering partial workarounds (adding more and more complexity) and no real fixes.
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit. And while they add some additional complexity on the *infrastructure*, a primary goal is to not make the users or packagers' lives harder. We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
I think we must have a good UX for handling module transitions. And I know you've mentioned upthread that you want this to be a painful experience,
To be clear, I want it to be a painful experience for *arbitrary* module transitions. I think it absolutely needs to have good UX for *planned* and *tested* transitions. I have proposed some ideas for that.
but I would argue that it should not be. We have gone to great lengths to smoothly handle transitions for non-modular content for the past five years. I think we should consider that modules should grow the same facilities. The moment you made modules have dependencies, you basically set it up for requiring the full dependency expression model that RPM has.
At minimum, we need the following:
- Provides
- Requires
- Obsoletes
Without these three, we can't do modular transitions cleanly across releases.
"Begging the question": You're asserting this without context. I proposed something upthread (as a direct reply to my original message) involving "upgrades:" and "obsoletes:", because I think that might be a cleaner approach than just relying on the default stream of the repos you have enabled. I don't know that Provides and Requires are useful though. Explain it to me, please?
We also need the platform module runtime dependency to become an optional property. In a "modular" world, it's going to be impossible to get rid of modules to upgrade across system releases, so we need modules to not be tightly bound to the platform when they don't need to be. The underlying property here should be split in two, effectively BuildRequires and Requires.
It needs to be possible to have orphaned modules on the system. Without that, smooth and seamless system upgrades are going to be *very* hard. We've never done this to non-modular packages, it's kind of insane to do that for modular ones.
That's an interesting thought and one I hadn't seen put to words yet. It is certainly worth exploring what cases that would benefit. Do you have some examples you could share?
Currently, our default stance has been "disallow the system upgrade if the modules they've locked onto won't be available there". This is based on our philosophy that ultimately "the app is what matters". Most people don't install Linux because they enjoy clicking buttons in Anaconda. They install Linux because they have an application they want to deploy. We want our upgrade process to be focused on *keeping that app running*. When possible, we want them to be able to say "I need Node.js 10.x" and even if the system default becomes 12.x or 14.x, as long as a 10.x stream exists in the next Fedora release, they should be able to upgrade their base system without breaking their application. With that philosophy in mind, blocking the upgrade seems more user-friendly than allowing the upgrade to proceed with a possibly-unusable Node.js 10.x installation on the system. I'd rather they see that they have a conflict to resolve and deal with either porting their system to the newer Node.js stream or else switch to a distro like RHEL which will maintain that stream past its upstream EOL.
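A minimal sketch of that "keep the app running" flow in dnf terms, assuming the stream still exists in the target release (stream and profile names are illustrative):

# Explicitly pin the stream the application needs:
sudo dnf module enable nodejs:10
sudo dnf module install nodejs:10/default
# An explicitly enabled stream is kept across a release upgrade rather
# than being moved to the new default, as long as it is still shipped.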
-- 真実はいつも一つ!/ Always, there's only one truth!
But an infinite number of observer biases!
Stephen Gallagher wrote:
Currently, our default stance has been "disallow the system upgrade if the modules they've locked onto won't be available there". This is based on our philosophy that ultimately "the app is what matters". Most people don't install Linux because they enjoy clicking buttons in Anaconda. They install Linux because they have an application they want to deploy
You have to consider that not all applications are as important as keeping up with the distribution lifecycle itself. If I have Fedora deployed in a bunch of places, I need to be able to move to the next, supported release if the current release I am running is nearing EOL. At that point, if a module is orphaned and it happens to be a leaf application (say the bat utility, which is currently provided as a module and one I happen to use), I don't really want it blocking my ability to upgrade. I would certainly like to be informed about the fact, but I would want to get to the next release anyway.
Rahul
On Wed, Oct 16, 2019 at 9:14 PM Rahul Sundaram metherid@gmail.com wrote:
Stephen Gallagher wrote:
Currently, our default stance has been "disallow the system upgrade if the modules they've locked onto won't be available there". This is based on our philosophy that ultimately "the app is what matters". Most people don't install Linux because they enjoy clicking buttons in Anaconda. They install Linux because they have an application they want to deploy
You have to consider that not all applications are as important as keeping up with the distribution lifecycle itself. If I have Fedora deployed in a bunch of places, I need to be able to move to the next, supported release if the current release I am running is nearing EOL. At that point, if a module is orphaned and it happens to be a leaf application (say the bat utility, which is currently provided as a module and one I happen to use), I don't really want it blocking my ability to upgrade. I would certainly like to be informed about the fact, but I would want to get to the next release anyway.
If that's the case, the most obvious way to inform you is to disallow the upgrade and have you resolve it by doing a `dnf module remove bat` and then rerunning the upgrade.
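For reference, the manual resolution being described would look roughly like this (bat as the example module, release number illustrative):

sudo dnf module remove --all bat   # remove the module's installed packages
sudo dnf module reset bat          # clear the stream enablement record
sudo dnf system-upgrade download --releasever=31
sudo dnf system-upgrade reboot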
On Wed, Oct 16, 2019 at 9:17 PM Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 9:14 PM Rahul Sundaram metherid@gmail.com wrote:
Stephen Gallagher wrote:
Currently, our default stance has been "disallow the system upgrade if the modules they've locked onto won't be available there". This is based on our philosophy that ultimately "the app is what matters". Most people don't install Linux because they enjoy clicking buttons in Anaconda. They install Linux because they have an application they want to deploy
You have to consider that not all applications are as important as keeping up with the distribution lifecycle itself. If I have Fedora deployed in a bunch of places, I need to be able to move to the next, supported release if the current release I am running is nearing EOL. At that point, if a module is orphaned and it happens to be a leaf application (say the bat utility, which is currently provided as a module and one I happen to use), I don't really want it blocking my ability to upgrade. I would certainly like to be informed about the fact, but I would want to get to the next release anyway.
If that's the case, the most obvious way to inform you is to disallow the upgrade and have you resolve it by doing a `dnf module remove bat` and then rerunning the upgrade.
When "bat" was non-modular, we didn't require this. Why does it being a module change this? The underlying RPMs still have their dependencies satisfied. If they didn't, DNF would elect to offer its removal as part of the upgrade after passing "--allowerasing". This behavior is sane, useful, and understandable. I don't see a reason it wouldn't map cleanly to modular content.
Hi
On Wed, Oct 16, 2019 at 9:21 PM Neal Gompa ngompa13@gmail.com wrote:
On Wed, Oct 16, 2019 at 9:17 PM Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 9:14 PM Rahul Sundaram metherid@gmail.com
wrote:
If that's the case, the most obvious way to inform you is to disallow the upgrade and have you resolve it by doing a `dnf module remove bat` and then rerunning the upgrade.
One could do that, yes, but it is helpful to have dnf essentially offer to do this as an option.
When "bat" was non-modular, we didn't require this. Why does it being a module change this? The underlying RPMs still have their dependencies satisfied. If they didn't, DNF would elect to offer its removal as part of the upgrade after passing "--allowerasing". This behavior is sane, useful, and understandable. I don't see a reason it wouldn't map cleanly to modular content.
Indeed. Before --allowerasing was implemented in dnf, and before dnf gained the feature of suggesting that users run it to work around broken dependencies, one could manually remove the offending packages to get out of that problem, and the "upgrading using yum" wiki page prominently suggested that workaround. --allowerasing was a step up in usability, however, and I wouldn't want orphaned or broken modules to be a hindrance to that. Again, in the case of bat, the underlying breakage was blocking updates for a while until I figured out the right steps, so this isn't merely a theoretical example either.
Rahul
On Wed, Oct 16, 2019 at 9:00 PM Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 8:44 PM Neal Gompa ngompa13@gmail.com wrote:
On Wed, Oct 16, 2019 at 8:27 PM Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 7:58 PM Kevin Kofler kevin.kofler@chello.at wrote:
Stephen Gallagher wrote:
An awful lot of people are repeating this as if it's a solution without understanding the existing architecture. Believe it or not, attempting to abandon default streams and go back to only non-modular content available by default is a lot harder than it sounds (or should be, but I noted that we're working on that in another reply elsewhere in the thread). There is currently no path to upgrades that would get back from the modular versions and the closest we could manage would be to rely on the dist-upgrade distro-sync, but in that case we *still* need to have DNF recognize that the default stream has changed (in this case, been dropped) and handle that accordingly.
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
Given that many of the modules in the distribution currently are there specifically to provide newer, less-stable versions of the versions in the non-modular/default stream, this would be fairly disastrous. For example, Node.js 10.x is the default stream in Fedora 30 because it's the LTS branch. It also provides a non-default stream for 8.x (the previous LTS branch that a lot of applications still rely on) and 12.x, which will be the next LTS branch in November, but is not guaranteed stable yet. With the approach you're describing, everyone would be forcibly updated to the unstable 12.x release.
The only way we could assure ourselves that this wouldn't happen would be to do another mass-rebuild, bumping the epoch of every package that exists in a module. That's a lot of work.
We could let "dnf distro-sync" take care of it. Rebuilds to remove RPMTAG_MODULARITYLABEL from the package headers would be necessary, but otherwise nothing else should need to change.
That would still lead to upgrading to the highest NEVRA though. Which is problematic as I mentioned above.
It'd be interesting if an "inverse filter" could be applied. Instead of modules shadowing non-modular content, the other way around would occur. That would make it easy to clean that up.
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire. Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
It was damaging when it was happening before we have a way to depend on modules from non-modular content. It essentially forces other packagers to move to modules too. It's a snowball effect. And *right now* modularization is a one way road. I'm pleased to hear that we will get a way to demodularize, but currently we don't have it.
That's a fair observation. I can only plead my team's lack of omnipotence and our willingness to correct our mistakes.
Sure. And I know personally that you guys were under far too much time pressure to figure out this problem. It would not have shown up in your thought processes, given the focus for the past 18 months.
I started this discussion to ask the community to help us identify the best path *forward*. An endless barrage of "kill it off" replies is not helpful or productive. If anyone has specific advice on how to move forward (or, indeed, if you can figure out how to migrate back without considerable release engineering and packager effort), that would be productive. Just please keep in mind that we have to go to war with the army we have, not the one we wish we had.
If you are standing in front of a cliff, moving forward is just not the answer. Not all changes are improvements. Sometimes, you have to realize that you made a mistake and move back before things only get worse.
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
The overwhelmingly negative feedback that you are getting is a clear indication that something is wrong. You should not ignore it or summarily file it off as luddites wanting to return to the past. There are real issues with modules, and the Modularity WG is only offering partial workarounds (adding more and more complexity) and no real fixes.
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit. And while they add some additional complexity on the *infrastructure*, a primary goal is to not make the users or packagers' lives harder. We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
I think we must have a good UX for handling module transitions. And I know you've mentioned upthread that you want this to be a painful experience,
To be clear, I want it to be a painful experience for *arbitrary* module transitions. I think it absolutely needs to have good UX for *planned* and *tested* transitions. I have proposed some ideas for that.
but I would argue that it should not be. We have gone to great lengths to smoothly handle transitions for non-modular content for the past five years. I think we should consider that modules should grow the same facilities. The moment you made modules have dependencies, you basically set it up for requiring the full dependency expression model that RPM has.
At minimum, we need the following:
- Provides
- Requires
- Obsoletes
Without these three, we can't do modular transitions cleanly across releases.
"Begging the question": You're asserting this without context. I proposed something upthread (as a direct reply to my original message) involving "upgrades:" and "obsoletes:", because I think that might be a cleaner approach than just relying on the default stream of the repos you have enabled. I don't know that Provides and Requires are useful though. Explain it to me, please?
Provides is needed because once modules are replaced, the module dependency expressions for third party modules should be satisfiable if they are API-compatible (in the normal sense). Ripping modules out of a distro is going to break things from an ISV perspective unless a mechanism is in place for something else to "slot in", so to speak. This is obviously paired with "Obsoletes", which I think is self-explanatory by now.
Another case for "Provides" would be alternative implementations of the same module. For example, a module of Java 11 could be done with Oracle Java (blech), OpenJDK, or OpenJ9. In all three variants, they are fully substitutable because they provide the same interfaces that other modules can depend on. But they aren't the same modules, because they're built on different "cores".
Requires is kind of obvious, we sort of already have it, it's just not well-defined.
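As a purely hypothetical sketch of what that could look like: the provides and obsoletes keys below do not exist in the current modulemd format, they are only meant to illustrate the idea, and the module names are invented:

document: modulemd
version: 2
data:
  name: java-openj9
  stream: "11"
  dependencies:
    - requires:
        platform: []
  # hypothetical keys, not part of modulemd today:
  provides:
    - java: ["11"]        # satisfies other modules' dependencies on java:11
  obsoletes:
    - java-oracle: ["11"] # replaces a retired alternative implementation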
We also need the platform module runtime dependency to become an optional property. In a "modular" world, it's going to be impossible to get rid of modules to upgrade across system releases, so we need modules to not be tightly bound to the platform when they don't need to be. The underlying property here should be split in two, effectively BuildRequires and Requires.
It needs to be possible to have orphaned modules on the system. Without that, smooth and seamless system upgrades are going to be *very* hard. We've never done this to non-modular packages, it's kind of insane to do that for modular ones.
That's an interesting thought and one I hadn't seen put to words yet. It is certainly worth exploring what cases that would benefit. Do you have some examples you could share?
In Rust, we have a lot of leaf modules. They ultimately provide an application that depends on stable interfaces that are binary compatible as you move forward (glibc, etc.). From that perspective, we should be able to treat modular software the same way we treat non-modular software.
And realistically speaking, modules should be able to stick around unless they can't. There's some complexity here because of the whole parallel-availability bit, but I think it's critical for a sustainably maintainable platform that we be able to do this.
Currently, our default stance has been "disallow the system upgrade if the modules they've locked onto won't be available there". This is based on our philosophy that ultimately "the app is what matters". Most people don't install Linux because they enjoy clicking buttons in Anaconda. They install Linux because they have an application they want to deploy. We want our upgrade process to be focused on *keeping that app running*. When possible, we want them to be able to say "I need Node.js 10.x" and even if the system default becomes 12.x or 14.x, as long as a 10.x stream exists in the next Fedora release, they should be able to upgrade their base system without breaking their application. With that philosophy in mind, blocking the upgrade seems more user-friendly than allowing the upgrade to proceed with a possibly-unusable Node.js 10.x installation on the system. I'd rather they see that they have a conflict to resolve and deal with either porting their system to the newer Node.js stream or else switch to a distro like RHEL which will maintain that stream past its upstream EOL.
It is incredibly rare that the situation you've contrived (though a valid one!) actually happens. In nearly all the cases I've seen while doing this, packages built for older Fedora releases don't immediately "go bad" when you upgrade. If they have no upgrade candidate and their dependencies are no longer satisfiable, then there's a problem. Otherwise, it's just more "orphaned" stuff that you can continue to use unless it's being replaced or upgraded.
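Two existing dnf commands already make that "orphaned but still working" state visible after an upgrade:

dnf list extras             # installed packages no longer in any enabled repo
dnf repoquery --unsatisfied # installed packages with unsatisfied dependencies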
If a nodejs 10 stream exists in the F+1 release, then keep it. But if it's going away and the maintainer has set it up to move to nodejs 12, then that should happen automatically, *regardless* of whether it's default or not; alternatively, if you want to permit orphaned modules, it could be a switch that chooses between upgrading modules and orphaning them. If the nodejs 10 stream goes away without an upgrade path in F+1, then it should be able to remain on the system if the RPM-level dependencies are still satisfied in F+1, which goes toward that "app primacy" philosophy.
Blocking upgrades is dangerous, because users usually don't have enough information to solve the problem in the way you're hoping.
-- 真実はいつも一つ!/ Always, there's only one truth!
But an infinite number of observer biases!
Heh, indeed. That's the first time in years someone has commented on my email signature. :)
Neal Gompa wrote:
It'd be interesting if an "inverse filter" could be applied. Instead of modules shadowing non-modular content, the other way around would occur. That would make it easy to clean that up.
And as I pointed out, if the proposal to require a non-modular default version for all packages were implemented, that "inverse filter" would actually only have to be the default for two releases (two to support upgrades skipping one release). Once the users are migrated to the non-modular default versions, the default could be safely changed back to the current filter.
Kevin Kofler
Stephen Gallagher wrote:
On Wed, Oct 16, 2019 at 8:44 PM Neal Gompa ngompa13@gmail.com wrote:
It was damaging when it was happening before we have a way to depend on modules from non-modular content. It essentially forces other packagers to move to modules too. It's a snowball effect. And *right now* modularization is a one way road. I'm pleased to hear that we will get a way to demodularize, but currently we don't have it.
That's a fair observation. I can only plead my team's lack of omnipotence and our willingness to correct our mistakes.
You do not need omnipotence to not deploy a change before the contingency plan is ready, only patience.
Kevin Kofler
There seems to be some confusion here as to the use cases of Fedora vs RHEL. What's good for RHEL is not necessarily what's good for Fedora. I'm sorry, but Fedora is not simply a sandbox to test things for RHEL, and that needs to be made clear.
I'm comfortable saying that most Fedora users are not installing the distro just to support one specific application, as one might with RHEL or CentOS, but to benefit from the Four Foundations of Fedora, in this case the most important ones being Freedom, Features and First.
It'd be great to have a working modular system, but since we don't seem to have that, it's not a good idea to force the broken implementation on users. We need to consider what is best for Fedora's users, not what is best for Red Hat, at least in my opinion.
I see no reason that dropping certain parts of Modularity from actual releases of Fedora will harm the relationship with Red Hat, as Stephen suggests. Such tests can, and probably should, be done in Rawhide, until they're actually ready for users.
So far, the best approach seems to be to remove default modules, and require a non-modular version for Fedora releases and Branched. (In addition to whatever packagers would package as modules. To clarify, I am not attempting to suggest that nothing should be done with Modularity except in Rawhide.)
We're not saying this to discourage you, at least that is not my goal. My goal is to ensure the best result for the end user.
I'm comfortable saying that most Fedora users are not installing the distro just to support one specific application, as one might with RHEL or CentOS, but to benefit from the Four Foundations of Fedora, in this case the most important ones being Freedom, Features and First.
Exactly ... this is what I believe, too. I think that Fedora users put Fedora on their desktops and laptops to be creative in many different ways. Some make music, some enhance pictures, some model in Blender, cut videos, or write documents. The majority, I dare say, is not interested in having several Inkscape versions; they want the newest one that is stable enough, and they are satisfied with that.
It'd be great to have a working modular system, but since we don't seem to have that, it's not a good idea to force the broken implementation on users. We need to consider what is best for Fedora's users, not what is best for Red Hat, at least in my opinion.
Fedora modules must be ready to work in all possible combinations and streams, if we are really serious about it. For example, I, as a user, want to install the newest version of Gimp, because I need the newest features, but since the newest Scanner Application stopped supporting my device, I need the penultimate one. I also play Windows games with Wine, and the current version of Wine suits my needs, so I want to stick with this version as long as possible and maybe even beyond, and I also want an NFS share for my TV to consume, but because I am paranoid, I want to go two versions behind the latest.
To make a long story short, I will need lots of different streams working in harmony, and I will want to upgrade my PC without any problems. Until we can provide this, we should keep modularity as an opt-in technology preview.
I see no reason that dropping certain parts of Modularity from actual releases of Fedora will harm the relationship with Red Hat, as Stephen suggests. Such tests can, and probably should, be done in Rawhide, until they're actually ready for users.
So far, the best approach seems to be to remove default modules, and require a non-modular version for fedora releases and branched. (In addition to whatever packagers would package as modules. To clarify, I am not attempting to suggest nothing should be done with Modularity except in Rawhide.)
This seems to me the easiest way to solve current problems.
We're not saying this to discourage you, at least that is not my goal. My goal is to ensure the best result for the end user.
-- John M. Harris, Jr. Splentity
On Fri, Oct 18, 2019 at 01:03:24PM +0200, Lukas Ruzicka wrote:
Exactly ... this is what I believe, too. I think that Fedora users put Fedora on their desktops and laptops to be creative in many different ways. Some make music, some enhance pictures, some model in Blender, cut videos, or write documents. The majority, I dare say, is not interested in having several Inkscape versions; they want the newest one that is stable enough, and they are satisfied with that.
Well, maybe. Here's an actual Fedora story. A few releases ago, we had a designer working on a little animation promoting how trouble-free and painless Fedora updates are now (as a response to the "you should do an LTS, or a rolling release!" messages I often hear).
But, in the middle of making this, they updated their own system, and the new version of Inkscape dropped support for a feature (a file format, I think) that was very important to their work — so they said that they couldn't continue making that ad in good conscience.
If we had Inkscape as two streams, independent of the OS release, they could have opted to continue with the one that worked for them for a while longer at least.
On 18 Oct 2019 at 17:04, Matthew Miller wrote:
On Fri, Oct 18, 2019 at 01:03:24PM +0200, Lukas Ruzicka wrote:
Exactly ... this is what I believe, too. I think that Fedora users put Fedora on their desktops and laptops to be creative in many different ways. Some make music, some enhance pictures, some model in Blender, cut videos, or write documents. The majority, I dare to say, is not interested in having several Inkscape versions; they want the newest one that is stable enough, and they are satisfied with that.
Well, maybe. Here's an actual Fedora story. A few releases ago, we had a designer working on a little animation promoting how trouble-free and painless Fedora updates are now (as a response to the "you should do an LTS, or a rolling release!" messages I often hear).
But, in the middle of making this, they updated their own system, and the new version of Inkscape dropped support for a feature (a file format, I think) that was very important to their work — so they said that they couldn't continue making that ad in good conscience.
If we had Inkscape as two streams, independent of the OS release, they could have opted to continue with the one that worked for them for a while longer at least.
... or the stream they used would go EOL at the same time they upgraded, and they would have no option. Even if it was not EOL, they would need to fiddle with choosing the right module, which is, by the way, not possible in GNOME Software (G-S) if I am not mistaken.
I think we should be more careful about promoting modules, and their use should be carefully considered.
Vít
Stephen Gallagher wrote:
On Wed, Oct 16, 2019 at 7:58 PM Kevin Kofler wrote:
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
Given that many of the modules in the distribution currently exist specifically to provide newer, less-stable versions than those in the non-modular/default stream, this would be fairly disastrous. For example, Node.js 10.x is the default stream in Fedora 30 because it's the LTS branch. The module also provides a non-default stream for 8.x (the previous LTS branch that a lot of applications still rely on) and for 12.x, which will be the next LTS branch in November but is not guaranteed stable yet. With the approach you're describing, everyone would be forcibly updated to the unstable 12.x release.
The only way we could assure ourselves that this wouldn't happen would be to do another mass-rebuild, bumping the epoch of every package that exists in a module. That's a lot of work.
So it looks like I did not describe clearly enough what my proposed enable_modules=0 flag would do. ("Disable all module code" was apparently too vague.)
How I think it should work would be:
* For repositories, it completely ignores modular metadata and processes only the non-modular parts of the repository metadata. Therefore, it does not see the Node.js 12.x stream at all. It only sees whatever Node.js is in the non-modular repository. If there are currently only modular versions, then it sees none at all. But with the proposal to require a non-modular default version, it would then see that version, which would likely be Node.js 10.x. Nobody would get forcefully upgraded from 10.x to 12.x.
* For installed modules, it completely ignores them, acting as if the database of installed modules were empty. (It just does not read that database at all.)
* For installed packages, it treats them all as non-modular. Sure, packages originally installed from a module have weird EVRs encoding module metadata, but otherwise they get processed exactly like a non-modular package. So the default repository only has to provide a newer EVR to upgrade the package. That should address upgrades from 8.x or 10.x to the new default 10.x. If the user had previously installed 12.x, they will only get downgraded if they distro-sync or if package-level dependency issues with the F30 build of 12.x on F31 force the downgrade.
I hope that clears it up.
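For illustration only, here is a minimal sketch of how such a switch might look in /etc/dnf/dnf.conf, assuming the proposed enable_modules option were added to DNF's [main] section. The option does not exist in DNF today; the name and placement are taken purely from the proposal above:

    # /etc/dnf/dnf.conf -- enable_modules is a *proposed*, currently
    # nonexistent option, shown only to make the idea concrete
    [main]
    # 0 = ignore modular repo metadata, the installed-module database and
    #     modular package filtering; 1 = today's behavior
    enable_modules=0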
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
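For what it's worth, the repository half of this more elaborate approach can already be approximated by hand with existing DNF tooling (the repo IDs below are the stock Fedora ones, and "nodejs" is only an example module); the automatic "reset streams that disappeared from the repositories" part would still need new logic in DNF:

    # disable the modular repositories (requires dnf-plugins-core)
    sudo dnf config-manager --set-disabled fedora-modular updates-modular
    # return an individual module to its unset/default state
    sudo dnf module reset nodejs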
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
You are overestimating by far the effort required to demodularize the handful of packages that are currently module-only. The evidence Fabio Valentini has gathered so far shows that actually very few packages would be affected and they would not be too hard to fix. And Miro has also offered help with fixing affected packages.
All in all, it would require fixing a handful of packages once and for all instead of implementing workarounds affecting the entire distribution and its thousands of users.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire.
And exactly that code needs to go, at least in Fedora. I think having a way to migrate away from modules is the common case to prioritize here.
That said, it shall be pointed out that, if the proposal to demodularize all default versions of packages gets implemented, we only need a *short term* solution for demodularization in DNF. After 2 releases, we have no default streams left (and they will never come back by policy) and we can expect users to have upgraded through a release with no default streams (given that we do not support upgrading directly to n+3), so at that point DNF can revert to the "safe" behavior (preventing accidental demodularization) by default.
So the proposal to demodularize everything could actually make this problem easier to solve.
Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
Sorry for that. The reason I called it "damage" being done is that there is currently no supported way back (as you pointed out yourself) and that it moves us away from the state I (and others, e.g., Miro) believe we should reach (where there are no module-only packages anymore).
I consider this approach of making a controversial and experimental change with no contingency plan, then using that absence of a contingency plan as an argument to not only stick to the change at all costs, but even go further with it, entirely unacceptable. (We call it the "creating facts" ("Fakten schaffen") tactic in the German-speaking parts of the world. It is an effective way to bypass discussion and democratic participation.)
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
The problems you listed are not the only ones. There are the version conflicts between non-default streams that can be dragged in by default streams. There is the buildroot-only content, which breaks our self-hosting promise and makes it harder to compile additional software. There is the issue that modules built for an older distribution cannot be kept on a newer distribution version, which is a regression from ursine packages that (e.g., if they get dropped from the distribution) you can keep as long as they don't depend on an outdated soname, which can be decades. There is the risk that your more and more complex solutions introduce new issues, e.g., bugs and undesirable behavior in DNF, that may or may not be easy to fix. (The more complexity you introduce, the harder it gets for DNF to behave the way the user expects.)
And this needs to be brought into relation with the benefits of using default streams instead of non-modular default versions, which as far as I can tell are essentially nonexistent. (Sure, they might be an easier upgrade path for things that are already default streams now, but this is self-referential. We should not stick to default streams forever only because of a historical decision. And actually, the upgrade path from default stream to default stream is also not working yet, which is why we have this thread at all.)
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
I understand very well how this doesn't work. :-)
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
LOL, dismissing mailing list consensus as "four noisy individuals". This is getting really ridiculous! Why am I even arguing with somebody who clearly does not want to listen?
And you are the people who brought us into this situation to begin with. Default streams should never have been allowed without:
1. an upgrade path from default stream to default stream, AND
2. a contingency plan, i.e., an upgrade path from default stream to non-modular default version
The fact that they were implemented without EITHER of these (when actually BOTH are needed) was extremely short-sighted.
So bringing yourselves up now as "the people who can dig us out of this situation" feels to me like intentionally making a patient sick so you can "cure" them.
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit.
I call them "partial" because they do not address all the issues we are having with Modularity. See my list a few paragraphs above. At least the version conflict issue is not fixable at all without a complete redesign of Modularity that would be incompatible with the current implementation and would probably also violate the FHS (because that is the only way to achieve universal parallel-installability without per-package workarounds).
And I call them "workarounds" because they add more and more complexity when there would be a simple fix: just don't use default streams anymore. That change might also need short-term workarounds for the upgrade path, but those can all be dropped once the default streams are gone.
And while they add some additional complexity to the *infrastructure*, a primary goal is to not make the users' or packagers' lives harder.
Yet that is exactly what default streams are doing.
We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
The buildroot workaround does not address any of the issues the end users are hitting. You are proposing a different workaround for the upgrade path issue for end users, a workaround which, as far as I can tell, is not implemented yet, which will not address the other issues, and which will still lead to a more complex feel than the simple fix of no longer using default streams.
I do not understand why you are so strongly against the "no default streams" rule. It would not stop any of the work you are doing on Modularity. You would still be able to ship all your module streams, depend on any other stream of any other module (even if some combinations of streams then conflict), and continue your work as before. The only difference is that instead of making a stream the default, you would merge it into the release branch for the release and build it as a normal "ursine" package. But that does not conflict with any of the Modularity work you are doing.
No, you've decided on the outcome you want to see and have invented a path to get there that doesn't align with the realities of the present.
Funnily, that is exactly how I would describe what has happened with Modularity so far.
Kevin Kofler
On Wed, Oct 16, 2019 at 9:39 PM Kevin Kofler kevin.kofler@chello.at wrote:
So it looks like I did not describe clearly enough what my proposed enable_modules=0 flag would do. ("Disable all module code" was apparently too vague.)
How I think it should work would be:
- For repositories, it completely ignores modular metadata and processes only the non-modular parts of the repository metadata. Therefore, it does not see the Node.js 12.x stream at all. It only sees whatever Node.js is in the non-modular repository. If there are currently only modular versions, then it sees none at all. But with the proposal to require a non-modular default version, it would then see that version, which would likely be Node.js 10.x. Nobody would get forcefully upgraded from 10.x to 12.x.
- For installed modules, it completely ignores them, acting as if the database of installed modules were empty. (It just does not read that database at all.)
- For installed packages, it treats them all as non-modular. Sure, packages originally installed from a module have weird EVRs encoding module metadata, but otherwise they get processed exactly like a non-modular package. So the default repository only has to provide a newer EVR to upgrade the package. That should address upgrades from 8.x or 10.x to the new default 10.x. If the user had previously installed 12.x, they will only get downgraded if they distro-sync or if package-level dependency issues with the F30 build of 12.x on F31 force the downgrade.
I hope that clears it up.
It does, thanks. I think you're right, that probably *would* work, though it's slightly harder to do your third bullet point than it sounds at first blush. I suppose we could fudge it with the `module_hotfix` option, though...
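For readers unfamiliar with it: module_hotfix is an existing per-repository DNF option; packages from a repository flagged with it are considered even where modular filtering would normally hide them. A minimal sketch, with a placeholder repo ID and URL:

    # /etc/yum.repos.d/example-hotfix.repo
    [example-hotfix]
    name=Example repository whose packages bypass modular filtering
    baseurl=https://example.com/repo/
    enabled=1
    gpgcheck=0
    # consider this repo's packages even if modular metadata would filter them out
    module_hotfix=1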
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
You are overestimating by far the effort required to demodularize the handful of packages that are currently module-only. The evidence Fabio Valentini has gathered so far shows that actually very few packages would be affected and they would not be too hard to fix. And Miro has also offered help with fixing affected packages.
All in all, it would require fixing a handful of packages once and for all instead of implementing workarounds affecting the entire distribution and its thousands of users.
It's worth considering. I'm not ruling it out at this time. I'm not committed to doing it yet either.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire.
And exactly that code needs to go, at least in Fedora. I think having a way to migrate away from modules is the common case to prioritize here.
This is one of those cases where I think that RHEL and Fedora needs are in conflict; in RHEL, we absolutely need to support the failsafe behavior, because accidentally replacing a critical dependency will break user applications. In Fedora, this is likely a smaller concern. It needs investigation.
That said, it shall be pointed out that, if the proposal to demodularize all default versions of packages gets implemented, we only need a *short term* solution for demodularization in DNF. After 2 releases, we have no default streams left (and they will never come back by policy) and we can expect users to have upgraded through a release with no default streams (given that we do not support upgrading directly to n+3), so at that point DNF can revert to the "safe" behavior (preventing accidental demodularization) by default.
If we settle on the "no content in default streams" policy for Fedora, this is a sensible way to go about it, yes.
So the proposal to demodularize everything could actually make this problem easier to solve.
Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
Sorry for that. The reason I called it "damage" being done is that there is currently no supported way back (as you pointed out yourself) and that it moves us away from the state I (and others, e.g., Miro) believe we should reach (where there are no module-only packages anymore).
Yeah, I can understand that perspective.
I consider this approach of making a controversial and experimental change with no contingency plan, then using that absence of a contingency plan as an argument to not only stick to the change at all costs, but even go further with it, entirely unacceptable. (We call it the "creating facts" ("Fakten schaffen") tactic in the German-speaking parts of the world. It is an effective way to bypass discussion and democratic participation.)
Yeah, I'm not trying to make this a "fait accompli" discussion either. We are where we are. I just don't want to revert as a knee-jerk reaction if we can find a sustainable solution. I do understand your frustrations in the way things have happened thus far. I'm not entirely convinced that killing off default streams is the right approach, but you and Miro have made enough compelling arguments that it has to be considered carefully.
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
The problems you listed are not the only ones. There are the version conflicts between non-default streams that can be dragged in by default streams.
I submit that this is true of the non-modular content as well. We've worked around this in some places with the alternatives system, but there are plenty of cases where installing an RPM means that some other RPMs are now in conflict. This just moves it to another layer.
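As a reminder of the mechanism being referenced: the alternatives system lets conflicting providers of the same command coexist behind a switchable symlink. A typical session, using "java" purely as a familiar example of a registered alternative:

    # show the generic names currently registered with alternatives
    alternatives --list
    # interactively choose which provider /usr/bin/java points at
    sudo alternatives --config java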
There is the buildroot-only content, which breaks our self-hosting promise and makes it harder to compile additional software.
We don't really have a self-hosting promise; we have an aspiration. We know that in reality, if we tried to do a mass-rebuild at Final Freeze, we would fail to build parts of the OS. We can only assert that *at some time* we were able to build everything with other packages available in Fedora. Also, the Ursa Prime Change will allow us to ship this in the buildroot repository publicly, so I think that is addressed.
There is the issue that modules built for an older distribution cannot be kept on a newer distribution version, which is a regression from ursine packages that (e.g., if they get dropped from the distribution) you can keep as long as they don't depend on an outdated soname, which can be decades. There is the risk that your more and more complex solutions introduce new issues, e.g., bugs and undesirable behavior in DNF, that may or may not be easy to fix. (The more complexity you introduce, the harder it gets for DNF to behave the way the user expects.)
Would you mind opening a ticket at https://pagure.io/modularity/issues and/or a new mail thread on this? I think there's value to what you're suggesting. We need to figure out what the right experience is and I think this thread is the wrong place for it.
And this needs to be brought into relation with the benefits of using default streams instead of non-modular default versions, which as far as I can tell are essentially nonexistent. (Sure, they might be an easier upgrade path for things that are already default streams now, but this is self-referential. We should not stick to default streams forever only because of a historical decision. And actually, the upgrade path from default stream to default stream is also not working yet, which is why we have this thread at all.)
As Alexander pointed out elsewhere in the thread, there *are* other benefits to being able to customize the buildroot. Some of them can be emulated with side-tags, buildroot overrides and compat packages, but that's pretty complex. If we make such a choice, we need to understand what we are losing.
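For a sense of what that emulation involves, here is a rough sketch of the side-tag route; the tag names and update notes are placeholders, and the exact commands depend on the fedpkg and Bodhi versions deployed at the time:

    # ask Koji for a side tag based on the f31 buildroot
    fedpkg request-side-tag --base-tag f31-build
    # build the rebased dependency and its dependents into that side tag
    fedpkg build --target f31-build-side-12345
    # when everything is built, submit the whole tag as a single update
    bodhi updates new --from-tag f31-build-side-12345 --notes "rebase example"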
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
I understand very well how this doesn't work. :-)
Touché
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
LOL, dismissing mailing list consensus as "four noisy individuals". This is getting really ridiculous! Why am I even arguing with somebody who clearly does not want to listen?
See my other reply. This was me getting frustrated while being overtired. Please accept my apology for that outburst.
And you are the people who brought us into this situation to begin with. Default streams should never have been allowed without:
- an upgrade path from default stream to default stream, AND
This was a lack of foresight. We didn't account for this and didn't realize it was a problem until users started complaining that they expected it. We can't think of everything.
- a contingency plan, i.e., an upgrade path from default stream to non-modular default version
Pretty much the same answer as above. It's easy with hindsight to say these were obvious and expected, but we are only human.
The fact that they were implemented without EITHER of these (when actually BOTH are needed) was extremely short-sighted.
So bringing yourselves up now as "the people who can dig us out of this situation" feels to me like intentionally making a patient sick so you can "cure" them.
I think that's a little harsh (but probably fair given my tone above). Can we agree that we're both on the same side: we want Fedora to be excellent?
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit.
I call them "partial" because they do not address all the issues we are having with Modularity. See my list a few paragraphs above. At least the version conflict issue is not fixable at all without a complete redesign of Modularity that would be incompatible with the current implementation and would probably also violate the FHS (because that is the only way to achieve universal parallel-installability without per-package workarounds).
Parallel-installability is *not* a goal. It is in fact a clearly-stated non-goal. I don't see this changing. Package conflicts are not a new thing with Modularity.
And I call them "workarounds" because they add more and more complexity when there would be a simple fix: just don't use default streams anymore. That change might also need short-term workarounds for the upgrade path, but those can all be dropped once the default streams are gone.
It's a matter of perspective: to someone who prefers the classic packaging style, you're proposing a fix. From my perspective, where Modules are a new technology with warts that need to be addressed, reverting here is the workaround. Again, that doesn't mean "no", just that it's not as clear-cut as you think it is.
And while they add some additional complexity to the *infrastructure*, a primary goal is to not make the users' or packagers' lives harder.
Yet that is exactly what default streams are doing.
I'm not disagreeing that the current state is bad. See the literal next sentence in my reply.
We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
The buildroot workaround does not address any of the issues the end users are hitting. You are proposing a different workaround for the upgrade path issue for end users, a workaround which, as far as I can tell, is not implemented yet, which will not address the other issues, and which will still lead to a more complex feel than the simple fix of no longer using default streams.
Now you're moving the goalposts. I addressed the packager concern, so you pivoted to attacking our user experience instead. I acknowledged that issue as well, and started this *very thread* to try to find the right way to address it. If you don't think the proposal I've made is the correct one, suggest how it could be improved instead of simply being dismissive.
I do not understand why you are so strongly against the "no default streams" rule. It would not stop any of the work you are doing on Modularity. You would still be able to ship all your module streams, depend on any other stream of any other module (even if some combinations of streams then conflict), and continue your work as before. The only difference is that instead of making a stream the default, you would merge it into the release branch for the release and build it as a normal "ursine" package. But that does not conflict with any of the Modularity work you are doing.
I'm not strongly against it. I'm hesitant because I don't want to be doing reactionary development. We've made mistakes so far by being too fast out of the gate. If we're going to change directions substantially, I want that to come after careful deliberation. Convince me!
No, you've decided on the outcome you want to see and have invented a path to get there that doesn't align with the realities of the present.
Funnily, that is exactly how I would describe what has happened with Modularity so far.
Again, this was a result of being overtired. I apologize for the tone here.
Stephen Gallagher wrote:
I think that's a little harsh (but probably fair given my tone above). Can we agree that we're both on the same side: we want Fedora to be excellent?
I accept your apologies for your harsh tone (and I appreciate your much more constructive reply this time, thank you!) and I would like to apologize for my harsh tone as well. (I know I can be quite rude at times, especially when triggered.)
Yes, I agree that wanting Fedora to be excellent is probably what we all want. We may disagree about the way to get there, but let us sort this out constructively.
Kevin Kofler
On Thu, 2019-10-17 at 09:47 -0400, Stephen Gallagher wrote:
Parallel-installability is *not* a goal. It is in fact a clearly-stated non-goal. I don't see this changing. Package conflicts are not a new thing with Modularity.
What about, for example, Python and other language runtimes that support parallel installation just fine? Sure, not everything supports it, but needing a separate module for each version of Python (and of any other piece of software that supports parallel installability) seems far from optimal.
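As a concrete, module-free illustration of that point, Fedora already parallel-installs multiple interpreter stacks as ordinary RPMs; the package names below are the ones Fedora was shipping at the time:

    # two interpreter stacks coexisting without any module machinery
    sudo dnf install python2 python3
    python2 --version
    python3 --version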
And I call them "workarounds" because they add more and more complexity when there would be a simple fix: just don't use default streams anymore. That change might also need short-term workarounds for the upgrade path, but those can all be dropped once the default streams are gone.
It's a matter of perspective: to someone who prefers the classic packaging style, you're proposing a fix. From my perspective, where Modules are a new technology with warts that need to be addressed, reverting here is the workaround. Again, that doesn't mean "no", just that it's not as clear-cut as you think it is.
And while they add some additional complexity to the *infrastructure*, a primary goal is to not make the users' or packagers' lives harder.
Yet that is exactly what default streams are doing.
I'm not disagreeing that the current state is bad. See the literal next sentence in my reply.
We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
The buildroot workaround does not address any of the issues the end users are hitting. You are proposing a different workaround for the upgrade path issue for end users, a workaround which, as far as I can tell, is not implemented yet, which will not address the other issues, and which will still lead to a more complex feel than the simple fix of no longer using default streams.
Now you're moving the goalposts. I addressed the packager concern, so you pivoted to attacking our user experience instead. I acknowledged that issue as well, and started this *very thread* to try to find the right way to address it. If you don't think the proposal I've made is the correct one, suggest how it could be improved instead of simply being dismissive.
I do not understand why you are so strongly against the "no default streams" rule. It would not stop any of the work you are doing on Modularity. You would still be able to ship all your module streams, depend on any other stream of any other module (even if some combinations of streams then conflict), and continue your work as before. The only difference is that instead of making a stream the default, you would merge it into the release branch for the release and build it as a normal "ursine" package. But that does not conflict with any of the Modularity work you are doing.
I'm not strongly against it. I'm hesitant because I don't want to be doing reactionary development. We've made mistakes so far by being too fast out of the gate. If we're going to change directions substantially, I want that to come after careful deliberation. Convince me!
No, you've decided on the outcome you want to see and have invented a path to get there that doesn't align with the realities of the present.
Funnily, that is exactly how I would describe what has happened with Modularity so far.
Again, this was a result of being overtired. I apologize for the tone here.
On Wed, Oct 16, 2019 at 9:39 PM Kevin Kofler kevin.kofler@chello.at wrote:
Stephen Gallagher wrote:
On Wed, Oct 16, 2019 at 7:58 PM Kevin Kofler wrote:
So completely disable all module support in DNF by default with some global flag (make all the module code conditional under some new enable_modules flag and default the flag to enable_modules = 0), then it will treat the packages as normal packages and you only have to provide a higher EVR. All this module processing should only happen if the user explicitly enables it.
Given that many of the modules in the distribution currently exist specifically to provide newer, less-stable versions than those in the non-modular/default stream, this would be fairly disastrous. For example, Node.js 10.x is the default stream in Fedora 30 because it's the LTS branch. The module also provides a non-default stream for 8.x (the previous LTS branch that a lot of applications still rely on) and for 12.x, which will be the next LTS branch in November but is not guaranteed stable yet. With the approach you're describing, everyone would be forcibly updated to the unstable 12.x release.
The only way we could assure ourselves that this wouldn't happen would be to do another mass-rebuild, bumping the epoch of every package that exists in a module. That's a lot of work.
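(For readers following along, the stream layout being described looks roughly like the listing below. This is abridged and illustrative rather than actual `dnf module list nodejs` output, but the [d] marker is how dnf flags a default stream.)

$ dnf module list nodejs
Name     Stream   Profiles                        Summary
nodejs   8        default, development, minimal   Javascript runtime
nodejs   10 [d]   default, development, minimal   Javascript runtime
nodejs   12       default, development, minimal   Javascript runtime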
So it looks like I did not describe clearly enough what my proposed enable_modules=0 flag would do. ("Disable all module code" was apparently too vague.)
How I think it should work would be:
- For repositories, it completely ignores modular metadata and processes only the non-modular parts of the repository metadata. Therefore, it does not see the Node.js 12.x stream at all. It only sees whatever Node.js is in the non-modular repository. If there are currently only modular versions, then it sees none at all. But with the proposal to require a non-modular default version, it would then see that version, which would likely be Node.js 10.x. Nobody would get forcefully upgraded from 10.x to 12.x.
- For installed modules, it completely ignores them, acting as if the database of installed modules were empty. (It just does not read that database at all.)
- For installed packages, it treats them all as non-modular. Sure, packages originally installed from a module have weird EVRs encoding module metadata, but otherwise they get processed exactly like a non-modular package. So the default repository only has to provide a newer EVR to upgrade the package. That should address upgrades from 8.x or 10.x to the new default 10.x. If the user had previously installed 12.x, they will only get downgraded if they distro-sync or if package-level dependency issues with the F30 build of 12.x on F31 force the downgrade.
I hope that clears it up.
A slightly more elaborate, but slightly harder to implement, approach would be to let DNF simply disable modules that are enabled locally but no longer available in the repositories, together with disabling the fedora-modular and updates-modular repositories by default.
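(For illustration, a minimal sketch of how such a switch might be expressed. The enable_modules option is hypothetical and does not exist in dnf today; the fedora-modular repo file does exist, and shipping it disabled is the "slightly more elaborate" variant described above.)

# /etc/dnf/dnf.conf -- hypothetical option, not implemented
[main]
enable_modules=0    # proposed: ignore modular repo metadata and the installed-modules database

# /etc/yum.repos.d/fedora-modular.repo -- existing file, shipped disabled under that variant
[fedora-modular]
enabled=0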
And again, this only works if every packager who has spent time creating a module with a default stream takes their content and shoves it back into the non-modular repository. Which in some cases they probably cannot do, because they have build-dependencies that are in conflict. This is a highly non-trivial process and it would need to be done individually for every single package. That's far more packager-hostile than fixing the default stream/buildroot problem and the upgrade path problem.
You are overestimating by far the effort required to demodularize the handful of packages that are currently module-only. The evidence Fabio Valentini has gathered so far shows that actually very few packages would be affected and that they would not be too hard to fix. And Miro has also offered help with fixing affected packages.
All in all, it would require fixing a handful of packages once and for all instead of implementing workarounds affecting the entire distribution and its thousands of users.
And the case of demodularizing packages has to be addressed sooner or later anyway, so better address it sooner rather than later, before more and more damage is done by maintainers moving packages to module-only without a way back.
I've already acknowledged upthread that demodularizing packages is a problem we need to solve. It's being worked on, but it's a lot harder than you think, because we have failsafe code implemented in libdnf to prevent *accidental* demodularization that's in conflict with this desire.
And exactly that code needs to go, at least in Fedora. I think having a way to migrate away from modules is the common case to prioritize here.
That said, it shall be pointed out that, if the proposal to demodularize all default versions of packages gets implemented, we only need a *short term* solution for demodularization in DNF. After 2 releases, we have no default streams left (and they will never come back by policy) and we can expect users to have upgraded through a release with no default streams (given that we do not support upgrading directly to n+3), so at that point DNF can revert to the "safe" behavior (preventing accidental demodularization) by default.
So the proposal to demodularize everything could actually make this problem easier to solve.
Also, this paragraph was needlessly antagonistic: moving packages to modules is not "damage".
Sorry for that. The reason I called it "damage" being done is that there is currently no supported way back (as you pointed out yourself) and that it moves us away from the state I (and others, e.g., Miro) believe we should reach (where there are no module-only packages anymore).
I consider this approach of making a controversial and experimental change with no contingency plan, then using that absence of a contingency plan as an argument to not only stick to the change at all costs, but even go further with it, entirely unacceptable. (We call it the "creating facts" ("Fakten schaffen") tactic in the German-speaking parts of the world. It is an effective way to bypass discussion and democratic participation.)
Sure, but we are nowhere near a cliff. As I just posted in the Change Proposal thread, there are three problems we need to solve, two of which we already have solutions designed for and one (this thread) that we are trying to finalize. That's far from "standing in front of a cliff".
The problems you listed are not the only ones. There are the version conflicts between non-default streams that can be dragged in by default streams. There is the buildroot-only content, which breaks our self-hosting promise and makes it harder to compile additional software. There is the issue that modules built for an older distribution cannot be kept on a newer distribution version, which is a regression from ursine packages that (e.g., if they get dropped from the distribution) you can keep as long as they don't depend on an outdated soname, which can be decades. There is the risk that your more and more complex solutions introduce new issues, e.g., bugs and undesirable behavior in DNF, that may or may not be easy to fix. (The more complexity you introduce, the harder it gets for DNF to behave the way the user expects.)
And this needs to be brought into relation with the benefits of using default streams instead of non-modular default versions, which as far as I can tell are essentially nonexistent. (Sure, they might be an easier upgrade path for things that are already default streams now, but this is self-referential. We should not stick to default streams forever only because of a historical decision. And actually, the upgrade path from default stream to default stream is also not working yet, which is why we have this thread at all.)
Please understand that "I don't understand how this works" is not the same thing as "This doesn't work".
I understand very well how this doesn't work. :-)
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
LOL, dismissing mailing list consensus as "four noisy individuals". This is getting really ridiculous! Why am I even arguing with somebody who clearly does not want to listen?
And you are the people who brought us into this situation to begin with. Default streams should never have been allowed without:
- an upgrade path from default stream to default stream, AND
- a contingency plan, i.e., an upgrade path from default stream to non-modular default version
The fact that they were implemented without EITHER of these (when actually BOTH are needed) was extremely short-sighted.
So bringing yourselves up now as "the people who can dig us out of this situation" feels to me like intentionally making a patient sick so you can "cure" them.
Also, we're not offering "partial workarounds" (excepting some acknowledged hackery to avoid blocking F31). All of the proposals I have been discussing in this thread are for real design adjustments for long-term benefit.
I call them "partial" because they are do not address all the issues we are having with Modularity. See my list a few paragraphs above. At least the version conflict issue is not fixable at all without a complete redesign of Modularity that would be incompatible with the current implementation and would probably also violate the FHS (because that is the only way to achieve universal parallel-installability without per-package workarounds).
And I call them "workarounds" because they add more and more complexity when there would be a simple fix: just don't use default streams anymore. That change might also need short-term workarounds for the upgrade path, but those can all be dropped once the default streams are gone.
And while they add some additional complexity to the *infrastructure*, a primary goal is to not make users' or packagers' lives harder.
Yet that is exactly what default streams are doing.
We *know* that the default stream/buildroot issue is failing to hit this goal and the solution is known, implemented upstream and could be deployed by the end of the week if FESCo gives its approval.
The buildroot workaround does not address any of the issues the end users are hitting. You are proposing a different workaround for the upgrade path issue for end users, a workaround which, as far as I can tell, is not implemented yet, which will not address the other issues, and which will still lead to a more complex feel than the simple fix of no longer using default streams.
I do not understand why you are so strongly against the "no default streams" rule. It would not stop any of the work you are doing on Modularity. You would still be able to ship all your module streams, depend on any other stream of any other module (even if some combinations of streams then conflict), and continue your work as before. The only difference is that instead of making a stream the default, you would merge it into the release branch for the release and build it as a normal "ursine" package. But that does not conflict with any of the Modularity work you are doing.
No, you've decided on the outcome you want to see and have invented a path to get there that doesn't align with the realities of the present.
Funnily, that is exactly how I would describe what has happened with Modularity so far.
Kevin Kofler
On Thu, Oct 17, 2019 at 03:38:39AM +0200, Kevin Kofler wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
LOL, dismissing mailing list consensus as "four noisy individuals". This is getting really ridiculous! Why am I even arguing with somebody who clearly does not want to listen?
And you are the people who brought us into this situation to begin with. Default streams should never have been allowed without:
- an upgrade path from default stream to default stream, AND
- a contingency plan, i.e., an upgrade path from default stream to non-modular default version
The fact that they were implemented without EITHER of these (when actually BOTH are needed) was extremely short-sighted.
So bringing yourselves up now as "the people who can dig us out of this situation" feels to me like intentionally making a patient sick so you can "cure" them.
I can understand the frustration you seem to have, but I think this section is uncalled for and not worth sending to the devel list.
Stephen is working on modularity either because he has been tasked to or because he genuinely believes in the idea and goals; either way we want him involved in these discussions, otherwise any feedback anyone may have will never reach the right person. As the project evolved, some of the earlier assumptions turned out to be correct and some turned out to be wrong. I remember you raising some concerns early on, and there could be reasons for which they were not taken into account then. So let's not judge or try to re-write the past, and focus on possible solutions as the rest of your email was doing. I'm not saying that your solution is the better one or the one that will be implemented, but having Stephen on this thread is proof that it will at least be evaluated and thought through.
Thank you Stephen for your involvement in this discussion.
Pierre
On Thu, Oct 17, 2019 at 10:17 AM Pierre-Yves Chibon pingou@pingoured.fr wrote:
On Thu, Oct 17, 2019 at 03:38:39AM +0200, Kevin Kofler wrote:
So bringing yourselves up now as "the people who can dig us out of this situation" feels to me like intentionally making a patient sick so you can "cure" them.
I can understand the frustration you seem to have, but I think this section is uncalled for and not worth sending to the devel list.
Stephen is working on modularity either because he has been tasked to or because he genuinely believes in the idea and goals,
It's both. I genuinely believe that when we get this right, it will be a huge win for Fedora, RHEL and the rest of our expanded ecosystem. The fact that Red Hat has seen fit to pay me to work on it is icing on that cake.
either way we want him involved in these discussions, otherwise any feedback anyone may have will never reach the right person. As the project evolved, some of the earlier assumptions turned out to be correct and some turned out to be wrong. I remember you raising some concerns early on, and there could be reasons for which they were not taken into account then.
I've been up-front about this: we were overconfident about some parts of this and that has bitten us. I'm not denying the issues we face today, but I *am* trying to make sure we make the right decisions for the project long-term and not just the knee-jerk expedient ones.
So let's not judge or try to re-write the past, and focus on possible solutions as the rest of your email was doing. I'm not saying that your solution is the better one or the one that will be implemented, but having Stephen on this thread is proof that it will at least be evaluated and thought through.
Thank you Stephen for your involvement in this discussion.
Thank you, Pierre. I appreciate the vote of confidence.
On 17. 10. 19 2:27, Stephen Gallagher wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Let me make this clear: I have a technical opinion (default modular streams are the wrong thing to do). If we are not considering that opinion, because it insults the people who have implemented the technical thing, we are making it personal.
Everybody, please keep the discussion technical (that applies to both "sides" here).
On to, 17 loka 2019, Miro Hrončok wrote:
On 17. 10. 19 2:27, Stephen Gallagher wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Let me make this clear: I have a technical opinion (default modular streams are the wrong thing to do). If we are not considering that opinion, because it insults the people who have implemented the technical thing, we are making it personal.
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
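(For concreteness, the kind of pre-existing kickstart fragment being preserved is sketched below; the environment group and the enrollment step are illustrative. On RHEL 8 this only keeps working because idm:client is a default stream that gets enabled implicitly when ipa-client is pulled in.)

%packages
@^minimal-environment
ipa-client
%end

%post
# client enrollment typically follows here, e.g. via ipa-client-install
%end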
Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
But Miro's proposal is to keep the default version non-modular, which means the kickstart compatibility issue does not come up to begin with. (Non- modular packages are naturally available for kickstart.)
Kevin Kofler
On to, 17 loka 2019, Kevin Kofler wrote:
Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
But Miro's proposal is to keep the default version non-modular, which means the kickstart compatibility issue does not come up to begin with. (Non- modular packages are naturally available for kickstart.)
It will not work for FreeIPA if we move it to modules, due to build dependencies on packages that were already moved to modules. Right now we are getting around this with a whole SIG bringing Java packages back to the non-modular repo to keep Dogtag (and, by extension, FreeIPA) buildable.
Had there been default module streams for Java packages in the buildroot, we would have no problem.
On 17. 10. 19 13:38, Alexander Bokovoy wrote:
Had there been default module streams for Java packages in the buildroot, we would have no problem.
Had there been no default modular streams but regular packages instead, we would have no problem either.
But to extend that a bit, it would also be correct had there been no computers.
On Thu, 2019-10-17 at 13:43 +0200, Miro Hrončok wrote:
On 17. 10. 19 13:38, Alexander Bokovoy wrote:
Had there been default module streams for Java packages in the buildroot, we would have no problem.
Had there been no default modular streams but regular packages instead, we would have no problem either.
But to extend that a bit, it would also be correct had there been no computers.
🎵 Imagine no computers / Only farms of yaks... 🎵
On Thursday, October 17, 2019 1:59:19 AM MST Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
On Thu, 2019-10-17 at 09:32 -0700, John M. Harris Jr wrote:
On Thursday, October 17, 2019 1:59:19 AM MST Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
Of course if you just don't modularize FreeIPA at all you don't have the kickstart problem, but then you *do* still have the 'we're stuck shipping this one version of FreeIPA for the next seventy jillion years' problem.
On 10/17/19 2:35 PM, Adam Williamson wrote:
On Thu, 2019-10-17 at 09:32 -0700, John M. Harris Jr wrote:
On Thursday, October 17, 2019 1:59:19 AM MST Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
This doesn't seem quite true. You couldn't install it with the same kickstart you used for EL7, but you could use the new module command or syntax in kickstart:
module --name=NAME [--stream=STREAM]
and/or
%packages @module:stream/profile
On Thu, 2019-10-17 at 14:44 -0600, Orion Poplawski wrote:
On 10/17/19 2:35 PM, Adam Williamson wrote:
On Thu, 2019-10-17 at 09:32 -0700, John M. Harris Jr wrote:
On Thursday, October 17, 2019 1:59:19 AM MST Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
This doesn't seem quite true. You couldn't install it with the same kickstart you used for EL7, but you could use the new module command or syntax in kickstart:
module --name=NAME [--stream=STREAM]
and/or
%packages @module:stream/profile
Hmm, yeah, I guess the concern is really about *existing* kickstarts.
On to, 17 loka 2019, Orion Poplawski wrote:
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
This doesn't seem quite true. You couldn't install it with the same kickstart you used for EL7, but you could use the new module command or syntax in kickstart:
module --name=NAME [--stream=STREAM]
Actually, you could install client packages with the same kickstart file as for RHEL 7; that was one of the uses for default profiles.
Server package installation from kickstart file is less of a worry because we are running a different deployment process since switching to domain level 1 and that implies you have to do client installation first.
And at the time when all this was designed, kickstart had no support for modularized installation. It has it now, of course.
On Fri, 2019-10-18 at 11:39 +0300, Alexander Bokovoy wrote:
On to, 17 loka 2019, Orion Poplawski wrote:
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
This doesn't seem quite true. You couldn't install it with the same kickstart you used for EL7, but you could use the new module command or syntax in kickstart:
module --name=NAME [--stream=STREAM]
Actually, you could install client packages with the same kickstart file as for RHEL 7; that was one of the uses for default profiles.
Server package installation from kickstart file is less of a worry because we are running a different deployment process since switching to domain level 1 and that implies you have to do client installation first.
And at the time when all this was designed, kickstart had no support for modularized installation. It has it now, of course.
Well, module installation via kickstart has been supported since before 8.0 GA. But I guess the design decisions took place before that.
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
On pe, 18 loka 2019, Martin Kolman wrote:
On Fri, 2019-10-18 at 11:39 +0300, Alexander Bokovoy wrote:
On to, 17 loka 2019, Orion Poplawski wrote:
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
This doesn't seem quite true. You couldn't install it with the same kickstart you used for EL7, but you could use the new module command or syntax in kickstart:
module --name=NAME [--stream=STREAM]
Actually, you could install client packages with the same kickstart file as for RHEL 7; that was one of the uses for default profiles.
Server package installation from kickstart file is less of a worry because we are running a different deployment process since switching to domain level 1 and that implies you have to do client installation first.
And at the time when all this was designed, kickstart had no support for modularized installation. It has it now, of course.
Well, module installation via kickstart has been supported since before 8.0 GA. But I guess the design decisions took place before that.
Yes, well before that. In any case, one of the bigger requirements we had was to keep support for existing kickstart files that install RHEL IdM. Changing them to use modular content for the most common use case (installation of the IdM client) was seen as a compatibility break.
So yes, RHEL 8.x has support for enabling modules in kickstart files, but it was not possible to preserve existing kickstart files that used the ipa-client package without enabling a default module stream after RHEL IdM was moved to a module.
On Thu, 2019-10-17 at 14:44 -0600, Orion Poplawski wrote:
On 10/17/19 2:35 PM, Adam Williamson wrote:
On Thu, 2019-10-17 at 09:32 -0700, John M. Harris Jr wrote:
On Thursday, October 17, 2019 1:59:19 AM MST Alexander Bokovoy wrote:
The one thing we are using default modular stream in RHEL 8 for is to be able to provide access to packages in kickstart that were moved to modules in RHEL 8. An example is idm:client stream which is a default module stream in RHEL 8 exactly for this reason, to be able to install ipa-client package and enroll a system into IPA from a kickstart file.
We don't package FreeIPA in modules in Fedora yet but this is one of real examples how default module streams are helpful to maintain coherent user experience for existing users of kickstart files.
-- / Alexander Bokovoy Sr. Principal Software Engineer Security / Identity Management Engineering Red Hat Limited, Finland
You could install the ipa-client package and enroll a system into IPA from a kickstart in RHEL 7 too.. Without modules. That's what I've deployed for the environments I support, for example. Using a module is not required there.
That wasn't the point, though - the point was to answer the question "why do we need *default* module streams?"
The logic is this: FreeIPA maintainers wanted FreeIPA to be a module in RHEL, to take advantage of the added flexibility around lifecycles and version bumps (basically so each RHEL release isn't tied to one version of FreeIPA forever). But if it's modularized and there's no concept of 'default stream modules', this is a thing that breaks: you can't install it from a kickstart. So, *given that* we wanted to modularize FreeIPA in RHEL *and* we also want to still make it deployable via kickstart, that creates a requirement for default stream modules or something a lot like it.
This doesn't seem quite true. You couldn't install it with the same kickstart you used for EL7, but you could use the new module command or syntax in kickstart:
Indeed, you can install modules via kickstart. For details, see: https://pykickstart.readthedocs.io/en/latest/kickstart-docs.html#chapter-9-p... https://pykickstart.readthedocs.io/en/latest/kickstart-docs.html#module
module --name=NAME [--stream=STREAM]
This just enables a module stream (or can explicitly disable it with the --disable option). No packages will be installed from such a module unless specified in the %packages section.
and/or
%packages @module:stream/profile
This enables the module stream and installs a profile - the one specified or the default profile otherwise.
The syntax is pretty much the same as for DNF CLI - if you call "dnf install @module:stream/profile" you should get the same result.
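(Putting the two forms together, an illustrative kickstart fragment using this syntax might look like the sketch below; the module name, stream, and profile are just examples.)

# enable a stream without installing anything from it
module --name=nodejs --stream=12

# or enable a stream and install one of its profiles in a single step
%packages
@nodejs:12/development
%end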
Adam Williamson wrote:
Of course if you just don't modularize FreeIPA at all you don't have the kickstart problem, but then you *do* still have the 'we're stuck shipping this one version of FreeIPA for the next seventy jillion years' problem.
That is purely a RHEL thing though. I do not see how this is relevant to the discussion on whether to allow default streams *in Fedora*.
Kevin Kofler
On Thursday, October 17, 2019 4:28:27 PM MST Kevin Kofler wrote:
Adam Williamson wrote:
Of course if you just don't modularize FreeIPA at all you don't have the kickstart problem, but then you *do* still have the 'we're stuck shipping this one version of FreeIPA for the next seventy jillion years' problem.
That is purely a RHEL thing though. I do not see how this is relevant to the discussion on whether to allow default streams *in Fedora*.
Even then, as a RHEL subscriber, I'd question its usefulness there. I use RHEL at work because I want a solid, stable system without many changes in a release cycle. It makes things much easier with large deployments and high numbers of administrators/users, who may be resistant to change. Throwing a random new version of a package on there sounds like a nightmare.
Actually, I'm not even sure how I'd deploy modules in the environment that I support, because it doesn't have internet access. I usually just download the RHEL repo DVD and I'm good to go.
On Thu, Oct 17, 2019 at 4:33 AM Miro Hrončok mhroncok@redhat.com wrote:
On 17. 10. 19 2:27, Stephen Gallagher wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Let me make this clear: I have a technical opinion (default modular streams are the wrong thing to do). If we are not considering that opinion, because it insults the people who have implemented the technical thing, we are making it personal.
Everybody, please keep the discussion technical (that applies to both "sides" here).
Apologies for the tone here. That was out of line. I need to stop replying when I'm tired. And I wasn't thinking of you, Miro, when I wrote this. Your feedback hasn't been "negative", it's been "constructive" (in the way I think of things).
On 17. 10. 19 15:17, Stephen Gallagher wrote:
On Thu, Oct 17, 2019 at 4:33 AM Miro Hrončok mhroncok@redhat.com wrote:
On 17. 10. 19 2:27, Stephen Gallagher wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Let me make this clear: I have a technical opinion (default modular streams are the wrong thing to do). If we are not considering that opinion, because it insults the people who have implemented the technical thing, we are making it personal.
Everybody, please keep the discussion technical (that applies to both "sides" here).
Apologies for the tone here. That was out of line. I need to stop replying when I'm tired. And I wasn't thinking of you, Miro, when I wrote this. Your feedback hasn't been "negative", it's been "constructive" (in the way I think of things).
I appreciate you are trying to follow up on everything here. It must be very frustrating and I am sorry that my proposal has caused it.
What bothers me ATM is that while the discussion is long and painful, it no longer moves anywhere :(
IMHO everybody has already made all their arguments at least twice. I wonder how to move forward.
On Thu, 17 Oct 2019 at 09:24, Miro Hrončok mhroncok@redhat.com wrote:
On 17. 10. 19 15:17, Stephen Gallagher wrote:
On Thu, Oct 17, 2019 at 4:33 AM Miro Hrončok mhroncok@redhat.com wrote:
On 17. 10. 19 2:27, Stephen Gallagher wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
Let me make this clear: I have a technical opinion (default modular streams are the wrong thing to do). If we are not considering that opinion, because it insults the people who have implemented the technical thing, we are making it personal.
Everybody, please keep the discussion technical (that applies to both "sides" here).
Apologies for the tone here. That was out of line. I need to stop replying when I'm tired. And I wasn't thinking of you, Miro, when I wrote this. Your feedback hasn't been "negative", it's been "constructive" (in the way I think of things).
I appreciate you are trying to follow up on everything here. It must be very frustrating and I am sorry that my proposal has caused it.
What bothers me ATM is that while the discussion is long and painful, it no longer moves anywhere :(
IMHO everybody has already made all their arguments at least twice. I wonder how to move forward.
When conversations loop like this, then the problem isn't what is being stated.. it is the emotional problems which are being glossed over. You can't just say 'let us keep this technical' because I expect everyone thinks they are.. even when they aren't. Looking at the tone and 'feelings' of what is being said, it looks like the '4 noisy individuals' feel angry, and possibly betrayed and lied to.
They feel betrayed because several of them tried to point out that pretty much every issue we ran into with libgit2, rust, java, and other items was going to happen. And the answers they got ranged from 'stop impeding progress' to 'no, people won't do those things because they should know better', or 'if it happens we will come up with the policies to make sure it doesn't happen again'. I am not saying they expressed their concerns in a way that made people want to listen to them, but the core of their concern was: 'if you want to do this you need to assume people are going to be jackasses 30% of the time, idiots 30% of the time, do what is assumed 30%, and angels 10%. Write the policies and tools to meet that.'
They feel lied to, because things have changed and the changes were not what they expected. Maybe their expectations of what Fedora was and what Fedora is are different. To some, we are dropping things which they feel strongly attached to, and we are basically telling them to move on or move out. If I don't want modularity.. I can't not have it. If I want i386, I can't have it. If I want various packages which were there before.. but are dead/gone.. I can't have them. If I came here with an idea about an OS dedicated to Freedom, Friends, First, and Features.. and found either that my Features are gone.. or that I also have to share the OS with the project sponsor's decisions.. that all causes anger.
In any case, when things start cycling, a community needs to start engaging in some sort of counseling to sort out the underlying emotions which aren't getting addressed.
On Wed, 16 Oct 2019 at 20:27, Stephen Gallagher sgallagh@redhat.com wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
I realize you were tired and angry when reading this, but I want to say that I haven't put in my complaints because I find myself actually agreeing with the high level points pointed out by the 4 noisy individuals but find their method irritating and don't want to pile on.
My main problem with modularity is that it has been used to 'weld the engine bonnet/hood shut'. One of the catchy phrases from the early years of Red Hat Linux was Bob Young's "Would you buy a car with the hood welded shut?" It was something that attracted me to RHL and then Fedora. I could get the things out of the OS and rebuild as I needed without bugging others or having to go set up more than a small home garage.
I find however that modularity is being used as a tool to weld parts of the engine away and it drives me bonkers. I can't just take a bunch of rpms from download.fedoraproject.org and rebuild them with some options to get what I want. Instead I have to go dig into 'hidden' parts of koji and other places to then figure out the secret incantations used to stitch things together. Furthermore, I couldn't just use my home garage to build things.. I can't use rpmbuild or mock** to build a module set. Instead I was either supposed to send my things to the Fedora koji factory and wait in line for a build.. or I had to build an entire factory myself. The tools to let me experiment and work on this in my 'garage' always seemed to be delivered as afterthoughts which made me feel increasingly not wanted. [Again I realize that this is non-rational.. but it is the non-rational things which are driving this thread out..]
I have found myself in complete sympathy with the various independent mechanics who can no longer work on various brands of cars because they manufacturer decided to use only custom tools. Sure I could work out and mill my own versions of those tools, but why should I when I never 'asked' for this problem and I thought the manufacturer was making cars that anyone could work on.
** I don't think it does yet, but msuchy (and team) may have added it somewhere as a feature of --chain or something.
17.10.2019, 17:15, "Stephen John Smoogen" smooge@gmail.com:
On Wed, 16 Oct 2019 at 20:27, Stephen Gallagher sgallagh@redhat.com wrote:
So, literally every word of this is wrong. The negative feedback is not "overwhelming". It is approximately four noisy individuals, all of whom have expressed zero interest in understanding the actual situation that they are trying to "fix" by endlessly insulting the people working on the problem. Demoralizing the people who can dig us out of this situation is an unwise strategy.
I realize you were tired and angry when reading this, but I want to say that I haven't put in my complaints because I find myself actually agreeing with the high level points pointed out by the 4 noisy individuals but find their method irritating and don't want to pile on.
Well said. I have also stayed quiet for much the same reasons, but I support what Kevin and Miro and others are saying.
It's definitely not just 4 noisy individuals. I'd even go as far as saying that most of the community here agrees with what they are saying.
Pete
On 10/15/19 9:26 PM, Stephen Gallagher wrote:
Module stream metadata would gain two new optional attributes, "upgrades:" and "obsoletes:".
If the "upgrades: <older_stream>" field exists in the metadata, libdnf should switch to this stream if the following conditions are met:
- Changing the stream would not introduce conflicts.
- The stream is marked as `default_enabled` or `dep_enabled`.
The "obsoletes: <older_stream>" field would be stronger. Its use should require a special exemption (with a strong justification) and it would cause libdnf to switch from that stream to this one *unconditionally* (failing the transaction if that transition would cause conflicts). This would essentially be an "emergency escape" if we need it.
Modularity has multiple use cases: your proposal addresses the OS usage where modularity manages the installed distribution's dependency versioning issues. What would happen if someone installed a certain module stream to manage their own version requirements? Presumably, they might want to _never_ change the stream. How is that handled in your scheme? I think the "upgrades:" case would be fine, because explicit installation would not have the "default_enabled" attribute. However, if a new module declared the "obsoletes:", it would replace them no matter what. Would there be a way to prevent that, or are you arguing that such an override should not be allowed?
On Wed, Oct 16, 2019 at 1:19 PM Przemek Klosowski via devel devel@lists.fedoraproject.org wrote:
On 10/15/19 9:26 PM, Stephen Gallagher wrote:
Module stream metadata would gain two new optional attributes, "upgrades:" and "obsoletes:".
If the "upgrades: <older_stream>" field exists in the metadata, libdnf should switch to this stream if the following conditions are met:
- Changing the stream would not introduce conflicts.
- The stream is marked as `default_enabled` or `dep_enabled`.
The "obsoletes: <older_stream>" field would be stronger. Its use should require a special exemption (with a strong justification) and it would cause libdnf to switch from that stream to this one *unconditionally* (failing the transaction if that transition would cause conflicts). This would essentially be an "emergency escape" if we need it.
Modularity has multiple use cases: your proposal addresses the OS usage where modularity manages the installed distribution's dependency versioning issues. What would happen if someone installed a certain module stream to manage their own version requirements? Presumably, they might want to _never_ change the stream. How is that handled in your scheme? I think the "upgrades:" case would be fine, because explicit installation would not have the "default_enabled" attribute. However, if a new module declared the "obsoletes:", it would replace them no matter what. Would there be a way to prevent that, or are you arguing that such an override should not be allowed?
I'm saying that the policy should forbid the use of that feature except for an absolute emergency, requiring approval from FESCo or similar. It would exist for cases like "Oh crap, it turns out we've been shipping patented content in this stream and we're obligated to remove it" or something like that. It should never be used in the general case. Not even for "This is so old we should force upgrades". For that we should just drop the stream entirely from the next release, which would result in the upgrade being impossible until the user took a manual action to get off that stream.
It would be my hope that the "obsoletes:" would never actually get used, but I'm generally in favor of planning for the worst case ahead of time if we can see it coming.
On Wed, Oct 16, 2019 at 01:32:49PM -0400, Stephen Gallagher wrote:
remove it" or something like that. It should never be used in the general case. Not even for "This is so old we should force upgrades". For that we should just drop the stream entirely from the next release, which would result in the upgrade being impossible until the user took a manual action to get off that stream.
What might that look like from a UX perspective? What about from GNOME Software?
On Wed, Oct 16, 2019 at 2:56 PM Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Oct 16, 2019 at 01:32:49PM -0400, Stephen Gallagher wrote:
remove it" or something like that. It should never be used in the general case. Not even for "This is so old we should force upgrades". For that we should just drop the stream entirely from the next release, which would result in the upgrade being impossible until the user took a manual action to get off that stream.
What might that look like from a UX perspective? What about from GNOME Software?
Given that this should *almost never* happen, I'd avoid going to great lengths to build UX around it. I think it should basically just *happen* as part of the update process. I want to repeat: this should only be used if we have absolutely no other choice. I'd say the most UX we should do is actually in CI: we should disallow a module update to be pushed with this attribute set without an override.
On Wed, Oct 16, 2019 at 03:03:02PM -0400, Stephen Gallagher wrote:
On Wed, Oct 16, 2019 at 2:56 PM Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Oct 16, 2019 at 01:32:49PM -0400, Stephen Gallagher wrote:
remove it" or something like that. It should never be used in the general case. Not even for "This is so old we should force upgrades". For that we should just drop the stream entirely from the next release, which would result in the upgrade being impossible until the user took a manual action to get off that stream.
What might that look like from a UX perspective? What about from GNOME Software?
Given that this should *almost never* happen, I'd avoid going to great lengths to build UX around it. I think it should basically just *happen* as part of the update process. I want to repeat: this should only be used if we have absolutely no other choice. I'd say the most UX we should do is actually in CI: we should disallow a module update to be pushed with this attribute set without an override.
I think Matthew's question was not about the "obsoletes:" tag, but about what happens when we drop a stream in a release. Say I've enabled django1.6 and it has gone EOL upstream so you've dropped it in F32: what will happen when I try to upgrade to F32 using GNOME Software?
Pierre
On Thu, Oct 17, 2019 at 9:39 AM Pierre-Yves Chibon pingou@pingoured.fr wrote:
On Wed, Oct 16, 2019 at 03:03:02PM -0400, Stephen Gallagher wrote:
On Wed, Oct 16, 2019 at 2:56 PM Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Oct 16, 2019 at 01:32:49PM -0400, Stephen Gallagher wrote:
remove it" or something like that. It should never be used in the general case. Not even for "This is so old we should force upgrades". For that we should just drop the stream entirely from the next release, which would result in the upgrade being impossible until the user took a manual action to get off that stream.
What might that look like from a UX perspective? What about from GNOME Software?
Given that this should *almost never* happen, I'd avoid going to great lengths to build UX around it. I think it should basically just *happen* as part of the update process. I want to repeat: this should only be used if we have absolutely no other choice. I'd say the most UX we should do is actually in CI: we should disallow a module update to be pushed with this attribute set without an override.
I think Matthew's question was not about the "obsoletes:" tag, but about what happens when we drop a stream in a release. Say I've enabled django1.6 and it has gone EOL upstream so you've dropped it in F32: what will happen when I try to upgrade to F32 using GNOME Software?
Or, even better (or worse): Somebody installs GIMP via GNOME Software, and under the hood, dnf does its magic and installs gimp from the module, which might depend on another, even non-default module, etc. But then, what will happen when that module is EOL, and the user has never even interacted with dnf, or modules? Will system upgrades break and prompt the user to fix something they didn't even know existed, and have never interacted with?
Fabio
Pierre
Or, even better (or worse): Somebody installs GIMP via GNOME Software,
and under the hood, dnf does its magic and installs gimp from the module, which might depend on another, even non-default module, etc. But then, what will happen when that module is EOL, and the user has never even interacted with dnf, or modules? Will system upgrades break and prompt the user to fix something they didn't even know existed, and have never interacted with?
This has already happened. We have had complaints from people who had never installed any module or used the "dnf module" command, and they still ended up with modules installed and a problem upgrading their computers to F31.
Fabio
Pierre
On Fri, 2019-10-18 at 13:05 +0200, Lukas Ruzicka wrote:
Or, even better (or worse): Somebody installs GIMP via GNOME Software, and under the hood, dnf does its magic and installs gimp from the module, which might depend on another, even non-default module, etc. But then, what will happen when that module is EOL, and the user has never even interacted with dnf, or modules? Will system upgrades break and prompt the user to fix something they didn't even know existed, and have never interacted with?
This has already happened. We have had complaints from people who had never installed any module, or used the "dnf module" command and they still ended up with modules installed with a problem to upgrade their computers to F31.
Fabio
Pierre
I just got a new computer, an Intel with Nvidia 2060 graphics card. I could NOT get Fedora to install or boot. For the first time since Fedora 7 I am off of Fedora. Wayland drove me nuts, the changes to the OS were less than perfect, some applications I used to run would not run, and on and on; the inability to install and boot Fedora was the last straw for me. I may come back someday, because up until Fedora 28, things were pretty good. Get back to that standard, and I will likely come back. Meanwhile I will no longer recommend Fedora to my friends until things stabilize. Good luck, guys. I wish you all the best. Les H
On Fri, Oct 18, 2019 at 10:42:54AM -0700, Howard Howell wrote:
I just got a new computer, an Intel with Nvidia 2060 graphics card. I could NOT get Fedora to install or boot. For the first time since
Do you recall any specifics? This is very unlikely to be related to modularity or anything to do with this thread, but more likely an installer or kernel bug.
Fedora 7 I am off of Fedora. Wayland drove me nuts, the changes to the OS were less than perfect, some applications I used to run would not run, and on and on; the inability to install and boot Fedora was the last straw for me. I may come back someday, because up until Fedora 28, things were pretty good. Get back to that standard, and I will likely come back. Meanwhile I will no longer recommend Fedora to my friends until things stabilize. Good luck, guys. I wish you all the best. Les H
If you do get a chance to try Fedora 31 once it's out and report a bug on the issue, it's more likely to be fixed up.
kevin
On Fri, 2019-10-18 at 13:05 +0200, Lukas Ruzicka wrote:
Or, even better (or worse): Somebody installs GIMP via GNOME Software, and under the hood, dnf does its magic and installs gimp from the module, which might depend on another, even non-default module, etc. But then, what will happen when that module is EOL, and the user has never even interacted with dnf, or modules? Will system upgrades break and prompt the user to fix something they didn't even know existed, and have never interacted with?
This has already happened. We have had complaints from people who had never installed any module, or used the "dnf module" command and they still ended up with modules installed with a problem to upgrade their computers to F31.
This worries me quite a bit, as people often install Fedora on computers of relatives and friends in the default configuration and instruct the users to just "press the update button when it shows up". I've done that a couple of times. This has been working quite fine so far, so it would be bad to lose this capability, as the actual users in question are definitely not power users and will not be able to fix any of these issues by themselves. Also from their point of view it's pretty much Fedora breaking if a specific module is at fault. And well, they are not really wrong if a default Fedora install simply breaks upgrades at a random point in the future...
Fabio
Pierre
This has been working quite fine so far, so it would be bad to lose this capability, as the actual users in question are definitely not power users and will not be able to fix any of these issues by themselves.
+1, this was one of my points, too. I think that Fedora should be open for users of all categories, not just power users, but also newbies and similar species.
On Wed, 16 Oct 2019 at 13:33, Stephen Gallagher sgallagh@redhat.com wrote:
On Wed, Oct 16, 2019 at 1:19 PM Przemek Klosowski via devel devel@lists.fedoraproject.org wrote:
On 10/15/19 9:26 PM, Stephen Gallagher wrote:
I'm saying that the policy should forbid the use of that feature except for an absolute emergency, requiring approval from FESCo or similar. It would exist for cases like "Oh crap, it turns out we've been shipping patented content in this stream and we're obligated to remove it" or something like that. It should never be used in the general case. Not even for "This is so old we should force upgrades". For that we should just drop the stream entirely from the next release, which would result in the upgrade being impossible until the user took a manual action to get off that stream.
It would be my hope that the "obsoletes:" would never actually get used, but I'm generally in favor of planning for the worst case ahead of time if we can see it coming.
If there is one thing I have learned from watching modularity go through multiple releases... anything you think shouldn't happen a lot will. It is nothing new: when packagers learned about Epochs a long time ago, they got used a lot too. We should not just plan for the worst case where we need it, but we need to work on the policies first. Trying to do the policies afterwards is what I think has caused the most feelings of betrayal and anger which are coming up in various people's emails, especially when several of them pointed out that the problems would occur and were told they were overblowing it.
On Tue, Oct 15, 2019 at 09:26:31PM -0400, Stephen Gallagher wrote:
Alternate Proposal:
Most things from the original proposal in the first message of this thread remain the same, except:
Module stream metadata would gain two new optional attributes, "upgrades:" and "obsoletes:".
I'm sorry, but I'm against this proposal, both in its first form, and as amended here. The long discussion in this thread has pushed me over into the conviction that modules should not be "on by default" in any way in Fedora.
The first form of the proposal was already staggeringly complex — "default", "dep_enabled", "default_enabled", "default", …. Recording user intent when the user interacts directly with the thing might be OK, but mapping that intent onto dependencies that are pulled in automatically is not something that can be well defined. My expectation is that we'd forever be fighting broken expectations and unexpected cases.
But the amended proposal actually makes things *worse*, even more complex. We would have two parallel sets of dependency specifications: on the rpms level and on the module level. The interactions between them would be hard to understand for users.
I also don't think we need this at all: everything that could be expressed using deps between modules can also be expressed using deps between rpms. We have a rich and well defined dependency language for rpms, so let's just use it.
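As a hypothetical illustration of that point (the package and stream names here are made up), a module-level rule such as "foo:v1.0 requires bar:v2.4" can be carried by the rpms themselves with ordinary versioned dependencies:
# sketch of a spec-file fragment for a package built for the foo 1.0 series
Provides:  foo-api = 1.0
Requires:  bar-libs >= 2.4
Conflicts: bar-libs >= 3.0
The solver would then handle upgrades with the same rules it applies to every other rpm.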
One of the operational problems with Modularity is that it places huge expectations on dnf. Modularity was never very well defined, and dnf had to adapt on the fly to changing rules and requirements. This never went well. Even more complexity, with three parallel sets of semi-interacting-semi-independent sets of constraint rules (rpm deps, module intent, module obsoletes+provides), expressed in different form, is imho a recipe for bad ux.
At the same time, this thread has shown that this additional complexity would need to be added to have upgrade paths for modules. Essentially, to me this thread has shown that Modularity needs to go back to the drawing board, to reassess goals and assumptions and implementation choices. A lot of what people *thought* Modularity would give them is simply not possible, and at the same time, the costs and impact on the rest of the distribution are higher than expected.
As has been extensively discussed, modules are opaque and a) by design make some rpms not visible and not as available to other packagers as before, b) make it harder for people outside of Fedora to reuse our packaging and build on top of Fedora.
Modules also raise the complexity of packaging. I understand that for some expert packagers they provide new functionality, but they complicate life for the majority of packagers. I think this additional complexity is one of the reasons for the decline in participation of non-expert packagers in Fedora that was shown in the FPL's graphs.
The work that went into Modularity is certainly not all wasted: I think we all understand the problem, and which solutions don't work, much better than before. I think we should take a step back and try for a solution which has lower end-user complexity and better backwards compatibility.
I'm not asking for an improved proposal here: for me, Modularity in its current form is simply not a net benefit for Fedora's packagers or users. Thus, I'm against default modules, against adding modules to the buildroot, and against rebasing any part of Fedora to build on top of modules. This is "contingency mode", i.e. thinking about how to bring things back to a working state. I think it is OK to allow modules to be available, but they must be opt-in, and normal rpms may not depend on the modularized rpms in any way.
I wrote this in reply to this thread, even though some things might fit better in the sister thread "Fedora 32 System-Wide Change proposal: Modules in Non-Modular Buildroot". I don't want to send two mails with a lot of text, and the two things are inextricably linked: we cannot enable modules by default, or make more things depend on them by including them in the buildroot, without having working upgrade paths.
Zbyszek
On Sun, Oct 20, 2019 at 15:20 Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Tue, Oct 15, 2019 at 09:26:31PM -0400, Stephen Gallagher wrote:
Alternate Proposal:
Most things from the original proposal in the first message of this thread remain the same, except:
Module stream metadata would gain two new optional attributes, "upgrades:" and "obsoletes:".
I'm sorry, but I'm against this proposal, both in its first form, and as amended here. The long discussion in this thread has pushed me over into the conviction that modules should not be "on by default" in any way in Fedora.
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system. Let us ignore the early goal of modularity of ‘I don’t want a ton of repositories’ which led to ‘virtual repositories’ which led to ‘modules’. For our ‘persona’ story.. let us go back to the days of Dag and Athimm .. and we want to be able to give them the tools to build 2 repositories which worked on Fedora. The tool set they have currently is that they just drop everything in a pool or into tiny 1 package repositories and the user has to figure out what they want.
You couldn’t compare their version of imapd and clamav and they have no way to communicate whether their versions work together or not. The usual story was ‘well just integrate those into the downstream OS’ but that becomes trickier with certain groups of packages which may have multiple capabilities but can only do one or the other (this could be a container.. but again we have to make it so you can build said container with its different options).
What if we made a set of tools which allowed them to group the items together into their repository and build out a set of artifacts which could say 'yes, you can use my NodeJS to supplement your Node stuff.. but python38 was compiled with experimental puppy and won't work with your python38 stuff.' From an RPM level this is hard to do, because you end up writing out a large number of spec files with a dozen compat names to say 'compat-python38-works-with-athimm-foobar-3.4.0-1.dag.noarch.rpm' and 'compat-python38-breaks-with-athimm-foobar-3.4.0-1.dag.noarch.rpm'. The user may not want to deal with such long names; they just want to be able to have 'dnf' try to add a repository and know if it will work with other items.
While some of that would seem to be extra repository metadata, we also want to make it easy for Dag, Athimm and Joe-At-Home to factory-build their sets of packages together, knowing whether they will work with Dag's, not work with Dag's, etc. And we want to make it easy for them to build against, say, Fedora N-2, N-1, N, N+1 and Rawhide (as would be the case right now before FN+1 gets released) without having to do too much work.
Again, some of this may need to be done with packaging rules, but we want to make it easy for the builder to put those in place without a lot of work from either a packager or user. Anyway, I think the above may be a good way to restart the conversation. Let us try to aim the solution at packagers:
1. They have a lot of packages they need to tie together.
2. They need to build those packages for multiple deliverable operating environments.
3. They need to interact with other groups of packages in a way that their group can 'know' if there is a chance of coworking.
4. Many are tied to systems which have fast, hard update cycles which do not work with even a 'fast' OS like Fedora.
Users are wanting:
1. A system which can tie these different speed things together.
2. That system to be updated, or to be clear on why it can't and what needs to be done.
On Sun, Oct 20, 2019 at 09:30:52PM -0400, Stephen John Smoogen wrote:
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system.
The question is whether this team wants to do the "heavy lifting" (which might or might not actually be very heavy), of integrating with the rest of the distro. If they don't, then Copr is the answer: it provides all the answers, including automatic rebuilds.
If they do, and they invest in following the packaging guidelines and the release cycles and whatever we say makes the package suitable for users and other packagers to build on, they get to put the package in the distro.
1. They have a lot of packages they need to tie together.
2. They need to build those packages for multiple deliverable operating environments.
3. They need to interact with other groups of packages in a way that their group can 'know' if there is a chance of coworking.
4. Many are tied to systems which have fast, hard update cycles which do not work with even a 'fast' OS like Fedora.
Users are wanting:
1. A system which can tie these different speed things together.
2. That system to be updated, or to be clear on why it can't and what needs to be done.
This is all already satisfied by rpms (even from Copr). In particular, for point 3., there can be no magic: we can *express* relationships between packages with Provides and Requires, but to *know* what should be expressed, we need packager input _and_ ideally lots of QA and testing. No new repo format and metadata language fundamentally changes this.
Zbyszek
On Mon, 21 Oct 2019 at 01:55, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Sun, Oct 20, 2019 at 09:30:52PM -0400, Stephen John Smoogen wrote:
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system.
The question is whether this team wants to do the "heavy lifting" (which might or might not actually be very heavy), of integrating with the rest of the distro. If they don't, then Copr is the answer: it provides all the answers, including automatic rebuilds.
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there. I am saying that if we look at a way that they can clearly communicate these problems to the user then we have fixed that.
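To make the libfoo case concrete (a sketch; the repo and package names are hypothetical), the only tools available today are generic dnf ones that neither copr owner can ship on the user's behalf:
dnf repoquery --queryformat '%{name}-%{evr} (%{reponame})' libfoo
With no other hints, dnf simply takes the higher EVR, whichever copr it came from; the user can only influence that by hand-editing the .repo files, e.g. adding priority=50 to copr-A's repo definition.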
Also there needs to be a way to communicate that an upgrade from F32 to F33 will break a system because copr-B has no F-33 packages.
If they do, and they invest in following the packaging guidelines and the release cycles and whatever we say makes the package suitable for users and other packagers to build on, they get to put the package in the distro.
From what I have heard over and over, it isn't the packaging guidelines which are a problem.. it is dealing with threads like this or the continual drama churn we have. Investing in the OS means a lot of emotional energy, which a lot of people have no room for in our current world. In some ways I see being able to bolt things into Coprs as an escape from dealing with the constant absolutes of 'you're wrong!' which most of our messages devolve to.
The problem is that our current 20,000 packages is a LOT and most software needs more than we actually have packaged. That means continual growth, but our other needs of 'I need this as quickly as possible', 'I expect you to have fixed all these things', etc are more than most volunteers can deal with at this size. We end up shutting down and yelling at each other because deep down we just want the noise to stop.
On Mon, Oct 21, 2019, 15:17 Stephen John Smoogen smooge@gmail.com wrote:
On Mon, 21 Oct 2019 at 01:55, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Sun, Oct 20, 2019 at 09:30:52PM -0400, Stephen John Smoogen wrote:
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system.
The question is whether this team wants to do the "heavy lifting" (which might or might not actually be very heavy), of integrating with the rest of the distro. If they don't, then Copr is the answer: it provides all the answers, including automatic rebuilds.
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there. I am saying that if we look at a way that they can clearly communicate these problems to the user then we have fixed that.
Why not specify those requirements in RPM Requires? That's what they are for.
Also there needs to be a way to communicate that an upgrade from F32 to F33 will break a system because copr-B has no F-33 packages.
This already works somewhat; the only change that would be needed is setting skip_if_unavailable = false for COPR repos (I think they're set to true right now).
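For illustration, that would be a one-line change in the repo file that copr generates (the contents below are a sketch; the repo id is made up):
[copr:copr.fedorainfracloud.org:someuser:someproject]
skip_if_unavailable=False
With False, an upgrade to F33 fails loudly when copr-B publishes no F33 repository, instead of the repo being silently skipped.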
If they do, and they invest in following the packaging guidelines and the release cycles and whatever we say makes the package suitable for users and other packagers to build on, they get to put the package in the distro.
From what I have heard over and over, it isn't the packaging guidelines which are a problem.. it is dealing with threads like this or the continual drama churn we have. Investing in the OS means a lot of emotional energy, which a lot of people have no room for in our current world. In some ways I see being able to bolt things into Coprs as an escape from dealing with the constant absolutes of 'you're wrong!' which most of our messages devolve to.
The problem is that our current 20,000 packages is a LOT and most software needs more than we actually have packaged. That means continual growth, but our other needs of 'I need this as quickly as possible', 'I expect you to have fixed all these things', etc are more than most volunteers can deal with at this size. We end up shutting down and yelling at each other because deep down we just want the noise to stop.
Yes, I agree, the current growth of the package set isn't sustainable if we don't also scale up the contributor base. I suspect that there are a few handfuls of packagers who maintain hundreds of packages, while the majority maintains only a handful of packages. And relying on the "overcommitters" (pun intended) to keep the distro running isn't working so well.
Fabio
-- Stephen J Smoogen.
On Mon, Oct 21, 2019 at 03:36:53PM +0200, Fabio Valentini wrote:
On Mon, Oct 21, 2019, 15:17 Stephen John Smoogen smooge@gmail.com wrote:
On Mon, 21 Oct 2019 at 01:55, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Sun, Oct 20, 2019 at 09:30:52PM -0400, Stephen John Smoogen wrote:
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system.
The question is whether this team wants to do the "heavy lifting" (which might or might not actually be very heavy), of integrating with the rest of the distro. If they don't, then Copr is the answer: it provides all the answers, including automatic rebuilds.
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there. I am saying that if we look at a way that they can clearly communicate these problems to the user then we have fixed that.
Why not specify those requirements in RPM Requires? That's what they are for.
Exactly.
(Though, I don't think it is wise to build something complicated from coprs — they are intended to allow the rules to be relaxed, but the obvious corollary is that there is less stability and reliability. The rules we have in the distro for how packages must behave might be stifling and annoying, but they are there for a reason...)
Also there needs to be a way to communicate that an upgrade from F32 to F33 will break a system because copr-B has no F-33 packages.
This already works somewhat, the only change that would be needed is setting skip_if_unavailable = false for COPR repos (I think they're set to true right now).
If they do, and they invest in following the packaging guidelines and the release cycles and whatever we say makes the package suitable for users and other packagers to build on, they get to put the package in the distro.
From what I have heard over and over, it isn't the packaging guidelines which are a problem.. it is dealing with threads like this or the continual drama churn we have. Investing in the OS means a lot of emotional energy, which a lot of people have no room for in our current world. In some ways I see being able to bolt things into Coprs as an escape from dealing with the constant absolutes of 'you're wrong!' which most of our messages devolve to.
I consider our discussions to be technical and at a good level. Even this thread: there have been a few flare-ups, but that's just a handful, and just people being tired and putting a few words in the wrong place rather than any personal attack.
The problem is hard. If there was an obvious solution, we wouldn't be having this discussion.
The problem is that our current 20,000 packages is a LOT and most software needs more than we actually have packaged. That means continual growth, but our other needs of 'I need this as quickly as possible', 'I expect you to have fixed all these things', etc are more than most volunteers can deal with at this size. We end up shutting down and yelling at each other because deep down we just want the noise to stop.
Yes, I agree, the current growth of the package set isn't sustainable if we don't also scale up the contributor base. I suspect that there are a few handfuls of packagers who maintain hundreds of packages, while the majority maintains only a handful of packages. And relying on the "overcommitters" (pun intended) to keep the distro running isn't working so well.
I very much hope we can grow our packager base. Packaging in Fedora is definitely harder than it used to be. We still haven't really recovered from the pkgdb retirement, various infra tools don't have enough support, etc. No easy solutions to this problem either, but I think keeping packaging simple (or at least not making it more complicated than it currently is) would help.
Zbyszek
On Mon, 2019-10-21 at 14:00 +0000, Zbigniew Jędrzejewski-Szmek wrote:
The problem is hard. If there was an obvious solution, we wouldn't be having this discussion.
I've pointed out a few times that other distros have solved the "too fast, too slow" problem, in at least one case as long ago as 2004. I see it as a solved problem and I don't understand why we are trying to solve it again.
On Mon, 2019-10-21 at 14:00 +0000, Zbigniew Jędrzejewski-Szmek wrote:
Packaging in Fedora is definitely harder than it used to be. We still haven't really recovered from the pkgdb retirement, various infra tools don't have enough support, etc. No easy solutions to this problem either, but I think keeping packaging simple (or at least not making it more complicated than it currently is) would help.
I agree - I think the retirement of pkgdb without a suitable replacement has made packaging harder when it already wasn't easy. This can't be helpful towards the goal of growing our contributor base.
On Mon, 21 Oct 2019 at 09:37, Fabio Valentini decathorpe@gmail.com wrote:
On Mon, Oct 21, 2019, 15:17 Stephen John Smoogen smooge@gmail.com wrote:
On Mon, 21 Oct 2019 at 01:55, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Sun, Oct 20, 2019 at 09:30:52PM -0400, Stephen John Smoogen wrote:
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system.
The question is whether this team wants to do the "heavy lifting" (which might or might not actually be very heavy), of integrating with the rest of the distro. If they don't, then Copr is the answer: it provides all the answers, including automatic rebuilds.
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there. I am saying that if we look at a way that they can clearly communicate these problems to the user then we have fixed that.
Why not specify those requirements in RPM Requires? That's what they are for.
Because this isn't about a package but a set of packages. I may not know that you are going to use Repo-B.. but may have known about Repo-C which did work. [Maybe it turns out that libfoo-2.3.2-2 in repo C does work because it had a patch.]
The problem I am seeing is that with N thousand copr packages, you can only express some amount of stuff in the packages themselves. In the end there are parts which need to be expressed more abstractly higher up. [I know I work with copr-X, I don't work with copr-Y.] etc.
This is a common problem which shows up because people have problems they want to solve, they google to find a solution, and they cobble together something from 10 different repositories which may or may not have been designed to work together. [This will also happen in container combinations etc., so if we can help fix it here.. it can be used elsewhere.]
The answers of 'well don't do that', 'get them to put them in the OS', etc. have not worked for 20 years.
The idea of adding more packagers to the community hasn't worked either.. even before pkgdb was retired.. there was no 'growth'.. there was a set number of people who were active (they may not be the same as now.. but the number was the same). I would say 350 active packagers is our Dunbar number, but that is getting far into the pseudo-science weeds. Is that the number of packagers? No.. there are a lot of people who will do a small set and have no interest outside of that. Giving them the tools which can allow them to communicate what their repo can and can not do seems like a good thing.
On 10/21/19 7:16 AM, Stephen John Smoogen wrote:
On Mon, 21 Oct 2019 at 01:55, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Sun, Oct 20, 2019 at 09:30:52PM -0400, Stephen John Smoogen wrote:
If I were to start from scratch on this, I would look at the simplest solution I would want from Boltron. I want to make it so a package team can make a set of packages in a repository and work out how I can interact with other repositories. I also want to easily build that package set in ways to work on different versions of an operating system.
The question is whether this team wants to do the "heavy lifting" (which might or might not actually be very heavy), of integrating with the rest of the distro. If they don't, then Copr is the answer: it provides all the answers, including automatic rebuilds.
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there. I am saying that if we look at a way that they can clearly communicate these problems to the user then we have fixed that.
FWIW, there is this old bug asking to express at least some kind of dependency between COPRs: https://bugzilla.redhat.com/show_bug.cgi?id=1149887
On Mon, 2019-10-21 at 09:16 -0400, Stephen John Smoogen wrote:
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there.
Modularity also suffers from this problem.
On Mon, 21 Oct 2019 at 11:08, Randy Barlow bowlofeggs@fedoraproject.org wrote:
On Mon, 2019-10-21 at 09:16 -0400, Stephen John Smoogen wrote:
The problem is that COPRs do not have any way of communicating with each other. If I grab from copr-A and it has libfoo-2.3.1-1 and I grab from copr-B and it has libfoo-2.3.2-2 then I am going to replace copr-A's packages which may break what I wanted from there.
Modularity also suffers from this problem.
I don't see it having this problem currently. I can either install one or the other. I can not mix the two. It may not communicate to me why I can't mix the two.. but it does stop me from shooting myself in the foot.
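For the record, that "stop" looks roughly like this on the command line (a sketch with a hypothetical module "foo"; the exact dnf wording differs):
dnf module enable foo:v1.0    # select one stream
dnf module enable foo:v2.0    # refused: a different stream of foo is already enabled
dnf module reset foo          # explicit user action to clear the selection
dnf module enable foo:v2.0    # now allowed
Switching is possible, but only through an explicit reset, which is the foot-shooting protection being described.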