Hi everyone! Since it's a new year and a new decade [*], it seems like a good time to look forward and talk about what we want the Fedora Project to be in the next five and even ten years. How do we take the awesome foundation we have now and build and grow and make something that continues to thrive and be useful, valuable, and fun?
[My thoughts below. Feel free to respond to those, or cut here and start your own!]
I see three big themes I think we need to tackle.
First, I'd like to see Fedora become more of an "operating system factory".
The direction we took with the Fedora Editions has been a success — Fedora's general growth and popularity bears that out. But now it's a good time to re-examine the positioning. The Editions were meant to fit big, broad use-cases defined by (at the time) the Fedora Board and FESCo. Since then, everything's become more complicated, with Atomic and then CoreOS, and IoT, and Silverblue — and we never really found a satisfying way to present the work of our other desktop SIGs.
So, I think we should revisit the top-level design for Get Fedora. I'm not a designer and I don't have a particular answer in mind, but I think we should try an approach which showcases all of our different outputs in some way which also makes it easy for new users to find the right solution quickly (and to understand the support options and expectations for their choice).
In support of that, I'd like to also have that page steer people toward tooling for creating new spins — and I'd like to see us invest in and rebuild the spin creation processes. (Particularly, I'd like spin releases to be decoupled from the main OS release, and for those to be self-service by their SIGs with minimal rel-eng involvement needed.)
Second, we need to figure out how to work with language-native packaging formats and more directly with code that's distributed in git repos rather than as tarball releases.
We're not adding meaningful end-user value by manually repackaging these in our own format. We _do_ add value by vetting licenses and ensuring availability and consistency, but I think we can find better ways to do that. I think the "source git" project is an interesting step here.
These two things are linked. I want application developers to find Fedora a convenient and easy way to get their software to users. Pulling from the Fedora container and flatpak registries should give the same feeling of trust and safety that installing an RPM from our repos does today. We're not going to get either of those things with the system we have now. Our value is unclear to both developers and end users, so we just get left out. If we don't address this, we're ultimately going to be reduced to a barely-differentiated implementation of a base OS that no one really cares about, not the rich software ecosystem we've always aspired to build.
Third, we really need to continue to grow the project as more than coding and packaging.
Obviously that engineering work is the core of the project (and we should grow that too!), but it doesn't matter what we build if no one can find it or find how to use it. We need to feed and grow our documentation and support communities around the world. Marie (our new FCAIC, in case you missed that!) and I have been talking about this, and we hope to really expand the $150-mini-event Mindshare program in the next year, and hopefully build on that further in the coming ones.
Those are my thoughts. What other challenges and opportunities do you see, and what would you like us to focus on?
----
[*] https://www.xkcd.com/2249/
(Also, on a more personal note: I've been SUPER swamped with email. If you sent me something over the holiday break and I didn't answer, it's not you, it's me. If I dropped something important, please send again. I'm declaring email bankruptcy and starting the year fresh.)
On 2020-01-06 18:19, Matthew Miller wrote:
Hi,
Second, we need to figure out how to work with language-native packaging formats and more directly with code that's distributed in git repos rather than as tarball releases.
We're not adding meaningful end-user value by manually repackaging these in our own format.
It would be nice if it were the case, but that's a completely false assertion.
Re-packaging in our own format is not a horrible toil because our own format is horrible. It's a horrible toil because our own format is old and mature, and language-native formats are not. They lack all kinds of checks. Checks that do not matter in a dev context, but definitely matter in a deployment context.
Handling those checks is where the packaging toil is (that is, as long as Fedora is a deployment project). It is not something the packaging format makes harder.
On 2020-01-06 19:05, Nicolas Mailhot wrote:
Handling those checks is where the packaging toil is (that is, as long as Fedora is a deployment project). It is not something the packaging format makes harder.
However, because our packaging format streamlines those checks, and forces us to apply them, it is blamed by devs for the impedance mismatch between dev and deployment requirements.
But, this mismatch is not caused by our packaging format. It is caused by devs taking shortcuts because their language packaging format lets them.
On 06. 01. 20 at 19:08, Nicolas Mailhot via devel wrote:
On 2020-01-06 19:05, Nicolas Mailhot wrote:
Handling those checks is where the packaging toil is (that is, as long as Fedora is a deployment project). It is not something the packaging format makes harder.
However, because our packaging format streamlines those checks, and forces us to apply them, it is blamed by devs for the impedance mismatch between dev and deployment requirements.
But, this mismatch is not caused by our packaging format. It is caused by devs taking shortcuts because their language packaging format lets them.
Well said, Nicolas.
Embracing the "language-native packaging" and "git repos" is giving up on what Fedora maintainers have always did and that is kicking forward all the upstreams, because we force them to keep updating the dependencies (or to maintain compatibility with old versions of dependencies). Once we embrace "git repos" etc, we will lose our soul IMO. There won't be any collaboration between upstream projects, which was cultivated by distribution maintainers. Upstreams will sit in their silos and bundle everything.
Also, just today I had a discussion about whether Ruby packages should be more Fedora-tailored or more upstream-like, and there is no right way which could reasonably satisfy both worlds.
E.g. if an upstream package has Windows-specific dependencies, it is kind of natural to strip those dependencies on Fedora. OTOH, that possibly breaks dependency resolution on other platforms if the project was created using Fedora packages. This is unfortunately a reason for devs to take a shortcut and just go the upstream way, because, if nothing else, it is typically better documented.
Vít
On Tue, Jan 07, 2020 at 10:36:33AM +0100, Vít Ondruch wrote:
On 06. 01. 20 at 19:08, Nicolas Mailhot via devel wrote:
On 2020-01-06 19:05, Nicolas Mailhot wrote:
Handling those checks is where the packaging toil is (that is, as long as Fedora is a deployment project). It is not something the packaging format makes harder.
However, because our packaging format streamlines those checks, and forces us to apply them, it is blamed by devs for the impedance mismatch between dev and deployment requirements.
I think everyone is right ;)
It is pretty clear that we've simplified rpm packaging massively over the last few years. It is enough to take a random spec file from 10 years ago, with all its fragile manual steps, and compare it with a modern spec file that is often just a bit of boilerplate providing the name, version, license, and description, plus a few macro calls that do all the heavy lifting.
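To illustrate the "bit of boilerplate" point: with the new pyproject macros, a spec for a hypothetical pure-Python library (every name below is a placeholder, and this is a sketch of the shape rather than a buildable package) can be roughly this short:

Name:           python3-example
Version:        1.0.0
Release:        1%{?dist}
Summary:        Example library
License:        MIT
URL:            https://pypi.org/project/example/
Source0:        %{pypi_source example}
BuildArch:      noarch
BuildRequires:  python3-devel
BuildRequires:  pyproject-rpm-macros

%description
Example library.

%prep
%autosetup -n example-%{version}

# dependencies are generated from upstream metadata instead of
# being copied into the spec by hand
%generate_buildrequires
%pyproject_buildrequires

%build
%pyproject_wheel

%install
%pyproject_install

%files
%license LICENSE
# file list automation isn't finished yet, so the payload is still listed manually
%{python3_sitelib}/example*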
We've made great strides in bringing rpm and upstream packaging closer. And this has been an effort on both sides, both upstream to accommodate workflows required by distros, and on the distro side to wrap those workflows: automatically generated specs for rust packages, pyproject macros, etc., etc. Also SPDX licenses, automatic github tarballs…
But it is clear that this automation process is far from complete. And to stay relevant, we (and other distros) need to keep up this work. Without it, we'll never keep up with the infinite supply of upstream projects and we'll stop being useful to users.
We're not adding meaningful end-user value by manually repackaging these in our own format. We _do_ add value by vetting licenses and ensuring availability and consistency, but I think we can find better ways to do that.
Agreed, with the "manually" part. I think we need to streamline the process, and only require manual operation when unavoidable. I would like to be in a state where "packaging" of various projects using language-specific frameworks is as simple as flipping a switch in some web interface.
Matthew Miller wrote:
I think the "source git" project is an interesting step here.
OK, as long as we're "just" changing the delivery format (i.e. git archive instead of a tarball), but not trying to package the master branches of various projects. I.e. this must not be about absolving upstreams from having to do releases, but just about cutting out the antiquated intermediate tarball step.
But, this mismatch is not caused by our packaging format. It is caused by devs taking shortcuts because their language packaging format lets them.
Well said, Nicolas.
Embracing "language-native packaging" and "git repos" is giving up on what Fedora maintainers have always done, and that is pushing all the upstreams forward, because we force them to keep updating their dependencies (or to maintain compatibility with old versions of dependencies). Once we embrace "git repos" etc., we will lose our soul IMO. There won't be any collaboration between upstream projects, which was cultivated by distribution maintainers. Upstreams will sit in their silos and bundle everything.
Also, just today I had a discussion about whether Ruby packages should be more Fedora-tailored or more upstream-like, and there is no right way which could reasonably satisfy both worlds.
E.g. if an upstream package has Windows-specific dependencies, it is kind of natural to strip those dependencies on Fedora. OTOH, that possibly breaks dependency resolution on other platforms if the project was created using Fedora packages. This is unfortunately a reason for devs to take a shortcut and just go the upstream way, because, if nothing else, it is typically better documented.
I don't know anything about ruby packaging, but I assume that this issue is similar in rust: a solution where upstream and the distro cooperate is required. Dependencies need to be conditionalized by architecture, and downstream packaging needs to use those conditionals as appropriate. In rust, sadly, this is still not the case (https://pagure.io/fedora-rust/rust2rpm/issue/2, https://github.com/rust-lang/cargo/issues/5133). I'm sure we need to and can "reasonably satisfy both worlds".
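For illustration, Cargo at least has the upstream-side syntax for this already; a crate can declare platform-conditional dependencies like so (a minimal sketch; winapi and libc are just the classic examples of platform-only crates):

[target.'cfg(windows)'.dependencies]
winapi = "0.3"

[target.'cfg(unix)'.dependencies]
libc = "0.2"

As I understand it, the issues linked above are about the tooling side not yet evaluating those conditionals properly, so downstream packaging can't take advantage of them.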
Zbyszek
On 07/01/2020 10:57, Zbigniew Jędrzejewski-Szmek wrote:
It is pretty clear that we've simplified rpm packaging massively over the last few years. It is enough to take a random spec file from 10 years ago, with all its fragile manual steps, and compare it with a modern spec file that is often just a bit of boilerplate providing the name, version, license, and description, plus a few macro calls that do all the heavy lifting.
We've made great strides in bringing rpm and upstream packaging closer. And this has been an effort on both sides, both upstream to accommodate workflows required by distros, and on the distro side to wrap those workflows: automatically generated specs for rust packages, pyproject macros, etc., etc. Also SPDX licenses, automatic github tarballs…
But it is clear that this automation process is far from complete. And to stay relevant, we (and other distros) need to keep up this work. Without it, we'll never keep up with the infinite supply of upstream projects and we'll stop being useful to users.
The thing is that no matter how much you manage to automate the creation of spec files for a given ecosystem (and I've never seen one where the typical spec file doesn't need some manual tweaking), you are still going to hit the fundamental problem that those specs then need to be reviewed.
In the typical "modern" ecosystem, where everything is split into hundreds of ever tinier components, review bandwidth is the single biggest limitation, and unless you're going to abandon reviews as part of automating the distribution of upstream components, I just don't see how this can work.
I'd love to find a way to integrate the likes of gem, npm, etc. directly into our packaging rather than having to repackage everything by hand, but I just don't see any way of doing it without compromising what we do to the extent that we're not really doing anything useful at all and are just shoveling out whatever nonsense upstreams perpetrate without question.
Tom
On 07. 01. 20 at 12:41, Tom Hughes wrote:
The thing is that no matter how much you manage to automate the creation of spec files for a given ecosystem (and I've never seen one where the typical spec file doesn't need some manual tweaking), you are still going to hit the fundamental problem that those specs then need to be reviewed.
I disagree. Especially with libraries - be it python, gems... it can be very well automated without the need for review.
Of course, you will hit some hard pieces which will require human intervention. But there is a big difference between manually packaging 100,000 gems and automatically packaging 99,900 gems while manually touching only 100.
I'd love to find a way to integrate the likes of gem, npm, etc. directly into our packaging
+1. I'd love to see this as part of package management (rpm?) too. But only as an option. It is hard to predict if those ideas will be viable.
On 07/01/2020 12:22, Miroslav Suchý wrote:
On 07. 01. 20 at 12:41, Tom Hughes wrote:
The thing is that no matter how much you manage to automate the creation of spec files for a given ecosystem (and I've never seen one where the typical spec file doesn't need some manual tweaking), you are still going to hit the fundamental problem that those specs then need to be reviewed.
I disagree. Especially with libraries - be it python, gems... it can be very well automated without the need for review.
Well that depends on the reason for the review, doesn't it?
Just to take a few things, how does automation check that the license declared in the upstream metadata is correct? or that the upstream package is obeying FHS and not installing files in the wrong place?
I have extensive experience with npm and packaging Node.js libraries in Fedora, and even a well-behaved upstream is rarely fully automatable, and many upstreams are not well behaved.
Tom
Tom Hughes tom@compton.nu writes:
On 07/01/2020 12:22, Miroslav Suchý wrote:
On 07. 01. 20 at 12:41, Tom Hughes wrote:
The thing is that no matter how much you manage to automate the creation of spec files for a given ecosystem (and I've never seen one where the typical spec file doesn't need some manual tweaking), you are still going to hit the fundamental problem that those specs then need to be reviewed.
I disagree. Especially with libraries - be it python, gems... it can be very well automated without the need for review.
Well that depends on the reason for the review, doesn't it?
Just to take a few things, how does automation check that the license declared in the upstream metadata is correct?
openSUSE actually has a bot for exactly this in the Open Build Service (it's not perfect, of course, but it takes a good chunk of the legal review burden off humans): https://github.com/openSUSE/cavil. IIRC Neal has been trying to get it into Fedora.
On Tue, Jan 07, 2020 at 12:30:25PM +0000, Tom Hughes wrote:
On 07/01/2020 12:22, Miroslav Suchý wrote:
On 07. 01. 20 at 12:41, Tom Hughes wrote:
The thing is that no matter how much you manage to automate the creation of spec files for a given ecosystem (and I've never seen one where the typical spec file doesn't need some manual tweaking), you are still going to hit the fundamental problem that those specs then need to be reviewed.
I disagree. Especially with libraries - be it python, gems... it can be very well automated without the need for review.
Well that depends on the reason for the review, doesn't it?
Just to take a few things, how does automation check that the license declared in the upstream metadata is correct? or that the upstream package is obeying FHS and not installing files in the wrong place?
Yes, it does, or at least it should. This is the kind of thing that absolutely can be automated. For licensing in particular, we have machine-readable SPDX tags on files, and automatic conversion of SPDX tags to Fedora tags. And language-specific packaging formats have a metadata field for the license. If both those sources agree, then automation should be able to say that the license is correct with a very high degree of confidence. Automation is not going to catch every case, but neither would a human.
And for FHS compliance, similar checks can easily be implemented. Fedora-review certainly does some. But if a language-specific packaging framework provides a way to do installation automatically, then the chances of an upstream project inventing its own paths are diminished, so this should be less of an issue in the future.
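As a sketch of the idea (not an existing tool), a naive FHS check over a buildroot is just a few lines of shell; the prefix list here is illustrative, not exhaustive:

# Sketch: list any files the package installs outside the usual FHS prefixes.
# A real implementation would use the full FHS list and report per-file errors.
find "$RPM_BUILD_ROOT" -type f |
  grep -vE "^$RPM_BUILD_ROOT(/usr/(bin|sbin|lib|lib64|libexec|share|include)|/etc|/var)/" &&
  echo "WARNING: files found outside expected FHS locations" ||
  echo "OK: all files in expected locations"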
I have extensive experience with npm and packaging Node.js libraries in Fedora, and even a well-behaved upstream is rarely fully automatable, and many upstreams are not well behaved.
No doubt. That's why I said elsewhere in the thread that automation is something that requires cooperation from both upstream and our side.
Zbyszek
On Tue, Jan 7, 2020 at 1:31 PM Tom Hughes tom@compton.nu wrote:
On 07/01/2020 12:22, Miroslav Suchý wrote:
On 07. 01. 20 at 12:41, Tom Hughes wrote:
The thing is that no matter how much you manage to automate the creation of spec files for a given ecosystem (and I've never seen one where the typical spec file doesn't need some manual tweaking), you are still going to hit the fundamental problem that those specs then need to be reviewed.
I disagree. Especially with libraries - be it python, gems... it can be very well automated without the need for review.
Well that depends on the reason for the review, doesn't it?
Just to take a few things, how does automation check that the license declared in the upstream metadata is correct? or that the upstream package is obeying FHS and not installing files in the wrong place?
(snip)
I have extensive experience with npm and packaging Node.js libraries in Fedora, and even a well-behaved upstream is rarely fully automatable, and many upstreams are not well behaved.
Tom
Just to add my 2¢ here: I have experience with packaging stuff from many language ecosystems (ruby/gems, python/pypi, go, Java/maven) and with various build systems (autotools, meson, CMake, etc.). The packaging burden is *vastly* different, depending on the ecosystem.
- python / pypi works great for %build and %install, but until testing with tox is automated in packaging macros, %check has to be specified manually, since upstream projects do different things there. generate_buildrequires also works nicely here.
- ruby is weird; packaging gems is a bit of a chore. Upstream has many knobs to fiddle with that make distro packaging hard (for example, not including test sources in .gem files seems to be a common practice), there's no canonical way of running test suites, and I think the %check section of all my rubygem packages is different and specifically tailored to the package ...
- go and rust are pretty easily automated because there aren't as many things upstreams can mess with; %build, %install and %check are almost always clean and easily automated with macros. generate_buildrequires also helps with rust packaging.
- Java / maven has good support with packaging tools and macros in Fedora, but if an upstream project deviates from the "standard way of doing things", even if only slightly, you might end up modifying maven XML project definitions with XPath queries. The horror.
For C / C++ / Vala, which don't have language-specific package managers:
- meson is really nice, with manual intervention almost never needed; but even when it is, patching meson.build files is pretty straightforward
- CMake is alright, even if it's hardly readable by humans; but patching CMakeLists.txt files gets ugly
- I hope autotools dies a fiery death, and soon
TL;DR: The packaging burden ranges from small or near non-existent (meson, python, go, rust) to being a chore (ruby, Java, autotools). I don't know how nodejs packages compare, since I've been lucky and haven't had to deal with them yet.
Conclusion: Some things could and should be improved, but some of that will only happen if we cooperate with upstreams (for example, right now rubygems and Java/maven are just too wild to be tamed by any downstream automation IMO, unless an omniscient AGI is just around the corner and will do our packaging for us).
Idea: It would be nice to see more tools provide introspection or machine-readable output for the stuff run in %build and/or %install. For example, maven/XMvn automatically generates file lists. I think automating things in %files should be doable for more ecosystems (python? ruby?).
PS: I *do* think packagers add value by packaging upstream projects *correctly*, basically by providing a unified abstraction over all the weird things that upstream projects can do, and sometimes actually do. It makes it easier for consumers, because they know what to expect when they "dnf install rubygem-foo", whereas "gem install foo" or "pip install foo" might run and do all sorts of weird shit you don't want.
Fabio
On 07. 01. 20 14:06, Fabio Valentini wrote:
- python / pypi works great for %build and %install, but until testing
with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
See the %tox macro from https://src.fedoraproject.org/rpms/pyproject-rpm-macros
Examples:
https://src.fedoraproject.org/rpms/python-xmlschema/blob/master/f/python-xml...
https://src.fedoraproject.org/rpms/python-elementpath/blob/master/f/python-e...
On Tue, Jan 7, 2020 at 2:18 PM Miro Hrončok mhroncok@redhat.com wrote:
On 07. 01. 20 14:06, Fabio Valentini wrote:
- python / pypi works great for %build and %install, but until testing
with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
See the %tox macro from https://src.fedoraproject.org/rpms/pyproject-rpm-macros
Examples:
https://src.fedoraproject.org/rpms/python-xmlschema/blob/master/f/python-xml...
https://src.fedoraproject.org/rpms/python-elementpath/blob/master/f/python-e...
Ooh, shiny. I knew that somebody was working on this because it was presented at flock, but I didn't know that it was in a usable state now. I'll try using it in the next python package I touch :) Thanks for the pointers, Miro!
Fabio
On Tuesday, 07 January 2020 at 14:32, Fabio Valentini wrote:
On Tue, Jan 7, 2020 at 2:18 PM Miro Hrončok mhroncok@redhat.com wrote:
On 07. 01. 20 14:06, Fabio Valentini wrote:
- python / pypi works great for %build and %install, but until testing
with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
See the %tox macro from https://src.fedoraproject.org/rpms/pyproject-rpm-macros
Examples:
https://src.fedoraproject.org/rpms/python-xmlschema/blob/master/f/python-xml...
https://src.fedoraproject.org/rpms/python-elementpath/blob/master/f/python-e...
Ooh, shiny. I knew that somebody was working on this because it was presented at flock, but I didn't know that it was in a usable state now. I'll try using it in the next python package I touch :) Thanks for the pointers, Miro!
Thanks, indeed. Can we have that mentioned in the Python section of the Packaging Guidelines?
Regards, Dominik
On Sun, 2020-01-12 at 10:02 +0100, Dominik 'Rathann' Mierzejewski wrote:
On Tuesday, 07 January 2020 at 14:32, Fabio Valentini wrote:
On Tue, Jan 7, 2020 at 2:18 PM Miro Hrončok mhroncok@redhat.com wrote:
On 07. 01. 20 14:06, Fabio Valentini wrote:
- python / pypi works great for %build and %install, but until testing
with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
See the %tox macro from https://src.fedoraproject.org/rpms/pyproject-rpm-macros
Examples:
https://src.fedoraproject.org/rpms/python-xmlschema/blob/master/f/python-xml...
https://src.fedoraproject.org/rpms/python-elementpath/blob/master/f/python-e...
Ooh, shiny. I knew that somebody was working on this because it was presented at flock, but I didn't know that it was in a usable state now. I'll try using it in the next python package I touch :) Thanks for the pointers, Miro!
Thanks, indeed. Can we have that mentioned in the Python section of the Packaging Guidelines?
Please don't mention it as required or even necessarily recommended, as it may not be. I intentionally set up my projects such that tox is appropriate for upstream CI, not distribution package checking, and intend distribution packages to run setup.py test.
On 12. 01. 20 16:47, Adam Williamson wrote:
On Sun, 2020-01-12 at 10:02 +0100, Dominik 'Rathann' Mierzejewski wrote:
On Tuesday, 07 January 2020 at 14:32, Fabio Valentini wrote:
On Tue, Jan 7, 2020 at 2:18 PM Miro Hrončok mhroncok@redhat.com wrote:
On 07. 01. 20 14:06, Fabio Valentini wrote:
- python / pypi works great for %build and %install, but until testing
with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
See the %tox macro from https://src.fedoraproject.org/rpms/pyproject-rpm-macros
Examples:
https://src.fedoraproject.org/rpms/python-xmlschema/blob/master/f/python-xml...
https://src.fedoraproject.org/rpms/python-elementpath/blob/master/f/python-e...
Ooh, shiny. I knew that somebody was working on this because it was presented at flock, but I didn't know that it was in a usable state now. I'll try using it in the next python package I touch :) Thanks for the pointers, Miro!
Thanks, indeed. Can we have that mentioned in the Python section of the Packaging Guidelines?
Please don't mention it as required or even necessarily recommended, as it may not be. I intentionally set up my projects such that tox is appropriate for upstream CI, not distribution package checking, and intend distribution packages to run setup.py test.
The %tox macro runs the tests in the current environment.
See https://github.com/fedora-python/tox-current-env/ for details.
It will probably fail to handle VCS URL test dependencies and such, but for a "sane" tox configuration, this works fine.
It is required to have all the deps available, so you can use this to get buildtime + runtime + tox deps:
%generate_buildrequires
%pyproject_buildrequires -t
(See https://src.fedoraproject.org/rpms/pyproject-rpm-macros README.)
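Put together, the relevant spec parts then look roughly like this (a sketch following the README):

%generate_buildrequires
# -t pulls in the test dependencies from the tox configuration
%pyproject_buildrequires -t

%check
%tox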
----
At the same time, `setup.py test` is deprecated.
See for example in python-wikitcms:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1412739
Executing(%check): ...
+ /usr/bin/python3 setup.py test
running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
---
We have approached "Python (upstream) packaging people" (PyPA etc.) IRL and asked for a test runner standard. They said: "Use tox, and if you attempt standardization, build it on tox" (possibly in other words).
Tox will also soon properly support pyproject.toml config (it does already, but only in a very ugly way).
On Sun, 2020-01-12 at 23:24 +0100, Miro Hrončok wrote:
Please don't mention it as required or even necessarily recommended, as it may not be. I intentionally set up my projects such that tox is appropriate for upstream CI, not distribution package checking, and intend distribution packages to run setup.py test.
The %tox macro runs the tests in the current environment.
See https://github.com/fedora-python/tox-current-env/ for details.
It's not about the environment. It's about what they actually *do*. In my projects, tox runs linters, coverage checks and unit tests; setup.py test only runs unit tests. This is a convenient separation because I find it appropriate to run linters and coverage checks on commits/PRs for upstream, but there is no point in running them on package builds.
At the same time, `setup.py test` is deprecated.
Sigh. Always nice when people deprecate perfectly well-working workflows out from underneath you. :/
On 13. 01. 20 22:54, Adam Williamson wrote:
On Sun, 2020-01-12 at 23:24 +0100, Miro Hrončok wrote:
Please don't mention it as required or even necessarily recommended, as it may not be. I intentionally set up my projects such that tox is appropriate for upstream CI, not distribution package checking, and intend distribution packages to run setup.py test.
The %tox macro runs the tests in the current environment.
See https://github.com/fedora-python/tox-current-env/ for details.
It's not about the environment. It's about what they actually *do*. In my projects, tox runs linters, coverage checks and unit tests; setup.py test only runs unit tests. This is a convenient separation because I find it appropriate to run linters and coverage checks on commits/PRs for upstream, but there is no point in running them on package builds.
I agree with everything you said. Taking what we have, ideally we should convince upstreams to only run tests with default toxenv, not linters.
We also don't have a tox standard to only run offline tests etc. Maybe one day...
At the same time, `setup.py test` is deprecated.
Sigh. Always nice when people deprecate perfectly well-working workflows out from underneath you. :/
Right. Two things to consider:
- the deprecation period will most likely last forever
- setup.py files will eventually disappear anyway
On Tue, 2020-01-14 at 01:29 +0100, Miro Hrončok wrote:
On 13. 01. 20 22:54, Adam Williamson wrote:
On Sun, 2020-01-12 at 23:24 +0100, Miro Hrončok wrote:
Please don't mention it as required or even necessarily recommended, as it may not be. I intentionally set up my projects such that tox is appropriate for upstream CI, not distribution package checking, and intend distribution packages to run setup.py test.
The %tox macro runs the tests in the current environment.
See https://github.com/fedora-python/tox-current-env/ for details.
It's not about the environment. It's about what they actually *do*. In my projects, tox runs linters, coverage checks and unit tests; setup.py test only runs unit tests. This is a convenient separation because I find it appropriate to run linters and coverage checks on commits/PRs for upstream, but there is no point in running them on package builds.
I agree with everything you said. Taking what we have, ideally we should convince upstreams to only run tests with default toxenv, not linters.
We also don't have a tox standard to only run offline tests etc. Maybe one day...
At the same time, `setup.py test` is deprecated.
Sigh. Always nice when people deprecate perfectly well-working workflows out from underneath you. :/
Right. Two things to consider:
- the deprecation period will most likely last forever
- setup.py files will eventually disappear anyway
So I looked into all of this stuff a bit last night. My conclusions so far:
* It would be really nice if there were a short Idiot's Guide with a sample project and RPM spec that puts everything together, as opposed to me winding up with a browser full of tabs open to https://src.fedoraproject.org/rpms/pyproject-rpm-macros/tree/master and https://www.python.org/dev/peps/pep-0517/ and https://hynek.me/articles/sharing-your-labor-of-love-pypi-quick-and-dirty/ etc. Especially since...
* ...the Shiny New Stuff does not appear to be available on EPEL *at all* yet - not even EPEL 8. This makes it a bit of a non-starter if you want to use the same spec file bits across Fedora and EPEL. I realize it may be impractical/impossible to backport everything to EPEL 7, and am getting close to the point where I throw in the towel and drop Python 2 support from my projects' master branches and hang the EPEL 7 package repos out on a branch, but EPEL 8 would be nice and ought to be possible?
* Unless I'm missing something, what you suggested for tox - "ideally we should convince upstreams to only run tests with default toxenv" - actually seems weirdly difficult to implement. AFAIK none of the official or unofficial tox docs I can find really cover the idea of having an environment that's *defined* but is not *default*. It seems to be virtually universal practice with tox that you put every environment in `envlist`...and if you just run `tox` without any `-e` argument or special env var, it runs every environment in `envlist`. People seem to assume all environments will be default, and if you want to run fewer than 'all of them' you filter with `-e` or whatever.
I *did* figure out a way to do this, but it's something I never really saw documented anywhere, and I eventually hit a roadblock with using it in the CI setup I made. You have to make a `tox.ini` like this:
[tox]
envlist = py{27,36,37,38,39}

[testenv]
deps =
    -r{toxinidir}/install.requires
    -r{toxinidir}/tests.requires
    ci: -r{toxinidir}/ci.requires

commands =
    py.test
    ci: py.test --cov-report term-missing --cov-report xml --cov openqa_client
    ci: diff-cover coverage.xml --fail-under=90
    ci: diff-quality --violations=pylint --fail-under=90

setenv =
    PYTHONPATH = {toxinidir}
by having `ci` directives in `deps` and `commands` like that, you sort of imply the existence of environments like `py27-ci` without ever explicitly declaring them, and tox itself is OK with this. You can run `tox -epy{27,36,37,38,39}-ci` and it'll do what you (maybe) expect - it'll read ci.requires and run the additional commands. But if you just run `tox` it'll run the py{27,36,37,38,39} environments without the additional `ci` bits.
This definitely seems to be a bit 'off the beaten track', though, like I said - it's not explicitly documented anywhere I could find (most documentation of the whole generative environments thing assumes all the environments will be declared up in `envlist`) and commands like `tox -l` and `tox -a` may not do what you expect.
The showstopper I ultimately hit was in an extension on top of tox, `tox-gh-actions`, which is a convenience thing for using tox with GitHub Actions:
https://github.com/ymyzk/tox-gh-actions/issues/11
(the project I'm using as a testbed for this stuff is hosted in github, and Actions seemed like the easiest way to set up CI). It seems like this 'implicit non-default environment' thing completely trips up tox-gh-actions; when I added this section to `tox.ini` to try and make it use the `ci` environments:
[gh-actions]
python =
    2.7: py27
    3.6: py36-ci
    3.7: py37-ci
    3.8: py38-ci
    3.9: py39-ci
it would run, but each actual test run seemed not to actually call tox at all, it just did apparently nothing and then 'passed'.
So in the end I gave up and wound up just making the `-ci` environments the defaults, explicitly declared in envlist, and figuring I'd have the spec file use `-e` to explicitly run the non-ci environments. But then I didn't even do that because of the whole EPEL thing, so so far I've wound up not using the pyproject macros at all, I'm doing this instead:
%if 0%{?with_python2}
PYTHONPATH=%{buildroot}%{python2_sitelib} py.test
%endif # with_python2
%if 0%{?with_python3}
PYTHONPATH=%{buildroot}%{python3_sitelib} py.test-3
%endif # with_python3
PYTHON!
On 28. 02. 20 17:58, Adam Williamson wrote:
On Tue, 2020-01-14 at 01:29 +0100, Miro Hrončok wrote:
On 13. 01. 20 22:54, Adam Williamson wrote:
On Sun, 2020-01-12 at 23:24 +0100, Miro Hrončok wrote:
Please don't mention it as required or even necessarily recommended, as it may not be. I intentionally set up my projects such that tox is appropriate for upstream CI, not distribution package checking, and intend distribution packages to run setup.py test.
The %tox macro runs the tests in the current environment.
See https://github.com/fedora-python/tox-current-env/ for details.
It's not about the environment. It's about what they actually *do*. In my projects, tox runs linters, coverage checks and unit tests; setup.py test only runs unit tests. This is a convenient separation because I find it appropriate to run linters and coverage checks on commits/PRs for upstream, but there is no point in running them on package builds.
I agree with everything you said. Taking what we have, ideally we should convince upstreams to only run tests with default toxenv, not linters.
We also don't have a tox standard to only run offline tests etc. Maybe one day...
At the same time, `setup.py test` is deprecated.
Sigh. Always nice when people deprecate perfectly well-working workflows out from underneath you. :/
Right. Two things to consider:
- the deprecation period will most likely last forever
- setup.py files will eventually disappear anyway
So I looked into all of this stuff a bit last night. My conclusions so far:
- It would be really nice if there were a short Idiot's Guide with a
sample project and RPM spec that puts everything together, as opposed to me winding up with a browser full of tabs open to https://src.fedoraproject.org/rpms/pyproject-rpm-macros/tree/master and https://www.python.org/dev/peps/pep-0517/ and https://hynek.me/articles/sharing-your-labor-of-love-pypi-quick-and-dirty/ etc. Especially since...
This is all "beta" and provisional, but we'll write a guide once we finish the %files section macros. In the meantime, see specs in https://src.fedoraproject.org/rpms/pyproject-rpm-macros/blob/master/f/tests for examples.
Yes, we could use some docs about this. No, we don't have them yet. There is also:
[RFE] Documentation about pyproject.toml file https://bugzilla.redhat.com/show_bug.cgi?id=1739847
- ...the Shiny New Stuff does not appear to be available on EPEL *at
all* yet - not even EPEL 8. This makes it a bit of a non-starter if you want to use the same spec file bits across Fedora and EPEL. I realize it may be impractical/impossible to backport everything to EPEL 7, and am getting close to the point where I throw in the towel and drop Python 2 support from my projects' master branches and hang the EPEL 7 package repos out on a branch, but EPEL 8 would be nice and ought to be possible?
This stands in the way of having this in EPEL 8, AFAIK:
- no automatic RPM buildrequires
- ancient pip (PEP 517 support was added in 19.0, we have 9.0.3)
- ancient setuptools (40.8 is needed, we have 39.2.0)
- no tox (but it's being packaged for EPEL8 as we speak)
How are we supposed to make progress in Fedora when people dislike it when it is not included in EPEL?
- Unless I'm missing something, what you suggested for tox - "ideally
we should convince upstreams to only run tests with default toxenv" - actually seems weirdly difficult to implement. AFAIK none of the official or unofficial tox docs I can find really cover the idea of having an environment that's *defined* but is not *default*. It seems to be virtually universal practice with tox that you put every environment in `envlist`...and if you just run `tox` without any `-e` argument or special env var, it runs every environment in `envlist`. People seem to assume all environments will be default, and if you want to run fewer than 'all of them' you filter with `-e` or whatever.
What I meant with "ideally we should convince upstreams to only run tests with default toxenv, not linters" was this:
`tox -e py37` should only run tests
`tox -e py38` should only run tests
`tox -e py39` should only run tests
Linters should run in `tox -e lint`, or `tox -e py38-lint`.
%tox runs `tox -e py3X` by default.
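In tox.ini terms, the convention looks something like this (a sketch; "mypackage" is a placeholder). The lint env is defined, but it isn't in envlist, so plain `tox` (and therefore %tox) never runs it; upstream CI opts in with `tox -e lint`:

[tox]
envlist = py{37,38,39}

[testenv]
deps = pytest
commands = pytest

# defined, but not in envlist, so only run when explicitly requested
[testenv:lint]
deps = pylint
commands = pylint mypackage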
I *did* figure out a way to do this, but it's something I never really saw documented anywhere, and I eventually hit a roadblock with using it in the CI setup I made. You have to make a `tox.ini` like this:
[tox]
envlist = py{27,36,37,38,39}

[testenv]
deps =
    -r{toxinidir}/install.requires
    -r{toxinidir}/tests.requires
    ci: -r{toxinidir}/ci.requires

commands =
    py.test
    ci: py.test --cov-report term-missing --cov-report xml --cov openqa_client
    ci: diff-cover coverage.xml --fail-under=90
    ci: diff-quality --violations=pylint --fail-under=90

setenv =
    PYTHONPATH = {toxinidir}
by having `ci` directives in `deps` and `commands` like that, you sort of imply the existence of environments like `py27-ci` without ever explicitly declaring them, and tox itself is OK with this. You can run `tox -epy{27,36,37,38,39}-ci` and it'll do what you (maybe) expect - it'll read ci.requires and run the additional commands. But if you just run `tox` it'll run the py{27,36,37,38,39} environments without the additional `ci` bits.
This definitely seems to be a bit 'off the beaten track', though, like I said - it's not explicitly documented anywhere I could find (most documentation of the whole generative environments thing assumes all the environments will be declared up in `envlist`) and commands like `tox -l` and `tox -a` may not do what you expect.
The showstopper I ultimately hit was in an extension on top of tox, `tox-gh-actions`, which is a convenience thing for using tox with GitHub Actions:
https://github.com/ymyzk/tox-gh-actions/issues/11
(the project I'm using as a testbed for this stuff is hosted in github, and Actions seemed like the easiest way to set up CI). It seems like this 'implicit non-default environment' thing completely trips up tox-gh-actions; when I added this section to `tox.ini` to try and make it use the `ci` environments:
[gh-actions]
python =
    2.7: py27
    3.6: py36-ci
    3.7: py37-ci
    3.8: py38-ci
    3.9: py39-ci
it would run, but each actual test run seemed not to actually call tox at all, it just did apparently nothing and then 'passed'.
So in the end I gave up and wound up just making the `-ci` environments the defaults, explicitly declared in envlist, and figuring I'd have the spec file use `-e` to explicitly run the non-ci environments.
That's what %tox does.
But then I didn't even do that because of the whole EPEL thing, so so far I've wound up not using the pyproject macros at all, I'm doing this instead:
%if 0%{?with_python2}
PYTHONPATH=%{buildroot}%{python2_sitelib} py.test
%endif # with_python2
%if 0%{?with_python3}
PYTHONPATH=%{buildroot}%{python3_sitelib} py.test-3
%endif # with_python3
PYTHON!
EPEL!
Sorry but I cannot help you have nice new things if you decide to have old things. You need to choose - backwards compatibility or new features.
On Fri, 2020-02-28 at 20:42 +0100, Miro Hrončok wrote:
- ...the Shiny New Stuff does not appear to be available on EPEL *at
all* yet - not even EPEL 8. This makes it a bit of a non-starter if you want to use the same spec file bits across Fedora and EPEL. I realize it may be impractical/impossible to backport everything to EPEL 7, and am getting close to the point where I throw in the towel and drop Python 2 support from my projects' master branches and hang the EPEL 7 package repos out on a branch, but EPEL 8 would be nice and ought to be possible?
This stands in the way of having this in EPEL 8, AFAIK:
- no automatic RPM buildrequires
- ancient pip (PEP 517 support was added in 19.0, we have 9.0.3)
- ancient setuptools (40.8 is needed, we have 39.2.0)
- no tox (but it's being packaged for EPEL8 as we speak)
Yeep! I didn't realize we had such ancient bits in 8...
How are we supposed to make progress in Fedora when people dislike it when it is not included in EPEL?
I assume there's an extra "not" here. On that assumption - I understand the problem, but if you check the history of my builds in EPEL, I'm definitely not in that group of people :P
- Unless I'm missing something, what you suggested for tox - "ideally
we should convince upstreams to only run tests with default toxenv" - actually seems weirdly difficult to implement. AFAIK none of the official or unofficial tox docs I can find really cover the idea of having an environment that's *defined* but is not *default*. It seems to be virtually universal practice with tox that you put every environment in `envlist`...and if you just run `tox` without any `-e` argument or special env var, it runs every environment in `envlist`. People seem to assume all environments will be default, and if you want to run fewer than 'all of them' you filter with `-e` or whatever.
What I meant with "ideally we should convince upstreams to only run tests with default toxenv, not linters" was this:
`tox -e py37` should only run tests
`tox -e py38` should only run tests
`tox -e py39` should only run tests
Linters should run in `tox -e lint`, or `tox -e py38-lint`.
%tox runs `tox -e py3X` by default.
Ah, so essentially the scheme I suggested, but by 'default toxenv' you meant the default env *of %tox*, not the default for just calling `tox`?
So in the end I gave up and wound up just making the `-ci` environments the defaults, explicitly declared in envlist, and figuring I'd have the spec file use `-e` to explicitly run the non-ci environments.
That's what %tox does.
yeah, I figured that one out later :)
Sorry but I cannot help you have nice new things if you decide to have old things. You need to choose - backwards compatibility or new features.
See above, I know which one I choose :P
On Fri, Feb 28, 2020 at 4:13 PM Adam Williamson adamwill@fedoraproject.org wrote:
On Fri, 2020-02-28 at 20:42 +0100, Miro Hrončok wrote:
- ...the Shiny New Stuff does not appear to be available on EPEL *at
all* yet - not even EPEL 8. This makes it a bit of a non-starter if you want to use the same spec file bits across Fedora and EPEL. I realize it may be impractical/impossible to backport everything to EPEL 7, and am getting close to the point where I throw in the towel and drop Python 2 support from my projects' master branches and hang the EPEL 7 package repos out on a branch, but EPEL 8 would be nice and ought to be possible?
This stands in the way of having this in EPEL 8, AFAIK:
- no automatic RPM buildrequires
- ancient pip (PEP 517 support was added in 19.0, we have 9.0.3)
- ancient setuptools (40.8 is needed, we have 39.2.0)
- no tox (but it's being packaged for EPEL8 as we speak)
Yeep! I didn't realize we had such ancient bits in 8...
Dynamic BuildRequires is unlikely to make it to RHEL 8, but all the others are achievable if the requests are made, right?
On 28. 02. 20 22:21, Neal Gompa wrote:
On Fri, Feb 28, 2020 at 4:13 PM Adam Williamson adamwill@fedoraproject.org wrote:
On Fri, 2020-02-28 at 20:42 +0100, Miro Hrončok wrote:
- ...the Shiny New Stuff does not appear to be available on EPEL *at
all* yet - not even EPEL 8. This makes it a bit of a non-starter if you want to use the same spec file bits across Fedora and EPEL. I realize it may be impractical/impossible to backport everything to EPEL 7, and am getting close to the point where I throw in the towel and drop Python 2 support from my projects' master branches and hang the EPEL 7 package repos out on a branch, but EPEL 8 would be nice and ought to be possible?
This stands in the way of having this in EPEL 8, AFAIK:
- no automatic RPM buildrequires
- ancient pip (PEP 517 support was added in 19.0, we have 9.0.3)
- ancient setuptools (40.8 is needed, we have 39.2.0)
- no tox (but it's being packaged for EPEL8 as we speak)
Yeep! I didn't realize we had such ancient bits in 8...
Dynamic BuildRequires is unlikely to make it to RHEL 8, but all the others are achievable if the requests are made, right?
How?
On 28. 02. 20 22:12, Adam Williamson wrote:
On Fri, 2020-02-28 at 20:42 +0100, Miro Hrončok wrote:
- ...the Shiny New Stuff does not appear to be available on EPEL *at
all* yet - not even EPEL 8. This makes it a bit of a non-starter if you want to use the same spec file bits across Fedora and EPEL. I realize it may be impractical/impossible to backport everything to EPEL 7, and am getting close to the point where I throw in the towel and drop Python 2 support from my projects' master branches and hang the EPEL 7 package repos out on a branch, but EPEL 8 would be nice and ought to be possible?
This stands in the way of having this in EPEL 8, AFAIK:
- no automatic RPM buildrequires
- ancient pip (PEP 517 support was added in 19.0, we have 9.0.3)
- ancient setuptools (40.8 is needed, we have 39.2.0)
- no tox (but it's being packaged for EPEL8 as we speak)
Yeep! I didn't realize we had such ancient bits in 8...
Fedora 28 if I recall correctly :(
How are we supposed to make progress in Fedora when people dislike it when it is not included in EPEL?
I assume there's an extra "not" here. On that assumption - I understand the problem, but if you check the history of my builds in EPEL, I'm definitely not in that group of people :P
Cool, sorry for that assumption, it sounded like not being available on EPEL is a show stopper.
- Unless I'm missing something, what you suggested for tox - "ideally
we should convince upstreams to only run tests with default toxenv" - actually seems weirdly difficult to implement. AFAIK none of the official or unofficial tox docs I can find really cover the idea of having an environment that's *defined* but is not *default*. It seems to be virtually universal practice with tox that you put every environment in `envlist`...and if you just run `tox` without any `-e` argument or special env var, it runs every environment in `envlist`. People seem to assume all environments will be default, and if you want to run fewer than 'all of them' you filter with `-e` or whatever.
What I meant with "ideally we should convince upstreams to only run tests with default toxenv, not linters" was this:
`tox -e py37` should only run tests
`tox -e py38` should only run tests
`tox -e py39` should only run tests
Linters should run in `tox -e lint`, or `tox -e py38-lint`.
%tox runs `tox -e py3X` by default.
Ah, so essentially the scheme I suggested, but by 'default toxenv' you meant the default env *of %tox*, not the default for just calling `tox`?
Sorry for the confusing terms. By "default" I basically meant anything in the "native", "classic", "traditional", "ordinary" scheme of pyXY, and by non-default anything in the scheme of pyXY-<extra> or just <custom>.
Sorry but I cannot help you have nice new things if you decide to have old things. You need to choose - backwards compatibility or new features.
See above, I know which one I choose :P
ack.
On Fri, 2020-02-28 at 23:06 +0100, Miro Hrončok wrote:
I assume there's an extra "not" here. On that assumption - I understand the problem, but if you check the history of my builds in EPEL, I'm definitely not in that group of people :P
Cool, sorry for that assumption, it sounded like not being available on EPEL is a show stopper.
It just makes things more complicated, as usual...
A follow-up observation, btw: can we exclude things from pyproject_buildrequires? (Whether that's done at the level of the dynamic build generation process itself, or within the pyproject macro/tool, I don't care - but I couldn't find any docs indicating it's possible at either level so far.)
I use setuptools-git for most of my projects. So in pyproject.toml I'm putting this:
requires = ["setuptools>=40.6.0", "setuptools-git", "wheel"]
because setuptools-git is needed *to produce the source distribution*, thus it is a 'requires' so far as PEP-517/518 are concerned. However, it's not a BuildRequires for a Fedora package, because a Fedora package build *starts* from the source distribution. It doesn't need to produce one.
I think I ran into an earlier version of this problem when I tried to use setup_requires briefly, or something. It'd be nice to use pyproject_buildrequires, but it'd also be nice for it not to pull in something that isn't actually needed...
another thing I just ran into while trying this stuff out:
https://bugzilla.redhat.com/show_bug.cgi?id=1808601
On 28. 02. 20 23:36, Adam Williamson wrote:
On Fri, 2020-02-28 at 23:06 +0100, Miro Hrončok wrote:
I assume there's an extra "not" here. On that assumption - I understand the problem, but if you check the history of my builds in EPEL, I'm definitely not in that group of people :P
Cool, sorry for that assumption, it sounded like not being available on EPEL is a show stopper.
It just makes things more complicated, as usual...
A follow-up observation, btw: can we exclude things from pyproject_buildrequires? (Whether that's done at the level of the dynamic build generation process itself, or within the pyproject macro/tool, I don't care - but I couldn't find any docs indicating it's possible at either level so far.)
You can patch/sed/etc. upstream metadata in %prep. The original idea is that if upstream metadata is wrong, it should be fixed upstream, not in the spec.
I use setuptools-git for most of my projects. So in pyproject.toml I'm putting this:
requires = ["setuptools>=40.6.0", "setuptools-git", "wheel"]
because setuptools-git is needed *to produce the source distribution*, thus it is a 'requires' so far as PEP-517/518 are concerned. However, it's not a BuildRequires for a Fedora package, because a Fedora package build *starts* from the source distribution. It doesn't need to produce one.
I see the problem, but I don't see a nice solution.
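For what it's worth, the %prep route mentioned above would be something like this for the setuptools-git case (a sketch tied to the exact formatting of the requires line quoted above):

%prep
%autosetup
# Sketch: strip the sdist-only setuptools-git requirement before
# %%generate_buildrequires reads pyproject.toml. This matches the exact
# quoted formatting and would need adjusting for other layouts.
sed -i 's/"setuptools-git", //' pyproject.toml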
I think I ran into an earlier version of this problem when I tried to use setup_requires briefly, or something. It'd be nice to use pyproject_buildrequires, but it'd also be nice for it not to pull in something that isn't actually needed...
another thing I just ran into while trying this stuff out:
Keep them coming \o/
On 28. 02. 20 23:49, Miro Hrončok wrote:
A follow-up observation, btw: can we exclude things from pyproject_buildrequires? (Whether that's done at the level of the dynamic build generation process itself, or within the pyproject macro/tool, I don't care - but I couldn't find any docs indicating it's possible at either level so far.)
You can patch/sed/etc. upstream metadata in %prep. The original idea is that if upstream metadata is wrong, it should be fixed upstream, not in the spec.
I use setuptools-git for most of my projects. So in pyproject.toml I'm putting this:
requires = ["setuptools>=40.6.0", "setuptools-git", "wheel"]
because setuptools-git is needed *to produce the source distribution*, thus it is a 'requires' so far as PEP-517/518 are concerned. However, it's not a BuildRequires for a Fedora package, because a Fedora package build *starts* from the source distribution. It doesn't need to produce one.
I see the problem, but I don't see a nice solution.
What about this?
%generate_buildrequires
%{pyproject_buildrequires -t} | grep -v setuptools-git
https://src.fedoraproject.org/rpms/pyproject-rpm-macros/pull-request/35
On 12. 01. 20 10:02, Dominik 'Rathann' Mierzejewski wrote:
On Tuesday, 07 January 2020 at 14:32, Fabio Valentini wrote:
On Tue, Jan 7, 2020 at 2:18 PM Miro Hrončok mhroncok@redhat.com wrote:
On 07. 01. 20 14:06, Fabio Valentini wrote:
- python / pypi works great for %build and %install, but until testing
with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
See the %tox macro from https://src.fedoraproject.org/rpms/pyproject-rpm-macros
Examples:
https://src.fedoraproject.org/rpms/python-xmlschema/blob/master/f/python-xml...
https://src.fedoraproject.org/rpms/python-elementpath/blob/master/f/python-e...
Ooh, shiny. I knew that somebody was working on this because it was presented at flock, but I didn't know that it was in a usable state now. I'll try using it in the next python package I touch :) Thanks for the pointers, Miro!
Thanks, indeed. Can we have that mentioned in the Python section of the Packaging Guidelines?
Not yet. The new macros are "experimental".
We want to finish 2 things first:
- macros for %files
- adding -s to shebangs without destroying existing shebangs with flags
Once we are ready, we will draft new Python guidelines.
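To give an idea of the current shape, a minimal spec skeleton using the experimental macros looks roughly like this (illustrative only - the details may still change before the macros are stabilized):

    %generate_buildrequires
    # -t also generates the test (tox) dependencies
    %pyproject_buildrequires -t

    %build
    %pyproject_wheel

    %install
    %pyproject_install

    %check
    %tox

The two linked specs above show the same pattern in real packages.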
On 07/01/2020 13:06, Fabio Valentini wrote:
- ruby is weird, packaging gems is a bit of a chore, upstream has many knobs to fiddle with that make distro packaging hard (for example, not including test sources in .gem files seems to be a common practice), there's no canonical way of running test suites, and I think the %check section of all my rubygem packages is different and specifically tailored to the package ...
Yeah, npm does many of those things as well...
Test files are often excluded from the npmjs.com tarball and only available on GitHub, where people often forget to tag releases.
While "npm test" is the normal way, I'm not sure it's safe to actually run npm during the build - we certainly don't normally. In any case, that often runs all sorts of other stuff like linters or coverage tests that aren't relevant and require extra dependencies, so we usually look at what "npm test" actually does and run that.
Test dependencies are another issue - the devDependencies key in the metadata often includes lots of things that aren't needed by the tests but are needed for other reasons by people working on the package.
Also, the npmjs.com tarball may not even have the real source if the package is transpiled from TypeScript or a different JS variant. Though in that case it's often a roadblock at the moment, as we typically won't have the toolchain necessary to do those build steps packaged.
Another good one: lots of npm packages that claim to be BSD or MIT are missing the license text.
Deciding whether a "bin" target declared in the metadata is actually worth exporting in /usr/bin, and if so under what name, is another thorny issue to automate.
Then of course there's the whole issue of massive dependency trees and version mismatches unless you start packaging multiple versions of the same libraries.
Tom
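To make the "look at what npm test actually does" point concrete, here is a sketch for a hypothetical module (the script contents are made up for illustration; every package is different):

    # package.json often declares something like:
    #   "scripts": { "test": "eslint . && nyc mocha test/" }
    # eslint and nyc are lint/coverage noise for our purposes, so %check
    # runs just the actual test runner (assuming nodejs-mocha is packaged
    # and provides /usr/bin/mocha):
    %check
    mocha test/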
On 07. 01. 20 at 14:19, Tom Hughes wrote:
On 07/01/2020 13:06, Fabio Valentini wrote:
- ruby is weird, packaging gems is a bit of a chore, upstream has many knobs to fiddle with that make distro packaging hard (for example, not including test sources in .gem files seems to be a common practice), there's no canonical way of running test suites, and I think the %check section of all my rubygem packages is different and specifically tailored to the package ...
Yeah, npm does many of those things as well...
Test files are often excluded from the npmjs.com tarball and only available on GitHub, where people often forget to tag releases.
While "npm test" is the normal way
Just FTR, there used to be a "gem test" command, which was removed because it was found unusable - mainly because the tests are typically designed to be executed from the source repository and have many different dependencies. Since there is no "gem test" command anymore, the test suites are more commonly omitted nowadays.
Vít
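To illustrate how hand-tailored these end up being, a sketch of a typical gem %check (hypothetical gem "foo"; Source1 stands in for a separate GitHub tarball carrying the tests that the .gem omits):

    %check
    pushd .%{gem_instdir}
    # tests are not shipped in the .gem, so link in the spec/ directory
    # unpacked from the GitHub tarball (hypothetical Source1) in %prep
    ln -s %{_builddir}/foo-%{version}/spec spec
    rspec spec
    popd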
On Tue, Jan 07, 2020 at 02:06:20PM +0100, Fabio Valentini wrote:
Just to add my 2¢ here: I have experience with packaging stuff from many language ecosystems (ruby/gems, python/pypi, go, Java/maven) and with various build systems (autotools, meson, CMake, etc.). The packaging burden is *vastly* different depending on the ecosystem.
- python / pypi works great for %build and %install, but until testing with tox is automated in packaging macros, %check has to be specified manually since upstream projects do different things there. generate_buildrequires also works nicely here.
- ruby is weird, packaging gems is a bit of a chore, upstream has many knobs to fiddle with that make distro packaging hard (for example, not including test sources in .gem files seems to be a common practice), there's no canonical way of running test suites, and I think the %check section of all my rubygem packages is different and specifically tailored to the package ...
- go and rust are pretty easily automated because there aren't as many things upstreams can mess with; %build, %install and %check are almost always clean and easily automated with macros. generate_buildrequires also helps with rust packaging.
- Java / maven has good support with packaging tools and macros in Fedora, but if an upstream project deviates from the "standard way of doing things", even if only slightly, you might end up modifying maven XML project definitions with XPath queries. The horror.
For C / C++ / Vala, which don't have language-specific package managers:
- meson is really nice, manual intervention is almost never needed; and even when it is, patching meson.build files is pretty straightforward
- CMake is alright, even if it's hardly readable by humans; but patching CMakeLists.txt files gets ugly
- I hope autotools dies a fiery death, and soon
TL;DR: The packaging burden ranges from small or near non-existent (meson, python, go, rust) to being a real chore (ruby, Java, autotools). I don't know how nodejs packages compare, since I've been lucky and haven't had to deal with them yet.
Conclusion: Some things could and should be improved, but some of that will only happen if we cooperate with upstreams (for example, right now rubygems and Java/maven are just too wild to be tamed by any downstream automation IMO, unless an omniscient AGI is just around the corner and will do our packaging for us).
What I read here is: there are ecosystems for which we could automate a good chunk of things. This means that, as long as we don't degrade things for the other ecosystems, we would still be improving the overall situation. Improving 20% of our work is less ideal than 90%, but is still better than 0%.
The advantage of this diversity is that we should be able to improve things in steps, working through each ecosystem one by one. One disadvantage that will quickly show up, though, is: if you're using language X or Y you need to do things this way, otherwise you need to do things that way. I hear that our documentation is sometimes confusing to newcomers or to people who only do some packaging work every once in a while, and I fear that this wouldn't help with that problem. I'm not saying that we can't or shouldn't try to improve what/where we can, just that this is something to be aware of, acknowledge and try to mitigate.
Pierre
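As a concrete taste of the Java/maven point above, a sketch using the %pom_* helpers from javapackages-tools (the plugin and XPath are picked for illustration; real cases vary):

    %prep
    %autosetup
    # drop a test-only plugin the upstream build hardcodes
    %pom_remove_plugin :maven-enforcer-plugin
    # the XPath fallback, for anything without a dedicated helper macro
    %pom_xpath_remove "pom:build/pom:extensions"

When a shortcut macro fits, this stays readable; the moment you need raw XPath against a large pom.xml, the "horror" becomes clear.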
On Tue, Jan 07, 2020 at 03:18:16PM +0100, Pierre-Yves Chibon wrote:
I agree with pretty much everything you're saying, except for one thing: documentation *will* help. After all, we've always had language-specific packaging guidelines, nothing new here. Packaging of different ecosystems is inherently different.
Zbyszek
On Tue, Jan 07, 2020 at 02:28:46PM +0000, Zbigniew Jędrzejewski-Szmek wrote:
I agree with pretty much everything you're saying, except for one thing: documentation *will* help. After all, we've always had language-specific packaging guidelines, nothing new here. Packaging of different ecosystems is inherently different.
Sorry, I think my wording was confusing: in "I fear that this wouldn't help", "this" was not meant to refer to the documentation but rather to the idea of working on/improving/automating some ecosystems separately from the others.
I agree with you that documentation is essential; to be honest, it is the only thing I can think of at the moment that will help mitigate differences in workflow as we make changes.
Pierre
On Tuesday, 07 January 2020 at 14:06 +0100, Fabio Valentini wrote:
Conclusion: Some things could and should be improved
Yes, there are lots of shades of gray.
All recent package managers allow downloading stuff for use (or they'd have no users).
Some manage to build things. Some manage deps. A small set manages tests (usually, very poorly). And the associated deps. Very few manage installs. Even fewer manage upgrades.
The more they manage, the easier they are to automate and convert into rpm. The less they manage, the harder they are to do things with, with or without rpm.
Improving things requires continuing to improve and fix rpm (which, sadly, is ridiculously under-invested in), and then adding the corresponding features to language package managers when their community is willing to make use of them.
Because the only way to make mapping to rpm easier is to fix the feature holes in language package managers (that does not remove the need for something like rpm, because none of those handle mixed-language projects, and reality is full of those).
However, adding things when upstream has no intention to move from the stone age is a perfect waste of time, as shown by the lack of adoption of the Fedora maven fork. So that requires willingness from upstream.
And that's the real answer to "upstream does not want to deal with rpm": making things work better on our side, and feeding value back upstream.
There is no value in dumping rpm or investing in a different packaging tech because rpm is used as an alias for lots of things upstreams disagree with, most of which are not actually rpm the software, and most of which would not go away with a packaging tech switch.
It would be interesting to analyse all those things, not to plan an rpm replacement, but to actually fix the things upstreams are not happy about (and, a lot of the time, those won't involve rpm, and when they do involve rpm, fixing rpm will be easier than rewriting it from scratch).
But some of those are inherently unfixable. All of the people that do "open source" because that's the condition to earn their paycheck, but do not understand or are not interested in free software or Linux, won't work with us no matter how we costume ourselves.
There are lots of those nowadays. The Microsoft stranglehold has been broken and replaced by a Google/Apple/Facebook/Amazon/Microsoft oligopoly. People feel "free" to switch corporate masters; they do not feel the urge to make the commons work.
On Tue, Jan 07, 2020 at 03:39:41PM +0100, Nicolas Mailhot via devel wrote:
It would be interesting to analyse all those things, not to plan an rpm replacement, but to actually fix the things upstreams are not happy about (and, a lot of the time, those won't involve rpm, and when they do involve rpm, fixing rpm will be easier than rewriting it from scratch).
I can get behind this approach.
To make it work we need to look at the things where 1) upstreams are not happy and 2) we can provide value for both dev and ops.
On Tue, Jan 7, 2020, at 6:41 AM, Tom Hughes wrote:
I'd love to find a way to integrate the likes of gem, npm etc. directly into our packaging rather than us having to repackage everything by hand, but I just don't see any way of doing it without compromising what we do to the extent that we're not really doing anything useful at all and are just shoveling out whatever nonsense upstreams perpetrate without question.
Implicit in this is the idea that value should be captured at a secondary distribution layer. Implicit in this is the idea that distribution forks *need* to happen. But they don't.
In fact, everyone here can work upstream too! If e.g. someone upstream messes up licensing, the mindset shouldn't be "oh man those upstream developers are incompetent, let's patch it downstream".
Join upstream. Review *code* not spec files. Fix *code* not spec files. That's the most valuable thing for FOSS - not spec files.
If there's an upstream that isn't doing the right thing (consistently) - fork the upstream, don't fork it at the package level. That way, work can be shared across multiple distributions.
Even ignoring others, the Red Hat ecosystem today has 3 distributions - it's simply better to work upstream as much as possible, and avoid duplicating work across those 3 downstreams.
On Tue, Jan 7, 2020, at 9:08 AM, Colin Walters wrote:
Implicit in this is the idea that value should be captured at a secondary distribution layer. Implicit in this is the idea that distribution forks *need* to happen. But they don't.
In fact, everyone here can work upstream too! If e.g. someone upstream messes up licensing, the mindset shouldn't be "oh man those upstream developers are incompetent, let's patch it downstream".
The current problem with that is that Tom and other packagers would need to join a couple of hundred upstreams and do all the emotional, social work to be considered someone they will listen to and trust to make the changes needed to get things into a distribution. [And yes, it takes time and energy to do that.. just going by how much it takes for our own development groups to take things in from random 'strangers'.] That takes up a lot of time, which then cuts down the time the person has to work on Fedora.
The issue is always going to be scalability, and the fact that each upstream is usually a community as much as Fedora is, with its own joining/acceptance/social time needed to become active in it. We currently have about 400 active packagers (depending on the definition of active) and about 22000 src.rpms, each usually with a separate upstream community. We either need to grow our active packagers to a much larger number, or shrink our distribution greatly, or some combination of the two. There would also need to be some duplication of people per community so we don't end up with a lot of SPOFs. [Well, we'd have to drop being able to boot, then.. pjones left and no one can deal with the shim community except him... now extend that to every src.rpm package.]
Now we could also say we outline which packages we really, really care about and make sure we have upstream community liaisons for those.. but that will also need to scale out, because every dependency of those packages would also need to be covered, and that set grows every time Mozilla, LibreOffice, or the hot-web-app-of-the-week grows a new requirement.
I know I sound like a long list of reasons why this can't be done.. but deep down I agree with the sentiment. I just don't see that we can actually accomplish it without giant changes.
"Colin Walters" walters@verbum.org writes:
Implicit in this is the idea that value should be captured at a secondary distribution layer. Implicit in this is the idea that distribution forks *need* to happen. But they don't.
In fact, everyone here can work upstream too! If e.g. someone upstream messes up licensing, the mindset shouldn't be "oh man those upstream developers are incompetent, let's patch it downstream".
Join upstream. Review *code* not spec files. Fix *code* not spec files. That's the most valuable thing for FOSS - not spec files.
If there's an upstream that isn't doing the right thing (consistently) - fork the upstream, don't fork it at the package level. That way, work can be shared across multiple distributions.
This is a nice sentiment that does not reflect practice for me. I don't know that I'm a typical case, but I find it unlikely that I'm wildly divergent. I frequently patch my packages downstream, generally for three reasons:
1. Bugfix or feature I (or someone else) contributed upstream that we want sooner than the next upstream release. These of course are shorter lived, but relatively frequent. Note also that upstream involvement has caused the number of these to increase, not decrease.
2. Updating to a new, pristine upstream release would break a dependent package that isn't ready for the change. This is rare and temporary, but happens about once per year.
3. Fedora diverges from the rest of the world in some weird way that upstream isn't interested in supporting. Debuginfo generation, SELinux quirks, and systemd integration are examples here, but I've got plenty of others. These don't usually go away quickly, if they go at all.
To twist your argument: arguably I *have* forked upstream. I do have a (public! [1]) git repo with my downstream changes - but even if I didn't explicitly keep one, it's not too hard to generate one from dist-git. I happen to be a Red Hat employee, so that's that distro taken care of, and I'm in good contact with the Debian maintainers as well. (Their workflow is the same - Debian's dist-git analogue works differently than ours of course, but their patches are for the same reasons even if they're less frequent.)
Even ignoring others, the Red Hat ecosystem today has 3 distributions - it's simply better to work upstream as much as possible, and avoid duplicating work across those 3 downstreams.
This is correct only for those who work at Red Hat or are involved in CentOS, neither of which are requirements for Fedora involvement. I don't disagree with the sentiment that we should make the entire ecosystem better where possible, but it's very close to an argument we've seen too much of recently that we should do $foo because it's good for Red Hat.
Thanks, --Robbie
On Tue, Jan 07, 2020 at 09:08:20AM -0500, Colin Walters wrote:
Join upstream. Review *code* not spec files. Fix *code* not spec files. That's the most valuable thing for FOSS - not spec files.
This is pretty much how we've been working with the upstream OCaml packaging community (Debian even more than us). Get them to improve things to make downstream packaging easier. It's been a very slow business, but things have improved a lot in the last few years (yay - they now support destdir installs!)
Rich.
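Concretely, destdir support means the stock staged install now just works. A minimal sketch, assuming a Makefile-based project:

    %install
    # %make_install expands to roughly: make install DESTDIR=%{buildroot}
    %make_install

Before that, %install sections often had to copy files into the buildroot by hand.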
The challenge with upstreams is when they lack activity for years, and contributing is very difficult for users who lack coding knowledge and proper guidance. For example, attempting to improve, say, CellWriter (sorely missed due to the lack of a port to Wayland compositors) and howdy, a Windows Hello-like facial recognition tool for convertible laptops, turned out to be too much for me as a graphic designer, and finding someone who knows how to code turned out to be more complex than anticipated.
The only option so far is to actively test and give input.
On 1/12/20 9:38 AM, Luya Tshimbalanga wrote:
Deepin Linux seems to have face recognition login (or at least support for it), but I am still searching for the implementation. The two PAM-based authentication projects (Howdy and pam-facial-auth):
https://github.com/devinaconley/pam-facial-auth
https://github.com/boltgolt/howdy
seem to suggest they are not intended for situations where high security is required. Tests on manufacturer-developed authentication also suggest it is not so secure:
https://www.blackhat.com/presentations/bh-dc-09/Nguyen/BlackHat-DC-09-Nguyen...
However, a number of banks and KFC do use this in China, so maybe a good open source implementation is simply missing (something other than a trial version). Most of these rely on machine learning algorithms, so maybe this is something the Machine Learning SIG might be interested in.
Perhaps collaboration across distributions on upstream projects would be good, as done for example with 389 Directory Server by openSUSE and Red Hat. In cases with no direct funding, the Apache Foundation is helpful for large projects, but it seems unlikely that all projects are suitable for it.
On 2020-01-13 12:56 a.m., Benson Muite wrote:
Thank you for the PDF. However, the presentation is slightly outdated, given the listed hardware dates from 2008. Some modern laptops are equipped with an IR camera (a Windows Hello-type device) which could be suitable for iris recognition, similar to devices like the Samsung Galaxy S9.
Speaking about howdy, I packaged it on COPR for testing purposes and am looking for improvements. I am aware of fprintd, but it is beyond my scope.
On 1/14/20 9:00 AM, Luya Tshimbalanga wrote:
Thanks for the feedback. Not having to remember many passwords is very useful.
Speaking about howdy, I packaged it on COPR for testing purposes and am looking for improvements.
Great - this may be of interest:
https://github.com/boltgolt/howdy/issues/233
My initial worry is more about the security and effectiveness of the algorithms used in howdy, rather than correct packaging and Linux permissions. Internally, Howdy uses convolutional neural networks (CNN - http://dlib.net/cnn_face_detector.py.html) and OpenCV to find and match faces. It would be nice if it had been subjected to stringent tests such as those done by NIST:
https://pages.nist.gov/frvt/html/frvt1N.html
see for example:
https://www.necam.com/AdvancedRecognitionSystems/NISTValidation/FingerprintF...
I am aware of fprintd, but it is beyond my scope.
This is already packaged and has a wiki page:
https://koji.fedoraproject.org/koji/packageinfo?packageID=7228
https://fedoraproject.org/wiki/Features/Fingerprint
The source code of fprintd is at https://gitlab.freedesktop.org/libfprint/fprintd
For fingerprints, there also seem to be standards:
https://www.nist.gov/programs-projects/fingerprint-recognition
and a NIST implementation:
https://www.nist.gov/services-resources/software/nist-biometric-image-softwa...
Not sure if fprintd matches these standards, or if there is something significantly better.
For biometric authentication applications such as fprintd and howdy, maybe some kind of quality assurance is required, in particular for hardware specifications and algorithm effectiveness, in addition to the normal packaging procedure.
On 1/14/20 10:34 AM, Benson Muite wrote:
Thanks for the feedback. Not having to remember many passwords is very useful.
Maybe I am wrong about faces/fingerprints as passwords:
https://www.openwall.com/lists/oss-security/2019/05/08/5
On Tue, 14 Jan 2020 at 03:05, Benson Muite benson_muite@emailplus.org wrote:
Maybe I am wrong about faces/fingerprints as passwords:
The issue is that both are OK second factors of authentication, but not primary ones. The number of points on the finger or face that need to be tracked to make it a strong factor is enormous, and then you have to work out ways to deal with noise. A lot of the built-in ones only track a few points or don't worry about noise, to the point that you can put a person's near relative up to the camera and it will say yep, that's the person. [Or you can simply print a 3D mask or finger and put it up and it will do the same.]
The problem is that most people don't want to do 2 or 3 things.. they want 1 thing which won't take a lot of work. So we constantly try to replace the hard thing, which is the strongest security, with something simpler. It is like the users who would set their password to 'password' because they had a card which gave one-time passwords, and were shocked that it was easy to either guess the key or MITM it and get the key. The key wasn't ever meant to be the primary method of protection.. it is only meant to help assure that the person who has the password is probably the person who should have it. Fingerprint and facial recognition are only useful in helping assure that you are the right person at the keyboard. Relying on them as the only method is going to lead to an easily hacked system.
On Tue, Jan 14, 2020 at 3:04 AM Benson Muite benson_muite@emailplus.org wrote:
Maybe I am wrong about faces/fingerprints as passwords:
There was also the infamous "gummy fingerprint" article from 2002:
https://cryptome.org/gummy.htm
And the MythBusters test of faking fingerprints: there, printing a copy of a fingerprint, putting it on your fingertip, and moistening it by licking it defeated even the best fingerprint scanners.
https://www.youtube.com/watch?v=3Hji3kp_i9k
While playing with the hardware can be fun, I'd not consider them worth delaying a Fedora release for.
On 2020-01-13 11:34 p.m., Benson Muite wrote:
Speaking about howdy, I packaged it on COPR for testing purposes and am looking for improvements.
Great - this may be of interest:
I will take a look. Note that I forked the repo to improve the upstream code and suggest changes.
My initial worry is more about the security and effectiveness of the algorithms used in howdy, rather than correct packaging and Linux permissions. Internally, Howdy uses convolutional neural networks (CNN - http://dlib.net/cnn_face_detector.py.html) and OpenCV to find and match faces. It would be nice if it had been subjected to stringent tests such as those done by NIST.
howdy uses Histogram of Oriented Gradients by default instead of CNN, according to its config.ini:
https://github.com/boltgolt/howdy/blob/master/src/config.ini
On Tue, 2020-01-07 at 10:36 +0100, Vít Ondruch wrote:
On 06. 01. 20 at 19:08, Nicolas Mailhot via devel wrote:
On 2020-01-06 19:05, Nicolas Mailhot wrote:
Handling those checks is where the packaging toil is (that is, as long as Fedora is a deployment project). It is not something the packaging format makes harder.
However, because our packaging format streamlines those checks, and forces us to apply them, it is blamed by devs for the impedance mismatch between dev and deployment requirements.
But, this mismatch is not caused by our packaging format. It is caused by devs taking shortcuts because their language packaging format lets them.
Well said Nicolas.
Embracing "language-native packaging" and "git repos" is giving up on what Fedora maintainers have always done, and that is kicking all the upstreams forward, because we force them to keep updating their dependencies (or to maintain compatibility with old versions of dependencies). Once we embrace "git repos" etc., we will lose our soul IMO. There won't be any collaboration between upstream projects, which was cultivated by distribution maintainers. Upstreams will sit in their silos and bundle everything.
Just recently I read a discussion (IIRC on Hacker News) about an article about yet another mess due to NPM (I think this time it was some licensing mess for a change, not more malware) where someone suggested a radical new idea: "Let's have a crowd-sourced set of packages that are known to have sane licenses, don't contain malware/CVEs and can work together!". Yeah, like, say, a Linux distro such as Fedora?
Basically, it seems to me that the language-specific package management systems are already creaking under load and displaying critical issues almost on a daily basis. Issues people with a distro packaging background pointed out long ago, only to be ignored.
So I think it really makes much more sense to continue with all the nice improvements we have been doing in RPM packaging, rather than throwing it all away and switching to a fundamentally inferior technology.
Also, just today I had a discussion about whether Ruby packages should be more Fedora-tailored or more upstream-like, and there is no right way that could reasonably satisfy both worlds.
E.g. if an upstream package has Windows-specific dependencies, it is kind of natural to strip them on Fedora. OTOH, that possibly breaks dependency resolution on other platforms if the project was created using Fedora packages. This is unfortunately a reason for devs to take shortcuts, probably going the upstream way, because, if nothing else, it is typically better documented.
Vít
On Tue, Jan 7, 2020 at 7:04 AM Martin Kolman mkolman@redhat.com wrote:
On Tue, 2020-01-07 at 10:36 +0100, Vít Ondruch wrote:
Just recently I read a discussion (IIRC on Hacker News) about an article about yet another mess due to NPM (I think this time it was some licensing mess for a change, not more malware) where someone suggested a radical new idea: "Let's have a crowd-sourced set of packages that are known to have sane licenses, don't contain malware/CVEs and can work together!". Yeah, like, say, a Linux distro such as Fedora?
There's some interesting cognitive dissonance here. In HN threads where I've seen this, people seem to be naturally discovering that what they want is a curation point for these modules, but when someone points out that the Linux distribution essentially functions in that role, there's some recoil. They say that they don't want that.
I think the underlying problem here is that we don't sell our value proposition well to these people. Most people sadly reference Debian as their idea of a Linux distribution. While Debian certainly provides curation and stability, it is often too slow for developers who want to leverage new features and capabilities for their software. This is something we need to figure out how to market better for the Fedora desktop, server, and cloud variants. We provide much of the same benefits that Debian does, except we also provide fresher stacks and new features more quickly for people to leverage.
"Friends. Features. Freedom. First. Fedora"
On 07. 01. 20 13:17, Neal Gompa wrote:
I think the underlying problem here is that we don't sell our value proposition well to these people. Most people sadly reference Debian as their idea of a Linux distribution. While Debian certainly provides curation and stability, it is often too slow for developers who want to leverage new features and capabilities for their software. This is something we need to figure out how to market better for the Fedora desktop, server, and cloud variants. We provide much of the same benefits that Debian does, except we also provide fresher stacks and new features more quickly for people to leverage.
For me, the ultimate success would be if upstream projects actually used Fedora-family distros in their CI testing. And I don't mean that they would use Copr or packit to build RPM packages, or that they would deploy their own Jenkins on CentOS; I mean that they would use something as easy as Travis CI, but instead of ancient Ubuntu, they could choose from a variety of Fedora systems.
For example: Today, an upstream maintainer expressed dissatisfaction about Python 3.9 missing on Travis CI:
https://github.com/benjaminp/six/issues/317#issuecomment-571408737
It would be so cool to be able to say: put "distro: fedora" in your CI config to get Python 3.9, because in Fedora, we have already had it for a month+.
As much as you might never have expected me to say this: it would be even better with modularity, provided we actually offer alternate versions of most of our developer-facing things. Instead of compiling my own stuff or downloading suspicious precompiled tarballs on Ubuntu/Travis, I could use Fedora and, in the CI config, list the streams of my database, web servers, etc. and use them to expand my testing matrix.
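To make that concrete, a purely hypothetical sketch of such a CI config - note that neither the "distro" nor the "modules" key exists in any CI service today; both are invented here only to illustrate the idea:

    # Hypothetical .travis.yml -- the "distro" and "modules" keys do not
    # exist today; they only illustrate the idea described above.
    distro: fedora            # use Fedora instead of the default Ubuntu image
    release: 31               # any currently supported Fedora release
    modules:                  # modularity streams to enable in the test env
      - postgresql:12
      - nginx:mainline
    language: python
    python:
      - "3.9"                 # already packaged in Fedora
    script:
      - python3 -m pytest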
Having a strong presence on upstream CIs would help us gain visibility. Later, people might choose Fedora as their base container platform to match their CI environment, or even consider it for their workstations.
Unfortunately, I don't see this happening without RH partnering with a major CI provider, or without significant investment in providing our own public CI (sans RPM) - however, we are currently discontinuing services, not adding new ones.
On Tue, 2020-01-07 at 13:50 +0100, Miro Hrončok wrote:
[...]
For example: Today, an upstream maintainer expressed dissatisfaction about Python 3.9 missing on Travis CI:
https://github.com/benjaminp/six/issues/317#issuecomment-571408737
In this case it seems it's mainly a lack of resources on the Travis side - they have been lagging with updates even for their single Ubuntu-based environment for years.
It would be so cool to be able to say: put "distro: fedora" in your CI config to get Python 3.9, because in Fedora, we have already had it for a month+.
This is actually possible, if a bit hacky, as you can launch containers in the Travis environment.
So you can checkout a Fedora container and then run the tests inside it: https://github.com/weldr/lorax/blob/master/.travis.yml#L10 https://github.com/weldr/lorax/blob/master/Makefile#L130
Unfortunately you lose many of the simple configuration options Travis provides, but at least you don't have to suffer the quirks of the outdated default Ubuntu.
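For reference, the approach boils down to roughly the following (a minimal, untested sketch assuming a pytest-based test suite; the real lorax config linked above does more):

    # Minimal sketch of the container workaround described above.
    # Travis still boots its Ubuntu image, but the tests run in Fedora.
    language: minimal
    services:
      - docker
    script:
      - docker run --rm -v "$PWD:/src" -w /src fedora:31
        sh -c "dnf -y install python3-pytest && python3 -m pytest"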
[...]
Unfortunately, I don't see this happening without RH partnering with a major CI provider, or without significant investment in providing our own public CI (sans RPM) - however, we are currently discontinuing services, not adding new ones.
Indeed, an easy-to-use, Fedora/CentOS-based upstream CI environment is sorely needed.
BTW, with CentOS Stream, it should now be possible to test in an environment reasonably similar to the next upcoming RHEL release, which is something that was missing before.
We just need an environment that can be used easily - just like Travis, but Fedora/CentOS-based and up to date.
On Tue, Jan 7, 2020 at 8:34 AM Martin Kolman mkolman@redhat.com wrote:
[...]
We just need an environment that can be used easily - just like Travis, but Fedora/CentOS-based and up to date.
The Travis CI software itself is open source[1]. However, I don't think Travis is very easy to set up. The architecture looks like it involves a mixture of OpenStack and Kubernetes...
I wonder if an enterprising developer could contribute Fedora templates to Travis CI... It looks like it's a spaghetti of Ruby, Packer, and maybe some Terraform HCL to define build environments?
[1]: https://github.com/travis-ci
-- 真実はいつも一つ!/ Always, there's only one truth!
On 07. 01. 20 14:32, Martin Kolman wrote:
[...]
In this case it seems it's mainly a lack of resources on the Travis side - they have been lagging with updates even for their single Ubuntu-based environment for years.
Yes, because they create their own tarballs instead of leveraging the distro - the distro of their choice offers no benefit for this. Fedora could.
It would be so cool to be able to say: put "distro: fedora" in your CI config to get Python 3.9, because in Fedora, we have already had it for a month+.
This is actually possible, if a bit hacky, as you can launch containers in the Travis environment.
So you can checkout a Fedora container and then run the tests inside it: https://github.com/weldr/lorax/blob/master/.travis.yml#L10 https://github.com/weldr/lorax/blob/master/Makefile#L130
Unfortunately you lose many of the simple configuration options Travis provides, but at least you don't have to suffer the quirks of the outdated default Ubuntu.
Sure, we do that as well:
https://github.com/fedora-python/taskotron-python-versions/blob/develop/.tra... https://github.com/fedora-python/taskotron-python-versions/blob/develop/Dock...
(In that particular example we are running mock (the RPM one, not the Python mocking library) in pytest in tox in Docker with Fedora on Travis with Ubuntu, which itself probably runs in some kind of container.)
However, it is far from easy and far from fast.
[...]
BTW, with CentOS Stream, it should now be possible to test in an environment reasonably similar to the next upcoming RHEL release, which is something that was missing before.
Yes!
We just need an environment that can be used easily - just like Travis, but Fedora/CentOS-based and up to date.
For example, for CPython upstream, we manage our own test servers with RHEL and Fedora for this. Instead, it would be nice if upstream could just pick some.
On Tue, 7 Jan 2020 at 13:58, Miro Hrončok mhroncok@redhat.com wrote:
[...]
For me, the ultimate success would be if upstream projects actually used Fedora-family distros in their CI testing. And I don't mean that they would use Copr or packit to build RPM packages, or that they would deploy their own Jenkins on CentOS; I mean that they would use something as easy as Travis CI, but instead of an ancient Ubuntu, they could choose from a variety of Fedora systems.
I cannot agree more. The same applies to GitHub Actions. COPR and packit are great, but at the end of the day, visibility in all these other widely used services is what matters.
On Tue, Jan 7, 2020 at 1:51 PM Miro Hrončok mhroncok@redhat.com wrote:
For me, the ultimate success would be if upstream projects actually used Fedora-family distros in their CI testing. And I don't mean that they would use Copr or packit to build RPM packages, or that they would deploy their own Jenkins on CentOS; I mean that they would use something as easy as Travis CI, but instead of an ancient Ubuntu, they could choose from a variety of Fedora systems.
Yup, exactly! In packit we're doing it the hard way, since we force them to embrace spec files and the RPM packaging format. Personally, I wouldn't be too opposed to an idea where upstream projects could start without RPM packaging and just run their tests on Fedora, and then slowly ramp up to having an RPM package for their project, all the way up to automated releases into Rawhide.
Unfortunately, I don't see this happening without RH partnering with a major CI provider, or without significant investment in providing our own public CI (sans RPM) - however, we are currently discontinuing services, not adding new ones.
We actually had a discussion in December, before the break, about whether we should enable a workflow in packit where you would not need a spec file and would be able to test your software directly from the git repo - the same thing Travis or Circle does. As far as I recall, the idea is still on the table.
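Purely as an illustration, such a spec-less config might look something like this - packit requires a spec file today, so every key below is hypothetical:

    # Hypothetical .packit.yaml for the spec-less workflow discussed
    # above; packit requires a spec file today, so this is illustrative.
    jobs:
      - job: tests              # run the upstream test suite
        trigger: pull_request   # on every pull request
        targets:
          - fedora-rawhide      # inside Fedora environments,
          - fedora-31           # with no spec file or SRPM involved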
Tomas
On Tue, 7 Jan 2020 at 13:28, Neal Gompa ngompa13@gmail.com wrote:
[...]
There's some interesting cognitive dissonance here. In HN threads where I've seen this, people seem to be naturally discovering that what they want is a curation point for these modules, but when someone points out that the Linux distribution essentially functions in that role, there's some recoil. They say that they don't want that.
Language-specific packaging formats share one thing in common: they are designed to be installed in the user's home, or equivalently, in a virtual environment without root permissions. I'm guessing here, but the recoil you reference probably comes from the fact that distro-wide packaging systems require admin privileges.
If that's true, then I think we should further promote Fedora toolbox.
Iñaki
On 07. 01. 20 at 14:57, Iñaki Ucar wrote:
[...]
Language-specific packaging formats share one thing in common: they are designed to be installed in the user's home
Definitely not this one, unfortunately.
Vít
, or equivalently, in a virtual environment without root permissions. I'm guessing here, but the recoil you reference probably comes from the fact that distro-wide packaging systems require admin privileges.
If that's true, then I think we should further promote Fedora toolbox.
Iñaki
On Mon, Jan 06, 2020 at 12:19:30PM -0500, Matthew Miller wrote:
In support of that, I'd like to also have that page steer people into tooling for creating new spins —- and I'd like to see us invest in and rebuild the spin creation processes. (Particularly, I'd like spin releases to be decoupled from the main OS release, and for those to be self-service by their SIGs with minimal rel-eng involvement needed.)
One of the primary goals of the osbuild project is to be able to build different releases from the same host system:
https://github.com/osbuild/osbuild https://github.com/osbuild/osbuild-composer
On Mon, Jan 06, 2020 at 04:38:51PM -0800, Brian C. Lane wrote:
[...]
One of the primary goals of the osbuild project is to be able to build different releases from the same host system: https://github.com/osbuild/osbuild https://github.com/osbuild/osbuild-composer
That sounds like at least a component of what I'm interested in!
On Mon, Jan 6, 2020 at 7:43 PM Matthew Miller mattdm@fedoraproject.org wrote:
[...]
One of the primary goals of the osbuild project is to be able to build different releases from the same host system: https://github.com/osbuild/osbuild https://github.com/osbuild/osbuild-composer
That sounds like at least a component of what I'm interested in!
While probably not *quite* as flashy and new, this is something I do regularly with KIWI[1] and livecd-tools[2].
For the past few years, I've been working upstream in the KIWI project to make Fedora a first-class citizen, and today that is pretty much the case. It's actually quite easy to produce a wide variety of outputs (ISOs, disk images for OEM preload, VM disks, container images, etc.). And of course, I'm the maintainer of the livecd-tools suite that is still partially used to build our Fedora images.
Unfortunately, I have no flashy frontends to go with either. The main web frontend for KIWI is OBS[3] and the revisor[4] project that was the frontend for livecd-creator has been dead for a while...
I'd personally like to see Koji democratized more, like COPR, where people can have projects with all their inputs and have builds/integrations set up. Of course, my holy grail would be for everything to come together so that we'd have one unified system to serve all our needs...
At the minimum, democratizing Koji would make it easier for teams to build their own stuff using any of the tools supported by Koji... Then it's a question of documenting how to make custom media and describing things like how to do branding.
[1]: https://github.com/OSInside/kiwi [2]: https://github.com/livecd-tools/livecd-tools [3]: http://openbuildservice.org/ [4]: https://pagure.io/revisor
-- 真実はいつも一つ!/ Always, there's only one truth!
On Mon, Jan 06, 2020 at 08:27:41PM -0500, Neal Gompa wrote:
At the minimum, democratizing Koji would make it easier for Teams to build their own stuff using any of the tools supported by Koji... Then it's a question of documentation of how to make custom media and describing things like how to do branding.
Yes, I think this democratization is key. Having an easy, straightforward (and well-documented!) interface is more important than having a flashy one.
On 06. 01. 20 at 18:19, Matthew Miller wrote:
We're not adding meaningful end-user value by manually repackaging these in our own format. We _do_ add value by vetting licenses and insuring availability and consistency, but I think we can find better ways to do that.
COPR can play an interesting role here as an outer ring for automated packaging. E.g. there is an interesting project, https://copr.fedorainfracloud.org/coprs/iucar/cran/, run by @iucar, which brings all R packages from CRAN to Fedora.
On Tue, 7 Jan 2020 at 10:28, Miroslav Suchý msuchy@redhat.com wrote:
[...]
COPR can play an interesting role here as an outer ring for automated packaging.
I agree.
E.g. there is an interesting project, https://copr.fedorainfracloud.org/coprs/iucar/cran/, run by @iucar, which brings all R packages from CRAN to Fedora.
Not quite yet, but we are getting there. :) COPR is growing fast, and I'm sure these kinds of projects will be viable soon.
* Matthew Miller:
In support of that, I'd like to also have that page steer people into tooling for creating new spins —- and I'd like to see us invest in and rebuild the spin creation processes. (Particularly, I'd like spin releases to be decoupled from the main OS release, and for those to be self-service by their SIGs with minimal rel-eng involvement needed.)
Do you see this covering spins which rebuild mainline Fedora packages (possibly from the same SRPMs)?
Thanks, Florian
On Tue, Jan 07, 2020 at 01:13:02PM +0100, Florian Weimer wrote:
[...]
Do you see this covering spins which rebuild mainline Fedora packages (possibly from the same SRPMs)?
Possibly? I would expect that in this case they would use modularity to make variant streams.
* Matthew Miller:
[...]
Possibly? I would expect that in this case they would use modularity to make variant streams.
I doubt that will currently work for glibc and GCC. 8-)
It's also not clear to me how you would rebuild modules. I guess you would have to rename them? That would certainly be inconvenient.
Thanks, Florian
On Mon, 6 Jan 2020 at 18:28, Matthew Miller mattdm@fedoraproject.org wrote:
Hi everyone! Since it's a new year and a new decade [*], it seems like a good time to look forward and talk about what we want the Fedora Project to be in the next five and even ten years. How do we take the awesome foundation we have now and build and grow and make something that continues to thrive and be useful, valuable, and fun?
[...]
Those are my thoughts. What other challenges and opportunities do you see, and what would you like us to focus on?
For me, the main challenge Fedora faces is **positioning**.
Let me explain: (I don't have numbers, but) in my (limited) experience, when seasoned sysadmins need to launch a new system, they usually think "Debian", as something reliable; when research engineers, seasoned or not-very-seasoned in Linux (I know this category better, since I'm a researcher), need to set up a system for some demo or experiment, they mostly think "Ubuntu" (yes, I know...); when we see a new exciting service (such as Travis CI and the like) coming out, it usually supports Ubuntu; and so on and so forth, and I'm not even talking about the desktop use case.
So that's the challenge for Fedora: getting all those people to consider Fedora as the first option for their use cases.
-- Iñaki Úcar
On Tue, Jan 07, 2020 at 03:22:45PM +0100, Iñaki Ucar wrote:
For me, the main challenge Fedora faces is **positioning**.
[...]
So that's the challenge for Fedora: getting all those people to consider Fedora as the first option for their use cases.
I agree that's a challenge. Any ideas for how to address it and change these perceptions?
On Tue, 7 Jan 2020 at 16:38, Matthew Miller mattdm@fedoraproject.org wrote:
[...]
I agree that's a challenge. Any ideas for how to address it and change these perceptions?
I'm far from having a satisfactory answer to that, but I see two fronts here. First, marketing. How did Ubuntu manage to become so popular among less-experienced Linux users? I'm not sure, but I suspect that good marketing has something to do with it. Second, exposure. If someone wants to configure a Travis CI instance, or a Google Cloud instance for some data science pipeline, etc., and Fedora is there among the options available, then Fedora will automatically come to mind as an option for the next project. Of course, that's not under our direct control, but if we know the requirements for such third-party services, we can build specially tailored spins and try to promote them in those communities/projects/enterprises at all levels. So 1) staying on the cutting edge, 2) making it as easy as possible to choose Fedora over other options, and 3) marketing and promotion may be a good recipe.
On Tuesday, 7 January 2020 at 17:14 +0100, Iñaki Ucar wrote:
I'm far from having a satisfactory answer to that, but I see two fronts here. First, marketing. How did Ubuntu manage to become so popular among less-experienced Linux users? I'm not sure, but I suspect that good marketing has something to do with it.
They had good marketing in the form of a billionaire publicly showering cash around "in the public interest". The press (especially the non-technical press) loves this kind of story. Unfortunately, it's not something cheap or easy to replicate.
On Tue, 2020-01-07 at 18:20 +0100, Nicolas Mailhot via devel wrote:
[...]
They had good marketing in the form of a billionaire publicly showering cash around "in the public interest". The press (especially the non-technical press) loves this kind of story. Unfortunately, it's not something cheap or easy to replicate.
If anyone has a handy generous multi-millionaire up their sleeve, please call Matt. :)
On Tue, 2020-01-07 at 11:37 -0600, Joe Doss wrote:
On 1/7/20 11:33 AM, Adam Williamson wrote:
If anyone has a handy generous multi-millionaire up their sleeve, please call Matt. :)
*coughs* Red Hat...
I *did* say "generous"
It would be in Red Hat's own best interest to promote the Fedora project more, though. Isn't Fedora supposed to be the upstream/testing ground for RHEL releases? What's the best way to learn and get familiar with a Red Hat-based environment? It's Fedora, although I do know RHEL offers free developer licenses, and CentOS is always there as well.
On 1/7/20 12:39 PM, Adam Williamson wrote:
[...]
On Tue, Jan 07, 2020 at 11:37:28AM -0600, Joe Doss wrote:
If anyone has a handy generous multi-millionaire up their sleeve, please call Matt. :)
*coughs* Red Hat...
Red Hat *does* contribute millions of dollars to Fedora annually in time, hardware, and of course literal money.
Disclaimer: the below is my view and opinions and I'm not speaking for Red Hat officially. I'm definitely over here on the open source side not the business side. That said:
Red Hat has also always invested its marketing dollars in _product_; the sponsorship of Fedora is _mostly_ from an engineering side. I'd *like* to get more for these wider efforts, but in a very real way that Red Hat investment is like the investment of anyone voluntarily contributing. We each focus on the things that we care about personally. Red Hat puts some money towards community health and growth (and funds the FCAIC position to support that), but the main interest is in Fedora as a good RHEL upstream from the RHEL engineering part of the company.
Do I think we could use money from Red Hat for marketing Fedora in a way that would ultimately benefit the company? Yes, yes I do. But this competes with, say, investment needed to close deals with large telecom providers or international banks, and because Red Hat is an enterprise product company and likes near-term and *predictable* long-term return on investment... well, here we are, doing the best we can with what we have.
Or, tl;dr: if we want to be successful as a community, we can't count on Red Hat for everything.
On Tue, 7 Jan 2020 at 19:03, Matthew Miller mattdm@fedoraproject.org wrote:
Red Hat has also always invested its marketing dollars in _product_; the sponsorship of Fedora is _mostly_ from an engineering side. I'd *like* to get more for these wider efforts, but in a very real way that Red Hat investment is like the investment of anyone voluntarily contributing. We each focus on the things that we care about personally. Red Hat puts some money towards community health and growth (and funds the FCAIC position to support that), but the main interest is in Fedora as a good RHEL upstream from the RHEL engineering part of the company.
To be known and trustworthy means more users, which means a bigger community, which means more contributors, which results in a better upstream for RHEL. :)
On Tue, Jan 07, 2020 at 09:33:55AM -0800, Adam Williamson wrote:
They had good marketing in the form of a billionaire publicly showering cash around "in the public interest". The press (especially the non-technical press) loves this kind of story. Unfortunately, it's not something cheap or easy to replicate.
If anyone has a handy generous multi-millionaire up their sleeve, please call Matt. :)
True! Phone lines are standing by!
I do think that we can do some things to make a difference short of that. But it _is_ an uphill, underfunded project.
On Tue, Jan 7, 2020, 18:21 Nicolas Mailhot via devel < devel@lists.fedoraproject.org> wrote:
[...]
They had good marketing in the form of a billionaire publicly showering cash around "in the public interest". The press (especially the non-technical press) loves this kind of story. Unfortunately, it's not something cheap or easy to replicate.
In my opinion it is more about storytelling than actual marketing involving a big pile of cash.
What are the stories we can share about Fedora? What kind of cool stuff does it allow you to do?
There is very little content (blog posts, articles, tutorials, videos, etc.) based on Fedora. For example, we now have modularity, and I have yet to read or watch someone showing the cool stuff they did with it.
I think the project needs different stories to tell, for example "How Fedora can help in a DevSecOps pipeline" or "Why should I run my Python microservice on Fedora", etc.
On Tuesday, 7 January 2020 at 18:37 +0100, Clement Verna wrote:
On Tue, Jan 7, 2020, 18:21 Nicolas Mailhot via devel < devel@lists.fedoraproject.org> wrote:
[...]
In my opinion it is more about storytelling than actual marketing involving a big pile of cash.
Yes, it worked because it was a press-friendly “fairy tale” story, not because of the cash spent on marketing (or because of the quality of the marketed product).
IBM could waste 10 times the money on marketing to less effect; "Big Blue spending loads of cash" is not a coverage-worthy story.
On Tue, Jan 07, 2020 at 06:48:05PM +0100, Nicolas Mailhot via devel wrote:
IBM could waste 10 times the money on marketing to less effect; "Big Blue spending loads of cash" is not a coverage-worthy story.
Although to be clear if anyone from IBM is reading: _we'll take it_. :)
On Tue, Jan 7, 2020 at 12:51 PM Nicolas Mailhot via devel devel@lists.fedoraproject.org wrote:
Yes, it worked because it was a press-friendly “fairy tale” story, not because of the cash spent on marketing (or because of the quality of the marketed product).
It's both, though. Having a good story is part of the answer; funding the telling of that story is the other part. It doesn't necessarily have to be a lot, either. Imagine if we had one full-time marketing person who could work with upstreams to get their demos using Fedora. That's momentum we could build on. Like other parts of the project, our marketing efforts have ebbed and flowed, but over the last few years in particular (despite the hard work of x3mboy, bt0dotninja, and other volunteers), we haven't had a consistent marketing effort.
I think, and this is my personal opinion, that Ubuntu is so popular because it is easy to use for everyone. You don't need much technical knowledge to use Ubuntu for most things that a non-technical user needs, and it looks good.
Every time I try to use Fedora the same way, I always end up in the terminal for various reasons, either because of bugs in some software or to debug something that simply doesn't work.
I tried an experiment on my desktop computer: I played with it like a regular user (using the GUI for everything and doing things like installing new software, watching movies, playing games, etc.). It worked for some time, but I always encountered something that just broke, and if you google it, in most cases there is no way to fix it without using the terminal and having some technical knowledge.
The same goes for the guides. There are plenty of guides for Ubuntu with screenshots, so it's easy for users to just follow them. For Fedora, we have plenty of guides that contain only the commands you need to run, and I know plenty of users who just don't know what a command means or where they should type it.
Michal
[...]
I think, and this is my personal opinion, that Ubuntu is so popular because it is easy to use for everyone. You don't need much technical knowledge to use Ubuntu for most things that a non-technical user needs, and it looks good.
Every time I try to use Fedora the same way, I always end up in the terminal for various reasons, either because of bugs in some software or to debug something that simply doesn't work.
I tried an experiment on my desktop computer: I played with it like a regular user (using the GUI for everything and doing things like installing new software, watching movies, playing games, etc.). It worked for some time, but I always encountered something that just broke, and if you google it, in most cases there is no way to fix it without using the terminal and having some technical knowledge.
The same goes for the guides. There are plenty of guides for Ubuntu with screenshots, so it's easy for users to just follow them. For Fedora, we have plenty of guides that contain only the commands you need to run, and I know plenty of users who just don't know what a command means or where they should type it.
Michal
Well said!
Daniel
On 1/7/20 11:14 AM, Iñaki Ucar wrote:
I'm far from having a satisfactory answer to that, but I see two fronts here. First, marketing. How did Ubuntu manage to become so popular among less-experienced Linux users? I'm not sure, but I suspect that good marketing has something to do with it.
I can think of several reasons that are important to me; some of them were addressed by Fedora and are no longer relevant, but they gave Ubuntu enough momentum to last:
- Ubuntu provides LTS releases, so people can choose to install and forget. Yes, it is a tradeoff with new/shiny, but it's nice to have this option for something that is intended to last.
- as a result of that momentum, Ubuntu became the default in various special circumstances: Jupyter notebooks, WSL, etc.; furthermore, this popularity attracted packagers, so that some Ubuntu packages lead Fedora's (see also the next point).
- Ubuntu was pragmatic and compromising about non-free software such as codecs and video drivers; as a result, it sometimes has better support for things like CUDA software, video/multimedia, etc., even though nowadays Fedora has practically out-of-the-box support for these.
Regarding the first point, the Fedora/Red Hat/CentOS environment requires an early decision and commitment to one of the three alternatives. If it is production, one would deploy paid-support RHEL; less critical but still long-term roles call for CentOS; and of course Fedora is best for personal systems, especially for development and testing new software stacks.
It turns out, however, that the initial intent often changes: an important production system becomes less-critical legacy, or a cutting-edge development system proves itself and becomes production. In these cases it would be nice to transition smoothly between the choices: a RHEL system that comes off its entitlement should not just sit there unpatched but should smoothly transition to CentOS, and maybe there could be a way to transition a no-longer-supported Fedora to a roughly equivalent RHEL/CentOS. I realize that this is a big ask, but I have wished for it often enough that I thought I'd put it out here for consideration, especially in the context of competing with Ubuntu.
On 1/15/20 8:33 PM, Przemek Klosowski via devel wrote:
On 1/7/20 11:14 AM, Iñaki Ucar wrote:
I'm far from having a satisfactory answer to that, but I see two fronts here. First, marketing. How did Ubuntu manage to become so popular among less-experienced Linux users? I'm not sure, but I suspect that good marketing has something to do with it.
One of their primary aims has been user-friendliness. Their forums are helpful, and it is easier to find information about Ubuntu with a quick internet search. A number of other Linux distributions now aim to be friendly to those who just want things to work.
I can think of several reasons that are important to me; some of them were addressed by Fedora and are no longer relevant, but they gave Ubuntu enough momentum to last:
- Ubuntu provides LTS releases, so people can choose to install and forget. Yes, it is a tradeoff with new/shiny, but it's nice to have this option for something that is intended to last.
Fedora is positioned somewhere in between Debian and Ubuntu. Debian does not have as many users as Ubuntu. CentOS is supported for 10 years but is mostly considered a server distro, though it is also a very capable desktop, as many of the things in Fedora can be used in CentOS or easily ported to it.
- as a result of that momentum, Ubuntu became the default in various special circumstances: Jupyter notebooks, WSL, etc.; furthermore, this popularity attracted packagers, so that some Ubuntu packages lead Fedora's (see also the next point).
Having software packaged is helpful. However, things like Flatpak, Snap, and AppImage may make this less of a concern. Some distributions allow using package repositories from other distributions; for example, Puppy Linux can use Ubuntu repositories, so a distribution with a small number of core developers can offer many applications.
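(As a concrete illustration of that point: once Flathub is enabled, an application is a single command away on any distribution. The application ID below is just an example.)

    # enable the Flathub remote, then install an app by its ID
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.gimp.GIMP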
- Ubuntu was pragmatic and willing to compromise on non-free software such as codecs and video drivers; as a result, it sometimes has better support for things like CUDA, video/multimedia, etc., even though nowadays Fedora has practically out-of-the-box support for these.
It is helpful to know when non-free software is used. Perhaps better communication with hardware vendors is required. Alternatively, a number of distributions do have online stores where you can get a pre-installed system that should be hassle-free. Part of the attraction of Linux is the freedom to configure things yourself, which requires an investment of time.
Regarding the first point, the Fedora/Red Hat/CentOS environment requires an early decision and commitment to one of the three alternatives. If it is production, one would deploy paid-support RHEL; less critical but still long-term roles call for CentOS; and of course Fedora is best for personal systems, especially for development and testing new software stacks.
This mostly needs a good partitioning of the file system and/or multiple hard drives: separate the data from the operating system and the applications. It is then possible to easily change the operating system. It is also possible to have workstations with multiple operating system boot options.
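(For example, a layout along these lines keeps /home on its own partition, so the OS partition can be wiped and reinstalled while the data survives; the device names are placeholders.)

    # /etc/fstab sketch -- devices are illustrative only
    /dev/sda1  /      ext4  defaults  1 1   # OS + applications, safe to reinstall
    /dev/sda2  /home  ext4  defaults  1 2   # user data, survives an OS swap
    /dev/sda3  swap   swap  defaults  0 0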
It turns out, however, that the initial intent often changes: an important production system becomes a less-critical legacy, or a cutting-edge development system proves itself and becomes production. In these cases it would be nice to transition smoothly between the choices: a RHEL system that comes off its entitlement should not just sit there unpatched but should smoothly transition to CentOS, and maybe there could be a way to transition a no-longer-supported Fedora to a roughly equivalent RHEL/CentOS. I realize that this is a big ask, but I wished for it often enough that I thought I'd put it out here for consideration, especially in the context of competing with Ubuntu.
This can work by separating the data from the operating system. The main problem might be that some software packages need to be rebuilt, since they may not be available in the target repository; this would likely need some developer/packager time. Transitioning may be challenging to fully automate due to application software availability and compatibility, though many Linux installers now give a choice of where to put the operating system and which disks/partitions to leave untouched.
On 1/7/20 10:28 AM, Matthew Miller wrote:
On Tue, Jan 07, 2020 at 03:22:45PM +0100, Iñaki Ucar wrote:
For me, the main challenge Fedora faces is **positioning**.
Let me explain: (I don't have numbers, but) in my (limited) experience, when seasoned sysadmins need to launch a new system, they usually think "Debian" as something reliable; when seasoned as well as not-very-seasoned-in-Linux research engineers (I know this category better, since I'm a researcher) need to set up a system for some demo or experiment, they mostly think "Ubuntu" (yes, I know...); when we see a new exciting service (such as Travis CI and the like) coming out, it usually supports Ubuntu; and so on and so forth, and I'm not even talking about the desktop use case.
So I think that's the challenge for Fedora: getting all those people to consider Fedora as a first option for their use cases.
I agree that's a challenge. Any ideas for how to address it and change these perceptions?
Here's one that should be easy, though it probably won't have the desired impact, but we should practice what we preach, at minimum: make Fedora a selection for the OS type in oVirt. I wind up choosing the latest RHEL for all my Fedora VMs, but I always have to wonder if that's optimal -- and I've lived in the shade of RH since the RHL 4.0 days. Why do we have to guess at this? I know oVirt isn't a Fedora project, it's a RH one, but this should be one upstream that's the easiest of all to convince. I mean, Ubuntu is a choice here! What kind of message does that send?
On Wed, Jan 08, 2020 at 02:17:40PM -0500, John Florian wrote:
desired impact, but we should practice what we preach, at minimum: make Fedora a selection for the OS in oVirt. I wind up choosing the latest RHEL for all my Fedora VMs but I always have to wonder if that's optimal -- and I've lived in the shade of RH since the RHL4.0 days. Why do we have to guess at this? I know oVirt isn't a Fedora project, it's a RH one, but this should be one upstream that's the
Yeah, this is a good suggestion and I'm not sure why it's not already the case!
On Fri, Jan 10, 2020 at 2:47 PM Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Jan 08, 2020 at 02:17:40PM -0500, John Florian wrote:
desired impact, but we should practice what we preach, at minimum: make Fedora a selection for the OS in oVirt. I wind up choosing the latest RHEL for all my Fedora VMs but I always have to wonder if that's optimal -- and I've lived in the shade of RH since the RHL4.0 days. Why do we have to guess at this? I know oVirt isn't a Fedora project, it's a RH one, but this should be one upstream that's the
Yeah, this is a good suggestion and I'm not sure why it's not already the case!
It's quite obvious. Nobody in that team thinks of Fedora much. To them, CentOS is the major freely available OS from Red Hat, just like with RDO.
On Sat, Jan 11, 2020 at 06:25:31AM -0500, Neal Gompa wrote:
On Fri, Jan 10, 2020 at 2:47 PM Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Jan 08, 2020 at 02:17:40PM -0500, John Florian wrote:
desired impact, but we should practice what we preach, at minimum: make Fedora a selection for the OS in oVirt. I wind up choosing the latest RHEL for all my Fedora VMs but I always have to wonder if that's optimal -- and I've lived in the shade of RH since the RHL4.0 days. Why do we have to guess at this? I know oVirt isn't a Fedora project, it's a RH one, but this should be one upstream that's the
Yeah, this is a good suggestion and I'm not sure why it's not already the case!
It's quite obvious. Nobody in that team thinks of Fedora much. To them, CentOS is the major freely available OS from Red Hat, just like with RDO.
Let's not play the "I know what they think" game; it's not a fun one.
Pierre
On Tue, Jan 7, 2020 at 9:29 AM Matthew Miller mattdm@fedoraproject.org wrote:
I agree that's a challenge. Any ideas for how to address it and change these perceptions?
My 1.5 broken-cryptocurrency cents on this one.
I think we should make a serious effort to put the users first, and remind ourselves of this in every breaking decision we make. For example, in Fedora 31, we don't have Docker out of the box (though podman is awesome). This is a good example of how we ask users to align with often-awesome new technology paradigms, but it breaks users' environments and makes it more work to keep Fedora as your main OS.
This is why we have, in every release, so many articles and scripts that "fix" Fedora, so that the user doesn't have to.
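(To be concrete, for the Docker case the usual "fix" boils down to one of these two commands, depending on whether you want the real engine or just a docker-compatible front end to podman.)

    # option 1: the actual Docker engine as packaged by Fedora
    sudo dnf install moby-engine
    # option 2: keep podman, but get a docker-compatible CLI
    sudo dnf install podman-docker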
Making sure we don't break the user's experience is key, IMHO.
Making sure that software doesn't break is key as well. Things should just work. Also, we should throw in some configuration help there as well, not ship everything as vanilla as we have for years. For example, just recently the nginx configuration started taking php-fpm into account (as an example that comes to mind); for several releases, one had to do this manually.
We should provide an environment that welcomes third parties. We want companies to make Fedora their go-to distro. Wouldn't it be awesome if Docker showed examples of how to do stuff in Fedora first, rather than Ubuntu? The same goes for k8s, Raspberry Pi, and so on.
Bring the companies and the companies will bring the users.
That's what I can think of right now, hehe.
On Tue, 7 Jan 2020 at 15:22, Iñaki Ucar iucar@fedoraproject.org wrote:
On Mon, 6 Jan 2020 at 18:28, Matthew Miller mattdm@fedoraproject.org wrote:
Hi everyone! Since it's a new year and a new decade [*], it seems like a good time to look forward and talk about what we want the Fedora Project to be in the next five and even ten years. How do we take the awesome foundation we have now and build and grow and make something that continues to thrive and be useful, valuable, and fun?
[...]
Those are my thoughts. What other challenges and opportunities do you see, and what would you like us to focus on?
For me, the main challenge Fedora faces is **positioning**.
Speaking of which, shouldn't we claim the GitHub account https://github.com/fedora, as Debian did?
Disclaimer regarding the current discussion about Git Forges: not saying that we should move to GitHub or anything, but, you know, visibility...
Iñaki
On Mon, Jan 6, 2020 at 11:20 AM Matthew Miller mattdm@fedoraproject.org wrote:
Those are my thoughts. What other challenges and opportunities do you see, and what would you like us to focus on?
The packaging process has changed a lot over the last couple of years (well, not the core fedpkg process), but I've been a packager for almost 12 years now (Where did the time go?!?!?), and here are some things that would help me.
Preface: I have a lot more packages but a lot less time than I did 12 years ago. Aging parents, kids, $DAYJOB, etc...
1. Make updating well-behaved packages easier and/or more automated.
I have a few upstreams that are very organized: they don't accidentally break API/ABI compatibility, and I almost never need to patch anything. How about an option to build those when updated -- not a "scratch build" or a candidate build, but something in between that's gated before becoming a candidate, with a report where I can easily see what changed (pkgdiff & abipkgdiff?) and then simply click a button if I want to create an update or throw it away.
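(Today I do roughly this by hand; a sketch, with the package file names as placeholders.)

    # compare two builds manually -- file names are examples
    pkgdiff foo-1.0-1.fc31.x86_64.rpm foo-1.1-1.fc31.x86_64.rpm
    abipkgdiff --d1 foo-debuginfo-1.0-1.fc31.x86_64.rpm \
               --d2 foo-debuginfo-1.1-1.fc31.x86_64.rpm \
               foo-1.0-1.fc31.x86_64.rpm foo-1.1-1.fc31.x86_64.rpm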
2. I find the whole fedpkg, rpkg, copr-cli and their interaction between src.fedoraproject.org and pagure.io confusing.
Not only are the icons the same for the browser tab, but it seems like I'm constantly getting generic emails telling me my API token is about to expire, and there's nothing in the email that differentiates the two sites, nor a link.
Also, because it's not done all THAT often (it just feels like it is), I never remember which file I have to put the API token in!
Add to that that one of them goes in ~/.config/rpkg/fedpkg.conf, which continues to blur the lines between rpkg and fedpkg.
The others go in /etc, which doesn't make any sense to me. Shouldn't API tokens be user-specific and not machine-specific?
Out of frustration, I end up just pasting the key into the different files until the tool works...
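(For the record, the layout I end up reconstructing every time is roughly the following; the section names are from memory and may differ between versions, so don't take this as gospel.)

    # ~/.config/rpkg/fedpkg.conf -- API token for the pagure side of fedpkg
    [fedpkg.pagure]
    token = <API token from the website's settings page>

    # ~/.config/copr -- credentials for copr-cli, from the Copr API page
    [copr-cli]
    username = yourname
    login = <login string>
    token = <token>
    copr_url = https://copr.fedorainfracloud.org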
Thanks, Richard
On Thu, Jan 09, 2020 at 08:24:41AM -0600, Richard Shaw wrote:
On Mon, Jan 6, 2020 at 11:20 AM Matthew Miller <mattdm@fedoraproject.org> wrote:
Those are my thoughts. What other challenges and opportunities do you see, and what would you like us to focus on?
The packaging process has changed a lot over the last couple of years (well, not the core fedpkg process) but I've been a packager for almost 12 years now (Where did the time go?!?!?) and here are some things that would help me.
Preface: I have a lot more packages but a lot less time than I did 12 years ago. Aging parents, kids, $DAYJOB, etc...
- Make updating well-behaved packages easier and/or more automated.
I have a few upstreams that are very organized: they don't accidentally break API/ABI compatibility, and I almost never need to patch anything. How about an option to build those when updated -- not a "scratch build" or a candidate build, but something in between that's gated before becoming a candidate, with a report where I can easily see what changed (pkgdiff & abipkgdiff?) and then simply click a button if I want to create an update or throw it away.
I wonder how much of this could be achieved with packit + gating + the automation added for rawhide gating. I guess some parts would be missing, but potentially not that many.
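(For reference, a minimal packit configuration would be a sketch like this; the field names are from memory and may have changed, so check the packit documentation.)

    # .packit.yaml -- minimal sketch, field names may differ
    specfile_path: foo.spec
    upstream_package_name: foo
    downstream_package_name: foo
    jobs:
      - job: propose_downstream
        trigger: release
        dist_git_branches:
          - fedora-rawhide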
- I find the whole fedpkg, rpkg, copr-cli and their interaction between src.fedoraproject.org and pagure.io confusing. Not only are the icons the same for the browser tab, but it seems like I'm constantly getting generic emails telling me my API token is about to expire, and there's nothing in the email that differentiates the two sites, nor a link.
That is definitely something we want to fix; we even have ideas how, we just need to dedicate some time to do it.
Pierre
First, I'd like to see Fedora become more of an "operating system factory".
The '20s run from 2020 to 2029: ten years. That's a long time. So let me write some high-level thoughts, and let me post them as a new topic.
For the topic 'more of an "operating system factory"', I want to see the Fedora Project become a place that "democratizes new technologies".
Nowadays many people can use computers and the internet at a relatively affordable cost, regardless of what organization they belong to or what region they live in. That means those technologies have been democratized. But back in the 1960s, '70s, and '90s, only a few people who had access to specific academic institutions or companies, or who lived in specific regions, could use those technologies with relative ease. For example, someone living near a company could negotiate with that company to use a computer for free, but only on weekends.
In my experience: in 1999 I was living in the same region as an NGO working on a backbone network, and the NGO helped me put my Linux server on their network for free. In the age of the non-democratized computer and internet, some organizations helped people reach the new technologies.
Now there are digital tools and hardware that are not democratized, again. History repeats itself.
I assume people want to use non-x86_64 servers to debug their application programs at an affordable cost; I saw this need while working in upstream projects.
I asked, by sending an email as the Fedora wiki describes, to use an s390x and a ppc64le server to debug an application. But after 2 weeks, no response. The process could be improved. https://fedoraproject.org/wiki/Test_Machine_Resources_For_Package_Maintainer...
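(While waiting for real hardware, emulation can be a stopgap for quick checks, assuming qemu-user-static is installed; it is much slower than a real machine, but it runs anywhere.)

    # run an s390x Fedora container on an x86_64 box via qemu-user emulation
    sudo dnf install qemu-user-static podman
    podman run --rm --arch s390x registry.fedoraproject.org/fedora:31 uname -m
    # expected output: s390x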
When you keep looking at communities using new technologies with open source, you see that some hardware is necessary. This tendency could increase in the '20s.
What resources help a technology become democratized? That can be a question to ask, to understand what Fedora does, when you attend an event.
Fedora's theme in the '20s could be "democratizing access to new technologies". And the actions could be:
* Enable non-x86_64 servers for users, easily, with time sharing.
* Enable the hardware users need in order to use a technology. With time sharing? How to do it? It's a challenge.
Regards, Jun
In my experience: in 1999 I was living in the same region as an NGO working on a backbone network, and the NGO helped me put my Linux server on their network for free. In the age of the non-democratized computer and internet, some organizations helped people reach the new technologies.
My mistake. Maybe it was not an "NGO" but an "NPO" (nonprofit organization).