Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single-build updates. In a later phase, gating will also be enabled for Rawhide updates that contain multiple builds. Our goal is to improve our ability to continuously turn out a useful Fedora OS, so we hope and expect to get opt-in from as many Fedora package maintainers as possible, including maintainers of the base OS. But this phase of gating remains opt-in, and should not affect packagers who choose, for now, not to opt in.
Last April FESCo approved a change proposal[1] allowing Rawhide packages to be gated based on test results. The proposal included gating updates with only a single build as well as updates with multiple builds. It was designed to cause minimal to no interference with the current workflow of packagers who do not opt in.
The team has been working hard on this proposal and decided on a phased roll-out of this change, so that we can gather feedback as early as possible from the packagers interested in testing this workflow, without impacting everyone.
On July 24th, we plan to turn on the first phase of this change.
What does it mean for us as packagers?
--------------------------------------
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed, and moved onto a second tag. Koji will notify Bodhi once this new build is signed, and Bodhi will automatically create an update for it with a “Testing” status (bodhi will notify you about this directly by email). If the package maintainer has not opted into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
If the package maintainer has opted into the CI workflow, the creation of the update will trigger the CI pipeline, which will send its results to resultsdb; that in turn triggers greenwave to evaluate whether all the required tests have passed, allowing the update through the gate or not.
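The gating step itself boils down to a set comparison: greenwave checks whether every test case required by the applicable policy has a passing result in resultsdb (or an explicit waiver). A minimal sketch of that decision, with illustrative test-case names:

```python
# Minimal sketch of a greenwave-style gating decision: an update passes
# the gate only when every required test case has a passing result or
# an explicit waiver. Test-case names are illustrative.

def gating_decision(required, results, waived=()):
    """Return (passed, unsatisfied) for one update.

    required -- test-case names the policy demands
    results  -- dict mapping test-case name to outcome string
    waived   -- test cases a maintainer has explicitly waived
    """
    unsatisfied = [
        test for test in required
        if results.get(test) != "PASSED" and test not in waived
    ]
    return (not unsatisfied, unsatisfied)


required = ["dist.abicheck", "dist.rpmdeplint"]
results = {"dist.abicheck": "PASSED", "dist.rpmdeplint": "FAILED"}

print(gating_decision(required, results))
# → (False, ['dist.rpmdeplint'])  -- gate stays closed
print(gating_decision(required, results, waived=["dist.rpmdeplint"]))
# → (True, [])  -- the failure was waived, update can go to stable
```

The real services carry far richer semantics (decision contexts, subject types, remote policies), but the pass/waive core is this simple.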
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Small FAQ:
----------

But I do not want to use gating!

While we believe CI and gating will ultimately help make Fedora better, nothing is enforced at this point. Keep packaging as you do now!
How do I enroll?

Great! We’re glad to see such enthusiasm! You can find the instructions on how to enable gating in the docs: https://docs.fedoraproject.org/en-US/ci/gating/
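For orientation, a gating.yaml is a greenwave policy snippet. A sketch along the following lines, where the test case name is a placeholder and the linked documentation is authoritative on the exact fields:

```yaml
# Hypothetical gating.yaml; the test case name below is a placeholder,
# see the gating docs for the exact decision contexts and rule types.
--- !Policy
product_versions:
  - fedora-rawhide
decision_context: bodhi_update_push_stable
subject_type: koji_build
rules:
  - !PassingTestCaseRule {test_case_name: org.fedoraproject.ci.example.functional}
```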
It does not work!

Bugs will be bugs. This is the first roll-out of this change, and more will come. This rollout lets us gather feedback and iterate on the approach in an open-source fashion.

If you did not opt in and you can’t do your packaging work as you used to, please file an infrastructure ticket, since it’s likely a bug: https://pagure.io/fedora-infrastructure/new_issue?title=%5BCI]

If you did opt in and something in the gating of your update doesn’t work (for example, CI ran but its results aren’t being considered, or waiving didn’t work), file an infrastructure ticket: https://pagure.io/fedora-infrastructure/new_issue?title=%5BCI]

If you opted in and the tests don’t run the way you expect, file a fedora-ci ticket: https://pagure.io/fedora-ci/general/new_issue
I enrolled but now I want to step out for some reason :sad trombone:

We hope you reported all the issues you’ve found or faced and are helping us resolve them. In the meantime, you can simply remove the gating.yaml file you added to your git repo; that should be enough to make greenwave ignore your package.
Want to know more? Your question isn’t here?

Check our documentation on Rawhide gating: https://docs.fedoraproject.org/en-US/rawhide-gating/ [2] We’ll keep it up to date as questions come up and get answered.
Pierre
For the Rawhide package gating team
[1] https://pagure.io/fesco/issue/2102
[2] At the time of writing this, it looks like the website is having some difficulties building the new version of the docs. This should get resolved in the coming hours, sorry for the inconvenience. (If only it had CI... :-))
_______________________________________________ devel-announce mailing list -- devel-announce@lists.fedoraproject.org To unsubscribe send an email to devel-announce-leave@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/devel-announce@lists.fedorapro...
On Tue, Jul 23, 2019 at 10:51:28PM +0200, Pierre-Yves Chibon wrote:
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating,
This is very exciting! I suppose I'll not jinx things by congratulating too soon, but this is great work and huge news.
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
[...]
Do we have an estimate of how much extra latency this is likely to add both with and without gating enabled? ie how much more delay there is likely to be before new builds are available?
Tom
On Tue, Jul 23, 2019 at 10:35:13PM +0100, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
[...]
Do we have an estimate of how much extra latency this is likely to add both with and without gating enabled? ie how much more delay there is likely to be before new builds are available?
Currently the extra latency is about 3 minutes: that is the frequency at which the cron job that pushes updates having passed CI to stable runs. We do want to make this bus-based (instead of cron-based), which will reduce this latency even further.
Best, Pierre
On Wed, Jul 24, 2019 at 9:10 AM Pierre-Yves Chibon pingou@pingoured.fr wrote:
On Tue, Jul 23, 2019 at 10:35:13PM +0100, Tom Hughes wrote:
[...]
Currently the extra latency is about 3 minutes. It's the frequency at which the
What is that based upon? What level of capacity is there to run the CI, etc.?
cron job pushing the updates having past CI to stable runs. We do want to make
I'm assuming you mean passed and not past here, the latter gives it quite a different meaning.
this be bus-based (instead of cron-based) which will reduce this latency even more.
Best, Pierre
On Wed, Jul 24, 2019 at 09:14:05AM +0100, Peter Robinson wrote:
[...]
Do we have an estimate of how much extra latency this is likely to add both with and without gating enabled? ie how much more delay there is likely to be before new builds are available?
Currently the extra latency is about 3 minutes. It's the frequency at which the
What is that based upon? What level of capacity is there to run the CI etc
cron job pushing the updates having past CI to stable runs. We do want to make
I'm assuming you mean passed and not past here, the latter gives it quite a different meaning.
Sorry, I wasn't clear: the 3 minutes of extra latency is for non-gated packages. For gated packages, it highly depends on the tests you run. On my canary test, which just calls Ansible's "fail" module, it takes about 8 minutes to have the tests set up, run, and torn down. So that makes an extra 8 minutes for the tests, plus up to 3 minutes for the update to be pushed to stable (assuming the tests passed).
Best, Pierre
On Wed, Jul 24, 2019 at 09:14:05AM +0100, Peter Robinson wrote:
On Wed, Jul 24, 2019 at 9:10 AM Pierre-Yves Chibon pingou@pingoured.fr wrote:
[...]
Sorry, I wasn't clear: the 3 minutes of extra latency is for non-gated packages. For gated packages, it highly depends on the tests you run. On my canary test, which just calls Ansible's "fail" module, it takes about 8 minutes to have the tests set up, run, and torn down. So that makes an extra 8 minutes for the tests, plus up to 3 minutes for the update to be pushed to stable (assuming the tests passed).
Is there documentation on the failure workflow? What does a packager need to do to get it resubmitted, etc.?
On Wed, Jul 24, 2019 at 10:27:35AM +0100, Peter Robinson wrote:
On Wed, Jul 24, 2019 at 09:14:05AM +0100, Peter Robinson wrote:
[...]
Is there documentation on the failure workflow? What does a packager need to do to get it resubmitted, etc.?
I've been meaning to add that question to the FAQ since this morning but still haven't got to it :(
We do want a mechanism that allows all packagers to re-trigger tests for their package; at this point, however, we do not have one. There are thus two ways to move forward: either bump the release and rebuild (which will re-trigger the tests), or waive the missing tests (which will let the build go through the gate). We're not fond of this situation, since we're teaching our packagers to waive tests, but this phase targets early adopters and we hope they can forgive us for it.
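To illustrate the waiving path: a waiver is ultimately a small record submitted to waiverdb. A rough sketch of what it carries; the field names are my assumption modelled on waiverdb's API, not an exact request payload:

```python
# Illustrative sketch of the data a waiver carries when a maintainer
# waives a failed (or missing) test result. Field names are assumptions
# modelled on waiverdb's API, not an exact request payload.

def make_waiver(nvr, testcase, reason, product_version="fedora-rawhide"):
    """Build a waiver record for one build/test-case pair."""
    return {
        "subject_type": "koji_build",
        "subject_identifier": nvr,  # e.g. "mypackage-1.0-2.fc31"
        "testcase": testcase,
        "waived": True,
        "product_version": product_version,
        "comment": reason,  # why the result should not block the gate
    }


waiver = make_waiver(
    "mypackage-1.0-2.fc31",
    "dist.rpmdeplint",
    "infrastructure flake; failure not reproducible locally",
)
print(waiver["subject_identifier"], waiver["waived"])
# → mypackage-1.0-2.fc31 True
```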
I'll make a PR to the docs for this right now :)
Best, Pierre
On 24/07/2019 09:14, Peter Robinson wrote:
[...]
Currently the extra latency is about 3 minutes. It's the frequency at which the
What is that based upon? What level of capacity is there to run the CI etc
Well, I assume that's just the overhead of the extra moving things around, on top of any time for the actual tests to run.
I generally run tests in %check anyway, so test-wise I'm mostly interested in using rpmdeplint and maybe abicheck, although I'm pretty sure that's not really robust enough yet.
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
Tom
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
On Wed, Jul 24, 2019 at 8:02 AM Miro Hrončok mhroncok@redhat.com wrote:
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
Do either of you have a better suggestion?
josh
On 24/07/2019 13:32, Josh Boyer wrote:
On Wed, Jul 24, 2019 at 8:02 AM Miro Hrončok mhroncok@redhat.com wrote:
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
Do either of you have a better suggestion?
Well more ordinary YAML would be a good start.
I mean, I literally had to go and read the YAML spec to try and work out what it was doing, and let me tell you: for something I had always thought was a simple format, it has a very long and hard-to-read spec...
So a single document would be good, and get rid of the tags, which I assume are the result of serialising objects with those names.
The very.long.reverse.domain.test.names are not ideal.
Then there's decision_context which apparently does nothing but has to be there.
Is there any rule type other than PassingTestCaseRule?
If not then what's wrong with:
---
rules:
  fedora-*:
    - dist.abicheck
    - dist.rpmlint
  fedora-30:
    - my.special.test
or something equally simple, which is just a list of tests to require for each version.
Tom
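A flat mapping like the one Tom proposes is also trivially machine-readable. A sketch, where the shell-style wildcard semantics of the version keys are my interpretation:

```python
# Sketch: resolve the required tests for a release from the simplified
# mapping proposed above. Version keys use shell-style wildcards; the
# matching semantics are an interpretation of the sketch, not a spec.
from fnmatch import fnmatch


def required_tests(rules, version):
    """Union (in order) of the test lists whose pattern matches `version`."""
    tests = []
    for pattern, names in rules.items():
        if fnmatch(version, pattern):
            tests.extend(t for t in names if t not in tests)
    return tests


rules = {
    "fedora-*": ["dist.abicheck", "dist.rpmlint"],
    "fedora-30": ["my.special.test"],
}
print(required_tests(rules, "fedora-30"))
# → ['dist.abicheck', 'dist.rpmlint', 'my.special.test']
print(required_tests(rules, "fedora-31"))
# → ['dist.abicheck', 'dist.rpmlint']
```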
On Wed, Jul 24, 2019 at 02:13:02PM +0100, Tom Hughes wrote:
On 24/07/2019 13:32, Josh Boyer wrote:
On Wed, Jul 24, 2019 at 8:02 AM Miro Hrončok mhroncok@redhat.com wrote:
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
Do either of you have a better suggestion?
Well more ordinary YAML would be a good start.
I mean I literally had to go and try and read the YAML spec to try and work out what it was doing and let me tell you, for something that I had always thought was a simple format it has a very long and hard to read spec...
So a single document would be good, and get rid of the tags which I assume are the result of serialising objects with those name.
The very.long.reverse.domain.test.names are not ideal.
Agreed, we could look for better ones.
Then there's decision_context which apparently does nothing but has to be there.
It is used! It is what defines which tests gate the build/update when entering the -testing repo vs. which ones gate entering the -stable repo.
Is there any rule type other than PassingTestCaseRule?
There actually are others: https://docs.pagure.org/greenwave/policies.html There is one that is used to pull policies from remote locations (which is what allows package-specific rules), and we had another one in the past that allowed, globally, certain rules to apply only to some packages.
That being said, maybe there would be a way to simplify the syntax for remote policies, so I've opened a ticket to greenwave to see what they think about it and if it is doable: https://pagure.io/greenwave/issue/465
Thanks for your feedback :) Pierre
On 24/07/2019 14:51, Pierre-Yves Chibon wrote:
On Wed, Jul 24, 2019 at 02:13:02PM +0100, Tom Hughes wrote:
Then there's decision_context which apparently does nothing but has to be there.
It is used! It is what defines which tests gate the build/update when entering the -testing repo vs. which ones gate entering the -stable repo.
Sorry... I can't find it now but I was sure when I was reading last night I came across a ticket where somebody claimed that it didn't actually do anything.
Tom
On Wed, Jul 24, 2019 at 03:17:41PM +0100, Tom Hughes wrote:
On 24/07/2019 14:51, Pierre-Yves Chibon wrote:
On Wed, Jul 24, 2019 at 02:13:02PM +0100, Tom Hughes wrote:
Then there's decision_context which apparently does nothing but has to be there.
It is used! It is what defines which tests gate the build/update when entering the -testing repo vs. which ones gate entering the -stable repo.
Sorry... I can't find it now but I was sure when I was reading last night I came across a ticket where somebody claimed that it didn't actually do anything.
To be complete, in the context of rawhide, only the -stable one is needed, but for stable branches, both are.
Pierre
On Wed, Jul 24, 2019 at 9:57 AM Tom Hughes tom@compton.nu wrote:
On 24/07/2019 13:32, Josh Boyer wrote:
On Wed, Jul 24, 2019 at 8:02 AM Miro Hrončok mhroncok@redhat.com wrote:
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
Do either of you have a better suggestion?
[...]
Is there any rule type other than PassingTestCaseRule?
If not then what's wrong with:
rules:
  fedora-*:
    - dist.abicheck
    - dist.rpmlint
  fedora-30:
    - my.special.test
or something equally simple, which is just a list of tests to require for each version.
I'd like to second that simpler hierarchy, but I'd say switching to TOML would make it considerably simpler to understand, and it imposes limits on the format so that it can't get wildly complicated.
gating.toml:

[[rules]]

[[rules.fedora]]
dist.abicheck = true
dist.rpmlint = true

[[rules.fedora.30]]
my.special.test = true

[[rules.epel]]
dist.abirestrict = true

[[rules.epel.8]]
dist.modularcheck = true

[[tests]]
my.special.test = ["/path/to/test", arg1, arg2]
Under no circumstances should the driving system for checks be too complex for people to grok.
-- 真実はいつも一つ!/ Always, there's only one truth!
On 24. 07. 19 14:32, Josh Boyer wrote:
On Wed, Jul 24, 2019 at 8:02 AM Miro Hrončok mhroncok@redhat.com wrote:
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
Do either of you have a better suggestion?
This looks quite a bit better: https://pagure.io/fedora-ci/general/issue/52#comment-584489
On Wed, Jul 24, 2019 at 03:36:35PM +0200, Miro Hrončok wrote:
[...]
This looks quite a bit better: https://pagure.io/fedora-ci/general/issue/52#comment-584489
Pagure.io has not been reachable for a few days (traceroute ends at 2605:bc80:f03:4::2, corv-car1-gw.nero.net), so it is a bit hard to discuss this proposal.
On Wed, Jul 24, 2019 at 04:22:01PM +0200, Tomasz Torcz wrote:
[...]
Pagure.io has not been reachable for a few days (traceroute ends at 2605:bc80:f03:4::2, corv-car1-gw.nero.net), so it is a bit hard to discuss this proposal.
You may want to email admin@fp.o since you can't open a ticket on the infra tracker. We should be able to figure things out with you.
Note: there have been reports of IPv6 issues in a specific office/network. Does it work better if you use IPv4 only?
Best, Pierre
On 7/24/19 7:22 AM, Tomasz Torcz wrote:
[...]
Pagure.io has not been reachable for a few days (traceroute ends at 2605:bc80:f03:4::2, corv-car1-gw.nero.net), so it is a bit hard to discuss this proposal.
This is likely going to the old IP for pagure.io (we moved it a week or so ago). Please check that your DNS is updating and that you do not have an /etc/hosts entry or the like.
Otherwise, happy to help debug more off list.
kevin
On Wed, 24 Jul 2019 at 11:09, Tomasz Torcz tomek@pipebreaker.pl wrote:
[...]
Pagure.io has not been reachable for a few days (traceroute ends at 2605:bc80:f03:4::2, corv-car1-gw.nero.net), so it is a bit hard to discuss this proposal.
Check to see if you have a hard-coded /etc/hosts entry for pagure.io or a DNS cache which isn't timing things out. That is an old IP address from a previously announced move. The DNS entries should be:
[smooge@smoogen-laptop ~]$ host pagure.io
pagure.io has address 8.43.85.75
pagure.io has IPv6 address 2620:52:3:1:dead:beef:cafe:fed5
pagure.io mail is handled by 10 pagure.io.

[smooge@smoogen-laptop ~]$ host stg.pagure.io
stg.pagure.io has address 8.43.85.77
stg.pagure.io has IPv6 address 2620:52:3:1:dead:beef:cafe:fed3
stg.pagure.io mail is handled by 10 stg.pagure.io.
On Wed, Jul 24, 2019 at 2:33 PM Josh Boyer jwboyer@fedoraproject.org wrote:
On Wed, Jul 24, 2019 at 8:02 AM Miro Hrončok mhroncok@redhat.com wrote:
On 24. 07. 19 10:24, Tom Hughes wrote:
That said, having to go round adding a mega ugly config file to every package that looks an awful lot like an internal braindump from some system doesn't really inspire confidence, or make for an easy way of opting in.
This. The gating.yaml file is terrible.
Do either of you have a better suggestion?
If most people would have the same default yaml file copy-pasted into a thousand places, it could easily be replaced with just:

```
gating: default
```

And allow people to override the default policy (with the current syntax, or hopefully something more readable) only when they really have some specific needs. This will also help in the future when the defaults need to be changed.
More preset values can be defined subsequently, e.g.: gating: default/minimal/custom/disabled etc.
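The preset idea can be sketched as a single indirection: the per-package file names a preset, and the distro maintains the actual policies in one central place. The preset names and their contents below are illustrative:

```python
# Sketch of preset-based gating configuration: the per-package file
# names a preset and the distro maintains the actual policies in one
# place. Preset names and their contents are illustrative.

PRESETS = {
    "default": ["dist.abicheck", "dist.rpmdeplint", "dist.rpmlint"],
    "minimal": ["dist.rpmdeplint"],
    "disabled": [],
}


def resolve_policy(package_config):
    """Map a package's `gating:` value to the tests it must pass."""
    preset = package_config.get("gating", "disabled")
    if preset == "custom":
        # "custom" falls back to an inline policy spelled out by the package
        return package_config.get("rules", [])
    return PRESETS[preset]


print(resolve_policy({"gating": "default"}))
# → ['dist.abicheck', 'dist.rpmdeplint', 'dist.rpmlint']
print(resolve_policy({"gating": "custom", "rules": ["my.special.test"]}))
# → ['my.special.test']
```

One nice property of the indirection: changing the distro-wide default later touches one place instead of thousands of package repos.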
On Wed, Jul 24, 2019 at 04:18:25PM +0200, Kamil Paral wrote:
On Wed, Jul 24, 2019 at 2:33 PM Josh Boyer jwboyer@fedoraproject.org wrote:

Do either of you have a better suggestion?

If most people would have the same default yaml file copy-pasted into a thousand places, it could easily be replaced with just:

```
gating: default
```

And allow people to override the default policy (with the current syntax, or hopefully something more readable) only when they really have some specific needs. This will also help in the future when the defaults need to be changed. More preset values can be defined subsequently, e.g.: gating: default/minimal/custom/disabled etc.
This is an interesting idea, I've opened a ticket on greenwave to track it: https://pagure.io/greenwave/issue/466
We already have global (i.e. distro-wide) policies that we can enforce, so they would be your "default", but your mechanism still allows for opt-in/opt-out, which the distro-wide policies do not.
Thanks for the idea :)
Pierre
On 24. 07. 19 8:29, Florian Weimer wrote:
- Pierre-Yves Chibon:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates.
How does this interact with the mass rebuild?
Will Fedora 31 release with an unrebuilt package if gating fails?
The mass rebuild happens in a side tag and AFAIK it will just be merged afterwards no matter what gating says. Effectively, it bypasses gating.
On Wed, Jul 24, 2019 at 08:29:03AM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates.
How does this interact with the mass rebuild?
It does not. Mass rebuilds are done in dedicated side tags from which they are merged directly into the buildroot tag, so they entirely bypass gating (current and future versions of it).
Will Fedora 31 release with an unrebuilt package if gating fails?
Since there is a mass rebuild, no (unless the package fails to rebuild). For releases where we have no mass rebuild we could, indeed, see that happen if the maintainers do not fix the tests or the packages.
Best, Pierre
* Pierre-Yves Chibon:
On Wed, Jul 24, 2019 at 08:29:03AM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates.
How does this interact with the mass rebuild?
It does not. Mass rebuilds are done in dedicated side tags from which they are merged directly into the buildroot tag, so they entirely bypass gating (current and future versions of it).
Thanks for the clarification. So the mass rebuild effectively waives all previous gating failures. I don't think there's a good choice here, either approach has its problems. 8-/
Maybe in the future we should do the mass rebuild, tag it in, and then re-run all gating tests to see what kind of regressions there are?
Thanks, Florian
On Wed, Jul 24, 2019 at 11:41:16AM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
On Wed, Jul 24, 2019 at 08:29:03AM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates.
How does this interact with the mass rebuild?
It does not. Mass rebuilds are done in dedicated side tags from which they are merged directly into the buildroot tag, so they entirely bypass gating (current and future versions of it).
Thanks for the clarification. So the mass rebuild effectively waives all previous gating failures. I don't think there's a good choice here, either approach has its problems. 8-/
Maybe in the future we should do the mass rebuild, tag it in, and then re-run all gating tests to see what kind of regressions there are?
I would certainly not vote against this :) And if we get to the point where we can run tests against mass rebuilds to find regressions across our entire package selection, I think Fedora will be in a good place :)
Pierre
On Tue, 23 Jul 2019 22:51:28 +0200 Pierre-Yves Chibon pingou@pingoured.fr wrote:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates. In a later phase, Rawhide updates that contain multiple builds will also be enabled for gating. Our goal is to improve our ability to continuously turn out a useful Fedora OS. So we hope and expect to get opt-in from as many Fedora package maintainers as possible, including maintainers of the base OS. But this phase of gating remains opt-in, and should not affect packagers who choose for now not to opt in.
What is your level of confidence about reliability of the whole process? How much baby-sitting from the maintainer will be required (for lost messages, crashed or stuck processes, etc)?
How will arch-specific packages be handled? I mean, how will s390x- or ppc64le-specific packages be handled if the infra is ready for x86_64 only?
Thanks,
Dan
On Wed, Jul 24, 2019 at 10:17:20AM +0200, Dan Horák wrote:
On Tue, 23 Jul 2019 22:51:28 +0200 Pierre-Yves Chibon pingou@pingoured.fr wrote:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates. In a later phase, Rawhide updates that contain multiple builds will also be enabled for gating. Our goal is to improve our ability to continuously turn out a useful Fedora OS. So we hope and expect to get opt-in from as many Fedora package maintainers as possible, including maintainers of the base OS. But this phase of gating remains opt-in, and should not affect packagers who choose for now not to opt in.
What is your level of confidence about reliability of the whole process? How much baby-sitting from the maintainer will be required (for lost messages, crashed or stuck processes, etc)?
We have moved the most significant pieces of the workflow from fedmsg to fedora-messaging, which should eliminate or drastically reduce the risk of lost messages. We have three pieces that are still on fedmsg and that we are still working on porting to fedora-messaging: the CI system itself, resultsdb-listener (which uploads results from CI into resultsdb) and robosignatory. All of these are actively being worked on and should land in the coming weeks (hopefully days).
The process is pretty straightforward, so I am pretty confident in it. There will be bugs (there always are), but I don't think we'll have any that actually block the update process.
How will arch-specific packages be handled? I mean, how will s390x- or ppc64le-specific packages be handled if the infra is ready for x86_64 only?
I cannot answer for the CI system, maybe Dominik or Aleksandra can :)
Best, Pierre
On 24. 07. 19 10:17, Dan Horák wrote:
On Tue, 23 Jul 2019 22:51:28 +0200 Pierre-Yves Chibon pingou@pingoured.fr wrote:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates. In a later phase, Rawhide updates that contain multiple builds will also be enabled for gating. Our goal is to improve our ability to continuously turn out a useful Fedora OS. So we hope and expect to get opt-in from as many Fedora package maintainers as possible, including maintainers of the base OS. But this phase of gating remains opt-in, and should not affect packagers who choose for now not to opt in.
What is your level of confidence about reliability of the whole process? How much baby-sitting from the maintainer will be required (for lost messages, crashed or stuck processes, etc)?
How will arch-specific packages be handled? I mean, how will s390x- or ppc64le-specific packages be handled if the infra is ready for x86_64 only?
Other architectures are low priority :(
https://pagure.io/fedora-ci/general/issue/16
On 23. 07. 19 22:51, Pierre-Yves Chibon wrote:
How do I enroll? Great! We’re glad to see such enthusiasm! You can find the instructions on how to enable gating in the docs: https://docs.fedoraproject.org/en-US/ci/gating/
Assuming I want to gate openssl for python_selftest from here:
https://src.fedoraproject.org/rpms/openssl/blob/master/f/tests/tests_python....
What would be the test_case_name?
org.centos.prod.ci.pipeline.allpackages-build.package.test_python.python_selftest ?
On Wed, Jul 24, 2019 at 04:42:11PM +0200, Miro Hrončok wrote:
On 23. 07. 19 22:51, Pierre-Yves Chibon wrote:
How do I enroll? Great! We’re glad to see such enthusiasm! You can find the instructions on how to enable gating in the docs: https://docs.fedoraproject.org/en-US/ci/gating/
Assuming I want to gate openssl for python_selftest from here:
https://src.fedoraproject.org/rpms/openssl/blob/master/f/tests/tests_python....
What would be the test_case_name?
org.centos.prod.ci.pipeline.allpackages-build.package.test_python.python_selftest ?
No, the pipeline doesn't send a message per test; it runs all the tests and returns a global pass|fail. So you'll want to gate on org.centos.prod.ci.pipeline.allpackages-build.complete or org.centos.prod.ci.pipeline.allpackages-build.package.test.functional.complete as in the example (the first one is the very last message sent by the pipeline, the second one is sent when it finishes the step of running the tests).
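To illustrate, a gating.yaml gating on that final pipeline message could look roughly like this (a sketch only; the `product_versions` and `decision_context` values here are assumptions and should be checked against the gating docs linked earlier in the thread):

```yaml
--- !Policy
product_versions:
  - fedora-rawhide
decision_context: bodhi_update_push_testing
subject_type: koji_build
rules:
  - !PassingTestCaseRule {test_case_name: org.centos.prod.ci.pipeline.allpackages-build.complete}
```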
Best, Pierre
Pierre-Yves Chibon pingou@pingoured.fr writes:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
Hi, how will we programmatically check what state the tests are in? For instance, `fedpkg build` (`koji watch-task`) waits until builds are complete - what do we do to wait until tests are complete (and check the result)?
Thanks, --Robbie
On Wed, Jul 24, 2019 at 11:30:45AM -0400, Robbie Harwood wrote:
Pierre-Yves Chibon pingou@pingoured.fr writes:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
Hi, how will we programmatically check what state the tests are in? For instance, `fedpkg build` (`koji watch-task`) waits until builds are complete - what do we do to wait until tests are complete (and check the result)?
So we don't have an easy way to do this. I have a script that monitors the entire pipeline/workflow in production and in staging. I have been querying datagrepper for messages about the build that I've made to see if/when the tests are done. You can find the code in: https://pagure.io/fedora-ci/monitor-gating/blob/97bc5b619032cfd3218b863786e3... called in: https://pagure.io/fedora-ci/monitor-gating/blob/97bc5b619032cfd3218b863786e3...
I've added an entry in the FAQ for this: https://pagure.io/cpe/rawhide-gating-docs/pull-request/5
Maybe we could also extend fedpkg to provide some information on this.
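For the curious, the polling approach can be sketched roughly like this (a hypothetical helper, not the monitor-gating code itself; it assumes datagrepper's public `/raw` endpoint with its `contains` and `delta` parameters, and the fetcher is injectable so the loop can be exercised without network access):

```python
import json
import time
import urllib.parse
import urllib.request

DATAGREPPER = "https://apps.fedoraproject.org/datagrepper/raw"


def datagrepper_query(nvr, delta=3600, rows=20):
    """Build a datagrepper /raw query URL for messages mentioning a build NVR.

    `contains`, `delta` and `rows_per_page` are datagrepper query
    parameters; the defaults here are arbitrary choices for the sketch.
    """
    params = urllib.parse.urlencode(
        {"contains": nvr, "delta": delta, "rows_per_page": rows}
    )
    return f"{DATAGREPPER}?{params}"


def wait_for_message(nvr, topic_suffix, fetch=None, interval=60, attempts=30):
    """Poll until a message whose topic ends with `topic_suffix` mentions `nvr`.

    `fetch` maps a URL to the decoded JSON response; it is injectable so
    the loop can be tested offline. Returns the matching message, or
    None if we give up after `attempts` polls.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
    for _ in range(attempts):
        data = fetch(datagrepper_query(nvr))
        for msg in data.get("raw_messages", []):
            if msg["topic"].endswith(topic_suffix):
                return msg
        time.sleep(interval)
    return None
```

As Robbie notes below, hammering datagrepper in a loop does not scale; a push-based interface or fedpkg integration would be the proper fix, so treat this purely as a stop-gap.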
Best, Pierre
On Wed, Jul 24, 2019 at 11:30:45AM -0400, Robbie Harwood wrote:
So we don't have an easy way to do this. I have a script that monitors the entire pipeline/workflow in production and in staging. I have been querying datagrepper for messages about the build that I've made to see if/when the tests are done. You can find the code in: https://pagure.io/fedora-ci/monitor-gating/blob/97bc5b619032cfd3218b86378... called in: https://pagure.io/fedora-ci/monitor-gating/blob/97bc5b619032cfd3218b86378...
This is pretty bad from a workflow perspective, especially when it's going to take 11 minutes (or 8 - I can't actually tell which from the replies above) longer per package on top of what we have now, plus time for any tests to run.
Your scripts are helpful, but there are a number of issues: you repeatedly send requests to datagrepper in a `while True` (is it okay with this load? What if we all start doing it?), and you make assumptions about the number of messages that can appear in an interval. I would want to see a proper interface for querying this (i.e., not polling datagrepper in a loop), as well as `fedpkg` integration as you suggest.
Thanks, --Robbie
On Wed, Jul 24, 2019 at 06:31:50PM -0000, Robbie Harwood wrote:
On Wed, Jul 24, 2019 at 11:30:45AM -0400, Robbie Harwood wrote:
So we don't have an easy way to do this. I have a script that monitors the entire pipeline/workflow in production and in staging. I have been querying datagrepper for messages about the build that I've made to see if/when the tests are done. You can find the code in: https://pagure.io/fedora-ci/monitor-gating/blob/97bc5b619032cfd3218b86378... called in: https://pagure.io/fedora-ci/monitor-gating/blob/97bc5b619032cfd3218b86378...
This is pretty bad from a workflow perspective, especially when it's going to take 11 minutes (or 8 - I can't actually tell which from replies above) longer per-package on top of what we have now, plus time for any tests to run.
Agreed, and as we said, this is opt-in and only the first release. Not everything is as polished as we want it to be; the core of it is, though, which is why we want to have this available to the people who are interested in giving it a try.
If: "I want to be able to follow/know what's going on" is a hard-requirement for you to test this system, then you'll have to wait a bit longer. If you're ok with these shortcomings, knowing they are meant to improve, then you are more than welcome to opt-in and give us some feedback on how it works for you.
Your scripts are helpful, but there are a number of issues: you repeatedly send requests to datagrepper in a `while True` (is it okay with this load? What if we all start doing it?), and you make assumptions about the number of messages that can appear in an interval. I would want to see a proper interface for querying this (i.e., not polling datagrepper in a loop), as well as `fedpkg` integration as you suggest.
It is a reasonable request. I could quickly hack something that listens to fedmsg and gives you some clues as to what is going on for a specified build, if that's helpful to you (and others).
I've opened a ticket to fedpkg to track this idea: https://pagure.io/fedpkg/issue/346 feel free to contribute your ideas there :)
Thanks for your feedback, Pierre
On Wed, Jul 24, 2019 at 06:05:50PM +0200, Pierre-Yves Chibon wrote:
I've added an entry in the FAQ for this: https://pagure.io/cpe/rawhide-gating-docs/pull-request/5 Maybe we could also extend fedpkg to provide some information on this.
Speaking with my occasional-packager hat on, yes please. That would be really, really helpful.
On 24. 07. 19 17:30, Robbie Harwood wrote:
Pierre-Yves Chibon pingou@pingoured.fr writes:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
Hi, how will we programmatically check what state the tests are in? For instance, `fedpkg build` (`koji watch-task`) waits until builds are complete - what do we do to wait until tests are complete (and check the result)?
If I understand this properly, `koji wait-repo` will do for packages without gating tests or when the tests pass. However, it will eventually time out if the tests fail.
On Thu, Jul 25, 2019 at 01:38:23AM +0200, Miro Hrončok wrote:
On 24. 07. 19 17:30, Robbie Harwood wrote:
Pierre-Yves Chibon pingou@pingoured.fr writes:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
Hi, how will we programmatically check what state the tests are in? For instance, `fedpkg build` (`koji watch-task`) waits until builds are complete - what do we do to wait until tests are complete (and check the result)?
If I understand this properly, `koji wait-repo` will do for packages without gating tests or when the tests pass. However, it will eventually time out if the tests fail.
If used with `--build`, I think you're right.
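Put together, the flow discussed above might look like this (a sketch; the `f31-build` buildroot tag and the NVR are assumptions for illustration):

```shell
fedpkg build
# Blocks until a buildroot repo containing the build exists; it will
# eventually time out if gating keeps the build from being tagged in.
koji wait-repo f31-build --build=mypackage-1.0-1.fc31
```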
Best, Pierre
On 23. 07. 19 22:51, Pierre-Yves Chibon wrote:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates.
Where at https://bodhi.fedoraproject.org/ do I find the rawhide updates?
On Thu, Jul 25, 2019 at 11:29:56AM +0200, Miro Hrončok wrote:
On 23. 07. 19 22:51, Pierre-Yves Chibon wrote:
Good Morning Everyone,
TL;DR: On July 24th we will turn on the first phase of Rawhide package gating, for single build updates.
Where at https://bodhi.fedoraproject.org/ do I find the rawhide updates?
It will show up as f31 once we have created it :) (ETA for this in about 2h, so 12:30 UTC).
Best, Pierre
Good Morning Everyone,
I just wanted to let everyone know that this is now live. You can see all the updates going through (or not) to rawhide in: https://bodhi.fedoraproject.org/releases/F31
Many many thanks to all the people involved, I'm afraid I'll miss some but I'll take the risk, so here it is (in no particular order): Many thanks to Clément, Aurélien, Kevin, Nils, Mohan, Michal, Randy, Ryan, Patrick, smooge, Troy and of course Leigh, Jim and Paul who have made this possible.
I'm keeping the point from the FAQ about where to report issues, if you run into any, here as it may now come in handy ;-)
Small FAQ:
[...]
It does not work! Bugs will be bugs. This is the first roll-out of this change, and more will come. This rollout lets us gather feedback and iterate on the approach in an open source fashion. If you did not opt-in and you can’t do your packaging work as you used to, please file an infrastructure ticket, since it’s likely a bug: https://pagure.io/fedora-infrastructure/new_issue?title=%5BCI] If you did opt-in and something in the gating of your update doesn’t work (for example, CI ran but its results aren’t being considered, waiving didn’t work…), file an infrastructure ticket: https://pagure.io/fedora-infrastructure/new_issue?title=%5BCI] If you opted-in and the tests don’t run the way you expect, file a fedora-ci ticket: https://pagure.io/fedora-ci/general/new_issue
Happy packaging!
Pierre
_______________________________________________ devel-announce mailing list -- devel-announce@lists.fedoraproject.org To unsubscribe send an email to devel-announce-leave@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/devel-announce@lists.fedorapro...
On Thu, Jul 25, 2019 at 5:25 PM Pierre-Yves Chibon pingou@pingoured.fr wrote:
Good Morning Everyone,
I just wanted to let everyone know that this is now live. You can see all the updates going through (or not) to rawhide in: https://bodhi.fedoraproject.org/releases/F31
Many many thanks to all the people involved, I'm afraid I'll miss some but I'll take the risk, so here it is (in no particular order): Many thanks to Clément, Aurélien, Kevin, Nils, Mohan, Michal, Randy, Ryan, Patrick, smooge, Troy and of course Leigh, Jim and Paul who have made this possible.
That's great, thanks to everyone who made this possible :)
Just a small question, is it possible to include a small default "installable" CI test for all packages, which checks if the newly built packages are actually installable in rawhide? I know of a few issues that could have been prevented if non-installable packages (because they introduce broken dependencies) were not allowed to enter rawhide without manual intervention.
Fabio
I'm keeping the point from the FAQ about where to report issues, if you run into any, here as it may now come in handy ;-)
Happy packaging!
Pierre
On Tue, Jul 30, 2019 at 10:11:07AM +0200, Fabio Valentini wrote:
On Thu, Jul 25, 2019 at 5:25 PM Pierre-Yves Chibon pingou@pingoured.fr wrote:
Good Morning Everyone,
I just wanted to let everyone know that this is now live. You can see all the updates going through (or not) to rawhide in: https://bodhi.fedoraproject.org/releases/F31
Many many thanks to all the people involved, I'm afraid I'll miss some but I'll take the risk, so here it is (in no particular order): Many thanks to Clément, Aurélien, Kevin, Nils, Mohan, Michal, Randy, Ryan, Patrick, smooge, Troy and of course Leigh, Jim and Paul who have made this possible.
That's great, thanks to everyone who made this possible :)
Just a small question, is it possible to include a small default "installable" CI test for all packages, which checks if the newly built packages are actually installable in rawhide? I know of a few issues that could have been prevented if non-installable packages (because they introduce broken dependencies) were not allowed to enter rawhide without manual intervention.
It is my understanding that this is something that Dominik's team is working on under their "distro-wide" tests that is part of the new CI objective: https://fedoraproject.org/wiki/Objectives/CI:2019
I agree that this will help rawhide in quite a few cases!
Best, Pierre
On Tue, Jul 30, 2019 at 10:41 AM Pierre-Yves Chibon pingou@pingoured.fr wrote:
On Tue, Jul 30, 2019 at 10:11:07AM +0200, Fabio Valentini wrote:
On Thu, Jul 25, 2019 at 5:25 PM Pierre-Yves Chibon pingou@pingoured.fr wrote:
Good Morning Everyone,
I just wanted to let everyone know that this is now live. You can see all the updates going through (or not) to rawhide in: https://bodhi.fedoraproject.org/releases/F31
Many many thanks to all the people involved, I'm afraid I'll miss some but I'll take the risk, so here it is (in no particular order): Many thanks to Clément, Aurélien, Kevin, Nils, Mohan, Michal, Randy, Ryan, Patrick, smooge, Troy and of course Leigh, Jim and Paul who have made this possible.
That's great, thanks to everyone who made this possible :)
Just a small question, is it possible to include a small default "installable" CI test for all packages, which checks if the newly built packages are actually installable in rawhide? I know of a few issues that could have been prevented if non-installable packages (because they introduce broken dependencies) were not allowed to enter rawhide without manual intervention.
It is my understanding that this is something that Dominik's team is working on under their "distro-wide" tests that is part of the new CI objective: https://fedoraproject.org/wiki/Objectives/CI:2019
Hopefully a generic test like rpmdeplint (currently executed in Taskotron against proposed updates) can be executed by Fedora CI as part of Rawhide gating, once the feature is done.
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Tom
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
Tom
On Fri, Jul 26, 2019 at 02:19:18PM +0100, Tom Hughes wrote:
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
So only counting the ones from bodhi, how many would be acceptable? 2? update was created, update was pushed to the buildroot? Or do we want just one: "update was pushed to the buildroot"?
I think we may want to keep the one informing that CI failed (if/when it does), which is not in the list of 5 you've mentioned.
Thanks for your help, Pierre
On 29/07/2019 11:08, Pierre-Yves Chibon wrote:
On Fri, Jul 26, 2019 at 02:19:18PM +0100, Tom Hughes wrote:
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
So only counting the ones from bodhi, how many would be acceptable? 2? update was created, update was pushed to the buildroot? Or do we want just one: "update was pushed to the buildroot"?
Well this was a package that didn't have CI enabled at all so it's not clear any of them are very useful.
I think we may want to keep the one informing that CI failed (if/when it does), which is not in the list of 5 you've mentioned.
Sure if the package has CI enabled then telling me when it fails is obviously useful.
Tom
On Mon, Jul 29, 2019 at 11:23:32AM +0100, Tom Hughes wrote:
On 29/07/2019 11:08, Pierre-Yves Chibon wrote:
On Fri, Jul 26, 2019 at 02:19:18PM +0100, Tom Hughes wrote:
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
So only counting the ones from bodhi, how many would be acceptable? 2? update was created, update was pushed to the buildroot? Or do we want just one: "update was pushed to the buildroot"?
Well this was a package that didn't have CI enabled at all so it's not clear any of them are very useful.
The "update was pushed to the buildroot" one seems useful, as it means things are working the way they should, but I guess `koji wait-repo --build` does the same if you're relying on it.
Pierre
On 29/07/2019 11:31, Pierre-Yves Chibon wrote:
On Mon, Jul 29, 2019 at 11:23:32AM +0100, Tom Hughes wrote:
On 29/07/2019 11:08, Pierre-Yves Chibon wrote:
On Fri, Jul 26, 2019 at 02:19:18PM +0100, Tom Hughes wrote:
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
So only counting the ones from bodhi, how many would be acceptable? 2? update was created, update was pushed to the buildroot? Or do we want just one: "update was pushed to the buildroot"?
Well this was a package that didn't have CI enabled at all so it's not clear any of them are very useful.
The update was pushed to the buildroot seems useful as it means things are working the way they should, but I guess koji wait-repo --build does the same if you're relying on it.
Well koji wait-repo is also more reliable, as the bodhi message fires as soon as it is submitted while wait-repo tells you when the compose has completed and the package is available to use.
Tom
On 29. 07. 19 12:08, Pierre-Yves Chibon wrote:
On Fri, Jul 26, 2019 at 02:19:18PM +0100, Tom Hughes wrote:
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
So only counting the ones from bodhi, how many would be acceptable? 2? update was created, update was pushed to the buildroot? Or do we want just one: "update was pushed to the buildroot"?
I've also opened this: https://github.com/fedora-infra/bodhi/issues/3430
On Mon, Jul 29, 2019 at 12:26:25PM +0200, Miro Hrončok wrote:
On 29. 07. 19 12:08, Pierre-Yves Chibon wrote:
On Fri, Jul 26, 2019 at 02:19:18PM +0100, Tom Hughes wrote:
On 26/07/2019 14:17, Tom Hughes wrote:
On 23/07/2019 21:51, Pierre-Yves Chibon wrote:
This is the first roll-out of this gating change, and so there may be additional tuning and fixes until things are as smooth as we want them to be. With this release we are looking for feedback on what can be improved. We have a dedicated team working on this project and we will be taking your feedback into account to improve the experience.
Not getting five additional emails for every rawhide build I do would probably be a good start?
Correction, make that seven, as I hadn't gotten to the ones sent by notifications (as opposed to bodhi) yet...
So only counting the ones from bodhi, how many would be acceptable? 2? update was created, update was pushed to the buildroot? Or do we want just one: "update was pushed to the buildroot"?
I've also opened this: https://github.com/fedora-infra/bodhi/issues/3430
And me: https://github.com/fedora-infra/bodhi/issues/3431
I'll see about merging them into one :)
Thanks, Pierre
I've been trying to build nbdkit which depends on libnbd >= 0.1.9 in Fedora Rawhide this morning.
I built libnbd a few hours ago, but it hasn't turned up in Rawhide.
I also discovered that you can now submit updates for Rawhide, so why not: https://bodhi.fedoraproject.org/updates/FEDORA-2019-d3e8c3f7da However this didn't help either.
I guess this is something to do with this change, and if it is, what am I supposed to do?
Rich.
On Fri, Jul 26, 2019 at 02:41:44PM +0100, Richard W.M. Jones wrote:
I've been trying to build nbdkit which depends on libnbd >= 0.1.9 in Fedora Rawhide this morning.
I built libnbd a few hours ago, but it hasn't turned up in Rawhide.
I also discovered that you can now submit updates for Rawhide, so why not: https://bodhi.fedoraproject.org/updates/FEDORA-2019-d3e8c3f7da However this didn't help either.
I guess this is something to do with this change, and if it is, what am I supposed to do?
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
My test of the pipeline this morning took about 5h to land in rawhide :(
I've opened an infra ticket to see how we can improve/avoid this in the future.
Also, there is no need to go via bodhi, the update you have created would have been automatically created for you. Basically in rawhide, we're using bodhi for its UI and its central place. Everybody knows it and this makes it an obvious place to check what's going on with a build/update. Unless you've added tests to your packages or are interested in it, there is basically no need to interact with bodhi.
Best, Pierre
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Rich.
On 7/27/19 8:26 AM, Richard W.M. Jones wrote:
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Yeah, the mass rebuild finished yesterday, but signing didn't catch all the way up until last night. :(
In any case it should be all back to normal now.
There were 1760 failed builds from the mass rebuild. I am resubmitting all those to try and catch at least some of the ones that just failed due to transitory builder issues or the like.
kevin
On Sun, Jul 28, 2019 at 11:49:05AM -0700, Kevin Fenzi wrote:
On 7/27/19 8:26 AM, Richard W.M. Jones wrote:
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Yeah, the mass rebuild finished yesterday, but signing didn't catch all the way up until last night. :(
In any case it should be all back to normal now.
I confirm, my canary took 13 minutes to clone, bump the release, commit, push, build, have the update created, have CI send results, have them show up in resultsdb and datagrepper, waive the failed tests and have the waiverdb and greenwave messages show in datagrepper.
So all good :)
Pierre
On Sun, Jul 28, 2019 at 11:49:05AM -0700, Kevin Fenzi wrote:
On 7/27/19 8:26 AM, Richard W.M. Jones wrote:
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Yeah, the mass rebuild finished yesterday, but signing didn't catch all the way up until last night. :(
In any case it should be all back to normal now.
It seems to be really slow again. I built a package this morning. It's 9 hours later and it's still not in Rawhide.
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344209
Rich.
On 7/30/19 9:59 AM, Richard W.M. Jones wrote:
On Sun, Jul 28, 2019 at 11:49:05AM -0700, Kevin Fenzi wrote:
On 7/27/19 8:26 AM, Richard W.M. Jones wrote:
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Yeah, the mass rebuild finished yesterday, but signing didn't catch all the way up until last night. :(
In any case it should be all back to normal now.
It seems to be really slow again. I built a package this morning. It's 9 hours later and it's still not in Rawhide.
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344209
Sadly yes I know.
The mass rebuild side tag (f31-rebuild) was tagged into f31-pending. So, the robosigner is checking each package and making sure they are signed and written out. This is taking a really long time.
I killed some koji-gc jobs this morning to help make it faster, but there's nothing much else I can do. ;(
You can check its progress with:
koji list-tagged f31-pending | wc -l
(but please do so sparingly, as that also adds additional koji load).
I think next time we should not tag it this way; we should confirm everything is signed by asking koji and then just merge it into f31.
kevin
Kevin Fenzi wrote:
On 7/27/19 8:26 AM, Richard W.M. Jones wrote:
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Yeah, the mass rebuild finished yesterday, but signing didn't catch all the way up until last night. :(
In any case it should be all back to normal now.
I'm still seeing these in f31-updates-candidate, been there for over 24 hours now:
kf5-attica-5.60.0-2.fc31 kf5-bluez-qt-5.60.0-2.fc31 kf5-karchive-5.60.0-1.fc31 kf5-kcodecs-5.60.0-2.fc31 kf5-kcoreaddons-5.60.0-2.fc31 kf5-kguiaddons-5.60.0-2.fc31 kf5-kidletime-5.60.0-2.fc31 kf5-kitemmodels-5.60.0-2.fc31 kf5-kwayland-5.60.0-2.fc31 kf5-kwindowsystem-5.60.0-2.fc31 kf5-modemmanager-qt-5.60.0-2.fc31 kf5-networkmanager-qt-5.60.0-2.fc31 kf5-prison-5.60.0-2.fc31 kf5-solid-5.60.0-2.fc31 kf5-sonnet-5.60.0-2.fc31 kf5-syntax-highlighting-5.60.0-2.fc31
-- Rex
On Tue, 2019-07-30 at 12:33 -0500, Rex Dieter wrote:
Kevin Fenzi wrote:
On 7/27/19 8:26 AM, Richard W.M. Jones wrote:
On Fri, Jul 26, 2019 at 04:08:40PM +0200, Pierre-Yves Chibon wrote:
Yes and no, robosignatory is swamped signing the builds from the mass-rebuild, which means they aren't landing in the buildroot :(
This is still taking a really long time. I built one package early this morning which still hasn't gone into Rawhide probably 6+ hours later.
Yeah, the mass rebuild finished yesterday, but signing didn't catch all the way up until last night. :(
In any case it should be all back to normal now.
I'm still seeing these in f31-updates-candidate, been there for over 24 hours now:
kf5-attica-5.60.0-2.fc31 kf5-bluez-qt-5.60.0-2.fc31 kf5-karchive-5.60.0-1.fc31 kf5-kcodecs-5.60.0-2.fc31 kf5-kcoreaddons-5.60.0-2.fc31 kf5-kguiaddons-5.60.0-2.fc31 kf5-kidletime-5.60.0-2.fc31 kf5-kitemmodels-5.60.0-2.fc31 kf5-kwayland-5.60.0-2.fc31 kf5-kwindowsystem-5.60.0-2.fc31 kf5-modemmanager-qt-5.60.0-2.fc31 kf5-networkmanager-qt-5.60.0-2.fc31 kf5-prison-5.60.0-2.fc31 kf5-solid-5.60.0-2.fc31 kf5-sonnet-5.60.0-2.fc31 kf5-syntax-highlighting-5.60.0-2.fc31
Also kernel-5.3.0-0.rc2.git0.1.fc31 .
Yep. This is due to the signing backlog, as the entire mass rebuild was tagged into f31-pending and has to be checked by robosignatory. ;(
It seems to be moving a bit faster now...
Sorry for this hassle.
kevin
* Pierre-Yves Chibon:
What does it mean for us as packagers?
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
Thanks, Florian
On Tue, Jul 30, 2019 at 4:16 PM Florian Weimer fweimer@redhat.com wrote:
- Pierre-Yves Chibon:
What does it mean for us as packagers?
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
I think that's probably caused by koji being *really* busy with tagging and merging f31-rebuild into f31 for the past day or so.
Fabio
Thanks, Florian
On Tue, Jul 30, 2019 at 04:15:31PM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
What does it mean for us as packagers?
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
It's robosignatory acting up again :(
I think this may be related to the mass-rebuild being merged, cf Kevin's comment in: https://pagure.io/fedora-infrastructure/issue/8041#comment-585310
To answer your question:
- builds land in f31-updates-candidate
- robosignatory signs them and moves them to f31-updates-testing
- bodhi picks them up from there
- once CI passes, bodhi moves them to f31
Best, Pierre
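Pierre's flow above can be sketched as a tiny state function. This is only a sketch: the tag names are the real ones from his answer, but the function, its arguments, and the "gating passed" notion (tests passed, waived, or maintainer not opted in) are hypothetical simplifications.

```python
# Sketch of the Rawhide tag flow Pierre describes. Tag names are the real
# ones from his answer; the function and its arguments are hypothetical.
def next_tag(tag: str, signed: bool, gating_passed: bool) -> str:
    if tag == "f31-updates-candidate":
        # robosignatory signs the build and moves it along
        return "f31-updates-testing" if signed else tag
    if tag == "f31-updates-testing":
        # bodhi moves the build once CI passes (or the failures are
        # waived, or the maintainer has not opted in)
        return "f31" if gating_passed else tag
    return tag  # f31: already in the buildroot

# A freshly signed build that passes gating ends up in f31:
tag = "f31-updates-candidate"
for _ in range(2):
    tag = next_tag(tag, signed=True, gating_passed=True)
```

An unsigned build (the robosignatory backlog discussed in this thread) simply stays parked in f31-updates-candidate until the signer catches up.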
* Pierre-Yves Chibon:
On Tue, Jul 30, 2019 at 04:15:31PM +0200, Florian Weimer wrote:
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
It's robosignatory acting up again :(
Oh. I guess the only thing to do for us waiting, then.
I think this may be related to the mass-rebuild being merged, cf Kevin's comment in: https://pagure.io/fedora-infrastructure/issue/8041#comment-585310
To answer your question:
- builds land in f31-updates-candidate
- robosignatory signs them and moves them to f31-updates-testing
- bodhi picks them up from there
- once CI passes, bodhi moves them to f31
Thanks for these references. This will help to recognize the pattern if it occurs again.
Florian
On Tue, Jul 30, 2019 at 04:26:30PM +0200, Pierre-Yves Chibon wrote:
On Tue, Jul 30, 2019 at 04:15:31PM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
What does it mean for us as packagers?
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
It's robosignatory acting up again :(
What is it about signing packages that takes so long? The crypto ops?
Rich.
On Tue, 2019-07-30 at 18:00 +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 04:26:30PM +0200, Pierre-Yves Chibon wrote:
On Tue, Jul 30, 2019 at 04:15:31PM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
What does it mean for us as packagers?
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
It's robosignatory acting up again :(
What is it about signing packages that takes so long? The crypto ops?
The robot's pen keeps running out of ink...
On 7/30/19 10:04 AM, Adam Williamson wrote:
On Tue, 2019-07-30 at 18:00 +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 04:26:30PM +0200, Pierre-Yves Chibon wrote:
On Tue, Jul 30, 2019 at 04:15:31PM +0200, Florian Weimer wrote:
- Pierre-Yves Chibon:
What does it mean for us as packagers?
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
What are the actual tag names? I have a glibc build which appears to be stuck in f31-updates-candidate:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344222
I wonder if this is caused by gating, or if it's something else.
It's robosignatory acting up again :(
What is it about signing packages that takes so long? The crypto ops?
The robot's pen keeps running out of ink...
:)
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed?" koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step that's slow. I am not sure if koji is just checking that they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
kevin
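The exchange Kevin describes, plus the optimization he suspects koji is missing (skip the slow write-out when the signed copy already exists on disk), could be simulated like this. Every name here is hypothetical; this is not the real robosignatory or koji API, just the shape of the loop.

```python
# Simulated robosignatory pass over a pending tag. `FakeKoji` stands in
# for the real hub; all names are hypothetical.
class FakeKoji:
    def __init__(self, signed, on_disk):
        self.signed = set(signed)    # builds with a signature recorded
        self.on_disk = set(on_disk)  # signed rpms already written out
        self.writes = 0              # count of slow write-out calls

    def is_signed(self, nvr):
        return nvr in self.signed

    def write_signed_rpms(self, nvr):  # the slow step Kevin points at
        self.writes += 1
        self.on_disk.add(nvr)

def process_pending(builds, koji):
    """Write out signed copies, skipping work already done."""
    for nvr in builds:
        if not koji.is_signed(nvr):
            continue                 # leave unsigned builds for a later pass
        if nvr in koji.on_disk:
            continue                 # already written out: the proposed shortcut
        koji.write_signed_rpms(nvr)

koji = FakeKoji(signed={"a-1-1.fc31", "b-1-1.fc31"}, on_disk={"a-1-1.fc31"})
process_pending(["a-1-1.fc31", "b-1-1.fc31", "c-1-1.fc31"], koji)
```

With the shortcut, only one write happens here; without it, a mass-rebuild-sized pending tag re-writes every build, which matches the multi-hour delays reported in this thread.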
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed' ? koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step thats slow. I am not sure if koji is just seeing if they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
Rich.
On Wed, 31 Jul 2019 at 10:16, Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed' ? koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step thats slow. I am not sure if koji is just seeing if they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
1. Because everyone's rawhide.repo says they are signed
2. Every time we get unsigned packages people start freaking out that some nation state is trying to take over their computer.
3. Because nation states do that and those packages will become F32/F33 at some point.
On Wed, Jul 31, 2019 at 10:22:36AM -0400, Stephen John Smoogen wrote:
On Wed, 31 Jul 2019 at 10:16, Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed' ? koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step thats slow. I am not sure if koji is just seeing if they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
1. Because everyone's rawhide.repo says they are signed
2. Every time we get unsigned packages people start freaking out that some nation state is trying to take over their computer.
3. Because nation states do that and those packages will become F32/F33 at some point.
Actually my question was wrong. Is there any reason we need to sign builds while they are internal to Koji (i.e. providing BuildRequires for subsequent builds)? They could still be signed when they go out to Rawhide.
Rich.
On 7/31/19 8:07 AM, Richard W.M. Jones wrote:
On Wed, Jul 31, 2019 at 10:22:36AM -0400, Stephen John Smoogen wrote:
On Wed, 31 Jul 2019 at 10:16, Richard W.M. Jones rjones@redhat.com wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed' ? koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step thats slow. I am not sure if koji is just seeing if they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
Can you define 'a long time'?
Do you have an example build for me to look at?
1. Because everyone's rawhide.repo says they are signed
2. Every time we get unsigned packages people start freaking out that some nation state is trying to take over their computer.
3. Because nation states do that and those packages will become F32/F33 at some point.
Actually my question was wrong. Is there any reason we need to sign builds while they are internal to Koji (ie. proving BuildRequires for subsequent builds)? They could still be signed when they go out to Rawhide.
Packages are signed before CI runs on them. This is so the _exact_ thing we are going to be using/shipping/building against is the thing that we actually test. When you instead test something, then change it, you leave yourself open to issues with whatever changes you are doing.
CI runs before they land in the buildroot as we want to not build against anything that was gated for whatever reason.
kevin
On Wed, Jul 31, 2019 at 08:52:52AM -0700, Kevin Fenzi wrote:
Do you have an example build for me to look at?
I waited 2 hours for ocaml-result-1.2-12.fc31. In fact it's just now become available in the buildroot. I don't know if that helps.
The next build I will be waiting for (when it completes) is ocaml-dune-1.10.0-4.fc31
Packages are signed before CI runs on them. This is so the _exact_ thing we are going to be using/shipping/building against is the thing that we actually test. When you instead test something, then change it, you leave yourself open to issues with whatever changes you are doing.
CI runs before they land in the buildroot as we want to not build against anything that was gated for whatever reason.
Makes sense, thanks.
Rich.
On Wed, Jul 31, 2019 at 03:15:32PM +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed' ? koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step thats slow. I am not sure if koji is just seeing if they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
Because administrators of Fedora infrastructure run rawhide on their laptops, and we don't want them to be easily* hackable.
* or maybe not easily, but easier than users of regular releases
On 7/31/19 7:35 AM, Tomasz Torcz wrote:
On Wed, Jul 31, 2019 at 03:15:32PM +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed' ? koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now" koji: "ok, writing them out to disk again"
it's mostly this last step thats slow. I am not sure if koji is just seeing if they were written out and returning, or actually re-writing them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
Because administrator of Fedora infrastructure run rawhide on laptops, and we don't want them to be easily* hackable.
- or maybe not easily, but easier than users of regular releases
Ha. No.
It's for a variety of reasons:
* Various groups that interact with the packages do not want to have to code in exceptions or treat some things differently. (QA, CI, package tools).
* Signing packages is a clear way to indicate where they are from (look at the 'keychecker' package): if you see a foo-1.0-1.fc29.x86_64.rpm package you can check its signature and see that it came from rawhide, or f29, or nowhere known, etc.
* If you use metalinks, rpm signatures are just gravy on top; in the end you are still just trusting SSL CAs.
* Making sure everything is signed in rawhide allows us to test/develop tooling that operates on composes instead of having to test those in stable release branches.
There's likely other things too...
kevin
* Jason L. Tibbitts, III:
"KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the KF> end you are still just trusing SSL CA's.
Only if you trust every mirror to always serve authentic content.
At one point, there was a verified hash chain from the https:// metalink service, to the repository metadata, down to individual packages. Any tampering was detected then.
I don't know if all the pieces (including the installer) still use the metalink service over https:// and verify the hashes.
Thanks, Florian
"FW" == Florian Weimer fweimer@redhat.com writes:
FW> At one point, there was a verified hash chain from the https:// FW> metalink service, to the repository metadata, down to individual FW> packages. Any tampering was detected then.
I understand that the metalink contains enough information to verify the returned repomd.xml files, but I guess I don't really know if there's enough data to chase that down to the checksum of every file that's ever expected to be on a mirror. If it is, then great, though signatures still have value because there are other ways to get RPMs than letting dnf hit the mirror network.
- J<
* Jason L. Tibbitts, III:
"FW" == Florian Weimer fweimer@redhat.com writes:
FW> At one point, there was a verified hash chain from the https:// FW> metalink service, to the repository metadata, down to individual FW> packages. Any tampering was detected then.
I understand that the metalink contains enough information to verify the returnes repomd.xml files, but I guess I don't really know if there's enough data to chase that down to the checksum of every file that's ever expected to be on a mirror.
repomd.xml has hashes for primary.xml etc., and primary.xml contains digests of the RPM files. In theory, it can all be checked.
At one point, RPM wrote unchecked file contents to disk, leading to vulnerabilities such as CVE-2013-6435. At the time, it was not possible to teach RPM to verify the data before writing it.
If it is, then great, though signatures still have value because there are other ways to get RPMs than letting dnf hit the mirror network.
I think dnf only performs signature checking if the RPMs are downloaded from repositories.
Thanks, Florian
On 7/31/19 11:09 AM, Florian Weimer wrote:
- Jason L. Tibbitts, III:
> "FW" == Florian Weimer fweimer@redhat.com writes:
FW> At one point, there was a verified hash chain from the https:// FW> metalink service, to the repository metadata, down to individual FW> packages. Any tampering was detected then.
I understand that the metalink contains enough information to verify the returnes repomd.xml files, but I guess I don't really know if there's enough data to chase that down to the checksum of every file that's ever expected to be on a mirror.
repomd.xml has hashes for primary.xml etc., and primary.xml contains digests of the RPM files. In theory, it can all be checked.
Yes, it's all checked and if tampered with would fail.
You get the metalink via https from our mirrorlist containers running on our proxies. This metalink contains a list of mirrors and the checksums for the valid repomd.xml file. You go to one of those mirrors. If repomd.xml was tampered with, dnf will call it broken and move on. If someone tampers with packages, they would not match the checksums in repomd.xml and would be treated as corrupt.
If you are using metalink and not mirrorlist or pointing directly to a mirror, you should be safe.
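The chain Kevin and Florian describe (https metalink pins repomd.xml, repomd.xml pins primary.xml, primary.xml pins the rpm digests) can be sketched with plain sha256. All the repository content below is made up; only the shape of the check is the point.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_chain(metalink_repomd_sum, repomd_raw, primary_raw, primary_sum,
                 rpm_digests, rpm_payloads):
    # metalink (served over https) pins repomd.xml
    if sha256(repomd_raw) != metalink_repomd_sum:
        return False
    # repomd.xml pins primary.xml
    if sha256(primary_raw) != primary_sum:
        return False
    # primary.xml pins every rpm
    return all(sha256(rpm_payloads[n]) == d for n, d in rpm_digests.items())

# Made-up repo content standing in for a real compose:
rpms = {"foo-1.0-1.fc31.x86_64.rpm": b"rpm payload"}
digests = {n: sha256(p) for n, p in rpms.items()}
primary = b"primary.xml listing " + digests["foo-1.0-1.fc31.x86_64.rpm"].encode()
repomd = b"repomd.xml pinning " + sha256(primary).encode()

ok = verify_chain(sha256(repomd), repomd, primary, sha256(primary), digests, rpms)
tampered = {"foo-1.0-1.fc31.x86_64.rpm": b"evil payload"}
bad = verify_chain(sha256(repomd), repomd, primary, sha256(primary), digests, tampered)
```

A tampered rpm (or metadata file) fails the chain even though every hop after the metalink travels over plain http mirrors.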
At one point, RPM wrote unchecked file contents to disk, leading to vulnerabilities such as CVE-2013-6435. At the time, it was not possible to teach RPM to verify the data before writing it.
If it is, then great, though signatures still have value because there are other ways to get RPMs than letting dnf hit the mirror network.
I think dnf only performs signature checking if the RPMs are downloaded from repositories.
Yep. I am pretty sure that is the case.
kevin
On Wed, Jul 31, 2019 at 2:45 PM Kevin Fenzi kevin@scrye.com wrote:
On 7/31/19 11:09 AM, Florian Weimer wrote:
- Jason L. Tibbitts, III:
At one point, RPM wrote unchecked file contents to disk, leading to vulnerabilities such as CVE-2013-6435. At the time, it was not possible to teach RPM to verify the data before writing it.
If it is, then great, though signatures still have value because there are other ways to get RPMs than letting dnf hit the mirror network.
I think dnf only performs signature checking if the RPMs are downloaded from repositories.
Yep. I am pretty sure that is the case.
By default this is the case, but you can configure DNF to validate signatures for all cases if you want.
You just set localpkg_gpgcheck=1 in /etc/dnf/dnf.conf
That said, you probably don't want to do that, since most downloaded packages aren't signed...
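For reference, the setting Neal mentions goes in /etc/dnf/dnf.conf like so (just the documented localpkg_gpgcheck knob; check the dnf.conf man page for your version before relying on it):

```
# /etc/dnf/dnf.conf
[main]
localpkg_gpgcheck=1
```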
"NG" == Neal Gompa ngompa13@gmail.com writes:
NG> You just set localpkg_gpgcheck=1 in /etc/dnf/dnf.conf
NG> That said, you probably don't want to do that, since most downloaded NG> packages aren't signed...
I think that the ideal behavior would be to always check, but warn/prompt for unsigned packages or those with signature failures. Certainly it's better to verify as much as possible as often as possible.
- J<
On 2019-07-31 21:35, Jason L Tibbitts III wrote:
"NG" == Neal Gompa ngompa13@gmail.com writes:
NG> You just set localpkg_gpgcheck=1 in /etc/dnf/dnf.conf
NG> That said, you probably don't want to do that, since most downloaded NG> packages aren't signed...
Cool! I wasn't even aware of this setting. Can we set that and then override with --nogpgcheck for the exceptions?
I think that the ideal behavior would be to always check, but warn/prompt for unsigned packages or those with signature failures. Certainly it's better to verify as much as possible as often as possible.
That would be even better, except maybe when scripting.
Le mercredi 31 juillet 2019 à 12:25 -0500, Jason L Tibbitts III a écrit :
"KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the KF> end you are still just trusing SSL CA's.
Only if you trust every mirror to always serve authentic content.
And, just to provide another data point: this month we tried to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
TLS is a fine thing in theory, but relying on it requires a lot more debugging capability than we have built into our tools. TLS stacks are heavily biased towards refusing to connect as soon as something does not match their expectations (and they usually forget to tell you what they didn't like).
On 7/31/19 12:05 PM, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 12:25 -0500, Jason L Tibbitts III a écrit :
> "KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the end you are still just trusting SSL CA's.
Only if you trust every mirror to always serve authentic content.
And, just to provide another data point, we tried this month to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
Any errors? Bug filed? As long as the certs were valid, normal certs, I wouldn't think there should be any reason that wouldn't work.
kevin
Le mercredi 31 juillet 2019 à 13:34 -0700, Kevin Fenzi a écrit :
On 7/31/19 12:05 PM, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 12:25 -0500, Jason L Tibbitts III a écrit :
> > "KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the end you are still just trusting SSL CA's.
Only if you trust every mirror to always serve authentic content.
And, just to provide another data point, we tried this month to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
Any errors? Bug filed? As long as the certs were valid, normal certs, I wouldn't think there should be any reason that wouldn't work.
A bug will be filed, yes (we gave up on trying to make it work yesterday). An error message would have been mighty fine; IIRC anaconda returned nothing useful, just the same behaviour as if one had made a mistake in the URL, which anaconda gave up on after a timeout.
Regards,
On 7/31/19 4:34 PM, Kevin Fenzi wrote:
On 7/31/19 12:05 PM, Nicolas Mailhot via devel wrote:
And, just to provide another data point, we tried this month to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
Any errors? Bug filed? As long as the certs were valid, normal certs, I wouldn't think there should be any reason that wouldn't work.
My guess would be a protocol version or cipher suite negotiation failure, presumably because the HTTPS endpoints use newer configurations that exclude old versions and ciphers. Hopefully Nicolas will find the real reason.
BTW, the new crypto systems like wireguard are eschewing crypto negotiation: if the current protocols are determined to be lacking, the plan is to push a new version and force everyone to upgrade.
It's pretty harsh from the operational point of view, but they have a point: if the crypto is vulnerable, it should not be possible to force a downgrade on connections you care about, and you can still run the old protocol specifically for endpoints which you cannot upgrade.
On Wed, 2019-07-31 at 13:34 -0700, Kevin Fenzi wrote:
On 7/31/19 12:05 PM, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 12:25 -0500, Jason L Tibbitts III a écrit :
> > "KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the end you are still just trusting SSL CA's.
Only if you trust every mirror to always serve authentic content.
And, just to provide another data point, we tried this month to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
Any errors? Bug filed? As long as the certs were valid, normal certs, I wouldn't think there should be any reason that wouldn't work.
Indeed - this sounds like it could potentially be a serious regression, which should be investigated and fixed.
kevin
devel mailing list -- devel@lists.fedoraproject.org To unsubscribe send an email to devel-leave@lists.fedoraproject.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
On Wed, Jul 31, 2019 at 09:05:21PM +0200, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 12:25 -0500, Jason L Tibbitts III a écrit :
> "KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the end you are still just trusting SSL CA's.
Only if you trust every mirror to always serve authentic content.
And, just to provide another data point, we tried this month to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
It's odd that they would work from an installed system and not anaconda. Are you using a self-signed cert on them? If so you can pass inst.noverifyssl to anaconda to tell it to ignore the error but still use https.
Le mercredi 31 juillet 2019 à 16:10 -0700, Brian C. Lane a écrit :
On Wed, Jul 31, 2019 at 09:05:21PM +0200, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 12:25 -0500, Jason L Tibbitts III a écrit :
> > "KF" == Kevin Fenzi kevin@scrye.com writes:
KF> * If you use metalinks, rpm signatures are just gravy on top, in the end you are still just trusting SSL CA's.
Only if you trust every mirror to always serve authentic content.
And, just to provide another data point, we tried this month to make the network install iso talk to https dnf repos (a reposync of fedora devel x86_64, without x86 packages, because we don't have the storage budget to mirror 32-bit packages and have no use for them anyway). The repos themselves worked fine from installed systems. But anaconda refused to use them until they were re-exposed over plain, unsecured http.
It's odd that they would work from an installed system and not anaconda. Are you using a self-signed cert on them?
No, a proper public cert, one that even Firefox accepts without grumbling (not an easy thing to manage these days).
If so you can pass inst.noverifyssl to anaconda to tell it to ignore the error but still use https.
Thanks for the suggestion, I had forgotten about it. Is it possible to do that manually, without a kickstart? For that installation workflow we start from a minimal unmodified install, and customize it in a later stage.
Regards,
On 7/31/19 11:41 PM, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 16:10 -0700, Brian C. Lane a écrit :
If so you can pass inst.noverifyssl to anaconda to tell it to ignore the error but still use https.
Thanks for the suggestion, I had forgotten about it. Is it possible to do that manually, without a kickstart? For that installation workflow we start from a minimal unmodified install, and customize it in a later stage.
You can add that to the kernel command line when you boot.
Le jeudi 01 août 2019 à 00:27 -0700, Samuel Sieb a écrit :
On 7/31/19 11:41 PM, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 16:10 -0700, Brian C. Lane a écrit :
If so you can pass inst.noverifyssl to anaconda to tell it to ignore the error but still use https.
Thanks for the suggestion, I had forgotten about it. Is it possible to do that manually, without a kickstart? For that installation workflow we start from a minimal unmodified install, and customize it in a later stage.
You can add that to the kernel command line when you boot.
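Concretely, that means editing the boot entry (press `e` on the install entry in the boot menu) and appending the option to the line that loads the kernel; a sketch, with a placeholder repo URL:

```
linux /images/pxeboot/vmlinuz inst.repo=https://mirror.example.org/fedora/devel/x86_64/os/ inst.noverifyssl
```

Then boot the edited entry with Ctrl-x.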
So, from grub, before the keyboard layout is properly set up. OK, not too convenient, but a lot better than nothing. Thank you.
On Thu, Aug 01, 2019 at 08:41:32AM +0200, Nicolas Mailhot via devel wrote:
Le mercredi 31 juillet 2019 à 16:10 -0700, Brian C. Lane a écrit :
It's odd that they would work from an installed system and not anaconda. Are you using a self-signed cert on them?
No, a proper public cert, one that even Firefox accepts without grumbling (not an easy thing to manage these days).
Very odd then, possibly a crypto support mismatch with the installer image?
dnf has *very* verbose logs in dnf.librepo.log; if it is failing inside dnf I'd expect something useful there.
You can also try curl from the cmdline while booted into the installer image. I'd probably start with that.
If so you can pass inst.noverifyssl to anaconda to tell it to ignore the error but still use https.
Thanks for the suggestion, I had forgotten about it. Is it possible to do that manually, without a kickstart? For that installation workflow we start from a minimal unmodified install, and customize it in a later stage.
inst.noverifyssl on the cmdline, or in your kickstart pass --noverifyssl to the url and repo lines:
https://pykickstart.readthedocs.io/en/latest/kickstart-docs.html#url
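In kickstart form that would look something like this (the URLs are placeholders):

```
url --url="https://mirror.example.org/fedora/devel/x86_64/os/" --noverifyssl
repo --name="extras" --baseurl="https://mirror.example.org/extras/x86_64/" --noverifyssl
```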
On Wed, Jul 31, 2019 at 03:15:32PM +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed?" koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now". koji: "ok, writing them out to disk again".
It's mostly this last step that's slow. I am not sure if koji just checks whether they were already written out and returns, or actually re-writes them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
My canary took 14 minutes this morning, so that's within the usual time for it.
I'll run it again right now to see if it is slower.
Pierre
On 31/07/2019 16:10, Pierre-Yves Chibon wrote:
On Wed, Jul 31, 2019 at 03:15:32PM +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed?" koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now". koji: "ok, writing them out to disk again".
It's mostly this last step that's slow. I am not sure if koji just checks whether they were already written out and returns, or actually re-writes them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
My canary took 14 minutes this morning, so that's within the usual time for it.
I'll run it again right now to see if it is slower.
It seems to vary quite a bit. So far today I've seen about 45 minutes then 15 and I'm now waiting on another one that's at 50 minutes and counting.
Tom
On Wed, Jul 31, 2019 at 04:35:11PM +0100, Tom Hughes wrote:
On 31/07/2019 16:10, Pierre-Yves Chibon wrote:
On Wed, Jul 31, 2019 at 03:15:32PM +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed?" koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now". koji: "ok, writing them out to disk again".
It's mostly this last step that's slow. I am not sure if koji just checks whether they were already written out and returns, or actually re-writes them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
My canary took 14 minutes this morning, so that's within the usual time for it.
I'll run it again right now to see if it is slower.
It seems to vary quite a bit. So far today I've seen about 45 minutes then 15 and I'm now waiting on another one that's at 50 minutes and counting.
I've been waiting so far nearly 2 hours for this one to get into the buildroot:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344542
Rich.
On Wed, Jul 31, 2019 at 04:39:09PM +0100, Richard W.M. Jones wrote:
On Wed, Jul 31, 2019 at 04:35:11PM +0100, Tom Hughes wrote:
On 31/07/2019 16:10, Pierre-Yves Chibon wrote:
On Wed, Jul 31, 2019 at 03:15:32PM +0100, Richard W.M. Jones wrote:
On Tue, Jul 30, 2019 at 11:11:34AM -0700, Kevin Fenzi wrote:
In this case it's koji.
For every package in the mass rebuild (f31-pending tag) robosign asks koji "hey, is foobar-1.0.1-1.fc31 signed?" koji checks... "yes, it is". robosign: "great, then I ask you to write out the signed rpms now". koji: "ok, writing them out to disk again".
It's mostly this last step that's slow. I am not sure if koji just checks whether they were already written out and returns, or actually re-writes them out. It seems like it might be the latter, which makes me suspect koji could optimize this somewhat.
It's still taking a long time today to get builds through Koji and into Rawhide. Is there a reason we need to sign builds in Rawhide?
My canary took 14 minutes this morning, so that's within the usual time for it.
I'll run it again right now to see if it is slower.
It seems to vary quite a bit. So far today I've seen about 45 minutes then 15 and I'm now waiting on another one that's at 50 minutes and counting.
I've been waiting so far nearly 2 hours for this one to get into the buildroot:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1344542
My canary run took 24 minutes; apparently the CI pipeline was slower than usual, but the rest of the workflow seemed fine.
$ koji buildinfo ocaml-result-1.2-12.fc31 returns: Tags: f31 f31-updates-pending
So it should be in the buildroot. Is it not?
Pierre
On 31/07/2019 16:58, Pierre-Yves Chibon wrote:
My canary run took 24 minutes; apparently the CI pipeline was slower than usual, but the rest of the workflow seemed fine.
$ koji buildinfo ocaml-result-1.2-12.fc31 returns: Tags: f31 f31-updates-pending
So it should be in the buildroot. Is it not?
Well wait-repo is not returning:
koji wait-repo --build=ocaml-result-1.2-12.fc31 f31-build
Tom
On 7/31/19 9:06 AM, Tom Hughes wrote:
On 31/07/2019 16:58, Pierre-Yves Chibon wrote:
My canary run took 24 minutes; apparently the CI pipeline was slower than usual, but the rest of the workflow seemed fine.
$ koji buildinfo ocaml-result-1.2-12.fc31 returns: Tags: f31 f31-updates-pending
So it should be in the buildroot. Is it not?
Well wait-repo is not returning:
koji wait-repo --build=ocaml-result-1.2-12.fc31 f31-build
It's there now.
Odd that it would take that long for a newrepo...
According to koji tag history it should have been done like 10 hours ago, and signing took... 20 seconds.
Wed Jul 31 07:50:50 2019: ocaml-result-1.2-12.fc31 tagged into f31-updates-candidate by rjones
Wed Jul 31 07:51:00 2019: ocaml-result-1.2-12.fc31 untagged from f31-updates-candidate by autopen
Wed Jul 31 07:51:00 2019: ocaml-result-1.2-12.fc31 tagged into f31-updates-testing-pending by autopen
Wed Jul 31 07:51:06 2019: ocaml-result-1.2-12.fc31 untagged from f31-updates-testing-pending by bodhi
Wed Jul 31 07:51:09 2019: ocaml-result-1.2-12.fc31 tagged into f31 by bodhi [still active]
Wed Jul 31 07:51:10 2019: ocaml-result-1.2-12.fc31 tagged into f31-updates-pending by bodhi [still active]
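As a side note, the gaps in a tag history like the one above can be checked mechanically; a small sketch, assuming the timestamp format koji prints and using the first and last events quoted above:

```python
from datetime import datetime

# Timestamp format as printed in the koji tag history above.
FMT = "%a %b %d %H:%M:%S %Y"

# First and last events from the history quoted above.
first = datetime.strptime("Wed Jul 31 07:50:50 2019", FMT)
last = datetime.strptime("Wed Jul 31 07:51:10 2019", FMT)

elapsed = (last - first).total_seconds()
print(elapsed)  # 20.0 -- the tagging chain itself took only seconds
```

The same parsing works on full tag-history output if you split each line on the first ": ".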
kevin
On Wed, Jul 31, 2019 at 05:06:17PM +0100, Tom Hughes wrote:
On 31/07/2019 16:58, Pierre-Yves Chibon wrote:
My canary run took 24 minutes; apparently the CI pipeline was slower than usual, but the rest of the workflow seemed fine.
$ koji buildinfo ocaml-result-1.2-12.fc31 returns: Tags: f31 f31-updates-pending
So it should be in the buildroot. Is it not?
Well wait-repo is not returning:
koji wait-repo --build=ocaml-result-1.2-12.fc31 f31-build
It just went into the buildroot a few minutes ago. I think total waiting time was about 2 hours 20 mins for this one.
Rich.
On 7/31/19 9:29 AM, Richard W.M. Jones wrote:
On Wed, Jul 31, 2019 at 05:06:17PM +0100, Tom Hughes wrote:
On 31/07/2019 16:58, Pierre-Yves Chibon wrote:
My canary run took 24 minutes; apparently the CI pipeline was slower than usual, but the rest of the workflow seemed fine.
$ koji buildinfo ocaml-result-1.2-12.fc31 returns: Tags: f31 f31-updates-pending
So it should be in the buildroot. Is it not?
Well wait-repo is not returning:
koji wait-repo --build=ocaml-result-1.2-12.fc31 f31-build
It just went into the buildroot a few minutes ago. I think total waiting time was about 2 hours 20 mins for this one.
Pretty puzzling... it looks like it went through tagging really fast, but somehow didn't land in the buildroot. ;(
I'll look and see if there's some buildroot repo regen problem happening...
kevin
On Wed, Jul 31, 2019, 22:42 Kevin Fenzi kevin@scrye.com wrote:
On 7/31/19 9:29 AM, Richard W.M. Jones wrote:
On Wed, Jul 31, 2019 at 05:06:17PM +0100, Tom Hughes wrote:
On 31/07/2019 16:58, Pierre-Yves Chibon wrote:
My canary run took 24 minutes; apparently the CI pipeline was slower than usual, but the rest of the workflow seemed fine.
$ koji buildinfo ocaml-result-1.2-12.fc31 returns: Tags: f31 f31-updates-pending
So it should be in the buildroot. Is it not?
Well wait-repo is not returning:
koji wait-repo --build=ocaml-result-1.2-12.fc31 f31-build
It just went into the buildroot a few minutes ago. I think total waiting time was about 2 hours 20 mins for this one.
Pretty puzzling... it looks like it went through tagging really fast, but somehow didn't land in the buildroot. ;(
I'll look and see if there's some buildroot repo regen problem happening...
FWIW, I just built two packages (snakeyaml and xbean), and for both of them I received the "bodhi pushed this update to stable" email even before "fedpkg build" exited. So it's pretty fast now, at least for small packages like these.
Fabio
kevin
* Pierre-Yves Chibon:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
I see both “Status stable” and a “Push to Testing” button in the upper right here:
https://bodhi.fedoraproject.org/updates/FEDORA-2019-51c4168307
Is this a UI issue?
Thanks, Florian
On 8/2/19 11:13 AM, Florian Weimer wrote:
- Pierre-Yves Chibon:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
I see both “Status stable” and a “Push to Testing” button in the upper right here:
https://bodhi.fedoraproject.org/updates/FEDORA-2019-51c4168307
Is this a UI issue?
It's an issue of some kind definitely... either it should not show that or not allow it.
kevin
On Sat, Aug 03, 2019 at 11:20:35AM -0700, Kevin Fenzi wrote:
On 8/2/19 11:13 AM, Florian Weimer wrote:
- Pierre-Yves Chibon:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
I see both “Status stable” and a “Push to Testing” button in the upper right here:
https://bodhi.fedoraproject.org/updates/FEDORA-2019-51c4168307
Is this a UI issue?
It's an issue of some kind definitely... either it should not show that or not allow it.
+1
Could you please make a ticket of this at: https://github.com/fedora-infra/bodhi/issues/ ?
Thanks, Pierre
* Pierre-Yves Chibon:
On Sat, Aug 03, 2019 at 11:20:35AM -0700, Kevin Fenzi wrote:
On 8/2/19 11:13 AM, Florian Weimer wrote:
- Pierre-Yves Chibon:
When you run `fedpkg build` on Rawhide, your package will be built in a new koji tag (which will be the default target for Rawhide). The package will be picked up from this koji tag, signed and moved onto a second tag. Bodhi will be notified by koji once this new build is signed and will automatically create an update for it (you will be notified about this by email by bodhi directly) with a “Testing” status. If the package maintainer has not opted in into the CI workflow, the update will be pushed to “Stable” and the build will be pushed into the regular Rawhide tag, making it available in the Rawhide buildroot, just as it is today.
I see both “Status stable” and a “Push to Testing” button in the upper right here:
https://bodhi.fedoraproject.org/updates/FEDORA-2019-51c4168307
Is this a UI issue?
It's an issue of some kind definitely... either it should not show that or not allow it.
+1
Could you please make a ticket of this at: https://github.com/fedora-infra/bodhi/issues/ ?
Done: https://github.com/fedora-infra/bodhi/issues/3451
Thanks, Florian