I had a long discussion yesterday with Colin about some of the pain points that currently lead him to have a separate atomic-workstation build on the CentOS infrastructure, and what we can do to address those and consolidate back to the Fedora infrastructure.
The long-term goal we have is getting to the point where someone who is moderately adventuresome can consume Fedora Atomic Workstation in a rolling fashion - every week a new version of Atomic Workstation shows up with whatever minor or major updates are considered stable, and if something breaks, rpm-ostree offers the ability to roll back.
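(To make that concrete, the week-to-week flow I have in mind looks roughly like this - just a sketch using the existing rpm-ostree commands:)

    # see what's deployed and whether a new tree is pending
    rpm-ostree status
    # pull this week's tree; the current deployment is kept as a fallback
    rpm-ostree upgrade
    systemctl reboot
    # if the new tree is broken, make the previous deployment the default again
    rpm-ostree rollback
    systemctl reboot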
For Atomic Host, they offer this experience based on the *last* stable release of Fedora - so when a new release of OpenShift or atomic-cli happens, they rebase it in f25, and then the f25-based Atomic Host image is updated. This provides something much more stable than basing their releases on Rawhide, because only a fraction of the packages get updated.
But we can't literally follow this model for workstation, because we can't make that conceptual separation between the stable base and the stuff that is updated - kernel, systemd, NetworkManager, gnome-shell all have roughly the same status. The best separation we have for Workstation is operating system vs. apps, and Flatpak is the route forward to allow people to try out new apps on a stale base.
So what we converged on is to concentrate for now on making consuming Rawhide via ostree better - once we get experience there, we can see whether a *separate* rolling-but-stable stream might make sense and try to convince the wider Fedora project about that.
I've listed some goals below, trying to order them from things that are just a bit more than configuration changes, to things that require a bit of implementation and development, to things that are major changes to how we work in Fedora.
Goal 1 (immediate)
==================
Have the ostree for at least the rawhide version of Fedora Workstation ("Fedora Atomic Workstation") rebuild more than once a day. Right now, fixing a problem with the ostree image is painful, since you either have to set up a local build environment or you can only test one change a day.
My understanding is that the way this was resolved for Fedora Atomic Host was that the compose for Fedora Atomic Host was separated from the main compose, and the Atomic Host compose gets run from a cron job every fifteen minutes or so (presumably throttled on the completion of the last compose?).
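(For illustration, the usual way to get "every fifteen minutes, but never two at once" is a cron entry guarded by flock - a sketch; the user, lock file, and script path here are made up:)

    # /etc/cron.d/atomic-ws-compose (hypothetical)
    */15 * * * * compose flock -n /var/lock/atomic-ws-compose.lock /usr/local/bin/atomic-ws-compose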
Ways of proceeding:
* Have a separate "Fedora Atomic Workstation" compose copied from however the Fedora Atomic Host compose is done.
* Run the entire Fedora Rawhide compose process out of a cron job, like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
* Run just the *workstation* Fedora Rawhide compose out of a cron job - I don't know how separable one edition is from the overall process.
Goal 2 (immediate)
==================
Have a branch for the ostree repository for Fedora Workstation rawhide that updates once a week rather than with every compose. This could be:
* Simply done at a fixed time
* Gated based on a human (possibly looking at the results of automated tests)
* Gated automatically based on tests
We'd also want to make sure that there are install ISOs of these tags - we could possibly build ISOs only when a tag is made, to speed up the continuous ostree compose process.
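(Mechanically, making the weekly tag could be as cheap as pointing a second ref at a commit that passed whatever gate we choose - a sketch, with guessed repo path and ref names:)

    REPO=/srv/ostree/repo
    # retire last week's ref, then point it at the commit being promoted
    ostree --repo=$REPO refs --delete fedora/rawhide/x86_64/workstation-weekly
    ostree --repo=$REPO refs fedora/rawhide/x86_64/workstation \
        --create=fedora/rawhide/x86_64/workstation-weekly
    # regenerate the summary file so clients see the updated ref
    ostree --repo=$REPO summary -u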
Goal 3 (short-term)
===================
Have tests that run automatically on the workstation ostree repositories.
Goal 4 (short-term)
===================
Have some way to cherry-pick selected security fixes and do an async update of the once-a-week tag. This would involve some sort of snapshot of the state of the koji tag used to build that version, which could then be cloned and updated with newer versions.
Goal 5 (longer-term)
====================
Extend the continuous build process to also apply to Bodhi-managed distributions, whether released (like f25) or unreleased (like f26). ostree branches corresponding to updates and updates-testing would be built continuously for testing purposes, and automated tests would be run against them.
Goal 6 (longer-term)
====================
Have a way of doing development branches (corresponding roughly to side-tags in Koji) so that someone working on the next version of GNOME or systemd can land changes and get them run through CI without breaking people following rawhide.
Goal 7 (future)
===============
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
On Fri, Mar 03, 2017 at 10:21:56AM -0500, Owen Taylor wrote:
- Run the entire Fedora Rawhide compose process out of a cron job,
like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
- Run just the *workstation* Fedora Rawhide compose out of a cron job
- I don't know how separable one edition is from the overall process.
Right now, it is very, very tightly coupled.
Goal 7 (future)
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month-tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
On Fri, 2017-03-03 at 11:30 -0500, Matthew Miller wrote:
On Fri, Mar 03, 2017 at 10:21:56AM -0500, Owen Taylor wrote:
- Run the entire Fedora Rawhide compose process out of a cron job,
like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
Well. I think that's over-simplifying by quite a lot, honestly. Where are people who *do* run Rawhide going to get their packages from? Do we just ship a fedora-repos-rawhide which points to a symlink to the 'latest' compose in kojipkgs, or what? If we do that, how do we test that Rawhide dnf / PackageKit / etc. work okay with mirrormanager?
Out of the three proposals, TBH, this one looked like by far the *worst* idea to me.
- Run just the *workstation* Fedora Rawhide compose out of a cron job
- I don't know how separable one edition is from the overall process.
Right now, it is very, very tightly coupled.
I don't think that's the whole story. We can't just decouple bits of 'the official Rawhide compose', no - a compose is a compose is a compose, it's a unitary thing. But we *can* quite easily set up different, concurrent composes, AIUI. It wouldn't be infeasible at all to have a different compose profile which just built a much smaller set of deliverables. This is, after all, basically exactly what the Fedora-Atomic, Fedora-Docker and Fedora-Cloud composes we run nightly from the last stable release are.
Goal 7 (future)
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month-tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
I don't think we've discussed it, no. I think the focus is a bit different: the NoMoreAlpha thing is about keeping Rawhide basically functional, really, not about keeping it at a level of quality where we could call it 'rolling stable'.
On Sat, Mar 04, 2017 at 05:15:59PM -0800, Adam Williamson wrote:
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
Well. I think that's over-simplifying by quite a lot, honestly. Where are people who *do* run Rawhide going to get their packages from? Do we just ship a fedora-repos-rawhide which points to a symlink to the 'latest' compose in kojipkgs, or what? If we do that, how do we test that Rawhide dnf / PackageKit / etc. work okay with mirrormanager?
I was thinking simply a different path / rsync module which we tell the mirror admins will be updated constantly. We could make sure a few mirrors *do* run (and frequently) for making sure the tools work with mirrormanager.
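(Roughly, a stanza like this in rsyncd.conf on the master mirror - the module name and path are hypothetical:)

    [fedora-rawhide-rolling]
        path = /srv/pub/fedora/linux/development/rawhide
        comment = Rawhide, updated continuously; no consistency guarantees
        read only = yes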
On Sat, 2017-03-04 at 17:15 -0800, Adam Williamson wrote:
Goal 7 (future)
===============
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
I don't think we've discussed it, no. I think the focus is a bit different: the NoMoreAlpha thing is about keeping Rawhide basically functional, really, not about keeping it at a level of quality where we could call it 'rolling stable'.
I don't think anybody expects Rawhide to miraculously turn into "rolling stable" - the first step is to get it to "rolling usable" :-)
Hopefully the things I'm proposing here are complementary to the ideas of NoMoreAlpha - in the context of Atomic Workstation, we have the ability to go beyond the package level and actually run integration tests on the actual operating system that the user will be running, and then we can add a second level of gating.
Package is built => package tests => package goes into tree => integration tests => tree is tagged for distribution to users
The more we catch problems at the first level, the better, but some problems won't be found that way.
If we get better at making Rawhide usable, we can decide whether to push forward towards further stability (say, by having devel branches for major changes and only merging them when they are pretty stable), or alternatively use the expertise we've gained to be better at making changes to a stable branch.
- Owen
On Mon, 2017-03-06 at 15:52 -0500, Owen Taylor wrote:
Hopefully the things I'm proposing here are complementary to the ideas of NoMoreAlpha - in the context of Atomic Workstation, we have the ability to go beyond the package level and actually run integration tests on the actual operating system that the user will be running, and then we can add a second level of gating.
We already do this for regular Workstation, and those tests are planned to be a part of the NoMoreAlpha system (the openQA desktop tests).
On Fri, 2017-03-03 at 11:30 -0500, Matthew Miller wrote:
On Fri, Mar 03, 2017 at 10:21:56AM -0500, Owen Taylor wrote:
* Run the entire Fedora Rawhide compose process out of a cron job, like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
Atomic Host in rawhide is not done as a separate compose; it's all part of the one rawhide compose.
* Run just the *workstation* Fedora Rawhide compose out of a cron job
- I don't know how separable one edition is from the overall
process.
Right now, it is very, very tightly coupled.
Goal 7 (future)
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month-tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Dennis
On Tue, 2017-03-07 at 08:30 -0600, Dennis Gilmore wrote:
On Fri, 2017-03-03 at 11:30 -0500, Matthew Miller wrote:
On Fri, Mar 03, 2017 at 10:21:56AM -0500, Owen Taylor wrote:
* Run the entire Fedora Rawhide compose process out of a cron job, like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
Atomic Host in rawhide is not done as a separate compose; it's all part of the one rawhide compose.
I may have misunderstood. So there is no Atomic Host ostree creation that happens more than once a day?
Goal 7 (future)
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month-tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Asking people to fix one thing a day and then context-switch back into the task the next day is incredibly discouraging to getting things working. So, for now, let's concentrate on goal one: how do we get a workstation ostree built more than once a day? What needs to happen to enable that?
- Owen
On Tue, 2017-03-07 at 10:26 -0500, Owen Taylor wrote:
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Asking people to fix one thing a day and then context-switch back into the task the next day is incredibly discouraging to getting things working. So, for now, let's concentrate on goal one: how do we get a workstation ostree built more than once a day? What needs to happen to enable that?
I think Dennis' point is that it seems odd to be talking about doing extra work to build it more than once a day when currently it hasn't built successfully one time since 2016-10-18.
On Tue, 2017-03-07 at 07:54 -0800, Adam Williamson wrote:
On Tue, 2017-03-07 at 10:26 -0500, Owen Taylor wrote:
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Asking people to fix one thing a day and then context-switch back into the task the next day is incredibly discouraging to getting things working. So, for now, let's concentrate on goal one: how do we get a workstation ostree built more than once a day? What needs to happen to enable that?
I think Dennis' point is that it seems odd to be talking about doing extra work to build it more than once a day when currently it hasn't built successfully one time since 2016-10-18.
If I was suggesting that we need to fail the build multiple times a day and never look at it, that would indeed be odd :-)
But the context of my mail was figuring out a plan to bring effort going into a build of Atomic Workstation on CentOS CI back to the Fedora infrastructure.
And one of the main reasons that Colin and others have been working on that rather than the Fedora build is because the nightly nature of the Fedora build makes it painful to contribute to.
Speaking personally, the one time I tried to fix the Fedora workstation build over the last few months, I came up with a fix based on the logs, committed it, context switched away, and had other things to do the next day.
Owen
On Tue, 2017-03-07 at 11:13 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 07:54 -0800, Adam Williamson wrote:
On Tue, 2017-03-07 at 10:26 -0500, Owen Taylor wrote:
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Asking people to fix one thing a day and then context-switch back into the task the next day is incredibly discouraging to getting things working. So, for now, let's concentrate on goal one: how do we get a workstation ostree built more than once a day? What needs to happen to enable that?
I think Dennis' point is that it seems odd to be talking about doing extra work to build it more than once a day when currently it hasn't built successfully one time since 2016-10-18.
If I was suggesting that we need to fail the build multiple times a day and never look at it, that would indeed be odd :-)
But the context of my mail was figuring out a plan to bring effort going into a build of Atomic Workstation on CentOS CI back to the Fedora infrastructure.
And one of the main reasons that Colin and others have been working on that rather than the Fedora build is because the nightly nature of the Fedora build makes it painful to contribute to.
Speaking personally, the one time I tried to fix the Fedora workstation build over the last few months, I came up with a fix based on the logs, committed it, context switched away, and had other things to do the next day.
FWIW, composes are *typically* done once a day, but there's nothing set in stone about it (if you check the recent record, we've been doing rather a lot more than one compose per day lately). You can always ask releng to fire another compose if you want to check if a change worked, and they'll typically be fine with doing that if there's a legitimate reason for it.
On Tue, Mar 7, 2017, at 10:54 AM, Adam Williamson wrote:
I think Dennis' point is that it seems odd to be talking about doing extra work to build it more than once a day when currently it hasn't built successfully one time since 2016-10-18.
Just today I hit this again:
https://github.com/projectatomic/rpm-ostree/issues/415
Decided to push a change to work around it: https://pagure.io/atomic-ws/c/cb81e7def9eff95a0a759e2ffdfd7e1c19a07a47?branc...
Retriggered a build: https://ci.centos.org/view/Atomic/job/atomic-ws-treecompose/4171/console (Ideally we'd have push notification from pagure)
Then tried it (30 mins later, a lot could be optimized there too), and it worked.
As the person who does 90% of the actual fixes in the ostree/anaconda stack, I can tell you this isn't some theoretical concern.
On Tue, 2017-03-07 at 10:26 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 08:30 -0600, Dennis Gilmore wrote:
On Fri, 2017-03-03 at 11:30 -0500, Matthew Miller wrote:
On Fri, Mar 03, 2017 at 10:21:56AM -0500, Owen Taylor wrote:
* Run the entire Fedora Rawhide compose process out of a cron job, like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
Atomic Host in rawhide is not done as a separate compose; it's all part of the one rawhide compose.
I may have misunderstood. So there is no Atomic Host ostree creation that happens more than once a day?
There is no Atomic Host compose that happens more than once a day currently. We do a daily one for the latest stable Fedora, and we build all the pieces as part of rawhide. We also have an ostree that we continuously update; it is, however, only an ostree. We could do a workstation ostree as well.
Goal 7 (future)
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month-tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Asking people to fix one thing a day and then context-switch back into the task the next day is incredibly discouraging to getting things working. So, for now, let's concentrate on goal one: how do we get a workstation ostree built more than once a day? What needs to happen to enable that?
I am just asking you to work with us, pay attention, and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
Dennis
On Tue, 2017-03-07 at 11:26 -0600, Dennis Gilmore wrote:
On Tue, 2017-03-07 at 10:26 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 08:30 -0600, Dennis Gilmore wrote:
On Fri, 2017-03-03 at 11:30 -0500, Matthew Miller wrote:
On Fri, Mar 03, 2017 at 10:21:56AM -0500, Owen Taylor wrote:
* Run the entire Fedora Rawhide compose process out of a cron job, like the Fedora Atomic Host compose. This would likely require us to remove Rawhide from the mirrored set, but mirroring Rawhide doesn't seem important - it is presumably a tiny portion of overall Fedora bandwidth usage.
I agree that separating Rawhide out of the main mirror channel would be relatively low-impact; it's a tiny fraction of the connections every day. We might want to offer a nightly snapshot on an optional mirror channel or something like this.
Atomic Host in rawhide is not done as a separate compose; it's all part of the one rawhide compose.
I may have misunderstood. So there is no Atomic Host ostree creation that happens more than once a day?
There is no Atomic Host compose that happens more than once a day currently. We do a daily one for the latest stable Fedora, and we build all the pieces as part of rawhide. We also have an ostree that we continuously update; it is, however, only an ostree. We could do a workstation ostree as well.
Aha, that's what Colin meant. Yes, a continuously updated ostree of workstation would be very useful, and in fact, basically sufficient, since most operating system changes can be tested at the ostree level (an updated package, a change to the package set, etc.)
We should be able to figure out how to do automated testing directly on ostrees as well.
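(Even something trivial would be a start - e.g. checking the tree out of the repo and asserting that the bits we expect are actually in the image; a sketch, with guessed repo and ref names:)

    # check the composed tree out somewhere inspectable (user mode, no root needed)
    ostree --repo=/srv/ostree/repo checkout -U \
        fedora/rawhide/x86_64/workstation /tmp/ws-tree
    # trivial smoke test
    test -x /tmp/ws-tree/usr/bin/gnome-shell && echo OK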
Having an anaconda installer image is an important part of letting people try out the operating system, but a continuously built anaconda installer image isn't a top priority - it's most useful for catching and fixing problems with building the installer image.
Goal 7 (future)
Have a "rolling stable" stream of Fedora that gets major updates not on a six-month-tempo, but after those changes have seen testing in Rawhide. We already treat the kernel like this.
This overlaps a lot with work Adam and Dennis have been doing (see https://fedoraproject.org/wiki/Changes/NoMoreAlpha for part but not all). Have you talked with them?
Honestly, the best thing you can do to get it working, stable, and better right now is to have people be engaged, paying attention, testing, and fixing issues. The workstation ostree pieces have been failing to compose for months; as it's non-blocking, Releng does not look at it at all, since we have too many other things going on. You and your team have to get engaged and work on it if you want it to change and be something going forward.
Asking people to fix one thing a day and then context-switch back into the task the next day is incredibly discouraging to getting things working. So, for now, let's concentrate on goal one: how do we get a workstation ostree built more than once a day? What needs to happen to enable that?
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
Owen
On Tue, 2017-03-07 at 13:24 -0500, Owen Taylor wrote:
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
I believe:
https://pagure.io/fedora-lorax-templates/pull-request/13 https://pagure.io/fedora-lorax-templates/pull-request/14
should get things working again, though it's not something I could easily test locally. As noted in the PR, matching changes to how lorax is invoked are needed.
- Owen
On Tue, 2017-03-07 at 14:08 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 13:24 -0500, Owen Taylor wrote:
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
I believe:
https://pagure.io/fedora-lorax-templates/pull-request/13 https://pagure.io/fedora-lorax-templates/pull-request/14
should get things working again, though it's not something I could easily test locally. As noted in the PR, matching changes to how lorax is invoked are needed.
- Owen
I cannot merge the requests, as the commits are not signed off on.
Dennis
On Tue, Mar 7, 2017 at 5:33 PM, Dennis Gilmore dennis@ausil.us wrote:
On Tue, 2017-03-07 at 14:08 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 13:24 -0500, Owen Taylor wrote:
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
I believe:
https://pagure.io/fedora-lorax-templates/pull-request/13 https://pagure.io/fedora-lorax-templates/pull-request/14
should get things working again, though it's not something I could easily test locally. As noted in the PR, matching changes to how lorax is invoked are needed.
- Owen
I cannot merge the requests, as the commits are not signed off on.
That project has no indication that commits must be signed off. The only thing there to suggest it would be previous commits. However, the Signed-off-by in those previous commit logs is literally meaningless because there is nothing in the code base that describes what it means. You're just following a convention at this point, and how is a developer supposed to know what he or she is signing off on?
If the intention is to be using the DCO (https://developercertificate.org/) then put that in the README.md file, or COPYING, or something to at least give contributors some idea what the hell is going on.
josh
On Tue, 2017-03-07 at 19:10 -0500, Josh Boyer wrote:
On Tue, Mar 7, 2017 at 5:33 PM, Dennis Gilmore dennis@ausil.us wrote:
On Tue, 2017-03-07 at 14:08 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 13:24 -0500, Owen Taylor wrote:
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
I believe:
https://pagure.io/fedora-lorax-templates/pull-request/13 https://pagure.io/fedora-lorax-templates/pull-request/14
should get things working again, though it's not something I could easily test locally. As noted in the PR, matching changes to how lorax is invoked are needed.
- Owen
I cannot merge the requests, as the commits are not signed off on.
That project has no indication that commits must be signed off. The only thing there to suggest it would be previous commits. However, the Signed-off-by in those previous commit logs is literally meaningless because there is nothing in the code base that describes what it means. You're just following a convention at this point, and how is a developer supposed to know what he or she is signing off on?
If the intention is to be using the DCO (https://developercertificate.org/) then put that in the README.md file, or COPYING, or something to at least give contributors some idea what the hell is going on.
I have submitted a pull request updating the README.md: https://pagure.io/fedora-lorax-templates/pull-request/15 - there was never an intention to be vague; it was an oversight when we moved things to Pagure. All of the release engineering controlled repos require commits to be signed off on, in order to use the DCO.
Dennis
Thanks, Dennis! (There's an "artifcats" typo in the commit.)
I updated my PRs to include the sign-offs.
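(For anyone else who hits this: for a single-commit pull request it's just a matter of amending and force-pushing, e.g.:)

    git commit --amend -s        # appends a Signed-off-by: trailer
    git push -f origin <branch>  # whatever branch the PR was opened from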
- Owen
----- Original Message -----
On Tue, 2017-03-07 at 19:10 -0500, Josh Boyer wrote:
On Tue, Mar 7, 2017 at 5:33 PM, Dennis Gilmore dennis@ausil.us wrote:
On Tue, 2017-03-07 at 14:08 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 13:24 -0500, Owen Taylor wrote:
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
I believe:
https://pagure.io/fedora-lorax-templates/pull-request/13 https://pagure.io/fedora-lorax-templates/pull-request/14
should get things working again, though it's not something I could easily test locally. As noted in the PR, matching changes to how lorax is invoked are needed.
- Owen
I cannot merge the requests, as the commits are not signed off on.
That project has no indication that commits must be signed off. The only thing there to suggest it would be previous commits. However, the Signed-off-by in those previous commit logs is literally meaningless because there is nothing in the code base that describes what it means. You're just following a convention at this point, and how is a developer supposed to know what he or she is signing off on?
If the intention is to be using the DCO (https://developercertificate.org/) then put that in the README.md file, or COPYING, or something to at least give contributors some idea what the hell is going on.
I have submitted a pull request updating the README.md: https://pagure.io/fedora-lorax-templates/pull-request/15 - there was never an intention to be vague; it was an oversight when we moved things to Pagure. All of the release engineering controlled repos require commits to be signed off on, in order to use the DCO.
Dennis
On Wed, Mar 8, 2017 at 3:14 PM, Dennis Gilmore dennis@ausil.us wrote:
On Tue, 2017-03-07 at 19:10 -0500, Josh Boyer wrote:
On Tue, Mar 7, 2017 at 5:33 PM, Dennis Gilmore dennis@ausil.us wrote:
On Tue, 2017-03-07 at 14:08 -0500, Owen Taylor wrote:
On Tue, 2017-03-07 at 13:24 -0500, Owen Taylor wrote:
I am just asking you to work with us, pay attention and fix the things that have been failing for months with no one looking at it. Step one is getting it built once a day, because that does not happen and has not in ages.
I'm sorry it's been sitting there broken for so long. Certainly more people following the builds and more people trying out the result is really important, and we'll work towards that. The problem that it has been hitting was already fixed for Atomic Host, so I'll work on a pull-request to copy those changes over.
I believe:
https://pagure.io/fedora-lorax-templates/pull-request/13 https://pagure.io/fedora-lorax-templates/pull-request/14
should get things working again, though it's not something I could easily test locally. As noted in the PR, matching changes to how lorax is invoked are needed.
- Owen
I cannot merge the requests, as the commits are not signed off on.
That project has no indication that commits must be signed off. The only thing there to suggest it would be previous commits. However, the Signed-off-by in those previous commit logs is literally meaningless because there is nothing in the code base that describes what it means. You're just following a convention at this point, and how is a developer supposed to know what he or she is signing off on?
If the intention is to be using the DCO (https://developercertificate.org/) then put that in the README.md file, or COPYING, or something to at least give contributors some idea what the hell is going on.
I have submitted a pull request updating the README.md: https://pagure.io/fedora-lorax-templates/pull-request/15 - there was never an intention to be vague; it was an oversight when we moved things to Pagure. All of the release engineering controlled repos require commits to be signed off on, in order to use the DCO.
Thanks!
josh
The primary concern I had around this was - I want to make something that "early adopters" could test and provide feedback on, but *also* run seriously.
By "seriously" for example I mean we need security updates, and we need at least one "stable" branch where things aren't churning too much for the base OS. So having the project *solely* linked to rawhide doesn't really meet those criteria today.
If we have both f26 and rawhide branches that seems OK.
But I also want the ability to quickly test changes to the f26 version (same requirement for Atomic Host actually) - be able to quickly pull in a testing version of e.g. systemd or anaconda that *don't* affect things derived from the "base package set".
On Fri, 2017-03-03 at 12:19 -0500, Colin Walters wrote:
The primary concern I had around this was - I want to make something that "early adopters" could test and provide feedback on, but *also* run seriously.
By "seriously" for example I mean we need security updates, and we need at least one "stable" branch where things aren't churning too much for the base OS. So having the project *solely* linked to rawhide doesn't really meet those criteria today.
If we have both f26 and rawhide branches that seems OK.
I think the F26 branch of Atomic Workstation (and I'm not suggesting removing that) makes a lot of sense for early adopters *of Atomic Workstation*. It won't give much ability to test other things on an early-adoption basis.
But I also want the ability to quickly test changes to the f26 version (same requirement for Atomic Host actually) - be able to quickly pull in a testing version of e.g. systemd or anaconda that *don't* affect things derived from the "base package set".
Are you suggesting that we could pull a different systemd into "F26 Atomic Workstation" as compared to "F26 Workstation"? This strikes me as ultra-confusing.
In the back of my mind, a Fedora contributor should be able to "branch" F26, change systemd, and have an ostree spit out that they can try testing, but that seems a long way off.
It would, of course, be great if we had continuous builds of F26 so we can get instant feedback when fixing problems there, and so we can run integration testing, but my understanding is that it's more difficult than for Rawhide:
* Because mirroring needs to be standard
* Because we have to figure out the Bodhi interaction
Which is why I pushed it off to "goal 5".
- Owen
On Fri, Mar 3, 2017 at 7:22 AM Owen Taylor otaylor@redhat.com wrote:
I had a long discussion yesterday with Colin about some of the pain points that currently lead him to have a separate atomic-workstation build on the CentOS infrastructure, and what we can do to address those and consolidate back to the Fedora infrastructure.
The long-term goal we have is getting to the point where someone who is moderately adventuresome can consume Fedora Atomic Workstation in a rolling fashion - every week a new version of Atomic Workstation shows up with whatever minor or major updates are considered stable, and if something breaks, rpm-ostree offers the ability to roll back.
Here's my personal definition of "moderately adventuresome":
1. Build a virtual machine. I need this to work on GNOME Boxes and Windows 10 Pro Hyper-V, so I'll probably use the Atomic Host ISO. I do *not* need or want any VirtualBox or Vagrant stuff.
2. Start up the VM full-screen (GNOME Boxes / Virtual Machine Manager on Linux, the Hyper-V Viewer on Windows)
3. "git clone" some repo that has all the code to layer Atomic Workstation onto an Atomic Host. Run it and voila!
4. Create SSH keys, post the public key to GitHub/GitLab/Bitbucket, "git clone" my own projects and have at it!
I can find documentation for 1; I tried the test composes of Atomic Workstation ISOs but couldn't get one to work, so I want to do my own builds inside a VM. But I haven't been able to find step 3 anywhere.
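(What I'm imagining for step 3 is something roughly like this - the remote URL and ref name are guesses on my part, not anything I've found documented:)

    # point the installed host at a repo that carries the workstation tree
    ostree remote add --no-gpg-verify atomic-ws https://example.com/atomic-ws/repo
    # switch the deployment from the host tree to the workstation tree
    rpm-ostree rebase atomic-ws:fedora/25/x86_64/workstation
    systemctl reboot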
On Fri, Mar 3, 2017 at 7:21 AM, Owen Taylor otaylor@redhat.com wrote:
I had a long discussion yesterday with Colin about some of the pain points that currently lead him to have a separate atomic-workstation build on the CentOS infrastructure, and what we can do to address those and consolidate back to the Fedora infrastructure.
The long-term goal we have is getting to the point where someone who is moderately adventuresome can consume Fedora Atomic Workstation in a rolling fashion - every week a new version of Atomic Workstation shows up with whatever minor or major updates are considered stable, and if something breaks, rpm-ostree offers the ability to roll back.
For Atomic Host, they offer this experience based on the *last* stable release of Fedora - so when a new release of OpenShift or atomic-cli happens, they rebase it in f25, and then the f25-based Atomic Host image is updated. This provides something much more stable than basing their releases on Rawhide, because only a fraction of the packages get updated.
But we can't literally follow this model for workstation, because we can't make that conceptual separation between the stable base and the stuff that is updated - kernel, systemd, NetworkManager, gnome-shell all have roughly the same status. The best separation we have for Workstation is operating system vs. apps, and Flatpak is the route forward to allow people to try out new apps on a stale base.
Maybe I'm misunderstanding something, but Fedora Atomic only ships stable Fedora packages. If we want an updated version of atomic or kubernetes (we don't ship openshift in the image) we give it karma through bodhi, and it becomes stable, and we pull it in. I'm running this Fedora Atomic WS (https://pagure.io/atomic-ws) now on my main machine, and it, like the host, draws on stable fedora 25. I'm not totally against the idea of running a rawhide-based version, but... I prefer the idea of running an atomic workstation based on the latest stable fedora. Would the rawhide-based atomic workstation be the only option?
On Wed, 2017-03-08 at 09:24 -0800, Jason Brooks wrote:
On Fri, Mar 3, 2017 at 7:21 AM, Owen Taylor otaylor@redhat.com wrote:
I had a long discussion yesterday with Colin about some of the pain points that currently lead him to have a separate atomic-workstation build on the CentOS infrastructure, and what we can do to address those and consolidate back to the Fedora infrastructure.
The long-term goal we have is getting to the point where someone who is moderately adventuresome can consume Fedora Atomic Workstation in a rolling fashion - every week a new version of Atomic Workstation shows up with whatever minor or major updates are considered stable, and if something breaks, rpm-ostree offers the ability to roll back.
For Atomic Host, they offer this experience based on the *last* stable release of Fedora - so when a new release of OpenShift or atomic-cli happens, they rebase it in f25, and then the f25-based Atomic Host image is updated. This provides something much more stable than basing their releases on Rawhide, because only a fraction of the packages get updated.
But we can't literally follow this model for workstation, because we can't make that conceptual separation between the stable base and the stuff that is updated - kernel, systemd, NetworkManager, gnome-shell all have roughly the same status. The best separation we have for Workstation is operating system vs. apps, and Flatpak is the route forward to allow people to try out new apps on a stale base.
Maybe I'm misunderstanding something, but Fedora Atomic only ships stable Fedora packages. If we want an updated version of atomic or kubernetes (we don't ship openshift in the image) we give it karma through bodhi, and it becomes stable, and we pull it in. I'm running this Fedora Atomic WS (https://pagure.io/atomic-ws) now on my main machine, and it, like the host, draws on stable fedora 25. I'm not totally against the idea of running a rawhide-based version, but... I prefer the idea of running an atomic workstation based on the latest stable fedora. Would the rawhide-based atomic workstation be the only option?
Not sure what that Pagure project is; however, it's not part of anything official.
Dennis
through bodhi, and it becomes stable, and we pull it in. I'm running this Fedora Atomic WS (https://pagure.io/atomic-ws) now on my main machine, and it, like the host, draws on stable fedora 25. I'm not totally against the idea of running a rawhide-based version, but... I prefer the idea of running an atomic workstation based on the latest stable fedora. Would the rawhide-based atomic workstation be the only option?
Not sure what that Pagure project is; however, it's not part of anything official.
Right, I only cite it to say something like: speaking as a current user of an atomic workstation based on fedora...
Dennis
On Wed, 2017-03-08 at 09:24 -0800, Jason Brooks wrote:
But we can't literally follow this model for workstation, because we can't make that conceptual separation between the stable base and the stuff that is updated - kernel, systemd, NetworkManager, gnome-shell all have roughly the same status. The best separation we have for Workstation is operating system vs. apps, and Flatpak is the route forward to allow people to try out new apps on a stale base.
Maybe I'm misunderstanding something, but Fedora Atomic only ships stable Fedora packages. If we want an updated version of atomic or kubernetes (we don't ship openshift in the image) we give it karma through bodhi, and it becomes stable, and we pull it in. I'm running this Fedora Atomic WS (https://pagure.io/atomic-ws) now on my main machine, and it, like the host, draws on stable fedora 25. I'm not totally against the idea of running a rawhide-based version, but... I prefer the idea of running an atomic workstation based on the latest stable fedora. Would the rawhide-based atomic workstation be the only option?
No, certainly not. There will be builds of Atomic Workstation corresponding to the stable branches of Fedora, and that's how I'd expect most people to consume it.
The distinction I was drawing is that, as I understand it, the F25-based Atomic Host is considered the primary place that development happens. If that's the case, it's only possible because the components that you want to update are mostly used in the context of Atomic and are pretty independent of the Fedora core.
But (without substantially changing Fedora) we can't do that for Workstation - if you want to try out a new version of systemd or NetworkManager, you need to try out Rawhide (or, after the branch point, the upcoming release).
So the place where we'll see the benefits of the "Atomic" approach for development - being able to survive broken updates by rolling back, tagging only changes that pass automatic testing, etc - is Rawhide, and my mail was largely about how to make Rawhide better for development and testing as an ostree.
- Owen
On Wed, Mar 8, 2017 at 11:13 AM, Owen Taylor otaylor@redhat.com wrote:
On Wed, 2017-03-08 at 09:24 -0800, Jason Brooks wrote:
But we can't literally follow this model for workstation, because we can't make that conceptual separation between the stable base and the stuff that is updated - kernel, systemd, NetworkManager, gnome-shell all have roughly the same status. The best separation we have for Workstation is operating system vs. apps, and Flatpak is the route forward to allow people to try out new apps on a stale base.
Maybe I'm misunderstanding something, but Fedora Atomic only ships stable Fedora packages. If we want an updated version of atomic or kubernetes (we don't ship openshift in the image) we give it karma through bodhi, and it becomes stable, and we pull it in. I'm running this Fedora Atomic WS (https://pagure.io/atomic-ws) now on my main machine, and it, like the host, draws on stable fedora 25. I'm not totally against the idea of running a rawhide-based version, but... I prefer the idea of running an atomic workstation based on the latest stable fedora. Would the rawhide-based atomic workstation be the only option?
No, certainly not. There will be builds of Atomic Workstation corresponding to the stable branches of Fedora, and that's how I'd expect most people to consume it.
Sounds good!
The distinction I was drawing is that, as I understand it, the F25-based Atomic Host is considered the primary place that development happens. If that's the case, it's only possible because the components that you want to update are mostly used in the context of Atomic and are pretty independent of the Fedora core.
I think the heaviest development happens on CentOS Atomic continuous https://wiki.centos.org/SpecialInterestGroup/Atomic/Devel, but Colin recently started up a Fedora version that works similarly: https://pagure.io/fedora-atomic-host-continuous
But (without substantially changing Fedora) we can't do that for Workstation - if you want to try out a new version of systemd or NetworkManager, you need to try out Rawhide (or, after the branch point, the upcoming release).
So the place where we'll see the benefits of the "Atomic" approach for development - being able to survive broken updates by rolling back, tagging only changes that pass automatic testing, etc - is Rawhide, and my mail was largely about how to make Rawhide better for development and testing as an ostree.
+1
- Owen
On Wed, 2017-03-08 at 11:21 -0800, Jason Brooks wrote:
The distinction I was drawing is that, as I understand it, the F25-based Atomic Host is considered the primary place that development happens. If that's the case, it's only possible because the components that you want to update are mostly used in the context of Atomic and are pretty independent of the Fedora core.
I think the heaviest development happens on centos atomic continuous https://wiki.centos.org/SpecialInterestGroup/Atomic/Devel, but Colin recently started up a Fedora version that works similarly: https://pagure.io/fedora-atomic-host-continuous
The idea here is to make that not the case any more, and instead make Fedora the primary development area.