https://fedoraproject.org/wiki/Changes/Additional_buildroot_to_test_x86-64_m...
== Summary ==
Create a dedicated buildroot to test packages built with the x86-64 micro-architecture update.
== Owner ==
* Name: [[User:bookwar| Aleksandra Fedorova]]
* Email: [mailto:alpha@bookwar.info alpha@bookwar.info]
* Name: [[User:fweimer| Florian Weimer]]
* Email: [mailto:fweimer@redhat.com fweimer@redhat.com]
== Detailed Description ==
Fedora currently uses the original K8 micro-architecture (without 3DNow! and other AMD-specific parts) as the baseline for its x86_64 architecture. This baseline dates back to 2003 and has not been updated since. As a result, performance of Fedora is not as good as it could be on current CPUs.
Changing the main Fedora baseline to new CPUs in place [[Changes/x86-64 micro-architecture update|was rejected]] as the user base for older machines is still large. But we’d like to unblock the development and testing of this feature.
== Benefit to Fedora ==
* Allow development and verification of the CPU baseline update in Fedora without disrupting users of Fedora on older machines.
* Collect real-life data on performance improvements, which can help in making a decision on the baseline update.
* As soon as the feature is accepted by the community, there will be a smooth process to update the baseline in the main Fedora, as all packages will already be verified and tested to work against it.
* Until the switch of the main x86_64 architecture, interested parties can install systems from the updated buildroot for performance experiments.
== Scope ==
* Proposal owners:
** Define a new disttag for the buildroot.
** Provide an updated gcc package which implements the new compiler flags. It is expected that the new baseline will be implemented as a new GCC -march= option for convenience.
** Provide an update to the redhat-rpm-config package which changes the default compiler options for the disttag.
** Set up automation so that for each build submitted to Fedora Rawhide there is a build submitted to the additional buildroot. The result of the build task will be posted to Fedora Messaging and consumed by ResultsDB, so that it appears in Bodhi.
** Set up automation to run periodic partial composes (via ODCS), without installation media, to generate repositories with these packages.
** Update the packaging documentation to mention the new disttag and how it can be used.
** Create a landing page in the Fedora Wiki describing the purpose and usage of the buildroot.
* Other developers:
** None. The goal is to build exactly the same sources in a different buildroot environment, so maintainers are expected to work on Fedora Rawhide packages as usual, with perhaps an additional source of bug reports coming from build failures.
* Release engineering: [https://pagure.io/releng/issue/9154 #9154]
** A new buildroot needs to be configured in Koji.
** A new compose configuration is needed.
* Policies and guidelines: [https://pagure.io/packaging-committee/issue/941 #941]
** There will be a new disttag for the alternative buildroot and a new conditional in the RPM spec for it (a sketch of such a conditional follows after this list).
* Trademark approval: N/A (not needed for this Change)
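To make the disttag conditional concrete, here is a minimal sketch of how a spec file might branch on it. The macro name "x86_64_update" and the guarded switch are assumptions for illustration only, not names defined by this Change:

  # Hypothetical sketch; "x86_64_update" is a placeholder macro name,
  # not the one this Change will actually define.
  %if 0%{?x86_64_update}
  # The buildroot's default flags already target the newer baseline,
  # so a legacy-only workaround could be dropped here.
  %global enable_legacy_asm 0
  %else
  %global enable_legacy_asm 1
  %endif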
== Upgrade/compatibility impact ==
N/A (not a System Wide Change)
== How To Test ==
(Will be updated with more details later)
For each new package submitted to Fedora Rawhide there will be a CI pipeline which builds the same sources in the additional buildroot. The build result will be posted to ResultsDB and will be visible in Bodhi on the update page.
There is also going to be a partial compose with all packages built in the alternative buildroot.
To test these packages, one can add the repository from the compose and run a distro-sync, for example:
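A minimal sketch of that workflow follows; the repository URL is a placeholder, since the actual location is not defined yet:

  # Placeholder URL; the real repository location is not yet published.
  sudo dnf config-manager --add-repo \
      https://example.org/x86-64-update/buildroot.repo
  sudo dnf distro-sync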
== User Experience ==
The alternative buildroot is going to be used for development and testing. There will be no impact on users.
== Dependencies ==
N/A (not a System Wide Change)
== Contingency Plan ==
The only impact this feature has on the current Fedora development process is an additional test result which shows up in Fedora Rawhide gating. Thus there is no risk to the Fedora release process.
If the feature is not completed by the Fedora 32 release, it will be shifted to the Fedora 33 cycle or cancelled.
* Contingency mechanism: N/A (not a System Wide Change)
* Contingency deadline: N/A (not a System Wide Change)
* Blocks release? N/A (not a System Wide Change)
== Documentation ==
There will be a landing page on the wiki with details on the purpose and usage of this buildroot.
== Release Notes ==
Preparation work has started on updating the Fedora baseline to newer CPUs. While it has no effect on the current release, there is a test environment which can be used by anyone interested in this work.
On Thu, Jan 9, 2020 at 12:17 PM Ben Cotton bcotton@redhat.com wrote:
https://fedoraproject.org/wiki/Changes/Additional_buildroot_to_test_x86-64_m...
== Summary ==
Create a dedicated buildroot to test packages built with x86-64 micro-architecture update.
== Owner ==
- Name: [[User:bookwar| Aleksandra Fedorova]]
- Email: [mailto:alpha@bookwar.info alpha@bookwar.info]
- Name: [[User:fweimer| Florian Weimer]]
- Email: [mailto:fweimer@redhat.com fweimer@redhat.com]
== Detailed Description ==
Fedora currently uses the original K8 micro-architecture (without 3DNow! and other AMD-specific parts) as the baseline for its x86_64 architecture. This baseline dates back to 2003 and has not been updated since. As a result, performance of Fedora is not as good as it could be on current CPUs.
Changing the main Fedora baseline to new CPUs in place [[Changes/x86-64 micro-architecture update|was rejected]] as the user base for older machines is still large. But we’d like to unblock the development and testing of this feature.
== Benefit to Fedora ==
- Allow development and verification of the CPU baseline update in
Fedora without disrupting users of Fedora on older machines.
- Collect real life data on performance improvements, which can help
making decision on the baseline update.
- As soon as feature is accepted by the community, there will be a
smooth process to update baseline in the main Fedora, as all packages will be already verified and tested to work against it.
- Until the switch of the main x86_64 architecture, interested parties
can install systems from the updated buildroot for performance experiments.
== Scope ==
- Proposal owners:
** define new disttag for the buildroot
** provide updated gcc package which implements the new compiler flags. It is expected that the new baseline will be implemented in a new GCC -march= option for convenience.
** provide update to rpm-config package which changes default compiler options for the disttag
** setup automation so that for each build submitted to Fedora Rawhide there is a build submitted to the additional buildroot. Result of the build task will be posted to Fedora Messaging and consumed by ResultsDB, so that it appears in Bodhi
** setup automation to run periodic partial composes (via ODCS) without installation media to generate repositories with these packages
** update packaging documentation to mention new disttag and how it can be used
** create landing page to describe the purpose and usages of the buildroot in Fedora Wiki
Three things I'm concerned about:
1. Our builder resources are squeezed enough as it is. In doing this, are we going to get more machines so that we can have more builders? Between modules and this, I worry our resources will get squeezed far too tightly for my comfort.
2. This feature does not describe what the new microarchitecture baseline will be. I *could* assume it's the crazy new microarchitecture that was proposed in the rejected Change, but I don't want to make that assumption. Please specify.
3. Why is this in the main Koji and require a new disttag? Why not just do a shadow Koji and build in an alternate path? Every other architecture bringup has required this process, I don't see why this one wouldn't too. That would also deal with my concern for (1), since a shadow Koji would be required to have its own builder resources separate from the main one.
Note that there's basically no reason to do weird things to redhat-rpm-config, especially if you go the shadow Koji route, since you can specify macros on a per tag basis. This includes overriding the target platform and setting extra flags for optimizations.
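For reference, a rough sketch of that per-tag approach, assuming Koji's rpm.macro.* tag extras; the tag name, disttag, and -march value below are placeholders rather than a proposed configuration:

  # Hypothetical sketch: per-tag RPM macros on a Koji build tag, so the
  # buildroot picks up different default flags without patching
  # redhat-rpm-config. Names and values are illustrative only.
  koji edit-tag f33-x86-64-update-build \
      -x rpm.macro.dist=".fc33x64v2" \
      -x "rpm.macro.optflags=-O2 -g -march=haswell"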
As a side note, I'm surprised people are so weirdly focused on Intel-specific optimizations, when it seems like AMD CPUs might benefit from some love directly, especially with new Ryzen chips becoming more popular.
-- 真実はいつも一つ!/ Always, there's only one truth!
On Thu, Jan 09, 2020 at 03:59:41PM -0500, Neal Gompa wrote:
On Thu, Jan 9, 2020 at 12:17 PM Ben Cotton bcotton@redhat.com wrote:
https://fedoraproject.org/wiki/Changes/Additional_buildroot_to_test_x86-64_m...
== Summary ==
Create a dedicated buildroot to test packages built with x86-64 micro-architecture update.
== Owner ==
- Name: [[User:bookwar| Aleksandra Fedorova]]
- Email: [mailto:alpha@bookwar.info alpha@bookwar.info]
- Name: [[User:fweimer| Florian Weimer]]
- Email: [mailto:fweimer@redhat.com fweimer@redhat.com]
== Detailed Description ==
Fedora currently uses the original K8 micro-architecture (without 3DNow! and other AMD-specific parts) as the baseline for its x86_64 architecture. This baseline dates back to 2003 and has not been updated since. As a result, performance of Fedora is not as good as it could be on current CPUs.
Changing the main Fedora baseline to new CPUs in place [[Changes/x86-64 micro-architecture update|was rejected]] as the user base for older machines is still large. But we’d like to unblock the development and testing of this feature.
== Benefit to Fedora ==
- Allow development and verification of the CPU baseline update in
Fedora without disrupting users of Fedora on older machines.
- Collect real life data on performance improvements, which can help
making decision on the baseline update.
- As soon as feature is accepted by the community, there will be a
smooth process to update baseline in the main Fedora, as all packages will be already verified and tested to work against it.
- Until the switch of the main x86_64 architecture, interested parties
can install systems from the updated buildroot for performance experiments.
== Scope ==
- Proposal owners:
** define new disttag for the buildroot ** provide updated gcc package which implements the new compiler flags. It is expected that the new baseline will be implemented in a new GCC -march= option for convenience. ** provide update to rpm-config package which changes default compiler options for the disttag ** setup automation so that for each build submitted to Fedora Rawhide there is a build submitted to the additional buildroot. Result of the build task will be posted to Fedora Messaging and consumed by ResultsDB, so that it appears in Bodhi ** setup automation to run periodic partial composes (via ODCS) without installation media to generate repositories with these packages ** update packaging documentation to mention new disttag and how it can be used ** create landing page to describe the purpose and usages of the buildroot in Fedora Wiki
Three things I'm concerned about:
1. Our builder resources are squeezed enough as it is. In doing this, are we going to get more machines so that we can have more builders? Between modules and this, I worry our resources will get squeezed far too tightly for my comfort.
2. This feature does not describe what the new microarchitecture baseline will be. I *could* assume it's the crazy new microarchitecture that was proposed in the rejected Change, but I don't want to make that assumption. Please specify.
3. Why is this in the main Koji and require a new disttag? Why not just do a shadow Koji and build in an alternate path? Every other architecture bringup has required this process, I don't see why this one wouldn't too. That would also deal with my concern for (1), since a shadow Koji would be required to have its own builder resources separate from the main one.
Note that there's basically no reason to do weird things to redhat-rpm-config, especially if you go the shadow Koji route, since you can specify macros on a per tag basis. This includes overriding the target platform and setting extra flags for optimizations.
Those are all good points. To add more:
4. This certainly needs to be a "system wide change" with the related additional info required for such changes. We certainly need releng to sign off on this.
5. "Additional bugs", i.e. most likely build failures, but probably also runtime failures are mentioned. Who will be on the hook to fix those? Does failure to build block anything?
As a side note, I'm surprised people are so weirdly focused on Intel-specific optimizations, when it seems like AMD CPUs might benefit from some love directly, especially with new Ryzen chips becoming more popular.
Zbyszek
On Thu, Jan 9, 2020 at 5:03 PM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
- This certainly needs to be a "system wide change" with the related additional info required for such changes. We certainly need releng to sign off on this.
Apart from potential capacity impacts, this seems self-contained. I'll grant that a reduced builder capacity would impact other contributors, but that doesn't seem like the kind of case the system-wide change definition is designed for.
The change owner (bookwar) has already opened a ticket with RelEng[1] and they will discuss what work is necessary for this at the next meeting.
[1] https://pagure.io/releng/issue/9154
On Thu, Jan 09, 2020 at 05:23:16PM -0500, Ben Cotton wrote:
On Thu, Jan 9, 2020 at 5:03 PM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
- This certainly needs to be a "system wide change" with the related additional info required for such changes. We certainly need releng to sign off on this.
Apart from potential capacity impacts, this seems self-contained.
Changes (in the sense of spec conditionals and such) to some yet-undefined subset of packages may be necessary (for example, if the architecture change has some effect on floating-point computations, this could have a wide impact on packages doing numerical tests). So there is potential for interaction with a large number of packagers.
System-wide changes are also more widely announced (for example they are listed prominently in the release notes), which imo would be appropriate for something like a new architecture.
Because of those two reasons, I'd argue that the category should be changed, but if you disagree, let's not discuss this further.
I'll grant that a reduced builder capacity would impact other contributors, but that doesn't seem like the kind of case the system-wide change definition is designed for.
The change owner (bookwar) has already opened a ticket with RelEng[1] and they will discuss what work is necessary for this at the next meeting.
OK, thanks.
Zbyszek
On Fri, Jan 10, 2020 at 9:16 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Thu, Jan 09, 2020 at 05:23:16PM -0500, Ben Cotton wrote:
On Thu, Jan 9, 2020 at 5:03 PM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
- This certainly needs to be a "system wide change" with the related additional info required for such changes. We certainly need releng to sign off on this.
Apart from potential capacity impacts, this seems self-contained.
Changes (in the sense of spec conditionals and such) to some yet-undefined subset of packages may be necessary (for example, if the architecture change has some effect on floating-point computations, this could have a wide impact on packages doing numerical tests). So there is potential for interaction with a large number of packagers.
System-wide changes are also more widely announced (for example they are listed prominently in the release notes), which imo would be appropriate for something like a new architecture.
We are not proposing the new architecture. We are proposing a "staging environment" for the current architecture, which can be used for experiments that currently cannot be performed without disrupting the release and user experience.
And the interaction with maintainers that you mention is not really part of the Change; it is the continuous workflow enabled by it.
Note that we try to make it as lightweight as possible:
1) We reuse the infrastructure which is already available, like the Koji builders. While hardware is costly, the human resources needed to set up completely new infrastructure from scratch and then keep it in sync with the main infra cost far more.
2) We put all the new logic required for this change (triggers, feedback loop, ...) outside of Fedora RelEng, so that we don't use releng resources to maintain the triggers, the composes, and the builds themselves. This part will be maintained by the change owners and by people interested in the development of the architecture update.
Thus, apart from raw compute resources, the impact of the change is quite small.
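As an illustration of the kind of trigger involved, a minimal sketch of what the automation might do for each finished Rawhide build; the build target name and commit hash are placeholders, and koji build is only one possible mechanism:

  # Hypothetical sketch: rebuild the same dist-git commit in an alternative
  # build target. Target name and commit hash are placeholders.
  koji build --nowait f33-x86-64-update \
      "git+https://src.fedoraproject.org/rpms/foo.git#0123abcd"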
Because of those two reasons, I'd argue that the category should be changed, but if you disagree, let's not discuss this further.
Based on the impact described above, I wouldn't consider the change system-wide.
But I think we touch on an interesting topic here: it seems our definition of a Change is quite limited and focused on packaged changes, and the work we propose doesn't really fit in this framework. I'm going to start a separate thread on it.
I'll grant that a reduced builder capacity would impact other contributors, but that doesn't seem like the kind of case the system-wide change definition is designed for.
The change owner (bookwar) has already opened a ticket with RelEng[1] and they will discuss what work is necessary for this at the next meeting.
OK, thanks.
-- Aleksandra Fedorova bookwar
On Fri, Jan 10, 2020 at 04:09:18PM +0100, Aleksandra Fedorova wrote:
Based on the impact described above, I wouldn't consider the change system-wide.
But I think we touch an interesting topic here: It seems our definition of Change is quite limited and focused on packaged changes. And the work we propose doesn't really fit in this framework. I'm going to start a separate thread on it.
I'm in favor of Changes being wider rather than inventing a parallel process for non-packaging changes.
On Friday, January 10, 2020 8:09:18 AM MST Aleksandra Fedorova wrote:
We are not proposing the new architecture. We are proposing a "staging environment" for the current architecture. Which can be used for experiments which currently can not be performed without disrupting the release and user experience.
Clearly, this is not about the current architecture; it is about a newer, Intel-specific microarchitecture.
[snip]
Based on the impact described above, I wouldn't consider the change system-wide.
But I think we touch an interesting topic here: It seems our definition of Change is quite limited and focused on packaged changes. And the work we propose doesn't really fit in this framework. I'm going to start a separate thread on it.
This will have effects across the entire distro and would potentially require changes to a number of packages to fix either compile-time or run-time issues.
On Thu, Jan 9, 2020 at 3:03 PM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
Those are all good points. To add more:
This certainly needs to be a "system wide change" with the related additional info required for such changes. We certainly need releng to sign off on this.
"Additional bugs", i.e. most likely build failures, but probably also
runtime failures are mentioned. Who will be on the hook to fix those? Does failure to build block anything?
Also, I have a bunch of packages where I have had to deliberately cripple upstream's attempts at using CPU features we do not support. The tbb package, for example, uses the -mrtm instructions on x86 platforms by default. Others have optional functionality, or faster versions of some functionality, if newer CPU features are available. The ntl package can use any of -mpclmul, -mavx, -mfma, or -mavx2 if they are available, for example.
The stated purpose is to compare performance. Is the comparison to be carried out by simply rebuilding with changed compiler flags, or do you intend to seek out examples like this and build the code with upstream support for the enabled CPU features?
Hi, Neal,
On Thu, Jan 9, 2020 at 10:01 PM Neal Gompa ngompa13@gmail.com wrote:
On Thu, Jan 9, 2020 at 12:17 PM Ben Cotton bcotton@redhat.com wrote:
https://fedoraproject.org/wiki/Changes/Additional_buildroot_to_test_x86-64_m...
== Summary ==
Create a dedicated buildroot to test packages built with x86-64 micro-architecture update.
== Owner ==
- Name: [[User:bookwar| Aleksandra Fedorova]]
- Email: [mailto:alpha@bookwar.info alpha@bookwar.info]
- Name: [[User:fweimer| Florian Weimer]]
- Email: [mailto:fweimer@redhat.com fweimer@redhat.com]
== Detailed Description ==
Fedora currently uses the original K8 micro-architecture (without 3DNow! and other AMD-specific parts) as the baseline for its x86_64 architecture. This baseline dates back to 2003 and has not been updated since. As a result, performance of Fedora is not as good as it could be on current CPUs.
Changing the main Fedora baseline to new CPUs in place [[Changes/x86-64 micro-architecture update|was rejected]] as the user base for older machines is still large. But we’d like to unblock the development and testing of this feature.
== Benefit to Fedora ==
- Allow development and verification of the CPU baseline update in
Fedora without disrupting users of Fedora on older machines.
- Collect real life data on performance improvements, which can help
making decision on the baseline update.
- As soon as feature is accepted by the community, there will be a
smooth process to update baseline in the main Fedora, as all packages will be already verified and tested to work against it.
- Until the switch of the main x86_64 architecture, interested parties
can install systems from the updated buildroot for performance experiments.
== Scope ==
- Proposal owners:
** define new disttag for the buildroot
** provide updated gcc package which implements the new compiler flags. It is expected that the new baseline will be implemented in a new GCC -march= option for convenience.
** provide update to rpm-config package which changes default compiler options for the disttag
** setup automation so that for each build submitted to Fedora Rawhide there is a build submitted to the additional buildroot. Result of the build task will be posted to Fedora Messaging and consumed by ResultsDB, so that it appears in Bodhi
** setup automation to run periodic partial composes (via ODCS) without installation media to generate repositories with these packages
** update packaging documentation to mention new disttag and how it can be used
** create landing page to describe the purpose and usages of the buildroot in Fedora Wiki
Three things I'm concerned about:
- Our builder resources are squeezed enough as it is. In doing this,
are we going to get more machines so that we can have more builders? Between modules and this, I worry our resources will get squeezed far too tightly for my comfort.
In this change we are looking for x86_64 builders only, and we will run one additional build for every regular (non-scratch) build in Fedora Rawhide. I think the load this brings should be bearable, but maybe RelEng can provide an estimate.
- This feature does not describe what the new microarchitecture
baseline will be. I *could* assume it's the crazy new microarchitecture that was proposed in the rejected Change, but I don't want to make that assumption. Please specify.
The rejected change https://fedoraproject.org/wiki/Changes/x86-64_micro-architecture_update is explicitly referenced from the current one. So yes, it is the architecture update we are looking for.
And I would suggest not calling things weird and crazy just because you are not interested in them.
- Why is this in the main Koji and require a new disttag? Why not
just do a shadow Koji and build in an alternate path? Every other architecture bringup has required this process, I don't see why this one wouldn't too. That would also deal with my concern for (1), since a shadow Koji would be required to have its own builder resources separate from the main one.
1) There is no new architecture; it is the same x86_64 architecture as usual, with only the default compiler flags changed. Thus, unlike with other architectures, there is no need for new hardware and new Koji builders. We can use exactly the same x86_64 Koji builders as usual, which saves the resources of RelEng and other teams.
2) Separation of resources is not really a solution for a lack of capacity; it makes it worse, because separate resources cannot be used for other tasks. It is usually more effective to have a shared pool of compute (in our case, build) power and use it for various tasks, prioritizing them. In the proposed setup there will be CI machinery which triggers new builds for every new Bodhi update in Fedora Rawhide. We will be able to stop and reschedule these tasks if resources are needed for mass rebuilds or other high-priority tasks.
Note that there's basically no reason to do weird things to redhat-rpm-config, especially if you go the shadow Koji route, since you can specify macros on a per tag basis. This includes overriding the target platform and setting extra flags for optimizations.
As a side note, I'm surprised people are so weirdly focused on Intel-specific optimizations, when it seems like AMD CPUs might benefit from some love directly, especially with new Ryzen chips becoming more popular.
-- 真実はいつも一つ!/ Always, there's only one truth!
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
The rejected change https://fedoraproject.org/wiki/Changes/x86-64_micro-architecture_update is explicitly referenced from the current one. So yes, it is the architecture update we are looking for.
And I would suggest to avoid calling things weird and crazy just because you are not interested in them.
The premise of the new change request is to ignore all the issues that led to the original change request being rejected, and just assume that the original will be accepted in the near future.
AVX2 is not a reasonable requirement as a replacement for the current Fedora x86_64, as there are CPUs still being made today that don't support that. If you want to split x86_64 (along the lines of i386 vs. i686), then building a shadow copy of the entire distribution is not a good way forward - you need to do all the actual work required to make a second x86_64 sub-architecture in the main x86_64 distribution. Come up with a name, make the changes to the required packages, etc.
Otherwise, what is the point of the shadow architecture? What is the end goal? Build it in perpetuity and just try to get people to run your packages instead of the main distribution?
On Fri, Jan 10, 2020 at 8:37 AM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
The rejected change https://fedoraproject.org/wiki/Changes/x86-64_micro-architecture_update is explicitly referenced from the current one. So yes, it is the architecture update we are looking for.
And I would suggest to avoid calling things weird and crazy just because you are not interested in them.
The premise of the new change request is to ignore all the issues that led to the original change request being rejected, and just assume that the original will be accepted in the near future.
I don't believe that is fair or even true. The premise of the new change is to allow alternative experimentation within Fedora proper without impacting the mainline distribution. There is no assumption that the results will magically replace Fedora in the near future.
AVX2 is not a reasonable requirement as a replacement for the current Fedora x86_64, as there are CPUs still being made today that don't support that. If you want to split x86_64 (along the lines of i386 vs. i686), then building a shadow copy of the entire distribution is not a good way forward - you need to do all the actual work required to make a second x86_64 sub-architecture in the main x86_64 distribution. Come up with a name, make the changes to the required packages, etc.
I think we're focusing entirely too much on the initial interest for this alternative buildroot and being very myopic about it. Let's take a step back.
Otherwise, what is the point of the shadow architecture? What is the end goal? Build it in perpetuity and just try to get people to run your packages instead of the main distribution?
If we look at the additional possibilities this offers outside of CPU tuning, I find it rather intriguing. Having infrastructure that allows alternative buildroots to be created and to leverage mainline Fedora activities allows for all kinds of experimentation. Perhaps it's not CPU tuning, but compiler optimization tuning. Perhaps it's building the distro with a different compiler in general.
Fedora is very good at producing a singular distro, and it has been very poor at providing a way to deviate at scale from that singular distro. Copr is good for small scale, but attempting to build the whole distro there is overly arduous to the point where people don't even try. The part of this proposal that interests me is being able to easily piggyback on day-to-day Fedora activities to accomplish that scale. Koji-shadow is the only way to do this today, and it requires people to duplicate everything in the infrastructure themselves. It's also extremely invasive to merge back into Fedora proper. If Fedora had a way to provide that without requiring the investment overhead, I'd be really curious to see what kind of innovative things could come from it.
josh
Once upon a time, Josh Boyer jwboyer@fedoraproject.org said:
I don't believe that is fair or even true. The premise of the new change is to allow alternative experimentation within Fedora proper without impacting the mainline distribution. There is no assumption that the results will magically replace Fedora in the near future.
The detailed description says:
"Changing the main Fedora baseline to new CPUs in place was rejected as the user base for older machines is still large. But we’d like to unblock the development and testing of this feature."
What is the point in continuing development of a rejected feature, other than to hope that it is accepted in the future?
I guess I just don't see the benefit to Fedora of stretching infrastructure resources even thinner to support somebody's pet project of a feature that has already been rejected. It feels like the goal is to prove (for some value of "prove") that the original change is right (for some value of "right") and then push it through despite the original objections. If somebody wants to do that, then IMHO they should handle all the resources, not put it on Fedora.
There may be other interesting things this expanded infrastructure could be used for, but nobody is actually proposing that. What if doing it for the shadow architecture prevents it being done for anything else (because there aren't enough resources)?
On Fri, Jan 10, 2020 at 9:13 AM Chris Adams linux@cmadams.net wrote:
Once upon a time, Josh Boyer jwboyer@fedoraproject.org said:
I don't believe that is fair or even true. The premise of the new change is to allow alternative experimentation within Fedora proper without impacting the mainline distribution. There is no assumption that the results will magically replace Fedora in the near future.
The detailed description says:
"Changing the main Fedora baseline to new CPUs in place was rejected as the user base for older machines is still large. But we’d like to unblock the development and testing of this feature."
What is the point in continuing development of a rejected feature, other than to hope that it is accepted in the future?
As an experiment. To see if it actually has real, tangible benefit. Plus, the concept here opens other possibilities for different experiments. Fedora, despite being a fast-paced and recent distribution, doesn't really handle experimentation well. It's entirely OK to try something and fail, but we simply don't do that.
Also, "future" and "near future" are different, right? I am confident the Change proposers believe this is correct for some future version of Fedora but I do not at all believe they intend to do this and then switch 2 Fedora releases from now. This isn't an attempt to subvert a rejection.
I guess I just don't see the benefit to Fedora to stretch infrastucture resources even thinner to support somebody's pet project of a feature that has already been rejected. It feels like the goal is to prove (for some value of "prove") that the original change is right (for some value of "right") and then push it through despite the original objections. If somebody wants to do that, then IMHO they should handle all the resources, not put it on Fedora.
Setting up their own resources is an approach, it's not out-of-hand incorrect, and it is what Fedora has asked people to do for a while now. For net-new architectures (say MIPS or RISC-V), it might even be the most reasonable approach considering the failures that would be associated with that kind of bringup. However, my concern there is that it encourages people to just do it outside of Fedora entirely because the cost is the same. They get no benefit from our community, and Fedora gets no benefit from the work they're doing. The ability to have an alternative buildroot that is targeted at something more stable and easily accomplished seems like it would benefit Fedora more in the long run.
There may be other interesting things this expanded infrastucture could be used for, but nobody is actually proposing that. What if doing it for the shadow architecture prevents it being done for anything else (because there aren't enough resources)?
What if we never did anything new because it might require work or resources or change? Sounds like a great way to become irrelevant over time in a frog-in-boiling-water kind of way.
I'm not trying to be adversarial here. I do think it's reasonable to consider these things. But to immediately punt on them as the knee-jerk reaction is a mentality that I think will hurt Fedora far more than attempting something and having to scale back if it's too invasive.
josh
Once upon a time, Josh Boyer jwboyer@fedoraproject.org said:
As an experiment. To see if it actually has real, tangible benefit.
I guess my biggest issue with it is that the proposal does nothing to address the harm of the original proposal (namely, that Fedora would no longer support some brand-new hardware). To me, it doesn't really matter how much benefit the original change has to the hardware it does support if it kills support for hardware on the market today, and that makes the whole proposal moot.
I feel the way forward is going to be along the lines of the i386/i686 changes back in the day, and any effort should be towards that, not just throwing out support for current hardware because it'll make Fedora run faster on a subset of current hardware. I'd guess that everybody already accepts that at least some packages would show performance benefits from changing the baseline, but... so what? Why go to all this effort to prove it?
Hi, Chris,
On Fri, Jan 10, 2020 at 2:37 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
The rejected change https://fedoraproject.org/wiki/Changes/x86-64_micro-architecture_update is explicitly referenced from the current one. So yes, it is the architecture update we are looking for.
And I would suggest to avoid calling things weird and crazy just because you are not interested in them.
The premise of the new change request is to ignore all the issues that led to the original change request being rejected, and just assume that the original will be accepted in the near future.
No. Afaik, the main reason the change was rejected is that we are not ready yet for the update of the architecture (or don't yet see a reason for it), and the benefit of such an update is unclear.
Thus we design this change to be explicitly standalone with no impact on the current Fedora release. We want to have a separate test environment where we can experiment with the architecture updates (compiler flag changes and new features). This test environment is needed to preview and test the changes ahead of time.
So that in future years, when we decide to update the baseline (and I do believe such a moment will come, though the final configuration flags may differ from those proposed right now), we have a much better understanding of what changes are needed and what benefits we can get from them, and we don't have to squeeze everything into a single mass rebuild at one particular moment in the release cycle.
AVX2 is not a reasonable requirement as a replacement for the current Fedora x86_64, as there are CPUs still being made today that don't support that. If you want to split x86_64 (along the lines of i386 vs. i686), then building a shadow copy of the entire distribution is not a good way forward - you need to do all the actual work required to make a second x86_64 sub-architecture in the main x86_64 distribution. Come up with a name, make the changes to the required packages, etc.
Otherwise, what is the point of the shadow architecture? What is the end goal? Build it in perpetuity and just try to get people to run your packages instead of the main distribution?
There is no intent to provide those packages to the regular user or make a separate Fedora Edition out of them. There will be no releases of repositories or media with such packages. It is only an experimental test environment linked to the Fedora Rawhide state.
The end goal of this is not to add a new architecture but to make it possible to move the actual Fedora configuration forward without breaking it. That means preparing and testing changes as close as possible to the Fedora mainline, but without disrupting it.
-- Chris Adams linux@cmadams.net
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
No. Afaik, the main reason the change was rejected is that we are not ready yet (or don't see yet the reason) for the update of the architecture. And the benefit of such an update is unclear.
I disagree that that was the reason - the fact that Fedora would no longer run on hardware being made and sold today was a big issue, and is in no way addressed.
There is no intent to provide those packages to the regular user or make a separate Fedora Edition out of them. There will be no releases of repositories or media with such packages. It is only an experimental test environment linked to the Fedora Rawhide state.
The scope says there will be repositories generated.
On Fri, Jan 10, 2020 at 3:56 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
No. Afaik, the main reason the change was rejected is that we are not ready yet (or don't see yet the reason) for the update of the architecture. And the benefit of such an update is unclear.
I disagree that that was the reason - the fact that Fedora would no longer run on hardware being made and sold today was a big issue, and is in no way addressed.
There is no intent to provide those packages to the regular user or make a separate Fedora Edition out of them. There will be no releases of repositories or media with such packages. It is only an experimental test environment linked to the Fedora Rawhide state.
The scope says there will be repositories generated.
Yes, repositories and composes are going to be generated as development snapshots, similarly to Rawhide nightlies. These repositories are going to be used in tests and CI workflows, but there is no intention to advertise them to end users.
And there will be no branching or releases for them.
-- Chris Adams linux@cmadams.net
On Fri, Jan 10, 2020 at 3:56 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
No. Afaik, the main reason the change was rejected is that we are not ready yet (or don't see yet the reason) for the update of the architecture. And the benefit of such an update is unclear.
I disagree that that was the reason - the fact that Fedora would no longer run on hardware being made and sold today was a big issue, and is in no way addressed.
Similarly to what Josh said, we want to set up an environment for experiments. It doesn't mean that things we experiment on are going to be merged into Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
It is not a backdoor for the rejected change; it is a way to safely iterate on the rejected change to see if we can come up with a version of it which won't be rejected.
There is no intent to provide those packages to the regular user or make a separate Fedora Edition out of them. There will be no releases of repositories or media with such packages. It is only an experimental test environment linked to the Fedora Rawhide state.
The scope says there will be repositories generated.
Chris Adams linux@cmadams.net
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
Similarly to what Josh said, we want to setup an environment for experiments. It doesn't mean that things we experiment on are going to be merged in Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
It is not a backdoor for rejected change, it is a way to safely iterate on the rejected change to see if we can come up with a version of it, which won't be rejected.
So... I guess my objections are all based on the proposal as written, which doesn't sound to me like what you are describing. What the proposal says is all about rebuilding Fedora with the rejected baseline change, and showing that it is better to get the community to accept the original change. It also uses the term "older machines" (which is still misleading).
Hi, Chris,
On Fri, Jan 10, 2020 at 4:29 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
Similarly to what Josh said, we want to setup an environment for experiments. It doesn't mean that things we experiment on are going to be merged in Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
It is not a backdoor for rejected change, it is a way to safely iterate on the rejected change to see if we can come up with a version of it, which won't be rejected.
So... I guess my objections are all based on the proposal as written, which doesn't sound to me like what you are describing. What the proposal says is all about rebuilding Fedora with the rejected baseline change, and showing that it is better to get the community to accept the original change. It also uses the term "older machines" (which is still misleading).
I have adjusted the description of the change in a way that I think clarifies this part. Please check if it is better now.
Also I've posted some context in a separate thread [1].
I think the current change is important to Fedora also as a showcase that we, as a project, care not only about internal distribution matters but also about some large developments in the "outside world", and that we can provide a venue for, for example, hardware vendors to show their newest work.
The rejection is valuable feedback here; it highlights that it is not enough to just push a new hardware spec to the market to get it adopted. But we can also try to find a way in which it actually _can_ be done better.
[1] https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/...
-- Chris Adams linux@cmadams.net
On Thu, Jan 16, 2020 at 8:01 AM Aleksandra Fedorova alpha@bookwar.info wrote:
Hi, Chris,
On Fri, Jan 10, 2020 at 4:29 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
Similarly to what Josh said, we want to setup an environment for experiments. It doesn't mean that things we experiment on are going to be merged in Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
It is not a backdoor for rejected change, it is a way to safely iterate on the rejected change to see if we can come up with a version of it, which won't be rejected.
So... I guess my objections are all based on the proposal as written, which doesn't sound to me like what you are describing. What the proposal says is all about rebuilding Fedora with the rejected baseline change, and showing that it is better to get the community to accept the original change. It also uses the term "older machines" (which is still misleading).
I have adjusted the description of the change in a way that I think clarifies this part. Please check if it is better now.
Also I've posted some context in a separate thread [1].
I think the current change is important to Fedora also as a showcase that we, as a project, care not only about the internal distribution matters but in some large developments in the "outside world". That we can also provide a venue for, for example, hardware vendors to show their newest work.
The rejection is a valuable feedback here, it highlights that it is not enough to just push a new hardware spec to the market to get it adopted. But we can also try to find a way how actually it _can_ be done better.
I'll be honest: not changing the architecture value that RPM uses for this is a hard blocker for me. The current proposal for distinguishing packages is unnecessarily confusing and makes it difficult to segment in Koji. If I hand-wave away all the other issues with this proposal (including that I'm still uncertain of the true purpose of this Change), the fact that RPMs with a new baseline will pretend to be the same architecture as another, in a way that gives us no means to codify a compatibility check in RPM itself, is simply unacceptable.
This proposal simply illustrates one of the biggest problems with the way architectures are currently handled: we don't really leverage sub-arches or anything of the sort, so it's quite difficult to discern these minutiae otherwise. Without changing the architecture value, we also cannot implement a meaningful way for RPM to block the installation of these packages on computers that don't meet the CPU requirements.
If we really want to do this, my ask is that you get RPM upstream to give you a new architecture label and have the hardware detection in place to keep people from breaking their systems with these packages.
-- 真実はいつも一つ!/ Always, there's only one truth!
On Thu, Jan 16, 2020 at 7:11 AM Neal Gompa ngompa13@gmail.com wrote:
On Thu, Jan 16, 2020 at 8:01 AM Aleksandra Fedorova alpha@bookwar.info wrote:
Hi, Chris,
On Fri, Jan 10, 2020 at 4:29 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
Similarly to what Josh said, we want to setup an environment for experiments. It doesn't mean that things we experiment on are going to be merged in Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
It is not a backdoor for rejected change, it is a way to safely iterate on the rejected change to see if we can come up with a version of it, which won't be rejected.
So... I guess my objections are all based on the proposal as written, which doesn't sound to me like what you are describing. What the proposal says is all about rebuilding Fedora with the rejected baseline change, and showing that it is better to get the community to accept the original change. It also uses the term "older machines" (which is still misleading).
I have adjusted the description of the change in a way that I think clarifies this part. Please check if it is better now.
Also I've posted some context in a separate thread [1].
I think the current change is important to Fedora also as a showcase that we, as a project, care not only about the internal distribution matters but in some large developments in the "outside world". That we can also provide a venue for, for example, hardware vendors to show their newest work.
The rejection is a valuable feedback here, it highlights that it is not enough to just push a new hardware spec to the market to get it adopted. But we can also try to find a way how actually it _can_ be done better.
I'll be honest: for me, not changing the architecture value that RPM uses for this is a hard blocker for me. The current proposal for distinguishing packages is unnecessarily confusing and makes it difficult to segment in Koji. If I hand-wave away all the other issues with this proposal (including that I'm still uncertain of the true purpose of this Change), the fact that RPMs with a new baseline will pretend to be the same architecture as another in a manner that we cannot codify a control for compatibility in RPM itself is simply unacceptable.
This proposal simply illustrates one of the biggest problems with the way architectures are currently handled: we don't really leverage sub-arches or anything of the sort, so it's quite difficult to discern this minutiae otherwise. Without changing the architecture value, we also cannot implement a meaningful way for RPM to block the installation of these packages on computers that don't meet the CPU requirements.
If we really want to do this, my ask is that you get RPM upstream to give you a new architecture label and have the hardware detection in place to keep people from breaking their systems with these packages.
So, I had actually come up with an idea for this last summer when the changes were originally proposed, but it is a non-trivial amount of work. I think having this separate experimental build root is a good way to see if it is even worth putting in that work to come up with a more manageable solution for future work. My idea from before is below.
rpm has package "flavors" at build time. This is a new field, but you can build variants with different flavors. The flavor gets populated into the repo metadata.
DNF has a list of flavors which the system supports. For right now, that might be AVX2, perhaps other things as well. DNF treats flavors as a "preference", not a hard rule, so when looking for updates, it will prefer the flavors that the system supports, but if a package is not available in that flavor, it defaults to unflavored or just the arch. Anaconda sets the flavor based on detection at install time, or it can be edited on the system (even better if we could autodetect a lot of it at either DNF runtime, or with a script).
While it seems a lot of work, we do have a bit of time, and putting such a thing into place will have long term benefits. It is extendable to many different things. The end result is a single repository (x86_64) with multiple flavors of the same package (kernel-5.2.2-1.x86_64.rpm, kernel-5.2.2-1.x86_64.avx2.rpm, etc). The other advantage to this scenario is you can add or take away multiple flavors of a package at any given time. Since it is just a preference, if a flavor goes away, it falls back to arch. If a flavor is added, which a system lists as "preferred", on the next update, that flavor is chosen, even if the current package is unflavored.
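If it helps, a purely illustrative sketch of the client-side piece of that idea; no such option exists in DNF today, this is just one way the "list of flavors the system supports" could be expressed:

  # Hypothetical only: there is no "flavors" option in DNF today.
  # /etc/dnf/dnf.conf
  [main]
  flavors=avx2
  # The repo would then carry e.g. kernel-5.2.2-1.x86_64.rpm and
  # kernel-5.2.2-1.x86_64.avx2.rpm, with the flavored build preferred on
  # systems that list "avx2" and the plain arch used as the fallback.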
On Thu, Jan 16, 2020 at 8:43 AM Justin Forbes jmforbes@linuxtx.org wrote:
On Thu, Jan 16, 2020 at 7:11 AM Neal Gompa ngompa13@gmail.com wrote:
On Thu, Jan 16, 2020 at 8:01 AM Aleksandra Fedorova alpha@bookwar.info wrote:
Hi, Chris,
On Fri, Jan 10, 2020 at 4:29 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
Similarly to what Josh said, we want to setup an environment for experiments. It doesn't mean that things we experiment on are going to be merged in Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
It is not a backdoor for rejected change, it is a way to safely iterate on the rejected change to see if we can come up with a version of it, which won't be rejected.
So... I guess my objections are all based on the proposal as written, which doesn't sound to me like what you are describing. What the proposal says is all about rebuilding Fedora with the rejected baseline change, and showing that it is better to get the community to accept the original change. It also uses the term "older machines" (which is still misleading).
I have adjusted the description of the change in a way that I think clarifies this part. Please check if it is better now.
Also I've posted some context in a separate thread [1].
I think the current change is important to Fedora also as a showcase that we, as a project, care not only about the internal distribution matters but in some large developments in the "outside world". That we can also provide a venue for, for example, hardware vendors to show their newest work.
The rejection is a valuable feedback here, it highlights that it is not enough to just push a new hardware spec to the market to get it adopted. But we can also try to find a way how actually it _can_ be done better.
I'll be honest: for me, not changing the architecture value that RPM uses for this is a hard blocker for me. The current proposal for distinguishing packages is unnecessarily confusing and makes it difficult to segment in Koji. If I hand-wave away all the other issues with this proposal (including that I'm still uncertain of the true purpose of this Change), the fact that RPMs with a new baseline will pretend to be the same architecture as another in a manner that we cannot codify a control for compatibility in RPM itself is simply unacceptable.
This proposal simply illustrates one of the biggest problems with the way architectures are currently handled: we don't really leverage sub-arches or anything of the sort, so it's quite difficult to discern this minutiae otherwise. Without changing the architecture value, we also cannot implement a meaningful way for RPM to block the installation of these packages on computers that don't meet the CPU requirements.
If we really want to do this, my ask is that you get RPM upstream to give you a new architecture label and have the hardware detection in place to keep people from breaking their systems with these packages.
So, I had actually come up with an idea for this last summer when the changes were originally proposed, but it is a non-trivial amount of work. I think having this separate experimental build root is a good way to see if it is even worth putting in that work to come up with a more manageable solution for future work. My idea from before is below.
rpm has package "flavors" at build time. This is a new field, but you can build variants with different flavors. The flavor gets populated into the repo metadata. DNF has a list of flavors which the system supports; for now, that might be AVX2, perhaps other things as well. DNF treats flavors as a "preference", not a hard rule: when looking for updates, it prefers the flavors that the system supports, but if a package is not available in that flavor, it falls back to the unflavored build for the plain arch. Anaconda sets the flavor based on detection at install time, or it can be edited on the installed system (even better if we could autodetect a lot of it at DNF runtime or with a script). While it seems like a lot of work, we do have a bit of time, and putting such a thing into place will have long-term benefits; it is extendable to many different things. The end result is a single repository (x86_64) with multiple flavors of the same package (kernel-5.2.2-1.x86_64.rpm, kernel-5.2.2-1.x86_64.avx2.rpm, etc.). The other advantage of this scenario is that you can add or take away flavors of a package at any given time. Since a flavor is just a preference, if it goes away, DNF falls back to the plain arch; if a flavor which a system lists as "preferred" is added, that flavor is chosen on the next update, even if the currently installed package is unflavored.
This is new to me, any documentation on this somewhere? I've not heard of this capability before...
On Thu, Jan 16, 2020 at 7:46 AM Neal Gompa ngompa13@gmail.com wrote:
This is new to me, any documentation on this somewhere? I've not heard of this capability before...
As I said, it was a proposal, not something that currently exists. But doing a side repo as an experiment might give us the data to see whether such an idea is even worth the work of making it possible.
Justin
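To make the "preference, not a hard rule" behaviour described above concrete, here is a toy sketch of the selection logic. It is purely illustrative: this is not DNF or rpm code, and every identifier in it is made up.

    /* Toy sketch of the proposed "flavor as a preference" selection:
     * pick the candidate matching the system's preferred flavor if one
     * exists, otherwise fall back to the unflavored (plain arch) build.
     * A missing flavor is never an error, because the flavor is only a
     * preference, not a requirement. */
    #include <stdio.h>
    #include <string.h>

    struct candidate {
        const char *nevra;
        const char *flavor;   /* "" means unflavored (plain arch build) */
    };

    static const struct candidate *
    pick(const struct candidate *c, size_t n, const char *preferred)
    {
        const struct candidate *fallback = NULL;
        for (size_t i = 0; i < n; i++) {
            if (strcmp(c[i].flavor, preferred) == 0)
                return &c[i];
            if (c[i].flavor[0] == '\0')
                fallback = &c[i];
        }
        return fallback;
    }

    int main(void)
    {
        const struct candidate kernel[] = {
            { "kernel-5.2.2-1.x86_64",      ""     },
            { "kernel-5.2.2-1.x86_64.avx2", "avx2" },
        };
        const struct candidate *best = pick(kernel, 2, "avx2");
        printf("selected: %s\n", best ? best->nevra : "(none)");
        return 0;
    }

Dropping the avx2 entry from the array changes nothing except that the plain x86_64 build is selected instead, which is exactly the fallback behaviour the idea relies on.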
Aleksandra Fedorova wrote:
Hi, Chris,
On Fri, Jan 10, 2020 at 4:29 PM Chris Adams linux@cmadams.net wrote:
Once upon a time, Aleksandra Fedorova alpha@bookwar.info said:
Similarly to what Josh said, we want to setup an environment for experiments. It doesn't mean that things we experiment on are going to be merged in Fedora. And it definitely doesn't mean that whatever we did in the experimental environment can bypass the approval process.
I, too, find it absolutely unacceptable that you are now trying to push an unacceptable (and rightfully rejected) system-wide change through via the salami tactic. There is nothing "self-contained" about your change proposal; it is clearly designed to be the first step of a system-wide change.
In addition, this first step is itself not "self-contained": Your buildroot will contain all packages in the entire distribution, and (depending on how exactly it is implemented) potentially slow down all builds or have other system-wide side effects (notification spam etc.).
It is not a backdoor for rejected change, it is a way to safely iterate on the rejected change to see if we can come up with a version of it, which won't be rejected.
No such version can exist. The change was rejected because its fundamental concept is unacceptable to begin with. Therefore, all your "improved" versions will be rejected as well, unless FESCo decides to ignore all the overwhelmingly negative feedback and do a 180° U-turn on the issue.
So... I guess my objections are all based on the proposal as written, which doesn't sound to me like what you are describing. What the proposal says is all about rebuilding Fedora with the rejected baseline change, and showing that it is better to get the community to accept the original change. It also uses the term "older machines" (which is still misleading).
I have adjusted the description of the change in a way that I think clarifies this part. Please check if it is better now.
Even with its current wording, I am still opposed to: https://fedoraproject.org/wiki/Changes/Additional_buildroot_to_test_x86-64_m... and any other Change that wants to introduce or test introducing a requirement on SSE3 or newer.
The only reasonable way to use these architecture extensions is runtime detection in specific packages. I am convinced that such runtime detection in a handful of packages should be enough to provide the desired improvements (in fact, some of them, e.g. OpenBLAS, already support this and make use of it right now!), without breaking compatibility with existing hardware.
Therefore, the development effort should be spent on identifying the packages where runtime detection of vector instruction sets makes sense and does not exist yet, and on adding such runtime detection to those packages (and ONLY to those packages where it makes sense, because doing this to all packages would just increase their size and lead to no noticeable performance improvement whatsoever).
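For reference, this kind of per-function runtime detection is available today through GCC's function multi-versioning; a minimal sketch, with a made-up function that is not taken from any real package, could look like this:

    #include <stdio.h>
    #include <stddef.h>

    /* GCC emits one clone per listed target plus an ifunc resolver, so
     * the AVX2 clone is picked at load time only on CPUs that support
     * AVX2, while the same binary still runs on the current x86_64
     * baseline.  Needs a sufficiently recent GCC and an ifunc-capable
     * glibc, both of which Fedora has. */
    __attribute__((target_clones("avx2", "default")))
    void scale(double *v, size_t n, double factor)
    {
        for (size_t i = 0; i < n; i++)
            v[i] *= factor;
    }

    int main(void)
    {
        double v[4] = { 1.0, 2.0, 3.0, 4.0 };
        scale(v, 4, 2.5);
        printf("%g %g %g %g\n", v[0], v[1], v[2], v[3]);
        return 0;
    }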
I think the current change is important to Fedora also as a showcase that we, as a project, care not only about the internal distribution matters but in some large developments in the "outside world". That we can also provide a venue for, for example, hardware vendors to show their newest work.
And runtime detection is the way to go there. It means you can use all the speedup from new CPU generations without breaking all existing (older, but often still sold or even still manufactured!) ones.
The rejection is a valuable feedback here, it highlights that it is not enough to just push a new hardware spec to the market to get it adopted. But we can also try to find a way how actually it _can_ be done better.
I do not think things can be done any better on the hardware end. (Well, technically, it might be possible to backport new instruction sets with microcode updates, but it is unrealistic to expect CPU vendors to do that, and it would also not lead to the expected performance improvements, because adding, e.g., 512-bit vector instructions to a CPU that only has 256-bit vector units would only mean that the microcode would have to split each operation to emulate the 512-bit instruction set.) The better way has to be done on the software end, and it already exists there, it is called runtime detection.
Kevin Kofler
On Friday, January 10, 2020 6:37:11 AM MST Chris Adams wrote:
AVX2 is not a reasonable requirement as a replacement for the current Fedora x86_64, as there are CPUs still being made today that don't support that.
Relevant lines from /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm pti tpr_shadow vnmi flexpriority dtherm ida
If we want to go by what's actually in use as well, instead of just new machines, I can get an even more restrictive list. Further, something not being made anymore is not a reason to drop it. That makes no sense. Fedora users aren't going to toss their current, working, good machine just to find one with a processor you think is new enough. That's absurd.
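As a side note, whether a given machine could even run AVX2-baseline packages is easy to check at run time with GCC's CPU feature builtins; a minimal sketch (my own example, not part of the proposal):

    #include <stdio.h>

    int main(void)
    {
        /* __builtin_cpu_init() is only strictly required when the check
         * runs before constructors, but calling it here is harmless. */
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2"))
            puts("AVX2 available: AVX2-baseline packages would run here");
        else
            puts("no AVX2: this machine could not run AVX2-baseline packages");
        return 0;
    }

The flags line above comes from exactly such a CPU: it reports SSSE3 and SSE4.1 but no AVX or AVX2.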
* Neal Gompa:
- Our builder resources are squeezed enough as it is. In doing this,
are we going to get more machines so that we can have more builders? Between modules and this, I worry our resources will get squeezed far too tightly for my comfort.
Please send me the required hardware specs and number of machines.
- This feature does not describe what the new microarchitecture
baseline will be. I *could* assume it's the crazy new microarchitecture that was proposed in the rejected Change, but I don't want to make that assumption. Please specify.
It's going to be AVX2 with or without VZEROUPPER. We'll try without VZEROUPPER first, but if that has too poor performance for legacy software, we need to build with VZEROUPPER.
- Why is this in the main Koji and require a new disttag? Why not
just do a shadow Koji and build in an alternate path?
It was the best approach we came up with. If there are better ways to implement this, that's fine too.
We do not want to change the RPM architecture, so that users still can install third-party software. This means that we need to change the dist tag to avoid confusion.
Thanks, Florian
On Fri, Jan 10, 2020 at 10:29 AM Florian Weimer fweimer@redhat.com wrote:
- Neal Gompa:
- Why is this in the main Koji and require a new disttag? Why not
just do a shadow Koji and build in an alternate path?
It was the best approach we came up with. If there are better ways to implement this, that's fine too.
Why not use a copr repo for this? I don't think you would rebuild the whole distro there, but a selected set of packages would be a relevant step to provide some numbers.
On Fri, Jan 10, 2020 at 4:28 AM Florian Weimer fweimer@redhat.com wrote:
We do not want to change the RPM architecture, so that users still can install third-party software. This means that we need to change the dist tag to avoid confusion.
Changing the RPM architecture does not necessarily mean that you wouldn't remain compatible with baseline x86_64. For example, OpenMandriva's build of the distribution optimized for first generation AMD Ryzen systems uses the "znver1" RPM architecture, but the "znver1" architecture is deliberately considered compatible with x86_64, so packages that are "x86_64" are still installable. There are numerous examples of this for 32-bit x86, and there's no reason we couldn't do this for 64-bit x86.
* Neal Gompa:
On Fri, Jan 10, 2020 at 4:28 AM Florian Weimer fweimer@redhat.com wrote:
We do not want to change the RPM architecture, so that users still can install third-party software. This means that we need to change the dist tag to avoid confusion.
Changing the RPM architecture does not necessarily mean that you wouldn't remain compatible with baseline x86_64. For example, OpenMandriva's build of the distribution optimized for first generation AMD Ryzen systems uses the "znver1" RPM architecture, but the "znver1" architecture is deliberately considered compatible with x86_64, so packages that are "x86_64" are still installable. There are numerous examples of this for 32-bit x86, and there's no reason we couldn't do this for 64-bit x86.
But the value of %_arch still changes, right?
I believe this will break things, like this:
| F ?= $(shell test ! -e /etc/fedora-release && echo 0; test -e /etc/fedora-release && rpm --eval %{fedora})
| ARCH ?= $(shell test ! -e /etc/fedora-release && uname -m; test -e /etc/fedora-release && rpm --eval %{_arch})
| MOCK_CFG ?= $(shell test -e /etc/fedora-release && echo fedora-$(F)-$(ARCH))
Thanks, Florian
On Fri, Jan 10, 2020 at 5:05 AM Florian Weimer fweimer@redhat.com wrote:
But the value of %_arch still changes, right?
I believe this will break things, like this:
| F ?= $(shell test ! -e /etc/fedora-release && echo 0; test -e /etc/fedora-release && rpm --eval %{fedora})
| ARCH ?= $(shell test ! -e /etc/fedora-release && uname -m; test -e /etc/fedora-release && rpm --eval %{_arch})
| MOCK_CFG ?= $(shell test -e /etc/fedora-release && echo fedora-$(F)-$(ARCH))
It will break _that_ specifically, yes. But it is not okay to make this muddier than it already is. If we are changing the definition of x86_64 in Fedora, that's one thing. But you are not proposing that. Therefore, it needs a different architecture classification.
And as an aside, that particular code would still break even if you were using just a weird DistTag change, because a separate build root means a separate mock configuration.
On Fri, 2020-01-10 at 11:05 +0100, Florian Weimer wrote:
But the value of %_arch still changes, right?
I don't think that needs to be true. If I'm reading the rpm source right (always questionable), it looks like %_arch is set from $CANONARCH in the installplatform script, which treats ia32e and amd64 in such a way that $CANONARCH is still x86_64.
Granted, this does mean you'd need to patch the normal rpm package (not the one in your buildroot) to know about this architecture too, which is maybe not ideal, or else require that installing packages from this buildroot be done with that buildroot's rpm.
- ajax
On Thu, Jan 09, 2020 at 12:16:17PM -0500, Ben Cotton wrote:
https://fedoraproject.org/wiki/Changes/Additional_buildroot_to_test_x86-64_m...
== Summary ==
Create a dedicated buildroot to test packages built with x86-64 micro-architecture update.
So, a few questions:
Can packages built in this buildroot be used on the same system with packages from the normal buildroot?
I assume one of the reasons to do this is that we don't know which packages benefit from the changes and by how much? If we do, or could at least have a good idea about this, and the normal packages could be shared on the same systems, how about only doing those specific packages in the separate buildroot instead of everything? That would save a lot of space and also get wider testing perhaps (if people could just upgrade packages from the normal repo to test).
Do these builds need to be signed?
Would there be some kind of initial 'mass build' to populate existing packages / things that don't rebuild very often?
Would there need to be some kind of bootstrap? Or would this inherit from the existing normal buildroot?
My main concern here is the storage front. Would we need to keep old builds? Or could we just keep the last successful build only?
I guess ideally there will be 0 changes to spec files (just different macros, etc)? And if there are changes, they would be something we would want to upstream, so perhaps a bugzilla tracker for any spec changes to make sure as much as possible they go upstream and go away?
I wonder if in naming and such we could make sure we are doing things generically. ie, instead of naming anything here avx2 or something we name it 'altmicro' or 'alternatearch' so we could keep / reuse this for other things later.
Thanks for the proposal!
kevin
* Kevin Fenzi:
Can packages built in this buildroot be used on the same system with packages from the normal buildroot?
Yes, I expect them to be compatible at the interface level because the flags do not directly alter calling conventions. There could be a slowdown mixing package versions, though.
It's also conceivable that some packages change struct layout under #ifdef __AVX2__, and those changes could be externally visible. I do not think this is likely, though, because these packages would fail to run with user-compiled -march=native code today.
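To illustrate the kind of externally visible difference meant here, consider a hypothetical public struct whose layout depends on the build-time baseline (a made-up example, not taken from any Fedora package):

    #include <stdio.h>
    #include <stddef.h>

    /* When built with -mavx2 the array member is over-aligned for
     * 256-bit vector loads, so the struct's size and alignment differ
     * from a baseline build -- an externally visible ABI change if the
     * struct appears in a library's public headers. */
    struct sample_buf {
    #ifdef __AVX2__
        _Alignas(32) double data[8];
    #else
        double data[8];
    #endif
        size_t len;
    };

    int main(void)
    {
        printf("sizeof(struct sample_buf) = %zu\n", sizeof(struct sample_buf));
        return 0;
    }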
I assume one of the reasons to do this is that we don't know which packages benefit from the changes and by how much?
I assumed that machine time and storage are much cheaper than curating the set of packages manually (which potentially requires writing and designing a UI, too).
Do these builds need to be signed?
Why not? If the hash chain through the mirror manager still works or users can go directly to the buildroot repository on https:// URLs, I don't see a strict requirement for that, though.
Would there be some kind of initial 'mass build' to populate existing packages / things that don't rebuild very often?
Yes, for the VZEROUPPER change, we would have to do an initial rebuild.
Would there need to be some kind of bootstrap? Or would this inherit from the existing normal buildroot?
I would just do two rebuild cycles, with GCC and glibc coming first. (That's what I did for the downstream rebuild.) I don't think a real bootstrap is needed.
My main concern here is the storage front. Would we need to keep old builds? Or could we just keep the last successful build only?
I'm not sure. I guess I could keep a copy locally if packages expire really quickly.
I guess ideally there will be 0 changes to spec files (just different macros, etc)? And if there are changes, they would be something we would want to upstream, so perhaps a bugzilla tracker for any spec changes to make sure as much as possible they go upstream and go away?
What do you mean by upstream in this context? Upstream outside Fedora rarely has Fedora-compatible spec files, I think.
Thanks, Florian
On Mon, Jan 13, 2020 at 10:52:32AM +0100, Florian Weimer wrote:
- Kevin Fenzi:
Can packages built in this buildroot be used on the same system with packages from the normal buildroot?
Yes, I expect them to be compatible at the interface level because the flags do not directly alter calling conventions. There could be a slowdown mixing package versions, though.
It's also conceivable that some packages change struct layout under #ifdef __AVX2__, and those changes could be externally visible. I do not think this is likely, though, because these packages would fail to run with user-compiled -march=native code today.
ok. That likely means people will mix them, because it's possible. ;)
I assume one of the reasons to do this is that we don't know which packages benefit from the changes and by how much?
I assumed that machine time and storage are much cheaper than curating the set of packages manually (which potentially requires writing and designing a UI, too).
Well, I don't think a UI is needed. We would just set up the new buildroot to inherit from the normal rawhide one and use 'koji add-pkg' to add packages to it, then build them in the alternate buildroot.
Someone would have to curate it indeed, but it would then use a lot less cpu/storage. But if it's unclear what packages should be added, then that's probably not worth it.
Do these builds need to be signed?
Why not? If the hash chain through the mirror manager still works or users can go directly to the buildroot repository on https:// URLs, I don't see a strict requirement for that, though.
well, signing takes time, more disk space, etc. Mirrormanager is not used for koji buildroot repos. They are in only one datacenter and not mirrored anywhere. They are accessible via https.
Would there be some kind of initial 'mass build' to populate existing packages / things that don't rebuild very often?
Yes, for the VZEROUPPER change, we would have to do an initial rebuild.
Would there need to be some kind of bootstrap? Or would this inherit from the existing normal buildroot?
I would just do two rebuild cycles, with GCC and glibc coming first. (That's what I did for the downstream rebuild.) I don't think a real bootstrap is needed.
ok.
My main concern here is the storage front. Would we need to keep old builds? Or could we just keep the last successful build only?
I'm not sure. I guess I could keep a copy locally if packages expire really quickly.
Well, it's a balancing act for sure... how often do you want to go back to the 50th previous build, or one from months ago? Since everything is in git, in theory you could just rebuild even a deleted build if needed. Keeping, say, only the last 3 builds for each package could save a lot of storage.
I guess ideally there will be 0 changes to spec files (just different macros, etc)? And if there are changes, they would be something we would want to upstream, so perhaps a bugzilla tracker for any spec changes to make sure as much as possible they go upstream and go away?
What do you mean by upstream in this context? Upstream outside Fedora rarely has Fedora-compatible spec files, I think.
I mean, say a package builds fine in normal rawhide but fails to build in this new alternate buildroot. You might apply a patch to the package, conditional in the spec on the new dist tag. That patch should go upstream, right? So that it works with both normal rawhide and the new flags?
kevin
On Mon, Jan 13, 2020 at 5:18 PM Kevin Fenzi kevin@scrye.com wrote:
On Mon, Jan 13, 2020 at 10:52:32AM +0100, Florian Weimer wrote:
- Kevin Fenzi:
My main concern here is the storage front. Would we need to keep old builds? Or could we just keep the last successful build only?
I'm not sure. I guess I could keep a copy locally if packages expire really quickly.
Well, it's a balancing act for sure... how often do you want to go back to the 50th previous build, or one from months ago? Since everything is in git, in theory you could just rebuild even a deleted build if needed. Keeping, say, only the last 3 builds for each package could save a lot of storage.
While on one hand we don't need long-term storage for development builds, on the other it is quite valuable to be able to compare the latest successful build with some previous ones to see whether we actually improve anything over time.
I am thinking of relying on ODCS composes to implement this part.
If we set up an ODCS pipeline and regularly build composes out of the current content of the Koji tag, then we can store them elsewhere and use them as checkpoints: everything which has landed in such a compose can be removed from Koji as soon as it is not used by the buildroot, i.e. it is not the latest build for the component.
Then we can implement a separate retention policy for composes, which may be more flexible. For example, it may depend on whether or not a compose was used in some advanced testing by anyone.
On Thu, Jan 16, 2020 at 02:09:45PM +0100, Aleksandra Fedorova wrote:
While on one hand we don't need long-term storage for development builds, on the other it is quite valuable to be able to compare the latest successful build with some previous ones to see whether we actually improve anything over time.
I am thinking of relying on ODCS composes to implement this part.
If we set up an ODCS pipeline and regularly build composes out of the current content of the Koji tag, then we can store them elsewhere and use them as checkpoints: everything which has landed in such a compose can be removed from Koji as soon as it is not used by the buildroot, i.e. it is not the latest build for the component.
Then we can implement a separate retention policy for composes, which may be more flexible. For example, it may depend on whether or not a compose was used in some advanced testing by anyone.
Sounds reasonable. Thanks.
kevin
On Mon, 2020-01-13 at 10:52 +0100, Florian Weimer wrote:
- Kevin Fenzi:
Can packages built in this buildroot be used on the same system with packages from the normal buildroot?
Yes, I expect them to be compatible at the interface level because the flags do not directly alter calling conventions. There could be a slowdown mixing package versions, though.
It's also conceivable that some packages change struct layout under #ifdef __AVX2__, and those changes could be externally visible. I do not think this is likely, though, because these packages would fail to run with user-compiled -march=native code today.
You could detect this automatically by running abidiff against the two arch variants of corresponding builds, which would be nice.
Would there be some kind of initial 'mass build' to populate existing packages / things that don't rebuild very often?
Yes, for the VZEROUPPER change, we would have to do an initial rebuild.
Can you expand on this change? What are the tradeoffs for disabling vzeroupper? (At least, my gcc thinks it's enabled by default.)
- ajax
I have some concerns about this proposal. Given that this change was essentially unanimously rejected, this line stood out to me:
- As soon as feature is accepted by the community, there will be a
smooth process to update baseline in the main Fedora, as all packages will be already verified and tested to work against it.
This makes it sound as if this change is inevitable and that the community is simply being stubborn or ignorant of its importance. The current (and foreseeable) situation is that Intel will continue to use SIMD extensions as a way to help segment its processor lines; for example, we know that the new low-power microarchitecture, Tremont, will not have AVX2 support [1].
Currently, Intel's Atom [2], Celeron [3], and Pentium [4] processors do not have AVX2 support. Not to mention that this change would eliminate support for all AMD processors made before 2017 (pre-Zen) and all Intel processors made before 2013 (pre-Haswell), so I am worried that this is a step towards abandoning a large swath of processors for reasons and goals that have not been fully articulated.
So here are some questions that would help me better understand this proposal:
1. The motivation behind the change is clearly performance, but what packages and/or use cases are expected to see a significant increase in performance? What testing/benchmarking has been done to demonstrate these improvements, and where can we see the results?
2. Since it is likely that new SIMD extensions will be implemented in the future, what are the factors considered for moving the baseline of Fedora? What is an acceptable age for a processor to be before it is unsupported? Do we want Fedora to only target mid and high end Intel processor SKUs? What performance increases (and for which packages) merit consideration for bumping the baseline?
3. Why was AVX2 chosen as the baseline? Specifically, why was it chosen over a more conservative increase to something like SSE4.1/4.2, or a more aggressive increase to AVX512?
4. Given that the author of the proposal is expecting this change in x86_64 baseline to be implemented at some point in the future, what is the projected timeline and what is currently blocking this change from being proposed again (besides the community)?
[1] https://en.wikichip.org/wiki/intel/microarchitectures/tremont
[2] https://ark.intel.com/content/www/us/en/ark/products/184994/intel-atom-proce...
[3] https://ark.intel.com/content/www/us/en/ark/products/134879/intel-celeron-pr...
[4] https://ark.intel.com/content/www/us/en/ark/products/135457/intel-pentium-go...