Hi all,
This idea came about while I was debugging build issues with mcrouter, which turned out to be caused by build jobs failing to allocate memory and being terminated without aborting the entire compilation, leading to link errors when empty or corrupted object files were encountered:
https://src.fedoraproject.org/rpms/mcrouter/blob/rawhide/f/mcrouter.spec#_4-...
As a rough estimate, each of the CPU cores passed with %{_smp_build_ncpus} ended up consuming close to 8 GB of RAM. And that's with LTO disabled (yes, it's not a good situation to be in).
Right now I'm just overriding _smp_build_ncpus to 1, but there is a more elegant solution I'd like to propose:
What if one could declaratively set the required RAM per build job -- either with a single macro, or maybe two if the LTO use case requires even more RAM. E.g., to declare that each job might take up to 8 GB:
%global _smp_build_ram_per_cpu 8192
Then, if this is run on our aarch64 builder with 40 GB of RAM, dynamically take the minimum of the existing _smp_build_ncpus (which AIUI is determined by the number of cores on the machine) and (amount of RAM / _smp_build_ram_per_cpu), in this case capping the actual number passed to -j at 5.
Is there interest in having this be available? I could imagine it might be useful for other resource-intensive package builds e.g. for Chromium.
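To make the proposed arithmetic concrete, here is a minimal shell sketch of the capping logic (the function and variable names are mine for illustration, not an existing macro implementation):

```shell
# Hypothetical sketch of the proposed cap:
#   jobs = min(_smp_build_ncpus, total RAM / _smp_build_ram_per_cpu)
ram_per_cpu_mb=8192   # what %_smp_build_ram_per_cpu would hold

compute_jobs() {
    ncpus=$1          # what %_smp_build_ncpus currently reports
    mem_total_mb=$2   # total RAM on the builder, in MB
    mem_jobs=$(( mem_total_mb / ram_per_cpu_mb ))
    [ "$mem_jobs" -lt 1 ] && mem_jobs=1   # never drop below -j1
    if [ "$mem_jobs" -lt "$ncpus" ]; then
        echo "$mem_jobs"
    else
        echo "$ncpus"
    fi
}

# The aarch64 builder example: many cores but 40 GB of RAM -> -j5.
compute_jobs 32 40960
```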
Best regards,
Perhaps you should look at how Ceph has dealt with a similar issue; they set the maximum number of CPUs based on the system RAM. https://src.fedoraproject.org/rpms/ceph/blob/rawhide/f/ceph.spec#_1246
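For readers who don't want to open the link, the gist can be paraphrased as a %build scriptlet along these lines (an illustrative sketch, not the actual ceph.spec contents; the 3 GB-per-job figure is made up):

```spec
%build
# Sketch only, not copied from ceph.spec: derive a RAM-based job limit
# from MemTotal (reported in kB) and use it to cap the CPU-based default.
mem_jobs=$(awk '/^MemTotal:/ { n = int($2 / (3 * 1024 * 1024)); if (n < 1) n = 1; print n }' /proc/meminfo)
jobs=%{_smp_build_ncpus}
[ "$mem_jobs" -lt "$jobs" ] && jobs=$mem_jobs
make -j"$jobs"
```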
Dennis
On Fri, Mar 26, 2021 at 7:49 PM Michel Alexandre Salim michel@michel-slm.name wrote:
[snip]
-- Michel Alexandre Salim profile: https://keyoxide.org/michel@michel-slm.name
On Fri, 26 Mar 2021 17:29:27 -0700 Michel Alexandre Salim michel@michel-slm.name wrote:
[snip]
Is there interest in having this be available? I could imagine it might be useful for other resource-intensive package builds e.g. for Chromium.
There was an attempt to come up with a system-wide solution; see https://github.com/rpm-software-management/rpm/pull/821
Dan
Hi Michel,
Michel Alexandre Salim michel@michel-slm.name writes:
[snip]
Is there interest in having this be available? I could imagine it might be useful for other resource-intensive package builds e.g. for Chromium.
openSUSE has a package and a set of macros for exactly this purpose: https://build.opensuse.org/package/show/network:chromium/memory-constraints (and funnily enough, it was created for Chromium).
It is pretty simple to use: just drop this line into your spec file: %limit_build -m MAX_MB_PER_THREAD
and the macro will set %_smp_mflags for you.
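Based on that description, spec usage would presumably look something like this (a sketch; the package name is inferred from the linked OBS project, and the 2000 MB figure is illustrative):

```spec
# Sketch of memory-constraints usage, inferred from the description above.
BuildRequires:  memory-constraints

%build
# Ask for at least ~2000 MB of RAM per thread; %limit_build then
# recomputes %_smp_mflags to match.
%limit_build -m 2000
make %{?_smp_mflags}
```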
Cheers,
Dan
On 27 Mar 2021 at 1:29, Michel Alexandre Salim wrote:
Is there interest in having this be available? I could imagine it might be useful for other resource-intensive package builds e.g. for Chromium.
Definitely. Additionally, see https://pagure.io/copr/copr/issue/1678 for declaring a timeout.
Hi again,
On Fri, 2021-03-26 at 17:29 -0700, Michel Alexandre Salim wrote:
[snip]
Thanks to Dennis, Dan and Miroslav for the helpful pointers!
Short-term I'll look into adopting something similar to the Ceph script, but medium-term, maybe we can get the scriptlet into redhat-rpm-config and then eventually have it in RPM itself once Panu's patch is reworked?
(Having it in redhat-rpm-config or some other RPM is probably needed anyway, for older supported releases, so ideally we use the same macros and only override them in redhat-rpm-config if they were undefined).
cc:ing Panu for context on the RPM PR status.
Best,
On 3/29/21 8:17 PM, Michel Alexandre Salim wrote:
[snip]
cc:ing Panu for context on the RPM PR status.
There's no progress on that front, other than occasionally thinking about it.
It might not be a bad idea to be able to put an actual BuildRequire on the amount of memory (and other similar resources - disk space also comes to mind) into specs.
- Panu -
On Tue, Mar 30, 2021 at 4:46 AM Panu Matilainen pmatilai@redhat.com wrote:
[snip]
It might not be a bad idea to be able to put an actual BuildRequire on the amount of memory (and other similar resources - disk space also comes to mind) into specs.
Let's *not*. The amount of space saved in storage by putting socks and underwear in their own separate drawers is often overwhelmed by the space wasted in giving those drawers sides and a way to slide in and out. In other words, it creates unnecessary overhead in managing build environment resources. It's also difficult to predict in advance, especially for the worst offenders, such as builds whose upstream components are often pulled in dynamically: "pip install", "cpan", "mvn", "gradle", "rubygem", and my recent favorite for unpredictable and only erratically available dependencies, "go". While well-designed SRPMs and .spec files rely only on other RPMs for their dependencies, many in the field do not. And even when the dependencies are available, even a small change in "test" procedures can massively expand RAM and disk use during compilation.
In other words, the memory requirements are likely to diverge, *massively* and unpredictably. It's safer and simpler to allocate memory generously for build environments.
On 3/30/21 12:43 PM, Nico Kadel-Garcia wrote:
[snip]
Let's *not*. The amount of space saved in storage by putting socks and underwear in their own separate drawers is often overwhelmed by the space wasted in giving those drawers sides and a way to slide in and out. In other words, it creates unnecessary overhead in managing build environment resources. It's also difficult to predict in advance, especially for the worst offenders, such as builds whose upstream components are often pulled in dynamically: "pip install", "cpan", "mvn", "gradle", "rubygem", and my recent favorite for unpredictable and only erratically available dependencies, "go". While well-designed SRPMs and .spec files rely only on other RPMs for their dependencies, many in the field do not. And even when the dependencies are available, even a small change in "test" procedures can massively expand RAM and disk use during compilation.
In other words, the memory requirements are likely to diverge, *massively* and unpredictably. It's safer and simpler to allocate memory generously for build environments.
The point is not that everybody should add such requirements to their packages, but to have that *option* for those monster packages out there.
- Panu -
Nico Kadel-Garcia wrote:
Let's *not*. The amount of space saved in storage by putting socks and underwear in their own separate drawers is often overwhelmed by the space wasted in giving those drawers sides and a way to slide in and out. In other words, it creates unnecessary overhead in managing build environment resources. It's also difficult to predict in advance,
I understand that point, but…
especially for the worst offenders, such as builds whose upstream components are often pulled in dynamically: "pip install", "cpan", "mvn", "gradle", "rubygem", and my recent favorite for unpredictable and only erratically available dependencies, "go". While well-designed SRPMs and .spec files rely only on other RPMs for their dependencies, many in the field do not.
… downloading dependencies from the Internet is a no-go in Koji anyway!
Kevin Kofler
On Thu, Apr 1, 2021 at 8:37 AM Kevin Kofler via devel devel@lists.fedoraproject.org wrote:
[snip]
… downloading dependencies from the Internet is a no go in Koji anyway!
It happens locally, rather than formally in koji or mock. I've had to use it to build things like "cli53", which has a stunning mess of golang dependencies hosted in github and beyond my ability to sort out the dependency chains. I've also used it for pip based installs when in a rush, using "python3 -m venv /opt/package; source /opt/package/bin/activate; pip install package" to resolve a chain of dependencies beyond my time and willingness to resolve as RPMs and bundle them into an RPM containing /usr/local/package or /opt/package. Since I've published working repos to build dependency chains of over 200 RPM based python modules in a set, I don't do that lightly.
Nico Kadel-Garcia wrote:
[snip]
I don't see how your local packages that would never pass Fedora review (because they do not build in Koji) are affected by the proposal being discussed at all.
If you don't know how much memory is needed, just don't declare it (or declare it as 0).
Kevin Kofler