As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
On Sun, 8 Jul 2018 at 19:57, Igor Gnatenko ignatenkobrain@fedoraproject.org wrote:
As per Changes/Remove GCC from BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
After adding explicit gcc/g++ in BuildRequires it will be extremely hard to switch to, for example, clang. Congratulations.
kloczek
On 07/09/2018 11:15 AM, Tomasz Kłoczko wrote:
On Sun, 8 Jul 2018 at 19:57, Igor Gnatenko ignatenkobrain@fedoraproject.org wrote:
As per Changes/Remove GCC from BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
After adding explicit gcc/g++ in BuildRequires it will be extremely hard to switch to, for example, clang.
Well, you imply that currently we could just substitute clang for GCC and successfully rebuild everything, but that is not the case: there will be breakage. With an explicit dependency, the packagers can check clang compatibility ahead of time and prepare for the change. Things that require GCC will give a build-time error instead of mysterious compile or run-time failures. Packages that truly don't care about the compiler (which hopefully will be in the majority) can use boolean dependencies: BuildRequires: (gcc-c++ or clang) https://fedoraproject.org/wiki/Packaging:Guidelines#Rich.2FBoolean_dependenc...
Congratulations.
Come on now...
On Mon, 9 Jul 2018 at 16:42, Przemek Klosowski przemek.klosowski@nist.gov wrote: [..]
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
After adding explicit gcc/g++ in BuildRequires it will be extremely hard to switch to, for example, clang.
Well, you imply that currently we could just substitute clang for GCC and successfully rebuild everything, but that is not the case: there will be breakage.
That is a mistake.. this is not an implication but an *observation*/conclusion, and it has nothing to do with me personally.
After adding an explicit gcc to BuildRequires, any attempt to use clang instead of gcc will require changing all those specs with "BuildRequires: gcc". You can wipe me from this universe and this will still be true. We are talking about potentially thousands, if not tens of thousands, of package spec files. It is a real shame that no one stopped for a few seconds to ask themselves: "wait!! is this really the easiest way?"
It would be relatively easy to perform such a change in the future if all packages' master branches were used only by rawhide (objectively). However, as current Fedora practice shows, almost all mass changes are never carried through to the end because many packages want to have "universal" spec files. This alone makes such changes way harder, or "extremely harder". Instead of using git branches, almost all Fedora spec files must support all non-EOLed Fedora versions, many of them EPEL (at least two versions) and sometimes even CentOS (despite the fact that the CentOS people are not using Fedora specs). This is the only reason why I wrote that it will be "extremely hard" for any future changes. (Good that the recent request to allow %if-ing for SuSE was refused)
The issue is that clang is getting better and better, and (IMO) sooner or later switching to it could be a potentially interesting option. Even if not (yet) used as the option to produce the whole distribution, clang is very useful for exposing more not-so-well-written parts of the code, because of how much clang now provides. IMO it would be good to fork every build request to a build environment where everything possible is built using clang.. if only to store the build logs, which should show more compile warnings.
If Fedora deliberately wants to use gcc (because maintaining gcc is part of Red Hat's core business, and some people interested in testing bleeding-edge gcc code may not be interested in doing the same for clang), that is fine and the conclusion I formed can be ignored. However, if the intention is to provide some level of flexibility here, introducing gcc/g++ as explicit BuildRequires now will close many doors.
My proposition is *not* to add explicit gcc/g++ to BuildRequires, but instead to use glibc-devel and libstdc++-devel and to ban gcc/gcc-c++ in BuildRequires (in most cases all current gcc/g++ BuildRequires could be replaced by glibc-devel and libstdc++-devel). All because it is not possible to use a C compiler without glibc-devel or a C++ compiler without libstdc++-devel.
Changes in redhat-rpm-config to allow an easy switch between gcc and clang (or other compilers) could be done later.
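A rough sketch of the kind of redhat-rpm-config level switch meant here (hypothetical macro names, illustrative only; not something redhat-rpm-config provides today):

    # hypothetical macros-file content:
    #   %__cc    %{?toolchain_clang:clang}%{!?toolchain_clang:gcc}
    #   %__cxx   %{?toolchain_clang:clang++}%{!?toolchain_clang:g++}
    # which a test rebuild could then flip per package with:
    rpmbuild -ba foo.spec --define 'toolchain_clang 1'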
kloczek
----- Original Message -----
From: "Tomasz Kłoczko" kloczko.tomasz@gmail.com To: "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Monday, July 9, 2018 6:45:28 PM Subject: Re: [HEADS UP] Removal of GCC from the buildroot
On Mon, 9 Jul 2018 at 16:42, Przemek Klosowski przemek.klosowski@nist.gov wrote: [..]
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
After adding explicit gcc/g++ in BuildRequires it will be extremely hard to switch to, for example, clang.
Well, you imply that currently we could just substitute clang for GCC and successfully rebuild everything, but that is not the case: there will be breakage.
That is a mistake.. this is not an implication but an *observation*/conclusion, and it has nothing to do with me personally.
Yes it does. You claim something without having any proof to back it up. Not the first time that this has happened.
After adding an explicit gcc to BuildRequires, any attempt to use clang instead of gcc will require changing all those specs with "BuildRequires: gcc". You can wipe me from this universe and this will still be true.
Explicit is better than implicit. It's up to the packager (and upstream of course) to decide on the preferred compiler. So of course it will need to be changed. Are you implying that one day maybe we'll change to clang universally so we'll have to keep the SPEC's BuildRequires vague for that case?
We are talking about potentially thousands, if not tens of thousands, of package spec files. It is a real shame that no one stopped for a few seconds to ask themselves: "wait!! is this really the easiest way?"
No. But I believe objectively it's the best as it doesn't leave room for guesses. What you see is what it is.
It would be relatively easy to perform such a change in the future if all packages' master branches were used only by rawhide (objectively). However, as current Fedora practice shows, almost all mass changes are never carried through to the end because many packages want to have "universal" spec files. This alone makes such changes way harder, or "extremely harder". Instead of using git branches, almost all Fedora spec files must support all non-EOLed Fedora versions, many of them EPEL (at least two versions) and sometimes even CentOS (despite the fact that the CentOS people are not using Fedora specs). This is the only reason why I wrote that it will be "extremely hard" for any future changes.
No, they must not. Your wording here implies that this is some sort of rule or guideline, while actually packagers just want to do it for convenience in some aspects of packaging. (I would argue that it's not really convenient, but that's just my personal opinion.)
(Good that the recent request to allow %if-ing for SuSE was refused)
The issue is that clang is getting better and better, and (IMO) sooner or later switching to it could be a potentially interesting option. Even if not (yet) used as the option to produce the whole distribution, clang is very useful for exposing more not-so-well-written parts of the code, because of how much clang now provides. IMO it would be good to fork every build request to a build environment where everything possible is built using clang.. if only to store the build logs, which should show more compile warnings.
It could be, or it could not be. Shouting for something that you LIKE doesn't make it more appealing technically or otherwise. I thought you would have figured that out by now, throughout all the pointless threads you have opened or comments you've made here, but you keep falling into the same fallacies.
If you have an opinion express it while being aware and respectful of others. You have failed to do that on multiple occasions and I really wonder why you keep posting here.
If Fedora deliberately wants to use gcc (because maintaining gcc is part of Red Hat's core business, and some people interested in testing bleeding-edge gcc code may not be interested in doing the same for clang), that is fine and the conclusion I formed can be ignored. However, if the intention is to provide some level of flexibility here, introducing gcc/g++ as explicit BuildRequires now will close many doors.
Again, how will it close many doors? In some hypothetical future where clang will be the default? While that could be a reasonable argument at a time and place where the compilers competed on equal ground, and while the status quo might change from time to time, I don't see gcc being anything other than the default in Linux land for the foreseeable future.
My proposition is *not* to add explicit gcc/g++ to BuildRequires, but instead to use glibc-devel and libstdc++-devel and to ban gcc/gcc-c++ in BuildRequires (in most cases all current gcc/g++ BuildRequires could be replaced by glibc-devel and libstdc++-devel). All because it is not possible to use a C compiler without glibc-devel or a C++ compiler without libstdc++-devel.
I love how, instead of a compromise, you actually propose banning that BuildRequires. Do you realize that your arguments are never assertive, and this just makes people even more opposed to your (poorly worded) propositions?
Changes in redhat-rpm-config to allow an easy switch between gcc and clang (or other compilers) could be done later.
kloczek
Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH
On Mon, 2018-07-09 at 17:45 +0100, Tomasz Kłoczko wrote:
On Mon, 9 Jul 2018 at 16:42, Przemek Klosowski przemek.klosowski@nist.gov wrote: [..]
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
After adding explicit gcc/g++ in BuildRequires it will be extremely hard to switch to, for example, clang.
Well, you imply that currently we could just substitute clang for GCC and successfully rebuild everything, but that is not the case: there will be breakage.
My proposition is *not* to add explicit gcc/g++ to BuildRequires, but instead to use glibc-devel and libstdc++-devel and to ban gcc/gcc-c++ in BuildRequires (in most cases all current gcc/g++ BuildRequires could be replaced by glibc-devel and libstdc++-devel). All because it is not possible to use a C compiler without glibc-devel or a C++ compiler without libstdc++-devel.
It might be a surprise for you, but there are other implementations of C and C++ standard libraries. If I try to imagine Fedora wanting to switch to clang in the future, I can very well imagine it wanting to switch to libc++ at the same time... So your "improved" proposal is, in fact, just as arbitrary and choice-limiting as the one you criticize.
Congratulations!
D.
On Tue, 10 Jul 2018 at 06:37, David Tardon dtardon@redhat.com wrote: [..]
My proposition is *not* to add explicit gcc/g++ to BuildRequires, but instead to use glibc-devel and libstdc++-devel and to ban gcc/gcc-c++ in BuildRequires (in most cases all current gcc/g++ BuildRequires could be replaced by glibc-devel and libstdc++-devel). All because it is not possible to use a C compiler without glibc-devel or a C++ compiler without libstdc++-devel.
It might be a surprise for you, but there are other implementations of C and C++ standard libraries. If I try to imagine Fedora wanting to switch to clang in the future,
It is not, but it is quite interesting how you are trying to move a technical discussion into "argumentum ad hominem" territory.
I can very well imagine it wanting to switch to libc++ at the same time... So your "improved" proposal is, in fact, just as arbitrary and choice-limiting as the one you criticize.
So you want to say that putting in explicit gcc/gcc-c++ BRs makes such switching (which is less important) or test builds (which are the way more interesting and valuable option) easier and/or opens up some options? Really?
As I've mentioned, using BR: %{__cc} and BR: %{__cxx} covers some of the same needs as BR: {glibc,libstdc++}-devel, with some necessary logic which needs to be implemented; however, to be fully working it needs a few other things to be added. A possibly modified solution, using something like BR: {libc,libstl++}-devel, may be still better as it hides the compiler BRs.
The problem is still that the exact libc and standard C++ library + C/C++ compiler BRs should be part of the build environment settings (somehow). How exactly to hide the details of the build environment I'm still not 100% sure..
Flagging something as a proposal does not mean that what was described is complete and consistent. It is more or less only a discussion entry point. Whatever would be chosen after a few cycles of proposals would require some minimal tests as well. No one has done such minimal tests so far. Generally speaking, the issue is that a real discussion started only now, when so far there was no real discussion and the changes have already been made. Even if such a discussion reached some kind of agreement on a (few) possible alternative solutions, some set of tests of how it would work would be required. Instead of discussion<>tests iterations, we had only a kind of "ex cathedra" announcement and a mock discussion. The owner of the original proposal (in this case Igor) should at least be collecting possible options and from time to time sending partial pros/cons analyses as well. In reality nothing like this was conducted. Instead of spending the last 4 months on such discussion and tests, most people had the impression that the proposal had been abandoned (I can definitely say that it was my impression). Even the FESCo verification of the whole procedure did not work as it should here, because no one held/froze this proposal as something without a proper proposal/test design for at least a few cycles. Everything happened when the previous FESCo team's term finished, and the new team probably assumed that the procedure had been overlooked by the previous team. Do you see the whole context now?
It seems that, generally, putting in an explicit gcc BR was motivated by disk space and build time. Since so far there is no confirmation whether the published package counts and disk space numbers were generated with rpm's language and exclude-docs install options set properly, it suggests that the possible technical options focused on time and disk space metrics were not analysed correctly. In other words, the motivation for the proposed change had *nothing to do with any package dependencies*. No.. the goal was to *reduce used disk space and reduce build time*!!!
I'm almost sure that the exclude-docs option isn't in use, because at least a few @core packages still have buggy %post/%postun scriptlets which should be showing install errors when --excludedocs is used. What makes me a bit more angry about not checking those other possible options (without touching even a single spec file) is again related to Igor: many months ago he took my still-unfinished proposal to remove all info page index updates, pushing a texinfo file trigger git PR without finishing the change by removing all the info page index updates from all packages with info page documentation. As he has proven packager permissions, he just pushed his own changes without discussing anything with me or the texinfo package maintainer, washing his hands afterwards as if everything necessary had been done :-L
At bottom I only want to flag that people like Igor, having proven packager permissions, can in the end do more harm than good. I don't know him (how experienced a developer he really is), as my only personal contact with him was when he asked me on IRC where my texinfo PR and bugzilla ticket were. If such wide, badly prepared and conducted changes happen again, IMO someone should at least consider withdrawing his proven packager privileges.
kloczek PS. And really, I don't care that the above will again be taken as a kind of personal attack (which is not my intention).
On 10. 7. 2018 at 09:42, Tomasz Kłoczko wrote:
On Tue, 10 Jul 2018 at 06:37, David Tardon dtardon@redhat.com wrote: [..]
My proposition is *not* to add explicit gcc/g++ to BuildRequires, but instead to use glibc-devel and libstdc++-devel and to ban gcc/gcc-c++ in BuildRequires (in most cases all current gcc/g++ BuildRequires could be replaced by glibc-devel and libstdc++-devel). All because it is not possible to use a C compiler without glibc-devel or a C++ compiler without libstdc++-devel.
It might be a surprise for you, but there are other implementations of C and C++ standard libraries. If I try to imagine Fedora wanting to switch to clang in the future,
It is not, but it is quite interesting how you are trying to move a technical discussion into "argumentum ad hominem" territory.
I can very well imagine it wanting to switch to libc++ at the same time... So your "improved" proposal is, in fact, just as arbitrary and choice-limiting as the one you criticize.
So you want to say that putting in explicit gcc/gcc-c++ BRs makes such switching (which is less important) or test builds (which are the way more interesting and valuable option) easier and/or opens up some options? Really?
Explicit "BR: gcc" definitely does the switch to other compiler easier, because one of the main question for this change was actually "how many packages actually requires C/C++" and it was quite tricky to answer such question. Now it will much easier.
And of course, if we can switch from no requires to "BR: gcc", then it won't be harder to switch to "BR: clang". If we were going to switch to another compiler, it even gives us the choice to selectively stay with gcc, where previously it could be ambiguous which compiler is used....
BTW I am deliberately not going to read the rest of your email, because it is simply too many words.
V.
On Tue, 10 Jul 2018 at 10:57, Vít Ondruch vondruch@redhat.com wrote: [..]
Explicit "BR: gcc" definitely does the switch to other compiler easier, because one of the main question for this change was actually "how many packages actually requires C/C++" and it was quite tricky to answer such question. Now it will much easier.
FYI, with dnf repoquery you don't need to guess, as it is possible to produce a precise/exact value for such a metric using a one-liner like the one below.
# dnf -C repoquery --qf "%{name}.%{arch} %{source_name} %{reponame}" | grep -w rawhide | grep x86_64 | awk '{print $2}' | sort | uniq | wc -l
Last metadata expiration check: 0:03:06 ago on Tue 10 Jul 2018 12:17:44 BST.
9314
kloczek
On Tue, 10 Jul 2018 at 12:26, Tomasz Kłoczko kloczko.tomasz@gmail.com wrote: [..]
# dnf -C repoquery --qf "%{name}.%{arch} %{source_name} %{reponame}" | grep -w rawhide | grep x86_64 | awk '{print $2}' | sort | uniq | wc -l
Last metadata expiration check: 0:03:06 ago on Tue 10 Jul 2018 12:17:44 BST.
9314
Just one more comment about this number and why it is a fairly precise count of the packages which, even if they do not depend on gcc now, may soon have such a dependency. Someone may simply bring up the argument that not all packages containing x86_64 binaries need gcc now. That is (now) 100% true, however there is one additional fact, related to LTO optimisation.
To use LTO optimisation it is necessary to use the gcc-{ar,nm,ranlib} executables, even if some non-C/C++ compilers may so far be able to produce LTO-aware .o bytecode that can be linked with ld to produce the final DSOs or ELF executables. In such cases I don't think it will be possible to use those compilers directly (ada, go or others) if someone tries to use LTO optimisation. This will probably force the pipeline to go through asm -> as [-> ar/ranlib] -> ld, and in that pipeline the LTO-aware wrappers are part of gcc.
# rpm -qf /usr/bin/gcc-{ar,nm,ranlib}
gcc-8.1.1-4.fc29.1.x86_64
gcc-8.1.1-4.fc29.1.x86_64
gcc-8.1.1-4.fc29.1.x86_64
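To illustrate (the file names are made up), a typical LTO build involving a static archive only works cleanly when the gcc wrappers are used, because plain ar/nm/ranlib may not load the LTO plugin:

    gcc -O2 -flto -c foo.c bar.c
    gcc-ar rcs libfoo.a foo.o bar.o
    gcc -O2 -flto -o prog main.c libfoo.a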
LTO is still not production ready and I found that right now it produces some highly unstable binaries. In the case of the zabbix code I found that zabbix_agentd crashes, and that zabbix_server, when processing triggers using pcre library functions (which were not optimised using LTO), behaves unexpectedly (producing negated values). If someone is interested, there is a bugzilla ticket about this: https://bugzilla.redhat.com/show_bug.cgi?id=1567112
In other words, even if not all binaries have a straight gcc dependency now, this will change once LTO is stable. It looks like clang/llvm 7 will be able to produce LTO-aware bytecode, so from this point of view it can probably be used as a full gcc replacement even with LTO: https://llvm.org/docs/LinkTimeOptimization.html In other words, the above adds the last few missing lines to the picture titled "why putting in gcc BRs is/was wrong?"
kloczek -- Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH
On 10. 7. 2018 at 14:03, Tomasz Kłoczko wrote:
On Tue, 10 Jul 2018 at 12:26, Tomasz Kłoczko kloczko.tomasz@gmail.com wrote: [..]
# dnf -C repoquery --qf "%{name}.%{arch} %{source_name} %{reponame}" | grep -w rawhide | grep x86_64 | awk '{print $2}' | sort | uniq | wc -l
Last metadata expiration check: 0:03:06 ago on Tue 10 Jul 2018 12:17:44 BST.
9314
Just one more comment about this number and why it is a fairly precise count of the packages which, even if they do not depend on gcc now, may soon have such a dependency.
AFAIK, the list of packages where "BR: gcc{,-g++}" was added had been compiled based on real rebuild errors and not on some artificial query.
The rest of your email is OT. Changing the BR in every package is a much simpler task than adding the BR to the packages where it belongs.
V.
On Tue, Jul 10, 2018 at 3:12 PM Vít Ondruch vondruch@redhat.com wrote:
On 10. 7. 2018 at 14:03, Tomasz Kłoczko wrote:
On Tue, 10 Jul 2018 at 12:26, Tomasz Kłoczko kloczko.tomasz@gmail.com wrote: [..]
# dnf -C repoquery --qf "%{name}.%{arch} %{source_name} %{reponame}" | grep -w rawhide | grep x86_64 | awk '{print $2}' | sort | uniq | wc -l
Last metadata expiration check: 0:03:06 ago on Tue 10 Jul 2018 12:17:44 BST.
9314
Just one more comment about this number and why it is a fairly precise count of the packages which, even if they do not depend on gcc now, may soon have such a dependency.
AFAIK, the list of packages where "BR: gcc{,-g++}" was added had been compiled based on real rebuild errors and not on some artificial query.
Yes, I've performed 5 mass-scratch-rebuilds, grepped the logs for some common errors and then added the BuildRequires. It might be inaccurate for some of the packages (e.g. if they depend on something that should itself depend on gcc), but it won't hurt anyone.
The rest of your email is OT. Changing the BR in every package is a much simpler task than adding the BR to the packages where it belongs.
V.
Hi,
On Tue, Jul 10, 2018 at 08:42:09AM +0100, Tomasz Kłoczko wrote:
At bottom I only want to flag that people like Igor, having proven packager permissions, can in the end do more harm than good. I don't know him (how experienced a developer he really is), as my only personal contact with him was when he asked me on IRC where my texinfo PR and bugzilla ticket were. If such wide, badly prepared and conducted changes happen again, IMO someone should at least consider withdrawing his proven packager privileges.
I am glad Igor is doing this work, and he is doing a lot more good than harm.
kloczek PS. And really, I don't care that the above will again be taken as a kind of personal attack (which is not my intention).
The Fedora Code of Conduct is not optional therefore I expect you to care about this. If you believe your e-mail might be offensive, it is your job to ensure that it is not.
Kind regards Till
On Tue, 10 Jul 2018 at 12:52, Till Maas opensource@till.name wrote: [..]
PS. And really, I don't care that the above will again be taken as a kind of personal attack (which is not my intention).
The Fedora Code of Conduct is not optional therefore I expect you to care about this. If you believe your e-mail might be offensive, it is your job to ensure that it is not.
Sorry to say this, but without openly telling some past stories about things which only a few people know, it is not possible to understand why at the moment I'm not able to trust what Igor is doing. Once may be an accident.. The truth is that not once but at least a few times he has mishandled things. I would not be surprised if I'm not the only person with such experience. IMO even if he has some potential, I'm guessing that he is still relatively young, and if that is true he may still need proper mentoring (however, I'm not going to, nor do I want to, be his mentor). I can honestly promise that if he somehow crosses the line again with things which I sometimes try to take care of, my reaction next time will be the same as before. You may say that from this point of view I'm 100% predictable.
IMO he needs to learn the meaning of the old sentence "Errare humanum est, perseverare autem diabolicum". If he does not learn a few things, I don't want to be around next time.
If you or someone else can give me a lesson in how to say the above without hurting someone's personal feelings or coming across as offensive, I'm really ready to pay whatever price someone asks.
kloczek
On Tue, 10 Jul 2018 at 13:35, Tomasz Kłoczko kloczko.tomasz@gmail.com wrote:
IMO even if he has some potential, I'm guessing that he is still relatively young, and if that is true he may still need proper mentoring
Not cool, you stepped over the line. Igor has done some great work in Fedora in the last few months.
Richard.
----- Original Message -----
From: "Tomasz Kłoczko" kloczko.tomasz@gmail.com To: "Development discussions related to Fedora" devel@lists.fedoraproject.org Sent: Tuesday, July 10, 2018 2:27:40 PM Subject: Re: [HEADS UP] Removal of GCC from the buildroot
On Tue, 10 Jul 2018 at 12:52, Till Maas opensource@till.name wrote: [..]
PS. And really, I don't care that the above will again be taken as a kind of personal attack (which is not my intention).
The Fedora Code of Conduct is not optional therefore I expect you to care about this. If you believe your e-mail might be offensive, it is your job to ensure that it is not.
Sorry to say this, but without openly telling some past stories about things which only a few people know, it is not possible to understand why at the moment I'm not able to trust what Igor is doing. Once may be an accident.. The truth is that not once but at least a few times he has mishandled things. I would not be surprised if I'm not the only person with such experience. IMO even if he has some potential, I'm guessing that he is still relatively young, and if that is true he may still need proper mentoring
Can you really not see what is wrong and hurtful with that statement of yours?
(however, I'm not going to, nor do I want to, be his mentor). I can honestly promise that if he somehow crosses the line again with things which I sometimes try to take care of, my reaction next time will be the same as before. You may say that from this point of view I'm 100% predictable.
This is not about how Igor is reacting but how you keep coming up with ridiculous excuses to justify your toxic behaviour.
IMO he needs to learn the meaning of the old sentence "Errare humanum est, perseverare autem diabolicum". If he does not learn a few things, I don't want to be around next time.
Everyone needs to learn things, especially when dealing with others. No one, though, should justify bad actions or wording by the real or hypothetical actions of other people. Please stop using Igor as a scapegoat.
If you or someone else can give me a lesson in how to say the above without hurting someone's personal feelings or coming across as offensive, I'm really ready to pay whatever price someone asks.
No one should feel obliged to give someone a lesson in order for them to learn to behave properly or in a civilized manner in public discussions. Your inability to do that is not anyone's fault but yours. Please work on that first before deciding to place the blame for your actions onto other people.
kloczek
Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH
On Tue, 2018-07-10 at 08:42 +0100, Tomasz Kłoczko wrote:
On Tue, 10 Jul 2018 at 06:37, David Tardon dtardon@redhat.com wrote: [..]
My proposition is *not* to add explicit gcc/g++ to BuildRequires, but instead to use glibc-devel and libstdc++-devel and to ban gcc/gcc-c++ in BuildRequires (in most cases all current gcc/g++ BuildRequires could be replaced by glibc-devel and libstdc++-devel). All because it is not possible to use a C compiler without glibc-devel or a C++ compiler without libstdc++-devel.
It might be a surprise for you, but there are other implementations of C and C++ standard libraries. If I try to imagine Fedora wanting to switch to clang in the future,
It is not
Your words that "it is not possible to use a C compiler without glibc-devel" suggest otherwise. So I think I did have a right to doubt...
but it is quite interesting how you are trying to move a technical discussion into "argumentum ad hominem" territory.
I'm sorry if it sounded offensive. It wasn't my intention.
I can very well imagine it wanting to switch to libc++ at the same time... So your "improved" proposal is, in fact, just as arbitrary and choice-limiting as the one you criticize.
So you want to say that putting in explicit gcc/gcc-c++ BRs makes such switching (which is less important) or test builds (which are the way more interesting and valuable option) easier and/or opens up some options? Really?
Did I write that? No, I didn't. Did the e-mail I reacted to even mention test builds (or %{__cc} or any other idea put forth in other parts of this e-mail thread)? No, it didn't. So please stop putting words in my mouth.
My _only_ argument is that this _particular_ proposal (BR: glibc-devel / libstdc++-devel) has _no_ advantage over the original one (BR: gcc / gcc-c++).
Btw, I don't like the explicit BRs on gcc/gcc-c++ myself, but that's irrelevant here.
D.
Removal of GCC and friends from the buildroot is different than adding new conflicting BRs which impair Clang usability
On Mon, 9 Jul 2018, Igor Gnatenko started:
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot.
This will happen before mass rebuild. Stay tuned.
Tomasz Kłoczko wrote:
After adding explicit gcc/g++ in BuildRequires it will be extremely hard to switch to, for example, clang.
Congratulations.
and this sniping came in from Przemek Klosowski:
Come on now...
You 'tut tut' this objection, but it seems that the secondary effects of the change are not well thought through. As an alternative, adding a 'virtual build requirement' such as for: BR: CCP-compiler
and adding a manual: Provides: CCP-compiler
to: gcc-c++
and so forth, would make this a non-invasive change. Why ** not ** choose such a route?
Obviously, clang would also need a manual: Provides: CCP-compiler
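Sketched roughly (the 'CCP-compiler' name is just the placeholder used above; any agreed-upon virtual name would do):

    # added manually to gcc-c++ (and to clang):
    Provides: CCP-compiler

    # and in any consuming package's spec:
    BuildRequires: CCP-compiler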
----------
What is the need to force breakage, rather than doing it in a 'friendly' way?
-- Russ herrold
Once upon a time, R P Herrold herrold@owlriver.com said:
As an alternative, adding a 'virtual build requirement' such as for: BR: CCP-compiler
and adding a manual: Provides: CCP-compiler
to: gcc-c++
and so forth, would make this a non-invasive change. Why ** not ** choose such a route?
There is a significant (but unknown) number of packages in Fedora that are probably only currently built and tested upstream with GCC compilers. Putting a virtual requires in the spec, before testing them with alternate compilers, is incorrect.
Linux-developed software tends to have lots of hidden GCCisms, and I believe erring on the side of assuming that until tested and proven otherwise is the safe choice. Start by requiring the status quo; as packages are tested and proved to work with an alternate compiler, then the specs can be modified to allow alternate compilers.
I tend to agree with Chris on this. Whether we like it or not, GCC is an integral part of the Unix environment and most things we care about utilize it in some aspect, even when it's not immediately obvious. Attempting to forcefully bring in an alternate compiler would undoubtedly break a great deal of things.
On Mon, 9 Jul 2018, Matt Milosevic wrote:
I tend to agree with Chris on this. Whether we like it or not, GCC is an integral part of the Unix environment and most things we care about utilize it in some aspect, even when it's not immediately obvious. Attempting to forcefully bring in an alternate compiler would undoubtedly break a great deal of things.
Who said ** anything ** about more than a virtual provide, which will do ... nothing more than put in place a portable way to test for breakage ???
To the contrary, widely hard-coding BR: gcc
locks OUT such efforts to track down 'gcc-isms'
-- Russ herrold
On Mon, Jul 09, 2018 at 02:07:26PM -0400, R P Herrold wrote:
On Mon, 9 Jul 2018, Matt Milosevic wrote:
I tend to agree with Chris on this. Whether we like it or not, GCC is an integral part of the Unix environment and most things we care about utilize it in some aspect, even when it's not immediately obvious. Attempting to forcefully bring in an alternate compiler would undoubtedly break a great deal of things.
Who said ** anything ** about more than a virtual provide, which will do ... nothing more than put in place a portable way to test for breakage ???
To the contrary, widely hard-coding BR: gcc
locks OUT such efforts to track down 'gcc-isms'
Actually, the reverse is true.
Consider this:
- identifying packages which require *a* build compiler is hard (gcc was part of the buildroot, so packages didn't declare the dependency, and there are many different ways to search for and invoke a compiler, so it's not immediately obvious from a spec file whether the build needs a compiler)
- this change clearly marks packages which need *a* compiler and *can* be compiled with gcc.
If in the future there's a desire to compile some packages with clang, it'll be easy to adjust those spec files to BR:clang or BR:c-compiler. Thanks to this change, the number of packages that need to be looked at is much smaller.
As you can see from how long it took Igor to apply those changes it's quite easy to do a scripted update of 1000s of packages. It's easy to do a simple BR:this to BR:that change. The hard part is looking at build failures, fixing the builds, and adjusting dependencies.
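For instance, such a later switch could be roughly as mechanical as this (a sketch only, over a hypothetical directory of checked-out dist-git repos; a real run would go through commits, reviews and rebuilds):

    sed -i 's/^BuildRequires:[[:space:]]*gcc$/BuildRequires: clang/' */*.spec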
Zbyszek
On Mon, 9 Jul 2018 at 19:02, Matt Milosevic minima38123@gmail.com wrote:
I tend to agree with Chris on this. Whether we like it or not, GCC is an integral part of the Unix environment and most things we care about utilize it in some aspect, even when it's not immediately obvious. Attempting to forcefully bring in an alternate compiler would undoubtedly break a great deal of things.
I don't remember anyone here writing about doing this now. Definitely not me. Just in case: I have not even been suggesting doing this. However, keeping the door open to be able, in the future, to do something like "rpmbuild -ba foo.spec --with clang" was the core part of what I've been trying to say. That was because the changes started the day before it was announced that they would be made, and now we have ~1k+ changes committed to the git repos. So.. "Igor locuta, causa finita".
kloczek
On Mon, 9 Jul 2018 at 18:37, Chris Adams linux@cmadams.net wrote: [..]
There is a significant (but unknown) number of packages in Fedora that are probably only currently built and tested upstream with GCC compilers. Putting a virtual requires in the spec, before testing them with alternate compilers, is incorrect.
It is too late now. Igor wrote "I'm going to do this tomorrow." but what was announced to start tomorrow has already started (ROTFL).
kloczek PS. This is the second time that Mr. Gnatenko has started doing things before finishing the discussion. I can only repeat.. congratulations. -- Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH
On Mon, Jul 09, 2018 at 06:56:19PM +0100, Tomasz Kłoczko wrote:
On Mon, 9 Jul 2018 at 18:37, Chris Adams linux@cmadams.net wrote: [..]
There is a significant (but unknown) number of packages in Fedora that are probably only currently built and tested upstream with GCC compilers. Putting a virtual requires in the spec, before testing them with alternate compilers, is incorrect.
It is too late now. Igor wrote "I'm going to do this tomorrow." but what was announced to start tomorrow has already started (ROTFL).
PS. This is the second time that Mr. Gnatenko has started doing things before finishing the discussion. I can only repeat.. congratulations.
To be fair, his email was sent Date: Sun, 8 Jul 2018 20:46:26 +0200. And today is 9th of July, so it is tomorrow.
But yeah, doing the changes in the middle of a discussion… that's not excellent.
On 07/09/2018 11:15 AM, Tomasz Torcz wrote:
On Mon, Jul 09, 2018 at 06:56:19PM +0100, Tomasz Kłoczko wrote:
On Mon, 9 Jul 2018 at 18:37, Chris Adams linux@cmadams.net wrote: [..]
There is a significant (but unknown) number of packages in Fedora that are probably only currently built and tested upstream with GCC compilers. Putting a virtual requires in the spec, before testing them with alternate compilers, is incorrect.
It is too late now. Igor wrote "I'm going to do this tomorrow." but what was announced to start tomorrow has already started (ROTFL).
PS. This is the second time that Mr. Gnatenko has started doing things before finishing the discussion. I can only repeat.. congratulations.
To be fair, his email was sent Date: Sun, 8 Jul 2018 20:46:26 +0200. And today is 9th of July, so it is tomorrow.
But yeah, doing the changes in the middle of a discussion… that's not excellent.
This is a FESCo-approved Fedora 29 change that needs to be done before the mass rebuild, which is scheduled for the 11th (2 days from now).
2018-07-11 Mass Rebuild
( see https://fedoraproject.org/wiki/Releases/29/Schedule )
So, really, the discussion time for this would have been best in March (when the change was discussed on list).
I don't see any chance at all that some other compiler will be the default in Fedora anytime soon. Fedora has a close and productive relationship with gcc which I hope will continue. In the event that, years down the road, something changes, we can always just change the BuildRequires: gcc to whatever we are switching to, or have that provide gcc, or any number of other things.
kevin
On Mon, 9 Jul 2018 at 20:45, Kevin Fenzi kevin@scrye.com wrote: [..]
This is a FESCo-approved Fedora 29 change that needs to be done before the mass rebuild, which is scheduled for the 11th (2 days from now).
2018-07-11 Mass Rebuild
( see https://fedoraproject.org/wiki/Releases/29/Schedule )
So, really, the discussion time for this would have been best in March (when the change was discussed on list).
OK, I found https://fedoraproject.org/wiki/Releases/29/ChangeSet#Tracking_15 which points to https://bugzilla.redhat.com/show_bug.cgi?id=1551327 .. which points to https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot and that page points back to the Bugzilla ticket. If it was approved somewhere (I'm sure that somehow it was), then because it is so hard to find any real discussion about the proposal.. I'm not sure whether anyone from FESCo was really aware of what they were approving and/or whether anyone was aware that some alternative proposals had been sent but were generally not commented on/ignored.
kloczek
On Tue, 2018-07-10 at 00:40 +0100, Tomasz Kłoczko wrote:
On Mon, 9 Jul 2018 at 20:45, Kevin Fenzi kevin@scrye.com wrote: [..]
This is a FESCo-approved Fedora 29 change that needs to be done before the mass rebuild, which is scheduled for the 11th (2 days from now).
2018-07-11 Mass Rebuild
( see https://fedoraproject.org/wiki/Releases/29/Schedule )
So, really, the discussion time for this would have been best in March (when the change was discussed on list).
OK, I found https://fedoraproject.org/wiki/Releases/29/ChangeSet#Tracking_15 which points to https://bugzilla.redhat.com/show_bug.cgi?id=1551327 .. which points to https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot and that page points back to the Bugzilla ticket. If it was approved somewhere (I'm sure that somehow it was), then because it is so hard to find any real discussion about the proposal..
https://pagure.io/fesco/issue/1851
One thing that *would* be nice is if, when citing meeting discussions on Change tickets, FESCo would *link to the meeting where the discussion happened*, so interested parties can read it. You can find it quite easily by looking at the ticket dates:
https://meetbot.fedoraproject.org/teams/fesco/fesco.2018-03-02-15.00.log.htm...
but an explicit link would be nice.
I'm not sure whether anyone from FESCo was really aware of what they were approving and/or whether anyone was aware that some alternative proposals had been sent but were generally not commented on/ignored.
The discussion thread on devel@ is a fundamental part of the Change process, so you can generally assume that FESCo members are going to have read it when they vote on a Change. Suggesting that they didn't know what they were voting for seems a bit insulting to the FESCo members...
On 07/09/2018 04:40 PM, Tomasz Kłoczko wrote:
On Mon, 9 Jul 2018 at 20:45, Kevin Fenzi kevin@scrye.com wrote: [..]
This is a FESCo-approved Fedora 29 change that needs to be done before the mass rebuild, which is scheduled for the 11th (2 days from now).
2018-07-11 Mass Rebuild
( see https://fedoraproject.org/wiki/Releases/29/Schedule )
So, really, the discussion time for this would have been best in March (when the change was discussed on list).
OK, I found https://fedoraproject.org/wiki/Releases/29/ChangeSet#Tracking_15 which points to https://bugzilla.redhat.com/show_bug.cgi?id=1551327 .. which points to https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot and that page points back to the Bugzilla ticket. If it was approved somewhere (I'm sure that somehow it was), then because it is so hard to find any real discussion about the proposal.. I'm not sure whether anyone from FESCo was really aware of what they were approving and/or whether anyone was aware that some alternative proposals had been sent but were generally not commented on/ignored.
As for all Changes, this one was posted to this very list:
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/...
a 36 post long thread in Feb.
You posted the idea to it of not using "gcc" but some macro, but the change owner and at least one other person replied and didn't feel it was worthwhile, as we have no plans to move away from gcc.
It was approved by fesco in the ticket at:
https://pagure.io/fesco/issue/1851
kevin
On Tue, 10 Jul 2018 at 01:12, Kevin Fenzi kevin@scrye.com wrote: [..]
a 36 post long thread in Feb.
You posted the idea to it of not using "gcc" but some macro, but the change owner and at least one other person replied and didn't feel it was worthwhile, as we have no plans to move away from gcc.
Yeah.. like many engineers, technically I'm sometimes really unable to continue a conversation when someone brings their own feelings to the table instead of something which can be proved/disproved :-L So a few people tried to bring some arguments, but the original plan was not changed even by a millimetre.. The only thing done after this, it seems, was an actual impact estimation to form the full list of package spec files which would need to be changed, and it rang no bells in anyone's head that maybe the number of packages is a bit too big. No doubts.. and all sceptics were treated as annoying bees. As you can see, it was more or less a kind of monologue by a few people..
I'm only trying to say that it would be good not to conduct any such mass change the same way. Making mistakes is not the problem; only repeating them is something really unacceptable. Definitely, having a 4+ month quiet period after the initial discussion, with documentation incomplete at that point, and after that a sudden/instant change, wasn't the best way.. was it?
kloczek
On Mon, Jul 9, 2018, 20:22 Tomasz Torcz tomek@pipebreaker.pl wrote:
On Mon, Jul 09, 2018 at 06:56:19PM +0100, Tomasz Kłoczko wrote:
On Mon, 9 Jul 2018 at 18:37, Chris Adams linux@cmadams.net wrote: [..]
There is a significant (but unknown) number of packages in Fedora that are probably only currently built and tested upstream with GCC compilers. Putting a virtual requires in the spec, before testing them with alternate compilers, is incorrect.
It is too late now. Igor wrote "I'm going to do this tomorrow." but what was announced to start tomorrow has already started (ROTFL).
PS. This is the second time that Mr. Gnatenko has started doing things before finishing the discussion. I can only repeat.. congratulations.
To be fair, his email was sent Date: Sun, 8 Jul 2018 20:46:26 +0200. And today is 9th of July, so it is tomorrow.
But yeah, doing the changes in the middle of a discussion… that's not excellent.
The discussion was over a few months ago when FESCo approved the change.
On Mon, 9 Jul 2018 at 20:40, Tomasz Torcz tomek@pipebreaker.pl wrote: [..]
PS. This is the second time that Mr. Gnatenko has started doing things before finishing the discussion. I can only repeat.. congratulations.
To be fair, his email was sent Date: Sun, 8 Jul 2018 20:46:26 +0200. And today is 9th of July, so it is tomorrow.
But yeah, doing the changes in the middle of a discussion… that's not excellent.
Technically you are right. The email was sent on 8 July at 8:46 p.m. (US time). My comment was on 9 July at 10:15 a.m.; however, the first changes in git started about 2h later.
I've spent a bit of time having a look at the wiki. It started on 14 February 2018. I remember some discussion about the gcc BR 2-3 years ago, but IIRC there was no final conclusion or proper justification. After Feb this year a few comments were posted, but.. Kevin Kofler's opinion was ignored. Jan Kurik's proposal to avoid an explicit gcc/g++ dependency by using:
"BuildRequires: %{__cc} or: BuildRequires: c-compiler"
was a kind of other solution to avoid a static dependency. However, I think that dependencies hooked to {glibc,libstdc++}-devel could be slightly better, because such a solution completely hides the compiler dependency (but IMO even something like this was better and would have required more analysis). Actually, %{__cc} and %{__cxx} dependencies may not be so bad.. however, as long as gcc does not require glibc-devel and gcc-c++ does not require libstdc++-devel, there is still a kind of hole here, and it is not a 1:1 equivalent of gcc/g++ Requires added only to those two {glibc,libstdc++}-devel packages. In the end, hanging it all on {glibc,libstdc++}-devel or %{__cc}/%{__cxx} is only a matter of taste. Both variants are equally good, as neither uses straight gcc/gcc-c++ BRs.
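For reference, roughly how the two non-gcc-specific variants would read in a spec (a sketch; %{__cc} expands to gcc with the stock rpm macros, while the c-compiler name would only work once a matching Provides were added to gcc, clang, etc.):

    # macro-based variant:
    BuildRequires: %{__cc}

    # virtual-provide variant:
    BuildRequires: c-compiler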
https://bugzilla.redhat.com/show_bug.cgi?id=1551327 points to the wiki. Additionally there is a reference to https://pagure.io/releng/issue/7317 in which it is just as hard to find anything more. In the emailed notifications I cannot find any deadline (quite possibly it was posted somewhere).
Such a change has a relatively big impact and seems to follow https://fedoraproject.org/wiki/Changes/Policy However, I cannot find anything on https://pagure.io/packaging-committee/issues?status=Open&search_pattern=... (but maybe it does not show anything only because tickets are not indexed).
The only post which I found on devel-announce was 4+ months ago https://lists.fedoraproject.org/archives/list/devel-announce@lists.fedorapro... and it was originally posted with so few details that it was easy to lose it from the radar.
Maybe it is only my impression that the decision-making process was not transparent enough, or at least not properly flagged.
Generally, recently more and more such changes in Fedora are made completely non-transparently, like starting a discussion about approving the upgrade of community-mysql to 8.x when such a change had already been pushed to git and the build systems.. all while ignoring the whole impact of the upgrade and the still-unresolved issues with the 5.7.x -> 8.0.x upgrade (however no one really cries, because I'm betting the number of community-mysql users is no more than a few). The community-mysql case is especially strange, as the BTS ticket https://bugzilla.redhat.com/show_bug.cgi?id=1573642 still hangs open, not closed (looks like ostrich tactics).
kloczek -- Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH
Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I still think that this change is absolutely counterproductive, because it will actually INCREASE local mock build times for all C/C++ programs for all packagers, because gcc and gcc-c++ will no longer be included in the root cache.
It is also yet another pointless mass change to a huge number of packages, right after the %defattr one.
Kevin Kofler
On Tuesday, July 10, 2018 5:44:50 PM CEST Kevin Kofler wrote:
Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I still think that this change is absolutely counterproductive, because it will actually INCREASE local mock build times for all C/C++ programs for all packagers, because gcc and gcc-c++ will no longer be included in the root cache.
You are free to tweak your local mock configs to preinstall arbitrary packages (and include them in root cache) if build speed is your concern. Have a look at the 'chroot_setup_cmd' option.
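Something along these lines in a local mock config would keep the compilers in the cached chroot (a sketch; the default package set in chroot_setup_cmd may differ between mock versions and configs):

    config_opts['chroot_setup_cmd'] = 'install @buildsys-build gcc gcc-c++'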
Kamil
It is also yet another pointless mass change to a huge number of packages, right after the %defattr one.
Kevin Kofler
On Tue, Jul 10, 2018 at 05:44:50PM +0200, Kevin Kofler wrote:
I still think that this change is absolutely counterproductive, because it will actually INCREASE local mock build times for all C/C++ programs for all packagers, because gcc and gcc-c++ will no longer be included in the root cache.
If you or someone else cares about this for their own setup I recommend to change or add a mock config that just extends the chroot_setup_cmd to include gcc and gcc-c++ and the problem is solved.
Kind regards Till
On Tue, Jul 10, 2018 at 5:52 PM Kevin Kofler kevin.kofler@chello.at wrote:
Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from
buildroot.
This will happen before mass rebuild. Stay tuned.
I still think that this change is absolutely counterproductive, because it will actually INCREASE local mock build times for all C/C++ programs for all packagers, because gcc and gcc-c++ will no longer be included in the root cache.
However, it will DECREASE local mock build times for all non-C/C++ programs. And now we will know which packages actually need C and/or C++ compiler.
A lot of packages in 2018 are not written in C/C++. Welcome to the 21st century!
It is also yet another pointless mass change to a huge number of packages, right after the %defattr one.
It came up multiple times and we are pretty much in agreement that we *need* such cleanups.
On Tue, Jul 10, 2018 at 06:03:33PM +0200, Igor Gnatenko wrote:
On Tue, Jul 10, 2018 at 5:52 PM Kevin Kofler kevin.kofler@chello.at wrote:
Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from
buildroot.
This will happen before mass rebuild. Stay tuned.
I still think that this change is absolutely counterproductive, because it will actually INCREASE local mock build times for all C/C++ programs for all packagers, because gcc and gcc-c++ will no longer be included in the root cache.
However, it will DECREASE local mock build times for all non-C/C++ programs. And now we will know which packages actually need C and/or C++ compiler.
Yes.
Also, we'll have a mass rebuild tomorrow. If it turns out to be slower than the previous one, we can easily re-add gcc to the koji buildroot.
Zbyszek
On Tue, Jul 10, 2018 at 1:13 PM, Zbigniew Jędrzejewski-Szmek < zbyszek@in.waw.pl> wrote:
On Tue, Jul 10, 2018 at 06:03:33PM +0200, Igor Gnatenko wrote:
On Tue, Jul 10, 2018 at 5:52 PM Kevin Kofler kevin.kofler@chello.at
wrote:
Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot,
I'm
going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like
gcc:
command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from
buildroot.
This will happen before mass rebuild. Stay tuned.
I still think that this change is absolutely counterproductive,
because it
will actually INCREASE local mock build times for all C/C++ programs
for
all packagers, because gcc and gcc-c++ will no longer be included in the
root
cache.
However, it will DECREASE local mock build times for all non-C/C++ programs. And now we will know which packages actually need C and/or C++ compiler.
Yes.
Also, we'll have a mass rebuild tomorrow. If it turns out to be slower than the previous one, we can easily re-add gcc to the koji buildroot.
From my perspective as an occasional Fedora packager, I'm regularly surprised by just how long it takes for Koji builders to install dependencies. I've never tried to dig in too far, but it looks like the builders download package metadata, download packages, and then install things. Surely this could be massively optimized by having the metadata pre-downloaded (at least when side tags aren't involved) and by having the packages already present over NFS or similar.
From a design perspective, minimizing the contents of the buildroot is a good idea, I think, but I think it would be great if the runtime installation of dependencies during the package build process were sped up dramatically.
(Hmm. Some future version of rpm/dnf could get really fancy and *reflink* package contents into the build chroot rather than untarring them every time.)
On Wed, Jul 11, 2018, at 12:37 PM, Andrew Lutomirski wrote:
(Hmm. Some future version of rpm/dnf could get really fancy and *reflink* package contents into the build chroot rather than untarring them every time.)
Try `rpm-ostree ex container` today and see just how fast it is to construct filesystem trees out of hardlinks from cached unpacked package trees imported into an OSTree repository.
The main blocker right now actually is: https://github.com/projectatomic/rpm-ostree/issues/1180
On 07/11/2018 06:37 PM, Andrew Lutomirski wrote:
From my perspective as an occasional Fedora packager, I'm regularly surprised by just how long it takes for Koji builders to install dependencies. I've never tried to dig in too far, but it looks like the builders download package metadata, download packages, and then install things. Surely this could be massively optimized by having the metadata pre-downloaded (at least when side tags aren't involved) and by having the packages already present over NFS or similar.
Koji gets repodata and packages from HTTP servers, through caching proxies located in the same datacenters as builders. Most often used packages are cached in memory, so download speeds are not a problem. At least for non-s390x builders. Accessing packages directly from NFS would be slower.
The slowest part of setting up the chroot is writing packages to disk, synchronously. This part can be sped up a lot by enabling nosync in the site-defaults.cfg mock config on the Koji builders, setting cache=unsafe on the kvm buildvms, or both. These settings are safe because builders upload all results to the hubs upon task completion. With these settings chroot setup can take about 30 seconds.
Once this is optimized, another slow part is loading repodata into memory - uncompressing it, parsing it and creating the internal libsolv data structures. This could be sped up by including solv/solvx files in the repodata, but I think that would require some code changes.
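For what it's worth, a rough sketch of the nosync part of that suggestion in mock configuration, assuming the stock option names in site-defaults.cfg:

    # use the nosync LD_PRELOAD shim when mock installs packages into the chroot
    config_opts['nosync'] = True
    # use it even if the nosync library is not available for every multilib arch
    config_opts['nosync_force'] = True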
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
On 07/11/2018 06:37 PM, Andrew Lutomirski wrote:
From my perspective as an occasional Fedora packager, I'm regularly surprised by just how long it takes for Koji builders to install dependencies. I've never tried to dig in too far, but it looks like the builders download package metadata, download packages, and then install things. Surely this could be massively optimized by having the metadata pre-downloaded (at least when side tags aren't involved) and by having the packages already present over NFS or similar.
Koji gets repodata and packages from HTTP servers, through caching proxies located in the same datacenters as builders. Most often used packages are cached in memory, so download speeds are not a problem. At least for non-s390x builders. Accessing packages directly from NFS would be slower.
I wonder if the time taken to decompress everything is relevant. Fedora currently uses xz, which isn't so fast. zchunk is zstd under the hood, which should be a lot faster to decompress, especially on ARM builders.
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
Once this is optimized, another slow part is loading repodata into memory - uncompressing it, parsing and creating internal libsolv data structures. This could be speeded up by including solv/solvx files in repodata, but I think that would require some code changes.
Hmm. On my system, there are lots of .solv and .solvx files in /var/cache/dnf. I wonder if it would be straightforward to have a daily job that updates the builder filesystem by just having dnf refresh metadata and generate the .solv/.solvx files? There wouldn't be any dnf changes needed AFAICT -- just some management on the builder infrastructure. This would at least avoid a bunch of duplicate work on most builds.
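A minimal sketch of the kind of daily job meant here, purely hypothetical (it assumes the build tasks would actually reuse this cache, which the reply below questions):

    #!/bin/sh
    # hypothetical /etc/cron.daily/dnf-makecache on a builder:
    # re-download repodata and regenerate the .solv/.solvx files in /var/cache/dnf
    exec dnf --refresh makecache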
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
On 07/11/2018 06:37 PM, Andrew Lutomirski wrote:
From my perspective as an occasional Fedora packager, I'm regularly surprised by just how long it takes for Koji builders to install dependencies. I've never tried to dig in too far, but it looks like the builders download package metadata, download packages, and then install things. Surely this could be massively optimized by having the metadata pre-downloaded (at least when side tags aren't involved) and by having the packages already present over NFS or similar.
Koji gets repodata and packages from HTTP servers, through caching proxies located in the same datacenters as builders. Most often used packages are cached in memory, so download speeds are not a problem. At least for non-s390x builders. Accessing packages directly from NFS would be slower.
I wonder if the time taken to decompress everything is relevant. Fedora currently uses xz, which isn't so fast. zchunk is zstd under the hood, which should be lot faster to decompress, especially on ARM builders.
Repodata consumed by dnf is gzip-compressed, which is quick to decompress. But decompression is done in the same thread as XML parsing and creating the pool data structures, so it affects repodata loading times to some degree.
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
Once this is optimized, another slow part is loading repodata into memory - uncompressing it, parsing and creating internal libsolv data structures. This could be speeded up by including solv/solvx files in repodata, but I think that would require some code changes.
Hmm. On my system, there are lots of .solv and .solvx files in /var/cache/dnf. I wonder if it would be straightforward to have a daily job that updates the builder filesystem by just having dnf refresh metadata and generate the .solv/.solvx files? There wouldn't be any dnf changes needed AFAICT -- just some management on the builder infrastructure. This would at least avoid a bunch of duplicate work on most builds.
That wouldn't save much time (and would still require Koji code changes, as dnf uses a different cache directory for each task). It's the same reason caching chroots is not effective and Koji disables it: most repos simply change too often, and there are a lot of builders (over 150). What would help is generating solv/solvx during repo generation - builders would download them and load them very quickly. But that requires code changes and would only save a few seconds per build.
On 07/11/2018 10:53 AM, Mikolaj Izdebski wrote:
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
I think the reason why releng didn't want to do that is because we don't want to trade speed for reliability. True, we don't care if a machine crashes in the middle of a build (because another one will take it after the crashed one comes back), but we don't want to change anything that might affect the actual build artifacts.
So, are we sure that nosync (disabling all fsync calls) doesn't change the builds being made? What about test suites for packages that specifically call fsync? They would always pass even if there was a problem? We could try this in staging I suppose and have koschei run a ton of builds to see what breaks...
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
kevin
On 07/11/2018 09:26 PM, Kevin Fenzi wrote:
On 07/11/2018 10:53 AM, Mikolaj Izdebski wrote:
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
I think the reason why releng didn't want to do that is because we don't want to trade speed for reliability. True, we don't care if a machine crashes in the middle of a build (because another one will take it after the crashed one comes back), but we don't want to change anything that might affect the actual build artifacts.
So, are we sure that nosync (disabling all fsync calls) doesn't change the builds being made? What about test suites for packages that specifically call fsync? They would always pass even if there was a problem?
nosync is used by mock only for running dnf(/yum). It's not used for rpmbuild or runroot, so it won't affect package tests. It could theoretically affect scriptlets run during package installation, but I've been using nosync in all my Koji instances for a few years and I haven't seen any problems. Nosync is used in Copr and I didn't get any reports about it breaking anything. Recently, to test the change in subject, Igor Gnatenko did a few Fedora rebuilds on a Koji instance set up by me, of course with nosync enabled, and I didn't see any problems related to nosync either.
We could try this in staging I suppose and have koschei run a ton of builds to see what breaks...
I would really like that.
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
cache=unsafe is documented at [1]. (Basically, in virt_install_command you append ",cache=unsafe" to the --disk parameter, next to "bus=virtio".) It makes the buildvm host cache all disk operations and ignore sync operations. It is similar to nosync, but it does not work on buildhw, works at the virthost level, and applies to all operations, not just dnf.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
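For illustration only, a hypothetical --disk argument for such a virt-install/virt_install_command invocation (real pools, paths and sizes will differ):

    --disk pool=default,size=80,bus=virtio,cache=unsafe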
On 07/11/2018 12:57 PM, Mikolaj Izdebski wrote:
On 07/11/2018 09:26 PM, Kevin Fenzi wrote:
On 07/11/2018 10:53 AM, Mikolaj Izdebski wrote:
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
I think the reason why releng didn't want to do that is because we don't want to trade speed for reliability. True, we don't care if a machine crashes in the middle of a build (because another one will take it after the crashed one comes back), but we don't want to change anything that might affect the actual build artifacts.
So, are we sure that nosync (disabling all fsync calls) doesn't change the builds being made? What about test suites for packages that specifically call fsync? They would always pass even if there was a problem?
nosync is used by mock only for running dnf(/yum). It's not used for rpmbuild nor runroot, so it won't affect package tests. It could theoretically affect scriplets ran during package installation, but I've been using nosync in all my Koji instances for a few years and I didn't see any problems. Nosync is used in Copr and I didn't get any reports about it breaking anything. Recently, to test the change in subject, Igor Gnatenko did a few Fedora rebuilds a Koji set up by me, of course with nosync enabled, and I didn't see any problems related to nosync either.
We could try this in staging I suppose and have koschei run a ton of builds to see what breaks...
I would really like that.
I'd say open a releng ticket on it and we can track it there? This sounds like it might be worth doing...
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
cache=unsafe is documented at [1]. (Basically, in virt_install_command you append ",cache=unsafe" to --disk parameter, next to "bus=virtio".) It makes buildvmhost cache all disk operations and ignore sync operations. Similar to nosync, but does not work on buildhw, works on virthost level, applies to all operations, not just dnf.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
Ah, I see, at the VM level. Yeah, I don't think this would be very much of a win for us. The x86_64 buildvms have all their storage on iSCSI, the arm ones have their storage on SSDs. I suppose it could help the ppc64{le} ones, they are on 10k SAS drives. I'm pretty leery of enabling anything called 'unsafe' though.
kevin
On 07/11/2018 04:37 PM, Kevin Fenzi wrote:
On 07/11/2018 12:57 PM, Mikolaj Izdebski wrote:
On 07/11/2018 09:26 PM, Kevin Fenzi wrote:
On 07/11/2018 10:53 AM, Mikolaj Izdebski wrote:
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
I think the reason why releng didn't want to do that is because we don't want to trade speed for reliability. True, we don't care if a machine crashes in the middle of a build (because another one will take it after the crashed one comes back), but we don't want to change anything that might affect the actual build artifacts.
So, are we sure that nosync (disabling all fsync calls) doesn't change the builds being made? What about test suites for packages that specifically call fsync? They would always pass even if there was a problem?
nosync is used by mock only for running dnf(/yum). It's not used for rpmbuild nor runroot, so it won't affect package tests. It could theoretically affect scriplets ran during package installation, but I've been using nosync in all my Koji instances for a few years and I didn't see any problems. Nosync is used in Copr and I didn't get any reports about it breaking anything. Recently, to test the change in subject, Igor Gnatenko did a few Fedora rebuilds a Koji set up by me, of course with nosync enabled, and I didn't see any problems related to nosync either.
We could try this in staging I suppose and have koschei run a ton of builds to see what breaks...
I would really like that.
I'd say open a releng ticket on it and we can track it there? This sounds like it might be worth doing...
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
cache=unsafe is documented at [1]. (Basically, in virt_install_command you append ",cache=unsafe" to --disk parameter, next to "bus=virtio".) It makes buildvmhost cache all disk operations and ignore sync operations. Similar to nosync, but does not work on buildhw, works on virthost level, applies to all operations, not just dnf.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
Ah, I see at the vm level. Yeah, I don't think this would be very much of a win for us. The x86_64 buildvm's have all their storage on iscsi, the arm ones have their storage on ssd's. I suppose it could help the ppc64{le} ones, they are on 10k sas drives. I'm pretty leary of enabling anything called 'unsafe' though.
I think it's unsafe only in the case of on-disk consistency, so across VM reboots. I _think_ over a single run of a VM it's safe, which may describe koji usage.
I know rjones has looked deeply at qemu caching methods for use in libguestfs so maybe he can comment, CC'd
- Cole
On Thu, Jul 12, 2018 at 02:10:37PM -0400, Cole Robinson wrote:
On 07/11/2018 04:37 PM, Kevin Fenzi wrote:
On 07/11/2018 12:57 PM, Mikolaj Izdebski wrote:
On 07/11/2018 09:26 PM, Kevin Fenzi wrote:
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
cache=unsafe is documented at [1]. (Basically, in virt_install_command you append ",cache=unsafe" to --disk parameter, next to "bus=virtio".) It makes buildvmhost cache all disk operations and ignore sync operations. Similar to nosync, but does not work on buildhw, works on virthost level, applies to all operations, not just dnf.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
Ah, I see at the vm level. Yeah, I don't think this would be very much of a win for us. The x86_64 buildvm's have all their storage on iscsi, the arm ones have their storage on ssd's. I suppose it could help the ppc64{le} ones, they are on 10k sas drives. I'm pretty leary of enabling anything called 'unsafe' though.
I think it's unsafe only in the case of on-disk consistency, so across VM reboots. I _think_ over a single run of a VM it's safe, which may describe koji usage.
I know rjones has looked deeply at qemu caching methods for use in libguestfs so maybe he can comment, CC'd
I cover caching modes about half way down here:
https://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-...
First off, cache=unsafe really does improve performance greatly, I measured around 25% on a disk-heavy workload.
Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The caveats:
If qemu or the host crashes, the disk image underlying these VMs will (like 99.9% certainty) be corrupted. Even 'sync' inside the VM will not do what you expect, it is just ignored. It's NOT a good idea on VMs which are used for long periods when the host might reboot during that time. It's NOT a good idea if you deeply care about the data in the disk image.
It should only be used when the VM data can be recreated from scratch.
In libguestfs we use cachemode.*unsafe in a few places, carefully chosen, when the above conditions apply. https://github.com/libguestfs/libguestfs/search?q=cachemode+unsafe&unsco...
Rich.
On 07/12/2018 10:17 PM, Richard W.M. Jones wrote:
On Thu, Jul 12, 2018 at 02:10:37PM -0400, Cole Robinson wrote:
On 07/11/2018 04:37 PM, Kevin Fenzi wrote:
On 07/11/2018 12:57 PM, Mikolaj Izdebski wrote:
On 07/11/2018 09:26 PM, Kevin Fenzi wrote:
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
cache=unsafe is documented at [1]. (Basically, in virt_install_command you append ",cache=unsafe" to --disk parameter, next to "bus=virtio".) It makes buildvmhost cache all disk operations and ignore sync operations. Similar to nosync, but does not work on buildhw, works on virthost level, applies to all operations, not just dnf.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
Ah, I see at the vm level. Yeah, I don't think this would be very much of a win for us. The x86_64 buildvm's have all their storage on iscsi, the arm ones have their storage on ssd's. I suppose it could help the ppc64{le} ones, they are on 10k sas drives. I'm pretty leary of enabling anything called 'unsafe' though.
I think it's unsafe only in the case of on-disk consistency, so across VM reboots. I _think_ over a single run of a VM it's safe, which may describe koji usage.
I know rjones has looked deeply at qemu caching methods for use in libguestfs so maybe he can comment, CC'd
I cover caching modes about half way down here:
https://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-...
Thanks Richard, your expert opinion is appreciated.
First off, cache=unsafe really does improve performance greatly, I measured around 25% on a disk-heavy workload.
Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
1. VMs are installed once and run for weeks/months until they are reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
2. There would be no data loss in case of a host or hypervisor crash. Worst case, if the guest operating system were corrupted, sysadmins would need to trigger a VM install.
The caveats:
If qemu or the host crashes, the disk image underlying these VMs will (like 99.9% certainty) be corrupted. Even 'sync' inside the VM will not do what you expect, it is just ignored. It's NOT a good idea on VMs which are used for long periods when the host might reboot during that time. It's NOT a good idea if you deeply care about the data in the disk image.
We do run guests for a long time and reboot hosts. But I think there is no danger if you ensure that the guest OS is shut down cleanly before the host reboot.
It should only be used when the VM data can be recreated from scratch.
This is the case of Koji builders. They don't contain any special data, just operating system and configuration that can be recreated easily.
In libguestfs we use cachemode.*unsafe in a few places, carefully chosen, when the above conditions apply. https://github.com/libguestfs/libguestfs/search?q=cachemode+unsafe&unsco...
Rich.
On Fri, Jul 13, 2018 at 04:05:42PM +0200, Mikolaj Izdebski wrote:
On 07/12/2018 10:17 PM, Richard W.M. Jones wrote:
Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
- VMs are installed once and are running for week/months until they are
reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
In this case my preferred advice would be: DO NOT use cache=unsafe.
We've only tested scenarios for very short-lived build or temporary VMs (for example when I was building RISC-V packages before we had Koji, I used a script which created a VM per build and there it made sense to use cache=unsafe).
I do not think it's a good idea to be using this for VMs which are in any way long-lived as there could be unforeseen side effects which I'm not aware of and certainly have never tested.
- There would be no data loss in case of host or hypervisor crash.
Worst case, if guest operating system was corrupted sysadmins would need to trigger VM install.
Host crash => yes you'd definitely need to reinstall that VM.
It's not a worst case, a host crash would near-definitely corrupt a VM that was ignoring flush requests. It might even corrupt in an undetectable way (eg. throwing away data while leaving metadata intact).
Rich.
On 07/15/2018 11:47 AM, Richard W.M. Jones wrote:
On Fri, Jul 13, 2018 at 04:05:42PM +0200, Mikolaj Izdebski wrote:
On 07/12/2018 10:17 PM, Richard W.M. Jones wrote:
Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
- VMs are installed once and are running for week/months until they are
reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
In this case my preferred advice would be: DO NOT use cache=unsafe.
We've only tested scenarios for very short-lived build or temporary VMs (for example when I was building RISC-V packages before we had Koji, I used a script which created a VM per build and there it made sense to use cache=unsafe).
I do not think it's a good idea to be using this for VMs which are in any way long-lived as there could be unforeseen side effects which I'm not aware of and certainly have never tested.
One other data point is that I _think_ openQA, which is used for Fedora automated install testing, uses cache=unsafe. I'm basing this largely on cache=unsafe appearing in the openQA sources.
- There would be no data loss in case of host or hypervisor crash.
Worst case, if guest operating system was corrupted sysadmins would need to trigger VM install.
Host crash => yes you'd definitely need to reinstall that VM.
It's not a worst case, a host crash would near-definitely corrupt a VM that was ignoring flush requests. It might even corrupt in an undetectable way (eg. throwing away data while leaving metadata intact).
I patched the kojivm code once; at the time, I think new VM instances all used qcow2 overlays on top of a shared base. It's possible those are created and destroyed with each VM instance, so data loss may not matter in the case of a crash if the overlay will just be discarded regardless. Would need koji devs to confirm though.
- Cole
On Mon, 2018-07-16 at 09:27 -0400, Cole Robinson wrote:
On 07/15/2018 11:47 AM, Richard W.M. Jones wrote:
On Fri, Jul 13, 2018 at 04:05:42PM +0200, Mikolaj Izdebski wrote:
On 07/12/2018 10:17 PM, Richard W.M. Jones wrote:
Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
- VMs are installed once and are running for week/months until they are
reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
In this case my preferred advice would be: DO NOT use cache=unsafe.
We've only tested scenarios for very short-lived build or temporary VMs (for example when I was building RISC-V packages before we had Koji, I used a script which created a VM per build and there it made sense to use cache=unsafe).
I do not think it's a good idea to be using this for VMs which are in any way long-lived as there could be unforeseen side effects which I'm not aware of and certainly have never tested.
One other datapoint is that I _think_ openqa uses cache=unsafe, which is used for Fedora automated install testing. I'm basing this largely on cache=unsafe in the openqa sources.
That's mostly true, I think, except when doing multipath testing (where it uses cache=none instead). However, openQA very much meets the definition of 'short-lived / temporary' VMs - each openQA 'job' uses a new VM, so the longest any one ever lasts is 2 hours (the hard limit on an openQA job's lifetime). It also uses fresh disk images each time (even when using a pre-created base disk image, it doesn't use it directly but creates new scratch images based on the base image). I don't know whether this is true of the Koji builder VMs.
On Jul 15, 2018, at 5:47 AM, Richard W.M. Jones rjones@redhat.com wrote:
On Fri, Jul 13, 2018 at 04:05:42PM +0200, Mikolaj Izdebski wrote: On 07/12/2018 10:17 PM, Richard W.M. Jones wrote: Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
- VMs are installed once and are running for week/months until they are
reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
In this case my preferred advice would be: DO NOT use cache=unsafe.
We've only tested scenarios for very short-lived build or temporary VMs (for example when I was building RISC-V packages before we had Koji, I used a script which created a VM per build and there it made sense to use cache=unsafe).
I do not think it's a good idea to be using this for VMs which are in any way long-lived as there could be unforeseen side effects which I'm not aware of and certainly have never tested.
- There would be no data loss in case of host or hypervisor crash.
Worst case, if guest operating system was corrupted sysadmins would need to trigger VM install.
Host crash => yes you'd definitely need to reinstall that VM.
It's not a worst case, a host crash would near-definitely corrupt a VM that was ignoring flush requests. It might even corrupt in an undetectable way (eg. throwing away data while leaving metadata intact).
Would it make sense to boot the builders with -snapshot and cache=unsafe? After all, during normal operation, they don’t need to persist anything.
It might even be reasonable to reboot the VMs after every single build.
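A tiny sketch of the -snapshot idea above, with a hypothetical disk path and the other VM options omitted; qemu's -snapshot keeps all guest writes in a temporary overlay that is discarded when the VM exits:

    qemu-system-x86_64 -snapshot \
        -drive file=/var/lib/libvirt/images/builder.qcow2,if=virtio,cache=unsafe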
On 07/21/2018 02:54 PM, Andrew Lutomirski wrote:
On Jul 15, 2018, at 5:47 AM, Richard W.M. Jones rjones@redhat.com wrote:
On Fri, Jul 13, 2018 at 04:05:42PM +0200, Mikolaj Izdebski wrote: On 07/12/2018 10:17 PM, Richard W.M. Jones wrote: Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
- VMs are installed once and are running for week/months until they are
reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
In this case my preferred advice would be: DO NOT use cache=unsafe.
We've only tested scenarios for very short-lived build or temporary VMs (for example when I was building RISC-V packages before we had Koji, I used a script which created a VM per build and there it made sense to use cache=unsafe).
I do not think it's a good idea to be using this for VMs which are in any way long-lived as there could be unforeseen side effects which I'm not aware of and certainly have never tested.
- There would be no data loss in case of host or hypervisor crash.
Worst case, if guest operating system was corrupted sysadmins would need to trigger VM install.
Host crash => yes you'd definitely need to reinstall that VM.
It's not a worst case, a host crash would near-definitely corrupt a VM that was ignoring flush requests. It might even corrupt in an undetectable way (eg. throwing away data while leaving metadata intact).
Would it make sense to boot the builders with -snapshot and cache=unsafe? After all, during normal operation, they don’t need to persist anything.
I don't think that's at all worth it for a slight bit of build speed.
It might even be reasonable to reboot the VMs after every single build.
Well, koji has no ability to do that currently, and note that some builders can in fact be doing multiple builds at once, so you would need to make sure all in progress builds were done and no new ones arrived, etc.
There was a project a while back to make koji builders more dynamic (I think by making them cloud instances), but I am not sure whatever happened with it.
kevin
On Sun, Jul 22, 2018 at 7:04 PM, Kevin Fenzi kevin@scrye.com wrote:
On 07/21/2018 02:54 PM, Andrew Lutomirski wrote:
On Jul 15, 2018, at 5:47 AM, Richard W.M. Jones rjones@redhat.com wrote:
On Fri, Jul 13, 2018 at 04:05:42PM +0200, Mikolaj Izdebski wrote: On 07/12/2018 10:17 PM, Richard W.M. Jones wrote: Does each build start with its own fresh VM? Do you care about the data in that build VM if either qemu or the host crashes? If the answers are 'Yes' and 'No' respectively to these questions then IMHO this is the ideal situation for cache=unsafe.
The answers are 'No' and 'Not much'.
- VMs are installed once and are running for week/months until they are
reinstalled. In the meantime guests and hosts are rebooted during routine maintenance, to apply updates.
In this case my preferred advice would be: DO NOT use cache=unsafe.
We've only tested scenarios for very short-lived build or temporary VMs (for example when I was building RISC-V packages before we had Koji, I used a script which created a VM per build and there it made sense to use cache=unsafe).
I do not think it's a good idea to be using this for VMs which are in any way long-lived as there could be unforeseen side effects which I'm not aware of and certainly have never tested.
- There would be no data loss in case of host or hypervisor crash.
Worst case, if guest operating system was corrupted sysadmins would need to trigger VM install.
Host crash => yes you'd definitely need to reinstall that VM.
It's not a worst case, a host crash would near-definitely corrupt a VM that was ignoring flush requests. It might even corrupt in an undetectable way (eg. throwing away data while leaving metadata intact).
Would it make sense to boot the builders with -snapshot and cache=unsafe? After all, during normal operation, they don’t need to persist anything.
I don't think thats at all worth it for a slight bit of build speed.
It might even be reasonable to reboot the VMs after every single build.
Well, koji has no ability to do that currently, and note that some builders can in fact be doing multiple builds at once, so you would need to make sure all in progress builds were done and no new ones arrived, etc.
There was a project a while back to make koji builders more dynamic (I think by making them cloud instances), but I am not sure whatever happened with it.
I seem to remember there was discussion of replacing mock with docker containers as well; again, I don't know what happened to that either.
On Thu, Jul 12, 2018 at 09:17:41PM +0100, Richard W.M. Jones wrote:
On Thu, Jul 12, 2018 at 02:10:37PM -0400, Cole Robinson wrote:
On 07/11/2018 04:37 PM, Kevin Fenzi wrote:
On 07/11/2018 12:57 PM, Mikolaj Izdebski wrote:
On 07/11/2018 09:26 PM, Kevin Fenzi wrote:
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
cache=unsafe is documented at [1]. (Basically, in virt_install_command you append ",cache=unsafe" to --disk parameter, next to "bus=virtio".) It makes buildvmhost cache all disk operations and ignore sync operations. Similar to nosync, but does not work on buildhw, works on virthost level, applies to all operations, not just dnf.
[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/htm...
Ah, I see at the vm level. Yeah, I don't think this would be very much of a win for us. The x86_64 buildvm's have all their storage on iscsi, the arm ones have their storage on ssd's. I suppose it could help the ppc64{le} ones, they are on 10k sas drives. I'm pretty leary of enabling anything called 'unsafe' though.
I think it's unsafe only in the case of on-disk consistency, so across VM reboots. I _think_ over a single run of a VM it's safe, which may describe koji usage.
I know rjones has looked deeply at qemu caching methods for use in libguestfs so maybe he can comment, CC'd
I cover caching modes about half way down here:
https://rwmj.wordpress.com/2013/09/02/new-in-libguestfs-allow-cache-mode-to-...
First off, cache=unsafe really does improve performance greatly, I measured around 25% on a disk-heavy workload.
FYI to augment what Rich's blog post says, it helps to understand the difference between cache modes. The QEMU 'cache' setting actually controls 3 separate tunables under the hood:
             │ cache.writeback   cache.direct   cache.no-flush
─────────────┼─────────────────────────────────────────────────
writeback    │ on                off            off
none         │ on                on             off
writethrough │ off               off            off
directsync   │ off               on             off
unsafe       │ on                off            on
IOW, changing from cache=none to cache=unsafe turns off O_DIRECT so data is buffered in host RAM, and also turns off disk flushing, so QEMU never requests it to be pushed out to disk. The latter change is what makes it so catastrophic on host failure - even a journalling filesystem in the guest won't save you because we're ignoring the flush requests that are required to make the journal work safely.
The combination of not using O_DIRECT and not honouring flush requests means that all I/O operations on the guest complete pretty much immediately, without ever waiting for the host to do the real I/O.
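For reference, the three tunables from the table can also be set individually on the QEMU command line; a hypothetical -drive spec equivalent to cache=unsafe would look roughly like:

    -drive file=builder.qcow2,if=virtio,cache.writeback=on,cache.direct=off,cache.no-flush=on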
The amount of RAM you have in the host though is pretty relevant here. If the guest is doing I/O faster than the host OS can write it to disk and there's never any flush requests to slow the guest down, you're going to use an ever increasing amount of host RAM for caching I/O. This could be a bad thing if you're contending on host RAM - it could even push other important guests out to swap or trigger OOM killer.
IOW, using O_DIRECT (cache=none or directsync) is a good thing if you need predictable host RAM usage - the only RAM used for I/O cache is that assigned to the guest OS itself.
With using cache=unsafe for Koji I'd be a little concerned about whether a build could inflict a denial of service on host RAM either intentionally or accidentally, as the guest is relatively untrustworthy and/or unconstrained in what it is running.
Finally the issue of O_DIRECT vs host page cache *only* applies if your QEMU process is using locally exposed storage. ie a plain file, or a local device node in /dev. If QEMU is using iSCSI via its built-in network client, then host page cache vs O_DIRECT is irrelevant. In this latter case, using cache=unsafe might be OK from a host RAM consumption POV - though I'm not entirely sure what the RAM usage pattern of the QEMU iSCSI client is like.
Regards, Daniel
On Wed, Jul 11, 2018 at 12:26:01PM -0700, Kevin Fenzi wrote:
On 07/11/2018 10:53 AM, Mikolaj Izdebski wrote:
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
I think the reason why releng didn't want to do that is because we don't want to trade speed for reliability. True, we don't care if a machine crashes in the middle of a build (because another one will take it after the crashed one comes back), but we don't want to change anything that might affect the actual build artifacts.
So, are we sure that nosync (disabling all fsync calls) doesn't change the builds being made? What about test suites for packages that specifically call fsync? They would always pass even if there was a problem? We could try this in staging I suppose and have koschei run a ton of builds to see what breaks...
The effects of fsync are impossible to see unless you hard-reboot the machine. (OK, strictly speaking, you can time the fsync call, but let's ignore that.) I'd be more worried about some side-effects of the way that nosync is implemented with an LD_PRELOAD. I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls. (If nspawn is already used by koji, not sure.)
Zbyszek
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
On Wed, Jul 11, 2018 at 08:27:22PM +0000, Zbigniew Jędrzejewski-Szmek wrote:
On Wed, Jul 11, 2018 at 12:26:01PM -0700, Kevin Fenzi wrote:
On 07/11/2018 10:53 AM, Mikolaj Izdebski wrote:
On 07/11/2018 07:31 PM, Andrew Lutomirski wrote:
On Wed, Jul 11, 2018 at 10:08 AM, Mikolaj Izdebski mizdebsk@redhat.com wrote:
The slowest parts of setting up chroot is writing packages to disk, synchronously. This part can be speeded up a lot by enabling nosync in site-defaults.cfg mock config on Koji builders, setting cache=unsafe on kvm buildvms, or both. These settings are safe because builders upload all results to hubs upon task completion. With these settings chroot setup can take about 30 seconds.
I don't suppose this could get done?
I proposed this a few years ago, but the answer was "no".
I think the reason why releng didn't want to do that is because we don't want to trade speed for reliability. True, we don't care if a machine crashes in the middle of a build (because another one will take it after the crashed one comes back), but we don't want to change anything that might affect the actual build artifacts.
So, are we sure that nosync (disabling all fsync calls) doesn't change the builds being made? What about test suites for packages that specifically call fsync? They would always pass even if there was a problem? We could try this in staging I suppose and have koschei run a ton of builds to see what breaks...
The effects of fsync are impossible to see unless you hard-reboot the machine. (OK, strictly speaking, you can time the fsync call, but let's ignore that). I'd be more worried about some side-effects of the way that nosync is implemented with a LD_PRELOAD. I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls. (If nspawn is already used by koji, not sure.)
Oh, I saw Mikołaj's answer just now. So yeah, if nosync is only used for dnf then we should really enable it by default.
Zbyszek
I don't see the cache=unsafe anywhere (although the name sure makes me want to enable it for official builds let me tell ya. ;) Can you point out more closely where it is or docs for it?
On 2018-07-11, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
The effects of fsync are impossible to see unless you hard-reboot the machine.
Are you sure non-fsynced changes are guaranteed to be visible at the block cache level? E.g. if you mix read/write and mmapped I/O from different processes?
I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls.
Can the syscall filter fake a success of the syscall return value? Correctly written applications check fsync() return value and forward the error.
-- Petr
On Thu, Jul 12, 2018 at 07:32:19AM +0000, Petr Pisar wrote:
On 2018-07-11, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
The effects of fsync are impossible to see unless you hard-reboot the machine.
Are you sure non-fsynced changes are are guaranteed to be visible on block cache level? E.g. if you mix read/write and mmaped I/O from different processes?
Block cache — no, I don't think so. But do we have packages that do anything like this during build? It'd require low-level fs support and would be probably pretty fragile anyway.
I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls.
Can the syscall filter fake a success of the syscall return value? Correctly written applications check fsync() return value and forward the error.
It can, e.g. something like systemd-nspawn --system-call-filter='~sync:0 fsync:0' should be a good start.
Zbyszek
On Thu, 2018-07-12 at 07:32 +0000, Petr Pisar wrote:
On 2018-07-11, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
The effects of fsync are impossible to see unless you hard-reboot the machine.
Are you sure non-fsynced changes are are guaranteed to be visible on block cache level? E.g. if you mix read/write and mmaped I/O from different processes?
In Linux, file writes and memory writes all hit the unified page cache, so there is no difference at all; only direct I/O skips the page cache IIRC (but it should also invalidate it, so again no issues for applications).
fsync only really makes sure that what's in memory is pushed down to disk and is safely on permanent storage (which is a lie with some storage, but that is a different problem).
I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls.
Can the syscall filter fake a success of the syscall return value? Correctly written applications check fsync() return value and forward the error.
No, nspawn's filter just uses seccomp filters, which return EINVAL/EPERM (IIRC) on blocked arguments/syscalls.
So it is indeed not appropriate to use nspawn's filters to block fsync()
Simo.
On Jul 12, 2018, at 4:26 AM, Simo Sorce simo@redhat.com wrote:
On Thu, 2018-07-12 at 07:32 +0000, Petr Pisar wrote:
On 2018-07-11, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote: The effects of fsync are impossible to see unless you hard-reboot the machine.
Are you sure non-fsynced changes are are guaranteed to be visible on block cache level? E.g. if you mix read/write and mmaped I/O from different processes?
In linux file writes and memory writes all hit the unified page cache so there is not difference at all, only direct io skips the page cache IIRC (but it should also invalidate it, so again no issues to applications).
fsync only really make sure that what's in memory is pushed down to disk and is safely on permanent storage (which is a lie with some storage, but that is a different problem).
I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls.
Can the syscall filter fake a success of the syscall return value? Correctly written applications check fsync() return value and forward the error.
No, nspawn's filter just uses seccmop filters, which return EINVAL/EPERM (IIRC) on blocked arguments/syscalls
So it is indeed not appropriate to use nspawn's filters to block fsync()
Seccomp can be used to block a syscall and fake a return value of 0 (success) or any error code chosen by the filter. I assume systemd exposes this functionality.
Once upon a time, Petr Pisar ppisar@redhat.com said:
On 2018-07-11, Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
The effects of fsync are impossible to see unless you hard-reboot the machine.
Are you sure non-fsynced changes are are guaranteed to be visible on block cache level? E.g. if you mix read/write and mmaped I/O from different processes?
fsync() has nothing to do with that - it is purely a request to push the buffer to disk. There is nothing defined about fsync() that would affect inter-process I/O.
http://pubs.opengroup.org/onlinepubs/009695299/functions/fsync.html
On 07/11/2018 10:27 PM, Zbigniew Jędrzejewski-Szmek wrote:
The effects of fsync are impossible to see unless you hard-reboot the machine. (OK, strictly speaking, you can time the fsync call, but let's ignore that). I'd be more worried about some side-effects of the way that nosync is implemented with a LD_PRELOAD. I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls. (If nspawn is already used by koji, not sure.)
Koji does not use systemd-nspawn. It uses plain old chroot.
On 12.7.2018 at 18:00, Mikolaj Izdebski wrote:
On 07/11/2018 10:27 PM, Zbigniew Jędrzejewski-Szmek wrote:
The effects of fsync are impossible to see unless you hard-reboot the machine. (OK, strictly speaking, you can time the fsync call, but let's ignore that). I'd be more worried about some side-effects of the way that nosync is implemented with a LD_PRELOAD. I wonder if it wouldn't be more robust to use nspawn's syscall filter to filter the fsync calls. (If nspawn is already used by koji, not sure.)
Koji does not use systemd-nspawn. It uses plain old chroot.
Actually, it would be nice if somebody wanted to implement this for those of us who use mock with systemd-nspawn :)
V.
Igor Gnatenko wrote:
A lot of packages in 2018 are not written in C/C++
… and this is the problem that needs fixing.
It is just a PITA to have packages dragging in more and more interpreters and/or language runtimes. The slowness and lack of compile-time type safety of interpreted languages are also a big problem.
Kevin Kofler
On Wed, 2018-07-11 at 16:37 +0200, Kevin Kofler wrote:
Igor Gnatenko wrote:
A lot of packages in 2018 are not written in C/C++
… and this is the problem that needs fixing.
It is just a PITA to have packages dragging in more and more interpreters and/or language runtimes. The slowness and lack of compile-time type safety of interpreted languages are also a big problem.
Unless you think Fedora can somehow "fix" this "problem", it's the reality of the world Fedora lives in, whether you think it's a "problem" or not.
On 07/11/2018 07:37 AM, Kevin Kofler wrote:
Igor Gnatenko wrote:
A lot of packages in 2018 are not written in C/C++
… and this is the problem that needs fixing.
It is just a PITA to have packages dragging in more and more interpreters and/or language runtimes. The slowness and lack of compile-time type safety of interpreted languages are also a big problem.
(donning a Rust Evangelism cape) So I hear you like compile-time safety...
No, I don't seriously want to get into a language comparison here, except to say that it's reasonable for the world to expand beyond C/C++, even for compiled languages.
And back on topic, rustc currently requires cc as a linker anyway.
On Wed, 11 Jul 2018 18:26:23 +0200, Josh Stone wrote:
So I hear you like compile-time safety...
No, I don't seriously want to get into a language comparison here, except to say that it's reasonable for the world to expand beyond C/C++,
There is no C/C++ language. There are two orthogonal languages, C and C++. (And some people say C++11 and C++03 are also orthogonal.)
Jan Kratochvil
On 07/11/2018 10:01 AM, Jan Kratochvil wrote:
On Wed, 11 Jul 2018 18:26:23 +0200, Josh Stone wrote:
So I hear you like compile-time safety...
No, I don't seriously want to get into a language comparison here, except to say that it's reasonable for the world to expand beyond C/C++,
There is no C/C++ language. There are two orthogonal languages, C and C++. (And some people say C++11 and C++03 are also orthogonal.)
If you're going to be pedantic, know that "/" can be shorthand for "or": https://en.wikipedia.org/wiki/Slash_(punctuation)#Connecting_alternatives
Jan Kratochvil wrote:
There is no C/C++ language. There are two orthogonal languages, C and C++. (And some people say C++11 and C++03 are also orthogonal.)
Yes, C and C++ are divergent languages (I wouldn't call them "orthogonal", but they are definitely different things), but gcc-c++ currently Requires gcc, so if we have C++ support in the default buildroot (which I think we should), we automatically also have C support.
Kevin Kofler
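This is easy to check against the repositories (a hedged example; the exact version in the output depends on the current gcc-c++ build):

    $ dnf repoquery --requires gcc-c++ | grep '^gcc '
    gcc = <matching gcc version-release>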
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
-- Miro Hrončok -- Phone: +420777974800 IRC: mhroncok
On Fri, Jul 13, 2018 at 12:39:55PM +0200, Igor Gnatenko wrote:
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
bionetgen was one. It was failing with "/bin/sh: g++: command not found". It is my package; I took care of that already. (Now it's failing on something unrelated.)
Zbyszek
On 13.7.2018 12:39, Igor Gnatenko wrote:
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
Sorry, it was just random browsing and I cannot seem to find them again except bionetgen, which Zbyszek already took care of.
This bit the bcache-tools package too, which I fixed.
On 07/13/2018 12:39 PM, Igor Gnatenko wrote:
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
-- Miro Hrončok -- Phone: +420777974800 IRC: mhroncok
--
-Igor Gnatenko
On Fri, 13 Jul 2018 at 12:39, Igor Gnatenko ignatenkobrain@fedoraproject.org wrote:
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
I've just added and pushed the needed BuildRequires for my package (extractpdfmark). I didn't bump the release version. Will you do it when you make a new mass rebuild?
I see that on 3rd of July the build was successful (despite the missing gcc-c++ requirement): https://koji.fedoraproject.org/koji/buildinfo?buildID=1102851
Does it mean that the change in koji was implemented only recently?
On Mon, Jul 16, 2018 at 7:29 AM Federico Bruni fede@inventati.org wrote:
On Fri, 13 Jul 2018 at 12:39, Igor Gnatenko ignatenkobrain@fedoraproject.org wrote:
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
I've just added and pushed the needed BuildRequires for my package (extractpdfmark). I didn't bump the release version. Will you do it when you make a new mass rebuild?
I see that on 3rd of July the build was successful (despite the missing gcc-c++ requirement): https://koji.fedoraproject.org/koji/buildinfo?buildID=1102851
Does it mean that the change in koji was implemented only recently?
Yes, it was implemented on the 10th or thereabouts. You need to bump the release and rebuild as usual.
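For anyone in the same situation, the usual packager flow is roughly the following (a sketch only, using extractpdfmark from the message above as the example; it assumes rpmdevtools and fedpkg are installed, and the commit message is illustrative):

    fedpkg clone extractpdfmark && cd extractpdfmark
    # add the missing BuildRequires: gcc-c++ to the spec, then bump the release:
    rpmdev-bumpspec -c "Add BuildRequires: gcc-c++ (Remove GCC from BuildRoot)" extractpdfmark.spec
    git commit -am "Add BuildRequires: gcc-c++"
    fedpkg push && fedpkg build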
On Fri, Jul 13, 2018 at 12:39:55PM +0200, Igor Gnatenko wrote:
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
Can you maybe add:
g?cc: [Cc]ommand not found
to the script, re-grep, and fix the resulting packages? I just checked 3 of my 12 failed pkgs and they had error messages like the following:
| make[1]: gcc: Command not found
| sh: gcc: command not found
| make: cc: Command not found
|
| https://kojipkgs.fedoraproject.org//work/tasks/215/28320215/build.log
| https://kojipkgs.fedoraproject.org//work/tasks/9137/28229137/build.log
| https://kojipkgs.fedoraproject.org//work/tasks/4199/28314199/build.log
If you have the tools ready, it would be easier to just re-run them, IMHO.
Kind regards Till
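Something along these lines would do it (illustrative only, not the actual tooling behind the 2k commits; the task paths are the ones quoted above):

    # Download the quoted build logs and grep them for missing-compiler messages.
    # The pattern covers gcc/cc as well as g++/c++, case-insensitively.
    for task in 215/28320215 9137/28229137 4199/28314199; do
        curl -sO "https://kojipkgs.fedoraproject.org/work/tasks/$task/build.log"
        grep -Ei '(g?cc|[gc][+][+]): command not found' build.log \
            && echo "-> task $task needs a compiler BuildRequires"
    done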
On Fri, 2018-07-13 at 12:39 +0200, Igor Gnatenko wrote:
On Fri, Jul 13, 2018, 11:19 Miro Hrončok mhroncok@redhat.com wrote:
On 8.7.2018 20:46, Igor Gnatenko wrote:
As per Changes/Remove GCC from BuildRoot https://fedoraproject.org/wiki/Changes/Remove_GCC_from_BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I'm going to do this tomorrow.
After which, I'm going to ask rel-eng to finally remove it from buildroot. This will happen before mass rebuild. Stay tuned.
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
Yes, I've pushed over 2k commits adding those, but the regexp might not have caught all possible cases. I would appreciate it if you could link such packages so that I can fix them. Or maintainers can do it themselves.
releng's debconf-1.5.63-4.fc29 failed to build
man2html-1.6-22.g.fc29
noip-2.1.9-26.fc29
p7zip-16.02-13.fc29 failed to build
perl-File-FcntlLock-0.22-13.fc29
perl-Mail-Transport-Dbx
pngquant-2.12.1-2.fc29
subdownloader-2.0.19-8.fc29
python-bitarray-0.8.3-2.fc29
rawstudio-2.1-0.19.20170414.g003dd4f_rawspeed.20161119.gfa23d1c.fc29
virtualbox-guest-additions-5.2.14-2.fc29
tetrinetx-1.13.16-21.fc29
I already fixed unar and dpkg. If you could fix some of those, I'd be grateful.
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
On Fri, 13 Jul 2018, Miro Hrončok wrote:
I've clicked randomly through the failures during the mass rebuild at [1].
I see quite a lot of 'command not found' errors for gcc, cc, c++...
I think the maintainers should add them and that's fine, but it seemed that during this change you said you would add those. Did it happen?
[1] https://kojipkgs.fedoraproject.org/mass-rebuild/f29-failures.html
This list seems to only cover packages starting with an uppercase letter, or a letter before lowercase 'i'.
Also, it only lists one maintainer, and omits co-maintainers.
Would it be possible for a full list to be produced, and once done, mentioned here?
thank you
-- Russ herrold
"RPH" == R P Herrold herrold@owlriver.com writes:
RPH> This list seems to only cover packages starting with an uppercase
RPH> letter, or a letter before lowercase 'i'
Well, the list is incomplete because the mass rebuild is not complete. Upper case letters and digits were submitted first. Currently packages in the "perl" range are being submitted and with the exception of a few which seem to have hung, most builds through the 'e' range have completed.
So you will have to wait a while longer if you insist on having a complete list of failures.
- J<
On Sun, Jul 8, 2018 at 1:46 PM, Igor Gnatenko ignatenkobrain@fedoraproject.org wrote:
As per Changes/Remove GCC from BuildRoot, I'm going to automatically add BuildRequires: gcc and/or BuildRequires: gcc-c++ to packages which fail to build with common messages (like gcc: command not found, also autotools/cmake/meson are supported).
I just got four bug reports for Vala projects that are failing due to missing GCC:
https://bugzilla.redhat.com/show_bug.cgi?id=1603972 https://bugzilla.redhat.com/show_bug.cgi?id=1604143 https://bugzilla.redhat.com/show_bug.cgi?id=1604150 https://bugzilla.redhat.com/show_bug.cgi?id=1604352
The error message is:
configure: error: in `/builddir/build/BUILD/five-or-more-3.28.0':
configure: error: no acceptable C compiler found in $PATH
Michael
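The likely fix for these Vala packages is the same pattern as elsewhere: valac only generates C, so the generated sources still need a C compiler at build time. A hedged, package-agnostic spec fragment (not the exact diff applied to those four bugs):

    BuildRequires:  vala
    BuildRequires:  gcc    # valac output is C; configure looks for a C compiler in $PATH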
Another one, this time without any Vala:
On Mon, Jul 23, 2018 at 4:59 PM mcatanzaro@gnome.org wrote:
Another one, this time without any Vala:
Thanks a lot for your input! I'm going to block the Change tracking bug and fix them automatically within a few days.
https://bugzilla.redhat.com/show_bug.cgi?id=1606043