I'm running Fedora 21 with a custom compiled kernel, 3.19.0-1.20150211.fc21.x86_64.
I have a multi core system with 6 cores. All are recognized by the kernel.
But, when I run a compile job with -j6, in order to allow all six cores to be used, it limits the total amount of usage to 100% of a *single* core. So, it might use all six cores, but the sum of the percentages on those six cores is always around 100% of one core. This is from htop output.
On large compilations, like the kernel or firefox, even using 4 cores could drastically reduce compile time.
I've looked at /etc/security/limits.conf, but it doesn't seem to have a setting for this. I've also looked at the /proc system to see if there is a kernel variable, though that seems unlikely, with no luck. Online searching found ways to limit the amount that a single job can get, but not how to set this for a user. There must be a configuration variable somewhere that is limiting the amount of total cpu a user can use. But I can't find it.
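For anyone following along, a few read-only checks help narrow down where such a cap could come from; these are standard utilities and nothing here is specific to this particular machine:

```shell
# All read-only; run as the user whose compiles appear capped.
nproc                                             # cores the kernel reports as usable
grep -m1 '^Cpus_allowed_list' /proc/self/status   # CPU affinity of this shell
ulimit -t                                         # per-process CPU-time limit (normally "unlimited")
command -v taskset >/dev/null 2>&1 && taskset -cp $$ || true   # same affinity via util-linux, if installed
```

If the affinity list shows all six cores and the CPU-time limit is unlimited, the cap is coming from somewhere other than the classic per-user limits.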
Can anyone help?
On Mon, 9 Mar 2015 12:03:39 -0700 stan stanl-fedorauser@vfemail.net wrote:
This is from htop output.
Correction. *atop* output.
Hello Stan, Just a minor clarification: when compiling, the -j flag should be set to a value above the number of available cores in order to fully utilize all of them.
I'm sure you already know this, but nevertheless beware when doing intensive compilation and using all your cores, as you might end up with a non-responding system until the compilation is done.
Regards,
users mailing list users@lists.fedoraproject.org To unsubscribe or change subscription options: https://admin.fedoraproject.org/mailman/listinfo/users Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines Have a question? Ask away: http://ask.fedoraproject.org
On Mon, 9 Mar 2015 17:30:57 -0300 Martin Cigorraga martincigorraga@gmail.com wrote:
Hello Stan, Just a minor clarification: when compiling, the -j flag should point to a unit above your available cores in order to fully utilize all of them.
I'm sure you already know this, but nevertheless beware when doing intensive compilation and using all your cores as you might end with a non-responding system until the compilation is done.
Thanks. Yes, I have read that, but haven't had any chance to experience it yet because of this issue. :-) In fact, I've read that it is good to ask for 50% more processes than the number of cores.
Ah... couldn't tell, but the Gentoo wiki and, to a lesser extent, Arch's are excellent resources to learn everything about that! HTH
On 09.03.2015, stan wrote:
But, when I run a compile job with -j6, in order to allow all six cores to be used, it limits the total amount of usage to 100% of a *single* core. So, it might use all six cores, but the sum of the percentages on those six cores is always around 100% of one core. This is from htop output.
This is the CPU scheduler not maximizing usage. Try this next time:
nice -n -20 make -j6
Or choose any nice level which fits better for you, "man nice".
Wouldn't it be better to use -j with no argument? The make manual states: "If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously."
On 10.03.2015, ergodic wrote:
Wouldn't it be better to use -j with no argument?
It's a matter of taste. I compile my kernels with a nice value of 19 (lowest priority), because mostly I have to do other work while compiling a new kernel.
I've never tried -j. How many processes does it open on your machine? Are you able to do something else in parallel, or is the load too high?
Frankly, I never check the load; I just use -j with no argument, but I always run other processes in parallel with no problem.
On Tue, 10 Mar 2015 11:35:24 +0100 Heinz Diehl htd+ml@fritha.org wrote:
On 09.03.2015, stan wrote:
But, when I run a compile job with -j6, in order to allow all six cores to be used, it limits the total amount of usage to 100% of a *single* core. So, it might use all six cores, but the sum of the percentages on those six cores is always around 100% of one core. This is from htop output.
This is the CPU scheduler not maximizing usage. Try this next time:
nice -n -20 make -j6
Or choose any nice level which fits better for you, "man nice".
Thanks. When I had a single core machine, I used to adjust ionice and nice to be ultra kind on heavy jobs so they wouldn't impact my graphical interface experience. Worked fine.
I tried making the job more greedy as you suggest, and it refuses because I don't have the authority to adjust niceness down. I don't want to run it as root, and I shouldn't have to.
I don't see why this is necessary. The system is showing 470% idle. So the kernel cpu scheduler shouldn't need to limit the job to a single core maximum usage. Even if it leaves some margin for error, it should still be using more than a single core equivalent. The kernel programmers are smart folks. Not to mention that they do large compilations on multi-core machines often. I doubt that they hard coded this kind of behavior into the kernel. So there must be a setting that is limiting the kernel scheduler in some way. Maybe it's the scheduler that is being used. I'm using 'ondemand' rather than 'performance'. 'Performance' sounds like it keeps everything at full rev all the time. While power isn't an issue for me, I don't see a reason to generate all that heat.
I'll keep plugging away, reading and experimenting, until I get it or give up.
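A sketch of how to inspect the governor from sysfs, for reference; the path is the standard cpufreq interface, which may be absent on some machines (e.g. in a VM):

```shell
# Print the scaling governor for each core (read-only).
if ls /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null 2>&1; then
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
else
    echo "no cpufreq interface here"
fi
# Switching governors needs root, e.g.:
# echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```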
On 11.03.2015, stan wrote:
I don't see why this is necessary. The system is showing 470% idle. So the kernel cpu scheduler shouldn't need to limit the job to a single core maximum usage.
I just tried a simple "make" on an 8-core machine. There was exactly one compile process, and its 100% load was distributed over 3 cores. So nothing wrong with that one. Whether you run 100% on one core or 100% distributed over multiple cores is, in terms of efficiency, the same.
Even if it leaves some margin for error, it should still be using more than a single core equivalent. The kernel programmers are smart folks. Not to mention that they do large compilations on multi-core machines often. I doubt that they hard coded this kind of behavior into the kernel.
It's the limiting to one process which causes what you observe. One process cannot get more resources than 100%. The CPU scheduler handles how they are distributed.
So there must be a setting that is limiting the kernel scheduler in some way. Maybe it's the scheduler that is being used. I'm using 'on demand' rather than 'performance'.
Ondemand and performance affect the cpufreq, not the load balancing or the involvement of different cores.
'Performance' sounds like it keeps everything at full rev all the time.
No. It keeps every core running at full speed all the way, which has nothing to do with how the load is balanced between different cores.
I'll keep plugging away, reading and experimenting, until I get it or give up.
Use "make -j" when compiling and be happy :-)
On Wed, 11 Mar 2015 11:34:25 +0100 Heinz Diehl htd+ml@fritha.org wrote:
I just tried a simple "make" on an 8-core machine. There was exactly one compile process, and its 100% load was distributed over 3 cores. So nothing wrong with that one. Whether you run 100% on one core or 100% distributed over multiple cores is, in terms of efficiency, the same.
This is my experience as well.
It's the limiting to one process which causes what you observe. One process cannot get more resources than 100%. The CPU scheduler handles how they are distributed.
I think this is the key. What is the point of -j6 or -j8 if the make can't spawn additional processes with their own limits, and thus take advantage of more resources that are available? What is it that limits a process and its children from using more resources than a single core, even though they are available?
Ondemand and performance affect the cpufreq, not the load balancing or the involvement of different cores.
Thanks, learn something every day.
No. It keeps every core running at full speed all the way, which has nothing to do with how the load is balanced between different cores.
Can you point me to which area of the kernel has the code that does the actual load balancing? Maybe it would be easy to do a custom patch that bypasses this limiting behavior. I understand that parallel computing requires parallel programming in the code, but I'm thinking more of letting make have more than a single core available. As you point out above, it is already using multiple cores. I just want it to be able to use all of those multiple cores if they are available.
I'll keep plugging away, reading and experimenting, until I get it or give up.
Use "make -j" when compiling and be happy :-)
Truly, it will probably come to this. I can then start the job and let it run in the background with no impact to other things I am doing. What I was hoping was that when I wanted to run things overnight, I could kick off a couple of compute intensive jobs, and they would share all the resources of the computer until they were done. With no impact to my use of the computer because I wouldn't be interacting with it.
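That overnight scenario can be sketched like this; the echo jobs are placeholders for real "make -j6" invocations, and ionice comes from util-linux:

```shell
#!/bin/sh
# run_low: run a command at the lowest CPU priority, plus the idle I/O
# class when ionice is available, so interactive use is unaffected.
run_low() {
    if command -v ionice >/dev/null 2>&1; then
        nice -n 19 ionice -c 3 "$@"
    else
        nice -n 19 "$@"
    fi
}

# Kick off two heavy jobs in the background and wait for both.
run_low sh -c 'echo job-A done' &
run_low sh -c 'echo job-B done' &
wait
```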
On 9 March 2015 at 19:03, stan stanl-fedorauser@vfemail.net wrote:
Been wondering about this thread for a while, as I use make -j N since some builds I've had to deal with (including the kernel) have pretty much shut down the machines they're running on if allowed to run in unrestricted -j mode.
Some people have said that this is priority related; that's not the case. I can run "make -j10" here (dcmtk-3.6.1 to test, if anyone wants to know) and see multiple cc1plus processes going up to 100% at points, on RHEL 6 with a 2.6.32 kernel. What I do see during that process, though, is that early on multiple jobs run at less than 100%, maybe at approximately 100%/N; this may be due to how make starts parallel jobs, or it may simply be I/O or other non-CPU limiting on the compilation. You may want to check the .NOTPARALLEL directive is not present http://www.gnu.org/software/make/manual/make.html#Parallel though I think that would simply prevent multiple processes.
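Checking a tree for that directive is a one-liner; "." stands for wherever the source lives:

```shell
# List any makefile lines mentioning .NOTPARALLEL, or say so if there are none.
grep -rn --include='Makefile*' --include='*.mk' '\.NOTPARALLEL' . || echo "none found"
```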
To repeat, make -j N should be able to start N processes and they should not be subject to an overall limit other than hardware. (Incidentally, one process can use more than 100% if written to use parallelisation, you can often see jvm doing this.)
Since I can't reproduce this problem I'm not sure what's causing it. If you really are finding make subprocesses limited to 100% cpu across the lot then maybe have a look to see if there are any cgroups limits active https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm... may also be worth running on the stock fedora kernel to test that it's not something that you've turned on in your custom kernel.
Like I mentioned above, I've found that -j without N can really make things drag on heavy builds. Even if you don't care about running other things at the same time, you can often get a faster build by choosing a good N, as too many processes at once compete for other resources and run inefficiently. Things like hyperthreading can compound this. The only time I've done that is building the kernel on a dual core intel machine (no hyperthreading), and N=3 did turn out to be fastest, but that 50% rule may not always be the case. With hyperthreading present I've found with other processing tasks that pushing above 50% total system load (i.e. more than N*100%, due to virtual cores being counted) can actually slow down the overall task noticeably.
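The cores-plus-one starting point can be computed rather than hard-coded; note that nproc counts hyperthread siblings as cores, which is exactly the caveat above:

```shell
# Jobs = online CPUs + 1; a common starting point, not a law.
N=$(( $(nproc) + 1 ))
echo "suggested: make -j$N"
```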
On 03/11/2015 11:49 AM, stan wrote:
On Wed, 11 Mar 2015 11:34:25 +0100 Heinz Diehl htd+ml@fritha.org wrote:
It's the limiting to one process which causes what you observe. One process cannot get more resources than 100%. The CPU scheduler handles how they are distributed.
I think this is the key. What is the point of -j6 or -j8 if the make can't spawn additional processes with their own limits, and thus take advantage of more resources that are available? What is it that limits a process and its children from using more resources than a single core, even though they are available?
We're thinking in terms of one machine with multiple cores here. What about an environment with multiple machines (each possibly with multiple cores)? Now you have *many* more possibilities of where to run compiles with -j. Consider (for example) distcc. It can be configured to run build components on different machines (configurable per machine as to how many). So now, the -j 10 or -j 20 has more possibilities for distributing the load during the "make".
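A hedged distcc sketch; the host names and the "/4" slot counts are placeholders for a real build farm, and the -j total is usually set near the sum of the slots:

```shell
# Fan compile jobs out to two remote build hosts plus the local machine.
export DISTCC_HOSTS="localhost/2 buildbox1/4 buildbox2/4"
# make -j10 CC="distcc gcc"   # commented: needs distcc installed on every host
echo "$DISTCC_HOSTS"
```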
On 11.03.2015, stan wrote:
What is the point of -j6 or -j8 if the make can't spawn additional processes with their own limits, and thus take advantage of more resources that are available?
The point is simply that you can exactly determine how many processes should be used.
What is it that limits a process and its children from using more resources than a single core, even though they are available?
As said, load balancing, task migration and the like is done by the CPU scheduler. A single process is not limited to a single core, but its load is distributed over multiple cores.
Can you point me to which area of the kernel has the code that does the actual load balancing?
Haven't looked into this for some time, but take a look into /usr/src/linux/kernel/sched/fair.c. (The CFS code is complex and difficult to understand, though - at least for me).
Maybe it would be easy to do a custom patch that bypasses this limiting behavior. I understand that parallel computing requires parallel programming in the code, but I'm thinking more of letting make have more than a single core available.
Although there are voices saying that the current CPU scheduler (CFS) underuses the CPU (see e.g. the comments on BFS), I'm afraid what you see is intentional, and not faulty behaviour.
As you point out above, it is already using multiple cores. I just want it to be able to use all of those multiple cores if they are available.
I see.
I can then start the job and let it run in the background with no impact to other things I am doing.
That is why I compile my things using "nice -n 19 make -j8" (on an 8-core).
What I was hoping was that when I wanted to run things overnight, I could kick off a couple of compute intensive jobs, and they would share all the resources of the computer until they were done.
You'll never be able to use 100% of all resources, because the system has to run while you're compiling. All you can do is to use multiple processes, if appropriate and available.
Btw, here is a good explanation of Linux SMP scheduling: http://tinyurl.com/o4nuaxr
And also take a look here: https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt http://ck.kolivas.org/patches/bfs/3.0/3.18/3.18-sched-bfs-460.patch (BFS is designed with latency in mind, not throughput).
On Wed, 11 Mar 2015 16:36:43 +0000 Ian Malone ibmalone@gmail.com wrote:
You may want to check the .NOTPARALLEL directive is not present http://www.gnu.org/software/make/manual/make.html#Parallel though I think that would simply prevent multiple processes.
This sounded like exactly the problem, but when I checked all the make files in the kernel build tree, none of them had this directive.
To repeat, make -j N should be able to start N processes and they should not be subject to an overall limit other than hardware. (Incidentally, one process can use more than 100% if written to use parallelisation, you can often see jvm doing this.)
I took Martin's suggestion and checked the Gentoo take on this. Your experience is the general experience they had. But there were some people that didn't get that, and the suggestion they got was that the make file had been written in such a way that it wouldn't allow the request (the impression was *badly* written). But, again, there were many people saying they pegged all their cores at 100% when compiling the kernel just by using make -j#. There was lots of discussion of what # should be, and even testing programs that people could use. On my box, I even see the kernel request -j6 on its own, but it still only uses 1 core equivalent. When I run a kernel compile with make -j, I see dozens of processes created by make in htop, but they still only use the equivalent of 1 core of cpu.
When I build firefox nightly with -j6, just at the end of the export phase, and before the compile starts, I see all 6 cores maxed out. Once the compile starts, it is back to a single core equivalent. The Gentoo users seemed to suggest that this was a flaw in the firefox build process, though, and not the fault of the scheduler.
Since I can't reproduce this problem I'm not sure what's causing it. If you really are finding make subprocesses limited to 100% cpu across the lot then maybe have a look to see if there are any cgroups limits active https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm... may also be worth running on the stock fedora kernel to test that it's not something that you've turned on in your custom kernel.
This sounds promising, and I have cgroups turned on in the config file, but so does the standard kernel. I also don't know how I would look for cgroup configuration. I'll do more research.
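For the record, one way to look: each process lists its cgroup membership in /proc, and under cgroup v1 (the layout on F21) the cpu controller exposes its quota in sysfs; the paths below assume the usual Fedora mount points:

```shell
# Which cgroups does this shell belong to?
cat /proc/self/cgroup
# Under cgroup v1, a value of -1 here means no CPU quota is imposed.
quota=/sys/fs/cgroup/cpu/cpu.cfs_quota_us
if [ -f "$quota" ]; then
    cat "$quota"
else
    echo "no v1 cpu quota file (different mount layout or cgroup v2)"
fi
```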
Thanks.
On Wed, 11 Mar 2015 20:56:37 +0100 Heinz Diehl htd+ml@fritha.org wrote:
Haven't looked into this for some time, but take a look into /usr/src/linux/kernel/sched/fair.c. (The CFS code is complex and difficult to understand, though - at least for me).
Took a quick look at this. Only ~8000 lines of well documented code. Yeah. Except, to understand that code, it is necessary to understand a lot about kernel context, and flow. Not to mention all the possible side effects a change here could cause. Because of the research I did with Gentoo experience, I'll assume that this code is working. It's many years old, and mature. Discretion is the better part of valor. ;-)
Btw, here is a good explanation of Linux SMP scheduling: http://tinyurl.com/o4nuaxr
And also take a look here: https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt http://ck.kolivas.org/patches/bfs/3.0/3.18/3.18-sched-bfs-460.patch (BFS is designed with latency in mind, not throughput).
Thanks.
On Wed, 11 Mar 2015 15:14:39 -0400 Kevin Cummings cummings@kjchome.homeip.net wrote:
We're thinking in terms of one machine with multiple cores here. What about an environment with multiple machines (each possibly with multiple cores). Now you have *many* more possibilities of where to run compiles with -j. Consider (for example) distcc. It can be configured to run build components on different machines (configurable per machine as to how many). So now, the -j 10 or -j 20 has more possibilities for distributing the load during the "make".
That makes sense. But because of what I found when looking at Gentoo about this, it should also work for a single machine with multiple cores. That seemed to be the experience of almost everyone there. And, boy, do they take this seriously.
On Tue, 10 Mar 2015 18:01:07 -0400 (EDT) ergodic gmml@embarqmail.com wrote:
Frankly I never check the loading, just use -j with no argument, but I always do other processes in parallel with no problem.
Then I think you must be having the same behavior as me. Because, as Ian found, if a compile grabs all the cpu resources, it is *noticeable*.
On 12.03.2015, stan wrote:
When I build firefox nightly with -j6, just at the end of the export phase, and before the compile starts, I see all 6 cores maxed out. Once the compile starts, it is back to a single core equivalent. The Gentoo users seemed to suggest that this was a flaw in the firefox build process, though, and not the fault of the scheduler.
There are some programs which encounter serious trouble when using more than one compile process. I don't remember all of them, but audacity is an example. So the "flaw" could be a feature...
On Wed, 11 Mar 2015 16:36:43 +0000 Ian Malone ibmalone@gmail.com wrote:
Since I can't reproduce this problem I'm not sure what's causing it. If you really are finding make subprocesses limited to 100% cpu across the lot then maybe have a look to see if there are any cgroups limits active https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm... may also be worth running on the stock fedora kernel to test that it's not something that you've turned on in your custom kernel.
I looked in the kernel documentation, and it seems that in order to be limited by cgroups, the application has to create a cgroup and attach to it. The cgroups are in a virtual file system under /sys/fs; they can be seen with 'cat /etc/mtab'. The one of interest in the case of compiling is cpu,cpuset. The kernel makefiles don't create or attach to a cgroup, so that would seem to eliminate it as a consideration.
There is also a systemd target for cgroups, but it is supposed to be only for services, which compiling wouldn't be.
And, there is also a selinux target for cgroups, but it doesn't seem to apply here.
So, cgroups seem like another dead end.
On 12.03.2015, stan wrote:
So, cgroups seem like another dead end.
It depends on the machine used and the number of processes. While cgroups limit more than just CPU power, you could try BFS (which does not use cgroups).
On Thu, 12 Mar 2015 23:20:27 +0100 Heinz Diehl htd+ml@fritha.org wrote:
It depends on the machine used and the number of processes. While cgroups limit more than just CPU power, you could try BFS (which does not use cgroups).
Thanks for this. After reading the bfs documentation, I was going to turn off cgroups in the kernel, but that seems to be disallowed in the config file. When I tried his patch for bfs, I got some rejected hunks, so I'll probably have to tweak it. But his rationale for his technique fits my use case just fine. I think the main load balancer is designed for a system being pounded by asynchronous requests i.e. a server, though it works just fine for regular desktop usage as well.
If it is cgroups, I should be able to set up my own cgroup, put it in cgroup_other so it is outside the purview of selinux, set usage limits to be all cpu cores for my group, and then attach my compile jobs to that cgroup in order to get access to all cpu available. I think it will be a lot easier to get the bfs patch working for 4.0.
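The v1 incantation for that would look roughly like the following (needs root and the libcgroup-tools package; "0-5" assumes the six cores here, and "build" is an arbitrary group name), left commented since it changes system state:

```shell
# sudo cgcreate -g cpuset:/build
# echo 0-5 | sudo tee /sys/fs/cgroup/cpuset/build/cpuset.cpus
# echo 0   | sudo tee /sys/fs/cgroup/cpuset/build/cpuset.mems   # cpuset needs mems set too
# cgexec -g cpuset:build make -j6
```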
If I come up with a solution, I'll post back in this thread, but otherwise I think I've whipped this horse enough.
On 09.03.2015, Martin Cigorraga wrote:
Just a minor clarification: when compiling, the -j flag should point to a unit above your available cores in order to fully utilize all of them.
Curious what would happen, I remembered this mail when compiling a new kernel today. A "nice -n 19 make -j" opened *hundreds* of cc incarnations, pushed the load to over 800 and seriously blocked the machine (an 8-core Xeon with 16 GB of RAM) within *seconds*!
On Mon, 16 Mar 2015 16:05:37 +0100 Heinz Diehl htd+ml@fritha.org wrote:
On 09.03.2015, Martin Cigorraga wrote:
Just a minor clarification: when compiling, the -j flag should point to a unit above your available cores in order to fully utilize all of them.
Curious what would happen, I remembered this mail when compiling a new kernel today. A "nice -n 19 make -j" opened *hundreds* of cc incarnations, pushed the load to over 800 and seriously blocked the machine (an 8-core Xeon with 16 GB of RAM) within *seconds*!
Lucky you!
Are you using F21? Which kernel?
So, are you using rpmbuild with the src.rpm package, or compiling directly from the source tree?
What happens if you use -j 4? I would think you should get somewhere between 3 and 4 cores. The recommendations I saw for fully using all cores ranged from cores+1 to cores*3; cores*1.5 was a popular one, to allow for I/O slowness.
When I tried -j, I saw all the jobs queue, but only one core was used.
Thanks for reporting back.
On Wed, 11 Mar 2015 20:56:37 +0100 Heinz Diehl htd+ml@fritha.org wrote:
http://ck.kolivas.org/patches/bfs/3.0/3.18/3.18-sched-bfs-460.patch (BFS is designed with latency in mind, not throughput).
With significant workarounds and patching in kernel/sched, I was able to compile a kernel without FAIR_CGROUP_SCHED active. But it wouldn't boot Fedora; it hung when reading the EDID. I think cgroups are integrated into Fedora, and so CFS with cgroup support is required. Thus, BFS will probably not work in Fedora, as it has no support for cgroups. That's in the documentation.
Given your experience of maxing all your cores out on a kernel compile, it shouldn't be necessary to do anything. As you've demonstrated, it 'just works'.
Oh well, I guess I can live with it.
On 17.03.2015, stan wrote:
Lucky you!
Lucky? The machine was fully unusable.
Are you using F21? Which kernel?
Yes, this machine is on F21.
[htd@chiara ~]$ uname -a
Linux chiara.fritha.org 3.19.2-rc1-bfq #1 SMP PREEMPT Mon Mar 16 16:16:07 CET 2015 x86_64 x86_64 x86_64 GNU/Linux
It's a custom build kernel with two minor patches and the BFQ scheduler.
So, are you using rpmbuild with the src.rpm package, or compiling directly from the source tree?
Directly from source.
What happens if you use -j 4? I would think you should get somewhere between 3 and 4 cores.
This is the top output when compiling a kernel with -j8 (4 cores/8 threads):
top - 18:47:16 up 9:07, 4 users, load average: 1.77, 0.39, 0.13
Tasks: 263 total, 10 running, 253 sleeping, 0 stopped, 0 zombie
%Cpu0 : 92.3 us, 5.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.7 hi, 0.3 si, 0.0 st
%Cpu1 : 92.7 us, 5.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
%Cpu2 : 92.3 us, 6.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.0 si, 0.0 st
%Cpu3 : 93.3 us, 5.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu4 : 92.3 us, 6.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu5 : 93.0 us, 5.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.0 si, 0.0 st
%Cpu6 : 92.7 us, 5.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
%Cpu7 : 92.3 us, 6.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
KiB Mem : 16342864 total, 12336128 free, 818288 used, 3188448 buff/cache
KiB Swap: 16777212 total, 16777212 free, 0 used. 15174260 avail Mem
19343 root 6 0 207228 81548 16184 R 24.3 0.5 0:00.73 cc1
19359 root 6 0 196268 71812 16320 R 21.6 0.4 0:00.65 cc1
19391 root 7 0 180292 55032 16188 R 13.0 0.3 0:00.39 cc1
19423 root 7 0 171012 40864 10800 R 6.3 0.3 0:00.19 cc1
19431 root 7 0 166552 37340 10888 R 5.6 0.2 0:00.17 cc1
19447 root 6 0 155452 25596 10788 R 2.3 0.2 0:00.07 cc1
19455 root 7 0 153500 24540 10648 R 2.3 0.2 0:00.07 cc1
19463 root 7 0 150112 19440 10500 R 1.3 0.1 0:00.04 cc1
When I tried -j, I saw all the jobs queue, but only one core was used.
Can you reproduce this behaviour with a bog standard 3.19.x?
On 17.03.2015, stan wrote:
I think cgroups are integrated into Fedora, and so the CFS with cgroup is required. Thus, BFS will probably not work in Fedora, as it has no support for cgroups.
CFS is not required at all, and neither are cgroups. Any kernel with the BFS patch applied will run just fine on Fedora. In fact, most of the time I run a kernel using both BFS and the BFQ I/O-scheduler, on Fedora and Arch.
On Tue, Mar 17, 2015 at 1:58 PM, Heinz Diehl htd+ml@fritha.org wrote:
On 17.03.2015, stan wrote:
I think cgroups are integrated into Fedora, and so the CFS with cgroup is required. Thus, BFS will probably not work in Fedora, as it has no support for cgroups.
CFS is not required at all, and neither are cgroups. Any kernel with the BFS patch applied will run just fine on Fedora. In fact, most of the time I run a kernel using both BFS and the BFQ I/O-scheduler, on Fedora and Arch.
"CONFIG_CGROUPS (it is OK to disable all controllers)" is listed under "REQUIREMENTS" in the systemd README.
I don't know what the difference is between not compiling cgroup support into the kernel and compiling it in but disabling all controllers, but it looks like your assumption that cgroup support isn't required is wrong.
On Tue, 17 Mar 2015 18:52:23 +0100 Heinz Diehl htd+ml@fritha.org wrote:
Lucky? The machine was fully unusable.
Yes, but you have full control of your machine.
Yes, this machine is on F21.
[htd@chiara ~]$ uname -a
Linux chiara.fritha.org 3.19.2-rc1-bfq #1 SMP PREEMPT Mon Mar 16 16:16:07 CET 2015 x86_64 x86_64 x86_64 GNU/Linux
It's a custom-built kernel with two minor patches and the BFQ scheduler.
I have those same options enabled.
$ uname -a
Linux localhost.localdomain 4.0.0-0.rc3.git2.1.20150313.fc21.x86_64 #1 SMP PREEMPT Fri Mar 13 18:11:04 MST 2015 x86_64 x86_64 x86_64 GNU/Linux
Directly from source.
I'm using the src.rpm from koji, http://koji.fedoraproject.org/koji/packageinfo?packageID=8 and rpmbuild to build the rpms, which I then install via yum -C.
This is the top output when compiling a kernel with -j8 (4 cores/8 threads):
[snip] Nice.
Can you reproduce this behaviour with a bog standard 3.19.x?
Because I'm using yum to manage kernels, I can't install a kernel older than the latest I have on my system. But when I was using the stock 3.18 series of kernels, this behavior was there.
I might be able to download the stock version from the koji address above, and use rpm --force to install it.
On Tue, 17 Mar 2015 18:52:23 +0100 Heinz Diehl htd+ml@fritha.org wrote:
Directly from source.
Would you be willing to give a recipe that you use? i.e. what steps do you perform to do this?
There is a vanilla source tree included in the fedora src.rpm for the kernel, so maybe I could use your steps on that vanilla kernel, instead of the fedora patched kernel.
On Tue, 17 Mar 2015 16:53:33 -0400 Tom H tomh0665@gmail.com wrote:
On Tue, Mar 17, 2015 at 1:58 PM, Heinz Diehl htd+ml@fritha.org wrote:
CFS is not required at all, and neither are cgroups. Any kernel with the BFS patch applied will run just fine on Fedora. In fact, most of the time I run a kernel using both BFS and the BFQ I/O-scheduler, on Fedora and Arch.
"CONFIG_CGROUPS (it is OK to disable all controllers)" is listed under "REQUIREMENTS" in the systemd README.
I don't know what the difference is between not compiling cgroup support into the kernel and compiling it in but disabling all controllers, but it looks like your assumption that cgroup support isn't required is wrong.
Well, here's a conundrum. Heinz uses it and it works, but the documentation says it shouldn't work. =><=
There must be some functionality disabled on Heinz' system. Gracefully disabled, or everything would crash.
Heinz, what does cat /proc/cgroups show?
On Tue, 17 Mar 2015 18:52:23 +0100 Heinz Diehl htd+ml@fritha.org wrote:
This is the top output when compiling a kernel with -j8 (4 cores/8 threads):
top - 18:47:16 up 9:07, 4 users, load average: 1.77, 0.39, 0.13
Tasks: 263 total, 10 running, 253 sleeping, 0 stopped, 0 zombie
%Cpu0 : 92.3 us, 5.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.7 hi, 0.3 si, 0.0 st
%Cpu1 : 92.7 us, 5.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
%Cpu2 : 92.3 us, 6.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.0 si, 0.0 st
%Cpu3 : 93.3 us, 5.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu4 : 92.3 us, 6.4 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.0 hi, 0.3 si, 0.0 st
%Cpu5 : 93.0 us, 5.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.0 si, 0.0 st
%Cpu6 : 92.7 us, 5.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
%Cpu7 : 92.3 us, 6.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.3 hi, 0.3 si, 0.0 st
KiB Mem : 16342864 total, 12336128 free, 818288 used, 3188448 buff/cache
KiB Swap: 16777212 total, 16777212 free, 0 used. 15174260 avail Mem
19343 root 6 0 207228 81548 16184 R 24.3 0.5 0:00.73 cc1
19359 root 6 0 196268 71812 16320 R 21.6 0.4 0:00.65 cc1
19391 root 7 0 180292 55032 16188 R 13.0 0.3 0:00.39 cc1
19423 root 7 0 171012 40864 10800 R 6.3 0.3 0:00.19 cc1
19431 root 7 0 166552 37340 10888 R 5.6 0.2 0:00.17 cc1
19447 root 6 0 155452 25596 10788 R 2.3 0.2 0:00.07 cc1
19455 root 7 0 153500 24540 10648 R 2.3 0.2 0:00.07 cc1
19463 root 7 0 150112 19440 10500 R 1.3 0.1 0:00.04 cc1
An afterthought. I notice that you are compiling the kernel as root. I do my build in the rpmbuild system as a user, so the compile is run as a user. Do you think that would matter?
On Wed, 18 Mar 2015 08:01:35 -0700 stan stanl-fedorauser@vfemail.net wrote:
Heinz, what does cat /proc/cgroups show?
Should have included this in my response:
$ cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	2	1	1
cpu	3	1	1
cpuacct	3	1	1
blkio	4	1	1
memory	5	1	1
devices	6	112	1
freezer	7	1	1
net_cls	8	1	1
perf_event	9	1	1
net_prio	8	1	1
hugetlb	10	1	1
On 18.03.2015, stan wrote:
Would you be willing to give a recipe that you use? i.e. what steps do you perform to do this?
1. Download a kernel tarball from kernel.org
2. Unpack it into /usr/src
3. Copy .config from the latest Fedora kernel into the kernel toplevel sourcedir (it is stored in /boot).
4. "make oldconfig"
5. "make"
6. "make modules_install"
7. "make install"
8. Reboot
Step 3 is for convenience, you can of course use your own .config. This kernel will live peacefully alongside your Fedora kernels.
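As a condensed sketch, the eight steps above might look like the function below. This is a hedged rendering, not Heinz's exact commands: the version number is a hypothetical example, the tarball is assumed to already be in /usr/src (step 1), and nothing runs until the function is called.

```shell
#!/bin/sh
# Sketch of the recipe above; KVER is a hypothetical version, and the
# tarball from kernel.org (step 1) is assumed to be in /usr/src already.
build_vanilla_kernel() {
    KVER=$1                                   # e.g. 3.19.2
    cd /usr/src || return 1
    tar xf "linux-${KVER}.tar.xz"             # step 2
    cd "linux-${KVER}" || return 1
    cp "/boot/config-$(uname -r)" .config     # step 3: start from a Fedora config
    make oldconfig                            # step 4
    make -j"$(getconf _NPROCESSORS_ONLN)"     # step 5
    make modules_install                      # step 6 (as root)
    make install                              # step 7 (as root); reboot afterwards
}
echo "defined build_vanilla_kernel"
```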
On 18.03.2015, stan wrote:
An afterthought. I notice that you are compiling the kernel as root. I do my build in the rpmbuild system as a user, so the compile is run as a user. Do you think that would matter?
For the kernel to get properly installed, you have to be root. To be precise, there is nothing wrong with compiling the kernel as a user, but "make modules_install" and "make install" have to be performed as root.
On 17.03.2015, Tom H wrote:
I don't know what the difference is between not compiling cgroup support into the kernel and compiling it in but disabling all controllers, but it looks like your assumption that cgroup support isn't required is wrong.
Thanks for pointing this out!
You are right, I was not clear enough when writing this. What I meant was that the CPU scheduler doesn't have to be cgroups aware, so it's fine to use e.g. the BFS.
On 18.03.2015, stan wrote:
Heinz, what does cat /proc/cgroups show?
[htd@chiara ~]$ cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	2	1	1
memory	3	1	1
devices	4	74	1
freezer	5	1	1
net_cls	6	1	1
blkio	7	1	1
bfqio	8	1	1
perf_event	9	1	1
net_prio	6	1	1
hugetlb	10	1	1
On Wed, 18 Mar 2015 18:19:54 +0100 Heinz Diehl htd+ml@fritha.org wrote:
- Download a kernel tarball from kernel.org
- Unpack it into /usr/src
- Copy .config from the latest Fedora kernel into the kernel toplevel sourcedir (it is stored in /boot).
- "make oldconfig"
- "make"
- "make modules_install"
- "make install"
- Reboot
Thanks.
Step 3. is for convenience, you can of course use your own .config.
Yeah, I usually do a make menuconfig after make oldconfig.
This kernel will live peacefully alongside of your Fedora kernels.
That's good. How do you remove the old kernels so that they don't pile up indefinitely? Manually? Or does this automatically replace the last version that was compiled and installed this way? That is, does it use a generic install directory, or one that is stamped with kernel version?
Here's my Fedora rpmbuild procedure:
I go to koji, the fedora central build repository for package maintainers, and download the src.rpm.
http://koji.fedoraproject.org/koji/packageinfo?packageID=8
I use rpm to install it into the ~/rpmbuild hierarchy as a user:
rpm -ivh kernel-4.0.0-0.rc3.git1.1.fc22.src.rpm
This requires that the rpm-build package be installed.
I then go to ~/rpmbuild/SPECS to unpack and patch it:
rpmbuild -bp kernel.spec
This puts the unpacked, patched source in, for example, ~/rpmbuild/BUILD/kernel-4.0-rc3.fc21/linux-4.0.0-0.rc3.git2.1.20150315.fc21.x86_64
The vanilla kernel is there as, for example, ~/rpmbuild/BUILD/kernel-4.0-rc3.fc21/vanilla-4.0-rc3-git2 but I'm not using that.
In the ~/rpmbuild/BUILD/kernel-4.0-rc3.fc21/linux-4.0.0-0.rc3.git2.1.20150315.fc21.x86_64 directory, I cp from /boot the config file for the last kernel I built into .config.
I then run make oldconfig to set the new kernel to that previous config.
I then run make menuconfig to do any tweaks or changes I want to the new kernel. I try to remove modules and options I don't need on my system to speed up compilation. I then save that as a new .config.
I edit the new .config, and put # x86_64 as the first line, so the rpmbuild program can find it, and move that to ~/rpmbuild/SOURCES/config-x86_64-generic
I move to ~/rpmbuild/SPECS and edit kernel.spec to add the date to the kernel name:
%define buildid .20150315
I then run the rpm build process as:
rpmbuild -bb --without debug --target=`uname -m` kernel.spec > build_output 2> error_output
This eventually produces the kernel rpms in ~/rpmbuild/RPMS/x86_64 which I then install using yum -C from within that directory.
I do all this in a virtual console, within screen, which has windows for each of the places I need to go. All except the install step are run as user.
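The procedure above condenses into something like the following function. This is a hedged sketch of the steps described, not a verbatim script: the interactive configuration steps (make oldconfig/menuconfig, editing .config and the spec file) are represented only as comments, and nothing executes until the function is called.

```shell
#!/bin/sh
# Condensed sketch of the rpmbuild procedure above; the src.rpm name is
# the one from the thread, and the manual config steps are comments only.
build_fedora_kernel_rpm() {
    rpm -ivh kernel-4.0.0-0.rc3.git1.1.fc22.src.rpm    # needs rpm-build installed
    cd ~/rpmbuild/SPECS || return 1
    rpmbuild -bp kernel.spec                            # unpack and patch the source
    # ...configure in ~/rpmbuild/BUILD/..., copy the edited .config to
    # ~/rpmbuild/SOURCES/config-x86_64-generic, set "%define buildid" in
    # kernel.spec, then build the binary rpms:
    rpmbuild -bb --without debug --target="$(uname -m)" kernel.spec \
        > build_output 2> error_output
    # resulting rpms land in ~/rpmbuild/RPMS/x86_64, installed via yum -C
}
echo "defined build_fedora_kernel_rpm"
```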
On Wed, 18 Mar 2015 18:19:54 +0100 Heinz Diehl htd+ml@fritha.org wrote:
- Download a kernel tarball from kernel.org
- Unpack it into /usr/src
- Copy .config from the latest Fedora kernel into the kernel toplevel sourcedir (it is stored in /boot).
- "make oldconfig"
- "make"
- "make modules_install"
- "make install"
- Reboot
Step 3 is for convenience, you can of course use your own .config. This kernel will live peacefully alongside your Fedora kernels.
I did this, and unlike for you, in step 5 it still used only a single thread. I even tried it as root to be sure, and it was a single thread for root also.
Poma has directed me to an expert who is looking into it.
Perhaps I should try next a kernel patched with bfs like yours are.
$ top
top - 13:24:55 up 3 days, 37 min, 51 users, load average: 0.77, 0.52, 0.27
Tasks: 333 total, 3 running, 330 sleeping, 0 stopped, 0 zombie
%Cpu(s): 17.2 us, 3.8 sy, 0.0 ni, 78.8 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16334484 total, 195200 free, 1649032 used, 14490252 buff/cache
KiB Swap: 28979304 total, 28978864 free, 440 used. 13515520 avail Mem

  PID USER  PR NI    VIRT    RES    SHR S %CPU %MEM    TIME+ COMMAND
10851 stan  20  0  157948  28904  10588 R  3.0  0.2  0:00.09 cc1
30672 stan  20  0  837028 136976  79184 S  2.0  0.8 12:20.34 konsole
    7 root  20  0       0      0      0 S  1.0  0.0  4:52.56 rcu_preempt
 2804 root  20  0   16876   3012   2704 S  1.0  0.0 41:41.74 audio-entropyd
29910 root  20  0  382100 136820  62968 S  1.0  0.8 25:46.96 Xorg.bin
10745 stan  20  0  146812   4528   3344 R  0.7  0.0  0:00.03 top
10794 stan  20  0    9808   2172   1748 S  0.7  0.0  0:00.02 gcc
10803 stan  20  0   10072   2324   1752 S  0.7  0.0  0:00.02 gcc
11472 stan  20  0 1219368 347856 100576 S  0.7  2.1  1:59.81 firefox
30134 stan  20  0  361860  18708  15944 S  0.7  0.1  4:08.11 clipit
    3 root  20  0       0      0      0 S  0.3  0.0  0:04.80 ksoftirqd/0
   27 root  rt  0       0      0      0 S  0.3  0.0  0:01.88 migration/5
10357 stan  20  0  110808   2896   2276 S  0.3  0.0  0:00.01 make
10720 stan  20  0    9912   2160   1748 S  0.3  0.0  0:00.01 gcc
10778 stan  20  0    9800   2084   1684 S  0.3  0.0  0:00.01 gcc
26253 root  20  0       0      0      0 S  0.3  0.0  0:00.70 kworker/0:0
30076 stan  20  0  545072  29472  24320 S  0.3  0.2  0:35.49 marco
30139 stan  20  0  667208  40308  19392 S  0.3  0.2  6:01.70 python
    1 root  20  0  192972  13472   5188 S  0.0  0.1  1:43.77 systemd
    2 root  20  0       0      0      0 S  0.0  0.0  0:00.31 kthreadd
    5 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 kworker/0:0H
    8 root  20  0       0      0      0 S  0.0  0.0  0:00.33 rcu_sched
    9 root  20  0       0      0      0 S  0.0  0.0  0:00.00 rcu_bh
   10 root  rt  0       0      0      0 S  0.0  0.0  0:01.55 migration/0
   11 root  rt  0       0      0      0 S  0.0  0.0  0:01.90 migration/1
   12 root  20  0       0      0      0 S  0.0  0.0  0:02.32 ksoftirqd/1
   14 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 kworker/1:0H
   15 root  rt  0       0      0      0 S  0.0  0.0  0:01.17 migration/2
   16 root  20  0       0      0      0 S  0.0  0.0  0:03.32 ksoftirqd/2
   18 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 kworker/2:0H
   19 root  rt  0       0      0      0 S  0.0  0.0  0:01.71 migration/3
   20 root  20  0       0      0      0 S  0.0  0.0  0:01.49 ksoftirqd/3
   22 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 kworker/3:0H
   23 root  rt  0       0      0      0 S  0.0  0.0  0:01.46 migration/4
   24 root  20  0       0      0      0 S  0.0  0.0  0:02.17 ksoftirqd/4
   26 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 kworker/4:0H
   28 root  20  0       0      0      0 S  0.0  0.0  0:02.38 ksoftirqd/5
   30 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 kworker/5:0H
   31 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 khelper
   32 root  20  0       0      0      0 S  0.0  0.0  0:00.00 kdevtmpfs
   33 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 netns
   34 root   0 -20       0      0      0 S  0.0  0.0  0:00.00 perf
On Wed, Mar 18, 2015 at 11:01 AM, stan stanl-fedorauser@vfemail.net wrote:
On Tue, 17 Mar 2015 16:53:33 -0400, Tom H tomh0665@gmail.com wrote:
"CONFIG_CGROUPS (it is OK to disable all controllers)" is listed under "REQUIREMENTS" in the systemd README.
I don't know what the difference is between not compiling cgroup support into the kernel and compiling it in but disabling all controllers, but it looks like your assumption that cgroup support isn't required is wrong.
Well, here's a conundrum. Heinz uses it and it works, but the documentation says it shouldn't work. =><=
Yes...
On Wed, Mar 18, 2015 at 1:22 PM, Heinz Diehl htd+ml@fritha.org wrote:
On 18.03.2015, stan wrote:
An afterthought. I notice that you are compiling the kernel as root. I do my build in the rpmbuild system as a user, so the compile is run as a user. Do you think that would matter?
For the kernel to get properly installed, you have to be root. To be precise, there is nothing wrong with compiling the kernel as a user, but "make modules_install" and "make install" have to be performed as root.
If I don't create an rpm, I run:
CONCURRENCY_LEVEL=$(getconf _NPROCESSORS_ONLN) make
sudo INSTALL_MOD_STRIP=1 make modules_install
sudo cp arch/x86/boot/bzImage /boot/vmlinuz-$(make kernelversion)
sudo cp System.map /boot/System.map-$(make kernelversion)
sudo cp .config /boot/config-$(make kernelversion)
You don't need to run "make" as root.
On Wed, Mar 18, 2015 at 1:26 PM, Heinz Diehl htd+ml@fritha.org wrote:
On 17.03.2015, Tom H wrote:
I don't know what the difference is between not compiling cgroup support into the kernel and compiling it in but disabling all controllers, but it looks like your assumption that cgroup support isn't required is wrong.
Thanks for pointing this out!
You're welcome. Although it's somewhat tangential to your problem.
I think that I've understood what the README snippet means:
# grep CONFIG_CGROUP /boot/config-4.0.0-rc4
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
So you're meant to set CONFIG_CGROUPS to "y" but not the others - if you don't want to have cgroups enabled.
On Wed, Mar 18, 2015 at 3:34 PM, stan stanl-fedorauser@vfemail.net wrote:
Here's my Fedora rpmbuild procedure:
I go to koji, the fedora central build repository for package maintainers, and download the src.rpm.
http://koji.fedoraproject.org/koji/packageinfo?packageID=8
I use rpm to install it into the ~/rpmbuild hierarchy as a user:
rpm -ivh kernel-4.0.0-0.rc3.git1.1.fc22.src.rpm
<snip>
You can also run (after doing the config and patching):
CONCURRENCY_LEVEL=$(getconf _NPROCESSORS_ONLN) INSTALL_MOD_STRIP=1 make rpm-pkg
sudo yum install ~/rpmbuild/RPMS/x86_64/kernel-4.0.0-rc4-1.x86_64.rpm
On 18.03.2015, stan wrote:
How do you remove the old kernels so that they don't pile up indefinitely? Manually?
Yes. Just delete the related files in /boot and the sourcetree in /usr/src.
Or does this automatically replace the last version that was compiled and installed this way?
No.
That is, does it use a generic install directory, or one that is stamped with kernel version?
The directories used are standardized, but only rpm or similar can guarantee a proper replacement technique. It also doesn't make sense to directly replace a kernel, because the new kernel may not boot.
Here's my Fedora rpmbuild procedure:
[....]
That's really complicated, baah :-)
On Wed, 18 Mar 2015 19:03:45 -0400 Tom H tomh0665@gmail.com wrote:
You can also run (after doing the config and patching):
CONCURRENCY_LEVEL=$(getconf _NPROCESSORS_ONLN) INSTALL_MOD_STRIP=1 make rpm-pkg
sudo yum install ~/rpmbuild/RPMS/x86_64/kernel-4.0.0-rc4-1.x86_64.rpm
Thanks. I'll try this.
On Thu, 19 Mar 2015 15:56:34 +0100 Heinz Diehl htd+ml@fritha.org wrote:
That's really complicated, baah :-)
It seems complicated, but with the screen template, it's almost habit. I don't even have to think about it.
I tried the bfs patch on rc4 of the 4.0 kernel, but it got these errors:
kernel/sched/bfs.c:4811:50: error: redefinition of ‘io_schedule’
 void __sched io_schedule(void)
              ^
In file included from include/linux/nmi.h:7:0,
                 from kernel/sched/bfs.c:33:
include/linux/sched.h:422:60: note: previous definition of ‘io_schedule’ was here
 static inline void io_schedule(void)
                    ^
kernel/sched/bfs.c: In function ‘sched_domain_debug_one’:
kernel/sched/bfs.c:5695:2: error: implicit declaration of function ‘cpulist_scnprintf’ [-Werror=implicit-function-declaration]
  cpulist_scnprintf(str, sizeof(str), sched_domain_span(sd));
  ^
cc1: some warnings being treated as errors
make[2]: *** [kernel/sched/bfs.o] Error 1
make[1]: *** [kernel/sched] Error 2
make: *** [kernel] Error 2
I'm currently running a compile on a vanilla 3.19 with the bfs and bfq patches applied. If it doesn't have these errors, I can see if I get the same behavior as you when compiling.
On Mon, 9 Mar 2015 12:03:39 -0700 stan stanl-fedorauser@vfemail.net wrote:
But, when I run a compile job with -j6, in order to allow all six cores to be used, it limits the total amount of usage to 100% of a *single* core.
Booting from a Knoppix live DVD works. All six cores are utilized during a kernel compile.
What doesn't work:
- Stock Fedora kernel 3.18.7 in F20.
- Vanilla kernel.org 3.19.2 kernel in F21.
- Patched kernel.org 3.19.0 kernel, bfs and bfq patched, in F21.
- Custom compiled Fedora 4.0 kernel in F21.
So, it is something in Fedora or my configuration of it. My F21 was a clean install, and I don't recall setting anything that would affect this, but it is possible. Maybe I'll try live media from another distro to see if it solves the problem. Or live media from Fedora.
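One hedged way to narrow that down before trying more live media: check whether the login shell (and thus its child compile jobs) is restricted by CPU affinity or by a cgroup CPU quota, since a CFS quota would produce exactly this "100% of one core in total" symptom. The /proc entries below are standard Linux; the cgroup path in the comment is an assumption, not something confirmed in the thread.

```shell
#!/bin/sh
# Diagnostic sketch: is this shell restricted by affinity or a cgroup?
grep Cpus_allowed_list /proc/self/status   # affinity; should list all cores
cat /proc/self/cgroup                      # cgroups the shell belongs to
# If a cpu controller path shows up above, a CFS quota there would cap
# total usage; -1 means no quota (the exact path is an assumption):
# cat /sys/fs/cgroup/cpu/<path>/cpu.cfs_quota_us
```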
Anyway, getting closer.