On Thu, Oct 31, 2013 at 8:36 AM, Prarit Bhargava <prarit(a)redhat.com> wrote:
> On 10/30/2013 07:32 PM, Josh Boyer wrote:
>> On Wed, Oct 30, 2013 at 7:25 PM, Prarit Bhargava <prarit(a)redhat.com> wrote:
>>> On 10/30/2013 02:10 PM, Simo Sorce wrote:
>>>> On Wed, 2013-10-30 at 10:51 -0700, David Strauss wrote:
>>>>> On Wed, Oct 30, 2013 at 10:09 AM, Josh Boyer wrote:
>>>>>> Massive machines with 4096 CPU cores, terabytes of DRAM, and
>>>>>> petabytes of storage, or more commodity-style hardware used in
>>>>>> heterogeneous environments, etc.
>>>>> The latter. We'd want a separate HPC group for 512+ core machines.
>>>> Or simply, sites that big can most probably manage their own kernel
>>>> builds, or seek commercial support.
>>> Why limit it so low? If we're thinking about going big, well, GO BIG.
>>> Users of Fedora want these systems supported out-of-the-box so they can
>>> get an idea of whether their systems work. Stopping at 512 just seems
>>> too low these days. We're talking about saving a very small amount of
>>> memory by not going to 4096 CPUs.
>> Remind me how much again? IIRC, it was around 2MB of additional runtime
>> overhead to set NR_CPUS that high, right? That's very small on
>> servers, not so small in the cloud.
> Right, I think that was about it... it may be a little less than that. I
> wonder, however, how many people are actually using a bleeding-edge Fedora
> kernel for memory-critical cloud purposes? I have a feeling that it's on
> the same order of magnitude as the number of people booting Fedora on
> systems with greater than 512 CPUs.
Well, that's why we're talking about this again. With the 3-product
push Fedora is doing, we _want_ people using Fedora in the cloud and
on desktops (Workstation) and on servers. So I'm asking all 3 of
those products what their target use cases are.
(As for memory-critical cloud... I have no idea what that is, to be
honest. All I hear from the cloud people is "smaller is better".
Mostly that's image size, not memory overhead, but I can imagine they
want that limited as well.)
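
For reference on the ~2MB figure above: NR_CPUS fixes the size of every
cpumask bitmap in the kernel, which is one place the static overhead
comes from. The following is a minimal standalone C sketch of that
scaling only, not the kernel's actual accounting; per-cpu areas and
other per-possible-CPU data add more on top.

#include <stdio.h>

/*
 * Illustrative only: NR_CPUS sizes every cpumask bitmap, so raising
 * the limit grows each statically allocated mask. This sketch covers
 * the masks alone; the kernel's real overhead also includes per-cpu
 * allocations and other arrays sized by the CPU limit.
 */
#define BITS_PER_LONG    (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define MASK_BYTES(nr)   (BITS_TO_LONGS(nr) * sizeof(unsigned long))

int main(void)
{
    /* 64 bytes per mask at 512 CPUs vs. 512 bytes at 4096 CPUs. */
    printf("cpumask at NR_CPUS=512:  %zu bytes\n", MASK_BYTES(512));
    printf("cpumask at NR_CPUS=4096: %zu bytes\n", MASK_BYTES(4096));
    return 0;
}

With thousands of such masks and per-CPU structures across a running
kernel, that eight-fold growth per mask is how the total lands in the
low megabytes.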