EDAC error

Roger Heflin rogerheflin at gmail.com
Tue Mar 25 23:03:21 UTC 2008


Bill Davidsen wrote:
> Roger Heflin wrote:
>> Ric Moore wrote:
>>> On Sat, 2008-03-22 at 10:03 -0500, Roger Heflin wrote:
>>>> Ric Moore wrote:
>>>>> On Thu, 2008-03-20 at 21:58 -0500, Roger Heflin wrote:
>>>>>> Brent Snow, Mr. wrote:
>>>>>>> Hi All,
>>>>>>>
>>>>>>> I am having a problem with a new Dell PowerEdge 1900 Server
>>>>>>> running Fedora 8.
>>>>>>>
>>>>>>> The system setup is as follows:
>>>>>>>
>>>>>>> 2 - Xeon E5310 (Quad-Core 1.6 GHz) processors
>>>>>>>
>>>>>>> 16 GB of RAM, 1 SATA 80 GB HDD.
>>>>> Holy Smokes! 2 quad cores? That's 8 cores total(?) and 16 GIGS of 
>>>>> Ram??
>>>>> My Gawd, not only am I jealous as all hell, I'm wondering what kinda
>>>>> kernel are you running?? Any sort of stock kernel would roll over and
>>>>> join the Choir Eternal. 
>>>> Actually, fairly normal kernels work just fine on the large boxes; I 
>>>> have run stock FC6 kernels on up to 8 CPUs/16 cores and up to 64 GB of 
>>>> RAM with no issues.
>>>>
>>>>> Wouldn't you be running some sort of mini clustering setup?? Set up
>>>>> right, it should really blow serious coal. Your problem might lie in
>>>>> that direction. You might have training wheels on a Dodge Hemi. With a
>>>>> machine like that, I could almost do without eating! <huge drooling 
>>>>> grins> Ric
>>>>>  
>>>> Clustering setups are only needed when you have more than one machine. 
>>>> Having lots of CPUs in a single machine is much easier than 
>>>> clustering, as you don't have to worry about the networking, and the 
>>>> memory can be shared easily between the CPUs.
>>>
>>> Huh, I wonder then why he's having problems. In the -OLD- days he'd be
>>> rolling a new kernel. Is the stock kernel multi-cpu aware or does he
>>> need a more specialized kernel, or is it the kernel at all?? That's
>>> where I would be looking, fer sure. God, I want one like he's got.
>>> <scratching strong itch> I always stay a couple of years behind. :) Ric
>>>
>>
>> Hyperthreading has been around long enough, and dual core has also been 
>> around long enough, that pretty much everyone ships with SMP on *NOW*. 
>> And you are correct, several years ago SMP was off by default on a 
>> number of distributions, so you almost always had to compile your own.
>>
> What you say is mostly correct, although some distributions did ship an 
> SMP kernel which you could boot. The one factor you didn't mention is 
> that some changes made in early 2.6 reduced the performance penalty for 
> running an SMP kernel on a uni. I don't remember exactly which changes, 
> but there's little justification for bothering with a uni kernel now, 
> since if you're after the last drop of performance you probably run SMP 
> anyway.
> 
> The one exception might be someone on old grotty hardware, true uni and 
> slow to boot, where a percent or two would seem to matter.
> 
Bill,

I started using dual-socket machines when 2.2 was unstable and 2.0 was *stable*. 
Very few shipped anything SMP back then; it was a roll-your-own world, and this 
was pre-hyperthreading (HT started with the 1.8 GHz Xeons; the 1.26 and 1.44 P3 
Xeons did not have it).

But every machine we had was SMP, simply because the second CPU typically gave us 
a 50-70% speed increase on the application, while picking an SMP box over a uni 
box cost a lot less than 50% more, and that comparison already counts the cost of 
running an SMP kernel rather than a uni kernel on the machine.
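
For anyone weighing the same tradeoff, here is a minimal sketch of that 
price/performance arithmetic in Python. The box prices below are hypothetical 
placeholders; only the 50-70% speedup range comes from my numbers above.

    # Illustrative throughput-per-dollar comparison: uni box vs. SMP (dual-CPU) box.
    # Prices are hypothetical; the 1.5x-1.7x speedup range is from the post above.
    uni_price = 2000.0   # assumed cost of the single-CPU box
    smp_price = 2800.0   # assumed cost of the dual-CPU box (40% more)

    for speedup in (1.5, 1.7):
        uni_perf_per_dollar = 1.0 / uni_price
        smp_perf_per_dollar = speedup / smp_price
        ratio = smp_perf_per_dollar / uni_perf_per_dollar
        print(f"{speedup:.1f}x speedup: SMP box gives {ratio:.2f}x the work per dollar")

With those assumed prices the SMP box comes out ahead on work per dollar at 
either end of the speedup range, which is why we never bothered with uni boxes.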

                                     Roger



