[Beowulf] recommendations for cluster upgrades

Gus Correa gus at ldeo.columbia.edu
Tue May 12 13:37:23 PDT 2009


Rahul Nabar wrote:
> I'm currently shopping around for a cluster expansion and looking
> for options. Anybody out here who's bought new hardware in the recent
> past? Any suggestions? Any horror stories?
> 
> We've been using Dell SC1435's with Quad-Core AMD 2354 Opterons @
> 2.2GHz. 16 Gig RAM.
> 

Hi Rahul, list

Some time ago I asked this question on this list (and on the Rocks
list as well):  "How much memory per core is right?"

I got a variety of answers,
with 2GB/core being the bottom line (and also the most popular value).
A number of people recommended more, up to 128GB/node, depending
on the application.

We bought 16GB/node = 2GB/core, and so far so good: all the programs
we tried stayed well below the
memory watermark (say 80% of total physical memory).
Running "top" on a compute node of your current cluster, while it is
running typical applications, may give the best insight.
If memory use is close to the watermark, buy more than you have now.
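
If you want something more scriptable than eyeballing "top", here is a
minimal sketch, assuming a Linux node and its /proc/meminfo interface,
that reports how close the node is to that 80% watermark (the
threshold and the buffers/cache accounting are just my choices):

#!/usr/bin/env python
# Minimal sketch: check memory use against the 80% watermark by
# reading /proc/meminfo (Linux).  Run on a compute node while your
# typical jobs are executing.  The 80% threshold is my own choice.

def meminfo_kb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values in kB
    return info

m = meminfo_kb()
total = m["MemTotal"]
free = m["MemFree"] + m.get("Buffers", 0) + m.get("Cached", 0)
used = total - free
print("Used %.1f of %.1f GB (%.0f%% of physical memory)"
      % (used / 1048576.0, total / 1048576.0, 100.0 * used / total))
if used > 0.8 * total:
    print("Above the 80% watermark -- consider more memory per node.")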


> Any new-cutting edge stuff I ought to be asking my vendors to put into
> the quotes? I already have gigabit bonded backbones and don't think we
> have the financial muscle to upgrade to Myrinet or Infiniband yet. In
> the interest of homogeneity and not wanting to have dual trees of
> executables I might be tempted to stick with AMDs unless there is
> compelling temptation otherwise.
> 
> I am already looking at the CPU benchmarks on the Intel/AMD websites
> but they can sometimes be misleading / misrepresentative, beyond the
> obvious glaring conflict of interest. I'd rather trust first-hand
> anecdotal evidence from you guys actually administering them for
> scientific applications. Unless there is a good third party, relevant
> database?
> 
> For some reason the top500 sublists seem skewed to prefer the Intel
> Xeons. 

True, probably reflecting the current market share.

> Why so few Opterons or any other AMD hardware? Just curious if
> this is driven by technological inferiority or only a marketing
> effect. My vendor seems to be trying to steer me towards an Intel
> Nehalem or Clovertown for whatever reasons good or bad.
> 


Climate/Atmosphere/Oceans problems (the bulk of our research here)
are computational fluid dynamics (CFD) problems.
The codes solve time-dependent PDEs with
domain decomposition techniques,
and march the solution in time on a big grid/mesh (typically 3D),
with lots of large arrays being read and written all the time.
I.e., the codes are memory-intensive.
Hence, they require good memory bandwidth.
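
As a rough illustration of what "memory-intensive" means here (a toy
Python/NumPy sketch, not our actual models): one explicit time step
sweeps the whole 3D field but does only a handful of flops per grid
point, so the processor mostly waits on memory.

# Toy sketch of the access pattern, not production code.
import numpy as np

n = 128                      # grid points per dimension (toy size)
u = np.random.rand(n, n, n)  # current field
unew = u.copy()              # next time level

for step in range(10):       # march the solution in time
    # 7-point stencil: each update reads 7 values and writes 1,
    # i.e. very little arithmetic per byte moved through memory.
    unew[1:-1, 1:-1, 1:-1] = (
        u[1:-1, 1:-1, 1:-1]
        + 0.1 * (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
                 + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
                 + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]
                 - 6.0 * u[1:-1, 1:-1, 1:-1]))
    u, unew = unew, u        # swap buffers instead of copying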

We tested some codes that we use on dual-dual and dual-quad,
single-node computers with Opteron and Xeon processors.
Some were computers that we have here; one test was done thanks to a
kind vendor at his shop.
You only need one node to test memory bandwidth and the way the
code scales on the *processor*.
(How it scales on the network requires a cluster, of course.)
In all cases we tried to compare similar generations, and whenever
possible similar clock speeds (we didn't always have that luxury),
but always with particular interest in scaling.
In those comparisons the Opterons beat the Xeons.

Typically the walltimes flattened out on the Xeons after the
number of cores in use exceeded half (or half+1) of the total,
whereas on the Opterons the (almost) linear scaling continued up to
the full core count, with walltimes always decreasing.

I don't know if your codes (computational chemistry, right?)
are memory-intensive.
If they are, don't be impressed by a single short walltime.
Look for scaling.
You don't want to have to keep half of the cores in your cluster idle
because of memory bandwidth saturation.

The tests I mentioned were up to Harpertown, 5400 series,
which is actually a notch
above the Clovertown 5300 series that you mentioned, IIRR.
Haven't tested Nehalem.
Haven't seen any benchmark of CFD-type codes on Nehalem either.
I remember there were some synthetic benchmark numbers,
maybe posted here on this list.
Maybe somebody tested computational chemistry codes, or CFD,
and may post results here.

We have Opteron Shanghai, and it works as well as or better than the
previous Opterons we used (which were older than your Barcelonas).

> Ultimately of course, it might be best if I just got to benchmark my
> very own application on these CPUs before I bought them. But that's
> just wishing I guess!
> 

Not really wishful thinking.
Send your codes to a potential vendor, ask them to run on a single
node, and report back the walltimes for, say, 1, 2, 4, and 8 cores
working in parallel.
Big vendors may not do it, smaller shops probably will,
or they will offer you access to a test machine for a
"do-it-yourself".
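
Once you have those numbers, a few lines of Python turn them into
speedup and parallel efficiency (the walltimes below are made up, just
to show the arithmetic); the flattening I described above shows up as
efficiency falling off at the higher core counts:

# Sketch with hypothetical walltimes (seconds) at 1, 2, 4, 8 cores.
walltimes = {1: 1000.0, 2: 520.0, 4: 300.0, 8: 280.0}

t1 = walltimes[1]
for cores in sorted(walltimes):
    speedup = t1 / walltimes[cores]
    print("%d cores: speedup %.2f, efficiency %.2f"
          % (cores, speedup, speedup / cores))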

Also, ask for details about memory configurations, and which ones
are optimal.
We recently saw a discussion about this here on the list; search the
archives.
Somebody posted this useful link then:

http://en.community.dell.com/blogs/hpcc/archive/2009/04/08/nehalem-and-memory-configurations.aspx

My two cents,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------


