[Beowulf] How much RAM per core is right?

Mark Hahn hahn at mcmaster.ca
Fri Jul 18 09:15:37 PDT 2008


> How much memory per core/processor is right for a Beowulf cluster node?

easy: depends on your app.

> I plan to buy 8GB per node on a dual-processor quad-core machine (1GB per 
> core),

that's reasonably minimal.

> computations we do, and shrunk it down somewhat due to budget constraints.

memory is relatively cheap right now.

> Actually not long ago RAM-per-core ratio used to be 512MB per core (which 
> were physical CPUs back then),

there's a large population of jobs which have trivial memory footprints
(just a few MB).  but even 5 years ago, lots of non-obscure computations 
needed more like 2 GB/core.
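
if you're not sure where your own codes fall, the simplest thing is to
measure: run one job under a wrapper and look at peak RSS.  a minimal
python sketch, assuming linux and a kernel that actually fills in
ru_maxrss (reported in KB there); script name and command are whatever
you choose:

#!/usr/bin/env python
# minimal sketch: run a job to completion and report the peak resident set
# size of its child processes, to see what mem/core the workload really uses.
# assumes linux, and a kernel that fills in ru_maxrss (in KB).
import resource, subprocess, sys

def peak_rss_mb(cmd):
    # run the command, then ask the kernel for accumulated child usage
    subprocess.call(cmd)
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    return usage.ru_maxrss / 1024.0

if __name__ == "__main__":
    print("peak RSS: %.1f MB" % peak_rss_mb(sys.argv[1:]))

e.g. "./peakrss.py ./my_solver input.dat" (names hypothetical, of course).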

> and it seems to me some non-PC HPC machines (IBM BlueGene, SiCortex machines, 
> etc) still use
> the 512MB RAM-per-core ratio.

they're outliers: those two specific machines are what I'd call
"many-mini-core" machines, designed primarily for power efficiency and
assuming the workload will be specialized, highly-tuned, massively
parallel codes.  note, for instance, that they have exceptionally slow
processors and relatively fast/elaborate network fabrics.

> For PC-based cluster compute nodes, is 1GB per core right?
> Is it too much?
> Is it too little?

2G/core with dual-socket quad-core machines seems right to me.
4G/core is definitely needed by some people, but many fewer (typical
power-law falloff), and it's quite a lot more expensive.

ultimately, it also depends on the dimm socket config of your hardware.
for instance, if you go with 4-socket amd boxes (more expensive, of course),
you get more dimm slots per core, so you can reach a higher mem/core ratio
with lower-density (cheaper) dimms.  the upcoming nehalem chips may permit
this as well.
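
to make the slots-vs-density arithmetic concrete, here's a throwaway
python sketch comparing GB/core for a few hypothetical configs (slot
counts, dimm sizes and the 32-slot 4-socket box are illustrative, not
recommendations):

# back-of-envelope: GB/core for some hypothetical node configurations.
configs = [
    # (description,                 sockets, cores/socket, dimm slots, GB/dimm)
    ("2-socket quad-core,  8x1GB",  2, 4,  8, 1),
    ("2-socket quad-core,  8x2GB",  2, 4,  8, 2),
    ("2-socket quad-core,  8x4GB",  2, 4,  8, 4),
    ("4-socket quad-core, 32x2GB",  4, 4, 32, 2),
]

for name, sockets, cores, slots, gb in configs:
    total = slots * gb
    per_core = float(total) / (sockets * cores)
    print("%-28s %3d GB/node  %.1f GB/core" % (name, total, per_core))

the point being that 4 GB/core either means pricey high-density dimms on a
2-socket board, or commodity dimms on a board with more slots per core.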

> "Big is better" is really the best, and minimalism is just an excuse for the 
> cheap and the poor?

no - too much memory will definitely hurt, and not just the pocketbook.
normally, a memory controller can run at full speed with only a limited
number of dimms per channel (actually, sides of dimms, since dual-sided
dimms usually count as 2 electrical loads).
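
a crude illustration of the load/speed tradeoff (the derating table below
is made up -- the real limits come from your chipset and board manuals):

# illustration only: controllers typically derate dimm clock as electrical
# loads per channel grow.  speeds below are hypothetical placeholder values.
speed_by_loads = {1: 800, 2: 800, 3: 667, 4: 533}   # MHz, made-up numbers

def channel_speed(dimms_per_channel, dual_sided=True):
    loads = dimms_per_channel * (2 if dual_sided else 1)
    return speed_by_loads.get(loads, 400)   # assume some slow fallback setting

for n in range(1, 5):
    print("%d dual-sided dimms/channel -> ~%d MHz" % (n, channel_speed(n)))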

> Is there anybody out there using 64 or 128GB per node?

sure - we expect to buy some fat nodes soon, but the mainstream nodes 
will probably be 2G/core (16G/node).


