[Beowulf] BIG 'ram' using SSDs - was single machine with 500 GB of RAM
diep at xs4all.nl
Wed Jan 9 05:27:26 PST 2013
A rather interesting thought for building a single box dirt cheap
with a huge 'RAM' is to have one fast RAID array of SSDs function as
the 'RAM'. You can get a bandwidth of 2 GB/s to the SSD 'RAM' pretty
easily, and for some calculations that bandwidth might be enough,
given that you can then parallelize across a few cores.
The random-access latency to such an SSD RAID might be 70
microseconds on average, maybe even lower with specific SSDs. Yet
every SSD in fact can serve a bunch of parallel reads simultaneously
- though possibly not over SATA, as that is a sequential protocol.
Still, the SSDs may serve a bunch of reads "at once" quickly in such
a case: bigger throughput at the same latency.
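The point about keeping several reads outstanding at once can be
sketched like this - a minimal Python illustration (my choice of
language; the temp file stands in for the SSD-backed data set), using
os.pread so that threads share one file descriptor without any seek
or global lock between them:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096     # read size per request
NBLOCKS = 256    # number of blocks to fetch

# A temp file stands in for the SSD-backed data set (hypothetical).
payload = os.urandom(BLOCK * NBLOCKS)
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(payload)
tmp.flush()

fd = os.open(tmp.name, os.O_RDONLY)

def fetch(i):
    # os.pread carries its own offset, so all threads share one file
    # descriptor with no shared seek position between them -- several
    # requests can be outstanding at the device simultaneously.
    return os.pread(fd, BLOCK, i * BLOCK)

# 16 worker threads keep up to 16 reads in flight at once.
with ThreadPoolExecutor(max_workers=16) as pool:
    blocks = list(pool.map(fetch, range(NBLOCKS)))

os.close(fd)
tmp.close()
os.unlink(tmp.name)
print(len(blocks))
```

On a real SSD RAID the wins come from the device reordering and
overlapping those in-flight requests; with a single synchronous
reader you only ever see the 70-microsecond latency serialized.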
Some calculations that want a few terabytes or more, can live with
the small bandwidth, and yet speed up big time from having a bigger
"RAM" - those might then be really cheap to run on such an SSD RAID
setup.
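One simple way a calculation could use such an SSD array as a bigger
"RAM" is to mmap a large file on it, so the program addresses the
SSDs like ordinary memory while the page cache keeps hot pages in
real RAM. A minimal sketch (again Python, my choice; a sparse temp
file stands in for a multi-terabyte file on the SSD RAID):

```python
import mmap
import os
import tempfile

GB = 1 << 30
SIZE = 1 * GB  # on a real box this would be a few TB on the SSD array

# Hypothetical backing store on the SSD RAID; a temp file stands in.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.truncate(SIZE)          # sparse file: no blocks allocated yet

# Map the file into the address space; loads and stores to the map
# become page-sized reads/writes against the SSDs.
mem = mmap.mmap(tmp.fileno(), SIZE)
mem[123_456_789:123_456_797] = b"8 bytes!"
roundtrip = bytes(mem[123_456_789:123_456_797])

mem.close()
tmp.close()
os.unlink(tmp.name)
print(roundtrip)
```

The attraction is that the calculation needs no explicit I/O code at
all; the trade-off is that every cold page fault costs the full SSD
latency.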
One might need to optimize which file system gets used in such a
case, and the way files are accessed, as one probably wants to avoid
global locks that prevent several requests from being pending to the
SSDs simultaneously.
On some of the 4-socket motherboards I saw that different SATA
connectors get hosted on different physical sockets, so that might
add another factor of 2 on top of the above latency. It will require
a clever file-system setup, or maybe just raw writes/reads to the
disks.
On Jan 9, 2013, at 2:12 PM, Joe Landman wrote:
> On 01/09/2013 07:42 AM, Jörg Saßmannshausen wrote:
>> Dear all,
>> Happy New Year!
>> I was wondering whether people on the list here have some first-hand
>> experiences with this. I have been asked to purchase a single machine
>> with around 500 GB of RAM. We would not need more than 8 cores here.
>> The job simply needs that much memory (and even then it runs for 14
>> days).
> We've built/delivered/supported machines with 1+ TB before. They're
> not too uncommon these days, and you have a number of choices w.r.t. them.
> If you can share more about the specific nature of the calculation,
> you'd likely get better recommendations. That is, large memory,
> light/no threading could be a very large matlab job (we've seen this)
> among other things. This would impact CPU choice as well as RAM speed.
> And this gets to a more design focused discussion as well. You can
> build a single machine with this much ram, or aggregate multiple
> machines with vSMP from ScaleMP and use somewhat less expensive RAM.
>> Now, with that amount of memory used by a single core, I would have
>> thought that I need a fast memory interconnect, i.e. a high memory
>> bandwidth. I was thinking of getting an Intel Sandybridge CPU (maybe
>> a E5-2650) for that machine and get a motherboard which can cope with
>> that amount of memory. Does anybody happen to have some
>> recommendations here or knows of
> Don't design the system too early without a better discussion of the
> process. Apart from being memory bound, are you CPU bound? IO bound?
> Network bound?
>> Also, in a related problem, how would I set the kernel.shmmni and
>> kernel.shmmax values so I am not running out of memory handles here?
>> I am still confused by that.
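For the sysctl question above, a common rule of thumb is to let a
single SysV shared-memory segment be as large as physical RAM and
size the system-wide total to match. The arithmetic for a 500 GB box
can be sketched like this - a back-of-the-envelope calculation, not
official guidance, and the 4096-byte page size is an assumption
(check `getconf PAGE_SIZE` on the target machine):

```python
# kernel.shmmax: largest single shared-memory segment, in bytes.
# kernel.shmall: total shared memory allowed system-wide, in pages.
# kernel.shmmni: number of segment identifiers ("handles") -- this is
#                what runs out when shmmni is set too small.

RAM_BYTES = 500 * 1024**3        # the 500 GB machine
PAGE_SIZE = 4096                 # assumed; verify on the target box

shmmax = RAM_BYTES               # allow one segment as large as RAM
shmall = RAM_BYTES // PAGE_SIZE  # total shared memory, in pages
shmmni = 4096                    # kernel default; raise if handles run out

print(f"kernel.shmmax = {shmmax}")
print(f"kernel.shmall = {shmall}")
print(f"kernel.shmmni = {shmmni}")
```

The printed lines go into /etc/sysctl.conf (or are applied with
`sysctl -w`); note shmmax is in bytes while shmall is in pages, which
is the usual source of confusion.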
>> Any kind of advice here is much appreciated. Given it is an expensive
>> piece of hardware we want to purchase, I want to get it right.
>> All the best from a grey London
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman at scalableinformatics.com
> web : http://scalableinformatics.com
> phone: +1 734 786 8423 x121
> fax : +1 866 888 3112
> cell : +1 734 612 4615
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin