[Beowulf] SSD caching for parallel filesystems

Vincent Diepeveen diep at xs4all.nl
Mon Feb 11 15:26:21 PST 2013


You make a joke out of it,

Yet SSDs you never buy in huge quantities, whereas any self
respecting organisation or company practicing HPC buys hard drive
storage in bulk, so they *can* get every single hard drive for the
prices quoted, which is between $10 and $20 a terabyte right now,
depending upon the quality, reliability, bandwidth and manufacturer
of the drive you buy.

If such a hard drive RAID doesn't deliver enough bandwidth, forcing
you to also use SSDs, you're doing something completely wrong as an
HPC organisation.

The RAID controllers also sit on PCIe, and I bet a single PCIe
controller will deliver about 3 GB/s of usable user data to your
cluster no matter how many hard drives or SSDs you put behind it in
parallel.

Though it would be fun if someone tested whether you can get a
larger sustained bandwidth than that.
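
A minimal sketch of such a test (nobody's actual benchmark): time
sustained sequential reads from the RAID volume, using a hypothetical
test file that is much larger than RAM so the page cache doesn't
flatter the result:

    #!/usr/bin/env python3
    # Time sustained sequential reads from a RAID volume.
    import time

    PATH = "/mnt/raid/testfile"   # hypothetical: pre-created, much larger than RAM
    BLOCK = 8 * 1024 * 1024       # 8 MiB per read

    total = 0
    start = time.time()
    with open(PATH, "rb", buffering=0) as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.time() - start
    print("read %.1f GB in %.1f s: %.2f GB/s"
          % (total / 1e9, elapsed, total / 1e9 / elapsed))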

What we know for sure is that the SSD array is going to be really  
expensive.

In short, SSDs are only interesting for latency and for caching.

Considering the techniques used to produce SSDs, they will never be
cheap, of course.

Currently the cheapest per gigabyte I can find on the Dutch site
tweakers.net is 0.40 euro a gigabyte, or 400.44 euros a terabyte.
It's from OCZ and it's PCIe.
Note that this price includes 21% VAT/sales tax.

As a normal home user, paying full price for hard drives unlike HPC
centers, I had a RAID array of 1.2 TB in January 2006, also at a
cost of 400 euros.
We're now exactly 7 years later, which is an eternity in IT, and I'm
pretty sure that if you buy a lot of these 1 TB OCZ disks, which no
one will, they won't be a factor of 4 cheaper, as is the case with
hard drives.
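
For what it's worth, the arithmetic behind those two figures, using
only the numbers already quoted above:

    # Today's cheapest SSD, ex-VAT, versus my 2006 home hard drive RAID.
    vat = 0.21
    ssd_per_tb = 400.44 / (1 + vat)   # euro/TB ex-VAT, OCZ PCIe
    hdd_2006   = 400.0 / 1.2          # euro/TB, 1.2 TB array for 400 euros
    print("SSD today:   %.0f euro/TB" % ssd_per_tb)   # ~331
    print("HDD in 2006: %.0f euro/TB" % hdd_2006)     # ~333

So per terabyte, today's cheapest SSD costs roughly what retail hard
drive storage cost 7 years ago.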

Now secondly, if you put in 2 of those drives, how do you intend to
get more bandwidth out of them than from a single PCIe RAID card
with a few dozen hard drives?

In most benchmarks I saw where 2 PCIe cards inside one machine
deliver I/O, they actually deliver less bandwidth in total than a
single card. It would be interesting to know why
that is, and whether it was just an 'installation' or 'tuning'
mistake by the testers in question. Yet with current OSes I don't
doubt that this is the default behaviour.
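
A sketch of how one could check that, assuming each card exposes its
own mounted volume (both paths are hypothetical): read both in
parallel and compare the aggregate against a single-volume run:

    #!/usr/bin/env python3
    # Aggregate sequential-read bandwidth across two volumes in parallel.
    import threading, time

    PATHS = ["/mnt/raid0/testfile", "/mnt/raid1/testfile"]  # hypothetical
    BLOCK = 8 * 1024 * 1024
    totals = [0] * len(PATHS)

    def reader(i):
        # The GIL is released during file reads, so the threads overlap.
        with open(PATHS[i], "rb", buffering=0) as f:
            while True:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                totals[i] += len(chunk)

    start = time.time()
    threads = [threading.Thread(target=reader, args=(i,))
               for i in range(len(PATHS))]
    for t in threads: t.start()
    for t in threads: t.join()
    print("aggregate: %.2f GB/s" % (sum(totals) / 1e9 / (time.time() - start)))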

If on the other hand you put 1 PCIe card in a file server machine
with SATA SSDs in RAID, versus a RAID controller with 24+ hard
drives, I don't really see the advantage of
the SATA SSD array, except that you'd need a guard in front of the
machine if I were standing in your home, as it's going to be pretty
expensive.

For a 16 TB (raw) RAID array of SSDs you'll need pretty expensive
SSDs.

The cheapest 1 TB SSD I can find here that has SATA is 2139 euros.

If you have 16 PCIe slots available so you could use 16 of the
cheaper OCZ PCIe drives instead, let me know.

So that's going to be a RAID array of 2k * 16 = 32k+ euros, just for
16 SATA SSDs of 1 TB each.

Even home users building a 24 TB SATA array are going to pay a lot
less; say they use 1 TB drives as well.

That array is also going to deliver 3 GB/s, and at a very small
fraction of the price.
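
Put side by side, with the SSD price from above and an assumed
retail price of roughly 80 euros per 1 TB hard drive (my guess, not
a quote):

    # 16 x 1 TB SATA SSD versus 24 x 1 TB SATA hard drive.
    ssd_array = 16 * 2139   # euros, SSD price quoted above
    hdd_array = 24 * 80     # euros, assumed ~80 euros retail per 1 TB drive
    print("SSD array: %5d euros" % ssd_array)            # 34224
    print("HDD array: %5d euros" % hdd_array)            # 1920
    print("SSD/HDD:   %.0fx" % (ssd_array / hdd_array))  # ~18x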

Of course it has horrible latency compared to an SSD, but wasn't
that the main point I was trying to show you?

RAM costs about the same as these 2k euro SSDs.

On Feb 11, 2013, at 11:31 PM, Douglas Eadline wrote:

>
>> The buy-in price for a company here was already $35 a year ago
>> for 2TB disks.
>> They basically have 2 shops selling well in Netherlands and Germany.
>> I'm not sure they like to get mentioned here, but knowing you know
>> some German,
>> you should have no problems figuring out that shop name if you think
>> about it.
>>
>> You seem to have no idea what determines prices when you buy in a lot
>> versus just 1.
>
> Indeed, which is why I buy in infinite quantities. Price is always $0.
>
> -- 
> Doug
>




More information about the Beowulf mailing list