[Beowulf] NFSoIB and SRP benchmarking

Vincent Diepeveen diep at xs4all.nl
Mon Aug 27 04:31:00 PDT 2012


Yeah, it will be a great machine. You get what you pay for, of course.
Confirming that is not even interesting: if you pay big bucks,
obviously it'll be fast.

For such machines, the real test is to break the RAID card during
operation, replace it with another card, and see whether your data is
still there and whether the system comes back without too many problems
within the crucial 2 minutes you normally get at exchanges to swap in
new equipment (not counting rebuild time), or you're fired.

So for rich financials I bet such machines are attractive.

As for HPC, scaling is important.

As for cheap scaling....

If I search for benchmarks of RAID controllers, I can find plenty.

What is really missing is benchmarks of the built-in RAID controllers
on motherboards.

If you want to scale cheaply, it's obviously interesting to know how to
do the I/O.

The motherboards I've got have Intel's ESB2 built in as the RAID
controller.

If I google it, I see nothing on expected read/write speeds.

Nearly all the RAID controllers you see have a limit of about 1 GB/s,
which really sucks compared to even the bandwidth one can get to RAM.

Basically, 1 GB/s with an 8-core 2.5 GHz Xeon means you've got
8 * 2.5 GHz = 20 GHz of aggregate clock, so roughly 20 cycles per byte,
or 20k cycles (on the order of 32k instructions) you can execute for
reading or writing just 1 KB.
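
A quick back-of-the-envelope sketch of that arithmetic in Python; the
8 cores, 2.5 GHz and 1 GB/s figures are just the assumptions from above:

    # CPU cycles available per byte of controller I/O.
    # Assumed: 8 cores at 2.5 GHz, controller limited to 1 GB/s.
    cores = 8
    clock_hz = 2.5e9
    io_bytes_per_s = 1e9

    cycles_per_byte = cores * clock_hz / io_bytes_per_s   # ~20 cycles per byte
    cycles_per_kb = cycles_per_byte * 1024                # ~20k cycles per KB
    print(cycles_per_byte, cycles_per_kb)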

Most cheap RAID controllers, say 100-300 euro (130-400 dollars), don't
manage more than a write speed of about 600 MB/s, and that's in RAID-0,
which isn't really realistic, and a read speed of 700-900 MB/s.

That's with around 8 drives.

We know, however, that when the array is empty those drives will
hands-down sustain a write speed of around 130 MB/s on their outer
tracks, so the limitation is the RAID card's own CPU, usually around
800 MHz on the better cards.
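
The same kind of estimate for the platters themselves, again in Python
and again only with the figures assumed above (8 drives, ~130 MB/s each,
~600 MB/s write limit at the card):

    # Raw streaming bandwidth the drives could deliver vs. what the card manages.
    drives = 8
    per_drive_mb_s = 130      # outer-track write speed per drive
    card_write_mb_s = 600     # typical cheap-card RAID-0 write limit

    raw_mb_s = drives * per_drive_mb_s            # ~1040 MB/s from the platters
    print(raw_mb_s, card_write_mb_s / raw_mb_s)   # the card delivers only ~60% of it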

Many people benchmark with SAS drives. I see some Fujitsu drives rated
at 188 MB/s on the outer tracks, yet they benchmark at only 900 MB/s
read speed in an array.

The limitation seems to be the RAID cards in most cases.

So my simple question would be: is this better with the built-in RAID
controllers that most motherboards have, since they can use the fast
2.5 GHz Xeon CPUs?
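
Since there are no published numbers for the onboard controllers, one
would have to measure. Below is a minimal sequential write/read timing
sketch in Python; the path /mnt/array/testfile is hypothetical, and a
serious test would use O_DIRECT, a file larger than RAM and several runs:

    import os, time

    PATH = "/mnt/array/testfile"   # hypothetical mount point of the array under test
    BLOCK = 1024 * 1024            # 1 MB per write
    COUNT = 4096                   # 4 GB total; increase so it exceeds RAM
    buf = os.urandom(BLOCK)

    # Sequential write, fsync'd so we time the device and not the page cache.
    t0 = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    wt = time.time() - t0
    print("write: %.0f MB/s" % (BLOCK * COUNT / wt / 1e6))

    # Sequential read back; the page cache will inflate this unless the
    # file is bigger than memory.
    t0 = time.time()
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass
    rt = time.time() - t0
    print("read: %.0f MB/s" % (BLOCK * COUNT / rt / 1e6))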

On Aug 26, 2012, at 12:22 AM, holway at th.physik.uni-frankfurt.de wrote:

> Hello Beowulf 2.0
>
> I've just started playing with NFSoIB to provide super fast backend
> storage for a bunch of databases that we look after here. Oracle and
> Nexenta are both sending me ZFS based boxes to test and I hope to  
> compare
> the performance and stability of these with the NetApp (formerly LSI
> Engenio) E5400.
>
> This will be the first time I will be getting into serious storage
> benchmarking. Does anyone have any interesting tests they would like
> to run, or any experience performing these kinds of tests?
>
> I have 4 HP G8 boxes as consumers, each with 96 GB RAM, QDR IB and
> 10G Ethernet.
>
> I will also be testing the performance of KVM and Virtuozzo (the
> commercial version of OpenVZ), which is kernel-sharing virtualization
> similar to BSD jails.
>
> Thanks,
>
> Andrew
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin  
> Computing
> To change your subscription (digest mode or unsubscribe) visit  
> http://www.beowulf.org/mailman/listinfo/beowulf



