IDE-SCSI RAID units

Josip Loncaric josip at icase.edu
Mon Feb 12 08:20:09 PST 2001


"Robert G. Brown" wrote:
> 
> On Sun, 11 Feb 2001, Greg Lindahl wrote:
> 
> > Another manufacturer is http://3ware.com/, but again you have to buy
> > through a reseller.
> 
> I put this (and the other suggestions I received) on the brahma page:
> 
>   http://www.phy.duke.edu/brahma

A friend of mine pointed out that hardware RAID1 (mirroring) can lead
to the following problem (quoted from the 3Ware Escalade 3W-6400 RAID
Controller review at
http://www.neoseeker.com/resourcelink.html?rlid=23766 , page 7):

   However, since the mirroring routines are in-built on the card, 
   whenever the operating systems did not shut down correctly, the
   3Ware card will kick in and re-mirror the hard drives again.
   This can be rather "inconvenient" at times whenever our operating
   systems hanged and we were forced to reboot the system. After all,
   waiting 2 hours for the RAID mirror to rebuild every time the
   operating system crashed is no joke!

While this was observed under Windows, any hardware RAID1 controller
would presumably behave the same way under Linux.  If a 2-hour crash
recovery is a problem, software RAID remains attractive despite its
higher overhead in RAID1 mode.  With software RAID, the disks can be
partitioned, so the same pair of disks can carry a large RAID0
(striping) device for temporary storage and a small RAID1 (mirroring)
device for permanent files (a sketch of such a layout follows below).
Rebuilding a small (~2 GB) software RAID1 partition should be much
faster than rebuilding the entire hardware RAID1 device, which is
typically ten times larger.
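
As a rough illustration, the raidtools-style /etc/raidtab below
describes a small RAID1 device and a large RAID0 device built from two
partitions on each of the same two disks.  The device names, partition
layout and chunk sizes are only assumptions for illustration, not our
actual configuration:

   # /etc/raidtab sketch: two disks, each split into a small mirrored
   # partition (permanent files) and a large striped partition (scratch).
   # Device names (hda/hdc) and partition numbers are assumptions.
   raiddev /dev/md0
       raid-level            1
       nr-raid-disks         2
       persistent-superblock 1
       chunk-size            4
       device                /dev/hda1
       raid-disk             0
       device                /dev/hdc1
       raid-disk             1

   raiddev /dev/md1
       raid-level            0
       nr-raid-disks         2
       persistent-superblock 1
       chunk-size            32
       device                /dev/hda2
       raid-disk             0
       device                /dev/hdc2
       raid-disk             1

With a layout like this, only the small /dev/md0 mirror needs to
resync after an unclean shutdown; the striped scratch device has
nothing to rebuild.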

BTW, our users are told to keep their recomputable data on RAID0
(striped) partitions (each is about 100 GB).  We do not use software
RAID1.  Instead, we back up the users' programs and scripts that
generated the data (about 10 GB is backed up).  Larger-capacity
permanent storage is also available off-site for those who need it.
Otherwise, recovery after a RAID0 failure would be done by recomputing
the lost data.  Fortunately, in 2.5 years of heavy use, we have not
lost any data on our software RAID0 devices built from high-end SCSI
drives.  By contrast, our commodity IDE drives are less reliable, and
the MTBF of a RAID0 array built from inexpensive IDE drives would not
be as good (watch out for flaky IDE cables and/or IDE drives).
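
Since only the programs and scripts are irreplaceable, the backup can
be a simple job that archives the home areas while skipping the large
RAID0 scratch space.  A minimal sketch (the paths and scratch-directory
naming are assumptions, not our actual layout):

   # Archive the small, non-recomputable part of /home while skipping
   # the RAID0 scratch areas; paths are placeholders for illustration.
   tar czf /backup/home-$(date +%Y%m%d).tar.gz \
       --exclude='/home/*/scratch' /home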

Finally, some performance numbers: our 1999-vintage SCSI drives
deliver read performance of ~19 MB/s, which software RAID0 improves to
~36 MB/s (using two drives) and ~47 MB/s (using three drives).
Similarly, software striping across two 1999-vintage IDE drives
(attached as master and slave on IDE0) increases read rates from
~13 MB/s to ~26 MB/s (but commodity IDE would be less reliable than
high-end SCSI).  The CPU load during these 'hdparm -t' tests appears
to be about 7% (SCSI or IDE).
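
For reference, these are buffered disk read timings as reported by
hdparm, measured along these lines (the device names below are
placeholders, not our actual devices):

   # Buffered read timing on a single drive vs. the md striped device.
   # /dev/sda and /dev/md0 are placeholder names used for illustration.
   hdparm -t /dev/sda      # single-drive sequential read rate
   hdparm -t /dev/md0      # software RAID0 read rate across the stripe
   # (hdparm -T times cache reads instead of disk reads.)

CPU load during such a test can be watched from another terminal with
top or vmstat.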

Sincerely,
Josip

P.S.  I cannot quantify the statement that IDE is "less reliable" than
SCSI because we simply have not had any problems with our SCSI drives
(we have only 8).  By contrast, MTBF appears to be about two years for
our IDE drives (i.e. we see a few problems per month in a population of
65 IDE drives).  By "problem" I mean a new bad block or any data
corruption (e.g. a single-bit error in a file, which sometimes happens
in bytes with all bits on or all bits off, probably due to noise picked
up by the IDE cable and/or drive electronics).
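
One low-tech way to catch this kind of silent corruption is to keep
checksums of files that should never change and re-verify them from
time to time.  A minimal sketch (the paths are placeholders):

   # Record checksums of supposedly static files, then re-check them
   # later; any mismatch flags a corrupted file.  /data and the list
   # file name are placeholders for illustration.
   find /data -type f -exec md5sum {} \; > /root/data.md5
   # ... later ...
   md5sum -c /root/data.md5 | grep -v ': OK$'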


-- 
Dr. Josip Loncaric, Senior Staff Scientist        mailto:josip at icase.edu
ICASE, Mail Stop 132C           PGP key at http://www.icase.edu./~josip/
NASA Langley Research Center             mailto:j.loncaric at larc.nasa.gov
Hampton, VA 23681-2199, USA    Tel. +1 757 864-2192  Fax +1 757 864-6134




