[Beowulf] real hard drive failures
alvin at Mail.Linux-Consulting.com
Tue Jan 25 13:42:05 PST 2005
hi ya donald
On Tue, 25 Jan 2005, Donald Kinghorn wrote:
> I'm only partially interested in the thread "Cooling vs HW replacement" but
> the problem with drive failures is a real pain for me. So, I thought I'd
> share some of my experience.
i'd add 1 or 2 cooling fans per ide disk, esp if it's 7200rpm or 10,000rpm
if the warranty is 1yr, your disks might start to die at about 1.5yrs
if the warranty is 3yrs, your disks "might" start to die at about 3.5yrs
	- or just a day after the warranty expires ( counting from the day it arrived )
> We have used mostly Western Digital (WD) drives for > 4 years. We use the
> higher rpm and larger cache varieties ...
8MB cache versions tend to be better
> We also used IBM 60GB drives for a while and some of you will have experienced
> that mess ... approx. 80% failure over 1 year time frame!
80% failure is way, way ( ~15x ) too high, but if it's the deskstar ( from
thailand ) then those disks are known to be bad
if it's not the deskstar, then you probably have a problem with the
vendor that sold you those disks
> WD 20, 40, 60 GB drives in the field for 3+ years, [~600 drives] very few, (
> <1%) failures most machines have been retired.
good .. normal ...
> WD 80GB drives in the field for 1+ years, [~500 drives] "ARRRRGGGG!" ~15%
> failure and increasing. I send out 3-5 replacement drives every month.
probably running too hot ... needs fans cooling the disks
	- get those "disk coolers" with 2 fans on them
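one way to check whether heat is actually the problem ( assuming
smartmontools is installed -- not something mentioned in the thread )
is to ask the drive itself; /dev/hda below is a placeholder for your disk:

```shell
# dump the drive's SMART attributes and pull out the temperature line
# ( attribute 194 is the drive's internal temperature sensor )
smartctl -a /dev/hda | grep -i temperature
```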
> WD 120 and 200GB SATA in the field <1 year, [~400 drives] one failure so far.
very good .. but too early to tell ...
> I'm moving to a 3 drive raid5 setup on each node (drives are cheap, down time
> is not) and considering changing to Seagate SATA drives anyone care to offer
> opinions or more anecdotes? :-)
== using a 4-drive raid is better ... but is NOT the solution ==
	- configuring raid is NOT cheap ...
	- fixing raid takes expensive time ... ( due to mirroring and re-syncing )
	- if downtime matters and must be avoided, then raid
	is the worst thing, since it's ~4x slower to bring back up
	than recovering from a single-disk failure
	- raid will NOT prevent your downtime, as that raid box
	will have to be shut down sooner or later
	( shutting down sooner ( asap ) prevents data loss )
	- if you want the system to keep working while you move
	data to another node .. then raid did what it's supposed to
	in keeping your box up after a drive failure,
	but that failed disk still needs to be replaced asap
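if it's linux software raid ( md ), the hot-replace looks roughly like
this -- device names are made up, and mdadm is assumed to be installed:

```shell
# mark the dying disk failed and pull it out of the array
mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
# add the replacement ( after partitioning it the same as the others )
mdadm /dev/md0 --add /dev/hde1
# watch the resync progress ( this is the slow part )
cat /proc/mdstat
```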
	- if downtime is not acceptable ... ( high availability is what you'd want )
	have 2 nodes that serve the same data
	( data is mirrored ( manually or with rsync ) on 2 different nodes )
	you see it as one system .. ( like www.any-huge-domain.com )
	( just one "www" even if there are lots of machines behind it )