[Beowulf] Re: Cooling vs HW replacement

Mark Hahn hahn at physics.mcmaster.ca
Fri Jan 21 07:28:01 PST 2005


> > I mean, disk is SO cheap at less than $1/GB. 
> 
> That's certainly true for consumer grade disks.  "Enterprise"

that price range also includes midline/nearline disks, which are
certainly not "consumer-grade" (whatever that means).

> or "Server" grade disks still cost a lot more than that.  For

this is a very traditional, glass-house outlook.  it's the same outlook
that justifies a $50K "server" as being qualitatively different from a
commodity 1U dual at $5K.  there's no question that there are
differences - the only question is whether the price justifies them.

> instance Maxtor ultra320 drives and Seagate Cheetah drives
> are both about $4-5/GB.   The Western Digital Raptor
> SATA disks are also claimed to be reliable, and are
> again, in the $4-5/GB range. (Ie, it isn't just SCSI

pricewatch says $2-3/GB for Raptors.  I'd question whether Raptors
are anything other than a boutique product.  certainly they do not 
represent a paradigm shift (enterprise-but-sata-not-scsi).

> that makes server disks expensive.)

the real question is whether "server" disks make sense in your application.
what are the advantages?

	1. longer warranty - 5 yrs vs the typical 3 yrs for commodity disks.
	Seagate is currently breaking this rule by offering 5-year warranties
	on commodity disks too.  the main caveat is whether you will still
	want that disk (and/or server) in 3-5 years.

	2. higher reliability - typically 1.2-1.4M hours MTBF, and usually
	specified under higher load.  this is a very fuzzy area, since
	commodity disks often quote 1M hours under "lower" load (see the
	back-of-envelope conversion after this list).

	3. very narrow recording band, higher RPM, lower track density.
	these are all features that optimize for low and relatively
	consistent seek times.  in fact, the highest-RPM disks actually
	*don't* have the highest sustained bandwidth - "consumer" disks
	spin slower, but have higher recording density and thus higher
	bandwidth.

	4. SCSI or FC.  these interconnects always have been, and apparently
	always will be, significantly more expensive infrastructure than
	PATA was or SATA is.
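
for a rough sense of what those MTBF numbers buy you, here's a
back-of-envelope conversion to annualized failure rate (a python
sketch; the constant-failure-rate assumption behind it is itself
a big one):

	# AFR from quoted MTBF, assuming a constant (exponential)
	# failure rate over the disk's service life.
	HOURS_PER_YEAR = 8766

	for label, mtbf in [("enterprise, 1.2M hr", 1.2e6),
	                    ("enterprise, 1.4M hr", 1.4e6),
	                    ("commodity,  1.0M hr", 1.0e6)]:
	    print("%s: ~%.2f%% AFR" % (label, HOURS_PER_YEAR / mtbf * 100))

that works out to roughly 0.6-0.7% vs 0.9% per year on paper - not a
dramatic gap, especially since the two aren't specified under the
same load.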

so really, you have to work to imagine the application that a "server"
disk perfectly suits.  for instance, you can obtain whatever level of
reliability you want from raid over cheap disks, rather than from
ultra-premium-spec disks.  and is your data access pattern really one
that requires a disk optimized for seeks?
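
to put a number on the raid point: the textbook model for a 2-disk
mirror (independent, exponential failures) gives MTTDL = MTBF^2 /
(2 * MTTR).  a quick sketch - the 24-hour rebuild time is purely an
assumption:

	# mean time to data loss for a 2-disk mirror of commodity disks,
	# using the textbook MTTDL = MTBF^2 / (2 * MTTR) model.
	mtbf = 1.0e6    # hours, commodity-disk spec
	mttr = 24.0     # hours to replace and rebuild the disk (assumed)
	mttdl = mtbf ** 2 / (2 * mttr)
	print("mirror MTTDL: ~%.0f years" % (mttdl / 8766))

millions of years on paper.  the model is optimistic (disk failures
are not really independent), but it shows how cheaply raid buys back
whatever reliability margin the premium disks claim.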

> Sure, you can RAID the cheaper ATA/SATA disks and
> replace them as they fail, but if you're really
> working them hard, the word from the storage lists is that
> they will indeed fail.  (Let Google be your friend.)

under what circumstances will you have a 100% duty cycle?  first, you
need a server that really is 24x7 (imagine that all Visa card
transactions worldwide update a DB on your server).  OK, you might well
use a "server" disk for that, since DB logs see fairly uniform wear and
constant activity.  but Visa would, and does, use many distributed/
replicated/raided servers.

> load a database into memory.  The disk server uses SCSI disks
> and is pushed much harder.

I've looked at the duty cycle of our servers, and am impressed by how
low it is.  even on a server that is the head node and sole fileserver
for a cluster of 70ish diskless duals, the duty cycle is quite low.  I
believe that people overestimate the duty cycle of their servers.  I've
also seen servers whose duty cycle became far more reasonable when, for
instance, filesystems were mounted with noatime.  and for that matter,
some mail packages configure their spool directories for fully
synchronous operation without noticing that the filesystem already
journals.
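
measuring this on linux 2.6 is easy: /proc/diskstats exposes a
per-device "milliseconds spent doing I/O" counter, so sampling it
twice gives the duty cycle directly.  a sketch (the device name "sda"
is just an example):

	# rough duty-cycle probe: sample the io_ticks counter in
	# /proc/diskstats twice and compare it to wall-clock time.
	import time

	def io_ticks_ms(dev):
	    for line in open("/proc/diskstats"):
	        f = line.split()
	        if f[2] == dev:
	            return int(f[12])   # field 13: ms spent doing I/O
	    raise ValueError("no such device: %s" % dev)

	dev, interval = "sda", 60.0
	t0 = io_ticks_ms(dev)
	time.sleep(interval)
	t1 = io_ticks_ms(dev)
	print("%s duty cycle: %.1f%%" % (dev, (t1 - t0) / (interval * 10.0)))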

in summary: there is a place for super-premium disks, but it's just plain
silly to assume that because you have a server, it therefore needs
SCSI/FC.  you need to look at your workload and design the disk system
around it, using raid for sure, and probably putting most of your space
on 5-10x cheaper SATA-based storage.

regards, mark hahn.



