[Beowulf] Any recommendations for a good JBOD?
hahn at mcmaster.ca
Fri Feb 19 09:29:43 PST 2010
>>> I was thinking SAS / SCSI / iSCSI is probably easiest and cheapest.
the concept of scsi/sas being cheap is rather amusing.
>> Do you already have a suitable SAS or SCSI controller in the host
>> machine? If not, then you have to factor in the cost of the controller.
> No. true. I have to factor in that price. But almost any kind of disk
> array I can think of will need a controller, correct? Or are there any
unless it already has the controller, of course. most motherboards
these days come with at least 6x 3 Gb/s sata ports, for instance.
> JBOD formats that can be attached without putting in a controller in
> the server.
I was thinking of esata, and a 5-disk external enclosure with
port-multiplier. if your system already has sata, you might need
to add an esata header (or possibly a controller if the existing
controller doesn't support port multipliers.) 5 disks on PM would
be a pretty simple way to add JBOD for a md-based raid5.
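for what it's worth, a sketch of the md side, assuming the five disks
behind the port multiplier show up as /dev/sdb through /dev/sdf (the
device names and mount point here are illustrative assumptions, not
anything specific to your hardware):

```shell
# hypothetical sketch: 5 eSATA disks on a port multiplier,
# assumed to appear as /dev/sd[b-f]. build a 5-disk md raid5:
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# put a filesystem on it and mount it (mount point is arbitrary):
mkfs.ext4 /dev/md0
mkdir -p /mnt/jbod
mount /dev/md0 /mnt/jbod

# watch the initial resync progress:
cat /proc/mdstat
```

note the array is usable while the initial resync runs, just slower.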
>> If you want iSCSI, then you're looking at a low-end SAN as opposed to a
>> DAS. But the SAN/NAS distinction is blurry these days, as many devices
>> can give you either block or file-level access.
> Yes, true. I'm dropping iSCSI entirely. Don't have the $$ to do a SAN
> with fibre switches etc.
iSCSI doesn't require SAN infrastructure, of course. that's kind of the
point: you plug it into your existing ethernet fabric. for the low-overhead
application you describe, it's a reasonable fit, except that even low-end
iSCSI/NAS boxes tend to ramp up in price. that is, comparable to what
you'd pay for a cheap uATX system (which would be about the same in
speed, power and space, not surprisingly.)
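to illustrate how little infrastructure iSCSI actually needs on the
client side: with the stock open-iscsi initiator on linux, attaching a
target over plain ethernet is just a discovery and a login (the IP and
IQN below are made-up placeholders):

```shell
# hypothetical sketch: discover targets exported by a box at
# 192.168.1.50 (placeholder address) over the existing ethernet:
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# log in to one of the discovered targets (placeholder IQN);
# the LUN then appears as an ordinary local block device:
iscsiadm -m node -T iqn.2010-02.example:storage.disk1 \
         -p 192.168.1.50 --login
```

no fibre switches, no HBAs - the "SAN" is whatever ethernet you
already have.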
>> Yes, they do. But if you want to access 5TB via iSCSI (or NFS), that's
>> likely the cheapest option.
> That's quite non-intuitive to me. If it's a NAS they must need
> procs+RAM+NICs on board. How does that get cheaper than an equivalent
> "dumb" JBOD which outsources all these 3 functions to the attached
> host server? Maybe I am missing a part of the argument.
procs+ram+nic can easily total less than $100; enclosures can be very
cheap as well. that's what's so appealing about that approach: it's
fully user-serviceable, and you don't have to depend on some random vendor
to maintain firmware, supported-disk lists, etc. of course, that's also
the main downside: you have just adopted another system to administer,
albeit an embedded one.
> I already have a server that the JBOD can be attached to so that cost
> to me is a sunk cost. I just need to consider the incrementals above
right - 10 years ago, the cost overhead of the system was larger.
nowadays, integration and moore's law have made small systems very cheap.
this is good, since disks are incredibly cheap as well. (bad if you're
in the storage business, where it looks a little funny to justify
thousands of dollars of controller/etc infrastructure when the disks
cost $100 or so. disk arrays can still make sense, of course, but
availability of useful cheap commodity systems has changed the equation.)
regards, mark hahn.