[Beowulf] Mature open source hierarchical storage management
working at surfingllama.com
Sun Oct 25 22:04:58 PDT 2009
2009/10/26 Joe Landman <landman at scalableinformatics.com>
> Just curious -- how large and how big are the deltas in the
At the start of the year we were seeing an average delta of about 10GB/day;
currently we are seeing an average of 70GB/day. There are still a number of
unknowns, but we expect that with next-generation DNA sequencing and mass
spectrometry coming on board early next year, the delta is likely to jump to
~500GB/day.
>> I ask because the new generation of 2TB SATA disks appear to be
>> establishing the groundwork for a list of new storage options
>> including cluster file systems that run circles around NFS and large
>> storage RAIDS.
> HFS's and tiering in general make sense when the cost of the high
> performance storage per GB or per TB is so large as to make it impractical
> to keep all of the data on disk.
> As Tom points out, this really isn't the case anymore. Petabytes of very
> high speed, very reliable storage can be had for far less money than in the
> past.
> This doesn't mean that HFSes don't make sense for some cases. Though those
> cases are diminishing in number over time.
It's definitely good to see the cost of large storage coming down;
unfortunately, in our organisation the amount of data the machines are
generating is increasing faster than the storage. The people driving the
machines would like to see the raw data held indefinitely, but with
approximately 10TB of data for an Illumina run, it's likely that we will only
be able to retain it until the initial processing is completed.
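For a rough sense of scale, here is a back-of-envelope sketch of what those
daily deltas add up to over a year, using only the figures quoted in this
thread (10GB/day, 70GB/day, ~500GB/day); the function name and decimal
GB-per-TB convention are my own assumptions, not anything from the thread.

```python
GB_PER_TB = 1000  # decimal units, as disk vendors quote capacity


def annual_growth_tb(delta_gb_per_day):
    """Raw data accumulated over one year at a constant daily delta, in TB."""
    return delta_gb_per_day * 365 / GB_PER_TB


# Deltas quoted in the thread: start of 2009, current, and projected
for delta in (10, 70, 500):
    print(f"{delta:>4} GB/day -> {annual_growth_tb(delta):.1f} TB/year")
```

At the projected ~500GB/day, that is on the order of 180TB of new raw data per
year before any replication or backup, which makes the retention pressure
above fairly concrete.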