Large FOSS filesystems, was Re: [Beowulf] 512 nodes Myrinet cluster Challanges
gmpc at sanger.ac.uk
Thu May 4 14:18:07 PDT 2006
> Basically, with the right FS, and right set up, it is doable, though
> management will be a challenge. Lustre may or may not help on this.
> Some vendors are pushing it hard. Some are pushing GPFS hard. YMMV.
It all depends on what you want to do with the data and how much you
care if it goes away.
Just like building a cluster, there is more than one way to build a
large filesystem. You have to decide where your data sits in the
"reliability/performance/cost" space and whether you can get away with
a standalone filesystem, or whether you need a cluster filesystem with
all of the complexity and excitement that goes with it.
As an example, I have a 1TB scratch filesystem holding temporary job
output from a cluster. I also have a 30TB filesystem that will soon hold
the results of every sequencing experiment ever carried out at our institute.
Guess which filesystem is striped across some scsi disks, and which one
is mirrored between two fibrechannel storage arrays physically separated
in different machine rooms.
Getting a 16TB filesystem up and running is only half the story.
Like all filesystems, your 16TB filesystem will go to 80% full within 6
months, and then 12 months later someone will ask you for a 32TB
filesystem. Some filesystem and disk solutions are better than others at
coping with being resized.
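As a concrete illustration of the resizing point, here is a minimal sketch of growing a filesystem online, assuming it sits on an LVM logical volume with an ext3 filesystem on top (the device names and mount point are hypothetical, and the volume group must have free extents):

```shell
# Hypothetical devices and mount point, for illustration only.
# Check current usage and see how close we are to that 80% mark.
df -h /scratch

# Grow the logical volume by 1TB (needs free space in the volume group).
lvextend -L +1T /dev/vg_scratch/lv_scratch

# Grow the ext3 filesystem to fill the enlarged volume; ext3 supports
# online growth, so this can run while the filesystem stays mounted.
resize2fs /dev/vg_scratch/lv_scratch
```

Solutions that can't do this online, or at all, mean scheduled downtime and a lot of data shuffling every time someone asks for that 32TB filesystem.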
And let's not forget about backing up and restoring a 16TB filesystem...
Dr Guy Coates, Informatics System Group
The Wellcome Trust Sanger Institute, Hinxton, Cambridge, CB10 1HH, UK
Tel: +44 (0)1223 834244 ex 6925
Fax: +44 (0)1223 496802