PVFS vs. file servers
brian at chpc.utah.edu
Wed Sep 20 09:17:31 PDT 2000
We have been using PVFS here for about six months (and even earlier for
testing purposes). Our system has now been in production use for nearly
three months without any major issues, and the performance is quite
good. We have a single meta server and 4 I/O nodes. We use this space
as a global scratch space, with an NFS server for home directory space.
We also use local disk space on the compute nodes for another level of
scratch: think of local node scratch as "L1 scratch", global PVFS space
as "L2 scratch", and NFS as the final repository. PVFS scales quite
well and is easily expanded down the road. The main con is that the
system was not designed with much redundancy in it. For my site that
isn't really a con at all, since we selected it for use as global
scratch, which requires little if any redundancy. After all, it is
scratch space. A pro is that you can use very cheap hardware, or
existing hardware, to try it out before you opt to push it into
production use. You could easily pull a few of your compute nodes out
of the loop for a bit and set them up as a PVFS system using 3 or 5
nodes (1 meta, 2 or 4 I/O nodes). This is similar to how we got started.
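For anyone unfamiliar with why PVFS scales with added I/O nodes: each file's
data is dealt out in fixed-size stripe units round-robin across the I/O
daemons, so bandwidth grows with the node count. Here is a toy sketch of
that striping idea in Python. This is purely illustrative (the stripe size
and node count are made-up parameters, not PVFS's actual on-disk layout or
API):

```python
# Toy illustration of PVFS-style round-robin striping (NOT PVFS code).
# A file's bytes are split into fixed-size stripe units and dealt out
# across the I/O nodes; a read reassembles them in the original order.

STRIPE_SIZE = 4   # bytes per stripe unit (tiny, for illustration only)
N_IO_NODES = 4    # e.g. our 4 I/O nodes

def stripe(data: bytes, n_nodes: int = N_IO_NODES, unit: int = STRIPE_SIZE):
    """Deal stripe units of `data` round-robin onto n_nodes 'disks'."""
    nodes = [bytearray() for _ in range(n_nodes)]
    for i in range(0, len(data), unit):
        nodes[(i // unit) % n_nodes] += data[i:i + unit]
    return nodes

def unstripe(nodes, total_len: int, unit: int = STRIPE_SIZE) -> bytes:
    """Reassemble the original byte stream from the striped pieces."""
    out = bytearray()
    offsets = [0] * len(nodes)
    i = 0
    while len(out) < total_len:
        node = i % len(nodes)
        out += nodes[node][offsets[node]:offsets[node] + unit]
        offsets[node] += unit
        i += 1
    return bytes(out)

data = b"0123456789abcdefghij"
pieces = stripe(data)
assert unstripe(pieces, len(data)) == data
```

Because a large read or write touches all I/O nodes in parallel, adding
nodes adds aggregate bandwidth, which is why it works so well as fast
scratch even on cheap hardware.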
Josip Loncaric wrote:
> A friend of mine is trying to decide between these two options:
> buying a big file server vs.
> using PVFS (see http://parlweb.parl.clemson.edu/pvfs/)
> on his cluster. He has a cluster of Alphas running Linux, and each node
> already has about 10GB of disk space available (>100GB total). Buying a
> big file server would add more disk space (ignoring the space already
> available), but it would allow his nodes to be dedicated to computing
> free of file serving tasks.
> He runs big CFD codes and needs lots of disk space to store results and
> input files.
> Any comments on pros and cons?
> Dr. Josip Loncaric, Senior Staff Scientist mailto:josip at icase.edu
> ICASE, Mail Stop 132C PGP key at http://www.icase.edu./~josip/
> NASA Langley Research Center mailto:j.loncaric at larc.nasa.gov
> Hampton, VA 23681-2199, USA Tel. +1 757 864-2192 Fax +1 757 864-6134
> Beowulf mailing list
> Beowulf at beowulf.org
Brian D. Haymore
University of Utah
Center for High Performance Computing
155 South 1452 East RM 405
Salt Lake City, Ut 84112-0190
Email: brian at chpc.utah.edu - Phone: (801) 585-1755 - Fax: (801) 585-5366