[Beowulf] PVFS or NFS in a Beowulf cluster?

Reuti reuti at staff.uni-marburg.de
Tue Jan 18 17:53:36 PST 2005


I must admit that I haven't implemented it yet, because we are still 
waiting for a new cluster...

The idea behind it is to set aside some of the nodes as dedicated PVFS2 servers 
and leave the remaining nodes purely for calculations. We have only one 
application which needs a shared scratch space across the calculation nodes 
(for which we are currently using the user's home directory); the others 
are happy with local scratch space on each calculation node. The PVFS 
website also has a description of some speed tests for determining the right 
number of PVFS nodes. It depends, of course, on the application in question 
and (in our case) on how often this particular application is run on the cluster.

http://www.parl.clemson.edu/pvfs/desc.html
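
As a concrete illustration of the shared scratch use case (this sketch is mine, 
not something from the PVFS documentation), every MPI rank could write its own 
block of a single file on the PVFS2 scratch space through MPI-IO. The mount 
point /pvfs2/scratch and the file name are hypothetical placeholders:

/* Minimal MPI-IO sketch: each rank writes a non-overlapping block of one
 * shared file on the scratch file system.  /pvfs2/scratch is a made-up
 * mount point; ROMIO usually picks the right driver from the mounted
 * file system (some setups use an explicit "pvfs2:" path prefix instead). */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_BYTES (64 * 1024 * 1024)  /* 64 MB per rank, arbitrary choice */

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(BLOCK_BYTES);
    memset(buf, rank & 0xff, BLOCK_BYTES);

    /* All ranks open the same file on the shared scratch space. */
    MPI_File_open(MPI_COMM_WORLD, "/pvfs2/scratch/shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset, so the blocks never overlap. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK_BYTES;
    MPI_File_write_at(fh, offset, buf, BLOCK_BYTES, MPI_BYTE,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}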

Anyway, I would suggest starting with some speed tests to find the best number 
of PVFS servers.

This way the MPI tasks are not slowed down on the calculation-only nodes.
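
One simple way to run such a speed test (again my sketch, not a recipe from the 
PVFS site) would be to time the write from the MPI-IO example above and reduce 
it to an aggregate bandwidth figure, then repeat the run against setups with 
different numbers of PVFS2 server nodes:

/* Timing wrapper around the MPI_File_write_at call from the sketch above
 * (rank, fh, offset, buf come from there; add #include <stdio.h>).       */
int nprocs;
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

MPI_Barrier(MPI_COMM_WORLD);
double t0 = MPI_Wtime();

MPI_File_write_at(fh, offset, buf, BLOCK_BYTES, MPI_BYTE, MPI_STATUS_IGNORE);

MPI_Barrier(MPI_COMM_WORLD);
double elapsed = MPI_Wtime() - t0;
double slowest;
MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

if (rank == 0)
    printf("aggregate write bandwidth: %.1f MB/s\n",
           (double)nprocs * BLOCK_BYTES / (1024.0 * 1024.0) / slowest);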

Cheers - Reuti


Quoting Mark Hahn <hahn at physics.mcmaster.ca>:

> > I don't see a contradiction in using both: NFS for the home directories
> > (on some sort of master node with an attached hardware RAID), PVFS2 for
> > a shared scratch space (in case the applications need a shared scratch
> > space across the nodes).
> 
> that's certainly attractive.  has anyone tried PVFS2 in a *parallel*
> cluster?
> 
> that is, for tight-coupled parallel applications, it's quite critical to
> avoid stealing cycles from the MPI worker threads.  I'd be curious to know
> how much of a problem PVFS2 would cause this way.
> 
> thanks, mark hahn.




