[Beowulf] MPI-IO + nfs - alternatives?

Robert Horton robh at dongle.org.uk
Wed Sep 29 09:24:13 PDT 2010


Hi,

I've been running some benchmarks on a new fileserver which we intend to
use to serve scratch space via NFS. In order to support MPI-IO I need to
mount with the "noac" option. Unfortunately this takes block write
performance from around 100 MB/s down to 20 MB/s, which is a bit
annoying given that most of the workload isn't MPI-IO.
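To be clear about what I mean by MPI-IO here: the jobs do the usual thing
of having every rank write its own chunk of one shared file, roughly as in
the sketch below (the path and block size are placeholders, not taken from
our actual codes). With attribute caching left on, the NFS clients can work
from stale file attributes, which is why noac is needed for this pattern to
be safe.

    /* Minimal sketch of a shared-file MPI-IO write: every rank writes
     * its own 1 MiB block of one common file.  Path and block size are
     * placeholders.
     */
    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK (1 << 20)   /* 1 MiB per rank */

    int main(int argc, char **argv)
    {
        int rank;
        char *buf;
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(BLOCK);
        memset(buf, rank & 0xff, BLOCK);

        /* All ranks open the same file on the NFS-mounted scratch area */
        MPI_File_open(MPI_COMM_WORLD, "/scratch/mpiio-test.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Collective write, each rank at its own offset */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * BLOCK,
                              buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }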

1) Does anyone have any hints for improving NFS performance under
these circumstances? I've tried using jumbo frames, different
filesystems, putting the log device on an SSD and increasing the NFS
block size to 1MB, none of which has had any significant effect.
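For reference, the two mounts I've been comparing look roughly like the
lines below (hostname, export path and the hard/intr options are
illustrative rather than an exact record); noac is the only change between
the fast and slow cases:

    # attribute caching at the default: ~100 MB/s block writes
    mount -t nfs -o rw,hard,intr,rsize=1048576,wsize=1048576 \
        fileserver:/export/scratch /scratch

    # noac added for MPI-IO: ~20 MB/s block writes
    mount -t nfs -o rw,hard,intr,noac,rsize=1048576,wsize=1048576 \
        fileserver:/export/scratch /scratch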

2) Are there any reasonable alternatives to NFS in this situation? The
main possibilities seem to be:

 - PVFS or similar with a single I/O server. I'm not sure what
performance to expect from this, though, and it's a lot more complex than NFS.

 - Sharing a block device via iSCSI and using GFS, although this is also
going to be somewhat complex and I can't find any evidence that MPI-IO
will even work with GFS.

Otherwise it looks as though the best bet would be to export two volumes
via NFS and only mount one of them with noac. Any other suggestions?
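In case it helps to make the two-volume idea concrete, the fstab on the
compute nodes would end up looking something like this (server name and
paths are placeholders); general scratch traffic stays on the
normally-cached mount and only MPI-IO jobs touch the noac one:

    fileserver:/export/scratch      /scratch      nfs  rw,hard,intr,rsize=1048576,wsize=1048576       0 0
    fileserver:/export/scratch-mpi  /scratch-mpi  nfs  rw,hard,intr,noac,rsize=1048576,wsize=1048576  0 0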

Rob



