[Beowulf] how large of an installation have people used NFS with? would 300 mounts kill performance?

Bruno Coutinho coutinho at dcc.ufmg.br
Wed Sep 9 12:12:04 PDT 2009


My two cents:

2009/9/9 Rahul Nabar <rpnabar at gmail.com>

> Our new cluster aims to have around 300 compute nodes. I was wondering
> what is the largest setup people have tested NFS with? Any tips or
> comments? There seems no way for me to say if it will scale well or
> not.
>

> I have been warned of performance hits but how bad will they be?
> Infiniband is touted as a solution but the economics don't work out.
> My question is this:
>
> Assume each of my compute nodes have gigabit ethernet AND I specify
> the switch such that it can handle full line capacity on all ports.
>
> Will there still be performance hits as I start adding compute nodes?
>

Yes.


> Why?


Because the final NFS server bandwidth will be the bandwidth of the most
limited device, be it disk, network interface or switch.
Even if you have a switch capable of full line rate on all 300 ports, you
would need an insanely fast interface in your NFS server and a giant pool
of disks to get decent bandwidth if all nodes access NFS at the same
time.
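
A rough back-of-envelope sketch of that point (Python; every hardware
figure below is an assumption, plug in your own numbers):

# Estimate the NFS server ceiling as the slowest of disk pool, server NIC
# and switch uplink, then the share each node gets if all hit it at once.
# All numbers here are illustrative assumptions, not measurements.

disk_count      = 24
per_disk_mb_s   = 80.0                        # assumed sustained MB/s per disk
disk_pool_mb_s  = disk_count * per_disk_mb_s  # ignores RAID/parity overhead

nic_mb_s    = 10000 / 8.0                     # assumed 10GbE server interface
uplink_mb_s = 10000 / 8.0                     # assumed 10GbE uplink to the server

server_mb_s = min(disk_pool_mb_s, nic_mb_s, uplink_mb_s)

clients = 300
print("server ceiling: %.0f MB/s" % server_mb_s)
print("per node, all %d reading at once: %.1f MB/s"
      % (clients, server_mb_s / clients))
# roughly 4 MB/s per node, far below each node's own GigE line rate (~120 MB/s)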

But depending on how people run applications on your cluster, only a
small set of nodes may access NFS at the same time, and a 10Gb Ethernet
interface with a few tens of disks will be enough.
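
The same arithmetic for that more typical case (again with assumed
numbers: a 10GbE server link and roughly 10% of the nodes doing I/O at
any one moment):

# Assumed workload: only ~30 of the 300 nodes touch NFS at the same time.
active_nodes = 30
server_mb_s  = 1250.0                 # 10GbE NFS server interface, theoretical
print("per active node: %.0f MB/s" % (server_mb_s / active_nodes))  # ~42 MB/s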



> Or is it unrealistic to configure a switching setup with full
> line capacities on 300 ports?
>

That will be good for MPI, but it will not help much with your NFS server problem.


> If not NFS then Lustre etc options do exist. But the more I read about
> those the more I am convinced that those open another big can of
> worms. Besides, if NFS would work I do not want to switch.
>
> --
> Rahul
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>