[Beowulf] how large can we go with 1GB Ethernet? / Re: how large of an installation have people used NFS with?

Bruno Coutinho coutinho at dcc.ufmg.br
Wed Sep 9 14:50:53 PDT 2009


2009/9/9 psc <pscadmin at avalon.umaryland.edu>

> I wonder what the sensible largest cluster based on a 1 Gb Ethernet
> network would be, and especially how you would connect those gigabit
> switches together. Right now we have (on one of our four clusters) two
> 48-port gigabit switches connected together with 6 patch cables, and I
> just ran out of ports for expansion. I wonder where to go from here: we
> already have four clusters, and it would be great to stop adding clusters
> and start expanding them beyond the number of ports on the switches.
> NFS and gigabit Ethernet work great for us and we want to stick with
> them, but we would love to find a way to overcome the current "switch
> limitation". I heard that there are some "stackable switches" -- in any
> case, any idea or suggestion will be appreciated.
>
>
Stackable switches are small switches (16 to 48 ports) that have proprietary
high-bandwidth uplinks for connecting switches of the same model.
Typically these uplinks are pairs of 10 Gbps links (since they are full
duplex, vendors sometimes quote them as 20 Gbps) that connect all the
switches in a ring configuration.
This solution is cheaper than a modular switch, but has limited bandwidth.
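To make that bandwidth limit concrete, here is a rough back-of-envelope
sketch in plain Python. The inputs (two 48-port gigabit switches joined by
a 6-cable trunk, versus stack links quoted at 20 Gbps full duplex) are the
figures mentioned in this thread, assumed rather than measured:

# Rough oversubscription estimate for traffic that must cross between switches.
# Assumed inputs from this thread: two 48-port gigabit switches joined by a
# 6 x 1 Gbps trunk, versus stackable switches joined by 20 Gbps stack links.

def oversubscription(nodes_per_switch, node_gbps, uplink_gbps):
    # Worst-case node demand crossing the uplink divided by uplink capacity.
    return (nodes_per_switch * node_gbps) / uplink_gbps

# Current setup: 48 ports minus 6 used for the trunk leaves 42 nodes per
# switch, all sharing 6 Gbps to the other switch.
print(oversubscription(42, 1.0, 6 * 1.0))   # 7.0 : 1

# Stacked pair: all 48 ports free for nodes, 20 Gbps stack link between members.
print(oversubscription(48, 1.0, 20.0))      # 2.4 : 1

The stack clearly helps for a pair, but with more members the ring itself
becomes the shared resource: traffic between non-adjacent switches has to
transit intermediate members, so the usable bisection bandwidth does not
keep growing with the number of switches the way a modular chassis
backplane does.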


> thanks!!
> psc
>
> > From: Rahul Nabar <rpnabar at gmail.com>
> > Subject: [Beowulf] how large of an installation have people used NFS
> >       with?   would 300 mounts kill performance?
> > To: Beowulf Mailing List <beowulf at beowulf.org>
> > Message-ID:
> >       <c4d69730909091040p3774581dmd50b460dc99e0a60 at mail.gmail.com>
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > Our new cluster aims to have around 300 compute nodes. I was wondering:
> > what is the largest setup people have tested NFS with? Any tips or
> > comments? There seems to be no way for me to tell whether it will scale
> > well or not.
> >
> > I have been warned of performance hits but how bad will they be?
> > Infiniband is touted as a solution but the economics don't work out.
> > My question is this:
> >
> > Assume each of my compute nodes have gigabit ethernet AND I specify
> > the switch such that it can handle full line capacity on all ports.
> > Will there still be performance hits as I start adding compute nodes?
> > Why? Or is it unrealistic to configure a switching setup with full
> > line capacities on 300 ports?
> >
> > If not NFS, then options like Lustre do exist. But the more I read about
> > them, the more I am convinced that they open another big can of worms.
> > Besides, if NFS would work, I do not want to switch.
> >
> >
>

