[Beowulf] recommendations for a good ethernet switch for connecting ~300 compute nodes

Rahul Nabar rpnabar at gmail.com
Thu Sep 3 06:23:36 PDT 2009


On Thu, Sep 3, 2009 at 7:58 AM, Joe Landman
<landman at scalableinformatics.com> wrote:
> For really large clusters, you'd separate out the scheduler and some of the
> other functions as well.  Mark Hahn and some of the other folks on the list
> run some of the really large clusters out there.  They have some good advice
> for those scaling up.

Thanks, Joe! I'm looking forward to guidance from Mark Hahn and the
others; this expansion is on the larger side for me.

> It might be worth asking what your targeted per node budget is.

I don't have a per-node target so much as a $/performance budget.
And for my codes I've found that InfiniBand etc. just don't cut it:
the additional cost doesn't squeeze out enough extra performance.

I've benchmarked several chips and configurations, and the current
winner for our codes seems to be the Intel Nehalem E5520, at less
than $3000/node.
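
(To make the comparison concrete, this is the sort of $/performance
arithmetic I mean; a quick sketch in Python, where the prices and
throughput figures are placeholders, not our actual quotes or
benchmark numbers:)

    # Toy $/performance comparison; every number here is a made-up
    # placeholder, not a real quote or benchmark result.
    configs = {
        "E5520 + GigE":   (2900.0, 1.00),  # (approx $/node, relative throughput)
        "E5520 + SDR IB": (3900.0, 1.15),  # hypothetical until I get real quotes
    }
    for name, (price, perf) in configs.items():
        print("%s: %.0f dollars per unit of throughput" % (name, price / perf))

Whichever config minimizes that last number is the one we buy.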

> 24-port SDR IB switches are available, and relatively inexpensive.

Is there an approximate $$$ figure someone can throw out? These
numbers have been pretty hard to get.

> 24 port SDR PCIe cards are available and relatively inexpensive.

Ditto. Any $ figures?

All my calculations pushed the per-node price up to the point where
the performance would have to be truly stellar to justify the
spending. And really, the plain-vanilla Nehalem Ethernet config is
not doing too badly for us yet. My main concern now is scaling.
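
(To spell out that break-even logic: if an interconnect upgrade adds
cost C to a node of base price P, the speedup on our codes has to
exceed (P + C) / P just to hold $/performance constant. The add-on
figure below is hypothetical, since I don't have real quotes yet.)

    # Back-of-the-envelope break-even check (placeholder prices only).
    base_node = 2900.0   # roughly what a GigE Nehalem node costs us
    ib_addon = 1000.0    # hypothetical HCA + cable + switch-port share
    required_speedup = (base_node + ib_addon) / base_node
    print("IB would need >= %.2fx speedup to match $/performance"
          % required_speedup)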

-- 
Rahul



