[Beowulf] How to Diagnose Cause of Cluster Ethernet Errors?
deadline at clustermonkey.net
Mon Apr 2 07:34:44 PDT 2007
I hear your frustration. You are quite right that many of the ASICs
are the same. Implementation is important. In terms of clusters,
there are no hard and fast rules for switches. For example,
I have found some cheap GigE switches (like the SMC 8508T)
to be real performers (8 ports/Jumbo Frames for under $100).
I just got an SMC CGS16 to use in my test rack. So I am
a little partial to SMC at this point. However, I have not
tested the CGS16 fully so it may not live up to my
expectations. In the past I have found Foundry and Extreme
to work quite well, and given the price you pay, they
should. I think the trick is to find the bargains that still
perform. As I see it, there are three ways to buy a good switch:
1. Hire a consultant to help with the cluster (their
experience can save money and headaches on other
issues as well)
2. Use Google and this list to see what you can find about a
particular switch, but be warned: most people do
not push switches the way HPC users do, so what
is good for the back office may not be good for
the cluster. (And pretty much ignore vendor data sheets.)
3. Get some evaluation switches (or at least test
them within the 30-day return period) for specific
applications you plan to run. This is probably the
best way to proceed.
Unfortunately there does not seem to be an easy way
to really test a switch. The easiest thing to do is
to run netpipe on two ports to establish a baseline.
Choose the switch that provides the best netpipe results.
Then run netpipe on several port pairs at the same time and see if
there is degradation. This, however, is not the whole story;
some performance may depend on port choice (i.e.
ports may span multiple ASICs) and performance
may vary. Also, to fully test a switch I would
assume that you would want to test every port
combination while the other ports were at
some constant network load. So you can probably see
why it is hard to test switches. In any
case, these threads on the list should help as
well (quite informative):
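The "every port combination" point above can be quantified with a quick back-of-envelope count (this is just illustration, not any standard benchmark):

```python
# Count the distinct port pairings an exhaustive switch test would need.
# Each pair would ideally be measured while the remaining ports carry a
# constant background load, which multiplies the work further.
from math import comb

def pair_count(ports):
    """Number of distinct port pairs on a switch with `ports` ports."""
    return comb(ports, 2)

for n in (8, 16, 48):
    print(f"{n}-port switch: {pair_count(n)} port pairs to test")
```

Even an 8-port switch has 28 pairs, and a 48-port switch has over a thousand, which is why nobody tests exhaustively.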
Finally, I am open to anyone who can come up with
a reasonably good switch test, maybe a combination
of applications and synthetic tests so that we
can at least eliminate the poor performers. I would
like to post this kind of data on ClusterMonkey.
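For what it's worth, the baseline-then-load procedure above can be sketched in a few lines of Python. This only measures raw TCP socket throughput, so it is at best a crude stand-in for netpipe; the hosts and ports below are placeholders, not part of any real tool:

```python
# A minimal netpipe-style point-to-point throughput probe -- a sketch
# only, not a replacement for netpipe.  To test a switch, run the sink
# on one node and the client on another, with each node plugged into
# the switch ports under test.
import socket
import threading
import time

CHUNK = 64 * 1024          # bytes per send() call
TOTAL = 16 * 1024 * 1024   # bytes moved per measurement

def sink_server(listener):
    """Accept one connection and discard everything it sends."""
    conn, _ = listener.accept()
    with conn:
        while conn.recv(CHUNK):
            pass

def measure_throughput(host, port, total=TOTAL):
    """Return one-way bulk-transfer throughput in MB/s."""
    buf = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection((host, port)) as s:
        start = time.monotonic()
        while sent < total:
            s.sendall(buf)
            sent += CHUNK
        elapsed = max(time.monotonic() - start, 1e-9)
    return (sent / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # Demo over loopback; on real hardware, record a single pair's
    # baseline, then launch several pairs at once and compare.
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=sink_server, args=(listener,), daemon=True).start()
    rate = measure_throughput("127.0.0.1", listener.getsockname()[1])
    print(f"throughput: {rate:.1f} MB/s")
```

The idea is the same as the netpipe procedure: establish a single-pair baseline, then run several pairs simultaneously and look for degradation.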
> Douglas Eadline wrote:
>> I am constantly amazed at how many people buy the
>> latest and greatest node hardware and then connect
>> them with a sub-optimal switch (or cheap cables), thus reducing
>> the effective performance of the nodes (for parallel
>> applications). Kind of "penny wise and pound foolish," as they say.
> I sincerely appreciate all the comments about my problem. I will reply
> to them in due time. However, I'd like to comment on this, which
> admittedly is off-topic from my original posting.
> I don't disagree with what you're saying. The problem is how
> to recognize "sub-optimal" equipment. For example, I see
> three tiers in ethernet switching hardware:
> 1) The low-end, e.g. Netgear, Linksys, D-link, ...
> 2) The mid-range, e.g. HP Procurve, Dell, SMC, ...
> 3) The high-end, e.g. Cisco, Foundry, ...
> What I, as a system manager, not as an Electrical Engineer,
> have trouble understanding is what the true differences
> are between these levels and, within one level, between
> the various vendors.
> These days I suspect that many of the vendors are using
> ASICs made by other chip companies, and that many vendors
> use the same ASICs. Assuming that's true, where's the
> added value that justifies the cost differences? Sometimes
> the value is in the "management" abilities of a device.
> I don't deny this can be a major selling point in a
> large enterprise environment, but in a 30-node cluster,
> or a small LAN, it's hard to justify paying for this.
> In terms of ethernet performance, once a device
> can handle wirespeed communication on all ports,
> where's the added value that justifies the added
> cost? I'm looking for empirical answers, which
> aren't always easy to find, and are sometimes hard to understand.
> In the case of my cluster, it was configured and purchased
> before I got here, so I had nothing to do with choosing
> its components, but I have to admit that I'm not
> sure what I would have done differently.
> Jon Forrest
> Unix Computing Support
> College of Chemistry
> 173 Tan Hall
> University of California Berkeley
> Berkeley, CA
> jlforrest at berkeley.edu