gige benchmark performance
math at velocet.ca
Thu Mar 14 22:12:21 PST 2002
On Thu, Mar 14, 2002 at 03:09:56PM -0700, Mark Hartner wrote:
> We are working on putting together a 32 node cluster at the University of
> Utah. I have benchmarked several Gigabit cards over cat5,
> Netgear GA620T
> Netgear GA622T
> Intel Pro/1000T Server
> I have put the performance results on my webpage.
> The Netgear GA622T performed rather poorly and I am wondering if it has
> anything to do with our setup. Is there anyone else who could share their
> experiences with the GA622T or other cards based on the NS 83820
> chipset? I have seen unfavorable postings in the past regarding NS 83820
> cards, but our results are even worse than expected.
I did some extremely rough tests with a GB of data and dd+netcat over TCP
(probably really inaccurate :), but with a 4000-byte MTU (jumbo frames) I was
able to get 70MB/s (560Mbps) between two NS83820-based cards on two Tyan Tiger
2460s with dual 1.333GHz Tbirds (this was while I was testing the boards with
Tbirds to see if they would work at all -- which they did, for the month I had
that test setup running :). One card was a 32 bit Addtron, the other a 64 bit
PureData card in a 64 bit slot on the 2460.
It was in transferring from the PureData to the Addtron that I got 70MB/s
(560Mbps). The Addtron was not nearly as good at sending out data as the
PureData, and I only achieved about 25MB/s (200Mbps) to the PureData. (I
didn't have two PureDatas to play with at the time, unfortunately.)
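For anyone who wants to repeat the rough test, it was nothing fancier than dd
piped through netcat. A sketch (the hostname and port number here are just
placeholders, and netcat option syntax differs between the traditional and
OpenBSD versions):

```shell
# Crude point-to-point TCP throughput test with dd + netcat.
# 'nodeA' and port 5001 are hypothetical; adjust for your setup.

# On the receiving node -- accept the stream and discard it:
nc -l -p 5001 > /dev/null

# On the sending node -- push 1GB of zeros through the link and time it:
time dd if=/dev/zero bs=1M count=1024 | nc nodeA 5001

# Throughput in MB/s is roughly 1024 / elapsed-seconds;
# multiply by 8 for Mbps (e.g. 70MB/s is about 560Mbps).
```

Note this lumps dd, the TCP stack, and netcat overhead together, so it's only
good for ballpark numbers -- as I said, probably really inaccurate.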
Addtrons seem to be the worst of the 83820-based cards. I have a few 32 and 64
bit ARKs (they are the *EXACT* same design as the DLink 82830 cards, down to
the same exact component model numbers), and had some DLinks, SMCs and
PureDatas on loan to me from Ben LaHaise at Red Hat, who is behind the
NS83820 driver. (I lent him my crappy Addtrons so he could tell me what's
wrong with them :) I didn't get to run as many tests as I wanted, however
(now I have only my own ARKs and a few Addtrons I'm going to return soon).
I'll see if I can get Ben to run some NetPIPE stats for me/us that I can post
to the list later.
Interestingly enough, with massive help from Phil (the InterMezzo guy), I
finally got the driver supplied by Addtron to compile (Phil grumbled about
'using compilation standards from 10 years ago' as he tackled nasty
makefiles, a nest of incorrectly referenced kernel headers, and lots of
missing-symbol problems that were beyond me). The Addtron driver actually ran
faster for the Addtron cards on receive, which is how I got the 70MB/s figure
-- with the NS83820 driver I was getting only 56MB/s. Ben LaHaise said "Oh, I
know why that is!" but didn't elaborate -- "I'll be fixing that soon". Hope
he finds the time for all us Beowulfers. :)
One problem that Ben LaHaise mentioned was that the 2460 "is missing a
pulldown resistor on some boards, so some cards won't function correctly in a
32 bit or 64 bit slot". Not sure if he was referring to a 64 bit card in a 32
bit slot or a 32 bit card in a 64 bit slot. (I'll get more details on this
from him.) Don't know if he's played with the Tyan Tiger 2466MPX board yet to
see if it also has this problem.
I will try to get NetPIPE and run some tests to see how fast things run on
the ARKs in different slots (32/33 or 64/66 on the 2466 boards we have
around). I seem to recall the ARKs getting about 30MB/s (240Mbps) with a 1500
MTU, but 50+MB/s (400Mbps) with jumbo frames (MTU 4000). I'll really have to
get some more solid numbers out of NetPIPE to be sure.
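For completeness, the jumbo-frame setup is just raising the MTU on both ends
(the interface name eth1 is an assumption; the card and driver both have to
support the larger frames):

```shell
# Enable ~4000-byte jumbo frames (hypothetical interface eth1);
# run on both machines, and make sure the driver supports it.
ifconfig eth1 mtu 4000

# Confirm the new MTU took effect:
ifconfig eth1
```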
Oh -- to avoid any switch latency problems and the like, I had the cards
plugged directly into one another with regular cables (since GbE doesn't have
a 'crossover cable' concept).
Ken Chase, math at velocet.ca * Velocet Communications Inc. * Toronto, CANADA