[tulip] Re: v0.92 and NetPIPE 2.4

John Connett jrc@art-render.com
Thu, 18 Jan 2001 16:21:53 +0000


Some more evidence to ponder.  It seemed possible that the root of the
problem was the KNE100TX in the receiver system.  To test this I
replaced it with an EEPRO100 using the default Red Hat 6.2 driver
(eepro100.c:v1.09j-t 9/29/99 ..., eth2: Intel PCI EtherExpress
Pro100 82557).  NetPIPE then ran without problems against the
transmitter system fitted with either the KNE110TX or the KNE111TX,
using any of the v0.91g-ppc, v0.92, or v0.92t drivers.
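
For reference, each run looked something like this (the hostname is
made up, and I am assuming I have the NetPIPE 2.4 transmitter and
receiver options right):

    receiver$    NPtcp -r
    transmitter$ NPtcp -t -h receiver.example.com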

I then replaced the EEPRO100 in the receiver system with a 3C905C,
again with the default Red Hat 6.2 driver (3c59x.c:v0.99H 27May00 ...,
eth2: 3Com 3c905C Tornado).  With a KNE110TX in the transmitter system
I again encountered the very marked loss of performance for block
sizes of 4093 bytes and above, with any of the v0.91g-ppc, v0.92, or
v0.92t drivers.

If the block size is restricted to less than 4093 bytes,
/proc/net/dev shows few errors.
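
That restriction is just NetPIPE's upper bound on the block size;
assuming -u is still the switch for it in 2.4, it amounts to:

    transmitter$ NPtcp -t -h receiver.example.com -u 4092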

After a full run of NetPIPE, /proc/net/dev contains the following:

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
  eth1:307517474  917852    0    0    0     0          0         0 307811527  913203 36453    0    0 17883   36453          0
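
To save squinting at the raw file between runs I knocked together a
trivial parser.  This is a quick sketch only; it assumes the
2.2-kernel /proc/net/dev layout quoted above (eight receive fields,
then eight transmit fields, per interface line):

/*
 * txstats.c -- print the transmit error counters for one interface
 * straight out of /proc/net/dev, so the deltas between NetPIPE runs
 * are easy to eyeball.
 *
 *     cc -o txstats txstats.c
 *     ./txstats eth1
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *ifname = argc > 1 ? argv[1] : "eth1";
    char line[512];
    FILE *fp = fopen("/proc/net/dev", "r");

    if (fp == NULL) {
        perror("/proc/net/dev");
        return 1;
    }
    while (fgets(line, sizeof(line), fp) != NULL) {
        unsigned long rx[8], tx[8];
        char *name, *colon = strchr(line, ':');

        if (colon == NULL)              /* the two header lines have no ':' */
            continue;
        *colon = '\0';
        for (name = line; *name == ' '; name++)
            ;                           /* interface name is right-justified */
        if (strcmp(name, ifname) != 0)
            continue;
        if (sscanf(colon + 1,
                   "%lu %lu %lu %lu %lu %lu %lu %lu"
                   " %lu %lu %lu %lu %lu %lu %lu %lu",
                   &rx[0], &rx[1], &rx[2], &rx[3],
                   &rx[4], &rx[5], &rx[6], &rx[7],
                   &tx[0], &tx[1], &tx[2], &tx[3],
                   &tx[4], &tx[5], &tx[6], &tx[7]) == 16)
            printf("%s TX: errs %lu  colls %lu  carrier %lu\n",
                   ifname, tx[2], tx[5], tx[6]);
        fclose(fp);
        return 0;
    }
    fclose(fp);
    fprintf(stderr, "%s: not found in /proc/net/dev\n", ifname);
    return 1;
}

Running it before and after a NetPIPE run makes the jump in the
transmit counters obvious.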

I suspect the high values of errs, colls, and carrier on the transmit
side are related to the slowdown!
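
If those counters really are the culprit, one explanation worth
ruling out would be a duplex mismatch between the card and its link
partner, since collisions combined with carrier errors are said to be
its classic signature.  Donald Becker's mii-diag (or mii-tool from
net-tools) should show what each end actually negotiated:

    # mii-diag eth1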

Looking back through the archives, it appears that this problem of
low performance with block sizes of around 4k and upwards has been
encountered by others on everything from a crossover cable to large
Beowulf clusters.  However, I have not found a clear explanation (or
fix) for it.  Anyone care to enlighten me?

Thanks in anticipation

--
John Connett (jrc@art-render.com)