[Beowulf] bonding and bandwidth
Jean-Marc.Larre at tlse.toulouse.inra.fr
Wed Jun 9 01:25:36 PDT 2004
Thank you very much, everybody, for your answers.
I'm using two onboard Broadcom 5702 chips with the tg3 driver. The
Broadcom 5702 sits on a 32-bit, 66 MHz PCI bus.
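For what it's worth, a back-of-the-envelope check suggests the shared PCI bus itself may be close to the ceiling for a dual-gigabit bond (the 60% efficiency figure below is an assumption for illustration, not a measurement):

```python
# Rough theoretical peak of a 32-bit, 66 MHz PCI bus.
bus_width_bits = 32
clock_hz = 66_000_000
peak_bits_per_s = bus_width_bits * clock_hz   # 2_112_000_000 bits/s
peak_gbit = peak_bits_per_s / 1e9             # ~2.11 Gbit/s

# Two saturated gigabit NICs would need ~2 Gbit/s of PCI bandwidth,
# leaving almost no headroom once bus arbitration and protocol
# overhead on shared PCI are accounted for.  Assuming (hypothetically)
# 60% usable efficiency:
usable_at_60pct = peak_gbit * 0.6             # ~1.27 Gbit/s
print(f"theoretical peak: {peak_gbit:.2f} Gbit/s")
print(f"at 60% efficiency: {usable_at_60pct:.2f} Gbit/s")
```

So even before any bonding-driver issues, ~900 Mbit/s is not far from what a shared 32-bit/66 MHz bus might realistically deliver to two NICs.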
Has anyone gotten better than 900 Mbit/s with bonding.c and two gigabit links?
Michael T. Prinkey wrote:
> Hi Jean-Marc,
> What nics are you using? (if onboard, what motherboard?) Are the nics on
> separate PCI buses? Are the PCI bus(es) 32- or 64-bit? Are they running
> at 33 or 66 MHz or faster? Have you tried using the tg3 driver instead?
> Depending on driver and PCI bus issues, your throughput can be limited by
> PCI bus. Also, I think that some of the gigabit ethernet drivers have
> problems with bonding due to the interrupt grouping or some such.
> Quick googling found this:
> It seems to be a known problem.
> Mike Prinkey
> Aeolus Research, Inc.
> On Mon, 7 Jun 2004, Jean-Marc Larré wrote:
>>I'm testing bonding.c with two gigabit ethernet links toward an HP 2848
>>switch. My kernel is 2.4.24 from kernel.org on RedHat 9.0.
>>modules.conf looks like this:
>>[root at node01 root]# cat /etc/modules.conf
>>alias eth1 bcm5700
>>alias eth2 bcm5700
>>alias bond0 bonding
>>options bond0 miimon=100 mode=balance-alb updelay=50000
>>My problem:
>>I get a bandwidth of around 900 Mbit/s, not 1800 Mbit/s, with netperf,
>>iperf, or NetPIPE. Could you explain why I'm not getting 1800 Mbit/s,
>>and where my problem is?
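One thing worth checking with the configuration quoted above: in balance-alb mode a single TCP flow stays on one slave, so a single-stream benchmark can never exceed one link. A rough sketch of how to verify the mode the kernel actually loaded and retest with parallel streams (hostnames here are placeholders; iperf flags may vary by version):

```shell
# Confirm the bonding mode and slave state the kernel actually came up with
cat /proc/net/bonding/bond0

# balance-alb balances per-peer, so a lone netperf/iperf stream rides one
# slave.  Try several parallel streams (and ideally several peers) instead:
iperf -c node02 -P 4 -t 30   # 4 parallel TCP streams for 30 seconds
```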
Jean-Marc Larré - Jean-Marc.Larre at toulouse.inra.fr - 05 61 28 54 27
INRA - Génopole - Unité de Biométrie et Intelligence Artificielle
BP 27 - 31326. Castanet Tolosan cedex