[Beowulf] split traffic to two interfaces for two "subnets"

Joshua Baker-LePain jlb17 at duke.edu
Thu May 11 15:23:48 PDT 2006


On Thu, 11 May 2006 at 5:12pm, Yaroslav Halchenko wrote

> Just thought that it might be interesting for someone
>
> ok -- I set up bonding ;-) and it works ;)
> with my tuned net/ip params:
>
> net.core.wmem_max = 524288
> net.core.rmem_max = 524288
> net.ipv4.tcp_rmem = 4096 524288 524288
> net.ipv4.tcp_wmem = 4096 65536 524288
>
> and with MTU 9000 on both ends, running two netperfs (netperf params
> are  -P 0 -c -C -n 2 -f K -l 20 ) to two nodes gives the following
> results:
>
> (in KBytes/sec... why didn't I make it in megabits? :))
>                  node1        node2        total
> average       64276.88     82695.07    146971.95
> std           20555.99     20215.57     10685.59
> min           29857.41     39972.52    129420.64
> max          110149.35    112383.19    166813.32
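
For anyone wanting to reproduce that, the quoted tuning plus the parallel
netperf run would look roughly like this (bond0 and the node1/node2
hostnames are placeholders, so adjust them to your setup):

   # apply the quoted TCP buffer tuning (as root)
   sysctl -w net.core.wmem_max=524288
   sysctl -w net.core.rmem_max=524288
   sysctl -w net.ipv4.tcp_rmem="4096 524288 524288"
   sysctl -w net.ipv4.tcp_wmem="4096 65536 524288"

   # jumbo frames on the bonded interface (both ends have to match)
   ifconfig bond0 mtu 9000

   # one netperf per target node, run in parallel, 20 seconds each
   netperf -H node1 -P 0 -c -C -n 2 -f K -l 20 &
   netperf -H node2 -P 0 -c -C -n 2 -f K -l 20 &
   wait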

My quick and dirty test was to do NFS reads to multiple clients out of the 
server's memory (to rule out disk contention).  So I did this (roughly 
sketched in the commands below):

o On client 1, run 'tar cO $DATA | cat > /dev/null' over ~4GB of data from
   the server (4GB being the amount of memory in the server).  This caches
   the data in the server's RAM.  Do the same from client 2.

o On clients 1 and 2, tar the same data to /dev/null on both clients
   simultaneously.
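
In command form, each pass was something like the following (the $DATA
path is whatever data set you use, roughly the size of the server's RAM):

   # pass 1: run on client 1, then on client 2; this pulls ~4GB over NFS
   # and leaves the whole data set sitting in the server's page cache
   tar cO $DATA | cat > /dev/null

   # pass 2: start on both clients at the same time; elapsed time over
   # ~4GB per client gives the combined read throughput, disks untouched
   time tar cO $DATA | cat > /dev/null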

In summary, bonding mode 0 (balance-rr) got 168MiB/s combined throughput 
using NFS over UDP and 183MiB/s using NFS over TCP.  Bonding mode 4 
(802.3ad) got 109MiB/s combined throughput using NFS over UDP and 106MiB/s 
using NFS over TCP.  All tests were with MTU=9000 and NFS rsize/wsize=32K.
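
For reference, the pieces behind those numbers would look roughly like
this (the export path, mount point, and bond0 name are placeholders, and
mode 4 also needs a matching 802.3ad aggregation group on the switch):

   # /etc/modprobe.conf on the server: mode=0 is balance-rr, mode=4 is
   # 802.3ad (LACP); miimon enables link monitoring
   alias bond0 bonding
   options bonding mode=0 miimon=100

   # on each client: NFS over TCP with 32K transfers; swap 'tcp' for
   # 'udp' to get the UDP runs
   mount -t nfs -o tcp,rsize=32768,wsize=32768 server:/export /mnt/data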

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


