NFS file server performance

Schilling, Richard RSchilling at
Fri Mar 23 16:28:05 PST 2001

You might be reducing your throughput by running three NICs on the same
physical segment through the same switch.  The bottleneck will be the I/O on
the file server, since it has to serve I/O through three NICs to reach the
same set of disks.

Well, you might check whether any logical subnets are set up (you know,
are the computers in the cluster grouped together under different IP
subnets?).  There may have been a reason for that.
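A quick way to check the subnet layout, assuming a Linux node (the interface names are just examples; adjust for your hardware):

```shell
# Inspect the interface and routing configuration on the file server.
ifconfig -a    # list all NICs with their IP addresses and netmasks
route -n       # show which subnet is routed out of which interface
```

If the three NICs carry addresses in three different subnets, the split was probably deliberate.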

Good luck.


-----Original Message-----
From: Christian Storm [mailto:chr at]
Sent: Tuesday, March 20, 2001 12:33 PM
To: beowulf at
Subject: NFS file server performance


I just took over a Beowulf cluster and I'm having a great time
reconfiguring everything ... :)
It is partly used on large databases (up to 20 GB). Local storage is
not possible, so the databases reside on a disk of a dedicated
file server that is mounted on all nodes of the cluster.

Here are the questions:

1. To improve performance, two additional network cards were put into the
file server. The cluster was then split into three networks (all sitting
on the same switch). Each subnetwork mounts the filesystem through a
different NIC. This *seems* to work, but it is rather static and
not very elegant ... .
I experimented with the new 2.4 bridging feature (by assigning all 3 NICs
to a bridge), but it just seems to add redundancy, not performance. I
assume some kind of channel bonding would be needed - as far as I know
supported by the network cards (3c980) but not by the driver (3c90x).

Does anybody know a solution?
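One alternative to static subnets is the Linux 2.4 bonding driver, which aggregates several NICs behind one logical interface. A minimal sketch, assuming the bonding module and the ifenslave tool are available (interface names and the address are examples only):

```shell
# Load the bonding driver in round-robin mode (mode=0, balance-rr),
# which stripes outgoing packets across all slave NICs.
modprobe bonding mode=0

# Bring up the logical bond interface (example address/netmask).
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up

# Enslave the three physical NICs to bond0.
ifenslave bond0 eth0 eth1 eth2
</imports>
```

Note that the switch generally has to support link aggregation (e.g. trunking/EtherChannel on the relevant ports) for traffic in both directions to benefit.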

2. What would be a good number of NFS daemons to run on the file server?
(accessed by 12 nodes through 3 NICs; PIII 500 system with SCSI disks)
Currently I'm running 16, with the socket input queue resized to 1 MB.
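For reference, the daemon count and socket queue tuning described above might be set up along these lines on a 2.4 kernel (values are illustrative, not recommendations):

```shell
# Enlarge the default receive socket buffers to ~1 MB so each nfsd
# gets a bigger input queue; do this *before* starting the daemons.
echo 1048576 > /proc/sys/net/core/rmem_default
echo 1048576 > /proc/sys/net/core/rmem_max

# Start 16 kernel NFS daemon threads.
rpc.nfsd 16
```

A common rule of thumb is to watch the nfsd thread utilization under load and add threads until the busiest ones are no longer saturated; with 12 clients, 16 threads is a reasonable starting point.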

Thanks in advance

Beowulf mailing list, Beowulf at
To change your subscription (digest mode or unsubscribe) visit
