[scyld-users] IPC Network vrs Management Network

Joel Krauska jkrauska at cisco.com
Thu Feb 24 02:08:05 PST 2005


I am exploring using a different IPC network in my cluster,
i.e. running MPI over RNICs or InfiniBand.

Some design questions:

1. Should the head node also be connected to the high-speed IPC network?

2. How do I tweak the /etc/beowulf/config file to support this?
   (A rough sketch of what I have in mind follows this list.)

3. Is it possible to PXE-boot/DHCP on one interface, but issue bproc
starts over the high-speed interface?
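
To make question 2 concrete, here is roughly the sort of config I have
in mind. The interface names and addresses below are placeholders and
I'm only guessing at the exact keyword syntax, so treat this as a
sketch of the intent rather than a working config:

    # management network: normal ethernet, used for PXE/DHCP and stats
    pxeinterface eth0

    # IPC network: the high-speed interconnect,
    # which I would like bproc to use for node communication
    interface foo

    # node address range (exact syntax approximate here)
    iprange 10.1.0.100 10.1.0.163

The open question is whether iprange is meant to describe the
pxeinterface subnet (for DHCP) or the interface subnet (for bproc),
now that the two are different.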

Benchmarks like HPL (Linpack) seem to issue a lot of master->slave
communication in their normal operation, as opposed to Pallas, which
seems to do mostly slave<->slave communication.

This seems to imply that Linpack is somewhat bound to your rsh/ssh/bproc
choice for spawning MPI apps, which seems flawed to me, since that isn't
really stressing MPI. (Comments?)

The above seems to encourage using the higher-speed interconnect from
the head node to issue the bproc calls, leaving the normal ethernet
only for PXE and "management things" like stats.

The "interface" keyword in the config coupled with the
"pxeinterface" keyword seems to encourage this type of setup,
but I find that if "interface foo" is set, the pxeserver doesn't
want to restart if the iprange command doesn't map to the IP
subnet on interface foo.  (suggesting that the dhcp functionality
wants to bind to "foo" and not the given pxeinterface)

Thus "interface" must be the pxeinterface. (maybe someone's not parsing
the pxeinterface command?)
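
For reference, the combination that fails for me looks roughly like
this (names and addresses are placeholders, syntax from memory):

    interface foo                    # high-speed IPC interface, 10.2.0.x subnet
    pxeinterface eth0                # management interface, 10.1.0.x subnet
    iprange 10.1.0.100 10.1.0.163    # management subnet, where DHCP should answer

The pxeserver only comes up once the iprange is moved onto foo's
subnet, i.e. once "interface" and the PXE/DHCP network are the same.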


Does anyone here have a successful cluster where the head node is
connected to both the high-speed IPC network and the "management" network?


Thanks,

--joel


