[Beowulf] Bonding Ethernet cards / [was] 512 nodes Myrinet cluster Challenges

Bruce Allen ballen at gravity.phys.uwm.edu
Mon May 8 10:40:52 PDT 2006


> Date: Fri, 05 May 2006 10:08:47 +0100
> From: John Hearns <john.hearns at streamline-computing.com>
> Subject: Re: [Beowulf] 512 nodes Myrinet cluster Challenges
>
> On Fri, 2006-05-05 at 10:23 +0200, Alan Louis Scheinine wrote:
>> Since you all are talking about IPMI, I have a question.
>> The newer Tyan boards have a plug-in IPMI 2.0 that uses
>> one of the two Gigabit Ethernet channels for the Ethernet
>> connection to IPMI.  If I use channel bonding (trunking) of the
>> two GbE channels, can I still communicate with IPMI on Ethernet?
>
> We recently put in a cluster with bonded gigabit; however, that was done
> using a separate dual-port PCI card.
> On Supermicro boards, the IPMI card by default uses the same MAC address
> as the eth0 port it shares. You could reconfigure this, I think.
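
For the bonding half of Alan's question, a typical Linux setup with the stock
bonding driver looks roughly like this (a sketch only, assuming Red Hat-style
network scripts; the mode, device names, and addresses below are just
placeholders):

   # /etc/modprobe.conf -- load the bonding driver for bond0
   alias bond0 bonding
   options bond0 miimon=100 mode=balance-alb

   # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
   DEVICE=bond0
   IPADDR=192.168.1.10
   NETMASK=255.255.255.0
   ONBOOT=yes
   BOOTPROTO=none

   # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
   DEVICE=eth0
   MASTER=bond0
   SLAVE=yes
   ONBOOT=yes
   BOOTPROTO=none

Mode balance-alb needs no special switch support, while 802.3ad (LACP)
requires the switch ports to be configured as a trunk.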

As for IPMI over the shared port: we are doing this on our Supermicro 
H8SSL-i systems.  We assign a DIFFERENT MAC and a DIFFERENT IP to our IPMI 
cards (different from the MAC/IP of the Ethernet port that the IPMI card 
piggybacks on).
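
One way to do that kind of assignment is in-band with ipmitool from the node
itself (a sketch: LAN channel 1 and the addresses below are only placeholders;
ipmitool lan print shows what your board currently has):

   # show the BMC's current LAN settings (channel number varies by board)
   ipmitool lan print 1

   # give the BMC its own static IP, separate from the host's eth0 address
   ipmitool lan set 1 ipsrc static
   ipmitool lan set 1 ipaddr 192.168.2.10
   ipmitool lan set 1 netmask 255.255.255.0
   ipmitool lan set 1 defgw ipaddr 192.168.2.1

   # optionally give it its own MAC as well (placeholder address shown;
   # some BMC firmware does not allow changing this)
   ipmitool lan set 1 macaddr 00:30:48:aa:bb:cc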


