[Beowulf] PXE boot with X7DWT-INF IB onboard card

Andrew Holway andrew.holway at gmail.com
Mon Nov 12 06:01:56 PST 2012


2012/11/12 Vincent Diepeveen <diep at xs4all.nl>

> The problem is not the InfiniBand NICs.


Yes it is. You have to flash the device firmware in order for the BIOS
to recognize it as a bootable device.
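
On HCAs that do support a boot ROM, the usual route is to burn an expansion
ROM image into the card with mstflint (or flint from the Mellanox MFT
tools). A rough sketch, assuming the HCA sits at PCI address 08:00.0 and
that you have a ROM image actually matching your board (the file name below
is made up):

  # check current firmware and whether an expansion ROM is present
  mstflint -d 08:00.0 query

  # burn the boot expansion ROM so the BIOS offers the HCA as a boot device
  mstflint -d 08:00.0 brom FlexBoot-ib.rom

Obviously double-check the image against your exact card before burning
anything.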

Yes, you're a bit screwed for booting over IB, but booting over 1GE is
perfectly acceptable. The 1GE bit rate is not 1000 miles away from
single-disk performance anyway (100 MB/s or so). A lot of the clusters I
have put together were 1GE connected for boot and management.
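
If you do go the 1GE route, the boot server side is just standard DHCP plus
TFTP on the master. A minimal ISC dhcpd.conf sketch (addresses and file
names are made up for illustration):

  subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.101 192.168.1.116;      # the 16 nodes
      next-server 192.168.1.1;                # master node running tftpd
      filename "pxelinux.0";                  # PXELINUX bootloader
  }

The nodes then fetch a kernel and initramfs over TFTP and mount their root
filesystem from the master, e.g. over NFS.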

Typically not very much is written to the local disk once the node has
booted anyway, just a few logs here and there. As you have an IB connection,
you don't have to worry about it.

You might want to consider iSCSI. This is supported by the open-source gPXE
project, and you can set up a LUN per server.
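
A sketch of how that could look with gPXE chainloaded from the normal PXE
ROM; the target IQN and addresses below are hypothetical:

  #!gpxe
  dhcp net0
  sanboot iscsi:192.168.1.1::::iqn.2012-11.org.example:node01

On the master you would export one LUN per node with tgtd or another iSCSI
target, so each node sees its own private root disk.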

Ta

Andrew



> You need a motherboard with a
> BIOS fixed to PXE boot over InfiniBand, that's all.
> So you need to contact Supermicro to fix the BIOS if it doesn't boot
> them, if they are willing to do that.
>
> On Nov 12, 2012, at 1:56 PM, Duke Nguyen wrote:
>
> > On 11/12/12 7:05 PM, Vincent Diepeveen wrote:
> >> You can boot over PXE using normal network cables and boot over
> >> the gigabit ethernet using a cheap ethernet
> >> router.
> >
> > Thanks Vincent. Yes, we are going to try that with a gigabit hub. I just
> > wonder whether anyone has other experience, for example the PXE boot at
> > http://ulm.ccc.de/hg/pxeboot/raw-annotate/8c54643186a9/src/
> > drivers/net/mlx_ipoib/doc/README.boot_over_ib. That PXE boot code lists
> > our cards as supported.
> >
> > D.
> >
> >>
> >> On Nov 12, 2012, at 12:33 PM, Duke Nguyen wrote:
> >>
> >>> Hi folks,
> >>>
> >>> We are still in the process of building our first small cluster. Right
> >>> now we have 16 diskless nodes with 8GB RAM on X7DWT-INF boards and a
> >>> master, also on X7DWT-INF, with a 120GB disk and 16GB RAM. These boards
> >>> have a built-in InfiniBand card (InfiniBand MT25204 20Gbps controller),
> >>> and I hoped that we could create a boot server on the master node for
> >>> the 16 clients using the IB connection.
> >>>
> >>> Unfortunately, after some reading, I found out that our built-in card
> >>> is too old for Mellanox FlexBoot:
> >>>
> >>> $ lspci -v | grep Mellanox
> >>> 08:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx
> >>> HCA] (rev 20)
> >>>      Subsystem: Mellanox Technologies MT25204 [InfiniHost III Lx
> >>> HCA]
> >>>
> >>> whereas FlexBoot requires at least a ConnectX-2/ConnectX-3 adapter to work.
> >>>
> >>> My questions are:
> >>>
> >>>   * is anybody using the same (or similar) main boards and able to
> >>> boot using PXE?
> >>>   * if a PXE server over InfiniBand is impossible, is it OK to use a
> >>> gigabit connection? Or should we go for 16 disks for these 16 clients
> >>> and not care much about booting over IB or IP? (more money, of course)
> >>>
> >>> Thanks,
> >>>
> >>> D.
> >>
> >>
> >
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>