Infiniband (was RE: Beowulf: A theoretical approach)

Greg Lindahl glindahl at hpti.com
Thu Jun 22 11:25:42 PDT 2000


> I will say that several people have mentioned that Infiniband is for
> server area networks, not system area networks (ie clusters).

This is true, if the rumors I've heard are accurate. Server area networks are
for accessing storage: a handful of conversations with huge block sizes, not
tiny messages to any of thousands of hosts in the entire machine. There will be
an annoying limit on the number of connections, and the standard only guarantees
one outstanding connectionless message.
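
To see why a connection cap bites a cluster: with all-to-all reliable
connections, state grows linearly per adapter and quadratically cluster-wide.
A rough sketch below; the 1024-node size and the per-connection context size
are numbers I made up for illustration, not anything from the spec.

#include <stdio.h>

int main(void)
{
    const long nodes     = 1024;   /* hypothetical cluster size                   */
    const long ctx_bytes = 1024;   /* assumed state per connection, illustrative  */

    long per_node = nodes - 1;                /* connections each adapter must hold */
    long total    = nodes * (nodes - 1) / 2;  /* distinct node pairs in the cluster */

    printf("per-adapter connections:  %ld (~%ld KB of context)\n",
           per_node, per_node * ctx_bytes / 1024);
    printf("cluster-wide connections: %ld\n", total);
    return 0;
}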

The one thing Infiniband will do is provide a much better bus than PCI. PCI-X
improves on PCI in ways beyond just large-transfer bandwidth, and Infiniband is
better still. I have yet to see any sign that "native" Infiniband switches are
going to be good. IBM's announcement of 8-way, 6-megabit switches (perhaps I've
got the details wrong; it doesn't really matter) available 12 months from now
just doesn't excite me much.
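
For what it's worth, the bandwidth arithmetic I have in mind looks roughly like
this; the bus widths, clocks, and link rates are the nominal figures people
usually quote, not anything from IBM's announcement.

#include <stdio.h>

int main(void)
{
    /* Parallel buses: peak MB/s = width (bits) * clock (MHz) / 8 */
    double pci_32_33   = 32.0 * 33.0  / 8.0;   /* ~132 MB/s  */
    double pci_64_66   = 64.0 * 66.0  / 8.0;   /* ~528 MB/s  */
    double pcix_64_133 = 64.0 * 133.0 / 8.0;   /* ~1064 MB/s */

    /* Infiniband links: 2.5 Gb/s signaling per lane, 8b/10b coding,
       so 2.0 Gb/s of data per lane = 250 MB/s */
    double ib_1x = 1 * 250.0;
    double ib_4x = 4 * 250.0;

    printf("PCI   32-bit/33 MHz:  %5.0f MB/s\n", pci_32_33);
    printf("PCI   64-bit/66 MHz:  %5.0f MB/s\n", pci_64_66);
    printf("PCI-X 64-bit/133 MHz: %5.0f MB/s\n", pcix_64_133);
    printf("Infiniband 1x link:   %5.0f MB/s\n", ib_1x);
    printf("Infiniband 4x link:   %5.0f MB/s\n", ib_4x);
    return 0;
}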

So what will we do? We'll stick Myrinet cards (or Gigabit Ethernet or
Quadrics or whatever) on Infiniband just like we stick Myrinet on PCI today.

-- g
