[Beowulf] InfiniBand channel bundling?

Prentice Bisbal prentice.bisbal at rutgers.edu
Mon Oct 27 13:52:54 PDT 2014


On 10/27/2014 12:03 PM, Kilian Cavalotti wrote:
> Hi Jörg,
>
> On Mon, Oct 27, 2014 at 3:56 AM, Jörg Saßmannshausen
> <j.sassmannshausen at ucl.ac.uk> wrote:
>> I got some older dual-port Mellanox Technologies MT23108 cards (8.5 Gb/sec (4X))
>> and currently I am only using one of the two ports on them.
>> I was wondering, is it possible with InfiniBand to utilise the second port in
>> such a way that I can increase the bandwidth of the network (and maybe lower
>> the latency)?
> Last time I checked, the Linux bonding driver didn't allow bonding IB
> interfaces to increase throughput. I'd be happy to hear that this has
> changed, but I think only redundancy modes (such as active-backup)
> were possible.
>
> Now, Open MPI can take advantage of multiple, independent ports
> connected to the same fabric: see
> http://www.open-mpi.org/faq/?category=openfabrics#ofa-port-wireup for
> details.
>
> Cheers,
This makes sense. Since IB operates mostly in userspace, I would expect 
setting up IB like this to be done by the applications, not the operating 
system. Since TCP/IP networking is mostly handled by the kernel, it would 
make sense for IPoIB to be configured by the kernel, though. Feel 
free to correct me if this logic is wrong.
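
For example, with Open MPI the port selection happens entirely in the 
library: per the FAQ link above, the openib BTL will find the active 
ports on the fabric and stripe large messages across them. If you want 
to be explicit about it, something like this should do it (untested, 
and I'm assuming the MT23108 shows up as mthca0; the -np count and the 
binary name are just placeholders):

mpirun --mca btl openib,sm,self \
       --mca btl_openib_if_include mthca0:1,mthca0:2 \
       -np 16 ./your_app

Leaving btl_openib_if_include out entirely lets Open MPI use every 
active port it finds, which is the default behaviour the FAQ describes.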

You could increase the bandwidth, but not lower the latency. If 
anything, the latency might go up, since some additional work is needed 
to coordinate the data going over the two different connections.
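
On the IPoIB side, the kernel bonding Kilian mentioned does work, but 
only in the redundancy modes, so it buys you failover rather than extra 
bandwidth. Roughly like this with RHEL-style network scripts (just a 
sketch; interface names and addresses are made up):

/etc/sysconfig/network-scripts/ifcfg-bond0:
  DEVICE=bond0
  IPADDR=10.10.0.1
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none
  BONDING_OPTS="mode=active-backup miimon=100 primary=ib0"

/etc/sysconfig/network-scripts/ifcfg-ib0 (and likewise for ib1):
  DEVICE=ib0
  TYPE=InfiniBand
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none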

-- 
Prentice


