[Beowulf] InfiniBand channel bundling?

Jörg Saßmannshausen j.sassmannshausen at ucl.ac.uk
Tue Oct 28 02:43:49 PDT 2014


Hi Peter,

what I am after in the end is installing the older DDR network, which I got 
hold of, in some rather newer machines (Ivy Bridge). The reason for that is 
budget, as so often these days, and hence I wanted to try it out first. 
I would therefore hope that the bottleneck is the network and not the 
PCIe/PCI-X bus. I hope that makes sense.
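
As a quick sanity check on that, something along the lines of the rough sketch 
below should show which side is the limit. (The sysfs paths are the standard 
Linux ones for the IB drivers; only PCIe devices expose the link-speed files, 
so the old PCI-X HCAs will simply not show up in the second loop.)

import glob
import os

# Negotiated InfiniBand rate per port, e.g. "20 Gb/sec (4X DDR)"
for rate_file in sorted(glob.glob("/sys/class/infiniband/*/ports/*/rate")):
    parts = rate_file.split("/")
    hca, port = parts[4], parts[6]
    with open(rate_file) as f:
        print("%s port %s: %s" % (hca, port, f.read().strip()))

# Negotiated PCIe link per device (PCIe only; PCI-X exposes no such files)
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    speed_file = os.path.join(dev, "current_link_speed")
    width_file = os.path.join(dev, "current_link_width")
    if os.path.exists(speed_file) and os.path.exists(width_file):
        try:
            with open(speed_file) as fs, open(width_file) as fw:
                print("%s: %s, x%s" % (os.path.basename(dev),
                                       fs.read().strip(), fw.read().strip()))
        except IOError:
            pass  # some bridges refuse to report a link

If the IB rate line already reads faster than what the slot behind the HCA can 
deliver, the bus is the limit, not the network.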

All the best from London

Jörg

On Tuesday 28 Oct 2014 09:02:17 you wrote:
> On Mon, 27 Oct 2014 22:38:26 +0000
> 
> Jörg Saßmannshausen <j.sassmannshausen at ucl.ac.uk> wrote:
> > Dear all,
> > 
> > thanks for the feedback.
> > 
> > I think the best thing to do is to run a few test jobs and see what
> > happens here. Once that is done I can report back so the community
> > gets something back from me as well.
> 
> Running tests is generally a good idea, but remember that for almost all
> combinations of InfiniBand generations and server/PCIe generations, one
> port has been enough to eat all the PCIe bandwidth. So using two ports
> won't give more bandwidth.
> 
> /Peter
> 
> > I did not expect it to be that easy, to be honest. I know that Open MPI
> > does test the various interfaces, but I did not expect that it would use
> > both IB ports if they are available.
> > 
> > So thanks for your help so far!
> > 
> > All the best from London
> > 
> > Jörg
> > 
> > On Monday 27 October 2014 Prentice Bisbal wrote:
> > > On 10/27/2014 12:03 PM, Kilian Cavalotti wrote:
> > > > Hi Jörg,
> > > > 
> > > > On Mon, Oct 27, 2014 at 3:56 AM, Jörg Saßmannshausen
> > > > 
> > > > <j.sassmannshausen at ucl.ac.uk> wrote:
> > > >> I got some older dual-port Mellanox Technologies MT23108 cards (8.5
> > > >> Gb/sec (4X)) and currently I am only using one of the two ports
> > > >> on them. I was wondering: is it possible with InfiniBand to
> > > >> utilise the second port in such a way that I can increase the
> > > >> bandwidth of the network (and maybe lower the latency)?
> > > > 
> > > > Last time I checked, the Linux bonding driver didn't allow bonding
> > > > IB interfaces to increase throughput. I'd be happy to hear
> > > > that this has changed, but I think only redundancy modes (such as
> > > > active-backup) were possible.
> > > > 
> > > > Now, Open MPI can take advantage of multiple, independent ports
> > > > connected to the same fabric: see
> > > > http://www.open-mpi.org/faq/?category=openfabrics#ofa-port-wireup
> > > > for details.
> > > > 
> > > > Cheers,
> > > 
> > > This makes sense. Since IB operates mostly in userspace, I would
> > > expect setting up IB like this to be done by the applications, not the
> > > operating system. Since TCP/IP networking is mostly handled by the
> > > kernel, it would make sense that IPoIB would be configured by the
> > > kernel, though. Feel free to correct me if this logic is wrong.
> > > 
> > > You could increase the bandwidth, but not lower the latency. If
> > > anything, the latency might go up, as some additional work will need
> > > to be done to coordinate the data going over the two different
> > > connections.

-- 
*************************************************************
Dr. Jörg Saßmannshausen, MRSC
University College London
Department of Chemistry
Gordon Street
London
WC1H 0AJ 

email: j.sassmannshausen at ucl.ac.uk
web: http://sassy.formativ.net

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html