[Beowulf] Lowered latency with multi-rail IB?
Joshua Mora Acosta
joshua_mora at usa.net
Thu Mar 26 22:15:17 PDT 2009
The only way I got under 1 usec in the PingPong test or with
ib_[write/send/read]_lat is with QDR and back to back (i.e., no switch).
With a switch I get 1.13-1.17 usec [HP-MPI, OpenMPI, MVAPICH].
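For anyone who wants to reproduce this kind of number: the ping-pong
measurement is easy to sketch in MPI C. A minimal illustration follows
(my own iteration counts and warm-up handling, not the IMB or OSU
benchmark sources):

/* pingpong.c -- minimal ping-pong latency sketch (illustrative only).
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000, warmup = 1000;
    char buf[1] = {0};
    int rank;
    double t0 = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < warmup + iters; i++) {
        if (i == warmup)            /* start the clock after warm-up */
            t0 = MPI_Wtime();
        if (rank == 0) {
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0) {
        /* one-way latency = half the average round trip */
        double usec = (MPI_Wtime() - t0) * 1e6 / iters / 2.0;
        printf("avg one-way latency: %.2f usec\n", usec);
    }
    MPI_Finalize();
    return 0;
}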
The choice of MPI does not matter much, although I have to agree with
Greg that multi-rail also increases latency.
Multirail is used for:
i) failover/redundancy
ii) higher bandwidth (see the sketch right after this list)
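On "rails": each rail is a separate HCA, or a separate port on one. A
quick way to see what a node actually has is to walk the verbs device
list. Minimal sketch, assuming libibverbs is installed:

/* list_hcas.c -- print the IB devices (candidate rails) on this node.
 * Illustrative only. Build: gcc list_hcas.c -o list_hcas -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++)
        printf("rail %d: %s\n", i, ibv_get_device_name(devs[i]));
    ibv_free_device_list(devs);
    return 0;
}

The MPIs mentioned above all have options to stripe large messages
across those devices; that is where the bandwidth win comes from, not
the latency.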
------ Original Message ------
Received: 11:11 PM CDT, 03/26/2009
From: Greg Lindahl <lindahl at pbm.com>
To: beowulf at beowulf.org
Subject: Re: [Beowulf] Lowered latency with multi-rail IB?
> On Thu, Mar 26, 2009 at 11:32:23PM -0400, Dow Hurst DPHURST wrote:
> > We've got a couple of weeks max to finalize spec'ing a new cluster. Has
> > anyone knowledge of lowering latency for NAMD by implementing a
> > multi-rail IB solution using MVAPICH or Intel's MPI?
> Multi-rail is likely to increase latency.
> BTW, Intel MPI usually has higher latency than other MPIs.
> If you look around for benchmarks, you'll find that QLogic InfiniPath
> does quite well on NAMD and friends, compared to that other brand of
> InfiniBand adaptor. For example, at
> the lowest line (== best performance) is InfiniPath. Those results
> aren't the most recent, but I'd bet the current generation of
> adaptors shows the same picture.
> -- Greg
> (yeah, I used to work for QLogic.)