[Beowulf] latency vs bandwidth for NAMD

Kevin Ball kevin.ball at qlogic.com
Tue Aug 21 09:26:31 PDT 2007


Hi Dow,

  The QLE7240 DDR HCA is not available yet, but we do not expect it to
have any substantial advantage for NAMD compared to the QLE7140 (SDR),
because we do not believe NAMD requires substantial point-to-point
bandwidth from the interconnect.
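
  For intuition, here is a back-of-envelope sketch using the usual
latency-plus-size/bandwidth (alpha-beta) cost model.  The latency and
bandwidth figures below are illustrative assumptions, not measured
numbers for either adapter, but they show why doubling link bandwidth
buys little at the small message sizes where latency dominates:

    # alpha-beta model: one message costs latency + size / bandwidth
    def msg_time(size_bytes, latency_s, bandwidth_bytes_per_s):
        return latency_s + size_bytes / bandwidth_bytes_per_s

    # Hypothetical SDR-class and DDR-class links with the same MPI latency.
    sdr = dict(latency_s=1.3e-6, bandwidth_bytes_per_s=0.9e9)
    ddr = dict(latency_s=1.3e-6, bandwidth_bytes_per_s=1.8e9)

    for size in (256, 1024, 4096):   # message sizes in bytes
        t_sdr = msg_time(size, **sdr)
        t_ddr = msg_time(size, **ddr)
        print("%5d B: SDR %5.2f us, DDR %5.2f us, speedup %.2fx"
              % (size, t_sdr * 1e6, t_ddr * 1e6, t_sdr / t_ddr))

  At 256 bytes the modeled gain from doubling the bandwidth is only
about 1.1x because the fixed latency term dominates; the extra bandwidth
only starts to matter for much larger messages.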

  The TACC cluster is not using QLogic InfiniBand (IB) cards; I believe
it uses SDR IB cards from another vendor.

  Just last week I submitted results to the folks at UIUC from a similar
cluster with the QLE7140.  They have not yet shown up on their results
page, but in essence the scalability is similar up to around 256 cores,
at which point the curves diverge, with the QLE7140 cluster dramatically
outperforming the TACC cluster at 512 cores.

  I expect the QLE7140 results will show up on that website
(http://www.ks.uiuc.edu/Research/namd/performance.html) in the next week
or so, so you can compare against TACC performance at that time.  On
that site you can also see performance for a number of other machines,
including an SGI Altix with much higher point-to-point bandwidth yet
worse scaling than IB, which is part of why I don't think DDR will
improve results.

  If you are interested in other MD codes, we have found advantages on
codes like CHARMM and GROMACS as well.  Some of these are detailed in a
white paper on our website: 
http://www.qlogic.com/documents/datasheets/knowledge_data/whitepapers/HSG-WP07005.pdf

  Fair notice:  I work for QLogic on the InfiniPath product line.  I
have tried my best to make what bias I have open and clear.

-Kevin


On Fri, 2007-08-17 at 14:03, Dow Hurst DPHURST wrote:
> I'd like to get advice on how latency affects scaling of molecular dynamics
> codes versus total bandwidth of the interconnect card.  We use NAMD as the
> molecular dynamics code and have had Ammasso RDMA interconnects.  Right
> now, we have a chance to upgrade and add nodes to our cluster using
> Infiniband.  I've found that NAMD was coded to be latency tolerant;
> however, I'd like to scale up to 64 cores and beyond.  I'm going blind
> reading IB card specs, performance benchmarks, and searching Google.  I'd
> love some advice from someone who knows whether a consistent very low
> latency IB card, such as the Infinipath QLE7140, is better/worse for NAMD
> than a higher latency but higher bandwidth card such as the QLE7240?  I can
> tell that Lonestar at TACC has great NAMD performance but I can't tell what
> IB card is used.  I imagine that switch performance plays a large role too.
> Thanks for your time,
> Dow
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
