[Beowulf] latency vs bandwidth for NAMD

Dow Hurst DPHURST at uncg.edu
Fri Aug 17 14:03:44 PDT 2007

I'd like to get advice on how latency, as opposed to total bandwidth, of the
interconnect card affects scaling of molecular dynamics codes.  We use NAMD
as our molecular dynamics code and have been running Ammasso RDMA
interconnects.  Right now we have a chance to upgrade and add nodes to our
cluster using InfiniBand.  I've found that NAMD was coded to be latency
tolerant; however, I'd like to scale up to 64 cores and beyond.  I'm going
blind reading IB card specs, performance benchmarks, and searching Google.
I'd love advice from someone who knows whether a consistently very low
latency IB card, such as the InfiniPath QLE7140, is better or worse for NAMD
than a higher-latency but higher-bandwidth card such as the QLE7240.  I can
tell that Lonestar at TACC has great NAMD performance, but I can't tell what
IB card is used.  I imagine that switch performance plays a large role too.
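As a back-of-the-envelope way to frame the latency-vs-bandwidth question, the
standard alpha-beta model treats per-message time as latency plus size over
bandwidth; the crossover message size tells you which card wins for a given
message size.  A minimal sketch (the latency and bandwidth numbers below are
placeholders for illustration, not measured specs for either card):

```python
def message_time(size_bytes, latency_s, bandwidth_bps):
    """Alpha-beta model: time = latency + size / bandwidth."""
    return latency_s + size_bytes / bandwidth_bps

def crossover_size(lat_a, bw_a, lat_b, bw_b):
    """Message size (bytes) at which the two cards take equal time.
    Below this size the lower-latency card wins; above it,
    the higher-bandwidth card wins."""
    # lat_a + s/bw_a = lat_b + s/bw_b  =>  s = (lat_b - lat_a) / (1/bw_a - 1/bw_b)
    return (lat_b - lat_a) / (1.0 / bw_a - 1.0 / bw_b)

# Placeholder numbers, NOT vendor specs:
# card A: 1.5 us latency, 1 GB/s; card B: 4 us latency, 2 GB/s
lat_a, bw_a = 1.5e-6, 1e9
lat_b, bw_b = 4.0e-6, 2e9

s = crossover_size(lat_a, bw_a, lat_b, bw_b)
print(f"crossover at {s:.0f} bytes")
```

With these made-up numbers the crossover lands at 5000 bytes, so if NAMD's
typical messages are smaller than that the low-latency card would win; profiling
actual message sizes on your cluster would settle it.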
Thanks for your time,
