[Beowulf] NAMD/CUDA scaling: QDR Infiniband sufficient?
Dow Hurst DPHURST
DPHURST at uncg.edu
Mon Feb 9 12:37:06 PST 2009
Has anyone tested scaling of NAMD/CUDA over QLogic or ConnectX QDR interconnects with a large number of IB cards and GPUs? I've listened to John Stone's presentation on CUDA acceleration of VMD and NAMD. The takeaway I got from it was that one QDR link per GPU would probably be necessary to scale efficiently: the 60-node, 60-GPU, DDR IB cluster used for the initial testing was saturating its interconnect. Later tests on the new GT200-based cards show even larger performance gains for the GPUs; the figures I saw were 1 GPU performing the work of 12 CPU cores, or 8 GPUs equaling 96 cores. So at a ratio of 1 GPU to 12 cores, interconnect performance will be very important.
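
For what it's worth, the back-of-envelope model behind that worry is just that per-node traffic grows roughly with per-node compute rate. A minimal sketch follows; the ~2 GB/s DDR and ~4 GB/s QDR usable rates, and the ~2x GT200-over-G80 speedup, are my assumptions, not measured numbers:

    # Naive model (assumed numbers, not measurements): if a 4X DDR IB
    # link (~16 Gbit/s data rate, ~2 GB/s) was saturated with one
    # G80-class GPU per node, and a GT200-class GPU does roughly 2x
    # the work, per-node bandwidth demand scales with the speedup.
    DDR_GBPS = 2.0   # 4X DDR usable bandwidth, GB/s (assumed)
    QDR_GBPS = 4.0   # 4X QDR usable bandwidth, GB/s (assumed)

    def required_bandwidth(baseline_gbps, speedup):
        # traffic per node assumed linear in node compute rate
        return baseline_gbps * speedup

    demand = required_bandwidth(DDR_GBPS, 2.0)  # GT200 ~2x G80 (assumed)
    print("per-node demand ~%.1f GB/s vs QDR ~%.1f GB/s" % (demand, QDR_GBPS))

Under that naive linear model a single QDR link is just about the minimum per GPU. Real NAMD communication won't scale perfectly linearly with compute, which is exactly why I'm asking for measured results.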
Dow P. Hurst, Research Scientist
Department of Chemistry and Biochemistry
University of North Carolina at Greensboro