[Beowulf] Re: GPU boards and cluster servers.

Kozin, I (Igor) i.kozin at dl.ac.uk
Fri Sep 5 04:36:53 PDT 2008


> The new Dell R5400 Rackmount workstation is ideal for this. You can
> slip two Xeons, 16GB ram and two chunky graphics cards in there.

The slots in the R5400 are PCIe Gen1, and the 300 W total power budget
for the graphics cards might be a bit too low.

The best I've seen so far are 1U HP DL160G5 servers, which offer two
PCIe x16 Gen2 slots. Granted, you will not be able to fit a powerful
graphics card in there, but a Tesla setup works quite well. There is a
very interesting recent report published by HP:
http://www.hp.com/techservers/hpccn/hpccollaboration/ADCatalyst/downloads/accelerating-HPCUsing-GPUs.pdf
They benchmarked a DL160G5 (with a single processor, so a pretty
low-cost host server) with an S870 attached to it. Observed peak
performance on SGEMM was about 200 GFLOPS, which is much lower than the
theoretical peak of 512 GFLOPS (and even much less than the 350 GFLOPS
sustained claimed by Nvidia). When they factor in I/O, the performance
rapidly approaches that of an Intel quad-core. That's not to say GPUs
are useless, even at single precision; some of the results are pretty
good. The team promised to benchmark FireStream next.
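
To see why I/O hurts so much, here is a back-of-envelope model (a
sketch with my own assumed rates, not numbers taken from the HP
report): an n x n SGEMM does 2n^3 flops but only moves 3n^2 floats over
the bus, so the transfer cost dominates for small matrices and fades
only when the data can stay resident on the card.

#include <stdio.h>

int main(void)
{
    double gemm_rate = 200e9;  /* assumed on-card SGEMM rate, flop/s */
    double pcie_bw   = 1.5e9;  /* assumed achieved Gen1 bandwidth, bytes/s */
    int n;

    for (n = 512; n <= 4096; n *= 2) {
        double flops  = 2.0 * n * n * (double)n;    /* 2*n^3 flops in GEMM */
        double bytes  = 3.0 * n * (double)n * 4.0;  /* move A, B, C as floats */
        double t_comp = flops / gemm_rate;
        double t_io   = bytes / pcie_bw;
        printf("n=%5d  compute %6.1f ms  transfer %6.1f ms  effective %5.0f GFLOP/s\n",
               n, 1e3 * t_comp, 1e3 * t_io, 1e-9 * flops / (t_comp + t_io));
    }
    return 0;
}

With these assumed rates a 512x512 multiply drops to roughly 78 GFLOPS
effective, i.e. ordinary quad-core territory, which matches the flavour
of the HP result.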

> Generally a XEON or Opteron chipset and CPUs will be the choice.
>
> Also, for most GPU/FPU performance work, the memory bandwidth
> bottleneck on the Intel product is too much of a negative factor.

Yes, memory bandwidth can be a problem for Intel servers, for now, but
we all know this is going to change soon.
More surprisingly, Opteron-based servers do not offer PCIe Gen2 just
yet, although it may have been a while since I last checked. The paper
cited above shows a very significant impact of PCIe Gen2 on host-device
bandwidth.
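
For scale (theoretical numbers; achieved bandwidth is lower in
practice): Gen1 gives 250 MB/s per lane per direction after 8b/10b
encoding and Gen2 doubles that, so an x16 slot goes from 4 GB/s to
8 GB/s. For a hypothetical 64 MiB buffer (a 4096x4096 float matrix):

#include <stdio.h>

int main(void)
{
    double bytes = 64.0 * 1024 * 1024;  /* 4096x4096 floats */
    double gen1  = 16 * 250e6;          /* x16 Gen1, bytes/s */
    double gen2  = 16 * 500e6;          /* x16 Gen2, bytes/s */
    printf("Gen1 x16: %.1f ms per 64 MiB transfer\n", 1e3 * bytes / gen1);
    printf("Gen2 x16: %.1f ms per 64 MiB transfer\n", 1e3 * bytes / gen2);
    return 0;
}

That is roughly 16.8 ms versus 8.4 ms per transfer, which adds up
quickly when matrices have to cross the bus on every call.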




