[Beowulf] any gp-gpu clusters?

Mark Hahn hahn at mcmaster.ca
Thu Jun 21 07:57:45 PDT 2007


Hi all,
is anyone messing with GPU-oriented clusters yet?

I'm working on a pilot which I hope will be something 
like 8x workstations, each with 2x recent-gen gpu cards.
the goal would be to host cuda/rapidmind/ctm-type gp-gpu development.
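
to make "cuda-type development" concrete: what I have in mind is
trivial data-parallel kernels, roughly the saxpy sketch below.  it's
purely illustrative -- not any particular app of ours -- but it's the
flavour of code these nodes would host.

#include <cuda_runtime.h>
#include <stdio.h>

/* trivial data-parallel kernel: y = a*x + y */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    float *dx, *dy;

    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* move data across PCIe, run the kernel on the card, copy back */
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);   /* expect 4.0 */

    cudaFree(dx); cudaFree(dy);
    free(x); free(y);
    return 0;
}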

part of the motive here is just to create a gpu-friendly 
infrastructure into which commodity cards can be added and 
refreshed every 8-12 months.  as opposed to "investing" in 
quadro-level cards, which are too expensive to toss when obsoleted.

nvidia's 1U tesla (with two g80 chips) looks potentially attractive,
though I'm guessing it'll be premium/quadro-priced - not really in 
keeping with the hyper-moore's-law mantra...

if anyone has experience with clustered gp-gpu stuff, I'm interested 
in comments on particular tools, experiences, configuration of the host
machines and networks, etc.  for instance, is it naive to think that 
gp-gpu is most suited to flops-heavy, IO-light apps, and therefore doesn't
necessarily need a hefty (IB, 10GbE) network?

thanks, mark hahn.


