[Beowulf] any gp-gpu clusters?

Ganesh Shetty ganesh at alvasystems.net
Tue Jun 26 09:23:37 PDT 2007


I did (or at least attempted to do) something similar with the ATI stream processor card(s) 
using the PeakStream VM. We evaluated CUDA - I do not want to get into that here.

But now that GOOG has acquired PeakStream, we might have to take a second look at CUDA.
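For anyone on the list who hasn't looked at it yet, the CUDA model boils down to writing C kernels and launching them over a grid of threads from host code. A minimal sketch (the kernel name, sizes, and data here are purely illustrative, not from our evaluation):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: element-wise vector add, one thread per element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

int main()
{
    const int N = 1024;
    const size_t bytes = N * sizeof(float);

    // Host buffers with some test data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < N; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Device buffers; copy inputs across the PCIe bus.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch: enough 256-thread blocks to cover N elements.
    vecAdd<<<(N + 255) / 256, 256>>>(d_a, d_b, d_c, N);

    // Copy the result back and spot-check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Nothing exotic - the explicit host/device copies are the part that matters when you start thinking about cluster interconnects.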

-Ganesh


> 
> Mark, I am messing with a GPU-oriented cluster.
> 
> I am now traveling to ISC, where I will show a sustained Teraflop on a
> workstation with 4 Tesla cards, using VMD to do ion placement (list
> members going to Dresden, stop by the NVIDIA booth to see the demo in
> action). This is a computation that used to take 100 CPU hours on an Altix
> and is now done in a matter of minutes. Yes, the whole system probably
> consumes 900 W (the TDP of a Tesla is 170 W, not 220 W), but I can assure you
> that is nothing compared to a big Altix machine, and you can put it under your
> desk and do some real science.
> 
> Several groups are building GPU-oriented clusters. Once mine is completed (8
> compute nodes, each with 2 Tesla boards), it should be accessible for
> testing to academic and research groups. People interested in testing their
> CUDA codes on a cluster can drop me an email.
> 
> On a side note, it is interesting to see all the speculation from people
> who have never used CUDA (and most of the time don't have a clue...) and at
> the same time to see quality software (mostly open source, like VMD, NAMD and
> SOFA) achieving pretty impressive results and enabling new science.
> 
> 
> Massimiliano
> PS:  Usual disclaimer, I work in the GPU Computing group at NVIDIA.
> 
> 
> 
> On 6/21/07, Mark Hahn <hahn at mcmaster.ca> wrote:
> >
> > Hi all,
> > is anyone messing with GPU-oriented clusters yet?
> >
> > I'm working on a pilot which I hope will be something
> > like 8x workstations, each with 2x recent-gen gpu cards.
> > the goal would be to host cuda/rapidmind/ctm-type gp-gpu development.
> >
> > part of the motive here is just to create a gpu-friendly
> > infrastructure into which commodity cards can be added and
> > refreshed every 8-12 months.  as opposed to "investing" in
> > quadro-level cards, which are too expensive to toss when obsoleted.
> >
> > nvidia's 1U tesla (with two g80 chips) looks potentially attractive,
> > though I'm guessing it'll be premium/quadro-priced - not really in
> > keeping with the hyper-moore's-law mantra...
> >
> > if anyone has experience with clustered gp-gpu stuff, I'm interested
> > in comments on particular tools, experiences, configuration of the host
> > machines and networks, etc.  for instance, is it naive to think that
> > gp-gpu is most suited to flops-heavy-IO-light apps, and therefore doesn't
> > necessarily need a hefty (IB, 10Geth) network?
> >
> > thanks, mark hahn.
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org
> > To change your subscription (digest mode or unsubscribe) visit
> > http://www.beowulf.org/mailman/listinfo/beowulf
> >
> 
> 

-- 
Ganesh P Shetty


