[Beowulf] any gp-gpu clusters?
eric-shook at uiowa.edu
Fri Jun 22 08:16:02 PDT 2007
I am part of a new research group that is considering adding gp-gpu
technologies to our cluster; unfortunately, we have the same questions
you raised: which platform (CTM or CUDA), development tools,
configuration, etc. If we decide to add GPU technologies, they would
most likely be added to only 1-2 hosts so we can test their viability. So
we are not developing a gpu-oriented cluster like you asked about, but if
the viability testing is successful we may look at it in the future.

Do you have experience developing for GPUs? If so, what were your
experiences and/or results? In particular, how steep is the learning
curve?
Mark Hahn wrote:
> Hi all,
> is anyone messing with GPU-oriented clusters yet?
> I'm working on a pilot which I hope will be something like 8x
> workstations, each with 2x recent-gen gpu cards.
> the goal would be to host cuda/rapidmind/ctm-type gp-gpu development.
> part of the motive here is just to create a gpu-friendly infrastructure
> into which commodity cards can be added and refreshed every 8-12
> months, as opposed to "investing" in quadro-level cards which are too
> expensive to toss when obsoleted.
> nvidia's 1U tesla (with two g80 chips) looks potentially attractive,
> though I'm guessing it'll be premium/quadro-priced - not really in
> keeping with the hyper-moore's-law mantra...
> if anyone has experience with clustered gp-gpu stuff, I'm interested in
> comments on particular tools, experiences, configuration of the host
> machines and networks, etc. for instance, is it naive to think that
> gp-gpu is most suited to flops-heavy-IO-light apps, and therefore doesn't
> necessarily need a hefty (IB, 10Geth) network?
> thanks, mark hahn.
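The flops-heavy/IO-light question above can be sanity-checked with a back-of-envelope model comparing per-step compute time against halo-exchange time for a 2-D domain decomposition. All hardware numbers below are illustrative assumptions (roughly G80-class peak throughput and gigabit Ethernet), not measurements:

```python
# Back-of-envelope check: does a GPU-heavy app need a fast interconnect?
# All throughput figures are assumed, 2007-era ballpark values.

def step_times(n, flops_per_cell, halo_bytes_per_cell,
               gpu_flops=3.5e11,   # assumed ~350 GFLOPS peak (G80-class)
               net_bw=1.25e8):     # assumed ~1 Gb/s GigE ~= 125 MB/s
    """Return (compute_s, comm_s) for one time step of an n x n
    2-D stencil subdomain: compute scales with the area (n*n),
    halo exchange with the perimeter (4*n)."""
    compute_s = n * n * flops_per_cell / gpu_flops
    comm_s = 4 * n * halo_bytes_per_cell / net_bw
    return compute_s, comm_s

# A flops-heavy, IO-light case: large subdomain, lots of work per cell.
c, m = step_times(n=4096, flops_per_cell=500, halo_bytes_per_cell=8)
print(f"compute {c*1e3:.1f} ms vs comm {m*1e3:.1f} ms")
```

For these assumed numbers compute dominates communication by more than an order of magnitude, so GigE would suffice; as subdomains shrink or per-cell work drops, the ratio flips and IB/10GbE starts to pay off.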
> Beowulf mailing list, Beowulf at beowulf.org