[Beowulf] NVIDIA GPUs, CUDA, MD5, and "hobbyists"

John Hearns john.hearns at streamline-computing.com
Thu Jun 19 00:17:07 PDT 2008

On Wed, 2008-06-18 at 16:31 -0700, Jon Forrest wrote:
> Kilian CAVALOTTI wrote:

> I'm glad you mentioned this. I've read through much of the information
> on their web site and I still don't understand the usage model for
> CUDA. By that I mean, on a desktop machine, are you supposed to have
> 2 graphics cards, 1 for running CUDA code and one for regular
> graphics? If you only need 1 card for both, how do you avoid the
> problem you mentioned, which was also mentioned in the documentation?
Actually, I should imagine Kilian is referring to something else,
not the inbuilt timeout which is mentioned in the documentation. But I
can't speak for him.
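
For what it's worth, recent CUDA runtimes expose a
kernelExecTimeoutEnabled flag in cudaDeviceProp, so you can at least ask
each card whether the display watchdog applies to it. A minimal sketch
(assuming the runtime API, built with nvcc):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);

            /* kernelExecTimeoutEnabled is non-zero when the driver's
               display watchdog can kill long-running kernels on this
               GPU - typically the card driving X. */
            printf("Device %d (%s): watchdog %s\n",
                   dev, prop.name,
                   prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
        }
        return 0;
    }

A card reporting "disabled" is the one you would want long kernels on.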

> Or, if you have a compute node that will sit in a dark room,
> you aren't going to be running an X server at all, so you won't
> have to worry about anything hanging?

I don't work for Nvidia, so I can't say!
But the usage model is as you say: you can prototype applications that
run for a short time on the desktop machine, but long runs are meant to
be done on a dedicated back-end machine.
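
If you do prototype on the card that is also driving X, the usual dodge
is to break the work into short kernel launches so that no single launch
runs long enough to trip the watchdog. A rough sketch (the kernel and
chunk size here are made-up placeholders):

    #include <cuda_runtime.h>

    __global__ void step(float *data, int n, int offset)
    {
        int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;            /* stand-in for real work */
    }

    void run_in_chunks(float *d_data, int n)
    {
        const int chunk   = 1 << 20;    /* elements per launch */
        const int threads = 256;

        for (int off = 0; off < n; off += chunk) {
            int left   = n - off;
            int this_n = (left < chunk) ? left : chunk;
            int blocks = (this_n + threads - 1) / threads;

            step<<<blocks, threads>>>(d_data, n, off);

            /* wait for each short launch so no single kernel runs
               long enough to hit the display watchdog */
            cudaDeviceSynchronize();
        }
    }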
If you want a totally desk-side solution, they sell a companion box
which goes alongside and attaches via a ribbon cable. I guess the art
here is finding a motherboard with the right number and type of
PCI Express slots to take both the companion box and a decent graphics
card for X use.

> I'm planning on starting a pilot program to get the
> chemists in my department to use CUDA, but I'm waiting
> for V2 of the SDK to come out.

Why wait? The hardware will be the same, and you can dip your toe in the
water now.
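
Even with the V1 toolkit a complete toy program is only a few dozen
lines. Something like this vector add (a sketch only, error checking
omitted) builds with nvcc and runs on any CUDA-capable card:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1024;
        const size_t bytes = n * sizeof(float);

        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2.0f * i; }

        float *d_a, *d_b, *d_c;
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        /* one thread per element, 256 threads per block */
        add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[10] = %f\n", h_c[10]);   /* expect 30.0 */

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

The programming model won't change between SDK releases, so nothing the
chemists learn now will be wasted.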
