[Beowulf] Nvidia Tesla GPU clusters?

Jim Lux James.P.Lux at jpl.nasa.gov
Wed Jul 18 16:01:16 PDT 2007


At 03:34 PM 7/18/2007, Jon Forrest wrote:
>Toon Knapen wrote:
>>>http://www.nvidia.com/object/tesla_computing_solutions.html
>>Can anyone point me to more information about the 'thread execution 
>>manager' and how threads can be used to get optimal performance out 
>>of this hardware?
>
>This is a good question. When word first came out about using GPUs
>for regular computation I sent a message to comp.arch (which is pretty
>much a wasteland these days) asking how jobs were going to be
>scheduled on a GPU. Nobody knew. I would think this would be
>especially important if the same GPU were going to be used
>for graphics display and HPC computations. Even if it were only
>used for HPC computations, its resources will have to be scheduled
>one day. Maybe it could be scheduled as an asymmetric MP, with
>certain tasks, e.g. the graphics and HPC tasks, having affinity to
>the GPU.


At this stage of the game, I suspect they're handling it as a 
dedicated coprocessor box, much like the old FPS array processing 
boxes did.  One task uses the box at a time, and it's explicitly 
managed by the user (or the user program).
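
To make that concrete, here's a minimal sketch of what that explicit 
management looks like through NVIDIA's CUDA runtime API (the call names 
are CUDA's; the "one host process claims the whole board" pattern is my 
reading of how these boxes are meant to be driven right now):

  #include <cuda_runtime.h>
  #include <stdio.h>

  int main(void)
  {
      int ndev = 0;
      cudaGetDeviceCount(&ndev);
      if (ndev == 0) {
          fprintf(stderr, "no CUDA-capable device found\n");
          return 1;
      }

      /* This process simply claims device 0 for itself.  Nothing here
         arbitrates between two processes that both want the board --
         that coordination is left to the user or the batch system.  */
      cudaSetDevice(0);

      cudaDeviceProp prop;
      cudaGetDeviceProperties(&prop, 0);
      printf("using %s (compute capability %d.%d)\n",
             prop.name, prop.major, prop.minor);
      return 0;
  }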

I don't know whether they support, say, two accelerated processes 
running at once.

Heck, it's a pretty big deal for them to provide moderately 
straightforward library access in the first place, as opposed to 
trying to cast your computational problems in terms of graphics primitives.
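
For what it's worth, that "library access" looks roughly like the 
sketch below (assuming the CUDA toolkit; saxpy is just my illustrative 
example, not anything NVIDIA ships).  The point is that the inner loop 
is ordinary C marked as a kernel and launched from the host through a 
handful of library calls, with no textures, shaders, or other graphics 
plumbing in sight:

  #include <cuda_runtime.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* The computation itself: plain C arithmetic, one thread per element. */
  __global__ void saxpy(int n, float a, const float *x, float *y)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n)
          y[i] = a * x[i] + y[i];
  }

  int main(void)
  {
      const int n = 1 << 20;
      size_t bytes = n * sizeof(float);
      float *x = (float *)malloc(bytes);
      float *y = (float *)malloc(bytes);
      float *dx, *dy;
      for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

      /* The host program explicitly stages data onto the board... */
      cudaMalloc((void **)&dx, bytes);
      cudaMalloc((void **)&dy, bytes);
      cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

      /* ...launches the kernel over a grid of 256-thread blocks... */
      saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

      /* ...and pulls the result back when the kernel finishes. */
      cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
      printf("y[0] = %f (expect 4.0)\n", y[0]);

      cudaFree(dx); cudaFree(dy); free(x); free(y);
      return 0;
  }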

James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875 




