[Beowulf] GPU question

Micha Feigin michf at post.tau.ac.il
Tue Sep 1 01:08:20 PDT 2009


On Mon, 31 Aug 2009 16:13:55 +0200
Jonathan Aquilina <eagles051387 at gmail.com> wrote:

> >One thing that's not mentioned out loud by NVIDIA (I've only read it in the
> >CUDA programming manual) is that if the video system needs more memory than
> >is available (say you change resolution while you're waiting for your
> >process to finish), it will crash your CUDA app, so I advise you to use a
> >second card for display (if you have a Tesla solution, you certainly have a
> >"second" display card). If you are running remotely, this is a non-issue
> >(framebuffers don't need much memory and don't change resolution).
> In that regard, why waste a PCIe slot when you can get a board with graphics
> integrated, leaving the slots free for data processing? Is there any
> difference in performance between a motherboard with integrated graphics and
> one without?
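
On the memory point quoted above: one way to catch this early is to query how
much device memory the display has left you before you start allocating. Here
is a minimal sketch with the CUDA runtime API; the 512 MB floor is just an
arbitrary number of mine, not anything NVIDIA recommends.

/* Sketch: check free device memory up front, so a display-driven card
 * that has eaten into the framebuffer doesn't surprise you mid-job.
 * The 512 MB floor is an arbitrary example. */
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }
    printf("free: %zu MB of %zu MB\n", free_bytes >> 20, total_bytes >> 20);
    if (free_bytes < ((size_t)512 << 20)) {
        fprintf(stderr, "not enough free device memory, bailing out\n");
        return 1;
    }
    /* ... allocate and launch kernels here ... */
    return 0;
}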

The problem is that I've never run into an onboard card that's capable of
doing real HPC work. Laptops can come with a relatively strong NVIDIA chip,
but not strong enough for HPC (it can't handle the wattage and cooling, among
other things). Desktop motherboards usually come with a cheap Intel part.

And it's not like you can use the graphics slot for anything else.

That may change in a few years if Intel's vision for Larrabee comes along,
with Larrabee integrated on the board (although I don't think that will
happen before PCIe 3.0, which is supposed to handle the same speed).

By the way, NVIDIA does say that it's better not to use the main display card
for CUDA if you intend to do real HPC (I don't remember where I read it,
though), and if the second card is not a Tesla, they suggest a Quadro as the
main card. They claim you get better performance (and if you also intend to
do GLSL, Quadro supports OpenGL GPU affinity, and I think it also supports an
OpenGL context on a card other than the main one; Tesla apparently supports
OpenGL too, but it's not official).
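
In code terms, the simplest thing is to skip whichever device is driving the
display. A rough sketch using the CUDA runtime API: the kernel-timeout flag
is usually set only on the card the window system (and its watchdog) is
using, so it works as a crude proxy; falling back to device 0 is my own
choice here, not NVIDIA's advice.

/* Sketch: prefer a CUDA device that is not running a display.
 * kernelExecTimeoutEnabled is usually set only on the card with the
 * display watchdog attached. */
#include <cstdio>
#include <cuda_runtime.h>

static int pick_compute_device(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return -1;
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess)
            continue;
        if (!prop.kernelExecTimeoutEnabled) {  /* no display watchdog here */
            printf("using device %d: %s\n", i, prop.name);
            return i;
        }
    }
    return 0;  /* only display-attached cards found; fall back to device 0 */
}

int main() {
    int dev = pick_compute_device();
    if (dev < 0) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    cudaSetDevice(dev);
    /* ... kernels go here ... */
    return 0;
}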

Also, if you ask NVIDIA, they suggest deploying only on Tesla and Quadro, as
they claim those are designed to handle 24/7 work, whereas the consumer G200
cards will downclock when they get hot. The Quadros are ridiculously
expensive, though.



