[Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

Mark Hahn hahn at mcmaster.ca
Sun Jun 15 21:47:21 PDT 2008


> It is a bit weird if you claim to be NDA bound, whereas the news has it in
> big capitals what the new IBM CELL can deliver.

I thought he was referring to double precision on Nvidia GPUs,
which has indeed not shipped publicly (afaik).

> So a very reasonable question to ask is what the latency is from the stream 
> processors to the device RAM.

sure, they're GPUs, not general-purpose coprocessors.  but both AMD and 
Intel are making noises about changing this.  AMD seems to be moving 
GPU units on-chip, where they would presumably share L3, cache coherency,
etc.  Intel's Larrabee approach seems to be to add wider vector units 
to normal x86 cores (and more of them).  I personally think the latter is 
much more promising from an HPC perspective.  but then again, both AMD
and Nvidia have major cred on the line - they have to deliver competitive
performance under the traditional GPU programming model.
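for concreteness, here is a minimal sketch of what a double-precision kernel
looks like under that programming model.  this is an illustration, not code
from the thread: it assumes hardware with compute capability >= 1.3 (the
GT200 parts that had been announced but not shipped at the time of this
post) and the standard CUDA runtime API; on earlier parts the compiler
silently demotes double to float.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// daxpy: y = a*x + y in double precision, one element per thread.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // each global-memory access here pays the
                                  // stream-processor-to-device-RAM latency
                                  // unless it coalesces across the warp
}

int main(void)
{
    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // ... fill x and y, e.g. cudaMemcpy from host buffers ...
    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

the catch, and the reason for the NDA talk above, is the last build step:
you would compile with something like nvcc -arch=sm_13, and no shipping
part accepted sm_13 code yet.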


