[Beowulf] coprocessor to do "physics calculations"

Robert G. Brown rgb at phy.duke.edu
Fri May 5 10:13:52 PDT 2006


On Fri, 5 May 2006, Joe Landman wrote:

> I disagree.  For a coprocessor to be useful in any context it has to be 
> "fast" in the sense of "not slower" than the alternative.  If you are looking 
> to offload the CPU, then get a second CPU.  If you are looking to perform 
> specialized calculations, then get the appropriate coprocessor (ala GPU, 
> ...).

I obviously wouldn't argue with that or your other examples.  GPUs and
DSPs are just two of many examples; modern computer designs use this
sort of parallel processor design as a matter of routine, and I didn't
even think to mention them.

Historically, though, this approach to augmenting simple clustering in
HPC designs hasn't had a terribly successful time of it.  There has been
lots of talk about it and a few successful implementations (remember the
CM5, with its network of Sparcs with attached vector units, IIRC), but
with the exception of mass-market units with clearly defined tasks --
e.g. GPUs, DSPs, disk/RAID controllers, network controllers, etc. --
attached coprocessors have usually not found a long-term home.

I think that this is just a sort of evolution in action.  The generic
meme is there, waiting for conditions to be right for it to be selected
for instead of against, but since the 8087 and its successors HPC hasn't
managed to come up with a "super floating point unit" you can add to
your favorite system at mass-market prices to make it faster.  Often the
issue is the bus, of course -- for an add-on processor to do things as
fast as one would like, it generally has to be able to use DMA, share
memory with the CPU, and so on.  Maybe HyperTransport will finally open
the door here.
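
Just to put rough numbers on the bus problem, here's a toy Python sketch
(the function name and the bandwidth/FLOPS figures are purely
illustrative assumptions of mine, not measurements of any real part):

    # Offloading to an attached unit only wins if shipping the data over
    # the bus and back, plus the coprocessor's run time, beats the host CPU.
    def offload_pays_off(bytes_moved, flops,
                         bus_bw=0.25e9,      # ~250 MB/s, classic 32-bit/33 MHz PCI
                         copro_flops=20e9,   # hypothetical 20 GFLOPS attached unit
                         host_flops=4e9):    # a few GFLOPS for a host CPU of the day
        transfer = 2 * bytes_moved / bus_bw  # data out, results back
        return transfer + flops / copro_flops < flops / host_flops

    # 1000x1000 double-precision matrix multiply: ~2e9 flops over ~24 MB moved.
    print(offload_pays_off(24e6, 2e9))   # True: dense enough to survive the bus
    # Dot product of two 1M-element vectors: ~2e6 flops over 16 MB moved.
    print(offload_pays_off(16e6, 2e6))   # False: the bus eats the gain

The point being that unless the work is quite flop-dense per byte moved,
the coprocessor mostly sits there waiting on the bus.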

>> This may be yet another version of the famed "let's make a Sony
>> PlayStation (or Xbox, or DSP, or whatever) cluster" discussion.  They,
>> too, have (or "are") integrated "physics" engines.  Yet it never quite
>> makes sense compared to just going with the best general purpose CPU.
>
> I strongly disagree with that last sentence.  The example I give (again) is 
> the graphics card.  You can make exactly, precisely the same argument about 
> doing graphics on the CPU versus on the GPU, that it simply doesn't make 
> sense.  But that argument would be IMO, wrong.
>
> I think it is far more correct to use a tautology/Yogi-Berra-ism that APUs in 
> general will be useful where they are useful.  That is, there exists a subset 
> of problems amenable to acceleration.  My argument is that this subset is not 
> minuscule.  It doesn't encompass the entire market either.  In the case of 
> graphics, there is a minuscule portion of the market not served by 
> acceleration products.

I was referring to the HPC-computation market, specifically simulations
or other floating point intensive general purpose computations (context
of reply).  I willingly concede all that you assert elsewhere.

If you like, using a GPU as a CPU for general purpose computations
doesn't usually work that well.  Neither does using a DSP.  There are
probably exceptions to both rules, but if it were USUALLY a good idea
we'd ALL be doing it -- after all, that is what we do...;-)

    rgb

>
> Joe
>
>>
>>    rgb
>
> * I only bet when I already know the answer :)

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu




