[Beowulf] GP-GPU experience

Vincent Diepeveen diep at xs4all.nl
Mon Apr 4 15:10:44 PDT 2011


On Apr 4, 2011, at 11:54 PM, Mark Hahn wrote:

>>> well, for your application, which is quite narrow.
>>
>> Which is about any relevant domain where massive computation takes  
>> place.
>
> you are given to hyperbole.  the massive domains I'm thinking of
> are cosmology and explicit quantum condensed-matter calculations.
> the experts in those fields I talk to both do use massive computation
> and do not expect much benefit from GPUs.

Even in the field you give as an example, quantum mechanics: the vast
majority of quantum mechanics calculations are massive matrix
calculations.

Admittedly I haven't looked at the specific field you're speaking about.
I did, however, look at one other quantum mechanics calculation,
where someone was using one core of his quad-core box and massive RAM.

It took me one afternoon to explain to him how to trivially use all four
cores for that calculation, all working on the same RAM buffer.
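
For the record, the trick was nothing more exotic than splitting the index
range of the shared buffer over the cores; roughly like this (a made-up
host-side sketch with hypothetical names, not his actual code):

    // Made-up sketch: split one big calculation over 4 cores that all
    // read the same large RAM buffer. Each thread gets its own index
    // range, so the read-only input needs no locking.
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Placeholder for the real per-element work of the calculation.
    static double heavy_term(const double *buf, std::size_t i) {
        return buf[i] * buf[i];
    }

    // One thread processes one contiguous slice of the shared buffer.
    static void compute_range(const double *buf, double *out,
                              std::size_t lo, std::size_t hi) {
        for (std::size_t i = lo; i < hi; ++i)
            out[i] = heavy_term(buf, i);
    }

    // Split the work over ncores threads sharing the same RAM buffer.
    void compute_parallel(const double *buf, double *out,
                          std::size_t n, unsigned ncores = 4) {
        std::vector<std::thread> pool;
        std::size_t chunk = (n + ncores - 1) / ncores;
        for (unsigned t = 0; t < ncores; ++t) {
            std::size_t lo = t * chunk;
            std::size_t hi = std::min(n, lo + chunk);
            if (lo < hi)
                pool.emplace_back(compute_range, buf, out, lo, hi);
        }
        for (auto &th : pool)
            th.join();   // wait until every core finished its slice
    }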

You realize that you can also combine CPU and GPU in one calculation?

With a modern chipset that has plenty of bandwidth to the GPU, you let the
CPUs, working from a big RAM buffer, prepare batches, ship each batch to the
GPU, do the tough calculation work on the GPU, and ship the results back.

That's how many people use those GPUs.
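
In code that pattern is roughly the following; just a sketch of the
batch/offload loop with a placeholder kernel, not any particular
production code:

    // Sketch of the CPU-prepares / GPU-computes loop: build a batch in
    // the big host RAM buffer, ship it to the card, run the heavy
    // kernel, ship the results back.
    #include <cuda_runtime.h>
    #include <cstddef>

    __global__ void crunch_batch(const double *in, double *out, std::size_t n) {
        std::size_t i = blockIdx.x * (std::size_t)blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * in[i];    // placeholder for the tough calculation
    }

    void run_batches(const double *host_in, double *host_out,
                     std::size_t total, std::size_t batch) {
        double *d_in, *d_out;
        cudaMalloc(&d_in,  batch * sizeof(double));
        cudaMalloc(&d_out, batch * sizeof(double));

        for (std::size_t off = 0; off < total; off += batch) {
            std::size_t n = (total - off < batch) ? total - off : batch;
            cudaMemcpy(d_in, host_in + off, n * sizeof(double),
                       cudaMemcpyHostToDevice);            // ship batch to GPU
            int threads = 256;
            int blocks  = (int)((n + threads - 1) / threads);
            crunch_batch<<<blocks, threads>>>(d_in, d_out, n);
            cudaMemcpy(host_out + off, d_out, n * sizeof(double),
                       cudaMemcpyDeviceToHost);            // ship results back
        }
        cudaFree(d_in);
        cudaFree(d_out);
    }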

My attempt to write a sieve directly on the GPU, in order to do everything
inside the GPU, is in a different league, sir, from what you are talking
about.
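
To give an idea of the difference: in a GPU-resident sieve the flag array
lives in device memory and only the final result ever crosses the bus. A
toy Eratosthenes-style sketch of that idea (hypothetical names, not the
actual sieve):

    // Toy sketch: one thread takes one small prime and strikes out its
    // multiples in a flag array that never leaves device memory.
    #include <cuda_runtime.h>

    __global__ void strike_multiples(unsigned char *is_composite,
                                     const unsigned *small_primes, int nprimes,
                                     unsigned long long limit) {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= nprimes) return;
        unsigned long long p = small_primes[idx];
        for (unsigned long long m = p * p; m < limit; m += p)
            is_composite[m] = 1;   // concurrent writes all store 1: harmless
    }

The point is simply that the marking loop never leaves the card; the CPU's
job shrinks to launching kernels and reading the survivors back.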

Your kind of talk amounts to: "there are no tanks in the city, we will
drive all the tanks out of the city, so that only our CPUs are left again".

Those days are over. Just get creative and find a way to do it on a GPU.

I parallelized one quantum mechanics calculation there; I wasn't paid for
that. Just pay someone to put a GPU to good use. If it isn't easy, that
doesn't mean it's impossible.

Most quantum mechanics guys may be brilliant in their own field, but they
simply haven't yet figured out how to parallelize things without losing the
branching factor that a huge RAM buffer gives them.

Now, it won't be easy to solve for every field; but calling yourself a speed
freak while declaring in advance that some faster type of hardware cannot be
used is just monkey talk. Go get clever and solve the problem. Find
solutions, don't just see problems.

>
>> The number of algorithms that really profit bigtime from a lot of  
>> RAM, in some cases you can also
>> replace by massive computation and a tad of memory, the cases  
>> where that cannot be the case
>> are very rare.
>
> no.  you are equating "uses lots of ram" with "uses memoization".
>
>> yet majority of HPC calculations, especially if we add company  
>> codes there, the simulators and the oil,
>> gas, car and aviation industry.
>
> jeez.
> nevermind I said anything.  I'd forgotten about your style.

Read the statistics in the reports on what eats system time, sir.
You have access to those papers as well, if you know how to google.

Regards,
Vincent



