[Beowulf] El Reg: AMD reveals potent parallel processing breakthrough
atchley at tds.net
Mon May 13 12:38:32 PDT 2013
Likewise, almost no one programs directly to uGNI (one of the Gemini
interfaces). Cray provides MPICH, and others have written support for
Open MPI. Use a PGAS language? Someone has written support for it as well.
Need storage? Cray provides Lustre over kGNI.
Now, using a GPGPU does take effort.
Using an interconnect? No. There are plenty of standard interfaces that do
the job for you.
On Mon, May 13, 2013 at 2:02 PM, Prentice Bisbal <
prentice.bisbal at rutgers.edu> wrote:
> On 05/11/2013 04:56 AM, Vincent Diepeveen wrote:
> > The top500.org of today completely refutes your statement there.
> > november 2012 list http://www.top500.org/list/2012/11/
> > number 1: cray with gemini interconnect and K20x.
> > That's not *easy* to program. Neither the interconnect nor CUDA.
> > number 2:
> > BlueGene/Q
> > also not 'easy' to program for with those dead slow latencies it has.
> I agree with your point here, and you are making a good one, but you're
> off with regard to the Blue Genes.
> All the Blue Genes use standard MPI programming, so any MPI-compliant
> program that you can run on your average Linux cluster will compile and
> run just fine on any Blue Gene, and with good performance. Getting
> absolute maximum performance will take some additional code tweaking to
> use the Double Hummer FPUs on the /L and /P, or the QPX on the /Q, but
> that is no different from the Intel processors with the MMX,
> SSE[1,2,3...], and AVX performance enhancements that have been added over
> the years.
> And what do you mean by 'dead slow latencies'? The BG networking is
> pretty damn good.
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit