[Beowulf] Working for DUG, new thread

Joe Landman joe.landman at gmail.com
Tue Jun 19 12:10:28 PDT 2018



On 6/19/18 2:47 PM, Prentice Bisbal wrote:
>
> On 06/13/2018 10:32 PM, Joe Landman wrote:
>>
>> I'm curious about your next gen plans, given Phi's roadmap.
>>
>>
>> On 6/13/18 9:17 PM, Stu Midgley wrote:
>>> low level HPC means... lots of things.  BUT we are a huge Xeon Phi 
>>> shop and need low-level programmers ie. avx512, careful cache/memory 
>>> management (NOT openmp/compiler vectorisation etc).
>>
>> I played around with avx512 in my rzf code. 
>> https://github.com/joelandman/rzf/blob/master/avx2/rzf_avx512.c>> Never really spent a great deal of time on it, other than noting that 
>> using avx512 seemed to downclock the core a bit on Skylake.
>
> If you organize your code correctly, and call the compiler with the 
> right optimization flags, shouldn't the compiler automatically handle 
> a good portion of this 'low-level' stuff? 

I wish it did, but in practice it doesn't do a good job.  You have to 
pay very careful attention to almost every aspect of making the code 
simple for the compiler, and then constrain the directions it takes 
with code generation.
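To make that concrete, here's a minimal sketch (illustrative, not from the rzf code) of what "making it simple for the compiler" looks like: `restrict`-qualified pointers so the compiler can assume no aliasing, a plain counted loop, and a body with no branches or calls.

```c
#include <stddef.h>

/* Hypothetical example: a loop written so the vectorizer has an easy job.
 * `restrict` promises the arrays don't alias, the trip count is a simple
 * counted loop, and the body has no branches or function calls. */
void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Drop the `restrict` (or hide the loop bound behind a function call) and many compilers will either emit runtime aliasing checks or give up on vectorizing altogether.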

I explored this with my RZF code.  It turns out that with -O3, gcc (5.x 
and 6.x) would convert a library call to the power function into an FP 
instruction.  But it would use only 1/8 to 1/4 of the XMM/YMM register 
width, wouldn't automatically unroll loops, and wouldn't exploit the 
vector nature of the problem.

Basically, not much has changed in 20+ years ... you annotate your code 
with pragmas and similar, or use instruction primitives and give up on 
the optimizer/code generator.
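The two routes look like this in practice (a sketch under my own naming, not the rzf code): a pragma that nudges the vectorizer, versus intrinsics that bypass it entirely.

```c
#include <stddef.h>

/* Route 1: annotate and hope.  With -fopenmp-simd, gcc/clang will
 * vectorize this loop even when their cost model would otherwise
 * decline; without that flag the pragma is ignored and the loop is
 * still correct. */
void scale_pragma(size_t n, double *restrict v, double s)
{
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        v[i] *= s;
}

#ifdef __AVX512F__
#include <immintrin.h>
/* Route 2: instruction primitives, giving up on the code generator.
 * Eight doubles per iteration in a ZMM register; n is assumed to be a
 * multiple of 8 here for brevity (a real version needs a tail loop). */
void scale_zmm(size_t n, double *v, double s)
{
    __m512d vs = _mm512_set1_pd(s);
    for (size_t i = 0; i < n; i += 8) {
        __m512d x = _mm512_loadu_pd(v + i);
        _mm512_storeu_pd(v + i, _mm512_mul_pd(x, vs));
    }
}
#endif
```

The pragma route keeps the code portable; the intrinsic route is where you end up when, as above, the optimizer won't take the less obvious optimizations on its own.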

When it comes down to it, compilers aren't as smart as many of us would 
like.  They are designed to produce correct assembly, not to convert 
idiomatic code into efficient assembly.  Correct doesn't mean efficient 
in many cases, and some of the less obvious optimizations we might 
expect to be beneficial are never taken.  We can hand-modify the code 
and measure whether those optimizations help, but the compilers are 
often not looking at the problem holistically.

> I understand that hand-coding this stuff usually still give you the 
> best performance (See GotoBLAS/OpenBLAS, for example), but does your 
> average HPC programmer trying to get decent performance need to 
> hand-code that stuff, too?

Generally, yes.  Optimizing serial code for GPUs doesn't work well. 
Rewriting for GPUs (e.g. taking into account the GPU data/compute flow 
architecture) does work well.
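The core of that rewrite is usually a data-layout change (a hypothetical illustration; names are mine): GPUs and wide SIMD units want structure-of-arrays access, where consecutive threads or lanes touch consecutive addresses, rather than the array-of-structures layout serial code tends to use.

```c
#include <stddef.h>

/* Serial-friendly layout: fields of one element are adjacent, but lane i
 * and lane i+1 load x values 24 bytes apart -- strided, uncoalesced. */
struct particle_aos { double x, y, z; };

/* GPU/SIMD-friendly layout: each field is its own contiguous array, so
 * consecutive lanes/threads read consecutive addresses. */
struct particles_soa { double *x, *y, *z; };

/* A kernel-shaped update over the SoA layout: this loop body is what
 * would map onto one GPU thread per index i. */
void advance_soa(size_t n, struct particles_soa *p, double dt,
                 const double *vx)
{
    for (size_t i = 0; i < n; i++)
        p->x[i] += dt * vx[i];
}
```

The same transformation is why OpenBLAS-style hand tuning and GPU ports both start by reshaping the data, not by sprinkling directives on the serial loop.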

-- 

Joe Landman
e: joe.landman at gmail.com
t: @hpcjoe
w: https://scalability.org
g: https://github.com/joelandman
l: https://www.linkedin.com/in/joelandman


