ScaLAPACK/HPL Trends

Andrew Shewmaker shewa at inel.gov
Fri May 10 14:16:15 PDT 2002


Jack Dongarra has several papers that look at the history of the LINPACK 
benchmark.  A couple are:

http://www.siam.org/siamnews/11-01/top500.pdf
http://www.netlib.org/utk/people/JackDongarra/PAPERS/hpl.pdf

All of his papers are available at 
http://www.netlib.org/utk/people/JackDongarra/papers.htm

I know of a couple of papers by the HINT benchmark people that relate to this.
One describes an analytical model for their benchmark that lets you predict 
the effect of a hardware change on your results (a rough sketch of that idea 
follows the links below).

http://www.scl.ameslab.gov/hint/
http://www.scl.ameslab.gov/ahint/
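
To give a feel for what "analytical model" means here: this is not the actual 
AHINT formulation, just a toy illustration, and every machine number below 
(peak rate, cache sizes, bandwidths, bytes moved per flop) is made up.  The 
idea is to estimate the sustained rate for a given working-set size from peak 
flops and the bandwidth of whichever level of the memory hierarchy that 
working set fits in.

#include <stdio.h>

/* hypothetical machine parameters -- all made up for illustration */
static const double PEAK_MFLOPS    = 500.0;   /* peak floating-point rate    */
static const double L1_BYTES       = 16e3;    /* L1 cache size               */
static const double L1_BW_MBS      = 4000.0;  /* L1 bandwidth, MB/s          */
static const double L2_BYTES       = 256e3;   /* L2 cache size               */
static const double L2_BW_MBS      = 1500.0;  /* L2 bandwidth, MB/s          */
static const double MEM_BW_MBS     = 400.0;   /* main memory bandwidth, MB/s */
static const double BYTES_PER_FLOP = 8.0;     /* assumed data moved per flop */

/* predicted sustained MFLOPS for a working set of the given size */
static double predict_mflops(double working_set_bytes)
{
    double bw = working_set_bytes <= L1_BYTES ? L1_BW_MBS
              : working_set_bytes <= L2_BYTES ? L2_BW_MBS
              : MEM_BW_MBS;
    double bw_limited = bw / BYTES_PER_FLOP;   /* bandwidth-limited rate */
    return bw_limited < PEAK_MFLOPS ? bw_limited : PEAK_MFLOPS;
}

int main(void)
{
    double sizes[] = { 8e3, 128e3, 4.5e6, 64e6 };
    for (int i = 0; i < 4; i++)
        printf("working set %10.0f bytes -> ~%6.1f MFLOPS\n",
               sizes[i], predict_mflops(sizes[i]));
    return 0;
}

The papers above do this much more carefully, but that is the flavor: change a 
hardware parameter and the predicted curve shifts accordingly.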

A company called Technology Labs, linked off of the main HINT website, 
developed a version of AHINT for MS Windows, but they seem to be dead.
The HINT website used to have a database of results, but I don't see it 
now.  HINT is available in two ways: ftp may be more convenient, but I 
don't think the ftp distribution includes any AHINT code, while the 
description of the CD distribution sounds like it might.

ftp://ftp.scl.ameslab.gov/pub/HINT

http://www.osti.gov/estsc

HINT doesn't report FLOPS (it reports QUIPS), but another paper shows how 
other benchmark scores, like LINPACK, can be derived from HINT results.

http://www.scl.ameslab.gov/Publications/HICSS98/HICSS98.html

"Again, HINT appears to hold a superset of the information in the other benchmark. 
One can predict the LINPACK scores using 
  100 by 100 LINPACK MFLOPS (rolled)  ~=  49 x (MQUIPS at 29 KBytes),  max error 51% 

  1000 by 1000 LINPACK MFLOPS         ~=  54 x (MQUIPS at 4.5 MBytes), max error 14% 

The different emphasis on large memory regimes is made clear by the application 
signature for LINPACK, shown in Figure 9. LINPACK tends to operate on single 
vectors and submatrices, which fit in caches. The second peak shows the need 
to sometimes sweep through the entire matrix."
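
In other words, if you have a machine's HINT curve you can read off a couple 
of points and get a rough LINPACK estimate.  For instance, something like the 
following, where the 49x and 54x factors and the error bounds come straight 
from the quote above, and the MQUIPS readings are made-up placeholders:

#include <stdio.h>

int main(void)
{
    /* hypothetical HINT readings for some machine (made-up numbers) */
    double mquips_29kb  = 3.0;   /* MQUIPS at a 29 KByte working set  */
    double mquips_4_5mb = 1.5;   /* MQUIPS at a 4.5 MByte working set */

    /* conversion factors quoted from the HICSS98 paper above */
    printf("predicted 100x100   LINPACK: ~%.0f MFLOPS (max error 51%%)\n",
           49.0 * mquips_29kb);
    printf("predicted 1000x1000 LINPACK: ~%.0f MFLOPS (max error 14%%)\n",
           54.0 * mquips_4_5mb);
    return 0;
}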

-Andrew Shewmaker

On Fri, 10 May 2002 09:36:24 -0400
Doug Farley <d.l.farley at larc.nasa.gov> wrote:

> I was wondering if there were any papers that have been published relating 
> to performance trends of ScaLAPACK or High Performance Linpack on 
> Beowulf type clusters.  I'm specifically interested in whether anyone has come 
> up with a coarse model to predict the performance (in Flops) of an x86 based 
> Beowulf cluster given a few variables (clock speeds, network type, ram, etc).
> 
> Thanks!
> 
> Doug
> 
> 
> Doug Farley
> 
> Data Analysis and Imaging Branch
> Systems Engineering Competency
> NASA Langley Research Center
> 
> < D.L.FARLEY at LaRC.NASA.GOV >
> < Phone +1 757 864-8141 >


