Scalapack: How to increase Grid size. !!

Patrick Geoffray patrick at myri.com
Tue Jul 31 20:40:22 PDT 2001


Hi Eswar,

The best place for this type of question is 
scalapack at cs.utk.edu.

Eswar Dev wrote:
> >  1. Bad memory parameters: going into the case.
> >  Unable to perform factorisation: need TOTMEM of at
> > least 4944943 (the maximum matrix size I can use is
> > 250x250).

TOTMEM is the parameter for the maximum amount of memory 
per process to use with the tester. Depending on the size 
of M, N, and NRHS, and on the number of processes you 
spawn, you may need to increase TOTMEM. It is hardcoded 
to 2 MB, which is enough to run the sample .dat files in 
the SCALAPACK/TESTING directory (as described in the 
ScaLAPACK installation guide, 
http://www.netlib.org/scalapack/scalapack_install.ps)
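
For reference, the limit is declared in each tester driver. 
A sketch of what it looks like in a driver such as 
TESTING/LIN/pdludriver.f (exact names and values vary from 
driver to driver):

      INTEGER            DBLESZ, MEMSIZ, TOTMEM
*     TOTMEM is the memory budget in bytes per process;
*     MEMSIZ is the corresponding workspace length in doubles.
      PARAMETER          ( DBLESZ = 8, TOTMEM = 2000000,
     $                     MEMSIZ = TOTMEM / DBLESZ )
      DOUBLE PRECISION   MEM( MEMSIZ )

Raise TOTMEM (e.g. to 50000000 for roughly 50 MB per 
process) and rebuild the testers to factor larger matrices.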

>    I could get rid of the TOTMEM error, but I don't
> know how to change the process grid size beyond 1x1,
> so that nprow*npcol can take a value greater than 1.
> Right now I can only get the executables to run for
> p=1, q=1.

The sample .dat files in the SCALAPACK/TESTING directory 
are configured to test with 8 processes. If you don't 
spawn 8 MPI processes on your mpirun command line, you 
will get the following error:

> >  2.ILLEGAL GRID: nprow*npcol = 8
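
The process grids are set near the end of each .dat file. 
An illustrative excerpt (the exact layout and comments 
differ from file to file), defining three grids of 1, 4, 
and 8 processes:

    3        number of process grids (ordered pairs of P & Q)
    1 2 2    values of P (NPROW)
    1 2 4    values of Q (NPCOL)

Each P(i)*Q(i) product must not exceed the number of MPI 
processes you actually spawn. To run on a single process, 
set the number of grids to 1 and both the P and Q lines 
to 1; to test larger grids, add pairs here and spawn 
enough processes.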

All of this information is explained in the ScaLAPACK 
installation guide. You may also need to read the 
documentation of your MPI implementation to learn how to 
spawn processes.
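With an MPICH-style launcher, starting 8 processes for the 
LU tester typically looks like the line below; the 
executable name (xdlu here) and the launcher flags depend 
on your MPI implementation and on how you built the 
testers:

    mpirun -np 8 ./xdlu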

Finally, for best performance, you MUST use an optimized 
BLAS library (such as ATLAS). ScaLAPACK performance 
hinges on local BLAS performance, as explained in the 
performance chapter of the ScaLAPACK Users' Guide 
(http://www.netlib.org/scalapack/slug).
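
The BLAS you link against is chosen in SLmake.inc when you 
build ScaLAPACK; with ATLAS it would look something like 
this (the path and the exact library names depend on your 
ATLAS installation):

    BLASLIB = -L/usr/local/atlas/lib -lf77blas -latlas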

Regards.

Patrick

---------------------------------------------------------
|   Patrick Geoffray, Ph.D.    patrick at myri.com         |
|   Myricom, Inc.              http://www.myri.com      |
|   Cell: 865-389-8852         325 N. Santa Anita Ave.  |
|   Fax:  865-974-1950         Arcadia, CA 91006        |
---------------------------------------------------------




