Q: Node selection with mpirun under Scyld

Keith Underwood keithu at parl.clemson.edu
Mon Feb 26 14:59:46 PST 2001


You can create a p4pg file.  In it, you specify which machines to use and
how many copies of the job to start on each (the format is covered in the
MPICH documentation).  Use the numeric node IDs for the node designations.
Hmmm... I'm not sure how you designate the head node, though...
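
For what it's worth, a stock MPICH (ch_p4) procgroup file looks roughly
like this (untested under Scyld; the node numbers and file name here are
only illustrative):

   local 0
   1 1 ./sturm
   2 1 ./sturm

The first line is the machine mpirun is started from ("local" plus the
number of extra processes to start there); each following line is
<node> <nprocs> <program>, which matches the "Writing line: 2 1 ./sturm"
output below.  With plain MPICH you would then run something like

   mpirun -p4pg sturm.pg ./sturm

but I haven't checked whether Scyld's modified mpirun accepts -p4pg in
exactly that form, or whether "local" is the right way to name the head
node there.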


On Mon, 26 Feb 2001, Thomas Clausen wrote:

> Hi all,
>
> I have a question about running MPI programs under the Scyld distribution. I
> have been unable to find an answer in the documentation.
>
> How do I specify which nodes to run an MPI program on? Or equivalently, how
> do I avoid my job being sent to specific nodes (if, for instance, some nodes
> have too little memory for the job)?
>
> I have some idea from the following:
>
> Launching an mpi program with
>
> toc@madonna:~/src/sturm$ mpirun --np 2 --log-thresh progress ./sturm
>
> gives
>
> argv[5] (./sturm) is not an option
> mpirun option processing stopped at ./sturm
> scheduling the job
> calling bproc_numnodes
> bproc says there are 32 nodes
> node 0 is not up (status is 1)
>
> permission match (unowned)
> node 0 is 1
>
> ...
> Writing line: 2 1 ./sturm
>
> Done writing scheduler file
>
>
> Looks like bproc finds a permission match. How do I affect the permissions?
>
> Thanks!
>
> Thomas Clausen
>
> --
>    .^.    Thomas Clausen, graduate student
>    /V\    Physics Department, Wesleyan University, CT
>   // \\   Tel 860-685-2018, fax 860-685-2031
>  /(   )\
>   ^^-^^   Use Linux
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>

---------------------------------------------------------------------------
Keith Underwood                   Parallel Architecture Research Lab (PARL)
keithu at parl.clemson.edu                                  Clemson University




