[Beowulf] parallelization problem

Peter Faber peterffaber at web.de
Sat Aug 15 08:31:17 PDT 2009


amjad ali wrote:
> I am parallelizing a 2D CFD code in FORTRAN+OPENMPI. Suppose that the grid
> (all triangles) is partitioned among 8 processes using METIS. Each process
> has a different number of neighboring processes. Suppose each process has n
> elements/faces whose data it needs to send to the corresponding neighboring
> processes, and m elements/faces on which it needs to get data from the
> corresponding neighboring processes. The values of n and m are different
> for each process. Another aim is to hide the communication behind
> computation. For this I do the following for each process:
> DO j = 1, n
>
>    CALL MPI_ISEND(send_data, num, type, dest(j), tag, MPI_COMM_WORLD, &
>                   ireq(j), ierr)
>
> ENDDO
>
> DO k = 1, m
>
>    CALL MPI_RECV(recv_data, num, type, source(k), tag, MPI_COMM_WORLD, &
>                  status, ierr)
>
> ENDDO

You may want to place an MPI_WAIT (or an MPI_WAITALL over ireq) somewhere
after the MPI_RECV loop, to make sure all the nonblocking sends have actually
completed before their buffers are reused. If your program does not work with
the wait in place, there is probably something wrong with your values of n,
m, dest(j) and/or source(k), which may also explain the memory "leak".
Perhaps you can check these values with a smaller number of processes?
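For illustration, a minimal sketch of the pattern I mean (the per-neighbor
buffers, the interior-computation placement and the use of
MPI_WAITALL/MPI_STATUSES_IGNORE are my assumptions, not your actual code):

   DO j = 1, n
      CALL MPI_ISEND(send_data(1,j), num, type, dest(j), tag, &
                     MPI_COMM_WORLD, ireq(j), ierr)
   ENDDO

   DO k = 1, m
      CALL MPI_RECV(recv_data(1,k), num, type, source(k), tag, &
                    MPI_COMM_WORLD, status, ierr)
   ENDDO

   ! computation on interior elements can go here, overlapped with the sends

   ! complete all outstanding sends before send_data is overwritten/reused
   CALL MPI_WAITALL(n, ireq, MPI_STATUSES_IGNORE, ierr)

Without the wait, the n send requests are never completed and freed, so they
accumulate every time step, which could also contribute to the growing memory
use you see.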

Just my 2 cents...
  PFF
