This is a simple example of the typical style of SPMD code. All the processes execute the code at roughly the same time, but, based on information local to each process, the actions performed by different processes may be quite different:
      SUBROUTINE SENDCELL(RED,ROWS,COLS,OFFSET,MYLEN,INUM,PTID,R,C)
      INCLUDE '../include/fpvm3.h'
      INTEGER ROWS,COLS,OFFSET,MYLEN,INUM,PTID,R,C,I,INFO,BUFID
      REAL*8 RED(0:ROWS+1,0:COLS+1)
      REAL*8 CENTER

* Compute the local column number to determine if the cell is ours
      I = C - OFFSET
      IF ( I .GE. 1 .AND. I .LE. MYLEN ) THEN
        IF ( INUM .EQ. 0 ) THEN
* The master owns the cell, so no message is needed
          PRINT *,'Master has', RED(R,I), R, C, I
        ELSE
* A worker owns the cell; pack it and send it to the master
          CALL PVMFINITSEND( PVMDEFAULT, BUFID )
          CALL PVMFPACK( REAL8, RED(R,I), 1, 1, INFO )
          PRINT *, 'INUM:',INUM,' Returning',R,C,RED(R,I),I
          CALL PVMFSEND( PTID, 3, INFO )
        ENDIF
      ELSE
* The cell is not ours; the master waits to receive it from the owner
        IF ( INUM .EQ. 0 ) THEN
          CALL PVMFRECV( -1 , 3, BUFID )
          CALL PVMFUNPACK( REAL8, CENTER, 1, 1, INFO )
          PRINT *, 'Master Received',R,C,CENTER
        ENDIF
      ENDIF
      RETURN
      END
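In the full program, every process makes the identical call, and the tests inside SENDCELL sort out which role each process plays. A hypothetical invocation might look like this (MTID, the master's task ID, is a placeholder, not a name from the original program):

* Every process asks about global cell (100,100).  The owner either
* prints it (if the owner is the master) or sends it to the master,
* and the master receives it.  MTID is a placeholder task ID.
      CALL SENDCELL( RED, ROWS, COLS, OFFSET, MYLEN, INUM, MTID,
     +               100, 100 )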
Like the previous routine, the STORE routine is executed on all processes. The idea is to store a value into a global row and column position. First, we must determine whether the cell belongs to our process at all. If it does, we compute the local column (I) within our subset of the overall matrix and then store the value:
      SUBROUTINE STORE(RED,ROWS,COLS,OFFSET,MYLEN,R,C,VALUE,INUM)
      REAL*8 RED(0:ROWS+1,0:COLS+1)
      REAL VALUE
      INTEGER ROWS,COLS,OFFSET,MYLEN,R,C,I,INUM

* Compute the local column and return if the cell is not ours
      I = C - OFFSET
      IF ( I .LT. 1 .OR. I .GT. MYLEN ) RETURN
      RED(R,I) = VALUE
      RETURN
      END
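Because the owner check is inside STORE, callers can be written as if the matrix were global. A hypothetical use (not from the original program), setting global column 1 to a boundary value of 1.0:

* Every process executes the loop, but only the process that owns
* global column 1 actually stores the values (hypothetical usage).
      DO 40 R=1,ROWS
        CALL STORE( RED, ROWS, COLS, OFFSET, MYLEN, R, 1, 1.0, INUM )
   40 CONTINUE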
When this program executes, it produces the following output:
% pheat
INUM: 0 Local 1 50 Global 1 50
Master Received 100 100 3.4722390023541D-07
%
We see two lines of output. The first shows the values that Process 0 used in its geometry computation. The second is the master process reporting the temperature at cell (100,100) after 200 time steps.
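The geometry computation itself is not shown here, but a minimal sketch of what produces that first line might look like the following, assuming COLS columns are divided as evenly as possible among NPROC processes (NPROC and the exact arithmetic are assumptions, not the book's code):

* Divide COLS columns among NPROC processes: this process gets MYLEN
* columns, globally numbered OFFSET+1 through OFFSET+MYLEN.  Assumes
* NPROC .LE. COLS.
      MYLEN = ( COLS + NPROC - 1 ) / NPROC
      OFFSET = INUM * MYLEN
      IF ( OFFSET + MYLEN .GT. COLS ) MYLEN = COLS - OFFSET
      PRINT *,'INUM:',INUM,' Local',1,MYLEN,
     +        ' Global',OFFSET+1,OFFSET+MYLEN

With COLS=200 and four processes, Process 0 would print exactly the "Local 1 50 Global 1 50" line seen above.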
One interesting technique that is useful for debugging this type of program is to change the number of processes that are created. If the program is not moving its data properly, you usually get different results when different numbers of processes are used. If you look closely, the code above produces the same correct results whether it is run with one process or with 30.
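One way to exploit this is to compare a global checksum across runs with different process counts. Here is a minimal sketch, not part of the original program, assuming the same variables as above plus NPROC (the process count) and a PVM group named 'pheat' (both assumptions); message tag 4 is chosen to avoid the tag 3 used by SENDCELL:

      SUBROUTINE CHKSUM(RED,ROWS,COLS,MYLEN,INUM,NPROC)
      INCLUDE '../include/fpvm3.h'
      INTEGER ROWS,COLS,MYLEN,INUM,NPROC,I,J,K,INFO,BUFID,MTID
      REAL*8 RED(0:ROWS+1,0:COLS+1)
      REAL*8 SUM,PART

* Sum this process's local cells
      SUM = 0.0D0
      DO 20 J=1,MYLEN
        DO 10 I=1,ROWS
          SUM = SUM + RED(I,J)
   10   CONTINUE
   20 CONTINUE
      IF ( INUM .EQ. 0 ) THEN
* The master adds in one partial sum from each other process
        DO 30 K=1,NPROC-1
          CALL PVMFRECV( -1, 4, BUFID )
          CALL PVMFUNPACK( REAL8, PART, 1, 1, INFO )
          SUM = SUM + PART
   30   CONTINUE
        PRINT *, 'Global checksum', SUM
      ELSE
* The workers send their partial sums to the master (instance 0)
        CALL PVMFGETTID( 'pheat', 0, MTID )
        CALL PVMFINITSEND( PVMDEFAULT, BUFID )
        CALL PVMFPACK( REAL8, SUM, 1, 1, INFO )
        CALL PVMFSEND( MTID, 4, INFO )
      ENDIF
      RETURN
      END

If the decomposition is wrong, runs with different process counts usually print different checksums.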
Notice that there is no barrier operation at the end of each time step. This is in contrast to the way parallel loops operate on shared uniform memory multiprocessors, which force a barrier at the end of each loop. Because we have used an “owner computes” rule, and nothing is computed until all the required ghost data has been received, there is no need for a barrier. The receipt of the messages with the proper ghost values allows a process to begin computing immediately, without regard to what the other processes are currently doing.
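For contrast, PVM does provide an explicit barrier for process groups. If the algorithm did require end-of-step synchronization, it would look roughly like this one-line sketch (NPROC and the group name 'pheat' are assumptions):

* Hypothetical end-of-time-step synchronization (not needed here):
* all NPROC members of the group must arrive before any proceeds.
      CALL PVMFBARRIER( 'pheat', NPROC, INFO )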
This example can be used either as a framework for developing other grid-based computations, or as a good excuse to use HPF and appreciate the hard work that the HPF compiler developers have done. A well-done HPF implementation of this simulation should outperform the PVM implementation because HPF can make tighter optimizations. Unlike us, the HPF compiler doesn’t have to keep its generated code readable.
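To see why, compare all of the explicit packing, sending, and owner checks above with the data mapping an HPF version would declare. This is only a flavor-of-HPF sketch, not code from this book; the directive syntax is standard HPF, but the grid size is a placeholder:

* One HPF directive asks the compiler for the same block distribution
* of columns that the PVM code manages by hand with OFFSET and MYLEN.
      INTEGER ROWS,COLS
      PARAMETER (ROWS=200,COLS=200)
      REAL*8 RED(0:ROWS+1,0:COLS+1)
!HPF$ DISTRIBUTE RED(*,BLOCK)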
PVM Summary
PVM is a widely used tool because it affords portability across every architecture other than SIMD. Once the effort has been invested in converting a code to message passing, it tends to run well on many architectures.
The primary complaints about PVM include:
The need for a pack step separate from the send step (see the sketch after this list)
The overhead that comes from its design for heterogeneous environments
The lack of automation for common tasks such as geometry computations
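The first of these refers to the idiom used for every message in the code above: with placeholder names X, N, PTID, and MSGTAG, each message costs three calls:

* The separate pack-then-send idiom: initialize a buffer, pack the
* data into it, then send it (X, N, PTID, MSGTAG are placeholders).
      CALL PVMFINITSEND( PVMDEFAULT, BUFID )
      CALL PVMFPACK( REAL8, X, N, 1, INFO )
      CALL PVMFSEND( PTID, MSGTAG, INFO )

Later PVM releases added a combined pack-and-send call (pvmfpsend) for exactly this reason.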
But all in all, for a certain set of programmers, PVM is the tool to use. If you would like to learn more about PVM, see PVM: A User's Guide and Tutorial for Networked Parallel Computing, by Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, and Vaidy Sunderam (MIT Press).