High Performance Computing - Charles Severance [66]
Closing Notes
Loops are the heart of nearly all high performance programs. The first goal with loops is to express them as simply and clearly as possible (i.e., eliminate the clutter). Then, use the profiling and timing tools to figure out which routines and loops are taking the time. Once you find the loops that are using the most time, try to determine if the performance of the loops can be improved.
First try simple modifications to the loops that don’t reduce the clarity of the code. You can also experiment with compiler options that control loop optimizations. Once you’ve exhausted the options of keeping the code looking clean, and if you still need more performance, resort to hand-modifying the code. Typically the loops that need a little hand-coaxing are loops that are making bad use of the memory architecture on a cache-based system. Hopefully the loops you end up changing are only a few of the overall loops in the program.
However, before going too far optimizing on a single-processor machine, take a look at how the program executes on a parallel system. Sometimes the modifications that improve performance on a single-processor system confuse the parallel-processor compiler. The compilers on parallel and vector systems generally have more powerful optimization capabilities, as they must identify areas of your code that will execute well on their specialized hardware. These compilers have been interchanging and unrolling loops automatically for some time now.
Exercises
Exercise 2.34.1.
Why is an unrolling amount of three or four iterations generally sufficient for simple vector loops on a RISC processor? What relationship does the unrolling amount have to floating-point pipeline depths?
Exercise 2.34.2.
On a processor that can execute one floating-point multiply, one floating-point addition/subtraction, and one memory reference per cycle, what’s the best performance you could expect from the following loop?
      DO I = 1,10000
        A(I) = B(I) * C(I) - D(I) * E(I)
      ENDDO
Exercise 2.34.3.
Try unrolling, interchanging, or blocking the loop in subroutine BAZFAZ to increase the performance. What method or combination of methods works best? Look at the assembly language generated by the compiler to see what approach it takes at its highest level of optimization.
Note
Compile the main routine and BAZFAZ separately; adjust NTIMES so that the untuned run takes about one minute; and use the compiler’s default optimization level.
      PROGRAM MAIN
      IMPLICIT NONE
      INTEGER M,N,I,J,NTIMES
      PARAMETER (N = 512, M = 640, NTIMES = 500)
      DOUBLE PRECISION Q(N,M), R(M,N)
C
      DO I=1,M
        DO J=1,N
          Q(J,I) = 1.0D0
          R(I,J) = 1.0D0
        ENDDO
      ENDDO
C
      DO I=1,NTIMES
        CALL BAZFAZ (Q,R,N,M)
      ENDDO
      END
      SUBROUTINE BAZFAZ (Q,R,N,M)
      IMPLICIT NONE
      INTEGER M,N,I,J
      DOUBLE PRECISION Q(N,M), R(N,M)
C
      DO I=1,N
        DO J=1,M
          R(I,J) = Q(I,J) * R(J,I)
        ENDDO
      ENDDO
C
      END
Exercise 2.34.4.
Code the matrix multiplication algorithm in the “straightforward” manner and compile it with various optimization levels. See if the compiler performs any type of loop interchange.
Try the same experiment with the following code:
      DO I=1,N
        DO J=1,N
          A(I,J) = A(I,J) + 1.3
        ENDDO
      ENDDO
Do you see a difference in the compiler’s ability to optimize these two loops? If you see a difference, explain it.
Exercise 2.34.5.
Code the matrix multiplication algorithm both ways shown in this chapter. Execute the program for a range of values for N. Graph the execution time divided by N³ for matrix sizes ranging from 50×50 to 500×500. Explain the performance you see.
[14] However, you can sometimes trade accuracy for speed.
[15] The Livermore Loops was a benchmark that specifically tested the capability of a compiler to effectively optimize a set of loops. In addition to being a performance benchmark, it was