OpenMP
The OpenMP application programming interface (API) supports multi-platform shared memory multiprocessing programming in C/C++ and Fortran on many architectures, including Unix and Microsoft Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer.
A so-called hybrid model of parallel programming, using both OpenMP and MPI (Message Passing Interface), is often used for programming computer clusters.
History
The OpenMP Architecture Review Board (ARB) published its first standard, OpenMP for Fortran 1.0, in October 1997. In October of the following year it released the C/C++ standard. Version 2.0 of the Fortran standard appeared in 2000, and version 2.0 of the C/C++ standard was released in 2002. The current version is 2.5, a combined C/C++/Fortran standard released in 2005.
The core elements
The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data environment management, thread synchronization, user-level runtime routines, and environment variables.
- thread creation: omp parallel. It is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted the master thread and has thread ID 0.
Example: display hello, world using multiple threads.
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>   /* conditional compilation lets sequential compilers ignore the omp.h header */
#endif

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}
- work-sharing constructs: used to specify how to assign independent work to one or all of the threads (a sketch of the sections and single constructs follows the loop example below).
- omp for or omp do: used to split up loop iterations among the threads
- sections: assigning consecutive but independent code blocks to different threads
- single: specifying a code block that is executed by only one thread; a barrier is implied at the end
- master: similar to single, but the code block will be executed by the master thread only and no barrier is implied at the end.
Example: initialize the value of a large array in parallel, using each thread to do a portion of the work
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>   /* conditional compilation lets sequential compilers ignore the omp.h header */
#endif
#define N 100000

int main(void)
{
    int a[N];
    int i;

    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = 2 * i;
    return 0;
}
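Example: a minimal sketch of the sections and single constructs. The helper functions task_a and task_b are hypothetical and stand in for any two independent pieces of work.

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

/* task_a and task_b are hypothetical, independent pieces of work */
void task_a(void) { printf("task A\n"); }
void task_b(void) { printf("task B\n"); }

int main(void)
{
    #pragma omp parallel
    {
        #pragma omp sections
        {
            #pragma omp section
            task_a();   /* may be executed by one thread ... */
            #pragma omp section
            task_b();   /* ... while another thread executes this one */
        }               /* implicit barrier at the end of the sections construct */

        #pragma omp single
        printf("printed once, by whichever thread reaches it first\n");
    }
    return 0;
}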
- Data environment management: Since OpenMP is a shared memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary, and there is a need to pass values between the sequential part and the parallel region (the code block executed in parallel), so data environment management is introduced in the form of data clauses (see the example after this list).
- shared: the data is shared, which means visible and accessible by all threads simultaneously
- private: the data is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and its value is not maintained for use outside the parallel region.
- firstprivate: the data is private to each thread, but initialized using the value of the variable using the same name from the master thread.
- lastprivate: the data is private to each thread. The value of the private copy is copied to the global variable of the same name outside the parallel region after the last iteration of the parallelized loop. A variable can be both firstprivate and lastprivate.
- threadprivate: the data is global, but it is private in each parallel region during the runtime. The difference between threadprivate and private is the global scope associated with threadprivate and the value preserved across parallel regions.
- copyin: similar to firstprivate for private variables; threadprivate variables are not initialized unless copyin is used to pass the value from the corresponding global variables. No copyout is needed because the value of a threadprivate variable is maintained throughout the execution of the whole program.
- reduction: the variable has a local copy in each thread, but the values of the local copies are combined (reduced) into a single global shared variable.
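Example: a sketch combining several data clauses. Each thread gets an initialized copy of the (hypothetical) offset value via firstprivate, and the per-thread partial sums are combined into the shared variable sum by the reduction clause.

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif
#define N 100000

int main(void)
{
    int i;
    int offset = 3;   /* read-only input; firstprivate gives each thread an initialized private copy */
    long sum = 0;     /* each thread accumulates into a private copy, combined by reduction(+) */

    #pragma omp parallel for firstprivate(offset) reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += i + offset;

    printf("sum = %ld\n", sum);
    return 0;
}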
- synchronization constructs (see the example after this list):
- critical section: the enclosed code block will be executed by all threads, but only by one thread at a time, never simultaneously. It is often used to protect shared data from race conditions.
- atomic: similar to critical section, but advises the compiler to use special hardware instructions for better performance. Compilers may choose to ignore this suggestion and use a critical section instead.
- barrier: each thread waits until all the other threads of the team have reached this point before any of them proceeds
- ordered: the enclosed block is executed in the order in which iterations would be executed in a sequential loop
- flush: makes a thread's temporary view of shared variables consistent with main memory
- locks: runtime library routines (such as omp_set_lock and omp_unset_lock) that provide mutual exclusion under explicit program control
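Example: a sketch showing critical, atomic, and barrier; the shared counters are illustrative only.

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void)
{
    int counter = 0;   /* shared, protected by a critical section */
    int hits = 0;      /* shared, updated atomically */

    #pragma omp parallel
    {
        #pragma omp critical
        {
            /* only one thread at a time executes this block */
            counter = counter + 1;
        }

        #pragma omp atomic
        hits++;               /* a single memory update, eligible for hardware atomic instructions */

        #pragma omp barrier   /* every thread waits here until all threads have arrived */
    }

    printf("counter = %d, hits = %d\n", counter, hits);
    return 0;
}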
- user-level runtime routines: used to modify/check the number of threads, detect whether the execution context is in a parallel region, query how many processors are in the current system, etc. (see the sketch after this list)
- environment variables: a method to alter the execution features of OpenMP applications, used to control the scheduling of loop iterations, the default number of threads (OMP_NUM_THREADS), etc.
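Example: a sketch of the user-level runtime routines; the call to omp_set_num_threads overrides the OMP_NUM_THREADS environment variable for subsequent parallel regions.

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void)
{
#ifdef _OPENMP
    /* query the runtime before entering a parallel region */
    printf("processors available: %d\n", omp_get_num_procs());
    printf("default max threads : %d\n", omp_get_max_threads());

    omp_set_num_threads(4);   /* request four threads for subsequent regions */

    #pragma omp parallel
    {
        if (omp_get_thread_num() == 0)
            printf("threads in this region: %d (in parallel: %d)\n",
                   omp_get_num_threads(), omp_in_parallel());
    }
#else
    printf("compiled without OpenMP support\n");
#endif
    return 0;
}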
Sample Programs
Hello World
C/C++
#include <omp.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
    int id, nthreads;

    #pragma omp parallel private(id)
    {
        id = omp_get_thread_num();
        printf("Hello World from thread %d\n", id);
        #pragma omp barrier
        if ( id == 0 ) {
            nthreads = omp_get_num_threads();
            printf("There are %d threads\n", nthreads);
        }
    }
    return 0;
}
Fortran 77
      PROGRAM HELLO
      INTEGER ID, NTHRDS
      INTEGER OMP_GET_THREAD_NUM, OMP_GET_NUM_THREADS
C$OMP PARALLEL PRIVATE(ID)
      ID = OMP_GET_THREAD_NUM()
      PRINT *, 'HELLO WORLD FROM THREAD', ID
C$OMP BARRIER
      IF ( ID .EQ. 0 ) THEN
         NTHRDS = OMP_GET_NUM_THREADS()
         PRINT *, 'THERE ARE', NTHRDS, 'THREADS'
      END IF
C$OMP END PARALLEL
      END
Free form Fortran 90
program hello90
  use omp_lib
  integer :: id, nthreads
  !$omp parallel private(id)
  id = omp_get_thread_num()
  write (*,*) 'Hello World from thread', id
  !$omp barrier
  if ( id .eq. 0 ) then
    nthreads = omp_get_num_threads()
    write (*,*) 'There are', nthreads, 'threads'
  end if
  !$omp end parallel
end program
Pros and Cons of OpenMP
Pros
- simple: the programmer need not deal with message passing as MPI requires
- data layout and decomposition is handled automatically by directives.
- incremental parallelism: can work on one portion of the program at one time, no dramatic change to code is needed.
- a unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.
- Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.
Cons
- currently runs efficiently only on shared-memory multiprocessor platforms
- requires a compiler that supports OpenMP. Visual C++ 2005 supports it, as do the Intel compilers for their x86 and IPF product series. GCC 4.2 will support OpenMP, though some distributors are likely to add OpenMP support to their GCC 4.1 based system compilers earlier.
- low parallel efficiency: it relies heavily on parallelizable loops, which can leave a relatively high percentage of non-loop code in the sequential part
Performance expectation of OpenMP
One might expect to get an N-times reduction in wall clock execution time (an N-times speedup) when running a program parallelized using OpenMP on an N-processor platform. This seldom happens, for the following reasons:
- A large portion of the program may not be parallelized by OpenMP, which sets a theoretical upper limit on the speedup according to Amdahl's law (see the worked example after this list).
- N processors in an SMP may bring N times the computational power, but the memory bandwidth usually does not scale up N times. Quite often, the memory path is shared by multiple processors, and performance degradation may be observed when they compete for the shared memory bandwidth.
- Many other common problems affecting the final speedup in parallel computing also apply to OpenMP, such as load balancing and synchronization overhead.
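As a rough illustration of the first point: if a fraction P of the run time can be parallelized and N processors are used, Amdahl's law bounds the speedup by 1 / ((1 - P) + P/N). With P = 0.95 and N = 8, for instance, the best achievable speedup is about 1 / (0.05 + 0.95/8) ≈ 5.9 rather than 8, no matter how efficiently the parallel part runs.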
OpenMP Benchmarks
There are some public domain OpenMP benchmarks for users to try.
- NAS parallel benchmark
- OpenMP validation suite
- OpenMP source code repository
A commercial benchmark is also very popular.
- SPECOMP
External links
- The official site for OpenMP
- cOMPunity Community of OpenMP Users, Researchers, Tool Developers and Providers
- A simple OpenMP Tutorial to introduce OpenMP programming concepts
- TotalView A debugger for OpenMP programs
- Intel® Threading Tools
- Dynamic Performance Monitor for OpenMP
- Parawiki page for OpenMP
- PC Cluster Consortium
- GCC's OpenMP implementation