Intel® OpenMP* Runtime Library Interface

Introduction

This document describes the interface provided by the Intel® OpenMP runtime library to the compiler. Routines that are called directly as simple functions by user code are not currently described here, since they are defined in the OpenMP specification, available from http://openmp.org.

The aim here is to explain the interface from the compiler to the runtime.

The overall design is described, and each function in the interface has its own description. (At least, that's the ambition; we may not be there yet.)

Building the Runtime

For the impatient, we cover building the runtime as the first topic here.

A top-level Makefile is provided that attempts to derive a suitable configuration for the most commonly used environments. To see the default settings, type:

% make info

You can change the Makefile's behavior with the following options:

To use any of the options above, simply add <option_name>=<value>. For example, if you want to build with gcc instead of icc, type:

% make compiler=gcc

Under the hood of the top-level Makefile, the runtime is built by a Perl script that in turn drives a detailed runtime-system make. The script can be found at tools/build.pl, and it will print information about all of its flags and controls if invoked as

% tools/build.pl --help 

If invoked with no arguments, it will try to build a set of libraries that are appropriate for the machine on which the build is happening. It also provides many options for building out of tree and for configuring library features. Consult the --help output for details.

Supported RTL Build Configurations

The architectures supported are IA-32 architecture, Intel® 64, and Intel® Many Integrated Core Architecture. The build configurations supported are shown in the table below.

               icc/icl       gcc
Linux OS       Yes(1,5)      Yes(2,4)
OS X           Yes(1,3,4)    No
Windows OS     Yes(1,4)      No

(1) On IA-32 architecture and Intel® 64, icc/icl versions 12.x are supported (12.1 is recommended).
(2) gcc version 4.6.2 is supported.
(3) For icc on OS X, OS X version 10.5.8 is supported.
(4) Intel® Many Integrated Core Architecture not supported.
(5) On Intel® Many Integrated Core Architecture, icc/icl versions 13.0 or later are required.

Front-end Compilers that work with this RTL

The following compilers are known to do compatible code generation for this RTL: icc/icl, gcc. Code generation is discussed in more detail later in this document.

Outlining

The runtime interface is based on the idea that the compiler "outlines" sections of code that are to run in parallel into separate functions that can then be invoked in multiple threads. For instance, simple code like this

void foo()
{
#pragma omp parallel
    {
        ... do something ...
    }
}

is converted into something that looks conceptually like this (where the names used are merely illustrative; the real library function names will be used later after we've discussed some more issues...)

static void outlinedFooBody()
{
    ... do something ...
}

void foo()
{
    __OMP_runtime_fork(outlinedFooBody, (void*)0);   // Not the real function name!
}
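
Conceptually, all the fork entry point has to do is arrange for outlinedFooBody to be executed once in each thread of the team. The following is a purely illustrative sketch using raw pthreads and a fixed team size; __OMP_runtime_fork, thread_trampoline, and fork_arg are placeholder names, the real entry point is __kmpc_fork_call (shown later), and a real runtime would reuse a persistent thread team rather than creating and joining threads for every region.

#include <pthread.h>

typedef void (*outlined_fn)(void *args);

struct fork_arg { outlined_fn fn; void *args; };

static void *thread_trampoline(void *p)
{
    struct fork_arg *fa = (struct fork_arg *)p;
    fa->fn(fa->args);                 // each thread runs the outlined body
    return 0;
}

static void __OMP_runtime_fork(outlined_fn fn, void *args)   // Not the real function name!
{
    enum { NTHREADS = 4 };            // a real runtime sizes and reuses its team
    pthread_t team[NTHREADS];
    struct fork_arg fa = { fn, args };
    for (int i = 0; i < NTHREADS; ++i)
        pthread_create(&team[i], 0, thread_trampoline, &fa);
    for (int i = 0; i < NTHREADS; ++i)
        pthread_join(team[i], 0);
}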

Addressing shared variables

In real uses of the OpenMP API there are normally references from the outlined code to shared variables that are in scope in the containing function. Therefore the containing function must be able to address these variables. The runtime supports two alternate ways of doing this.

Current Technique

The technique currently supported by the runtime library is for the outlined function to receive a separate pointer to each shared variable that it can access. This is what is shown in the example below.

We hope soon to provide an interface to support the alternative implementation described in the next section, which has performance advantages for small parallel regions that have many shared variables.
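
For instance, for a region that updates a shared float b using a shared int a, the compiler passes the address of each shared variable at the fork point, and the outlined function receives one matching pointer parameter per variable. This sketch follows the convention of the work-sharing example at the end of this document; the names bar, bar_outlined, a, b, and loc are purely illustrative.

static ident_t loc;   // source-location descriptor, as in the work-sharing example

void bar_outlined( int *gtid, int *btid, int *a_shp, float *b_shp )
{
    // One pointer parameter per shared variable accessed in the region.
    *b_shp += *a_shp;
}

void bar()
{
    int   a = 1;
    float b = 0.0f;
    // argc == 2: the runtime forwards &a and &b to bar_outlined in each thread.
    __kmpc_fork_call( &loc, 2, bar_outlined, &a, &b );
}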

Future Technique

The idea is to treat the outlined function as though it were a lexically nested function, and pass it a single argument which is the pointer to the parent's stack frame. Provided that the compiler knows the layout of the parent frame when it is generating the outlined function it can then access the up-level variables at appropriate offsets from the parent frame. This is a classical compiler technique from the 1960s to support languages like Algol (and its descendants) that support lexically nested functions.

The main benefit of this technique is that there is no code required at the fork point to marshal the arguments to the outlined function. Since the runtime knows statically how many arguments must be passed to the outlined function, it can easily copy them to the thread's stack frame. Therefore the performance of the fork code is independent of the number of shared variables that are accessed by the outlined function.

If it is hard to determine the stack layout of the parent while generating the outlined code, it is still possible to use this approach by collecting all of the variables in the parent that are accessed from outlined functions into a single `struct` which is placed on the stack, and whose address is passed to the outlined functions. In this way the offsets of the shared variables are known (since they are inside the struct) without needing to know the complete layout of the parent stack-frame. From the point of view of the runtime either of these techniques is equivalent, since in either case it only has to pass a single argument to the outlined function to allow it to access shared variables.

A scheme like this is how gcc generates outlined functions.
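
As a sketch of the struct-based variant (all names here are illustrative, and __OMP_runtime_fork is the same conceptual placeholder used earlier, not a real runtime entry point), the shared variables are gathered into a single struct in the parent's stack frame and only its address is passed:

struct foo_shared_frame {           // all shared variables gathered together
    int   a;
    float b;
};

static void outlinedFooBody( void *arg )
{
    struct foo_shared_frame *frame = (struct foo_shared_frame *)arg;
    frame->b += frame->a;           // up-level accesses become fixed offsets
}

void foo()
{
    struct foo_shared_frame frame = { 1, 0.0f };    // lives in foo's stack frame
    // A single argument is marshalled, however many variables are shared.
    __OMP_runtime_fork( outlinedFooBody, &frame );  // Not the real function name!
}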

Library Interfaces

The library functions used for specific parts of the OpenMP language implementation are documented in different modules.

Examples

Work Sharing Example

This example shows the code generated for a parallel for with reduction and dynamic scheduling.

extern float foo( void );

int main () {
    int i; 
    float r = 0.0; 
    #pragma omp parallel for schedule(dynamic) reduction(+:r) 
    for ( i = 0; i < 10; i ++ ) {
        r += foo(); 
    }
}

The transformed code looks like this.

extern float foo( void ); 

int main () {
    static int zero = 0; 
    auto int gtid; 
    auto float r = 0.0; 
    __kmpc_begin( & loc3, 0 ); 
    // The gtid is not actually required in this example so could be omitted;
    // We show its initialization here because it is often required for calls into
    // the runtime and should be locally cached like this.
    gtid = __kmpc_global_thread_num( & loc3 );
    __kmpc_fork_call( & loc7, 1, main_7_parallel_3, & r );
    __kmpc_end( & loc0 ); 
    return 0; 
}

struct main_10_reduction_t_5 { float r_10_rpr; }; 

static kmp_critical_name lck = { 0 };
static ident_t loc10; // loc10.flags should contain KMP_IDENT_ATOMIC_REDUCE bit set 
                      // if compiler has generated an atomic reduction.

void main_7_parallel_3( int *gtid, int *btid, float *r_7_shp ) {
    auto int i_7_pr; 
    auto int lower, upper, liter, incr; 
    auto struct main_10_reduction_t_5 reduce; 
    reduce.r_10_rpr = 0.F; 
    liter = 0; 
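    // Dynamic scheduling: initialize the dispatcher for iterations 0..9
    // (35 selects the dynamic schedule; stride 1, chunk 1), then claim
    // chunks until __kmpc_dispatch_next_4 reports that no iterations remain.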
    __kmpc_dispatch_init_4( & loc7,*gtid, 35, 0, 9, 1, 1 ); 
    while ( __kmpc_dispatch_next_4( & loc7, *gtid, & liter, & lower, & upper, & incr ) ) {
        for( i_7_pr = lower; upper >= i_7_pr; i_7_pr ++ ) 
          reduce.r_10_rpr += foo(); 
    }
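    // The return value of __kmpc_reduce_nowait selects how this thread combines
    // its partial result: 1 => perform the reduction here and release it with
    // __kmpc_end_reduce_nowait, 2 => use the atomic update, otherwise nothing to do.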
    switch( __kmpc_reduce_nowait( & loc10, *gtid, 1, 4, & reduce, main_10_reduce_5, & lck ) ) {
        case 1:
           *r_7_shp += reduce.r_10_rpr;
           __kmpc_end_reduce_nowait( & loc10, *gtid, & lck );
           break;
        case 2:
           __kmpc_atomic_float4_add( & loc10, *gtid, r_7_shp, reduce.r_10_rpr );
           break;
        default:;
    }
} 

void main_10_reduce_5( struct main_10_reduction_t_5 *reduce_lhs, 
                       struct main_10_reduction_t_5 *reduce_rhs ) 
{ 
    reduce_lhs->r_10_rpr += reduce_rhs->r_10_rpr; 
}
