Razya Ladelsky (IBM R&D, Haifa)
With the emergence of multicore architectures there is a growing need for automatic parallelization, which transforms sequential code into multi-threaded code. OpenMP defines language extensions to C, C++, and Fortran for implementing multi-threaded shared-memory applications. Generating such extensions in the compiler relieves programmers of the manual parallelization process. The OpenMP specification has been implemented in GCC and has been part of the standard release since version 4.2.
In this talk we review the OpenMP and data-dependence support that serves as the basic infrastructure for automatic parallelization in GCC. We describe the capabilities of the automatic parallelization, demonstrate them with some examples, and show the benefits with SPEC2006 experiments. Finally, we discuss current and future directions of work that may further extend the optimization's applicability.