Distributed-memory systems are potentially scalable to a very large number of processors and promise to be powerful tools for solving large-scale scientific and engineering problems. However, these machines are currently difficult to program, since the user has to distribute the data across the processors and explicitly formulate the communication required by the program under the selected distribution. During the past years, language extensions of standard programming languages such as Fortran were developed that allow a concise formulation of data distribution, and new compilation methods were designed and implemented that allow the programming of such machines at this relatively high level. In this paper, we describe the current state of the art in compiling procedural languages (in particular, Fortran) for distributed-memory machines, analyze the limitations of these approaches, and outline future research.

(SPMD) program for execution on the target DMMP. The compiler analyzes the source code, translating global data references into local and nonlocal data references based on the distributions specified by the user. The nonlocal references are satisfied by inserting appropriate message-passing statements in the generated code. Finally, the communication is optimized where possible, in particular by combining messages and by sending data at the earliest possible point in time. In algorithms where some data references are made through a level of indirection (such as for unstructured mesh codes and sparse matrix solvers), some of the analysis has to be performed at run time, and the task of the compiler is to generate code to perform this analysis and set up the required communication. This paper is devoted to compilation techniques for the source-to-source translation of programs in an ex-
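The translation of global data references into local and nonlocal ones can be sketched for the simplest case, a block distribution. The following is an illustrative sketch only; the names (`owner`, `to_local`) and the particular distribution are assumptions for exposition, not the paper's implementation:

```python
# Sketch: global-to-local reference translation under a block distribution.
# All names and parameters here are illustrative assumptions.

N = 16          # global array size
P = 4           # number of processors
B = N // P      # block size per processor (assumes P divides N)

def owner(i):
    """Processor that owns global index i under a block distribution."""
    return i // B

def to_local(i):
    """Local index of global index i on its owning processor."""
    return i % B

def needs_communication(i):
    """For a reference pattern like a(i) = b(i+1): if the owner of i
    differs from the owner of i+1, the compiler must insert a receive
    on owner(i) and a matching send on owner(i+1)."""
    return owner(i) != owner(i + 1)

print(owner(7), to_local(7), needs_communication(3))  # 1 3 True
```

References at a block boundary (here, global index 3 reading index 4) are exactly the ones for which the compiler inserts message-passing statements; interior references stay purely local.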
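The run-time analysis mentioned for indirect references (e.g. `x(idx(i))` in unstructured mesh codes) is typically organized as an inspector/executor pair: the inspector scans the index array once and builds a communication schedule, which the executor then uses to gather nonlocal values. A minimal sketch, in which the schedule layout, the block distribution, and the simulated remote memory are all assumptions:

```python
# Inspector/executor sketch for an indirect reference x(idx(i)).
# The distribution, schedule format, and "remote_data" stand-in for
# message passing are illustrative assumptions, not the paper's code.

B = 4  # block size: processor p owns global indices [p*B, (p+1)*B)

def inspector(my_rank, idx):
    """Scan the index array once; record, per owning processor, which
    nonlocal elements must be fetched (the communication schedule)."""
    schedule = {}
    for g in idx:
        p = g // B
        if p != my_rank:
            schedule.setdefault(p, set()).add(g)
    return {p: sorted(s) for p, s in schedule.items()}

def executor(my_rank, idx, local_data, remote_data, schedule):
    """Gather the scheduled nonlocal values (simulated here by a
    dictionary lookup instead of real messages), then run the loop."""
    cache = {g: remote_data[g] for gs in schedule.values() for g in gs}
    out = []
    for g in idx:
        if g // B == my_rank:
            out.append(local_data[g - my_rank * B])
        else:
            out.append(cache[g])
    return out
```

The point of the split is that the (expensive) inspector runs once, while the executor can be re-run for every iteration of an outer loop that reuses the same index array.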