comparison doc/design/graal_compiler.tex @ 2678:b9b0a0aa7ee8

Added additional sections on control flow and exceptions.
author Thomas Wuerthinger <thomas@wuerthinger.net>
date Mon, 16 May 2011 14:05:15 +0200
parents 1586b1b56f0c
children 07aa0a31fffb
2677:0ea5f12e873a 2678:b9b0a0aa7ee8
1 \documentclass[twocolumn]{svjour3} 1 \documentclass[twocolumn]{svjour3}
2 \usepackage{listings}
2 \usepackage[pdftex]{graphicx} 3 \usepackage[pdftex]{graphicx}
3 \usepackage{environ} 4 \usepackage{environ}
4 \usepackage{amsmath} 5 \usepackage{amsmath}
5 \usepackage{amsfonts} 6 \usepackage{amsfonts}
6 \usepackage[english]{babel} 7 \usepackage[english]{babel}
117 \item Removal of the XIR template mechanism and replacement with a snippet mechanism that works with the compiler graph. 118 \item Removal of the XIR template mechanism and replacement with a snippet mechanism that works with the compiler graph.
118 \item Improvements for peak performance (loop optimizations, escape analysis, bounds check elimination, processing additional interpreter runtime feedback). 119 \item Improvements for peak performance (loop optimizations, escape analysis, bounds check elimination, processing additional interpreter runtime feedback).
119 \item Implementation of a prototype front-end for a different language, e.g., JavaScript. 120 \item Implementation of a prototype front-end for a different language, e.g., JavaScript.
120 \end{itemize} 121 \end{itemize}
121 122
123 \section{Project Source Structure}
124 In order to support the goal of a modular compiler, the code will be divided into the following source code projects (as subprojects of \textbf{com.oracle.max.graal}).
125
126 \begin{description}
127 \item[graph] contains the abstract node implementation, the graph implementation and all the associated tools and auxiliary classes.
128 \item[nodes] contains the implementation of known basic nodes (e.g., phi nodes, control flow nodes, \ldots).
129 Additional node classes should go into separate projects and be specializations of the known basic nodes.
130 \item[java] contains code for building graphs from Java bytecodes and Java-specific nodes.
131 \item[opt] contains optimizations such as global value numbering or conditional constant propagation.
132 \item[compiler] contains the compiler, including:
133 \begin{itemize}
134 \item Scheduling of the compilation phases.
135 \item Implementation of the \emph{compiler interface} (CI).
136 \item Implementation of the final compilation phase that produces the low-level representation.
137 \item Machine code creation, including debug info.
138 \end{itemize}
139 \end{description}
140
141
122 \section{Graph} 142 \section{Graph}
123 143
124 The \emph{intermediate representation}~(IR) of the compiler is designed as a directed graph. 144 The \emph{intermediate representation}~(IR) of the compiler is designed as a directed graph.
125 The graph assigns ids to new nodes and can be queried for the node corresponding to a given id as well as for an unordered list of all nodes in the graph. 145 The graph assigns ids to new nodes and can be queried for the node corresponding to a given id as well as for an unordered list of all nodes in the graph.
126 Graphs can manage side data structures, which will be automatically invalidated and lazily recomputed whenever the graph changes. Examples of side data structures are dominator trees and temporary schedules. These side data structures will usually be used by more than one optimization. 146 Graphs can manage side data structures, which will be automatically invalidated and lazily recomputed whenever the graph changes. Examples of side data structures are dominator trees and temporary schedules. These side data structures will usually be used by more than one optimization.
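
A minimal sketch of how such a graph interface could look is given in Listing \ref{lst:graphapi}; the \texttt{Node} and \texttt{DominatorTree} types as well as the method names are placeholders chosen for illustration and do not denote the actual implementation.

\begin{lstlisting}[label=lst:graphapi, caption=Sketch of a possible graph interface, captionpos=b]
import java.util.*;

class Graph {
  private final List<Node> nodes = new ArrayList<Node>();
  private DominatorTree dominatorTree; // cached side data structure

  int add(Node node) {          // assigns an id to the new node
    nodes.add(node);
    dominatorTree = null;       // graph changed: invalidate side data
    return nodes.size() - 1;
  }

  Node node(int id) { return nodes.get(id); }   // query by id
  List<Node> nodes() { return nodes; }          // unordered list of all nodes

  DominatorTree dominatorTree() {               // lazily recomputed on demand
    if (dominatorTree == null) {
      dominatorTree = DominatorTree.compute(this);
    }
    return dominatorTree;
  }
}
\end{lstlisting}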
170 \end{digraphenv} 190 \end{digraphenv}
171 \caption{A node and its edges.} 191 \caption{A node and its edges.}
172 \label{fig:directions} 192 \label{fig:directions}
173 \end{figure} 193 \end{figure}
174 194
175 \subsection{Project Source Structure} 195 \section{Control Flow}
176 In order to support the goal of a modular compiler, the code will be divided into the following source code projects (as subprojects of \textbf{com.oracle.graal}). 196
177 197 Control flow is managed such that each predecessor node contains direct pointers to its successor nodes.
178 \begin{description} 198 This is the opposite of the approach taken in the server compiler, where control flow and data flow edges point in the same direction.
179 \item[graph] contains the abstract node implementation, the graph implementation and all the associated tools and auxiliary classes. 199 The advantage that we see in our approach is that there is no need for projection nodes in case of control flow splits.
180 \item[nodes] contains the implementation of known basic nodes (e.g., phi nodes, control flow nodes, \ldots). 200 An \texttt{If} node can directly point to its true and false successors without any intermediate nodes.
181 Additional node classes should go into separate projects and be specializations of the known basic nodes. 201 This makes the graph more compact and simplifies graph traversal.
182 \item[java] contains code for building graphs from Java bytecodes and Java-specific nodes. 202
183 \item[opt] contains optimizations such as global value numbering or conditional constant propagation. 203 Listing \ref{lst:cfg2} shows an example Java program with an if statement where neither path contains any instruction with side effects.
184 \item[compiler] contains the compiler, including: 204 Both the true and the false successor of the \texttt{If} instruction can therefore point directly to a \texttt{Merge} node.
185 \begin{itemize} 205 A \texttt{Phi} node that selects the appropriate value is appended to the \texttt{Merge} node.
186 \item Scheduling of the compilation phases. 206 The \texttt{Return} instruction then has a data dependency on the \texttt{Phi} node.
187 \item Implementation of the \emph{compiler interface} (CI). 207
188 \item Implementation of the final compilation phase that produces the low-level representation. 208 \begin{lstlisting}[label=lst:cfg2, caption=Control flow in the graph, captionpos=b]
189 \item Machine code creation, including debug info. 209 if (condition) { return 0; }
190 \end{itemize} 210 else { return 1; }
191 \end{description} 211 \end{lstlisting}
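
Sketched in code, the corresponding graph could be wired up roughly as in Listing \ref{lst:cfg2build}; the node classes, the \texttt{constant()} helper, and the \texttt{condition} value are illustrative placeholders rather than the actual graph-building API.

\begin{lstlisting}[label=lst:cfg2build, caption=Illustrative construction of the graph for Listing \ref{lst:cfg2}, captionpos=b]
Merge merge = new Merge();
If branch = new If(condition);    // condition is a data input of the If
branch.setTrueSuccessor(merge);   // no projection nodes are needed:
branch.setFalseSuccessor(merge);  // the If points directly to its successors
Phi result = new Phi(merge, constant(0), constant(1));
Return ret = new Return(result);  // data dependency on the Phi
merge.setNext(ret);
\end{lstlisting}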
212
213
214 \begin{figure}[h]
215 \centering
216 \begin{digraphenv}{scale=0.5}{cfg2}
217 \textnode{entry}{Entry}
218 \textnode{condition}{condition}
219 \textnode{const0}{0}
220 \textnode{const1}{1}
221 \nodesplit{if}{If}
222 \control{entry}{if}
223 \controllabel{if:succ1}{merge}
224 \controllabel{if:succ2}{merge}
225 \data{if}{condition}
226 \node{merge}{Merge}
227 \node{return}{Return}
228 \nodetri{phi}{Phi}
229 \datalabel{phi:in1}{merge}
230 \datalabel{phi:in2}{const0}
231 \datalabel{phi:in3}{const1}
232 \data{return}{phi}
233 \control{merge}{return}
234 \end{digraphenv}
235 \caption{Graph for the if statement in Listing \ref{lst:cfg2}.}
236 \label{fig:cfg2}
237 \end{figure}
238
239 \section{Exceptions}
240 \label{sec:Exceptions}
241
242 We do not throw runtime exceptions (e.g., \texttt{IndexOutOf\-BoundsException}, \texttt{NullPointerException}, or \texttt{Out\-Of\-Memory\-Error}), but deoptimize instead.
243 This reduces the places in the compiled code where an exact bytecode location and debug information must be known.
244 Additionally, this greatly reduces the number of exception handler edges in the compiled code.
245 The main advantage of this technique, however, is that we are free to move bounds checks, memory allocations, memory accesses with implicit null checks, etc.
246
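As an illustration, a compiled array access under this scheme conceptually has the shape shown in Listing \ref{lst:deopt}; \texttt{deoptimize()} is a placeholder for the transfer of control back to the interpreter and not an actual API.

\begin{lstlisting}[label=lst:deopt, caption=Conceptual shape of a compiled array load, captionpos=b]
int loadElement(int[] array, int index) {
  if (index < 0 || index >= array.length) {
    deoptimize();      // no exception is thrown in compiled code; the
                       // interpreter raises it after deoptimization
  }
  return array[index]; // the guard can be moved freely by the compiler
}
\end{lstlisting}
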
247 There are only two kinds of instruction that need explicit exception edges, because they are the only instructions that can throw exceptions in compiled code: \texttt{Throw} instructions and \texttt{Invoke} instructions.
248 They are modelled as instructions with an additional control flow continuation that points to an \texttt{ExceptionDispatch} instruction.
249 Based on the type of the exception object, the exception dispatch instruction decides whether control flows to the catch handler or to another exception dispatch.
250 If there is no catch handler in the currently compiled method, then control flows to the \texttt{Unwind} instruction, which handles the exception by forwarding it to the caller.
251 Listing \ref{lst:exc1} shows an example Java program with nested try blocks and Figure \ref{fig:exc1} shows the corresponding compiler graph.
252
253 \begin{lstlisting}[label=lst:exc1, caption=Exception dispatch in the compiler graph, captionpos=b]
254 try { m1();
255 try { m2();
256 } catch(ExtendedException e) { ... }
257 m3();
258 throw exception;
259 } catch(Exception e) { ... }
260 \end{lstlisting}
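
The behaviour of the two dispatch nodes in Figure \ref{fig:exc1} corresponds roughly to the pseudocode in Listing \ref{lst:exc1dispatch}; \texttt{catch1()}, \texttt{catch2()}, and \texttt{unwind()} merely name the control flow successors shown in the figure.

\begin{lstlisting}[label=lst:exc1dispatch, caption=Rough behaviour of the dispatch nodes in Figure \ref{fig:exc1}, captionpos=b]
void dispatch1(Throwable exception) {
  if (exception instanceof ExtendedException) {
    catch1(exception);      // handler of the inner try block
  } else {
    dispatch2(exception);   // re-dispatch on the enclosing try block
  }
}

void dispatch2(Throwable exception) {
  if (exception instanceof Exception) {
    catch2(exception);      // handler of the outer try block
  } else {
    unwind(exception);      // no handler left: forward to the caller
  }
}
\end{lstlisting}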
261
262
263
264 \begin{figure}[h]
265 \centering
266 \begin{digraphenv}{scale=0.5}{exc1}
267 \textnode{entry}{Entry}
268 \textnode{catch1}{catch1}
269 \textnode{catch2}{catch2}
270 \nodesplit{m1}{Invoke m1}
271 \nodesplit{m2}{Invoke m2}
272 \nodesplit{m3}{Invoke m3}
273 \nodesplit{dispatch1}{ExceptionDispatch}
274 \nodesplit{dispatch2}{ExceptionDispatch}
275 \node{throw}{Throw}
276 \node{unwind}{Unwind}
277 \control{entry}{m1}
278 \controllabel{m1:succ1}{m2}
279 \controllabel{m1:succ2}{dispatch2}
280 \controllabel{m2:succ1}{m3}
281 \controllabel{m2:succ2}{dispatch1}
282 \controllabel{m3:succ1}{throw}
283 \controllabel{m3:succ2}{dispatch2}
284 \control{throw}{dispatch2}
285 \controllabel{dispatch1:succ2}{catch1}
286 \controllabel{dispatch1:succ1}{dispatch2}
287 \controllabel{dispatch2:succ2}{catch2}
288 \controllabel{dispatch2:succ1}{unwind}
289 \end{digraphenv}
290 \caption{Exception dispatch for the code in Listing \ref{lst:exc1}.}
291 \label{fig:exc1}
292 \end{figure}
192 293
193 \section{Loops} 294 \section{Loops}
194 \label{sec:loops} 295 \label{sec:loops}
195 Loops form a first-class construct in the IR that is expressed by specialized IR nodes during all optimization phases. 296 Loops form a first-class construct in the IR that is expressed by specialized IR nodes during all optimization phases.
196 We only compile methods in which every loop has a single entry point. 297 We only compile methods in which every loop has a single entry point.
225 326
226 \subsection{Loop Phis} 327 \subsection{Loop Phis}
227 Data flow in loops is modelled with special phi nodes at the beginning and the end of the loop. 328 Data flow in loops is modelled with special phi nodes at the beginning and the end of the loop.
228 The \nodename{LoopEnd} node merges every value that flows into the next loop iteration in associated \nodename{LoopEndPhi} nodes. 329 The \nodename{LoopEnd} node merges every value that flows into the next loop iteration in associated \nodename{LoopEndPhi} nodes.
229 A corresponding \nodename{LoopBeginPhi} node that is associated with the loop header has a control flow dependency on the \nodename{LoopEndPhi} node. 330 A corresponding \nodename{LoopBeginPhi} node that is associated with the loop header has a control flow dependency on the \nodename{LoopEndPhi} node.
230 Figure \ref{fig:loop2} shows how a simple counting loop is modelled in the graph. 331 Listing \ref{lst:loop} shows a simple counting loop that is used as an example in the rest of this section.
332 Figure \ref{fig:loop2} shows how the loop is modelled immediately after building the graph.
333
334 \begin{lstlisting}[label=lst:loop, caption=Loop example, captionpos=b]
335 for (int i = 0; i < n; ++i) { }
336 \end{lstlisting}
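
In code form, the wiring of the loop phis for this example might look as in Listing \ref{lst:loopbuild}; the node classes, setters, and the \texttt{constant()} and \texttt{add()} helpers are placeholders used only to make the structure concrete.

\begin{lstlisting}[label=lst:loopbuild, caption=Illustrative wiring of the loop phis for Listing \ref{lst:loop}, captionpos=b]
LoopBegin header = new LoopBegin();           // single entry point of the loop
LoopEnd backEdge = new LoopEnd(header);       // jumps back to the header
LoopEndPhi iNext = new LoopEndPhi(backEdge);  // value of i for the next iteration
LoopBeginPhi i = new LoopBeginPhi(header, constant(0), iNext); // i at the header
iNext.setValue(add(i, constant(1)));          // ++i at the end of the loop body
\end{lstlisting}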
337
338
231 339
232 \begin{figure}[h] 340 \begin{figure}[h]
233 \centering 341 \centering
234 \begin{digraphenv}{scale=0.5}{layout2} 342 \begin{digraphenv}{scale=0.5}{layout2}
235 \textnode{BeforeLoop}{Loop entry} 343 \textnode{BeforeLoop}{Loop entry}