changeset 2679:07aa0a31fffb

Rewrote frame state to be not-so-Java-specific. Clarified and reduced the usage of the term "node".
author Thomas Wuerthinger <thomas@wuerthinger.net>
date Mon, 16 May 2011 14:29:12 +0200
parents b9b0a0aa7ee8
children 9836845a75e6 c5739b99762a
files doc/design/graal_compiler.pdf doc/design/graal_compiler.tex
diffstat 2 files changed, 29 insertions(+), 27 deletions(-) [+]
line wrap: on
line diff
Binary file doc/design/graal_compiler.pdf has changed
--- a/doc/design/graal_compiler.tex	Mon May 16 14:05:15 2011 +0200
+++ b/doc/design/graal_compiler.tex	Mon May 16 14:29:12 2011 +0200
@@ -195,14 +195,15 @@
 \section{Control Flow}
 
 Control flow is managed in a way where the predecessor node contains direct pointers to its successor nodes.
+We reserve the term \textit{instruction} for nodes that are embedded in the control flow.
 This is the opposite of the approach taken in the server compiler, where control flow and data flow edges point in the same direction.
 The advantage that we see in our approach is that there is no need for projection nodes in the case of control flow splits.
-An \texttt{If} node can directly point to its true and false successors without any intermediate nodes.
+An \texttt{If} instruction can directly point to its true and false successors without any intermediate nodes.
 This makes the graph more compact and simplifies graph traversal.
 
 Listing \ref{lst:cfg2} shows an example Java program with an if statement where neither path contains any instruction with side effects.
-The \texttt{If} instruction can directly point its true and false successors to a \texttt{Merge} node.
-A \texttt{Phi} node that selects the appropriate value is appended to the \texttt{Merge} node.
+The \texttt{If} instruction can point both its true and false successors directly to a \texttt{Merge} instruction.
+A \texttt{Phi} node that selects the appropriate value is appended to the \texttt{Merge} instruction.
 The \texttt{Return} instruction then has a data dependency on the \texttt{Phi} node.
 
 \begin{lstlisting}[label=lst:cfg2, caption=Control flow in the graph, captionpos=b]
@@ -210,7 +211,6 @@
 else { return 1; }
 \end{lstlisting}
 
-
 \begin{figure}[h]
   \centering
 \begin{digraphenv}{scale=0.5}{cfg2}
@@ -259,8 +259,6 @@
 } catch(Exception e) { ... }
 \end{lstlisting}
 
-
-
 \begin{figure}[h]
   \centering
 \begin{digraphenv}{scale=0.5}{exc1}
@@ -295,8 +293,8 @@
 \label{sec:loops}
 Loops form a first-class construct in the IR that is expressed by specialized IR nodes during all optimization phases.
 We only compile methods in which every loop has a single entry point.
-This entry point is a \nodename{LoopBegin} node.
-This node is connected to a \nodename{LoopEnd} node that merges all control flow paths that do not exit the loop.
+This entry point is a \nodename{LoopBegin} instruction.
+This instruction is connected to a \nodename{LoopEnd} instruction that merges all control flow paths that do not exit the loop.
 The edge between the \nodename{LoopBegin} and the \nodename{LoopEnd} is the backedge of the loop.
 It goes from the beginning to the end in order to make the graph acyclic.
 An algorithm that traverses the control flow has to decide explicitly whether it wants to incorporate backedges (i.e., special-case the treatment of \nodename{LoopEnd}) or ignore them.
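As a minimal sketch (the `Node` and `CfgTraversal` classes are hypothetical, not Graal code), such a traversal can ignore backedges simply by not following the successors of a \nodename{LoopEnd}, which keeps the walk over the acyclic graph terminating:

```java
import java.util.*;

// Hypothetical illustration of a control-flow traversal that must decide
// how to treat backedges: here, successors of "LoopEnd" nodes (the
// backedges) are simply not followed.
class Node {
    final String name;
    final List<Node> successors = new ArrayList<>();
    Node(String name) { this.name = name; }
}

public class CfgTraversal {
    public static List<String> visit(Node start) {
        List<String> order = new ArrayList<>();
        Deque<Node> work = new ArrayDeque<>();
        Set<Node> seen = new HashSet<>();
        work.push(start);
        while (!work.isEmpty()) {
            Node n = work.pop();
            if (!seen.add(n)) continue;        // visit each node once
            order.add(n.name);
            if (n.name.equals("LoopEnd")) continue; // ignore the backedge
            for (Node s : n.successors) work.push(s);
        }
        return order;
    }
}
```

A traversal that instead wants to incorporate backedges would special-case the \nodename{LoopEnd} branch rather than skip it.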
@@ -326,7 +324,7 @@
 
 \subsection{Loop Phis}
 Data flow in loops is modelled with special phi nodes at the beginning and the end of the loop.
-The \nodename{LoopEnd} node merges every value that flows into the next loop iteration in associated \nodename{LoopEndPhi} nodes.
+The \nodename{LoopEnd} instruction merges every value that flows into the next loop iteration in associated \nodename{LoopEndPhi} nodes.
 A corresponding \nodename{LoopBeginPhi} node that is associated with the loop header has a control flow dependency on the \nodename{LoopEndPhi} node.
 Listing \ref{lst:loop} shows a simple counting loop that is used as an example in the rest of this section.
 Figure \ref{fig:loop2} shows how the loop is modelled immediately after building the graph.
@@ -335,8 +333,6 @@
 for(int i=0; i<n; ++i) { }
 \end{lstlisting}
 
-
-
 \begin{figure}[h]
   \centering
 \begin{digraphenv}{scale=0.5}{layout2}
@@ -474,19 +470,25 @@
 
 
 \section{Frame States}
-A frame state captures the state of the program in terms of the Java bytecode specification (i.e., the values of the local variables, the operand stack, and the locked monitors).
-Every deoptimization point needs a valid frame state.
-A frame state stays valid as long as the subsequent instructions perform only actions that can safely be reexecuted.
-Thus, frame states need only be generated for the states after bytecodes that cannot be reexecuted:
+A frame state captures the state of the program as it is seen by an interpreter of the program.
+The frame state contains the information that is local to the current activation and would therefore disappear during SSA-form construction or other compiler optimizations.
+For Java, the frame state is defined in terms of the Java bytecode specification (i.e., the values of the local variables, the operand stack, and the locked monitors).
+However, a frame state is not a concept specific to Java (e.g., the Crankshaft optimizing compiler of the V8 JavaScript engine uses frame states to model the values of the AST interpreter).
+
+Frame states are necessary to support the deoptimization of the program, which is the precondition for performing aggressive optimizations that use optimistic assumptions.
+Therefore, every point in the compiled code at which execution may revert back to the interpreter needs a valid frame state.
+However, the point where the interpreter continues execution need not correspond exactly to the execution position of the compiled code, because many Java bytecode instructions can be safely reexecuted.
+Thus, frame states need only be generated for the states after instructions that cannot be reexecuted, because they modify the state of the program.
+Examples of such instructions are:
 
 \begin{itemize}
-    \item Array stores: {\tt IASTORE, LASTORE, FASTORE, \\DASTORE, AASTORE, BASTORE, CASTORE, SASTORE}
-    \item Field stores: {\tt PUTSTATIC, PUTFIELD}
-    \item Method calls: {\tt INVOKEVIRTUAL, INVOKESPECIAL, \\INVOKESTATIC, INVOKEINTERFACE}
-    \item Synchronization: {\tt MONITORENTER, MONITOREXIT}
+    \item Array stores (in Java bytecodes {\tt IASTORE, LASTORE, FASTORE, \\DASTORE, AASTORE, BASTORE, CASTORE, SASTORE})
+    \item Field stores (in Java bytecodes {\tt PUTSTATIC, PUTFIELD})
+    \item Method calls (in Java bytecodes {\tt INVOKEVIRTUAL, INVOKESPECIAL, \\INVOKESTATIC, INVOKEINTERFACE})
+    \item Synchronization (in Java bytecodes {\tt MONITORENTER, MONITOREXIT})
 \end{itemize}
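To make the distinction concrete, the following sketch (class and method names are hypothetical) shows where a frame state would be required: the pure arithmetic can safely be reexecuted after deoptimization, but the field store modifies the program state, so the interpreter must resume after it:

```java
// Hypothetical illustration: a field store (PUTFIELD) has a visible side
// effect, so a frame state must be captured for the state *after* it;
// replaying the store would make the side effect happen twice.
public class FrameStateExample {
    int field;

    int compute(int a, int b) {
        int sum = a + b;    // pure arithmetic: safely reexecutable
        this.field = sum;   // PUTFIELD: modifies program state -> frame state needed here
        return sum * 2;     // reexecutable again
    }
}
```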
 
-Within the graph a frame state is represented as a node that is attached to the node that caused it to be generated using a control dependency (see Figure~\ref{fig:fs1}).
+Within the graph a frame state is represented as a node that is attached to the instruction that caused it to be generated using a control dependency (see Figure~\ref{fig:fs1}).
 Frame states also have data dependencies on the contents of the state: the local variables and the expression stack.
 
 The frame state at the beginning of the method does not have to be represented explicitly in the graph, because it can always be reconstructed at a later stage.
@@ -515,8 +517,8 @@
 
 
 \subsection{Guards}
-A guard node is a node that deoptimizes based on a conditional expression.
-Guard nodes are not attached to a certain frame state node, they can move around freely and will always use the correct frame state when the nodes are scheduled (i.e., the last emitted frame state).
+A guard is a node that deoptimizes based on a conditional expression.
+Guards are not attached to a particular frame state; they can move around freely and will always use the correct frame state when the nodes are scheduled (i.e., the last emitted frame state).
 The node that is guarded by the deoptimization has a data dependency on the guard, and the guard in turn has a data dependency on the condition and on the most distant node that is postdominated by the guarded node.
 
 In the example of Figure~\ref{fig:guard1}, the second load is guarded by a guard, because its receiver might be null (the receiver of the first load is assumed to be non-null).
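A plausible source shape for this situation (the class names here are hypothetical) is a pair of chained field loads where only the second receiver may be null:

```java
// Hypothetical Java source for the guarded-load situation: the receiver
// of the first load is assumed non-null, but the receiver of the second
// load (b) might be null, so that load is protected by a guard that
// deoptimizes when the null check fails.
class A { B b; }
class B { int x; }

public class GuardExample {
    static int load(A a) {
        B b = a.b;   // receiver assumed non-null: no guard
        return b.x;  // b might be null: guarded load
    }
}
```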
@@ -557,9 +559,9 @@
 
 A guard can be used to remove a branch that has a very low execution frequency, replacing it with a conditional jump to deoptimization.
 
-In the example of Figure~\ref{fig:guard2}, an \texttt{If} node was replaced by an anchor and a guard.
-The anchor takes the place of the \texttt{If} node in the control flow, and is connected to the guard node.
-The guard is now anchored to the most distant node of which the \texttt{If} node was a postdominator.
+In the example of Figure~\ref{fig:guard2}, an \texttt{If} instruction was replaced by an anchor and a guard.
+The anchor takes the place of the \texttt{If} instruction in the control flow, and is connected to the guard.
+The guard is now anchored to the most distant node of which the \texttt{If} instruction was a postdominator.
 
 \begin{figure}[h]
   \centering
@@ -583,7 +585,7 @@
   \label{fig:guard2}
 \end{figure}
 
-At some point during the compilation, guard nodes need to be fixed, which means that appropriate data and control dependencies will be inserted so that they cannot move outside the scope of the associated frame state.
+At some point during the compilation, guards need to be fixed, which means that appropriate data and control dependencies will be inserted so that they cannot move outside the scope of the associated frame state.
 This will generate deoptimization-free zones that can be targeted by the most aggressive optimizations.
 A simple algorithm for this removal of frame states would be to move all guards as far upwards as possible.
 Multiple guards with the same condition and anchor can be merged (see Figure~\ref{fig:guard3}).
@@ -620,7 +622,7 @@
   \label{fig:guard3}
 \end{figure}
 
-Also, if two guards that are anchored to the true and false branch of the same \texttt{If} node have the same condition, they can be merged, so that the resulting guard is anchored at the most distant node of which the \texttt{If} node is a postdominator.
+Also, if two guards that are anchored to the true and false branch of the same \texttt{If} instruction have the same condition, they can be merged, so that the resulting guard is anchored at the most distant node of which the \texttt{If} instruction is a postdominator.
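A code shape that gives rise to such mergeable guards might look like the following sketch (hypothetical classes): both branches of the \texttt{If} perform the same possibly-null load, so their guards share one condition and can be hoisted above the branch:

```java
// Hypothetical illustration: both branches load a.b.x, so each branch
// carries a guard with the same condition (a.b != null). The two guards
// can be merged and anchored above the If.
class A2 { B2 b; }
class B2 { int x; }

public class GuardMergeExample {
    static int f(A2 a, boolean cond) {
        if (cond) {
            return a.b.x + 1;  // guard: a.b != null
        } else {
            return a.b.x - 1;  // same guard condition -> mergeable
        }
    }
}
```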
 
 \section{Conclusions}
 \label{sec:conclusions}