© Copyright
Stephan J. Ellner
2004
RICE UNIVERSITY
PreVIEW: An Untyped Graphical Calculus for
Resource-aware Programming
by
Stephan J. Ellner
A T S
 P F  
R   D
Master of Science
A, T C:
Dr. Walid Taha,
Assistant Professor,
Computer Science, Rice University
Dr. Keith Cooper,
Professor and Chair,
Computer Science, Rice University
Dr. Robert “Corky” Cartwright,
Professor,
Computer Science, Rice University
Dr. Peter Druschel,
Professor,
Computer Science, Rice University
Dr. Moshe Vardi,
Professor,
Computer Science, Rice University
H, T
A, 2004
As visual programming languages become both more expressive and more popular in
the domains of real-time and embedded software, the need for rigorous techniques for reasoning about programs written in these languages becomes more pressing. Indeed, due to
a subtle but fundamental mismatch between graphical and textual representations of programs, semantic concepts established in the textual setting cannot be mapped to the graphical setting without a careful analysis of the connection between the two representations.
Focusing on operational (as opposed to type-theoretic) aspects of Resource-aware Programming (RAP), we analyze the connection between graphical and textual representations
of programs that can express both higher-order functions and staging constructs. After establishing a precise connection between the two, we illustrate how this connection can be
used to lift a reduction semantics from the textual to the graphical setting.
Acknowledgments

I am very grateful for the dedication and support I received from my advisor Dr. Walid
Taha while working on this thesis. I also want to thank the members of my thesis committee
(Dr. Keith Cooper, Dr. Robert Cartwright, Dr. Peter Druschel, and Dr. Moshe Vardi) for
their time and their interest in my work. Kedar Swadi, Samah Abu Mahmeed, and Roumen
Kaiabachev read and commented on drafts of this thesis. I would like to thank them for
their insightful and detailed suggestions.
Contents

1 Introduction
  1.1 Problem
  1.2 Contributions
  1.3 Related Work
  1.4 Organization of this Thesis
2 From LabVIEW to the Lambda-Calculus, and Beyond
3 PreVIEW
  3.1 Visual Syntax
  3.2 Formal Representation
  3.3 Auxiliary Definitions
  3.4 Well-Formed Graphs G
4 Graph-Term Connection
  4.1 From Graphs to Terms
    4.1.1 Do Graphs Represent all Terms?
  4.2 From Terms to Graphs
    4.2.1 Constructing Intermediate Graphs
    4.2.2 Simplifying Intermediate Graphs
5 A Graphical Reduction Semantics
  5.1 Staged Terms
  5.2 Staged Graphs
    5.2.1 Beta
    5.2.2 Escape
    5.2.3 Run
6 Conclusion and Future Work
7 Formal Definitions
  7.1 Translations
  7.2 Reduction Semantics for PreVIEW
    7.2.1 Supplemental Term Reduction Rules
    7.2.2 Graph Reductions
8 Proofs
Bibliography
Chapter 1
Introduction
Visual programming languages are finding increasing popularity in embedded software development, and are often preferred by developers of embedded software. LabVIEW [12, 7],
Simulink [18], Ptolemy [10], and a wide range of hardware CAD design environments are
examples of such languages. While the expressivity of visual languages is often restricted
to ensure that the resulting programs are resource-bounded, these restrictions can deprive
the programmer of many useful abstraction mechanisms. Resource-aware Programming
(RAP) languages [16, 17] aim to resolve this tension by 1) providing a highly expressive
untyped substrate supporting features such as dynamic data-structures, modules, objects,
and higher-order functions, 2) allowing the programmer to express the stage distinction between computation on the development platform and computation on the deployment platform using multi-stage programming (MSP) [11, 14, 15] constructs, and 3) using advanced
static type systems to ensure that computations intended for execution on resource-bounded
platforms are indeed resource-bounded. The combination of these three ingredients allows
the programmer to use sophisticated abstraction mechanisms in generators that are statically guaranteed to generate only resource-bounded programs. As such, RAP languages
provide a bridge between traditional software engineering techniques and the specific demands of the embedded software domain. For example, RAP languages can concisely
express Cooley and Tukey’s recurrence that defines the Fast Fourier Transform. By adding
staging annotations to such a definition, a program generator can be used to generate exactly the butterfly circuit for the FFT for any size 2^n. A detailed discussion of this application
can be found elsewhere [9].
The goal of this work is to apply the principles of RAP language design to visual programming languages. Because advanced abstraction mechanisms, staging (or even macro)
constructs, and advanced static type systems are not typical in visual languages, we need
to migrate these concepts from text-based languages to the visual setting. Intuitively, this
migration requires a precise connection between graphical and textual program representations. Ideally, we would like such a connection to be a one-to-one correspondence.
1.1 Problem
However, establishing connections between textual and graphical program representations
is non-trivial. In particular, graphical languages naturally provide a notion of sharing of
values by wiring the output of one component to the inputs of multiple different components. Especially if we are using visual languages to describe circuits, being able to express
sharing is essential. Consider the butterfly circuit for computing the FFT: it would be exponentially larger if there were no sharing [5, Figure 32.5].
However, no notion in text-based languages exactly corresponds to the notion of sharing
provided by graphs. In particular, sharing in graphs almost corresponds to local variable
declarations in textual languages, but not quite: it only corresponds to local declarations
that are used more than once. To illustrate this problem, note that the following two C code
fragments both correspond to the LabVIEW graph to the right:
int x = 4;
int y = 5;
print_int(x+y+y);

int y = 5;
print_int(4+y+y);
While the first code fragment assigns a local variable name to the constant 4, the second
snippet uses the number 4 directly. However, there is no corresponding distinction that can
be made in LabVIEW. For LabVIEW programs and text-based representations to be one-to-one, we would then have to disallow variable declarations that are used only once, or
require all subterms to be explicitly named. Both would be unnatural syntactic restrictions
for the programmer. Thus, the treatment of sharing and local binders adds substantial
complexity to the connection between the textual and graphical representations.
1.2 Contributions
From the practical point of view, our work was motivated directly by a study of the visual
programming language LabVIEW. The results presented provide the foundations for RAP
extensions of such languages. The starting point for our work is the observation that the
results of a study by Ariola and Blom on sharing in the lambda calculus are not only applicable to graph-based implementations of textual languages, but are in fact highly suited as
a formal foundation for relating visual and textual representations of programs.
The first technical contribution of this thesis is to adopt Ariola and Blom’s theory of
lambda-graphs as a model for visual programming languages and to extend it with staging constructs typical in text-based MSP languages. The presented calculus PreVIEW is
based on a one-to-one correspondence between visual programs and a variation of the text-based lambda-calculus. In this calculus, the programmer can write graph generators using
high-level abstraction mechanisms, while avoiding the performance overhead of these abstractions during the execution of the generated visual program.
The second technical contribution of this thesis is to use the first result to show how
the semantics of text-based MSP languages can be lifted to the graphical setting. This is
achieved by establishing a formal connection between the semantics for PreVIEW graphs
and lambda terms: graph reductions have corresponding reductions at the term level, and
similarly, term reductions have corresponding reductions at the graph level.
1.3 Related Work
The ability to reason about the correctness of embedded software implemented in expressive visual languages requires a careful analysis of the formal semantics of these languages,
especially in relation to textual languages, from which they can borrow a wide range of state-of-the-art concepts, such as those of Resource-aware Programming (RAP). As noted above,
the closest work to ours is that of Ariola and Blom [1]. The development presented here
shows that their techniques can be usefully extended to the multi-stage setting.
The visual programming research community seems to focus largely on languages that
are accessible to novice programmers and domain-experts, rather than general-purpose languages. Examples of such languages include form-based [2] and spreadsheet-based [8]
ones. As such, only a few works are closely related to ours: Citrin et al. give a purely
graphical description of an object-oriented language called VIPR [3] and a functional language called VEX [4]. The mapping to and from textual representations is only treated
informally.
Erwig [6] presents a denotational semantics for VEX using inductive definitions of
graph representations to support pattern matching on graphs. But this semantics does not
capture sharing in graphs conveniently. For example, it gives any circuit that implements
FFT the same meaning, independently of how much sharing is present in this circuit.
1.4 Organization of this Thesis
Chapter 2 explains how core features of visual languages such as LabVIEW and Simulink
can be modeled using a variation of Ariola and Blom's cyclic lambda-graphs. Chapter 3
presents the calculus PreVIEW and discusses well-formedness conditions for PreVIEW
programs. Chapter 4 defines textual representations for PreVIEW graphs and shows that
graphs and terms in normal form are one-to-one. In Chapter 5, we describe a reduction
semantics for both terms and graphs, and Chapter 6 concludes. Formal definitions for the
entire development are given in Chapter 7, and Chapter 8 provides proofs for the results
presented in this thesis.
Chapter 2
From LabVIEW to the Lambda-Calculus, and Beyond
A LabVIEW program is defined as a sequence of procedures, called virtual instruments
(VIs). Each VI is defined visually as a diagram of boxes connected by wires. Boxes
represent numeric constants, calls to built-in operators and procedures, and calls to userdefined VIs. Wires represent the flow of values between the individual boxes of a VI
definition. Part (a) of the following figure shows the definition of a simple VI in LabVIEW.
Given the parameters x and y, this VI returns the value f (11, (x + y) + y).
[Figure: the same program in three representations: (a) LabVIEW [12]; (b) the cyclic lambda-graph of Ariola and Blom [1]; (c) PreVIEW (this thesis).]
LabVIEW is in fact designed for a functional style¹ of programming [7]. Since we are
interested in relating a core subset of LabVIEW to a textual representation, the lambda calculus appears to be a well-suited formalism. However, the presence of sharing in graph-based program representations has no directly corresponding notion in the pure lambda-calculus. In [1], Ariola and Blom address this problem in the context of programming
¹Like any functional language, implementations of LabVIEW support imperative features, but these features are not intended for general user-level programming.
language implementations. Their work establishes a strong formal connection between the
lambda calculus extended with a letrec-construct on one side, and syntax trees with sharing
and cycles, called lambda-graphs, on the other side. The graph (b) in the above figure
shows how the LabVIEW program in (a) can be represented by a lambda-graph for the
corresponding term λx.λy.( f 11 ((x + y) + y)).
In this model, lambda abstractions are drawn as boxes describing the scope of the parameter bound by the abstraction. Edges represent subterm relationships in the syntax tree,
and parameter references are drawn as back-edges to a lambda abstraction. By flipping
the direction of edges in a lambda-graph to express the flow of data in the program, and
by making connection points in the graph explicit in the form of ports, we can visualize lambda-graphs in a way that captures the core features of languages like LabVIEW.
Based on this observation, illustrated by graph (c), we extend Ariola and Blom’s theory of
lambda-graphs to model such languages. We call the resulting visual calculus PreVIEW.
Chapter 3
PreVIEW
PreVIEW is a graphical representation of the lambda-calculus with letrec and staging constructs. The core language features include function abstraction, function application, and
the staging constructs Bracket “⟨⟩”, Escape “∼”, and Run “!”. We stress the fact that PreVIEW is a calculus, and not a language. While an intuitive concrete syntax is essential
for the usability and success of any full programming language, this work focuses on the
formal aspects of visual programming. We therefore define only an abstract syntax for PreVIEW, both visually and mathematically, and consider topics such as layout and coloring
of diagrams to be beyond the scope of this thesis.
3.1 Visual Syntax
Before defining a formal representation of PreVIEW programs, we first describe the visual
appearance of the different language features in PreVIEW, which will be used in this thesis
to communicate a number of key ideas. All language constructs are visualized as boxes,
and values flow along wires connecting these boxes. A PreVIEW program is a graph built
from the following components:
1. Nodes represent function abstraction, function application, the MSP constructs Brackets, Escape, and Run, and “black holes”. They are drawn as boxes labeled λ, @, ⟨⟩, ∼, !, and •, respectively.
[Figure: the visual syntax of the node types and their ports: a λ node with bind, return, and out ports; an @ node with fun, arg, and out ports; ⟨⟩ and ∼ nodes with return and out ports; a ! node with in and out ports.]
Each lambda node contains a subgraph inside its box. The subgraph represents the
body of a lambda abstraction, and the box for the lambda node visually defines the
scope of the parameter bound by the lambda abstraction. Bracket and Escape nodes
also contain subgraphs. The subgraph of a Bracket node represents code being generated for a future-stage computation, and is marked by a box drawn using dotted lines.
The subgraph of an Escape node represents code that is integrated into a larger piece
of code at runtime, and is also marked using dotted lines. Black holes represent unresolvable cyclic dependencies that can arise from the letrec-construct in functional
programs. In Section 4.2 we generalize black holes to indirection nodes, which play
an important role in the translations and semantics presented in the development.
2. Free variables, displayed as variable names, represent name references in the program that are not bound inside the PreVIEW graph.
3. Ports represent specific connection points on nodes. We distinguish between source
ports (drawn as triangles in the picture above) and target ports (drawn as rectangles).
Source ports can have the following types:
• out: carries the result value of any node.
• bind: carries the function parameter of a lambda abstraction.
Target ports can have the following types:
• fun: receives the function input to an application node.
• arg: receives the argument input to an application node.
• return: receives the value of a computation represented by a subgraph.
• in: receives the input to a Run node.
4. Edges connect nodes and are drawn as arrows:
[Figure: edges drawn as arrows, originating either at a node's source port (a triangle) or at a free variable x, and ending at a target port (a rectangle).]
The source of any edge is either the source port of a node (drawn as a triangle) or a
free variable x. The target of any edge is the target port (drawn as a rectangle) of a
node in the graph. The only exception to this is the root of the graph. Similar to the
root of an abstract syntax tree, it marks the entry-point for evaluating the graph. It is
drawn as a “dangling” edge without a target port, instead marked with a dot.
For convenience, the examples in this thesis assume that PreVIEW is extended with
integers, booleans, binary integer operators, and conditionals.
Example 3.1 The following figure illustrates how a recursive definition of the power function in OCaml can be represented in PreVIEW. Given two inputs x and n, this function
computes the number x^n:
let rec power =
  fun x -> fun n ->
    if iszero? n then 1
    else x*(power x (n-1))
in power

(a) OCaml

[Figure (b): the corresponding PreVIEW graph, built from two nested lambda nodes, a conditional, and a feedback edge e for the recursive call.]
The OCaml keyword fun corresponds to the λ-construct in the lambda-calculus. The PreVIEW graph for this function consists of two nested lambda nodes, one providing a bind
port for the parameter x, and the other with a bind port for n. Each use of the parameters
x and n is drawn as an edge originating at the bind port of the respective lambda node.
The recursive call to power in its definition is drawn using the feedback edge e, from the
out port of the outer lambda node to the fun port of an application node. Furthermore, to
visualize this recursive call we use two application nodes because of currying in functional
languages: power is partially applied to its first argument x, and the resulting function is
then applied to n − 1.
Example 3.2 Now consider staged versions of the previous power function example in both
MetaOCaml¹ [11] and PreVIEW:
let rec power' =
  fun x -> fun n ->
    if iszero? n then .<1>.
    else .<.~x * .~(power' x (n-1))>.
in power'

(a) MetaOCaml

[Figure (b): the corresponding PreVIEW graph: the unstaged graph of Example 3.1 with Bracket and Escape boxes added.]
¹MetaOCaml is an MSP extension of OCaml, and adds dots to disambiguate the concrete syntax. Therefore, Brackets around an expression e are written as .<e>., an Escaped expression e is written as .~e, and !e is written as .!e.
As in the text-based program, in PreVIEW we only need to add a few staging “annotations”
in the form of Bracket and Escape boxes to the unstaged version of the power function in
the previous example. We can use power’ to generate a power function specialized to a
particular exponent, both in MetaOCaml and in PreVIEW. The following figure illustrates
this:
[Figure: the PreVIEW graph for the term .! .<fun x -> .~(power' .<x>. 3)>. (left), and the graph it evaluates to (right), which corresponds to fun x -> x*x*x*1.]
Evaluating the graph on the left generates the graph on the right, which always computes x^3 given the parameter x. (In Section 5.2, we define the semantics for PreVIEW graphs formally.) Note that calls to this specialized version of the power function are non-recursive.
In particular, the three recursive calls in the original power function are inlined during the
generation of the specialized program. Generated programs are often non-recursive, and
therefore have a direct correspondence to circuits. Preserving sharing in generated circuits
ensures that computational units are not duplicated.
3.2 Formal Representation
Typically, graphs are represented as sets of nodes and sets of edges connecting nodes. In
our model, edges are connected to specific ports of nodes. To this end, we use the following
syntactic sets to define nodes and edges in PreVIEW:
Nodes               u, v, w ∈ U, V, W ⊆ 𝒱
Free variables      x, y ∈ X
Source port types   o ∈ O ::= bind | out
Target port types   i ∈ I ::= return | fun | arg | in
Source ports        r, s ∈ S ::= v.o | x
Target ports        t ∈ T ::= v.i
Edges               e ∈ E ⊆ ℰ ::= (s, t)
In PreVIEW, each node also carries a label indicating which language construct the node
represents, e.g. function abstraction, function application, or a black hole. We model this
with a labeling function that maps nodes to labels. Furthermore, lambda nodes contain
subgraphs that represent the computation defined by the respective lambda abstraction. We
represent graph nesting with a scoping function that maps each lambda node to the set of
nodes for the respective subgraph. Similarly, Bracket and Escape nodes contain subgraphs.
We reflect this similarity between lambda, Bracket and Escape nodes by including all three
node types in the domain of the scoping function.
A PreVIEW graph is a tuple g = (V, L, E, S, r) where V ⊆ 𝒱 is a finite set of nodes, L : V → {λ, @, ⟨⟩, ∼, !, •} is a labeling function, E ⊆ ℰ is a finite set of edges, S : {v ∈ V | L(v) ∈ {λ, ⟨⟩, ∼}} → P(V) is a scoping function, r ∈ S is the root of the graph, and P(V) is
the power set of V. When it is clear from the context, we often refer to the components V,
L, E, S , and r of a graph g without making the binding g = (V, L, E, S , r) explicit.
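To make this concrete, the following is a minimal OCaml sketch of one possible encoding of PreVIEW graphs. All type and field names (node, label, source, target, graph) are our own illustration rather than notation from the thesis, and sets are approximated by lists:

type node = int
type label = Lam | App | Bracket | Escape | Run | Hole   (* λ, @, ⟨⟩, ∼, !, • *)
type source = Out of node | Bind of node | Var of string (* v.out, v.bind, x *)
type target = Return of node | Fun of node | Arg of node | In of node
type graph = {
  nodes : node list;              (* V *)
  lbl   : node -> label;          (* L *)
  edges : (source * target) list; (* E *)
  scope : node -> node list;      (* S; assumed to return [] outside dom(S) *)
  root  : source;                 (* r *)
}

Representing L and S as functions is only one design choice; an implementation would more likely use finite maps.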
3.3 Auxiliary Definitions
Let g be a PreVIEW graph. The set of incoming edges of a node v ∈ V is defined as
pred(v) = {(s, v.i) ∈ E}.
Given a set U ⊆ V, the set of top-level nodes in U that are
not inside the scope of any other node in U is defined as toplevel(U) = {u ∈ U | ∀v ∈
U : u ∈ S (v) ⇒ v = u}. If v ∈ V is in the domain of S , then we define the contents of
v as contents(v) = S (v)\{v}. For a given node v ∈ V, if there exists a node u ∈ V with
v ∈ toplevel(contents(u)), then u is the surrounding scope of v. A path v ⇝ w in g is an acyclic path from v ∈ V to w ∈ V that consists only of edges in {(s, t) ∈ E | s ≠ u.bind}.
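Continuing the hypothetical OCaml encoding from Section 3.2, the auxiliary definitions can be sketched as follows; sets are again approximated by lists:

let node_of_target = function
  | Return v | Fun v | Arg v | In v -> v

(* pred(v): the incoming edges of v *)
let pred g v =
  List.filter (fun (_, t) -> node_of_target t = v) g.edges

(* toplevel(U): nodes of U not inside the scope of any other node in U *)
let toplevel g us =
  List.filter
    (fun u -> List.for_all (fun v -> not (List.mem u (g.scope v)) || v = u) us)
    us

(* contents(v) = S(v)\{v} *)
let contents g v =
  List.filter (fun u -> u <> v) (g.scope v)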
3.4 Well-Formed Graphs G
Whereas context-free grammars are generally sufficient to describe well-formed terms in
textual programming languages, part of the complexity of formalizing graph-based languages is that characterizing well-formed programs is much more subtle. In statically
scoped text-based programming languages, there are two key requirements for sound scoping: names are not visible outside of their scope, and scopes can be nested or disjoint, but
they may not partially overlap. A similar condition is desirable for graph-based languages,
and must therefore be enforced explicitly in PreVIEW. Intuitively, bound names in textual
languages correspond to bind and out ports of nodes in a PreVIEW graph, and scopes are
drawn as boxes. We then define the set G of well-formed graphs as the set of graphs that
satisfy the following conditions:
1. Connectivity - Edges may connect only ports that belong to nodes in V, and only with the correct port types. We formalize this requirement as follows:
(v.o, w.i) ∈ E ⇒ v, w ∈ V and o ∈ outports(v) and i ∈ inports(w)
(x, w.i) ∈ E ⇒ w ∈ V and i ∈ inports(w)
where the mappings inports and outports specify which port types are available for
a given node type. Depending on its label, each node is associated with a set of
target (input) and source (output) port types, as defined in the first two columns of
the following table:
L(v)    inports(v)   outports(v)   pred(v)
λ       return       bind, out     (s, v.return)
@       fun, arg     out           (s1, v.fun), (s2, v.arg)
⟨⟩, ∼   return       out           (s, v.return)
!       in           out           (s, v.in)
•       ∅            out           ∅
The right column of the table restricts the set pred(v) of incoming edges for a given
node v ∈ V. This condition specifies that each target port (drawn as a rectangle) in
the graph must be the target of exactly one edge, while a source port (drawn as a
triangle) can be unused, used by one edge, or shared by multiple edges:
[Figure: each target port (e.g. the fun and arg ports of an @ node) receives exactly one edge, while a source port (e.g. the out port of a λ node) may feed any number of edges.]
2. Scoping - Assume that w, w1 , w2 ∈ V and v, v1 , v2 ∈ dom(S ). By convention, all
nodes that have a scope will be considered to be in their own scope (v ∈ S (v)). A
name used outside the scope where it is bound corresponds to an edge from a bind or
an out port that leaves a scope. For bind ports, we therefore require that (v.bind, t) ∈
pred(w) implies w ∈ S(v). For out ports, we require that if w1 ∉ S(v) and w2 ∈ S(v) and (w2.out, t) ∈ pred(w1) then w2 = v. Partially overlapping scopes translate to overlapping lambda, Bracket, or Escape boxes. We disallow this by requiring that S(v1) ∩ S(v2) = ∅ or S(v1) ⊆ S(v2) \ {v2} or S(v2) ⊆ S(v1) \ {v1}. The following
three graph fragments illustrate the scoping errors of edges leaving a scope, and of
overlapping scopes respectively:
[Figure: three ill-formed fragments illustrating scoping errors: an edge leaving a λ scope from a bind port, an edge leaving a λ scope from an inner node's out port, and two partially overlapping λ scopes.]
3. Root Condition - Intuitively, the root of a PreVIEW graph cannot be the port of a
node nested in the scope of another node. Therefore, the root must either be a free
variable (r ∈ X) or the out port of a node w that is visible at the “top-level” of the
graph (r = w.out and w ∈ toplevel(V)).
The well-formedness conditions for the functional features of PreVIEW are mostly
adopted from the work of Ariola and Blom. However, the fact that Bracket and Escape
nodes contain subgraphs like lambda nodes was important in defining the well-formedness
conditions for the full PreVIEW calculus: we simply generalized the conditions for lambda
nodes to all nodes that have a scope, i.e. to all nodes in the domain of S . The only exception
is the above restriction on edges of the form (v.bind, t). This rule only concerns lambda
nodes since no other node type carries a bind port.
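As an illustration of how part of the connectivity condition could be checked on the hypothetical encoding above, the following sketch verifies that every target port demanded by a node's label receives exactly one edge; the scoping and root conditions are elided:

(* The target ports each label requires, per the table in condition 1. *)
let required_targets v = function
  | Lam | Bracket | Escape -> [Return v]
  | App -> [Fun v; Arg v]
  | Run -> [In v]
  | Hole -> []

let well_targeted g =
  List.for_all
    (fun v ->
       List.for_all
         (fun t ->
            List.length (List.filter (fun (_, t') -> t' = t) g.edges) = 1)
         (required_targets v (g.lbl v)))
    g.nodes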
Chapter 4
Graph-Term Connection
In this chapter, we formally relate PreVIEW graphs and their textual representations. We
proceed by first defining a term language and a translation from PreVIEW graphs to this
term language. We then observe that not all terms can be generated using this translation,
but rather only terms in a specific normal form. We define this normal form and a backward-translation from terms to graphs. We then show that a term in normal form represents all
terms that map to the same graph. Finally, we show that the sets of PreVIEW graphs and
normal forms are isomorphic.
4.1 From Graphs to Terms
To represent PreVIEW graphs textually, we combine features from Ariola and Blom's calculus of cyclic lambda-terms with the staging constructs Bracket “⟨⟩”, Escape “∼”, and
Run “!”. The set M of staged lambda-terms with letrec is defined as follows:
M ∈ M ::= x | λx.M | M M | letrec d* in M | ∼M | ⟨M⟩ | !M
d ∈ D ⊆ 𝒟 ::= x = M
where d∗ stands for a possibly empty sequence of letrec declarations d. We write M1 [x :=
M2 ] for the result of substituting M2 for all free occurrences of the variable x in M1 , without
capturing any free variables in M2 . By convention, bound and free variables are distinct
from each other. We use ≡α to denote syntactic equality up to α-renaming of both lambda-bound variables and recursion variables.
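For readers who prefer code, the set M can be rendered as an OCaml datatype; the constructor names (TVar, TLam, and so on) are illustrative only:

type term =
  | TVar of string
  | TLam of string * term                  (* λx.M *)
  | TApp of term * term                    (* M M *)
  | TLetrec of (string * term) list * term (* letrec d* in M *)
  | TEsc of term                           (* ∼M *)
  | TBracket of term                       (* ⟨M⟩ *)
  | TRun of term                           (* !M *)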
To translate a graph into a term, we define the translation function τ in Section 7.1.
Intuitively, this translation associates all nodes in the graph with a unique variable name
in the term language. These variables are used to explicitly name each subterm of the
resulting term. Lambda nodes are associated with an additional variable name, which is
used to name the formal parameter of the represented lambda abstraction.
The translation τ then starts by computing the set of top-level nodes in the graph, i.e.
all nodes that are not inside the scope of a different node (see Section 3.3). The translation
τ creates a letrec-declaration for each of these nodes. For all nodes without subgraphs, the
letrec declaration binds the variable associated with that node to a term that combines the
variables associated with the incoming edges to that node. For nodes with subgraphs, the
translation is applied recursively to the subgraph. The variable for the node is then bound
in a letrec declaration to the term that represents the subgraph.
Example 4.1 Consider the PreVIEW graph g in part (a) of the following figure:
[Figure (a): the graph g, a λ node whose body is a Bracket node containing two Escape nodes feeding an application node.]

(b) The term τ(g):

letrec x1 = λy1.(letrec x2 = ⟨letrec x3 = ∼y1,
                                     x4 = ∼y1,
                                     x5 = x3 x4
                              in x5⟩
                 in x2)
in x1
Let v1 be the lambda node, v2 the Bracket node, v3 and v4 the top and bottom Escape nodes,
and v5 the application node in the graph g. We associate a variable name x j with each node
v j . In addition, the name y1 is associated with the parameter of the lambda node v1 . All
nodes are in the scope of v1 , therefore v1 is the only node at the “top-level” of g. We create
a letrec declaration for v1 , binding x1 to a term λy1 .N where N is the result of recursively
translating the subgraph inside v1 . When translating the subgraph of the Bracket node v2 ,
note that this subgraph contains three top-level nodes (v3 , v4 , v5 ). Therefore, the letrec-term
for v2 contains three variable declarations (x3 , x4 , x5 ).
4.1.1 Do Graphs Represent all Terms?
The translation function τ defined in the previous section can only generate terms in a very
specific form that is determined by the translation. For instance, while the graph in part
(a) of the previous example represents the computation λy1.⟨∼y1 ∼y1⟩, the translation
τ generates the term shown in part (b) above. Intuitively, this term represents the same
computation as λy1.⟨∼y1 ∼y1⟩, but every subterm is explicitly named using the letrec
construct. This explicit naming of subterms captures the notion of value sharing in PreVIEW graphs, where the output port of any node can be the source of multiple edges. In a
term with explicitly named subterms, any subterm can be shared by simply referencing the
corresponding letrec variable multiple times. We call such explicitly named terms normal
forms and define the set Mnorm of normal forms as follows:
N ∈ Mnorm ::= x | letrec q+ in x
q ∈ Q ⊆ Dnorm ::= x = x | x = y z | x = λy.N | x = ⟨N⟩ | x = ∼N | x = !y
where q+ is a non-empty sequence of declarations q. Note that in normal forms, declarations of the form x = y are only allowed if x ≡ y. We observe that normal forms
are restricted forms of generic terms and that τ only creates terms in normal form. The
following lemmas formalize these observations.
Lemma 4.1 (Normal forms are terms) Mnorm ⊆ M.
Lemma 4.2 (τ maps graphs to normal forms) If g ∈ G then τ(g) ∈ Mnorm .
The restriction on the resulting terms of the translation τ means that in PreVIEW, we cannot
represent each distinct term by a distinct graph. However, what we can show is that every
term has a normal form associated with it. For this purpose, a normalization function
ν is defined in Section 7.1. Intuitively, the action of this function can be described as
follows. If the term M that we want to normalize is just a variable, then it is already
in normal form. Otherwise, we map M to a letrec-term, assigning a fresh letrec-variable
to each subterm of M in the process. The translation ν preserves the nesting of lambda
abstractions, Brackets, and Escapes by normalizing such subterms recursively¹. Once every
subterm of M has a letrec variable associated with it, and all lambda, Bracket, and Escape
subterms are normalized recursively, we eliminate letrec indirections of the form x = y
with x ≢ y using substitution.
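As a sketch of the naming pass that underlies ν, the following OCaml fragment names every subterm for the core constructs only (variables, abstractions, applications), in the spirit of the ⟦ ⟧pre rules of Definition 7.2; the indirection-elimination pass is elided:

let counter = ref 0
let fresh () = incr counter; Printf.sprintf "x%d" !counter

(* Returns the generated declarations together with the root variable. *)
let rec name_subterms (t : term) : (string * term) list * string =
  match t with
  | TVar x -> ([], x)
  | TLam (x, body) ->
      let ds, y = name_subterms body in
      let v = fresh () in
      ([ (v, TLam (x, TLetrec (ds, TVar y))) ], v)  (* body named recursively *)
  | TApp (t1, t2) ->
      let ds1, y1 = name_subterms t1 in
      let ds2, y2 = name_subterms t2 in
      let v = fresh () in
      (ds1 @ ds2 @ [ (v, TApp (TVar y1, TVar y2)) ], v)
  | _ -> failwith "sketch covers the core constructs only"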
Lemma 4.3 (ν maps terms to normal forms) If M ∈ M then ν(M) ∈ Mnorm .
Example 4.2 Applying the normalization function ν to each of the terms
1. λx.⟨∼x ∼x⟩
2. letrec y = λx.⟨∼x ∼x⟩ in y
3. λx.letrec y = ⟨∼x ∼x⟩ in y
yields a term alpha-equivalent to
letrec y1 = λx.(letrec y2 = ⟨letrec y3 = ∼x, y4 = ∼x, y5 = y3 y4 in y5⟩ in y2) in y1
Note that the core structure of the original terms (lambda term with Bracket body and
application of two escaped parameter references inside) is maintained in the normalized
version, but every subterm is now named explicitly.
4.2 From Terms to Graphs
We have established that for every term we can compute a normal form. In order to show
that every term can also be mapped to a PreVIEW graph, we need to define a mapping from
terms in M to graphs in G. To simplify the definition of this translation, we introduce the
notion of an intermediate graph for which the well-formedness condition is relaxed. This
notion corresponds to Ariola and Blom’s concept of scoped pre-graphs.
¹This is similar to the translation τ from graphs to terms presented above, where lambda, Bracket, and Escape nodes are translated to terms recursively.
The set G pre of intermediate graphs consists of all graphs for which the well-formedness
condition is relaxed: nodes with label • may have 0 or 1 incoming edge. Formally, whenever L(v) = • then pred(v) = ∅ or pred(v) = {(s, v.in)}. If such a node has 1 predecessor, we
call it an indirection node.
Since free variables are not represented as nodes in PreVIEW, the idea is to associate an
indirection node with each variable occurrence in the translated lambda-term. This makes
it easier to connect subgraphs during the translation, and provides “hooks” in the graph for
connecting bound variable occurrences in the graph to their binders. We also use indirection
nodes to model intermediate states in the graph reductions presented in Section 5.2. We
translate terms to PreVIEW graphs in two steps:
• The function γ pre maps terms to intermediate graphs.
• The simplification function σ maps intermediate graphs to proper PreVIEW graphs.
Composing these two functions then gives us the backward-translation γ which maps
lambda-terms to well-formed PreVIEW graphs. Here we give a pictorial definition of this translation; Section 7.1 gives the formal definition.
4.2.1 Constructing Intermediate Graphs
We first illustrate graph construction for variables x, lambda abstractions λx.M, and applications M1 M2 :
[Figure: γpre(x) is a single indirection node with an edge from x into it; γpre(λx.M) is a new λ node v whose box contains γpre(M), with the root of γpre(M) wired to v's return port and edges from x redirected to v's bind port; γpre(M1 M2) is a new @ node whose fun and arg ports receive the roots of γpre(M1) and γpre(M2).]
The term x is mapped to the graph consisting of a single indirection node v as the root, with one edge from x to v's in port. The graph for λx.M consists of a root node v with label
λ. The lambda-body M is translated to the subgraph γ pre (M), and v is connected to the
sub-graph by a new edge from the root of the subgraph to v’s return-port. In the subgraph,
all edge sources x are replaced by v’s binding port. The scope of v contains v and all nodes
in the subgraph. An application M1 M2 is mapped to a graph with a root node v labeled
@. The roots of the two subgraphs γ pre (M1 ) and γ pre (M2 ) are connected to v’s fun and arg
ports. Furthermore, we construct graphs for terms of the form (letrec x1 = M1, .., xn = Mn in M), ⟨M⟩, ∼M, and !M as follows:
[Figure: γpre(letrec x1 = M1, .., xn = Mn in M) combines the graphs γpre(M1), .., γpre(Mn), and γpre(M), redirecting every edge that originates at a free variable xj to the root of γpre(Mj); γpre(⟨M⟩) and γpre(∼M) wrap γpre(M) in a ⟨⟩ or ∼ box; γpre(!M) wires the root of γpre(M) to the in port of a new ! node.]
To construct the graph for (letrec x1 = M1 , .., xn = Mn in M), we first translate the terms M1
through Mn and M individually. The root of the new graph is the root of γ pre (M). Any edge
that starts with one of the free variables x1 , .., xn is replaced by an edge originating at the
root of the corresponding graph γpre(Mj). The cases for ⟨M⟩ and ∼M are treated similarly
to the case for λx.M, and the case for !M is treated similarly to the case for application.
4.2.2 Simplifying Intermediate Graphs
Simplification eliminates any indirection nodes from the graph using the following graph
transformations:
[Figure: the two simplification transformations: an indirection node with a self-loop is replaced by a black hole; an indirection node fed by a free variable x or by another node's source port is skipped.]
Any indirection node with a self-loop (i.e. there is an edge from its out port to its in port)
is replaced by a black hole. If there is an edge from a free variable x or from a different
node’s port s to an indirection node v, then the indirection node is “skipped” by replacing
all edges originating at v to edges originating at x or s. Note that the second and third cases
are different since free variables cannot be shared in the graph.
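On the hypothetical graph encoding from Section 3.2, the "skip" transformation might be sketched as follows; updating the scoping function (S\v) and the root (r[v.out := s]) is elided:

let skip_indirection g v =
  match List.find_opt (fun (_, t) -> t = In v) g.edges with
  | Some (s, _) when s <> Out v ->
      let edges' =
        g.edges
        |> List.filter (fun (_, t) -> t <> In v)  (* drop the edge into v *)
        |> List.map (fun (s', t) -> ((if s' = Out v then s else s'), t))
      in
      { g with nodes = List.filter (fun u -> u <> v) g.nodes; edges = edges' }
  | _ -> g  (* no predecessor, or a self-loop: v becomes a black hole instead *)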
Based on the formal definition of the backward-translation γ in Section 7.1, we can
construct a well-formed PreVIEW graph for any term.
Lemma 4.4 (Graphing) For any M ∈ M, γ(M) is defined and is a unique, well-formed
graph.
4.2.2.1 The Meaning of Normal Forms
In Section 4.1, we have shown that we cannot associate a distinct graph with each distinct
term. However, the function ν associates a normal form with each distinct term. Using
the mappings ν, γ, and τ, we can now give a precise definition of the connections between
terms, graphs, and normal forms. Two terms map to the same graph if and only if they have
the same normal form. Therefore, normal forms represent equivalence classes of terms
that map to the same graph by γ. The function ν gives an algorithm for computing such
representative terms. Given two well-formed graphs g, h ∈ G, we write g = h if g and h are
isomorphic graphs with identical node labels.
Lemma 4.5 (Soundness of Normalization) If M ∈ M, then γ(M) = γ(ν(M)).
Lemma 4.6 (Recovery of normal forms) If N ∈ Mnorm then N ≡α τ(γ(N)).
Lemma 4.7 (Completeness of Normalization) Let M1 , M2 ∈ M. If γ(M1 ) = γ(M2 ) then
ν(M1) ≡α ν(M2).
Example 4.3 In Example 4.2 we illustrated that the three terms in part (a) of the following
figure have the same normal form. As specified by Lemma 4.5, they also translate to the
same graph shown in part (b):
M1 ≡ λx.⟨∼x ∼x⟩
M2 ≡ letrec y = λx.⟨∼x ∼x⟩ in y
M3 ≡ λx.letrec y = ⟨∼x ∼x⟩ in y

(a) The terms M1, M2, M3

[Figure (b): the graph γ(M1) = γ(M2) = γ(M3): a λ node containing a Bracket node, inside which two Escape nodes share the bound variable and feed an application node.]
Furthermore, Lemma 4.7 specifies that the terms M1 , M2 , and M3 must have the same
normal form since they map to the same graph by γ.
Using this result, we can show that the sets of PreVIEW graphs and normal forms are
isomorphic.
Theorem 4.1 (Main) The sets of well-formed graphs and normal forms are one-to-one:
1. If M ∈ M then ν(M) ≡α τ(γ(M)).
2. If g ∈ G then g = γ(τ(g)).
We now come to the first application of this result.
Chapter 5
A Graphical Reduction Semantics
Having established an isomorphism between PreVIEW graphs and terms in normal form,
we now relate the computational behavior of terms and graphs. To this end, we present a
reduction semantics for staged cyclic lambda-terms, and formally define similar reductions
for PreVIEW graphs. We show that graph reduction steps can be modeled using term reductions, and that term reduction steps can be modeled using reductions on the corresponding
graph.
5.1 Staged Terms
Ariola and Blom define a call-by-need reduction semantics for the lambda-calculus extended with a letrec-construct. In order to extend this semantics to support staging constructs, we use the notion of expression families proposed for the reduction semantics of
call-by-name λ-U [14]. In the context of λ-U, expression families restrict beta-redexes to
terms that are valid at level 0. Intuitively, given a staged term M, the level of a subterm
of M is the number of Brackets minus the number of Escapes surrounding the subterm. A
term M is valid at level n if all Escapes inside M occur at a level greater than n.
Example 5.1 Consider the lambda term M ≡ ⟨λx. ∼(f ⟨x⟩)⟩. The variable f occurs at level 0, while the use of x occurs at level 1. Similarly, the Escape occurs at level 1. Therefore, M is valid at level 0.
The calculus λ-U does not provide a letrec-construct to directly express sharing in lambda
terms. Therefore, we extend the notion of expression families to include the letrec-construct
as follows:
M^0 ∈ M^0 ::= x | λx.M^0 | M^0 M^0 | letrec D^0 in M^0 | ⟨M^1⟩ | !M^0
M^(n+1) ∈ M^(n+1) ::= x | λx.M^(n+1) | M^(n+1) M^(n+1) | letrec D^(n+1) in M^(n+1) | ⟨M^(n+2)⟩ | ∼M^n | !M^(n+1)
D^n ∈ D^n ::= x1 = M1^n, .., xk = Mk^n
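As a cross-check on these families, here is a sketch of a validity test on the illustrative term datatype from Chapter 4: a term is valid at level n exactly when every Escape in it occurs at a level greater than n:

let rec valid_at n = function
  | TVar _ -> true
  | TLam (_, m) -> valid_at n m
  | TApp (m1, m2) -> valid_at n m1 && valid_at n m2
  | TLetrec (ds, m) ->
      List.for_all (fun (_, d) -> valid_at n d) ds && valid_at n m
  | TBracket m -> valid_at (n + 1) m       (* Brackets raise the level *)
  | TEsc m -> n > 0 && valid_at (n - 1) m  (* Escapes must stay above level n *)
  | TRun m -> valid_at n m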
In order to combine Ariola and Blom’s reduction semantics for cyclic lambda-terms with
the reduction semantics for λ-U, we need to account for the difference in beta-reduction
between the two formalisms: While λ-U is based on a standard notion of substitution,
Ariola and Blom’s beta-rule uses the letrec-construct to express a binding from the applied
function’s parameter to the argument of the application, without immediately substituting
the argument for the function’s parameter. Instead, substitution is performed on demand by
a separate reduction rule. Furthermore, substitution in λ-U is restricted (implicitly by the
β-rule) to M^0-terms. We make this restriction explicit by defining which contexts are valid
at different levels:
C ∈ C ::= [ ] | λx.C | C M | M C | letrec D in C
        | letrec x = C, D in M | ⟨C⟩ | ∼C | !C
C^n ∈ C^n = {C ∈ C | C[x] ∈ M^n}
We write C[M] for the result of replacing the hole in C with the term M, potentially
capturing free variables in M in the process. Using these families of terms and contexts,
we extend Ariola and Blom’s reduction rules as follows:
letrec x = M^0, D^n in C^0[x]  →sub  letrec x = M^0, D^n in C^0[M^0]
letrec x = C^0[y], y = M^0, D^n in M^n  →sub  letrec x = C^0[M^0], y = M^0, D^n in M^n
(λx.M1^0) M2^0  →β◦  letrec x = M2^0 in M1^0
∼⟨M^0⟩  →esc  M^0
!⟨M^0⟩  →run  M^0
The full set R of reduction rules for terms in M^n contains these rules together with a set of
supplemental rules defined in Section 7.2.1. By convention, the reduction rules in R can be
applied in any context. We then write → for the compatible extension of the rules in R, and
we write →∗ for the reflexive and transitive closure of →.
The idea behind the rules sub is to perform substitution on demand after a function
application has been performed. In this sense, the reduction rules sub and the rule β◦
together mimic the behavior of beta-reduction in λ-U.
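On the illustrative term datatype, the three contractions can be sketched as a single local step; the level side conditions (M^0-validity of the redex) and the compatible closure are elided:

let contract = function
  | TEsc (TBracket m) -> Some m            (* esc: ∼⟨M⟩ → M *)
  | TRun (TBracket m) -> Some m            (* run: !⟨M⟩ → M *)
  | TApp (TLam (x, body), arg) ->
      Some (TLetrec ([ (x, arg) ], body))  (* β◦: bind, substitute on demand *)
  | _ -> None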
5.2 Staged Graphs
To define a reduction semantics for PreVIEW, we apply these ideas to graphs and define
the level of a node in the graph as the number of surrounding Bracket nodes minus the
surrounding Escape nodes (see Definition 7.4; the notion of a surrounding node is defined in Section 3.3). Furthermore, we specify when a set of nodes U is valid at level n using the judgment ⊢n U. Intuitively, we have ⊢n U if all Escape nodes in U occur at a level greater than n. Definition 7.5 gives derivation rules for this judgment.
Context families and node levels are closely related. In the term reductions presented
in the previous section, context families restrict the terms in which a variable may be substituted. In the graph reductions described in this section, determining whether two nodes
constitute a redex will require comparing the levels of the two nodes. Furthermore, we
can show that the notion of a set of nodes valid at a given level corresponds directly to the
restriction imposed on terms by expression families.
Lemma 5.1 (Properties of graph validity)
1. Let M^n ∈ M^n and g = γ(M^n). Then ⊢n V for any n ≥ 0.
2. Let g ∈ G with ⊢n V. Then τ(g) ∈ M^n for any n ≥ 0.
When evaluating a graph g = (V, L, E, S , r), we require that g be well-formed (see
Section 3.4) and that ⊢0 V. This ensures that level(v) is defined for all v ∈ V.
Lemma 5.2 (Node levels in well-formed graphs) Given a graph g ∈ G with ⊢0 V. Then for all v ∈ V, we have level(v) = n for some n ≥ 0.
This property is important for defining graph reductions involving staging constructs, since
each reduction step requires a comparison of node levels.
We now present three graph reductions that correspond to the term-reduction rules β◦,
esc, and run defined in the previous section. Each graph reduction is performed in two
steps:
1. If necessary, we copy nodes to expose the redex in the graph. This step corresponds
to using the term reduction rules sub or the rules merge, lift, and gc (see Section
7.2.1) on the original term.
2. We contract the redex by removing nodes and by redirecting edges in the graph. This
step corresponds to performing the actual β◦-, esc-, or run-reduction on a term.
The formal definitions for these graph reductions can be found in Section 7.2.2.
5.2.1 Beta
A β◦-redex in a PreVIEW graph consists of an application node v that has a lambda node w
as its first predecessor (See Section 7.2.2 for the exact set of conditions). The contraction
of the redex is performed in two steps:
[Figure: contraction of a β◦-redex in two steps. Step 1 copies the λ node w, yielding 2⊕w, so that its out port feeds only the application node 1⊕v. Steps 2a and 2b convert 1⊕v and 2⊕w into indirection nodes, redirect the edges as described below, and apply the simplification σ(g′).]
1. We check whether the edge (w.out, v.fun) is the only edge originating at w.out, and
whether the application node v is outside the scope of w. If any of these conditions
do not hold, then we copy the lambda node in a way that ensures that the conditions
hold for the copy of w. The copy of w is called 2 ⊕ w, and the original v is renamed 1 ⊕ v. We then place 2 ⊕ w and its scope in the same scope as 1 ⊕ v.
2. We convert 1⊕v and 2⊕w into indirection nodes, which are then removed by the graph
simplification function σ (defined in Section 4.2). We redirect edges so that after
simplification, edges that originated at the applied function’s parameter (2 ⊕ w.bind)
now start at the root s2 of the function’s argument, and edges that originated at the
application node’s output (1 ⊕ v.out) now start at the root s1 of the function’s body.
5.2.2 Escape
An esc-redex in a PreVIEW graph consists of an Escape node v that has a Bracket node w
as its predecessor (See Section 7.2.2 for the exact set of conditions). We contract the redex
in two steps:
[Figure: contraction of an esc-redex in two steps. Step 1 copies the Bracket node w, yielding 2⊕w, so that its out port feeds only the Escape node 1⊕v. Steps 2a and 2b convert 1⊕v and 2⊕w into indirection nodes, redirect the edges as described below, and apply the simplification σ(g′).]
1. We check whether the edge (w.out, v.return) is the only edge originating at w.out,
and whether the Escape node v is outside the scope of w. If any of these conditions
do not hold, then we copy the Bracket node in a way that ensures that the conditions
hold for the copy of w. The copy of w is called 2 ⊕ w, and the original of v is called
1 ⊕ v. We then place 2 ⊕ w (and its scope) in the scope of 1 ⊕ v.
2. We convert 1 ⊕ v and 2 ⊕ w into indirection nodes, which are then removed by the
function σ. We redirect edges so that after simplification, edges that originated at the
Escape node’s output port (1 ⊕ v.out) now start at the root s1 of the Bracket node’s
body.
5.2.3 Run
A run-redex in a PreVIEW graph consists of a Run node v that has a Bracket node w as
its predecessor (See Section 7.2.2 for the exact set of conditions). The contraction of the
redex is performed in two steps:
[Figure: contraction of a run-redex in two steps. Step 1 copies the Bracket node w, yielding 2⊕w, so that its out port feeds only the Run node 1⊕v. Steps 2a and 2b convert 1⊕v and 2⊕w into indirection nodes, redirect the edges as described below, and apply the simplification σ(g′).]
1. We check whether the edge (w.out, v.in) is the only edge originating at w.out, and
whether the Run node v is outside the scope of w. If any of these conditions do not
hold, then we copy the Bracket node in a way that ensures that the conditions hold for the copy of w. The copy of w is called 2 ⊕ w, and the original v is renamed 1 ⊕ v.
We then place 2 ⊕ w (and its scope) in the same scope as 1 ⊕ v.
2. We convert 1 ⊕ v and 2 ⊕ w into indirection nodes, which are then removed by the
function σ. We redirect edges so that after simplification, edges that originated at the
Run node’s output port (1 ⊕ v.out) now start at the root s1 of the Bracket node’s body.
We can now establish that reductions on terms are closely related to graph reductions: any
reduction step on a graph g = γ(M) corresponds to a sequence of reduction steps on the
term M to expose a redex, followed by a reduction step to contract the exposed redex.
Conversely, the contraction of any redex in a term M corresponds to the contraction of a
redex in the graph γ(M).
Theorem 5.1 (Correctness of Graphical Reductions) Let g ∈ G, δ ∈ {β◦, esc, run}, M1^0 ∈ M^0, and g = γ(M1^0).
1. Graph reductions preserve well-formedness:
g →δ h implies h ∈ G
2. Graph reductions are sound:
g →δ h implies M1^0 →* M2^0 →δ M3^0 for some M2^0, M3^0 ∈ M^0 such that h = γ(M3^0)
3. Graph reductions are complete:
M1^0 →δ M2^0 implies g →δ h for some h ∈ G such that h = γ(M2^0)
Chapter 6
Conclusion and Future Work
Because sharing is expressed differently in graphical and textual representations of programs, there is a slight but fundamental mismatch between the two. As a result, semantic
concepts established in the textual setting cannot be mapped to the graphical setting without
a careful analysis of the connection between the two representations. To study the operational aspects of Resource-aware Programming (RAP), we develop a precise connection
between graphical and textual representations of programs that can express both higher-order functions and staging constructs. This connection can be used to lift a reduction
semantics from the textual to the graphical setting.
While the work presented here is motivated by a visual programming language that
is already in wide use (LabVIEW), these extensions have not yet been implemented. As
such, future work will include both furthering the theory and implementing
the extensions. An important next step in developing the theory will be lifting both type
checking and type inference algorithms defined on textual representations to the graphical
setting. Given the interactive manner in which visual programs are developed, it will also
be important to see if these algorithms can be incrementalized so that any errors can be
detected locally, without the need for analysis of the full program.
From a human-computer-interaction standpoint, it will be essential for the acceptance
of advanced functional and multi-stage constructs in visual languages to explore how we
can keep such languages intuitive for domain experts. To this end, we consider color shading to be a promising way of visualizing stage distinctions in graphical programs. Similarly,
it will be desirable to design “syntactic sugar” for nested lambda declarations and cascading function applications in order to visually model LabVIEW procedures with multiple
parameters.
At a more technical level, it will be interesting to consider the possibility of normal
forms with a minimal number of letrec variables. We expect such a representation to be a
significant improvement over the presented normal form with respect to the scalability of
future implementations.
Chapter 7
Formal Definitions
7.1 Translations
Conventions: By assumption, all recursion variables x in letrec-declarations are distinct,
and the sets of bound and free variables are disjoint. Different permutations of the same
sequence of declarations d∗ are identified; for this reason, we often use the set notation D instead of d∗. For sequences of declarations
D1 and D2 in D, we write D1 , D2 for the concatenation of the two sequences.
Definition 7.1 (Term construction) Let g ∈ G. The function τ : G → Mnorm maps g to a
term in Mnorm as follows:
1. The function mkrec avoids the construction of empty letrec-terms (letrec in M) in the translation:

   mkrec(D, M) = M                  if D = ∅
   mkrec(D, M) = letrec D in M      otherwise
2. For every node v ∈ V, we define a unique name xv, and a second distinct name yv if L(v) = λ. The function name(s) then associates a name with each edge source s in the graph:

   name(v.out) = xv
   name(v.bind) = yv
   name(x) = x
3. To construct letrec-terms, we define two mutually recursive functions. The function
term(v) constructs a term corresponding to each node v ∈ V:

   L(v) = •    pred(v) = ∅                           term(v) = xv
   L(v) = λ    pred(v) = {(s, v.return)}             term(v) = λyv.mkrec(decl(contents(v)), name(s))
   L(v) = @    pred(v) = {(s1, v.fun), (s2, v.arg)}  term(v) = name(s1) name(s2)
   L(v) = ⟨⟩   pred(v) = {(s, v.return)}             term(v) = ⟨mkrec(decl(contents(v)), name(s))⟩
   L(v) = ∼    pred(v) = {(s, v.return)}             term(v) = ∼mkrec(decl(contents(v)), name(s))
   L(v) = !    pred(v) = {(s, v.in)}                 term(v) = ! name(s)

The function decl(W) constructs letrec-declarations for a given set of nodes W:

   decl(W) = {xv = term(v) | v ∈ toplevel(W)}
4. The function τ(g) then translates g to a staged lambda-term in normal form:
τ(g) = mkrec(decl(V), name(r))
The constraint v ∈ toplevel(W) in the definition of decl ensures that equations are generated only for nodes that are not inside the scope of another node w ∈ W. Equations for
nodes in the scope of w are instead generated inside term(w).
Definition 7.2 (Term Normalization)
1. We define the set Mpre of intermediate forms as follows:

   N′ ∈ Mpre ::= x | letrec q′* in x
   q′ ∈ Q′ ⊆ Dpre ::= x = y | x = y z | x = λy.N′ | x = ⟨N′⟩ | x = ∼N′ | x = !y

2. The translation ⟦ ⟧pre : M → Mpre is defined as follows, where all introduced variables are fresh:

   ⟦x⟧pre = letrec in x
   ⟦λx.M⟧pre = letrec x1 = λx.N′ in x1,  where ⟦M⟧pre = N′
   ⟦M1 M2⟧pre = letrec Q1, Q2, x3 = x1 x2 in x3,  where ⟦M1⟧pre = letrec Q1 in x1 and ⟦M2⟧pre = letrec Q2 in x2
   ⟦letrec x1 = M1, .., xk = Mk in M⟧pre = letrec Q, Q1, .., Qk, x1 = y1, .., xk = yk in y,  where ⟦M⟧pre = letrec Q in y and ⟦Mj⟧pre = letrec Qj in yj
   ⟦⟨M⟩⟧pre = letrec x1 = ⟨N′⟩ in x1,  where ⟦M⟧pre = N′
   ⟦∼M⟧pre = letrec x1 = ∼N′ in x1,  where ⟦M⟧pre = N′
   ⟦!M⟧pre = letrec Q, x1 = !y in x1,  where ⟦M⟧pre = letrec Q in y

3. The translation ⟦ ⟧clean : Mpre → Mnorm is defined as follows:

   ⟦N⟧clean = N,  if N ∈ Mnorm
   ⟦letrec in x⟧clean = x
   ⟦letrec y = λz.N′, Q in x⟧clean = ⟦letrec y = λz.N1, Q in x⟧clean,  where N′ ∉ Mnorm and ⟦N′⟧clean = N1
   ⟦letrec y = ⟨N′⟩, Q in x⟧clean = ⟦letrec y = ⟨N1⟩, Q in x⟧clean,  where N′ ∉ Mnorm and ⟦N′⟧clean = N1
   ⟦letrec y = ∼N′, Q in x⟧clean = ⟦letrec y = ∼N1, Q in x⟧clean,  where N′ ∉ Mnorm and ⟦N′⟧clean = N1
   ⟦letrec y = z, Q in x⟧clean = ⟦(letrec Q in x)[y := z]⟧clean,  where y ≢ z

4. The translation ν : M → Mnorm is then defined as ν = ⟦ ⟧clean ∘ ⟦ ⟧pre.

The restriction N′ ∉ Mnorm in the λ-, Bracket-, and Escape-rules of ⟦ ⟧clean is needed to ensure that normalization terminates: without this restriction, these rules could be applied to a fully normalized term without making any progress.
Definition 7.3 (Graph construction)
1. The translation γpre : M → Gpre from terms to intermediate graphs is defined recursively as follows, where v is fresh in every rule and ⊎ denotes disjoint union:

   γpre(x) = ({v}, {v ↦ •}, {(x, v.in)}, ∅, v.out)

   γpre(λx.M) = (V ⊎ {v}, L ⊎ {v ↦ λ}, E[x := v.bind] ⊎ {(r, v.return)}, S ⊎ {v ↦ V ⊎ {v}}, v.out),  where γpre(M) = (V, L, E, S, r)

   γpre(M1 M2) = (V1 ⊎ V2 ⊎ {v}, L1 ⊎ L2 ⊎ {v ↦ @}, E1 ⊎ E2 ⊎ {(r1, v.fun), (r2, v.arg)}, S1 ⊎ S2, v.out),  where γpre(Mi) = (Vi, Li, Ei, Si, ri)

   γpre(letrec x1 = M1, .., xk = Mk in M) = (V ⊎ V1 ⊎ .. ⊎ Vk, L ⊎ L1 ⊎ .. ⊎ Lk, (E ⊎ E1 ⊎ .. ⊎ Ek)[x1 := r1, .., xk := rk], S ⊎ S1 ⊎ .. ⊎ Sk, r),  where γpre(Mj) = (Vj, Lj, Ej, Sj, rj) and γpre(M) = (V, L, E, S, r)

   γpre(⟨M⟩) = (V ⊎ {v}, L ⊎ {v ↦ ⟨⟩}, E ⊎ {(r, v.return)}, S ⊎ {v ↦ V ⊎ {v}}, v.out),  where γpre(M) = (V, L, E, S, r)

   γpre(∼M) = (V ⊎ {v}, L ⊎ {v ↦ ∼}, E ⊎ {(r, v.return)}, S ⊎ {v ↦ V ⊎ {v}}, v.out),  where γpre(M) = (V, L, E, S, r)

   γpre(!M) = (V ⊎ {v}, L ⊎ {v ↦ !}, E ⊎ {(r, v.in)}, S, v.out),  where γpre(M) = (V, L, E, S, r)

We write E[s1 := s2] for the result of substituting any edge in E that originates at s1 with an edge that starts at s2:

   E[s1 := s2] = {(s, t) ∈ E | s ≠ s1} ∪ {(s2, t) | (s1, t) ∈ E}

2. The translation σ : Gpre → G is defined as follows:

   σ(V, L, E, S, r) = (V, L, E, S, r),  if ∀v ∈ V : L(v) = • ⇒ pred(v) = ∅

   σ(V ⊎ {v}, L ⊎ {v ↦ •}, E ⊎ {(v.out, v.in)}, S, r) = g,  if σ(V ⊎ {v}, L ⊎ {v ↦ •}, E, S, r) = g

   σ(V ⊎ {v}, L ⊎ {v ↦ •}, E ⊎ {(s, v.in)} ⊎ {(v.out, t1), .., (v.out, tk)}, S, r) = g,  if σ(V, L, E ⊎ {(s, t1), .., (s, tk)}, S\v, r[v.out := s]) = g, s ≠ v.out, and (v.out, t) ∉ E

We write S\u for the result of removing node u from any scope in the graph: (S\u)(v) = S(v)\{u}. The substitution r[s1 := s2] results in s2 if r = s1, and in r otherwise.

3. The translation γ : M → G is then defined as γ = σ ∘ γpre.
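Both substitutions are one-line operations in the hypothetical OCaml encoding from Section 3.2; these helpers are illustrative only:

let subst_edges s1 s2 edges =            (* E[s1 := s2] *)
  List.map (fun (s, t) -> ((if s = s1 then s2 else s), t)) edges

let subst_root s1 s2 r =                 (* r[s1 := s2] *)
  if r = s1 then s2 else r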
7.2 Reduction Semantics for PreVIEW

7.2.1 Supplemental Term Reduction Rules
letrec D1^n in (letrec D2^n in M^n)  →merge  letrec D1^n, D2^n in M^n
letrec x = (letrec D1^n in M1^n), D2^n in M2^n  →merge  letrec x = M1^n, D1^n, D2^n in M2^n
(letrec D^n in M1^n) M2^n  →lift  letrec D^n in (M1^n M2^n)
M1^n (letrec D^n in M2^n)  →lift  letrec D^n in (M1^n M2^n)
letrec D^n in ⟨M^n⟩  →lift  ⟨letrec D^n in M^n⟩
letrec in M^n  →gc  M^n
letrec D1^n, D2^n in M^n  →gc  letrec D1^n in M^n,  if D2^n ≠ ∅ ∧ D2^n ⊥ (letrec D1^n in M^n)

where D ⊥ M means that the set of variables occurring as the left-hand side of a letrec-declaration in D does not intersect with the set of free variables in M.
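A sketch of the side condition D ⊥ M on the illustrative term datatype, with a naive free-variable computation (duplicates in the returned list are harmless here):

let rec free_vars = function
  | TVar x -> [x]
  | TLam (x, m) -> List.filter (fun y -> y <> x) (free_vars m)
  | TApp (m1, m2) -> free_vars m1 @ free_vars m2
  | TLetrec (ds, m) ->
      let bound = List.map fst ds in
      let fvs = List.concat_map (fun (_, d) -> free_vars d) ds @ free_vars m in
      List.filter (fun y -> not (List.mem y bound)) fvs
  | TEsc m | TBracket m | TRun m -> free_vars m

let disjoint ds m =  (* D ⊥ M *)
  List.for_all (fun (x, _) -> not (List.mem x (free_vars m))) ds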
7.2.2 Graph Reductions
Definition 7.4 (Node level) Given a PreVIEW graph g ∈ G, a node v ∈ V has level n if there is a derivation for the judgment level(v) = n, defined as follows:

v ∈ toplevel(V)
──────────────────────────────────
level(v) = 0

surround(v) = u    L(u) = λ    level(u) = n
──────────────────────────────────
level(v) = n

surround(v) = u    L(u) = ⟨⟩    level(u) = n
──────────────────────────────────
level(v) = n + 1

surround(v) = u    L(u) = ∼    level(u) = n + 1
──────────────────────────────────
level(v) = n

We write level(v₁) < level(v₂) as a shorthand for level(v₁) = n₁ ∧ level(v₂) = n₂ ∧ n₁ < n₂.
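Operationally, the level of a node is obtained by walking outward through its enclosing scope-forming nodes (λ, ⟨⟩, ∼). A minimal OCaml sketch, assuming an encoding of surround and L as association lists (the encoding is an assumption chosen here for illustration):

    type label = Lam | Bracket | Escape

    (* surround: node -> enclosing node (None if top-level);
       lbl: node -> label, for the scope-forming nodes. *)
    let rec level (surround : (string * string option) list)
                  (lbl : (string * label) list) (v : string) : int =
      match List.assoc v surround with
      | None -> 0                                 (* v ∈ toplevel(V)        *)
      | Some u ->
          (match List.assoc u lbl with
           | Lam -> level surround lbl u          (* lambdas keep the level *)
           | Bracket -> level surround lbl u + 1  (* ⟨⟩ raises it by one    *)
           | Escape -> level surround lbl u - 1)  (* ∼ lowers it by one     *)

    (* e.g. level [("v", Some "u"); ("u", None)] [("u", Bracket)] "v" = 1 *)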
Definition 7.5 (Graph validity) Given a PreVIEW graph g ∈ G, a set U ⊆ V is valid at level n if there is a derivation for the judgment ⊢ₙ U, defined as follows:

∀v ∈ toplevel(U) : ⊢ₙ v
──────────────────────────────────
⊢ₙ U

L(v) ∈ {@, •, !}
──────────────────────────────────
⊢ₙ v

L(v) = λ    ⊢ₙ contents(v)
──────────────────────────────────
⊢ₙ v

L(v) = ⟨⟩    ⊢ₙ₊₁ contents(v)
──────────────────────────────────
⊢ₙ v

L(v) = ∼    ⊢ₙ contents(v)
──────────────────────────────────
⊢ₙ₊₁ v

We write j ⊕ V for the set {j ⊕ v | v ∈ V} where j ∈ {1, 2}. Furthermore, we write U ⊕ V for the set (1 ⊕ U) ∪ (2 ⊕ V).
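The tagging j ⊕ v is an ordinary disjoint-union construction; as a sketch (the names are illustrative assumptions), it can be modelled in OCaml by a sum type:

    (* j ⊕ v as a sum type; U ⊕ V = (1 ⊕ U) ∪ (2 ⊕ V). *)
    type 'a tagged = One of 'a | Two of 'a

    let oplus (u : 'a list) (v : 'a list) : 'a tagged list =
      List.map (fun x -> One x) u @ List.map (fun x -> Two x) v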
Definition 7.6 (Graph Beta) Given a graph g ∈ G with ⊢₀ V and v, w ∈ V such that

1. L(v) = @ and L(w) = λ
2. (w.out, v.fun) ∈ E
3. ⊢₀ contents(w)
4. ⊢₀ {u | u ∈ S(surround(v)) ∧ u ⤳ v} (the nodes in v's scope that reach v)
5. level(w) ≤ level(v)

Then the contraction of the β◦-redex v, written g →β◦ h, is defined as follows:
1. We define a transitional graph g′ = (V′, L′, E′, S′, r′) using the functions f₁ and f₂ that map edge sources in E to edge sources in E′:

f₁(x) = x
f₁(u.o) = 1 ⊕ u.o

f₂(x) = x
f₂(u.bind) = 2 ⊕ u.bind    if u ∈ S(w)
f₂(u.bind) = 1 ⊕ u.bind    otherwise
f₂(u.out) = 2 ⊕ u.out      if u ∈ S(w)\{w}
f₂(u.out) = 1 ⊕ u.out      otherwise

Let s₀ be the origin of the unique edge in E with target v.arg. The components of g′ are constructed as follows:

V′ = (V\S(w)) ⊕ S(w)    if |{(w.out, t) ∈ E}| = 1 and v ∉ S(w)
V′ = V ⊕ S(w)           otherwise

E′ = {(f₁(s), 1 ⊕ u.i) | 1 ⊕ u ∈ V′ ∧ (s, u.i) ∈ E ∧ u ≠ v}
   ∪ {(2 ⊕ w.out, 1 ⊕ v.fun), (f₁(s₀), 1 ⊕ v.arg)}
   ∪ {(f₂(s), 2 ⊕ u.i) | 2 ⊕ u ∈ V′ ∧ (s, u.i) ∈ E}

L′(j ⊕ u) = L(u)    for j ∈ {1, 2}

S′(2 ⊕ u) = 2 ⊕ S(u)
S′(1 ⊕ u) = 1 ⊕ S(u)       if v ∉ S(u)
S′(1 ⊕ u) = S(u) ⊕ S(w)    if v ∈ S(u)

r′ = f₁(r)
2. Let s₁ and s₂ be the origins of the unique edges in E′ with targets 2 ⊕ w.return and 1 ⊕ v.arg respectively. We modify E′, L′, and S′ as follows:

(2 ⊕ w.out, 1 ⊕ v.fun) := (s₁, 1 ⊕ v.in)
(s₁, 2 ⊕ w.return) := (s₂, 2 ⊕ w.in)
(s₂, 1 ⊕ v.arg) := removed
L′(1 ⊕ v) := •
L′(2 ⊕ w) := •
S′(2 ⊕ w) := undefined

Furthermore, any occurrence of port 2 ⊕ w.bind in E′ is replaced by 2 ⊕ w.out. The resulting graph h of the β◦-reduction is then the simplification σ(g′).
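Informally (a reading added here for orientation, not a statement from the thesis): step 1 copies the body of the lambda node w into the scope of the application node v, tagging the copy with 2 and everything else with 1, and duplicates w only when its output is shared or when w contains v; step 2 then turns both v and the copied w into indirection nodes, wiring the body's result to v's output and the argument to the occurrences of w's bind port, after which σ erases the indirections. For a redex such as (λx.x) y, what remains after simplification is just the subgraph for y, connected to whatever consumed the application's output.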
Definition 7.7 (Graph Escape) Given a graph g ∈ G with ⊢₀ V and v, w ∈ V such that

1. L(v) = ∼ and L(w) = ⟨⟩
2. (w.out, v.return) ∈ E
3. ⊢₀ contents(w)
4. level(w) < level(v)

Then the contraction of the esc-redex v, written g →esc h, is defined as follows:

1. We define a transitional graph g′ = (V′, L′, E′, S′, r′) where V′, L′, S′, and r′ are constructed as in Definition 7.6 (there, v and w refer to the application and lambda nodes of a β◦-redex; here, v stands for the Escape node and w for the Bracket node of the esc-redex). The set of edges E′ is constructed as follows:

E′ = {(f₁(s), 1 ⊕ u.i) | 1 ⊕ u ∈ V′ ∧ (s, u.i) ∈ E ∧ u ≠ v}
   ∪ {(2 ⊕ w.out, 1 ⊕ v.return)}
   ∪ {(f₂(s), 2 ⊕ u.i) | 2 ⊕ u ∈ V′ ∧ (s, u.i) ∈ E}
2. Let s₁ be the origin of the unique edge in E′ with target 2 ⊕ w.return. We modify E′, L′, and S′ as follows:

(2 ⊕ w.out, 1 ⊕ v.return) := (2 ⊕ w.out, 1 ⊕ v.in)
(s₁, 2 ⊕ w.return) := (s₁, 2 ⊕ w.in)
L′(1 ⊕ v) := •
L′(2 ⊕ w) := •
S′(1 ⊕ v) := undefined
S′(2 ⊕ w) := undefined

The resulting graph h of the esc-reduction is then σ(g′).
Definition 7.8 (Graph Run) Given a graph g ∈ G with ⊢₀ V and v, w ∈ V such that

1. L(v) = ! and L(w) = ⟨⟩
2. (w.out, v.in) ∈ E
3. ⊢₀ contents(w)
4. level(w) ≤ level(v)

Then the contraction of the run-redex v, written g →run h, is defined as follows:

1. We define a transitional graph g′ = (V′, L′, E′, S′, r′) where V′, L′, S′, and r′ are constructed as in Definition 7.6 (there, v and w refer to the application and lambda nodes of a β◦-redex; here, v stands for the Run node and w for the Bracket node of the run-redex). The set of edges E′ is constructed as follows:

E′ = {(f₁(s), 1 ⊕ u.i) | 1 ⊕ u ∈ V′ ∧ (s, u.i) ∈ E ∧ u ≠ v}
   ∪ {(2 ⊕ w.out, 1 ⊕ v.in)}
   ∪ {(f₂(s), 2 ⊕ u.i) | 2 ⊕ u ∈ V′ ∧ (s, u.i) ∈ E}
2. Let s₁ be the origin of the unique edge in E′ with target 2 ⊕ w.return. We modify E′, L′, and S′ as follows:

(s₁, 2 ⊕ w.return) := (s₁, 2 ⊕ w.in)
L′(1 ⊕ v) := •
L′(2 ⊕ w) := •
S′(2 ⊕ w) := undefined

The resulting graph h of the run-reduction is then σ(g′).
Chapter 8
Proofs
Lemma 4.1 (Normal forms are terms) Mnorm ⊆ M.
Proof Let N ∈ Mnorm . We proceed by structural induction on N. If N ≡ x, then we are done
since x ∈ M. If N ≡ letrec Q in x, then we consider the different cases for declarations in
Q, and show that the right-hand side of any such declaration is in M.
• x = x. Clearly, x ∈ M.

• x = λx.N₁ where N₁ ∈ Mnorm. By the inductive hypothesis we have N₁ ∈ M. It follows that λx.N₁ ∈ M.

• x = y z. Trivially holds since y z ∈ M.

• x = ⟨N₁⟩ where N₁ ∈ Mnorm. By the inductive hypothesis we have N₁ ∈ M. It follows that ⟨N₁⟩ ∈ M.

• x = ∼ N₁ where N₁ ∈ Mnorm. By the inductive hypothesis we have N₁ ∈ M. It follows that ∼ N₁ ∈ M.

• x = ! y. Clearly, ! y ∈ M.
Lemma 4.2 (τ maps graphs to normal forms) If g ∈ G then τ(g) ∈ Mnorm .
Proof By definition, τ(g) = mkrec(decl(V), name(r)). We first show that for any U ⊆ V the
set of letrec declarations decl(U) is in normal form, i.e. decl(U) ⊆ Dnorm . By the definition
of mkrec it then follows that τ(g) is a normal form. We prove this claim by strong induction
on the size of U.
We assume that for all W ⊂ U we have decl(W) ⊆ Dnorm . The set decl(U) consists of
one letrec declaration xu = term(u) for each top-level node u ∈ U. By a case analysis on
the label of u, we show that each such declaration has the form specified by the grammar
for normal forms. It then follows that decl(U) ⊆ Dnorm .
• L(u) = •: The declaration for u is xu = xu, which is a declaration in normal form.

• L(u) = λ: The declaration for u is xu = λyu.mkrec(decl(contents(u)), name(s)) where s is the root of the subgraph of u. We have contents(u) ⊂ U and so by the inductive hypothesis we get decl(contents(u)) ⊆ Dnorm. Since name(s) is just a variable name, the term mkrec(decl(contents(u)), name(s)) is itself a normal form. This establishes that the letrec declaration for u is in normal form.

• L(u) = @: The declaration for u is xu = name(s₁) name(s₂) where s₁ and s₂ are the sources of the incoming edges to the fun and arg ports of u respectively (well-formedness guarantees that u has exactly two incoming edges). The terms name(s₁) and name(s₂) are just variable names, which establishes that the declaration for u is in normal form.

• L(u) = ⟨⟩: The declaration for u is xu = ⟨mkrec(decl(contents(u)), name(s))⟩ where s is the root of the subgraph of u. We have contents(u) ⊂ U and so by the inductive hypothesis we get decl(contents(u)) ⊆ Dnorm. Since name(s) is just a variable name, the term mkrec(decl(contents(u)), name(s)) is itself a normal form. This establishes that the letrec declaration for u is in normal form.

• L(u) = ∼: Similar to the case for L(u) = ⟨⟩.

• L(u) = !: The declaration for u is xu = ! name(s) where s is the source of the incoming edge to the in port of u (well-formedness guarantees that u has exactly one such edge). The term name(s) is just a variable name, which establishes that the declaration for u is in normal form.
Lemma 4.3 (ν maps terms to normal forms) If M ∈ M then ν(M) ∈ Mnorm.

Proof Let M ∈ M. Since ν = ⟦ ⟧clean ∘ ⟦ ⟧pre, we prove the claim separately for ⟦ ⟧pre and ⟦ ⟧clean.

1. We first show by structural induction on M that ⟦M⟧pre ∈ Mpre.

• M ≡ x. By definition, x ∈ Mpre.

• M ≡ λx.M₁. We have ⟦λx.M₁⟧pre = (letrec x₁ = λx.⟦M₁⟧pre in x₁). By induction, ⟦M₁⟧pre ∈ Mpre. From the grammar for Mpre it then follows that (letrec x₁ = λx.⟦M₁⟧pre in x₁) is in Mpre.

• M ≡ M₁ M₂. We have ⟦M₁ M₂⟧pre = (letrec Q₁, Q₂, x₃ = x₁ x₂ in x₃) where ⟦M₁⟧pre = (letrec Q₁ in x₁) and ⟦M₂⟧pre = (letrec Q₂ in x₂). By induction, (letrec Q₁ in x₁) and (letrec Q₂ in x₂) are both in Mpre. From the grammar for Mpre it then follows that (letrec Q₁, Q₂, x₃ = x₁ x₂ in x₃) is in Mpre.

• M ≡ letrec xⱼ = Mⱼ in M. We have ⟦letrec xⱼ = Mⱼ in M⟧pre = (letrec Q, Qⱼ, xⱼ = yⱼ in y) where ⟦Mⱼ⟧pre = (letrec Qⱼ in yⱼ) for each j and ⟦M⟧pre = (letrec Q in y). By induction, (letrec Qⱼ in yⱼ) and (letrec Q in y) are all in Mpre. From the grammar for Mpre it then follows that (letrec Q, Qⱼ, xⱼ = yⱼ in y) is in Mpre.

• M ≡ ⟨M₁⟩. Similar to the case for λx.M₁.

• M ≡ ∼ M₁. Similar to the case for λx.M₁.

• M ≡ ! M₁. Similar to the case for M₁ M₂.
2. We now show that ⟦N′⟧clean ∈ Mnorm for any N′ ∈ Mpre. We proceed by induction over the derivation of the judgment ⟦N′⟧clean = N and by a case analysis on the last rule applied.

• ⟦N⟧clean = N. By the premise of this rule, N ∈ Mnorm.

• ⟦letrec in x⟧clean = x. By definition, the variable x is a normal form.

• ⟦letrec y = λz.N′, Q′ in x⟧clean = N₂ where ⟦N′⟧clean = N₁ and ⟦letrec y = λz.N₁, Q′ in x⟧clean = N₂. By assumption, (letrec y = λz.N′, Q′ in x) ∈ Mpre. This can only be the case if N′ ∈ Mpre and Q′ ⊆ Dpre. By induction, we get N₁ ∈ Mnorm. By an argument similar to the proof of Lemma 4.1, we can show that Mnorm ⊆ Mpre. Therefore we have N₁ ∈ Mpre and (letrec y = λz.N₁, Q′ in x) ∈ Mpre. We can then apply the inductive hypothesis again, and get N₂ ∈ Mnorm.

• ⟦letrec y = ⟨N′⟩, Q′ in x⟧clean = N₂. Similar to the third case.

• ⟦letrec y = ∼ N′, Q′ in x⟧clean = N₂. Similar to the third case.

• ⟦letrec y = z, Q′ in x⟧clean = N where y ≢ z and ⟦(letrec Q′ in x)[y := z]⟧clean = N. By assumption, (letrec y = z, Q′ in x) ∈ Mpre. This implies that the term (letrec Q′ in x)[y := z] is also in Mpre. We apply the inductive hypothesis and get N ∈ Mnorm.
Lemma 4.4 (Graphing) For any M ∈ M, γ(M) is defined and is a unique, well-formed
graph.
Proof Given a term M ∈ M, we show separately that 1) γ(M) is defined, 2) γ(M) is a
well-formed graph, and 3) γ is deterministic.
1. We show that γ terminates for any input term M ∈ M by showing that both γpre and σ terminate. The mapping γpre(M) is defined recursively. Each recursive call is performed on a term strictly smaller than M. Therefore every chain of recursive applications of γpre is finite, eventually reaching the base case for variables. The only condition in the rules defining γpre is that node names v be fresh. This requirement can always be satisfied, and therefore γpre is always defined.

To show that σ(g) terminates, we observe that each application of the second or third rule in the definition of σ removes exactly one indirection node from the intermediate graph g. If there is at least one such node in the graph, then one of these two rules can be applied: the side conditions in the premises of these rules only enforce correct pattern matching. This ensures progress towards the well-defined base case for σ, a graph with no indirection nodes.
2. According to the definition of well-formedness for graphs, we need to consider the
following three aspects:
• Connectivity: By a simple inspection of the rules in the definition of γ pre , we
can verify that these rules only create edges connecting nodes in the graph with
the appropriate port types, thereby avoiding dangling edges. Furthermore, for
each created node v, the function γ pre generates exactly one edge for each target
port of v, and makes v the root of the resulting graph. This ensures that each
target port in the graph has exactly one incoming edge associated with it. The
definition of σ ensures that for any indirection node that is removed, the incoming and outgoing edges are merged into one edge that “skips” the indirection
node. Therefore, σ does not introduce any dangling edges.
• Scoping: The construction of a graph for the term λx.M in the definition of γ pre
creates the subgraph γ pre (M) recursively, and places it entirely inside the scope
of a new lambda node. No edges of the subgraph leave the lambda node. In
fact, the only “edge” of the subgraph that is affected by the construction is the
root of the subgraph: it is connected to the return port of the new lambda node.
Similarly, we can show that the constructions for Bracket and Escape nodes
generate well-scoped intermediate graphs. The remaining cases follow by a
simple induction on the structure of the term being translated. The mapping σ
only affects indirection nodes, and therefore does not impact the scoping of the
respective graph.
• Root condition: Each rule in the definition of γ pre generates a new node and
marks this node as the root of the resulting graph. The only cases in which the
root is placed in the scope of a node are the constructions of lambda, Bracket,
and Escape nodes. However, in these cases the root is placed only in its own
scope. This matches the requirements for correct scoping. The only rule in the
definition of γ pre that does not involve the creation of a new node is the rule for
letrec. Here, the claim holds by considering the construction of each subgraph
inductively. Again, the mapping σ does not affect scoping, and therefore does
not impact the root condition.
3. The mapping γ pre is defined by induction on the structure of the input term M. For
any term, there is exactly one rule that can be applied. This ensures that γ pre is
deterministic.
The rules in an application of σ can be applied in non-deterministic order. However,
the final result of a graph without indirection nodes is unique for a given intermediate
graph, and therefore σ is deterministic.
Lemma 4.5 (Soundness of Normalization) If M ∈ M then γ(M) = γ(ν(M)).

Proof Let M ∈ M. Since ν = ⟦ ⟧clean ∘ ⟦ ⟧pre, we prove the claim separately for ⟦ ⟧pre and ⟦ ⟧clean.
1. We first show by structural induction on M that γ(M) = γ(⟦M⟧pre).

• M ≡ x. We have ⟦x⟧pre = (letrec in x) and γpre(x) = γpre(letrec in x). Because γpre and σ are deterministic, we get γ(x) = γ(⟦x⟧pre).

• M ≡ λx.M₁. We have ⟦λx.M₁⟧pre = (letrec x₁ = λx.⟦M₁⟧pre in x₁). We now describe the construction of the graphs g = γpre(λx.M₁) and h = γpre(letrec x₁ = λx.⟦M₁⟧pre in x₁), and then show that g and h simplify to the same graph.

The graph g consists of a lambda node v as its root. The root of the graph γpre(M₁) is connected to v's return port, and all nodes in this subgraph are placed in the scope of v. The graph h represents the term (letrec x₁ = λx.⟦M₁⟧pre in x₁). It contains an indirection node w for x₁ as its root, and a lambda node u for the subterm λx.⟦M₁⟧pre. The nodes u and w are connected by an edge (u.out, w.in). The root of the graph γpre(⟦M₁⟧pre) is connected to u's return port, and all nodes in this subgraph are placed in the scope of u.

From the inductive hypothesis we get σ(γpre(M₁)) = σ(γpre(⟦M₁⟧pre)). Since σ also removes the indirection node w from h, this shows that g and h simplify to the same graph.
• M ≡ M₁ M₂. We have ⟦M₁ M₂⟧pre = (letrec Q₁, Q₂, x₃ = x₁ x₂ in x₃) where ⟦M₁⟧pre = (letrec Q₁ in x₁) and ⟦M₂⟧pre = (letrec Q₂ in x₂). We now describe the construction of the graphs g = γpre(M₁ M₂) and h = γpre(⟦M₁ M₂⟧pre), and then show that g and h simplify to the same graph.

The graph g consists of an application node v as its root. The roots of the graphs γpre(M₁) and γpre(M₂) are connected to v's fun and arg ports respectively. The graph h represents the term (letrec Q₁, Q₂, x₃ = x₁ x₂ in x₃). It contains an indirection node w for x₃ as its root, and one application node u for the subterm x₁ x₂. The nodes u and w are connected by an edge (u.out, w.in). The roots of the graphs γpre(letrec Q₁ in x₁) and γpre(letrec Q₂ in x₂) are connected to u's fun and arg ports respectively. From the inductive hypothesis we get σ(γpre(M₁)) = σ(γpre(letrec Q₁ in x₁)) and σ(γpre(M₂)) = σ(γpre(letrec Q₂ in x₂)). Since σ also removes the indirection node w from h, this shows that g and h simplify to the same graph.
• M ≡ letrec xⱼ = Mⱼ in M. We have ⟦letrec xⱼ = Mⱼ in M⟧pre = (letrec Q, Qⱼ, xⱼ = yⱼ in y) where ⟦Mⱼ⟧pre = (letrec Qⱼ in yⱼ) for each j and ⟦M⟧pre = (letrec Q in y). We now describe the construction of the graphs g = γpre(letrec xⱼ = Mⱼ in M) and h = γpre(⟦letrec xⱼ = Mⱼ in M⟧pre), and then show that g and h simplify to the same graph.

The graph g consists of the graphs γpre(Mⱼ) and γpre(M), where the root of γpre(M) is the root of g, and each free variable occurrence xⱼ in the subgraphs γpre(Mⱼ) and γpre(M) is replaced by the root of the respective graph γpre(Mⱼ).

The graph h represents the term (letrec Q, Qⱼ, xⱼ = yⱼ in y). It consists of the graphs γpre(letrec Qⱼ in yⱼ) and γpre(letrec Q in y), where the root of γpre(letrec Q in y) is the root of h, and each free variable occurrence xⱼ in the graph is replaced by the root of the respective graph γpre(letrec Qⱼ in yⱼ).

From the inductive hypothesis we get σ(γpre(Mⱼ)) = σ(γpre(letrec Qⱼ in yⱼ)) and σ(γpre(M)) = σ(γpre(letrec Q in y)). This shows that g and h simplify to the same graph.
• M ≡ ⟨M₁⟩. Similar to the case for λx.M₁.

• M ≡ ∼ M₁. Similar to the case for λx.M₁.

• M ≡ ! M₁. Similar to the case for M₁ M₂.
2. We now show that for a given N′ ∈ Mpre we have γ(N′) = γ(⟦N′⟧clean). We proceed by induction over the derivation of the judgment ⟦N′⟧clean = N and by a case analysis on the last rule applied.

• ⟦N⟧clean = N. In this case, the claim trivially holds since γ is deterministic.

• ⟦letrec in x⟧clean = x. We have γpre(letrec in x) = γpre(x). Since σ is deterministic, we get γ(letrec in x) = γ(x).

• ⟦letrec y = λz.N′, Q′ in x⟧clean = N₂. The premises are ⟦N′⟧clean = N₁ and ⟦letrec y = λz.N₁, Q′ in x⟧clean = N₂. Applying the inductive hypothesis to these premises yields γ(N′) = γ(N₁) and γ(letrec y = λz.N₁, Q′ in x) = γ(N₂). When constructing a graph g for the term (letrec y = λz.N′, Q′ in x), the function γ recursively constructs a graph for N′ which is then made the subgraph of the lambda node representing the term λz.N′. Since γ(N′) = γ(N₁), it follows that g is the same graph as γ(letrec y = λz.N₁, Q′ in x), which in turn is equal to γ(N₂).

• ⟦letrec y = ⟨N′⟩, Q′ in x⟧clean = N₂. Similar to the third case.

• ⟦letrec y = ∼ N′, Q′ in x⟧clean = N₂. Similar to the third case.

• ⟦letrec y = z, Q′ in x⟧clean = N. The important premise of this rule is ⟦(letrec Q′ in x)[y := z]⟧clean = N. Applying the inductive hypothesis yields γ((letrec Q′ in x)[y := z]) = γ(N). The graph g = γpre(letrec y = z, Q′ in x) has one more indirection node than the graph h = γpre((letrec Q′ in x)[y := z]). This indirection node represents the right-hand side z of the declaration y = z. However, the function σ removes this indirection node, and therefore both g and h simplify to the same graph γ(N).
Lemma 4.6 (Recovery of normal forms) If N ∈ Mnorm then N ≡α τ(γ(N)).
Proof We proceed by structural induction over N.
• N ≡ x. γ(x) is a graph with no nodes and the free variable x as its root. This graph is
translated back by τ to the term x.
• N ≡ letrec Q in x. The function γ constructs a graph g by mapping letrec declarations in Q to nodes in g. The translation τ maps nodes in g back to letrec declarations. We assume that τ associates with any node uz the name xuz, and with any lambda node uz the additional name yuz. By a case analysis on the different possibilities for declarations in Q, we can then show that N ≡α τ(g).
– x = x. This equation is translated by γ to a black hole vx. The black hole is mapped back by τ to a letrec declaration xvx = xvx.

– x = y z. This equation is translated by γ to an application node vx with two incoming edges (sy, vx.fun) and (sz, vx.arg). The edge sources sy and sz are the roots of the subgraphs for y and z respectively. The application node is mapped back by τ to a letrec declaration xvx = name(sy) name(sz). Since the mapping from node names to variable names in τ is deterministic, all occurrences of sy and sz in the resulting term will be mapped to the variables name(sy) and name(sz) respectively. Therefore, the declaration xvx = name(sy) name(sz) is alpha-equivalent to x = y z with respect to both full terms.
– x = λy.N₁. Let U and s be the set of nodes and the root of the graph γ(N₁) respectively. The letrec declaration is translated by γ to a lambda node vx with an incoming edge (s, vx.return) and the set U placed in the scope of vx, i.e. U = contents(vx). The lambda node is mapped back by τ to a letrec declaration xvx = λyvx.mkrec(decl(contents(vx)), name(s)). We observe that mkrec(decl(contents(vx)), name(s)) = mkrec(decl(U), name(s)) = τ(γ(N₁)). Therefore the resulting letrec declaration is xvx = λyvx.τ(γ(N₁)). By induction we have N₁ ≡α τ(γ(N₁)). Furthermore, all references to node vx will be mapped to the variables xvx (normal reference) or yvx (lambda parameter reference) in the resulting term. This establishes that the initial and resulting letrec declarations are alpha-equivalent.
– x = ⟨N₁⟩. Similar to the case for x = λy.N₁.

– x = ∼ N₁. Similar to the case for x = λy.N₁.
– x = ! y. This equation is translated by γ to a Run node vx with an incoming edge (sy, vx.in). The edge source sy is the root of the subgraph for y. The Run node is mapped back by τ to a letrec declaration xvx = ! name(sy). Since all occurrences of sy in the resulting term will be mapped to the variable name(sy), the declaration xvx = ! name(sy) is alpha-equivalent to x = ! y with respect to both full terms.
Lemma 4.7 (Completeness of Normalization) Let M₁, M₂ ∈ M. If γ(M₁) = γ(M₂) then ν(M₁) ≡α ν(M₂).

Proof We proceed in two steps. We first show that γ(M) = g implies ν(M) ≡α τ(g). The main claim then follows trivially.

1. Let γ(M) = g and N = ν(M). By Lemma 4.5 we have γ(M) = γ(ν(M)) and therefore γ(N) = γ(ν(M)) = γ(M) = g. Furthermore, by Lemma 4.6 we have N ≡α τ(γ(N)). It then follows that ν(M) = N ≡α τ(γ(N)) ≡α τ(g).

2. Let γ(M₁) = γ(M₂) = g. From the first part of this proof we know that ν(M₁) ≡α τ(g) and that ν(M₂) ≡α τ(g). Then clearly ν(M₁) ≡α ν(M₂).
Theorem 4.1 (Main) The sets of well-formed graphs and normal forms are one-to-one:

1. If M ∈ M then ν(M) ≡α τ(γ(M)).

2. If g ∈ G then g = γ(τ(g)).

Proof We prove both claims separately.

1. Let N ∈ Mnorm. From Lemma 4.6 we know that N ≡α τ(γ(N)). If we let N = ν(M), then we get ν(M) ≡α τ(γ(ν(M))). From Lemma 4.5 it then follows that ν(M) ≡α τ(γ(M)).
2. We map the graph g to a normal form N by τ which is then mapped to a graph h by
γ pre . We show that σ(h) = g. To this end, we investigate the effects of τ and γ pre .
The translation τ maps any top-level node v in g to a letrec declaration of the form
xv = term(v). The function γ pre maps subterms of N to nodes in h. More precisely, for
every declaration xv = term(v) in N, γ pre creates a node that we call node(xv ). We can
show that after simplification, the graph h only contains the nodes {node(xv ) | v ∈ V}
where each node(xv ) corresponds to exactly one v ∈ V. Furthermore, these nodes are
labeled, connected, and scoped in the same way as the nodes in g. We proceed by a
case analysis on the label of v:
• L(v) = •. The black hole v is mapped by τ to the declaration xv = xv , which
is mapped back by γ pre to an indirection node node(xv ) pointing to itself. The
simplification function σ will change node(xv ) into a black hole.
• L(v) = λ. The well-formedness condition ensures that there is exactly one
edge of the form (s, v.return) in g. The declaration in N for v is then xv =
λyv .mkrec(decl(contents(v)), name(s)). The function γ pre maps this declaration
to a lambda node node(xv ) and an indirection node placed between the image of
s and the return port of node(xv ). The function σ removes this indirection node,
so that there is an edge from the image of s to v.return. All nodes in contents(v)
are mapped by τ to declarations nested inside the lambda term for xv , which are
in turn mapped by γ pre to nodes in contents(node(xv )).
• L(v) = @. The well-formedness condition ensures that there are exactly two edges in g of the form (s₁, v.fun) and (s₂, v.arg). The declaration in N for v is then xv = name(s₁) name(s₂). This declaration is mapped by γpre to an application node node(xv) with an indirection node placed between the image of s₁ and the fun port of node(xv), and with a second indirection node placed between the image of s₂ and the arg port of node(xv). These indirection nodes are removed by σ so that the images of s₁ and s₂ are directly connected to the fun and arg ports of node(xv).

• L(v) = ⟨⟩. Similar to the case for L(v) = λ.

• L(v) = ∼. Similar to the case for L(v) = λ.
• L(v) = !. The well-formedness condition ensures that there is exactly one edge
in g of the form (s, v.in). The declaration in N for v is then xv = ! name(s). This
declaration is mapped by γ pre to a Run node node(xv ) with an indirection node
placed between the image of s and the in port of node(xv ). This indirection node
is removed by σ so that the image of s is directly connected to the in port of
node(xv ).
Lemma 5.1 (Properties of graph validity)

1. Let Mⁿ ∈ Mn and g = γ(Mⁿ). Then ⊢ₙ V for any n ≥ 0.

2. Let g ∈ G with ⊢ₙ V. Then τ(g) ∈ Mn for any n ≥ 0.

Proof We prove the two claims separately.

1. Let Mⁿ ∈ Mn and g = γ(Mⁿ). We show that ⊢ₙ V by structural induction on Mⁿ.

• Mⁿ ≡ x. The graph γ(x) contains no nodes, and we have ⊢ₙ ∅ for any n ≥ 0.
• Mⁿ ≡ λx.M₁ⁿ. Let V₁ be the set of nodes for the graph γ(M₁ⁿ). From the inductive hypothesis we get ⊢ₙ V₁. The graph g = γ(λx.M₁ⁿ) consists of a lambda node v with the nodes in V₁ placed inside the scope of v, i.e. V = V₁ ⊎ {v} and contents(v) = V₁. By the definition of graph validity, L(v) = λ and ⊢ₙ contents(v) then imply ⊢ₙ V.

• Mⁿ ≡ M₁ⁿ M₂ⁿ. Let V₁ and V₂ be the sets of nodes for the graphs γ(M₁ⁿ) and γ(M₂ⁿ) respectively. The graph g = γ(M₁ⁿ M₂ⁿ) consists of an application node v with the roots of γ(M₁ⁿ) and γ(M₂ⁿ) connected to v's fun and arg ports respectively. From the inductive hypothesis we get ⊢ₙ V₁ and ⊢ₙ V₂. This means that ⊢ₙ u for all nodes u in toplevel(V₁) and in toplevel(V₂). By definition of graph validity, L(v) = @ implies ⊢ₙ v. Since V = V₁ ⊎ V₂ ⊎ {v}, this establishes that ⊢ₙ u for all nodes u in toplevel(V), and therefore ⊢ₙ V.

• Mⁿ ≡ letrec xⱼ = Mⱼⁿ in Mⁿ. Let Vⱼ and U be the sets of nodes for the graphs γ(Mⱼⁿ) and γ(Mⁿ) respectively. The graph g = γ(letrec xⱼ = Mⱼⁿ in Mⁿ) consists of all nodes in the sets Vⱼ and U, i.e. V = ⨄ⱼ Vⱼ ⊎ U. From the inductive hypothesis we get ⊢ₙ Vⱼ and ⊢ₙ U. This means that ⊢ₙ u for all nodes u in toplevel(Vⱼ) and in toplevel(U), and therefore ⊢ₙ V.

• Mⁿ ≡ ⟨M₁ⁿ⁺¹⟩. Let V₁ be the set of nodes for the graph γ(M₁ⁿ⁺¹). From the inductive hypothesis we get ⊢ₙ₊₁ V₁. The graph g = γ(⟨M₁ⁿ⁺¹⟩) consists of a Bracket node v with the nodes in V₁ placed inside the scope of v, i.e. V = V₁ ⊎ {v} and contents(v) = V₁. By the definition of graph validity, L(v) = ⟨⟩ and ⊢ₙ₊₁ contents(v) then imply ⊢ₙ V.

• Mⁿ ≡ ∼ M₁ᵏ where n = k + 1. Let V₁ be the set of nodes for the graph γ(M₁ᵏ). From the inductive hypothesis we get ⊢ₖ V₁. The graph g = γ(∼ M₁ᵏ) consists of an Escape node v with the nodes in V₁ placed inside the scope of v, i.e. V = V₁ ⊎ {v} and contents(v) = V₁. By the definition of graph validity, L(v) = ∼ and ⊢ₖ contents(v) then imply ⊢ₖ₊₁ V, and so ⊢ₙ V.

• Mⁿ ≡ ! M₁ⁿ. Let V₁ be the set of nodes for the graph γ(M₁ⁿ). The graph g = γ(! M₁ⁿ) consists of a Run node v with the root of γ(M₁ⁿ) connected to v's in port. From the inductive hypothesis we get ⊢ₙ V₁. This means that ⊢ₙ u for all nodes u in toplevel(V₁). By definition of graph validity, L(v) = ! implies ⊢ₙ v. Since V = V₁ ⊎ {v}, this establishes that ⊢ₙ u for all nodes u in toplevel(V), and therefore ⊢ₙ V.
2. Let g ∈ G with ⊢ₙ V. We first show that for any U ⊆ V, the set of letrec declarations decl(U) contains only right-hand sides in Mn. It then follows that τ(g) = mkrec(decl(V), r) ∈ Mn. We proceed by strong induction on the size of U, and assume that decl(W) contains only right-hand sides in Mn for all W ⊂ U. We also assume that ⊢ₙ U, i.e. ⊢ₙ v for all v ∈ toplevel(U). The set decl(U) contains a declaration xv = term(v) for each such v. By a case analysis on the label of v, we show that term(v) ∈ Mn for each such v.

• L(v) = •. We have term(v) = xv and clearly, xv ∈ Mn.

• L(v) = λ. The well-formedness condition ensures that g contains exactly one incoming edge for v of the form (s, v.return). We then have term(v) = λyv.mkrec(decl(contents(v)), name(s)). We know that contents(v) ⊂ V. Then by the inductive hypothesis we know that all letrec right-hand sides in decl(contents(v)) are in Mn. Together with the fact that name(s) expands to a variable name, this establishes that λyv.mkrec(decl(contents(v)), name(s)) is in Mn.

• L(v) = @. The well-formedness condition ensures that g contains exactly two incoming edges for v. These edges have the form (s₁, v.fun) and (s₂, v.arg). We then have term(v) = name(s₁) name(s₂). We know that name(s₁) and name(s₂) expand to just variable names, and therefore (name(s₁) name(s₂)) ∈ Mn.

• L(v) = ⟨⟩. The well-formedness condition ensures that g contains exactly one incoming edge for v of the form (s, v.return). We then have term(v) = ⟨mkrec(decl(contents(v)), name(s))⟩. We know that contents(v) ⊂ V. Then by the inductive hypothesis we know that all letrec right-hand sides in decl(contents(v)) are in Mn. Together with the fact that name(s) expands to a variable name, this establishes that ⟨mkrec(decl(contents(v)), name(s))⟩ is in Mn.

• L(v) = ∼. Similar to the previous case.

• L(v) = !. The well-formedness condition ensures that g contains exactly one incoming edge for v of the form (s, v.in). We then have term(v) = ! name(s). We know that name(s) expands to just a variable name, and therefore (! name(s)) ∈ Mn.
Theorem 5.1 (Correctness of Graphical Reductions) Let g ∈ G, δ ∈ {β◦, esc, run}, M₁⁰ ∈ M0, and g = γ(M₁⁰).

1. Graph reductions preserve well-formedness: g →δ h implies h ∈ G.

2. Graph reductions are sound: g →δ h implies M₁⁰ →* M₂⁰ →δ M₃⁰ for some M₂⁰, M₃⁰ ∈ M0 such that h = γ(M₃⁰).

3. Graph reductions are complete: M₁⁰ →δ M₂⁰ implies g →δ h for some h ∈ G such that h = γ(M₂⁰).

Proof We prove the three claims separately.
Proof We prove the three claims separately.
1. According to the definition of well-formedness for graphs, we need to consider three
aspects:
• Connectivity: The graph reduction rules avoid dangling edges by first changing
the nodes to be removed into indirection nodes. In the case of β◦, the difference
in arity between application and indirection nodes is reflected in the removal
of the edge from the lambda node to the fun port of the application node. The
graph reductions esc and run maintain correct node arity in a similar fashion.
• Scoping: The first step of all graph reductions (potentially) moves an entire
scope into another scope. Assuming a well-formed original graph, it is clear
that this operation cannot create any edges leaving a scope, which is the key
requirement for correct scoping. The second step of any graph reduction only
removes scopes, and the re-wiring of edges is limited to edges that did not leave
the original scope of the respective redex. Again, it is easy to see that these
operations do not give rise to edges that leave a scope in the graph.
• Root condition: This condition requires that the root of the graph not be nested
inside any scope in the graph. While nodes may be moved into different scopes
in all three graph reduction rules, it is not possible for the root of the graph
to be moved in this way: if the lambda node of a β◦-redex is also the root of
the graph, then the edge from the lambda node to the application node is not
unique. Therefore, the lambda node will be copied, and only the copy will be
moved inside the scope of the application node, thereby leaving the root of the
graph unchanged. Similar reasoning applies to the graph reductions esc and
run.
2. We consider each case of δ separately.
• δ = β◦. Let g′ be the result of the first step of the β◦-reduction on g. We can show that M₁⁰ can be reduced to a term M₂⁰ with γ(M₂⁰) = g′ such that the β◦-redex in M₂⁰ is of the form (λx.M₄⁰) M₅⁰. From the third part of this theorem it then follows that reducing the subterm (λx.M₄⁰) M₅⁰ to (letrec x = M₅⁰ in M₄⁰) yields a term M₃⁰ with h = γ(M₃⁰). To this end, we need to consider two cases for the term M₁⁰.

– The redex in g corresponds to a subterm of M₁⁰ of the form

(letrec Dₘⁿ in (...(letrec D₁ⁿ in λx.M₄⁰)...)) M₅⁰.

This subterm can be reduced by m lift-steps to

letrec Dₘⁿ in (...(letrec D₁ⁿ in (λx.M₄⁰) M₅⁰)...).

This subterm of M₂⁰ now contains the exposed redex (λx.M₄⁰) M₅⁰. Therefore, M₂⁰ corresponds to g′, where the reference to the lambda node being applied is unique, and the lambda and application nodes are in the same scope.

– The redex in g corresponds to a subterm of M₁⁰ of the form

(letrec D_{m₁}ⁿ in (...(letrec D₁ⁿ in x₁)...)) M₅⁰.

Furthermore, M₁⁰ contains the declarations

x₁ = letrec E_{m₂}ⁿ in (...(letrec E₁ⁿ in x₂)...)

through

xₘ = letrec F_{mₘ}ⁿ in (...(letrec F₁ⁿ in λx.M₄⁰)...).

These declarations are placed in M₁⁰ so that the variables x₁, ..., xₘ are visible to the redex. By m₁ applications of the lift reduction rules, we can reduce the redex to (letrec D_{m₁}ⁿ in (...(letrec D₁ⁿ in x₁ M₅⁰)...)). Furthermore, the declarations for x₁, ..., xₘ can be reduced by merge to

x₁ = x₂, E₁ⁿ, ..., E_{m₂}ⁿ, ..., xₘ = λx.M₄⁰, F₁ⁿ, ..., F_{mₘ}ⁿ.

By the definition of the β◦-redex in the graph g, the level of the lambda node is no greater than the level of the application node. This means that in M₁⁰, the occurrence of x₁ in the redex is at a level no less than the level of the definitions of the variables x₁, ..., xₘ. Therefore, we can substitute variables using sub until we have the subterm xₘ M₅⁰. The term obtained in this way still represents g. We can then perform one more substitution using sub, yielding a term with the subterm (λx.M₄⁰) M₅⁰. This reduction corresponds to moving the lambda node into the same scope as the application node. If going from g to g′ did require copying the lambda node, then the term with the subterm (λx.M₄⁰) M₅⁰ precisely represents g′, and we are done. If the lambda node did not need to be copied, then the nodes corresponding to the declaration xₘ = λx.M₄⁰ are now garbage in the corresponding graph. We therefore need to use gc to garbage collect all unreferenced declarations in the set

x₁ = x₂, E₁ⁿ, ..., E_{m₂}ⁿ, ..., xₘ = λx.M₄⁰, F₁ⁿ, ..., F_{mₘ}ⁿ.

The resulting term is then M₂⁰, representing g′.
• δ = esc. We proceed similarly to the case for β◦. The main difference is the use of the lift rule for Brackets instead of the lift rule for applications to expose the Escape redex in the term M₁⁰.

• δ = run. We proceed similarly to the case for β◦. The main difference is the use of the lift rule for Brackets instead of the lift rule for applications to expose the Run redex in the term M₁⁰.
3. We consider each case of δ separately.
• δ = β◦. If M₁⁰ →β◦ M₂⁰, then we know that M₁⁰ contains a subterm of the form (λx.M₃⁰) M₄⁰. In the graph g = γ(M₁⁰), this corresponds to a lambda node and an application node that are in the same scope, with the lambda node's out port connected to the application node's fun port. From Lemma 5.1 we know that the contents of the lambda node, and the set of nodes in g that reach the application node and are in the same scope, are valid at level 0. This establishes the fact that g contains a β◦-redex. Since the lambda and application nodes are already in the same scope, reducing this redex in g only involves the second step of the beta-reduction for graphs: we remove the lambda and application nodes, and replace edges from the lambda's bind port by edges originating from the application's argument. This corresponds directly to the subterm letrec x = M₄⁰ in M₃⁰, which is the result of reducing (λx.M₃⁰) M₄⁰.

• δ = esc. We proceed similarly to the case for β◦. The term reduction involves a redex of the form ∼ ⟨M′⟩, which corresponds at the graph level to a Bracket node nested inside an Escape node. From Lemma 5.1 we know that the contents of the Bracket node are valid at level 0, and therefore that the graph contains an esc-redex. The reduction of the redex in the graph simply removes the Bracket and Escape nodes, keeping only the contents of the Bracket node. This corresponds to the subterm M′, which is the result of the term reduction.

• δ = run. We proceed similarly to the case for β◦. The term reduction involves a redex of the form ! ⟨M′⟩, which corresponds at the graph level to a Bracket node connected to the in port of a Run node. Furthermore, the Bracket and Run nodes are in the same scope. From Lemma 5.1 we know that the contents of the Bracket node are valid at level 0, and therefore that the graph contains a run-redex. The reduction of the redex in the graph simply removes the Bracket and Run nodes, keeping only the contents of the Bracket node. This corresponds to the subterm M′, which is the result of the term reduction.
Bibliography
[1] Z. M. Ariola and S. Blom. Cyclic lambda calculi. Lecture Notes in Computer Science,
1281:77, 1997.
[2] M. Burnett, J. Atwood, R. Walpole Djang, H. Gottfried, J. Reichwein, and S. Yang.
Forms/3: A first-order visual language to explore the boundaries of the spreadsheet
paradigm. Journal of Functional Programming, 11:155–206, March 2001.
[3] W. Citrin, M. Doherty, and B. Zorn. Formal semantics of control in a completely
visual programming language. In Allen L. Ambler and Takayuki Dan Kimura, editors,
Proceedings of the Symposium on Visual Languages, pages 208–215, Los Alamitos,
CA, USA, October 1994. IEEE Computer Society Press.
[4] W. Citrin, R. Hall, and B. Zorn. Programming with visual expressions. In Volker
Haarslev, editor, Proc. 11th IEEE Int. Symp. Visual Languages, pages 294–301. IEEE
Computer Society Press, 5–9 September 1995.
[5] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to algorithms. MIT
Press and McGraw-Hill Book Company, 14th edition, 1994.
[6] M. Erwig. Abstract syntax and semantics of visual languages. Journal of Visual Languages and Computing, 9:461–483, October 1998.
[7] National Instruments. LabVIEW Student Edition 6i. Prentice Hall, 2001.
[8] S. Peyton Jones, A. Blackwell, and M. Burnett. A user-centred approach to functions
in Excel. ICFP, pages 165–176, 2003.
[9] Oleg Kiselyov, Kedar N. Swadi, and Walid Taha. A methodology for verifiable generation of combinatorial circuits. Submitted for publication, April 2004.
[10] Edward A. Lee. What’s ahead for embedded software? IEEE Computer, pages 18–
26, September 2000.
[11] MetaOCaml: A compiled, type-safe multi-stage programming language. Available
online from http://www.metaocaml.org/, 2003.
[12] National Instruments. LabVIEW. Online at http://www.ni.com/labview.
[13] Oregon Graduate Institute Technical Reports. P.O. Box 91000, Portland, OR 97291-1000, USA. Available online from ftp://cse.ogi.edu/pub/tech-reports/README.html.
[14] Walid Taha. Multi-Stage Programming: Its Theory and Applications. PhD thesis,
Oregon Graduate Institute of Science and Technology, 1999. Available from [13].
[15] Walid Taha. A gentle introduction to multi-stage programming. In Don Batory,
Charles Consel, Christian Lengauer, and Martin Odersky, editors, Dagstuhl Workshop
on Domain-specific Program Generation, LNCS. 2004. To appear.
[16] Walid Taha, Stephan Ellner, and Hongwei Xi. Generating Imperative, Heap-Bounded
Programs in a Functional Setting. In Proceedings of the Third International Conference on Embedded Software, Philadelphia, PA, October 2003.
[17] Walid Taha, Paul Hudak, and Zhanyong Wan. Directions in functional programming
for real(-time) applications. In the International Workshop on Embedded Software
(EMSOFT ’01), volume 221 of Lecture Notes in Computer Science, pages 185–203,
Lake Tahoe, 2001. Springer-Verlag.
[18] The MathWorks. Simulink. Online at http://www.mathworks.com/products/simulink.