CHAPTER SEVEN: SCOPE AMBIGUITIES, MOVEMENT AND BINDING.
We will now add a mechanism for creating scope ambiguities. Following Cooper 1975,
1983, the mechanism we will use is a storage mechanism. We will first introduce
the basic mechanism and show how it works in several examples.
After that we generalize the discussion, and discuss syntactic and semantic dislocation, scope
and movement, and scope and binding.
7.1. Basic storage.
In this section we will introduce just the basic idea of storage, and we won't worry about the
precise details of the implementation in the grammar. We'll do some of that worrying in
later subsections. So a warning: what we'll do in this section is going to change a bit in
further sections.
Up to now our grammar generates pairs of syntactic structures and semantic representations.
We will now extend this.
The grammar will now generate triples <A,A',S> where:
-A is a syntactic tree.
-A', the translation of A, is an IL expression.
-S, the quantifier store, is a set of pairs <n,α> where n is an index and α an IL
expression.
The quantifier store will be used for creating scope ambiguities.
We will assume that for lexical entries the quantifier store is empty.
We assume that stores are inherited upward in the following way.
For unary branching, both the meaning and the store inherit up:

   <A, B', SB>
        │
        B

For binary branching, the stores corresponding to the daughters are united:

   <A, APPLY[f,a], SB ∪ SC>
        /      \
       B        C
We now introduce two operations, STOREn and RETRIEVEn.
For the moment, we define them for triples consisting of a syntactic tree, its interpretation,
and its store:
STOREn[<DP,DP',SDP>] = <DP, xn, SDP ∪ {<n,DP'>}>
STOREn replaces the interpretation DP' of the DP by a variable xn (of type e), and stores the
interpretation DP' under index n in the quantifier store. Thus, STOREn tells you to use a
variable xn for the time being as the interpretation of the DP, instead of the translation that
we have just stored.
Example:
STORE1[ < [DP [D every] [NP boy]] , λP.∀x[BOY(x) → P(x)] , Ø > ] =
< [DP [D every] [NP boy]] , x1 , {<1, λP.∀x[BOY(x) → P(x)]>} >
Our second new rule is quantifying in, RETRIEVEn.
Let <IP,IP',SIP> be a triple, with IP a syntactic tree, IP' its interpretation, and SIP the
corresponding store.
Let <n,α> ∈ SIP, and assume that no other interpretation is stored under index n in
SIP.
RETRIEVEn[<IP,IP',SIP>] = <IP, APPLY[λxn.IP',α], SIP−{<n,α>}>
RETRIEVEn turns the IP meaning IP' of type t into a predicate λxn.IP' of type <e,t>, by
abstracting over the variable xn of type e.
It takes the DP meaning α, which was stored under index n, out of the quantifier store SIP, and
it applies the predicate λxn.IP' to α with APPLY.
We assume that we always want to end up with an empty store.
With this mechanism, we are able to derive translations of sentences where the meaning of
an NP takes wider semantic scope than its syntactic scope.
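The storage mechanism is not given in code in the text, but it can be sketched model-theoretically in Python. In this illustration (my own, with an invented finite model GIRLS/BOYS/KISS), quantifier meanings of type <<e,t>,t> are functions from predicates to truth values, and RETRIEVEn amounts to APPLY[λxn.IP',α]: the stored DP meaning α applied to the abstracted IP meaning.

```python
# A minimal model-theoretic sketch of Cooper storage (an illustration, not the
# text's formal system). Quantifier meanings take a predicate P and return a
# truth value; RETRIEVE applies the stored quantifier to λxn.IP'.

GIRLS = {"g1", "g2"}
BOYS = {"b1", "b2"}
KISS = {("g1", "b1"), ("g2", "b2")}  # each girl kisses a different boy

every_girl = lambda P: all(P(u) for u in GIRLS)   # λP.∀u[GIRL(u) → P(u)]
a_boy      = lambda P: any(P(z) for z in BOYS)    # λP.∃z[BOY(z) ∧ P(z)]
kiss       = lambda x, z: (x, z) in KISS

def retrieve(alpha, body):
    # RETRIEVEn: apply the stored DP meaning alpha to λxn.IP'
    return alpha(body)

# Derivation 2 below: store every girl as x1, build ∃z[BOY(z) ∧ KISS(x1,z)],
# then retrieve:
surface = retrieve(every_girl, lambda x1: a_boy(lambda z: kiss(x1, z)))

# Derivation 3 below: store a boy as x2, build ∀u[GIRL(u) → KISS(u,x2)],
# then retrieve:
inverse = retrieve(a_boy, lambda x2: every_girl(lambda u: kiss(u, x2)))

print(surface, inverse)  # True False: the two scopings differ in this model
```

In this model every girl kisses some boy or other, but no single boy is kissed by every girl, so the surface-scope reading is true and the inverse-scope reading is false.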
EXAMPLES
(1)
Every girl kisses a boy.
I will use this example to demonstrate the effects of the scope mechanism. This sentence has
five derivations. I will go through all five, and show that as a result the sentence
will be generated by the grammar with two meanings: one where the meaning of every girl
takes scope over the meaning of a boy, and one where the meaning of a boy takes scope over
the meaning of every girl.
All derivations start out with the same structure, interpretation and store for every girl and
a boy:
< [DP [D every] [NP girl]] , λP.∀u[GIRL(u) → P(u)] , Ø >
< [DP [D a] [NP boy]] , λP.∃z[BOY(z) ∧ P(z)] , Ø >
(1)
Every girl kisses a boy.
DERIVATION 1
From the lexical item kiss we generate:
< [V kiss] , KISS , Ø >
The V takes the DP as a complement, we apply the V interpretation to the interpretation of
[DP a boy], and we unite the stores (giving Ø). This gives:
< [VP [V kiss] [DP [D a] [NP boy]]] , APPLY[KISS, λP.∃z[BOY(z) ∧ P(z)]] , Ø >
APPLY[KISS, λP.∃z[BOY(z) ∧ P(z)]]
= LIFT[KISS](λP.∃z[BOY(z) ∧ P(z)])   [def APPLY]
= λTλx.T(λy.[KISS(y)](x)) (λP.∃z[BOY(z) ∧ P(z)])   [def LIFT]
= λx.[λP.∃z[BOY(z) ∧ P(z)]](λy.[KISS(y)](x))   [λ-con]
= λx.∃z[BOY(z) ∧ [λy.[KISS(y)](x)](z)]   [λ-con]
= λx.∃z[BOY(z) ∧ [KISS(z)](x)]   [λ-con]
= λx.∃z[BOY(z) ∧ KISS(x,z)]   [rel notation]
So we get:
< [VP [V kiss] [DP [D a] [NP boy]]] , λx.∃z[BOY(z) ∧ KISS(x,z)] , Ø >
We have: < [I Ø] , λP.P , Ø >
The I takes the VP as a complement, and we get:
< [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]] , λx.∃z[BOY(z) ∧ KISS(x,z)] , Ø >
The I' takes [DP every girl] as a specifier, and we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[λx.∃z[BOY(z) ∧ KISS(x,z)], λP.∀u[GIRL(u) → P(u)]] , Ø >
APPLY[λx.∃z[BOY(z) ∧ KISS(x,z)], λP.∀u[GIRL(u) → P(u)]]
= LIFT[λx.∃z[BOY(z) ∧ KISS(x,z)]](λP.∀u[GIRL(u) → P(u)])   [def APPLY]
= λT.T(λx.∃z[BOY(z) ∧ KISS(x,z)]) (λP.∀u[GIRL(u) → P(u)])   [def LIFT]
= λP.∀u[GIRL(u) → P(u)] (λx.∃z[BOY(z) ∧ KISS(x,z)])   [λ-con]
= ∀u[GIRL(u) → [λx.∃z[BOY(z) ∧ KISS(x,z)]](u)]   [λ-con]
= ∀u[GIRL(u) → ∃z[BOY(z) ∧ KISS(u,z)]]   [λ-con]
So we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  ∀u[GIRL(u) → ∃z[BOY(z) ∧ KISS(u,z)]] , Ø >
(1)
Every girl kisses a boy.
DERIVATION 2
We build up the same I' as under derivation 1:
< [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]] , λx.∃z[BOY(z) ∧ KISS(x,z)] , Ø >
We apply STORE1 to every girl:
STORE1[ < [DP [D every] [NP girl]] , λP.∀u[GIRL(u) → P(u)] , Ø > ] =
< [DP [D every] [NP girl]] , x1 , {<1, λP.∀u[GIRL(u) → P(u)]>} >
The I' takes this DP as a specifier, applies the I' interpretation to the interpretation of the DP,
which is x1, and unites the stores. We get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[λx.∃z[BOY(z) ∧ KISS(x,z)], x1] , {<1, λP.∀u[GIRL(u) → P(u)]>} >
APPLY[λx.∃z[BOY(z) ∧ KISS(x,z)], x1]
= λx.∃z[BOY(z) ∧ KISS(x,z)] (x1)   [def APPLY]
= ∃z[BOY(z) ∧ KISS(x1,z)]   [λ-con]
So we get:
IP , z[BOY(z)  KISS(x1,z)], {<1,λPu[GIRL(u)  P(u)]>}>
<
DP
I'
D
NP I
VP
│
│ │
every girl Ø V
DP
│
kiss D
NP
│
│
a
boy
To this we apply RETRIEVE1, which gives:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[λx1.∃z[BOY(z) ∧ KISS(x1,z)], λP.∀u[GIRL(u) → P(u)]] , Ø >
APPLY[λx1.∃z[BOY(z) ∧ KISS(x1,z)], λP.∀u[GIRL(u) → P(u)]]
= LIFT[λx1.∃z[BOY(z) ∧ KISS(x1,z)]](λP.∀u[GIRL(u) → P(u)])   [def APPLY]
= λT.T(λx1.∃z[BOY(z) ∧ KISS(x1,z)]) (λP.∀u[GIRL(u) → P(u)])   [def LIFT]
= λP.∀u[GIRL(u) → P(u)] (λx1.∃z[BOY(z) ∧ KISS(x1,z)])   [λ-con]
= ∀u[GIRL(u) → [λx1.∃z[BOY(z) ∧ KISS(x1,z)]](u)]   [λ-con]
= ∀u[GIRL(u) → ∃z[BOY(z) ∧ KISS(u,z)]]   [λ-con]
So we get the same as in derivation 1:
IP , u[GIRL(u)  z[BOY(z)  KISS(u,z)]],Ø >
<
DP
I'
D
NP I
VP
│
│ │
every girl Ø V
DP
│
kiss D
NP
│
│
a
boy
(1)
Every girl kisses a boy.
DERIVATION 3
This time we start by applying STORE2 to a boy and get:
< [DP [D a] [NP boy]] , x2 , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
The V kiss takes this as a complement, and we get:
< [VP [V kiss] [DP [D a] [NP boy]]] , APPLY[KISS, x2] , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
APPLY[KISS, x2]
= KISS(x2)   [def APPLY]
So we get:
< [VP [V kiss] [DP [D a] [NP boy]]] , KISS(x2) , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
The empty I takes this as a complement, and we get:
< [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]] , KISS(x2) , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
This I' takes every girl as a specifier, and we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[KISS(x2), λP.∀u[GIRL(u) → P(u)]] , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
APPLY[KISS(x2), λP.∀u[GIRL(u) → P(u)]]
= LIFT[KISS(x2)](λP.∀u[GIRL(u) → P(u)])   [def APPLY]
= λT.T(KISS(x2)) (λP.∀u[GIRL(u) → P(u)])   [def LIFT]
= λP.∀u[GIRL(u) → P(u)] (KISS(x2))   [λ-con]
= ∀u[GIRL(u) → [KISS(x2)](u)]   [λ-con]
= ∀u[GIRL(u) → KISS(u,x2)]   [rel notation]
So we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  ∀u[GIRL(u) → KISS(u,x2)] , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
To this we apply RETRIEVE2 and we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[λx2.∀u[GIRL(u) → KISS(u,x2)], λP.∃z[BOY(z) ∧ P(z)]] , Ø >
APPLY[λx2.∀u[GIRL(u) → KISS(u,x2)], λP.∃z[BOY(z) ∧ P(z)]]
= LIFT[λx2.∀u[GIRL(u) → KISS(u,x2)]](λP.∃z[BOY(z) ∧ P(z)])   [def APPLY]
= λT.T(λx2.∀u[GIRL(u) → KISS(u,x2)])(λP.∃z[BOY(z) ∧ P(z)])   [def LIFT]
= λP.∃z[BOY(z) ∧ P(z)] (λx2.∀u[GIRL(u) → KISS(u,x2)])   [λ-con]
= ∃z[BOY(z) ∧ [λx2.∀u[GIRL(u) → KISS(u,x2)]](z)]   [λ-con]
= ∃z[BOY(z) ∧ ∀u[GIRL(u) → KISS(u,z)]]   [λ-con]
So we get:
IP , z[BOY(z)  u[GIRL(u)  KISS(u,z)]] ,Ø >
<
DP
I'
D
NP I
VP
│
│ │
every girl Ø V
DP
│
kiss D
NP
│
│
a
boy
184
184
[def APPLY]
[def LIFT]
[λ con]
[λ-con]
[λ-con]
This time the derivation produces the sentence with a different meaning: the object NP takes
wide scope over the subject NP.
(1)
Every girl kisses a boy.
DERIVATIONS 4 and 5
In both derivations 4 and 5 we apply STORE1 to every girl and STORE2 to a boy, so these
derivations use:
< [DP [D every] [NP girl]] , x1 , {<1, λP.∀u[GIRL(u) → P(u)]>} >
and
< [DP [D a] [NP boy]] , x2 , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
DERIVATION 4
We build up the I' kiss a boy in the same way as in derivation 3 and get:
< [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]] , KISS(x2) , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
This I' takes every girl as a specifier, and we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[KISS(x2), x1] , {<1, λP.∀u[GIRL(u) → P(u)]>, <2, λP.∃z[BOY(z) ∧ P(z)]>} >
APPLY[KISS(x2), x1]
= [KISS(x2)](x1)   [def APPLY]
= KISS(x1,x2)   [rel notation]
So we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  KISS(x1,x2) , {<1, λP.∀u[GIRL(u) → P(u)]>, <2, λP.∃z[BOY(z) ∧ P(z)]>} >
To this we apply RETRIEVE2 and we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[λx2.KISS(x1,x2), λP.∃z[BOY(z) ∧ P(z)]] , {<1, λP.∀u[GIRL(u) → P(u)]>} >
APPLY[λx2.KISS(x1,x2), λP.∃z[BOY(z) ∧ P(z)]]
= LIFT[λx2.KISS(x1,x2)](λP.∃z[BOY(z) ∧ P(z)])   [def APPLY]
= λT.T(λx2.KISS(x1,x2))(λP.∃z[BOY(z) ∧ P(z)])   [def LIFT]
= λP.∃z[BOY(z) ∧ P(z)] (λx2.KISS(x1,x2))   [λ-con]
= ∃z[BOY(z) ∧ [λx2.KISS(x1,x2)](z)]   [λ-con]
= ∃z[BOY(z) ∧ KISS(x1,z)]   [λ-con]
So we get:
IP , z[BOY(z)  KISS(x1,z)], {<1,λPu[GIRL(u)  P(u)]>} >
<
DP
I'
D
NP I
VP
│
│ │
every girl Ø V
DP
│
kiss D
NP
│
│
a
boy
At this point we have reached the same stage as in derivation 2; RETRIEVE1 will
produce, after reduction, the same result as derivation 1:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  ∀u[GIRL(u) → ∃z[BOY(z) ∧ KISS(u,z)]] , Ø >
DERIVATION 5
Derivation 5 proceeds in exactly the same way as derivation 4 and produces:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  KISS(x1,x2) , {<1, λP.∀u[GIRL(u) → P(u)]>, <2, λP.∃z[BOY(z) ∧ P(z)]>} >
This time, instead of applying RETRIEVE2 we apply RETRIEVE1, and get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  APPLY[λx1.KISS(x1,x2), λP.∀u[GIRL(u) → P(u)]] , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
APPLY[λx1.KISS(x1,x2), λP.∀u[GIRL(u) → P(u)]]
= LIFT[λx1.KISS(x1,x2)](λP.∀u[GIRL(u) → P(u)])   [def APPLY]
= λT.T(λx1.KISS(x1,x2))(λP.∀u[GIRL(u) → P(u)])   [def LIFT]
= λP.∀u[GIRL(u) → P(u)] (λx1.KISS(x1,x2))   [λ-con]
= ∀u[GIRL(u) → [λx1.KISS(x1,x2)](u)]   [λ-con]
= ∀u[GIRL(u) → KISS(u,x2)]   [λ-con]
So we get:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  ∀u[GIRL(u) → KISS(u,x2)] , {<2, λP.∃z[BOY(z) ∧ P(z)]>} >
We have now joined derivation 3; applying RETRIEVE2 gives, after reduction:
< [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP boy]]]]] ,
  ∃z[BOY(z) ∧ ∀u[GIRL(u) → KISS(u,z)]] , Ø >
We conclude that we derive every girl kisses a boy with two meanings: for every girl there is
a boy that she kisses, and: there is a boy such that every girl kisses him.
7.2. Dislocation in the syntax and in the semantics.
The X-bar theory I have introduced provides us with a theory of syntactic trees. For the
present discussion it is useful to give the semantic interpretations associated with these
syntactic trees the form of a tree as well. This is done along the following lines:
Let:  := λQλP.x[Q(x)  P(x)]
 : = λQλP.x[Q(x)  P(x)]
A := APPLY
[A α, β ] := APPLY[α,β]
IP
A
DP
I'
A
A
D
NP I
VP
λP.P
A  BOY
│
│ │
every boy Ø V
DP
KISS
A
│
kiss D
NP

GIRL
│
│
a
girl
Thus, the semantic tree consists of the meanings of the basic expressions at the leaves, and
the semantic operations labeling the non-leaves, under the convention that the function is on
the left branch.
Now, we are interested in two kinds of dislocation: syntactic dislocation and semantic
dislocation:
Syntactic dislocation: e.g. topicalization:
the topic is sitting at the top of the syntactic tree while its interpretation is (or can be) sitting
in its normal place down in the semantic tree:

SYNTAX:    [TopP [DP [D a] [NP girl]] [Top' [Topn Ø] [IP [DP [D every] [NP boy]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]]]]
SEMANTICS: [A [A [λP.P] [A KISS [A ∃ GIRL]]] [A ∀ BOY]]
Semantic dislocation: e.g. inverse scope:
the meaning of a girl is sitting at the top of the semantic tree while the DP a girl is sitting in
its normal place down in the syntactic tree:
Let: [λ xn, β] := λxn.β

SYNTAX:    [IP [DP [D every] [NP boy]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP girl]]]]]
SEMANTICS: [A [λ xn [A [A [λP.P] [A KISS xn]] [A ∀ BOY]]] [A ∃ GIRL]]

When you calculate the meaning, you get:
APPLY[λxn.∀x[BOY(x) → KISS(x,xn)], λP.∃y[GIRL(y) ∧ P(y)]]
= ∃y[GIRL(y) ∧ ∀x[BOY(x) → KISS(x,y)]]
For each of these two dislocation situations there are three strategies to deal with them:
1. Syntactic dislocation: a syntactic expression is sitting at the top of the syntactic tree,
while its meaning is sitting at the bottom of the semantic tree:
STRATEGY 1.1:
Generate both [DP a girl] and its meaning A[∃,GIRL] at the bottom of the respective trees,
and move [DP a girl] up to the top of the syntactic tree.
This is syntactic movement.
STRATEGY 1.2:
Generate both [DP a girl] and its meaning A[∃,GIRL] at the top of the respective trees, and
move the meaning A[∃,GIRL] down to the bottom of the semantic tree.
This is what is called reconstruction.
STRATEGY 1.3:
Generate both [DP a girl] and its meaning A[∃,GIRL] at the top of the respective trees, but
assume a different semantic tree, which will have the effect of movement, without
movement.
This is a non-movement function composition analysis (e.g. GPSG, Categorial Grammar,
Jacobson):
SYNTAX:    [TopP [DP [D a] [NP girl]] [Top' [Topn Ø] [IP [DP [D every] [NP boy]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]]]]
SEMANTICS: [A [LC [RC [λP.P] [RC KISS [λT.T]]] [A ∀ BOY]] [A ∃ GIRL]]

Here: [RC α, β] := RC[α,β] and RC is right function composition:
Right Function Composition: RC[α,β] = λx.APPLY[α,β(x)]
[LC α, β] := LC[α,β] and LC is left function composition:
Left Function Composition: LC[α,β] = λx.APPLY[α(x),β]

RC[KISS, λT.T] = λT.APPLY[KISS, [λT.T](T)]
= λTλx.T(λy.KISS(x,y))
RC[λP.P, λTλx.T(λy.KISS(x,y))] = λT.APPLY[λP.P, [λTλx.T(λy.KISS(x,y))](T)]
= λTλx.T(λy.KISS(x,y))
LC[λTλx.T(λy.KISS(x,y)), λP.∀x[BOY(x) → P(x)]]
= λT.APPLY[λx.T(λy.KISS(x,y)), λP.∀x[BOY(x) → P(x)]]
= λT.∀x[BOY(x) → T(λy.KISS(x,y))]
APPLY[λT.∀x[BOY(x) → T(λy.KISS(x,y))], λP.∃y[GIRL(y) ∧ P(y)]]
= ∀x[BOY(x) → ∃y[GIRL(y) ∧ KISS(x,y)]]
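As a sketch (my own simplification, not the text's), the function-composition strategy can be run in Python over a small invented model: the trace denotes the identity function, RC threads the missing argument T through the lifted verb, and LC feeds the <e,t> result into the subject quantifier, so the topic's meaning can be supplied at the top without ever occupying the object position. The vacuous λP.P step at I is omitted.

```python
# Function-composition sketch: RC[α,β] = λx.APPLY[α,β(x)],
# LC[α,β] = λx.APPLY[α(x),β] (with the <e,t> result lifted into β).

GIRLS, BOYS = {"g1", "g2"}, {"b1", "b2"}
KISSING = {(x, y) for x in BOYS for y in GIRLS}  # every boy kisses every girl

a_girl    = lambda P: any(P(y) for y in GIRLS)   # λP.∃y[GIRL(y) ∧ P(y)]
every_boy = lambda P: all(P(x) for x in BOYS)    # λP.∀x[BOY(x) → P(x)]
kiss      = lambda T: lambda x: T(lambda y: (x, y) in KISSING)  # LIFT[KISS]

trace = lambda T: T                  # the trace meaning: identity λT.T
RC = lambda a, b: lambda x: a(b(x))  # right function composition
LC = lambda a, b: lambda x: b(a(x))  # left composition, APPLY lifting a(x) into b

vp  = RC(kiss, trace)                # λTλx.T(λy.KISS(x,y))
ip  = LC(vp, every_boy)              # λT.∀x[BOY(x) → T(λy.KISS(x,y))]
top = ip(a_girl)                     # feed the topic a girl in at the top
print(top)                           # True in this model
```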
2. Semantic dislocation: a meaning is sitting at the top of the semantic tree, while the
syntactic expression of which it is the meaning is sitting at the bottom of the syntactic tree:
STRATEGY 2.1:
Generate both [DP a girl] and its meaning A[∃,GIRL] at the bottom of the respective trees,
and move A[∃,GIRL] up to the top of the semantic tree.
This is semantic movement: Cooper's storage mechanism, and also Quantifier Raising.
STRATEGY 2.2:
Generate both [DP a girl] and its meaning A[∃,GIRL] at the top of the respective trees, and
move [DP a girl] down to the bottom of the syntactic tree. An example of this strategy is
(in essence) Montague's original quantifying-in rule.
STRATEGY 2.3:
Generate both [DP a girl] and its meaning A[∃,GIRL] at the bottom of the respective trees, but
assume a different semantic tree, which will have the effect of movement, without
movement.
This is a non-movement type shifting analysis (Hendriks in Categorial Grammar):
SYNTAX:    [IP [DP [D every] [NP boy]] [I' [I Ø] [VP [V kiss] [DP [D a] [NP girl]]]]]
SEMANTICS: [A [A [λP.P] [A INV[KISS] [A ∃ GIRL]]] [A ∀ BOY]]

Here: INV[α] = λTλU.T(λy.U(λx.α(x,y)))
This means that you shift the verb meaning to a function which puts the arguments it
receives in inverse scope order:
INV[KISS] = λTλU.T(λy.U(λx.KISS(x,y)))
APPLY[INV[KISS], λP.∃y[GIRL(y) ∧ P(y)]]
= λU.∃y[GIRL(y) ∧ U(λx.KISS(x,y))]
λP.P stands for the identity function at the required type (which is a higher type in this case).
We get:
APPLY[λU.∃y[GIRL(y) ∧ U(λx.KISS(x,y))], λP.∀x[BOY(x) → P(x)]]
= ∃y[GIRL(y) ∧ ∀x[BOY(x) → KISS(x,y)]]
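The INV shift can be sketched in the same model-theoretic style (my illustration; the model is invented). INV[KISS] takes its object quantifier first and its subject quantifier second, which yields the inverse-scope reading with everything in situ:

```python
# INV[α] = λTλU.T(λy.U(λx.α(x,y))): the shifted verb reorders its two
# quantifier arguments, giving the object wide scope.

GIRLS, BOYS = {"g1", "g2"}, {"b1", "b2"}
KISS = {("b1", "g1"), ("b2", "g1")}  # both boys kiss girl g1 only

a_girl    = lambda P: any(P(y) for y in GIRLS)
every_boy = lambda P: all(P(x) for x in BOYS)
kiss      = lambda x, y: (x, y) in KISS

INV = lambda rel: lambda T: lambda U: T(lambda y: U(lambda x: rel(x, y)))

vp = INV(kiss)(a_girl)    # λU.∃y[GIRL(y) ∧ U(λx.KISS(x,y))]
inverse = vp(every_boy)   # ∃y[GIRL(y) ∧ ∀x[BOY(x) → KISS(x,y)]]

# compare the surface-scope reading ∀x[BOY(x) → ∃y[GIRL(y) ∧ KISS(x,y)]]:
surface = every_boy(lambda x: a_girl(lambda y: kiss(x, y)))

print(inverse, surface)   # both True here; with KISS = {("b1","g1"),
                          # ("b2","g2")} surface stays True, inverse fails
```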
7.3. Movement in the syntax and in the semantics.
I am now concerned with the parallels between storage and syntactic movement. To show
the parallels, I will formulate a storage approach to syntactic movement as well.
Let us suppose we have a theory T which allows a certain kind of tree. We add to T a
storage mechanism: we assign to every node of every tree a store, which is a set of pairs
of the form <n,α>, where n is an index and α is a tree.
We assume that stores are inherited up in the tree by union of stores:

    A,S             C,S1 ∪ S2
     │               /     \
    B,S           A,S1     B,S2
But we will assume that theory T may contain bounding constraints, which may block
stored information from inheriting upwards.
The core of the storage mechanism consists of two operations, STOREn and RETRIEVEn.
I will simplify things by not assuming incremental storage.
Let <A,Ø> be a tree with top node A,Ø, and n an index.
STOREn[<A,Ø>] = <xn, {<n,A>}>
STOREn replaces tree <A,Ø> by a variable xn, with tree A stored under index n.
We assume that theory T will determine constraints on this: in particular, what trees can be
stored, and, importantly, what counts as the variable xn in the theory.
Let <B,S> be a tree with top node B,S, and let <n,A> ∈ S be the only item in S stored
under index n.

RETRIEVEn[<B,S>] =        OP, S−{<n,A>}
                          /          \
                        A,Ø       O',<S,n>
                                   /     \
                                On,Ø    B,S

So, RETRIEVEn creates (or requires) a structure in which B has an operator On as a sister.
At the mother, S inherits up, but as part of a pair <S,n>. This we use to trigger retrieval at
the next level up: there A is taken out of the store and added as a sister to the mother of On.
Again, theory T will determine what this means in terms of the tree structures that it allows.
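The store-and-retrieve operations over trees can be sketched in Python. This is my own toy encoding (nested tuples for trees, a dict for the store), not the text's formal system:

```python
# Toy store/retrieve over trees: STOREn swaps a subtree for an indexed trace,
# RETRIEVEn takes it back out and attaches it as specifier of an XP headed by
# the operator On.

def store(n, tree):
    # STOREn: replace the tree by an indexed trace, put the tree in the store
    return (f"x{n}",), {n: tree}

def retrieve(n, tree, s):
    # RETRIEVEn: remove the stored tree and attach it as a sister to the
    # mother of the operator On
    a = s.pop(n)
    return ("XP", a, ("X'", (f"O{n}",), tree)), s

dp = ("DP", ("D", "a"), ("NP", "girl"))
trace, s = store(1, dp)                  # DP replaced by trace x1
ip = ("IP", ("DP", "every", "boy"),
            ("I'", ("I",), ("VP", ("V", "kiss"), trace)))
xp, s = retrieve(1, ip, s)
print(s)       # {}: the store is emptied
print(xp[1])   # the stored DP reappears as specifier of XP
```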
Syntactic storage.
In the syntax the relevant theory of trees is our X-bar syntax.
The storage mechanism is meant to deal with movement to SPEC, not with head movement
(the latter is even simpler).
STOREn[<DP,Ø>] = < [DPn Ø], {<n,DP>} >
Thus, we assume that the variable xn is a trace [DPn Ø] with syntactic index n.
RETRIEVEn[<IP,{<n,DP>}>] =        XP,Ø
                                  /    \
                                DP    X',<{<n,DP>},n>
                                        /       \
                                    Xn,Ø      IP,{<n,DP>}
                                      │
                                      Ø

In the case of topicalization XP may be TopP, if we assume such a category, and in the case
of relativization it would be CP.
Semantic storage.
In the semantics, the relevant theory of trees is the set of semantic interpretation trees
introduced above. Here the operations are:
Let α be of type e or <<e,t>,t>.
STOREn[<α,S>] = <xn, S ∪ {<n,α>}>
So we store DP meanings.
Let φ be of type t, and <n,α> the only pair in S stored under index n.

RETRIEVEn[<φ,S>] =        A, S−{<n,α>}
                          /         \
                      λ,<S,n>       α,Ø
                       /    \
                    xn,Ø    φ,S

Thus, the meaning of this operation is: APPLY[λxn.φ, α]
The semantics of syntactic storage and retrieval.
First, I will assume that syntactic storage has no semantic effect. This means that
syntactic storage has the following effect on the paired syntactic and semantic trees:

< DP,Ø , DP',S > => [syntactic STOREn] < [DPn Ø],{<n,DP>} , DP',S >
Syntactic retrieval extends the syntactic tree. Whether or not a semantic operation takes
place depends on the particular syntactic operator introduced. I assume that in the case of
topicalization nothing happens in the semantics, but that in the case of relativization
λ-abstraction takes place. I will work this into the theory of interpretation by stipulating the
interpretation of the tree-bits added. For simplicity, I will stipulate the interpretations here,
rather than try to work them into the general interpretation theory introduced earlier.
I will use two features on nodes in syntactic trees, ⊘ and λ, which instruct the semantic
interpretation: ⊘ tells you that this node is not interpreted (for instance, because it was
already interpreted downstairs), λ tells you that the operation is λ-abstraction. We get the
following syntax-semantics map (ignoring the stores):
< [A B[⊘] C] , C' >
< [A B C[⊘]] , B' >
< [A Bn[λ] C] , [λ xn, C'] >
The syntax of semantic storage and retrieval.
I will assume that semantic retrieval has no syntactic effect.
This means that semantic retrieval has the following effect on syntactic and semantic trees:

< IP,S , φ,S' > => [semantic RETRIEVEn] < IP,S , A,S'−{<n,α>} >
                                                  /          \
                                             λ,<S',n>        α,Ø
                                              /    \
                                           xn,Ø    φ,S'
For binding considerations to be discussed later, I assume that semantic STOREn adds index
n to the syntactic tree as well. Thus semantic STOREn does have a syntactic effect.
Thus, we get:
<DP,S , DP',Ø > => [semantic STOREn] <DPn,S , xn,{<n,DP'>} >
We will modify the theory once more below to deal with island constraints. But let us at this
point discuss some examples.
Topicalisation.
We have a syntactic feature T which can be added to a DP, and we assume that V cannot take
DP[T] as a complement. This triggers syntactic movement:
STOREn[ < DP[T],Ø > ] = < [DPn Ø], {<n,DP[T]>} >
The store gets passed up the tree, until we reach IP. We will talk about movement up below.
But if this is the top IP, the store needs to be emptied. This means that RETRIEVEn needs
to take place:
RETRIEVEn[ IP,{<n,DP[T]>} ] =        TopP,Ø
                                     /       \
                          DP[T][⊘],Ø        T',<{<n,DP[T]>},n>
                                             /        \
                                        Tn[⊘],Ø      IP,{<n,DP[T]>}
                                           │
                                           Ø
Let us look at how this works for the following example:
(1) Some boy, every girl kisses.
For readability, I will make the convention of writing only the relevant stores in the trees.
We start with:
SYNTAX:    [DP[T] [D some] [NP boy]] , Ø
SEMANTICS: [A ∃ BOY]
Syntactic storage with STOREn gives:
SYNTAX:    [DPn Ø] , {<n,DP[T]>}
SEMANTICS: [A ∃ BOY]
< [V kiss] , KISS > takes this as a complement, and we get:
SYNTAX:    [VP [V kiss] [DPn Ø]] , {<n,DP[T]>}
Again, technically both the DPn node and the VP node have this store, but I write only the
top one (we could, of course, change the operation a bit and make the store on the DPn empty
in the process of moving up, but we won't bother here).
SEMANTICS: [A KISS [A ∃ BOY]]
From here we build the following IP:
SYNTAX:    [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]] , {<n,DP[T]>}
SEMANTICS: [A [A [λP.P] [A KISS [A ∃ BOY]]] [A ∀ GIRL]]
We now apply RETRIEVEn and get:
SYNTAX:    [TopP [DP[T][⊘] [D some] [NP boy]] [T' [Tn[⊘] Ø] [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]]]] , Ø
SEMANTICS:
Since both DP[T] and Tn have feature ⊘, they don't change the interpretation at this level;
the semantic interpretation tree is the same as the semantic interpretation tree for the IP:
[A [A [λP.P] [A KISS [A ∃ BOY]]] [A ∀ GIRL]]
= ∀x[GIRL(x) → ∃y[BOY(y) ∧ KISS(x,y)]]
It is, of course, an empirical question whether this is correct, i.e. whether topicalisation
doesn't have a semantic effect. That is not important here, since my only purpose here was
precisely to show how you can do syntactic movement which doesn't affect the meaning.
This is not the only interpretation we get for topicalisation.
We started with:
SYNTAX:    [DP[T] [D some] [NP boy]] , Ø
SEMANTICS: [A ∃ BOY]
But to this we could apply semantic storage with STOREn, and get:
SYNTAX:    [DP[T] [D some] [NP boy]] , Ø
SEMANTICS: xn , {<n, [A ∃ BOY]>}
Topicalisation with STOREn will give:
SYNTAX:    [DPn Ø] , {<n,DP[T]>}
SEMANTICS: xn , {<n, [A ∃ BOY]>}
And we build the following IP and IP meaning:
SYNTAX:    [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]] , {<n,DP[T]>}
SEMANTICS: [A [A [λP.P] [A KISS xn]] [A ∀ GIRL]] , {<n, [A ∃ BOY]>}
To this we can apply semantic RETRIEVEn and get:
SYNTAX:    [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]] , {<n,DP[T]>}
SEMANTICS: [A [λ xn [A [A [λP.P] [A KISS xn]] [A ∀ GIRL]]] [A ∃ BOY]]
= ∃y[BOY(y) ∧ ∀x[GIRL(x) → KISS(x,y)]]
After this, syntactic RETRIEVEn gives:
SYNTAX:    [TopP [DP[T][⊘] [D some] [NP boy]] [T' [Tn[⊘] Ø] [IP [DP [D every] [NP girl]] [I' [I Ø] [VP [V kiss] [DPn Ø]]]]]] , Ø
SEMANTICS: ∃y[BOY(y) ∧ ∀x[GIRL(x) → KISS(x,y)]]
We briefly take stock. What we see is that we can regard syntactic movement/storage and
semantic movement/storage as basically the same operation. Based on this, we would expect
similarities between them. But we would also expect some differences, arising from the
fact that the operation is realized in different media: syntactic X-bar theory trees vs.
semantic function-argument structure trees.
We have also seen that there are some (simple) linking constraints between the syntax and
the semantics, and there may be more.
For instance, while I have assumed syntactic and semantic storage to be basically
independent, giving two readings for example (1), we allow for the possibility that in a
particular language these processes are more intimately linked: languages in which, say,
topicalisation induces semantic storage (then we would only generate the second reading), or
languages in which semantic storage induces syntactic storage (which would produce, say,
syntactically scrambled structures that are unambiguous).
We will talk about islands in a moment, but let us assume that we have in place for English a
mechanism that will allow syntactic long distance movement for topicalisation and semantic
long distance movement for indefinites. Then the theory presented here will
unproblematically generate sentence (2) with the readings (2a)-(2c) (assuming Montague's
analysis of seek, the details of which are discussed later):
(2) A unicorn, Mary believes that Sue seeks.
a. BELIEVE(m, SEEK(s, λP.∃x[UNICORN(x) ∧ P(x)]))
   Mary believes that Sue engages in unicorn-seeking behavior.
b. BELIEVE(m, ∃x[UNICORN(x) ∧ SEEK*(s,x)])
   Mary believes that there is a unicorn that Sue seeks.
c. ∃x[UNICORN(x) ∧ BELIEVE(m, SEEK*(s,x))]
   There is a unicorn that Mary believes Sue seeks.
The syntax is derived in the same way for all three cases: a unicorn is syntactically stored
and retrieved at the top level.
But there are three distinct semantic derivations.
We can choose not to store the meaning of a unicorn, and get reading (2a).
We can store the meaning of a unicorn and retrieve it the first time in the derivation that we
hit type t (i.e. at the embedded IP-level), and get reading (2b).
Or we can store the meaning of a unicorn and retrieve it the second time in the derivation
that we hit type t, at the level of the top IP, and get reading (2c).
It seems that in English all three readings are actually possible for sentence (2), so the theory
works rather well here.
The parallels between syntactic movement and scope have, in the syntactic literature, been
taken as strong evidence for the operation of QR, quantifier raising and the level of Logical
Form. We see that, in as much as semantic storage is QR and the level of semantic trees is
Logical Form, the present theory accepts that point fully.
What it doesn't accept is the way QR and Logical Form are tied to the syntax in the syntactic
literature. In the syntactic literature, derivations start out building one structure and then
branch into two: one deriving a structure at the mystical level of PF (Phonological Form),
and one deriving a structure at LF. Syntax proper is everything that takes place before the
branching. On this view, semantic interpretation takes place after the syntax is done with
it, and hence semantic movement is ordered after syntactic movement.
In the theory discussed here, syntactic and semantic representations are built up from the
start in parallel. Presumably, the interface with phonology takes place from the syntax, and
with pragmatics from the semantics. But the compositional theory gives you no derivational
ordering of syntax and semantics: it is not the 'completed' syntax from which the
semantics takes off. The upshot of the whole discussion in this section is that, if we want to
account for the parallels between movement and scope, we can do that without having to
assume the derivational ordering between syntax and semantics imposed in the syntactic
literature.
And there are advantages to this. As is well known, while the syntactic literature typically
assumes that syntactic movement is always upward, never downward, it is forced to
assume that at Logical Form there is both upward and downward movement: QR and
reconstruction. The three readings of example (2) tell you why. A unicorn is moved up in
the syntax to the top of the tree. It is that tree that is the input for the semantics. If you don't
do anything, you get the widest scope reading (2c). But in order to get the other readings,
you will have to move a unicorn down in the tree.
Usually, the assumption is that you just reconstruct in the intermediate positions of the chain
created by syntactic movement. But things are not straightforward here. The standard
assumption is that long distance syntactic movement goes through the intermediate SPEC of
CP, while QR is assumed to be adjunction (or whatever) to IP. But that means that if you
reconstruct a unicorn back into the intermediate chain position, it is sitting in the wrong
place. That suggests that for the purpose of reconstruction at LF we would have to
create a more complex movement chain in the syntax: say, first move to a DP position
adjoined to IP (or whatever) and from there to SPEC of CP, with the idea that, for
interpretation reasons, reconstruction cannot stop in SPEC of CP.
If we stop at the top of the embedded IP, we would get reading (2b), and if we reconstruct
all the way back, we get (2a).
The fans of reconstruction seem to go by the Tolkien motto:
There and back again. A Hobbit's holiday.
The adversaries hold more with Dr. Seuss:
From there to here,
from here to there,
funny things are everywhere.
If we do not accept the ordering of semantics after syntax, and we generate the syntax and
semantics in parallel, we do not need semantic reconstruction. That seems to be a Rather
Good Thing.
7.4. Relativization.
We briefly discuss relativization as another example of syntactic movement. I will here only
discuss the simple case of relativizer who.
Let us assume that relativizer who is a DP with relativizer feature R and index n, and it is
interpreted as a variable:
< [DPn[R] who] , Ø , xn , Ø >
Again we assume that V cannot take a DP with feature R as a complement; STOREn gives:
< [DPn Ø] , {<n,DPn[R]>} , xn , Ø >
At the top of the tree RETRIEVEn takes the following form:

RETRIEVEn[ IP,{<n,DPn[R]>} ] =        CP[R],Ø
                                      /       \
                          DPn[R][⊘],Ø        C',<{<n,DPn[R]>},n>
                                              /        \
                                         Cn[λ],Ø      IP,{<n,DPn[R]>}
                                            │
                                            Ø

The only difference with topicalisation is that the operator Cn now has feature λ, rather than
⊘.
This means that the interpretation of the CP is going to be: [λ xn IP']
Now we derive:
Now we derive:
(3) who Mary kissed
SYNTAX:    [CP[R] [DPn[R][⊘] who] [C' [Cn[λ] Ø] [IP [DP mary] [I' [I Ø] [VP [V kiss] [DPn Ø]]]]]]
SEMANTICS: [λ xn [A [A [λP.P] [A KISS xn]] MARY]]
= λxn.KISS(MARY,xn)
Now assume that NP allows CP[R] as a modifier, MOD[NP, CP[R]].
Then we get the following structure for (4):
(4) boy who Mary kissed
SYNTAX: [NP [NP boy] [CP[R] [DPn[R][⊘] who] [C' [Cn[λ] Ø] [IP [DP mary] [I' [I Ø] [VP [V kiss] [DPn Ø]]]]]]]
The semantics of this will be:
APPLY[λxn.KISS(MARY,xn), BOY]
= LIFT[λxn.KISS(MARY,xn)](BOY)   [def APPLY]
= [λPλx.P(x) ∧ [λxn.KISS(MARY,xn)](x)](BOY)   [def LIFT]
= λx.BOY(x) ∧ KISS(MARY,x)
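The relativization semantics can be sketched in the same model-theoretic style (my illustration, with an invented model): the CP denotes the predicate λxn.KISS(MARY,xn), and modification conjoins it with the head noun:

```python
# Relative clause semantics: λ-abstraction over the trace variable gives a
# predicate, and the modified NP is the conjunction of noun and predicate.

PEOPLE = {"mary", "b1", "b2"}
BOY = {"b1", "b2"}
KISS = {("mary", "b1")}  # Mary kissed b1 only

boy = lambda x: x in BOY
kiss = lambda y, x: (y, x) in KISS

who_mary_kissed = lambda xn: kiss("mary", xn)        # λxn.KISS(MARY,xn)
boy_who_mary_kissed = lambda x: boy(x) and who_mary_kissed(x)

print([x for x in sorted(PEOPLE) if boy_who_mary_kissed(x)])  # ['b1']
```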
7.5. Syntactic and semantic islands.
Syntactic islands (for single movement only, with intermediate traces).
I will assume that the operators Top and C introduce syntactic islands. This means that a
stored tree has to land in the SPEC of TopP or CP.
There are two cases. Let X be Top or C, and <n,A> in store.
CASE 1: X has index n. I will assume that this triggers RETRIEVEn.
CASE 2: X does not have index n. I will assume that in this case we move on, leaving an
intermediate trace.
RETRIEVE
Let Xn be an island indexed by n.

Build X':      X',<S,n>          if S = {<n,A>}   (semantics as before)
               /      \
            Xn,Ø     IP,S

Build XP:      XP,Ø              if S = {<n,A>}   (semantics as before)
               /    \
            A,Ø    X',<S,n>

If we assume that Top and C[REL] are always indexed, it is going to follow that you cannot
move out of TopP or CP[REL].
If island X is not indexed (and the case we consider here is C[REL]), we can move on,
leaving a trace in SPEC of CP.
Standardly, this goes as follows: The moved DP has to land in SPEC of CP to satisfy the
island. From there it can move on again, leaving a trace.
In other words, the standard assumption is that long distance movement needs to make a
sanitary stop in SPEC of CP.
While I could formalize this operation as a retrieve and store operation in the present
framework, I will take the simpler path of assuming that rather than literally touching down
in SPEC of CP, the stored tree can hover over the face of the waters. As you can imagine,
this will leave a trace.
Let X be a non-indexed island.

[X',S  [X,Ø]  [IP,S] ]            if S = {<n,A>}   (semantics as before)

[XP,S  [DPn Ø]  [X',S] ]          if S = {<n,A>}   (semantics as before)
Of course, we create here the same structure that literal touch down would create. But
instead of literally moving A into the SPEC of CP, we leave it in store and let this create the
intermediate trace.
When we look at the two operations RETRIEVE and BOUNCE we see that we can regard
each of them as consisting of two normal structure building operations: build-X' and build-XP.
The special constraints on the resulting structures come out of constraints on comparing
the tree with the store. But since the store can be regarded as just a complex feature, this is
really just a process of structure building and feature matching. The observation is that this
holds as much for RETRIEVE as it holds for BOUNCE. Thus, while we introduced
RETRIEVE as moving from the store to the tree, we don't need to assume a movement
metaphor here: we can just as well see it as building a tree with daughter A, under the
condition that it matches the store feature <n,A>.
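The observation that RETRIEVE is just structure building plus feature matching can be sketched directly: building the XP is licensed only if the proposed specifier matches what the store feature records. The tuple encoding of trees and the name `build_xp` are my own illustration.

```python
# Sketch of RETRIEVE as structure building + feature matching:
# no movement, just a comparison of a daughter with the store feature.
# Trees are tuples, the store is a dict; the encoding is illustrative.

def build_xp(spec, xbar, store, n):
    """Build [XP spec xbar], licensed only if the store is exactly {<n, spec>}.
    On success the store is emptied."""
    if store != {n: spec}:
        raise ValueError("daughter does not match the stored tree")
    return ("XP", spec, xbar), {}

stored_dp = ("DP", "every", "boy")
xp, store = build_xp(stored_dp, ("X'", "..."), {1: stored_dp}, 1)
print(xp)      # ('XP', ('DP', 'every', 'boy'), ("X'", '...'))
print(store)   # {}
```

Nothing here "moves": the grammar just checks that the tree it builds agrees with the complex feature that the store amounts to.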
This means that the only operation which is really literally a movement operation is the
operation STORE. But again, this is not essential either. We could, without loss of
anything, assume that instead of applying movement operation STOREn to [DP α], we can
allow the grammar to just generate the result of such storage, i.e. we can allow the grammar
to generate:
<DPn,{<n,[DP α]>}>
│
Ø
What we will have done then, is turn the grammar (at least with respect to movement) into a
monostratal grammar, a grammar which has only one syntactic level. This means in
essence that the generated structure tree coincides with the syntactic derivation tree. (In yet
other words, the grammar just generates surface structure trees.)
Movement is traditionally analyzed in a multistratal framework: you build up a tree with the
moved expression in its original position, each movement adds a tree to the derivation, the
tree in which the moved expression is in its touch-down or landing position.
What the present discussion suggests is that we can collapse this sequence of trees into a
single tree which encodes the movement in its feature structure. Vice versa, it is
straightforward to reconstruct the sequence of movement trees from our single
base-generated tree.
Semantic Islands.
RETRIEVE
Let <n,α> be the only pair stored under index n in S.
[λ,<S,n>  [xn,Ø]  [IP',S] ]

[A,S−{<n,α>}  [λ,<S,n>]  [α,Ø] ]
Semantic RETRIEVE takes place at the IP-level (type t). But Islandhood takes place at the
CP-level. While for syntactic movement, the two levels coincide, they do not for semantic
movement.
It has been generally accepted since Rodman 1972 that relative clauses are scope islands in
that quantificational noun phrases cannot take scope out of relative clauses: (1a) does not
have a reading that can be paraphrased as (1b):
(1) a. Some boy who danced with every girl left early.
b. For every girl there was some boy who danced with her and left early
(possibly different boys for different girls).
(For discussion and analysis of some seeming counterexamples, see Sharvit 199?.)
While the claim that relative clauses are scope islands for quantificational expressions is by
and large uncontroversial, the claim that that-complements are scope islands as well has
always been and remains highly controversial.
Here our only aim is to show how the notion of a scope island can be incorporated:
Let Q be the set of pairs of the form <n,α> where α is a quantificational DP meaning.
Let us indicate that X is a semantic Island, by assuming that its store is a special
marker i.
[A,S  [X,i]  [IP',S] ]            if S ∩ Q = Ø
Assuming that relative clauses are indeed scope islands, a semantically stored DP <n,α> can
only move on to (and hence from) the topnode A if <n,α> ∉ Q. This has the consequence
that quantificational DPs cannot take their scope out of the relative clause.
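The island check on the store amounts to a simple filter: the store may pass the island node only if it contains no quantificational pairs. A toy version, with an invented set standing in for Q:

```python
# Sketch of the semantic-island condition S ∩ Q = Ø: a store may cross
# a semantic island only if none of its entries is quantificational.
# The membership test for Q is a stand-in; names are illustrative.

QUANTIFICATIONAL = {"every girl", "no boy"}   # stand-ins for Q-meanings

def pass_island(store):
    """Let the store cross the island iff no stored meaning is in Q."""
    if any(m in QUANTIFICATIONAL for m in store.values()):
        raise ValueError("quantificational DP cannot scope out of the island")
    return store

print(pass_island({1: "who-variable"}))   # non-quantificational: passes
try:
    pass_island({2: "every girl"})        # blocked: (1a) lacks reading (1b)
except ValueError as e:
    print(e)
```

The relative pronoun's variable passes, while a stored every girl is trapped inside the relative clause, as in (1).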
7.6. Binding Theory.
The binding theory that I will introduce here has (at least) three limitations on it.
1. It will not deal with infinitives. Since my purpose is mainly expository, I will not get
entangled in questions of the structure of infinitives and the problems of control.
2. It will not deal with discourse anaphora or with deictic pronouns. Thus, I will not assume
that these are interpreted as semantic variables that are assigned values by our static
assignment function mechanism.
3. It will not deal with weak cross over phenomena.
Let us call a DP to which semantic storage has applied a semantic operator. We will
assume that pronouns and reflexives are interpreted as semantic variables. This raises the
question of when a semantic operator can bind a variable.
Strong cross over is the situation where an element DPn c-commands an operator
with index n.
As is common wisdom, semantic binding is not possible in a strong cross-over situation:
(1) a. #Hen likes Johnn.
b. #Hen likes every boyn.
The storage mechanism all by itself will generate readings (2a) for (1a) and (2b) for (1b):
(2) a. LIKE(j,j)
b. x[BOY(x)  LIKE(x,x)]
The problem is, of course, that (1a) and (1b) don't have these readings. The syntactic binding
theory introduced will rule out (1a) and (1b).
Weak cross-over is the situation where a semantic operator with index n does not c-command an element DPn.
The common wisdom is that a semantic operator can bind such an element iff the operator is
non-quantificational (and it isn't out for independent reasons, like strong cross-over).
So we get the contrast in (3) and interpretations in (4):
(3) a. Hisn mother loves Johnn.
b. #Hisn mother loves every boyn.
(4) a. LOVE(MOTHER(j),j)
b. x[BOY(x)  LOVE(MOTHER(x),x)]
i.e. (3a) can mean (4a), but (3b) cannot mean (4b).
The binding theory that I will introduce will not deal with the contrast between (3a) and (3b).
The theory will generate (3a) with reading (4a), but also (3b) with reading (4b).
Something obviously needs to be done. But it is not clear that this should be worked into the
binding theory.
The reason is that there are also weak cross over situations where binding by a
quantificational operator is fine, as in (5):
(5) a. Hisn mother irritates every boyn.
b. The fact that hisn daughter's boyfriend isn't Jewish bothers every pious Jewn.
c. The fact that many rabbis don't accept hisn conversion infuriates every Reform
convertn.
Now, the cases in (5) all have psych-verbs, which have special syntactic and semantic
properties, but there are also cases with verbs that don't seem to be special, as in (6):
(6) The string that binds himn to hisn mother keeps every unborn babyn alive.
While the contrast between (3a) and (3b) is clear and obviously needs to be accounted for,
simple modifications of the binding theory that eliminate (3b) tend to eliminate the cases in
(5) and (6) as well.
So, I will leave this problem open here.
I will start by introducing the semantic binding theory:
SEMANTIC BINDING
Semantic Binding
At the end of the derivation the following conditions hold:
A: No free variables:
every variable xn is bound by an operator λxn.
B: No vacuous abstraction:
every operator λxn binds some variable xn.
-Pronouns and reflexives introduce semantic variables. Semantic binding tells you that these
will have to be bound.
-Cn[REL] introduces a binding operation.
Semantic binding tells you that it will have to bind a variable.
Binding, free and bound have their standard logical meaning here. In fact, I will avoid these
terms in the syntactic binding theory to be discussed now:
SYNTACTIC BINDING
The theory of syntactic binding is stated in terms of indices and binding features.
Let us start with an inventory of indexed expressions and their meanings.
Pronouns: pronouns are generated with an index; their interpretation is a variable with the
same index:

<DPn , xn , Ø>
  │
 pro

Reflexives: the same as for pronouns:

<DPn , xn , Ø>
  │
 proself
Semantic operators: STOREn introduces syntactic index n, the index of the semantic
variable used:
STOREn produces:

<DPn , xn , {<n,DP'>}>
  │
  Ø
Syntactic operators: syntactic operators can be generated with a syntactic index, and if they
are they trigger syntactic retrieval relative to that index.
topn[] is not semantically interpreted.
Cn[λ] triggers abstraction over semantic variable xn.
Traces: traces are introduced by syntactic STOREn, or by the passing mechanism. In both
cases they are not semantically interpreted:
<DPn , {<n,A>}>     or     <DPn , Ø>
  │                          │
  Ø                          Ø
Note, by the way, that there is no requirement that a DP bears only one index. We will in
fact make use of multiple syntactic indices.
We can get multiple syntactic indices if we move a DP that has an index, for instance, a
reflexive, or a DP to which we have applied semantic storage.
i.e.

<DPn , xn , Ø>     ==>     <DPn,m , xn , {<m,DP>}>
  │                           │
 proself                      Ø

where the stored DP = [DP proself].
This example indicates an important assumption:
Syntactic indices are not moved.
When we put the reflexive proselfn in store the index n stays where it is: the index does not
get stored.
This completes our assumptions about indices, let us now discuss binding features.
Binding features are syntactic features, generated on indexed DPs and inherited up the
syntactic tree from daughter to mother in the syntactic build-up, until they are canceled.
There are three types of binding features:
[n,N,top(tree)], [n,N,top(I)], [n,A,top(I)]
and they are assigned as follows:
Pronouns: DPn, [n,N,top(I)]
│
pro
Reflexives: DPn, [n,A,top(I)]
│
proself
Semantic operators: STOREn introduces both a syntactic index and a binding feature:
DPn, [n,N,top(tree)]
Syntactic operators:
Xn, [n,N,top(tree)]
Traces: let n be the index on the trace created by syntactic STOREn or by passing of <n,A>.
This introduces both a syntactic index and a binding feature:
DPn, [n,N,top(I)]
│
Ø
When we look at this, we see that we find the following:
[n,A,top(I)] on reflexives.
[n,N,top(I)] on pronouns and traces.
[n,N,top(tree)] on operators
Like syntactic indices, more than one binding feature can occur on one indexed DP, and
likewise we assume as a general constraint on syntactic storage that binding features cannot
be syntactically stored:
Movement constraint:
Syntactic indices and binding features cannot be moved by syntactic storage:
STOREn[ <DPm,[m,α],[k,β] , Ø> ] = <DPn,m,[m,α],[k,β] , {<n,DP>}>
                                     │
                                     Ø
This simple constraint will actually allow us to do binding theory without the need for
reconstruction.
It tells us that if a binding feature percolates up the tree to a DP which is moved, it will
continue to percolate up the tree from the place where the DP was moved from, and not from
the place where it is moved to.
Let us now look at the binding features:
[n,N,top(tree)], [n,N,top(I)], [n,A,top(I)]
In these features, the first element is the index, the second element tells you whether the
feature is an anaphor feature A or a non-anaphor feature N. The third element specifies
the binding domain.
Both top(tree) and top(I) are derivational calculators: they assign a value 0 or 1 to the node
that they are at, depending on whether the derivation continues to assign a mother to that
node and what that mother is.
Thus, assume we have derived a tree with topnode:
X,[...top(tree)]
The value of top(tree) at X is determined as follows:
-if the derivation ends here, then top(tree)(X)=1
-if the derivation does not end here, and X becomes a daughter then top(tree)(X)=0
Thus the only node to which top(tree) assigns 1 is indeed the top of the tree.
Assume we have derived a tree with topnode:
X,[...top(I)]
The value of top(I) at X is determined as follows:
-if X ≠ IP then top(I)(X)=0
-if X=IP and the derivation ends here, then top(I)(X)=1
-if X=IP and the derivation does not end here, then:
top(I)(X)=1 iff the mother of X is not a projection of I
or not a projection of the same I that X is a projection of.
In practice this means that top(I)(X)=1 if X is the IP complement of top or C. As we will
see, the definition of canceling will tell us that only the first node where the value of the top
feature is 1 is relevant.
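The two derivational calculators can be sketched as functions over nodes. The dict encoding of nodes and the simplification of "projection of the same I" to a category check are my own assumptions.

```python
# Sketch of the derivational calculators top(tree) and top(I).
# A node is a dict with a category and, once built into a tree, a mother.
# The encoding is illustrative; "projection of the same I" is simplified
# to a check on the mother's category.

def top_tree(node):
    """top(tree)(X) = 1 iff the derivation ends at X (X never gets a mother)."""
    return 1 if node.get("mother") is None else 0

def top_I(node):
    """top(I)(X) = 1 iff X is an IP and either the derivation ends at X or
    X's mother is not an I-projection (in practice: the IP complement of
    top or C)."""
    if node["cat"] != "IP":
        return 0
    mother = node.get("mother")
    if mother is None:
        return 1
    return 1 if mother["cat"] not in ("IP", "I'") else 0

ip = {"cat": "IP", "mother": {"cat": "C'"}}
print(top_I(ip))      # 1: IP complement of C
print(top_tree(ip))   # 0: the derivation continues above this IP
```

So a binding feature with top(I) is canceled on the IP directly under C or top, and a top(tree) feature only at the root.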
Binding Feature Inheritance:
binding features are inherited from daughter to mother until they are canceled.
Binding Feature Canceling:
1. a binding feature [n,α] at node D is canceled at D by a sister S with index n:

[M  Sn  D[n,α]]     or     [M  D[n,α]  Sn]
2. a binding feature [n,α,top(X)] at node Y is canceled if top(X)(Y)=1
The first canceling condition tells you that when a binding feature [n,α] hits a c-commanding
index n, the feature gets canceled.
The second canceling condition says that a binding feature with specification top(tree) gets
canceled at the top of the tree. And it says that a binding feature with specification top(I)
gets canceled at the top of the first IP projection it goes through.
The way we have set things up here, all binding features will always be canceled at some
point. What is important is whether they are canceled legally or illegally, because:
Canceling Constraint:
Illegal canceling leads to ungrammaticality.
We can now finally state the binding theory:
Binding Theory:
1. Canceling [n,A,top(I)] at X is illegal if top(I)(X)=1
2. Canceling [n,N,top(X)] at Y is illegal if top(X)(Y)=0
In other words:
-an anaphoric feature that gets to the top of its binding domain, gets illegally canceled there.
- a non-anaphoric feature that gets canceled before it gets to the top of its binding domain
gets canceled illegally.
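The two clauses of the binding theory reduce to one question about each canceling event: did it happen at the top of the feature's binding domain? A compact sketch, with the feature encoding and function name my own:

```python
# Sketch of the Binding Theory's legality check on feature canceling:
# 1. canceling [n,A,top(D)] at the top of domain D is illegal;
# 2. canceling [n,N,top(D)] anywhere but the top of D is illegal.
# Features are (index, kind, domain) triples; the encoding is illustrative.

def cancel_legality(feature, at_domain_top):
    """Return 'legal' or 'illegal' for a canceling event."""
    _n, kind, _domain = feature
    if kind == "A":                       # anaphor feature
        return "illegal" if at_domain_top else "legal"
    return "legal" if at_domain_top else "illegal"   # non-anaphor feature

# (1a): himself's [n,A,top(I)] is canceled by sister DPn below the IP top
print(cancel_legality((1, "A", "I"), at_domain_top=False))     # legal
# (1b1): the anaphor feature reaches the top of IP uncanceled by a sister
print(cancel_legality((1, "A", "I"), at_domain_top=True))      # illegal
# strong cross-over: an operator feature hits a c-commanding index early
print(cancel_legality((1, "N", "tree"), at_domain_top=False))  # illegal
# the operator feature survives to the top of the tree
print(cancel_legality((1, "N", "tree"), at_domain_top=True))   # legal
```

Anaphor features must be caught by a c-commanding index before the domain top; non-anaphor features must survive untouched until the domain top.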
Let us look at the binding configurations for each of the indexed items.
Anaphors.
We start with reflexives. Suppose we have the following configuration:
A.
[IP ..αn....β,[n,A,top(I)]...]
where αn c-commands β, and IP is the binding domain. In this case [n,A,top(I)] percolates up
to the sister of αn and gets legally canceled there.
B.
#[IP ........β,[n,A,top(I)]...]
In this case [n,A,top(I)] percolates to the IP and gets illegally canceled there.
C.
#[IP ...[...αn...]....β,[n,A,top(I)]...]
This configuration has αn, but not c-commanding β. As in case B, [n,A,top(I)] percolates up
to the IP and gets illegally canceled there.
Thus, a reflexive with index n needs to be c-commanded by αn within its binding domain.
The cases in A-C are illustrated in (1) a-d:
(1) a. Every boyn likes himselfn.
b1. #Every boy likes himselfn.
b2. #Every boyn says that Mary likes himselfn.
c. #Every boyn's mother likes himselfn.
(1b1) violates both the semantic and the syntactic binding theory:
-himselfn introduces a variable xn which doesn't get bound.
-Syntactically (1b1) illustrates case B.
In all three other cases, semantic binding is satisfied.
(1b2) is out for the same syntactic reason as (1b1): it illustrates case B.
(1c) illustrates case C.
Let's show (1a).
SYNTAX:

[IP,[n,N,top(tree)]
  [DPn,[n,N,top(tree)]  [D every]  [NP boy]]
  [I',[n,A,top(I)]  [I Ø]
    [VP,[n,A,top(I)]  [V like]  [DPn,[n,A,top(I)] himself]]]]

At I', [n,A,top(I)] gets legally canceled by its sister DPn;
[n,N,top(tree)] gets legally canceled at IP.
SEMANTICS:

APPLY[ λxn[LIKE(xn,xn)] , APPLY[∀,BOY] ]
= ∀x[BOY(x) → LIKE(x,x)]

where every boy is semantically stored as <n,APPLY[∀,BOY]> and retrieved at the top node.
Operators.
The relevant configurations here are A and B below:
A #[XP...αn....β[n,N,top(tree)]...]
Locality is irrelevant here. Any situation where αn c-commands operator β will lead to
illegal canceling of the operator feature at the sister of αn, and hence to ungrammaticality.
B [XP.........β[n,N,top(tree)]...]
αn
In this situation αn does not c-command β and the operator feature percolates up.
Situation A represents cases where two operators try to bind the same variable as in (2a), and
strong cross-over cases as in (2b). Situation B is the weak cross-over configuration we find
in (2c) and (2d):
(2) a. #The boy who Cn every boyn likes tn.
b. #Hen likes every boyn.
c. Hisn mother likes Johnn.
d. Hisn mother likes every boyn.
We look at (2c):
[IP
  [DP  [DPn pro]  [D'  [D 's]  [NP mother]]]
  [I'  [I Ø]  [VP  [V like]  [DPn john]]]]
We semantically store john. Its binding feature gets to the top of the tree. The binding
feature of the possessive pronoun also gets to the top of the tree, which is the top of its
binding domain. Both features are legally canceled there.
Let us assume that the interpretation of mother is a relation MOTHER ∈ CON<e,<e,t>>:

mother → MOTHER

Let us assume that 's is interpreted as the function which maps a relation like MOTHER onto
the function which maps every individual onto his/her mother:

's → λRλy.σ(λx.R(x,y)), of type <<e,<e,t>>,<e,e>>
Then we get the following semantic interpretation:

APPLY[ λxn[ LIKE(σ(λx.MOTHER(x,xn)) , xn) ] , JOHN ]
= LIKE(σ(λx.MOTHER(x,JOHN)),JOHN)
Pronouns and traces.
As long as we look at configurations within their binding domain, pronouns and traces
behave just like operators:
A #[IP...αn....β[n,N,top(I)]...]
B [IP.........β[n,N,top(I)]...]
αn
Strong cross-over leads to ungrammaticality, weak cross-over doesn't. Thus we get:
(3) a. #Every boyn likes himn.
b. Every boyn's mother likes himn.
c. #The boy who Cn [IP hen likes tn].
d. The boy who Cn [IP hisn mother likes tn].
We do (3d):
[DP  [D the]
  [NP  [NP boy]
    [CP  [DP who]
      [C'  [Cn Ø]
        [IP
          [DP  [DPn pro]  [D'  [D 's]  [NP mother]]]
          [I'  [I Ø]  [VP  [V like]  [DPn,n Ø]]]]]]]]
The syntax and semantics of the relative pronoun is as follows:
<DPn[REL],[n,N,top(I)] , xn , Ø>
   │
  who
Since the syntactic movement is semantically interpreted, in the sense that the syntactic
operator at the top of the chain will be interpreted as λ-abstraction, I assume a constraint on
syntactic storage here:
Relativization constraint:
Relative pronoun whon is stored with syntactic STOREn.
Since the index and binding feature are not stored, after storage the trace will be:

<DPn,n,[n,N,top(I)],[n,N,top(I)] , {<n,DP>}>     where DP = [DP who], interpreted as xn
   │
   Ø

The doubling of the indices and features on the trace makes no difference for the binding
theory.
We move the relative pronoun who to CP. The binding features of the trace and the pronoun
percolate up to IP where they get legally canceled.
The interpretation is:

APPLY[ λP.σ(P) , λxn[ BOY(xn) ∧ LIKE(σ(λx.MOTHER(x,xn)) , xn) ] ]
= σ[λz.BOY(z) ∧ LIKE(σ(λx.MOTHER(x,z)),z)]
Outside their binding domain, trace and pronoun differ.
Pronouns.
A pronoun needs to be semantically bound. It does not allow c-commanding αn inside its
binding domain, but at the top of the binding domain, the binding feature gets canceled. This
means that there is no syntactic constraint on where αn can and can't be outside the binding
domain.
A pronoun can be c-commanded by αn if αn is outside the binding domain: both (4a) and (4b)
are ok:
(4) a. Johnn said that [IP hen is a genius].
b. Every boyn's mother thinks that [IP hen is a genius].
Traces.
The differences between traces and pronouns do not lie in the binding theory, but in the
structure introduced by syntactic passing and retrieval.
Suppose the binding feature of the trace is legally canceled at IP. Then the syntactic storage
mechanism will create either situation A or B:
Retrieval:
A. [CP...Cn,[n,N,top(tree)] [IP...tn,[n,N,top(I)]...]...]

Passing:
B. [CP...DPn,[n,N,top(I)]...[IP...tn,[n,N,top(I)]...]...]
In the first case, a c-commanding αn anywhere higher in the tree than this CP will be
ungrammatical, because it illegally cancels the operator feature.
In the second case, a c-commanding αn in the next IP will be ungrammatical, because of the
binding feature of the intermediate trace. Thus, for a trace tn of, say, relativization, the only
c-commanding αn allowed are the intermediate traces created in SPEC of CP by passing and
finally the operator Cn: (5a) is grammatical, (5b) is not, because hen illegally cancels the
binding feature of the intermediate trace:
(5) a. The boy [CP who Cn [IP Mary thinks [CP tn that [IP Sue likes tn]]]].
b. The boy [CP who Cn [IP hen thinks [CP tn that [IP Sue likes tn]]]].
This much is standard binding theory. Let us now look at some not so standard cases,
involving topicalization.
First look at (6a) and (6b) and meaning (6c):
(6) a. Every boy, Mary said loves himself.
b. Every boy, Mary said he loves.
c. SAY(m,x[BOY(x)  LOVE(x,x)])
(6a) can have interpretation (6c), (6b) cannot.
I gave (6a,b) since (6a) is unambiguously a topicalisation structure, but for simplicity, let's
look at the derivation of (7a,b):
(7) a. Every boy, t loves himself.
b. Every boy, he loves t.
We can derive the following syntactic tree for (7a):
TopP, [m,N,top(tree)]CANCEL [n,N,top(tree)]CANCEL
DP
Top',[m,N,top(tree)],[n,N,top(tree)] <{<m,DP>},m>
D
NP
│
│
every boy
Topm,[m,N,top(tree)]
│
Ø
D
│
every
NP
│
boy
IP,[n,N,top(tree)],[m,N,top(I)]CANCEL {<m,DP>}
D
NP
│
│
every boy
DPn,m,[n,N,top(tree)], [m,N,top(I)] {<m,DP>}
│
Ø
D
NP
│
│
every
boy
I',[n,A,top(I)]CANCEL
I
│
Ø
VP,[n,A,top(I)]
V
│
love
DPn,[n,A,top(I)]
│
himself
The binding feature [n,A,top(I)] of the reflexive himselfn inherits up to I'. Its meaning is a
variable xn which needs to be bound. So we semantically store every boy with semantic
STOREn. This introduces index n and binding feature [n,N,top(tree)] on the DP:
[DPn,[n,N,top(tree)]  [D every]  [NP boy]]
This DP gets syntactically stored with syntactic STOREm, where m ≠ n. This introduces the
doubly indexed trace shown in the tree.
Note that the indices and binding features are not stored.
Note also that we couldn't have used syntactic STOREn, because then the operator Cn would
illegally cancel feature [n,N,top(tree)] introduced by semantic STOREn.
We build the IP and the following happens:
-the anaphoric feature [n,A,top(I)] gets legally canceled on the I' by its sister DPn,m.
-the features from DPn,m inherit up to the IP. The trace-feature [m,N,top(I)] gets legally
canceled here.
We have operator Topm with feature [m,N,top(tree)]. We form the Top', and the following
happens:
-the remaining operator features [n,N,top(tree)] and [m,N,top(tree)] inherit up to the Top'.
-the DP in store is readied for retrieval.
We build the TopP by retrieving the DP in store as a specifier.
-the syntactic store is emptied, and the two binding features inherit to the TopP and get
legally canceled there.
The semantic derivation is as follows:

APPLY[ λxn[LOVE(xn,xn)] , APPLY[∀,BOY] ]
= ∀x[BOY(x) → LOVE(x,x)]

where every boy is semantically stored as <n,APPLY[∀,BOY]> and retrieved.
If we try to derive (7b) in the same way, we build up the following IP:

[IP,[n,N,top(I)]CANCEL,[m,N,top(I)]CANCEL,{<m,DP>}
  [DPn,[n,N,top(I)]  he]
  [I',[n,N,top(tree)]ILLEGAL CANCEL,[m,N,top(I)],{<m,DP>}
    [I Ø]
    [VP,[n,N,top(tree)],[m,N,top(I)],{<m,DP>}
      [V love]
      [DPn,m,[n,N,top(tree)],[m,N,top(I)],{<m,DP>}  Ø]]]]

where the DP in store = [DP every boy].
We semantically store every boy with STOREn, the same index as on the pronoun. This
introduces the binding feature [n,N,top(tree)], which inherits up, together with everything
else, and gets illegally canceled by the sister pronoun at the I' level.
Let us finally look at the examples in (8):
(8) a. One of his relatives, every boy admires.
b. A picture of himself with Pavarotti, every executive keeps in his drawer.
c. Himself, every boy loves.
All these cases allow a reading where the pronoun or anaphor, respectively, is bound by the quantifier.
We do (8c).
TopP,[m,N,top(tree)]CANCEL [n,N,top(tree)]CANCEL
DP
│
himself
Top',[m,N,top(tree)],[n,N,top(tree)] <{<m,DP>},m>
│
himself
Topm,[m,N,top(tree)]
IP,[n,N,top(tree)],[m,N,top(I)]CANCEL {<m,DP>}
│
│
Ø
himself
DPn,[n,N,top(tree)]
I',[n,A,top(I)]CANCEL, [m,N,top(I)] {<m,DP>}
│
D NP
himself
│
│
every boy
I
VP ,[n,A,top(I)], [m,N,top(I)] {<m,DP>}
│
│
Ø
himself
V
│
love
DPn,m, [n,A,top(I)], [m,N,top(I)] {<m,DP>}
│
│
Ø
himself
Himself has index n and gets syntactically stored with syntactic STOREm. This means that
the binding features [n,A,top(I)] and [m,N,top(I)] percolate up the tree. Since every boy
needs to semantically bind the variable xn we apply semantic STOREn to it. This introduces
index n and feature [n,N,top(tree)]. This indexed DP legally cancels feature [n,A,top(I)] at
the I'. The trace-feature [m,N,top(I)] gets legally canceled at the IP. We retrieve himself, and
the remaining operator features get canceled legally at the top of the tree.
The semantic tree derived is exactly the same as before for (7a), so indeed the meaning
derived is:
∀x[BOY(x) → LOVE(x,x)]
In a multistratal theory, the topicalisation examples discussed raise complicated questions
about the level(s) at which the binding theory applies. The standard assumption that binding
theory applies at surface structure is problematic for these cases, because the right c-command
relations do not hold at surface structure (e.g. the reflexive in (8c) is not bound).
And we can't assume that the binding theory applies at deep structure instead, because the
binding theory should apply to traces, including intermediate traces, which don't exist at deep
structure. It can't apply at logical form, because after quantifier raising, strong cross-over
violations would count as good. Hence, it would apply to logical form after reconstruction,
where operations like topicalisation are undone.
The present discussion shows that we can do away with reconstruction and keep the binding
theory in the syntax (that means, at 'surface structure trees'.) The solution consists of three
assumptions:
1. If we move α in the syntax, we do not move its indices and binding features.
I have formulated here a trace-based theory of movement and a feature percolation and
canceling mechanism for binding. If you prefer a c-command definition for binding, you can
get the same effect by assuming a copy theory of movement.
Copy-movement differs only from trace-movement in the storage operation:
COPY-STOREn[ <DP , Ø> ] = <DPn , {<n,[DP α]>}>
               │              │
               α              α
Thus, in copy-movement, the stored DP does not get removed and replaced by Ø when it
becomes a trace, it only gets an index. In this theory, it is only when the tree is
phonologically interpreted (i.e. not in the syntax) that the trace gets spelled out as Ø. This
means that what c-command structure existed before movement gets preserved after
movement. Hence this structure can be used for the binding theory, and binding can be
defined in terms of c-command on these richer trees.
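The difference between the two storage operations can be sketched side by side. The tuple encoding and the function names `store_trace` and `copy_store` are my own illustration of the contrast, not the chapter's formal definitions.

```python
# Sketch contrasting trace-based STORE with COPY-STORE: under copying,
# the DP stays in the tree (merely acquiring an index), so pre-movement
# c-command structure survives in the syntax; only the phonology spells
# the lower copy out as Ø. Trees are tuples; the encoding is illustrative.

def store_trace(dp, n):
    """Trace-movement: replace the DP's material by Ø and store the tree."""
    return ("DP", n, "\u00d8"), {n: dp}

def copy_store(dp, n):
    """Copy-movement: leave the DP in place; it only gets the index n."""
    cat, *daughters = dp
    return (cat, n, *daughters), {n: dp}

dp = ("DP", "himself")
print(store_trace(dp, 2)[0])   # ('DP', 2, 'Ø')   – material gone
print(copy_store(dp, 2)[0])    # ('DP', 2, 'himself') – material preserved
```

In both variants the index introduced on the lower position is new; the original index of the DP, crucially, is neither moved nor copied.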
But this is only possible under the same assumption that I have made: if we apply copy-move to α, the index of α is not copied.
2. Syntactic movement introduces an index on the trace. If the DP moved already carries an
index, the index introduced need not be the same as the index that was present.
If we topicalize an indexed expression, the topicalization operator is likely to end up c-commanding the operator binding that indexed expression. If these two operators have the
same index, this will violate binding theory. Assuming multiple indices avoids this problem
in a very simple way.
Again, this solution is independent from the particular implementation of the binding theory
as feature percolation and canceling: it can be just as easily adopted in a theory with
copy-movement.
3. Semantic variables and semantic operators have the same index in syntax and semantics.
3.a. This means that a pronoun or anaphor with interpretation xn will carry index n in the
syntax.
3.b. It means that the relativization operator, which introduces semantic abstraction over
variable xn carries index n in the syntax.
3.c. And it means that a DP which is given scope by the scoping mechanism, which involves
semantic abstraction over variable xn carries index n in the syntax.
It is these three assumptions that allow the binding theory to be formulated on the syntax.
This way of formulating the binding problems discussed clarifies what is at stake in the
logical form and reconstruction debate.
Assumptions 1 and 2 are just syntactic conditions and can easily form part, one way or other,
of anybody's syntactic theory of binding.
Assumption 3 is an assumption about the syntax-semantics map and it is this assumption,
and this assumption only, that is at stake in the logical form and reconstruction debate, in as
much as it relates to scope and binding.
We can formulate the binding theory in the syntax iff semantic indices are present in the
syntax and behave there just like syntactic indices (i.e. satisfying assumptions 1 and 2).
The binding theory requires a level at which both syntactic and semantic indices are present.
With assumption 3 this level can be a syntactic level. If we don't make assumption 3, we
will need to make the opposite assumption, namely, that syntactic indices are present at some
semantic level, and this is how we get the level of logical form, ordered after the syntax.
Whether we will need reconstruction is now going to depend on whether we assume a
trace-based theory of movement or a copy-theory of movement. If topicalized himself does not
carry an index in the syntax, and the movement is trace-based, it would get its index in the
wrong position at logical form, so it should only get an index after reconstruction. If the
movement is copy-movement, we don't need reconstruction, because we can assign the index
to the original down in the tree, rather than the copy up in the tree.
We see that either way, the case for reconstruction is very weak. If you believe in logical
form ordered after the syntax, and you believe that the binding theory should be stated at that
level, then I think you better accept the copy theory of movement (both in the syntax and at
logical form), because it can allow you to do without reconstruction.
But how strong is the case for believing in logical form and believing that the binding theory
should be stated at logical form?
The parallels between movement and scope were the original motivation for logical form
ordered after the syntax, as a level formulated in the vocabulary of syntax, X-bar theory, and
for the movement operation of quantifier raising. And these parallels are still often quoted as
evidence for them. I have argued that in as much as there are real parallels, this at most shows
that the same abstract operation is operating in the syntax and in the semantics: they don't say
anything about the ordering of syntax and semantics with respect to each other, nor do they
show that the semantic level at which the operation parallel to movement is operating is a
level of trees formulated in terms of the vocabulary of syntax, i.e. X-bar theory.
The second motivation for logical form ordered after the syntax, formulated in the
vocabulary of syntax, and the operation of quantifier raising as movement, is the need to
have a level at which to formulate the binding theory. Arguably we need a level at which
both syntactic and semantic indices are present.
As I have argued, this hinges purely on assumption 3. If you think it plausible that semantic
indices are present in the syntax, the binding theory can be formulated in the syntax, and the
need for a unified level is not an argument for logical form.
You don't want semantic indices in the syntax? Well, I don't want syntactic indices in my
semantics. So I would rather have my semantic indices be present in the syntax as well, and
stay agnostic about the level of logical form.