Modeltheoretic Semantics

ADVANCED SEMANTICS
CLASS NOTES, SPRING 1994
revised spring 1998
corrected, spring 2005
Fred Landman
CHAPTER ONE: MODELTHEORETIC SEMANTICS
A semantic theory for a language like English or Hebrew is a theory in which we make
predictions about semantic phenomena like entailment and ambiguity.
A semantic framework is a framework for developing and comparing such theories, i.e. it is
a framework for studying semantic problems.
The kind of framework that we will be developing here is modeltheoretic semantics. It is
based on a couple of well-known assumptions: aboutness and compositionality.
1. Aboutness.
A core part of what we call meaning concerns the relation between linguistic expressions and
non-linguistic entities, or 'the world' as our semantic system assumes it to be, the world as
structured by our semantic system.
Some think about semantics in a realist way: semantics concerns the relation between
language and the world.
Others think about semantics in a more conceptual, or if you want idealistic way: semantics
concerns the relation between language and an intersubjective level of shared information, a
conceptualization of the world, the world as we jointly structure it. Both agree that
semantics is a theory of interpretation of linguistic expressions: semantics concerns the
relation between linguistic expressions and what those expressions are about. Both agree
that important semantic generalizations are to be captured by paying attention to what
expressions are about, and important semantic generalizations are missed when we don't pay
attention to that.
But semantics concerns semantic competence. Semantic competence does not concern what
expressions happen to be about, but how they happen to be about them.
Native speakers obviously do not have to know what, say, a name happens to stand for in a
certain situation, or what the truth value of a sentence happens to be in a certain situation.
That is not necessarily part of their semantic competence. What is part of their semantic
competence is reference conditions, truth conditions:
If I utter the sentence: the chalk is under the table, it is not necessarily part of your semantic
competence that you know that that sentence is true or false. What is part of your semantic
competence is that, in principle, you're able to distinguish situations where that sentence is
true, from situations where it is false, i.e. that you know what it takes for a possible situation
to be the kind of situation in which that string of words, that sentence, is true, and what it
takes for a situation to be the kind of situation where that sentence is false.
The first thing to stress is: semantics is not interested in truth; semantics is interested in truth
conditions.
From this it follows too that we're not interested in truth conditions per se, but in
truth conditions relative to contextual parameters.
Take the sentence: I am behind the table. The truth of this sentence depends on who the
speaker is, when it is said, what the facts in the particular situation are like. But we're not
interested in the truth of this sentence, hence we're not interested in who is the speaker, when
it was said, and what the facts are like.
What we're interested in is the following: given a certain situation (any situation) at a certain
time where a certain speaker (any speaker) utters the above sentence, and certain facts obtain
in that situation (any combination of facts): do we judge the sentence true or false under
those circumstantial conditions?
A semantic theory assumes that when we have set such contextual parameters, native
speakers have the capacity to judge the truth or falsity of a sentence in virtue of the meanings
of the expressions involved, i.e. in virtue of their semantic competence. And that is what
we're interested in.
To summarize: a semantic theory contains a theory of aboutness and this will include a
theory of truth conditions.
Given the above, when I say truth, I really mean, truth relative to settings of contextual
parameters.
Furthermore, given what I said before about realistic vs. idealistic interpretations of the
domain of non-linguistic entities that the expressions are about, you should not necessarily
think of truth in an absolute or realistic way: that depends on your ontological assumptions.
If you think that semantics is directly about the real world as it is in itself, then truth means
truth in a real situation. If you think that what we're actually talking about is a level of
shared information about the 'real' world, then situations are shared conceptualizations,
structurings of the real world, and truth means truth in a situation which is a structuring
of reality. This difference has very few practical consequences for most actual semantic
work: it concerns the interpretation of the truth definition rather than its formulation.
This is a gross overstatement, but for all the phenomena that we will be concerned with in
this course, this is true enough.
Specifying a precise theory of truth conditions, makes our semantic theory testable. We
have a general procedure for defining a notion of entailment in terms of truth conditions.
Once we have formulated a theory of the truth conditions of sentences containing the
linguistic expressions whose semantics we are studying, our semantic theory gives a theory
of what entailments we should expect for such sentences. Those predictions we can compare
with our judgments, the intuitions concerning the entailments that such sentences actually
have.
2. Compositionality.
The interpretation of a complex expression is a function of the interpretations of its parts and
the way these parts are put together.
Semantic theories differ of course in what semantic entities are assumed to be the
interpretations of syntactic expressions. They share the general format of a compositional
interpretation theory, which is often called 'the rule-to-rule format of interpretation'. (The
terminology is slightly misleading because the interpretation theory is not necessarily
married to a particular rule-based view of syntax.)
Let us assume that we have a certain syntactic structure, say, a tree T. We can regard this
syntactic structure as built through certain syntactic operations from its parts.
For instance, the tree:

          S
        /   \
      NP     VP
      │     /   \
    john   V     NP
           │     │
         kiss   mary
can be built by applying the following syntactic operations to the lexical items John, kiss,
and Mary:
S[ NP[John],VP[ V[Kiss],NP[Mary] ] ]
Where:
NP[α] is the result of forming a tree with mother node NP and daughter node α; similarly for
V[α];
VP[α,β] is the result of forming a tree with mother node VP, left daughter α and right daughter
β; similarly for S[α,β].
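The syntactic operations just described can be sketched as tree-building functions. This is only an illustration; the representation of trees as nested tuples and the function names are my own choices, not part of the notes.

```python
# Sketch: the syntactic operations NP[ ], V[ ], VP[ , ], S[ , ] as
# tree-building functions. Trees are represented as nested tuples
# (mother, daughter1, ..., daughtern) -- an illustrative choice.

def unary(mother, daughter):
    """mother[daughter]: a tree with mother node `mother` and one daughter."""
    return (mother, daughter)

def binary(mother, left, right):
    """mother[left, right]: a tree with a left and a right daughter."""
    return (mother, left, right)

# S[ NP[John], VP[ V[kiss], NP[Mary] ] ]
s = binary("S",
           unary("NP", "john"),
           binary("VP", unary("V", "kiss"), unary("NP", "mary")))

assert s == ("S", ("NP", "john"), ("VP", ("V", "kiss"), ("NP", "mary")))
```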
In a compositional theory of interpretation, we choose semantic entities as the interpretations
of the parts: say, m(john), m(kiss) and m(Mary) (and, again, which semantic entities these
are depends on our semantic theory).
And we assume that corresponding to each (relevant) operation for building up syntactic
structure, i.e. for each operation on syntactic structures, there corresponds a semantic
operation on the semantic interpretations of those structures.
Thus, the syntactic operation NP[ ] will be interpreted as a semantic operation m(NP)( ).
While NP[ ] is an operation that takes a lexical item and gives you a tree, m(NP) is an
operation that takes the meaning of that lexical item and gives you the meaning of the tree.
Similarly, with VP[ , ] there will be a corresponding operation m(VP)( , ), which takes the
meanings of the V and the NP and gives as output the meaning of the VP.
In this way, the compositional interpretation theory is able to provide a compositional
semantics for complex expressions based on the meanings of their parts and the way they are
put together. For instance, the meaning of our example sentence will be:
m(S)( m(NP)(m(John)),m(VP)( m(V)(m(Kiss)),m(NP)(m(Mary)) ) ).
Of course, what precise predictions this semantics makes about the meaning of the sentence
John kiss Mary depends on what semantic entities we happen to choose here, and what
semantic operations.
One thing does follow already at this level: if our semantic theory determines that two
syntactic expressions α and β have the same meaning, and α occurs in an expression φ, then
also the result of replacing α by β in φ (and leaving the syntactic operations the same), φ[β/α],
has the same meaning as φ:
Substitution of meaning: if m(α) = m(β) then m(φ[β/α]) = m(φ)
This follows from compositionality.
Look at the trees:

      A            A
     /  \         /  \
    B    C       B    D
and assume that m(C) = m(D).
The meaning of the first tree is:
m(A) [ m(B),m(C) ]
The meaning of the second tree is:
m(A) [ m(B),m(D) ]
Obviously, this is the same meaning.
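The argument above can be made concrete in a few lines of code. The choice of meanings (plain Python values) and of the semantic operation (tupling under the label) is purely illustrative; the point is only that any compositional meaning function validates substitution of meaning.

```python
# Sketch of compositionality and substitution of meaning. Lexical
# meanings and the semantic operation are illustrative placeholders.

lexicon = {"B": "b-meaning", "C": "shared", "D": "shared"}   # m(C) = m(D)

def m(tree):
    """The meaning of a tree, computed from the meanings of its parts
    and the way they are put together."""
    if isinstance(tree, str):                      # lexical item
        return lexicon[tree]
    label, *daughters = tree
    # the semantic operation corresponding to the syntactic operation label[...]
    return (label,) + tuple(m(d) for d in daughters)

tree1 = ("A", "B", "C")
tree2 = ("A", "B", "D")        # the result of substituting D for C
assert m(tree1) == m(tree2)    # substitution of meaning preserves meaning
```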
Of course, since semantic theories differ in their notion of meaning, semantic theories differ
in which expressions this holds for.
As we will see, in extensional theories, meanings are identified with extensions, and hence
substitution of expressions with the same extension preserves the extension (truth value) of
the whole. In intensional theories, meanings are not identified with extensions, hence there
is no requirement that in general substitution of expressions with the same extension will
yield complex expressions with the same extension; but, as we will see, meanings are
identified with intensions, and hence substitution of expressions with the same intension
will lead to complex expressions with the same intension.
But at this level of generality, it doesn't matter if we think about meanings as fried bananas:
if two expressions α and β are interpreted as the same fried banana, and φ[α] is a complex
expression containing α, then the fried banana which is the interpretation of φ[α] is the same
as the fried banana which is the interpretation of φ[β/α].
3. The logical language.
We interpret natural language in structured domains of meanings. When we go beyond the
simplest natural language constructions, these domains and the meanings they contain tend to
become rather complicated. That is, when we use a metalanguage to describe the content of
these domains and these meanings, it becomes very hard to see which meaning we are
dealing with, what its properties are, whether or not two such metalanguage descriptions of
meanings describe the same thing, etc. This is because these meanings tend to be
complicated functions and the metalanguage of functions tends to be rather unreadable.
It is instructive to compare the situation with what is going on in your computer. Different
states that your computer can be in can be described as states of being on and off of a wide
array of switches in your computer. An action of the computer consists in a series of changes
of a large number of these switches. Such a change of switches corresponds to a meaning.
You could directly instruct the computer to do something by offering it a description of how
to set all its switches in order. This corresponds to a machine language instruction.
Such an instruction is a description of a meaning, but the problem is that it is unreadable: it
is very difficult to tell in machine language code which actual meaning you are dealing with,
and what follows from a certain action.
For that reason we design programming languages. Their use is purely to make life easier for
us, and what their ingredients are is in this way purely functionally determined: we put in
these languages, whatever facilitates their readability and their easy use.
The idea behind programming languages is the following. A programming language is
designed in such a way that it has a fixed and understood relation to the machine language.
In other words, we make sure that the interpretation of the programming language in the
machine language (the domain of meanings) is fixed and given. In using the programming
language we translate our instructions, what we want the computer to do in the programming
language (which is easy if the programming language is rich enough and well designed), and
rely on the given relation between the programming language and the machine language for
this translation to be interpreted correctly into the correct action (the correct meaning). Since
we know a lot about the programming language, since the programming language gives a
way of making meanings (machine language instructions) readable to us, and since we rely
on a lot of known properties of the programming language (like entailment, which actions
entail other actions), we can in practice avoid working directly at the level of meanings,
computer actions, but rather we do all our work at the level of the programming language,
and assume that that has the right results at the level of meanings, because we have set up the
relation of interpretation between the programming language and the level of meanings
correctly.
This is exactly the way in which we use logical languages in modeltheoretic semantics. It is
often too complicated to work directly at the level of meanings all the time. Hence, we
define our structures of meanings, we define a suitable, useful logical language, in which we
put whatever makes things easy for us. We make sure that the interpretation of the logical
language in the domains of meanings is well defined. And then we define the compositional
interpretation of our natural language in two steps:
-we give a compositional interpretation of every expression and formation rule of the
logical language in the domains of meaning.
-we give a compositional translation of every expression and formation rule of the syntax
into our logical language.
This gives us in two steps the compositional interpretation of our natural language
expressions and the rules of their formation in the domain of meanings:
-the interpretation function ⟦ ⟧ for the logical language is a function which associates with
every expression of the logical language a meaning.
-the translation function tr for the natural language syntax is a function which associates with
every expression of the natural language syntax its translation in the logical language.
-The composition of these two functions is a function from natural language expressions to
meanings: if α is a natural language expression, then ⟦tr(α)⟧ is its associated meaning.
A fact about these functions tr and ⟦ ⟧ is that:
If ⟦ ⟧ gives a compositional interpretation of logical language L into domain of
meanings M and tr gives a compositional translation of natural language N into
logical language L, then the composition ⟦tr( )⟧ gives a compositional interpretation
of natural language N into domain of meanings M.
This means, among others, that the role of the logical language is indeed purely to make
things easier for us: in a compositional semantics, we could always skip that level, not give
an indirect interpretation through translation and interpretation, but give the result of the
composition directly, i.e. associate directly with every natural language expression the result
of interpreting its translation: the level of the logical language is superfluous (in the same
sense that strictly speaking the programming language is superfluous).
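The two-step setup is literally function composition, as a toy sketch makes plain. The single translation entry and its interpretation below are placeholder data, not an analysis from the notes.

```python
# Toy illustration of indirect interpretation: translate a natural
# language expression into the logical language (tr), then interpret
# the result in the model. The entries are illustrative placeholders.

tr = {"John walks": "WALK(j)"}        # translation into the logical language
interpretation = {"WALK(j)": 1}       # interpretation of the logical language

def meaning(nl_expr):
    """The composed function: interpret the translation of nl_expr."""
    return interpretation[tr[nl_expr]]

assert meaning("John walks") == 1
```

The composed function `meaning` mentions the logical language only internally, which is the sense in which the intermediate level is in principle dispensable.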
But of course, in practice the logical language is far from superfluous, because we
understand much more easily what meaning we associate with an expression by looking at its
translation in the logical language than by looking in the model, and we have techniques for
proving easily whether two expressions of the logical language are interpreted as the same
meaning, while that can be very difficult to see directly in the model.
I will stress the conventional nature of the logical language at various points in this course
and show some of the choices that you may want to make, depending on what you are
studying.
These logical languages all tend to look basically the same and their interpretation tends to
be along the same lines as well. Before we get into any details, it may be good to just give
you, with an example (predicate logic), the general structure of logical languages and the
general structure of their interpretation. To some extent, if you remember the ingredients of
an arbitrary logical language and the components of the interpretation process, you will
realize that in all the languages that you come across, exactly the same goes on, and you will
learn in studying their properties, to just ignore what is the same as in every other language
and directly look for what is special in this one.
4. The semantic interpretation of logical languages.
4.a. The syntax of the logical language.
The syntax of logical languages tends to be very simple, meant to bring out, in as simple a
way as possible, the ingredients that the language is meant to describe.
Predicate logic is a language to talk about the interactions between the following ingredients:
-predicates and relations.
-quantification over individuals.
-sentence connectives.
-the relation of identity.
The predication relation ( ,..., ), the quantifiers ∀ and ∃, the connectives ¬, ∧, ∨, →, and
identity = are called the logical constants. The meaning of these expressions is fixed and the
same for every model and every interpretation.
These are the expressions whose semantics is the focus of study in this language.
The language further contains expressions whose whole function is that their interpretation
can vary within one model through the quantification mechanism: variables. And it contains
expressions whose meaning depends on the model, but is, at this level of description,
assigned rather arbitrarily (because at this level of description we are not interested in what
their precise meaning is, only in which kind of meaning they are assigned). These expressions are
called the non-logical constants. It is appropriate to think of the non-logical constants as
those lexical items whose meaning we are not trying to fix in complete detail in the
semantics, expressions that, for the sake of our studying the interaction of the meanings of
expressions containing them, and the semantic contribution of, say, connectives and
quantifiers, we keep as primitives.
In specifying the syntax of a language like predicate logic, we specify what the non-logical
constants (and variables) are, and based on that, we define recursively all the ways of
forming complex expressions of the language, in particular, formulas.
A language of predicate logic L:
VAR = {x1,x2,...}
a (countably infinite) set of individual variables.
CON_L = {c1,c2,...}
a set of individual constants (at most countably infinite)
for every n > 0:
PREDn_L = {P1,P2,...} a set of n-place predicate constants (at most countably infinite)
TERM_L = CON_L ∪ VAR
(terms are individual constants or variables)
We complete the definition of the syntax of L with a recursive definition of the set of all
wellformed formulas of L:
FORM_L is the smallest set such that:
1. if P ∈ PREDn_L and t1,...,tn ∈ TERM_L then P(t1,...,tn) ∈ FORM_L
2. if t1,t2 ∈ TERM_L then (t1=t2) ∈ FORM_L
3. if φ,ψ ∈ FORM_L then ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ) ∈ FORM_L
4. if x ∈ VAR and φ ∈ FORM_L then ∀xφ, ∃xφ ∈ FORM_L
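The recursive definition can be mirrored directly as a data type. Below, formulas are tagged tuples; the tag names ("pred", "not", "all", ...) and the checker are illustrative choices, not notation from the notes.

```python
# Sketch: the recursive syntax of predicate logic as Python data.
# A formula is a tagged tuple, one tag per formation rule.

# ∀x1 ∃x2 KISS(x1, x2):
phi = ("all", "x1",
       ("some", "x2",
        ("pred", "KISS", ("x1", "x2"))))

def is_formula(e):
    """A (partial) check that e is built by the formation rules above."""
    tag = e[0]
    if tag == "pred":                     # P(t1,...,tn)
        return isinstance(e[2], tuple)
    if tag == "=":                        # (t1 = t2)
        return len(e) == 3
    if tag == "not":
        return is_formula(e[1])
    if tag in ("and", "or", "imp"):
        return is_formula(e[1]) and is_formula(e[2])
    if tag in ("all", "some"):
        return isinstance(e[1], str) and is_formula(e[2])
    return False

assert is_formula(phi)
```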
4.b the semantics of the logical language.
The semantics of the logical language consists of three parts:
1. a definition of a possible model for the language.
2. a definition of the interpretation of an expression in a model, for any expression and any
model: the truth definition.
3. a definition of entailment in terms of the truth definition.
A model for the language always consists of two components: a structure and an
interpretation for the non-logical constants.
The structure of the model can be thought of as a possible structuring of the world, with just as
much structure as the logical language we are interpreting requires. In the case of predicate
logic we are only interested in the structure of the world in so far as it allows us to express
predication and quantification over individuals. For this purpose it suffices to assume that
the structure of the world in so far as predicate logic is concerned is just a set of individuals.
More precisely, it is a set of individuals, a set of two truth values, and a set-theoretic structure
determined by those, but all the latter is predictable from the basic set of individuals, and it is
our habit not to mention what is predictable (a habit from which we will deviate when we want
to, as we will see later).
If we put other things in our language, then the structures of our models will become richer.
For instance, if we include temporal operators in our language and expressions that make
temporal reference, we will add a temporal domain to our models, which will be a structure
of moments of time, ordered by a temporal ordering. If we add expressions that make event
reference, we will add a structure of events, etc.
So, a model consists of a structure and an interpretation. The structure determines what kinds
of things our expressions can refer to, what kinds of things they can quantify over.
The interpretation determines what the facts are. If we have basic predicates in our language
like LOVE and KISS, then the structure of the model determines who is there to stand in
those relations or not. The interpretation of the non-logical constants determines the facts: it
determines who loves whom and who doesn't, who kisses whom and who doesn't etc. In this
way, the structure and the interpretation together make the model into a possible structuring
of the world: determining what there is and what basic relations happen to hold.
Since the semantics will specify truth conditions rather than truth, it follows that we are
never interested in one model, but rather in defining truth for an arbitrary model for the
language, i.e. in defining how the truth of a complex sentence in a model depends on the
truth of its parts, and how the truth of a sentence varies across different models.
Thus, we define for our predicate logical language:
A model for predicate logical language L is a pair:
M = <D,F>, where:
1. D, the domain of M, is a non-empty set (the domain of individuals)
2. F, the interpretation function of M for the non-logical constants of L is a function
such that:
a. for every c ∈ CON_L: F(c) ∈ D
b. for every P ∈ PREDn_L: F(P) ∈ pow(Dn)
Here Dn = {<d1,...,dn>: d1,...,dn ∈ D},
the set of all n-tuples of elements of domain D.
pow(X), the powerset of X, is the set of all subsets of X.
Hence pow(Dn) is the set of all subsets of the set of all n-tuples of elements of D. This
means that each n-place predicate P is interpreted by F as some set of n-tuples, an n-place
relation.
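A concrete toy model of this shape can be written down directly. The domain, the constant j, and the predicate KISS below are illustrative placeholders.

```python
# A toy model M = <D, F> for a language with one individual constant j
# and one 2-place predicate KISS (all names illustrative).

D = {"john", "mary"}
F = {
    "j": "john",                  # F(c) ∈ D
    "KISS": {("john", "mary")},   # F(KISS) ∈ pow(D²): a set of pairs
}

assert F["j"] in D                                        # F(c) ∈ D
assert all(len(t) == 2 and set(t) <= D for t in F["KISS"])  # F(KISS) ⊆ D²
```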
As is well known, predicate logical formulas contain expressions, namely variables, that are not
interpreted by the interpretation function in the model. For them, we add special devices that
take care of their interpretation: assignment functions:
Let M = <D,F>.
An assignment function (on model M) is a function g from VAR into D.
i.e. an assignment function is any function g that assigns every variable in VAR an object in
D.
Furthermore, we define for any assignment function g, variable x, and object d ∈ D:
g[x/d] is that assignment function from VAR into D such that:
1. for every variable y ∈ VAR−{x}: g[x/d](y) = g(y)
2. g[x/d](x) = d
I.e. g[x/d] is that assignment that differs at most from g in that it assigns d to x.
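The update operation on assignments is easy to sketch: an assignment is a finite map from variables to individuals, and updating at x leaves every other variable untouched. The representation as a dict is an illustrative choice.

```python
# Sketch: assignments as dicts from variables to individuals, and the
# update operation that the quantifier clauses rely on.

def update(g, x, d):
    """The assignment like g except that it maps x to d."""
    h = dict(g)    # copy g, so g itself is unchanged
    h[x] = d
    return h

g = {"x1": "john", "x2": "mary"}
h = update(g, "x1", "mary")
assert h["x1"] == "mary"
assert h["x2"] == g["x2"]     # unchanged outside x1
assert g["x1"] == "john"      # g itself is not modified
```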
Given a model and an assignment function, we now have the means of specifying the
interpretation, in any given model and relative to any given assignment function, of all the
non-logical constants and variables. The truth definition extends this to a full interpretation for
any possible wellformed expression of our language L.
Thus, the truth definition defines for any wellformed expression α of our language L: ⟦α⟧M,g,
the interpretation of α in model M relative to assignment g.
Truth definition for L: ⟦α⟧M,g
Given model M = <D,F> and assignment function g:
Interpretation of terms and predicates:
1. if c ∈ CON_L then ⟦c⟧M,g = F(c)
2. if P ∈ PREDn_L then ⟦P⟧M,g = F(P)
3. if x ∈ VAR then ⟦x⟧M,g = g(x)
Interpretation of formulas:
1. ⟦P(t1,...,tn)⟧M,g = 1 iff <⟦t1⟧M,g,...,⟦tn⟧M,g> ∈ ⟦P⟧M,g; 0 otherwise
2. ⟦(t1=t2)⟧M,g = 1 iff ⟦t1⟧M,g = ⟦t2⟧M,g; 0 otherwise
⟦¬φ⟧M,g = 1 iff ⟦φ⟧M,g = 0; 0 otherwise
⟦(φ ∧ ψ)⟧M,g = 1 iff ⟦φ⟧M,g = 1 and ⟦ψ⟧M,g = 1; 0 otherwise
⟦(φ ∨ ψ)⟧M,g = 1 iff ⟦φ⟧M,g = 1 or ⟦ψ⟧M,g = 1; 0 otherwise
⟦(φ → ψ)⟧M,g = 1 iff ⟦φ⟧M,g = 0 or ⟦ψ⟧M,g = 1; 0 otherwise
3. ⟦∀xφ⟧M,g = 1 iff for every d ∈ D: ⟦φ⟧M,g[x/d] = 1; 0 otherwise
⟦∃xφ⟧M,g = 1 iff for some d ∈ D: ⟦φ⟧M,g[x/d] = 1; 0 otherwise
We have now given a complete compositional interpretation for our language L. Given any
model M and assignment function g, ⟦α⟧M,g is well defined for any expression α of L.
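Because the truth definition is recursive on the shape of the formula, it can be run as a program over finite models. The sketch below follows the clauses one by one; the tuple encoding of formulas and the toy model are illustrative choices.

```python
# A runnable sketch of the truth definition: evaluate(phi, M, g)
# computes the truth value of a formula in model M = (D, F) relative
# to assignment g (a dict). Formulas are tagged tuples.

def term(t, M, g):
    """Interpret a term: constants via F, variables via g."""
    D, F = M
    return F[t] if t in F else g[t]

def evaluate(phi, M, g):
    D, F = M
    tag = phi[0]
    if tag == "pred":                           # P(t1,...,tn)
        _, P, ts = phi
        return tuple(term(t, M, g) for t in ts) in F[P]
    if tag == "=":
        return term(phi[1], M, g) == term(phi[2], M, g)
    if tag == "not":
        return not evaluate(phi[1], M, g)
    if tag == "and":
        return evaluate(phi[1], M, g) and evaluate(phi[2], M, g)
    if tag == "or":
        return evaluate(phi[1], M, g) or evaluate(phi[2], M, g)
    if tag == "imp":
        return (not evaluate(phi[1], M, g)) or evaluate(phi[2], M, g)
    if tag == "all":                            # ∀x φ: check every g[x/d]
        _, x, body = phi
        return all(evaluate(body, M, {**g, x: d}) for d in D)
    if tag == "some":                           # ∃x φ: check some g[x/d]
        _, x, body = phi
        return any(evaluate(body, M, {**g, x: d}) for d in D)
    raise ValueError(phi)

D = {"john", "mary"}
F = {"j": "john", "KISS": {("john", "mary")}}
M = (D, F)
# ∃x2 KISS(j, x2) is true: the pair <john, mary> is in F(KISS)
phi = ("some", "x2", ("pred", "KISS", ("j", "x2")))
assert evaluate(phi, M, {})
# ∀x1 KISS(j, x1) is false: <john, john> is not in F(KISS)
psi = ("all", "x1", ("pred", "KISS", ("j", "x1")))
assert not evaluate(psi, M, {})
```

Note how the quantifier clauses use the assignment update `{**g, x: d}`, which is exactly g[x/d].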
The third aspect of the semantics is the definition of entailment in terms of the semantics.
This tends to be the same (or very similar) for the kinds of logical languages that we study
here.
First we define:
A sentence is a formula without free variables (relying on some definition of free and bound
occurrences of variables in an expression).
Let φ be a sentence of L.
⟦φ⟧M, the interpretation of φ in M (independent of assignment functions), is defined as
follows:
⟦φ⟧M = 1, φ is true in M, iff for every assignment function g: ⟦φ⟧M,g = 1
⟦φ⟧M = 0, φ is false in M, iff for every assignment function g: ⟦φ⟧M,g = 0
Let X be a set of sentences of L and φ a sentence of L:
X ╞ φ, X entails φ, iff for every model M for L:
if for every δ ∈ X: ⟦δ⟧M = 1 then ⟦φ⟧M = 1
i.e. X entails φ iff in every model where all the premises δ in X are true, the conclusion φ is
true as well.
We say that φ entails ψ iff {φ} entails ψ.
φ and ψ are logically equivalent iff φ entails ψ and ψ entails φ.
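Entailment quantifies over all models, which cannot be checked mechanically in general; but over a finite list of candidate models one can at least search for countermodels. The toy check below, with illustrative names, verifies that P(c) ∧ Q(c) entails Q(c) (and not conversely) by enumerating every interpretation of two one-place predicates over a two-element domain.

```python
# Toy entailment check: enumerate all interpretations of two 1-place
# predicates P, Q over D = {0, 1}, with the constant c fixed, and look
# for countermodels.

from itertools import combinations

def subsets(s):
    """All subsets of s (the possible extensions of a 1-place predicate)."""
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def ev(phi, F):
    """Truth value of a variable-free formula under interpretation F."""
    tag = phi[0]
    if tag == "pred":                    # P(c): F(c) ∈ F(P)
        return F[phi[2]] in F[phi[1]]
    if tag == "and":
        return ev(phi[1], F) and ev(phi[2], F)
    raise ValueError(phi)

D = {0, 1}
models = [{"c": 0, "P": P, "Q": Q} for P in subsets(D) for Q in subsets(D)]

premise    = ("and", ("pred", "P", "c"), ("pred", "Q", "c"))
conclusion = ("pred", "Q", "c")

# no countermodel: wherever the premise is true, so is the conclusion
assert all(ev(conclusion, F) for F in models if ev(premise, F))
# the converse fails: some model makes Q(c) true but not P(c) ∧ Q(c)
assert not all(ev(premise, F) for F in models if ev(conclusion, F))
```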
5. Adapting logical languages to your needs.
The predicate logical language given here is just an example of how logical languages work
and how they are interpreted. What I want to show now is what you typically do, when you
want to make a logical language of your own to suit the needs of your semantic theory. What
I want to stress here is that some of the changes in the language and/or its semantics you have
to make are obvious in the light of what you want the language to express; most other changes
are just straightforward accommodations of the language and its semantics to the new
format, which don't really change anything; yet other changes are options that you can choose
to make the language and its interpretation as easy and readable as possible for your
purposes.
Predicate logic is a non-temporal language, in the sense that the language does not express
any sensitivity to time. Sentences are just true or false in the model, you can't express that a
sentence like John walks is true at one moment and false at another, because you haven't
included the facilities for temporal reference in your language or in your semantics. I want to
discuss two ways of modifying predicate logic and its semantics to include temporal
reference. Since this is only an example, my aims are very modest: the only thing that I want
to be able to do in the new language, which I can't do in predicate logic, is to say
that John walks is true at some moment of time and false at another moment. So I am not
here introducing tenses or temporal operators, only the variation of reference over time.
There are typically two kinds of ways of extending predicate logic and its semantics to
incorporate this:
-keep the same language, leave the temporal reference implicit in the language, and work it
into the semantics.
-work the temporal reference explicitly into the language.
I will discuss both.
5.a. Temporal semantics for predicate logic.
Our language is exactly the same language L of predicate logic.
Since we want to be able to have the interpretations of expressions vary with time, we will
have to change our models in two ways:
1. we have to change the structure: we have to include a domain of times in our structures.
2. we have to change the interpretation function. The interpretation function tells us what the
basic facts are in our model. But now that we are going to include times, we need to be able
to say what the basic facts are at different moments of time.
In predicate logic, the interpretation function specifies the basic facts by specifying the
interpretation of the non-logical constants, in particular the predicate constants in the model.
In temporal predicate logic, the interpretation function will specify the basic facts at
different moments of time, by being sensitive to moments of time, by specifying the
interpretations of the predicate constants at different moments of time.
Thus, we change our notion of a model for language L:
A model for L is a triple:
M = <D,I,F>, where:
1. D, the domain of individuals, is a non-empty set.
2. I, the domain of moments of time, is a non-empty set.
3. D ∩ I = Ø
4. F is the interpretation function for the non-logical constants.
-for every c ∈ CON_L: F(c) ∈ D
We do not here make the interpretation of the individual constants sensitive to time.
For predicate constants, F is a function that maps every n-place predicate and a moment of
time on a set of n-tuples of individuals:
-for every P ∈ PREDn_L and for every i ∈ I: F(P,i) ∈ pow(Dn)
This incorporates the variability with respect to time:
If we have a predicate like WALK, then F(WALK,i), the set of walkers at moment i, can be a
different set from F(WALK,j), the set of walkers at moment j, and hence - as we will see - the
truth value of sentences containing this predicate can vary over time.
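A time-sensitive interpretation function is easy to write down concretely: F now maps a predicate together with a moment of time to an extension. The domain, times, and the predicate WALK below are illustrative placeholders.

```python
# Sketch: a time-indexed interpretation function F(P, i), represented
# as a dict keyed by (predicate, time) pairs.

D = {"john", "mary"}
I = {0, 1}                               # two moments of time

F = {
    ("WALK", 0): {("john",)},            # at time 0, only john walks
    ("WALK", 1): {("mary",)},            # at time 1, only mary walks
}

assert F[("WALK", 0)] != F[("WALK", 1)]  # the extension varies with time
```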
We have worked the variation over time into our models. We now have to work it into our
semantics. Since the extension of expressions can now vary relative to time, we can no
longer define recursively the basic relation ⟦α⟧M,g, the interpretation of expression α in model
M relative to assignment g, because, say, for a predicate P, that interpretation varies with
respect to time. Hence, we have to parameterize our notion of interpretation to time: we
have to give a definition of the relation ⟦α⟧M,g,i: the interpretation of expression α in model M,
relative to assignment g, at moment of time i.
This is a general procedure that you see over and over again: if the interpretation of an
expression varies relative to a parameter p, then the basic semantic interpretation relation that
is defined recursively will be parameterized for p.
Another way of saying this is: if the extension of an expression varies along parameter p,
then the meaning of that expression incorporates the pattern of variation of extension over
parameters p.
In general, the meaning of an expression α can be identified with the pattern of variation of
the extension of α across parameters, more precisely, as an intension: a function that tells
you for the relevant parameters what the extension of α relative to those parameters is.
The next step, then, is to parameterize the semantic relation ⟦ ⟧M,g to ⟦ ⟧M,g,i.
This is by and large trivial. You take each clause of the semantics for predicate logic given,
and you replace in that clause everywhere ⟦ ⟧M,g or ⟦ ⟧M,g[x/d] by ⟦ ⟧M,g,i or ⟦ ⟧M,g[x/d],i.
The only clauses that are changed more than this, that is, the only clauses where you have to
be careful, are typically the basic clauses: the interpretation clauses for the non-logical
constants (because you have been changing their interpretation) and possibly atomic
formulas (that depends on how you have changed the interpretation of the non-logical
constants), and clauses that were not in the language before (like the interpretation of
added temporal operators, which we haven't done here).
Sometimes other clauses change as well (like quantification clauses), but that tends to depend
on whether you want them to change. I.e. if you want quantification to be temporally
sensitive, you will make more changes to the models (like introduce temporally dependent
domains that change with time) and make quantification sensitive to that. For various
reasons to be discussed later, we tend not to do this in the languages we use in natural
language semantics.
So, the only serious change in the temporal semantics for our language L, now that we define
⟦α⟧M,g,i rather than ⟦α⟧M,g, concerns the interpretation of the non-logical constants:
⟦α⟧M,g,i
Let M = <D,I,F> be a model for L, g an assignment function, i ∈ I.
Interpretation of terms and predicates:
1. if c ∈ CONL then ⟦c⟧M,g,i = F(c)
2. if P ∈ PREDnL then ⟦P⟧M,g,i = F(P,i)
3. if x ∈ VAR then ⟦x⟧M,g,i = g(x)
We see that the only clause that has changed is the second clause. Relative to a moment of
time i, a predicate P denotes a set of n-tuples as before, possibly different sets at different
moments of time.
The rest of the semantics stays exactly as is, except with parameter i added. For
completeness' sake I will give it here:
Interpretation of formulas:
1. ⟦P(t1,...,tn)⟧M,g,i = 1 iff <⟦t1⟧M,g,i,...,⟦tn⟧M,g,i> ∈ ⟦P⟧M,g,i; 0 otherwise
2. ⟦(t1=t2)⟧M,g,i = 1 iff ⟦t1⟧M,g,i = ⟦t2⟧M,g,i; 0 otherwise
⟦¬φ⟧M,g,i = 1 iff ⟦φ⟧M,g,i = 0; 0 otherwise
⟦(φ ∧ ψ)⟧M,g,i = 1 iff ⟦φ⟧M,g,i = 1 and ⟦ψ⟧M,g,i = 1; 0 otherwise
⟦(φ ∨ ψ)⟧M,g,i = 1 iff ⟦φ⟧M,g,i = 1 or ⟦ψ⟧M,g,i = 1; 0 otherwise
⟦(φ → ψ)⟧M,g,i = 1 iff ⟦φ⟧M,g,i = 0 or ⟦ψ⟧M,g,i = 1; 0 otherwise
3. ⟦∀xφ⟧M,g,i = 1 iff for every d ∈ D: ⟦φ⟧M,g[x/d],i = 1; 0 otherwise
⟦∃xφ⟧M,g,i = 1 iff for some d ∈ D: ⟦φ⟧M,g[x/d],i = 1; 0 otherwise
Given that expressions can vary their truth value relative to moments of time, we are no
longer interested in the absolute notion of truth, but only in the notion of truth at a moment
of time. Apart from that, we define truth at a moment of time in the same way as we defined
truth before:
Let φ be a sentence, M a model, i a moment of time:
⟦φ⟧M,i = 1, φ is true in M at i, iff for every g: ⟦φ⟧M,g,i = 1
⟦φ⟧M,i = 0, φ is false in M at i, iff for every g: ⟦φ⟧M,g,i = 0
Let X be a set of sentences and φ a sentence:
X ╞ φ, X entails φ iff for every model M = <D,I,F> and every moment of time i ∈ I:
if for every δ ∈ X: ⟦δ⟧M,i = 1 then ⟦φ⟧M,i = 1
That is, in the notion of entailment, we quantify over models and moments of time. This is
because we want our notion of entailment to have content, at least as much content as the
notion had in predicate logic. In predicate logic we were able to say that it rains and it is
cold entails it rains. In our new logic we still want to be able to say that, and for that we
have to require that every moment of time where it rains and is cold is a moment of time
where it rains. The quantification over moments of time makes sure that in evaluating
whether it rains and it is cold entails it rains, only moments of time where it rains and it is
cold are taken into account.
This is again a standard procedure: when we parameterize our notion of truth, we will
quantify over that parameter in our notion of entailment. That will guarantee that the notion
of entailment that we get is an extension of the notion of entailment that we had before.
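The role the time parameter plays in entailment can be illustrated with a small computational sketch (my own toy example, not part of the notes): fix one model, represent each sentence by the set of moments at which it is true, and entailment-within-that-model becomes set inclusion. Keep in mind that real entailment also quantifies over all models; this sketch only spot-checks one.

```python
# One fixed model with three moments of time (hypothetical data).
times = {'i1', 'i2', 'i3'}

# For each sentence, the set of moments where it is true in this model.
true_at = {
    'rains':          {'i1', 'i2'},
    'cold':           {'i2', 'i3'},
    'rains_and_cold': {'i2'},          # intersection of the two sets above
}

def entails_in_M(premises, conclusion):
    """Every moment where all premises are true is one where the conclusion
    is true.  (Genuine entailment also quantifies over all models; this
    checks the time parameter within a single model.)"""
    common = set(times)
    for p in premises:
        common &= true_at[p]           # moments where all premises hold
    return common <= true_at[conclusion]

print(entails_in_M(['rains_and_cold'], 'rains'))  # True
print(entails_in_M(['rains'], 'cold'))            # False: i1 is rainy but not cold
```

The second call shows why quantifying over the parameter matters: a single moment (i1) where the premise is true and the conclusion false suffices to block the entailment.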
This completes the semantics. We now have incorporated temporal reference into the
semantics of our language and indeed we can express variation of truth under time.
Let WALK be a one-place predicate constant, and j an individual constant.
We can associate with the sentence John walks the formula WALK(j).
Let us assume that we have a model M = <D,I,F> such that i,i' ∈ I and:
F(j) = d
F(WALK,i) = {d}
F(WALK,i') = Ø
Then ⟦WALK(j)⟧M,i = 1 iff
for every g: ⟦WALK(j)⟧M,g,i = 1 iff
for every g: ⟦j⟧M,g,i ∈ ⟦WALK⟧M,g,i iff
for every g: F(j) ∈ F(WALK,i) iff
F(j) ∈ F(WALK,i) iff
d ∈ {d}
Hence ⟦WALK(j)⟧M,i = 1
Similarly ⟦WALK(j)⟧M,i' = 1 iff
for every g: ⟦WALK(j)⟧M,g,i' = 1 iff
for every g: ⟦j⟧M,g,i' ∈ ⟦WALK⟧M,g,i' iff
for every g: F(j) ∈ F(WALK,i') iff
F(j) ∈ F(WALK,i') iff
d ∈ Ø
Hence, since d ∉ Ø: ⟦WALK(j)⟧M,i' = 0
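The clause-by-clause semantics above can be turned directly into a small evaluator. The following Python sketch is my own illustration, not part of the notes: formulas are encoded as nested tuples (an assumption of this sketch), models interpret predicates per moment of time via F(P,i), and each branch mirrors one clause of the definition of ⟦ ⟧M,g,i (the = and → clauses are omitted for brevity).

```python
def interp(phi, M, g, i):
    """Compute [[phi]]M,g,i for the nested-tuple encoding assumed here."""
    D, I, Fc, Fp = M              # domain, times, F on constants, F(P,i) tables
    def term(t):                  # a term is a constant name or a variable name
        return Fc[t] if t in Fc else g[t]
    op = phi[0]
    if op == 'pred':              # ('pred', P, t1, ..., tn)
        P, args = phi[1], phi[2:]
        return tuple(term(t) for t in args) in Fp[(P, i)]
    if op == 'not':
        return not interp(phi[1], M, g, i)
    if op == 'and':
        return interp(phi[1], M, g, i) and interp(phi[2], M, g, i)
    if op == 'or':
        return interp(phi[1], M, g, i) or interp(phi[2], M, g, i)
    if op == 'forall':            # evaluate the body under g[x/d] for every d in D
        x, body = phi[1], phi[2]
        return all(interp(body, M, {**g, x: d}, i) for d in D)
    if op == 'exists':
        x, body = phi[1], phi[2]
        return any(interp(body, M, {**g, x: d}, i) for d in D)

# The model of the worked example: F(WALK,i) = {d}, F(WALK,i') = empty set.
D, I = {'d'}, {'i', 'iprime'}
M = (D, I, {'j': 'd'}, {('WALK', 'i'): {('d',)}, ('WALK', 'iprime'): set()})
walk_j = ('pred', 'WALK', 'j')
print(interp(walk_j, M, {}, 'i'))        # True:  [[WALK(j)]]M,i = 1
print(interp(walk_j, M, {}, 'iprime'))   # False: [[WALK(j)]]M,i' = 0
```

The final two lines reproduce the WALK(j) computation just carried out by hand.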
5b. Temporal predicate logic.
Above we have given a temporal interpretation for predicate logic: there we kept the
language the same and changed only the interpretation. We will now change the language and
incorporate the temporal reference explicitly into the expressions of our language.
We define a new language Lt, temporal predicate logic.
We add to the language a new set of variables, temporal variables:
TEMP = {i1,i2,...}
a set of countably many temporal variables.
TEMP ∩ VAR = Ø
So we have individual variables and temporal variables.
We keep all the other constants and variables the same:
VAR = {x1,x2,...}
CONLt = {c1,c2,...}
PREDnLt = {P1,P2,...}
TERMLt = CONLt ∪ VAR
The only real change comes in the definition of predication. Before we formed from an
n-place predicate P and n terms t1,...,tn a formula P(t1,...,tn). Now, we will assume that each
predication formula has an additional argument place which is filled by a temporal variable.
This is the only change we make in the syntax of the language. As can be seen from the
following definition, all the other clauses are exactly as they were before in L:
FORMLt is the smallest set such that:
1. if P ∈ PREDnLt and i ∈ TEMP and t1,...,tn ∈ TERMLt then P(i,t1,...,tn) ∈ FORMLt
2. if t1,t2 ∈ TERMLt then (t1=t2) ∈ FORMLt
3. if φ,ψ ∈ FORMLt then ¬φ, (φ ∧ ψ), (φ ∨ ψ), (φ → ψ) ∈ FORMLt
4. if x ∈ VAR and φ ∈ FORMLt then ∀xφ, ∃xφ ∈ FORMLt
One more thing that changes is the notion of a sentence: a sentence is a formula without free
individual variables (it can, and often will, have a free temporal variable).
Let us look at the semantics now and start, as before, with the models. Our aim is the same
for this language as it was for the previous one: we want to be able to express temporal
variation. For this reason, we will have to modify the predicate logical models in a very
similar way: we will have to extend the structures to include times, and we will have to
encode the variation of the facts relative to different moments of time.
Compare the atomic formulas of our previous temporally interpreted predicate logic with the
present one:
P(t1,...,tn)
P(i,t1,...,tn)
Before we kept the temporal dependency implicit: F interpreted, at each moment of time i,
n-place predicate P as a set of n-tuples F(P,i).
In the present language we represent n-place predicate P in the language by an n+1-place
predicate P, where the first place is a temporal place. We will follow the lead of the
language here and let F interpret P as an n+1-place relation: a set of n+1 tuples, where the
first element is a moment of time:
A model for Lt is a triple
M = <D,I,F> where,
1. D, the domain of individuals, is a non-empty set.
2. I, the set of moments of time, is a non-empty set.
3. D ∩ I = Ø
4. F, the interpretation function for the non-logical constants, is given as follows:
-for every c ∈ CONLt: F(c) ∈ D
-for every P ∈ PREDnLt: F(P) ∈ pow(I × Dn)
i.e. indeed F(P) is an n+1 place relation, a set of n+1 tuples in pow(I × Dn)
Here I × Dn = {<i,d1,...,dn>: i ∈ I and d1,...,dn ∈ D}
We interpret this as follows:
<i,d1,...,dn> ∈ F(P) means: at moment i, d1,...,dn stand in relation F(P)
The difference with the previous temporal semantics is slight: there we had:
<d1,...,dn> ∈ F(P,i) meaning: d1,...,dn stand in relation F(P,i)
Obviously, these are just two ways of incorporating the same information: clearly, if we
assume that apart from the temporal parameter these two interpretation functions do not
differ, we get that:
<i,d1,...,dn> ∈ FLt(P) iff <d1,...,dn> ∈ FL(P,i)
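The biconditional just stated can be checked mechanically. The sketch below (my own toy data, not from the notes) builds F_Lt(P) from a table of F_L(P,i) values and verifies that the two encodings carry exactly the same information.

```python
# Toy per-moment tables F_L(P,i), keyed by (predicate, moment) pairs.
F_L = {('WALK', 'i'): {('d',)}, ('WALK', 'iprime'): set()}

# Build the one-relation encoding F_Lt(P): a set of (n+1)-tuples with the
# moment of time in the first slot.
F_Lt = {'WALK': set()}
for (P, i), tuples in F_L.items():
    for tup in tuples:
        F_Lt[P].add((i,) + tup)

print(F_Lt['WALK'])   # {('i', 'd')}

# Check the biconditional: <i,d1,...,dn> in F_Lt(P) iff <d1,...,dn> in F_L(P,i)
for (P, i), tuples in F_L.items():
    for tup in [('d',)]:
        assert (((i,) + tup) in F_Lt[P]) == (tup in F_L[(P, i)])
```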
We have now two sets of variables: individual variables in VAR and temporal variables in
TEMP. We could just define the semantics relative to one type of variable assignment, g, if
we let g assign values to both individual and temporal variables. Mostly, when we introduce
different sorts of variables, that is exactly what we do. However, in our little example that
would make the definition of the notion of truth at a moment of time rather more complicated
than necessary. For that reason I will introduce two kinds of variable assignments, and
since our expressions contain two kinds of variables, we will evaluate them in a model
relative to both of those assignment functions.
Let M = <D,I,F> be a model for Lt.
As before, an (individual) assignment function g is a function from VAR into D.
We add: a (temporal) assignment function h is a function from TEMP into I.
So, just as g assigns every individual variable a value in D, h assigns every temporal variable
a value in I.
With this, we define the interpretation relation:
⟦α⟧M,g,h, the interpretation of expression α in model M, relative to individual assignment g
and temporal assignment h.
Given model M = <D,I,F> for Lt, individual assignment function g and temporal assignment
function h:
Interpretation of terms and predicates:
1. if c ∈ CONLt then ⟦c⟧M,g,h = F(c)
2. if P ∈ PREDnLt then ⟦P⟧M,g,h = F(P)
3. if x ∈ VAR then ⟦x⟧M,g,h = g(x)
4. if i ∈ TEMP then ⟦i⟧M,g,h = h(i)
We see that the interpretation is just as in predicate logic L, except that we have added the
obvious interpretation clause for temporal variables.
For the interpretation of formulas we see that, when we make the temporal variable an
explicit argument of the predication clause, the resulting semantic clause is exactly the
standard predicate logical interpretation clause (but for an n+1 place predicate rather than an
n-place predicate). All the other clauses only differ in that they contain assignment h at the
relevant evaluation clauses.
Interpretation of formulas:
1. ⟦P(i,t1,...,tn)⟧M,g,h = 1 iff <⟦i⟧M,g,h,⟦t1⟧M,g,h,...,⟦tn⟧M,g,h> ∈ ⟦P⟧M,g,h; 0 otherwise
2. ⟦(t1=t2)⟧M,g,h = 1 iff ⟦t1⟧M,g,h = ⟦t2⟧M,g,h; 0 otherwise
⟦¬φ⟧M,g,h = 1 iff ⟦φ⟧M,g,h = 0; 0 otherwise
⟦(φ ∧ ψ)⟧M,g,h = 1 iff ⟦φ⟧M,g,h = 1 and ⟦ψ⟧M,g,h = 1; 0 otherwise
⟦(φ ∨ ψ)⟧M,g,h = 1 iff ⟦φ⟧M,g,h = 1 or ⟦ψ⟧M,g,h = 1; 0 otherwise
⟦(φ → ψ)⟧M,g,h = 1 iff ⟦φ⟧M,g,h = 0 or ⟦ψ⟧M,g,h = 1; 0 otherwise
3. ⟦∀xφ⟧M,g,h = 1 iff for every d ∈ D: ⟦φ⟧M,g[x/d],h = 1; 0 otherwise
⟦∃xφ⟧M,g,h = 1 iff for some d ∈ D: ⟦φ⟧M,g[x/d],h = 1; 0 otherwise
Now we define for a sentence of Lt truth in M relative to temporal assignment h:
⟦φ⟧M,h = 1, φ is true in M relative to h, iff for every g: ⟦φ⟧M,g,h = 1
⟦φ⟧M,h = 0, φ is false in M relative to h, iff for every g: ⟦φ⟧M,g,h = 0
And entailment:
X ╞ φ, X entails φ iff for every model M and temporal assignment function h:
if for every δ ∈ X: ⟦δ⟧M,h = 1 then ⟦φ⟧M,h = 1
Now, strictly speaking we have not defined truth of a sentence in a model at a moment of
time, but truth of a sentence in a model relative to a temporal assignment. But we can
introduce truth at a moment.
Let us assume that one of the variables in TEMP, i0, has a special status: we treat it as a
variable indicating the 'now'.
Then we can define: φ is true in M at a moment of time i iff ⟦φ⟧M,h = 1, where h(i0) = i.
This completes the semantics.
Let us now look at the sentence John walks. As before we assume that our language has an
individual constant j, and a one place predicate constant WALK and our designated temporal
variable i0.
We now represent John walks in our language as: WALK(i0,j)
Again let us assume a model M = <D,I,F> where:
i,i' ∈ I
F(j) = d
F(WALK) = {<i,d>}
This means that at moment i, d is a walker, and at moment i', d is not a walker
(<i',d> ∉ F(WALK)).
Let h and h' be temporal assignment functions such that:
h(i0)=i and h'(i0)=i'
Then ⟦WALK(i0,j)⟧M,h = 1 iff
for every g: ⟦WALK(i0,j)⟧M,g,h = 1 iff
for every g: <⟦i0⟧M,g,h,⟦j⟧M,g,h> ∈ ⟦WALK⟧M,g,h iff
for every g: <h(i0),F(j)> ∈ F(WALK) iff
<h(i0),F(j)> ∈ F(WALK) iff
<i,d> ∈ F(WALK)
Hence ⟦WALK(i0,j)⟧M,h = 1
But
⟦WALK(i0,j)⟧M,h' = 1 iff
for every g: ⟦WALK(i0,j)⟧M,g,h' = 1 iff
for every g: <⟦i0⟧M,g,h',⟦j⟧M,g,h'> ∈ ⟦WALK⟧M,g,h' iff
for every g: <h'(i0),F(j)> ∈ F(WALK) iff
<h'(i0),F(j)> ∈ F(WALK) iff
<i',d> ∈ F(WALK)
Hence, since <i',d> ∉ F(WALK): ⟦WALK(i0,j)⟧M,h' = 0
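As with the previous semantics, this definition can be turned into a small evaluator; the Python sketch below is my own illustration, not part of the notes (the tuple encoding and names are assumptions). Predicates now denote sets of (n+1)-tuples with the moment of time in the first slot, and evaluation takes two assignments: g for individual variables and h for temporal variables.

```python
def interp_t(phi, M, g, h):
    """Compute [[phi]]M,g,h for the tuple encoding assumed here."""
    D, I, Fc, Fp = M              # domain, times, F on constants, F(P) relations
    def term(t):                  # a term is a constant name or a variable name
        return Fc[t] if t in Fc else g[t]
    op = phi[0]
    if op == 'pred':              # ('pred', P, i, t1, ..., tn): i a temporal variable
        P, tvar, args = phi[1], phi[2], phi[3:]
        return (h[tvar],) + tuple(term(t) for t in args) in Fp[P]
    if op == 'not':
        return not interp_t(phi[1], M, g, h)
    if op == 'and':
        return interp_t(phi[1], M, g, h) and interp_t(phi[2], M, g, h)
    if op == 'exists':            # individual quantification only, via g[x/d]
        x, body = phi[1], phi[2]
        return any(interp_t(body, M, {**g, x: d}, h) for d in D)

# WALK is now interpreted as a set of pairs <moment, individual>.
D, I = {'d'}, {'i', 'iprime'}
M = (D, I, {'j': 'd'}, {'WALK': {('i', 'd')}})
walk = ('pred', 'WALK', 'i0', 'j')
h, hprime = {'i0': 'i'}, {'i0': 'iprime'}
print(interp_t(walk, M, {}, h))       # True:  [[WALK(i0,j)]]M,h = 1
print(interp_t(walk, M, {}, hprime))  # False: [[WALK(i0,j)]]M,h' = 0
```

The two calls reproduce the hand computation above: the same formula is true relative to h and false relative to h', because only <i,d> is in F(WALK).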
We see that also in this semantics we deal with temporal variation across moments of time,
and in fact, the two semantic theories are equivalent (more precisely, intertranslatable).
Which of these is better is a matter of convenience.
Clearly, the first theory is shorter: we represent John walks as WALK(j), while in the second
theory we represent it as WALK(i0,j). When formulas get complicated, that brevity can be a
real advantage. Yet the second representation has advantages too: though it is not as short, it
makes all the parameters of interpretation explicit in the representation language. Again,
when formulas get complicated (in particular when complicated quantification takes place),
such explicitness is an advantage.
In the literature one finds both, and one finds mixtures of both as well (treating certain
parameters, like event arguments, through explicit variables, while leaving temporal or
modal parameters implicit).
Thus, when we incorporate a certain semantic phenomenon, like variation over time, in any
logical language we will have to make accommodations in the language and/or its semantics,
and very often we have to make the same or similar accommodations in them (because of the
semantic nature of the phenomenon). But there isn't one royal way of formulating the logical
language to be used. The language is a means for representing the semantic entities we
study, and what is the easiest and most perspicuous way of doing that is often a matter of
taste and experience.