Truth, validity, satisfaction, models and assignments.

When we come to interpret our first-order language, things look a little complicated at first. We have predicate letters, proper names, arbitrary names and variables to deal with. In L, we only needed to assign truth values to sentence letters. Now we need to assign meanings to all of these items, and then figure out how these meanings are to be combined so that atomic wffs and propositional functions get truth values; and of course it has to work differently for proper names, arbitrary names and variables, even though each of these items is ultimately treated as picking out some item in the domain. But there are only a few new tricks we need to learn, and a little thought should convince you that each trick accomplishes just what we need it to.

First, we have models. A model M is a pair <D, I>, where D is a collection (set) of objects called the domain of the model M, and I is an interpretation. This is already enough to allow us to deal with combinations of predicate letters and proper names. The interpretation assigns a member of the domain to each proper name, and a set of tuples drawn from the domain to each predicate letter. This is a very open-ended way to interpret the predicate letters: it doesn't require them to be only unary, or binary, or n-ary for some fixed n. But it makes things very flexible when it comes to settling truth values, and it causes no problems. If a set of 0-tuples is assigned (this would be either { }, the empty set, or {< >}, the set containing just the empty tuple), we treat the predicate letter just like a sentence letter in L: it gets the value 'true' if the set is non-empty, and 'false' if it's empty. If only 1-tuples or higher are assigned, the predicate letter will need to be followed by some proper names before we get something we can assign a truth value to. In that case, we have something of the form

Pp1…pn

where P is a predicate letter and p1…pn are proper names. The whole expression will be true (we say valid) in a model if and only if the tuple that I assigns to p1…pn, <I(p1),…,I(pn)>, is in the collection of tuples assigned to P by I. That is,

M ⊨ Pp1…pn iff <I(p1),…,I(pn)> ∈ I(P)

So far, so good. But of course this only handles proper names. We still need to figure out what to do with arbitrary names and variables. I doesn't assign values to them, because we want I to be fixed for a given interpretation of our language (the models M are, in the end, what really count for our interpretations). But the values assigned to variables and arbitrary names will not be fixed. Arbitrary names must pick out some particular item, but we don't care what it is, so we must consider every possibility (i.e. everything in the domain) when we think of the value assigned to one. And variables must be even more indeterminate, since they never really refer to any particular thing; they are always standing, ambiguously, for anything at all in the domain.

To assign values to these, we bring in an assignment. Names for assignments are lower-case Greek letters: λ, μ, ν. An assignment is always made in the context of a fixed model M. An assignment assigns members of the domain to every proper name, arbitrary name, and variable. But the assignment preserves the interpretation of the proper names, so for a proper name t, λ(t) = I(t). Now we have a simple rule for the truth (now we say satisfaction) of an atomic wff or propositional function in a model M at an assignment λ:

M, λ ⊨ Pt1…tn if <λ(t1),…,λ(tn)> ∈ I(P). (Otherwise, M, λ ⊭ Pt1…tn.)

Here t1 etc. stand for nominals of any sort at all: proper names, arbitrary names or variables. We say that the wff or propositional function is satisfied in the model M at the assignment λ, or not satisfied in the model M at the assignment λ.
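To make the atomic satisfaction clause concrete, here is a minimal Python sketch. Everything in it (the names Model and satisfies_atomic, the three-element domain, the proper names a and b, the arbitrary name e, the variable x, and the binary predicate L) is an ad hoc illustrative assumption of mine, not anything fixed by the text.

```python
# A minimal sketch of a model and an assignment, for atomic satisfaction only.
# All names and the toy domain are illustrative assumptions.

class Model:
    def __init__(self, domain, interpretation):
        self.domain = domain                    # D: a set of objects
        self.interpretation = interpretation    # I: names -> objects, predicates -> sets of tuples

M = Model(
    domain={1, 2, 3},
    interpretation={
        'a': 1,                 # I(a)
        'b': 2,                 # I(b)
        'L': {(1, 2), (2, 2)},  # the tuples I assigns to the binary predicate L
    },
)

# An assignment extends I to arbitrary names ('e') and variables ('x'),
# while preserving I on the proper names.
assignment = {'a': 1, 'b': 2, 'e': 3, 'x': 1}

def satisfies_atomic(model, assignment, predicate, terms):
    """M, lam |= P t1...tn  iff  <lam(t1),...,lam(tn)> is in I(P)."""
    values = tuple(assignment[t] for t in terms)
    return values in model.interpretation[predicate]

print(satisfies_atomic(M, assignment, 'L', ['a', 'b']))  # True:  <1,2> is in I(L)
print(satisfies_atomic(M, assignment, 'L', ['x', 'e']))  # False: <1,3> is not in I(L)
```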
A wff is valid in our model M if it's satisfied in M at every assignment; so validity (which is basically a notion of truth according to a model) considers all the ways of assigning values to arbitrary names and variables, holding only the proper names fixed. There is another level to this hierarchy of truth-like properties: some wffs are valid on every model, or universally valid. If our formal system is (weakly) complete and sound, these wffs will be all and only the theorems.

This also allows us to define entailment in our interpretation of Lq:

Γ ⊨ A iff for every model M and assignment λ, if M, λ ⊨ B for every B in Γ, then M, λ ⊨ A.

Now we can also say what strong completeness and soundness will require: that whenever Γ ⊨ A, Γ ⊢ A (completeness), and vice versa (soundness).

But we still need to deal with the connectives, since all we've done here is provide an account of the atomic wffs and propositional functions. The propositional connectives we handle in the old truth-functional way. So it's the quantifiers that we need to think through carefully here. And it's when we deal with the quantifiers that we will see the difference between our treatment of arbitrary names and our treatment of variables, which so far have been treated in exactly the same way by our assignments.

The key idea here is what Jennings calls an alternativeness relation. This is a relation that links a variable with two assignments. We say that μ is a v-alternative to λ if and only if μ agrees with λ except possibly on its assignment to v: for every other nominal v′ (i.e. every other variable, proper name and arbitrary name), μ(v′) = λ(v′). To write it out formally, μ is a v-alternative to λ if and only if, for all v′ ≠ v, μ(v′) = λ(v′). We write this μRvλ. Note that this does not require μ to differ from λ on v; it only allows it to differ. So λ is a v-alternative to itself: the v-alternative relation is reflexive. Further, since λ and μ differ, at most, on what they assign to v, λ is a v-alternative to μ if μ is one to λ. So the v-alternative relation is also symmetrical. Finally, if μ is a v-alternative to λ and ν is a v-alternative to μ, then ν is a v-alternative to λ. That is, the v-alternative relation is transitive.

These collections of alternatives that differ only on the assignment to one variable allow us to interpret the quantifiers; we have to consider them in order to differentiate between what variables do here and what arbitrary names do. Whether a quantified wff is satisfied in a model M at an assignment λ will depend not just on that assignment but on all the v-alternatives to that assignment, where v is the variable of quantification. The conditions for the universal and the existential quantifier are straightforward from here: in the case of the universal, the propositional function that we get when we strip off the quantifier must hold for every v-alternative; for the existential, the propositional function that we get when we strip off the quantifier needs to hold for at least one v-alternative.

M, λ ⊨ (∀v)A(v) if for every v-alternative μ to λ, M, μ ⊨ A(v). Otherwise, M, λ ⊭ (∀v)A(v).

M, λ ⊨ (∃v)A(v) if for some v-alternative μ to λ, M, μ ⊨ A(v). Otherwise, M, λ ⊭ (∃v)A(v).

This completes our interpretation of Lq. Models are really the key here; we settle values for models by considering all the assignments that go with that model, that is, by considering all the interpretations we could have for the arbitrary names and variables. Since no fixed interpretation is intended for these, we have to consider all those assignments if we want to come up with values that are really intended for an expression. If the expression ever turns out false (not satisfied) at an assignment, it is not one that is valid in our model.
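Here is a similarly rough, self-contained Python sketch of the v-alternative idea and the two quantifier clauses. The tuple representation of wffs and every name in it (v_alternatives, satisfies, the unary predicate F, the toy domain) are my own illustrative choices, not the text's notation.

```python
# Illustrative sketch of v-alternatives and the quantifier clauses.
# Wffs are ad-hoc tuples: ('F', 'x') is atomic; ('all', 'x', A) and ('some', 'x', A)
# are the universally and existentially quantified wffs.  All names are assumptions.

DOMAIN = {1, 2, 3}
INTERPRETATION = {'F': {(2,), (3,)}}        # I(F): the objects that are F

def v_alternatives(assignment, v):
    """Every assignment that agrees with `assignment` except possibly on v."""
    for d in DOMAIN:
        alt = dict(assignment)
        alt[v] = d
        yield alt

def satisfies(assignment, wff):
    kind = wff[0]
    if kind == 'all':        # satisfied iff every v-alternative satisfies the body
        _, v, body = wff
        return all(satisfies(mu, body) for mu in v_alternatives(assignment, v))
    if kind == 'some':       # satisfied iff some v-alternative satisfies the body
        _, v, body = wff
        return any(satisfies(mu, body) for mu in v_alternatives(assignment, v))
    predicate, *terms = wff  # atomic case, as in the earlier sketch
    return tuple(assignment[t] for t in terms) in INTERPRETATION[predicate]

lam = {'x': 1}
print(satisfies(lam, ('some', 'x', ('F', 'x'))))   # True: some x-alternative lands on 2 or 3
print(satisfies(lam, ('all', 'x', ('F', 'x'))))    # False: the x-alternative sending x to 1 fails

# Validity in this model: satisfied at every assignment (here only x matters).
print(all(satisfies({'x': d}, ('some', 'x', ('F', 'x'))) for d in DOMAIN))  # True
```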
The next step is to apply this interpretation to prove the fundamental theorems of soundness and completeness for the system Lq. We follow the same old pattern for soundness: we note that one-step proofs are always sound, because the assumption and the wff are one and the same. We then perform an induction to show that extending a sound proof by any one of the rules of Lq gives us a line corresponding to a sound sequent according to the rules for interpretation we've given. So we need to identify the assumptions that are 'in force' at the new line, according to the rule, use the induction assumption (which ensures that the previous steps are all sound), and argue that the wff written on the new line according to the new rule is indeed entailed by the assumptions listed for this new line (given the definition of Lq entailment that we've now got).

With this in mind, let's examine the proofs for each of our quantifier rules (the proofs for the other rules are just as in the proof of soundness for L). We begin with a warm-up event, where the basic moves are clear and there's nothing too tricky about restrictions. That is, we begin with UE and EI.

1. Assume that our extended proof now ends with a UE step. Then we know that our proof contains these lines:

Γi   (i)   (∀v)A(v)
.
.
Γk   (k)   A(t)        i, UE

Recall how this works: A(t) is the result of replacing every v in A(v) with t, and Γk = Γi. (See p. 131.) Now, we consider an arbitrary model and assignment, M and λ. What we need to show is that if this model and assignment satisfy Γk, then they satisfy A(t). (That is, that Γk ⊨ A(t).) Now we assume that M and λ do satisfy every member of Γk. If we can show just from this that they must also satisfy A(t), we're done. But Γk = Γi. And our hypothesis of induction says that any proofs shorter than this one are sound. So if M, λ ⊨ B for every B in Γk, then M, λ ⊨ B for every B in Γi, and M, λ ⊨ (∀v)A(v).

Now, applying our standard of satisfaction for quantified wffs, we know that (∀v)A(v) is satisfied in M at λ if and only if A(v) is satisfied in M at every v-alternative to λ. Further, we know that there is a v-alternative to λ (call it μ) such that μ(v) = λ(t), i.e. μ assigns as the value of v the very thing in the domain that λ assigns as the value of t. Like every v-alternative to λ, μ must satisfy A(v). But then M, λ ⊨ A(t): the only difference is that t appears here wherever v appears in A(v), but λ(t) is the very same individual as μ(v), so each of the two symbols v and t contributes exactly the same result to the satisfaction of A(v) and A(t) at μ and λ, respectively.

2. Assume that our extended proof now ends with an EI step. Then we know that our proof contains these lines:

Γi   (i)   A(t)
.
.
Γk   (k)   (∃v)A(v)        i, EI

Again, we begin with an arbitrary model and assignment, M, λ. Keep in mind that our satisfaction condition for existentially quantified wffs says that, so long as we're guaranteed that there is one v-alternative to λ that satisfies A(v), (∃v)A(v) is satisfied in M at λ. We assume that M, λ ⊨ B for every B in Γk. Again, by our rule for EI, Γk = Γi, and so, by our induction assumption, Γi ⊨ A(t). So M, λ ⊨ A(t). Let μ be an assignment that is a v-alternative to λ, and for which μ(v) = λ(t). Then M, μ ⊨ A(v) (since our specification of μ ensures that A(v) asserts exactly the same thing under M, μ as A(t) does under M, λ). But that means that μ is the required v-alternative to λ satisfying A(v), and so M, λ ⊨ (∃v)A(v).
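The semantic facts these two warm-up cases rely on can themselves be checked by brute force over a small finite model. The sketch below is only an illustration under assumptions of my own (a three-element domain, a binary predicate F, a proper name a, and all helper names are ad hoc): it confirms that wherever (∀v)Fva is satisfied its instance Faa is too, and wherever Faa is satisfied (∃v)Fva is.

```python
# Brute-force check, over every interpretation of a binary F and a proper name a
# on a three-element domain, of the semantic facts behind UE and EI.
# All names and the domain are ad-hoc choices for this sketch.

from itertools import product

DOMAIN = [1, 2, 3]
PAIRS = [(x, y) for x in DOMAIN for y in DOMAIN]

def v_alternatives(lam, v):
    for d in DOMAIN:
        yield {**lam, v: d}

for bits in product([False, True], repeat=len(PAIRS)):      # every I(F)
    F = {p for p, keep in zip(PAIRS, bits) if keep}
    for a_val in DOMAIN:                                     # every I(a)
        def sat_F(mu, t1, t2):                               # M, mu |= F t1 t2
            return (mu[t1], mu[t2]) in F
        for v_val in DOMAIN:                                 # every assignment to v
            lam = {'v': v_val, 'a': a_val}
            universal   = all(sat_F(mu, 'v', 'a') for mu in v_alternatives(lam, 'v'))
            instance    = sat_F(lam, 'a', 'a')               # A(t), with t the proper name a
            existential = any(sat_F(mu, 'v', 'a') for mu in v_alternatives(lam, 'v'))
            assert not universal or instance                 # the UE direction
            assert not instance or existential               # the EI direction

print("UE and EI directions hold in every such model and assignment")
```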
3. UI. With the warm-up done, we turn to Universal Intro. This is the central one, since we will appeal to this result in proving the other hard case, EE. Suppose that a thus-far sound proof is extended by a line using UI. Then the proof has the following lines:

Γi   (i)   A(e)
.
.
Γk   (k)   (∀v)A(v)        i, UI

Again, we suppose M, λ to be an arbitrary model, assignment pair. We assume that M, λ ⊨ B for every B in Γk, the assumptions in force at line k. We need to show that M, λ ⊨ (∀v)A(v). Recall that this requires that A(v) be satisfied in M at every v-alternative to λ. But Γk = Γi, so by our induction assumption (that the proof up to line i is sound) we know that M, λ ⊨ A(e).

Now we turn to consider an arbitrary v-alternative to λ, μ. Assume (for reductio) that M, μ ⊭ A(v). Consider another assignment, ν, such that ν is identical to μ except for its assignment to e, and ν(e) = μ(v). Then M, ν ⊭ A(e). But none of the assumptions in Γk include e. So if M, λ ⊨ B for every B in Γk, then M, ν ⊨ B for every B in Γk (ν differs from λ at most on what it assigns to v and to e, and the B's, being wffs rather than propositional functions, contain no free v). But then, since M, ν ⊭ A(e), this contradicts our induction assumption, which implies that line i is sound. This contradiction follows directly from the assumption that M, μ ⊭ A(v). So our assumption must be wrong: M, μ ⊨ A(v). But μ was just an arbitrary v-alternative to λ. So by our definition of satisfaction for universal quantifiers, M, λ ⊨ (∀v)A(v).

4. EE. This time we assume that the final step of our extended proof uses the rule EE. Then our proof must contain lines like these:

Γi   (i)   (∃v)A(v)
g    (g)   A(e)        A
.
Γj   (j)   C
Γk   (k)   C        i, g, j, EE

Again we start with an arbitrary model, assignment pair, M, λ, and we assume that M, λ ⊨ B for every B in Γk. But our rule for EE tells us that Γk = Γi ∪ (Γj − {g}). And by our induction hypothesis, Γi ⊨ (∃v)A(v). So M, λ ⊨ (∃v)A(v). Further, we could add a CP step following step (j), and write A(e) → C at that step, putting Γj − {g} down as the assumption list at the new step. So (since we know CP is sound) Γj − {g} ⊨ A(e) → C. Further, EE requires that C contain no instance of e, and the only assumption containing e that we used to obtain C is (g). Therefore A(e) → C depends on no assumption involving e. Therefore (since we've already shown that UI is sound) Γj − {g} ⊨ (∀v)(A(v) → C). Since Γj − {g} is included in Γk, and M, λ satisfies every member of Γk, M, λ ⊨ (∀v)(A(v) → C). And C contains no occurrence of v. But (see exercise 6.64 e) if C does not contain any instance of v, (∀v)(A(v) → C) ⊨ (∃v)A(v) → C. So M, λ ⊨ (∃v)A(v) → C. So we could replace our EE step with UI, the sequent of 6.64 e, and MPP to get C. All these are already known to be sound, so EE is sound: M, λ ⊨ C.

Remarks on exercise 6.64 e. This exercise requires you to show that, for any arbitrary M, λ, if M, λ ⊨ (∀x)(Fx → P), then M, λ ⊨ (∃x)Fx → P, and vice versa. Suppose that M, λ ⊨ (∀x)(Fx → P). Now suppose that M, λ ⊨ (∃x)Fx. The first requires that if μ is an arbitrary x-alternative to λ, then M, μ ⊨ Fx → P. The second requires that there be some x-alternative to λ, μ, such that M, μ ⊨ Fx. So M, μ ⊨ Fx → P and M, μ ⊨ Fx. Therefore (by the truth-table for →) M, μ ⊨ P. But μ is just an x-alternative to λ. So μ agrees with λ on its assignments to all variables and arbitrary names other than x. And P does not include x. So if M, μ ⊨ P, then M, λ ⊨ P. Therefore if M, λ ⊨ (∃x)Fx, then M, λ ⊨ P, and (by the truth-table for →) M, λ ⊨ (∃x)Fx → P. This argument in no way depends on how simple or complex F is; all that matters is that the variable of quantification must not appear in P. So the argument here completes our proof of the soundness of EE as claimed.
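As a rough cross-check on the sequent cited from exercise 6.64 e, here is a self-contained sketch that brute-forces one small domain and confirms that (∀x)(Fx → P) and (∃x)Fx → P never come apart there, however F and P (treated as a sentence letter) are interpreted. The three-element domain and all names are my own illustrative choices.

```python
# Brute-force check of the 6.64e pattern on one small finite domain:
# (forall x)(Fx -> P) and ((exists x)Fx) -> P get the same value however F and P
# are interpreted.  The domain and all names are ad-hoc choices for this sketch.

from itertools import chain, combinations

DOMAIN = [1, 2, 3]

def subsets(xs):
    return chain.from_iterable(combinations(xs, n) for n in range(len(xs) + 1))

for F_extension in map(set, subsets(DOMAIN)):     # every possible I(F)
    for P_true in (True, False):                  # every possible value for P
        forall_version = all((d not in F_extension) or P_true for d in DOMAIN)
        exists_version = (not any(d in F_extension for d in DOMAIN)) or P_true
        assert forall_version == exists_version

print("(forall x)(Fx -> P) and ((exists x)Fx) -> P never come apart on this domain")
```

Of course this only checks one finite domain; the general argument is the one given in the remarks above, which turns entirely on the fact that the variable of quantification does not occur in P.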