EPTCS 197
Proceedings of the
First International Workshop on
Focusing
Suva, Fiji, 23rd November 2015
Edited by: Iliano Cervesato and Carsten Schürmann
Published: 8th November 2015
DOI: 10.4204/EPTCS.197
ISSN: 2075-2180
Open Publishing Association
Table of Contents
Table of Contents ......... i
Preface (Iliano Cervesato and Carsten Schürmann) ......... ii
Towards the Automated Generation of Focused Proof Systems (Vivek Nigam, Giselle Reis and Leonardo Lima) ......... 1
Proof Outlines as Proof Certificates: A System Description (Roberto Blanco and Dale Miller) ......... 7
Realisability semantics of abstract focussing, formalised (Stéphane Graham-Lengrand) ......... 15
Multiplicative-Additive Focusing for Parsing as Deduction (Glyn Morrill and Oriol Valentín) ......... 29
Preface
This volume constitutes the proceedings of WoF’15, the First International Workshop on Focusing,
held on November 23rd, 2015 in Suva, Fiji. The workshop was a half-day satellite event of LPAR-20,
the 20th International Conference on Logic for Programming, Artificial Intelligence and Reasoning.
The program committee selected four papers for presentation at WoF’15, and inclusion in this volume. In addition, the program included an invited talk by Elaine Pimentel (UFRN, Brazil).
Focusing is a proof search strategy that alternates between two phases: an inversion phase, where invertible
sequent rules are applied exhaustively, and a chaining phase, where a formula is selected and decomposed
maximally using non-invertible rules. Focusing is one of the most exciting recent developments in
computational logic: it is complete for many logics of interest and provides a foundation for their use as
programming languages and rewriting calculi.
This workshop had the purpose of bringing together researchers who work on or with focusing, to
foster discussion and to report on recent advances. Topics of interest included: focusing in forward,
backward and hybrid logic programming languages, focusing in theorem proving, focusing for substructural logics, focusing for epistemic logics, focused term calculi, implementation techniques, parallelism
and concurrency, focusing in security, and pearls of focusing.
Many people helped make WoF’15 a success. We wish to thank the organizers of LPAR-20 for their
support. We are indebted to the program committee members and the external referee for their careful
and efficient work in the reviewing process. Finally we are grateful to the authors, the invited speaker
and the attendees who made this workshop an enjoyable and fruitful event.
November 2015

Iliano Cervesato
Carsten Schürmann

I. Cervesato and C. Schürmann (Eds.)
First Workshop on Focusing (WoF’15)
EPTCS 197, 2015, pp. ii–iii, doi:10.4204/EPTCS.197.0
© I. Cervesato and C. Schürmann
This work is licensed under the Creative Commons Attribution License.
Program Committee of WoF’15
• Iliano Cervesato (Carnegie Mellon University — co-chair)
• Kaustuv Chaudhuri (Inria)
• Paul Blain Levy (University of Birmingham)
• Chuck Liang (Hofstra University)
• Elaine Pimentel (Universidade Federal do Rio Grande do Norte)
• Carsten Schürmann (IT University of Copenhagen — co-chair)
Additional Reviewers
Giselle Reis.
Towards the Automated Generation of Focused Proof Systems
Vivek Nigam (Federal University of Paraíba, Brazil) vivek.nigam@gmail.com
Giselle Reis (Inria & LIX, France) giselle.reis@inria.fr
Leonardo Lima (Federal University of Paraíba, Brazil) leonardo.alfs@gmail.com
This paper tackles the problem of formulating and proving the completeness of focused-like proof
systems in an automated fashion. Focusing is a discipline on proofs which structures them into phases
in order to reduce proof search non-determinism. We demonstrate that it is possible to construct a
complete focused proof system from a given un-focused proof system if it satisfies some conditions.
Our key idea is to generalize the completeness proof based on permutation lemmas given by Miller
and Saurin for the focused linear logic proof system. This is done by building a graph from the rule
permutation relation of a proof system, called permutation graph. We then show that from the permutation graph of a given proof system, it is possible to construct a complete focused proof system,
and additionally infer for which formulas contraction is admissible. An implementation for building
the permutation graph of a system is provided. We apply our technique to generate the focused proof
systems MALLF, LJF and LKF for linear, intuitionistic and classical logics, respectively.
1 Introduction
In spite of their widespread use, the formulation of focused proof systems and the proofs of their completeness
remain ad hoc and difficult tasks, carried out for each individual system separately. For example, the original
completeness proof for the focused linear logic proof system (LLF) [1] is very specific to linear logic.
The completeness proofs for many focused proof systems for intuitionistic logic, such as LJF [5], LKQ
and LKT [3], are obtained by using non-trivial encodings of intuitionistic logic in linear logic.
One exception, however, is the work of Miller and Saurin [7], where they propose a modular way to
prove the completeness of focused proof systems based on permutation lemmas and proof transformations. They show that a given focused proof system is complete with respect to its unfocused version
by demonstrating that any proof in the unfocused system can be transformed into a proof in the focused
system. Their proof technique has been successfully adapted to prove the completeness of a number of
focused proof systems based on linear logic, such as ELL [11], µMALL [2] and SELLF [8].
This paper proposes a method for the automated generation of a sound and complete focused proof
system from a given unfocused sequent calculus proof system. Our approach uses as theoretical foundations the modular proof given by Miller and Saurin [7]. There are, however, a number of challenges
in automating such a proof for any given unfocused proof system: (1) not all proof systems seem to
admit a focused version; we define sufficient conditions based on the definitions in [7]. (2) Even if a
proof system satisfies such conditions, there are many design choices when formulating a focused version
of a system. (3) Miller and Saurin's proof cannot be directly applied to proof systems that have
contraction and weakening rules, such as LJ; focused proof systems, such as LJF and LKF, allow only
the contraction of some formulas. This result was obtained by non-trivial encodings in linear logic [6].
Here, we demonstrate that it can be obtained in the system itself, i.e., without a detour through linear
logic. (4) Miller and Saurin did not formalize why their procedure for transforming an unfocused proof
into a focused one terminates. It already seems challenging to do so for MALL, as permutations are not
necessarily size preserving (with respect to the number of inferences). We are still investigating general
conditions and this is left to future work.
In order to overcome these challenges, we introduce in Section 3 the notion of permutation graphs.
Our previous work [9, 10] showed how to check whether a rule permutes over another in an automated
fashion. We use these results to construct the permutation graph of a proof system. This paper then
shows that, by analysing the permutation graph of an unfocused proof system, we can construct possibly
different focused versions of this system, all sound and complete (provided a proof of termination is
given). We sketch in Section 4 how to check the admissibility of contraction rules.
2 Permutation Graphs
In the following we assume that we are given a sequent calculus proof system S which is commutative,
i.e., sequents are formed by multi-sets of formulas, and whose non-atomic initial and cut rules are admissible. We will also assume that whenever contraction is allowed then weakening is also allowed, that
is, our systems can be affine, but not relevant. Finally, we assume the reader is familiar with basic proof
theory terminology, such as main and auxiliary formulas, and formula ancestors.
Definition 1 (Permutability). Let α and β be two inference rules in a sequent calculus system S. We will
say that α permutes up β , denoted by α ↑ β , if for every S derivation of a sequent S in which α operates
on S and β operates on one or more of α ’s premises (but not on auxiliary formulas of α ), there exists
another S derivation of S in which β operates on S and α operates on zero or more of β ’s premises
(but not on β ’s auxiliary formulas). Consequently, β permutes down α (β ↓ α ).
Note that if there is no derivation in which β operates on α ’s premises without acting on its auxiliary
formulas (e.g., ∨r and ∧r in LJ), the permutation holds vacuously.
Definition 2 (Permutation graph). Let R be the set of inference rules of a sequent calculus system S. We
construct the (directed) permutation graph PS = (V, E) for S by taking V = R and E = {(α , β ) | α ↑ β }.
Definition 3 (Permutation cliques). Let S be a sequent calculus system and PS its permutation graph.
Consider PS∗ = (V ∗ , E ∗ ) the undirected graph obtained from PS = (V, E) by taking V ∗ = V and E ∗ =
{(α , β ) | (α , β ) ∈ E and (β , α ) ∈ E}. Then the permutation cliques of S are the maximal cliques¹ of PS∗ .
For LJ, we obtain the following cliques CL1 = {∧l , ∨l , →r , ∨r } and CL2 = {∧r , ∨r , →l }.
Permutation cliques can be thought of as equivalence classes for inference rules. For example, the
rule ∧l permutes over all rules in CL1 . Permutation cliques are not always disjoint. For example, the rule
∨r appears in both cliques.
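As an illustration of Definitions 2 and 3, the clique computation can be sketched in a few lines of Python. The rule names and the "permutes up" relation below are our own encoding of the mutual permutations behind the LJ cliques just listed; a real implementation would establish the relation by proving permutation lemmas (e.g., with the authors' tool Quati [10]).

```python
from itertools import combinations

# Sketch of Definitions 2-3. The directed "permutes up" relation below is
# illustrative: it encodes the mutual permutations behind the LJ cliques
# reported in the text (CL1 = {∧l, ∨l, →r, ∨r}, CL2 = {∧r, ∨r, →l}).
RULES = ["and_l", "or_l", "imp_r", "or_r", "and_r", "imp_l"]
PERMUTES_UP = set()  # edge (a, b) means a ↑ b
for group in [("and_l", "or_l", "imp_r", "or_r"), ("and_r", "or_r", "imp_l")]:
    for a, b in combinations(group, 2):
        PERMUTES_UP |= {(a, b), (b, a)}

def mutual_edges(perm):
    """Keep the edges that hold in both directions (Definition 3)."""
    return {frozenset(e) for e in perm if (e[1], e[0]) in perm}

def maximal_cliques(vertices, edges):
    """Bron-Kerbosch without pivoting; ample for small permutation graphs."""
    found = []
    def bk(r, p, x):
        if not p and not x:
            found.append(r)
        for v in list(p):
            nv = {u for u in vertices if frozenset((u, v)) in edges}
            bk(r | {v}, p & nv, x & nv)
            p, x = p - {v}, x | {v}
    bk(set(), set(vertices), set())
    return found

cliques = maximal_cliques(RULES, mutual_edges(PERMUTES_UP))
print(sorted(sorted(c) for c in cliques))
# → [['and_l', 'imp_r', 'or_l', 'or_r'], ['and_r', 'imp_l', 'or_r']]
```

The two maximal cliques coincide with CL1 and CL2, with or_r (∨r) appearing in both, as in the text.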
Definition 4 (Permutation partition). Let S be a proof system and PS its permutation graph. Then a
permutation partition P is a partition of PS such that each component is a complete graph. We will call
each component of such partitions a permutation component, motivated by the fact that inferences in the
same component permute over each other.
It is always possible to find such a partition by taking each component to be one single vertex, but
we are mostly interested in bi-partitions.
Although computing maximal cliques takes exponential time in general, it is still feasible here
since the permutation graph is usually small. The partitions can be obtained simply by assigning each
rule that is present in more than one clique to exactly one of them. Therefore, there might be many possible
¹ A clique in a graph G is a set of vertices such that all vertices are pairwise connected by an edge.
ways to partition the rules of a system. In what follows (Definition 6) we will define which are the
partitions that will yield a focused proof system. As we will see, the following partition will lead to LJF,
restricted to multiplicative conjunctions: C1 = {∧l , ∨l , →r } and C2 = {∧r , ∨r , →l }.
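The choice of where to place shared rules can itself be mechanized. The sketch below (same illustrative rule names as before; the helper bipartitions is our own) enumerates the bi-partitions obtained by assigning each shared rule of the two LJ cliques to exactly one side. For LJ the only shared rule is ∨r, giving exactly the partition above and the alternative partition mentioned in Section 3.

```python
from itertools import product

# Illustrative sketch: derive candidate bi-partitions from the two
# (overlapping) LJ permutation cliques by assigning each shared rule to
# exactly one side. Rule names are ours; or_r stands for ∨r, etc.
CL1 = frozenset(["and_l", "or_l", "imp_r", "or_r"])
CL2 = frozenset(["and_r", "or_r", "imp_l"])

def bipartitions(cl1, cl2):
    shared = sorted(cl1 & cl2)
    for sides in product((1, 2), repeat=len(shared)):
        c1, c2 = set(cl1), set(cl2)
        for rule, side in zip(shared, sides):
            (c2 if side == 1 else c1).discard(rule)  # keep rule on chosen side only
        yield frozenset(c1), frozenset(c2)

parts = set(bipartitions(CL1, CL2))
for c1, c2 in sorted(parts, key=lambda p: sorted(p[0])):
    print(sorted(c1), sorted(c2))  # two candidate bi-partitions
```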
Definition 5 (Permutation partition hierarchy). Let S be a proof system, PS its permutation graph and
P = C1 , ...,Cn a permutation partition. We say that Ci ↓ C j iff for every inference αi ∈ Ci and α j ∈ C j
we have that αi ↓ α j , i.e., α j ↑ αi or equivalently (α j , αi ) ∈ PS .
Notice that the partition hierarchy can be easily computed from the permutation graph. For the
partition used above, we have C1 ↓ C2 .
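Checking the hierarchy of Definition 5 is a direct scan over the directed edges. In the sketch below the edge set is our own stand-in: it records only the cross-component permutations that make C1 ↓ C2 hold for LJ, with intra-component edges omitted.

```python
# Sketch of Definition 5: checking Ci ↓ Cj against a directed permutation
# graph whose edge (a, b) means a ↑ b. Edge set and rule names are
# illustrative: every rule of C2 permutes up every rule of C1.
C1 = {"and_l", "or_l", "imp_r"}   # candidate negative component
C2 = {"and_r", "or_r", "imp_l"}   # candidate positive component
EDGES = {(pos, neg) for pos in C2 for neg in C1}

def below(ci, cj, edges):
    """Ci ↓ Cj iff every aj in Cj permutes up every ai in Ci."""
    return all((aj, ai) in edges for ai in ci for aj in cj)

print(below(C1, C2, EDGES), below(C2, C1, EDGES))
# → True False
```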
3 Focused Proof Systems Generation
We derive a focused proof system S f from the permutation partitions of a given proof system S if some
conditions are fulfilled. In this section we explain these conditions and prove that the induced focused
system is sound and complete with respect to S.
Definition 6 (Focusable permutation partition). Let S be a sequent calculus proof system and C1 , ...,Cn
a permutation partition of the rules in S. We say that it is a focusable permutation partition if:
• n = 2 and C1 ↓ C2 ;
• Every rule in component C2 has at most one auxiliary formula in each premise;
• Every non-unary rule in component C2 splits the context among the premises (i.e., there is no
implicit copying of context formulas on branching rules).
We call C1 the negative component and C2 the positive component following usual terminology from
the focusing literature (e.g. [5]) and classify formula occurrences in a proof as negative and positive
according to their introduction rules.
Observe that, in contrast to the usual approach, we do not assign polarities to connectives on their
own. Therefore the polarity of a formula can change depending on whether it occurs on the right or on
the left side of the sequent. As for now, we will only define permutation partitions of logical inference
rules. The structural inference rules will be treated separately. In particular, the role of contraction and
its relation to the partitions is discussed in Section 4.
The partition {C1 ,C2 } for LJ is a focusable permutation partition. Interestingly, LJ allows for other
focusable partitions, for example: C1 = {∧l , ∨l , →r , ∨r } and C2 = {∧r , →l }.
We conjecture that all proof systems derived from a focusable permutation partition are sound and
complete. It is not our goal here to justify which partition leads to a more suitable focused proof system,
as this would depend on the context where the proof system would be used.
Based on the focusable permutation partition, we can define a focused proof system for S. This
definition is syntactically different from those usually present in the literature. It will, in particular, force
the store and subsequent selection of a negative formula. This extra step is only for the sake of uniformity
and clear separation between phases (there will always be a “no phase” state between two phases).
Definition 7 (Focused proof system). Let S be a sequent calculus proof system and C1 ↓ C2 a focusable
permutation partition of the rules in S. Then we can define the focused system S f in the following way:
Sequents. S f sequents are of the shape Γ; Γ′ ⊢ p ∆; ∆′ , where p ∈ {+, −, 0} indicates a positive,
negative, or neutral sequent, respectively. We will call Γ′ and ∆′ the active contexts.
Inference Rules. For each rule α in S belonging to the negative (positive) component, S f will have a
rule α with conclusion and premises being negative (positive) sequents and main and auxiliary formulas
occurring in the active contexts.
Structural rules. The connection between the phases is done via the following structural rules.
Selection rules move a formula F to the active context. If F is negative, then p = −. If F is positive,
then there is no negative F ′ ∈ Γ ∪ ∆ and p = +. Store rules remove a formula F from the active context
if F is negative and p = +, or if F is positive and p = −. The end rule removes the label p ∈ {+, −} of
a sequent by setting it to 0 if the active contexts are empty.
$$
\frac{\Gamma; F \vdash_p \Delta; \cdot}{\Gamma, F; \cdot \vdash_0 \Delta; \cdot}\,sel_l
\qquad
\frac{\Gamma; \cdot \vdash_p \Delta; F}{\Gamma; \cdot \vdash_0 \Delta, F; \cdot}\,sel_r
\qquad
\frac{\Gamma, F; \Lambda \vdash_p \Delta; \Pi}{\Gamma; \Lambda, F \vdash_p \Delta; \Pi}\,st_l
\qquad
\frac{\Gamma; \Lambda \vdash_p \Delta, F; \Pi}{\Gamma; \Lambda \vdash_p \Delta; \Pi, F}\,st_r
\qquad
\frac{\Gamma; \cdot \vdash_0 \Delta; \cdot}{\Gamma; \cdot \vdash_p \Delta; \cdot}\,end
$$
An S f proof is characterized by sequences of inferences labeled with + or − which we will call phases.
Thus, we can say that selection rules are responsible for starting a phase and the end rule finishes a
phase. Between any two phases there is always a “neutral” state, denoted by a sequent labeled with 0.
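To make the bookkeeping concrete, the selection and end rules can be sketched as state transitions on labelled sequents, read bottom-up as proof-search steps. Everything below (the Sequent representation, the POLARITY table, the names sel_l and end) is our own illustrative encoding, not the paper's.

```python
from dataclasses import dataclass, replace

# Toy rendering of the structural rules of Definition 7 on sequents
# Γ; Λ ⊢p Δ; Π. Representation and polarity table are illustrative.
@dataclass(frozen=True)
class Sequent:
    gamma: tuple   # Γ
    lam: tuple     # Λ (left active context)
    delta: tuple   # Δ
    pi: tuple      # Π (right active context)
    p: str         # "+", "-" or "0"

POLARITY = {"A & B": "-", "A | B": "+"}  # illustrative left-polarity table

def sel_l(s, f):
    """Move F from Γ into the left active context, opening a phase."""
    assert s.p == "0" and f in s.gamma and not s.lam and not s.pi
    if POLARITY[f] == "+":
        # a positive formula may only be selected if no negative one is available
        assert all(POLARITY[g] != "-" for g in s.gamma + s.delta)
    g = list(s.gamma); g.remove(f)
    return replace(s, gamma=tuple(g), lam=(f,), p=POLARITY[f])

def end(s):
    """Return to the neutral state once the active contexts are empty."""
    assert s.p in ("+", "-") and not s.lam and not s.pi
    return replace(s, p="0")

s0 = Sequent(("A & B", "A | B"), (), (), (), "0")
s1 = sel_l(s0, "A & B")          # opens a negative phase on A & B
print(s1.p, s1.lam)
```

The store rules would be transcribed similarly, moving a formula back out of the active context with the polarity side conditions of Definition 7 as assertions.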
We can prove using the machinery given in [7] that the focused proof system obtained is complete.
There is one catch, however: one also needs to prove that the procedure to convert an unfocused proof
into a focused proof using permutation lemmas terminates. This was not formalized in [7], although one
can prove it. Finding general conditions is more challenging and is the subject of current investigation.
Conjecture 1 (Completeness of focused proof systems). A sequent Γ ⊢ ∆ is provable in S iff the sequent
Γ; · ⊢0 ∆; · is provable in S f .
4 Admissibility of contraction
During proof search, it is desirable to avoid unnecessary copying of formulas at each rule application,
either by not copying the same context in all premises or by not automatically contracting the main formula of a
rule application. Analysing where the contraction rule lies in the permutation cliques can give us
insights into when such copying can be avoided.
Definition 8 (Admissibility of contraction). Let S be a sequent calculus system with a set of rules R. We
say that contraction is admissible for R ′ ⊆ R if for every S derivation ϕ there exists an S derivation ϕ ′
such that contraction is never applied to main formulas of inferences in R ′ .
The intuitionistic system LJ is an example of a calculus in which contraction is not admissible for all
formulas. It is only complete if the main formula of the implication left rule is contracted [4].
The admissibility of contraction involves transformations which are similar to the rank reduction
rewriting rules of reductive cut-elimination. This is a special case of permutation checking, since the
upper inference must be applied to auxiliary formulas of the lower inference.
Definition 9 (Contraction permutation). Let S be a sequent calculus proof system, c one of its contraction
rules and α a logical rule applied to a formula Fα . We say that c ↑c α if a derivation composed by
contraction of Fα followed by applications of α to the contracted formulas can be transformed into a
derivation where α is applied to Fα and contraction is applied to the auxiliary formulas of α .
It is worth noting that many of the cases for contraction permutation rely on α being applied to all
contracted formulas in all premises where they occur. The proofs of such cases require a lemma stating
that α can be “pushed down” until the correct location.
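As a worked instance of this definition (our own example, using the additive left conjunction rule ∧ᵃₗ of LK), the case c ↑c ∧ᵃₗ says that a contraction on A ∧ B below two applications of ∧ᵃₗ, both selecting the first conjunct, can be replaced by a single application of ∧ᵃₗ above a contraction on the auxiliary formula A:

```latex
\frac{\dfrac{\dfrac{\Gamma, A, A \vdash \Delta}
                   {\Gamma, A, A \wedge B \vdash \Delta}\,\wedge_l^a}
            {\Gamma, A \wedge B, A \wedge B \vdash \Delta}\,\wedge_l^a}
     {\Gamma, A \wedge B \vdash \Delta}\,c
\quad\leadsto\quad
\frac{\dfrac{\Gamma, A, A \vdash \Delta}
            {\Gamma, A \vdash \Delta}\,c}
     {\Gamma, A \wedge B \vdash \Delta}\,\wedge_l^a
```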
If c ↑c α for some inference α , then contraction is admissible for that inference, as it can always be replaced
by contraction on its ancestors. To prove full admissibility of contraction in the calculus, it is necessary
to prove that contraction on atoms can be eliminated. We will not address this issue in this paper, but we
will analyse the behavior of contraction among the phases in a focused proof.
$$
\frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \otimes B \vdash \Delta}\,\otimes_l
\qquad
\frac{\Gamma, A \vdash \Delta \quad \Gamma, B \vdash \Delta}{\Gamma, A \oplus B \vdash \Delta}\,\oplus_l
\qquad
\frac{\Gamma, A \vdash \Delta \quad \Gamma', B \vdash \Delta'}{\Gamma, \Gamma', A \mathbin{O} B \vdash \Delta, \Delta'}\,O_l
\qquad
\frac{\Gamma, A_i \vdash \Delta}{\Gamma, A_1 \mathbin{N} A_2 \vdash \Delta}\,N_l
$$

$$
\frac{\Gamma \vdash \Delta, A \quad \Gamma' \vdash \Delta', B}{\Gamma, \Gamma' \vdash \Delta, \Delta', A \otimes B}\,\otimes_r
\qquad
\frac{\Gamma \vdash \Delta, A_i}{\Gamma \vdash \Delta, A_1 \oplus A_2}\,\oplus_r
\qquad
\frac{\Gamma \vdash \Delta, A, B}{\Gamma \vdash \Delta, A \mathbin{O} B}\,O_r
\qquad
\frac{\Gamma \vdash \Delta, A \quad \Gamma \vdash \Delta, B}{\Gamma \vdash \Delta, A \mathbin{N} B}\,N_r
$$

Figure 1: MALL logical inferences
$$
\frac{\Gamma, A_i \vdash \Delta}{\Gamma, A_1 \wedge A_2 \vdash \Delta}\,\wedge_l^a
\qquad
\frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \wedge B \vdash \Delta}\,\wedge_l^m
\qquad
\frac{\Gamma, A \vdash \Delta \quad \Gamma, B \vdash \Delta}{\Gamma, A \vee B \vdash \Delta}\,\vee_l^a
\qquad
\frac{\Gamma, A \vdash \Delta \quad \Gamma', B \vdash \Delta'}{\Gamma, \Gamma', A \vee B \vdash \Delta, \Delta'}\,\vee_l^m
$$

$$
\frac{\Gamma \vdash \Delta, A \quad \Gamma \vdash \Delta, B}{\Gamma \vdash \Delta, A \wedge B}\,\wedge_r^a
\qquad
\frac{\Gamma \vdash \Delta, A \quad \Gamma' \vdash \Delta', B}{\Gamma, \Gamma' \vdash \Delta, \Delta', A \wedge B}\,\wedge_r^m
\qquad
\frac{\Gamma \vdash \Delta, A_i}{\Gamma \vdash \Delta, A_1 \vee A_2}\,\vee_r^a
\qquad
\frac{\Gamma \vdash \Delta, A, B}{\Gamma \vdash \Delta, A \vee B}\,\vee_r^m
$$

Figure 2: Additive and multiplicative logical inferences of the LK system.
Definition 10 (Admissibility of contraction in a phase). Let S be a sequent calculus proof system and
C1 ,C2 a focusable permutation partition. We say that contraction is admissible in phase i if every S proof
can be transformed into a proof where contraction is never applied to main formulas of rules α ∈ Ci .
Theorem 1. Let S be a sequent calculus system, C its contraction rules, C1 ,C2 a focusable permutation
partition. If for all c ∈ C and α ∈ Ci , c ↑ α and c ↑c α , then contraction is admissible in phase i.
The focused proof is obtained by only contracting formulas that can be introduced by C2 rules. It is
easy to extend Definition 7 to enforce this as done in LJF and LKF [5].
5 Case studies
Given the permutation cliques, it is up to the user to analyse them and decide which partition to use for
the focused proof system. As case studies we will show how the focused proof systems LKF, LJF and
MALLF can be obtained from LK, LJ and MALL respectively using the permutation cliques.
MALL MALL stands for multiplicative additive linear logic (without exponentials) and its rules are
depicted in Figure 1. A focused system, MALLF, for this calculus was proposed in [1].
Given the logical inferences of MALL, the permutation cliques found were the following: CL1 =
{⊗l , ⊕l , Or , Nr , Nl , ⊕r } and CL2 = {⊗r , ⊕r , Ol , Nl }, with the relation CL1 ↓ CL2 . The following focusable permutation partition corresponds to MALLF: C1 = {⊗l , ⊕l , Or , Nr } and C2 = {⊗r , ⊕r , Ol , Nl }.
Notice that all invertible rules are in C1 , while all positive rules are in C2 as expected.
LK and LJ In order to derive the focused system LKF for classical logic from LK, all variations of
inferences must be considered. We need to take into account the additive and multiplicative versions of
each conjunction and disjunction, as depicted in Figure 2. In principle an analysis could be made with
the usual presentation of the LK system, but it would certainly not result in LKF. Asserting that we can
generate a well-known focused system serves as a validation of our method.
The permutation cliques for the inferences in Figure 2 are: CL1 = {∧ᵃᵣ, ∧ᵐₗ, ∨ᵐᵣ, ∨ᵃₗ, ∧ᵃₗ, ∨ᵃᵣ} and
CL2 = {∧ᵐᵣ, ∧ᵃₗ, ∨ᵃᵣ, ∨ᵐₗ}, where CL1 ↓ CL2. Analogous to MALL, we can drop the two last rules from CL1 and
obtain a focusable permutation partition which corresponds to the propositional fragment of LKF.
By analysing the permutation relation of contraction to the rules in the partitions, we observe that
it permutes up (↑ and ↑c) all the inferences in CL1 \ {∧ᵃₗ, ∨ᵃᵣ}. Therefore, it is admissible in the negative
phase. For the positive phase, on the other hand, contraction will not permute up, for example, ∧ᵃₗ. We
can thus conclude that such a system must have contraction for positive formulas².
² This contraction is implicit in the decide rule and the positive rules in the usual presentation of LKF.
The case for LJ is completely analogous to that of LK when considering the partition:
CL1 = {∧ᵐₗ, ∧ᵃᵣ, ∨ₗ, →ᵣ, ∧ᵃₗ, ∨ᵣ} and CL2 = {∧ᵃₗ, ∧ᵐᵣ, ∨ᵣ, →ₗ}.
6 Conclusion
This paper proposed a method for automatically devising focused proof systems for sequent calculi. Our
aim was to provide a uniform and automated way to obtain the sound and complete systems without
using an encoding in linear logic. The main element in our solution is the permutation graph of a sequent
calculus system. By using this graph we can separate the inferences into positives and negatives and also
reason on the admissibility of contraction. The permutation graph represents the permutation lemmas
used in the proof in [7]. We extended the method developed in [7] to handle contraction.
For future work, we plan to apply/extend our technique to other proof systems in order to obtain
sensible focused proof systems. There are, however, some more foundational challenges in doing so. We
would need to extend the conditions used here for determining whether a partition is focusable. For example, non-commutative and bunched proof systems have even more complicated structural restrictions.
It is not even clear what the focusing discipline for these proof systems would be. We expect that our
existing machinery may help make some of these decisions by investigating different partitions.
Although we can deduce in which phase the contraction of formulas is admissible, it is still unclear
if the position of this rule in the permutation graph can indicate exactly which rules do not admit contraction. We expect to further investigate the permutation graphs of other systems to find out if this and
other properties can be discovered.
References
[1] Jean-Marc Andreoli (1992): Logic Programming with Focusing Proofs in Linear Logic. Journal of Logic and
Computation 2(3), pp. 297–347, doi:10.1093/logcom/2.3.297.
[2] David Baelde (2008): A linear approach to the proof-theory of least and greatest fixed points. Ph.D. thesis,
Ecole Polytechnique.
[3] V. Danos, J.-B. Joinet & H. Schellinx (1995): LKT and LKQ: sequent calculi for second order logic based
upon dual linear decompositions of classical implication. In: Advances in Linear Logic, pp. 211–224,
doi:10.1017/CBO9780511629150.011.
[4] Roy Dyckhoff (1992): Contraction-free sequent calculi for intuitionistic logic. Journal of Symbolic Logic
57(3), pp. 795–807, doi:10.2307/2275431.
[5] Chuck Liang & Dale Miller (2007): Focusing and Polarization in Intuitionistic Logic. In: Computer Science
Logic, LNCS 4646, Springer, pp. 451–465, doi:10.1007/978-3-540-74915-8_34.
[6] Chuck Liang & Dale Miller (2009): Focusing and Polarization in Linear, Intuitionistic, and Classical Logics.
Theoretical Computer Science 410(46), pp. 4747–4768, doi:10.1016/j.tcs.2009.07.041.
[7] Dale Miller & Alexis Saurin (2007): From proofs to focused proofs: a modular proof of focalization in Linear
Logic. In: CSL 2007, LNCS, pp. 405–419, doi:10.1007/978-3-540-74915-8_31.
[8] Vivek Nigam (2009): Exploiting non-canonicity in the sequent calculus. Ph.D. thesis, Ecole Polytechnique.
[9] Vivek Nigam, Giselle Reis & Leonardo Lima (2013): Checking Proof Transformations with ASP. In: ICLP
(Technical Communications), 13.
[10] Vivek Nigam, Giselle Reis & Leonardo Lima (2014): Quati: An Automated Tool for Proving Permutation
Lemmas. In: 7th IJCAR Proceedings, pp. 255–261, doi:10.1007/978-3-319-08587-6_18.
[11] Alexis Saurin (2008): Une étude logique du contrôle (appliquée à la programmation fonctionnelle et
logique). Ph.D. thesis, Ecole Polytechnique.
Proof Outlines as Proof Certificates: A System Description
Roberto Blanco (Inria & LIX, École Polytechnique)
Dale Miller (Inria & LIX, École Polytechnique)
We apply the foundational proof certificate (FPC) framework to the problem of designing high-level
outlines of proofs. The FPC framework provides a means to formally define and check a wide range
of proof evidence. A focused proof system is central to this framework and such a proof system
provides an interesting approach to proof reconstruction during the process of proof checking (relying
on an underlying logic programming implementation). Here, we illustrate how the FPC framework
can be used to design proof outlines and then to exploit proof checkers as a means for expanding
outlines into fully detailed proofs. In order to validate this approach to proof outlines, we have built
the ACheck system that allows us to take a sequence of theorems and apply the proof outline “do the
obvious induction and close the proof using previously proved lemmas”.
1 Introduction
Inference rules, as used in, say, Frege proofs (a.k.a. Hilbert proofs) are usually greatly restricted by
limitations of human psychology and by what skeptics are willing to trust. Typically, checking the
application of inference rules involves simple syntactic checks, such as deciding on whether or not two
premises have the structure A and A ⊃ B and the conclusion has the structure B. The introduction of
automation into theorem proving has allowed us to engineer inference steps that are more substantial
and include both computation and search. In recent years, a number of proof theoretic results allow
us to extend that literature from being a study of minuscule inference rules (such as modus ponens or
Gentzen’s introduction rules) to a study of large scale and formally defined “synthetic” inference rules.
In this paper, we briefly describe the ACheck system in which we build and check proof outlines as
combinations of such synthetic rules.
Consider the following inductive specification of addition of natural numbers:

Define plus : nat -> nat -> nat -> prop by
  plus z N N ;
  plus (s N) M (s P) := plus N M P.
where z and s denote zero and successor, respectively. (Examples will be displayed using the syntax of
the Abella theorem prover [2]: this syntax should be familiar to users of other systems, such as Coq.)
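For readers who prefer an executable reading, the two clauses can be transcribed as a recursive membership test for the least relation they generate. The encoding below (strings and pairs for naturals, the name plus_holds) is our own sketch and is not part of Abella or the ACheck system.

```python
# A small transcription (ours, not ACheck's) of the relational reading of
# plus: naturals are "z" or ("s", n), and the two clauses of the Abella
# definition above are unfolded recursively.
def plus_holds(n, m, p):
    if n == "z":                           # clause: plus z N N
        return m == p
    if n[0] == "s" and isinstance(p, tuple) and p[0] == "s":
        return plus_holds(n[1], m, p[1])   # clause: plus (s N) M (s P) := plus N M P
    return False

one = ("s", "z")
two = ("s", one)
three = ("s", two)
print(plus_holds(one, two, three))   # → True, since 1 + 2 = 3
```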
When this definition is introduced, we should establish several properties immediately, e.g., that the
addition relation is determinate and total.
Theorem plustotal : forall N, nat N -> forall M, nat M -> exists S, plus N M S.

Theorem plusdeterm : forall N, nat N -> forall M, nat M ->
  forall S, plus N M S -> forall T, plus N M T -> S = T.
Anyone familiar with proving such theorems knows that their proofs are simple: basically, the obvious
induction leads quickly to a final proof. Of course, if we wish to prove more results about addition, one
may need to invent and prove some lemma before simple inductions will work. For example, proving
the commutativity of addition makes use of two additional lemmas.
Theorem plus0com : forall N, nat N -> plus N z N.

Theorem plusscom : forall M, nat M -> forall N, nat N ->
  forall P, plus M N P -> plus M (s N) (s P).

Theorem pluscom : forall N, nat N -> forall M, nat M ->
  forall S, plus N M S -> plus M N S.
These three lemmas have the same high-level proof outline: use the obvious induction invariant, apply
some previously proved lemmas and the inductive hypothesis, and deal with any remaining case analysis.
The fact that many theorems can be proved by using induction-lemmas-cases is well-known and built
into existing theorem provers. For example, the waterfall model of the Boyer-Moore prover [6] proves
such theorems in a similar fashion (but for inductive definitions of functions). Twelf [11] can often prove
that some relations are total and functional using a series of similar steps [12]. The tactics and tacticals of
LCF have been used to implement procedures that attempt to find proofs using these steps [13]. Finally,
TAC [4] attempts to follow such a procedure as well but in a rather fixed and inflexible fashion.
In this paper, we present an approach to describing the simple rules that can prove a given lemma
based on previously proved lemmas. Specifically, we define proof certificates that describe the structure
of a proof outline that we expect and then we run a proof checker on that certificate to see if the certificate
can be elaborated into a full proof of the lemma.
2 A focused proof system
Consider the two introduction rules in Figure 1. If one attempts to prove sequents by reading these
rules from conclusion to premises, then these rules need either information from some external source
(e.g., an oracle providing the i ∈ {1, 2} or the term t) or some implementation support for
non-determinism (e.g., unification and backtracking search).

$$
\frac{\Gamma \vdash B_i}{\Gamma \vdash B_1 \vee B_2}
\qquad
\frac{\Gamma \vdash B[t/x]}{\Gamma \vdash \exists x.B}
$$

Figure 1: From the LJ calculus

It is meaningless to use Gentzen's sequent calculus to directly support proof automation. Consider,
for example, attempting to prove the sequent Γ ⊢ ∃x∃y[(p x y) ∨ ((q x y) ∨ (r x y))], where Γ contains,
say, a hundred formulas. The search for a (cut-free) proof of this sequent can confront the need to
choose from among a hundred-and-one introduction rules. If we choose the right-side introduction rule,
we will then be left with, again, a hundred-and-one introduction rules to apply to the premise. Thus,
reducing this sequent to, say, Γ ⊢ (q t s) requires picking one path of choices in a space of 10¹⁴ choices.

$$
\frac{\Gamma \vdash B_i \Downarrow}{\Gamma \vdash B_1 \vee B_2 \Downarrow}
\qquad
\frac{\Gamma \vdash B[t/x] \Downarrow}{\Gamma \vdash \exists x.B \Downarrow}
$$

Figure 2: Focusing annotations
Focused proof systems address this explosion by organizing rules into two phases. For example,
the rules in Figure 1 are written instead as Figure 2. Here, the formula on which one is introducing a
connective is marked with the ⇓: as a result, reducing the sequent Γ ⊢ ∃x∃y[(p x y) ∨ ((q x y) ∨ (r x y))] ⇓
to Γ ⊢ (q t s) ⇓ involves only those choices related to the formula marked for focus: no interleaving of
other choices needs to be considered.
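To make the arithmetic of choices concrete, here is a small Python sketch; the encoding of formulas as nested tuples and the function `focused_paths` are our own illustration, not code from any system described in this paper. It enumerates the atoms reachable in a single ⇓ phase on the example formula, counting only the choice points that occur on the focused formula itself.

```python
# The goal formula: exists x exists y [(p x y) v ((q x y) v (r x y))].
# 'exists' nodes cost one choice (the witness term, abstracted away here);
# 'or' nodes cost one choice (left or right disjunct).
formula = ("exists", ("exists", ("or", "p x y", ("or", "q x y", "r x y"))))

def focused_paths(f, choices=0):
    """Enumerate (atom, number-of-choice-points) pairs reachable in one
    focused phase: the focus never leaves the marked formula."""
    if isinstance(f, str):                     # an atom ends the phase
        return [(f, choices)]
    tag = f[0]
    if tag == "exists":                        # one choice: the witness term
        return focused_paths(f[1], choices + 1)
    if tag == "or":                            # one choice: which disjunct
        return (focused_paths(f[1], choices + 1)
                + focused_paths(f[2], choices + 1))

paths = focused_paths(formula)
# Reaching (q x y) takes just 4 choice points (two exists, two or) on the
# focused formula; an unfocused search could instead interleave roughly a
# hundred-and-one rule choices at every step.
```

Under focus there are only three reachable outcomes, each behind at most four choice points, which is the contrast with the unfocused search space cited above.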
While the ⇓ phase involves rules that may not be invertible, the ⇑ phase involves rules that must be invertible. For example, the left rules for $\vee^+$ and $\exists$ are invertible, and their introduction rules are listed as

$$\frac{\Gamma \Uparrow B_1, \Theta \vdash R \qquad \Gamma \Uparrow B_2, \Theta \vdash R}{\Gamma \Uparrow B_1 \vee^+ B_2, \Theta \vdash R} \qquad\qquad \frac{\Gamma \Uparrow [y/x]B, \Theta \vdash R}{\Gamma \Uparrow \exists x.B, \Theta \vdash R}$$
R. Blanco & D. Miller
9
These rules need no external information (in particular, any new variable y will work in the ∃ introduction
rule). In these last two rules, the zone between ⇑ and ⊢ contains a list of formulas. When there are no
more invertible rules that can be applied to that first formula, that formula is moved to (i.e., stored in) the
zone written as Γ, using the following store-left rule
$$\frac{C, \Gamma \Uparrow \Theta \vdash R}{\Gamma \Uparrow C, \Theta \vdash R}\; S_l$$
Finally, when the zone between the ⇑ and the ⊢ is empty (i.e., all invertible inference rules have been
completed), it is time to select a (possibly non-invertible) introduction rule to attempt. For that, we have
the two decide rules
$$\frac{\Gamma \vdash P \Downarrow}{\Gamma \Uparrow \cdot \vdash \cdot \Uparrow P}\; D_r \qquad \text{and} \qquad \frac{\Gamma, N \Downarrow N \vdash E}{\Gamma, N \Uparrow \cdot \vdash \cdot \Uparrow E}\; D_l$$
Although we cannot show all focused inference rules, we will present those that deal with the least
fixed point operator. Formally speaking, when we define a predicate, such as plus in the previous
section, we are actually naming a least fixed point expression. In the case of plus, that expression is
$$\mu\,\lambda P \lambda n \lambda m \lambda p.\; (n = z \wedge m = p) \vee \exists n' \exists p'.[\,n = (s\,n') \wedge p = (s\,p') \wedge (P\,n'\,m\,p')\,].$$
For the treatment of least fixed points, we follow the µ LJ proof system and its focused variant [1, 3].
The treatment of least fixed point expressions in the ⇑ phase and the ⇓ phase is given by the three rules
$$\frac{\mu B\,\bar t, \Gamma \Uparrow \Theta \vdash R}{\Gamma \Uparrow \mu B\,\bar t, \Theta \vdash R} \qquad\qquad \frac{\Gamma \Uparrow S\,\bar t, \Theta \vdash N \qquad \Uparrow B\,S\,\bar x \vdash S\,\bar x}{\Gamma \Uparrow \mu B\,\bar t, \Theta \vdash N} \qquad\qquad \frac{\Gamma \vdash B(\mu B)\,\bar t \Downarrow}{\Gamma \vdash \mu B\,\bar t \Downarrow}$$
Notice that the right introduction rule is just an unfolding of the fixed point. There are two ways to treat
the least fixed point on the left: one can either perform a store operation or one can do an induction using,
in this case, the invariant S. The right premise of the induction rule shows that S is a prefixed point (i.e.,
BS ⊆ S). In general, supplying an invariant can be tedious, so we shall also identify two admissible rules: one for unfolding (also on the left) and one for the "obvious" induction, whose invariant is nothing more than an abstraction of the immediately surrounding sequent. In the case of the obvious induction, the left premise of the induction rule is trivial.
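The unfolding reading of this fixed point can be sketched in Python (an illustration under our own encoding, with z as 0 and s as successor; the names `B` and `mu_B` are ours, not code from the systems discussed here):

```python
# B is the body functional of the plus fixed point from the text:
# B P n m p  =  (n = z ∧ m = p) ∨ ∃n'∃p'. n = s n' ∧ p = s p' ∧ P n' m p'
def B(P):
    def clause(n, m, p):
        if n == 0:                       # n = z ∧ m = p
            return m == p
        # ∃n'∃p' with n = s n' and p = s p'
        return p > 0 and P(n - 1, m, p - 1)
    return clause

def mu_B(n, m, p):
    """Check (mu B) n m p by repeated unfolding; this terminates because
    each unfolding decreases n, a well-founded measure."""
    return B(mu_B)(n, m, p)
```

For instance, `mu_B(2, 3, 5)` holds while `mu_B(2, 3, 4)` does not, matching the intended relation plus 2 3 5.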
3  Foundational proof certificates
The main idea behind the foundational proof certificate (FPC) [9] approach to defining proof evidence is that theorem provers output their proof evidence to some persistent memory (e.g., a file) and that independent and trustable proof checkers examine this evidence for validity. In order for such a scheme to work, the semantics of the output proof evidence must be formally defined. The FPC framework provides ways to formally define such proof semantics, which are moreover executable when interpreted on top of a suitable logic engine.
There are four key ingredients to providing such a formal definition and they are all described via their
relationship to focused proof systems. In fact, consider the following augmented versions of inference
rules we have seen in the previous section.
$$\frac{\Xi_1 : \Gamma \vdash B_i \Downarrow \qquad \vee_e(\Xi_0, \Xi_1, i)}{\Xi_0 : \Gamma \vdash B_1 \vee B_2 \Downarrow} \qquad\qquad \frac{\Xi_1 : \Gamma \vdash B[t/x] \Downarrow \qquad \exists_e(\Xi_0, \Xi_1, t)}{\Xi_0 : \Gamma \vdash \exists x.B \Downarrow}$$
Proof Outlines as Proof Certificates
10
These two augmented rules contain two of the four ingredients of an FPC: the schema variable Ξ ranges
over terms that denote the actual proof evidence comprising a certificate. The additional premises involve experts which are predicates that relate the concluding certificate Ξ0 to a continuation certificate
Ξ1 and some additional information. The expert predicate for the disjunction can provide an indication of which disjunct to pick and the expert for the existential quantifier can provide an indication
of which instance of the quantifier to use. Presumably, these expert predicates are capable of digging
into a certificate and extracting such information. It is not required, however, for an expert to definitively extract such information. For example, the disjunction expert might guess both 1 and 2 for i:
the proof checker will thus need to handle such non-determinism during the checking of certificates.
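As an illustration of that non-determinism, the following Python sketch (our own hypothetical encoding; the real checkers are logic programs, and the names `or_expert`, `exists_expert`, and `check` are ours) implements expert predicates as generators and lets the checker backtrack over their answers:

```python
def or_expert(cert):
    # A maximally unhelpful disjunction expert: guess both i = 1 and
    # i = 2; the checker must try each.
    yield cert, 1
    yield cert, 2

def exists_expert(cert, candidates=(0, 1, 2)):
    # An existential expert that proposes witness terms from a fixed pool.
    for t in candidates:
        yield cert, t

def check(cert, goal):
    """Check a goal under focus.  Goals are ('atom', bool),
    ('or', G1, G2), or ('exists', fn) where fn maps a witness to a goal;
    any(...) realizes the backtracking over expert answers."""
    tag = goal[0]
    if tag == "atom":
        return goal[1]
    if tag == "or":
        return any(check(k, goal[i]) for k, i in or_expert(cert))
    if tag == "exists":
        return any(check(k, goal[1](t)) for k, t in exists_expert(cert))
```

Even though neither expert commits to a single answer, checking still succeeds whenever some sequence of expert answers leads to a proof.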
Another ingredient of an FPC is illustrated by the augmented inference rules in Figure 3:

$$\frac{\Xi_1 : \langle l,C\rangle, \Gamma \Uparrow \Theta \vdash R \qquad \mathit{store}_c(\Xi_0, \Xi_1, l)}{\Xi_0 : \Gamma \Uparrow C, \Theta \vdash R}\; S_l \qquad\qquad \frac{\Xi_1 : \Gamma, \langle l,N\rangle \Downarrow N \vdash E \qquad \mathit{decide}_e(\Xi_0, \Xi_1, l)}{\Xi_0 : \Gamma, \langle l,N\rangle \Uparrow \cdot \vdash \cdot \Uparrow E}\; D_l$$

Figure 3: Two augmented rules

The store-left ($S_l$) inference rule is augmented with an extra premise that invokes a clerk predicate which is responsible for computing an index $l$ that is associated to the stored formula (here, $C$). The augmented decide-left ($D_l$) rule is given an extra premise that uses an expert predicate: that premise can be used to compute the index of the formula that is to be selected for focus. The indexing mechanism does not need to be functional (i.e., different formulas can have the same index), in which case the decide rule must also be interpreted as non-deterministic. In earlier work [9], indexes have been identified with structures as diverse as formula occurrences and de Bruijn numerals. In this paper, the only role played by indexes will be as the names given to lemmas and to hypotheses (i.e., formulas that are stored on the left using the $S_l$ inference rule).
As indicated above, there are essentially three operations that we can perform to treat a least fixed
point formula in the left-hand context. In fact, we shall expand these into the following four augmented
inference rules.
1. The fixed point can be “frozen” in the sense that the store-left (Sl ) rule is applied to it. As a result of
such a store operation, the stored occurrence of the fixed point will never be unfolded and will not
be the site of an induction. Such a frozen fixed point can only be used later in proof construction
within an instance of the initial rule.
2. The fixed point can be unfolded just as it can be on the right-hand side of the sequent. (Unfolding on the left is a consequence of the more general induction rule.) The following augmented
inference rule can be used to control when such a fixed point is unfolded.
$$\frac{\Xi_1 : N \Uparrow B(\mu B)\,\bar t, \Gamma \vdash R \qquad \mathit{unfold}(\Xi_0, \Xi_1)}{\Xi_0 : N \Uparrow \mu B\,\bar t, \Gamma \vdash R}$$
3. The most substantial inference rule related to the least fixed point is the induction rule. In its
most general form, this inference rule involves proving premises that involve an invariant. A proof
certificate term Ξ0 could include such an invariant explicitly and the following augmented rule
could be used to extract and use that invariant.
$$\frac{\Xi_1 : N \Uparrow S\,\bar t, \Gamma \vdash R \qquad \Xi_2\,\bar y : N \Uparrow B\,S\,\bar y \vdash S\,\bar y \qquad \mathit{ind}(\Xi_0, \Xi_1, \Xi_2, S)}{\Xi_0 : N \Uparrow \mu B\,\bar t, \Gamma \vdash R}$$
4. In general, it appears that invariants are often complex, large, and tedious structures to build and
use. Thus, it is most likely that we need to develop a number of techniques by which invariants
are not built directly but are rather implied by alternative reasoning principles. For example, the Abella theorem prover [2] allows the user to do induction not by explicitly entering an invariant but rather by performing a certain kind of guarded, circular reasoning. In the context of this paper, we consider, however, only one approach to specifying induction: it involves taking the sequent $N \Uparrow \mu B\,\bar t, \Gamma \vdash R$ and abstracting out the fixed point expression to yield the "obvious" invariant $\hat S$, chosen so that the premise $N \Uparrow \hat S\,\bar t, \Gamma \vdash R$ has an easy proof. As a result, only the second premise related
to the induction rule needs to be proved. The following augmented rule is used to generate and
check whether or not the obvious induction invariant can be used.
$$\frac{\Xi_1\,\bar y : N \Uparrow B\,\hat S\,\bar y \vdash \hat S\,\bar y \qquad \mathit{obvious}(\Xi_0, \Xi_1)}{\Xi_0 : N \Uparrow \mu B\,\bar t, \Gamma \vdash R}$$
An FPC definition is then a collection of the following: (i) the term constructors for the term encoding
proof evidence (the Ξ schema variable above), (ii) the constructors for building indexes, and (iii) the
definitions of predicates for describing how the clerks and experts behave in different inference rules.
Given these definitions, we then check whether or not a sequent of the form Ξ : Γ ⊢ B is provable. This
latter check can be done using a logic programming engine since such an engine should support both
unification and backtracking search (thereby allowing one to have a trade-off between large certificates
with a great deal of explicit information and small certificates where details are elided and reconstructed
as needed). In our own work, we have used both λ Prolog [10] and Bedwyr [5] as that logic-based engine.
4  Certificate design
Imagine telling a colleague “The proof of this theorem follows by a simple induction and the three
lemmas we just proved.” You may or may not be correct in such an assertion since (a) the proposed
theorem may not be provable and (b) the simple proof you describe may not exist. In any case, it is
clear that there is a rather simple, high-level algorithm to follow that will search for such a proof. In this
section, we translate that algorithm into an FPC.
Following the paradigm of focused proof systems for first-order logic, there is a clear, high-level
outline to follow for doing proof search for cut-free proofs: first do all invertible inference rules and then select a formula on which to do a series of non-invertible choices. This latter phase ends when one
encounters invertible inference rules again or the proof ends. In the setting we describe here, there are
two significant complicating features with which to be concerned.
Treating the induction rule. The invertible (⇑) phase is generally a place where no important choices
in the search for a proof appear. When dealing with a formula that is a fixed point, however, this is no
longer true. As described in the previous section, we treat that fixed point expression either by freezing
(see also [1]), unfolding, or using an explicitly supplied invariant or the “obvious” induction.
Lemmas must be invoked. Although the focusing framework can work with any provable formula as
a lemma, we shall only consider lemmas that are Horn clauses (for example, the lemmas at the end of
Section 2). Consider applying a lemma of the form ∀x̄[A1 ⊃ A2 ⊃ A3 ] in proving the sequent Γ ⊢ E.
Since the formulas A1 , A2 , and A3 are polarized positively, we can design the proof outline (simply by
only authorizing fixed points to be frozen during this part of the proof) so that Γ ⇓ ∀x̄[A1 ⊃ A2 ⊃ A3 ] ⊢ E
is provable if and only if there is a substitution θ for the variables in the list of variables x̄ such that θ A1
and θ A2 are in Γ and the sequent Γ, θ A3 ⊢ E is provable. The application of such a lemma is then seen
as forward chaining: if the context Γ contains two atoms (frozen fixed points) then add a third.
The main information that a certificate-as-proof-outline therefore needs to provide is some indication of which lemmas should be used during the construction of a proof. The following collections of supporting
lemmas—starting from the least explicit to the most explicit—have been tested within our framework:
(i) a bound on the number of lemmas that can be used to finish the proof; (ii) a list of possible lemmas
to use in finishing the proof; and (iii) a tree of lemmas, indicating which lemmas are applied following
the conjunctive structure of the remaining proof.
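As plain data (our own illustrative encoding; the lemma names echo the examples of Section 5), the three certificate shapes might look like:

```python
# (i) a bound on the number of lemmas used to finish the proof
cert_bound = 3

# (ii) a list of lemmas that may be used, in no particular order
cert_list = ["plus0com", "plusscom"]

# (iii) a tree of lemmas following the conjunctive structure of the
# remaining proof: each node names the lemma applied at that point and
# its children handle the resulting subgoals
cert_tree = ("pluscom", [("plus0com", []), ("plusscom", [])])

def lemmas_in(tree):
    """Collect the lemma names of a tree certificate, root first."""
    name, children = tree
    out = [name]
    for child in children:
        out.extend(lemmas_in(child))
    return out
```

Moving from (i) to (iii) trades certificate size for a smaller search space during checking.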
5  The proof checker
A direct and natural way to implement the FPC approach to proof checking is to use a logic programming
language: by turning the augmented inference rules sideways, one gets logic programming clauses. In
this way, the rule’s conclusion becomes the head of the clause and the rule’s premises become the body
of the clause. When proof checking in (either classical or intuitionistic) first-order logic without fixed points, the λProlog programming language [10] is a good choice since its treatment of bindings allows for elegant and effective implementations of first-order quantification in formulas and of eigenvariables in proofs [9]. When the logic itself contains fixed points, as is the case in this paper, λProlog is no longer a good choice: instead, a stronger logic that incorporates aspects of the closed world assumption is needed. In particular, the ACheck system has two parts. The first is a theorem prover; we have used Abella since it
was easy to modify it for exporting theorems and certificates for use by the second part. The second part
is the proof checker, FPCcheck, that verifies the output of a session of the theorem prover, suitably translated. This checker is written in the Bedwyr model checking system [5] and is the new component underlying this particular paper: its code is available from https://github.com/proofcert/fpccheck.
The documentation at that address explains where to find Bedwyr and our modified Abella system.
To illustrate here the kinds of examples available on the web page, the Abella theory files can have a
ship command that is followed by a string describing a certificate to use to prove the proposed theorem:
the checking of this certificate is shipped to the Bedwyr-based kernel for checking. In this particular
case, the induction certificate constructor is given three arguments: the first is the maximal number of
decides that can be used in the proof and the second and third are bounds on the number of unfoldings in
the ⇑ and ⇓ phases respectively.
Theorem plus0com : forall N, is_nat N -> plus N zero N.
ship "(induction 1 0 1)".

Theorem plusscom : forall M, is_nat M -> forall N, is_nat N ->
  forall P, plus M N P -> plus M (succ N) (succ P).
ship "(induction 1 0 1)".

Theorem pluscom : forall N, is_nat N -> forall M, is_nat M ->
  forall S, plus N M S -> plus M N S.
ship "(induction 2 1 0)".
The bound on the number of decide rules (first argument) is also a bound on the number of lemmas that
can be used on any given branch of the proof.
6  System architecture
Our system for producing and checking proof outlines follows a simple linear work-flow: first, state a theorem and obtain a proof outline; next, attempt to check the theorem with the outline given as a proof
certificate. We have structured this process in three computation steps, involving a theorem prover, a
translator, and a proof checker. Their roles are given here.
The theorem prover. An existing theorem prover is principally the source of the concrete syntax of
definitions and theorems. It may not be directly involved in the work-flow, particularly if the proof
language is extended to support ship tactics that enable the omission of detailed proof scripts. At the
end of the phase a theorem file with all relevant definitions, theorem statements and proof scripts is
obtained.
The translator. For each theorem prover, we need to furnish a translator that can convert the concrete
syntax of the theorem prover into that of the proof checker. It may export explicit certificates given by
the ship tactic or derive certificates automatically, possibly making use of proof scripts and execution
traces in the theorem file. These translation facilities may be built into the theorem prover itself, or an
instrumented version of it, thus encapsulating the first two stages. The translator outputs translation files
usable directly by a general-purpose, universal proof checker.
The proof checker. The proof checker, as described in Section 5, implements a focused version of the
µ LJ logic in the Bedwyr model checking system, augmented to further implement the FPC framework.
The translated theorems and their certificates plug into this kernel and constitute a full instantiation of the
system, which Bedwyr can execute to verify that the certificates can be elaborated into complete proofs
of their associated theorems.
Translators are the only addition needed to enable a new theorem prover to use the FPC framework.
Such translators are free to implement sophisticated mechanisms to produce efficient certificates from
proof scripts but, in fact, only a more shallow syntactic translation is strictly required: in this latter case,
sophistication must be built into the FPC definitions. For the present study with the Abella interactive theorem prover, the translator has been integrated into an instrumented version of Abella, a system whose syntactic proximity to Bedwyr makes it naturally well suited to the approach.
The work-flow structure just described makes no explicit reference to the FPCs that could be shipped,
constructed, and checked. These can conform to the definition of proof outlines used throughout this
paper or be tailored to each specific problem. The translator module can choose to use proof outlines as
the default, as is the case in our examples. It could also let a user of ship specify an FPC definition to
be included in the resulting translation, or generate tentative certificates with rules involving other FPC
definitions.
7  Conclusion
We have described a methodology for elaborating proof outlines using a framework for checking proof
certificates. We have illustrated this approach with “simple inductive proofs” that are applicable in a wide
range of situations involving inductively defined datatypes. We plan to scale and study significantly more
complex examples in the near future. Finally, it would be interesting to see how our use of high-level
descriptions of proofs and proof reconstruction might be related to the work of Bundy and his colleagues
on proof plans and rippling [8, 7].
Acknowledgments: We thank anonymous reviewers and K. Chaudhuri for comments on a draft of this
paper. This work was funded by the ERC Advanced Grant ProofCert.
References
[1] David Baelde (2012): Least and greatest fixed points in linear logic. ACM Trans. on Computational Logic 13(1), doi:10.1145/2071368.2071370. Available at http://tocl.acm.org/accepted/427baelde.pdf.
[2] David Baelde, Kaustuv Chaudhuri, Andrew Gacek, Dale Miller, Gopalan Nadathur, Alwen Tiu & Yuting Wang (2014): Abella: A System for Reasoning about Relational Specifications. Journal of Formalized Reasoning 7(2), doi:10.6092/issn.1972-5787/4650. Available at http://jfr.unibo.it/article/download/4650/4137.
[3] David Baelde & Dale Miller (2007): Least and greatest fixed points in linear logic. In N. Dershowitz & A. Voronkov, editors: International Conference on Logic for Programming and Automated Reasoning (LPAR), LNCS 4790, pp. 92–106. Available at http://www.lix.polytechnique.fr/Labo/Dale.Miller/papers/lpar07final.pdf.
[4] David Baelde, Dale Miller & Zachary Snow (2010): Focused Inductive Theorem Proving. In J. Giesl & R. Hähnle, editors: Fifth International Joint Conference on Automated Reasoning, LNCS 6173, pp. 278–292, doi:10.1007/978-3-642-14203-1_24. Available at http://www.lix.polytechnique.fr/Labo/Dale.Miller/papers/ijcar10.pdf.
[5] (2015): The Bedwyr system. Available at http://slimmer.gforge.inria.fr/bedwyr/.
[6] Robert S. Boyer & J. Strother Moore (1979): A Computational Logic. Academic Press.
[7] A. Bundy, A. Stevens, F. van Harmelen, A. Ireland & A. Smaill (1993): Rippling: A Heuristic for Guiding Inductive Proofs. Artificial Intelligence 62(2), pp. 185–253, doi:10.1016/0004-3702(93)90079-Q.
[8] Alan Bundy (1987): The use of explicit plans to guide inductive proofs. In: Conf. on Automated Deduction (CADE 9), pp. 111–120.
[9] Zakaria Chihani, Dale Miller & Fabien Renaud (2013): Foundational proof certificates in first-order logic. In Maria Paola Bonacina, editor: CADE 24: Conference on Automated Deduction 2013, LNAI 7898, pp. 162–177, doi:10.1007/978-3-642-38574-2_11.
[10] Dale Miller & Gopalan Nadathur (2012): Programming with Higher-Order Logic. Cambridge University Press, doi:10.1017/CBO9781139021326.
[11] Frank Pfenning & Carsten Schürmann (1999): System Description: Twelf — A Meta-Logical Framework for Deductive Systems. In H. Ganzinger, editor: 16th Conf. on Automated Deduction (CADE), LNAI 1632, Springer, Trento, pp. 202–206, doi:10.1007/3-540-48660-7_14.
[12] Carsten Schürmann & Frank Pfenning (2003): A Coverage Checking Algorithm for LF. In: Theorem Proving in Higher Order Logics: 16th International Conference, TPHOLs 2003, LNCS 2758, Springer, pp. 120–135, doi:10.1007/10930755_8.
[13] Sean Wilson, Jacques Fleuriot & Alan Smaill (2010): Inductive Proof Automation for Coq. In: Second Coq Workshop. Available at http://hal.archives-ouvertes.fr/inria-00489496/en/.
Realisability semantics of abstract focussing, formalised
Stéphane Graham-Lengrand
CNRS, École Polytechnique, INRIA, SRI International
We present a sequent calculus for abstract focussing, equipped with proof-terms: in the tradition
of Zeilberger’s work, logical connectives and their introduction rules are left as a parameter of the
system, which collapses the synchronous and asynchronous phases of focussing as macro rules. We
go further by leaving as a parameter the operation that extends a context of hypotheses with new
ones, which allows us to capture both classical and intuitionistic focussed sequent calculi.
We then define the realisability semantics of (the proofs of) the system, on the basis of Munch-Maccagnoni's orthogonality models for the classical focussed sequent calculus, but now operating at the higher level of abstraction mentioned above. We prove, at that level, the Adequacy Lemma, namely that if a term is of type A, then in the model its denotation is in the (set-theoretic) interpretation of A. This exhibits the fact that the universal quantification involved when taking the orthogonal of a set reflects, in the semantics, Zeilberger's universal quantification in the macro rule for the asynchronous phase.
The system and its semantics are all formalised in Coq.
1  Introduction
The objective of this paper is to formalise a strong connection between focussing and realisability.
Focussing is a concept from proof theory that arose from the study of Linear Logic [Gir87, And92]
with motivations in proof-search and logic programming, and was then used for studying the proof theory
of classical and intuitionistic logics [Gir91, DJS95, DL07, LM09].
Realisability is a concept used since Kleene [Kle45] to witness provability of formulae and build
models of their proofs. While originally introduced in the context of constructive logics, the methodology
received a renewed attention with the concept of orthogonality, used by Girard to build models of Linear
Logic proofs, and then used to define the realisability semantics of classical proofs [DK00].
Both focussing and realisability exploit the notion of polarity for formulae, with an asymmetric
treatment of positive and negative formulae.
In realisability, a primitive interpretation of a positive formula such as ∃xA (resp. A1 ∨ A2) is given as a set of pairs (t, π), where t is a witness of existence and π is in the interpretation of {t/x}A (resp. as a set of injections inj_i(π), where π is in the interpretation of A_i). In other words, the primitive interpretation of positive formulae expresses a very constructive approach to provability. In contrast, a negative formula (such as ∀xA or A1 ⇒ A2) is interpreted "by duality", as the set that is orthogonal to the interpretation of its (positive) negation. In classical realisability, a bi-orthogonal set completion provides denotations for all classical proofs of positive formulae.
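In a toy finite setting, this primitive interpretation can be sketched in Python (our own encoding, purely illustrative; realisers are just strings, and the names `interp_inst`, `interp_exists`, `interp_or`, and `orthogonal` are ours):

```python
# |A{t/x}| for witness candidates t = 0, 1: a finite table of realisers.
interp_inst = {0: {"r0"}, 1: {"r1"}}

# |exists x. A| = the set of pairs (t, pi) with pi in |A{t/x}|.
interp_exists = {(t, pi) for t, pis in interp_inst.items() for pi in pis}

# |A1 v A2| = the set of tagged injections inj_i(pi) with pi in |Ai|,
# here with |A1| = {"s1"} and |A2| empty.
interp_or = {(1, pi) for pi in {"s1"}} | {(2, pi) for pi in set()}

def orthogonal(S, universe, perp):
    """Interpretation 'by duality': the elements of the universe that
    pass the test perp against every element of S."""
    return {k for k in universe if all(perp(k, s) for s in S)}
```

The universal quantification inside `orthogonal` is the point of contact with focusing that the rest of the paper makes precise.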
In focussing, introduction rules for connectives of the same polarity can be chained without loss of
generality: for instance if we decide to prove (A1 ∨ A2) ∨ A3 by proving A1 ∨ A2, then we can assume (without losing completeness) that a proof of this in turn consists of a proof of A1 or a proof of A2 (rather than one that uses a hypothesis, uses a lemma, or reasons by contradiction).
I. Cervesato and C. Schürmann (Eds.)
First Workshop on Focusing (WoF’15)
EPTCS 197, 2015, pp. 15–28, doi:10.4204/EPTCS.197.3
© Stéphane Graham-Lengrand
This work is licensed under the
Creative Commons Attribution License.
Such a grouping of introduction steps for positives (resp. negatives) is called a synchronous phase
(resp. asynchronous phase). In order to express those phases as (or collapse them into) single macro
steps, some formalisms for big-step focussing (as in Zeilberger’s work [Zei08a, Zei08b]) abstract away
from logical connectives and simply take as a parameter the mechanism by which a positive formula can
be decomposed into a collection of positive atoms and negative formulae. While the proof of a positive
needs to exhibit such a decomposition, the proof of a negative needs to range over such decompositions
and perform a case analysis.
This asymmetry, and this universal quantification over decompositions, are reminiscent of the orthogonal construction of realisability models. To make the connection precise, we formalise the construction
of realisability models for (big-step) focussed systems in Zeilberger’s style.
In [MM09], the classical realisability semantics was studied for the classical sequent calculus, with
an extended notion of cut-elimination that produced focussed proofs. Here we want to avoid relying on
the specificities of particular logical connectives, and therefore lift this realisability semantics to the more
abstract level of Zeilberger’s systems, whose big-step approach also reveals the universal quantification
that we want to connect to the orthogonality construction. We even avoid relying on the specificities of a
particular logic as follows:
• We do not assume that formulae are syntax, i.e. have an inductive structure, nor do we assume that
“positive atoms” are particular kinds of formulae; positive atoms and formulae can now literally be
two arbitrary sets, of elements called atoms and molecules, respectively.
• The operation that extends a context of hypotheses with new ones, is usually taken to be set or
multiset union. We leave this extension operation as another parameter of the system, since tweaking
it will allow the capture of different logics.
In Section 2 we present our abstract system called LAF, seen through the Curry-Howard correspondence as a typing system for (proof-)terms. Section 3 describes how to tune (i.e. instantiate) the notions
of atoms, molecules, and context extensions, so as to capture the big-step versions of standard focussed
sequent calculi, both classical and intuitionistic. Section 4 gives the definition of realisability models,
where terms and types are interpreted, and proves the Adequacy Lemma. In very generic terms, if t is a
proof(-term) for A then in the model the interpretation of t is in the interpretation of A. Finally, Section 5
exhibits a simple model from which we derive the consistency of LAF.
2  The abstract focussed sequent calculus LAF
An instance of LAF is given by a tuple of parameters $(Lab^+, Lab^-, A, M, Co, Pat, \Vdash)$, where each parameter is described below. We use $\to$ (resp. $\rightharpoonup$) for the total (resp. partial) function space constructor.
Since our abstract focussing system is a typing system, we use a notion of typing contexts, i.e. those
structures denoted Γ in a typing judgement of the form Γ ` . . .. Two kinds of “types” are used (namely
atoms and molecules), and what is declared as having such types in a typing context, is two corresponding
kinds of labels:1 positive labels and negative labels, respectively ranging over Lab+ and Lab− . These
two sets are the first two parameters of LAF.
¹We choose to call them "labels", rather than "variables", because "variable" suggests an object identified by a name that "does not matter" and that is somewhere subject to α-conversion. Our labels form a deterministic way of indexing atoms and molecules in a context, and could accommodate De Bruijn's indices or De Bruijn's levels.
DEFINITION 1 (Generic contexts, generic decomposition algebras)
Given two sets $A$ and $B$, an $(A,B)$-context algebra is an algebra of the form

$$\Big(\; G, \quad \begin{array}{rcl} G \times Lab^+ & \rightharpoonup & A \\ (\Gamma, x^+) & \mapsto & \Gamma[x^+] \end{array}, \quad \begin{array}{rcl} G \times Lab^- & \rightharpoonup & B \\ (\Gamma, x^-) & \mapsto & \Gamma[x^-] \end{array}, \quad \begin{array}{rcl} G \times D_{A,B} & \to & G \\ (\Gamma, \Delta) & \mapsto & \Gamma;\Delta \end{array} \;\Big)$$

whose elements are called $(A,B)$-contexts, and where $D_{A,B}$, which we call the $(A,B)$-decomposition algebra and whose elements are called $(A,B)$-decompositions, is the free algebra defined by the following grammar:

$$\Delta, \Delta_1, \ldots \;::=\; a \;\mid\; {\sim}b \;\mid\; \bullet \;\mid\; \Delta_1, \Delta_2$$

where $a$ (resp. $b$) ranges over $A$ (resp. $B$).
Let $D_{st}$ abbreviate $D_{unit,unit}$, whose elements we call decomposition structures. The (decomposition) structure of an $(A,B)$-decomposition $\Delta$, denoted $|\Delta|$, is its obvious homomorphic projection into $D_{st}$.
We denote by $dom^+(\Gamma)$ (resp. $dom^-(\Gamma)$) the subset of $Lab^+$ (resp. $Lab^-$) where $\Gamma[x^+]$ (resp. $\Gamma[x^-]$) is defined, and say that $\Gamma$ is empty if both $dom^+(\Gamma)$ and $dom^-(\Gamma)$ are.
Intuitively, an (A , B)-decomposition ∆ is simply the packaging of elements of A and elements of
B; we could flatten this packaging by seeing • as the empty set (or multiset), and ∆1 , ∆2 as the union of
the two sets (or multisets) ∆1 and ∆2 . Note that the coercion from B into DA ,B is denoted with ∼. It
helps distinguishing it from the coercion from A (e.g. when A and B intersect each other), and in many
instances of LAF it will remind us of the presence of an otherwise implicit negation. But so far it has
no logical meaning, and in particular B is not equipped with an operator ∼ of syntactical or semantical
nature.
Now we can specify the nature of the LAF parameters:
D EFINITION 2 (LAF parameters) Besides Lab+ and Lab− , LAF is parameterised by
• two sets A and M, whose elements are respectively called atoms (denoted a, a0 ,. . . ), and molecules
(denoted M, M 0 ,. . . );
• an (A, M)-context algebra Co, whose elements are called typing contexts;
• a pattern algebra, i.e. an algebra of the form

$$\Big(\; Pat, \quad \begin{array}{rcl} Pat & \to & D_{st} \\ p & \mapsto & |p| \end{array} \;\Big)$$

whose elements are called patterns;
• a decomposition relation, i.e. a set of triples

$$(\_ \Vdash \_ : \_) \;\subseteq\; (D \times Pat \times M)$$

such that if $\Delta \Vdash p : M$ then the structure of $\Delta$ is $|p|$.
The (A, M)-decomposition algebra, whose elements are called typing decompositions, is called the
typing decomposition algebra and is denoted D.
The group of parameters (A, M) specifies what the instance of LAF, as a logical system, talks about.
A typical example is when A and M are respectively the sets of (positive) atoms and the set of (positive)
formulae from a polarised logic. Section 3 shows how our level of abstraction allows for some interesting
variants. In the Curry-Howard view, A and M are our sets of types.
The last group of parameters $(Pat, \Vdash)$ specifies the structure of molecules. If M is a set of formulae featuring logical connectives, those parameters specify the introduction rules for the connectives. The
intuition behind the terminology is that the decomposition relation decomposes a molecule, according
to a pattern, into a typing decomposition which, as a first approximation, can be seen as a “collection of
atoms and (hopefully smaller) molecules”.
D EFINITION 3 (LAF system) Proof-terms are defined by the following syntax:
Positive terms
Terms+ t + ::= pd
Decomposition terms Termsd d ::= x+ f • d1 , d2
Commands
Terms
c ::= hx− | t + i h f | t + i
where p ranges over Pat, x+ ranges over Lab+ , x− ranges over Lab− , and f ranges over the partial
function space Pat * Terms.
LAF is the inference system of Fig. 1, defining the derivability of three kinds of sequents:
(Γ ⊢ [t+ : M])  ∈  Co × Terms+ × M
(Γ ⊢ d : ∆)     ∈  Co × Termsd × D
(Γ ⊢ c)         ∈  Co × Terms
We further impose in rule async that the domain of function f be exactly those patterns that can decompose M (p ∈ Dom(f) if and only if there exists ∆ such that ∆ ⊩ p : M).
LAFcf is the inference system LAF without the cut-rule.
∆ ⊩ p : M    Γ ⊢ d : ∆                        Γ[x+] = a
──────────────────────  sync                  ──────────  init
    Γ ⊢ [pd : M]                              Γ ⊢ x+ : a

                          Γ ⊢ d1 : ∆1    Γ ⊢ d2 : ∆2
─────────                 ──────────────────────────
Γ ⊢ • : •                    Γ ⊢ d1, d2 : ∆1, ∆2

∀p, ∀∆,  ∆ ⊩ p : M  ⇒  Γ; ∆ ⊢ f(p)
───────────────────────────────────  async
            Γ ⊢ f : ∼M

Γ ⊢ [t+ : Γ[x−]]                  Γ ⊢ f : ∼M    Γ ⊢ [t+ : M]
─────────────────  select         ──────────────────────────  cut
  Γ ⊢ ⟨x− | t+⟩                          Γ ⊢ ⟨f | t+⟩

Figure 1: LAF
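As a concrete reading of Definition 3, the proof-term syntax can be transcribed into a small datatype. The following Python sketch is ours (the class names and encoding are not from the paper); it only fixes the shape of terms, not their typing:

```python
from dataclasses import dataclass
from typing import Dict, Union

# Decomposition terms  d ::= x+ | f | • | d1, d2
@dataclass(frozen=True)
class PosLabel:            # a positive label x+
    name: str

@dataclass(frozen=True)
class Unit:                # the unit decomposition term •
    pass

@dataclass(frozen=True)
class Pair:                # d1, d2
    left: "Dec"
    right: "Dec"

@dataclass(frozen=True)
class Func:                # f : a partial function from patterns to commands
    table: Dict[str, "Command"]

Dec = Union[PosLabel, Unit, Pair, Func]

# Positive terms  t+ ::= pd
@dataclass(frozen=True)
class PosTerm:
    pattern: str           # p
    dec: Dec               # d

# Commands  c ::= ⟨x− | t+⟩ | ⟨f | t+⟩
@dataclass(frozen=True)
class Select:              # ⟨x− | t+⟩
    neg_label: str
    arg: PosTerm

@dataclass(frozen=True)
class Cut:                 # ⟨f | t+⟩
    func: Func
    arg: PosTerm

Command = Union[Select, Cut]
```

For example, `Select("y", PosTerm("p0", Pair(PosLabel("x"), Unit())))` is the command ⟨y | p0 (x, •)⟩ in this encoding.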
An intuition of LAF can be given in terms of proof-search:
When we want to “prove” a molecule, we first need to decompose it into a collection of atoms
and (refutations of) molecules (rule sync). Each of those atoms must be found in the current typing
context (rule init). Each of those molecules must be refuted, and the way to do this is to consider all the
possible ways that this molecule could be decomposed, and for each of those decompositions, prove the
inconsistency of the current typing context extended with the decomposition (rule async). This can be
done by proving one of the molecules refuted in the typing context (rule select) or refuted by a complex
proof (rule cut). Then a new cycle starts.
Now it will be useful to formalise the idea that, when a molecule M is decomposed into a collection
of atoms and (refutations of) molecules, the latter are “smaller” than M:
Stéphane Graham-Lengrand
19
DEFINITION 4 (Well-founded LAF instance)
We write M′ ◁ M if there are ∆ and p such that ∆ ⊩ p : M and ∼M′ is a leaf of ∆.
The LAF instance is well-founded if (the smallest order containing) ◁ is well-founded.
Well-foundedness is a property that a LAF instance may or may not have, and which we will require in order to construct its realisability semantics. A typical situation where it holds is when M′ ◁ M implies that M′ is a sub-formula of M.
The above intuitions may become clearer when we instantiate the parameters of LAF with actual literals, formulae, etc. in order to capture existing systems: we shall therefore illustrate system LAF by specifying different instances, providing each time the list of parameters that captures a different focussed sequent calculus system.
While LAF is defined as a typing system (in other words, with proof-terms decorating proofs in the view of the Curry-Howard correspondence), the traditional systems that we capture below are purely logical, with no proof-term decorations. When encoding the former into the latter, we therefore need to erase proof-term annotations, and for this it is useful to project the notion of typing context as follows:
DEFINITION 5 (Referable atoms and molecules) Given a typing context Γ, let Im+(Γ) (resp. Im−(Γ)) be the image of the function x+ ↦ Γ[x+] (resp. x− ↦ Γ[x−]), i.e. the set of atoms (resp. molecules) that can be referred to, in Γ, by the use of a positive (resp. negative) label.
3  Examples in propositional logic
The parameters of LAF will be specified so as to capture the classical focussed sequent calculus LKF and the intuitionistic one LJF [LM09].
3.1  Polarised classical logic - one-sided
In this sub-section we define the instance LAFK1 corresponding to the (one-sided) calculus LKF:
DEFINITION 6 (Literals, formulae, patterns, decomposition)
Let A be a set of elements called atoms and ranged over by a, a′, . . .
Negations of atoms a⊥, a′⊥, . . . range over a set isomorphic to, but disjoint from, A.
Let M be the set defined by the first line of the following grammar for (polarised) formulae of classical logic:
Positive formulae      P, . . . ::= a | ⊤+ | ⊥+ | A ∧+ B | A ∨+ B
Negative formulae      N, . . . ::= a⊥ | ⊤− | ⊥− | A ∧− B | A ∨− B
Unspecified formulae   A ::= P | N
Negation is extended to formulae as usual by De Morgan's laws (see e.g. [LM09]).
The set Pat of patterns is defined by the following grammar:
p, p1, p2, . . . ::= + | − | • | (p1, p2) | inji(p)
The decomposition relation (⊩) ⊆ D × Pat × M is the restriction to molecules of the relation defined inductively for all formulae by the inference system of Fig. 2.
The map p ↦ |p| can be inferred from the decomposition relation.
a ⊩ + : a          ∼N⊥ ⊩ − : N          • ⊩ • : ⊤+

∆1 ⊩ p1 : A1    ∆2 ⊩ p2 : A2              ∆ ⊩ p : Ai
────────────────────────────         ──────────────────────
∆1, ∆2 ⊩ (p1, p2) : A1 ∧+ A2         ∆ ⊩ inji(p) : A1 ∨+ A2

Figure 2: Decomposition relation for LAFK1
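Read operationally, Figure 2 enumerates, for each positive formula, the (pattern, decomposition) pairs that it admits. A hedged Python sketch, under our own tuple encoding of formulae (not the paper's):

```python
# Formulae: ('at', a) | ('top',) | ('bot',) | ('and', A, B) | ('or', A, B) | ('neg', N)
# where ('neg', N) stands for a negative subformula N; decomposing it yields the
# leaf ~N⊥, recorded here as ('ref', N). Tags and encoding are ours.

def decompose(formula):
    """Yield (pattern, decomposition) pairs following Figure 2.
    A decomposition is a tuple of leaves; a pattern is a nested tuple."""
    tag = formula[0]
    if tag == 'at':                       # a  ⊩ + : a
        yield ('+', (formula,))
    elif tag == 'neg':                    # ∼N⊥ ⊩ − : N
        yield ('-', (('ref', formula[1]),))
    elif tag == 'top':                    # •  ⊩ • : ⊤+
        yield ('•', ())
    elif tag == 'and':                    # pairing for A1 ∧+ A2
        for p1, d1 in decompose(formula[1]):
            for p2, d2 in decompose(formula[2]):
                yield (('pair', p1, p2), d1 + d2)
    elif tag == 'or':                     # injections for A1 ∨+ A2
        for i in (1, 2):
            for p, d in decompose(formula[i]):
                yield (('inj', i, p), d)
    # ⊥+ has no rule in Figure 2, hence no yield
```

For instance, `a ∨+ ⊤+` decomposes in two ways (one per injection), while `⊥+` decomposes in none.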
Keeping the sync rule of LAFK1 in mind, we can already see in Fig. 2 the traditional introduction rules of positive connectives in polarised classical logic. Note that these rules make LAFK1 a well-founded LAF instance, since M′ ◁ M implies that M′ is a sub-formula of M. The rest of this sub-section formalises that intuition and explains how LAFK1 manages the introduction of negative connectives, etc. But in order to finish the instantiation of LAF capturing LKF, we need to define typing contexts, i.e. give Lab+, Lab−, and Co. In particular, we have to decide how to refer to elements of the typing context. To avoid getting into aspects that may be considered implementation details (in [GL14a] we present two implementations based on De Bruijn's indices and De Bruijn's levels), we will stay rather generic and only assume the following property:
DEFINITION 7 (Typing contexts) We assume that context extensions satisfy:
Im+(Γ; a)   = Im+(Γ) ∪ {a}          Im−(Γ; a)        = Im−(Γ)
Im+(Γ; ∼M)  = Im+(Γ)                Im−(Γ; ∼M)       = Im−(Γ) ∪ {M}
Im±(Γ; •)   = Im±(Γ)                Im±(Γ; (∆1, ∆2)) = Im±(Γ; ∆1; ∆2)
where ± stands for either + or −.
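The equations of Definition 7 are directly executable if one represents a typing context by the pair of its referable atoms and molecules. A minimal Python sketch, under our own encoding (which forgets labels entirely, so it is an abstraction of any actual implementation):

```python
# A context is represented by the pair (Im+, Im-) of frozensets; a typing
# decomposition is ('•',), ('atom', a), ('ref', M), or ('pair', d1, d2).
# This encoding is ours, for illustration only.

def extend(gamma, delta):
    """Compute (Im+(Γ;∆), Im−(Γ;∆)) case by case, following Definition 7."""
    pos, neg = gamma
    if delta == ('•',):                          # Γ; •
        return (pos, neg)
    if delta[0] == 'atom':                       # Γ; a
        return (pos | {delta[1]}, neg)
    if delta[0] == 'ref':                        # Γ; ∼M
        return (pos, neg | {delta[1]})
    if delta[0] == 'pair':                       # Γ; (∆1, ∆2) = Γ; ∆1; ∆2
        return extend(extend(gamma, delta[1]), delta[2])

empty = (frozenset(), frozenset())
g = extend(empty, ('pair', ('atom', 'a'), ('ref', 'M')))
assert g == (frozenset({'a'}), frozenset({'M'}))
```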
We now relate (cut-free) LAFcfK1 and the LKF system of [LM09] by mapping sequents:
DEFINITION 8 (Mapping sequents)
We encode the sequents of LAFK1 (regardless of derivability) to those of LKF as follows:
φ(Γ ⊢ c)           :=  ⊢ Im+(Γ), Im−(Γ)⊥ ⇑
φ(Γ ⊢ x+ : a)      :=  ⊢ Im+(Γ), Im−(Γ)⊥ ⇓ a
φ(Γ ⊢ f : ∼P)      :=  ⊢ Im+(Γ), Im−(Γ)⊥ ⇑ P⊥
φ(Γ ⊢ [t+ : P])    :=  ⊢ Im+(Γ), Im−(Γ)⊥ ⇓ P
THEOREM 1 φ satisfies structural adequacy between LAFcfK1 and LKF.
The precise notion of adequacy used here is formalised in [GL14a]; let us just say here that it preserves the derivability of sequents in a compositional way (a derivation π in one system is mapped to a derivation π′ in the other system, and the subderivations of π are mapped to subderivations of π′).
3.2  Polarised intuitionistic logic
In this sub-section we define the instance LAFJ corresponding to the (two-sided) calculus LJF. Because it is two-sided, and the LAF framework itself does not formalise the notion of side (it is not incorrect to see LAF as being intrinsically one-sided), we shall embed side information in the notions of atoms and molecules:
DEFINITION 9 (Literals, formulae, patterns, decomposition)
Let L+ (resp. L−) be a set of elements called positive (resp. negative) literals, ranged over by l+, l1+, l2+, . . . (resp. l−, l1−, l2−, . . .). Formulae are defined by the following grammar:
Positive formulae      P, . . . ::= l+ | ⊤+ | ⊥+ | A ∧+ B | A ∨ B
Negative formulae      N, . . . ::= l− | ⊤− | ⊥− | A ∧− B | A ⇒ B | ¬A
Unspecified formulae   A ::= P | N
We position a literal or a formula on the left-hand side or the right-hand side of a sequent by combining it with an element, called side information, of the set {l, r}: we define
A := {(l+, r) | l+ positive literal} ∪ {(l−, l) | l− negative literal} ∪ {(⊥−, l)}
M := {(P, r) | P positive formula} ∪ {(N, l) | N negative formula}
In the rest of this sub-section, v stands for either a negative literal l− or ⊥−.
The set Pat of patterns is defined by the following grammar:
p, p1, p2, . . . ::= +r | −r | •r | (p1, p2) | inji(p) | +l | −l | •l | p1::p2 | πi(p) | y(p)
The decomposition relation (⊩) ⊆ D × Pat × M is the restriction to molecules of the relation defined inductively for all positioned formulae by the inference system of Fig. 3.
∼(N, l) ⊩ −r : (N, r)        (l+, r) ⊩ +r : (l+, r)        • ⊩ •r : (⊤+, r)

∆1 ⊩ p1 : (A1, r)    ∆2 ⊩ p2 : (A2, r)             ∆ ⊩ p : (Ai, r)
──────────────────────────────────────        ──────────────────────────
∆1, ∆2 ⊩ (p1, p2) : (A1 ∧+ A2, r)             ∆ ⊩ inji(p) : (A1 ∨ A2, r)

∼(P, r) ⊩ −l : (P, l)        (l−, l) ⊩ +l : (l−, l)        (⊥−, l) ⊩ •l : (⊥−, l)

      ∆ ⊩ p : (A, r)
───────────────────────────
∆, (⊥−, l) ⊩ y(p) : (¬A, l)

∆1 ⊩ p1 : (A1, r)    ∆2 ⊩ p2 : (A2, l)             ∆ ⊩ p : (Ai, l)
──────────────────────────────────────        ─────────────────────────
∆1, ∆2 ⊩ p1::p2 : (A1 ⇒ A2, l)                ∆ ⊩ πi(p) : (A1 ∧− A2, l)

Figure 3: Decomposition relation for LAFJ
Again, we can already see in Fig. 3 the traditional right-introduction rules of positive connectives
and left-introduction rules of negative connectives, and again, it is clear from these rules that LAFJ is
well-founded.
We now interpret LAFJ sequents as intuitionistic sequents (from e.g. LJF [LM09]):
DEFINITION 10 (LAFJ sequents as two-sided LJF sequents)
First, when ± is either + or −, we define
Im±r(Γ) := {A | (A, r) ∈ Im±(Γ)}
Im+l(Γ) := {l− | (l−, l) ∈ Im+(Γ)}
Im−l(Γ) := {N | (N, l) ∈ Im−(Γ)}
Then we define the encoding:
φ(Γ ⊢ c)              :=  [Im+r(Γ), Im−l(Γ)] −→ [Im+l(Γ), Im−r(Γ)]
φ(Γ ⊢ x+ : (l−, l))   :=  [Im+r(Γ), Im−l(Γ)] −l−→ [Im+l(Γ), Im−r(Γ)]
φ(Γ ⊢ f : ∼(P, r))    :=  [Im+r(Γ), Im−l(Γ)] −P→ [Im+l(Γ), Im−r(Γ)]
φ(Γ ⊢ [t+ : (N, l)])  :=  [Im+r(Γ), Im−l(Γ)] −N→ [Im+l(Γ), Im−r(Γ)]
φ(Γ ⊢ x+ : (l+, r))   :=  [Im+r(Γ), Im−l(Γ)] −l+→
φ(Γ ⊢ f : ∼(N, l))    :=  [Im+r(Γ), Im−l(Γ)] −N→
φ(Γ ⊢ [t+ : (P, r)])  :=  [Im+r(Γ), Im−l(Γ)] −P→
In the first four cases, we require Im+l(Γ), Im−r(Γ) to be a singleton (or be empty).
DEFINITION 11 (Typing contexts) We assume that we always have (⊥−, l) ∈ Im+(Γ) and that
Im+(Γ; (l+, r)) = Im+(Γ) ∪ {(l+, r)}      Im−(Γ; a)        = Im−(Γ)
Im+(Γ; ∼M)      = Im+(Γ)                  Im−(Γ; ∼(N, l))  = Im−(Γ) ∪ {(N, l)}
Im±(Γ; •)       = Im±(Γ)                  Im±(Γ; (∆1, ∆2)) = Im±(Γ; ∆1; ∆2)
Im+(Γ; (v, l))  = {(l+, r) | (l+, r) ∈ Im+(Γ)} ∪ {(v, l), (⊥−, l)}
Im−(Γ; ∼(P, r)) = {(N, l) | (N, l) ∈ Im−(Γ)} ∪ {(P, r)}
where again ± stands for either + or − and v stands for either a negative literal l− or ⊥−.
The first three lines are the same as those assumed for K1, except they are restricted to those cases
where we do not try to add to Γ an atom or a molecule that is interpreted as going to the right-hand side
of a sequent. When we want to do that, this atom or molecule should overwrite the previous atom(s) or
molecule(s) that was (were) interpreted as being on the right-hand side; this is done in the last two lines,
where Im+l (Γ), Im−r (Γ) is completely erased.
THEOREM 2 φ satisfies structural adequacy between LAFcfJ and LJF.
The details are similar to those of Theorem 1, relying on the LJF properties expressed in [LM09].
4  Realisability semantics of LAF
We now look at the semantics of LAF systems, setting as an objective the definition of models and the
proof of their correctness at the same generic level as that of LAF.
In this section, P(A) stands for the power set of a given set A.
Given a LAF instance, we define the following notion of realisability algebra:
DEFINITION 12 (Realisability algebra) A realisability algebra is an algebra of the form
(L, P, N, ⊥, C̃o, (f, ρ) ↦ ⟦f⟧ρ, p ↦ p̃, a ↦ ⟦a⟧)
with  ⟦·⟧ : (Pat ⇀ Terms) × C̃o → N,  p̃ : Pat → (DL,N → P),  ⟦·⟧ : A → P(L),
where
• L, P, N are three arbitrary sets of elements called label denotations, positive denotations, and negative denotations, respectively;
• ⊥ is a relation between negative and positive denotations (⊥ ⊆ N × P), called the orthogonality relation;
• C̃o is an (L, N)-context algebra, whose elements, denoted ρ, ρ′, . . . , are called semantic contexts.
The (L, N)-decomposition algebra DL,N is abbreviated D̃; its elements, denoted δ, δ′, . . . , are called semantic decompositions.
Given a model structure, we can define the interpretation of proof-terms. The model structure already gives an interpretation for the partial functions f from patterns to commands. We extend it to all proof-terms as follows:
DEFINITION 13 (Interpretation of proof-terms)
Positive terms (in Terms+) are interpreted as positive denotations (in P), decomposition terms (in Termsd) are interpreted as semantic decompositions (in D̃), and commands (in Terms) are interpreted as pairs in N × P (that may or may not be orthogonal), as follows:
⟦pd⟧ρ        := p̃(⟦d⟧ρ)              ⟦•⟧ρ   := •
⟦d1, d2⟧ρ    := ⟦d1⟧ρ, ⟦d2⟧ρ         ⟦x+⟧ρ  := ρ[x+]
⟦⟨x− | t+⟩⟧ρ := (ρ[x−], ⟦t+⟧ρ)
⟦⟨f | t+⟩⟧ρ  := (⟦f⟧ρ, ⟦t+⟧ρ)
⟦f⟧ρ         := ⟦f⟧ρ as given by the realisability algebra
Our objective is now the Adequacy Lemma whereby, if t is of type A then the interpretation of t is in
the interpretation of A. We have already defined the interpretation of proof-terms in a model structure.
We now proceed to define the interpretation of types.
In system LAF, there are four concepts of "type inhabitation":
1. "proving" an atom by finding a suitable positive label in the typing context;
2. "proving" a molecule by choosing a way to decompose it into a typing decomposition;
3. "refuting" a molecule by case analysing all the possible ways of decomposing it into a typing decomposition;
4. "proving" a typing decomposition by inhabiting it with a decomposition term.
Correspondingly, we have the four following interpretations, with the interpretation of atoms (1.) in P(L) being arbitrary and provided as a parameter of a realisability algebra:
DEFINITION 14 (Interpretation of types and typing contexts) Assume the instance of LAF is well-founded. We define, by induction on the well-founded ordering between molecules,
2. the positive interpretation of a molecule in P(P);
3. the negative interpretation of a molecule in P(N);
4. the interpretation of a typing decomposition in P(DL,N):
⟦M⟧+      := {p̃(δ) ∈ P | δ ∈ ⟦∆⟧ and ∆ ⊩ p : M}
⟦M⟧−      := {n ∈ N | ∀p ∈ ⟦M⟧+, n ⊥ p}
⟦•⟧       := {•}
⟦∆1, ∆2⟧  := {δ1, δ2 | δ1 ∈ ⟦∆1⟧ and δ2 ∈ ⟦∆2⟧}
⟦a⟧       := ⟦a⟧ as given by the realisability algebra
⟦∼M⟧      := {∼n | n ∈ ⟦M⟧−}
We then define the interpretation of a typing context:
⟦Γ⟧ := {ρ ∈ C̃o | ∀x+ ∈ dom+(ρ), ρ[x+] ∈ ⟦Γ[x+]⟧, and ∀x− ∈ dom−(ρ), ρ[x−] ∈ ⟦Γ[x−]⟧−}
Now that we have defined the interpretation of terms and the interpretation of types, we get to the
Adequacy Lemma.
LEMMA 3 (Adequacy for LAF) We assume the following hypotheses:
Well-foundedness: The LAF instance is well-founded.
Typing correlation: If ρ ∈ ⟦Γ⟧ and δ ∈ ⟦∆⟧ then (ρ; δ) ∈ ⟦Γ; ∆⟧.
Stability: If δ ∈ ⟦∆⟧ for some ∆ and ⟦f(p)⟧ρ;δ ∈ ⊥, then ⟦f⟧ρ ⊥ p̃(δ).
We conclude that, for all ρ ∈ ⟦Γ⟧,
1. if Γ ⊢ [t+ : M] then ⟦t+⟧ρ ∈ ⟦M⟧+;
2. if Γ ⊢ d : ∆ then ⟦d⟧ρ ∈ ⟦∆⟧;
3. if Γ ⊢ t then ⟦t⟧ρ ∈ ⊥.
Proof: See the proof in Coq [GL14b].
Looking at the Adequacy Lemma, the stability condition is traditional: it is the generalisation, to that
level of abstraction, of the usual condition on the orthogonality relation in orthogonality models (those
realisability models that are defined in terms of orthogonality, usually to model classical proofs [Gir87,
DK00, Kri01, MM09]): orthogonality is "closed under anti-reduction". Here, we have not defined a notion of reduction on LAF proof-terms, but intuitively, we would expect to rewrite ⟨f | pd⟩ to f(p) "substituted by d".
On the other hand, the typing correlation property is new, and is due to the level of abstraction
we operate at: there is no reason why our data structure for typing contexts would relate to our data
structure for semantic contexts, and the extension operation, in both of them, has so far been completely
unspecified. Hence, we clearly need such an assumption to relate the two.
However, one may wonder when and why the typing correlation property should be satisfied. One
may anticipate how typing correlation could hold for the instance LAFK1 of LAF. Definition 7 suggests
that, in the definition of a typing context algebra, the extension operation does not depend on the nature
of the atom a or molecule M that is being added to the context. So we could parametrically define
(A , B)-contexts for any sets A and B (in the sense of relational parametricity [Rey83]). The typing
context algebra would be the instance where A = A and B = M and the semantic context algebra would
be the instance where A = L and B = N . Parametricity of context extension would then provide the
typing correlation property.
5  Example: boolean models to prove Consistency
We now exhibit models to prove the consistency of LAF systems.
We call a boolean realisability algebra a realisability algebra where ⊥ = ∅. The terminology comes from the fact that in such a realisability algebra, ⟦M⟧− can only take one of two values, ∅ or N, depending on whether ⟦M⟧+ is empty. A boolean realisability algebra satisfies Stability.
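The remark can be checked mechanically. The following Python sketch (our own set-theoretic encoding) computes ⟦M⟧− = {n ∈ N | ∀p ∈ ⟦M⟧+, n ⊥ p} and shows that, with an empty orthogonality relation, the universal condition is vacuous exactly when ⟦M⟧+ is empty:

```python
# Compute the negative interpretation of a molecule from its positive one,
# for finite sets; `orth` is the orthogonality relation as a set of pairs.
def neg_interp(pos_interp, N, orth):
    return {n for n in N if all((n, p) in orth for p in pos_interp)}

N = {"n1", "n2"}
assert neg_interp(set(), N, orth=set()) == N       # ⟦M⟧+ = ∅  →  ⟦M⟧− = N
assert neg_interp({"p"}, N, orth=set()) == set()   # ⟦M⟧+ ≠ ∅  →  ⟦M⟧− = ∅
```

Stability then holds trivially: its hypothesis ⟦f(p)⟧ρ;δ ∈ ⊥ can never be met when ⊥ = ∅.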
THEOREM 4 (Consistency of LAF instances) Assume we have a well-founded LAF instance, and a boolean realisability algebra for it, where typing correlation holds and there is an empty semantic context ρ∅. Then there is no empty typing context Γ∅ and command t such that Γ∅ ⊢ t.
Proof: The previous remark provides Stability. If there were such a Γ∅ and t, then we would have ρ∅ ∈ ⟦Γ∅⟧, and the Adequacy Lemma (Lemma 3) would conclude ⟦t⟧ρ∅ ∈ ∅.
We provide such a realisability model that works with all “parametric” LAF instances:
DEFINITION 15 (Trivial model for parametric LAF instances)
Assume we have a parametric LAF instance, i.e. an instance where the typing context algebra Co is the instance GA,M of a family of context algebras (GA,B)A,B whose notion of extension is defined parametrically in A, B. The trivial boolean model for it is:
L := P := N := unit
⊥ := ∅
∀δ ∈ D̃, p̃(δ) := ()
∀f : Pat ⇀ Terms, ∀ρ ∈ C̃o, ⟦f⟧ρ := ()
∀a ∈ A, ⟦a⟧ := unit
C̃o := Gunit,unit
and therefore
∀ρ ∈ C̃o, ∀x+ ∈ dom+(ρ), ρ[x+] := ()
∀ρ ∈ C̃o, ∀x− ∈ dom−(ρ), ρ[x−] := ()
Note that not only can ⟦M⟧− only take one of the two values ∅ or unit, but ⟦M⟧+ can also only take one of the two values ∅ or unit.
We can now use such a structure to derive consistency of parametric LAF instances:
COROLLARY 5 (Consistency for parametric LAF instances) Assume we have a parametric LAF instance that is well-founded, and assume there is an empty (unit, unit)-context in Gunit,unit. Then there is no empty typing context Γ∅ and command t such that Γ∅ ⊢ t.
In particular, this is the case for LAFK1.
The system LAFJ does not fall in the above category since the operation of context extension is not
parametric enough: when computing Γ; a (resp. Γ; ∼M), we have to make a case analysis on whether a
is of the form (l + , r) or (v, l) (resp. whether M is of the form (N, l) or (P, r)).
But we can easily adapt the above trivial model into a not-as-trivial-but-almost model for LAFJ , as is
shown in [GL14a], Ch. 6.
6  Conclusion and Further Work
6.1  Contributions
In this paper we have used, and slightly tweaked, a system with proof-terms proposed by Zeilberger
for big-step focussing [Zei08a], which abstracts away from the idiosyncrasies of logical connectives and
(to some extent) logical systems: In particular we have shown how two focussed sequent calculi of the
literature, namely LKF and LJF [LM09], are captured as instances of this abstract focussing system LAF.
Building on Munch-Maccagnoni’s description [MM09] of classical realisability in the context of
polarised classical logic and focussing, we have then presented the realisability models for LAF, thus
providing a generic approach to the denotational semantics of focussed proofs. Central to this is the
Adequacy Lemma 3, which connects typing and realisability, and our approach is generic in that the
Adequacy Lemma is proved once and for all, holding for all focussed systems that can be captured as
LAF instances.
Incidentally, a by-product of this paper is that we have provided proof-term assignments for LKF and LJF, and provided realisability semantics for their proofs. We believe this to be new.
But showing the Adequacy Lemma at this level of abstraction was also motivated by the will to
exhibit how close typing and realisability are while differing in an essential way:
6.2  Typing vs. realisability
Concerning the positives, typing and realisability simply mimic each other. Ignoring contexts,
• in typing, ⊢ [t+ : M] means t+ is of the form pd with ∆ ⊩ p : M and ⊢ d : ∆ for some ∆;
• in realisability, ⟦t+⟧ ∈ ⟦M⟧+ means ⟦t+⟧ is of the form p̃(δ) with ∆ ⊩ p : M and δ ∈ ⟦∆⟧ for some ∆.
Concerning the negatives, it is appealing to relate the quantification in rule async with the quantification in the orthogonality construction:
• in typing, ⊢ f : ∼M means that for all p and ∆ such that ∆ ⊩ p : M, we have ; ∆ ⊢ f(p) (∆ extending the empty typing context);
• in realisability, ⟦f⟧ ∈ ⟦M⟧− means that for all p and ∆ such that ∆ ⊩ p : M, for all δ ∈ ⟦∆⟧ we have ⟦f⟧ ⊥ p̃(δ), usually obtained from ⟦f(p)⟧;δ ∈ ⊥ (δ extending the empty semantic context).
In both cases, a "contradiction" needs to be reached for all p and ∆ decomposing M. But in typing, the proof-term f(p) witnessing the contradiction can only depend on the pattern p and must treat its holes (whose types are aggregated in ∆) parametrically, while in realisability, the reason why ⟦f⟧ ⊥ p̃(δ) holds, though it may rely on the computation of f(p), can differ for every δ ∈ ⟦∆⟧. It is the same difference that lies between the usual rule for ∀ in a Natural Deduction system for arithmetic, and the ω-rule [Hil31, Sch50]:
   A(n)                       A(0)   A(1)   A(2)   . . .
──────── ∀-intro             ─────────────────────────── ω
  ∀n A                                  ∀n A
The difference explains why typing is (usually) decidable, while realisability is not, and contributes
to the idea that “typing is but a decidable approximation of realisability”.
We believe that we have brought typing and realisability as close as they could get, emphasising
where they coincide and where they differ, in a framework stripped from the idiosyncrasies of logics,
of their connectives, and of the implementation of those functions we use for witnessing refutations,
i.e. inhabiting negatives (e.g. the λ of λ -calculus).
6.3  Coq formalisation and additional results
In this paper we have also exhibited simple realisability models to prove the consistency of LAF instances.
The parameterised LAF system has been formalised in Coq [GL14b], together with realisability algebras. The Adequacy Lemma 3 and the Consistency result (Corollary 5) are proved there as well. Because of the abstract level of the formalism, very few concrete structures are used, only one for (A, B)-decompositions and one for proof-terms; rather, Coq's records are used to formalise the algebras used throughout the paper, declaring type fields for the algebras' support sets, term fields for operations, and proof fields for specifications. Coercions between records (and a few structures declared to be canonical) are used to lighten the proof scripts.
Besides this, the Coq formalisation presents no major surprises. It contributes to the corpus and
promotion of machine-checked theories and proofs. However, formalising this in Coq was a particularly
enlightening process, directly driving the design and definitions of the concepts. In getting to the essence
of focussing and stripping the systems from the idiosyncrasies of logics and of their connectives, Coq was
particularly useful: Developing the proofs led to identifying the concepts (e.g. what a typing context is),
with their basic operations and their minimal specifications. Definitions and hypotheses (e.g. the three
hypotheses of the Adequacy Lemma) were systematically weakened to the minimal form that would let
proofs go through. Lemma statements were identified so as to exactly fill in the gaps of inductive proofs,
and no more.
The formalisation was actually done for a first-order version of LAF, that is fully described in [GL14a].
That in itself forms a proper extension of Zeilberger’s systems [Zei08a]. In this paper though we chose
to stick to the propositional fragment to simplify the presentation.
Regarding realisability models, more interesting examples than those given here to prove consistency
can obviously be built to illustrate this kind of semantics. In particular, realisability models built from
the term syntax can be used to prove normalisation properties of LAF, as shown in [GL14a]. Indeed,
one of the appeals of big-step focussing systems is an elegant notion of cut-reduction, based on rewrite
rules on proof-terms and with a functional programming interpretation in terms of pattern-matching. A
cut-reduction system at the abstraction level of LAF is given in [GL14a], in terms of an abstract machine
(performing head-reduction). A command t is evaluated in an evaluation context ρ; denoting such a pair as ⟨⟨t | ρ⟩⟩, we have the main reduction rule (where d′ stands for the evaluation of d in the evaluation context ρ):
⟨⟨⟨f | pd⟩ | ρ⟩⟩ −→ ⟨⟨f(p) | ρ; d′⟩⟩
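The head-reduction rule above can be sketched as a one-step evaluator. The encoding below is ours (not the paper's or the Coq development's): a cut command ⟨f | pd⟩ is a tuple `('cut', f, ('pos', p, d))` with f a dict from patterns to commands, and an evaluation context ρ is a list of evaluated decomposition terms:

```python
def step(command, rho):
    """One head-reduction step:  <<<f | pd> | rho>>  -->  <<f(p) | rho; d'>>."""
    kind, f, (_, p, d) = command            # expect a cut command <f | pd>
    assert kind == 'cut'
    d_evaluated = ('eval', d, tuple(rho))   # d' : d evaluated in rho (kept symbolic)
    return f[p], rho + [d_evaluated]        # run f on pattern p, extend the context

# A tiny run: f maps pattern 'p0' to some residual command.
f = {'p0': ('select', 'x-', ('pos', 'p1', 'unit'))}
cmd, rho2 = step(('cut', f, ('pos', 'p0', 'unit')), [])
assert cmd == ('select', 'x-', ('pos', 'p1', 'unit'))
assert rho2 == [('eval', 'unit', ())]
```

The machine only performs head reduction: it never evaluates inside the functions f, which is precisely the difficulty mentioned below for formalising full cut-elimination.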
Normalisation of this system (for a well-founded LAF instance) is proved by building a syntactic realisability model, in which orthogonality holds when the interaction between a negative denotation and a positive one is normalising. This model, together with the head normalisation result, is also formalised in Coq [GL14b]. It forms a formal connection, via the orthogonality techniques, between proofs of normalisation à la Tait-Girard and realisability. From this termination result, an informal
argument is proposed [GL14a] to infer cut-elimination, but the argument still needs to be formalised in Coq. This is tricky: since cut-elimination needs to be performed arbitrarily deeply in a proof-tree ("under lambdas"), we need to formalise a notion of reduction on those functions we use for witnessing refutations, for which we have no syntax.
Finally, more work needs to be done on formalising the connections between LAF and other related
systems: firstly, LAF is very strongly related to ludics [Gir01], a system for big-step focussing for linear
logic, and which is also related to game semantics. LAF can be seen as a non-linear variant of ludics, our proof-term syntax more or less corresponding to ludics' designs. But in order to get linearity, LAF
would need to force it in the typing rules for the decomposition terms x+ , •, and d1 , d2 . It would also be
interesting to investigate whether or how LAF could be adapted to modal logics.
References
[And92] J. M. Andreoli. Logic programming with focusing proofs in linear logic. J. Logic Comput., 2(3):297–347, 1992. DOI:10.1093/logcom/2.3.297
[DJS95] V. Danos, J.-B. Joinet, and H. Schellinx. LKQ and LKT: sequent calculi for second order logic based upon dual linear decompositions of classical implication. In J.-Y. Girard, Y. Lafont, and L. Regnier, editors, Proc. of the Work. on Advances in Linear Logic, volume 222 of London Math. Soc. Lecture Note Ser., pages 211–224. Cambridge University Press, 1995.
[DK00] V. Danos and J.-L. Krivine. Disjunctive tautologies as synchronisation schemes. In P. Clote and H. Schwichtenberg, editors, Proc. of the 9th Annual Conf. of the European Association for Computer Science Logic (CSL'00), volume 1862 of LNCS, pages 292–301. Springer-Verlag, 2000. DOI:10.1007/3-540-44622-2_19
[DL07] R. Dyckhoff and S. Lengrand. Call-by-value λ-calculus and LJQ. J. Logic Comput., 17:1109–1134, 2007. DOI:10.1093/logcom/exm037
[Gir87] J.-Y. Girard. Linear logic. Theoret. Comput. Sci., 50(1):1–101, 1987. DOI:10.1016/0304-3975(87)90045-4
[Gir91] J.-Y. Girard. A new constructive logic: Classical logic. Math. Structures in Comput. Sci., 1(3):255–296, 1991. DOI:10.1017/S0960129500001328
[Gir01] J.-Y. Girard. Locus solum: From the rules of logic to the logic of rules. Math. Structures in Comput. Sci., 11(3):301–506, 2001. DOI:10.1017/S096012950100336X
[GL14a] S. Graham-Lengrand. Polarities & Focussing: a journey from Realisability to Automated Reasoning. Habilitation thesis, Université Paris-Sud, 2014. Available at http://hal.archives-ouvertes.fr/tel-01094980
[GL14b] S. Graham-Lengrand. Polarities & focussing: a journey from realisability to automated reasoning – Coq proofs of Part II, 2014. http://www.lix.polytechnique.fr/~lengrand/Work/HDR/
[Hil31] D. Hilbert. Die Grundlegung der elementaren Zahlenlehre. Mathematische Annalen, 104:485–494, 1931.
[Kle45] S. Kleene. On the interpretation of intuitionistic number theory. J. of Symbolic Logic, 10:109–124, 1945. DOI:10.2307/2269016
[Kri01] J.-L. Krivine. Typed lambda-calculus in classical Zermelo-Frænkel set theory. Arch. Math. Log., 40(3):189–205, 2001. DOI:10.1007/s001530000057
[LM09] C. Liang and D. Miller. Focusing and polarization in linear, intuitionistic, and classical logics. Theoret. Comput. Sci., 410(46):4747–4768, 2009. DOI:10.1016/j.tcs.2009.07.041
[MM09] G. Munch-Maccagnoni. Focalisation and classical realisability. In E. Grädel and R. Kahle, editors, Proc. of the 18th Annual Conf. of the European Association for Computer Science Logic (CSL'09), volume 5771 of LNCS, pages 409–423. Springer-Verlag, 2009. DOI:10.1007/978-3-642-04027-6_30
[Rey83] J. C. Reynolds. Types, abstraction and parametric polymorphism. In R. E. A. Mason, editor, Proc. of the IFIP 9th World Computer Congress – Information Processing, pages 513–523. North-Holland, 1983.
[Sch50] K. Schütte. Beweistheoretische Erfassung der unendlichen Induktion in der Zahlentheorie. Mathematische Annalen, 122:369–389, 1950.
[Zei08a] N. Zeilberger. Focusing and higher-order abstract syntax. In G. C. Necula and P. Wadler, editors, Proc. of the 35th Annual ACM Symp. on Principles of Programming Languages (POPL'08), pages 359–369. ACM Press, 2008. DOI:10.1145/1328438.1328482
[Zei08b] N. Zeilberger. On the unity of duality. Ann. Pure Appl. Logic, 153(1-3):66–96, 2008. DOI:10.1016/j.apal.2008.01.001
Multiplicative-Additive Focusing for Parsing as Deduction

Glyn Morrill                          Oriol Valentín
Department of Computer Science
Universitat Politècnica de Catalunya
Barcelona
morrill@cs.upc.edu                    oriol.valentin@gmail.com
Spurious ambiguity is the phenomenon whereby distinct derivations in grammar may assign the same structural reading, resulting in redundancy in the parse search space and inefficiency in parsing. Understanding the problem depends on identifying the essential mathematical structure of derivations. This is trivial in the case of context free grammar, where the parse structures are ordered trees; in the case of type logical categorial grammar, the parse structures are proof nets. However, with respect to multiplicatives, intrinsic proof nets have not yet been given for displacement calculus, and proof nets for additives, which have applications to polymorphism, are not easy to characterise. Here we approach multiplicative-additive spurious ambiguity by means of the proof-theoretic technique of focalisation.
1  Introduction
In context free grammar (CFG) sequential rewriting derivations exhibit spurious ambiguity: distinct
rewriting derivations may correspond to the same parse structure (tree) and the same structural reading.1
In this case it is transparent to develop parsing algorithms avoiding spurious ambiguity by reference to
parse trees. In categorial grammar (CG) the problem is more subtle. The Cut-free Lambek sequent proof
search space is finite, but involves a combinatorial explosion of spuriously ambiguous sequential proofs.
This can be understood, analogously to CFG, as inessential rule reorderings, which we parallelise in
underlying geometric parse structures which are (planar) proof nets.
The planarity of Lambek proof nets reflects that the formalism is continuous or concatenative. But
the challenge of natural grammar is discontinuity or apparent displacement, whereby there is syntactic/semantic mismatch, or elements appearing out of place. Hence the subsumption of Lambek calculus
by displacement calculus D including intercalation as well as concatenation [17].
Proof nets for D must be partially non-planar; steps towards intrinsic correctness criteria for displacement proof nets are made in [5] and [13]. Additive proof nets are considered in [7] and [1]. However,
even in the case of Lambek calculus, parsing by reference to intrinsic criteria [14], [18], appendix B, is
not more efficient than parsing by reference to extrinsic criteria of normalised sequent calculus [6]. On
the other hand, normalisation does not extend to the product left rules and product unit left rules,
nor to the additives. The focalisation of [2] is a methodology midway between proof nets and normalisation.
Here we apply the focusing discipline to the parsing as deduction of D with additives.
In [4] multifocusing is defined for unit-free MALL,2 providing canonical sequent proofs; an eventual
goal would be to formulate multifocusing for multiplicative-additive categorial logic and for categorial
1. Research partially supported by SGR2014-890 (MACDA) of the Generalitat de Catalunya and MINECO project APCOM (TIN2014-57226-P), and by an ICREA Acadèmia 2012 to GM. Thanks to three anonymous WoF reviewers for comments and suggestions, and to Iliano Cervesato for editorial attention. All errors are our own.
2. Here we include units, which are linguistically relevant.
I. Cervesato and C. Schürmann (Eds.): First Workshop on Focusing (WoF'15), EPTCS 197, 2015, pp. 29–54, doi:10.4204/EPTCS.197.4
© Morrill and Valentín. This work is licensed under the Creative Commons Attribution License.
MA focusing for Parsing
30
logic generally. In this respect the present paper represents an intermediate step. Note that [19] develops focusing for Lambek calculus with additives, but not for displacement logic, for which we show
completeness of focusing here.
1.1 Spurious ambiguity in CFG
Consider the following production rules:
S → QP VP
QP → Q CN
VP → TV N
These generate the following sequential rewriting derivations:
S → QP VP → Q CN VP → Q CN TV N
S → QP VP → QP TV N → Q CN TV N
These sequential rewriting derivations correspond to the same parallelised parse structure:
  [S [QP Q CN] [VP TV N]]
And they correspond to the same structural reading; sequential rewriting has spurious ambiguity.
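The collapse of many sequential derivations onto a single parse structure can be checked mechanically. The following Python sketch (a toy illustration, not part of the paper) enumerates the sequential rewriting derivations of the grammar above:

```python
# Toy illustration of spurious ambiguity in CFG sequential rewriting:
# distinct derivations, one terminal string (and one parse tree).
RULES = {"S": ["QP", "VP"], "QP": ["Q", "CN"], "VP": ["TV", "N"]}

def derivations(sent):
    """Yield every sequential rewriting derivation from `sent` to a
    terminal string, as a list of successive sentential forms."""
    nonterms = [i for i, s in enumerate(sent) if s in RULES]
    if not nonterms:
        yield [sent]
        return
    for i in nonterms:                   # choose which nonterminal to rewrite
        step = sent[:i] + RULES[sent[i]] + sent[i + 1:]
        for rest in derivations(step):
            yield [sent] + rest

ds = list(derivations(["S"]))
print(len(ds))                           # 2 sequential derivations
print({tuple(d[-1]) for d in ds})        # but a single terminal string
```

Both derivations end in Q CN TV N and parallelise to the single tree shown above.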
1.2 Spurious ambiguity in CG
Lambek calculus is a logic of strings with the operation + of concatenation. Recall the definitions of
types, configurations and sequents in the Lambek calculus L [11], in terms of a set P of primitive types
(the original Lambek calculus did not include the product unit):
(1) Types          F ::= P | F/F | F\F | F•F
    Configurations O ::= Λ | F, O
    Sequents       Σ ::= O ⇒ F
Lambek calculus types have the following interpretation:
[[C/B]] = {s1 | ∀s2 ∈ [[B]], s1 +s2 ∈ [[C]]}
[[A\C]] = {s2 | ∀s1 ∈ [[A]], s1 +s2 ∈ [[C]]}
[[A•B]] = {s1 +s2 | s1 ∈ [[A]] & s2 ∈ [[B]]}
The logical rules of L are as follows:

/L: from Γ ⇒ B and ∆(C) ⇒ D infer ∆(C/B, Γ) ⇒ D
\L: from Γ ⇒ A and ∆(C) ⇒ D infer ∆(Γ, A\C) ⇒ D
•L: from ∆(A, B) ⇒ D infer ∆(A•B) ⇒ D
/R: from Γ, B ⇒ C infer Γ ⇒ C/B
\R: from A, Γ ⇒ C infer Γ ⇒ A\C
•R: from Γ1 ⇒ A and Γ2 ⇒ B infer Γ1, Γ2 ⇒ A•B
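To make the rules concrete, here is a minimal backward-chaining prover for the product-free fragment of L — a sketch under the usual restriction of axioms to atoms; the encoding of types as nested tuples is our own assumption for illustration:

```python
# Minimal backward-chaining prover for product-free Lambek calculus L.
# Atoms are strings; ("\\", A, C) encodes A\C and ("/", C, B) encodes C/B.

def prove(ant, suc):
    # axiom, restricted to atoms (complete for cut-free product-free L)
    if len(ant) == 1 and ant[0] == suc and isinstance(suc, str):
        return True
    # right rules are invertible, so apply them eagerly
    if isinstance(suc, tuple):
        op, x, y = suc
        if op == "\\":                   # Gamma => A\C  from  A, Gamma => C
            return prove([x] + ant, y)
        if op == "/":                    # Gamma => C/B  from  Gamma, B => C
            return prove(ant + [y], x)
    # left rules: pick an implication and split off a nonempty context Gamma
    for i, t in enumerate(ant):
        if not isinstance(t, tuple):
            continue
        op, x, y = t
        if op == "\\":                   # Delta(Gamma, A\C) => D
            for j in range(i):           # Gamma = ant[j:i]
                if prove(ant[j:i], x) and prove(ant[:j] + [y] + ant[i+1:], suc):
                    return True
        if op == "/":                    # Delta(C/B, Gamma) => D
            for j in range(i + 2, len(ant) + 1):   # Gamma = ant[i+1:j]
                if prove(ant[i+1:j], y) and prove(ant[:i] + [x] + ant[j:], suc):
                    return True
    return False

# John loves Mary: N, (N\S)/N, N => S
tv = ("/", ("\\", "N", "S"), "N")
print(prove(["N", tv, "N"], "S"))        # True
```

Enumerating all successful searches instead of stopping at the first one exhibits exactly the spurious ambiguity discussed next: distinct rule orderings yielding the same proof net.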
Morrill and Valentı́n
31
Derivation 1 (written as a list of successively derived sequents):
  N, N\S ⇒ S                         (\L, from N ⇒ N and S ⇒ S)
  N, (N\S)/N, N ⇒ S                  (/L, from N ⇒ N and the previous sequent)
  (N\S)/N, N ⇒ N\S                   (\R)
  S/(N\S), (N\S)/N, N ⇒ S            (/L, from the previous sequent and S ⇒ S)
  (S/(N\S))/CN, CN, (N\S)/N, N ⇒ S   (/L, from CN ⇒ CN and the previous sequent)

Derivation 2:
  N, N\S ⇒ S                         (\L, from N ⇒ N and S ⇒ S)
  N\S ⇒ N\S                          (\R)
  S/(N\S), N\S ⇒ S                   (/L, from the previous sequent and S ⇒ S)
  (S/(N\S))/CN, CN, N\S ⇒ S          (/L, from CN ⇒ CN and the previous sequent)
  (S/(N\S))/CN, CN, (N\S)/N, N ⇒ S   (/L, from N ⇒ N and the previous sequent)

Figure 1: Spurious ambiguity
[The planar proof net for the sequent of Figure 1, linking the polarised type occurrences (S◦, N•, CN◦, CN•, etc.) through the connectives /•, \•, \◦; the graph cannot be reproduced in plain text.]

Figure 2: Proof net
Even amongst Cut-free proofs there is spurious ambiguity; consider for example the sequential derivations of Figure 1. These have the same parallelised parse structure, the proof net of Figure 2.
Lambek proof structures are planar graphs which must satisfy certain global and local properties to
be correct as proofs (proof nets). Proof nets provide a geometric perspective on derivational equivalence.
Alternatively we may identify the same algebraic parse structure (Curry-Howard term):
((xQ xCN ) λ x((xTV xN ) x))
But Lambek calculus is continuous (hence the planarity). A major issue in grammar is discontinuity, hence
the displacement calculus.
2 D with additives, DA
In this section we present displacement calculus D, and a displacement logic DA comprising D with
additives. Although D is indeed a conservative extension of L, we think of it not just as an extension of
Lambek calculus but as a generalisation, because it involves a whole new machinery of sequent calculus
to deal with discontinuity. Displacement calculus is a logic of discontinuous strings — strings punctuated
by a separator 1 and subject to operations of append and plug; see Figure 3. Recall the definition of types
and their sorts, configurations and their sorts, and sequents, for the displacement calculus with additives:
The sort-polymorphic operations on (discontinuous) strings are append, +: Li, Lj → Li+j, which concatenates α and β, and plug, ×k: Li+1, Lj → Li+j, which replaces the kth separator 1 of α by β.

Figure 3: Append and plug
(2) Types
    Fi    ::= Fi+j/Fj
    Fj    ::= Fi\Fi+j
    Fi+j  ::= Fi•Fj
    F0    ::= I
    Fi+1  ::= Fi+j↑kFj,    1 ≤ k ≤ i+1
    Fj    ::= Fi+1↓kFi+j,  1 ≤ k ≤ i+1
    Fi+j  ::= Fi+1⊙kFj,    1 ≤ k ≤ i+1
    F1    ::= J
    Fi    ::= Fi&Fi
    Fi    ::= Fi⊕Fi

Sort sA = the i s.t. A ∈ Fi. For example, s((S↑1N)↑2N) = s((S↑1N)↑1N) = 2, where sN = sS = 0.
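The sort arithmetic implicit in (2) can be sketched in Python; the tuple encoding of types and the atom lexicon are assumptions for illustration:

```python
# Sort computation for DA types, following the sorting discipline in (2):
# s(C/B) = sC - sB, s(A\C) = sC - sA, s(A•B) = sA + sB, sI = 0,
# s(C↑B) = sC - sB + 1, s(A↓C) = sC - sA + 1, s(A⊙B) = sA + sB - 1, sJ = 1.

ATOM_SORT = {"N": 0, "S": 0, "CN": 0}    # hypothetical lexicon of atoms

def sort(t):
    if isinstance(t, str):
        if t == "I":
            return 0
        if t == "J":
            return 1
        return ATOM_SORT[t]
    op, a, b = t
    if op == "/":  return sort(a) - sort(b)      # a = C, b = B
    if op == "\\": return sort(b) - sort(a)      # a = A, b = C
    if op == "•":  return sort(a) + sort(b)
    if op == "↑":  return sort(a) - sort(b) + 1  # a = C, b = B
    if op == "↓":  return sort(b) - sort(a) + 1  # a = A, b = C
    if op == "⊙":  return sort(a) + sort(b) - 1
    if op in ("&", "⊕"):                         # both sides share a sort
        return sort(a)
    raise ValueError(op)

# s((S↑N)↑N) = 2, matching the example in the text (sN = sS = 0)
print(sort(("↑", ("↑", "S", "N"), "N")))  # 2
```

The k subscripts of the discontinuous connectives do not affect sorts, so they are omitted in the encoding.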
Configurations O ::= Λ | T, O
               T ::= 1 | F0 | Fi>0{O : … : O}   (with i O's)

For example, there is the configuration (S↑1N)↑2N{N, 1 : S↑1N, S}, 1, N, 1.
Sort sO = |O|1, the number of separators in O. For example, s((S↑1N)↑2N{N, 1 : S↑1N, S}, 1, N, 1) = 3.
Sequents Σ ::= O ⇒ A s.t. sO = sA
The figure →A of a type A is defined by: →A = A if sA = 0, and →A = A{1 : … : 1} (with sA 1's) if sA > 0.
Where Γ is a configuration of sort i and ∆1, …, ∆i are configurations, the fold Γ ⊗ ⟨∆1 : … : ∆i⟩ is the result of replacing the successive 1's in Γ by ∆1, …, ∆i respectively.
Where ∆ is a configuration of sort i > 0 and Γ is a configuration, the kth metalinguistic wrap ∆ |k Γ, 1 ≤ k ≤ i, is given by
(3) ∆ |k Γ =df ∆ ⊗ ⟨1 : … : 1 : Γ : 1 : … : 1⟩, with k−1 1's before Γ and i−k 1's after,
i.e. ∆ |k Γ is the configuration resulting from replacing by Γ the kth separator in ∆.
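Fold and metalinguistic wrap can be sketched in Python, under an assumed flat-list encoding of configurations (segmented types of sort > 0 are not modelled here):

```python
# Fold and metalinguistic wrap on configurations encoded as flat lists
# whose elements are type names or the separator "1" (an assumed encoding).

def fold(gamma, deltas):
    """Γ ⊗ ⟨∆1 : ... : ∆i⟩ — replace the successive 1's in Γ by the ∆'s."""
    out, ds = [], list(deltas)
    for x in gamma:
        out.extend(ds.pop(0) if x == "1" else [x])
    assert not ds, "Γ must contain exactly as many 1's as there are ∆'s"
    return out

def wrap(delta, k, gamma):
    """∆ |k Γ — replace the kth separator in ∆ by Γ, per definition (3)."""
    i = sum(1 for x in delta if x == "1")
    return fold(delta, [["1"]] * (k - 1) + [gamma] + [["1"]] * (i - k))

print(wrap(["A", "1", "B", "1"], 2, ["C", "D"]))  # ['A', '1', 'B', 'C', 'D']
```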
In broad terms, syntactical interpretation of displacement calculus is as follows:
[[C/B]]    = {s1 | ∀s2 ∈ [[B]], s1+s2 ∈ [[C]]}
[[A\C]]    = {s2 | ∀s1 ∈ [[A]], s1+s2 ∈ [[C]]}
[[A•B]]    = {s1+s2 | s1 ∈ [[A]] & s2 ∈ [[B]]}
[[I]]      = {0}
[[C↑kB]]   = {s1 | ∀s2 ∈ [[B]], s1 ×k s2 ∈ [[C]]}
[[A↓kC]]   = {s2 | ∀s1 ∈ [[A]], s1 ×k s2 ∈ [[C]]}
[[A⊙kB]]   = {s1 ×k s2 | s1 ∈ [[A]] & s2 ∈ [[B]]}
[[J]]      = {1}
The logical rules of the displacement calculus with additives are as follows, where ∆⟨Γ⟩ abbreviates ∆0(Γ ⊗ ⟨∆1 : … : ∆i⟩) and →A denotes the figure of A:

/L:  from Γ ⇒ B and ∆⟨→C⟩ ⇒ D infer ∆⟨→(C/B), Γ⟩ ⇒ D
/R:  from Γ, →B ⇒ C infer Γ ⇒ C/B
\L:  from Γ ⇒ A and ∆⟨→C⟩ ⇒ D infer ∆⟨Γ, →(A\C)⟩ ⇒ D
\R:  from →A, Γ ⇒ C infer Γ ⇒ A\C
•L:  from ∆⟨→A, →B⟩ ⇒ D infer ∆⟨→(A•B)⟩ ⇒ D
•R:  from Γ1 ⇒ A and Γ2 ⇒ B infer Γ1, Γ2 ⇒ A•B
IL:  from ∆⟨Λ⟩ ⇒ A infer ∆⟨→I⟩ ⇒ A
IR:  Λ ⇒ I
↑kL: from Γ ⇒ B and ∆⟨→C⟩ ⇒ D infer ∆⟨→(C↑kB) |k Γ⟩ ⇒ D
↑kR: from Γ |k →B ⇒ C infer Γ ⇒ C↑kB
↓kL: from Γ ⇒ A and ∆⟨→C⟩ ⇒ D infer ∆⟨Γ |k →(A↓kC)⟩ ⇒ D
↓kR: from →A |k Γ ⇒ C infer Γ ⇒ A↓kC
⊙kL: from ∆⟨→A |k →B⟩ ⇒ D infer ∆⟨→(A⊙kB)⟩ ⇒ D
⊙kR: from Γ1 ⇒ A and Γ2 ⇒ B infer Γ1 |k Γ2 ⇒ A⊙kB
JL:  from ∆⟨1⟩ ⇒ A infer ∆⟨→J⟩ ⇒ A
JR:  1 ⇒ J
&L1: from Γ⟨→A⟩ ⇒ C infer Γ⟨→(A&B)⟩ ⇒ C
&L2: from Γ⟨→B⟩ ⇒ C infer Γ⟨→(A&B)⟩ ⇒ C
&R:  from Γ ⇒ A and Γ ⇒ B infer Γ ⇒ A&B
⊕L:  from Γ⟨→A⟩ ⇒ C and Γ⟨→B⟩ ⇒ C infer Γ⟨→(A⊕B)⟩ ⇒ C
⊕R1: from Γ ⇒ A infer Γ ⇒ A⊕B
⊕R2: from Γ ⇒ B infer Γ ⇒ A⊕B
The continuous multiplicatives {/, \, •, I} of Lambek (1958[11]; 1988[10]), are the basic means
of categorial (sub)categorization. The directional divisions over, /, and under, \, are exemplified by
assignments such as the: N/CN for the man: N and sings: N\S for John sings: S, and loves: (N\S)/N for
John loves Mary: S. Hence, for the man:
  N/CN, CN ⇒ N                       (/L, from CN ⇒ CN and N ⇒ N)
And for John sings and John loves Mary:
  N, N\S ⇒ S                         (\L, from N ⇒ N and S ⇒ S)
and
  N, N\S ⇒ S                         (\L, from N ⇒ N and S ⇒ S)
  N, (N\S)/N, N ⇒ S                  (/L, from N ⇒ N and the previous sequent)
The continuous product • is exemplified by a ‘small clause’ assignment such as considers: (N\S)/
(N•(CN/CN)) for John considers Mary socialist: S.
  CN/CN, CN ⇒ CN                     (/L, from CN ⇒ CN and CN ⇒ CN)
  CN/CN ⇒ CN/CN                      (/R)
  N, CN/CN ⇒ N•(CN/CN)               (•R, from N ⇒ N and the previous sequent)
  N, N\S ⇒ S                         (\L, from N ⇒ N and S ⇒ S)
  N, (N\S)/(N•(CN/CN)), N, CN/CN ⇒ S (/L, from N, CN/CN ⇒ N•(CN/CN) and N, N\S ⇒ S)
Of course this use of product is not essential: we could just as well have used ((N\S)/(CN/CN))/N
since in general we have both A/(C•B) ⇒ (A/B)/C (currying) and (A/B)/C ⇒ A/(C•B) (uncurrying).
The discontinuous multiplicatives {↑, ↓, ⊙, J}, the displacement connectives, of Morrill and Valentı́n
(2010[16]), Morrill et al. (2011[17]), are defined in relation to intercalation. When the value of the k subscript is one it may be omitted, i.e. it defaults to one. Circumfixation, or extraction, ↑, is exemplified by
a discontinuous idiom assignment gives+1+the+cold+shoulder: (N\S)↑N for Mary gives the man the
cold shoulder: S:
  N/CN, CN ⇒ N                       (/L, from CN ⇒ CN and N ⇒ N)
  N, N\S ⇒ S                         (\L, from N ⇒ N and S ⇒ S)
  N, (N\S)↑N{N/CN, CN} ⇒ S           (↑L, from N/CN, CN ⇒ N and N, N\S ⇒ S)
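On strings, the effect of ×k underlying this derivation can be sketched as follows; the +-separated string encoding is an assumption for illustration:

```python
# Plugging a string into the separator 1 of a discontinuous lexical string,
# as for gives+1+the+cold+shoulder (toy encoding, not from the paper).

def plug(s, t, k=1):
    """×k: replace the kth separator '1' in s by t."""
    parts = s.split("+")
    idx = [i for i, p in enumerate(parts) if p == "1"][k - 1]
    return "+".join(parts[:idx] + t.split("+") + parts[idx + 1:])

print(plug("gives+1+the+cold+shoulder", "the+man"))
# gives+the+man+the+cold+shoulder
```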
Infixation, ↓, and extraction together are exemplified by a quantifier assignment everyone: (S↑N)↓S
simulating Montague’s S14 quantifying in:
  …, N, … ⇒ S
  …, 1, … ⇒ S↑N                      (↑R)
  …, (S↑N)↓S, … ⇒ S                  (↓L, from the previous sequent and S ⇒ S (id))
Circumfixation and discontinuous product, ⊙, are illustrated in an assignment to a relative pronoun
that: (CN\CN)/((S↑N)⊙I) allowing both peripheral and medial extraction, that John likes: CN\CN and
that John saw today: CN\CN:
  N, (N\S)/N, N ⇒ S
  N, (N\S)/N, 1 ⇒ S↑N                          (↑R)
  N, (N\S)/N ⇒ (S↑N)⊙I                         (⊙R, with Λ ⇒ I)
  (CN\CN)/((S↑N)⊙I), N, (N\S)/N ⇒ CN\CN        (/L, with CN\CN ⇒ CN\CN)
and
  N, (N\S)/N, N, S\S ⇒ S
  N, (N\S)/N, 1, S\S ⇒ S↑N                     (↑R)
  N, (N\S)/N, S\S ⇒ (S↑N)⊙I                    (⊙R, with Λ ⇒ I)
  (CN\CN)/((S↑N)⊙I), N, (N\S)/N, S\S ⇒ CN\CN   (/L, with CN\CN ⇒ CN\CN)
The additive conjunction and disjunction {&, ⊕} of Lambek (1961[9]), Morrill (1990[15]), and
Kanazawa (1992[8]), capture polymorphism. For example the additive conjunction & can be used for
rice: N&CN as in rice grows: S and the rice grows: S:
  N&CN ⇒ N                           (&L1, from N ⇒ N)
  N&CN, N\S ⇒ S                      (\L, from the previous sequent and S ⇒ S)
and
  N/CN, CN, N\S ⇒ S
  N/CN, N&CN, N\S ⇒ S                (&L2, replacing CN by N&CN)
The additive disjunction ⊕ can be used for is: (N\S)/(N⊕(CN/CN)) as in Tully is Cicero: S and
Tully is humanist: S:
  N ⇒ N⊕(CN/CN)                              (⊕R1, from N ⇒ N)
  (N\S)/(N⊕(CN/CN)), N ⇒ N\S                 (/L, from the previous sequent and N\S ⇒ N\S)
and
  CN/CN ⇒ N⊕(CN/CN)                          (⊕R2, from CN/CN ⇒ CN/CN)
  (N\S)/(N⊕(CN/CN)), CN/CN ⇒ N\S             (/L, from the previous sequent and N\S ⇒ N\S)
3 Focalisation for DA
In focalisation, situated (antecedent/input, •, versus succedent/output, ◦) non-atomic types are classified as of negative (asynchronous) or positive (synchronous) polarity according as their rule is reversible or not; situated atoms are positive or negative according to their bias. The table below summarises the notational convention on formulas P, Q, M and N:

            input    output
   sync.      Q        P
   async.     M        N
The grammar of these types, polarised with respect to input and output occurrences, is as follows; Q and P denote synchronous formulas in input and output position respectively, whereas M and N denote asynchronous formulas in input and output position respectively (in the nonatomic case we abbreviate thus: left sync., right sync., left async. and right async.):

(4) Positive output  P ::= At+ | A•B◦ | I◦ | A⊙kB◦ | J◦ | A⊕B◦
    Positive input   Q ::= At− | C/B• | A\C• | C↑kB• | A↓kC• | A&B•
    Negative output  N ::= At− | C/B◦ | A\C◦ | C↑kB◦ | A↓kC◦ | A&B◦
    Negative input   M ::= At+ | A•B• | I• | A⊙kB• | J• | A⊕B•
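The classification (4) can be sketched as a small Python function; the string encoding of connectives and biases is our assumption for illustration:

```python
# Polarity of a situated type occurrence, following table (4):
# right-synchronous connectives are those whose right rule is non-invertible;
# input position dualises the classification; atoms are decided by bias.

POS_OUTPUT = {"•", "I", "⊙", "J", "⊕"}   # synchronous on the right
NEG_OUTPUT = {"/", "\\", "↑", "↓", "&"}  # asynchronous on the right

def polarity(conn_or_atom, position, bias=None):
    """Return 'sync' or 'async' for a situated occurrence."""
    if bias is not None:                 # atomic case: the bias decides
        if position == "output":
            return "sync" if bias == "+" else "async"
        return "sync" if bias == "-" else "async"
    right_sync = conn_or_atom in POS_OUTPUT
    if position == "output":
        return "sync" if right_sync else "async"
    return "async" if right_sync else "sync"   # input is dual to output

print(polarity("•", "input"))   # 'async': A•B on the left is invertible (•L)
print(polarity("/", "input"))   # 'sync' : C/B on the left is non-invertible
```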
Notice that if P occurs in the antecedent then this occurrence of P is negative, and so forth.
There are alternating phases of don’t-care nondeterministic negative rule application, and positive
rule application locking on to focalised formulas.
Given a sequent with no occurrences of negative formulas, one chooses a positive formula as principal formula (which is boxed; we say it is focalised) and applies proof search to its subformulas while these remain positive. When one finds a negative formula or a literal, invertible rules are applied in a don't-care nondeterministic fashion until no longer possible, at which point another positive formula is chosen, and so on.
A sequent is either unfocused, and as before, or else focused, with exactly one type boxed. The focalised logical rules are given in Figures 4–11, including Curry-Howard categorial semantic labelling. Occurrences of P, Q, M and N are supposed not to be focalised, which means that their focalised occurrences must be signalled with a box. By contrast, occurrences of A, B, C may be focalised or not.
4 Completeness of focalisation for DA
We shall be dealing with three systems: the displacement calculus DA with sequents notated ∆ ⇒ A,
the weakly focalised displacement calculus with additives DAfoc with sequents notated ∆=⇒w A, and the
strongly focalised displacement calculus with additives DAFoc with sequents notated ∆=⇒A. Sequents
of both DAfoc and DAFoc may contain at most one focalised formula, possibly A. When a DAfoc sequent is notated ∆=⇒w A ✸ foc, it means that the sequent possibly contains a (unique) focalised formula.
Otherwise, ∆=⇒w A means that the sequent does not contain a focus.
In this section we prove the strong focalisation property for the displacement calculus with additives
DA.
The focalisation property for Linear Logic was discovered by [2]. In this paper we follow the proof
idea from [12], which we adapt to the intuitionistic non-commutative case DA with twin multiplicative
modes of combination, the continuous (concatenation) and the discontinuous (intercalation) products.
The proof relies heavily on the Cut-elimination property for weakly focalised DA which is proved in
(→A denotes the figure of A)

\R:  from →A: x, Γ ⇒ C: χ infer Γ ⇒ A\C: λxχ
/R:  from Γ, →B: y ⇒ C: χ infer Γ ⇒ C/B: λyχ
•L:  from ∆⟨→A: x, →B: y⟩ ⇒ D: ω infer ∆⟨→(A•B): z⟩ ⇒ D: ω{π1z/x, π2z/y}
IL:  from ∆⟨Λ⟩ ⇒ A: φ infer ∆⟨→I: x⟩ ⇒ A: φ
↓kR: from →A: x |k Γ ⇒ C: χ infer Γ ⇒ A↓kC: λxχ
↑kR: from Γ |k →B: y ⇒ C: χ infer Γ ⇒ C↑kB: λyχ
⊙kL: from ∆⟨→A: x |k →B: y⟩ ⇒ D: ω infer ∆⟨→(A⊙kB): z⟩ ⇒ D: ω{π1z/x, π2z/y}
JL:  from ∆⟨1⟩ ⇒ A: φ infer ∆⟨→J: x⟩ ⇒ A: φ

Figure 4: Asynchronous multiplicative rules

&R: from Γ ⇒ A: φ and Γ ⇒ B: ψ infer Γ ⇒ A&B: (φ, ψ)
⊕L: from Γ⟨→A: x⟩ ⇒ C: χ1 and Γ⟨→B: y⟩ ⇒ C: χ2 infer Γ⟨→(A⊕B): z⟩ ⇒ C: z → x.χ1; y.χ2

Figure 5: Asynchronous additive rules
([·] marks focalised (boxed) occurrences; →A denotes the figure of A)

\L: from Γ ⇒ [P]: φ and ∆⟨[→Q]: z⟩ ⇒ D: ω infer ∆⟨Γ, [→(P\Q)]: y⟩ ⇒ D: ω{(y φ)/z}
\L: from Γ ⇒ [P]: φ and ∆⟨→M: z⟩ ⇒ D: ω infer ∆⟨Γ, [→(P\M)]: y⟩ ⇒ D: ω{(y φ)/z}
\L: from Γ ⇒ N: φ and ∆⟨[→Q]: z⟩ ⇒ D: ω infer ∆⟨Γ, [→(N\Q)]: y⟩ ⇒ D: ω{(y φ)/z}
\L: from Γ ⇒ N: φ and ∆⟨→M: z⟩ ⇒ D: ω infer ∆⟨Γ, [→(N\M)]: y⟩ ⇒ D: ω{(y φ)/z}
/L: from ∆⟨[→Q]: z⟩ ⇒ D: ω and Γ ⇒ [P]: ψ infer ∆⟨[→(Q/P)]: x, Γ⟩ ⇒ D: ω{(x ψ)/z}
/L: from ∆⟨[→Q]: z⟩ ⇒ D: ω and Γ ⇒ N: ψ infer ∆⟨[→(Q/N)]: x, Γ⟩ ⇒ D: ω{(x ψ)/z}
/L: from ∆⟨→M: z⟩ ⇒ D: ω and Γ ⇒ [P]: ψ infer ∆⟨[→(M/P)]: x, Γ⟩ ⇒ D: ω{(x ψ)/z}
/L: from ∆⟨→M: z⟩ ⇒ D: ω and Γ ⇒ N: ψ infer ∆⟨[→(M/N)]: x, Γ⟩ ⇒ D: ω{(x ψ)/z}

Figure 6: Left synchronous continuous multiplicative rules
([·] marks focalised occurrences; →A denotes the figure of A)

↓kL: from Γ ⇒ [P]: φ and ∆⟨[→Q]: z⟩ ⇒ D: ω infer ∆⟨Γ |k [→(P↓kQ)]: y⟩ ⇒ D: ω{(y φ)/z}
↓kL: from Γ ⇒ [P]: φ and ∆⟨→M: z⟩ ⇒ D: ω infer ∆⟨Γ |k [→(P↓kM)]: y⟩ ⇒ D: ω{(y φ)/z}
↓kL: from Γ ⇒ N: φ and ∆⟨[→Q]: z⟩ ⇒ D: ω infer ∆⟨Γ |k [→(N↓kQ)]: y⟩ ⇒ D: ω{(y φ)/z}
↓kL: from Γ ⇒ N: φ and ∆⟨→M: z⟩ ⇒ D: ω infer ∆⟨Γ |k [→(N↓kM)]: y⟩ ⇒ D: ω{(y φ)/z}
↑kL: from ∆⟨[→Q]: z⟩ ⇒ D: ω and Γ ⇒ [P]: ψ infer ∆⟨[→(Q↑kP)]: x |k Γ⟩ ⇒ D: ω{(x ψ)/z}
↑kL: from ∆⟨[→Q]: z⟩ ⇒ D: ω and Γ ⇒ N: ψ infer ∆⟨[→(Q↑kN)]: x |k Γ⟩ ⇒ D: ω{(x ψ)/z}
↑kL: from ∆⟨→M: z⟩ ⇒ D: ω and Γ ⇒ [P]: ψ infer ∆⟨[→(M↑kP)]: x |k Γ⟩ ⇒ D: ω{(x ψ)/z}
↑kL: from ∆⟨→M: z⟩ ⇒ D: ω and Γ ⇒ N: ψ infer ∆⟨[→(M↑kN)]: x |k Γ⟩ ⇒ D: ω{(x ψ)/z}

Figure 7: Left synchronous discontinuous multiplicative rules
([·] marks focalised occurrences; →A denotes the figure of A)

&L1: from Γ⟨[→Q]: x⟩ ⇒ C: χ infer Γ⟨[→(Q&B)]: z⟩ ⇒ C: χ{π1z/x}
&L1: from Γ⟨→M: x⟩ ⇒ C: χ infer Γ⟨[→(M&B)]: z⟩ ⇒ C: χ{π1z/x}
&L2: from Γ⟨[→Q]: y⟩ ⇒ C: χ infer Γ⟨[→(A&Q)]: z⟩ ⇒ C: χ{π2z/y}
&L2: from Γ⟨→M: y⟩ ⇒ C: χ infer Γ⟨[→(A&M)]: z⟩ ⇒ C: χ{π2z/y}

Figure 8: Left synchronous additive rules
([·] marks focalised occurrences)

•R: from Γ1 ⇒ [P1]: φ and Γ2 ⇒ [P2]: ψ infer Γ1, Γ2 ⇒ [P1•P2]: (φ, ψ)
•R: from Γ1 ⇒ N: φ and Γ2 ⇒ [P]: ψ infer Γ1, Γ2 ⇒ [N•P]: (φ, ψ)
•R: from Γ1 ⇒ [P]: φ and Γ2 ⇒ N: ψ infer Γ1, Γ2 ⇒ [P•N]: (φ, ψ)
•R: from Γ1 ⇒ N1: φ and Γ2 ⇒ N2: ψ infer Γ1, Γ2 ⇒ [N1•N2]: (φ, ψ)
IR: Λ ⇒ [I]: 0

Figure 9: Right synchronous continuous multiplicative rules
([·] marks focalised occurrences)

⊙kR: from Γ1 ⇒ [P1]: φ and Γ2 ⇒ [P2]: ψ infer Γ1 |k Γ2 ⇒ [P1⊙kP2]: (φ, ψ)
⊙kR: from Γ1 ⇒ N: φ and Γ2 ⇒ [P]: ψ infer Γ1 |k Γ2 ⇒ [N⊙kP]: (φ, ψ)
⊙kR: from Γ1 ⇒ [P]: φ and Γ2 ⇒ N: ψ infer Γ1 |k Γ2 ⇒ [P⊙kN]: (φ, ψ)
⊙kR: from Γ1 ⇒ N1: φ and Γ2 ⇒ N2: ψ infer Γ1 |k Γ2 ⇒ [N1⊙kN2]: (φ, ψ)
JR: 1 ⇒ [J]: 0

Figure 10: Right synchronous discontinuous multiplicative rules
([·] marks focalised occurrences)

⊕R1: from Γ ⇒ [P]: φ infer Γ ⇒ [P⊕B]: ι1φ
⊕R2: from Γ ⇒ [P]: ψ infer Γ ⇒ [A⊕P]: ι2ψ
⊕R1: from Γ ⇒ N: φ infer Γ ⇒ [N⊕B]: ι1φ
⊕R2: from Γ ⇒ N: ψ infer Γ ⇒ [A⊕N]: ι2ψ

Figure 11: Right synchronous additive rules
the appendix. In our presentation of focalisation we have avoided the react rules of [2] and [3], using instead a simpler box notation suitable for non-commutativity.
DAFoc is a subsystem of DAfoc. DAfoc has the focusing rules foc and the Cut rules p-Cut1, p-Cut2, n-Cut1 and n-Cut2³ shown in (5), together with the synchronous and asynchronous rules displayed before, which are read as allowing the occurrence of asynchronous formulas in synchronous rules, and as allowing arbitrary sequents with possibly one focalised formula in asynchronous rules. DAFoc has the focusing rules but not the Cut rules, together with the synchronous and asynchronous rules displayed before, which are such that focalised sequents cannot contain any complex asynchronous formula, whereas sequents with at least one complex asynchronous formula cannot contain a focalised formula. Hence, strongly focalised proof search operates in alternating asynchronous and synchronous phases. The weakly focalised calculus DAfoc is an intermediate logic which we use to prove the completeness of DAFoc for DA.
(5) ([·] marks focalised occurrences; ✸ foc marks a sequent possibly containing a focus)

    foc:    from ∆ =⇒w [P] infer ∆ =⇒w P
    foc:    from ∆⟨[→Q]⟩ =⇒w A infer ∆⟨→Q⟩ =⇒w A
    p-Cut1: from Γ =⇒w P and ∆⟨→P⟩ =⇒w C ✸ foc infer ∆⟨Γ⟩ =⇒w C ✸ foc
    p-Cut2: from Γ =⇒w P ✸ foc and ∆⟨→P⟩ =⇒w C infer ∆⟨Γ⟩ =⇒w C ✸ foc
    n-Cut1: from Γ =⇒w N ✸ foc and ∆⟨→N⟩ =⇒w C infer ∆⟨Γ⟩ =⇒w C ✸ foc
    n-Cut2: from Γ =⇒w N and ∆⟨→N⟩ =⇒w C ✸ foc infer ∆⟨Γ⟩ =⇒w C ✸ foc
4.1 Embedding of DA into DAfoc
The identity axiom we consider for DA and for both DAfoc and DAFoc is restricted to atomic types;
recalling that atomic types are classified into positive bias At+ and negative bias At− :
(6) If P ∈ At+: P =⇒w [P] and P =⇒ [P]. If Q ∈ At−: [Q] =⇒w Q and [Q] =⇒ Q.
In fact, the Identity rule holds of any type A. It has the following formulation in the sequent calculi considered here:

(7)  →A ⇒ A                                in DA
     →P =⇒w [P]  and  [→N] =⇒w N           in DAfoc
     →P =⇒ [P]   and  [→N] =⇒ N            in DAFoc
The Identity axiom for arbitrary types is also known as Eta-expansion. Eta-expansion is easy to
prove in both DA and DAfoc, but the same is not the case for DAFoc. This is the reason to consider what
we have called weak focalisation, which helps us to smoothly prove this crucial property for the proof of
strong focalisation.
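For instance, for an implication A\C, Eta-expansion in DA unfolds the identity into identities on the subtypes — a sketch, writing →A for the figure of A:

```latex
\[
\dfrac{\dfrac{\overrightarrow{A} \Rightarrow A
              \qquad
              \overrightarrow{C} \Rightarrow C}
             {\overrightarrow{A},\, \overrightarrow{A\backslash C} \Rightarrow C}
       \;\backslash L}
      {\overrightarrow{A\backslash C} \Rightarrow A\backslash C}
\;\backslash R
\]
```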
Theorem 4.1 (Embedding of DA into DAfoc ) For any configuration ∆ and type A, we have that if ∆ ⇒ A
then ∆=⇒w A.
Proof. We proceed by induction on the length of the derivation of DA proofs. In the following lines,
we apply the induction hypothesis (i.h.) for each premise of DA rules (with the exception of the Identity
rule and the right rules of units):
- Identity axiom:
3. If it is convenient, we may drop the subscripts.
(8) from →P =⇒w [P], by foc, →P =⇒w P;  and  from [→N] =⇒w N, by foc, →N =⇒w N
- Cut rule: just apply n-Cut.
- Units
(9)  Λ ⇒ I by IR  ❀  Λ =⇒w [I] by IR, whence Λ =⇒w I by foc
(10) 1 ⇒ J by JR  ❀  1 =⇒w [J] by JR, whence 1 =⇒w J by foc
Left unit rules apply as in the case of DA.
- Left discontinuous product: directly translates.
- Right discontinuous product. There are cases P1 ⊙k P2 , N1 ⊙k N2 , N⊙k P and P⊙k N. We show one representative example:
from ∆ ⇒ P and Γ ⇒ N, by ⊙kR, ∆ |k Γ ⇒ P⊙kN   ❀
By the i.h., ∆ =⇒w P and Γ =⇒w N. By Eta-expansion, →P =⇒w [P], and, by Eta-expansion and foc, →N =⇒w N. By ⊙kR, →P |k →N =⇒w [P⊙kN]. Cutting with ∆ =⇒w P gives ∆ |k →N =⇒w [P⊙kN]; cutting with Γ =⇒w N (n-Cut2) gives ∆ |k Γ =⇒w [P⊙kN]; and foc gives ∆ |k Γ =⇒w P⊙kN.
- Left discontinuous ↑k rule (the left rule for ↓k is entirely similar). As in the case of the right discontinuous product rule, we show only one representative example:

from Γ ⇒ P and ∆⟨→N⟩ ⇒ A, by ↑kL, ∆⟨→(N↑kP) |k Γ⟩ ⇒ A   ❀
By the i.h., Γ =⇒w P and ∆⟨→N⟩ =⇒w A. By Eta-expansion, →P =⇒w [P] and [→N] =⇒w N, whence by ↑kL, [→(N↑kP)] |k →P =⇒w N. By n-Cut1 with ∆⟨→N⟩ =⇒w A we get ∆⟨[→(N↑kP)] |k →P⟩ =⇒w A; by n-Cut2 with Γ =⇒w P we get ∆⟨[→(N↑kP)] |k Γ⟩ =⇒w A; and by foc, ∆⟨→(N↑kP) |k Γ⟩ =⇒w A.
- Right discontinuous ↑k rule (the right discontinuous rule for ↓k is entirely similar):
(11) from ∆ |k →A ⇒ B, by ↑kR, ∆ ⇒ B↑kA   ❀   from ∆ |k →A =⇒w B, by ↑kR, ∆ =⇒w B↑kA
- Product and implicative continuous rules. These follow the same pattern as the discontinuous case.
We interchange the metalinguistic k-th intercalation |k with the metalinguistic concatenation ’,’, and we
interchange ⊙k , ↑k and ↓k with •, /, and \ respectively.
Concerning the additives, conjunction Right translates directly, and we then consider conjunction Left (disjunction is symmetric):

(12) from ∆⟨→P⟩ ⇒ C, by &L, ∆⟨→(P&M)⟩ ⇒ C   ❀   from →(P&M) =⇒w P and ∆⟨→P⟩ =⇒w C, by n-Cut1, ∆⟨→(P&M)⟩ =⇒w C

where by Eta-expansion and application of the foc rule we have →(P&M) =⇒w P.

4.2 Embedding of DAfoc into DAFoc
Theorem 4.2 (Embedding of DAfoc into DAFoc ) For any configuration ∆ and type A, we have that if
∆=⇒w A with one focalised formula and no asynchronous formula occurrence, then ∆=⇒A with the same
formula focalised. If ∆=⇒w A with no focalised formula and with at least one asynchronous formula, then
∆=⇒A.
Proof. We proceed by induction on the size of DAfoc sequents.4 We consider Cut-free DAfoc proofs
which match the sequents of this theorem. If the last rule is logical (i.e., it is not an instance of the foc
rule) the i.h. applies directly and we get DAFoc proofs of the same end-sequent. Now, let us suppose
that the last rule is not logical, i.e. it is an instance of the foc rule. Let us suppose that the end sequent
∆=⇒w A is a synchronous sequent. Suppose for example that the focalised formula is in the succedent:
(13) from ∆ =⇒w [P], by foc, ∆ =⇒w P

The sequent ∆ =⇒w [P] arises from a synchronous rule, to which we can apply the i.h. Let us suppose now that the end-sequent contains at least one asynchronous formula. We consider three illustrative cases:

(14) a. ∆⟨→(A⊙kB)⟩ =⇒w P
     b. ∆⟨[→Q]⟩ =⇒w B↑kA
     c. ∆⟨[→Q]⟩ =⇒w A&B
We have by Eta-expansion that →(A⊙kB) =⇒w [A⊙kB]. We apply to this sequent the invertible ⊙k left rule, whence →A |k →B =⇒w [A⊙kB]. In case (14a), we have the following proof in DAfoc:

(15) from →A |k →B =⇒w [A⊙kB] and ∆⟨→(A⊙kB)⟩ =⇒w P, by p-Cut1 and foc, ∆⟨→A |k →B⟩ =⇒w P
4. For a given type A, the size |A| is the number of connectives in A. By recursion on configurations we have: |Λ| = 0; |→A, ∆| = |A| + |∆| for sA = 0; |1, ∆| = |∆|; |A{∆1 : … : ∆sA}| = |A| + Σi |∆i|. Moreover, we have: |∆⟨[→Q]⟩ =⇒w A| = |∆⟨→Q⟩ =⇒w A| and |∆ =⇒w [P]| = |∆ =⇒w P|.
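The size measure of footnote 4 can be sketched in Python, with tuple and list encodings assumed as before:

```python
# Size measure on types and configurations, following footnote 4:
# |A| counts connectives; separators 1 contribute nothing.

def tsize(a):
    """Number of connectives in a type (atoms and units are strings)."""
    return 0 if isinstance(a, str) else 1 + sum(tsize(x) for x in a[1:])

def csize(config):
    """Size of a configuration: a list of '1', sort-0 types, and pairs
    (A, [segment configurations]) for types of sort > 0."""
    total = 0
    for t in config:
        if t == "1":
            continue
        if isinstance(t, tuple) and len(t) == 2 and isinstance(t[1], list):
            a, segs = t
            total += tsize(a) + sum(csize(s) for s in segs)
        else:
            total += tsize(t)
    return total

print(csize([("\\", "N", "S"), "1", "N"]))  # 1
```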
To the above DAfoc proof we apply Cut-elimination, obtaining a Cut-free proof of the end-sequent ∆⟨→A |k →B⟩ =⇒w P. We have |∆⟨→A |k →B⟩ =⇒w P| < |∆⟨→(A⊙kB)⟩ =⇒w P|. We can then apply the i.h., deriving the provable DAFoc sequent ∆⟨→A |k →B⟩ =⇒ P, to which we apply the left ⊙k rule. We have obtained ∆⟨→(A⊙kB)⟩ =⇒ P. In the same way, we have that →(B↑kA) |k →A =⇒w B. Thus, in case (14b), we have the following proof in DAfoc:
(16) from →(B↑kA) |k →A =⇒w B and ∆⟨[→Q]⟩ =⇒w B↑kA, by p-Cut2 and foc, ∆⟨→Q⟩ |k →A =⇒w B

As before, we apply Cut-elimination to the above proof, obtaining a Cut-free DAfoc proof of the end-sequent ∆⟨→Q⟩ |k →A =⇒w B. It has size less than |∆⟨[→Q]⟩ =⇒w B↑kA|. We can apply the i.h. and we get the DAFoc provable sequent ∆⟨→Q⟩ |k →A =⇒ B, to which we apply the ↑k right rule.
In case (14c):

(17) from ∆⟨[→Q]⟩ =⇒w A&B, by foc, ∆⟨→Q⟩ =⇒w A&B

By applying the foc rule and the invertibility of &R we get the provable DAfoc sequents ∆⟨[→Q]⟩ =⇒w A and ∆⟨[→Q]⟩ =⇒w B. These sequents have smaller size than ∆⟨[→Q]⟩ =⇒w A&B, and they have Cut-free proofs in DAfoc. We apply the i.h. and we get ∆⟨[→Q]⟩ =⇒ A and ∆⟨[→Q]⟩ =⇒ B. We apply the & right rule in DAFoc, and we get ∆⟨[→Q]⟩ =⇒ A&B. By this theorem we obtain the completeness of strong focalisation.
5 Example
We can have coordination of unlike types, with nominal and adjectival complementation of is:
(18) [Tully]+is+[[Cicero+and+humanist]] : S f
Lexical lookup of types yields:
(19) [Nt(s(m)) : b], ((hi∃gNt(s(g))\S f )/(∃aNa⊕(∃g(CNg/CN)) :
λ Aλ B(Pres (A → C.[B = C]; D.((D λ E[E = B]) B))), [[∀gNt(s(g)) : 007,
∀ f ∀a(((((hiNa\S f )/(∃bNb⊕∃g(CNg/CNg)))\(hiNa\S f ))\[]−1 []−1 (((hiNa\S f )/
(∃bNb⊕∃g(CNg/CN))\(hiNa\S f )))/(((hiNa\S f )/(∃bNb⊕∃g(CNg/CNg)))\(hiNa\S f ))) :
λ F λ Gλ H λ I[((G H) I) ∧ ((F H) I)], ∀n(CNn/CNn) : ˆλ J λ K[(J K) ∧ (ˇteetotal K)]]] ⇒ S f
The bracket modalities ⟨⟩ and []−1 mark as syntactic domains subjects and coordinate structures, which are weak and strong islands respectively. The quantifiers and first-order structure mark agreement features such as third person singular, of any gender, for is. The normal (box) modality marks semantic intensionality, and a further modality marks rigid-designator semantic intensionality. The example has positive and negative additive disjunction, so that the derivation in Figures 12–16 illustrates both synchronous and asynchronous focusing of additives. This delivers the correct semantics: [(Pres [t = c]) ∧ (Pres (ˇhumanist t))].
Figure 12: Coordination of unlike types, Part I [large sequent derivation not reproducible in plain text]
Figure 13: Coordination of unlike types, Part II [large sequent derivation not reproducible in plain text]
Figure 14: Coordination of unlike types, Part III [large sequent derivation not reproducible in plain text]
Figure 15: Coordination of unlike types, Part IV [large sequent derivation not reproducible in plain text]
Figure 16: Coordination of unlike types, Part V [large sequent derivation not reproducible in plain text]
References
[1] Vito Michele Abrusci & Roberto Maieli (2015): Cyclic Multiplicative-Additive Proof Nets of Linear Logic
with an Application to Language Parsing. In Annie Foret, Glyn Morrill, Reinhard Muskens & Rainer Osswald, editors: Preproceedings of the 20th Conference on Formal Grammar, ESSLLI 2015, Barcelona, pp.
39–54.
[2] J. M. Andreoli (1992): Logic programming with focusing in linear logic. Journal of Logic and Computation
2(3), pp. 297–347, doi:10.1093/logcom/2.3.297.
[3] Kaustuv Chaudhuri (2006): The Focused Inverse Method for Linear Logic. Ph.D. thesis, Carnegie Mellon
University, Pittsburgh, PA, USA. AAI3248489.
[4] Kaustuv Chaudhuri, Dale Miller & Alexis Saurin (2008): Canonical Sequent Proofs via Multi-Focusing.
In: Fifth IFIP International Conference On Theoretical Computer Science - TCS 2008, IFIP 20th World
Computer Congress, TC 1, Foundations of Computer Science, September 7-10, 2008, Milano, Italy, pp.
383–396. Available at http://dx.doi.org/10.1007/978-0-387-09680-3_26.
[5] Mario Fadda (2010): Geometry of Grammar: Exercises in Lambek Style. Ph.D. thesis, Universitat Politècnica
de Catalunya, Barcelona.
[6] H. Hendriks (1993): Studied flexibility. Categories and types in syntax and semantics. Ph.D. thesis, Universiteit van Amsterdam, ILLC, Amsterdam.
[7] Dominic J. D. Hughes & Rob J. van Glabbeek (2005): Proof nets for unit-free multiplicative-additive linear logic. ACM Transactions on Computational Logic (TOCL) 6(4), pp. 784–842, doi:10.1145/1094622.1094629.
[8] M. Kanazawa (1992): The Lambek calculus enriched with additional connectives. Journal of Logic, Language and Information 1, pp. 141–171, doi:10.1007/BF00171695.
[9] J. Lambek (1961): On the Calculus of Syntactic Types. In Roman Jakobson, editor: Structure of Language and
its Mathematical Aspects, Proceedings of the Symposia in Applied Mathematics XII, American Mathematical
Society, Providence, Rhode Island, pp. 166–178, doi:10.1090/psapm/012/9972.
[10] J. Lambek (1988): Categorial and Categorical Grammars. In Richard T. Oehrle, Emmon Bach & Deirdre Wheeler, editors: Categorial Grammars and Natural Language Structures, Studies in Linguistics and Philosophy 32, D. Reidel, Dordrecht, pp. 297–317, doi:10.1007/978-94-015-6878-4_11.
[11] Joachim Lambek (1958): The mathematics of sentence structure. American Mathematical Monthly 65, pp.
154–170, doi:10.2307/2310058.
[12] Olivier Laurent (2004): A proof of the Focalization property of Linear Logic. Unpublished manuscript, CNRS
- Université Paris VII.
[13] Richard Moot (2014): Extended Lambek Calculi and First-Order Linear Logic. In Claudia Casadio, Bob Coecke, Michael Moortgat & Philip Scott, editors: Categories and Types in Logic, Language and Physics: Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday, LNCS, FoLLI Publications in Logic, Language and Information 8222, Springer, Berlin, pp. 297–330. Available at http://dx.doi.org/10.1007/978-3-642-54789-8_17.
[14] Richard Moot & Christian Retoré (2012): The Logic of Categorial Grammars: A Deductive Account of
Natural Language Syntax and Semantics. Springer, Heidelberg, doi:10.1007/978-3-642-31555-8.
[15] G. Morrill (1990): Grammar and Logical Types. In Martin Stockhof & Leen Torenvliet, editors: Proceedings
of the Seventh Amsterdam Colloquium, pp. 429–450.
[16] Glyn Morrill & Oriol Valentı́n (2010): Displacement Calculus. Linguistic Analysis 36(1–4), pp. 167–192.
Available at http://arxiv.org/abs/1004.4181. Special issue Festschrift for Joachim Lambek.
[17] Glyn Morrill, Oriol Valentı́n & Mario Fadda (2011): The Displacement Calculus. Journal of Logic, Language
and Information 20(1), pp. 1–48, doi:10.1007/s10849-010-9129-2.
MA focusing for Parsing
48
[18] Glyn V. Morrill (2011): Categorial Grammar: Logical Syntax, Semantics, and Processing. Oxford University
Press, New York and Oxford.
[19] Robert J. Simmons (2012): Substructural Logical Specifications. Ph.D. thesis, Carnegie Mellon University,
Pittsburgh.
Appendix: Cut Elimination
We prove this by induction on the complexity (d, h) of top-most instances of Cut, where d is the size of the cut formula (the size |A| of a formula A being the number of connectives appearing in A) and h is the length of the derivation whose last rule is the Cut rule. There are four cases to consider: Cut with an axiom in the minor premise, Cut with an axiom in the major premise, principal Cuts, and permutation conversions. In each case, the complexity of the Cut is reduced. To save space, we will not show all the cases exhaustively, because many follow the same pattern. In particular, for any synchronous logical rule there are always four cases to consider, corresponding to the polarities of the subformulas; here, and in what follows, we show only one representative example. Concerning continuous and discontinuous formulas, we show only the discontinuous cases (the discontinuous connectives are less familiar than the continuous ones of the plain Lambek Calculus). For the continuous instances, the reader has only to interchange the metalinguistic wrap |_k with the metalinguistic concatenation ',', ⊙_k with •, ↑_k with /, and ↓_k with \. The units cases (principal case and permutation conversion cases) are completely trivial.
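The induction measure above can be sketched mechanically. The following is a minimal illustration, not part of the paper's formal development, under a hypothetical encoding in which a primitive type is a string and a compound formula is a tuple (connective, subformula, ...):

```python
# A sketch of the induction measure (d, h). Per the definition above,
# the size |A| of a formula A is the number of connectives appearing in A.

def size(formula):
    """|p| = 0 for a primitive p; |(c, A1, ..., An)| = 1 + |A1| + ... + |An|."""
    if isinstance(formula, str):
        return 0
    return 1 + sum(size(sub) for sub in formula[1:])

# Cut complexities (d, h) are compared lexicographically (as Python tuples
# are): principal conversions strictly lower d; permutation conversions
# keep d and strictly lower h, so every conversion step decreases complexity.
cut_formula = ("↑", ("⊙", "p", "q"), "r")      # e.g. (P ⊙k Q) ↑k R
assert size(cut_formula) == 2
assert (1, 99) < (2, 0)    # principal step: smaller cut formula wins
assert (2, 3) < (2, 4)     # permutation step: same d, shorter derivation
```

Since the lexicographic order on pairs of naturals is well-founded, no infinite sequence of conversions is possible.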
Proof. - Id cases:
(20)
\[
\infer[p\text{-}Cut_1]{\Delta\langle\overrightarrow{P}\rangle \Rightarrow_w B \;✸\,foc}{\overrightarrow{P} \Rightarrow_w P & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w B \;✸\,foc}
\;\leadsto\;
\Delta\langle\overrightarrow{P}\rangle \Rightarrow_w B \;✸\,foc
\qquad
\infer[p\text{-}Cut_2]{\Delta \Rightarrow_w N \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \overrightarrow{N} \Rightarrow_w N}
\;\leadsto\;
\Delta \Rightarrow_w N \;✸\,foc
\]
The attentive reader may have wondered whether the following Id case could arise:
(21)
\[
\infer[n\text{-}Cut_i]{\Gamma\langle\overrightarrow{Q}\rangle \Rightarrow A}{\overrightarrow{Q} \Rightarrow Q & \Gamma\langle\overrightarrow{Q}\rangle \Rightarrow A}
\]
If Q were a primitive type q, and Γ were not the empty context, we would then have a Cut-free underivable sequent. For example, if the right premise of the Cut rule in (21) were the derivable sequent q, q\s ⇒ s, we would then have as conclusion:

(22) q, q\s ⇒ s

Since the primitive type q in the antecedent is focalised, there is no possibility of applying the \ left rule, which is a synchronous rule that requires its active formula to be focalised. Principal cases:
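The underivability phenomenon behind (22) can be illustrated with a toy prover. The following sketch is NOT the paper's focused displacement calculus: it is a naive backward-chaining prover for just the \-fragment of the Lambek calculus, under a hypothetical encoding where primitive types are strings and A\B is the tuple ("\\", A, B), together with the focusing restriction discussed above:

```python
def provable(antecedent, succedent):
    """Unfocused backward proof search: axiom or \\-left."""
    if antecedent == (succedent,):          # axiom p => p
        return True
    for i, f in enumerate(antecedent):
        if isinstance(f, tuple) and f[0] == "\\":
            _, a, b = f
            for j in range(i):              # Delta = antecedent[j:i], nonempty
                delta = antecedent[j:i]
                rest = antecedent[:j] + (b,) + antecedent[i + 1:]
                if provable(delta, a) and provable(rest, succedent):
                    return True
    return False

def provable_with_focused_atom(antecedent, succedent):
    """With a focalised primitive type in the antecedent, no left rule can
    fire (the \\-left rule is synchronous and needs its active formula to be
    the focused one), so only the axiom can conclude the derivation."""
    return antecedent == (succedent,)

# q, q\s => s is derivable without the focusing restriction ...
assert provable(("q", ("\\", "q", "s")), "s")
# ... but is Cut-free underivable once the antecedent q is focalised, as in (22).
assert not provable_with_focused_atom(("q", ("\\", "q", "s")), "s")
```

This is only meant to make the polarity discipline concrete; the real system's left rules and contexts are richer.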
• foc cases:
(23)
\[
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\infer[foc]{\Delta \Rightarrow_w P \;✸\,foc}{\Delta \Rightarrow_w P} & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A \;✸\,foc}
\;\leadsto\;
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\Delta \Rightarrow_w P & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A \;✸\,foc}
\]
(24)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\Delta \Rightarrow_w N & \infer[foc]{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A \;✸\,foc}{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A}}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w A}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A}}
\]
• logical connectives:
(25)
\[
\infer[p\text{-}Cut_2]{\Gamma_2\langle\Delta|_k\Gamma_1\rangle \Rightarrow_w A \;✸\,foc}{\infer[\uparrow_k R]{\Delta \Rightarrow_w P_2\uparrow_k P_1 \;✸\,foc}{\Delta|_k\overrightarrow{P_1} \Rightarrow_w P_2 \;✸\,foc} & \infer[\uparrow_k L]{\Gamma_2\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma_1\rangle \Rightarrow_w A}{\Gamma_1 \Rightarrow_w P_1 & \Gamma_2\langle\overrightarrow{P_2}\rangle \Rightarrow_w A}}
\;\leadsto\;
\infer[p\text{-}Cut_1]{\Gamma_2\langle\Delta|_k\Gamma_1\rangle \Rightarrow_w A \;✸\,foc}{\Gamma_1 \Rightarrow_w P_1 & \infer[n\text{-}Cut_1]{\Gamma_2\langle\Delta|_k\overrightarrow{P_1}\rangle \Rightarrow_w A \;✸\,foc}{\Delta|_k\overrightarrow{P_1} \Rightarrow_w P_2 \;✸\,foc & \Gamma_2\langle\overrightarrow{P_2}\rangle \Rightarrow_w A}}
\]
The case of ↓_k is entirely similar to the ↑_k case.
(26)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta_1|_k\Delta_2\rangle \Rightarrow_w A \;✸\,foc}{\infer[\odot_k R]{\Delta_1|_k\Delta_2 \Rightarrow_w P\odot_k N}{\Delta_1 \Rightarrow_w P & \Delta_2 \Rightarrow_w N} & \infer[\odot_k L]{\Gamma\langle\overrightarrow{P\odot_k N}\rangle \Rightarrow_w A \;✸\,foc}{\Gamma\langle\overrightarrow{P}|_k\overrightarrow{N}\rangle \Rightarrow_w A \;✸\,foc}}
\;\leadsto\;
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta_1|_k\Delta_2\rangle \Rightarrow_w A \;✸\,foc}{\Delta_2 \Rightarrow_w N & \infer[p\text{-}Cut_1]{\Gamma\langle\Delta_1|_k\overrightarrow{N}\rangle \Rightarrow_w A \;✸\,foc}{\Delta_1 \Rightarrow_w P & \Gamma\langle\overrightarrow{P}|_k\overrightarrow{N}\rangle \Rightarrow_w A \;✸\,foc}}
\]
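The complexity decrease claimed for principal cases such as the ⊙_k conversion can be spot-checked mechanically: both residual Cut formulas are proper subformulas of the original cut formula. A minimal sketch, again under a hypothetical tuple encoding of formulas (not the paper's own machinery):

```python
# Principal conversions replace one Cut on a compound formula by Cuts on its
# immediate subformulas, so the d-component of the measure strictly drops.

def size(f):
    """Number of connectives in a formula."""
    return 0 if isinstance(f, str) else 1 + sum(size(s) for s in f[1:])

def principal_split(f):
    """Sizes of the cut formulas produced by a principal conversion on f."""
    assert not isinstance(f, str), "principal cases act on compound formulas"
    return [size(s) for s in f[1:]]

f = ("⊙", "p", ("&", "q", "r"))     # P ⊙k (Q & R), size 2
assert size(f) == 2
assert all(d < size(f) for d in principal_split(f))   # both residual Cuts smaller
```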
(27)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\infer[\&R]{\Delta \Rightarrow_w Q\&A \;✸\,foc}{\Delta \Rightarrow_w Q \;✸\,foc & \Delta \Rightarrow_w A \;✸\,foc} & \infer[\&L]{\Gamma\langle\overrightarrow{Q\&A}\rangle \Rightarrow_w B}{\Gamma\langle\overrightarrow{Q}\rangle \Rightarrow_w B}}
\;\leadsto\;
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w Q \;✸\,foc & \Gamma\langle\overrightarrow{Q}\rangle \Rightarrow_w B}
\]

(28)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\infer[\&R]{\Delta \Rightarrow_w M\&A \;✸\,foc}{\Delta \Rightarrow_w M \;✸\,foc & \Delta \Rightarrow_w A \;✸\,foc} & \infer[\&L]{\Gamma\langle\overrightarrow{M\&A}\rangle \Rightarrow_w B}{\Gamma\langle\overrightarrow{M}\rangle \Rightarrow_w B}}
\;\leadsto\;
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w M \;✸\,foc & \Gamma\langle\overrightarrow{M}\rangle \Rightarrow_w B}
\]
- Left commutative p-Cut conversions:
(29)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\langle\overrightarrow{Q}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[foc]{\Delta\langle\overrightarrow{Q}\rangle \Rightarrow_w N \;✸\,foc}{\Delta\langle\overrightarrow{Q}\rangle \Rightarrow_w N} & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta\langle\overrightarrow{Q}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\langle\overrightarrow{Q}\rangle\rangle \Rightarrow_w C}{\Delta\langle\overrightarrow{Q}\rangle \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w C}}
\]
(30)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\odot_k L]{\Delta\langle\overrightarrow{A\odot_k B}\rangle \Rightarrow_w P}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w P} & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w C \;✸\,foc}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w P & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w C \;✸\,foc}}
\]
(31)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\odot_k L]{\Delta\langle\overrightarrow{A\odot_k B}\rangle \Rightarrow_w N \;✸\,foc}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w N \;✸\,foc} & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w N \;✸\,foc & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w C}}
\]
(32)
\[
\infer[p\text{-}Cut_2]{\Theta\langle\Gamma_2\langle\overrightarrow{N_1\uparrow_k P_1}|_k\Gamma_1\rangle\rangle \Rightarrow_w C}{\infer[\uparrow_k L]{\Gamma_2\langle\overrightarrow{N_1\uparrow_k P_1}|_k\Gamma_1\rangle \Rightarrow_w N}{\Gamma_1 \Rightarrow_w P_1 & \Gamma_2\langle\overrightarrow{N_1}\rangle \Rightarrow_w N} & \Theta\langle\overrightarrow{N}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[\uparrow_k L]{\Theta\langle\Gamma_2\langle\overrightarrow{N_1\uparrow_k P_1}|_k\Gamma_1\rangle\rangle \Rightarrow_w C}{\Gamma_1 \Rightarrow_w P_1 & \infer[p\text{-}Cut_2]{\Theta\langle\Gamma_2\langle\overrightarrow{N_1}\rangle\rangle \Rightarrow_w C}{\Gamma_2\langle\overrightarrow{N_1}\rangle \Rightarrow_w N & \Theta\langle\overrightarrow{N}\rangle \Rightarrow_w C}}
\]
(33)
\[
\infer[p\text{-}Cut_1]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\oplus L]{\Gamma\langle\overrightarrow{A\oplus B}\rangle \Rightarrow_w P}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w P & \Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w P} & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w C \;✸\,foc}
\;\leadsto\;
\infer[\oplus L]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_1]{\Delta\langle\Gamma\langle\overrightarrow{A}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w P & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w C \;✸\,foc} & \infer[p\text{-}Cut_1]{\Delta\langle\Gamma\langle\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w P & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w C \;✸\,foc}}
\]
(34)
\[
\infer[p\text{-}Cut_2]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\oplus L]{\Gamma\langle\overrightarrow{A\oplus B}\rangle \Rightarrow_w N \;✸\,foc}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w N \;✸\,foc & \Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w N \;✸\,foc} & \Delta\langle\overrightarrow{N}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[\oplus L]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_2]{\Delta\langle\Gamma\langle\overrightarrow{A}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w N \;✸\,foc & \Delta\langle\overrightarrow{N}\rangle \Rightarrow_w C} & \infer[p\text{-}Cut_2]{\Delta\langle\Gamma\langle\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w N \;✸\,foc & \Delta\langle\overrightarrow{N}\rangle \Rightarrow_w C}}
\]
- Right commutative p-Cut conversions (unordered multiple distinguished occurrences are separated by semicolons):
(35)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta;\overrightarrow{Q}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w P & \infer[foc]{\Gamma\langle\overrightarrow{P};\overrightarrow{Q}\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{P};\overrightarrow{Q}\rangle \Rightarrow_w C}}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta;\overrightarrow{Q}\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_1]{\Gamma\langle\Delta;\overrightarrow{Q}\rangle \Rightarrow_w C}{\Delta \Rightarrow_w P & \Gamma\langle\overrightarrow{P};\overrightarrow{Q}\rangle \Rightarrow_w C}}
\]
(36)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w P_2 \;✸\,foc}{\Delta \Rightarrow_w P_1 & \infer[foc]{\Gamma\langle\overrightarrow{P_1}\rangle \Rightarrow_w P_2 \;✸\,foc}{\Gamma\langle\overrightarrow{P_1}\rangle \Rightarrow_w P_2}}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta\rangle \Rightarrow_w P_2 \;✸\,foc}{\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w P_2}{\Delta \Rightarrow_w P_1 & \Gamma\langle\overrightarrow{P_1}\rangle \Rightarrow_w P_2}}
\]
(37)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\Delta \Rightarrow_w P & \infer[\uparrow_k R]{\Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\Gamma\langle\overrightarrow{P}\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}}
\;\leadsto\;
\infer[\uparrow_k R]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w P & \Gamma\langle\overrightarrow{P}\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}}
\]
(38)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \infer[\uparrow_k R]{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w B\uparrow_k A}{\Gamma\langle\overrightarrow{N}\rangle|_k\overrightarrow{A} \Rightarrow_w B}}
\;\leadsto\;
\infer[\uparrow_k R]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \Gamma\langle\overrightarrow{N}\rangle|_k\overrightarrow{A} \Rightarrow_w B}}
\]
(39)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w P & \infer[\odot_k L]{\Gamma\langle\overrightarrow{P};\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{P};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_1]{\Gamma\langle\Delta;\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w P & \Gamma\langle\overrightarrow{P};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}}
\]
(40)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \infer[\odot_k L]{\Gamma\langle\overrightarrow{N};\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C}{\Gamma\langle\overrightarrow{N};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C}}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\infer[p\text{-}Cut_2]{\Gamma\langle\Delta;\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \Gamma\langle\overrightarrow{N};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C}}
\]
(41)
\[
\infer[p\text{-}Cut_1]{\Theta\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma;\Delta\rangle \Rightarrow_w C}{\Delta \Rightarrow_w P & \infer[\uparrow_k L]{\Theta\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma;\overrightarrow{P}\rangle \Rightarrow_w C}{\Gamma \Rightarrow_w P_1 & \Theta\langle\overrightarrow{P_2};\overrightarrow{P}\rangle \Rightarrow_w C}}
\;\leadsto\;
\infer[\uparrow_k L]{\Theta\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma;\Delta\rangle \Rightarrow_w C}{\Gamma \Rightarrow_w P_1 & \infer[p\text{-}Cut_1]{\Theta\langle\overrightarrow{P_2};\Delta\rangle \Rightarrow_w C}{\Delta \Rightarrow_w P & \Theta\langle\overrightarrow{P_2};\overrightarrow{P}\rangle \Rightarrow_w C}}
\]
(42)
\[
\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\Delta \Rightarrow_w P & \infer[\&R]{\Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A\&B \;✸\,foc}{\Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A \;✸\,foc & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w B \;✸\,foc}}
\;\leadsto\;
\infer[\&R]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\Delta \Rightarrow_w P & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A \;✸\,foc} & \infer[p\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w P & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w B \;✸\,foc}}
\]
(43)
\[
\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \infer[\&R]{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A\&B}{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w B}}
\;\leadsto\;
\infer[\&R]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A} & \infer[p\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w N \;✸\,foc & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w B}}
\]
- Left commutative n-Cut conversions:
(44)
\[
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\langle\overrightarrow{Q}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[foc]{\Delta\langle\overrightarrow{Q}\rangle \Rightarrow_w P \;✸\,foc}{\Delta\langle\overrightarrow{Q}\rangle \Rightarrow_w P} & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta\langle\overrightarrow{Q}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\langle\overrightarrow{Q}\rangle\rangle \Rightarrow_w C}{\Delta\langle\overrightarrow{Q}\rangle \Rightarrow_w P & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w C}}
\]

(45)
\[
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\odot_k L]{\Delta\langle\overrightarrow{A\odot_k B}\rangle \Rightarrow_w P \;✸\,foc}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w P \;✸\,foc} & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w P \;✸\,foc & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w C}}
\]
(46)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\odot_k L]{\Delta\langle\overrightarrow{A\odot_k B}\rangle \Rightarrow_w N}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w N} & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w C \;✸\,foc}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta\langle\overrightarrow{A\odot_k B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Delta\langle\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w C \;✸\,foc}}
\]
(47)
\[
\infer[n\text{-}Cut_1]{\Theta\langle\Gamma_2\langle\overrightarrow{N_1\uparrow_k P_1}|_k\Gamma_1\rangle\rangle \Rightarrow_w C}{\infer[\uparrow_k L]{\Gamma_2\langle\overrightarrow{N_1\uparrow_k P_1}|_k\Gamma_1\rangle \Rightarrow_w P}{\Gamma_1 \Rightarrow_w P_1 & \Gamma_2\langle\overrightarrow{N_1}\rangle \Rightarrow_w P} & \Theta\langle\overrightarrow{P}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[\uparrow_k L]{\Theta\langle\Gamma_2\langle\overrightarrow{N_1\uparrow_k P_1}|_k\Gamma_1\rangle\rangle \Rightarrow_w C}{\Gamma_1 \Rightarrow_w P_1 & \infer[n\text{-}Cut_1]{\Theta\langle\Gamma_2\langle\overrightarrow{N_1}\rangle\rangle \Rightarrow_w C}{\Gamma_2\langle\overrightarrow{N_1}\rangle \Rightarrow_w P & \Theta\langle\overrightarrow{P}\rangle \Rightarrow_w C}}
\]
(48)
\[
\infer[n\text{-}Cut_1]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\oplus L]{\Gamma\langle\overrightarrow{A\oplus B}\rangle \Rightarrow_w P \;✸\,foc}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w P \;✸\,foc & \Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w P \;✸\,foc} & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w C}
\;\leadsto\;
\infer[\oplus L]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_1]{\Delta\langle\Gamma\langle\overrightarrow{A}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w P \;✸\,foc & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w C} & \infer[n\text{-}Cut_1]{\Delta\langle\Gamma\langle\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w P \;✸\,foc & \Delta\langle\overrightarrow{P}\rangle \Rightarrow_w C}}
\]
(49)
\[
\infer[n\text{-}Cut_2]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[\oplus L]{\Gamma\langle\overrightarrow{A\oplus B}\rangle \Rightarrow_w N}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w N & \Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w N} & \Delta\langle\overrightarrow{N}\rangle \Rightarrow_w C \;✸\,foc}
\;\leadsto\;
\infer[\oplus L]{\Delta\langle\Gamma\langle\overrightarrow{A\oplus B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_2]{\Delta\langle\Gamma\langle\overrightarrow{A}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{A}\rangle \Rightarrow_w N & \Delta\langle\overrightarrow{N}\rangle \Rightarrow_w C \;✸\,foc} & \infer[n\text{-}Cut_2]{\Delta\langle\Gamma\langle\overrightarrow{B}\rangle\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{B}\rangle \Rightarrow_w N & \Delta\langle\overrightarrow{N}\rangle \Rightarrow_w C \;✸\,foc}}
\]
- Right commutative n-Cut conversions:
(50)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta;\overrightarrow{Q}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w N & \infer[foc]{\Gamma\langle\overrightarrow{N};\overrightarrow{Q}\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{N};\overrightarrow{Q}\rangle \Rightarrow_w C}}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta;\overrightarrow{Q}\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta;\overrightarrow{Q}\rangle \Rightarrow_w C}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N};\overrightarrow{Q}\rangle \Rightarrow_w C}}
\]

(51)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w P \;✸\,foc}{\Delta \Rightarrow_w N & \infer[foc]{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w P \;✸\,foc}{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w P}}
\;\leadsto\;
\infer[foc]{\Gamma\langle\Delta\rangle \Rightarrow_w P \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w P}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w P}}
\]
(52)
\[
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \infer[\uparrow_k R]{\Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w B\uparrow_k A}{\Gamma\langle\overrightarrow{P}\rangle|_k\overrightarrow{A} \Rightarrow_w B}}
\;\leadsto\;
\infer[\uparrow_k R]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \Gamma\langle\overrightarrow{P}\rangle|_k\overrightarrow{A} \Rightarrow_w B}}
\]

(53)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\Delta \Rightarrow_w N & \infer[\uparrow_k R]{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\Gamma\langle\overrightarrow{N}\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}}
\;\leadsto\;
\infer[\uparrow_k R]{\Gamma\langle\Delta\rangle \Rightarrow_w B\uparrow_k A \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle|_k\overrightarrow{A} \Rightarrow_w B \;✸\,foc}}
\]
(54)
\[
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \infer[\odot_k L]{\Gamma\langle\overrightarrow{P};\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C}{\Gamma\langle\overrightarrow{P};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C}}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_1]{\Gamma\langle\Delta;\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \Gamma\langle\overrightarrow{P};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C}}
\]

(55)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w N & \infer[\odot_k L]{\Gamma\langle\overrightarrow{N};\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\Gamma\langle\overrightarrow{N};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}}
\;\leadsto\;
\infer[\odot_k L]{\Gamma\langle\Delta;\overrightarrow{A\odot_k B}\rangle \Rightarrow_w C \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta;\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N};\overrightarrow{A}|_k\overrightarrow{B}\rangle \Rightarrow_w C \;✸\,foc}}
\]
(56)
\[
\infer[n\text{-}Cut_2]{\Theta\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma;\Delta\rangle \Rightarrow_w C}{\Delta \Rightarrow_w N & \infer[\uparrow_k L]{\Theta\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma;\overrightarrow{N}\rangle \Rightarrow_w C}{\Gamma \Rightarrow_w P_1 & \Theta\langle\overrightarrow{P_2};\overrightarrow{N}\rangle \Rightarrow_w C}}
\;\leadsto\;
\infer[\uparrow_k L]{\Theta\langle\overrightarrow{P_2\uparrow_k P_1}|_k\Gamma;\Delta\rangle \Rightarrow_w C}{\Gamma \Rightarrow_w P_1 & \infer[n\text{-}Cut_2]{\Theta\langle\overrightarrow{P_2};\Delta\rangle \Rightarrow_w C}{\Delta \Rightarrow_w N & \Theta\langle\overrightarrow{P_2};\overrightarrow{N}\rangle \Rightarrow_w C}}
\]
(57)
\[
\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \infer[\&R]{\Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A\&B}{\Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w B}}
\;\leadsto\;
\infer[\&R]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w A} & \infer[n\text{-}Cut_1]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w P \;✸\,foc & \Gamma\langle\overrightarrow{P}\rangle \Rightarrow_w B}}
\]

(58)
\[
\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\Delta \Rightarrow_w N & \infer[\&R]{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A\&B \;✸\,foc}{\Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A \;✸\,foc & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w B \;✸\,foc}}
\;\leadsto\;
\infer[\&R]{\Gamma\langle\Delta\rangle \Rightarrow_w A\&B \;✸\,foc}{\infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w A \;✸\,foc}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w A \;✸\,foc} & \infer[n\text{-}Cut_2]{\Gamma\langle\Delta\rangle \Rightarrow_w B \;✸\,foc}{\Delta \Rightarrow_w N & \Gamma\langle\overrightarrow{N}\rangle \Rightarrow_w B \;✸\,foc}}
\]

This completes the proof.