EPISTEMIC LOGIC AND MODEL CHECKING OF UNCONDITIONALLY SECURE PROTOCOLS

PHILIP AMORTILA AND NICOLAS GAGNÉ

Abstract. We revisit "Model checking Russian cards" [12] in a broader context, adding a few critical remarks and comments.

1. Introduction

At the heart of cryptography is the desire to establish secure communication between two or more parties; cryptographic protocols are the proffered solutions to specific instances of this problem. Cryptographic protocols secure communication by encoding data so that it is hard for an unauthorized party to read, and they are hence used in situations where the communication channels are susceptible to eavesdropping. Protocols may be grouped into two categories, symmetric and asymmetric. In symmetric cryptography, communicating users rely on some shared secret, whereas in asymmetric cryptography, each user has a key pair: one part, termed a public key, may be made public, while the second part, termed a private key, should be known only to the user in question. Symmetric encryption thus relies on a secret having been shared prior to the establishment of secure communication, while asymmetric encryption relies on the adversary having limited computational resources, such as bounded memory and computing power. In this paper, we consider protocols that rely neither on a shared secret nor on the assumption of a computationally bounded adversary. It is easily shown that, without any further assumptions, secure communication is impossible. We thus make the assumption that users and adversaries each have access to different shares, or portions, of the secret information used in the set-up phase. Secure protocols that rely on such a distributed setting form a subset of the unconditionally secure protocols: the information-theoretically secure protocols that cannot be broken even when the adversary has unlimited computing power.
We study protocols that always work, even when the sender's and receiver's algorithms are known in advance and all conversation between sender and receiver is public and heard by all potential adversaries. We introduce an extended fragment of epistemic logic in order to model an instance of such protocols on a toy model, the Russian Cards problem, and then prove that the protocol satisfies the desired properties by means of model checking.

2. Motivation and Background

In this section, we motivate unconditionally secure protocols and give a quick overview of epistemic logic and its background.

2.1. Unconditionally secure protocols.

Bridge. Example taken from [2]. In the game of bridge, partners exchange public bids in order to arrive at a contract before playing the cards. In the course of bidding, partners gain information about each other's hands, and indeed, considerable effort has been put into designing protocols for maximizing the amount of useful information conveyed to the partner. It is of course desirable that the opponents obtain as little useful information as possible from the bidding. Nevertheless, one partner may well learn more from her partner's bid than do the opponents. The laws of bridge require that a partnership make public their conventions prior to play, so all communications understood by the partner will also be understood by the opponents. Since the cards themselves are the secrets and are dealt anew each hand, it makes no sense to speak of an a priori shared secret. As a side note, most of these techniques, as originally introduced by Peter Winkler [13], are now banned from the game of bridge.

Spy agency. Adapted from [3]. Consider a spy agency that handles security information. On each mission, a group of spies is chosen; we call this group a team. Teams form and dissolve as various missions arise and get completed.
All communication regarding a mission is intended to be shared with those on the team. However, spies outside of the given team might try to steal information by eavesdropping on the conversation among members of the team. Hence, each team that forms would like to exchange a secret key, which it can then use as part of some cryptographic protocol to securely send all further communication regarding the mission. Of course, the trivial solution would be to give each potential team an a priori secret key, but since the number of potential teams grows exponentially, the required amount of initial information blows up. This method being infeasible, one way out is to rely on unconditionally secure protocols.

2.2. Protocol verification.

Epistemic logic and its extensions. Epistemic logic is a subfield of modal logic concerned with reasoning about knowledge. In the last decade, epistemic logic has been heavily used in the field of artificial intelligence to specify and verify protocols, as well as to represent knowledge and formalise reasoning methods. In the context of communication protocols, however, modelling knowledge is not enough: one needs to model knowledge over time, capturing and reasoning about information change and knowledge update. Thus a rigorous epistemic approach to security protocol verification needs to harmonise both the epistemic and the temporal aspects.

Model checking. Recently, the field of model checking has been extended to cover verification of multi-agent systems [30], which has become a topic of very active research. Popular model checkers include MCK [4], MCMAS [9] and the one we use in this paper, DEMO [8]. While DEMO takes about 4 seconds to model check a protocol on our toy model (Russian Cards), newer model checkers can do the same in less than a second [10]. We now formalize the machinery that will be used in this paper.
3. Definitions

3.1. Epistemic logic and its temporal extension. We begin by introducing the syntax and semantics of the basic epistemic logic.

Epistemic Logic Syntax. Let P be a set of atomic propositions and A a set of agent symbols. The language L_K is generated by the following BNF:

ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | K_a ϕ

The basic version of the language is by itself useful because it allows iterations of individual epistemic operators; it can already capture simple protocol specifications such as the alternating bit protocol [11]. Before giving a meaning to our language, we must first introduce a few definitions. Given a countable set of atomic propositions P and a finite set of agents A, an epistemic model M = ⟨S, ∼, V⟩ consists of a domain S of states, an accessibility map ∼ : A → P(S × S) (we write ∼_a for ∼(a)), and a valuation V : P → P(S).

Epistemic Logic Semantics. We define that a formula ϕ is true in (M, s), written M, s ⊨ ϕ, as follows:

M, s ⊨ p iff s ∈ V(p)
M, s ⊨ ¬ϕ iff not M, s ⊨ ϕ
M, s ⊨ (ϕ ∧ ψ) iff M, s ⊨ ϕ and M, s ⊨ ψ
M, s ⊨ K_a ϕ iff for all t such that s ∼_a t, M, t ⊨ ϕ.

The key point in the truth definition is that an agent a is said to know an assertion ϕ in a state (M, s) if and only if that assertion is true in all the states she considers possible.

Common knowledge and public announcement. Consider the following well-known puzzle [11]: "Imagine n children playing together. Some, say k of them, get mud on their foreheads. Each can see the mud on others but not on his own forehead. Along comes the father, who says, 'At least one of you has mud on your forehead', thus expressing a fact known to each of them before he spoke (if k > 1). The father then asks the following question, over and over: 'Does any of you know whether you have mud on your own forehead?'.
Assuming that all the children are perceptive, intelligent, truthful, and that they answer simultaneously, what will happen?" This puzzle is known as the muddy children puzzle. Given that there were at least two children with mud on their foreheads, one can ask what information the father added by saying "At least one of you has mud on your forehead". Everyone already knew there was at least one person with mud on their forehead, so why even bother mentioning it to the children? The answer is common knowledge. Before formally defining this notion, we first introduce some useful notation. For every B ⊆ A, we write

E_B ϕ := ⋀_{b∈B} K_b ϕ

for "everybody in B knows ϕ". For instance, E_A^2 ϕ means that everybody knows that everybody knows ϕ. In the muddy children puzzle, letting p denote "at least one person has mud on their forehead", it is true that E_A^{k−1} p, but it is not true that E_A^k p. However, the father gave a piece of information that allows all of them to deduce E_A^k p; that piece of information is called common knowledge and is denoted and defined as

C_B ϕ := ⋀_{n=0}^{∞} E_B^n ϕ.

In the case of the muddy children, E_A^k was already sufficient, but there are instances where common knowledge is truly necessary. For instance, protocols which solve the coordinated attack problem (as introduced in [5]) must absolutely rely on common knowledge, which can only be acquired through public announcement [7]. Therefore, to prove the correctness of the protocols in which we are interested, we must be able to capture the notions of common knowledge and public announcement.

Extended Epistemic Logic Syntax. Let P be a set of atomic propositions and A a set of agent symbols. The language L*_K is generated by the following BNF:

ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | K_a ϕ | C_G ϕ | [ϕ]ϕ

For C_G ϕ, read "the group of agents G commonly knows formula ϕ". For [ψ]ϕ, read "after public announcement of ψ, formula ϕ is true".
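The basic fragment of this syntax translates almost verbatim into a Haskell data type, and the truth clauses of Section 3.1 into a recursive evaluator over a finite model. The following is our own illustrative sketch, not DEMO's API (DEMO's actual Form type appears in Section 5); the representation of worlds as integers and of accessibility as a successor map are assumptions made for concreteness.

```haskell
-- Syntax of the basic fragment (we omit C_G and announcements for brevity)
type World = Int
type Agent = Char

data Form = P String           -- atomic proposition
          | Neg Form
          | Conj Form Form
          | K Agent Form

-- A finite epistemic model <S, ~, V>
data Model = Model
  { worlds :: [World]
  , acc    :: Agent -> World -> [World]  -- accessibility ~a, as a successor map
  , val    :: String -> World -> Bool    -- valuation V
  }

-- M, s |= phi, clause by clause as in the semantics above
holds :: Model -> World -> Form -> Bool
holds m w (P p)      = val m p w
holds m w (Neg f)    = not (holds m w f)
holds m w (Conj f g) = holds m w f && holds m w g
holds m w (K a f)    = all (\v -> holds m v f) (acc m a w)
```

For instance, `K a f` holds at a world w exactly when f holds at every world that agent a considers possible from w, mirroring the truth definition above.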
In a sense, "announce ϕ" captures the temporal nature of knowledge update. We also extend the previously defined semantics to properly include these new modalities. Given an epistemic model M = ⟨S, ∼, V⟩, we define the model M|ϕ := ⟨S′, ∼′, V′⟩ by

S′ := {s′ ∈ S | M, s′ ⊨ ϕ},  ∼′_a := ∼_a ∩ (S′ × S′),  V′_p := V_p ∩ S′.

We write ∼_G for the transitive closure of ⋃_{a∈G} ∼_a, i.e., ⋃_{i∈ℕ} (⋃_{a∈G} ∼_a)^i.

Extended Epistemic Logic Semantics. We define that a formula ϕ is true in (M, s), written M, s ⊨ ϕ, as follows:

M, s ⊨ p iff s ∈ V(p)
M, s ⊨ ¬ϕ iff not M, s ⊨ ϕ
M, s ⊨ (ϕ ∧ ψ) iff M, s ⊨ ϕ and M, s ⊨ ψ
M, s ⊨ K_a ϕ iff for all t such that s ∼_a t, M, t ⊨ ϕ
M, s ⊨ C_B ϕ iff for all t such that s ∼_B t, M, t ⊨ ϕ
M, s ⊨ [ψ]ϕ iff M, s ⊨ ψ implies M|ψ, s ⊨ ϕ.

We now have the right machinery to approach unconditionally secure protocols.

3.2. Unconditional security. In order to avoid a long list of general definitions, we first introduce a toy model and then formally define the desired properties, protocols and goals. We borrow the notation and set-up from [2]. We are interested in the following setting: an agent, Alice, has a secret bit s which she wishes to send to another agent, Bob. All communication is via an insecure channel, so we must proceed under the assumption that an eavesdropper, Cath, overhears all communication between Alice and Bob. As mentioned previously, we assume that Cath has unlimited computing power, so we must rule out asymmetric encryption. We also rule out private-key cryptography by assuming that Alice and Bob have no prior secret information between themselves. However, we assume that the three players have been randomly dealt hands from a deck of n distinct cards (we may assume a total order on the cards). In an (a, b, c) deal, Alice gets a cards, Bob gets b cards, and Cath gets the remaining n − a − b cards.
We assume that the size of the deck, the values of the cards and the total order on the cards are common knowledge. Moreover, the cardinality of each player's hand is also common knowledge, and every communication is public. The goal is for Alice to transmit a bit of information, say s, to Bob while keeping Cath completely in the dark. A protocol that, given a random (a, b, c) deal, succeeds in transmitting s secretly is called an (a, b, c) secret bit transmission protocol. We next explore a specific instance of the above scenario: the case of (3, 3, 1). When Alice and Bob strive to learn each other's cards, as opposed to sending a single bit, the problem is known as the "Russian Cards Problem".

4. Toy model

We take the Russian Cards Problem as a toy model. The problem was originally described as: "From a pack of seven known cards, two players each draw three cards and a third player gets the remaining card. How can the players with three cards openly inform each other about their cards, without the third player learning of any of their cards who holds it?" We encode the cards from 0 to 6 and denote a hand which contains {x, y, z} as xyz. By relabelling, we may assume that Alice has 012, Bob has 345 and Cath has 6. A solution to this problem is the following: "Alice says 'My hand of cards is one of 012, 034, 056, 135, 246', after which Bob says 'Cath has card 6'." Our goal is to capture the problem and solution using our machinery of extended epistemic logic.

4.1. Formally capturing the problem. We first want to capture the indistinguishability between the hands (012, 345, 6) and (σ(012), σ(345), 6) for any σ ∈ S_7 that fixes {0, 1, 2} and {3, 4, 5}. This induces an equivalence relation on deals; there are C(7,3) · C(4,3) · C(1,1) = 140 possible deals. We write q_n for "card q is held by player n". For instance, 0_a denotes the ownership of card 0 by a.
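The count of 140 deals can be reproduced by brute force: choose Alice's 3 cards from 7, Bob's 3 from the remaining 4, and give Cath the last card. The helper names `choose` and `deals` below are our own, introduced only for illustration.

```haskell
-- Enumerate all (3,3,1) deals of the cards 0..6
import Data.List (subsequences, (\\))

-- All k-element subsets of a list (order of elements preserved)
choose :: Int -> [a] -> [[a]]
choose k = filter ((== k) . length) . subsequences

-- A deal: (Alice's hand, Bob's hand, Cath's hand)
deals :: [([Int], [Int], [Int])]
deals = [ (a, b, rest \\ b)
        | a <- choose 3 [0..6]
        , let rest = [0..6] \\ a
        , b <- choose 3 rest ]
```

Here `length deals` evaluates to C(7,3) · C(4,3) = 35 · 4 = 140, matching the count above.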
We want that, after every announcement of the executed protocol, it is publicly known that Cath is ignorant and that Alice and Bob each know the other's cards. Using the notation

K_a(B) := ⋀_{q∈[6]} (K_a(q_b) ∨ K_a(¬q_b)),
K_b(A) := ⋀_{q∈[6]} (K_b(q_a) ∨ K_b(¬q_a)),
¬K_c(AB) := ⋀_{q∈[5]} (¬K_c(q_a) ∧ ¬K_c(q_b)),

we formally capture the desired requirements as C_ab(K_a(B)), C_ab(K_b(A)) and C_abc(¬K_c(AB)).

4.2. Formally capturing the solution. Using the notation

anno_a := 012_a ∨ 034_a ∨ 056_a ∨ 135_a ∨ 246_a,
anno_b := 6_c,

we formally capture the solution (a succession of announcements) as

[anno_a][anno_b] (C_ab(K_a(B)) ∧ C_ab(K_b(A)) ∧ C_abc(¬K_c(AB))).

We next prove that the solution is indeed a solution; that is, we model check the following:

Rus, 012.345.6 ⊨ [anno_a][anno_b] (C_ab(K_a(B)) ∧ C_ab(K_b(A)) ∧ C_abc(¬K_c(AB)))

5. Model Checking

Two types of model checkers were available: those that require interpreted-system representations and those that do not. We chose one that does not, in order to reduce the number of definitions and the complexity.

5.1. DEMO. We use the model checker DEMO, a Dynamic Epistemic Modelling tool. DEMO allows modelling epistemic updates, graphical display of action models, and so on. It minimizes epistemic models under bisimulation and minimizes action models under the notion of action simulation. It is implemented in Haskell, and it implements the dynamic epistemic logic described in [1]. We restrict ourselves to action models for public announcements. Formulas in DEMO are defined recursively as:

Form = Top | Prop Prop | Neg Form | Conj [Form] | Disj [Form]
     | K Agent Form | CK [Agent] Form

A pointed action model (in effect, a function taking an epistemic model to the updated epistemic model) for a public announcement is created by the function public.
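As a quick plausibility check before turning to the model checker, the ignorance requirement on anno_a can be verified by direct enumeration. The sketch below is our own and independent of DEMO: whichever card Cath holds, every other card lies inside some announced hand she considers possible and outside another, so she cannot attribute it to Alice or to Bob; and Bob, holding 345, narrows Alice's hand down to 012.

```haskell
-- Brute-force check of Cath's ignorance and Bob's knowledge after anno_a
import Data.List ((\\), intersect)

-- The five hands named in Alice's announcement
announcement :: [[Int]]
announcement = [[0,1,2],[0,3,4],[0,5,6],[1,3,5],[2,4,6]]

-- Alice's hands still possible from Cath's point of view, given her card c
possibleForCath :: Int -> [[Int]]
possibleForCath c = [ h | h <- announcement, c `notElem` h ]

-- Cath holds one of the cards 3..6 that Alice lacks; for every other
-- card q, some possible hand contains q and some does not, so Cath
-- cannot deduce who holds q
cathIgnorant :: Bool
cathIgnorant = and
  [ any (q `elem`) hs && any (q `notElem`) hs
  | c <- [3..6]
  , let hs = possibleForCath c
  , q <- [0..6] \\ [c] ]

-- Bob keeps only the announced hands disjoint from his own hand 345
bobLearns :: Bool
bobLearns = [ h | h <- announcement, null (h `intersect` [3,4,5]) ] == [[0,1,2]]
```

Both `cathIgnorant` and `bobLearns` evaluate to True; the DEMO run below establishes the stronger, common-knowledge version of these properties.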
The update operation is specified as

upd :: EpistM -> PoAM -> EpistM

where EpistM is an epistemic state and PoAM is a pointed action model; it returns an updated epistemic state. Finally, model checking is done through the function isTrue, which has type:

isTrue :: EpistM -> Form -> Bool

5.2. Capturing the Russian Cards in DEMO. The initial epistemic state rus has one world per deal, so |S| = 140. The accessibility relations ∼_a, ∼_b, ∼_c and the valuation function V are defined as one would expect; for instance, the state corresponding to the deal 012.345.6 has valuation (0, [P 0, P 1, P 2, Q 3, Q 4, Q 5, R 6]), where P corresponds to Alice, Q to Bob and R to Cath. Alice's public announcement anno_a corresponds to the following action model:

public (K a (Disj [Conj [p0,p1,p2], Conj [p0,p3,p4], Conj [p0,p5,p6],
                   Conj [p1,p3,p5], Conj [p2,p4,p6]]))

and the action model for Bob's announcement is public (K b (R 6)). We capture the condition that Alice knows Bob's cards, previously defined as ⋀_{q∈[6]} (K_a(q_b) ∨ K_a(¬q_b)), with

aKnowsB = Conj [ Disj [K a q0, K a (Neg q0)],
                 Disj [K a q1, K a (Neg q1)],
                 Disj [K a q2, K a (Neg q2)],
                 Disj [K a q3, K a (Neg q3)],
                 Disj [K a q4, K a (Neg q4)],
                 Disj [K a q5, K a (Neg q5)],
                 Disj [K a q6, K a (Neg q6)] ]

We capture K_b(A) and ¬K_c(AB) similarly. We can now run DEMO on our model and check that

Rus, 012.345.6 ⊨ [anno_a][anno_b] (C_ab(K_a(B)) ∧ C_ab(K_b(A)) ∧ C_abc(¬K_c(AB)))

indeed holds.

6. Conclusion

We have used epistemic logic to capture a subset of unconditionally secure protocols and formally verified that a given protocol satisfies the desired security properties. We did so by model checking the constructed models using DEMO, a model checker written in Haskell that does not rely on interpreted systems. DEMO is not extremely convenient, however, as one needs to explicitly spell out, for every agent, all pairs in each equivalence relation.
Instead of relying on an a priori shared secret or on the assumption of a computationally bounded adversary, our unconditionally secure protocol relies on the implicit correlations arising from the deck of cards. In a sense, one could recast the given requirements in terms of mutual information: Cath's prior belief over the distribution of cards does not update upon hearing the announcements, whereas Alice and Bob both update to the Dirac delta concentrated on the actual card configuration. It would be interesting to see whether one can connect unconditionally secure protocols with the notion of synergy [6].

References

[1] Alexandru Baltag and Lawrence S. Moss. Logics for epistemic programs. In Information, Interaction and Agency, pages 1–60. Springer, 2004.
[2] Michael J. Fischer, Michael S. Paterson, and Charles Rackoff. Secret bit transmission using a random deal of cards. Technical report, DTIC Document, 1990.
[3] Michael J. Fischer and Rebecca N. Wright. An efficient protocol for unconditionally secure secret key exchange. In SODA, pages 475–483, 1993.
[4] Peter Gammie and Ron van der Meyden. MCK: Model checking the logic of knowledge. In International Conference on Computer Aided Verification, pages 479–483. Springer, 2004.
[5] Jim Gray. Notes on database operating systems. In Operating Systems: An Advanced Course (R. Bayer, ed.), 1978.
[6] Virgil Griffith. Quantifying synergistic information. PhD thesis, California Institute of Technology, 2014.
[7] Joseph Y. Halpern and Yoram Moses. Knowledge and common knowledge in a distributed environment. Journal of the ACM (JACM), 37(3):549–587, 1990.
[8] Thomas A. Henzinger, Pei-Hsin Ho, and Howard Wong-Toi. HyTech: A model checker for hybrid systems. In International Conference on Computer Aided Verification, pages 460–463. Springer, 1997.
[9] Alessio Lomuscio, Hongyang Qu, and Franco Raimondi. MCMAS: A model checker for the verification of multi-agent systems.
In International Conference on Computer Aided Verification, pages 682–688. Springer, 2009.
[10] Johan van Benthem, Jan van Eijck, Malvin Gattinger, and Kaile Su. Symbolic model checking for dynamic epistemic logic. In Logic, Rationality, and Interaction, pages 366–378. Springer, 2015.
[11] Hans van Ditmarsch, Wiebe van der Hoek, and Barteld Kooi. Dynamic Epistemic Logic, volume 337. Springer Science & Business Media, 2007.
[12] Hans P. van Ditmarsch, Wiebe van der Hoek, Ron van der Meyden, and Ji Ruan. Model checking Russian cards. Electronic Notes in Theoretical Computer Science, 149(2):105–123, 2006.
[13] Peter Winkler. Cryptologic techniques in bidding and defense: Parts I, II, III, and IV. Bridge Magazine, pages 148–149, 1981.

Department of Computer Science, McGill University, Montreal

Department of Computer Science, McGill University, Montreal
E-mail address: nicolas.gagne@mail.mcgill.ca