
Achieving Byzantine Agreement and Broadcast against Rational Adversaries

Adam Groce

Aishwarya Thiruvengadam

Ateeq Sharfuddin

CMSC 858F: Algorithmic Game Theory

University of Maryland, College Park

Overview

 Byzantine agreement and broadcast are central primitives in distributed computing and cryptography.

 The original paper (LSP1982) proves that successful protocols can be achieved only if fewer than one third of the players are adversarial.

Our Work

 We take a game-theoretic approach to this problem

 We analyze rational adversaries with preferences on outputs of the honest players.

 We define utilities rational adversaries might have.

 We then show that with these utilities, Byzantine agreement is possible with fewer than half of the players being adversarial.

 We also show that broadcast is possible for any number of adversaries with these utilities.

The Byzantine Generals Problem

 Introduced in 1980/1982 by Leslie Lamport, Robert Shostak, and Marshall Pease.

 Originally used to describe distributed computation among fallible processors in an abstract manner.

 It has since been applied to settings that require fault tolerance or involve adversaries.

General Idea

 The n generals of the Byzantine empire have encircled an enemy city.

 The generals are far away from each other, so messengers must be used for communication.

 The generals must agree upon a common plan (to attack or to retreat).

General Idea (cont.)

 Up to t generals may be traitors.

 If all good generals agree upon the same plan, the plan will succeed.

 The traitors may mislead the generals into disagreement.

 The generals do not know who the traitors are.

General Idea (cont.)

 A protocol is a Byzantine Agreement (BA) protocol tolerating t traitors if the following conditions hold for any adversary controlling at most t traitors:

A. All loyal generals act upon the same plan of action.

B. If all loyal generals favor the same plan, that plan is adopted.

General Idea (broadcast)

 Assume general i acts as the commanding general and sends his order to the remaining n-1 lieutenant generals.

 A protocol is a broadcast protocol tolerating t traitors if these two conditions hold:

1. All loyal lieutenants obey the same order.

2. If the commanding general is loyal, then every loyal lieutenant general obeys the order he sends.

Impossibility

 Shown impossible for t ≥ n/3

 Consider n = 3, t = 1; the Sender is malicious; all players are initialized with 1; from A's perspective:

[Figure: the Sender sends conflicting bits (1 and 0) to lieutenants A and B, and Sender and B each claim to be the honest one ("I'm Spartacus!").]

 A does not know if the Sender or B is honest.

Equivalence of BA and Broadcast

 Given a protocol for broadcast, we can construct a protocol for Byzantine agreement (sketched below):

1. All players use the broadcast protocol to send their input to every other player.

2. All players output the value they receive from a majority of the other players.
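
A minimal Python sketch of this reduction (illustrative only; the broadcast primitive is assumed to be supplied as a function, and honest players must be in the majority for the vote to be meaningful):

    # Minimal sketch: Byzantine agreement from a broadcast primitive.
    # inputs: dict player -> input bit.  broadcast(sender, bit) is assumed to
    # return a dict player -> received bit, with every honest player receiving
    # the same bit (the guarantee that broadcast provides).
    from collections import Counter

    def agreement_from_broadcast(inputs, broadcast):
        received = {p: {} for p in inputs}
        for sender, bit in inputs.items():
            delivered = broadcast(sender, bit)       # step 1: each player broadcasts its input
            for p in inputs:
                received[p][sender] = delivered[p]
        # step 2: each player outputs the majority of the values it received
        return {p: Counter(vals.values()).most_common(1)[0][0]
                for p, vals in received.items()}

    # Example with an ideal broadcast stand-in: players 2 and 3 hold 1, player 1 holds 0.
    print(agreement_from_broadcast({1: 0, 2: 1, 3: 1},
                                   lambda s, b: {1: b, 2: b, 3: b}))   # -> {1: 1, 2: 1, 3: 1}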

Equivalence of BA and Broadcast

 Given a protocol for Byzantine agreement, we can construct a protocol for broadcast.

1. The sender sends his message to all other players.

2. All players use the message they received in step 1 as input to a Byzantine agreement protocol.

3. Players output whatever the Byzantine agreement protocol outputs.
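
A matching sketch for this direction (illustrative only; send and byzantine_agreement are assumed primitives):

    # Minimal sketch: broadcast from a Byzantine agreement primitive.
    # send(bit, player) is assumed to return the bit that `player` receives from the
    # sender (a dishonest sender may deliver different bits to different players).
    # byzantine_agreement(inputs) is assumed to return a dict player -> agreed bit.

    def broadcast_from_agreement(sender_bit, players, send, byzantine_agreement):
        received = {p: send(sender_bit, p) for p in players}   # step 1: sender sends to all
        return byzantine_agreement(received)                    # steps 2-3: agree and output

    # Example with toy stand-ins: an honest sender and a trivial agreement primitive.
    print(broadcast_from_agreement(1, [1, 2, 3], lambda b, p: b, lambda r: r))   # -> {1: 1, 2: 1, 3: 1}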

Previous Works

 It was shown in PSL1980 that algorithms can be devised to guarantee broadcast/Byzantine agreement if and only if n ≥ 3t+1.

 If traitors cannot falsely report messages (for example, if a digital signature scheme exists), broadcast can be achieved for any n ≥ t ≥ 0.

 PSL1980 and LSP1982 demonstrated an exponential communication algorithm for reaching BA in t+1 rounds for t < n/3.

Previous Works

 Dolev et al. presented a (2t+3)-round BA protocol with polynomially bounded communication for any t < n/3.

 Probabilistic BA protocols tolerating t < n/3 have been shown running in expected time O(t / log n), though the worst-case running time is high.

 Faster algorithms tolerating (n - 1)/3 faults have been shown if both cryptography and trusted parties are used to initialize the network.

Rational adversaries

 All results stated so far assume general, malicious adversaries.

 Several cryptographic problems have been studied with rational adversaries

 Have known preferences on protocol output

 Will only break the protocol if it benefits them

 In MPC and secret sharing, rational adversaries allow stronger results

 We apply rational adversaries to Byzantine agreement and broadcast

Definition: Security against rational adversaries

A protocol for BA or broadcast is secure against rational adversaries with a particular utility function if, for any adversary A1 against which the protocol does not satisfy the security conditions, there is another adversary A2 such that:

 When the protocol is run with A2 as the adversary, all security conditions are satisfied.

 The utility achieved by A2 is greater than that achieved by A1.

Definition: Security against rational adversaries (cont.)

 This definition requires that, for the adversary, every strategy resulting in a security violation is strictly dominated by some strategy that preserves security.

 This guarantees that the adversary has an incentive not to break security.

 We do not require honest execution to strictly dominate other strategies.

Utility Definitions

 Meaning of “secure against rational adversaries” is dependent on preferences that adversaries have.

 Some utility functions make adversary-controlled players similar to honest players, with incentives to break the protocol only in specific circumstances.

 Other utility functions define adversaries as strictly malicious.

 We present several utility definitions that are natural and reasonable.

Utility Definitions (cont.)

 We limit ourselves to protocols that attempt to broadcast or agree on a single bit.

 The output can be one of the following:

 All honest players output 1.

 All honest players output 0.

 Honest players disagree on output.

Utility Definitions (cont.)

 We assume that the adversary knows the inputs of the honest players.

 Therefore, the adversary can choose a strategy to maximize utility for that particular input set.

Utility Definitions (cont.)

 Both protocol and adversary can act probabilistically.

 So, an adversary’s choice of strategy results not in a single outcome but in a probability distribution over possible outcomes.

 We establish a preference ordering on probability distributions.

Strict Preferences

 An adversary with “strict preferences” is one that will maximize the likelihood of its first-choice outcome.

 For a particular strategy, let a1 be the probability of the adversary achieving its first-choice outcome and a2 be the probability of achieving its second-choice outcome.

 Let b1 and b2 be the same probabilities for a second potential strategy.

 We say that the first strategy is preferred if and only if a1 > b1, or a1 = b1 and a2 > b2.
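
A minimal Python sketch of this lexicographic comparison (names and structure are ours, for illustration):

    # Minimal sketch: "strict preference" comparison of two adversary strategies.
    # Each strategy is summarized by (p_first, p_second): the probabilities of the
    # adversary's first- and second-choice outcomes under that strategy.

    def strictly_preferred(a, b):
        a1, a2 = a
        b1, b2 = b
        return a1 > b1 or (a1 == b1 and a2 > b2)

    # Example: both strategies reach the first-choice outcome with probability 0.5,
    # but the first reaches the second-choice outcome more often, so it is preferred.
    print(strictly_preferred((0.5, 0.3), (0.5, 0.1)))   # -> True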

Strict Preferences (cont.)

 Not a “utility” function in the classic sense

 Provides a good model for a very single-minded adversary.

Strict Preferences (cont.)

 We will use shorthand to refer to the ordering of outcomes:

 For example: 0s > disagreement > 1s.

 Denotes an adversary who prefers that all honest players output 0, whose second choice is disagreement, and whose last choice is that all honest players output 1.

Linear Utility

 An adversary with “linear utilities” has its utilities defined by:

Utility = u1 · Pr[players output 0] + u2 · Pr[players output 1] + u3 · Pr[players disagree],

where u1 + u2 + u3 = 1.
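
A minimal sketch of this utility computation (variable names are ours):

    # Minimal sketch: linear utility of the outcome distribution induced by a strategy.
    # dist    = (Pr[all honest output 0], Pr[all honest output 1], Pr[honest disagree])
    # weights = (u1, u2, u3), with u1 + u2 + u3 = 1 as in the definition above.

    def linear_utility(dist, weights):
        assert abs(sum(weights) - 1.0) < 1e-9        # weights are normalized
        return sum(u * p for u, p in zip(weights, dist))

    # Example: an adversary that only values disagreement (u3 = 1).
    print(linear_utility((0.2, 0.3, 0.5), (0.0, 0.0, 1.0)))   # -> 0.5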

Definition (0-preferring)

 A 0-preferring adversary is one for which Utility = E[number of honest players outputting 0].

 This is not a refinement of the strict-preference adversary with the ordering 0s > disagreement > 1s.

Other possible utility definitions

 The utility definitions do not cover all preference orderings.

 With n players, there are 2^n possible output combinations.

 There are an infinite number of probability distributions on those outcomes.

 Any well-ordering on these probability distributions is a valid set of preferences against which security could be guaranteed.

Other possible utility definitions

 Our utility definitions preserve symmetry of players, but this is not necessary.

 It is also possible that the adversary’s output preferences are a function of the input.

 The adversary could be risk-averse

 Adversarial players might not all be centrally controlled.

 Could have conflicting preferences

Equivalence of Broadcast to BA, revisited

 Reductions with malicious adversaries don’t always apply

 Building BA from broadcast fails

 Building broadcast from BA succeeds

Strict preferences case

 Assume a preference ordering of 0s > 1s > disagreement.

 t < n/2

 Protocol:

 Each player sends his input to everyone.

 Each player outputs the majority of all inputs he has received. If there is no strict majority, output 0.
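
A minimal sketch of this one-round protocol (simplified: the adversary is assumed to send the same messages to every honest player, so a single tally determines every honest player's output):

    # Minimal sketch: one-round majority protocol with ties broken toward 0.
    # honest_inputs: bits held by the honest players.
    # adversary_msgs: the bits the t adversarial players choose to send.

    def majority_protocol_output(honest_inputs, adversary_msgs):
        received = list(honest_inputs) + list(adversary_msgs)   # everyone sends to everyone
        ones = sum(received)
        zeros = len(received) - ones
        return 1 if ones > zeros else 0                         # no strict majority -> output 0

    # Example: 3 honest players holding 1, 2 adversarial players sending 0.
    print(majority_protocol_output([1, 1, 1], [0, 0]))          # -> 1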

Strict preferences case (cont.)

 Proof:

 If all honest players hold the same input, they form a strict majority (t < n/2), so the protocol terminates with all honest players agreeing on that input regardless of what the adversary sends.

 If the honest players' inputs are split, it is in the adversary's interest to send 0's to everyone: this maximizes the chance of its first-choice outcome (all 0's), and since every honest player then sees the same messages, the honest players still agree.

Generalizing the proof

 Same protocol:

 Each player sends his input to everyone.

 Each player outputs the majority of all inputs he has received. If there is no strict majority, output 0.

 Assume any preference ordering with the all-zero output as first choice.

 Proof works as before.

A General Solution

 Task: define another protocol that works for strict preferences with the ordering disagreement > 0s > 1s.

 We have not found a simple efficient solution for this case.

 Instead, we define a protocol based on Fitzi et al.’s work on detectable broadcast.

 Solves for a wide variety of preferences

Definition: Detectable Broadcast

 A protocol for detectable broadcast must satisfy the following three conditions:

 Correctness: All honest players either abort or accept and output 0 or 1. If any honest player aborts, so does every other honest player. If no honest players abort then the output satisfies the security conditions of broadcast.

 Completeness: If all players are honest, all players accept (and therefore, achieve broadcast without error).

 Fairness: If any honest player aborts then the adversary receives no information about the sender’s bit (not relevant to us).

Detectable broadcast (cont.)

 Fitzi et al.'s protocol requires t + 5 rounds and O(n^8 (log n + k)^3) total bits of communication, where k is a security parameter.

 It assumes a computationally bounded adversary.

 This compares to one round and n^2 bits for the previous protocol.

 Using detectable broadcast is much less efficient.

 However, it is not as bad when compared to protocols that achieve broadcast against malicious adversaries.

General protocol

1. Run the detectable broadcast protocol.

2. If an abort occurs, output the adversary's least-preferred outcome; otherwise, output the result of the detectable broadcast protocol.

 Works any time the adversary has a known least-favorite outcome

 Works for t < n.
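
A minimal sketch of this wrapper (our illustration, not the paper's pseudocode; detectable_broadcast and the adversary's preference ordering are assumed inputs):

    # Minimal sketch: broadcast against a rational adversary via detectable broadcast.
    # detectable_broadcast() is assumed to return ("abort", None) or ("accept", bit).
    # preference lists outcomes most-preferred first, e.g. ["disagree", "0s", "1s"].

    def rational_broadcast(detectable_broadcast, preference, player_id):
        status, value = detectable_broadcast()
        if status == "accept":
            return value                      # detectable broadcast succeeded
        # Punish the abort: output the adversary's least-preferred outcome,
        # so causing an abort can never improve the adversary's utility.
        least = preference[-1]
        if least == "0s":
            return 0
        if least == "1s":
            return 1
        return player_id % 2                  # least preferred is disagreement: players split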

Conclusion

 Rational adversaries do allow improved results on BA/broadcast.

 For many adversary preferences, we have matching possibility and impossibility results.

 More complicated adversary preferences remain to be analyzed.
