On the Evolution of Adversary Models
(from the beginning to sensor networks)
Virgil D. Gligor
Electrical and Computer Engineering
University of Maryland
College Park, MD. 20742
gligor@umd.edu
Lisbon, Portugal
July 17-18, 2007
VDG, July 17, 2007
Copyright © 2007
Overview
1. New Technologies often require a New Adversary Definition
- continuous state of vulnerability
2. Why is the New Adversary Different?
- ex.: sensor, mesh networks, MANETs
- countermeasures
3. Challenge: find "good enough" security countermeasures
4. Proposal: Information Assurance Institute
A system without an adversary definition cannot
possibly be insecure; it can only be astonishing…
… astonishment is a much underrated security vice.
(Principle of Least Astonishment)
Why is an Adversary Definition a fundamental concern?

1. New Technology > Vulnerability ~> Adversary <~> Methods & Tools

- computing utility; sharing of user-mode programs & data (early - mid 1960s)
  vulnerability: confidentiality and integrity breaches; system penetration
  adversary: untrusted user-mode programs & subsystems
  methods & tools: sys. vs. user mode ('62->); rings, security kernel ('65, '72); FHM ('75), theory/tool ('91)*; access policy models ('71)

- shared stateful services, e.g., DBMS, network protocols, dynamic resource allocation (early - mid 1970s)
  vulnerability: DoS instances
  adversary: untrusted user processes; concurrent, coordinated attacks
  methods & tools: DoS = a different problem ('83-'85)*; formal spec. & verif. ('88)*; DoS models ('92->)

- PCs, LANs; public-domain crypto (mid 1970s)
  vulnerability: read, modify, block, replay, forge messages
  adversary: "man in the middle"; active, adaptive network adversary
  methods & tools: informal: NS, DS ('78-'81); semi-formal: DY ('83); Byzantine ('82->); crypto attack models ('84->); auth. protocol analysis ('87->)

- internetworking (mid - late 1980s)
  vulnerability: large-scale effects: worms, viruses, DDoS (e.g., flooding)
  adversary: geographically distributed, coordinated attacks
  methods & tools: virus scans, tracebacks, intrusion detection (mid '90s->)

2. Technology Cost -> 0, Security Concerns persist
Continuous State of Vulnerability

New Technology > New Vulnerability (+/- O(months)) ~> New Adversary Model (+ O(years)) <~> New Analysis Methods & Tools (+ O(years))

… a perennial challenge ("fighting old wars"):

New Technology ~> New Vulnerability, yet an Old Adversary Model and reuse of old ("secure") systems & protocols => mismatch
New Technology Ex.: Sensor Networks
Claim
Sensor Networks introduce:
- new, unique vulnerabilities: nodes captured and replicated
- a new adversary: different from both the Dolev-Yao and traditional Byzantine adversaries
and
- require new methods and tools: emergent algorithms & properties
(for imperfect but good-enough security)

Mesh Networks have similar but not identical characteristics
Limited Physical Node Protection
Two Extreme Examples

Low end: Smart Cards (< $15)
- no tamper resistance
- non-invasive physical attacks
  - side-channel (timing, DPA)
  - unusual operating conditions: temperature, power, clock glitches
- invasive physical attacks
  - chip removal from plastic cover
  - microprobes, electron beams

High end: IBM 4764 co-processor (~ $9K)
- tamper resistance, real-time response
- independent battery, secure clock
- battery-backed RAM (BBRAM)
- wrapping: several layers of a non-metallic grid of conductors in a grounded shield, to reduce detectable EM emanations
- tamper-detection sensors (+ battery): temperature, humidity, pressure, voltage, clock, ionizing radiation
- response: erase BBRAM, reset device
Limited Physical Node Protection
Observation:
a single on-chip secret key is sufficient to protect
(e.g., via Authenticated Encryption)
many other memory-stored secrets (e.g., node keys)

Problem:
how do we protect that single on-chip secret key?

Potential Solution: Physically Unclonable Functions (PUFs)
- observation: each IC has unique timing characteristics
- basic PUF: a Challenge extracts a unique, secret Response (i.e., a secret key) from the IC's hidden, unique timing sequence
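The master-key observation above can be sketched with a toy encrypt-then-MAC construction built only on Python's standard library (illustrative, not production authenticated encryption; the function names `seal` and `open_sealed` are hypothetical):

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream from SHA-256 in counter mode (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(master_key: bytes, node_key: bytes) -> bytes:
    """Encrypt-then-MAC a memory-stored secret under the on-chip master key."""
    enc_key = hashlib.sha256(master_key + b"enc").digest()
    mac_key = hashlib.sha256(master_key + b"mac").digest()
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(node_key, _keystream(enc_key, nonce, len(node_key))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(master_key: bytes, blob: bytes) -> bytes:
    """Verify the MAC, then decrypt; raises ValueError on tampering."""
    enc_key = hashlib.sha256(master_key + b"enc").digest()
    mac_key = hashlib.sha256(master_key + b"mac").digest()
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Only the master key needs on-chip protection; sealed node keys can live in ordinary memory, since any tampering makes `open_sealed` fail.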
Basic PUF circuit [Jae W. Lee et al., VLSI '04]

[Figure: arbiter-based PUF on an IC. A Challenge (e.g., 128 bits, b0, b1, …) configures a chain of switch stages; each challenge bit bi selects straight (bi = 0) or crossed (bi = 1) paths for two racing signals, with a feed-forward arbiter tapping an intermediate race. A final Arbiter decides which signal arrives first, producing one output bit; an LFSR is used to obtain a multi-bit Response (e.g., 255 bits) from the 128-bit Challenge.]
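The arbiter race can be sketched with the standard additive linear delay model for arbiter PUFs (a simulation under assumed Gaussian per-stage delays, not the circuit from the paper):

```python
import random

def make_puf(n_stages: int = 128, seed: int = 1):
    """One simulated IC: per-stage delay differences fixed at 'manufacture'.
    Each stage contributes a different delay depending on its challenge bit."""
    rng = random.Random(seed)
    return [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n_stages)]

def response_bit(puf, challenge):
    """Race two signal copies through the switch chain; the Arbiter outputs
    1 if the accumulated delay difference is positive, 0 otherwise."""
    delay = 0.0
    for (d0, d1), bit in zip(puf, challenge):
        # a crossed switch (bit == 1) swaps the racing paths, negating
        # the accumulated difference before this stage's delay is added
        delay = (d1 - delay) if bit else (d0 + delay)
    return 1 if delay > 0 else 0
```

Two ICs built with different seeds give different response vectors for the same challenges, which is the basis of the duplication resistance claimed above; the same model also shows why machine-learning the delays (Pr. 1) is possible in principle.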
Basic PUF circuit [Jae W. Lee et al., VLSI '04]

Basic PUF counters:
- brute-force attacks (2^128 challenge-response pairs => impractical)
- duplication (different timing => different secret Response)
- invasive attacks (timing modification => different secret Response)

However,
- Pr. 1: an adversary can build a timing model of the Arbiter's output
  => can build a clone for secret-key generation
- Pr. 2: the Arbiter's output (i.e., secret-key generation) is unreliable
  Reality: intra-chip timing variation (e.g., temperature, pressure, voltage)
  => errors in the Arbiter's output (e.g., max. error: 4-9%)
Suggested PUF circuit [Ed Suh et al., ISCA '05]

Solution to Pr. 1:
hash the Arbiter's output to provide a new Response
- the Arbiter output cannot be discovered from known Challenges and new Responses

Solution to Pr. 2:
add Error-Correcting Codes (ECCs) on the Arbiter's output
- e.g., use BCH(n, k, d): n (timing bits) = k (secret bits) + b (syndrome bits), correcting (d-1)/2 errors
- BCH(255, 63, 61) => up to 30 errors (> 10% of n > max. expected no.) in the Arbiter's output are corrected
- more than 30 errors? (probability: 2.4 x 10^-6) the probability of an incorrect output is small but not zero:
  hash the Arbiter's output and verify against the stored Hash(Response)
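The generate/retrieve flow can be sketched with a toy repetition code standing in for BCH(255, 63, 61) (illustrative only; a real design would use the BCH parameters above, and the helper names here are hypothetical):

```python
import hashlib

REP = 5  # each secret bit carried by 5 timing bits (toy stand-in for BCH)

def generate(timing_bits):
    """Enrollment: derive secret bits and a public syndrome from PUF output."""
    assert len(timing_bits) % REP == 0
    secret, syndrome = [], []
    for i in range(0, len(timing_bits), REP):
        block = timing_bits[i:i + REP]
        ref = block[0]
        secret.append(ref)
        # syndrome = XOR of each remaining bit with the block's reference bit
        syndrome.extend(b ^ ref for b in block[1:])
    response = hashlib.sha256(bytes(secret)).hexdigest()
    return response, syndrome

def retrieve(noisy_bits, syndrome):
    """Regeneration: correct the noisy PUF output using the stored syndrome,
    then re-derive the hashed Response."""
    secret = []
    si = 0
    for i in range(0, len(noisy_bits), REP):
        block = noisy_bits[i:i + REP]
        # XORing with the syndrome turns every position into a noisy vote
        # for the reference bit; majority vote corrects up to 2 flips per block
        votes = [block[0]] + [b ^ syndrome[si + j] for j, b in enumerate(block[1:])]
        si += REP - 1
        secret.append(1 if sum(votes) * 2 > len(votes) else 0)
    return hashlib.sha256(bytes(secret)).hexdigest()
```

As in the real circuit, the syndrome is public and the Response never leaves the chip in the clear; the next slide shows why publishing the syndrome still leaks information.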
Suggested PUF circuit

[Figure: the basic arbiter PUF extended with a BCH block and a Hash block. The 255-bit Arbiter output feeds the BCH encoder/decoder, which emits a known Syndrome (e.g., 192 bits); the corrected output is hashed into the secret Response. Input: a known Challenge (e.g., 128 bits).]

generate response: C -> R, S; retrieve response: C, S -> R

However, the Syndrome reveals some (e.g., b = 192) bits of the Arbiter's output (n = 255).

(Off-line) Verifiable-Plaintext Attack:
get C, S, Hash(R); guess the remaining (e.g., 63) bits of the Arbiter's output; verify the new R;
repeat verifiable guesses until the Arbiter's output is known; discover the secret key
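The attack loop can be sketched at toy scale, with 8 unknown bits instead of 63 (names and sizes are illustrative):

```python
import hashlib
from itertools import product

def verifiable_plaintext_attack(known_bits, stored_hash, n_unknown=8):
    """Brute-force the bits of the Arbiter output not revealed by the
    syndrome: guess them, recompute Hash(Response), and compare with the
    stored hash. Every guess is verifiable entirely off-line."""
    for guess in product((0, 1), repeat=n_unknown):
        candidate = bytes(known_bits + list(guess))
        if hashlib.sha256(candidate).hexdigest() == stored_hash:
            return list(guess)  # full Arbiter output recovered
    return None
```

In the suggested circuit the syndrome exposes 192 of 255 bits, so the same loop needs about 2^63 guesses, which is large but within reach of a determined off-line adversary, unlike the 2^128 challenge-response search against the basic PUF.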
Some Characteristics of Sensor Networks
1. Ease of Network Deployment and Extension
- scalability => simply drop sensors at desired locations
- key connectivity via key pre-distribution =>
neither administrative intervention nor TTP interaction
2. Low Cost, Commodity Hardware
- low cost => physical node shielding is impractical
=> ease of access to internal node state
(Q: how good should physical node shielding be to prevent access
to a sensor's internal state?)
3. Unattended Node Operation in Hostile Areas =>
adversary can capture, replicate nodes (and node states)
Replicated Node Insertion: How Easy?

[Figure: a node i captured in neighborhood i is replicated, and replicas 1, 2, 3 are inserted into distinct neighborhoods i, j, k.]
Attack Coordination among Replicas: How Easy?

[Figure: replicas of node i inserted into neighborhoods i, j, k collude across neighborhood boundaries.]

Note: Replica IDs are cryptographically bound to pre-distributed keys and cannot be changed
New vs. Old Adversary

Old (Dolev-Yao) Adversary can
- control network operation
- man-in-the-middle: read, replay, forge, block, modify, insert messages anywhere in the network
- send/receive any message to/from any legitimate principal (e.g., node)
- act as a legitimate principal of the network

Old (Dolev-Yao) Adversary cannot
1) adaptively capture legitimate principals' nodes and discover a legitimate principal's secrets
2) adaptively modify network and trust topology (e.g., by node replication)

Old Byzantine Adversaries
- can do 1) but not 2)
- consensus problems impose fixed thresholds for captured nodes (e.g., t < n/2, t < n/3) and a fixed number of nodes, n
Countermeasures for Handling the New Adversary?

1. Detection and Recovery
- Ex.: detection of node-replica attacks
- Cost? Traditional vs. emergent protocols
- Advantage: always possible, good-enough detection
- Disadvantage: damage possible before detection

2. Avoidance: early detection of the adversary's presence
- Ex.: periodic monitoring
- Cost vs. timely detection? False negatives/positives?
- Advantage: avoids damage done by the new adversary
- Disadvantage: not always practical in MANETs, sensor and mesh networks

3. Prevention: survive attacks by "privileged insiders"
- Ex.: subsystems that survive administrators' attacks (e.g., authentication)
- Cost vs. design credibility? Manifest correctness
- Advantage: prevents damage; Disadvantage: very limited use
Example of Detection and Recovery
(IEEE S&P, May 2005)

- naive: each node broadcasts <ID, "locator," signature>
  - perfect replica detection: ID collisions with different locators
  - complexity: O(n^2) messages
- realistic: each node broadcasts <ID, "locator," signature> locally;
  local neighbors further broadcast to g << n random witnesses
  - good-enough replica detection: ID collision with different locators at a witness
  - detection probability: 70-80% is good enough
  - complexity: O(n x sqrt(n)) messages
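The detection probability of the witness-based scheme can be estimated with a toy simulation, assuming both replicas' conflicting location claims each reach g uniformly random witnesses out of n nodes (parameters illustrative, not from the paper):

```python
import random

def detection_probability(n=1000, g=40, trials=2000, seed=0):
    """Fraction of trials in which at least one witness receives both
    replicas' claims, i.e., sees two different locators for one ID."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        w1 = set(rng.sample(range(n), g))  # witnesses for replica 1's claim
        w2 = set(rng.sample(range(n), g))  # witnesses for replica 2's claim
        if w1 & w2:  # a common witness detects the conflict
            detected += 1
    return detected / trials
```

With n = 1000 and g = 40 (roughly sqrt(n) witnesses per claim), the birthday-style collision probability 1 - (1 - g/n)^g comes out near 80%, consistent with the 70-80% "good enough" figure above while keeping O(n x sqrt(n)) messages.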
A New Application: Distributed Sensing

Application: a set of m sensors observe and signal an event
- each sensor broadcasts "1" whenever it senses the event; else, it does nothing
- if at least t (t <= m) broadcasts are counted, all m sensors signal the event to neighbors; else they do nothing

Operational Constraints
- absence of an event cannot be sensed (e.g., no periodic "0" broadcasts)
- broadcasts are reliable and synchronous (i.e., counted in sessions)

Adversary Goals:
- violate integrity (i.e., if t <= m/2, issue t false broadcasts)
- deny service (i.e., if t > m/2, suppress m - t + 1 broadcasts)

New (Distributed-Sensing) Adversary
- captures nodes; forges, replays, or suppresses (jams) broadcasts (within the same or across different sessions)
- increases the broadcast count with outsiders' false broadcasts
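The threshold rule and the two adversary goals can be put in a minimal session model (an illustrative sketch; the parameter names and the m = 6, t = 4 numbers below are arbitrary):

```python
def session_outcome(m, t, true_broadcasts, false_broadcasts, suppressed):
    """One synchronous session of m sensors with threshold t.

    true_broadcasts : honest sensors that sensed the event and broadcast "1"
    false_broadcasts: forged broadcasts from captured nodes or outsiders
    suppressed      : honest broadcasts jammed by the adversary
    Returns True iff the sensors signal the event to their neighbors."""
    assert 0 <= true_broadcasts <= m
    count = max(true_broadcasts - suppressed, 0) + false_broadcasts
    return count >= t

# integrity violation: no event occurred, but t forged broadcasts reach the threshold
assert session_outcome(m=6, t=4, true_broadcasts=0, false_broadcasts=4, suppressed=0)

# denial of service: the event occurred, but jamming m - t + 1 = 3 broadcasts
# keeps the count below t
assert not session_outcome(m=6, t=4, true_broadcasts=6, false_broadcasts=0, suppressed=3)
```

The model makes the asymmetry explicit: because "0" is never broadcast, the adversary only ever needs to inflate or deflate the count of "1" broadcasts, which is what distinguishes this adversary from a Byzantine one.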
An Example: distributed revocation decision
[IEEE TDSC, Sept. 2005]

m = 6, t = 4 votes in a session => revoke target

[Figure: a communication neighborhood of nodes labeled 1-14 containing a smaller keying neighborhood; m = 6 nodes share pre-distributed keys with the revocation target, and when t = 4 of them vote against the target within one session, the target is revoked.]
New vs. Old Adversary

A (Reactive) Byzantine Agreement Problem?
- both the global event and its absence are ("1"/"0") broadcast by each node
- strong constraint on t; i.e., no PKI => t > 2m/3; PKI => t > m/2
- fixed, known group membership

No. New (Distributed-Sensing) Adversary =/= Old (Byzantine) Adversary
- the new adversary need not forge, initiate, or replay "0" broadcasts
- the new adversary's strength depends on a weaker t (e.g., t < m/2)
- the new adversary may modify membership to increase the broadcast count (> t)
Conclusions
1. New Technologies => New Adversary Definitions
- avoid “fighting the last war”
- security is a fundamental concern of IT
2. No single method of countering new and powerful adversaries
- detection
- avoidance (current focus)
- prevention (future)
3. How effective are the countermeasures?
- provide "good enough" security; e.g., probabilistic security properties