A Rant About Security UI
Dan Simon
Microsoft Research
Systems and Networking Group
Outline
 Three myths about Security UI
 Guiding principles
 Example 1: Why Johnny Can’t Encrypt
 Example 2: WindowBox
 Example 3: MSR .NET Security Model
Three Myths About
Security UI
Myth #1: “It shouldn’t exist”
 Security should “just happen”,
“automatically”, “by default”
 Computer should “recognize” bad
actions/events and stop them
 Easy at the extremes
– Don’t give everyone access to everything
 Impossible in the fuzzy middle, where it
matters
– When is an installed/run program a “virus”?
Myth #1 (cont’d)
 Leads to things not working for reasons the
user doesn’t understand
– “I’m sorry, Dave, I can’t do that.”
 Users will overcome the obstacle, probably
jeopardizing security even more
– Password standards
– Firewalls
Myth #2: “Three-word UI”
(‘are you sure?’)
 “Security should be transparent, except
when the user tries something dangerous, in
which case a warning is given”
 …But how is the user supposed to evaluate
the warning?
 Only two realistic cases
– Always heed the warning: see Myth #1
– Always ignore the warning: what’s the point?
Myth #3: You can’t handle the
truth!
 “Users can’t possibly be expected to understand
and manage their own security”
 …But they do it all the time in real life!
– Vehicle, home, office keys/alarms/barriers
– Cash, checks, credit cards, ATM cards/PINs, safe
deposit boxes, ID, documents
– Purchases, transactions, contracts
 Complex, adaptive security policies
– “Key under the mat”, town/suburb/city behaviors
Guiding Principles
What is Security?
 Security is about implementing people’s
preferences for privacy, trust and
information sharing (i.e., their “Security
Policies”)
 Real-world mechanisms follow this rule
– Key: “Whoever I give the key to has access to
that which it locks”
– Signature: Authorizes (restricted) delegated
access
What Makes Security Usable?
 Clarity: Everyone knows exactly what a key
does, and how it’s used (but not how it works)
– Allows users to “internalize” the security model
governing the mechanism, and form policies naturally
within the model
 Intuitiveness: A key is a physical "portal" that
grants access, even when the access isn't literally
physical (e.g., a car key)
– Eases the internalization process
 Consistency: A key is a key is a key
– Allows users to understand new instances of the
mechanism and use them with confidence right away
What works in a Security UI?
 Clear, understandable metaphors
– Abstract out the mechanism meaningfully for users
– Use physical analogs where possible
 Top-down design
– Start with the user model, design the underlying
mechanism to implement it
 Unified security model
– Across applications: “Windows GUI for security”
 Meaningful, intuitive user input
– Don’t assume things on the user’s behalf—figure out
how to ask so that the user can answer intelligently
Example 1: “Why Johnny
Can’t Encrypt”
Security Usability Study
 Whitten and Tygar (USENIX Security 1999):
"Why Johnny Can't Encrypt"
 Usability evaluation of PGP 5.0
 Bottom line: unusable even for tech nerds
– No understanding of PK cryptography
– Complex, confusing management necessary
• Public keys, private keys, certificates, key rings,
webs of trust, key servers, etc. etc.
The Real Problem
 “Public-key” crypto makes no sense!
– Keys don’t work that way in real life
 “Certificate store” metaphor inapt
– We don’t normally copy & store other people’s
(or our own) IDs
 “Web of Trust” is unnatural trust model
– Maybe for 15th-century merchants…
 Much better metaphors available
– "Mail slot", "Driver's License" (the mail-slot idea is
sketched below)
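A minimal sketch of the mail-slot metaphor: the user publishes a "mail slot" anyone can drop a message into, and keeps the private "mail box" that opens it; the public/private key pair stays hidden behind those two objects. The MailSlot/MailBox names are invented here, and the Python cryptography package is used only as a stand-in primitive — this is not how PGP is actually structured.

```python
# Illustrative sketch only: a "mail slot" wrapper over public-key encryption.
# MailSlot/MailBox are hypothetical names; the "cryptography" package is just
# a stand-in for the underlying primitive the user never needs to see.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class MailSlot:
    """Anyone holding the slot can deposit a message only the owner can read."""
    def __init__(self, public_key):
        self._public_key = public_key          # hidden behind the metaphor

    def drop(self, message: bytes) -> bytes:
        return self._public_key.encrypt(message, OAEP)

class MailBox:
    """Owned by one person; corresponds to the private key, never shared."""
    def __init__(self):
        self._key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def slot(self) -> MailSlot:
        """Hand out the public 'slot' that anyone may drop mail into."""
        return MailSlot(self._key.public_key())

    def open(self, sealed: bytes) -> bytes:
        return self._key.decrypt(sealed, OAEP)

# Usage: Alice publishes her slot; Bob drops a note through it.
alice = MailBox()
sealed = alice.slot().drop(b"lunch at noon?")
assert alice.open(sealed) == b"lunch at noon?"
```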
Example #2: WindowBox
Problem: Unsafe Code
 Viruses/Trojan horses
– Can arrive via email, Web link, Web link in
email…
 “Honest apps” with massive security holes
– Application developers don’t understand
security, and/or have ship deadlines
– One security hole in one app. is all it takes
 Users want to run (possibly) unsafe code,
but safely
Sandboxing Strategies
 Java applet approach
– Sandbox everything so restrictively that all code is harmless
– …and hence unable to perform lots of necessary functionality
 ActiveX control approach
– Free rein to code from a sufficiently trusted source
– Generally stops malicious code, but does nothing to plug security
holes in honest controls
 Intermediate approaches
– Allow limited freedom based on various characteristics, e.g., origin
(sketched below)
– But how does a user evaluate code characteristics?
– How does a user judge the risk of offering a given privilege to a
given app.?
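A toy sketch of the intermediate approach: a table maps a code characteristic (here, its origin) to a set of privileges. The origins and privilege names are made up; the sketch shows the mechanism, not an answer to the harder question of how a user would judge whether such grants are safe.

```python
# Toy illustration of origin-based privilege grants (all names hypothetical).
POLICY = {
    "https://intranet.example.com": {"read_files", "write_files", "network"},
    "https://www.example.com":      {"network"},
    "default":                      set(),   # unknown origins get nothing
}

def privileges_for(origin: str) -> set[str]:
    return POLICY.get(origin, POLICY["default"])

def check(origin: str, privilege: str) -> None:
    if privilege not in privileges_for(origin):
        raise PermissionError(f"{origin} may not {privilege}")

check("https://intranet.example.com", "write_files")    # allowed
# check("https://www.example.com", "write_files")       # would raise
```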
The WindowBox Premise
 One security model that users can
understand is complete physical separation
(e.g., separate PCs)
 Breaching the separation can also make
sense, as long as it requires explicit user
action (e.g., carrying a floppy disk)
 User’s work and data subdivide naturally in
any event, minimizing the inconvenience
caused by the separation
The WindowBox Model
 Users have multiple, mutually isolated desktops
 Applications, network access provided (or denied)
on a per-desktop basis
 Data, objects confined to one desktop by default
 Explicit user action required to transfer data,
objects between desktops
 Some objects (system components, desktop
management tools) are "unconfined"
 Orthogonal to user-based access control (see the
sketch below)
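A minimal sketch, in Python with invented names (Desktop, transfer, user_confirmed), of the confinement rules above: each object lives on exactly one desktop, network policy is per desktop, and moving anything between desktops takes an explicit user action. WindowBox itself enforced this at the window-system/OS level; this is only an illustration of the model.

```python
# Illustrative model of WindowBox-style confinement (names are hypothetical).
from dataclasses import dataclass, field

@dataclass
class Desktop:
    name: str
    allowed_hosts: set[str] = field(default_factory=set)  # per-desktop network policy
    objects: set[str] = field(default_factory=set)        # data/objects confined here

    def may_connect(self, host: str) -> bool:
        return host in self.allowed_hosts or "*" in self.allowed_hosts

def transfer(obj: str, src: Desktop, dst: Desktop, user_confirmed: bool) -> None:
    """Objects cross desktops only via an explicit user action."""
    if not user_confirmed:
        raise PermissionError("transfers between desktops require explicit user action")
    src.objects.remove(obj)
    dst.objects.add(obj)
```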
Examples
 “Personal” desktop
– Highly sensitive personal data, only highly trusted
applications
– Network access restricted to highly trusted addresses
(bank, broker, etc.)
 “Enterprise” desktop
– Work-related data and applications
– Direct access only to enterprise LAN/VPN (or
elsewhere via firewall/proxy)
 “Play” desktop
– Arbitrary untrusted data, applications
– Full Internet access
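Continuing the sketch above, the three example desktops could be configured roughly like this (all hostnames and file names are placeholders):

```python
personal   = Desktop("personal",   allowed_hosts={"bank.example.com", "broker.example.com"})
enterprise = Desktop("enterprise", allowed_hosts={"vpn.corp.example.com"})  # LAN/VPN only
play       = Desktop("play",       allowed_hosts={"*"})                     # full Internet

personal.objects.add("tax-return.xls")
play.objects.add("downloaded-game.exe")
# Moving the download onto the personal desktop would demand explicit consent:
# transfer("downloaded-game.exe", play, personal, user_confirmed=True)
```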
Other Possible Configurations
 “Communication” desktop
– Only trusted communications applications
(email client, browser)
– Data moved elsewhere before being “touched”
 “Ghost” desktops
– Created “on the fly” for suspicious incoming
data (attachments, downloaded files)
 “Console” desktop
– Unrestricted access for administrative tasks
Using the Desktops
 Sensitive personal data is isolated from
untrusted applications, data, Internet sites
 Enterprise data and apps. are isolated from
everything unrelated to the enterprise
 Untrusted data or apps. received via email
or the Web are isolated from anything they
might be able to damage
 E.g.: authentication credentials
Example #3: MSR .NET
Security Model
Goals and Methodology
 Project organized (late 2000) to create a
“from scratch” security model suitable for
.NET
 Started with key envisioned .NET scenarios
 Goals
– Identify security aspects of the required
functionality anticipated by the scenarios
– Design security models that make the security
functionality usable and manageable
Identity Model
Identities form a global name space
(name@domain) and are embodied in “ID
cards”
 Example ID cards:
– State of Washington driver's license: John Doe, DOEJQ1234,
John_Doe@washington.gov
– Taxpayer identification (U.S. Treasury Dept.): John Doe, 012-34-5678
– Group Health employee card: John Doe, 1234567
– Hotmail account holder: "Masked Avenger" (John Doe), 12345,
masked_av@hotmail.com
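A small sketch of the identity model: an identity is a position in the global name@domain name space, and an "ID card" bundles that identity with whatever attributes a particular issuer vouches for. The class and field names here are assumptions made for illustration, not the project's actual data model.

```python
# Hypothetical data model for name@domain identities and ID cards.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    name: str
    domain: str

    def __str__(self) -> str:
        return f"{self.name}@{self.domain}"   # global name space: name@domain

@dataclass
class IDCard:
    issuer: str                  # who vouches for the card
    holder: Identity             # position in the global name space
    attributes: dict[str, str]   # whatever the issuer asserts (license no., etc.)

license_card = IDCard(
    issuer="State of Washington",
    holder=Identity("John_Doe", "washington.gov"),
    attributes={"name": "John Doe", "license": "DOEJQ1234"},
)
```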
Access Model
“Containers” are the basic unit of access
control
 All objects protected at container level
 Each object (e.g., document or folder)
exists in exactly one container
– Containers cannot be nested
 Containers are directly accessible (e.g., via
links)
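A minimal sketch of the access model: protection is applied per container, each object exists in exactly one container, and containers hold objects but never other containers. All names are illustrative.

```python
# Illustrative container-based access control (names hypothetical).
from dataclasses import dataclass, field

@dataclass
class Container:
    name: str
    readers: set[str] = field(default_factory=set)           # access granted per container...
    objects: dict[str, bytes] = field(default_factory=dict)  # ...never per individual object

    def put(self, obj_name: str, data: bytes) -> None:
        self.objects[obj_name] = data   # the object now exists in exactly this container

    def read(self, who: str, obj_name: str) -> bytes:
        if who not in self.readers:
            raise PermissionError(f"{who} has no access to container {self.name}")
        return self.objects[obj_name]

taxes = Container("taxes-2001", readers={"john@example.com"})
taxes.put("return.doc", b"...")
taxes.read("john@example.com", "return.doc")   # allowed; anyone else is refused
```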
Conclusions
 Traditional security UI sucks
 Good news: there are lots of new ideas and
approaches out there
 Bad news: they’ve never been tried on a
large scale
– nobody knows what (if anything) really works
 Arrogant claim: common sense goes far
– common sense + prototypes goes even further