
Users vs. security
Cyberdefence seminar, Tallinn
Technical University
Maksim Afanasjev, 2011
Weakest link
It is not difficult to make a secure system. In theory.
Reality, however, is different.
Experiment on the weakest link
The U.S. Department of Homeland Security ran a test
in 2011 to see how hard it was for hackers to corrupt
workers and gain access to computer systems:
staff secretly dropped computer discs and USB thumb
drives in the parking lots of government buildings and
private contractors.
Of those who picked them up:
• 60 percent plugged the devices into office computers
• If the drive or CD case had an official logo, 90 percent
were installed.
Why are humans the weakest link?
Humans are not perfect. But security systems expect
us to be.
On average, a user has 25 accounts and logs in 8 times per day.
They’re not thinking, ‘I want to be secure,’
they’re thinking, ‘I want to do my banking.’
“Users will knowingly install spyware if the tradeoffs are
good for them,” research shows.
Users' story
Users usually
• do not understand the importance of software,
hardware and systems for their organizations
• do not believe assets are at risk
• do not understand that their behavior puts assets
at risk (i.e. that they will be attacked)
• have problems using security tools correctly (e.g. PGP)
Often users cannot behave in the way required, or
do not want to behave in the way required.
Examples to follow:
Users stories continued
Policy: "Lock your screen whenever you leave your computer"
Outcome: does not work. People do not do it.
Reason: "What will my colleagues think? That would ruin the
trusting relationship with my colleagues."
Lesson: users will not comply with policies and mechanisms
that are at odds with values they hold.
A person who uses complex passwords, changes them often
and locks the screen will look paranoid in our culture.
If the required behavior conflicts with norms, values or
self-image, users will not comply.
Users' workarounds
• write passwords down (e.g. on the ATM panel; they
cannot even take care of their own banking card!)
• share passwords
• choose passwords that are insecure (but easy to
remember)
User solution
If it is possible to avoid the system at all, avoid it!
If the system allows an easy password, use it!
If it is too complex to use (e.g. demands overly complex
passwords) and impossible to avoid, find a workaround!
Is the tradeoff inherent?
If a system is secure but not usable, people will start
using another (completely insecure) system that is
usable.
If a system is usable but not secure, it will not last long
(it gets compromised or hacked).
There is agreement that systems must be designed to be
both secure and usable, but no agreement on how.
It is fairly clear how to make a system secure to a needed
degree: there are guidelines and libraries. But it is not
clear how to make it usable.
Tradeoff
When trying to make secure systems usable, we add
complexity. Complexity raises the chances of doing
something wrong, i.e. the result is less secure!
Tradeoff example- relay attacks
Passive keyless entry for cars: higher usability, but
what about security?
The car periodically sends beacons on the LF channel.
These beacons are short wake-up messages for the key. When
the key detects the signal on the LF channel, it wakes up
the microcontroller, demodulates the signal and interprets it.
After computing a response to the challenge, the key
replies on the UHF channel. This response is received and
verified by the car. In the case of a valid response the car
unlocks the doors.
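The challenge-response exchange described above can be sketched in a few lines. This is an illustrative assumption only: the HMAC construction, the 16-byte key, and all function names are mine, not any manufacturer's actual automotive protocol.

```python
# Toy sketch of a keyless-entry challenge-response exchange.
# SHARED_KEY, the HMAC choice and the message sizes are assumptions.
import hmac, hashlib, os

SHARED_KEY = os.urandom(16)  # secret shared by car and key fob

def car_send_challenge():
    """Car broadcasts a fresh random challenge on the LF channel."""
    return os.urandom(8)

def fob_respond(challenge):
    """Key fob wakes up, computes a MAC over the challenge, replies on UHF."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_verify(challenge, response):
    """Car recomputes the expected response and unlocks on a match."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = car_send_challenge()
response = fob_respond(challenge)
print("unlock:", car_verify(challenge, response))  # unlock: True
```

Note that nothing in this exchange checks *where* the response came from, which is exactly what the relay attack on the next slides exploits.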
Relay attacks on keyless entry
The challenge-response algorithm is tested and proven
in conventional key systems. With a key of proper length
and a strong algorithm, it is difficult to interfere with.
LF requests work within 1-2 m around the car; the UHF
response works up to 100 meters.
Relay attack
[Diagram: relay link between the car side (with antennas) and the key side]
Relay attack
It works!
All 10 tested cars were opened and started.
Cars keep running after they detect the key is missing (for
safety reasons).
It is a very safe attack: no physical contact with either the key
or the car. If it does not work, nobody is at risk.
Countermeasures:
• shielding the key (usability?)
• removing battery from the key (usability?)
• RF distance bounding protocols: a class of protocols in which one
entity (the verifier) measures an upper bound on its distance to
another (trusted or untrusted) entity (the prover).
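The distance-bounding countermeasure can be illustrated with a toy calculation: the verifier times a challenge-response round trip and, since signals cannot travel faster than light, converts the time into an upper bound on the prover's distance. The timings below are simulated assumptions; a real protocol needs dedicated nanosecond-precision radio hardware.

```python
# Toy illustration of distance bounding: round-trip time gives an
# upper bound on distance. All numbers here are simulated.
C = 299_792_458.0  # speed of light, m/s

def distance_upper_bound(rtt_seconds, processing_seconds):
    """The signal covers at most c * (rtt - processing) metres in
    total, so the one-way distance is at most half of that."""
    return C * (rtt_seconds - processing_seconds) / 2

PROCESSING = 1e-6  # assumed fixed processing delay in the key

# Legitimate key 2 m away vs. a relayed key actually 50 m away:
# the relay cannot make the signal arrive faster than light allows.
legit_rtt = 2 * 2 / C + PROCESSING
relayed_rtt = 2 * 50 / C + PROCESSING

print(distance_upper_bound(legit_rtt, PROCESSING))    # about 2 m: accept
print(distance_upper_bound(relayed_rtt, PROCESSING))  # about 50 m: reject
```

The car would then unlock only if the bound is below a threshold (say 3 m), so the relayed response is rejected no matter how fast the attackers' equipment is.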
Relay attack conclusion
Manufacturers took an established system (challenge-response,
e.g. KeeLoq) and snapped on usability features.
As a result, entirely new security flaws were introduced.
Outcome: usable, but less secure.
Improving security at cost
On August 25, 2004, Microsoft released Service Pack 2
for Windows XP. This was a "security" SP.
Windows Firewall was enabled by default in this
release.
SP2 Firewall
Were users expecting this? No! They just updated.
Did users know what to do with it? No!
Outcome: many found a way to disable the bugger.
Problem: you can do this to a user only once. There is no
way to make it gentler next time.
Result: dissatisfaction with security measures; some users
disabled their firewalls and are perhaps less secure overall.
Ways to deal with security
complexity
In an ideal world, all of the security complexity is
hidden from the user. In reality, this is not possible.
2 approaches:
•
communicate an accurate conceptual model of the
security to the user as quickly as possible. The smaller
and simpler that conceptual model is, the more likely
it is to be successful.
•
the user does not need to understand the "big picture". He
must clearly understand the sequence of steps needed
to perform a task (a wizard?).
Complexity
Finding ways to both maximize security and usability
has been a longstanding problem:
According to Saltzer and Schroeder [Saltzer 75] in
"Basic Principles of Information Protection" :
Psychological acceptability: It is essential that the human interface be
designed for ease of use, so that users routinely and automatically apply
the protection mechanisms correctly. Also, to the extent that the user's
mental image of his protection goals matches the mechanisms he must
use, mistakes will be minimized. If he must translate his image of his
protection needs into a radically different specification language, he will
make errors.
In practice, the principle is interpreted to mean that the
security mechanism may add some extra burden, but
that burden must be both minimal and reasonable.
Possible solutions
Increasing security without undermining usability, and vice versa.
Example:
•
locking screens. Make it part of professional culture, not a
personal matter. Make it absolutely clear that it is part of
professional behavior and not in any way connected to personal
issues. Once users realize that locking the screen has nothing
to do with trust among colleagues, it will work.
Overall: security must be designed with usability in mind, and vice versa.
If we start introducing security features only after the usability
design has been established, it will cause problems.
Also, all security policies should take human psychology into
account.
Questions?
Why Johnny Can’t Encrypt:
A Usability Evaluation of PGP 5.0
Alma Whitten
J. D. Tygar
1999
"and the user test
demonstrated that when our test participants were given
90 minutes in which to sign and encrypt a message
using PGP 5.0, the majority of them were unable to do
so successfully."
specific definition of
usability for security
Three of the twelve test participants (P4, P9, and P11)
accidentally emailed the secret to the team members
without encryption.
Definition: Security software is usable if the
people who are expected to use it:
1. are reliably made aware of the security
tasks they need to perform;
2. are able to figure out how to successfully
perform those tasks;
3. don’t make dangerous errors; and
4. are sufficiently comfortable with the
interface to continue using it.
Passwords:
easily guessed, difficult to remember
default passwords
Usually, passwords that are easy to remember are easy to
guess.
People have a poor perception of what constitutes a
"password that is difficult to guess". For example, told not to
use names, a user will use Barbara1.
People use foreign words (an American would not expect an
attacker to try a Japanese word).
Or: an organization circulated a memo explaining what a good
password is, with samples. Attackers used the samples.
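Why "Barbara1"-style choices fall quickly can be shown with a minimal sketch: an attacker mutates a small dictionary of names and common words instead of brute-forcing the full character space. The mutation rules and wordlist below are illustrative assumptions, not a real cracking tool.

```python
# Dictionary attack sketch: generate likely variants of common words.
def candidate_passwords(words):
    for w in words:
        yield w
        yield w.capitalize()
        # Typical "make it a valid password" mutations users apply:
        for suffix in ("1", "123", "2011", "!"):
            yield w.capitalize() + suffix

names = ["barbara", "john", "sakura"]  # includes a "foreign" word
guesses = list(candidate_passwords(names))
print("Barbara1" in guesses)  # True: trivially guessed
print("sakura" in guesses)    # True: foreign words are in wordlists too
```

A few mutation rules over a modest wordlist cover a large share of real user choices, which is why "do not use names" advice alone does not help.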
Identification/Authentication
1. The unmotivated user: security is usually a
secondary goal.
2. The abstraction property:
security policies are abstract rules, unintuitive to many users.
3. The lack of feedback property
The need to prevent dangerous errors makes it
imperative to provide good feedback to the user,
but providing good feedback for security
management is a difficult problem. The state of a
security configuration is usually complex, and
attempts to summarize it are not adequate.