Invisible Things Lab

Human Factor vs. Technology
Joanna Rutkowska
Invisible Things Lab
SecTOR Conference, Toronto, Canada, 21 November, 2007.
Basic Definitions…
Message of this talk
The human factor is not the weakest link in IT security
The technology factor is as weak as the human factor!
The human factor is used to describe:
User’s unawareness (“stupidity”)
Admin’s incompetence
NOT developer’s incompetence
NOT system designer’s incompetence
Security Consumers → “Human Factor”
Security Vendors → “Technology Factor”
Getting Into System
Exploiting User’s Unawareness/Incompetence
Social engineering
Bad configuration
Exploiting Technological Weakness
Software flaw (e.g. buffer overflow)
Protocol weakness (e.g. MitM)
Usual Goal: arbitrary code execution on target system
After Getting In…
“Break and Escape”
E.g. website defacement, file deletion
Introduce damage, do not compromise!
“Steal and Escape”
Steal confidential files, database records, etc.
Do not compromise the system – escape after the data theft!
Problems:
encrypted data,
passwords – only hashes stored
“Install Some Malware”
Compromise the system for full control!
Prevention Approaches…
Prevention Approaches
Signature-based
User’s education
AI-based (anomaly detection)
Host IPSes
OS hardening (anti-exploitation)
Host IPSes
Least privilege design
Code verification
Signature-based approaches
Protect against “user’s stupidity” by blacklisting known
attack patterns – e.g. certain “phishing mails”
Protect against technological weaknesses by having a signature for an exploit (majority) or a generic signature for an attack (minority, unfortunately) – see the sketch after this list
No protection against unknown (targeted) attacks!
All major A/V vendors have been alerting about an increasing number of targeted attacks since 2006
targeted → we usually don’t have a signature
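As a rough illustration of what signature matching means, here is a minimal C sketch that scans a file for one hard-coded byte pattern. The pattern, the reporting and the single-pattern design are hypothetical simplifications; a real A/V engine matches huge signature databases and adds unpacking and emulation on top of this.

/* Minimal signature-scan sketch: looks for one hard-coded byte pattern
 * in a file. The pattern and file are hypothetical examples; a real
 * scanner matches thousands of signatures and handles packed files. */
#include <stdio.h>
#include <string.h>

static const unsigned char signature[] = { 0x4D, 0x5A, 0x90, 0x00 }; /* example pattern */

int scan_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    unsigned char buf[4096];
    size_t n, hits = 0;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i + sizeof signature <= n; i++)
            if (memcmp(buf + i, signature, sizeof signature) == 0)
                hits++;
        /* NOTE: a real scanner also handles matches spanning buffer boundaries */
    }
    fclose(f);
    return (int)hits;
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    int hits = scan_file(argv[1]);
    printf("%d signature hit(s) in %s\n", hits, argv[1]);
    return 0;
}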
User’s education
Increase awareness among users and the competence of system administrators
Should eliminate most of the social engineering based
attacks, e.g. sending a malware via email
Cannot protect against attacks exploiting flaws in software, i.e. exploits
“Keeping your A/V up to date” does not address the
problem of targeted attacks
AI (anomaly based)
Using “Artificial Intelligence” (heuristics) to detect
“abnormal” patterns of:
… behavior (e.g. iexplore.exe starting cmd.exe – see the toy sketch after this list)
… network traffic (e.g. suspicious connections)
Problems:
No guarantee to detect anything!
False positives!
Do you think “AI” can solve problems better than “HI” (Human Intelligence)? ;)
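To make the behavioral example above concrete, here is a toy C sketch of anomaly detection as an allowlist of parent/child process pairs. The allowlist, the sample events and the alerting are all hypothetical; a real host IPS intercepts process creation in the OS rather than reading it from a hard-coded array.

/* Toy behavioral-anomaly check: flag parent->child process creations
 * that are not on a known-good allowlist. */
#include <stdio.h>
#include <string.h>

struct pair { const char *parent; const char *child; };

static const struct pair allowlist[] = {
    { "explorer.exe", "iexplore.exe" },
    { "explorer.exe", "winword.exe" },
    { "services.exe", "svchost.exe" },
};

static int is_allowed(const char *parent, const char *child)
{
    for (size_t i = 0; i < sizeof allowlist / sizeof allowlist[0]; i++)
        if (!strcmp(allowlist[i].parent, parent) && !strcmp(allowlist[i].child, child))
            return 1;
    return 0;
}

int main(void)
{
    /* Sample "observed" events, including the suspicious one from the slide. */
    static const struct pair observed[] = {
        { "explorer.exe", "iexplore.exe" },
        { "iexplore.exe", "cmd.exe" },      /* anomalous: browser spawning a shell */
    };

    for (size_t i = 0; i < sizeof observed / sizeof observed[0]; i++)
        if (!is_allowed(observed[i].parent, observed[i].child))
            printf("ALERT: %s started %s (not on allowlist)\n",
                   observed[i].parent, observed[i].child);
    return 0;
}

The weaknesses named on the slide show up even in this toy: an incomplete allowlist yields false positives, and an attacker who stays inside the allowed pairs is never flagged.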
Anti Exploitation
Make the exploitation process (very) hard! (a minimal overflow sketch follows this list)
Stack Protection
Stack Guard for UNIX-like systems (1998)
Microsoft /GS stack protection (2003)
Address Space Layout Randomization (ASLR)
PaX project for Linux (2001)
Vista ASLR (Microsoft, 2007)
Non-Executable pages
PaX project for Linux (2000)
OpenBSD’s W^X (2003)
Windows NX (2005-06)
Other technologies
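To show what the stack-protection item targets, here is a deliberately trivial C sketch of the kind of stack buffer overflow that StackGuard, /GS and similar canary-based protections catch. The buffer size and input are arbitrary examples; the compiler flag mentioned in the comment (gcc’s -fstack-protector-all) is one common way to enable a canary, not a statement about how any particular vendor builds its code.

/* Classic stack overflow that canary-based protections target.
 * Built with a stack protector (e.g. gcc -fstack-protector-all), the
 * corrupted canary aborts the process instead of letting a smashed
 * return address be used; built without it, behavior is undefined. */
#include <string.h>
#include <stdio.h>

void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no bounds check: writes past buf for long input */
    printf("copied: %s\n", buf);
}

int main(void)
{
    /* 64 'A's comfortably overflow the 16-byte buffer and clobber the
     * saved data that follows it on a typical stack layout. */
    char evil[65];
    memset(evil, 'A', 64);
    evil[64] = '\0';
    vulnerable(evil);
    return 0;
}

NX and ASLR attack the same bug class from different angles: NX makes the injected payload non-executable, and ASLR makes its address hard to guess.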
Least Privilege and Privilege Separation
Limit scope of an attack by limiting the rights/privileges of
the components exposed to the attack (e.g. processes)
Least Privilege Principle: every process (or other entity)
has the minimal set of rights necessary to do its job
How many people work using the Administrator’s account?
Privilege Separation
Different programs have different, non-overlapping competences
Also: some more complex programs are split into more than one process (a privilege-drop sketch follows this list)
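A minimal POSIX sketch of the mechanical core of least privilege: do the privileged setup, then permanently drop to an unprivileged account before touching untrusted input. The UID/GID value 1001 is a hypothetical dedicated account; real privilege-separated designs (OpenSSH is the classic example) additionally split the work across multiple processes.

/* Minimal privilege-drop sketch: start privileged, perform the one
 * privileged operation, then drop to an unprivileged UID/GID.
 * UID/GID 1001 is a hypothetical example. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    uid_t unpriv_uid = 1001;   /* hypothetical dedicated account */
    gid_t unpriv_gid = 1001;

    /* ... privileged setup would happen here (e.g. binding a low port) ... */

    /* Drop group first, then user; check both, since a failed drop that
     * goes unnoticed is worse than exiting. */
    if (setgid(unpriv_gid) != 0 || setuid(unpriv_uid) != 0) {
        perror("failed to drop privileges");
        exit(EXIT_FAILURE);
    }

    /* From here on, an exploited bug only yields the unprivileged account. */
    printf("running as uid=%d gid=%d\n", (int)getuid(), (int)getgid());
    return 0;
}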
Example: Vista’s User Account Control
Attempt to force people to adhere to the Least Privilege
Principle
All user’s processes run by default with restricted privs,
When the user wants to perform an operation which requires more privileges, a popup appears asking for credentials
Goal: if a restricted process gets exploited, the attacker does not automatically get administrator’s rights!
Many implementation problems though:
February 2007: Microsoft announced that UAC is not… a
security feature!
Example: Privilege Separation
Different accounts for different tasks, e.g.:
joanna – main account used to log in
joanna.web – used to run Firefox
joanna.email – used to run Thunderbird
joanna.sensitive – access to /projects directory, run
password manager and another instance of web browser
for banking.
Easy to implement on Linux or even on Vista!
In Vista we rely on User Interface Privilege Isolation (UIPI)
Problems with priv-separation
If an attacker exploits a bug in the kernel or in one of the kernel drivers (e.g. a graphics card driver)…
… then she has full control over the system and can
bypass all the protection offered by the OS!
This is a common problem of all general purpose OSes
based on monolithic kernel – e.g. Linux, Windows.
Drivers are the weakest point in OS security!
Hundreds of 3rd party drivers,
All run with kernel privileges!
We will get back to this later…
Avoiding Bugs and Code Verification
Developer education
e.g. Microsoft and Secure Development Lifecycle (SDL)
Fuzzing
Generate random “situations” and see when the software crashes… Currently the favorite bug hunter’s technique… (a toy sketch follows this list)
Code auditing
Very expensive – requires experienced experts,
Few automatic tools exist to support the process.
Formal verification methods
Manual methods only for very small projects (a few thousand lines of code)
No mature automatic tools yet (still 5-10 years?)
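A toy C sketch of the mutation-based fuzzing idea mentioned above: take a valid input, flip random bytes, and feed it to the code under test. The parse() function here is a stand-in invented for the example; a real fuzzer drives the actual file or protocol parser in a separate process and watches for crashes rather than return codes.

/* Toy mutation fuzzer: randomly corrupt a seed input and feed it to a
 * (stand-in) parser. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Stand-in for the code under test, e.g. a cursor/icon parser. */
static int parse(const unsigned char *data, size_t len)
{
    if (len < 4 || memcmp(data, "RIFF", 4) != 0)
        return -1;                  /* malformed header */
    return 0;                       /* "parsed" successfully */
}

int main(void)
{
    unsigned char seed[64] = "RIFF....ACONanih....";  /* valid-looking seed input */
    unsigned char buf[sizeof seed];
    int rejected = 0;

    srand((unsigned)time(NULL));

    for (int i = 0; i < 1000; i++) {
        memcpy(buf, seed, sizeof seed);
        for (int j = 0; j < 4; j++)                   /* flip a few random bytes */
            buf[rand() % sizeof buf] = (unsigned char)rand();

        if (parse(buf, sizeof buf) != 0)
            rejected++;
        /* a real fuzzer would run the real target in a child process,
         * watch for crashes, and save/minimize the crashing inputs */
    }
    printf("%d of 1000 mutated inputs were rejected by the parser\n", rejected);
    return 0;
}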
How Prevention Fails In Practice…
Example: the ANI bug
ANI bug (MS07-017, April 2007)
“This vulnerability can be exploited by a malicious web
page or HTML email message and results in remote
code execution with the privileges of the logged-in user.
The vulnerable code is present in all versions of
Windows up to and including Windows Vista. All
applications that use the standard Windows API for
loading cursors and icons are affected. This includes
Windows Explorer, Internet Explorer, Mozilla Firefox,
Outlook and others.”
Source: Determina Security, http://www.determina.com/
ANI Bug vs. Vista
Code Review and Testing Process?
MS admitted their fuzzers were not tuned up to catch this
bug in their code…
Anti-Exploitation technologies?
GS stack protection failed, because the compiler’s “heuristics” decided not to apply it to the buggy function!
NX usually fails, because IE and explorer have DEP
disabled by default!
ASLR could be bypassed due to implementation
weaknesses!
ANI Bug vs. Vista UAC?
UAC allows running IE in so-called Protected Mode (PM)
However:
PM is not designed to protect the user’s information!
It only protects against modification of the user’s data!
Also, MS announced that UAC/Protected Mode cannot be treated as a security boundary!
i.e. expect that it will be easy to break out from Protected
Mode…
ANI Bug vs. educated user?
To exploit this bug it’s enough to lure a user into browsing a compromised page (or opening an email)…
No special action from the user is required!
The exploit can be very reliable – even an experienced user might not realize that he or she has just been attacked!
ANI vs. A/V
The attack was discovered in December 2006
Information about it was published in April 2007
What if it was discovered by a “black hat” even earlier?
Do you really believe that there was only 1 person on the
planet capable of discovering it?
Why would A/V block/detect such an attack when the
information about it was not public?
Going further…
So, now we see that technology cannot protect even a smart user from being exploited…
We saw an attack scenario where an exploit bypasses various anti-exploitation techniques and eventually gets admin access to the system…
The next goal is usually to install some rootkit
in other words, to get into the kernel…
But we have Kernel Protection on Vista!
Digital Driver Signing…
“Digital signatures for kernel-mode software are an
important way to ensure security on computer systems.”
“Windows Vista relies on digital signatures on kernel
mode code to increase the safety and stability of the
Microsoft Windows platform”
“Even users with administrator privileges cannot load
unsigned kernel-mode code on x64-based systems.”
Quotes from the official Microsoft documentation:
Digital Signatures for Kernel Modules on Systems Running Windows Vista,
http://www.microsoft.com/whdc/system/platform/64bit/kmsigning.mspx
Example: Vista Kernel Protection Bypassing
Presented by Invisible Things Lab at Black Hat in August 2007
Exploiting bugs in 3rd party kernel drivers, e.g.:
ATI Catalyst driver
NVIDIA nTune driver
It’s not important whether the buggy driver is present on
the target system – a rootkit might always bring it there!
There are hundreds of vendors providing kernel drivers
for Windows…
All those drivers share the same address space with the
kernel…
Buggy Drivers: Solution?
Today we do not have tools to automatically analyze
binary code for the presence of bugs
Binary Code Validation/Verification
There are only some heuristics which produce too many false positives and also miss more subtle bugs
There are some efforts for validation of C programs
e.g. ASTREE (http://www.astree.ens.fr/)
Still very limited – e.g. assumes no dynamic memory
allocation in the input program
Effective binary code verification is a very distant future
Buggy Drivers: Solutions?
Drivers in ring 1 (address space shared among drivers)
Not a good solution today (lack of IOMMU)
Update October 2007: New Intel processors have VT-d
(IOMMU)!
Drivers in usermode
Drivers execute in their own address spaces in ring3
Very good isolation of faulty/buggy drivers from the kernel
Examples:
MINIX3, supports all drivers, but still without IOMMU
Vista UMDF, supports only drivers for a small subset of devices (PDAs, USB sticks). Most drivers cannot be written using UMDF though.
Message
I believe it is not possible to implement effective kernel protection on general-purpose OSes that use a monolithic kernel!
Establishing a 3rd-party driver verification authority might raise the bar, but will not solve the problem
Move on towards a microkernel-based architecture!
Moral
Today’s prevention technology does not always work…
In how many cases does it work vs. fail?
How secure is our system?
In how many cases does our prevention fail?
This is a meaningless question!
If you know that a certain type of attack is practically possible, then the system is simply insecure!
“System is not compromised with probability = 98%”?!
“The cat is alive with probability of 50%”?!
What does it mean?
Detection for the Rescue!
Detection
Detection is used to verify that prevention works
Detection cannot replace prevention
E.g. data theft – even if we detect it, we cannot make the attacker “forget” the data she has stolen!
Detection
Host-Based
Tries to find out whether the current OS and applications have been compromised or not
A/V products
Network Based
Tries to detect attacks by analyzing network traffic
E.g. detect known exploit, or suspicious connections
Network IDS
Sometimes combined with a firewall – IPS systems
Stealth Malware
rootkits, backdoors, keyloggers, etc…
stealth is a key feature!
stealth – means that legitimate processes (A/V) can’t see it
stealth – means that the administrator can’t see it (admin tools)
stealth – means that we may never know whether we’re infected or not!
Paradox…
If a stealth malware does its job well…
…then we cannot detect it…
…so how can we know that we are infected?
How do we know that we were infected?
We count on a bug in the malware! We hope that the
author forgot about something!
We use hacks to detect some known stealth malware
(e.g. hidden processes).
We need to change this!
We need a systematic way to check system integrity! (a hedged sketch follows this list)
We need a solution which would allow us to detect malware which is not buggy!
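One direction for that “systematic way” is periodic integrity measurement: hash known-good system files (or memory regions) and compare against a trusted baseline. The C sketch below uses a simple FNV-1a hash over a file purely for brevity; a real verifier would use a cryptographic hash (e.g. SHA-256), protect the baseline from tampering, and, as the following slides argue, would also need a reliable way to read system memory in the first place.

/* Integrity-check sketch: compute a hash of a file and compare it with a
 * previously recorded baseline. FNV-1a is used only for brevity. */
#include <stdio.h>
#include <stdint.h>

static uint64_t fnv1a_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;

    uint64_t h = 1469598103934665603ULL;      /* FNV offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;                 /* FNV prime */
    }
    fclose(f);
    return h;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <expected-hash-hex>\n", argv[0]);
        return 1;
    }
    unsigned long long expected_ull = 0;
    sscanf(argv[2], "%llx", &expected_ull);
    uint64_t expected = (uint64_t)expected_ull;

    uint64_t actual = fnv1a_file(argv[1]);
    if (actual == expected)
        printf("%s: OK (%016llx)\n", argv[1], (unsigned long long)actual);
    else
        printf("%s: MODIFIED (expected %016llx, got %016llx)\n",
               argv[1], (unsigned long long)expected, (unsigned long long)actual);
    return actual == expected ? 0 : 2;
}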
Detection
Enumerate Badness → Ensure Goodness!
State of Detection
Current detection products cannot deal well with targeted stealth malware,
We need a systematic way of checking for system compromises, but…
unfortunately current OSes are too complex!
1. We can’t reliably read system memory
2. We can’t protect the detector from tampering
3. We don’t know which places to check!
Prevention vs. Detection
Prevention is not perfect as we saw...
... detection is very immature
OS complexity is the main problem when verifying system
integrity
Human Factor vs. Technology
“User stupidity” is only part of the problem (a small part)
Many modern attacks do not require the user to do anything “stupid” or suspicious (e.g. WiFi driver exploitation)
There is no technology on the market that offers
unbreakable prevention
Even competent admins cannot do much about it
Current technology does not even allow for detecting much of the modern stealth malware!
Security-conscious users cannot find out whether their systems have been compromised – they can only count on the attacker’s mistakes!
Final Message
Human Factor is a weak link in computer security,
But the technology is also flawed!
We should work on improving the technology just as we
work on educating users…
Unfortunately, the challenges here are much bigger, mostly due to the complexity of current OSes.
As a savvy user, I would like to have technology that would protect me!
I don’t have it today! Not even effective detection!
Cooperation from OS vendors required!
Invisible Things Lab
Focus on Operating System Security
In contrast to application security and network security
Targeting 3 groups of customers
Vendors – assessing their products, advising
Corporate Customers (security consumers) – unbiased
advice about which technology to deploy
Law enforcement/forensic investigators – educating about
current threats (e.g. stealth malware)
Thank You
Joanna Rutkowska, Invisible Things Lab
joanna@invisiblethingslab.com