Chapter 8
Authorization
Access control matrix
Multilevel Security
Multilateral security
Covert channel
Inference control
CAPTCHA
Firewalls
IDS
Authentication vs Authorization

Authentication: Who goes there?
  Restrictions on who (or what) can access the system
Authorization: Are you allowed to do that?
  Restrictions on actions of authenticated users
Authorization is a form of access control
Authorization enforced by
  Access Control Lists
  Capabilities
Chapter 8 Authorization
2
Access Control Basic Concept



An access control system regulates the
operations that can be executed on data and
resources to be protected
Its goal is to control operations executed by
subjects in order to prevent actions that
could damage data and resources
Access control is typically provided as part
of the operating system and of the database
management system (DBMS)
Chapter 8 Authorization
3
Access Control Basic Concept
[Diagram: Subject --(access request)--> Reference monitor --> Object]

The very nature of access control suggests that there is an active subject requiring access to a passive object to perform some specific access operation.
A reference monitor grants or denies access.
This fundamental and simple notion of access control is due to Lampson.
Chapter 8 Authorization
4
Access Control Basic Concept
[Diagram: the Subject issues an access request to the Reference monitor, which consults the Access Control Policies and Access Permissions before granting or denying access to the Object]
Chapter 8 Authorization
5
Access control matrix
Chapter 8 Authorization
6
Lampson’s Access Control Matrix


Subjects (users) index the rows
Objects (resources) index the columns
                      OS     Accounting   Accounting   Insurance   Payroll
                             program      data         data        data
  Bob                 rx     rx           r            ---         ---
  Alice               rx     rx           r            rw          rw
  Sam                 rwx    rwx          r            rw          rw
  Accounting program  rx     rx           rw           rw          rw
Chapter 8 Authorization
7
Are You Allowed to Do That?


Access control matrix has all relevant info
But how to manage a large access control (AC) matrix?
  Could be 1000's of users, 1000's of resources
  Then the AC matrix has 1,000,000's of entries
  Need to check this matrix before access to any resource is allowed: hopelessly inefficient
To obtain acceptable performance, split the AC matrix into manageable pieces
  Two ways: by column or by row
Chapter 8 Authorization
8
Access Control Lists (ACLs)


ACL: store access control matrix by column
Example: ACL for insurance data (the Insurance data column of the matrix below)
                      OS     Accounting   Accounting   Insurance   Payroll
                             program      data         data        data
  Bob                 rx     rx           r            ---         ---
  Alice               rx     rx           r            rw          rw
  Sam                 rwx    rwx          r            rw          rw
  Accounting program  rx     rx           rw           rw          rw
ACL(insurance data)
= {(Bob,---), (Alice,rw), (Sam,rw), (Acc prog, rw)}
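To make the column-wise storage concrete, here is a minimal sketch (not from the textbook) of an ACL-backed reference monitor in Python; the subject and object names mirror the matrix above, while the function and variable names are illustrative.

```python
# Illustrative sketch: ACLs stored per object, mirroring the "insurance data"
# column of the access control matrix above.
acl = {
    "insurance_data": {"Bob": "", "Alice": "rw", "Sam": "rw", "acct_prog": "rw"},
}

def access_allowed(subject: str, obj: str, right: str) -> bool:
    """Reference monitor: consult the object's ACL (a column of the matrix)."""
    return right in acl.get(obj, {}).get(subject, "")

print(access_allowed("Alice", "insurance_data", "w"))  # True
print(access_allowed("Bob", "insurance_data", "r"))    # False
```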
Chapter 8 Authorization
9
Capabilities (or C-Lists)


Store access control matrix by row
Example: Capability list for Alice (the Alice row of the matrix below)
                      OS     Accounting   Accounting   Insurance   Payroll
                             program      data         data        data
  Bob                 rx     rx           r            ---         ---
  Alice               rx     rx           r            rw          rw
  Sam                 rwx    rwx          r            rw          rw
  Accounting program  rx     rx           rw           rw          rw
C-list(Alice)
= {(OS,rw), (Acct prog,rw), (Acct data,r),
(Insur data,rw), (payroll data, rw)}
Chapter 8 Authorization
10
ACLs vs Capabilities
[Figure: Alice, Bob, and Fred linked to file1, file2, and file3 with their access rights, drawn once as Access Control Lists (arrows from files to users) and once as Capabilities (arrows from users to files)]


Note that arrows point in opposite directions!
With ACLs, still need to associate users to files
Chapter 8 Authorization
11
Confused Deputy - 1/4

Two resources
  Compiler and BILL file (billing info)
Access control matrix

              Compiler   BILL
    Alice     x          ---
    Compiler  rx         rw

  Compiler can write file BILL
  Alice can invoke the compiler with a debug filename
  Alice is not allowed to write to BILL
Chapter 8 Authorization
12
Confused Deputy - 2/4
[Diagram: Alice invokes the Compiler, giving BILL as the debug filename; the Compiler writes to the BILL file]

Compiler is a deputy acting on behalf of Alice
Compiler is confused
  Alice is not allowed to write BILL
  Compiler has confused its rights with Alice's
Chapter 8 Authorization
13
Confused Deputy - 3/4

Compiler acting for Alice is confused
  Compiler is acting based on its own privileges, when it should be acting based on Alice's privileges
  There has been a separation of authority from the purpose for which it is used
With ACLs, it is difficult to avoid this problem
  ACLs are not easily delegated
ACL(insurance data)
= {(Bob,---), (Alice,rw), (Sam,rw), (Acc prog, rw)}
Chapter 8 Authorization
14
Confused Deputy - 4/4

With capabilities, it is easier to prevent the problem
  Must maintain the association between authority and its intended purpose
  Capabilities make it easy to delegate authority (see the sketch below)
C-list(Alice)
= {(OS,rw), (Acct prog,rw), (Acct data,r),
(Insur data,rw), (payroll data, rw)}
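A toy sketch of how capabilities sidestep the confused deputy: the deputy exercises only the capabilities the caller hands it. The subject, object, and function names below are illustrative, not from any particular system.

```python
# Toy sketch: a capability is an unforgeable (object -> rights) token held by a subject.
# Alice invokes the compiler and passes only HER capabilities, so the compiler
# cannot accidentally exercise its own right to write BILL on her behalf.
alice_caps = {"debug_log": "rw"}                 # Alice holds no capability for BILL
compiler_caps = {"BILL": "rw", "debug_log": "rw"}

def compile_with_output(output_file: str, caller_caps: dict) -> str:
    # The deputy writes using the *caller's* capability, not its own.
    if "w" in caller_caps.get(output_file, ""):
        return f"wrote debug output to {output_file}"
    return f"denied: caller has no write capability for {output_file}"

print(compile_with_output("debug_log", alice_caps))  # allowed
print(compile_with_output("BILL", alice_caps))       # denied -- confused deputy avoided
```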
Chapter 8 Authorization
15
ACLs vs Capabilities

ACLs
  Good when users manage their own files
  Protection is data-oriented
  Easy to change rights to a resource
Capabilities
  Easy to delegate
  Easy to add/delete users
  Easier to avoid the confused deputy
  More difficult to implement
  Capabilities loved by academics
    Capability Myths Demolished
Chapter 8 Authorization
16
Multilevel Security (MLS)
Models
Chapter 8 Authorization
17
Classifications and Clearances



Classifications apply to objects
Clearances apply to subjects
US Department of Defense uses 4
levels of classifications/clearances
TOP SECRET
SECRET
CONFIDENTIAL
UNCLASSIFIED
Chapter 8 Authorization
18
Clearances and Classification



To obtain a SECRET clearance requires a routine background check
A TOP SECRET clearance requires an extensive background check
Practical classification problems
  Proper classification is not always clear
  Level of granularity at which to apply classifications
  Aggregation is the flip side of granularity
    Ex: each paragraph may be UNCLASSIFIED, but the whole doc TOP SECRET
Chapter 8 Authorization
19
Subjects and Objects

Let O be an object, S a subject




O has a classification
S has a clearance
Security level denoted L(O) and L(S)
For DoD levels, we have
TOP SECRET > SECRET >
CONFIDENTIAL > UNCLASSIFIED
Chapter 8 Authorization
20
Multilevel Security (MLS)



MLS needed when subjects/objects at
different levels use same system
MLS is a form of Access Control
Military/government interest in MLS for
many decades



Lots of funded research into MLS
Strengths and weaknesses of MLS relatively
well understood (theoretical and practical)
Many possible uses of MLS outside military
Chapter 8 Authorization
21
MLS Applications


Mainly used for Classified gov/mil info
But Business area also used it





Network firewall is another application area


Senior management only – Top Secret
All management - Secret
Everyone in company - Confidential
General public - Unclassified
Keep intruders at low level to limit damage
Confidential medical info, databases, etc.
Chapter 8 Authorization
22
MLS Security Models

Models are descriptive, not prescriptive
  High-level description, not an algorithm
  MLS models explain what needs to be done
  Models do not tell you how to implement it
There are many MLS models
We'll discuss the simplest MLS model
  Other models are more realistic
  Other models are also more complex, more difficult to enforce, harder to verify, etc.
Chapter 8 Authorization
23
Bell-LaPadula


BLP security model designed to express
essential requirements for MLS
BLP deals with confidentiality


To prevent unauthorized reading
Recall that O is an object, S a subject



Object O has a classification
Subject S has a clearance
Security level denoted L(O) and L(S)
Chapter 8 Authorization
24
Bell-LaPadula

BLP consists of
  Simple Security Condition: S can read O if and only if L(O) ≤ L(S)
  *-Property (Star Property): S can write O if and only if L(S) ≤ L(O)

No read up, no write down (see the sketch below)
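Since the two BLP rules are just comparisons of security levels, they can be sketched in a few lines of Python; the numeric encoding of the DoD levels and the helper names are assumptions for illustration, not any standard API.

```python
# Minimal BLP sketch: higher number = higher level (illustrative encoding).
LEVEL = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple Security Condition: read allowed iff L(O) <= L(S)  (no read up)
    return LEVEL[object_level] <= LEVEL[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-Property: write allowed iff L(S) <= L(O)  (no write down)
    return LEVEL[subject_level] <= LEVEL[object_level]

print(can_read("SECRET", "TOP SECRET"))   # False: no read up
print(can_write("TOP SECRET", "SECRET"))  # False: no write down
```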
Chapter 8 Authorization
25
McLean’s Criticisms of BLP


McLean: BLP is "so trivial that it is hard to imagine a realistic security model for which it does not hold"
To poke holes in BLP, McLean defined "System Z", which allows an administrator to reclassify an object and then "write down"
  Violates the spirit of BLP, but not expressly forbidden in the statement of BLP
  Raises fundamental questions about the nature of (and limits of) modeling
Chapter 8 Authorization
26
B and LP’s Response

BLP enhanced with a tranquility property
  Strong tranquility property: security labels never change
  Weak tranquility property: security labels can only change if the change does not violate the "established security policy"
Security labels can be changed
  For example, DoD regularly declassifies documents, which is impossible under the strong tranquility property
Chapter 8 Authorization
27
B and LP’s Response

Another example: we often want to enforce "least privilege"
  Give users the lowest privilege needed for their current work
  Then upgrade privilege as needed (and as allowed by policy)
  This is known as the high water mark principle
Strong tranquility is impractical in the real world
Weak tranquility allows
  for least privilege (high water mark),
  but the property is vague
Chapter 8 Authorization
28
Least privilege


from Wikipedia
The principle of least privilege requires that, in a particular abstraction layer of a computing environment, every module must be able to access only those resources that are necessary to its legitimate purpose.
The principle of least privilege is widely recognized as an important design consideration in enhancing the protection of data and functionality from faults (fault tolerance) and malicious behaviour (computer security).
Chapter 8 Authorization
29
High water mark


from Wikipedia
Under high-water mark, any object less than the user's security level can be opened, but the object is relabeled to reflect the highest security level currently open.
If user A is writing a level III document and checks the unclassified dictionary, the dictionary becomes level III. Then, when user B is writing a level II report and checks the spelling of a word, the dictionary becomes level II. If user C is assigned to assemble the daily intelligence briefing at level I, referencing the dictionary makes the dictionary level I.
Chapter 8 Authorization
30
BLP: The Bottom Line



BLP is simple, but probably too simple
BLP is one of the few security models
that can be used to prove things about
systems
BLP has inspired other security models



Most other models try to be more realistic
Other security models are more complex
Other models difficult to analyze and/or
apply in practice
Chapter 8 Authorization
31
Biba’s Model

BLP is for confidentiality, Biba is for integrity
  Biba is (in a sense) the dual of BLP
Integrity model
  Biba is to prevent unauthorized writing
  Suppose you trust the integrity of O1 but not the integrity of O2
  If object O includes O1 and O2, then you cannot trust the integrity of O
Integrity level of O is the minimum of the integrity of any object contained in O
Low water mark principle for integrity
Chapter 8 Authorization
32
Biba


Let I(O) be the integrity of object O and I(S) the integrity of subject S
Biba can be stated as
  Write Access Rule: S can write O iff I(O) ≤ I(S)
    (if S writes O, the integrity of O ≤ that of S)
  Biba's Model: S can read O iff I(S) ≤ I(O)
    (if S reads O, the integrity of S ≤ that of O)
Often, Biba's read rule is replaced with the
  Low Water Mark Policy: If S reads O, then I(S) = min(I(S), I(O))  (see the sketch below)
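A matching sketch for Biba with the low water mark policy; the integer integrity levels and function names are illustrative assumptions.

```python
# Sketch of Biba with the low water mark policy: higher number = higher integrity.
def can_write(subject_int: int, object_int: int) -> bool:
    # Write Access Rule: S can write O iff I(O) <= I(S)
    return object_int <= subject_int

def read_low_water_mark(subject_int: int, object_int: int) -> int:
    # Low Water Mark Policy: the read is allowed, but the subject's
    # integrity drops to min(I(S), I(O)).
    return min(subject_int, object_int)

s = 3                                  # high-integrity subject
s = read_low_water_mark(s, 1)          # reads a low-integrity object
print(s)                               # 1 -- subject integrity lowered
print(can_write(s, 3))                 # False: can no longer write high-integrity objects
```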
Chapter 8 Authorization
33
BLP vs Biba
[Figure: side-by-side scales from low to high. BLP orders objects by confidentiality level L(O); Biba orders objects by integrity level I(O)]
Chapter 8 Authorization
34
Multilateral Security
(Compartments)
Chapter 8 Authorization
35
Multilateral Security



Multilevel Security (MLS) enforces
access control up and down
Simple hierarchy of security labels may
not be flexible enough
Multilateral security enforces access
control across by creating compartments
Chapter 8 Authorization
36
Multilateral Security

Suppose TOP SECRET divided into



TOP SECRET {CAT} and
TOP SECRET {DOG}
Both are TOP SECRET but information
flow restricted across the TOP SECRET
level
Chapter 8 Authorization
37
Multilateral Security

Why compartments?
  Why not create a new classification level?
  Since we may not want either of
    TOP SECRET {CAT} ≤ TOP SECRET {DOG}
    TOP SECRET {DOG} ≤ TOP SECRET {CAT}
Compartments allow us to enforce the need-to-know principle
  Regardless of your clearance, you only have access to info that you need to know
Chapter 8 Authorization
38
Multilateral Security

Arrows indicate the "≥" relationship

[Diagram: lattice of labels in which TOP SECRET {CAT, DOG} dominates TOP SECRET {CAT} and TOP SECRET {DOG}, which in turn dominate TOP SECRET; similarly, SECRET {CAT, DOG} dominates SECRET {CAT} and SECRET {DOG}, which dominate SECRET]

Not all classifications are comparable, e.g.,
TOP SECRET {CAT} vs SECRET {CAT, DOG} (see the sketch below)
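One way to express the "≥" (dominance) relation over levels plus compartments is sketched below; representing a label as a (level, compartment set) pair is an assumed encoding for illustration, not something the slides prescribe.

```python
# Sketch: a label is (level, set of compartments); label A dominates label B
# when A's level is at least B's and A's compartments include all of B's.
LEVEL = {"SECRET": 2, "TOP SECRET": 3}

def dominates(a, b) -> bool:
    (la, ca), (lb, cb) = a, b
    return LEVEL[la] >= LEVEL[lb] and ca >= cb   # set >= means superset

ts_cat_dog = ("TOP SECRET", {"CAT", "DOG"})
ts_cat     = ("TOP SECRET", {"CAT"})
s_cat_dog  = ("SECRET", {"CAT", "DOG"})

print(dominates(ts_cat_dog, ts_cat))  # True
print(dominates(ts_cat, s_cat_dog))   # False -- not comparable
print(dominates(s_cat_dog, ts_cat))   # False -- not comparable in either direction
```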
Chapter 8 Authorization
39
MLS vs Multilateral Security


MLS can be used without multilateral
security or vice-versa
But, the two usually used together
Chapter 8 Authorization
40
MLS vs Multilateral Security

Example
  MLS was mandated for protecting medical records of the British Medical Association (BMA)
  AIDS was TOP SECRET, prescriptions SECRET
  What is the classification of an AIDS drug?
  Everything tends toward TOP SECRET
  Defeats the purpose of the system!
Multilateral security was used instead
  AIDS prescriptions are compartmented from other prescriptions
Chapter 8 Authorization
41
Covert Channel
Chapter 8 Authorization
42
Covert Channel




MLS designed to restrict legitimate
channels of communication
May be other ways for information to flow
For example, resources shared at different
levels may signal information
Covert channel: “communication path not
intended as such by system’s designers”
Chapter 8 Authorization
43
Covert Channel Example




Alice has TOP SECRET clearance, Bob
has CONFIDENTIAL clearance
Suppose the file space is shared by all users
Alice creates file FileXYzW to signal “1”
to Bob, and removes file to signal “0”
Once each minute Bob lists the files



If file FileXYzW does not exist, Alice sent 0
If file FileXYzW exists, Alice sent 1
Alice can leak TOP SECRET info to Bob!
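A sketch of the shared-file-space channel just described; the file name and one-minute interval follow the example above, while the function names and the idea of running Alice and Bob as two separate processes are illustrative assumptions.

```python
import os
import time

SIGNAL_FILE = "FileXYzW"   # shared file space visible to both Alice and Bob

def alice_send(bits: str, interval: float = 60.0) -> None:
    """Alice (TOP SECRET) signals one bit per interval: file exists = 1, absent = 0."""
    for b in bits:
        if b == "1":
            open(SIGNAL_FILE, "w").close()
        elif os.path.exists(SIGNAL_FILE):
            os.remove(SIGNAL_FILE)
        time.sleep(interval)

def bob_receive(n_bits: int, interval: float = 60.0) -> str:
    """Bob (CONFIDENTIAL) lists the file space once per interval."""
    received = ""
    for _ in range(n_bits):
        received += "1" if os.path.exists(SIGNAL_FILE) else "0"
        time.sleep(interval)
    return received
```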
Chapter 8 Authorization
44
Covert Channel Example
[Timeline: once per interval, Alice creates or deletes the file (create, delete, create, leave in place, delete) while Bob checks for it; Bob reads the data bits 1, 0, 1, 1, 0]
Chapter 8 Authorization
45
Covert Channel

Other examples of covert channels



Print queue, ACK messages
Network traffic, etc., etc., etc.
When does a covert channel exist?
Have to satisfy the 3 conditions
1. Sender and receiver have a shared resource
2. Sender able to vary property of resource that
receiver can observe
3. Communication between sender and receiver
can be synchronized
Chapter 8 Authorization
46
Covert Channel


Covert channels exist almost everywhere
Easy to eliminate covert channels…???


Provided you eliminate all shared resources and
all communication
Virtually impossible to eliminate all covert
channels in any useful system


DoD guidelines: goal is to reduce covert
channel capacity to no more than 1 bit/second
Implication is that DoD has given up trying to
eliminate covert channels!
Chapter 8 Authorization
47
Covert Channel

Consider 100MB TOP SECRET file



Plaintext version stored in TOP SECRET place
Encrypted with AES using 256-bit key,
ciphertext stored in UNCLASSIFIED location
Suppose we reduce covert channel
capacity to 1 bit per second


It would take more than 25 years to leak entire
document thru a covert channel
But it would take less than 5 minutes to leak
256-bit AES key thru covert channel!
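A quick back-of-the-envelope check of these two claims (the 100 MB size, 256-bit key, and 1 bit/second rate are from the slide; the rest is arithmetic):

```python
# Leak times over a 1 bit/second covert channel.
file_bits = 100 * 2**20 * 8                 # 100 MB file in bits
key_bits = 256                              # AES key in bits

print(file_bits / (60 * 60 * 24 * 365))     # ~26.6 years to leak the whole file
print(key_bits / 60)                        # ~4.3 minutes to leak the key
```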
Chapter 8 Authorization
48
Real-World Covert Channel


Hide data in TCP header “reserved” field
Or use covert_TCP tool to hide data in


Sequence number
ACK number
Chapter 8 Authorization
49
Real-World Covert Channel



Hide data in TCP sequence numbers
Tool: covert_TCP
Sequence number X contains covert info
[Diagram: A (covert_TCP sender) sends a SYN with spoofed source address C and sequence number X to B (an innocent server); B replies to C (the covert_TCP receiver) with an ACK or RST whose acknowledgment number is derived from X, delivering the covert data]
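A rough sketch of the encoding idea: pack one byte of covert data into the 32-bit sequence number of a spoofed SYN and recover it from the acknowledgment number the bounce server returns. The exact field layout used by the real covert_TCP tool may differ; this is only an illustration of the channel.

```python
# Sketch of a sequence-number covert channel, in the spirit of covert_TCP.
# For a SYN with sequence number SEQ, the reply carries ACK = SEQ + 1.
def encode_seq(ch: str) -> int:
    return ord(ch) << 24                      # hide the byte in the high-order byte

def decode_ack(ack: int) -> str:
    return chr(((ack - 1) >> 24) & 0xFF)      # undo the +1 and extract the byte

seqs = [encode_seq(c) for c in "HI"]          # what A puts in the spoofed SYNs
acks = [s + 1 for s in seqs]                  # what the innocent server sends to C
print("".join(decode_ack(a) for a in acks))   # HI
```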
Chapter 8 Authorization
50
Inference Control
Chapter 8 Authorization
51
Inference Control Example

Suppose we query a database





Question: What is average salary of female
CS professors at A University?
Answer: $95,000
Question: How many female CS professors at
A University?
Answer: 1
Specific information has leaked from
responses to general questions!
Chapter 8 Authorization
52
Inference Control and Research



For example, medical records are
private but valuable for research
How to make info available for research
and protect privacy?
How to allow access to such data
without leaking specific information?
Chapter 8 Authorization
53
Naïve Inference Control



Remove names from medical records?
Still may be easy to get specific info
from such “anonymous” data
Removing names is not enough


As seen in previous example
What more can be done?
Chapter 8 Authorization
54
Less-naïve Inference Control

Query set size control
  Don't return an answer if the set size is too small
N-respondent, k% dominance rule
  Do not release a statistic if k% or more of the result is contributed by N or fewer subjects
  Example: average salary in Bill Gates' neighborhood – a reasonable setting of N and k makes it difficult to find Bill Gates' salary
  Used by the US Census Bureau
Randomization
  Add a small amount of random noise to the data
Many other methods – none satisfactory (see the sketch below)
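A small sketch of two of these defenses over a toy salary table; the table contents, threshold, and noise magnitude are illustrative assumptions, not from the slides.

```python
import random

# Toy database of salaries.
salaries = {"prof_a": 95000, "prof_b": 101000, "prof_c": 88000}

def avg_salary(names, min_set_size=3):
    """Query set size control: refuse to answer if too few records match."""
    matching = [salaries[n] for n in names if n in salaries]
    if len(matching) < min_set_size:
        return None                      # answer withheld
    return sum(matching) / len(matching)

def noisy_avg_salary(names, noise=2000):
    """Randomization: add a small amount of random noise to the statistic."""
    matching = [salaries[n] for n in names if n in salaries]
    return sum(matching) / len(matching) + random.uniform(-noise, noise)

print(avg_salary(["prof_a"]))                          # None -- query set too small
print(avg_salary(["prof_a", "prof_b", "prof_c"]))      # 94666.67
```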
Chapter 8 Authorization
55
Inference Control: The Bottom Line


Robust inference control may be impossible
Is weak inference control better than no
inference control?


Yes: Reduces amount of information that leaks
and thereby limits the damage
Is weak crypto better than no crypto?


Probably not: Encryption indicates important
data
May be easier to filter encrypted data
Chapter 8 Authorization
56
CAPTCHA
Chapter 8 Authorization
57
Turing Test





Proposed by Alan Turing in 1950
Human asks questions to one other human
and one computer (without seeing either)
If human questioner cannot distinguish the
human from the computer responder, the
computer passes the test
The gold standard in artificial intelligence
No computer can pass this today
Chapter 8 Authorization
58
CAPTCHA





CAPTCHA  Completely Automated
Public Turing test to tell Computers and
Humans Apart
Automated  test is generated and
scored by a computer program
Public  program and data are public
Turing test to tell…  humans can pass
the test, but machines cannot pass the
test
Like an inverse Turing test (sort of…)
Chapter 8 Authorization
59
CAPTCHA Paradox





“…CAPTCHA is a program that can generate
and grade tests that it itself cannot pass…”
“…much like some professors(???)…”
Paradox  computer creates and scores
test that it cannot pass!
CAPTCHA used to restrict access to
resources to humans (no computers)
CAPTCHA useful for access control
Chapter 8 Authorization
60
CAPTCHA Uses?


Original motivation: automated "bots" stuffed the ballot box in a vote for the best CS school
Free email services – spammers used bots to sign up for 1000's of email accounts
  CAPTCHA employed so only humans can get accounts
Sites that do not want to be automatically indexed by search engines
  The HTML tag only says "please do not index me"
  A CAPTCHA would force human intervention
Chapter 8 Authorization
61
CAPTCHA: Rules of the Game


Must be easy for most humans to pass
Must be difficult or impossible for
machines to pass



Even with access to CAPTCHA software
The only unknown is some random number
Desirable to have different CAPTCHAs in
case some person cannot pass one type

Blind person could not pass visual test, etc.
Chapter 8 Authorization
62
Do CAPTCHAs Exist?



Test: Find 2 words in the following (distorted-word CAPTCHA image omitted)
Easy for most humans
Difficult for computers (OCR problem)
Chapter 8 Authorization
63
CAPTCHAs

Current types of CAPTCHAs
  Visual
    Like the previous example
    Many others
  Audio
    Distorted words or music
  No text-based CAPTCHAs
    Maybe this is not possible…
Chapter 8 Authorization
64
CAPTCHA’s and AI

Computer recognition of distorted text is a challenging AI problem
  But humans can solve this problem
Same is true of distorted sound
  Humans are also good at solving this
Hackers who break such a CAPTCHA have solved a hard AI problem
  Putting hackers' effort to good use!
Chapter 8 Authorization
65
Firewalls
Chapter 8 Authorization
66
Firewalls
[Diagram: Internet – Firewall – Internal network]

The firewall must determine what to let in to the internal network and/or what to let out
Access control for the network
Chapter 8 Authorization
67
Firewall as Secretary


A firewall is like a secretary
To meet with an executive
  First contact the secretary
  The secretary decides if the meeting is reasonable
  The secretary filters out many requests
You want to meet the chair of the CS department?
  The secretary does some filtering
You want to meet the President of the US?
  The secretary does lots of filtering!
Chapter 8 Authorization
68
Firewall Terminology


No standard terminology
Types of firewalls




Packet filter – works at the network layer
Stateful packet filter – transport layer
Application proxy – application layer
Personal firewall – for a single user, home network, etc.
Chapter 8 Authorization
69
Packet Filter


Operates at the network layer
Can filter based on
  Source IP address
  Destination IP address
  Source port
  Destination port
  Flag bits (SYN, ACK, etc.)
  Egress or ingress
[Diagram: protocol stack (Application, Transport, Network, Link, Physical); the packet filter operates at the network layer]
Chapter 8 Authorization
70
Packet Filter

Advantage
  Speed
Disadvantages
  No concept of state – each packet is treated independently of all others
  Cannot see TCP connections
  Blind to application data – so viruses carried in application data can pass through
[Diagram: protocol stack; the packet filter sees only up to the network layer]
Chapter 8 Authorization
71
Packet Filter

Configured via Access Control Lists (ACLs)


Different meaning of ACL than previously
  Action   Source IP   Dest IP   Source Port   Dest Port   Protocol   Flag Bits
  Allow    Inside      Outside   Any           80          HTTP       Any
  Allow    Outside     Inside    80            >1023       HTTP       ACK
  Deny     All         All       All           All         All        All
Intention is to restrict incoming packets
to Web responses
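A sketch of how first-match evaluation of such an ACL might look; the rule tuple layout and helper names are illustrative and not any real firewall's configuration syntax.

```python
# Sketch of first-match evaluation of the ACL above (field values are illustrative).
RULES = [
    # (action, src, dst, src_port, dst_port, protocol, flags)
    ("allow", "inside",  "outside", "any", 80,      "HTTP", "any"),
    ("allow", "outside", "inside",  80,    ">1023", "HTTP", "ACK"),
    ("deny",  "all",     "all",     "all", "all",   "all",  "all"),
]

def match(field, value):
    if field in ("any", "all"):
        return True
    if isinstance(field, str) and field.startswith(">"):
        return value > int(field[1:])
    return field == value

def filter_packet(src, dst, sport, dport, proto, flags):
    for action, r_src, r_dst, r_sp, r_dp, r_proto, r_flags in RULES:
        if all(match(f, v) for f, v in [(r_src, src), (r_dst, dst), (r_sp, sport),
                                        (r_dp, dport), (r_proto, proto), (r_flags, flags)]):
            return action
    return "deny"

print(filter_packet("outside", "inside", 80, 54321, "HTTP", "ACK"))  # allow (web response)
print(filter_packet("outside", "inside", 80, 54321, "HTTP", "SYN"))  # deny
```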
Chapter 8 Authorization
72
TCP ACK Scan



Attacker sends packet with ACK bit set,
without prior 3-way handshake
Violates TCP/IP protocol
ACK packet pass thru packet filter firewall



Appears to be part of an ongoing connection
RST sent by recipient of such packet
Attacker scans for open ports thru firewall
(Go to the next slide)
Chapter 8 Authorization
73
TCP Three Way Handshake
SYN request
SYN-ACK
ACK (and data)

SYN: synchronization requested
SYN-ACK: acknowledge SYN request
ACK: acknowledge msg 2 and send data

Then TCP “connection” established



Connection terminated by FIN or RST packet
(Back to the page)
Appendix
74
TCP ACK Scan
[Diagram: Trudy sends ACK packets for destination ports 1207, 1208, and 1209 through the packet filter; only the packet to port 1209 reaches the internal network, which replies with an RST]

Attacker knows port 1209 is open through the firewall
A stateful packet filter can prevent this (next)
  Since ACK scans are not part of established connections
Chapter 8 Authorization
75
Stateful Packet Filter




Adds state to the packet filter
Operates at the transport layer
Remembers TCP connections and flag bits
Can even remember UDP packets (e.g., DNS requests)
[Diagram: protocol stack; the stateful packet filter operates at the transport layer]
Chapter 8 Authorization
76
Stateful Packet Filter

Advantages



Can do everything a packet filter
can do plus...
Keep track of ongoing
connections
Disadvantages


Cannot see application data
Slower than packet filtering
Chapter 8 Authorization
77
Application Proxy



A proxy is something that acts on
your behalf
Application proxy looks at incoming
application data
Verifies that data is safe before
letting it in
Chapter 8 Authorization
78
Application Proxy

Advantages
  Complete view of connections and application data
  Filters bad data at the application layer (viruses, Word macros)
Disadvantage
  Speed
[Diagram: protocol stack; the application proxy operates at the application layer]
Chapter 8 Authorization
79
Application Proxy




Creates a new packet before sending it through to the internal network
Attacker must talk to the proxy and convince it to forward the message
Proxy has a complete view of the connection
Prevents some attacks a stateful packet filter cannot – see the next slides
Chapter 8 Authorization
80
Firewalk

Tool to scan for open ports through a firewall
The purpose is the same as a TCP ACK scan
  Known: the IP address of the firewall, the IP address of one system inside the firewall, and the number of hops to the firewall
  Set the TTL to 1 more than the number of hops to the firewall, and set the destination port to N
  If the firewall does not let data through on port N, there is no response
  If the firewall allows data on port N through, a "time exceeded" error message comes back

Chapter 8 Authorization
81
Firewalk and Proxy Firewall
[Diagram: Trudy sends packets with destination ports 12343, 12344, 12345 and TTL=4 through several routers and the packet filter; a "time exceeded" message comes back for the port the firewall lets through]

This will not work through an application proxy
The proxy creates a new packet, destroying the old TTL (Time To Live), which is reset to a default value
Chapter 8 Authorization
82
Personal Firewall


To protect one user or home network
Can use any of the methods



Packet filter
Stateful packet filter
Application proxy
Chapter 8 Authorization
83
Firewalls and Defense in Depth
Example security architecture

[Diagram: Internet → packet filter → DMZ (FTP server, WWW server, DNS server) → application proxy → intranet with personal firewalls]

Chapter 8 Authorization
84
Intrusion Detection
Systems
Chapter 8 Authorization
85
Intrusion Prevention


Want to keep bad guys out
Intrusion prevention is a traditional
focus of computer security




Authentication is to prevent intrusions
Firewalls a form of intrusion prevention
Virus defenses also intrusion prevention
Comparable to locking the door on your
car
Chapter 8 Authorization
86
Intrusion Detection


In spite of intrusion prevention, bad guys
will sometime get into system
Intrusion detection systems (IDS)


Detect attacks before, during, and after they
hace occured
Basic appproach is to look for “unusual” activity



Automated IDS developed out of log file analysis
IDS is currently a very hot research topic
How to respond when intrusion detected?

We don’t deal with this topic here
Chapter 8 Authorization
87
Intrusion Detection

Who is likely intruder?



May be outsider who got thru firewall
May be evil insider
What do intruders do?





Launch well-known attacks – maybe beginners
Launch variations on well-known attacks
Launch new or little-known attacks
Use a system to attack other systems
Etc.
Chapter 8 Authorization
88
Intrusion Detection

Intrusion detection approaches
  Signature (pattern)-based IDS
  Anomaly-based IDS
Intrusion detection architectures
  Host-based IDS
  Network-based IDS
Most systems can be classified as above
  In spite of marketing claims to the contrary!
Chapter 8 Authorization
89
Host-based IDS

Monitor activities on hosts for
  Known attacks or
  Suspicious behavior
Designed to detect attacks such as
  Buffer overflow
  Escalation of privilege
Little or no view of network activities
Chapter 8 Authorization
90
Network-based IDS

Monitor activity on the network for
  Known attacks
  Suspicious network activity
Designed to detect attacks such as
  Denial of service
  Network probes
  Malformed packets, etc.
Can be some overlap with a firewall
Little or no view of host-based attacks
Can have both host-based and network-based IDS

Chapter 8 Authorization
91
Signature Detection


Signature Detection involves searching
network traffic for a set of pre-defined
attack patterns
Failed login attempts may indicate
password cracking attack


IDS could use the rule "N failed login attempts in M seconds" as a signature
If N or more failed login attempts occur within M seconds, the IDS warns of an attack (see the sketch below)
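A minimal sketch of this signature as code; the values of N and M and the helper names are illustrative assumptions.

```python
from collections import deque

# Sketch of the "N failed logins in M seconds" signature.
N, M = 5, 60          # warn on 5 or more failures within 60 seconds

def make_detector():
    failures = deque()                 # timestamps of recent failed logins
    def record_failure(t: float) -> bool:
        failures.append(t)
        while failures and t - failures[0] > M:
            failures.popleft()         # drop failures outside the window
        return len(failures) >= N      # True => raise an alert
    return record_failure

detect = make_detector()
alerts = [detect(t) for t in [0, 10, 20, 30, 40]]
print(alerts[-1])                      # True: 5 failures within 60 seconds
```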
Chapter 8 Authorization
92
Signature Detection

Suppose IDS warns whenever N or more
failed logins in M seconds




Must set some proper N and M, so that false
alarms not excessive
Can do this based on normal behavior
But if attacker knows the signature, he can try
N-1 logins every M seconds!
In this case, signature detection slows the
attacker, but might not stop him
Chapter 8 Authorization
93
Signature Detection



Many techniques used to make signature
detection more robust
Goal is usually to detect “almost signatures”
For example, if “about” N login attempts in
“about” M seconds


Warn of possible password cracking attempt
What are reasonable values for “about”?


Can use statistical analysis, heuristics, other
Must take care not to increase false alarm rate
Chapter 8 Authorization
94
Signature Detection

Advantages of signature detection





Simple
Efficient (if reasonable number of signatures)
Detect known attacks
Know which attack at time of detection
Disadvantages of signature detection




Signature files must be kept up to date
Number of signatures may become large
Can only detect known attacks
Variation on known attack may not be detected
Chapter 8 Authorization
95
Anomaly Detection


Anomaly detection systems look for
unusual or abnormal behavior
There are (at least) two challenges



What is normal for this system?
How “far” from normal is abnormal?
Statistics is obviously required here!


The mean defines normal
The variance indicates how far abnormal lives
from normal
Chapter 8 Authorization
96
What is Normal?

Consider the scatterplot below

[Scatterplot: points in the x-y plane]
  The white dot is "normal"
  Is the red dot normal?
  Is the green dot normal?
  How abnormal is the blue dot?

Statistics can be tricky!
Chapter 8 Authorization
97
How to Measure Normal?

How to measure normal?





Must measure during “representative”
behavior
Must not measure during an attack…
…or else attack will seem normal!
Normal is statistical mean
Must also compute variance to have any
reasonable chance of success
Chapter 8 Authorization
98
How to Measure Abnormal?

Abnormal is relative to some "normal"
  Abnormal indicates a possible attack
Statistical discrimination techniques:
  Bayesian statistics
  Linear discriminant analysis (LDA)
  Quadratic discriminant analysis (QDA)
  Neural nets, hidden Markov models, etc.
Fancy modeling techniques also used
  Modeling techniques from artificial intelligence
  Artificial immune system principles
  Many others!
Chapter 8 Authorization
99
How to Measure Abnormal?

These approaches are beyond the scope of this class
  Statistical discrimination techniques
  Fancy modeling techniques
Here, two simplified examples of anomaly detection will be considered
  The first example is simple but not realistic
  The second is slightly more realistic
Chapter 8 Authorization
100
Anomaly Detection (1)

Suppose we monitor the use of three commands:
open, read, close
Under normal use we observe that Alice uses
open, read, close, open, open, read, close, …
Of the six possible ordered pairs, four pairs are "normal" for Alice:
(open,read), (read,close), (close,open), (open,open)
The other two pairs are abnormal:
(read,open), (close,read)
Can we use this to identify unusual activity? (see the sketch below)
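A sketch of that check: count how many consecutive command pairs fall outside Alice's normal set. The example trace is illustrative.

```python
# Sketch of the pair-based anomaly check described above.
NORMAL_PAIRS = {("open", "read"), ("read", "close"),
                ("close", "open"), ("open", "open")}

def abnormal_ratio(trace):
    pairs = list(zip(trace, trace[1:]))                 # consecutive command pairs
    abnormal = sum(1 for p in pairs if p not in NORMAL_PAIRS)
    return abnormal / len(pairs)

trace = ["open", "read", "close", "read", "open", "read", "close"]
print(abnormal_ratio(trace))   # 2 of 6 pairs are abnormal -> 0.333...
```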
Chapter 8 Authorization
101
Anomaly Detection (1)


If the ratio of abnormal to normal pairs is
“too high”, warn of possible attack
Could improve this approach by


Also using expected frequency of each pair
Use more than two consecutive commands



Ex: (Open Read Close)
Include more commands/behavior in the model
More sophisticated statistical discrimination
Chapter 8 Authorization
102
Anomaly Detection (2)


For slightly realistic anomaly detection,
let’s focus on file access
Over time, Alice
has accessed file
Fn at rate Hn

Recently, Alice has
accessed file Fn at
rate An
H0
H1
H2
H3
A0
A1
A2
A3
.10
.40
.40
.10
.10
.40
.30
.20
Chapter 8 Authorization
103
Anomaly Detection (2)


Is this “normal” use?
We employ the statistic
S = (H0A0)2+(H1A1)2+…+(H3A3)2 = .02


And consider S < 0.1 to be normal, so this is
normal for this one statistic
Problem: How to account for use that
varies over time?
Chapter 8 Authorization
104
Anomaly Detection (2)

To allow "normal" to adapt to new use, we update the long-term averages as
  Hn = 0.2*An + 0.8*Hn
Then H0 and H1 are unchanged, but
  H2 = 0.2*0.3 + 0.8*0.4 = 0.38 and
  H3 = 0.2*0.2 + 0.8*0.1 = 0.12
And the long-term averages are updated to

     H0    H1    H2    H3
    .10   .40   .38   .12
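A sketch of the statistic S and the exponential update of the long-term averages, using the numbers from the slide; the function names are illustrative.

```python
# Sketch of the file-access statistic S and the update rule Hn = 0.2*An + 0.8*Hn.
def S(H, A):
    return sum((h - a) ** 2 for h, a in zip(H, A))

def update(H, A, alpha=0.2):
    return [alpha * a + (1 - alpha) * h for h, a in zip(H, A)]

H = [0.10, 0.40, 0.40, 0.10]          # long-term access rates
A = [0.10, 0.40, 0.30, 0.20]          # recently observed rates
print(round(S(H, A), 4))              # 0.02 -> below 0.1, considered normal
H = update(H, A)
print([round(h, 2) for h in H])       # [0.1, 0.4, 0.38, 0.12]
```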
Chapter 8 Authorization
105
Anomaly Detection (2)





The updated long-term averages are
     H0    H1    H2    H3
    .10   .40   .38   .12

New observed rates are
     A0    A1    A2    A3
    .10   .30   .30   .30

Is this normal use?
Compute S = (H0-A0)^2 + … + (H3-A3)^2 = 0.0488
Since S = 0.0488 < 0.1 we consider this normal
And we again update the long-term averages by Hn = 0.2*An + 0.8*Hn
Chapter 8 Authorization
106
Anomaly Detection (2)




The starting averages were
     H0    H1    H2    H3
    .10   .40   .40   .10

After 2 iterations, the averages are
     H0    H1    H2     H3
    .10   .38   .364   .156
The statistics slowly evolve to match behavior
This reduces false alarms and work for admin
But also opens an avenue for attack…


Suppose Trudy always wants to access F3
She can convince IDS this is normal for Alice!
Chapter 8 Authorization
107
Anomaly Detection (2)



To make this approach more robust, must
also incorporate the variance
Can also combine N statistics as, for
example,
T = (S1 + S2 + S3 + … + SN) / N
to obtain a more complete view of “normal”
Similar (but more sophisticated) approach is
used in IDS known as NIDES

NIDES includes anomaly and signature IDS
Chapter 8 Authorization
108
Anomaly Detection Issues

The system constantly evolves and so must the IDS
  Otherwise, false alarms would overwhelm the admin
  But evolving means Trudy can slowly convince the anomaly detector that an attack is normal
What does "abnormal" really mean?
  Only that there is possibly an attack
  May not say anything specific about the attack!
  How to respond to such vague information?
  Signature detection tells exactly which attack
Chapter 8 Authorization
109
Anomaly Detection

Advantages
  Chance of detecting unknown attacks
  May be more efficient (since no signatures)
Disadvantages
  Today, cannot be used alone
    Must be used with a signature detection system
  May be subject to attack
  Reliability is unclear
  Anomaly detection indicates something unusual
    But lacks specific info on the possible attack!
Chapter 8 Authorization
110
The Bottom Line




Anomaly-based IDS is active research topic
Many security professionals have very high
hopes for its ultimate success
Often cited as key future security technology
Hackers are not convinced!



Title of a talk at Defcon 11: “Why Anomaly-based
IDS is an Attacker’s Best Friend”
Anomaly detection is difficult and tricky
Is anomaly detection as hard as AI?
Chapter 8 Authorization
111
Access Control Summary

Authentication and authorization

Authentication – who goes there?
  Passwords – something you know
  Biometrics – something you are (or "you are your key")
Chapter 8 Authorization
112
Access Control Summary

Authorization - are you allowed to do that?








Access control matrix/ACLs/Capabilities
MLS/Multilateral security
BLP/Biba
Covert channel
Inference control
CAPTCHA
Firewalls
IDS
Chapter 8 Authorization
113