Malicious Packet Dropping:
How It Might Impact the TCP Performance
& How We Can Detect It
Xiao-Bing Zhang, Felix Wu, Zhi Fu, Tsung-Li Wu
Ericsson / UC Davis / NC State University / CCIT
http://www.cs.ucdavis.edu/~wu
wu@cs.ucdavis.edu
full paper:
http://www.cs.ucdavis.edu/publications/PDALong.ps
11/17/2000, IEEE ICNP'2000, Osaka, Japan
Outline
Packet Dropping
Anomaly Detection
Evaluation
Packet Dropping Attacks
Maliciously drop a small portion of packets
e.g., the first 20 packets in a connection
Selectively drop some important packets
e.g., retransmission packets, signaling packets in IP telephony
Degrade QoS
Difficult to detect
packet loss could be due to network congestion
Attack Types
Persistent
attack every connection between two TCP endpoints
Intermittent
attack only some of the connections
e.g., 1 of every 5 connections
Dropping Patterns
Periodical Packet Dropping (PerPD)
Retransmission Packet Dropping (RetPD)
Random Packet Dropping (RanPD)
Periodical Packet Dropping
Parameters (K, I, S)
- K, the total number of dropped packets in a connection
- I, the interval between two consecutive dropped packets
- S, the position of the first dropped packet
Example (5, 10, 4)
- 5 packets dropped in total
- 1 of every 10 packets
- starting from the 4th packet
- the 4th, 14th, 24th, 34th, and 44th packets will be dropped
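As a small illustration (my own code, not part of the paper), the victim positions of PerPD(K, I, S) can be enumerated directly:

```python
def perpd_victims(k, i, s):
    """Packet positions dropped by Periodical Packet Dropping (K, I, S):
    K packets in total, one every I packets, starting at position S."""
    return [s + n * i for n in range(k)]

# The example from this slide: PerPD(5, 10, 4)
print(perpd_victims(5, 10, 4))  # [4, 14, 24, 34, 44]
```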
Retransmission Packet Dropping
Parameters (K, S)
- K, the number of times the packet's retransmissions are dropped
- S, the position of the dropped packet
Example (5, 10)
- first, drop the 10th packet
- then, drop the retransmissions of the 10th packet 5 times
Random Packet Dropping
Parameters (K)
- K, the total number of packets to be dropped in a connection
Example (5)
- randomly drop 5 packets in a connection
Dropper Model
[Figure: dropper model. A connection is attacked with probability P%; the dropper applies one of the dropping patterns Per (K, I, S), Ret (K, S), or Ran (K).]
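To make the model concrete, below is a hypothetical sketch of such a dropper; the class name, the probabilistic treatment of "attack P% of connections", and the way retransmissions are recognised are my own assumptions, not the authors' Attack Agent:

```python
import random

class Dropper:
    """Sketch of the dropper model: a connection is attacked with
    probability p; the attack uses one pattern: 'per' (K, I, S),
    'ret' (K, S), or 'ran' (K)."""

    def __init__(self, p, pattern, k, i=None, s=None, total_pkts=1000):
        self.active = random.random() < p     # intermittent: attack only some connections
        self.pattern, self.k, self.i, self.s = pattern, k, i, s
        if pattern == 'per':
            self.victims = {s + n * i for n in range(k)}
        elif pattern == 'ran':
            self.victims = set(random.sample(range(1, total_pkts + 1), k))
        self.ret_dropped = 0                   # retransmissions dropped so far (RetPD)
        self.seen = set()                      # packet positions already seen once

    def should_drop(self, pos):
        """pos: position (or sequence identity) of the arriving data packet."""
        if not self.active:
            return False
        if self.pattern in ('per', 'ran'):
            return pos in self.victims
        # 'ret': drop the S-th packet, then its next K retransmissions
        if pos == self.s:
            if pos not in self.seen:
                self.seen.add(pos)
                return True                    # the original S-th packet
            if self.ret_dropped < self.k:
                self.ret_dropped += 1
                return True                    # one of its retransmissions
        return False
```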
How can this happen?
Unintentional:
misconfiguration
aggressive traffic control or management
Intentional:
compromised packet-forwarding engine
selectively flooded routers/switches
How to Carry Out Dropping Attacks
Compromise intermediate routers
- easy to manipulate the victim's traffic
- hard to detect
- difficult to carry out
Congest intermediate routers
- hard to manipulate the victim's traffic
- attracts more attention
- easy to carry out
Impacts of Packet Dropping
Delay
Response time
Quality
Bandwidth
Throughput
...
Experiment Setting
[Figure: the FTP client downloads xyz.zip from an FTP server across the Internet; an Attack Agent sits on a divert socket in the data path and intercepts the data packets.]
FTP client runs Linux 2.0.36 in the SHANG lab
4 FTP servers across the Internet
size of the downloaded file (xyz.zip) is 5.5 MB
Attack Agent
- runs on the same host as the FTP client
- acts as it would on a compromised router
FTP Servers and Clients
FTP client: SHANG
FTP servers: Heidelberg, NCU, SingNet, UIUC
FTP Servers

Name       | FTP Server            | IP Address      | Location
Heidelberg | ftp.uni-heidelberg.de | 129.206.100.134 | Europe
NCU        | ftp.ncu.edu.tw        | 140.115.1.71    | Asia
SingNet    | ftp.singnet.com.sg    | 165.21.5.14     | Asia
UIUC       | ftp.cso.uiuc.edu      | 128.174.5.14    | North America
Impacts of Packet Dropping on Session Delay
[Bar chart: session delay in seconds at Heidelberg, NCU, SingNet, and UIUC under Normal conditions, RanPD(7), PerPD(7, 4, 5), and RetPD(7, 5); labeled values range from about 23.6 s to 260.3 s.]
Compare Impacts of Dropping Patterns
[Plots: session delay vs. number of victim packets at Heidelberg, NCU, SingNet, and UIUC, with one curve each for PerPD (I=4, S=5), RanPD, and RetPD (S=5).]
Different K, I, S for PerPD
[Plots of session delay at Heidelberg, NCU, SingNet, and UIUC: (a) vs. number of victim packets K (I=4, S=5), (b) vs. dropping interval I (K=20, S=5), (c) vs. dropping start point S (K=20, I=50).]
On Interval
If the interval is extremely small (< 4), PerPD behaves much like RetPD.
If the interval is larger and the RTT is small, the session delay is smaller when the interval is also smaller (but not too small).
Compare Impacts of Dropping Patterns (cont.)
Periodical Packet Dropping
- session delay increases linearly as K increases
- packet loss is repaired by fast retransmit or timeout
Random Packet Dropping
- comparatively small damage, related to the RTT
- session delay increases linearly as K increases
- packet loss is usually repaired by fast retransmit
Retransmission Packet Dropping
- severe damage, related to the RTO
- session delay increases exponentially as K increases (see the note below)
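One rough way to see the exponential growth (my own back-of-the-envelope reasoning, assuming the sender falls back to timeout recovery and doubles its RTO after every lost retransmission, as standard TCP backoff does): with an initial timeout RTO_0, dropping K consecutive retransmissions of one segment stalls the connection for roughly

stall \approx \sum_{i=0}^{K-1} 2^{i} \cdot \mathrm{RTO}_0 = (2^{K} - 1) \cdot \mathrm{RTO}_0

so each additional dropped retransmission about doubles the added delay.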
The Plain DDoS Model (1999-2000)
[Diagram: attackers control masters, which command slaves at ISPs and .com hosts; the slaves flood the victim with packets whose source addresses are random and whose destination is the victim.]
Congestion Tools: Tribe Flood Network
Distributed Denial of Service (DDoS) attack tool
Master
- a host running an application called Client
- the Client initiates attacks by sending commands to Agents
Agent
- a host running a Daemon
- the Daemon receives and carries out commands issued by a Client
Attacks
- UDP flood, ICMP echo reply (ping) flood, SYN flood, and TARGA3
Congestion Experiment Setting
[Diagram: FTP data flows from the FTP server (fire) to the FTP client (redwing) across networks 152.1.75.0, 172.16.0.0, and 192.168.1.0; the TFN master (air) directs a UDP flood from the TFN agents toward the TFN target (light), congesting the router (bone) on the path.]
The networks are in the SHANG lab
All machines are PCs
bone, with a 500 MHz Intel Pentium CPU, acts as a router
Downloaded file size: 44 MB
Congestion Experiment Results
[Plots: number of lost packets over time (0-100 s) under four attack modes: flood 1 / stop 20, flood 1 / stop 5, flood 5 / stop 10, and flood 5 / stop 2.]
Congestion Experiment Results (cont.)

Attack mode (flood m, stop n) | Packet losses per connection | Session delay (s) | Damage
Normal                        | 0.9                          | 31.7              | -
Flood 1, stop 20              | 18.5                         | 40.5              | 27.8%
Flood 1, stop 5               | 57.4                         | 58.4              | 84.5%
Flood 5, stop 10              | 62.1                         | 67.3              | 112.6%
Flood 5, stop 2               | 124.4                        | 164.5             | 418.9%

damage = (delay_flood - delay_normal) / delay_normal

[Bar charts: number of lost packets and session delay (seconds) for Normal, F1,S20, F1,S5, F5,S10, and F5,S2.]
Intrusion Detection: TDSAM
TCP-Dropping Statistic Analysis Module (TDSAM)
runs on the protected asset, e.g., the FTP client
Expected Behavior
described in the long-term profile
e.g., the average session delay is 50 seconds
Observed Behavior
described in the short-term profile
e.g., the average session delay becomes 100 seconds
Intrusion Detection: TDSAM (cont.)
Statistic Measures
Position Measure: position of each packet reordering
Delay Measure: session delay
NPR Measure: number of packet reordering
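As an illustration of how such measures might be extracted from an observed arrival order (my own reconstruction; the function name and the exact reordering rule are assumptions, not the paper's definition):

```python
def reordering_stats(positions):
    """Per-connection reordering measures: the arrival indices at which a
    reordering is observed, and their count (NPR)."""
    max_seen = 0
    reorder_positions = []
    for i, p in enumerate(positions, start=1):
        if p < max_seen:                 # packet arrives after a later one
            reorder_positions.append(i)
        max_seen = max(max_seen, p)
    return reorder_positions, len(reorder_positions)

# e.g. an arrival order p1, p2, p3, p5, p4
print(reordering_stats([1, 2, 3, 5, 4]))   # ([5], 1)
```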
TDSAM Experiment Setting
[Figure: same setup as the earlier experiment; the FTP client (Linux 2.0.36) downloads xyz.zip (5.5 MB) from an FTP server across the Internet, with the Attack Agent on a divert socket in the data path. TDSAM observes the arriving data packets (e.g., the order p1, p2, p3, p5, p4) and counts reorderings against the running maximum.]
Long-term Profile
Category, C-Training
learn the aggregate distribution of a statistic measure
Q Statistics, Q-Training
learn how much deviation is considered normal
Threshold
Long-term Profile: C-Training
For each sample of the statistic measure, X:

(0, 50] | (50, 75] | (75, 90] | (90, +∞)
20%     | 30%      | 40%      | 10%

- k bins
- Expected Distribution p_1, p_2, ..., p_k, where \sum_{i=1}^{k} p_i = 1
- Training time: months
Long-term Profile: Q-Training (1)
For each sample of the statistic measure, X:

(0, 50] | (50, 75] | (75, 90] | (90, +∞)
20%     | 40%      | 20%      | 20%

- k bins; Y_i samples fall into the i-th bin
- N samples in total (\sum_{i=1}^{k} Y_i = N)
- Weighted Sum Scheme with the fading factor \alpha
Long-term Profile: Q-Training (2)
Deviation:

Q = \sum_{i=1}^{k} (Y_i - N p_i)^2 / (N p_i)

Example:

Q = (2 - 10 \cdot 0.2)^2 / (10 \cdot 0.2) + (4 - 10 \cdot 0.3)^2 / (10 \cdot 0.3) + (2 - 10 \cdot 0.4)^2 / (10 \cdot 0.4) + (2 - 10 \cdot 0.1)^2 / (10 \cdot 0.1) = 2.33

Qmax
the largest value among all Q values
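A minimal sketch of the Q computation (my own code, not the authors' implementation), reproducing the worked example above:

```python
def q_statistic(counts, expected_p):
    """Chi-square-like deviation between observed bin counts Y_i and the
    expected distribution p_i from the long-term profile."""
    n = sum(counts)
    return sum((y - n * p) ** 2 / (n * p) for y, p in zip(counts, expected_p))

# Example from this slide: 10 samples observed as (2, 4, 2, 2),
# expected distribution (0.2, 0.3, 0.4, 0.1)  ->  Q = 2.33
print(round(q_statistic([2, 4, 2, 2], [0.2, 0.3, 0.4, 0.1]), 2))
```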
Long-term Profile: Q-Training (3)
Q Distribution
[0, Qmax) is equally divided into 31 bins; the last bin is [Qmax, +∞)
distribute all Q values into the 32 bins
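A sketch of this binning step under the same assumptions (names are mine):

```python
def q_distribution(q_values, q_max, nbins=32):
    """Empirical Q distribution: [0, q_max) split into nbins-1 equal bins,
    plus one overflow bin [q_max, +inf)."""
    width = q_max / (nbins - 1)
    counts = [0] * nbins
    for q in q_values:
        idx = min(int(q / width), nbins - 1)
        counts[idx] += 1
    total = len(q_values)
    return [c / total for c in counts]   # probability per Q bin
```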
Threshold
Predefined alarm threshold
If Prob(Q > q) falls below the threshold, raise an alarm
[Figure: probability over the Q bins, with TH_yellow and TH_red threshold markers.]
Q-Distribution for Position Measure
[Plots: probability vs. Q bins (0-35) at Heidelberg, NCU, SingNet, and UIUC.]
Q-Distribution for Delay Measure
[Plots: probability vs. Q bins (0-35) at Heidelberg, NCU, SingNet, and UIUC.]
Detect Malicious Dropping
For each Observed Distribution
compare it to the Expected Distribution (calculate a Q value)
if the Q value falls into the alarm zone, raise an alarm
The short-term profile is updated using the Weighted Sum Scheme
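Putting the pieces together, a detection step could look like the sketch below; `q_statistic` and `q_distribution` refer to the earlier sketches, and the way the tail probability is approximated from the binned Q distribution is my assumption rather than the authors' exact rule:

```python
def tail_probability(q, q_dist, q_max):
    """Prob(Q > q) under the long-term Q distribution (32 bins over [0, q_max))."""
    width = q_max / (len(q_dist) - 1)
    idx = min(int(q / width), len(q_dist) - 1)
    return sum(q_dist[idx + 1:])          # probability mass above q's bin

def check_connection(observed_counts, expected_p, q_dist, q_max, threshold):
    """Raise an alarm if the observed short-term distribution deviates from
    the long-term profile more than normal training data did."""
    q = q_statistic(observed_counts, expected_p)
    return tail_probability(q, q_dist, q_max) < threshold   # True -> alarm
```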
Long-term Profile Update
Update when no attack occurs during a period of time
Update the Expected Distribution and the Q Distribution
weighted sum scheme
fading factor equals l
TDSAM Performance Analysis: Experiment Setting
[Figure: same setup as before; the FTP client (Linux 2.0.36) with TDSAM, a divert socket, and the Attack Agent downloads njcom210.zip (5.5 MB) from an FTP server across the Internet.]
Persistent attacks
- PerPD: (10, 4, 5), ... (100, 40, 5)
- RetPD: (5, 5)
- RanPD: (10), (40)
Intermittent attacks
- PerPD (10, 4, 5) with attack period 5 and 50
Example
Long-term profile
nbin = 5, bin width = 800
p1 = 0.194339, p2 = 0.200759, p3 = 0.197882, p4 = 0.204260, p5 = 0.202760
PerPD(20, 4, 5)
drops packets only within the first 85 packets
observed: p1 = 0.837264, p2 = 0.039390, p3 = 0.043192, p4 = 0.041045, p5 = 0.039109
Results: Position Measure

Position, nbin=5                         | Heidelberg (DR / MR) | NCU (DR / MR) | SingNet (DR / MR) | UIUC (DR / MR)
Normal*                                  | - / 4.0%             | - / 5.4%      | - / 3.5%          | - / 6.5%
PerPD (10, 4, 5)                         | 99.7% / 0.3%         | 100% / 0%     | 100% / 0.0%       | 100% / 0%
PerPD (20, 4, 5)                         | 100% / 0%            | 98.1% / 1.9%  | 99.2% / 0.8%      | 100% / 0%
PerPD (40, 4, 5)                         | 96.6% / 3.4%         | 100% / 0%     | 100% / 0%         | 98.5% / 1.5%
PerPD (20, 20, 5)                        | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
PerPD (20, 100, 5)                       | 98.9% / 1.1%         | 99.2% / 0.8%  | 99.6% / 0.4%      | 99.1% / 0.9%
PerPD (20, 200, 5)                       | 0% / 100%            | 76.5% / 23.5% | 1.5% / 98.5%      | 98.3% / 1.7%
PerPD (100, 40, 5)                       | 0.2% / 99.8%         | 0% / 100%     | 0% / 100%         | 100% / 0%
RetPD (5, 5)                             | 84.9% / 15.1%        | 81.1% / 18.9% | 94.3% / 5.7%      | 97.4% / 2.6%
RanPD 10                                 | 0% / 100%            | 42.3% / 57.7% | 0% / 100%         | 0% / 100%
RanPD 40                                 | 0% / 100%            | 0% / 100%     | 0% / 100%         | 0% / 100%
Intermittent PerPD (10, 4, 5), period 5  | 98.6% / 1.4%         | 100% / 0%     | 98.2% / 1.8%      | 100% / 0%
Intermittent PerPD (10, 4, 5), period 50 | 34.1% / 65.9%        | 11.8% / 88.2% | 89.4% / 10.6%     | 94.9% / 5.1%
Results: Delay Measure

Delay, nbin=3                            | Heidelberg (DR / MR) | NCU (DR / MR) | SingNet (DR / MR) | UIUC (DR / MR)
Normal*                                  | - / 1.6%             | - / 7.5%      | - / 2.1%          | - / 7.9%
PerPD (10, 4, 5)                         | 97.4% / 2.6%         | 95.2% / 4.8%  | 94.5% / 5.5%      | 99.2% / 0.8%
PerPD (20, 4, 5)                         | 99.2% / 0.8%         | 98.5% / 1.5%  | 100% / 0%         | 100% / 0%
PerPD (40, 4, 5)                         | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
PerPD (20, 20, 5)                        | 96.3% / 3.7%         | 100% / 0%     | 92.6% / 7.4%      | 98.9% / 1.1%
PerPD (20, 100, 5)                       | 100% / 0%            | 95.3% / 4.7%  | 98.7% / 1.3%      | 100% / 0%
PerPD (20, 200, 5)                       | 98.6% / 1.4%         | 99% / 1%      | 97.1% / 2.9%      | 100% / 0%
PerPD (100, 40, 5)                       | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
RetPD (5, 5)                             | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
RanPD 10                                 | 74.5% / 25.5%        | 26.8% / 73.2% | 67.9% / 32.1%     | 99.5% / 0.5%
RanPD 40                                 | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
Intermittent PerPD (10, 4, 5), period 5  | 25.6% / 74.4%        | 0% / 100%     | 0% / 100%         | 97.3% / 2.7%
Intermittent PerPD (10, 4, 5), period 50 | 0% / 100%            | 24.9% / 75.1% | 0% / 100%         | 3.7% / 96.3%
Results: NPR Measure

NPR, nbin=2                              | Heidelberg (DR / MR) | NCU (DR / MR) | SingNet (DR / MR) | UIUC (DR / MR)
Normal*                                  | - / 4.5%             | - / 5.8%      | - / 8.2%          | - / 2.9%
PerPD (10, 4, 5)                         | 0% / 100%            | 14.4% / 85.6% | 29.1% / 70.9%     | 100% / 0%
PerPD (20, 4, 5)                         | 83.1% / 16.9%        | 94.2% / 5.8%  | 95.2% / 4.8%      | 100% / 0%
PerPD (40, 4, 5)                         | 100% / 0%            | 97.4% / 2.6%  | 100% / 0%         | 100% / 0%
PerPD (20, 20, 5)                        | 91.6% / 8.4%         | 92% / 8%      | 93.5% / 6.5%      | 100% / 0%
PerPD (20, 100, 5)                       | 94.3% / 5.7%         | 92.2% / 7.8%  | 96.4% / 3.6%      | 100% / 0%
PerPD (20, 200, 5)                       | 0% / 100%            | 96.5% / 3.5%  | 94.8% / 5.2%      | 100% / 0%
PerPD (100, 40, 5)                       | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
RetPD (5, 5)                             | 0% / 100%            | 84.7% / 15.3% | 23.9% / 76.1%     | 46.5% / 53.5%
RanPD 10                                 | 0% / 100%            | 0% / 100%     | 100% / 0%         | 100% / 0%
RanPD 40                                 | 100% / 0%            | 100% / 0%     | 100% / 0%         | 100% / 0%
Intermittent PerPD (10, 4, 5), period 5  | 0% / 100%            | 0% / 100%     | 82.2% / 17.8%     | 100% / 0%
Intermittent PerPD (10, 4, 5), period 50 | 0% / 100%            | 1% / 99%      | 40% / 60%         | 64.8% / 35.2%
TDSAM Performance Analysis: Results (good or bad!!)
False Alarm Rate
- less than 10% in most cases; the highest is 17.4%
Detection Rate
- Position: good on RetPD and most PerPD attacks
  - at NCU, 98.7% for PerPD(20, 4, 5), but 0% for PerPD(100, 40, 5), in which the dropped packets are evenly distributed
- Delay: good on attacks that significantly change the session delay, e.g., RetPD and PerPD with a large value of K
  - at SingNet, 100% for RetPD(5, 5), but 67.9% for RanPD(10)
- NPR: good on attacks that drop many packets
  - at Heidelberg, 0% for RanPD(10), but 100% for RanPD(40)
TDSAM Performance Analysis: Results (cont.)
Good sites correspond to a high detection rate
- stable and small session delay or packet reordering
- e.g., using the Delay Measure for RanPD(10): UIUC (99.5%) > Heidelberg (74.5%) > SingNet (67.9%) > NCU (26.8%)
How to choose the value of nbin is site-specific
- e.g., using the Position Measure, the lowest false alarm rate occurs with nbin = 5 at Heidelberg (4.0%) and NCU (5.4%), nbin = 10 at UIUC (4.5%), and nbin = 20 at SingNet (1.6%)
Conclusion
TDSAM with a single measure
- is able to detect dropping attacks
- has weaknesses in identifying some malicious droppings
Combining the 3 measures
- works well on most of the attacks
- except for those causing very limited damage
  - RanPD with a small value of K
  - intermittent attacks with a large attack interval
Limitations...
Future….
Detect non-TCP packet dropping attacks
- choose appropriate statistic measures
Service Level Agreement monitoring
- build long-term profiles to statistically monitor the quality of service
- e.g., evaluate the DNS response time
Contributions
Packet Dropping Attacks
- studied how to carry out the attacks
- studied the impacts of dropping attacks
- implemented the Attack Agent
Intrusion Detection
- implemented TDSAM
- analyzed TDSAM's performance over the real Internet
Thanks
full paper:
http://www.cs.ucdavis.edu/publications/PDALong.ps
Any questions?
Weighted Sum Scheme
Problems of the Sliding Window Scheme
- keeps the most recent N audit records
- required resources and computing time are O(N)
Weighted Sum Scheme
- assume k bins; Y_i is the count of audit records falling into the i-th bin; N is the total number of audit records; \alpha is the fading factor
- when an audit record falls into bin i (event E_i occurs), update:

Y_i \leftarrow Y_i \cdot 2^{-\alpha} + 1
Y_j \leftarrow Y_j \cdot 2^{-\alpha}, \quad j \neq i
N \leftarrow \sum_{i=1}^{k} Y_i = N \cdot 2^{-\alpha} + 1
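A small sketch of this update, assuming the 2^{-alpha} decay reconstructed above (the function name is mine):

```python
def weighted_sum_update(counts, bin_idx, alpha):
    """Exponentially fade all bin counts and add the new audit record to
    bin `bin_idx`; returns the updated counts and the new effective total N."""
    decay = 2 ** (-alpha)
    counts = [y * decay for y in counts]
    counts[bin_idx] += 1
    return counts, sum(counts)
```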