A Discussion on the Certification of Network Security Products
劉榮太
BroadWeb Corporation (威播科技)
tie@broadweb.com

About Me
- 劉榮太
- Ph.D., Department of Computer Science, National Tsing Hua University
- CEO/CTO, BroadWeb Corporation
- R&D of high-speed network equipment: bandwidth management, content filtering, intrusion detection and prevention
- 2007 activities:
  - Author of the "Intrusion Detection/Prevention System Security Reference Guide" for the National Information and Communication Security Technology Service and Protection Management Program
  - Speaker at RSA 2007; topic: "IPS TEST"

Outline
- Independent Test Labs for Network Security Devices
  - NSS Labs
  - ICSA Labs
- IPS Testing
  - In both labs
  - In 公安三所 (the Third Research Institute of China's Ministry of Public Security)
- Conclusions

Independent Test Labs for Network Security Devices

Independent Test Labs
- Why do we need them?
- Why did we pick NSS Labs and ICSA Labs?

NSS Labs
- http://www.nss.co.uk/
- Founded in 1991 by Bob Walder
- Offices in Austin, TX and Carlsbad, CA
- Established the de facto standards for testing IDS and IPS, and introduced the industry's first reports to validate PCI DSS functionality

NSS Labs – Test Services

Category                         "Covered" Products
Anti-Malware                     10 products including Trend Micro, McAfee, and others
Browser Security                 Safari, Chrome, IE8, Firefox, Opera
Attack Mitigator                 Top Layer, Radware, and V-Secure
Intrusion Prevention (IPS)       BroadWeb, IBM, TippingPoint, …
Secure Content Appliances        Panda GateDefender 8200
Unified Threat Management (UTM)  IBM, Fortinet, TippingPoint
Web App Firewall (WAF)           Assurent AssureLogic
PCI Suitability                  eEye, IBM, ThirdBridge
10 Gbps IPS                      IBM, McAfee

ICSA Labs
- http://www.icsalabs.com/
- Founded in 1989
- Lineage: National Computer Security Association (NCSA) → Cybertrust → Verizon Business
- Pioneer in Anti-Virus, Firewall, VPN, and WAF testing

ICSA Labs – Test Services

Category                   "Covered" Products
Anti-Spam                  Fortinet, IBM, Kaspersky, Symantec
Anti-Spyware               Eset, McAfee, Microsoft, Symantec
Anti-Virus                 Symantec, McAfee, Trend Micro, … around 110 products
IPSec                      3Com, ALU, D-Link, Fortinet, Juniper, McAfee, WatchGuard, ZyXEL
Network Firewall           3Com, ALU, D-Link, Fortinet, Check Point, ZyXEL, SonicWall, … around 28 products
Network IPS                Fortinet, Sourcefire, Stonesoft (IBM, TippingPoint, and BroadWeb)
PC Firewalls               CA, Microsoft
SSL-TLS                    AEP, Array Networks, F5, Juniper, O2, SonicWall
Web Application Firewalls  Barracuda, F5, Breach, Citrix, Imperva
FIPS 140-2, FIPS 201, SCAP

How to Test a NIPS
- How are so-called independent third-party tests performed?

NSS Labs - http://www.nss.co.uk

Section 1 – Detection Engine
- Aim
  - Verify that the sensor is capable of detecting and blocking exploits accurately, while remaining resistant to false positives
  - No background network load
  - Signatures acquired from the vendor
  - All available attack signatures enabled
- Two tests
  - Test 1.1 Attack Recognition
  - Test 1.2 Resistance to False Positives

1.1 Attack Recognition
- Common exploits, port scans, and DoS attempts
- Over 100 exploits run with no load on the network and no IP fragmentation
- Test 1.1.1 ~ Test 1.1.14
  - Backdoors (standard ports and random ports), DNS, DoS, false negatives, Finger, FTP, HTTP, ICMP (including unsolicited ICMP responses), reconnaissance, RPC, SSH, Telnet, database, mail

1.1 Attack Recognition (Cont.)
- Report
  - Attacks should be identified by their assigned CVE reference
  - Whether the device blocks only the attack packet or the entire "suspicious" TCP session ("noisy" blocking)
- This test is repeated twice
  - In monitor mode (blocking disabled)
  - With blocking enabled
- Reported metrics: "Default" ARRD/ARRB and "Custom" ARRD/ARRB, i.e. the attack recognition rates for the detection run and the blocking run, under the default policy and under a custom, tuned policy (see the tallying sketch below)
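
A minimal sketch of how such per-exploit results might be tallied into the two rates. The record fields and the ARRD/ARRB tallying are illustrative assumptions, not NSS Labs' actual report schema:

    # Tally attack-recognition rates from per-exploit results.
    # Field names are illustrative, not NSS Labs' schema.
    from dataclasses import dataclass

    @dataclass
    class ExploitResult:
        cve: str        # CVE reference the alert should carry
        detected: bool  # alert raised in monitor mode (blocking disabled)
        blocked: bool   # exploit stopped with blocking enabled

    def recognition_rates(results: list[ExploitResult]) -> tuple[float, float]:
        """Return (ARRD, ARRB): the share of exploits detected resp. blocked."""
        n = len(results)
        return (sum(r.detected for r in results) / n,
                sum(r.blocked for r in results) / n)

    sample = [ExploitResult("CVE-2004-1315", detected=True, blocked=True),
              ExploitResult("CVE-2003-0245", detected=True, blocked=False)]
    print(recognition_rates(sample))  # (1.0, 0.5)
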
Example of Test 1.1

1.2 Resistance To False Positives
- Feed the sensor with
  - Normal traffic carrying "suspicious" content, together with several "neutered" exploits
- "PASS" if the sensor neither raises an alert nor blocks the traffic
- "FAIL" if an alert is raised
- Test 1.2.1 False positives (an illustrative benign-but-suspicious request follows)
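
As a purely illustrative example (not part of the NSS suite), a "neutered" trigger can be as simple as a harmless HTTP request whose payload merely mentions attack-like strings; the host and path below are hypothetical, and a well-tuned sensor should stay quiet:

    # Benign traffic that superficially resembles an attack: suspicious
    # keywords, but no working exploit. Host and path are hypothetical.
    # A sound IPS should neither alert on nor block this request.
    import http.client

    SUSPICIOUS_BODY = "a forum post discussing cmd.exe and /etc/passwd in plain prose"

    conn = http.client.HTTPConnection("testserver.example", 80, timeout=5)
    conn.request("POST", "/forum/post", body=SUSPICIOUS_BODY,
                 headers={"Content-Type": "text/plain"})
    print(conn.getresponse().status)
    conn.close()
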
Example of Test 1.2

Section 2 – IPS Evasion
- Aim
  - Verify that the sensor is capable of detecting and blocking basic exploits when subjected to various common evasion techniques
- Test 2.1 Baselines (with no evasion techniques applied)
- Test 2.2 Packet Fragmentation and Stream Segmentation (fragroute)
- Test 2.3 URL Obfuscation (whisker/Nikto)
- Test 2.4 Miscellaneous Evasion Techniques

2.2 Packet Fragmentation and Stream Segmentation
- IP fragmentation
  - Test 2.2.1 Ordered 8/24-byte fragments
  - Test 2.2.2 Out-of-order 8-byte fragments
  - Test 2.2.X
- TCP segmentation
  - Test 2.2.9 Ordered 1-byte segments, interleaved duplicate segments with invalid TCP checksums
  - Test 2.2.10 Ordered 1-byte segments, interleaved duplicate segments with null TCP control flags
  - Test 2.2.11 Ordered 1-byte segments, duplicate last packet
  - Test 2.2.1X
- (a minimal Scapy fragmentation sketch follows)
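
The actual tests drive these transformations with fragroute; a minimal Scapy sketch of the underlying idea (the address and payload are hypothetical) is:

    # IP-fragmentation evasion, sketched with Scapy. Illustrative only;
    # NSS uses fragroute for the real tests. Target and payload are
    # hypothetical.
    from scapy.all import IP, TCP, Raw, fragment, send

    exploit = (IP(dst="192.0.2.10") /
               TCP(dport=80, flags="PA") /
               Raw(load=b"GET /vulnerable.cgi?arg=AAAA HTTP/1.0\r\n\r\n"))

    frags = fragment(exploit, fragsize=8)  # ordered 8-byte fragments (cf. 2.2.1)
    send(frags[::-1])                      # ... delivered out of order (cf. 2.2.2)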

2.3 URL Obfuscation (whisker/Nikto)
- 2.3.1 URL encoding
- 2.3.2 /./ directory insertion
- 2.3.3 Long URL
- 2.3.4 Premature URL ending
- 2.3.5 Fake parameter
- 2.3.6 TAB separation
- 2.3.7 Case sensitivity
- 2.3.8 Windows \ delimiter
- 2.3.9 Session splicing
- (a few of these transforms are sketched below)
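
To make a few of these concrete, the sketch below shows what the transforms do to one request; the target path is hypothetical, and each variant should still resolve to the same resource on the server:

    # A few whisker/Nikto-style URL obfuscations applied to one path.
    # The path is hypothetical; the IPS must normalize each variant
    # back to the same resource, just as the web server does.
    from urllib.parse import quote

    path = "/cgi-bin/test.cgi"

    variants = {
        "2.3.1 URL encoding":         quote(path, safe="/").replace("t", "%74"),
        "2.3.2 /./ insertion":        path.replace("/cgi-bin/", "/cgi-bin/./"),
        "2.3.6 TAB separation":       f"GET\t{path}\tHTTP/1.0",
        "2.3.7 Case sensitivity":     path.upper(),
        "2.3.8 Windows \\ delimiter": path.replace("/", "\\"),
    }

    for test, form in variants.items():
        print(f"{test}: {form}")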

2.4 Miscellaneous Evasion Techniques
- Test 2.4.1 Altering default ports
- Test 2.4.2 Inserting spaces in FTP command lines
- Test 2.4.3 Inserting non-text Telnet opcodes in the FTP data stream
- Test 2.4.4 Polymorphic mutation (ADMmutate)
- Test 2.4.5 Altering protocol and RPC PROC numbers
- Test 2.4.6 RPC record fragging

Section 2 – IPS Evasion (Cont.)
- For each of the above tests, we note whether
  - The attempted attack was blocked successfully
  - The attempted attack was detected and an alert raised
  - The exploit was successfully "decoded" back to the original exploit, rather than alerting purely on the anomalous traffic produced by the evasion technique itself

Section 3 – Stateful Operation
- Aim
  - Verify the sensor's capability of monitoring stateful sessions established through the device at various traffic loads, without either losing state or incorrectly inferring state

3.1 Stateless Attack Replay
- Stick and Snot are used to generate large numbers of false alerts (a replay sketch follows)
- "PASS" if
  - No alerts are raised
  - Packets are blocked
- Test 3.1.1 Stateless attack replay
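
Stick and Snot essentially replay attack payloads with no established TCP session behind them. A rough equivalent ("attacks.pcap" is a hypothetical capture file) is replaying captured attack packets verbatim:

    # Stateless attack replay, Stick/Snot-style: resend captured attack
    # packets with no TCP handshake behind them. A stateful sensor should
    # recognize these as session-less noise. "attacks.pcap" is hypothetical.
    from scapy.all import rdpcap, sendp

    packets = rdpcap("attacks.pcap")
    sendp(packets, iface="eth0", verbose=False)  # replay at layer 2, as-is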

3.2 Simultaneous Open Connections (Default Settings)
- Two goals
  - Preserving state
  - Whether or not the sensor will block legitimate traffic
- Testing steps (a socket-level sketch follows)
  - The first packet of a two-packet exploit is transmitted
  - Between 10,000 and one million sessions are opened
  - The second half of the exploit is sent and the session is closed
  - Both halves of the exploit are required to trigger an alert
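
A minimal sketch of that sequence; the target address and the two exploit halves are placeholders, and the real tests use dedicated equipment to scale the session count to 1,000,000:

    # State-preservation test sequence: half 1 -> heavy session load ->
    # half 2. The IPS must keep the original session's state across the
    # load to correlate both halves. Target and payloads are placeholders.
    import socket

    TARGET = ("192.0.2.10", 80)
    HALF_1 = b"exploit-part-1"  # placeholder: first half of the exploit
    HALF_2 = b"exploit-part-2"  # placeholder: second half

    probe = socket.create_connection(TARGET)
    probe.sendall(HALF_1)       # step 1: first half only

    # step 2: pile on open sessions (testers scale this far higher)
    filler = [socket.create_connection(TARGET) for _ in range(10_000)]

    probe.sendall(HALF_2)       # step 3: complete the exploit
    probe.close()               # ... and close the session
    for s in filler:
        s.close()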

3.2 Simultaneous Open Connections (Cont.)
- Test 3.2.1 Attack Detection
  - Ensures that the sensor continues to detect new exploits as the number of open sessions is increased in stages from 10,000 to 1,000,000
- Test 3.2.2 Attack Blocking
  - Ensures that the sensor continues to block new exploits as the number of open sessions is increased in stages from 10,000 to 1,000,000
- Test 3.2.3 State Preservation
  - Ensures that the sensor maintains the state of pre-existing sessions as the number of open sessions is increased in stages from 10,000 to 1,000,000
- Test 3.2.4 Legitimate Traffic Blocking
  - Ensures that the sensor does not begin to block legitimate traffic as the number of open sessions is increased in stages from 10,000 to 1,000,000

3.3 Simultaneous Open Connections (After Tuning)
- Test 3.3.1 Attack Detection
- Test 3.3.2 Attack Blocking
- Test 3.3.3 State Preservation
- Test 3.3.4 Legitimate Traffic Blocking

Section 4 – Detection/Blocking Performance Under Load
- Aim
  - Verify that the sensor is capable of detecting and blocking exploits when subjected to increasing loads of background traffic, up to the maximum bandwidth claimed by the vendor

The Baseline Attack Test Environment
[Diagram: exploit-generating machines and traffic generation equipment sit on the external network, target hosts on the internal network; an Adtech AX/4000 on the management network monitors (1) the overall traffic load and (2) the total number of exploits.]
- Procedure
  1. Baseline attack testing with zero background traffic
  2. Background traffic applied (250, 500, 750, and 1000 Mbps)
- Reported results
  - Attack Blocking Rate (ABR) / Attack Detection Rate (ADR)
  - The maximum load the IPS can sustain before it begins to drop packets/miss alerts

4.1 UDP Traffic To Random Valid Ports
- UDP packets of varying sizes generated by a SmartBits SMB6000 with LAN-3301A 10/100/1000 Mbps TeraMetrics cards
  - With variable source IP addresses and ports, transmitting to a single fixed IP address/port
  - No attempt to simulate a "real world" network
  - Determines the raw packet processing capability

4.1 UDP Traffic To Random Valid Ports (Cont.)
- Test 4.1.1 64-byte packets – maximum 1,480,000 pps
- Test 4.1.2 440-byte packets – maximum 260,000 pps
- Test 4.1.3 1514-byte packets – maximum 81,720 pps
- (see the line-rate arithmetic below)
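
These maxima track the theoretical gigabit line rate. As a quick check, each Ethernet frame costs its own size plus roughly 20 bytes of preamble and inter-frame gap on the wire; the figures above are the quoted test maxima and differ slightly from this idealized calculation:

    # Back-of-the-envelope gigabit line rate in packets per second.
    LINE_RATE_BPS = 1_000_000_000
    OVERHEAD = 20  # preamble (8 bytes) + inter-frame gap (12 bytes)

    def max_pps(frame_bytes: int) -> float:
        return LINE_RATE_BPS / ((frame_bytes + OVERHEAD) * 8)

    for size in (64, 440, 1514):
        print(f"{size:>5}-byte frames: {max_pps(size):,.0f} pps")
    # 64: ~1,488,095   440: ~271,739   1514: ~81,486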

4.2 HTTP "MAX Stress" Traffic With No Transaction Delays
- Aim
  - Stress the HTTP detection engine and determine how the sensor copes with detecting and blocking exploits under load
- CAW Networks Gigabit WebAvalanche and WebReflector
  - Create true "real world" traffic at speeds of up to 2.2 Gbps as a background load for the IPS tests
  - Capable of simulating over 2.5 million users, with over 2.5 million concurrent sessions, and almost 100,000 HTTP requests per second
- Each transaction consists of a single HTTP GET request

4.2 HTTP "MAX Stress" Traffic With No Transaction Delays (Cont.)
- Test 4.2.1 Max 2,500 new connections per second
- Test 4.2.2 Max 5,000 new connections per second
- Test 4.2.3 Max 10,000 new connections per second
- Test 4.2.4 Max 20,000 new connections per second
- (a toy sketch of the traffic shape follows)
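
A toy illustration of the traffic shape, one GET per fresh connection; the target host is hypothetical, and only dedicated hardware such as WebAvalanche reaches thousands of new connections per second:

    # "One HTTP GET per connection" traffic shape, in miniature.
    # The WebReflector-style target host is hypothetical.
    import http.client

    def one_transaction(host="webreflector.test", port=80) -> int:
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.request("GET", "/")      # a transaction is a single GET
        status = conn.getresponse().status
        conn.close()                  # a brand-new connection every time
        return status

    for _ in range(100):
        one_transaction()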

4.3 HTTP "MAX Stress" Traffic With Transaction Delays
- A 10-second delay is introduced in the server (WebReflector) response
  - Test 4.3.1 Max 5,000 new connections per second
  - Test 4.3.2 Max 10,000 new connections per second

4.4 Protocol Mix Traffic
- To simulate more of a "real world" environment
  - Test 4.4.1 72% HTTP traffic (560-byte packets) + 20% FTP traffic + 6% UDP traffic (256-byte packets)

4.5 "Real World" Traffic
[Diagram: a CAW WebAvalanche and traffic generation equipment on the external network; target hosts on the internal network.]
- IIS Web server installed on a dual-P4 SuperMicro server with a Gigabit interface
- WebAvalanche replays multiple identical sessions from up to 25 new users per second

4.5 "Real World" Traffic (Cont.)
- Test 4.5.1 Pure HTTP traffic (simulated browsing sessions on the NSS Web site)
- Test 4.5.2 Protocol mix: 72% HTTP traffic (simulated browsing sessions as in 4.5.1) + 20% FTP traffic + 6% UDP traffic (256-byte packets)

Section 5 – Latency & User Response Times
- Aim
  - To determine the effect the IPS sensor has on the traffic passing through it under various load conditions
- Test 5.1 Latency
  - Tools: Spirent SmartFlow and SMB6000 with Gigabit TeraMetrics cards
  - Measures throughput, packet loss, and latency
  - Traffic load from 250 Mbps to 1 Gbps bi-directionally, in steps of 250 Mbps
  - Repeated for a range of UDP packet sizes (64, 440, and 1518 bytes)

5.1 Latency
- Latency with no background traffic
  - SmartFlow
- Latency with background traffic load
  - WebAvalanche & WebReflector
  - SmartFlow traffic at various packet sizes (64, 440, 1514 bytes)
- Latency when under attack
  - Spirent WebSuite software generates a fixed load of DoS/DDoS traffic
- (a software round-trip probe is sketched below)
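
In spirit, each latency sample is a timestamped round trip. A minimal software probe against a hypothetical UDP echo target is shown below; the real tests rely on Spirent hardware timestamps, which are far more precise:

    # Minimal UDP round-trip latency probe. The echo target is
    # hypothetical; hardware testers timestamp on the wire.
    import socket
    import time

    def rtt_ms(host="192.0.2.10", port=7, payload=b"x" * 64) -> float:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2.0)
        start = time.perf_counter()
        s.sendto(payload, (host, port))
        s.recvfrom(2048)  # wait for the echo
        return (time.perf_counter() - start) * 1000.0

    print(f"round trip: {rtt_ms():.3f} ms")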

5.2 User Response Times
- WebAvalanche & WebReflector generate HTTP sessions
  - To gauge how any increase in latency impacts the user experience, in terms of failed connections and increased Web response times
- Web response with no background traffic
- Web response when under attack

Section 6 – Stability & Reliability
- Aim
  - To verify the stability of the device under test under various extreme conditions
- Test 6.1.1 Blocking Under Extended Attack
  - Exploits mixed with legitimate sessions are transmitted through the device at a maximum of 100 Mbps for 8 hours
  - Purely a reliability test
  - The device is expected to remain operational and stable
  - It should block 100% of recognizable exploits, raising an alert for each

Section 6 – Stability & Reliability (Cont.)
- Test 6.1.2 Passing Legitimate Traffic Under Extended Attack
  - FAIL if legitimate traffic is blocked
- Test 6.1.3 ISIC/ESIC/TCPSIC/UDPSIC/ICMPSIC
  - Stress the protocol stack of the device under test
  - Tool: IP Stack Integrity Checker (ISIC) (a rough Scapy analogue follows)
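
ISIC and its siblings throw semi-random, frequently malformed packets at the stack. A rough Scapy analogue (the target address is hypothetical, and ISIC itself runs at far higher packet rates) uses fuzz() to randomize the unset header fields:

    # ISIC-style stack stressing, roughly approximated with Scapy's
    # fuzz(), which randomizes every field left unset. The target
    # address is hypothetical.
    from scapy.all import IP, TCP, UDP, ICMP, fuzz, send

    dst = "192.0.2.10"
    for layer in (TCP, UDP, ICMP):  # cf. TCPSIC / UDPSIC / ICMPSIC
        send(fuzz(IP(dst=dst) / layer()), count=1000, verbose=False)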

Section 7 – Management and Configuration
- Aim
  - The features of the management system
  - The ability of the management port to resist attack
- Management Port
  - Attacking the management interface can be more effective than attacking the detection interface
  - Test 7.1.1 Open ports (a socket-based port sweep is sketched below)
  - Test 7.1.2 ISIC/ESIC/TCPSIC/UDPSIC/ICMPSIC
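
A simple open-port sweep of the management interface can be sketched with plain sockets; the management IP is hypothetical, and a real audit would scan the full port range with a dedicated scanner such as nmap:

    # TCP open-port sweep of a management interface, in the spirit of
    # Test 7.1.1. The management IP and port list are hypothetical.
    import socket

    MGMT_IP = "192.0.2.1"

    open_ports = []
    for port in (22, 23, 80, 161, 443, 8443):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((MGMT_IP, port)) == 0:
                open_ports.append(port)
    print("open management ports:", open_ports)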

ICSA Labs Test – Administration Functions
- AF1 – Changing Its Mode of Operation
  - Unless it is already in the Selected Mode, the SUT must include a means to place its Mission Interfaces into the Selected Mode.
- AF2 – Administrative Capabilities
  - While in the Selected Mode, the SUT must provide a means to:
    1. Access the SUT through the Remote Administration interface;
    2. Configure and apply various Policies;
    3. Configure and change or acquire the date and time;
    4. Enable and disable logging of the events defined in LO1.1;
    5. Display all required log data in the Log(s) that was specified in LO2 for the events defined in LO1;
    6. Generate and display all required report data for the events defined in RE1 and RE2;
    7. Configure and change all Authentication Configuration Data;
    8. Configure and change Remote Administration settings;
    9. Enable and disable the automatic network acquisition and automatic enforcement of protection updates.

ICSA Labs Test – Administration
- AD1 – Remote Administration
  - The capability must exist for a User to perform encrypted Remote Administration of the Engine through at least a single Engine interface.

ICSA Labs Test – Identification & Authentication
- IA1 – Identify & Authenticate Prior to Administrative Function Access
  - The SUT must include the capability to require and enforce User identification followed by authentication with a password having the characteristics specified in IA2, or a multi-factor Authentication Mechanism, prior to permitting access to the Administrative Functions and other non-required SUT functions.
- IA2 – Strength of Password (CONDITIONAL)
  - The SUT must include the capability to set User passwords to a mix of eight or more letters, numbers, and special characters. (One possible check is sketched below.)
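
The sketch below encodes one reading of IA2 (length of at least eight, with letters, numbers, and special characters all present); ICSA's precise interpretation may differ:

    # Password check under one reading of the IA2 criterion.
    import string

    def meets_ia2(password: str) -> bool:
        return (len(password) >= 8
                and any(c in string.ascii_letters for c in password)
                and any(c in string.digits for c in password)
                and any(c in string.punctuation for c in password))

    assert meets_ia2("s3cur3!Pwd")
    assert not meets_ia2("password")  # no digits or special characters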

ICSA Labs Test – Traffic Flow
- Traffic Flow – Passing IP Traffic
  - While in the Selected Mode, the SUT must pass all Clean IP traffic, up to 80% of the Rated Throughput, through its Mission Interfaces according to the Policy being enforced.

Logging
- LO1 – Required Log Events
  - The SUT must include the capability to capture the required log data in LO2 for the following security, operational, and system events:
    1. Security Events
       a. All attempts to pass attacks through the Engine that target any Vulnerability Set elements when the Policy for the Vulnerability Set element related to the attack is tuned to:
          i. Detect and prevent;
          ii. Detect and permit.
    2. Operational Events
       a. When a User powers down the Engine, in the event that such functionality exists; (CONDITIONAL)
       b. When a change is made to the Policy being enforced;
       c. When a change is made to the Authentication Configuration Data of a User;
       d. When a User attempts to authenticate to a Remote Administration interface.
    3. System Events
       a. After any startup sequence is complete when the Engine powers on;
       b. When the link status of a Mission Interface changes.

Logging (Cont.)
- LO2 – Required Log Data
  - The SUT must include the capability to accurately capture in a Log, for each required log event in LO1, the following log data elements (one possible record shape is sketched after this slide):
    1. For all events:
       a. The date and time that the event occurred;
          i. The date must consist of the year, the month, and the numerical day in the month;
          ii. The time must consist of the hour, the minute, and the second;
       b. A description indicating why the SUT logged the event.
    2. For "Security" events in LO1.1:
       a. An indication of the action taken by the SUT;
       b. The protocol;
       c. For IP, the source and destination IP addresses;
       d. For TCP and UDP, the source and destination ports;
       e. A unique identifier representing the Engine that detected the event.
    3. For "Operational" event LO1.2.d:
       a. An indication of the username that attempted to authenticate;
       b. An indication of success or failure to authenticate.
    4. For "System" event LO1.3.b:
       a. The physical SUT interface link status.
- LO3 – Log Data Presentation
  - All required log data corresponding to all required log events defined in LO1 must be available for review upon demand and presented in a human-readable format while preserving the relative sequence of events.
- LO4 – Linking Multiple Logs for a Single Event (CONDITIONAL)
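
Pulling LO2.1 and LO2.2 together, a "Security" event record needs at least the fields below. This is one possible shape, not a schema that ICSA Labs mandates:

    # One possible shape for an LO2-compliant "Security" event record.
    from dataclasses import dataclass

    @dataclass
    class SecurityLogEvent:
        date: str         # LO2.1.a.i  - year, month, numerical day
        time: str         # LO2.1.a.ii - hour, minute, second
        description: str  # LO2.1.b    - why the SUT logged the event
        action: str       # LO2.2.a    - action taken by the SUT
        protocol: str     # LO2.2.b
        src_ip: str       # LO2.2.c    - for IP, source address
        dst_ip: str       # LO2.2.c    - ... and destination address
        src_port: int     # LO2.2.d    - for TCP and UDP
        dst_port: int     # LO2.2.d
        engine_id: str    # LO2.2.e    - identifier of the detecting Engine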

Reporting
- RE1 – Most Common Policy Violations
  - The SUT must include the capability to report the ten most common Policy violations over the preceding:
    1. Hour;
    2. Day;
    3. Seven days;
    4. Thirty days;
    5. Ninety days.
- RE2 – Most Common Sources of Policy Violations
  - From its perspective, the SUT must include the capability to report on the ten most common sources of Policy violations over the preceding:
    1. Hour;
    2. Day;
    3. Seven days;
    4. Thirty days;
    5. Ninety days.
- (a tallying sketch follows)
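
Both reports are top-ten tallies over a sliding time window. A minimal sketch, where the event dictionaries and the window handling are simplified placeholders:

    # Top-ten tally for RE1/RE2 over an in-memory event list. Window
    # handling is reduced to a cutoff comparison on ISO timestamps.
    from collections import Counter

    def top_violations(events, since_iso, key="signature"):
        """RE1 with key='signature'; RE2 with key='src_ip'."""
        window = [e for e in events if e["timestamp"] >= since_iso]
        return Counter(e[key] for e in window).most_common(10)

    events = [
        {"timestamp": "2009-11-05T14:00:00",
         "signature": "CVE-2004-1315", "src_ip": "203.0.113.7"},
        {"timestamp": "2009-11-05T14:05:00",
         "signature": "CVE-2004-1315", "src_ip": "203.0.113.9"},
    ]
    print(top_violations(events, "2009-11-05T13:30:00"))            # RE1
    print(top_violations(events, "2009-11-05T13:30:00", "src_ip"))  # RE2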

Functional Testing
- FT1 – Administrative Functions Work Properly
  - The SUT must demonstrate through testing that the Administrative Functions defined in AF1 and AF2 operate properly.
- FT2 – Average One-Way Latency
  - While testing under the following conditions:
    1. While in the Selected Mode;
    2. While enforcing a Policy that meets ST4, ST5, ST6, and ST7;
    3. With Background Traffic flowing through the SUT and filling the SUT bandwidth between 0% and 80% of the Rated Throughput;
    4. With attack traffic targeting Vulnerability Set elements comprising between 0% and 2% of the Rated Throughput.

Security Testing
- ST1 – SUT Not Addressable
  - While in the Selected Mode, it must be demonstrated through testing that the Mission Interfaces ignore non-administrative communication attempts.
- ST2 – No Unauthorized Access to Administrative Functions
  - While in the Selected Mode, it must be demonstrated through testing that unauthorized access to or control of any Administrative Function does not occur.
- ST3 – Engine Not Vulnerable
  - While in the Selected Mode, it must be demonstrated through testing that the Engine itself is not vulnerable via its Mission Interfaces to the evolving set of vulnerabilities known in the Internet community that can be remotely tested.
- ST4 – Coverage of Attacks against Relevant Vulnerabilities
  - The SUT must demonstrate through testing that it is capable of preventing all attacks aimed at Vulnerability Set elements from passing through after arriving on SUT Mission Interfaces, regardless of their origin and destination, under the following conditions:
    1. While in the Selected Mode;
    2. While exercising the Administrative Functions;
    3. With Background Traffic flowing through the SUT and filling the SUT bandwidth between 0% and 80% of the Rated Throughput;
    4. With attack traffic targeting Vulnerability Set elements comprising between 0% and 2% of the Rated Throughput;
    5. With and without the use of evasion techniques known in the Internet community.

Security Testing (Cont.)
- ST5 – Coverage of Trivial Denial of Service (DoS) Attacks
  - The SUT must demonstrate through testing that it has the capability to appropriately Mitigate all Trivial DoS Attacks arriving on a SUT Mission Interface, regardless of their origin, under the following conditions:
    1. While in the Selected Mode;
    2. While exercising the Administrative Functions;
    3. With Background Traffic flowing through the SUT and filling the SUT bandwidth between 0% and 80% of the Rated Throughput;
    4. With Trivial DoS Attack traffic comprising between 0% and 10% of the Rated Throughput;
    5. With attack traffic targeting Vulnerability Set elements comprising between 0% and 2% of the Rated Throughput.
- ST6 – Repeated Protection
  - While in the Selected Mode, the SUT must demonstrate through testing that, at all times after successfully preventing attacks targeting Vulnerability Set members and mitigating all Trivial DoS Attacks, it continues to successfully prevent and mitigate, respectively, such attacks in accordance with the Policy.
- ST7 – No False Positives after Tuning
  - While in the Selected Mode and following appropriate tuning of the Policy, the SUT must demonstrate through testing that it does not detect in Clean traffic an attack of any kind.

Documentation
- DO1 – Set-Up Instructions
  - Sufficient, accurate Guidance must be provided for a User to set up the SUT.
- DO2 – Administrative Functions Usage Instructions
  - Sufficient, accurate Guidance must be provided for a User to perform the Administrative Functions in AF1 and AF2.

Thanks! Q&A
劉榮太
tie@broadweb.com