Traffic Measurements
Modified from Carey Williamson
Network Traffic Measurement
• A focus of networking research for 20+ years
• Collect data or packet traces showing packet activity on the
network for different applications
• Study, analyze, characterize Internet traffic
• Goals:
– Understand the basic methodologies used
– Understand the key measurement results to date
Why Network Traffic Measurement?
• Understand the traffic on existing networks
• Develop models of traffic for future networks
• Useful for simulations, capacity planning studies
Measurement Environments
• Local Area Networks (LANs)
– e.g., Ethernet LANs
• Wide Area Networks (WANs)
– e.g., the Internet
• Wireless LANs
• …
Requirements
• Network measurement requires hardware or software measurement facilities that attach directly to the network
• These allow you to observe all packet traffic on the network, or to filter it to collect only the traffic of interest
• Assumes a broadcast-based network technology and superuser permission
Measurement Tools (1 of 3)
• Can be classified into hardware and software measurement
tools
• Hardware: specialized equipment
– Examples: HP 4972 LAN Analyzer, DataGeneral Network Sniffer,
others...
• Software: special software tools
– Examples: tcpdump, xtr, SNMP, others...
Measurement Tools (2 of 3)
• Measurement tools can also be classified as active or passive
• Active: the monitoring tool generates traffic of its own during
data collection (e.g., ping, pchar)
• Passive: the monitoring tool is passive, observing and
recording traffic info, while generating none of its own (e.g.,
tcpdump)
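For illustration, here is a minimal passive-capture sketch in Python using scapy (an assumed dependency, not a tool named on the slide); like tcpdump, it records traffic while generating none of its own. The interface name eth0 is an assumption, and superuser permission is required.

```python
# Minimal passive monitor sketch using scapy (assumed installed:
# pip install scapy). Requires superuser permission, like tcpdump.
from collections import Counter
from scapy.all import sniff, IP, TCP, UDP  # scapy's live-capture API

counts = Counter()

def observe(pkt):
    # Record traffic info without generating any traffic of our own.
    if IP in pkt:
        proto = "TCP" if TCP in pkt else "UDP" if UDP in pkt else "other"
        counts[proto] += 1

# Capture 100 packets on eth0 (interface name is an assumption).
sniff(iface="eth0", prn=observe, count=100, store=False)
print(counts)
```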
Measurement Tools (3 of 3)
• Measurement tools can also be classified as real-time or
non-real-time
• Real-time: collects traffic data as it happens, and may even
be able to display traffic info as it happens, for real-time
traffic management
• Non-real-time: collected traffic data may only be a subset
(sample) of the total traffic, and is analyzed off-line (later),
for detailed analysis
Potential Uses of Tools (1 of 4)
• Protocol debugging
– Network debugging and troubleshooting
– Changing network configuration
– Designing, testing new protocols
– Designing, testing new applications
– Detecting network weirdness: broadcast storms, routing loops, etc.
Potential Uses of Tools (2 of 4)
• Performance evaluation of protocols and applications
– How protocol/application is being used
– How well it works
– How to design it better
Potential Uses of Tools (3 of 4)
• Workload characterization
– What traffic is generated
– Packet size distribution
– Packet arrival process
– Burstiness
– Important in the design of networks, applications, interconnection
devices, congestion control algorithms, etc.
Potential Uses of Tools (4 of 4)
• Workload modeling
– Construct synthetic workload models that concisely capture the
salient characteristics of actual network traffic
– Use as representative, reproducible, flexible, controllable workload
models for simulations, capacity planning studies, etc.
Traffic Measurement Time Scales
• Performance analysis
– representative models
• throughput, packet loss, packet delay
– Microseconds to minutes
• Network engineering
– network configuration
– capacity planning
– demand forecasting
– traffic engineering
– Minutes to years
• Different measurement methods
Properties
• The most basic view of traffic is as a collection of packets passing through routers and links
• Packets and bytes
– One can capture/observe packets at some location
– Packet arrivals
• interarrival times
• count traffic at timescale T (see the sketch below)
– This captures the workload generated by traffic on a per-packet basis
– Packet size
• time series of byte counts
– captures the amount of consumed bandwidth
• packet size distribution
– matters for router design, etc.
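A small sketch (assuming NumPy and a hypothetical trace of (timestamp, size) pairs) of the two per-packet views above: interarrival times, and the byte-count time series at timescale T.

```python
import numpy as np

# Hypothetical trace: one (timestamp_seconds, packet_size_bytes) per packet.
ts    = np.array([0.000, 0.004, 0.005, 0.130, 0.131, 0.135])
sizes = np.array([1500,  40,    1500,  40,    576,   1500])

# Packet arrivals: interarrival times between consecutive packets.
interarrivals = np.diff(ts)

# Count traffic at timescale T: bytes consumed in each bin of width T.
T = 0.01  # seconds; an arbitrary choice for the example
edges = np.arange(ts.min(), ts.max() + T, T)
byte_counts, _ = np.histogram(ts, bins=edges, weights=sizes)

print(interarrivals)   # per-packet workload view
print(byte_counts)     # bandwidth-consumption view at timescale T
```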
Higher-level Structure
• Transport protocols and applications impose structure above the packet level
• ON/OFF process
– a bursty workload at the packet level
• Packet train
– packets grouped by an interarrival threshold (see the sketch below)
• Session
– a single execution of an application
– human generated
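A minimal sketch of packet-train segmentation: consecutive packets whose interarrival gap stays below a threshold (the 100 ms value here is an arbitrary choice) belong to the same train.

```python
def packet_trains(timestamps, threshold=0.1):
    """Group sorted packet timestamps into trains: a new train starts
    whenever the interarrival gap exceeds the threshold (seconds)."""
    trains, current = [], [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev <= threshold:
            current.append(cur)          # same train
        else:
            trains.append(current)       # gap too large: close the train
            current = [cur]
    trains.append(current)
    return trains

# Two bursts separated by a long silence -> two trains.
print(packet_trains([0.00, 0.01, 0.02, 5.00, 5.01], threshold=0.1))
```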
Flows
• Set of packets passing an observation point during a time
interval with all packets having a set of common properties
– Header field contents, packet characteristics, etc.
• IP flows
– source/destination addresses
– IP or transport header fields
– prefix
• Network-defined flow
– network’s workload
– ingress and egress
– Traffic matrix and Path matrix
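A sketch of IP-flow aggregation over the usual 5-tuple of header fields; the packet-record keys here are hypothetical, not a standard capture API.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets sharing the common 5-tuple (src, dst, sport, dport,
    proto) observed during the measurement interval."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"]   += p["size"]
    return flows

pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80,
     "proto": 6, "size": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80,
     "proto": 6, "size": 40},
]
print(aggregate_flows(pkts))
```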
Semantically Distinct Traffic Types
• Control Traffic
– Control plane
• Routing protocols
– BGP, OSPF, IS-IS
• Measurement and management
– SNMP
• General control packets
– ICMP
– Data plane
• Malicious Traffic
Challenges
• Practical issues
– Observability
• Core simplicity
– Flows
– Packets
• Distributed Internetworking
• IP Hourglass
– Data volume
– Data sharing
Challenges
• Statistical difficulties
– Long tails and High variability
• Instability of metrics
• Modeling difficulty
• Confounding intuition
– Stationarity and stability
• Stationarity: joint probability distribution does not change when shifted in
time
• Stability: consistency of properties over time
– Autocorrelation and memory in system behavior
– High dimensionality
Tools
• Packet Capture
– General purpose systems
• libpcap
• tcpdump
• ethereal
• scriptroute
• …
– Special purpose system
– Control plane traffic
• GNU Zebra
• Routeviews
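Tools like tcpdump write libpcap-format traces that can then be analyzed offline; a minimal sketch using the dpkt library (an assumed dependency; trace.pcap is a hypothetical filename):

```python
import dpkt

# Read a libpcap trace, e.g. one produced by: tcpdump -w trace.pcap
with open("trace.pcap", "rb") as f:
    for ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if isinstance(eth.data, dpkt.ip.IP):
            ip = eth.data
            # ts is the capture timestamp; ip.len is the IP datagram
            # length, ip.p the protocol number
            print(ts, ip.p, ip.len)
```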
Data Management
• Full packet capture and storage is challenging
• Limitations of commodity PCs
• Data stream management
• Big Data platforms
– Hadoop, etc.
Data Reduction
• Lossy compression
• Counters
– SNMP Management Information Base
• Flow capture
– Packet trains
– Packet flows
Data Reduction
• Sampling
– Basic packet sampling (see the sketch below)
• Random: keep each packet with a fixed probability
• Deterministic: periodic samples (every n-th packet)
• Stratified: multi-step sampling
– Trajectory sampling
• Observe the same randomly sampled packets at all locations (e.g., via a hash of packet content)
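A sketch of the three basic packet-sampling schemes; the probability, period, and stratum size are arbitrary illustration values.

```python
import random

def random_sample(packets, p=0.01):
    # Random: keep each packet independently with fixed probability p.
    return [pkt for pkt in packets if random.random() < p]

def deterministic_sample(packets, n=100):
    # Deterministic: keep every n-th packet (periodic samples).
    return packets[::n]

def stratified_sample(packets, stratum=100):
    # Stratified: divide the stream into strata, then pick one packet
    # at random from each stratum (multi-step sampling).
    return [random.choice(packets[i:i + stratum])
            for i in range(0, len(packets), stratum)]
```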
Data Reduction
• Summarization
– Bloom filters
– Sketches: dimension-reducing random projections
– Probabilistic counting
– Landmark/sliding window models
Review: Bloom Filters
• Given a set S = {x1, x2, x3, …, xn} on a universe U, want to answer queries of the form: Is y ∈ S?
• A Bloom filter provides an answer in
– "constant" time (the time to hash)
– a small amount of space
– but with some probability of being wrong
• An alternative to hashing, with interesting tradeoffs
Review: Bloom Filters
• Start with an m-bit array B, filled with 0s.
• Hash each item xj in S k times. If Hi(xj) = a, set B[a] = 1.
• To check if y is in S, check B at Hi(y) for each i. All k values must be 1.
• It is possible to have a false positive: all k values are 1, but y is not in S.
[Figure: an m = cn bit array B populated by k hash functions over n items]
Review: Bloom Filters
• Tradeoffs among three parameters:
– Size m/n: bits per item
– Time k: number of hash functions
– Error f: false positive probability
Review: Bloom Filters
• False positive probability (n items, m = cn bits, k hash functions)
• Pr(a specific bit of the filter is 0) is
p' = (1 - 1/m)^{kn} ≈ e^{-kn/m} = p
• If r is the fraction of 0 bits in the filter, then the false positive probability is
(1 - r)^k ≈ (1 - p')^k ≈ (1 - p)^k = (1 - e^{-k/c})^k
• The approximations are valid because r is concentrated around E[r]
– a martingale argument suffices
• Calculus gives the optimum at k = (ln 2) m/n
– so the optimal false positive probability is about (0.6185)^{m/n}
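A self-contained sketch of the Bloom filter just described. Simulating the k hash functions by salting a single SHA-256 hash is an illustration choice, not something from the slides; k is chosen near the (ln 2) m/n optimum.

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, n_items, bits_per_item=8):
        self.m = n_items * bits_per_item                      # m = cn bits
        self.k = max(1, round(math.log(2) * bits_per_item))   # k = (ln 2) m/n
        self.bits = bytearray(self.m)   # one byte per bit, for clarity

    def _positions(self, item):
        # k hash functions H_i, simulated by salting one hash with i.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for a in self._positions(item):   # if H_i(x_j) = a, set B[a] = 1
            self.bits[a] = 1

    def __contains__(self, item):
        # All k positions must be 1; false positives are possible,
        # false negatives are not.
        return all(self.bits[a] for a in self._positions(item))

# c = 8 bits/item gives k = 6 and fpp of roughly 0.6185^8, about 2%.
bf = BloomFilter(n_items=1000, bits_per_item=8)
bf.add("10.0.0.1")
print("10.0.0.1" in bf, "10.0.0.2" in bf)  # True, almost surely False
```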
Data Reduction
• Dimensionality reduction
– Clustering
– Principal Component Analysis
• Probabilistic models
– Distribution models
– Dependence structure
• Inference
– Traffic Matrix estimation
Curse of Dimensionality
• A major problem is the curse of dimensionality.
• If the data x lies in a high-dimensional space, then an enormous amount of data is required to learn distributions or decision rules.
• Example: 50 dimensions, each with 20 levels. This gives a total of 20^50 ≈ 10^65 cells, but the number of data samples will be far less. There will not be enough data samples to learn.
Dimension Reduction
• One way to avoid the curse of dimensionality is by projecting
the data onto a lower-dimensional space.
• Techniques for dimension reduction:
– Principal Component Analysis (PCA)
– Fisher's Linear Discriminant
– Multidimensional Scaling (MDS)
– Independent Component Analysis (ICA)
– …
Principal Component Analysis
• PCA is the most commonly used dimension reduction technique.
– Also called the Karhunen-Loeve transform
• PCA operates on data samples x1, …, xn
• Compute the mean: μ = (1/n) Σi xi
• Compute the covariance: C = (1/n) Σi (xi - μ)(xi - μ)^T
Principal Component Analysis
• Compute the eigenvalues λi and eigenvectors ei of the covariance matrix C
• Solve C ei = λi ei
• Order them by magnitude: λ1 ≥ λ2 ≥ … ≥ λn
• PCA reduces the dimension by keeping the directions e1, …, ep such that the discarded eigenvalues λp+1, …, λn are small
Principal Component Analysis
• For many datasets, most of the eigenvalues λi are negligible and can be discarded.
• The eigenvalue λi measures the variation in the direction ei.
• Example: [figure of data spread along its principal directions]
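A minimal NumPy sketch of the PCA recipe above: mean, covariance, eigendecomposition, and projection onto the top-p directions.

```python
import numpy as np

def pca(X, p):
    """X: (n_samples, d) data matrix; returns the data projected onto
    the top-p principal directions."""
    mu = X.mean(axis=0)                 # compute the mean
    Xc = X - mu
    cov = (Xc.T @ Xc) / len(X)          # compute the covariance C
    lam, E = np.linalg.eigh(cov)        # eigenvalues/eigenvectors of C
    order = np.argsort(lam)[::-1]       # order eigenvalues by magnitude
    E = E[:, order[:p]]                 # keep the top-p directions e_i
    return Xc @ E                       # project onto them

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
print(pca(X, p=2).shape)   # (200, 2)
```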
Why Principal Component Analysis?
• Motivation
– Find bases which have high variance in the data
– Encode the data with a small number of bases with low MSE
Dimensionality Reduction
• Can ignore the components of less significance.
[Figure: variance (%) explained by each principal component, PC1 through PC10]
• You do lose some information, but if the eigenvalues are small, you don't lose much:
– n dimensions in the original data
– calculate n eigenvectors and eigenvalues
– choose only the first p eigenvectors, based on their eigenvalues
– the final data set has only p dimensions
Dimensionality Reduction
[Figure: variance explained as a function of dimensionality]
PCA and Discrimination
• PCA may not find the best directions for
discriminating between two classes.
• Example: suppose the two classes have 2D Gaussian
densities as ellipsoids.
• 1st eigenvector is best for representing
the probabilities.
• 2nd eigenvector is best for
discrimination.
Linear Methods
• Principal Component Analysis (PCA)
[Figure: a one-dimensional manifold embedded in the plane]
Nonlinear Manifolds
[Figure: two points A and B on a curved manifold]
• PCA and MDS see the Euclidean distance between points; what is important is the geodesic distance along the manifold
• Unroll the manifold: to preserve structure, preserve the geodesic distance and not the Euclidean distance
Two Methods
• Tenenbaum et al.'s Isomap algorithm
– Global approach: in a low-dimensional embedding,
• nearby points should be nearby
• faraway points should be faraway
• Roweis and Saul's Locally Linear Embedding (LLE) algorithm
– Local approach
• nearby points should be nearby
Isomap
• Estimate the geodesic distance between faraway points.
• For neighboring points, the Euclidean distance is a good approximation to the geodesic distance.
• For faraway points, estimate the distance by a series of short hops between neighboring points.
– Find shortest paths in a graph with edges connecting neighboring data points
• Once we have all pairwise geodesic distances, use classical metric MDS.
Isomap - Algorithm
• Determine the neighbors of each point.
– All points within a fixed radius, or
– the K nearest neighbors
• Construct a neighborhood graph.
– Each point is connected to another if it is a K nearest neighbor.
– Edge length equals the Euclidean distance.
• Compute the shortest paths between all pairs of nodes.
– Floyd's algorithm or Dijkstra's algorithm
• Construct a lower-dimensional embedding.
– Classical MDS
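The whole pipeline is available off the shelf; a sketch using scikit-learn's Isomap (an assumed dependency), applied to the classic swiss-roll manifold:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points lying on a 2-D nonlinear manifold.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# K-nearest-neighbor graph -> shortest paths -> classical MDS embedding.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1000, 2): the "unrolled" manifold
```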
Isomap
[Figure: example Isomap embedding]

Observations
Overview of Traffic Analysis
[Figure: overview of the traffic analysis pipeline]
Traffic Samples from Internet2
[Figure: traffic samples from Internet2]
Packet Trains and Autocorrelation
[Figure: packet trains and the autocorrelation they induce]
Observation #1
• The traffic model that you use is extremely important in the
performance evaluation of routing, flow control, and
congestion control strategies
– Have to consider application-dependent, protocol-dependent, and
network-dependent characteristics
– The more realistic, the better
Observation #2
• Characterizing aggregate network traffic is hard
– Lots of (diverse) applications
– Just a snapshot: traffic mix, protocols, applications, network
configuration, technology, and users change with time
Observation #3
• Packet arrival process is not Poisson
– Packets travel in trains
– Packets travel in tandems
– Packets get clumped together (ack compression)
– Interarrival times are not exponential
– Interarrival times are not independent
Observation #4
• Packet traffic is bursty
– Average utilization may be very low
– Peak utilization can be very high
– Depends on what interval you use!!
– Traffic may be self-similar
• bursts exist across a wide range of time scales
– Defining burstiness (precisely) is difficult
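A sketch of why the interval matters: the peak-to-mean utilization ratio of the same (synthetic, hypothetical) bursty trace shrinks as the measurement bin width T grows.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical bursty trace: 20 bursts of 50 packets with ~1 ms spacing,
# scattered over a 60-second window.
ts = np.concatenate([b + rng.exponential(0.001, 50)
                     for b in rng.uniform(0, 60, 20)])
sizes = np.full(len(ts), 1500)  # bytes per packet

for T in (0.01, 0.1, 1.0, 10.0):  # bin widths in seconds
    edges = np.arange(0, 60 + T, T)
    bytes_per_bin, _ = np.histogram(ts, bins=edges, weights=sizes)
    rate = bytes_per_bin * 8 / T          # bits/s in each interval
    print(f"T={T:>5}s  peak/mean = {rate.max() / rate.mean():.1f}")
```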
Observation #5
• Traffic is non-uniformly distributed amongst the hosts on the
network
– Example: 10% of the hosts account for 90% of the traffic (or 20-80)
– Why?
• Clients versus servers, geographic reasons, popular ftp sites, web sites, etc.
Observation #6
• Network traffic exhibits ‘‘locality’’ effects
– Pattern is far from random
– Temporal locality
– Spatial locality
– Persistence and concentration
– True at host level, at gateway level, at application level
Observation #7
• Well over 90% of the byte and packet traffic on most networks
is TCP/IP
– By far the most prevalent
– Often as high as 95-99%
– Most studies focus only on TCP/IP for this reason
Observation #8
• Most conversations are short
– Example: 90% of bulk data transfers send less than 10 kilobytes of
data
– Example: 50% of interactive connections last less than 90 seconds
– Distributions may be ‘‘heavy tailed’’
• i.e., extreme values may skew the mean and/or the distribution
Observation #9
• Traffic is bidirectional
– Data usually flows both ways
– Not just acks in the reverse direction
– Usually asymmetric bandwidth though
– Pretty much what you would expect from the TCP/IP traffic for most
applications
Observation #10
• Packet size distribution is bimodal
– Lots of small packets for interactive traffic and acknowledgements
– Lots of large packets for bulk data file transfer type applications
– Very few in between sizes
Bingdong Li
Network Security Monitoring and Analysis
based on Big Data Technologies
Objectives
• A network security monitoring and analysis system based on Big Data technologies to
– measure the network
– provide real-time, continuous monitoring and interactive visualization
– perform intelligent classification and identification of network objects, using role behavior as context
Objectives
[Figure: the system sits at the intersection of Network Security, Big Data, and Machine Learning]
System Design
• Data Collection
System Design
• Online Real Time Process
System Design
• NoSQL Storage
System Design
• User Interfaces
Monitoring and Visualization
• Real time: response within a time constraint
• Interactive: involves user interaction
• Continuous: "continue to be effective over time in light of the inevitable changes that occur" (NIST)
Network Status
[Figure: network status view]
Top N
[Figure: top-N view]
Dhruba Borthakur
Hadoop
Hadoop, Why?
• Need to process Multi Petabyte Datasets
• Expensive to build reliability in each application.
• Nodes fail every day
– Failure is expected, rather than exceptional.
– The number of nodes in a cluster is not constant.
• Need common infrastructure
– Efficient, reliable, Open Source Apache License
• The above goals are the same as Condor's, but
– workloads are IO bound, not CPU bound
Commodity Hardware
Typically a two-level architecture
– Nodes are commodity PCs
– 30-40 nodes per rack
– Uplink from the rack is 3-4 gigabit
– Rack-internal is 1 gigabit
Goals of HDFS
• Very Large Distributed File System
– 10K nodes, 100 million files, 10 PB
• Assumes Commodity Hardware
– Files are replicated to handle hardware failure
– Detect failures and recover from them
• Optimized for Batch Processing
– Data locations exposed so that computations can move to
where data resides
– Provides very high aggregate bandwidth
• User Space, runs on heterogeneous OS
HDFS Architecture
[Figure: Client, NameNode, Secondary NameNode, and DataNodes exchanging cluster-membership information]
• NameNode: maps a file to a file-id and a list of DataNodes
• DataNode: maps a block-id to a physical location on disk
• SecondaryNameNode: periodic merge of the transaction log
Distributed File System
• Single Namespace for entire cluster
• Data Coherency
– Write-once-read-many access model
– Client can only append to existing files
• Files are broken up into blocks
– Typically 128 MB block size
– Each block replicated on multiple DataNodes
• Intelligent Client
– Client can find location of blocks
– Client accesses data directly from DataNode
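A back-of-the-envelope sketch of what the 128 MB block size and replication imply; the replication factor of 3 is a common HDFS default, and the file size is hypothetical.

```python
import math

BLOCK = 128 * 2**20          # 128 MB block size
REPLICATION = 3              # common HDFS default replication factor

def blocks_and_storage(file_bytes):
    # Blocks the file is broken into, and raw bytes stored across
    # the DataNodes once every block is replicated.
    n_blocks = math.ceil(file_bytes / BLOCK)
    raw = file_bytes * REPLICATION
    return n_blocks, raw

n, raw = blocks_and_storage(10 * 2**40)          # a hypothetical 10 TB file
print(n, "blocks,", raw / 2**40, "TB raw storage")  # 81920 blocks, 30 TB
```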
NameNode Metadata
• Meta-data in Memory
– The entire metadata is in main memory
– No demand paging of meta-data
• Types of Metadata
– List of files
– List of Blocks for each file
– List of DataNodes for each block
– File attributes, e.g., creation time, replication factor
• A Transaction Log
– Records file creations, file deletions, etc.
DataNode
• A Block Server
– Stores data in the local file system (e.g. ext3)
– Stores meta-data of a block (e.g. CRC)
– Serves data and meta-data to Clients
• Block Report
– Periodically sends a report of all existing blocks to the
NameNode
• Facilitates Pipelining of Data
– Forwards data to other specified DataNodes
Data Flow
[Figure: data flow among Web Servers, Scribe Servers, Network Storage, the Hadoop Cluster, Oracle RAC, and MySQL]