Deriving Traffic Demands for Operational IP Networks: Methodology and Experience



Anja Feldmann*, Albert Greenberg, Carsten Lund, Nick Reingold, Jennifer Rexford, and Fred True

Internet and Networking Systems Research Lab

AT&T Labs-Research; Florham Park, NJ

*University of Saarbruecken


Traffic Engineering For Operational IP Networks

Improve user performance and network efficiency by tuning router configuration to the prevailing traffic demands.

– Why?

– Time scale?

[Figure: the AS 7018 (AT&T) backbone connecting clouds of customers and peers; * marks synthetic loads]


Traffic Engineering Stack

Topology of the ISP backbone

– Connectivity and capacity of routers and links

Traffic demands

– Expected/offered load between points in the network

Routing configuration

– Tunable rules for selecting a path for each flow

Performance objective

– Balanced load, low latency, service level agreements …

Optimization procedure

– Given the topology and the traffic demands in an IP network, tune routes to optimize a particular performance objective
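To make the stack concrete, here is a minimal Python sketch, with invented router names, capacities, and loads, that represents the four inputs as data and evaluates one common performance objective, maximum link utilization:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    src: str             # router at one end
    dst: str             # router at the other end
    capacity_mbps: float

# Topology: connectivity and capacity of routers and links.
topology = [Link("nyc", "chi", 2400.0), Link("chi", "sf", 2400.0)]

# Traffic demands: offered load (Mb/s) between points in the network.
demands = {("nyc", "sf"): 800.0}

# Routing configuration: tunable OSPF-style link weights that an
# optimization procedure would search over.
weights = {("nyc", "chi"): 1, ("chi", "sf"): 1}

def max_utilization(link_loads: dict) -> float:
    """Performance objective: balanced load means low maximum utilization."""
    caps = {(l.src, l.dst): l.capacity_mbps for l in topology}
    return max(load / caps[link] for link, load in link_loads.items())

# With unit weights, the shortest path nyc -> chi -> sf carries the demand,
# so both links see its full 800 Mb/s.
loads = {("nyc", "chi"): demands[("nyc", "sf")],
         ("chi", "sf"): demands[("nyc", "sf")]}
print(round(max_utilization(loads), 3))  # 0.333
```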


Traffic Demands

How to model the traffic demands?

– Know where the traffic is coming from and going to

– Support what-if questions about topology and routing changes

– Handle the large fraction of traffic crossing multiple domains

How to populate the demand model?

– Typical measurements show only the impact of traffic demands

» Active probing of delay, loss, and throughput between hosts

» Passive monitoring of link utilization and packet loss

– Need network-wide direct measurements of traffic demands

How to characterize the traffic dynamics?

– User behavior, time-of-day effects, and new applications

– Topology and routing changes within or outside your network


Outline

Sound traffic model for traffic engineering of operational IP networks

Methodology for populating the model

Results

Conclusions


Outline

Sound traffic model for traffic engineering of operational IP networks

– Point-to-multipoint demand model

Methodology for populating the model

Results

Conclusions


Traffic Demands

[Figure: a Web site and a User site exchanging traffic across the big Internet]

Traffic Demands

Interdomain Traffic

[Figure: BGP route advertisements for User site U propagate from AS 3 through AS 2 and AS 4 toward the Web site in AS 1; links carry AS paths such as “AS 3, U” and “AS 4, AS 3, U”, giving multiple candidate routes to U]

• What path will be taken between ASes to get to the User site?

• Next: what path will be taken within an AS to get to the User site?

Traffic Demands

Zoom in on one AS

[Figure: inside one AS, traffic from the Web site enters at link IN and can exit toward the User site at OUT 1, OUT 2, or OUT 3; links carry routing weights (10, 25, 50, 75, 110, 200, 300)]

Change in internal routing configuration changes flow exit point!


Point-to-Multipoint Demand Model

Definition: V(in, {out}, t)

– Entry link (in)

– Set of possible exit links ({out})

– Time period (t)

– Volume of traffic (V(in,{out},t))

Avoids the “coupling” problem with traditional point-to-point (input-link to output-link) models:

[Diagram: a point-to-point demand model feeds traffic engineering, which produces improved routing; but the improved routing shifts flow exit points, changing the point-to-point demands and restarting the cycle]
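A minimal sketch of how such demands might be stored (link and time-bin names are invented). The frozenset of candidate exit links is what makes the model point-to-multipoint:

```python
from collections import defaultdict

# V(in, {out}, t): volume keyed by ingress link, the *set* of candidate
# egress links, and a time bin. Because the key holds a set of exits,
# rerouting inside the network can change the actual exit point without
# invalidating the demand, which breaks the coupling shown above.
V = defaultdict(float)

def add_demand(in_link, out_links, t, volume_bytes):
    V[(in_link, frozenset(out_links), t)] += volume_bytes

add_demand("in-7", {"out-1", "out-2"}, "2000-05-01T00", 3_200_000)
print(V[("in-7", frozenset({"out-1", "out-2"}), "2000-05-01T00")])
```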


Outline

Sound traffic model for traffic engineering of operational IP networks

Methodology for populating the model

– Ideal

– Adapted to focus on interdomain traffic and to meet practical constraints in an operational, commercial IP network

Results

Conclusions


Ideal Measurement Methodology

Measure traffic where it enters the network

– Input link, destination address, # bytes, and time

– Flow-level measurement (Cisco NetFlow)

Determine where traffic can leave the network

– Set of egress links associated with each network address (from the forwarding tables)

Compute traffic demands

– Associate each measurement with a set of egress links
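A hedged Python sketch of that association step, with made-up flow records and prefixes; the core operation is a longest-prefix match from each measured flow to the set of egress links that can reach its destination:

```python
import ipaddress
from collections import defaultdict

# Flow measurements taken where traffic enters:
# (input link, destination address, bytes, time bin).
flows = [("in-1", "12.34.156.5", 4096, 0)]

# Reachability derived from forwarding tables:
# destination prefix -> set of candidate egress links.
egress_sets = {
    ipaddress.ip_network("12.34.156.0/24"): frozenset({"out-2", "out-3"}),
    ipaddress.ip_network("12.0.0.0/8"): frozenset({"out-1"}),
}

def lookup(addr: str):
    """Longest-prefix match of a destination address."""
    ip = ipaddress.ip_address(addr)
    matches = [p for p in egress_sets if ip in p]
    if not matches:
        return None
    return egress_sets[max(matches, key=lambda p: p.prefixlen)]

demands = defaultdict(int)
for in_link, dest, nbytes, t in flows:
    outs = lookup(dest)
    if outs is not None:  # associate the measurement with its exit set
        demands[(in_link, outs, t)] += nbytes

print(dict(demands))  # one entry keyed by ('in-1', {out-2, out-3}, 0)
```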


Adapted Measurement Methodology

Interdomain Focus

A large fraction of the traffic is interdomain

Interdomain traffic is easiest to capture

– Large number of diverse access links to customers

– Small number of high speed links to peers

Practical solution

– Flow-level measurements at peering links (in both directions!)

– Reachability information from all routers


Inbound and Outbound Flows on Peering Links

[Figure: peering links carry inbound flows (peers toward customers) and outbound flows (customers toward peers)]

Note: the ideal methodology applies to inbound flows.


Most Challenging Part: Inferring Ingress Links for Outbound Flows

Outbound traffic flows are measured only at the peering link where they exit.

[Figure: an outbound flow leaves at a measured peering link, but which customer input link it entered on is unknown (“? input”)]

Use routing simulation to trace back to the ingress links!
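A simplified sketch of the trace-back idea; the prefix data here is invented, and the paper uses a full routing simulation where this sketch does only a bare reachability match on the source address:

```python
import ipaddress

# Hypothetical mapping from customer address blocks to their access links.
customer_prefixes = {
    ipaddress.ip_network("12.34.0.0/16"): frozenset({"cust-link-1"}),
    # A multi-homed customer can inject traffic on more than one link.
    ipaddress.ip_network("12.34.156.0/24"): frozenset({"cust-link-1", "cust-link-2"}),
}

def candidate_ingress(src_addr: str):
    """Return the ingress links that could have sent a flow with this source."""
    ip = ipaddress.ip_address(src_addr)
    matches = [p for p in customer_prefixes if ip in p]
    if not matches:
        return None  # source is not a customer: discard the record
    return customer_prefixes[max(matches, key=lambda p: p.prefixlen)]

print(candidate_ingress("12.34.156.9"))  # frozenset({'cust-link-1', 'cust-link-2'})
```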


Computing the Demands

[Figure: forwarding tables, configuration files, NetFlow, and SNMP feeds collected from the network, with a researcher in data-mining gear]

Data

– Large, diverse, lossy

– Collected at slightly different, overlapping time intervals across the network

– Subject to network and operational dynamics; anomalies explained and fixed via an understanding of these dynamics

Algorithms, details and anecdotes in paper!


Outline

Sound traffic model for traffic engineering of operational IP networks

Methodology for populating the model

Results

– Effectiveness of measurement methodology

– Traffic characteristics

Conclusions


Experience with Populating the Model

Largely successful

– 98% of all traffic (bytes) associated with a set of egress links

– 95-99% of traffic consistent with an OSPF simulator

Disambiguating outbound traffic

– 67% of traffic associated with a single ingress link

– 33% of traffic split across multiple ingress links (typically in the same city!)

Inbound and transit traffic (uses input measurement)

– Results are good

Outbound traffic (uses input disambiguation)

– Results are pretty good for traffic-engineering applications, but there are limitations

– To improve results, may want to measure at selected or sampled customer links, e.g., links to email, hosting, or data centers


Proportion of Traffic in Top Demands (Log Scale)

Zipf-like distribution: a relatively small number of heavy demands dominates.


Time-of-Day Effects (San Francisco)

[Plot: traffic volume over time for heavy demands at the San Francisco site; x-axis tick marks at midnight EST]

Heavy demands at the same site may show different time-of-day behavior


Discussion

Distribution of traffic volume across demands

– Small number of heavy demands (Zipf’s Law!)

– Optimize routing based on the heavy demands (see the sketch after this list)

– Measure a small fraction of the traffic (sample)

– Watch out for changes in load and egress links
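An illustrative back-of-the-envelope calculation with synthetic Zipf volumes (not measured data) of why focusing on the heavy demands pays off:

```python
# Toy Zipf law: the k-th heaviest of 10,000 demands carries ~1/k units.
volumes = [1.0 / k for k in range(1, 10_001)]
total = sum(volumes)
top_1_percent = sum(volumes[:100])
# The heaviest 1% of demands carry over half of all traffic here.
print(f"top 1% of demands carry {top_1_percent / total:.0%} of the bytes")
```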

Time-of-day fluctuations in traffic volumes

– U.S. business, U.S. residential, & International traffic

– Depends on the time-of-day for human end-point(s)

– Reoptimize the routes a few times a day (three?)

Stability?

– No and Yes


Outline

Sound traffic model for traffic engineering of operational IP networks

Methodology for populating the model

Results

Conclusions

– Related work

– Future work


Related Work

Bigger picture

– Topology/configuration (technical report)

» “IP network configuration for traffic engineering”

– Routing model (IEEE Network, March/April 2000)

» “Traffic engineering for IP networks”

– Route optimization (INFOCOM’00)

» “Internet traffic engineering by optimizing OSPF weights”

Populating point-to-point demand models

– Direct observation of MPLS MIBs (GlobalCenter)

– Inference from per-link statistics (Berkeley/Bell-Labs)

– Direct observation via trajectory sampling (next talk!)


Future Work

Analysis of stability of the measured demands

Online collection of topology, reachability, & traffic data

Modeling the selection of the ingress link (e.g., use of multi-exit descriptors in BGP)

Tuning BGP policies to the prevailing traffic demands

Interactions of traffic engineering with other resource-allocation schemes (TCP, overlay networks for content delivery, BGP traffic-engineering “games” among ISPs)


Backup


Identifying Where the Traffic Can Leave

Traffic flows

– Each flow has a dest IP address (e.g., 12.34.156.5)

– Each address belongs to a prefix (e.g., 12.34.156.0/24)

Forwarding tables

– Each router has a table to forward a packet to “next hop”

– Forwarding table maps a prefix to a “next hop” link

Process

– Dump the forwarding table from each edge router

– Identify entries where the “next hop” is an egress link

– Identify the set of all egress links associated with each prefix
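A minimal sketch of that process with fabricated table rows; real rows come from dumping each edge router's forwarding table:

```python
from collections import defaultdict

# (router, prefix, next-hop link, next hop is an egress link?) rows,
# as dumped from the edge routers.
forwarding_rows = [
    ("r1", "12.34.156.0/24", "r1->peerA", True),
    ("r2", "12.34.156.0/24", "r2->peerB", True),
    ("r1", "10.1.0.0/16", "r1->r2", False),  # internal next hop: ignore
]

egress_sets = defaultdict(set)
for router, prefix, link, is_egress in forwarding_rows:
    if is_egress:
        egress_sets[prefix].add(link)

print(dict(egress_sets))  # {'12.34.156.0/24': {'r1->peerA', 'r2->peerB'}}
```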


Measuring Only at Peering Links

Why measure only at peering links?

– Measurement support directly in the interface cards

– Small number of routers (lower management overhead)

– Less frequent changes/additions to the network

– Smaller amount of measurement data

Why is this enough?

– Large majority of traffic is interdomain

– Measurement enabled in both directions (in and out)

– Inference of ingress links for traffic from customers


Full Classification of Traffic Types at Peering Links

[Figure: traffic types relative to the peering links: inbound (peer to customer), outbound (customer to peer), transit (peer to peer), and internal (customer to customer, never crossing a peering link)]


Flows Leaving at Peer Links

Single-hop transit

– Flow enters and leaves the network at the same router

– Keep the single flow record measured at ingress point

Multi-hop transit

– Flow measured twice as it enters and leaves the network

– Avoid double counting by omitting second flow record

– Discard flow record if source does not match a customer

Outbound

– Flow measured only as it leaves the network

– Keep flow record if source address matches a customer

– Identify ingress link(s) that could have sent the traffic
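These three rules reduce to a small decision function; a sketch with assumed inputs (whether the record was also seen at an ingress link, and whether its source address matches a customer):

```python
def keep_record(direction, seen_at_ingress, src_is_customer):
    """Decide whether to keep one flow record measured at a peering link."""
    if direction == "in":
        return True                # measured as it enters: always keep
    if seen_at_ingress:
        return False               # multi-hop transit: drop the duplicate
    return src_is_customer         # outbound: keep only customer-sourced flows

assert keep_record("in", False, False)        # inbound / single-hop transit
assert not keep_record("out", True, True)     # multi-hop transit, second record
assert keep_record("out", False, True)        # outbound from a customer
assert not keep_record("out", False, False)   # unknown source: discard
```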


Results: Populating the Model

Data used:

Type       Ingress                  Egress                   Effectiveness
Inbound    NetFlow                  Reachability             Good
Transit    NetFlow                  NetFlow & Reachability   Good
Outbound   NetFlow & Reachability   Reachability             Pretty good
Internal   Packet filters (X)       Reachability             X

(X marks data that was not available.)
