
Learn Firewalls with Dr. WoW

Dr. WoW
Jointly presented by the Enterprise Network Documentation Development Department and Firewall PDU
Basics
Security Policy
Attack Defense
NAT
GRE&L2TP VPN
IPSec VPN
SSL VPN
Hot Standby
Multi-homing
Contents
1 Basics ............................................................................................................................................... 1
1.1 What Are Firewalls? ......................................................................................................................................... 1
1.2 Development of Firewalls ................................................................................................................................ 3
1.2.1 Stage One: 1989–1994 ............................................................................................................................ 4
1.2.2 Stage Two: 1995–2004 ............................................................................................................................ 4
1.2.3 Stage Three: 2005–Present ...................................................................................................................... 4
1.2.4 Summary ................................................................................................................................................. 5
1.3 Huawei’s Firewall Products at a Glance ........................................................................................................... 5
1.3.1 USG2110 Product Introduction ............................................................................................................... 7
1.3.2 USG6600 Product Introduction............................................................................................................... 7
1.3.3 USG9500 Product Introduction............................................................................................................... 8
1.4 Security Zones .................................................................................................................................................. 8
1.4.1 Relationships Between Interfaces, Networks and Security Zones .......................................................... 9
1.4.2 Direction of Packet Flow Between Security Zones ............................................................................... 12
1.4.3 Security Zone Configuration ................................................................................................................. 14
1.5 Stateful Inspection and Session Mechanism .................................................................................................. 17
1.5.1 Stateful Inspection ................................................................................................................................ 17
1.5.2 Session .................................................................................................................................................. 19
1.5.3 Verification of Stateful Inspection......................................................................................................... 20
1.6 Appendix to the Stateful Inspection and Session Mechanism ........................................................................ 20
1.6.1 More About Sessions ............................................................................................................................ 21
1.6.2 Stateful Inspection and Session Establishment ..................................................................................... 23
1.7 Precautions for Configuration and Troubleshooting Guides .......................................................................... 28
1.7.1 Security Zones ...................................................................................................................................... 28
1.7.2 Stateful Inspection and Session Mechanism ......................................................................................... 29
2 Security Policy ............................................................................................................................. 31
2.1 First Experience of Security Policies ............................................................................................................. 31
2.1.1 Basic Concepts ...................................................................................................................................... 31
2.1.2 Matching Sequence ............................................................................................................................... 33
2.1.3 Implicit Packet Filtering........................................................................................................................ 33
2.2 History of Security Policies ............................................................................................................................ 35
2.2.1 Phase 1: ACL-based Packet Filtering .................................................................................................... 35
2.2.2 Phase 2: UTM-integrated Security Policy ............................................................................................. 36
2.2.3 Phase 3: Unified Security Policy........................................................................................................... 39
2.3 Security Policies in the Local Zone ................................................................................................................ 42
2.3.1 Configuring a Security Policy in the Local Zone for OSPF .................................................................. 42
2.3.2 Which Protocols Require Security Policies Configured in the Local Zone on Firewalls? .................... 47
2.4 ASPF .............................................................................................................................................................. 50
2.4.1 Helping FTP Data Packets Traverse Firewalls ...................................................................................... 50
2.4.2 Helping QQ/MSN Packets Traverse Firewalls ...................................................................................... 54
2.4.3 Helping User-Defined Protocol Packets Traverse Firewalls ................................................................. 55
2.5 Configuration Precautions and Troubleshooting Guide ................................................................................. 57
2.5.1 Security Policy ...................................................................................................................................... 57
2.5.2 ASPF ..................................................................................................................................................... 60
3 Attack Defense............................................................................................................................. 63
3.1 DoS Attack ..................................................................................................................................................... 63
3.2 Single-Packet Attack and Defense ................................................................................................................. 64
3.2.1 Ping of Death Attack and Defense ........................................................................................................ 64
3.2.2 LAND Attack and Defense ................................................................................................................... 65
3.2.3 IP Scanning ........................................................................................................................................... 65
3.2.4 Recommended Configurations for Preventing Single-Packet Attacks .................................................. 65
3.3 SYN Flood Attack and Defense ..................................................................................................................... 66
3.3.1 Attack Mechanism ................................................................................................................................ 68
3.3.2 TCP Proxy ............................................................................................................................................. 69
3.3.3 TCP Source Authentication ................................................................................................................... 71
3.3.4 Commands ............................................................................................................................................ 73
3.3.5 Threshold Configuration Guide ............................................................................................................ 73
3.4 UDP Flood Attack and Defense ..................................................................................................................... 73
3.4.1 Rate Limiting ........................................................................................................................................ 74
3.4.2 Fingerprint Learning ............................................................................................................................. 74
3.4.3 Commands ............................................................................................................................................ 76
3.5 DNS Flood Attack and Defense ..................................................................................................................... 77
3.5.1 Attack Mechanism ................................................................................................................................ 78
3.5.2 Defense Measure ................................................................................................................................... 78
3.5.3 Commands ............................................................................................................................................ 81
3.6 HTTP Flood Attack and Defense ................................................................................................................... 81
3.6.1 Attack Mechanism ................................................................................................................................ 81
3.6.2 Defense Measure ................................................................................................................................... 82
3.6.3 Commands ............................................................................................................................................ 85
4 NAT................................................................................................................................................ 87
4.1 Source NAT .................................................................................................................................................... 87
4.1.1 Source NAT Mechanism ....................................................................................................................... 87
4.1.2 NAT No-PAT ......................................................................................................................................... 89
4.1.3 NAPT .................................................................................................................................................... 92
4.1.4 Egress Interface Address Mode (Easy-IP) ............................................................................................ 93
4.1.5 Smart NAT ............................................................................................................................................ 94
4.1.6 Triplet NAT ........................................................................................................................................... 96
4.1.7 Source NAT in Multi-Egress Scenario ................................................................................................ 100
4.1.8 Summary ............................................................................................................................................. 102
4.1.9 Further Reading .................................................................................................................................. 103
4.2 NAT Server................................................................................................................................................... 104
4.2.1 NAT Server Mechanism ...................................................................................................................... 104
4.2.2 NAT Server in Multi-Egress Scenario ................................................................................................. 107
4.3 Bidirectional NAT ........................................................................................................................................ 113
4.3.1 NAT Inbound + NAT Server ............................................................................................................... 113
4.3.2 Intrazone NAT + NAT Server.............................................................................................................. 116
4.4 NAT ALG ..................................................................................................................................................... 119
4.4.1 FTP Packets Traversing NAT Devices ................................................................................................ 119
4.4.2 QQ/MSN/User-defined Protocol Packets Traversing NAT Devices ................................................... 123
4.4.3 One Command Controlling Two Functions ........................................................................................ 123
4.4.4 Differences Between ASPF for User-defined Protocols and Triplet NAT ........................................... 124
4.5 Function of Blackhole Routes in NAT Scenarios ......................................................................................... 126
4.5.1 Blackhole Route in a Source NAT Scenario ....................................................................................... 126
4.5.2 Blackhole Route in a NAT Server Scenario ........................................................................................ 131
4.5.3 Summary ............................................................................................................................................. 133
5 GRE&L2TP VPN ....................................................................................................................... 135
5.1 Introduction to VPN Technology ................................................................................................................. 135
5.1.1 VPN Classification .............................................................................................................................. 135
5.1.2 Key VPN Technologies ....................................................................................................................... 138
5.1.3 Summary ............................................................................................................................................. 140
5.2 GRE.............................................................................................................................................................. 141
5.2.1 GRE Encapsulation/Decapsulation ..................................................................................................... 142
5.2.2 Configuring Basic GRE Parameters .................................................................................................... 145
5.2.3 Configuring GRE Security Mechanisms ............................................................................................. 147
5.2.4 Approach to Security Policy Configuration ........................................................................................ 150
5.3 The Birth and Evolution of L2TP VPNs ...................................................................................................... 153
5.4 L2TP Client-initiated VPNs ......................................................................................................................... 154
5.4.1 Step 1: Setting Up an L2TP Tunnel (Control Connection)—Three Pieces of Information Enter the Wormhole ..... 156
5.4.2 Step 2: Establishing an L2TP Session—Three Pieces of Information to Awaken the Wormhole Gatekeeper ..... 157
5.4.3 Step 3: Creating a PPP Connection—Identity Authentication and Issuance of the "Special Pass" ..... 158
5.4.4 Step 4: Data Encapsulation Transmission—Passing Through the Wormhole to Visit Earth .............. 161
5.4.5 Approach to Security Policy Configuration ........................................................................................ 163
5.5 L2TP NAS-initiated VPNs ........................................................................................................................... 166
5.5.1 Step 1: Establishing a PPPoE Connection—The Dialing Interface Dials the VT Interface ................ 168
5.5.2 Step 2: Establishing the L2TP Tunnel—Three Pieces of Information to Negotiate Entrance to the Wormhole ..... 169
5.5.3 Step 3: Establishing an L2TP Session—Three Pieces of Information to Awaken the Wormhole Gatekeeper ..... 171
5.5.4 Steps 4-5: LNS Authentication and IP Address Assignment—the LNS Sternly Accepts the LAC ..... 171
5.5.5 Step 6: Data Encapsulation Transmission—Obstacle-Free Communication ..................................... 173
5.5.6 Approach to Security Policy Configuration ........................................................................................ 174
5.6 L2TP LAC-Auto-initiated VPNs .................................................................................................................. 178
5.6.1 LAC-Auto-initiated VPN Principles and Configuration ..................................................................... 178
5.6.2 Approach to Security Policy Configuration ........................................................................................ 182
5.7 Summary ...................................................................................................................................................... 185
6 IPSec VPN................................................................................................................................... 187
6.1 IPSec Overview ............................................................................................................................................ 187
6.1.1 Encryption and Authentication ............................................................................................................ 187
6.1.2 Security Encapsulation ........................................................................................................................ 190
6.1.3 Security Associations .......................................................................................................................... 191
6.2 Manual IPSec VPNs ..................................................................................................................................... 192
6.3 IKE and ISAKMP ........................................................................................................................................ 196
6.4 IKEv1 ........................................................................................................................................................... 197
6.4.1 Configuring IKE/IPSec VPNs............................................................................................................. 197
6.4.2 Phase 1: Establishing IKE SA (Main Mode) ....................................................................................... 200
6.4.3 Phase 2: Establishing IPSec SA .......................................................................................................... 204
6.4.4 Phase 1: Establishing IKE SA (Aggressive Mode) ............................................................................. 206
6.5 IKEv2 ........................................................................................................................................................... 207
6.5.1 IKEv2 Overview ................................................................................................................................. 207
6.5.2 IKEv2 Negotiation Process ................................................................................................................. 209
6.6 Summary of IKE/IPSec ................................................................................................................................ 212
6.6.1 IKEv1 vs. IKEv2 .................................................................................................................... 212
6.6.2 IPSec Protocol Profiles ....................................................................................................................... 213
6.7 Template IPSec ............................................................................................................................................. 214
6.7.1 Point-to-Multi-Point Networking Applications ................................................................................... 215
6.7.2 Customized Pre-Shared Keys (USG9500-Series Firewall Only) ........................................................ 219
6.7.3 Designated Peer Domain Name Usage ............................................................................................... 220
6.7.4 Summary ............................................................................................................................................. 221
6.8 NAT Traversal .............................................................................................................................................. 222
6.8.1 Overview of NAT Traversal Scenarios ................................................................................................ 222
6.8.2 IKEv1 NAT Traversal Negotiation (Main Mode) ............................................................................... 227
6.8.3 IKEv2 NAT Traversal Negotiation ...................................................................................................... 228
6.8.4 IPSec and NAT for a Single Firewall .................................................................................................. 230
6.9 Digital Certificate Authentication ................................................................................................................ 231
6.9.1 Public Key Cryptology and PKI Profiles ............................................................................................ 232
6.9.2 Certificate Applications....................................................................................................................... 233
6.9.3 Digital Certificate Identity Authentication .......................................................................................... 237
6.10 Security Policy Configuration Roadmap .................................................................................................... 239
6.10.1 IKE/IPSec VPN Scenarios ................................................................................................................ 239
6.10.2 IKE/IPSec VPN+NAT Traversal Scenarios ...................................................................................... 242
7 SSL VPN ..................................................................................................................................... 246
7.1 SSL VPN Mechanisms ................................................................................................................................. 246
7.1.1 Advantages of SSL VPN ..................................................................................................................... 246
7.1.2 SSL VPN Use Scenarios ..................................................................................................................... 247
7.1.3 SSL Protocol Operating Mechanisms ................................................................................................. 248
7.1.4 User Identity Authentication ............................................................................................................... 255
7.2 File Sharing .................................................................................................................................................. 258
7.2.1 File Sharing Use Scenarios ................................................................................................................. 258
7.2.2 Configuring File Sharing .................................................................................................................... 260
7.2.3 Interaction Between the Remote User and the Firewall ...................................................................... 260
7.2.4 Interaction of the Firewall with the File Server .................................................................................. 265
7.3 Web Proxy .................................................................................................................................................... 267
7.3.1 Configuring Web Proxy Resources ..................................................................................................... 267
7.3.2 Rewriting URL Addresses .................................................................................................... 269
7.3.3 Rewriting Resource Paths in URLs..................................................................................................... 271
7.3.4 Rewriting Files Contained in URLs .................................................................................................... 272
7.4 Port Forwarding ........................................................................................................................................... 273
7.4.1 Configuring Port Forwarding .............................................................................................................. 273
7.4.2 Preparatory Stage ................................................................................................................................ 275
7.4.3 Telnet Connection Establishment Stage .............................................................................................. 276
7.4.4 Data Communication Stage ................................................................................................................ 279
7.5 Network Extension ....................................................................................................................................... 280
7.5.1 Network Extension Use Scenarios ...................................................................................................... 280
7.5.2 Network Extension Process ................................................................................................................ 281
7.5.3 Reliable Transport Mode and Fast Transport Mode ............................................................................ 283
7.5.4 Configuring Network Extension ......................................................................................................... 285
7.5.5 Login Process ...................................................................................................................................... 288
7.6 Configuring Role Authorization ................................................................................................................... 290
7.7 Configuring Security Policies ...................................................................................................................... 292
7.7.1 Configuring Security Policies for Web Proxy/File Sharing/Port Forwarding Scenarios ..................... 292
7.7.2 Configuring a Security Policy in a Network Extension Scenario ....................................................... 294
7.8 Integrated Use of the Four Major SSL VPN Functions ................................................................................ 299
8 Hot Standby ............................................................................................................................... 304
8.1 Hot Standby Overview ................................................................................................................................. 304
8.1.1 Dual Device Deployment Improving Network Availability ................................................................ 304
8.1.2 Only Routing Failover Needs to Be Considered in Dual Router Deployments .................................. 305
8.1.3 Session Failover Also Needs to Be Considered in Dual Firewall Deployments ................................. 308
8.1.4 Hot Standby Resolving the Problem with Firewall Session Failover ................................................. 309
8.1.5 Summary ............................................................................................................................................. 311
8.2 The Story of VRRP and VGMP ................................................................................................................... 311
8.2.1 VRRP Overview.................................................................................................................................. 311
8.2.2 VRRP Working Mechanisms .............................................................................................................. 314
8.2.3 Issues Created by Multiple, Independent VRRP States ...................................................................... 320
8.2.4 The Creation of VGMP Solves VRRPs' Problems .............................................................................. 321
8.2.5 VGMP Packet Structure ...................................................................................................................... 323
8.2.6 Firewall VGMP Groups' Default States .............................................................................................. 325
8.2.7 The Process of State Formation for Active/Standby Failover Hot Standby ........................................ 326
8.2.8 State Switching Process Following a Primary Device Interface Failure ............................................. 331
8.2.9 State Switching Process After a Failure of the Entire Primary Device ............................................... 334
8.2.10 Process of State Switching After a Failure on the Original Primary Device Is Fixed (Preemption) . 334
8.2.11 Process of State Formation in Load Sharing Hot Standby ................................................................ 337
8.2.12 State Switching Process in Load Sharing Hot Standby ..................................................................... 340
8.2.13 Summary ........................................................................................................................................... 342
8.2.14 Addendum: VGMP State Machine .................................................................................................... 342
8.3 Explanation of VGMP Techniques ............................................................................................................... 344
8.3.1 VGMP Technique For Firewall-Router Connections .......................................................................... 344
8.3.2 VGMP Technique When Firewalls Transparently Access and Connect to Switches .......................... 348
8.3.3 VGMP Technique When Firewalls Transparently Access and Connect to Routers ............................ 350
8.3.4 VGMP Groups' Remote Interface Monitoring Techniques ................................................................. 352
8.3.5 Summary ............................................................................................................................................. 354
8.4 Explanation of the HRP Protocol ................................................................................................................. 356
8.4.1 HRP Overview .................................................................................................................................... 356
8.4.2 HRP Packet Structure and Implementation Mechanisms .................................................................... 359
8.4.3 HRP Backup Methods ......................................................................................................................... 361
8.4.4 Configurations and State Information that HRP Can Back Up ........................................................... 364
8.4.5 Heartbeat Interface and Heartbeat Link Detection Packets ................................................................. 364
8.4.6 HRP Consistency Check Packets' Role and Mechanism ..................................................................... 366
8.5 Hot Standby Configuration Guide ................................................................................................................ 367
8.5.1 Configuration Process ......................................................................................................................... 368
8.5.2 Configuration Check and Result Verification ..................................................................................... 372
9 Multi-homing ............................................................................................................................. 376
9.1 Multi-homing Overview ............................................................................................................................... 376
9.1.1 Shortest Path Routing.......................................................................................................................... 376
9.1.2 Policy-based Routing .......................................................................................................................... 377
9.2 Shortest Path Routing ................................................................................................................................... 378
9.2.1 Default Routing vs. Specific Routing ................................................................................................. 378
9.2.2 ISP Routing ......................................................................................................................................... 382
6
Learn Firewalls with Dr. WoW
9.3 Policy-based Routing ................................................................................................................................... 385
9.3.1 Policy-based Routing Concepts .......................................................................................................... 386
9.3.2 Destination IP Address-based Policy-based Routing .......................................................................... 387
9.3.3 Source IP Address-based Policy-based Routing.................................................................................. 389
9.3.4 Application-based Policy-based Routing ............................................................................................ 391
9.3.5 Policy-based Routing In Out-of-path Networks .................................................................................. 393
10 Firewall Deployment on Campus Network ....................................................................... 399
10.1 Networking Requirements .......................................................................................................................... 399
10.2 Network Planning ....................................................................................................................................... 400
10.2.1 Multi-ISP Routes Planning................................................................................................................ 400
10.2.2 Security Planning .............................................................................................................................. 401
10.2.3 NAT Planning .................................................................................................................................... 402
10.2.4 Bandwidth Management Planning .................................................................................................... 402
10.2.5 Network Management Planning ........................................................................................................ 402
10.3 Configuration Procedure ............................................................................................................................ 403
10.4 Highlights ................................................................................................................................................... 412
11 Firewall Deployment on Media Company Network ....................................................... 414
11.1 Networking Requirements .......................................................................................................................... 414
11.2 Network Planning ....................................................................................................................................... 416
11.2.1 Hot Standby Planning ....................................................................................................................... 416
11.2.2 Multi-ISP Routing Planning .............................................................................................................. 416
11.2.3 Bandwidth Management Planning .................................................................................................... 417
11.2.4 Security Planning .............................................................................................................................. 417
11.2.5 NAT Planning .................................................................................................................................... 418
11.2.6 Intranet Server Planning .................................................................................................................... 419
11.2.7 Log Planning ..................................................................................................................................... 419
11.3 Configuration Procedure ............................................................................................................................ 419
11.4 Highlights ................................................................................................................................................... 431
12 Firewall Deployment on Stadium Network ...................................................................... 432
12.1 Networking Requirements .......................................................................................................................... 432
12.2 Network Planning (For Egress Firewall) .................................................................................................... 434
12.2.1 BGP Planning .................................................................................................................................... 434
12.2.2 OSPF Planning .................................................................................................................................. 434
12.2.3 Hot Standby Planning ....................................................................................................................... 434
12.2.4 Security Function Planning ............................................................................................................... 435
12.2.5 NAT Planning .................................................................................................................................... 435
12.2.6 Planning for Inconsistent Forward and Return Paths ........................................................................ 436
12.3 Network Planning (For Data Center Firewall) ........................................................................................... 437
12.3.1 Hot Standby Planning ....................................................................................................................... 437
12.3.2 Security Function Planning ............................................................................................................... 437
12.4 Configuration Procedure (For Egress Firewall) ......................................................................................... 437
12.5 Configuration Procedure (For Data Center Firewall) ................................................................................. 443
12.6 Highlights ................................................................................................................................................... 446
13 Firewalls On the VPN Connecting Corporate Branches and Headquarters................ 447
13.1 Networking Requirements .......................................................................................................................... 447
13.2 Network Planning ....................................................................................................................................... 449
13.2.1 Interface Planning ............................................................................................................................. 449
13.2.2 Security Policy Planning ................................................................................................................... 449
13.2.3 IPSec Planning .................................................................................................................................. 449
13.2.4 NAT Planning .................................................................................................................................... 449
13.2.5 Routing Planning .............................................................................................................................. 450
13.3 Configuration Procedure ............................................................................................................................ 450
13.4 Highlights ................................................................................................................................................... 458
1 Basics
1.1 What Are Firewalls?
In September 2013, Huawei released its USG6600 Next Generation Firewall (NGFW) at the
first Huawei Enterprise Networking Conference, marking the beginning of a new stage of
development for Huawei’s firewalls.
Following this, in December 2013, Huawei’s NGFW made Huawei the only Chinese vendor
mentioned in Forrester Research’s newest report on network segmentation gateways. This
firewall’s comprehensive functionality and reliable quality have earned it an exceptionally
high satisfaction rating of over 95%, along with excellent reviews.
Thirteen years ago, in 2001, Huawei released its first plug-in firewall card. Time flies, and
over these past 13 years, the Internet has developed at a speed that could not have been
predicted. Huawei’s firewalls have weathered many storms during these formative years, all
the while gradually maturing and growing, a process that continues today.
There are likely more readers familiar with network switches and routers than with firewalls.
As a first line of defense in cyber security, firewalls play an important role, and the time has
come to learn a bit more about this faithful protector.
My name is Dr. WoW. I’ve worked my way up through the ranks at Huawei, and today I’m a
member of Huawei’s Firewall R&D team. In this chapter I’ll draw on Huawei’s firewall and
security products to explain the developmental history of firewalls and their key technologies.
I’ll also go over the implementation principles behind firewalls’ security features, as well as
the methods for configuring them. I hope that through my explanation, all of you network
engineers will gain a firm understanding of firewalls.
I’ll begin with a discussion of the word "firewall". Walls had their beginnings as defensive
structures, and since ancient times have given people a feeling of safety. A firewall is true to
its name—firewalls prevent fires. The word was used originally in construction/architecture,
and these original firewalls stopped fires from spreading from one area to another by isolating
them.
As used in the telecommunications field, firewalls also came to embody this one feature: a
firewall is a specific kind of network equipment generally used to separate two networks from
one another. Of course, this kind of separation is highly ‘smart’; firewalls stop "fires" from
spreading, but guarantee that "people" can still pass through. "Fire" here refers to various
kinds of attacks on networks, while "people" refers to normal communication packets.
With this in mind, and to give a definition that suits firewalls’ position in the telecom world, a
firewall is primarily used to protect a network from attacks and intrusion from other networks.
Because of their ability to isolate and protect, firewalls can be flexibly deployed: on network
perimeters, for internal network segmentation, and in other scenarios. For example, they can
be used at enterprise network egresses, to segment internal subnets in large networks, or on
data center perimeters, as shown in Figure 1-1.
Figure 1-1 Schematic of firewall deployment scenarios
From the above introduction we can see that firewalls, routers, and network switches are
different from one another. Routers are used to connect different networks, and use routing
protocols to guarantee interconnectedness and ensure that packets are sent to their intended
destinations. Network switches are generally used to set up local area networks (LANs), and
are important hubs for LAN communication, quickly forwarding packets through
Layer-2/Layer-3 switching. Firewalls, by contrast, are primarily deployed at network
perimeters to control access into and out of the network; their core feature is security
protection. Routers and network switches are built around forwarding, while firewalls are
built around control, as shown in Figure 1-2.
Figure 1-2 Comparison of firewalls, network switches, and routers
There is an ongoing trend of low and mid-end routers and firewalls being combined, largely
because the two are similar in form and functionality. Huawei has released a
line of this kind of low and mid-end equipment (for example the USG2000/5000 firewall
series) which possess both routing and security functions—these are truly "all in one"
products.
Now that we’ve learned about the basic concepts behind firewalls, the next order of business
is for me to take everyone down the road of firewalls’ evolution.
1.2 Development of Firewalls
In the last section I introduced the basics about firewalls. In this section I will talk with all of
you about the past, present, and future of firewalls. I ask that everyone come along with me,
Dr. WoW, on a trip through the developmental history of firewalls, after which we’ll climb
into our time machine to catch a glimpse of their bright future.
Just as with mankind’s evolution, firewalls have transitioned from being simple ‘life forms’
(simple functionality) to advanced ‘life forms’ (complex functionality) over the course of their
evolutionary history, as shown in Figure 1-3. During this process, the development and
evolution of firewalls have been spurred forward by the rapid development of network
technology and continually emerging demands.
Figure 1-3 Firewalls’ developmental history
The earliest firewalls can be traced back to the end of the 1980s, meaning that firewalls have
already been around for more than 20 years. Over these 20+ years, their developmental
process can roughly be broken down into three stages.
1.2.1 Stage One: 1989–1994
The major events during this stage of firewalls’ development include:
•  1989 saw the birth of packet filtering firewalls, which achieved simple access control. We
call these 1st generation firewalls.
•  Following this came proxy firewalls, which act as application layer proxies for
communication between internal and external networks—these are the 2nd generation
firewalls. Proxy firewalls are fairly secure but have slow processing speeds. Moreover,
because developing a corresponding proxy service for every application is very difficult,
proxy firewalls can support only a small number of applications.
•  In 1994, Check Point released the first firewall based on stateful inspection technology.
Stateful inspection dynamically analyzes a packet's state to determine the action to take
on the packet, and does not require a proxy service for every application. These firewalls
offer fast processing and a high level of security. Stateful inspection firewalls have been
called 3rd generation firewalls.
NOTE
Check Point is an Israeli security company; it released the first commercial firewall based on stateful
inspection technology.
1.2.2 Stage Two: 1995–2004
The major events during this stage of firewalls’ development include:
•  Stateful inspection firewalls became mainstream. In addition to offering access control
functions, firewalls also began to incorporate other functions, such as virtual private
networks (VPNs).
•  Meanwhile, specialized equipment also began to appear in embryonic form. One example
was the Web application firewall (WAF), designed specifically to protect the security of
Web servers.
•  In 2004, the unified threat management (UTM) concept was first introduced in the
industry. UTM combined traditional firewall, intrusion detection, antivirus, URL
filtering, application control, email filtering, and other functions into one device, thus
achieving comprehensive security protection.
1.2.3 Stage Three: 2005–Present
The major events during this stage of firewalls’ development include:
•  After 2004, the UTM market developed quickly, and numerous UTM products sprang
onto the scene. However, new problems also appeared. First among these was that the
depth to which application layer information could be inspected was limited. For
example, if a firewall allowed "men" to pass through but refused to pass "women"
through, should it allow an alien named Professor Du to pass through? Such scenarios
require even more advanced inspection measures, and this has resulted in the wide use of
deep packet inspection (DPI) technology. A second issue was performance: with
multiple security functions operating at the same time, UTM equipment's processing
performance dropped significantly.
•  In 2008, Palo Alto Networks released the Next-Generation Firewall (NGFW), which
resolved the performance degradation that occurred when multiple functions
are running at the same time. NGFWs also allow for management/control of users,
applications, and content.
•  In 2009, Gartner defined the Next Generation Firewall, clarifying the functions and
features that such firewalls should possess. Following this, each security solutions
company released its own NGFW, marking the beginning of a new era for firewalls.
NOTE
Palo Alto Networks is an American security products vendor. It was the first to release the Next
Generation Firewall, and is thus the trailblazer for NGFWs.
Gartner is a renowned IT research & consulting company, and is the developer of the world famous
Magic Quadrant. In 2013, Huawei became the first Chinese company to enter the Gartner firewall and
UTM Magic Quadrant, ample evidence of Huawei’s abilities in developing security products.
1.2.4 Summary
Below are three of the main messages to be learned from firewalls’ developmental history:
•  The first is that firewalls have attained increasingly precise control over access. The
transition from simple access control in the earliest stages of firewall development, to
session-based access control, and then to NGFWs' user-, application-, and content-based
access control has all been aimed at more effective and accurate control.
•  The second is that firewalls’ protective capabilities have grown ever stronger. In the
initial stages of their development, the function of firewalls was to separate/segment.
Intrusion detection functions were then gradually added, as were functions such as
antivirus capabilities, URL filtering, application control, and email filtering. Therefore,
protective measures have increased, and the scope of firewalls’ protection has become
broader.
•  Third is that firewall processing performance has become better with time. The explosion
in network traffic has placed increasingly high demands on firewall performance.
Vendors have continued to improve and optimize both the hardware and software
framework of their firewalls, bringing about continued improvement in firewall
processing performance.
NGFWs do not signal the ‘end of history’ for firewalls. Networks are changing all the time,
and new technologies and demands will continue to arise. Therefore, it may not be many
years before firewalls become even more advanced and smart, and even easier to manage and
configure—this is something worth looking forward to.
1.3 Huawei’s Firewall Products at a Glance
In the last section, I explained the history of firewall development. Firewall functionality has
become increasingly robust, and performance increasingly higher, throughout 20+ years of
development and evolution. Huawei’s firewalls have likewise run the course of beginning
from scratch and becoming stronger through gradual improvements. Over these past 10+
years, Huawei has used bold innovation and continuous breakthroughs to keep its firewalls in
the industry vanguard, achieving milestone after milestone, as shown in Figure 1-4.
Figure 1-4 Timeline of Huawei’s firewall development
I will now go into further detail about Huawei’s firewall products by leading you on a tour of
the entire product line, with a focus on our star products. Hearing (or in this case reading)
about something is one thing, but seeing is believing, so let’s first take a look at the whole
family of Huawei firewalls, shown in Figure 1-5.
Figure 1-5 Huawei’s firewalls
Huawei has four main lines of firewall products—the USG2000, USG5000, USG6000, and
USG9000 series. These series cover low-end, mid-end, and high-end equipment, with broad
functionality and models to suit any need, and are thus more than able to satisfy the demands
of any network scenario.
Of these, the USG2000 and USG5000 series are UTM products, the USG6000 series are
NGFW products, and the USG9000 series are high-end firewall products. Next, I will pick a
few representative firewall products to introduce to you.
1.3.1 USG2110 Product Introduction
The first product we’ll look at today is the compact USG2110, shown in Figure 1-6. It might
look small, but its functionality is strong.
Figure 1-6 USG2110 external view
The USG2110 combines firewall, UTM, VPN, routing, wireless (WiFi/3G) and other
functions into one box. It is ‘plug and play’, easy to configure, and is a network set-up and
connection solution that provides security, flexibility and convenience to clients.
The USG2110 offers excellent quality at a low price. It not only allows customers to save on
investment costs, but also effectively reduces operation and maintenance costs, making it a
must-have ‘genie’ for small and medium businesses, franchises, and SOHO enterprises.
1.3.2 USG6600 Product Introduction
Next is the popular USG6600, shown in Figure 1-7. Part of the USG6000 product series, the
USG6600 is one of Huawei’s products designed for next-generation network scenarios: this
firewall provides next generation security services based on application layer threat
protection, allowing network managers to exercise complete control over the network, and
improving visibility, management, and ease of use.
Figure 1-7 USG6600 external view
It’s not an exaggeration to say that the USG6600 is extremely popular. The firewall earned
placement on the IT CIO Excellence in Product Trustworthiness ranking, an IT 168 Annual
Award for Technical Excellence, and several first place finishes in China Network World’s
NGFW horizontal evaluations. The USG6600 has also won attention and praise in every area
following its release, further proving its popularity.
USG6600’s stand-out features are endless: it has the most accurate application access control,
recognizes over 6000 applications, has multiple kinds of user identification technologies, and
also offers comprehensive unknown threat protection, the simplest security management, and
the best total service performance experience…the list just goes on and on!
1.3.3 USG9500 Product Introduction
The last products to be introduced today are the large USG9500s. These ‘smart giants with big
hearts’ belong to the USG9000 series, and are shown in Figure 1-8. As the industry’s first
Terabit data center firewalls, the USG9500s have successfully passed third-party laboratory
testing by the authoritative US security assessment organization NSS, and have been deemed
the industry’s fastest firewall!
Figure 1-8 USG9500 external views
The USG9500s' distributed software design and integration of multiple industry-leading
security technologies have led to their wide deployment in large data centers and enterprises,
as well as the education, government, and broadcasting industries.
Having the strength of Hercules, or the charm of Aphrodite, is all well and good, but speed is
what truly makes the world go round. High speed networks are the cornerstone for the cloud
computing era, and the USG9500 series offers Huawei’s highest-end firewall products.
Created to carry the weight of the cloud computing world, and sturdy and ready for any
challenge, these products are capable of easily handling an enormous volume of access
requests and data traffic.
Through the above introductions, I’ve led everyone in getting to know several of Huawei’s
firewall products. With your appetite now whetted, I’m sure you’d like to see even more of
Huawei’s outstanding products, wouldn’t you? In addition to the aforementioned products,
Huawei also has many more firewalls. If you’d like to learn more about Huawei’s firewall
products, please visit Huawei’s enterprise business website, where I trust you’ll be able to find
the information you’re looking for.
1.4 Security Zones
Thus far, we’ve examined the concepts behind firewalls as well as their developmental history,
and I’ve also just introduced Huawei’s firewall products; I trust that everyone now has an
elementary understanding of firewalls. Beginning with this section, I’ll detail firewall
technologies and continue to explore the amazing world of firewalls.
1.4.1 Relationships Between Interfaces, Networks and Security
Zones
As we mentioned in section "What Are Firewalls?", firewalls are primarily deployed on
network perimeters to help separate/segment them. The question then arises, how can
firewalls recognize different networks?
To help answer this question, we’ve incorporated a very important concept for use with
firewalls: the security zone, or ‘zone’ for short. A security zone is a collection of one or
more interfaces. Firewalls use security zones to separate/segment networks and to mark the
"routes" along which packets flow. Generally speaking, packets are controlled only when
passing between two different security zones.
NOTE
Under default settings, packets are controlled when passing between different security zones, but are not
controlled when they flow within a single security zone. However, Huawei’s firewalls also support
control over packets flowing within the same security zone. The control mentioned here is implemented
by "rules", also called "security policies", and we will introduce the specifics of these in Chapter 2
"Security Policies."
We all know that firewalls connect to networks through interfaces; once interfaces are
assigned to security zones, those zones are connected to the networks on the interfaces.
Generally speaking, referring to any one security zone is the same as referring to the network
connected to that zone’s interfaces. The relationships between interfaces, networks and
security zones are shown in Figure 1-9.
CAUTION
On Huawei’s firewalls, an interface can be added to only one security zone.
Figure 1-9 Relationships between interfaces, networks and security zones
By assigning interfaces to different security zones, we can create different networks on a
firewall. As shown in Figure 1-10, we’ve assigned Interface 1 and Interface 2 to Security
Zone A, Interface 3 to Security Zone B, and Interface 4 to Security Zone C. In this way there
are three security zones on the firewall, and three corresponding networks.
Figure 1-10 Assigning interfaces to security zones
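As a concrete illustration of Figure 1-10, the sketch below shows how such zones might be created and interfaces assigned to them at a Huawei firewall's command line. This is only a sketch: the zone names and priority values are hypothetical, interface numbering differs by model, and exact command syntax varies across software versions.

```
<FW> system-view
[FW] firewall zone name zone_a
[FW-zone-zone_a] set priority 70
[FW-zone-zone_a] add interface GigabitEthernet1/0/1
[FW-zone-zone_a] add interface GigabitEthernet1/0/2
[FW-zone-zone_a] quit
[FW] firewall zone name zone_b
[FW-zone-zone_b] set priority 60
[FW-zone-zone_b] add interface GigabitEthernet1/0/3
[FW-zone-zone_b] quit
[FW] firewall zone name zone_c
[FW-zone-zone_c] set priority 40
[FW-zone-zone_c] add interface GigabitEthernet1/0/4
[FW-zone-zone_c] quit
```

Each user-defined zone is identified by a unique priority value; the values chosen here are arbitrary and serve only to illustrate that no two zones may share the same priority.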
By default, Huawei’s firewalls provide three security zones: Trust, DMZ, and Untrust. From
these names alone it is apparent that the three zones are rich in meaning, and I’ll delve
deeper into each below.
•  The Trust Zone: this zone’s network is highly trustworthy; it is generally used to define
the network on which internal users are located.
•  The DMZ Zone: this zone’s network has an intermediate level of trustworthiness; it is
generally used to define the network on which internal servers are located.
•  The Untrust Zone: this zone represents untrustworthy networks; it is generally used to
define the Internet and other unsafe networks.
NOTE
The demilitarized zone (DMZ) is a military term used to describe territory administered in a way that is
‘looser’ than the strict administration of districts under military control, but stricter than loosely
administered public spaces. This term has been incorporated into firewall terminology to describe a
security zone with a degree of trustworthiness intermediate to those of internal and external networks.
In scenarios where network traffic is relatively light and the network environment is simple,
using the provided default security zones is enough to satisfy network segmentation needs. In
Figure 1-11, Interface 1 and Interface 2 are connecting to internal users, and so we can assign
these two interfaces to the Trust Zone; Interface 3 is connecting to internal servers, and so it
can be assigned to the DMZ zone; Interface 4 is connecting to the Internet, and it can be
assigned to the Untrust Zone. Of course, for networks with heavier traffic and more complex environments, we can create new security zones as needed.
Figure 1-11 Assigning interfaces to default security zones (Interfaces 1 and 2 to Trust for the intranet, Interface 3 to DMZ for the internal server, Interface 4 to Untrust for the Internet)
Therefore, we can describe the route taken by packets through the firewall when users from
different networks communicate with one another. For example, when users in internal
networks access the Internet, the ‘route’ for packets through the firewall is from the Trust
Zone to the Untrust Zone; when Internet users access internal servers, the route of packets
through the firewall is from the Untrust Zone to the DMZ Zone.
In addition to packets passing between different networks, there are also packets sent from
networks to the firewall itself (for example when we log into the firewall to configure it), as
well as packets sent out by the firewall. How can the routes taken by these packets be
identified on the firewall?
As shown in Figure 1-12, a Local Zone is provided on the firewall to represent the firewall itself. Any packets generated by the firewall can be deemed to have been sent from the Local Zone; any packets that the firewall itself must respond to and handle (rather than forward) can be deemed destined for the Local Zone.
Figure 1-12 The Local security zone (the firewall itself) alongside the Trust, DMZ, and Untrust zones
I’ll also add one reminder about the Local Zone: no interfaces can be added to the Local Zone, but all of the firewall’s interfaces implicitly belong to it. That is to say, when packets pass through an interface toward a network, their destination security zone is the zone in which that interface resides; but when packets arrive through an interface addressed to the firewall itself, their destination security zone is the Local Zone. This allows any device attached to an interface to access the firewall itself, and it also makes the Local Zone’s relationship to the other security zones explicit, thus killing two birds with one stone.
1.4.2 Direction of Packet Flow Between Security Zones
As I explained above, different networks have different levels of trustworthiness. After using
security zones to define networks on firewalls, how can we deduce the trustworthiness of a
security zone? On Huawei’s firewalls, every security zone must have its own unique security
level from 1-100; the larger the number, the higher the trustworthiness of the zone’s network.
For default security zones, the security rating is fixed. The Local Zone’s security level is 100,
the Trust Zone’s security level is 85, the DMZ’s security level is 50, and the Untrust Zone’s
security level is 5.
Setting security levels separates security zones by rank. When packets pass between two
security zones, our rule is that when packets pass from low-level security zones to
high-level security zones, the packet’s direction is considered to be Inbound; when
packets pass from high-level security zones to low-level security zones the packet’s
direction is considered to be Outbound. Figure 1-13 details the directions for packets
passing between the Local Zone, the Trust Zone, the DMZ Zone, and/or the Untrust Zone.
Figure 1-13 Direction of packets passing between security zones (Local 100, Trust 85, DMZ 50, Untrust 5; packets toward the higher-level zone are Inbound, packets toward the lower-level zone are Outbound)
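The inbound/outbound rule above can be captured in a few lines. Below is a minimal Python sketch using the default security levels listed earlier; the function name and data layout are my own invention, not firewall code:

```python
# Default security levels as described above (assumed mapping for illustration).
ZONE_LEVELS = {"local": 100, "trust": 85, "dmz": 50, "untrust": 5}

def packet_direction(src_zone: str, dst_zone: str) -> str:
    """Classify inter-zone traffic: low-to-high is inbound, high-to-low is outbound."""
    src, dst = ZONE_LEVELS[src_zone], ZONE_LEVELS[dst_zone]
    if src == dst:
        raise ValueError("security levels are unique; two zones cannot share a level")
    return "inbound" if src < dst else "outbound"

print(packet_direction("trust", "untrust"))   # internal users reaching the Internet
print(packet_direction("untrust", "dmz"))     # Internet users reaching internal servers
```

For example, internal users accessing the Internet (Trust, level 85, to Untrust, level 5) yields "outbound", matching the rule in the text.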
By configuring security levels, each security zone on the firewall has an explicit, tiered
relationship with one another. Different security zones represent different networks, and the
firewall serves as the node that connects all the networks together. With this architecture as a
foundation, the firewall can manage and control packets passing between each network.
How do firewalls determine which two security zones a packet is passing between? First, the
source security zone can be easily determined, as the security zone for whichever interface the
firewall receives a packet from is the source security zone for the packet.
There are two different scenarios to consider when determining the destination security zone.
In Layer 3 mode, the firewall determines which interface a packet will be sent out from by
checking against the routing table—this interface’s security zone is the destination security
zone for the packet. In Layer 2 mode, the firewall checks the MAC address table to determine
which interface the packet will be sent out from—this interface’s security zone is the
destination security zone for the packet. After the source security zone and the destination
security zone are determined, the two security zones a packet is flowing between can be
ascertained.
There is also another scenario: VPN deployments in which the packet a firewall receives is an encapsulated packet. The firewall decapsulates the packet to obtain the original packet and then checks the routing table to determine the destination security zone: the zone of the interface through which the packet will be sent out. The source security zone, however, cannot simply be determined from the interface that received the packet, so the firewall uses "reverse route table checking" to determine the source security zone of the original packet. More specifically, the firewall treats the original packet's source address as if it were a destination IP address and uses the routing table to determine which interface a packet with this destination would be sent from; that interface's security zone is the zone such a packet would be sent to. But as the real situation is the reverse of this, the security zone found using this "reverse route table checking" method is in fact the source security zone of the packet.
Determining a packet's source and destination security zones is a prerequisite for accurately configuring security policies, and it is vital that everyone understands the methods for doing so. We'll also discuss this while configuring security policies later in this manual.
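To make the two lookups concrete, here is a minimal Python sketch of both the ordinary destination-zone lookup and the reverse lookup described above. The interface names, zone map, and routing table are invented for illustration only:

```python
import ipaddress

# Hypothetical interface-to-zone map and routing table (assumed for illustration).
IFACE_ZONE = {"GE0/0/1": "trust", "GE0/0/2": "dmz", "GE0/0/3": "untrust"}
ROUTES = [
    ("192.168.0.0/24", "GE0/0/1"),   # internal users
    ("172.16.0.0/24", "GE0/0/2"),    # internal servers
    ("0.0.0.0/0", "GE0/0/3"),        # default route toward the Internet
]

def egress_interface(ip: str) -> str:
    """Longest-prefix match, standing in for the firewall's routing table lookup."""
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(net), ifc)
               for net, ifc in ROUTES if addr in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def dest_zone(dst_ip: str) -> str:
    # Destination zone = zone of the interface the packet would leave through.
    return IFACE_ZONE[egress_interface(dst_ip)]

def source_zone_by_reverse_lookup(src_ip: str) -> str:
    # "Reverse route table checking": treat the decapsulated packet's source
    # address as a destination and see which interface (and zone) would reach it.
    return IFACE_ZONE[egress_interface(src_ip)]

print(dest_zone("8.8.8.8"))                          # untrust
print(source_zone_by_reverse_lookup("192.168.0.1"))  # trust
```

The same `egress_interface` helper serves both lookups; only the address fed into it differs, which is exactly the trick behind reverse route table checking.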
1.4.3 Security Zone Configuration
Security zone configuration primarily involves creating security zones and adding interfaces to them. Below is an example of creating a new security zone and adding Interface GE0/0/1 to it (GE0/0/1 can work in either Layer 3 mode or Layer 2 mode).
The configuration commands are very simple. The only thing to note is that newly created security zones have no security level, and we need to configure one before adding an interface to the zone. Of course, as security levels are unique, the configured level cannot be the same as that of any existing security zone.
[FW] firewall zone name test                            //creates security zone test
[FW-zone-test] set priority 10                          //sets security level to 10
[FW-zone-test] add interface GigabitEthernet 0/0/1      //adds Interface GE0/0/1 to the security zone
All of the content we discussed above concerned adding a physical interface to a security zone.
In addition to physical interfaces, firewalls can also support logical interfaces, such as
subinterfaces, and VLANIF interfaces. When these logical interfaces are used they also need
to be added to security zones. Below, I have given examples of adding subinterfaces and
VLANIF interfaces to security zones.
As shown in Figure 1-14, PC A and PC B belong to different sub-networks, and the network switch, which is connected to the firewall's GE0/0/1 interface, has segmented PC A's and PC B's subnets using two VLANs. This kind of networking is a classic "one-armed" environment for firewalls.
Figure 1-14 Using one firewall interface to connect multiple sub-nets (PC A at 192.168.10.2/24 on VLAN 10 and PC B at 192.168.20.2/24 on VLAN 20, both reaching the firewall's GE0/0/1 interface through a switch)
In this scenario, one firewall interface connects two subnets. If we wanted to set different security levels for these two subnets, that is, if we needed to assign PC A and PC B to different security zones, how would we go about configuring this? As any firewall interface can belong to only one security zone, simply adding Interface GE0/0/1 to a security zone will not do. However, we can use subinterfaces or VLANIF interfaces to achieve our desired result.
Let’s first take a look at how to create subinterfaces. First, we create two subinterfaces,
GE0/0/1.10 and GE0/0/1.20, under interface GE0/0/1. These correspond with VLAN 10 and
VLAN 20 respectively. Following this, these two subinterfaces are assigned to different
security zones (Interface GE0/0/1 does not need to be added to a security zone) thus achieving
the goal of assigning PC A and PC B to different security zones, as shown in Figure 1-15.
Figure 1-15 Assigning subinterfaces to security zones (GE0/0/1.10 in Trust1 for PC A at 192.168.10.2/24, GE0/0/1.20 in Trust2 for PC B at 192.168.20.2/24)
The specifics of configuration are as follows:
[FW] interface GigabitEthernet 0/0/1.10
[FW-GigabitEthernet0/0/1.10] vlan-type dot1q 10
[FW-GigabitEthernet0/0/1.10] ip address 192.168.10.1 24
[FW-GigabitEthernet0/0/1.10] quit
[FW] interface GigabitEthernet 0/0/1.20
[FW-GigabitEthernet0/0/1.20] vlan-type dot1q 20
[FW-GigabitEthernet0/0/1.20] ip address 192.168.20.1 24
[FW-GigabitEthernet0/0/1.20] quit
[FW] firewall zone name trust1
[FW-zone-trust1] set priority 10
[FW-zone-trust1] add interface GigabitEthernet 0/0/1.10
[FW-zone-trust1] quit
[FW] firewall zone name trust2
[FW-zone-trust2] set priority 20
[FW-zone-trust2] add interface GigabitEthernet 0/0/1.20
[FW-zone-trust2] quit
Following the above configuration, PC A has been assigned to the Trust1 security zone and PC B to the Trust2 security zone, and we can now exert control over packets from PC A accessing PC B.
Next, we’ll look at how to set up VLANIF interfaces. While still using the network
organization from Figure 1-14, we can create two VLANs on the firewall, configure an IP
address for each of their VLANIF interfaces, and then configure Interface GE0/0/1 to work in
Layer 2 mode (transparent mode), allowing VLAN10 and VLAN20’s packets to pass through.
Assigning VLANIF10 and VLANIF20 to different security zones (without needing to add
GE0/0/1 to a security zone), achieves the goal of assigning PC A and PC B to different
security zones, as shown in Figure 1-16.
Figure 1-16 Assigning VLANIF interfaces to security zones (VLANIF10 in Trust1 for PC A at 192.168.10.2/24, VLANIF20 in Trust2 for PC B at 192.168.20.2/24; GE0/0/1 working in Layer 2 mode)
The specifics of configuration are as follows:
[FW] vlan 10
[FW-vlan-10] quit
[FW] interface Vlanif 10
[FW-Vlanif10] quit
[FW] vlan 20
[FW-vlan-20] quit
[FW] interface Vlanif 20
[FW-Vlanif20] quit
[FW] interface GigabitEthernet 0/0/1
[FW-GigabitEthernet0/0/1] portswitch
[FW-GigabitEthernet0/0/1] port link-type trunk
[FW-GigabitEthernet0/0/1] port trunk permit vlan 10 20
[FW-GigabitEthernet0/0/1] quit
[FW] firewall zone name trust1
[FW-zone-trust1] set priority 10
[FW-zone-trust1] add interface Vlanif 10
[FW-zone-trust1] quit
[FW] firewall zone name trust2
[FW-zone-trust2] set priority 20
[FW-zone-trust2] add interface Vlanif 20
[FW-zone-trust2] quit
After completing the configuration, PC A has been assigned to the Trust1 security zone and PC B to the Trust2 security zone. Control can now be exerted over packets from PC A accessing PC B.
Above, we introduced examples of adding subinterfaces and VLANIF interfaces to security zones. Firewalls also support other logical interfaces besides these two, such as Tunnel interfaces used in Generic Routing Encapsulation (GRE) and Virtual Template interfaces used in the Layer 2 Tunneling Protocol (L2TP). These logical interfaces still need to be added to security zones, and we'll introduce how to do this in the corresponding GRE and L2TP chapters to follow.
Our introduction of the concepts behind security zones and their configuration is complete. I
hope that my introduction has allowed everyone to understand the use of security zones and
grasp the relationships between them, as this will provide you with a good foundation for
deepening your knowledge of firewalls in the following chapters.
1.5 Stateful Inspection and Session Mechanism
As mentioned in section 1.2 "Development of Firewalls", the third-generation firewall is the stateful inspection firewall. This type of firewall set a milestone in firewall history, and its stateful inspection and session mechanism have since served as a basic function that firewalls use to provide security defense. Now, I'm going to introduce the stateful inspection and session mechanism.
1.5.1 Stateful Inspection
Let's start from the background of stateful inspection firewall. On a simple network setup
shown in Figure 1-17, the PC and Web server are deployed in different networks and both
directly connected to the firewall, which controls communications.
Figure 1-17 Network setup for PC-to-Web server access (PC at 192.168.0.1 in the Trust zone, Web server at 172.16.0.1 in the Untrust zone)
When the PC needs to access the Web server for Web pages, a rule numbered 1, listed in Table 1-1, has to be configured on the firewall to allow the packets to pass, much as a security policy would. As this section focuses on the stateful inspection and session mechanism rather than security policies, a simple rule is used for ease of understanding. Security policies will be described in Chapter 2 "Security Policies."
Table 1-1 Rule 1 on the firewall
No.  Source IP Address  Source Port  Destination IP Address  Destination Port  Action
1    192.168.0.1        ANY          172.16.0.1              80                Permit
In this rule, ANY indicates that the source port can be any port, because it is the PC's OS that
determines the source port when the PC accesses the Web server. For the Windows OS, the
source port number can be any one in the range of 1024 to 65535. This port number is
uncertain and is therefore set to ANY.
When this rule applies, all the packets from the PC can pass the firewall and reach the Web server. On receiving the packets, the Web server replies with packets that must reach the PC back through the firewall. Before stateful inspection firewalls emerged, a packet filtering firewall had to be deployed for this function, and another rule, numbered 2, had to be configured to allow the packets in the reverse direction to pass.
Table 1-2 Rule 2 on the firewall
No.  Source IP Address  Source Port  Destination IP Address  Destination Port  Action
1    192.168.0.1        ANY          172.16.0.1              80                Permit
2    172.16.0.1         80           192.168.0.1             ANY               Permit
In rule 2, the destination port can be any port, because the PC uses an uncertain source port to access the Web server. For the reply packets from the Web server to traverse the firewall and reach the PC, the destination port has to be set to ANY in rule 2.
If the PC is on a properly protected network, this configuration leaves a serious security risk. As rule 2 opens all destination ports on the PC, an attacker with malicious intentions could attack the PC while disguised as the Web server, and the attack packets would traverse the firewall unimpeded.
Then let's see how a stateful inspection firewall solves this issue. In the preceding network
setup, rule 1 has to be applied on the firewall as well to allow the PC to access the Web server.
When the access packets reach the firewall, the firewall allows them to pass and sets up a
session for the access. This session will include information about the PC-sent packets, such
as IP addresses and ports.
When receiving the reply packets from the Web server, the firewall compares the packet
information with that included in the session. If the packet information matches and the reply
packets agree with the HTTP protocol, the firewall takes the reply packets as the subsequent
reply packets associated with the PC-to-Web server access, and allows the packets to pass.
Figure 1-18 shows the process.
NOTE
For easy understanding, this section uses an example where the PC and Web server are directly
connected to a firewall. In a real-world setup, if the PC and Web server are deployed in different
networks and directly connected to the firewall, routes have to be configured on the firewall so that the
PC and Web server are mutually reachable. In other words, a route to the PC has to be found on the
firewall even when the reply packets match the session. Only in this way can the reply packets reach the
PC as expected.
Figure 1-18 Packet exchange through the stateful inspection firewall: (1) the PC accesses the Web server; (2) the firewall establishes a session; (3) the Web server replies to the PC; (4) the reply matches the session and is permitted
If an attacker with malicious intentions sends the PC access requests while disguised as the Web server, the firewall will not treat the request packets as reply packets associated with the PC-to-Web server session, and will deny them. This design prevents the security risks of open ports while still enabling the PC to access the Web server.
To sum up, before stateful inspection firewalls emerged, a packet filtering firewall permitted or denied packets based on static rules, treating packets as stateless, isolated units and ignoring their associations. A packet filtering firewall therefore requires a rule for the packets in each direction, which means low efficiency and high security risk.
The stateful inspection firewall fixes this defect. It uses an inspection mechanism based on connection status and treats all the packets exchanged over the same connection between communication peers as a single data flow. For this firewall, the packets in a data flow are associated, not isolated. A session is established for the first packet, and subsequent packets that match the session are forwarded directly without rule-by-rule inspection. This design improves packet forwarding efficiency.
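The first-packet/session logic described above can be sketched in a few lines of Python. The rule and packet formats are invented for illustration, and a real firewall tracks far more state, but the control flow is the same: the first packet is checked against the rules and establishes a session; later packets in either direction skip the rule check if they match the session.

```python
# Static rules: (src_ip, src_port, dst_ip, dst_port, action); None means ANY.
RULES = [
    ("192.168.0.1", None, "172.16.0.1", 80, "permit"),
]

sessions = set()  # established flows, keyed by 5-tuple

def process(pkt):
    key = (pkt["proto"], pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    reverse = (pkt["proto"], pkt["dst"], pkt["dport"], pkt["src"], pkt["sport"])
    # Packets matching an existing session (either direction) bypass the rules.
    if key in sessions or reverse in sessions:
        return "forward (session match)"
    # First packet of a flow: consult the static rules.
    for src, sport, dst, dport, action in RULES:
        if ((src is None or src == pkt["src"]) and (sport is None or sport == pkt["sport"])
                and (dst is None or dst == pkt["dst"]) and (dport is None or dport == pkt["dport"])):
            if action == "permit":
                sessions.add(key)   # establish a session for the first packet
                return "forward (session created)"
    return "drop"

req = {"proto": "tcp", "src": "192.168.0.1", "sport": 2049, "dst": "172.16.0.1", "dport": 80}
rep = {"proto": "tcp", "src": "172.16.0.1", "sport": 80, "dst": "192.168.0.1", "dport": 2049}
print(process(req))   # forward (session created)
print(process(rep))   # forward (session match) -- no reverse rule needed
```

Note that the reply is forwarded by the session match alone: no equivalent of rule 2 exists, which is exactly the improvement over packet filtering described above.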
1.5.2 Session
Now let's take a closer look at sessions. On a firewall, a session refers to a connection established between communication peers, and a collection of sessions forms the session table. The following is a typical session table entry.
http VPN:public --> public 192.168.0.1:2049-->172.16.0.1:80
The key fields in the session table entry are as follows:
- http: application-layer protocol
- 192.168.0.1: source IP address
- 2049: source port
- 172.16.0.1: destination IP address
- 80: destination port
How can you tell the source from the destination? Find the "-->" symbol in the entry: the field before it is the source, and the field after it is the destination.
The five fields (source address, source port, destination address, destination port, and protocol) are important session information and are collectively called the "5-tuple". The stateful inspection firewall treats packets that share the same 5-tuple as one flow and uses the 5-tuple to uniquely identify a connection.
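As a small illustration, the 5-tuple can be pulled out of a session-table line like the one above with a short parser. This Python sketch is tailored only to the example entry format shown earlier; real output can vary:

```python
import re

def parse_session(entry: str) -> dict:
    """Extract the 5-tuple from a session-table line like:
    'http VPN:public --> public 192.168.0.1:2049-->172.16.0.1:80'"""
    m = re.match(r"(\w+)\s+VPN:\S+\s+-->\s+\S+\s+(\S+):(\d+)-->(\S+):(\d+)", entry)
    proto, src, sport, dst, dport = m.groups()
    # The field before '-->' is the source; the field after it is the destination.
    return {"proto": proto, "src": src, "sport": int(sport),
            "dst": dst, "dport": int(dport)}

print(parse_session("http VPN:public --> public 192.168.0.1:2049-->172.16.0.1:80"))
```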
How does the firewall generate a session for protocol packets that carry no port information? ICMP packets, for example, include no ports, so the firewall uses the ID field in the ICMP header as the source port and 2048 as the destination port for the ICMP session. As further examples, the Authentication Header (AH) and Encapsulating Security Payload (ESP) packets used in IPSec (described in later sections) carry no port information either; for these, the firewall sets both the source and destination ports to 0 in the AH and ESP sessions.
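These port-substitution rules can be summarized in a small helper. This is only an illustrative Python sketch of the mapping just described, not firewall code:

```python
def session_key(proto: str, src: str, dst: str, sport=None, dport=None, icmp_id=None):
    """Build a 5-tuple session key, filling in substitute 'ports' for
    protocols that carry none, per the rules described above."""
    if proto == "icmp":
        # ICMP: header ID stands in as the source port; 2048 as the destination port.
        return (proto, src, icmp_id, dst, 2048)
    if proto in ("ah", "esp"):
        # AH/ESP (IPSec): both ports are taken as 0.
        return (proto, src, 0, dst, 0)
    # TCP/UDP carry real port numbers.
    return (proto, src, sport, dst, dport)

print(session_key("icmp", "192.168.0.1", "172.16.0.1", icmp_id=0x1c2b))
print(session_key("esp", "192.168.0.1", "172.16.0.1"))
```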
1.5.3 Verification of Stateful Inspection
Talk is cheap, so let's use the eNSP simulator to set up a simple network and verify the firewall's stateful inspection. The network uses the same topology as Figure 1-17.
NOTE
Enterprise network simulation platform (eNSP) is a graphic network device simulation platform
provided by Huawei for free. It is capable of simulating network devices like enterprise routers, switches
and firewalls, for the purposes of verifying functions and learning network technologies without having
to use real devices. The eNSP can simulate the USG5500 firewall and supports the majority of its security functions. The eNSP will also be used for verification in the sections that follow.
Only one rule (listed in Table 1-1) is configured on the firewall to allow the PC-to-Web server
packets to pass. When the HttpClient runs on the PC to access the Web server, the access is
successful. On the firewall, run the display firewall session table command and you will find
a session in the displayed session table.
[FW] display firewall session table
Current Total Sessions : 1
http VPN:public --> public 192.168.0.1:2049-->172.16.0.1:80
The preceding information shows that the stateful inspection mechanism is functioning. Specifically, when receiving the reply packets from the Web server, the firewall treats them as matching the session and allows them to pass, even though no rule permits packets in the reverse direction.
Hopefully, my introduction helps you understand the stateful inspection and session
mechanism and I suggest that you practice using the eNSP, too.
1.6 Appendix to the Stateful Inspection and Session
Mechanism
In section 1.5 "Stateful Inspection and Session Mechanism", we've learned how stateful
inspection works and what 5-tuple means. Now, you may have the following questions:
- Does a firewall session include only the 5-tuple?
- For what protocols does the firewall establish sessions?
- Does stateful inspection apply to all network environments?
In this appendix to the previous section, I will further discuss the stateful inspection and session mechanism, introduce more about sessions, and summarize how the firewall processes packets with stateful inspection enabled or disabled. Hopefully, this appendix will answer your questions.
1.6.1 More About Sessions
Let's again start from a simple network setup as shown in Figure 1-19, where the PC and Web
server are directly connected to the firewall. The firewall interfaces connecting the PC and
Web server are added to different security zones, and a rule is applied to allow the PC to
access the Web server.
Figure 1-19 Network setup for PC-to-Web server access (PC at 192.168.0.1 on GE0/0/1 in the Trust zone, Web server at 172.16.0.1 on GE0/0/2 in the Untrust zone)
The PC is accessing the Web server normally. If you run the display firewall session table verbose command on the firewall, you can see that the session has been established successfully. The verbose parameter displays more details about the session.
[FW] display firewall session table verbose
Current Total Sessions : 1
http VPN:public --> public
Zone: trust--> untrust TTL: 00:00:10 Left: 00:00:04
Interface: GigabitEthernet0/0/2 NextHop: 172.16.0.1 MAC: 54-89-98-fc-36-96
<--packets:4 bytes:465
-->packets:7 bytes:455
192.168.0.1:2052-->172.16.0.1:80
In addition to the 5-tuple, the command output also includes:
- Zone: the direction in which packets flow between security zones. trust-->untrust indicates that packets flow from the trust zone to the untrust zone.
- TTL: session aging time. When the TTL expires, the session is torn down.
- Left: time left before the session ages out.
- Interface: outgoing interface.
- NextHop: IP address of the next hop towards the destination, which is the Web server's IP address in this setup.
- MAC: MAC address of the next hop towards the destination, which is the Web server's MAC address in this setup.
- <--packets:4 bytes:465: packet statistics in the reverse session direction, that is, the number of packets and bytes sent by the Web server to the PC.
- -->packets:7 bytes:455: packet statistics in the forward session direction, that is, the number of packets and bytes sent by the PC to the Web server.
Among the preceding items, two deserve more attention.
1. Aging time
A session is generated dynamically and does not exist forever. If a session matches no packets for a long time, the communication peers may have disconnected and the session is no longer required. To save system resources, the firewall deletes the session after a certain period of time, which is called the session aging time.
The session aging time has to be set properly. If it is too long, stale sessions may unnecessarily occupy system resources and affect the establishment of other sessions; if it is too short, the firewall may forcibly tear down service connections that still have data to transmit. Huawei firewalls set suitable default aging times for different protocols, for example, 20s for ICMP and 30s for DNS. The defaults are generally fine, but you can change them using the firewall session aging-time command. For example, the following command changes the DNS session aging time to 10s.
[FW] firewall session aging-time dns 10
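The aging behavior itself is easy to model. The sketch below uses a simulated clock and the example defaults mentioned above (20s for ICMP, 30s for DNS); the HTTP value and all of the structure are invented for illustration:

```python
# Assumed per-protocol aging defaults: ICMP/DNS match the examples in the text,
# the HTTP value is made up for this sketch.
DEFAULT_AGING = {"icmp": 20, "dns": 30, "http": 300}

class Session:
    def __init__(self, proto, now):
        self.ttl = DEFAULT_AGING[proto]
        self.last_hit = now              # refreshed whenever a packet matches

    def refresh(self, now):
        self.last_hit = now

    def expired(self, now):
        return now - self.last_hit >= self.ttl

table = {("dns", "192.168.0.1", 5353, "8.8.8.8", 53): Session("dns", now=0)}
# 40 simulated seconds pass with no matching packets; the DNS entry ages out.
aged = [key for key, s in table.items() if s.expired(now=40)]
for key in aged:
    del table[key]
print(len(table))   # 0 -- the idle session was deleted
```

Any matching packet would call refresh() and push the expiry out, which is why long-idle services such as the SQL case below run into trouble.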
For some services on networks, such as SQL database services, two consecutive packets over a connection may be separated by a long interval. When a user retrieves data from an SQL database server, the interval between retrievals may far exceed the session aging time of the SQL database service. Once the firewall ages out the session for this service, the user may experience sluggish or even failed access to the SQL database.
One way to resolve the issue is to extend the session aging time for that type of service, but other sessions of the same type may not need the extended aging time and would then unnecessarily occupy system resources.
To resolve this issue completely, Huawei firewalls provide the persistent connection function, which extends the session aging time only for packets that match specified ACL rules. Unlike extending the aging time per protocol, the persistent connection function extends it far more precisely. The default session aging time of a persistent connection is 168 hours (long enough), but you can change it from the default.
NOTE
Currently, the persistent connection function applies to TCP protocol packets only.
Persistent connections can be configured within or between security zones. The following example configures a persistent connection between the trust and untrust security zones for SQL database packets from 192.168.0.1 (the source) to 172.16.0.2 (the destination).
[FW] acl 3000
[FW-acl-adv-3000] rule permit tcp source 192.168.0.1 0 destination 172.16.0.2 0
destination-port eq sqlnet
[FW-acl-adv-3000] quit
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] long-link 3000 outbound
WARNING: Too large range of ACL maybe affect the performance of firewall, please use
this command carefully!
Are you sure?[Y/N]y
2. Packet statistics
Packet statistics in both directions (identified by the --> and <-- symbols) are important for locating network faults. If there are statistics in only the "-->" direction but none in the "<--" direction, the PC-to-Web server packets have passed the firewall but the Web server-to-PC packets have not, which indicates a communication anomaly. As for possible causes, the firewall may have discarded the Web server-to-PC packets, communication between the firewall and the Web server may have failed, or the Web server may be malfunctioning. The scope of the fault can thus be narrowed down. There are, of course, exceptions: in special network environments, communication may be functional even when there are no packet statistics in one direction. How special is such an environment? That will become clear in later sections.
1.6.2 Stateful Inspection and Session Establishment
The firewall's stateful inspection function treats the packets over a connection as one data flow. How is a connection expressed as a session? This requires the firewall to analyze each protocol's exchange mode. Take TCP as an example: a TCP connection between communication peers is established with a three-way handshake, as shown in Figure 1-20.
Figure 1-20 Three-way handshake of TCP: (1) PC sends SYN (seq=a); (2) Web server replies SYN+ACK (seq=b, ack=a+1); (3) PC sends ACK (ack=b+1)
As a SYN packet initiates a TCP connection, it is usually called the first packet. For a TCP connection, the firewall establishes a session only after it receives a SYN packet and that SYN packet is permitted by a rule; the TCP packets that then match the session are forwarded directly. If the firewall receives no SYN packet but only subsequent SYN+ACK or ACK packets, it establishes no session and discards those packets.
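The first-packet rule for TCP can be sketched as follows. The flag constants match the TCP header bits, but the session handling is deliberately simplified (a real firewall also matches the reverse direction and tracks handshake state):

```python
# TCP flag bits as defined in the TCP header.
SYN, ACK = 0x02, 0x10

sessions = set()

def handle_tcp(key, flags):
    """Only a bare SYN may create a session; other packets need an existing one."""
    if key in sessions:
        return "forward"
    if flags == SYN:                  # connection-initiating first packet
        sessions.add(key)
        return "forward (session created)"
    return "drop (no session for non-SYN packet)"

key = ("192.168.0.1", 2051, "172.16.0.1", 80)
print(handle_tcp(key, SYN | ACK))   # dropped: SYN+ACK arrived with no session
print(handle_tcp(key, SYN))         # the SYN creates the session
print(handle_tcp(key, ACK))         # subsequent packets now match and pass
```

The first call models exactly the asymmetric-path failure discussed next: a SYN+ACK arriving without a preceding SYN finds no session and is discarded.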
This process works fine except in special network environments. As shown in Figure 1-21, request packets from the internal network go to the external network directly through the router, while reply packets from the external network are forwarded by the router to the firewall, which processes them and sends them back to the router; the router finally forwards the replies to the internal network. In other words, the firewall receives no SYN packets, only SYN+ACK packets. In this example, the request and reply packets are forwarded over different paths.
Figure 1-21 Request and reply packets being forwarded over different paths (SYN packets from the internal network reach the external network via the router only; SYN+ACK replies are diverted through the firewall)
In this network environment, the firewall discards the received SYN+ACK packets because they match no session. Consequently, communication between the internal and external networks is interrupted. So what can be done?
The firewall provides the option of disabling stateful inspection. With stateful inspection disabled, the firewall no longer analyzes connection status, behaving like a packet filtering firewall: it establishes a session for any packets that the rules (security policies) allow to pass, which keeps communication working.
CAUTION
Disabling stateful inspection will change the firewall working mode. On live networks, do not
disable stateful inspection, unless otherwise required.
The following uses a network setup where the request and reply packets are forwarded over
different paths as an example to show how the firewall processes the TCP, UDP, and ICMP
protocol packets, when its stateful inspection is enabled and disabled.
TCP
Let's start with the TCP protocol. The network setup is simulated using the eNSP. The request
packets from the PC reach the Web server through the router and the reply packets from the
Web server are forwarded to the firewall, then back to the router and finally to the PC. Figure
1-22 shows the network topology.
NOTE
To simulate the network setup, policy-based routing (PBR) needs to be configured on the router so that
the reply packets from the Web server are redirected to the firewall. For details on how to configure PBR,
see the router configuration guides. In addition, a route to the PC needs to be configured on the firewall
and the route's next hop has to be the router's interface (at 10.1.2.2 in this example) connected to the
firewall's interface GE0/0/1.
Figure 1-22 TCP request and reply packets being forwarded over different paths (Web server at 172.16.0.1 in the Untrust zone; PC at 192.168.0.1 in the Trust zone; the firewall connects to the router via GE0/0/1)
The rule listed in Table 1-3 is configured on the firewall to allow the reply packets from the
Web server to pass.
Table 1-3 Rule to allow the reply packets from the Web server to pass
No.  Source IP Address  Source Port  Destination IP Address  Destination Port  Action
1    172.16.0.1         80           192.168.0.1             ANY               Permit
When stateful inspection is enabled on the firewall, an attempt for the PC to access the Web
server fails, as shown in Figure 1-23.
Figure 1-23 PC's failure to access the Web server
On the firewall, no session information can be found.
[FW] display firewall session table
Current Total Sessions : 0
When you run the display firewall statistic system discard command to check packet loss
on the firewall, you will find the "Session miss packets discarded" counter.
[FW] display firewall statistic system discard
Packets discarded statistic
Total packets discarded: 8
Session miss packets discarded: 8
This information indicates that the firewall has discarded packets that match no session.
Because the firewall receives the reply SYN+ACK packets but never the corresponding SYN
packets, no session can be established, and the firewall has to discard the SYN+ACK packets.
Then you use the undo firewall session link-state check command to disable stateful
inspection.
[FW] undo firewall session link-state check
Then an attempt for the PC to access the Web server succeeds, and session information can be
found on the firewall.
[FW] display firewall session table verbose
Current Total Sessions : 1
tcp VPN:public --> public
Zone: untrust--> trust TTL: 00:00:10 Left: 00:00:10
Interface: GigabitEthernet0/0/1 NextHop: 10.1.2.2 MAC: 54-89-98-e4-79-d5
<--packets:0 bytes:0 -->packets:5 bytes:509
172.16.0.1:80-->192.168.0.1:2051
In the session information, there are packet statistics in the "-->" direction but none in the "<--"
direction, which means that only the reply packets from the server pass through the firewall. We
can therefore conclude that, after stateful inspection is disabled, the firewall establishes a
session for the received SYN+ACK packets, maintaining communication between the PC and the Web server.
On a network where the request and reply packets are forwarded over different paths and
stateful inspection is disabled on the firewall, one session direction shows no packet statistics,
yet communication is normal. This is the "special" case mentioned in the preceding sections: on
live networks, no rule applies universally.
UDP
Then let's take a look at the UDP protocol. Unlike TCP, UDP is a connectionless protocol. The
firewall establishes a session for the received UDP packets if the rule allows them to pass,
regardless of whether stateful inspection is enabled or not.
ICMP
ICMP is a reminder of ping tests. Ping tests are usually used in routine maintenance to check
whether a device is reachable on a network. In a ping operation, you send an echo request,
and the destination device replies with an echo reply.
When stateful inspection is enabled, the firewall establishes a session for a received echo
request only if a rule allows it to pass. If the firewall receives an echo reply without having seen
the corresponding echo request, it establishes no session and discards the reply. When stateful
inspection is disabled, the firewall establishes a session for either an echo request or an echo reply.
The following provides an example of session information for a network where the request
and reply packets are forwarded over different paths, and stateful inspection is disabled on the
firewall.
[FW] display firewall session table verbose
Current Total Sessions : 1
icmp VPN:public --> public
Zone: untrust--> trust TTL: 00:00:20 Left: 00:00:11
Interface: GigabitEthernet0/0/1 NextHop: 10.1.2.2 MAC: 54-89-98-e4-79-d5
<--packets:0 bytes:0 -->packets:1 bytes:60
172.16.0.1:2048-->192.168.0.1:45117
For other types of ICMP packets, the firewall establishes no session; it simply forwards the
packets if a rule allows them to pass, regardless of whether stateful inspection is enabled.
Table 1-4 summarizes how the firewall processes TCP, UDP, and ICMP packets when stateful
inspection is enabled or disabled, assuming that a firewall rule allows the packets to pass.
Table 1-4 Session establishment for the TCP, UDP, and ICMP packets
Protocol  Packet Type              Stateful Inspection Enabled                 Stateful Inspection Disabled
TCP       SYN packets              Session established, packets forwarded      Session established, packets forwarded
TCP       SYN+ACK and ACK packets  Session not established, packets discarded  Session established, packets forwarded
UDP       All UDP packets          Session established, packets forwarded      Session established, packets forwarded
ICMP      Ping echo requests       Session established, packets forwarded      Session established, packets forwarded
ICMP      Ping echo replies        Session not established, packets discarded  Session established, packets forwarded
ICMP      Other ICMP packets       Session not established, packets forwarded  Session not established, packets forwarded
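The behavior summarized in Table 1-4 can be sketched as a small decision function (an illustrative model of the table, not firewall code; the packet-type names are ours):

```python
def stateful_decision(packet_type: str, stateful: bool) -> tuple[bool, bool]:
    """Return (session_established, forwarded) for a packet already
    permitted by the rule, following Table 1-4."""
    first_packets = {"tcp_syn", "udp", "icmp_echo_request"}
    later_packets = {"tcp_syn_ack", "tcp_ack", "icmp_echo_reply"}
    if packet_type in first_packets:
        return (True, True)           # first packets: session built, forwarded
    if packet_type in later_packets:
        if stateful:
            return (False, False)     # no matching session: discarded
        return (True, True)           # stateless: session built, forwarded
    if packet_type == "icmp_other":
        return (False, True)          # forwarded without a session either way
    raise ValueError(f"unknown packet type: {packet_type}")
```

For example, a SYN+ACK arriving with stateful inspection enabled yields `(False, False)`: no session, packet discarded, exactly the behavior observed in the TCP experiment above.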
The preceding sections explain how the firewall processes TCP, UDP, and ICMP packets
when stateful inspection is enabled or disabled, to help you better understand the stateful
inspection and session mechanism. The next section describes the essential precautions for
configuring security zones and the stateful inspection and session mechanism, and also
provides troubleshooting guidelines.
1.7 Configuration Precautions and Troubleshooting Guidelines
1.7.1 Security Zones
For a new security zone on a firewall, a priority (security level) has to be specified; otherwise,
interfaces cannot be added to the security zone. The following shows an example of a failed
attempt to add an interface to a security zone.
[FW] firewall zone name abc
[FW-zone-abc] add interface GigabitEthernet 0/0/1
Error: Please set the priority on this zone at first.
The following command specifies a priority, which must be unique among security zones.
[FW-zone-abc] set priority 10
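The two constraints just shown, a zone needing a priority before interfaces can join it and priorities being unique, can be modeled in a short sketch (a toy model for illustration, not the firewall's implementation):

```python
class ZoneTable:
    """Toy model of the firewall's zone registry: a zone must be given a
    priority before interfaces can join it, and priorities are unique."""

    def __init__(self) -> None:
        self.priority: dict[str, int] = {}   # zone name -> priority
        self.members: dict[str, list] = {}   # zone name -> interfaces

    def set_priority(self, zone: str, prio: int) -> None:
        if prio in self.priority.values():
            raise ValueError(f"priority {prio} is already used by another zone")
        self.priority[zone] = prio

    def add_interface(self, zone: str, interface: str) -> None:
        if zone not in self.priority:
            # Mirrors the device error: "Please set the priority on this zone at first."
            raise ValueError("set the priority on this zone first")
        self.members.setdefault(zone, []).append(interface)
```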
A common mistake during security zone configuration is forgetting to add interfaces to security
zones. If no interfaces are added to a security zone, the firewall can determine neither the path
for forwarding packets nor the interzone relationship. Consequently, the firewall discards the
packets and the service becomes unavailable.
In this case, you can use the display zone command to check security zone configurations on
the firewall and the interfaces that have been added to the security zone.
[FW] display zone
local
priority is 100
#
trust
priority is 85
interface of the zone is (1):
GigabitEthernet0/0/1
#
untrust
priority is 5
interface of the zone is (2):
GigabitEthernet0/0/2
GigabitEthernet0/0/3
#
dmz
priority is 50
interface of the zone is (0):
#
abc
priority is 10
interface of the zone is (0):
#
When a service is unavailable, packets may have been lost. You can use the display firewall
statistic system discard command to check packet statistics on the firewall. If the following
command output is displayed, the firewall cannot determine the interzone relationship and has
to discard packets.
[FW] display firewall statistic system discard
Packets discarded statistic
Total packets discarded: 5
Interzone miss packets discarded: 5
The root cause of the packet loss is that the interfaces have not been added to the security zone.
As you can see, the packet loss information on the firewall helps locate faults.
1.7.2 Stateful Inspection and Session Mechanism
The core technology of a stateful inspection firewall is to analyze the status of connections
between communication peers and establish sessions for forwarding packets. If a service is
unavailable, the corresponding session may not have been established on the firewall. Keep
this in mind when troubleshooting.
You can use the display firewall session table command to check for a session of a service.
If there is no service session on the firewall…
There are two possible causes:
- The service packets do not reach the firewall.
- The service packets are discarded by the firewall.
For the first cause, the service packets may be discarded by other network devices before they
reach the firewall. If the other network devices do not discard the service packets, then it is the
firewall that discards them.
In this case, run the display firewall statistic system discard command to check packet loss
statistics on the firewall. If the following information is displayed, the firewall discards packets
because it fails to find an ARP entry.
[FW] display firewall statistic system discard
Packets discarded statistic
Total packets discarded: 2
ARP miss packets discarded: 2
If the firewall fails to obtain ARP entries, check the ARP function on its upstream and
downstream devices.
If the following information is displayed, the firewall discards packets because it cannot find a
route for them.
[FW] display firewall statistic system discard
Packets discarded statistic
Total packets discarded: 2
FIB miss packets discarded: 2
In this case, the firewall has a route configuration issue. Check the routes to the destinations on
the firewall.
If the following information is displayed, the firewall discards packets because it cannot find a
session for them.
[FW] display firewall statistic system discard
Packets discarded statistic
Total packets discarded: 2
Session miss packets discarded: 2
The firewall may receive the subsequent packets but not the first packets. In this case, check
whether the request and reply packets are forwarded over different paths. If required, use the
undo firewall session link-state check command to disable stateful inspection for
verification.
If the following information is displayed, the firewall discards packets because it fails to
establish a session.
[FW] display firewall statistic system discard
Packets discarded statistic
Total packets discarded: 2
Session create fail packets discarded: 2
The session table on the firewall may be full, so no more sessions can be established. In this
case, check for sessions that stay idle for a long time. For example, there may be a large
number of DNS sessions that each transmit only a few packets. The DNS session aging time
can then be reduced to 3 seconds to speed up aging, using the following command:
[FW] firewall session aging-time dns 3
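The troubleshooting steps in this section boil down to mapping each discard counter to a likely cause. A rough sketch (the counter names follow the command output above; the suggested checks paraphrase this section):

```python
# Each "display firewall statistic system discard" counter mapped to the
# likely cause and the check suggested in this section.
DISCARD_DIAGNOSIS = {
    "Interzone miss packets discarded":
        "Interfaces not added to security zones; check with 'display zone'.",
    "ARP miss packets discarded":
        "No ARP entry found; check ARP on the upstream and downstream devices.",
    "FIB miss packets discarded":
        "No route to the destination; check route configuration.",
    "Session miss packets discarded":
        "Reply received without a matching session; check for asymmetric "
        "paths (disable stateful inspection to verify, if required).",
    "Session create fail packets discarded":
        "Session table may be full; shorten the aging time of idle sessions "
        "(for example, 'firewall session aging-time dns 3').",
}

def diagnose(counter: str) -> str:
    """Return the likely cause and suggested check for a discard counter."""
    return DISCARD_DIAGNOSIS.get(counter, "Unknown counter; see the product documentation.")
```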
If there is a service session on the firewall…
Use the display firewall session table verbose command to check the session details. If the
following information is displayed, there are packet statistics in the forward session direction
but none in the reverse session direction.
[FW] display firewall session table verbose
Current Total Sessions : 1
icmp VPN:public --> public
Zone: trust--> untrust TTL: 00:00:10 Left: 00:00:04
Interface: GigabitEthernet0/0/1 NextHop: 172.16.0.1 MAC: 54-89-98-fc-36-96
<--packets:0 bytes:0 -->packets:5 bytes:45
192.168.0.1: 54187-->172.16.0.1:2048
As for the probable causes, the reply packets may not reach the firewall, or they may be
discarded by the firewall. Check whether the packets are discarded by other network devices
before reaching the firewall, and also check the packet loss statistics on the firewall.
2
Security Policy
2.1 First Experience of Security Policies
As I mentioned many times in the preceding chapter, "rules" are actually the "security
inspectors" that enforce access control and play an important role when firewalls forward
packets. Packets can flow between security zones only when the action in the matching rule is
"permit". If the action is "deny", the packets are discarded.
On firewalls, rules are expressed as "security policies". I will explain security policies in
detail in this chapter.
2.1.1 Basic Concepts
First, let's start from a simple network environment. As shown in Figure 2-1, a PC and a Web
server are on different networks, and both of them connect to a firewall. The PC is in the Trust
zone, while the Web server is in the Untrust zone.
Figure 2-1 Networking for a PC to access a Web server
(Topology: the PC at 192.168.0.1 in the Trust zone and the Web server at 172.16.0.1 in the Untrust zone, both connected to the firewall.)
If we want the firewall to allow the PC to access the Web server, the requirement can be
described as follows: Allow packets to pass from source address 192.168.0.1 in the Trust zone
to destination address 172.16.0.1 and destination port 80 (HTTP port) in the Untrust zone.
If we express the requirement in a security policy and add the implied source port information,
the result is shown in Figure 2-2.
Figure 2-2 Security policy for a PC to access a Web server
(Diagram: a security policy applied between the Trust and Untrust zones, with condition fields — source address, source port, destination address, and destination port — and an action of permit or deny.)
We can see that security policies are based on interzone relationships. A security policy
consists of the following parts:
- Condition: indicates the criteria against which the firewall checks packets. The firewall
compares the information carried in a packet with the condition field by field to determine
whether the packet matches.
- Action: indicates the action taken on matching packets. One policy has only one action,
either permit or deny.
Note that the condition has multiple fields, such as the source address, destination address,
source port, and destination port. These fields are ANDed. That is, a packet matches a policy
only when the information in the packet matches all the fields in the policy. If one field has
multiple options (such as two source addresses or three destination addresses), the options are
ORed. That is, a packet matches a field when it matches one option of the field.
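The AND-across-fields, OR-within-a-field semantics can be sketched as follows (an illustrative model; the field names are ours):

```python
def matches(packet: dict, condition: dict) -> bool:
    """A packet matches a policy condition only if every specified field
    matches (AND); a field with several options matches if the packet's
    value equals any one of them (OR)."""
    return all(packet.get(field) in options          # AND across fields
               for field, options in condition.items())

# Condition with two source-address options (ORed) and single-valued fields.
condition = {
    "src_addr": {"192.168.0.1", "192.168.0.2"},
    "dst_addr": {"172.16.0.1"},
    "dst_port": {80},
}
```

A packet from either listed source address to 172.16.0.1 on port 80 matches; changing any one field value to something outside its options makes the whole match fail.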
After the security policy is configured on the firewall, the PC can access the Web server. The
reply packets from the Web server to the PC match the established session, so no additional
security policy is required. This mechanism was described in section 1.5.
In real-world network environments, communication often takes place between two networks
(such as 192.168.0.0/24 and 172.16.0.0/24), not just between two hosts (such as the PC and the
Web server). In this case, we set the condition of a security policy to a network. For example,
we can permit the packets from source network 192.168.0.0/24 in the Trust zone to destination
network 172.16.0.0/24 in the Untrust zone. You may ask: what if we want to deny the packets
from one host (for example, 192.168.0.100) on network 192.168.0.0/24 to network 172.16.0.0/24?
We can configure another security policy to deny the packets from source address
192.168.0.100 in the Trust zone to the Untrust zone. Here you may ask what the firewall will
do since source address 192.168.0.100 matches the conditions of both security policies.
Let's look at the matching sequence of security policies.
2.1.2 Matching Sequence
Security policies are matched in sequence. When a firewall forwards packets between security
zones, it searches interzone security policies top down. If a packet matches a specific security
policy, the firewall takes the action defined in the policy and stops searching remaining
security policies. If the packet does not match the policy, the firewall continues to search
remaining policies one by one.
Because of the matching sequence, we must comply with the principle of "putting more
specific policies before more general ones" when configuring security policies, just as in
configuring ACL rules.
The preceding situation is used as an example. As shown in Figure 2-3, we configure the first
security policy to deny the packets from 192.168.0.100 in the Trust zone to the Untrust zone
and the second security policy to allow packets to pass from network 192.168.0.0/24 in the
Trust zone to network 172.16.0.0/24 in the Untrust zone.
Figure 2-3 Matching sequence of security policies
(Diagram: two Trust-to-Untrust security policies searched in sequence. Policy 1: condition source address 192.168.0.100, action deny. Policy 2: condition source address 192.168.0.0/24 and destination address 172.16.0.0/24, action permit.)
When the firewall searches the security policies, the packets from 192.168.0.100 match the first
policy and are therefore denied. Other packets from network 192.168.0.0/24 match the second
policy and are forwarded. If we swapped the order of the two security policies, packets from
192.168.0.100 would never match the deny policy.
You may ask how the firewall processes packets that match none of the security policies.
For such packets, firewalls apply the "implicit packet filtering" policy.
2.1.3 Implicit Packet Filtering
The implicit packet filtering policy can be an implicit permit or an implicit deny, and it applies
to all packets that do not match any explicit policy. Implicit packet filtering is therefore the
firewall's last resort, as shown in Figure 2-4. Note that implicit packet filtering has nothing to
do with the first-generation packet filtering firewalls.
Figure 2-4 Security policies and implicit packet filtering
(Diagram: the same two Trust-to-Untrust security policies as in Figure 2-3, searched in sequence, followed by the implicit packet filtering entry — condition any, action permit or deny — as the last match.)
By default, the action of implicit packet filtering is deny. That is, the firewall does not allow
packets that match no explicit security policy to flow between security zones. To simplify
configuration, the implicit packet filtering action between security zones may be set to permit.
However, doing so brings huge security risks: the firewall no longer functions as a firewall, and
there is no network isolation. Therefore, we recommend that you configure specific security
policies instead of setting the implicit packet filtering action to permit.
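The top-down matching sequence and the implicit packet filtering fallback can be sketched together (a simplified model using the two example policies; the default argument models the implicit deny):

```python
import ipaddress

# Policies in configured order: the more specific deny comes first.
POLICIES = [
    {"src": "192.168.0.100/32", "dst": "0.0.0.0/0",     "action": "deny"},
    {"src": "192.168.0.0/24",   "dst": "172.16.0.0/24", "action": "permit"},
]

def evaluate(src: str, dst: str, implicit_action: str = "deny") -> str:
    """Search policies top down; the first match wins. Packets matching
    no explicit policy get the implicit packet filtering action."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for policy in POLICIES:
        if s in ipaddress.ip_network(policy["src"]) and d in ipaddress.ip_network(policy["dst"]):
            return policy["action"]
    return implicit_action
```

Here `evaluate("192.168.0.100", "172.16.0.5")` returns "deny" even though the host also matches the broader permit policy; swapping the two entries would make the deny policy unreachable.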
The preceding security policies apply to packets flowing between security zones. Can Huawei
firewalls control packets within a security zone? Of course. By default, packets within a
security zone are not controlled by security policies and are forwarded without restriction.
However, Huawei firewalls support intrazone security policies, so you can configure them to
control the flow of packets within a security zone when necessary.
Note that when the interfaces on a firewall work in Layer 2 (transparent) mode, packets
passing through the firewall are still controlled by security policies. In this case, security
policies are also required.
Security policies control not only the service packets forwarded by a firewall, but also the
packets that the firewall exchanges with other devices, such as the packets generated when an
administrator logs in to the firewall or the firewall establishes a VPN with another device. The
conditions in such security policies differ a lot, and we will introduce them in section 2.4.
Through the preceding introduction, I believe you now have a rough understanding of security
policies. Everything changes, and so do the security policies on Huawei firewalls. In the next
section, we will look back at the history of Huawei firewall security policies.
2.2 History of Security Policies
The networks are constantly changing, and security threats emerge one after another. To adapt
to such changes, Huawei firewalls are constantly upgraded, and security policies are improved
accordingly.
As shown in Figure 2-5, the development of Huawei firewall security policies has
experienced three phases: ACL-based packet filtering, UTM-integrated security policy, and
unified security policy.
Figure 2-5 History of Huawei firewall security policies
- ACL-based packet filtering (traditional firewall): matching condition is the quintuple
information in packets (source/destination addresses, source/destination ports, and protocol)
plus a time range; the action is permit or deny; implemented by referencing an ACL in an
interzone.
- UTM-integrated security policy (UTM): matching condition is the quintuple information plus
user and time range; the action is permit or deny, with UTM processing (such as IPS, antivirus,
and URL filtering) performed on permitted packets; implemented by configuring policies with
conditions and actions.
- Unified security policy (NGFW): accurately identifies packets based on six dimensions
(ACTUAL); identifies the application type and content of traffic in one pass and allows
concurrent processing of multiple content security services; enhanced application, content, and
threat awareness capabilities.
The development history shows the following characteristics:
- Matching conditions are more refined, from IP address- and port-based packet identification
on traditional firewalls to user-, application-, and content-based identification on
next-generation firewalls (NGFWs). The packet identification capability is greatly enhanced.
- More actions are available. At first, packets were simply permitted or denied; now firewalls
can also perform various content security checks on permitted packets.
- Configuration is more convenient. To configure security policies on traditional firewalls, you
must be familiar with ACL configuration, whereas the unified security policy configuration on
NGFWs is simpler, more convenient, and easier to understand.
Let's describe the three phases one by one.
2.2.1 Phase 1: ACL-based Packet Filtering
ACL-based packet filtering is the implementation on early Huawei firewalls, such as
V100R003 of USG2000/5000 series and V200R001 of USG9500 series.
In this phase, ACLs are configured to control packets. Each ACL contains several rules, and
each rule has its condition and action. ACLs must be configured and referenced in interzones.
When forwarding a packet between security zones, a firewall searches for rules in ACLs top
down. If the packet matches a rule, the firewall takes the action defined in the rule and stops
rule searching. If the packet does not match the rule, the firewall continues to search for the
next rule. If the packet does not match any rule, the firewall takes the action defined for the
implicit packet filtering.
As shown in Figure 2-6, the Trust-Untrust interzone relationship is used as an example to
explain the logic of ACL-based packet filtering.
Figure 2-6 Logic of ACL-based packet filtering
(Diagram: ACL-based packet filtering between the Trust and Untrust zones. The ACL contains rules 1 to N, searched in sequence; each rule has a condition — protocol, source address, source port, destination address, destination port — and an action of permit or deny. Implicit packet filtering, with condition any and action permit or deny, applies last.)
To configure ACL-based packet filtering, you must first configure an ACL and then reference
the ACL in the interzone. For example, to deny the packets from 192.168.0.100 in the Trust
zone to the Untrust zone and permit the packets from 192.168.0.0/24 to 172.16.0.0/24,
configure the following ACL:
[FW] acl 3000
[FW-acl-adv-3000] rule deny ip source 192.168.0.100 0
[FW-acl-adv-3000] rule permit ip source 192.168.0.0 0.0.0.255 destination 172.16.0.0 0.0.0.255
[FW-acl-adv-3000] quit
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] packet-filter 3000 outbound
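The wildcard masks in this ACL (a 0 bit must match, a 1 bit is ignored — the inverse of a netmask) can be checked with a small sketch (illustrative, not device code):

```python
import ipaddress

def wildcard_match(addr: str, base: str, wildcard: str) -> bool:
    """ACL wildcard-mask matching: bits set to 0 in the mask must match the
    base address; bits set to 1 are "don't care"."""
    a = int(ipaddress.ip_address(addr))
    b = int(ipaddress.ip_address(base))
    w = int(ipaddress.ip_address(wildcard))
    # Keep only the "must match" bits on both sides and compare.
    return (a & ~w) & 0xFFFFFFFF == (b & ~w) & 0xFFFFFFFF
```

With wildcard 0.0.0.255, any address in 192.168.0.0/24 matches the rule's source condition; with wildcard 0.0.0.0 (as in the deny rule), only the exact host 192.168.0.100 matches.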
2.2.2 Phase 2: UTM-integrated Security Policy
With the release of UTM products, Huawei firewall security policies took a step forward and
became actual "policies". Unlike ACL-based packet filtering, UTM-integrated security policies
have their conditions and actions defined directly, without referencing ACLs. In addition, if the
action is permit, UTM policies, such as antivirus and IPS policies, can be referenced for further
packet inspection.
V300R001 of USG2000/5000 series uses UTM-integrated security policies. V300R001 of
USG9500 series also supports this type of security policy, but only conditions and actions can
be set, and UTM policies cannot be referenced.
As shown in Figure 2-7, a UTM-integrated security policy consists of a condition, an action,
and a UTM policy. Note that the "service set" concept appears in security policy conditions to
replace the protocol and port fields. Service sets for common protocols are predefined and can
be used directly as conditions. For other protocols or ports, we can define new service sets and
reference them in security policies.
Figure 2-7 Composition of a UTM-integrated security policy
(Diagram: a UTM-integrated security policy consists of a condition — source address, destination address, service set, time range, and user — an action of permit or deny, and a UTM policy: IPS, AV, Web filtering, mail filtering, FTP filtering, or application control.)
UTM-integrated security policies are matched in sequence. When a firewall forwards packets
between security zones, it searches interzone security policies top down. If a packet matches a
specific security policy, the firewall takes the action defined in the policy and stops searching
remaining security policies. If the packet does not match the policy, the firewall continues to
search remaining policies. If the packet does not match any policy, the firewall takes the
action defined for the implicit packet filtering.
As shown in Figure 2-8, the Trust-Untrust interzone relationship is used as an example to
explain the logic of UTM-integrated security policies.
Figure 2-8 Logic of UTM-integrated security policies
(Diagram: UTM-integrated security policies between the Trust and Untrust zones, policies 1 to N searched in sequence; each has a condition — source address, destination address, service set, time range, user — an action of permit or deny, and an optional UTM policy. Implicit packet filtering, with condition any and action permit or deny, applies last.)
To configure a UTM-integrated security policy, you can directly set the condition and action.
If UTM inspection on packets is required, set a UTM policy and reference the UTM policy in
the security policy with the action being permit. For example, to deny the packets from
192.168.0.100 in the Trust zone to the Untrust zone and permit the packets from
192.168.0.0/24 to 172.16.0.0/24, configure the following security policy:
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.100 0
[FW-policy-interzone-trust-untrust-outbound-1] action deny
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] policy 2
[FW-policy-interzone-trust-untrust-outbound-2] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-2] policy destination 172.16.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-2] action permit
2.2.3 Phase 3: Unified Security Policy
With the high-speed growth of networks and applications, the way protocols are used and data
is transmitted has changed dramatically, and network worms, botnets, and other
application-based attacks constantly emerge. Traditional firewalls are incapable of preventing
threats from network worms and botnets as they identify applications based on ports and
protocols and detect and defend against attacks based on transport-layer signatures. The new
security requirement drives the emergence of the next-generation firewall. Huawei firewalls
keep pace with the times, and their security policies have evolved into the "unified" security
policies. Currently, V100R001 of USG6000 series supports unified security policies.
By "unified", we mean:
- Unified configuration: Security profiles can be referenced in security policies to implement
security functions, such as antivirus, intrusion prevention, URL filtering, and mail filtering,
simplifying configuration.
- Unified service processing: Multiple services are performed on packets in parallel when
security policies are used to check the packets, greatly improving system performance.
As shown in Figure 2-9, unified security policies are configured to identify actual service
environments based on applications, content, time, users, attacks, and locations (ACTUAL) in
addition to traditional quintuple information, implementing accurate access control and
security inspection.
Figure 2-9 Identification dimensions of unified security policies
A unified security policy consists of the condition, action, and profile, as shown in Figure
2-10. The profile is used for content security inspection on packets and can be referenced only
when the action in the policy is permit.
Figure 2-10 Composition of a unified security policy
(Diagram: a unified security policy consists of a condition — source zone, destination zone, source address/zone, destination address/zone, user, service, application, and time range — an action of permit or deny, and profiles: AV, IPS, URL filtering, file filtering, content filtering, application behavior control, and mail filtering.)
Compared with security policies in the first two phases, unified security policies have the
following advantages:
- Unified security policies are applied globally, not just between security zones. Security zones
become optional matching conditions, and multiple security zones can be specified at the same
time. A special implementation on the Huawei USG6000 series is that, by default, packets are
not allowed to flow even within a security zone; to allow such traffic, you must configure an
intrazone security policy.
- The default action in security policies replaces the implicit packet filtering, and this action
takes effect globally.
If multiple unified security policies are configured on a firewall, the firewall searches the
policies top down when forwarding packets. As shown in Figure 2-11, if a packet matches a
specific security policy, the firewall takes the action defined in the policy and stops searching
remaining security policies. If the packet does not match the policy, the firewall continues to
search remaining policies. If the packet does not match any policy, the firewall takes the
default action for security policies. The function of the default action is the same as that in the
implicit packet filtering. The difference is that the default action is set in a security policy.
Figure 2-11 Logic of unified security policies
(Diagram: unified security policies 1 to N searched in sequence; each has a condition — source zone, destination zone, source address (zone), destination address (zone), user, service, application, time range — an action of permit or deny, and a profile. The default action for security policies, with condition any and action permit or deny, applies last.)
For example, to deny the packets from 192.168.0.100 in the Trust zone to the Untrust zone
and permit the packets from 192.168.0.0/24 to 172.16.0.0/24, configure the following unified
security policies:
[FW] security-policy
[FW-policy-security] rule name policy1
[FW-policy-security-rule-policy1] source-zone trust
[FW-policy-security-rule-policy1] destination-zone untrust
[FW-policy-security-rule-policy1] source-address 192.168.0.100 32
[FW-policy-security-rule-policy1] action deny
[FW-policy-security-rule-policy1] quit
[FW-policy-security] rule name policy2
[FW-policy-security-rule-policy2] source-zone trust
[FW-policy-security-rule-policy2] destination-zone untrust
[FW-policy-security-rule-policy2] source-address 192.168.0.0 24
[FW-policy-security-rule-policy2] destination-address 172.16.0.0 24
[FW-policy-security-rule-policy2] action permit
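The two rules above can be modeled with zone-aware conditions in which any omitted field matches everything (a sketch; the dictionary keys mirror the CLI keywords):

```python
import ipaddress

RULES = [
    {"name": "policy1", "source_zone": "trust", "destination_zone": "untrust",
     "source_address": "192.168.0.100/32", "action": "deny"},
    {"name": "policy2", "source_zone": "trust", "destination_zone": "untrust",
     "source_address": "192.168.0.0/24", "destination_address": "172.16.0.0/24",
     "action": "permit"},
]

def match(rule: dict, pkt: dict) -> bool:
    """A rule matches if every condition it specifies matches; omitted
    conditions match anything."""
    for zone in ("source_zone", "destination_zone"):
        if zone in rule and rule[zone] != pkt[zone]:
            return False
    for addr in ("source_address", "destination_address"):
        if addr in rule and ipaddress.ip_address(pkt[addr]) not in ipaddress.ip_network(rule[addr]):
            return False
    return True

def action_for(pkt: dict, default: str = "deny") -> str:
    """First matching rule wins; otherwise the default action applies."""
    return next((r["action"] for r in RULES if match(r, pkt)), default)
```

Note that policy1 omits the destination address, so it denies traffic from 192.168.0.100 to any destination in the Untrust zone, while policy2 permits the rest of the subnet only toward 172.16.0.0/24.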
After the preceding introduction, I believe you now understand the history of Huawei firewall
security policies. The security policies mentioned in the following parts are configured as
UTM-integrated security policies, which are widely used nowadays; however, we provide only
the conditions and actions and do not cover the configuration of UTM policies.
2.3 Security Policies in the Local Zone
Firewalls must also process locally generated or terminated traffic when, for example,
administrators log in to the firewalls for management, Internet devices and users establish
VPNs with the firewalls, the firewalls run routing protocols, such as Open Shortest Path
First (OSPF), with routers, and the firewalls interconnect with authentication servers.
To ensure the normal flow of such traffic, we must configure corresponding security policies
on the firewalls. To be specific, we must configure security policies between the Local zone
of the firewalls and the security zones where the interfaces used by the services reside.
In the preceding parts, we describe security policies for the packets passing through firewalls.
Now, let's see how to configure security policies for locally generated or terminated packets of
firewalls. OSPF packets are used as an example.
2.3.1 Configuring a Security Policy in the Local Zone for OSPF
A USG9500 running V300R001 is connected to two routers, as shown in Figure 2-12.
This section verifies the security policy configuration between the Local zone and the security zone
where the firewall interface connecting to the router resides when the firewall participates in OSPF
route calculation. If the firewall does not participate in OSPF route calculation but only transparently
transmits OSPF packets between interfaces in different security zones, a security policy must still be
configured on the firewall to allow the OSPF packets to pass between those zones.
Figure 2-12 Networking for OSPF packet exchange
(The firewall's GE1/0/1, 192.168.0.1/24, resides in the Untrust zone and connects to Router1's GE0/0/1, 192.168.0.2/24. Router1's GE0/0/2, 192.168.1.1/24, connects to Router2's GE0/0/1, 192.168.1.2/24. The firewall itself belongs to the Local zone.)
The configuration on the firewall is as follows:
[FW] interface GigabitEthernet1/0/1
[FW-GigabitEthernet1/0/1] ip address 192.168.0.1 24
[FW-GigabitEthernet1/0/1] quit
[FW] firewall zone untrust
[FW-zone-untrust] add interface GigabitEthernet1/0/1
[FW-zone-untrust] quit
[FW] ospf
[FW-ospf-1] area 1
[FW-ospf-1-area-0.0.0.1] network 192.168.0.0 0.0.0.255
The configuration on Router 1 is as follows:
[Router1] interface GigabitEthernet0/0/1
[Router1-GigabitEthernet0/0/1] ip address 192.168.0.2 24
[Router1-GigabitEthernet0/0/1] quit
[Router1] interface GigabitEthernet0/0/2
[Router1-GigabitEthernet0/0/2] ip address 192.168.1.1 24
[Router1-GigabitEthernet0/0/2] quit
[Router1] ospf
[Router1-ospf-1] area 1
[Router1-ospf-1-area-0.0.0.1] network 192.168.0.0 0.0.0.255
[Router1-ospf-1-area-0.0.0.1] network 192.168.1.0 0.0.0.255
The configuration on Router 2 is as follows:
[Router2] interface GigabitEthernet0/0/1
[Router2-GigabitEthernet0/0/1] ip address 192.168.1.2 24
[Router2-GigabitEthernet0/0/1] quit
[Router2] ospf
[Router2-ospf-1] area 1
[Router2-ospf-1-area-0.0.0.1] network 192.168.1.0 0.0.0.255
By default, no security policy is configured between the Untrust zone where GE1/0/1 resides
and the Local zone, and therefore packets are not allowed to flow between the zones.
After the configuration is complete, run the display ospf peer command to view the OSPF
adjacency.
[FW] display ospf peer
OSPF Process 1 with Router ID 192.168.0.1
Neighbors
Area 0.0.0.1 interface 192.168.0.1(GigabitEthernet1/0/1)'s neighbors
Router ID: 192.168.1.1
Address: 192.168.0.2
State: ExStart Mode:Nbr is Slave Priority: 1
DR: None BDR: None MTU: 0
Dead timer due in 32 sec
Retrans timer interval: 0
Neighbor is up for 00:00:00
Authentication Sequence: [ 0 ]
Run the display ospf peer command on Router 1 to view the OSPF adjacency.
[Router1] display ospf peer
OSPF Process 1 with Router ID 192.168.1.1
Neighbors
Area 0.0.0.1 interface 192.168.0.2(GigabitEthernet0/0/1)'s neighbors
Router ID: 192.168.0.1
Address: 192.168.0.1
GR State: Normal
State: ExStart Mode:Nbr is Slave Priority: 1
DR: 192.168.0.1 BDR: 192.168.0.2 MTU: 0
Dead timer due in 32 sec
Neighbor is up for 00:00:00
Authentication Sequence: [ 0 ]
Neighbors
Area 0.0.0.1 interface 192.168.1.1(GigabitEthernet0/0/2)'s neighbors
Router ID: 192.168.1.2
Address: 192.168.1.2
GR State: Normal
State: Full Mode:Nbr is Slave Priority: 1
DR: 192.168.1.2 BDR: 192.168.1.1 MTU: 0
Dead timer due in 32 sec
Neighbor is up for 00:09:28
Authentication Sequence: [ 0 ]
The OSPF adjacency state is ExStart on both the firewall and Router 1. According to the
process for establishing an OSPF adjacency shown in Figure 2-13, we can find that the OSPF
adjacency failed to be established because the firewall and Router 1 did not exchange
Database Description (DD) packets.
Figure 2-13 Process for establishing an OSPF adjacency
(Both devices start in the Down state and exchange Hello packets to reach Init and then ExStart. In ExStart, they negotiate the master/slave relationship with initial DD packets (I=1, M=1, MS flag set). In Exchange, they swap DD packets summarizing their link-state databases. In Loading, they synchronize the databases with LS Request, LS Update, and LS Ack packets, and finally reach the Full state.)
It is possible that the firewall discards the DD packets. Run the display firewall statistic
system discarded command on the firewall to view information about discarded packets.
[FW] display firewall statistic system discarded
Packets discarded statistic on slot 3 CPU 3
Total packets discarded : 31
Total deny bytes discarded : 1,612
Default deny packets discarded : 31
The command output shows that the packets were discarded by implicit packet filtering:
because we have not configured a security policy to allow DD packets to pass, the DD
packets hit the default deny action. We also find that the number of discarded packets
keeps increasing, indicating that the OSPF module keeps resending DD packets to establish
the OSPF adjacency, but the packets are still discarded.
Then, we configure a security policy in the Local-Untrust interzone to allow OSPF packets to
pass. Note that the security policy must be configured in both inbound and outbound
directions as the firewall needs to send and receive DD packets.
To exactly match OSPF, we use the OSPF service set provided by security policies. If this service set is
unavailable, create one and set the protocol number to 89.
[FW] policy interzone local untrust inbound
[FW-policy-interzone-local-untrust-inbound] policy 1
[FW-policy-interzone-local-untrust-inbound-1] policy service service-set ospf
[FW-policy-interzone-local-untrust-inbound-1] action permit
[FW-policy-interzone-local-untrust-inbound-1] quit
[FW-policy-interzone-local-untrust-inbound] quit
[FW] policy interzone local untrust outbound
[FW-policy-interzone-local-untrust-outbound] policy 1
[FW-policy-interzone-local-untrust-outbound-1] policy service service-set ospf
[FW-policy-interzone-local-untrust-outbound-1] action permit
Run the display ospf peer command on the firewall and Router 1 to view the OSPF adjacency.
The following result may appear after several minutes, or we can run the reset ospf process
command to restart the OSPF process to see the results more quickly.
[FW] display ospf peer
OSPF Process 1 with Router ID 192.168.0.1
Neighbors
Area 0.0.0.1 interface 192.168.0.1(GigabitEthernet1/0/1)'s neighbors
Router ID: 192.168.1.1
Address: 192.168.0.2
State: Full Mode:Nbr is Slave Priority: 1
DR: 192.168.0.2 BDR: 192.168.0.1 MTU: 0
Dead timer due in 32 sec
Retrans timer interval: 4
Neighbor is up for 00:00:51
Authentication Sequence: [ 0 ]
[Router1] display ospf peer
OSPF Process 1 with Router ID 192.168.1.1
Neighbors
Area 0.0.0.1 interface 192.168.0.2(GigabitEthernet0/0/1)'s neighbors
Router ID: 192.168.0.1
Address: 192.168.0.1
GR State: Normal
State: Full Mode:Nbr is Slave Priority: 1
DR: 192.168.0.1 BDR: 192.168.0.2 MTU: 0
Dead timer due in 32 sec
Neighbor is up for 00:00:00
Authentication Sequence: [ 0 ]
Neighbors
Area 0.0.0.1 interface 192.168.1.1(GigabitEthernet0/0/2)'s neighbors
Router ID: 192.168.1.2
Address: 192.168.1.2
GR State: Normal
State: Full Mode:Nbr is Slave Priority: 1
DR: 192.168.1.2 BDR: 192.168.1.1 MTU: 0
Dead timer due in 32 sec
Neighbor is up for 01:35:43
Authentication Sequence: [ 0 ]
As the command output indicates, the OSPF adjacency has been established, and the firewall
has learned the OSPF route to network 192.168.1.0/24.
[FW] display ip routing-table protocol ospf
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Public routing table : OSPF
         Destinations : 2        Routes : 2
OSPF routing table status : <Active>
         Destinations : 1        Routes : 1

Destination/Mask    Proto   Pre  Cost   Flags  NextHop        Interface
192.168.1.0/24      OSPF    10   2      D      192.168.0.2    GigabitEthernet1/0/1
In conclusion, we need to configure a security policy between the security zone where the
OSPF-enabled interface resides and the Local zone to allow OSPF packets to pass, so that the
firewall can establish an OSPF adjacency with the connected device.
Actually, we can also consider this issue from the perspective of unicast versus multicast
packets. On firewalls, unicast packets are controlled by security policies in most cases,
and therefore security policies must be configured to allow them to pass. Multicast packets,
however, are not controlled by security policies, so no additional security policy is
needed for them.
Which OSPF packets are unicast packets and which are multicast packets? OSPF packet types
vary with network types, as listed in Table 2-1.
The ospf network-type command can be used to change the OSPF network type.
Table 2-1 OSPF network and packet types

Network Type | Hello     | Database Description | Link State Request | Link State Update | Link State Ack
Broadcast    | Multicast | Unicast              | Unicast            | Multicast         | Multicast
P2P          | Multicast | Multicast            | Multicast          | Multicast         | Multicast
NBMA         | Unicast   | Unicast              | Unicast            | Unicast           | Unicast
P2MP         | Multicast | Unicast              | Unicast            | Unicast           | Unicast
As Table 2-1 indicates, when the network type is Broadcast, OSPF DD and LSR packets are
unicast, and security policies must be configured to allow the unicast packets to pass. When
the network type is P2P, all OSPF packets are multicast, and no additional security policy
needs to be configured. This security policy configuration principle also applies to the NBMA
and P2MP network types. In real-world networks, if the OSPF status is abnormal on a firewall,
check whether security policies are absent for unicast OSPF packets.
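As a quick self-check, Table 2-1 can be encoded as a lookup that tells whether a given OSPF network type sends any unicast packets, and therefore whether extra Local-zone security policies are needed. This is an illustrative sketch, not device code:

```python
# Transmission type of each OSPF packet per network type, following Table 2-1.
# Tuple order: (Hello, Database Description, LS Request, LS Update, LS Ack).
OSPF_PACKET_TYPES = {
    "broadcast": ("multicast", "unicast", "unicast", "multicast", "multicast"),
    "p2p":       ("multicast", "multicast", "multicast", "multicast", "multicast"),
    "nbma":      ("unicast", "unicast", "unicast", "unicast", "unicast"),
    "p2mp":      ("multicast", "unicast", "unicast", "unicast", "unicast"),
}

def needs_local_zone_policy(network_type):
    """Unicast packets are controlled by security policies, so any network
    type that sends at least one unicast OSPF packet needs a Local-zone policy."""
    return "unicast" in OSPF_PACKET_TYPES[network_type]

print(needs_local_zone_policy("broadcast"))  # True: DD and LSR are unicast
print(needs_local_zone_policy("p2p"))        # False: all packets are multicast
```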
In addition to OSPF packets, other services that firewalls need to process require security
policies in the Local zone to allow the packets to pass. In the next part, I will tell you how to
configure security policies for such services.
2.3.2 Which Protocols Require Security Policies Configured in the
Local Zone on Firewalls?
As shown in Figure 2-14, the USG2200/5000 series is used as an example. The firewall needs
to process locally generated or terminated packets in the following scenarios: an
administrator logs in to the firewall, the firewall interconnects with an authentication server,
an Internet device or user establishes a VPN with the firewall, and the firewall runs OSPF to
communicate with a router.
GRE VPN, L2TP VPN, IPSec VPN, and SSL VPN will be described in the following chapters.
Figure 2-14 Common types of locally generated and terminated packets
(The firewall's GE0/0/1, 192.168.0.1/24, faces the Trust zone, where an administrator at 192.168.0.2 logs in through Telnet/SSH/FTP/HTTP/HTTPS. GE0/0/3, 172.16.0.1/24, faces the DMZ, where a RADIUS server resides at 172.16.0.2. GE0/0/2, 1.1.1.1/24, faces the Untrust zone: the firewall runs OSPF with a router at 1.1.1.2, establishes GRE/L2TP/IPSec VPN tunnels with a peer at 2.2.2.2, and accepts SSL VPN connections from Internet users.)
When configuring security policies for such services, we need to ensure normal service
operation and secure the firewall. Therefore, we must specify refined matching conditions for
the security policies. How do we specify accurate matching conditions? We need to analyze
information such as the source addresses, destination addresses, and protocol types of the
services.
In Table 2-2, I provide matching conditions for services shown in Figure 2-14 for your
reference.
Table 2-2 Setting matching conditions in security policies based on protocols or applications

Service | Source Zone | Destination Zone | Source Address | Destination Address | Application or Protocol + Destination Port
Telnet | Trust | Local | 192.168.0.2 | 192.168.0.1 | Telnet, or TCP + port 23
SSH | Trust | Local | 192.168.0.2 | 192.168.0.1 | SSH, or TCP + port 22
FTP | Trust | Local | 192.168.0.2 | 192.168.0.1 | FTP, or TCP + port 21
HTTP | Trust | Local | 192.168.0.2 | 192.168.0.1 | HTTP, or TCP + port 80
HTTPS | Trust | Local | 192.168.0.2 | 192.168.0.1 | HTTPS, or TCP + port 443
RADIUS | Local | DMZ | 172.16.0.1 | 172.16.0.2 | RADIUS, or UDP + port 1645/1646/1812/1813*
Sending OSPF negotiation packets (outbound) | Local | Untrust | 1.1.1.1 | 1.1.1.2 | OSPF
Receiving OSPF negotiation packets (inbound) | Untrust | Local | 1.1.1.2 | 1.1.1.1 | OSPF
Sending GRE VPN tunnel establishment requests (outbound) | Local | Untrust | 1.1.1.1 | 2.2.2.2 | GRE
Receiving GRE VPN tunnel establishment requests (inbound) | Untrust | Local | 2.2.2.2 | 1.1.1.1 | GRE
Sending L2TP VPN tunnel establishment requests (outbound) | Local | Untrust | 1.1.1.1 | 2.2.2.2 | L2TP, or UDP + port 1701
Receiving L2TP VPN tunnel establishment requests (inbound) | Untrust | Local | 2.2.2.2 | 1.1.1.1 | L2TP, or UDP + port 1701
Sending IPSec VPN tunnel establishment requests (outbound) | Local | Untrust | 1.1.1.1** | 2.2.2.2 | Manual mode: no configuration required. IKE mode (non-NAT traversal environments): UDP + port 500. IKE mode (NAT traversal environments): UDP + ports 500 and 4500
Receiving IPSec VPN tunnel establishment requests (inbound) | Untrust | Local | 2.2.2.2 | 1.1.1.1 | Manual mode: AH/ESP. IKE mode (non-NAT traversal environments): AH/ESP and UDP + port 500. IKE mode (NAT traversal environments): UDP + ports 500 and 4500
SSL VPN | Untrust | Local | ANY | 1.1.1.1 | Reliable mode: TCP + port 443***. Fast mode: UDP + port 443

*: The default port numbers are shown here. For the actual port numbers, see the RADIUS server configuration.
**: In NAT traversal environments, the source and destination addresses may be public or private addresses, depending on actual situations.
***: If the firewall supports both HTTPS and SSL VPN, configure the ports depending on actual situations.
In the preceding example in which an administrator logs in to a USG2200/5000 through GE0/0/1,
a security policy must be configured for the access from the Trust zone to the Local zone.
Actually, all firewall models provide default login methods, and administrators can log in to
the firewalls through specific ports without security policies. For details, see Table 2-3.
Table 2-3 Default login modes supported by firewalls

Model        | Port                                | Login Mode
USG2100      | LAN ports (GE0/0/0 through GE0/0/7) | Telnet or HTTP
USG2200/5000 | Management port (GE0/0/0)           | Telnet or HTTP
USG6000      | Management port (GE0/0/0)           | HTTPS
USG9500      | Management port (GE0/0/0)           | HTTPS
2.4 ASPF
After learning about security policies, you may think that they can be configured once and
for all to defend against all threats. However, some protocols, such as FTP, are more
complex than static security policies can handle. In this case, we need the Application
Specific Packet Filter (ASPF).
Now I'll use FTP as an example to introduce how ASPF works.
2.4.1 Helping FTP Data Packets Traverse Firewalls
First, I'll use the eNSP to simulate an FTP client accessing an FTP server, as shown in Figure
2-15. The FTP client and server directly connect to a firewall. The FTP client resides in the
Trust zone, and the FTP server is in the Untrust zone.
Figure 2-15 Networking for an FTP client to access an FTP server
(The FTP client, 192.168.0.1, resides in the Trust zone; the FTP server, 172.16.0.1, resides in the Untrust zone; the firewall sits between them.)
How to configure a security policy to allow the FTP client to access the FTP server? You may
say: "It is easy. Configure a security policy to allow FTP packets from 192.168.0.1 in the
Trust zone to 172.16.0.1 in the Untrust zone."
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.1 0
[FW-policy-interzone-trust-untrust-outbound-1] policy destination 172.16.0.1 0
[FW-policy-interzone-trust-untrust-outbound-1] policy service service-set ftp
[FW-policy-interzone-trust-untrust-outbound-1] action permit
After the configuration, simulate the FTP client accessing the FTP server on the eNSP. The
access FAILS. Let's check the configuration. You can see that policy 1 was matched,
indicating that the configuration has taken effect.
[FW] display policy interzone trust untrust outbound
policy interzone trust untrust outbound
firewall default packet-filter is deny
policy 1 (1 times matched)
action permit
policy service service-set ftp (predefined)
policy source 192.168.0.1 0
policy destination 172.16.0.1 0
Let's check the session table. You can see that a session has been established on the firewall.
[FW] display firewall session table
Current Total Sessions : 1
ftp VPN:public --> public 192.168.0.1:2049-->172.16.0.1:21
Everything seems OK. Then, why did the access fail?
Let's look at the particular characteristics of FTP. FTP is a typical multi-channel
protocol. The FTP client and server establish two connections between them: a control
connection and a data connection. The control connection carries FTP commands and
parameters, including the information necessary for setting up the data connection. The
data connection obtains directory listings and transfers data.
FTP works in either active (PORT) or passive (PASV) mode, depending on how the data
connection is initiated. In active mode, the FTP server initiates the data connection to
the FTP client. In passive mode, the FTP server accepts the data connection initiated by
the FTP client.
The working mode can be set on the FTP client. This example uses the active mode, as shown
in Figure 2-16.
Figure 2-16 Working mode setting on the FTP client
Let's look at the FTP interactive process in active mode, as shown in Figure 2-17.
Figure 2-17 FTP interactive process in active mode
(The client, 192.168.0.1, performs a TCP three-way handshake from a random port xxxx to port 21 of the server, 172.16.0.1, to set up the control connection, then exchanges the user name and password. Over the control connection the client sends "PORT Command (IP 192.168.0.1 Port yyyy)" and the server replies "PORT Command OK". The server then performs a TCP three-way handshake from its port 20 to port yyyy of the client to set up the data connection, over which the LIST command results and file data are transmitted. xxxx and yyyy are random ports.)
The process is described as follows:
1. The FTP client initiates a control connection request from a random port xxxx to port 21 of the FTP server.
2. The FTP client uses the PORT command to negotiate a port number with the server for the data connection. Port yyyy is obtained.
3. The FTP server initiates a data connection request from port 20 to port yyyy of the FTP client.
4. The FTP server sends data to the client after the data connection is established.
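The PORT command in this exchange encodes the client's IP address and port as six decimal numbers (h1,h2,h3,h4,p1,p2), where the port equals p1*256 + p2 per RFC 959. This is the value a device inspecting the control connection must extract. A short parsing sketch (illustrative only, not Huawei's implementation):

```python
import re

def parse_port_command(line):
    """Parse an FTP 'PORT h1,h2,h3,h4,p1,p2' command into (ip, port).
    Per RFC 959, the data port is p1 * 256 + p2."""
    match = re.match(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", line)
    if not match:
        raise ValueError("not a PORT command: %r" % line)
    h1, h2, h3, h4, p1, p2 = (int(g) for g in match.groups())
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

# 8 * 256 + 3 = 2051, so this announces 192.168.0.1:2051 as the data port.
print(parse_port_command("PORT 192,168,0,1,8,3"))  # ('192.168.0.1', 2051)
```

Because the announced port is random, a static security policy cannot anticipate it; this is exactly the information ASPF records, as described below.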
In the preceding example, we configured only one security policy, which allows the FTP
client to access the FTP server; that is, only the control connection could be established.
When the firewall received the packet from the FTP server to port yyyy of the FTP client,
it considered the packet the first packet of a new connection, not a subsequent packet of
the existing one. To allow the packet to reach the FTP client, we would have to configure
another security policy on the firewall.
How do we resolve this problem? Could we simply configure a security policy for packets
from the FTP server to the FTP client? There is another problem: the port used for the data
connection is negotiated by the client and server and is therefore random, so we would have
to open all ports, which brings security risks to the FTP client. It would be perfect if
the firewall could record the negotiated port and automatically allow the corresponding
packets from the FTP server to the FTP client to pass.
Fortunately, firewall designers have considered this issue and introduced the Application
Specific Packet Filter (ASPF). As the name suggests, ASPF works on the application-layer
information of packets. Its working principle is to check the application-layer information of
packets and record the key data in the application-layer information, so that the packets that
are not explicitly permitted in security policies can be forwarded.
Entries recording the key application-layer data are called server map entries. Once a packet
matches a server map entry, it is no longer controlled by any security policy. It's like enabling
an "invisible channel" on the firewall. Of course, this channel is not arbitrarily enabled.
Instead, the firewall allows the existence of such a channel only after analyzing the
application-layer information of packets to predict the behavior of subsequent packets.
What is the difference between the server map and session table? First, the session table
records the connection status of communicating parties. After generating a session for the first
packet of a connection, the firewall directly forwards subsequent packets of the session, and
the packets are no longer controlled by security policies. The server map records the
information obtained by analyzing the packets on existing connections. This information
indicates packet features, according to which the firewall predicts packet behavior.
Second, after receiving a packet, the firewall checks whether the packet matches the session
table. If so, the firewall forwards the packet. If not, the firewall checks whether the packet
matches the server map. If the packet matches the server map, it is no longer controlled by
security policies. Certainly, the firewall will generate a session for the packet.
Both the server map and session table are important for firewalls. They have different
functions and cannot replace each other.
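The lookup order just described (session table first, then server-map, then security policies) can be sketched as a simplified Python model. This is a hypothetical illustration, not device code; the session key and the triplet match are reduced to the essentials:

```python
def forward_decision(packet, sessions, server_maps, policy_permits):
    """Sketch of the firewall's lookup order for an arriving packet:
    1. a session-table hit is forwarded directly;
    2. a server-map hit bypasses security policies and creates a session;
    3. otherwise, security policies decide, creating a session on permit."""
    key = (packet["src"], packet["dst"], packet["dport"], packet["proto"])
    if key in sessions:
        return "forward (session hit)"
    for entry in server_maps:
        # A triplet server-map entry matches any source address.
        if (packet["dst"], packet["dport"], packet["proto"]) == entry:
            sessions.add(key)
            return "forward (server-map hit, session created)"
    if policy_permits(packet):
        sessions.add(key)
        return "forward (policy permit, session created)"
    return "discard"

sessions = set()
server_maps = [("192.168.0.1", 2052, "tcp")]   # e.g. created by ASPF from a PORT command
pkt = {"src": "172.16.0.1", "dst": "192.168.0.1", "dport": 2052, "proto": "tcp"}
# First packet of the data connection: no session yet, but the server-map matches.
print(forward_decision(pkt, sessions, server_maps, lambda p: False))
# Subsequent packets of the same connection hit the session table directly.
print(forward_decision(pkt, sessions, server_maps, lambda p: False))
```

The key point the sketch captures is that a server-map hit is checked only when no session exists, and that it results in a session, so later packets never consult the server-map again.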
In addition to ASPF, NAT can generate the server map, which will be detailed in chapter 4.
It is easy to enable ASPF. For example, enable ASPF for FTP in the Trust-Untrust interzone.
ASPF can also be enabled for FTP within a security zone.
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] detect ftp
Then, let's verify the access from the FTP client to the FTP server again. Run the display
firewall server-map command on the firewall to view the server-map entry recording the
FTP data connection.
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 ASPF, 172.16.0.1 -> 192.168.0.1:2052[any], Zone: ---
 Protocol: tcp(Appro: ftp-data), Left-Time: 00:00:57, Addr-Pool: ---
 VPN: public -> public
We can see that the firewall has generated a server-map entry. The packet from the FTP
server to the FTP client matched this entry and was forwarded, so no security policy is
required for this packet. The server-map entry does not exist permanently; it is deleted
when it expires. This means that the "invisible channel" is not open forever, which
improves security.
View the session table on the firewall. The command output shows that the FTP server has
established a data connection with the FTP client.
[FW] display firewall session table
Current Total Sessions : 2
ftp VPN:public --> public 192.168.0.1:2051+->172.16.0.1:21
ftp-data VPN:public --> public 172.16.0.1:20-->192.168.0.1:2052
Figure 2-18 shows the ASPF processing for FTP. After ASPF is enabled, the firewall
generates a server map in the FTP control connection, so that the FTP data connection can be
established.
Figure 2-18 ASPF processing for FTP
(The control connection from client port xxxx to server port 21 is established through the firewall as before, followed by the user name and password exchange. When the firewall sees "PORT Command (IP 192.168.0.1 Port yyyy)" on the control connection, it creates a server-map entry. The server's subsequent data-connection SYN, from 172.16.0.1 port 20 to 192.168.0.1 port yyyy, matches the server-map entry and is forwarded, so the data connection's three-way handshake completes and the LIST results and file data are transmitted. xxxx and yyyy are random ports.)
In conclusion, ASPF dynamically generates server-map entries based on the application-layer
information in packets, simplifying security policy configuration and improving security.
ASPF can be considered a firewall traversal technique. Server-map entries "open" a channel
on the firewall, so that subsequent packets of multi-channel protocols, such as FTP, traverse
the firewall through the channel without being controlled by security policies.
In addition to FTP, firewalls support ASPF for other multi-channel protocols, such as the
Session Initiation Protocol (SIP), H.323, and Media Gateway Control Protocol (MGCP). To
check whether a firewall model supports ASPF for a specific protocol, see the product
documentation of the firewall model.
2.4.2 Helping QQ/MSN Packets Traverse Firewalls
Firewalls also support ASPF for common instant messaging protocols, such as Tencent QQ
and Microsoft Service Network (MSN) Messenger. The implementation differs from that for
FTP. Let me introduce ASPF for QQ and MSN.
Generally, text QQ/MSN messages are relayed by a QQ or MSN server. However, audio and
video messages are not relayed through a server. Instead, the communicating parties establish
a connection to transmit such messages because they consume a lot of resources, as shown in
Figure 2-19.
Figure 2-19 Networking for transmitting QQ/MSN messages
(A QQ/MSN client at 192.168.0.1 in the Trust zone connects through the firewall to the Untrust zone, where the QQ/MSN server and another QQ/MSN client reside. Text messages are relayed by the server, whereas audio/video messages flow directly between the two clients.)
In most cases, we configure only the security policy for the Trust-Untrust interzone on a
firewall to allow QQ/MSN clients on an intranet to access the Internet. Due to the lack of the
security policy for the Untrust-Trust interzone, QQ/MSN clients on the Internet cannot initiate
audio/video connection requests to the intranet.
QQ is used as an example. To allow QQ clients on the Internet to initiate audio/video
connections to the QQ client on the intranet, ASPF generates the following server-map entry
(this entry is only an example; the actual entry would contain address translation information):
 Type: STUN, ANY -> 192.168.0.1:53346, Zone: ---
 Protocol: udp(Appro: qq-derived), Left-Time: 00:05:45, Pool: ---,
 Vpn: public -> public
In the entry, the source address is ANY, indicating that any user can initiate connection
requests to port 53346 at 192.168.0.1, and the firewall allows the requests to pass. The entry
contains the destination address (192.168.0.1), destination port (53346), and protocol type
(udp), which are considered a triplet for server-map entries.
The entry type is Simple Traversal of UDP Through Network Address Translators (STUN). QQ, MSN,
and user-defined server-map entries, which will be described in the following part, are all of the STUN
type. STUN will be described in section 4.7 "NAT ALG."
The command for enabling ASPF for QQ and MSN is similar to that for FTP. Enable ASPF
for QQ and MSN in the Trust-Untrust interzone.
ASPF can also be enabled for QQ and MSN within a security zone.
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] detect qq
[FW-interzone-trust-untrust] detect msn
2.4.3 Helping User-Defined Protocol Packets Traverse Firewalls
For applications beyond the supported application scope of the detect command, firewalls
provide ASPF for user-defined protocols. We can define an ACL to identify the packets of an
application, given that the protocol used by the application is understood. ASPF automatically
establishes a triplet server-map entry for the application on a firewall, so that the packets of
the application can pass the firewall. Note that precise ACL rules are preferred to minimize
the adverse impact on other services.
Currently, the most typical application is the Trivial File Transfer Protocol (TFTP), as shown
in Figure 2-20.
Figure 2-20 Networking for TFTP
(The TFTP client, 192.168.0.1, resides in the Trust zone, and the TFTP server resides in the Untrust zone, with the firewall between them.)
The TFTP control and data connections share the TFTP client port number. After the TFTP
client initiates an access request to the TFTP server, ASPF generates the following server-map
entry:
 Type: STUN, ANY -> 192.168.0.1:55199, Zone: ---
 Protocol: udp(Appro: stun-derived), Left-Time: 00:04:52, Pool: ---,
 Vpn: public -> public
In this entry, 192.168.0.1 is the TFTP client's IP address, and 55199 is the port opened on
the TFTP client; the client also uses this port to access the TFTP server. Before the
server-map entry expires, a host at any address (in practice, the TFTP server) can initiate
connection requests to port 55199 at 192.168.0.1, ensuring that TFTP packets can pass
through the firewall.
Similarly, it is not difficult to enable ASPF for user-defined protocols. The support condition
and command syntax vary with firewall models. See the product documentation for details.
[FW] acl 3000
[FW-acl-adv-3000] rule permit ip source 192.168.0.1 0
[FW-acl-adv-3000] quit
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] detect user-defined 3000 outbound
For QQ, MSN, and user-defined protocols, although triplet server-map entries generated by
ASPF ensure the normal running of the services, this mechanism brings risks because the
ports have been enabled for access and packets matching the server-map entries are no longer
controlled by security policies.
To reduce the risks, firewalls provide ASPF-specific security policies (packet filtering) to
filter the packets matching triplet server-map entries for refined access control. For example,
after the previous triplet server-map entry is generated, configure the following ACL to allow
only the matching packets from 192.168.0.1 to 172.16.0.1 to pass.
[FW] acl 3001
[FW-acl-adv-3001] rule permit ip source 192.168.0.1 0 destination 172.16.0.1 0
[FW-acl-adv-3001] quit
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] aspf packet-filter 3001 outbound
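Conceptually, the ASPF packet filter adds one more check: a packet that matched a triplet server-map entry is then tested against the ACL before being forwarded. A simplified sketch of that extra check, illustrative only and not Huawei code:

```python
import ipaddress

def aspf_packet_filter(packet, acl_rules):
    """Sketch of ASPF-specific packet filtering: a packet that matched a
    triplet server-map entry is still checked against ACL rules before
    being forwarded. Returns True only if a permit rule matches."""
    src = ipaddress.ip_address(packet["src"])
    dst = ipaddress.ip_address(packet["dst"])
    for action, src_net, dst_net in acl_rules:
        if src in src_net and dst in dst_net:
            return action == "permit"
    return False  # no rule matched: the packet is dropped

# Models "acl 3001: rule permit ip source 192.168.0.1 0 destination 172.16.0.1 0".
acl_3001 = [("permit",
             ipaddress.ip_network("192.168.0.1/32"),
             ipaddress.ip_network("172.16.0.1/32"))]

print(aspf_packet_filter({"src": "192.168.0.1", "dst": "172.16.0.1"}, acl_3001))  # True
print(aspf_packet_filter({"src": "192.168.0.9", "dst": "172.16.0.1"}, acl_3001))  # False
```

The design intent is narrowing, not granting: the server-map already opened the channel, and the ACL restricts which endpoints may actually use it.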
From the preceding description, we can see that ASPF generates server-map entries for
multi-channel protocols (such as FTP), QQ, MSN, and user-defined protocols to allow the
packets of these protocols to traverse firewalls.
In addition, ASPF on firewalls can block Java and ActiveX plug-ins in HTTP. Such plug-ins
may be crafted into Trojan horses and viruses to compromise hosts in intranets. Because
Java and ActiveX plug-ins are generally transmitted in HTTP payloads, a firewall that
checks only HTTP headers cannot identify them. In this case, ASPF must be used to check
HTTP payloads and block the plug-ins.
To block HTTP plug-ins, run the detect activex-blocking and detect java-blocking
commands in a security zone or an interzone. The support condition and command syntax
vary with firewall models. See the product documentation for details.
2.5 Configuration Precautions and Troubleshooting Guide
2.5.1 Security Policy
In real-world networks, inappropriate security policies often result in service interruptions.
The display firewall statistic system discard command can display statistics on packets
discarded by firewalls. By analyzing the command output, we can determine whether the
packets are discarded due to security policies. For example:
[FW] display firewall statistic system discard
 Packets discarded statistic
  Total packets discarded:                10
  ACL deny packets discarded:              5
  Default deny packets discarded:          5
In the command output, the value of "ACL deny packets discarded" indicates the number of
packets discarded due to security policies; the value of "Default deny packets discarded"
indicates the number of packets discarded due to implicit packet filtering. If the statistics
on discarded packets contain the preceding counters, we need to troubleshoot security
policies.
First, check the matching conditions in security policies. If the matching conditions are
incorrect, packets cannot match the security policies, and therefore firewalls cannot take the
predefined actions on the packets. After a security policy is configured on a firewall, if the
firewall does not process packets in the expected way, we must check the security policy
configuration.
[FW] display policy interzone trust untrust outbound
policy interzone trust untrust outbound
 firewall default packet-filter is deny
 policy 1 (0 times matched)
  action permit
  policy service service-set http (predefined)
  policy source 192.168.0.1 0
  policy destination 172.16.0.1 0
In the preceding command output, 0 packets matched policy 1. If the corresponding interface
has been added to the correct security zone, we should check whether the conditions in the
policy are correct.
Second, if multiple security policies are configured in an interzone, pay attention to their
matching sequence. In the following example, two security policies are configured in the
Trust-Untrust interzone.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] policy destination 172.16.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] policy 2
[FW-policy-interzone-trust-untrust-outbound-2] policy source 192.168.0.100 0
[FW-policy-interzone-trust-untrust-outbound-2] action deny
[FW-policy-interzone-trust-untrust-outbound-2] quit
As the source address scope in policy 1 covers that in policy 2, packets from 192.168.0.100
always match policy 1 and pass through the firewall. The action deny defined in policy 2 for
packets from 192.168.0.100 will never be taken.
To resolve this problem, we can run the following command to put policy 2 before policy 1.
[FW-policy-interzone-trust-untrust-outbound] policy move 2 before 1
Then, packets from 192.168.0.100 first match policy 2 and are denied by the firewall.
#
policy interzone trust untrust outbound
 policy 2
  action deny
  policy source 192.168.0.100 0
 policy 1
  action permit
  policy source 192.168.0.0 0.0.0.255
  policy destination 172.16.0.0 0.0.0.255
#
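The first-match behavior described above can be sketched in a few lines of Python. This is an illustrative model only; the policy fields and the `evaluate` helper are invented for the sketch, not a firewall API:

```python
import ipaddress

# Minimal sketch of first-match policy evaluation: policies are checked
# in order, and the first policy whose conditions match decides the
# action. This is why the more specific deny rule must come first.
def evaluate(policies, src_ip, default_action="deny"):
    for policy in policies:                      # ordered list
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(policy["source"]):
            return policy["action"]              # first match wins
    return default_action                        # implicit packet filtering

# Policy 2 placed before policy 1, as done with "policy move 2 before 1".
policies = [
    {"source": "192.168.0.100/32", "action": "deny"},    # policy 2
    {"source": "192.168.0.0/24",   "action": "permit"},  # policy 1
]

print(evaluate(policies, "192.168.0.100"))  # deny: matched policy 2 first
print(evaluate(policies, "192.168.0.1"))    # permit: matched policy 1
```

Swapping the two list entries back would reproduce the original problem: the /24 rule would shadow the /32 rule for every source address.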
It is difficult to specify accurate matching conditions for security policies. Loose matching
conditions bring security risks, while strict matching conditions may prevent some packets
from matching the policies, affecting services. Here, I want to introduce a general
configuration roadmap: First, set the action for the implicit packet filtering to "permit" and
debug services, ensuring normal service operation. Then, view the session table and
configure security policies using the information recorded in the session table as matching
conditions. Finally, restore the default implicit packet filtering configuration and debug
services again, verifying the effect of the security policies.
When the action for the implicit packet filtering is permit, the firewall allows all unmatched
packets to pass, exposing the firewall to risks. Therefore, use this setting only for debugging.
After service debugging, you must restore the default implicit packet filtering configuration.
That is, set the action to deny.
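The middle step of this roadmap, deriving policy matching conditions from a recorded session, can be sketched as follows. The field names are hypothetical, chosen only to mirror the kind of information a session-table entry records:

```python
# Sketch of turning an observed session into security-policy matching
# conditions: source, destination, and service come straight from the
# session; the action is permit because the traffic is known-good.
def session_to_policy(session):
    return {
        "source": session["src_ip"],
        "destination": session["dst_ip"],
        "service": session["protocol"],
        "action": "permit",
    }

# A session like the one observed while implicit filtering was permit.
session = {"src_ip": "192.168.0.1", "dst_ip": "172.16.0.1",
           "protocol": "http", "src_port": 2052, "dst_port": 80}

policy = session_to_policy(session)
print(policy["service"])  # http
```

Note that the ephemeral source port is deliberately not used as a matching condition: it changes on every connection, so a policy keyed on it would break the service it was meant to allow.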
Let's look at two examples. Figure 2-21 shows the networking for the first example. A PC and
a Web server directly connect to a firewall. The PC resides in the Trust zone, and the Web
server resides in the Untrust zone. The PC needs to access the Web server.
Figure 2-21 Networking for a PC to access a Web server
[Figure: the PC (192.168.0.1) in the Trust zone and the Web server (172.16.0.1) in the Untrust zone connect directly to the firewall.]
At the beginning, we do not know the exact matching condition. So, set the action to permit
for the implicit packet filtering in the Trust-Untrust interzone and enter y when the following
message is displayed.
[FW] firewall packet-filter default permit interzone trust untrust direction outbound
Warning: Setting the default packet filtering to permit poses security risks. You are
advised to configure the security policy based on the actual data flows. Are you sure
you want to continue?[Y/N] y
At this time, the firewall allows all unmatched packets to pass from the Trust zone to the
Untrust zone. Use the PC to access the Web server. After the access succeeds, view the
session table on the firewall.
[FW] display firewall session table verbose
 Current Total Sessions : 1
  http  VPN:public --> public
  Zone: trust--> untrust  TTL: 00:00:10  Left: 00:00:07
  Interface: GigabitEthernet0/0/1  NextHop: 172.16.0.1  MAC: 54-89-98-c0-15-c5
  <--packets:4 bytes:465  -->packets:7 bytes:455
  192.168.0.1:2052-->172.16.0.1:80
A session has been generated for the connection from the PC to the Web server. Then,
configure the following security policy:
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.1 0
[FW-policy-interzone-trust-untrust-outbound-1] policy destination 172.16.0.1 0
[FW-policy-interzone-trust-untrust-outbound-1] policy service service-set http
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
At last, set the action back to deny for the implicit packet filtering. The security policy
configuration is complete.
[FW] firewall packet-filter default deny interzone trust untrust direction outbound
Figure 2-22 shows the networking for the second example. A PC in the Trust zone directly
connects to a firewall. An administrator needs to log in to the firewall through Telnet from the
PC.
Figure 2-22 Networking for an administrator to log in to a firewall through Telnet from a PC
[Figure: the PC (192.168.0.1) in the Trust zone connects directly to the firewall, whose management address belongs to the Local zone.]
First, set the action for the implicit packet filtering in the Trust-Local interzone to permit.
When the following message is displayed, enter y.
[FW] firewall packet-filter default permit interzone trust local direction inbound
Warning: Setting the default packet filtering to permit poses security risks. You
are advised to configure the security policy based on the actual data flows. Are
you sure you want to continue?[Y/N] y
Use Telnet to log in to the firewall from the PC. After the login succeeds, view the session
table on the firewall.
[FW] display firewall session table verbose
 Current Total Sessions : 1
  telnet  VPN:public --> public
  Zone: trust--> local  TTL: 00:10:00  Left: 00:09:55
  Interface: InLoopBack0  NextHop: 127.0.0.1  MAC: 00-00-00-00-00-00
  <--packets:6 bytes:325  -->packets:8 bytes:415
  192.168.0.1:2053-->192.168.0.2:23
Then, configure the following security policy based on the preceding session:
[FW] policy interzone local trust inbound
[FW-policy-interzone-local-trust-inbound] policy 1
[FW-policy-interzone-local-trust-inbound-1] policy source 192.168.0.1 0
[FW-policy-interzone-local-trust-inbound-1] policy destination 192.168.0.2 0
[FW-policy-interzone-local-trust-inbound-1] policy service service-set telnet
[FW-policy-interzone-local-trust-inbound-1] action permit
At last, set the action back to deny for the implicit packet filtering. The security policy
configuration is complete.
[FW] firewall packet-filter default deny interzone trust local direction inbound
I hope that these two examples help you understand the configuration roadmap, so that you
can configure accurate matching conditions in real-world networks.
2.5.2 ASPF
ASPF determines whether firewalls can properly forward the packets of special protocols.
When FTP is used on a network, check whether ASPF is enabled. The USG2000/5000 series
firewalls are used as an example. Table 2-4 lists the protocols for which ASPF can be enabled.
For the support conditions of other firewall models, see the product documentation of the
firewall model.
Table 2-4 Protocols for which ASPF can be enabled on the USG2000/5000
Location     Protocols
Interzone    DNS, FTP, H.323, ICQ, ILS, MGCP, MMS, MSN, NetBIOS, PPTP, QQ, RTSP, SIP, and SQLNET
Zone         DNS, FTP, H.323, ILS, MGCP, MMS, MSN, NetBIOS, PPTP, QQ, RTSP, SIP, and SQLNET
Run the display interzone command to check whether ASPF has been enabled in an
interzone. For example:
[FW] display interzone
interzone trust untrust
detect ftp
#
The command output shows that ASPF has been enabled for FTP in the Trust-Untrust
interzone. Run the display zone command to check whether ASPF has been enabled in a zone.
For example:
[FW] display zone
local
priority is 100
#
trust
priority is 85
detect qq
interface of the zone is (1):
GigabitEthernet0/0/1
#
untrust
priority is 5
interface of the zone is (0):
#
dmz
priority is 50
interface of the zone is (0):
#
The command output shows that ASPF has been enabled for QQ in the Trust zone.
If ASPF for a user-defined protocol does not take effect, run the display firewall server-map
command to check whether the corresponding server-map entry has been generated. For
example:
[FW] display firewall server-map
 server-map item(s)
 ------------------------------------------------------------------------------
 Type: STUN, ANY -> 192.168.0.1:55199, Zone: ---
 Protocol: udp(Appro: stun-derived), Left-Time: 00:04:52, Pool: ---,
 Vpn: public -> public
The command output shows that the server-map entry has been generated. If the server-map
entry does not exist, check whether ACL rules are correctly configured and whether packets
can match the ACL rules. If the ACL is incorrect, reconfigure it.
3
Attack Defense
3.1 DoS Attack
In the previous two chapters, we learned that the major function of a firewall is to protect a
particular network from attacks originating in an untrusted network. In this chapter, we will
learn about common single-packet, traffic-based, and application-layer attacks and the
firewall's defensive measures.
First, let's look back at the recent evolution of attacks. In the 1990s, the Internet was growing
fast, and so were attacks, which had moved from labs to the Internet. However, a fox can be
out-foxed: although attack techniques keep evolving, so do defense measures, as shown in
Figure 3-1.
Figure 3-1 Evolution of attack and defense techniques
[Figure: a timeline from 1996 to 2012. Attacks evolve from DoS attacks (1996), to scanning and overflow attacks (2000), to network-layer attacks and DDoS attacks with fake source addresses, plus HTTP application attacks (2008), to upgraded HTTP/HTTPS application attacks from real source addresses, including HTTPS encrypted attacks and low-and-slow Web application attacks (2011), and to DDoS attacks on mobile applications from hosts pretending to be smart devices (2012). Defense keeps up with malformed packet filtering, transport-layer source authentication, application-layer source authentication, session monitoring, IP reputation of botnets and of local service addresses, and behavior analysis.]
When we talk about "network attacks", we can never forget to mention denial of service (DoS)
attacks. As the name suggests, the purpose of a DoS attack is to make the target computer or
network unable to provide normal services.
Then what does "denial of service" really mean? Let's say there is a diner on the street
providing meals, but some villains often make trouble in the diner, such as occupying dining
tables, blocking the door, or harassing the waiters, waitresses, and chefs, so that customers
cannot enjoy the food of the diner. This is "denial of service."
The computers and servers on the Internet are like such diners: they provide resources and
services. Attackers exhaust the resources of the computers and servers, or the bandwidth of
the links to them, to launch a DoS attack.
3.2 Single-Packet Attack and Defense
Single-packet attack is a common DoS attack and is usually launched by individuals using
simple attack packets. Such attacks may cause severe impacts, but can be easily prevented if
we know the attack signature.
We divide single-packet attacks into three types, as shown in Figure 3-2.
Figure 3-2 Types of single-packet attacks
- Malformed packet attacks: Smurf, LAND, Fraggle, attacks using IP fragments, IP spoofing, Ping of Death, attacks using TCP flags, Teardrop, and WinNuke
- Scanning attacks: IP scanning and port scanning
- Attacks using special control packets: oversized ICMP packets, attacks using ICMP redirects, attacks using ICMP unreachable packets, attacks using packets with the record route option, attacks using packets with the source route option, attacks using Tracert packets, and attacks using packets with the timestamp option
- Malformed packet attack: Attackers send malformed packets. The target systems may crash if they cannot process such packets.
- Scanning attack: To be accurate, scanning attacks are not really attacks, but reconnaissance activities for attacks.
- Attacks using special control messages: Such attacks are also reconnaissance activities rather than real attacks. They use special control messages to probe network structures.
Preventing single-packet attacks is a basic function of firewalls. All Huawei firewalls support
this function. Now let's see how Huawei firewalls prevent typical single-packet attacks.
3.2.1 Ping of Death Attack and Defense
The length field of an IP packet has 16 bits, which means that the maximum length of this IP
packet is 65535 bytes. Some old versions of operating systems have restrictions on packet size.
If a packet is larger than 65535 bytes, memory allocation error occurs, and the receiving
system crashes. The ping of death attack is launched by sending packets larger than 65535
bytes to target hosts to crash them.
To prevent such attacks, the firewall checks the size of packets. If a packet is larger than
65535 bytes, the firewall considers it an attack packet and discards it.
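As a sketch of this check, consider how an oversized packet actually arrives: since the total length field is 16 bits, the packet can only be delivered in fragments, and a firewall can detect the attack when a fragment's offset plus its length overflows 65535 bytes. A minimal Python model, with IP header details deliberately simplified:

```python
# Sketch of a ping-of-death check on IP fragments. The fragment offset
# is expressed in 8-byte units, as in the IP header; a fragment whose
# data would end beyond 65535 bytes cannot belong to a legal IP packet.
MAX_IP_LENGTH = 65535

def is_ping_of_death(fragment_offset, fragment_length):
    reassembled_end = fragment_offset * 8 + fragment_length
    return reassembled_end > MAX_IP_LENGTH

print(is_ping_of_death(8189, 100))  # True: 65512 + 100 ends past 65535
print(is_ping_of_death(0, 1480))    # False: a normal fragment
```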
3.2.2 LAND Attack and Defense
In a local area network denial (LAND) attack, the attacker sends spoofed TCP packets with
the target host's IP address as both the source and destination. This causes the victim to reply
to itself continuously to exhaust its system resources.
To prevent LAND attacks, firewalls check the source and destination addresses of TCP
packets and discard the packets if the source and destination addresses are the same or the
source addresses are loopback addresses.
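This check is simple enough to sketch directly. The following minimal Python model uses the standard ipaddress module; the function name is invented for illustration:

```python
import ipaddress

# Sketch of the LAND check described above: a TCP packet is treated as
# an attack packet if its source equals its destination, or if its
# source is a loopback address.
def is_land_attack(src_ip, dst_ip):
    src = ipaddress.ip_address(src_ip)
    return src_ip == dst_ip or src.is_loopback

print(is_land_attack("10.1.1.1", "10.1.1.1"))     # True: spoofed LAND packet
print(is_land_attack("127.0.0.1", "10.1.1.1"))    # True: loopback source
print(is_land_attack("192.168.0.1", "10.1.1.1"))  # False: normal packet
```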
3.2.3 IP Scanning
An attacker uses ICMP packets (such as ping or Tracert commands) or TCP/UDP packets to
initiate connections to certain IP addresses to check whether the targets reply. In this way, the
attacker can determine whether these hosts are live on the network.
IP scanning does not have direct impacts, but is a reconnaissance method that gathers
information for later attacks. However, firewalls will not ignore IP scanning.
Firewalls inspect TCP, UDP, and ICMP packets. If the destination address of a packet sent
from a source address is different from that of the previous packet, the exception count
increases by 1. When the exception count reaches the predefined threshold, the firewall
considers that the source IP address is performing IP scanning, blacklists the source IP
address, and discards subsequent packets from that source.
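A rough Python model of this counting logic is shown below. The class and threshold are invented for the sketch; a threshold of 3 merely keeps the demonstration short, and real thresholds would be far higher:

```python
from collections import defaultdict

# Sketch of scan detection: per source address, count how many times
# the destination changes; blacklist the source when the exception
# count reaches the threshold.
class ScanDetector:
    def __init__(self, threshold=50):
        self.threshold = threshold
        self.last_dst = {}                    # last destination per source
        self.exceptions = defaultdict(int)    # exception count per source
        self.blacklist = set()

    def inspect(self, src, dst):
        if src in self.blacklist:
            return "discard"                  # blacklisted source
        if src in self.last_dst and self.last_dst[src] != dst:
            self.exceptions[src] += 1         # destination changed
            if self.exceptions[src] >= self.threshold:
                self.blacklist.add(src)
                return "discard"
        self.last_dst[src] = dst
        return "forward"

detector = ScanDetector(threshold=3)
for i in range(5):                            # one source probing 5 hosts
    verdict = detector.inspect("10.0.0.9", f"172.16.0.{i}")
print(verdict)  # discard: the source was blacklisted mid-scan
```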
From these single-packet attacks and the defense mechanisms, we can see that single-packet
attacks demonstrate noticeable patterns. Therefore, we can prevent these attacks as long as we
identify their patterns.
3.2.4 Recommended Configurations for Preventing Single-Packet
Attacks
Firewalls have a lot of defense functions to prevent single-packet attacks. However, in real
networks, which functions should be enabled and which should not? This question must have
been bugging us for a long time. To tackle this, some recommended configurations are
provided as follows:
As shown in Figure 3-3, the recommended configurations allow firewalls to prevent
single-packet attacks without compromising performance in real-world networks. The
scanning attack defense functions are resource-intensive. Therefore, you are advised to enable
these functions only when scanning attacks occur.
Figure 3-3 Recommended configurations for preventing single-packet attacks
Enable:
- Smurf attack defense
- LAND attack defense
- Fraggle attack defense
- Ping of Death attack defense
- WinNuke attack defense
- Defense against attacks using packets with the source route option
- Defense against attacks using packets with the timestamp option
Disable:
- IP scanning attack defense
- Port scanning attack defense
- Teardrop attack defense
Table 3-1 lists the commands to enable defense functions on USG9500 V300R001, for
example, to prevent common single-packet attacks.
Table 3-1 Commands for enabling defense functions against single-packet attacks
Function                                                Command
Enable Smurf attack defense.                            firewall defend smurf enable
Enable LAND attack defense.                             firewall defend land enable
Enable Fraggle attack defense.                          firewall defend fraggle enable
Enable WinNuke attack defense.                          firewall defend winnuke enable
Enable ping of death attack defense.                    firewall defend ping-of-death enable
Enable defense against attacks launched through         firewall defend time-stamp enable
IP packets with the timestamp option set.
Enable defense against attacks launched through         firewall defend route-record enable
IP packets with the record route option set.
Actually, single-packet attacks are only a small fraction of network attacks. The most
common and troublesome network attacks are traffic-based attacks (such as SYN and UDP
floods) and application-layer attacks (such as HTTP and DNS floods), which will be
described in the following sections.
3.3 SYN Flood Attack and Defense
In the past, a major obstacle facing attackers was insufficient bandwidth, which prevented
attackers from sending requests in large numbers. Although attacks like ping of death can
crash an unpatched operating system with a small number of packets, most DoS attacks
require a large amount of traffic to crash the victims, which cannot be generated by a single
attacker. That is why distributed denial of service (DDoS) attacks emerged.
DDoS attackers control massive zombie hosts to send a large number of crafted attack packets
to the target. As a result, links are congested and system resources are exhausted on the
attacked network, making the victim unable to respond to legitimate users, as shown in Figure
3-4.
Figure 3-4 DDoS attack
[Figure: the attacker sends control traffic to groups of zombie hosts, which in turn send attack traffic to the attack target.]
When we talk about DDoS attacks, the first thing that comes to mind is SYN flood. SYN flood
is a highly technical attack and has been a major DDoS attack for quite a long time. The
special aspect of SYN flood is that it is difficult to prevent based on the features of a single
packet or on traffic statistics, because it looks too "real" and "commonplace."
SYN floods have powerful variation capabilities and have not fallen into oblivion over the
years thanks to their "excellent genes":
- Each packet looks real and is not malformed.
- The attack cost is low; a small overhead can be used to launch massive attacks.
During the 2014 Chinese New Year, an IDC experienced three rounds of attacks consecutively
within days, and the longest attack lasted three hours and created a burst traffic volume of 160
Gbit/s. Based on the target and attack type analysis, it could be concluded that the attacks
were well coordinated by hacker groups to attack the same target. The analysis of the captured
packets showed that the major attack method was SYN flood.
According to a security operation report in 2013, DDoS attacks are increasing each year, and
SYN flood attacks account for 31% of DDoS attacks in 2013.
Obviously, SYN flood attacks are still rampant nowadays. Know yourself and know your
enemy, and you will never be defeated. Let's take a look at the attack mechanism of SYN floods.
3.3.1 Attack Mechanism
As the name suggests, SYN flood attack is related to the SYN message of TCP. Therefore,
let's review the TCP three-way handshake process, as shown in Figure 3-5.
Figure 3-5 TCP three-way handshake
[Figure: the client sends SYN to the server, the server replies with SYN+ACK, and the client sends ACK; data transmission then begins.]
1. First handshake: The client sends a SYN (synchronize) message to the server.
2. Second handshake: After receiving the SYN message from the client, the server replies with a SYN+ACK message, indicating that the request sent by the client is accepted. In addition, the server sets the acknowledgment number in the SYN+ACK message to the client's ISN plus 1.
3. Third handshake: After receiving the SYN+ACK message from the server, the client sends an ACK message to the server to complete the three-way handshake.
If the client becomes faulty after sending the SYN message, the server will not receive the
ACK message after sending the SYN+ACK message, and the three-way handshake cannot be
completed. In this situation, the server usually retransmits the SYN+ACK message and waits
for a period of time. If the server does not receive an ACK message from the client within the
specified period, the incomplete connection is removed.
An attacker can take advantage of the TCP three-way handshake mechanism to launch SYN
flood attacks. As shown in Figure 3-6, the attacker sends the target server a large number of
SYN messages, whose source IP addresses do not exist or are unreachable. Therefore, after
the server replies with SYN+ACK messages, the server will receive no ACK message,
causing a large number of half-open connections. These half-open connections exhaust server
resources and make the server unable to respond to legitimate requests.
Figure 3-6 SYN flood attacks
[Figure: the attacker keeps sending forged SYN packets to the target server; the server replies with SYN+ACK each time but never receives an ACK, accumulating half-open connections.]
Firewalls usually use TCP proxy or TCP source authentication to defend against SYN
flood attacks.
3.3.2 TCP Proxy
The firewall can be deployed between the client and server as a TCP proxy to establish a
three-way handshake with the client on behalf of the server, and to relay the TCP connection
to the server after the three-way handshake is complete.
As shown in Figure 3-7, the firewall collects statistics on SYN packets. If the number of SYN
packets destined to a destination reaches the preset threshold during a specified period of time,
the TCP proxy is triggered.
After TCP proxy is enabled, the firewall will return a SYN+ACK message on behalf of the
server upon receiving a SYN message from a client. If the client fails to return an ACK
message, the firewall considers the SYN message abnormal and maintains the half-open
connection until it expires. If the client returns an ACK message, the firewall considers the
SYN message normal and establishes a three-way handshake with the client. Subsequent TCP
packets from the client are then sent to the server. The TCP proxy process is transparent to
both the client and server.
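The proxy decision flow can be modeled as a small state machine. This sketch is deliberately simplified: there is no sequence-number tracking and no expiry timer, and the class, method, and return-string names are invented for illustration:

```python
# Sketch of the TCP-proxy decision flow: answer every SYN on behalf of
# the server; only clients that complete the handshake are relayed on.
class TcpProxy:
    def __init__(self):
        self.half_open = set()   # clients that got our SYN+ACK, no ACK yet
        self.verified = set()    # clients that completed the handshake

    def on_syn(self, client):
        self.half_open.add(client)
        return "SYN+ACK"         # sent on behalf of the server

    def on_ack(self, client):
        if client in self.half_open:
            self.half_open.discard(client)
            self.verified.add(client)
            return "handshake with real server"  # relay the connection
        return "drop"            # ACK without a matching half-open entry

proxy = TcpProxy()
proxy.on_syn("legit-client")
print(proxy.on_ack("legit-client"))  # handshake with real server
proxy.on_syn("spoofed-source")       # no ACK ever arrives
print(len(proxy.half_open))          # 1: the spoofed entry stays half-open
```

The half-open entries accumulating on the firewall, rather than on the server, are exactly the resource cost discussed below.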
Figure 3-7 TCP proxy
[Figure: TCP proxy is triggered when the rate of SYN packets destined to a destination reaches the specified threshold within a period of time. The firewall replies with SYN+ACK to each SYN on behalf of the server; forged sources never return an ACK, so their half-open connections stay on the firewall. When a legitimate client returns an ACK, the three-way handshake between the client and firewall is completed; the firewall then completes a three-way handshake with the server, and subsequent packets are sent to the server.]
During TCP proxying, the firewall proxies and responds to every SYN message received and
maintains the half-open connections. Therefore, if a large number of SYN messages are sent
to the firewall, the firewall must have high performance to handle them: in TCP proxying, the
firewall uses its own resources to handle half-open connections. Firewalls usually have higher
performance than servers, so they can withstand such resource-intensive attacks.
However, when the forward and return paths are different, TCP proxy cannot be used because
the packets destined from the client to the server pass through the firewall, but the packets
destined from the server to the client do not. Therefore, the SYN+ACK message returned by
the server to the client does not pass through the firewall during the three-way handshake.
In this case, TCP proxy cannot be used to prevent SYN flood. However, different forward and
return paths are common scenarios. How can we prevent SYN flood attacks in these
scenarios?
Don't worry. We have another measure: TCP source authentication.
3.3.3 TCP Source Authentication
TCP source authentication can prevent SYN flood attacks when forward and return paths are
different. Therefore, compared with TCP proxy, TCP source authentication is more widely
used.
As shown in Figure 3-8, the firewall collects statistics on SYN packets. If the number of SYN
packets destined to a destination reaches the preset threshold during a specified period of time,
TCP source authentication is triggered.
After TCP source authentication is enabled, the firewall will reply with a SYN+ACK message
that carries an incorrect acknowledge number upon receiving a SYN message from the client.
If the firewall does not receive a RST message from the client, the firewall considers the SYN
message abnormal and determines that the source address is a fake address. If the firewall
receives a RST message, the firewall considers the SYN message normal and determines that
the source address is real. Then, the firewall whitelists the source address and considers all
packets from the client legitimate until the whitelist entry expires.
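This authentication exchange can also be sketched as a state machine. The class and method names are invented for illustration, and whitelist-entry expiry is omitted for brevity:

```python
# Sketch of TCP source authentication: reply with a deliberately wrong
# acknowledgment number; a real TCP stack answers with RST, a spoofed
# source answers with nothing. Whitelisted sources skip the check.
class SourceAuth:
    def __init__(self):
        self.pending = set()     # sources we have probed
        self.whitelist = set()   # sources that proved they are real

    def on_syn(self, src):
        if src in self.whitelist:
            return "forward"     # trusted source, no re-check
        self.pending.add(src)
        return "SYN+ACK (wrong ack number)"

    def on_rst(self, src):
        if src in self.pending:  # a real stack resets the bogus ack
            self.pending.discard(src)
            self.whitelist.add(src)

auth = SourceAuth()
auth.on_syn("203.0.113.5")   # probe the source with a bad SYN+ACK
auth.on_rst("203.0.113.5")   # RST received: the source address is real
print(auth.on_syn("203.0.113.5"))  # forward
```

Unlike the TCP-proxy sketch, no per-connection state survives past the probe, which is why this method works even when return traffic bypasses the firewall.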
Figure 3-8 TCP source authentication
[Figure: TCP source authentication is triggered if the rate of SYN packets destined to a destination reaches the preset threshold during a specified period of time. The firewall replies to each SYN with a SYN+ACK carrying an incorrect acknowledgment number. Forged sources never answer; a legitimate client replies with a RST, passes the authentication, and is whitelisted. Subsequent SYN packets whose source IP addresses match the whitelist are trusted.]
In TCP source authentication, the source client is whitelisted once the client passes the
authentication, and authentication is not performed on subsequent SYN messages sent by this
source. This implementation greatly improves the defense efficiency and performance and
minimizes the resource consumption.
3.3.4 Commands
Table 3-2 lists the TCP proxy and TCP source authentication configuration commands on
USG9500 V300R001, for example.
Table 3-2 TCP proxy and TCP source authentication configuration commands
Function                                  Command
Enable SYN flood attack defense.          firewall defend syn-flood enable
Configure interface-based TCP proxy.      firewall defend syn-flood interface { interface-type interface-number | all } [ alert-rate alert-rate-number ] [ max-rate max-rate-number ] [ tcp-proxy { auto | on } ]
Enable IP address-based TCP proxy.        firewall defend syn-flood ip ip-address [ max-rate max-rate-number ] [ tcp-proxy { auto | on | off } ]
Enable security zone-based TCP proxy.     firewall defend syn-flood zone zone-name [ max-rate max-rate-number ] [ tcp-proxy { auto | on | off } ]
Configure TCP source authentication.      firewall source-ip detect interface { interface-type interface-number | all } [ alert-rate alert-rate-number ] [ max-rate max-rate-number ]
3.3.5 Threshold Configuration Guide
In this section, we will learn some tips for configuring flood attack alarm thresholds.
Alarm thresholds are tricky. If they are too high, attacks may not be detected in time; if they
are too low, legitimate packets may be considered attack packets and discarded.
The traffic patterns vary with networks. Therefore, before configuring these thresholds, you
must learn the types and patterns of the traffic on your network in normal situations. These
benchmarks can be based on your experience or statistics for a period of time.
For example, if you want to configure an alarm threshold to prevent SYN flood attacks, you
must roughly know the peak rate of SYN packets on your network in normal situations. The
SYN flood attack defense threshold is usually 1.2 to 2 times that peak rate. After the threshold
is configured, monitor your network for the next few days to check whether the threshold
interrupts normal services. If it does, increase the threshold.
These tips also apply to the thresholds for preventing other flood attacks, such as UDP, DNS,
and HTTP floods.
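The rule of thumb above can be captured in a tiny helper. The 1.2-2.0 range comes from the guideline; the default factor of 1.5 is simply a midpoint choice, and the function name is invented:

```python
# Sketch of sizing a flood alarm threshold from the observed normal
# peak rate, keeping the multiplier within the 1.2x-2x guideline.
def flood_threshold(peak_rate_pps, factor=1.5):
    if not 1.2 <= factor <= 2.0:
        raise ValueError("factor should stay within the 1.2-2.0 guideline")
    return int(peak_rate_pps * factor)

print(flood_threshold(10000))        # 15000 packets/s
print(flood_threshold(10000, 2.0))   # 20000 packets/s
```

If monitoring later shows the threshold clipping legitimate traffic, raising the factor toward 2.0 follows the tuning advice above.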
3.4 UDP Flood Attack and Defense
Let's review the UDP protocol before moving on to UDP flood attacks. As we know, TCP is a
connection-oriented protocol, but UDP is a connectionless protocol. No connection is set up
between the client and server before data transmission. If packet loss occurs during the data
transmission from the client to the server, UDP cannot detect the packet loss or send any error
message. Therefore, UDP is usually considered an unreliable transmission protocol.
Then why should we use an unreliable protocol like UDP? Is UDP useless?
No. UDP can be very useful in some scenarios. The biggest advantage of UDP over TCP is
speed. TCP provides security and reliability mechanisms, but at the cost of high overhead and
slow transmission. In contrast, UDP leaves these mechanisms to higher-layer protocols to
achieve high transmission speed.
However, UDP can be exploited by hackers to launch UDP flood attacks, which are
high-bandwidth attacks. In UDP flood attacks, attackers use zombies to send a large number
of oversized UDP packets to target servers at high speed, with the following impacts:
- Network bandwidth resources are exhausted, and links are congested.
- The large numbers of UDP attack packets with changing source IP addresses or ports compromise the performance of session-based forwarding devices or even crash the network, causing a DoS.
Firewalls cannot prevent UDP flood attacks the way they prevent SYN flood attacks, because
UDP is connectionless and source authentication cannot be used. Then, how do firewalls
prevent UDP flood attacks?
3.4.1 Rate Limiting
A simple way to prevent UDP flood attacks is rate limiting. The rate limiting types are as follows:
- Incoming interface-based rate limiting: Limit the rate on an incoming interface and discard excess UDP packets.
- Destination address-based rate limiting: Limit the rate to a destination address and discard excess UDP packets.
- Destination security zone-based rate limiting: Limit the rate to a destination security zone and discard excess UDP packets.
- Session-based rate limiting: Collect statistics on the UDP packets of each UDP session. If the rate of UDP packets reaches the alarm threshold, the session is locked and subsequent UDP packets matching the session are discarded. If no traffic matches the session for three or more consecutive seconds, the firewall unlocks the session and permits subsequent packets matching the session.
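To make the session-locking behavior concrete, here is a minimal Python sketch of session-based rate limiting. The class name, thresholds, and one-second measurement window are our own illustrative assumptions, not the firewall's actual implementation:

```python
class SessionRateLimiter:
    """Per-session UDP rate limiting: a session whose packet rate reaches
    the alarm threshold is locked; it is unlocked after `idle_unlock`
    seconds with no matching traffic (illustrative sketch)."""

    def __init__(self, alarm_rate=100, idle_unlock=3):
        self.alarm_rate = alarm_rate   # packets per one-second window
        self.idle_unlock = idle_unlock # idle seconds before unlocking
        self.windows = {}              # session -> (window_start, count)
        self.locked = {}               # session -> time of last matching packet

    def on_packet(self, session, now):
        """Return True to forward the packet, False to discard it."""
        last = self.locked.get(session)
        if last is not None:
            if now - last >= self.idle_unlock:
                del self.locked[session]    # idle long enough: unlock
            else:
                self.locked[session] = now  # still active: refresh timer, drop
                return False
        start, count = self.windows.get(session, (now, 0))
        if now - start >= 1.0:              # start a new one-second window
            start, count = now, 0
        count += 1
        self.windows[session] = (start, count)
        if count > self.alarm_rate:         # alarm threshold reached: lock
            self.locked[session] = now
            return False
        return True
```

A burst above the threshold locks the session, and only three quiet seconds unlock it again.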
3.4.2 Fingerprint Learning
Rate limiting is effective in protecting bandwidth but may interrupt normal services. To resolve this problem, firewalls also support fingerprint learning to prevent UDP flood attacks.
As shown in Figure 3-9, fingerprint learning checks whether the payloads in UDP packets sent from clients to the server are identical to determine whether the packets are normal. The firewall collects statistics on the UDP packets destined for the target server. If the rate of UDP packets reaches the alarm threshold, the firewall starts fingerprint learning. If identical features appear repeatedly, the features are learned as fingerprints. Subsequent UDP packets matching the fingerprints are considered attack packets and discarded, while packets that do not match any fingerprint are forwarded.
Figure 3-9 Fingerprint learning

(The figure shows an attacker's zombies and a legitimate client sending UDP packets to the target server through the firewall. When the rate of UDP packets destined for the server reaches the specified threshold, the firewall starts fingerprint learning. Forged packets matching the learned fingerprints are discarded; UDP packets that do not match any fingerprint are forwarded.)
Fingerprint learning is based on the fact that UDP flood attack packets have common features, such as identical character strings or payloads, because attackers often use tools to craft UDP packets with identical payloads to increase the flood rate. Normal UDP packets, in contrast, have different payloads. Therefore, firewalls can learn the fingerprints of UDP packets to distinguish attack packets from normal packets and reduce false positives.
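The learning logic can be sketched in Python as follows. The offset, fingerprint length, and repeat threshold mirror the parameters listed in Table 3-3, but the class and its behavior are an illustrative simplification, not Huawei's actual algorithm:

```python
from collections import Counter

class FingerprintLearner:
    """Once triggered, extract a byte slice (offset/length) from each UDP
    payload; a slice seen `repeat_threshold` times becomes a fingerprint,
    and later packets carrying it are dropped (illustrative sketch)."""

    def __init__(self, offset=0, length=8, repeat_threshold=5):
        self.offset = offset
        self.length = length
        self.repeat_threshold = repeat_threshold
        self.seen = Counter()        # candidate slice -> occurrence count
        self.fingerprints = set()    # learned attack fingerprints

    def on_packet(self, payload):
        """Return True to forward the packet, False to discard it."""
        feature = payload[self.offset:self.offset + self.length]
        if feature in self.fingerprints:
            return False             # matches a learned fingerprint: drop
        self.seen[feature] += 1
        if self.seen[feature] >= self.repeat_threshold:
            self.fingerprints.add(feature)  # repeated feature: learn it
            return False
        return True
```

Identical payloads are quickly learned and blocked, while packets with distinct payloads keep flowing.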
As shown in the following two packet capture screenshots, the two UDP packets destined for
the same destination have identical payload. If a firewall receives a large number of such UDP
packets, the firewall can determine that a UDP flood attack is going on.
To sum up, firewalls prevent UDP flood attacks through rate limiting and fingerprint learning,
with each having its own merits and limitations. Rate limiting is a simple and crude way to
control the rate of UDP packets, but rate limiting discards packets indiscriminately and may
interrupt normal services. In contrast, fingerprint learning is smarter and can distinguish
attack packets from normal packets after learning the fingerprints of attack packets. Currently,
fingerprint learning is a major measure to prevent UDP flood attacks and is supported by all
Huawei firewall series.
3.4.3 Commands
Table 3-3 lists the rate limiting and fingerprint learning configuration commands, using the USG9500 V300R001 as an example.
Table 3-3 Rate limiting and fingerprint learning configuration commands

Enable UDP flood attack defense:
firewall defend udp-flood enable

Configure interface-based UDP rate limiting:
firewall defend udp-flood interface { interface-type interface-number | all } [ max-rate max-rate-number ]

Configure IP address-based UDP rate limiting:
firewall defend udp-flood ip ip-address [ max-rate max-rate-number ]

Configure security zone-based UDP rate limiting:
firewall defend udp-flood zone zone-name [ max-rate max-rate-number ]

Configure session-based UDP rate limiting:
firewall defend udp-flood base-session max-rate max-rate-number

Configure IP address-based UDP flood fingerprint learning:
firewall defend udp-fingerprint-learn ip ip-address [ alert-rate alert-rate-number ]

Configure security zone-based UDP flood fingerprint learning:
firewall defend udp-fingerprint-learn zone zone-name [ alert-rate alert-rate-number ]

Configure UDP fingerprint learning parameters:
firewall defend udp-flood fingerprint-learn offset offset fingerprint-length fingerprint-length
3.5 DNS Flood Attack and Defense
Before we move to application-layer attacks, let's take a look at some real attack cases.
On the evening of May 19, 2009, the recursive domain name services in six provinces of China were overwhelmed by excessive DNS requests, and domain name services in other provinces were also disrupted, causing a prolonged network outage.
Let's play back the attack. On the evening of May 19, attackers targeted the DNS service provider (DNSPod) that provided DNS services for private servers of game websites. The attack traffic exceeded 10 Gbit/s, which crashed DNSPod. However, DNSPod also provided DNS services for the servers of the Storm media player.
The Storm player has a process that automatically starts with the client and connects to the Storm servers to download advertisements or software updates. After DNSPod crashed, the domain names of the Storm servers could not be resolved, but the Storm player process kept trying to connect to the servers. As a result, the Storm clients inadvertently became zombies that continuously sent DNS requests to local DNS servers. The DNS traffic exceeded 30 Gbit/s and caused a DNS flood.
Then, the police investigated the attack and arrested the attackers on May 29. The investigation showed that the attackers were operators of private game servers who had rented servers to attack rival private game servers and websites for illegal gains.
This attack case demonstrates the severe impacts of application-layer attacks. These attacks
interrupt our lives and must be prevented. Now let's talk about DNS flood attack and defense.
3.5.1 Attack Mechanism
Let's start with the mechanism of the DNS protocol. When we surf the Internet, we enter
domain names of websites we want to visit. The domain names are resolved into IP addresses
by DNS servers. As shown in Figure 3-10, when we visit www.huawei.com, the client will
send a DNS request to the local DNS server. If the local DNS server stores the mapping
between the domain name and IP address, it sends the IP address to the client.
If the local DNS server cannot find the IP address, it sends a request to the upper-level DNS server. After the upper-level DNS server finds the IP address, it returns the IP address to the local DNS server, which, in turn, sends it to the client. To reduce DNS traffic on the Internet, the local DNS server caches domain name-to-IP address mappings so that it does not need to query upper-level DNS servers to honor subsequent requests from hosts.
Figure 3-10 DNS process

(The figure shows the client sending a DNS request for the IP address of www.huawei.com to the local DNS server. If a match is found in the cache, the local DNS server replies directly; otherwise it forwards the request to the upper-level DNS server and relays the reply back to the client.)
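The caching behavior of the local DNS server can be sketched in Python. This is an illustrative simplification; the class name and the callable-based upstream server are our own assumptions:

```python
class LocalDnsServer:
    """Answer from cache when possible; otherwise ask the upper-level
    server and cache the result (illustrative sketch)."""

    def __init__(self, upper):
        self.upper = upper             # callable: domain name -> IP address
        self.cache = {}                # cached name -> address mappings
        self.upstream_queries = 0      # how many times we asked upstream

    def resolve(self, name):
        if name not in self.cache:     # cache miss: recurse upward
            self.upstream_queries += 1
            self.cache[name] = self.upper(name)
        return self.cache[name]
```

A second lookup for the same name is served from the cache, which is exactly what keeps upper-level DNS traffic down.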
A DNS flood attack sends a DNS server a large number of requests for nonexistent domain names to crash the DNS server and make it unable to handle legitimate DNS requests. In the above-mentioned attack case, the crashed DNS server (DNSPod) was unable to resolve the domain names of the Storm servers, but tens of thousands of Storm clients continuously sent DNS requests to local DNS servers, causing the DNS flood.
3.5.2 Defense Measure
DNS supports TCP and UDP. Usually, UDP is used because the connectionless protocol is fast.
UDP also has a smaller overhead than TCP, reducing the resource consumption on DNS
servers.
However, in some cases, a DNS server must instruct clients to use TCP to send requests. In this situation, when the DNS server receives a request from a client, the server replies with a message whose TC flag is set to 1, indicating that the client must resend the request over TCP.
This mechanism can be used by firewalls to verify whether the source of DNS requests is real
to prevent DNS flood attacks.
As shown in Figure 3-11, the firewall collects statistics on DNS requests. If the number of DNS requests destined for a destination reaches the preset threshold during a specified period, DNS source authentication is triggered.
After DNS source authentication is enabled, the firewall responds to DNS requests on behalf of the DNS server, with the TC flag of the DNS replies set to 1. This flag instructs the client to resend the DNS request over TCP. If the firewall does not receive a TCP DNS request from the client, it considers the source spoofed. If the firewall receives a TCP DNS request, it considers the client real, whitelists the source address of the client, and treats all packets from the client as legitimate until the whitelist entry expires.
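The TC-flag authentication flow can be sketched in Python as a small state machine. The method names, return values, and whitelist TTL are illustrative assumptions, not the firewall's actual interfaces:

```python
class DnsSourceAuth:
    """TC-flag source authentication: answer a UDP query with TC=1 on the
    server's behalf; a client that retries over TCP is whitelisted for
    `whitelist_ttl` seconds (illustrative sketch)."""

    def __init__(self, whitelist_ttl=300):
        self.whitelist_ttl = whitelist_ttl
        self.whitelist = {}            # client IP -> whitelist expiry time

    def is_whitelisted(self, client, now):
        expiry = self.whitelist.get(client)
        return expiry is not None and now < expiry

    def on_udp_query(self, client, now):
        """Forward queries from whitelisted clients; otherwise reply TC=1."""
        if self.is_whitelisted(client, now):
            return "forward"
        return "reply-tc"              # tell the client to retry over TCP

    def on_tcp_query(self, client, now):
        """A TCP retry proves the source is real: whitelist it."""
        self.whitelist[client] = now + self.whitelist_ttl
        return "forward"
```

A spoofed source never completes the TCP retry, so it never makes it onto the whitelist.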
Figure 3-11 DNS source authentication

(The figure shows the firewall replying on the server's behalf once the rate of DNS requests to a destination reaches the specified threshold. An attacker's forged packets never trigger a TCP retry and are blocked. A legitimate client resends its request over TCP, passes the authentication, and is whitelisted; its subsequent packets match the whitelist and are forwarded to the DNS server.)
Let's see the detailed process through the following packet capture screenshots.
1. The client uses UDP to send a DNS request, as shown in the following figure.
2. The firewall responds to the DNS request on behalf of the DNS server, with the TC flag of the DNS reply set to 1, as shown in the following figure. This flag instructs the client to resend the DNS request over TCP.
3. After receiving the DNS reply, the client uses TCP to send the DNS request as instructed by the firewall, as shown in the following figure.
However, DNS source authentication is not a one-size-fits-all solution in the real world, because not all clients can send TCP DNS requests. If a client cannot send TCP DNS requests, its requests cannot be honored, interrupting normal services.
3.5.3 Commands
Table 3-4 lists the DNS flood attack defense configuration commands, using the USG9500 V300R001 as an example.
Table 3-4 DNS flood attack defense commands

Enable DNS flood attack defense:
firewall defend dns-flood enable

Configure the DNS flood attack defense parameters:
firewall defend dns-flood interface { interface-type interface-number | all } [ alert-rate alert-rate-number ] [ max-rate max-rate-number ]
3.6 HTTP Flood Attack and Defense
Now let's take a look at another typical application-layer attack: HTTP flood. HTTP flood
attacks are increasing each year and should not be underestimated.
3.6.1 Attack Mechanism
To launch an HTTP flood, the attacker can use zombie hosts to send a large number of HTTP
requests to the target. The requests contain uniform resource identifiers (URIs) that require
resource-intensive operations, such as database operations, to exhaust the resources on the
target server and make it unable to respond to normal requests.
A URI defines a web resource, whereas a uniform resource locator (URL) locates a web resource. For example, www.huawei.com/abc/12345.html is a URL, and /abc/12345.html is a URI.
3.6.2 Defense Measure
To prevent HTTP flood attacks, we can use HTTP redirection. When a client requests www.huawei.com/1.html from the web server, the web server can return a message instructing the client to request www.huawei.com/2.html instead, redirecting the request to a new URI.
HTTP redirection is a self-healing mechanism for web servers: if the originally requested URI is obsolete, the server redirects the request to a new URI so that the client can still reach the desired web page, as shown in Figure 3-12.
Figure 3-12 HTTP redirection

(The figure shows the client completing the TCP three-way handshake with the web server, sending an HTTP GET request for an obsolete URI, receiving a 302 Moved Temporarily reply carrying the new URI, requesting the new URI, and receiving 200 OK followed by data transmission.)
This mechanism can be used by firewalls to verify whether the source of HTTP requests is
real to prevent HTTP flood attacks.
As shown in Figure 3-13, the firewall collects statistics on HTTP requests. If the number of HTTP requests destined for a destination reaches the preset threshold during a specified period, HTTP source authentication is triggered.
After HTTP source authentication is enabled, the firewall, on behalf of the web server, replies to each request with an HTTP redirect that instructs the client to request a new, nonexistent URI. If the firewall does not receive a request for the new URI, it considers the source spoofed. If the firewall receives a request for the new URI, it considers the client real and whitelists the IP address of the client. The firewall then sends another HTTP redirect to the client, instructing the client to request the original URI, that is, the URI the client requested in the first place. All subsequent HTTP requests from the client are considered legitimate until the whitelist entry expires.
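The two-redirect flow can be sketched in Python as follows. The probe suffix, method names, and return values are illustrative assumptions, not the firewall's actual behavior:

```python
class HttpSourceAuth:
    """Redirect-based source authentication: the first request gets a
    redirect to a probe URI; a client that follows it is whitelisted and
    redirected back to its original URI (illustrative sketch)."""

    PROBE = "?probe"   # illustrative probe suffix appended to the URI

    def __init__(self):
        self.pending = {}      # client -> original URI awaiting the probe
        self.whitelist = set() # clients that passed authentication

    def on_request(self, client, uri):
        """Return (action, uri): forward the request or redirect it."""
        if client in self.whitelist:
            return ("forward", uri)
        if self.pending.get(client) is not None and uri.endswith(self.PROBE):
            original = self.pending.pop(client)
            self.whitelist.add(client)        # client followed the redirect
            return ("redirect", original)     # send it back to the real URI
        self.pending[client] = uri            # remember the original URI
        return ("redirect", uri + self.PROBE) # redirect to the probe URI
```

A zombie that blindly replays the same request never follows the redirect, so it never reaches the web server.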
Figure 3-13 HTTP source authentication

(The figure shows the firewall redirecting clients once the rate of HTTP requests to a destination reaches the specified threshold. An attacker's forged requests for search1.huawei.com never follow the redirect to search2.huawei.com and are blocked. A legitimate client follows the redirect, passes the authentication, is whitelisted, and is redirected back to search1.huawei.com; its subsequent requests match the whitelist and are forwarded.)
Although two HTTP redirects are used in the authentication process, the redirection is done
quickly between the server and browser and will not affect user experience.
Let's see the detailed process through the following packet capture screenshots.
1. The client requests /index.html, as shown in the following figure.
2. Upon receiving the request, the firewall replies on behalf of the web server to redirect the client to /index.html?sksbjsbmfbclwjcc, as shown in the following figure.
3. The client requests /index.html?sksbjsbmfbclwjcc, as shown in the following figure.
4. Upon receiving the request, the firewall determines that the source of the HTTP request is real and redirects the client to the originally requested URI (/index.html), as shown in the following figure.
However, HTTP source authentication is not a one-size-fits-all solution in the real world, because some clients, such as set-top boxes (STBs), do not support HTTP redirection. Therefore, before configuring HTTP source authentication, verify that no such clients exist on your network. Otherwise, normal services will be interrupted.
3.6.3 Commands
Table 3-5 lists the HTTP flood attack defense configuration commands, using the USG9500 V300R001 as an example.
Table 3-5 HTTP flood attack defense commands

Enable HTTP flood attack defense:
firewall defend http-flood enable

Configure HTTP flood attack defense parameters:
firewall defend http-flood source-detect interface { interface-type interface-number | all } alert-rate alert-rate-number [ max-rate max-rate-number ]
Those are the common DDoS attacks and the related defense measures on firewalls. Although firewalls have DDoS attack defense capabilities, they are not dedicated anti-DDoS products. If you need dedicated anti-DDoS products, Huawei offers the AntiDDoS1000 and AntiDDoS8000, which are world-leading, flagship anti-DDoS products. For more information about these products, visit the Huawei website and download the product documents.
Questions from Dr. WoW:
1. What are the three types of single-packet attacks?
2. What are the measures to prevent SYN flood attacks? What are the application scenarios
of the measures?
3. What are the measures to prevent UDP flood attacks?
4. To prevent HTTP flood attacks, is each HTTP request from a source redirected?
4 NAT
4.1 Source NAT
4.1.1 Source NAT Mechanism
When the Internet was invented, no one expected that it would grow so fast as to become pervasive in our lives in merely 20 years. As a result, problems that were not considered during its design are now surfacing. For example, IPv4 addresses are running out. While seeking alternatives, people are also using technologies that can alleviate the exhaustion of IPv4 addresses, and the most common one is network address translation (NAT). Many NAT implementations exist; the most common is source NAT.
Source NAT translates private source IP addresses into public source IP addresses. With
source NAT, users on an intranet can access the Internet from their private addresses to use
public IP addresses more efficiently.
The process of source NAT is shown in Figure 4-1. Upon receiving packets destined from the private network to the Internet, the firewall translates the private source addresses into public addresses. Upon receiving the return packets, the firewall translates the public destination addresses back into private destination addresses. The whole NAT process is transparent to the users on the private network and to hosts on the Internet.
Figure 4-1 Source NAT process

(The figure shows a private network user at 192.168.0.2 sending an IP packet to 210.1.1.2. The firewall translates the source address from 192.168.0.2 to the public address 202.1.1.2 before forwarding the packet to the Internet. For the return packet from 210.1.1.2, the firewall translates the destination address 202.1.1.2 back to 192.168.0.2.)
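The forward and return translations in Figure 4-1 can be sketched in Python. For simplicity this sketch does one-to-one (No-PAT style) translation; the class name and pool-handling details are illustrative assumptions:

```python
class SourceNat:
    """Minimal one-to-one source NAT sketch: outbound packets get a public
    source address from the pool; return packets are translated back."""

    def __init__(self, pool):
        self.free = list(pool)   # public addresses not yet handed out
        self.fwd = {}            # private address -> public address
        self.rev = {}            # public address -> private address

    def outbound(self, src):
        """Translate a private source address; None if the pool is empty."""
        if src not in self.fwd:
            if not self.free:
                return None      # pool exhausted: the user must wait
            public = self.free.pop(0)
            self.fwd[src] = public
            self.rev[public] = src
        return self.fwd[src]

    def inbound(self, dst):
        """Translate a public destination address back to the private one."""
        return self.rev.get(dst)
```

The reverse table is what makes the translation transparent: return packets are mapped back without the Internet host ever seeing the private address.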
Before moving on to the similarities and differences among NAT implementations, let's introduce the concept of the NAT address pool. A NAT address pool is a pool, or container, of public IP addresses. During address translation, the firewall translates the private address into a public address selected from the pool. The public address is randomly selected and has nothing to do with the configuration time or the value of the IP addresses.
The following command configures a NAT address pool on the USG2000/USG5000 series. The NAT address pool contains four public IP addresses. We will use the USG2000/USG5000 as an example in NAT address pool configurations hereafter unless otherwise specified.
[FW] nat address-group 1 202.1.1.2 202.1.1.5
A configured NAT address pool can be referenced by NAT policies. On the USG2000/USG5000 firewall series, NAT policies are similar to security policies: both contain conditions and actions. The difference is that the action in a NAT policy is source NAT or no NAT. If the action is source NAT, a NAT address pool must be referenced, as shown in Figure 4-2. We will use the USG2000/USG5000 as an example in NAT policy configurations hereafter unless otherwise specified.
Figure 4-2 NAT policy

(The figure shows packets from the trust zone to the untrust zone being matched against NAT policies in sequence. Each NAT policy contains conditions, such as source and destination addresses, and an action of source NAT or no NAT; a source NAT action references a NAT address pool.)
If a packet matches one NAT policy, the NAT policy is implemented, and the remaining NAT
policies are ignored. If a packet does not match a NAT policy, the packet is compared against
the next NAT policy.
Configuring multiple NAT policies provides flexibility. For example, user group 1 (192.168.0.2-192.168.0.5) and user group 2 (192.168.0.6-192.168.0.10) may be required to use different public IP addresses to access the Internet. This cannot be achieved by putting the two public IP addresses into the same NAT address pool, because addresses in a pool are randomly selected.
Instead, we can put the two IP addresses into different NAT address pools and configure two
NAT policies. One NAT policy allows user group 1 to use NAT address pool 1, and the other
allows user group 2 to use NAT address pool 2. Then, the two user groups can use different
public IP address to access the Internet.
Table 4-1 lists the source NAT implementations supported by Huawei firewalls.
Table 4-1 Source NAT implementations supported by Huawei firewalls

NAT No-PAT
Description: Only IP addresses are translated; ports are not translated.
Application scenario: The number of available public IP addresses is almost the same as the number of private network users who need Internet access.

NAPT
Description: Both IP addresses and ports are translated.
Application scenario: The number of private network users is larger than that of available public addresses.

Egress interface address mode (also called easy-IP)
Description: Both IP addresses and ports are translated, but the public address can only be the IP address of the egress interface.
Application scenario: Only one public IP address is available, and the public IP address is dynamically obtained on the egress interface.

Smart NAT
Description: One address in an address pool is reserved for NAPT, and the other addresses in the address pool are used for NAT No-PAT.
Application scenario: Usually, each private network user can have a public IP address in the address pool, but occasionally, public addresses are not sufficient and NAPT must be implemented so that multiple users can share the same public IP address.

Triplet NAT
Description: The mappings between private IP address/port and public IP address/port are fixed instead of being random.
Application scenario: Users on the Internet initiate access to users on the private network, as in P2P services.
Each NAT implementation has its own merits and demerits. Let's dive deeper into them.
4.1.2 NAT No-PAT
"No-PAT" means that port addresses are not translated and public addresses cannot be shared
by more than one private network address user. Therefore, NAT No-PAT is a one-to-one
address translation. Figure 4-3 shows an example of NAT No-PAT configuration. In this
example, the firewall and the web server are reachable to each other.
Figure 4-3 NAT No-PAT networking

(The figure shows private network users 192.168.0.2/24 and 192.168.0.3/24 in the trust zone, the firewall with NAT address pool 202.1.1.2-202.1.1.3, and the web server 210.1.1.2 in the untrust zone.)
The detailed configuration process is as follows:
1. Configure a NAT address pool and a NAT policy.
Configure a NAT address pool.
[FW] nat address-group 1 202.1.1.2 202.1.1.3  //Add two public IP addresses to the address pool.
Configure a NAT policy.
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255  //Specify the match condition.
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat  //Specify the action (source NAT).
[FW-nat-policy-interzone-trust-untrust-outbound-1] address-group 1 no-pat  //Reference the NAT address pool and specify No-PAT as the NAT method.
[FW-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW-nat-policy-interzone-trust-untrust-outbound] quit
Note that security policies and blackhole routes must be configured after the NAT
configuration is complete.
2. Configure a security policy.
Security policies and NAT policies are similar, just as their names suggest. However, they
have different functions. Security policies determine whether packets can pass through the
firewall, whereas NAT policies determine how to translate IP addresses in the packets. NAT is
performed only for permitted packets. Security policies are processed before NAT policies.
Therefore, if you configure a security policy for a source address, the source address must be
the private address.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] quit
3. Configure blackhole routes.
A blackhole route is a route that leads nowhere; packets matching it are dropped. To avoid routing loops, blackhole routes must be configured on the firewall for the addresses in the public address pool. The blackhole routes are configured as follows; the reason for configuring them will be discussed later.
[FW] ip route-static 202.1.1.2 32 NULL 0
[FW] ip route-static 202.1.1.3 32 NULL 0
After the previous configurations are complete, the users on the private network can access
the web server. If you display the sessions on the firewall, you can see the following
information:
[FW] display firewall session table
Current Total Sessions : 2
http VPN:public --> public 192.168.0.2:2050[202.1.1.2:2050]-->210.1.1.2:80
http VPN:public --> public 192.168.0.3:2050[202.1.1.3:2050]-->210.1.1.2:80
From the session table, we can see that the two private IP addresses have been translated into different public IP addresses (shown in brackets), but the ports are not translated.
Do you remember the server-map table mentioned in Chapter 2 Security Policies? NAT No-PAT generates two server-map entries: one in the forward direction and the other in the return direction.
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
No-Pat, 192.168.0.2[202.1.1.2] -> any, Zone: ---
Protocol: any(Appro: ---), Left-Time: 00:11:59, Addr-Pool: 1
VPN: public -> public
No-Pat Reverse, any -> 202.1.1.2[192.168.0.2], Zone: untrust
Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
VPN: public -> public
No-Pat, 192.168.0.3[202.1.1.3] -> any, Zone: ---
Protocol: any(Appro: ---), Left-Time: 00:11:59, Addr-Pool: 1
VPN: public -> public
No-Pat Reverse, any -> 202.1.1.3[192.168.0.3], Zone: untrust
Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
VPN: public -> public
The server-map entry in the forward direction allows for fast address translation when a
private network user accesses the Internet, because in NAT No-PAT, each private address is
exclusively translated to a public IP address and the translation is performed when the packets
match the server-map entry. Similarly, when the packets destined from the Internet to the
private network match the server-map entry in the return direction, address translation is
performed. Note that packets matching the server-map entries must be checked against
security policies. Only packets permitted by the security policies can pass through the
firewall.
Other users on the private network cannot access the web server, because only two public IP addresses are available in the address pool and both have been used. Other users must wait until the public addresses are released. As we can see, in NAT No-PAT one public IP address can be used by only one private network user, so this implementation does not conserve public addresses. The next implementation, NAPT, can conserve public IP addresses.
4.1.3 NAPT
Network address and port translation (NAPT), sometimes also known as port address
translation (PAT), means both the network address and port are translated. NAPT is the most
widely used address translation implementation. NAPT allows a large number of private
network users to share a small number of public IP addresses to access the Internet.
The difference between the NAPT and NAT No-PAT configurations is that in the NAPT configuration, the "no-pat" keyword is not specified when you reference the NAT address pool in the NAT policy. The following NAPT configuration is still based on Figure 4-3.
1. Configure a NAT address pool.
[FW] nat address-group 1 202.1.1.2 202.1.1.3
2. Configure a NAT policy.
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-untrust-outbound-1] address-group 1  //Reference the NAT address pool.
[FW-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW-nat-policy-interzone-trust-untrust-outbound] quit
3. Configure a security policy.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] quit
4. Configure blackhole routes.
[FW] ip route-static 202.1.1.2 32 NULL 0
[FW] ip route-static 202.1.1.3 32 NULL 0
After the previous configurations are complete, the users on the private network can access
the web server. If you display the sessions on the firewall, you can see the following
information:
[FW] display firewall session table
Current Total Sessions : 2
http VPN:public --> public 192.168.0.2:2053[202.1.1.2:2048]-->210.1.1.2:80
http VPN:public --> public 192.168.0.3:2053[202.1.1.3:2048]-->210.1.1.2:80
From the session table, we can see that the two private IP addresses have been translated into different public IP addresses, and the ports are also translated.
Other users on the private network can also access the web server. If you display the sessions
on the firewall, you can see the following information:
[FW] display firewall session table
Current Total Sessions : 3
http VPN:public --> public 192.168.0.2:2053[202.1.1.2:2048]-->210.1.1.2:80
http VPN:public --> public 192.168.0.3:2053[202.1.1.3:2048]-->210.1.1.2:80
http VPN:public --> public 192.168.0.4:2051[202.1.1.2:2049]-->210.1.1.2:80
From the session table, we can see that two users on the private network share the same public
IP address, but the ports of the users are different. The two users sharing the same public IP
address are distinguished by the ports. Therefore, you do not need to worry about IP address
conflict.
Note that NAPT generates no server-map entries, which is different from NAT No-PAT.
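Port-based multiplexing can be sketched in Python as follows. The port allocation strategy (a simple incrementing counter starting at 2048) is an illustrative assumption, not the firewall's actual algorithm:

```python
class Napt:
    """NAPT sketch: many private (address, port) pairs share one public
    address and are told apart by their translated ports."""

    def __init__(self, public_ip, first_port=2048):
        self.public_ip = public_ip
        self.next_port = first_port
        self.fwd = {}  # (private ip, private port) -> (public ip, public port)
        self.rev = {}  # (public ip, public port) -> (private ip, private port)

    def outbound(self, src_ip, src_port):
        """Translate a private (address, port) to a public (address, port)."""
        key = (src_ip, src_port)
        if key not in self.fwd:
            mapped = (self.public_ip, self.next_port)
            self.next_port += 1            # allocate a fresh public port
            self.fwd[key] = mapped
            self.rev[mapped] = key
        return self.fwd[key]

    def inbound(self, dst_ip, dst_port):
        """Map return traffic back to the private (address, port)."""
        return self.rev.get((dst_ip, dst_port))
```

Because each user gets a distinct public port, one public address serves many users without any conflict, which is exactly what the session table above shows.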
4.1.4 Egress Interface Address Mode (Easy-IP)
In egress interface address mode, the public IP address of the egress interface is used for
address translation. Multiple users of the private network share the same public IP address.
Therefore, port translation is also performed. This mode can be deemed a variant of NAPT.
When the egress of a firewall obtains the public IP address through dial-up, you cannot add
the public IP address to the address pool because the public address is dynamically obtained.
In this case, you need to configure the egress interface address mode so that addresses can be
translated when the public IP address changes. The egress interface address mode simplifies
the configuration process and is therefore called easy-IP, which is available on USG2000,
USG5000, and USG6000 series.
Easy-IP does not require a NAT address pool or blackhole route. All you need to do is specify the outgoing interface in the NAT policy, as shown in Figure 4-4.
Figure 4-4 Easy-IP networking (private network users 192.168.0.2/24 and 192.168.0.3/24 in the Trust zone reach web server 210.1.1.2 in the Untrust zone through the firewall; egress interface GE1/0/2 has a dynamically assigned address)
The configuration is as follows:
1. Configure a NAT policy.
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0
0.0.0.255
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-untrust-outbound-1] easy-ip GigabitEthernet1/0/2
//Specify the outgoing interface.
[FW-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW-nat-policy-interzone-trust-untrust-outbound] quit
2. Configure a security policy.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] quit
The users on the private network can access the web server. If you display the sessions on the
firewall, you can see the following information:
[FW] display firewall session table
Current Total Sessions : 2
http VPN:public --> public 192.168.0.2:2054[202.1.1.1:2048]-->210.1.1.2:80
http VPN:public --> public 192.168.0.3:2054[202.1.1.1:2049]-->210.1.1.2:80
From the session table, we can see that the two private IP addresses have been translated into
the public IP address (202.1.1.1) of the egress interface, and the ports are also translated. If
other users on the private network access the web server, their addresses are also translated to
202.1.1.1, with different ports to distinguish the users.
Like NAPT, easy-IP does not generate any server-map entry.
4.1.5 Smart NAT
As we have mentioned before, NAT No-PAT is one-to-one address translation, which means
that a public address in the address pool can be used by only one private network user. If all
the public addresses are in use, other users cannot access the Internet. What can these users
do then? The solution is smart NAT.
Smart NAT incorporates the benefits of both NAT No-PAT and NAPT. The mechanism is as
follows:
Let's say the address pool has N IP addresses, and one of them is reserved and the remaining
addresses form address section 1. During address translation, the addresses in section 1 are
preferentially used for one-to-one address translation. When the IP addresses in section 1 are
exhausted, the reserved IP address is used for NAPT (many-to-one address translation).
We can consider smart NAT an enhanced NAT No-PAT because it overcomes the limitation of
NAT No-PAT: once the number of users accessing the Internet equals the number of public
addresses in the address pool, other users cannot access the Internet until the used public
addresses are released (the sessions expire).
If the same situation occurs in smart NAT, other users can share the reserved public IP address
to access the Internet.
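The allocation rule above can be sketched in a few lines of Python. This is an illustrative model, not firewall code; the addresses match the example that follows (section 1: 202.1.1.2, reserved: 202.1.1.3), and the starting NAPT port is an assumption.

```python
# Illustrative model of smart NAT allocation: section 1 addresses are used
# for one-to-one No-PAT first; once they are exhausted, the reserved
# address is shared through NAPT port translation.

class SmartNat:
    def __init__(self, section, reserved, first_port=2048):
        self.free = list(section)   # section 1: one-to-one No-PAT addresses
        self.reserved = reserved    # reserved address for the NAPT fallback
        self.next_port = first_port
        self.no_pat = {}            # private IP -> dedicated public IP

    def translate(self, priv_ip, priv_port):
        if priv_ip in self.no_pat:                   # reuse existing mapping
            return self.no_pat[priv_ip], priv_port
        if self.free:                                # section 1 not exhausted
            self.no_pat[priv_ip] = self.free.pop(0)
            return self.no_pat[priv_ip], priv_port   # port unchanged (No-PAT)
        port = self.next_port                        # NAPT fallback
        self.next_port += 1
        return self.reserved, port

nat = SmartNat(section=["202.1.1.2"], reserved="202.1.1.3")
u1 = nat.translate("192.168.0.2", 2053)   # one-to-one, port kept
u2 = nat.translate("192.168.0.3", 2053)   # section exhausted -> NAPT
u3 = nat.translate("192.168.0.4", 2053)
```

The first user consumes the only section 1 address; the second and third share the reserved address and are told apart by port, exactly the pattern the session table below will show.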
Smart NAT is available on USG9500 V300R001. Therefore, USG9500 series is used as an
example to describe smart NAT configuration, as shown in Figure 4-5.
Figure 4-5 Smart NAT networking (private network users 192.168.0.2/24 and 192.168.0.3/24 in the Trust zone access web server 210.1.1.2 in the Untrust zone; the NAT address pool contains section 1: 202.1.1.2 and reserved address: 202.1.1.3)
The detailed configuration process is as follows:
1. Configure a NAT address pool.
[FW] nat address-group 1
[FW-address-group-1] mode no-pat local
[FW-address-group-1] smart-nopat 202.1.1.3
//reserved address
[FW-address-group-1] section 1 202.1.1.2 202.1.1.2 //This section cannot contain the reserved address.
[FW-address-group-1] quit
2. Configure a NAT policy.
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0
0.0.0.255
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-untrust-outbound-1] address-group 1 //Reference the
NAT address pool.
[FW-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW-nat-policy-interzone-trust-untrust-outbound] quit
3. Configure a security policy.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] quit
4. Configure blackhole routes.
[FW] ip route-static 202.1.1.2 32 NULL 0
[FW] ip route-static 202.1.1.3 32 NULL 0
If one user on the private network accesses the web server, the session information on the
firewall resembles:
[FW] display firewall session table
Current total sessions: 1
Slot: 2 CPU: 3
http VPN:public --> public 192.168.0.2:2053[202.1.1.2:2053]-->210.1.1.2:80
From the session table, we can see that the private IP address has been translated into a public
IP address in section 1 and the port is not translated.
Other users on the private network can also access the web server. If you display the sessions
on the firewall, you can see the following information:
[FW] display firewall session table
Current total sessions: 3
Slot: 2 CPU: 3
http VPN:public --> public 192.168.0.2:2053[202.1.1.2:2053]-->210.1.1.2:80
http VPN:public --> public 192.168.0.3:2053[202.1.1.3:2048]-->210.1.1.2:80
http VPN:public --> public 192.168.0.4:2053[202.1.1.3:2049]-->210.1.1.2:80
From the session table, we can see that the two private IP addresses have been translated into
the reserved public IP address and the port is also translated. That is, NAPT is performed for
the two users. NAPT is performed using the reserved public IP address only when the
public IP addresses (except the reserved address) in the address pool are exhausted.
Let's take a look at the server-map table. Smart NAT includes NAT No-PAT. Therefore, related
server-map entries are generated.
[FW] display firewall server-map
ServerMap item(s) on slot 2 cpu 3
------------------------------------------------------------------------------
Type: No-Pat, 192.168.0.2[202.1.1.2] -> ANY, Zone: untrust
Protocol: ANY(Appro: unknown), Left-Time:00:05:55, Pool: 1, Section: 1
Vpn: public -> public
Type: No-Pat Reverse, ANY -> 202.1.1.2[192.168.0.2], Zone: untrust
Protocol: ANY(Appro: unknown), Left-Time:---, Pool: 1, Section: 1
Vpn: public -> public
4.1.6 Triplet NAT
We have learned four types of source NAT, among which NAPT is most widely used. Source
NAT not only alleviates the exhaustion of public addresses, but also hides the real private
network addresses, improving security as well. However, these NAT implementations do not
work well with P2P, which is widely used in file sharing, voice communications, and video
transfer. When NAT meets P2P, the result is far from rosy: you may find that you cannot
use P2P to download the latest movies or video chat.
To resolve this problem, we need triplet NAT. To understand triplet NAT, we must first
understand the P2P mechanism and the problems that NAPT causes for P2P services.
As shown in Figure 4-6, P2P services are running on both PC1 and PC2. To run the P2P
services, the two clients must exchange messages with the P2P server for login and
authentication. The P2P server records the addresses and ports of the clients. If PC1 resides on
a private network, the firewall performs NAPT for packets destined from PC1 to the P2P
server. Therefore, the client address and port recorded on the P2P server are the post-NAT
public address and port. When PC2 downloads a file, the P2P server sends PC2 the IP address
and port of the client on which the requested file resides (for example, the address and port of
PC1). Then, PC2 sends a request to PC1 and starts to download the file.
Figure 4-6 P2P service interaction process (PC1, a P2P client on a private network behind the firewall, and PC2, a P2P client on the Internet, both interact with the P2P server; PC2 then connects to PC1 directly)
The interaction seems to be perfect. However, two problems exist:
1. PC1 periodically sends packets to the P2P server, and NAPT is performed on these
packets, so the post-NAT address and port keep changing. The address and port of PC1
stored on the P2P server must therefore be constantly updated, affecting the running of
P2P services.
2. More importantly, the forwarding mechanism of the firewall determines that packets
returned to PC1 can pass through the firewall only when they match the session table.
Other hosts, such as PC2, cannot initiate access to PC1 through the post-NAT address
and port. By default, the security policies on the firewall do not allow such packets to
pass through.
Triplet NAT can perfectly resolve these two problems because triplet NAT has the following
two features:
1. The post-NAT port is stable.
For a period of time after PC1 accesses the P2P server, the post-NAT port remains the
same when PC1 accesses the P2P server again or accesses other hosts on the Internet.
2. Access initiated from the Internet is supported.
PC2 can obtain the post-NAT address and port of PC1 and initiate access to them,
regardless of whether PC1 has accessed PC2. The access packets initiated from
PC2 to PC1 are permitted, even when no security policy is configured on the firewall for
such packets.
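These two features amount to an endpoint-independent mapping, which we can sketch in Python. This is an illustrative model; the addresses are taken from this example, and keeping a single static mapping is an assumption for brevity.

```python
# Illustrative triplet NAT model: the mapping is keyed only by
# (protocol, private IP, private port), so the same public address and
# port are reused for every destination, and a reverse entry lets any
# Internet host reach the private client.

fwd = {}   # (protocol, private IP, private port) -> (public IP, public port)
rev = {}   # (protocol, public IP, public port)  -> (private IP, private port)

def translate_out(proto, priv_ip, priv_port, dst_ip):
    # dst_ip is deliberately ignored: the mapping is endpoint-independent.
    key = (proto, priv_ip, priv_port)
    if key not in fwd:
        fwd[key] = ("202.1.1.2", 3536)   # assumed single mapping for brevity
        rev[(proto,) + fwd[key]] = (priv_ip, priv_port)
    return fwd[key]

m1 = translate_out("tcp", "192.168.0.2", 4661, "210.1.1.2")  # to P2P server
m2 = translate_out("tcp", "192.168.0.2", 4661, "210.1.1.9")  # to another peer
pc1 = rev[("tcp", "202.1.1.2", 3536)]   # any host can reach PC1 this way
```

Because `m1` and `m2` are identical, the address the P2P server records for PC1 stays valid no matter whom PC1 talks to, and the reverse entry is what lets PC2 initiate the connection.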
These features of triplet NAT support P2P services. Triplet NAT is available on USG9500
V300R001. The triplet NAT configuration for the network shown in Figure 4-7 is described as follows.
For USG2000, USG5000, and USG6000 series firewalls, user-defined ASPF can be configured to ensure
that P2P services are normal.
Figure 4-7 Triplet NAT networking (P2P clients at 192.168.0.2/24 in the Trust zone access P2P server 210.1.1.2 in the Untrust zone; the NAT address pool is 202.1.1.2-202.1.1.3)
Triplet NAT is configured as follows. Note that blackhole routes cannot be configured;
otherwise, services will be interrupted.
1. Configure a NAT address pool.
[FW] nat address-group 1
[FW-address-group-1] mode full-cone local
//Set the mode to triplet NAT.
[FW-address-group-1] section 1 202.1.1.2 202.1.1.3
[FW-address-group-1] quit
2. Configure a NAT policy.
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0
0.0.0.255
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-untrust-outbound-1] address-group 1
//Reference the NAT address pool.
[FW-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW-nat-policy-interzone-trust-untrust-outbound] quit
3. Configure a security policy.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] quit
When the P2P clients on the private network access the P2P server, the session information on
the firewall resembles:
[FW] display firewall session table
Current total sessions: 1
Slot: 2 CPU: 3
tcp VPN: public --> public 192.168.0.2:4661[202.1.1.2:3536] --> 210.1.1.2:4096
From the session table, we can see that the private IP address of the P2P client has been
translated into a public IP address and the port is also translated. Now let's take a look at the
server-map table.
− The Untrust zone in the server-map table is generated because the local parameter in the mode
full-cone local command is specified. If the command is mode full-cone global, the zone is not
specified, indicating that security zones are not restricted.
− For more information about the FullCone field in the server-map table, see 4.1.9 Further Reading.
[FW] display firewall server-map
ServerMap item(s) on slot 2 cpu 3
------------------------------------------------------------------------------
Type: FullCone Src, 192.168.0.2:4661[202.1.1.2:3536] -> ANY, Zone: Untrust
Protocol: tcp(Appro: ---), Left-Time:00:00:58, Pool: 1, Section: 0
Vpn: public -> public
Hotversion: 2
Type: FullCone Dst, ANY -> 202.1.1.2:3536[192.168.0.2:4661], Zone: Untrust
Protocol: tcp(Appro: ---), Left-Time:00:00:58, Pool: 1, Section: 0
Vpn: public -> public
Hotversion: 2
From the server-map table, we can see that two server-map entries are generated for the triplet
NAT: a source server-map entry (FullCone Src) and a destination server-map entry (FullCone
Dst). The functions of the two entries are described as follows:
− Source server-map entry (FullCone Src)
Before the expiration of the entries, the address and port after address translation are
202.1.1.2:3536 when PC1 accesses any host in the Untrust zone, ensuring port
consistency.
− Destination server-map entry (FullCone Dst)
Before the expiration of the entries, any host in the Untrust zone can access port 4661 on
PC1 through 202.1.1.2:3536, meaning that P2P clients on the Internet can initiate
connection to PC1.
Therefore, the source and destination server-map entries allow triplet NAT to support P2P
services. From these entries, we can see that only the source IP address, source port, and
protocol are involved in triplet NAT, and that is why it is called "triplet" NAT.
As we have mentioned, the destination server-map entry allows P2P clients on the Internet to
initiate connections to PC1. Some may ask: are the server-map entries generated in triplet NAT
the same as those in ASPF, so that packets matching the entries are not subject to the control
of security policies? There is more to tell. For triplet NAT, firewalls also support the
endpoint-independent filter function. The command is as follows.
In the command, the endpoint-independent parameter means that the address translation is independent
from the address and port translation on the other end. This parameter can be considered another name
for triplet NAT. On Huawei firewalls, this command controls whether security policies are needed to
examine packets in triplet NAT.
[FW] firewall endpoint-independent filter enable
After the endpoint-independent filter is enabled, packets matching the destination server-map
entry can pass through the firewall without being matched against security policies. If the
function is disabled, packets matching the destination server-map entries must also be
matched against security policies to determine whether they are permitted. By default, the
endpoint-independent filter function is enabled. That is why the P2P clients on the Internet
can initiate connections to PC1 on the private network.
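The filtering behavior just described can be summarized in a tiny decision function. This is a paraphrase of the documented behavior, not actual firewall logic, and the function and parameter names are our own.

```python
# Sketch of endpoint-independent filtering: with the filter enabled, an
# inbound packet that matches a FullCone Dst server-map entry skips the
# security-policy check; with it disabled, the policy is still consulted.

def needs_policy_check(matches_fullcone_dst, eif_enabled):
    if matches_fullcone_dst and eif_enabled:
        return False          # forwarded without a security-policy lookup
    return True               # the security policy decides

# Default behavior (filter enabled): unsolicited P2P peers get through.
default_verdict = needs_policy_check(matches_fullcone_dst=True,
                                     eif_enabled=True)
```

This is why, with the default setting, P2P clients on the Internet can initiate connections to PC1 even though no inbound security policy permits them.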
4.1.7 Source NAT in Multi-Egress Scenario
We have learned NAT implementations of various types, and it seems that we understand
everything about NAT. However, real-world configurations may still pose challenges. For
example, how do we configure source NAT in a multi-egress scenario?
In the following example, the firewall has two ISP links to the Internet. We will use this
example to illustrate how to configure source NAT. If more ISP links are available, the
configuration method is similar.
As shown in Figure 4-8, an enterprise has deployed a firewall at the network egress as the
gateway, which is connected to the Internet through links ISP1 and ISP2 so that the PCs on
the private network can access the Internet.
Figure 4-8 Source NAT networking in the dual-ISP scenario (private network users behind the firewall, which connects to the Internet through both an ISP1 link and an ISP2 link)
In this scenario, a major challenge for the firewall is to select an ISP link when forwarding
traffic destined from the private network to the Internet. If the optimal ISP link for a packet is
ISP1 but the packet is forwarded through ISP2, the detour may increase latency and
deteriorate user experience.
ISP links can be selected based on destination addresses. In this case, we can configure two
equal-cost default or specific routes. ISP links can also be selected based on source addresses.
In this case, we can configure policy-based routing. These will be described in detail in
Chapter 10 ISP Link Selection.
For NAT, packets are sent out through either ISP1 or ISP2, depending on the route selection
method. Whichever ISP link is used, NAT does its job as long as the private addresses are
translated into public addresses before the packets are sent out.
Usually, we add the interfaces connected to ISP1 and ISP2 to different security zones. Then,
we configure source NAT policies between the security zone (usually the Trust zone) where
the private network resides and the security zones of the two interfaces connected to ISP1 and
ISP2, as shown in Figure 4-9.
Figure 4-9 NAT networking in the dual-ISP scenario (private network users in the Trust zone behind GE1/0/1 at 192.168.0.1/24; GE1/0/2 at 1.1.1.1/30 connects to ISP1, and GE1/0/3 at 2.2.2.2/30 connects to ISP2)
The following example describes how to configure source NAT in NAPT mode. In the
example, the public addresses assigned by ISP1 are 1.1.1.10 through 1.1.1.12, and those by
ISP2 are 2.2.2.10 through 2.2.2.12.
Add interfaces to security zones.
[FW] firewall zone trust
[FW-zone-trust] add interface GigabitEthernet1/0/1
[FW-zone-trust] quit
[FW] firewall zone name isp1
[FW-zone-isp1] set priority 10
[FW-zone-isp1] add interface GigabitEthernet1/0/2
[FW-zone-isp1] quit
[FW] firewall zone name isp2
[FW-zone-isp2] set priority 20
[FW-zone-isp2] add interface GigabitEthernet1/0/3
[FW-zone-isp2] quit
Configure two NAT address pools.
[FW] nat address-group 1 1.1.1.10 1.1.1.12
[FW] nat address-group 2 2.2.2.10 2.2.2.12
Configure two NAT policies based on the interzone relationship.
[FW] nat-policy interzone trust isp1 outbound
[FW-nat-policy-interzone-trust-isp1-outbound] policy 1
[FW-nat-policy-interzone-trust-isp1-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-nat-policy-interzone-trust-isp1-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-isp1-outbound-1] address-group 1
[FW-nat-policy-interzone-trust-isp1-outbound-1] quit
[FW-nat-policy-interzone-trust-isp1-outbound] quit
[FW] nat-policy interzone trust isp2 outbound
[FW-nat-policy-interzone-trust-isp2-outbound] policy 1
[FW-nat-policy-interzone-trust-isp2-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-nat-policy-interzone-trust-isp2-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-isp2-outbound-1] address-group 2
[FW-nat-policy-interzone-trust-isp2-outbound-1] quit
[FW-nat-policy-interzone-trust-isp2-outbound] quit
Configure two security policies based on the interzone relationship.
[FW] policy interzone trust isp1 outbound
[FW-policy-interzone-trust-isp1-outbound] policy 1
[FW-policy-interzone-trust-isp1-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-isp1-outbound-1] action permit
[FW-policy-interzone-trust-isp1-outbound-1] quit
[FW-policy-interzone-trust-isp1-outbound] quit
[FW] policy interzone trust isp2 outbound
[FW-policy-interzone-trust-isp2-outbound] policy 1
[FW-policy-interzone-trust-isp2-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-isp2-outbound-1] action permit
[FW-policy-interzone-trust-isp2-outbound-1] quit
[FW-policy-interzone-trust-isp2-outbound] quit
Of course, do not forget blackhole routes.
[FW] ip route-static 1.1.1.10 32 NULL 0
[FW] ip route-static 1.1.1.11 32 NULL 0
[FW] ip route-static 1.1.1.12 32 NULL 0
[FW] ip route-static 2.2.2.10 32 NULL 0
[FW] ip route-static 2.2.2.11 32 NULL 0
[FW] ip route-static 2.2.2.12 32 NULL 0
If we add the interfaces connected to ISP1 and ISP2 to the same security zone, for example,
the Untrust zone, the NAT policies cannot distinguish the two links based on the interzone
relationship. To help you understand this, we provide a configuration script, in which NAT
policy 1 and policy 2 are configured in the Trust-to-Untrust interzone.
#
nat-policy interzone trust untrust outbound
policy 1
action source-nat
policy source 192.168.0.0 0.0.0.255
address-group 1
policy 2
action source-nat
policy source 192.168.0.0 0.0.0.255
address-group 2
#
Policy 1 has a higher priority than policy 2. Therefore, all packets destined from the private
network to the Internet match policy 1 and are forwarded through ISP1. Policy 2 is ignored.
Therefore, we must add the interfaces to different security zones and configure NAT policies
based on the interzone relationship.
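The first-match behavior that makes policy 2 dead code can be sketched as follows. This is a simplified model of ordered policy matching, not firewall code; the policy list mirrors the script above.

```python
# Sketch of ordered NAT-policy matching within one interzone: policies are
# evaluated top-down and the first source match wins, so two policies with
# the same source condition can never both take effect.

import ipaddress

policies = [
    {"id": 1, "source": "192.168.0.0/24", "address_group": 1},
    {"id": 2, "source": "192.168.0.0/24", "address_group": 2},  # never hit
]

def select_pool(src_ip):
    for p in policies:          # first match wins
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(p["source"]):
            return p["address_group"]
    return None

# Every private source selects address group 1; group 2 is unreachable.
```

Putting the two WAN interfaces in different zones sidesteps this entirely, because the NAT policy is then chosen by interzone pair before the ordered list is ever consulted.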
4.1.8 Summary
We have thoroughly studied various NAT implementations. Now let's compare them side by
side, as shown in Table 4-2.
Table 4-2 Comparison between source NAT implementations supported by Huawei firewalls
Source NAT Implementation | IP Address Mapping | Port Translated? | Dynamic Server-Map Entry Generated? | Blackhole Route Needed? | Source Address in the Security Policy
NAT No-PAT | One-to-one | No | Yes | Yes | Private address before NAT
NAPT | Many-to-one, many-to-many | Yes | No | Yes | Private address before NAT
Egress interface address mode (also called easy-IP) | Many-to-one | Yes | No | No | Private address before NAT
Smart NAT | One-to-one + many-to-one (the reserved address) | No, except for the reserved address | Yes, but only for NAT No-PAT | Yes | Private address before NAT
Triplet NAT | Many-to-one, many-to-many | Yes | Yes | No | Private address before NAT
4.1.9 Further Reading
Triplet NAT has a scientific name: full cone. According to RFC3489, full cone is one of the
four types of port mapping methods in NAT. The other three are restricted cone, port restricted
cone, and symmetric.
To further your understanding, we will compare the full cone mode with the symmetric mode.
Since RFC3489 has been obsoleted by RFC5389, the restricted cone and port restricted cone
are not discussed.
Full cone NAT is illustrated in Figure 4-10. The public address and port of a host on the
private network remain stable for a period of time after address translation, regardless of the
destination addresses. Therefore, the host on the private network can use the same triplet
(source IP address, source port, and protocol) to access different hosts on the Internet, and
hosts on the Internet can also initiate access to the host on the private network using the same
triplet.
Figure 4-10 Full cone NAT (Client1 at TCP[S0:P0] behind the firewall is mapped to the same TCP[S1:P1] for Client2, Client3, and Client4 alike)
Symmetric NAT is illustrated in Figure 4-11. The addresses of hosts on the private network
are translated based on destination addresses, so the public addresses and ports vary from
destination to destination. Because each destination sees a different mapping, only the specific
Internet host and port for which a mapping was created can send packets back through it; to
reach the private network, you must specify the target hosts and ports. Symmetric NAT is
therefore also called quintuplet (source IP address, destination IP address, source port,
destination port, and protocol) NAT. NAPT is also quintuplet NAT.
Figure 4-11 Symmetric NAT (Client1 at TCP[S0:P0] behind the firewall is mapped to TCP[S1:P1] for Client2, TCP[S1:P2] for Client3, and TCP[S1:P3] for Client4)
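The contrast between the two mapping keys can be sketched directly. This is an illustrative model; the public address and sequential port allocation are assumptions for the example.

```python
# Sketch of the two mapping disciplines: full cone keys the mapping on the
# triplet (protocol, source IP, source port) and ignores the destination;
# symmetric NAT keys it on the quintuplet, so every new destination gets a
# fresh public port.

import itertools

ports = itertools.count(2048)          # assumed sequential port allocator
full_cone, symmetric = {}, {}

def map_full_cone(proto, src, sport, dst, dport):
    key = (proto, src, sport)          # destination ignored
    if key not in full_cone:
        full_cone[key] = ("1.1.1.1", next(ports))
    return full_cone[key]

def map_symmetric(proto, src, sport, dst, dport):
    key = (proto, src, sport, dst, dport)   # destination included
    if key not in symmetric:
        symmetric[key] = ("1.1.1.1", next(ports))
    return symmetric[key]

a = map_full_cone("tcp", "10.0.0.2", 5000, "8.8.8.8", 80)
b = map_full_cone("tcp", "10.0.0.2", 5000, "9.9.9.9", 80)   # same mapping
c = map_symmetric("tcp", "10.0.0.2", 5000, "8.8.8.8", 80)
d = map_symmetric("tcp", "10.0.0.2", 5000, "9.9.9.9", 80)   # new port
```

The full cone mappings `a` and `b` are identical, which is what lets P2P peers rendezvous; the symmetric mappings `c` and `d` differ, which is why symmetric NAT breaks that rendezvous.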
4.2 NAT Server
4.2.1 NAT Server Mechanism
Schools and companies may need to provide services for external users. The servers that
provide such services usually use private addresses and cannot be accessed by users on the
Internet. In this case, how can we configure the firewall as the gateway to resolve the
problem?
Readers who have read about source NAT must have thought of NAT.
Bingo! You are on the right track. However, source NAT serves users on a private network
who need to access the Internet, and the situation here is just the opposite: a server on the
private network provides services, and users on the Internet initiate access to the server. The
address to be translated changes from the source address to the destination address. Therefore,
we name this implementation NAT server.
Let's use Figure 4-12 to illustrate how to configure NAT server on the firewall. NAT server
also needs a public IP address, just as in source NAT. However, you do not need to put the
public address into an address pool. Let's say the public address is 1.1.1.1 in this example.
If possible, do not use the IP address of the WAN interface on the firewall as the public address for NAT.
If you must do so, specify the protocol and port during NAT server configuration to avoid conflicts
between NAT server and management functions, such as Telnet and web interface.
Figure 4-12 NAT server networking (private server 10.1.1.2/24 in the DMZ is published at public IP address 1.1.1.1; public network user 1.1.1.2/24 in the Untrust zone accesses it through the firewall)
NAT server is configured as follows:
1. Configure NAT server.
Run the following command on the firewall to map the private address (10.1.1.2) of the server
to a public address (1.1.1.1).
[FW] nat server global 1.1.1.1 inside 10.1.1.2
If multiple protocols and ports are enabled on the same server, this configuration will make all
services accessible to users on the Internet, bringing security risks. Huawei firewalls support
service-specific NAT server so that only specified services are accessible to users on the
Internet after NAT server is configured. For example, we can map port 80 to port 9980 for
users on the Internet to access.
In this example, port 80 is translated into port 9980 instead of port 80 because some carriers will block
new services on ports 80, 8000, and 8080.
[FW] nat server protocol tcp global 1.1.1.1 9980 inside 10.1.1.2 80
After NAT server is configured, server-map entries will be generated. However, unlike in
source NAT, the server-map entries in NAT server are static and are not triggered by packets.
The entries will be automatically generated after NAT server configuration and will be
automatically deleted after NAT server configuration is deleted. The server-map entries in
NAT server look like the following output:
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
Nat Server, any -> 1.1.1.1:9980[10.1.1.2:80], Zone: ---
 Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
 VPN: public -> public
Nat Server Reverse, 10.1.1.2[1.1.1.1] -> any, Zone: ---
 Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
 VPN: public -> public
Just like triplet NAT, NAT server also generates two server-map entries:
− Forward server-map entry
"Nat Server, any -> 1.1.1.1:9980[10.1.1.2:80]" is the forward server-map entry, which
records the mapping between private address/port and public address/port.
"[10.1.1.2:80]" is the private address and port of the server, and 1.1.1.1:9980 is the
public address and port. If we translate the entry into human words, it is: When any
client initiates a connection to 1.1.1.1:9980, the destination address and port will be
translated into 10.1.1.2:80. This entry is used to translate the destination address and port
of packets destined from the Internet to the server.
− Return server-map entry
"Nat Server Reverse, 10.1.1.2[1.1.1.1] -> any" is the return server-map entry. It is used to
translate the private source address into a public address when the server initiates access
to the Internet without using a source NAT policy. This is the sweet part of NAT server,
because you can use one command to configure NAT in both directions between the
server and the Internet.
Here the word "translate" appears multiple times. Yes. The entries are just address translation,
whether in the forward or return direction. They are not like the server-map entries in ASPF.
In ASPF, server-map entries can create a channel that can bypass security policies. Therefore,
in NAT server, you must configure security policies to permit traffic in both directions
between the private server and the Internet.
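The behavior of the two entries can be sketched as a pair of lookup tables. This is an illustrative model of the translation logic, not firewall code; the values match the example above (global 1.1.1.1:9980, inside 10.1.1.2:80).

```python
# Sketch of the two static server-map entries created by the nat server
# command: the forward entry rewrites the destination of inbound packets;
# the reverse entry rewrites the source address when the server itself
# initiates access to the Internet.

FORWARD = {("1.1.1.1", 9980): ("10.1.1.2", 80)}   # "Nat Server" entry
REVERSE = {"10.1.1.2": "1.1.1.1"}                 # "Nat Server Reverse" entry

def inbound(dst_ip, dst_port):
    # Internet client -> server: translate the destination, if mapped.
    return FORWARD.get((dst_ip, dst_port), (dst_ip, dst_port))

def outbound(src_ip, src_port):
    # Server -> Internet: translate only the source address; the reverse
    # entry carries no port, so the source port is left unchanged.
    return REVERSE.get(src_ip, src_ip), src_port

client_view = inbound("1.1.1.1", 9980)    # what the server actually receives
server_out = outbound("10.1.1.2", 3000)   # what the Internet sees
```

One `nat server` command thus gives you translation in both directions, but, as stressed above, it is translation only: security policies still decide whether either direction is permitted.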
2. Configure a security policy.
Now comes a typical question asked by thousands of people: To allow users on the Internet
to access the private server in NAT server configuration, is the destination address in the
security policy the private address or public address? Before answering this question, let's
first take a look at how the firewall processes packets destined from users on the Internet to the
private server.
When a user initiates access to 1.1.1.1:9980 (the public address of the private server), the
firewall checks whether the packet matches a server-map entry. If a match is found, the
firewall translates the destination address and port to 10.1.1.2:80. The firewall then looks for
an outgoing interface for destination address 10.1.1.2 and checks the security zones where the
incoming and outgoing interfaces reside to determine the interzone security policy. Therefore,
the destination address in the security policy must be the private address, not the public
address mapped to it. The security policy for this example should be:
[FW] policy interzone dmz untrust inbound
[FW-policy-interzone-dmz-untrust-inbound] policy 1
[FW-policy-interzone-dmz-untrust-inbound-1] policy destination 10.1.1.2 0
[FW-policy-interzone-dmz-untrust-inbound-1] policy service service-set http
[FW-policy-interzone-dmz-untrust-inbound-1] action permit
[FW-policy-interzone-dmz-untrust-inbound-1] quit
[FW-policy-interzone-dmz-untrust-inbound] quit
If the packet is permitted by the security policies, the firewall creates the following session
and forwards the packet to the private server.
[FW] display firewall session table
Current Total Sessions : 1
http VPN:public --> public 1.1.1.2:2049-->1.1.1.1:9980[10.1.1.2:80]
Upon receiving the packet, the server responds. After the response packet
arrives on the firewall and matches the session table, the firewall translates the source address
and port of the packet to 1.1.1.1:9980 and forwards the packet to the Internet. When
subsequent packets between the user and private server arrive on the firewall, the firewall
translates the address and port based on the session table, not the server-map entry.
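The processing order described above, which is what settles the "private or public address?" question, can be sketched as follows. This is a simplified model of the first-packet path; the function and parameter names are our own.

```python
# Sketch of first-packet processing for NAT server: the server-map lookup
# and destination NAT happen BEFORE the security-policy check, so the
# policy sees the private address, not the public one.

def process_first_packet(pkt, server_map, policy_permits):
    mapped = server_map.get((pkt["dst"], pkt["dport"]))  # 1. server-map lookup
    if mapped:
        pkt["dst"], pkt["dport"] = mapped                # 2. destination NAT
    # 3. route lookup uses the (possibly translated) destination; then
    # 4. the security policy is matched against the post-NAT destination.
    return "forward" if policy_permits(pkt["dst"]) else "drop"

server_map = {("1.1.1.1", 9980): ("10.1.1.2", 80)}
# The policy is written on the private address 10.1.1.2, yet still matches.
verdict = process_first_packet(
    {"dst": "1.1.1.1", "dport": 9980},
    server_map,
    policy_permits=lambda ip: ip == "10.1.1.2",
)
```

A policy written on the public address 1.1.1.1 would never match in this model, mirroring why the real security policy must reference 10.1.1.2.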
The packet captures before and after NAT show the results of NAT server:
− The destination address and port of packets destined from the user on the Internet to the
private server are translated.
− The source address and port of packets destined from the private server to the user on the
Internet are translated.
3. Configure a blackhole route.
To avoid routing loops, a blackhole route must be configured for NAT server.
[FW] ip route-static 1.1.1.1 32 NULL 0
4.2.2 NAT Server in Multi-Egress Scenario
Like source NAT, NAT server also needs to address the multi-egress scenario. In the
following example, the firewall has two ISP links.
As shown in Figure 4-13, an enterprise has deployed a firewall at the network egress as the
gateway, which is connected to the Internet through links ISP1 and ISP2 so that the users on
the Internet can access the server on the private network.
Figure 4-13 NAT server networking in the dual-ISP scenario (web server on the private network behind the firewall, which connects to the Internet through both an ISP1 link and an ISP2 link)
As the egress gateway, the firewall is connected to two ISPs. Therefore, the NAT server
configuration is divided into two parts so that the private server can provide services through
public addresses obtained from both ISPs. Two methods are available:
Method 1: Add the WAN interfaces connected to the two ISPs to different security zones
and specify the zone parameter during NAT server configuration. In this way, the server
can advertise different public IP addresses to different security zones, as shown in
Figure 4-14.
Figure 4-14 NAT server networking in the dual-ISP scenario (WAN interfaces in different security zones)
[Topology: Web Server 172.16.0.2 in the DMZ behind GE1/0/4 (172.16.0.1/24); GE1/0/2 (1.1.1.1/30) connects to ISP1, and GE1/0/3 (2.2.2.2/30) connects to ISP2.]
In the following example, the public address advertised to ISP1 is 1.1.1.20 and that advertised
to ISP2 is 2.2.2.20.
Add interfaces to security zones.
[FW] firewall zone dmz
[FW-zone-dmz] add interface GigabitEthernet1/0/4
[FW-zone-dmz] quit
[FW] firewall zone name isp1
[FW-zone-isp1] set priority 10
[FW-zone-isp1] add interface GigabitEthernet1/0/2
[FW-zone-isp1] quit
[FW] firewall zone name isp2
[FW-zone-isp2] set priority 20
[FW-zone-isp2] add interface GigabitEthernet1/0/3
[FW-zone-isp2] quit
Configure NAT server with the zone parameter specified.
[FW] nat server zone isp1 protocol tcp global 1.1.1.20 9980 inside 172.16.0.2 80
[FW] nat server zone isp2 protocol tcp global 2.2.2.20 9980 inside 172.16.0.2 80
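Conceptually, the zone parameter adds the inbound zone to the server-map lookup key, so the same inside server can be published under a different global address per ISP. A toy model for illustration (not the firewall's real data structure):

```python
# With the zone parameter, the same inside server is advertised under a
# different global address in each ISP-facing zone.
nat_server = {
    ("isp1", ("1.1.1.20", 9980)): ("172.16.0.2", 80),
    ("isp2", ("2.2.2.20", 9980)): ("172.16.0.2", 80),
}

def match(in_zone, dst):
    """Look up the inside address by (inbound zone, global destination)."""
    return nat_server.get((in_zone, dst))

# A user arriving from the isp1 zone must use 1.1.1.20; the ISP2 global
# address only matches when the packet arrives from the isp2 zone.
print(match("isp1", ("1.1.1.20", 9980)))   # ('172.16.0.2', 80)
print(match("isp1", ("2.2.2.20", 9980)))   # None
```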
Configure two security policies based on the interzone relationship.
[FW] policy interzone isp1 dmz inbound
[FW-policy-interzone-dmz-isp1-inbound] policy 1
[FW-policy-interzone-dmz-isp1-inbound-1] policy destination 172.16.0.2 0
[FW-policy-interzone-dmz-isp1-inbound-1] policy service service-set http
[FW-policy-interzone-dmz-isp1-inbound-1] action permit
[FW-policy-interzone-dmz-isp1-inbound-1] quit
[FW-policy-interzone-dmz-isp1-inbound] quit
[FW] policy interzone isp2 dmz inbound
[FW-policy-interzone-dmz-isp2-inbound] policy 1
[FW-policy-interzone-dmz-isp2-inbound-1] policy destination 172.16.0.2 0
[FW-policy-interzone-dmz-isp2-inbound-1] policy service service-set http
[FW-policy-interzone-dmz-isp2-inbound-1] action permit
[FW-policy-interzone-dmz-isp2-inbound-1] quit
[FW-policy-interzone-dmz-isp2-inbound] quit
Of course, do not forget blackhole routes.
[FW] ip route-static 1.1.1.20 32 NULL 0
[FW] ip route-static 2.2.2.20 32 NULL 0
After the configuration, the following server-map entries are generated on the firewall.
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 1.1.1.20:9980[172.16.0.2:80], Zone: isp1
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 172.16.0.2[1.1.1.20] -> any, Zone: isp1
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server, any -> 2.2.2.20:9980[172.16.0.2:80], Zone: isp2
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 172.16.0.2[2.2.2.20] -> any, Zone: isp2
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
We can see that both the forward and return server-map entries are generated. The forward
server-map entries allow users on the Internet to access the private server, and the return
server-map entries allow the private server to initiate access to the Internet.
Therefore, we recommend that you add the WAN interfaces connected to ISP1 and ISP2 to
different security zones and configure NAT server with the zone parameter specified. If the
two interfaces have been added to the same zone and cannot be changed, there is another way.
Method 2: Specify the no-reverse parameter during NAT server configuration so that
the server can advertise two public IP addresses, as shown in Figure 4-15.
Figure 4-15 NAT server networking in the dual-ISP scenario (WAN interfaces in the same security zone)
[Topology: same as Figure 4-14, except that GE1/0/2 (1.1.1.1/30, to ISP1) and GE1/0/3 (2.2.2.2/30, to ISP2) are both in the Untrust zone.]
In this scenario, the no-reverse parameter must be specified to ensure the functioning of NAT
server. The following example illustrates the NAT server configuration. Some configurations
are the same as in method 1 and are therefore omitted.
Configure NAT server with parameter no-reverse specified.
[FW] nat server protocol tcp global 1.1.1.20 9980 inside 172.16.0.2 80 no-reverse
[FW] nat server protocol tcp global 2.2.2.20 9980 inside 172.16.0.2 80 no-reverse
After the configuration, the following server-map entries are generated on the firewall.
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 1.1.1.20:9980[172.16.0.2:80], Zone: ---
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server, any -> 2.2.2.20:9980[172.16.0.2:80], Zone: ---
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
We can see only the forward server-map entries are generated to allow users on the Internet to
access the private server. However, to allow the private server to initiate access to the
Internet, you must configure a NAT policy in the Trust-to-Untrust interzone.
You may ask: what if we configure two NAT server entries without specifying the no-reverse parameter? The answer is that the second nat server command will be rejected if the parameter is not specified.
[FW] nat server protocol tcp global 1.1.1.20 9980 inside 172.16.0.2 80
[FW] nat server protocol tcp global 2.2.2.20 9980 inside 172.16.0.2 80
Error: This inside address has been used!
Let's see what would happen if the two commands could be delivered. We run one command on one firewall and the other command on a second firewall, and then check the server-map entries on each.
[FW1] nat server protocol tcp global 1.1.1.20 9980 inside 172.16.0.2 80
[FW1] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 1.1.1.20:9980[172.16.0.2:80], Zone: ---
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 172.16.0.2[1.1.1.20] -> any, Zone: ---
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
[FW2] nat server protocol tcp global 2.2.2.20 9980 inside 172.16.0.2 80
[FW2] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 2.2.2.20:9980[172.16.0.2:80], Zone: ---
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 172.16.0.2[2.2.2.20] -> any, Zone: ---
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
We can see that the return server-map entry on one firewall translates source address
172.16.0.2 to 1.1.1.20, and that on the other firewall translates source address 172.16.0.2 to
2.2.2.20. If the two return server-map entries appear on the same firewall, what will happen?
The firewall must translate source address 172.16.0.2 to both 1.1.1.20 and 2.2.2.20. Then, the
firewall does not know what to do. That is the problem if we do not specify the no-reverse
parameter in the nat server command. If the no-reverse parameter is specified, the return
server-map entries will not be generated, and this problem will not happen.
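The ambiguity can be shown in a few lines: if both reverse entries existed on one firewall, a packet initiated by the server would have two equally valid source translations. A sketch (the entry layout is illustrative):

```python
# Reverse server-map entries map an inside source address to the global
# address used when the server itself initiates traffic. Two entries for
# the same inside address would make the translation ambiguous.
reverse_entries = [
    {"inside": "172.16.0.2", "global": "1.1.1.20"},
    {"inside": "172.16.0.2", "global": "2.2.2.20"},
]

candidates = [e["global"] for e in reverse_entries if e["inside"] == "172.16.0.2"]
ambiguous = len(candidates) > 1
print(candidates)   # ['1.1.1.20', '2.2.2.20'] -- which one should be used?
print(ambiguous)    # True: this is why the second command is rejected
```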
2. Configure the sticky load balancing function.
We have learned how to determine whether to specify the zone or no-reverse parameter based
on whether the WAN interfaces connected to ISP1 and ISP2 are added to the same security
zone. However, we need to consider more than that in the dual-ISP scenario. We also need to
consider which ISP will be used to access the private server.
For example, if the users on ISP1 network access the private server through the public address
assigned by ISP2, the route is a detour. Moreover, the two ISPs may not be connected to each
other. As a result, the connection will be slow or even unavailable.
Therefore, we must avoid such situations to ensure that the public address advertised for users
on ISP1 network is the public address obtained from ISP1 and that for users on ISP2 network
is the public address obtained from ISP2.
Moreover, when the firewall processes the return packets from the private server, such a problem may also occur. As shown in Figure 4-16, users on the ISP1 network access the private
server through the public address obtained from ISP1, and the packets are received on
GE1/0/2. When the return packets from the private server arrive on the firewall, although the
packets match the session table and NAT is performed, the firewall must determine the
outgoing interface based on the destination address. If the firewall has a default route but does
not have a specific route to the user on the Internet, the return packets may be forwarded
through GE1/0/3, which is connected to the ISP2 network. The packets transmitted through
the ISP2 network may not be able to arrive on the ISP1 network.
Figure 4-16 NAT server traffic interrupted because the forward and return packets do not pass through the same firewall interface
[Topology: same as Figure 4-14; the request from an ISP1 user arrives on GE1/0/2, but the reply may be routed out through GE1/0/3 toward the ISP2 network.]
To resolve this problem, we could configure routes to the users on the ISP1 and ISP2 networks. However, the two ISPs cover a huge number of network segments, and manual configuration is impractical. Instead, the firewall provides the sticky load balancing function: return packets are sent back out through the path on which the forward packets arrived, without relying on the routing table to determine the outgoing interface.
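In other words, the session remembers the reverse next hop learned when the first packet arrived, and replies reuse it instead of triggering a routing-table lookup. A simplified model (the field names are illustrative, not firewall internals):

```python
# The session notes the reverse next hop configured on the interface
# the first packet arrived on.
def create_session(first_packet, reverse_nexthop):
    return {"flow": (first_packet["src"], first_packet["dst"]),
            "reverse_nexthop": reverse_nexthop}

# A request from an ISP1 user arrives on GE1/0/2, whose configured
# reverse next hop is 1.1.1.254 (the ISP1 gateway).
sess = create_session({"src": "5.5.5.5", "dst": "1.1.1.20"}, "1.1.1.254")

def forward_reply(session):
    """Replies follow the recorded next hop, not a routing-table lookup."""
    return session["reverse_nexthop"]

print(forward_reply(sess))   # 1.1.1.254 -> back out through ISP1
```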
The sticky load balancing function must be configured on both firewall interfaces connected
to ISP1 and ISP2 networks. The following commands are used to enable sticky load balancing
on GE1/0/2. In this example, the next hop on ISP1 is 1.1.1.254. The commands are available
on USG9500 series.
[FW] interface GigabitEthernet 1/0/2
[FW-GigabitEthernet1/0/2] redirect-reverse nexthop 1.1.1.254
For USG2000 or USG5000 series, the commands are:
[FW] interface GigabitEthernet 1/0/2
[FW-GigabitEthernet1/0/2] reverse-route nexthop 1.1.1.254
For USG6000 series, the commands are:
[FW] interface GigabitEthernet 1/0/2
[FW-GigabitEthernet1/0/2] gateway 1.1.1.254
[FW-GigabitEthernet1/0/2] reverse-route enable
4.3 Bidirectional NAT
With the descriptions in the previous sections, I believe you now know what Source NAT and NAT Server are. With these two NAT functions, a firewall easily and skillfully translates both incoming and outgoing traffic. Well, can the two NAT functions work together? The answer is definitely "YES".
If both source and destination addresses of packets need to be translated, Source NAT and
NAT server are required. This configuration is also called "bidirectional NAT". Note that
bidirectional NAT is not an independent function. Instead, it is only a combination of Source
NAT and NAT Server. This combination applies to the same flow (for example, a packet from
an Internet user to an intranet server). When receiving the packet, the firewall translates both
its source and destination addresses. If Source NAT and NAT Server are configured on a
firewall for different flows, the configuration is not called bidirectional NAT.
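The "same flow" point can be made concrete with a sketch that applies NAT Server and then Source NAT to one packet, using the addresses that appear in the NAT Inbound example later in this chapter:

```python
# Bidirectional NAT on one flow: NAT Server translates the destination,
# then Source NAT translates the source, on the same packet.
def nat_server(pkt, global_addr, inside_addr):
    """Destination NAT: rewrite the published global address."""
    if pkt["dst"] == global_addr:
        pkt = {**pkt, "dst": inside_addr}
    return pkt

def source_nat(pkt, pool_addr):
    """Source NAT: rewrite the source to a pool address."""
    return {**pkt, "src": pool_addr}

pkt = {"src": "1.1.1.2", "dst": "1.1.1.1"}     # Internet user -> server
pkt = nat_server(pkt, "1.1.1.1", "10.1.1.2")   # NAT Server is performed first
pkt = source_nat(pkt, "10.1.1.100")            # then Source NAT
print(pkt)   # {'src': '10.1.1.100', 'dst': '10.1.1.2'}
```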
In the previous sections, we used the networking in which intranet users access the Internet to explain and verify Source NAT. Actually, based on the packet transmission direction on the firewall, Source NAT can be classified into interzone NAT and intrazone NAT.
- Interzone NAT
  NAT is performed on the packets transmitted between security zones. Interzone NAT can be further classified into the following types based on packet transmission directions:
  − NAT Inbound: NAT is performed on the packets transmitted from a low-level security zone to a high-level security zone. Generally, such NAT applies when Internet users access an intranet, and therefore this technique is seldom used.
  − NAT Outbound: NAT is performed on the packets transmitted from a high-level security zone to a low-level security zone. Such NAT applies when intranet users access the Internet, which is a common scenario.
- Intrazone NAT
  NAT is performed when packets are transmitted within a security zone. Typically, intrazone NAT works with NAT Server and is seldom configured on its own.
When intrazone or interzone NAT works with NAT Server, bidirectional NAT is implemented.
Of course, the prerequisites of the previous description are the proper setting of security levels
for security zones and appropriate network planning: the intranet belongs to the Trust zone
(with a high security level); intranet servers belong to the DMZ (with a medium security
level); and the Internet belongs to the Untrust zone (with a low security level).
Bidirectional NAT is not special in terms of technology or implementation principles, but its applicable scenarios have distinct characteristics. When is bidirectional NAT required? What are the benefits of configuring it? Is it OK not to configure it? These questions must be considered when planning and deploying live networks.
4.3.1 NAT Inbound + NAT Server
Figure 4-17 shows a typical NAT Server scenario in which an Internet user accesses an
intranet server. The following part describes how to configure and apply bidirectional NAT in
this scenario and the advantages of bidirectional NAT.
Figure 4-17 Networking for NAT Inbound + NAT Server
[Topology: intranet server 10.1.1.2/24 in the DMZ; Internet PC 1.1.1.2/24 in the Untrust zone; the server's public IP address is 1.1.1.1.]
The NAT Server and Source NAT are configured as follows. The security policy and
blackhole route configurations are the same as those provided in previous sections and
therefore are omitted in this part. Let's first look at the NAT Server configuration.
[FW] nat server protocol tcp global 1.1.1.1 9980 inside 10.1.1.2 80
I think you have no doubt about the NAT Server configuration. After the configuration is
complete, the following server map is generated on the firewall:
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 1.1.1.1:9980[10.1.1.2:80], Zone: ---
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 10.1.1.2[1.1.1.1] -> any, Zone: ---
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
Then, let's look at the Source NAT configuration.
[FW] nat address-group 1 10.1.1.100 10.1.1.100
[FW] nat-policy interzone untrust dmz inbound
[FW-nat-policy-interzone-dmz-untrust-inbound] policy 1
[FW-nat-policy-interzone-dmz-untrust-inbound-1] policy destination 10.1.1.2 0  //As NAT Server is performed prior to Source NAT, the destination address here is the post-NAT Server address, namely, the private address of the server.
[FW-nat-policy-interzone-dmz-untrust-inbound-1] action source-nat
[FW-nat-policy-interzone-dmz-untrust-inbound-1] address-group 1
[FW-nat-policy-interzone-dmz-untrust-inbound-1] quit
[FW-nat-policy-interzone-dmz-untrust-inbound] quit
The Source NAT configuration is different from that described in the previous section. The
difference is that the NAT address pool here has private addresses, not public addresses. In
addition, the NAT policy direction is inbound, indicating that NAT is performed when packets
flow from a low-level security zone to a high-level security zone. This NAT configuration is
NAT Inbound.
After the configuration is complete, when the Internet user accesses the intranet server, we
can view the session table on the firewall. The command output shows that both the source
and destination addresses of the packet have been translated.
[FW] display firewall session table
Current Total Sessions : 1
http VPN:public --> public
1.1.1.2:2049[10.1.1.100:2048]-->1.1.1.1:9980[10.1.1.2:80]
Let's see the NAT process as indicated in Figure 4-18. After the packet from the Internet user
to the intranet server arrives at the firewall, NAT Server translates the destination address
(public address of the intranet server) into a private address, and NAT Inbound translates the
source address into a private address in the same network segment as the server address. In
this way, both the source and destination addresses of the packet are translated, implementing
bidirectional NAT. When the response packet from the intranet server arrives at the firewall,
bidirectional NAT is performed again. To be specific, both the source and destination
addresses of the packet are translated into public addresses.
Figure 4-18 Address translation procedures for NAT Inbound + NAT Server
[Request from the Internet PC: source 1.1.1.2 -> 10.1.1.100, destination 1.1.1.1 -> 10.1.1.2. Reply from the intranet server: source 10.1.1.2 -> 1.1.1.1, destination 10.1.1.100 -> 1.1.1.2.]
Here you may have a question: the Internet user can still access the intranet server even if NAT Inbound is not configured, so why configure it? The answer lies in how the intranet server processes the response packet.
We have set the addresses in the NAT address pool to the same network segment as the intranet server address. When the intranet server replies to the access request from the
Internet user, it finds that its address and the destination address are in the same network
segment. Then, the server does not search the routing table. Instead, it sends an ARP broadcast
packet to query the MAC address corresponding to the destination address. In this case, the
firewall sends the MAC address of the interface connecting to the intranet server to the
intranet server and asks the intranet server to reply. Then, the intranet server sends the
response packet to the firewall, and the firewall processes the packet.
As the intranet server does not search the routing table, it is unnecessary to set a gateway.
This is the benefit of using NAT Inbound. Someone may say, "it is easier to set a gateway on the server than to configure NAT Inbound on the firewall." That is true if there is only one server on the network. If there are dozens or even hundreds of servers, you will find how convenient the NAT Inbound configuration is. Certainly, applying bidirectional NAT
in such a scenario has a prerequisite that the intranet server and firewall must be in the same
network segment. Otherwise, bidirectional NAT does not apply.
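The server's forwarding decision described above is simply a same-subnet test. A sketch with Python's ipaddress module (the /24 mask is taken from the example topology):

```python
import ipaddress

def next_hop_decision(local_ip, peer_ip, netmask="255.255.255.0"):
    """Same-subnet peers are reached via ARP directly; others need a gateway."""
    net = ipaddress.ip_network(f"{local_ip}/{netmask}", strict=False)
    return "arp-direct" if ipaddress.ip_address(peer_ip) in net else "gateway"

# The post-NAT source 10.1.1.100 shares the server's subnet: no gateway needed.
print(next_hop_decision("10.1.1.2", "10.1.1.100"))   # arp-direct
# The user's real address 1.1.1.2 would require a configured gateway.
print(next_hop_decision("10.1.1.2", "1.1.1.2"))      # gateway
```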
If I add a Trust zone in this networking and intranet users in the Trust zone need to access the
intranet server in the DMZ, how can I configure bidirectional NAT? The NAT Server
configuration remains unchanged, while the Source NAT configuration changes a little bit. As
the security level of the Trust zone is higher than that of the DMZ, NAT Outbound is required
for the packets transmitted from the Trust zone to the DMZ. That is, the bidirectional NAT
configuration changes to NAT Server + NAT Outbound.
4.3.2 Intrazone NAT + NAT Server
The combination of intrazone NAT + NAT Server applies to small networks. Figure 4-19 shows a typical small network, in which the administrator, to save trouble, has placed the intranet hosts and the server in the same security zone.
Figure 4-19 Networking diagram for intrazone NAT + NAT Server
[Topology: intranet PC 10.1.1.3/24 and intranet server 10.1.1.2/24 both in the Trust zone, connected through a switch to the firewall; the server's public IP address is 1.1.1.1.]
In this networking, if the intranet host wants to use the public address 1.1.1.1 to access the
intranet server, NAT Server must be configured on the firewall. However, merely configuring
NAT Server is not enough. As shown in Figure 4-20, after a packet from the intranet host to
the intranet server arrives at the firewall, the firewall translates the destination address of the
packet from 1.1.1.1 to 10.1.1.2. When the intranet server replies, it finds that the destination
address is in the same network segment as its own address, and the reply packet is directly
forwarded through the switch to the intranet host, bypassing the firewall.
Figure 4-20 Diagram for packet forwarding after NAT Server is configured
[The intranet PC sends a packet with source 10.1.1.3 and destination 1.1.1.1; after NAT Server, the destination becomes 10.1.1.2. The server's reply (source 10.1.1.2, destination 10.1.1.3) is forwarded directly by the switch to the intranet PC, bypassing the firewall.]
To improve intranet security by forcing the packets replied by the intranet server to pass through the firewall, we must configure intrazone NAT to translate the source address of the packets sent from the intranet host to the intranet server. The post-NAT source address can be a public or private address as long as it is not in the same network segment as the intranet server address, ensuring that the reply packets from the intranet server are forwarded to the firewall.
The NAT Server and intrazone NAT are configured as follows. The blackhole route
configuration is the same as that provided in previous sections and therefore is omitted in this
part. Let's first look at the NAT Server configuration.
[FW] nat server protocol tcp global 1.1.1.1 9980 inside 10.1.1.2 80
After the configuration is complete, the following server map is generated on the firewall:
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 1.1.1.1:9980[10.1.1.2:80], Zone: ---
   Protocol: tcp(Appro: unknown), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 10.1.1.2[1.1.1.1] -> any, Zone: ---
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
Then, let's look at the intrazone NAT configuration. The intrazone NAT configuration is
almost the same as the interzone NAT configuration. The only difference is that NAT is
performed within a zone for intrazone NAT while between zones for interzone NAT.
[FW] nat address-group 1 1.1.1.100 1.1.1.100  //The address can be either a public or private address but cannot be in the same network segment as the intranet server.
[FW] nat-policy zone trust
[FW-nat-policy-zone-trust] policy 1
[FW-nat-policy-zone-trust-1] policy destination 10.1.1.2 0  //As NAT Server is performed prior to intrazone NAT, the destination address here is the post-NAT Server address, namely, the private address of the server.
[FW-nat-policy-zone-trust-1] action source-nat
[FW-nat-policy-zone-trust-1] address-group 1
[FW-nat-policy-zone-trust-1] quit
[FW-nat-policy-zone-trust] quit
The security policy configuration is not provided because firewalls (except the USG6000
series) do not control the packets transmitted within a security zone by default. Of course,
administrators can configure proper intrazone security policies as required.
After the configuration is complete, when the intranet host accesses the intranet server at public address 1.1.1.1, we can view the session table on the firewall. The command output shows that both the source and destination addresses of the packet have been translated.
[FW] display firewall session table
Current Total Sessions : 1
http VPN:public --> public
10.1.1.3:2050[1.1.1.100:2048]-->1.1.1.1:9980[10.1.1.2:80]
Figure 4-21 shows the packet forwarding process.
Figure 4-21 Diagram for packet forwarding after intrazone NAT and NAT Server are configured
[Request from the intranet PC: source 10.1.1.3 -> 1.1.1.100, destination 1.1.1.1 -> 10.1.1.2; the firewall forwards it through the switch to the server. Reply from the server: source 10.1.1.2 -> 1.1.1.1, destination 1.1.1.100 -> 10.1.1.3; because 1.1.1.100 is not in the server's network segment, the reply goes back through the firewall.]
On the basis of this networking, if we connect the intranet host and server to the firewall
through interfaces in different network segments, only NAT Server is required, and all the
packets transmitted between the intranet host and server are forwarded through the firewall.
As you can see from the preceding description, the principle and configuration of bidirectional NAT are not complicated. The key is to clarify the NAT direction and the function of the post-NAT addresses, not whether the post-NAT addresses are public or private. In addition, bidirectional NAT is not always required; sometimes Source NAT or NAT Server alone can achieve the same effect. The flexible use of bidirectional NAT simplifies network configuration and facilitates network management, achieving the effect that one plus one is greater than two.
4.4 NAT ALG
In the previous sections, I have introduced Source NAT, NAT Server, and bidirectional NAT
supported by firewalls. These NAT functions can be flexibly used in different scenarios. As
we know, NAT translates only the addresses in headers. For some protocols, such as FTP,
packet payloads also carry address information. If firewalls cannot properly process such
information, FTP may not work properly. Well, how should firewalls process such protocol
packets?
4.4.1 FTP Packets Traversing NAT Devices
Let's use FTP as an example to see what problems occur when FTP packets attempt to pass
through NAT devices. In this example, an FTP client resides on an intranet, and an FTP server
resides on the Internet. FTP works in active mode. Figure 4-22 shows the FTP interaction
process on the NAT device.
Figure 4-22 FTP interaction process 1 when Source NAT is configured
[Active-mode FTP through Source NAT: the client 192.168.1.2:2048 on the intranet is translated to 1.1.1.1:4096 when it connects to the server 3.3.3.3:21. (1) TCP three-way handshake for the control connection. (2) The client sends a PORT command carrying 192.168.1.2:2049, which the firewall forwards unchanged. (3) The server then tries to open the data connection to 192.168.1.2:2049, which fails.]
We can see that after a control connection is established between the FTP client and server, the client sends a PORT command packet to the server, and the packet contains the private address and port of the client. The firewall directly forwards the packet to the server. After receiving the packet, the FTP server initiates a data connection to 192.168.1.2 as instructed by the PORT command. This is where the problem arises: 192.168.1.2 is a private address, and packets destined for this address cannot be transmitted on the Internet. As a result, the FTP service fails.
What should the firewall do to resolve the problem? If the firewall translates both the source
address in the header and the IP address in the application-layer information in the payload,
the data connection can be established.
The NAT Application Level Gateway (ALG) function resolves the problem. As a technique to
help packets traverse firewalls, NAT ALG enables a firewall to translate the IP addresses in
both headers and payloads. Figure 4-23 shows the FTP interaction process after NAT ALG is
enabled on the firewall.
Figure 4-23 FTP interaction process 2 when Source NAT is configured
[Same scenario as Figure 4-22, but with NAT ALG enabled: (1) TCP three-way handshake for the control connection. (2) The firewall rewrites the PORT command payload from 192.168.1.2:2049 to 1.1.1.1:4097. (3) The server opens the data connection from 3.3.3.3:20 to 1.1.1.1:4097, and the handshake succeeds.]
After NAT ALG is enabled, the firewall translates the IP address carried in the PORT
command packet payload into the public address 1.1.1.1. After receiving the packet, the FTP
server initiates a data connection to 1.1.1.1.
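The payload rewrite that NAT ALG performs on the PORT command can be sketched as follows. This is a simplified illustration that only rewrites the "h1,h2,h3,h4,p1,p2" text of the PORT command; a real ALG must also adjust TCP sequence numbers when the payload length changes.

```python
import re

def alg_rewrite_port(payload, new_ip, new_port):
    """Rewrite the address carried in an FTP PORT command payload.
    PORT encodes the address as h1,h2,h3,h4,p1,p2 where port = p1*256 + p2."""
    def repl(_match):
        h = new_ip.split(".")
        return "PORT {},{},{},{},{},{}".format(*h, new_port // 256, new_port % 256)
    return re.sub(r"PORT \d+,\d+,\d+,\d+,\d+,\d+", repl, payload)

# The client announced its private 192.168.1.2:2049 (2049 = 8*256 + 1);
# the ALG substitutes the public 1.1.1.1:4097 (4097 = 16*256 + 1).
print(alg_rewrite_port("PORT 192,168,1,2,8,1", "1.1.1.1", 4097))
# PORT 1,1,1,1,16,1
```

Payloads without a PORT command pass through unchanged, which mirrors the fact that the ALG only touches packets of the protocols it understands.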
You may have another question: even though the FTP server can initiate the data connection request, will the firewall permit it? Does this question sound familiar? To find the answer, let's go back to the ASPF function in chapter 2 "Security Policy". After ASPF is enabled, the firewall opens an invisible channel that lets the FTP data connection bypass security policy checks and traverse the firewall. The FTP service therefore requires ASPF in NAT scenarios. Actually, NAT ALG and ASPF are enabled by one command: enabling NAT ALG also enables ASPF. Therefore, after NAT ALG is enabled, the following server-map entry is generated:
 Type: ASPF, 3.3.3.3 -> 1.1.1.1:24576[192.168.1.2:55177], Zone: ---
   Protocol: tcp(Appro: ftp-data), Left-Time: 00:00:03, Pool: ---
   Vpn: public -> public
This server-map entry has address translation information. Before the aging time expires, this
entry helps the FTP server's data connection request packet traverse the firewall and
accurately arrive at the FTP client on the intranet. Figure 4-24 shows the complete FTP
interaction process.
Figure 4-24 FTP interaction process 3 when Source NAT is configured
[Complete active-mode exchange with NAT ALG and ASPF: (1) TCP three-way handshake for the control connection (client 192.168.1.2:2048 translated to 1.1.1.1:4096, server 3.3.3.3:21). (2) The PORT command is rewritten from 192.168.1.2:2049 to 1.1.1.1:4097, and an ASPF server-map entry is generated. (3) The server's data connection from 3.3.3.3:20 to 1.1.1.1:4097 matches the server-map entry, is translated to 192.168.1.2:2049, and data transfer (LIST) proceeds.]
In addition to the Source NAT scenario, the NAT Server scenario requires NAT ALG and
ASPF. In this example, an FTP client resides on the Internet, and an FTP server resides on an
intranet. FTP works in passive mode. Figure 4-25 shows the complete FTP interaction
process.
Figure 4-25 FTP interaction process when NAT Server is configured
[Passive-mode FTP through NAT Server: the intranet server 192.168.1.2:21 is published as 1.1.1.1:21, and the Internet client is 3.3.3.3:2048. (1) TCP three-way handshake for the control connection. (2) The server's PASV reply carrying 192.168.1.2:2049 is rewritten to 1.1.1.1:4097, and an ASPF server-map entry is generated. (3) The client's data connection to 1.1.1.1:4097 matches the server-map entry, is translated to 192.168.1.2:2049, and data transfer (LIST) proceeds.]
The following server-map entries are generated on the firewall:
[FW] display firewall server-map
server-map item(s)
------------------------------------------------------------------------------
 Nat Server, any -> 1.1.1.1:21[192.168.1.2:21], Zone: ---
   Protocol: tcp(Appro: ftp), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 Nat Server Reverse, 192.168.1.2[1.1.1.1] -> any, Zone: ---
   Protocol: any(Appro: ---), Left-Time: --:--:--, Addr-Pool: ---
   VPN: public -> public
 ASPF, 3.3.3.3 -> 1.1.1.1:4097[192.168.1.2:2049], Zone: ---
   Protocol: tcp(Appro: ftp-data), Left-Time: 00:00:53, Addr-Pool: ---
   VPN: public -> public
The control connection packet initiated by the FTP client matches the forward server-map
entry for NAT Server, and the address is translated. The data connection packet initiated by
the FTP client matches the server-map entry for ASPF, and the address is translated.
In conclusion, if multi-channel protocols such as FTP, SIP, and H.323 are used and NAT
applies on a network, enabling both NAT ALG and ASPF on the firewall is
recommended, so that the protocols can work properly.
4.4.2 QQ/MSN/User-defined Protocol Packets Traversing NAT Devices
We also mentioned QQ, MSN, and user-defined protocols in chapter 2 "Security Policy".
Firewalls classify such protocols into one type: STUN. STUN is short for Simple Traversal of
UDP Through Network Address Translators, which is a NAT traversal mode. Unlike NAT
ALG, STUN does not need any processing on NAT devices. Instead, the STUN client on an
intranet obtains its public address through information exchanges between the STUN client
and server. Then, the client adds the public address in the payload and sends the packet. In
this manner, the server on the Internet obtains the public address of the client from the
payload and initiates a connection request to the client.
STUN-capable protocols handle address translation on their own, so NAT ALG is not required for them. However, we still need to consider how to make their service packets pass through the firewall. The solution is again ASPF: after a packet matches an ASPF server-map entry, it can traverse the firewall.
TFTP is used as an example. The following part provides a triplet ASPF server-map entry for
the user-defined protocol in a NAT scenario (the server-map entries generated for QQ and
MSN are similar to this server-map entry):
Type: STUN, ANY -> 1.1.1.1:4096[192.168.1.2:63212], Zone: ---
 Protocol: udp(Appro: stun-derived), Left-Time: 00:04:58, Pool: 1, Section: 0
 Vpn: public -> public
Type: STUN Reverse, 192.168.1.2:63212[1.1.1.1:4096] -> ANY, Zone: ---
 Protocol: udp(Appro: stun-derived), Left-Time: 00:04:58, Pool: 1, Section: 0
 Vpn: public -> public
Unlike the entries generated in non-NAT scenarios, these entries contain address translation
information. In addition, two server-map entries are generated. The entry with the Type being
STUN is the forward entry, while the entry with the Type being STUN Reverse is the reverse
entry. Before the aging time expires, the forward entry helps the connection request initiated
by the TFTP server traverse the firewall and accurately arrive at the TFTP client on the
intranet; the reverse entry helps the firewall translate the address and port (63212) of the
packet sent from the TFTP client to the server into the public address (1.1.1.1) and port (4096).
This implementation ensures that a specific application (identified by the port number) for an
intranet user is presented using the fixed public address and port, ensuring the normal running
of TFTP services.
4.4.3 One Command Controlling Two Functions
In early Huawei firewalls, ASPF and NAT ALG were controlled using separate commands. Currently, most firewalls use one command to control both functions: running the detect command in the intrazone or interzone view enables ASPF, and NAT ALG is then enabled automatically.
The firewall determines whether to use NAT ALG, ASPF, or both as required. All we need to do is run this one command, which reduces the configuration workload.
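For example, a minimal sketch of enabling this for FTP between the Trust and Untrust zones, in the classic interzone view used elsewhere in this chapter (the zone pairing is an assumption for illustration):

```
[FW] firewall interzone trust untrust
[FW-interzone-trust-untrust] detect ftp
[FW-interzone-trust-untrust] quit
```

With this single command, ASPF for FTP is enabled between the two zones, and if a NAT policy also applies to the traffic, the firewall enables NAT ALG for FTP automatically.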
Specific to typical protocols, Table 4-3 lists the firewall processing modes after the detect
command is used.
Table 4-3 Processing modes for typical protocols

FTP, SIP, and H.323
− Non-NAT scenario: ASPF takes effect. Server-map entries are generated to help the packets from other hosts to FTP, SIP, and H.323 hosts traverse the firewall.
− NAT scenario: Both NAT ALG and ASPF take effect. NAT ALG translates the IP addresses in payloads. ASPF generates server-map entries (with address translation information) to help the packets from other hosts on the Internet to FTP, SIP, and H.323 hosts on an intranet traverse the firewall.

QQ, MSN, and user-defined protocols
− Non-NAT scenario: ASPF takes effect. Triplet server-map entries are generated to help the packets from other hosts to QQ, MSN, and user-defined hosts traverse the firewall.
− NAT scenario: ASPF takes effect. Triplet server-map entries (with address translation information) are generated to help the packets from other hosts on the Internet to QQ, MSN, and user-defined hosts on an intranet traverse the firewall.
As indicated in the preceding table, ASPF generates a triplet server-map entry for a
user-defined protocol in the NAT scenario. We also mentioned triplet NAT in section 4.1.6
"Triplet NAT". What is the difference between them?
4.4.4 Differences Between ASPF for User-defined Protocols and Triplet NAT
Triplet NAT allows the co-existence of P2P and NAT. When an intranet host accesses the
Internet, an asymmetric network system is established after NAT is performed on a firewall.
This means that the intranet host can access the Internet, but Internet users cannot initiate
access to the intranet host.
After triplet NAT is deployed, the firewall generates a server-map entry to allow the packets
from an Internet user to the intranet user to traverse the firewall. In this way, P2P-based
applications such as file sharing, voice communications, and video transmission can properly
run in NAT scenarios.
The server map for triplet NAT contains the source (forward) and destination (reverse) entries.
Type: FullCone Src, 192.168.1.2:51451[1.1.1.1:2048] -> ANY, Zone: ---
 Protocol: tcp(Appro: ---), Left-Time: 00:00:57, Pool: 1, Section: 0
 Vpn: public -> public
Type: FullCone Dst, ANY -> 1.1.1.1:2048[192.168.1.2:51451], Zone: ---
 Protocol: tcp(Appro: ---), Left-Time: 00:00:57, Pool: 1, Section: 0
 Vpn: public -> public
In the preceding example, the entry marked FullCone Src is the source server-map entry.
Before the aging time expires, the addresses of the packets that the intranet host initiates using
the specific port (51451) to the Internet are translated into the public address (1.1.1.1) and
port (2048). This implementation ensures that a specific application (identified by the port
number) for the intranet user is presented using the fixed public address and port, ensuring the
normal running of P2P services. The entry marked FullCone Dst is the destination
server-map entry. Before the aging time expires, Internet users can initiate access to the
specified intranet user.
If we compare these server-map entries with the triplet server-map entries that ASPF
generates for user-defined protocols in NAT scenarios, we can find that two server-map
entries are generated in both conditions, including the address, port, and protocol information.
The ASPF processing for user-defined protocols in NAT scenarios can be considered as a
special triplet NAT implementation. Both techniques allow P2P packets to traverse firewalls.
Let's look at their difference. ASPF applies to both NAT and non-NAT scenarios. In non-NAT
scenarios, triplet server-map entries do not carry address translation information. In NAT
scenarios, triplet server-map entries carry address translation information. As a NAT mode,
triplet NAT works only in NAT scenarios, and its server-map entries carry address translation
information.
Another important difference is that after a packet matches an ASPF triplet server-map entry,
the firewall allows the packet to pass without checking it based on security policies (the aspf
packet-filter acl-number { inbound | outbound } command in the interzone view or the aspf
packet-filter acl-number command in the intrazone view can also be configured to filter
packets); after a packet matches a triplet NAT server-map entry, the firewall determines
whether to check the packet based on security policies according to the firewall
endpoint-independent filter enable command setting. If this function is enabled, the firewall
does not check the packet. If this function is disabled, the firewall checks the packet.
From the configuration perspective, configuring ASPF for a user-defined protocol requires familiarity with the protocol's characteristics so that the ACL can be defined accurately. An incorrect configuration may prevent the protocol from working or affect the normal running of other services. By comparison, the triplet NAT configuration is simpler: all we need to do is configure a NAT policy and set the address pool mode to triplet NAT (full-cone).
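As a hedged sketch of the triplet NAT side, reusing only command forms that appear in this chapter (the address values are assumptions, and the keyword that marks the address pool as full-cone is omitted because it varies by USG9500 version):

```
[FW] nat address-group 1 1.1.1.1 1.1.1.1
[FW] firewall endpoint-independent filter enable
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-untrust-outbound-1] address-group 1
```

The firewall endpoint-independent filter enable command, described above, makes packets matching triplet NAT server-map entries bypass security-policy checks; disable it if you want those packets checked.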
Their support conditions on Huawei firewalls also differ. The USG9500 series supports both
ASPF for user-defined protocols and triplet NAT; the USG2000/5000/6000 series supports
only ASPF for user-defined protocols. When both NAT and P2P applications exist on a
network, you are advised to configure triplet NAT on the USG9500 series or ASPF for
user-defined protocols on the USG2000/5000/6000 series for the normal running of P2P
services.
Table 4-4 lists the comparison between ASPF for user-defined protocols and triplet NAT.
Table 4-4 Differences between ASPF for user-defined protocols and triplet NAT

Server-map entry elements
− ASPF for user-defined protocols: Triplet, including the address, port, and protocol type.
− Triplet NAT: Triplet, including the address, port, and protocol type.

Number of server-map entries
− ASPF for user-defined protocols: Two, including the forward and reverse entries.
− Triplet NAT: Two, including the source and destination entries.

Operating environment
− ASPF for user-defined protocols: NAT and non-NAT scenarios.
− Triplet NAT: NAT scenarios only.

Impact on security policies
− ASPF for user-defined protocols: The packets matching ASPF server-map entries are not controlled by security policies.
− Triplet NAT: A specific command setting determines whether the packets matching triplet NAT server-map entries are controlled by security policies.

Configuration requirement
− ASPF for user-defined protocols: ACL rules must be accurately defined.
− Triplet NAT: The NAT address pool must be set to the full-cone mode.

Support condition
− ASPF for user-defined protocols: Supported by the USG2000/5000/6000/9500 series.
− Triplet NAT: Supported only by the USG9500 series.
4.5 Function of Blackhole Routes in NAT Scenarios
In previous sections, I have mentioned multiple times that a blackhole route must be configured for NAT to prevent routing loops. Why is this necessary? I will explain the cause in this section.
4.5.1 Blackhole Route in a Source NAT Scenario
First, let's establish a typical Source NAT network, as shown in Figure 4-26.
Figure 4-26 Networking diagram 1 for Source NAT
(The figure shows an intranet PC at 192.168.0.2 in the Trust zone connected to the firewall's GE1/0/1 at 192.168.0.1/24; the firewall's GE1/0/2 at 202.1.1.1/30 in the Untrust zone connects to a router toward the Internet, where a Web server at 210.1.1.2 resides. The NAT address pool is 202.1.1.10.)
The NAT configuration on the firewall is as follows:
Configure a NAT address pool.
[FW] nat address-group 1 202.1.1.10 202.1.1.10
Configure a NAT policy.
126
Learn Firewalls with Dr. WoW
[FW] nat-policy interzone trust untrust outbound
[FW-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0
0.0.0.255
[FW-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW-nat-policy-interzone-trust-untrust-outbound-1] address-group 1
[FW-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW-nat-policy-interzone-trust-untrust-outbound] quit
Configure a security policy.
[FW] policy interzone trust untrust outbound
[FW-policy-interzone-trust-untrust-outbound] policy 1
[FW-policy-interzone-trust-untrust-outbound-1] policy source 192.168.0.0 0.0.0.255
[FW-policy-interzone-trust-untrust-outbound-1] action permit
[FW-policy-interzone-trust-untrust-outbound-1] quit
[FW-policy-interzone-trust-untrust-outbound] quit
In addition, configure a default route with the next hop pointing to the address of the router
interface.
[FW] ip route-static 0.0.0.0 0 202.1.1.2
The address in the NAT address pool is 202.1.1.10. The address of the interface connecting
the firewall to the router is 202.1.1.1 with a 30-bit mask. The two addresses are not in the
same network segment.
In normal conditions, when the intranet PC accesses the Web server on the Internet, a session
table is generated, and the source address is translated.
[FW] display firewall session table
Current Total Sessions : 1
http VPN:public --> public 192.168.0.2:2050[202.1.1.10:2049]-->210.1.1.2:80
If a PC on the Internet proactively accesses the NAT address pool on the firewall, as shown in
Figure 4-27, what will happen?
Figure 4-27 Networking diagram 2 for Source NAT
(The topology is the same as in Figure 4-26, with an Internet PC at 220.1.1.2 added behind the router.)
Run the ping 202.1.1.10 command on the Internet PC. The ping fails.
PC> ping 202.1.1.10
Ping 202.1.1.10: 32 data bytes, Press Ctrl_C to break
Request timeout!
Request timeout!
Request timeout!
Request timeout!
Request timeout!
--- 202.1.1.10 ping statistics ---
5 packet(s) transmitted
0 packet(s) received
100.00% packet loss
Obviously, this is the expected result. The NAT address pool is used only for private address
translation. In other words, the firewall translates the address in the request packet only when
the intranet PC initiates an access request. The NAT address pool does not provide other
services. Therefore, when the Internet PC initiates an access request to the NAT address pool,
the request packet cannot traverse the firewall to reach the intranet PC. Consequently, the ping
fails.
The actual situation is more complicated. If we enable packet capture at GE1/0/2 on the firewall and run the ping 202.1.1.10 -c 1 command on the Internet PC to send only one packet, the command output is as follows:
PC> ping 202.1.1.10 -c 1
Ping 202.1.1.10: 32 data bytes, Press Ctrl_C to break
Request timeout!
--- 202.1.1.10 ping statistics ---
1 packet(s) transmitted
0 packet(s) received
100.00% packet loss
Then, check information about the packets captured on GE1/0/2.
Wow! The result shocks me. So many ICMP packets! I analyze these packets and find that their TTL values decrease by 1 each hop until they reach 1. We know that TTL stands for Time to Live. The TTL value of a packet decreases by 1 each time the packet is forwarded by a device, and when the TTL value reaches 0, the packet is discarded. This means that the packet from the Internet PC to the NAT address pool is repeatedly forwarded between the firewall and router until its TTL value reaches 0 and the packet is discarded.
Let's go through the process:
1.
The router receives a packet from the Internet PC to the NAT address pool and finds the
destination address is not in the directly connected network segment. Then, the router
searches its routing table and forwards the packet to the firewall.
2.
After receiving the packet, the firewall forwards it based on the default route because the
packet is not the return packet from the intranet to the Internet and does not match the
session table. What's more, the destination address is not in the directly connected
network segment (the firewall is unaware that the destination address is its NAT address
pool address). As the packet comes in to and goes out of the firewall through the same
interface, it flows within one security zone, and therefore the packet is not controlled by
security policies by default. Consequently, the firewall forwards the packet through
GE1/0/2 to the router.
3.
After receiving the packet, the router searches the routing table again and sends the packet back to the firewall. The process repeats. This poor packet is kicked back and forth between the devices like a ball until it is finally discarded.
Well, what will happen if a blackhole route is configured? First, let's configure a blackhole
route with the destination address being the NAT address pool address. To prevent the
blackhole route from affecting services, set its mask to 32-bit to exactly match 202.1.1.10.
[FW] ip route-static 202.1.1.10 32 NULL 0
Then, enable packet capture at GE1/0/2 on the firewall and run the ping 202.1.1.10 -c 1
command on the Internet PC. This time we also send only one packet. View information about
the captured packet.
You can see that only one ICMP packet is captured, indicating that the packet matches the
blackhole route on the firewall and the firewall directly discards the packet. The blackhole
route prevents routing loops between the firewall and router. The firewall sends such packets
to the black hole, instead of repeatedly forwarding them. Moreover, the blackhole route does
not affect services. The intranet PC can still access the Web server on the Internet.
You may say that the packet will eventually be discarded even if no blackhole route is configured, so the blackhole route is not necessary. In the preceding example, we used only one ping packet to demonstrate the process. But imagine that a malicious user on the Internet manipulates thousands of PCs to initiate access to the NAT address pool: numerous packets would be repeatedly forwarded between the firewall and router, consuming link bandwidth and exhausting the system resources the devices spend processing them, probably affecting normal services.
Therefore, when the NAT address pool and the public interface address are in different
network segments, you must configure a blackhole route to prevent loops.
Does the problem persist if the NAT address pool and the public interface address are in the
same network segment? Let's verify the process.
First, change the mask to 24-bit for the interface connecting the firewall to the router, so that the NAT address pool and interface address are in the same network segment. Then, delete the blackhole route configuration, enable packet capture on GE1/0/2, run the ping 202.1.1.10 -c 1 command on the Internet PC, and view information about the captured packets.
The result shows that only three ARP packets and one ICMP packet are captured. The packets
from the Internet PC to the NAT address pool are not forwarded between the firewall and
router. Let's look at the process:
1.
After receiving a packet requesting to access the NAT address pool from the Internet PC,
the router finds that the destination address of the packet belongs to a directly connected
network segment and sends an ARP request. The firewall then replies to the ARP request.
The first two captured packets complete this interaction process. Then, the router
encapsulates the MAC address notified by the firewall into a packet and sends the packet
to the firewall.
2.
After receiving the packet, the firewall finds that the destination address belongs to the
same network segment as its GE1/0/2 and sends an ARP request (the third captured ARP
packet) to search for the MAC address corresponding to this IP address (the firewall is
still unaware that the destination address is its NAT address pool address). No device
replies because this address is configured only on the firewall. Finally, the firewall
discards the packet.
So, no routing loop occurs in this situation. But if malicious users on the Internet initiate a
large number of access requests, the firewall has to send the corresponding number of ARP
requests, exhausting system resources. Therefore, a blackhole route is recommended even if
the NAT address pool and the public interface address are in the same network segment,
saving system resources on the firewall.
The following screenshot shows information about the captured packet after a blackhole route
is configured. You can see that the firewall does not send ARP requests.
There is an extreme case in which the public interface address itself is used as the post-NAT address (in Easy-IP mode) or as the NAT address pool address. Do I still need to configure a blackhole route?
Let's analyze the process. The firewall receives a packet from the Internet PC and finds that
the firewall itself is the destination of the packet. How the firewall processes the packet is
determined by the security policy applying to the interzone between the public interface's
zone and the Local zone. If the action for the matching condition is permit, the firewall
processes the packet; if the action is deny, the firewall discards the packet. In this process, no
routing loop occurs, and no blackhole route is required.
4.5.2 Blackhole Route in a NAT Server Scenario
Now, you may ask me "Does NAT Server have the same problem?" Yes, NAT Server may also
encounter routing loops, but the prerequisites are special and determined by the NAT Server
configuration. In the typical NAT Server networking shown in Figure 4-28, the Global address
of NAT Server and public interface address are in different network segments. The following
description is based on the assumption that the interface addresses, security zones, security
policies, and routes have been configured.
Figure 4-28 NAT Server networking
(The figure shows an intranet Web server at 192.168.0.20 in the DMZ connected to the firewall's GE1/0/1 at 192.168.0.1/24; the firewall's GE1/0/2 at 202.1.1.1/30 in the Untrust zone connects to a router, behind which an Internet PC at 220.1.1.2 resides. The public IP address for NAT Server is 202.1.1.20.)
If we configure imprecise NAT Server on the firewall to advertise the intranet Web server to
the Internet as follows:
[FW] nat server global 202.1.1.20 inside 192.168.0.20
The firewall translates the destination addresses of all packets from the Internet PC to
202.1.1.20 into 192.168.0.20 and then sends the packets to the intranet Web server. No loop
occurs.
If we configure refined NAT Server to advertise only the port number used by the intranet
Web server to the Internet as follows:
[FW] nat server protocol tcp 202.1.1.20 9980 inside 192.168.0.20 80
If the Internet PC then uses the ping command to access 202.1.1.20, instead of accessing port 9980 at 202.1.1.20 as we expect, the packet matches neither the server-map entry nor the session table on the firewall. The firewall therefore searches the routing table and forwards the packet through GE1/0/2. After receiving the packet, the router sends it back to the firewall, causing a routing loop.
Therefore, when NAT Server with the specified protocol and port number is configured on the firewall, and the Global address for NAT Server and the public interface address are in different network segments, you must configure a blackhole route to prevent loops.
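Mirroring the blackhole route used in the Source NAT example, such a route for this networking might look like the following (the Global address 202.1.1.20 is taken from Figure 4-28):

```
[FW] ip route-static 202.1.1.20 32 NULL 0
```

With this route, packets to 202.1.1.20 that match neither the server map nor the session table are dropped by the firewall instead of looping between the firewall and the router.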
If the Global address for NAT Server and the public interface address are in the same network
segment, after the firewall receives a ping packet, it sends an ARP request, and the following
process is the same as that described above. Likewise, a blackhole route is recommended
when the specified protocol and port number for NAT Server are configured and the
Global address for NAT Server and the public interface address are in the same network
segment, saving system resources on the firewall.
Also, we can set the public interface address as the Global address when configuring NAT Server. In this case, after receiving a packet from the Internet PC, if the packet matches the server map, the firewall translates the destination address of the packet and forwards the packet to the intranet; if the packet does not match the server map, the firewall considers itself the destination of the packet. How the firewall processes the packet is then determined by the security policy applying to the interzone between the public interface's zone and the Local zone. No routing loop occurs, and no blackhole route is required.
4.5.3 Summary
Now, I believe you have understood why we need to configure a blackhole route. Do you feel
your "internal strength" has improved? Let's sum up.
For Source NAT:
− If the NAT address pool and public interface address are in different network segments, a blackhole route is required.
− If the NAT address pool and public interface address are in the same network segment, a blackhole route is recommended.
For NAT Server with the specified protocol and port number:
− If the Global address and public interface address are in different network segments, a blackhole route is required.
− If the Global address and public interface address are in the same network segment, a blackhole route is recommended.
Besides the advantages described above, a blackhole route has another function: it can be advertised from the firewall to the router as an OSPF route.
When the NAT address pool (or Global address) and the address of the interface connecting
the firewall to the router are in different network segments, a static route must be configured
on the router to the NAT address pool or Global address, so that the router can forward the
packets destined for the NAT address pool or Global address to the firewall.
If the firewall and router run OSPF, they can automatically learn OSPF routes, reducing
manual configuration workloads. However, unlike interface addresses, the NAT address pool
and Global address cannot be advertised using the network command as OSPF routes. Well,
how can the router learn such routes?
The blackhole route helps resolve the problem. We can import the blackhole route as a static
route to the OSPF routing table on the firewall and advertise this OSPF route to the router. In
this manner, the router forwards the packets destined for the NAT address pool or Global
address to the firewall (NOT to the black hole).
The NAT Server networking is used as an example. The Global address and public interface
address are in different network segments. Both the firewall and router run OSPF. Import the
following static route to the OSPF routing table on the firewall:
[FW] ospf 100
[FW-ospf-100] import-route static
[FW-ospf-100] area 0.0.0.0
[FW-ospf-100-area-0.0.0.0] network 202.1.1.0 0.0.0.3
[FW-ospf-100-area-0.0.0.0] quit
[FW-ospf-100] quit
Now, the router can learn the route to the Global address for NAT Server:
[Router] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 7        Routes : 7

Destination/Mask    Proto   Pre  Cost  Flags  NextHop      Interface
127.0.0.0/8         Direct  0    0     D      127.0.0.1    InLoopBack0
127.0.0.1/32        Direct  0    0     D      127.0.0.1    InLoopBack0
202.1.1.0/30        Direct  0    0     D      202.1.1.2    Ethernet0/0/0
202.1.1.2/32        Direct  0    0     D      127.0.0.1    Ethernet0/0/0
202.1.1.20/32       O_ASE   150  1     D      202.1.1.1    Ethernet0/0/0
210.1.1.0/30        Direct  0    0     D      210.1.1.1    Ethernet0/0/1
210.1.1.1/32        Direct  0    0     D      127.0.0.1    Ethernet0/0/1
5 GRE&L2TP VPN
5.1 Introduction to VPN Technology
Large companies' network access needs are not limited to the company HQ network: branch companies, offices, mobile employees, and partners also require access to HQ network resources. Everyone knows that these circumstances call for Virtual Private Network (VPN) technology, but choosing which VPN technology to use still requires quite a bit of skill and knowledge, and therefore I, Dr. WoW, will share some of my knowledge about this below.
VPNs refer to private, dedicated virtual communications networks established on public
networks, and are extensively used in corporate network scenarios where branch
organizations and mobile employees connect to their company's HQ network.
How are VPN networks and VPN technologies generally classified?
5.1.1 VPN Classification
1.
By the entity that builds them
This kind of classification is made according to whether the VPN network's endpoint
equipment (key equipment) is provided by a carrier or by the enterprise itself.
−
Leasing carrier VPN lines to build a corporate VPN network: as shown in Figure 5-1,
this primarily refers to leasing a carrier's Multiprotocol Label Switching (MPLS)
VPN line services. Examples of this include the MPLS VPN line services offered by
China Unicom and China Telecom. The main advantage of MPLS VPN lines
compared with more traditional leased transmissions lines, such as E1 or
Synchronous Digital Hierarchy (SDH) lines, is that line leasing costs are lower.
Figure 5-1 Leasing carrier VPN lines to build a corporate VPN network
(The figure shows HQ and two branches, each with a CE (Customer Edge) device, connecting through the carrier's PE (Provider Edge) devices across the carrier's MPLS VPN.)
−
User-built VPN networks: As shown in Figure 5-2, the most commonly used method at present is to build an Internet-based corporate VPN network using technologies such as GRE, L2TP, IPSec, DSVPN, and SSL VPN. With this approach, a company only needs to pay for equipment purchases and Internet access fees; there is no VPN line leasing fee. In addition, companies enjoy more decision-making power over network control and can carry out network adjustments more conveniently. The VPNs that I'll be introducing are exactly this class of VPNs.
Figure 5-2 User-built enterprise VPN network
(The figure shows a branch, a partner, and a mobile employee all connecting to HQ over the Internet.)
2.
By the method of network organization
−
Remote access VPNs: Figure 5-3 shows a scenario used when a mobile employee
connects to the network using a VPN dial up. The employee can simply access the
enterprise's internal network at any place with an Internet connection through a
remote dial-up, allowing him/her to access internal network resources.
Figure 5-3 Remote access VPN
(The figure shows an access user dialing in to HQ over a VPN.)
−
Site-to-site VPN: As shown in Figure 5-4, this kind of VPN is used when
interconnecting the LANs of two of an enterprise's branches from different locations.
Figure 5-4 Site-to-site VPN
(The figure shows a branch LAN interconnected with the HQ LAN over a VPN.)
3.
By type of use
−
Access VPNs (remote access): targeted towards mobile employees, these permit a
mobile employee to "step-over" the public network to obtain remote access to a
company's internal network.
−
Intranet VPNs: intranet VPNs use a public network to interconnect a corporation's
various internal networks.
−
Extranet VPNs: an extranet VPN uses a VPN to extend a company's network to include its partners' offices, allowing different companies to set up a VPN together using the Internet. The difference between intranet VPNs and extranet VPNs primarily lies in the extent to which access is granted to a company's HQ network resources.
Figure 5-5 Remote access VPN/intranet VPN/extranet VPN
(The figure shows an intranet VPN connecting a branch to HQ, an extranet VPN connecting a partner to HQ, and a remote access VPN for a mobile employee.)
4.
By the network layer on which VPN technology operates
−
Data link layer-based VPNs: L2TP, L2F, and PPTP. Of these, L2F and PPTP have
already been replaced by L2TP, and this chapter will not further detail these two
technologies.
−
Network layer-based VPNs: GRE, IPSec, and DSVPN
−
Application layer-based VPNs: SSL
5.1.2 Key VPN Technologies
The common point of Internet-based VPN technologies is that they must solve the VPN
network's security problems:
−
The geographical location from which mobile employees connect to a network is not fixed, and the places from which they connect are frequently not protected by their company's information security measures, so there needs to be strict access authentication for mobile employees. This involves identity authentication technology. In addition, there also needs to be precise control over the resources that mobile employees can access and the privileges they are given.
−
Authorization needs to be granted flexibly to partner companies and individuals as business relationships develop, and limits also need to be placed on the extent to which partners can access the network and on the categories of data they can transmit. It is recommended that identity authentication be conducted for partners; after successful authentication, security policies can be used to limit partners' access privileges.
−
In addition, data transmission between HQ and its branch organizations, partners, and
mobile users must be secure, and the process of achieving this involves data encryption
and data validation technologies.
Below is a brief explanation of several key technologies that VPNs use to resolve these problems:
1. Tunneling technology
Tunneling technology is a fundamental VPN technology, and is similar to point-to-point connection technologies. As shown in Figure 5-6, after VPN gateway 1 receives the original packet, it "encapsulates" the packet, and then transmits it over the Internet to VPN gateway 2. VPN gateway 2 then "decapsulates" the packet to obtain the original packet.
Figure 5-6 Tunneling technology (the original packet travels from the branch to VPN gateway 1, crosses the VPN tunnel as an encapsulated packet, and is decapsulated by VPN gateway 2 before reaching the server at HQ)
The process of "encapsulation/decapsulation" itself provides security protections for the
original, 'raw' packets, and so when encapsulated packets are transmitted on the Internet,
the logical path they travel is called a "tunnel". The processes of
encapsulation/decapsulation used by different VPN technologies are completely different,
and specific encapsulation processes will be explained below in detail for each VPN
technology.
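The encapsulate/decapsulate idea can be sketched in a few lines of Python (a toy illustration with a made-up header, not any real VPN format):

```python
# Toy illustration of tunneling: the original packet is wrapped in a new
# header before crossing the Internet, then unwrapped at the far end.
# "OUTERHDR" is a made-up stand-in for a real encapsulation header.

def encapsulate(original_packet: bytes, outer_header: bytes) -> bytes:
    """VPN gateway 1: prefix a new header so the Internet sees only the wrapper."""
    return outer_header + original_packet

def decapsulate(tunneled_packet: bytes, header_len: int) -> bytes:
    """VPN gateway 2: strip the wrapper to recover the original packet."""
    return tunneled_packet[header_len:]

original = b"private payload"
wrapped = encapsulate(original, b"OUTERHDR")      # travels through the tunnel
assert decapsulate(wrapped, len(b"OUTERHDR")) == original
```

The logical path that the wrapped packet travels between the two gateways is the "tunnel"; each VPN technology defines its own wrapper format.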
2. Identity authentication technologies
These are primarily used in remote connections by employees working remotely. HQ's VPN gateways authenticate users' identities to confirm that users connecting to the internal network are legitimate, and not malicious, users.
Different VPN technologies provide different methods for user identity authentication:
− GRE: does not support user identity authentication technology.
− L2TP: relies on PPP-provided authentication (for example, CHAP, PAP, or EAP). When authenticating users accessing the network, it can use either local authentication methods or a third-party RADIUS server to verify the users' identities. Following successful authentication, users are assigned internal IP addresses, and authorization and management of the users is conducted using these IP addresses.
− IPSec: supports EAP authentication of users when IKEv2 is used. Authentication can be made using local authentication methods or using a third-party RADIUS server. Following successful authentication, users are assigned internal IP addresses, and authorization and management of the users is conducted using these IP addresses.
− DSVPN: does not support user identity authentication technology.
− SSL VPN: supports local authentication, certificate-based authentication and server-based authentication of access users. In addition, users seeking to connect to a network can also authenticate the identity of the SSL VPN server to confirm the SSL VPN server's legitimacy.
3. Encryption technology
Encryption is the process of making a plaintext message into a ciphertext message. As shown in Figure 5-7, this makes it so that even if hackers intercept and capture a packet, they have no way of knowing the packet's real meaning. The target of encryption can be either data packets or protocol packets, allowing for improved security for both.
Figure 5-7 Data encryption (the source encrypts the plaintext "Hello" into ciphertext such as "!@#$%", which is what travels across the network; the destination decrypts the ciphertext back into the plaintext "Hello")
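The plaintext-to-ciphertext round trip in Figure 5-7 can be illustrated with a toy XOR cipher. This is purely illustrative — real VPNs such as IPSec use strong ciphers like AES, and XOR must never be used in practice:

```python
# Toy XOR "encryption" purely to illustrate plaintext -> ciphertext -> plaintext.
# Real VPN encryption uses strong ciphers (e.g. AES); this is NOT secure.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"Hello"
ciphertext = xor_cipher(plaintext, b"secret")   # gibberish on the wire
assert ciphertext != plaintext
assert xor_cipher(ciphertext, b"secret") == plaintext   # decryption restores it
```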
− GRE and L2TP protocols do not provide encryption technology themselves, so these are generally combined with IPSec protocols, relying on IPSec's encryption technology.
− IPSec: supports encryption of data packets and protocol packets.
− DSVPN: supports encryption of data packets and protocol packets after an IPSec security framework is configured.
− SSL VPN: supports encryption of data packets and protocol packets.
4. Data validation technologies
Data validation technology conducts inspections of packet integrity, and discards counterfeit packets and packets that have been tampered with. How is this validation conducted? By using a kind of "digest" technology, shown in Figure 5-8 (the figure only shows the validation process; under normal circumstances, validation is used together with encryption). "Digest" technology primarily uses a hash function to convert a long packet into a short digest. Digests are computed at both the sending and receiving ends, and only packets whose digests are identical are accepted.
Figure 5-8 Data validation (the source hashes the plaintext "Hello" into a digest and sends both; the destination hashes the received plaintext again and checks whether the two digests are consistent)
− GRE: only provides simple checksum validation and keyword validation, but it can be used together with the IPSec protocol, allowing IPSec's data validation technology to be used.
− L2TP: doesn't provide data validation technology itself, but can be used together with IPSec protocols, allowing IPSec data validation technology to be used.
− IPSec: supports complete data validation and data source validation.
− DSVPN: supports complete data validation and data source validation after an IPSec security framework is configured.
− SSL VPN: supports complete data validation and data source validation.
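The digest idea above can be sketched with Python's standard library; real implementations (for example, IPSec's HMAC algorithms) work on this same compare-the-digests principle. The shared key below is a made-up placeholder:

```python
# Sketch of "digest" validation. A keyed HMAC (here HMAC-SHA256) is used so
# that an attacker who tampers with a packet cannot forge a matching digest.
import hashlib
import hmac

KEY = b"shared-secret"   # hypothetical key agreed by both tunnel endpoints

def make_digest(packet: bytes) -> bytes:
    return hmac.new(KEY, packet, hashlib.sha256).digest()

packet = b"Hello"
digest = make_digest(packet)             # sender attaches this to the packet

# Receiver recomputes the digest and accepts the packet only on an exact match.
assert hmac.compare_digest(make_digest(packet), digest)        # intact: accept
assert not hmac.compare_digest(make_digest(b"Hella"), digest)  # tampered: drop
```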
5.1.3 Summary
Table 5-1 is a brief summary of the commonly used security technologies and use scenarios for GRE, L2TP, IPSec, DSVPN, and SSL VPNs.
Table 5-1 Comparison of commonly used VPN technologies

GRE
− Scope of protection: data at the IP layer and above
− Use scenario: intranet VPN
− User identity authentication: not supported
− Encryption and validation: simple keyword validation and checksum validation supported

L2TP
− Scope of protection: data at the IP layer and above
− Use scenario: access VPN
− User identity authentication: PPP-based CHAP, PAP, and EAP authentication supported
− Encryption and validation: not supported

IPSec
− Scope of protection: data at the IP layer and above
− Use scenarios: access VPN, intranet VPN, and extranet VPN
− User identity authentication: pre-shared key or certificate-based authentication and IKEv2's EAP authentication supported
− Encryption and validation: supported

DSVPN
− Scope of protection: data at the IP layer and above
− Use scenarios: intranet VPN and extranet VPN
− User identity authentication: not supported
− Encryption and validation: supported after an IPSec security framework is configured

SSL VPN
− Scope of protection: specific application-layer data
− Use scenario: access VPN
− User identity authentication: user name/password or certificate authentication supported
− Encryption and validation: supported
This section has provided a simple introduction to VPNs. If these basics are not enough, the sections below provide a more detailed introduction to the use, configuration, and principles behind each kind of VPN technology.
5.2 GRE
To discuss GRE, we first have to shift our focus 20 years back into the past, and look at some
of the events that happened during those times. At that time, the Internet had already begun to
develop quickly, and increasing amounts of resources were being connected by the Internet,
making contact between people faster and more convenient—certainly cause for celebration.
However, the seemingly harmonious online world was also full of many causes for worry.
Life is often like this—people's sources of happiness tend to be roughly the same, but their
sources of misery vary greatly. After being connected to the Internet, private networks faced
the following headaches:
 There was no way for networks with private IP addresses to directly connect with each other through the Internet.
Not too much needs to be said about this point—all private networks use private
addresses, while packets sent on the Internet must use public addresses. This is shown in
Figure 5-9.
Figure 5-9 Private IP networks cannot connect directly through the Internet (a private IP network on each side of the Internet)
 Different types of networks (IPX, AppleTalk) cannot directly communicate with each other through the Internet.
This headache is caused by a logical 'birth defect': because IPX and IP are not the same network protocol, IP networks cannot transmit IPX packets. This is shown in Figure 5-10.
Figure 5-10 Different types of networks (IPX, AppleTalk) cannot directly communicate with each other through the Internet (a private IPX network on each side of the Internet)
There are many more similarly tear-inducing stories, and I won't go into detail about them, but basically these headaches coalesced into an enormous impetus that drove network engineers to wrack their brains looking for a solution. Finally, in 1994, Generic Routing Encapsulation (GRE) (RFC 1701 and RFC 1702) came into being.
The creation of GRE ensures that the aforesaid headaches no longer occur today. Now, I'm sure everyone is thinking: "What methods does GRE use exactly to solve all of these aforesaid difficulties?" Actually, this is very easy to explain. GRE uses the "alter ego" technique popular today: if packets sent by private networks cannot be transmitted on the Internet for various reasons, why not give these packets "alter egos" that the Internet can recognize, and then transmit them on the Internet? This works because the Internet only recognizes the alter ego, not the person behind it. The network term for this kind of alter ego is "encapsulation."
5.2.1 GRE Encapsulation/Decapsulation
The basic structural elements of any network encapsulation technology can be divided into three parts (passenger protocols, encapsulation protocols, and transport protocols), and GRE is no exception. Below, I'll use a comparison to the postal system to help us understand encapsulation technology.
 Passenger protocols
Passenger protocols are the letters we write. These letters may be written in Chinese,
English, French, etc., and the writer and reader themselves are responsible for the
specific content of the letter.
 Encapsulation protocols
Encapsulation protocols can be compared to different types of mail: mail can be sent
using ordinary mail, registered mail or EMS. Different types of mail correspond to
different encapsulation protocols.
 Transport protocols
Transport protocols are a letter's method of transport; this could be by land, sea or air.
Different methods of transportation correspond to different transport protocols.
Now that we've understood the above metaphor, we can take another look at which protocols
are used in GRE, as shown in Figure 5-11.
Figure 5-11 GRE protocols (from top to bottom: IP/IPX as the passenger protocol, GRE as the encapsulation protocol, IP as the transport protocol, and the link-layer protocol beneath)
The figure allows us to clearly see that the protocols that GRE can carry include IP protocols
and IPX protocols, and that the transport protocol used by GRE is the IP protocol.
Now that we've learned about the basic concepts behind GRE, let us next look at the
principles behind GRE encapsulation. In Figure 5-12, we've used the IP protocol as the
passenger protocol, so that the end result of encapsulation is an IP packet encapsulating an IP
packet.
Figure 5-12 GRE packet encapsulation (original packet: IP header + data; after step one: GRE header + IP header + data; after step two: new IP header + GRE header + IP header + data)
The GRE encapsulation process is made up of two steps. The first step is adding a GRE header onto the front of the private network's original packet. The second step is adding a new IP header in front of this GRE header, with the addresses in the new IP header being public network addresses. The addition of the new IP header means that this private network packet, having been encapsulated layer by layer, can now be transmitted on the Internet.
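The two encapsulation steps can be sketched with Python's struct module. This is a simplified illustration only — field values such as the TTL are arbitrary, and the outer IP checksum is left at zero rather than computed:

```python
# Sketch of GRE's two-step encapsulation: GRE header first, outer IP header second.
import struct

def add_gre_header(inner_ip_packet: bytes) -> bytes:
    # Step 1: a basic GRE header is 4 bytes: 16 flag bits (all zero here)
    # plus a protocol type; 0x0800 marks the passenger protocol as IPv4.
    return struct.pack("!HH", 0x0000, 0x0800) + inner_ip_packet

def add_outer_ip_header(gre_packet: bytes, src: bytes, dst: bytes) -> bytes:
    # Step 2: a new 20-byte IPv4 header carrying the public addresses;
    # protocol number 47 tells the receiver that the payload is GRE.
    return struct.pack("!BBHHHBBH4s4s",
                       0x45, 0,                # version/IHL, TOS
                       20 + len(gre_packet),   # total length
                       0, 0,                   # identification, flags/fragment
                       64, 47, 0,              # TTL, protocol = 47 (GRE), checksum (left 0)
                       src, dst) + gre_packet

original = bytes(40)                            # stand-in private-network packet
encapsulated = add_outer_ip_header(add_gre_header(original),
                                   src=bytes([1, 1, 1, 1]),   # 1.1.1.1
                                   dst=bytes([2, 2, 2, 2]))   # 2.2.2.2
assert encapsulated[9] == 47                    # outer header says "GRE inside"
assert len(encapsulated) == 20 + 4 + 40         # outer IP + GRE + original
```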
On firewalls, encapsulation operations are achieved using a logical interface, this being the
famous tunnel interface. From the word tunnel we can see that this logical interface is created
for tunneling. Information about the source address and destination address for the new IP
header is on the tunnel interface, and after a packet enters the tunnel interface, the firewall
will encapsulate a GRE header and IP header onto the packet.
So how does a firewall deliver a packet to the tunnel interface? This is achieved via routing,
and firewalls support two methods for this:
 Static routing
This refers to configuring, on the firewalls on both ends of the GRE tunnel, static routes to each other's private network segments, with the local tunnel interface as the outbound interface.
 Dynamic routing
This refers to configuring a dynamic routing protocol (such as OSPF) on the firewalls on both ends of the GRE tunnel and advertising the addresses of their private network segments and tunnel interfaces, so that the two firewalls learn routes to each other's private network segments. The next hop is the other firewall's tunnel interface IP address, and the outbound interface is the local firewall's tunnel interface.
Regardless of which kind of routing is used, the end goal is to generate routes for the
corresponding private network segments on the firewalls' routing tables, and to use these
routes to guide packets into the tunnel interface for encapsulation.
Figure 5-13 shows the process by which firewalls encapsulate, decapsulate, and forward
private network packets.
Figure 5-13 GRE packet forwarding process (PC_A at 192.168.1.1/24 sits behind FW_A, whose public IP is 1.1.1.1 and tunnel IP is 10.1.1.1; PC_B at 192.168.2.1/24 sits behind FW_B, whose public IP is 2.2.2.2 and tunnel IP is 10.1.1.2; the numbered steps in the figure run from FW_A's routing table lookup and tunnel interface encapsulation, across the GRE tunnel, to FW_B's tunnel interface decapsulation and final routing table lookup)
When PC_A seeks to access PC_B through GRE tunneling, the packet forwarding process on FW_A and FW_B is as follows:
1. After PC_A's original packet accessing PC_B enters FW_A, the routing table is first checked for a match.
2. FW_A then sends the packet to the tunnel interface for GRE encapsulation based upon the results of the route check, where a GRE header and a new outer-layer IP header are added.
3. FW_A then re-checks the routing table using the encapsulated packet's new IP header's destination address.
4. FW_A sends the packet to FW_B based upon the results of the route check. In the above figure it is assumed that the next-hop address found by FW_A for FW_B is 1.1.1.2.
5. After FW_B receives the packet, it must first determine whether or not this packet is a GRE packet.
How can this be deduced? We've seen that during the encapsulation process the encapsulated GRE packet has a new IP header. This new IP header includes a protocol field, which marks the type of the inner layer's protocol. If this protocol field's value is 47, this means that the packet is a GRE packet.
6. If a packet received by FW_B is a GRE packet, then the packet will be sent to the tunnel interface for decapsulation, where the outer-layer IP header and GRE header will be stripped away, restoring the original packet.
7. FW_B will re-check the routing table based upon the destination address of the original packet, and will then send the packet to PC_B using the matching routing result.
This is the complete process by which firewalls carry out GRE encapsulation/decapsulation and forwarding for private network packets. Pretty simple, huh!?
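The protocol-47 check and the decapsulation that follows it can be sketched as below. This is a simplified illustration that assumes a basic 4-byte GRE header with no optional key or checksum fields:

```python
# Sketch of the receiving firewall's logic: protocol field 47 in the outer
# IPv4 header (byte 9) means "GRE inside"; stripping the outer IP header and
# the GRE header restores the original packet.

GRE_PROTOCOL = 47

def is_gre(outer_ip_packet: bytes) -> bool:
    return outer_ip_packet[9] == GRE_PROTOCOL

def strip_gre(outer_ip_packet: bytes) -> bytes:
    ihl = (outer_ip_packet[0] & 0x0F) * 4   # outer IP header length in bytes
    gre_len = 4                             # basic GRE header, no key/checksum
    return outer_ip_packet[ihl + gre_len:]

# A hand-built 20-byte outer header: version/IHL 0x45, protocol byte = 47.
outer = bytes([0x45]) + bytes(8) + bytes([47]) + bytes(10)
packet = outer + bytes(4) + b"original"     # outer IP + GRE + original packet
assert is_gre(packet)
assert strip_gre(packet) == b"original"
```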
5.2.2 Configuring Basic GRE Parameters
Above, we discussed the tunneling, encapsulation and decapsulation processes for private
network packets from a theoretical perspective. But, I bet everyone is more interested in
learning about how to configure GRE tunneling on firewalls, isn't that right? Below, we'll use
Figure 5-14 to help explain configuration methods for GRE tunneling.
Figure 5-14 GRE VPN network organization (FW_A connects the branch network 192.168.1.0/24, with GE0/0/2 at 1.1.1.1/24 and Tunnel1 at 10.1.1.1/24; FW_B connects the HQ network 192.168.2.0/24, with GE0/0/2 at 2.2.2.2/24 and Tunnel1 at 10.1.1.2/24; a GRE tunnel runs between the two firewalls)
Configuring GRE tunneling is very simple, and can be divided into two steps.
1. Configure the tunnel interface.
Configure FW_A's tunnel interface's encapsulation parameters.
[FW_A] interface Tunnel 1
[FW_A-Tunnel1] ip address 10.1.1.1 24
[FW_A-Tunnel1] tunnel-protocol gre
[FW_A-Tunnel1] source 1.1.1.1
[FW_A-Tunnel1] destination 2.2.2.2
[FW_A-Tunnel1] quit
Add FW_A's tunnel interface to a security zone. The tunnel interface can be added onto
any one security zone; here we've added the tunnel interface to the DMZ zone.
[FW_A] firewall zone dmz
[FW_A-zone-dmz] add interface Tunnel 1
[FW_A-zone-dmz] quit
Configure FW_B's tunnel interface's encapsulation parameters.
[FW_B] interface Tunnel 1
[FW_B-Tunnel1] ip address 10.1.1.2 24
[FW_B-Tunnel1] tunnel-protocol gre
[FW_B-Tunnel1] source 2.2.2.2
[FW_B-Tunnel1] destination 1.1.1.1
[FW_B-Tunnel1] quit
Add FW_B's tunnel interface to a security zone. As with FW_A, we've added the tunnel
interface to the DMZ zone.
[FW_B] firewall zone dmz
[FW_B-zone-dmz] add interface Tunnel 1
[FW_B-zone-dmz] quit
When configuring encapsulation parameters for the tunnel interfaces, we first set the tunnel interfaces' encapsulation type to GRE, and then designated the GRE tunnel's source and destination addresses. These steps seem very simple, but although few in number, they play a decisive role in allowing the tunnel interfaces to complete GRE packet encapsulation:
− This first stipulated that the tunnel interfaces need to encapsulate GRE headers.
− Next, this stipulated the source and destination addresses for the new IP header, which in fact are just the IP addresses of the public network interfaces of the firewalls on both ends of the GRE tunnel.
These two points correspond exactly to the theory of GRE packet encapsulation, and are relatively easy to understand. However, I'm thinking that everyone may now have the following questions about the properties of the tunnel interfaces themselves:
− Is it necessary to configure IP addresses for the tunnel interfaces?
− Are the IP addresses of the tunnel interfaces on the two ends of the tunnel related to one another?
− Do the tunnel interfaces use public network IP addresses or private network IP addresses?
It is necessary to configure IP addresses for the tunnel interfaces; if IP addresses are not configured, the tunnel interfaces cannot be in an Up state. Secondly, in terms of the GRE encapsulation process, the tunnel interfaces' IP addresses do not participate in packet encapsulation, so there is no relationship between the IP addresses of the tunnel interfaces on each end of the tunnel; each can be configured separately. Finally, since the tunnel interfaces do not participate in encapsulation, there is no need to use public network addresses, and configuring private network IP addresses is fine.
2. Routing configuration—guiding packets in need of GRE encapsulation to the tunnel interface.
I've already mentioned above that firewalls support both static and dynamic routing, and
either one of these two methods can be chosen.
Static routing
To configure static routing on FW_A, set the outbound interface of the route to HQ's private network to the tunnel interface.
[FW_A] ip route-static 192.168.2.0 24 Tunnel 1
To configure static routing on FW_B, set the outbound interface of the route to the branch organization's private network to the tunnel interface.
[FW_B] ip route-static 192.168.1.0 24 Tunnel 1
Dynamic routing
To configure OSPF on FW_A, advertise the network segments for the branch organization's private network and the tunnel interface in OSPF.
[FW_A] ospf 1
[FW_A-ospf-1] area 0
[FW_A-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
[FW_A-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
To configure OSPF on FW_B, advertise the network segments for HQ's private network and the tunnel interface in OSPF.
[FW_B] ospf 1
[FW_B-ospf-1] area 0
[FW_B-ospf-1-area-0.0.0.0] network 192.168.2.0 0.0.0.255
[FW_B-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
After configuration is complete, FW_A and FW_B will learn the routes to each other's private
network segments.
There is one point to be aware of: when using the OSPF dynamic routing method, if the public network interface corresponding to the GRE tunnel also uses OSPF to advertise routes, we need to use a new OSPF process to advertise the network segments for the private network and the tunnel interface, to avoid private network packets being directly forwarded through the public network interface rather than being forwarded through the GRE tunnel.
5.2.3 Configuring GRE Security Mechanisms
I'm guessing that everyone has a common worry, which is if a malicious user on the Internet
forged a GRE packet to look like it was being sent from FW_A to FW_B, wouldn't this
pretender then be able to access FW_B's resources? When building the GRE tunnel on FW_A
and FW_B, how can we achieve mutual trust? Next we'll discuss GRE's security mechanisms.
1. Keyword validation
After the GRE tunnel has been configured on the firewall, the firewall won't actually
handle every GRE packet it receives, but will instead only handle GRE packets sent by
the corresponding terminal device with which it jointly built the GRE tunnel. The "key"
segment in the GRE header is used to achieve this function.
When the firewall encapsulates packets with a GRE header, the key bit in the GRE header is set to 1, and the key field is inserted into the GRE header. When two firewalls build a tunnel, this key field's value is used by the firewalls to verify each other's identity, and the tunnel can only be built if the key values set on the two ends are exactly identical.
The figure below shows the GRE header's information. In this header, the key bit set to 1 shows that the keyword validation function has been activated, while the "Key: 0x00003039" near the bottom is the keyword value (converted to decimal, this is 12345).
The steps for configuring keyword validation are very simple. The only thing to pay
attention to is that the keywords set for the firewalls on both ends of the tunnel must be
identical.
Set the keyword to 12345 on FW_A.
[FW_A-Tunnel1] gre key 12345
Also set the keyword on FW_B to 12345.
[FW_B-Tunnel1] gre key 12345
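How the key travels in the header can be sketched as follows. This is a simplified parse that assumes the checksum bit is not also set (a checksum field would otherwise precede the key), and it shows why 12345 appears as 0x00003039 in the capture:

```python
# Sketch of reading the GRE key field. When the K (key-present) bit is set in
# the flags, a 4-byte key follows the basic 4-byte GRE header (assuming the
# checksum bit is not also set).
import struct

KEY_PRESENT = 0x2000     # the K bit within the first 16 flag bits

def parse_gre_key(gre_header: bytes):
    flags, _proto = struct.unpack("!HH", gre_header[:4])
    if flags & KEY_PRESENT:
        return struct.unpack("!I", gre_header[4:8])[0]
    return None          # no key field present

hdr = struct.pack("!HHI", KEY_PRESENT, 0x0800, 12345)
assert parse_gre_key(hdr) == 12345
assert hex(12345) == "0x3039"    # i.e. the Key: 0x00003039 seen in the capture
```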
2. Checksum validation
Although the firewalls on both ends of the tunnel have now established mutual trust, if
the possibility exists for packets to be tampered with by malicious users when they are
transmitted over the Internet, how can we guarantee the integrity of packets during
transmission? The "checksum" field in the GRE header can be used here.
When a firewall encapsulates a GRE header onto a packet, the GRE header's checksum
bit value is set to 1. A checksum is then calculated according to the packet's information,
and this checksum is added to the checksum field. When the other terminal of the tunnel
receives this packet, it can also compute a checksum based on the packet's information
and compare this with the checksum carried by the packet. If the results of the check are
identical, then the firewall will accept the packet; if they are not identical, it will discard
the packet.
The checksum validation function is one-directional; whether or not the other firewall
has enabled the function will not affect the firewall's checksum validation function. In
real-world environments, it is advised that this be simultaneously enabled on the
firewalls on both ends of the tunnel.
In the screenshot below, the GRE header's checksum bit is 1, meaning that the checksum
validation function has been enabled. The "checksum: 0x8f8d" in the figure below is the
checksum value.
The steps for configuring checksum validation are also very simple. To activate
checksum validation on FW_A:
[FW_A-Tunnel1] gre checksum
To activate checksum validation on FW_B:
[FW_B-Tunnel1] gre checksum
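GRE's checksum field carries a standard Internet checksum (in the RFC 1071 style: sum the packet as 16-bit words, fold the carries back in, and take the one's complement). A sketch of the computation both ends perform:

```python
# Sketch of the Internet checksum used by GRE's checksum field. The sender
# stores the result in the header; the receiver recomputes it and compares.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                   # one's complement

packet = b"example GRE payload"
checksum = internet_checksum(packet)         # sender places this in the header
# Receiver recomputes and compares; any tampering changes the result.
assert internet_checksum(packet) == checksum
assert internet_checksum(packet + b"!") != checksum
```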
3. Keepalive
GRE's security mechanisms can achieve mutual trust between firewalls on both ends of a
tunnel, and can guarantee the integrity of packet transmission. However, there is still a
problem: if one of a tunnel's terminals has a failure/problem, how can the tunnel's other
terminal detect this?
GRE tunnels are a kind of stateless tunnel. "Stateless" means that one terminal of the tunnel does not track the state of the other terminal. To put it another way, if a problem arises at one end of the tunnel, the other end has no way to detect this. To resolve this problem, GRE tunneling provides the Keepalive mechanism.
As shown in Figure 5-15, after the Keepalive function is activated on FW_A, FW_A will regularly send probe packets to FW_B to detect the state of the other end of the tunnel. If FW_B can be reached, then FW_A will receive a response packet from FW_B, and FW_A will then maintain the tunnel's normal state. If FW_A does not receive a response packet from FW_B, this means that FW_B is unreachable, and FW_A will close the tunnel. This avoids data "black holes" caused by one of a tunnel's endpoints being unreachable.
Figure 5-15 GRE Keepalive function (FW_A periodically sends Keepalive packets across the tunnel; receiving a reply indicates that the tunnel works properly, while receiving no reply causes FW_A to terminate the tunnel)
The Keepalive function is one-directional; whether or not one terminal enables the
Keepalive function does not affect the other terminal's Keepalive function. In real-world
environments, it is advised that both of a tunnel's firewalls activate this function
simultaneously.
The commands to activate the Keepalive function are given below. Here is the command
to activate the Keepalive function on FW_A:
[FW_A-Tunnel1] keepalive
To activate the Keepalive function on FW_B:
[FW_B-Tunnel1] keepalive
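The keepalive decision itself can be sketched abstractly: if too many probe intervals pass without a reply, the tunnel is declared down. The interval and retry values below are made-up illustrative numbers, not the firewall's actual defaults:

```python
# Abstract sketch of the keepalive decision: tear the tunnel down when no
# reply has arrived within a given number of probe intervals.

PROBE_INTERVAL = 5       # seconds between keepalive probes (hypothetical)
MAX_MISSED = 3           # probes allowed to go unanswered (hypothetical)

def tunnel_state(now: float, last_reply: float) -> str:
    missed = (now - last_reply) // PROBE_INTERVAL
    return "UP" if missed < MAX_MISSED else "DOWN"

assert tunnel_state(now=12.0, last_reply=10.0) == "UP"     # reply 2s ago
assert tunnel_state(now=30.0, last_reply=10.0) == "DOWN"   # 20s of silence
```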
At this point in our lesson, I'm guessing that everyone thinks that GRE tunnels are a great thing, but actually that's not completely correct. GRE tunneling has its own Achilles' heel: it doesn't include a security encryption function. GRE packets without encryption are really only transparent "alter egos", and the packets are all 'visible' when transmitted in the tunnel. Therefore, in real life we very rarely use GRE by itself, but rather frequently use GRE and IPSec together. As IPSec technology is equipped with very strong encryption functions, this approach resolves GRE's security problems, and this is the GRE over IPSec technology that we'll explain below.
5.2.4 Approach to Security Policy Configuration
In "Chapter 2 Security Policies," we stated that "both data streams forwarded by a firewall
and data streams between a firewall and the outside are controlled by security policies." With
this in mind, which interfaces and security zones should we use with GRE? How can we
configure security policies for use with GRE? Due to the existence of tunnel interfaces, GRE
security policy configuration can be a bit complicated and confusing. But luckily for you, Dr.
WoW has a suite of methods that I'll use to help you understand this!
In Figure 5-16, FW_A and FW_B's GE0/0/1 interfaces are connected to private networks, and
belong to the Trust zone; the GE0/0/2 interfaces belong to the Untrust zone; the tunnel
interface belongs to the DMZ zone.
Figure 5-16 A network's GRE VPN security policy configuration (on FW_A, GE0/0/1 connects PC_A at 192.168.1.1/24 in the Trust zone, GE0/0/2 at 1.1.1.1/24 is in the Untrust zone, and Tunnel1 at 10.1.1.1/24 is in the DMZ; on FW_B, GE0/0/1 connects PC_B at 192.168.2.1/24 in the Trust zone, GE0/0/2 at 2.2.2.2/24 is in the Untrust zone, and Tunnel1 at 10.1.1.2/24 is in the DMZ; a GRE tunnel runs between the two firewalls)
The process for configuring security policies is below:
1. We first configure an interzone security policy that is as broad as possible, in order to assist GRE adjustment/testing.
FW_A's interzone default packet filtering action is set to "permit":
[FW_A] firewall packet-filter default permit all
FW_B's interzone default packet filtering action is set to "permit":
[FW_B] firewall packet-filter default permit all
2. After GRE is configured, PC_A pings PC_B. The session table is then checked. Using FW_A as an example:
[FW_A] display firewall session table verbose
Current Total Sessions : 2
gre VPN:public --> public
Zone: local--> untrust TTL: 00:04:00 Left: 00:03:37
Interface: GigabitEthernet0/0/2 NextHop: 1.1.1.2 MAC: 54-89-98-87-22-a4
<--packets:4 bytes:352 -->packets:5 bytes:460
1.1.1.1:0-->2.2.2.2:0
icmp VPN:public --> public
Zone: trust--> dmz TTL: 00:00:20 Left: 00:00:00
Interface: Tunnel1 NextHop: 192.168.2.2 MAC: 00-00-00-00-00-00
<--packets:1 bytes:60 -->packets:1 bytes:60
192.168.1.2:22625-->192.168.2.2:2048
The above information shows that PC_A can successfully ping PC_B, and that a GRE
session has been successfully and normally established.
3. Analysis of the session table allows for the most appropriate security policy for the existing conditions to be selected.
We can see two streams in the session table. One is the ICMP packets from Trust to DMZ, and the other is the GRE packets from Local to Untrust. This allows us to obtain the direction of packets on FW_A, as shown in Figure 5-17.
Figure 5-17 FW_A packet direction (the original packet from PC_A to PC_B enters through GE0/0/1 in the Trust zone and goes to the Tunnel1 interface in the DMZ, while the GRE packet travels from the Local zone out through GE0/0/2 at 1.1.1.1 in the Untrust zone into the GRE tunnel)
The above figure shows that FW_A needs to configure a Trust-->DMZ security policy
that permits packets from PC_A seeking access to PC_B to pass; another
Local-->Untrust security policy that permits FW_A and FW_B to establish a GRE tunnel
also needs to be configured.
Similarly, we can also obtain the direction for packets on FW_B, as shown in Figure
5-18.
Figure 5-18 FW_B packet direction (the GRE packet arrives from the GRE tunnel through GE0/0/2 at 2.2.2.2 in the Untrust zone and terminates in the Local zone; the original packet from PC_A to PC_B emerges from the Tunnel interface in the DMZ and travels through GE0/0/1 in the Trust zone to PC_B)
The above figure shows that FW_B needs to configure a DMZ-->Trust security policy
that permits packets from PC_A seeking access to PC_B to pass; another
Untrust-->Local security policy that permits FW_A and FW_B to establish a GRE tunnel
also needs to be configured.
When PC_B initiates access to PC_A, the packet direction is the opposite of the direction when PC_A accesses PC_B, and there is no need to further review this.
To summarize, the security policies that should be configured on FW_A and FW_B
under various conditions are shown in Table 5-2, and we should configure the security
policies that best match existing conditions as shown in the table.
Table 5-2 Selecting security policies for FW_A and FW_B based upon conditions

PC_A accesses PC_B:
− FW_A: source zone Local, destination zone Untrust, source address 1.1.1.1/32, destination address 2.2.2.2/32, used in GRE
− FW_A: source zone Trust, destination zone DMZ, source address 192.168.1.0/24, destination address 192.168.2.0/24, used in *
− FW_B: source zone Untrust, destination zone Local, source address 1.1.1.1/32, destination address 2.2.2.2/32, used in GRE
− FW_B: source zone DMZ, destination zone Trust, source address 192.168.1.0/24, destination address 192.168.2.0/24, used in *

PC_B accesses PC_A:
− FW_B: source zone Local, destination zone Untrust, source address 2.2.2.2/32, destination address 1.1.1.1/32, used in GRE
− FW_B: source zone Trust, destination zone DMZ, source address 192.168.2.0/24, destination address 192.168.1.0/24, used in *
− FW_A: source zone Untrust, destination zone Local, source address 2.2.2.2/32, destination address 1.1.1.1/32, used in GRE
− FW_A: source zone DMZ, destination zone Trust, source address 192.168.2.0/24, destination address 192.168.1.0/24, used in *

*: Use in this instance is related to the transaction type, and can be configured according to actual circumstances (for example: TCP, UDP, and ICMP).
In GRE scenarios, FW_A and FW_B's tunnel interfaces must be added to security
zones, and the security zones the tunnel interfaces belong to determine the direction
of packets within a firewall. If the tunnel interface belongs to the Trust zone, then no
DMZ-Trust interzone security policy needs to be configured, but this approach also
carries security risks. Therefore, I suggest that the tunnel interface be added to a separate
security zone, and then configured with the security policy that best matches existing
conditions.
4. Finally, the default packet filtering action is changed to "deny".
Set FW_A's interzone default packet filtering action to "deny":
[FW_A] firewall packet-filter default deny all
Set FW_B's interzone default packet filtering action to "deny":
[FW_B] firewall packet-filter default deny all
Although the aforesaid adjustment/testing process is a bit difficult, security policies
configured in this way are relatively refined, and can adequately ensure the security of
firewalls and the internal network.
5.3 The Birth and Evolution of L2TP VPNs
To discuss L2TP VPNs we must first shift our focus to the early stages of the Internet's development once more. This was an era when both individual and corporate users generally went online using telephone lines, and of course company branches and traveling users normally also used the "phone network" (academic name: PSTN, the Public Switched Telephone Network) or ISDN (Integrated Services Digital Network) to connect to their HQ networks. People called PSTN/ISDN-based VPNs "VPDNs" (Virtual Private Dial Networks). L2TP VPNs are a kind of VPDN technology, but other VPDN technologies have already gradually fallen out of use.
As shown in Figure 5-19, in a traditional PSTN/ISDN-based L2TP VPN, a carrier deploys a LAC (for VPDNs this is called a NAS, or Network Access Server) between the PSTN/ISDN and IP networks. This provides centralized L2TP VPN line services for multiple corporate users, and is equipped with authentication and charging functions. When branch organizations and mobile employees dial the special connection number for the L2TP VPN, the connecting modem uses the PPP protocol to establish a PPP session with the LAC, and authentication and charging are enabled at the same time. After successful authentication, the LAC initiates L2TP tunnel and session negotiation with the LNS, and the corporate HQ's LNS re-authenticates the access user's identity (due to security concerns). After this second authentication succeeds, the branch organization or mobile employee can access the HQ network.
Figure 5-19 PSTN/ISDN-based L2TP VPN
[Diagram: a branch and a mobile employee dial in through modems and establish PPP sessions with the LAC on the carrier IP network; an L2TP session and tunnel runs from the LAC to the LNS at HQ, with AAA servers beside both the LAC and the LNS.]
LAC: deployed on the carrier network
LNS: deployed at the enterprise headquarters' egress
LAC and LNS are L2TP protocol concepts, while NAS is a VPDN concept. So for L2TP VPNs, the LAC is actually the NAS.
As IP networks became widespread, PSTN/ISDN networks gradually fell out of use in the data communications sector. As companies and individual users alike became able to connect directly to the Internet over Ethernet, L2TP VPNs quietly took "two small steps" forward. These may have looked like only two small steps, but they allowed the formerly 'over the hill' L2TP VPN to remain on the ever-changing IP scene. Today's L2TP VPN usage scenarios are shown in Figure 5-20, and from the figure we can see that L2TP VPNs have calmly stepped onto the IP stage.
Figure 5-20 Common L2TP VPN scenario
[Diagram: a branch PPPoE client on the LAN establishes a PPPoE session with the LAC (PPPoE server), which carries an L2TP session and tunnel to the LNS at HQ; a mobile employee's L2TP client builds an L2TP session and tunnel directly to the LNS. AAA servers sit beside the LAC and the LNS.]
• The first "small step": PPP deigns to dwell on the Ethernet. This was a mandatory step in the process of evolving from dial-up networks to the Ethernet. While it was not specially designed for L2TP VPNs, L2TP VPNs were its biggest beneficiary. If branch organization users install a PPPoE client and trigger PPPoE dialing on the Ethernet, a PPPoE session is established between the PPPoE client and the LAC (PPPoE server). This does not change the process of setting up the L2TP VPN between the LAC and the LNS.
• The second "small step": extending L2TP to users' PCs. Under this kind of scenario, PCs can use the L2TP client their system comes equipped with, or third-party L2TP client software, to directly dial and set up an L2TP VPN with the LNS. L2TP clients render the services of the broker, the LAC, moot, by establishing a direct 'partnership' with HQ—it looks like this sort of 'replacement' isn't limited just to everyday life!
• The common ground between these two scenarios and the original L2TP VPN scenarios is that the company invests in buying equipment, and then uses the Internet to establish an L2TP VPN. This avoids the carrier's charges for VPN line services, and reduces long-term investment. To distinguish between the aforesaid two kinds of L2TP VPNs, the former (LAC-dial-up-based L2TP VPNs) are called NAS-initiated VPNs, while the latter (L2TP VPNs established by direct client dialing) are called client-initiated VPNs. We'll go into further detail below about these two kinds of L2TP VPNs.
5.4 L2TP Client-initiated VPNs
At present most people are pretty familiar with clients on PCs, tablets, or mobile phones. The
most common are PPPoE clients, which are the often-talked about broadband Internet access
clients. Second to these are VPN clients. These kinds of clients are not used by leisure Internet
users, but are rather generally services provided by companies for their employees working
remotely. Here we'll primarily discuss one type of VPN client, the L2TP VPN client.
The role of an L2TP VPN client is to help users initiate and build an L2TP tunnel directly to
the company HQ network on a PC, tablet or cell phone. This achieves the objective of
allowing the user to freely access the HQ network, and is a bit like how Professor Du (star of
a popular Korean sci-fi/romance drama) was able to instantly travel between two distant
worlds by controlling the entrance to the wormhole to Earth (HQ). Whether we're talking
about the real world or the virtual online world, it seems that happiness can only be
experienced when time and distance concerns are eliminated. I'll use real-world experience to
inform everyone how client-initiated VPNs can help you attain the same happiness as
Professor Du.
If Professor Du wanted to use the L2TP VPN client to pass through the "wormhole" and enter an enterprise network, he would first have to pass the LNS "gatekeeper" identity check (the checks involved are very thorough, covering everything necessary for tunnel inspection, including the user name, password, and host name). Users who pass inspection are issued a special pass (an IP address on the enterprise network) by the LNS, while those who attempt to gain unauthorized access are bid adieu; such is the simple approach on display in a client-initiated VPN's information exchange. To help everyone better grasp this, and to aid in comparing client-initiated VPNs with the NAS-initiated VPNs discussed in the next section, I've drawn a simple diagram (Figure 5-21). I'll then use this diagram to further dissect the information exchanges between the L2TP client and the LNS.
Figure 5-21 Process for building a client-initiated VPN
[Diagram: the L2TP client (public IP address 1.1.1.2; obtained private IP address 192.168.2.2) connects across an L2TP tunnel to the LNS (GE0/0/2: 1.1.1.1/24; VT interface: 192.168.2.1; address pool: 192.168.2.2~192.168.2.100), behind which sits the HQ intranet server (192.168.1.2/24). Steps: 1. Create an L2TP tunnel. 2. Create an L2TP session. 3. Create a PPP connection (authentication succeeds, and an address is allocated). 4. Encapsulate data and transmit the packet.]
Client-initiated VPN configuration is shown in Table 5-3. A key point is that the connections between the L2TP client, the LNS, and the internal network server are all direct, which avoids routing configuration; user authentication is also done using relatively simple local authentication. In addition, a gateway needs to be configured on the internal network server to ensure that its response packets bound for the L2TP client can reach the LNS.
Table 5-3 Client-initiated VPN configuration

L2TP configuration
- L2TP client (using a VPN client as an example):
  - Other terminal's IP address: 1.1.1.1
  - User log-in (PPP user) name: l2tpuser
  - User log-in (PPP user) password: Admin@123
  - LNS tunnel name (optional): LNS
  - PPP authentication mode (PAP/CHAP/EAP; some clients default to CHAP): CHAP
  - Tunnel authentication (optional; some clients don't support this): not selected
  The first three fields are mandatory, while the last three may not be available on all clients.
- LNS:
  l2tp enable
  interface Virtual-Template1
   ppp authentication-mode chap
   ip address 192.168.2.1 255.255.255.0
   remote address pool 1
  l2tp-group 1
   undo tunnel authentication
   allow l2tp virtual-template 1  //Designates a VT interface.
   tunnel name LNS  //Indicates the name of this tunnel terminal.

AAA authentication configuration
- L2TP client: -
- LNS:
  aaa
   local-user l2tpuser password cipher Admin@123  //Indicates the local user name and password.
   local-user l2tpuser service-type ppp  //Indicates the user service type.
   ip pool 1 192.168.2.2 192.168.2.100  //Indicates the address pool.
Now, I don't think many people know too much about the VT interface, is that right? The VT
interface is a logical interface used in Layer 2 protocol communication, and is usually used
during PPPoE negotiation. L2TP cooperates with PPPoE in order to acclimate itself to the
Ethernet environment, which is why we find the VT interface here. I will explain more about
the role of VT interfaces in client-initiated VPNs as we proceed.
Below, I will use packet captures to help explain the complete process of setting up a
client-initiated VPN.
5.4.1 Step 1: Setting Up an L2TP Tunnel (Control
Connection)—Three Pieces of Information Enter the Wormhole
An L2TP client and the LNS negotiate parameters such as the tunnel ID, the UDP port (the LNS uses port 1701 to respond to the client's tunnel-building request), the host name, the L2TP version, and tunnel authentication (if the client does not support tunnel authentication, the LNS's tunnel authentication function should be disabled; this is the case for the Windows 7 operating system) by exchanging three pieces of information.
To aid everyone in understanding the meaning of negotiation, Table 5-4 gives the tunnel ID
negotiation process.
Table 5-4 Tunnel ID negotiation process

| Step | Message | Content |
|---|---|---|
| Step 1 | SCCRQ | L2TP Client: Hey LNS, use "1" as the tunnel ID to communicate with me. |
| Step 2 | SCCRP | LNS: OK, L2TP Client, and make sure you also use "1" as the tunnel ID to communicate with me. |
| Step 3 | SCCCN | L2TP Client: OK. |
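The tunnel ID negotiated here ends up in the fixed L2TP header of every subsequent message. As a rough illustration (not Huawei code; field layout per RFC 2661, example values invented), the fixed L2TPv2 control-message header could be parsed like this:

```python
import struct

def parse_l2tp_control_header(data: bytes) -> dict:
    """Parse the fixed part of an L2TPv2 control-message header (RFC 2661).

    Control messages always carry the Length and Sequence fields, so the
    fixed header is 12 bytes: flags/version, length, tunnel ID, session ID,
    Ns, Nr.
    """
    flags_ver, length, tunnel_id, session_id, ns, nr = struct.unpack("!6H", data[:12])
    return {
        "is_control": bool(flags_ver & 0x8000),  # T bit: 1 = control message
        "version": flags_ver & 0x000F,           # must be 2 for L2TPv2
        "length": length,
        "tunnel_id": tunnel_id,                  # the ID negotiated via SCCRQ/SCCRP
        "session_id": session_id,
        "ns": ns,                                # send sequence number
        "nr": nr,                                # next expected receive sequence number
    }

# A hand-built SCCRP-style header: control bit set, version 2, tunnel ID 1.
hdr = struct.pack("!6H", 0xC802, 12, 1, 0, 0, 1)
info = parse_l2tp_control_header(hdr)
print(info["is_control"], info["version"], info["tunnel_id"])  # True 2 1
```

Note that the proposed tunnel ID itself actually travels in an Assigned Tunnel ID AVP inside SCCRQ/SCCRP; once negotiated, each peer writes the other side's ID into this header field.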
5.4.2 Step 2: Establishing an L2TP Session—Three Pieces of
Information to Awaken the Wormhole Gateguard
The L2TP client and the LNS exchange three pieces of information to negotiate a session ID
and establish an L2TP session. However, only if the "gateguard" is first notified can identity
authentication information be submitted!
Table 5-5 gives the process for negotiating a session ID.
Table 5-5 Process for negotiating a session ID

| Step | Message | Content |
|---|---|---|
| Step 1 | ICRQ | L2TP Client: Hey LNS, use "1" as the session ID to communicate with me. |
| Step 2 | ICRP | LNS: OK, L2TP Client, and make sure you also use "1" as the session ID to communicate with me. |
| Step 3 | ICCN | L2TP Client: OK. |
5.4.3 Step 3: Creating a PPP Connection—Identity Authentication
and Issuance of the "Special Pass"
1. LCP negotiation
LCP negotiation is conducted separately in both directions, and primarily negotiates
MRU size. MRU is a PPP data link layer parameter, and is similar to the Ethernet's MTU.
If one of the terminal devices in the PPP link sends a packet with a payload larger than
the other terminal's MRU, this packet will be fragmented when it is sent.
The above screenshot shows that the post-negotiation MRU value is 1460.
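The fragmentation rule above amounts to simple arithmetic. A toy sketch (illustrative only; real PPP/IP stacks fragment at the network layer with proper headers):

```python
def fragment_payload(payload: bytes, peer_mru: int) -> list[bytes]:
    """Split a payload into chunks no larger than the peer's negotiated MRU."""
    return [payload[i:i + peer_mru] for i in range(0, len(payload), peer_mru)]

# With the MRU of 1460 negotiated above, a 3000-byte payload becomes
# two full-size fragments plus an 80-byte remainder.
frags = fragment_payload(b"\x00" * 3000, 1460)
print([len(f) for f in frags])  # [1460, 1460, 80]
```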
2. PPP authentication
Authentication methods include CHAP, PAP, and EAP. Both CHAP and PAP authentication can be conducted either locally or on an AAA server, while EAP authentication can only be conducted on an AAA server. EAP authentication is relatively complex, and support for it differs across firewall models, so here we'll only discuss CHAP, the most common authentication method.
Table 5-6 displays a classic three-way handshake PPP authentication process.
Table 5-6 Three-way handshake PPP authentication process

| Step | Content |
|---|---|
| Step 1 | LNS: Hey, L2TP Client, I'm sending you a "challenge"; use it to encrypt your password. |
| Step 2 | L2TP Client: OK, I'm sending my user name and encrypted password to you, please authenticate them. |
| Step 3 | LNS: Authentication was successful, welcome to the world of PPP! |
The user name and password configured on the LNS are used to authenticate the client. Of course, the "person in question" and the "visa" have to match exactly; that is, the user name and password configured on the L2TP client and on the LNS must be identical. Next, I'll briefly explain what it means for the user names to be identical.
− If the "visa" configured on the LNS is the plain user name (no domain), then the L2TP client's log-in name must be that user name.
− If the "visa" configured on the LNS is the full user name (username@default or username@domain), then the L2TP client's log-in name must be username@default or username@domain.
In this example, the user name configured on the LNS is l2tpuser, so the client must enter an identical user name when it logs in. The reasoning behind this is very simple, but this is a common error that many make during configuration.
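CHAP never sends the password itself: both sides hash the same inputs and compare the results, which is exactly why the configured passwords must be identical. A minimal sketch of the RFC 1994 computation (the challenge bytes here are invented):

```python
import hashlib

def chap_response(identifier: int, password: str, challenge: bytes) -> bytes:
    """Compute a CHAP response per RFC 1994: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + password.encode() + challenge).digest()

# The LNS compares the client's response with one it computes locally from the
# configured password; identical user name + password => identical hashes.
challenge = b"\x01\x02\x03\x04"
client_response = chap_response(1, "Admin@123", challenge)
server_expected = chap_response(1, "Admin@123", challenge)
print(client_response == server_expected)  # True
```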
The concept of a "domain" is always used in AAA authentication, and I'm sure everyone
is wondering what purpose it serves to add a domain behind the user name.
Large corporations will often assign different departments to different domains, and then
create different address pools for these different departments on the LNS according to
their domain—this is to say that different departments' network segments can be
separated using address pools, which makes it easy to later deploy different security
policies for different departments.
3. IPCP negotiation to assign an IP address
The IP address assigned by the LNS to the L2TP client is 192.168.2.2.
After having read this far, everyone should be clear that the addresses in the LNS's
address pool are used to assign IP addresses to remote clients. Of course, these should be
private addresses and should also abide by the internal network's IP address planning
rules just like other internal network host addresses. But what about the VT interface?
Actually, the VT interface is also an internal network interface, and should also be
planned according to the internal network's IP address planning principles. The overall
principles behind IP address planning are as follows:
− It is suggested that independent network segments be planned for the VT interface, the address pool, and the HQ network, so that the three sets of addresses don't overlap.
− If the address pool and the HQ network are configured on the same network segment, then the ARP proxy function must be activated on the LNS interface that connects to the HQ network, and the L2TP virtual forwarding function must also be enabled, to ensure that the LNS can respond to ARP requests sent by the HQ network server.
If the LNS interface connecting to the HQ network is GE0/0/1, then the configuration for
enabling the ARP proxy function and the L2TP virtual forwarding function is as follows:
[LNS] interface GigabitEthernet0/0/1
[LNS-GigabitEthernet0/0/1] arp-proxy enable  //Enable the ARP proxy function.
[LNS-GigabitEthernet0/0/1] virtual-l2tpforward enable  //Enable the L2TP virtual forwarding function.
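The first planning principle, non-overlapping segments for the VT interface, the address pool, and the HQ network, is easy to verify mechanically. A small sketch using Python's ipaddress module, with hypothetical segment choices:

```python
import ipaddress

def check_overlaps(plans: dict[str, str]) -> list[tuple[str, str]]:
    """Report every pair of planned segments that overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in plans.items()}
    names = list(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

# Hypothetical plan: separate /24s for the VT interface, the pool, and the HQ LAN.
plan = {"vt": "192.168.2.0/24", "pool": "192.168.3.0/24", "hq": "192.168.1.0/24"}
print(check_overlaps(plan))  # [] -> no overlaps, so no ARP proxy workaround needed
```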
After reading through the process for PPP authentication, everyone should now know
that L2TP is cleverly able to use PPP's authentication functions to achieve its own
objective of authenticating remote user access. What was responsible for facilitating this
cooperative project? The VT interface:
[LNS] l2tp-group 1
[LNS-l2tp1] allow l2tp virtual-template 1
It is this above command that links L2TP with PPP; the VT interface manages PPP
authentication, while the L2TP module is the VT interface's boss. Cooperation between
the two is thus achieved in this way. The VT interface is only used between L2TP and
PPP—this is a nameless hero that doesn't participate in encapsulation, and also doesn't
need to be broadcast publicly, so it is perfectly acceptable to configure its IP address as a
private network IP address.
The L2TP client-initiated VPN negotiation process is far more complex than for GRE
VPNs. Let's summarize the characteristics of client-initiated VPN tunnels:
− L2TP VPNs differ greatly from GRE VPNs. GRE VPNs have no tunnel negotiation process; they are stateless tunnels without control connections, so there is no way to view the tunnel or inspect its state. L2TP VPNs, however, are tunnels with control connections, and both the tunnel and its sessions can be viewed and checked.
− As shown in Figure 5-22, in a client-initiated VPN there is an L2TP tunnel between the L2TP client and the LNS. There is only one L2TP session in the tunnel, and the PPP connection is carried on this L2TP session. This differs from the NAS-initiated VPNs discussed in the next section, and is important to pay attention to.
Figure 5-22 Relationship between an L2TP tunnel and session and the PPP connection in a client-initiated VPN
[Diagram: PPP dial-up from the L2TP client; a single L2TP tunnel to the LNS carries one L2TP session, on which the PPP connection runs through to the intranet server.]
5.4.4 Step 4: Data Encapsulation and Transmission—Passing Through the Wormhole to Visit Earth
After the L2TP tunnel has been built, L2TP client data can freely go to and from the HQ
network. Giving a clear explanation as to the process of how Professor Du passed through his
wormhole is difficult, but it is not very difficult to explain how L2TP client data passes
through the L2TP tunnel to reach the HQ network; this involves the encapsulation process for
L2TP data packets. This process is very similar to GRE packets adopting and later discarding
"alter-egos", with the difference being that the "alter-egos" are slightly different here.
Public IP header | UDP header | L2TP header | PPP header | Private IP header
The packet capture above shows the structure of L2TP packet encapsulation, and a more
detailed analysis elucidates the encapsulation/decapsulation process for L2TP data packets in
a client-initiated VPN scenario (Figure 5-23).
Figure 5-23 Process for client-initiated VPN packet encapsulation/decapsulation
[Diagram: on the L2TP client, the original data with its private IP header is wrapped in a PPP header, an L2TP header, a UDP header, a public IP header, and finally an Ethernet header; the LNS strips the Ethernet header and performs L2TP data decapsulation, restoring the private IP packet before forwarding it to the intranet server. The return direction mirrors this process.]
The L2TP client's process for forwarding packets towards the internal network server is as
follows:
1. The L2TP client encapsulates the original packet with a PPP header, an L2TP header, a UDP header, and finally, on the outermost layer, a public network IP header, making it an L2TP packet. The source address of the outer public IP header is the L2TP client's public IP address, while the destination address is the IP address of the LNS's public network interface.
2. The L2TP packet passes through the Internet to the LNS.
3. After the LNS receives the packet, it completes identity authentication and packet decapsulation in the L2TP module, discarding the PPP header, the L2TP header, the UDP header, and the outer IP header to restore the original packet.
4. The original packet carries only the inner private network IP header. This header's source address is the private IP address obtained by the L2TP client, while the destination address is the private IP address of the internal network server. The LNS checks its routing table based upon the destination address, and then forwards the packet using the matching route.
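The push/pop nature of this encapsulation can be sketched with a simple list model (illustrative only; headers are represented as dictionaries rather than real byte layouts):

```python
# Model the header stack as a list: encapsulation pushes headers on the
# outside, decapsulation pops them off again in the same order.
def l2tp_encapsulate(inner_packet):
    """Wrap an inner private-IP packet the way the L2TP client does."""
    return [
        {"hdr": "public IP", "src": "1.1.1.2", "dst": "1.1.1.1"},  # outermost
        {"hdr": "UDP", "dport": 1701},
        {"hdr": "L2TP"},
        {"hdr": "PPP"},
        inner_packet,  # the private IP header and data stay untouched
    ]

def l2tp_decapsulate(packet):
    """Strip the four outer headers on the LNS, restoring the original packet."""
    return packet[4]

original = {"hdr": "private IP", "src": "192.168.2.2", "dst": "192.168.1.2"}
restored = l2tp_decapsulate(l2tp_encapsulate(original))
print(restored == original)  # True
```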
The L2TP client now has obstacle-free access to the HQ internal network server, but there is
still a problem: how do return packets from HQ's internal network server bound for the L2TP
client enter the tunnel to return to the L2TP client? We don't seem to have configured any
route to guide these return packets into the tunnel, right? A look at the routing table on the
LNS shows something interesting: the LNS has automatically issued a host route for the L2TP
client that obtained the private network IP address.
[LNS] display ip routing-table
Destination/Mask    Proto   Pre  Cost  Flags  NextHop      Interface
192.168.2.2/32      Direct  0    0     D      192.168.2.2  Virtual-Template1
This automatically generated host route is a user network route (UNR). Its destination address and next hop are both the private network address that the LNS assigned to the L2TP client, and the outbound interface is the VT interface. This route is the LNS's wormhole entrance, and guides packets bound for the L2TP client into the tunnel. Our question has been resolved, and it should now be easy to understand the forwarding process for return packets from the internal network server.
5. After the LNS receives a return packet from the internal network server, it looks up a route based upon the packet's destination address (the L2TP client's private network IP address), selects the UNR, and sends the return packet to the VT interface.
6. The return packet is encapsulated with a PPP header, an L2TP header, a UDP header, and an outer public network IP header in the L2TP module.
7. The LNS checks the routing table based upon the destination IP address of the packet's outer IP header (the L2TP client's public network IP address), and then forwards the packet using the matching route.
The above process is a bit complicated, as return packets must be matched twice to the routing
table on their trip back to the L2TP client.
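The reason the /32 UNR wins for return traffic is ordinary longest-prefix matching. A toy lookup over the two routes in this scenario (illustrative, not firewall code):

```python
import ipaddress

# A toy routing table: the /32 UNR generated for the L2TP client, and the
# directly connected HQ segment.
routes = [
    (ipaddress.ip_network("192.168.2.2/32"), "Virtual-Template1"),   # UNR
    (ipaddress.ip_network("192.168.1.0/24"), "GigabitEthernet0/0/1"),
]

def lookup(dst: str) -> str:
    """Return the egress interface of the longest matching prefix."""
    matches = [(net, ifc) for net, ifc in routes if ipaddress.ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Return traffic for the client hits the /32 UNR and enters the VT interface.
print(lookup("192.168.2.2"))  # Virtual-Template1
print(lookup("192.168.1.2"))  # GigabitEthernet0/0/1
```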
Our above explanation only used one L2TP client, but in real-world environments there will
be many L2TP clients accessing the HQ network through the wormhole simultaneously. If the
L2TP client is not satisfied with only accessing the HQ network, but also wants to access
other L2TP clients (this is to say, if there is to be mutual access between L2TP clients), can
L2TP achieve this? Don't forget, the LNS is the transfer station that connects multiple
wormholes, and it has host routes to many L2TP clients. Therefore, two L2TP clients can
freely access each other through LNS forwarding, as shown in Figure 5-24. Of course, the premise for mutual access is that each L2TP client must know the IP address that the LNS assigned to the other. This premise is not easily satisfied, so scenarios with mutual access between L2TP clients are not very common.
Figure 5-24 L2TP client mutual access scenario
[Diagram: two L2TP clients each build an L2TP tunnel to the LNS, which fronts the intranet server; traffic between the two clients is forwarded by the LNS across the two tunnels.]
5.4.5 Approach to Security Policy Configuration
The overall approach to configuration of security policies for L2TP client-initiated VPNs is
similar to that for configuration of GRE security policies, except that the tunnel interface has
been replaced by the VT interface.
As shown in Figure 5-25, in our hypothetical scenario, the LNS's GE0/0/1 is connected to the
HQ private network, and belongs to the Trust zone; GE0/0/2 is connected to the Internet, and
belongs to the Untrust zone; the VT interface belongs to the DMZ zone; the IP address
assigned by the LNS for the L2TP client is 192.168.2.2.
Figure 5-25 Network organization for configuring client-initiated VPN security policies
[Diagram: the L2TP client (public IP address 1.1.1.2; assigned private IP address 192.168.2.2) builds an L2TP tunnel across the Untrust zone to the LNS's GE0/0/2 (1.1.1.1/24); the VT1 interface (192.168.2.1/24) belongs to the DMZ, and GE0/0/1 connects to the intranet server in the Trust zone.]
The process for configuring a security policy is as follows:
1. We first configure the broadest possible interzone security policy to assist L2TP VPN adjustments/testing.
The interzone default packet filtering action on the LNS is set to "permit":
[LNS] firewall packet-filter default permit all
2. After L2TP configuration, we ping the internal network server from the L2TP client, and then check the session table.
[LNS] display firewall session table verbose
Current Total Sessions : 2
l2tp VPN:public --> public
Zone: untrust--> local TTL: 00:02:00 Left: 00:01:58
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets:20 bytes:1120 -->packets:55 bytes:5781
1.1.1.2:1701-->1.1.1.1:1701
icmp VPN:public --> public
Zone: dmz--> trust TTL: 00:00:20 Left: 00:00:01
Interface: GigabitEthernet0/0/1 NextHop: 192.168.1.2 MAC: 20-0b-c7-25-6d-63
<--packets:5 bytes:240 -->packets:5 bytes:240
192.168.2.2:1024-->192.168.1.2:2048
The above information shows that the L2TP client successfully pinged the internal
network server, and that an L2TP session has been created.
3. Analysis of the session table shows the existing conditions against which a refined security policy can be matched.
From the session table we can see two streams (one being Untrust-->Local L2TP packets,
and the other DMZ-->Trust ICMP packets), and we therefore can obtain the direction of
packets on the LNS, as shown in Figure 5-26.
Figure 5-26 LNS packet direction
[Diagram: on the LNS (FW), the L2TP packet from the L2TP client (private IP address 192.168.2.2) arrives at GE0/0/2 (1.1.1.1) in the Untrust zone and is terminated at the Local zone; the decapsulated original packet travels from the VT interface in the DMZ out GE0/0/1 in the Trust zone to the intranet server (192.168.1.2/24).]
The above figure shows that the LNS needs to configure a DMZ-->Trust security policy
that allows packets from the L2TP client seeking access to the internal network server to
pass, and also needs to configure an Untrust-->Local security policy to allow the L2TP
client and the LNS to establish an L2TP tunnel.
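Deriving the interzone directions from the session table, as we just did by eye, can also be sketched programmatically; the snippet below scans the "Zone:" lines of output like that shown above (a simplified excerpt, not a full parser):

```python
import re

# A trimmed copy of the two session entries from the LNS session table.
session_output = """\
l2tp VPN:public --> public
Zone: untrust--> local TTL: 00:02:00 Left: 00:01:58
icmp VPN:public --> public
Zone: dmz--> trust TTL: 00:00:20 Left: 00:00:01
"""

# Each "Zone:" line names an interzone direction that a security policy must cover.
directions = re.findall(r"Zone:\s*(\w+)-->\s*(\w+)", session_output)
print(directions)  # [('untrust', 'local'), ('dmz', 'trust')]
```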
After the L2TP tunnel is established, the direction of return packets sent from the internal
server to the L2TP client is the opposite of when the L2TP client accesses the internal
server, and we therefore do not need to elaborate further about this.
To summarize, the security policies that should be configured on the LNS under various
conditions are shown in Table 5-7, and we should configure the security policies that best
match existing conditions.
Table 5-7 Selecting LNS security policies based upon existing conditions

| Transaction Direction | Source Security Zone | Destination Security Zone | Source Address | Destination Address | Used in |
|---|---|---|---|---|---|
| The L2TP client accesses the internal network server | Untrust | Local | ANY | 1.1.1.1/32 | L2TP |
| | DMZ | Trust | 192.168.2.2~192.168.2.100 (address pool addresses) | 192.168.1.0/24 | * |
| The internal network server accesses the L2TP client | Trust | DMZ | 192.168.1.0/24 | 192.168.2.2~192.168.2.100 (address pool addresses) | * |
*: Use in this instance is related to the transaction type, and can be configured according to actual
circumstances (for example: TCP, UDP, and ICMP).
In this scenario, the LNS only passively receives the request from the L2TP client to establish a tunnel,
but will not actively initiate a request to the L2TP client to establish a tunnel, so only an Untrust-->Local
security policy needs to be configured on the LNS for the L2TP tunnel.
Therefore, in L2TP VPN scenarios using the client-initiated method, the LNS's VT
interface must be added to a security zone, and the security zone the VT interface
belongs to determines the direction of packets on the firewall. If the VT interface
belongs to the Trust zone, then no DMZ-Trust interzone security policy is necessary, but
this also involves security risks. Therefore, it is advisable to add the VT interface to a
separate security zone, and then configure the security policy that best matches the
existing conditions.
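Table 5-7's logic (match source/destination zone and address, otherwise fall back to the default action) can be sketched as follows. This is a conceptual model, not the firewall's actual implementation, and the pool range is approximated by its /24 for brevity:

```python
import ipaddress

# Two permit policies in the spirit of Table 5-7.
policies = [
    {"src_zone": "untrust", "dst_zone": "local",
     "src": "0.0.0.0/0", "dst": "1.1.1.1/32", "action": "permit"},
    {"src_zone": "dmz", "dst_zone": "trust",
     "src": "192.168.2.0/24", "dst": "192.168.1.0/24", "action": "permit"},
]

def evaluate(src_zone, dst_zone, src_ip, dst_ip, default="deny"):
    """Return the first matching policy's action, else the default action."""
    for p in policies:
        if (p["src_zone"] == src_zone and p["dst_zone"] == dst_zone
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(p["src"])
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(p["dst"])):
            return p["action"]
    return default

print(evaluate("dmz", "trust", "192.168.2.2", "192.168.1.2"))  # permit
print(evaluate("trust", "untrust", "192.168.1.2", "8.8.8.8"))  # deny
```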
4. Finally, the default packet filtering action is changed to "deny".
[LNS] firewall packet-filter default deny all
5.5 L2TP NAS-initiated VPNs
In the above section we discussed how client-initiated VPNs can allow companies' mobile
employees to pass through the "wormhole", just like Professor Du, and freely access the HQ
network. However, company branch organization users are not as fortunate, and they
generally use a dial-up network to connect to the Internet. Faced with the vast ocean of the
Internet, and unable to find the entrance to the "wormhole", they can only look at this ocean
and sigh sadly. Even if their dial-up network has evolved onto the Ethernet, this only solves
the local Internet connection problem, but they are still unable to access the HQ network.
Does this mean branch organization users are destined to be forever separated from the HQ
network?
Fortunately, the arrival of LACs onto the network scene has helped branch organizations solve
this problem. The LAC serves as a PPPoE server, and a branch organization user, as the
PPPoE client, can establish a PPPoE connection with the LAC, allowing PPP to be used freely
on the Ethernet. Additionally, as an "agent" for the LNS, the LAC provides a "wormhole"
entrance for branch organizations, meaning that branch organization users can use the LAC as
a portal to reaching the HQ network.
On VPDNs the LAC is also called the NAS, and so these kinds of L2TP VPNs are also called NAS-initiated VPNs. This would be easier to understand if the name were changed to "LAC-initiated VPN", because our network diagrams clearly display the name LAC, yet we still have to use this now-discarded "former name", which might be a bit confusing to newcomers.
The process of constructing a NAS-initiated VPN is a bit complicated, and to aid in
remembering this I have drawn a simple figure (Figure 5-27), to aid in your understanding of
the following.
Figure 5-27 Process for establishing a NAS-initiated VPN
[Diagram: the branch PPPoE client connects to the LAC (PPPoE server), which carries an L2TP tunnel to the LNS at HQ, behind which sits the intranet server. Steps: 1. Establish a PPPoE session (authentication succeeds). 2. Establish an L2TP tunnel. 3. Establish an L2TP session. 4. Perform LNS authentication and secondary authentication (optional); authentication succeeds, and an address is allocated. 5. Establish a PPP connection. 6. Encapsulate data and transmit the packet.]
In order to review our understanding of PPPoE, we've used a firewall, rather than a PC's dial-up client, as the PPPoE client; the network is shown in Figure 5-28.
Figure 5-28 NAS-initiated VPN network schematic
[Diagram: the branch PPPoE client (obtained private IP address 172.16.0.2) connects via GE0/0/1 to the LAC (PPPoE server), whose GE0/0/2 (1.1.1.1/24) reaches the LNS's GE0/0/2 (1.1.1.2/24); the LNS's GE0/0/1 (192.168.0.1/24) connects to the HQ intranet server (IP address 192.168.0.2). LNS address pool: 172.16.0.2~172.16.0.100; VT interface IP address: 172.16.0.1.]
A key point to remember is that the connections between the PPPoE client, the LAC, the LNS, and the internal server are all direct, eliminating the need to configure routing; user authentication also uses relatively simple local authentication. In addition, a gateway needs to be configured on the internal server to ensure that its response packets bound for the PPPoE client can be sent to the LNS.
5.5.1 Step 1: Establishing a PPPoE Connection—The Dialing
Interface Dials the VT Interface
After PPP changed to PPPoE so that it could settle down in the Ethernet "world", in order to
simulate the PPP dial-up process on the Ethernet, PPPoE invented two virtual interfaces—the
dialer interface and the VT interface. When PPPoE runs on a firewall, these two interfaces are
also used: when a firewall serves as the PPPoE client it uses the dialer interface, and when a
firewall serves as the PPPoE server it uses the VT interface. Related PPPoE parameters are
configured on these two interfaces, as shown in Table 5-8.
Table 5-8 Configuring a NAS-initiated VPN's PPPoE

PPPoE client:
  interface dialer 1
   dialer user user1
   dialer-group 1
   dialer bundle 1
   ip address ppp-negotiate  //Configures the negotiation mode, so that IP addresses are dynamically assigned.
   ppp chap user user1  //Indicates the PPPoE client's user name.
   ppp chap password cipher Password1  //Indicates the PPPoE client's password.
  dialer-rule 1 ip permit
  interface GigabitEthernet0/0/1
   pppoe-client dial-bundle-number 1  //Enables the PPPoE client on the physical interface and binds the dial bundle.

PPPoE server (LAC):
  interface Virtual-Template 1
   ppp authentication-mode chap
  interface GigabitEthernet 0/0/1
   pppoe-server bind virtual-template 1  //Enables the PPPoE server on the physical interface and binds the VT interface.
  aaa
   local-user user1 password Password1
   local-user user1 service-type ppp
The VT interface on the PPPoE server (LAC) here only handles PPPoE work, providing PPP authentication for the PPPoE server; it plays no part in cooperating with L2TP.
In L2TP, all user IP addresses are uniformly assigned by HQ (the LNS or AAA server), so
there is no need to configure an address pool on the LAC (even if an address pool is
configured, so long as the L2TP tunnel has already been established, the HQ address pool will
be preferentially used for address assignment), but ordinary PPPoE dialing needs an address
pool to be configured on the PPPoE server.
We can use the below packet capture to analyze the process of establishing a PPPoE
connection.
The negotiation process in the PPPoE discovery step is quite important. As shown in Table
5-9, the PPPoE client and PPPoE server exchange PADI, PADO, PADR and PADS packets to
confirm each other's Ethernet address and PPPoE session ID.
Table 5-9 Negotiation process in the PPPoE discovery stage

Step 1, PADI. PPPoE Client: Attention, attention, I want to connect to PPPoE, who will come help me?
Step 2, PADO. PPPoE Server: PPPoE client, look for me, I'll help you!
Step 3, PADR. PPPoE Client: That's great, PPPoE Server! I'd like to establish a PPPoE session with you.
Step 4, PADS. PPPoE Server: Fine. I'll send the session ID to you, and we can simply use this ID to establish a PPPoE session.
Then, following PPP LCP negotiation and PPP CHAP authentication, the PPPoE connection is
established.
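To make the discovery exchange concrete, here is a minimal Python sketch (not Huawei code; the field layout and code values come from RFC 2516) that builds and parses PPPoE discovery packets, showing how the session ID is 0 until the server assigns one in the PADS packet:

```python
import struct

# PPPoE discovery codes (RFC 2516)
PADI, PADO, PADR, PADS = 0x09, 0x07, 0x19, 0x65

def build_pppoe_discovery(code, session_id=0, tags=()):
    """Build a PPPoE discovery payload: version/type byte 0x11, code,
    session ID, payload length, then TLV tags (e.g. Service-Name 0x0101)."""
    payload = b"".join(struct.pack("!HH", t, len(v)) + v for t, v in tags)
    return struct.pack("!BBHH", 0x11, code, session_id, len(payload)) + payload

def parse_pppoe_discovery(frame):
    """Return the code, session ID, and raw tag payload of a discovery frame."""
    vt, code, session_id, length = struct.unpack("!BBHH", frame[:6])
    return {"code": code, "session_id": session_id, "payload": frame[6:6 + length]}

# The client's PADI carries an empty Service-Name tag and session ID 0;
# the server's PADS finally carries the assigned session ID (here, 1).
padi = build_pppoe_discovery(PADI, tags=[(0x0101, b"")])
pads = build_pppoe_discovery(PADS, session_id=1, tags=[(0x0101, b"")])
print(parse_pppoe_discovery(pads)["session_id"])
```

Once both sides hold the same session ID, all subsequent PPP frames travel inside PPPoE session packets tagged with that ID.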
5.5.2 Step 2: Establishing the L2TP Tunnel—Three Pieces of Information to Negotiate Entrance to the Wormhole
Let's first look at the specific configuration of the LAC and LNS, as shown in Table 5-10.
Table 5-10 Configuring the NAS-initiated VPN's L2TP

LAC:

l2tp enable
l2tp-group 1
 tunnel authentication //Prevents counterfeit LACs from connecting to the LNS.
 tunnel password cipher Password1
 tunnel name lac
 start l2tp ip 1.1.1.2 fullusername user1 //Designates the address of the other tunnel terminal.

LNS:

l2tp enable
interface Virtual-Template 1
 ppp authentication-mode chap
 ip address 172.16.0.1 255.255.255.0
 remote address pool 1
l2tp-group 1
 tunnel authentication //Prevents counterfeit LACs from connecting to the LNS.
 tunnel password cipher Password1
 allow l2tp virtual-template 1 remote lac //Designates the VT interface and allows a remote LAC connection.
aaa
 local-user user1 password Password1
 local-user user1 service-type ppp
 ip pool 1 172.16.0.2 172.16.0.100
The LAC and the LNS exchange three pieces of information in negotiating the L2TP tunnel.
We've already covered this process in "5.4 L2TP Client-initiated VPNs", and we'll review this
again here. See the packet capture information below:
The tunnel ID negotiation process is shown in Table 5-11.
Table 5-11 Tunnel ID negotiation process

Step 1, SCCRQ. LAC: LNS, use "1" as the tunnel ID to communicate with me.
Step 2, SCCRP. LNS: OK. LAC, make sure you also use "1" as your tunnel ID to communicate with me.
Step 3, SCCCN. LAC: OK.
5.5.3 Step 3: Establishing an L2TP Session—Three Pieces of Information to Awaken the Wormhole Gatekeeper
The LAC and the LNS exchange three pieces of information to negotiate a session ID and
establish an L2TP session. We'll likewise review this process again. See the packet capture
information below:
Table 5-12 gives the session ID negotiation process.
Table 5-12 Session ID negotiation process

Step 1, ICRQ. LAC: LNS, use "4" as the session ID to communicate with me.
Step 2, ICRP. LNS: OK, LAC, and make sure you also use "4" as the session ID to communicate with me.
Step 3, ICCN. LAC: OK.
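For readers curious about what these control messages look like on the wire, the following Python sketch (based on RFC 2661, not on any Huawei implementation) builds and parses the L2TP control header that carries the SCCRQ/SCCRP/SCCCN and ICRQ/ICRP/ICCN exchanges:

```python
import struct

# L2TP control message types (RFC 2661)
SCCRQ, SCCRP, SCCCN = 1, 2, 3    # tunnel establishment
ICRQ, ICRP, ICCN = 10, 11, 12    # session establishment

def build_ctrl_header(tunnel_id, session_id, ns, nr, avps=b""):
    """L2TP control header: T/L/S bits set, version 2, then length,
    tunnel ID, session ID, and the Ns/Nr sequence numbers."""
    flags_ver = 0xC802  # T=1, L=1, S=1, version=2
    length = 12 + len(avps)
    return struct.pack("!HHHHHH", flags_ver, length,
                       tunnel_id, session_id, ns, nr) + avps

def parse_ctrl_header(pkt):
    """Parse the fixed 12-byte control header."""
    flags_ver, length, tid, sid, ns, nr = struct.unpack("!HHHHHH", pkt[:12])
    assert flags_ver >> 15, "T bit set means this is a control message"
    return {"tunnel_id": tid, "session_id": sid, "ns": ns, "nr": nr}

# In the SCCRQ from the LAC, the proposed tunnel ID travels inside an AVP,
# while the header's tunnel ID is still 0 because none has been agreed yet.
sccrq = build_ctrl_header(tunnel_id=0, session_id=0, ns=0, nr=0)
print(parse_ctrl_header(sccrq))
```

The negotiated IDs then appear in the header of every later control and data message, which is how each side demultiplexes tunnels and sessions.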
5.5.4 Steps 4-5: LNS Authentication and IP Address
Assignment—the LNS Sternly Accepts the LAC
1. LNS authentication and secondary authentication (optional)
The LAC sends user information to the LNS for authentication. However, the LNS understands all too well that the LAC is only an "agent", and can adopt one of three attitudes towards this:
- LAC proxy authentication: the LNS doesn't simply accept the LAC's authentication result, and instead directly authenticates the user information forwarded by the LAC.
- Mandatory CHAP authentication: the LNS doesn't trust the LAC, and requires that a "qualification inspection" be carried out anew of the user (it enforces new CHAP authentication of the user).
- LCP re-negotiation: the LNS not only doesn't trust the LAC, but also expresses its dissatisfaction with the "service contract" that was signed, and requires the user to negotiate services again (re-initiate LCP negotiation, negotiating MRU parameters and the authentication method).
The latter two methods are together called LNS secondary authentication. If the LNS is configured for secondary authentication but the PPPoE client does not support it, the L2TP VPN cannot be established. What the two secondary authentication methods have in common is that the LNS bypasses the LAC and directly verifies the user information provided by the PPPoE client, providing increased security for VPN services. Configuration methods for the three authentication methods are shown in Table 5-13.
Table 5-13 Configuring a NAS-initiated VPN's LNS authentication

LAC proxy authentication (*)
 Configuration: default; nothing needs to be configured.
 Packet capture analysis: The LNS directly authenticates the user information sent by the LAC, and if authentication is successful a PPP connection is established.

Mandatory CHAP authentication (**)
 Configuration:
 l2tp-group 1
  mandatory-chap
 Packet capture analysis: The LNS re-conducts CHAP authentication of the user. The LNS sends a challenge, the PPPoE client uses the challenge to send the user name and encrypted password to the LNS, and after the LNS authenticates this, a PPP connection is successfully established.

LCP re-negotiation (***)
 Configuration:
 l2tp-group 1
  mandatory-lcp
 interface virtual-template 1
  ppp authentication-mode chap //Designates the authentication mode after re-negotiation.
 Packet capture analysis: The LNS re-initiates LCP negotiation, negotiates MRU parameters and an authentication method, and then conducts CHAP authentication. After successful negotiation, a PPP connection is established.

The number of asterisks represents the degree of preference; LCP re-negotiation is the most preferable of these three methods.
2. IP address assignment
The LNS assigns an IP address to the PPPoE client through PPP IPCP negotiation.
We went over the address pool's address planning problem in section 5.4 "Client-initiated
VPNs", and you can go back and review this if you wish.
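As background on what CHAP authentication actually computes (this follows RFC 1994 and is not specific to Huawei devices), the client's response is an MD5 digest over the one-octet identifier, the shared secret, and the challenge the authenticator sent:

```python
import hashlib

def chap_response(identifier, secret, challenge):
    """CHAP response per RFC 1994: MD5 over the one-octet identifier,
    the shared secret, and the authenticator's challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# In mandatory CHAP re-authentication the LNS sends its own challenge,
# and the PPPoE client answers with this digest plus its user name.
# The secret and challenge values below are illustrative only.
resp = chap_response(1, b"Password1", b"\x12\x34\x56\x78")
print(resp.hex())
```

Because only the digest crosses the wire, the password itself is never transmitted, which is why the LNS can safely re-challenge the user end to end.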
Let's summarize the characteristics of NAS-initiated VPNs:
As shown in Figure 5-29, in NAS-initiated VPNs, multiple tunnels can be established between an LNS-LAC pair (one is built for every L2TP group), and every tunnel can carry multiple sessions. This means that each LAC can carry sessions for all dial-up users of the branch organization it belongs to. For example, PPP connection 1 and L2TP session 1 are established between access user 1 and the LNS, and PPP connection 2 and L2TP session 2 are established between access user 2 and the LNS. When a user dials in, this triggers establishment of a tunnel between the LAC and the LNS. So long as this user doesn't go offline, other users who dial in will establish sessions in the existing tunnel rather than re-triggering tunnel establishment.
Figure 5-29 Relationship between the L2TP tunnel and session with the PPP connection on a NAS-initiated VPN
[Figure: access users 1 and 2 reach the LAC by PPPoE dial-up; a single L2TP tunnel between the LAC and the LNS carries L2TP session 1 (PPP connection 1, access user 1), L2TP session 2 (PPP connection 2, access user 2), and so on; the intranet server sits behind the LNS.]
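The tunnel-reuse behavior described above can be modeled with a toy sketch (hypothetical, for illustration only): the first dial-up creates the tunnel, and every later user merely adds a session to it:

```python
# Toy model of NAS-initiated tunnel reuse: the first dial-up builds the
# tunnel; subsequent users only add sessions to the existing tunnel.
tunnels = {}

def dial_up(lac, lns, user):
    """Return the session ID assigned to this user on the LAC-LNS tunnel."""
    key = (lac, lns)
    tunnel = tunnels.setdefault(key, {"sessions": []})  # created only once
    session_id = len(tunnel["sessions"]) + 1
    tunnel["sessions"].append((session_id, user))
    return session_id

print(dial_up("1.1.1.1", "1.1.1.2", "user1"))  # tunnel created, session 1
print(dial_up("1.1.1.1", "1.1.1.2", "user2"))  # tunnel reused, session 2
print(len(tunnels))                            # still only one tunnel
```

This mirrors the text: so long as the tunnel stays up, new dial-up users cost only a session negotiation, not a tunnel negotiation.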
5.5.5 Step 6: Data Encapsulation Transmission—Obstacle-Free Communication
After the PPPoE client packets bound for the HQ server reach the LAC, the LAC gives the
packets three layers of "alter-egos", these being the L2TP header, the UDP header and the
public network IP header, and then sends these to the LNS. After the LNS receives the packets,
it strips off these three layers of "alter-egos", and then forwards the packet to the internal
server.
[Encapsulation stack, outermost first: public IP header | UDP header | L2TP header | PPP header | private IP header]
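As a rough illustration of these "alter-egos" (a simplified sketch with zeroed checksums and no options, not a faithful forwarding-plane implementation), the LAC's encapsulation can be expressed as nested headers:

```python
import struct

def l2tp_encapsulate(private_ip_packet, tunnel_id, session_id, lac_ip, lns_ip):
    """Wrap a PPP-framed private packet in an L2TP data header, a UDP
    header (port 1701), and a public IP header. Checksums are left at 0
    and IP options are omitted for brevity."""
    ppp = b"\xff\x03\x00\x21" + private_ip_packet            # PPP, protocol 0x0021 = IPv4
    l2tp = struct.pack("!HHH", 0x0002, tunnel_id, session_id) + ppp  # data message, version 2
    udp = struct.pack("!HHHH", 1701, 1701, 8 + len(l2tp), 0) + l2tp  # src/dst port 1701
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(udp), 0, 0,
                     64, 17, 0, lac_ip, lns_ip) + udp        # protocol 17 = UDP
    return ip

# A placeholder private packet from the PPPoE client (contents are illustrative).
inner = b"\x45\x00" + b"\x00" * 18
pkt = l2tp_encapsulate(inner, tunnel_id=1, session_id=4,
                       lac_ip=b"\x01\x01\x01\x01", lns_ip=b"\x01\x01\x01\x02")
```

Decapsulation on the LNS is simply the reverse: strip the outer IP, UDP, L2TP, and PPP headers, and the original private packet pops out intact.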
Figure 5-30 shows the process of packet encapsulation and decapsulation in a NAS-initiated
VPN scenario.
Figure 5-30 NAS-initiated VPN packet encapsulation process
[Figure: the access user (PPPoE client) sends data with a private IP header, adding PPP and PPPoE headers and an Ethernet header (PPP and PPPoE encapsulation). The LAC (PPPoE server) strips the Ethernet and PPPoE headers (PPPoE decapsulation), keeps the PPP-framed private packet, and adds L2TP, UDP, public IP, and Ethernet headers (L2TP encapsulation) before sending it into the L2TP tunnel. The LNS strips the Ethernet, public IP, UDP, L2TP, and PPP headers (L2TP and PPP decapsulation) and forwards the private IP packet, Ethernet-encapsulated, to the intranet server.]
As with client-initiated VPNs, in NAS-initiated VPN scenarios, the LNS will also
automatically issue a host route (UNR route) for the PPPoE client that obtained the private
network IP address, and use this to guide return packets from the internal server bound for the
PPPoE client into the tunnel.
5.5.6 Approach to Security Policy Configuration
NAS-initiated VPN security policy configuration is a bit more troublesome than for
client-initiated VPNs because both the LAC and the LNS need to be configured. However, the
approach to configuration is similar.
As shown in Figure 5-31, in our hypothetical scenario, the LAC's GE0/0/2 is connected to the
Internet, and belongs to the Untrust zone. On the LNS, GE0/0/1 is connected to the private
network, and belongs to the Trust zone; GE0/0/2 is connected to the Internet, and belongs to
the Untrust zone; the VT interface belongs to the DMZ zone; the IP address assigned by the
LNS for the PPPoE client is 172.16.0.2.
Figure 5-31 Network organization for VPN security policy configuration in a NAS-initiated VPN
[Figure: the PPPoE client (assigned 172.16.0.2) connects to the LAC's GE0/0/1; the LAC's GE0/0/2 (1.1.1.1/24) belongs to the Untrust zone and carries the L2TP tunnel to the LNS's GE0/0/2 (1.1.1.2/24, Untrust zone). On the LNS, VT1 (172.16.0.1/24) belongs to the DMZ zone, and GE0/0/1 (Trust zone) connects to the intranet network 192.168.0.2/24.]
The security policy configuration process is as follows:
1. We first configure the interzone security policy to be as broad as possible, to aid in L2TP VPN adjustment and testing.
The LAC's interzone default packet filtering action is set to "permit":
[LAC] firewall packet-filter default permit all
The LNS's interzone default packet filtering action is also set to "permit":
[LNS] firewall packet-filter default permit all
2. After both the LAC and LNS have been configured with L2TP, we ping the internal server from the PPPoE client, and then look at both the LAC and LNS session tables.
- LAC session table:
[LAC] display firewall session table verbose
Current Total Sessions : 1
l2tp VPN:public --> public
Zone: local--> untrust TTL: 00:02:00 Left: 00:01:52
Interface: GigabitEthernet0/0/2 NextHop: 1.1.1.2 MAC: 00-00-00-53-62-00
<--packets:26 bytes:1655 -->packets:11 bytes:900
1.1.1.1:60416-->1.1.1.2:1701
Analysis of the session table provides the direction of packets on the LAC, as shown
in Figure 5-32.
Figure 5-32 LAC packet direction
[Figure: the PPPoE client (private IP address 172.16.0.2/24) connects to the LAC's GE0/0/1 (Untrust zone); the original packet from the PPPoE client to the intranet server is carried inside the L2TP packet, which leaves the Local zone through GE0/0/2 (1.1.1.1, Untrust zone) into the L2TP tunnel.]
There is no ICMP session on the LAC, just the one L2TP session. The PPPoE client's packets for the internal server are first encapsulated into PPPoE packets, then directly encapsulated into L2TP packets when the LAC receives them, and finally enter the L2TP tunnel, so they are not controlled by the security policy. Therefore, only a Local-->Untrust security policy needs to be configured on the LAC, permitting the LAC and LNS to build an L2TP tunnel.
- LNS session table:
[LNS] display firewall session table verbose
Current Total Sessions : 2
l2tp VPN:public --> public
Zone: untrust--> local TTL: 00:02:00 Left: 00:01:52
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets:18 bytes:987 -->packets:23 bytes:2057
1.1.1.1:60416-->1.1.1.2:1701
icmp VPN:public --> public
Zone: dmz--> trust TTL: 00:00:20 Left: 00:00:00
Interface: GigabitEthernet0/0/1 NextHop: 192.168.0.2 MAC:
54-89-98-62-32-60
<--packets:4 bytes:336 -->packets:5 bytes:420
172.16.0.2:52651-->192.168.0.2:2048
There are two sessions on the LNS: one L2TP session, and one ICMP session. Analysis
of the session table provides the direction of packets on the LNS, as shown in Figure
5-33.
Figure 5-33 LNS packet direction
[Figure: on the LNS, the L2TP packet arrives from the tunnel at GE0/0/2 (1.1.1.2, Untrust zone) and terminates in the Local zone; the decapsulated original packet from the PPPoE client travels from the VT interface (DMZ zone) through GE0/0/1 (Trust zone) to the intranet server (192.168.0.2/24).]
The above figure shows that a DMZ-->Trust security policy needs to be configured on the LNS to permit PPPoE client packets seeking access to the internal server to pass; an Untrust-->Local security policy permitting the LAC and LNS to build an L2TP tunnel also needs to be configured.
After the L2TP tunnel is established, when the internal server initiates access to the PPPoE client, the packet direction is the opposite of when the PPPoE client accesses the server, and there is no need to elaborate further on this.
To summarize, the security policies that should be configured on the LAC and LNS
under various conditions are shown in Table 5-14, and we should configure the security
policies that best match existing conditions.
Table 5-14 Selecting security policies for the LAC and LNS based upon conditions

PPPoE client accesses the server:
- LAC: Local --> Untrust, source address 1.1.1.1/32, destination address 1.1.1.2/32 (used in L2TP)
- LNS: Untrust --> Local, source address 1.1.1.1/32, destination address 1.1.1.2/32 (used in L2TP)
- LNS: DMZ --> Trust, source address 172.16.0.2~172.16.0.100 (address pool addresses), destination address 192.168.0.0/24 (*)

Server accesses the PPPoE client:
- LNS: Trust --> DMZ, source address 192.168.0.0/24, destination address 172.16.0.2~172.16.0.100 (address pool addresses) (*)

*: Use in this instance is related to the transaction type, and can be configured according to actual circumstances (for example: TCP, UDP, and ICMP).
In this scenario, the LNS only passively accepts the LAC's request to establish the tunnel, but will not
actively initiate a request to the LAC to establish the tunnel, so only an Untrust-->Local security policy
needs to be configured on the LNS for the L2TP tunnel.
Therefore, in L2TP VPNs using the NAS-initiated method, the LNS's VT interface
must be added to a security zone, and the security zone the VT interface belongs to
determines the packet's direction within the device. If the VT interface belongs to the
Trust zone, a DMZ-Trust interzone security policy doesn't need to be configured, but this
will also carry with it additional security risks. Therefore, it is advisable to add the VT
interface to a separate security zone, and then configure the most appropriate security
policy.
3. Finally, the default packet filtering action is changed to "deny".
Set the LAC's interzone default packet filtering action to "deny":
[LAC] firewall packet-filter default deny all
Set the LNS's interzone default packet filtering action to "deny":
[LNS] firewall packet-filter default deny all
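The first-match-then-default logic behind these interzone policies can be sketched in Python (a toy model for illustration; the zone names and addresses mirror the example network and are not a real firewall API):

```python
import ipaddress

# Toy interzone policy table: one rule mirroring the LAC's
# Local --> Untrust L2TP policy from Table 5-14 (illustrative only).
policies = [
    ("local", "untrust", "1.1.1.1/32", "1.1.1.2/32", "permit"),
]

def match(policies, default, src_zone, dst_zone, src_ip, dst_ip):
    """Return the first matching rule's action, else the default action."""
    for sz, dz, s_net, d_net, action in policies:
        if (sz, dz) == (src_zone, dst_zone) \
           and ipaddress.ip_address(src_ip) in ipaddress.ip_network(s_net) \
           and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(d_net):
            return action
    return default

# With default packet filtering set to "deny", only the L2TP flow passes.
print(match(policies, "deny", "local", "untrust", "1.1.1.1", "1.1.1.2"))
print(match(policies, "deny", "dmz", "trust", "172.16.0.5", "192.168.0.2"))
```

This is why the testing procedure above first opens everything and only then tightens the default to "deny": any flow not explicitly listed simply falls through to the default action.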
In NAS-initiated VPNs, the branch organization user must dial in before using the L2TP VPN, and packets also need to be encapsulated into PPPoE, which is a lot of trouble. Moreover, dial-up networks are gradually disappearing and Ethernet is becoming mainstream—does this mean that branch organization users cannot access the HQ network directly over Ethernet? Of course they can. Mankind's desire to reduce work is the real driver of advances in science and technology, and in the next section I'll introduce LAC-auto-initiated VPNs to everyone, in which the LAC automatically dials the LNS, eliminating the dial-up process for branch organization employees—making this the L2TP VPN that involves the least work.
5.6 L2TP LAC-Auto-initiated VPNs
LAC-auto-initiated VPNs are also called LAC automatic dialing VPNs. These do just what
they sound like they do—after LAC configuration is complete, the LAC will automatically
dial the LNS and establish an L2TP tunnel and session, meaning that it is not necessary for the
branch organization user to dial-up to trigger this. For the branch organization user, this
means that accessing the HQ network is the same as accessing his/her own branch
organization's network, and he/she won't feel at all as if they are on a remote connection.
However, in this method the LNS only authenticates the LAC, so as long as a branch organization user is able to connect to the LAC, he or she can use the L2TP tunnel to connect to HQ. This means that this method offers slightly lower security than NAS-initiated VPNs.
5.6.1 LAC-Auto-initiated VPN Principles and Configuration
As shown in Figure 5-34, the set-up process for LAC-auto-initiated VPNs is similar to that of
client-initiated VPNs, except that for LAC-auto-initiated VPNs, the LAC has replaced the role
the L2TP client plays for client-initiated VPNs.
Figure 5-34 Set-up process for LAC-auto-initiated VPNs
[Figure: between the LAC (branch) and the LNS (HQ, with the intranet server behind it): 1. Establish an L2TP tunnel. 2. Establish an L2TP session. 3. Establish a PPP connection (authentication succeeds, and an address is allocated). 4. Encapsulate data and transmit the packet.]
Each step in the set-up process is largely similar to the set-up process for client-initiated
VPNs, and I won't say any more except that you can review "5.4 L2TP Client-initiated VPNs".
One point to note is that in step 3, the LNS only authenticates the LAC, and after successful
authentication the LNS assigns an IP address to the LAC's VT interface, not to the branch
organization user. Although the LNS does not assign an IP address for the branch organization,
this doesn't mean that the branch organization's IP address can be configured at will. In order
to ensure normal access between the branch organization network and the HQ network, please
plan separate, independent private network segments for the branch organization and HQ
networks, and ensure that the network segments for the two do not overlap.
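When planning these segments, a quick check with Python's ipaddress module (a generic utility, not anything firewall-specific) confirms that the branch and HQ networks don't overlap:

```python
import ipaddress

# Hypothetical plan: branch uses 172.16.0.0/24, HQ uses 192.168.0.0/24.
branch = ipaddress.ip_network("172.16.0.0/24")
hq = ipaddress.ip_network("192.168.0.0/24")
print(branch.overlaps(hq))  # False, so the two segments are safe to connect
```

If the two networks did overlap, routes pointing into the tunnel would collide with local routes, and traffic between branch and HQ would be misdelivered.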
Configuration of LAC-auto-initiated VPNs is not complicated, and we can set up a network
like the one shown in Figure 5-35.
Figure 5-35 LAC-auto-initiated VPN network
[Figure: a user in a branch (172.16.0.2) connects to the LAC (inside 172.16.0.1/24, outside 1.1.1.1/24); the LAC reaches the LNS (outside 1.1.1.2/24, inside 192.168.0.1/24) over the Internet. LNS address pool: 10.1.1.2; VT interface IP address: 10.1.1.1; intranet server: 192.168.0.2.]
Configuration for the LAC and LNS is shown in Table 5-15. A key point is that the
connections between the LAC, LNS, and internal server are direct connections, eliminating
the need to configure routing. User authentication also uses relatively simple local
authentication. Additionally, gateways need to be configured for both the branch organization
user and the internal network server, to ensure that packets exchanged between the two are
able to be sent to the LAC and LNS.
Table 5-15 Configuring an L2TP LAC-auto-initiated VPN

LAC:

l2tp enable
l2tp-group 1
 tunnel authentication
 tunnel password cipher Password1
 tunnel name lac
 start l2tp ip 1.1.1.2 fullusername lac //Designates an address for the tunnel's other terminal.
interface Virtual-Template 1
 ppp authentication-mode chap
 ppp chap user lac
 ppp chap password cipher Password1
 ip address ppp-negotiate
 call-lns local-user lac binding l2tp-group 1 //The LAC dials the LNS.
ip route-static 192.168.0.0 255.255.255.0 Virtual-Template 1 //Configures a static route to the HQ network; this differs from client-initiated and NAS-initiated VPNs, and the LAC must configure this route to guide branch organization user packets seeking to access the HQ network into the L2TP tunnel.

LNS:

l2tp enable
interface Virtual-Template 1
 ppp authentication-mode chap
 ip address 10.1.1.1 255.255.255.0
 remote address pool 1
l2tp-group 1
 tunnel authentication
 tunnel password cipher Password1
 allow l2tp virtual-template 1 remote lac //Permits remote access.
aaa
 local-user lac password Password1
 local-user lac service-type ppp
 ip pool 1 10.1.1.2 //As the LNS only assigns an address to the LAC, only one IP address needs to be configured in the pool.
ip route-static 172.16.0.0 255.255.255.0 Virtual-Template 1 //Configures a static route to the branch organization network; if source NAT is configured on the LAC this route does not need to be configured, and more details about this are provided below.
The characteristics of LAC-auto-initiated VPN tunnels are briefly summarized below:
As shown in Figure 5-36, in LAC-auto-initiated VPN scenarios, a permanent tunnel is established between the LAC and the LNS, and it carries only one permanent L2TP session and PPP connection. The L2TP session and PPP connection exist only between the LAC and the LNS.
Figure 5-36 Relationship between the L2TP tunnel/session and the PPP connection on a LAC-auto-initiated VPN
[Figure: between the LAC (with branch users behind it) and the LNS (with the intranet server behind it) run a permanent L2TP tunnel, a permanent L2TP session, and a permanent PPP connection.]
PPP encapsulation and L2TP encapsulation in a LAC-auto-initiated VPN are limited only to
packets exchanged between the LAC and the LNS, as shown in Figure 5-37.
Figure 5-37 LAC-auto-initiated VPN packet encapsulation process
[Figure: the branch user sends a plain private IP packet to the LAC. The LAC adds PPP and L2TP headers, a UDP header, and a public IP header (PPP and L2TP encapsulation) and sends the packet through the tunnel; the LNS strips these headers (L2TP and PPP decapsulation) and forwards the private IP packet to the intranet server.]
Additionally, there is another issue that requires special attention: how return packets enter the tunnel. This process differs from client-initiated and NAS-initiated VPNs; in LAC-auto-initiated VPNs the LNS only issues a UNR host route whose destination is the address assigned to the LAC's VT interface, and has no route to the branch organization's network. The LNS's position is: "I'm only responsible for assigning IP addresses, and can ensure that the LAC's VT interface is reached. The branch organization network's addresses are not assigned by me, and I don't even know what they are, so I can only say 'Sorry, I can't help you'."
How can this problem be resolved? The simplest method is to manually configure a static
route to the branch organization network on the LNS, guiding the return packets into the
tunnel:
[LNS] ip route-static 172.16.0.0 255.255.255.0 Virtual-Template 1
Are there other methods we can use besides configuring a static route? I've just recalled NAT,
which we mentioned earlier! Even if the LNS only recognizes the IP addresses it assigns, we
can configure a source NAT function on the LAC to convert the source addresses of the
packets from the branch organization accessing the HQ network into the VT interface's
address—this is the Easy-IP method's "source NAT". After the LNS receives a return packet,
and discovers the destination address is the LAC's VT interface's address, it will forward this
by a direct route into the tunnel. In this way there is no need to configure a static route on the
LNS.
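The round trip can be modeled with a toy Easy-IP NAT table (a hypothetical sketch, not the firewall's actual implementation), in which the VT interface address stands in for all branch sources and return packets are translated back by matching the session table:

```python
# Toy Easy-IP source NAT, assuming the VT interface address 10.1.1.2
# stands in for all branch sources (illustrative sketch only).
class EasyIpNat:
    def __init__(self, vt_addr):
        self.vt_addr = vt_addr
        self.sessions = {}   # (vt_addr, nat_port) -> (orig_src, orig_port)
        self.next_port = 1024

    def outbound(self, src, sport, dst, dport):
        """Translate a branch packet's source to the VT address, recording
        the mapping in the session table."""
        nat_port = self.next_port
        self.next_port += 1
        self.sessions[(self.vt_addr, nat_port)] = (src, sport)
        return (self.vt_addr, nat_port, dst, dport)

    def inbound(self, dst, dport):
        """Return packets hit the session table and are translated back."""
        return self.sessions.get((dst, dport))

nat = EasyIpNat("10.1.1.2")
out = nat.outbound("172.16.0.2", 52651, "192.168.0.2", 2048)
print(out)                            # source now appears as 10.1.1.2
print(nat.inbound("10.1.1.2", 1024))  # mapped back to the branch user
```

From the LNS's point of view every branch packet now comes from 10.1.1.2, an address it assigned itself, so its automatically issued direct route suffices and no static route is needed.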
I will next use a real network shown in Figure 5-38 as an example to briefly introduce the
whole process for packet encapsulation and decapsulation when a branch organization user
accesses the HQ server following configuration of source NAT on a LAC:
Figure 5-38 The LAC-auto-initiated VPN packet encapsulation process following configuration of source NAT on the LAC
[Figure: the branch user (172.16.0.2) sends the original packet (source IP 172.16.0.2, destination IP 192.168.0.2) to the LAC (VT1: 10.1.1.2; static route to 192.168.0.0/24; source NAT; public interface 1.1.1.1/24). The LAC performs NAT plus PPP and L2TP encapsulation: the inner source becomes 10.1.1.2, and the outer public IPs are source 1.1.1.1, destination 1.1.1.2. The LNS (VT1: 10.1.1.1; direct route to 10.1.1.2/32; public interface 1.1.1.2/24) performs PPP and L2TP decapsulation and delivers the packet (source 10.1.1.2, destination 192.168.0.2) to the intranet server (192.168.0.2). The return packet (source 192.168.0.2, destination 10.1.1.2) enters the VT interface according to the direct route, is PPP and L2TP encapsulated with public IPs source 1.1.1.2, destination 1.1.1.1, and after PPP and L2TP decapsulation on the LAC is translated back, by matching the session table, to destination 172.16.0.2.]
1. After the LAC receives an original packet from a branch organization user seeking access to the HQ server, it checks the route according to the destination address, selects our manually configured static route, and sends the packet to the VT interface.
2. The LAC conducts NAT conversion on the original packet at the VT interface, converting the source address into the VT interface's address, and then encapsulates a PPP header, an L2TP header and a public network address onto the packet. The LAC then checks the route based upon the public network destination address, and sends the encapsulated packet to the LNS.
3. After the LNS receives the packet, it strips away the PPP header and the L2TP header, checks the route based upon the destination address (this is a directly connected route), and then sends the packet to the HQ server.
4. After the LNS receives the return packet from the HQ server, it checks the route based upon the destination address, selects the route automatically issued by the LNS, and sends the packet to the VT interface.
5. The packet is encapsulated with a PPP header, an L2TP header and a public network address at the VT interface. The LNS checks the route based upon the public network address, and sends the encapsulated packet to the LAC.
6. After the LAC receives the packet, it strips away the PPP header and the L2TP header, converts the packet's destination address into the branch organization user's address, and then sends the packet to the branch organization user.
An example of configuring the Easy-IP method's source NAT on the LAC is given below (this
example assumes the LAC's interface connecting to the branch organization network belongs
to the Trust zone, and that the VT interface belongs to the DMZ zone):
[LAC] nat-policy interzone trust dmz outbound
[LAC-nat-policy-interzone-trust-dmz-outbound] policy 1
[LAC-nat-policy-interzone-trust-dmz-outbound-1] policy source 172.16.0.0 0.0.0.255
[LAC-nat-policy-interzone-trust-dmz-outbound-1] action source-nat
[LAC-nat-policy-interzone-trust-dmz-outbound-1] easy-ip Virtual-Template 1
5.6.2 Approach to Security Policy Configuration
The overall approach to security policy configuration in LAC-auto-initiated VPNs is basically
the same as the configuration approaches introduced in the previous two sections, but we'll
still discuss this briefly below.
In Figure 5-39 we assume that on both the LAC and the LNS, GE0/0/1 is connected to the private network and belongs to the Trust zone; GE0/0/2 is connected to the Internet and belongs to the Untrust zone; and the VT interfaces belong to the DMZ zone.
Figure 5-39 Network security policy configuration in a LAC-auto-initiated VPN
[Figure: the user in a branch (172.16.0.2) connects to the LAC's GE0/0/1 (Trust zone); the LAC's GE0/0/2 (1.1.1.1/24, Untrust zone) and the LNS's GE0/0/2 (1.1.1.2/24, Untrust zone) carry the L2TP tunnel; VT1 on the LAC (10.1.1.2/24) and VT1 on the LNS (10.1.1.1/24) belong to the DMZ zones; the LNS's GE0/0/1 (Trust zone) connects to the intranet server (192.168.0.2).]
The process for security policy configuration is as follows:
1. We first configure the broadest possible interzone security policy to assist L2TP VPN adjustment and testing.
The LAC's interzone default packet filtering action is set to "permit":
[LAC] firewall packet-filter default permit all
The LNS's interzone default packet filtering action is set to "permit":
[LNS] firewall packet-filter default permit all
2. After the LAC and LNS have been configured with L2TP, a branch organization user pings the internal network server to initiate access, and the LAC and LNS session tables are then checked.
- LAC session table:
[LAC] display firewall session table verbose
Current Total Sessions : 2
l2tp VPN:public --> public
Zone: local--> untrust TTL: 00:02:00 Left: 00:01:57
Interface: GigabitEthernet0/0/2 NextHop: 1.1.1.2 MAC: 00-00-00-c5-48-00
<--packets:38 bytes:2517 -->packets:62 bytes:4270
1.1.1.1:60416-->1.1.1.2:1701
icmp VPN:public --> public
Zone: trust--> dmz TTL: 00:00:20 Left: 00:00:07
Interface: Virtual-Template1 NextHop: 192.168.0.2 MAC: 00-00-00-c5-48-00
<--packets:1 bytes:60 -->packets:1 bytes:60
172.16.0.2:11749-->192.168.0.2:2048
Analysis of the session table provides the direction of packets on the LAC, as shown
in Figure 5-40.
Figure 5-40 LAC packet direction
[Figure: the user in a branch (private IP address 172.16.0.2/24) connects to the LAC's GE0/0/1 (Trust zone); the original packet bound for the intranet server goes from the Trust zone to the VT interface (DMZ zone), where it is encapsulated; the resulting L2TP packet leaves the Local zone through GE0/0/2 (1.1.1.1, Untrust zone) into the L2TP tunnel.]
As shown above, the LAC needs to configure a Trust-->DMZ security policy to
permit the branch organization user's packets seeking access to the internal server to
pass; it also needs to configure a Local-->Untrust security policy to allow the LAC
and the LNS to establish an L2TP tunnel.
- LNS session table:
[LNS] display firewall session table verbose
Current Total Sessions : 2
l2tp VPN:public --> public
Zone: untrust--> local TTL: 00:02:00 Left: 00:01:52
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets:18 bytes:987 -->packets:23 bytes:2057
1.1.1.1:60416-->1.1.1.2:1701
icmp VPN:public --> public
Zone: dmz--> trust TTL: 00:00:20 Left: 00:00:00
Interface: GigabitEthernet0/0/1 NextHop: 192.168.0.2 MAC:
54-89-98-62-32-60
<--packets:4 bytes:336 -->packets:5 bytes:420
172.16.0.2:52651-->192.168.0.2:2048
There are two sessions on the LNS: one L2TP session, and one ICMP session. Analysis
of the session table provides the LNS's packet direction, as shown in Figure 5-41.
Figure 5-41 LNS packet direction
[Figure: on the LNS, the L2TP packet arrives from the tunnel at GE0/0/2 (1.1.1.2, Untrust zone) and terminates in the Local zone; the decapsulated original packet from the branch travels from the VT interface (DMZ zone) through GE0/0/1 (Trust zone) to the intranet server (192.168.0.2/24).]
From the above it is obvious that a DMZ-->Trust security policy needs to be configured on the LNS to permit branch organization users' packets seeking access to the internal network server to pass; an Untrust-->Local security policy also needs to be configured to permit the LAC and LNS to establish an L2TP tunnel.
After the L2TP tunnel is established, when the server initiates access to the PC, the packet direction is the opposite of when the PC accesses the server, and we need not elaborate further on this.
To summarize, the security policies that should be configured on the LAC and LNS under
various conditions are shown in Table 5-16, and we should configure the security policies that
best match existing conditions.
Table 5-16 Selecting security policies for the LAC and LNS based upon existing conditions

PC seeks access to the server:
- LAC: Local --> Untrust, source address 1.1.1.1/32, destination address 1.1.1.2/32 (used in L2TP)
- LAC: Trust --> DMZ, source address 172.16.0.0/24, destination address 192.168.0.0/24 (*)
- LNS: Untrust --> Local, source address 1.1.1.1/32, destination address 1.1.1.2/32 (used in L2TP)
- LNS: DMZ --> Trust, source address 172.16.0.0/24, destination address 192.168.0.0/24 (*)

Server seeks access to the PC:
- LAC: DMZ --> Trust, source address 192.168.0.0/24, destination address 172.16.0.0/24 (*)
- LNS: Trust --> DMZ, source address 192.168.0.0/24, destination address 172.16.0.0/24 (*)

*: Use in this instance is related to the transaction type, and can be configured according to actual circumstances (for example: TCP, UDP, and ICMP).
Therefore, in L2TP scenarios using the LAC auto-dial method, the LAC's and LNS's VT interfaces must be added to security zones, and the security zones the VT interfaces belong to determine the direction of packets within the device. If a VT interface is placed in the Trust zone, no DMZ-Trust interzone security policy needs to be configured, but this adds security risk. It is therefore suggested that the VT interfaces be added to separate security zones and that the most appropriate security policies be configured for them.
4. Finally, set the default packet filtering action to "deny".
Set the LAC's interzone default packet filtering action to "deny":
[LAC] firewall packet-filter default deny all
Set the LNS's interzone default packet filtering action to "deny":
[LNS] firewall packet-filter default deny all
5.7 Summary
Over the past three sections I've introduced three kinds of L2TP VPNs, and we'll summarize
these three kinds of L2TP VPNs in Table 5-17.
Table 5-17 Comparison of three kinds of L2TP VPNs
Negotiation method
− Client-initiated VPN: The L2TP client and the LNS negotiate the establishment of an L2TP tunnel and L2TP session, and establish a PPP connection.
− NAS-initiated VPN: The access user uses PPPoE dial-up to trigger the establishment of an L2TP tunnel and L2TP session between the LAC and LNS, and the access user and LNS negotiate the establishment of a PPP connection.
− LAC-auto-initiated VPN: The LAC initiates dialing, and negotiates the establishment of an L2TP tunnel, an L2TP session, and a PPP connection with the LNS.

Tunnel and session relationship
− Client-initiated VPN: One L2TP tunnel is established between every L2TP client and the LNS, and each tunnel carries only one L2TP session and PPP connection.
− NAS-initiated VPN: Multiple L2TP tunnels can exist between a LAC and an LNS, and one L2TP tunnel can carry multiple L2TP sessions.
− LAC-auto-initiated VPN: A permanent L2TP tunnel is established between the LAC and LNS, which carries only one permanent L2TP session and PPP connection.

Security
− Client-initiated VPN: The LNS conducts PPP authentication (PAP or CHAP) of the L2TP client; this method offers relatively high security.
− NAS-initiated VPN: The LAC authenticates access users, and the LNS can then conduct optional secondary authentication of access users; this offers the highest security.
− LAC-auto-initiated VPN: The LAC does not authenticate users; the LNS conducts PPP authentication (PAP or CHAP) of the user configured on the LAC. This provides low security.

Return route
− Client-initiated VPN: The LNS automatically issues a UNR route that guides return packets into the L2TP tunnel; manual configuration is not required.
− NAS-initiated VPN: The LNS automatically issues a UNR route that guides return packets into the L2TP tunnel; manual configuration is not required.
− LAC-auto-initiated VPN: A static route whose destination is the branch network segment must be manually configured on the LNS, or easy-IP source NAT must be configured on the LAC.

IP address assignment
− Client-initiated VPN: The LNS assigns an IP address to the client.
− NAS-initiated VPN: The LNS assigns an IP address to the client.
− LAC-auto-initiated VPN: The LNS assigns an IP address to the LAC's VT interface.
Our look at L2TP VPNs has ended. It is important to note that none of the L2TP VPN
methods supports encryption functions, and therefore security risks are present during the
process of data transmission through the tunnel. How can this problem be resolved? The
answer to this will be clear after we've studied the functionally powerful, hard-to-configure,
and highly secure IPSec VPNs.
6 IPSec VPN
6.1 IPSec Overview
With the widespread use of GRE and L2TP, the private "Tiandihui" (lit. Heaven and Earth Society, in reference to anti-Qing Dynasty resistance groups) has also kept pace with the times, deploying host-to-sub-host GRE and L2TP tunnels. Hosts and sub-hosts use these tunnels to exchange and transmit messages. The cause to "overthrow the Qing Dynasty and
restore the Ming Dynasty" is in full swing. However, the good times rarely last for long, and
many of the confidential messages transmitted between hosts and sub-hosts have been seized
by "government officials". Sub-host client groups have been rounded up by the dozen. An
undercurrent surges through the Internet, and the road ahead is perilous.
Faced with the faction's life and death, Host Chen hastened to convene a conference to discuss
countermeasures. The problem at hand: be it GRE or L2TP, no encryption had been applied to any established tunnel, and as such, it is all too easy for "government officials" to seize the plaintext confidential messages transmitted between hosts and sub-hosts through GRE and L2TP tunnels. Tiandihui was faced with the issue of ensuring the safe
transfer of messages. Private lines are a possible solution, but the treasures of the Sutra of 42
Chapters have yet to be discovered. Strapped for the cash needed to build host-to-sub-host
private lines, Tiandihui must seek recourse through an existing, common resource - the
Internet.
After paying homage to the Supreme Host, Tiandihui finally found the answer: IPSec (IP
Security). As a next-generation VPN technology, IPSec can establish secure, stable dedicated
lines across the Internet. Compared to GREs and L2TPs, IPSec is more secure and can
guarantee the safe transfer of messages between hosts and sub-hosts.
6.1.1 Encryption and Authentication
To discuss IPSec is no easy task. It's not a single maneuver, but rather a set of tactics. IPSec
cleverly borrows the art of hoodwinks from the school of cryptology and has created its own
unique blend of shapeshifting for safe passage through changing AHs (Authentication
Headers) and ESPs (Encapsulating Security Payloads) for positive identification to "return the
jade intact to its rightful owner". Even if the message is intercepted, no one would ever
understand it, and any message that has been tampered with can be spotted almost instantly.
 Shapeshifting for safe passage - encryption
As shown in Figure 6-1, IPSec borrows a simple trick: before either end transmits a message, an encryption algorithm and cipher key are used to transform the message; this process is known as encryption. Once the other end receives the message, the same encryption algorithm and cipher key are used to restore the message to its original form; this process is known as decryption. While the message is in transit, its true nature cannot be seen, leaving perpetrators empty-handed.
Figure 6-1 Schematic of packet encryption/decryption
(The source uses an encryption algorithm with a symmetric encryption key to turn the IP packet into an encrypted IP packet; the destination runs the same algorithm with the same key to decrypt it back into the original IP packet.)
When Tiandihui hosts and sub-hosts must exchange messages, both ends must first agree to
an encryption algorithm and cipher key. Suppose that the host must send the command
"August 15, shores of Lake Taihu, holding a big event" to the sub-host. The host must first use
an incoherent encryption algorithm to garble the text. Then, once the cipher key "Overthrow
the Qing to Restore the Ming" is inserted, the encrypted command will finally read out a
message like "Overthrow 15 the Lake Taihu Qing shores to holding Restore a the big event
August Ming" which is then transmitted. Even if this message is intercepted by "government
officials" along the way, they'll be stumped, without even so much as a trace of the original
meaning. Once the sub-host receives the message, it will use the same encryption algorithm and the decryption key "Overthrow the Qing to Restore the Ming" to restore the message to its original command, "August 15, shores of Lake Taihu, holding a big event".
Hosts and sub-hosts use the same key for encryption and decryption. This method is known as a symmetric encryption algorithm (or symmetric-key algorithm), of which three are common: DES, 3DES, and AES. See Table 6-1 for a comparison of the three algorithms.
Table 6-1 Symmetrical encryption algorithms
Item | DES | 3DES | AES
Full Name | Data Encryption Standard | Triple Data Encryption Standard | Advanced Encryption Standard
Key Length (bits) | 56 | 168 | 128, 192, 256
Security Level | Low | Medium | High
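To make the shared-key idea concrete, here is a toy Python sketch. The XOR "cipher" below is purely illustrative and has none of the strength of DES/3DES/AES; it only demonstrates the defining property of a symmetric algorithm: the same key both encrypts and decrypts.

```python
# Toy symmetric-key illustration: the SAME key encrypts and decrypts.
# (XOR is NOT a secure cipher; real IPSec uses DES/3DES/AES from Table 6-1.)
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with a repeating key stream; applying the
    # function twice with the same key restores the original bytes.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"Overthrow the Qing to Restore the Ming"
plaintext = b"August 15, shores of Lake Taihu, holding a big event"

ciphertext = xor_cipher(plaintext, key)   # encryption at the sender
recovered = xor_cipher(ciphertext, key)   # decryption at the receiver

assert ciphertext != plaintext            # unreadable in transit
assert recovered == plaintext             # restored with the same key
```

Swapping in a real algorithm from Table 6-1 changes only the cipher function; the same-key-in-both-directions property is what makes it symmetric.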
 Positive identification to "return the jade intact to its rightful owner" - authentication
Packet authentication, as shown in Figure 6-2, is a process wherein an authentication algorithm and authentication key are used before the message is transmitted to "sign the papers" and create a signature, which is then sent together with the message. Once the other end receives the message, it uses the same authentication algorithm and authentication key to compute a signature over it. If this matches the signature that arrived with the packet, it verifies that the message has not been tampered with.
Figure 6-2 Schematic of packet authentication
(The source runs the IP packet through an authentication algorithm with a symmetric authentication key to produce a signature, then sends the IP packet together with the signature. The destination recomputes the signature with the same algorithm and key and checks: signature matched? If yes, the packet is accepted; if no, it is dropped.)
Apart from authenticating the integrity of the message, IPSec can also authenticate the source
of the message. That is, it positively identifies the message to ensure that the message was
sent from the real sender.
In general, authentication and encryption are used in tandem, and encrypted packets will go
through an authentication algorithm to generate signatures. MD5 and the SHA series are the common families of authentication algorithms; see Table 6-2 for a comparison.
Table 6-2 Authentication algorithms
Item | MD5 | SHA1 | SHA2
Full Name | Message Digest 5 | Secure Hash Algorithm 1 | Secure Hash Algorithm 2
Signature Length (bits) | 128 | 160 | SHA2-256: 256; SHA2-384: 384; SHA2-512: 512
Security Level | Low | Medium | High
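As a hedged illustration of the sign-then-verify flow in Figure 6-2, the Python sketch below uses the standard library's HMAC with SHA-256 (a SHA2-family member from Table 6-2); the key and packet contents are made up for the example.

```python
# Sketch of packet authentication with a keyed hash (HMAC-SHA256),
# standard library only. The key and packet bytes are illustrative.
import hashlib
import hmac

auth_key = b"shared-authentication-key"   # symmetric authentication key

def sign(packet: bytes) -> bytes:
    # Sender computes a signature over the packet.
    return hmac.new(auth_key, packet, hashlib.sha256).digest()

def verify(packet: bytes, signature: bytes) -> bool:
    # Receiver recomputes the signature and compares in constant time.
    return hmac.compare_digest(sign(packet), signature)

packet = b"August 15, shores of Lake Taihu"
sig = sign(packet)
assert verify(packet, sig)           # untouched packet passes
assert not verify(packet + b"!", sig)  # tampered packet is caught
```

Both integrity and data origin are covered at once: only a peer holding the same authentication key could have produced a matching signature.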
Of IPSec's two feats, AH can only be used for authentication, not encryption; ESP, on the other hand, can be used for both encryption and authentication. AH and ESP can be used independently or in tandem.
6.1.2 Security Encapsulation
Tiandihui cannot raise their "anti-Qing" banners or proclaim their mission, so they often have
to cloak their actions through "legitimate" forms of business. For instance, the public identity
of a host might be as online shopkeeper, and the public identity of a sub-host might be a buyer;
such an exchange just might be the best catch-all. To better utilize this catch-all, IPSec has
designed two modes of encapsulation:
 Openly repair the gallery by day while secretly passing through Chencang - tunnel mode
In tunnel mode, an AH or ESP header is inserted before the original IP header, and a new IP header is then generated and placed before the AH or ESP header, as shown in Figure 6-3.
Figure 6-3 Tunnel mode packet encapsulation
(The host FW encapsulates the original packet; the encapsulated packet crosses the tunnel to the sub-host FW, which decapsulates it.)
Original packet:            IP Header | TCP Header | data
AH-encapsulated packet:     New IP Header | AH | IP Header | TCP Header | data
                            (AH authentication covers the entire packet)
ESP-encapsulated packet:    New IP Header | ESP | IP Header | TCP Header | data | ESP Tail | ESP Auth data
                            (encryption covers the original IP header through the ESP Tail;
                             authentication covers the ESP header through the ESP Tail)
AH-ESP-encapsulated packet: New IP Header | AH | ESP | IP Header | TCP Header | data | ESP Tail | ESP Auth data
                            (ESP encryption and authentication as above; AH authentication covers the entire packet)
Tunnel mode uses the new packet header to encapsulate the message; the new IP header's source and destination addresses are the tunnel's two public IP addresses. In this way, tunnel mode uses two gateways to establish an IPSec tunnel, thereby protecting communication between the two networks behind the gateways, and it is currently the more commonly used mode of encapsulation. Once a message within the host-to-sub-host private network is encrypted and encapsulated, it is disguised as everyday communication between the public identities of the host and sub-host (their public IP addresses) as shopkeeper and buyer, so it won't arouse suspicion.
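The tunnel-mode layout above can be sketched in a few lines of Python. This is only a framing illustration: the ESP header's 4-byte SPI and 4-byte sequence number follow the standard ESP layout, but the "encryption" step is omitted and the tail/ICV bytes are placeholders.

```python
# Sketch of tunnel-mode ESP framing. It shows only WHERE the new IP
# header and ESP fields sit relative to the original packet; the actual
# encryption of the inner packet is omitted.
import struct

def esp_tunnel_encapsulate(original_packet: bytes, spi: int, seq: int,
                           new_ip_header: bytes) -> bytes:
    esp_header = struct.pack("!II", spi, seq)  # 4-byte SPI + 4-byte sequence number
    esp_tail = b"\x00\x00"                     # pad length + next header (placeholder)
    esp_auth = b"\x00" * 12                    # "ESP Auth data" / ICV (placeholder)
    # New IP Header | ESP | (encrypted) original IP packet | ESP Tail | ESP Auth data
    return new_ip_header + esp_header + original_packet + esp_tail + esp_auth

pkt = esp_tunnel_encapsulate(b"ORIGINAL-IP-PACKET", spi=12345, seq=1,
                             new_ip_header=b"NEWHDR")
assert pkt.startswith(b"NEWHDR")                       # new header leads
assert struct.unpack("!II", pkt[6:14]) == (12345, 1)   # SPI and sequence follow
```

On receipt, the peer uses the SPI in the ESP header to find the right SA, decrypts the inner portion, and recovers the original IP packet intact.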
 "The door opens on a view of mountains"; cut to the chase - transport mode
In transport mode, the AH or ESP header is inserted between the IP header and the TCP header, as shown in Figure 6-4.
Figure 6-4 Transport mode packet encapsulation
(One PC encapsulates the packet; the peer PC decapsulates it.)
Original packet:            IP Header | TCP Header | data
AH-encapsulated packet:     IP Header | AH | TCP Header | data
                            (AH authentication covers the entire packet)
ESP-encapsulated packet:    IP Header | ESP | TCP Header | data | ESP Tail | ESP Auth data
                            (encryption covers the TCP header through the ESP Tail;
                             authentication covers the ESP header through the ESP Tail)
AH-ESP-encapsulated packet: IP Header | AH | ESP | TCP Header | data | ESP Tail | ESP Auth data
                            (ESP encryption and authentication as above; AH authentication covers the entire packet)
In transport mode, the original packet header is left unchanged, and the tunnel's source and destination IP addresses are the communication's final source and destination IP addresses. As such, only messages sent between the two endpoints are protected, rather than all traffic between two networks. For this reason, this mode only suits communication between two individual hosts, not the private networks linking Tiandihui hosts and sub-hosts.
6.1.3 Security Associations
A connection established in IPSec between two ends is known as an SA (Security
Association). As the name suggests, the two ends become associates wherein the same
encapsulation mode, encryption algorithm, cipher key, authentication algorithm, and
authentication key are used, which naturally requires mutual trust and a degree of intimacy.
An SA is a unidirectional connection, so hosts and sub-hosts establish SAs in both directions to protect all of their communication. The host's inbound SA corresponds to the sub-host's outbound SA, and the host's outbound SA corresponds to the sub-host's inbound SA, as shown in Figure 6-5.
Figure 6-5 Schematic of IPSec security association
(Data flow 1 from the host to the sub-host is protected by SA1: outbound on the host FW, inbound on the sub-host FW. Data flow 2 in the reverse direction is protected by SA2: outbound on the sub-host FW, inbound on the host FW.)
To differentiate between SAs in different directions, IPSec adds a unique identifier to each SA. This identifier is called an SPI (Security Parameter Index).
The most direct method of establishing an SA is to have hosts and sub-hosts define
encapsulation modes, encryption algorithms, cipher keys, authentication algorithms, and
authentication keys, that is, to manually establish IPSec SAs.
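A minimal sketch of the idea that each unidirectional SA is identified by its SPI and bundles the agreed parameters (the field names and values below are illustrative, not a real firewall data structure):

```python
# Each direction gets its own SA, looked up by SPI on receipt.
sa_database = {}

def install_sa(spi: int, direction: str, encryption: str,
               authentication: str, key: bytes) -> None:
    # Store the parameters both associates agreed to use for this SA.
    sa_database[spi] = {"direction": direction,
                        "encryption": encryption,
                        "authentication": authentication,
                        "key": key}

# The host's outbound SA is the sub-host's inbound SA, and vice versa.
install_sa(12345, "outbound", "aes", "sha1", b"example-key-1")
install_sa(54321, "inbound", "aes", "sha1", b"example-key-2")

# An arriving packet carries its SPI, which selects the inbound SA to use.
assert sa_database[54321]["direction"] == "inbound"
```

This is why the SPI must be unique per SA: it is the only handle the receiver has for picking the right decryption and authentication parameters.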
6.2 Manual IPSec VPNs
Having learned of the power that IPSec had to offer, Tiandihui Host Chen decided to first manually establish an IPSec tunnel between the host and a single sub-host, protecting messages transmitted within the internal host-to-sub-host network, to test the security that IPSec tunnels had to offer.
Figure 6-6 Manual IPSec VPN networking
(Host FW_A: GE0/0/1 at 192.168.0.1/24 toward PC_A at 192.168.0.2/24, GE0/0/2 at 1.1.1.1/24 toward the Internet. Sub-host FW_B: GE0/0/2 at 2.2.2.2/24 toward the Internet, GE0/0/1 at 172.16.0.1/24 toward PC_B at 172.16.0.2/24. An IPSec tunnel runs between FW_A and FW_B.)
IPSec is a VPN technique established on the Internet, overlaid on the fundamental features of
a firewall. As such, before an IPSec VPN is configured, communication throughout the entire
network must first be unimpeded. Specifically, the following two conditions must first be met:
 FW_A and FW_B are routable to each other through the public network.
 FW_A's and FW_B's security policies allow traffic between PC_A and PC_B.
As for the configuration of IPSec VPN security policies, see section 6.10 "Security Policy
Configuration Roadmap". For now, we should avoid any detours and first focus on this
section - manual IPSec VPN configuration.
To make the relationships between encryption, authentication, and SA configuration clearer, manual IPSec configuration is broken into four steps:
 Define which data flows must be protected
Only internal network messages between hosts and sub-hosts will be protected by IPSec; all other messages go unprotected.
 Configure the IPSec proposal
The host and sub-host FWs decide whether or not to become associates based on the other party's proposal. The encapsulation mode, security protocol, encryption algorithm, and authentication algorithm are all set in the IPSec proposal.
 Configure the manual IPSec policy
The host and sub-host FWs' public IP addresses, the SA identifiers (SPIs), and the cipher and authentication keys are designated here.
 Apply the IPSec policy
The logic behind manual IPSec configuration is as shown in Figure 6-7.
Figure 6-7 Schematic of manual IPSec VPN configuration
1. Define which data flows must be protected:
   acl A
    rule permit ip
2. Configure the IPSec proposal:
   ipsec proposal B
    transform
    encapsulation-mode
    esp encryption-algorithm
    esp authentication-algorithm
    ah authentication-algorithm
3. Configure the manual IPSec policy:
   ipsec policy C (manual)
    security acl A
    proposal B
    tunnel local / tunnel remote
    sa spi inbound / sa spi outbound
    sa string-key inbound / sa string-key outbound
4. Apply the IPSec policy:
   interface
    ipsec policy C
Tiandihui host and sub-host FW key configurations and corresponding explanations are as
shown in Table 6-3.
Table 6-3 Manual IPSec VPN configuration (IPSec parameters)
Host FW_A:
 acl number 3000
  rule 5 permit ip source 192.168.0.0 0.0.0.255 destination 172.16.0.0 0.0.0.255
 ipsec proposal pro1
  transform esp
  encapsulation-mode tunnel
  esp authentication-algorithm sha1
  esp encryption-algorithm aes
 ipsec policy policy1 1 manual
  security acl 3000
  proposal pro1
  tunnel local 1.1.1.1
  tunnel remote 2.2.2.2
  sa spi inbound esp 54321
  sa spi outbound esp 12345
  sa string-key inbound esp huawei@123
  sa string-key outbound esp huawei@456
 interface GigabitEthernet0/0/2
  ip address 1.1.1.1 255.255.255.0
  ipsec policy policy1
 ip route-static 172.16.0.0 255.255.255.0 1.1.1.2  //static route to the peer private network, guiding traffic out the interface where the IPSec policy is applied

Sub-host FW_B:
 acl number 3000
  rule 5 permit ip source 172.16.0.0 0.0.0.255 destination 192.168.0.0 0.0.0.255
 ipsec proposal pro1
  transform esp
  encapsulation-mode tunnel
  esp authentication-algorithm sha1
  esp encryption-algorithm aes
 ipsec policy policy1 1 manual
  security acl 3000
  proposal pro1
  tunnel local 2.2.2.2
  tunnel remote 1.1.1.1
  sa spi inbound esp 12345
  sa spi outbound esp 54321
  sa string-key inbound esp huawei@456
  sa string-key outbound esp huawei@123
 interface GigabitEthernet0/0/2
  ip address 2.2.2.2 255.255.255.0
  ipsec policy policy1
 ip route-static 192.168.0.0 255.255.255.0 2.2.2.1  //static route to the peer private network, guiding traffic out the interface where the IPSec policy is applied
When IPSec is manually configured, all IPSec SA parameters, including the encryption and
authentication keys, must be manually configured and updated by the user. Also, the IPSec
VPN access route between the two private networks can only rely on configured static routes; there are no better options.
Once deployed, PC_A will ping messages to PC_B and PC_B will ping back a reply.
Tiandihui simulated a "government" checkpoint on the Internet to find that the messages
pinged back and forth had already been protected by the IPSec SAs. The SPIs identifying the IPSec SAs in each direction were 0x3039 (decimal 12345) and 0xd431 (decimal 54321), consistent with the configuration.
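As a quick sanity check, the hexadecimal SPIs in the capture are just the configured decimal values rendered in hex:

```python
# 0x3039 and 0xd431 are the hexadecimal forms of the configured SPIs.
assert 0x3039 == 12345
assert 0xd431 == 54321
print(hex(12345), hex(54321))  # -> 0x3039 0xd431
```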
(The capture shows the packet layered as: public network packet header | ESP header | encapsulated message.)
Because tunneling was used for encapsulation, the external IP header address on the IPSec
packet was the public IP address. By taking a look at the contents of the packet, we'll see that the ping message carried after the ESP header was already encrypted and completely unintelligible. In other words, even if this message were flagged down by "government officials", they'd have no way of retrieving anything of value.
To weigh the value of ESPs and AHs, Tiandihui used IPSec's other trick, AH, to establish an
SA. AH can only be used for authentication and cannot encrypt messages. As such, messages
obtained in the government checkpoint would reveal the true nature of the private network
packet header and ping messages encapsulated within the AH header. As such, if encryption is
needed, it's still better to use ESP, or AH and ESP in tandem.
(The capture shows: public network packet header | AH header | private network packet header; the message itself is not encrypted.)
Once IPSec was in use, communication between Tiandihui hosts and sub-hosts was unimpeded. Shortly thereafter, many new sub-hosts were added, and each also needed to establish an IPSec tunnel to the host. If manual configuration were to continue, every single sub-host would have to configure its own parameters, a Herculean effort for sure; also, for security reasons, the cipher and authentication keys would need to be updated often. Naturally, this backwards approach did not last long, and Host Chen and his attendants quickly solved the dilemma: IKE/IPSec VPNs could replace their manual IPSec VPNs.
6.3 IKE and ISAKMP
Keys used for manual IPSec VPN encryption and authentication are all manually configured.
To ensure the long-term security of an IPSec VPN, these keys must be modified and replaced
often. As the number of sub-hosts grow, the task of key configuration and modification
becomes ever more difficult. As Tiandihui's forces grew in strength, the maintenance and
administration of IPSec VPNs became a daunting task. To lessen the load of IPSec VPN
administration, the Tiandihui Host paid homage to the Supreme Host to find an elixir for their
ailments. The elixir was there all along, a gift of the gods. IPSec protocol profiles have long
pondered this dilemma - the answer lies in a "smart" key steward - the IKE (Internet Key
Exchange) protocol.
IKE combines three major protocols: ISAKMP (Internet Security Association and Key Management Protocol), the Oakley protocol, and the SKEME protocol.
 ISAKMP mainly defines the process used to establish the collaborative relationship (for example, the IKE SA and IPSec SA) between IKE peers.
 At the heart of the Oakley and SKEME protocols is the DH (Diffie-Hellman) algorithm, which ensures data transfer security through the safe distribution of keys and authentication material over the Internet. Do not underestimate the DH algorithm: with it, the cipher and authentication keys needed for the IPSec and IKE SAs can be dynamically refreshed.
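The DH exchange can be sketched with toy-sized numbers in Python (real IKE uses large primes from standardized DH groups; the prime, generator, and secrets below are illustrative only). Each side publishes only g^secret mod p, yet both derive the same shared key:

```python
# Toy Diffie-Hellman exchange: eavesdroppers see only the public values,
# never the secrets, yet both peers derive the same shared key.
p, g = 2087, 5          # public prime modulus and generator (toy-sized)

a_private = 1234        # host's secret, never transmitted
b_private = 5678        # sub-host's secret, never transmitted

a_public = pow(g, a_private, p)   # exchanged openly ("key material")
b_public = pow(g, b_private, p)   # exchanged openly

# Each side combines its own secret with the peer's public value...
shared_a = pow(b_public, a_private, p)
shared_b = pow(a_public, b_private, p)
assert shared_a == shared_b       # ...and both arrive at the same key
```

Recovering a_private from a_public requires solving a discrete logarithm, which is infeasible at real key sizes; that is why stolen key materials alone do not reveal the shared key.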
With an IKE association, IPSec VPN security and administrative hassles were no longer a
problem for Tiandihui. Now, new sub-host VPNs could be established at incredible speeds.
The ultimate goal of the IKE protocol is to establish a dynamic IPSec SA through
host-to-sub-host negotiations while also providing real-time maintenance of the IPSec SA.
The establishment of an IPSec SA and the corresponding IKE negotiation is completed
through ISAKMP packets, and as such, before we deploy our IKE, Dr. WoW will give us a
lesson on ISAKMP packets, as shown in Figure 6-8.
Figure 6-8 ISAKMP packet encapsulation and packet headers
Encapsulation: IP Header | UDP Header | ISAKMP Header | ISAKMP Payload (Type Payload)
IP Header: protocol (UDP = 17), source address (local gateway), destination address (remote gateway)
UDP Header: source port (IKE = 500), destination port (IKE = 500)
ISAKMP Header: Initiator's Cookie (SPI), Responder's Cookie (SPI), Next Payload, Major Version, Minor Version, Exchange Type, Flags, Message ID, Message Length
 IP packet header
− SRC (Source IP Address): the local IP address that initiates the IKE negotiation; it may belong to a physical or logical interface, and may be configured by command.
− DST (Destination IP Address): the peer IP address for the IKE negotiation; configured by command.
 UDP packet header
The IKE protocol uses port 500 both to initiate and to respond to negotiation. When the host and sub-hosts all have fixed IP addresses, this port never changes during the negotiation process. When there is a NAT device between the host and a sub-host (the NAT traversal scenario), IKE uses a special process that we will discuss later on.
 ISAKMP packet header
− Initiator's Cookie (SPI) and Responder's Cookie (SPI): in both IKEv1 and IKEv2, the cookie (SPI) serves as a unique IKE SA identifier.
− Version: the IKE version number. Many things have changed for the better since the launch of IKE; to differentiate, the older IKE is known as IKEv1, while the updated IKE is known as IKEv2.
− Exchange Type: the IKE-defined exchange type, which defines the exchange sequence that ISAKMP messages must follow. Later we will discuss IKEv1's main mode, aggressive mode, and quick mode, and, when discussing IKEv2, the initial exchanges and child SA exchanges; all of these are different IKE-defined exchange types.
− Next Payload: identifies the type of the next payload in the message. A single ISAKMP packet may be loaded with multiple payloads, and this field provides "link" capabilities between them. If the current payload is the message's final payload, this field is 0.
− ISAKMP Payload (Type Payload): the payloads carried in an ISAKMP packet, used as "parameter packages" for negotiating the IKE and IPSec SAs. There are many different payload types, and each may carry different parameters; the specific usage of the different payloads will be discussed together with the packet captures.
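The fixed ISAKMP header fields listed above can be unpacked with a short Python sketch (the field widths follow the standard 28-byte ISAKMP header layout; the sample bytes are fabricated for the example):

```python
# Parse the fixed 28-byte ISAKMP header: two 8-byte cookies (SPIs),
# next payload, version (major/minor nibbles), exchange type, flags,
# message ID, and total message length.
import struct

def parse_isakmp_header(data: bytes) -> dict:
    (init_cookie, resp_cookie, next_payload, version,
     exchange_type, flags, message_id, length) = struct.unpack(
        "!8s8sBBBBII", data[:28])
    return {
        "initiator_cookie": init_cookie,
        "responder_cookie": resp_cookie,
        "next_payload": next_payload,        # 0 means "last payload"
        "major_version": version >> 4,       # high nibble
        "minor_version": version & 0x0F,     # low nibble
        "exchange_type": exchange_type,
        "flags": flags,
        "message_id": message_id,
        "length": length,
    }

# Fabricated sample: IKEv1 (version 0x10), exchange type 2 (main mode).
hdr = struct.pack("!8s8sBBBBII", b"\x01" * 8, b"\x02" * 8, 1, 0x10, 2, 0, 0, 28)
parsed = parse_isakmp_header(hdr)
assert parsed["major_version"] == 1 and parsed["exchange_type"] == 2
```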
The essence of IKEv1 and IKEv2 is the focus of this section. Without further ado, Dr. WoW
will introduce it.
6.4 IKEv1
6.4.1 Configuring IKE/IPSec VPNs
IKE is best for establishing IPSec VPNs between a single host and multiple sub-hosts, and its advantages become ever more prominent as the number of sub-hosts grows. For convenience's sake, here we'll only give a snapshot of how an IPSec VPN is established between one host FW and one sub-host FW, as shown in Figure 6-9.
Figure 6-9 IKEv1/IPSec VPN networking
(Host FW_A: GE0/0/1 at 192.168.0.1/24 toward PC_A at 192.168.0.2/24, GE0/0/2 at 1.1.1.1/24. Sub-host 1 FW_B: GE0/0/2 at 2.2.3.2/24, GE0/0/1 at 172.16.2.1/24 toward PC_B at 172.16.2.2/24.)
As shown in Figure 6-10, compared to manual configurations, the procedure used for
IKE/IPSec VPNs only adds two further steps: the configuration of the IKE proposal and the
IKE peer. The IKE proposal is primarily used to configure the encryption and authentication
algorithms used to establish the IKE SA. The IKE peer is primarily used to configure the IKE
version, identity authentication and exchange mode.
Figure 6-10 Schematic of IKEv1/IPSec configuration
1. Define which data flows must be protected:
   acl A
    rule permit ip
2. Configure the IKE proposal:
   ike proposal B
    encryption-algorithm
    authentication-method
    authentication-algorithm
    dh
    sa duration
3. Configure the IKE peer:
   ike peer D
    undo version 2
    exchange-mode
    ike proposal B (optional)
    identity authentication parameters matching the selected authentication-method
4. Configure the IPSec proposal:
   ipsec proposal C
    transform
    encapsulation-mode
    esp encryption-algorithm
    esp authentication-algorithm
    ah authentication-algorithm
5. Configure the IPSec policy in IKE (isakmp) mode:
   ipsec policy F
    security acl A
    ike-peer D
    proposal C
    pfs (optional)
6. Apply the IPSec policy group:
   interface
    ipsec policy F
(In the original figure, dashed lines mark optional commands and solid lines mark mandatory ones.)
Similar to manual IPSec VPNs, communication throughout the entire network must first be
unimpeded. Specifically, the following two conditions must first be met:
 FW_A and FW_B are routable to each other over the public network.
 FW_A's and FW_B's security policies allow traffic between PC_A and PC_B.
As for the configuration of IPSec VPN security policies, see section 6.10 "Security Policy
Configuration Roadmap". For now, we should first focus on this section - IKE IPSec VPN
configuration.
See Table 6-4 for the steps to configure an IKE/IPSec VPN.
Table 6-4 IKE/IPSec VPN configuration
Host FW_A:
 ike proposal 10
 ike peer b
  undo version 2                      //IKEv1
  exchange-mode main                  //main mode (default)
  ike-proposal 10
  remote-address 2.2.3.2              //peer address for initiating IKE negotiation
  pre-shared-key tiandihui2
 acl number 3001
  rule 5 permit ip source 192.168.0.0 0.0.0.255 destination 172.16.2.0 0.0.0.255
 ipsec proposal a
  transform esp
  encapsulation-mode tunnel
  esp authentication-algorithm md5
  esp encryption-algorithm des
 ipsec policy policy1 1 isakmp
  security acl 3001
  proposal a
  ike-peer b
 interface GigabitEthernet0/0/2
  ip address 1.1.1.1 255.255.255.0
  ipsec policy policy1
 ip route-static 172.16.2.0 255.255.255.0 1.1.1.2   //static route to the peer private network, guiding traffic out the interface where the IPSec policy is applied

Sub-host FW_B:
 ike proposal 10
 ike peer a
  undo version 2                      //IKEv1
  exchange-mode main                  //main mode (default)
  ike-proposal 10
  remote-address 1.1.1.1              //peer address for initiating IKE negotiation
  pre-shared-key tiandihui2
 acl number 3000
  rule 5 permit ip source 172.16.2.0 0.0.0.255 destination 192.168.0.0 0.0.0.255
 ipsec proposal b
  transform esp
  encapsulation-mode tunnel
  esp authentication-algorithm md5
  esp encryption-algorithm des
 ipsec policy policy1 1 isakmp
  security acl 3000
  proposal b
  ike-peer a
 interface GigabitEthernet0/0/2
  ip address 2.2.3.2 255.255.255.0
  ipsec policy policy1
 ip route-static 192.168.0.0 255.255.255.0 2.2.3.1   //static route to the peer private network, guiding traffic out the interface where the IPSec policy is applied
Once configured, a data flow between the host and sub-host FWs will trigger the establishment of an IPSec tunnel. Next, Dr. WoW will use packet captures to help us all understand the essence of IKEv1.
IKEv1 completes the dynamic establishment of an IPSec SA in two phases:
 Phase 1, establishing the IKE SA: main mode or aggressive mode is used for the IKE SA negotiation.
 Phase 2, establishing the IPSec SA: quick mode is used for the IPSec SA negotiation.
Why divide this into two phases, and how are the two phases related? Simply put, phase 1 prepares for phase 2: the two IKE peers exchange key materials, generate keys, and perform mutual identity authentication. Only after these preparations are completed does the actual IPSec SA negotiation begin.
Next, we will introduce the main mode + quick mode process used to establish an IPSec SA. First, we will take a look at the differences between IKE/IPSec VPNs and manual IPSec VPNs.
6.4.2 Phase 1: Establishing IKE SA (Main Mode)
Three steps, carried in six ISAKMP messages, are taken to establish an IKE SA in IKEv1 main mode.
Next, we will use a host FW initiated IKE negotiation to illustrate the phase 1 negotiation
process, as shown in Figure 6-11.
Figure 6-11 IKE SA main mode negotiation process
(FW_A and FW_B exchange six ISAKMP messages in three pairs:
Messages 1-2 negotiate the parameters for setting up the IKE SA: FW_A sends its local IKE security proposal, and FW_B confirms the IKE security proposal.
Messages 3-4 exchange key information: each side sends its key materials, and both generate new keys.
Messages 5-6 exchange identity information, and each side authenticates the peer's identity.)
2. IKE proposal negotiation
The negotiation falls into two circumstances:

• The initiator's IKE peer cites an IKE proposal (the cited IKE proposal is sent).

• The initiator's IKE peer does not cite an IKE proposal (all IKE proposals are sent).
In both circumstances, the responder searches its own configured IKE proposals for one that matches the initiator's (this is why the cited IKE proposal line in Figure 6-10 "Schematic of IKEv1/IPSec configuration" is dashed: citing and not citing are both acceptable). If there is no matching security proposal, the negotiation fails.
The principle by which the two IKE peers judge whether their IKE proposals match is that both ends must use the same encryption algorithm, authentication algorithm, identity authentication method, and DH group ID; the IKE SA's life cycle is not included in the match.
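The matching rule above can be sketched in a few lines. This is an illustrative model in Python, not Huawei's implementation: the responder searches its own proposals for one agreeing with the peer's on the four matched fields, the life cycle is excluded from the match, and (per the NOTE below) the shorter of the two configured lifetimes is adopted.

```python
# Illustrative sketch (not Huawei code) of IKE proposal matching: the
# life cycle is deliberately excluded; the shorter lifetime is adopted.
MATCH_FIELDS = ("encryption", "auth_algorithm", "auth_method", "dh_group")

def match_proposal(initiator_proposals, responder_proposals):
    for r in responder_proposals:          # responder searches its own list
        for i in initiator_proposals:      # against everything the peer sent
            if all(i[f] == r[f] for f in MATCH_FIELDS):
                return {**{f: r[f] for f in MATCH_FIELDS},
                        "lifetime": min(i["lifetime"], r["lifetime"])}
    return None  # no matching proposal: negotiation fails

peer = [{"encryption": "aes-256", "auth_algorithm": "sha2-256",
         "auth_method": "pre-share", "dh_group": 14, "lifetime": 86400}]
local = [{"encryption": "aes-256", "auth_algorithm": "sha2-256",
          "auth_method": "pre-share", "dh_group": 14, "lifetime": 43200}]
print(match_proposal(peer, local))  # match found; lifetime 43200
```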
IKE peer configuration parameters are the keys that allow the two IKE ends to connect successfully: the peer IDs (IP addresses or domain names) and the identity authentication parameters. No matter where an error lies, "though we sit end by end, fate would not have it so".
NOTE
When an IPSec SA is established through IKEv1 negotiation, the IKE SA timeout is the shorter of the local life cycle and the peer life cycle. Therefore, if the life cycles configured on the two devices at either end of the tunnel differ, IKE negotiation is not affected.
Looking at the packet capture, we can see the IKE proposal used for negotiation carried in the SA payload of the ISAKMP messages. Message (1) is used as an example below:
3. The DH algorithm is used to exchange key materials and generate keys.
The host and sub-host FWs use the Key Exchange and nonce payloads in ISAKMP messages to exchange key materials: Key Exchange carries the public DH values, and nonce carries temporary random numbers. In the DH algorithm, the two IKE peers exchange only key materials, never the actual shared key. As such, a hacker who steals the DH values and temporary random numbers still cannot work out the shared key. This is precisely the beauty of the DH algorithm. In the packet capture, we can see the key materials exchanged between the IKE peers. Message (3) is used as an example below:
Once key materials are exchanged, the two IKE peers each combine them with their configured identity authentication method and begin their respective, complicated key computations (pre-shared keys or digital certificates are all involved in the key computing process), ultimately generating 3 keys:

• SKEYID_a: the ISAKMP message integrity authentication key. No one would dare tamper with an ISAKMP message: if the message changes in the slightest, the integrity check will detect it!

• SKEYID_e: the ISAKMP message encryption key. Don't even try to steal an ISAKMP message; even if you do, you won't understand it!

• SKEYID_d: used to derive the encryption and authentication keys for IPSec packets. Ultimately, this key guarantees the security of IPSec-encapsulated data packets!

The first two keys ensure the security of subsequent ISAKMP message exchanges!
Throughout the entire key exchange and computation process, the keys are controlled by the IKE SA timeout and are automatically refreshed at a set interval, avoiding the potential security issues of using a key for a long period of time.
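The exchange-then-derive flow above can be sketched with toy numbers. This is illustrative only: real IKEv1 uses large MODP groups, and RFC 2409's exact PRF chaining also mixes in the ISAKMP cookies, which are omitted here.

```python
# Toy sketch of the DH exchange and the SKEYID_a/e/d derivation above
# (illustrative; real groups are 1024+ bits and the RFC 2409 derivation
# also includes the ISAKMP cookies, omitted for brevity).
import hmac, hashlib

p, g = 2**64 - 59, 2               # toy prime modulus

xa, xb = 0x1234, 0x5678            # private values, never sent
ya, yb = pow(g, xa, p), pow(g, xb, p)   # public DH values (KE payload)

shared_a = pow(yb, xa, p)          # each side combines its private value
shared_b = pow(ya, xb, p)          # with the peer's public value
assert shared_a == shared_b        # same secret, never on the wire

def prf(key, data):                # HMAC as the pseudo-random function
    return hmac.new(key, data, hashlib.sha256).digest()

nonces = b"nonce_i" + b"nonce_r"             # from the nonce payloads
skeyid = prf(b"pre-shared-key", nonces)      # pre-share variant of SKEYID
gxy = shared_a.to_bytes(8, "big")
skeyid_d = prf(skeyid, gxy + b"\x00")                  # IPSec key material
skeyid_a = prf(skeyid, skeyid_d + gxy + b"\x01")       # ISAKMP integrity
skeyid_e = prf(skeyid, skeyid_a + gxy + b"\x02")       # ISAKMP encryption
```

Note how an eavesdropper who records ya, yb, and the nonces still lacks xa or xb, and therefore cannot reach the shared secret or any of the three keys.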
4. Identity authentication
The two IKE peers exchange identity information in ISAKMP messages (5) and (6) to perform identity authentication. Currently, two identity authentication techniques are commonly used:
• Pre-shared key method (pre-share): the device identity information is an IP address or name.

• Digital certificate method: the device identity information is the certificate and a partial-message HASH value (signature) generated through private-key encryption.
The identity information above is encrypted with SKEYID_e. As such, in the packet capture we can only see ISAKMP messages marked "Encrypted" and cannot see the message contents (the identity information). Message (5) is used as an example below:
The pre-shared key is the simplest and most common method of identity authentication. With this method, a device's identity can be an IP address or a name (including FQDN and USER-FQDN). When both IKE peers have fixed IP addresses, in general they are both identified by their IP addresses; when one peer's IP address is dynamically assigned, that peer can only be identified by its name. Identity authentication succeeds only when the identity information held by the authenticating peer matches that presented by the authenticated peer. The key points of host and sub-host FW identity authentication configuration are shown in Table 6-5.
Table 6-5 Identity authentication message configuration

Device Identity Type | Host (Authenticating) | Sub-Host (Subject to Authentication)
IP address           | remote-address        | Interface address, or the address designated by local-address for initiating IPSec tunnel negotiation
FQDN                 | remote-id             | ike local-name
USER-FQDN            | remote-id             | ike local-name
There's an issue here that we should all be aware of: when the two IKE peers both have fixed IP addresses, the IP address configured with the remote-address command must be the same address the peer uses to initiate IKE negotiation. The use of this IP address is threefold: it designates the IP address of the tunnel peer, it is used to look up the local pre-shared key, and it is used to authenticate the peer's identity. This is one of IPSec's greatest pitfalls: the address plays different roles in different circumstances, and if the pattern is not understood, errors are inevitable.
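The threefold role just described can be sketched for a responder using pre-shared-key authentication. This is an illustrative model, not Huawei code; the peer table and function names are hypothetical.

```python
# Illustrative sketch (not Huawei code): the remote-address entry names
# the tunnel peer, locates the pre-shared key, and is checked against
# the peer's claimed identity.
from typing import Optional

IKE_PEERS = {"2.2.3.2": {"psk": "tiandihui1"}}   # keyed by remote-address

def on_ike_negotiation(src_ip: str, claimed_identity: str) -> Optional[str]:
    peer = IKE_PEERS.get(src_ip)      # roles 1 and 2: the entry both names
    if peer is None:                  # the tunnel peer and holds its PSK
        return None                   # unknown peer: negotiation fails
    if claimed_identity != src_ip:    # role 3: authenticate the identity
        return None
    return peer["psk"]

assert on_ike_negotiation("2.2.3.2", "2.2.3.2") == "tiandihui1"
assert on_ike_negotiation("2.2.3.9", "2.2.3.9") is None  # wrong source IP
```

The failing second call shows the pitfall: if the peer initiates negotiation from any address other than the configured remote-address, the lookup fails before authentication even begins.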
In summary, IKE performs two major actions in the phase 1 negotiation process:

• It generates the encryption and authentication keys for subsequent ISAKMP messages, and also generates the key material for the IPSec SA.

• It completes identity authentication, with the identity information encrypted; the identity can be an IP address, a device name, or a digital certificate carrying even more information.

Moreover, all of these achievements are subject to SA life cycle control: as soon as the SA times out, all of the tasks above are repeated. The benefit is that the results of key computation and identity authentication, whatever they are, are refreshed on a fixed schedule, eliminating opportunities for malicious behavior. None of this exists for manual IPSec VPNs, which is precisely why IKE is a "smart" key steward.
6.4.3 Phase 2: Establishing IPSec SA
In phase 2, the two IKE peers continue to exchange key materials, including SKEYID_d, SPIs, protocols (AH and/or ESP), nonces, and other such parameters. Then each peer performs key computation and generates the keys used for IPSec SA encryption and authentication. In this way, every IPSec SA is guaranteed to use a unique key for the subsequent encryption and authentication of data transfers. The IPSec SA also has a timeout: as soon as the IPSec SA times out, the IKE SA and IPSec SA are deleted, and negotiation begins anew.
IKEv1 uses the fast exchange mode to establish an IPSec SA through three ISAKMP messages. The packet capture is as follows:
Because the fast-mode ISAKMP messages are encrypted with the SKEYID_e generated in IKEv1 phase 1, all of the packets we captured are encrypted and we cannot see the specific contents of the payloads. As such, we can only explain the following steps textually. As shown in Figure 6-12, we will again illustrate with a host FW initiated IPSec negotiation.
Figure 6-12 IPSec SA fast exchange mode negotiation process

FW_A and FW_B negotiate the IPSec SA with three messages:
1. The initiator sends IPSec security parameters and key materials.
2. The responder sends matched security parameters and key materials.
3. The initiator sends confirmation.
2. The initiator sends the IPSec proposal, the protected data flow (ACL), and key materials to the responder.
3. The responder responds with a matching IPSec proposal and protected data flow. At the same time, both parties generate the keys for the IPSec SA.
IPSec uses an ACL to delineate the data flows it wants to protect. In our example, where the host only needs to communicate with a single sub-host, the configuration is relatively simple: only a "mutual mirror" pair of ACLs (the source and destination IP addresses of one ACL are the reverse of the other's) needs to be configured, one on each FW. However, in other, more complex situations, for instance when a host establishes VPNs with multiple sub-hosts that communicate with each other through the host, or with L2TP/GRE over IPSec, or on a 2-in-1 IPSec and NAT gateway, ACL configuration becomes more particular. We will revisit these scenarios when we encounter them.
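The "mutual mirror" requirement above can be expressed as a tiny check: one rule's source network must equal the other's destination, and vice versa. This is a simplified sketch; real ACL rules also carry protocols, ports, and wildcard masks.

```python
# A small sketch of the "mutual mirror" check (simplified: networks only).
def is_mirror(rule_a, rule_b):
    return (rule_a["src"] == rule_b["dst"] and
            rule_a["dst"] == rule_b["src"])

fw_a_rule = {"src": "192.168.0.0/24", "dst": "172.16.2.0/24"}  # on FW_A
fw_b_rule = {"src": "172.16.2.0/24", "dst": "192.168.0.0/24"}  # on FW_B
print(is_mirror(fw_a_rule, fw_b_rule))  # True
```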
The principle by which the two IKE peers judge whether their IPSec proposals match is that both ends' security protocols, and the authentication algorithms, encryption algorithms, and encapsulation modes those protocols use, must all be the same.
Because all IPSec SA keys are derived from SKEYID_d, a leak of SKEYID_d may compromise the IPSec VPN. To raise the security of key management, IKE provides the PFS (perfect forward secrecy) function. Once PFS is enabled, an additional DH exchange is performed during IPSec SA negotiation to generate new IPSec SA keys, thereby further improving IPSec SA security. Remember that if PFS is to be configured, it must be configured on the FWs at both ends of the tunnel!
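The benefit of that extra DH exchange can be shown conceptually: without PFS, the IPSec SA key is a function of SKEYID_d plus values visible on the wire, so a leaked SKEYID_d gives everything away; with PFS, a fresh DH secret known only to the two peers enters the derivation. The byte strings below are placeholders, not real protocol values.

```python
# Conceptual sketch of PFS (placeholder byte strings, not real values).
import hmac, hashlib

def prf(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

skeyid_d = b"suppose-this-leaked"
wire_visible = b"protocol|SPI|Ni|Nr"          # an eavesdropper records these

key_without_pfs = prf(skeyid_d, wire_visible)    # attacker can recompute
fresh_dh_secret = b"extra-phase-2-DH-secret"     # known to the peers only
key_with_pfs = prf(skeyid_d, fresh_dh_secret + wire_visible)
assert key_without_pfs != key_with_pfs           # leak alone is no longer enough
```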
4. The initiator sends confirmation results.
Once negotiation is complete, the sender begins to send IPSec (ESP) packets.
Once an IPSec SA is successfully established, we can check the IPSec VPN status information. This is illustrated with output displayed on the host FW_A.
• Check the IKE SA status:
<FW_A> display ike sa
current ike sa number: 2
------------------------------------------------------------------------
conn-id    peer       flag     phase    vpn
------------------------------------------------------------------------
40129      2.2.3.2    RD|ST    v1:2     public
40121      2.2.3.2    RD|ST    v1:1     public
Here both the IKE SA (v1:1) and IPSec SA (v1:2) statuses are displayed. RD means that the SA is in the READY state. There is only one IKE SA between the IKE peers; the IKE SA is a two-way logical connection (no distinction between source and destination).
• Check the IPSec SA status:
<FW_A> display ipsec sa brief
current ipsec sa number: 2
current ipsec tunnel number: 1
---------------------------------------------------------------------
Src Address    Dst Address    SPI           Protocol    Algorithm
---------------------------------------------------------------------
1.1.1.1        2.2.3.2        4090666525    ESP         E: DES; A: HMAC-MD5-96;
2.2.3.2        1.1.1.1        2927012373    ESP         E: DES; A: HMAC-MD5-96;
The IPSec SA is unidirectional (there is a distinction between source and destination), and the two IPSec SAs together constitute a single IPSec tunnel. Generally speaking, a single data flow corresponds to a single IPSec SA. However, when IPSec uses ESP + AH encapsulation, a single data flow corresponds to two IPSec SAs.
• Check the session table:
<FW_A> display firewall session table
Current Total Sessions: 3
icmp VPN: public --> public 192.168.0.2:18334 --> 172.16.2.2:2048
udp VPN: public --> public 1.1.1.1:500 --> 2.2.3.2:500
esp VPN: public --> public 2.2.3.2:0 --> 1.1.1.1:0
We have now established the IPSec SA. On the surface it might seem that there are not many differences between the IKE method and the manual method, but in fact the differences are massive:

• Different key generation methods
With the manual method, all of the IPSec SA parameters, including the encryption and authentication keys, are manually configured and can only be manually refreshed.
With the IKE method, all of the IPSec SA encryption and authentication keys are generated by the DH algorithm and can be dynamically refreshed. Key management costs are lower, and security is higher.

• Different IPSec SA life cycles
Manually created IPSec SAs exist permanently.
With the IKE method, IPSec SAs are triggered into being by the data flow, and their life cycle is controlled by the SA life cycle parameters (also known as the SA timeout) configured on the two ends.
Main mode is the recommended configuration for reliable security. Next, we will take a brief look at another kind of IKE SA negotiation method: aggressive mode.
6.4.4 Phase 1: Establishing IKE SA (Aggressive Mode)
IKEv1 completes IPSec dynamic negotiation in two phases; when phase 1 uses main mode negotiation, it offers reliable security, but sometimes even the perfect main mode is useless. Why is that? Let's see what happens when we replace main mode with aggressive mode.
In Table 6-4 "IKE/IPSec VPN configuration", if we change the IKE peer configuration command exchange-mode main to exchange-mode aggressive, the IKEv1 phase 1 negotiation mode changes to aggressive mode. Let's take a look at the packet capture to see how aggressive mode messages interact:
As shown in Figure 6-13, aggressive mode only needs three ISAKMP messages to complete
the phase 1 negotiation process. Phase 2 is still the same fast mode as before.
Figure 6-13 IKE SA aggressive mode negotiation process

FW_A and FW_B negotiate parameters, generate keys, and authenticate identities in three messages:
1. Send the local IKE security proposal, key information, and identity information.
2. Search for the matching IKE security proposal, then send key and identity information as well as local authentication information.
3. Return the local authentication information and perform authentication.
The initiator and responder peremptorily place all of their information, including the IKE proposal, the relevant key information, and the identity information, into a single ISAKMP message and send it to the other end, thereby improving IKE negotiation efficiency. However, because the identity information is transferred as plaintext, it does not go through encryption or integrity authentication, which lowers IKE SA security. Clearly, security here falls short. So why do we even have aggressive mode?
Back in the early days of IPSec VPNs, when people used main mode + pre-shared key identity authentication, IPSec needed to look up the local pre-shared key by the peer's IP address. This kind of key lookup only works when the peer has a fixed IP address. If there is no fixed address, aggressive mode can be used to "brutally" solve the problem.
In aggressive mode, the "identity information" is not encrypted, so the local end can directly use the identity information sent from the peer to look up the pre-shared key. As such, in the initial stages of IPSec VPN applications, aggressive mode was mainly used to deploy IPSec VPNs to nodes without fixed IP addresses. Nowadays, IPSec VPN has many other ways to solve this problem, and the unsafe aggressive mode is clearly not the best option. All we really need to do now is understand how it used to be used; Dr. WoW does not recommend actually using it.
6.5 IKEv2
6.5.1 IKEv2 Overview
IKEv1 seemed perfect enough, but before long its shortcomings started to become clear:

• Long IPSec SA negotiation times
For a single IPSec SA negotiation, IKEv1 main mode typically needs to send 6 (IKE SA negotiation) + 3 (IPSec SA negotiation) = 9 messages.
For a single IPSec SA negotiation, IKEv1 aggressive mode typically needs to send 3 (IKE SA negotiation) + 3 (IPSec SA negotiation) = 6 messages.

• No support for remote user access
IKEv1 cannot perform authentication for remote users. If we want to support remote user access, we can only use L2TP and authenticate the remote users with AAA via PPP.

How to deal with this? Never fear! There's never a problem too difficult to solve! IKEv2 is the perfect answer to all of these problems.
IKEv2 vs. IKEv1:

• A major speed boost to IPSec SA negotiation
On average, IKEv2 negotiation of a single IPSec SA requires only 2 (IKE SA negotiation) + 2 (IPSec SA negotiation) = 4 messages. Each subsequent IPSec SA requires only two more messages.

• EAP (Extensible Authentication Protocol) identity authentication added
Here we'll only discuss the basic IKEv2 negotiation process; in our discussion of enterprise networks we won't touch on EAP authentication just yet.
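The message-count arithmetic above can be tallied in a few lines (a plain illustration, nothing protocol-specific):

```python
# Tallying the messages needed for the first IPSec SA per style.
modes = {
    "IKEv1 main + fast": 6 + 3,        # 6 IKE SA + 3 IPSec SA messages
    "IKEv1 aggressive + fast": 3 + 3,  # 3 IKE SA + 3 IPSec SA messages
    "IKEv2 initial exchange": 2 + 2,   # 2 IKE_SA_INIT + 2 IKE_AUTH messages
}
for name, total in modes.items():
    print(f"{name}: {total} messages")
# Each subsequent IPSec SA under IKEv2 costs only 2 more messages
# (one child SA exchange).
```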
Figure 6-14 shows the networking environment we'll use to explain how IKEv2 actually works.
Figure 6-14 IKEv2/IPSec VPN networking

Host side: PC_A (192.168.0.2/24) connects to FW_A (GE0/0/1 192.168.0.1/24; GE0/0/2 1.1.1.1/24).
Sub-host 1 side: PC_B (172.16.2.2/24) connects to FW_B (GE0/0/1 172.16.2.1/24; GE0/0/2 2.2.2.2/24).
The IKEv2 configuration roadmap is almost exactly the same as IKEv1's, with a few detours. As shown in Figure 6-15, the commands in italics differ from IKEv1. By default, the firewall runs both the IKEv1 and IKEv2 protocols: when negotiation is initiated locally, IKEv2 is used; when responding to a negotiation, both IKEv1 and IKEv2 are supported. So we can simply leave IKEv1 enabled.
Figure 6-15 Schematic of IKEv2/IPSec configuration

1. Define which data flows must be protected:
   security acl A (rule permit ip)
2. Configure the IKE security proposal:
   ike proposal B (encryption-algorithm, authentication-method, authentication-algorithm, integrity-algorithm, dh, sa duration)
3. Configure the IKE peer:
   ike peer D (ike proposal B, undo version 1, identity authentication parameters configured based on the selected authentication-method)
4. Configure the IPSec security proposal:
   ipsec proposal C (transform, encapsulation-mode, esp encryption-algorithm, esp authentication-algorithm, ah authentication-algorithm)
5. Configure the IPSec security policy in IKE mode:
   ipsec policy F (security acl A, proposal C, ike-peer D; optional: pfs, speed-limit)
6. Apply the IPSec security policy group:
   interface (ipsec policy F)

In the original figure, dotted lines mark optional items and solid lines mandatory ones.
6.5.2 IKEv2 Negotiation Process
The IKEv2 IPSec SA negotiation process is very different from IKEv1's; we can create an IPSec SA in as few as 4 messages! Talk about efficiency!
1. The 4 messages of the initial exchange take care of the IKE SA and IPSec SA simultaneously.
The IKEv2 initial exchange uses 4 messages to establish the IKE SA and IPSec SA. The packet capture is as follows:
As shown in Figure 6-16, the initial exchange includes the IKE SA initial exchange
(IKE_SA_INIT exchange) and the IKE authentication exchange (IKE_AUTH exchange).
Figure 6-16 Initial exchange

FW_A and FW_B first exchange IKE_SA_INIT messages to set up the IKE SA and generate keys, then exchange IKE_AUTH messages to authenticate identities and create the first pair of IPSec SAs.
• First exchange (IKE_SA_INIT)
This exchange negotiates the IKE SA parameters, including the IKE proposal, temporary random numbers (nonces), and DH values.
The SA payload is mainly used to negotiate the IKE proposal.
The KE (Key Exchange) and nonce payloads are mainly used to exchange key materials.
Once the IKE_SA_INIT exchange completes, IKEv2 ultimately generates 3 types of keys:
− SK_e: used to encrypt the second exchange's messages.
− SK_a: used for integrity authentication of the second exchange's messages.
− SK_d: used to derive encryption material for the child SA (IPSec SA).
• Second exchange (IKE_AUTH)
This exchange is responsible for identity authentication and for creating the first child SA (IPSec SA). Currently, two identity authentication techniques are commonly used:
− Pre-shared key method (pre-share): the device identity information is an IP address or name.
− Digital certificate method: the device identity information is the certificate and a partial-message HASH value (signature) generated through private-key encryption.
The identity information above is encrypted with SK_e.
When a child SA is created, it naturally also needs to negotiate the IPSec proposal and the protected data flow. IKEv2 uses TS payloads (TSi and TSr) to negotiate the ACL rules between the two devices; the final result is the intersection of the two ends' ACL rules (this differs from IKEv1, which has no TS payloads with which to negotiate ACL rules).
When a single IKE SA needs to create multiple IPSec SAs, such as when two IPSec peers exchange multiple data flows, the child SA exchange is used to negotiate the subsequent IPSec SAs.
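The TS-payload narrowing above can be sketched with Python's ipaddress module, simplified to single IPv4 prefixes (real TS payloads carry lists of ranges plus ports and protocols): each side proposes a range, and the negotiated protected flow is the intersection.

```python
# Sketch of IKEv2 traffic-selector narrowing (single IPv4 prefixes only).
import ipaddress

def ts_intersection(net_a: str, net_b: str):
    a = ipaddress.ip_network(net_a)
    b = ipaddress.ip_network(net_b)
    if a.overlaps(b):                 # prefixes overlap only when nested,
        return a if a.subnet_of(b) else b   # so keep the narrower one
    return None                       # no overlap: no protected flow

print(ts_intersection("172.16.0.0/16", "172.16.2.0/24"))  # 172.16.2.0/24
```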
2. The child SA exchange establishes one IPSec SA with two messages.
A child SA exchange can only be performed after the IKE initial exchange is complete. The exchange initiator can be either the initiator or the responder of the IKE initial exchange. The two messages are protected by the keys negotiated in the IKE initial exchange.
IKEv2 also supports the PFS function: a new DH exchange can be performed, and a new IPSec SA key generated, during the child SA exchange phase.
6.6 Summary of IKE/IPSec
6.6.1 IKEv1 vs. IKEv2
Having written so much about IKEv1 and IKEv2 already, it's a good time to summarize the main differences between the two, as shown in Table 6-6.
Table 6-6 Comparison of IKEv1 and IKEv2

IPSec SA establishment process
  IKEv1: Divided into two phases. Phase 1 has two modes, main mode and aggressive mode; phase 2 is fast mode. Main mode + fast mode requires 9 messages to establish an IPSec SA; aggressive mode + fast mode requires 6 messages.
  IKEv2: Not divided into phases. An IPSec SA can be established with a minimum of 4 messages.

IKE SA integrity authentication
  IKEv1: Not supported.
  IKEv2: Supported.

ISAKMP payload
  The supported payloads differ; for instance, IKEv2 supports TS payloads for ACL negotiation, but IKEv1 does not. There are other differences between the payloads IKEv1 and IKEv2 support, but for now we'll only mention TS payloads.

Authentication method
  IKEv1: pre-shared key, digital certificate, digital envelope (rarely used).
  IKEv2: pre-shared key, digital certificate, digital envelope (rarely used), EAP.

Remote access
  IKEv1: via L2TP over IPSec.
  IKEv2: via EAP authentication support.
Clearly IKEv2, with its faster and more secure service, takes the win. As on the Yangtze River, the coming waves ride on the ones before them. No surprises here.
6.6.2 IPSec Protocol Profiles
Security protocols (AH and ESP), encryption algorithms (DES, 3DES, AES), authentication algorithms (MD5, SHA1, SHA2), IKE, DH… did you catch all that? In case this is getting a little confusing, Dr. WoW has come up with a summary for us all. Let's take a look:
• Security protocols (AH and ESP) - the IP packet's security encapsulation.
Once an IP packet puts on its AH and/or ESP vest, it becomes an IPSec packet. This "vest" is not just your everyday vest; it is a "bulletproof vest" stitched together with "encryption" and "authentication" algorithms (for tips on wearing these vests, see section 6.1.2 "Security Encapsulation"). The differences between the two protocols are shown in Table 6-7.
Table 6-7 AH versus ESP

Security Feature                | AH                                              | ESP
IP protocol number              | 51                                              | 50
Data integrity check            | Supported (authenticates the entire IP packet)  | Supported (does not authenticate the IP header)
Data origin authentication      | Supported                                       | Supported
Data encryption                 | Not supported                                   | Supported
Packet replay attack protection | Supported                                       | Supported
IPSec NAT-T (NAT traversal)     | Not supported                                   | Supported
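The NAT-T row follows directly from the integrity-check row, and can be demonstrated with a simplified byte-string model: AH's integrity check covers the IP header, so NAT's address rewrite breaks it, while ESP's check covers only the ESP payload and survives.

```python
# Byte-string sketch of why AH breaks under NAT while ESP survives.
import hmac, hashlib

def icv(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

key = b"sa-authentication-key"
ip_header, payload = b"src=1.1.1.1", b"encapsulated-data"

ah_icv = icv(key, ip_header + payload)   # AH authenticates the whole packet
esp_icv = icv(key, payload)              # ESP leaves the IP header out

nat_header = b"src=9.9.9.9"              # NAT rewrites the source address
assert icv(key, nat_header + payload) != ah_icv   # AH check now fails
assert icv(key, payload) == esp_icv               # ESP check still passes
```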
• Encryption algorithms (DES, 3DES, AES) - the IPSec packet's Ace of Spades.
IPSec data packets are encrypted with symmetric encryption algorithms, but only the ESP protocol supports encryption; the AH protocol does not. IKE negotiation packets are also encrypted.

• Authentication algorithms (MD5, SHA1, SHA2) - the IPSec packet's method of positive identification.
A digital signature is generated over the packet with the authentication algorithm; the signature fills the integrity check value (ICV) field of the AH or ESP header and is sent to the peer. The receiving device authenticates the integrity and origin of the data by comparing digital signatures.

• IKE - the powerful, attentive key steward.
IPSec uses the IKE protocol to negotiate security keys between the sending and receiving devices and to update those keys.

• DH algorithm - the attentive steward's abacus.
DH is known as the public key exchange method. It is used to generate key materials, which are exchanged via ISAKMP messages; ultimately, both peers compute the same encryption and authentication keys from these materials.
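The ICV flow just described can be sketched using HMAC-MD5-96, the algorithm that appeared in the display ipsec sa output earlier: the sender computes an HMAC-MD5 over the packet, truncates it to 96 bits, and fills it into the ICV field; the receiver recomputes and compares. This is an illustrative model, not a full AH/ESP implementation.

```python
# Sketch of ICV generation and verification with HMAC-MD5-96.
import hmac, hashlib

def hmac_md5_96(key: bytes, packet: bytes) -> bytes:
    return hmac.new(key, packet, hashlib.md5).digest()[:12]  # 96 bits

key, packet = b"negotiated-auth-key", b"esp-encapsulated-data"
sent_icv = hmac_md5_96(key, packet)                  # filled into the header

assert hmac.compare_digest(hmac_md5_96(key, packet), sent_icv)  # intact
assert hmac_md5_96(key, packet + b"!") != sent_icv   # tampering detected
```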
Having laid out all these concepts before us, Dr. WoW cannot help but marvel at the genius of
the IPSec protocol designers. With all these new and old protocols and algorithms stitched
together so seamlessly, all the malice of the Internet can be shielded outside of our tunnels! Dr.
WoW has come up with a diagram to help us remember these acronyms, as shown in Figure
6-17.
Figure 6-17 IPSec security protocol profile

Security protocols: ESP (encryption: DES, 3DES, AES; authentication: MD5, SHA) and AH (authentication: MD5, SHA). Key exchange: IKE (ISAKMP, DH).
With this out of the way, Dr. WoW is relieved and can smile to himself with pride at the success of the Tiandihui network. But we're not out of the woods just yet: once IKE is used for IPSec VPNs, the communication problems faced by large and mid-sized sub-hosts with fixed public IP addresses are quickly resolved, but the sub-hosts that cannot get fixed public IP addresses are not so fortunate, crying out that such favoritism flies in the face of true fraternity! Sub-hosts without fixed public IP addresses cannot establish stable IPSec VPNs with hosts. It is for this reason that Tiandihui posed the question: do both ends have to use fixed public IP addresses to establish an IPSec VPN?
Of course not! Next, Dr. WoW will teach us all about a new way to establish IPSec VPNs: the IPSec template method.
6.7 Template IPSec
Like the IKE IPSec policy, the template IPSec policy also relies on IKE to negotiate the IPSec tunnel. The greatest improvement the template IPSec policy brings is that it does not require fixed peer IP addresses: a peer IP address can be strictly designated (a single IP), broadly designated (an IP address segment), or simply left undesignated (i.e. the peer IP can be any IP).
The template IPSec policy is like a valiant general, leaving no peer IP behind: fixed, dynamic, private… all are welcome to join the ranks. It is precisely because of this demeanor that the template IPSec policy suits Tiandihui hosts, as it allows them to respond to a multitude of sub-host negotiation requests. The benefits become ever more obvious as the ranks of sub-hosts swell:
• If the IKE IPSec policy is used, a host must configure n IPSec policies and n IKE peers, where n equals the number of sub-hosts.

• If the template IPSec policy is used, a host only needs to configure one IPSec policy and one IKE peer, irrespective of n.
In conclusion, the template IPSec policy's two main advantages secure its ticket to the IPSec "Hall of Fame":

• Peers without fixed public IPs are fine, since the policy only ever responds to negotiation requests.

• Simple configuration; only one IKE peer is needed.

However, even stars have their own unspoken difficulties: a template IPSec policy can only respond to peer-initiated negotiation requests and cannot actively initiate negotiations.
6.7.1 Point-to-Multi-Point Networking Applications
As shown in Figure 6-18, sub-host 1's and sub-host 2's interfaces obtain their public IP addresses dynamically. IPSec tunnels must be established between sub-host 1 and the host and between sub-host 2 and the host. Moreover, sub-host 1 and sub-host 2 must also be able to communicate through the IPSec VPN.
Figure 6-18 IPSec VPN point-to-multi-point networking

Host: PC_A (192.168.0.2/24) behind FW_A (GE0/0/1 192.168.0.1/24; GE0/0/2 1.1.1.1/24).
Sub-host 1: PC_B (172.16.1.2) behind FW_B (GE0/0/1 172.16.1.1/24; GE0/0/2 dynamically assigned).
Sub-host 2: PC_C (172.16.2.2/24) behind FW_C (GE0/0/1 172.16.2.1/24; GE0/0/2 dynamically assigned).
One IPSec tunnel runs between FW_B and FW_A, and another between FW_C and FW_A.
The logic behind IPSec policy template configuration is shown in Figure 6-19.
Figure 6-19 Schematic of IPSec policy template configuration

1. Define which data flows must be protected:
   security acl A (rule permit ip)
2. Configure the IKE security proposal:
   ike proposal B (encryption-algorithm, authentication-method, authentication-algorithm, integrity-algorithm, dh)
3. Configure the IKE peer:
   ike peer D (ike proposal B, undo version, exchange-mode, identity authentication parameters configured based on the selected authentication-method)
4. Configure the IPSec security proposal:
   ipsec proposal C (transform, encapsulation-mode, esp encryption-algorithm, esp authentication-algorithm, ah authentication-algorithm)
5. Configure the IPSec security policy in template mode:
   ipsec policy-template E (security acl A, proposal C, ike-peer D; optional: pfs, speed-limit), then ipsec policy F isakmp template E
6. Apply the IPSec security policy group:
   interface (ipsec policy F)

In the original figure, dotted lines mark optional items and solid lines mandatory ones.
The template IPSec policy configuration is shown in Table 6-8. We won't go over the IKE and IPSec proposal default configurations again. Sub-host 2's configuration is similar to sub-host 1's, so simply refer to the sub-host 1 configuration.
Table 6-8 IPSec VPN template configuration

Host FW_A (with template IPSec policy):
  IKE proposal configuration:
    ike proposal 10
  IKE peer configuration:
    ike peer a
    ike-proposal 10
    pre-shared-key tiandihui1
    //remote-address configuration can be omitted; remote-address can also be used to designate an IP address segment
  ACL:
    acl number 3000
    rule 5 permit ip destination 172.16.1.0 0.0.0.255
    rule 10 permit ip destination 172.16.2.0 0.0.0.255
  IPSec proposal:
    ipsec proposal a
  IPSec policy:
    ipsec policy-template tem1 1   //configure the IPSec policy template
    security acl 3000
    proposal a
    ike-peer a
    ipsec policy policy1 1 isakmp template tem1   //configure the template IPSec policy
  IPSec policy application:
    interface GigabitEthernet0/0/2
    ip address 1.1.1.1 255.255.255.0
    ipsec policy policy1
  Route: route configuration is required for mutual access between the private networks.

Sub-host 1 FW_B (with IKE IPSec policy):
  IKE proposal configuration:
    ike proposal 10
  IKE peer configuration:
    ike peer a
    ike-proposal 10
    remote-address 1.1.1.1
    pre-shared-key tiandihui1
  ACL:
    acl number 3000
    rule 5 permit ip source 172.16.1.0 0.0.0.255
  IPSec proposal:
    ipsec proposal a
  IPSec policy:
    ipsec policy policy1 1 isakmp
    security acl 3000
    proposal a
    ike-peer a
  IPSec policy application:
    interface GigabitEthernet0/0/2
    ip address 2.2.2.2 255.255.255.0
    ipsec policy policy1 auto-neg   //once auto-neg is configured, no traffic trigger is needed; the IPSec tunnel is established directly
  Route: route configuration is required for mutual access between the private networks.
The configuration above has two differences from the standard IKE method; note the
following:

Host FW uses template IPSec policy.
The template IPSec policy does not require IKE peers to configure remote-addresses; or
remote-addresses can be used to designate IP address segments.
In section 6.4.2 "Phase 1: Establishing IKE SA (Main Mode)", we discussed the three
uses of remote-address, command, and designated IP addresses. Now, let's consider for
a moment - if the template IPSec policy does not configure the remote-address
command, will it cause confusion? Even if the host gives up its rights, there still won't be
any problems:
−
If the host does not configure the remote-address command, no tunnel peer IP address is designated. In this case, the host can only accept access actively initiated by sub-hosts; it can neither authenticate the sub-hosts' addresses nor actively access the sub-hosts.
−
If the host uses remote-address to designate a tunnel peer IP address segment, the host can check whether the sub-host's device ID (IP address) falls within that segment; a request is accepted only when the ID is in fact within the segment. Even so, the host still cannot actively access the sub-host.
From the two points above, it can be seen that the template IPSec policy seems to let hosts get past the issue of peers with "no fixed IP" or "no public IP". In fact, though, this is only achieved because the host actively gives up two rights.

ACL configuration has special requirements: the more sub-hosts there are, the more complicated the host's ACL configuration becomes.
−
Host FW_A's ACL requires two rules:
To allow sub-host 2 and the host to access sub-host 1's FW, the source would have to cover both the sub-host 2 and host network segments, with the sub-host 1 network segment as the destination. The rule therefore leaves the source undesignated, which means the source can be sub-host 2, the host, or any other IP address segment.
To allow sub-host 1 and the host to access sub-host 2's FW, the source would have to cover both the sub-host 1 and host network segments, with the sub-host 2 network segment as the destination. Again, the rule leaves the source undesignated, which means the source can be sub-host 1, the host, or any other IP address segment.
−
Sub-host FW's ACL requirement:
To allow sub-host 1's FW to access the sub-host 2 and host FWs, the source must be the sub-host 1 network segment, and the destination would have to cover both the sub-host 2 and host network segments. The rule therefore leaves the destination undesignated, which means the destination can be sub-host 2, the host, or any other IP address segment.
Verify the configuration once it is complete:
2. On the host FW_A, the phase 1 and phase 2 SAs are established normally between the host and the sub-host 1 and sub-host 2 FWs.
3. Sub-host 1, sub-host 2, and the host can communicate back and forth.
Question: if the auto-neg parameter is not configured in the IPSec policy applied on the FW_B interface, can sub-host 1 and sub-host 2 communicate directly?
4. Cancel the IPSec policies on the sub-host 1 and sub-host 2 FWs and re-apply them without the auto-neg parameter. If sub-host 1's PC_B pings sub-host 2's PC_C, the ping will not go through.
5. Check the status of sub-host 1 FW_B's SA.
<FW_B> display ike sa
current ike sa number: 2
-----------------------------------------------------------
conn-id    peer       flag     phase    vpn
-----------------------------------------------------------
40022      1.1.1.1    RD|ST    v2:2     public
7          1.1.1.1    RD|ST    v2:1     public
The SA established between sub-host 1 and the host is normal.
6. Check the status of sub-host 2 FW_C's SA.
<FW_C> display ike sa
current sa Num: 0
There is no SA established between sub-host 2 and the host FW. This is because the host FW is configured with the template IPSec policy and can only respond to negotiations. As such, sub-host 1, whose ping triggered negotiation, established a normal SA with the host FW, but the host cannot initiate and establish an SA with the sub-host 2 FW. Once the auto-neg parameter is configured in the IPSec policies applied on the sub-host 1 and sub-host 2 FWs, the IPSec SAs are created automatically. With SAs in place both from sub-host 1 to the host FW and from the host to the sub-host 2 FW, sub-host 1 can communicate with sub-host 2. Similarly, the host can ping the sub-hosts.
At this point, we have introduced three different IPSec policies: the manual IPSec policy, the IKE IPSec policy, and the template IPSec policy. All three can be configured in a single IPSec policy group; a so-called IPSec policy group is simply a group of IPSec policies with the same name. A policy group can contain at most one template IPSec policy, and its serial number must be the largest, giving it the lowest priority. Otherwise, access requests would always be matched first by the template IPSec policy, and lower-priority IKE IPSec policies would never get a chance to show off their talents.
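To see why the template entry must sit at the lowest priority, here is a minimal first-match sketch (illustrative Python, not firewall code; the policy structure and field names are assumptions for the example):

```python
# Hypothetical sketch of how an IPSec policy group might select a policy
# for an incoming negotiation: entries are tried in ascending serial-number
# order, and a template entry (no fixed remote address) matches any peer.

def select_policy(policy_group, peer_ip):
    """Return the first policy in the group that matches the peer."""
    for policy in sorted(policy_group, key=lambda p: p["seq"]):
        if policy["type"] == "template":          # matches any initiator
            return policy
        if policy["remote_address"] == peer_ip:   # IKE policy: exact match
            return policy
    return None

group = [
    {"seq": 1,  "type": "isakmp",   "remote_address": "2.2.2.2"},
    {"seq": 12, "type": "template", "remote_address": None},
]

# The fixed-address IKE policy is tried first; the template catches the rest.
print(select_policy(group, "2.2.2.2")["seq"])   # known peer matches policy 1
print(select_policy(group, "3.3.3.3")["seq"])   # unknown peer falls to template 12
```

If the template had the smallest serial number, it would match every peer first and the IKE policies below it would never be consulted.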
Both the template and IKE IPSec policies use IKE to negotiate the IPSec tunnel. The negotiation process is the same, and there's no need for Dr. WoW to go over it again. Next, we'll focus our discussion on the "characteristics" of the template IPSec policy.
6.7.2 Customized Pre-Shared Keys (USG9500-Series Firewall Only)
Only one IKE peer can be referenced in an IPSec policy template, and a single IKE peer can have only one pre-shared key configured. As such, all interconnected peers must be configured with the same pre-shared key; if even a single firewall leaks that key, the security of all the other firewalls is compromised.
So, if a host wants to establish point-to-multipoint networking with multiple sub-hosts, can the sub-host firewalls be configured with different pre-shared keys? Since the pre-shared key is involved in both key generation and identity authentication, it only needs to be linked to the device identity, and identity authentication can be performed with the pre-shared key method based on the peer's IP address or device name. When an IP address or a device name is used to designate a pre-shared key, a "customized pre-shared key" can be configured for each sub-host firewall.

Customized pre-shared key designation through peer IP address for each sub-host firewall
This method is suitable for sub-host firewalls with fixed IP addresses. The remote-address and pre-shared-key configured under the host firewall's IKE peer are deleted and instead configured globally, so that each sub-host firewall has its own remote-address and pre-shared-key. In this way, the template method stays ahead of the game and cleverly circumvents its limitations.
Table 6-9 Customized pre-shared key designation through peer IP address
IKE Peer Configuration
  Host FW_A:
    ike peer a
    local-id-type ip
    ike-proposal 10
  Sub-Host 1 FW_B:
    ike peer a
    local-id-type ip
    ike-proposal 10
    remote-address 1.1.1.1
    pre-shared-key tiandihui1
Customized Pre-Shared Key Configuration
  Host FW_A:
    ike remote-address 2.2.2.2 pre-shared-key tiandihui1
    ike remote-address 2.2.3.2 pre-shared-key tiandihui2
  Sub-Host 1 FW_B: -
Customized pre-shared key designation through peer device name
When a sub-host firewall does not have a fixed IP address, the device name can be used to verify its identity (ike local-name). In this case, the host firewall globally configures a unique remote-id and pre-shared-key for each sub-host firewall.
Table 6-10 Customized pre-shared key name designation through peer device
IKE Peer Configuration
  Host FW_A:
    ike peer a
    local-id-type ip
    ike-proposal 10
  Sub-Host 1 FW_B:
    ike peer a
    local-id-type fqdn //when identity authentication is performed via the device name, the local ID type must be configured as FQDN or USER-FQDN
    ike-proposal 10
    remote-address 1.1.1.1
    pre-shared-key tiandihui1
Local Name Configuration
  Host FW_A: -
  Sub-Host 1 FW_B:
    ike local-name tdhfd1
Customized Pre-Shared Key Configuration
  Host FW_A:
    ike remote-id tdhfd1 pre-shared-key tiandihui1
    ike remote-id tdhfd2 pre-shared-key tiandihui2
  Sub-Host 1 FW_B: -
NOTE
The "ike remote-id" configured on the host FW_A must be the same as the "ike local-name" configured on the sub-host 1 FW_B.
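The effect of customized pre-shared keys can be pictured as a simple lookup keyed by peer identity. The following is an illustrative Python sketch, not device code; the table contents mirror the example values in the tables above:

```python
# Illustrative sketch of customized pre-shared keys: the responder picks
# a key per peer identity instead of one key shared by every branch.
# An identity is either a peer IP address ("ike remote-address ...
# pre-shared-key ...") or a peer device name ("ike remote-id ...
# pre-shared-key ...").

PSK_BY_IP = {
    "2.2.2.2": "tiandihui1",   # sub-host 1
    "2.2.3.2": "tiandihui2",   # sub-host 2
}

PSK_BY_NAME = {
    "tdhfd1": "tiandihui1",    # sub-host 1's ike local-name
    "tdhfd2": "tiandihui2",    # sub-host 2's ike local-name
}

def lookup_psk(id_type, identity):
    """Return the pre-shared key for a peer identity, or None if unknown."""
    table = PSK_BY_IP if id_type == "ip" else PSK_BY_NAME
    return table.get(identity)

print(lookup_psk("ip", "2.2.3.2"))     # sub-host 2's key: tiandihui2
print(lookup_psk("fqdn", "tdhfd1"))    # sub-host 1's key: tiandihui1
```

A leaked key now compromises only one branch, because each sub-host's entry is independent.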
6.7.3 Designated Peer Domain Name Usage
When a sub-host firewall accesses the network with a dynamic IP address, are the IKE IPSec policy's hands tied?
In fact, there's a method that can help the IKE IPSec policy: when the peer IP address is not fixed, a remote-address naturally cannot be configured, but the host can still learn the IP address through indirect means, such as through a domain name. In other words, the host can use a designated remote-domain in place of the remote-address; the sub-host firewall is then configured with DNS to obtain the mapping between the domain name and IP address, and uses DDNS to ensure that the mapping is constantly updated. Of course, the designated peer domain name can also be used with the template IPSec policy.
Table 6-11 Peer domain name designation in IKE peer
IPSec Configuration
  Host FW_A (only the IKE peer configuration changes):
    ike peer a
    ike-proposal 10
    pre-shared-key tiandihui1
    remote-domain www.adcd.3322.org
  Sub-Host 1 FW_B: no changes
Additional Configuration
  Host FW_A: -
  Sub-Host 1 FW_B:
    1. Enable domain name resolution:
       dns resolve
       dns server 200.1.1.1
    2. Configure the DDNS policy //the following configuration involves the DDNS service provider and must be performed according to the provider's instructions:
       ddns policy abc
       ddns client www.adcd.3322.org
       ddns server www.3322.org
       ddns username abc123 password abc123
    3. Apply the DDNS policy //the dialer interface is the logical interface corresponding to the ADSL interface; this solution is mostly used for sub-host ADSL dial-up access, in which case the sub-host's DDNS policy is applied to the dialer interface:
       ddns client enable
       interface dialer 1
       ddns apply policy abc
The limitation of this solution is that the dynamically accessing party must have a fixed domain name and must also add DNS and DDNS configuration, which is a little complicated. As such, it's not as convenient as the template IPSec policy; only use this method when you have no other choice.
6.7.4 Summary
So just how great is the template IPSec policy? In practical situations, it's not fighting the
battle alone, and the war can only be won if the template and IKE IPSec policies work
together. Their compatibility is as shown in Table 6-12.
Table 6-12 Template IPSec policy and IKE IPSec policy compatibility
Scenario: host IP address fixed + sub-host IP address fixed
  Host FW_A: IKE IPSec policy or template IPSec policy
  Sub-Host FW_B: IKE IPSec policy
Scenario: host IP address fixed + sub-host IP address dynamically assigned
  Host FW_A: template IPSec policy or IKE IPSec policy (designated peer domain name)
  Sub-Host FW_B: IKE IPSec policy
Scenario: host IP address dynamically assigned + sub-host IP address dynamically assigned
  Host FW_A: template IPSec policy or IKE IPSec policy (designated peer domain name)
  Sub-Host FW_B: IKE IPSec policy (designated peer domain name)
The template IPSec policy truly is quite powerful: only with this shield in hand can the host FW connect, worry-free, to sub-host FWs regardless of whether their public IP addresses are fixed or dynamic. After taking a closer look, however, we can see that the template IPSec policy treats public and private IP addresses differently. When the peer has a private IP address, the template IPSec policy still has to take some extra steps to get everything in order!
6.8 NAT Traversal
Previously, we learned that the template IPSec policy can be used to establish an IPSec tunnel when the host connects to sub-hosts without fixed egress IP addresses. At this point, regardless of whether a sub-host uses a fixed or a dynamic public IP address, it can safely access the host through the IPSec tunnel. All was well for the Tiandihui.
And yet the lands of the Internet never remain calm for long, and Tiandihui faced yet another challenge. Some sub-hosts did not even have dynamic public IP addresses! They could only access the Internet through a NAT device's address translation. Can these sub-hosts still access the host as usual? Also, apart from accessing the host, sub-hosts still need to access the Internet, so some sub-host firewalls have both IPSec and NAT configured; can the two peacefully coexist? To answer these two questions, let's listen to Dr. WoW.
6.8.1 Overview of NAT Traversal Scenarios
First, let's take a look at networks with NAT devices. As shown in Figure 6-20, if the sub-host firewall's interface IP is a private network address, it must be translated by a NAT device into a public IP address before it can be used to establish an IPSec tunnel with the host firewall.
Figure 6-20 NAT traversal scenario (host FW_A, NAT device, sub-host FW_B)
As we all know, IPSec is used to ensure that packets cannot be modified, while NAT devices are specially made to modify packet IP addresses; it feels like we're mixing fire and ice. Taking a closer look, though: the IPSec negotiation process is completed via ISAKMP messages, which are UDP-encapsulated with source and destination port numbers both set to 500. NAT devices can translate the IP address and port of these messages, so ISAKMP messages pass through NAT successfully and IPSec SA negotiation can complete. Data traffic, however, is carried by the AH or ESP protocol, and this is where NAT translation hits a snag. Below, we'll take a look at whether AH and ESP packets can pass through NAT devices.

AH protocol
Because AH performs an integrity check on the data, it computes a hash over the entire IP packet, including the IP header. NAT, however, changes the IP addresses, thereby breaking the AH hash value. As such, AH packets cannot pass through a NAT gateway.
ESP protocol
ESP also performs an integrity check on the data, but the check does not cover the outer IP header, so IP address translation does not break the ESP hash value. However, the TCP/UDP port numbers inside an ESP packet are encrypted and cannot be modified; because NAT often needs to translate ports as well, ESP cannot be supported either.
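The asymmetry between AH and ESP under NAT can be sketched with a toy integrity check (illustrative Python only, using SHA-256 as a stand-in for the real ICV algorithms):

```python
# Rough sketch of why AH fails under NAT while ESP's integrity check
# survives an address rewrite: AH's check covers the outer IP header,
# so rewriting the source address invalidates it; ESP's check covers
# only the protected payload.

import hashlib

def icv(data: bytes) -> str:
    """Toy integrity check value over the given bytes."""
    return hashlib.sha256(data).hexdigest()

header_before = b"src=172.16.0.1 dst=1.1.1.1"
header_after  = b"src=2.2.2.10 dst=1.1.1.1"   # NAT rewrote the source address
payload = b"inner packet"

# AH-style check: the sender's ICV covers header + payload.
ah_icv = icv(header_before + payload)
print(ah_icv == icv(header_after + payload))   # False: NAT broke the AH check

# ESP-style check: the ICV covers only the protected payload.
esp_icv = icv(payload)
print(esp_icv == icv(payload))                 # True: the rewrite is invisible
```

The real sticking point for ESP is therefore not the address rewrite but the port rewrite, since the transport ports are inside the encrypted payload.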
To solve these problems, the NAT traversal function (nat traversal) must be enabled on both firewalls when the IPSec tunnel is established. After the NAT traversal function is activated and a NAT device must be traversed, the ESP packet is encapsulated with a UDP header whose source and destination port numbers are both 4500. With this UDP header in place, the NAT device can translate the IPSec packet's outer IP address and port like those of any ordinary UDP packet, without touching the protected ESP payload.
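A rough sketch of the encapsulation follows (illustrative Python; real NAT-T follows RFC 3948, and the UDP checksum handling here is simplified to zero):

```python
# Simplified sketch of NAT-T encapsulation: ESP is wrapped in a UDP
# header with source and destination port 4500, and ISAKMP messages on
# port 4500 carry a four-zero-byte "Non-ESP Marker" so the receiver can
# tell them apart from UDP-encapsulated ESP, whose first four bytes are
# a non-zero SPI.

import struct

NAT_T_PORT = 4500
NON_ESP_MARKER = b"\x00\x00\x00\x00"

def udp_encap_esp(esp_packet: bytes) -> bytes:
    # UDP header fields: src port, dst port, length, checksum (0 = unused)
    udp = struct.pack("!HHHH", NAT_T_PORT, NAT_T_PORT, 8 + len(esp_packet), 0)
    return udp + esp_packet

def udp_encap_isakmp(isakmp_msg: bytes) -> bytes:
    body = NON_ESP_MARKER + isakmp_msg
    udp = struct.pack("!HHHH", NAT_T_PORT, NAT_T_PORT, 8 + len(body), 0)
    return udp + body

esp = udp_encap_esp(b"\x12\x34\x56\x78rest-of-esp")  # begins with a non-zero SPI
ike = udp_encap_isakmp(b"isakmp-payload")

# Demultiplexing rule at the receiver: zero marker = ISAKMP, otherwise ESP.
print(ike[8:12] == NON_ESP_MARKER)   # True
print(esp[8:12] == NON_ESP_MARKER)   # False
```

The NAT device rewrites only the outer IP and UDP headers; the encrypted ESP payload inside is never touched, which is exactly what makes traversal possible.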
Given the different NAT device settings and address transformation functions, we'll introduce
this topic from the following three different scenarios.
Scenario 1: Post-NAT Transformation Sub-Host Public IP Address Unknown
As shown in Figure 6-21, when there is a NAT device within the carrier's network, the private IP address of the sub-host FW's interface GE0/0/2 becomes a public IP address after translation by the NAT device. Because Tiandihui has no way of knowing the sub-host's post-NAT public IP address, the host FW cannot explicitly designate the peer sub-host's public IP address. As such, the host FW must use the template method to configure the IPSec policy, and the host and sub-host FWs must both activate the NAT traversal function.
In this scenario, the host still uses the template method, so it cannot actively access the sub-host; only the sub-host can actively initiate access to the host.
Figure 6-21 Post-NAT transformation sub-host public IP address unknown (host FW_A: GE0/0/2 1.1.1.1/24, GE0/0/1 192.168.0.1/24, PC_A 192.168.0.2/24; NAT device in the carrier network; sub-host FW_B: GE0/0/2 172.16.0.1/24, GE0/0/1 172.16.1.1/24, PC_B 172.16.1.2/24)
Host and sub-host FW key configuration is as shown in Table 6-13.
Table 6-13 NAT traversal configuration (1)
IPSec Proposal
  Host FW_A:
    IPSec proposal pro1
    transform esp //use the ESP protocol to encapsulate packets
  Sub-Host FW_B:
    IPSec proposal pro1
    transform esp //use the ESP protocol to encapsulate packets
IKE Peer
  Host FW_A:
    ike peer sub-host
    pre-shared-key tiandihui1
    ike-proposal 10
    nat traversal //enable on both ends; enabled by default
  Sub-Host FW_B:
    ike peer host
    pre-shared-key tiandihui1
    ike-proposal 10
    remote-address 1.1.1.1
    nat traversal //enable on both ends; enabled by default
IPSec Policy
  Host FW_A:
    IPSec policy-template tem1 1 //template method configuration
    security acl 3000
    proposal pro1
    ike-peer sub-host
    IPSec policy policy1 1 isakmp template tem1
  Sub-Host FW_B:
    IPSec policy policy1 1 isakmp
    security acl 3000
    proposal pro1
    ike-peer host
Scenario 2: Post-NAT Transformation Sub-Host Public IP Address Known
As shown in Figure 6-22, when the NAT device is within the sub-host's own network, the private IP address of the sub-host FW's interface GE0/0/2 becomes a public IP address after translation by the NAT device. Because the NAT device is under the sub-host's control, the translated public IP address is known, so the host FW can use either the template or the IKE method for IPSec policy configuration.
It must be noted that even with the IKE method, the host still cannot actively establish an IPSec tunnel with the sub-host. This is not an IPSec issue but a NAT device issue: the NAT device only translates the source address for sub-host--->host access, and once the sub-host is "hidden" by the NAT device, host--->sub-host access is impossible. If the host needs to actively access the sub-host FW_B's private network address, the NAT Server function must be configured on the NAT device; we'll discuss this in Scenario 3.
Figure 6-22 Post-NAT transformation sub-host public IP address known (host FW_A: GE0/0/2 1.1.1.1/24, GE0/0/1 192.168.0.1/24, PC_A 192.168.0.2/24; sub-host NAT device: GE0/0/0 172.16.0.2/24, post-NAT address 2.2.2.10; sub-host FW_B: GE0/0/2 172.16.0.1/24, GE0/0/1 172.16.1.1/24, PC_B 172.16.1.2/24)
Using the IKE IPSec policy as an example, host and sub-host FW key configuration is as
shown in Table 6-14.
Table 6-14 NAT traversal configuration (2)
IPSec Proposal
  Host FW_A:
    IPSec proposal pro1
    transform esp //use the ESP protocol to encapsulate packets
  Sub-Host FW_B:
    IPSec proposal pro1
    transform esp //use the ESP protocol to encapsulate packets
IKE Peer
  Host FW_A:
    ike peer sub-host
    pre-shared-key tiandihui1
    ike-proposal 10
    remote-address 2.2.2.10 //the peer address is the post-NAT address. With the IKE method, because the peer address is a single address, only one address from the NAT device's address pool can be used; with the template method, this does not apply
    remote-address authentication-address 172.16.0.1 //the authentication address is the pre-NAT address; with the template method, this does not apply
    nat traversal //enable on both ends; enabled by default
  Sub-Host FW_B:
    ike peer host
    pre-shared-key tiandihui1
    ike-proposal 10
    remote-address 1.1.1.1
    nat traversal //enable on both ends; enabled by default
IPSec Policy
  Host FW_A:
    IPSec policy policy1 1 isakmp
    security acl 3000
    proposal pro1
    ike-peer sub-host
  Sub-Host FW_B:
    IPSec policy policy1 1 isakmp
    security acl 3000
    proposal pro1
    ike-peer host
Scenario 3: NAT Device with NAT Server Functions
As shown in Figure 6-23, when the NAT device is within the sub-host network, it will provide
the NAT Server function; the publicly issued address is 2.2.2.20, and the mapped private
network address is the sub-host FW interface GE0/0/2 address 172.16.0.1. When the host FW
uses the IKE method to configure the IPSec policy, host--->sub-host access is possible.
When NAT Server is configured on the NAT device, the 2.2.2.20 UDP 500 and 4500 ports
will be mapped respectively to the 172.16.0.1 UDP 500 and 4500 ports; the specific
configuration is as follows:
[NAT] nat server protocol udp global 2.2.2.20 500 inside 172.16.0.1 500
[NAT] nat server protocol udp global 2.2.2.20 4500 inside 172.16.0.1 4500
Meanwhile, because the NAT Server configuration on the NAT device will generate a reverse
Server-map table, the sub-host FW will also be able to actively initiate access to the host.
Once the packet arrives at the NAT device and matches the reverse Server-map table, the
source address will be transformed to 2.2.2.20, thereby making sub-host --->host access
possible.
Figure 6-23 NAT device with NAT Server functions (host FW_A: GE0/0/2 1.1.1.1/24, GE0/0/1 192.168.0.1/24, PC_A 192.168.0.2/24; sub-host NAT device: NAT Server global address 2.2.2.20, GE0/0/0 172.16.0.2/24; sub-host FW_B: GE0/0/2 172.16.0.1/24, GE0/0/1 172.16.1.1/24, PC_B 172.16.1.2/24)
Host and sub-host FW key configuration is as shown in Table 6-15.
Table 6-15 NAT traversal configuration (3)
IPSec Proposal
  Host FW_A:
    IPSec proposal pro1
    transform esp //use the ESP protocol to encapsulate packets
  Sub-Host FW_B:
    IPSec proposal pro1
    transform esp //use the ESP protocol to encapsulate packets
IKE Peer
  Host FW_A:
    ike peer sub-host
    pre-shared-key tiandihui1
    ike-proposal 10
    remote-address 2.2.2.20 //the peer address is the NAT Server's global address
    remote-address authentication-address 172.16.0.1 //the authentication address is the pre-NAT address
    nat traversal //enable on both ends; enabled by default
  Sub-Host FW_B:
    ike peer host
    pre-shared-key tiandihui1
    ike-proposal 10
    remote-address 1.1.1.1
    nat traversal //enable on both ends; enabled by default
IPSec Policy
  Host FW_A:
    IPSec policy policy1 1 isakmp
    security acl 3000
    proposal pro1
    ike-peer sub-host
  Sub-Host FW_B:
    IPSec policy policy1 1 isakmp
    security acl 3000
    proposal pro1
    ike-peer host
Three characteristics of NAT traversal configuration:

Both firewalls must activate the NAT traversal function (nat traversal), even if only one firewall's egress uses a private IP address.

Because the sub-host firewall's egress is "hidden" by the NAT device, the tunnel peer IP address visible to the host firewall is the post-NAT public IP address. So, when the host uses the IKE IPSec policy, the IP address designated by the remote-address command is the NAT-translated address, no longer the private network address from which the peer initiates IKE negotiation.

Because the public IP address designated by the remote-address command can no longer be used for identity authentication, an additional command, remote-address authentication-address, must be configured to designate the peer's identity authentication address (this must be the peer device's pre-NAT address, i.e., the address from which IKE negotiation is actually initiated); the local end uses this IP address to authenticate the peer device.

Of course, if the host uses a template-configured IPSec policy, it again automatically gives up the right to actively initiate access as well as the ability to authenticate the peer's address; as such, the remote-address and remote-address authentication-address commands don't need to be configured.
Next, we'll use the second scenario to introduce how IPSec can traverse NATs when IKEv1
and IKEv2 are used.
6.8.2 IKEv1 NAT Traversal Negotiation (Main Mode)
The IKEv1 main mode NAT traversal negotiation packet interaction process is as follows:
1. When NAT traversal begins, the IKEv1 phase 1 negotiation messages (1) and (2) carry the NAT traversal (NAT-T) Vendor ID payload, used to check whether both ends of the communication support NAT-T.
Only when the messages from both ends carry this payload does the subsequent NAT-T negotiation begin.
2. Main mode messages (3) and (4) carry the NAT-D (NAT Discovery) payload. The NAT-D payload is used to detect whether a NAT gateway exists between the two firewalls that wish to establish the IPSec tunnel, and where it is located.
Both ends of the negotiation send hash values of the source and destination IP addresses and ports to their peer in NAT-D payloads, so that changes to an address or port in transit can be detected. If the hash value the recipient computes over the received packet matches the hash value sent by the peer, there is no NAT device between them; otherwise, a NAT device translated the packet's IP address or port in transit.
The first NAT-D payload is the hash of the peer IP address and port; the second NAT-D payload is the hash of the local IP address and port.
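The NAT-D comparison can be sketched as follows (illustrative Python; real NAT-D hashes also cover the ISAKMP cookies, which this simplified version omits):

```python
# Sketch of the NAT-D check in IKEv1 main mode: each side hashes the
# addresses and ports as it knows them and sends the hashes to its peer.
# If a received hash does not match a hash computed over the packet as
# actually received, some NAT rewrote the address or port in transit.

import hashlib

def natd_hash(ip: str, port: int) -> bytes:
    """Toy NAT-D hash over an IP address and port."""
    return hashlib.sha1(f"{ip}:{port}".encode()).digest()

# The initiator (behind NAT) hashes its own address as it knows it...
sent_hash = natd_hash("172.16.0.1", 500)

# ...but the responder computes the hash over the source it actually sees,
# which is the post-NAT address and port.
seen_hash = natd_hash("2.2.2.10", 2050)

print(sent_hash != seen_hash)   # True: a NAT device is in the path
```

A mismatch on the source-side hash tells the responder that the peer is behind a NAT; a mismatch on the destination-side hash tells it that it is behind one itself.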
3. Once a NAT gateway is discovered, the port number of subsequent ISAKMP messages (from main mode message (5) on) changes to 4500, and the ISAKMP packet is identified by a "Non-ESP Marker".
4. In IKEv1 phase 2, the two ends negotiate whether to use NAT traversal and which UDP encapsulation mode to apply to IPSec packets: UDP-encapsulated tunnel mode (UDP-Encapsulated-Tunnel) or UDP-encapsulated transport mode (UDP-Encapsulated-Transport).
A UDP header with port number 4500 is then added to the ESP packet. When the encapsulated packet passes through the NAT device, the NAT device translates the address and port number in the packet's outer IP header and the added UDP header.
6.8.3 IKEv2 NAT Traversal Negotiation
The IKEv2 NAT traversal negotiation packet interaction process is as follows:
1. Once NAT traversal begins, the IKE initiator and responder both include the NAT_DETECTION_SOURCE_IP and NAT_DETECTION_DESTINATION_IP notify payloads in the IKE_SA_INIT message. These two notify payloads are used to detect whether a NAT device exists between the two firewalls that wish to establish an IPSec tunnel, and which firewall sits behind it. If the received NAT_DETECTION_SOURCE_IP notify payload does not match the hash of the source IP address and port in the packet's IP header, the peer is behind a NAT gateway. If the received NAT_DETECTION_DESTINATION_IP notify payload does not match the hash of the destination IP address and port in the packet's IP header, the local end is behind a NAT gateway.
2. Once a NAT gateway is detected, from the IKE_AUTH message on, the ISAKMP packet port number changes to 4500, and the packet is identified by a "Non-ESP Marker".
IKEv2 likewise uses UDP to encapsulate the ESP packet, with port number 4500. When the encapsulated packet passes through the NAT device, the NAT device translates the address and port number in the packet's outer IP header and the added UDP header.
In the second scenario, once configuration is complete, PC_A can ping PC_B. Check the IKE and IPSec SAs on the host FW_A:
<FW_A> display ike sa
current ike sa number: 2
-----------------------------------------------------------
conn-id    peer            flag    phase    vpn
-----------------------------------------------------------
40014      2.2.2.10:264    RD      v1:2     public
40011      2.2.2.10:264    RD      v1:1     public
Check the host FW_A's session table:
<FW_A> display firewall session table
Current Total Sessions: 2
udp VPN: public --> public 2.2.2.10:2050 --> 1.1.1.1:4500
udp VPN: public --> public 2.2.2.10:2054 --> 1.1.1.1:500
Check the sub-host FW_B's session table:
<FW_B> display firewall session table
Current Total Sessions: 2
udp VPN: public --> public 172.16.0.1:4500 --> 1.1.1.1:4500
udp VPN: public --> public 172.16.0.1:500 --> 1.1.1.1:500 //at the start of the negotiation, the port number is still 500
Because source NAT transform is configured on the NAT device, there can only be sub-host to
host sessions on the sub-host FW_B and no host to sub-host sessions.
6.8.4 IPSec and NAT for a Single Firewall
We have discussed IPSec NAT traversal, but what happens when IPSec and NAT are configured on the same firewall?
As shown in Figure 6-24, when IPSec and NAPT are configured on the sub-host FW_B at the same time, IPSec protects the traffic between the sub-host and the host, while NAPT handles the traffic of the sub-host's access to the Internet. When IPSec and the NAT Server are configured on the host FW_A, IPSec protects the traffic between the host and the sub-host, while the NAT Server handles Internet users' access to the host's server.
Figure 6-24 IPSec and NAT for a single gateway (an IPSec tunnel carries IPSec traffic between host FW_A (network 192.168.0.0/24, server 192.168.0.1/24 with global address 1.1.1.1) and sub-host FW_B (network 172.16.1.0/24); FW_A also carries NAT Server traffic, and FW_B also carries NAPT traffic)
Arguably, the IPSec and NAT traffic on each firewall should be entirely different and unrelated; in this case, however, the IPSec and NAT traffic overlap. In the firewall forwarding process, NAT processing comes before IPSec processing, so IPSec traffic cannot escape the NAT stage. In other words, traffic that was supposed to be sent through the IPSec tunnel is translated by NAT as soon as it matches a NAT policy; the translated traffic no longer matches the IPSec ACL and cannot enter the IPSec tunnel. As such, if the relationship between IPSec and NAT is not ironed out, all sorts of baffling problems will arise.

For the sub-host, sub-host users' access to host users will fail. Investigation will show that the sub-host users' traffic to the host matched the NAT policy and therefore did not enter the IPSec tunnel.
For the host, the host server's access to sub-host users will fail, as the access traffic matches the NAT Server's reverse Server-map table and therefore cannot enter the IPSec tunnel.
The method for solving these two problems is simple:

When IPSec and NAPT are on a single firewall: when configuring the NAT policy, set a specific policy that does not translate the addresses of IPSec traffic. This policy's priority must be higher than that of the other policies, and the traffic range it defines must be a subset of the traffic ranges of the other policies. In this way, IPSec traffic is excluded from NAT translation first, so its addresses won't be translated and NAT won't interfere with subsequent IPSec processing; traffic that does need NAT is translated as usual according to the other policies.
Next, let's look at a NAT policy configuration script; here we've configured two NAT policies, policy 1 and policy 2, in the Trust-->Untrust interzone:
nat-policy interzone trust untrust outbound
policy 1 // IPSec protected traffic must not be transformed by NAT
action no-nat
policy source 172.16.1.0 mask 24
policy destination 192.168.0.0 mask 24
policy 2 // Internet access traffic transformed by NAT
action source-nat
policy source 172.16.1.0 mask 24
address-group 1
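The first-match behavior that makes this ordering work can be sketched as follows (illustrative Python, not firewall code; the policy names and networks mirror the script above):

```python
# Sketch of why the no-nat policy must come first: NAT policies are
# evaluated in order and the first match wins, so IPSec-bound traffic
# must hit the narrow no-nat rule before the broader source-nat rule
# can translate it.

import ipaddress

POLICIES = [
    # (name, source network, destination network or None = any, action)
    ("policy 1", "172.16.1.0/24", "192.168.0.0/24", "no-nat"),
    ("policy 2", "172.16.1.0/24", None, "source-nat"),
]

def nat_action(src_ip: str, dst_ip: str) -> str:
    """Return the action of the first NAT policy matching the flow."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for name, src_net, dst_net, action in POLICIES:
        if src not in ipaddress.ip_network(src_net):
            continue
        if dst_net is not None and dst not in ipaddress.ip_network(dst_net):
            continue
        return action
    return "no-match"

# IPSec-protected traffic to the host is exempted from translation...
print(nat_action("172.16.1.2", "192.168.0.2"))   # no-nat
# ...while ordinary Internet traffic is still translated.
print(nat_action("172.16.1.2", "8.8.8.8"))       # source-nat
```

Swapping the two entries would send the IPSec-bound flow through source-nat first, which is exactly the failure mode described above.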

When IPSec and the NAT Server are on a single firewall: when configuring the NAT Server, designate the no-reverse parameter so that a reverse Server-map table is not generated.
[FW_A] nat server protocol tcp global 1.1.1.1 9980 inside 192.168.0.1 80 no-reverse
The key to understanding the issues above is understanding the firewall forwarding process. That process is incredibly complicated, and here we've only scratched the tip of the iceberg. For a more in-depth look at the firewall forwarding process, see "Appendix A Packet Processing Process".
6.9 Digital Certificate Authentication
For each additional sub-host, the host must configure a pre-shared key for the IKE peer formed with that sub-host. If all peers use the same key, there is a glaring security risk; if each peer uses a different key, the network becomes hard to administer and maintain. As such, Tiandihui urgently needed a new identity authentication scheme to replace the pre-shared key method and lower administration costs. Since Tiandihui's host and sub-hosts all operated under the guise of legitimate businesses (i.e., shop-owners and buyers), they might as well apply directly to the government's Penal Bureau for shop-owner and buyer identity vouchers to prove their respective identities. Because the Penal Bureau is a fair and reliable government authority, identity vouchers stamped by the Penal Bureau can be trusted; in this way, the host and sub-hosts can directly authenticate each other's identities with these vouchers.
This identity voucher, known as a digital certificate or simply a certificate, is an "ID card" carrying device identity information and issued by a third-party organization. By introducing the certificate function, the host and sub-hosts can perform identity authentication simply and easily. For a further introduction to certificates, see "Appendix B Certificate Analysis".
Before we discuss how certificates work and how they are obtained, we first need to
understand public key cryptography and the PKI framework.
6.9.1 Public Key Cryptography and the PKI Framework
In section 6.1 "IPSec Overview", we mentioned symmetric cryptography, in which the host
and sub-hosts use the same key for both encryption and decryption. Conversely, asymmetric
cryptography, also known as public key cryptography, uses different keys for encryption and
decryption. Currently, the most commonly used public key algorithms are RSA (named for
Ron Rivest, Adi Shamir, and Leonard Adleman) and DSA (Digital Signature Algorithm).
In public key cryptography, two different keys are used: one key, available to the public, is
known as the "public key"; the other, known only to its owner, is the "private key". What
makes this key pair special is that a message encrypted with the public key can only be
decrypted with the corresponding private key; conversely, a message encrypted with the
private key can only be decrypted with the corresponding public key.
By exploiting this property, mutual identity authentication is possible. For instance, suppose a
sub-host firewall encrypts a message with its own private key (a digital signature); the host
firewall then decrypts it with the sub-host firewall's publicly available public key. Because no
one else knows the sub-host firewall's private key, the fact that the corresponding public key
successfully decrypts the message confirms that the message was sent by the sub-host, thus
completing identity authentication.
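The sign-with-private-key, verify-with-public-key exchange can be sketched with textbook RSA arithmetic. This is a deliberately tiny, insecure illustration of the math only: the primes, exponent, and the digest value 65 are made up for the example, and real firewalls use 2048-bit keys with proper padding schemes.

```python
# Textbook RSA with toy primes -- illustrates the math only, not secure.
p, q = 61, 53
n = p * q                  # public modulus (part of both keys)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: public key = (n, e)
d = pow(e, -1, phi)        # private exponent: private key = (n, d)

def sign(digest: int) -> int:
    # The sub-host "encrypts" the digest with its private key.
    return pow(digest, d, n)

def verify(signature: int, digest: int) -> bool:
    # The host "decrypts" with the sub-host's public key; a match proves
    # the signer held the matching private key.
    return pow(signature, e, n) == digest

digest = 65                        # stand-in for a message digest
signature = sign(digest)
print(verify(signature, digest))       # True: identity confirmed
print(verify(signature + 1, digest))   # False: tampering is detected
```

Note that only the holder of d can produce a signature that e recovers, which is exactly the property the host relies on when verifying a sub-host.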
Now that we understand the basic concept of public key cryptography, how are these
concepts put to use in practice? PKI (Public Key Infrastructure) is a framework, built on
public key cryptography, for providing information security services; the digital certificate is
the core component of this framework, and IKE borrows PKI's certificate function to perform
peer identity authentication.
A PKI framework includes the following two major roles:
- End Entity (EE): the certificate end user, such as the host and sub-host firewalls.
- Certificate Authority (CA): an authoritative, trusted third-party organization (similar to
  the Imperial Penal Bureau), responsible for issuing, querying, and updating certificates,
  among other tasks.
In an IPSec VPN, the host and sub-host firewalls are the certificate end users. Before a
certificate can be generated for a firewall, the firewall must have its own public-private key
pair. We can create the key pair on the host and sub-host firewalls themselves and then send
the public key and the firewall's entity information to the CA to apply for a certificate.
Alternatively, the key pairs can be generated on the CA for the host and sub-host, and the
certificates generated there as well; the host and sub-host firewalls then import their
respective key pairs and certificates. In this section, we will mainly discuss the case where the
host and sub-host firewalls create their own key pairs and apply for certificates; for the
process in which the CA generates the key pairs and certificates, see "Appendix B Certificate
Analysis".
NOTE
The certificates mentioned herein fall into two categories: firewall certificates, known as local
certificates, which attest to a firewall's identity; and CA self-signed certificates, known as CA
certificates or root certificates, which attest to the CA's identity.
We've just briefly introduced some of the major concepts about certificates. Next, let's take a
look at how host and sub-host firewalls obtain such certificates.
6.9.2 Certificate Applications
Before the host and sub-host firewalls can apply for a certificate, they must first generate a
public-private key pair. The public key, together with the firewall's entity information, is then
given to the CA, and the CA generates a certificate based on this information. The host and
sub-host firewalls can apply for certificates in the following two ways:
- Online method (in-band method)
  The firewall and CA exchange packets using a certificate enrollment protocol to apply
  for a certificate online; the certificate obtained is saved directly to the firewall's storage
  device (Flash or CF card). Common enrollment protocols include SCEP (Simple
  Certificate Enrollment Protocol) and CMP (Certificate Management Protocol). This
  method is suitable for firewalls that support SCEP or CMP, and it also requires network
  connectivity between the firewall and the CA.
- Offline method (out-of-band method)
  First, the firewall generates a certificate request file (containing the public key and entity
  information); we then deliver this file to the CA on a disk, via email, or by other such
  means. The CA produces a certificate for the firewall based on the request and returns it
  the same way, on a disk, via email, or by other such means. Lastly, we transfer the
  certificate to the firewall's storage device. This method is suitable for firewalls that do
  not support SCEP or CMP, or for cases where there is no network connectivity between
  the firewall and the CA.
Either method can be chosen flexibly based on the actual situation. Next, we will use the
offline method in the network environment shown in Figure 6-25 to illustrate the process by
which the host and sub-host firewalls obtain certificates.
Figure 6-25 IKE/IPSec networking (host FW_A: private network 192.168.0.0/24, public
interface 1.1.1.1/24; sub-host FW_B: private network 192.168.1.0/24, public interface
3.3.3.3/24)
The offline certificate application process is as shown in Figure 6-26.
Figure 6-26 Offline certificate application process (1. the firewall creates a public/private key
pair; 2. entity information is configured; 3. a certificate request file is generated and sent to
the CA on a disk or via email; 4. the CA generates the FW certificate and returns it, together
with the CA certificate, on a disk or via email; 5. both certificates are imported to the
firewall)
2. Create private-public key pair
First, a public-private key pair is created on each of FW_A and FW_B; the public key will be
included in the certificate application. During creation, the system prompts for the key
modulus length, which ranges from 512 to 2048 bits. The longer the key, the higher its
security, but also the longer it takes to process. Here, we want maximum security, so we'll
enter 2048.
Create private-public key pair on FW_A:
[FW_A] rsa local-key-pair create
The key name will be: FW_A_Host
The range of public key size is (512 ~ 2048).
NOTES: If the key modulus is greater than 512.
It will take a few minutes.
Input the bits in the modulus[default = 512]: 2048
Generating keys...
.................................................+++
...............................................+++
..............++++++++
.++++++++
Create private-public key pair on FW_B:
[FW_B] rsa local-key-pair create
The key name will be: FW_B_Host
The range of public key size is (512 ~ 2048).
NOTES: If the key modulus is greater than 512.
It will take a few minutes.
Input the bits in the modulus[default = 512]: 2048
Generating keys...
.................................................+++
...............................................+++
..............++++++++
.++++++++
3. Configure entity information
When applying for a certificate, FW_A and FW_B must provide enough information for the
CA to verify their identities. Entity information identifies a firewall and includes items such
as the CN (Common Name), FQDN (Fully Qualified Domain Name), IP address, and email
address. Of these, the CN must be configured; the other items are optional.
This information is included in the certificate, and when configuring the ID type on the IKE
peer, the ID type to be used for authentication can be chosen based on the entity information
included in the certificate.
Once the entity information is configured, it must also be referenced in a PKI domain. Table
6-16 lays out the entity information and PKI domain configuration for FW_A and FW_B.
Table 6-16 Entity information and PKI domain configuration

Host FW_A:
  pki entity fwa
   common-name fwa                  //CN
   fqdn fwa.tdh.com                 //FQDN
   ip-address 1.1.1.1               //IP address
   email fwa@tdh.com                //Email address
  pki domain fwa
   certificate request entity fwa   //Reference the entity in the PKI domain

Sub-host FW_B:
  pki entity fwb
   common-name fwb                  //CN
   fqdn fwb.tdh.com                 //FQDN
   ip-address 3.3.3.3               //IP address
   email fwb@tdh.com                //Email address
  pki domain fwb
   certificate request entity fwb   //Reference the entity in the PKI domain

4. Generate certificate request
Next, we generate the certificate requests on FW_A and FW_B. Each request file is saved on
the firewall's storage device under the name "<PKI domain name>.req": the request generated
on FW_A is named fwa.req, and the one generated on FW_B is named fwb.req.
[FW_A] pki request-certificate domain fwa pkcs10
Creating certificate request file...
Info: Certificate request file successfully created.
[FW_B] pki request-certificate domain fwb pkcs10
Creating certificate request file...
Info: Certificate request file successfully created.
Examining the certificate request generated on FW_A, you can see that it includes the
configured CN, FQDN, IP address, and email address, as well as FW_A's public key.
[FW_A] display pki cert-req filename fwa.req
Certificate Request:
Data:
Version: 0 (0x0)
Subject: CN=fwa
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (2048 bit)
Modulus (2048 bit):
00: ae: 68: 50: 18: e7: 55: 32: 7a: 0e: 61: b6: 6e: 47: 45:
ec: fb: 29: d9: 1b: 4a: 9d: 6b: b0: 00: b0: 65: c8: fc: 5b:
b4: 68: d7: 90: 7d: 96: f7: 1d: e4: 62: 43: 06: bc: d0: a3:
5b: b4: fa: 30: a3: 19: 7e: 6f: 7c: 05: 6b: 47: 0c: a2: 42:
1b: c4: 82: f7: 5b: 0a: 73: a1: 0a: 8b: 00: dd: 37: aa: 5e:
21: 02: 56: b2: e6: 55: 31: 08: 8f: 71: 03: 13: 92: b9: c1:
51: 7e: 51: 04: e2: ca: 85: 2e: 45: 97: bb: 9a: 0e: ed: 61:
03: 97: d2: 1e: 44: b2: 9f: ff: b9: b1: 1d: 5d: 65: 7e: fc:
e6: 13: c3: 1e: 71: 81: d0: fe: a0: 60: 71: a4: 8a: 40: 93:
92: e3: b3: b6: cf: 56: f1: 30: b2: fc: 53: 31: bd: 9d: 6f:
3c: 33: 1e: 4a: a5: 6f: 83: c7: 45: 26: 8d: c6: 9c: 84: 85:
b5: 8f: b9: e3: 86: 86: 59: ad: 9b: 58: 63: a1: 3d: 7b: 81:
d7: 43: 14: 3d: 98: 4a: a2: cb: 82: 2c: fa: ca: 91: 32: b1:
e0: 09: de: fa: a8: d6: fc: ea: 8e: 7e: 36: 8f: fb: 86: 31:
1e: bc: 5e: 01: 71: 6b: b4: 23: 86: 7b: 05: c1: 63: 7a: f5:
bc: a7: 9b: a1: da: ff: 4f: 26: 2d: 33: 44: 06: 72: f1: 7b:
84: d5: a8: 49: 1d: be: b4: 0e: 9c: 94: 85: 34: 7b: e5: bb:
8a: 49
Exponent: 65537 (0x10001)
Attributes:
Requested Extensions:
X509v3 Subject Alternative Name:
IP Address: 1.1.1.1, DNS: fwa.tdh.com, email: fwa@tdh.com
Signature Algorithm: md5WithRSAEncryption
4b: a6: fc: 91: 2a: 77: e3: 30: 02: bb: e4: 0f: 1a: bf: d2:
3e: 44: 51: 81: b1: 26: 2d: 2e: 83: 7c: 0c: 29: 70: 3c: 6a:
27: c8: a4: 8d: 3b: 8f: dc: a7: d7: df: 10: be: 4c: 96: 1f:
4d: e9: 28: 82: b9: 2d: 9b: e6: 6d: 22: 52: ca: 50: 07: c2:
c7: 49: 7a: a6: a5: 7c: cc: 82: 02: 15: 14: ca: 9c: 69: 39:
3a: c9: 75: d9: f5: b6: bf: b1: 45: e4: e7: f4: db: df: eb:
ac: 14: e9: 51: af: b1: c8: d6: c1: 19: 48: bc: 27: c1: 37:
9c: 1f: 9a: 7e: c7: fe: 20: c9: e8: 1d: 94: 55: ff: 85: 3e:
f3: ff: 9b: 18: 36: b1: 25: 2b: 4d: 60: 2e: 13: 7b: be: 91:
6c: 5c: 1a: f6: 3a: 5b: e7: 87: 2b: 43: 7f: d8: f6: 2b: c8:
c8: 40: df: 07: f9: 52: 4c: 8b: ba: b0: 10: f3: 34: 00: 00:
c1: 7a: 9c: dd: de: 26: 26: 28: 30: de: e8: 6c: dc: 0a: c6:
c6: 0d: 5e: 8e: 68: a8: 8d: cc: eb: 91: 9c: 59: 3d: 1e: f3:
16: bf: cc: f5: df: 71: bc: 51: fb: 98: 83: c5: 2b: 17: 73:
f7: 93: 76: f4
5. CA generates certificates based on the certificate requests
Once the certificate requests are generated, the files can be sent to the CA on a disk, via email,
or by other such means, and the CA will produce certificates for FW_A and FW_B. In
addition to the FW_A and FW_B certificates, the CA also has its own certificate, i.e., the CA
certificate. The CA returns the FW_A and FW_B certificates along with its own certificate on
a disk, via email, or by other such means.
The commonly used Windows Server operating system can serve as a CA for generating and
issuing certificates; the specific steps can be found through a simple online search and will
not be covered here.
6. Import certificates
Once CA processing is complete, we receive FW_A's certificate fwa.cer, FW_B's certificate
fwb.cer, and the CA's own certificate, ca.cer.
fwa.cer and ca.cer are uploaded to FW_A's storage device, and fwb.cer and ca.cer to
FW_B's; afterwards, the certificates must be imported on FW_A and FW_B respectively.
Import CA certificate and local certificate on FW_A:
[FW_A] pki import-certificate ca filename ca.cer
Info: Import file successfully.
[FW_A] pki import-certificate local filename fwa.cer
Info: Import file successfully.
Import CA certificate and local certificate on FW_B:
[FW_B] pki import-certificate ca filename ca.cer
Info: Import file successfully.
[FW_B] pki import-certificate local filename fwb.cer
Info: Import file successfully.
6.9.3 Digital Certificate Identity Authentication
Once the certificates are imported, FW_A and FW_B each hold their own "ID card". When
the certificate is referenced in the IKE peer configuration, FW_A and FW_B can authenticate
each other's identity through the certificates.
Previously we mentioned that when certificates are used for identity authentication, the ID
type can be chosen based on the entity information in the certificate. Currently, IKE peers can
use four ID types: DN (Distinguished Name), FQDN, User-FQDN, and IP. Table 6-17 shows
the certificate field each ID type corresponds to, along with the resulting values on FW_A
and FW_B.
Table 6-17 Certificate ID fields and firewall values

ID Type      Certificate Field   FW_A Values             FW_B Values
DN           Subject             local ID: /CN=fwa       local ID: /CN=fwb
                                 peer ID: /CN=fwb        peer ID: /CN=fwa
FQDN         DNS                 local ID: fwa.tdh.com   local ID: fwb.tdh.com
                                 peer ID: fwb.tdh.com    peer ID: fwa.tdh.com
User-FQDN    email               local ID: fwa@tdh.com   local ID: fwb@tdh.com
                                 peer ID: fwb@tdh.com    peer ID: fwa@tdh.com
IP           IP Address          local ID: 1.1.1.1       local ID: 3.3.3.3
                                 peer ID: 3.3.3.3        peer ID: 1.1.1.1
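Conceptually, the ID check IKE performs is a lookup of the configured ID type against the corresponding certificate field from Table 6-17. The following is a hypothetical sketch; the dictionary fields and function are ours for illustration, not any product's internals:

```python
# Map each IKE ID type to the certificate field it is checked against
# (following Table 6-17).
ID_TYPE_FIELD = {
    "dn": "subject",
    "fqdn": "dns",
    "user-fqdn": "email",
    "ip": "ip_address",
}

def authenticate_peer(cert: dict, id_type: str, remote_id: str) -> bool:
    """Return True if the peer certificate's field for the configured
    ID type matches the locally configured remote-id."""
    return cert.get(ID_TYPE_FIELD[id_type]) == remote_id

# Hypothetical data mirroring what FW_A would read out of fwb.cer.
fwb_cert = {
    "subject": "/CN=fwb",
    "dns": "fwb.tdh.com",
    "email": "fwb@tdh.com",
    "ip_address": "3.3.3.3",
}

print(authenticate_peer(fwb_cert, "dn", "/CN=fwb"))  # True: IDs match
print(authenticate_peer(fwb_cert, "dn", "/CN=fwa"))  # False: wrong remote-id
```

With local-id-type dn and remote-id /CN=fwb configured on FW_A, the first call models a successful authentication of FW_B.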
Table 6-18 illustrates the key configuration for FW_A and FW_B with the ID type of DN.
Table 6-18 IKE/IPSec certificate authentication configuration

Host FW_A:
  ike proposal 10
   authentication-method rsa-sig        //Use certificate authentication
  ike peer fwb
   certificate local-filename fwa.cer   //FW_A certificate
   ike-proposal 10
   local-id-type dn                     //ID type of DN
   remote-id /CN=fwb                    //FW_B DN
   remote-address 3.3.3.3               //FW_B IP address
  pki certificate access-control-policy default permit

Sub-host FW_B:
  ike proposal 10
   authentication-method rsa-sig        //Use certificate authentication
  ike peer fwa
   certificate local-filename fwb.cer   //FW_B certificate
   ike-proposal 10
   local-id-type dn                     //ID type of DN
   remote-id /CN=fwa                    //FW_A DN
   remote-address 1.1.1.1               //FW_A IP address
  pki certificate access-control-policy default permit
The IKE negotiation process with certificates is roughly the same as with pre-shared keys.
The difference is that with certificates, the ISAKMP identity messages exchanged between
the two peers (messages (5) and (6) in main mode; messages (1) and (2) in aggressive mode)
carry an additional certificate payload and signature payload. We won't go through the
negotiation specifics again.
At this point, Tiandihui had found an alternative to the pre-shared key authentication scheme.
When a new sub-host establishes an IPSec connection to the host FW, all the sub-host needs
to do is apply for a certificate from the same CA; the sub-host and host can then authenticate
each other with their certificates. Since the host no longer needs to maintain a separate
pre-shared key for each sub-host, Tiandihui could reduce its administration costs.
NOTE
Apart from its use in IPSec connections, a digital certificate can also be used for identity authentication
between SSL VPN clients and servers; for details, see "Chapter 7 Overview of SSL VPNs".
In addition to the surge in new sub-hosts, Tiandihui also had some older sub-hosts that had
already connected to the host via GRE or L2TP. How can IPSec be used to secure
communications between these sub-hosts and the host without changing the original access
mode? Dr. WoW will take us through an in-depth study of this very topic.
6.10 Security Policy Configuration Roadmap
IPSec VPNs place special demands on security policy configuration: the policy must permit
not only the service traffic traversing the firewall, but also the IPSec protocol packets
themselves. This section walks through the precise security policy configuration methods for
IPSec VPNs.
6.10.1 IKE/IPSec VPN Scenarios
As shown in Figure 6-27, an IPSec tunnel is established between host FW_A and sub-host
FW_B so that PC_A and PC_B can communicate through the tunnel. Suppose FW_A and
FW_B each connect to their private network through interface GE0/0/1, which is in the trust
zone, and to the Internet through interface GE0/0/2, which is in the untrust zone.
Figure 6-27 IKE/IPSec networking (host FW_A: GE0/0/1 192.168.0.1/24 in the trust zone
toward PC_A 192.168.0.2/24, GE0/0/2 1.1.1.1/24 in the untrust zone; sub-host FW_B:
GE0/0/1 172.16.2.1/24 in the trust zone toward PC_B 172.16.2.2/24, GE0/0/2 2.2.3.2/24 in
the untrust zone; IPSec tunnel between 1.1.1.1 and 2.2.3.2)
The security policy configuration process is as follows:
2. First configure the broadest interzone security policy to commission the IPSec.
Set the FW_A default packet filtering to permit:
[FW_A] firewall packet-filter default permit all
Set the FW_B default packet filtering to permit:
[FW_B] firewall packet-filter default permit all
3. Once the IPSec is configured on both firewalls, have PC_A initiate access to PC_B; by
analyzing the firewall session tables, we can derive the security policy match conditions.
- Host FW_A session table
[FW_A] display firewall session table verbose
Current Total Sessions: 3
udp VPN: public --> public
Zone: local--> untrust TTL: 00: 02: 00 Left: 00: 00: 55
Interface: GigabitEthernet0/0/2 NextHop: 1.1.1.2 MAC: 00-e0-fc-e4-65-58
<--packets: 4 bytes: 692 -->packets: 6 bytes: 944
1.1.1.1: 500-->2.2.3.2: 500 //Corresponds to ISAKMP negotiation packet, port 500
icmp VPN: public --> public
Zone: trust--> untrust TTL: 00: 00: 20 Left: 00: 00: 16
Interface: GigabitEthernet0/0/2 NextHop: 1.1.1.2 MAC: 00-e0-fc-e4-65-58
<--packets: 0 bytes: 0 -->packets: 1 bytes: 60
192.168.0.2: 14235-->172.16.2.2: 2048 //Corresponds to original IP packets
esp VPN: public --> public
239
Learn Firewalls with Dr. WoW
Zone: untrust--> local TTL: 00: 10: 00 Left: 00: 09: 59
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets: 0 bytes: 0 -->packets: 2 bytes: 224
2.2.3.2: 0-->1.1.1.1: 0 //Corresponds to IPSec packets. If AH and ESP encapsulation
are both configured, there will be two sessions.
There's a strange phenomenon here: the session corresponding to host FW_A sending ESP
packets has the direction untrust-->local (2.2.3.2: 0-->1.1.1.1: 0), which seems inconsistent
with FW_A being the sender. Why is that? It turns out that when FW_A sends an encrypted
ESP packet, it does not establish a session, does not go through the firewall forwarding
process, and naturally does not check the security policy either. However, when the firewall
receives an ESP packet for decryption, it must first establish a session, go through the
forwarding process, and check the security policy; so the session we see corresponds to ESP
packet receipt. ISAKMP negotiation packets, whether sent or received, go through the
forwarding process in both directions, so this issue does not arise for them.
By analyzing the session table we can obtain the FW_A packet path, as shown in Figure 6-28.
Figure 6-28 Host FW_A packet path (the original PC_A-to-PC_B packets travel from the
trust zone (GE0/0/1) to the untrust zone (GE0/0/2); ISAKMP and IPSec packets travel
between the local zone and the untrust zone)
As the figure shows, FW_A must configure a trust--->untrust security policy to allow the
PC_A-to-PC_B access packets through, a local--->untrust security policy to allow ISAKMP
negotiation packets through, and an untrust--->local security policy to allow ESP packets
through.
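The three policies FW_A needs can be modelled as interzone rules matched on source zone, destination zone, addresses, and service. This is a simplified, hypothetical sketch of the matching logic (the rule structure and function names are ours, not firewall configuration syntax):

```python
import ipaddress

# Hypothetical rule set mirroring the three FW_A policies described above.
POLICIES = [
    # (src zone, dst zone, src network, dst network, service)
    ("trust",   "untrust", "192.168.0.0/24", "172.16.2.0/24", "any"),
    ("local",   "untrust", "1.1.1.1/32",     "2.2.3.2/32",    "udp/500"),
    ("untrust", "local",   "2.2.3.2/32",     "1.1.1.1/32",    "esp"),
]

def policy_permits(src_zone, dst_zone, src_ip, dst_ip, service):
    """Return True if some interzone rule permits this flow; otherwise
    fall back to the default packet-filter action (deny)."""
    for zs, zd, ns, nd, svc in POLICIES:
        if (zs == src_zone and zd == dst_zone
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(ns)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(nd)
                and svc in ("any", service)):
            return True
    return False  # default packet-filter deny

print(policy_permits("trust", "untrust", "192.168.0.2", "172.16.2.2", "icmp"))  # True
print(policy_permits("untrust", "local", "2.2.3.2", "1.1.1.1", "esp"))          # True
print(policy_permits("untrust", "local", "9.9.9.9", "1.1.1.1", "esp"))          # False
```

The last call illustrates why the final step of the roadmap (restoring default deny) matters: anything not explicitly permitted is dropped.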
- Sub-host FW_B session table
[FW_B] display firewall session table verbose
Current Total Sessions: 3
udp VPN: public --> public
Zone: untrust--> local TTL: 00: 02: 00 Left: 00: 00: 36
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets: 1 bytes: 200 -->packets: 2 bytes: 280
1.1.1.1: 500-->2.2.3.2: 500
esp VPN: public --> public
Zone: untrust--> local TTL: 00: 10: 00 Left: 00: 09: 59
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets: 0 bytes: 0 -->packets: 4 bytes: 448
1.1.1.1: 0-->2.2.3.2: 0
icmp VPN: public --> public
240
Learn Firewalls with Dr. WoW
Zone: untrust--> trust TTL: 00: 00: 20 Left: 00: 00: 16
Interface: GigabitEthernet0/0/1 NextHop: 172.16.2.2 MAC: 54-89-98-39-60-e2
<--packets: 1 bytes: 60 -->packets: 1 bytes: 60
192.168.0.2: 61095-->172.16.2.2: 2048
By analyzing the session table, we can obtain the FW_B packet path, as shown in Figure 6-29.
Figure 6-29 Sub-host FW_B packet path (the original PC_A-to-PC_B packets travel from the
untrust zone (GE0/0/2) to the trust zone (GE0/0/1); ISAKMP and IPSec packets travel
between the untrust zone and the local zone)
As the figure shows, FW_B must configure an untrust--->trust security policy to allow the
PC_A-to-PC_B access packets through, and untrust--->local security policies to allow both
ISAKMP negotiation packets and ESP packets through.
When PC_B initiates access to PC_A, the packet path is the opposite of the PC_A access path
to PC_B, so we don't need to discuss it again.
In conclusion, in the IKE/IPSec scenario, FW_A and FW_B should configure security policy
match conditions, as shown in Table 6-19.
Table 6-19 Host and sub-host FW security policy match conditions

Service Direction: PC_A access to PC_B

  Host FW_A:
    Source Zone  Dest. Zone  Source Address   Dest. Address    Application (Protocol+Dest. Port)
    Trust        Untrust     192.168.0.0/24   172.16.2.0/24    *
    Local        Untrust     1.1.1.1/32       2.2.3.2/32       UDP+500
    Untrust      Local       2.2.3.2/32       1.1.1.1/32       AH and/or ESP (ESP in this example)

  Sub-host FW_B:
    Untrust      Trust       192.168.0.0/24   172.16.2.0/24    *
    Untrust      Local       1.1.1.1/32       2.2.3.2/32       UDP+500
    Untrust      Local       1.1.1.1/32       2.2.3.2/32       AH and/or ESP (ESP in this example)

Service Direction: PC_B access to PC_A

  Host FW_A:
    Untrust      Trust       172.16.2.0/24    192.168.0.0/24   *
    Untrust      Local       2.2.3.2/32       1.1.1.1/32       UDP+500
    Untrust      Local       2.2.3.2/32       1.1.1.1/32       AH and/or ESP (ESP in this example)

  Sub-host FW_B:
    Trust        Untrust     172.16.2.0/24    192.168.0.0/24   *
    Local        Untrust     2.2.3.2/32       1.1.1.1/32       UDP+500
    Untrust      Local       1.1.1.1/32       2.2.3.2/32       AH and/or ESP (ESP in this example)

*: This depends on the specific service type and can be configured based on the actual
situation, e.g. tcp, udp, icmp, etc.
NOTE
Manual IPSec VPNs differ from IKE IPSec VPNs in that they do not have an ISAKMP session and as
such, there is no need to configure a UDP+500 security policy.
4. Finally, change the default packet filtering action back to deny.
Set the FW_A domain default packet filtering to deny:
[FW_A] firewall packet-filter default deny all
Set the FW_B domain default packet filtering to deny:
[FW_B] firewall packet-filter default deny all
6.10.2 IKE/IPSec VPN+NAT Traversal Scenarios
With IKE/IPSec VPN+NAT traversals, the security policy configuration also has several
unique characteristics. We'll introduce this characteristic with Scenario 2 from "6.8.1
Overview of NAT Traversal Scenarios".
As shown in Figure 6-30, an IPSec tunnel is established between host FW_A and sub-host
FW_B, with a NAT device between the two; the post-NAT address is 2.2.2.10 (an address in
the NAT address pool). Suppose FW_A and FW_B each connect to their private network
through interface GE0/0/1, which is in the trust zone, and to the upstream device through
interface GE0/0/2, which is in the untrust zone.
Figure 6-30 IKEv1/IPSec VPN+NAT traversal networking (host FW_A: GE0/0/1
192.168.0.1/24 in the trust zone toward PC_A 192.168.0.2/24, GE0/0/2 1.1.1.1/24 in the
untrust zone; NAT device: GE0/0/0 172.16.0.2/24, translating to pool address 2.2.2.10;
sub-host FW_B: GE0/0/1 172.16.1.1/24 in the trust zone toward PC_B 172.16.1.2/24,
GE0/0/2 172.16.0.1/24 in the untrust zone; IPSec tunnel between FW_A and FW_B)
The security policy configuration process is as follows:
2. First configure the broadest interzone security policy to commission the IPSec.
Set the FW_A default packet filtering to permit:
[FW_A] firewall packet-filter default permit all
Set the FW_B default packet filtering to permit:
[FW_B] firewall packet-filter default permit all
3. Once the IPSec is configured on both firewalls, have PC_B initiate access to PC_A; by
analyzing the firewall session tables, we can derive the security policy match conditions.
- Host FW_A session table
<FW_A> display firewall session table verbose
Current Total Sessions: 3
udp VPN: public --> public
Zone: untrust--> local TTL: 00: 02: 00 Left: 00: 01: 52
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets: 1 bytes: 296 -->packets: 1 bytes: 296
2.2.2.10: 2052-->1.1.1.1: 500
udp VPN: public --> public
Zone: untrust--> local TTL: 00: 02: 00 Left: 00: 01: 58
Interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets: 1 bytes: 228 -->packets: 5 bytes: 740
2.2.2.10: 2049-->1.1.1.1: 4500
icmp VPN: public --> public
Zone: untrust--> trust TTL: 00: 00: 20 Left: 00: 00: 14
Interface: GigabitEthernet0/0/2 NextHop: 192.168.0.2 MAC: 54-89-98-7f-1e-b2
<--packets: 1 bytes: 60 -->packets: 1 bytes: 60
172.16.1.2: 34201-->192.168.0.2: 2048
Host FW_A has three sessions in total: two UDP sessions and one ICMP session. Of the two
UDP sessions, one uses port 500 and one uses port 4500; this means that after IKE detected
the NAT device, the negotiation switched from port 500 to port 4500, and the ISAKMP
packets gained an outer UDP encapsulation. The subsequently transferred ESP packets also
carry an added UDP header, so FW_A does not see a separate ESP session.
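Per RFC 3948, the receiver tells the two packet types arriving on port 4500 apart by a four-byte "non-ESP marker": ISAKMP payloads are prefixed with four zero bytes, while UDP-encapsulated ESP begins directly with its non-zero SPI. A minimal sketch of that demultiplexing (simplified; it ignores one-byte NAT-keepalive packets):

```python
def classify_port4500_payload(payload: bytes) -> str:
    """Classify a UDP/4500 payload per RFC 3948 framing.

    ISAKMP messages carry a 4-byte all-zero 'non-ESP marker' prefix;
    UDP-encapsulated ESP starts directly with its (non-zero) SPI.
    """
    if len(payload) >= 4 and payload[:4] == b"\x00\x00\x00\x00":
        return "isakmp"
    return "esp"

print(classify_port4500_payload(b"\x00\x00\x00\x00" + b"ike-header..."))  # isakmp
print(classify_port4500_payload(b"\x12\x34\x56\x78" + b"esp-body..."))    # esp
```

This framing is why a single UDP session on port 4500 can carry both the remaining IKE negotiation and the encrypted data traffic.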
By analyzing the session table, we can obtain the host FW_A packet path, as shown in Figure
6-31.
Figure 6-31 Host FW_A packet path (the decapsulated PC_B-to-PC_A packets travel from
the untrust zone (GE0/0/2) to the trust zone (GE0/0/1); ISAKMP and IPSec packets travel
between the untrust zone and the local zone)
As the figure shows, host FW_A must configure an untrust--->trust security policy to allow
the PC_B-to-PC_A access packets through; it must also configure an untrust--->local
security policy to allow host FW_A and sub-host FW_B to establish the IPSec tunnel.
- Sub-host FW_B session table
<FW_B> display firewall session table verbose
Current Total Sessions: 3
udp VPN: public --> public
Zone: local--> untrust TTL: 00: 02: 00 Left: 00: 01: 45
Interface: GigabitEthernet0/0/2 NextHop: 172.16.0.2 MAC: 00-00-00-d3-84-01
<--packets: 1 bytes: 296 -->packets: 1 bytes: 296
172.16.0.1: 500-->1.1.1.1: 500
udp VPN: public --> public
Zone: local--> untrust TTL: 00: 02: 00 Left: 00: 01: 50
Interface: GigabitEthernet0/0/2 NextHop: 172.16.0.2 MAC: 00-00-00-d3-84-01
<--packets: 5 bytes: 708 -->packets: 1 bytes: 260
172.16.0.1: 4500-->1.1.1.1: 4500
icmp VPN: public --> public
Zone: trust--> untrust TTL: 00: 00: 20 Left: 00: 00: 07
Interface: GigabitEthernet0/0/2 NextHop: 10.1.5.1 MAC: 00-00-00-d3-84-01
<--packets: 1 bytes: 60 -->packets: 1 bytes: 60
172.16.1.2: 34201-->192.168.0.2: 2048
By analyzing the session table, we can obtain the sub-host FW_B packet path, as shown in
Figure 6-32.
Figure 6-32 Sub-host FW_B packet path (the original PC_B-to-PC_A packets travel from the
trust zone (GE0/0/1) to the untrust zone (GE0/0/2); ISAKMP and IPSec packets travel
between the local zone and the untrust zone)
As the figure shows, sub-host FW_B must configure a trust--->untrust security policy to
allow the PC_B-to-PC_A access packets through; it must also configure a local--->untrust
security policy to allow host FW_A and sub-host FW_B to establish the IPSec tunnel.
If the NAT device is configured only for source NAT, then in this scenario only sub-host
FW_B can actively establish an IPSec tunnel with host FW_A. In that case, the security
policy match conditions for host FW_A and sub-host FW_B are as shown in Table 6-20.
Table 6-20 Host and sub-host FW security policy match conditions

Service Direction: PC_B access to PC_A

  Host FW_A:
    Source Zone  Dest. Zone  Source Address   Dest. Address    Application (Protocol+Dest. Port)
    Untrust      Local       2.2.2.10/32      1.1.1.1/32       UDP+500 and UDP+4500
    Untrust      Trust       172.16.1.0/24    192.168.0.0/24   *

  Sub-host FW_B:
    Local        Untrust     172.16.0.1/32    1.1.1.1/32       UDP+500 and UDP+4500
    Trust        Untrust     172.16.1.0/24    192.168.0.0/24   *

*: This depends on the specific service type and can be configured based on the actual
situation, e.g. tcp, udp, icmp, etc.
If the host is not using a policy template and NAT Server is configured on the NAT device,
host FW_A can also actively establish an IPSec tunnel with sub-host FW_B; the security
policy match conditions for PC_A access to PC_B can then be worked out using the method
above.
4. Lastly, change the default packet filtering back to deny.
Set the FW_A domain default packet filtering to deny:
[FW_A] firewall packet-filter default deny all
Set the FW_B domain default packet filtering to deny:
[FW_B] firewall packet-filter default deny all
7 SSL VPN
7.1 SSL VPN Mechanisms
7.1.1 Advantages of SSL VPN
The Internet has developed at breakneck speed, to the point where network connections are
available anywhere, anytime. Although PCs and laptops are ubiquitous, they are increasingly
dismissed as too heavy and inconvenient; nearly everyone now has a smartphone or tablet for
anytime, anywhere Internet access. The times keep changing and technology keeps advancing,
but what remains the same is that technology is people-centric: people want more
convenience, simplicity, and security. For the specific use scenario of remote access to
internal networks, a few cracks have appeared in the leading traditional VPN technology,
IPSec:
- Inflexible networking. When constructing an IPSec VPN, if equipment is added or the
  user's IPSec policy is altered, the existing IPSec configuration must be adjusted.
- IPSec VPNs require client software to be installed, causing a good deal of trouble in
  terms of compatibility, deployment, and maintenance.
- IPSec is not strict enough regarding user access control; it can only enforce
  network-layer control and cannot conduct granular access control over application-layer
  resources.
As the saying goes, "there is a way up every mountain; if there's a problem there must be a solution", and accordingly a new kind of technology has begun to take center stage: Secure Sockets Layer VPN (SSL VPN), a new lightweight remote access solution, effectively solves the aforementioned problems and is very broadly applied in real-world remote access solutions.
• SSL VPN operates between the transport layer and the application layer, and changes neither IP nor TCP headers nor the existing network topology. If a firewall is deployed in the network, an SSL VPN only requires that the standard HTTPS port (443) be opened on the firewall.
• SSL VPN uses the browser/server (B/S) architecture. Therefore, SSL VPN requires no client application, only a browser, for ease of use.
Although SSL VPN does not require the installation of additional clients, SSL VPN features have explicit requirements regarding the browser and operating system type and version. For specific requirements, please see the product documentation.
• More importantly, compared with IPSec's network-layer control, all access control in an SSL VPN takes place at the application layer, and the granularity can reach the URL or file level, which greatly improves the security of remote access.
Below I, Dr. WoW, will take everyone on a detailed introduction of SSL VPN technology.
7.1.2 SSL VPN Use Scenarios
The term SSL VPN was actually coined by VPN vendors. It refers to remote users using the SSL functionality embedded in standard Web browsers to connect to an SSL VPN server on an enterprise intranet; after the users pass authentication, the SSL VPN server forwards their packets to designated internal servers, allowing the remote users to access specified server resources within the company. In this scenario, remote users and the SSL VPN server use the standard SSL protocol to encrypt the data transmitted between them, which amounts to establishing a tunnel between the remote user and the SSL VPN server.
Generally speaking, SSL VPN servers are deployed behind firewalls at the egress; the typical
use scenario for SSL VPN is shown in Figure 7-1.
Figure 7-1 Typical use scenario for SSL VPN
(Remotely connected users reach the enterprise egress firewall, which serves as the SSL VPN server, over SSL VPN; the firewall forwards their requests to server resources on the internal network.)
Remotely connected users will hereinafter be called "remote users."
Huawei's USG2000/5000/6000 families of firewalls can serve directly as SSL VPN servers,
conserving network construction and administration costs. While I have the opportunity, let
me get a word in on behalf of Huawei, which has released the specialized SVN2000 and
SVN5000 series of SSL VPN server products. These provide support for higher numbers of
users, and support SSL VPN solutions for even greater numbers of use scenarios.
In this section, I will primarily introduce the establishment of a connection between remote
users and an SSL VPN server, as well as the process of successfully logging in to the SSL
VPN server. How SSL VPN servers forward remote user requests to various internal servers
will be introduced in subsequent sections.
Before beginning a rather dull, theoretical introduction, I will provide a demonstration of the
steps involved in logging in to an SSL VPN server, so that the ease and convenience of SSL
VPNs can be observed visually.
The steps typically involved in remote user access of an SSL VPN server are extremely
simple, as shown in Table 7-1.
Table 7-1 Steps for remote user access of an SSL VPN server

Step 1: Open a browser and enter https://<SSL VPN server address>:<port> or https://<domain name> to initiate a connection.
Step 2: The web page may warn that there is a problem with the security certificate of the website about to be accessed; select "continue to this website."
Step 3: The SSL VPN server's login interface appears; the right side of the interface requests a user name/password.
Step 4: Enter the user name/password (obtained beforehand from the enterprise network administrator), log in to the SSL VPN server, and enter the intranet resource access page.
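As an aside on what step 2 amounts to under the hood: choosing to continue past the certificate warning is, in effect, connecting without verifying the server certificate. A minimal sketch with Python's standard ssl module (the address in the comment is the virtual gateway used in the packet capture later in this section):

```python
import ssl

# Step 2's "continue to this website" has a programmatic analogue: the client
# simply stops verifying the server certificate. (Sketch only; a real browser
# shows a warning instead of silently skipping verification.)
ctx = ssl.create_default_context()
ctx.check_hostname = False        # do not match the certificate's domain name
ctx.verify_mode = ssl.CERT_NONE   # accept an untrusted (e.g. self-signed) certificate

# The connection would then be opened with, for example:
#   import http.client
#   conn = http.client.HTTPSConnection("10.174.64.61", 443, context=ctx)
#   conn.request("GET", "/")
```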
Can these few steps guarantee the establishment of an SSL VPN connection and secure access?
Why is there a reminder that there is a problem with the security certificate of the web address
to be accessed? I'm sure there are many questions swirling around in everyone's minds now.
With these questions in mind, we'll explore how remote users exchange packets with the SSL
VPN server in these few brief steps.
I believe there are two keys involved here, which also reflect two basic security features of SSL VPN technology:
1. Security of the transmission process
In the aforesaid definition of SSL VPN, we mentioned that remote users and SSL VPN
servers use the standard SSL protocol to encrypt data transmitted between them. The
SSL protocol begins operating from when a user opens his/her browser and accesses the
SSL VPN server address. Therefore, we need to explore the mechanisms through which
the SSL protocol operates in more detail.
2. Security of user identities
In the above demonstration of logging in to SSL VPN servers, the remote user accessed
the SSL VPN server's login interface, and the SSL VPN server requested that the user
name/password be input. This is actually the SSL VPN server requesting authentication
of the user's identity. SSL VPN servers generally support multiple kinds of user
authentication methods to guarantee the security and legitimacy of access. Huawei's
firewalls support multiple methods for authenticating user names/passwords, including
local authentication, server authentication, certificate authentication, two-factor
authentication (user name/password + certificate), etc.
7.1.3 SSL Protocol Operating Mechanisms
SSL is a kind of protocol that establishes a secure path between clients and servers. It is a Web
application-based security protocol developed by the Netscape company, and provides data
encryption, server authentication, information integrity and optional client authentication for
application protocols based in TCP/IP connections (such as HTTP, Telnet and FTP, etc.). This
is to say that the SSL protocol has the following features:
• All data to be transmitted is encrypted for transmission, so that third parties cannot eavesdrop.
• It possesses verification mechanisms, so the two communicating parties will immediately discover any tampering with the information.
• It is equipped with identity certificates to prevent identities from being forged.
The SSL protocol has developed continuously since its release in 1994. The SSL 2.0 and SSL 3.0 versions released by Netscape have been broadly used. Subsequently, the IETF released the TLS 1.0 protocol (also known as SSL 3.1) based on SSL 3.0, and later released the TLS 1.1 and TLS 1.2 versions. At present, most mainstream browsers support TLS 1.2. Huawei's firewalls support the SSL 2.0, SSL 3.0 and TLS 1.0 versions.
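To relate these versions to what a present-day client will actually offer, Python's standard ssl module can report the local TLS library and the protocol-version bounds of a default client context; the exact values printed depend on the local Python/OpenSSL build:

```python
import ssl

# Inspect the local TLS implementation and the version bounds a default
# client context will negotiate. Modern builds refuse SSL 2.0/3.0 outright
# and pick the highest version both sides support.
ctx = ssl.create_default_context()
print(ssl.OPENSSL_VERSION)        # the underlying TLS library build string
print(ctx.minimum_version.name)   # lowest version this client will accept
print(ctx.maximum_version.name)   # highest version this client will offer
```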
The SSL protocol's structure is made up of two layers. The lower layer is the SSL record
protocol, and the higher layer has the SSL handshake protocol, the SSL change cipher spec
protocol and the SSL alert protocol; the roles of each protocol are as shown in Figure 7-2.
Figure 7-2 SSL protocol structure and uses
It's thus obvious that the establishment of an SSL connection primarily relies on the SSL
handshake protocol, and below we'll explore the SSL handshake protocol in detail.
The SSL handshake protocol's basic design approach can be summarized in a brief sentence: it transmits ciphertext using a public key encryption algorithm. To put it another way, the server gives its public key to the client, the client uses the server's public key to encrypt information, and after the server receives the ciphertext, it uses its own private key to decrypt it.
There are two problems with this design approach which require further refined solutions:
1. When the server gives its public key to the client, how can it be guaranteed that the public key has not been tampered with?
Solution: Incorporate a digital certificate. The server's public key is included in the server certificate, and the server sends the certificate to the client. So long as the certificate is trustworthy, the public key can be trusted.
2. The security of the public key encryption algorithm is high; however, because private keys are used for decryption, the algorithm is relatively complicated and encryption and decryption are computing-intensive. How can efficiency be increased?
Solution: Incorporate a new "session key". The client and server negotiate the "session key"
using a public key encryption algorithm, and subsequent data packets all use this "session
key" for encryption and decryption (this is also known as a symmetric encryption algorithm).
The computing speed is very fast when using a symmetric encryption algorithm, allowing for
a great increase in the computing efficiency of encryption and decryption.
To explain things further, "the session key" is actually a secret key shared by the server and
the client. It is called the "session key" because it incorporates the concept of a session. Every
TCP-based SSL connection is associated with a session, and the session is created by the SSL
handshake protocol. This provides complete transmission encryption for each connection, and
means that the handshake process is included in the session.
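Both sides arriving at the same "session key" from shared random numbers can be sketched as follows. This toy derivation uses HMAC-SHA256 purely to illustrate the principle that identical inputs yield an identical symmetric key on both ends; it is not the real TLS key-derivation function:

```python
import hashlib
import hmac
import os

# Simplified sketch of the "session key" idea: both sides mix the pre-master
# key with the two Hello random numbers to derive the same symmetric key.
# (Illustration of the principle only, NOT the real TLS PRF.)
def derive_session_key(pre_master, client_random, server_random):
    return hmac.new(pre_master, client_random + server_random, hashlib.sha256).digest()

client_random = os.urandom(32)   # sent in Client Hello (cleartext)
server_random = os.urandom(32)   # sent in Server Hello (cleartext)
pre_master = os.urandom(48)      # sent encrypted with the server's public key

# Client and server run the same computation and obtain the same key.
key_at_client = derive_session_key(pre_master, client_random, server_random)
key_at_server = derive_session_key(pre_master, client_random, server_random)
assert key_at_client == key_at_server
```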
Since the design of the SSL handshake protocol has already resolved the key questions, below
we'll cover specific design details: the aforementioned design approach is achieved through
four communications between the server and the client, thereby ensuring that highly efficient,
securely encrypted packet transmission can be conducted following the handshake stage.
The specific contents of the four communications involved in the SSL handshake is shown in
Figure 7-3. It is important to note that all communications in this stage are cleartext.
Figure 7-3 SSL handshake process
(The four communications between the SSL VPN client and the SSL VPN server carry the following messages:
1. Client → Server: Client Hello
2. Server → Client: Server Hello, Server Certificate, Server Key Exchange, Client Certificate Request, Server Hello Done
3. Client → Server: Client Certificate, Client Key Exchange, Certificate Verify, Change Cipher Spec, Client Finished Message
4. Server → Client: Change Cipher Spec, Server Finished Message)
1. Client sends request (Client Hello)
The client (generally the browser) first sends a communication encryption request to the
server. The primary information provided to the server in this step is as follows:
(1) The protocol version supported, for example TLS version 1.0.
(2) A random number generated by the client, to be used a bit later in generating the
"session key".
2. Server's reply (Server Hello)
After receiving the client's request, the server sends a reply to the client. This step
includes the following information:
(1) Confirmation of the version of the cryptographic communication protocol being
used, for example TLS version 1.0. If the version is not supported by both the
browser and the server, then the server closes encrypted communication.
(2) A random number generated by the server, to be used a bit later in generating the
"session key".
(3) Confirmation of the cipher suite.
(4) A server certificate that contains the server's public key.
The SSL handshake protocol supports two-way authentication between the client and the server. If the
server needs to verify the client, the server must send a request to authenticate the certificate of the client
in this step.
3. Client response
After the client receives the server's reply, it first authenticates the server's certificate. If
the certificate has not been issued by a trustworthy authority, the domain name provided
in the certificate is not the real domain name, or the certificate has expired, an alert will
be displayed in which a choice can be made as to whether or not to continue the
communication. If there is no problem with the certificate, the client will extract the
server's public key from the certificate. Following this it will send the following three
items of information to the server:
(1) A pre-master key, encrypted using the server's public key. This prevents
eavesdropping, and the pre-master-key will be used momentarily to generate the
"session key". At this time the client will have three random numbers, and can
compute the "session key" to be used for this session.
(2) A change cipher spec notice, expressing that all future information will be sent
using the encryption method and key negotiated by the two parties.
(3) The client's handshake finish notice, indicating that the client's handshake stage has concluded. This is accompanied by a hash value of all the content sent previously, for the server to verify.
4. The server's final response
After the server receives the client's random pre-master key, it computes and generates
the "session key" to be used for this session (the computing method and computed results
are the same as the client's). Following this, the below, final information is sent to the
client:
(1) A change cipher spec notice, expressing that future information will all be sent
using the encryption method and key negotiated by the two parties.
(2) The server handshake finish message, expressing the end of the server's handshake
stage.
Now that everyone has finished reviewing the specific content of the SSL handshake
protocol's four communications, I think it likely that some additional questions have
arisen. As always, I'm ready to serve:
(1) When the random pre-master-key appears, the client and server already have three
random numbers, and the two then use the previously negotiated encryption method
to each generate the same "session key" to be used in this session. Why do three
random numbers need to be used to generate the "session key"?
Answer: Using three random numbers to obtain the final symmetric key increases security. The pre-master key exists because the SSL protocol does not trust that every host can generate "completely random" random numbers. If the random numbers are not truly random, it may be possible to infer them, creating security problems; three pseudo-random numbers combined, however, come extremely close to being random.
(2) During the SSL handshake protocol's second communication, when the server responds (Server Hello), it sends its own certificate, and the client immediately verifies the server's certificate; that is, the client verifies the server's legitimacy. Is this related to the alert encountered in Table 7-1, when we were demonstrating logging in to the SSL VPN server, that "there is a problem with this website's security certificate"?
Answer: Actually, the SSL protocol begins to operate from when the client (the remote user) accesses the SSL VPN server through HTTPS in step 1 of Table 7-1. The notice in step 2 corresponds exactly to the second communication of the SSL handshake protocol: at this point the server sends its own local certificate to the client, and the client needs to authenticate the server's certificate. The appearance of the alert indicates that the client believes this server's certificate is not trustworthy. If this reminder appears during everyday access to online banking or similar sites, we need to increase our vigilance to avoid straying into phishing websites. Here, however, we'll choose to trust this website and continue.
My explanation of the SSL handshake protocol's operating mechanisms is now complete.
Take a deep breath—we still need to test the principles behind this from a practical
standpoint.
Step 1 Use a firewall as the SSL VPN server and complete the configurations on the firewall.
The SSL VPN server functionality on a firewall is called a virtual gateway; the virtual gateway's address/domain name is the SSL VPN server's address/domain name.
a. Configure the virtual gateway, enable the SSL VPN server function, and configure the server address.
b. Configure the authentication method as local authentication, and create the user (including the user name and password).
c. Configure security policies to ensure network connectivity. We will introduce methods for configuring security policies in section 8.7 "Configuring Security Policies."
Step 2 Follow the steps provided at the beginning of this chapter demonstrating how to log in to the SSL VPN server from a client, and use the configured user name/password to log in to the SSL VPN server.
a. Client 10.108.84.93 initiates a connection request from an IE browser to the firewall's virtual gateway at https://10.174.64.61. In the figure below, numbers 21-29 show the entire four-communication SSL handshake process. After the server replies with its Server Finished Message (encrypted) in No. 29, the alert interface stating that the
security certificate has a problem appears. At this point, the client and server have
actually not begun normal communication, but are in the SSL handshake stage. The
client has verified that the server is not legitimate, and the client's Web interface has
asked the user whether or not they want to continue browsing this website.
b. Select "continue to this website". Beginning with No. 103, the client requests a new session and again initiates the SSL handshake protocol. After the handshake completes, normal encrypted communication begins and continues until the user's browser successfully loads the firewall's virtual gateway's user login interface.
c. Enter the user name and password. Beginning with No. 1561, the SSL handshake protocol is initiated again. Following the four communications, the "session key" is negotiated, and from the packet carrying the user name and password onward, all data between the user and the server is encrypted (shown as "Application Data") before being sent to the server.
----End
7.1.4 User Identity Authentication
To guarantee the legitimacy of SSL VPN remote users and improve system security, the SSL VPN server supports multiple authentication methods. In the previous example, the user name and password are configured and stored on the firewall. This is the most basic and simple authentication method. Huawei's firewalls support the following authentication methods:
• Local user name/password authentication: The user name and password are configured and stored on the firewall. The user can log in by simply entering the matching user name/password.
• User name/password authentication on a server: The user name and password are stored on a dedicated third-party authentication server. After the user enters the user name and password, the firewall forwards them to the authentication server for authentication. Currently supported authentication server types include RADIUS, HWTACACS, SecurID, AD, and LDAP.
• Certificate anonymous authentication: A client certificate is configured on the user's client, and the firewall verifies the client certificate to authenticate the user's identity.
• Certificate challenge authentication: The server uses two-factor authentication (user name/password + client certificate) to verify a user's identity. This method is clearly the most secure:
− If only client certificate authentication is used, security cannot be guaranteed if the client is lost or illegally used.
− If only the user name/password is used for authentication, logging in from a different client may present a security hazard.
The two-factor authentication method ensures that a designated user uses a designated
client to log in to the SSL VPN server, thereby legitimately accessing internal network
resources.
Local user name/password authentication and user name/password authentication on a
third-party server are the most common authentication methods, and will not be described
further here. Below, I will introduce certificate authentication.
Certificate challenge authentication adds one round of user name/password authentication on top of certificate anonymous authentication, but the principles are the same, so the two can be described together.
The firewall (the SSL VPN server) verifies the client's certificate to authenticate the user's
identity, as shown in Figure 7-4.
Figure 7-4 Certificate authentication process
(1. A client certificate is installed on the client, and the client CA certificate on the firewall. 2. The user sends the client certificate to the firewall for authentication; optionally (2'), challenge password authentication checks the User field in the certificate plus an entered password against an auxiliary password authentication system (local or a third-party server). 3. After authentication, the user's service data flows to the resource server.)
The process is described as follows:
1. Import a client certificate on the client, and import the client CA certificate on the firewall.
2. The user (client) sends its own certificate to the firewall for authentication, which succeeds if the following conditions are met:
− The client's certificate and the client CA certificate imported onto the firewall are issued by the same CA.
− The client certificate is within its validity period.
− The user filtering field in the client certificate is a user name that has already been configured and stored on the firewall. For example, if the client certificate's user filtering field reads CN=user000019, the certificate is issued for user000019; therefore, user000019 must have been configured on the firewall.
3. After passing the firewall's identity authentication, the user can log in to the resource interface and access the specified resources on the internal network.
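The three conditions above can be sketched as one check. The certificate dictionaries below follow the shape returned by Python's ssl.SSLSocket.getpeercert(); the CA name, user name, and dates are hypothetical, and the issuer comparison merely stands in for real CA signature verification:

```python
import ssl
import time

# Sketch of the three acceptance checks described above, applied to a
# certificate in the dictionary form returned by ssl.SSLSocket.getpeercert().
# Illustrative only: the "same CA" check is reduced to comparing issuer names,
# whereas a real device verifies the CA's signature on the certificate.
def cert_is_acceptable(cert, trusted_issuer, configured_users):
    issuer_ok = cert["issuer"] == trusted_issuer          # issued by the imported CA
    now = time.time()
    validity_ok = (ssl.cert_time_to_seconds(cert["notBefore"]) <= now
                   <= ssl.cert_time_to_seconds(cert["notAfter"]))  # within validity period
    common_name = dict(item[0] for item in cert["subject"]).get("commonName")
    user_ok = common_name in configured_users             # e.g. CN=user000019 must exist on the firewall
    return issuer_ok and validity_ok and user_ok

# A hypothetical certificate issued by "ExampleCA" for user000019:
demo_cert = {
    "issuer": ((("commonName", "ExampleCA"),),),
    "subject": ((("commonName", "user000019"),),),
    "notBefore": "Jan  1 00:00:00 2020 GMT",
    "notAfter": "Jan  1 00:00:00 2099 GMT",
}
assert cert_is_acceptable(demo_cert, ((("commonName", "ExampleCA"),),), {"user000019"})
```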
Above, I, Dr. WoW, have already displayed a packet capture from the SSL handshake stage
when using the user name/password to log in to the firewall's virtual gateway, and below we'll
change the authentication method to certificate anonymous authentication to take a look at
how the server authenticates a client certificate during the transfer of encrypted data.
After the certificate that the client needs to use has been configured on the firewall's virtual gateway interface, the packet capture information is as shown below. It is impossible to discern what these packets are from this information alone, so we import the firewall's (the SSL server's) private key and use the packet capturing tool to decode the captured packets.
To compare the left and right columns briefly, we can see that in No. 895, the first message
that appears as 'Encrypted Handshake Message' is actually a Hello Request sent from Server
10.174.64.61 to Client 10.108.84.93. The client then replies, after which the server sends a
Server Hello. Following this message, the server sends the client a request to authenticate the
client's certificate. From the packet capture, it seems that this negotiation wasn't successful for
some reason, and negotiation between the client and the server will continue.
Beginning with No. 1045, the server again initiates a Hello Request, and operations continue. In No. 1085, the server requests that the client provide a certificate. In No.
1088, the client sends its certificate to the server, and in No. 1097 the server authenticates the
client's certificate, with the packet capture displaying that the certificate is illegitimate and
cannot pass authentication. Although authentication was unsuccessful, the aforesaid
information factually reflects the entire process of server authentication of the client's
certificate; please compare the left and right sides to aid your understanding.
Above, I've finished my entire introduction of the process of establishing a connection
between a remote user and the SSL VPN server and successfully logging in to the SSL VPN
server. In the following sections, I will use the USG6000 family of firewalls as an example,
and first introduce file access and Web access (of e-mails, etc.; file access and Web access are
extremely common uses in an office scenario), and then introduce port forwarding and network extension, organizing our discussion from finer to coarser access control granularity.
We're using the USG6000 family of firewalls in our introduction because compared to the
USG2000/5000 firewall series, the SSL VPN functionality on the USG6000 has an improved user
authentication method, using the universal authentication method provided by the Firewall (this was
introduced in section 7.1.4 "User Identity Authentication") to render the configuration logic and process
clearer and easier to understand. Below, we'll shift our focus to SSL VPN operational configuration,
resource authorization and access control to introduce SSL VPN functionality, and will not give a further
detailed introduction regarding user authentication.
7.2 File Sharing
7.2.1 File Sharing Use Scenarios
In the introduction in the last section, we learned that a great difference between SSL VPN and IPSec is that SSL VPN can refine the granularity of a remote user's access down to a designated resource object, for example a file or a URL. In order to allow remote users to instantly understand their own access permissions, the virtual gateway provides an especially friendly and personalized platform: it combines files and URLs into a "custom-made" resource list to show to the remote user. It is as if the virtual gateway were a fashionable, new-wave restaurant that not only sells gourmet food but also customized service, allowing different menus to be tailored for customers with different tastes.
This isn't its only special feature. As most enterprises, out of security considerations, don't
want to make their internal server's resource addresses (URL or file path) public, the SSL
VPN therefore also provides a "resource address encryption" service that rewrites the
resource's path, allowing the remote user to not only smoothly access internal network
resources, but also making it very difficult to discover the internal network resource address.
This is like calling a simple potato dish "Spheres of Glory", or calling a hotdog "the King's Scepter": at first glance the name's meaning is unclear, and a great deal of time and effort is required to decipher it. But I digress; let's begin with the first dish:
To put things simply, the SSL VPN's file sharing function allows remote users to securely
access company internal file servers directly using a browser, and supports file operations
such as creating new files, editing, uploading and downloads, as shown in Figure 7-5.
Figure 7-5 SSL VPN file sharing use scenario
(The client reaches the firewall's GE0/0/1 interface (4.1.64.12/24) over SSL VPN; the firewall's GE1/0/1 interface (4.0.2.1/24) connects to the internal network, which hosts an SMB server at 4.0.2.11/24 and an NFS server.)
At the moment, file sharing protocols that are relatively popular in companies include SMB
(Server Message Block) and NFS (Network File System). The former is primarily used in the
Windows operating system, while the latter is primarily used in the Linux operating system.
Huawei firewalls' SSL VPN is compatible with both of these protocols, so we don't need to
worry about this. The following content will use the SMB protocol as an example, and will
make use of the domain controller, a common authentication method, in introducing the file
sharing interaction.
In Figure 7-6, it can be seen that the firewall serves as a proxy device, and that its
communication with the client is always encrypted through the HTTPS (HTTP+SSL) protocol.
After the encrypted packet reaches the firewall, the firewall decrypts it and conducts protocol
conversion. Finally, the firewall serves as the SMB client and initiates a request to the
corresponding SMB file sharing server, and this also involves the file server authentication
process. Based upon the protocols used in communication, the aforesaid process can be
summarized into two phases:
1. HTTPS interaction between the remote client serving as the Web client and the firewall serving as the Web server.
2. SMB interaction between the firewall serving as the SMB client and the file server (the SMB server).
Figure 7-6 SSL VPN file sharing process
Below, we'll describe file sharing configuration methods in detail and the principles behind
file sharing.
7.2.2 Configuring File Sharing
Before officially introducing the packet exchanges involved in file sharing, we will first
assume that file sharing resources have already been configured on an SMB file server (here
we'll use Windows Server 2008 as an example), and that permissions have been granted on
the domain controller:
Resource access address: \\4.0.2.11\huawei
Configuration of user permissions: admin has read/write permissions; usera has only the read permission.
The virtual gateway serves as the SSL VPN's entrance for all resources. Any resources that
need to be accessed must appear in the SSL VPN configuration—this also embodies the SSL
VPN's design approach that allows the granularity of access control to be refined. File sharing
first requires turning on the file sharing function and creating new file sharing resources, with
the goal being to provide a visible file sharing resources "menu" for the remote user, as shown
in Figure 7-7.
Figure 7-7 Configuring file sharing
7.2.3 Interaction Between the Remote User and the Firewall
After a successful login, the resources the virtual gateway makes available to the user will
appear on this interface. Hovering the mouse over a resource allows for the resources'
corresponding Web link to be seen in the browser status bar; this link includes the pages that
need to be requested from the firewall and parameters that need to be delivered, as shown in
Figure 7-8. We don't want to underestimate this URL, as it represents the remote user's
requested file resource information and corresponding operational commands. Different
directories and operations will each correspond with different URLs.
Figure 7-8 SSL VPN login interface—file sharing
https://4.1.64.12/protocoltran/Login.html?VTID=0&UserID=4&SessionID=2141622535&ResourceType=1&ResourceID=4&PageSize=20&%22,1)
Q: Why can't the file resource \\4.0.2.11\huawei mentioned above be seen here?
A: Because the firewall has hidden it. The resource's address can be identified only through its Resource ID, and the correspondence between the Resource ID and the resource's address is stored in the firewall's brain (memory). This hides the internal server's real address, protecting server security.
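The hiding described in the answer amounts to an indirection table kept only on the firewall. A toy sketch, with hypothetical table contents mirroring the example above:

```python
# Sketch of the indirection the firewall performs: the mapping from public
# ResourceID to the real resource address lives only on the firewall, so the
# URL seen by the remote user never exposes the internal path.
# (Hypothetical table contents, mirroring the \\4.0.2.11\huawei example.)
resource_table = {4: r"\\4.0.2.11\huawei"}   # ResourceID -> hidden internal address

def resolve(resource_id):
    """Only the firewall can turn a public ResourceID back into the real path."""
    return resource_table[resource_id]

# The remote user's link carries only ResourceID=4; the firewall privately
# resolves it to the hidden SMB share:
assert resolve(4) == r"\\4.0.2.11\huawei"
```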
To further our analysis of this Web link, in addition to the obvious fact that 4.1.64.12 is the virtual gateway's address, I will break the remaining portions of the link's structure into three parts:
• protocoltran is the special directory for file sharing. The name suggests protocol + transform, indicating that this directory carries out conversion back and forth between the HTTPS protocol and the SMB/NFS protocols.
• Login.html is the request page. Generally speaking, different operations correspond with different request pages. I have organized all of the request pages and request result pages that may be used in Table 7-2.
Table 7-2 File sharing request pages and request result pages

Page Name                                   Meaning
login.html / loginresult.html               SMB file server authentication page and its result page.
dirlist.html                                Shows the folder structure and a detailed list of file sharing resources.
downloadresult.html / downloadfailed.html   Downloads files.
create.html / result.html                   Creates folders.
deleteresult.html / result.html             Deletes files and folders.
rename.html / result.html                   Renames files and folders.
upload.html / uploadresult.html             Uploads files.
- ?VTID=0&UserID=4&SessionID=2141622535&ResourceType=1&ResourceID=4&PageSize=20&%22,1 are the parameters transmitted to the request page. Here I will first give a detailed parameter table. In addition to the parameters carried in this URL, I've also included request parameters for other operations to aid everyone's understanding.
Table 7-3 Request page parameter details

- VTID: The virtual gateway ID, used to distinguish between multiple virtual gateways on the same firewall.
- UserID: The user ID, identifying the currently logged-in user. For security purposes, the ID is different for each login by the same user, to prevent a man-in-the-middle attack from fabricating data packets.
- SessionID/RandomID: The session ID; all session IDs for the same login to the virtual gateway are the same.
- ResourceID: The resource ID, identifying each file sharing resource.
- CurrentPath: The file path of the current operation.
- MethodType: The type of operation. 1: deleting folders; 2: deleting files; 3: displaying a directory; 4: renaming a directory; 5: renaming a file; 6: creating a new directory; 7: uploading a file; 8: downloading a file.
- ItemNumber: The number of operation objects.
- ItemName1: The name(s) of the operation object(s); an operation can cover multiple targets, for example deleting multiple files.
- ItemType1: The type of operation object. 0: file; 1: folder.
- NewName: The new name (for rename operations).
- ResourceType: The resource type. 1: SMB resources; 2: NFS resources.
- PageSize: The number of resource items displayed on each page.
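Structurally, this kind of request URL is just a path plus a query string, so it can be pulled apart with standard URL parsing. A minimal sketch (the values are the illustrative ones from this example, with the parameter names taken from Table 7-3):

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical file-sharing request URL in the form shown in Figure 7-8;
# parameter names come from Table 7-3, and the values are illustrative only.
url = ("https://4.1.64.12/protocoltran/Login.html"
       "?VTID=0&UserID=4&SessionID=2141622535"
       "&ResourceType=1&ResourceID=4&PageSize=20")

parts = urlsplit(url)
params = {name: values[0] for name, values in parse_qs(parts.query).items()}

print(parts.path)              # the request page: /protocoltran/Login.html
print(params["ResourceID"])    # 4 -> looked up in the firewall's memory
print(params["ResourceType"])  # 1 -> SMB resource, per Table 7-3
```

This is exactly the decomposition the virtual gateway must perform on each request before it can map the Resource ID back to the hidden server address.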
In order to allow everyone to gain an understanding of the entire panorama of file sharing
functions, I will give a further, one-by-one explanation of some of the aforesaid operations
and commands using specific file sharing functions as examples.
1. Verification of file server authentication
When accessing file sharing resources for the first time, the file server's authentication must first be passed.
The authentication stated here must be distinguished from the authentication that occurs when
logging in to the SSL VPN. In the login stage, the first thing the remote user needs to pass is
the firewall's authentication. This time we want to access file sharing resources, and of course
need to check whether or not the file server agrees to this. When "Public_Share" in the resource list is clicked, an authentication page will pop up, as shown in Figure 7-9.
Figure 7-9 File sharing login
After the authentication succeeds, the file resource page appears, as shown in Figure 7-10.
Figure 7-10 File sharing file operations
We know the above access process can be divided into two stages (authentication and folder
display), but is the real interaction process like this? A packet capture analysis of the
interaction process is as below:
That's right, it looks like my understanding was correct. Login.html and LoginResult.html are both authentication pages, and after the encrypted packet is decrypted, we can see that LoginResult.html carries the user name and password awaiting authentication by the file server. In addition, Dirlist.html is the page that displays the folder structure.
2. Verification of file downloading
The file download page and the corresponding URL are as shown in Figure 7-11.
Figure 7-11 Downloading files
Using the above table, the file download operation can be put into words: the download (MethodType=8) is of a file (ItemType1=0) named readme_11 (ItemName1=%r%e%a%d%m%e_%1%1) in the root directory (CurrentPath=2F). But it is important to note that some of the URL content here is encoded. For example, decoding CurrentPath's value ('2F') gives '/', which expresses the current resource's root directory.
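Treating these values as hexadecimal character codes and percent-separated characters reproduces the decoding described above (a sketch; the gateway's exact encoding scheme is not documented here and is inferred from the examples):

```python
# CurrentPath carries the path as hexadecimal character codes: '2F' is the
# hex code of '/', i.e. the resource root directory.
current_path = bytes.fromhex("2F").decode("ascii")
print(current_path)  # /

# ItemName1 separates every character with '%'; stripping the separators
# recovers the plain file name from this example.
item_name = "%r%e%a%d%m%e_%1%1".replace("%", "")
print(item_name)  # readme_11
```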
3. Verification of folder renaming
The rename folder page is shown in Figure 7-12.
Figure 7-12 Renaming folders
As usera only has the read permission, a failure notice is given, but this doesn't stop us from continuing our analysis: the folder (ItemType1=1) userb (ItemName1=%u%s%e%r%b) in the root directory (CurrentPath=2F) is being renamed usera (NewName=%u%s%e%r%a); the corresponding URL is shown in Figure 7-13.
Figure 7-13 URL corresponding to folder rename operation.
Through the above introduction, I believe everyone now understands that the firewall's construction of these links serves first to hide the true internal network file resource path (\\4.0.2.11\huawei\), and second to act as a bridge for remote user access: as the SSL VPN gateway it receives the remote user's request, and as the SMB client it initiates file access to the SMB server (defining the file object to be accessed and the operation on it).
7.2.4 Interaction of the Firewall with the File Server
A packet capture from between the firewall and the file server is shown below.
1. Firewall 4.0.2.1, serving as the client, initiates a negotiation request to file server 4.0.2.11. First to be negotiated is the SMB version (dialect). The firewall currently only supports SMB 1.0 (NT LM 0.12) when acting as the client interacting with the server.
2. The server's response contains the authentication method to be used next and a 16-bit challenge random number. A secure authentication mechanism is used here: the NT challenge/response mechanism, known as NTLM.
The authentication process is roughly as follows:
a. The server generates a 16-bit random number and sends it to the firewall, to serve as the challenge.
b. The firewall hashes the user password and uses the result to encrypt the received challenge. It returns this, together with its own user name (in plaintext), to the server.
c. The server sends the user name, the challenge, and the encrypted challenge returned by the firewall to the domain controller.
d. The domain controller uses the user name to find the user's password hash in the password administration database, encrypts the challenge with it, and compares the result with the encrypted challenge sent by the server. If they are the same, authentication is successful.
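Steps a to d can be sketched in a few lines. This is a simplified model of the challenge/response principle only: real NTLM uses its own hash and encryption algorithms and field sizes, so the SHA-256/HMAC choices below are stand-ins, not the actual protocol.

```python
import hashlib
import hmac
import os

def challenge_response(password_hash: bytes, challenge: bytes) -> bytes:
    # The responder proves knowledge of the password hash by keying a MAC
    # over the challenge; the password itself never crosses the wire.
    return hmac.new(password_hash, challenge, hashlib.sha256).digest()

# a. The server generates a random challenge and sends it to the firewall.
challenge = os.urandom(16)

# b. The firewall hashes the user password and answers the challenge.
pw_hash = hashlib.sha256(b"user-password").digest()
answer = challenge_response(pw_hash, challenge)

# c./d. The domain controller looks up the stored password hash, recomputes
# the response, and compares it with the one forwarded by the server.
stored_hash = hashlib.sha256(b"user-password").digest()
if hmac.compare_digest(answer, challenge_response(stored_hash, challenge)):
    print("authentication successful")
```

Note that an eavesdropper who captures the challenge and the response still cannot recover the password hash, which is the point of the mechanism.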
After authentication succeeds, the user can access the designated file or folder.
To summarize the above, we can see that the firewall's role in the file sharing function is actually that of a proxy, an intermediary between the remote user and the SMB server: in the HTTPS stage it serves as the Web server, receiving the file access request from the remote user and translating it into an SMB request; in the SMB stage it serves as the SMB client, initiating the request, receiving the response, and translating it for the remote user. With the file sharing function, remote user access to the internal file server is just as convenient as access to ordinary Web pages—the user doesn't need to install a file sharing client, doesn't have to remember the server's IP address, and won't get lost among a multitude of servers.
7.3 Web Proxy
Although both involve object-level resource access, URL access and file sharing are not the same. When accessing a URL, the HTTP protocol is used, and as the SSL protocol is a natural-born partner to HTTP, no protocol conversion is necessary in the Web proxy function. Even so, we still want to focus on the two most critically important areas in our description here: URL-level access control and hiding the real URL address.
The Web proxy service means accessing an internal network's Web server resources (URL resources) with the firewall as a proxy. Here you might ask: isn't this just an ordinary proxy function? When one server is used as a springboard to access a destination URL, that server acts as a proxy—isn't the firewall doing the same? The answer is that the two are not completely the same: throughout the entire process the firewall not only acts as a proxy, but also rewrites the real URL, thereby hiding the real internal network URL and further protecting the security of the internal network Web server.
7.3.1 Configuring Web Proxy Resources
Let's assume that a company has already set up a Web server and provided a portal address for
the company internal network (http://portal.test.com:8081/), and hopes to use the Web proxy
function to provide access for remote users.
Just as with file sharing resources, in order to refine the granularity of access control to the
URL level, it is necessary to configure a corresponding Web proxy resource in the virtual
gateway, as shown in Figure 7-14.
Figure 7-14 Web proxy resource list—creating a new resource.
In the above configuration, the most important parameter is the resource type, which defines
the Web proxy method. Proxy methods include Web rewriting and Web-Link, and the
differences between the two are as shown in Table 7-4.
Table 7-4 Web rewriting and Web-Link comparison

- Security: Web rewriting rewrites the real URL, hiding the internal server's address, and so confers strong security. Web-Link cannot rewrite the URL and directly forwards Web requests and responses, which can reveal the internal server's real address.
- Ease of use: Web rewriting doesn't rely on IE controls and can be used normally in non-IE browsers. Web-Link relies on IE controls and cannot be used normally in non-IE environments.
- Compatibility: As Web technology has developed very quickly, firewalls cannot rewrite every single class of URL resource, so Web rewriting may show problems such as misplaced pictures and abnormal-looking fonts. Web-Link does not need to rewrite resources (the firewall directly forwards requests and responses), so there are no page compatibility problems.
- Use advice: Web rewriting is the preferential choice, as it is the most secure and convenient access method; if page display abnormalities appear, the Web-Link method can be considered. Web-Link is the best substitute for Web rewriting, but due to its reliance on IE controls there are still limitations on its use; moreover, it does not rewrite the internal network URL, meaning there is a security risk.
In Table 7-5 I list the meaning of some other parameters.
Table 7-5 Details of Web proxy parameters

- URL: A Web application address that can be directly accessed from the internal network. If it is in domain name format, a corresponding DNS server address must be configured on the virtual gateway.
- Resource group: Equivalent to a user-defined classification of Web application addresses; after logging in, the remote user can filter the needed resources by resource group, like the entrée and beverage groupings on a menu.
- Portal link: Selects whether or not Web proxy resources appear on the virtual gateway's homepage after login. If not selected, this is like preparing a 'house dish' that is not on the menu for an old customer: the 'old user' can, after logging in, manually input a URL in the address bar in the upper right-hand corner and access some relatively confidential URL resources.
I will now take everyone on a further exploration of how Web rewriting actually works. As for Web-Link, I've only given a brief introduction here, as I will highlight it in 7.4 Port Forwarding.
7.3.2 Rewriting URL addresses
From the URL address that actually appears to the user, we can see that the Web proxy
resource URL configured above, http://portal.test.com:8081/, has been rewritten.
Figure 7-15 SSL VPN login interface—Web proxy
To analyze the rewriting results: in the address, 4.1.64.12 is the virtual gateway address, and the remaining portions can roughly be broken down as:
- webproxy: the Web proxy's exclusive directory.
- 1/1412585677/4: UserID/SessionID/ResourceID; these parameters were already covered in the introduction to file sharing.
- http/portal.test.com:8081/0-2+: the altered form of the original URL address.
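The breakdown above is just a segment-by-segment split of the rewritten path. A sketch (the trailing '0-2+' token is the gateway's internal encoding of the original URL and is left opaque here):

```python
# Hypothetical rewritten path from the example above.
rewritten = "/webproxy/1/1412585677/4/http/portal.test.com:8081/0-2+"

(_, directory, user_id, session_id,
 resource_id, scheme, host_port, rest) = rewritten.split("/", 7)

print(directory)                         # webproxy
print(user_id, session_id, resource_id)  # 1 1412585677 4
print(f"{scheme}://{host_port}/")        # http://portal.test.com:8081/
```

Reading the segments back out like this is essentially what the virtual gateway does to recover the real internal URL from a rewritten request.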
When the user accesses the rewritten address, the following exchange occurs.
2. The remote user makes a request to the firewall for the rewritten URL address.
Before arriving at the firewall, the request packet is in an encrypted state. The above screenshot was taken after decryption, so we can also understand it as the real request received by the firewall.
3. After the firewall decrypts the received packet, but before it sends the request to the internal server, it performs the following further processing on the original packet:
a. The original packet header's Accept-Encoding field is deleted; otherwise the Web server might compress the response packet, and the virtual gateway would be unable to decompress it and thus unable to rewrite and forward it. In the screenshot below, it can be seen that the firewall has already deleted the original packet's Accept-Encoding field.
b. The real internal network Web resource address is substituted in for the Host field.
c. The Referer field in some URLs related to this Web resource is rewritten to be the real internal network Web resource address.
4. The firewall, serving as the Web client, sends the rewritten data to the real Web server.
After this comes normal HTTP exchange, which we won't elaborate on further here.
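The header clean-up in steps a to c can be sketched as operations on a header dictionary. This is a toy model: the header names are standard HTTP, but the host values and the helper function are illustrative assumptions, not the firewall's actual implementation.

```python
INTERNAL_HOST = "portal.test.com:8081"  # assumed internal Web server
GATEWAY_HOST = "4.1.64.12"              # assumed virtual gateway address

def rewrite_request_headers(headers: dict) -> dict:
    out = dict(headers)
    # a. Drop Accept-Encoding so the server does not compress the response
    #    in a form the gateway could not rewrite.
    out.pop("Accept-Encoding", None)
    # b. Substitute the real internal host for the gateway's host.
    out["Host"] = INTERNAL_HOST
    # c. Point the Referer back at the real internal resource.
    if "Referer" in out:
        out["Referer"] = out["Referer"].replace(
            f"https://{GATEWAY_HOST}/webproxy", f"http://{INTERNAL_HOST}")
    return out

hdrs = rewrite_request_headers({
    "Host": GATEWAY_HOST,
    "Accept-Encoding": "gzip, deflate",
    "Referer": "https://4.1.64.12/webproxy/index.html",
})
print(hdrs)
```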
7.3.3 Rewriting Resource Paths in URLs
The firewall receives the response packet—the page that needs to be displayed to the user (we'll use the home page http://portal.test.com:8081/ as an example)—and also needs to rewrite some resource paths in the page. If the resource paths are not rewritten, the client will use erroneous or non-existent addresses to fetch the resources, and the corresponding content ultimately cannot be displayed normally. At present, firewalls support rewriting for the following page resources:
- HTML attributes
- HTML events
- JavaScript
- VBScript
- ActiveX
- CSS
- XML
The firewall can rewrite the internal paths of these resources for normal page display and
function use.
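As a rough illustration of the idea for the simplest case, HTML attributes, a path pointing at the internal server can be re-pointed at the gateway with a textual rewrite. This is not the firewall's actual algorithm (real gateways must also parse JavaScript, CSS, and so on); the gateway prefix below is an assumed value matching the earlier example.

```python
import re

GATEWAY_PREFIX = "https://4.1.64.12/webproxy/1/1412585677/4"  # assumed prefix
INTERNAL_BASE = "http://portal.test.com:8081"

def rewrite_paths(html: str) -> str:
    # Rewrite src/href attributes that point at the internal server so the
    # browser fetches them back through the virtual gateway.
    return re.sub(
        r'(src|href)="' + re.escape(INTERNAL_BASE),
        r'\1="' + GATEWAY_PREFIX,
        html)

page = '<img src="http://portal.test.com:8081/logo.png">'
print(rewrite_paths(page))
# <img src="https://4.1.64.12/webproxy/1/1412585677/4/logo.png">
```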
7.3.4 Rewriting Files Contained in URLs
Actually, in the last sub-section we already gave a partial introduction to file rewriting.
However, that was all based on the rewriting of requested page resources, which is to say that the user had no need to perceive the rewritten content—what the user cared about was whether the page displayed normally and whether Web functionality worked. What we'll talk about next, however, are the files near and dear to the user's heart, including PDF, Java Applet, and Flash files.
Using PDF as an example, we've embedded a.pdf into http://portal.test.com:8081/ and provided it to the user for download in the form of a link. The PDF's content, shown below, includes a link that can only be accessed on the internal network (http://support.test.com/enterprise). If the firewall doesn't rewrite this link, then when the remote user opens the downloaded PDF and attempts to access it, access will fail, as shown in Figure 7-16.
Figure 7-16 File contained in a URL
But when the PDF file contained in a Web proxy resource is downloaded through the virtual gateway and opened locally, the display is as below. As you can see, the original internal network URL in the file has been rewritten, and the rewritten URL begins with the virtual gateway address. In this way, external network users can access the internal network resource embedded in the PDF file. This is shown in Figure 7-17.
Figure 7-17 Rewriting a file contained in a URL
7.4 Port Forwarding
File sharing and Web proxies can address the majority of remote users' needs for access to internal resources; however, under some circumstances (for example, access to TCP-based non-Web applications such as Telnet, SSH, and email), file sharing and Web proxies appear helpless. In practice this is not the case, and in this section I will introduce port forwarding, SSL VPN's third remarkable function, to everyone.
Port forwarding, to put things simply, is using a special port forwarding client program on the
remote user side to obtain a user's access requests, and then forwarding these to the internal
network's corresponding server through the virtual gateway. Next, we'll use Telnet, the most
commonly used application, as an example to introduce the configuration and processes
involved in remote clients accessing internal networks through SSL VPN.
7.4.1 Configuring Port Forwarding
A port forwarding use scenario is shown in Figure 7-18.
Figure 7-18 Port forwarding use scenario
[Topology: a remote client (4.1.64.179) connects through the SSL VPN firewall (external interface 4.1.64.11, internal interface 10.1.1.2) to the internal network, which hosts a Telnet server (10.1.1.1), an FTP server, and an email server.]
Just as with the other SSL VPN functions introduced above, regardless of what kind of
application is being accessed, corresponding resources need to first be added to the virtual
gateway. For Telnet, all that is required is to configure a Telnet server's IP address and port on
the virtual gateway, as shown in Figure 7-19.
Figure 7-19 Port forwarding—adding a new resource.
There are two methods to enable the port forwarding function. The first is for the remote user to manually enable it in the virtual gateway interface that appears after login. The second is for the administrator to configure it to be automatically enabled on the client after login, as shown in Figure 7-20. In addition, the administrator can also choose whether or not to keep port forwarding connections alive. This is important because some applications' access continues for a relatively long time (for example, the remote user may suddenly need to step away for a while during a Telnet session); selecting this option prevents the port forwarding service from being interrupted by an SSL connection timeout.
Figure 7-20 Configuring port forwarding
The data handling process for port forwarding is comparatively complex. Here I'll give a simple figure (Figure 7-21), and below I'll walk through it step by step.
Figure 7-21 Port forwarding handling process
[Diagram: remote user (Telnet client + port forwarding client), virtual gateway, and Telnet server.
Preparation: 1. Log in to the SSL VPN. 2. Download the ActiveX control and deliver resource information.
Telnet connection establishment: the user initiates a Telnet connection (C:\> telnet 10.1.1.1). 3. The port forwarding client obtains the original request and establishes a local loopback connection (TCP connection 1). 4. It constructs a private header, putting the client socket ID into it, and establishes an SSL connection (open). 5. The virtual gateway decrypts the packet, obtains the connection information, and establishes TCP connection 2 with the internal server. 6. The virtual gateway constructs a private header, putting the server socket ID into it, and sends it over the SSL connection (data). 7. The port forwarding client decrypts the packet and sends it through TCP connection 1, based on the relationship between TCP connection 1 and TCP connection 2, displaying the login page. Through the two private header interactions, the relationship between the TCP connections is established.
Data communications: TCP connection 1 and TCP connection 2 are reused for data communications.]
The port forwarding client here provides the SSL VPN client function, and is only called by this name in
order to emphasize its port forwarding service.
7.4.2 Preparatory Stage
1. Log in to the SSL VPN
This process has already been introduced in 7.1.2 SSL VPN Use Scenarios and will not be elaborated upon further here.
In addition, everyone should be aware that in the sections above I began my analysis from URLs, but our analysis of port forwarding will be different. Although we've once again logged in to the virtual gateway, because this is non-Web application access, the Web is no longer used for the corresponding resource access; instead, other applications such as PuTTY (a Telnet/SSH tool), FileZilla (an FTP tool), and Foxmail (an email program) are used. This gives rise to a question: how does a non-Web application make use of an already logged-in SSL VPN connection?
2. The port forwarding client enters the "listening" state.
It would seem that using a non-Web application for data access shouldn't involve an SSL VPN, but in reality the key technology of port forwarding comes into play here: after a user logs in to the virtual gateway using the Windows operating system's IE browser, the local PC's IE browser will automatically run the port forwarding client (an ActiveX control). The role of this client is to constantly "listen" to all requests from other programs (making the client an 'All-Hearing Listener'; we'll call it the 'Listener' at times below), "intercept" the requests remote users send to the internal server at important moments, and then send these to the virtual gateway over the SSL connection. The Listener doesn't unilaterally choose which of the requests it "listens to" to intercept, but instead strictly implements the instructions given to it by its 'superior officer', the virtual gateway. Now, what orders are given?
The port forwarding resource configured above is actually the order issued by the virtual gateway to the port forwarding client: "if a user wants to access this resource, please assist them in completing their access." In the port forwarding function, the orders given are the destination host IP address + destination port, and only this information can identify the application the remote user wants to access.
As shown in Figure 7-22, after a remote user manually enables the port forwarding
function, the port forwarding client will automatically request resource information from
the virtual gateway. The resource information successfully requested by the client will be
saved in the memory of the remote user's PC, and the client will wait for further orders,
to aid in subsequent selection of which requests are to be "intercepted."
Figure 7-22 Port forwarding—initiation
So that the internal server address is not disclosed, specific resource information cannot be viewed on the port forwarding client, and the resources in the menu can't be directly clicked on; they serve only a simple notification role.
7.4.3 Telnet Connection Establishment Stage
1. Accurate interception by the Listener, displaying its proxy abilities
The Listener is now clear about which requests it needs to "intercept", and the next thing to do is to use its 'ears' to listen for the content it is concerned with. When a user uses Telnet to request a connection to 10.1.1.1's port 23 (a TCP SYN packet), the Listener discovers that this matches the resource information (destination IP + destination port) issued by its commanding officer (the virtual gateway), and immediately "intercepts" this TCP SYN packet. Normally, the Listener would dutifully send the request packet straight to the virtual gateway, but here it considers that doing so without further processing would result in every Telnet request (that is, every TCP connection) establishing a corresponding new SSL connection, which would not only occupy too many system resources but also slow the response.
In order to conserve the virtual gateway's session and memory resources and improve the user experience, the Listener decides to first disguise itself as the receiver and simulate reception of the Telnet service request (the TCP connection) to determine exactly what resources the user wants to access—the strategy here is a centralized proxy method that aims for a single-SSL-connection solution to lessen the pressure on its superior officer (the virtual gateway). How can receipt of Telnet services be simulated? And how can a centralized proxy service be provided? The Listener, this outstanding proxy, has an ingenious plan:
After the port forwarding client receives a Telnet request, it modifies the packet, changing the original request destined for 10.1.1.1 so that it is sent to the client itself (127.0.0.1). This is equivalent to substituting itself for the Telnet server in receiving the request. However, a simulation is still just a simulation, and at the same time the client must record the corresponding pre-modification and post-modification relationship, so that it can later reply to the real user (4.1.64.179) in place of the Telnet server.
The port forwarding client establishes TCP connection 1 (also called the local loopback connection) with itself. The netstat command provides the following verification of this:
C:\> netstat -anp tcp

Active Connections

  Protocol  Local Address    External Address  State
  TCP       127.0.0.1:1047   0.0.0.0:0         LISTENING
  TCP       127.0.0.1:1047   127.0.0.1:7319    ESTABLISHED
  TCP       127.0.0.1:7319   127.0.0.1:1047    ESTABLISHED
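The loopback trick behind this netstat output can be sketched with ordinary sockets: listen on 127.0.0.1, then connect to yourself, producing the same LISTENING + ESTABLISHED pattern (here the OS picks the ports rather than the 1047/7319 pair shown).

```python
import socket

# A minimal sketch of the port forwarding client's local loopback connection.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))     # port 0: let the OS choose a free port
listener.listen(1)
lport = listener.getsockname()[1]

client = socket.create_connection(("127.0.0.1", lport))  # TCP connection 1
conn, peer = listener.accept()

c_local, c_peer = client.getsockname(), client.getpeername()
print("client side:", c_local, "->", c_peer)
print("accepted from:", peer)

client.close(); conn.close(); listener.close()
```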
2. Creating a private packet header and submitting the "port forwarding service request form"
After the simulated receipt of the Telnet service request, the port forwarding client has a complete understanding of the user's request, and in accordance with procedural requirements needs to complete the "port forwarding service request form" and submit it to its superior, the virtual gateway. The service request form must include the destination address (10.1.1.1) and port (23) requested by the user, as well as a command word (establish a connection, transfer data packets, close the connection, etc.) so that the virtual gateway can carry out further processing.
Here we need to note that as the port forwarding client itself simulated the receiver in establishing TCP connection 1, the service request form must carry a marker for this TCP connection (TCP connection 1's socket ID, called the client socket ID). Only in this way can the port forwarding client use the marker to find TCP connection 1 when its superior officer returns the results, and then send the returned results to the corresponding Telnet client.
The "port forwarding service request form" is called a private packet header (or simply a private header) in the port forwarding service, as shown in Table 7-6. Here we'll only look at the Telnet connection request packet, and will explain the primary fields of the private packet header. In the Telnet connection establishment stage the packet's payload is empty, so during transmission there is only a private packet header; the packet doesn't carry a payload until the data transfer stage.
Table 7-6 Port forwarding private packet header

- User ID: Marks the user's identity, automatically assigned to the user by the virtual gateway. This can be understood as the number of the port forwarding service request form.
- Command word: Open (create a new connection), Data (data command), or Close (close the connection).
- Service type: Port forwarding or Web-Link. Web-Link is actually HTTP/HTTPS port forwarding; or, to put it another way, Web-Link resources can likewise be configured as port forwarding resources. But remember that a well-known port does not need to be designated for Web-Link resources (for example, http://www.huawei.com/), whereas in configuring port forwarding the port number is mandatory (for example, HTTP's well-known port 80 and HTTPS's 443).
- Source IP address: The source IP address of the original request. In this example this is the remote user's client address, 4.1.64.179.
- Destination IP address: The destination IP address of the original request. In this example this is the internal network Telnet server's address, 10.1.1.1.
- Protocol type: At the moment only TCP is supported.
- Destination port: The destination port of the original request. In this example this is the internal network Telnet port, 23.
- Client socket ID: The socket ID of the connection established between the remote user and the firewall, used to identify this session; subsequent packets continue to use this socket ID.
- Server socket ID: The socket ID of the connection established between the firewall (serving as the Telnet client) and the internal server. It has the same role as the client socket ID: both identify the session.
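Packing the Table 7-6 fields into a binary header might look like the sketch below. The actual on-wire layout (field order, sizes, and command-word encodings) is not documented here, so the format string and constants are purely assumptions for illustration.

```python
import struct
from ipaddress import IPv4Address

OPEN, DATA, CLOSE = 1, 2, 3  # assumed command-word encodings
# Assumed layout: user id, command, src ip, dst ip, dst port,
# client socket ID, server socket ID (network byte order).
HDR = "!IB4s4sHII"

def pack_header(user_id, command, src_ip, dst_ip, dst_port,
                client_sid, server_sid):
    return struct.pack(HDR, user_id, command,
                       IPv4Address(src_ip).packed, IPv4Address(dst_ip).packed,
                       dst_port, client_sid, server_sid)

# The Telnet "open" request from the example: no payload yet, header only.
hdr = pack_header(4, OPEN, "4.1.64.179", "10.1.1.1", 23, 1047, 0)

user_id, command, src, dst, port, csid, ssid = struct.unpack(HDR, hdr)
print(str(IPv4Address(dst)), port)  # 10.1.1.1 23
```

The server socket ID is 0 in the open request because TCP connection 2 does not exist yet; the gateway fills it in when it replies.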
After being completed, the "port forwarding service request form" is encrypted and sent
to the virtual gateway using the SSL connection.
It is important to note that what is established here is an SSL connection specially used for the port forwarding service, not the SSL connection that was established during login. When Telnet initiates similar access requests to other resources, after a new TCP connection is established the port forwarding client will again complete a "port forwarding service request form" and send it via this same SSL connection. In this way, an exclusive SSL connection is always maintained between the client and the virtual gateway. To summarize: all "port forwarding service request forms" are encrypted over this one SSL connection and then sent to the virtual gateway, greatly reducing the virtual gateway's workload.
3. Establishing a connection between the virtual gateway and the internal server
The virtual gateway receives the encrypted packet, decrypts it, and obtains the real Telnet destination IP address and port, the command word, and other information from the "port forwarding service request form". At this point the virtual gateway serves as the Telnet client and interacts with the internal server to establish a Telnet connection. A check of the firewall's session table shows that the firewall randomly opened port 10010 to initiate an access request to 10.1.1.1:23, establishing TCP connection 2:
telnet VPN:public --> public 10.1.1.2:10010-->10.1.1.1:23
4. The internal network server returns a reply to the Telnet client.
The virtual gateway receives the internal server's response packet (the login interface), and before sending it to the remote client the virtual gateway again constructs a private packet header and fills in TCP connection 2's socket ID (the server socket ID); this allows it to establish the relationship with TCP connection 1. Finally, the virtual gateway sends the SSL-encrypted private packet header + data to the port forwarding client; the port forwarding client finds TCP connection 1 based on the client socket ID in the private header, then finds the Telnet client's real IP address, and finally returns the real data.
The SSL-decrypted data received by the port forwarding client is shown below. In the highlighted portion of the screenshot, the login page's text can already be vaguely seen (this is the Telnet data packet); the content in the upper part is the private header.
7.4.4 Data Communication Stage
Subsequent Telnet data packets will continue to use the previously established TCP connection 1 and TCP connection 2, and will associate the two connections using private packet headers, finally opening a Telnet client - port forwarding client - virtual gateway - Telnet server transmission channel and achieving data communication.
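The association between the two connections via socket IDs can be modelled as a simple two-way lookup table. This is an illustrative sketch, not the firewall's actual data structure, and the ID values are made up for the example.

```python
class ConnectionMap:
    """Associates TCP connection 1 (client side) with TCP connection 2
    (server side) using the socket IDs carried in the private header."""

    def __init__(self):
        self._by_client = {}
        self._by_server = {}

    def bind(self, client_sid: int, server_sid: int) -> None:
        # Recorded once the gateway's reply carries the server socket ID.
        self._by_client[client_sid] = server_sid
        self._by_server[server_sid] = client_sid

    def to_server(self, client_sid: int) -> int:
        return self._by_client[client_sid]   # forward direction

    def to_client(self, server_sid: int) -> int:
        return self._by_server[server_sid]   # return direction

cmap = ConnectionMap()
cmap.bind(client_sid=1047, server_sid=10010)  # hypothetical IDs
print(cmap.to_server(1047))    # 10010
print(cmap.to_client(10010))   # 1047
```

Every subsequent data packet only needs the socket ID from its private header to be routed onto the correct TCP connection.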
This concludes the introduction to the port forwarding process for Telnet. Telnet is a single-channel protocol, the simplest kind. In addition, port forwarding also supports the following types of applications:

• Multi-channel protocols, supporting FTP and Oracle SQL Net. During actual configuration, only the control channel's port 21 needs to be designated for the FTP protocol; negotiated data ports are passively "listened" to, and no additional configuration is required.
Learn Firewalls with Dr. WoW

• Multi-protocol applications. Some applications require the support of multiple protocols. Take email for example: prior to configuring the port forwarding service, the port numbers of the sending protocol (SMTP: 25) and the receiving protocol (POP3: 110 or IMAP: 143) must be configured, as must a port forwarding resource for each type of protocol.

• Multi-IP fixed-port applications. Using IBM Lotus Notes as an example, its databases reside on multiple servers; however, when providing external service, only port 1352 is used. When configuring port forwarding for this type of application, we do not need to specify all servers. Instead, we can simply select "Any IP address" from "Host address type".
In real-world scenarios, configuring and using port forwarding is extremely simple (of course it is: this is an SSL VPN feature, after all!), but what you don't see is how much behind-the-scenes work goes into creating this marvelous functionality. You may not need such abstract content frequently, but when the need does arise, I trust this section will come back to you and help you get even more out of your VPN.
7.5 Network Extension
There is an old saying that goes "everything has its place; each has their own abilities." SSL
VPN's four major services are similar: to access Web resources you need to use the Web
proxy service; to access file resources you need to use the file sharing service, etc. With this in
mind, I'm sure there will be a few questions, such as: "In what scenarios is the network extension that you're speaking about today used?", "What are its working principles?", and "Why is this service called 'network extension'?" Perhaps you have even more questions, but that's alright, as I'll answer them one by one in this section.
7.5.1 Network Extension Use Scenarios
Figure 7-23 is a scenario in which a remote user is accessing the internal network resources of
a company. Specifically, the remote user needs to access the company's internal voice server
(SIP server) to participate in a teleconference. Can SSL VPN's first three services meet this
kind of need?
Figure 7-23 Network extension use scenario
[Topology: a remote user reaches the enterprise HQ over an SSL VPN terminated on the firewall; behind the firewall, a switch connects the Web server, FTP server, mail server, and SIP server.]
Let's analyze this first. The remote user wants to access the voice server. SIP, which is layered
on top of UDP, will be used in the communication between the two. Web proxy and file
sharing enable remote users to access Web and file resources, but not voice resources. Can the
port forwarding service resolve this problem? The answer is also no. The reason is that port
forwarding applies to only TCP-based application protocols. But SIP is generally a
UDP-based protocol, and so the port forwarding service is helpless here. Does that mean SSL VPN can't meet even this need? Of course not: it can, but it needs the network extension service that we are discussing today.
Enabling the network extension service on the firewall is of great value here: it meets the remote user's need to access all IP resources on the company's internal network, and the SIP-based voice resource mentioned above is one such IP resource. Some readers may not yet have a clear picture of what it means for network extension to give remote users access to all IP resources on the internal network, so I've used Figure 7-24 to explain further.
Figure 7-24 Network extension's position in the layers
[Layered view: user service systems (OA system, email, financing; NMS, ERP, telephone conference system, etc.) sit on application-layer protocols (HTTP, FTP, Telnet, SMB, SMTP, etc. over TCP; SIP, SNMP, TFTP, NFS, DHCP, DNS, etc. over UDP), which in turn sit on the transport-layer protocols (TCP, UDP) and the network-layer protocol (IP). Web proxy and file sharing hook in at the application layer, port forwarding at the TCP layer, and network extension at the IP layer.]
From the above figure it can be seen that the user has many kinds of service systems, and
indeed there are too many to review separately. But if we dig several layers deeper, we'll
discover that regardless of how many service systems the user has at the upper layers, they
still need to rely on lower layer protocols to provide communication support for them—it's
just that the lower-layer protocol types used by different service systems are different.
The application layer protocols supported by Web proxies and file sharing are very specific.
For example, Web proxy can only support HTTP-based applications; file sharing only
supports SMB and NFS applications; and port forwarding supports all TCP-based applications.
However, having the port forwarding service doesn't mean the SSL VPN can do everything:
for example, the port forwarding service is in over its head when it encounters some
UDP-based applications (for example the SIP protocol used by the user's teleconference
system). If we want to enable SSL VPN to support more user applications, this requires that
we provide protocol support at the layer below this, and network extension is exactly this kind
of function: it offers complete support directly at the IP layer. Therefore, the network
extension service is able to provide even more varied types of resources to remote users.
7.5.2 Network Extension Process
When a remote user uses the network extension function to access internal network resources,
the internal exchange process involved is shown in Figure 7-25.
Figure 7-25 Network extension process
[Topology: remote user (public IP 6.6.6.6) on the Internet; firewall with GE0/0/2 (1.1.1.1) facing the Internet and GE0/0/1 (10.1.1.1) facing the server 10.1.1.2; virtual gateway address pool 192.168.1.1–192.168.1.100. Callouts: (1) the user logs in to the virtual gateway; (2) the gateway delivers an IP address for the virtual NIC (192.168.1.1) and routes; (3) IP packet from the virtual NIC; (4) encrypted IP packet sent from the physical port (6.6.6.6) through the tunnel; (5) decrypted IP packet forwarded to the server; (6) response packet; (7) encrypted response packet returned through the tunnel.]
1. The remote user logs in to the virtual gateway using an IE browser.
2. After the remote user successfully logs in to the virtual gateway, he/she enables the network extension function, which triggers the following actions:
a. A new SSL VPN tunnel is established between the remote user and the virtual gateway.
b. The remote user's local PC automatically generates a virtual network card. The virtual gateway randomly selects an IP address from the address pool and assigns it to the remote user's virtual network card for communication between the remote user and the company internal network. With this private IP address, the remote user can conveniently access internal network IP resources just as if he/she were a user inside the company internal network.
c. The virtual gateway issues routing information for reaching the internal server to the remote user.
3. The remote user sends a service request packet to the company internal network's server. This packet reaches the virtual gateway through the SSL VPN tunnel.
4. After receiving the packet, the virtual gateway decapsulates it, and then sends the decapsulated service request packet to the internal server.
5. The internal server responds to the remote user's service request.
6. After arriving at the virtual gateway, the response packet enters the SSL VPN tunnel.
7. After receiving the service response packet, the remote user's PC decapsulates it and extracts the service response within.
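The enable-extension actions (steps 2b and 2c) can be modeled in a short Python sketch. The class and method names are hypothetical, not Huawei's code; the pool range and the advertised route follow the example in Figure 7-25.

```python
import ipaddress
import random

class VirtualGateway:
    """Toy model of network extension setup (steps 2b and 2c above).
    Illustrative only; names are my own invention."""

    def __init__(self, pool_start, pool_end, internal_nets):
        first = int(ipaddress.IPv4Address(pool_start))
        last = int(ipaddress.IPv4Address(pool_end))
        # Address pool the gateway draws virtual-NIC addresses from
        self.pool = [str(ipaddress.IPv4Address(i)) for i in range(first, last + 1)]
        # Internal segments advertised to the client as routes
        self.internal_nets = list(internal_nets)

    def enable_extension(self):
        # Step 2b: randomly pick a private address for the virtual NIC;
        # step 2c: issue routes toward the internal segments.
        vnic_ip = random.choice(self.pool)
        return vnic_ip, list(self.internal_nets)

gw = VirtualGateway("192.168.1.1", "192.168.1.100", ["10.1.1.0/24"])
ip, routes = gw.enable_extension()
```

With this private address and the pushed route, the client can reach 10.1.1.0/24 through the tunnel as if it sat on the internal network.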
The above is the basic process of a remote user utilizing the network extension service to
access company internal network IP resources. If we compare network extension with the
other three SSL VPN services, it's not hard to see that the mechanisms by which these three
services (Web proxy, file sharing and port forwarding) are accomplished are largely the same
as each other—they map the enterprise network's internal resources onto the firewall, and
these are then presented for viewing to the remote user by the firewall. From this perspective,
the firewall is simply a piece of secure proxy equipment, and the remote user hasn't actually
connected into the company internal network.
However, network extension is different. During the network extension service, the remote
user obtains a company internal private network IP address from the firewall, and uses this IP
address to access the enterprise network's internal resources. When an Internet user possesses
the company private IP address, it is as if the user itself is located inside the enterprise
network. Or, to switch our perspective, this is equivalent to the borders of the enterprise
network being extended to the remote user's location. The area surrounded with gray dashes in
Figure 7-26 can be understood to be the extension of the enterprise network onto the Internet,
so it's not hard to understand why this service is called network extension.
Figure 7-26 Network extension schematic
[Schematic: several remote users on the Internet, each holding a private enterprise address, form the "extended enterprise network" (the gray dashed area) that connects through the firewall to the enterprise network itself.]
To help us further understand the internal implementation of network extension, I'll build on the exchange process above and explain how service request packets are encapsulated when they enter the VPN tunnel and decapsulated when they emerge from it.
7.5.3 Reliable Transport Mode and Fast Transport Mode
There are two methods by which the network extension function can establish an SSL VPN
tunnel: reliable transport mode and fast transport mode. In reliable transport mode, the SSL
VPN uses SSL to encapsulate packets, and uses TCP as the transport protocol; in fast transport
mode, the SSL VPN uses the QUIC (Quick UDP Internet Connections) protocol to
encapsulate packets, and uses UDP as the transport protocol. QUIC is also a TLS/SSL-based
data encryption protocol, and its role is the same as SSL, except that packets encapsulated by
it need to be transported using UDP.
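The difference between the two modes is easiest to see as a difference in the outer header stack. The following Python sketch uses labels only (it is not real SSL record or QUIC framing) to show how the same inner packet is wrapped in each mode:

```python
def encapsulate(inner_packet, mode):
    """Return the header stack for one tunneled packet. Labels only;
    this is an illustration, not real SSL/QUIC framing."""
    if mode == "reliable":
        outer = ["IP", "TCP", "SSL"]   # SSL-encrypted payload carried over TCP
    elif mode == "fast":
        outer = ["IP", "UDP", "QUIC"]  # QUIC-encrypted payload carried over UDP
    else:
        raise ValueError("unknown mode: " + mode)
    return outer + inner_packet

# Inner packet: the SIP request sent from the virtual NIC
inner = ["IP", "UDP", "SIP"]
reliable = encapsulate(inner, "reliable")
fast = encapsulate(inner, "fast")
```

Either way the inner IP/UDP/SIP packet is untouched; only the outer transport and encryption layers differ.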
Figure 7-27 displays packet encapsulation in reliable transport mode. As the figure shows, the source address (SRC: 192.168.1.1) for communication between the remote user and the company internal network (the SIP server) is the virtual network card's IP address. Packets exchanged in the process safely reach both communicating parties after repeated encapsulation and decapsulation. When the remote user accesses the SIP server, the source port of the inner packet is 5880 (random), the destination port is 5060, and the transport protocol is UDP. The encapsulation protocol for the outer packet layer is SSL, and the transport protocol is TCP.
Figure 7-27 Packet encapsulation process when using reliable transport mode
[Topology as in Figure 7-25; the remote user's public IP is 6.6.6.6 and its virtual NIC address is 192.168.1.1. (3) Service request packet (SSL encapsulation): inner packet IP (SRC 192.168.1.1, DST 10.1.1.2) / UDP (SRC port 5880, DST port 5060) / SIP request, SSL-encrypted and wrapped in TCP (SRC port 6293, DST port 443) / IP (SRC 6.6.6.6, DST 1.1.1.1). (4) Service request packet (SSL decapsulation): the virtual gateway strips the outer headers and forwards the inner packet to the SIP server. (5) Service response packet: IP (SRC 10.1.1.2, DST 192.168.1.1) / UDP (SRC port 5060, DST port 5880) / SIP reply. (6) Service response packet (SSL encapsulation): the reply is SSL-encrypted and wrapped in TCP (SRC port 443, DST port 6293) / IP (SRC 1.1.1.1, DST 6.6.6.6).]
Figure 7-28 shows the process of using fast transport mode to encapsulate packets. The packet
encapsulation principles in this mode are the same as those in the reliable mode, with the
difference that the outer layer packet encapsulation protocol has been changed from SSL to
QUIC, and the transport protocol has been changed from TCP to UDP.
Figure 7-28 Packet encapsulation process when using fast transport mode
[Same exchange as in Figure 7-27, but the outer wrapper is QUIC over UDP: the request is QUIC-encrypted and carried in UDP (SRC port 54013, DST port 443) / IP (SRC 6.6.6.6, DST 1.1.1.1); the response is carried in UDP (SRC port 443, DST port 54013) / IP (SRC 1.1.1.1, DST 6.6.6.6).]
In unstable network environments, reliable transport mode is suggested; when the network environment is relatively stable, fast transport mode is suggested, as it improves data transmission efficiency.
7.5.4 Configuring Network Extension
The configuration of the network extension service can be divided into the following steps:
1. Create a virtual gateway.
2. Under the virtual gateway, create and configure the authentication method for remote users and configure role authorization.
3. Configure the network extension service.
Figure 7-29 shows the detailed configuration page.
Figure 7-29 Configuring network extension
The network extension service only requires two IP address segments to be configured, so the configuration itself is very simple. However, choosing these two IP address segments takes some care.
Parameter 1: The range of the IP address pool
In the theoretical section above, I explained that after a remote user enables the network
extension function, the virtual gateway will assign an IP address to the remote user's virtual
network card, but where does this address come from? Clever readers have probably already
guessed that this is randomly selected from the address pool we are about to configure.
This address pool is designated by the network administrator. When designating the address pool, it is important to pay attention to the relationship between the pool's network segment and the internal network segment. If the pool is configured in the same subnet as the internal server (10.1.1.2), then after the remote user obtains an address assigned by the virtual gateway, it will be as if the remote user and the internal server were connected by a Layer 2 switch: the remote user can access the server directly, and no routing issues arise. If the address pool and the internal server are not in the same network segment (in our example they aren't), then a route whose destination is the address pool segment (192.168.1.0) and whose outbound interface is the public interface connected to the Internet must be configured on the firewall. This route is used only for determining security zone relationships and is not used for packet forwarding.
[FW] ip route-static 192.168.1.0 255.255.255.0 GigabitEthernet0/0/2 1.1.1.2
Additionally, if a server (such as a DHCP server, a third party authentication server, etc.)
dedicated to assigning IP addresses to users has been set up inside a company, this will be
acceptable so long as the address pool used in network extension does not conflict with the
address segments assigned by the server—each can assign their own IP addresses without
affecting each other.
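The two checks just described, whether the zone-determination route is needed and whether the pool conflicts with another server's address range, can be expressed with Python's standard ipaddress module. This is an illustrative sketch, not firewall code:

```python
import ipaddress

def pool_route_needed(pool_net, internal_net):
    """True when the address pool and the internal servers are in
    different segments, i.e. the zone-determination route is required."""
    return ipaddress.IPv4Network(pool_net) != ipaddress.IPv4Network(internal_net)

def pools_conflict(pool_a, pool_b):
    """True when the network extension pool overlaps a range assigned
    by another server (e.g. a DHCP server)."""
    return ipaddress.IPv4Network(pool_a).overlaps(ipaddress.IPv4Network(pool_b))

# The example in the text: pool 192.168.1.0/24, servers in 10.1.1.0/24
needed = pool_route_needed("192.168.1.0/24", "10.1.1.0/24")
# Hypothetical DHCP range inside the same /24 would conflict
conflict = pools_conflict("192.168.1.0/24", "192.168.1.64/26")
```

In the chapter's example `needed` is true, which is exactly why the `ip route-static 192.168.1.0` command above is configured.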
Parameter 2: List of accessible internal network segments
I stated above that a remote user who enables network extension can access all IP resources on the company internal network, so why is there still an "accessible internal network segment"? This parameter exists for control: if we don't configure it, remote users can access all internal network resources by default, so the parameter lets us narrow that access.
Whether or not we configure this parameter affects not only the scope of the remote user's access to the company internal network, but also the remote user's other network access.

• If the "accessible internal network segment" is configured as 10.1.1.0 for network extension, then the virtual gateway sends a specific route to the remote user's PC, with the destination address being the internal network segment 10.1.1.0 and the outgoing interface being the virtual network card's address (the company internal network's private IP address 192.168.1.1 obtained by the remote user):
C:\> route print
IPv4 Routing Table
===========================================================================
Active Routes:
Network Destination          Netmask          Gateway        Interface  Metric
          0.0.0.0            0.0.0.0      10.111.78.1    10.111.78.155      10
         10.1.1.0      255.255.255.0          On-Link      192.168.1.1       1
       10.1.1.255    255.255.255.255          On-Link      192.168.1.1     257
• If the "accessible internal network segment" parameter is not configured for network extension, what does the remote user's routing table look like? In the table below we can see that the virtual gateway has sent a default route to the remote user, with the outgoing interface again being the virtual network card's address (the company internal network's private IP address 192.168.1.1 obtained by the remote user):
C:\> route print
IPv4 Routing Table
===========================================================================
Active Routes:
Network Destination          Netmask          Gateway        Interface  Metric
          0.0.0.0            0.0.0.0          On-Link      192.168.1.1       1
Don't underestimate the difference between the two kinds of routes shown above. When "the accessible internal network segment" is configured, the virtual gateway issues the remote user a route to only some internal network segments, and this route does not affect other routes. That is, if the remote user wants to access the company internal network, he/she can; if the remote user wants to access the Internet, he/she can do that too, completely unaffected.
If we choose not to configure this parameter, problems arise. Normally a remote user's route for accessing the Internet is the default route, but now the virtual gateway issues another default route, and this one has the higher priority (its metric is 1). This invalidates the remote user's original default route, meaning the remote user has no way to access the Internet. If the remote user must access the Internet, he/she can only temporarily disconnect the network extension connection, and then re-enable network extension when they want to access the internal network again. Therefore, which network extension configuration method to choose depends on the corporate user's needs.
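The routing behavior just described follows from ordinary host route selection: longest prefix first, then lowest metric. A small Python sketch (a simplified model of the PC's route lookup, not the actual Windows implementation) shows both cases:

```python
import ipaddress

def select_route(routes, dst):
    """Pick the best route for dst: longest prefix match, then lowest
    metric. Routes are (network, gateway, metric) tuples (simplified)."""
    dst = ipaddress.IPv4Address(dst)
    matches = [r for r in routes if dst in ipaddress.IPv4Network(r[0])]
    return min(matches, key=lambda r: (-ipaddress.IPv4Network(r[0]).prefixlen, r[2]))

# Case 1: specific 10.1.1.0/24 route pushed; the original default survives.
table = [
    ("0.0.0.0/0", "10.111.78.1", 10),    # original default route
    ("10.1.1.0/24", "192.168.1.1", 1),   # route pushed by the virtual gateway
]
r_internal = select_route(table, "10.1.1.2")     # goes into the tunnel
r_inet_before = select_route(table, "8.8.8.8")   # still uses the old default

# Case 2: no "accessible segment" configured; a second default (metric 1)
# is pushed and beats the original one, so Internet traffic also tunnels.
table.append(("0.0.0.0/0", "192.168.1.1", 1))
r_inet_after = select_route(table, "8.8.8.8")
```

This is exactly why Internet access breaks in the second case: every destination now prefers the tunnel's default route.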
Configuration of the network extension service has been completed, and below we'll look at
how the remote user should use the network extension function to access internal network
resources.
7.5.5 Login Process
The SSL VPN network extension function provides remote users with two kinds of paths to
access the internal network—one uses the IE browser, and the other uses an independent
network extension client.

• IE browser
a. The remote user enters the virtual gateway's access address into the IE browser's address bar.
b. After the virtual gateway's login interface appears, the remote user enters the user name and password.
c. Users who have successfully logged in can see the Network Extension tab on the virtual gateway's resource page, and can click Start under Network Extension. As shown in Figure 7-30, the remote user obtains the company internal network IP address assigned by the virtual gateway, and can then directly access the company's internal network resources.
Figure 7-30 Network extension—initiation

Note: When introducing the principles of packet encapsulation, I mentioned that an SSL VPN tunnel can be established in two modes (reliable transport and fast transport); when the tunnel is established between the IE browser and the virtual gateway, the default is fast transport mode.
• Independent client
a. The remote user downloads and installs the independent network extension client.
After the remote user successfully logs in to the virtual gateway, he/she clicks user options in the upper right corner of the interface, after which the network extension client download link can be seen, as shown in Figure 7-31. Installation is very simple: just follow the instructions and click Next.
Figure 7-31 Downloading the network extension client software
The advantage of using the independent client is that the network extension client
can initiate automatically when a device turns on, and can automatically reconnect
when a connection is lost. On the other hand, when using the IE browser method,
the virtual gateway must be logged in to each time, which is relatively cumbersome.
b. Log in to the virtual gateway.
URL: the virtual gateway address.
User and Password: the virtual gateway login user name and password assigned to the remote user by the administrator.
As shown in Figure 7-32, by clicking Login, the remote user can access internal network resources in the same way as internal network users.
Figure 7-32 Logging in to the virtual gateway
When using the independent client to establish an SSL VPN tunnel, the tunnel establishment mode can be configured. On the login interface, click Option; a choice can then be made in Tunnel Mode between reliable transport mode and fast transport mode. Tunnel Mode also offers an Auto-sensing option, meaning the client automatically selects reliable or fast transport mode for establishing the SSL VPN tunnel according to network conditions.
If the network extension function has already been enabled, how can the remote user
determine if their network extension function is working? Two methods can be used here.
First, the ipconfig command can be used to look at whether the remote user has obtained the
private IP address assigned by the virtual gateway. According to the above example, if, after
network extension is enabled, you as a remote user obtain an IP address within the
192.168.1.0 network segment, then congratulations! You've already successfully connected
into the enterprise's internal network.
The second method is for the remote user to test and see whether or not they can access the
company internal network's resources.
We frequently encounter the following circumstance: a remote user has already obtained the
IP address assigned by the virtual gateway, but cannot access internal network resources. Why
is this? There are generally two reasons why this kind of situation occurs:

• The first is that the remote user does not have service permissions for the internal network resource (for example, R&D staff not having permission to access the finance system).
• The second is that when network extension was configured, the network segment containing the internal network resources the remote user wants to access was not included in the "accessible internal network segment."
These two problems are easily resolved: either the remote user applies to the network administrator for service permissions, or the network administrator checks the firewall to verify that all the needed internal network segments have been added.
7.6 Configuring Role Authorization
In SSL VPN services, the company administrator can create different "special menus" for
different users to control access to Web and non-Web resources. On Huawei's firewalls,
control over different users' access to resources is completed through role authorization. All
users of one role have the same permissions. The administrator can add users or user groups
with the same permissions into a role, and then associate accessible service resources with
that role.
Figure 7-33 shows that a role can contain multiple users/user groups, and can also be associated with multiple service resources.
Figure 7-33 Relationship between role and user/user group and resources
[Diagram: users and user groups are added to roles, and resources are associated with roles. The Employee user and Employee group are added to the role usera; the Manager users are added to the role master. The roles are then associated with resources such as www.abc.com, www.def.com, \\10.1.1.1\a, and \\10.1.1.2\b.]
The specific controls that can be associated with these roles are as follows:

• Service authorization (enable)
Specifies the services (such as Web proxy, file sharing, port forwarding, and network extension) that users within a role can use.
• Resource authorization
For the Web proxy, file sharing, and port forwarding services, assuming the service has already been enabled, this specifies the resources that can be accessed. If no resource is specified, users within the role will be unable to access any resources.
For the network extension service, assuming the service has already been enabled, user-based security policies can be used to control the access of remote users to resources. For details, see 7.7.2 Configuring a Security Policy in a Network Extension Scenario.
According to the above approach, we've created different roles (usera and master) for ordinary
employees and managers, and then specified different resources for them. In this way we can
achieve finely granular, role-based resource access control.
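Role-based authorization of this kind boils down to two mappings: users into roles, and resources onto roles. A minimal Python sketch follows (the names mirror the usera/master example above; the data model is my own simplification, not the firewall's):

```python
class Role:
    """Minimal model of a role: a set of member users/groups and a set
    of resources those members may access (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.resources = set()

def accessible(roles, user):
    # A user's reachable resources are the union over all roles
    # that contain the user.
    res = set()
    for role in roles:
        if user in role.members:
            res |= role.resources
    return res

usera = Role("usera")
usera.members |= {"employee1", "employee2"}
usera.resources.add("www.abc.com")

master = Role("master")
master.members.add("manager")
master.resources |= {"www.def.com", r"\\10.1.1.1\a"}
```

A user in no role sees no resources at all, which matches the "no resource specified, no access" rule above.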
Figure 7-34 Configuring role authorization
After completing the above configuration, ordinary employees and managers will see their
own respective resource interface after logging in to the virtual gateway, as shown in Figure
7-35.
Figure 7-35 Resource interfaces after user login
Resource page seen by usera after login
Resource page seen by master after login
7.7 Configuring Security Policies
The approach to configuring SSL VPN security policies is similar to that with IPSec, as we
first configure a relatively permissive security policy on the firewall, to guarantee that SSL
VPN services run normally, and then obtain refined security policy match conditions through
analyzing the session table. For the specific process refer to the introduction in "Chapter 6
IPSec VPN".
Below I'll detail separate security policy configuration processes for two kinds of
scenarios—Web proxy/file sharing/port forwarding scenarios and network extension
scenarios.
7.7.1 Configuring Security Policies for Web Proxy/File
Sharing/Port Forwarding Scenarios
The purpose of configuring security policies in Web proxy, file sharing, and port forwarding scenarios is to achieve network connectivity. To learn more about controlling remote users' access to resources, refer to the content introduced above regarding "resource authorization".
In our example here, an SSL VPN tunnel is established between a remote user and firewall
(here we'll use the file sharing access service as an example), with the remote user accessing
the file server, as shown in Figure 7-36. We'll assume that interface GE0/0/1 of the firewall is
connected to the private network and belongs to the DMZ, and that interface GE0/0/2 is
connected to the Internet and belongs to the Untrust Zone.
Figure 7-36 SSL VPN file sharing
[Topology: a remote user establishes an SSL VPN tunnel over the Internet (Untrust) to the firewall's GE0/0/2 (4.1.64.12/24); the firewall's GE0/0/1 (4.0.2.1/24, DMZ) connects to the internal network's SMB server 4.0.2.11/24.]
After the remote user successfully initiates access to the server, the following session table
can be seen on the firewall.
<FW> display firewall session table verbose
Current Total Sessions : 4
https VPN:public --> public ID: a48f3629814102f62540ade7f
Zone: untrust--> local TTL: 00:10:00 Left: 00:09:52
Output-interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets:436 bytes:600276 -->packets:259 bytes:32089
4.1.64.179:41066-->4.1.64.12:443 //packet establishing SSL VPN tunnel
https VPN:public --> public ID: a48f3629815b06fd6540ade7f
Zone: untrust--> local TTL: 00:10:00 Left: 00:09:52
Output-interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
<--packets:291 bytes:395991 -->packets:176 bytes:26066
4.1.64.179:41067-->4.1.64.12:443 //packet establishing SSL VPN tunnel
tcp VPN:public --> public ID: a48f3629818f0c229540ade8d
Zone: local--> dmz TTL: 00:00:10 Left: 00:00:02
Output-interface: GigabitEthernet0/0/1 NextHop: 4.0.2.11 MAC: 78-ac-c0-ac-93-7f
<--packets:5 bytes:383 -->packets:8 bytes:614
4.0.2.1:10013-->4.0.2.11:445 //packet from the firewall (serving as the client) accessing the server
netbios-session VPN:public --> public ID: a58f3629817501ad8a540ade8d
Zone: local--> dmz TTL: 00:00:10 Left: 00:00:02
Output-interface: GigabitEthernet0/0/1 NextHop: 115.1.1.2 MAC: 78-ac-c0-ac-93-7f
<--packets:1 bytes:40 -->packets:1 bytes:44
4.0.2.1:10012-->4.0.2.11:139 //packet from the firewall (serving as the client) accessing the server
The packets establishing the SSL VPN tunnel trigger the establishment of two identical
sessions. One session is established during login, and one session is established when
accessing service resources.
Analysis of the above session table reveals movement of packets on the firewall, as shown in
Figure 7-37.
Figure 7-37 Packet movement on the firewall
[Diagram: packets used to establish the SSL VPN tunnel travel from the remote user (Untrust) to GE0/0/2 (4.1.64.12) and terminate at the firewall (Local zone); packets sent from the firewall, acting as the proxy, leave GE0/0/1 (4.0.2.1) toward the server 4.0.2.11/24 in the DMZ.]
From the above figure we learn that an Untrust Zone-->Local Zone security policy needs to
be configured on the firewall to enable the establishment of an SSL VPN tunnel between the
remote user and the firewall; a Local Zone-->DMZ security policy also needs to be
configured to enable the firewall to serve as a proxy for the remote user to access the server.
The security policy configuration approaches for the Web proxy/port forwarding functions are
completely identical with that of file sharing. To summarize, the security policy match
conditions that should be configured on the firewall for the above three functions are as
shown in Table 7-7.
Table 7-7 Security policy match conditions

Service: remote user access to the server
• Direction: Untrust --> Local; source address: ANY; destination address: 4.1.64.12/32; application (protocol + destination port): TCP+443*
• Direction: Local --> DMZ; source address: ANY**; destination address: 4.0.2.11/24; applications: TCP+139, TCP+445***

*: The port used by the device should be determined based upon actual circumstances.
**: For the USG6000 family of firewalls, although the source address displayed in the session table is the interface's private IP address, during actual configuration the source address must be set to ANY. For the USG2000/5000 families of firewalls, the source address can be set to the interface's private IP address during actual configuration.
***: The file sharing service is used as an example here; for the Web proxy or port forwarding service, determine the applications in accordance with actual circumstances.
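The two policy rows in Table 7-7 can be exercised with a toy first-match lookup. The following Python sketch is a simplified model of interzone policy matching (real firewalls match many more fields, such as source address and user):

```python
def match_policy(policies, src_zone, dst_zone, dst_ip, app):
    """First-match lookup over zone-pair policies; deny by default.
    Simplified model for illustration only."""
    for p in policies:
        if (p["src"] == src_zone and p["dst"] == dst_zone
                and dst_ip in p["dst_ips"] and app in p["apps"]):
            return "permit"
    return "deny"

# The two rules from Table 7-7 (file sharing example)
policies = [
    {"src": "untrust", "dst": "local",
     "dst_ips": {"4.1.64.12"}, "apps": {"tcp/443"}},   # tunnel establishment
    {"src": "local", "dst": "dmz",
     "dst_ips": {"4.0.2.11"}, "apps": {"tcp/139", "tcp/445"}},  # proxy to server
]
```

Any flow that matches neither rule, e.g. a direct Untrust-to-DMZ attempt, falls through to the default deny.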
7.7.2 Configuring a Security Policy in a Network Extension Scenario
The purpose of configuring a security policy in network extension scenarios is to achieve
network connectivity and control the access of remote users to resources.
In the below example, an SSL VPN tunnel is established between the remote user and the
firewall, and the network extension service is used to access the company internal network's
server. We'll assume that on the firewall, interface GE0/0/1 is connected to the private
network and belongs to the DMZ, and that interface GE0/0/2 is connected to the Internet and
belongs to the Untrust zone.
In network extension, whether or not the server and the virtual gateway address pool are in the
same network segment affects the security zones traversed by service packets, and therefore
the configuration of inter-zone security policies needs to be divided into the following two
circumstances for discussion.
- The server and the virtual gateway address pool are in the same network segment:
Figure 7-38 shows the network diagram when the server and the virtual gateway address pool are in the same network segment.
Figure 7-38 Network extension scenario with the server and the virtual gateway address pool in the same network segment
[Figure: the remote user (public IP address 6.6.6.6, assigned private IP address 10.1.1.10) reaches the firewall's public interface GE0/0/2 (1.1.1.1, Untrust zone) over SSL VPN; GE0/0/1 (10.1.1.1, DMZ) connects to the server (10.1.1.2). The virtual gateway address pool runs from 10.1.1.10 to 10.1.1.100, and a direct route to the pool's network segment points out GE0/0/1.]
The server and the virtual gateway address pool being in the same network segment
means that a remote user who has obtained the private IP address is in the same network
segment as the server. Of course, they are also in the same security zone—the DMZ.
After the remote user's access to the server through network extension is successful, we
can verify this conclusion on the firewall's session table:
<FW> display firewall session table verbose
Current Total Sessions : 3
https VPN:public --> public ID: a48f3fc25ef7084f654bfcacd
 Zone: untrust--> local TTL: 00:00:10 Left: 00:00:02
 Output-interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
 <--packets:10 bytes:2577 -->packets:9 bytes:804
 6.6.6.6:50369-->1.1.1.1:443      //packet establishing the SSL VPN tunnel
icmp VPN:public --> public ID: a58f3fc25f2b05940054bfcb3f
 Zone: dmz--> dmz TTL: 00:00:20 Left: 00:00:13
 User: huibo
 Output-interface: GigabitEthernet0/0/1 NextHop: 10.1.1.2 MAC: 00-22-a1-0a-eb-7d
 <--packets:3 bytes:180 -->packets:4 bytes:240
 10.1.1.10:1-->10.1.1.2:2048      //raw packet from the remote user accessing the server
https VPN:public --> public ID: a58f3fc25f1107f81954bfcace
 Zone: untrust--> local TTL: 00:10:00 Left: 00:09:55
 Output-interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
 <--packets:34 bytes:4611 -->packets:44 bytes:14187
 6.6.6.6:58853-->1.1.1.1:443      //packet establishing the SSL VPN tunnel
The packets establishing the SSL VPN tunnel trigger the establishment of two identical sessions: one is established during login, and one is established when network extension is initiated.
Packet movement on the firewall can be obtained by analyzing the above session table, as shown in Figure 7-39.
Figure 7-39 Packet movement on the firewall when the server and the virtual gateway address pool are in the same network segment
[Figure: packets used to establish the SSL VPN tunnel travel from the remote user (public IP address 6.6.6.6/24) in the Untrust zone to GE0/0/2 (1.1.1.1) in the Local zone; the remote user's original packets (virtual NIC address 10.1.1.10/24) travel within the DMZ, out GE0/0/1, to the server (10.1.1.2/24).]
From the above figure we can learn that an Untrust-->Local security policy needs to be configured to permit the establishment of an SSL VPN tunnel between the remote user and the firewall; a DMZ-->DMZ security policy also needs to be configured so that service packets can pass. (By default, USG6000 series firewalls do not permit packet movement within a security zone, so this security policy must be configured; USG2000/5000 series firewalls do not have this restriction.)
In summary, the security policy match conditions that should be configured on the
firewall are shown in Table 7-8.
Table 7-8 Security policy match conditions

Service | Direction (Source Zone --> Destination Zone) | Source Address | Destination Address | Application (Protocol + Destination Port)
Remote user access to the server | Untrust --> Local | ANY | 1.1.1.1/32 | Reliable transport mode: TCP+443; fast transport mode: TCP+UDP+443*
Remote user access to the server | DMZ --> DMZ | 10.1.1.0/24 (the network segment of the virtual network card address pool)** | 10.1.1.2/32 | ***
*: The port used by devices should be determined based on actual circumstances. The above is an example of the remote user using the reliable transport mode to establish an SSL VPN tunnel. When a tunnel is established using the fast transport mode, a UDP session will also be generated between the Untrust and Local zones, and this application needs to be configured in the security policy.
**: In addition to the source address, the USG6000 series of firewalls also supports user-based security policies, and can use a remote user's user name as a match condition when configuring the security policy. Compared to the source address, using the user name as the match condition increases visibility and precision.
***: The application here is related to the specific service type, and can be configured according to actual circumstances, for example, TCP, UDP, or ICMP.
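For instance, the DMZ-->DMZ rule from Table 7-8 might look roughly like this in USG6000-style CLI (the rule name is arbitrary, the exact syntax varies by model and version, and the service/application match should follow the actual service type, so read this as a sketch):

```
security-policy
 rule name netext_same_segment         # DMZ --> DMZ: permit intra-zone service packets
  source-zone dmz
  destination-zone dmz
  source-address 10.1.1.0 24           # segment of the virtual NIC address pool
  destination-address 10.1.1.2 32
  action permit
```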
- The server and the virtual gateway's address pool are not in the same network segment:
Figure 7-40 shows a network in which the server and the virtual gateway address pool are not in the same network segment.
Figure 7-40 Network extension scenario in which the server and the virtual gateway address pool are not in the same network segment
[Figure: the remote user (public IP address 6.6.6.6, assigned private IP address 192.168.1.1) reaches the firewall's public interface GE0/0/2 (1.1.1.1, Untrust zone) over SSL VPN; GE0/0/1 (DMZ) connects to the server (10.1.1.2). The virtual gateway address pool runs from 192.168.1.1 to 192.168.1.100, and a static route to the 192.168.1.0 network segment points out GE0/0/2.]
The server and the virtual gateway address pool not being in the same network segment means that a remote user who has obtained a private network IP address is in a different network segment from the server; of course, the two are also located in different security zones. This being the case, which security zone exactly does the user belong to?
If there is no route to the 192.168.1.0/24 network segment on the virtual gateway, the virtual gateway cannot determine the source security zone the remote user belongs to, and it will discard the packets the remote user sends. In order to resolve this problem, we need to manually configure a route whose destination is the virtual gateway address pool (network segment 192.168.1.0/24); the outgoing interface can be chosen by the administrator. That is to say, the packet's source security zone depends upon which outgoing interface is used for this route. We introduced this in 7.5.4 Configuring Network Extension.
We normally believe that since this packet comes from the Internet, it comes from the
Untrust Zone. Therefore, when we configure routes, we configure the route's outgoing
interface as the public interface GE0/0/2 that is connected to the Internet. Using this
route, the data flow from a remote user accessing the server travels from the Untrust
zone to the DMZ.
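As an illustration, such a route might be configured roughly as follows in VRP-style CLI. The next-hop address 1.1.1.254 is a hypothetical upstream gateway invented for this sketch; see 7.5.4 for the actual procedure on your device:

```
# Point the virtual gateway address pool out the public interface so that
# packets from remote users are treated as coming from the Untrust zone.
# 1.1.1.254 is a hypothetical next-hop address on the public network.
ip route-static 192.168.1.0 255.255.255.0 GigabitEthernet 0/0/2 1.1.1.254
```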
After the remote user successfully accesses the server through network extension, we can
verify this conclusion using the firewall's session table.
<FW> display firewall session table verbose
Current Total Sessions : 3
https VPN:public --> public ID: a58f3fe3a31502f49354bfcccf
 Zone: untrust--> local TTL: 00:10:00 Left: 00:10:00
 Output-interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
 <--packets:36 bytes:4989 -->packets:54 bytes:20751
 6.6.6.6:51668-->1.1.1.1:443      //packet establishing the SSL VPN tunnel
icmp VPN:public --> public ID: a58f3fe3a36302f8d854bfcd3d
 Zone: untrust--> dmz TTL: 00:00:20 Left: 00:00:20
 User: huibo
 Output-interface: GigabitEthernet0/0/1 NextHop: 10.1.1.2 MAC: 00-22-a1-0a-eb-7d
 <--packets:3 bytes:180 -->packets:3 bytes:180
 192.168.1.1:1-->10.1.1.2:2048      //raw packet from the remote user accessing the server
https VPN:public --> public ID: a58f3fe3a2fb03e68154bfccce
 Zone: untrust--> local TTL: 00:10:00 Left: 00:08:08
 Output-interface: InLoopBack0 NextHop: 127.0.0.1 MAC: 00-00-00-00-00-00
 <--packets:6 bytes:2417 -->packets:7 bytes:724
 6.6.6.6:51255-->1.1.1.1:443      //packet establishing the SSL VPN tunnel
Analyzing the above session table gives the packet direction on the firewall, as shown in
Figure 7-41.
Figure 7-41 Packet direction on the firewall when the server and the virtual gateway address pool are not in the same network segment
[Figure: packets used to establish the SSL VPN tunnel travel from the remote user (public IP address 6.6.6.6) in the Untrust zone to GE0/0/2 (1.1.1.1) in the Local zone; the remote user's original packets (virtual NIC address 192.168.1.1/24) travel from the Untrust zone, out GE0/0/1, to the server (10.1.1.2/24) in the DMZ.]
As shown in the above diagram, an Untrust-->Local security policy needs to be
configured to permit the establishment of an SSL VPN tunnel between the remote user
and the firewall; an Untrust-->DMZ security policy needs to be configured to guarantee
that service packets can pass.
To summarize, the security policy match conditions that should be configured on the
firewall are shown in Table 7-9.
Table 7-9 Security policy configuration conditions

Service | Direction (Source Zone --> Destination Zone) | Source Address | Destination Address | Application (Protocol + Destination Port)
Remote user access to the server | Untrust --> Local | ANY | 1.1.1.1/32 | Reliable transport mode: TCP+443; fast transport mode: TCP+UDP+443*
Remote user access to the server | Untrust --> DMZ | 192.168.1.0/24 (the network segment of the virtual gateway address pool)** | 10.1.1.2/32 | ***
*: The port used by devices should be determined based on actual circumstances. The above is an example of the remote user using the reliable transport mode to establish an SSL VPN tunnel. When a tunnel is established using the fast transport mode, a UDP session will also be generated between the Untrust and Local zones, and this application needs to be configured in the security policy.
**: In addition to the source address, the USG6000 series of firewalls also supports user-based security policies, and can use a remote user's user name as a match condition when configuring the security policy. Compared to the source address, using the user name as the match condition increases visibility and precision.
***: The application here is related to the specific service type, and can be configured according to actual circumstances, for example, TCP, UDP, or ICMP.
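Sketched in the same illustrative USG6000-style syntax, the Untrust-->DMZ rule from Table 7-9 might look like this (rule name arbitrary; the service match depends on the actual application, so no service line is shown):

```
security-policy
 rule name netext_cross_segment        # Untrust --> DMZ: permit remote users' service packets
  source-zone untrust
  destination-zone dmz
  source-address 192.168.1.0 24        # virtual gateway address pool segment
  destination-address 10.1.1.2 32
  action permit
```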
7.8 Integrated Use of the Four Major SSL VPN Functions
After finishing their review of network extension, many readers will be a bit confused and wonder: if network extension is so powerful, why can't we simply use the network extension service regardless of what type of internal network resource the user wants to access? Why do we still need Web proxy, file sharing, and the other services?
This is a key question. SSL VPN's provision of services at so many different layers and granularities exists to control remote users' access permissions to internal network systems; in the end, this is all done for one goal: security. When the network extension service is used, a remote user can access all types of resources on the company's internal network. Although this is quite convenient for the user, it undoubtedly increases the management and control risk for internal network resources. To both meet user needs and properly control permissions, we should configure different services for each user according to the user's needs, thus avoiding the aforesaid problem.
Figure 7-42 shows a hypothetical network scenario in which a certain company has deployed
firewall equipment and provided SSL VPN service for employees on the move.
Figure 7-42 SSL VPN integrated scenario
[Figure: managers and employees reach the firewall over SSL VPN; behind it sit two Web servers, the OA system (www.oa.com) and the finance system (www.finance.com), plus a file server (FTP-based), a mail server (SMTP-based), and a voice server (SIP-based).]
Company remote users' needs for access to the internal network and the plan on the firewall
for opening SSL VPN services for employees on the move are shown in Table 7-10.
Table 7-10 SSL VPN service plan

Role: Ordinary employees
- Access needed: Access the OA system. Service type: Web proxy. Role authorization: Create a www.oa.com resource in the Web proxy service, and bind this resource with an ordinary employee or the group the ordinary employee belongs to.
- Access needed: Use the company email system to send and receive emails. Service type: Port forwarding. Role authorization: Create an email server resource in the port forwarding service, and bind this resource with an ordinary employee or the group the ordinary employee belongs to.

Role: Managers
- Access needed: Access the OA system and the finance system. Service type: Web proxy. Role authorization: Create two resources, www.oa.com (already created) and www.finance.com, in the Web proxy service, and bind these resources to a manager or the group the manager belongs to.
- Access needed: Access the file sharing server. Service type: File sharing. Role authorization: Create a file server resource in the file sharing service, and bind this resource with a manager or the group the manager belongs to.
- Access needed: Use the company email system to send and receive emails. Service type: Port forwarding. Role authorization: Bind the email server resource (already created in the port forwarding service) with a manager or the group the manager belongs to.
- Access needed: Convene teleconferences. Service type: Network extension. Role authorization: Enable the network extension function, configure the voice server's address into "the accessible internal network segment", and then bind the network extension service with a manager or the manager's group.
Once network service configuration is complete, when users with different roles log in to the
virtual gateway, the service resources they are able to see are also different.
- Ordinary employees
After ordinary employees on the move log in to the virtual gateway, they can see the resource links they are able to access, as shown in Figure 7-43, and can then access the resources by clicking the links.
Figure 7-43 Ordinary employee login interface
- Managers
Figure 7-44 displays the interface for managers on the move after logging in to the virtual gateway.
Figure 7-44 Manager login interface
Of these, the Web proxy and file sharing resources are all provided as links for selection, while port forwarding and network extension can only be used after clicking Start. But how does a remote user know which of the company's internal network resources they will be able to access after clicking Start? This requires that the network administrator use other channels, for example a bulletin, to inform the remote user of the domain names and addresses of the company's internal network resource servers. In this regard, Web proxy and file sharing are both advantageous, because when the remote user utilizes these two services, he/she can see which resources he/she can access from the resource list after logging in to the virtual gateway.
The relationship between a remote user's need to access the company internal network and
what kind of SSL VPN service should be enabled on the firewall can be broken down into two
points.
- The resource type (Web resource, file resource, TCP, IP) that the remote user accesses on the company's internal network determines what kind of SSL VPN service the network administrator should select.
For example, for a traveling employee who only needs to access Web resources and email resources, just two services, Web proxy and port forwarding, can be enabled.
However, if a manager needs to access four types of resources, then this requires that
four types of services be initiated for this user.
Note that since network extension covers the functionality of the other three services, to make configuration more convenient, we can also enable only the network extension service for the manager, allowing the manager to access all of the internal network's IP resources.
- Whether the remote user possesses access permissions to a certain resource is determined through role authorization configuration.
In order to avoid having to configure service authorization for each and every employee, we can establish two groups (ordinary employees and managers), add the two types of employees to the appropriate group, and then simply perform service authorization for these two role groups.
For example, if the Web proxy service is enabled for both a traveling employee and a manager, the traveling employee would only be able to access the OA system (www.oa.com), while the manager would enjoy access permissions to both the OA system and the finance system (www.finance.com), as configured in role authorization.
8 Hot Standby
In the previous chapters, I, Dr. WoW, explained the basic functions of firewalls. Those lessons concerned configuring functions on a single firewall; however, in order to increase network reliability, we often need to configure identical functions on two firewalls so that they can back each other up. How can this be accomplished?
This requires the use of a major, special firewall function that is the focus of this chapter: hot standby.
8.1 Hot Standby Overview
8.1.1 Dual Device Deployment Improving Network Availability
The dynamic development of mobile working, online shopping, instant messaging, Internet finance, online education, and other similar network services has been accompanied by a relentless increase in both the number and the importance of services on networks. Therefore, uninterrupted network transmission has become a challenge in urgent need of resolution.
On the left side of Figure 8-1, a firewall has been deployed at an enterprise network's egress to forward all traffic between the intranet and the external network. If the firewall fails, all traffic between the intranet and the external network is severed. Therefore, if only one device is used in such a key network position, we must accept the risk of a network interruption due to a single point of failure, regardless of how reliable the device is.
Therefore, when we design a network architecture, we usually deploy two (dual) or more
devices in key network positions to improve network reliability. On the right side of Figure
8-1, we can see that when one firewall fails, traffic will be forwarded through the other
firewall.
Figure 8-1 Dual device deployment improving network reliability
[Figure: left, a single router-firewall-switch chain between the external network and internal network users, where a firewall fault severs all traffic; right, two parallel chains, so that when one firewall fails, traffic is forwarded through the other.]
8.1.2 Only Routing Failover Needs to Be Considered in Dual Router Deployments
If using traditional network devices (such as routers or Layer 3 switches), all that needs to be
done to guarantee reliable service is to configure routing failover on two devices. This is
because ordinary routers and switches don't record packets' exchange state and
application-level information, and simply forward packets according to their routing tables.
An example is provided below to illustrate this.
As shown in Figure 8-2, OSPF runs on routers R1, R2, R3, and R4. Under normal circumstances, because an Ethernet interface's default OSPF cost is 1, from the perspective of R3, the cost of the link on which R1 is positioned (R3 -> R1 -> R4 -> FTP server) is 3. And, because we've configured an OSPF cost of 10 on the interfaces of the R2 link (R3 -> R2 -> R4 -> FTP server), from the perspective of R3, the cost of the link on which R2 is positioned is 21. As traffic will only be forwarded through the link with the lower cost, traffic between the FTP client and the server will only be forwarded through R1.
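As a sketch, raising the cost on one of the R2-link interfaces looks roughly like this in VRP-style CLI (shown here for R3's interface toward R2; the same would be repeated on the other interfaces of that link, while the R1-link interfaces keep the default cost of 1):

```
interface Ethernet0/0/1
 ospf cost 10     # make this link less preferred so it serves only as a backup path
```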
Figure 8-2 Traffic forwarded through the link with the lower routing cost
[Figure: the FTP client (192.168.1.10/24) connects to R3 (GE0/0/0 192.168.1.1/24); the FTP server (1.1.1.10/24) connects to R4 (GE0/0/0 1.1.1.1/24). Two paths join R3 and R4: via R1 (R3 Eth0/0/0 10.1.1.1/24 to R1 Eth0/0/0 10.1.1.2/24, and R1 Eth0/0/1 10.1.3.1/24 to R4 Eth0/0/0 10.1.3.2/24), with interfaces at the default cost of 1; and via R2 (R3 Eth0/0/1 10.1.2.1/24 to R2 Eth0/0/0 10.1.2.2/24, and R2 Eth0/0/1 10.1.4.1/24 to R4 Eth0/0/1 10.1.4.2/24), with interfaces configured with cost 10. Traffic flows along the R1 path.]
As OSPF adds only the optimal routes to the routing table, we can see only the lower-cost routes in R3's routing table (below). Therefore, packets to/from the FTP server (destination address 1.1.1.0/24) can only be forwarded through R1 (next hop: 10.1.1.2).
[R3] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 11       Routes : 11

Destination/Mask    Proto   Pre  Cost   Flags NextHop         Interface
1.1.1.0/24          OSPF    10   3      D     10.1.1.2        Ethernet0/0/0
10.1.1.0/24         Direct  0    0      D     10.1.1.1        Ethernet0/0/0
10.1.1.1/32         Direct  0    0      D     127.0.0.1       Ethernet0/0/0
10.1.2.0/24         Direct  0    0      D     10.1.2.1        Ethernet0/0/1
10.1.2.1/32         Direct  0    0      D     127.0.0.1       Ethernet0/0/1
10.1.3.0/24         OSPF    10   2      D     10.1.1.2        Ethernet0/0/0
10.1.4.0/24         OSPF    10   12     D     10.1.1.2        Ethernet0/0/0
127.0.0.0/8         Direct  0    0      D     127.0.0.1       InLoopBack0
127.0.0.1/32        Direct  0    0      D     127.0.0.1       InLoopBack0
192.168.1.0/24      Direct  0    0      D     192.168.1.1     GigabitEthernet0/0/0
192.168.1.1/32      Direct  0    0      D     127.0.0.1       GigabitEthernet0/0/0
As shown in Figure 8-3, when R1 fails, the cost of the link on which R1 is positioned becomes infinitely great, while to R3 the cost of R2's link is still 21. At this time, the network routes will converge, and traffic will be forwarded through R2. The time required for traffic to switch from R1 to R2 is the network's routing convergence time. If the routing convergence time is relatively short, traffic transmission won't be interrupted.
Figure 8-3 Routing failover ensuring uninterrupted services
[Figure: the same topology as Figure 8-2, but with a fault on the R1 link; traffic between the FTP client (192.168.1.10/24) and the FTP server (1.1.1.10/24) now flows along the R2 path, whose interfaces are configured with cost 10.]
From the routing table on R3 below, we can learn that when a failure occurs on R1's Eth0/0/1
interface, packets to/from the FTP server (destination address is 1.1.1.0/24) can only be
forwarded through R2 (next hop: 10.1.2.2).
[R3] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 10       Routes : 10

Destination/Mask    Proto   Pre  Cost   Flags NextHop         Interface
1.1.1.0/24          OSPF    10   21     D     10.1.2.2        Ethernet0/0/1
10.1.1.0/24         Direct  0    0      D     10.1.1.1        Ethernet0/0/0
10.1.1.1/32         Direct  0    0      D     127.0.0.1       Ethernet0/0/0
10.1.2.0/24         Direct  0    0      D     10.1.2.1        Ethernet0/0/1
10.1.2.1/32         Direct  0    0      D     127.0.0.1       Ethernet0/0/1
10.1.4.0/24         OSPF    10   20     D     10.1.2.2        Ethernet0/0/1
127.0.0.0/8         Direct  0    0      D     127.0.0.1       InLoopBack0
127.0.0.1/32        Direct  0    0      D     127.0.0.1       InLoopBack0
192.168.1.0/24      Direct  0    0      D     192.168.1.1     GigabitEthernet0/0/0
192.168.1.1/32      Direct  0    0      D     127.0.0.1       GigabitEthernet0/0/0
8.1.3 Session Failover Also Needs to Be Considered in Dual Firewall Deployments
Everything changes when we replace a traditional network device with a stateful inspection
firewall. Let's review the content we discussed in "Stateful Inspection and Session
Mechanism": stateful inspection firewalls inspect only the first packet of a flow, and establish
a session to record packets' stateful information (including the source IP address, source port,
destination IP address, destination port, protocol, etc.). Subsequent packets in this data flow
must match a session to be forwarded by the firewall.
Below we'll give an example to illustrate this. Two firewalls (FW1 and FW2) are deployed in a network, and OSPF runs on the two firewalls, R1, and R2. As shown on the left side of Figure 8-4, under normal circumstances, as the OSPF cost of the link on which FW1 sits is relatively low, packets will be forwarded through FW1. A session will be established on FW1, and all subsequent packets will match the session and be forwarded.
The right side of Figure 8-4 shows that when FW1 fails, traffic will be directed onto FW2 based on the upstream and downstream devices' routing information. However, as there is no matching session on FW2, packets will be discarded by FW2, leading to service interruption. The user then needs to reinitiate the access request (for example, by restarting the FTP download) and trigger FW2 to establish a new session before the user's service can continue.
Figure 8-4 Session failover also needs to be considered in dual firewall deployment
[Figure: FW1 and FW2 sit in parallel between R1 (downstream, toward the FTP client 192.168.1.10/24) and R2 (upstream, toward the FTP server 1.1.1.10/24). Left: under normal conditions, a session is established on FW1 for the first SYN packet. Right: when FW1 fails, traffic shifts to FW2, where subsequent packets are discarded due to session mismatch.]
A session exists on FW1, as shown below:
[FW1] display firewall session table
Current Total Sessions : 1
ftp VPN:public --> public 192.168.1.10:2050-->1.1.1.10:21
No session exists on FW2, as shown below:
[FW2] display firewall session table
Current Total Sessions :0
8.1.4 Hot Standby Resolving the Problem with Firewall Session Failover
So, how can we resolve this problem with achieving session failover to ensure service
continuity after active/standby switchover between the two firewalls? Here, the firewall hot
standby function lends a helping hand!
As shown on the left side of Figure 8-5, the most important feature of the firewall hot standby function is to negotiate active/standby states and synchronize important state and configuration information, including session and server-map table information, between the two firewalls through the failover channel (heartbeat link). After the hot standby function is enabled, one of the two firewalls will become the primary device and the other the backup device, based upon the administrator's configuration. The firewall that becomes the primary device (FW1) handles traffic and synchronizes important state and configuration information, including session and server-map table information, to the backup device (FW2) through the heartbeat link. The firewall that becomes the backup device (FW2) does not handle traffic, and only receives the state and configuration information from the primary device (FW1) through the failover channel.
As shown on the right side of Figure 8-5, when the link on which the primary device FW1 resides fails, the two firewalls will exchange packets over the failover channel and renegotiate their active/standby states. At this time, FW2 will negotiate to become the new primary device and handle traffic, while FW1 will negotiate to become the backup device and will not handle traffic. Concurrently, service traffic will be redirected to the new primary device (FW2) by the upstream and downstream devices. As FW2 already received the primary device's backup information (such as session and configuration information) while serving as the backup device, service packets will match the session and be forwarded.
The backup of routing, session, and configuration information guarantees that the backup device FW2 will successfully replace the original primary device FW1, thus avoiding service interruption.
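On Huawei firewalls, hot standby is implemented by HRP (Huawei Redundancy Protocol). As a minimal sketch of the enabling steps, they look roughly as follows; the interface number and peer address here are examples only, and the complete configuration, including VGMP/VRRP groups, is covered later in this chapter:

```
# Designate the heartbeat interface and the peer's heartbeat address,
# then enable hot standby so the two firewalls negotiate active/standby states.
hrp interface GigabitEthernet 1/0/0 remote 10.10.10.2
hrp enable
```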
Figure 8-5 Hot standby ensuring service continuity
[Figure: left, FW1 (active) establishes a session for the first SYN packet from the FTP client (192.168.1.10/24) to the FTP server (1.1.1.10/24) and backs it up to FW2 (standby) over the failover channel; right, after a fault on FW1's link, FW2 becomes active and forwards subsequent packets because they match the backed-up session.]
There is a session on FW1, as shown below:
[FW1]display firewall session table
Current Total Sessions : 1
ftp VPN:public --> public 192.168.1.10:2050-->1.1.1.10:21
There is also a session on FW2, as shown below:
[FW2]display firewall session table
Current Total Sessions : 1
ftp VPN:public --> public 192.168.1.10:2050-->1.1.1.10:21
The method introduced above is the active/standby failover method of hot standby. In typical
active/standby failover scenarios, the backup device does not handle service traffic, and is in
an idle state. If you don't wish for the device you've bought to be idle, or if there is too much
traffic for one device to handle, we can use the load sharing method of hot standby.
As shown in Figure 8-6, in a load sharing scenario, both firewalls are primary devices, and each establishes sessions and handles service traffic. At the same time, the two firewalls also serve as each other's backup devices, receiving each other's backup session and configuration information. As seen on the right side of Figure 8-6, when one of the firewalls fails, the other firewall will handle all service traffic. As the two firewalls' session information is mutually backed up, all subsequent service packets can match a session on either firewall and be forwarded, avoiding service interruption.
Figure 8-6 Load sharing method of hot standby
[Figure: left, FW1 and FW2 are both active: PC1's (192.168.1.10/24) FTP session to the FTP server (1.1.1.10/24) is established on FW1, and PC2's (192.168.1.20/24) HTTP session to the Web server (1.1.1.20/24) is established on FW2, with mutual session backup over the failover channel; right, when FW1 fails, subsequent FTP and HTTP packets are all forwarded by FW2 after matching the backed-up sessions.]
There are FTP and HTTP sessions on FW1, as shown below:
[FW1]display firewall session table
Current Total Sessions : 2
ftp VPN:public --> public 192.168.1.10:2050-->1.1.1.10:21
http VPN:public --> public 192.168.1.20:2080-->1.1.1.20:80
There are also FTP and HTTP sessions on FW2, as shown below:
[FW2]display firewall session table
Current Total Sessions : 2
ftp VPN:public --> public 192.168.1.10:2050-->1.1.1.10:21
http VPN:public --> public 192.168.1.20:2080-->1.1.1.20:80
8.1.5 Summary
To improve network reliability and avoid single point of failures, we need to deploy two
network devices at key network nodes. If these devices are routers or switches, we can simply
configure routing failover. If these devices are firewalls, we also need to provide failover for
stateful information (such as the session table, etc.) between the firewalls.
The firewall hot standby function provides a special failover channel used in negotiating
active/standby states between two firewalls and in providing backup state information about
sessions, etc. Hot standby includes active/standby failover and load sharing scenarios.
Active/standby failover refers to only having the primary device handle traffic, with the
backup device idle; when a failure occurs on the primary device's interface(s), link, or the
entire device, the backup device becomes the primary device and takes over its services. Load
sharing can also be called "complementary active/standby", as both devices handle services
simultaneously. When one device fails, the other immediately takes over its services,
guaranteeing that the services originally forwarded through the failed device are not
interrupted.
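The role of session backup can be modeled in a few lines of Python (an illustrative sketch, not vendor code; the class and flow tuple are invented for demonstration):

```python
# Model of session backup in hot standby (illustrative sketch, not vendor code).
# A stateful firewall forwards only packets that match an existing session, so
# the standby device must receive session backups to take over without loss.

class Firewall:
    def __init__(self, name):
        self.name = name
        self.sessions = set()  # e.g. ("ftp", "192.168.1.10:2050", "1.1.1.10:21")

    def backup_to(self, peer):
        """Copy session state to the peer, as if over the failover channel."""
        peer.sessions |= self.sessions

    def forward(self, flow):
        """Stateful check: a flow with no matching session is dropped."""
        return flow in self.sessions

fw1, fw2 = Firewall("FW1"), Firewall("FW2")
ftp = ("ftp", "192.168.1.10:2050", "1.1.1.10:21")
fw1.sessions.add(ftp)    # FTP session established on FW1
fw1.backup_to(fw2)       # hot standby backs the session up to FW2
print(fw2.forward(ftp))  # True: FW2 can take over the FTP flow after a failover
```

Without the `backup_to()` step, `fw2.forward(ftp)` would return False, and the standby device would drop mid-flow FTP packets after a switchover.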
8.2 The Story of VRRP and VGMP
For readers familiar with routers and switches, the VRRP protocol will certainly be the first to
spring to mind when network dual device deployment is mentioned, and the firewall hot
standby function is actually an expansion on the foundation provided by the VRRP protocol.
Therefore, as I explain the story of VGMP and VRRP step by step in this section, I will first
discuss VRRP, and then introduce VGMP from this basis.
8.2.1 VRRP Overview
In the router or firewall hot standby networking discussed in the above section, whether traffic
was directed to the primary or backup device was decided by the upstream and downstream
devices' routing tables. This is because dynamic routing can dynamically adjust routing
tables according to link states to automatically direct traffic onto the correct device.
However, what if the upstream and downstream devices use static routing? This is
indeed a problem, as static routes cannot be adjusted dynamically.
Let's look at an example of this below. As shown in Figure 8-7, the router is configured as the
default gateway for the hosts on the internal network. When a host wants to access the
Internet, it first sends a packet to the gateway, which then sends the packet on to the Internet.
However, when the gateway fails, communication between the hosts and the Internet is
interrupted.
Figure 8-7 A single gateway failure resulting in service interruption
[Figure: internal network users connect through a switch to a single router/firewall acting as
gateway; when the gateway fails, all traffic to the Internet is interrupted.]
As shown in Figure 8-8, if we want to solve the problem of network interruptions, we need to
add multiple gateways (Router 1 and Router 2). However, dynamic routing cannot be
configured on hosts; only a default gateway can be specified. If we configure Router 1 as the
default gateway, then when Router 1 fails, traffic will not be automatically directed to
Router 2. Only manually changing the host's default gateway to Router 2 will redirect the
host's traffic there, and this will certainly interrupt the host's Internet access for a period of
time. Moreover, a large network may contain hundreds of hosts, and manually adjusting every
host to achieve gateway failover is clearly not realistic.
Figure 8-8 Multiple gateways cannot guarantee uninterrupted service
[Figure: a host with Router1 set as its default gateway loses Internet access when Router1
fails, and must manually re-set its default gateway to Router2.]
In order to better resolve the problem of network interruptions occurring due to gateway
failures, network developers have developed the VRRP protocol. The VRRP protocol is a
kind of fault-tolerant protocol, and guarantees that when a failure occurs on a host's
next hop router (the default gateway), a backup router will automatically replace the
failed router in completing packet forwarding tasks, thereby maintaining continuous
and reliable network communication.
In Figure 8-9, we've grouped a set of routers (actually, their downstream interfaces) within a
LAN into a VRRP group. A VRRP group is equivalent to a virtual router, which has its own
virtual IP address and virtual MAC address (format: 00-00-5E-00-01-{VRID}, where VRID is
the VRRP group's ID). Therefore, the hosts within the LAN can configure their default
gateway as the VRRP group's virtual IP address. The hosts 'think' they are communicating
with the virtual router and use it to communicate with the external network.
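For instance, the virtual MAC address for any VRRP group can be derived from its VRID with a small helper (a hypothetical sketch; the function name is invented, but the format is the one given above):

```python
# Deriving a VRRP group's virtual MAC address from its group ID (VRID),
# following the format 00-00-5E-00-01-{VRID}.

def vrrp_virtual_mac(vrid: int) -> str:
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be between 1 and 255")
    return "00-00-5E-00-01-%02X" % vrid  # VRID as a two-digit hex byte

print(vrrp_virtual_mac(1))  # 00-00-5E-00-01-01, as in Figure 8-9
```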
Figure 8-9 VRRP basics
[Figure: Router1 (GE1/0/1, 10.1.1.1/24, Master, priority 110) and Router2 (GE1/0/1,
10.1.1.2/24, Backup, priority 100) form VRRP group 1, a virtual router with virtual IP
address 10.1.1.3/24 and virtual MAC address 00-00-5E-00-01-01. The host uses 10.1.1.3/24
as its default gateway; normally Router1 forwards its packets, and Router2 takes over if
Router1 fails.]
The routers in a VRRP group determine their own states based on the priority specified
by the administrator. The router with the highest priority takes the Master state, and the
others take the Backup state. A router in the Master state is called the master router, and a
router in the Backup state is called a backup router. When the master router is operating
normally, hosts within the LAN communicate with the external world through it. If the master
router fails, the backup router with the next highest VRRP priority becomes the new master
router and takes over packet forwarding, guaranteeing that the network is not interrupted.
8.2.2 VRRP Working Mechanisms
Here, I will use visual aids to demonstrate the entire process of VRRP operations, in order to
help readers in understanding VRRP's implementation principles. So long as you look through
the below figures in their entirety and commit them to memory, you will assuredly understand
and remember the VRRP protocol.
1.
After the administrator finishes configuring the VRRP group and priorities on the routers,
the VRRP group will temporarily work in the Initialize state. As shown in Figure 8-10,
after the VRRP group receives the messages indicating that the interfaces have been
brought up, the group's routers switch into the Backup state and wait for their timers to
expire before switching into the Master state.
Figure 8-10 VRRP group states' switching from Initialize to Backup
[Figure: after their interfaces come up, Router1 (GE1/0/1, priority 110) and Router2
(GE1/0/1, priority 100) both enter the Backup state and wait for their timers to expire to
compete for the Master state.]
As shown in Figure 8-11, of the VRRP group's routers, the first router to change its
state to Master will become the master router. The router with the highest priority
in a VRRP group will have the shortest timer, meaning it is easiest for this router to
become the master router. This process is called the master router election.
After a successful election, the master router will immediately send periodic (the default
is one second) VRRP packets to all backup routers in the VRRP group to notify them of
its own Master state and priority.
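The election rule can be sketched as follows (an illustrative Python model, not vendor code; in reality the election is driven by per-router timers, which we collapse into a simple priority comparison):

```python
# Simplified master election (illustrative model): the router with the highest
# priority has the shortest timer, so it reaches the Master state first.

def elect_master(priorities):
    """priorities: router name -> VRRP priority. Returns router name -> state."""
    master = max(priorities, key=priorities.get)  # shortest timer expires first
    return {r: ("Master" if r == master else "Backup") for r in priorities}

print(elect_master({"Router1": 110, "Router2": 100}))
# {'Router1': 'Master', 'Router2': 'Backup'}
```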
Figure 8-11 Electing the master router
[Figure: Router1's higher priority (110) gives it the shorter timer, so it switches to Master
first and sends VRRP packets announcing its state and priority; Router2 (priority 100) sees
the higher priority and stays in the Backup state.]
2.
The master router will also send a gratuitous ARP packet to notify the switch connected
to it of the VRRP group's virtual MAC address and virtual IP address; this is shown in
Figure 8-12. An entry will be made in the downstream switch's MAC table recording the
relationship between the virtual MAC address and port Eth0/0/1.
Figure 8-12 The master router sending a gratuitous ARP packet
[Figure: the master router (Router1) announces the VRRP group's virtual MAC address
00-00-5E-00-01-01 in a gratuitous ARP packet; the downstream switch records the mapping
between this MAC address and port Eth0/0/1 in its MAC table.]
3.
As shown in Figure 8-13, since the gateway on the intranet's PC is set to the virtual IP
address of VRRP group 1, when an intranet PC accesses the Internet, it will first
broadcast ARP packets in the broadcast network to request the virtual MAC address that
corresponds with the virtual IP address. At this point, only the master router will respond
to this ARP packet by giving its virtual MAC address to the PC.
Figure 8-13 Master router responding to the PC's ARP request packet
[Figure: the PC (gateway address 10.1.1.3/24) broadcasts an ARP request for 10.1.1.3; only
the master router (Router1) replies, giving the virtual MAC address 00-00-5E-00-01-01.]
4.
As shown in Figure 8-14, the PC uses the virtual MAC address as the destination MAC
address for encapsulating packets, and then sends a packet to the switch. The switch
forwards the packet sent by the PC through port Eth0/0/1 to Router 1 according to the
MAC address and port relationship recorded in the MAC table.
Figure 8-14 A downstream switch sending a packet to the master router
[Figure: the PC sends a packet with destination MAC 00-00-5E-00-01-01 and destination IP
10.1.1.3; the switch looks up its MAC table and forwards the packet through port Eth0/0/1 to
the master router, Router1.]
The above describes how the master and backup routers establish their states and operate
under normal conditions. Below, we'll introduce state switching between the master and
backup routers and the related operational processes.
1.
As shown in Figure 8-15, when the master router fails (a failure of the entire Router 1
device or a failure on interface GE1/0/1), it will be unable to send VRRP packets
announcing its state. If a backup router does not receive a VRRP packet from the master
router before its timer expires, it will deem the master router to have failed, and will
therefore switch its own state to Master.
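The backup router's timeout logic can be sketched like this (a simplified model; RFC 5798's real master-down interval also adds a priority-based skew time, which we omit):

```python
# Simplified backup-router timeout detection for VRRP (illustrative model).

ADVERT_INTERVAL = 1.0                       # seconds, the VRRP default
MASTER_DOWN_INTERVAL = 3 * ADVERT_INTERVAL  # tolerate three missed packets

def backup_state(now, last_advert_seen):
    """State of a backup router, given when it last saw a VRRP advertisement."""
    if now - last_advert_seen > MASTER_DOWN_INTERVAL:
        return "Master"   # master presumed failed: take over forwarding
    return "Backup"

print(backup_state(now=10.0, last_advert_seen=9.5))  # Backup (master alive)
print(backup_state(now=10.0, last_advert_seen=5.0))  # Master (master failed)
```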
Figure 8-15 VRRP state switching
[Figure: Router1 fails (its GE1/0/1 enters the Initialize state) and can no longer send VRRP
packets; after its timer expires without receiving them, Router2 (priority 100) switches its
GE1/0/1 to the Master state.]
There is also another scenario: if the master router abandons its position as Master (for
example the master router withdraws from the VRRP group), it will immediately send a
VRRP packet with a priority of 0, causing the backup router to quickly switch to become
the master router.
2.
As shown in Figure 8-16, after the completion of state switching, the new master router
will immediately send a gratuitous ARP packet carrying the VRRP group's virtual MAC
address and virtual IP address, to refresh the MAC table entries for the device connected
to it (the downstream switch). The relationship between the virtual MAC address and the
new port Eth0/0/2 will be recorded in the downstream switch's MAC table.
Figure 8-16 The new master router sending a gratuitous ARP packet
[Figure: the new master router (Router2) sends a gratuitous ARP packet carrying the virtual
MAC address 00-00-5E-00-01-01; the downstream switch updates its MAC table to map this
MAC address to port Eth0/0/2.]
3.
As shown in Figure 8-17, after the intranet PC sends a packet to the switch, the switch
will forward the packet sent by the PC through port Eth0/0/2 to Router 2. Therefore, the
intranet PC's traffic is all forwarded through the new master router, Router 2. This
process is completely transparent to the user, and the intranet PC does not perceive that
the master router has already switched from Router 1 to Router 2.
Figure 8-17 The downstream switch sending a packet to the new master router
[Figure: with Router1 in the Initialize state, the switch forwards the PC's packet (destination
MAC 00-00-5E-00-01-01, destination IP 10.1.1.3) through port Eth0/0/2 to the new master
router, Router2.]
4.
In Figure 8-18, when the failure on the original master router (the current backup router)
is fixed, this router's priority will again be higher than the current master router's. At this
time, if the preemption function has been enabled, the original master router will change
its state to Master after the preemption timer expires and become the master router again;
if the preemption function has not been enabled, it will remain in the Backup state.
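The preemption decision reduces to a small rule (illustrative sketch; the function name is invented):

```python
# Preemption decision for a recovered router (illustrative sketch).

def recovered_router_state(my_priority, master_priority, preempt):
    """State the recovered router takes once its preemption timer expires."""
    if preempt and my_priority > master_priority:
        return "Master"   # reclaim the Master role
    return "Backup"       # stay in Backup despite the higher priority

print(recovered_router_state(110, 100, preempt=True))   # Master
print(recovered_router_state(110, 100, preempt=False))  # Backup
```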
Figure 8-18 Original master router preemption after a failure is fixed
[Figure: after its fault is rectified, Router1 (priority 110, preemption enabled) announces its
higher priority in VRRP packets and switches back to the Master state, while Router2
(priority 100) returns to Backup.]
8.2.3 Issues Created by Multiple, Independent VRRP States
The above section explained how running VRRP on a gateway's downstream interface can
ensure gateway availability. But what would happen if we run VRRP simultaneously on both
a gateway's upstream and downstream interfaces?
In Figure 8-19, two devices' downstream interfaces join VRRP group 1, and their upstream
interfaces join VRRP group 2. Under normal circumstances, R1's state is Master in both
VRRP group 1 and VRRP group 2, so R1 is the master router in both groups. As we learned
above when discussing VRRP principles, all service packets between the intranet and the
external network will therefore be forwarded through R1.
Figure 8-19 Multiple VRRPs operating simultaneously
[Figure: R1 and R2 connect the internal network (downstream switch LSW1, PC1's gateway
address 10.1.1.1/24) to the Internet (upstream switch LSW2, PC2). The downstream GE1/0/1
interfaces form VRRP group 1 (virtual IP address 10.1.1.1/24) and the upstream GE1/0/3
interfaces form VRRP group 2 (virtual IP address 1.1.1.1/24); normally R1 is Master in both
groups. After R1's GE1/0/1 fails, R2 becomes Master in VRRP group 1, so upstream packets
go through R2, but R1 remains Master in VRRP group 2, so return packets still go to R1 and
are discarded.]
When R1's GE1/0/1 interface fails, R1's state in VRRP group 1 switches to Initialize, and R2's
state in VRRP group 1 switches to Master. R2 therefore becomes the master router in VRRP
group 1, and sends a gratuitous ARP packet to LSW1, refreshing the MAC table in LSW1; at
this point PC1's packets accessing PC2 will be forwarded through R2. However, as the link
between R1 and LSW2 is operating normally, R1 is still the master router in VRRP group 2,
while R2 is still the backup router in VRRP group 2. Therefore, the return packets sent from
PC2 to PC1 will still be forwarded to R1. However, as R1's downstream interface GE1/0/1
has failed, R1 can only discard these return packets, resulting in an interruption of service
traffic.
After finishing reading through this process, readers will certainly have discovered the
problem with VRRP: VRRP groups are independent of one another, meaning that when there
are multiple VRRP groups on one device, their states can't be backed up.
Huawei's firewalls, routers, switches and other network devices have a unique method of
solving this VRRP problem. Below, we'll focus on introducing how Huawei's firewalls resolve
this problem.
8.2.4 The Creation of VGMP Solves VRRPs' Problems
In order to resolve this problem of a lack of state unity across multiple VRRP groups,
Huawei's firewalls incorporate VGMP (VRRP Group Management Protocol) to exercise
unified management over VRRP groups and guarantee their state consistency. We add all the
VRRP groups on a firewall into a VGMP group, and the VGMP group centrally supervises
and manages the states of all its VRRP groups. If the VGMP group discovers that one of its
VRRP groups' states has changed, it will mandate that all the VRRP groups in the VGMP
group undergo unified state switching, guaranteeing state consistency across all VRRP
groups.
A VGMP group has two basic attributes (state and priority) and three basic operating
principles:
- A VGMP group's state determines the state of the VRRP groups within the group, and
also determines its firewall's active/standby state.
- The VGMP group states of two firewalls are determined by mutual priority comparison.
The state of the VGMP group with the higher priority is active, while the state of the
VGMP group with the lower priority is standby.
- A VGMP group will update its own priority according to state changes of the VRRP
groups within it. When a VRRP group's state changes to Initialize, the VGMP group's
priority decreases by 2.
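These principles can be condensed into a small Python model (illustrative only, not vendor code; the base priorities mirror the defaults discussed later in this chapter, and the function names are invented):

```python
# Model of VGMP priority adjustment and active/standby negotiation
# (illustrative sketch, not vendor code).

def vgmp_priority(base, vrrp_states):
    """Priority drops by 2 for each member VRRP group in the Initialize state."""
    return base - 2 * vrrp_states.count("Initialize")

def negotiate(prio_a, prio_b):
    """Higher priority becomes active, the peer becomes standby."""
    return ("active", "standby") if prio_a > prio_b else ("standby", "active")

# FW1's GE1/0/1 fails, so its VRRP group 1 enters Initialize:
fw1 = vgmp_priority(65001, ["Initialize", "Active"])  # 64999
fw2 = vgmp_priority(65000, ["Standby", "Standby"])    # 65000
print(negotiate(fw1, fw2))  # ('standby', 'active'): FW2 takes over
```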
Now that we understand the basic principles of VGMP group operation, let's take a look
together at how the VGMP protocol resolves the problem of VRRP group state discordance.
As shown in Figure 8-20, on FW1 we've added both VRRP group 1 and VRRP group 2 to a
VGMP group in the active state. On FW2 we've added VRRP group 1 and VRRP group 2 to a
VGMP group in the standby state. As a VGMP group's state determines the states of the
group's VRRP groups, the states of VRRP groups 1 and 2 on FW1 are both active, and the
states of VRRP groups 1 and 2 on FW2 are both standby. Therefore, FW1 is the active router
in VRRP group 1 and VRRP group 2 (it is the primary device of the two firewalls), while
FW2 is the standby router for them (it is the backup device of the two firewalls), and therefore
upstream and downstream service traffic will all be directed to the primary device FW1 for
forwarding.
Figure 8-20 VGMP ensuring unified VRRP state switching
[Figure: FW1's VRRP group 1 (10.1.1.1/24, downstream GE1/0/1) and VRRP group 2
(1.1.1.1/24, upstream GE1/0/3) belong to its active VGMP group, while FW2's belong to its
standby VGMP group; a heartbeat link connects the two firewalls' GE1/0/2 interfaces, and all
upstream and return traffic passes through FW1. After FW1's GE1/0/1 fails, FW1's VGMP
group switches to standby and FW2's to active, all VRRP groups switch states accordingly,
the switches' MAC tables are refreshed, and traffic in both directions moves to FW2.]
A heartbeat cable is needed between two firewalls to exchange VGMP protocol packets.
[Question from Dr. WoW] Above, while explaining VRRP, the states we studied were
"Master" and "Backup". Why have these changed to "active" and "standby" here?
Answer: In the USG6000 series of firewalls, hot standby states (originally "Master" and
"Slave") and VRRP states (originally "Master" and "Backup") have been uniformly changed
to read "active" and "standby". Therefore, when you see other documents use multiple
different terms to describe states, please don't think this unusual; understanding these states as
the "active" and "standby" described here will work just fine.
As shown in Figure 8-20, when one of FW1's interfaces fails, the process by which the VGMP
group controls unified state switching of the VRRP groups is as follows:
1.
When FW1's GE1/0/1 interface fails, VRRP group 1 on FW1 switches states (from active
to initialize).
2.
After FW1's VGMP group perceives this failure, it will lower its own priority, and then
compare priorities with FW2's VGMP group and renegotiate their active/standby states.
3.
After negotiation, the state of the VGMP group on FW1 switches from active to standby,
and the state of the VGMP group on FW2 switches from standby to active.
4.
At the same time, as a VGMP group's state determines the states of the group's VRRP
groups, FW1's VGMP group will mandate that its VRRP group 2 switch from the active
state to the standby state, and FW2's VGMP group will mandate that its VRRP groups 1
and 2 switch from the standby state to the active state. In this way, FW2 becomes the
active router in VRRP group 1 and VRRP group 2, and thus the primary device of the
two firewalls; meanwhile, FW1 becomes the standby router in VRRP group 1 and VRRP
group 2, and thus the backup device.
5.
FW2 will send gratuitous ARP packets to both LSW1 and LSW2 to update their MAC
address tables, causing PC1's upstream packets and the return packets from PC2 to be
forwarded through FW2. This completes the unified switching of VRRP group states,
and guarantees uninterrupted service traffic.
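The unified switching just described can be modeled in a few lines (an illustrative sketch, not vendor code; a real device would leave a failed VRRP group in the Initialize state rather than force it to follow):

```python
# Model of a VGMP group mandating unified switching of its member VRRP groups
# (illustrative sketch; failed groups are simplified to follow the group state).

class VGMPGroup:
    def __init__(self, state, members):
        self.state = state      # "active" or "standby"
        self.members = members  # VRRP group name -> VRRP state

    def switch_to(self, new_state):
        self.state = new_state
        for name in self.members:  # unified switching: every member follows
            self.members[name] = new_state

fw1 = VGMPGroup("active", {"vrrp1": "initialize", "vrrp2": "active"})
fw2 = VGMPGroup("standby", {"vrrp1": "standby", "vrrp2": "standby"})
fw1.switch_to("standby")  # FW1 loses the negotiation after its interface fails
fw2.switch_to("active")   # FW2 becomes active in both VRRP groups at once
print(fw2.members)  # {'vrrp1': 'active', 'vrrp2': 'active'}
```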
8.2.5 VGMP Packet Structure
After reading through the above content, everyone should understand that VGMP not only
allows for unified management of VRRP groups, but also replaces VRRP in managing
firewall active/standby states as needed. At this point a question arises: how is information
about states and priorities sent between two firewalls' VGMP groups?
In 8.2.2 VRRP Working Mechanisms above, we explained that two routers' VRRP groups use
VRRP packets to send state and priority information. So, do two firewalls' VGMP groups still
use VRRP packets to send state and priority information? This is of course not very likely, as
new leadership naturally brings with it new methods.
During hot standby, two firewalls' VGMP groups use VGMP packets to send state and
priority information. VGMP is Huawei's proprietary protocol, and this protocol expands on
and alters VRRP packets to achieve the firewall hot standby function, deriving multiple kinds
of packets that use VGMP headers in their encapsulation. An understanding of VGMP packets
and headers is the foundation necessary to understand VGMP state negotiation and switching,
so let's first have a look at the structure of VGMP packets.
The VGMP packet structure discussed in this section applies to the USG2000/5000/6000 firewall series
and the USG9000 firewall series' V100R003 version.
Figure 8-21 VGMP packet structure
[Figure: a VGMP packet consists of a VRRP header followed by a VGMP header.]
From the sequence of VGMP packet encapsulation shown in Figure 8-21, we can see that
VGMP packets are rooted in VRRP packets and begin with a VRRP header. However, this is
not the standard VRRP header, but a "new VRRP header" that has been expanded and
amended by Huawei. The specific changes are listed below:
- The standard VRRP header's "Type" field only has a value of "1", while the new VRRP
header adds a value of "2". That is, if Type=1, this is a standard VRRP header; if Type=2,
this is the new kind of VRRP header altered by Huawei.
- The standard VRRP header's "Virtual Rtr ID" field represents the VRRP group ID, while
in the new, altered VRRP header this value is always set to "0".
- The standard VRRP header's "IP address" field has been eliminated from the new,
altered VRRP header.
- The "Priority" field in the standard VRRP header has been changed to a "Type2" field in
the new VRRP header.
  - When Type2=1, packets are encapsulated as heartbeat link detection packets.
  Heartbeat link detection packets are used to detect whether or not the peer device's
  heartbeat interface can receive packets from the sending device; this verifies whether
  or not the heartbeat interface can be used.
  - When Type2=5, packets are encapsulated as consistency check packets.
  Consistency check packets are used to inspect whether two firewalls in hot standby
  state have the same hot standby and policy configuration.
  - Only when Type2=2 will VRRP packets be further encapsulated with a VGMP header.
  These packets are further separated into three kinds according to the VGMP header's
  "vType" field:
    - VGMP packets (VGMP Hello packets). VGMP Hello packets are used to
    negotiate active/standby states between two firewalls' VGMP groups.
    - HRP heartbeat packets (HRP Hello packets). HRP heartbeat packets are used
    to detect whether a peer VGMP group is in a working state. A VGMP group in
    the active state sends HRP heartbeat packets to its peer VGMP group at
    intervals (the default is 1s) to notify the peer of its own VGMP group state and
    priority. If a VGMP group in the standby state doesn't receive HRP heartbeat
    packets from its peer group for three consecutive intervals, it will deem the
    peer VGMP group to have failed, and will switch its own state to active.
    - HRP data packets. Only by adding an HRP header after the VGMP header can
    a packet be encapsulated into an HRP data packet. HRP data packets are used for
    data backup between active and standby devices, including backup of
    command line configuration and state information.
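This packet taxonomy can be summarized as a small dispatcher (illustrative only; the Type and Type2 values follow the text, but the vType strings here are invented placeholders, since the actual encodings are not given):

```python
# Dispatcher over the header fields described above (illustrative sketch).

def classify(type_, type2=None, vtype=None):
    if type_ == 1:
        return "standard VRRP packet"
    if type_ == 2:                        # Huawei's "new VRRP header"
        if type2 == 1:
            return "heartbeat link detection packet"
        if type2 == 5:
            return "consistency check packet"
        if type2 == 2:                    # a VGMP header follows
            return {"hello": "VGMP Hello packet",
                    "hrp_hello": "HRP heartbeat packet",
                    "hrp_data": "HRP data packet"}.get(vtype, "unknown")
    return "unknown"

print(classify(2, type2=2, vtype="hrp_data"))  # HRP data packet
```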
By this point, everyone is likely wondering: if during firewall hot standby a new VRRP
header is used to encapsulate VGMP packets, then do standard VRRP packets still exist, and if
so, what are they used for? The answer is that standard VRRP packets still exist, and they
are still used for internal communication within VRRP groups. However, as their
priority field (Priority) is already a fixed value, it cannot be configured, and so standard
VRRP packets actually exist in name only. The loss of the Priority field means that standard
VRRP packets can no longer control negotiation of VRRP group state, and can only provide
notification of VRRP group states and virtual IP addresses between the active and standby
firewalls. This is the same as the role of the "emperor" in a constitutional monarchy—their
title is preserved but they lack the power to manage the nation.
VGMP's desire to take over management of firewall and VRRP group states means that
VGMP packets must contain VGMP group state and priority information. Let's look again at
the structure of the VGMP header.
- The "Mode" field states whether this is a request packet or a response packet.
- The "vgmpID" field states whether the VGMP group is an active group or a standby
group.
- The "vPriority" field states the VGMP group's priority.
In addition, information about a VGMP group's state is contained in the VGMP packet's
"data" field. These two points demonstrate that the VGMP protocol possesses the material
basis necessary to replace the standard VRRP protocol in managing VRRP group and
firewall states.
To summarize the above, the VGMP protocol alters the standard VRRP header and defines
different kinds of packets that use the VGMP header in encapsulation. With this in mind,
through what channel are these packets sent between two firewalls? Above, we discussed the
fact that firewalls use a failover channel (the heartbeat cable) to send backup data, and it's
obvious that HRP data packets are transmitted through the failover channel. Indeed, all of the
various VGMP packets discussed above (with the exception of standard VRRP packets) are
transmitted through the failover channel.
In addition, the USG6000 firewall series and the USG2000/5000 firewall series' V300R001
version also support the encapsulation of the various VGMP and HRP packets discussed
above (with the exception of standard VRRP packets) into UDP packets. The structure for this
is shown in Figure 8-22.
Figure 8-22 Using UDP to encapsulate a VGMP packet
[Figure: the VGMP packet from Figure 8-21 carried inside a UDP header.]
So what is the difference between using VRRP-encapsulated VGMP packets and
UDP-encapsulated VGMP packets? The former are multicast packets, can't be transmitted
outside the subnet, and aren't controlled by security policies; the latter are unicast packets, and
can be transmitted outside of the subnet as long as the route is accessible, and they are
controlled by security policies. A bit more specifically, if multicast packets are used, then the
two firewalls' heartbeat interfaces must be directly connected or connected through a Layer 2
switch, but no security policy needs to be configured. If unicast packets are used, then the two
firewalls' heartbeat interfaces can be connected through a Layer 3 device such as a router, but
a security policy permitting packets to pass in both directions between the Local zone and the
security zone the heartbeat interface is located in must be configured. In addition, when using
a service interface as the heartbeat interface, UDP-encapsulated VGMP packets must be used.
8.2.6 Firewall VGMP Groups' Default States
In 8.2.4 The Creation of VGMP Solves VRRPs' Problems, we already made a simple
introduction into how VGMP guarantees unified switching of VRRP group states, but the
actual switching process and packet exchange is a bit more complicated. Before introducing
the formation of VGMP states and the switching process in more detail, we'll first introduce
firewalls' VGMP groups' default states and priorities.
In Figure 8-23, each firewall provides two VGMP groups: the active group and the
standby group. By default, the active group's priority is 65001, and its state is active; the
standby group's priority is 65000, and its state is standby. In an active/standby failover
scenario, the primary device enables the active group, and all members (for example,
VRRP groups) join the active group; the backup device enables the standby group, and
all members join the standby group. In a load sharing scenario, both firewalls enable the
active and standby groups, and all members on each device join both the active group
and the standby group. FW1's active group and FW2's standby group form one
"active/standby" group, and FW2's active group and FW1's standby group form
another "active/standby" group; the two firewalls are both complementing each other's
"active/standby", creating load sharing.
Figure 8-23 Firewall VGMP groups
[Figure: two scenarios. Active/standby backup: FW1 enables only the active group (state
Active, priority 65001) and FW2 enables only the standby group (state Standby, priority
65000); the unused groups remain in the Initialize state, and HRP heartbeat packets are
exchanged between the peers. Load balancing: both firewalls enable both groups, so FW1's
active group (65001) pairs with FW2's standby group (65000), and FW2's active group pairs
with FW1's standby group.]
The above description is applicable to the USG2000/5000/6000 firewall series. As the
USG9000 firewall has both interface boards and service boards, their default priorities are
different.
The default VGMP group priorities for the USG9000 firewall series (V100R003) are as follows:
Active group default priority = 45001 + 1000 x (number of service boards + number of interface boards)
Standby group default priority = 45000 + 1000 x (number of service boards + number of interface boards)
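As a quick sanity check, the two formulas can be expressed as a small calculation (the function name is ours, not part of any Huawei tooling):

```python
def usg9000_default_priorities(service_boards: int, interface_boards: int):
    """Default VGMP group priorities for the USG9000 series (V100R003),
    per the formulas above: base value + 1000 x total board count."""
    boards = service_boards + interface_boards
    active = 45001 + 1000 * boards
    standby = 45000 + 1000 * boards
    return active, standby

# Hypothetical chassis with 2 service boards and 3 interface boards:
print(usg9000_default_priorities(2, 3))  # (50001, 50000)
```

Note that the active group's default priority is always exactly 1 higher than the standby group's, just as with the 65001/65000 defaults on the other series.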
8.2.7 The Process of State Formation for Active/Standby Failover
Hot Standby
The active/standby failover method of hot standby is in widespread use at present. Its
configuration and principles are relatively simple, and therefore we'll start with an explanation
of the processes through which states are formed in active/standby failover hot standby.
To allow everyone to truly experience how VRRP and VGMP function on firewalls, below
we'll first detail the configuration of hot standby through active/standby failover, and then
describe the process of hot standby state formation.
The hot standby configurations described in this section use the USG6000 firewall series as an example.
As shown in Figure 8-24, to implement the active/standby failover method of hot standby, we need to enable the active VGMP group on FW1 and add FW1's VRRP groups to it so that it monitors them. We also enable the standby VGMP group on FW2 and add all of FW2's VRRP groups to the standby VGMP group to monitor them.
Figure 8-24 Network diagram of active/standby failover hot standby
[Figure: PC1 (gateway address 10.1.1.1/24) sits on the internal network behind a downstream switch; PC2 is reached through an upstream switch and the Internet. FW1 (active) uses GE1/0/1 10.1.1.2/24 and GE1/0/3 1.1.1.2/24; FW2 (standby) uses GE1/0/1 10.1.1.3/24 and GE1/0/3 1.1.1.3/24. VRRP group 1 (virtual IP 10.1.1.1/24) faces the internal network and VRRP group 2 (virtual IP 1.1.1.1/24) faces the Internet. The heartbeat link connects the two firewalls' GE1/0/2 interfaces. The switch ports toward the firewalls are Eth0/0/1 (to FW1) and Eth0/0/2 (to FW2).]
The command to achieve this operation is vrrp vrid virtual-router-id virtual-ip
virtual-address [ ip-mask | ip-mask-length ] { active | standby }. This command is simple but
very useful and can accomplish the following two tasks:
 Add an interface to the VRRP group and assign a virtual IP address and mask. When the interface's IP address and the VRRP group's virtual IP address are not on the same subnet, a virtual IP address subnet mask must be specified.
 Use the active | standby parameter to add the VRRP group to the active or standby VGMP group.
Configuration of active/standby failover hot standby on two firewalls is shown in Table 8-1.
Table 8-1 Configuration of active/standby failover hot standby

Item: Configure VRRP group 1.
  Configuration on FW1:
    interface GigabitEthernet 1/0/1
     ip address 10.1.1.2 255.255.255.0
     vrrp vrid 1 virtual-ip 10.1.1.1 255.255.255.0 active
  Configuration on FW2:
    interface GigabitEthernet 1/0/1
     ip address 10.1.1.3 255.255.255.0
     vrrp vrid 1 virtual-ip 10.1.1.1 255.255.255.0 standby

Item: Configure VRRP group 2.
  Configuration on FW1:
    interface GigabitEthernet 1/0/3
     ip address 1.1.1.2 255.255.255.0
     vrrp vrid 2 virtual-ip 1.1.1.1 255.255.255.0 active
  Configuration on FW2:
    interface GigabitEthernet 1/0/3
     ip address 1.1.1.3 255.255.255.0
     vrrp vrid 2 virtual-ip 1.1.1.1 255.255.255.0 standby

Item: Configure the heartbeat interface.
  Configuration on FW1:
    hrp interface GigabitEthernet 1/0/2
  Configuration on FW2:
    hrp interface GigabitEthernet 1/0/2

Item: Enable hot standby.
  Configuration on FW1:
    hrp enable
  Configuration on FW2:
    hrp enable
The various VGMP packets and the HRP packets are all sent through the heartbeat interface, which can be understood as hot standby's "lifeblood". There are several key points to focus on here:
 The two devices' heartbeat interfaces must be added to the same security zone.
 The two devices' heartbeat interfaces must have the same interface type and number. For example, if the primary device's heartbeat interface is GigabitEthernet 1/0/2, then the backup device's heartbeat interface must also be GigabitEthernet 1/0/2.
 Choose a suitable heartbeat interface connection method as follows.
− When the two hot standby firewalls are relatively close together, the heartbeat interfaces can be directly connected, or connected via a Layer 2 switch. In this case, do not add the remote parameter when configuring the heartbeat interfaces. The packets sent by the heartbeat interfaces are then encapsulated as multicast VRRP packets, which cannot be transmitted across subnets and are not controlled by security policies. This is the preferred method.
− When the distance between the two hot standby firewalls is relatively large and cross-subnet transmission is necessary, the heartbeat interfaces need to be connected through routers. In this case, add the remote parameter when configuring the heartbeat interfaces, designating the peer heartbeat interface's address (for example, hrp interface GigabitEthernet 1/0/2 remote 10.1.1.2). With the remote parameter, the packets sent from a heartbeat interface are encapsulated into UDP packets. UDP packets are unicast packets and can be transmitted across subnets as long as a route is available, but they are controlled by security policies. Configure a security policy that permits packets with a destination port of 18514 or 18515 to pass in both directions between the Local zone and the security zone the heartbeat interface is located in.
− When no dedicated heartbeat interface is available, service interfaces can also be used as heartbeat interfaces. In this case, add the remote parameter when configuring the heartbeat interface, designating the peer heartbeat interface's (a service interface's) address. Configure a security policy that permits packets with destination ports 18514 and 18515 to pass in both directions between the Local zone and the security zone the heartbeat interface is located in.
By now, I believe everyone should understand the method to control VGMP and HRP
packet encapsulation.
After completing configuration, we run the command display hrp state on FW1, which
allows us to see that VRRP groups 1 and 2 have joined the active VGMP group and are in the
active state.
HRP_A<FW1> display hrp state
 The firewall's config state is: ACTIVE
 Current state of virtual routers configured as active:
   GigabitEthernet1/0/3  vrid 2 : active
   GigabitEthernet1/0/1  vrid 1 : active
Running the command display hrp state on FW2 shows that VRRP groups 1 and 2 have
joined the standby VGMP group, and are in the standby state.
HRP_S<FW2> display hrp state
 The firewall's config state is: STANDBY
 Current state of virtual routers configured as standby:
   GigabitEthernet1/0/3  vrid 2 : standby
   GigabitEthernet1/0/1  vrid 1 : standby
Running the command display hrp group on FW1 shows that the active group's state is
active, its priority is 65001, and that the standby group hasn't been enabled.
HRP_A<FW1> display hrp group
 Active group status:
   Group enabled:        yes
   State:                active
   Priority running:     65001
   Total VRRP members:   1
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     30
   Peer group available: 1
   Peer's member same:   yes
 Standby group status:
   Group enabled:        no
   State:                initialize
   Priority running:     65000
   Total VRRP members:   0
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     0
   Peer group available: 0
   Peer's member same:   yes
Running the command display hrp group on FW2 shows that the standby group's state is
standby, its priority is 65000, and that the active group hasn't been enabled.
HRP_S<FW2> display hrp group
 Active group status:
   Group enabled:        no
   State:                initialize
   Priority running:     65001
   Total VRRP members:   0
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     30
   Peer group available: 1
   Peer's member same:   yes
 Standby group status:
   Group enabled:        yes
   State:                standby
   Priority running:     65000
   Total VRRP members:   2
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     0
   Peer group available: 1
   Peer's member same:   yes
After completing configuration and state switching for the various hot standby networks that we'll
discuss below, we can run the above two commands to check VGMP group information and verify
whether or not our configuration is correct and whether state switching has occurred.
As shown in Figure 8-25, after configuration is complete, the process of state formation for the active/standby failover method of hot standby is as follows (the numbers in Figure 8-25 correspond to the numbered steps below):
1. After hot standby is enabled, the state of the active VGMP group on FW1 switches from initialize to active, and the state of the standby VGMP group on FW2 switches from initialize to standby.
2. As FW1's VRRP groups have all joined the active VGMP group, and as the active VGMP group's state is active, FW1's VRRP group 1 and VRRP group 2 are both in the active state. Similarly, FW2's VRRP group 1 and VRRP group 2 are both in the standby state.
3. At this time, FW1's VRRP groups 1 and 2 each send gratuitous ARP packets to the downstream and upstream switches to notify them of their VRRP group virtual MAC addresses: 00-00-5E-00-01-01 for VRRP group 1 and 00-00-5E-00-01-02 for VRRP group 2.
4. The upstream and downstream switches each record in their MAC address tables the mapping between the virtual MAC address and port Eth0/0/1. In this way, after upstream and downstream service packets arrive at the switches, the switches forward the packets to FW1. Therefore, FW1 becomes the primary device, and FW2 becomes the backup device.
5. At the same time, FW1's active VGMP group sends HRP heartbeat packets to FW2's standby VGMP group at fixed intervals through the heartbeat link.
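The virtual MAC addresses used in the gratuitous ARP packets follow the standard VRRP convention: the first five octets are fixed at 00-00-5E-00-01, and the last octet is the VRRP group number (VRID). A minimal sketch of this mapping:

```python
def vrrp_virtual_mac(vrid: int) -> str:
    """Virtual MAC address for a VRRP group: 00-00-5E-00-01-XX,
    where XX is the VRID rendered as a two-digit hex value."""
    if not 1 <= vrid <= 255:
        raise ValueError("VRID must be 1-255")
    return f"00-00-5E-00-01-{vrid:02X}"

print(vrrp_virtual_mac(1))  # 00-00-5E-00-01-01 (VRRP group 1)
print(vrrp_virtual_mac(2))  # 00-00-5E-00-01-02 (VRRP group 2)
```

Because the MAC address is derived from the VRID rather than from any physical interface, whichever firewall currently holds the active role can answer for it, which is what makes the switchover transparent to hosts.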
Figure 8-25 Process of state formation in active/standby failover hot standby
[Figure: the numbered steps above shown on the topology — the VGMP groups come up (FW1's active group active at priority 65001, FW2's standby group standby at priority 65000, the other group on each device in initialize), the VRRP groups follow their VGMP groups' states, FW1's VRRP groups send gratuitous ARP packets, the switches learn 00-00-5E-00-01-01 and 00-00-5E-00-01-02 on port Eth0/0/1, and FW1 sends HRP heartbeat packets to FW2 over the heartbeat link.]
8.2.8 State Switching Process Following a Primary Device
Interface Failure
Once two firewalls are in their active/standby failover states, if the primary device's interface
fails, the two firewalls will change their active/standby state as follows:
1.
As shown in Figure 8-26, after the primary device's interface GE1/0/1 fails, FW1's
VRRP group 1's state changes to initialize.
2.
FW1's active group will perceive this change, and lower its own priority by 2 (if one
interface fails the priority is lowered by two), and switch its own state to 'active to
standby' (this is abbreviated in the figure as A To S). Active to standby is a temporary,
intermediate state, invisible to the user.
3.
FW1's active VGMP group will send a VGMP request packet to its peer group,
requesting that its state be changed to standby. VGMP request packets are a kind of
VGMP packet, and this packet carries the sending VGMP group's adjusted priority of
64999.
Figure 8-26 Primary device link or interface failure and request for state switching
[Figure: GE1/0/1 on FW1 fails; FW1's VRRP group 1 enters the initialize state, FW1's active group enters the A To S state with priority 64999, and FW1 sends a VGMP request over the heartbeat link to FW2, whose standby group is still in the standby state with priority 65000.]
4.
As shown in Figure 8-27, after FW2's standby VGMP group receives the VGMP request
from the active VGMP group on FW1, it will compare its VGMP priority with that of its
peer VGMP group (FW1's active VGMP group). After comparison, it discovers that its
own priority of 65000 is higher than its peer's 64999, and therefore FW2's standby group
will switch its state to active.
5.
FW2's standby VGMP group will return a VGMP reply packet to its peer group (FW1's
active VGMP group), permitting this peer to switch states.
6.
Simultaneous to this, FW2's standby VGMP group will mandate that its VRRP groups 1
and 2 switch their states to active.
7.
FW2's VRRP groups 1 and 2 will send gratuitous ARP packets to the downstream and
upstream switches respectively to update their MAC address tables.
Figure 8-27 Backup device state switching
[Figure: FW2's standby group compares priorities (its 65000 against FW1's 64999), switches to the active state, returns a VGMP response to FW1's active group (still in the A To S state at priority 64999), mandates that FW2's VRRP groups 1 and 2 become active, and FW2 sends gratuitous ARP packets toward the upstream and downstream switches.]
8.
As shown in Figure 8-28, after FW1's active VGMP group receives its peer group's
VGMP acknowledgement packet, it switches its own state to standby.
9.
FW1's active VGMP group will mandate that its VRRP groups switch their states to
standby. Due to the interface failure within VRRP group 1, VRRP group 1's state of
initialize does not change, and only VRRP group 2's state will switch to standby.
10. At the same time as this, after the upstream and downstream switches receive FW2's
gratuitous ARP packets, they will update their MAC table entries, by recording the
mapping between the virtual MAC address and port Eth0/0/2. Therefore, after upstream
and downstream service traffic reaches these switches, the switches will forward traffic
onto FW2. At this point the two firewalls' active/standby state switching is complete;
FW2 has become the new primary device, and FW1 has become the new backup device.
11. After the completion of active/standby state switching, FW2 (the new primary device)
will send heartbeat packets to FW1 (the new backup device) at fixed intervals.
Figure 8-28 Completion of active/standby state switching
[Figure: FW1's active group switches to the standby state (priority 64999), its VRRP group 2 switches to standby while VRRP group 1 remains in initialize; both switches' MAC address tables now map the virtual MAC addresses (00-00-5E-00-01-01 and 00-00-5E-00-01-02) to port Eth0/0/2; FW2, now the primary device, sends HRP heartbeat packets to FW1.]
8.2.9 State Switching Process After a Failure of the Entire Primary
Device
If there is a total failure of the primary device, the primary device's VGMP group will no
longer send HRP heartbeat packets. At such a time, if the backup device's VGMP group has
not received an HRP heartbeat packet from the primary device for three consecutive intervals,
it will deem this to mean there has been a failure in the other VGMP group, and will switch its
own state to the active state.
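This dead-peer criterion can be sketched as a simple timeout check. The 1000 ms hello interval is the default shown in the display hrp group output earlier; the helper function itself is ours:

```python
HELLO_INTERVAL_MS = 1000   # default from "Hello interval(ms): 1000"
MISSED_LIMIT = 3           # three missed intervals => peer deemed failed

def peer_failed(last_heartbeat_ms: int, now_ms: int) -> bool:
    """True if no HRP heartbeat has arrived for three full hello intervals,
    at which point the surviving VGMP group switches itself to active."""
    return now_ms - last_heartbeat_ms >= MISSED_LIMIT * HELLO_INTERVAL_MS

print(peer_failed(0, 2999))  # False: still inside the three-interval window
print(peer_failed(0, 3000))  # True: peer deemed failed, take over as active
```

Note that this is why a total device failure is detected more slowly than an interface failure: an interface failure triggers an explicit VGMP request, while a chassis failure must wait out the three silent intervals.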
8.2.10 Process of State Switching After a Failure on the Original
Primary Device is Fixed (Preemption)
After a failure on the original primary device is fixed, if the preemption function has not been configured, the original primary device will remain in the backup state; if the preemption function has been configured, the original primary device will initiate a 'coup' to again become the primary device as follows:
1.
In Figure 8-29, after interface GE1/0/1 of the original primary device recovers from
failure, the state of VRRP group 1 switches from initialize to standby.
2.
After FW1's active VGMP group perceives this change, it raises its own priority by 2 (if
a failure on one interface is fixed, priority increases by 2) to 65001. FW1's active VGMP
group will compare its VGMP priority with that of its peer group obtained from an HRP
heartbeat packet sent by the peer. The comparison finds that FW1's active VGMP
group's priority of 65001 is higher than this peer group's 65000. At this point, if the
preemption function has been configured, the preemption hold-down timer will be
enabled. After the timer expires, FW1's active VGMP group will switch its state from
standby to 'standby to active' (this is abbreviated in the figure as S to A), which is a
temporary intermediate state that is invisible to the user.
3.
FW1's active VGMP group will send a VGMP request to its peer group, requesting that
its state be switched to active. The VGMP request is a kind of VGMP packet that carries
this VGMP group's (FW1's active group) adjusted priority of 65001.
Figure 8-29 Request for state switching once the original primary device recovers from failure
[Figure: after GE1/0/1 recovers, FW1's VRRP group 1 switches from initialize to standby; FW1's active group enters the S To A state with its priority restored to 65001 and sends a VGMP request over the heartbeat link to FW2's standby group, which is currently active with priority 65000.]
4.
As shown in Figure 8-30, after FW2's standby group receives FW1's active group's
VGMP request packet, it will compare its VGMP priority with this peer group. Through
this comparison it will discover that its priority of 65000 is lower than its peer's 65001,
and therefore FW2's standby group will switch its own state from active to standby.
5.
FW2's standby group will return a VGMP response packet to its peer group, permitting
this peer group to switch its state to active.
6.
At the same time as this, FW2's standby group will mandate its VRRP groups 1 and 2
switch their states to standby.
Figure 8-30 State switching of the current primary device
[Figure: FW2's standby group compares priorities (its 65000 against FW1's 65001), switches its own state to standby, returns a VGMP ACK to FW1's active group (in the S To A state), and mandates that FW2's VRRP groups 1 and 2 switch to the standby state.]
7.
As shown in Figure 8-31, after FW1's active VGMP group receives the peer group's
VGMP acknowledgement packet, it will switch its own state to active.
8.
FW1's active VGMP group will mandate that its VRRP groups 1 and 2 also switch their
states to active.
9.
FW1's VRRP groups 1 and 2 will send gratuitous ARP packets to the downstream and
upstream switches respectively to update their MAC address tables to record the
mapping between the virtual MAC address and port Eth0/0/1. In this way, after upstream
and downstream service packets arrive at the switches, the switches will forward the
packets to FW1. At this point, active/standby state switching for the two firewalls is
complete. FW1 has again snatched the position of primary device through preemption,
while FW2 has again become the backup device.
10. After the completion of active/standby state switching, the primary device (FW1) will
send heartbeat packets to the backup device (FW2) at fixed intervals.
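The preemption decision above combines three conditions: the restored priority must exceed the peer's, preemption must be enabled, and the hold-down timer (30 s by default, per the Preempt delay(s) field in the earlier display hrp group output) must have expired. A hedged sketch with our own names:

```python
RECOVERY_BONUS = 2     # one recovered interface raises priority by 2
PREEMPT_DELAY_S = 30   # default "Preempt delay(s): 30"

def should_preempt(local_priority: int, peer_priority: int,
                   recovered_interfaces: int, preempt_enabled: bool,
                   seconds_since_recovery: int) -> bool:
    """True if this VGMP group should send a request to retake the active role."""
    restored = local_priority + RECOVERY_BONUS * recovered_interfaces
    return (preempt_enabled
            and restored > peer_priority
            and seconds_since_recovery >= PREEMPT_DELAY_S)

# FW1 at 64999 recovers GE1/0/1 (back to 65001) against FW2's 65000:
print(should_preempt(64999, 65000, 1, True, 30))   # True: preempt now
print(should_preempt(64999, 65000, 1, False, 30))  # False: preemption not configured
print(should_preempt(64999, 65000, 1, True, 10))   # False: hold-down not yet expired
```

The hold-down delay exists so that a flapping interface does not cause the active role to bounce between the two firewalls on every recovery.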
Figure 8-31 The original primary device preempting to become primary again
[Figure: FW1's active group switches to the active state (priority 65001) and its VRRP groups 1 and 2 become active, while FW2's groups return to standby; FW1's gratuitous ARP packets lead both switches to again map the virtual MAC addresses (00-00-5E-00-01-01 and 00-00-5E-00-01-02) to port Eth0/0/1, and FW1, as primary device, resumes sending HRP heartbeat packets to FW2.]
8.2.11 Process of State Formation in Load Sharing Hot Standby
Above we described state formation and the switching process for the active/standby failover
method of hot standby. Below we'll take a look at load sharing states.
As shown in Figure 8-32, in order to achieve the load sharing method of hot standby, we need to enable both the active and the standby VGMP group on FW1 and on FW2, pair FW1's active VGMP group with FW2's standby VGMP group to form one "active/standby" group, and pair FW2's active VGMP group with FW1's standby VGMP group to form another "active/standby" group. In this way the two firewalls will be in complementary active/standby states, which together constitute a load sharing state.
Figure 8-32 Load sharing hot standby network diagram
[Figure: PC1 (gateway address 10.1.1.1/24) sits on the internal network; PC2 is reached through the Internet. FW1 uses GE1/0/1 10.1.1.3/24 and GE1/0/3 1.1.1.3/24; FW2 uses GE1/0/1 10.1.1.4/24 and GE1/0/3 1.1.1.4/24. Downstream, VRRP group 1 (10.1.1.1/24) is active on FW1 and standby on FW2, while VRRP group 2 (10.1.1.2/24) is active on FW2 and standby on FW1; upstream, VRRP group 3 (1.1.1.1/24) is active on FW1 and standby on FW2, while VRRP group 4 (1.1.1.2/24) is active on FW2 and standby on FW1. The heartbeat link connects the two GE1/0/2 interfaces.]
Configuration of the load sharing method of hot standby is shown in Table 8-2.
Table 8-2 Configuration of load sharing hot standby

Item: Configure two VRRP groups on interface GE1/0/1, adding one to the active VGMP group and the other to the standby VGMP group.
  Configuration on FW1:
    interface GigabitEthernet 1/0/1
     ip address 10.1.1.3 255.255.255.0
     vrrp vrid 1 virtual-ip 10.1.1.1 255.255.255.0 active
     vrrp vrid 2 virtual-ip 10.1.1.2 255.255.255.0 standby
  Configuration on FW2:
    interface GigabitEthernet 1/0/1
     ip address 10.1.1.4 255.255.255.0
     vrrp vrid 1 virtual-ip 10.1.1.1 255.255.255.0 standby
     vrrp vrid 2 virtual-ip 10.1.1.2 255.255.255.0 active

Item: Configure two VRRP groups on interface GE1/0/3, adding one to the active VGMP group and the other to the standby VGMP group.
  Configuration on FW1:
    interface GigabitEthernet 1/0/3
     ip address 1.1.1.3 255.255.255.0
     vrrp vrid 3 virtual-ip 1.1.1.1 255.255.255.0 active
     vrrp vrid 4 virtual-ip 1.1.1.2 255.255.255.0 standby
  Configuration on FW2:
    interface GigabitEthernet 1/0/3
     ip address 1.1.1.4 255.255.255.0
     vrrp vrid 3 virtual-ip 1.1.1.1 255.255.255.0 standby
     vrrp vrid 4 virtual-ip 1.1.1.2 255.255.255.0 active

Item: Configure the heartbeat interface.
  Configuration on FW1:
    hrp interface GigabitEthernet 1/0/2
  Configuration on FW2:
    hrp interface GigabitEthernet 1/0/2

Item: Enable hot standby.
  Configuration on FW1:
    hrp enable
  Configuration on FW2:
    hrp enable
From Table 8-2 above we can see that:
1. In load sharing scenarios, each service interface needs to be added to two VRRP groups, and of these two VRRP groups one must be added to the active VGMP group and the other to the standby VGMP group. For example, the GE1/0/1 interface is added to VRRP groups 1 and 2, and groups 1 and 2 are added to the active VGMP group and the standby VGMP group respectively.
2. For every pair of identically numbered VRRP groups on the two firewalls, one group must be added to the active VGMP group and the other to the standby VGMP group. For example, FW1's VRRP group 1 is added to the active VGMP group, and FW2's VRRP group 1 is added to the standby VGMP group.
As shown in Figure 8-33, after the configuration is complete, the state formation process for
the load sharing method of hot standby is as follows:
1. FW1's and FW2's active VGMP groups' states switch from initialize to active, and their standby VGMP groups' states switch from initialize to standby.
2. As FW1's VRRP groups 1 and 3 have joined the active VGMP group, whose state is active, FW1's VRRP groups 1 and 3 are both in the active state; as FW1's VRRP groups 2 and 4 have joined the standby VGMP group, whose state is standby, FW1's VRRP groups 2 and 4 are both in the standby state. Likewise, FW2's VRRP groups 1 and 3 are both in the standby state, and its VRRP groups 2 and 4 are both in the active state.
3. At this point, FW1's VRRP groups 1 and 3 send gratuitous ARP packets to the downstream and upstream switches respectively, notifying them of VRRP groups 1 and 3's virtual MAC addresses; FW2's VRRP groups 2 and 4 do the same for VRRP groups 2 and 4's virtual MAC addresses.
4. The downstream switch's MAC address table records the mapping between VRRP group 1's virtual MAC address (00-00-5E-00-01-01) and port Eth0/0/1, as well as the mapping between VRRP group 2's virtual MAC address (00-00-5E-00-01-02) and port Eth0/0/2. In this way, when service packets arrive at the downstream switch, the switch sends them to either FW1 or FW2 according to the destination MAC address: if a downstream device's default gateway is VRRP group 1's address, its packets are forwarded to FW1; if its default gateway is VRRP group 2's address, its packets are forwarded to FW2. The upstream switch and devices operate under the same principles. Therefore, FW1 and FW2 can both forward service packets, both are primary devices, and a load sharing state has been achieved.
5. After the load sharing state is achieved, FW1's active VGMP group sends HRP heartbeat packets to FW2's standby VGMP group at fixed intervals, and FW2's active VGMP group sends HRP heartbeat packets to FW1's standby VGMP group at fixed intervals.
Figure 8-33 State formation process in load sharing hot standby
[Figure: both firewalls' active groups (state active, priority 65001) and standby groups (state standby, priority 65000) come up; the VRRP groups follow their VGMP groups' states. Gratuitous ARP packets from FW1 (groups 1 and 3) and FW2 (groups 2 and 4) lead the downstream switch to map 00-00-5E-00-01-01 to Eth0/0/1 and 00-00-5E-00-01-02 to Eth0/0/2, and the upstream switch to map 00-00-5E-00-01-03 to Eth0/0/1 and 00-00-5E-00-01-04 to Eth0/0/2; HRP heartbeat packets flow between the paired VGMP groups over the heartbeat link.]
8.2.12 State Switching Process in Load Sharing Hot Standby
After two firewalls implement hot standby using the load sharing method, if one of the
firewall's interfaces malfunctions, the firewalls will switch into an active/standby failover
state, the process of which is described below.
1.
As shown in Figure 8-34, when FW1's GE1/0/1 interface fails, the states of FW1's VRRP
groups 1 and 2 will each change to initialize.
2. The priorities of FW1's active and standby VGMP groups are each lowered by 2. FW1's active VGMP group's priority becomes 64999, lower than FW2's standby VGMP group's priority of 65000; FW1's standby VGMP group's priority becomes 64998, which is still lower than FW2's active VGMP group's priority of 65001. Therefore, following state negotiation between the VGMP groups, FW1's active VGMP group's state switches to standby, and FW2's standby VGMP group's state switches to active.
3.
FW1's active VGMP group and FW2's standby VGMP group will mandate that the
VRRP groups within them also undergo state switching, and therefore the states of
FW2's VRRP groups 1 and 3 will switch to active.
4.
FW2's VRRP groups 1 and 3 will send gratuitous ARP packets to the downstream and
upstream switches respectively to update their MAC address tables.
5.
After the downstream switch receives the gratuitous ARP packet, it will update its own
MAC address table, and link VRRP group 1's virtual MAC address (00-00-5E-00-01-01)
with Eth0/0/2. Likewise, the upstream switch will link VRRP group 3's virtual MAC
address (00-00-5E-00-01-03) with Eth0/0/2. Therefore, when upstream and downstream
service packets reach these switches, the switches will forward the packets onto FW2. At
this point, hot standby state switching is complete. FW1 has become the backup device
and FW2 the primary device, meaning that the load sharing state has changed into an
active/standby failover state.
6.
After load sharing has switched to active/standby failover, the primary device (FW2)
will send heartbeat packets to the backup device (FW1) at fixed intervals.
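Because each device's two VGMP groups are compared against their peers independently, lowering both of FW1's groups by 2 makes FW2 win both pairings, which is exactly how load sharing collapses into active/standby. A small sketch of the two comparisons (the names are ours):

```python
def pair_winner(a_name: str, a_priority: int, b_name: str, b_priority: int) -> str:
    """Return which VGMP group of an active/standby pair should be active:
    the group with the higher priority wins the negotiation."""
    return a_name if a_priority > b_priority else b_name

# After FW1's GE1/0/1 fails, both of FW1's VGMP groups drop by 2:
fw1_active, fw1_standby = 65001 - 2, 65000 - 2   # 64999 and 64998

print(pair_winner("FW1-active", fw1_active, "FW2-standby", 65000))   # FW2-standby
print(pair_winner("FW2-active", 65001, "FW1-standby", fw1_standby))  # FW2-active
```

FW2's groups win both pairings, so all traffic shifts to FW2 and FW1 becomes a pure backup until the failed interface recovers.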
Figure 8-34 State switching process in load sharing hot standby
[Figure: after FW1's GE1/0/1 fails, FW1's VRRP groups 1 and 2 enter the initialize state; FW1's active group switches to standby (priority 64999) and its standby group stays standby at priority 64998, while FW2's standby group becomes active (priority 65000) alongside FW2's already-active group (priority 65001). FW2's VRRP groups 1 and 3 become active, and its gratuitous ARP packets lead both switches to map all four virtual MAC addresses (00-00-5E-00-01-01 through -04) to port Eth0/0/2; FW2 then sends HRP heartbeat packets to FW1.]
8.2.13 Summary
The above content should have provided a satisfactory answer to the question: "How do two firewalls' VGMP groups exchange packets, negotiate states, and switch states?" We now know that in hot standby, VGMP's three main functions are:
1. Fault monitoring: VGMP groups are able to monitor changes in VRRP groups' states, and thereby perceive both interface failures within VRRP groups and the recovery from such failures. Here, a new question comes to mind: can VGMP groups directly monitor interface failures, or must they conduct their interface monitoring through VRRP groups?
2.
State switching: the VGMP group state switching process is actually also the device
active/standby state switching process. After a VGMP group perceives VRRP state
changes, it will adjust its own priority, and will renegotiate active/standby states with its
peer device's VGMP group. This point should already be fairly clear, as this section has
delved deeply into how state switching and negotiation are accomplished.
3. Traffic direction: after two VGMP groups' active/standby states are established or switched, the VGMP groups mandate that their VRRP groups' states switch in unison. Following this, the active VRRP group sends a gratuitous ARP packet to direct traffic to it (the primary device). Here another new question pops to mind: "If VGMP groups were able to directly monitor interfaces, how would traffic direction be accomplished?"
Actually, VGMP's functionality is extremely strong, and effecting firewall fault monitoring and traffic direction by monitoring VRRP group states is only one of VGMP's techniques. This technique can only be used when a firewall's upstream and downstream devices are switches, as VRRP itself was created especially for this kind of scenario. Is VGMP useless when a firewall's upstream or downstream device is a router? Of course not! In the next section I'll introduce more of VGMP's features to give everyone a thorough understanding of the hot standby function, completely prepared for all contingencies!
8.2.14 Addendum: VGMP State Machine
Above, we've learned about the processes for VGMP groups' various state changes. Below, I'll
use an explanation of a VGMP state machine (a visual representation is shown in Figure 8-35)
to help deepen everyone's understanding of VGMP group state switching.
The VGMP state machine discussed in this section is currently applicable to the USG2000/5000/6000
firewall series and the USG9000 firewall series' V100R003 version.
Learn Firewalls with Dr. WoW
Figure 8-35 VGMP state machine

[Figure: a state machine with five states (initialize, active, standby, 'active to standby' (A To S), and 'standby to active' (S To A)); the transitions between them are numbered 0 through 10, corresponding to the steps below.]
0. After the hot standby function is enabled, each VGMP group enters the initialize state.
1. After the active VGMP group is enabled, the active group's state switches from initialize to active.
2. After the standby VGMP group is enabled, the standby group's state switches from initialize to standby.
3. When one of the interfaces monitored by this device's VGMP group fails, its state switches from 'active' to 'active to standby', and it sends a VGMP request packet to its peer device's VGMP group.
4. When this VGMP group receives the peer's VGMP request packet, it discovers that its priority is higher than its peer's, switches from the standby state to the active state, and sends a VGMP acknowledgement packet to the peer device's VGMP group.
5. This device's VGMP group receives its peer's VGMP acknowledgement packet, and confirms that it (this device) needs to conduct state switching, so this device's VGMP group's state is switched from 'active to standby' to 'standby'.
6. The peer device's VGMP group determines that this device's VGMP group does not need to undergo state switching, or the peer doesn't answer this device's VGMP request packets for three consecutive intervals; this device's VGMP group's state therefore switches from 'active to standby' back to 'active'.
7. After the failure with the interface monitored by this device's VGMP group is fixed, if this device's VGMP group's priority is higher than the peer device's, and if the preemption function has been configured, then this device's VGMP group's state will switch from 'standby' to 'standby to active', and it will send a VGMP request packet to its peer.
8. This device's VGMP group receives the peer device's VGMP request packet and discovers that the peer device's priority is higher, and it therefore switches from the active state to the standby state and sends a VGMP acknowledgement packet to the peer device's VGMP group.
9. This device's VGMP group receives the peer device's VGMP acknowledgement packet, and confirms that it (this device) needs to undergo state switching. This device's VGMP group therefore switches from the 'standby to active' state to the 'active' state, completing the preemption process.
10. The peer device's VGMP group determines that this device's VGMP group doesn't need to undergo state switching, or it hasn't answered this device's VGMP request packets for three consecutive intervals; this device's VGMP group therefore switches from the 'standby to active' state back to the 'standby' state.
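The ten transitions above can be condensed into a small state machine. The following Python sketch is purely illustrative (the state and event names are my own shorthand for the steps, not Huawei code):

```python
# Illustrative model of the VGMP state machine in Figure 8-35.
# Comments give the step number each transition corresponds to.
TRANSITIONS = {
    ("initialize", "enable_active_group"): "active",                    # 1
    ("initialize", "enable_standby_group"): "standby",                  # 2
    ("active", "monitored_interface_down"): "active_to_standby",        # 3
    ("standby", "request_with_lower_peer_priority"): "active",          # 4
    ("active_to_standby", "ack_switch"): "standby",                     # 5
    ("active_to_standby", "ack_no_switch_or_timeout"): "active",        # 6
    ("standby", "failure_fixed_and_preempt"): "standby_to_active",      # 7
    ("active", "request_with_higher_peer_priority"): "standby",         # 8
    ("standby_to_active", "ack_switch"): "active",                      # 9
    ("standby_to_active", "ack_no_switch_or_timeout"): "standby",       # 10
}

class VGMPGroup:
    def __init__(self):
        self.state = "initialize"   # step 0: hot standby just enabled

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

g = VGMPGroup()
g.handle("enable_active_group")       # initialize -> active
g.handle("monitored_interface_down")  # active -> active_to_standby
g.handle("ack_switch")                # active_to_standby -> standby
print(g.state)  # standby
```

Tracing any of the numbered steps through the table reproduces the state change that step describes.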
8.3 Explanation of VGMP Techniques
The marriage of VGMP and VRRP is only applicable to networking using firewalls connected
to Layer 2 devices. Therefore, if a firewall connects to a router, or a firewall transparently
accesses a network (service interfaces are working in Layer 2), what technique does a VGMP
group use in response? In this section, I will reveal the remaining VGMP group techniques for
everyone.
8.3.1 VGMP Technique For Firewall-Router Connections
In Figure 8-36, two firewalls' upstream and downstream service interfaces are working in
Layer 3, and are connected to routers. The firewalls and the routers are running OSPF
between them. As the upstream and downstream devices are not Layer 2 switches, the VGMP
group cannot use VRRP groups. Therefore, the technique that the VGMP groups will use to
monitor failures is direct interface state monitoring. This is accomplished by directly
adding interfaces to VGMP groups. When there is a failure with one of a VGMP group's
interfaces, the VGMP group will directly perceive the interface's change in state, and
therefore lower its own priority.
Figure 8-36 Networking with firewalls connected to upstream and downstream routers

[Figure: two panels. Left (normal operation): FW1 (VGMP active) and FW2 (VGMP standby) sit between R1 and R2, running OSPF; the standby FW2 advertises routes with the cost increased by 65500. R1 reasons: "The cost of the path to PC2 via FW1 is 3 while the cost of the path via FW2 is 65503. Therefore, I select FW1 to forward packets." Right (after FW1's interface fails): FW1 is standby and FW2 is active; R1 reasons: "The path to PC2 via FW1 is disconnected while the path via FW2 is normal. Therefore, I select FW2 to forward packets." Addresses: FW1 GE1/0/1 10.1.1.2/24, GE1/0/3 10.2.1.2/24; FW2 GE1/0/1 10.1.2.2/24, GE1/0/3 10.2.2.2/24; heartbeat GE1/0/2 interfaces 10.10.0.1/24 and 10.10.0.2/24; PC1 192.168.1.1/24 on the internal network, PC2 1.1.1.10/24 on the Internet.]
The steps for configuring direct interface monitoring using VGMP groups are shown in Table 8-3 (as performed using the active/standby failover method of hot standby).
Table 8-3 Configuration of direct interface monitoring using VGMP groups

Item: Configure the VGMP group to directly monitor interface GE1/0/1.
FW1:
  interface GigabitEthernet 1/0/1
   ip address 10.1.1.2 255.255.255.0
   hrp track active
FW2:
  interface GigabitEthernet 1/0/1
   ip address 10.1.2.2 255.255.255.0
   hrp track standby

Item: Configure the VGMP group to directly monitor interface GE1/0/3.
FW1:
  interface GigabitEthernet 1/0/3
   ip address 10.2.1.2 255.255.255.0
   hrp track active
FW2:
  interface GigabitEthernet 1/0/3
   ip address 10.2.2.2 255.255.255.0
   hrp track standby

Item: Configure the automatic cost adjustment function.
FW1: hrp ospf-cost adjust-enable
FW2: hrp ospf-cost adjust-enable

Item: Configure the heartbeat interface.
FW1: hrp interface GigabitEthernet 1/0/2
FW2: hrp interface GigabitEthernet 1/0/2

Item: Enable the hot standby function.
FW1: hrp enable
FW2: hrp enable
If the load sharing method of hot standby is used, then we only need to execute the hrp track active and
hrp track standby commands on each service interface, and add the service interfaces to both the active
and standby VGMP groups.
[Question from Dr. WoW] Here, curious readers may ask: aren't we adding interfaces to VGMP groups to allow the VGMP groups to monitor interface states? Why is the command hrp track and not vgmp track? This is because of what we discussed in the section above regarding VGMP and HRP packets both being encapsulated with a VRRP header and a VGMP header (with the only difference between them being that HRP packets also need to be further encapsulated with an HRP header). Therefore, when developers designed this command, they used the hrp parameter, and this practice has continued in use until today.
After configuration is complete, we can run the command display hrp state on FW1, allowing us to see that interfaces GE1/0/1 and GE1/0/3 have both been added to the active group, and are being monitored by the active group.
HRP_A<FW1> display hrp state
 The firewall's config state is: ACTIVE
 Current state of interfaces tracked by active:
        GigabitEthernet1/0/1 : up
        GigabitEthernet1/0/3 : up
Running the command display hrp state on FW2 shows that interfaces GE1/0/1 and GE1/0/3
have both been added to the standby VGMP group, and are being monitored by the group.
HRP_S<FW2> display hrp state
 The firewall's config state is: Standby
 Current state of interfaces tracked by standby:
        GigabitEthernet1/0/1 : up
        GigabitEthernet1/0/3 : up
Running the command display hrp group on FW1 shows that the active VGMP group's state
is active, its priority is 65001, and that the standby VGMP group hasn't been enabled.
HRP_A<FW1> display hrp group
 Active group status:
   Group enabled:        yes
   State:                active
   Priority running:     65001
   Total VRRP members:   0
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     30
   Peer group available: 1
   Peer's member same:   yes

 Standby group status:
   Group enabled:        no
   State:                initialize
   Priority running:     65000
   Total VRRP members:   0
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     0
   Peer group available: 0
   Peer's member same:   yes
Running the command display hrp group on FW2 shows that the standby VGMP group's
state is standby, its priority is 65000, and that the active VGMP group hasn't been enabled.
HRP_S<FW2> display hrp group
 Active group status:
   Group enabled:        no
   State:                initialize
   Priority running:     65001
   Total VRRP members:   0
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     30
   Peer group available: 1
   Peer's member same:   yes

 Standby group status:
   Group enabled:        yes
   State:                standby
   Priority running:     65000
   Total VRRP members:   2
   Hello interval(ms):   1000
   Preempt enabled:      yes
   Preempt delay(s):     0
   Peer group available: 1
We can therefore conclude that after the completion of configuration, FW1's VGMP group is in the active state, and FW1 has become the primary device; FW2's VGMP group is in the standby state, and FW2 has become the backup device.
In 8.1 Hot Standby Overview, I mentioned that if we wish PC1's traffic to PC2 to be
forwarded by FW1, we need to manually increase the OSPF cost of FW2's link (R1—>FW2
—>R2). However, what happens if it's inconvenient/impossible to configure the upstream and
downstream router(s) R1 or R2? This situation requires that we use the firewall's VGMP
group's traffic direction function to automatically direct traffic onto the primary device. This
can be done because the firewall will automatically adjust OSPF costs according to a
VGMP group's state (the command is hrp ospf-cost adjust-enable). Once this function is
enabled, if an active VGMP group is on a firewall, the firewall will advertise routes with
normal costs; if a firewall's VGMP group is in the standby state, then the firewall will increase
costs by 65500 (this is a default value, and can be adjusted) when advertising routes.
If this is a load sharing network, as there are active VGMP groups on both firewalls, each firewall will
advertise routes with normal costs.
On the left of Figure 8-36, the primary firewall FW1 (its VGMP group's state is active) is
advertising routes normally, and the backup device FW2 (its VGMP group is in the standby
state) will therefore increase costs by 65500 when advertising routes to the upstream and
downstream devices. Therefore, from the perspective of R1, the OSPF cost of using FW1 to
access PC2 is 1+1+1=3, while the OSPF cost of using FW2 to access PC2 is
65501+1+1=65503. As the router will choose the path with the lower cost when forwarding
traffic (R1—>FW1—>R2), traffic from the intranet's PC1 to the external network's PC2 will
be forwarded through the primary device FW1.
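R1's path choice can be reproduced with simple arithmetic. The sketch below uses the per-hop costs and the default 65500 penalty from the text; the function and parameter names are illustrative, not part of any real router:

```python
# R1 chooses the path with the lower total OSPF cost. Each hop costs 1
# (values from the text); a firewall whose VGMP group is in the standby
# state advertises routes with the cost increased by 65500 when
# hrp ospf-cost adjust-enable is configured.
PENALTY = 65500  # default cost increase; adjustable on the firewall

def path_cost(hop_costs, via_standby_firewall):
    """Total cost R1 sees for one path; the penalty applies when the
    firewall on that path is in the standby VGMP state."""
    return sum(hop_costs) + (PENALTY if via_standby_firewall else 0)

via_fw1 = path_cost([1, 1, 1], via_standby_firewall=False)  # 1+1+1 = 3
via_fw2 = path_cost([1, 1, 1], via_standby_firewall=True)   # 65501+1+1 = 65503
best = "FW1" if via_fw1 <= via_fw2 else "FW2"
print(via_fw1, via_fw2, best)  # 3 65503 FW1
```

After a failover the penalty moves to FW1's path, so the same comparison makes R1 choose FW2.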
We can see from R1's routing table that the next hop of packets going to network 1.1.1.0 is FW1's GE1/0/1 address, 10.1.1.2.
[R1] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 11       Routes : 11

Destination/Mask    Proto   Pre  Cost   Flags  NextHop        Interface
1.1.1.0/24          OSPF    10   3      D      10.1.1.2       GigabitEthernet0/0/1
After one of FW1's service interfaces fails, the two firewalls' VGMP groups will undergo state
switching. After state switching, FW2's VGMP group's state will switch to active, and FW2
will become the primary device; FW1's VGMP group's state will switch to standby, and FW1
will become the backup device. FW2 will announce routes normally (it does not increase the
cost value), while the route cost announced by FW1 will increase to 65500. To R1, the path to
PC2 using FW1 is blocked (because FW1's upstream interface has failed), and the route to
PC2 through FW2 is accessible, and the cost is 3, so traffic from intranet PC1 accessing PC2
on the external network will be forwarded through the new primary device FW2.
From R1's routing table we can also see that the next hop of packets travelling to destination
network segment 1.1.1.0 has changed to FW2's GE1/0/1's address 10.1.2.2.
[R1] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 11       Routes : 11

Destination/Mask    Proto   Pre  Cost   Flags  NextHop        Interface
1.1.1.0/24          OSPF    10   3      D      10.1.2.2       GigabitEthernet0/0/2
8.3.2 VGMP Technique When Firewalls Transparently Access and
Connect to Switches
In Figure 8-37, two firewalls' upstream and downstream service interfaces are both working in Layer 2 and are connected to switches. As the firewalls' service interfaces are working in Layer 2, they do not have IP addresses, and so there is no way for the VGMP groups to use VRRP groups or to directly monitor interface states. Therefore, the fault monitoring technique used by the VGMP groups is to monitor interface states using a VLAN. This is accomplished by adding Layer 2 service interfaces to a VLAN, with the VGMP group monitoring the VLAN. When an interface in a VGMP group fails, the VGMP group will perceive this change in state of one of its interfaces through the VLAN, and therefore lower its own priority.
Figure 8-37 Networking with a firewall transparently accessing and connecting to switches

[Figure: two panels. Left (normal operation): FW1 (VGMP active) forwards VLAN2 traffic while FW2 (VGMP standby) does not; the upstream and downstream switches' MAC address tables map the router's MAC 00-11-22-33-44-66 and PC1's MAC 00-11-22-33-44-55 to port Eth0/0/1 (the FW1 side). Right (after failover): FW2's VLAN2 forwards traffic, FW1's does not, and the switches' MAC tables now map those MAC addresses to port Eth0/0/2 (the FW2 side). Heartbeat GE1/0/2 interfaces: 10.10.0.1/24 and 10.10.0.2/24; router 192.168.1.2/24, PC1 192.168.1.1/24 on the internal network, PC2 1.1.1.10/24 on the Internet.]
Table 8-4 shows the configuration steps used to allow a VGMP group to use a VLAN to
monitor interface states (active/standby failover).
Table 8-4 Configuration of VGMP groups' use of a VLAN to monitor interfaces (active/standby failover)

Item: Add Layer 2 service interfaces into the same VLAN, and configure the VGMP group to monitor the VLAN.
FW1:
  vlan 2
   port GigabitEthernet 1/0/1
   port GigabitEthernet 1/0/3
   hrp track active
FW2:
  vlan 2
   port GigabitEthernet 1/0/1
   port GigabitEthernet 1/0/3
   hrp track standby

Item: Configure the heartbeat interface.
FW1: hrp interface GigabitEthernet 1/0/2
FW2: hrp interface GigabitEthernet 1/0/2

Item: Enable the hot standby function.
FW1: hrp enable
FW2: hrp enable
When firewalls' service interfaces work in Layer 2 and are connected to switches, the load sharing
method of hot standby is not supported. This is because if working in the load sharing method, the
VLANs would be enabled on both devices, and each device would be able to forward traffic, so that the
entire network would form a loop.
After completing configuration, FW1's VGMP group's state is active, and FW1 becomes the
primary device; FW2's VGMP group's state is standby, and FW2 becomes the backup device.
As the firewalls' service interfaces are working in Layer 2, the firewalls themselves cannot run OSPF, and therefore the VGMP groups cannot direct upstream and downstream traffic using OSPF costs. However, the VGMP groups can control whether or not their VLAN forwards traffic to ensure that traffic is directed onto the primary device. When a VGMP group's state is active, the group's VLAN is able to forward traffic; when a VGMP group's state is standby, the group's VLAN is disabled, and it cannot forward traffic. A VGMP group's control of whether its VLAN forwards traffic does not need to be separately configured; adding the VLAN to the VGMP group is all that is required.
As shown in Figure 8-37, under normal circumstances, the primary device's (FW1; its VGMP
group's state is active) VLAN is enabled, and it can forward traffic. The backup device's
(FW2; its VGMP group is in the standby state) VLAN is disabled, and it cannot forward
traffic. Therefore, the traffic from PC1 to PC2 will all be forwarded by primary device FW1.
After a service interface failure on FW1, the two firewalls' VGMP groups will undergo state
switching; for details on this process please see 8.2.8 State Switching Process Following a
Primary Device Interface Failure. When FW1's VGMP group's state switches from active
to standby, the state of the normal interface(s) in the group's VLAN will go down and
then up. This causes the upstream and downstream switches to update their own MAC
address tables to map the destination MAC address to port Eth0/0/2, thereby directing
traffic onto FW2.
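From the switches' point of view, this redirection is ordinary source-MAC learning. The following toy model illustrates the effect (MAC and port values follow Figure 8-37; the code is an illustrative sketch, not switch firmware):

```python
# Minimal sketch of transparent-mode failover as seen by a Layer 2 switch:
# the switch simply relearns which port a MAC address lives behind.
mac_table = {"00-11-22-33-44-66": "Eth0/0/1"}   # router's MAC, learned via FW1's side

def learn(table, src_mac, in_port):
    """Ordinary source-MAC learning: record the port a frame arrived on."""
    table[src_mac] = in_port

# After FW1's VGMP group goes standby and its VLAN stops forwarding, the
# interface flap forces relearning; the next frame from the router now
# arrives through FW2's side of the network.
learn(mac_table, "00-11-22-33-44-66", "Eth0/0/2")

print(mac_table["00-11-22-33-44-66"])  # Eth0/0/2: traffic now egresses toward FW2
```

The firewalls never touch the switches' tables directly; flapping the VLAN's interfaces is enough to trigger the update.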
8.3.3 VGMP Technique When Firewalls Transparently Access and
Connect to Routers
In Figure 8-38, two firewalls' upstream and downstream service interfaces are both working in
Layer 2, and are connected to routers. OSPF is running between the two firewalls. In this sort
of networking, the fault monitoring and traffic direction methods adopted by the firewalls'
VGMP groups are essentially the same as in 8.3.2 VGMP Technique When Firewalls
Transparently Access and Connect to Switches, which is to say that VLAN is used to
monitor interface faults and control traffic direction.
The difference between these methods lies in the fact that the networking described in
this section only supports the load sharing method of hot standby, and does not support
active/standby failover. This is because if working using the active/standby failover method,
the backup device's VLAN would be disabled, and its upstream and downstream routers
would be unable to communicate or establish OSPF routes. Therefore, when active/standby
switching occurred, the new primary device's (the original backup device) VLAN would be
enabled, and its upstream and downstream routers would only then begin to build new OSPF
routes. However, the building of new OSPF routes requires a certain amount of time, and this
would result in a temporary service interruption.
Figure 8-38 Networking with firewalls transparently accessing and connecting to routers

[Figure: two panels. Left (normal operation, load sharing): both FW1 and FW2 have active VGMP groups, both VLAN2s forward traffic, and OSPF runs between R1 and R2 across both firewalls (R1 GE0/0/1 10.1.1.1/24 to R2 GE0/0/1 10.1.1.2/24 through FW1; R1 GE0/0/2 10.1.2.1/24 to R2 GE0/0/2 10.1.2.2/24 through FW2). Right (after FW1's interface fails): FW1's VGMP group is standby and its VLAN2 does not forward traffic, while FW2's VLAN2 forwards all traffic. PC1 192.168.1.1/24 on the internal network, PC2 1.1.1.10/24 on the Internet.]
The steps for configuring VGMP groups to monitor interface states through a VLAN (load sharing) are shown in Table 8-5.
Table 8-5 Configuration of VGMP group to monitor interfaces using a VLAN (load sharing)

Item: Add the Layer 2 service interfaces to the same VLAN and configure the active and standby VGMP groups to monitor the VLAN.
FW1:
  vlan 2
   port GigabitEthernet 1/0/1
   port GigabitEthernet 1/0/3
   hrp track active
   hrp track standby
FW2:
  vlan 2
   port GigabitEthernet 1/0/1
   port GigabitEthernet 1/0/3
   hrp track active
   hrp track standby

Item: Configure the heartbeat interface.
FW1: hrp interface GigabitEthernet 1/0/2
FW2: hrp interface GigabitEthernet 1/0/2

Item: Enable the hot standby function.
FW1: hrp enable
FW2: hrp enable
When the firewalls' service interfaces work in Layer 2 and are connected to routers, do not use the
active/standby failover method of hot standby. This is because the backup device's VLAN is disabled,
and its upstream and downstream routers can't communicate, and thus can't establish routes. Therefore,
during active/standby switching, the backup device is unable to immediately replace the primary device,
resulting in a service interruption.
After configuration, as there are active VGMP groups on both FW1 and FW2, FW1 and FW2
are both primary devices, and each of their VLAN2s will forward traffic. At this time, R1's
routing table shows that traffic going to PC2 can be forwarded through either FW1 or FW2.
<R1> display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 14       Routes : 15

Destination/Mask    Proto   Pre  Cost   Flags  NextHop        Interface
1.1.1.0/24          OSPF    10   2      D      10.1.1.2       GigabitEthernet0/0/1
                    OSPF    10   2      D      10.1.2.2       GigabitEthernet0/0/2
After one of FW1's service interfaces fails, the two firewalls' VGMP groups will conduct state switching, and the hot standby state will change from load sharing to active/standby failover. When FW1's VGMP group's state switches from active to standby, all of the interfaces in the group's VLAN will go down and then up. This will cause the upstream and downstream routers' routes to change and converge, and all traffic will therefore be directed onto FW2.
At this time, R1's routing table (below) also shows that the next hop of packets going to
network 1.1.1.0 has changed to R2's GE0/0/2's address 10.1.2.2.
<R1> display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 10       Routes : 11

Destination/Mask    Proto   Pre  Cost   Flags  NextHop        Interface
1.1.1.0/24          OSPF    10   2      D      10.1.2.2       GigabitEthernet0/0/2
10.1.2.0/24         Direct  0    0      D      10.1.2.1       GigabitEthernet0/0/2
8.3.4 VGMP Groups' Remote Interface Monitoring Techniques
The techniques used by VGMP groups in handling various hot standby networks were
described above, and in these the VGMP groups were monitoring the firewall's own interfaces.
Below we'll take a look at two techniques for VGMP group monitoring of remote interfaces.
"Remote interfaces" refer to other devices' interfaces on a link. When a remote interface
monitored by a VGMP group fails, the VGMP group's priority lowers by 2, just as we've seen
previously. The techniques by which VGMP monitors firewalls' own interfaces can be used
together with the techniques by which remote interfaces are monitored.
It is important to note that the two kinds of techniques for VGMP to monitor remote interfaces
can only be used on networks in which firewalls' service interfaces are working in Layer 3,
because only Layer 3 interfaces have IP addresses and can send IP-Link and BFD detection
packets to the remote device(s).
- Monitoring the state of remote interfaces using IP-link
The method is to establish an IP-link to probe the remote interface, and then have the
VGMP group monitor the IP-link's state. When an interface being probed through an
IP-link fails, the IP-link state will change to Down, and the VGMP group will perceive
the IP-link's state change and therefore lower its own priority.
As shown in Figure 8-39, we need to use IP-Link 1 on FW1 (FW2) to inspect R1's (R2's)
GE1/0/1 interface (an indirectly connected remote interface), and then add IP-Link 1 to
the active (standby) VGMP group to monitor IP-Link 1's state.
Figure 8-39 VGMP monitoring of remote interfaces using IP-link
Configuration details are shown in Table 8-6 (configuration of the hot standby function must be completed prior to the below configuration).
Table 8-6 Configuration of VGMP monitoring of remote interfaces using IP-link

Item: Enable IP-link.
FW1: ip-link check enable
FW2: ip-link check enable

Item: Configure IP-link to monitor the remote address.
FW1: ip-link 1 destination 1.1.1.1 interface GigabitEthernet1/0/3 mode icmp
FW2: ip-link 1 destination 2.2.2.1 interface GigabitEthernet1/0/3 mode icmp

Item: Configure VGMP to monitor the IP-link.
FW1: hrp track ip-link 1 active
FW2: hrp track ip-link 1 standby
- Monitoring remote interface status using BFD
This method entails using BFD to probe remote interfaces, with a VGMP group
monitoring the BFD state. When there is a failure of the remote interface being inspected
by BFD, BFD's state will change to Down, and the VGMP group will perceive the BFD
state change and therefore lower its own priority.
As shown in Figure 8-40, we need to use BFD session 1 (local discriminator 10) on FW1 (FW2) to probe R1's (R2's) GE1/0/1 interface (an indirectly connected remote interface), and then have the active (standby) VGMP group track the session by its local discriminator to monitor its state.
Figure 8-40 VGMP monitoring of remote interfaces using BFD

[Figure: FW1 (VGMP active) and FW2 (VGMP standby) each run BFD session 1 (local discriminator 10, remote discriminator 20) from their GE1/0/3 interface to the GE1/0/1 interface of R1 (1.1.1.1/24) and R2 (2.2.2.1/24), respectively.]
Configuration details are shown in Table 8-7 (the hot standby function must be configured prior to the below configuration).
Table 8-7 Configuring VGMP monitoring of remote interfaces using BFD

Item: Configure BFD to monitor the remote address, and specify the local and peer discriminators.
FW1:
  bfd 1 bind peer-ip 1.1.1.1
   discriminator local 10
   discriminator remote 20
FW2:
  bfd 1 bind peer-ip 2.2.2.1
   discriminator local 10
   discriminator remote 20

Item: Configure the VGMP group to monitor BFD.
FW1: hrp track bfd-session 10 active
FW2: hrp track bfd-session 10 standby
8.3.5 Summary
In summary, although there are many different VGMP group monitoring and traffic direction techniques, they all abide by the following two principles:
- Whenever a failure occurs on an interface that is being monitored by a VGMP group, regardless of whether it is directly or indirectly monitored, and regardless of whether the monitoring is of a firewall's own interface or a remote interface, the VGMP group's priority will be lowered by 2.
- Only primary devices (VGMP group in the active state) will direct traffic onto themselves, while backup devices (VGMP group in the standby state) will find a way to refuse traffic from being directed onto them.
Finally, I'll summarize the relationships between the various typical hot standby networks and
the VGMP fault monitoring and traffic direction techniques in Table 8-8.
Table 8-8 Summary of various hot standby networks' VGMP techniques

Network: Firewall service interfaces are working in Layer 3, and are connected to Layer 2 switches.
Supported scenarios: Active/standby failover and load sharing.
Fault monitoring technique:
- Interface monitoring using VRRP groups
- Interface monitoring using IP-links (optional)
- Interface monitoring using BFD (optional)
Traffic direction technique: The primary device sends gratuitous ARP packets to the connected switches, updating the switches' MAC address tables.

Network: Firewall service interfaces are working in Layer 3, and are connected to routers.
Supported scenarios: Active/standby failover and load sharing.
Fault monitoring technique:
- Direct interface monitoring
- Interface monitoring using IP-links (optional)
- Interface monitoring using BFD (optional)
Traffic direction technique: The primary device advertises routes with normal costs, while the cost of routes advertised by the backup device increases by 65500.

Network: Firewall service interfaces are working in Layer 2 (transparent mode), and are connected to Layer 2 switches.
Supported scenarios: Only supports active/standby failover.
Fault monitoring technique: Interface monitoring using VLANs.
Traffic direction technique: The primary device's VLAN is able to forward traffic, while the backup device's VLAN is disabled. When the primary device becomes the backup device, the interfaces in its VLAN will go down and then up, triggering the upstream and downstream Layer 2 devices to update their MAC address tables.

Network: Firewall service interfaces are working in Layer 2 (transparent mode), and are connected to routers.
Supported scenarios: Only supports load sharing.
Fault monitoring technique: Interface monitoring using VLANs.
Traffic direction technique: The primary device's VLAN is able to forward traffic, while the backup device's VLAN is disabled. When the primary device becomes the backup device, the interfaces in its VLAN will go down and then up once, triggering route convergence on the upstream and downstream Layer 3 devices.
8.4 Explanation of the HRP Protocol
In the above introduction to VGMP packet structure we reviewed several kinds of HRP packets, and in this section I'll explain the HRP protocol and several kinds of HRP packets for everyone, including HRP data packets, heartbeat link detection packets, and HRP consistency check packets.
You may be wondering: "Isn't HRP responsible only for data backup between two devices? What's so difficult about this?" Actually, there is still much more to discuss about HRP, and I'll now reveal these juicy details about HRP for everyone.
8.4.1 HRP Overview
Firewalls use commands (Web configuration actually also executes commands) to achieve the various functions required by users. If a configuration command is not backed up to the backup device before the backup device becomes the primary device, then the backup device will not be able to take over the primary device's functions, resulting in a service interruption.
In Figure 8-41, a security policy permitting an intranet user to access an external network has
been configured on the primary device (FW1). If the security policy configured on primary
device FW1 has not been backed up to backup device FW2, then if the primary device's state
changes, the new primary device (FW2) will not permit the intranet user to access the external
network (because the firewall's implicit deny policy denies packets that fail to match any
policy).
Figure 8-41 Scenario where a configuration command has not been backed up

[Figure: before failover, FW1 (active) says "My security policy permits packets from the internal network to the Internet," while FW2 (standby) says "My implicit deny policy denies all packets that fail to match any policy." The configuration has not been backed up, so after failover the new primary device FW2 denies the intranet user's traffic.]
The firewalls are stateful inspection firewalls, and have session table entries corresponding
with every dynamically generated connection. Many dynamic sessions are created on the
primary device, but not on the backup device, because no packets pass through it. If sessions
are not backed up to the backup device before the backup device becomes the primary device,
subsequent service packets will not match any session and will be discarded.
In Figure 8-42, a session for PC1 accessing PC2 (source address 10.1.1.10, destination
address 200.1.1.10), has been created on primary device FW1, and subsequent packets
between PC1 and PC2 will be forwarded according to this session. If the session on the
primary device (FW1) cannot be backed up to the backup device FW2, then after
active/standby state switching, PC1's subsequent packets accessing PC2 will not match a
session on FW2. This will result in the interruption of PC1's services accessing PC2.
Figure 8-42 Scenario in which the session has not been backed up

[Figure: before failover, FW1 (active) says "I've created a session for the traffic from PC1 to PC2. Packets from PC1 to PC2 will match this session." Sessions are not backed up, so after failover the new primary device FW2 says "I do not have the session for the traffic from PC1 to PC2. Packets from PC1 to PC2 cannot match any session." PC1 10.1.1.10/24 on the internal network, PC2 200.1.1.10/24 on the Internet.]
Therefore, in order to ensure that the backup device is able to smoothly take over when the primary device fails, key configuration commands and state information, such as session tables, must be backed up between the primary and backup devices. To achieve this, Huawei has introduced the Huawei Redundancy Protocol (HRP).
In Figure 8-43, a security policy permitting an intranet user to access an external network(s) is
configured on FW1, and FW1 will thus permit packets from intranet PC1 to the external
network's PC2, and will establish a session. As the HRP protocol (with hot standby configured)
is used on both FW1 and FW2, the security policy configured on FW1, as well as the session
created on FW1, will both be backed up to the backup device FW2. Therefore, packets from
PC1 to PC2 will not be denied after the active/standby failover.
Figure 8-43 Scenario when session and configurations are backed up
[Figure: FW1 permits internal network users to access the Internet and has created a session for the traffic from PC1 (10.1.1.10/24) to PC2 (200.1.1.10/24). Both the sessions and the configurations are backed up to FW2, so after FW2 becomes the primary device it holds the security policy permitting the traffic and a session to match it.]
To summarize the above, in an active/standby failover network, configuration commands and
state information are both backed up to the backup device from the primary device.
However, in a load sharing network, the two firewalls are both primary devices (they both have active VGMP groups), so if both primary devices were permitted to synchronize commands to each other, the two firewalls' commands might overlap or conflict with one another. Therefore, to avoid such problems, we define that in a load sharing network, configuration commands are backed up only from the master configuration device (whose command line prompt begins with HRP_A) to the backup configuration device (whose command line prompt begins with HRP_S). State information, however, is backed up in both directions.
In load sharing, the first device on which hot standby is enabled is the master configuration device
(whose command line prompt begins with HRP_A).
8.4.2 HRP Packet Structure and Implementation Mechanisms
Firewalls use heartbeat interfaces (the HRP failover channel) to send and receive HRP data packets to synchronize configuration and state information. In Figure 8-44, an HRP data packet is sequentially encapsulated with a VRRP header, a VGMP header, and an HRP header (listed from the outside in). The VRRP header's Type=2 and Type2=2, and in the VGMP header, the vType field carries the "HRP data packet" value.
Figure 8-44 Structure of an HRP data packet
[Figure: the complete HRP data packet structure is, from the outside in: MAC Header (14+ bytes), IP Header (20+ bytes), VRRP Header (16 bytes), VGMP Header (12 bytes), HRP Header (36 bytes), and DATA.
− VRRP header fields: Ver, Type (2), Auth Type, VRID (0), Type2 (2), Interval, IP Count, Checksum, Authentication Data.
− VGMP header fields: Ver, vType, Mode, ID, Priority, Check Code, Data Length.
− HRP header fields: Ptr (NULL), Source Module ID, Source Sub Module ID, Dest Module ID, Dest Sub Module ID, Message Mode, Message Type, Message Length, Sequence, Error ID || Data.]
An explanation of the key parameters in an HRP header follows:
• Source Module ID and Source Sub Module ID state which of this firewall's modules' and sub-modules' data needs to be backed up.
• Dest Module ID and Dest Sub Module ID state to which of the peer firewall's modules and sub-modules the data needs to be synchronized.
The process of HRP data backup is shown in Figure 8-45:
1. When FW1 sends an HRP data packet, it writes the ASPF module's ID into the HRP header's Source Module ID and Dest Module ID fields, and encapsulates the ASPF module's configuration and table entry information into the HRP data packet.
2. FW1 sends the HRP data packet through the failover channel (heartbeat cable) to FW2. After FW2 receives the HRP data packet, it delivers the configuration and table entry information in the packet to its own ASPF module according to the Source Module ID and Dest Module ID fields in the HRP header, and applies the configuration and table entries.
Figure 8-45 HRP data backup process
[Figure: FW1 sends an HRP data packet to FW2 over the failover channel.]
Above, I mentioned that USG6000 series firewalls and the USG2000/5000 series' V300R001 version firewalls also support encapsulating many kinds of VGMP and HRP packets into UDP packets. The HRP data packets introduced here, as well as the heartbeat link detection packets and consistency check packets that I will discuss below, can all be encapsulated into UDP packets by adding a UDP header before the VRRP header. The structure of UDP HRP data packets is shown in Figure 8-46.
Figure 8-46 Structure of UDP HRP data packets
[Figure: the HRP data packet from Figure 8-44 with a UDP header added before the VRRP header.]
A benefit of using UDP packets is that, being unicast, they can be transmitted across networks and controlled by security policies.
8.4.3 HRP Backup Methods
Hot standby's HRP supports three backup methods: automatic backup, manual batch backup, and fast backup. I'll describe these backup methods, and the differences between them, one by one below.
• Automatic backup
The automatic backup function (the command is hrp auto-sync [ config | connection-status ]) is enabled by default to automatically back up configuration
commands in real time and periodically back up state information; this function is
used in various hot standby networks.
− After the automatic backup function is enabled, each time a command that can be backed up is executed on the primary (master configuration) device, the command is immediately synchronized to the backup (backup configuration) device.
Configuration commands that can be backed up can only be configured on the primary (master configuration) device, and cannot be configured on the backup (backup configuration) device. Configuration commands that cannot be backed up can be configured manually on the backup (backup configuration) device. To see which configuration commands can or cannot be backed up, please see 8.4.4 Configurations and State Information that HRP Can Back Up.
− After the automatic backup function is enabled, the primary device will periodically back up state information (so long as it can be backed up) to the backup device. This means that state information established on the primary device is not backed up in real time.
Automatic backup will not back up the following session types (these are only supported by fast session backup):
▪ Sessions to the firewall itself, for example a session generated when an administrator logs in to the firewall
▪ Half-open TCP connection sessions that have not completed the three-way handshake
▪ Sessions that were created for an initial UDP packet and are not matched by subsequent packets.
• Manual batch backup
Manual batch backup is done by executing the hrp sync { config | connection-status } command on the primary device. After the command is executed:
− The primary (master configuration) device will immediately synchronize configuration commands (so long as they can be backed up) to the backup (backup configuration) device.
− The primary device will immediately synchronize state information (so long as it can be backed up) to the backup device.
• Fast backup
The fast session backup function (the command is hrp mirror session enable) is used in load sharing to address scenarios in which the forward and return paths are not the same. To ensure that state information is synchronized immediately, the fast backup function backs up only state information, not configuration commands; the backup of configuration commands is handled by the automatic backup function.
After the fast backup function is enabled, the primary device will immediately synchronize established state information that can be backed up (including the sessions mentioned above that automatic backup does not support) to the backup device in real time.
To summarize, these three backup methods are generally used as follows: automatic backup (hrp auto-sync [ config | connection-status ]) is enabled by default and shouldn't be disabled; if configuration is not synchronized between the primary and backup devices, the manual batch backup command (hrp sync { config | connection-status }) needs to be executed; and if the network uses load sharing, the fast session backup function (hrp mirror session enable) generally needs to be enabled.
Below I'll explain why fast session backup is especially useful in load sharing networks.
In load sharing networks, as the two firewalls are both primary devices, they can both forward packets, so there may be situations in which the forward and return packets take different paths (passing through different firewalls). When this happens, if state information has not been backed up between the firewalls in time, return packets will be discarded because they cannot match any state information, resulting in a service interruption.
To prevent this, the fast session backup function must be enabled in load sharing networks to
allow two firewalls to mutually back up state information in real time, so that return packets
can match the state information, regardless of which firewall they pass through.
In the example in Figure 8-47, FW1 and FW2 form a load sharing network. Packets from an intranet PC to the server on the external network are forwarded through FW1, and a session is established. As the forward and return paths are not the same, the return packets sent from the server to the PC are forwarded to FW2. If only the automatic backup function is enabled, FW1's session will not yet have been backed up to FW2, so the return packets cannot match a session on FW2 and will be discarded.
If the fast session backup function is enabled, sessions generated on FW1 are immediately backed up to FW2, so return packets can match a session on FW2 and be forwarded to the PC.
Figure 8-47 Scenario in which forward and return packets take different paths
[Figure: FW1 and FW2 are both primary devices, but only auto-backup is enabled. FW1 creates a session for the traffic from the PC to the server and will only back it up to FW2 after a certain period of time. The return packet from the server arrives at FW2 before the session has been backed up from FW1, so FW2 has to discard the packet.]
8.4.4 Configurations and State Information that HRP Can Back Up
The configurations that firewalls can back up are shown below (as applies to the USG6000 firewall series' V100R001 version):
• Policies: security, NAT, bandwidth management, authentication, and attack defense policies, blacklists, and ASPF
• Objects: addresses, regions, services, applications, users, authentication servers, time intervals, URL classifications, keyword groups, email address groups, signatures, security profiles (antivirus, intrusion prevention, URL filtering, file filtering, content filtering, application behavior control, and email filtering profiles)
• Networks: new logical interfaces, security zones, DNS, IPSec, SSL VPN, TSM interworking
• Systems: administrator and log configuration
Backup of display, reset and debugging commands is typically not supported.
From the above description we can see that a firewall network's basic configurations (such as interface addresses and routes) cannot be backed up, and these must be configured before the hot standby state can be successfully established. The supported configurations listed above, by contrast, can be configured on the primary device alone after the hot standby state has been successfully established.
The state information that firewalls can back up is shown below:
• Session tables
• Server-map tables
• IP monitoring tables
• Fragment caching tables
• GTP tables
• Blacklists
• PAT port mapping tables
• NO-PAT address mapping tables
8.4.5 Heartbeat Interface and Heartbeat Link Detection Packets
As shown in Figure 8-48, data backed up between two firewalls is sent and received through the firewalls' heartbeat interfaces over the heartbeat link (the failover channel). A heartbeat interface must have an IP address, and can be a physical interface or a logical interface (such as an Eth-Trunk interface bonded from multiple physical interfaces to increase bandwidth). Under normal circumstances, backup data constitutes about 20%-25% of service traffic, so the number of physical interfaces needed depends on the volume of backup data.
Figure 8-48 Physical and logical interfaces serving as the heartbeat interfaces
[Figure: In one example, FW1's GE1/0/1 (1.1.1.1) and FW2's GE1/0/1 (1.1.1.2), both in the running state, are directly connected physical heartbeat interfaces. In the other, Eth-Trunk1 (1.1.1.1 on FW1 and 1.1.1.2 on FW2), bonded from GE1/0/1 through GE1/0/3 on each firewall, serves as the heartbeat interface.]
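The Eth-Trunk variant in Figure 8-48 might be configured on FW1 roughly as follows (a sketch only: the interface numbers and the address 1.1.1.1 come from the figure, the command views are simplified, and FW2 would mirror this with 1.1.1.2):

```
# Bond three physical interfaces into Eth-Trunk1 to widen the failover channel.
[FW1] interface Eth-Trunk 1
[FW1-Eth-Trunk1] ip address 1.1.1.1 24
[FW1-Eth-Trunk1] quit
[FW1] interface GigabitEthernet 1/0/1
[FW1-GigabitEthernet1/0/1] eth-trunk 1
[FW1-GigabitEthernet1/0/1] quit
[FW1] interface GigabitEthernet 1/0/2
[FW1-GigabitEthernet1/0/2] eth-trunk 1
[FW1-GigabitEthernet1/0/2] quit
[FW1] interface GigabitEthernet 1/0/3
[FW1-GigabitEthernet1/0/3] eth-trunk 1
[FW1-GigabitEthernet1/0/3] quit
# Use the Eth-Trunk interface as the heartbeat interface.
[FW1] hrp interface Eth-Trunk 1
```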
A heartbeat interface has five possible states (viewed by executing the display hrp interface command):
• invalid: displayed when a firewall's heartbeat interface is incorrectly configured (the interface is up, but the line protocol is down), for example when the specified heartbeat interface is a Layer 2 interface or the heartbeat interface's IP address has not been configured.
• down: displayed when both the heartbeat interface and its line protocol are down.
• peerDown: when both the heartbeat interface and its line protocol are up, the heartbeat interface sends heartbeat link detection packets to its peer device's heartbeat interface. If no response from the peer is received, the firewall sets its heartbeat interface's state to peerDown. The heartbeat interface nonetheless continues to send heartbeat link detection packets, so that the link can be restored once the peer's heartbeat interface comes up.
• ready: when both the heartbeat interface and its line protocol are up, the heartbeat interface sends heartbeat link detection packets to the peer device's heartbeat interface. If the peer's heartbeat interface responds (by also sending heartbeat link detection packets), the firewall sets its heartbeat interface's state to ready, prepared to send and receive heartbeat packets at any time. The heartbeat interface continues to send heartbeat link detection packets to monitor the state of the heartbeat link.
• running: when a firewall has multiple heartbeat interfaces in the ready state, the firewall chooses the first-configured heartbeat interface to form the heartbeat link and sets its state to running. If there is only one heartbeat interface in the ready state, it naturally enters the running state. The interface in the running state is responsible for sending HRP heartbeat packets, HRP data packets, HRP consistency check packets, and VGMP
packets. The remaining ready heartbeat interfaces are in a backup state; if the heartbeat interface in the running state or the heartbeat link fails, the remaining heartbeat interfaces in the ready state replace it one by one in configuration order. As shown in Figure 8-49, the order in which the two firewalls' heartbeat interfaces were configured matches the interface numbering, so GE1/0/3 enters the running state, and GE1/0/4 is in a backup state.
Figure 8-49 Heartbeat interface states
[Figure: FW1's heartbeat interfaces are GE1/0/1 (invalid), GE1/0/2 (2.2.2.1, peerdown), GE1/0/3 (3.3.3.1, running), and GE1/0/4 (4.4.4.1, ready); FW2's are GE1/0/1 (1.1.1.2, down), GE1/0/2 (peerdown), GE1/0/3 (3.3.3.2, running), and GE1/0/4 (4.4.4.2, ready).]
To summarize, the role of heartbeat link detection packets is to detect whether the peer device's heartbeat interface can receive this device's packets, and thereby determine whether the heartbeat link is usable. So long as the sending device's heartbeat interface and its line protocol are both up, the heartbeat interface sends heartbeat link detection packets to the peer's heartbeat interface.
Heartbeat link detection packets are also encapsulated with the new VRRP header: when Type=2 and Type2=1 in the new VRRP header, the packet is a heartbeat link detection packet.
Above, we discussed the fact that HRP heartbeat packets are used to detect whether the peer
device (VGMP group) is working normally. HRP heartbeat packets are only sent by the
primary device's VGMP group through a heartbeat interface in the running state.
8.4.6 HRP Consistency Check Packets' Role and Mechanism
HRP consistency check packets are used to check whether the hot standby configurations of two firewalls in the hot standby state are consistent and whether their policy configurations are identical. Consistency checking of the hot standby configuration includes checking whether the two firewalls monitor the same service interfaces and whether the same heartbeat interfaces are configured on them. Policy configuration consistency checks primarily involve checking whether the two firewalls have identical policies, including security, bandwidth, NAT, authentication, and audit policies. HRP consistency check packets are also encapsulated with VRRP headers: when Type=2 and Type2=5 in a VRRP header, the packet is an HRP consistency check packet.
The implementation mechanism of the HRP consistency check is as follows:
1. After the consistency check command (hrp configuration check { all | audit-policy | auth-policy | hrp | nat-policy | security-policy | traffic-policy }) is executed, the device sends a consistency check request packet to its peer and collects brief configuration information from its own relevant modules.
2. After the peer device receives the request, it collects brief configuration information from its own relevant modules, encapsulates the information into a consistency check packet, and returns it to the first device.
3. The first device compares its own brief configuration with its peer's configuration, and records the result. We can then execute the display hrp configuration check command to view the results of the consistency check. The results below show that the hot standby configuration is consistent.
HRP_A<FWA> display hrp configuration check hrp
Module  State   Start-time           End-time             Result
hrp     finish  2008/09/08 14:21:56  2008/09/08 14:21:56  same configuration
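To check more than the hot standby module, the same pair of commands might be run against all supported modules (a sketch, reusing the device name FWA from the output above):

```
# Trigger a consistency check covering all modules, then view the recorded results.
HRP_A<FWA> hrp configuration check all
HRP_A<FWA> display hrp configuration check all
```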
8.5 Hot Standby Configuration Guide
Prior to deploying hot standby, please first select a suitable hot standby networking approach, which can be:
• Firewall service interfaces work in Layer 3, and are connected to switches.
• Firewall service interfaces work in Layer 3, and are connected to routers.
• Firewall service interfaces work in Layer 2, and are connected to switches.
• Firewall service interfaces work in Layer 2, and are connected to routers.
Of these, when firewall service interfaces work in Layer 3, the network will often involve different upstream and downstream devices, for example switches upstream and routers downstream. There is actually nothing particularly noteworthy about such combinations.
After determining the hot standby networking approach, we also need to decide between the active/standby failover method and the load sharing method based on the following principles:
• If both the active/standby failover and load sharing methods are feasible, the active/standby failover method is recommended.
• If load sharing is deployed on other parts of the customer's network (for example the egress gateway or core switches), then the customer will generally request that load sharing be deployed on the firewalls, too.
• When one firewall forwards all service traffic, if any of its three important parameters (session table, throughput, and CPU usage) has exceeded 80% of its maximum capacity for a long time, we must use the load sharing method.
• Performance will degrade after security features such as IPS and antivirus are enabled on a firewall. If a firewall's forwarding performance drops below the existing network's total capacity, the load sharing method must be used.
The support for active/standby failover and load sharing depends on the hot standby networking approach, as shown in Table 8-9.
Table 8-9 Support for the active/standby failover and load sharing methods

Networking Approach | Active/Standby Failover | Load Sharing
Firewall service interfaces work in Layer 3, and connect to switches. | Supported | Supported
Firewall service interfaces work in Layer 3, and connect to routers. | Supported | Supported
Firewall service interfaces work in Layer 2, and connect to switches. | Supported | Not supported
Firewall service interfaces work in Layer 2, and connect to routers. | Not supported | Supported
Before deploying hot standby, we still need to check the two firewalls' hardware and software to ensure that:
• The two firewalls' product models and hardware configurations are identical, including the locations, types, and numbers of interface boards/cards, service boards, and main processing units (MPUs).
• The two firewalls' software versions and BootROM versions are identical.
• (Recommended) The firewalls' configuration files are the initial configuration files.
8.5.1 Configuration Process
The hot standby configuration process is shown in Figure 8-50. Understanding the configuration process can help you understand the relationships between the hot standby protocols we discussed before and remember the logic behind the hot standby configuration.
Figure 8-50 Hot standby configuration flowchart
[Flowchart:
1. Complete basic network configuration: interfaces, security zones, routing, and security policies.
2. Configure VGMP to monitor interfaces, depending on the networking approach:
− Layer 3 service interface connecting to a switch: configure VGMP to monitor VRRP groups (interface view: vrrp vrid).
− Layer 3 service interface connecting to a router: configure VGMP to monitor interfaces (interface view: hrp track active | standby).
− Layer 2 service interface: configure VGMP to monitor a VLAN (VLAN view: hrp track active | standby).
− (Optional) Configure VGMP to monitor remote interfaces (hrp track ip-link, hrp track bfd-session).
3. Configure the heartbeat interfaces (hrp interface).
4. Enable hot standby (hrp enable).
5. Configure a backup method: auto backup (hrp auto-sync), manual batch backup (hrp sync), or quick session backup (hrp mirror session enable).
6. Configure security services.]
The configuration steps in the flowchart are explained as follows:
1. Complete basic network configuration.
− Interfaces: if the firewall's service interfaces work in Layer 3, an IP address needs to be configured for every service interface. Service interfaces' IP addresses must be fixed, so hot standby cannot work with features that automatically acquire IP addresses, such as PPPoE dial-up and DHCP clients.
If a firewall's service interfaces work in Layer 2, they must be added to the same VLAN.
In addition, the primary and backup devices need to select identical service and heartbeat interfaces. For example, if the primary device selects GigabitEthernet1/0/1 as the service interface and GigabitEthernet1/0/7 as the heartbeat interface, then the backup device needs to make the same selections.
− Security zones: all interfaces must be added to a security zone, regardless of whether they are Layer 2 or Layer 3 interfaces and whether they are service interfaces or heartbeat interfaces. The primary and backup devices' corresponding interfaces must be added to the same security zone: if the primary device's GigabitEthernet1/0/1 interface is added to the Trust zone, then the backup device's GigabitEthernet1/0/1 interface must also be added to the Trust zone.
− Routing: if a firewall's service interfaces work in Layer 3 and are connected to switches, we need to configure static routing on the firewall; if they work in Layer 3 and are connected to routers, we need to configure OSPF on the firewall; if they work in Layer 2, we do not need to configure routing on the firewall.
− Security policies: the primary types of packet exchanges between firewalls and other devices in hot standby deployments are as follows:
▪ VGMP and HRP packets are exchanged between the two firewalls through their heartbeat interfaces.
▪ VRRP packets are exchanged between the two firewalls through their service interfaces.
▪ When a firewall's service interfaces work in Layer 3 and are connected to switches, the firewall sends gratuitous ARP packets to the switches.
▪ When a firewall's service interfaces work in Layer 3 and are connected to routers, the firewall needs to exchange OSPF packets with the routers.
▪ When a firewall's service interfaces work in Layer 2, OSPF packets sent between the upstream and downstream devices need to pass through the firewall.
To ensure the normal establishment of a hot standby state, we need to configure corresponding security policies to permit the aforesaid packets, as shown in Table 8-10.
Table 8-10 The security policies needed to establish hot standby

Packets | Security Policies
VGMP and HRP packets | In USG9000 series firewalls, VGMP and HRP packets are not controlled by security policies. For USG2000/5000/6000 series firewalls, if the remote parameter is not specified when configuring the heartbeat interface, VGMP and HRP packets are multicast packets and are not controlled by security policies; if the remote parameter is specified, VGMP and HRP packets will be encapsulated into unicast UDP packets, and a security policy needs to be configured between the heartbeat interface's security zone and the Local zone to permit packets destined to port 18514 for USG2000/5000 (18514 or 18515 for USG6000) in both directions.
VRRP packets | VRRP packets are multicast packets, and are not controlled by security policies.
Gratuitous ARP packets | Gratuitous ARP packets are broadcast packets, and are not controlled by security policies.
OSPF packets destined for the firewall | A security policy permitting OSPF packets must be configured between the security zones in which the upstream/downstream service interfaces are located and the Local zone.
OSPF packets passing through the firewall | A security policy permitting OSPF packets must be configured between the upstream service interface's zone and the downstream service interface's zone.
After hot standby is successfully established, security policy configurations can be backed up. However, the security policies mentioned above are the foundation for establishing hot standby, and must therefore be configured separately on the two firewalls before hot standby is configured.
When configuring security policies, we generally first set the default security policy action to permit, and then restore the default action to deny after configuring the specific security policies.
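As an illustration of the OSPF rows in Table 8-10, a policy permitting OSPF between a service interface's zone and the Local zone might be sketched as follows on a USG6000 series firewall (the rule name is hypothetical, and "ospf" is assumed to be a predefined service name):

```
# Permit OSPF packets exchanged between the firewall itself and an upstream router.
[FW1] security-policy
[FW1-policy-security] rule name permit_ospf_local
[FW1-policy-security-rule-permit_ospf_local] source-zone untrust local
[FW1-policy-security-rule-permit_ospf_local] destination-zone untrust local
[FW1-policy-security-rule-permit_ospf_local] service ospf    # assumed predefined service
[FW1-policy-security-rule-permit_ospf_local] action permit
```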
2. Configure VGMP interface monitoring.
− When firewalls' service interfaces work in Layer 3 and are connected to switches, VRRP groups must be configured on the interfaces.
▪ In active/standby failover, configure a VRRP group on the primary device's service interface and add it to the active VGMP group; configure the same VRRP group on the backup device's service interface and add it to the standby VGMP group.
▪ In load sharing, configure two VRRP groups on every service interface on each device, and add the VRRP groups to the active VGMP group and the standby VGMP group respectively. The same VRRP group must be added to different VGMP groups on the two devices: the active VGMP group on one device, and the standby VGMP group on the other.
− When a firewall's service interfaces work in Layer 3 and are connected to routers, VGMP direct interface monitoring must be configured on the interfaces.
▪ In the active/standby failover method, the primary device's service interfaces must all be added to the active VGMP group, and the backup device's service interfaces to the standby VGMP group. The function of automatic OSPF cost adjustment based on the VGMP state (hrp ospf-cost adjust-enable) must also be configured.
▪ In the load sharing method, each device's service interfaces must be added to both the active and the standby VGMP groups.
− When a firewall's service interfaces work in Layer 2, VGMP monitoring of a VLAN must be configured.
▪ In the active/standby failover method, the primary device's service interfaces must all be added to the same VLAN, and this VLAN is then added to the active group; the backup device's service interfaces must all be added to another VLAN, and that VLAN is added to the standby group.
▪ In the load sharing method, all of each device's service interfaces need to be added to the same VLAN, and this VLAN is then added to both the active group and the standby group.
− When the firewall needs to monitor remote interfaces, configure VGMP to monitor remote interfaces.
VGMP can monitor remote interfaces through IP-link or BFD. Under normal
circumstances, either of the two methods can be selected.
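For the two Layer 3 cases above, the monitoring configuration might be sketched as follows (the interface numbers, the VRID, and the virtual address 10.1.1.1 are hypothetical):

```
# Layer 3 to switches, active/standby: the same VRRP group joins the active
# VGMP group on FW1 and the standby VGMP group on FW2.
[FW1] interface GigabitEthernet 1/0/1
[FW1-GigabitEthernet1/0/1] vrrp vrid 1 virtual-ip 10.1.1.1 active
[FW2] interface GigabitEthernet 1/0/1
[FW2-GigabitEthernet1/0/1] vrrp vrid 1 virtual-ip 10.1.1.1 standby

# Layer 3 to routers: add the service interface itself to a VGMP group,
# and enable automatic OSPF cost adjustment based on the VGMP state.
[FW1] interface GigabitEthernet 1/0/2
[FW1-GigabitEthernet1/0/2] hrp track active
[FW1] hrp ospf-cost adjust-enable
```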
3. Configure the heartbeat interface.
Directly connect the two firewalls' heartbeat interfaces if possible. In this case, you do not need to specify the remote parameter in the command (for example: hrp interface GigabitEthernet1/0/7).
If the two firewalls' heartbeat interfaces are connected through Layer 3 devices, or if service interfaces are used as heartbeat interfaces, the remote parameter must be used to specify the peer's interface address (for example: hrp interface GigabitEthernet1/0/7 remote 10.1.1.2). After the remote parameter is specified, packets are encapsulated into unicast UDP packets and are controlled by security policies.
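Put together for both firewalls, the heartbeat configuration might look like this sketch (interface numbers and addresses are hypothetical):

```
# Directly connected heartbeat interfaces: no remote parameter needed.
[FW1] hrp interface GigabitEthernet 1/0/7
[FW2] hrp interface GigabitEthernet 1/0/7

# Heartbeat interfaces reached across a Layer 3 network: specify the peer address;
# VGMP/HRP packets then become unicast UDP and are controlled by security policies.
[FW1] hrp interface GigabitEthernet 1/0/7 remote 10.1.1.2
[FW2] hrp interface GigabitEthernet 1/0/7 remote 10.1.1.1
```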
4. Enable hot standby.
After completing the above configuration, we need to execute the hrp enable command to enable the hot standby feature. If the configurations are correct, a hot standby state will be successfully established, and the command prompt HRP_A will appear on one device and HRP_S on the other.
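The enabling step and the resulting prompts might look like this sketch (device names are hypothetical):

```
# Enable hot standby on both firewalls; the prompts change after negotiation.
[FW1] hrp enable      # prompt becomes HRP_A[FW1]: primary / master configuration device
[FW2] hrp enable      # prompt becomes HRP_S[FW2]: backup / backup configuration device
```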
5. Configure a backup method.
− The automatic backup function (hrp auto-sync [ config | connection-status ]) is enabled by default, and I suggest that you do not disable it.
− If configuration is not synchronized between the primary and backup devices, we must execute the manual batch backup command (hrp sync { config | connection-status }).
− In a load sharing network, we usually need to enable the fast session backup function (hrp mirror session enable).
6. Configure security services.
After hot standby has been successfully established, security service configurations will typically be backed up from the primary (master configuration) device to the backup (backup configuration) device. Therefore, we only need to configure security services on the primary device, not on the backup device. Common security services include security policies, NAT, attack defense, bandwidth management, and VPN policies.
8.5.2 Configuration Check and Result Verification
After completing the hot standby configuration, we need to check the configuration and verify the results as follows:
Step 1 View the command line prompt.
After hot standby has been successfully set up, a command line prompt beginning with HRP_A means that the firewall has become the primary device after negotiation with the other firewall; a prompt beginning with HRP_S means that it has become the backup device.
Step 2 Check whether the key hot standby configurations are correct according to Table 8-11.
Table 8-11 Hot standby configuration checklist

1. [Mandatory] The two firewalls' product models and software versions are identical.
   (Check: display version)
2. [Mandatory] The two firewalls' interface card types and installation positions are
   identical. (Check: display device)
3. [Mandatory] The two firewalls use the same service interfaces. (Check: display hrp state)
4. [Mandatory] The two firewalls use the same heartbeat interfaces. (Check: display hrp
   interface)
5. [Optional] If an Eth-Trunk interface is used as the failover channel, the two firewalls'
   Eth-Trunk interfaces have identical member interfaces. (Check: display eth-trunk
   trunk-id)
6. [Optional] If a service channel is used as the failover channel, both the heartbeat
   interface and the IP address of the peer's heartbeat interface are specified. (Check:
   display current-configuration | include hrp interface)
7. [Mandatory] The two firewalls' interfaces are added to the same security zones. (Check:
   display zone)
8. [Mandatory] The two firewalls' configurations are consistent (this includes hot standby,
   audit, authentication, security, NAT, and bandwidth policies). (Check: display hrp
   configuration check all)

Checks when service interfaces are working in Layer 3:

9. [Mandatory] IP addresses have been configured for the two firewalls' interfaces.
   (Check: display ip interface brief)
10. [Mandatory] If the firewalls are connected to switches, the two firewalls' service
    interfaces are added to the same VRRP groups and share a virtual IP address. (Check:
    display vrrp interface interface-type interface-number)
11. [Mandatory] If the firewalls are connected to switches, the next hops of the firewalls'
    upstream and downstream devices have been set to the VRRP groups' virtual IP
    addresses. (Check: the static routing configurations of the firewalls' upstream and
    downstream devices)
12. [Mandatory] If the firewalls are connected to routers, the two firewalls' service
    interfaces are added to the correct VGMP group. In active/standby failover, the
    primary device's service interfaces are added to the active VGMP group, and the
    backup device's service interfaces to the standby VGMP group. In load sharing, each
    device's service interfaces are added to both the active and standby VGMP groups.
    (Check: display hrp state)
13. [Mandatory] If the firewalls are connected to routers, the firewalls are correctly
    running OSPF, and the OSPF area does not include the heartbeat interfaces. (Check:
    display ospf [ process-id ] brief)
14. [Mandatory] If the firewalls are connected to routers, automatic OSPF cost adjustment
    based on the active/standby state is configured. (Check: display current-configuration
    | include hrp ospf-cost)

Checks when service interfaces are working in Layer 2:

15. [Mandatory] A firewall's upstream and downstream service interfaces are added to the
    same VLAN. (Check: display port vlan [ interface-type interface-number ])
16. [Mandatory] The firewalls' VLANs are added to the correct VGMP groups. In
    active/standby failover, the primary device's VLAN is added to the active VGMP
    group, and the backup device's VLAN to the standby VGMP group. In load sharing,
    each device's VLANs are added to both the active and standby VGMP groups. (Check:
    display hrp state)
17. [Mandatory] If the firewalls are connected to switches, the active/standby failover
    method is used. (Check: display hrp group)
18. [Mandatory] If the firewalls are connected to routers, the load sharing method is used.
    (Check: display hrp group)

Checks for load sharing alone:

19. [Mandatory] The fast session backup function is enabled. (Check: display
    current-configuration | include hrp mirror)
20. [Optional] The port range of the NAT address pool is correctly specified. (Check:
    display current-configuration | include hrp nat)
Prior to a firewall officially going online, complete the verification in steps 3 and 4.
Step 3 In the primary device's interface view, execute the shutdown command to verify whether the
primary/backup devices conduct failover.
After the shutdown command is executed on one of the primary device's service interfaces,
the state of this primary device interface will change to down, but its other interface(s) will be
working normally. The backup device's command line prompt will begin with HRP_A instead
of HRP_S, and the primary device's command line prompt will begin with HRP_S instead of
HRP_A. Traffic will be normally forwarded, indicating that active/backup failover succeeds.
After the undo shutdown command is executed on the same interface on the primary device,
the state of the interface changes back to up. After the preemption hold-down time expires, the
primary device's command prompt will begin with HRP_A again instead of HRP_S, and the
backup device's command prompt will begin with HRP_S instead of HRP_A. Traffic will be
normally forwarded, demonstrating that the preemption has succeeded.
Step 4 In the primary device's user view, execute the reboot command to reboot the device and
verify whether the primary/backup devices conduct failover.
If the backup device's command prompt begins with HRP_A instead of HRP_S, and traffic is
normally forwarded after the reboot command is executed on the primary device, the failover
succeeded.
After the primary device has completed its reboot and the preemption hold-down time expires,
the primary device's command prompt will begin with HRP_A instead of HRP_S, the backup
device's command prompt will begin with HRP_S instead of HRP_A, and traffic will be
normally forwarded, demonstrating that the preemption has succeeded.
9 Multi-homing
9.1 Multi-homing Overview
In "What Are Firewalls" I mentioned that firewalls are primarily deployed on network borders
to separate intranets from external networks. However, firewalls also assume the role of
interconnecting these same intranets and external networks, as the traffic exchanged between
these intranets and external networks must all be forwarded through the firewalls. In
real-world scenarios, due to bandwidth and reliability requirements, enterprises will lease
multiple Internet-link bandwidth resources from multiple ISPs, meaning that their egress
position firewall(s) will be connected to the Internet via multiple egress links. Therefore, how
to select a suitable egress link for user traffic is a problem that needs to be considered by
enterprise network administrators.
I'll thus specially summarize several common methods of multi-homing in multiple egress
environments in which firewalls serve as enterprise egress network gateways. Below, I'll give
an initial introduction to these multi-homing methods, so that everyone can first gain a basic
understanding of them.
9.1.1 Shortest Path Routing
Shortest path routing is a multi-homing method completed through a combination of default
routing and specific routing. This method is relatively simple, and is also the most commonly
used. As shown in Figure 9-1, using default routes can ensure that enterprise user data and
traffic is all matched to routes and forwarded, while using specific routes allows user traffic
accessing a certain ISP to be forwarded from the link connected to this ISP, avoiding traffic
being sent on a circuitous path from another ISP link—this is what is known as shortest path
routing. However, with so many services on the Internet, it's impossible to configure specific
routes one by one, so is there a simple method to configure specific routes en masse? This is
where the ISP routing function comes into play. The ISP routing function collects each ISP's
well-known network segments within a firewall, and issues static routes in batches by
configuring designated egress interfaces and next hops, greatly reducing the workload
involved in configuring specific routes.
Figure 9-1 Shortest path routing (the firewall connects the enterprise network to both ISP1
and ISP2; specific routes steer each ISP's traffic to the directly connected link, while the
default route catches everything else)
Configuring the 'specific routing + default routing' method of multi-homing is simple,
convenient, and practical, and can be used in standard enterprise networks. However, if
enterprises need to conduct differentiated forwarding of traffic for certain special users (such
as managers) or for certain special applications (such as P2P downloads), this routing method
cannot be used. Therefore, I'll introduce a second multi-homing method below: policy-based routing.
9.1.2 Policy-based Routing
Policy-based routing is exactly what it sounds like—it entails forwarding packets based upon
specific policies. Therefore, policy-based routing is a more flexible forwarding mechanism
than standard static routing and dynamic routing. When routing devices forward packets, they
first filter the packets based upon pre-configured rules, with packets that are successfully
matched being forwarded according to a fixed forwarding policy. The rules discussed here can
be based on the source IP address or the destination IP address, or can be user-based or based
on a certain type of special application.
In Figure 9-2, an enterprise intranet has a high volume of P2P services, and in order to ensure
the bandwidth needs of a special user (the managers), policy-based routing can be used to
formulate rules allowing traffic from the manager and other special users to be forwarded
from the ISP1 link (which has stable link bandwidth), while P2P and other high-traffic
services are designated to be forwarded from the ISP2 link (whose uplink and downlink
bandwidths are greatly unequal; for example, the uplink bandwidth may be 50 Mbit/s and the
downlink bandwidth 500 Mbit/s).
Figure 9-2 Policy-based routing (the firewall connects the enterprise network to ISP1 and
ISP2; the manager's service traffic is forwarded over the ISP1 link, and P2P service traffic
over the ISP2 link)
Policy-based routing equips enterprise network administrators with a more flexible measure
by which to control traffic: so long as they have a prior understanding of the merits of the
egress links' bandwidths, administrators can allow important users and key services to be
forwarded from links with stable bandwidths. However, policy-based routing requires manual
intervention by administrators, and it cannot assign specific bandwidth to links (for example,
stipulating that a link's maximum bandwidth is 500 Mbit/s). This is not an issue for smart
routing, which is equipped with "intelligent" decision-making abilities: smart routing can
choose the optimal egress link for intranet user traffic through system-initiated determination,
and can set a fixed egress bandwidth based upon the unique properties of a given link's
bandwidth, thereby achieving its goal of forwarding traffic intelligently.
In summary, each of the two types of routing for multi-homing has its own special features,
and their use scenarios also differ from one another. Administrators can select an
appropriate routing method based upon their network's actual needs. Of course, network
conditions in actual networks are complex, and user needs are diverse and multitudinous, so it
may be difficult for a single routing method to satisfy all needs. If this is the case, multiple
routing methods should be used together in a complementary fashion to complete complex
network planning. In the following sections, we'll introduce the mechanisms and application
of each multi-homing routing method one by one.
9.2 Shortest Path Routing
9.2.1 Default Routing vs. Specific Routing
What is shortest path routing? Just as it sounds, this entails selecting the closest path. For
networks with multiple egresses, shortest path routing refers to packets choosing the link that
will involve the smaller cost to reach the destination network for use in forwarding. Now, how
do packets select the link with the smaller cost for forwarding? This can be accomplished
using default routes and specific routes. Below, I'll answer several questions to introduce a
few of the essential concepts behind default routing and specific routing and help everyone in
understanding this.
Question 1: What is default routing, and is default routing a kind of static routing?
Actually, a default route is a special kind of route that can be configured through static
routing or generated through dynamic routing protocols such as OSPF and IS-IS. Therefore,
default routing is not in itself a kind of static routing. In routing tables, the default route has a
destination network of 0.0.0.0 and a subnet mask of 0.0.0.0. Below is a default route in a
routing table:
[FW] display ip routing-table
 Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
         Destinations : 1        Routes : 2

Destination/Mask    Proto   Pre  Cost   Flags  NextHop     Interface

0.0.0.0/0           Static  60   0      RD     10.1.1.2    GigabitEthernet2/2/21
                    Static  60   0      RD     10.2.0.2    GigabitEthernet2/2/17
If a packet's destination address cannot be matched with any route, then the system will use
default routing to forward this packet.
Question 2: What are specific routes?
I, Dr. WoW, believe that specific routing is defined comparatively against default routing; all
routes in a routing table that are not default routes are specific routes. For example,
10.1.0.0/16 and 192.168.1.0/24 are both specific routes when compared with the default route.
Compared to the parent route 10.1.0.0/16, the routes 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24
are all specific routes. Specific routes are unrelated to protocol type: a specific route can be
configured as a static route or generated by a dynamic routing protocol.
Question 3: How do packets check the routing table?
As everyone may know, when packets check the routing table, the lookup is based upon the
longest-match principle, but what exactly does this mean? To give an example, suppose a
routing table has three routes: 10.1.0.0/16, 10.1.1.0/24, and 0.0.0.0/0. A packet with a
destination address of 10.1.1.1 ultimately matches the 10.1.1.0/24 route. During lookup, the
packet's destination address is ANDed bit by bit with each routing entry's mask, and if the
result equals the entry's network address, the entry matches; among all matching entries, the
one with the longest mask is selected to forward the packet. A packet with the destination
address 192.168.1.1 can only be matched to the default route 0.0.0.0/0 (because its
destination address does not match any specific route), and so the system will ultimately use
the default route to forward it.
From the above questions it should be clear that when there are specific routes in a routing
table, the packet is first matched to a specific route, and only if there is not a matching
specific route is the default route(s) checked.
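The longest-match lookup described above can be sketched in a few lines of Python. This is a simplified illustration using the example routes from Question 3; the next-hop names are made up for demonstration:

```python
import ipaddress

# A toy routing table: (prefix, next hop). Prefixes follow the example in
# Question 3; the next-hop names are hypothetical labels.
ROUTES = [
    ("10.1.0.0/16", "hop-A"),
    ("10.1.1.0/24", "hop-B"),
    ("0.0.0.0/0",   "hop-default"),
]

def lookup(dst_ip: str) -> str:
    """Return the next hop for dst_ip using longest-prefix matching:
    AND the destination with each entry's mask, compare against the
    entry's network address, and keep the match with the longest mask."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, next_hop in ROUTES:
        net = ipaddress.ip_network(prefix)
        if dst in net:  # equivalent to (dst AND mask) == network address
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, next_hop)
    return best[1] if best else "no route"

print(lookup("10.1.1.1"))     # matches 10.1.0.0/16 and 10.1.1.0/24; /24 wins -> hop-B
print(lookup("192.168.1.1"))  # only the default route matches -> hop-default
```

Note that the default route 0.0.0.0/0 matches every destination, which is exactly why it is only used when no specific route wins the length comparison.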
Below, we'll look at Question 4: How is routing conducted when there are multiple default
routes?
Let's first look at the networking approach shown in Figure 9-3. We've configured two default
routes on a firewall, with one that has a next hop of R1 and one that has a next hop of R2.
Let's ping the two server addresses on the destination network from the PC.
Figure 9-3 Default route multi-homing (PC 192.168.0.2/24 connects to the firewall's
GE0/0/1; the firewall's GE0/0/3 connects to R1, 10.1.1.2/24, on ISP1, behind which is server
10.10.10.10/32; the firewall's GE0/0/2 connects to R2, 10.1.2.2/24, on ISP2, behind which is
server 10.10.11.11/32)
The two default routes are configured as follows on the firewall:
[FW] ip route-static 0.0.0.0 0 10.1.1.2
[FW] ip route-static 0.0.0.0 0 10.1.2.2
A packet capture on the firewall's GE0/0/3 interface shows that both flows of packets were
forwarded from GE0/0/3.
Why did this happen? Didn't the two default routes load-balance? In fact, when there are
multiple equal-cost default routes, the specific link that a packet travels is calculated by a
hash algorithm over the source IP address + destination IP address. Because the algorithm
looks primarily at a packet's source and destination IP addresses, different address pairs
produce different results, and each equal-cost route has an equal chance of being selected. For
example, if packets share the same source IP address and their destination IP addresses
neighbor one another, such as 10.1.1.1 and 10.1.1.2, then during path selection each link will
forward one stream of packets. However, since the source and destination IP addresses of
traffic accessing external networks are effectively random, the hash results are completely
uncontrollable. Therefore, even though the default routes are equal-cost, it's entirely possible
that all packets will be forwarded over one link. This is also why the packets in the above
example were both forwarded from interface GE0/0/3.
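The per-flow path selection described above can be illustrated with a small sketch. The real hash algorithm is vendor-specific and not documented here; this Python version only demonstrates the idea that the same source/destination pair always maps to the same link, while the link chosen for any given pair is effectively unpredictable:

```python
import hashlib
import ipaddress

# Two hypothetical equal-cost egress links, matching the example topology.
LINKS = ["GE0/0/3 (via R1)", "GE0/0/2 (via R2)"]

def pick_link(src_ip: str, dst_ip: str) -> str:
    """Simplified per-flow ECMP illustration: hash the source and destination
    addresses and use the result to index into the equal-cost links.
    The actual algorithm used by a real firewall differs."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    digest = hashlib.sha256(key.to_bytes(8, "big")).digest()
    return LINKS[digest[0] % len(LINKS)]

# The same flow is always hashed to the same link (per-flow stickiness)...
assert pick_link("192.168.0.2", "10.10.10.10") == pick_link("192.168.0.2", "10.10.10.10")

# ...but which link a given destination lands on cannot be predicted in
# advance, so two flows may well share one egress, as in the capture above.
for dst in ("10.10.10.10", "10.10.11.11"):
    print(dst, "->", pick_link("192.168.0.2", dst))
```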
What I discussed above was a bit of basic knowledge. Now let's look at how the 'default
routing+specific route' shortest path routing method selects the shortest path. Let's first look at
a simple network environment, shown in Figure 9-4.
Figure 9-4 Multiple egress network diagram (from the enterprise network's firewall, Path 1
via ISP1 is the longer way to the server, and Path 2 via ISP2 is the shorter way)
In the above figure, when enterprise intranet users access an external network server, there are
two paths for associated packets through the firewall. Under normal circumstances,
enterprises generally configure two default routes on an egress firewall, one for each ISP.
Above, I mentioned that in path selection via default routing, a source IP+destination IP
HASH algorithm determines the path by which data packets are forwarded. This may result in
traffic accessing ISP2's server being forwarded through Path 1 in the figure after the HASH
algorithm is calculated, meaning that the packet would be sent on Path 1 to ISP1, and then
sent through ISP1 to ISP2, travelling a large loop before finally reaching its ultimate
destination. This would severely and negatively affect forwarding efficiency and user
experience.
So, what method can we use to ensure packets do not travel a circuitous path? The answer is
to configure specific routing. As we discussed above, packets are preferentially matched to
specific routes, and only look for default routes if there is not a specific route they can be
matched to. For the network shown in Figure 9-4, we could configure a specific route to the
server, with the next hop pointing to ISP2. In this way, after packets are matched to this
specific route they will not be forwarded circuitously. From the figure it can be seen that the
path selected for sending packets is the shortest of the two paths, which is what we mean by
'shortest path routing.' We can also verify this using the network shown in Figure 9-3. We
configure two static routes on the firewall such as the ones below:
[FW] ip route-static 10.10.10.10 255.255.255.255 10.1.1.2 (next hop is R1's address)
[FW] ip route-static 10.10.11.11 255.255.255.255 10.1.2.2 (next hop is R2's address)
A packet capture on the firewall's GE0/0/3 interface shows that there are only packets going
to 10.10.10.10.
A packet capture on the firewall's GE0/0/2 interface shows that there are only packets going
to 10.10.11.11.
This proves that the packets preferentially checked for the two specific routes that we just
configured. However, in real-world network environments, there are many servers on the
Internet, and asking administrators to configure so many specific routes on egress network
gateway firewalls is not realistic. Is there a convenient and fast method to configure specific
routes? This requires that the ISP routing function step into the limelight. But what exactly is
ISP routing?
9.2.2 ISP Routing
In the term 'ISP routing', we can see the key acronym "ISP", which indeed speaks to this
method's functionality. Each ISP has their own public well-known network segments, and if
all of these public well-known network segments were configured into specific routes as we
discussed above, then none of the packets going to this ISP would be forwarded in a
circuitous fashion. How can we change an ISP's public well-known network segments into
specific routes?
First, the administrator needs to collect together all of the public network segments within an
ISP (these can be found through online searches), and then compile the address network
segments into a file with an extension of .csv (we'll call this the ISP address file). The
compilation requirements are as shown in Figure 9-5:
Figure 9-5 Compiling an ISP address file
After the ISP address file's compilation is complete, we need to upload this onto the firewall's
designated path, for example onto a CF Card. There are many upload methods, such as SFTP,
FTP, TFTP, etc., and these will not be described here.
After the ISP address file has been uploaded to the firewall, the egress interface and next hop
are configured and the ISP routing function is enabled, at which point each IP address
segment in the ISP address file is converted into a separate static route. In this way, the entire
ISP address file becomes a script that configures a batch of static routes for one ISP, and
you won't need to worry about configuring an enormous number of static routes again!
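The bulk conversion described above can be sketched as follows. This is a simplified illustration only: the actual column layout Huawei expects in the csv file is the one shown in Figure 9-5, whereas this sketch assumes one "network,mask" pair per line:

```python
import csv
import io
import ipaddress

# Hypothetical ISP address file content, assumed here to list one network
# segment and mask per line (the real file format follows Figure 9-5).
ISP_CSV = """\
210.1.1.0,255.255.255.0
210.2.0.0,255.255.0.0
"""

def csv_to_routes(csv_text: str, out_if: str, next_hop: str) -> list[str]:
    """Expand each network segment in the ISP address file into a static
    route command, mimicking what the ISP routing function does in bulk."""
    cmds = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue
        network, mask = row[0].strip(), row[1].strip()
        # Validate the segment before emitting a route for it.
        ipaddress.ip_network(f"{network}/{mask}")
        cmds.append(f"ip route-static {network} {mask} {out_if} {next_hop}")
    return cmds

for cmd in csv_to_routes(ISP_CSV, "GigabitEthernet2/0/1", "201.1.1.2"):
    print(cmd)
```

Every segment in the file shares the same egress interface and next hop, which is exactly why one command (or one Web form) can install the whole batch.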
Below, we'll use an experimental network to verify the results of multi-homing using ISP
routing; this network is shown in Figure 9-6.
Figure 9-6 ISP routing network diagram (the firewall's GE2/0/1, 201.1.1.1/24, connects over
Path 1 to ISP1, which hosts servers 210.1.1.1/32, 210.1.1.2/32, and 210.1.1.3/32; the
firewall's GE2/0/2, 202.1.1.1/24, connects over Path 2 to ISP2, which hosts servers
220.1.1.1/32, 220.1.1.2/32, and 220.1.1.3/32)
In this network, we've separately compiled ISP1 and ISP2's address network segments into the
files ispa.csv and ispb.csv respectively.
We first used methods such as SFTP, FTP, TFTP, etc. to upload the two csv files onto the
firewall's designated path. The USG9500 firewall series' path is cfcard:/isp/; the USG6000
firewall series' path is hda1:/isp/.
After completing the upload of the csv files, a related command is used to configure the
corresponding egress interface and next hop, and the ISP routing function is enabled. Using
the USG9500 firewall series as an example, the configuration command is as below:
[FW] isp set filename ispa.csv GigabitEthernet 2/0/1 next-hop 201.1.1.2
In addition to this, we can also use the Web configuration method to configure ISP routing.
This method is even simpler, and csv file uploading and configuration(s) input can be
completed in one step. Using USG9500 as an example, the input method is shown in Figure
9-7.
Figure 9-7 Using the Web configuration method to enable ISP routing
The input method for ispb.csv is the same as for ispa.csv, with the exception that the egress
interface and next hop are changed to GE2/0/2 and 202.1.1.2 respectively. The firewall's
routing table then contains the following ISP routes:

Destination/Mask    Proto  Pre  Cost   Flags  NextHop     Interface
210.1.1.1/32        ISP    60   0      D      201.1.1.2   GigabitEthernet2/0/1
210.1.1.2/32        ISP    60   0      D      201.1.1.2   GigabitEthernet2/0/1
210.1.1.3/32        ISP    60   0      D      201.1.1.2   GigabitEthernet2/0/1
220.1.1.1/32        ISP    60   0      D      202.1.1.2   GigabitEthernet2/0/2
220.1.1.2/32        ISP    60   0      D      202.1.1.2   GigabitEthernet2/0/2
220.1.1.3/32        ISP    60   0      D      202.1.1.2   GigabitEthernet2/0/2
When an intranet user accesses a server belonging to ISP1, after packets are matched to the
routing table they are forwarded from interface GigabitEthernet2/0/1; likewise, when one of
ISP2's servers is accessed, packets are forwarded from interface GigabitEthernet2/0/2. This
guarantees that packets are always forwarded to the destination network via the shortest path.
In the above routing table, it can be seen that ISP routing and static routing are extremely
similar; in the routing table, other than the fact that the protocol type is ISP, the table's other
content is exactly the same as with static routing. Moreover, these two kinds of routing can
overlap one another; for example, if a static route is first configured and then an ISP route
with the same destination address and next hop is imported, this route's protocol type will
change from static to ISP in the routing table (the opposite is true as well). However, in
real-world use, there are still several differences between ISP routing and static routing:
1. Static routes are configured manually, route by route, and are displayed in the
   configuration file; ISP routes can only be input in bulk via the method described above,
   and are not displayed in the configuration file.
2. Static routes can be deleted and added individually; for ISP routing, deletions and
   additions can only be made to the address network segments in the ISP address file, and
   single ISP routes cannot be deleted or added using commands.
What we discussed above was the process of an administrator building ISP routes. In
actuality, firewalls ship with factory default csv files for four ISPs: china-mobile.csv (China
Mobile), china-telecom.csv (China Telecom), china-unicom.csv (China Unicom), and
china-educationnet.csv (CERNET). ISP routing for these can be enabled simply by the admin
executing the input.
To summarize, shortest path routing is at its core a cooperation among three kinds of routing:

• Default equal-cost routing ensures that all packets passing through a firewall are
  matched to a route and forwarded, but it cannot guarantee that packets are forwarded
  over the shortest link (the forwarding egress is selected by a hash algorithm over the
  source IP address + destination IP address).
• Specific routing ensures that packets accessing different ISPs' servers are each
  forwarded from the firewall link connected to the corresponding ISP, achieving shortest
  path access; however, the difficulty of manually configuring a large number of specific
  routes is a problem for enterprise network administrators.
• ISP routing fills in specific routing's deficiency, namely the difficulty of large-batch
  manual configuration, allowing specific routes to be configured for all of an ISP's
  address network segments in a matter of minutes.

Each of these three kinds of routing has its own unique properties, and only using them
together allows each to make up for the others' shortcomings so that their strong points can
shine. In such combined use, specific routing and ISP routing direct packets along the
shortest forwarding path, while packets that cannot be matched to a specific route are
forwarded by checking a default route.
However, the shortest path routing method is only a basic method of conducting routing for
multi-homing. As we know, route checking in this method is performed using packets'
destination addresses, and this is where a problem arises: if an admin wishes to differentiate
between intranet users and allow users with different priority levels to forward packets from
different links, or if the admin wishes to differentiate the links used to forward traffic based
upon different applications, these goals cannot be completed through checking for routes
using the destination address. To accomplish this, we need more flexible path selection
mechanisms, for example using packets' source IP address, the application protocol type, etc.
to differentiate user traffic, and then furthermore conduct differentiated forwarding of this
different user traffic. Therefore, our focus naturally shifts to policy-based routing.
9.3 Policy-based Routing
Now that we've reached policy-based routing, the first thing that comes to my mind is that in
its early years, policy-based routing's greatest use in China was in interconnecting China
Telecom and China Netcom. After Telecom and Netcom split into different companies, a
networking environment unique to China formed, with the south of the country being
dominated by China Telecom and the north by Netcom (which now has merged into China
Unicom). When networks only had single egresses, service for Telecom users accessing
Netcom services was relatively slow, as was service for Netcom users accessing Telecom's
services. Therefore, people thought of an enterprise network dual egress plan consisting of
network egresses that connected to both Telecom and Netcom. The widespread use of the dual
egress plan allowed policy-based routing to display its functionality! Configuring
policy-based routing on enterprise egress network gateway devices allowed Telecom traffic to
use each network's Telecom egress, and Netcom traffic to use each network's Netcom egress.
How did policy-based routing accomplish the splitting of Telecom and Netcom traffic? Let's
begin by describing what policy-based routing is.
9.3.1 Policy-based Routing Concepts
So-called 'policy-based routing' is exactly what it sounds like: forwarding packets based on
specific policies. Moreover, these policies are human-formulated, and therefore policy-based
routing is a more flexible mechanism for multi-homing than the traditional method of
selecting routes based upon destination address. After policy-based routes are configured on a
firewall, the firewall first conducts filtering of the packets it receives based upon the rules
configured in policy-based routing. After matching is successful, the packets are then
forwarded according to specific forwarding policies. The "configured rules" define matching
conditions, usually using ACLs; the "specific forwarding policies" specify the related actions
to be executed on matching traffic. Therefore, we can deduce that policy-based routing is
comprised of the following two components:
•  Matching conditions (defined using ACLs)
   These differentiate the traffic that will be forwarded using policy-based routing.
   Matching conditions include the packet's source IP address, destination IP address,
   protocol type, application type, etc.; the matching conditions that can be set differ
   between firewall models. One policy-based routing rule may contain multiple matching
   conditions, related in an "AND" manner: a packet must satisfy all of the matching
   conditions before the subsequently defined forwarding action is executed.
•  Actions
   These are performed on traffic that meets the matching conditions, such as designating
   the egress interface and next hop.
When there are multiple policy-based routing rules, a firewall checks them in their matching
order, starting with the first rule. If the first policy-based routing rule's matching condition is
met, the packet is processed using the designated action; if not, the firewall checks the next
policy-based routing rule. If no policy-based routing rule's matching conditions are met, the
packet is forwarded according to the routing table. Note that policy-based routing matching
is completed before a packet checks the routing table, which is to say that policy-based
routing has a higher priority than routing. The process for policy-based routing rule matching
is shown in Figure 9-8.
Figure 9-8 Process of policy-based routing rule matching (a packet is checked against PBR
rule 1, rule 2, rule 3, and so on in order; the first rule whose condition is matched redirects
the packet to the designated next hop or outbound interface, and if no rule matches, the
packet is forwarded based on the routing table)
Additionally, if the status of the egress interface designated by policy-based routing is 'Down'
or the next hop cannot be reached, the packet will be forwarded by checking the routing table.
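The first-match logic described above, including the "And" relationship between a rule's conditions and the fallback to the routing table when no rule matches or the designated next hop is down, can be sketched in a few lines of Python. This is a simplified conceptual model for illustration only; names such as `PbrRule` and `forward` are invented and are not actual firewall internals.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class PbrRule:
    conditions: dict                 # e.g. {"dst": "10.10.11.11/32"}; ANDed together
    next_hop: str                    # action: redirect to this next hop
    next_hop_reachable: bool = True  # if False, fall back to the routing table

    def matches(self, packet: dict) -> bool:
        # every condition must hold ("And" relationship between conditions)
        for key, net in self.conditions.items():
            if ip_address(packet[key]) not in ip_network(net):
                return False
        return True

def forward(packet: dict, rules: list) -> str:
    # rules are checked in order; the first matching rule wins
    for rule in rules:
        if rule.matches(packet):
            if rule.next_hop_reachable:
                return rule.next_hop
            break  # designated egress is Down/unreachable -> routing table
    return "routing-table"  # no rule matched: ordinary route lookup

rules = [
    PbrRule({"dst": "10.10.11.11/32"}, "10.1.1.2"),   # Telecom next hop
    PbrRule({"dst": "10.10.10.10/32"}, "10.1.2.2"),   # Netcom next hop
]
print(forward({"src": "192.168.0.2", "dst": "10.10.11.11"}, rules))  # 10.1.1.2
print(forward({"src": "192.168.0.2", "dst": "8.8.8.8"}, rules))      # routing-table
```

Note how the model checks PBR rules before ever consulting the routing table, mirroring the higher priority of policy-based routing over ordinary routing.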
Now that we've finished discussing the basic principles of policy-based routing, let's look
back at how policy-based routing was able to bring about the forwarding of Telecom traffic by
Telecom and Netcom traffic by Netcom.
9.3.2 Destination IP Address-based Policy-based Routing
We'll use a network environment (shown in Figure 9-9) to verify the results of this kind of
policy-based routing.
Figure 9-9 Destination IP address-based policy-based routing
[Figure: PC1 (192.168.0.2/24) on the internal network 192.168.0.0/24 connects to the
firewall's GE0/0/3 (Trust, 192.168.0.1/24). The firewall's GE0/0/1 (Untrust, 10.1.1.1/24)
connects via R1 (10.1.1.2/24) to the Telecom network and server 10.10.11.11/32; its GE0/0/2
(Untrust1, 10.1.2.1/24) connects via R2 (10.1.2.2/24) to the Netcom target network and
server 10.10.10.10/32.]
Here, the firewall is serving as the enterprise's egress network gateway and is connected to the
Internet via two links. Of these, the link passing through R1 is Telecom's line, and the link
passing through R2 is Netcom's line. At this point, we'll want to allow enterprise users
accessing server 10.10.11.11/32 to be forwarded through the Telecom path, while users
accessing server 10.10.10.10/32 are forwarded through the Netcom path.
If two default routes are configured on the firewall, when enterprise users access the two
servers 10.10.11.11/32 and 10.10.10.10/32, we may see that all corresponding packets are
forwarded through the R1 link. We discussed this in the shortest path routing section
above: path selection for default routing uses a source IP address + destination IP address
hash algorithm to calculate the egress link chosen by packets, so there is no way to guarantee
that traffic accessing 10.10.11.11/32 is forwarded through the Telecom line and traffic
accessing 10.10.10.10/32 is forwarded through the Netcom line.
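The hash-based selection just described can be illustrated with a toy model. The real algorithm is device-specific and undocumented here; this sketch only shows why per-flow hashing keeps packets of one flow on one link while giving no control over which link a given destination lands on. The hash function and link names are invented for illustration.

```python
import hashlib
from ipaddress import ip_address

LINKS = ["Telecom(R1)", "Netcom(R2)"]

def pick_link(src: str, dst: str) -> str:
    # hash the source+destination address pair onto one of the equal routes
    key = ip_address(src).packed + ip_address(dst).packed
    digest = hashlib.md5(key).digest()
    return LINKS[digest[0] % len(LINKS)]

# The same flow always hashes to the same link (no per-flow reordering),
# but there is no guarantee which link a particular destination gets.
assert pick_link("192.168.0.2", "10.10.11.11") == pick_link("192.168.0.2", "10.10.11.11")
```

Because the choice depends on the hash rather than on an administrator-defined condition, policy-based routing is needed to pin specific destinations to specific links.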
Below, we'll configure policy-based routing on the firewall, and look at what the results of
this experiment are. The configuration is as follows (we'll use the USG2000/5000 firewall
series in this example):
1. Configure the matching conditions to be based on the packet's destination address.
[FW] acl number 3000
[FW-acl-adv-3000] rule 5 permit ip destination 10.10.11.11 0
[FW-acl-adv-3000] quit
[FW] acl number 3001
[FW-acl-adv-3001] rule 5 permit ip destination 10.10.10.10 0
[FW-acl-adv-3001] quit
2. Configure policy-based routing.
[FW] policy-based-route test permit node 10
[FW-policy-based-route-test-10] if-match acl 3000    //apply matching condition
[FW-policy-based-route-test-10] apply ip-address next-hop 10.1.1.2    // configure action, redirect Telecom as the next hop
[FW-policy-based-route-test-10] quit
[FW] policy-based-route test permit node 20
[FW-policy-based-route-test-20] if-match acl 3001    //apply matching condition
[FW-policy-based-route-test-20] apply ip-address next-hop 10.1.2.2    // configure action, redirect Netcom as the next hop
[FW-policy-based-route-test-20] quit
3. Apply policy-based routing.
[FW] interface GigabitEthernet0/0/3
[FW-GigabitEthernet0/0/3] ip policy-based-route test    //apply policy-based routing on the ingress interface
[FW-GigabitEthernet0/0/3] quit
After configuration is complete, we ping both the 10.10.11.11 and 10.10.10.10 addresses
from the PC, and the pings go through successfully. We can also take a look on the
firewall at the detailed information expressed in the session table, displayed below:
[FW] display firewall session table verbose
Current total sessions: 2
icmp VPN: public --> public
Zone: trust --> untrust TTL: 00:00:20 Left: 00:00:16
Interface: GigabitEthernet0/0/1 Nexthop: 10.1.1.2 MAC:54-89-98-1d-74-24
<--packets: 4 bytes: 240 -->packets: 4 bytes: 240
192.168.0.2:54999 --> 10.10.11.11:2048
icmp VPN: public --> public
Zone: trust --> untrust TTL: 00:00:20 Left: 00:00:17
Interface: GigabitEthernet0/0/2 Nexthop: 10.1.2.2 MAC:54-89-98-ea-53-c9
<--packets: 4 bytes: 240 -->packets: 4 bytes: 240
192.168.0.2:63959 --> 10.10.10.10:2048
From the displayed information we can see that packets going to 10.10.11.11 are
forwarded from the firewall's GE0/0/1 interface, with the next hop as the address of the
interface connecting R1 with the firewall; meanwhile, packets going to 10.10.10.10 are
forwarded from the firewall's GE0/0/2 interface, with the next hop as the address of the
interface connecting R2 with the firewall. This therefore accomplishes the requirement
of traffic accessing 10.10.11.11/32 being forwarded from Telecom's path and traffic
accessing 10.10.10.10/32 being forwarded from the Netcom path.
Having read this far, some may say, "The shortest path access introduced in the previous
section can also accomplish this objective." Correct! Indeed, as the 'default
routing + specific routing' method of shortest path routing forwards packets according to
destination address, and as the policy-based routing configured above also uses packets'
destination addresses as the condition for formulating the forwarding policy, both are
able to accomplish the same objective.
However, in reality, traditional static routes and dynamic routes are only able to provide
a relatively simple routing method for users based upon a packet's destination
address—this primarily solves network packet forwarding problems, but is not able to
provide flexible service. Policy-based routing, on the other hand, is different—it allows
network administrators to not only be able to choose forwarding routes based upon
destination address, but also to be able to do this based upon packet source IP address,
protocol type, application type or other conditions. Therefore, policy-based routing
provides greater control over packets than traditional routing protocols.
9.3.3 Source IP Address-based Policy-based Routing
If it seems there was still some crossover between the application of policy-based routing
described in the last section and that of shortest path routing, then let's look at another
application of policy-based routing. Everyone knows that networks today are developing
towards Fiber to the Home (FTTH), however, the cost associated with fiber-optics is not a
small one in today's China, and many networks therefore use the fiber + ADSL connection
method, which entails two simultaneous connections to the Internet via two lines of different
speeds. This means that we can configure policy-based routing to allow traffic with relatively
high priorities to use the fiber-optic connection, while traffic with relatively low priorities
uses the ADSL connection.
Figure 9-10 Source IP address-based policy-based routing
[Figure: the employee PC (192.168.0.2/24) connects to the firewall's GE0/0/3 (Trust,
192.168.0.1/24), and the manager PC (192.168.10.10/24) connects to GE0/0/4 (Trust,
192.168.10.1/24). The firewall's GE0/0/1 (Untrust, 10.1.1.1/24) connects via R1 (10.1.1.2/24)
to ISP1, and its GE0/0/2 (Untrust1, 10.1.2.1/24) connects via R2 (10.1.2.2/24) to ISP2; both
ISPs reach the target network and server 10.10.10.10/32.]
In this scenario, the firewall is the enterprise's egress network gateway, and is connected to the
Internet through two links belonging to different ISPs. Of these, the link passing through R1
has relatively high bandwidth—let's say 10 Mbit/s—while the link passing through R2 has
relatively low bandwidth, 2 Mbit/s. In order to ensure a good Internet access experience for
one of the enterprise's managers, we want the manager's traffic accessing the Internet to be
forwarded through the R1 link, while an employee's traffic accessing the Internet is forwarded
through the R2 link.
The aforesaid goal cannot be achieved by looking up routes based on the destination address.
Instead, setting the source IP address as the matching condition in policy-based routing
allows this problem to be easily solved. Configure this on the firewall as follows
(we'll use the USG2000/5000 firewall series in this example):
1. Configure the matching conditions to be based on the packet's source IP address.
[FW] acl number 3000
[FW-acl-adv-3000] rule 5 permit ip source 192.168.10.0 0.0.0.255
[FW-acl-adv-3000] quit
[FW] acl number 3001
[FW-acl-adv-3001] rule 5 permit ip source 192.168.0.0 0.0.0.255
[FW-acl-adv-3001] quit
2. Configure policy-based routing.
[FW] policy-based-route boss permit node 10
[FW-policy-based-route-boss-10] if-match acl 3000    //apply matching condition
[FW-policy-based-route-boss-10] apply ip-address next-hop 10.1.1.2    // configure action, redirect next hop as R1
[FW-policy-based-route-boss-10] quit
[FW] policy-based-route employee permit node 10
[FW-policy-based-route-employee-10] if-match acl 3001    //apply matching condition
[FW-policy-based-route-employee-10] apply ip-address next-hop 10.1.2.2    // configure action, redirect next hop as R2
[FW-policy-based-route-employee-10] quit
3. Apply policy-based routing.
[FW] interface GigabitEthernet0/0/3
[FW-GigabitEthernet0/0/3] ip policy-based-route employee    //apply policy-based routing on the ingress interface
[FW-GigabitEthernet0/0/3] quit
[FW] interface GigabitEthernet0/0/4
[FW-GigabitEthernet0/0/4] ip policy-based-route boss    //apply policy-based routing on the ingress interface
[FW-GigabitEthernet0/0/4] quit
After completing configuration, ping server address 10.10.10.10 on the Internet from
both the manager's and employee's PCs, and view the detailed session table information
on the firewall, displayed below:
[FW] display firewall session table verbose
Current total sessions: 2
icmp VPN: public --> public
Zone: trust --> untrust TTL: 00:00:20 Left: 00:00:16
Interface: GigabitEthernet0/0/1 Nexthop: 10.1.1.2 MAC:54-89-98-1d-74-24
<--packets: 4 bytes: 240 -->packets: 4 bytes: 240
192.168.10.2:47646 --> 10.10.10.10:2048
icmp VPN: public --> public
Zone: trust --> untrust TTL: 00:00:20 Left: 00:00:17
Interface: GigabitEthernet0/0/2 Nexthop: 10.1.2.2 MAC:54-89-98-ea-53-c9
<--packets: 4 bytes: 240 -->packets: 4 bytes: 240
192.168.0.2:53022 --> 10.10.10.10:2048
In the information displayed above, the manager's (192.168.10.2) traffic accessing the
server is forwarded from the link connected to R1 (10.1.1.2), while the employee's
(192.168.0.2) traffic accessing the server is forwarded from the link connected to R2
(10.1.2.2), thus meeting the user's need for higher priority traffic to use the high speed
link and for lower priority traffic to use the lower speed link.
9.3.4 Application-based Policy-based Routing
What I introduced above were some traditional uses of policy-based routing, but there is
another common policy-based routing scenario in networks today that is related to
applications. As everyone knows, quite a few types of applications are used in networks,
and some of them, such as P2P and online video, generate large volumes of traffic. These
applications consume a high amount of egress bandwidth, severely affecting the
forwarding of enterprise service traffic. Application-based policy-based routing was
developed for this sort of circumstance; it is accomplished by combining policy-based
routing with application identification functions and using the traffic's application type as
a matching condition.
Below, I'll examine the actual results of application-based policy-based routing; please see
Figure 9-11.
Figure 9-11 Application-based policy-based routing
[Figure: PC1 (192.168.0.2/24) on the internal network 192.168.0.0/24 connects to the
firewall's GE2/2/23 (Trust, 192.168.0.1/24). The firewall's GE2/2/21 (Untrust, 10.1.1.1/24)
connects via R1 (10.1.1.2/24) to ISP1, used for the P2P service; its GE2/2/17 (Untrust1,
10.1.2.1/24) connects via R2 (10.1.2.2/24) to ISP2 and the target network with server
10.10.10.10/24.]
Here, the firewall is the enterprise's egress network gateway, and is connected to the Internet
using two links (one from ISP1 and one from ISP2). Of these, the link provided by ISP2 is a
stable link with equal uplink and downlink bandwidth, and is the primary link used for
forwarding the enterprise's normal traffic; the link provided by ISP1 has unequal uplink and
downlink bandwidth and relatively slow speed, but the leasing price is low, and so this can be
provided for use as a link for forwarding certain high traffic applications (in the figure this is
P2P).
We'll use the "BitSpirit" tool to simulate P2P services: a P2P server is simulated on the
server and a P2P client on the enterprise user's PC1. We also use pings to simulate
normal services.
We first configure application-based policy-based routing on the firewall, to cause the P2P
application's traffic to be forwarded from the GE2/2/21 egress interface, while normal traffic
is directly forwarded by routing from the GE2/2/17 egress interface. The configuration
commands are below (we'll use the USG9500 firewall series in this example):
1. Configure the matching condition to be based on the source IP address.
[FW] acl number 3000
[FW-acl-adv-3000] rule 5 permit ip source 192.168.0.0 0.0.0.255
[FW-acl-adv-3000] quit
2. Configure policy-based routing.
[FW] traffic classifier p2p
[FW-classifier-p2p] if-match acl 3000 category p2p    // set the user's P2P application as a matching condition
[FW-classifier-p2p] quit
[FW] traffic behavior p2p
[FW-behavior-p2p] redirect ip-nexthop 10.1.1.2 interface GigabitEthernet2/2/21    // redirect the egress interface and next hop
[FW-behavior-p2p] quit
[FW] traffic policy p2p
[FW-trafficpolicy-p2p] classifier p2p behavior p2p
[FW-trafficpolicy-p2p] quit
3. Apply policy-based routing.
[FW] interface GigabitEthernet2/2/23
[FW-GigabitEthernet2/2/23] traffic-policy p2p inbound    //apply policy-based routing on the ingress interface
[FW-GigabitEthernet2/2/23] quit
Following the completion of configuration, we enable the "BitSpirit" client download
function on PC1, and then view the session table (displayed below) on the firewall:
[FW] display firewall session table verbose
Current total sessions: 2
tcp VPN: public --> public
Zone: trust --> untrust Slot: 3 CPU: 3 TTL: 00:00:05 Left: 00:00:02
Interface: GigabitEthernet2/2/21 Nexthop: 10.1.1.2
<--packets: 0 bytes: 0 -->packets: 2 bytes: 96
192.168.0.2:1712 --> 10.10.10.10:29553
tcp VPN: public --> public
Zone: trust --> untrust Slot: 3 CPU: 3 TTL: 00:00:05 Left: 00:00:02
Interface: GigabitEthernet2/2/21 Nexthop: 10.1.1.2
<--packets: 0 bytes: 0 -->packets: 2 bytes: 96
192.168.0.2:1711 --> 10.10.10.10:29553
From the information displayed above, we can see that the session's destination address and
port are the same as those displayed on the "BitSpirit" client—they all are 10.10.10.10:29553.
Moreover, this portion of traffic is being forwarded from the link with GE2/2/21 as the egress
interface and a next hop of 10.1.1.2. This is the same as the policy-based route's redirected
egress interface and next hop, meaning that P2P application-based policy-based routing has
been successfully applied.
Below, we'll ping server address 10.10.10.10 from PC1.
At this point, let's take another look at the session table, displayed below:
[FW] display firewall session table verbose
Current total sessions: 1
icmp VPN: public --> public
Zone: trust --> untrust Slot: 3 CPU: 3 TTL: 00:00:20 Left: 00:00:17
Interface: GigabitEthernet2/2/17 Nexthop: 10.1.2.2
<--packets: 4 bytes: 240 -->packets: 4 bytes: 240
192.168.0.2:768 --> 10.10.10.10:2048
The session table shows that the ping packet was forwarded from the link with egress
interface GE2/2/17 and a next hop of 10.1.2.2. This demonstrates that our normal service
traffic is being forwarded from the link provided by ISP2, meaning that our original objective
has been achieved.
To summarize the above uses of policy-based routing, we now know that the flexibility of
policy-based routing is rooted in the flexibility and diversity of the matching conditions.
There are different matching conditions for different scenarios, and the matching conditions
contained in the three above examples were destination IP address, source IP address and
application type respectively. In addition to this, there are a multitude of other matching
conditions that are relatively commonly used, including user, protocol type, etc. As the
configuration methods for these are essentially the same, we won't introduce these one by one
here.
9.3.5 Policy-based Routing In Out-of-path Networks
Another scenario in which policy-based routing is used in networks today must also be
mentioned: when firewalls are deployed out-of-path off of enterprise egress routers or core
switches. In this sort of scenario, policy-based routing is not configured on the firewall, but
rather is configured on the router or switch. However, as this kind of approach is used on
many enterprise egresses and data center egresses, I'll also give an introduction to this here.
First, we'll examine this kind of scenario using a model of actual network conditions, shown
in Figure 9-12.
Figure 9-12 Network diagram for an out-of-path firewall
[Figure: the external PC (10.10.10.10/24) reaches egress router R1 via its GE0/0/3
(10.1.4.1/24). R1's GE0/0/2 (10.1.2.1/24) connects to the firewall's GE0/0/2 (10.1.2.2/24),
and R1's GE0/0/1 (10.1.1.1/24) connects to the firewall's GE0/0/1 (10.1.1.2/24). R1's
GE0/0/0 (192.168.0.1/24) connects the enterprise network and server 192.168.0.2/24.
Arrows show the paths of incoming and outgoing traffic.]
Here, the enterprise egress is router R1, and the firewall is deployed out-of-path off of the
egress router R1. When an external network user accesses the intranet server, after traffic is
guided from R1 to the firewall for security protection, it is then forwarded again to the
intranet server.
In this networking approach, policy-based routing is configured on the egress router R1, and
the approach used for configuring the router's policy-based routing is the same as that for
firewall policy-based routing introduced above—both involve first defining a matching
condition(s) (here, this is the destination IP address), setting an action (redirecting the egress
interface or next hop), and then applying this on the ingress interface. In this networking
approach, in addition to conducting security processing of the traffic guided to it, the firewall
also needs to return (reinsert) traffic to the egress router R1.
Traffic return is actually quite simple, and below I'll introduce two return methods—static
routing and OSPF.
• Method for configuring static routing return
Table 9-1 only lists the configuration of policy-based routing and static routing.
Table 9-1 Configuring static routing return

R1:
#
acl number 3000
 rule 5 permit ip destination 192.168.0.2 0
#
policy-based-route in permit node 10
 if-match acl 3000
 apply ip-address next-hop 10.1.2.2
#
interface GigabitEthernet0/0/3
 ip address 10.1.4.1 255.255.255.0
 ip policy-based-route in
#
ip route-static 10.10.10.0 255.255.255.0 10.1.4.2

FW:
#
ip route-static 192.168.0.0 255.255.255.0 10.1.1.1
• Configuration method for OSPF routing return
When the number of connected users is relatively high, this configuration method can be
considered, as it eases the administrator's maintenance. In the networking approach
shown in Figure 9-13, R1 is the enterprise egress router, and the core switch LSW is
connected to the intranet server. When an external network user accesses the intranet
server, traffic passes through R1 to LSW, and is then guided onto firewall FW for
security policy filtering by the policy-based routing configured on LSW. After filtering,
the traffic is then returned to LSW through checking the OSPF route on FW, after which
it accesses the intranet server.
Figure 9-13 OSPF network diagram
[Figure: the external PC (10.10.10.10/24) reaches R1 via its GE0/0/3 (10.1.4.1/24). R1's
GE0/0/0 (10.1.3.1/24) connects to core switch LSW (Vlanif100, 10.1.3.2/24), running OSPF
process 2. LSW's GE0/0/2 (10.1.2.1/24) connects to the firewall's GE0/0/2 (10.1.2.2/24);
LSW and the firewall are also connected by a link on 10.1.1.0/24 (GE0/0/1 on each,
10.1.1.1/24 and 10.1.1.2/24) running OSPF process 1. LSW connects the enterprise networks
with servers 192.168.0.2/24, 192.168.1.2/24, and 192.168.2.2/24. Arrows show the paths of
incoming and outgoing traffic.]
Table 9-2 only lists the configuration of policy-based routing and OSPF.
Table 9-2 Configuring OSPF routing return

R1:
#
ospf 2
 area 0.0.0.0
  network 10.1.3.0 0.0.0.255
  network 10.1.2.0 0.0.0.255

LSW:
#
vlan batch 100
#
interface Vlanif100
 ip address 10.1.3.2 255.255.255.0
#
acl number 3000
 rule 5 permit ip destination 192.168.0.2 0
 rule 10 permit ip destination 192.168.1.2 0
 rule 15 permit ip destination 192.168.2.2 0
#
policy-based-route in permit node 10
 if-match acl 3000
 apply ip-address next-hop 10.1.2.2
#
interface GigabitEthernet0/0/3
 port link-type access
 port default vlan 100
 ip policy-based-route in
#
ospf 1
 area 0.0.0.0
  network 10.1.1.0 0.0.0.255
  network 192.168.0.0 0.0.0.255
  network 192.168.1.0 0.0.0.255
  network 192.168.2.0 0.0.0.255
ospf 2
 area 0.0.0.0
  network 10.1.3.0 0.0.0.255
  network 10.1.2.0 0.0.0.255

FW:
#
ospf 1
 import-route ospf 2
 area 0.0.0.0
  network 10.1.1.0 0.0.0.255
ospf 2
 import-route ospf 1
 area 0.0.0.0
  network 10.1.2.0 0.0.0.255
Configuration for OSPF return is a bit more complicated. First we need to use two OSPF
processes on the LSW to isolate upstream and downstream traffic, then use policy-based
routing to guide traffic onto FW, and finally configure mutual route import between the two
OSPF processes on FW, allowing the two OSPF processes to learn each other's routes.
Traffic guidance is completed by policy-based routing on LSW, while static routing or OSPF
routing on FW completes traffic return. After configuration is complete, running tracert from an
external network PC (10.10.10.10) for the intranet server's address 192.168.0.2 gives the
following result (a static routing network configuration is used in this example):
PC>tracert 192.168.0.2
traceroute to 192.168.0.2, 8 hops max (ICMP), press Ctrl+C to stop
 1 10.10.10.1 16 ms <1 ms <1 ms
 2 10.1.4.1 31 ms 31 ms 32 ms
 3 10.1.2.2 140 ms 63 ms 47 ms
 4 10.1.1.1 94 ms 62 ms 47 ms
 5 192.168.0.2 63 ms 78 ms 94 ms
This path information shows that after access traffic passes through FW it is returned again to
R1, and finally arrives at the target server, achieving the expected results.
Policy-based routing is actually just conducting path selection and redefining the egress
interface and next hop for traffic that matches the matching condition(s). This requires that
administrators have an ample understanding of the current network state and be able to choose
suitable matching conditions accordingly. For example, clearly knowing the merits of multiple
egress links allows traffic from an enterprise's important clients or important services to be
forwarded from a high priority link(s). Flexibly applied policy-based routing can provide
administrators with an expanded network planning toolbox.
10 Firewall Deployment on Campus Network
10.1 Networking Requirements
As shown in Figure 10-1, USG9560 V300R001C20 is deployed at the egress of the campus
network as a gateway to provide broadband access to intranet users and internal server access
to external users. The campus has three WAN links: one to ISP1 (1 Gbit/s), one to ISP2 (1
Gbit/s), and one to China Education and Research Network (CERNET) (10 Gbit/s).
Figure 10-1 Networking diagram of campus network egress
[Figure: the campus network 10.1.0.0/16, including the server zone 10.1.10.0/24 (library
server 10.1.10.10, DNS 10.1.10.20, log server 10.1.10.30), connects to the firewall's GE1/0/3
(172.16.1.1/30; peer 172.16.1.2/30). The firewall's GE2/0/0 (218.1.1.1/30) connects to
CERNET (peer 218.1.1.2/30); GE1/0/1 (200.1.1.1/30) connects to ISP1 (peer 200.1.1.2/30);
GE1/0/2 (202.1.1.1/30) connects to ISP2 (peer 202.1.1.2/30).]
The specific requirements:
• To ensure the Internet access experience of the intranet users, the campus requires that
traffic to specific destination addresses be forwarded through a specified ISP link. For
instance, the traffic to the servers hosted by ISP1 can be forwarded through the link
provided by ISP1; the traffic to servers hosted by ISP2 can be forwarded through the link
provided by ISP2; and the traffic to servers hosted by CERNET can be forwarded
through its link.
• In addition, the campus requires that the traffic from special intranet users be forwarded
through a specified ISP link. For instance, the data traffic from library users can be
forwarded through the CERNET link.
• Servers for external users are deployed on the campus, such as school website, email,
and Portal servers.
• A DNS server is also deployed to resolve the domain names of the servers. The campus
requires that the domain name of a server be resolved into the public IP address assigned
by the ISP that serves the requesting user, so as to increase access speed.
• The campus requires that the firewall protect the intranet from SYN flood attacks and
alert on network intrusions.
• Due to the limited bandwidth of the ISP1 and ISP2 links, the campus requires that the
P2P traffic on these links be limited, including the P2P traffic of each user and the
overall P2P traffic.
• The campus requires that the network management system display the attack defense
and intrusion detection logs and the IP addresses before and after NAT.
10.2 Network Planning
10.2.1 Multi-ISP Routes Planning
• ISP Routing
To ensure that traffic to specific destination addresses can be forwarded through specified
ISP links, the ISP routing function needs to be deployed on the firewall. Mainstream ISP
address files are built into the firewall, and ISP address files can be manually created or
modified when necessary. IP-Link can be used to verify whether the ISP links are
normal.
In addition, because the CERNET link has relatively high bandwidth, the next hop of the
default route can be set to the IP address assigned by CERNET so that traffic not
matching any other route is forwarded through the CERNET link.
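The ISP routing idea above can be sketched conceptually: each ISP's address file is a set of prefixes, a destination is matched against them, and anything unmatched follows the default route toward CERNET. This is an illustrative model only; the prefixes and the `egress_for` helper are invented, not the firewall's actual address files or internals.

```python
from ipaddress import ip_address, ip_network

# Invented example prefixes standing in for per-ISP address files
ISP_ADDRESS_FILES = {
    "isp1": [ip_network("200.1.0.0/16")],
    "isp2": [ip_network("202.1.0.0/16")],
    "cernet": [ip_network("218.0.0.0/8")],
}

def egress_for(dst: str) -> str:
    # route out the link of the ISP whose address file contains the destination
    addr = ip_address(dst)
    for isp, prefixes in ISP_ADDRESS_FILES.items():
        if any(addr in p for p in prefixes):
            return isp
    return "cernet"  # default route: next hop on the CERNET link

print(egress_for("200.1.7.7"))   # isp1
print(egress_for("8.8.8.8"))     # cernet (default route)
```

This mirrors the planning decision above: specific ISP routes first, with the high-bandwidth CERNET link catching everything else.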

Policy-Based Routing (PBR)
To ensure the traffic from special intranet users can be forwarded through specified ISP
links, PBR function needs to be deployed on the firewall.
The PBR of firewalls in this example is implemented through traffic classification, traffic
behavior, and traffic policy. For instance, if the data traffic from library users is to be
forwarded through the CERNET link, the traffic class must match the host addresses of
online users in the library; CERNET link must be set as the next hop in the traffic
behavior; and then associate the traffic class with the traffic behavior in the traffic policy.
10.2.2 Security Planning
• Security Zone
In this example, there are four interfaces on the firewall. Since the four interfaces
connect to different networks, they should be added to different security zones.
− GE1/0/1 connected to ISP1 is added to the isp1 zone, which needs to be created and
whose priority is set to 15.
− GE1/0/2 connected to ISP2 is added to the isp2 zone, which needs to be created and
whose priority is set to 20.
− GE2/0/0 connected to the CERNET link is added to the cernet zone, which needs to
be created and whose priority is set to 25.
− GE1/0/3 connected to the switch is added to the trust zone. The trust zone is a
predefined security zone of the firewall with a priority of 85.
• Security policy
To enable communications and access control between the zones, security policies must
be deployed to:
− Allow the intranet users in the trust zone to access the isp1, isp2, and cernet zones.
− Allow Internet users in the cernet, isp1, and isp2 zones to access specified ports on
the servers in the trust zone. The library server (only HTTP and FTP services are
enabled) and the DNS are used as examples in this document.
− Allow the connection between the firewall in the local zone and the log servers in the
trust zone.
To ensure the communications of multi-channel protocols, such as FTP, between the
zones, application-specific packet filter (ASPF) must be configured between the zones.
The implicit security policy action of the firewall is set to deny. Traffic not matching any security policy
will be blocked.
• IPS
To prevent the intrusion of zombies, Trojan horses, and worms, IPS must be deployed on
the firewall. The IPS is implemented by referencing IPS profiles in security policies. In
this example, we need to reference an IPS profile in each security policy (except those
for the local zone) to perform IPS inspection on all traffic that is allowed by security
policies.
This example uses the predefined IPS profile, ids, which only alerts on attack packets
without blocking them. If the requirement on security is not very high, profile ids is
recommended to reduce the risk of IPS false positive. If the requirement on security is
relatively high, predefined IPS profile default is recommended to block intrusion
behavior.
• Attack defense
To protect intranet servers and users from network attacks, SYN flood attack defense and
some single-packet attack defense functions need to be enabled.
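The zone-based policy model planned above, where interzone rules are checked and traffic matching no rule is blocked by the implicit deny, can be sketched as a toy model. The rule set and field names below are invented for illustration and are not the firewall's actual policy engine.

```python
# Invented example rules: trust users may reach isp1; isp1 users may reach
# the intranet web server port only.
RULES = [
    {"src_zone": "trust", "dst_zone": "isp1", "action": "permit"},
    {"src_zone": "isp1", "dst_zone": "trust", "dst_port": 80, "action": "permit"},
]

def check(packet: dict) -> str:
    # first matching rule decides; every non-action field must match
    for rule in RULES:
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return "deny"  # implicit policy: traffic matching no rule is blocked

print(check({"src_zone": "trust", "dst_zone": "isp1"}))   # permit
print(check({"src_zone": "isp2", "dst_zone": "trust"}))   # deny (implicit)
```

The final `return "deny"` line is the whole point: anything not explicitly permitted between zones never gets through.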
10.2.3 NAT Planning
• Source NAT
To ensure that intranet users can access the Internet through limited public IP addresses,
source network address translation (source NAT) function must be deployed on the
firewall. When an outbound packet reaches the firewall, its source address will be
translated into a public IP address, and its source port will be randomly translated into an
ephemeral port through NAT. In this way, one public IP address can be used by multiple
intranet users simultaneously to access the Internet.
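The address-and-port translation just described can be sketched as a minimal model: many private hosts share one public IP, and the randomly chosen public port keys the reverse mapping for replies. The `SourceNat` class and the addresses used are illustrative assumptions, not the firewall's actual NAT implementation.

```python
import random

class SourceNat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.forward = {}   # (private_ip, private_port) -> public_port
        self.reverse = {}   # public_port -> (private_ip, private_port)

    def translate_out(self, priv_ip: str, priv_port: int):
        key = (priv_ip, priv_port)
        if key not in self.forward:
            port = random.randint(1024, 65535)
            while port in self.reverse:            # pick an unused ephemeral port
                port = random.randint(1024, 65535)
            self.forward[key] = port
            self.reverse[port] = key
        return self.public_ip, self.forward[key]   # outbound packet's new source

    def translate_in(self, public_port: int):
        return self.reverse[public_port]           # map the reply back to the host

nat = SourceNat("200.1.1.1")
ip, port = nat.translate_out("10.1.20.5", 51000)
assert nat.translate_in(port) == ("10.1.20.5", 51000)
```

Because each session occupies a distinct public port, one public address serves many intranet users simultaneously, which is exactly the property the planning relies on.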
• NAT Server
To make servers available for users of each ISP, the NAT server function needs to be
deployed on the firewall to translate the servers' private addresses into public IP
addresses assigned by the ISPs.
The user usually uses the domain name to access an intranet server. Therefore, a DNS
server is deployed in the server area to translate the domain name into a public IP
address. By deploying the intelligent DNS function on the firewall, the domain name of
the server is resolved into the public IP address assigned by the ISP that serves the user
who requested the server. In this way, the server access latency is minimized and user
experience is optimized.
• NAT ALG
When the NAT function and the forwarding of multi-channel protocol packets, such as
FTP packets, are enabled on the firewall, the NAT ALG function must be enabled. In this
example, multi-channel protocols, such as FTP, SIP, H323, MGCP, and RTSP are used.
Therefore, NAT ALG must be enabled for them.
Although NAT ALG and ASPF differ in their implementation principles and functions, they are
configured with the same commands.
10.2.4 Bandwidth Management Planning
To limit the P2P traffic of ISP1 and ISP2 links, the bandwidth management function needs to
be deployed to control the traffic based on applications.
In this example, the firewall manages the bandwidth through traffic profile and traffic policy.
• A traffic profile defines available bandwidth resources that can be assigned to managed
objects and is referenced in a traffic policy. In this example, the total bandwidth of the
traffic profile is configured as no more than 300 Mbit/s, and the bandwidth of each IP
address as no more than 1 Mbit/s.
• A traffic policy defines the bandwidth management objects and the traffic actions, and
references a traffic profile. In this example, the object defined in the traffic policy is P2P
traffic, the action is traffic limiting, and the previously configured traffic profile is
referenced. In this manner, the traffic policy can limit P2P traffic.
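The two-level limit just planned, 1 Mbit/s per IP address inside an overall 300 Mbit/s budget, can be sketched with token buckets. This is a conceptual model under stated assumptions; `TokenBucket` and `P2PLimiter` are invented names, and the firewall's actual traffic-profile mechanism is not necessarily implemented this way.

```python
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # refill rate in bits per second
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, bits: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= bits:
            self.tokens -= bits
            return True
        return False

class P2PLimiter:
    PER_IP = 1_000_000       # 1 Mbit/s per IP address
    TOTAL = 300_000_000      # 300 Mbit/s overall

    def __init__(self):
        self.total = TokenBucket(self.TOTAL, self.TOTAL)
        self.per_ip = {}

    def permit(self, src_ip: str, packet_bits: int) -> bool:
        # a packet must fit within BOTH its host's budget and the shared budget
        bucket = self.per_ip.setdefault(src_ip, TokenBucket(self.PER_IP, self.PER_IP))
        return bucket.allow(packet_bits) and self.total.allow(packet_bits)

limiter = P2PLimiter()
print(limiter.permit("192.168.0.2", 12_000))  # True: within both budgets
```

The per-IP bucket is checked first, so one heavy P2P user exhausts only its own 1 Mbit/s allowance and cannot starve the shared 300 Mbit/s pool on its own.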
10.2.5 Network Management Planning
The eSight log server can collect, query, and display the firewall's log reports. The
session logs on the firewall can be used to query the address information before and after
NAT, and the IPS logs and attack defense logs on the firewall can be used to check for
network attacks and intrusions.
10.3 Configuration Procedure
Step 1 Configure interface IP addresses, and add the interfaces to security zones.
# Configure the IP address of each interface.
[Dr. WoW's comment] Normally, the mask length of the IP addresses assigned by ISPs is 30 bits. It is recommended that an interface description or alias be configured to indicate the purpose of the interface.
<FW> system-view
[FW] interface GigabitEthernet 2/0/0
[FW-GigabitEthernet2/0/0] ip address 218.1.1.1 255.255.255.252
[FW-GigabitEthernet2/0/0] description cernet
[FW-GigabitEthernet2/0/0] quit
[FW] interface GigabitEthernet 1/0/1
[FW-GigabitEthernet1/0/1] ip address 200.1.1.1 255.255.255.252
[FW-GigabitEthernet1/0/1] description isp1
[FW-GigabitEthernet1/0/1] quit
[FW] interface GigabitEthernet 1/0/2
[FW-GigabitEthernet1/0/2] ip address 202.1.1.1 255.255.255.252
[FW-GigabitEthernet1/0/2] description isp2
[FW-GigabitEthernet1/0/2] quit
[FW] interface GigabitEthernet 1/0/3
[FW-GigabitEthernet1/0/3] ip address 172.16.1.1 255.255.255.252
[FW-GigabitEthernet1/0/3] description campus
[FW-GigabitEthernet1/0/3] quit
# Create security zones isp1, isp2, and cernet, and add the interfaces to corresponding security
zones.
[Dr. WoW's comment] When the firewall needs to be connected to multiple ISPs, a security
zone must be created for each interface connecting to an ISP, and the name of the security
zone should suggest the network of the zone.
[FW] firewall zone name isp1
[FW-zone-isp1] set priority 15
[FW-zone-isp1] add interface GigabitEthernet 1/0/1
[FW-zone-isp1] quit
[FW] firewall zone name isp2
[FW-zone-isp2] set priority 20
[FW-zone-isp2] add interface GigabitEthernet 1/0/2
[FW-zone-isp2] quit
[FW] firewall zone name cernet
[FW-zone-cernet] set priority 25
[FW-zone-cernet] add interface GigabitEthernet 2/0/0
[FW-zone-cernet] quit
[FW] firewall zone trust
[FW-zone-trust] add interface GigabitEthernet 1/0/3
[FW-zone-trust] quit
Step 2 Configure IP-Link to detect whether the ISP links are in normal state.
[FW] ip-link check enable
[FW] ip-link 1 destination 218.1.1.2 interface GigabitEthernet2/0/0
[FW] ip-link 2 destination 200.1.1.2 interface GigabitEthernet1/0/1
[FW] ip-link 3 destination 202.1.1.2 interface GigabitEthernet1/0/2
[Dr. WoW's comment] When a link monitored by IP-Link is faulty, the static route or
policy-based route bound to it will become invalid.
Step 3 Configure static routes to ensure IP connectivity.
# Create a default route and set the next hop to the IP address assigned by CERNET to ensure
the traffic not matching any other route can be forwarded through the CERNET link.
[FW] ip route-static 0.0.0.0 0.0.0.0 218.1.1.2
# Create a static route and set the destination IP address to the intranet and the next hop to the
address of the intranet switch, to ensure that the traffic from the Internet can reach the
intranet.
[FW] ip route-static 10.1.0.0 255.255.0.0 172.16.1.2
[Dr. WoW's comment] Configure both outbound and inbound static routes on the gateway.
Generally, at least one outbound default route will be configured.
Step 4 Configure ISP routing function.
1. Obtain the latest destination IP addresses from each ISP.
2. Edit the CSV file for each ISP in the following format. The figure below is for reference only; the addresses provided by local ISPs may differ.
3. Import all CSV files.
4. Run the following commands to load the CSV files and specify the next hops.
[FW] isp set filename cernet.csv GigabitEthernet 2/0/0 next-hop 218.1.1.2 track ip-link 1
[FW] isp set filename isp1.csv GigabitEthernet 1/0/1 next-hop 200.1.1.2 track ip-link 2
[FW] isp set filename isp2.csv GigabitEthernet 1/0/2 next-hop 202.1.1.2 track ip-link 3
[Dr. WoW's comment] The ISP routing function actually issues specific routes in a batch,
with the addresses in the ISP files as the destination IP addresses and the addresses
specified by configuration commands as the next-hops. ISP routing can ensure that the
traffic destined for the specific ISP destination IP address is forwarded through
corresponding ISP link.
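Conceptually, the batch issuance described in the comment above can be sketched as follows (illustrative Python, not device code; the two-column network,mask CSV layout is an assumption made for illustration):

```python
import csv, io

# Illustrative sketch: turn an ISP address file into a batch of static
# routes sharing one next hop -- essentially what
# "isp set filename ... next-hop ..." does on the firewall.
def load_isp_routes(csv_text, next_hop):
    routes = []
    for network, mask in csv.reader(io.StringIO(csv_text)):
        routes.append((network.strip(), mask.strip(), next_hop))
    return routes

# Hypothetical ISP1 address file contents (addresses made up).
isp1_csv = "60.0.0.0,255.0.0.0\n123.112.0.0,255.240.0.0\n"
for net, mask, nh in load_isp_routes(isp1_csv, "200.1.1.2"):
    print(f"ip route-static {net} {mask} {nh}")
```

Each CSV row becomes one specific route, so traffic destined for ISP1's address space leaves via the ISP1 link rather than the default route.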
Step 5 Configure policy-based routes to ensure the traffic from specific intranet users (IP addresses)
can be forwarded through specified links (interfaces).
[Dr. WoW's comment] Policy-based routing is source IP address-based. (An advanced ACL
can certainly also select routes based on destination address.) For instance, Internet access
users in the library (network 10.1.2.0/24) can only access the Internet through CERNET. The
policy-based routing is configured using Huawei classic classifier and behavior (CB) pair
configuration.
[FW] acl number 2000
[FW-acl-basic-2000] rule permit source 10.1.2.0 0.0.0.255
[FW-acl-basic-2000] quit
[FW] traffic classifier classlb
[FW-classifier-classlb] if-match acl 2000
[FW-classifier-classlb] quit
[FW] traffic behavior behaviorlb
[FW-behavior-behaviorlb] redirect ip-nexthop 218.1.1.2 interface GigabitEthernet 2/0/0 track ip-link 1
[FW-behavior-behaviorlb] quit
[FW] traffic policy policylb
[FW-trafficpolicy-policylb] classifier classlb behavior behaviorlb
[FW-trafficpolicy-policylb] quit
[FW] interface GigabitEthernet 1/0/3
[FW-GigabitEthernet1/0/3] traffic-policy policylb inbound
[FW-GigabitEthernet1/0/3] quit
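The classifier/behavior pair above implements, in effect, the following decision (illustrative Python sketch, not firewall code; the fallback next hop stands for whatever the routing table would otherwise choose):

```python
import ipaddress

# Illustrative sketch of the policy-based route: traffic whose source
# falls in 10.1.2.0/24 (matched by ACL 2000) is redirected to the
# CERNET next hop; all other traffic follows the ordinary routing table.
LIBRARY_NET = ipaddress.ip_network("10.1.2.0/24")
CERNET_NEXT_HOP = "218.1.1.2"

def select_next_hop(src_ip, routed_next_hop):
    if ipaddress.ip_address(src_ip) in LIBRARY_NET:
        return CERNET_NEXT_HOP      # policy-based route takes precedence
    return routed_next_hop          # fall back to the routing table

print(select_next_hop("10.1.2.50", "200.1.1.2"))  # library user -> CERNET
print(select_next_hop("10.1.3.50", "200.1.1.2"))  # other user -> routing table
```

This is why PBR is checked before the routing table: the source address alone can override the destination-based route.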
Step 6 Configure security policies and IPS to detect intrusion behavior of the allowed traffic.
# Configure an outbound security policy for the trust-isp1 interzone to enable intranet users to
access the Internet through ISP1. Reference predefined profile ids when configuring security
policies to detect intrusion behavior.
[FW] policy interzone trust isp1 outbound
[FW-policy-interzone-trust-isp1-outbound] policy 0
[FW-policy-interzone-trust-isp1-outbound-0] action permit
[FW-policy-interzone-trust-isp1-outbound-0] profile ips ids
[FW-policy-interzone-trust-isp1-outbound-0] quit
[FW-policy-interzone-trust-isp1-outbound] quit
# Configure an inbound security policy for the trust-isp1 interzone to enable internet users to
access the intranet library server (only HTTP and FTP services are enabled) and the DNS
through ISP1. Reference predefined profile ids when configuring security policies to detect
intrusion behavior.
[FW] policy interzone trust isp1 inbound
[FW-policy-interzone-trust-isp1-inbound] policy 0
[FW-policy-interzone-trust-isp1-inbound-0] policy destination 10.1.10.10 0.0.0.0
[FW-policy-interzone-trust-isp1-inbound-0] policy service service-set http ftp
[FW-policy-interzone-trust-isp1-inbound-0] action permit
[FW-policy-interzone-trust-isp1-inbound-0] profile ips ids
[FW-policy-interzone-trust-isp1-inbound-0] quit
[FW-policy-interzone-trust-isp1-inbound] policy 1
[FW-policy-interzone-trust-isp1-inbound-1] policy destination 10.1.10.20 0.0.0.0
[FW-policy-interzone-trust-isp1-inbound-1] policy service service-set dns
[FW-policy-interzone-trust-isp1-inbound-1] action permit
[FW-policy-interzone-trust-isp1-inbound-1] profile ips ids
[FW-policy-interzone-trust-isp1-inbound-1] quit
[FW-policy-interzone-trust-isp1-inbound] quit
# Configure an outbound security policy for the trust-isp2 interzone to enable intranet users to
access the internet through ISP2. Reference predefined profile ids when configuring security
policies to detect intrusion behavior.
[FW] policy interzone trust isp2 outbound
[FW-policy-interzone-trust-isp2-outbound] policy 0
[FW-policy-interzone-trust-isp2-outbound-0] action permit
[FW-policy-interzone-trust-isp2-outbound-0] profile ips ids
[FW-policy-interzone-trust-isp2-outbound-0] quit
[FW-policy-interzone-trust-isp2-outbound] quit
# Configure an inbound security policy for the trust-isp2 interzone to enable internet users to
access the intranet library server (only HTTP and FTP services are enabled) and the DNS
through ISP2. Reference predefined profile ids when configuring security policies to detect
intrusion behavior.
[FW] policy interzone trust isp2 inbound
[FW-policy-interzone-trust-isp2-inbound] policy 0
[FW-policy-interzone-trust-isp2-inbound-0] policy destination 10.1.10.10 0.0.0.0
[FW-policy-interzone-trust-isp2-inbound-0] policy service service-set http ftp
[FW-policy-interzone-trust-isp2-inbound-0] action permit
[FW-policy-interzone-trust-isp2-inbound-0] profile ips ids
[FW-policy-interzone-trust-isp2-inbound-0] quit
[FW-policy-interzone-trust-isp2-inbound] policy 1
[FW-policy-interzone-trust-isp2-inbound-1] policy destination 10.1.10.20 0.0.0.0
[FW-policy-interzone-trust-isp2-inbound-1] policy service service-set dns
[FW-policy-interzone-trust-isp2-inbound-1] action permit
[FW-policy-interzone-trust-isp2-inbound-1] profile ips ids
[FW-policy-interzone-trust-isp2-inbound-1] quit
[FW-policy-interzone-trust-isp2-inbound] quit
# Configure an outbound security policy for the trust-cernet interzone to enable intranet users
to access the Internet through CERNET. Reference predefined profile ids when configuring
security policies to detect intrusion behavior.
[FW] policy interzone trust cernet outbound
[FW-policy-interzone-trust-cernet-outbound] policy 0
[FW-policy-interzone-trust-cernet-outbound-0] action permit
[FW-policy-interzone-trust-cernet-outbound-0] profile ips ids
[FW-policy-interzone-trust-cernet-outbound-0] quit
[FW-policy-interzone-trust-cernet-outbound] quit
# Configure inbound security policies for the trust-cernet interzone to enable Internet users to
access the intranet library server (only HTTP and FTP services are enabled) and the DNS
through the CERNET. Reference predefined profile ids when configuring security policies to
detect intrusion behavior.
[FW] policy interzone trust cernet inbound
[FW-policy-interzone-trust-cernet-inbound] policy 0
[FW-policy-interzone-trust-cernet-inbound-0] policy destination 10.1.10.10 0.0.0.0
[FW-policy-interzone-trust-cernet-inbound-0] policy service service-set http ftp
[FW-policy-interzone-trust-cernet-inbound-0] action permit
[FW-policy-interzone-trust-cernet-inbound-0] profile ips ids
[FW-policy-interzone-trust-cernet-inbound-0] quit
[FW-policy-interzone-trust-cernet-inbound] policy 1
[FW-policy-interzone-trust-cernet-inbound-1] policy destination 10.1.10.20 0.0.0.0
[FW-policy-interzone-trust-cernet-inbound-1] policy service service-set dns
[FW-policy-interzone-trust-cernet-inbound-1] action permit
[FW-policy-interzone-trust-cernet-inbound-1] profile ips ids
[FW-policy-interzone-trust-cernet-inbound-1] quit
[FW-policy-interzone-trust-cernet-inbound] quit
# Configure outbound and inbound security policies for the local-trust interzone and enable
the connection between the firewall and the log server.
[FW] policy interzone local trust outbound
[FW-policy-interzone-local-trust-outbound] policy 0
[FW-policy-interzone-local-trust-outbound-0] policy destination 10.1.10.30 0.0.0.0
[FW-policy-interzone-local-trust-outbound-0] action permit
[FW-policy-interzone-local-trust-outbound-0] quit
[FW-policy-interzone-local-trust-outbound] quit
[FW] policy interzone local trust inbound
[FW-policy-interzone-local-trust-inbound] policy 0
[FW-policy-interzone-local-trust-inbound-0] policy source 10.1.10.30 0.0.0.0
[FW-policy-interzone-local-trust-inbound-0] action permit
[FW-policy-interzone-local-trust-inbound-0] quit
[FW-policy-interzone-local-trust-inbound] quit
[Dr. WoW's comment] When the firewall serves as the gateway and the network does not have high security requirements, the action in the interzone security policy can be set to permit. Referencing IPS profiles in security policies prevents intrusions in the interzone traffic. The predefined IPS profiles on the firewall are default and ids: profile default detects and blocks intrusion behavior, while profile ids detects and alerts on intrusion behavior without blocking it.
# Enable the IPS function and set scheduled online update of the signature database.
[FW] ips enable
[FW] update schedule ips-sdb enable
[FW] update schedule weekly sun 02:00
[FW] update schedule sa-sdb enable
[FW] update schedule weekly sun 03:00
[FW] undo update confirm ips-sdb enable
[FW] undo update confirm sa-sdb enable
# Configure the DNS server address of the firewall so that the firewall can access the security
center platform through domain name and download the signature database.
[FW] dns resolve
[FW] dns server 202.106.0.20
Step 7 Configure source NAT so that multiple intranet users can simultaneously access the Internet
through the shared public IP address.
# Configure source NAT for the trust-isp1 interzone. The addresses in the NAT address pool
are obtained from ISP1.
[FW] nat address-group isp1
[FW-address-group-isp1] mode pat
[FW-address-group-isp1] section 200.1.1.3 200.1.1.5
[FW-address-group-isp1] quit
[FW] nat-policy interzone trust isp1 outbound
[FW-nat-policy-interzone-trust-isp1-outbound] policy 0
[FW-nat-policy-interzone-trust-isp1-outbound-0] action source-nat
[FW-nat-policy-interzone-trust-isp1-outbound-0] address-group isp1
# Configure source NAT for the trust-isp2 interzone. The addresses in the NAT address pool
are obtained from ISP2.
[FW] nat address-group isp2
[FW-address-group-isp2] mode pat
[FW-address-group-isp2] section 202.1.1.3 202.1.1.5
[FW-address-group-isp2] quit
[FW] nat-policy interzone trust isp2 outbound
[FW-nat-policy-interzone-trust-isp2-outbound] policy 0
[FW-nat-policy-interzone-trust-isp2-outbound-0] action source-nat
[FW-nat-policy-interzone-trust-isp2-outbound-0] address-group isp2
# Configure source NAT for the trust-cernet interzone. The addresses in the NAT address pool
are obtained from CERNET.
[FW] nat address-group cernet
[FW-address-group-cernet] mode pat
[FW-address-group-cernet] section 218.1.1.3 218.1.1.5
[FW-address-group-cernet] quit
[FW] nat-policy interzone trust cernet outbound
[FW-nat-policy-interzone-trust-cernet-outbound] policy 0
[FW-nat-policy-interzone-trust-cernet-outbound-0] action source-nat
[FW-nat-policy-interzone-trust-cernet-outbound-0] address-group cernet
# Configure black-hole routes and advertise all public IP addresses in the NAT address pool.
[FW] ip route-static 200.1.1.3 32 NULL 0
[FW] ip route-static 200.1.1.4 32 NULL 0
[FW] ip route-static 200.1.1.5 32 NULL 0
[FW] ip route-static 202.1.1.3 32 NULL 0
[FW] ip route-static 202.1.1.4 32 NULL 0
[FW] ip route-static 202.1.1.5 32 NULL 0
[FW] ip route-static 218.1.1.3 32 NULL 0
[FW] ip route-static 218.1.1.4 32 NULL 0
[FW] ip route-static 218.1.1.5 32 NULL 0
[Dr. WoW's comment] A black-hole route must be configured for source NAT and NAT
Server.
Step 8 Configure zone-based NAT server to enable Internet users to access intranet servers.
# The private IP address of a server is generally mapped to public IP addresses of multiple ISPs.
The campus provides many servers for external access, but we will only take the library server (10.1.10.10) and the DNS server (10.1.10.20) as examples to illustrate how to configure the zone-based NAT server.
[FW] nat server 1 zone isp1 global 200.1.10.10 inside 10.1.10.10 description lb-isp1
[FW] nat server 2 zone isp2 global 202.1.10.10 inside 10.1.10.10 description lb-isp2
[FW] nat server 3 zone cernet global 218.1.10.10 inside 10.1.10.10 description lb-cernet
[FW] nat server 4 zone isp1 global 200.1.10.20 inside 10.1.10.20 description dns-isp1
[FW] nat server 5 zone isp2 global 202.1.10.20 inside 10.1.10.20 description dns-isp2
[FW] nat server 6 zone cernet global 218.1.10.20 inside 10.1.10.20 description dns-cernet
# Configure a black-hole route and advertise all public IP addresses used in NAT server.
[FW] ip route-static 200.1.10.10 32 NULL 0
[FW] ip route-static 202.1.10.10 32 NULL 0
[FW] ip route-static 218.1.10.10 32 NULL 0
[FW] ip route-static 200.1.10.20 32 NULL 0
[FW] ip route-static 202.1.10.20 32 NULL 0
[FW] ip route-static 218.1.10.20 32 NULL 0
Step 9 Configure intelligent DNS.
[Dr. WoW's comment] Intelligent DNS ensures that the domain name of an intranet server is
resolved into the address assigned by the ISP that serves the user to increase access speed. For
instance, when ISP1 users access the library server at private address 10.1.10.10, the domain
name of the server is resolved into public address 200.1.10.10, which is assigned by ISP1 and
mapped to private address 10.1.10.10.
# Configure intelligent DNS, and bind the server address assigned by each ISP with the
outgoing interface connected to the ISP.
[FW] dns-smart enable
[FW] dns-smart group 1
[FW-dns-smart-group-1] type single
[FW-dns-smart-group-1] description lb
[FW-dns-smart-group-1] real-server-ip 10.1.10.10
[FW-dns-smart-group-1] out-interface GigabitEthernet 2/0/0 map 218.1.10.10
[FW-dns-smart-group-1] out-interface GigabitEthernet 1/0/1 map 200.1.10.10
[FW-dns-smart-group-1] out-interface GigabitEthernet 1/0/2 map 202.1.10.10
[FW-dns-smart-group-1] quit
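The effect of the dns-smart mapping above can be sketched as follows (illustrative Python, not firewall code; the function models only the single library-server group configured here):

```python
# Illustrative sketch of intelligent DNS: the answer in a DNS response
# for the library server is rewritten to the public address mapped to
# whichever ISP-facing interface the response leaves through.
DNS_SMART_MAP = {
    "GigabitEthernet2/0/0": "218.1.10.10",  # CERNET
    "GigabitEthernet1/0/1": "200.1.10.10",  # ISP1
    "GigabitEthernet1/0/2": "202.1.10.10",  # ISP2
}

def rewrite_dns_answer(real_server_ip, out_interface):
    """Rewrite the resolved address based on the outgoing interface."""
    if real_server_ip == "10.1.10.10":       # the library server group
        return DNS_SMART_MAP.get(out_interface, real_server_ip)
    return real_server_ip

# A query answered toward ISP1 gets the ISP1-mapped public address.
print(rewrite_dns_answer("10.1.10.10", "GigabitEthernet 1/0/1".replace(" ", "")))
```

An ISP1 user therefore reaches the server via 200.1.10.10 over the ISP1 link, minimizing cross-ISP latency.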
Step 10 Configure NAT ALG.
[Dr. WoW's comment] The configuration commands for NAT ALG and ASPF are the same.
# Configure NAT ALG between the trust zone and other zones.
[FW] firewall interzone trust isp1
[FW-interzone-trust-isp1] detect ftp
[FW-interzone-trust-isp1] detect sip
[FW-interzone-trust-isp1] detect h323
[FW-interzone-trust-isp1] detect mgcp
[FW-interzone-trust-isp1] detect rtsp
[FW-interzone-trust-isp1] detect qq
[FW-interzone-trust-isp1] quit
[FW] firewall interzone trust isp2
[FW-interzone-trust-isp2] detect ftp
[FW-interzone-trust-isp2] detect sip
[FW-interzone-trust-isp2] detect h323
[FW-interzone-trust-isp2] detect mgcp
[FW-interzone-trust-isp2] detect rtsp
[FW-interzone-trust-isp2] detect qq
[FW-interzone-trust-isp2] quit
[FW] firewall interzone trust cernet
[FW-interzone-trust-cernet] detect ftp
[FW-interzone-trust-cernet] detect sip
[FW-interzone-trust-cernet] detect h323
[FW-interzone-trust-cernet] detect mgcp
[FW-interzone-trust-cernet] detect rtsp
[FW-interzone-trust-cernet] detect qq
[FW-interzone-trust-cernet] quit
Step 11 Enable attack defense to protect the campus network.
[FW] firewall defend land enable
[FW] firewall defend smurf enable
[FW] firewall defend fraggle enable
[FW] firewall defend winnuke enable
[FW] firewall defend source-route enable
[FW] firewall defend route-record enable
[FW] firewall defend time-stamp enable
[FW] firewall defend ping-of-death enable
[FW] firewall defend syn-flood enable
[FW] firewall defend syn-flood interface GigabitEthernet1/0/1 max-rate 24000 tcp-proxy auto
[FW] firewall defend syn-flood interface GigabitEthernet1/0/2 max-rate 24000 tcp-proxy auto
[Dr. WoW's comment] Generally, if there are no special network security requirements, the above attack defense functions suffice.
For SYN flood attack defense, the recommended threshold on a GE interface is 16,000 pps. All interfaces in this example are GE interfaces, but the threshold is set to 24,000 pps based on practical test results. In practice, a higher threshold value is often set first; the value is then gradually lowered, after observation, to a range in which attacks are well prevented and normal services are not affected.
Step 12 Configure bandwidth management.
[Dr. WoW's comment] To configure bandwidth management, a traffic profile (a set of traffic limiting parameters) is created and referenced in a traffic policy. Note that upload traffic is in the outbound direction and download traffic is in the inbound direction. In addition, it is recommended that P2P traffic be limited to 20% to 30% of the total bandwidth.
# Configure download and upload traffic profiles. The total bandwidth is limited to 300 Mbit/s and the per-IP bandwidth to 1 Mbit/s.
[FW] car-class p2p_all_download
[FW-car-class-p2p_all_download] car-mode per-ip
[FW-car-class-p2p_all_download] cir 1000
[FW-car-class-p2p_all_download] cir 300000 total
[FW-car-class-p2p_all_download] quit
[FW] car-class p2p_all_upload
[FW-car-class-p2p_all_upload] car-mode per-ip
[FW-car-class-p2p_all_upload] cir 1000
[FW-car-class-p2p_all_upload] cir 300000 total
[FW-car-class-p2p_all_upload] quit
# Configure inbound and outbound traffic policies in ISP1 zone to limit the P2P download
and upload traffic respectively.
[FW] car-policy zone isp1 inbound
[FW-car-policy-zone-isp1-inbound] policy 0
[FW-car-policy-zone-isp1-inbound-0] policy application category p2p
[FW-car-policy-zone-isp1-inbound-0] action car
[FW-car-policy-zone-isp1-inbound-0] car-class p2p_all_download
[FW-car-policy-zone-isp1-inbound-0] description p2p_limit_download
[FW-car-policy-zone-isp1-inbound-0] quit
[FW-car-policy-zone-isp1-inbound] quit
[FW] car-policy zone isp1 outbound
[FW-car-policy-zone-isp1-outbound] policy 0
[FW-car-policy-zone-isp1-outbound-0] policy application category p2p
[FW-car-policy-zone-isp1-outbound-0] action car
[FW-car-policy-zone-isp1-outbound-0] car-class p2p_all_upload
[FW-car-policy-zone-isp1-outbound-0] description p2p_limit_upload
[FW-car-policy-zone-isp1-outbound-0] quit
[FW-car-policy-zone-isp1-outbound] quit
# Configure inbound and outbound traffic policies in ISP2 zone to limit the P2P download
and upload traffic respectively.
[FW] car-policy zone isp2 inbound
[FW-car-policy-zone-isp2-inbound] policy 0
[FW-car-policy-zone-isp2-inbound-0] policy application category p2p
[FW-car-policy-zone-isp2-inbound-0] action car
[FW-car-policy-zone-isp2-inbound-0] car-class p2p_all_download
[FW-car-policy-zone-isp2-inbound-0] description p2p_limit_download
[FW-car-policy-zone-isp2-inbound-0] quit
[FW-car-policy-zone-isp2-inbound] quit
[FW] car-policy zone isp2 outbound
[FW-car-policy-zone-isp2-outbound] policy 0
[FW-car-policy-zone-isp2-outbound-0] policy application category p2p
[FW-car-policy-zone-isp2-outbound-0] action car
[FW-car-policy-zone-isp2-outbound-0] car-class p2p_all_upload
[FW-car-policy-zone-isp2-outbound-0] description p2p_limit_upload
[FW-car-policy-zone-isp2-outbound-0] quit
[FW-car-policy-zone-isp2-outbound] quit
Step 13 Enable system logging and NAT source tracing, and check the logs on the network
management system (eSight).
# Enable the firewall to send system logs (IPS and attack defense logs in this example) to the
log host (10.1.10.30).
[FW] info-center enable
[FW] engine log ips enable
[FW] info-center source ips channel loghost log level emergencies
[FW] info-center source ANTIATTACK channel loghost
[FW] info-center loghost 10.1.10.30
# Enable the firewall to send session logs to port 9002 of the log host (10.1.10.30), and
configure both inbound and outbound audit policies between the trust zone and the
isp1/isp2/cernet zone.
[Dr. WoW's comment] NAT source tracing is enabled to check the addresses before and after NAT. Our approach is to configure the audit function on the firewall to generate session logs, which are then exported to the log host. On the log host, we can query the session logs through eSight to see the addresses before and after NAT.
[FW] firewall log source 172.16.1.1 9002
[FW] firewall log host 2 10.1.10.30 9002
[FW] audit-policy interzone trust isp1 outbound
[FW-audit-policy-interzone-trust-isp1-outbound] policy 0
[FW-audit-policy-interzone-trust-isp1-outbound-0] action audit
[FW-audit-policy-interzone-trust-isp1-outbound-0] quit
[FW-audit-policy-interzone-trust-isp1-outbound] quit
[FW] audit-policy interzone trust isp1 inbound
[FW-audit-policy-interzone-trust-isp1-inbound] policy 0
[FW-audit-policy-interzone-trust-isp1-inbound-0] action audit
[FW-audit-policy-interzone-trust-isp1-inbound-0] quit
[FW-audit-policy-interzone-trust-isp1-inbound] quit
[FW] audit-policy interzone trust isp2 outbound
[FW-audit-policy-interzone-trust-isp2-outbound] policy 0
[FW-audit-policy-interzone-trust-isp2-outbound-0] action audit
[FW-audit-policy-interzone-trust-isp2-outbound-0] quit
[FW-audit-policy-interzone-trust-isp2-outbound] quit
[FW] audit-policy interzone trust isp2 inbound
[FW-audit-policy-interzone-trust-isp2-inbound] policy 0
[FW-audit-policy-interzone-trust-isp2-inbound-0] action audit
[FW-audit-policy-interzone-trust-isp2-inbound-0] quit
[FW-audit-policy-interzone-trust-isp2-inbound] quit
[FW] audit-policy interzone trust cernet outbound
[FW-audit-policy-interzone-trust-cernet-outbound] policy 0
[FW-audit-policy-interzone-trust-cernet-outbound-0] action audit
[FW-audit-policy-interzone-trust-cernet-outbound-0] quit
[FW-audit-policy-interzone-trust-cernet-outbound] quit
[FW] audit-policy interzone trust cernet inbound
[FW-audit-policy-interzone-trust-cernet-inbound] policy 0
[FW-audit-policy-interzone-trust-cernet-inbound-0] action audit
[FW-audit-policy-interzone-trust-cernet-inbound-0] quit
[FW-audit-policy-interzone-trust-cernet-inbound] quit
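The source tracing that these audit policies enable amounts to a reverse lookup over session logs, roughly like the sketch below (illustrative Python; the log field names are assumptions for illustration, not the firewall's actual log format):

```python
# Illustrative sketch of NAT source tracing: each session log records
# the source address/port before and after NAT, so a public address
# seen on the Internet can be traced back to the intranet host.
# The log entries and field names here are made up for illustration.
session_logs = [
    {"src": "10.1.2.50:1025", "nat_src": "200.1.1.3:2048", "dst": "8.8.8.8:53"},
    {"src": "10.1.3.60:1026", "nat_src": "200.1.1.4:2049", "dst": "1.1.1.1:80"},
]

def trace_source(nat_addr_port):
    """Return the pre-NAT source for a post-NAT address:port, if logged."""
    for log in session_logs:
        if log["nat_src"] == nat_addr_port:
            return log["src"]
    return None

print(trace_source("200.1.1.3:2048"))  # pre-NAT source of that session
```

This is the lookup eSight performs when an administrator queries a post-NAT address in the session log reports.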
# In this example, eSight is installed on the log host (10.1.10.30). To check logs on eSight,
SNMP must be configured on the firewall so that the firewall can communicate with eSight.
The SNMP parameters on eSight must be the same as those on the firewall.
[FW] snmp-agent sys-info v3
[FW] snmp-agent group v3 NMS1 privacy
[FW] snmp-agent usm-user v3 admin1 NMS1 authentication-mode md5 Admin@123 privacy-mode aes256 Admin@123
# After eSight is configured, choose Business > Security Business > LogCenter > Log
Analysis > Session Analysis > IPv4 Session Query to check session logs.
----End
10.4 Highlights
- What is amazing about this example is that it involves almost all the classical features of a firewall, including security policy, NAT, ASPF, attack defense, IPS, and bandwidth management (application-based and IP-based bandwidth throttling). If readers find it difficult to choose among the various firewall functions, this example is a good reference.
- Another highlight of this example is that it shows the firewall's capability to serve as a gateway. One of the most important features of a gateway is ISP routing. ISP routing is based on destination IP addresses, policy-based routing is based on source IP addresses, and intelligent DNS allows Internet users to access intranet servers through optimal links. In addition, as a gateway, a firewall has the advantage over a router of better NAT and security functions.
- The last highlight of this example is that it illustrates the NAT source tracing function of a firewall (a firewall configured with audit policies sends session logs to the network management system, on which the customer can check the addresses before and after NAT). This function is especially useful during inspections by higher-level and related departments.
11 Firewall Deployment on Media Company Network
11.1 Networking Requirements
As shown in Figure 11-1, the network of the media company is connected to two ISPs through
two links of 10 Gbit/s each to provide broadband access to its users on the metropolitan area
network (MAN). Servers are also deployed in the server area to provide hosting services to
internal and external users.
Two firewalls are deployed at the egress of the media company's intranet to the Internet as
egress gateways (USG9560 V300R001C20). The uplink interfaces of the two firewalls are
connected to the two ISPs through the aggregation switches; the downlink interfaces are
connected to the MAN through the core routers. Both firewalls are connected to the servers
through the switch in the server area.
Figure 11-1 Networking diagram of a media company network egress
(Diagram: FW1 and FW2 form an active/standby pair linked by a heartbeat connection on GE1/0/5 (10.0.7.1/24 and 10.0.7.2/24). Their uplinks run through egress aggregation switches to ISP1 (1.1.1.2/30) via VRRP group 1 (virtual IP 1.1.1.1/30) on GE1/0/1 and to ISP2 (2.2.2.2/30) via VRRP group 2 (virtual IP 2.2.2.1/30) on GE1/0/2. GE1/0/3 connects through core routers to the MAN, and GE1/0/4 connects to the server area, which hosts a web server (10.0.10.10/24), an FTP server (10.0.10.11/24), a DNS server (10.0.10.20/24), and a log server (10.0.10.30/24).)
The requirements are as follows:
- The two firewalls are deployed in active/standby backup mode to improve network availability.
- Users on the MAN can simultaneously access the Internet through the NAT function of the firewalls, and the validity of the addresses in the NAT address pool is ensured through NAT detection.
- To improve intranet users' broadband access experience, traffic destined for specific destination addresses should be forwarded by the firewalls through specified ISP links. For example, traffic to the servers hosted by ISP1 should be forwarded through the link provided by ISP1, and traffic to the servers hosted by ISP2 should be forwarded through the link provided by ISP2. For traffic destined for other ISP networks, the firewall should select the link with the minimum latency.
- The firewalls should be able to identify and control P2P and web video traffic, and forward such traffic through the ISP2 link.
- The servers hosted by the media company can be accessed by intranet users and by Internet users of multiple ISPs. A DNS server is also deployed on the network to provide domain name resolution. The domain name of a server should be resolved into the public IP address assigned by the ISP that serves the requesting user, so as to increase access speed.
- The firewalls should be able to protect the intranet from DDoS attacks and alert on network intrusion behavior such as zombies, Trojan horses, and worms.
- The source tracing function should be provided to trace Internet access, including the IP addresses before and after NAT and IM login and logout logs.
11.2 Network Planning
11.2.1 Hot Standby Planning
Since each ISP provides only one link, which cannot be directly connected to both firewalls, egress aggregation switches are deployed between the firewalls and the ISP links. The egress aggregation switch on each ISP link connects that link to both firewalls. OSPF runs on the firewalls and downstream routers, forming a typical hot standby network.
To use public IP addresses efficiently, the uplink interfaces of the firewalls can use private IP addresses, but the VRRP groups must use public addresses assigned by the ISPs to communicate with the ISP networks.
11.2.2 Multi-ISP Routing Planning
When an egress gateway (firewall in this example) has multiple outgoing interfaces, the
matching sequence in routing is: policy-based routes (PBRs), specific routes, and default
routes. The routing plan is as follows:
- Application-specific PBR
  P2P and web video traffic is bandwidth-hungry and needs to be forwarded through specified links using application-specific PBR.
  The PBR on the firewalls in this example is implemented using a traffic classifier, a traffic behavior, and a traffic policy. We must define the P2P and web video applications as the matching conditions in the traffic classifier, set the ISP2 link as the next hop in the traffic behavior, and associate the traffic classifier with the traffic behavior in the QoS policy. In this way, the P2P and web video traffic will be forwarded through the ISP2 link.
- ISP routing
  ISP routing issues specific ISP routes in batches by specifying ISP address files and the next hops so that the traffic destined for an ISP can be forwarded through the link connected to that ISP. Address files of major ISPs are predefined on the firewalls, and you can modify the address files or create new ones when necessary.
- Intelligent routing (load balancing mode with minimum latency)
  In the case of dual egress, besides specific routes (for ISP routing), we also configure two equal-cost routes to match the traffic that does not match any specific route. In this case, we can enable the intelligent routing function to select optimal links for forwarding such traffic over the equal-cost routes.
The firewall supports three algorithms of intelligent routing:
−
Least-delay load balancing: When multiple equal-cost routes are available, the
firewall will probe the delays of the routes to a same destination and select the link
with least delay to forward the traffic.
−
Weight-based load balancing: Weights are assigned to the outgoing interfaces of the
equal-cost routes so that traffic can be allocated to the routes based on the weights.
− Bandwidth-based load balancing: Traffic is allocated to the equal-cost routes in proportion to the bandwidth of the links.
The least-delay load balancing algorithm is used for intelligent routing in this example.
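The three load-balancing algorithms above boil down to simple link-selection rules. The following is a toy sketch, not the firewall's implementation; the link names, delays, and weights are made-up sample values:

```python
def pick_least_delay(links):
    """Least-delay load balancing: choose the equal-cost link with the
    smallest probed delay."""
    return min(links, key=lambda link: link["delay_ms"])

def pick_weighted(links, flow_hash):
    """Weight-based load balancing: distribute flows across links in
    proportion to their configured weights."""
    total = sum(link["weight"] for link in links)
    point = flow_hash % total
    for link in links:
        if point < link["weight"]:
            return link
        point -= link["weight"]

links = [
    {"name": "ISP1", "delay_ms": 42.0, "weight": 2},
    {"name": "ISP2", "delay_ms": 17.5, "weight": 1},
]

print(pick_least_delay(links)["name"])   # ISP2: lower probed delay
```

Bandwidth-based load balancing works like the weighted case, with link bandwidths taking the place of the configured weights.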
11.2.3 Bandwidth Management Planning
P2P and web video traffic is bandwidth-hungry; therefore, in section 11.2.2 Multi-ISP Routing Planning, we planned to forward it over the ISP2 link using application-specific policy-based routes. However, if such traffic is not controlled, other services on the ISP2 link may be interrupted. Therefore, the bandwidth management (BWM) function must be enabled to implement application-specific traffic control.
In this example, the firewall manages the bandwidth through traffic profile and traffic policy.

• A traffic profile defines the bandwidth resources available to managed objects and is referenced in a traffic policy. In this example, the maximum bandwidth in the traffic profile is 3 Gbit/s.

• A traffic policy defines the bandwidth management objects and the action, and references a traffic profile. In this example, the objects defined in the traffic policy are P2P and web video traffic, the action is traffic limiting, and the previously configured traffic profile is referenced. In this way, the bandwidth used by P2P and web video traffic will not exceed 3 Gbit/s.
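A maximum-bandwidth limit of this kind is typically enforced with a token bucket. Below is a minimal sketch of the mechanism; the rates and bucket depth are tiny illustrative values, not the firewall's internal parameters:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter, the classic mechanism behind a
    maximum-bandwidth traffic profile."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # bucket depth
        self.tokens = burst_bytes         # bucket starts full
        self.last = 0.0

    def allow(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True                   # packet conforms: forward it
        return False                      # over the limit: drop or delay

bucket = TokenBucket(rate_bps=8000, burst_bytes=1000)   # tiny demo rate
print(bucket.allow(0.0, 600))   # True: bucket starts full
print(bucket.allow(0.0, 600))   # False: only 400 tokens remain
print(bucket.allow(1.0, 600))   # True: one second refills the bucket
```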
11.2.4 Security Planning

• Security Zone
In this example, there are five interfaces on the firewall. Since the five interfaces are connected to different networks, they should be added to different security zones.

− GE1/0/1, connected to ISP1, is added to the isp1 zone, which needs to be created with its priority set to 10.
− GE1/0/2, connected to ISP2, is added to the isp2 zone, which needs to be created with its priority set to 15.
− GE1/0/3, connected to the core router, is added to the trust zone, a predefined security zone of the firewall with a priority of 85.
− GE1/0/4, connected to the server zone, is added to the dmz zone, a predefined security zone of the firewall with a priority of 50.
− GE1/0/5, the heartbeat interface connecting the two firewalls, is added to the heart zone, which needs to be created with its priority set to 75.
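Zone priorities determine the direction of interzone traffic: traffic from a higher-priority zone to a lower-priority zone is outbound, and the reverse is inbound. A toy sketch using the priorities planned above (the local zone, representing the firewall itself, has the predefined highest priority of 100; the function name is ours):

```python
# Security zone priorities from the plan above.
ZONE_PRIORITY = {
    "local": 100,   # the firewall itself (predefined highest priority)
    "trust": 85, "heart": 75, "dmz": 50, "isp2": 15, "isp1": 10,
}

def direction(src_zone, dst_zone):
    """Higher-priority zone to lower-priority zone is outbound;
    the reverse is inbound."""
    if ZONE_PRIORITY[src_zone] > ZONE_PRIORITY[dst_zone]:
        return "outbound"
    return "inbound"

print(direction("trust", "isp1"))   # outbound: intranet users going to ISP1
print(direction("isp1", "dmz"))     # inbound: Internet users reaching servers
```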
• Security policy
To enable communication and access control between the zones, security policies must be deployed to:
− Allow intranet users in the trust zone to access the isp1 and isp2 zones.
− Allow users in the trust, isp1, and isp2 zones to access the specified ports on the servers in the dmz zone, including the web, FTP, and DNS servers.
− Allow the firewall in the local zone to set up a connection to the log server in the dmz zone.
− The firewall in this example is the USG9560, a high-end firewall. The HRP and VGMP packets of high-end firewalls are exempt from security policy control.
To ensure interzone communication for multi-channel protocols, such as FTP, application-specific packet filter (ASPF) must be configured between the zones.
The implicit security policy action of the firewall is deny. Traffic that does not match any security policy will be blocked.

• IPS
To prevent intrusions by zombies, Trojan horses, and worms, IPS must be deployed on the firewall. IPS is implemented by referencing IPS profiles in security policies. In this example, we reference an IPS profile in each security policy so that all traffic allowed by the security policies is inspected.
This example uses the predefined IPS profile ids, which only generates alerts on attack packets without blocking them. If the security requirements are not very high, the ids profile is recommended to reduce the risk of IPS false positives. If the security requirements are relatively high, the predefined IPS profile default is recommended to block intrusions.

• Attack defense
To protect intranet servers and users from network attacks, SYN flood attack defense and
some single-packet attack defense functions need to be enabled.
11.2.5 NAT Planning

• Source NAT
To ensure that intranet users can access the Internet through limited public IP addresses,
network address translation (NAT) must be deployed on the firewall. When an outbound
packet reaches the firewall, its source address will be translated into a public IP address,
and its source port will be randomly translated into an ephemeral port. In this way, one
public IP address can be used by multiple intranet users simultaneously to access the
Internet.
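Conceptually, source NAT with port address translation (PAT) maintains a mapping table from private (address, port) pairs to public (address, port) pairs. The toy model below illustrates this; sequential port allocation is a simplification (real devices pick ephemeral ports randomly), and the addresses are sample values:

```python
class PatTranslator:
    """Toy source NAT (PAT) table: each private (IP, port) pair is mapped
    to a public IP and port from the pool, so many intranet hosts can
    share one public address."""

    def __init__(self, public_ips):
        self.public_ips = public_ips
        self.table = {}     # (priv_ip, priv_port) -> (pub_ip, pub_port)
        self.used = set()   # allocated public (IP, port) pairs

    def translate(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key in self.table:            # existing session keeps its mapping
            return self.table[key]
        for pub_ip in self.public_ips:
            for pub_port in range(1024, 65536):
                if (pub_ip, pub_port) not in self.used:
                    self.used.add((pub_ip, pub_port))
                    self.table[key] = (pub_ip, pub_port)
                    return self.table[key]
        raise RuntimeError("NAT address pool exhausted")

nat = PatTranslator(["1.1.1.10"])
print(nat.translate("10.0.0.5", 40000))   # ('1.1.1.10', 1024)
print(nat.translate("10.0.0.6", 40000))   # ('1.1.1.10', 1025): same IP, new port
```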

• NAT Address Detection
If the public IP address mapped to an intranet user is blocked by the ISP because of the user's improper activity on the Internet, other users sharing that public IP address will be unable to access the Internet. In this case, NAT address detection can be deployed on the firewall to check whether the public IP addresses in the NAT address pool have been blocked. If no traffic is returned from the Internet to a public IP address in the pool, or if the traffic volume from the Internet to a public IP address stays below the threshold within a specified period, that IP address is excluded from the NAT address pool so that it is no longer used for address translation.
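The detection logic amounts to filtering the pool by observed return traffic. A minimal sketch; the addresses, byte counters, and threshold are made-up sample values:

```python
def usable_pool(pool, inbound_bytes, threshold):
    """NAT address detection sketch: keep only pool addresses whose return
    traffic from the Internet meets the threshold for the sampling period;
    the rest are treated as blocked by the ISP and excluded."""
    return [ip for ip in pool if inbound_bytes.get(ip, 0) >= threshold]

pool = ["1.1.1.10", "1.1.1.11", "1.1.1.12"]
counters = {"1.1.1.10": 52000, "1.1.1.11": 0, "1.1.1.12": 4100}  # inbound bytes
print(usable_pool(pool, counters, threshold=1000))   # ['1.1.1.10', '1.1.1.12']
```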

• NAT ALG
When the NAT function is enabled and multi-channel protocols, such as FTP, need to be
used, NAT ALG must be enabled. In this example, multi-channel protocols, such as FTP,
SIP, H323, MGCP, and RTSP are used.
The configuration commands for NAT ALG and ASPF are the same, despite their different implementation principles and functions.
11.2.6 Intranet Server Planning
The media company provides hosting services for customers, such as schools, office networks
of companies, and web portals.
The hosted servers are deployed in the dmz of the intranet, so the server addresses are private
IP addresses. The servers, however, provide services for users on the Internet, so they must
offer public IP addresses for intranet and Internet users to access. Therefore, NAT server must
be deployed on the firewall to translate the servers' private addresses into public IP addresses
assigned by different ISPs for their users.
A DNS server is deployed in the server area to translate the domain names of the servers into
public IP addresses. By deploying the intelligent DNS on the firewall, the domain name of a
server is resolved into the public IP address assigned by the ISP that serves the user who
requested the server. In this way, the server access latency is minimized and user experience is
optimized.
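Intelligent DNS resolution can be sketched as a per-ISP lookup keyed by the requester's ISP. The domain name and the first-octet classifier below are purely illustrative; the real firewall matches requesters against the ISP address files:

```python
# Hypothetical smart-DNS table for the web server 10.0.10.10: each domain
# maps to the public address published through each ISP.
SMART_DNS = {"www.example.com": {"isp1": "1.1.1.15", "isp2": "2.2.2.15"}}

def classify(client_ip):
    """Toy classifier: in reality the firewall matches the requester's
    address against the ISP address files; here we just test the prefix."""
    return "isp1" if client_ip.startswith("1.") else "isp2"

def resolve(domain, client_ip):
    """Return the public address assigned by the requester's own ISP,
    so the user reaches the server over the shortest path."""
    return SMART_DNS[domain][classify(client_ip)]

print(resolve("www.example.com", "1.9.8.7"))   # 1.1.1.15 for an ISP1 user
print(resolve("www.example.com", "2.3.4.5"))   # 2.2.2.15 for an ISP2 user
```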
11.2.7 Log Planning
The eSight log server collects, queries, and displays log reports for the firewall. Session logs on the firewall can be used to query the address information before and after NAT, and IM online and offline logs can be used to check and analyze users' IM online and offline activity.
11.3 Configuration Procedure
Step 1 Configure interface IP addresses, and add the interfaces to security zones.
# Configure the interface IP addresses on FW1.
<FW1> system-view
[FW1] interface GigabitEthernet 1/0/1
[FW1-GigabitEthernet1/0/1] ip address 10.0.1.1 24
[FW1-GigabitEthernet1/0/1] quit
[FW1] interface GigabitEthernet 1/0/2
[FW1-GigabitEthernet1/0/2] ip address 10.0.2.1 24
[FW1-GigabitEthernet1/0/2] quit
[FW1] interface GigabitEthernet 1/0/3
[FW1-GigabitEthernet1/0/3] ip address 10.0.3.1 24
[FW1-GigabitEthernet1/0/3] quit
[FW1] interface GigabitEthernet 1/0/4
[FW1-GigabitEthernet1/0/4] ip address 10.0.5.1 24
[FW1-GigabitEthernet1/0/4] quit
[FW1] interface GigabitEthernet 1/0/5
[FW1-GigabitEthernet1/0/5] ip address 10.0.7.1 24
[FW1-GigabitEthernet1/0/5] quit
# Configure the interface IP addresses on FW2.
<FW2> system-view
[FW2] interface GigabitEthernet 1/0/1
[FW2-GigabitEthernet1/0/1] ip address 10.0.1.2 24
[FW2-GigabitEthernet1/0/1] quit
[FW2] interface GigabitEthernet 1/0/2
[FW2-GigabitEthernet1/0/2] ip address 10.0.2.2 24
[FW2-GigabitEthernet1/0/2] quit
[FW2] interface GigabitEthernet 1/0/3
[FW2-GigabitEthernet1/0/3] ip address 10.0.4.1 24
[FW2-GigabitEthernet1/0/3] quit
[FW2] interface GigabitEthernet 1/0/4
[FW2-GigabitEthernet1/0/4] ip address 10.0.6.1 24
[FW2-GigabitEthernet1/0/4] quit
[FW2] interface GigabitEthernet 1/0/5
[FW2-GigabitEthernet1/0/5] ip address 10.0.7.2 24
[FW2-GigabitEthernet1/0/5] quit
# Create security zones on FW1 and add FW1 interfaces to security zones. The security zones
on FW2 are configured in the same way.
[FW1] firewall zone name isp1
[FW1-zone-isp1] set priority 10
[FW1-zone-isp1] add interface GigabitEthernet1/0/1
[FW1-zone-isp1] quit
[FW1] firewall zone name isp2
[FW1-zone-isp2] set priority 15
[FW1-zone-isp2] add interface GigabitEthernet1/0/2
[FW1-zone-isp2] quit
[FW1] firewall zone trust
[FW1-zone-trust] add interface GigabitEthernet1/0/3
[FW1-zone-trust] quit
[FW1] firewall zone dmz
[FW1-zone-dmz] add interface GigabitEthernet1/0/4
[FW1-zone-dmz] quit
[FW1] firewall zone name heart
[FW1-zone-heart] set priority 75
[FW1-zone-heart] add interface GigabitEthernet1/0/5
[FW1-zone-heart] quit
Step 2 Configure default routes.
# Configure IP-Link to detect whether the ISP links are normal.
[FW1] ip-link check enable
[FW1] ip-link 1 destination 1.1.1.2 interface GigabitEthernet1/0/1
[FW1] ip-link 2 destination 2.2.2.2 interface GigabitEthernet1/0/2
# Create two default routes and set the next hops to the IP addresses assigned by ISP1 and
ISP2.
[FW1] ip route-static 0.0.0.0 0.0.0.0 1.1.1.2 track ip-link 1
[FW1] ip route-static 0.0.0.0 0.0.0.0 2.2.2.2 track ip-link 2
[Dr. WoW's comment] When a link monitored by IP-Link is faulty, the static route or
policy-based route bound to it will become invalid.
# The IP-Link and default route configurations on FW2 are the same as those on FW1.
Step 3 Configure ISP routing function.
1. Obtain the latest destination IP address ranges from each ISP.
2. Edit a csv file for each ISP in the following format. The figure below is for reference only; the addresses provided by the local ISPs may differ.
3. Import all csv files.
4. Run the following commands to load the csv files and specify the next hops.
[FW1] isp set filename isp1.csv GigabitEthernet 1/0/1 next-hop 1.1.1.2 track ip-link 1
[FW1] isp set filename isp2.csv GigabitEthernet 1/0/2 next-hop 2.2.2.2 track ip-link 2
5. The ISP routing configurations on FW2 are the same.
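The effect of loading an ISP address file can be sketched as expanding each address range into prefixes and binding them to the ISP's next hop, which is roughly what the batch `isp set filename` step achieves. The two-column csv layout and addresses below are hypothetical; actual ISP files may differ:

```python
import csv
import io
import ipaddress

# Hypothetical csv layout: start address, end address per line.
ISP1_CSV = "1.0.0.0,1.0.255.255\n1.2.0.0,1.3.255.255\n"

def load_isp_routes(text, next_hop):
    """Expand each address range into CIDR prefixes and pair each prefix
    with the ISP link's next hop."""
    routes = []
    for start, end in csv.reader(io.StringIO(text)):
        for net in ipaddress.summarize_address_range(
                ipaddress.ip_address(start), ipaddress.ip_address(end)):
            routes.append((str(net), next_hop))
    return routes

for prefix, nh in load_isp_routes(ISP1_CSV, "1.1.1.2"):
    print(prefix, "->", nh)   # 1.0.0.0/16 and 1.2.0.0/15 via 1.1.1.2
```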
Step 4 Configure policy-based routes.
# Configure an ACL on FW1 to permit the packets from intranet users.
[FW1] acl number 2000
[FW1-acl-basic-2000] rule permit source 10.0.0.0 0.0.0.255
[FW1-acl-basic-2000] quit
# Configure a user-defined application group on FW1, and add P2P and web video
applications to the group.
[FW1] sa
[FW1-sa] app-set p2p_web_video
[FW1-sa-p2p_web_video] category p2p
[FW1-sa-p2p_web_video] category web_video
[FW1-sa-p2p_web_video] quit
[FW1-sa] quit
# Configure a traffic classifier on FW1 to match the P2P and web video traffic from the
intranet.
[FW1] traffic classifier class1
[FW1-classifier-class1] if-match acl 2000 app-set p2p_web_video
[FW1-classifier-class1] quit
# Configure a traffic behavior on FW1 to redirect the traffic to ISP2.
[FW1] traffic behavior behavior1
[FW1-behavior-behavior1] redirect ip-nexthop 2.2.2.2 interface GigabitEthernet 1/0/2
[FW1-behavior-behavior1] quit
# Define a traffic policy on FW1 to associate the traffic class with the traffic behavior.
[FW1] traffic policy policy1
[FW1-trafficpolicy-policy1] classifier class1 behavior behavior1
[FW1-trafficpolicy-policy1] quit
# Apply the traffic policy on FW1's GE1/0/3 interface to enable policy-based routing function.
[FW1] interface GigabitEthernet 1/0/3
[FW1-GigabitEthernet1/0/3] traffic-policy policy1 inbound
[FW1-GigabitEthernet1/0/3] quit
[Dr. WoW's comment] To enable policy-based routing, the traffic policy must be applied to the interface that receives the traffic to be controlled, with the direction set to inbound.
# The configurations of policy-based routes on FW2 are the same.
Step 5 Configure OSPF.
# Configure open shortest path first (OSPF) on FW1 and advertise the networks connected to
the downlink interfaces.
[FW1] ospf 1
[FW1-ospf-1] area 0
[FW1-ospf-1-area-0.0.0.0] network 10.0.3.0 0.0.0.255
[FW1-ospf-1-area-0.0.0.0] network 10.0.5.0 0.0.0.255
[FW1-ospf-1-area-0.0.0.0] quit
[FW1-ospf-1] quit
# Configure OSPF on FW2 and advertise the networks connected to the downlink interfaces.
[FW2] ospf 1
[FW2-ospf-1] area 0
[FW2-ospf-1-area-0.0.0.0] network 10.0.4.0 0.0.0.255
[FW2-ospf-1-area-0.0.0.0] network 10.0.6.0 0.0.0.255
[FW2-ospf-1-area-0.0.0.0] quit
[FW2-ospf-1] quit
Step 6 Enable the hot standby function.
# Configure a VRRP group on FW1's uplink interfaces and set the state to Master.
[FW1] interface GigabitEthernet1/0/1
[FW1-GigabitEthernet1/0/1] vrrp vrid 1 virtual-ip 1.1.1.1 30 master
[FW1-GigabitEthernet1/0/1] quit
[FW1] interface GigabitEthernet1/0/2
[FW1-GigabitEthernet1/0/2] vrrp vrid 2 virtual-ip 2.2.2.1 30 master
[FW1-GigabitEthernet1/0/2] quit
# Configure a VGMP group on FW1 to monitor the downlink interfaces.
[FW1] hrp track interface GigabitEthernet1/0/3
[FW1] hrp track interface GigabitEthernet1/0/4
# Configure a heartbeat interface on FW1, and enable the hot standby function.
[FW1] hrp interface GigabitEthernet1/0/5 remote 10.0.7.2
[FW1] hrp enable
# Configure a VRRP group on FW2's uplink interfaces and set the state to Slave.
[FW2] interface GigabitEthernet1/0/1
[FW2-GigabitEthernet1/0/1] vrrp vrid 1 virtual-ip 1.1.1.1 30 slave
[FW2-GigabitEthernet1/0/1] quit
[FW2] interface GigabitEthernet1/0/2
[FW2-GigabitEthernet1/0/2] vrrp vrid 2 virtual-ip 2.2.2.1 30 slave
[FW2-GigabitEthernet1/0/2] quit
# Configure a VGMP group on FW2 to monitor the downlink interfaces.
[FW2] hrp track interface GigabitEthernet1/0/3
[FW2] hrp track interface GigabitEthernet1/0/4
# Configure a heartbeat interface on FW2, and enable the hot standby function.
[FW2] hrp interface GigabitEthernet1/0/5 remote 10.0.7.1
[FW2] hrp enable
[Dr. WoW's comment] Once hot standby is enabled, most configurations are automatically backed up from the master to the slave. Therefore, in the following steps, only the configurations on FW1, the master device, are illustrated (except where otherwise specified).
Step 7 Enable intelligent routing function.
# Set the intelligent routing mode on FW1 to least latency.
HRP_M[FW1] ucmp group ucmp1 mode proportion-of-intelligent-control
# Set the mask length for intelligent routes on FW1 to 24.
HRP_M[FW1] ucmp group ucmp1 intelligent-control-mask 24
# Add GigabitEthernet1/0/1 on FW1 to the UCMP1 group, and set the source IP address for
remote host health check to 10.0.1.1.
HRP_M[FW1] interface GigabitEthernet 1/0/1
HRP_M[FW1-GigabitEthernet1/0/1] ucmp-group ucmp1
HRP_M[FW1-GigabitEthernet1/0/1] healthcheck source-ip 10.0.1.1
HRP_M[FW1-GigabitEthernet1/0/1] quit
# Add GigabitEthernet1/0/2 on FW1 to the UCMP1 group, and set the source IP address for
remote host health check to 10.0.2.1.
HRP_M[FW1] interface GigabitEthernet 1/0/2
HRP_M[FW1-GigabitEthernet1/0/2] ucmp-group ucmp1
HRP_M[FW1-GigabitEthernet1/0/2] healthcheck source-ip 10.0.2.1
HRP_M[FW1-GigabitEthernet1/0/2] quit
[Dr. WoW's comment] All the intelligent routing configurations support hot standby backup except the healthcheck source-ip command, which must be executed manually on both firewalls.
# Set the health check source IP address on the GigabitEthernet1/0/1 interface of FW2 to
10.0.1.2.
HRP_S[FW2] interface GigabitEthernet 1/0/1
HRP_S[FW2-GigabitEthernet1/0/1] healthcheck source-ip 10.0.1.2
HRP_S[FW2-GigabitEthernet1/0/1] quit
# Set the health check source IP address on the GigabitEthernet1/0/2 interface of FW2 to 10.0.2.2.
HRP_S[FW2] interface GigabitEthernet 1/0/2
HRP_S[FW2-GigabitEthernet1/0/2] healthcheck source-ip 10.0.2.2
HRP_S[FW2-GigabitEthernet1/0/2] quit
Step 8 Configure bandwidth management.
[Dr. WoW's comment] To configure bandwidth management, a traffic profile (a set of traffic
limiting actions) is created and referenced in a traffic policy. Pay attention that upload is in the
outbound direction and download is in the inbound direction. In addition, it is recommended
that P2P traffic be limited to 20% to 30% of the total bandwidth.
# Limit the total bandwidth of the traffic profile to 3 Gbit/s.
HRP_M[FW1] car-class p2p_web_video
HRP_M[FW1-car-class-p2p_web_video] cir 3000000 total
HRP_M[FW1-car-class-p2p_web_video] quit
# Configure inbound and outbound traffic policies in ISP1 zone to limit the download and
upload traffic of the P2P and web video, respectively.
HRP_M[FW1] car-policy zone isp1 inbound
HRP_M[FW1-car-policy-zone-isp1-inbound] policy 0
HRP_M[FW1-car-policy-zone-isp1-inbound-0] policy application category p2p
HRP_M[FW1-car-policy-zone-isp1-inbound-0] policy application category web_video
HRP_M[FW1-car-policy-zone-isp1-inbound-0] action car
HRP_M[FW1-car-policy-zone-isp1-inbound-0] car-class p2p_web_video
HRP_M[FW1-car-policy-zone-isp1-inbound-0] description limit_download
HRP_M[FW1-car-policy-zone-isp1-inbound-0] quit
HRP_M[FW1-car-policy-zone-isp1-inbound] quit
HRP_M[FW1] car-policy zone isp1 outbound
HRP_M[FW1-car-policy-zone-isp1-outbound] policy 0
HRP_M[FW1-car-policy-zone-isp1-outbound-0] policy application category p2p
HRP_M[FW1-car-policy-zone-isp1-outbound-0] policy application category web_video
HRP_M[FW1-car-policy-zone-isp1-outbound-0] action car
HRP_M[FW1-car-policy-zone-isp1-outbound-0] car-class p2p_web_video
HRP_M[FW1-car-policy-zone-isp1-outbound-0] description limit_upload
HRP_M[FW1-car-policy-zone-isp1-outbound-0] quit
HRP_M[FW1-car-policy-zone-isp1-outbound] quit
# Configure inbound and outbound traffic policies in ISP2 zone to limit the download and
upload traffic of the P2P and web video, respectively.
HRP_M[FW1] car-policy zone isp2 inbound
HRP_M[FW1-car-policy-zone-isp2-inbound] policy 0
HRP_M[FW1-car-policy-zone-isp2-inbound-0] policy application category p2p
HRP_M[FW1-car-policy-zone-isp2-inbound-0] policy application category web_video
HRP_M[FW1-car-policy-zone-isp2-inbound-0] action car
HRP_M[FW1-car-policy-zone-isp2-inbound-0] car-class p2p_web_video
HRP_M[FW1-car-policy-zone-isp2-inbound-0] description limit_download
HRP_M[FW1-car-policy-zone-isp2-inbound-0] quit
HRP_M[FW1-car-policy-zone-isp2-inbound] quit
HRP_M[FW1] car-policy zone isp2 outbound
HRP_M[FW1-car-policy-zone-isp2-outbound] policy 0
HRP_M[FW1-car-policy-zone-isp2-outbound-0] policy application category p2p
HRP_M[FW1-car-policy-zone-isp2-outbound-0] policy application category web_video
HRP_M[FW1-car-policy-zone-isp2-outbound-0] action car
HRP_M[FW1-car-policy-zone-isp2-outbound-0] car-class p2p_web_video
HRP_M[FW1-car-policy-zone-isp2-outbound-0] description limit_upload
HRP_M[FW1-car-policy-zone-isp2-outbound-0] quit
HRP_M[FW1-car-policy-zone-isp2-outbound] quit
Step 9 Configure security policies and content security function.
# Configure an outbound security policy for the trust-isp1 interzone to enable intranet users to
access the Internet through ISP1 and to detect intrusions.
HRP_M[FW1] policy interzone trust isp1 outbound
HRP_M[FW1-policy-interzone-trust-isp1-outbound] policy 0
HRP_M[FW1-policy-interzone-trust-isp1-outbound-0] action permit
HRP_M[FW1-policy-interzone-trust-isp1-outbound-0] profile ips ids
HRP_M[FW1-policy-interzone-trust-isp1-outbound-0] quit
HRP_M[FW1-policy-interzone-trust-isp1-outbound] quit
# Configure an outbound security policy for the trust-isp2 interzone to enable intranet users to
access the Internet through ISP2 and to detect intrusions.
HRP_M[FW1] policy interzone trust isp2 outbound
HRP_M[FW1-policy-interzone-trust-isp2-outbound] policy 0
HRP_M[FW1-policy-interzone-trust-isp2-outbound-0] action permit
HRP_M[FW1-policy-interzone-trust-isp2-outbound-0] profile ips ids
HRP_M[FW1-policy-interzone-trust-isp2-outbound-0] quit
HRP_M[FW1-policy-interzone-trust-isp2-outbound] quit
# Configure inbound security policies for the isp1-dmz interzone to enable the Internet users
to access the web, FTP, and DNS servers in the dmz zone through the ISP1 link and to detect
intrusions.
HRP_M[FW1] policy interzone isp1 dmz inbound
HRP_M[FW1-policy-interzone-isp1-dmz-inbound] policy 0
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-0] policy destination 10.0.10.10 0.0.0.255
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-0] policy service service-set http
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-0] action permit
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-0] profile ips ids
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-0] quit
HRP_M[FW1-policy-interzone-isp1-dmz-inbound] policy 1
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-1] policy destination 10.0.10.11 0.0.0.255
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-1] policy service service-set ftp
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-1] action permit
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-1] profile ips ids
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-1] quit
HRP_M[FW1-policy-interzone-isp1-dmz-inbound] policy 2
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-2] policy destination 10.0.10.20 0.0.0.255
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-2] policy service service-set dns
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-2] action permit
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-2] profile ips ids
HRP_M[FW1-policy-interzone-isp1-dmz-inbound-2] quit
HRP_M[FW1-policy-interzone-isp1-dmz-inbound] quit
# Configure inbound security policies for the isp2-dmz interzone to enable the Internet users
to access the web, FTP, and DNS servers in the dmz zone through the ISP2 link and to detect
intrusions.
HRP_M[FW1] policy interzone isp2 dmz inbound
HRP_M[FW1-policy-interzone-isp2-dmz-inbound] policy 0
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-0] policy destination 10.0.10.10 0.0.0.255
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-0] policy service service-set http
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-0] action permit
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-0] profile ips ids
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-0] quit
HRP_M[FW1-policy-interzone-isp2-dmz-inbound] policy 1
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-1] policy destination 10.0.10.11 0.0.0.255
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-1] policy service service-set ftp
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-1] action permit
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-1] profile ips ids
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-1] quit
HRP_M[FW1-policy-interzone-isp2-dmz-inbound] policy 2
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-2] policy destination 10.0.10.20 0.0.0.255
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-2] policy service service-set dns
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-2] action permit
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-2] profile ips ids
HRP_M[FW1-policy-interzone-isp2-dmz-inbound-2] quit
HRP_M[FW1-policy-interzone-isp2-dmz-inbound] quit
# Configure outbound security policies for the trust-dmz interzone to enable the intranet users
to access the web, FTP, and DNS servers in the dmz zone and to detect intrusions.
HRP_M[FW1] policy interzone trust dmz outbound
HRP_M[FW1-policy-interzone-trust-dmz-outbound] policy 0
HRP_M[FW1-policy-interzone-trust-dmz-outbound-0] policy destination 10.0.10.10 0.0.0.255
HRP_M[FW1-policy-interzone-trust-dmz-outbound-0] policy service service-set http
HRP_M[FW1-policy-interzone-trust-dmz-outbound-0] action permit
HRP_M[FW1-policy-interzone-trust-dmz-outbound-0] profile ips ids
HRP_M[FW1-policy-interzone-trust-dmz-outbound-0] quit
HRP_M[FW1-policy-interzone-trust-dmz-outbound] policy 1
HRP_M[FW1-policy-interzone-trust-dmz-outbound-1] policy destination 10.0.10.11 0.0.0.255
HRP_M[FW1-policy-interzone-trust-dmz-outbound-1] policy service service-set ftp
HRP_M[FW1-policy-interzone-trust-dmz-outbound-1] action permit
HRP_M[FW1-policy-interzone-trust-dmz-outbound-1] profile ips ids
HRP_M[FW1-policy-interzone-trust-dmz-outbound-1] quit
HRP_M[FW1-policy-interzone-trust-dmz-outbound] policy 2
HRP_M[FW1-policy-interzone-trust-dmz-outbound-2] policy destination 10.0.10.20 0.0.0.255
HRP_M[FW1-policy-interzone-trust-dmz-outbound-2] policy service service-set dns
HRP_M[FW1-policy-interzone-trust-dmz-outbound-2] action permit
HRP_M[FW1-policy-interzone-trust-dmz-outbound-2] profile ips ids
HRP_M[FW1-policy-interzone-trust-dmz-outbound-2] quit
HRP_M[FW1-policy-interzone-trust-dmz-outbound] quit
# Configure outbound and inbound security policies for the local-dmz interzone to enable the
connection between the firewall and the log server.
HRP_M[FW1] policy interzone local dmz outbound
HRP_M[FW1-policy-interzone-local-dmz-outbound] policy 0
HRP_M[FW1-policy-interzone-local-dmz-outbound-0] policy destination 10.0.10.30 0.0.0.255
HRP_M[FW1-policy-interzone-local-dmz-outbound-0] action permit
HRP_M[FW1-policy-interzone-local-dmz-outbound-0] quit
HRP_M[FW1-policy-interzone-local-dmz-outbound] quit
HRP_M[FW1] policy interzone local dmz inbound
HRP_M[FW1-policy-interzone-local-dmz-inbound] policy 0
HRP_M[FW1-policy-interzone-local-dmz-inbound-0] policy source 10.0.10.30 0.0.0.255
HRP_M[FW1-policy-interzone-local-dmz-inbound-0] action permit
HRP_M[FW1-policy-interzone-local-dmz-inbound-0] quit
HRP_M[FW1-policy-interzone-local-dmz-inbound] quit
# Enable the IPS function and set scheduled online update of the signature database.
HRP_M[FW1] ips enable
HRP_M[FW1] update schedule ips-sdb enable
HRP_M[FW1] update schedule weekly sun 02:00
HRP_M[FW1] update schedule sa-sdb enable
HRP_M[FW1] update schedule weekly sun 03:00
HRP_M[FW1] undo update confirm ips-sdb enable
HRP_M[FW1] undo update confirm sa-sdb enable
# Configure the DNS server address of the firewall so that the firewall can access the security
center platform's domain name and download the signature database.
HRP_M[FW1] dns resolve
HRP_M[FW1] dns server 202.106.0.20
Step 10 Enable attack defense function.
HRP_M[FW1] firewall defend land enable
HRP_M[FW1] firewall defend smurf enable
HRP_M[FW1] firewall defend fraggle enable
HRP_M[FW1] firewall defend winnuke enable
HRP_M[FW1] firewall defend source-route enable
HRP_M[FW1] firewall defend route-record enable
HRP_M[FW1] firewall defend time-stamp enable
HRP_M[FW1] firewall defend ping-of-death enable
HRP_M[FW1] firewall defend syn-flood enable
HRP_M[FW1] firewall defend syn-flood interface GigabitEthernet1/0/1 max-rate 100000 tcp-proxy auto
HRP_M[FW1] firewall defend syn-flood interface GigabitEthernet1/0/2 max-rate 100000 tcp-proxy auto
[Dr. WoW's comment] The above attack defense functions would suffice for general security
purposes.
For SYN flood attack defense, the recommended threshold on 10GE interfaces is 100,000 pps.
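Rate-based SYN flood detection of this kind can be sketched as a sliding-window counter per interface. A toy model with a tiny threshold for demonstration (the real recommended threshold above is 100,000 pps; the class and method names are ours):

```python
from collections import deque

class SynRateMonitor:
    """Sketch of per-interface SYN flood detection: count SYNs in a sliding
    one-second window and report when the rate exceeds max-rate, the point
    at which the firewall would engage a defense such as TCP proxying."""

    def __init__(self, max_rate):
        self.max_rate = max_rate
        self.arrivals = deque()       # timestamps of SYNs in the window

    def on_syn(self, now):
        self.arrivals.append(now)
        # Drop arrivals older than the one-second window.
        while self.arrivals and now - self.arrivals[0] >= 1.0:
            self.arrivals.popleft()
        return len(self.arrivals) > self.max_rate   # True: flood suspected

mon = SynRateMonitor(max_rate=3)      # tiny threshold for the demo
print([mon.on_syn(t) for t in (0.0, 0.1, 0.2, 0.3)])   # fourth SYN trips it
```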
Step 11 Configure source NAT.
# Configure the NAT address pool isp1 on FW1.
HRP_M[FW1] nat address-group isp1
HRP_M[FW1-address-group-isp1] mode pat
HRP_M[FW1-address-group-isp1] section 1.1.1.10 1.1.1.12
HRP_M[FW1-address-group-isp1] quit
# Configure NAT policy for the trust-isp1 interzone on FW1 to translate the source IP
addresses of the packets from the trust zone into the IP addresses in the address pool isp1.
HRP_M[FW1] nat-policy interzone trust isp1 outbound
HRP_M[FW1-nat-policy-interzone-trust-isp1-outbound] policy 0
HRP_M[FW1-nat-policy-interzone-trust-isp1-outbound-0] action source-nat
HRP_M[FW1-nat-policy-interzone-trust-isp1-outbound-0] address-group isp1
# Configure the NAT address pool isp2 on FW1.
HRP_M[FW1] nat address-group isp2
HRP_M[FW1-address-group-isp2] mode pat
HRP_M[FW1-address-group-isp2] section 2.2.2.10 2.2.2.12
HRP_M[FW1-address-group-isp2] quit
# Configure NAT policy for the trust-isp2 interzone on FW1 to translate the source IP
addresses of the packets from the trust zone into the IP addresses in the address pool isp2.
HRP_M[FW1] nat-policy interzone trust isp2 outbound
HRP_M[FW1-nat-policy-interzone-trust-isp2-outbound] policy 0
HRP_M[FW1-nat-policy-interzone-trust-isp2-outbound-0] action source-nat
HRP_M[FW1-nat-policy-interzone-trust-isp2-outbound-0] address-group isp2
# Configure black-hole routes on FW1 for the public IP addresses in the NAT address pools to prevent routing loops.
HRP_M[FW1] ip route-static 1.1.1.10 32 NULL 0
HRP_M[FW1] ip route-static 1.1.1.11 32 NULL 0
HRP_M[FW1] ip route-static 1.1.1.12 32 NULL 0
HRP_M[FW1] ip route-static 2.2.2.10 32 NULL 0
HRP_M[FW1] ip route-static 2.2.2.11 32 NULL 0
HRP_M[FW1] ip route-static 2.2.2.12 32 NULL 0
# Configure black-hole routes on FW2 for the public IP addresses in the NAT address pools to prevent routing loops.
HRP_S[FW2] ip route-static 1.1.1.10 32 NULL 0
HRP_S[FW2] ip route-static 1.1.1.11 32 NULL 0
HRP_S[FW2] ip route-static 1.1.1.12 32 NULL 0
HRP_S[FW2] ip route-static 2.2.2.10 32 NULL 0
HRP_S[FW2] ip route-static 2.2.2.11 32 NULL 0
HRP_S[FW2] ip route-static 2.2.2.12 32 NULL 0
[Dr. WoW's comment] Route configuration cannot be backed up, so the blackhole routes must
be configured on both firewalls.
Step 12 Configure NAT ALG.
[Dr. WoW's comment] The configuration commands for NAT ALG and ASPF are the same.
# Configure NAT ALG between the trust zone and other zones.
HRP_M[FW1] firewall interzone trust isp1
HRP_M[FW1-interzone-trust-isp1] detect ftp
HRP_M[FW1-interzone-trust-isp1] detect sip
HRP_M[FW1-interzone-trust-isp1] detect h323
HRP_M[FW1-interzone-trust-isp1] detect mgcp
HRP_M[FW1-interzone-trust-isp1] detect rtsp
HRP_M[FW1-interzone-trust-isp1] detect qq
HRP_M[FW1-interzone-trust-isp1] quit
HRP_M[FW1] firewall interzone trust isp2
HRP_M[FW1-interzone-trust-isp2] detect ftp
HRP_M[FW1-interzone-trust-isp2] detect sip
HRP_M[FW1-interzone-trust-isp2] detect h323
HRP_M[FW1-interzone-trust-isp2] detect mgcp
HRP_M[FW1-interzone-trust-isp2] detect rtsp
HRP_M[FW1-interzone-trust-isp2] detect qq
HRP_M[FW1-interzone-trust-isp2] quit
HRP_M[FW1] firewall interzone trust dmz
HRP_M[FW1-interzone-trust-dmz] detect ftp
HRP_M[FW1-interzone-trust-dmz] detect sip
HRP_M[FW1-interzone-trust-dmz] detect h323
HRP_M[FW1-interzone-trust-dmz] detect mgcp
HRP_M[FW1-interzone-trust-dmz] detect rtsp
HRP_M[FW1-interzone-trust-dmz] detect qq
HRP_M[FW1-interzone-trust-dmz] quit
# Configure NAT ALG for the dmz-isp1 and dmz-isp2 interzones.
HRP_M[FW1] firewall interzone dmz isp1
HRP_M[FW1-interzone-dmz-isp1] detect ftp
HRP_M[FW1-interzone-dmz-isp1] detect sip
HRP_M[FW1-interzone-dmz-isp1] detect h323
HRP_M[FW1-interzone-dmz-isp1] detect mgcp
HRP_M[FW1-interzone-dmz-isp1] detect rtsp
HRP_M[FW1-interzone-dmz-isp1] detect qq
HRP_M[FW1-interzone-dmz-isp1] quit
HRP_M[FW1] firewall interzone dmz isp2
HRP_M[FW1-interzone-dmz-isp2] detect ftp
HRP_M[FW1-interzone-dmz-isp2] detect sip
HRP_M[FW1-interzone-dmz-isp2] detect h323
HRP_M[FW1-interzone-dmz-isp2] detect mgcp
HRP_M[FW1-interzone-dmz-isp2] detect rtsp
HRP_M[FW1-interzone-dmz-isp2] detect qq
HRP_M[FW1-interzone-dmz-isp2] quit
Step 13 Configure NAT server and intelligent DNS.
# Configure NAT server function to map the private IP address of the web server to the public
IP addresses that could be accessed by ISP1 and ISP2 users.
HRP_M[FW1] nat server 1 zone isp1 global 1.1.1.15 inside 10.0.10.10
HRP_M[FW1] nat server 2 zone isp2 global 2.2.2.15 inside 10.0.10.10
# Configure NAT server function to map the private IP address of the FTP server to the public
IP addresses that could be accessed by ISP1 and ISP2 users.
HRP_M[FW1] nat server 3 zone isp1 global 1.1.1.16 inside 10.0.10.11
HRP_M[FW1] nat server 4 zone isp2 global 2.2.2.16 inside 10.0.10.11
# Configure NAT server function to map the private IP address of the DNS server to the
public IP addresses that could be accessed by ISP1 and ISP2 users.
HRP_M[FW1] nat server 5 zone isp1 global 1.1.1.17 inside 10.0.10.20
HRP_M[FW1] nat server 6 zone isp2 global 2.2.2.17 inside 10.0.10.20
# Configure intelligent DNS to ensure that the domain name of an intranet server is resolved
to the address assigned by the ISP that serves the user, increasing access speed. For instance,
when accessing the intranet web server 10.0.10.10, ISP1 users will obtain the address
assigned by ISP1 (1.1.1.15), while ISP2 users will obtain the address assigned by ISP2
(2.2.2.15).
HRP_M[FW1] dns-smart enable
HRP_M[FW1] dns-smart group 1 type single
HRP_M[FW1-dns-smart-group-1] real-server-ip 10.0.10.10
HRP_M[FW1-dns-smart-group-1] out-interface GigabitEthernet 1/0/1 map 1.1.1.15
HRP_M[FW1-dns-smart-group-1] out-interface GigabitEthernet 1/0/2 map 2.2.2.15
HRP_M[FW1-dns-smart-group-1] quit
HRP_M[FW1] dns-smart group 2 type single
HRP_M[FW1-dns-smart-group-2] real-server-ip 10.0.10.11
HRP_M[FW1-dns-smart-group-2] out-interface GigabitEthernet 1/0/1 map 1.1.1.16
HRP_M[FW1-dns-smart-group-2] out-interface GigabitEthernet 1/0/2 map 2.2.2.16
HRP_M[FW1-dns-smart-group-2] quit
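The intelligent DNS behavior configured above can be sketched as a lookup keyed by the ISP-facing interface that receives the query: the same name resolves to a different public address per link. This is a conceptual illustration only, not firewall code; the domain name `www.example.com` is a made-up placeholder, while the addresses come from this example.

```python
# Conceptual sketch of intelligent ("smart") DNS: the answer returned for a
# domain depends on which ISP-facing interface the DNS query arrived on.
# Mapping taken from the dns-smart configuration above: real server
# 10.0.10.10 -> 1.1.1.15 via the ISP1 link, 2.2.2.15 via the ISP2 link.
SMART_DNS = {
    ("www.example.com", "GigabitEthernet1/0/1"): "1.1.1.15",  # ISP1 users
    ("www.example.com", "GigabitEthernet1/0/2"): "2.2.2.15",  # ISP2 users
}

def resolve(domain: str, in_interface: str) -> str:
    """Return the public address mapped for the link the query came in on."""
    return SMART_DNS[(domain, in_interface)]

print(resolve("www.example.com", "GigabitEthernet1/0/1"))  # → 1.1.1.15
```

Because each user resolves the name to an address reachable through their own ISP, traffic stays on the optimal link in both directions.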
# Configure a black-hole route on FW1 and advertise the post-NAT public IP addresses.
HRP_M[FW1] ip route-static 1.1.1.15 32 NULL0
HRP_M[FW1] ip route-static 1.1.1.16 32 NULL0
HRP_M[FW1] ip route-static 1.1.1.17 32 NULL0
HRP_M[FW1] ip route-static 2.2.2.15 32 NULL0
HRP_M[FW1] ip route-static 2.2.2.16 32 NULL0
HRP_M[FW1] ip route-static 2.2.2.17 32 NULL0
# Configure a black-hole route on FW2 and advertise the post-NAT public IP addresses.
HRP_M[FW2] ip route-static 1.1.1.15 32 NULL0
HRP_M[FW2] ip route-static 1.1.1.16 32 NULL0
HRP_M[FW2] ip route-static 1.1.1.17 32 NULL0
HRP_M[FW2] ip route-static 2.2.2.15 32 NULL0
HRP_M[FW2] ip route-static 2.2.2.16 32 NULL0
HRP_M[FW2] ip route-static 2.2.2.17 32 NULL0
[Dr. WoW's comment] Route configuration cannot be backed up, so the black-hole routes must
be configured on both firewalls.
Step 14 Enable NAT source tracing and the IM logging function.
[Dr. WoW's comment] NAT source tracing is enabled to check the addresses before and after
NAT. Our approach is to configure the audit function on the firewall to generate session logs,
which are then sent to the log host. On the log host, we can then view the session logs
through eSight to see the addresses before and after NAT.
# Enable the firewall to send session logs to port 9002 of the log host (10.0.10.30), and
configure both inbound and outbound audit policies between the trust zone and the isp1/isp2
zones.
HRP_M[FW1] firewall log source 10.0.5.1 9002
HRP_M[FW1] firewall log host 2 10.0.10.30 9002
HRP_M[FW1] audit-policy interzone trust isp1 outbound
HRP_M[FW1-audit-policy-interzone-trust-isp1-outbound] policy 0
HRP_M[FW1-audit-policy-interzone-trust-isp1-outbound-0] action audit
HRP_M[FW1-audit-policy-interzone-trust-isp1-outbound-0] quit
HRP_M[FW1-audit-policy-interzone-trust-isp1-outbound] quit
HRP_M[FW1] audit-policy interzone trust isp1 inbound
HRP_M[FW1-audit-policy-interzone-trust-isp1-inbound] policy 0
HRP_M[FW1-audit-policy-interzone-trust-isp1-inbound-0] action audit
HRP_M[FW1-audit-policy-interzone-trust-isp1-inbound-0] quit
HRP_M[FW1-audit-policy-interzone-trust-isp1-inbound] quit
HRP_M[FW1] audit-policy interzone trust isp2 outbound
HRP_M[FW1-audit-policy-interzone-trust-isp2-outbound] policy 0
HRP_M[FW1-audit-policy-interzone-trust-isp2-outbound-0] action audit
HRP_M[FW1-audit-policy-interzone-trust-isp2-outbound-0] quit
HRP_M[FW1-audit-policy-interzone-trust-isp2-outbound] quit
HRP_M[FW1] audit-policy interzone trust isp2 inbound
HRP_M[FW1-audit-policy-interzone-trust-isp2-inbound] policy 0
HRP_M[FW1-audit-policy-interzone-trust-isp2-inbound-0] action audit
HRP_M[FW1-audit-policy-interzone-trust-isp2-inbound-0] quit
HRP_M[FW1-audit-policy-interzone-trust-isp2-inbound] quit
# Enable IM logging delivery function on FW1.
HRP_M[FW1] firewall log im enable
# The above log configurations on FW1 will be backed up to FW2. The source IP address of
the logs, however, is not backed up, so we configure it on FW2.
HRP_S[FW2] firewall log source 10.0.6.1 9002
# Configure SNMP on FW1 to connect FW1 with eSight. The SNMP parameters on
eSight must be the same as those on FW1.
HRP_M[FW1] snmp-agent sys-info v3
HRP_M[FW1] snmp-agent group v3 NMS1 privacy
HRP_M[FW1] snmp-agent usm-user v3 admin1 NMS1 authentication-mode md5 Admin@123 privacy-mode aes256 Admin@123
# SNMP configuration is not backed up; therefore, we must configure SNMP on FW2 to
connect FW2 with eSight. The SNMP parameters on eSight must be the same as those on
FW2.
HRP_S[FW2] snmp-agent sys-info v3
HRP_S[FW2] snmp-agent group v3 NMS1 privacy
HRP_S[FW2] snmp-agent usm-user v3 admin1 NMS1 authentication-mode md5 Admin@456 privacy-mode aes256 Admin@456
# After eSight is configured, choose Business > Security Business > LogCenter > Log
Analysis > Session Analysis > IPv4 Session Query to check session logs; choose Log
Analysis > Network Security Analysis > IM to check IM logs.
----End
11.4 Highlights
 This example illustrates a typical application of a firewall at the egress of the media
company's intranet to the Internet. If you are facing the same scenario, this example will
be a good reference.
 This is a typical hot standby network in which "firewalls are connected to upstream
switches and downstream routers".
 This example demonstrates the multiple egress routing methods available when the
firewall acts as the gateway, including ISP routing, intelligent routing, policy-based
routing, and intelligent DNS.
 This example shows the application identification and control capabilities of the firewall.
The firewall can identify port information and various applications, and carry out
application-based access control, policy-based routing, and traffic control.
12 Firewall Deployment on Stadium Network
12.1 Networking Requirements
As shown in Figure 12-1, two firewalls (USG6680 V100R001C20) are deployed as gateways
at the egress of the stadium network to provide broadband access. FW1 and FW2 are
connected to the same carrier through a domestic link and an international link.
Another two firewalls (USG6680 V100R001C20) are deployed at the egress of the data center
inside the stadium to protect the servers.
Figure 12-1 Stadium networking
[Figure description: At the egress, FW1 (GE1/0/9, 5.1.1.90/30) peers over BGP with the
carrier's domestic gateway (5.1.1.89/30), and FW2 (GE1/0/9, 5.1.1.94/30) peers over BGP
with the international gateway (5.1.1.93/30). Downstream, FW1 (GE1/0/8, 192.168.166.35/29)
runs OSPF with R1 (192.168.166.34/29), and FW2 (GE1/0/8, 192.168.166.19/29) runs OSPF
with R2 (192.168.166.18/29). FW1 and FW2 exchange heartbeats over Eth-Trunk 1
(1.1.1.2/30 and 1.1.1.1/30). At the data center egress, FW3 (VLAN 2200) and FW4
(VLAN 2100) are deployed transparently between R1/R2 and the core switch, exchanging
heartbeats over Eth-Trunk 1 (2.2.2.2/30 and 2.2.2.1/30). Each firewall is active for one end of
the stadium (west or east) and standby for the other.]
The requirements on the firewalls at the egress are as follows:
 BGP is running between the firewalls serving as egress gateways and the carrier routers,
and OSPF is running between the firewalls and the internal routers.
 The two firewalls work in hot standby mode to improve network availability.
 To ensure that users in the stadium can simultaneously access the Internet, NAT must be
deployed on the firewalls.
 To ensure internal network security, traffic should only be initiated from the intranet to
the Internet, and security functions, such as IPS, antivirus, and URL filtering, must be
deployed on the firewalls.
The requirements on the internal data center firewalls are as follows:
 The firewalls are deployed between the routers and the data center switch in transparent
mode (service interfaces working at Layer 2).
 The two firewalls work in hot standby mode to improve network availability.
 To ensure data center network security, traffic should only be initiated from the intranet
to the data center, and the IPS and antivirus functions must be deployed on the firewalls.
12.2 Network Planning (For Egress Firewall)
12.2.1 BGP Planning
Users in the stadium can access the Internet through two nodes at different levels of the same
carrier. As mentioned previously, one ISP link is connected to the domestic gateway and the
other to the international gateway. The domestic gateway advertises domestic routes while the
international gateway advertises international routes to the stadium through BGP. In this
manner, the optimal paths can be used for users to access Internet sites hosted in different
locations.
Dual links of this kind are rather common outside of China.
12.2.2 OSPF Planning
 Redistribute BGP routes into OSPF
The IGP routing table has a smaller capacity than the BGP routing table and cannot hold
the full Internet routing table, so BGP routes cannot be redistributed into the IGP.
Instead, redistribute only the default routes advertised by the carrier routers, using the
default-route-advertise command in OSPF.
 Advertise OSPF routes
To run OSPF between the egress firewalls and downstream intranet routers, routes to the
intranet must be advertised in OSPF on the firewalls. Pay attention to the following
points:
− The designated router (DR) election in OSPF takes dozens of seconds. However,
because OSPF is running only between the firewall and the intranet router for each
network, and the two devices are directly connected, the OSPF network type can be
set to p2p to skip DR election. In addition, P2P OSPF packets are multicast packets
and are not subject to security policy control.
− The heartbeat cables on the firewall are used only for the synchronization of
configuration commands and state information; therefore, it is unnecessary to
advertise their addresses in OSPF.
 Speed up route convergence
Enable the interworking of BFD and OSPF on the egress firewalls and intranet routers.
12.2.3 Hot Standby Planning
 Hot standby networking
The network where the egress firewall is connected to the upstream carrier router and
downstream intranet router is a typical hot standby network where "firewall service
interfaces work at Layer 3 and connect to upstream and downstream routers". The
difference is that, in this example, BGP is running between the firewall and the upstream
router while OSPF is running between the firewall and the downstream router. In such
networking, we use the VGMP group to directly monitor the service interfaces.
 Hot standby solution
Hot standby in load balancing mode is used in this example in consideration of the
following points:
− The volume of traffic from the intranet to the Internet is relatively high. Because IPS,
antivirus, and NAT need to be enabled on each egress firewall, the forwarding
performance may be downgraded and inadequate for the traffic to the Internet without
load balancing.
− The carrier provides one domestic link and one international link. If active/standby
backup mode is used, one of the links will be idle and the traffic cannot be forwarded
through the optimal path.
12.2.4 Security Function Planning
The following security functions are configured on the egress firewalls:
 Security zone: Generally, we add the outside interfaces (interfaces connected to
external networks) to the untrust zone and the inside interfaces (interfaces connected to
internal networks) to the trust zone.
 Security policy: To ensure internal network security, traffic can be initiated only from the
intranet to the Internet; therefore, a security policy must be configured on the egress
firewalls to allow the users in the trust zone to access the untrust zone.
 IPS: To prevent the intrusion of zombies, Trojan horses, and worms, IPS must be
deployed on the egress firewalls. This example uses the predefined IPS profile, ids,
which only alerts on attack packets without blocking them. If the requirement on security
is not very high, the ids profile is recommended to reduce the risk of IPS false positives.
 Antivirus: To prevent virus intrusion, the antivirus function must be enabled on the
egress firewalls. This example uses the predefined profile, default. During the initial
deployment of the firewall, the predefined antivirus profile, default, is generally used.
After the firewall runs for a period of time, the administrator can customize the profile
based on the network operation.
 URL filtering: To regulate the online behavior of intranet and Internet users, the URL
filtering function must be deployed on the egress firewalls. This example uses a
user-defined URL filtering profile. This profile blocks access to specified URL categories
and marks the QoS priority of other specified categories so that the upstream and
downstream routers can control the traffic from these websites.
12.2.5 NAT Planning
To allow intranet users to access the Internet through a limited number of public IP addresses,
the network address translation (NAT) function must be deployed on the egress firewalls.
 The number of needed public IP addresses
The IP addresses of the outgoing interfaces of the firewalls and those in the NAT address
pools must be public IP addresses provided by the carrier. When deciding the number of
public addresses in the NAT address pool, consider the following factors:
− Number of applications a public IP address can support
− Estimated number of concurrent applications per user, based on the traffic model
− Number of concurrent users
Here is an example:
− Empirical data: in this example, on average, each public IP address can support
60,000 applications, and each user runs 10 applications concurrently.
− We assume that the number of concurrent users is 48,000. According to the empirical
data, eight public IP addresses are required (48,000 × 10 / 60,000 = 8); however, 10
are recommended for redundancy.
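The sizing rule above can be written as a small helper. This is a sketch of the arithmetic only; the ceiling division and the two-address headroom (to get from 8 required to the 10 recommended) are assumptions for illustration.

```python
import math

def public_ips_needed(concurrent_users: int,
                      apps_per_user: int,
                      apps_per_ip: int,
                      spare: int = 2) -> int:
    """Estimate public IPs for a NAT pool: total concurrent applications
    divided by the applications one public address can support, rounded up,
    plus a few spare addresses for redundancy."""
    total_apps = concurrent_users * apps_per_user
    return math.ceil(total_apps / apps_per_ip) + spare

# Figures from this example: 48,000 users x 10 apps each, 60,000 apps per IP.
print(public_ips_needed(48_000, 10, 60_000))  # → 10 (8 required + 2 spare)
```

Adjust the empirical constants (applications per address, applications per user) to match measurements from your own traffic model.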
 NAT ALG ensures correct NAT for multi-channel protocols
When both the NAT function and the forwarding of multi-channel protocol packets, such
as FTP packets, are enabled on the firewall, the NAT ALG function must be enabled. In
this example, three multi-channel protocols, FTP, SIP, and H.323, are used. Therefore,
NAT ALG must be enabled for them.
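Why multi-channel protocols need an ALG can be seen from FTP's PORT command, which embeds the client's IP address inside the packet payload, where plain NAT (which rewrites only IP headers) never looks. The sketch below shows the payload rewrite an ALG performs; it is a simplified illustration, and the addresses are taken from this example's pools.

```python
import re

def translate_ftp_port(payload: str, public_ip: str) -> str:
    """Rewrite the IP address embedded in an FTP PORT command, as a NAT ALG
    would. The command "PORT h1,h2,h3,h4,p1,p2" carries the data-channel
    address (h1-h4) and port (p1*256+p2) inside the application payload."""
    m = re.match(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)$", payload)
    if not m:
        return payload  # not a PORT command; leave the payload untouched
    p1, p2 = m.group(5), m.group(6)
    new_ip = public_ip.replace(".", ",")
    return f"PORT {new_ip},{p1},{p2}"

# A private address 10.0.1.5 is rewritten to the public 5.1.1.145:
print(translate_ftp_port("PORT 10,0,1,5,19,137", "5.1.1.145"))
# → PORT 5,1,1,145,19,137
```

Without this payload fix-up, the server would try to open the data channel toward the unreachable private address, which is exactly why detect ftp (and its SIP/H.323 equivalents) must be enabled alongside NAT.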
12.2.6 Planning for Inconsistent Forward and Return Paths
The IPS and antivirus functions require that the forward and return packets of a flow pass
through the same firewall. For example, if the packets from an intranet user to the Internet are
forwarded through FW1, the return packets must also be forwarded through FW1.
However, FW1 and FW2 both forward traffic in load balancing mode. As a result, the forward
and return packets may take different paths. Therefore, we need some sophisticated
configurations to ensure consistent forward and return paths.
We must first ensure that traffic from the west end (east end) to the Internet is forwarded to
FW1 (FW2), on which the source addresses are translated to public addresses in address pool
natpool1 (natpool2). To achieve this, the following configurations are required:
 Configure two VRRP groups on each downlink interface of the intranet routers to
balance load (backing each other up). Set the gateway to the address of VRRP group 1 in
the west end and the address of VRRP group 2 in the east end. In this way, the traffic
from the west end will be forwarded to FW1 via Router1, and the traffic from the east
end will be forwarded to FW2 via Router2.
 Divide the received public IP addresses into two parts and assign them to the NAT
address pools natpool1 and natpool2, respectively. Then configure two NAT policies,
nat_policy1 and nat_policy2. The source addresses in nat_policy1 are the addresses of
the west end users, and natpool1 is referenced; the source addresses in nat_policy2 are
the addresses of the east end users, and natpool2 is referenced. We only need to
configure these on FW1 because the configurations will be synchronized to FW2.
In this manner, the source address of a packet from the west end will be translated into
an address in natpool1 on FW1, and the source address of a packet from the east end
will be translated into an address in natpool2 on FW2. If FW1 is faulty, the packets from
both ends will be forwarded to FW2, on which both NAT rules are applied; therefore, the
addresses of the packets from both ends can still be translated into the correct public IP
addresses.
The next step is to ensure that the packets returned from the Internet to the west end (east
end) users are forwarded to FW1 (FW2). To achieve this, we must use routing policies to
control the MED values of BGP routes. The following configurations are required:
 Advertise the addresses in the two NAT address pools in BGP to ensure that return
packets can be routed. A NAT address pool is normally advertised through black-hole
routes. This example first uses black-hole routes to advertise the networks, and then
redistributes these black-hole routes into BGP.
 Deploy a routing policy on FW1 to increase the cost (BGP MED value) of routes to the
addresses in natpool2. Packets destined for these addresses will then be forwarded to
FW2 through the link with the lower cost rather than to FW1. The routing policy on
FW2 is configured in the same way, but for the addresses in natpool1.
The preceding configurations ensure that forward and return packets take the same path so
that IPS and antivirus functions apply to return packets.
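The effect of the MED manipulation can be illustrated with a toy best-path choice. Real BGP best-path selection compares many attributes before MED; here we assume two routes to natpool2 that differ only in MED (lower wins), matching the route-policy in this example.

```python
# Toy illustration of how raising the MED steers return traffic. Both
# firewalls advertise the natpool2 prefix, but FW1's route-policy raised
# its MED to 300, so the carrier prefers the route via FW2 (lower MED wins
# when all earlier best-path tie-breakers are equal).
routes_to_natpool2 = [
    {"next_hop": "FW1", "med": 300},  # cost raised by route-policy W2
    {"next_hop": "FW2", "med": 0},    # advertised with the default MED
]

best = min(routes_to_natpool2, key=lambda r: r["med"])
print(best["next_hop"])  # → FW2: return traffic for natpool2 enters via FW2
```

Since only east-end (natpool2) sources exit through FW2, the return traffic now re-enters through the same firewall, which is what IPS and antivirus inspection requires.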
12.3 Network Planning (For Data Center Firewall)
12.3.1 Hot Standby Planning
The firewalls are deployed transparently at the egress of the data center network, connected to
the upstream routers and the downstream Layer 3 switch in the data center, which can be
regarded as a router. This is a typical hot standby network where the "firewall interfaces work
at Layer 2 and are connected to upstream and downstream routers". In such networking,
OSPF is running between the upstream routers and downstream Layer 3 switch of the
firewalls, but the firewalls are not part of the OSPF process. The VGMP groups of the
firewalls monitor service interfaces through VLAN.
12.3.2 Security Function Planning
The following security functions are configured on the data center firewalls:
 Security zone: In this case, the data center area has a high security level, so we must add
the interfaces connected to the data center to the trust zone and the interfaces connected
to the core routers to the untrust zone.
 Security policy: To ensure the security of the data center, the stadium permits only
intranet users to access specific ports on the data center servers. Therefore, security
policies must be deployed on the firewalls to allow users from the untrust zone to access
the specified ports on the specified servers in the trust zone.
 ASPF: To support multi-channel protocols, the ASPF function must be enabled. In this
example, three multi-channel protocols, FTP, SIP, and H.323, are used. Therefore, ASPF
must be enabled for them.
 IPS: To prevent the intrusion of zombies, Trojan horses, and worms, IPS must be
deployed. The security requirements for the data center firewalls are higher than those
for the egress firewalls, so the predefined IPS profile, default, must be used to block
intrusion behavior.
 Antivirus: To prevent viruses, the antivirus function must be deployed. The predefined
antivirus profile, default, is used in this example.
We can use the predefined IPS and antivirus profiles (default) at first and fine-tune the
profiles over time to better suit the network.
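The default-deny policy intent described above can be sketched as a match over (source zone, server, port) tuples. This is an illustration of the policy logic only; the server addresses and ports below are hypothetical examples, not values from this document.

```python
# Illustrative sketch of the data-center policy intent: only users arriving
# from the untrust zone may reach specific ports on specific trust-zone
# servers; everything else is denied by default.
# The server addresses and ports here are hypothetical placeholders.
ALLOWED = {
    ("untrust", "192.168.200.10", 443),  # web server, HTTPS only
    ("untrust", "192.168.200.11", 21),   # FTP server, control channel
}

def permit(src_zone: str, dst_ip: str, dst_port: int) -> bool:
    """Default deny: permit only explicitly allowed (zone, server, port) tuples."""
    return (src_zone, dst_ip, dst_port) in ALLOWED

print(permit("untrust", "192.168.200.10", 443))  # → True
print(permit("untrust", "192.168.200.10", 22))   # → False (SSH not allowed)
```

On the firewall itself this intent is expressed as security-policy rules with source-zone, destination address, and service conditions, with the implicit deny covering the rest.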
12.4 Configuration Procedure (For Egress Firewall)
Step 1 Configure interface IP addresses and add the interfaces to security zones.
# Configure the IP address and description of the inside interface GE1/0/8 on FW1.
<FW1> system-view
[FW1] interface GigabitEthernet 1/0/8
[FW1-GigabitEthernet1/0/8] ip address 192.168.166.35 255.255.255.248
[FW1-GigabitEthernet1/0/8] description to R1-GigabitEthernet1/0/0_192.168.166.34
[FW1-GigabitEthernet1/0/8] quit
# Configure the IP address and description of the outside interface GE1/0/9 on FW1.
[FW1] interface GigabitEthernet 1/0/9
[FW1-GigabitEthernet1/0/9] ip address 5.1.1.90 255.255.255.252
[FW1-GigabitEthernet1/0/9] description to ISP-internal
[FW1-GigabitEthernet1/0/9] quit
# Configure Eth-Trunk1, add heartbeat interfaces GE3/0/8 and GE3/0/9 on FW1 to
Eth-Trunk1, and configure its IP address.
[FW1] interface Eth-Trunk1
[FW1-Eth-Trunk1] ip address 1.1.1.2 255.255.255.252
[FW1-Eth-Trunk1] description to hrp
[FW1-Eth-Trunk1] quit
[FW1] interface GigabitEthernet 3/0/8
[FW1-GigabitEthernet3/0/8] eth-trunk 1
[FW1-GigabitEthernet3/0/8] description to FW2-GigabitEthernet3/0/8
[FW1-GigabitEthernet3/0/8] quit
[FW1] interface GigabitEthernet 3/0/9
[FW1-GigabitEthernet3/0/9] eth-trunk 1
[FW1-GigabitEthernet3/0/9] description to FW2-GigabitEthernet3/0/9
[FW1-GigabitEthernet3/0/9] quit
# Add the interfaces on FW1 to security zones.
[FW1] firewall zone trust
[FW1-zone-trust] add interface GigabitEthernet1/0/8
[FW1-zone-trust] quit
[FW1] firewall zone untrust
[FW1-zone-untrust] add interface GigabitEthernet1/0/9
[FW1-zone-untrust] quit
[FW1] firewall zone dmz
[FW1-zone-dmz] add interface Eth-Trunk1
[FW1-zone-dmz] quit
# Configure the IP address and description of the inside interface GE1/0/8 on FW2.
<FW2> system-view
[FW2] interface GigabitEthernet 1/0/8
[FW2-GigabitEthernet1/0/8] ip address 192.168.166.19 255.255.255.248
[FW2-GigabitEthernet1/0/8] description to R2-GigabitEthernet1/0/0_192.168.166.18
[FW2-GigabitEthernet1/0/8] quit
# Configure the IP address and description of the outside interface GE1/0/9 on FW2.
[FW2] interface GigabitEthernet 1/0/9
[FW2-GigabitEthernet1/0/9] ip address 5.1.1.94 255.255.255.252
[FW2-GigabitEthernet1/0/9] description to ISP-International
[FW2-GigabitEthernet1/0/9] quit
# Configure Eth-Trunk1, add heartbeat interfaces GE3/0/8 and GE3/0/9 on FW2 to
Eth-Trunk1, and configure its IP address.
[FW2] interface Eth-Trunk1
[FW2-Eth-Trunk1] ip address 1.1.1.1 255.255.255.252
[FW2-Eth-Trunk1] description to hrp
[FW2-Eth-Trunk1] quit
[FW2] interface GigabitEthernet 3/0/8
[FW2-GigabitEthernet3/0/8] eth-trunk 1
[FW2-GigabitEthernet3/0/8] description to FW1-GigabitEthernet3/0/8
[FW2-GigabitEthernet3/0/8] quit
[FW2] interface GigabitEthernet 3/0/9
[FW2-GigabitEthernet3/0/9] eth-trunk 1
[FW2-GigabitEthernet3/0/9] description to FW1-GigabitEthernet3/0/9
[FW2-GigabitEthernet3/0/9] quit
# Add the interfaces on FW2 to security zones.
[FW2] firewall zone trust
[FW2-zone-trust] add interface GigabitEthernet1/0/8
[FW2-zone-trust] quit
[FW2] firewall zone untrust
[FW2-zone-untrust] add interface GigabitEthernet1/0/9
[FW2-zone-untrust] quit
[FW2] firewall zone dmz
[FW2-zone-dmz] add interface Eth-Trunk1
[FW2-zone-dmz] quit
Step 2 Configure BGP.
# Configure BGP on FW1, such as specifying the peers, advertising neighboring networks,
and redistributing direct routes.
[FW1] bgp 65010
[FW1-bgp] router-id 5.1.1.90
[FW1-bgp] peer 5.1.1.89 as-number 20825
[FW1-bgp] peer 5.1.1.89 password cipher Admin@1234
[FW1-bgp] ipv4-family unicast
[FW1-bgp-af-ipv4] network 5.1.1.88 255.255.255.252
[FW1-bgp-af-ipv4] import-route static
[FW1-bgp-af-ipv4] peer 5.1.1.89 enable
# Configure BGP on FW2, such as specifying the peers, advertising neighboring networks,
and redistributing direct routes.
[FW2] bgp 65010
[FW2-bgp] router-id 5.1.1.94
[FW2-bgp] peer 5.1.1.93 as-number 20825
[FW2-bgp] peer 5.1.1.93 password cipher Admin@1234
[FW2-bgp] ipv4-family unicast
[FW2-bgp-af-ipv4] network 5.1.1.92 255.255.255.252
[FW2-bgp-af-ipv4] import-route static
[FW2-bgp-af-ipv4] peer 5.1.1.93 enable
Step 3 Configure routing policies.
# Configure IP prefix W2 on FW1 to permit IP addresses (5.1.1.153-5.1.1.158) on the network
5.1.1.152/29.
[FW1] ip ip-prefix W2 index 10 permit 5.1.1.152 29 greater-equal 29 less-equal 29
# Configure routing policy W2 on FW1 to set the cost of routes destined for addresses
5.1.1.153 to 5.1.1.158 to 300.
[FW1] route-policy W2 permit node 10
[FW1-route-policy] if-match ip-prefix W2
[FW1-route-policy] apply cost 300
[FW1-route-policy] quit
[FW1] route-policy W2 permit node 20
# Configure IP prefix N1 on FW2 to permit IP addresses (5.1.1.145-5.1.1.150) on the network
5.1.1.144/29.
[FW2] ip ip-prefix N1 index 10 permit 5.1.1.144 29 greater-equal 29 less-equal 29
# Configure routing policy N1 on FW2 to set the cost of routes destined for addresses
5.1.1.145 to 5.1.1.150 to 300.
[FW2] route-policy N1 permit node 10
[FW2-route-policy] if-match ip-prefix N1
[FW2-route-policy] apply cost 300
[FW2-route-policy] quit
[FW2] route-policy N1 permit node 20
Step 4 Configure OSPF.
# Configure OSPF on FW1, including redistributing default routes, enabling BFD, and
advertising networks.
[FW1] ospf 1
[FW1-ospf-1] default-route-advertise always
[FW1-ospf-1] bfd all-interfaces enable
[FW1-ospf-1] area 0
[FW1-ospf-1-area-0.0.0.0] network 192.168.166.32 0.0.0.7
[FW1-ospf-1-area-0.0.0.0] quit
[FW1-ospf-1] quit
# Enable BFD on FW1.
[FW1] bfd
[FW1-bfd] quit
# Set the OSPF network type to p2p on GE1/0/8 of FW1.
[FW1] interface GigabitEthernet1/0/8
[FW1-GigabitEthernet1/0/8] ospf network-type p2p
# Associate BFD with OSPF on GE1/0/8 of FW1.
[FW1-GigabitEthernet1/0/8] ospf bfd enable
[FW1-GigabitEthernet1/0/8] ospf bfd min-tx-interval 100 min-rx-interval 100
[FW1-GigabitEthernet1/0/8] quit
# Configure OSPF on FW2, including redistributing default routes, enabling BFD, and
advertising networks.
[FW2] ospf 1
[FW2-ospf-1] default-route-advertise always
[FW2-ospf-1] bfd all-interfaces enable
[FW2-ospf-1] area 0
[FW2-ospf-1-area-0.0.0.0] network 192.168.166.16 0.0.0.7
[FW2-ospf-1-area-0.0.0.0] quit
[FW2-ospf-1] quit
# Enable BFD on FW2.
[FW2] bfd
[FW2-bfd] quit
# Set the OSPF network type to p2p on GE1/0/8 of FW2.
[FW2] interface GigabitEthernet1/0/8
[FW2-GigabitEthernet1/0/8] ospf network-type p2p
# Associate BFD with OSPF on GE1/0/8 of FW2.
[FW2-GigabitEthernet1/0/8] ospf bfd enable
[FW2-GigabitEthernet1/0/8] ospf bfd min-tx-interval 100 min-rx-interval 100
[FW2-GigabitEthernet1/0/8] quit
Step 5 Configure the hot standby function.
# Configure active and standby groups on FW1 to monitor interface GE1/0/8; and configure
Link-Group to speed up convergence.
[FW1] interface GigabitEthernet1/0/8
[FW1-GigabitEthernet1/0/8] hrp track active
[FW1-GigabitEthernet1/0/8] hrp track standby
[FW1-GigabitEthernet1/0/8] link-group 1
[FW1-GigabitEthernet1/0/8] quit
# Configure active and standby groups on FW1 to monitor interface GE1/0/9; and configure
Link-Group to speed up convergence.
[FW1] interface GigabitEthernet1/0/9
[FW1-GigabitEthernet1/0/9] hrp track active
[FW1-GigabitEthernet1/0/9] hrp track standby
[FW1-GigabitEthernet1/0/9] link-group 1
[FW1-GigabitEthernet1/0/9] quit
# Specify a heartbeat interface on FW1, configure quick session backup, and enable the hot
standby function.
[FW1] hrp interface Eth-Trunk1
[FW1] hrp mirror session enable
[FW1] hrp enable
# Configure active and standby groups on FW2 to monitor interface GE1/0/8; and configure
Link-Group to speed up convergence.
[FW2] interface GigabitEthernet1/0/8
[FW2-GigabitEthernet1/0/8] hrp track active
[FW2-GigabitEthernet1/0/8] hrp track standby
[FW2-GigabitEthernet1/0/8] link-group 1
[FW2-GigabitEthernet1/0/8] quit
# Configure active and standby groups on FW2 to monitor interface GE1/0/9; and configure
Link-Group to speed up convergence.
[FW2] interface GigabitEthernet1/0/9
[FW2-GigabitEthernet1/0/9] hrp track active
[FW2-GigabitEthernet1/0/9] hrp track standby
[FW2-GigabitEthernet1/0/9] link-group 1
[FW2-GigabitEthernet1/0/9] quit
# Specify a heartbeat interface on FW2, configure quick session backup, and enable the hot
standby function.
[FW2] hrp interface Eth-Trunk1
[FW2] hrp mirror session enable
[FW2] hrp enable
[Dr. WoW's comment] When hot standby is successfully configured, most of the
configurations can be automatically backed up. Therefore, in the following steps, we only
need to configure the active device, FW1 (except for some specified configurations).
Step 6 Configure the NAT function.
# Configure the NAT address pool, natpool1, on FW1.
HRP_A[FW1] nat address-group natpool1
HRP_A[FW1-nat-address-group-natpool1] section 0 5.1.1.145 5.1.1.150
HRP_A[FW1-nat-address-group-natpool1] quit
# Configure NAT policy, nat_policy1, on FW1 to translate the source addresses of packets
from 10.0.0.0/16 into the IP addresses in natpool1.
HRP_A[FW1] nat-policy
HRP_A[FW1-policy-nat] rule name nat_policy1
HRP_A[FW1-policy-nat-rule-nat_policy1] source-address 10.0.0.0 16
HRP_A[FW1-policy-nat-rule-nat_policy1] action nat address-group natpool1
HRP_A[FW1-policy-nat-rule-nat_policy1] quit
HRP_A[FW1-policy-nat] quit
# Configure the NAT address pool, natpool2, on FW1.
HRP_A[FW1] nat address-group natpool2
HRP_A[FW1-nat-address-group-natpool2] section 0 5.1.1.153 5.1.1.158
HRP_A[FW1-nat-address-group-natpool2] quit
# Configure NAT policy, nat_policy2, on FW1 to translate the source addresses of packets
from 10.1.0.0/16 into the IP addresses in natpool2.
HRP_A[FW1] nat-policy
HRP_A[FW1-policy-nat] rule name nat_policy2
HRP_A[FW1-policy-nat-rule-nat_policy2] source-address 10.1.0.0 16
HRP_A[FW1-policy-nat-rule-nat_policy2] action nat address-group natpool2
HRP_A[FW1-policy-nat-rule-nat_policy2] quit
HRP_A[FW1-policy-nat] quit
[Dr. WoW's comment] When hot standby is applied, NAT address pool and NAT policy
configurations can be automatically backed up; therefore, we only need to configure the
active device, FW1.
# Configure a black-hole route on FW1 and advertise the public IP addresses in the NAT
address pool.
HRP_A[FW1] ip route-static 5.1.1.144 255.255.255.240 NULL0
# Configure a black-hole route on FW2 and advertise the public IP addresses in the NAT
address pool.
HRP_S[FW2] ip route-static 5.1.1.144 255.255.255.240 NULL0
[Dr. WoW's comment] Route configuration cannot be backed up, so the black-hole routes
must be configured on both firewalls.
# Enable NAT ALG on FW1. The configurations will be automatically backed up to the
standby device.
HRP_A[FW1] firewall interzone trust untrust
HRP_A[FW1-interzone-trust-untrust] detect ftp
HRP_A[FW1-interzone-trust-untrust] detect sip
HRP_A[FW1-interzone-trust-untrust] detect h323
HRP_A[FW1-interzone-trust-untrust] quit
Step 7 Configure security functions.
# Configure URL filtering profile, url1, on FW1.
HRP_A[FW1] profile type url-filter name url1
HRP_A[FW1-profile-url-filter-url1] category pre-defined subcategory-id 109 action qos remark dscp cs1
HRP_A[FW1-profile-url-filter-url1] category pre-defined subcategory-id 122 action block
HRP_A[FW1-profile-url-filter-url1] category pre-defined subcategory-id 182 action block
HRP_A[FW1-profile-url-filter-url1] quit
[Dr. WoW's comment] Before configuring the URL filtering profile, run display url-filter
category to check URL categories; and then select the appropriate category for the network.
Because the configuration procedure is identical, we will only illustrate the configurations of
the preceding three categories.
# Configure a security policy to allow intranet users to access the Internet and reference the
IPS, antivirus, and URL filtering profiles.
HRP_A[FW1] security-policy
HRP_A[FW1-policy-security] rule name policy_sec
HRP_A[FW1-policy-security-rule-policy_sec] source-zone trust
HRP_A[FW1-policy-security-rule-policy_sec] destination-zone untrust
HRP_A[FW1-policy-security-rule-policy_sec] profile ips ids
HRP_A[FW1-policy-security-rule-policy_sec] profile av default
HRP_A[FW1-policy-security-rule-policy_sec] profile url url1
HRP_A[FW1-policy-security-rule-policy_sec] action permit
HRP_A[FW1-policy-security-rule-policy_sec] policy logging
HRP_A[FW1-policy-security-rule-policy_sec] session logging
HRP_A[FW1-policy-security-rule-policy_sec] quit
HRP_A[FW1-policy-security] quit
[Dr. WoW's comment] During the initial deployment of the firewall, predefined antivirus
profile, default, is generally used. After the firewall runs for a period of time, the
administrator can customize the profile based on the network operation. If the requirement on
security is not very high, profile ids is recommended to reduce the risk of IPS false positive.
----End
12.5 Configuration Procedure (For Data Center Firewall)
Step 1 Configure the interfaces and add them to security zones.
# Switch interface GE1/0/8 on FW3 to a Layer 2 interface, and add it to the Link-Group.
<FW3> system-view
[FW3] interface GigabitEthernet 1/0/8
[FW3-GigabitEthernet1/0/8] portswitch
[FW3-GigabitEthernet1/0/8] description to R1-GE1/0/3_192.168.166.42
[FW3-GigabitEthernet1/0/8] link-group 1
[FW3-GigabitEthernet1/0/8] quit
# Switch interface GE1/0/9 on FW3 to a Layer 2 interface, and add it to the Link-Group.
[FW3] interface GigabitEthernet 1/0/9
[FW3-GigabitEthernet1/0/9] portswitch
[FW3-GigabitEthernet1/0/9] description to DCSW-GE0/0/2_192.168.166.44
[FW3-GigabitEthernet1/0/9] link-group 1
[FW3-GigabitEthernet1/0/9] quit
# Create VLAN2200 on FW3 and add GE1/0/8 and GE1/0/9 to this VLAN.
Learn Firewalls with Dr. WoW
[FW3] vlan 2200
[FW3-vlan-2200] port GigabitEthernet 1/0/8
[FW3-vlan-2200] port GigabitEthernet 1/0/9
[FW3-vlan-2200] quit
# Configure Eth-Trunk1, add heartbeat interfaces GE3/0/8 and GE3/0/9 on FW3 to
Eth-Trunk1, and configure its IP address.
[FW3] interface Eth-Trunk1
[FW3-Eth-Trunk1] ip address 2.2.2.2 255.255.255.252
[FW3-Eth-Trunk1] description to hrp
[FW3-Eth-Trunk1] quit
[FW3] interface GigabitEthernet 3/0/8
[FW3-GigabitEthernet3/0/8] eth-trunk 1
[FW3-GigabitEthernet3/0/8] description to FW4-GigabitEthernet3/0/8
[FW3-GigabitEthernet3/0/8] quit
[FW3] interface GigabitEthernet 3/0/9
[FW3-GigabitEthernet3/0/9] eth-trunk 1
[FW3-GigabitEthernet3/0/9] description to FW4-GigabitEthernet3/0/9
[FW3-GigabitEthernet3/0/9] quit
# Add the interfaces on FW3 to security zones.
[FW3] firewall zone trust
[FW3-zone-trust] add interface GigabitEthernet1/0/9
[FW3-zone-trust] quit
[FW3] firewall zone untrust
[FW3-zone-untrust] add interface GigabitEthernet1/0/8
[FW3-zone-untrust] quit
[FW3] firewall zone dmz
[FW3-zone-dmz] add interface Eth-Trunk1
[FW3-zone-dmz] quit
# Switch interface GE1/0/8 on FW4 to a Layer 2 interface, and add it to the Link-Group.
<FW4> system-view
[FW4] interface GigabitEthernet 1/0/8
[FW4-GigabitEthernet1/0/8] portswitch
[FW4-GigabitEthernet1/0/8] description to R2-GE1/0/3_192.168.166.26
[FW4-GigabitEthernet1/0/8] link-group 1
[FW4-GigabitEthernet1/0/8] quit
# Switch interface GE1/0/9 on FW4 to a Layer 2 interface, and add it to the Link-Group.
[FW4] interface GigabitEthernet 1/0/9
[FW4-GigabitEthernet1/0/9] portswitch
[FW4-GigabitEthernet1/0/9] description to DCSW-GE0/0/1_192.168.166.28
[FW4-GigabitEthernet1/0/9] link-group 1
[FW4-GigabitEthernet1/0/9] quit
# Create VLAN2100 on FW4 and add GE1/0/8 and GE1/0/9 to this VLAN.
[FW4] vlan 2100
[FW4-vlan-2100] port GigabitEthernet 1/0/8
[FW4-vlan-2100] port GigabitEthernet 1/0/9
[FW4-vlan-2100] quit
# Configure Eth-Trunk1, add heartbeat interfaces GE3/0/8 and GE3/0/9 on FW4 to
Eth-Trunk1, and configure its IP address.
[FW4] interface Eth-Trunk1
[FW4-Eth-Trunk1] ip address 2.2.2.1 255.255.255.252
[FW4-Eth-Trunk1] description to hrp
[FW4-Eth-Trunk1] quit
[FW4] interface GigabitEthernet 3/0/8
[FW4-GigabitEthernet3/0/8] eth-trunk 1
[FW4-GigabitEthernet3/0/8] description to FW3-GigabitEthernet3/0/8
[FW4-GigabitEthernet3/0/8] quit
[FW4] interface GigabitEthernet 3/0/9
[FW4-GigabitEthernet3/0/9] eth-trunk 1
[FW4-GigabitEthernet3/0/9] description to FW3-GigabitEthernet3/0/9
[FW4-GigabitEthernet3/0/9] quit
# Add the interfaces on FW4 to security zones.
[FW4] firewall zone trust
[FW4-zone-trust] add interface GigabitEthernet1/0/9
[FW4-zone-trust] quit
[FW4] firewall zone untrust
[FW4-zone-untrust] add interface GigabitEthernet1/0/8
[FW4-zone-untrust] quit
[FW4] firewall zone dmz
[FW4-zone-dmz] add interface Eth-Trunk1
[FW4-zone-dmz] quit
Step 2 Configure the hot standby function.
# Configure active and standby groups on FW3 to monitor the VLAN2200.
[FW3] vlan 2200
[FW3-vlan-2200] hrp track active
[FW3-vlan-2200] hrp track standby
[FW3-vlan-2200] quit
# Specify a heartbeat interface on FW3, configure quick session backup, and enable the hot
standby function.
[FW3] hrp interface Eth-Trunk1
[FW3] hrp mirror session enable
[FW3] hrp enable
# Configure active and standby groups on FW4 to monitor the VLAN2100.
[FW4] vlan 2100
[FW4-vlan-2100] hrp track active
[FW4-vlan-2100] hrp track standby
[FW4-vlan-2100] quit
# Specify a heartbeat interface on FW4, configure quick session backup, and enable the hot
standby function.
[FW4] hrp interface Eth-Trunk1
[FW4] hrp mirror session enable
[FW4] hrp enable
[Dr. WoW's comment] When hot standby is successfully configured, most of the
configurations are automatically backed up. Therefore, in the following steps, we only
need to configure the active device, FW3 (except for some device-specific configurations).
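The effect of quick session backup (hrp mirror session enable) can be modeled simply: the active unit pushes each new session entry to the standby over the heartbeat link as soon as the session is created, so an established connection survives a failover. The following Python sketch is a conceptual model of this behavior, not device code; the class and tuple layout are our own.

```python
# Conceptual model of HRP quick session backup: the active firewall
# mirrors each newly created session to its standby peer in real time.
class Firewall:
    def __init__(self, name):
        self.name = name
        self.sessions = set()
        self.peer = None  # standby peer reached over the heartbeat link

    def create_session(self, five_tuple):
        self.sessions.add(five_tuple)
        if self.peer is not None:          # quick (real-time) backup
            self.peer.sessions.add(five_tuple)

fw3, fw4 = Firewall("FW3"), Firewall("FW4")
fw3.peer = fw4                             # heartbeat link: Eth-Trunk1
s = ("10.10.10.20", 50000, "10.10.10.10", 80, "tcp")
fw3.create_session(s)
# After a failover, FW4 already holds the session, so forwarding continues.
```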
Step 3 Configure security functions.
# Configure a security policy to allow intranet users to access the specified ports on the
servers in the data center, and reference the IPS and antivirus profiles. In this example, we
allow intranet users to access the HTTP server 10.10.10.10 in the data center.
HRP_A[FW3] security-policy
HRP_A[FW3-policy-security] rule name policy_sec
HRP_A[FW3-policy-security-rule-policy_sec] source-zone untrust
HRP_A[FW3-policy-security-rule-policy_sec] destination-zone trust
HRP_A[FW3-policy-security-rule-policy_sec] destination-address 10.10.10.10 32
HRP_A[FW3-policy-security-rule-policy_sec] service http
HRP_A[FW3-policy-security-rule-policy_sec] profile ips default
HRP_A[FW3-policy-security-rule-policy_sec] profile av default
HRP_A[FW3-policy-security-rule-policy_sec] action permit
HRP_A[FW3-policy-security-rule-policy_sec] policy logging
HRP_A[FW3-policy-security-rule-policy_sec] session logging
HRP_A[FW3-policy-security-rule-policy_sec] quit
HRP_A[FW3-policy-security] quit
[Dr. WoW's comment] During the initial deployment of the firewall, the predefined antivirus
profile default is generally used. After the firewall has run for a period of time, the
administrator can customize the profile based on how the network operates.
# Configure ASPF to permit multi-channel protocols.
HRP_A[FW3] firewall interzone trust untrust
HRP_A[FW3-interzone-trust-untrust] detect ftp
HRP_A[FW3-interzone-trust-untrust] detect sip
HRP_A[FW3-interzone-trust-untrust] detect h323
HRP_A[FW3-interzone-trust-untrust] quit
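ASPF works by inspecting the control channel of a multi-channel protocol and opening a temporary server-map entry for the negotiated data channel. For FTP, this means parsing the PORT command on the control connection to learn the endpoint the data connection will use. The following Python sketch illustrates only that parsing step; the function name and return format are ours, not the device's.

```python
def parse_ftp_port(line):
    """Parse an FTP 'PORT h1,h2,h3,h4,p1,p2' control command and return
    the data-channel endpoint that ASPF would open a pinhole for."""
    assert line.upper().startswith("PORT ")
    h1, h2, h3, h4, p1, p2 = (int(x) for x in line[5:].split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

# The client announces 10.1.1.2:1026 for the incoming data connection:
parse_ftp_port("PORT 10,1,1,2,4,2")  # -> ("10.1.1.2", 1026)
```

Without ASPF, the dynamically negotiated data connection would not match any configured security policy and would be dropped; the server-map entry admits exactly this one flow.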
----End
12.6 Highlights
 This example illustrates the typical deployment of firewalls at the egress of a stadium
network and in an internal data center. If you are dealing with the same scenario, this
example is a good reference.
 This example illustrates two typical hot standby networking scenarios, "firewalls
connected to routers" and "firewalls transparently connected to routers", which helps you
understand the typical applications of hot standby.
 In this example, BGP, rather than static routes, runs between the firewalls serving as
gateways and the carrier network. If you need to deploy BGP on egress gateways, you
may refer to this example.
 The most important and also trickiest part is the use of source NAT and routing policies
to ensure that forward and return packets take the same path.
 Finally, this example also demonstrates the configuration of content security functions
on the egress and data center firewalls, which is the simplest and safest configuration
for the initial deployment of a firewall.
13 Firewalls on the VPN Connecting Corporate Branches and Headquarters
13.1 Networking Requirements
As shown in Figure 13-1, a large enterprise consists of a municipal branch, a provincial
branch, and the headquarters (HQ). The networking is as follows:
 On the municipal branch network, the OA, ERP, and financial systems belong to three
different VLANs connected through a Layer 2 switch; a firewall, FW_A (USG2200
V300R001C10), is deployed at the egress.
 The provincial branch network has OA and ERP servers and a firewall, FW_B (USG5530
V300R001C10), at the egress.
 The HQ network has the financial management server, several PCs, and a firewall,
FW_C (USG5530 V300R001C10), at the egress.
Figure 13-1 VPN connecting corporate branches and HQ
[Figure: network topology, summarized as follows]
 Municipal branch: a Layer 2 switch connects VLAN10 (OA system, 10.1.1.0/24), VLAN20
(ERP system, 10.1.2.0/24), and VLAN30 (financial system, 192.168.1.0/24) to FW_A.
FW_A Trust: GE0/0/1.1 10.1.1.1/24, GE0/0/1.2 10.1.2.1/24, GE0/0/1.3 192.168.1.1/24;
Untrust: GE0/0/2, connected to the PPPoE server.
 Provincial branch: OA and ERP servers (10.1.3.0/24) behind FW_B. FW_B Trust:
GE0/0/1 10.1.3.1/24; Untrust: GE0/0/2 2.2.2.2/30. An IPSec tunnel runs between FW_A
and FW_B.
 HQ: PCs (192.168.0.0/24) and the financial management server (192.168.0.200) behind
FW_C. FW_C Trust: GE0/0/1 192.168.0.1/24; Untrust: GE0/0/2 3.3.3.3/30. An IPSec
tunnel runs between FW_A and FW_C.
The specific requirements are as follows:
 At the municipal branch, the data of the OA and ERP systems must be sent to the
corresponding OA and ERP servers at the provincial branch, and the data of the financial
system must be sent to the financial management server at the HQ for analysis. All the
data must be encrypted when transmitted over the Internet. In addition, the devices of the
OA, ERP, and financial systems are not allowed to access the Internet, but inter-system
communication is allowed.
 To reduce manual configuration efforts, FW_A of the municipal branch allocates IP
addresses and gateway information to the devices of the OA, ERP, and financial systems.
 No static public IP address is assigned to the municipal branch; therefore, FW_A serves
as a PPPoE client and obtains a public IP address from the PPPoE server through dial-up.
 The financial management server (192.168.0.200) at the HQ can receive access requests
only from 192.168.0.1 due to security restrictions. Therefore, the source addresses of
packets from the financial system of the municipal branch must be translated into the
address of GE0/0/1 on FW_C. In addition, the financial management server is not
allowed to access the Internet, while the PCs within the address range 192.168.0.2 to
192.168.0.100 are.
13.2 Network Planning
13.2.1 Interface Planning
FW_A of the municipal branch is connected to the internal networks, on which OA, ERP, and
financial systems are run, through one physical interface. Therefore, three logical
subinterfaces must be configured to connect the networks. In addition, to dynamically obtain
public IP addresses from the PPPoE server through dial-up, a dialer interface must be
configured on FW_A.
No special requirement is set for FW_B of the provincial branch and FW_C of the HQ.
Configure the IP addresses and add them to the security zones after physical interfaces are
determined.
13.2.2 Security Policy Planning
Configure on FW_A a security policy for the trust-untrust interzone to permit the packets
destined for the provincial branch or the HQ from the OA, ERP, and financial systems; and
configure a security policy for the local-untrust interzone to permit IPSec packets.
Configure on FW_B a security policy for the trust-untrust interzone to permit the packets
destined for the OA and ERP servers from the OA and ERP systems; and configure a security
policy for the local-untrust interzone to permit IPSec packets.
Configure on FW_C a security policy for the trust-untrust interzone to permit the packets
destined for the financial management server from the financial system and allow PCs to
access the Internet; and configure a security policy for the local-untrust interzone to permit
IPSec packets.
13.2.3 IPSec Planning
The data between the municipal branch and the provincial branch, and between the municipal
branch and the HQ must be encrypted by IPSec. Therefore, IPSec tunnels must be set up
between FW_A and FW_B and between FW_A and FW_C.
Configure an IPSec policy group on the dialer interface of FW_A, including two IPSec
policies for the connection respectively to the two peers, FW_B and FW_C.
When the municipal branch initiates connections to the provincial branch and the HQ, FW_B
and FW_C set up policy template-based IPSec tunnels to allow the access.
13.2.4 NAT Planning
No NAT policy is needed on FW_A because the OA, ERP, and financial systems at the
municipal branch do not need to access the Internet. Likewise, no NAT policy is configured
on FW_B because the provincial branch does not need to access the Internet.
Two NAT policies must be created on FW_C of the HQ. One uses the Easy-IP mode to
convert the source addresses of the packets destined for the financial management server from
the municipal branch financial system into GE0/0/1's address; the other uses the NAPT mode
to convert the source addresses of Internet access packets from the PCs at the HQ into
addresses in the public IP address pool. The objects of the second NAT policy are restricted to
the IP addresses of PCs. The address of financial management server is not included because
the server is not allowed to access the Internet.
13.2.5 Routing Planning
Static routing is used because the network structures in this example are rather simple.
Configure a default route on FW_A, FW_B, and FW_C, with the next hop addresses being
the address provided by the ISP. In addition, configure a black-hole route on FW_C with the
destination IP address as an address from the NAT pool to prevent routing loops.
The gateway of the devices in the municipal branch network is assigned by FW_A. The
default gateway of the devices in the provincial branch or the HQ is set as FW_B or FW_C's
interface connected to the internal network.
13.3 Configuration Procedure
Step 1 Configure interface IP addresses, and add the interfaces to security zones.
# Create subinterfaces on FW_A, configure their IP addresses, and configure DHCP on them
to allocate IP addresses to intranet devices.
<FW_A> system-view
[FW_A] interface GigabitEthernet 0/0/1.1
[FW_A-GigabitEthernet0/0/1.1] vlan-type dot1q 10
[FW_A-GigabitEthernet0/0/1.1] ip address 10.1.1.1 255.255.255.0
[FW_A-GigabitEthernet0/0/1.1] dhcp select interface
[FW_A-GigabitEthernet0/0/1.1] quit
[FW_A] interface GigabitEthernet 0/0/1.2
[FW_A-GigabitEthernet0/0/1.2] vlan-type dot1q 20
[FW_A-GigabitEthernet0/0/1.2] ip address 10.1.2.1 255.255.255.0
[FW_A-GigabitEthernet0/0/1.2] dhcp select interface
[FW_A-GigabitEthernet0/0/1.2] quit
[FW_A] interface GigabitEthernet 0/0/1.3
[FW_A-GigabitEthernet0/0/1.3] vlan-type dot1q 30
[FW_A-GigabitEthernet0/0/1.3] ip address 192.168.1.1 255.255.255.0
[FW_A-GigabitEthernet0/0/1.3] dhcp select interface
[FW_A-GigabitEthernet0/0/1.3] quit
[Dr. WoW's comment] The structure of the municipal branch network is simple. After running
the dhcp select interface command, FW_A will send the IP addresses of the subinterfaces as
gateway addresses to the devices of the OA, ERP, and financial systems, and allocate other IP
addresses within the same subnet to these devices.
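In other words, with dhcp select interface the address pool is derived directly from the subinterface address: the interface IP becomes the gateway handed to clients, and the remaining hosts of the subnet are available for lease. A minimal Python sketch of that derivation (our illustration using the standard ipaddress module, not the device's exact allocation algorithm):

```python
import ipaddress

def interface_dhcp_pool(ip_with_prefix):
    """Given a subinterface address such as '10.1.1.1/24', return the
    gateway handed to DHCP clients and the count of leasable addresses
    (all hosts of the subnet except the gateway itself)."""
    iface = ipaddress.ip_interface(ip_with_prefix)
    gateway = iface.ip
    pool = [h for h in iface.network.hosts() if h != gateway]
    return str(gateway), len(pool)

interface_dhcp_pool("10.1.1.1/24")  # -> ("10.1.1.1", 253)
```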
# Create a dialer interface on FW_A and configure dial-up parameters, and then bind it to a
physical interface. In this example, the user name allocated by the PPPoE server to the
enterprise is admin, the password is Admin@123, and the authentication mode is PAP.
[FW_A] dialer-rule 1 ip permit
[FW_A] interface Dialer 1
[FW_A-Dialer1] dialer user admin
[FW_A-Dialer1] dialer-group 1
[FW_A-Dialer1] dialer bundle 1
[FW_A-Dialer1] ip address ppp-negotiate
[FW_A-Dialer1] ppp pap local-user admin password cipher Admin@123
[FW_A-Dialer1] quit
[FW_A] interface GigabitEthernet 0/0/2
[FW_A-GigabitEthernet0/0/2] pppoe-client dial-bundle-number 1
[FW_A-GigabitEthernet0/0/2] quit
# Add the interfaces on FW_A to security zones.
[FW_A] firewall zone trust
[FW_A-zone-trust] add interface GigabitEthernet0/0/1.1
[FW_A-zone-trust] add interface GigabitEthernet0/0/1.2
[FW_A-zone-trust] add interface GigabitEthernet0/0/1.3
[FW_A-zone-trust] quit
[FW_A] firewall zone untrust
[FW_A-zone-untrust] add interface GigabitEthernet 0/0/2
[FW_A-zone-untrust] add interface Dialer 1
[FW_A-zone-untrust] quit
# Configure the interface IP addresses on FW_B.
<FW_B> system-view
[FW_B] interface GigabitEthernet 0/0/1
[FW_B-GigabitEthernet0/0/1] ip address 10.1.3.1 255.255.255.0
[FW_B-GigabitEthernet0/0/1] quit
[FW_B] interface GigabitEthernet 0/0/2
[FW_B-GigabitEthernet0/0/2] ip address 2.2.2.2 255.255.255.252
[FW_B-GigabitEthernet0/0/2] quit
# Add the interfaces on FW_B to security zones.
[FW_B] firewall zone trust
[FW_B-zone-trust] add interface GigabitEthernet 0/0/1
[FW_B-zone-trust] quit
[FW_B] firewall zone untrust
[FW_B-zone-untrust] add interface GigabitEthernet 0/0/2
[FW_B-zone-untrust] quit
# Configure the interface IP addresses on FW_C.
<FW_C> system-view
[FW_C] interface GigabitEthernet 0/0/1
[FW_C-GigabitEthernet0/0/1] ip address 192.168.0.1 255.255.255.0
[FW_C-GigabitEthernet0/0/1] quit
[FW_C] interface GigabitEthernet 0/0/2
[FW_C-GigabitEthernet0/0/2] ip address 3.3.3.3 255.255.255.252
[FW_C-GigabitEthernet0/0/2] quit
# Add the interfaces on FW_C to security zones.
[FW_C] firewall zone trust
[FW_C-zone-trust] add interface GigabitEthernet 0/0/1
[FW_C-zone-trust] quit
[FW_C] firewall zone untrust
[FW_C-zone-untrust] add interface GigabitEthernet 0/0/2
[FW_C-zone-untrust] quit
Step 2 Configure security policies.
# Configure on FW_A an outbound security policy for the trust-untrust interzone to allow the
OA and ERP systems to access the provincial branch.
[FW_A] policy interzone trust untrust outbound
[FW_A-policy-interzone-trust-untrust-outbound] policy 1
[FW_A-policy-interzone-trust-untrust-outbound-1] policy source 10.1.1.0 0.0.0.255
[FW_A-policy-interzone-trust-untrust-outbound-1] policy source 10.1.2.0 0.0.0.255
[FW_A-policy-interzone-trust-untrust-outbound-1] policy destination 10.1.3.0 0.0.0.255
[FW_A-policy-interzone-trust-untrust-outbound-1] action permit
[FW_A-policy-interzone-trust-untrust-outbound-1] quit
# Configure on FW_A an outbound security policy for the trust-untrust interzone to allow the
financial system to access the financial management server of the HQ.
[FW_A-policy-interzone-trust-untrust-outbound] policy 2
[FW_A-policy-interzone-trust-untrust-outbound-2] policy source 192.168.1.0 0.0.0.255
[FW_A-policy-interzone-trust-untrust-outbound-2] policy destination 192.168.0.200 0
[FW_A-policy-interzone-trust-untrust-outbound-2] action permit
[FW_A-policy-interzone-trust-untrust-outbound-2] quit
[FW_A-policy-interzone-trust-untrust-outbound] quit
# Configure on FW_A security policies for the local-untrust interzone to permit IPSec
packets.
[FW_A] ip service-set udp500 type object
[FW_A-object-service-set-udp500] service protocol udp source-port 500 destination-port 500
[FW_A-object-service-set-udp500] quit
[FW_A] policy interzone local untrust outbound
[FW_A-policy-interzone-local-untrust-outbound] policy 1
[FW_A-policy-interzone-local-untrust-outbound-1] policy destination 2.2.2.2 0
[FW_A-policy-interzone-local-untrust-outbound-1] policy destination 3.3.3.3 0
[FW_A-policy-interzone-local-untrust-outbound-1] policy service service-set udp500
[FW_A-policy-interzone-local-untrust-outbound-1] action permit
[FW_A-policy-interzone-local-untrust-outbound-1] quit
[FW_A-policy-interzone-local-untrust-outbound] quit
[FW_A] policy interzone local untrust inbound
[FW_A-policy-interzone-local-untrust-inbound] policy 1
[FW_A-policy-interzone-local-untrust-inbound-1] policy source 2.2.2.2 0
[FW_A-policy-interzone-local-untrust-inbound-1] policy source 3.3.3.3 0
[FW_A-policy-interzone-local-untrust-inbound-1] policy service service-set esp
[FW_A-policy-interzone-local-untrust-inbound-1] action permit
[FW_A-policy-interzone-local-untrust-inbound-1] quit
[FW_A-policy-interzone-local-untrust-inbound] quit
[Dr. WoW's comment] The IPSec negotiation packets use UDP with both the source and
destination ports being 500; therefore, a service set, "udp500", is defined here to specify UDP
and the source and destination ports (both are 500) and is referenced in security policies. In
addition, ESP is used to encrypt IPSec service packets. The predefined esp protocol is directly
referenced in security policies.
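The two kinds of IPSec traffic these policies admit can be stated precisely: IKE negotiation is UDP with source and destination port 500, and the encrypted data itself is IP protocol 50 (ESP), which has no ports at all. A small Python sketch of this classification (ours, purely for illustration; NAT traversal on UDP 4500 is deliberately not covered here, matching the example):

```python
def matches_ipsec_policies(proto, sport=None, dport=None):
    """Classify a packet against the two services the security policies
    reference: the custom 'udp500' service set and the predefined esp."""
    if proto == "udp" and sport == 500 and dport == 500:
        return "udp500"   # IKE negotiation packets
    if proto == "esp":    # IP protocol number 50; no ports
        return "esp"      # encrypted IPSec service packets
    return None

matches_ipsec_policies("udp", 500, 500)    # -> "udp500"
matches_ipsec_policies("esp")              # -> "esp"
matches_ipsec_policies("udp", 4500, 4500)  # NAT-T, not covered -> None
```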
# Configure on FW_B an inbound security policy for the trust-untrust interzone to allow the
OA and ERP systems to access the provincial branch.
[FW_B] policy interzone trust untrust inbound
[FW_B-policy-interzone-trust-untrust-inbound] policy 1
[FW_B-policy-interzone-trust-untrust-inbound-1] policy source 10.1.1.0 0.0.0.255
[FW_B-policy-interzone-trust-untrust-inbound-1] policy source 10.1.2.0 0.0.0.255
[FW_B-policy-interzone-trust-untrust-inbound-1] policy destination 10.1.3.0 0.0.0.255
[FW_B-policy-interzone-trust-untrust-inbound-1] action permit
[FW_B-policy-interzone-trust-untrust-inbound-1] quit
[FW_B-policy-interzone-trust-untrust-inbound] quit
# Configure on FW_B an inbound security policy for the local-untrust interzone to permit
IPSec packets.
[FW_B] ip service-set udp500 type object
[FW_B-object-service-set-udp500] service protocol udp source-port 500 destination-port 500
[FW_B-object-service-set-udp500] quit
[FW_B] policy interzone local untrust inbound
[FW_B-policy-interzone-local-untrust-inbound] policy 1
[FW_B-policy-interzone-local-untrust-inbound-1] policy destination 2.2.2.2 0
[FW_B-policy-interzone-local-untrust-inbound-1] policy service service-set udp500
[FW_B-policy-interzone-local-untrust-inbound-1] policy service service-set esp
[FW_B-policy-interzone-local-untrust-inbound-1] action permit
[FW_B-policy-interzone-local-untrust-inbound-1] quit
[FW_B-policy-interzone-local-untrust-inbound] quit
# Configure on FW_C an inbound security policy for the trust-untrust interzone to allow the
financial system to access the financial management server.
[FW_C] policy interzone trust untrust inbound
[FW_C-policy-interzone-trust-untrust-inbound] policy 1
[FW_C-policy-interzone-trust-untrust-inbound-1] policy source 192.168.1.0 0.0.0.255
[FW_C-policy-interzone-trust-untrust-inbound-1] policy destination 192.168.0.200 0
[FW_C-policy-interzone-trust-untrust-inbound-1] action permit
[FW_C-policy-interzone-trust-untrust-inbound-1] quit
[FW_C-policy-interzone-trust-untrust-inbound] quit
# Configure on FW_C an outbound security policy for the trust-untrust interzone to allow PCs
to access the Internet.
[FW_C] policy interzone trust untrust outbound
[FW_C-policy-interzone-trust-untrust-outbound] policy 1
[FW_C-policy-interzone-trust-untrust-outbound-1] policy source range 192.168.0.2 192.168.0.100
[FW_C-policy-interzone-trust-untrust-outbound-1] action permit
[FW_C-policy-interzone-trust-untrust-outbound-1] quit
[FW_C-policy-interzone-trust-untrust-outbound] quit
# Configure on FW_C an inbound security policy for the local-untrust interzone to permit
IPSec packets.
[FW_C] ip service-set udp500 type object
[FW_C-object-service-set-udp500] service protocol udp source-port 500 destination-port 500
[FW_C-object-service-set-udp500] quit
[FW_C] policy interzone local untrust inbound
[FW_C-policy-interzone-local-untrust-inbound] policy 1
[FW_C-policy-interzone-local-untrust-inbound-1] policy destination 3.3.3.3 0
[FW_C-policy-interzone-local-untrust-inbound-1] policy service service-set udp500
[FW_C-policy-interzone-local-untrust-inbound-1] policy service service-set esp
[FW_C-policy-interzone-local-untrust-inbound-1] action permit
[FW_C-policy-interzone-local-untrust-inbound-1] quit
[FW_C-policy-interzone-local-untrust-inbound] quit
Step 3 Configure IPSec.
# Configure ACLs on FW_A to define data flows to be protected by IPSec.
[FW_A] acl 3000
[FW_A-acl-adv-3000] rule permit ip source 10.1.0.0 0.0.255.255 destination 10.1.3.0 0.0.0.255
[FW_A-acl-adv-3000] quit
[FW_A] acl 3001
[FW_A-acl-adv-3001] rule permit ip source 192.168.1.0 0.0.0.255 destination 192.168.0.0 0.0.0.255
[FW_A-acl-adv-3001] quit
# Configure an ACL on FW_B to define the data flow to be protected by IPSec.
[FW_B] acl 3000
[FW_B-acl-adv-3000] rule permit ip source 10.1.3.0 0.0.0.255 destination 10.1.0.0 0.0.255.255
[FW_B-acl-adv-3000] quit
# Configure an ACL on FW_C to define the data flow to be protected by IPSec.
[FW_C] acl 3000
[FW_C-acl-adv-3000] rule permit ip source 192.168.0.0 0.0.0.255 destination 192.168.1.0 0.0.0.255
[FW_C-acl-adv-3000] quit
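Note that these ACL rules use wildcard masks, in which 0 bits must match and 1 bits are don't-care; this is why the single source 10.1.0.0 0.0.255.255 on FW_A's ACL 3000 covers both the OA subnet (10.1.1.0/24) and the ERP subnet (10.1.2.0/24). A minimal Python sketch of wildcard-mask matching (function name is ours, for illustration):

```python
import ipaddress

def acl_match(addr, base, wildcard):
    """Wildcard-mask match: bits that are 0 in the wildcard must equal
    the base address; bits that are 1 in the wildcard are don't-care."""
    a = int(ipaddress.ip_address(addr))
    b = int(ipaddress.ip_address(base))
    w = int(ipaddress.ip_address(wildcard))
    return (a & ~w) == (b & ~w)

acl_match("10.1.1.5", "10.1.0.0", "0.0.255.255")     # True: OA subnet
acl_match("10.1.2.7", "10.1.0.0", "0.0.255.255")     # True: ERP subnet
acl_match("192.168.1.9", "10.1.0.0", "0.0.255.255")  # False: financial system
```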
# Configure IPSec proposals on FW_A.
[FW_A] ipsec proposal pro1
[FW_A-ipsec-proposal-pro1] encapsulation-mode tunnel
[FW_A-ipsec-proposal-pro1] transform esp
[FW_A-ipsec-proposal-pro1] esp authentication-algorithm sha1
[FW_A-ipsec-proposal-pro1] esp encryption-algorithm aes
[FW_A-ipsec-proposal-pro1] quit
[FW_A] ipsec proposal pro2
[FW_A-ipsec-proposal-pro2] encapsulation-mode tunnel
[FW_A-ipsec-proposal-pro2] transform esp
[FW_A-ipsec-proposal-pro2] esp authentication-algorithm sha1
[FW_A-ipsec-proposal-pro2] esp encryption-algorithm aes
[FW_A-ipsec-proposal-pro2] quit
# Configure an IPSec proposal on FW_B.
[FW_B] ipsec proposal pro1
[FW_B-ipsec-proposal-pro1] encapsulation-mode tunnel
[FW_B-ipsec-proposal-pro1] transform esp
[FW_B-ipsec-proposal-pro1] esp authentication-algorithm sha1
[FW_B-ipsec-proposal-pro1] esp encryption-algorithm aes
[FW_B-ipsec-proposal-pro1] quit
# Configure an IPSec proposal on FW_C.
[FW_C] ipsec proposal pro1
[FW_C-ipsec-proposal-pro1] encapsulation-mode tunnel
[FW_C-ipsec-proposal-pro1] transform esp
[FW_C-ipsec-proposal-pro1] esp authentication-algorithm sha1
[FW_C-ipsec-proposal-pro1] esp encryption-algorithm aes
[FW_C-ipsec-proposal-pro1] quit
# Configure IKE proposals on FW_A.
[FW_A] ike proposal 1
[FW_A-ike-proposal-1] authentication-method pre-share
[FW_A-ike-proposal-1] authentication-algorithm sha1
[FW_A-ike-proposal-1] encryption-algorithm aes-cbc
[FW_A-ike-proposal-1] dh group2
[FW_A-ike-proposal-1] integrity-algorithm aes-xcbc-96
[FW_A-ike-proposal-1] quit
[FW_A] ike proposal 2
[FW_A-ike-proposal-2] authentication-method pre-share
[FW_A-ike-proposal-2] authentication-algorithm sha1
[FW_A-ike-proposal-2] encryption-algorithm aes-cbc
[FW_A-ike-proposal-2] dh group2
[FW_A-ike-proposal-2] integrity-algorithm aes-xcbc-96
[FW_A-ike-proposal-2] quit
# Configure an IKE proposal on FW_B.
[FW_B] ike proposal 1
[FW_B-ike-proposal-1] authentication-method pre-share
[FW_B-ike-proposal-1] authentication-algorithm sha1
[FW_B-ike-proposal-1] encryption-algorithm aes-cbc
[FW_B-ike-proposal-1] dh group2
[FW_B-ike-proposal-1] integrity-algorithm aes-xcbc-96
[FW_B-ike-proposal-1] quit
# Configure an IKE proposal on FW_C.
[FW_C] ike proposal 1
[FW_C-ike-proposal-1] authentication-method pre-share
[FW_C-ike-proposal-1] authentication-algorithm sha1
[FW_C-ike-proposal-1] encryption-algorithm aes-cbc
[FW_C-ike-proposal-1] dh group2
[FW_C-ike-proposal-1] integrity-algorithm aes-xcbc-96
[FW_C-ike-proposal-1] quit
[Dr. WoW's comment] The parameters of IPSec and IKE proposals created on the two
firewalls of the IPSec tunnel must be the same.
# Configure the IKE peers on FW_A.
[FW_A] ike local-name FW_A
[FW_A] ike peer fwb
[FW_A-ike-peer-fwb] ike-proposal 1
[FW_A-ike-peer-fwb] local-id-type fqdn
[FW_A-ike-peer-fwb] remote-address 2.2.2.2
[FW_A-ike-peer-fwb] pre-shared-key Admin@123
[FW_A-ike-peer-fwb] quit
[FW_A] ike peer fwc
[FW_A-ike-peer-fwc] ike-proposal 2
[FW_A-ike-peer-fwc] local-id-type fqdn
[FW_A-ike-peer-fwc] remote-address 3.3.3.3
[FW_A-ike-peer-fwc] pre-shared-key Admin@456
[FW_A-ike-peer-fwc] quit
# Configure the IKE peer on FW_B.
[FW_B] ike peer fwa
[FW_B-ike-peer-fwa] ike-proposal 1
[FW_B-ike-peer-fwa] remote-id FW_A
[FW_B-ike-peer-fwa] pre-shared-key Admin@123
[FW_B-ike-peer-fwa] quit
# Configure the IKE peer on FW_C.
[FW_C] ike peer fwa
[FW_C-ike-peer-fwa] ike-proposal 1
[FW_C-ike-peer-fwa] remote-id FW_A
[FW_C-ike-peer-fwa] pre-shared-key Admin@456
[FW_C-ike-peer-fwa] quit
[Dr. WoW's comment] In this example, FW_B and FW_C have fixed public IP addresses;
therefore, FW_A can authenticate them by IP addresses. FW_A does not have a fixed public
IP address; therefore, FW_B and FW_C use FQDN to verify FW_A.
IP address authentication is the default method. For authentication using FQDN, run the
local-id-type fqdn command on FW_A in the IKE peer view and then the ike local-name
local-name command in the system view. Meanwhile, run remote-id id on FW_B and FW_C
in the IKE peer view, and the id must be the same as the local-name configured through ike
local-name on FW_A.
In addition, if the ike local-name command is not executed, FW_A will send its device name
as the local name to FW_B and FW_C for authentication. The device name of FW_A in this
example happens to be "FW_A"; therefore, the remote-id FW_A command works on FW_B
and FW_C either way, and the ike local-name FW_A command is optional on FW_A.
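The responder-side check can be summarized as follows: because FW_A's address is dynamic, FW_B and FW_C do not verify the peer's source IP address at all; they only verify that the FQDN identity carried in the IKE negotiation equals the configured remote-id. A minimal Python sketch of that check (names and signature are ours, for illustration):

```python
def accept_ike_peer(configured_remote_id, received_id_type, received_id):
    """Responder-side identity check for a peer with a dynamic address:
    the source IP is not checked, only the FQDN identity payload."""
    return received_id_type == "fqdn" and received_id == configured_remote_id

accept_ike_peer("FW_A", "fqdn", "FW_A")  # True: matches remote-id FW_A
accept_ike_peer("FW_A", "fqdn", "FW_X")  # False: identity mismatch
```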
# Configure IPSec policies on FW_A.
[FW_A] ipsec policy map1 1 isakmp
[FW_A-ipsec-policy-isakmp-map1-1] security acl 3000
[FW_A-ipsec-policy-isakmp-map1-1] ike-peer fwb
[FW_A-ipsec-policy-isakmp-map1-1] proposal pro1
[FW_A-ipsec-policy-isakmp-map1-1] quit
[FW_A] ipsec policy map1 2 isakmp
[FW_A-ipsec-policy-isakmp-map1-2] security acl 3001
[FW_A-ipsec-policy-isakmp-map1-2] ike-peer fwc
[FW_A-ipsec-policy-isakmp-map1-2] proposal pro2
[FW_A-ipsec-policy-isakmp-map1-2] quit
# Configure an IPSec policy on FW_B.
[FW_B] ipsec policy-template map_temp 1
[FW_B-ipsec-policy-template-map_temp-1] security acl 3000
[FW_B-ipsec-policy-template-map_temp-1] ike-peer fwa
[FW_B-ipsec-policy-template-map_temp-1] proposal pro1
[FW_B-ipsec-policy-template-map_temp-1] quit
[FW_B] ipsec policy map1 1 isakmp template map_temp
# Configure an IPSec policy on FW_C.
[FW_C] ipsec policy-template map_temp 1
[FW_C-ipsec-policy-template-map_temp-1] security acl 3000
[FW_C-ipsec-policy-template-map_temp-1] ike-peer fwa
[FW_C-ipsec-policy-template-map_temp-1] proposal pro1
[FW_C-ipsec-policy-template-map_temp-1] quit
[FW_C] ipsec policy map1 1 isakmp template map_temp
# Apply the IPSec policies on FW_A.
[FW_A] interface Dialer 1
[FW_A-Dialer1] ipsec policy map1
[FW_A-Dialer1] quit
# Apply the IPSec policy on FW_B.
[FW_B] interface GigabitEthernet 0/0/2
[FW_B-GigabitEthernet0/0/2] ipsec policy map1
[FW_B-GigabitEthernet0/0/2] quit
# Apply the IPSec policy on FW_C.
[FW_C] interface GigabitEthernet 0/0/2
[FW_C-GigabitEthernet0/0/2] ipsec policy map1
[FW_C-GigabitEthernet0/0/2] quit
Step 4 Configure NAT.
# Configure a NAT policy on FW_C for the municipal branch financial system to access the
financial management server; and the translated address is GE0/0/1's IP address 192.168.0.1.
[FW_C] nat-policy interzone trust untrust inbound
[FW_C-nat-policy-interzone-trust-untrust-inbound] policy 1
[FW_C-nat-policy-interzone-trust-untrust-inbound-1] policy source 192.168.1.0 0.0.0.255
[FW_C-nat-policy-interzone-trust-untrust-inbound-1] policy destination 192.168.0.200 0
[FW_C-nat-policy-interzone-trust-untrust-inbound-1] action source-nat
[FW_C-nat-policy-interzone-trust-untrust-inbound-1] easy-ip GigabitEthernet0/0/1
[FW_C-nat-policy-interzone-trust-untrust-inbound-1] quit
[FW_C-nat-policy-interzone-trust-untrust-inbound] quit
# Configure a NAT address pool on FW_C. Assume the public IP address the HQ obtained
from the ISP is 3.3.3.100.
[FW_C] nat address-group 1 3.3.3.100 3.3.3.100
# Configure a NAT policy on FW_C to allow PCs to access the Internet, and the translated
addresses are the IP addresses of the address pool.
[FW_C] nat-policy interzone trust untrust outbound
[FW_C-nat-policy-interzone-trust-untrust-outbound] policy 1
[FW_C-nat-policy-interzone-trust-untrust-outbound-1] policy source range 192.168.0.2 192.168.0.100
[FW_C-nat-policy-interzone-trust-untrust-outbound-1] action source-nat
[FW_C-nat-policy-interzone-trust-untrust-outbound-1] address-group 1
[FW_C-nat-policy-interzone-trust-untrust-outbound-1] quit
[FW_C-nat-policy-interzone-trust-untrust-outbound] quit
[Dr. WoW's comment] Two NAT policies are configured on FW_C in this example: one for
incoming packets (from the municipal branch financial system) and one for outgoing packets
(Internet access from the PCs at the HQ).
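Both Easy-IP and NAPT follow the same mechanism: each flow's source address is rewritten to a translated address with a fresh port, and the mapping is recorded in a session table so that replies can be restored. The only difference is where the translated address comes from (the interface's own address for Easy-IP, an address pool for NAPT). The following Python sketch is our illustrative model of this mechanism, not the device's algorithm; the starting port number is arbitrary.

```python
import itertools

def make_source_nat(translated_ip, port_start=2048):
    """Sketch of source NAT to a single address: each distinct flow's
    source is rewritten to translated_ip with a fresh port, and the
    mapping is kept so the same flow always translates the same way."""
    ports = itertools.count(port_start)
    table = {}
    def translate(src_ip, src_port):
        key = (src_ip, src_port)
        if key not in table:
            table[key] = (translated_ip, next(ports))
        return table[key]
    return translate

# Easy-IP on FW_C: the translated address is GE0/0/1's own IP.
easy_ip = make_source_nat("192.168.0.1")
easy_ip("192.168.1.10", 40000)  # first flow -> ("192.168.0.1", 2048)
```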
Step 5 Configure routes.
# Configure a default route on FW_A with the next hop being the dialer 1 interface.
[FW_A] ip route-static 0.0.0.0 0.0.0.0 Dialer 1
# Configure a default route on FW_B, and assume the IP address of the next hop provided by
the ISP is 2.2.2.1.
[FW_B] ip route-static 0.0.0.0 0.0.0.0 2.2.2.1
# Configure a default route on FW_C, and assume the IP address of the next hop provided by
the ISP is 3.3.3.1.
[FW_C] ip route-static 0.0.0.0 0.0.0.0 3.3.3.1
# Configure a black-hole route for the NAT address pool on FW_C to avoid routing loops.
[FW_C] ip route-static 3.3.3.100 32 NULL 0
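The black-hole route works through longest-prefix matching: a stray packet destined for the NAT pool address 3.3.3.100 that matches no NAT session would otherwise follow the default route back to the ISP, which would route it straight back to FW_C, creating a loop. The /32 null route is more specific than the default route, so such packets are simply dropped. A minimal Python sketch of the lookup (our illustration using the standard ipaddress module):

```python
import ipaddress

# FW_C's two static routes from this example.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "3.3.3.1"),    # default via ISP
    (ipaddress.ip_network("3.3.3.100/32"), "NULL0"),   # black-hole for NAT pool
]

def lookup(dst):
    """Longest-prefix match: among all routes containing dst, pick the
    one with the longest prefix and return its next hop."""
    addr = ipaddress.ip_address(dst)
    candidates = [(net, nh) for net, nh in ROUTES if addr in net]
    return max(candidates, key=lambda r: r[0].prefixlen)[1]

lookup("3.3.3.100")  # -> "NULL0": dropped instead of looping via the ISP
lookup("8.8.8.8")    # -> "3.3.3.1": ordinary Internet traffic
```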
----End
13.4 Highlights
This example seems to have nothing special, but there is still something to be learnt from it.
 At the municipal branch, FW_A connects to the OA, ERP, and financial systems on the
intranet through subinterfaces, and IP addresses are configured on the subinterfaces to
terminate the VLANs. This is the common configuration when a firewall connects to
multiple VLANs through one physical interface.
 Besides, the IPSec policy configured on FW_A is applied to the dialer interface (a
logical interface), rather than to the physical interface. This is because FW_A, as a
PPPoE client, obtains its public IP address by dialing up through the dialer interface,
and the obtained address is used as the address of the IPSec tunnel initiator. Therefore,
the IPSec policy is applied to the dialer interface.
 Finally, because the financial management server at the HQ can receive access requests
only from a specific IP address, a special NAT policy is configured on FW_C to translate
the source addresses of packets from the municipal branch financial system into the IP
address of GE0/0/1. Different from conventional NAT for outgoing packets, this NAT
policy translates the addresses of incoming packets and replaces one private address
with another. This is a novel way to use NAT.
To view technical posts on the web page, click:
http://support.huawei.com/ecommunity/bbs/10247000.html
Copyright © Huawei Technologies Co., Ltd. 2016. All rights reserved.