AIRBORNE NETWORK ARCHITECTURE
System Communications Description
&
Technical Architecture Profile
Version 1.1
7 October 2004
This document contains a description of the USAF Airborne Network Architecture. It is expected that this
architecture will continue to be developed in terms of both the level of detail in which it is expressed and in
the evolution of its fundamental concepts. Please contact the Chair of the Airborne Network Special
Interest Group to determine the most recent version of this document.
Prepared by HQ ESC/NI1 for the
USAF Airborne Network Special Interest Group
Table of Contents

1. Introduction
1.1 Definition of Airborne Network
1.2 Purpose of Document
1.3 Relationship to Other Documents
1.4 Contents
1.5 Intended Use
2. Operational Context
2.1 Operational Concept Description for the Airborne Network (OV-1)
2.2 Objective Airborne Network Communications Capabilities
3. Airborne Network Tenets
4. Airborne Network Description (SV-1)
4.1 Overview
4.2 Network Functions
4.2.1 Inter-node Connectivity
4.2.2 Information Assurance (IA)
4.2.3 Network Management
4.2.4 Link Management
4.2.5 Network Services
4.3 Network Topologies (Nodes and Links)
4.3.1 Node Types
4.3.2 Link Types
4.3.2.1 Backbone
4.3.2.2 Subnet
4.3.2.3 Network Access
4.3.2.4 Legacy
4.3.3 Typical Topologies
4.3.3.1 Space, Air, Ground Tether
4.3.3.2 Flat Ad-Hoc
4.3.3.3 Tiered Ad-Hoc
4.3.3.4 Persistent Backbone
5. Airborne Network System Functions (SV-4)
5.1 On-Board Infrastructure
5.1.1 Intra Platform Distribution
5.1.1.1 Serial Data Buses
5.1.1.2 Local Area Networks
5.1.2 Platform Information Assurance
5.1.3 Platform Network Management
5.1.4 Gateways/Proxies
5.1.4.1 Legacy Infrastructure Gateways
5.1.4.2 Legacy Transmission System Data Link Gateways
5.1.4.3 TDL Gateways
5.1.4.4 Performance Enhancing Proxies
5.2 AN Equipment
5.2.1 Inter-node Connectivity
5.2.1.1 Routing/Switching
5.2.1.2 Quality of Service (QoS)/Class of Service (CoS)
5.2.1.2.1 QoS-Aware Application Interfaces
5.2.1.2.2 QoS/CoS Mechanisms
5.2.1.2.3 QoS-Based Routing/Switching
5.2.1.2.4 QoS Manager
5.2.2 Information Assurance
5.2.2.1 GIG IA Architecture
5.2.2.2 Supporting Infrastructures
5.2.2.2.1 Security Management Infrastructure (SMI)
5.2.2.2.2 DoD Key Management Infrastructure (KMI)
5.2.2.2.3 Public Key Infrastructure (PKI)
5.2.2.3 AN Information Flows
5.2.2.3.1 Least Privilege
5.2.2.3.2 User Traffic
5.2.2.3.3 Policy Dissemination
5.2.2.3.4 Network Management
5.2.2.3.5 Network Control
5.2.2.4 AN IA Boundary
5.2.2.5 Dynamic Policy Based Security Management (PBSM)
5.2.2.5.1 PBSM Framework
5.2.2.5.2 Policy Protection
5.2.2.5.3 Security Management System
5.2.2.6 Data Protection
5.2.2.6.1 Application Layer Protection
5.2.2.6.2 Network Layer Protection
5.2.2.6.3 Link Layer Protection
5.2.2.7 Key Management
5.2.2.8 Authentication System
5.2.2.9 ASWR
5.2.2.9.1 Network Intrusion Detection System
5.2.2.9.2 Virus Detection, Prevention, and Removal
5.2.2.9.3 Vulnerability Assessment System (VAS)
5.2.2.10 Protocol Security
5.2.2.11 Other Security Functions
5.2.2.11.1 Firewalls
5.2.2.11.2 Guards – Cross Domain Solutions
5.2.2.11.3 Covert Channels
5.2.2.12 Security Modeling and Simulation
5.2.3 Network Management
5.2.3.1 Cluster-Based Management Architecture
5.2.3.1.1 Network Manager
5.2.3.1.2 Cluster Managers
5.2.3.1.3 Intelligent NM Agents
5.2.3.2 Policy Based Management Framework
5.2.3.3 Network Management System Components
5.2.4 Link Management
5.2.4.1 Link Management Architecture
5.2.4.2 Link Management System Components
5.2.5 Network Services
5.2.5.1 Overview
5.2.5.2 Name Resolution Service
5.2.5.2.1 Naming Service Features
5.2.5.2.2 Naming Service Architecture
5.2.5.3 Dynamic Network Configuration Management
5.2.5.4 Network Time Service
5.3 Off-Board Transmission Systems
6. GIG Integration and Interoperability
6.1 GIG Operational Mission Concepts
6.1.1 Provide Common Services
6.1.2 Provide Worldwide Access
6.1.3 Provide Dynamic Resource Allocation
6.1.4 Provide Dynamic Group Formation
6.1.5 Provide Computer Network Defense (CND)
6.1.6 Provide Management and Control of GIG Network and Resources
6.2 GIG Enterprise Services
6.2.1 Enterprise Service Management/Network Operations
6.2.2 Information Assurance/Security
6.3 GIG Transport Convergence
6.4 GIG Routing Architecture
6.5 GIG Quality of Service Architecture
6.6 Interface Definition
8. Node and Link Configurations for Candidate Platform Types (SV-5)
8.1 Approach
8.2 Fighter Platform
8.2.1 Operational Profile
8.2.2 AN Capabilities, Links, and Topologies
8.2.3 AN System Configuration
8.2.4 Airborne Fighter Platform AN Issues and Risks
8.3 Airborne C4ISR Platform
8.3.1 Operational Profile
8.3.2 AN Capabilities, Links, and Topologies
8.3.3 AN System Configuration
8.3.4 C4ISR Platform AN Issues and Risks
8.4 Airborne Communications Relay Platform
8.4.1 Operational Profile
8.4.2 AN Capabilities, Links, and Topologies
8.4.3 AN System Configuration
8.4.4 Airborne Communications Relay Platform AN Issues and Risks
9. Recommended Network Standards
9.1 Current Standards (TV-1)
9.2 Emerging Standards (TV-2)
9.3 Areas for Further Development
10. AN Architecture Issues
11. List of Acronyms
12. References
1. Introduction
This document represents the work performed by the USAF Airborne Network Special
Interest Group to define an Objective Airborne Network Architecture. Additional efforts
are planned to further define this architecture. TBDs (to be determined) inserted in the
text indicate areas still being worked.
1.1 Definition of Airborne Network
The Airborne Network is defined to be an infrastructure that provides communication
transport services through at least one node that is on a platform capable of flight. This
can best be visualized in the context of the operating domains served by the Global
Information Grid (GIG). The Transformational Satellite Communications System
(TSAT) network will provide space connectivity, and the GIG Bandwidth Expansion
(GIG-BE) network together with networks such as those provided under the Combat
Information Transport System and Theater Deployable Communications will provide
surface connectivity. Airborne connectivity within the GIG will be provided by the
Airborne Network. The Airborne Network will connect to both the space and surface
networks, making it an integral part of the communications fabric of the GIG.
1.2 Purpose of Document
In general, a network architecture defines a set of high-level design principles that guides
the technical design of a network. Its role is to ensure that the resulting technical design
will be consistent and coherent – the pieces fit together smoothly – and that the design
will satisfy the requirements on network function associated with the architecture. The
architecture is more general than a particular conformant technical design, and is
expected to be relatively long-lived – applicable to more than one generation of
technology. Finally, the architecture should guide technology development in a
consistent direction, preventing the implementation of inconsistent point solutions
throughout the network that could lead to the loss of functionality and interoperability.
[Source: Developing a Next-Generation Internet Architecture, July 2000]
The Airborne Network Architecture defines the structure of Airborne Network
components, their relationships, and the principles and guidelines governing their design
and evolution over time. It consists of the design tenets, technical standards, architectural
views (system and technical) and a description of the operational capabilities achieved.
The architecture is documented following the guidance of the DoD Architecture
Framework. This document defines a “To-Be” or objective set of airborne network
capabilities, functions and system components to be used to assist planning activities
associated with the acquisition or implementation of a particular network capability. It
should not be used on its own as an all-encompassing design for the Airborne Network.
1.3 Relationship to Other Documents
The following documents also provide acquisition or implementation planning details for
the Airborne Network, and are related to this document as indicated:
• Airborne Network Architecture Progress Summary Version 1.0 – Provides the scope, purpose, and intended uses of the Airborne Network Architecture, describes the process used to develop the architecture, and summarizes the major results and conclusions.
• Airborne Network Architecture Roadmap (Draft) – Provides a time-phased approach for implementing the Airborne Network capabilities and network functions.
• Airborne Network Technical Requirements Document (Draft) – Provides detailed technical requirements for system components needed to implement the Airborne Network.
• ConstellationNet Addendum to C2 Constellation CONOPS – Provides the required operational capabilities of the ConstellationNet, of which the Airborne Network is a component.
• USAF ConstellationNet Sub-Enterprise Architecture (formerly known as the AF Infostructure Architecture) – Provides context and overall guidance for domain-level architectures for implementing a net-centric infostructure. Includes operational, system, and technical views of the Air Force information infrastructure (to include the Airborne Network) at the Enterprise level.
• Command and Control Enterprise Reference Architecture (C2ERA) – Provides the technical direction for designing, acquiring, and integrating the computing and communications capabilities of the Air Force C2 Enterprise.
• Net-Centric Enterprise Solutions for Interoperability (NESI) – Provides a technical architecture and implementation guidance to facilitate the design, development, maintenance, evolution, and usage of information systems that support the Network-Centric Warfare (NCW) environment.
1.4 Contents
The Airborne Network Architecture is documented in terms of the following
perspectives:
• Operational Context – Objective communications capabilities that the Airborne Network must provide to the warfighter to support future network-centric operations are described in Section 2.
• Tenets – Network principles applicable to any Airborne Network design that provide high-level guidance for decision making are contained in Section 3.
• Systems Architecture Views – Guidelines for where and when to implement different system functions or technologies are contained in Sections 4 through 8. Section 4 describes the objective Airborne Network functions and topologies; Section 5 identifies the system components needed to provide those functions; Section 6 discusses implications of integrating the Airborne Network into the future Global Information Grid (GIG); Section 7 contains several model diagrams for the network and platforms; and Section 8 presents notional system configurations for several candidate platform types.
• Technical Standards – Technical standards for major Airborne Network components and interfaces, both existing and emerging, are listed in Section 9.
• Issues – Architectural and technical issues identified during the development of the architecture that require further research are presented in Section 10.

1.5 Intended Use
The Airborne Network Architecture is intended to guide the acquisition of network
capability for airborne platforms by defining investment opportunities, system
requirements, technical standards, implementation guidelines, and GIG interoperability
directives.
• Investment Decisions – The Airborne Network Architecture identifies functional areas requiring further investigation and development to define technical solutions and/or standards.
• Requirements Definition – The Airborne Network Architecture provides a common framework to identify required network, platform, and communications component functionality to enable a desired set of network capabilities needed to support future Air Force mission operations.
• Standards Definition – The Airborne Network Architecture provides a list of key technical standards for use in near-term and interim airborne network implementations.
• Implementation Guidance – The Airborne Network Architecture provides a list of tenets and system configuration diagrams that should be used to guide the development of lower-level system architectures and system designs.
• GIG Interoperability Direction – The Airborne Network Architecture identifies GIG interoperability directives and architectural guidance relevant to the airborne network.
2. Operational Context

2.1 Operational Concept Description for the Airborne Network (OV-1)
Network-centric operations and network-centric warfare (NCW) refer to an information
superiority-enabled concept of operations that generates increased combat power and air
mobility by networking sensors, decision makers, and shooters to achieve shared
awareness, increased speed of command, higher tempo of operations, greater lethality,
increased survivability, and a degree of self synchronization. In essence, NCW translates
information superiority into combat power by effectively linking knowledgeable entities
in the battlespace. [Source: Network Centric Warfare, Alberts, Gartska and Stein, 1999]
The Department of Defense (DoD) Joint Vision (JV) 2020 projects a future period of
United States dominance across the full spectrum of military operations. The military
capabilities necessary to realize this vision depend upon achieving Information and
Decision Superiority through the implementation of an internet-like, assured Global
Information Grid (GIG). Within the AF, achieving Information and Decision Superiority
depends upon extending the capabilities of the GIG into the airborne and space
environments. When fully realized, this AF vision will enable interoperable network-centric operations among Joint Service, Allied, and Coalition forces. [Source: Airborne
Network Prioritization Plan]
To realize the AF vision, the extension of the GIG into the airborne domain – the
Airborne Network – must be easy to use, configure, and maintain, and must provide:
• Ubiquitous and assured network access to all Air Force platforms
• GIG core services whose integrity is assured
• Quality appropriate to the commander’s intent and rules of engagement (ROE)
• Rapid response to mission growth, emerging events and changing mission priorities
• End-to-end interoperation with joint services, coalition, and non-DoD partners and legacy systems in all physical network domains (sub-surface, surface, airborne and space)
[Source: Operating Concept for C2 Constellation ControlNet Network Centric
Infostructure]
2.2 Objective Airborne Network Communications Capabilities
The Objective Airborne Network that can provide the capabilities listed in Section 2.1
will be a communications utility offering an adaptable set of communications
capabilities matched to the particular mission, platforms, and communications
transport needs. Communications capabilities can be expressed in terms of the
connectivity that can be established, the services that can be supported over the network
connections, and the operations that are required for the user to establish, maintain, and
access the network connections. Table 2-1 identifies an objective set of communications
capabilities for the Airborne Network. Not all of these capabilities will be needed for
every instantiation of the Airborne Network, but collectively they are necessary to
support the full range of missions, operations, and platforms.
Table 2-1. Summary of Airborne Network Objective Capabilities
(Each network capability and its attributes is followed by the corresponding Airborne Network objective capabilities.)

Connectivity

• Coverage – Geographic span of links directly interfacing to a subject node.
  Objective: Beyond Line of Sight (BLOS) connectivity extending globally (enabling access to anywhere from anywhere).

• Diversity of links – Total number and types of links that can be used to “connect” to the subject node.
  Objectives: Number of links (system and media) matched to the mission and the environment (to enable guaranteed access); types of links extending across the spectrum of radio frequencies, including infrared and optical.

• Throughput – Total average throughput of all links directly interfacing to the subject node.
  Objectives: Throughput matched to the mission and automatically adaptable to accommodate unplanned or transient conditions; dynamically reconfigurable to optimize performance, cost, and mission effectiveness.

• Type of connection – Nature of connections that can be established between the subject node and directly connected nodes.
  Objective: Flexible connections able to forward globally.

• Network interface – Networks that can be directly interfaced from the subject node (e.g., DISN (NIPRNET, SIPRNET, JWICS), Transformational Communications, TDL networks, CDL networks).
  Objective: Interface to AN subnet and backbone links, as well as legacy (i.e., TDL or CDL), coalition, and GIG component networks operating any version of the network protocol (i.e., IPv6 or IPv4), as needed.

Services

• Real-time data – Any data flows that must be sent in real time (i.e., low latency) with assured delivery (e.g., AMTI or GMTI tracks, munition terminal stage updates, RPV control, TCT&ISR tipoffs, NBC alerts).
  Objective: Multiple simultaneous multilevel precedence and preemption (MLPP) real-time data links or nets, as needed.

• Continuous interactive voice (e.g., Voice over IP telephone and radio nets).
  Objective: Multiple simultaneous MLPP voice links or nets, as needed.

• Continuous interactive video (e.g., Video over IP, video teleconferencing).
  Objective: Multiple simultaneous MLPP video links, as needed.

• Streaming multimedia & multicast (e.g., video imagery).
  Objective: Multiple simultaneous MLPP multimedia links, as needed.

• Block transfer & transactional data – Short blocks of interactive data (e.g., Telnet, HTTP, client/server, chat).
  Objective: Multiple simultaneous MLPP block & transactional links, as needed.

• Batch transfer data – Long blocks of bulk data (e.g., email, FTP).
  Objective: Multiple simultaneous MLPP batch data links, as needed.

Operations

• Managing – All aspects related to managing the links and the network, including: planning (frequency allocation, transmission, routing, network services, and traffic); monitoring (performance and use, fault, and security aspects of a link, network, or network component); analyzing (performance optimization and diagnostics); and controlling (adding, removing, initializing, and configuring links, networks, or network components).
  Objectives: Simplified network planning, to include the allocation and configuration of network resources, including legacy networks, when needed; automated analyses of network performance to diagnose faults, determine suboptimal conditions, and identify needed configuration changes; monitoring and controlling of AN link and network resources and interfaces with legacy networks; maintenance of network situational awareness (SA) and distribution of network SA to peer networks; matching the use of network resources to the commander’s operational objectives.

• Forming and Adapting – To include: provisioning (obtaining the needed link and network resources); initialization and restoration (establishing or restoring a link or network service).
  Objective: Automated provisioning, initialization, and restoration of all AN link resources.

• Accessing – All aspects related to obtaining or denying access to a link or network, to include: protection (communications security as well as authentication, authorization, and accounting); detection; and reaction.
  Objectives: Link and subnet protection matched to the threat, with automated detection and reaction; user data and AN management and control data protection matched to the threat.
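The multilevel precedence and preemption (MLPP) entries in Table 2-1 can be illustrated with a small admission-control sketch. This is a hypothetical illustration only, not part of the architecture: the precedence names, capacity figure, and flow names are invented, and the preemption rule (evict lowest-precedence flows until the new flow fits) is one simple way such a scheme could behave.

```python
# Hypothetical sketch of MLPP admission control for a shared link.
# Precedence levels, capacity, and flow names are illustrative assumptions.

PRECEDENCE = {"ROUTINE": 0, "PRIORITY": 1, "IMMEDIATE": 2, "FLASH": 3}

class MLPPLink:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.flows = []  # list of (name, precedence, kbps)

    def used(self):
        return sum(kbps for _, _, kbps in self.flows)

    def admit(self, name, precedence, kbps):
        """Admit a flow, preempting lower-precedence flows if needed."""
        while self.used() + kbps > self.capacity:
            victims = [f for f in self.flows if f[1] < precedence]
            if not victims:
                return False  # nothing of lower precedence to preempt
            # Evict the lowest-precedence flow first.
            self.flows.remove(min(victims, key=lambda f: f[1]))
        self.flows.append((name, precedence, kbps))
        return True

link = MLPPLink(capacity_kbps=1000)
link.admit("email", PRECEDENCE["ROUTINE"], 600)
link.admit("voice net", PRECEDENCE["PRIORITY"], 300)
ok = link.admit("GMTI tracks", PRECEDENCE["FLASH"], 500)  # preempts email
```

In this toy run, the FLASH-precedence flow is admitted by preempting the ROUTINE email flow, matching the table's intent that higher-precedence links or nets displace lower-precedence traffic when resources are scarce.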
3. Airborne Network Tenets
The following statements are proposed as tenets of the Airborne Network (AN)
Architecture. These reflect the underlying principles of any AN design that claims to be
conformant with the AN architecture.
1. Standards Based: AN system components comply with applicable DoD and AF
standards lists.
• Leverage commercial investment in COTS-based networks and their evolution wherever feasible
• Relax standards only for unique must-have DoD features
• Evolve standards to accommodate DoD features
• Migrate towards use of open standards
2. Layered: AN system components are functionally layered.
• Follows the successful COTS Internet model
• Minimizes description of inter-layer interfaces
• Allows technology evolution of layers for maximum cost benefit
3. Modular: AN is inherently modular in nature, capable of being extended and
expanded to meet the changing communications service requirements of the
platforms needed to support any particular mission.
• Components can be continuously added and removed as needed during the time frame of the mission (hours, days), such that the network can be adjusted to fit the mission, during the mission
• User capabilities that need to be supported determine the technical capabilities of the network components selected
• New network components that provide new operational capabilities can be integrated as needed
4. Internetworked: AN is capable of internetworking using all available
commercial and military transmission media (i.e., line-of-sight (LOS) radio
communications paths, satellite communications (SATCOM) paths, and laser
communications (Lasercom) paths).
5. Interoperable: AN is capable of interoperating with other peer networks (e.g.,
space, terrestrial, and warfighter networks) and legacy networks (as needed for
coalition interoperability and transition operations).
6. Implemented as a Utility: AN integrates separate transmission mechanisms with
a single common, standards-based network layer (e.g., IPv4, IPv6) for delivery of
common user (i.e., mission-independent) network services.
7. Adaptable: AN is capable of adapting to accommodate changes in user mission,
operating environment, and threat environment.
8. Efficient: AN efficiently utilizes available communication resources.
9. Autonomous: AN can operate autonomously or as part of a larger inter-network.
 Platform network can operate without connectivity to external nodes
 AN can operate without connectivity to ground nodes
10. Secure: AN supports user and system security.
 Multiple independent levels of security
 User, operations, management, and configuration data integrity and
confidentiality
 Identification and authentication.
11. Managed: AN is capable of integrating into broader AF and joint network
management infrastructures.
12. Policy Based: AN is capable of integrating into policy-based management and
security infrastructures.
4. Airborne Network Description (SV-1)
4.1 Overview
The Target Airborne Network must be capable of supporting the diverse AF missions,
platforms, and communications transport needs of the future. The network will vary
from a single aircraft connected to a ground station to support voice or low speed data, to
a constellation of hundreds of aircraft transporting high speed imagery and real-time
collaborative voice and video. The target network must be capable of forming a topology
that is matched to the particular mission, platforms, and communications transport needs,
and doing so in a way that minimizes the amount of pre-planning and operator
involvement.
The Airborne Network will be described in terms of the network functions it must
perform and the network topologies (i.e., physical or logical configuration of functional
nodes and links) needed to enable the network. The network functional nodes will be
defined in terms of the system components or modules needed to achieve the desired
functionality.
4.2 Network Functions
The Airborne Network functions will be described from five perspectives, as follows:
 Inter-node connectivity – How AN nodes (i.e., platforms) interconnect to each
other.
 Information Assurance – What security functions are needed and where they are applied.
 Link Management – How the network controls and makes use of AN
communications links.
 Network Management – How the network controls its configuration and maintains
its integrity.
 Network Services – What network services are needed and how are they used.
4.2.1 Inter-node Connectivity
AN nodes are capable of establishing connections with one or more other AN nodes,
whether airborne, in space, or on the surface, as needed. The transmission paths used to
establish the physical connections may be asymmetric with respect to bandwidth, and
may be bidirectional or unidirectional (including receive only). Also, the forward and
reverse network connections relative to any node could take different physical paths
through the network. The AN connections may be point-to-point, broadcast, or
multipoint/multicast.
AN nodes are capable of establishing connections to relay (receive and transmit with the
same data formats and on the same media/frequency), translate (receive and transmit with
the same data formats but on different media or frequencies), or gateway (receive and
transmit with different data formats and on different media/frequencies) information, as
needed.
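The relay/translate/gateway distinction above reduces to whether the data format and the transmission medium change between receive and transmit. A minimal sketch of that classification (the function and example identifiers are illustrative, not drawn from this architecture):

```python
from enum import Enum

class ForwardingMode(Enum):
    RELAY = "relay"          # same data format, same media/frequency
    TRANSLATE = "translate"  # same data format, different media/frequency
    GATEWAY = "gateway"      # different data format, different media/frequency

def classify_forwarding(rx_format: str, tx_format: str,
                        rx_medium: str, tx_medium: str) -> ForwardingMode:
    """Classify a receive/transmit pair per the AN definitions above."""
    if rx_format != tx_format:
        return ForwardingMode.GATEWAY
    if rx_medium != tx_medium:
        return ForwardingMode.TRANSLATE
    return ForwardingMode.RELAY
```

For example, receiving and retransmitting the same format on the same frequency classifies as relay, while moving the same format from an LOS link onto a SATCOM link classifies as translate.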
AN nodes are capable of establishing connectivity to other AN nodes based upon a
prearranged network design that prescribes inter-node connectivity (i.e., topology) of the
network. Also, AN nodes are capable of establishing link connectivity to other AN nodes
autonomously, without prearrangements that identify the specific AN nodes, and
dynamically as opportunities and needs arise.
Key inter-node connectivity functions include the following:
 Backbone connectivity – Some AN nodes are capable of establishing stable high
bandwidth network connections for interconnecting other networks and AN subnets,
as needed to satisfy the information transport requirements of the host platform and
network. The backbone connection will consist of persistent or quasi-persistent links
between two or more AN nodes that are high-capacity in comparison to the subnet
links connected to the nodes, permit use of the common network and transport
protocols, and provide access to other subnets and/or the global network. Backbones are
capable of carrying both local and transit traffic. Backbone system components
consist primarily of routers/switches and trunks.
 Subnet connectivity – AN nodes are capable of establishing and internetworking
with prearranged, static, and ad-hoc subnets with specific sets of nodes, as needed, for
instance to support a particular mission, transmission media or location. AN nodes
are capable of leaving a subnet and rejoining it or another subnet at any time with
minimal prearrangements.
 Network access connectivity – AN nodes are capable of establishing legacy and/or
IP network connections using any available media; some AN nodes are capable of
providing access to any space or terrestrial IP network as opportunities and needs
arise.
 Routing/switching – AN nodes are capable of routing or switching IP packets
to/from any local LAN, subnet, backbone, or space or terrestrial IP network (i.e., end
system, intra-area, inter-area, and inter-domain (or autonomous system (AS))
routing) as opportunities and needs arise. AN nodes are capable of routing and
switching both platform and transit traffic as needed. Routing decisions will be made
on an expanded set of metrics to optimize performance for the AN, such as hops,
bandwidth, delay, reliability, load, media preference, BER or packet loss rate,
stability, availability, geographic position, speed, and heading. AN nodes are capable
of handing off routing capabilities to another AN node (such as when a platform goes
off station and is replaced by another platform).
 Quality of Service (QoS)/Class of Service (CoS) – The AN is capable of
establishing and changing connections and forwarding IP-packets on routes and in a
sequence that satisfies the indicated delivery performance requirements associated
with the information flow. The AN will implement standards-based QoS/CoS
mechanisms that are compatible with those implemented in interconnected GIG
networks, enabling timely information exchanges across diverse networks, systems,
and communities of interest (COIs). The QoS/CoS mechanisms must support all
communications services (e.g., voice, data, video), provide preferential treatment
based on priority (with preemption), enable policy-based assignment of service
classes and priorities, rapidly respond to changes in assignments, and work end-to-end
on all host and network devices, as needed.
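The expanded routing metric set described above can be sketched as a weighted composite link cost. The metric names, weights, and stability multiplier below are illustrative assumptions, not values prescribed by this architecture:

```python
def route_cost(link, weights=None):
    """Composite cost for one link from an expanded metric set.

    `link` is a dict of raw metrics; the metric names and weights here are
    illustrative, not prescribed by the AN architecture.
    """
    w = weights or {"hops": 1.0, "delay_ms": 0.1,
                    "loss_rate": 50.0, "inv_bandwidth": 1000.0}
    cost = (w["hops"] * 1.0
            + w["delay_ms"] * link["delay_ms"]
            + w["loss_rate"] * link["loss_rate"]
            + w["inv_bandwidth"] / link["bandwidth_kbps"])
    # Stability (or media preference) can be folded in as a multiplier.
    if not link.get("stable", True):
        cost *= 2.0
    return cost

def best_route(routes, weights=None):
    """Pick the route (a list of links) with the lowest total cost."""
    return min(routes, key=lambda r: sum(route_cost(l, weights) for l in r))
```

A real implementation would carry such a cost inside the routing protocol's metric field; the sketch only shows how several metrics collapse into one comparable number.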
4.2.2 Information Assurance (IA)
AN IA services include identification, authentication, integrity, confidentiality,
availability, non-repudiation, security policy management, and key management. AN IA
components will enable or extend access to GIG based security or may organically
provide the security services. IA services will be enabled regardless of whether the AN
is connected to other networks that comprise the overall GIG, including during periods
of connectivity interruption. IA services will be designed to function under constraints such
as low-bandwidth, high-latency connections, unidirectional links, or with nodes operating
in LPI/LPD modes to include receive-only.
Key Information Assurance functions include the following:
 Policy Based Security Management (PBSM) – PBSM provisions and supports the
specification and enforcement of access control policies, logging, monitoring, and
auditing. Policy based security management enables:
o Specification of authorization policies for access control
o Specification of obligation policies for logging, monitoring, and auditing
o Analyses of policies
 Data Protection – AN security services will protect user data and AN administrative
data, provide cover to transmitted data, and provide the mechanisms to support
Multiple Independent Levels of Security and Multi-Level Security.
o User Data – The AN will employ modern, high-assurance NSA Type 1
encryption and algorithms to secure user packet and circuit data. The AN will
provide data security over operational environments ranging from US military
only to those that will incorporate operations with U.S. military and national
assets and U.S. allies and coalition partners.
o AN Administrative Data – (configuration, management, operations, etc.) – The
AN will employ public key technologies and protocols, such as SSL, TLS,
IPSec to ensure the integrity and confidentiality of AN administrative data.
Administrative data includes that used to monitor the health and status of AN
components and to configure and operate those components, as well as that
needed for actual network operations, such as routing updates.
o AN Transmitted Data – Transmission Security (TRANSEC) algorithms and keys
will provide covertness or cover for transmitted signals to avoid detection,
identification, and exploitation (such as traffic analysis).
o Multiple Independent Levels of Security (MILS) Support – The AN will
provide for simultaneous operations in multiple independent security domains,
each secured by its own encryption device. Once secured, the resultant data
will be transported by a common black transport network.
o Multi-level Security (MLS) Support – The AN will provide for simultaneous
operations in multiple security domains on one red network. The red network
data will be secured using encryption devices and then transported through a
common black transport network.
 Key Management – The AN will use DoD Key Management Infrastructure (KMI)
and Electronic Key Management System (EKMS) processes as its core key
management capability. The AN may provide user interfaces and/or a pass-through
capability that enable platform LAN and user access to the services provided by these
systems.
 Authentication, Identification, Integrity, and Access Control – The AN will
provide services to enable user and device authentication, identification, and access
control, with or without interface to GIG security services.
 Attack Sensing, Warning, and Response (ASWR) – The AN will provide for host-based
and network-based Intrusion Detection Systems, Intrusion Prevention Systems, and
Virus Detection, Prevention, and Removal applications.
o Intrusion Detection System (IDS) – The AN IDS will defend system
components against threat actions carried by network data traffic by detecting
and responding to inappropriate, incorrect or anomalous network activity.
The two basic categories of IDS are host-based and network-based. The AN
will use a combination of host-based and network-based IDS processes.
o Intrusion Prevention System (IPS) – The IPS detects and prevents known
attacks.
o Virus Detection, Prevention, and Removal – The AN will check for the
presence of malicious executable code (e.g., viruses, worms, Trojan horses,
etc.) and remove or “repair” the suspect files or alert the operator.
 Protocol Security – The routing, transport, management, and other protocols used to
enable the AN must be secured. Dependent on the sensitivity of the protocol’s data,
that security may require identification and authentication between endpoints,
applying a digital signature to the transmitted data to provide origin and integrity
validation, or encryption to provide confidentiality.
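As an illustration of the protocol-security requirement, the sketch below tags a routing update for origin and integrity validation. HMAC with a pre-shared key stands in here for the digital signatures a fielded AN would key through KMI/EKMS; all names and the message shape are illustrative assumptions:

```python
import hashlib
import hmac
import json

def sign_update(update: dict, key: bytes) -> dict:
    """Attach an origin/integrity tag to a routing update.

    HMAC-SHA256 with a pre-shared key is a stand-in for the digital
    signature the architecture calls for.
    """
    payload = json.dumps(update, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": update, "tag": tag}

def verify_update(signed: dict, key: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload = json.dumps(signed["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])
```

A receiver that fails verification would discard the update, preventing a tampered or spoofed routing message from altering the network's forwarding state.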
4.2.3 Network Management
AN Network Management enables operators to plan network resources and equipment
configurations, analyze and predict network performance, monitor and display current
network status, isolate and resolve faults, and configure network components.
Key Network Management functions include the following:
 Fault, Configuration, Accounting, Performance, Security (FCAPS) Management
o Identify when faults and anomalies occur; report, isolate, analyze, and resolve
faults
o Provide configuration of AN system parameters, detect configuration changes
o Perform accounting of resource utilization to verify that resources are being
utilized in accordance with commanders’ operational objectives
o Collect network performance statistics, status, and event notification data,
analyze and resolve performance faults and reconfigure network resources as
warranted
o Conduct security fault/anomaly detection, analysis, and security
system/component (re)configuration.
 Network Situation Awareness
o Collect (or receive), analyze, and fuse AN composition, status, performance,
and information assurance data (including legacy network status) in near real-time
to produce user-defined views of the mission-critical AN resources of
concern to a commander or NetOps center.
o Display system and network resources, showing their operational status and
linkages to other resources.
o Tailor reporting to facilitate the timely sharing of data and consistency of data
across the AN, its peer networks, and other GIG component networks.
 Network Resource Planning
o Develop and maintain a network resource plan (to include a frequency
management plan) for the allocation and configuration of network elements
through the range of operations
o Interoperate with legacy/coalition network planning tools to ensure an
integrated plan
o Formulate policies in accordance with network resource plan
o Store policies in policy repository
 Policy Based Network Management (PBNM)
o Construct, store, and distribute network policies, including for IA and QoS
o Dynamically update network policies
o Perform policy accounting, including SLA management and negotiation
o Provide network access control
 Network Modeling, Analysis, and Simulation
o Provide tools for examining network functional and performance
characteristics; use results to refine resource planning and policy formulation
o Provide integrated full network simulation to include the AN, legacy, and
coalition links
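The FCAPS performance-management loop (collect statistics, analyze, resolve faults) can be sketched as a threshold check whose results would be reported to a designated AN manager node. The threshold values and metric names are illustrative assumptions, not policy values from this document:

```python
# Thresholds are illustrative; a fielded system would draw them from policy.
THRESHOLDS = {"loss_rate": 0.02, "delay_ms": 250.0, "utilization": 0.9}

def detect_performance_faults(stats: dict) -> list:
    """Compare collected link statistics against thresholds and return
    fault notifications suitable for a designated AN manager node."""
    faults = []
    for link_id, metrics in stats.items():
        for name, limit in THRESHOLDS.items():
            value = metrics.get(name)
            if value is not None and value > limit:
                faults.append({"link": link_id, "metric": name,
                               "value": value, "limit": limit})
    return faults
```

In the architecture's terms, the returned notifications feed both local resolution (autonomous reconfiguration) and upstream reporting for network situation awareness.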
4.2.4 Link Management
AN Link Management enables real-time autonomous management of the resources and
services provided by the RF and optical links that interconnect the AN nodes. This
includes finding/locating nodes/links, provisioning link bandwidth, admission control,
and managing radio and network configurations.
 Advertising and Discovery – Each AN node will be capable of advertising its
identity and location to enable it to be discovered by other AN nodes. The AN Link
Management will identify, authenticate, locate and determine the network capabilities
of all advertising AN nodes.
 Establishing and Restoring – The AN will form links among advertising AN nodes
to satisfy mission operational requirements and prescribed network configurations in
accordance with current policies. The AN Link Management will be capable of
setting and adjusting AN network component and link parameters to modify the
performance of any link. As an example, the AN will determine the protocols and
data rate to be used over each link, and will be capable of adjusting the data rate for a
given link.
 Monitoring and Evaluation – The AN Link Management will monitor and evaluate
the current operational state of all advertising AN nodes' network components and
links. The AN Link Management will determine the network performance being
achieved by each active AN node and link. In addition, the AN Link Management
will determine the network performance being achieved by the overall network to
determine whether or not the current topology needs to be altered to satisfy current
mission operational requirements.
 Topology Management – The AN Link Management will determine how the AN
nodes will be interconnected in order to establish an AN topology that satisfies
current mission operational requirements. This includes determining each AN node
network component and link parameters, performing admission control for each node
requesting connection to the AN, provisioning of link resources, and managing the
handoff of links to ensure seamless connectivity.
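The advertising, discovery, and admission-control functions above can be sketched as follows; the advertisement fields and the admission criteria are illustrative assumptions, not a defined AN message format:

```python
from dataclasses import dataclass

@dataclass
class Advertisement:
    """Fields a node might advertise so it can be discovered (illustrative)."""
    node_id: str
    position: tuple          # (lat, lon, alt_m)
    capabilities: frozenset  # e.g. {"ipv6", "backbone"}
    authenticated: bool      # True once the node's identity is verified

def admit(adv: Advertisement, required_caps: set, max_members: int,
          current_members: list) -> bool:
    """Admission control: require authentication, a capability match,
    and room under the subnet's size limit before granting connection."""
    if not adv.authenticated:
        return False
    if not required_caps <= adv.capabilities:
        return False
    return len(current_members) < max_members
```

A subnet's link manager node would run such a check for each node requesting connection before provisioning link resources for it.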
4.2.5 Network Services
 Name Resolution System – The AN contains a Name Resolution System to resolve
hostnames, domains and IP addresses for the AN. The Name Resolution System will
include master and replica servers geographically separate with automated failover
capability. The AN Name Resolution System will be fully integrated into the GIG
Organization, Service and Root DNS architecture.
 Dynamic Network Configuration Management – The AN includes the capabilities
to dynamically assign and distribute IP addresses, manage network servers, IP
networks and subnets. The AN network configuration management system also
provides a view of the IP network, subnet, domain, name resolution server, network
time server, etc.
 Network Time Service – The AN includes Network Time Servers (NTS)
synchronized with Coordinated Universal Time (UTC) to provide synchronized time
distribution within an AN node. The AN NTS will be fully integrated into the GIG
UTC, Intermediate, and Client Node Network Time Service architecture. The AN
also provides network time services that are fully functional in a GPS denied
environment, or when the network is disconnected from the GIG infrastructure.
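A time client synchronizing to an AN Network Time Server could apply the standard NTP offset and delay computation (RFC 5905), which works over any reachable path and does not depend on GPS:

```python
def ntp_offset_and_delay(t0: float, t1: float, t2: float, t3: float):
    """Standard NTP clock-offset computation (timestamps in seconds):
      t0 client transmit, t1 server receive,
      t2 server transmit, t3 client receive.
    Returns (offset of server clock relative to client, round-trip delay)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

For example, timestamps of t0=0.0, t1=5.1, t2=5.2, t3=0.3 imply the client's clock trails the server by 5.0 seconds over a 0.2-second round trip.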
4.3 Network Topologies (Nodes and Links)
The Airborne Network must be capable of supporting diverse AF airborne platforms.
These platforms will vary in their communications capability needs, flight patterns, and
size, weight and power (SWAP) constraints. The network must be capable of
interconnecting all platforms, supporting all needed services, and providing access to all
needed GIG services. Some of the platforms will be high performance aircraft capable of
operating at high speeds, needing to rapidly join and exit networks while having SWAP
capacity for only very limited communications resources (which may be legacy systems).
The network
must also be capable of guaranteeing certain levels of performance to support bandwidth,
latency or loss sensitive applications. Figure 4-1 depicts a Notional Airborne Network
Topology showing the different types of network nodes and links that address these
needs. To optimize network performance, operations, and fault-resilience, and to ensure
the efficient use of resources, the use of relatively stable connections should be
maximized, especially in the Airborne Backbone (when such a configuration is needed),
while the highly mobile platforms and dynamic network connections are isolated to the
Tactical Subnets. The Airborne Network should be capable of forming whatever topology is best
matched to the mission, platforms, and communications transport needs as discussed in
section 4.3.3.
Figure 4-1. Notional Airborne Network Topology
4.3.1 Node Types
The node types depicted in Figure 4-1 indicate the minimum network functionality that
must be performed at each node to realize the target AN capabilities. A brief description
of each node type follows:
 Legacy – Airborne platforms that are equipped with legacy communications systems
capable of supporting voice, tactical data link (TDL), and possibly some point-to-point
IP network connections for very limited services (e.g., email only).
 Relay/Gateway – Airborne platforms that are equipped with legacy communications
systems as described for a legacy node, but also include equipment that enable them
to access multiple TDLs and to relay data Beyond Line of Sight (BLOS), or transfer
data formats from one link to another.
 Network Access – Airborne platforms equipped with IP network-capable
communications systems, which provide an IP network connection to an AN or GIG
network node. These nodes are tethered to the network, and do not provide any AN
service to other nodes on the network.
 Network Capable – Airborne platforms equipped with IP network-capable
communications systems, which can join an IP data network (Tactical Subnet) and
provide limited AN service (e.g., transit routing) to other nodes on that local network.
 Internetwork – Airborne platforms equipped with IP network-capable
communications systems, which can access and interconnect multiple IP data
networks. These nodes are equipped with gateway and network services functionality
that enable them to provide GIG services to other airborne nodes when interconnected
to the GIG through a Network Service Provider node.
 Network Service Provider – Airborne platforms, fixed or deployed ground facilities,
or space-based packages equipped with IP network capable communications systems,
which can access multiple IP data networks. These nodes are equipped with gateway
and network services functionality that enable them to interconnect with a GIG
network (e.g., DISN, JTF Service Component Network, etc.) service delivery node
(SDN) and to provide GIG services to other airborne nodes.
Tables 4-1 and 4-2 summarize the network capabilities and functionalities for each node
type, respectively.
Table 4-1. Summary of Node Types and Typical Network Capabilities
Node Type
Legacy
Connectivity
 Coverage: Mostly LOS,
some BLOS
 Diversity: Single or few
legacy links and link
types
 Throughput: Typically
low speed connections
 Type of connection: Pt-Pt, Pt-MultPt, automatic
relay
 Network interfaces:
Legacy links and tactical
subnets
Network Capability
Service
Typically a subset of
communications transport
services, including:
 Real-time data
 Voice
 Interactive data
Operation
 Managing: Manual
planning, analyzing,
monitoring, and
controlling of node
resources locally.
Limited automated
management and
monitoring for some
legacy links.
 Forming and Adapting:
Manual provisioning,
initialization and
restoration of node link
resources. Limited
dynamic resource sharing.
 Accessing: Link and
tactical subnet protection,
with limited manual
detection and reaction.
Node Type
Relay/Gateway
Network Access
Network Capable
Internetwork
Connectivity
Network Capability
Service
 Coverage: Typically
LOS and BLOS
 Diversity: Few to several
legacy links and link
types
 Throughput: Typically
low speed connections
 Type of connection: Pt-Pt, Pt-MultPt, TDL
Forwarding
 Network interfaces:
Legacy links and tactical
subnets
Typically a subset of
communications transport
services, including:
 Real-time data
 Voice
 Interactive data
 Coverage: LOS & BLOS
 Diversity: Single IP
network-capable link
 Throughput: Low to
high speed connection
 Type of connection: Pt-to-Pt
 Network interfaces:
Tactical subnets, AN
backbone, GIG networks
All or most
communications transport
services, including:
 Real-time data
 Voice
 Video
 Multimedia & multicast
 Interactive data
 Bulk (time insensitive)
data
 Coverage: Typically
LOS
 Diversity: Single IP
network-capable link
 Throughput: Low to
high speed connection
 Type of connection: Pt-Pt, Pt-MultPt, Forwarding
 Network interfaces:
Tactical subnets
Typically a subset of
communications transport
services, including:
 Real-time data
 Voice
 Interactive data
 Coverage: LOS & BLOS
 Diversity: Multiple links
All or most
communications transport
services, including:
Operation
Limited dynamic joining
and leaving of subnets.
 Managing: Manual
planning, analyzing,
monitoring, and
controlling of node
resources locally
 Forming and Adapting:
Manual provisioning,
initialization and
restoration of node link
resources
 Accessing: Link and
tactical subnet protection,
with limited manual
detection and reaction
 Managing: Monitoring
and controlling of node
resources locally and from
a remote network node,
distribute network SA
data, match use of
resources to operational
objectives
 Forming and Adapting:
Automated provisioning,
initialization and
restoration of AN link and
network resources
 Accessing: GIG
protection, with
automated detection and
reaction
 Managing: Monitoring
and controlling of node
resources locally and from
a remote network node,
distribute network SA
data, match use of
resources to operational
objectives
 Forming and Adapting:
Automated provisioning,
initialization and
restoration of AN link and
network resources
 Accessing: Tactical
subnet protection, with
automated detection and
reaction
 Managing: Automated
analyses of network
Node Type
Network
Service
Provider
Connectivity
Network Capability
Service
and link types
 Throughput: Low to
high speed connections,
high speed connections to
AN backbone
 Type of connection: Pt-Pt, Pt-MultPt, Forwarding
 Network interfaces:
Tactical subnets, AN
backbone
 Real-time data
 Voice
 Video
 Multimedia & multicast
 Interactive data
 Bulk (time insensitive)
data
 Coverage: LOS & BLOS
 Diversity: Many links
and link types
 Throughput: Low to
high speed connections,
high speed connections to
AN backbone and GIG
 Type of connection: Pt-Pt, Pt-MultPt, Forwarding
 Network interfaces:
Tactical subnets, AN
backbone, and GIG
networks
All communications
transport services, including
MLPP:
 Real-time data
 Voice
 Video
 Multimedia & multicast
 Interactive data
 Bulk (time insensitive)
data
Operation
performance, monitoring
and controlling of node
resources locally and from
a remote network node
and of connected AN
resources, maintain and
distribute network SA,
match use of resources to
operational objectives
 Forming and Adapting:
Automated provisioning,
initialization and
restoration of AN link and
network resources
 Accessing: Tactical
subnet and AN backbone
protection, with
automated detection and
reaction
 Managing: Simplified
network planning,
automated analyses of
network performance,
monitoring and
controlling of node
resources and of
connected AN resources,
maintain and distribute
network SA, match use of
resources to operational
objectives
 Forming and Adapting:
Automated provisioning,
initialization and
restoration of AN link and
network resources
 Accessing: AN and GIG
protection, with
automated detection and
reaction
Table 4-2. Summary of Node Types and Network Functionality
Node Type
Network Functions
Connectivity
 Subnet Access (through gateway)
Legacy
Information Assurance
 User data and orderwire communications bulk encrypted
 Limited TRANSEC capabilities
 Manual key management; some OTAR
Link Management
 None
Network Management
 None, other than management approach applied by the legacy system; Relay/Gateway
provides proxy function for management of legacy systems by the AN management
framework
Network Services
 None
Connectivity
 Legacy Link Access
 Subnet Access (through gateway)
Information Assurance
 User data and orderwire communications bulk encrypted
 Limited TRANSEC capabilities
 Manual key management; some OTAR
Relay/Gateway
Link Management
 None
Network Management
 Conduct proxy function between AN management and legacy management
Network Services
 None
Connectivity
 GIG Access
 QoS
Network Access
Information Assurance
 Policy Based Security Management
 User data protection through use of Inline Network Encryptor (e.g., HAIPE)
 Node data Protection – use of IPSec, Type 2 encryption, etc., to protect management and
control data
 TRANSEC
 I&A, data integrity validation, etc.
 Automated Key Management – management of node’s keys; access to replacement keys
 Attack Sensing, Warning, and Response
 Virus Protection
 Protocol Security
Link Management
 Advertising and Discovery
 Advertises node’s identity and location
 Discovers nodes that advertise their identity and location; authenticates nodes
identity and determines available network capabilities
 Establishing and Restoring
 Requests network service needed to satisfy mission operational requirements
from designated AN link manager node
 Sets and adjusts AN network component and link parameters to establish or
modify the required link(s) as directed by the designated AN link manager node
 Monitoring and Evaluation
 Monitors and evaluates node’s network and link performance
 Reports status to designated AN link manager node
Network Management
 Fault, Configuration, Accounting, Performance, Security Management
 Collect network performance statistics, status, and event notification data from
local network elements, analyze and resolve faults and configure local network
resources, autonomously if needed
 Report network performance statistics, status, and event notification data to a
designated AN manager node and receive and respond to configuration requests
from the AN manager node
 Network Situation Awareness
 Report network events to a designated AN manager node
 Policy Based Network Management
 Request, receive, and respond to policy updates
 Provide network access control
Network Services
 Name Resolution System
 Dynamic Network Configuration Management
 Network Time
Connectivity
 Subnet
 Subnet Access
 Routing/ Switching (Intra-Area Router)
 QoS
Network Capable
Information Assurance
 Policy Based Security Management
 User data protection through use of Inline Network Encryptor (e.g., HAIPE)
 Node data Protection – use of IPSec, Type 2 encryption, etc., to protect management and
control data
 TRANSEC
 I&A, data integrity validation, etc.
 Automated Key Management – management of node’s keys; access to replacement keys
 Attack Sensing, Warning, and Response
 Virus Protection
 Protocol Security
Link Management
 Advertising and Discovery
 Advertises node’s identity and location
 Discovers nodes that advertise their identity and location; authenticates nodes
identity and determines available network capabilities
 Establishing and Restoring
 Requests network service needed to satisfy mission operational requirements
from designated AN link manager node
 Sets and adjusts AN network component and link parameters to establish or
modify the required link(s) as directed by the designated AN link manager node
 Monitoring and Evaluation
 Monitors and evaluates node’s network and link performance
 Collects/processes/analyzes status reports from assigned subnet nodes
 Reports status to designated AN link manager node
 Topology Management
 Determines needed subnet connectivity for assigned nodes to establish a
topology that satisfies current mission operational requirements
 Performs admission control for each node requesting connection
 Provisions subnet resources, determines network and link parameters for
assigned nodes, and directs adjustments to network component and link
parameters for assigned nodes that request network service
 Determines and directs needed adjustments to the assigned nodes’ network
component and link parameters to accommodate changes in service requests,
participating nodes or equipment status (based on status reports)
 Manages the handoff of links within its subnet to ensure seamless connectivity
 Reconfigures subnet topology as directed by the designated AN link manager
node
Network Management
 Fault, Configuration, Accounting, Performance, Security (FCAPS) Management
 Identify when faults and anomalies occur; report, isolate, analyze, and resolve
faults
 Provide configuration of AN system parameters, detect configuration changes
 Perform accounting of resource utilization to verify that resources are being
utilized in accordance with commanders’ operational objectives
 Collect network performance statistics, status, and event notification data,
analyze and resolve performance faults and reconfigure network resources as
warranted
 Conduct security fault/anomaly detection, analysis, and security
system/component (re)configuration.
 Network Situation Awareness
 Collect (or receive), analyze, and fuse local AN composition, status,
performance, and information assurance data in near real-time
 Tailor reporting to facilitate the timely sharing of data and consistency of data
across the AN
 Policy Based Network Management
 Provision and dynamically update local network policies, including for IA and
QoS
 Perform policy accounting, including SLA management and negotiation
 Provide network access control
Internetwork
Network Services
 Name Resolution System
 Dynamic Network Configuration Management
 Network Time
Connectivity
 Backbone
Node Type
Network Functions
 Subnet
 Subnet and Backbone Access
 Routing/ Switching (Intra-Area Router, Area Border Router)
 QoS
Information Assurance
 Policy Based Security Management
 User data protection through use of Inline Network Encryptor (e.g., HAIPE)
 Node data Protection – use of IPSec, Type 2 encryption, etc., to protect management and
control data
 TRANSEC
 I&A, data integrity validation, etc.
 Automated Key Management – management of node’s keys; access to replacement keys
 Attack Sensing, Warning, and Response
 Virus Protection
 Protocol Security
Link Management
 Advertising and Discovery
 Advertises node’s identity and location
 Discovers nodes that advertise their identity and location; authenticates nodes
identity and determines available network capabilities
 Establishing and Restoring
 Requests network service needed to satisfy mission operational requirements
from designated AN link manager node
 Sets and adjusts AN network component and link parameters to establish or
modify the required link(s) as directed by the designated AN link manager node
 Monitoring and Evaluation
 Monitors and evaluates node’s network and link performance
 Collects/processes/analyzes status reports from assigned subnet and/or backbone
nodes
 Reports status to designated AN link manager node
 Topology Management
 Determines needed subnet and/or backbone connectivity for assigned nodes to
establish a topology that satisfies current mission operational requirements
 Performs admission control for each node requesting connection
 Provisions subnet and/or backbone resources, determines network and link
parameters for assigned nodes, and directs adjustments to network component
and link parameters for assigned nodes that request network service
 Determines and directs needed adjustments to the assigned nodes’ network
component and link parameters to accommodate changes in service requests,
participating nodes or equipment status (based on status reports)
 Manages the handoff of links within its subnet and/or backbone to ensure
seamless connectivity
 Reconfigures subnet and/or backbone topology as directed by the designated AN
link manager node
Network Management
 Fault, Configuration, Accounting, Performance, Security (FCAPS) Management
 Identify when faults and anomalies occur; report, isolate, analyze, and resolve
faults
 Provide configuration of AN system parameters, detect configuration changes
 Perform accounting of resource utilization to verify that resources are being
utilized in accordance with commanders’ operational objectives
 Collect network performance statistics, status, and event notification data,
analyze and resolve performance faults and reconfigure network resources as
warranted
 Conduct security fault/anomaly detection, analysis, and security
system/component (re)configuration.
Network Situation Awareness
 Collect (or receive), analyze, and fuse AN composition, status, performance, and
information assurance data in near real-time to produce user-defined views of the
mission critical AN resources of concern to a commander or NetOps center
 Display system and network resources, showing their operational status and
linkages to other resources
 Tailor reporting to facilitate the timely sharing of data and consistency of data
across the AN
Policy Based Network Management
 Provision and dynamically update local network policies, including for IA and
QoS
 Perform policy accounting, including SLA management and negotiation
 Provide network access control
Network Services
 Name Resolution System
 Dynamic Network Configuration Management
 Network Time
Connectivity
 Backbone
 Subnet
 Subnet, Backbone, GIG Access
 Routing/ Switching (Intra-Area Router, Area Border Router, AS Border Gateway Router)
 QoS
Network
Service
Provider
Information Assurance
 Policy Based Security Management
 User data protection through use of Inline Network Encryptor (e.g., HAIPE)
 Node data Protection – use of IPSec, Type 2 encryption, etc., to protect management and
control data
 TRANSEC
 I&A, data integrity validation, etc.
 Automated Key Management – management of node’s keys; access to replacement keys
 Attack Sensing, Warning, and Response
 Virus Protection
 Protocol Security
Link Management
 Advertising and Discovery
 Advertises node’s identity and location
 Discovers nodes that advertise their identity and location; authenticates nodes
identity and determines available network capabilities
 Establishing and Restoring
 Requests network service needed to satisfy mission operational requirements
from designated AN link manager node
 Sets and adjusts AN network component and link parameters to establish or
modify the required link(s) as directed by the designated AN link manager node
Monitoring and Evaluation
 Monitors and evaluates node’s network and link performance
 Collects/processes/analyzes status reports from assigned AN nodes
 Reports status to designated AN link manager node
Topology Management
 Determines needed access, subnet and/or backbone connectivity for assigned
nodes to establish a topology that satisfies current mission operational
requirements
 Performs admission control for each node requesting connection
 Provisions access, subnet and/or backbone resources, determines network and
link parameters for assigned nodes, and directs adjustments to network
component and link parameters for assigned nodes that request network service
 Determines and directs needed adjustments to the assigned nodes’ network
component and link parameters to accommodate changes in service requests,
participating nodes or equipment status (based on status reports)
 Manages the handoff of links within its subnet and/or backbone to ensure
seamless connectivity
 Reconfigures access, subnet and/or backbone topology as directed by the
designated AN link manager node
Network Management
 Fault, Configuration, Accounting, Performance, Security (FCAPS) Management
 Identify when faults and anomalies occur; report, isolate, analyze, and resolve
faults
 Provide configuration of AN system parameters, detect configuration changes
 Perform accounting of resource utilization to verify that resources are being
utilized in accordance with commanders’ operational objectives
 Collect network performance statistics, status, and event notification data,
analyze and resolve performance faults and reconfigure network resources as
warranted
 Conduct security fault/anomaly detection, analysis, and security
system/component (re)configuration.
 Network Situation Awareness
 Collect (or receive), analyze, and fuse AN composition, status, performance, and
information assurance data in near real-time to produce user-defined views of the
mission critical AN resources of concern to a commander or NetOps center.
 Display system and network resources, showing their operational status and
linkages to other resources.
 Tailor reporting to facilitate the timely sharing of data and consistency of data
across the AN and its peer networks.
 Network Resource Planning (likely performed at a single AF Network Operations
facility or ground NSP)
 Develop and maintain a network resource plan for the allocation and
configuration of network elements through range of operations
 Formulate policies in accordance with network resource plan
 Store policies in policy repository
 Policy Based Network Management
 Provision and dynamically update network policies, including for IA and QoS
 Perform policy accounting, including SLA management and negotiation
 Provide network access control
Network Modeling, Analysis, and Simulation (likely performed at a single AF Network
Operations facility or ground NSP)
 Provide tools for examining network functional and performance characteristics;
use results to refine resource planning and policy formulation
Network Services
 Name Resolution System
 Dynamic Network Configuration Management
 Network Time
4.3.2 Link Types
4.3.2.1 Backbone
The AN quasi-persistent core backbone will provide a high-performance and fault-resilient routed structure that can be characterized by a defined network capacity, latency
and loss rate. The backbone should be composed of relatively stable high bandwidth
links between a defined set of platforms. Ideally, the backbone links should be
symmetric and point-to-point, with bandwidth, latency, and loss characteristics as equal as
possible. It should be implemented to enable alternative path routing and fast routing
convergence, consistent steady-state traffic engineering and latency performance, and
consistent failure mode behavior. This backbone should also be implemented to enable
optimized paths between interconnections, dynamic load-sharing across the core structure
where appropriate, and efficient and controlled use of bandwidth. [Source:
Understanding Enterprise Network Design Principles]
The AN backbone will provide the following advantages to the overall network
performance:
 Reduces overall network complexity: Even if it is based upon and formed using
mobile ad-hoc networking technology, the backbone can be considered a part of the
routing infrastructure that does not change nearly as frequently as other subnets
attached to it. It therefore enables use of a much simpler and more efficient routing protocol
among backbone routers, essentially the same protocol as is used on terrestrial
networks.
 Increases overall network stability: The fact that there are platforms whose locations
and flight characteristics are relatively stable gives them the inherent ability to form
relatively stable interconnections between them, whether they form these connections
in an ad-hoc fashion or not. These stable links can be used in network paths between
users having needs for stable and persistent connectivity.
 Facilitates reliable performance: Including relatively stable links in the network
paths enables resources to be reserved, where needed, for traffic having high reliability
requirements.
 Provides location for common services: Many of the nodes comprising the backbone
can be used to host common network services, such as directories and gateways, so
that smaller, more constrained platforms do not need to do so.
 Provides aggregation points for SATCOM and ground interconnects: Nodes on the
backbone can serve as airborne concentration points for interconnection to the
SATCOM backbone networks and terrestrial networks.
4.3.2.2 Subnet
AN subnets will be formed to satisfy the specific communications transport needs of a set
of platforms. The subnet links may be point-to-point or broadcast, high or low bandwidth,
LOS or SATCOM as needed to satisfy the mission needs. Subnet connections can be
prearranged quasi-static connections, ad hoc quasi-static connections, or ad hoc dynamic
connections (connections of opportunity), consisting of any communications media
capable of supporting IP traffic.
4.3.2.3 Network Access
AN network access links will provide connectivity to AN and/or GIG services. The
network access links may be high or low bandwidth, LOS or SATCOM and function as a
circuit or trunk. These links must enable AN and GIG security, addressing, network
management, QoS, admission control, and network services.
4.3.2.4 Legacy
AN legacy links refer to any connectivity established using non-IP communications
systems typically capable of supporting voice, tactical data link (TDL), and possibly
some point-to-point IP network connections for very limited services (e.g., email only).
4.3.3 Typical Topologies
The Airborne Network will be capable of forming many different topologies, each
matched to a particular mission, set of platforms, and communications transport needs.
This flexibility will enable the AN to meet performance objectives while minimizing the
infrastructure required or the use of scarce resources.
4.3.3.1 Space, Air, Ground Tether
Tethering aircraft consists of establishing a direct connection to another aircraft or ground
node, via a point-to-point link for nodes within line of sight (LOS) or via a SATCOM
link for nodes that are beyond line of sight (BLOS). As in the case of the VIP/SAM
aircraft that have been recently equipped with the Senior Level Communications System
(SLCS) or a B-2 with reachback communications, a SATCOM link provides connectivity
to a network ground entry point as shown at the left in Figure 4-2. Strike aircraft that
accompany C2 aircraft such as an AWACS are tethered via point-to-point links as shown
in the center of the figure. Finally, C2 or ISR aircraft may connect via a LOS link
directly to a network ground entry point as shown at the right in the figure. Each of these
tethered alternatives requires the presence of a tethering point that has been prepositioned.
Figure 4-2. Airborne Network Tethered Topologies
4.3.3.2 Flat Ad-Hoc
A flat ad-hoc topology, as shown in Figure 4-3, refers to establishing nonpersistent
network connections as needed among the AN nodes that are present. With this network
the AN nodes dynamically “discover” other nodes to which they can interconnect and
form the network. The specific interconnections between the nodes are not planned in
advance, but rather are made as opportunities arise. The nodes join and leave the
network at will, continually changing connections to neighbor nodes based upon their
location and mobility characteristics. This type of network topology would best serve
missions involving a relatively small number of aircraft that are very dynamic and have
modest communications transport needs.
Figure 4-3. Airborne Network Flat Ad-Hoc Topology
4.3.3.3 Tiered Ad-Hoc
Ad-hoc networks can be flat in the sense that all AN nodes are peers of each other in a
single network, as discussed above, or they can dynamically organize themselves into
hierarchical tiers such that higher tiers are used to move data between more localized
subnets. Figure 4-4 depicts such a tiered ad-hoc topology. This network topology would
be beneficial as the number of aircraft increases, or their mobility patterns become more
stable, or the communications transport needs increase.
Figure 4-4. Airborne Network Tiered Ad-Hoc Topology
4.3.3.4 Persistent Backbone
A network topology characterized by a persistent backbone is shown in Figure 4-5. The
backbone is established using relatively persistent wideband connections among high-value platforms flying relatively stable orbits. The backbone provides the connectivity
between the tactical subnets which are considered edge networks relative to the
backbone. The backbone provides concentration points for connectivity to the space
backbone as well as to terrestrial networks. This type of network topology would be
needed to support a C2 Constellation consisting of several C2 and ISR aircraft
exchanging high volumes of high priority, latency-sensitive sensor and command data
among themselves and with strike aircraft.
Figure 4-5. Airborne Network Persistent Backbone Topology
5. Airborne Network System Functions (SV-4)
5.1 On-Board Infrastructure
5.1.1 Intra Platform Distribution
5.1.1.1 Serial Data Buses
Serial data buses will provide highly reliable, fault tolerant, deterministic, serial switched
interconnections for on-board mission systems where timing jitter and missed deadlines
are intolerable. Serial data buses, such as MIL-STD-1553B and newer enhanced rate
standards, typically employ dual (and multiple) redundancy to provide the requisite levels
of fault tolerance and command/response techniques to provide the guaranteed real-time
deterministic performance.
5.1.1.2 Local Area Networks
Avionics-quality data networks based on COTS networking technologies will provide
high data throughput while achieving bounded latency, determinism and guaranteed
availability for critical applications.
5.1.2 Platform Information Assurance
The platform security system will enable authenticated and authorized operators to
monitor ongoing security events and verify the configuration of security components.
Within the constraints of security policy, it will allow authenticated and authorized
operators the ability to modify system security behavior. Platform security components
will integrate with the AN Policy Based Security Management (PBSM) system allowing
dissemination, installation and enablement of security policy on platform security
components. Through interfaces with the AN network management system, authorized
network management systems will be able to query platform security components and
platform secured components to determine their configuration, health, and performance.
IA capabilities that may be provided by platform LANs include:
 Ability to report configuration, status, and performance of security
components (e.g., HAIPE devices) and secured components (e.g., routers).
 Ability to receive, validate, process, and implement security policy distributed
by the AN PBSM system.
 Provide for user and AN node data protection. Data protection includes
support for data confidentiality, integrity, digital signatures, etc., as
appropriate for the data and applications.
 Accomplish I&A.
 Provide for Key Management.
 Support Attack Sensing, Warning, and Response (ASWR). ASWR includes
Intrusion Detection and Prevention and Virus Detection. ASWR will provide
the capability to report all instances of detected security threats such as
unauthorized attempts to use or penetrate the network or to deny service.
 Provide for protocol security.
5.1.3 Platform Network Management
The platform network management system will enable operators to manage all on-board
network elements. The platform network management system will interface and
interoperate with the AN network management system to enable operators to manage
remote network elements in the airborne network.
The platform network management system will be capable of:
 Monitoring the health and status (to include identifying and locating faults) of
managed network elements through both passive receipt and interpretation of
status and alert messages and by active probes.
 Reporting the settings of all configurable parameters that can be provided by
every network element. The system will enable an authorized and
authenticated operator to change the settings of all configurable parameters for
every network element.
 Monitoring and reporting the utilization of every network element. The
system will be capable of receiving and applying network policy information
to effect changes in allocations of network resources to traffic flows.
 Changing the security configuration of network elements. The system will be
capable of reporting the initial configuration and all changes to the security
configuration of network elements and host systems and all instances of
detected security threats such as unauthorized attempts to use or penetrate the
network or to deny service.
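For illustration only, the monitoring pattern described above (passive receipt of alerts combined with active probes) can be sketched as a toy monitor; the element names and the boolean probe interface are hypothetical, not part of the architecture:

```python
class PlatformNetworkManager:
    """Toy health/status monitor for on-board network elements:
    passive receipt of alerts plus an active probe sweep."""

    def __init__(self, elements):
        self.elements = elements          # name -> callable returning True if healthy
        self.alerts = []                  # received and generated fault reports

    def receive_alert(self, element, fault):
        """Passive path: record an alert pushed by a network element."""
        self.alerts.append("%s: %s" % (element, fault))

    def probe_all(self):
        """Active path: probe every element and raise alerts on failures."""
        status = {name: probe() for name, probe in self.elements.items()}
        for name, healthy in status.items():
            if not healthy:
                self.receive_alert(name, "probe failed")
        return status
```

A real implementation would sit on standard management protocols rather than direct callables; the sketch only shows the dual passive/active structure the text requires.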
5.1.4 Gateways/Proxies
Gateways and/or proxies will enable: (1) interconnection of legacy on-board
infrastructure (e.g., serial data buses) to the IP-based airborne network, (2) use of legacy
off-board transmissions systems to transport IP-based traffic, (3) interconnection of
legacy Tactical Data Link (TDL) systems into the airborne network, and (4) the use of
standard COTS and AF IP-based user applications over impaired (i.e., limited
bandwidth, long delays, high loss rates, and disruptions in network connections) airborne
network connections. These systems will facilitate the transition of the legacy on-board
infrastructure, transmission systems, TDLs, and user applications to the objective
airborne network systems. Therefore, these systems will only be needed during a
transition period, and the scope of the needed functionality will vary with specific
platforms. (Note, the transition period may extend indefinitely to accommodate
interoperability with allied and coalition platforms and systems.)
5.1.4.1 Legacy Infrastructure Gateways
Legacy gateways provide interfaces between the on-board legacy infrastructure
(such as MIL-STD-1553 data, audio and video) and the on-board IP network to enable
legacy host systems (e.g., servers, OWSs, radios, radio access, video distribution and
display, etc.) to interoperate with IP-based host systems and to access the airborne
network.
5.1.4.2 Legacy Transmission System Data Link Gateways
Legacy transmission system gateways can be used to enable legacy voice and data radio
channels to be used to transport IP traffic. These gateways (such as those developed for
AFRL’s Information for Global Reach (IFGR) project) overcome the low bandwidth,
high loss, long delay, half-duplex environments of legacy radio systems using a variety of
techniques. A combination of Forward Error Correction (FEC) and Automatic Repeat
Request (ARQ) can be used to decrease the wireless error rate in the data link
layer along with some of the Performance Enhancing Proxy techniques discussed below.
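As a minimal sketch of how FEC and ARQ combine, assume a trivial repetition code stands in for a real FEC scheme (actual gateways use far more efficient codes); the function names and retry budget are illustrative:

```python
import random

def fec_encode(payload, copies=3):
    """Notional repetition-code FEC: send several copies so the receiver
    can often recover without waiting for a retransmission round trip."""
    return [payload] * copies

def fec_decode(received):
    """Any surviving copy reconstructs the payload."""
    return received[0] if received else None

def send_with_arq(payload, loss_rate, max_tries=8, rng=None):
    """Stop-and-wait ARQ over a lossy link: retransmit the FEC group
    until one copy gets through; returns the number of attempts used."""
    rng = rng or random.Random(0)  # seeded for repeatability
    for attempt in range(1, max_tries + 1):
        # Each copy independently survives the link with P = 1 - loss_rate.
        survivors = [c for c in fec_encode(payload) if rng.random() > loss_rate]
        if fec_decode(survivors) == payload:
            return attempt  # receiver acknowledges; sender stops
    raise TimeoutError("link too lossy for the ARQ retry budget")
```

The point of the combination is visible in the structure: FEC reduces how often the ARQ loop must retransmit, trading bandwidth for round trips, which matters on long-delay, half-duplex legacy channels.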
5.1.4.3 TDL Gateways
TDL gateways (such as the Common Link Interface Processor (CLIP)) perform message
processing and formatting, data translation and forwarding, and radio interface and
control to enable interconnection of legacy data link systems into the airborne network.
5.1.4.4 Performance Enhancing Proxies
Performance Enhancing Proxies (PEPs) improve the performance of user applications
running across the AN by countering wireless network impairments, such as limited
bandwidth, long delays, high loss rates, and disruptions in network connections. These
systems are functionally implemented between the user application and the network and
can be used to improve performance at the application and transport functional layers of
the OSI model. Some techniques that can be employed include:
 Compression: Data compression or header compression can be used to
minimize the number of bits sent over the network.
 Data bundling: Smaller data packets can be combined (bundled) into a single
large packet for transmission over the network.
 Caching: A local cache can be used to save and provide data objects that are
requested multiple times, reducing transmissions over the network (and
improving response times).
 Store-and-forward: Message queuing can be used to ensure message delivery
to users who become disconnected from the network or are unable to connect
to the network for a period of time. Once the platform connects, the stored
messages are sent.
 Pipelining: Rather than opening several separate network connections,
pipelining can be used to share a single network connection for multiple data
transfers.
 Protocol streamlining: The number of transmissions to set up and take down
connections and acknowledge receipt of data can be minimized through a
combination of caching, spoofing, and batching.
 Translation: A translation can be performed to replace particular protocols or
data formats with more efficient versions developed for wireless
environments.
 Embedded acknowledgments: Acknowledgements can be embedded in the
header of larger information carrying packets to reduce the number of packets
traversing the network.
 Intelligent restarts: An intelligent restart mechanism can be used to detect
when a transmission has been dropped and reestablished, and then resume the
transmission from the break point instead of at the beginning of the
transmission.
 Transmission Control Protocol (TCP) replacement or enhancement: A
replacement to standard TCP, or an enhanced TCP variant, can be used over
wireless links with high losses and/or long delays.
 Link Error Awareness (LEA): LEA can be used to distinguish and handle
efficiently the two types of network losses -- losses due to congestion, and
losses related to link errors.
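Of the techniques above, store-and-forward is the simplest to illustrate as a toy message queue (a sketch only; the destination names are hypothetical):

```python
from collections import defaultdict, deque

class StoreAndForwardProxy:
    """Queues messages for platforms that are off the network and
    flushes them, in arrival order, when the platform reconnects."""

    def __init__(self):
        self.connected = set()
        self.queues = defaultdict(deque)   # dest -> pending messages
        self.delivered = []                # (dest, msg) pairs actually forwarded

    def send(self, dest, msg):
        if dest in self.connected:
            self.delivered.append((dest, msg))   # forward immediately
        else:
            self.queues[dest].append(msg)        # store for later delivery

    def attach(self, dest):
        """Platform (re)connects: drain its queue in FIFO order."""
        self.connected.add(dest)
        while self.queues[dest]:
            self.delivered.append((dest, self.queues[dest].popleft()))
```

A deployed PEP would add persistence, expiry, and precedence ordering; the sketch shows only the core queue-then-drain behavior the text describes.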
5.2 AN Equipment
5.2.1 Inter-node Connectivity
5.2.1.1 Routing/Switching
The AN routing/switching function will enable the dynamic exchange of traffic with
other platforms and will be capable of both off-board and transit routing. Off-board
routing pertains only to the traffic originating on or destined for hosts on the local
platform. Transit routing deals with traffic originating on other platforms and destined
for hosts on other platforms. AN routers/switches will also enable the exchange of traffic
to other GIG fixed, deployed, and mobile networks. AN routers/switches must be
capable of performing static, dynamic or ad hoc routing as needed, with flat or
hierarchical routing topologies. The AN will enable platforms to dynamically change
their points of attachment to the network with no disruption to incoming or outgoing
traffic flows and minimal loss of data (i.e., seamless roaming).
The AN routers/switches must direct traffic from the on-board infrastructure to an off-board legacy, subnet, network access or backbone link transmission system. Typical
routing/switching functions include:
 Media translation
 Address translation
 Security and firewalls
 Neighbor (network router) discovery
 Exchange of routing information
 Detection and propagation of link status changes
 Initialization and maintenance of routing tables
 Path determination
 Packet marking, classification, conditioning, and servicing (queuing and
scheduling)
 Packet forwarding
 Policy enforcement
 Packet filtering
 Load balancing
 Route flap dampening (i.e., suppressing router update transmissions caused by
unstable links changing available paths)
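Route flap dampening is commonly implemented with an exponentially decaying per-route penalty, as in RFC 2439 for BGP; the sketch below uses illustrative threshold values, not values drawn from this architecture:

```python
class FlapDampener:
    """Per-route flap dampening: each flap adds a penalty that decays
    exponentially; the route is suppressed while the decayed penalty
    exceeds the suppress threshold and reused once it falls below the
    reuse threshold."""

    def __init__(self, penalty_per_flap=1000.0, suppress=2000.0,
                 reuse=750.0, half_life=15.0):
        self.penalty_per_flap = penalty_per_flap
        self.suppress, self.reuse, self.half_life = suppress, reuse, half_life
        self.penalty, self.last_time, self.suppressed = 0.0, 0.0, False

    def _decay(self, now):
        self.penalty *= 0.5 ** ((now - self.last_time) / self.half_life)
        self.last_time = now

    def flap(self, now):
        """Record a link-state change for this route at time `now`."""
        self._decay(now)
        self.penalty += self.penalty_per_flap
        if self.penalty >= self.suppress:
            self.suppressed = True

    def usable(self, now):
        """True if the route may be advertised/used at time `now`."""
        self._decay(now)
        if self.suppressed and self.penalty < self.reuse:
            self.suppressed = False
        return not self.suppressed
```

The hysteresis between the suppress and reuse thresholds is what prevents an unstable link from repeatedly triggering network-wide routing updates.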
Table 5-1 summarizes the AN routing functional requirements.
Table 5-1. AN Routing Functional Requirements (requirement: description)

Loop-free: AN routing/switching will produce forwarding paths that do not contain
circular routing loops, i.e., traffic passes through an AN router/switch only once on its
path from source to destination.

Directionality: AN routing/switching will be possible with the use of bidirectional,
unidirectional, or asymmetric paths.

Routing Topologies: AN routing/switching will support both flat network topologies and
hierarchical network topologies. When in a hierarchical network topology, the AN will
be capable of end system, intra-area, inter-area, and inter-domain (or autonomous
system (AS)) routing as opportunities and needs arise.

Transmission Types: AN routing/switching will be capable of routing user information
as unicast, broadcast, multicast, and anycast network transmissions between AN Nodes
on any part of the AN or GIG.

Time reference: The AN routing/switching will not require a universal time reference at
all nodes in the network.

Routing Metric: As a minimum, the AN will enable routing decisions to be based upon
the following criteria:
 number of hops,
 bandwidth availability,
 delay,
 relative reliability of each available path,
 traffic load,
 media preference (to prioritize non-performance functions such as LPI/LPD, AJ,
fast handoff capability),
 path bit error rate or packet loss rate,
 cost (i.e., dollars).
As a goal, the AN shall also enable routing decisions to be based upon the following
criteria:
 relative stability of the path as determined by the number of link state changes
that have occurred in a unit of time over all links comprising the path,
 relative availability of the path as determined by the proportion of time the path
has been operational in an interval of time,
 geographic position,
 speed and heading.

Constraint Based Routing: The AN will enable the use of any combination of the above
route selection criteria for routing decisions, prioritized and weighted according to user
determined policy and quality of service needs.

Responsiveness: The AN will change the routes used to forward information in response
to changes in the available communications resources, offered traffic patterns, and
performance demands in order to make most efficient use of network resources in
accordance with the communications policy in place at the time.

Scalability: The AN routing protocols will operate with varying numbers of platforms,
varying numbers of fast-moving platforms, and varying amounts of offered traffic per
platform.

Robustness: The AN routing protocol will be resistant to link or node failures or high
link bit error rates.
 In the event of a failure of a single AN Node or link, the AN will be capable of
routing traffic to the remaining AN Nodes.
 The AN will provide a failover mechanism for any critical AN nodes to avoid
potential single points of failure.
 The AN will be capable of routing information to segments of the network that
became partitioned due to critical AN Node or link failures by using any other
links or networks available, including terrestrial and space networks.

Compatibility with standard Internet protocols: AN routing will be compatible with
standard Internet routing protocols and compliant with applicable Air Force and DoD
directives on their use.
5.2.1.2 Quality of Service (QoS)/Class of Service (CoS)
QoS mechanisms will allow the AN to make decisions on resource allocation and traffic
handling based on recognition of different types of traffic with different performance
expectations. The AN must also provide the capability to prioritize traffic based on class
of user, application, or mission; and to preempt a lower priority data flow if a higher
priority flow is initiated and insufficient resources exist to carry both flows
simultaneously. This capability, referred to as Class of Service (CoS) support,
corresponds approximately to Multi-Level Precedence and Preemption (MLPP).
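The preemption behavior can be sketched in a few lines; this is a notional model (capacities, precedence values, and flow names are illustrative), not the actual MLPP mechanism:

```python
class Link:
    """Admission control with preemption: a higher-precedence flow may
    displace lower-precedence active flows when capacity is short."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = []   # (precedence, rate, name); higher precedence = more important

    def used(self):
        return sum(rate for _, rate, _ in self.flows)

    def admit(self, precedence, rate, name):
        """Admit the flow, preempting lowest-precedence flows if needed;
        returns False if capacity cannot be freed without preempting an
        equal- or higher-precedence flow."""
        while self.used() + rate > self.capacity:
            victim = min(self.flows, default=None, key=lambda f: f[0])
            if victim is None or victim[0] >= precedence:
                return False              # nothing lower-precedence to preempt
            self.flows.remove(victim)     # preempt the lowest-precedence flow
        self.flows.append((precedence, rate, name))
        return True
```

The key property shown is the one the text requires: a higher-priority flow is carried even when insufficient residual capacity exists, at the cost of a lower-priority flow.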
The AN will provide the capability to assess QoS/CoS requirements to determine if they
can be met. QoS requirements are used to derive network resource and configuration
needs. They are successively mapped into quantitative QoS parameters relevant to
various AN systems that can be monitored and controlled. QoS parameters may be
oriented towards:
 Performance - sequential versus parallel processing, delays, data rate
 Format - transfer rate, data format, compression schema, image resolution
 Synchronization - loosely versus tightly coupled, synchronous versus
asynchronous
 Cost - platform rates, connection and data transmission rates
 User - subjective quality of images, sound, response time
In allocating resources, the Link Management system must not only consider resource
availability and resource control policies, but also QoS requirements. To ensure the
required QoS is sustained, the Link Management system must be capable of monitoring
QoS parameters and reallocating resources in response to system anomalies. This
requires monitoring resource availability and its dynamic characteristics, e.g., measuring
processing workload and network traffic, to detect deviations in the QoS parameters.
When there is a change of state, i.e., degradation in the QoS, and the Link Management
system cannot make resource adjustments to compensate, then the application is notified.
The application must either adapt to the new level of QoS or scale to a reduced level of
service.
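The decision logic in the paragraph above (adjust resources first, notify the application only when adjustment fails) can be sketched as follows; the parameter names and return strings are illustrative assumptions:

```python
def react_to_qos_state(measured, required, try_reallocate):
    """Link Management reaction to monitored QoS parameters.

    `measured` and `required` map parameter names to values/upper bounds;
    `try_reallocate` is a callable that attempts a resource adjustment for
    the degraded parameters and reports success.  Returns the action taken:
    'ok', 'reallocated', or 'notify-application' (application must adapt
    or scale to a reduced level of service)."""
    degraded = [k for k, bound in required.items()
                if measured.get(k, bound) > bound]
    if not degraded:
        return "ok"
    if try_reallocate(degraded):
        return "reallocated"
    return "notify-application"
```

The sketch treats all parameters as "smaller is better" upper bounds (delay, loss); a fuller model would also handle lower bounds such as minimum throughput.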
Supporting these QoS functional requirements for the AN, where the network topologies,
links, platforms, and information exchange needs are dynamic, will require an adaptable
QoS architecture. This architecture must include QoS-aware application interfaces, a
variety of dynamically configurable QoS/CoS mechanisms, QoS-based routing, and a
network-wide QoS manager as depicted in Figure 5-1.
[Figure 5-1 shows the QoS functional components of an AN node: a QoS Manager that
coordinates with peer QoS Managers, Link Management, Policy Based Network
Management, and Policy Based Security Management; QoS application interfaces for
applications on host servers or OWSs; and QoS-based routing and QoS mechanisms in
the router/switch and transmission system, connected by data and control paths.]
Figure 5-1. QoS Functional Components
5.2.1.2.1 QoS-Aware Application Interfaces
The AN must be capable of providing adequate communications services for a wide
variety of user mission applications, from those that do not address QoS to those that are
fully QoS-aware, capable of requesting specific QoS needs and adapting to degraded
network conditions. QoS-aware application interface components will provide the
functionality necessary to provide consistent QoS treatment for all user mission
applications, including:
 Identification, classification, and marking of traffic flows
 Monitoring and controlling of traffic flow rates and bandwidth allocations
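The identification, classification, and marking functions above can be illustrated with a small sketch. This is a hypothetical example, not the AN design: a toy classifier maps a flow to a DiffServ class, and the corresponding DSCP code point is written into the socket's IP TOS byte (DSCP occupies the upper six bits, hence the shift). The class names and mapping are assumptions; standard DSCP values are used.

```python
import socket

DSCP = {                 # standard DiffServ code points
    "voice": 46,         # EF  - expedited forwarding
    "video": 34,         # AF41
    "mission_data": 26,  # AF31
    "best_effort": 0,    # default forwarding
}

def classify(flow: dict) -> str:
    """Toy classifier keyed on a flow descriptor; real rules come from policy."""
    return flow.get("traffic_class", "best_effort")

def mark(sock: socket.socket, flow: dict) -> int:
    """Mark the socket's traffic with the flow's DSCP value and return it."""
    dscp = DSCP[classify(flow)]
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return dscp
```

Downstream routers can then apply per-hop behaviors (queueing, scheduling, drop precedence) based solely on the marked code point.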
5.2.1.2.2 QoS/CoS Mechanisms
The AN end-to-end QoS/CoS architecture must include multiple, complementary
mechanisms, such as admission control, traffic shaping, various queue management
strategies, and application-layer aspects as listed in Table 5-2.
Table 5-2. QoS Mechanisms

Tools/Techniques/Technologies                          Use and Benefit
Differentiated Services (DiffServ)                     Prioritizing tool
Admission Control                                      Policy admission tool
Bandwidth Broker                                       Network controlling tool
802.1Q and P                                           Traffic classification and routing tool
Resource Reservation Protocol--Tunneling               Connection assurance tool
  Extensions (RSVP-TE)
Multi-Protocol Label Switching--Traffic                Connection assurance and traffic
  Engineering (MPLS-TE)                                  engineering tool
Real-time configuration management of                  Positive enterprise control tools
  applications, networks, and edge-user devices
Application-to-Network exchange of network             Applications can behave adaptively to
  capability and performance information                 changing network conditions to provide
  (traffic engineering)                                  the user with best possible service
Metadata Tagging (pending modification of              Data is marked with time sensitivity to
  DoD Standards)                                         indicate usefulness, accuracy, and/or need
5.2.1.2.3 QoS-Based Routing/Switching
The AN will employ QoS-based routing/switching in which paths for flows are
determined based upon knowledge of available network resources as well as the QoS
requirement of the flows. QoS-based routing path-selection criteria will include QoS
parameters such as available bandwidth, link and end-to-end path utilization, node
resource consumption, delay and latency, and induced jitter.
The main objectives of QoS-based routing are:
 Dynamic determination of feasible paths: QoS-based routing will determine a
path, from among all available choices, that is capable of accommodating the QoS
of the given flow. Feasible path selection will be subject to policy constraints.
 Optimization of resource usage: Network state-dependent QoS-based routing will
enable the efficient utilization of network resources by improving the total
network throughput.
 Graceful performance degradation: State-dependent routing will compensate for
transient inadequacies in network engineering (e.g., during focused overload
conditions), giving better throughput and a more graceful performance
degradation.
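Feasible-path determination, the first objective above, can be sketched with a simple constrained search. This is an illustrative sketch of the general technique, not the AN's routing algorithm: links that cannot meet the flow's bandwidth requirement are pruned, and the shortest remaining path (by hop count) is found by breadth-first search. The topology and bandwidth figures are hypothetical.

```python
from collections import deque

def feasible_path(links, src, dst, required_bw):
    """links: {node: [(neighbor, available_bw), ...]}. Returns a hop list or None."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr, bw in links.get(node, []):
            if bw >= required_bw and nbr not in visited:   # prune infeasible links
                visited.add(nbr)
                queue.append(path + [nbr])
    return None   # no path can accommodate the flow's QoS requirement

# Hypothetical topology: A-B-D has a 1 Mb/s bottleneck, A-C-D supports 5 Mb/s.
net = {"A": [("B", 1.0), ("C", 5.0)],
       "B": [("D", 1.0)],
       "C": [("D", 5.0)],
       "D": []}
```

A 2 Mb/s flow is steered onto A-C-D, while a 0.5 Mb/s flow may use the shorter-listed A-B-D path; a 10 Mb/s request is rejected, triggering SLA renegotiation.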
5.2.1.2.4 QoS Manager
Each AN node will have a QoS manager that brokers network resources and services for
user mission applications, provisions network resources, and controls the QoS-related
configuration of network components. Each QoS manager will receive requests from
host applications and peer QoS managers for network services. The QoS manager at
Network Service Provider nodes will also receive requests from other component GIG
networks. Each QoS manager will interoperate with the link management, policy-based
network management, and policy-based security management systems to provision
network resources. Each QoS manager will also be capable of controlling the QoS-related
configuration of the network components located at the node.
The QoS manager will be a distributed function capable of performing the following:
 Brokering Network Resources and Services
o Verify user/application QoS privileges
o Negotiate service level agreement (SLA) with application subject to
network and security policies
o Negotiate SLA with other GIG networks subject to network and security
policies
o Monitor network performance
o Alert application/GIG network when network not meeting SLA (i.e., QoS
violations)
o Identify corrective actions and renegotiate SLA with application/GIG
network
 Provisioning Network Resources
o Map QoS requests to network devices
o Manage end-to-end network configuration
o Perform admission control
o Monitor, allocate, and release network resources
o Request additional network resources
o Adapt to changes in available network resources
 Controlling Network Components
o Configure network components for selected QoS/CoS mechanisms
o Adapt/modify network component QoS/CoS related parameters (e.g.,
routing metrics, router update mechanism, router flow conditioning, queue
management, flow scheduling, etc.)
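The "perform admission control" and "monitor, allocate, and release network resources" functions above can be sketched minimally as follows. This is an assumed, simplified model, not the AN design: a controller admits a flow only if spare capacity covers the requested reservation, and releases the reservation when the flow ends.

```python
class AdmissionController:
    def __init__(self, capacity_kbps: int):
        self.capacity = capacity_kbps
        self.allocated = {}              # flow_id -> reserved kb/s

    def admit(self, flow_id: str, request_kbps: int) -> bool:
        """Admit the flow only if spare capacity can cover the reservation."""
        in_use = sum(self.allocated.values())
        if in_use + request_kbps > self.capacity:
            return False                 # QoS manager would renegotiate the SLA
        self.allocated[flow_id] = request_kbps
        return True

    def release(self, flow_id: str) -> None:
        """Return a flow's reservation to the shared pool."""
        self.allocated.pop(flow_id, None)

# Hypothetical 1 Mb/s link shared between a voice and a video reservation.
ac = AdmissionController(capacity_kbps=1000)
```

A rejected request is where SLA renegotiation with the application or peer GIG network would begin.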
5.2.2 Information Assurance
AN Information Assurance components will enable or extend access to DoD GIG based
security or may organically provide the security services. IA services will be enabled
regardless of whether the AN is connected to other GIG transport networks or operating as a
GIG-disconnected, stand-alone network segment. IA services will be designed to function
under constraints such as low-bandwidth, high-latency connections, unidirectional links,
or with nodes operating in LPI/LPD modes, to include receive-only. AN IA components
will interoperate with the GIG IA infrastructure and other non-GIG IA infrastructures,
including coalition IA infrastructures. This interoperability may come from direct
interface with the external infrastructure (e.g., with GIG core security functions) or
through a gateway or proxy system (e.g., interoperating with allied or coalition IA
infrastructure).
5.2.2.1 GIG IA Architecture
The Department of Defense (DoD), Intelligence Community (IC), and other Federal
agencies (e.g., the National Aeronautics and Space Administration (NASA) and the Department
of Homeland Security (DHS)) require the development of an assured global national
security information technology enterprise to achieve their strategic visions of
information superiority, net-centric operations, and collaborative information sharing. To
achieve their visions, the DoD’s GIG and the IC’s Horizontal Integration (HI) initiatives
will transform the way they operate, communicate, and use information. This
transformation from a need-to-know to a need-to-share information protection model is
enabled by tightly integrating the business, warfighter, and national intelligence domains
with the technical infrastructure domain through a net-centric enterprise capability. The
AN is an element of the GIG technical infrastructure, providing airborne connectivity to
support mission critical communications and applications. The GIG IA Architecture
defines six operational mission concepts that IA enablers must support. (See section 6
for additional discussion on GIG operational mission concepts and IA system enablers.)
GIG Mission Concepts and IA System Enablers (Functions)
As an element of the GIG fabric, the AN is a transport component of the larger GIG
enterprise. The AN IA architecture will support the six primary GIG mission areas as
follows:
 Provide Common Services – Provision a set of common services to all users to
support all missions, enabling the Task, Post, Process, and Use (TPPU) model of
information management.
o The AN will use Distributed Policy-Based Access Control to mediate access
for all users to system communications, information and services based on
user identities, privileges, digital policy, object attributes, environmental
attributes and mission need.
o To create a Secure End-to-End Communications Environment, the AN will
determine the end-to-end protection that can be provided, combine it with the
end-to-end protection required by an information object, and use visibility
into enforcement mechanisms to ensure that information is not shared with
systems that cannot adequately protect it.
o Enterprise-Wide Network Defense and Situational Awareness will allow the
AN to audit all accesses to communications, information and services, protect
audit information during generation, processing and storage in such a way as
to ensure non-repudiation by the user, and use audit information to detect
inappropriate access to information and resources.
o The AN will use Distributed Policy-Based Access Control to enable
authorized users to access control policy when critical mission needs require
it.
 Provide Worldwide Access – The ability to connect regardless of location, giving
users access to all assets needed to accomplish their mission.
o Using Identification and Authentication Strength of Mechanism to ensure
confidentiality and authentication, the AN will provide a communications path
for users that have strongly authenticated identities and verified
authorizations.
o Using Enterprise-Wide Network Defense and Situational Awareness, the AN
will provide notification of compromised devices and the revocation of all
authorization for that device.
o Using Enterprise-Wide Network Defense and Situational Awareness, the AN
will have the ability to create audit files that include strongly authenticated
and protected information about endpoints, network devices and processes,
and network management control actions.
o To ensure confidentiality, the AN will use the Secure End-to-End
Communications Environment to protect network topology and access
location data that could be used to geolocate users.
o Using Distributed Policy-Based Access Control, the AN will provide access
control to network transport capabilities based on authenticated user identity,
user privileges, authenticated network access device identity, location, Quality
of Service/Class of Service (QoS/CoS) and user priority settings.
 Provide Dynamic Resource Allocation – Allocate network and computing resources
based on dynamic mission needs. The AN will use Assured Management and
Allocation of Resources to:
o Provide two-way authentication and the establishment of the integrity of all
management and control commands that implement resource configuration,
allocation and enforcement.
o Provide a policy-based enforcement mechanism to ensure that users are not
exceeding their allocated resources or privileges.
o Provide a policy override feature to enforce priority of certain information
such as survival data.
o Ensure audit log entries of all changes to the QoS/CoS settings and policy
override actions are securely maintained.
o Provide a real-time situational awareness picture of all resources.
 Provide Dynamic Group Formation – Dynamically form groups of users and
enable these groups to communicate and share a common set of information and
services only with each other.
o Using Dynamic Policy Management, the AN will provide dynamic allocation
and enforcement of COI communications and computing resources and
services.
o Dynamic Policy Management will also provide support for cross-domain
connectivity between the GIG, the AN, and other external systems.
 Provide Computer Network Defense (CND) – Provide a proactive and reactive
capability for computer network defense; protect, monitor, detect, analyze, and
adaptively respond to unauthorized system and network activities. Using
Enterprise-Wide Network Defense and Situational Awareness, the AN will be able to:
o Provide increased priority to enable the timely delivery and processing of
sensor data and the derived situational awareness information to support the
execution of analyzed and deliberate responses.
o Provide real-time monitoring, collection, anomaly detection and analysis of all
sensor data.
o Ensure protection mechanisms are provided to sensor data and the derived
situational awareness information to guarantee their integrity and availability.
o Provide for the secure collection and processing of sensor and situational
awareness data throughout the AN.
o Provide secure logs to record all CND and IA actions taken to adjust the AN
and GIG configurations. The logs should include the initiating user, time of
action and the action taken.
o Provide an ability to automatically uncover vulnerabilities and predict
intrusion/attack strategies that can exploit the vulnerabilities.
 Provide Management and Control of GIG Network and Resources – Securely
manage and control all communication, computing resources and services, remotely
and in an automatic manner. Given the scope of management and control, protection
of this information is critical to preventing unauthorized entities (including GIG and
non-GIG entities) from gaining access to GIG components and affecting the
availability of the GIG.
o Using Assured Management of Enterprise-Wide IA Mechanisms and Assets,
the AN will provide secure management of the configuration of the GIG,
monitoring for unauthorized configuration changes and securely applying
updates; provide the capability to automatically configure and reconfigure its
resources, receive configuration status from its resources and generate reports
on the configuration of its resources; provide the ability to set limits on the
usage of its resources and provide for automatic corrective actions when
thresholds are exceeded; provide the ability to define, enforce and control a
digital security policy across security domains and COIs; provide the ability to
correlate and deconflict differing policies to arrive at a mutually acceptable
policy; provide the ability to securely distribute and validate updates or
changes to the digital security policy; and provide the ability for network
managers to manage and control differential delivery of information, based on
QoS/priority settings of the information flow, as well as traffic
planning/engineering inputs at a network level.
o Enterprise-Wide Network Defense and Situational Awareness will allow the
AN to ensure all components, especially IA-specific, have the capability of
recording audit events and posting audit reports for access by management
and control entities, and maintaining the availability and integrity of all such
records.
5.2.2.2 Supporting Infrastructures
5.2.2.2.1 Security Management Infrastructure (SMI)
Supporting infrastructures provide the foundation upon which IA mechanisms are used in
the network, enclave, and computing environments for securely managing the system and
providing security-enabled services. Supporting infrastructures provide security services
for networks, end-user workstations, servers for Web, applications, and files, and single-use
infrastructure machines (e.g., higher-level Domain Name System (DNS) services,
higher-level directory servers).
An overall supporting infrastructure for secure environments is referred to as a Security
Management Infrastructure (SMI). An SMI is comprised of components, functions, and
products provided by external systems and/or within the system. Examples of SMI
provided products include software-based cryptographic algorithms, symmetric keys,
public keys, X.509 certificates, and virus update files.
Under the Cryptographic Modernization Initiative (CMI), the DoD is transforming the
cryptographic inventory to offer significant operational benefit in alignment with the
Department’s commitment to transformational communications and related (e.g., net-centric) warfare doctrines. The CMI will introduce new End Cryptographic Units
(ECUs) incorporating flexible, programmable cryptography, new cryptographic
algorithms, new key types, and capabilities for electronic downloading of new operating
or application software, including cryptographic software. ECUs will have provisions for
dynamically changing operating modes, algorithms, or keys in response to new missions.
The emerging ECU characteristics require an overarching ECU Management
Infrastructure. Currently, a framework consisting of five categories of ECU management
has been defined.1
 Inventory Management – Tracking possession (quantities, locations, status,
owner identity) of ECUs.
 Configuration Management – Establishing hardware and software baselines for
ECUs; managing optional and mandatory hardware and software updates; and
maintaining records of approved equipment and system configurations.
 User Data Management – Creation, distribution, and installation of various types
of user data associated with a specific mission, not categorized as configuration or
key management data. Examples are frequency allocations or hop sets and IP
addresses.
 Key Management – Provisioning of cryptographic keys and PKI certificates to
ECUs, including ordering, generation, production, delivery, loading, auditing,
accounting, and tracking of keys and certificates.
 Policy Management – Providing an authority structure to manage and change security
policies enforced in ECUs.

1 ECU management categories are defined in the GIG IA Architecture and are still undergoing refinement,
as is the ECU management program. As a result, these categories should be considered notional at this
time.
The specific infrastructures and processes supporting these management categories will
consist of a combination of external infrastructures and local, organizational
infrastructures. For example, management of AN ECU inventory records will be
conducted at an AN management facility with interfaces to external organizations such as
NSA. User data management, policy management and configuration management
functions will also be initiated and performed at an AN management facility.
Configuration functions such as reprogramming of cryptographic algorithms will require
interfacing to the DoD’s KMI. The KMI will provide the secure creation, distribution,
control and management of public key products, traditional symmetric keys, and support
for manual cryptographic systems. The KMI will support the entire range of key
management needs including registration, ordering, key generation, certificate generation,
distribution, accounting, compromise recovery, rekey, destruction, data recovery, and
administration. While the DoD KMI is the primary source for AN key-related products,
the IC Public Key Infrastructure (PKI) and the DoD PKI may, dependent on mission
requirements, provide support for certain key and certificate requirements.
Currently, the DoD is operating a separate PKI that provides X.509 based certificate
products for DoD personnel and devices. Similarly, the DoD is currently operating the
legacy Electronic Key Management System (EKMS) that provides key management
support for all other symmetric (i.e., traditional) and asymmetric (i.e., public) key
management needs. However, in the FY07 – FY09 timeframe, EKMS functions will be
transitioning into the DoD KMI.2 Also, the DoD PKI will be brought under the DoD
KMI umbrella. Early AN operations may need to interface with the EKMS to obtain keys.
The AN must also plan for evolving its key management processes from EKMS to KMI.
5.2.2.2.2 DoD Key Management Infrastructure (KMI)
The AN will use the DoD Key Management Infrastructure (KMI) and Electronic Key
Management System (EKMS) processes as its core key management capability. The AN
will limit the keys it requests from the KMI to those necessary to configure, operate and
manage the devices that are integral to and directly managed by the AN. These keys will
include those necessary to enable operator, manager, and administrator access to AN
components.
2 It is anticipated that some legacy EKMS functions will be operational past the 2012 timeframe. However,
early transition to emerging KMI functionality will improve automated key delivery and should simplify
key management processes.
The KMI will support the AN by providing:
 End Cryptographic Unit (ECU) registration
 Key ordering and generation
 Certificate generation
 Key distribution, accounting, and compromise recovery
 Rekey functions
 Key destruction
 Key and ECU administration
 Over the Network Keying
 Electronic key products and upgraded cryptographic algorithms for HAIPEs,
IPSec, and circuit link encryptors
 TRANSEC device and packet masking function key material
 TBD functions and keys
5.2.2.2.3 Public Key Infrastructure (PKI)
PKI delivered Public Key (PK) services will support several security functions including
validating integrity and origin of signed executable code (e.g., waveforms), configuration
data, distributed key material, and management and control data. PK services also
provide the basis for mutual authentication, and support identification and authorization.
Finally, the PK certificates will be used to secure IPSec, TLS, and SSL tunnels and
provide security to AN control and management data.
An AN will require node authentication and access control. The AN nodes will not
process information received from other nodes, AN management systems, or devices
external to the AN until they validate that they are communicating with a valid, operational
entity via cryptographic verification of the public key certificate. The digital signature will
be used to provide authentication and integrity security services.
The AN must provide the means for AN connected networks to access GIG resident PK
services. The AN may need to host functionality that will extend PKI services, such as
certificate status checking, to the airborne network, thereby ensuring entities (applications,
devices, and humans) are able to seamlessly access PK services.
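The certificate status checking mentioned above can be sketched with a minimal, self-contained validation routine. This is a hedged illustration, not the AN's validation logic: it checks only the validity window and revocation status against a locally cached CRL; a real node would also verify the issuer's signature chain with a cryptographic library, a step stubbed out here. All certificate fields and values are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Certificate:
    serial: str
    subject: str
    not_before: datetime
    not_after: datetime

def validate(cert: Certificate, crl: set, now: datetime) -> bool:
    """Accept the certificate only if it is inside its validity window
    and its serial number is absent from the cached revocation list."""
    if not (cert.not_before <= now <= cert.not_after):
        return False
    if cert.serial in crl:
        return False
    return True   # chain/signature verification would follow here

now = datetime(2005, 1, 1)
cert = Certificate("0x1A", "an-node-07",
                   now - timedelta(days=30), now + timedelta(days=335))
```

Caching the CRL (or using a local OCSP responder) keeps this check usable over the low-bandwidth, intermittently connected links the AN must tolerate.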
PKI will support the AN by providing:
 Identity, integrity, confidentiality, and non-repudiation
 Class TBD person certificates
 Class TBD device certificates
 Certificate issuance, validation, and revocation for device and person
certificates
PKI must support:
 Disconnected operations
 Terminal, ECU, manager, operator, and administrator and other AN entity
authentication
 Role based access management for nodes and other AN components including
system managers, administrators and operators
 Secure policy distribution
 Network management and operations
 Protocol security
The AN may need to support extending PKI services to the AN operational domain by
providing:
 Local storage of certificate status validation data, such as Certificate Revocation
Lists (CRLs)
 Processes, such as OCSP responders, necessary to accomplish certificate
validation while minimizing bandwidth impact
 Local support for certificate public key storage
 LDAP or other DoD PKI interoperable directory service
 Extended PKI support when disconnected from DoD infrastructure.
5.2.2.3 AN Information Flows
Though the AN will support multiple user communities, the types of information flows
handled by the system are common among those user communities. Table 5-3 identifies
the types of AN information flows that need to be protected within the system
boundaries. Security of the AN and the information transiting the system are critical to
mission success. Classified national security information carried over the AN will be
transmitted only by secure means. Sensitive but unclassified information will be
protected during transmission according to the level of risk and the magnitude of loss or
harm that could result from disclosure, loss, misuse, alteration, destruction, or non-availability. The AN will have the capability to transmit user information ranging in
classification from unclassified up to and including Top Secret/Sensitive Compartmented
Information (TS/SCI). The classification levels will be dependent on the ability of the
system security components and processes or information originator to encrypt the
information and the ability of the end user to provide the appropriate cryptographic
devices for decryption. AN system information (management, disseminated policy, etc.)
will be protected in a threat environment up to TBD.
Table 5-3. Summary of MAC and Confidentiality Level by Information Flow

Information Flow                        Mission Assurance Category    Confidentiality Level
User                                    MAC 1                         Defined by User
Policy Dissemination – Category I       MAC 1                         Sensitive
Policy Dissemination – Category II      MAC 1                         Classified
Network Management – Category I         MAC 1                         Sensitive
Network Management – Category II        MAC 1                         Classified
Network Control – Category I            MAC 1                         Sensitive
Network Control – Category II           MAC 1                         Classified
Figure 5-2 depicts the System information flows and where those flows will interface
with AN components and the GIG.
(Figure: inbound and outbound flows of user traffic, network management, network
control, and policy dissemination data passing between the GIG, an AN node or
terminal, and the AN NOC.)
Figure 5-2. Data Flows in Relation to AN, AN Nodes, and the GIG
5.2.2.3.1 Least Privilege
Least Privilege is an important IA concept for the AN. The key concept behind least
privilege is that a user be provided no more privileges than necessary to perform a task.
In an environment like the AN, this clearly becomes important in trying to ensure that
users in a given user community are not capable of exceeding their assigned permissions
and administrators of the AN are given system access compatible with their roles and
responsibilities. To accomplish least privilege requires identifying what the user's job is,
determining the minimum set of privileges required to perform that job, and restricting
the user to a security domain with only those privileges. By enforcing least privilege,
users or objects of the AN system will be unable to circumvent the organizational
security policy.
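The least-privilege steps above (identify the job, derive the minimum privilege set, confine the user to it) can be sketched as a simple role-based check. The roles and privilege names here are illustrative assumptions, not the AN's actual role model.

```python
# Each role maps to the minimum privilege set needed for its job; anything
# outside that set is denied, so users cannot exceed their assigned permissions.
ROLE_PRIVILEGES = {
    "operator":      {"view_status", "establish_session"},
    "administrator": {"view_status", "establish_session",
                      "configure_node", "manage_keys"},
}

def authorize(role: str, action: str) -> bool:
    """Grant an action only if it is in the role's minimum privilege set."""
    return action in ROLE_PRIVILEGES.get(role, set())
```

An unknown role yields the empty privilege set, so the default is deny, consistent with the least-privilege concept.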
5.2.2.3.2 User Traffic
Transport of user information through the AN is the core mission of the system. User
information will flow from a user’s enclave – with confidentiality protection provided by
the AN node or, in some cases, the user – to the AN. Typically, user information will
enter the AN via an AN node or an edge interface with another transport network, then
flow through the AN out to another AN node or through an interface to another transport
network, and then to the distant end user’s enclave. User information can be data, voice,
video, or imagery. The information owners or producers will determine the information
sensitivity or confidentiality level; the AN node or the user’s system will provide for end-to-end confidentiality protection. While the user information will enter the AN as
sensitive unclassified, the AN will be responsible for ensuring the availability of the
transport and will be required to provide services to protect the transport, such as
TRANSEC, logical separation between users in different user communities, and packet
header masking or cover over RF communications.
5.2.2.3.3 Policy Dissemination
Policy dissemination information comprises the system data flows that are used to distribute
security and management policy and enforce how AN components are managed, utilized
and protected. Those AN components include all resources within the enterprise such as
physical devices (e.g., routers, servers, workstations, security components), software
(services, applications, processes), firmware, bandwidth, information, and connectivity.
The policy dissemination data flows are the digital set of rules, coded in software or
hardware into GIG assets, by which all actions of these assets must abide. To achieve
enterprise wide (end-to-end) policy management of the AN, the policies defining the
rules for use of information, communications (transport), and management and control
functions must be integrated into a cohesive system policy. This includes establishment
of an agreed approach for risk measurement, risk management and risk acceptance,
through an Assured Information Sharing policy [TBD]. The policy defining
interconnections with non-AN entities would specify functions such as U.S. access and
handling rights for a foreign partner’s restricted data. Reciprocally, AN policies must
address the access to GIG assets by these partners.
The dynamic and adaptive aspect of policy dissemination information allows for the rapid
adjustment of these rules in response to crisis situations that require either a reduction of
privileges or increased latitude. These adjustments will change the criteria used to
determine how resources are allocated to users and how access control decisions are
made. The AN must not only support adjustments to policy, but must also support
conditional policies.
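A conditional policy of the kind described above can be sketched as a rule table keyed on the current operational condition: raising the alert state automatically tightens privileges without re-coding components. The conditions, priorities, and rule content below are hypothetical.

```python
# Which traffic priorities ordinary users may submit under each condition.
# In a crisis, routine traffic is shed to free resources for priority flows.
POLICY = {
    "normal": {"routine", "priority"},
    "crisis": {"priority"},
}

def access_allowed(condition: str, priority: str) -> bool:
    """Evaluate the conditional rule; unknown conditions default to deny."""
    return priority in POLICY.get(condition, set())
```

Disseminating an updated `POLICY` table is then a data update, not a software change, which is the point of keeping policy out of component designs.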
5.2.2.3.4 Network Management
Network management information is essential to gaining a view into the status and
configuration of the AN. Network management information may include audit data, the
capturing of AN network events, configuration commands and information, component
management commands (such as AN node session establishment or disconnection),
traffic statistics, and performance data. Key management information is also
considered part of network management information.
Network management information is transmitted between the AN Network Manager and
the AN network components. Network management data flows are two-way flows of
information.
In addition to the intra-AN management flows, network management information will
also be securely exchanged with external network operations centers (NOCs) (such as the
Global Network Operations and Security Center (GNOSC)).
All AN management information flows must use authentication and protection techniques
approved by NSA to ensure the authenticity and integrity of the information.
5.2.2.3.5 Network Control
Network control information includes link management exchanges, network services
exchanges, router exchanges (e.g., Interior Gateway Protocol (IGP) exchanges, Exterior
Gateway Protocol (EGP) exchanges, routing table updates), and QoS signaling exchanges
(e.g., Resource Reservation Protocol – Traffic Engineering (RSVP-TE)). Network control
information protection must include confidentiality mechanisms to protect against traffic
analysis attacks as well as integrity and authentication mechanisms to ensure the control
information is authentic and unaltered. Control information that contains information
that identifies individual users of the AN and/or their location must be protected at the
MAC 1, Classified level.
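The integrity and authentication requirement above can be sketched with an HMAC over each control message under a shared key, using the Python standard library. This is a minimal illustration only: key distribution (via the KMI), replay protection, and the confidentiality layer are out of scope, and the key and message content are hypothetical.

```python
import hmac
import hashlib

KEY = b"an-control-plane-demo-key"   # in practice, provisioned via the KMI

def protect(message: bytes, key: bytes = KEY) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can verify origin and integrity."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(frame: bytes, key: bytes = KEY):
    """Return the message if the tag checks out, else None (reject the update)."""
    message, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return message if hmac.compare_digest(tag, expected) else None
```

A forged or altered routing update fails verification and is discarded before it can influence the routing tables.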
5.2.2.4 AN IA Boundary
Definition
For the purposes of AN IA Architecture, an Information Assurance Boundary will be
defined as:
A conceptual term used to delineate system Security Services (Integrity, Confidentiality,
Non-Repudiation, Authentication, and Availability) from other system services. The
Information Assurance Boundary is rarely analogous to a physical boundary and will
often consist of functions implemented in both hardware and software.
IA Boundary Identification Process
Identification of a system’s IA Boundary begins with creating the system’s Security
Policy. This Security Policy specifies the system’s Security Services to be provided, the
data elements requiring these protections and the handling requirements to be
implemented. Any physical or functional component of the system which implements a
part of its Security Policy is considered to be “Inside the IA Boundary” of that system.
Once the IA Boundary is established, it is submitted to the system’s Designated
Approving Authority (DAA) for Approval.
AN IA Boundary
The IA Boundary is established by the Designated Approving Authority (DAA) to
include all hardware and software sub-systems which implement parts of the system
Security Policy. AN functions which may be included in the AN IA Boundary are listed
below:
 Encryption: TRANSEC, HAIPE, Bulk Encryption
 Privilege Management (e.g., PCF functions)
 Data Security Services (Integrity, Confidentiality, etc.)
 ASWR
 Key Management: Key Ordering, Generation, Distribution, etc.
 Network Operations and Management
 Network Services (e.g., DNS, Configuration Management Server)
 Backbone Routing Services (e.g., Routers, Switches)
 Operator Authentication mechanism(s)
5.2.2.5 Dynamic Policy Based Security Management (PBSM)
The AN, characterized by dynamic network topologies, will be subject to various types of
complex security attacks and varying operational needs. The AN IA management
capability will need to cover multiple security domains and COIs. This will require the
trusted creation, distribution, and automated enforcement of IA policy directives.
Policies prescribe the operational need, acceptable security risk, and weighting between
the two under various conditions. The policy must be implemented in a manageable,
flexible manner, not “hard coded” into the design of system components.
Dynamic policy management is the establishment of digital policies for enforcing how
AN assets are managed, utilized, and protected. This includes policies for access control,
quality of protection, quality of service, resource allocation, connectivity, prioritization,
audit, and computer network defense, as well as policies covering the use of AN
component hardware and software. AN assets include all resources within the AN such
as the physical devices (e.g., terminals, gateways, routers, security components), software
(services, applications, processes), firmware, bandwidth, information, and connectivity.
For example, security policies will be used to guide:
 Firewall configurations, including open and closed ports and allowed or
disallowed protocols
 Proxy configuration
 Attack profiles to detect
 Actions to take when an attack is detected
 Identification and Authentication (I&A) policy
 Virus checking policies, etc.
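As an illustration of policy implemented as manageable data rather than hard-coded behavior, the following sketch evaluates a declarative firewall rule set. The rule fields, port numbers, and default-deny posture are illustrative assumptions, not part of this architecture.

```python
# Sketch: a declarative firewall policy evaluated at run time rather than
# hard-coded into the device. All rule fields and values are illustrative.

# Each rule names a protocol, a port range, and an action.
POLICY = [
    {"proto": "tcp", "ports": range(443, 444), "action": "allow"},   # HTTPS
    {"proto": "udp", "ports": range(53, 54),   "action": "allow"},   # DNS
    {"proto": "tcp", "ports": range(23, 24),   "action": "deny"},    # telnet
]
DEFAULT_ACTION = "deny"  # default-deny posture (an assumption)

def evaluate(proto: str, port: int) -> str:
    """Return the action of the first matching rule, else the default."""
    for rule in POLICY:
        if rule["proto"] == proto and port in rule["ports"]:
            return rule["action"]
    return DEFAULT_ACTION

print(evaluate("tcp", 443))  # allow
print(evaluate("udp", 161))  # deny (no rule: default applies)
```

Changing the policy then requires only replacing the rule list, not modifying the enforcement logic.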
To achieve enterprise-wide (end-to-end) policy management, the AN will implement core
policies promulgated by the GIG. These policies will define the rules for information
use, communications (transport), management and control functions, and service access.
The AN will take these core policies and modify them for operations within the AN’s
domain and unique operational requirements.
The AN must be able to support the policies of non-AN participants (e.g., IC, Allied
Nations, NATO, etc.) who may have airborne and other networks that will need to
interoperate with the AN. Supporting and interoperating with policies generated by non-AN
sources will require establishment of an agreed approach for risk measurement, risk
management, and risk acceptance. The policy would also specify functionality such as
U.S. access and handling rights for allied-restricted data. In most cases, the GIG core
policy will address cross-user group handling rules, requiring little modification within
the AN.
The dynamic aspect of policy management allows for the rapid adjustment of these rules
to meet operational scenarios that require a rapid reduction of privileges and accesses or
increased latitude. These adjustments will change the criteria used to determine how
resources are allocated to users and how access control decisions are made.
The AN must support conditional policies. The policy for accessing an AN asset will
specify different access rules based upon the conditions that apply to this set of
information. Factors such as destination system or data source, path (within the AN,
from outside the AN, or routing through the AN), etc., may modify the level of trust
afforded an entity within the AN, ultimately modifying the level of access allowed.
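A conditional access policy of this kind can be sketched as follows; the trust weights, thresholds, and path categories are illustrative assumptions only, not values drawn from this architecture.

```python
# Sketch: a conditional access policy in which the trust afforded an entity,
# and hence its access level, varies with the path its traffic took.
# All weights and thresholds below are illustrative assumptions.

PATH_TRUST = {
    "within_AN": 3,           # originated and stayed inside the AN
    "routed_through_AN": 2,   # transited the AN
    "from_outside_AN": 1,     # entered from an external network
}

def access_level(base_clearance: int, path: str) -> str:
    """Combine an entity's base clearance with path-derived trust."""
    trust = base_clearance * PATH_TRUST.get(path, 0)
    if trust >= 6:
        return "full"
    if trust >= 3:
        return "restricted"
    return "denied"

print(access_level(2, "within_AN"))        # full
print(access_level(2, "from_outside_AN"))  # denied
```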
5.2.2.5.1 PBSM Framework
Policy based security management requires a framework that addresses policy
management from the point of policy creation to policy installation in end devices. This
framework must include the ability to dynamically update AN policy in response to
changing network conditions or in response to updated GIG security policies. Figure 5-3
provides an architectural framework (based upon the policy management architecture
developed by the IETF Policy Framework Working Group) for performing dynamic
policy management within the AN.
[Figure 5-3 depicts the notional policy management flow: GIG policy input and AN policy
pre-engineering (network modeling and policy validation) feed a Policy Input Point; policy
expressed in a logical language passes to a Policy Promulgation Point; deconflicted policy
passes to a Policy Distribution Point; device-specific commands (e.g., a config file) are
pushed or pulled to Policy Enforcement Points, which return current configuration and
error or event traffic.]
Figure 5-3. Notional Policy Management Architectural Framework
Policy management begins with a pre-engineering phase in which the GIG source policy
and locally developed policy or modifications to the GIG policy are validated before
entering the AN. Validated GIG and local security policies enter the AN at a policy
input point as follows:
 The entity entering the policy at the input point must be authenticated and
validated – the input point must determine if the entity has the proper
authorizations to enter policy. The policies are coded in a high level language
suitable to define policy that will be transferred to a policy promulgation point.
 The policy promulgation point performs policy deconfliction to resolve any
conflicts between the GIG enterprise-wide policy, AN mission-specific and
community of interest policies, and the policies of non-AN entities (e.g., allied,
coalition, etc.) that have access to the AN.
 The deconflicted policy is provided to a policy distribution point, which will
likely be at management and configuration nodes, for inclusion in AN device
configuration loads.
 The policy is distributed to a policy enforcement point -- a network or security
device -- using a push or pull model. The push model will be used for policy that
must take effect immediately because new behavior is needed under a particular
condition. The pull model can be used in cases where a policy change is
scheduled to take effect at a particular time but is not critical to current
operations. Ensuring that AN entities stay synchronized with current policy is a
critical aspect of policy distribution.
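The push/pull distribution step above can be sketched as follows; the class and method names are hypothetical and not drawn from this architecture.

```python
# Sketch of the push/pull policy distribution choice: urgent policy is pushed
# immediately, while scheduled, non-critical policy is queued for enforcement
# points to pull. All names here are illustrative.

class DistributionPoint:
    def __init__(self):
        self.pushed = []   # policies sent immediately (push model)
        self.queue = []    # policies awaiting retrieval (pull model)

    def distribute(self, policy_id: str, urgent: bool) -> str:
        if urgent:
            self.pushed.append(policy_id)   # takes effect now
            return "pushed"
        self.queue.append(policy_id)        # fetched on schedule
        return "queued"

    def pull(self):
        """An enforcement point retrieves all queued policy at check-in."""
        fetched, self.queue = self.queue, []
        return fetched

dp = DistributionPoint()
dp.distribute("block-port-23", urgent=True)
dp.distribute("rotate-keys-weekly", urgent=False)
print(dp.pull())  # ['rotate-keys-weekly']
```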
As needed to meet network node distribution, policy promulgation, distribution, and
enforcement points may be hierarchically distributed in order to support the distributed
nature of the AN. It is anticipated that the network and security management structures will
take advantage of similar data flow and physical architectures to ensure that all AN assets
are securely and operationally managed.
5.2.2.5.2 Policy Protection
The AN policy management architecture will implement the security services and
mechanisms necessary to protect the policy throughout its life cycle. From the point of
creation to installation in the policy enforcement point, every entity handling digital
policy must maintain the integrity of policy information for policy-at-rest and policy-in-transit.
AN assets must maintain the integrity of the source of origin for policy throughout
the management infrastructure. Depending on the source and value of the policy, the AN
may need to provide policy confidentiality protection.
Every entity sending or receiving policy information must be identified and
authenticated. An AN entity's privileges to send, receive, and modify policy, as well as to
send error messages to the policy promulgation point, must also be validated. Other
security services include the logging of all policy management transactions and the
assured availability of the management infrastructure. The availability of the policy input,
promulgation, distribution, and enforcement points is a critical aspect of maintaining the
security posture of the AN and is vital to nearly all AN functions.
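A minimal sketch of integrity and source-of-origin protection for policy-in-transit follows, using an HMAC over the policy body. A fielded AN implementation would use approved cryptography and key management; the shared key shown is illustrative only.

```python
# Sketch: integrity and source-of-origin protection for policy-in-transit.
# The policy originator computes an HMAC tag; any entity receiving the policy
# verifies the tag before acting on it. The key is an illustrative placeholder.
import hmac
import hashlib

SHARED_KEY = b"illustrative-key-only"  # assumption: not a real AN key

def sign_policy(policy: bytes) -> bytes:
    """Compute an integrity/origin tag over the policy body."""
    return hmac.new(SHARED_KEY, policy, hashlib.sha256).digest()

def verify_policy(policy: bytes, tag: bytes) -> bool:
    """Accept the policy only if its tag matches; detects any alteration."""
    return hmac.compare_digest(sign_policy(policy), tag)

policy = b"deny tcp port 23"
tag = sign_policy(policy)
print(verify_policy(policy, tag))                # True: unaltered
print(verify_policy(b"allow tcp port 23", tag))  # False: tampered in transit
```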
5.2.2.5.3 Security Management System
TBD
5.2.2.6 Data Protection
The AN will use a variety of mechanisms to secure user and system data at rest and in
transit. These mechanisms will range from the Type 1, High Assurance Internet Protocol
Encryptor (HAIPE) to commercial Type 2 encryption systems, to software based
mechanisms, such as the IPSec protocol described by RFC 2401. The level of security to
be applied to a data object will be defined by the object's “Quality of Protection” (QoP)
markings.3 The AN Distributed Policy Based Security Management and Access Control
systems will provide the enforcement mechanisms to assure the specified QoP is
provided. Data, whether user data or system management data (including policies), is
vulnerable at two different times: at rest, stored in a form waiting to be used, and in
transit between the storage facility and the user.
3 As part of the data labeling, an object will be associated with security properties and policies
necessary for protecting it. Properties can include how to protect the object as it travels across
the network (e.g., commercial grade vs. Type 1 protection, as well as object and/or packet
and/or link level data-in-transit protection requirements), how the object can be routed (e.g.,
must be contained within DoD controlled resources or can flow through networks external to
the GIG), or how the data object must be protected at rest. Quality of Protection is different
from metadata that describes the contents of the object. Metadata is designed to facilitate
discovery and data sharing. QoP defines how a data object is protected while it is at rest and
in transit.
Data-at-rest protection will be required for some types of data (e.g., sensitive data, some
policy, etc.) and for certain environments, such as information stored on a system in a
hostile environment. For shared information, data-at-rest protection must be provided at
the object level, where an object is defined as a file or pieces of a file. This leads to a
large range of object types, each of which must be properly protected. Data-at-rest
protection will be provided in a number of ways, including media encryption mechanisms.
Data-in-transit protection consists of the ability to provide confidentiality, integrity,
and/or authentication services to information as it is transmitted through the AN.
Protection of data-in-transit includes providing traffic flow security (TFS). Traffic flow
security should be provided for all high-risk links in the AN but could also be provided
for medium- or low-risk links. TFS protections generally include protections against
network mapping and traffic analysis. In general, the lower in the protocol stack
confidentiality is applied, the greater the TFS benefit. For IP networks, a variety of
mechanisms can be used. For IP environments where the communications links are
circuit based and routers are protected, hop-by-hop link encryption can be applied to
provide TFS for encrypted packet traffic. AN network traffic will be protected at the
application, network, and link layers.
5.2.2.6.1 Application Layer Protection
Application layer data-in-transit security is the protection of information as it flows from
one end user terminal to another, where the end user terminals apply the protection
mechanisms, and the protected information is not accessible at any point during
transmission. Application layer protections, implemented by user applications or other
user-managed security tools, are beyond the scope of the AN. However, they can impact
AN security by preventing virus checking and intrusion detection and prevention tools
from inspecting the contents of the data payload.
5.2.2.6.2 Network Layer Protection
Network layer data-in-transit security is the protection of IP packets as they flow across
the AN to other AN nodes or to nodes existing beyond the edge of the AN. Protection
could be from enclave to enclave, or from host to host. High Assurance Internet Protocol
Encryptor (HAIPE) compliant devices will be used to provide Type 1 data-in-transit
network layer security.
The AN HAIPE device will be a modular, software programmable INFOSEC module
designed to meet the low and high data throughput requirements necessary to support all
AN users. Its modular design will facilitate its operation as a component of the AN node
or as a platform LAN supporting component.
In order to support backwards compatibility with legacy communications systems, the
HAIPE supporting programmable INFOSEC module (PIM) will also support operations
as a bulk encryption device. Being able to program the PIM as a HAIPE device or bulk
encryption device will facilitate use of AN supporting components to provide capabilities
that will be matched to the mission, platforms, and communications transport needs.
Figure 5-4 provides a notional, block diagram view of the components that may comprise
an AN node supporting terminal. Key to the Figure 5-4 components is their flexibility –
they are designed to support straight-through connectivity or bypassing and are designed
around a function-agnostic switching fabric. This will allow a module to be programmed
as a switch, router, or other network component as required to meet mission needs.
[Figure 5-4 depicts a terminal built around a function/data-agnostic switching fabric that
can operate as a router, switch, mux, etc., as dictated by software load and operations
policy. A programmable INFOSEC module (PIM), operating as a HAIPE or bulk encryptor,
sits between red- and black-side fabrics, with RF or optical transmission, TRANSEC, and
control functions on the black side. Security applications (ASWR, IDS/IPS, virus checking,
key management), and potentially network management and other applications, run on both
the red and black sides; the red side supports plain text user data, while the black side
supports terminal security, management, control, and configuration data. Connectivity
options allow horizontal flow through the terminal or redirection of input and/or output to
the platform.]
Figure 5-4. Notional AN Supporting Terminal Block Diagram
5.2.2.6.3 Link Layer Protection
As an RF based network, the AN will need to provide protection against a number of
threats, such as jamming, not found in wired networks. IP networking link layer
protection will be provided by Transmission Security (TRANSEC) mechanisms.
TRANSEC implemented link layer security provides the following protection and
functions:
 Anti-Jam (A/J) protection
 Low Probability of Interception/Detection (LPI/LPD) protection
 Traffic Flow Security (TFS) and Traffic Analysis Protection (Note: the HAIPE
device provides IP network layer TFS protection)
 Signals analysis protection
 Protocol and header cover/packet masking
 TRANSEC isolation for major sets of users (If connecting to TSAT, AN-supporting
TRANSEC will need to meet TSAT TRANSEC user group separation
requirements.)
TRANSEC will be implemented in a stand-alone, embedded, modular programmable
INFOSEC device that will operate at the data rates and with the cryptographic algorithms
necessary to support user needs.
5.2.2.7 Key Management
Key management is one of the fundamental aspects of Information Assurance. Key
management includes the following elements/attributes:
 Key Management Plan – a document that describes the process of handling and
controlling cryptographic keys and related material (such as initialization values)
during their life cycle in a cryptographic system, including ordering, generating,
distributing, storing, loading, escrowing, archiving, auditing, and destroying the
material.
 Algorithm Flexibility – the capability to support a variety of cryptographic
algorithms and systems, of varying strengths and types, and to change algorithms
when appropriate to meet new interoperability or mission requirements.
 Coalition Interoperability – the capability of interoperating with cryptographic
systems used by coalition partners while simultaneously providing a high
assurance U.S.-only capability.
 Authorized Key Sources – keys can be either locally generated or provided by a
central authority, with source and integrity validated.
 Trusted Key Storage – keys must be stored securely on an End Cryptographic
Unit (ECU) such that they cannot be extracted in a useful way by an attacker
(including attackers' software agents) and cannot be modified in an
unauthorized way without detection.
 Key Compromise Recovery Mechanisms – the capability to identify all ECUs
impacted by a key compromise and ensure the rapid recovery of operations.
 Key Destruction Mechanisms – the capability of “destroying” keys when
required by circumstances.
Key Management Architecture (KMA)
The AN KMA will consist of the following elements:
 AN Network Operations Center (NOC) (i.e., terrestrial management facility or
network service provider)
 AN Nodes and Terminals
 Key Loading and Initialization Facility (KLIF)
 National Security Agency (NSA) KMI/EKMS
 TBD
The AN key management infrastructure must be capable of operating for extended
periods of time without external connectivity or support. Therefore, a closed system for
supporting the key material requirements of the AN elements will be needed, rather than
an open system where the elements could interact directly with the NSA KMI/EKMS for
all key materials. The AN NOC will be the primary interface to the NSA KMI/EKMS
for key material. The NOC is expected to supply keying material to all other
elements of the AN, as well as to support all crypto planning functions, compromise
recovery functions, and the controlling authority. Prior to performing any key
management services, the identity of the node or component must be ascertained using a
robust method such as a PK-based digital signature. Benign key distribution techniques
must be used to ensure that RED keys are accessible only to the end-device.
Where possible, and in any newly developed cryptographic devices, asymmetric (public)
keys should be used in favor of symmetric keys. Asymmetric keys have inherent
properties that make them more secure than symmetric keys.4 The AN network must
support in-band benign keying and rekeying of AN node and component public key
certificates. Also, with the emergence of multi-channel, software programmable
communications systems, it may be advantageous to distribute all keys for a given
system as part of the system's configuration file, essentially delivering initial key
material outside the key distribution system.
TRANSEC for RF links will continue to require the use of symmetric keys. The AN
must support the benign distribution of these keys to the node in support of the
appropriate user group, protect these keys while resident in the equipment through robust
anti-tamper measures, and provide timely compromise recovery.
The AN NOC will also:
 Report key compromises to the KMI
 Report public key certificate revocations back to the PKI
 Control the PCFs
 Authenticate terminals/users/networks
 Download/upload mandatory software
 Maintain ECU inventory records
 Manage ECUs (inventory, configuration, user data, keys, and policy)
 Manage the security configuration of AN resources
 Perform audit and ASWR functions
 Manage security policies
 Manage user groups
4 With asymmetric keys, a unique key is generated for each transaction through a public key
exchange, while in a shared symmetric key operation, a single compromise leads to a
system-wide compromise. Also, public key certificates allow for individual certificate
revocation to selectively remove compromised individuals/devices without system-wide
rekeying.
5.2.2.8 Authentication System
Authentication establishes the validity of a claimed identity, and Access Control is the
prevention of unauthorized use of the system's resources. Robust Authentication and
Access Control are especially important to the AN to ensure that a given user/device has
been provided appropriate access to the AN's resources.
Authentication applies to all of the information flows as follows:
 User Information Flows: Required in order to establish/maintain the logical
separation of COIs.
 Network Management Flows: Required to avoid spoofing by a potential
adversary.
 Network Control Flows: Required to ensure the integrity of network operations
and to enable proper management/control.
 Policy Control Flows: Required to ensure proper maintenance and separation of
the COIs.
Access to and use of the system will be restricted to known, verified users. Factors that
may be used in the authentication process include biometrics, cryptographic tokens, and
an integrated Public Key Infrastructure. Multiple factors will be required to ensure that
only users with known, verified identities are allowed access.
Network management information flows must include, at a minimum, two-factor
authentication features. Configuration files, routing tables, and software updates must
have robust authentication to assure management by only the control center.
Network control systems must be protected by rigorous physical and personnel security
methods. Robust, at a minimum two-factor, authentication, including PKI and
biometrics, must be used to improve the security of network control transactions.
5.2.2.9 ASWR
The AN will provide host and network based Intrusion Detection Systems (IDS),
Intrusion Prevention Systems (IPS), and Virus Detection, Prevention, and Removal
applications. Larger AN nodes, such as the Network Service Provider and Internetwork
nodes, will support full IDS, IPS, or virus applications and detect and respond to
anomalous network traffic. Less capable nodes, such as Network Access and Network
Capable nodes, will employ agents that monitor network traffic for anomalous behavior,
sending data back to an aggregation point and then responding to the central service's
instructions.
Transient encrypted user data will not be tested for anomalous content as the node will
not have the keys or requirement to unwrap the packet for inspection.
The AN will support plain text (red side) IDS, IPS, and virus capabilities. However,
platforms with organic IDS and/or IPS capability may request the AN node to pass their
plain text data directly to their processes.
The AN node is responsible for monitoring the AN management and control path for
anomalous network behavior and virus-infected code, taking action based on the current
policy.
ASWR process and sensor collected data will be sent to network management
aggregation points. The network management nodes will analyze the data to determine if
detected anomalous behaviors are localized to a few nodes or occurring network wide.
Processed and aggregated data will be forwarded to GIG network operations centers.
Dependent on the analysis results, network security and operations policy will be
generated and distributed throughout the network modifying IDS, firewall, and other
security component behavior, to minimize anomalous behavior impact.
The AN will employ reactive IDS and migrate to IPS as intrusion detection system and
software technology matures.
5.2.2.9.1 Network Intrusion Detection System
The Intrusion Detection System (IDS) defends system components against threat actions
carried by network data traffic. The AN will use a combination of host based and
network based IDS processes. Host-based IDS components -- traffic sensors and
analyzers -- will run directly on one or more of the hosts they are designed to protect.
Network-based IDS sensors will be placed on subnetwork components, and associated
analysis components will run either on subnetwork processors or other, more powerful, hosts.
The IDS will be configured using GIG-provided attack signatures that will be delivered
with the AN Node configuration files. Updated attack signatures that result from local
detection of previously unknown attacks, or are received from the GIG, will be delivered
using security policy updates or via an updated configuration file.
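Signature-based detection with an updatable signature set can be sketched as follows; the signatures and function names are illustrative, not GIG-provided content.

```python
# Sketch of signature-based intrusion detection: payloads are checked against
# a configurable signature set, so that updated signatures change detection
# behavior without code changes. All signatures below are illustrative.

SIGNATURES = {
    "sig-001": b"\x90\x90\x90\x90",   # e.g., a NOP-sled fragment
    "sig-002": b"../../etc/passwd",   # e.g., a path-traversal probe
}

def inspect(payload: bytes):
    """Return the IDs of all signatures found in the payload."""
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

def update_signatures(new_sigs: dict):
    """Apply an updated signature set, as from a policy or config update."""
    SIGNATURES.update(new_sigs)

print(inspect(b"GET ../../etc/passwd HTTP/1.0"))  # ['sig-002']
update_signatures({"sig-003": b"cmd.exe"})
print(inspect(b"run cmd.exe now"))                # ['sig-003']
```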
59
The IDS will evolve to become the Intrusion Prevention System (IPS). The IPS hardware
and software detects and prevents known attacks. It will employ methods such as
heuristics, anomaly checking, or signature-based filtering to detect the attack. Intrusion
prevention is specifically targeted at detecting and then preventing publicly known yet
stealthy application-specific attacks. Intrusion prevention blocks malicious actions using
multiple algorithms and will provide blocking capabilities that include (plus go beyond)
signature-based blocking of known attacks.
The AN IDS and IPS processes will locally log and report back to the AN NOC detection
and any remediation results. The NOC will use the information to modify network
security policy. In addition, the NOC will aggregate the information and report it back to
the appropriate GIG network operations center.
5.2.2.9.2 Virus Detection, Prevention, and Removal
The AN virus detection, prevention, and removal implementation will employ signature
matching, code enumeration, and checksum comparison to detect virus infected code or
data.5 Once detected, and dependent on the security and operations policy in place, the
virus checking application may attempt to remove or “fix” the data file, alert the
operator, or quarantine the file for later action. The virus checking application is
only effective when applied to plain text data. Because of this and the need to provide a
defense in depth capability, hosts on platform LANs should also implement their own
local virus checking applications.
The AN virus detection and remediation processes will locally log and report back to the
AN NOC detection and remediation results. The NOC will use the information to modify
network security policy. In addition, the NOC will aggregate the information and report
it back to the appropriate GIG network operations center.
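The checksum comparison approach described in footnote 5 can be sketched as follows; a real implementation would use a stronger integrity check than a simple byte sum, which is shown here only to illustrate the technique.

```python
# Sketch of checksum-based virus detection: the current size and summed byte
# value of a file are compared against the same attributes recorded for a
# known uninfected version. The byte-sum checksum is illustrative only.

def attributes(data: bytes):
    """Record the file attributes used for comparison: size and byte sum."""
    return (len(data), sum(data))

def is_possibly_infected(data: bytes, baseline: tuple) -> bool:
    """An infection will often change the file's size or byte sum."""
    return attributes(data) != baseline

clean = b"original program image"
baseline = attributes(clean)  # taken from the known uninfected version
print(is_possibly_infected(clean, baseline))                    # False
print(is_possibly_infected(clean + b"<viral code>", baseline))  # True
```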
5.2.2.9.3 Vulnerability Assessment System (VAS)
TBD
5.2.2.10 Protocol Security
The routing, transport, management, and other protocols used to enable the AN must be
secured. At a minimum, protocol security will include origin validation, integrity,
confidentiality, and mutual authentication between endpoints. Dependent on the
sensitivity of the protocol’s data, that security may require use of Type 1 or Type 3
encryption techniques.
5 There are three common approaches to viral code detection. The first, signature matching,
requires information about the virus; specifically, a signature-based detector requires the
virus' code length and the location of its “contagious” segment. The second technique, code
enumeration, involves examining known programs periodically to test whether any unknown
segments of code have been added to the original file; it is most effective when applied
before each execution of the program. Finally, checksum methods compare the current size
of a file and the summed value of its bytes to the same attributes of a known uninfected
version - an infection will often change these values.
Many of the standards-based protocols do not organically provide the level of security
required by the AN, or else have security implementations that will cause excessive
overhead if employed in the AN. These shortfalls must be remediated by extending
current specifications or by generating replacement specifications that are optimized for
use in the wideband, wireless domain.
5.2.2.11 Other Security Functions
5.2.2.11.1 Firewalls
TBD
5.2.2.11.2 Guards – Cross Domain Solutions
TBD
5.2.2.11.3 Covert channels
TBD
5.2.2.12 Security Modeling and Simulation
TBD
5.2.3 Network Management
The Airborne Network will be characterized by dynamic network topologies and
bandwidth-constrained, variable quality links. Due to the special challenges posed by
these features, it will be impractical to apply the same network management (NM)
methods used in terrestrial networks to the Airborne Network. Two significant
architectural challenges must be addressed: (1) enabling the NM capability to adapt to
dynamic topology changes; and (2) developing efficiency enhancements to minimize
management overhead over bandwidth-constrained, variable quality links.
The NM architecture for terrestrial networks is typically a centralized or hierarchical
structure in which the relationship between the manager(s) and managed nodes changes
very infrequently. The Airborne Network, however, will self-form, and undergo dynamic
topology changes. Nodes may enter and leave the network, subnets may partition from
the main network, separate subnets may merge to create a larger network, etc. As such,
reliance upon static management relationships between nodes, and the assumption of a
static topology, cannot be applied to the management architecture of the Airborne
Network.
The Airborne Network will also be confronted by significant efficiency demands. In
general, maintaining network robustness when the majority of the network nodes are
mobile will require a higher degree of maintenance overhead when compared to static
terrestrial networks. Yet Airborne Network links will be bandwidth limited, and of
variable quality; the dynamic nature of highly mobile airborne platforms will result in
occasions of link outages and degraded link quality. These constraints make it critical
that the network management function be not only efficient, but also balance the “cost”
of its transactions against the benefit that would result from conducting the transactions.
5.2.3.1 Cluster-Based Management Architecture
To address these challenges, a cluster-based management architecture is needed. A
cluster-based management architecture groups individual nodes into clusters that are
managed by a cluster manager. The cluster managers are in turn managed by a network
manager. The main concept behind cluster management is to convert a traditionally
centralized process into a semi-distributed process to address the challenges posed by
mobile networking (i.e., a dynamic topology and enhanced NM efficiency). The major
components of the architecture include a Network Manager, Cluster Managers, and
Intelligent NM Agents.
5.2.3.1.1 Network Manager
The Network Manager executes management applications that monitor and control the
managed elements – i.e., the subordinate cluster managers. Location options for the
Network Manager include an air-based or ground-based node. Based on the tenet that the
Airborne Network management solution must scale across the variety of operational
configurations (including autonomous operations disconnected from GIG enterprise
services), it is assumed that the Network Manager will typically be air-based. It should
be noted that a ground-based manager is not precluded in this architecture; in certain
instances, a ground-based manager may be advantageous given the benefits of collocation
within terrestrial centers of C2. However, to enable the desired management efficiency
and autonomy features, the architecture will assume capabilities of an air-based manager.
A required feature of an airborne Network Manager is its ability to transition
management responsibilities to another node if the hosting platform leaves the network,
or is no longer able to serve as the Network Manager. As such, a fail-over and hand-off
function is required of the manager.
[Figure 5-5 depicts three clusters of Intelligent NM Agents, each managed by a Cluster
Manager; the Cluster Managers report to the Network Manager, which in turn interfaces
with a terrestrial-based peer or super-ordinate manager.]
Figure 5-5. Cluster Based Management Framework
5.2.3.1.2 Cluster Managers
Cluster Managers are selected dynamically as the network forms. The performance
objective in selection of clusters is to minimize protocol overhead between the node and
manager, while maximizing management service availability over a dynamic topology.
As Cluster Managers are selected, local NM/PBNM services are instantiated within them.
These intermediate managers then take responsibility for local NM and policy
dissemination functions in accordance with the policies of the Network Manager. Cluster
Managers act as proxies to aggregate and filter information -- based on cost-benefit
examination -- from the cluster's nodes to the Network Manager. Likewise, Cluster
Managers receive NM and policy directives from the Network Manager and disseminate
them as appropriate to constituent cluster nodes.
When acting as elements of a larger network, the Cluster Managers may collaborate
among themselves to adapt to the changing topology. New Cluster Managers may be
designated as subnets partition. As partitions merge, Cluster Managers may collaborate
to determine which will serve the role of Cluster Manager of the larger network. Thus,
the maintenance of Cluster Managers is dynamic, adapting to the topology changes of
the network.
Cluster Managers enable clusters to operate autonomously while disconnected from the
Network Manager due to network partitioning, loss of links to the Network Manager, etc.
Because the Cluster Manager maintains the cluster’s topology, health, and status
information locally, this information will be available to the Network Manager upon
reconnection of the cluster to the network. Thus, a dynamic and distributed management
structure, rather than a centralized static structure, is applied to adapt to the dynamic
topology of the Airborne Network.
5.2.3.1.3 Intelligent NM Agents
Imparting local intelligence within the managed elements enables the nodes to conduct
localized and autonomous collection, aggregation, analysis, and problem resolution --
thus reducing the management overhead that occurs in centralized architectures with
“dumb” agents. Intelligent NM Agents conduct cost-benefit assessments for interacting
with the Cluster Managers – taking into account the link’s available bandwidth, the
node’s Emission Control (EMCON) state, the importance and timeliness of event
reporting, etc.
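To make the cost-benefit behavior concrete, the following sketch shows one way such a reporting decision might be structured. This is illustrative only: the class names, thresholds, and the simple cost formula are invented for the example and are not part of the architecture.

```python
# Illustrative only: a minimal sketch of an Intelligent NM Agent's
# cost-benefit reporting decision. Thresholds and field names are
# hypothetical, not drawn from the architecture document.
from dataclasses import dataclass

@dataclass
class LinkState:
    available_bw_kbps: float   # spare capacity on the link to the Cluster Manager
    emcon_active: bool         # node is under Emission Control (EMCON)

@dataclass
class Event:
    severity: int              # 1 (informational) .. 5 (critical fault)
    age_seconds: float         # time the event has waited in the queue
    size_bytes: int            # size of the report if sent now

def should_report(event: Event, link: LinkState,
                  min_severity: int = 3, max_age: float = 60.0) -> bool:
    """Send only if the benefit (importance, staleness) outweighs the cost
    (bandwidth consumption, emissions)."""
    if link.emcon_active:
        return False               # EMCON state overrides all reporting
    if event.severity >= min_severity:
        return True                # important enough to report immediately
    if event.age_seconds >= max_age:
        return True                # flush stale low-priority data
    # Fresh, low-priority event: report only if the link cost is negligible
    send_seconds = event.size_bytes * 8 / 1000 / max(link.available_bw_kbps, 0.001)
    return send_seconds < 0.01

link = LinkState(available_bw_kbps=16.0, emcon_active=False)
print(should_report(Event(severity=4, age_seconds=0, size_bytes=200), link))  # True
print(should_report(Event(severity=1, age_seconds=5, size_bytes=200), link))  # False
```

A real agent would, of course, weigh many more factors; the point of the sketch is that the decision is made locally, at the managed node, rather than by polling from a central manager.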
5.2.3.2 Policy Based Management Framework
The IETF Policy Framework Working Group has developed a policy management
architecture that is considered the best approach for policy management on the Internet. It
includes the following components:
 Policy Server – A graphical user interface for specifying, editing, and
administering policy.
 Policy Repository – A device used to store and retrieve policy information.
 PDP (policy decision point) – A resource manager or policy server that is
responsible for handling events and making decisions based on those events and
updating the PEP configuration appropriately.
 PEP (policy enforcement point) – Network devices such as routers, firewalls,
radios, and hosts that enforce the policies received from the PDP.
 LPDP (local policy decision point) – Scaled-down PDP used when a policy server
is not available.
In addition to the Policy Server/PDP and PEPs, an intermediate set of elements, termed
cluster PDPs, will be needed within the airborne management architecture to
accommodate efficiency and autonomy requirements.
In most implementations of the framework, the Policy Server, Policy Repository, and
PDP are collocated and may potentially be hosted within the same physical device. The
discussion of the framework below assumes this consolidation of system components.
In the framework, a ground-based resource planner would formulate high-level airborne
networking policies -- that is, the relatively static, high-level specifications
of the desired behavior of flows. These sets of high-level policies would be downloaded
into the Policy Server/Repository/PDP. The Policy Server/Repository/PDP is responsible
for distributing policies to the intermediary PDPs. There are two approaches for
distributing policy: outsourcing, and delegated provisioning. In outsourcing, all policy
decisions are conducted at the PDP. In delegated provisioning, policy directives may be
defined in broader terms, with flexibility for local policy interpretation based on local
events and conditions. The accompanying figure illustrates a hybrid solution for policy
distribution. The outsourcing model (as reflected by the blue lines) is applied to maintain
explicit control of particular PEPs. The provisioning model (as reflected by red lines) is
recommended at cluster PDPs to minimize signaling overhead and enable the adaptation
features required in policy enforcement to accommodate a dynamic topology. Policy
managed services may include quality of service, traffic engineering features, and
security attributes. Each service would employ a common set of policies, called a Policy
Information Base (PIB).
[Figure shows ground-based Modeling, Analysis & Simulation Tools and Resource Planning Tool(s) (policy planning and formulation) feeding a Policy Server/Repository/PDP (policy distribution, policy decisions, policy accounting), which reaches Cluster PEPs either directly (policy outsourcing) or through Cluster PDPs (policy provisioning).]
Figure 5-6. Policy Based Network Management Framework
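A toy sketch may help fix the distinction between the two distribution models. The policy names, flow classes, and the "congestion" adaptation below are invented for illustration; they are not part of the IETF framework or of this architecture.

```python
# Hypothetical sketch of the hybrid policy-distribution model: "outsourced"
# PEPs refer every decision to the central PDP, while "provisioned" cluster
# PDPs hold a delegated copy of the policies and interpret them locally.

CENTRAL_POLICIES = {
    "voice":   {"dscp": 46, "max_kbps": 64},
    "video":   {"dscp": 34, "max_kbps": 512},
    "default": {"dscp": 0,  "max_kbps": None},
}

class CentralPDP:
    def decide(self, flow_class: str) -> dict:
        # Outsourcing: the central PDP makes the decision for each request.
        return CENTRAL_POLICIES.get(flow_class, CENTRAL_POLICIES["default"])

class ClusterPDP:
    """Provisioning: holds delegated policies and adapts them to local
    conditions (here, a congestion flag) without signaling back home."""
    def __init__(self, policies: dict):
        self.policies = dict(policies)   # delegated at provisioning time
        self.congested = False
    def decide(self, flow_class: str) -> dict:
        policy = dict(self.policies.get(flow_class, self.policies["default"]))
        if self.congested and policy["max_kbps"]:
            policy["max_kbps"] //= 2     # local interpretation of broad intent
        return policy

central = CentralPDP()
cluster = ClusterPDP(CENTRAL_POLICIES)
cluster.congested = True
print(central.decide("video")["max_kbps"])   # 512 -- explicit central decision
print(cluster.decide("video")["max_kbps"])   # 256 -- locally adapted decision
```

The trade-off is visible even at this scale: outsourcing gives exact central control at the cost of a signaling round-trip per decision, while provisioning trades some control for autonomy and reduced overhead.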
PBNM Overlay
The policy network management framework outlined above may be overlaid onto the
cluster-based management architecture. As shown in the figure below, cluster managers
are provisioned to act as intermediary (Local) Policy Decision Points (LPDP),
interpreting and distributing policies from the Policy server/PDP to their cluster nodes
(Cluster PEPs). Because each cluster manager acts as a local PDP for its cluster PEPs, the
node is imparted with a degree of self-sufficiency required in semi-autonomous cluster
operations.
[Figure shows Clusters 1-3; legend: ground-based resource planning, policy formulation, and modeling, analysis, and simulation; Network Manager/Policy Server/PDP; Cluster Manager/LPDP; Cluster PEPs.]
Figure 5-7. Cluster Based Management Framework with PBNM Overlay
Modeling, Analysis, and Simulation tools would take policies formulated from the
Resource Planner -- and policy accounting statistics from the Policy Server/
Repository/PDP -- to evaluate the performance of current policies, and provide a facility
to conduct “what if” scenarios to formulate improved and/or adjusted policies. These
functions would be conducted at a ground based node.
5.2.3.3 Network Management System Components
The general system components for conducting NM across the AN are reflected in Figure
5-x. This particular diagram depicts the notional placement of those system components
within the platform AN Equipment and On-Board Infrastructure. In today’s Internet,
management interoperability is accomplished using TCP/IP for communications
protocols, SNMP for a management protocol, and a standard management information
base for managing these protocols (MIB-II). In a similar fashion, the objective AN will
require an analogous set of communications protocols, management protocol, and the
corresponding MIBs.
Every AN managed device will implement an Intelligent Agent (IA). The IA will
execute the client-level portion of FCAPS functions for the AN, maintaining the fault,
configuration, performance, and security attributes of the device. In platforms with
multiple managed devices, a “master” IA will proxy and aggregate information across the
“slave” IAs, compiling summary reports to the Cluster Manager. The IA is focused on
minimizing unnecessary management overhead across bandwidth-constrained AN links.
As such, the master IA will apply a high degree of “intelligence” to conduct localized and
autonomous collection, aggregation, analysis, and problem resolution functions
concerning the platform’s managed objects. The master IA acts as a proxy to aggregate
and filter information -- based on cost-benefit examination -- from the platform’s
managed devices to the Cluster Manager. A MIB specifically tailored to these features
(IA MIB) will support Intelligent Agent functionality. In platforms with a single
managed device, the IA will act in a combined master/slave role. In platforms with an
on-board infrastructure and local network manager, the IA can provide an HMI by
“piggy-backing” on the local platform NM process. This HMI will present local health
and status of the platform’s managed devices.
All nodes that are “Network Capable” will implement Cluster Manager functions. The
Cluster Manager is the crucial component in adapting the NM architecture to nodal
mobility and topology changes. The Cluster Manager is an application process that is
invoked autonomously and dynamically via participation within server election and handover protocols. When invoked, the CM process assumes the mid-level management role
in FCAPS functions, distributing control and configuration commands to IAs in
accordance with the policy and provisioned directives of the Network Manager (NM).
Likewise, the CM aggregates IA summary reports of fault, configuration, performance
and security data and forwards them to the NM. To enable management over the
dynamic network, these processes conduct (or leverage) a clustering protocol that adapts
the regional management function to the local topology changes. An affiliated CM MIB
is accessed by these processes and provides the required database for conducting cluster
management.
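The clustering behavior described above can be illustrated with a simple "lowest-ID" cluster-head heuristic, a well-known MANET clustering technique. The actual AN clustering protocol is not specified in this document; the sketch below only shows the shape of the computation over a one-hop topology.

```python
# Illustrative sketch: "lowest-ID" cluster-head election, a common MANET
# clustering heuristic of the kind the CM process might leverage. Node IDs
# and the topology are invented for the example.

def elect_cluster_heads(adjacency: dict) -> dict:
    """adjacency maps node_id -> set of one-hop neighbor IDs.
    Returns {node_id: cluster_head_id}. A node becomes a cluster head if it
    has the lowest ID in its neighborhood; otherwise it joins the lowest-ID
    head it can hear, or declares itself a head if none is in range."""
    heads = {n for n in adjacency if n < min(adjacency[n], default=n)}
    assignment = {}
    for node in adjacency:
        if node in heads:
            assignment[node] = node
        else:
            reachable_heads = heads & adjacency[node]
            assignment[node] = min(reachable_heads) if reachable_heads else node
    return assignment

# Five nodes: 1-2-3 in mutual range, 4-5 in a separate partition.
topology = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
print(elect_cluster_heads(topology))   # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

Note how the partitioned subnet (nodes 4 and 5) elects its own head, matching the partition/merge behavior required of Cluster Managers: when the partitions rejoin, re-running the election over the merged topology reconciles the heads.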
Internetwork nodes and Network Service Provider nodes will implement a Network
Manager application process. This application layer process manages the airborne
network as a whole, by provisioning commands to managed devices via the elected
cluster managers. The NM application accesses an affiliated MIB database (NM MIB).
The HMI for this function is hosted by the on-board infrastructure, and provides top-level
Network FCAPS and Network SA.
Figure 5-8. Network Management System Components
Network Manager
• Conduct overall monitoring of the AN
• Delegate control and configuration commands to CMs
• Present AN Situation Awareness (SA)
(Sends control and configuration commands to the CMs; receives cluster summary reports of Fault, Configuration, Accounting, Performance, and Security.)
Cluster Manager
• Autonomously select and advertise role as CM
• Maintain cluster organization through participation in clustering protocol
• Aggregate cluster SA from cluster’s IAs
• Conduct cost-benefit analyses for intelligent reporting to NM
(Distributes NM/CM control and configuration commands to the intelligent agents; receives IA summary reports of Fault, Configuration, Performance, and Security.)
Intelligent Agent-Master
• Auto-discovery of CM
• Localized and autonomous problem resolution
• Aggregate platform’s managed device status data
• Conduct cost-benefit analyses for intelligent reporting
(Distributes NM/CM/IA control and configuration commands to the managed devices; receives raw reports of Fault, Configuration, Performance, and Security.)
Intelligent Agent-Slave
• Maintain managed device data in Device MIB
Figure 5-9. Network Management System Components: Relationships and Activities
5.2.4 Link Management
5.2.4.1 Link Management Architecture
A Link Management (LM) architecture that accommodates the characteristics and
challenges of the AN consists of employing multiple managers, each responsible for a
subset of the total nodes that constitute the AN. This architecture will allow the LM
system to function in a modular decentralized fashion that promotes efficiency as well as
adaptability.
The LM architecture components consist of a high level LM Executive, LM Autonomous
Managers, and Intelligent LM Agents. Figure 5-10 shows the overall architecture.
[Figure shows an LM Executive with logical connectivity to multiple LM Autonomous Managers, each of which manages a set of LM Agents.]
Figure 5-10. LM Architecture
LM Executive
The purpose of the LM Executive (LME) is to maintain the status of the entire AN and, if
necessary, to control subordinate LM Autonomous Managers. The LME monitors and
evaluates the AN overall topology by interfacing with LM Autonomous Managers
(LMAMs). The LME identifies any specific actions to be taken to alter the network
topology when needed and executes them by controlling/overriding the LMAMs. The
LME also prepares summary reports of LM information pertaining to the entire AN.
LM Autonomous Managers
Each LM Autonomous Manager monitors and controls a subset of the total nodes (and
associated links) that constitute the AN. As described above, the LM Executive monitors
and controls these subordinate managers. In addition, each autonomous manager can
exchange information with other autonomous managers to accomplish operations
involving more than one LM subnetwork.
The LM Autonomous Manager (LMAM) determines how the nodes of the subnetwork
will be interconnected, performs admission control for each node, and provisions the
needed link resources. The LMAM also monitors its LM subnetwork (nodes and links)
and determines whether or not the current topology needs to be altered to accommodate
any changes such as link or component failures or changes to the subnetwork
participants. When necessary, the LMAM identifies the specific actions to be taken to
alter the network topology and commands the corresponding LM Agent to perform these
actions on the target link. In addition, the LMAM recognizes the conditions requiring
hand-offs between nodes and will enable these hand-offs to be performed seamlessly.
Intelligent LM Agents
An LM Agent (LMA) advertises the identity and location of its node and identifies,
authenticates, locates and determines the role of AN nodes from which it receives
advertisements. This information is used by the topology establishment and adaptation
functions in the corresponding LMAM. The LMA also determines the operational state
of the resident network components and links and the network performance being
achieved. Under the direction of an LMAM, the LMA changes the value of network and
link parameters when necessary to modify the performance of any link.
5.2.4.2 Link Management System Components
AN Link Management system components are depicted in Figure 5-11.
Figure 5-11. LM System Components
5.2.5 Network Services
5.2.5.1 Overview
Network services refer to the services necessary for the effective access to and utilization
of the network’s communications transport service. Examples include name resolution,
dynamic host configuration (including dynamic address configuration), security
authentication services, policy services for network admission, etc. These network
services may be accessed by hosts or applications.
Since the AN must be capable of self-forming and self-adapting with nodes dynamically
joining or exiting the network, the network services architecture must be either: (1)
impervious to these dynamics, able to supply the desired services despite the dynamic
changes of the network; or (2) capable of adapting to these changes to provide the desired
services. Also, when the AN must operate autonomously, without connectivity to ground
nodes, the network services must be provided from airborne locations. Finally, the
network service overhead traffic needs to be minimized over the bandwidth-constrained,
variable quality links of the AN.
For dynamic networks, if every network service (e.g., name resolution, address
configuration), as well as Network Management and Link Management, conducts its own
service transactions separately and independently of the other services, it could create an
environment in which the service overhead traffic saturates the capacity of the network
links. This condition and the above factors dictate the need for a distributed network
services architecture with common approaches used to deliver the different services.
Options for distributed architectures range in the “level of distribution” delivered: from
(1) partitioning of the network into regional clusters, in which cluster heads provide
services to a network region or “cluster”; to (2) complete distribution, in which every
node can render the service.
Delivering network services with a distributed service architecture across a dynamic
network will require that a number of service maintenance functions be performed.
These service maintenance functions include server election, service discovery, service
advertisement, and the need to maintain regional clusters through which to deliver
services. Rather than forming separate, independent architectures – with each service
employing its own physical architecture and corresponding maintenance transactions – a
unified, common architecture for AN Network Services is established to achieve
efficiency gains across the AN. In this architecture, cluster maintenance is conducted as
a single process for all network services. Additionally, servers of the various services are
collocated such that functions such as service election, discovery, advertisement, and
other common service maintenance operations can be conducted on a common basis,
rather than once per independent service. This enforces a single physical services
architecture. (Another opportunity for leveraging commonality may be exploited by
“piggy-backing” upon functional facilities provided by other lower-layer processes, such as
ad hoc routing maintenance transactions.)
The basic functionality of these service maintenance functions is described below:
Server Election
One of the attributes of a distributed architecture that supports dynamic networking is the
ability of multiple (and in some cases, all) nodes to act in the capacity of “a server”. As a
network self-forms, nodes that can act as a server are elected (or selected), and take on
the server capability. To minimize potential performance impacts in highly dynamic
networks, this process could include election of primary and backup servers. If a node
leaves or becomes inoperable, an election process ensues to select a new server. If two
subnets, previously disconnected and each employing its own server, merge, the
election/selection process reconciles the redundancy.
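The election behavior sketched above, including reconciliation after a merge, can be illustrated as follows. The ranking criterion (highest capability score, ties broken by name) and the node names are assumptions made for the example.

```python
# Hedged sketch of server election with primary/backup selection and
# merge reconciliation. The ranking rule and node names are illustrative.

def elect(nodes: dict):
    """nodes maps node name -> capability score; returns (primary, backup)."""
    ranked = sorted(nodes, key=lambda n: (-nodes[n], n))
    primary = ranked[0]
    backup = ranked[1] if len(ranked) > 1 else None
    return primary, backup

subnet_a = {"awacs-1": 9, "tanker-2": 4}       # one partition
subnet_b = {"jstars-1": 7, "fighter-5": 2}     # a disconnected partition
print(elect(subnet_a))                         # ('awacs-1', 'tanker-2')
print(elect(subnet_b))                         # ('jstars-1', 'fighter-5')

# When the partitions merge, one election over the union reconciles the
# redundant primaries: the losing primary reverts to backup or client.
merged = {**subnet_a, **subnet_b}
print(elect(merged))                           # ('awacs-1', 'jstars-1')
```

Electing a backup alongside the primary gives the fail-over path: if the primary departs or goes silent, the backup assumes the role while a fresh election selects a new backup.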
Service Advertisement
Once a node has been elected to act as a server, an advertisement feature is often
employed to announce that the node is acting in the server capacity. This announcement is
conducted periodically, or on an event-oriented basis. Minimizing service advertisement
overhead is a critical need for constrained-bandwidth networks; as such, various features
may be employed, including constraining the broadcast announcement to n-hops (i.e., a
cluster), using multicasts rather than broadcasts, and enacting the announcement only
upon trap-driven events.
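The n-hop constraint can be sketched as a TTL-limited flood. The topology and hop limit below are invented for the example.

```python
# Illustrative sketch: a service advertisement flooded with a hop limit
# (TTL), confining announcement overhead to an n-hop neighborhood.

def hop_limited_flood(adjacency, origin, ttl):
    """Return the set of nodes reached by an advertisement flooded from
    `origin` that is forwarded at most `ttl` hops."""
    received = {origin}
    frontier = {origin}
    for _ in range(ttl):
        frontier = {nbr for node in frontier for nbr in adjacency[node]} - received
        received |= frontier
    return received

# A five-node chain A-B-C-D-E: a 2-hop advertisement from A stops at C.
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
         "D": {"C", "E"}, "E": {"D"}}
print(sorted(hop_limited_flood(chain, "A", ttl=2)))   # ['A', 'B', 'C']
```

Nodes beyond the hop limit never carry the announcement, which is exactly the overhead containment the cluster scoping is meant to provide.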
Service Discovery
A feature that often complements, and works in concert with the service advertisement
function, is the service discovery function. This is employed by nodes which are in
search of a server, such as those that have recently joined the network and need
immediate service. Again, minimizing overhead is a critical need for constrained-bandwidth
networks; the same approaches to mitigate overhead for service announcement
may also be employed within this function.
Cluster Maintenance
A clustering approach to distributed architectures groups individual nodes into clusters
that are serviced by a cluster server. The main concept behind clustering is to convert a
traditionally centralized process into a semi-distributed process that adapts to dynamic
network challenges. The performance objective in selection of clusters is to minimize
protocol overhead between client and server while maximizing service availability. To
do this, cluster maintenance protocols optimize the cluster attributes to adapt to the
dynamics of the network. Clusters that are too large suffer from an excess of client-server
transaction traffic, while clusters that are too small suffer from an excess of cluster
maintenance overhead. Additional factors that impact cluster maintenance include
relative node speed, cluster density, and mobility characteristics, among others.
5.2.5.2 Name Resolution Service
A name resolving service (or naming service) translates the name of a resource to its
physical IP address. A name resolving service also uses reverse mapping to translate the
physical IP address to the name of the resource. Application of the current terrestrial-based
Domain Name System (DNS) to the AN is impractical due to its reliance upon
dedicated, fixed, terrestrial-based name servers. Since the AN must be able to operate
disconnected from the terrestrial GIG infrastructure, an airborne-based naming service is
needed that is self-sufficient and autonomous, while at the same time backwards
compatible and interoperable with terrestrial-based DNS services. The naming
service must also be adaptive to AN dynamics -- name-providing nodes may leave the
network, become disconnected due to topology partitioning, or go into EMCON mode.
Finally, the AN naming service must also be efficient in the face of bandwidth-constrained,
variable quality AN links.
5.2.5.2.1 Naming Service Features
Interoperability and compatibility with GIG-based name services
The AN name resolution architecture will accommodate both airborne- and GIG-based
name resolution. The architecture should leverage terrestrial GIG-based name resolution
services when appropriate. However, in instances in which GIG-based services are
infeasible, unavailable, or impractical -- (e.g., due to disconnected operations, high-cost
ground-air links, etc.) -- the architecture will provide a self-sufficient, air-based name
resolution capability. The air-based capability will be backwards compatible and
interoperable with GIG-based services, to allow zone transfers/updates and airborne-based
caches of GIG-based authoritative zone information.
Autonomous Name Registration
The AN air-based name resolution service will require an autonomous name registration
process to discover and formulate a knowledge base of the existing names.
Self Configuration and Service Advertisement/Discovery
The AN air-based name resolution service will require a distributed naming architecture
with self-configuration, autonomous server election, and service advertisement/discovery
capabilities to support the spontaneous, self-forming features of the AN. An autonomous
name registration capability will provide self-configuration of zone data and supporting
resource records upon which naming services are based. After self-configuration of
resource records, an autonomous server election process will occur to select naming
capable nodes. A service advertisement/discovery process will then enable nodes that
can provide naming services to announce themselves. Other nodes seeking name
resolution services will employ a server discovery process to locate appropriate name
servers. Thus, a distributed naming architecture is formed such that airborne nodes do
not have to rely on fixed, dedicated name servers.
Distributed Architecture for Adaptation to Network Dynamics
Continuous self-configuration and autonomous advertisement/discovery processes
provide continuous service adaptation to dynamic topology changes. Additionally, these
distributed services provide built-in redundancy and fail-over features to accommodate
loss of primary service-providing nodes and/or their supporting links.
Scalability
The architecture must scale across the range of operational AN scenarios – i.e., from a
small cluster of nodes disconnected from the GIG to a theater-wide deployment of AN
assets.
Efficiency
The AN will provide efficient discovery of the naming service across the bandwidth-constrained,
variable quality links of the dynamic AN topology. The discovery apparatus
must be efficient and optimized to the current state of the dynamically changing network.
Efficiency options include use of multicast and/or constrained (hop-limited) broadcasts.
The AN name resolution architecture will conduct efficient name-resolution transactions.
In GIG-connected operations, the naming service will evaluate when it is more efficient
(or more expedient) to conduct name resolution via GIG-based services, rather than via
AN-based services, and vice-versa.
5.2.5.2.2 Naming Service Architecture
The AN name service architecture will consist of a distributed airborne-based naming
system, with gateways to provide access, interoperability, and backwards compatibility to
terrestrial-based GIG DNS services. Autonomous features (such as name registration,
name server election, and name server discovery) will be supported by protocols to
minimize overhead and efficiently use bandwidth-constrained AN links, such as localized
multicast and/or constrained (hop-limited) broadcast transactions.
5.2.5.3 Dynamic Network Configuration Management
The AN will automatically configure and reconfigure itself as nodes join, depart, and move
from one subnet to another. Providing a network-wide reconfiguration capability will
require accurate current knowledge of the entire network structure. The AN dynamic
network configuration management system will collect node configuration and capability
data. This network configuration data will be stored in a repository and distributed to
each of the AN subnetworks. Within a subnetwork, the AN dynamic network
configuration management system will dynamically assign and distribute IP addresses,
subnet masks, and domain names for any network interface, and distribute (and update)
addresses of network servers (e.g., Name Resolution Server, Network Time Server,
Network Policy Server, QoS Manager, Security Manager, Network Manager, Link
Manager). The AN dynamic network configuration management system will also
distribute other network configuration information (e.g., routing protocols and
configuration). Figure 5-12 depicts the major functional components of the AN dynamic
network configuration management system.
[Figure shows Dynamic Configuration Servers collecting AN node configuration and capability data into a Configuration Database and Configuration Server, under an AN Management suite comprising the QoS Manager, PB Security Manager, PB Network Manager, and Link Manager.]
Figure 5-12. Dynamic Network Configuration Management Functional Components
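As a rough illustration of the per-subnetwork assignment step, the sketch below shows a Dynamic Configuration Server leasing addresses from a subnet pool and handing each joining node the current network server addresses. The address plan, field names, and class name are hypothetical.

```python
# Hypothetical sketch of a Dynamic Configuration Server: lease an address
# from the subnetwork pool and hand the joining node the current network
# server addresses. The address plan and field names are invented.
import ipaddress

class DynamicConfigurationServer:
    def __init__(self, subnet: str, servers: dict):
        self.pool = iter(ipaddress.ip_network(subnet).hosts())
        self.servers = servers            # e.g. name server, time server, ...
        self.leases = {}                  # node_id -> assigned address

    def configure(self, node_id: str) -> dict:
        if node_id not in self.leases:    # a re-joining node keeps its lease
            self.leases[node_id] = next(self.pool)
        return {"address": str(self.leases[node_id]), **self.servers}

dcs = DynamicConfigurationServer(
    "10.1.0.0/29",
    {"name_server": "10.0.0.5", "time_server": "10.0.0.5"})
print(dcs.configure("tanker-2")["address"])   # 10.1.0.1 -- first host in the pool
print(dcs.configure("awacs-1")["address"])    # 10.1.0.2
```

A real implementation would also handle lease expiry, pool exhaustion, and reconciliation when subnetworks merge; the sketch shows only the assign-and-distribute core.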
5.2.5.4 Network Time Service
The AN Network Time Service will enable the synchronization of all AN network
devices, as well as the synchronization with external time sources. Time provided by the
AN Network Time Service will be traceable to Coordinated Universal Time (UTC),
an international standard, based upon atomic timekeeping coordinated between the U.S.
Naval Observatory (USNO), the National Institute of Standards and Technology (NIST)
and 25 other countries. Each AN node will include a Network Time Server (NTS)
synchronized with the UTC (when able to connect to a UTC or Intermediate node) to
provide synchronized time distribution within the AN node. The AN NTS will be fully
integrated into the GIG UTC, Intermediate, and Client Node Network Time Service
architecture [per the Network Time Service Profile]. Each AN node will include:
 An NTS capable of being synchronized with UTC using one of the following as
primary and at least one other as secondary:
o Network Time Distribution (e.g., Network Time Protocol) – USNO and
NIST provide precise (±30 ms) time synchronization services over the
network using the Network Time Protocol (NTP). Since NTP uncertainty
is heavily based upon network latencies, in reliable, high bandwidth, low
latency networks, the uncertainty factor can be reduced to µs or ns
precision.
o Global Positioning System (GPS) using redundant antennas – GPS is a
highly precise (±200 or ±340 ns) method of time synchronization. GPS
satellites employ redundant atomic clocks synchronized with USNO to
ensure precise timing and availability. The GPS Standard Positioning
System (SPS) provides time accuracies to within 340 ns for most non-military
applications. The Precise Positioning System (PPS) provides time
accuracies to within 200 ns for military applications.
o Two-Way Satellite Time Transfer (TWSTT) – TWSTT is the most
accurate method of delivering time to a node. A dedicated satellite
connection to USNO is established to provide highly precise (±1 ns)
synchronization.
 The ability to synchronize the NTS with the NTS of any peer AN nodes to
achieve locally coordinated time.
 A backup clock providing at least the accuracy of a rubidium clock, configured as
a failover device in case of a disconnected or degraded state.
Within each AN node, time will be distributed to supported network devices and hosts,
using the Network Time Protocol.
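The core of that distribution is NTP's standard offset and delay computation (RFC 5905), shown below with illustrative timestamps: T1 and T4 are the client's transmit and receive times, T2 and T3 the server's receive and transmit times.

```python
# Standard NTP clock-offset and round-trip-delay computation (RFC 5905).
# The timestamps below are illustrative: the client clock runs 0.250 s
# behind the server, with 0.040 s of one-way path delay each way.

def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (offset, delay) in the same units as the timestamps."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated server-minus-client offset
    delay = (t4 - t1) - (t3 - t2)          # round trip, excluding server hold time
    return offset, delay

t1 = 100.000   # client transmit (client clock)
t2 = 100.290   # server receive  (server clock)
t3 = 100.300   # server transmit (server clock)
t4 = 100.090   # client receive  (client clock)
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(round(offset, 3), round(delay, 3))   # 0.25 0.08
```

Because the offset estimate assumes symmetric path delay, its uncertainty grows with network latency variation, which is why the document notes that NTP precision improves markedly on reliable, high-bandwidth, low-latency links.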
5.3 Off-Board Transmission Systems
As a minimum, the Off-Board Transmission Systems provide the physical and data link
layer capabilities used to create the backbone, subnet and network access links of the
Airborne Network. Off-Board Transmission Systems components include radios (i.e.,
RF/optical and modem subsystems), multiplexers, and transmission security (TRANSEC)
equipment used for the transport of all information onto and off of the platform. Off-Board
Transmission Systems must be capable of interfacing to and interoperating with
the AN Equipment system components. As such, they must be capable of being managed by
the AN Network Management System and Link Management System. Furthermore, the
Off-Board Transmission Systems components must be capable of implementing QoS
mechanisms that enable the required end-to-end services to be provided when used in
conjunction with the AN Equipment.
Multiple transmission systems may be implemented on some platforms to accommodate
various communications needs. The physical and data link layer capabilities of each of
these transmission systems may be very different, each optimized for different needs such
as high bandwidth, beyond line of sight connectivity, high mobility, or low probability of
detection. However, the Off-Board Transmission Systems must provide a common
interface to the AN Equipment network components to enable interoperability between
these different systems. Note, some of the network functionality may actually be
implemented as part of a transmission system component (e.g., radio or terminal), but the
network functions must be separable to accommodate network functions implemented on
the platform and to facilitate implementation of different network configurations. Legacy
transmission systems will interface to the AN Equipment through gateways as described
in section 5.1.
[Figure shows a notional JTRS protocol stack mapped to the OSI model: multiple security domains with Red side routing if necessary (Coalition, Confidential, NATO Secret, Red Side TS IP Routing, Red Side Secret IP Routing) connected through HAIPEs; Black Side IP routing and common MANET services at the network layer; and multiple data link and physical layers (WNW, SRW, NDL, Satcom, above-2GHz, and legacy or future waveforms) for varying environments and communications needs.]
Figure 5-13. Off-Board Transmission Functional Representation
6. GIG Integration and Interoperability
The future Global Information Grid (GIG) will provide an end-to-end, seamless, internet-like
communications capability that meets the mobility, security, and reliability needs of
the DoD for mobile and fixed ground-based, air-based, and space-based assets. The
communications traffic will be comprised of a combination of voice, video, and data,
across multiple security levels using an unclassified (black) transport layer.
The GIG transport network will be comprised of a number of component networks,
including the Airborne Network, as depicted in Figure 6-1. Each component network
will need to pass data both internally among its network members and externally to/from
other GIG component networks. Achieving the end-to-end functionality required by the
GIG will necessitate that the Airborne Network interface and interoperate with other GIG
component networks and with the GIG Enterprise Services (GIG ES).
Figure 6-1. Major GIG Component Networks
6.1 GIG Operational Mission Concepts
The following GIG operational mission concepts define the target functionality of the
integrated GIG transport network which must be provided by the GIG component
networks, including the AN.
6.1.1 Provide Common Services
GIG information, service applications, transport capabilities and management features are
focused toward providing the user with the resources needed to successfully complete
any mission, anytime, anywhere. This focus is most evident in the shift from the existing
centralized Task, Process, Exploit, and Disseminate (TPED) operational concept of
providing information from producers to consumers based on a “need-to-know” sharing
model to a net-centric Task, Post, Process, and Use (TPPU) “need-to-share” paradigm in
which information is posted to the GIG allowing all users of the GIG (and subsequently
the AN) to discover and use information. The primary emphasis of TPPU is to get the
right information to the user as quickly as possible to impact the decision making
process. Users across the enterprise must be able to discover the existence and location
of needed information, retrieve it in a timely manner, process it as required, and post the
resulting “value added” product for sharing with the community.
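The TPPU post-and-discover flow can be sketched in miniature. This toy registry (all names invented for illustration, not part of any GIG specification) shows the shift from pushing products to known consumers toward posting them for enterprise-wide discovery:

```python
# Minimal sketch of the TPPU "need-to-share" flow: producers post products
# with metadata; any authorized consumer can discover and retrieve them.
# All names here are illustrative only.

class InfoRegistry:
    def __init__(self):
        self._products = {}

    def post(self, product_id, metadata, payload):
        """Task/Post: make a product discoverable as soon as it exists."""
        self._products[product_id] = {"metadata": metadata, "payload": payload}

    def discover(self, **criteria):
        """Use: find products whose metadata matches all given criteria."""
        return [pid for pid, p in self._products.items()
                if all(p["metadata"].get(k) == v for k, v in criteria.items())]

    def retrieve(self, product_id):
        return self._products[product_id]["payload"]


registry = InfoRegistry()
registry.post("img-001", {"type": "imagery", "region": "AOR-1"}, b"...")
matches = registry.discover(type="imagery", region="AOR-1")
```

The point of the sketch is that the producer never needs to know who the consumers are; discovery is driven entirely by posted metadata.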
6.1.2 Provide Worldwide Access
GIG users will be able to access GIG information and services from anywhere in the
world. This includes access from both fixed and mobile locations as well as access in
both wired and wireless environments. The GIG will provide ubiquitous login that
enables users to identify/authenticate themselves and their current role/context to the
network, to GIG services, and for information access from any supported physical
location. Once a user has been authenticated and is granted access, the GIG
infrastructure will institute and maintain network connectivity, employing a variety of
communication and usage modes to provide near real-time access to information and
services (e.g., survival information, “task” requests, “post” or “use” services,
collaboration via multimedia) at all security levels. Assuring on-demand connectivity,
the GIG will be able to dynamically route traffic as needed to achieve mission needs in
all environments (i.e., fixed, mobile, wired and wireless connections).
6.1.3 Provide Dynamic Resource Allocation
The GIG will support simultaneous, multiple, global missions. The GIG communications
and computing resources, while large, are still limited and these resources must be
dynamically allocated to provide optimal support to the many varied mission objectives
that are constantly evolving and changing in priority. The GIG must have the ability to
establish and manage critical characteristics of all resources needed by a GIG user/entity
including priority/ precedence level, QoS and CoS levels (e.g., guaranteed or expedited
delivery options), and computing resources (e.g., bandwidth, storage, processing).
Ensuring efficient and flexible use of GIG resources will require the convergence of
voice, data, and video onto a single network. Additionally, the mobile and dynamically
changing tactical environment will require users/entities to connect/re-connect to GIG
throughout a communication session to react to changes in the communications
infrastructure topology. The GIG will need the ability to identify a user/entity as a GIG
entity and dynamically establish the user’s device(s) connectivity, resources, and
privileges.
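The kind of dynamic, precedence-aware allocation described above might be sketched as follows; the strict-priority policy and the units are assumptions for illustration, not a GIG-specified algorithm:

```python
# Illustrative sketch of priority-driven allocation of a limited resource
# (here link bandwidth in kbps) across competing mission flows. A real GIG
# allocator would be far richer (QoS/CoS classes, preemption, re-planning).

def allocate_bandwidth(capacity_kbps, requests):
    """requests: list of (flow_id, precedence, demand_kbps); a lower
    precedence number means higher priority. Returns {flow_id: granted_kbps}."""
    granted = {}
    remaining = capacity_kbps
    for flow_id, _prec, demand in sorted(requests, key=lambda r: r[1]):
        give = min(demand, remaining)   # serve higher precedence first
        granted[flow_id] = give
        remaining -= give
    return granted

grants = allocate_bandwidth(1000, [
    ("sa-updates", 1, 400),   # highest precedence
    ("video-feed", 2, 800),
    ("file-xfer", 3, 500),
])
```

Under contention, lower-precedence flows absorb the shortfall: here the video feed receives only the 600 kbps left after situational-awareness traffic, and the file transfer receives nothing until capacity frees up.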
6.1.4 Provide Dynamic Group Formation
GIG information will need to be shared among users/entities with varying levels of
privilege and trust in order to carry out specific missions. Communities of Interest (COI)
will allow groups of users with common information needs to access, and control access
to, the information they need to carry out a specific task. Users and entities will need to
be able to be members of multiple COIs under either a periods processing or
simultaneous access model (e.g., participating in a U.S.-only COI, a Coalition COI, and a
bilateral COI with a Coalition partner nation in execution of a mission). The scope of
COI group formation includes dynamic allocation of communications and computing
resources as well as dynamic access to enterprise services. COIs may be established and
disestablished dynamically, to meet changing missions, or remain fixed to support ongoing tasks. The COI construct is envisioned as the main mechanism for enabling
coalition partners to use the GIG as needed, without having their access jeopardize the
availability of the GIG. When only small numbers of coalition partners need to
participate, GIG identities could be issued to them temporarily.
6.1.5 Provide Computer Network Defense (CND)
The GIG will provide a proactive and reactive capability for computer network defense.
The GIG will protect, monitor, detect, analyze and adaptively respond to unauthorized
system and network activities. Authorized managers and users of the GIG will have the
ability (based upon their user profile and privileges) to receive near real-time situational
awareness of current threats, configuration, status, and performance of the GIG
communications and computing resources. This CND situational awareness will be
provided via monitoring of the GIG to obtain sensor data, detection of anomalies within
that sensor data, and analysis of those anomalies to provide defensive courses of action.
All of these defensive capabilities (i.e., monitoring, detection, and analysis) combine to
support CND situational awareness, which enables CND response actions. AN CND
situational awareness will be a user defined operational picture (UDOP) of the AN
cyberspace at the various organizational levels. Automated situational views will be
enabled through user-defined subscription of sensor data, which is combined with
appropriate analytical tools that produce awareness, knowledge, and defensive courses of
action.
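The monitor/detect/analyze chain can be illustrated with a toy detector; the z-score rule below is a stand-in for whatever analytics a fielded CND capability would actually employ:

```python
# Sketch of the monitor -> detect -> analyze chain: flag sensor readings
# that deviate from the population mean by more than k standard deviations.
# Purely illustrative; real CND analytics would be far more sophisticated.

import statistics

def detect_anomalies(readings, k=3.0):
    """Return indices of readings more than k sigma from the mean."""
    mean = statistics.fmean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return []                       # no variation, nothing anomalous
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / sigma > k]
```

In the architecture's terms, the readings are the sensor data obtained by monitoring, the returned indices are the detected anomalies, and the analysis that turns them into defensive courses of action sits downstream of this function.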
6.1.6 Provide Management and Control of GIG Network and Resources
The GIG will support the ability to remotely manage and control all GIG
communications, computing resources and services. The common management and
control capability of the GIG includes the generation, collection and/or distribution of
resource, access, audit, status, performance, fault, configuration, and inventory data to
support the auditing/supervisory functions required for network management, security
management, and CND operations. It also includes the management and control of the
control (signaling) information flowing between GIG communications and computing
resources to support end-to-end connectivity, QoS, and prioritization.
6.2 GIG Enterprise Services
GIG ES will be a suite of value-added information, web, and computing capabilities that
will be available to all components, deployed by users across DoD in a consistent
manner. This will enable leveraging of best-of-breed concepts and will maximize the
net-centric performance of the GIG. When GIG ES is first fielded, the initial suite of core enterprise services (those services useful throughout the DoD) will include Enterprise Service Management and IA/Security.
All information technology (IT) systems and National Security Systems (NSS) acquired,
procured (systems or services), or operated by any DoD component will need access to
the Core Enterprise Services. There is no single DoD “net” for a service provider to live
within, and it is desirable to offer common core services across all the nets. The networks
may require separately hosted solutions for core support. Interface devices such as
gateways and guards that span some of these boundaries may also be required. Thus the
core services must exist in Operational Area networks (OANs), Metropolitan Area
Networks (MANs), and Wide Area Networks (WANs) (both Continental U.S. (CONUS)
and Theaters) to include the AN.
6.2.1 Enterprise Service Management/Network Operations
Effective operational management of GIG transport systems and their components will in
large part depend upon having an infrastructure that has been instrumented with
monitoring and reporting capabilities, an in-depth knowledge regarding critical mission
processes that the infrastructure must support, an understanding of the relationships
between the two, and the ability to present relevant status and associated mission impact
assessments to decision makers at all levels. This will necessitate an increased level of
information sharing and integration between management operations across different
technology, operational, and security domains.
GIG transport service providers must take an active role in developing the necessary set of cross-domain operational policies, processes, and procedures, based on internationally accepted common bodies of knowledge, to enhance the flow of information between different management domains. This will ensure that problems are proactively detected, isolated, and resolved with minimum impact to the user, while also providing for improved planning and provisioning through better communication of requirements.
Enterprise Service Management will provide end-to-end GIG performance monitoring,
configuration management and problem detection/resolution, as well as enterprise IT
resource accounting and addressing, for example, for users, systems and devices.
Additionally, general help desk and emergency support to users is encompassed by this
service area, similar to 911 and 411.
6.2.2 Information Assurance/Security
The GIG operational mission concepts provide the target functionality for which IA
constructs were conceptualized and specified. Seven core IA functions emerged as the
foundational enablers for the GIG mission concepts. Each of these constructs, referred to as IA system enablers, is an architectural building block for achieving (enabling) one or more of the GIG mission concepts. Full interoperability with the GIG IA Architecture and other GIG-supporting transport networks requires that the AN support the core IA functions:
• Identification and Authentication Strength of Mechanism is the process of evaluating the strength of user authentication, the assurance of the requesting client and other Information Technology (IT) components, and the risk associated with the user's operating environment to determine the associated level of trust of the entity. Identity and Authentication (I&A) Strength of Mechanism (SoM) scores enable dynamic access control decisions to be made based on how resistant the authentication of each service request is to impersonation or forgery.
• Distributed Policy-Based Access Control is used to manage and enforce rule-based access control policies based on real-time assessment of the operational need for access and the security risk associated with granting access.
• Secure End-to-End Communications Environment as it applies to the AN primarily involves the protection of data in transit and, to a lesser extent, data at rest.
• Dynamic Policy Management enables the establishment of digital policies for enforcing how assets are managed, utilized, and protected. It allows the flexibility needed to ensure the right asset is available, at the right place, at the right time.
• Assured Management and Allocation of GIG Resources maintains the integrity and availability of all enterprise resources and ensures that they are available based on operational needs.
• Enterprise-Wide Network Defense and Situational Awareness consists of enterprise-wide protection, monitoring, detection and analysis that support mission situational awareness and response actions.
• Assured Management of Enterprise-Wide IA Mechanisms and Assets encompasses the policies, procedures, protocols, standards and infrastructure elements required to reliably support initialization and over-the-network security management (command, control and monitoring) of IA mechanisms and assets.
Consequently, the AN will be expected to achieve the requisite level of system security
needed to maintain the overall level of GIG enterprise security that enables these mission
capabilities. Properly designed and implemented, IA within the GIG will provide the
needed availability, integrity, and confidentiality to allow authorized users to access the
information they need to carry out their mission while preventing unauthorized users
from denying, degrading, or exploiting that mission. AN IA functions will ensure these security services are in place for the GIG-supporting AN air-to-air, air-to-ground, and air-to-space transport infrastructure.
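Two of the IA enablers described above, I&A Strength of Mechanism scoring and policy-based access control, can be sketched together. The scores, environment risk values, and thresholds below are invented for illustration; real SoM scoring is defined by the GIG IA architecture:

```python
# Illustrative sketch: an I&A Strength-of-Mechanism (SoM) score combines the
# strength of the authentication mechanism with the risk of the operating
# environment, and a policy rule uses the score to make the access decision.
# All values are hypothetical.

AUTH_STRENGTH = {"password": 1, "token": 2, "pki-certificate": 3}
ENV_RISK = {"fixed-enclave": 0, "mobile-tactical": 1}

def som_score(auth_method, environment):
    """Effective trust: mechanism strength discounted by environment risk."""
    return AUTH_STRENGTH[auth_method] - ENV_RISK[environment]

def access_decision(auth_method, environment, required_som):
    """Policy rule: grant only if the effective SoM meets the resource's bar."""
    return "grant" if som_score(auth_method, environment) >= required_som else "deny"
```

The sketch shows why SoM enables dynamic decisions: the same credential can be sufficient from a fixed enclave yet insufficient from a higher-risk tactical environment, without any change to the resource's policy.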
6.3 GIG Transport Convergence
GIG transport convergence (i.e., convergence of voice, data, and video onto a single
network) will require phasing out all legacy transmission and link-layer technologies
over time. In the ideal end-state, all information will be transferred over a secure black
core using the Internet Protocol.
IP convergence necessitates the development of a comprehensive end-to-end architecture
for information transfer that addresses all aspects of the GIG infrastructure (e.g., LANs,
WANs, etc). This architecture must ensure consistent voice/video over IP
implementations, application modifications to adjust to changing network conditions, and
consistent definition and treatment of service level agreements. However, one of the
most difficult parts of this architecture will be addressing the performance management
of end-to-end flows which transit nodes in the infrastructure with extreme changes in
bandwidth, heavy congestion, long latencies, and the inherent difficulties of mobile
tactical environments. No technology exists today to achieve end-to-end enforceable and
measurable performance in the heterogeneous DoD environment with its many
transformational communications networks.
Additional details on the DoD IP convergence planning can be found in TBD.
6.4 GIG Routing Architecture
TBD
6.5 GIG Quality of Service Architecture
The Military Communications Electronics Board (MCEB) has established a GIG End-to-End QoS Working Group led by the Defense Information Systems Agency (DISA). This working group is preparing recommendations for a phased approach to implementing QoS within the GIG. Figure 6-2 depicts a notional technical approach for realizing end-to-end QoS across several network domains. This diagram and the QoS mechanisms listed are not intended to prescribe a specific framework, but to illustrate the recognized subdivision of the GIG into multiple domains and types of domain, as well as to identify mechanisms of potential utility within each type of domain.
[Figure 6-2 shows bandwidth broker (BB) and call admission control (CAC) functions spanning switched LAN, core/distribution, and tactical LAN domains, with RSVP, DiffServ, 802.X, and MPLS mechanisms in the edge domains and DiffServ and MPLS in the core.]
Figure 6-2: End-to-End Strawman Technical Approach for QoS/CoS
The working group has also prepared a table of “Proposed DiffServ Flow Classification
and Corresponding DiffServ Code Points (DSCPs)/Class Selector Code Point (CSCP)
Values” that identifies traffic flow classes based upon the nature of the flows and military
traffic precedence levels within the flow classes. This proposal addresses the DoD
enterprise’s need to standardize the methodology for describing IP traffic flow types and
precedence levels, as well as having consistent end-to-end treatment of DSCPs (assured
delivery markings). DSCP values need to be preserved through the various networks
from source to destination, to ensure the mission commander’s intent is known and can
be properly complied with by all networks and systems.
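DSCP marking itself is mechanically simple: the six-bit code point occupies the high-order bits of the IP ToS/Traffic Class byte. A sketch, with an illustrative class-to-DSCP mapping (the working group's proposed table, not this sketch, governs actual assignments):

```python
# Sketch of DSCP marking. The DSCP occupies the six high-order bits of the
# IP ToS/Traffic Class byte, so the byte written to the socket is dscp << 2.
# The flow classes below are hypothetical; the code points are standard
# DiffServ values.

import socket

DSCP = {
    "network-control": 48,   # CS6
    "voice": 46,             # EF (expedited forwarding)
    "assured-data": 26,      # AF31
    "best-effort": 0,        # default per-hop behavior
}

def tos_byte(flow_class):
    """ToS byte carried in the IP header for a given flow class."""
    return DSCP[flow_class] << 2

def mark_socket(sock, flow_class):
    """Ask the OS to mark outbound packets (platform support varies)."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(flow_class))
```

End-to-end preservation of these markings, as the paragraph above notes, depends on every transit network honoring rather than re-writing the ToS byte.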
Based on current technologies, implementation of an end-to-end QoS/CoS capability will
depend upon adherence to the following general guidelines:
• End-to-end QoS/CoS support requires all network domains to adopt a common QoS/CoS architecture.
• Service level agreements (SLAs) will likely be needed between domains; however, the existence of SLAs does not eliminate the need for a common QoS/CoS architecture.
• An end-to-end QoS/CoS architecture must include support at the network layer, allowing traversal of heterogeneous networks providing a commonly understood set of network-layer technologies.
• The selected QoS/CoS architecture must be applicable to different types of domains ranging from vastly over-provisioned, for example GIG-BE, to highly bandwidth-limited, such as mobile, tactical networks.
• Within individual domains, QoS/CoS support may be provided at the link layer. In these cases, network-layer technologies may utilize that support in order to efficiently accomplish QoS/CoS provision within that domain.
• A complete, end-to-end QoS architecture must include multiple, complementary mechanisms, such as admission control, traffic shaping, various queue management strategies, and application-layer aspects.
• Provisions must exist for the establishment and enforcement of QoS/CoS policy, as well as management of this policy across the GIG enterprise. [Source: DoD Transport Design Guidance Document, Version 0.97]
Additional details on the DoD end-to-end QoS/CoS planning can be found in TBD.
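Two of the complementary mechanisms named in the guidelines above, policing and admission control, can be sketched as a token-bucket policer and a capacity-based CAC check. All parameters are illustrative:

```python
# Sketch of two complementary QoS mechanisms: a token-bucket policer that
# decides whether traffic conforms to its contracted profile, and a simple
# call admission control (CAC) check against remaining link capacity.

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # sustained rate
        self.capacity = burst_bits    # burst allowance
        self.tokens = burst_bits
        self.last = 0.0

    def conforms(self, packet_bits, now):
        """True if the packet is within profile at time `now` (seconds)."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False                  # out-of-profile: drop or re-mark

def admit_call(required_bps, committed_bps, link_capacity_bps):
    """CAC: admit a new flow only if it fits in uncommitted capacity."""
    return committed_bps + required_bps <= link_capacity_bps
```

The policer acts per packet on flows already admitted, while CAC acts per flow before any traffic is sent; an end-to-end architecture needs both, as the guidelines note.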
6.6 Interface Definition
To ensure interoperability, a consistent method to interface with GIG component
networks will be defined. This interface definition must describe:
 Types of communication transport services provided across the interfaces (e.g.,
MLPP voice, real-time data, etc.)
 Type of networking relationships that exist between the two networks at the
interface (e.g., peer, hierarchical peer, etc.)
 Traffic security classification levels, security associations and security procedure
for both sides of each interface
 Security related attributes of the two networks (e.g., both networks are red, both
networks are black, one network is black and other network is red, or traffic type
dependent relationships)
 Types of routing protocol information and resource availability information
needing to be transferred across the interface
 QoS/CoS features, guarantees, and applicable parameters offered on both sides of
the interface
 Implementation of any Performance Enhancing Proxies (PEPs)
 Implementation details in terms of the user (or data), control, and management
plane protocols, procedures, and exchanges.
A good joint interface agreement should be written in accord with the following
prescriptions.
1. Provide a ‘network architecture’ picture showing all possible interfaces between
two networks (with internal nodes and links depicted with enough detail to put the
interface in context). Explicitly identify ‘gateway nodes’ from the perspective of
the two networks. Also, identify the connection mechanism between the two
gateways.
2. The default assumption is that the interface between the two networks will be at
the link level. This means that links connecting gateways in one network with
gateways in another network are the ‘physical interfaces’ for carrying user traffic.
3. Describe the types of user information transfer services (data or bearer plane
services) provided across the interfaces (there could be more than one in that
different traffic types may use different services).
4. Describe the networking relationships that exist between networks at this interface.
5. Describe assumptions on traffic security classification levels, security associations
and security procedure for both sides of each interface.
6. Describe the security related attributes of the two networks (e.g., both networks
are red, both networks are black, one network is black and other network is red, or
traffic type dependent relationships). Describe where the Plaintext (PT)-Ciphertext (CT) boundaries occur, if the two networks have different attributes for traffic segments.
7. Describe the types of routing protocol information and resource availability
information needing to be transferred across the interface.
8. Describe Quality of Service (QoS) features offered on both sides of the interface.
For packet service, these may include: Differentiated Services (DiffServ), Integrated Services (IntServ), Resource Reservation Protocol (RSVP), RSVP-Traffic Engineering (RSVP-TE), etc. If there is a difference in the offerings of the two networks, describe where translation (aggregation, etc.) occurs. Also describe whether there is any policing or Call Admission Control (CAC) function in one or both directions. Also describe what happens to traffic found out-of-profile by the policing function and to the flows rejected by the CAC function.
9. For packet services, if one or both networks terminates a transport layer protocol
(e.g. via Performance Enhancing Proxies), identify the termination and specify
any requirements created on the other network. Also, specify whether the network will have a matching termination at the other end.
10. For each of the two networks, specify whether the Multi-Level Precedence and
Preemption (MLPP) classification is supported. These include Routine (R),
Priority (P), Immediate (I), Flash (F) and Flash Override (FO). Describe how
MLPP is activated (indicators, fate of preempted traffic, etc.) and what
interactions take place at the interface in relation to this capability.
11. For each network, describe if it can specify and provide QoS guarantees. Also,
specify all QoS parameters that can be guaranteed in this fashion. QoS here
should be interpreted in a broad sense: data rate, delay, loss, jitter, availability,
recovery time, etc.
[Source: DoD Transport Design Guidance Document, Version 0.97]
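The eleven prescriptions above lend themselves to a completeness checklist. As a sketch, one could capture an agreement as a record whose fields paraphrase the prescriptions (the field names are not a mandated schema):

```python
# Sketch: an interface agreement captured as a record, so each
# network-to-network agreement can be mechanically reviewed for
# completeness against the prescriptions. Field names are illustrative.

from dataclasses import dataclass, field, fields

@dataclass
class InterfaceAgreement:
    network_a: str
    network_b: str
    transport_services: list = field(default_factory=list)      # item 3
    networking_relationship: str = ""                           # item 4 (e.g., "peer")
    security_attributes: str = ""                               # items 5-6 (red/black, PT-CT)
    routing_info_exchanged: list = field(default_factory=list)  # item 7
    qos_mechanisms: list = field(default_factory=list)          # item 8
    pep_terminations: list = field(default_factory=list)        # item 9
    mlpp_supported: bool = False                                # item 10

    def missing_fields(self):
        """List agreement fields still left at their defaults."""
        return [f.name for f in fields(self)
                if getattr(self, f.name) in ("", [], False)]
```

A review tool built on such a record would flag, for example, an agreement that names the transport services but never states the networking relationship or security attributes.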
7. Airborne Network Model Diagrams (SV-2)
Table 7-1 summarizes the key system functionality (described in section 5) that will be
required at each network node type. Figures 7-1 through 7-6 depict this information in
an SV-2 graphic for each node.
Table 7-1. Summary of Node Types and System Functionality
Node Type
On-Board Infrastructure
Platform Distribution
 Sensor data buses (as
needed)
 Digital audio & data
buses
 IP-based LANs
Legacy
AN Equipment
Off-Board Transmission
None
One or more legacy
voice/data terminals (e.g.,
UHF SATCOM terminals,
HF/VHF/UHF LOS radios,
TDL terminals, CDL
terminals, Commercial
SATCOM, MILSATCOM
terminals)
None
Several legacy voice/data
terminals (e.g., UHF
SATCOM terminals,
HF/VHF/UHF LOS radios,
TDL terminals, CDL
terminals, Commercial
SATCOM, MILSATCOM
terminals)
Routing/Switching
 No AN routing
One or more IP-capable
voice/data/video terminals
(e.g., MUOS SATCOM
terminals, TC SATCOM
terminals, Commercial
SATCOM terminals,
Network CDL terminals,
Tactical Subnet terminals,
Information Assurance
 TRANSEC
Network Management
 None
Gateways
 Data Link Gateway
 PEPs
Platform Distribution
 Sensor data buses (as
needed)
 Digital audio & data
buses
 IP-based LANs
Relay/
Gateway
Information Assurance
 TRANSEC
Network Management
 None
Gateways
 TDL Gateway
 Data Link Gateway
 PEPs
Network
Access
Platform Distribution
 Sensor data buses (as
needed)
 Digital audio buses
 IP-based LANs
Information Assurance
 TBD
QoS/CoS
 QoS Application
Interface
 QoS Mechanisms
 QoS Manager
Lasercom terminals)
Network Management
 Local Network
Management System (for
On-Board, AN, and Off-Board network
components)
 NM HMI (e.g., Console)
Gateways
 Legacy Infrastructure
Gateway
 PEPs
Information Assurance
 SM Agent
 Policy Enforcement
 HAIPE
 TRANSEC
 IDS & Virus Protection
 Vulnerability
Assessment System
 Firewalls & Guards
Link Management
 LM Agent
Network Management
 Intelligent NM Agent
Platform Distribution
 Sensor data buses (as
needed)
 Digital audio buses
 IP-based LANs
Information Assurance
 TBD
Network
Capable
Network Management
 Local Network
Management System (for
On-Board, AN, and Off-Board network
components)
 NM HMI (e.g., Console)
Gateways
 Legacy Infrastructure
Gateway
 PEPs
Network Services
 Name Resolution
Service: Naming Client
(name resolver)
Routing/Switching
 Intra-area Routing
QoS/CoS
 QoS Application
Interface
 QoS Mechanisms
 QoS Manager
Information Assurance
 SM Agent
 Policy Enforcement
 HAIPE
 TRANSEC
 IDS & Virus Protection
 Vulnerability
Assessment System
 Firewalls & Guards
Link Management
 LM Agent
 LM Autonomous
Manager
Network Management
 Cluster Manager
 Intelligent NM Agent
 Cluster Policy Decision
Point
One or more IP-capable
voice/data/video terminals
(e.g., TC SATCOM
terminals, Commercial
SATCOM terminals,
Network CDL terminals,
Tactical Subnet terminals)
Network Services
 Name Resolution
Service: Local Zone
Database and Resource
Records, Local Name
Server, Naming Client
(name resolver)
Platform Distribution
 Sensor data buses (as
needed)
 Digital audio buses
 IP-based LANs
Information Assurance
 TBD
Network Management
 Local Network
Management System (for
On-Board, AN, and Off-Board network
components and subnet
management)
 NM and Policy
Management HMI (e.g.,
Console)
Internetwork
Gateways
 Data Link Gateway
 Legacy Infrastructure
Gateway
 PEPs
Routing/Switching
 Intra-area & Inter-area
Routing
QoS/CoS
 QoS Application
Interface
 QoS Mechanisms
 QoS Manager
Information Assurance
 Security Manager
 Policy Distribution, &
Enforcement
 HAIPE
 TRANSEC
 IDS & Virus Protection
 Vulnerability
Assessment System
 Firewalls & Guards
Some legacy voice/data
terminals (e.g., UHF
SATCOM terminals,
HF/VHF/UHF LOS radios,
CDL terminals)
Multiple IP-capable
voice/data/video terminals
(e.g., MUOS SATCOM
terminals, TC SATCOM
terminals, Commercial
SATCOM terminals,
Network CDL terminals,
Tactical Subnet terminals,
Lasercom terminals)
Link Management
 LM Agent
 LM Autonomous
Manager
 LM Executive
Network Management
 AN Network Manager
 Intelligent NM Agent
 Cluster Manager
 Policy Server/Repository
 Policy Decision Point
 Cluster Policy Decision
Point
Network Services
 Name Resolution
Service: Local Zone
Database and Resource
Records, Local Name
Server, Naming Client
(name resolver)
Network
Platform Distribution
Routing/Switching
Some legacy voice/data
Node Type
Service
Provider
On-Board Infrastructure
 Sensor data buses (as
needed)
 Digital audio buses
 IP-based LANs
Information Assurance
 TBD
Network Management
 Local Network
Management System (for
On-Board, AN, and Off-Board network
components and
AN/subnet management)
 NM and Policy
Management HMI (e.g.,
Console)
 Ground-based Resource
Planning Tools
 Ground-based Modeling,
Analysis, and Simulation
Tools
Gateways
 Data Link Gateway
 Legacy Infrastructure
Gateway
 PEPs
AN Equipment
 Intra-area, Inter-area, &
Interdomain Routing
QoS/CoS
 QoS Application
Interface
 QoS Mechanisms
 QoS Manager
Information Assurance
 Security Manager
 Policy Input,
Distribution, &
Enforcement
 HAIPE
 TRANSEC
 KLIF
 IDS & Virus Protection
 Vulnerability
Assessment System
 Firewalls & Guards
 Security M&S
Link Management
 LM Agent
 LM Autonomous
Manager
 LM Executive
Network Management
 AN Network Manager
 Policy Server/Repository
 Policy Decision Point
Network Services
 Name Resolution
Service: Gateway to
GIG DNS, Local Zone
Database and Resource
Records, Local Name
Server, Naming Client
(name resolver)
Off-Board Transmission
terminals (e.g., UHF
SATCOM terminals,
HF/VHF/UHF LOS radios,
CDL terminals)
Multiple IP-capable
voice/data/video terminals
(e.g., MUOS SATCOM
terminals, TC SATCOM
terminals, Commercial
SATCOM terminals,
Network CDL terminals,
Tactical Subnet terminals,
Lasercom terminals)
Figure 7-1. Legacy Node Communications Diagram
Figure 7-2. Relay/Gateway Node Communications Diagram
Figure 7-3. Network Access Node Communications Diagram
Figure 7-4. Network Capable Node Communications Diagram
Figure 7-5. Internetwork Node Communications Diagram
Figure 7-6. Network Service Provider Node Communications Diagram
8. Node and Link Configurations for Candidate Platform Types (SV-5)
8.1 Approach
In this section, a notional AN system configuration is presented for several candidate
platform types. The notional system configuration should be considered the minimum set
of system functionality needed to provide the network capabilities necessary to support
the projected (time independent) platform mission operations discussed here. The
notional configurations are only intended to depict the needed systems functions and their
inter-relationships, which can be used as guidelines for where and when to implement
different system functions or technologies. They are not intended to be network designs specifying the exact type, quantities, sizing or placement of system components for specific platforms.
This section begins with a notional highly mobile platform with minimal information
exchange needs, referred to as an Airborne Fighter platform. The second notional
platform is a relatively slow-moving platform with more intensive information exchange
needs, referred to as an Airborne C4ISR platform. The third notional platform is a
relatively slow-moving airborne platform that supports AN network range extension,
internetworking, and gateway functions. This platform is referred to as an Airborne
Communications Relay platform.
For each notional platform type, a brief operational profile is provided in terms of the
operational characteristics that drive the AN implementation and employment. These
characteristics include flight patterns, information exchange needs, and any physical
constraints imposed by the platform. These drivers are used to determine the needed
network capabilities (connectivity, network services and operations), the minimum
airborne network functions, links and topologies, and finally the corresponding network
node and system functions for each platform in an objective AN.
8.2 Fighter Platform
8.2.1 Operational Profile
An airborne fighter platform flight profile includes periods of stable flight patterns and
dynamic maneuvers at high speeds. Its relatively small size limits the amount of space
available for mounting antennas and installing equipment. It also limits the amount of
prime power available to AN equipment and other on-board processing. It will host only
one or two crew members, thus its mission applications, sensors, and munitions will be
specialized for conducting one or two mission types (e.g., Close Air Support (CAS) or
Suppression of Enemy Air Defenses (SEAD)).
It will be employed as part of a strike package or combat air patrol (CAP). The strike
package or CAP includes groups of two to four airborne fighter platforms up to a total of
200 platforms. The strike package or CAP will have supporting airborne C2 and ISR
platform(s), tanker (refueling) platform(s), and ground C2 platform(s). Each airborne
fighter platform requires connectivity to all other strike package or CAP and supporting
platforms; however, a majority of information will be exchanged between airborne
fighter platforms. This is driven in large part by the need for frequent (e.g., every
2 seconds) situational awareness and target sorting updates (e.g., position location on
both friendlies and hostiles) in a highly mobile environment. The second largest
information exchange is general background situational awareness exchanges between an
airborne fighter platform and airborne C4ISR platform(s). The majority of information is
exchanged as real-time (fixed format message) data and voice. Some imagery files and
live video feeds may be forwarded to airborne fighter platforms from the supporting
platforms. Networked munitions on-board the airborne fighter platform may also be
exchanging information.
8.2.2 AN Capabilities, Links, and Topologies
Table 8-1 summarizes the network capabilities needed to support the operational profile
described in section 8.2.1.
Table 8-1. Airborne Fighter Platform AN Capabilities
Connectivity
 Coverage: LOS,
potentially BLOS
 Diversity: 2 or 3 links/2
or 3 waveforms
 Throughput: Low speed
to high speed connections
• Type of connection: Pt-Pt, Pt-MultPt, Forwarding
 Network interfaces:
Legacy links and tactical
subnets
Network Capabilities
Services
 Real-time data
 Voice
 Interactive data
Airborne
Fighter
Platform
Operation
 Managing:
Legacy node: Manual
planning, analyzing,
monitoring, and
controlling of legacy node
resources locally.
Network node:
Monitoring and
controlling of network
capable node resources
locally and from a remote
network node, distribute
network SA data, match
use of resources to
operational objectives.
 Forming and Adapting:
Legacy node: Manual
provisioning, initialization
and restoration of legacy
node link resources.
Network node:
Automated provisioning,
initialization and
restoration of AN link
resources.
 Accessing:
Legacy Node: Link and
tactical subnet protection,
with limited manual
detection and reaction for
legacy node resources.
Network node: Tactical
subnet protection, with
automated detection and
reaction for network
capable node resources.
The minimum network capabilities needed to support the airborne fighter platform
operational profile can be provided by legacy and network capable AN nodes with legacy
and subnet links. Airborne fighter platforms will participate in both tethered and flat ad-hoc network topologies. A tethered topology would primarily be used for reachback and
forwarding between the airborne fighter platform and supporting elements. A flat ad-hoc
topology would be used between airborne fighter platforms in a strike package or CAP
for the more frequent information exchanges.
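The contrast between the two topologies can be sketched as a pair of adjacency graphs. This is an illustrative aid, not part of the architecture; the node names and link layout are hypothetical.

```python
# Illustrative sketch (hypothetical nodes): in a tethered topology the
# fighters reach supporting elements only through an uplink node; in a
# flat ad-hoc topology, fighters in a strike package peer directly.
from collections import deque

def hops(graph, src, dst):
    """Breadth-first search returning the hop count from src to dst."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None  # unreachable

# Tethered: each fighter has a single link back to a supporting element.
tethered = {
    "fighter1": ["c4isr"], "fighter2": ["c4isr"],
    "c4isr": ["fighter1", "fighter2", "ground"],
    "ground": ["c4isr"],
}

# Flat ad-hoc: fighters in the strike package also peer directly.
flat_adhoc = {
    "fighter1": ["fighter2", "c4isr"], "fighter2": ["fighter1", "c4isr"],
    "c4isr": ["fighter1", "fighter2"],
}

print(hops(tethered, "fighter1", "fighter2"))   # 2 (via the tether)
print(hops(flat_adhoc, "fighter1", "fighter2")) # 1 (direct peer link)
```

The direct peering is what makes the flat ad-hoc topology suit the frequent fighter-to-fighter exchanges, while the tether concentrates reachback traffic through one supporting node.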
8.2.3 AN System Configuration
Figure 8-1 depicts the minimum system configuration to implement the AN node
functions, links, and topologies to support airborne fighter platform operations.
[Figure 8-1 is a block diagram. On-board networks (bus/LAN interface, sensor systems, and legacy voice systems with a VoIP gateway) connect through HAIPE devices to the AN equipment: link management and network services (link manager, name server, config management, time server), routing and security (internal switch, external firewall and switch, external router, external name service, QoS manager, intrusion detection), and network management (AN network manager, NM database, policy decision point). Off-board transmission is over HF/VHF/UHF, MUOS SATCOM, TDL, and WNW voice/data links.]
Figure 8-1. Fighter Platform AN System Configuration
8.2.4 Airborne Fighter Platform AN Issues and Risks
TBD
8.3 Airborne C4ISR Platform
8.3.1 Operational Profile
This operational profile applies to the mission crew (as opposed to the flight crew) on
manned C4ISR platforms. A C4ISR platform's flight profile includes periods of enroute
flying and repeated, stable flight patterns. The relatively large airframe provides space
for mounting antennas and installing significant communications equipment to
accommodate multiple mission crew functions, and it makes prime power available to
AN equipment and other on-board processing. It will host up to three dozen mission
crew members, including a communications operator. A C4ISR platform's mission
applications and sensors will support multiple capabilities and mission types. Mission
durations for any single aircraft and crew could range up to 12 hours; with aerial
refueling this could be extended to 24 hours. C4ISR platforms often operate beyond line-of-sight of ground infrastructure.
A C4ISR platform could be employed as a stand-alone or as part of a multiple sensor
platform ISR constellation. Some will also be employed as airborne elements of the
tactical air control system (AETACS) for support to strike package(s), CAP, etc. (refer to
section 8.2.1 for description of fighter aircraft employment). These C4ISR platforms
require a broad range of connectivity to simultaneously support ISR and battle
management (for AETACS) functions for multiple mission types.
In stand-alone and ISR constellation missions, connectivity is primarily used to
send/receive tasking and status and to distribute raw and processed (e.g., tracks) sensor
data. Air Tasking Order (ATO) updates, theater and national imagery/video and other
intel data must be received. The raw sensor data is collected and off-loaded in
continuous streams. The volume of information and its distribution is dependent upon
the type of sensor data. For example, Air Moving Target Indicator (AMTI) is relatively
low volume, but is distributed to a comparatively greater number of platforms (including
airborne and ground) than Ground Moving Target Indicator (GMTI) and Synthetic
Aperture Radar (SAR) and other types of sensor data. The larger volumes of data are
offloaded to one or more ground stations (up to 30), which may be located in theater
(LOS or BLOS of platform operation) or in CONUS (reachback connectivity from
platform). This data is offloaded in real time (as it is collected), and can include video.
As more processing capability is fielded in airborne platforms to enable collaborative
targeting, some of this large volume data may also be provided to other airborne C4ISR
platforms.
For battle management functions, the C4ISR platform requires connectivity to all strike
package or CAP aircraft and supporting (ground and airborne) platforms; however, a
majority of information will be exchanged with the other airborne ISR and strike package
or CAP platforms. This is driven in large part because of the need for frequent (e.g.,
every 2 seconds) situational awareness updates (e.g., position location) in a highly mobile
environment. The second largest information exchange is command and control
information between the C4ISR platform and the ground tactical air control system
(GTACS) elements, including the Air and Space Operations Center (AOC). The C4ISR
platform forwards information between networks and COIs involved in multiple mission
types. The majority of information is exchanged as real-time (fixed format message)
data, interactive data (files), and voice. Some imagery files may be forwarded between
C4ISR platforms, GTACS elements, and strike package or CAP aircraft.
8.3.2 AN Capabilities, Links, and Topologies
Table 8-2 summarizes the network capabilities needed to support the operational profile
described in section 8.3.1.
Table 8-2. C4ISR Platform AN Capabilities

Platform: C4ISR Platform

Network Capabilities

Connectivity:
- Coverage: LOS and BLOS
- Diversity: >10 links/waveforms
- Throughput: Low speed to high speed connections
- Type of connection: Pt-Pt, Pt-MultPt, Forwarding
- Network interfaces: Legacy links, tactical subnets, GIG

Services:
- Real-time data
- Voice
- Interactive data
- Video
- Bulk transfer

Operation:
- Managing:
Legacy node: Manual planning, analyzing, monitoring, and controlling of legacy node resources locally.
Network node: Monitoring and controlling of network capable node resources locally and from a remote network node; distribute network SA data; match use of resources to operational objectives.
- Forming and Adapting:
Legacy node: Manual provisioning, initialization and restoration of legacy node link resources.
Network node: Automated provisioning, initialization and restoration of AN link resources.
- Accessing:
Legacy node: Link and tactical subnet protection, with limited manual detection and reaction for legacy node resources.
Network node: Tactical subnet protection, with automated detection and reaction for network capable node resources.
The minimum network capabilities needed to support the C4ISR platform operational
profile can be provided by legacy and internetwork AN nodes with network access,
subnet and legacy links. C4ISR platforms will participate in both tethered and tiered ad-hoc network topologies. A tethered topology would primarily be used for reachback and
forwarding between the C4ISR platform, GTACS, and strike package or CAP aircraft. A
tiered ad-hoc topology would be used between the C4ISR platform and airborne fighter
platforms in a strike package or CAP. Some operational concepts for collaborative
targeting may also require airborne backbone links and topologies for timely distribution
of high volume sensor data between C4ISR platforms. An airborne backbone could also
be used in an extended theater to provide reachback capabilities between strike package,
CAP aircraft, and GTACS elements.
8.3.3 AN System Configuration
Figure 8-2 depicts the minimum system configuration to implement the AN node
functions, links, and topologies to support C4ISR platform operations.
[Figure 8-2 is a block diagram. On-board networks serve classified and unclassified user enclaves, each with mission servers, OWS, voice and video; applications and transport proxies; network, policy, security, and QoS management servers; name, time, and config management servers; and a high assurance guard. The enclaves, sensor systems (via bus/LAN interface), legacy voice systems (via VoIP gateway), and a TDL gateway connect through HAIPE devices to the AN equipment: link management and network services (link manager, name server, config management, time server), routing and security (internal switch, external firewall and switch, external router, external name service, QoS manager, intrusion detection), and network management (AN network manager, NM database, policy server, policy decision point). Off-board transmission is over HF/VHF/UHF, MUOS SATCOM, TDL, CDL, commercial SATCOM, TCM SATCOM, WNW, and lasercom voice/data links.]
Figure 8-2. C4ISR Platform AN System Configuration
8.3.4 C4ISR Platform AN Issues and Risks
TBD
8.4
Airborne Communications Relay Platform
8.4.1 Operational Profile
This operational profile applies to airborne communications relay functions onboard a
widebody platform or a UAV that may or may not be dedicated to the communications
relay function. An airborne communications relay platform's flight profile includes
periods of enroute flying and repeated, stable flight patterns. The relatively large size of
widebodies theoretically provides space for mounting antennas and installing significant
communications equipment; however, this may be limited by the platform's primary
mission function (e.g., C4ISR). UAVs offer long endurance and high altitude,
which give wide area air and surface coverage and good optical paths to satellites.
Mission durations for any single aircraft and crew could range up to 12 hours or more for
widebodies and longer for UAVs. Airborne communications relay platforms could
operate within line-of-sight or beyond line-of-sight of ground infrastructure. They may
or may not have a communications operator on-board.
An airborne communications relay platform is employed as part of, or in support of, a
C4ISR constellation, strike package(s), or CAP. The
communications relay platform provides connectivity between elements of a strike
package, CAP aircraft, C4ISR platforms, and GTACS platforms that require range
extension or internetworking and gateway functions between networks for information
interoperability. For platforms that are beyond line of sight of ground infrastructure (and
have no space infrastructure connection) the communications relay platform provides
critical connectivity for C2 and situational awareness. The airborne communications
relay platform must support all information exchange types and characteristics used
within a theater.
8.4.2 AN Capabilities, Links, and Topologies
Table 8-3 summarizes the network capabilities needed to support the operational profile
described in section 8.4.1.
Table 8-3. Airborne Communications Relay Platform AN Capabilities

Platform: Airborne Communications Relay Platform

Network Capabilities

Connectivity:
- Coverage: LOS and BLOS
- Diversity: >10 links/waveforms
- Throughput: Low speed to high speed connections
- Type of connection: Pt-Pt, Pt-MultPt, Forwarding
- Network interfaces: Legacy links, tactical subnets, GIG

Services:
- Real-time data
- Voice
- Interactive data
- Video
- Bulk transfer

Operation:
- Managing:
Legacy node: Manual planning, analyzing, monitoring, and controlling of legacy node resources locally.
Network node: Monitoring and controlling of network capable node resources locally and from a remote network node; distribute network SA data; match use of resources to operational objectives.
- Forming and Adapting:
Legacy node: Manual provisioning, initialization and restoration of legacy node link resources.
Network node: Automated provisioning, initialization and restoration of AN link resources.
- Accessing:
Legacy node: Link and tactical subnet protection, with limited manual detection and reaction for legacy node resources.
Network node: Tactical subnet protection, with automated detection and reaction for network capable node resources.
The minimum network capabilities needed to support the airborne communications relay
platform operational profile can be provided by legacy and internetwork AN nodes with
network access, subnet and legacy links. Airborne communications relay platforms will
participate in both tethered and tiered ad-hoc network topologies. A tethered topology
would primarily be used for reachback and forwarding between the C4ISR platform,
GTACS, and strike package or CAP aircraft. A tiered ad-hoc topology would be used
between the C4ISR platform and airborne fighter platforms in a strike package or CAP.
Some operational concepts for collaborative targeting may also require airborne
backbone links and topologies for timely distribution of high volume sensor data between
C4ISR platforms. An airborne backbone could also be used in an extended theater to
provide reachback capabilities between strike package or CAP and GTACS elements.
8.4.3 AN System Configuration
Figure 8-3 depicts the minimum system configuration to implement the AN node
functions, links, and topologies to support airborne communications relay platform
operations.
[Figure 8-3 is a block diagram. On-board networks (legacy voice systems with a VoIP gateway) connect through a HAIPE device to the AN equipment: link management and network services (link manager, name server, config management, time server), routing and security (internal switch, external firewall and switch, external router, external name service, QoS manager, intrusion detection), and network management (AN network manager, NM database, policy server, policy decision point). Off-board transmission is over HF/VHF/UHF, MUOS SATCOM, TDL, CDL, commercial SATCOM, TCM SATCOM, WNW, and lasercom voice/data links.]
Figure 8-3. Airborne Communications Relay Platform AN System Configuration
8.4.4 Airborne Communications Relay Platform AN Issues and Risks
TBD
9. Recommended Network Standards
9.1 Current Standards (TV-1)
Table 9-1 includes a list of the key technical standards that should be used in near-term
and interim AN implementations to provide the AN functionality defined herein. This
list is not all inclusive and many of the standards only provide a subset of the desired
functionality. See section 9.2 for potential emerging standards that provide more of the
desired functionality and section 9.3 for a list of areas for which widely accepted
standards do not yet exist. Standards that are included in the Department of Defense
(DoD) Information Technology Standards Registry (DISR) are noted.
Table 9-1. Applicable Technical Standards
Standards Area
Description/Discussion
Applicable Standards
Connectivity
Routing
Open Shortest Path First (OSPF)
RFC 1584 (Multicast Extensions to OSPF)
(DISR) - RFC 2328/ IETF Standard 54
(OSPF Version 2)
(DISR) - RFC 2740 (OSPF for IPv6)
RFC 3630 (Traffic Engineering Extensions
to OSPF v2)
Border Gateway Protocol (BGP)
(DISR) - RFC 1771, (Border Gateway
Protocol 4 (BGP-4)
RFC 1997 (BGP Communities)
RFC 2439 (Route Flap Dampening)
(DISR) - RFC 2545 (Extensions for IPv6
Inter-Domain Routing)
RFC 2796 (Route Reflection)
(DISR) - RFC 2858 (Multiprotocol
Extensions)
RFC 2918 (Route Refresh Capability)
RFC 3065 (AS Confederations)
RFC 3107 (Label Information)
RFC 3392 (Capabilities Advertisement)
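The BGP-4 standards above center on a best-path selection process. A minimal sketch of its first steps follows; it is an assumption-laden simplification (prefer the highest LOCAL_PREF, then the shortest AS_PATH, then the lowest peer router ID), and real implementations apply further steps such as MED, origin, and eBGP-versus-iBGP preference. All route values here are hypothetical.

```python
# Illustrative, simplified BGP-4 best-path selection: highest LOCAL_PREF,
# then shortest AS_PATH, then lowest peer router ID as a tiebreaker.
def best_path(routes):
    """routes: list of dicts with local_pref, as_path (list), peer_id."""
    return min(
        routes,
        key=lambda r: (-r["local_pref"], len(r["as_path"]), r["peer_id"]),
    )

candidates = [
    {"local_pref": 100, "as_path": [65001, 65002], "peer_id": "10.0.0.1"},
    {"local_pref": 100, "as_path": [65003], "peer_id": "10.0.0.2"},
    {"local_pref": 90, "as_path": [65004], "peer_id": "10.0.0.3"},
]
print(best_path(candidates)["peer_id"])  # "10.0.0.2": same pref, shorter path
```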
Intermediate System to
Intermediate System (IS-IS)
RFC 1142 (IS-IS Intra-Domain Routing
Protocol)
RFC 1195 (IS-IS in TCP/IP Environments)
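Both OSPF and IS-IS are link-state protocols whose core computation is a shortest-path-first (Dijkstra) run over the link-state database. A minimal sketch, with a hypothetical four-router topology in which edge weights play the role of interface costs:

```python
# Illustrative shortest-path-first (Dijkstra) computation, as performed
# by link-state routing protocols such as OSPF and IS-IS.
import heapq

def spf(graph, root):
    """Return {node: cost} of least-cost paths from root."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, weight in graph.get(node, {}).items():
            new_cost = cost + weight
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr))
    return dist

topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(spf(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```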
Quality of Service
Differentiated Services
(DiffServ)
DiffServ:
(DISR) - RFC 2474 (DiffServ Field)
RFC 2475 (DiffServ Arch)
RFC 2597 (AF PHB)
RFC 2638 (Bandwidth Broker for Diffserv
Arch)
RFC 3086 (Per Domain Behaviors)
RFC 3140 (PHB Coding)
RFC 3246 (EF PHB)
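The DiffServ field of RFC 2474 can be sketched in a few lines: the DSCP occupies the six high-order bits of the former IPv4 TOS octet, with the two low-order bits left to ECN. The codepoint values below (EF = 46 from RFC 3246, AF41 = 34 from RFC 2597) are standard, but the helper functions are illustrative only.

```python
# Illustrative DSCP <-> TOS-octet mapping per RFC 2474: the DSCP sits in
# the six high-order bits; the low two bits are the ECN field.
def dscp_to_tos(dscp):
    """Place a 6-bit DSCP into the high bits of the TOS/Traffic Class octet."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field")
    return dscp << 2

def tos_to_dscp(tos):
    """Recover the DSCP from a TOS octet, discarding the ECN bits."""
    return tos >> 2

EF, AF41 = 46, 34                       # RFC 3246 and RFC 2597 codepoints
print(hex(dscp_to_tos(EF)))             # 0xb8, the classic EF marking
print(tos_to_dscp(dscp_to_tos(AF41)))   # 34
```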
Multi-Protocol Label Switching
(MPLS)
MPLS:
(DISR) - RFC 2702 (Requirements for
Traffic Engineering over MPLS)
(DISR)-RFC 2917 (Core MPLS IP VPN
Arch)
(DISR) - RFCs 3031 (MPLS-Arch),
RFC 3032 (MPLS LSR Encoding)
RFC 3209 (RSVP-TE for MPLS LSPs)
RFC 3477 (RSVP-TE for MPLS LSPs - Unnumbered
Links)
DiffServ & MPLS:
RFC 3270 (MPLS-DS, with Mapping Guide)
Integrated Services (IntServ)
IntServ:
(DISR) - RFC 2205 (RSVP)
(DISR) - RFC 2207 (RSVP Extensions for
IPSec)
RFC 2209 (RSVP Message Processing Rules)
(DISR) - RFC 2210 (RSVP and IntServ)
RFC 2211 (Controlled Load Service),
RFC 2212 (Guaranteed Service)
RFC 2215 (Characterization Parameters)
RFC 2688 (IntServ over Low Speed
Networks)
RFC 2746 (RSVP over IP Tunnels)
RFC 2815 (IntServ over IEEE 802
Networks)
RFC 2961 (RSVP Overhead Reduction)
(DISR) - RFC 3175 (Aggregate RSVP)
IntServ & DiffServ:
RFC 2996 (RSVP DCLASS Object)
RFC 2997 (RSVP Null Service Type)
RFC 2998 (Framework IntServ over
DiffServ)
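The IntServ characterization parameters of RFC 2215 describe a flow with a token bucket (rate r, depth b); a packet conforms if enough tokens have accumulated. A minimal sketch follows, with hypothetical parameters and arrival times.

```python
# Illustrative token-bucket conformance check, the traffic model behind
# IntServ TSpecs (cf. RFC 2215): tokens accrue at `rate` up to `depth`.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate          # tokens (bytes) added per second
        self.depth = depth        # maximum bucket size in bytes
        self.tokens = depth       # start full
        self.last = 0.0           # time of last update, seconds

    def conforms(self, size, now):
        """True if a packet of `size` bytes conforms at time `now`."""
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=1000, depth=1500)   # 1 kB/s, one MTU of depth
print(tb.conforms(1500, now=0.0))  # True: bucket starts full
print(tb.conforms(1500, now=0.5))  # False: only 500 tokens accumulated
print(tb.conforms(1500, now=2.0))  # True: bucket has refilled
```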
Admission Control and Policy
RFC 2750 (RSVP Extensions for Admission
Control)
RFC 2753 (Framework for Policy-based
Admission Control)
RFC 2814 (RSVP-Based Admission Control
over IEEE 802 Networks)
RFC 3644 (Policy QoS Information Model)
Information Assurance
DoD IA
DoD IA Policy Framework
8500 - General
DoDD 8500.1, "Information Assurance
(IA)," 10/24/2002
DoDI 8500.2, "Information Assurance (IA)
Implementation," 02/06/2003
8510 - Certification and Accreditation
8520 - Security Management (SMI, PKI,
KMI, EKMS)
DoDD 8520.1, "Protection of Sensitive
Compartmented Information (SCI),"
12/20/2001
DoDI 8520.2, "Public Key Infrastructure
(PKI) and Public Key (PK) Enabling,"
04/01/2004
8530 - Computer Network Defense
/Vulnerability Mgt
DoDD O-8530.1 “Computer Network
Defense (CND),” 01/08/01
DoDI O-8530.2 “Support to Computer
Network Defense (CND),” 03/09/01
8540 - Interconnectivity/Multi-Level
Security (SABI)
8550 - Network/Web (Access, Content,
Privileges)
DoDI 8551.1, "Ports, Protocols, and Services
Management (PPSM)," 08/13/2004
8560 - Assessments (Red Team, TEMPEST
Testing & Monitoring)
8570 - Education, Training, Awareness
DoDD 8570.1, "Information Assurance
Training, Certification, and Workforce
Management," 08/15/04
8580 - Other (Mobile Code, IA OT&E, IA in
Acquisition)
DoDI 8580.1, "Information Assurance (IA)
in the Defense Acquisition System,"
07/09/2004
Network Layer Security
IPSec
(DISR) - RFC2401 (Security Architecture
for the Internet Protocol)
(DISR) - RFC2402 (IP Authentication
Header)
(DISR) - RFC2406 (IP Encapsulating
Security Payload (ESP))
Authentication/Identification
DoD Certificate Policy
DoDD 8500.1, DoDI 8500.2
US DOD CP: “X.509 Certificate Policy (CP) for the U.S. Department of Defense (DOD)”, Version 5.0, 1 December 1999
Online Certificate Status Protocol
RFC 2560 (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol (OCSP))
X.509
RFC2510 (Internet X.509 Public Key
Infrastructure Certificate Management
Protocols)
RFC2511 (Internet X.509 Certificate
Request Message Format)
RFC2585 (Internet X.509 Public Key
Infrastructure Operational Protocols: FTP
and HTTP)
RFC2587 (Internet X.509 Public Key
Infrastructure LDAPv2 Schema)
RFC3161 (Internet X.509 Public Key
Infrastructure Time-Stamp Protocol (TSP))
RFC3279 (Algorithms and Identifiers for the
Internet X.509 Public Key Infrastructure
Certificate and Certificate Revocation List
(CRL) Profile)
RFC3280 (Internet X.509 Public Key
Infrastructure Certificate and Certificate
Revocation List (CRL) Profile)
Security Policy Management
DoD Directive 8500
DoDD 8500.1, DoDI 8500.2
SNMPv3
RFC 3414 (User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3))
IPSec
RFC3585 (IPsec Configuration Policy Information Model)
Data Encryption
HAIPIS
NSA HAIPIS Version 2, May 2004
DoD Directive 8500
DoDD 8500.1, DoDI 8500.2
Key Distribution
Internet Key Exchange
RFC2409 (The Internet Key Exchange (IKE))
Public-Key Cryptography
Standards
(DISR) - RFC2315 (PKCS #7:
Cryptographic Message Syntax Version 1.5)
RFC2898 (PKCS #5: Password-Based
Cryptography Specification Version 2.0)
RFC2985 (PKCS #9: Selected Object
Classes and Attribute Types Version 2.0)
RFC2986 (PKCS #10: Certification Request
Syntax Specification Version 1.7)
RFC3447 (Public-Key Cryptography
Standards (PKCS) #1: RSA Cryptography
Specifications Version 2.1)
RFC3207 (SMTP Service Extension for
Secure SMTP over Transport Layer
Security)
Secure Socket Layer
(DISR) - Draft-freier-ssl-version3-01
Transport Layer Security
(DISR) - RFC2246 (The TLS Protocol
Version 1.0)
RFC2595 (Using TLS with IMAP, POP3 and
ACAP)
RFC2712 (Addition of Kerberos Cipher
Suites to Transport Layer Security (TLS))
RFC2830 (Lightweight Directory Access
Protocol (v3): Extension for Transport Layer
Security)
RFC3268 (Advanced Encryption Standard
(AES) Ciphersuites for Transport Layer
Security (TLS))
RFC3436 (Transport Layer Security over
Stream Control Transmission Protocol)
RFC3546 (Transport Layer Security (TLS)
Extensions)
(DISR) - RFC2408 (Internet Security
Association and Key Management Protocol
(ISAKMP))
IPSec
DoD Directive 8500
DoDD 8500.1, DoDI 8500.2
Secure Directory
LDAP
RFC2649 (An LDAP Control and Schema
for Holding Operation Signatures)
RFC2829 (Authentication Methods for
LDAP)
RFC3062 (LDAP Password Modify
Extended Operation)
RFC3112 (LDAP Authentication Password
Schema)
Secure Domain Name
Services
DNS
(DISR) - RFC2535 (Domain Name System
Security Extensions)
RFC3645 (Generic Security Service
Algorithm for Secret Key Transaction
Authentication for DNS (GSS-TSIG))
Network Management
Common Management
Protocol
IETF Standard 62:
RFC3411 (Architecture for SNMP
Management Frameworks)
RFC 3412 (Message Processing &
Dispatching for SNMP)
RFC 3413 (SNMP Applications)
RFC 3414 (User-Based Security Model for
SNMPv3)
RFC 3415 (View-Based Access Control
Model for SNMP)
RFC 3416 (Version 2 of the Protocol
Operations for SNMP)
RFC 3417 (Transport Mappings for SNMP)
RFC 3418 (MIB for SNMP)
Common Structure of
Management
Information (SMI)
Provides a common structure
and format rules for representing
management information;
SMIv1 and SMIv2
(DISR) IETF Standard 16:
RFC 1155 (Structure and Identification of
management information for TCP/IP-based
internets)
RFC 1212 (Concise MIB definitions)
IETF Standard 58:
RFC 2578 (Structure of Management
Information Version 2 (SMIv2))
RFC 2579 (Textual Conventions for SMIv2)
RFC 2580 (Conformance Statements for
SMIv2)
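SMI object identifiers travel in SNMP messages in their BER encoding: the first two arcs collapse into a single octet (40*x + y), and each later arc is written base-128 with a continuation bit. A small sketch of the content-octet encoding (illustrative only; a real SNMP stack also wraps the value in a tag and length):

```python
# Illustrative BER content-octet encoding of an SMI OID, as used in SNMP.
def encode_oid(arcs):
    """Return the BER content octets for an OID given as a list of ints."""
    first, second, rest = arcs[0], arcs[1], arcs[2:]
    out = [40 * first + second]        # first two arcs share one octet
    for arc in rest:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)  # continuation bit set
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# mib-2 (1.3.6.1.2.1), the root of the MIBs listed in this table
print(encode_oid([1, 3, 6, 1, 2, 1]).hex())        # 2b06010201
# an arc over 127 (here 311) spans two octets: 0x82 0x37
print(encode_oid([1, 3, 6, 1, 4, 1, 311]).hex())   # 2b060104018237
```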
Management
Information Base
General
(DISR) IETF Standard 17:
RFC 1213 (Management Information Base
for Network Management of TCP/IP-based
internets:MIB-II)
The following 3 RFCs update RFC 1213:
(DISR) RFC 2011 (SNMPv2 Management
Information Base for the Internet Protocol
using SMIv2)
(DISR) RFC 2012 (SNMPv2 Management
Information Base for the Transmission
Control Protocol using SMIv2)
(DISR) RFC 2013 (SNMPv2 Management
Information Base for the User Datagram
Protocol using SMIv2)
(DISR) IETF Standard 59:
RFC 2819 (Remote Network Monitoring
Management Information Base)
(DISR) RFC 1657 (Definitions of Managed
Objects for the Fourth Version of the Border
Gateway Protocol (BGP-4) using SMIv2)
Other MIBs
RFC 1724 (RIP Version 2 MIB Extension)
(DISR) RFC 1850 (OSPF Version 2
Management Information Base)
(DISR) RFC 2021 (Remote Network
Monitoring Management Information Base
Version 2 using SMIv2)
RFC 2096 (IP Forwarding Table MIB)
RFC 2206 (RSVP Management Information
Base using SMIv2)
RFC 2213 (Integrated Services Management
Information Base using SMIv2)
RFC 2214 (Integrated Services Management
Information Base using SMIv2)
RFC 2564 (Application Management MIB)
(DISR) RFC 2605 (Directory Server
Monitoring MIB)
RFC 2667 (IP Tunnel MIB)
RFC 2720 (Traffic Flow Measurement:
Meter MIB)
(DISR) RFC 2788 (Network Services
Monitoring MIB)
(DISR) RFC 2790 (Host Resources MIB)
RFC 2922 (Physical Topology MIB)
RFC 2932 (IPv4 Multicast Routing MIB)
RFC 2933 (Internet Group Management
Protocol MIB)
RFC 2959 (Real-Time Transport Protocol
Management Information Base)
RFC 3273 (Remote Network Monitoring
Management Information Base for High
Capacity Networks)
RFC 3287 (Remote Monitoring MIB
Extensions for Differentiated Services)
RFC 3289 (Management Information Base
for the Differentiated Services Architecture)
RFC 3559 (Multicast Address Allocation
MIB)
Link Management
Location Management
(DISR) RFC3261 (SIP: Session Initiation
Protocol)
RFC3265 (Session Initiation Protocol (SIP)-Specific Event Notification)
Registration
RFC3326 (The Reason Header Field for the
Session Initiation Protocol (SIP))
RFC3327 (Session Initiation Protocol (SIP)
Extension Header Field for Registering Non-Adjacent Contacts)
RFC3608 (Session Initiation Protocol (SIP)
Extension Header Field for Service Route
Discovery During Registration)
IETF Best Current Practice (BCP) 0075,
RFC3665 (Session Initiation Protocol (SIP)
Basic Call Flow Examples)
Service, Node and Path Discovery
ITU-T G.7714/Y.1705 (Generalized automatic discovery techniques), November 2001
ITU-T G.7714.1/Y.1705.1 (Protocol for automatic discovery in SDH and OTN networks), April 2003
(DISR) RFC2461 (Neighbor Discovery for
IP Version 6 (IPv6))
RFC3379 (Delegated Path Validation and
Delegated Path Discovery Protocol
Requirements)
RFC3674 (Feature Discovery in Lightweight
Directory Access Protocol (LDAP))
Network Services
Name Resolution
Service
(DISR) - IETF Standard 13 Domain Name
System (RFC 1034/RFC 1035)
Configuration Management
RFC 951, Bootstrap Protocol (BOOTP)
(DISR) - RFC 2131, Dynamic Host Configuration Protocol
RFC 3396 (Update of RFC 2131)
Network Time
Network Time Protocol
(DISR) - RFC 1305 Network Time Protocol (Version 3)
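The RFC 1305 (NTPv3) on-wire timestamp is a 64-bit value: 32 bits of seconds since 1 January 1900 and 32 bits of binary fraction. A minimal conversion sketch (illustrative; the helper names are not from any standard library):

```python
# Illustrative NTP (RFC 1305) 64-bit timestamp packing: 32.32 fixed point,
# seconds counted from the 1900 epoch.
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

def unix_to_ntp(unix_time):
    """Pack a Unix time (float seconds) into a 64-bit NTP timestamp."""
    seconds = int(unix_time) + NTP_EPOCH_OFFSET
    fraction = int((unix_time % 1) * (1 << 32))
    return (seconds << 32) | fraction

def ntp_to_unix(ntp_ts):
    """Unpack a 64-bit NTP timestamp back to Unix float seconds."""
    seconds = (ntp_ts >> 32) - NTP_EPOCH_OFFSET
    fraction = (ntp_ts & 0xFFFFFFFF) / (1 << 32)
    return seconds + fraction

ts = unix_to_ntp(0.5)             # half a second past the Unix epoch
print(ts >> 32)                   # 2208988800
print(round(ntp_to_unix(ts), 6))  # 0.5
```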
9.2 Emerging Standards (TV-2)
Table 9-2. Applicable Emerging Technical Standards
Standards Area
Connectivity
Description/Discussion
Applicable Standards
Routing
OSPF
Draft-ietf-ospf-scalability-08 (Prioritized
Treatment of OSPF Packets)
Draft-ietf-ospf-cap-03 (Advertising
Optional Router Capabilities), July 2004
IS-IS
Draft-ietf-isis-wg-multi-topology-07
(Multi-topology Routing), June 2004
RFC 3784 (IS-IS Extensions for Traffic
Engineering), June 2004
RFC 3787 (Recommendations for
Networks Using IS-IS)
Mobile Ad Hoc Routing
RFC 3561 (AODV Protocol)
RFC 3626 (OLSR Protocol)
draft-chandra-ospf-manet-ext-01 (OSPF
Extensions), July 2004
draft-ietf-manet-dsr-10 (The Dynamic
Source Routing Protocol for Mobile Ad
Hoc Networks), July 2004
draft-spagnolo-manet-ospf-wireless-interface-01 (OSPFv2 Wireless Interface
Type), May 2004
draft-spagnolo-manet-ospf-design (Design
Considerations for a Wireless OSPF
Interface), April 2004
draft-jeong-manet-maodv6-00 (Multicast
Ad hoc On-Demand Distance Vector
Routing for IP version 6), July 2004
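The mobile ad hoc routing standards above share the on-demand discovery idea of AODV (RFC 3561): flood a route request hop by hop and record the reverse path so the reply can retrace it. A minimal sketch under that assumption (hypothetical topology; real AODV also carries sequence numbers and TTLs):

```python
# Illustrative AODV-style on-demand route discovery: flood a RREQ,
# record reverse-path pointers, then read the forward path back out.
from collections import deque

def discover_route(links, src, dst):
    """Flood a RREQ from src; return the forward path to dst, or None."""
    prev = {src: None}              # reverse-path pointers set by the flood
    frontier = deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            break
        for nbr in links.get(node, ()):
            if nbr not in prev:     # first copy of the RREQ wins
                prev[nbr] = node
                frontier.append(nbr)
    if dst not in prev:
        return None
    path, node = [], dst
    while node is not None:         # walk the reverse pointers back to src
        path.append(node)
        node = prev[node]
    return path[::-1]

links = {
    "A": ["B", "C"], "B": ["A", "D"],
    "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}
print(discover_route(links, "A", "E"))  # ['A', 'B', 'D', 'E']
```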
Quality of Service
DiffServ and IntServ
RFC 3670 (Model for QoS Datapath
Mechanisms), January 2004
RFC 3754 (Multicast for DiffServ), April
2004
ID draft-baker-diffserv-basic-classes-03
(DSCP<=> PHB Mapping Guide), July
2004
QoS and Signaling
IETF Next Steps in Signaling (NSIS) WG
The Next Steps in Signaling (NSIS)
working group is considering protocols for
signaling information about a data flow
along its path in the network.
RFC 3726 (Requirements for Signaling
Protocols), April 2004
draft-ietf-nsis-fw-06 (Next Steps in
Signaling: Framework), July 2004
draft-ash-nsis-nslp-qspec-01 (QoS-NSLP
QSpec Template), July 2004
draft-tschofenig-nsis-qos-ext-authz-00
(Extended QoS Authorization for the QoS
NSLP), July 2004
draft-ietf-nsis-qos-nslp-04 (NSLP for
Quality-of-Service signaling), July 2004
QoS and Routing
Framework
RFC 2386 (Framework for QoS Routing),
August 1998
QoS Routing Mechanisms and
OSPF Extensions describes
extensions to the OSPF protocol
to support QoS routes.
QoS and Measurement
RFC 2676 (QoS Routing), August 1999
RFC 2330 (Framework for IP Performance
Metrics)
RFC 3432 (Network performance
measurement with periodic streams)
Information Assurance
DoD IA
DoD IA Policy Framework
8510 - Certification and Accreditation
DoDD 8510.aa “DoD C&A”
DoD 8510.1-M “DITSCAP”
DoDI 8510.bb “DITSCAP Implementation”
8520 - Security Management (SMI, PKI,
KMI, EKMS)
DoDI 8520.cc “Communications Security (COMSEC)”
8530 - Computer Network Defense
/Vulnerability Mgt
DoDI O-8530.cc “DoD Vulnerability
Management”
8540 - Interconnectivity/Multi-Level
Security (SABI)
DoDI 8540.aa “Interconnection & Data
Transfer Between Security Domains”
8550 - Network/Web (Access, Content,
Privileges)
DoDD 8550.aa “Web Site Administration”
DoD 8550.bb-G “Firewall Configuration”
DoDI 8550.dd “DoD Biometrics”
8560 - Assessments (Red Team, TEMPEST
Testing & Monitoring)
DoDD 8560.aa “Information Assurance
Monitoring and Readiness Testing of DoD
Telecommunications and Information
Systems”
Authentication/Identification
X.509
RFC3709 (Internet X.509 Public Key
Infrastructure: Logotypes in X.509
Certificates), February 2004
RFC3739 (Internet X.509 Public Key
Infrastructure: Qualified Certificates
Profile), March 2004
RFC3779 (X.509 Extensions for IP
Addresses and AS Identifiers), June 2004
RFC3850 (Secure/Multipurpose Internet
Mail Extensions (S/MIME) Version 3.1
Certificate Handling), July 2004
Key Distribution
IKE
RFC3664 (The AES-XCBC-PRF-128
Algorithm for the Internet Key Exchange
Protocol (IKE)), January 2004
RFC3706 (A Traffic-Based Method of
Detecting Dead Internet Key Exchange
(IKE) Peers), February 2004
RFC3734 (Extensible Provisioning
Protocol (EPP) Transport Over TCP),
March 2004
TLS
RFC3749 (Transport Layer Security
Protocol Compression Methods), May 2004
Secure Mobility
IPSec
RFC3776 (Using IPsec to Protect Mobile
IPv6 Signaling Between Mobile Nodes and
Home Agents), June 2004
Secure Directory
LDAP
RFC3829 (Lightweight Directory Access
Protocol (LDAP) Authorization Identity
Request and Response Controls), July 2004
Network Management
Common Structure of Management Information (SMI)
RFC 3780 (SMIng-Next Generation SMI),
May 2004
RFC 3781 (SMIng-Mappings to SNMP),
May 2004
Management Information Base
IPv6 MIBs
(DISR) RFC 2452 (IP Version 6
Management Information Base for the
Transmission Control Protocol), December
1998
(DISR) RFC 2454 (IP Version 6
Management Information Base for the User
Datagram Protocol), December 1998
RFC 2465 (Management Information Base
for IP Version 6: Textual Conventions and
General Group), December 1998
(DISR) RFC 2466 (Management
Information Base for IP Version 6:
ICMPv6 Group), December 1998
RFC 3595 (Textual Conventions for IPv6
Flow Label), September 2003
RFC 2940 (Definitions of Managed Objects
for COPS Protocol Clients), October 2000
Other MIBs
RFC 3747 (The Differentiated Services
Configuration MIB), April 2004
RFC3812 (MPLS Traffic Engineering (TE)
MIB), June 2004
RFC 3813 (MPLS Label Switching Router
(LSR) MIB), June 2004
RFC 3814 (MPLS Forwarding
Equivalence Class To Next Hop Label
Forwarding Entry (FEC-To-NHLFE) MIB),
June 2004
RFC 3816 (Definitions of Managed Objects
for RObust Header Compression (ROHC)),
June 2004
RFC3873 (Stream Control Transmission
Protocol (SCTP) Management Information
Base (MIB)), September 2004
Policy-Based Network
Management
Common Policy Protocol
RFC 2748 (COPS), January 2000
RFC 3084 (COPS Usage for Policy
Provisioning, COPS-PR), March 2001
Common Structure of Policy
Provisioning Information (SPPI)
RFC 3159 (Structure of Policy Provisioning
Information, SPPI), August 2001
Policy Information Schema and
Models
RFC 3060 (Policy Core Information Model
Specification – Version 1), February 2001
RFC 3460 (Policy Core Information Model
Extensions), January 2003
RFC 3585 (IPsec Configuration Policy
Information Model), August 2003
RFC 3644 (Policy QoS Information
Model), November 2003
RFC 3670 (Information Model for
Describing Network Device QoS Datapath
Mechanisms), January 2004
RFC 3317 (Differentiated Services Quality
of Service Policy Information Base), March
2003
Policy Information Base
RFC 3318 (Framework Policy Information
Base), March 2003
RFC 3571 (Framework Policy Information
Base for Usage Feedback), August 2003
draft-ietf-ipsp-ipsecpib-10 (IPSec Policy
Information Base), April 2004
Link Management
Handoff Across
Heterogeneous Links
Manage mobile nodes including
fast handoff for highly dynamic
wireless networks and efficient
location update for highly
mobile nodes
Dynamic Mobility Agents (DMA) Protocol,
AMPS Protocol Design Document,
CECOM MOSAIC Project, Contract
DAAB07-01-C-L534, May 19, 2002
Domain Announcement Protocol (DAP)
Location Management
RFC 3680 (A Session Initiation Protocol
(SIP) Event Package for Registrations),
March 2004
RFC 3825 (Dynamic Host Configuration
Protocol Option for Coordinate-based
Location Configuration Information), July
2004
Registration
Establish and Maintain
Channels
RFC 3840 (Indicating User Agent
Capabilities in the Session Initiation
Protocol (SIP)), August 2004
Signaling to establish, maintain
and recover paths; determination
of route and properties of a path;
link or node removal from
network
Draft-ietf-ccamp-lmp-10 (Link
Management Protocol (LMP)), October
2003
Draft-ietf-ccamp-gmpls-g709-07
(Generalized MPLS Signalling Extensions
for G.709 Optical Transport Networks
Control), March 2004
Draft-ietf-ccamp-gmpls-recovery-functional-02 (Generalized MPLS
Recovery Functional Specification), April
2004
Draft-ietf-ccamp-gmpls-segment-recovery-00 (GMPLS Based Segment Recovery),
April 2004
Draft-ietf-ccamp-gmpls-recovery-analysis-03 (Analysis of Generalized Multi-Protocol
Label Switching (GMPLS)-based Recovery
Mechanisms (including Protection and
Restoration)), April 2004
Draft-ietf-ccamp-gmpls-recovery-e2e-signaling-01 (RSVP-TE Extensions in
support of End-to-End GMPLS-based
Recovery), May 2004
Draft-ali-ccamp-mpls-graceful-shutdown-00 (Graceful Shutdown in MPLS Traffic
Engineering Networks), June 2004
Draft-rabbat-fault-notification-protocol-05
(Fault Notification Protocol for GMPLS-Based Recovery), June 2004
Draft-vasseur-ccamp-te-router-info-00
(Routing extensions for discovery of TE
router information), July 2004
Draft-huang-gmpls-recovery-resource-sharing-00 (Generalized MPLS Recovery
Resource Sharing), July 2004
Topology
Topology Dissemination
RFC 3684 (Topology Dissemination –
Reverse Path Forwarding), February 2004
Dynamic Topology Changes
Draft-xushao-ipo-mplsovergmpls-02
(Requirements for MPLS over GMPLS-based Optical Networks (MPLS over
GMPLS), May 2004
Service, Node and Path
Discovery
RFC3810 (Multicast Listener Discovery
Version 2 (MLDv2) for IPv6), June 2004
RFC3832 (Remote Service Discovery in
the Service Location Protocol (SLP) via
DNS SRV), July 2004
Draft-daigle-snaptr-01 (Domain-based
Application Service Location Using SRV
RRs and the Dynamic Delegation Discovery
Service (DDDS)), June 2004
Draft-daniel-manet-dns-discovery-globalv6-00 (DNS Discovery for global
connectivity in IPv6 Mobile Ad Hoc
Networks), March 2004
Draft-ietf-ipv6-2461bis-00 (Neighbor
Discovery for IP version 6 (IPv6)), July
2004
Draft-ietf-eap-netsel-problem-01 (Network
Discovery and Selection Problem), July
2004
Network Services
Name Resolution
Service
RFC 3363 (IPv6 Address Representation in
DNS), August 2002
(DISR) RFC 3596 (DNS Extensions to
Support IPv6), October 2003
draft-daniel-manet-dns-discovery-globalv6-00 (DNS Discovery for global connectivity
in IPv6 Mobile Ad Hoc Networks), March
2004
Configuration
Management
RFC 3646 (DNS Configuration Options for DHCPv6), December
2003
RFC 3736 (Stateless DHCP for IPv6), April
2004
RFC 3825 (DHCP Option for Location-Based Configuration), July 2004
Automatic
Configuration and
Reconfiguration
Distribute network configuration
information (e.g., IP addresses,
DNS server) within a
subnetwork; update
configuration database
Dynamic Registration and Configuration
Protocol (DRCP), AMPS Protocol Design
Document, CECOM Multifunctional On-the-move Secure Adaptive Integrated
Communications (MOSAIC) Project,
Contract DAAB07-01-C-L534, May 19,
2002
Draft-boucadair-netconf-req-00
(Requirements for Efficient and Automated
Configuration Management), July 2004
Draft-ietf-netconf-prot-03 (NETCONF
Configuration Protocol), June 2004
draft-jelger-manet-gateway-autoconf-v6-02
(Gateway and address autoconfiguration for
IPv6 adhoc networks), April 2004
draft-jeong-manet-addr-autoconf-reqts-02
(Requirements for Ad Hoc IP Address
Autoconfiguration), July 2004
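The configuration-distribution behavior these protocols address can be sketched with a toy model; the class, method, and field names below are hypothetical and are not taken from the AMPS design documents.

```python
# Toy model of subnet configuration distribution (all names hypothetical):
# a configuration server leases addresses from a pool, pushes common
# parameters (prefix, DNS server), and records leases in a database.

class SubnetConfigServer:
    def __init__(self, prefix, dns_server, pool):
        self.prefix = prefix          # advertised subnet prefix
        self.dns_server = dns_server  # DNS server pushed to every node
        self.pool = list(pool)        # free addresses
        self.db = {}                  # node_id -> leased address

    def register(self, node_id):
        """Lease an address (idempotently) and return the node's config."""
        if node_id not in self.db:
            self.db[node_id] = self.pool.pop(0)
        return {"address": self.db[node_id],
                "prefix": self.prefix,
                "dns": self.dns_server}

server = SubnetConfigServer("2001:db8:1::/64", "2001:db8:1::53",
                            ["2001:db8:1::10", "2001:db8:1::11"])
cfg = server.register("node-a")   # node-a gets the first free address
```

Real protocols add lease lifetimes, duplicate-address detection, and database synchronization across servers; the sketch only shows the register/lease exchange.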
Distribute network configuration
information to subnetworks
Dynamic Configuration Distribution
Protocol (DCDP), AMPS Protocol Design
Document, CECOM MOSAIC Project,
Contract DAAB07-01-C-L534, May 19,
2002
Collect node configuration and
capability information
Configuration Database Update Protocol
(YAP), AMPS Protocol Design Document,
CECOM MOSAIC Project, Contract
DAAB07-01-C-L534, May 19, 2002
Select nodes to perform link
management and network
services functions
Adaptive Configuration Manager (ACM),
AMPS Protocol Design Document,
CECOM MOSAIC Project, Contract
DAAB07-01-C-L534, May 19, 2002
Adaptive Configuration Agent (ACA),
AMPS Protocol Design Document,
CECOM MOSAIC Project, Contract
DAAB07-01-C-L534, May 19, 2002
Network Time
9.3 Areas for Further Development
Table 9-3. AN Technical Standards Gaps
Standards Area
Connectivity
Description/Discussion
Status
Routing Protocol for
Heterogeneous (Wired
and Wireless)
Environments
By extending and building off a
wired routing protocol
framework, the probability of
successful transition and
interoperability within
heterogeneous (fixed wired and
ad hoc wireless) networks may
be greatly improved.
Several draft RFCs have been prepared that
describe potential wireless extension to
OSPF
Mobile Ad Hoc
Routing
There is extensive work on
mobile ad hoc routing (shown in
rightmost column) for which
there are no immediate
standardization plans.
draft-ietf-manet-odmrp-04 (On-Demand
Multicast Routing Protocol; Draft Expired; no subsequent RFC)
draft-ietf-manet-maodv-00 (Multicast
AODV Protocol; Draft Expired; no
subsequent RFC)
draft-ietf-manet-rdmar-00 (Relative
Distance Micro-discovery Ad Hoc Routing
Protocol; Draft Expired; no subsequent
RFC)
draft-ietf-manet-tora-spec-04 (Temporally-Ordered Routing Algorithm (TORA)
Version 1; Draft Expired; no subsequent
RFC)
draft-ietf-manet-zone-zrp-04 (The Zone
Routing Protocol (ZRP) for Ad Hoc
Networks; Draft Expired; no subsequent
RFC)
draft-ietf-manet-star-00 (Source Tree
Adaptive Routing (STAR) Protocol; Draft
Expired; no subsequent RFC)
draft-ietf-manet-admr-00 (The Adaptive
Demand-Driven Multicast Routing Protocol
for Mobile Ad Hoc Networks; Draft
Expired; no subsequent RFC)
draft-ietf-manet-cedar-spec-00 (Core
Extraction Distributed Ad hoc Routing
Specification; Draft Expired; no subsequent
RFC)
draft-ietf-manet-cbrp-spec-01 (Cluster
Based Routing Protocol; Draft Expired; no
subsequent RFC)
draft-ietf-manet-longlived-adhoc-routing-00 (Long-lived Ad Hoc Routing based on
the Concept of Associativity; Draft
Expired; no subsequent RFC)
draft-jeong-umr-manet-00 (Unicast Routing
based Multicast Routing Protocol for
Mobile Ad Hoc Networks; Draft Expired;
no subsequent RFC)
Quality of Service
QoS and Routing
draft-perkins-manet-aodvqos-01 (Quality of
Service for Ad hoc On-Demand Distance
Vector Routing; Draft Expired; no
subsequent RFC)
QoS and Resource Reservation
BGRP: A Framework for
Scalable Resource Reservation.
The Border Gateway
Reservation Protocol (BGRP) is
used for inter-domain resource
reservation that can scale in
terms of message processing
load, state storage and control
message bandwidth.
draft-pan-bgrp-framework-00 (Draft
expired; no subsequent RFC)
QoS and Dynamic SLAs/SLSs
COPS Usage for SLS
negotiation (COPS-SLS) is a
protocol for supporting Service
Level Specification (SLS)
negotiation.
draft-nguyen-rap-cops-sls-03
(Draft expired; no subsequent RFC)
Attributes of a Service Level
Specification (SLS) Template
depicts a standard set of
information to be dynamically
negotiated between a customer
and an IP service provider or
between service providers, by
means of instantiated Service
Level Specifications (SLS).
draft-tequila-sls-03
(Draft expired; no subsequent RFC)
Service Level Specification for
Inter-domain QoS Negotiation
describes a structure for defining
Service Level Specifications for
QoS negotiation over IP
networks. The high level schema
described in this document is
intended for use during the
process of QoS negotiation
between a customer entity and a
provider entity.
draft-somefolks-sls-00
(Draft expired; no subsequent RFC)
QoS and Signaling:
DRSVP aims to overcome the
shortcomings of RSVP in terms
of QoS adaptation. By treating a
reservation as a request for
service somewhere within a
range, flexibility needed to deal
with network dynamics is
gained. As available resources
change, the network can readjust
allocations within the reservation
range.
The INSIGNIA protocol is the first
QoS signaling protocol
specifically designed for
resource reservation in ad hoc
environments. It supports in-band
signaling by adding a new option
field in the IP header called
INSIGNIA to carry the signaling
control information. The
INSIGNIA module is
responsible for establishing,
restoring, adapting, and tearing
down real-time flows. It includes
fast flow reservation, restoration
and adaptation algorithms that
are specifically designed to
deliver adaptive real-time
service in
MANETs. If the required
resource is unavailable, the flow
will be degraded to best-effort
service. QoS reports are sent to
the source node periodically to
report network topology
changes, as well as QoS
statistics (loss rate, delay, and
throughput).
N. Schult, M. Mirhakkak
and D. Thomson. A New Approach for
Providing Quality of Service in Dynamic
Network Environment.
http://www.mitre.org/support/papers/tech_papers99_00/thomson_mp_dynamic/thomson_dynamic.pdf.
draft-ietf-manet-insignia-01
(Draft expired; no subsequent RFC)
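The in-band option carried in data packets, as described above, can be illustrated with a small encoder/decoder; the field layout here is hypothetical and does not reproduce the actual INSIGNIA option format.

```python
import struct

# Illustrative encoding of an INSIGNIA-style in-band signaling option
# (hypothetical layout): 1-byte reservation mode, 1-byte payload type,
# and 16-bit min/max bandwidth requests, carried in each packet of a flow.

RES, BEST_EFFORT = 1, 0  # reservation-mode values (hypothetical)

def pack_option(mode, payload_type, bw_min_kbps, bw_max_kbps):
    """Pack the signaling fields into an option blob (network byte order)."""
    return struct.pack("!BBHH", mode, payload_type, bw_min_kbps, bw_max_kbps)

def unpack_option(blob):
    mode, payload_type, bw_min, bw_max = struct.unpack("!BBHH", blob)
    return {"mode": mode, "payload_type": payload_type,
            "bw_min_kbps": bw_min, "bw_max_kbps": bw_max}

opt = pack_option(RES, 1, 64, 256)   # request 64-256 kbps for a flow
fields = unpack_option(opt)
```

Because the request rides inside data packets, intermediate nodes can admit, degrade, or restore the flow without a separate out-of-band signaling exchange.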
draft-salsano-bgrpp-arch-00
Inter-domain QoS Signaling: the
BGRP Plus Architecture
describes a scalable inter-domain
resource control architecture for
DiffServ networks. The
architecture is called BGRP
Plus, as it extends the previously
proposed BGRP framework
(draft-pan-bgrp-framework-00.txt).
QoS and Frameworks for Wide-Area Ad-Hoc Networks
SWAN is a stateless network
model
which uses distributed control
algorithms to deliver service
differentiation in mobile wireless
ad hoc networks in a simple,
scalable and robust manner.
SWAN uses explicit congestion
notification (ECN) to
dynamically regulate admitted
real-time traffic in the face of
network dynamics brought on by
mobility or traffic overload
conditions.
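The ECN-driven regulation SWAN performs can be sketched as follows; the backoff and probe constants are hypothetical and are not taken from the SWAN papers.

```python
# Sketch of SWAN-style source rate regulation (constants hypothetical):
# ECN marks signal congestion, so an admitted real-time flow backs off
# multiplicatively; otherwise it cautiously probes back toward its
# admitted rate.

MIN_RATE_KBPS = 8.0

def regulate(rate_kbps, admitted_kbps, ecn_marked,
             backoff=0.5, probe=1.1):
    if ecn_marked:
        return max(MIN_RATE_KBPS, rate_kbps * backoff)   # back off
    return min(admitted_kbps, rate_kbps * probe)         # re-probe

rate = regulate(256.0, 256.0, ecn_marked=True)    # congestion: 256 -> 128
rate = regulate(rate, 256.0, ecn_marked=False)    # recovery: 128 -> ~140.8
```

The stateless property follows from keeping this logic at the source: intermediate nodes only mark packets, holding no per-flow state.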
The Flexible QoS Model for
MANET (FQMM) is based both
on
IntServ and DiffServ.
Applications with high priority
use the per-flow QoS guarantees
of IntServ. Applications with
lower priorities achieve DiffServ
per-class differentiation.
The INSIGNIA QoS framework
supports adaptive services that
can provide base QoS (i.e.,
minimum bandwidth) assurances
to real-time voice and video
flows and data, allowing for
enhanced levels (i.e., maximum
bandwidth) of service to be
(Draft expired; no subsequent RFC)
G. Ahn, A. T. Campbell, A. Veres, and L.-H. Sun, “SWAN: Service Differentiation in
Stateless Wireless Ad-hoc Networks”,
Proc. IEEE Infocom, pp. 457-466, June
2002.
draft-ahn-swan-manet-00 (Service
Differentiation in Stateless Wireless Ad-hoc Networks; Draft Expired; no
subsequent RFC)
H. Xiao, W. K.G. Seah, A. Lo, and K.
Chaing, “Flexible QoS Model for Mobile
Ad-hoc Networks”, IEEE Vehicular
Technology Conference, Vol. 1, pp 445-449, Tokyo, Japan,
May 2000.
S. Lee, G. Ahn, X. Zhang, and A. T.
Campbell, “INSIGNIA:
An IP-Based QoS framework for Mobile
Ad-hoc Networks”,
Journal of Parallel & Distributed
Computing, Vol. 60, No.
4, pp 374-406, April 2000.
delivered when resources
become available. INSIGNIA is
designed to adapt user sessions
to the available level of service
without explicit signaling
between source-destination
pairs.
The PYLON framework views a
mobile multihop network as a
network that needs to cling to a
fixed access network in order to
gain access to the rest of the
world. The hosting access
network is expected to provide
the mobile multihop network
with suitable Service Level
Agreement (SLA), Traffic
Conditioning
Agreement (TCA), and service
provisioning policies. This
framework uses aggregate RSVP
signaling to perform
inter-domain resource
reservation.
iMAQ is a cross-layer
architecture to support the
transmission of multimedia data
over a mobile multihop network.
Hard QoS guarantees cannot be
provided and resources are not
reserved. The framework
involves the network layer (an
ad hoc routing layer) and a
middleware service layer. At
each mobile node, these two
layers share information and
collaborate to provide QoS
assurances to multimedia traffic.
MMWN (Multimedia support
for Mobile Wireless Network)
system consists of adaptive link
and network algorithms to
support quality of service in
large, multihop mobile wireless
networks. MMWN consists of a
modular system of distributed,
adaptive algorithms at the link
and network layers. The key features
of the system are: adaptive
control of link quality;
Y. L. Morgan and T. Kunz,
“PYLON: An Architectural Framework for
Ad-hoc QoS
Interconnectivity with Access Domains”,
Proceedings of the 36th Hawaii
International Conference on System
Sciences (HICSS’03)
iMAQ: An integrated mobile ad hoc QoS
framework. http://cairo.cs.uiuc.edu/adhoc/.
R. Ramanathan and M. Steenstrup.
Hierarchically organized, multihop mobile
wireless networks for quality of service
support. Mobile Networks and
Applications,
3(1):110–119, 1996.
hierarchical organization;
routing with quality of service;
and autonomous repair of
multipoint virtual circuit with
resource reservations.
The AQuaFWiN architecture
reuses several concepts, such as
hierarchical organization and
adaptive power control, from
existing architectures. The key
distinguishing feature of the
proposed architecture is the
generic feedback mechanism
which can be used to achieve
adaptability at different layers of
the network.
Bobby Vandalore, Raj Jain, Sonia Fahmy,
Sudhir Dixit
AQuaFWiN: Adaptive QoS Framework for
Multimedia in Wireless Networks
and its Comparison with other QoS
Frameworks, 1999
Information Assurance
Security Architecture
for Secure
Management
Transactions
The AN must be able to operate
when disconnected from
terrestrially-based services,
making application of current
security approaches difficult, if
not impossible, to implement.
The AN architecture in general,
and the NM Architecture in
particular, must be capable of
securing administrative
transactions over a disconnected,
dynamically changing network.
A number of different security constructs
exist that can be applied to today’s wire-line NM transactions, including public key
technologies, security protocols including
IPSec enabled virtual private networks
(VPN), Transport Layer Security (TLS),
Secure Socket Layer (SSL), as well as
protocols with built-in security features,
such as the SNMPv3 User Security
Module. These approaches were designed
upon the premise that dedicated, terrestrial-based security resources would be available
to support these security services.
Network Management
Service Advertisement
and Discovery
Mechanisms
To support the spontaneous, self-forming features of the AN, the
network management
architecture will require
autonomous election of cluster
managers and
advertisement/discovery of those
managers within the network. A
service advertisement
mechanism enables nodes with
NM/PBNM services to announce
themselves, while a service
discovery process enables nodes
seeking NM/PBNM services to
locate NM/PBNM servers. Both
of these features enable the NM
Architecture to adapt to the
network’s dynamic topology
changes.
Proprietary and standards-based solutions
for automated service discovery exist for
fixed, wire-line networks (Service Location
Protocol, Salutation Protocol, etc.).
However, these most likely would not be
appropriate for dynamic topology networks.
One issue is the problem of broadcast
flooding over dynamic topology networks
with bandwidth-constrained links. The
selected service discovery methods would
need to constrain the broadcast function in
some fashion to avoid redundant flooding
and inefficient use of bandwidth resources.
Some potential options include use of
MANET multicast and/or constrained (hop-limited) broadcasts.
Clustering Protocol for
Management of
Dynamic Topology
Networks
The main concept behind cluster
management is to convert a
traditionally centralized process
into a semi-distributed process
that adapts to dynamic network
challenges. The performance
objective in selection of clusters
is to minimize protocol overhead
between client and server while
maximizing management service
availability. Selection of cluster
size is an important parameter in
forming and maintaining
clusters. Clusters that are too
large suffer from an excess of
(NM/PBNM) message overhead
due to collection of data from a
large number of clients.
Networks with clusters that are
too small suffer from an excess
of overhead in cluster
maintenance traffic. Factors that
impact selection of clusters
include, among others, relative
node speed, cluster density,
mobility characteristics, etc.
Various recent research endeavors have
proposed cluster management schemes.
However, none of these has been formally
implemented or evaluated against the
particular attributes (relative node speed,
cluster density, mobility characteristics) of
a military airborne network.
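The cluster-size tradeoff described above can be made concrete with a toy overhead model (all constants hypothetical): one term grows with cluster size, one grows as clusters shrink, and a sweep finds the interior optimum.

```python
# Toy model of the cluster-size tradeoff (constants hypothetical): per-node
# management overhead has one term that grows with cluster size (status
# collection from many clients) and one that grows as clusters shrink
# (per-cluster maintenance traffic).

def per_node_overhead(size, collect=0.5, maintain=50.0):
    return collect * size + maintain / size

# Sweep candidate sizes; the optimum falls near sqrt(maintain/collect) = 10.
best = min(range(2, 51), key=per_node_overhead)
```

A real selection algorithm would weight the constants by measured factors such as relative node speed and mobility characteristics rather than fixing them in advance.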
Policy Based Network
Management
Policy Information Base
draft-ietf-rap-acct-fr-pib-01 (Framework of
COPS-PR Policy Information Base for
Accounting Usage; Draft Expired; no
subsequent RFC)
draft-rawlins-rsvppcc-pib-02 (RSVP Policy
Control Criteria PIB; Draft Expired; no
subsequent RFC)
draft-jacquenet-fwd-pib-00 (An IP
Forwarding Policy Information Base; Draft
Expired; no subsequent RFC)
draft-jacquenet-ip-te-pib-02 (An IP Traffic
Engineering Policy Information Base; Draft
Expired; no subsequent RFC)
draft-li-rap-mplspib-00 (MPLS Traffic
Engineering Policy Information Base; Draft
Expired; no subsequent RFC)
draft-otty-cops-pr-filter-pib-00 (A Filtering
Policy Information Base (PIB) for Edge
Router Filtering Service and Provisioning
via COPS-PR; Draft Expired; no
subsequent RFC)
Link Management
Node Advertisement
and Discovery
Mechanisms
The Ad-Hoc Mobility Protocol Suite
(AMPS) consisting of Dynamic
Registration and Configuration Protocol
(DRCP), Dynamic Configuration
Distribution Protocol (DCDP),
Configuration Database Update Protocol
(YAP) and Adaptive Configuration Agent
(ACA) developed for the CECOM
Multifunctional On-the-move Secure
Adaptive Integrated Communications
(MOSAIC) Project provides much of the
desired functionality.
Node Admission
Control
Handoff Across
Heterogeneous Links
Dynamic Mobility Agents (DMA) Protocol
and Domain Announcement Protocol
(DAP) developed for the MOSAIC Project
provide much of the desired functionality.
Link Resource
Provisioning
Node and Link Real-Time Performance
Monitoring and
Evaluation
Network Topology
Optimization
Link Management
Architecture and
Transactions
Network Services
Name Resolution
Service
draft-engelstad-manet-name-resolution-01
(Name Resolution in on-demand MANETS
and over external IP Networks; Draft
Expired; no subsequent RFC)
draft-jeong-manet-dns-service-00 (DNS
Service for Mobile Ad Hoc Networks;
Draft Expired; no subsequent RFC)
draft-park-manet-dns-discovery-globalv6-00 (DNS Discovery for global connectivity
in IPv6 Mobile Ad Hoc Networks; Draft
Expired; no subsequent RFC)
Configuration
Management
draft-paakkonen-addressing-htr-manet-00
(IPv6 addressing in a heterogeneous
MANET-network; Draft Expired; no
subsequent RFC)
draft-perkins-manet-autoconf-01 (IP
Address Autoconfiguration for Ad Hoc
Networks; Draft Expired; no subsequent
RFC)
draft-rantonen-manet-idaddress-dad-adhocnet-00 (IP Address Autoconfiguration
with DAD minimization for Ad Hoc
Networks; Draft Expired; no subsequent
RFC)
Service Discovery
draft-koodli-manet-servicediscovery-00
(Service Discovery in On-Demand Ad Hoc
Networks; Draft Expired; no subsequent
RFC)
Network Time
RFC 2030 (Simple NTP version 4),
October 1996
Khoa To and James Sasitorn; N.A.M.E:
The Network Time Protocol for Ad Hoc
Mobile Environments; Computer Science
Department, Rice University, Houston,
Texas
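The clock-offset and round-trip-delay computation specified in RFC 2030, on which any AN time service would build, is compact enough to show directly:

```python
# SNTP/NTP offset and delay from the four packet timestamps (RFC 2030):
# T1 = client transmit, T2 = server receive,
# T3 = server transmit, T4 = client receive (all in seconds).

def sntp_offset_delay(t1, t2, t3, t4):
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    offset = ((t2 - t1) + (t3 - t4)) / 2   # correction to apply to client clock
    return offset, delay

# Example: client clock 0.5 s slow, 0.1 s one-way delay each direction.
offset, delay = sntp_offset_delay(10.0, 10.6, 10.6, 10.2)
# offset ≈ 0.5 s, delay ≈ 0.2 s
```

The symmetric-path assumption behind the offset formula is exactly what ad hoc variants such as N.A.M.E. must revisit when links are asymmetric or intermittently connected.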
10. AN Architecture Issues
Table 10-1. AN Architecture Issues
ID
Issue
Description/Discussion
Status
Connectivity
CN1
Network and Link
Stability
How much network stability is needed in terms of
throughput, latency, loss, and error rates to support
AF mission traffic and ensure needed performance
guarantees? What functions are required to provide
the needed stability? Are there any AF mission
traffic types that should not be supported with an IP-based network today, in the interim, or in the target
(consider efficiency as well as network
performance)? How should application and
networking protocols be adapted to accommodate
link instability?
CN2
Directional Links
What channel access mechanisms are most
appropriate for ad hoc networks that combine
directional and omni-directional elements (a.k.a.
directional ad hoc networks)? How should one
initiate and maintain a network topology in
directional ad hoc networks? What routing
algorithms are necessary for standard (unicast), high-assurance, and multipoint data delivery services in
directional ad hoc networks?
CN3
Network and Link
Handoffs
What functions are required to perform
seamless/near-seamless radio handoffs? What
functions are required to ensure uninterrupted
network connectivity as platforms move through
several different subnets?
CN4
Routing
The Airborne Network must deliver a set of routing
protocols that can be run on all platforms and that are
dynamic enough to accommodate rapid topological
changes while maintaining efficiency with respect to
bandwidth consumption from routing information
exchanges. The routing protocols should be capable
of intelligent routing decisions by taking into
account such things as link characteristics (e.g.
capacity and error rate of a particular radio subnet),
number of hops, traffic requirements (e.g. delay-sensitivity, precedence levels), etc.
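One way to make such intelligent routing decisions concrete is a composite link metric; the weights below are purely illustrative, not a proposed AN standard.

```python
# Illustrative composite link cost of the kind CN4 calls for (weights are
# hypothetical): cost rises as capacity falls and as error rate and delay rise.

def link_cost(capacity_kbps, error_rate, delay_ms,
              w_cap=1e5, w_err=2000.0, w_delay=1.0):
    return w_cap / capacity_kbps + w_err * error_rate + w_delay * delay_ms

# With these weights, a clean 1 Mbps link beats a lossy 2 Mbps link:
clean = link_cost(1000, 1e-4, 20)   # low capacity, but nearly loss-free
lossy = link_cost(2000, 0.05, 20)   # high capacity, high error rate
```

A shortest-path computation over such costs prefers stable, low-loss links even when a higher-capacity but error-prone link exists, which is the behavior the issue describes.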
CN5
QoS
The Airborne Network must have a consistent
approach to providing Quality of Service (QoS)
across the AN constellation. It is necessary to
specify the protocols that will be employed, and
describe the roles that various network elements are
expected to play. These roles include such things as
initiating resource reservations, classifying and
tagging IP packets, providing prioritized queuing
mechanisms, responding to requests for resource
reservations, etc.
DISA is defining a
GIG QoS
architecture;
however, it is
unknown if it will
be suitable for the
AN.
CN6
Real-Time Traffic
What standards should be used for voice (VoIP),
video (video over IP), and circuit emulation?
DISA is defining a
VoIP standard
implementation.
CN7
GIG Integration
The Airborne Network must interface to and be
integrated with other Service and Joint networks.
An AN service delivery point (SDP) interface
definition is needed to indicate how the AN would
interface with the GIG and other peer networks.
CN8
IP Convergence
The most difficult part of implementing converged
IP-based networks will be addressing the
performance management of end-to-end flows which
transit nodes in the infrastructure with extreme
changes in bandwidth, heavy congestion, long
latencies, and the inherent difficulties of mobile
tactical environments.
Enterprise
management in
industry today
focuses on single
enterprises, single
technology groups,
and single
communities of
interest. There is
an absence of tools
and technologies
for real-time
management
across networks,
particularly in the
area of real-time
configuration and
performance.
Information Assurance
IA1
DoD and Service IA
Policy
Existing policy doesn't adequately address
operations in a wireless or wireless ad hoc network
environment.
IA2
HAIPE Specification
and Interoperability
A core High Assurance IP Interoperability
Specification (HAIPIS) is being developed.
However, a lightweight variant, and potentially
other variants, of HAIPIS are also being developed
to accommodate different operational environments.
Lack of interoperability between HAIPE variants,
including between wireless and wired HAIPE
devices, will impact interoperability.
GIG E2E Working
Group and HAIPE
Working Group are
baselining
HAIPISv2 and
defining modules
that may
accompany
HAIPISv2. NSA
is working to
baseline HAIPIS
Lite requirements.
IA3
Encryption
Processing Speeds
HAIPE, TRANSEC, and other encryption device
data processing speeds may not keep pace with
required network throughput speeds.
NSA has seeded
industry with
funding to develop
high speed HAIPE
devices.
IA4
Crypto
Modernization
Initiative
CMI may not deliver the high assurance algorithms,
keys, and hardware products when needed to support
wireless and laser based packet and/or circuit
switched networks.
DoD Research
Area -- identify
algorithms and
encryption key
products needed to
support near, mid,
and long term
wireless IP
capability.
IA5
Protocol Security
IA6
DoD PKI Integration
Not all protocols have built-in security, and some
have only minimal built-in security. Many protocols [e.g.,
OSPF, SNMP, BGP] rely on ubiquitous shared
secrets and/or only provide limited integrity or
authentication checking. Few provide data
confidentiality.
DoD PKI supports authenticating and identifying
humans, devices, and information. However,
effective PKI use requires continuous connectivity to
the PKI. The current design is terrestrially based and has
little support for the tactical environment and no
support for the ad hoc network. Issues that must be
resolved include certificate management and
validation over bandwidth constrained networks and
employment of PK services when disconnected from
the GIG infrastructure. Care must be taken to ensure
that use of PK services for the AN doesn’t become a
single point of failure.
Services
collaborated and
developed DoD
Tactical Function
Requirements
Document (FRD)
that identifies delta
between current
PKI ORD needs
and extending PKI
to the tactical
environment.
Other, related issues are the applicability of current
certificate design to non fixed IP or domain name
devices; usability of identity certificates to support
implementation of role based access control systems;
and adequacy of the DoD PKI trust model. DoD and
service timelines to develop tactical solutions are
unknown.
IA7
Key Management
IA8
Access Control
IA9
Authentication
Current EKMS is predominantly manual. Emerging
EKMS capability is moving towards increased
automation. Large numbers of wireless devices and
[potentially] different subnets will drive EKMS
solutions plus make manual key delivery
impractical. Need to define AN EKMS requirements
in terms of operational employment [ad hoc, non-ad
hoc and anchored, non-anchored networking] and
the needed timeline.
AN entities will need to support OTAR, OTNK,
OTAZ, and OTAT.
Devices will be managed and maintained by entities
that may be local or remote to the device itself.
Level of entity access to a device will be based on
the entity's role in relation to that device [can
manage, can operate, can monitor, etc.] The security
impacts of centralized versus distributed and unified
versus federated management systems need to be
determined.
Nodes [AN, management, humans, etc.] will need to
be authenticated by devices and/or networks
[dependent on the activity to be accomplished.] The
HAIPE is working
towards automated
over-the-air rekey.
OTNK [Over the
Network Keying]
is yet to be
developed;
however, a OTNK
pilot is underway.
functional layer at which the authentication will
occur, how it will be managed, and how the
authentication state will transfer through the AN
needs to be determined.
How IDS and IPS will operate in the AN
environment must be determined, including whether
these functions are distributed throughout all nodes
or centralized at specific nodes. How will data
aggregation and response be accomplished?
What response time is needed and reasonable? What
is the impact of operating in an ad hoc network?
Should these functions be network centric and
interoperate between devices or device centric?
How will device compromise be detected? Once
detected, how will AN nodes be informed of the
compromise and how will recovery be
accomplished? What is anticipated operational
impact during a recovery phase?
Various protocols may be more or less susceptible to
use as a path to carry covert data.
IA10
Intrusion Detection
System (IDS) and
Intrusion Prevention
Systems (IPS)
IA11
Compromise and
Compromise
Recovery
IA12
Covert Channel
IA13
Black or Unclassified
Terminals
Unmanned terminals [relay nodes, UAV, etc.] will
need operational TRANSEC keys and may be
queried for status data. Software, wrapped keys, and
other code may exist in the terminal, that if
compromised, may have significant security
implications. How will the device be maintained
"black" while operational?
IA14
MSLS and MLS.
AN will initially support multiple single levels of
security and eventually support multilevel
security (MLS). How will this support be implemented and
supported?
IA15
Allied/Coalition
Security
Environment
IA16
Interoperability with
Other Networks
IA17
Operations,
Management, and
Control Data
Mixed U.S., coalition, and allied operations will
require ability to operate across multiple security
domains, using mixed encryption and TRANSEC
algorithms and keys, interacting with U.S. and non-U.S. public key infrastructures.
AN security infrastructure and protocols must
interoperate with infrastructures and protocols
implemented to support other networks [e.g., GIG,
tactical ground, fixed terrestrial, etc.] Hierarchical
or layered IA functions must interoperate with their
logical peers on other networks.
System configuration, link and network management
data, device code updates, etc. must have source and
integrity validated before receiving entities
JTRS is
implementing an
unmanned relay
node.
TCM has
requirement for a
black terminal.
NSA has expressed
concern over how
terminal can
remain black
with operational
keys.
JTRS is
implementing
MSLS through use
of independent
channel per
security domain.
Integrity and Validity
operationally employ it.
IA18
Operation over
Wireless Links
Traffic flow security, as implemented by the High
Assurance Internet Protocol Encryptor, has a
performance impact. On a low-speed link, TFS
support may significantly impact overall user data
throughput.
Network Management
NM1
Efficiency
What efficiency improvements (e.g., compression,
aggregation) can be implemented to reduce NM
traffic? Transaction-intensive NM exchanges must
be minimized (in frequency and size) to prevent high
overhead.
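One of the efficiency techniques NM1 mentions, reporting only changed values instead of full status records, can be sketched in a few lines (field names hypothetical):

```python
# Sketch of one NM1 efficiency technique: send only fields that changed
# since the last transmitted snapshot (field names hypothetical).

def delta_report(prev, curr):
    """Return only the key/value pairs that differ from the last report."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

prev = {"link_up": True, "snr_db": 18, "queue_len": 4}
curr = {"link_up": True, "snr_db": 12, "queue_len": 4}
report = delta_report(prev, curr)   # only snr_db changed
```

Combined with aggregation at cluster heads and compression on the wire, delta reporting directly reduces both the frequency and size of NM exchanges.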
NM2
Management of
Multiple Network
Security Levels
The Airborne Network must be capable of the
management of multiple network security levels, i.e., management of multiple security levels of red
networks and a black network. An integrated,
efficient solution is needed.
NM3
Management of Ad-Hoc Networks
NM4
Integrated Network
Management
What needs to be done to develop a unified NM
capability for the AN when separate acquisitions
(FAB-T, JTRS, MP-CDL) are implementing
different NM and administrative domains? What
are the interface and integration requirements for
providing integrated, inter-domain NM? Should
there be a top-level NM or "Services Manager" (over
TC, JTRS, MP-CDL constituents) for AN policy
management, E2E provisioning, and AN Situational
Awareness?
NM5
Policy Management
How can QoS Provisioning and Policy Based
Network Management (policy distribution,
enforcement, provisioning) be performed in a
dynamic network composed of multiple separately
managed links and subnets?
Link Management
LM1
Cross-Layer
Approach
Determine how link management functions (e.g.,
topology management, admission control, QoS/CoS)
may be implemented via cross-layer approach.
Determine relative benefits and tradeoffs of physical,
MAC, network and application layer techniques in
combination with one another.
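The aggregation idea in NM1 can be sketched numerically: batching many small status reports into one message amortizes the per-message protocol overhead. The header and report sizes below are notional values chosen for illustration, not measured NM protocol figures.

```python
import math

def bytes_on_wire(n_reports, report_bytes, header_bytes, batch_size):
    """Total bytes to deliver n_reports when they are sent batch_size at a
    time; header_bytes is a notional per-message protocol/crypto header."""
    messages = math.ceil(n_reports / batch_size)
    return messages * header_bytes + n_reports * report_bytes

unbatched = bytes_on_wire(100, 40, 60, 1)   # one report per message
batched = bytes_on_wire(100, 40, 60, 25)    # 25 reports per message
print(unbatched, batched)  # 10000 4240
```

With these assumed sizes, batching cuts NM traffic by more than half purely by eliminating repeated headers; compression of the report payloads themselves would reduce it further.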
Status (Operation over Wireless Links): The HAIPIS Working Group is seeking technological solutions that can be applied to the next generation of HAIPE devices, enabling them to provide TFS while decreasing their wireless operational impact.
LM2 - Link Discovery Protocols: Efficient link discovery/establishment protocols are needed.

LM3 - Dynamic Link Configuration: Which radio/terminal (layer 1 and 2) parameters are dynamically changeable? To what extent can these parameters be changed?

LM4 - Measurement and Reporting of Link Conditions: Which conditions are measurable, and what are the measurement techniques, measurement frequency, established acceptable ranges, and reporting thresholds? What parameters should be reported for each link type?

LM5 - Topology Management: Determine the tradeoffs between factors such as computation overhead, levels of hierarchical organization, optimal node power, desired connectivity, maximum data rate for links, vulnerability to jamming, network survivability, and graceful performance degradation under dynamic changes.

LM6 - Dynamic Resource Provisioning: What are the protocols, approaches, and technologies for dynamically provisioning link resources for the AN?

LM7 - Dynamic Network Configuration: What are the protocols, approaches, and technologies for dynamically configuring network parameters throughout the AN?

LM8 - Link Management of Ad Hoc Networks: What are the protocols, approaches, and technologies to leverage for LM of ad hoc networks?
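One facet of the survivability tradeoff in LM5 can be sketched as a connectivity check: does a candidate topology remain connected after the loss of any single node? The four-platform ring below is a hypothetical example, not an actual AN topology.

```python
from collections import deque

def connected(adj, nodes):
    """Breadth-first reachability over the given subset of nodes."""
    nodes = set(nodes)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def survives_single_node_loss(adj):
    """True if the topology stays connected after removing any one node."""
    nodes = set(adj)
    return all(connected(adj, nodes - {n}) for n in nodes)

# Hypothetical four-node topology: a ring survives any single platform loss.
ring = {"AWACS": ["JSTARS", "Rivet Joint"],
        "JSTARS": ["AWACS", "Tanker"],
        "Tanker": ["JSTARS", "Rivet Joint"],
        "Rivet Joint": ["Tanker", "AWACS"]}
print(survives_single_node_loss(ring))  # True
```

A star topology centered on one relay platform would fail the same check, which is the survivability side of the tradeoff against the lower link count and power a star requires.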
Network Services

NS1 - IP Address Management: The Airborne Network must have a well-defined address management plan in place in order to handle the distribution of IP addresses to network elements on airborne platforms. The address management plan must be consistent across the entire AN constellation, and roles and responsibilities for each platform must be well defined. The plan must clearly identify when and how network elements will be assigned IP addresses, including the renumbering of network elements during operation.

NS2 - Domain Name System (DNS): The Airborne Network must have a consistent and robust approach to providing DNS services for nodes in the constellation. To do this, an operations model must be established that details such things as the location of DNS servers, redundancy plans, and the responsibility of specific platforms with respect to whether or not they provide DNS services for others in the AN constellation.

NS3 - Mobility Management: In the Airborne Network, mobility management will be needed to ensure that platforms can be located quickly and that packet delivery operates properly and without interruption as platforms move throughout the network. An integrated multi-layer, multi-transport system solution is needed.
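The kind of deterministic, constellation-wide assignment NS1 calls for can be sketched with standard IP subnetting: one constellation block is carved into per-platform subnets in a fixed order. The address block and platform names below are hypothetical, not an actual AN addressing plan.

```python
import ipaddress

def platform_subnets(constellation_block, new_prefix, platforms):
    """Deterministically carve one subnet per platform out of a single
    constellation-wide address block."""
    block = ipaddress.ip_network(constellation_block)
    subnets = block.subnets(new_prefix=new_prefix)
    return {name: next(subnets) for name in platforms}

# Hypothetical block and platforms; each platform receives its own /24.
plan = platform_subnets("10.64.0.0/16", 24, ["AWACS-1", "JSTARS-1", "Tanker-3"])
print(plan["JSTARS-1"])  # 10.64.1.0/24
```

Because the mapping is a pure function of the plan's inputs, every node that holds the plan can independently derive which addresses belong to which platform, which is what makes consistent renumbering during operation tractable.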
NS4 - Network Time Protocol: Currently, Network Time Service is not a ubiquitous service across the GIG. Security directives prevent IP-based time synchronization across firewall boundaries (AFI 33-115, 16).

General

G1 - Airborne Network Service Providers: What is the minimum set of services required on an airborne NSP node? What is required to ensure that an airborne NSP node failure or loss will not impact mission operations?

G2 - Proxies: The use of proxies can improve the performance of user applications running across the AN by countering wireless network impairments such as limited bandwidth, long delays, high loss rates, and disruptions in network connections. However, proxy performance can be affected by the security and data link mechanisms. An integrated multi-layer, multi-transport system solution is needed.
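The proxy benefit described in G2 can be illustrated with a simplified retransmission model: without a proxy, a loss on any segment forces the sender to retransmit over the whole path, while a split-connection proxy at the wireless boundary confines recovery to the lossy segment. The loss rates below are illustrative, and the no-proxy model charges every attempt the full path for simplicity.

```python
def segment_txs_end_to_end(loss_probs):
    """Expected per-segment transmissions to deliver one message when a
    loss on any segment forces retransmission over the whole path
    (no proxy). Simplified: each attempt is charged the full path."""
    p_success = 1.0
    for p in loss_probs:
        p_success *= 1.0 - p
    return len(loss_probs) / p_success

def segment_txs_split(loss_probs):
    """Expected per-segment transmissions when a proxy at each segment
    boundary terminates the connection and retransmits only locally."""
    return sum(1.0 / (1.0 - p) for p in loss_probs)

# Illustrative path: a clean terrestrial segment plus a lossy RF segment.
path = [0.01, 0.20]
print(round(segment_txs_end_to_end(path), 3))  # 2.525
print(round(segment_txs_split(path), 3))       # 2.26
```

The gap between the two models widens as the wireless loss rate grows, which is why the value of a performance enhancing proxy depends so heavily on the security and data link mechanisms it sits between.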
11. List of Acronyms

ACA	Adaptive Configuration Agent
ACM	Adaptive Configuration Manager
AETACS	Airborne Elements of the Tactical Air Control System
AMPS	Ad-Hoc Mobility Protocol Suite
AMTI	Air Moving Target Indicator
AN	Airborne Network
AOC	Air and Space Operations Center
AS	Autonomous System
ASWR	Attack Sensing, Warning, and Response
ATO	Air Tasking Order
AWACS	Airborne Warning and Control System
BGP	Border Gateway Protocol
BLOS	Beyond Line-of-Sight
CAP	Combat Air Patrol
CAS	Close Air Support
CITS	Combat Information Transport System
CLIP	Common Link Interface Processor
CND	Computer Network Defense
COIs	Communities of Interest
CONOPS	Concept of Operations
CONUS	Continental United States
CoS	Class of Service
C2ERA	Command and Control Enterprise Reference Architecture
DAA	Designated Approving Authority
DAP	Domain Announcement Protocol
DCDP	Dynamic Configuration Distribution Protocol
DHS	Department of Homeland Security
DISN	Defense Information System Network
DMA	Dynamic Mobility Agents
DNS	Domain Name System
DoD	Department of Defense
DRCP	Dynamic Registration and Configuration Protocol
ECU	End Cryptographic Unit
EGP	Exterior Gateway Protocol
EKMS	Electronic Key Management System
EMCON	Emission Control
FCAPS	Fault, Configuration, Accounting, Performance, Security
FEC	Forward Error Correction
GIG	Global Information Grid
GIG-BE	GIG-Bandwidth Expansion
GIG ES	GIG Enterprise Services
GMTI	Ground Moving Target Indicator
GNOSC	Global Network Operations and Security Center
GPS	Global Positioning System
GTACS	Ground Tactical Air Control System
HAIPE	High Assurance Internet Protocol Encryptor
HAIPIS	High Assurance IP Interoperability Specification
HI	Horizontal Integration
I&A	Identity and Authentication
IA	Information Assurance
IDS	Intrusion Detection System
IFGR	Information for Global Reach
IGP	Interior Gateway Protocol
IPS	Intrusion Prevention System
ISR	Intelligence, Surveillance, and Reconnaissance
IT	Information Technology
JWICS	Joint Worldwide Intelligence Communications System
JV	Joint Vision
KLIF	Key Loading and Initialization Facility
KMA	Key Management Architecture
KMI	Key Management Infrastructure
LDAP	Lightweight Directory Access Protocol
LM	Link Management
LMA	LM Agent
LMAMs	LM Autonomous Managers
LME	LM Executive
LMP	Link Management Protocol
LOS	Line of Sight
LPDP	Local Policy Decision Point
LPI/LPD	Low Probability of Interception/Detection
MANs	Metropolitan Area Networks
MIB	Management Information Base
MILS	Multiple Independent Levels of Security
MLPP	Multi-Level Precedence and Preemption
MLS	Multi-Level Security
MOSAIC	Multifunctional On-the-move Secure Adaptive Integrated Communications
MPLS	Multi-Protocol Label Switching
MPLS-TE	Multi-Protocol Label Switching - Traffic Engineering
NASA	National Aeronautics and Space Administration
NCW	Network Centric Warfare
NESI	Net-Centric Enterprise Solutions for Interoperability
NIPRNET	Unclassified-but-Sensitive Internet Protocol Router Network
NIST	National Institute of Standards and Technology
NM	Network Management
NM	Network Manager
NOC	Network Operations Center
NSA	National Security Agency
NSS	National Security Systems
NTP	Network Time Protocol
NTS	Network Time Server
OANs	Operational Area Networks
OTAR	Over-The-Air Rekeying
OTAT	Over-The-Air Transfer
OTAZ	Over-The-Air Zeroize
OTNK	Over-The-Network Keying
PBNM	Policy Based Network Management
PBSM	Policy Based Security Management
PDP	Policy Decision Point
PEP	Policy Enforcement Point
PEP	Performance Enhancing Proxy
PIB	Policy Information Base
PIM	Programmable INFOSEC Module
PKI	Public Key Infrastructure
QoS	Quality of Service
RF	Radio Frequency
ROE	Rules of Engagement
RSVP-TE	Resource Reservation Protocol - Traffic Engineering
SAR	Synthetic Aperture Radar
SATCOM	Satellite Communications
SDN	Service Delivery Node
SDP	Service Delivery Point
SEAD	Suppression of Enemy Air Defenses
SIP	Session Initiation Protocol
SIPRNET	Secret Internet Protocol Router Network
SLA	Service Level Agreement
SLCS	Senior Level Communications System
SMI	Security Management Infrastructure
SoM	Strength of Mechanism
SoMI	Structure of Management Information
SPPI	Structure of Policy Provisioning Information
SPS	Standard Positioning System
SSL	Secure Socket Layer
TAP	Traffic Analysis Protection
TCP	Transmission Control Protocol
TCT	Time Critical Targeting
TDC	Theater Deployable Communications
TDL	Tactical Data Link
TFS	Traffic Flow Security
TLS	Transport Layer Security
TPED	Task, Process, Exploit, and Disseminate
TPPU	Task, Post, Process, and Use
TRANSEC	Transmission Security
TSAT	Transformational Communications Satellite System
TS/SCI	Top Secret/Sensitive Compartmented Information
TWSTT	Two-Way Satellite Time Transfer
UAV	Unmanned Aerial Vehicle
UDOP	User Defined Operational Picture
USNO	U.S. Naval Observatory
UTC	Coordinated Universal Time
VoIP	Voice over IP
VPN	Virtual Private Networks
WAN	Wide Area Network
YAP	Configuration Database Update Protocol
12. References

1. B. Braden et al., Developing a Next-Generation Internet Architecture, July 2000
2. Airborne Network Architecture Progress Summary, Version 1.0, USAF Airborne Network Special Interest Group, May 2004
3. Airborne Network Architecture Roadmap (Draft)
4. Airborne Network Technical Requirements Document, Version 0.4, USAF Airborne Network Special Interest Group, 11 September 2003
5. Alberts, D. S., J. J. Garstka, and F. P. Stein, Network Centric Warfare: Developing and Leveraging Information Superiority, DoD C4ISR Cooperative Research Program, Washington, DC, 1999
6. Airborne Network (AN) Prioritization Plan, Headquarters USAF (AF/XICA), 31 March 2004
7. Operating Concept for C2 ConstellationNet Network Centric Infostructure, AFC2ISRC/SCY, 1 April 2004
8. Understanding Enterprise Network Design Principles, Cisco Systems, Inc., 2001
9. U.S. Air Force Infostructure Enterprise Architecture, Version 1.3, Air Force Communications Agency, 27 February 2004
10. U.S. Air Force Network Management Platform Profile, Version 0.9, Air Force Communications Agency, 5 December 2002
11. U.S. Air Force Network and Infrastructure Defense Platform Profile, Version 0.9, Air Force Communications Agency, 27 November 2002
12. U.S. Air Force Domain Name Service Profile, Version 1.0, Electronic Systems Center (ESC/NI2), 23 February 2004
14. U.S. Air Force Network Time Service Profile, Version 1.0, Electronic Systems Center (ESC/NI2), 28 January 2004
16. Framework for QoS-based Routing in the Internet, RFC 2386, IETF Network Working Group, August 1998
17. "Quality of Service - Glossary of Terms", http://www.qosforum.com/whitepapers/qos-glossary-v4.pdf, May 1999
18. Proposed Draft Strawman DSCP Mapping for GIG Enterprise IP Networks, Version 11p, DoD GIG QoS/CoS Working Group, 22 March 2004
19. Guidelines for Improved Applications Performance over Networks, Defense Information Systems Agency, 11 April 2003
20. Blaine, D., DoD Global Information Grid (GIG) Quality of Service / Class of Service (QoS/CoS) Net-Centric Effective Assured Delivery of Information, Defense Information Systems Agency, 30 September 2003
21. DoD Transport Design Guidance Document, Version 0.97, The MITRE Corporation, April 2004
22. A Framework for Policy-based Admission Control, RFC 2753, IETF Network Working Group, January 2000
23. Terminology for Policy-Based Management, RFC 3198, IETF Network Working Group, November 2001
24. Kam, A., and Trafton, R., Assessment of Mobile Routing Protocols for the Airborne Network, USAF Global Grid Technical Note Number 2003-04, Electronic Systems Center (ESC/NI1), October 2003
25. Trafton, R., Doane, J., and Lam, L., Performance Enhancing Proxy Evaluation: Peribit SR-50, Mentat SkyX Gateway XR10, USAF Information Transport Technical Note Number 2004-01, Electronic Systems Center (ESC/NI1), February 2004
26. Kam, A., Morin, S., and Trafton, R., Networking Over RF Links, USAF Global Grid Technical Note Number 2003-01, Electronic Systems Center (ESC/NI1), May 2003
27. Incremental Update - AMPS Protocol Design Document, CECOM Multifunctional On-the-move Secure Adaptive Integrated Communications (MOSAIC) Project, Telcordia Technologies, Inc., 1 April 2003
28. Differentiated Services with Distributed Quality of Service Managers, CECOM Multifunctional On-the-move Secure Adaptive Integrated Communications (MOSAIC) Project, Rockwell Collins, August 2002
29. Incremental Update - DS/BB Protocol Design Document, CECOM Multifunctional On-the-move Secure Adaptive Integrated Communications (MOSAIC) Project, Telcordia Technologies, Inc., 1 April 2003
30. Global Information Grid Core Enterprise Services Strategy, Version 1.1a, Office of the Assistant Secretary of Defense for Networks and Information Integration/DoD Chief Information Officer, July 2003
31. Global Information Grid Information Assurance Reference Capabilities Document, Volume I, Sections 2 and 3
32. Xukai Zou, Byrav Ramamurthy, and Spyros Magliveras, Routing Techniques in Wireless Ad Hoc Networks - Classification and Comparison, International Conference on Computing, Communications and Control Technologies: CCCT '04, August 2004
34. Shahrokh Valaee and Baochun Li, Distributed Call Admission Control for Ad Hoc Networks, Proceedings of the IEEE 56th Vehicular Technology Conference, Volume 2, September 2002
36. Shigang Chen and Klara Nahrstedt, Distributed Quality-of-Service Routing in Ad Hoc Networks, August 1999
37. Jamal N. Al-Karaki and Ahmed E. Kamal, Quality of Service Routing in Mobile Ad Hoc Networks: Current and Future Trends, Mobile Computing Handbook, M. Ilyas and I. Mahgoub (eds.), CRC Publishers, 2004
38. Prasant Mohapatra, Jian Li, and Chao Gui, QoS in Mobile Ad Hoc Networks, June 2003
39. Imrich Chlamtac, Marco Conti, and Jennifer J.-N. Liu, Mobile Ad Hoc Networking: Imperatives and Challenges, 2003
40. Khoa To and James Sasitorn, N.A.M.E.: The Network Time Protocol for Ad Hoc Mobile Environments, Computer Science Department, Rice University, Houston, Texas
41. Chen, Wenli and Jain, Nitin, ANMP: Ad Hoc Network Management Protocol, IEEE Journal on Selected Areas in Communications, Vol. 17, No. 8, August 1999
42. Shen, Chien-Chun, Srisathapornphat, Chavalit, and Jaikaeo, Chaiporn, An Adaptive Management Architecture for Ad Hoc Networks, IEEE Communications Magazine, February 2003
43. Phanse, Kaustubh and DaSilva, Luiz A., Protocol Support for Policy-Based Management of Mobile Ad Hoc Networks, submitted for (conference) publication, 2003, http://www.ee.vt.edu/~kphanse/research.html
44. Toner, S. and O'Mahony, D., Self-Organising Node Address Management in Ad-hoc Networks, in Springer Verlag Lecture Notes in Computer Science 2775, Springer Verlag, Berlin, 2003, pp. 476-483, also at http://ntrg.cs.tcd.ie/publications.php
45. Liotta, A., G. Knight, and G. Pavlou, On the Performance and Scalability of Decentralised Monitoring Using Mobile Agents, in Active Technologies for Network and Service Management, Proceedings of the 10th IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM '99), Zurich, Switzerland, R. Stadler, B. Stiller, eds., pp. 3-18, Springer, October 1999, http://www.ee.surrey.ac.uk/Personal/A.Liotta/Publications/
46. Intel Labs, Simplifying Support of New Network Services Using COPS-PR, http://www.intel.com/labs/manage/cops/
47. Phanse, Kaustubh and DaSilva, Luiz A., Extending PBNM to Ad Hoc Networks, http://www.ee.vt.edu/~kphanse/research.html

Note: entries 13, 15, 33, and 35 in the original list were continuation lines of the preceding entries and have been merged; the remaining numbers are unchanged.