Cisco Packet Core 5G Lab Handbook
LTRSPM2010
Speakers:
Ananya Simlai
Vineet Gupta
Ravindra Nandawat
Contents

Learning Objectives
Introduction to 5G Architecture
1. Cisco’s 5G Vision
2. Cisco Ultra 5G Packet Core Solution
3. Cisco Ultra 5G NSA Packet Core Solution
4. Cisco Ultra 5G SA Packet Core Solution
5. Cisco Cloud Native Ultra Services Platform
6. Cisco 5G SA Network Function Portfolio
7. AMF - Access and Mobility Management Function
8. SMF - Session Management Function
9. User Plane Function
10. Cisco DPI and Inline Services
11. Cisco Ultra Services Proxy
12. Cisco Ultra Traffic Optimization
13. Policy Control Function
14. Network Repository Function
15. Network Exposure Function
16. Network Slice Selection Function
17. Binding Support Function
18. Security Edge Protection Proxy
19. Non-3GPP Interworking Function
20. Migration Path from 4G to 5G
21. Summary
22. Conclusion
23. 5G Core Reference Specifications
Kubernetes Overview
1. Kubernetes Objects
2. Kubernetes Control Plane
3. Kubernetes Master
4. Kubernetes Nodes
5. Understanding Pods
6. Services
7. Namespaces
8. ReplicaSet
9. Deployments
10. StatefulSets
11. DaemonSet
12. Jobs - Run to Completion
5G Call Demo
1. Task 1: Connect to dCloud Setup
   Option 1 – Connect via Cisco AnyConnect VPN Client
   Option 2 – Log in to the dCloud site and launch Remote Desktop
2. Start UDM and register UDM+AUSF
3. Task 2: Run a 5G Call
   Option 1 – Start a 5G Data Call (preferred, to see detailed logs)
      UPF Login
      Client Login
      Server Login - to ping to the UE IP
      Client Login - to ping from the UE IP
   Option 2 – Start a 5G Data Call
      UPF Login
      Client Login
      Server Login
      Server Login - to ping to the UE IP
      Client Login - to ping from the UE IP
Kubernetes and Platform Checks
4. Network Diagram
5. Pod Access
6. Open SSH terminal sessions (one for the master node); get root access on all the nodes
7. Verify the Docker version
8. Observe all the pods of the K8s master architecture entities are up
9. Observe all the services of the K8s master architecture entities are up
10. Observe all nodes in the K8s system
11. Verify Helm version
12. Check repo list
Preparation and Installation of 5G Network Functions
13. Open SSH terminal sessions for the master node
14. Clean up existing namespaces
15. Wait for namespaces to be deleted
16. Create a new namespace for NGINX
17. Install the NGINX Ingress controller
18. Check NGINX is installed
19. Check NGINX services are running
20. Prerequisites check
21. Create namespace and secrets
22. Check all the namespaces created
23. Create Kubernetes secret
24. Create Helm repos
25. Check NF repos
Deploy CNEE
26. Use the helm command to install the CNEE environment
27. Check if Ops Center is up and running
28. Once the Ops Center comes up, log in using the ClusterIP and apply the configuration
29. Bring up CNEE
30. Use Ops Center commands to check the status of the NF
31. Check if all pods and services are up and running
32. Check the CNEE GUI
Deploy NRF
33. Use global.yaml and nrf.yaml to install the NRF
34. Once the Ops Center comes up, log in using the ClusterIP and apply the configuration
35. NRF configuration
36. Once the Ops Center comes up, log in using the ClusterIP; copy and paste the config line by line
37. Type commit before you exit Ops Center to save the configuration
38. Use system mode running to deploy (make sure the configuration is accurate)
39. Ops Center commands to check the status of the NF
40. Check if all pods and services are up and running
Deploy NSSF
41. Use global.yaml and nssf.yaml to install the NSSF
42. Use the helm command to install the NSSF
43. Check Ops Center is up and running
44. NSSF configuration
45. Once the Ops Center comes up, log in using the ClusterIP; copy and paste the config line by line
46. Type commit before you exit Ops Center to save the configuration
47. Use system mode running to deploy
48. Ops Center commands to check the status of the NF
49. Check if all pods and services are up and running (from the K8s master node)
50. NSSF table configuration
51. Open the web browser
52. Log in to the NSSF GUI using credentials admin/admin
Deploy AMF
53. Check the contents of amf.yaml
54. Use the helm command to install the AMF
55. Check Ops Center is up and running
56. Once the Ops Center comes up, log in using the ClusterIP or LoadBalancer IP and apply the configuration
57. AMF configuration
58. Type commit before you exit Ops Center to save the configuration
59. Use system mode running to deploy
60. Ops Center commands to check the status of the NF
61. Check if all pods and services are up and running (from master)
62. Check AMF registration to the NRF
Deploy PCF
63. The yaml files are uploaded to the folder: cat /root/5g/pcf.yaml
64. Use the helm command to install the PCF
65. Verify Ops Center is up and running
66. Once the Ops Center comes up, log in using the ClusterIP
67. PCF configuration
68. Copy and paste the PCF config line by line
69. Type commit before you exit Ops Center to save the configuration
70. Use system mode running to deploy
71. Ops Center commands to check the status of the NF
72. Check if all pods and services are up and running
73. Check you can access the PCF Central and PB GUIs
74. Check PCF registration to the NRF
Deploy SMF
75. Use the above-mentioned global.yaml and smf.yaml to install the SMF
76. Use the helm command to install the SMF
77. Note: the config files are uploaded on the K8s master; open in another window: cat /root/5g/smf.yaml
78. Once the Ops Center comes up, log in using the ClusterIP
79. SMF configuration
80. Copy and paste the config line by line
81. Type commit before you exit Ops Center to save the configuration
82. Use system mode running to deploy
83. Ops Center commands to check the status of the NF
84. Verify Ops Center is up and running
85. Create a label to designate the worker where the SMF protocol pod is installed
86. Check if all pods and services are up and running
87. Check SMF registration to the NRF
Register AUSF and UDM with NRF
88. Run the script from the master K8s node
89. Check AUSF and UDM are registered in the NRF database
Deploy UDM for SMF
90. Note: UDM is already installed on worker 1; skip the UDM installation procedure
91. Check if the assigned IP/port is listening
Deploy UPF
92. UPF is already deployed; the procedure to install the UPF remains the same as CUPS-UP
93. Verify key UPF-specific configuration
94. Check logs from the master
95. Check Sx session establishment between the SMF and UPF
96. Make a test call simulated via the Lattice tool
Make Your 5G Call
97. Run a 5G call
98. Collect logs on AMF rest, service
99. Collect logs on SMF rest, service
100. Collect logs on PCF rest, engine
101. Collect logs on NRF rest, service
102. Check subscriber count on the SMF / clear the subscriber from the DB
Appendix
103. CNEE key configuration values
104. NRF key configuration values
105. NSSF key configuration values
106. AMF key configuration values
107. PCF key configuration values
108. SMF key configuration values
109. UPF configuration
110. UDM for SMF installation procedure
Learning Objectives
Upon completion of this lab, you will be able to:
• Understand the 5G architecture
• Understand cloud-native architecture
• Get hands-on experience with a cloud-native platform based on Kubernetes
• Get hands-on experience installing and configuring 5G Standalone (SA) network functions
• Initiate a 5G test session
Introduction to 5G Architecture
5G is the next generation of 3rd Generation Partnership Project (3GPP) technology, after 4G/LTE, defined for wireless mobile data communication. Starting with Release 15, 3GPP defines standards for 5G. As part of Release 15, a new 5G radio and a packet core evolution are being defined to cater to the needs of 5G networks. References 1 and 2 provide more details on the 3GPP standards for the 5G architecture.
The following are some of the key goals of 5G:
• Very high throughput (1–20 Gbps)
• Ultra-low latency (<1 ms)
• 1000x bandwidth per unit area
• Massive connectivity
• High availability
• Dense coverage
• Low energy consumption
• Up to a 10-year battery life for machine-type communications
Figure 1 shows some of the projections set by the 5G PPP (a joint initiative between the European Commission and the European Information and Communication Technology [ICT] industry):
Figure 1. 5G drivers
1. Cisco’s 5G Vision
Cisco views 5G as an enabler for a new set of possibilities and capabilities. Every new generation of 3GPP wireless mobile data communication technology has set the stage for a new set of use cases and capabilities. 3G was the first truly wireless mobile data communication technology that catered to data communication, whereas 4G was the first truly all-IP wireless data communication technology; both 3G and 4G have been instrumental and foundational to data communication over mobile devices. This led to the proliferation of applications such as video, e-commerce, social networks, and games on mobile devices. The focus in 3G and 4G was primarily on mobile broadband for consumers and enterprises.
Figure 2 shows some trends and new opportunities that operators should
address.
A new set of use cases is being introduced, each with its own challenges and complexities. The new 5G network therefore has to help operators manage current needs as well as support new use cases, some of which have yet to be imagined. 5G is not just about high-speed data connections for enhanced mobile broadband; it will also enable several new capabilities that cater to new enterprise use cases, opening new revenue avenues and opportunities for operators. To this end, Cisco envisions 5G equipping operators with more capabilities to serve enterprise customers’ current and future use cases.
Cisco understands that the 5G core needs to be the enabling platform for service
providers to take advantage of the major changes taking place in the data center,
networking, and the economics of mobility in a standardized multivendor environment.
Very significant changes for the mobile core that facilitate new opportunities such as
personalized networks through slicing and more granular functions are being defined.
5G provides a framework to take advantage of the massive throughput and low latency
that new radio provides.
Figure 3 shows some of the use cases that 5G will cater to.
Figure 3. 5G use cases
Figure 4 illustrates the broad categories of use cases that 5G will cater to.
Figure 4. 5G usage scenarios (source: ITU)
These three usage scenarios underpin all the 5G use cases:
• Enhanced Mobile Broadband (eMBB): 5G eMBB brings the promise of high-speed and dense broadband to the subscriber. With gigabit speeds, 5G provides an alternative to traditional fixed-line services. Fixed wireless access based on mmWave radio technologies enables the density to support high-bandwidth services such as video over a 5G wireless connection. To support eMBB use cases, the mobile core must support the required performance density and scalability.
• Ultra-Reliable Low-Latency Communications (URLLC): URLLC focuses on mission-critical services such as augmented and virtual reality, tele-surgery and healthcare, intelligent transportation, and industrial automation (robotics and factory automation). Traditionally carried over a wired connection, these extremely latency-sensitive use cases now have a wireless equivalent in 5G. URLLC often requires the mobile core User Plane Function (UPF) to be located geographically closer to the end user in a Control and User Plane Separation (CUPS) architecture to achieve the latency requirements.
• Massive Internet of Things (IoT): Massive IoT in 5G addresses the need to support billions of connections with a range of different services. IoT services range from device sensors requiring relatively low bandwidth to connected cars that require a service similar to that of a mobile handset. Network slicing provides a way for service providers to enable Network as a Service (NaaS) for enterprises, giving them the flexibility to manage their own devices and services on the 5G network. Characteristics of these use cases include:
   ◦ Efficient, low-cost communication with deep coverage
   ◦ Lightweight device initialization and configuration
   ◦ Efficient support of infrequent, small-data, mobile-originated, data-only communication scenarios
2. Cisco Ultra 5G Packet Core Solution
Although 5G promises greater flexibility and new opportunities for the operator, it also
offers a greater potential for added complexities and cost. Cisco believes that the
capabilities shown in Figure 5 are required to reduce complexity and cost and enable
you to stay ahead of your competition.
Figure 5. Capabilities required to reduce complexity and cost
Cisco’s 5G strategy is about delivering what the operator needs to succeed in this new environment, including the agility, flexibility, and security to address their customers’ requirements for a better-connected experience. This strategy includes maintaining investment protection for existing infrastructure, including the repurposing of Cisco ASR 5500 Series Evolved Packet Cores (EPCs) while operators evolve to a more virtualized architecture. We are also highly focused on protecting the operators’ investment in their new 5G solutions. We understand that 5G is far more than just a new radio; 5G is about delivering connected experiences from the multicloud to the client across a multivendor architecture.
Figure 6 shows the Cisco 5G solution architecture tenets.
Figure 6. Cisco 5G solution architecture
Cisco has been a leading packet core vendor for decades and has been influencing 3GPP standards, given the expertise it has built over several years. Cisco has witnessed earlier transitions too, first from 2G to 3G and then from 3G to 4G, and is currently the best-placed vendor to define and lead the solution for the important and crucial transition from 4G to 5G.
Cisco’s 5G packet core solution product strategy is to provide a synergistic and coherent set of 5G Standalone (SA) packet core Network Functions (NFs), compliant with the 5G SA 3GPP standards, using the Cisco Cloud Native Ultra Services Platform. This platform helps Cisco enable best-in-class "cloud" operational benefits across the full Cisco 5G SA NF portfolio. These cloud operational benefits include dynamic network-function scale-in/out, faster network-function upgrades, in-service network-function upgrades, and support for NETCONF/YANG and streaming telemetry.
The Cisco Ultra Services Platform is one of the industry-leading virtualized platforms for mobile core services. The Cisco Ultra Services Platform-based Virtualized Packet Core (VPC) solution is deployed in more than 40 networks globally, making Cisco one of the leading virtual packet core vendors.
Cisco has been working on several packet core concepts even before they were standardized in 3GPP. For instance, Cisco was one of the vendors to demonstrate CUPS at the Mobile World Congress (MWC) in 2016 and 2017, before 3GPP standardized that technology. Continuing this trend, Cisco is aggressively working to introduce a pre-standards version of the 5G solution in order to evaluate the needs of the next-generation 5G network, and plans to introduce this version to 3GPP to influence the standards.
Figure 7 lists some of the reasons for operators to choose a Cisco 5G solution.
Figure 7. Reasons to Choose Cisco for 5G solution
3GPP has defined two different solutions for 5G networks: 5G Non-Standalone (NSA) and 5G Standalone (SA).
5G Non-Standalone (NSA) solution: In 5G NSA, operators use their existing EPC to anchor the 5G new radio using the 3GPP Release 12 Dual Connectivity feature. This feature helps operators with aggressive 5G launch plans to launch 5G in a shorter time and at lower cost. The 5G NSA solution might suffice for some initial use cases, but it has limitations with regard to achieving a much cleaner, truly 5G-native solution, so all operators are eventually expected to migrate to the 5G Standalone solution.
5G Standalone (SA) solution: In 5G SA, a new 5G packet core is introduced. It is much cleaner, with several new capabilities built inherently into it. Network slicing, CUPS, virtualization, automation, multi-Gbps throughput support, ultra-low latency, and other such aspects are natively built into the 5G SA packet core architecture.
Cisco has in its portfolio packet core solutions for both 5G non-standalone and 5G standalone networks. Our 5G packet core solution allows operators to make the transition from 4G to 5G in a graceful, step-by-step manner.
3. Cisco Ultra 5G NSA Packet Core Solution
Cisco is one of the leading packet core vendors and has several customers worldwide who have deployed the Cisco Packet Core solution for EPC. Cisco has enhanced its EPC packet core solution to support 5G non-standalone packet core capabilities. Cisco will support 5G non-standalone features in its existing EPC packet core network functions, so operators with the Cisco EPC Packet Core solution can simply do a software upgrade and buy 5G non-standalone licenses to turn on the 5G non-standalone capabilities (refer to Figure 8).
Figure 8. Simplify 5G packet core evolution
The Cisco 4G CUPS solution provides the flexibility and benefits of control- and user-plane separation and supports 5G peak data rates on a per-session basis. Refer to reference 12 for more details about the Cisco CUPS solution.
The Cisco 5G NSA Packet Core solution enables operators with the Cisco EPC Packet Core to launch 5G service in a shorter time, reusing existing investment and infrastructure for 5G for some time. It thus provides an option to launch 5G with very little disruption in the network.
The Cisco 5G NSA solution supports all three Option 3 variants (3, 3a, and 3x) in its 5G NSA packet core solution. It is a 3GPP-compliant solution, so it can interoperate with any Radio Access Network (RAN) and network functions that are 3GPP-standards-compliant. The Cisco Mobility Management Entity (MME), Cisco Serving GPRS Support Node (SGSN), Cisco Serving Gateway (SGW), Cisco Packet Data Network Gateway (PGW), and Policy and Charging Rules Function (PCRF) will support the 5G NSA features.
The Cisco 5G NSA Packet Core solution supports feature parity between 4G and 5G sessions, so all the value-added features available for 4G sessions are available for 5G sessions too. Cisco EPC Packet Core network functions are available on the Cisco Ultra Services Platform and are already deployed on several customers’ networks worldwide. EPC network functions will eventually be available on the new Cisco Cloud Native Ultra Services Platform, alongside all the 5G functions. Cisco is already involved in multiple 5G trials with multiple operators globally and expects to go live soon.
4. Cisco Ultra 5G SA Packet Core Solution
The 5G standalone packet core is equipped with several new capabilities inherently built in, so that operators have the flexibility and capability to face new challenges from the new set of requirements for varying new use cases. The network functions in the new 5G core are broken down into smaller entities, such as the Session Management Function (SMF) and the UPF, which can be used on a per-service basis. Gone are the days of huge network boxes; welcome to services that automatically register and configure themselves over the service-based architecture, built with new functions such as the Network Repository Function (NRF) that borrow their capabilities from cloud-native technologies. For more details about the cloud-native evolution, please refer to reference 11.
Separation of the user plane has freed it from the shackles of control-plane state and permits deployments at the edge with very little integration overhead. Multi-access edge computing that spans both wireless and wireline technologies will significantly redefine how users connect to applications, corporate networks, and each other.
Figure 9 shows the new 5G standalone architecture as defined by 3GPP in reference 1.
Figure 9. New 5G standalone architecture
5. Cisco Cloud Native Ultra Services Platform
The Cisco Ultra Services Platform has evolved into a cloud-native platform. With this evolved cloud-native platform, the Cisco 5G Standalone (SA) solution provides a synergistic and coherent set of 5G SA network functions compliant with the 5G SA 3GPP standards. These functions help Cisco enable best-in-class "cloud" operational benefits across the full Cisco 5G network-function portfolio. These cloud operational benefits include dynamic network-function scale-in/-out, faster network-function upgrades, in-service network-function upgrades, and support for NETCONF/YANG and streaming telemetry. Cisco’s goal is to provide a modular network-function implementation that enables carrier-specific adaptations to implement differentiated services. Cisco’s 5G packet core portfolio strategy is that all our 5G network functions will use these common base software platform characteristics. This enables customers to enjoy the related cloud operations benefits across the range of relevant Cisco network functions, consolidating and streamlining network-function management and operational processes, and reducing carrier Operating Expenses (OpEx).
Cisco’s Cloud Native Ultra Services Platform delivers common configuration tools; common telemetry and logging; a unified control plane; common HTTP/2, Stream Control Transmission Protocol (SCTP), and Service-Based Architecture (SBA)/Representational State Transfer (REST)/JavaScript Object Notation (JSON) interfaces; common database technologies; high-availability and Geographical Redundancy (GR) services; and common orchestration across all our 5G standalone network functions. The platform uses open-source software services and tasks (for example, interprocess communication [IPC], data synchronization, service bus, and configuration) and life-cycle management (for example, Kubernetes, load balancer, service mesh, and continuous integration/continuous delivery support), enabling improved time to market and improved service velocity (refer to Figure 10).
Figure 10. Cisco’s Cloud Native Ultra Services Platform Features
6. Cisco 5G SA Network Function Portfolio
In addition to delivering 3GPP Release 15-compliant 5G network functions, Cisco’s 5G
solution strategy is to deliver an operationally efficient, unified, and high-performance
5G service-based architecture across these 5G network functions, with value-added
Cisco capabilities beyond 3GPP.
Finally, Cisco’s 5G solution strategy is also to use our significant 4G software features
across our 4G EPC products to provide maximum 4G and 5G feature compatibility
where possible in our 5G network functions, and to enable feature-rich 4G and 5G
network interworking capabilities in these network functions.
Cisco’s 5G SA portfolio is composed of all the key mobile core network functions: Access and Mobility Management Function (AMF), Session Management Function (SMF), User Plane Function (UPF), Policy Control Function (PCF), Network Repository Function (NRF), Network Slice Selection Function (NSSF), Network Exposure Function (NEF), Binding Support Function (BSF), Non-3GPP Interworking Function (N3IWF), and Security Edge Protection Proxy (SEPP) (refer to Figure 11).
Figure 11. Cisco 5G SA packet core architecture
Cisco believes some of the key drivers for the new 5G SA architecture are as follows:
• Truly converged multi-access core
• Service-based architecture
• Improved and enhanced network slicing capabilities
• Cloud-native-friendly architecture
• Better integration between the application and network layers
• New value-added capabilities
• Simplified Quality-of-Service (QoS) framework
• Interoperability with 4G EPC
• Different deployment options to suit different operator needs
7. AMF - Access and Mobility Management Function
The AMF supports registration management, access control, and mobility management for all 3GPP accesses, as well as non-3GPP accesses such as Wireless LAN (WLAN). The AMF also receives mobility-related policies from the PCF (for example, mobility restrictions) and forwards them to the user equipment. The AMF fully supports 4G interoperability, with an interface to the 4G MME node.
8. SMF - Session Management Function
The Cisco SMF builds upon the industry-leading Cisco System Architecture Evolution Gateway (SAEGW) solution in the 4G space and its evolution, under CUPS, into a decomposed SAEGW control plane (SAEGW-C): the central control-plane entity that communicates over the Sx interface with the distributed and hybrid user-plane functions. Cisco started on the journey toward CUPS and laid the groundwork for the SMF evolution ahead of the 3GPP standards. In addition to supporting the standards-based SAEGW-C and its evolution to the SMF, the rich history and experience of delivering integrated inline services, and of enabling them in various operator networks for various use cases, is the key differentiation of the Cisco SMF product strategy. In the 5G architecture, the SMF is responsible for session management, with individual functions supported on a per-session basis. The SMF allocates IP addresses to user equipment, and selects and controls the UPF for data transfer. The SMF also acts as the external point for all communication related to the various services offered and enabled in the user plane, and for how the policy and charging treatment for these services is applied and controlled.
9. User Plane Function
The Cisco User Plane Function (UPF) is designed as a separate Virtual Network Function (VNF) that provides a high-performance forwarding engine for user traffic. The UPF uses Cisco Vector Packet Processing (VPP) technology for ultra-fast packet forwarding and retains compatibility with all the user-plane functions that the monolithic StarOS offers currently, such as shallow/deep packet inspection (SPI/DPI), traffic optimization, and inline services (Network Address Translation [NAT], firewall, Domain Name System [DNS] snooping, etc.).
Cisco UPF product evolution for 5G continues to build upon our core principles of delivering industry-leading performance while integrating intelligence in the data path to deliver differentiated services in truly distributed network architectures. The UPF product strategy encompasses a broad range of user planes that can run on existing physical assets (investment protection), on-premises telco cloud, and virtualized environments, as well as truly cloud-native user planes that can support a mix of public and private cloud offerings. Supporting distributed architectures, with user planes moving closer to the edge, and supporting Mobile Edge Computing (MEC) use cases, with data-path services delivered closer to the edge at really low latency, is an integral part of the 5G evolution. The Cisco UPF product strategy is based on incorporating intelligent inline services as well as a traffic-steering framework to support service chains that can include external third-party applications. The key product capabilities of the Cisco UPF are integrated DPI-based services, Cisco Ultra Services Proxy, Cisco Ultra Traffic Optimization (UTO), and others (refer to Figure 12).
10. Cisco DPI and Inline Services
Cisco DPI and inline services include:
• Application Detection and Control (ADC): Cisco ADC allows operators to dynamically detect applications run by subscribers, derive business intelligence about the traffic, and apply packaged promotions such as zero-rating of music, video, or social media applications. ADC employs heuristic, statistical, and deterministic analysis-based detection of applications and content. Cisco exploits co-development opportunities, where possible, with content providers and operators to better identify applications (such as Google, Amazon, and Facebook) and realize use cases more accurately.
• Integrated subscriber firewall and NAT: StarOS supports firewall and NAT inline services as part of the DPI function, thereby eliminating the need for an operator to deploy an external box that provides such functions. Inline services facilitate easier management and help reduce overall latency. The NAT implementation is carrier-grade, endpoint-independent, and subscriber-aware, and it supports NAT44 and NAT64 functions. The firewall is an inline service that inspects subscriber traffic and performs IP session-based access control of individual subscriber sessions to protect subscribers from malicious security attacks.
• Integrated content filtering: This inline service extracts and categorizes Uniform Resource Locators (URLs) contained in HTTP requests from mobile subscribers. The URLs are pre-categorized into classes by an external database. HTTP requests from user equipment are checked for URL categorization, and policies are applied based on the subscriber profile. Various actions are taken based on URL category and subscriber profile, such as permit, block, or redirect. The content-filtering solution is optimally applied at the SMF/UPF before unnecessary traffic propagates further into the network.
11. Cisco Ultra Services Proxy
Cisco is also integrating an inline services proxy to support optimization of end-user flows, based on an integrated TCP/HTTP proxy that can adapt to the changing characteristics of a mobile connection and adjust the overall flow based on the service being offered. This proxy integrates an industry-leading partner solution as a single offering and greatly simplifies the conventional way of offering such services, which incurred heavy overhead in how traffic was steered and moved around in order to apply them.
12. Cisco Ultra Traffic Optimization
The mobile video tsunami is now a reality, and operators must make extensive RAN Capital Expenditure (CapEx) investments to keep up with mobile traffic growth. Operators are supporting the volume demand by increasing the number of cell sites in the RAN; otherwise, the subscriber Quality of Experience (QoE) will suffer. Cisco Ultra Traffic Optimization (UTO) is a software solution on the 4G PGW or 5G UPF that allows the existing RAN to be used much more efficiently, thereby delaying or reducing RAN investments. Cisco UTO enables up to 40 percent more traffic transmission over a given band of spectrum and through existing cell sites, and it improves QoE for all subscribers and data flows.
13. Policy Control Function
The Cisco PCF is a direct evolution of the Cisco PCRF on the existing Cisco Policy Suite cloud-native, Docker container-based platform. The new PCF supports all the existing features of the traditional 3G and 4G Cisco Policy Suite PCRF, in addition to the new 5G QoS policy and charging control functions and the related 5G signaling interfaces defined for the 5G PCF by the 3GPP standards (for example, N7, N15, N5, and Rx). Through various configuration options, operators have the flexibility to enable or disable various features, protocols, or interfaces. The PCF evolution is planned in an incremental manner to keep older Cisco Policy Suite PCRF functions intact, and to enable a hybrid 4G and 5G PCRF and PCF solution where necessary for customer operations.
14. Network Repository Function
The Cisco NRF is being delivered in line with 3GPP requirements in support of intelligent NFV core network node selection. Cisco’s NRF product further delivers value-added intelligence in the areas of stateful node selection, serving-node discovery, topology hiding, and signaling proxying, as a basis for advanced 5G network automation and superior overall 5G core flexibility and simplicity of operations. Cisco’s 5G NRF product uses and extends key 4G product assets in the areas of 4G node selection and 4G Diameter signaling control.
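As a rough illustration of the kind of service-based interface the NRF exposes (the URIs follow 3GPP TS 29.510; the NRF address below is a placeholder, and the lab later provides its own registration-check script):

curl -s 'http://<nrf-host:port>/nnrf-nfm/v1/nf-instances?nf-type=AMF'
# lists the NF profiles of registered AMF instances (NFManagement service)
curl -s 'http://<nrf-host:port>/nnrf-disc/v1/nf-instances?target-nf-type=SMF&requester-nf-type=AMF'
# discovery query: which SMF instances can this AMF use? (NFDiscovery service)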
15. Network Exposure Function
Cisco’s NEF uses the Cisco 4G Application Programming Interface (API) gateway, called the Mobile Orchestration Gateway, which is commercially deployed in cloud-native networks today. The Cisco 4G API gateway currently enables subscriber-session QoS control services and sponsored-data charging services between the core network and over-the-top applications, and as such lays the essential foundation for our NEF function in the 5G standalone core.
16. Network Slice Selection Function
Network slicing enables the network to be segmented and managed for a specific use case or business scenario. A slice comprises the 5G network functions needed to compose a complete Public Land Mobile Network (PLMN). The operability of a slice can be exposed to a slice owner, such as an enterprise delivering an Internet of Things (IoT) service. Examples of slices include fixed mobile wireless and connected car, as well as traditional consumer services. The network operator generally defines the granularity of a slice to best meet the business requirements.
Network slicing requires the ability to orchestrate and manage the 5G network functions as a common unit. This orchestration requires coordination across individual network functions to ensure services are properly configured and dimensioned to support the required use case.
The NSSF provides a network slice instance selection function for user equipment. It can determine whether to allow the network slice requested by the user equipment, and it can select an appropriate AMF or candidate AMF set for the user equipment. Based on operator configuration, the NSSF can determine the NRF(s) to be used to select network functions and services within the selected network slice instance(s).
Cisco worked on a pre-standards NSSF function even for the 4G EPC and has a solution for slicing 4G EPC too. This pre-standards NSSF solution has now evolved for the 5G standalone packet core.
17. Binding Support Function
The 3GPP Binding Support Function (BSF) is a distinct 5G SA network function used for binding an application-function request to one of many PCF instances, as described in TS 23.503. The 3GPP BSF addresses a "PCF binding" problem (that is, getting an application function and NEFs to talk to the same PCF as the SMF Protocol Data Unit [PDU] session) in 5G SA, independent of Diameter. It also fulfills a Diameter Routing Agent-like (DRA) binding function for 5G SA scenarios where the traditional IP Multimedia Subsystem (IMS) interacts with the 5G SA core through the Rx protocol. For the IMS use case, the BSF is defined to terminate (and convert) or proxy the Rx directly to the relevant PCF using binding-based routing at the BSF.
Also, per 3GPP, the BSF can be co-located with other network functions such as the SMF, PCF, and NRF, but it is most suitably co-located with the NEF.
As a 5G SA network-function type, the BSF per se does not apply to Option 3x, for which the EPC core applies, including traditional virtual DRA (vDRA) nodes that perform Rx and Gx binding-based routing in 4G. Being an extension of the Cisco vDRA in 4G, the Cisco BSF can, however, operate in an Option 3x core; in this case the Cisco BSF would, of course, be configured as a DRA node.
18. Security Edge Protection Proxy
The Security Edge Protection Proxy (SEPP) is a non-transparent proxy that supports message filtering and policing on inter-PLMN control-plane interfaces, as well as topology hiding for the PLMN network. The SEPP performs the firewall role for transactions between domains. Given that the SEPP is the point where integrity protection and encryption are applied, the SEPP has visibility into each aspect of a transaction.
The SEPP function applies permit/deny Access Control Lists (ACLs) based on configured rules. This approach is effective for known threat exposures. Furthermore, the SEPP function generates flow-related information that can be provided to an off-board threat visibility analysis function such as Cisco Stealthwatch® security. This capability supports the creation of a baseline behavior profile, which allows the operator to validate the policies driving the ACL creation against observed behavior and correct them as necessary. It also allows the operator to detect anomalous behavior in real time and instigate manual remediation. For example, rogue nodes attempting to use SEPP services would be highlighted.
These flow records can also be used to help resolve disputes between roaming partners, whether connected through IP Packet Exchange (IPX)-like functions or directly.
Additionally, the SEPP firewall functions allow the presentation of optional security honeypot-like functions. Suspect flows, based on rogue-node identification, would be processed by the function in such a way that potential attackers perceive no detectable change in behavior.
19. Non-3GPP Interworking Function
The Non-3GPP Interworking Function (N3IWF) is used to integrate non-3GPP access types into the 5G SA core to make it a truly converged core. It is used mainly for non-3GPP access types such as Wi-Fi and fixed-line integration into the 5G SA core. The N3IWF terminates the Internet Key Exchange Version 2 (IKEv2) and IP Security (IPsec) protocols with the user equipment over NWu (the reference point between the user equipment and the N3IWF) and relays over the N2 interface the information needed to authenticate the user equipment and authorize its access to the 5G core network. It also supports termination of the N2 and N3 interfaces to the 5G core network for the control and user planes, respectively.
20. Migration Path from 4G to 5G
Cisco believes migration from 4G to 5G has to be graceful and should happen in a step-by-step manner. 4G is going to co-exist with 5G for a long time to come, even after 5G is introduced. Given this reality, as well as the fact that operators need a network that can cater to a wide variety of devices, they need a network that supports these different types of devices at the same time.
The Cisco 5G solution is geared to help operators easily perform the step-by-step migration from 4G to 5G.
Figure 13 shows the step-by-step migration path that Cisco recommends for operators migrating from their current 4G EPC network to a 5G network.
Figure 13. Migration from 4G EPC to 5G network
Figure 14 shows how the interoperable network will eventually look. The network will support different types of older devices as well as truly native 5G SA devices at the same time. As the industry transitioned from 2G and 3G to a 4G network, this evolution is expected to follow a similar path from 4G to 5G networks.
Figure 14. Interoperable network
21. Summary
5G enables a new set of possibilities and capabilities. 5G is not just about high-speed data connections for enhanced mobile broadband; it will also enable several new capabilities that cater to new enterprise use cases. 5G will not just be about serving consumer and enterprise subscribers with high-throughput connectivity; it will enable new revenue avenues and opportunities for operators through its ability to cater to the requirements of several new enterprise use cases. Thus, Cisco envisions 5G equipping operators with more capabilities to serve enterprise customers’ current as well as new use cases.
Cisco has been a leading packet core vendor for decades. Cisco has witnessed several 3GPP technology transitions before, first from 2G to 3G and then from 3G to 4G, and is currently the best-placed vendor to define and lead the solution for the important and crucial transition from 4G to 5G.
We are developing our 5G solution with operator needs in mind. Our strategy is to transition our customers to a cloud-centric world so they get the benefits of our cloud-native solution and are equipped to meet their needs. We believe 5G is not just about the new radio, but about the total end-to-end network, including the need for both the RAN and the packet core to evolve to cater to these operators’ needs.
The Cisco 5G packet core solution product strategy is to provide a synergistic and coherent set of 5G standalone network functions, compliant with the 5G standalone 3GPP standards, on the Cisco Cloud Native Ultra Services Platform. This platform is the cloud-native evolution of the Cisco Ultra Services Platform. It helps Cisco enable best-in-class "cloud" operational benefits across the full Cisco 5G SA NF portfolio, including dynamic network-function scale-in/-out, faster network-function upgrades, in-service network-function upgrades, and support for NETCONF/YANG and streaming telemetry.
Cisco will have in its portfolio a packet core solution for both 5G Non-Standalone (NSA) and 5G standalone networks. Cisco’s goal is to develop a 5G packet core solution that allows operators to make the transition easily from 4G to 5G.
22. Conclusion
The Cisco Ultra Services Platform is an important piece of the entire Cisco 5G value chain. Cisco is taking a multicloud-to-client approach, unifying multivendor solutions into a single, secure, standards-based architecture, with the proper network security so customers can start delivering 5G services today in a cloud-scale mobile Internet for business, consumer, and IoT, bringing in "new 5G money" with a compelling value chain. 5G is where the breadth of Cisco matters, because we do service enablement, the services themselves, the 5G core, the IP transport, the cloud, and more. We can truly optimize and secure across the entire service layer.
The Cisco Ultra Services Platform is a fully virtualized architecture supporting control- and user-plane separation and a distributed architecture. The platform includes software components for the packet core, policy suite, and automation. The new cloud-native evolution of the platform expands its potential and flexibility to deliver your 5G and digital transformation success (refer to Figure 15).
Figure 15. Cisco 5G: Redefining your network
23. 5G Core Reference Specifications
Figure 16. 5G Core Reference Specifications
Kubernetes Overview
Figure 17. Kubernetes Overview
To work with Kubernetes, we use Kubernetes API objects to describe our cluster’s desired state: what applications or other workloads we want to run, what container images we use, the number of replicas, what network and disk resources we want to make available, and more. We set our desired state by creating objects using the Kubernetes API, typically via the command-line interface, kubectl. We can also use the Kubernetes API directly to interact with the cluster and set or modify our desired state.
Once we’ve set our desired state, the Kubernetes Control Plane makes the cluster’s current state match the desired state via the Pod Lifecycle Event Generator (PLEG). To do so, Kubernetes performs a variety of tasks automatically, such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are kube-apiserver, kube-controller-manager, and kube-scheduler.
Each individual non-master node in the cluster runs two processes:
• kubelet, which communicates with the Kubernetes Master.
• kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
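As a minimal sketch of this declarative model (the Deployment name and image here are illustrative, not part of the lab), we can declare a desired state and let the control plane realize it:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web               # hypothetical name
spec:
  replicas: 3                  # desired number of Pod replicas
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # desired container image
EOF
kubectl get pods -l app=demo-web   # the control plane converges to 3 running Pods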
1. Kubernetes Objects
Kubernetes contains a number of abstractions that represent the state of our system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what the cluster is doing. These abstractions are represented by objects in the Kubernetes API. The basic Kubernetes objects include:
• Pod
• Service
• Volume
• Namespace
In addition, Kubernetes contains a number of higher-level abstractions called Controllers. Controllers build upon the basic objects and provide additional functionality and convenience features. They include:
• ReplicaSet
• Deployment
• StatefulSet
• DaemonSet
• Job
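All of these object and controller types can be listed directly from a running cluster; a quick, read-only check on the lab master would be:

kubectl api-resources | grep -E 'pods|services|namespaces|replicasets|deployments|statefulsets|daemonsets|jobs'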
2. Kubernetes Control Plane
The various parts of the Kubernetes Control Plane, such as the Kubernetes
Master and kubelet processes, govern how Kubernetes communicates with your
cluster. The Control Plane maintains a record of all of the Kubernetes Objects in
the system, and runs continuous control loops to manage those objects’ state.
At any given time, the Control Plane’s control loops will respond to changes in
the cluster and work to make the actual state of all the objects in the system
match the desired state that you provided.
For example, when you use the Kubernetes API to create a Deployment, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation and carries out your instructions by starting the required applications and scheduling them to cluster nodes, thus making the cluster’s actual state match the desired state.
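Continuing the hypothetical demo-web Deployment sketched in the overview, this reconciliation loop can be observed directly:

kubectl scale deployment demo-web --replicas=5   # declare a new desired state
kubectl get deployment demo-web                  # READY converges toward 5/5
kubectl get replicaset,pods -l app=demo-web      # the controller adds Pods to match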
3. Kubernetes Master
The Kubernetes master is responsible for maintaining the desired state for your cluster. When you interact with Kubernetes, such as by using the kubectl command-line interface, you’re communicating with your cluster’s Kubernetes master.
The “master” refers to a collection of processes managing the cluster state. Typically all these processes run on a single node in the cluster, and this node is also referred to as the master. The master can also be replicated for availability and redundancy.
4. Kubernetes Nodes
The nodes in a cluster are the machines (VMs, physical servers, etc.) that run your applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely interact with nodes directly.
5. Understanding Pods
A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your cluster.
A Pod encapsulates an application’s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.
Pods in a Kubernetes cluster can be used in two main ways:
• Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.
• Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service: one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
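The sidecar pattern just described can be sketched as a single Pod manifest (all names and images here are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar         # hypothetical name
spec:
  volumes:
  - name: shared-content         # volume shared by both containers
    emptyDir: {}
  containers:
  - name: server                 # serves files from the shared volume
    image: nginx:1.25
    volumeMounts:
    - name: shared-content
      mountPath: /usr/share/nginx/html
  - name: refresher              # sidecar that refreshes the served file
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /content/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-content
      mountPath: /content
EOF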
6. Services
A Kubernetes Service is an abstraction which defines a logical set of Pods and
a policy by which to access them - sometimes called a micro-service. The
set of Pods targeted by a Service is (usually) determined by a Label Selector
31 | P a g e
As an example, consider an image-processing backend which is running with
3 replicas. Those replicas are fungible - frontends do not care which
backend they use. While the actual Pods that compose the backend set may
change, the frontend clients should not need to be aware of that or keep
track of the list of backends themselves. The Service abstraction enables this
decoupling.
• For Kubernetes-native applications, Kubernetes offers a simple Endpoints
API that is updated whenever the set of Pods in a Service changes. For
non-native applications, Kubernetes offers a virtual-IP-based bridge to
Services which redirects to the backend Pods.
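A minimal Service sketch for the image-processing backend described above (the name and ports are illustrative): the Label Selector targets whichever Pods currently carry the matching label, so frontends can keep using the stable Service name even as backend Pods come and go.

apiVersion: v1
kind: Service
metadata:
  name: image-backend    # illustrative name
spec:
  selector:
    app: image-backend   # Label Selector: matches the backend Pods' label
  ports:
  - protocol: TCP
    port: 80             # port clients connect to
    targetPort: 9376     # port the backend containers listen on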
7. Namespaces
Kubernetes supports multiple virtual clusters backed by the same physical
cluster. These virtual clusters are called namespaces.
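A namespace is itself a small API object; a minimal sketch (the name team-a is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # illustrative name

Later in this lab, each 5G network function (cnee, nrf, nssf, amf, smf, pcf) is deployed into its own namespace in exactly this way.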
8. ReplicaSet
A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any
given time. As such, it is often used to guarantee the availability of a specified
number of identical Pods.
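A minimal ReplicaSet sketch (names and image are illustrative). In practice you rarely create a ReplicaSet directly; a Deployment (next section) creates and manages ReplicaSets for you.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-rs             # illustrative name
spec:
  replicas: 3               # number of identical Pods to keep running
  selector:
    matchLabels:
      app: demo-rs
  template:
    metadata:
      labels:
        app: demo-rs
    spec:
      containers:
      - name: web
        image: nginx:1.15   # placeholder image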
9. Deployments
A Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment object, and the Deployment
controller changes the actual state to the desired state at a controlled rate. You
can define Deployments to create new ReplicaSets, or to remove existing
Deployments and adopt all their resources with new Deployments.
10. StatefulSets
StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
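A minimal StatefulSet sketch (names and image are illustrative). Unlike a Deployment, its Pods get stable ordinal identities (db-0, db-1, ...) that survive rescheduling; the MongoDB pods seen later in this lab (db-s1-0, db-admin-0, ...) follow this pattern.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                 # Pods will be named db-0, db-1
spec:
  serviceName: db          # headless Service giving each Pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: mongo
        image: mongo:3.6   # placeholder image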
11. DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes
are added to the cluster, Pods are added to them. As nodes are removed from
the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean
up the Pods it created.
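A minimal DaemonSet sketch (name and image are illustrative); the per-node node-exporter Pods seen later in this lab are deployed with this pattern.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                  # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: prom/node-exporter   # placeholder image; runs once on every node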
12. Jobs - Run to Completion
A Job creates one or more Pods and ensures that a specified number of
them successfully terminate. As pods successfully complete, the Job tracks
the successful completions. When a specified number of successful
completions is reached, the task (ie, Job) is complete. Deleting a Job will
clean up the Pods it created.
A simple case is to create one Job object in order to reliably run one Pod to
completion. The Job object will start a new Pod if the first Pod fails or is
deleted (for example due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
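A minimal Job sketch, adapted from the common pi-computation example (the name is illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1            # number of successful Pod runs that completes the Job
  backoffLimit: 4           # how many times to retry failed Pods
  template:
    spec:
      restartPolicy: Never  # a Job's Pods must use Never or OnFailure
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]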
5G Call Demo:
1. Task 1: Connect to dCloud Setup
Please refer to the supplied printouts for dCloud.cisco.com information.
Option 1 – Connect via Cisco AnyConnect VPN Client
On the session view, click "Details" in the top-left corner.
A new compact window will pop up with the session details; scroll down to get the AnyConnect
credentials. Use these credentials to connect with the Cisco AnyConnect VPN client.
Option 2 - Log in to dcloud site and launch Remote Desktop
Please refer to the supplied printouts for dCloud.cisco.com information.
2. Start UDM and register UDM+AUSF
1. Login to master node and register UDM and AUSF
198.18.134.30
root/C1sco12345
cd /root/5g
ls -lrt
./reg_query.sh
2. From Master node login to ‘worker1’
ssh worker1
3. Run the following steps to start the mock-tools for UDM
cd smf-mock-servers/src/smf-mock-servers/
export PATH=$PATH:/usr/local/go/bin
nohup ./run-mock-tools -ip=198.18.134.31 > run-mock-tools.log &
netstat -plan | grep 8099
root@Worker1:~# netstat -plan | grep 8099
tcp 0 0 198.18.134.31:8099 0.0.0.0:* LISTEN 25634/main
root@Worker1:~#
3. Task 2: Run a 5G Call
Option 1 – Start a 5G Data Call (Preferred to see detailed logs)
Log in to the Master Node in 3 simultaneous sessions –
UPF Login
1. From Master node login to ‘UPF’
ssh admin@10.1.10.40
Password: Cisco@123
2. Start Monitor Protocol for option 49 (PFCP) and 26 (GTPU) with verbosity 5
monitor protocol
49
26
B (for Begin)
Y (this will start the tracing on UPF)
++++
(Enter plus sign to increase the verbosity)
Client Login
1. From Master node login to 'client'. (Open 2 terminals)
ssh client
2. Start lattice using the following:
start_lattice.sh &
3. Tail Logs from second client terminal
tail -f /tmp/ltclogs/lattice.log
4. Start 5G Call
sudo start5g.sh
Server Login - to ping to UE IP
1. Ping the UE IP from Server
a. From Master node login to ‘server’
ssh server
b. Ping the UE IP
ping <UE-IP>
c. On UPF monitor protocol, observe the ICMP packets
Monitor protocol -> Option 26
Client Login - to ping from UE IP
1. Ping N6 IP from the UE IP
a. From Master node login to ‘client’
ssh client
b. Find the tunnel interface
ip a
c. Check the routing table and route to N6 interface 10.1.30.141
netstat -nr
d. Add a route to the N6 interface (as root)
su
Password: StarOS@dcloud
ip route add 10.1.30.141/32 via 0.0.0.0 dev tun-100
e. Ping the N6 IP from the UE IP
ping 10.1.30.141 -I <UE IP>
f. On UPF monitor protocol, observe the ICMP packets
Monitor protocol -> Option 26
Option 2 – Start a 5G Data Call
• Launch the Putty application after the remote desktop application is launched.
UPF Login
1. From Master node login to ‘UPF’
ssh admin@10.1.10.40
Password: Cisco@123
2. Start Monitor Protocol for option 49 (PFCP) and 26 (GTPU) with verbosity 5
monitor protocol
49
26
B (for Begin)
Y (this will start the tracing on UPF)
++++
(Enter plus sign to increase the verbosity)
Client Login
1. From putty session – Load and Open
o client-lattice
2. From putty session – Load and Open
o client-start5G
Server Login
1. From putty session – Load and Open
o server
2. Ping the UE IP, which is to be obtained from UPF via the "show subscriber all" CLI output.
ping <UE-IP>
Server Login - to ping to UE IP
1. Ping the UE IP from Server
a. From putty session – Load and Open
server
b. Ping the UE IP
ping <UE-IP>
c. On UPF monitor protocol, observe the ICMP packets
Monitor protocol -> Option 26
Client Login - to ping from UE IP
1. Ping N6 IP from the UE IP
a. From putty session – Load and Open
client
b. Find the tunnel interface
ip a
c. Check the routing table and route to N6 interface 10.1.30.141
netstat -nr
d. Add a route to the N6 interface (as root)
su
Password: StarOS@dcloud
ip route add 10.1.30.141/32 via 0.0.0.0 dev tun-100
e. Ping the N6 IP from the UE IP
ping 10.1.30.141 -I <UE IP>
f. On UPF monitor protocol, observe the ICMP packets
Monitor protocol -> Option 26
Kubernetes and Platform checks
4. Network Diagram
dCloud Setup
5. Pod Access
1. Setup a Cisco AnyConnect session to your dCloud session.
Login using username: <Lab Info Sheet>
password: <Lab Info Sheet>
IP Schema
Node          Management       Credentials
K8 Master     198.18.134.30    root/C1sco12345
K8 Worker 1   198.18.134.31    root/C1sco12345
K8 Worker 2   198.18.134.32    root/C1sco12345
K8 Worker 3   198.18.134.33    root/C1sco12345
K8 Worker 4   198.18.134.34    root/C1sco12345
K8 Worker 5   198.18.134.35    root/C1sco12345
K8 Worker 6   198.18.134.36    root/C1sco12345
K8 Worker 7   198.18.134.37    root/C1sco12345
K8 Worker 8   198.18.134.38    root/C1sco12345
K8 Worker 9   198.18.134.39    root/C1sco12345
Client        198.18.134.50    staradmin/starent or "ssh client" from Master Node
Server        198.18.134.51    staradmin/starent or "ssh server" from Master Node
6. Open SSH Terminal sessions (1 for Master node). Get root
access on all the Nodes.
ssh root@198.18.134.30
Password: C1sco12345
7. Verify the Docker version.
The Docker repos and Docker version 17.03.2 have been pre-installed on all the Nodes (Master and worker nodes).
root@198:~# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64
 Experimental: false
root@198:~#
8. Observe that all the pods of the K8 master's architecture entities are up
Run these commands on Master.
root@198:~# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
etcd-master                             1/1     Running   9          206d
kube-apiserver-master                   1/1     Running   18         206d
kube-controller-manager-master          1/1     Running   9          206d
kube-dns-86f4d74b45-krxxq               3/3     Running   18         139d
kube-proxy-2nnmj                        1/1     Running   10         206d
kube-proxy-6bszb                        1/1     Running   9          206d
kube-proxy-6tc2b                        1/1     Running   9          206d
kube-proxy-7fswp                        1/1     Running   9          206d
kube-proxy-bcdq8                        1/1     Running   9          206d
kube-proxy-hl6rc                        1/1     Running   9          206d
kube-proxy-kvrds                        1/1     Running   10         206d
kube-proxy-stp82                        1/1     Running   9          206d
kube-proxy-xb52j                        1/1     Running   9          206d
kube-scheduler-master                   1/1     Running   9          206d
kubernetes-dashboard-56d4f774fd-9z2cs   1/1     Running   9          206d
tiller-deploy-f5597467b-g4xkp           1/1     Running   9          203d
weave-net-9gnhg                         2/2     Running   25         206d
weave-net-gjmwn                         2/2     Running   25         206d
weave-net-grwft                         2/2     Running   23         206d
weave-net-mjntp                         2/2     Running   26         206d
weave-net-t4l8r                         2/2     Running   25         206d
weave-net-t4tzk                         2/2     Running   25         206d
weave-net-trqbv                         2/2     Running   27         206d
weave-net-w7l67                         2/2     Running   30         206d
weave-net-wl2pp                         2/2     Running   25         206d
9. Observe that all the services of the K8 master's architecture entities are up
root@198:~# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kube-dns               ClusterIP   10.96.0.10      <none>          53/UDP,53/TCP    206d
kubernetes-dashboard   NodePort    10.98.42.61     198.18.134.30   9090:32526/TCP   206d
tiller-deploy          ClusterIP   10.111.29.170   <none>          44134/TCP        203d
root@198:~#
10. Observe all nodes in the K8s system
root@198:~# kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master    Ready    master   206d   v1.10.0
worker1   Ready    <none>   206d   v1.10.0
worker2   Ready    <none>   206d   v1.10.0
worker3   Ready    <none>   206d   v1.10.0
worker4   Ready    <none>   206d   v1.10.0
worker5   Ready    <none>   206d   v1.10.0
worker6   Ready    <none>   206d   v1.10.0
worker7   Ready    <none>   206d   v1.10.0
worker8   Ready    <none>   206d   v1.10.0
root@198:~#
11. Verify Helm version
root@198:~# helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
12. Check Repo List
root@198:~# helm repo list
NAME
URL
stable
https://kubernetes-charts.storage.googleapis.com
local
http://127.0.0.1:8879/charts
cnat-cnee
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/cnee.2019.01.01-5/
cnat-amf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/amf.2019.01.01-5/
cnat-nrf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/nrf.2019.01.01-5/
cnat-nssf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/nssf.2019.01.01-5/
cnat-smf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/mobile-cnat-smf/smf-products/2019-01-30_Disktype/
cnat-pcf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/pcf.2019.01.01-5/
cnat-hpm
https://nicomart:AKCp5btAu99hyTzjeitXisDiq8yknXm434KULNq3ittFyriGRehc
cReR9dDX41K7CwKE9pJ7f@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/mobile-cnat-incubator/incubator-products/helm-product-manager/
root@198:~#
Preparation and Installation of 5G
Network Functions
13. Open SSH Terminal sessions for Master node.
ssh root@198.18.134.30
Password: C1sco12345
14. Clean up existing namespaces
kubectl delete ns cnee
kubectl delete ns nrf
kubectl delete ns nssf
kubectl delete ns amf
kubectl delete ns smf
kubectl delete ns pcf
kubectl delete ns base
15. Wait for namespaces to be deleted
root@Master:~# kubectl get ns
NAME STATUS AGE
amf Terminating 3d
cnee Terminating 3d
default Active 206d
helm Active 105d
kube-public Active 206d
kube-system Active 206d
nrf Terminating 3d
nssf Terminating 3d
pcf Terminating 3d
smf Terminating 3d
root@Master:~# kubectl get ns
NAME STATUS AGE
default Active 206d
helm Active 105d
kube-public Active 206d
kube-system Active 206d
16. Create a new namespace for NGINX
kubectl create ns base
17. Install Ingress controller NGINX
Note: the highlighted IP address should match your setup's Master IP.
helm upgrade --install nginx-ingress stable/nginx-ingress --set rbac.create=true,controller.service.type=NodePort,controller.service.externalIPs={198.18.134.30} --namespace base
18. Check NGINX is installed
root@198:~# kubectl get svc --all-namespaces | grep nginx
base   nginx-ingress-controller        NodePort    10.99.182.187   198.18.134.30   80:31824/TCP,443:32246/TCP   203d
base   nginx-ingress-default-backend   ClusterIP   10.100.75.64    <none>          80/TCP                       203d
19. Check NGINX services are running
root@198:~# kubectl get svc --all-namespaces | grep nginx
base   nginx-ingress-controller        NodePort    10.99.182.187   198.18.134.30   80:31824/TCP,443:32246/TCP   203d
base   nginx-ingress-default-backend   ClusterIP   10.100.75.64    <none>          80/TCP                       203d
root@198:~#
In most cases the NGINX IP is the K8s Master IP.
20. Prerequisites check
Check all k8s nodes are ready
root@198:~# kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master    Ready    master   206d   v1.10.0
worker1   Ready    <none>   206d   v1.10.0
worker2   Ready    <none>   206d   v1.10.0
worker3   Ready    <none>   206d   v1.10.0
worker4   Ready    <none>   206d   v1.10.0
worker5   Ready    <none>   206d   v1.10.0
worker6   Ready    <none>   206d   v1.10.0
worker7   Ready    <none>   206d   v1.10.0
worker8   Ready    <none>   206d   v1.10.0
Get node IPs and hostnames
root@198:~# cat /etc/hosts
127.0.0.1       localhost
198.18.134.30   Master
198.18.134.31   Worker1
198.18.134.32   Worker2
198.18.134.33   Worker3
198.18.134.34   Worker4
198.18.134.35   Worker5
198.18.134.36   Worker6
198.18.134.37   Worker7
198.18.134.38   Worker8
198.18.134.39   Worker9
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Check helm is installed
root@198:~# kubectl get pods --all-namespaces | grep tiller
kube-system   tiller-deploy-f5597467b-g4xkp   1/1   Running   9   203d
root@198:~#
Check weave is installed
root@198:~# kubectl get pods --all-namespaces | grep weave
kube-system   weave-net-9gnhg   2/2   Running   25   206d
kube-system   weave-net-gjmwn   2/2   Running   25   206d
kube-system   weave-net-grwft   2/2   Running   23   206d
kube-system   weave-net-mjntp   2/2   Running   26   206d
kube-system   weave-net-t4l8r   2/2   Running   25   206d
kube-system   weave-net-t4tzk   2/2   Running   25   206d
kube-system   weave-net-trqbv   2/2   Running   27   206d
kube-system   weave-net-w7l67   2/2   Running   30   206d
kube-system   weave-net-wl2pp   2/2   Running   25   206d
root@198:~#
21. Create Namespace and Secrets
kubectl create ns cnee
kubectl create ns nrf
kubectl create ns nssf
kubectl create ns amf
kubectl create ns smf
kubectl create ns pcf
22. Check all the namespaces created
root@198:~# kubectl get ns
NAME          STATUS   AGE
amf           Active   3d
base          Active   203d
cnee          Active   3d
default       Active   206d
helm          Active   105d
kube-public   Active   206d
kube-system   Active   206d
nrf           Active   3d
nssf          Active   3d
pcf           Active   3d
smf           Active   3d
23. Create Kubernetes secret
kubectl create secret docker-registry regcred --docker-server=devhub-docker.cisco.com --docker-username=tmelabuser.gen --docker-password=AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY --docker-email=vinameht@cisco.com --namespace cnee
kubectl create secret docker-registry regcred --docker-server=devhub-docker.cisco.com --docker-username=tmelabuser.gen --docker-password=AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY --docker-email=vinameht@cisco.com --namespace nrf
kubectl create secret docker-registry regcred --docker-server=devhub-docker.cisco.com --docker-username=tmelabuser.gen --docker-password=AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY --docker-email=vinameht@cisco.com --namespace nssf
kubectl create secret docker-registry regcred --docker-server=devhub-docker.cisco.com --docker-username=tmelabuser.gen --docker-password=AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY --docker-email=vinameht@cisco.com --namespace amf
kubectl create secret docker-registry regcred --docker-server=devhub-docker.cisco.com --docker-username=tmelabuser.gen --docker-password=AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY --docker-email=vinameht@cisco.com --namespace pcf
kubectl create secret docker-registry regcred --docker-server=devhub-docker.cisco.com --docker-username=tmelabuser.gen --docker-password=AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY --docker-email=vinameht@cisco.com --namespace smf
24. Create Helm Repos
Check that no NF repos exist:
helm repo list
Add each repo with the helm commands below, using the username, password and path shown:
helm repo add cnat-cnee
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/cnee.2019.01.01-5/
helm repo add cnat-nrf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/nrf.2019.01.01-5/
helm repo add cnat-nssf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/nssf.2019.01.01-5/
helm repo add cnat-amf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/amf.2019.01.01-5/
helm repo add cnat-pcf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/pcf.2019.01.01-5/
helm repo add cnat-smf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/mobile-cnat-smf/smf-products/2019-01-30_Disktype/
25. Check NFs repos
root@198:~# helm repo list
NAME
URL
stable
https://kubernetes-charts.storage.googleapis.com
local
http://127.0.0.1:8879/charts
cnat-cnee
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/cnee.2019.01.01-5/
cnat-amf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/amf.2019.01.01-5/
cnat-nrf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/nrf.2019.01.01-5/
cnat-nssf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/nssf.2019.01.01-5/
cnat-smf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/mobile-cnat-smf/smf-products/2019-01-30_Disktype/
cnat-pcf
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9y
XgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnatcharts-release/builds/2019.01-5/pcf.2019.01.01-5/
cnat-hpm
https://nicomart:AKCp5btAu99hyTzjeitXisDiq8yknXm434KULNq3ittFyriGRehc
cReR9dDX41K7CwKE9pJ7f@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/mobile-cnat-incubator/incubator-products/helm-product-manager/
root@198:~#
Deploy CNEE
Check contents of global.yaml
root@198:~/5g# cat /root/5g/global.yaml
global:
registry: devhub-docker.cisco.com/mobile-cnat-docker-release
singleNode: false
useVolumeClaims: false
imagePullPolicy: IfNotPresent
ingressHostname: 198.18.134.30.nip.io
imagePullSecrets:
- name: "regcred"
root@198:~/5g#
Check contents of cnee.yaml
root@198:~/5g# cd /root/5g
root@198:~/5g# cat cnee.yaml
ops-center:
product:
autoDeploy: true
helm:
repository:
url:
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/cnee.2019.01.01-5/
root@198:~/5g#
26. Use the helm command to install the CNEE environment.
helm upgrade --install cnee-ops-center cnat-cnee/cnee-ops-center -f global.yaml -f cnee.yaml --namespace cnee --debug --devel
27. Check if Ops-Center is up and running:
Check ops-center pods
root@198:~/5g# kubectl get pods -n cnee -o wide
NAME
RESTARTS
AGE
IP
NODE
alertmanager-66d9bbf99b-549l8
0
3d
10.34.128.2
worker8
api-cnee-ops-center-86cf696494-rtwdk
0
3d
10.42.0.66
worker2
bulk-stats-df488795-zfpn6
0
3d
10.40.0.148
worker7
cnee-cnee-documentation-documentation-576649bdbf-lc424
0
3d
10.34.128.0
worker8
elastic-kibana-proxy-6d4986db7d-rt4s4
2
3d
10.40.0.147
worker7
elastic-node-6dkgg
0
3d
198.18.134.32
worker2
elastic-node-7xqf6
0
3d
198.18.134.35
worker5
elastic-node-8pds7
0
3d
198.18.134.38
worker8
elastic-node-b5j6x
0
3d
198.18.134.31
worker1
READY
STATUS
1/1
Running
1/1
Running
2/2
Running
1/1
Running
2/2
Running
3/3
Running
3/3
Running
3/3
Running
3/3
Running
elastic-node-bh6b4
0
3d
198.18.134.37
worker7
elastic-node-jgm8w
0
3d
198.18.134.33
worker3
elastic-node-nps9s
0
3d
198.18.134.34
worker4
elastic-node-vttt6
0
3d
198.18.134.36
worker6
grafana-dashboard-metrics-679579d7cc-42zfd
0
3d
10.47.0.103
worker4
grafana-f9ff87cfd-mwwjj
0
3d
10.35.0.173
worker3
kibana-564cd5b4cf-q9xk7
0
3d
10.43.128.103
worker6
kube-state-metrics-56dd8bc6c-4gkjn
0
3d
10.34.128.3
worker8
node-exporter-4pzwc
0
3d
198.18.134.33
worker3
node-exporter-6q4d2
0
3d
198.18.134.37
worker7
node-exporter-85m9p
0
3d
198.18.134.34
worker4
node-exporter-b7vvp
0
3d
198.18.134.35
worker5
node-exporter-d5n65
0
3d
198.18.134.32
worker2
node-exporter-nlnnd
0
3d
198.18.134.31
worker1
node-exporter-psdsw
0
3d
198.18.134.36
worker6
node-exporter-qm4bd
0
3d
198.18.134.38
worker8
ops-center-cnee-ops-center-589bd67678-6dkpf
0
3d
10.33.0.46
worker1
prometheus-hi-res-0
Running
1
3d
10.43.128.104
worker6
prometheus-hi-res-1
Running
1
3d
10.47.0.108
worker4
prometheus-rules-56b978db55-n9rv6
Running
0
3d
10.34.128.5
worker8
proxy-cnee-ops-center-cnee-console-7bb69c486c-g9gkq
0
3d
10.47.0.76
worker4
swift-cnee-ops-center-6d5c7c7f8c-7jgz8
0
3d
10.35.0.164
worker3
thanos-query-hi-res-67b6f485c6-jsrds
0
3d
10.40.0.149
worker7
thanos-query-hi-res-67b6f485c6-zn4jx
0
3d
10.34.128.6
worker8
ui-cnee-ops-center-cnee-console-cdbf4488f-qfw8h
0
3d
10.36.0.141
worker5
root@198:~/5g#
3/3
Running
3/3
Running
3/3
Running
3/3
Running
1/1
Running
4/4
Running
2/2
Running
1/1
Running
2/2
Running
2/2
Running
2/2
Running
2/2
Running
2/2
Running
2/2
Running
2/2
Running
2/2
Running
4/4
Running
3/3
3/3
1/1
1/1
Running
1/1
Running
1/1
Running
1/1
Running
1/1
Running
Check ops-center services
root@198:~/5g# kubectl get svc -n cnee -o wide
NAME
EXTERNAL-IP
PORT(S)
SELECTOR
alertmanager-service
<none>
9093/TCP
component=alertmanager
bulk-stats
<none>
2222:30643/TCP
component=bulk-stats
TYPE
CLUSTER-IP
AGE
ClusterIP
10.98.15.63
3d
NodePort
10.99.169.250
3d
cnee-cnee-documentation-documentation-service
ClusterIP
10.96.183.163
<none>
80/TCP
3d
component=cnee-cnee-documentation-documentation
console-ui-cnee-ops-center
NodePort
10.107.57.13
<none>
80:30460/TCP
3d
app=cnee-console,release=cnee-ops-center
elastic-kibana-proxy
ClusterIP
10.104.54.135
<none>
9200/TCP,9300/TCP
3d
component=elastic-kibana-proxy
grafana
ClusterIP
10.99.49.226
<none>
3000/TCP
3d
component=grafana,release=cnee-cnat-monitoring
grafana-dashboard-metrics
ClusterIP
10.108.42.151
<none>
9418/TCP
3d
component=grafana-dashboard,dashboard-category=metrics
helm-api-cnee-ops-center
NodePort
10.99.150.42
<none>
3000:30074/TCP
3d
component=helm-api,release=cnee-ops-center
kibana-cnee-logging-visualization
ClusterIP
10.111.122.73
<none>
80/TCP
3d
component=kibana
ldap-proxy-cnee-cnat-monitoring
ClusterIP
10.106.39.26
<none>
636/TCP,369/TCP
3d
component=ops-center,release=cnee-ops-center
ldap-proxy-cnee-logging-visualization
ClusterIP
10.110.223.215
<none>
636/TCP,369/TCP
3d
component=ops-center,release=cnee-ops-center
ops-center-cnee-ops-center
ClusterIP
10.97.173.210
<none>
8008/TCP,2024/TCP,2022/TCP,7681/TCP
3d
component=ops-center,release=cnee-ops-center
prometheus-hi-res
ClusterIP
10.106.209.210
<none>
9090/TCP
3d
app=thanos-query,thanos-resolution=hi-res
prometheus-rules
ClusterIP
None
<none>
9419/TCP
3d
component=prometheus-alert-rules
proxy-cnee-ops-center
ClusterIP
10.96.243.244
<none>
4001/TCP
3d
app=ui-proxy,release=cnee-ops-center
swift-cnee-ops-center
NodePort
10.103.186.178
<none>
9855:31847/TCP,50055:32652/TCP,56790:30640/TCP
3d
app=swift,release=cnee-ops-center
thanos-peers-hi-res
ClusterIP
None
<none>
10900/TCP
3d
thanos-peer=true
root@198:~/5g#
Check ops-center ing
root@198:~/5g# kubectl get ing -n cnee
NAME                                            HOSTS                                                     PORTS     AGE
alertmanager-ingress                            alertmanager.cnee-cnat-monitoring.198.18.134.30.nip.io    80, 443   3d
cnee-cnee-documentation-documentation-ingress   docs.cnee-cnee-documentation.198.18.134.30.nip.io         80, 443   3d
console-ui-ingress-cnee-ops-center              console-ui.cnee-ops-center.198.18.134.30.nip.io           80, 443   3d
grafana-ingress                                 grafana.cnee-cnat-monitoring.198.18.134.30.nip.io         80, 443   3d
helm-api-ingress-cnee-ops-center                helm-api.cnee-ops-center.198.18.134.30.nip.io             80, 443   3d
kibana                                          kibana.cnee-logging-visualization.198.18.134.30.nip.io    80, 443   3d
ops-center-ingress-cnee-ops-center              restconf.cnee-ops-center.198.18.134.30.nip.io,cli.cnee-ops-center.198.18.134.30.nip.io   80, 443   3d
root@198:~/5g#
28. Once the Ops Center comes up, login using the ClusterIP
and apply the configuration.
Credentials: admin/admin
root@198:~/5g# kubectl get svc -n cnee | grep ops-center-cnee
ops-center-cnee-ops-center   ClusterIP   10.97.173.210   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
root@198:~/5g# ssh -p 2024 admin@10.97.173.210
admin@10.97.173.210’s password:
Welcome to the CLI
admin connected from 10.32.0.1 using ssh on ops-center-cnee-ops-center-589bd67678-6dkpf
product cnee#
Copy config from /root/conf/cnee-conf.yaml to be pasted here
config
Entering configuration mode terminal
<PASTE COPIED CONFIGURATION HERE>
commit
Check configuration
product cnee# show running-config
system mode running
helm default-repository cnee
helm repository cnee
url
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/cnee.2019.01.01-5/
!
k8s namespace
cnee
k8s registry
devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node
false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
aaa authentication users user admin
uid
9000
gid
100
password
$1$sBXc9Rys$hk1wbyU44iOEBn5Ax1jxO.
ssh_keydir /var/confd/homes/admin/.ssh
homedir
/var/confd/homes/admin
!
aaa authentication users user readonly
uid
9001
gid
100
password
$1$vI4c9C5G$4jFqSZVZazWEW3peI11D.1
ssh_keydir /var/confd/homes/read-only/.ssh
homedir
/var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
command autowizard
!
command enable
!
command exit
!
command help
!
command startup
!
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
product cnee#
29. Bring up CNEE
product cnee# config
product cnee(config)# system mode running
product cnee(config)# commit
end
30. Use Ops-Center commands to check the status of NF
product cnee# show running-config
product cnee# show k8s pods status
product cnee# show k8s services
product cnee# show helm charts status
product cnee# show helm charts version
product cnee# show system status
31. Check if all pods and services are UP and running.
Log into master to check status of cnee pods
root@198:~/5g# kubectl get pods -n cnee -o wide
NAME
RESTARTS
AGE
IP
NODE
alertmanager-66d9bbf99b-549l8
0
3d
10.34.128.2
worker8
api-cnee-ops-center-86cf696494-rtwdk
0
3d
10.42.0.66
worker2
bulk-stats-df488795-zfpn6
0
3d
10.40.0.148
worker7
cnee-cnee-documentation-documentation-576649bdbf-lc424
0
3d
10.34.128.0
worker8
elastic-kibana-proxy-6d4986db7d-rt4s4
2
3d
10.40.0.147
worker7
elastic-node-6dkgg
0
3d
198.18.134.32
worker2
elastic-node-7xqf6
0
3d
198.18.134.35
worker5
elastic-node-8pds7
0
3d
198.18.134.38
worker8
elastic-node-b5j6x
0
3d
198.18.134.31
worker1
elastic-node-bh6b4
0
3d
198.18.134.37
worker7
elastic-node-jgm8w
0
3d
198.18.134.33
worker3
elastic-node-nps9s
0
3d
198.18.134.34
worker4
elastic-node-vttt6
0
3d
198.18.134.36
worker6
grafana-dashboard-metrics-679579d7cc-42zfd
0
3d
10.47.0.103
worker4
grafana-f9ff87cfd-mwwjj
0
3d
10.35.0.173
worker3
READY
STATUS
1/1
Running
1/1
Running
2/2
Running
1/1
Running
2/2
Running
3/3
Running
3/3
Running
3/3
Running
3/3
Running
3/3
Running
3/3
Running
3/3
Running
3/3
Running
1/1
Running
4/4
Running
kibana-564cd5b4cf-q9xk7
2/2
Running
0
3d
10.43.128.103
worker6
kube-state-metrics-56dd8bc6c-4gkjn
1/1
Running
0
3d
10.34.128.3
worker8
node-exporter-4pzwc
2/2
Running
0
3d
198.18.134.33
worker3
node-exporter-6q4d2
2/2
Running
0
3d
198.18.134.37
worker7
node-exporter-85m9p
2/2
Running
0
3d
198.18.134.34
worker4
node-exporter-b7vvp
2/2
Running
0
3d
198.18.134.35
worker5
node-exporter-d5n65
2/2
Running
0
3d
198.18.134.32
worker2
node-exporter-nlnnd
2/2
Running
0
3d
198.18.134.31
worker1
node-exporter-psdsw
2/2
Running
0
3d
198.18.134.36
worker6
node-exporter-qm4bd
2/2
Running
0
3d
198.18.134.38
worker8
ops-center-cnee-ops-center-589bd67678-6dkpf
4/4
Running
0
3d
10.33.0.46
worker1
prometheus-hi-res-0
3/3
Running
1
3d
10.43.128.104
worker6
prometheus-hi-res-1
3/3
Running
1
3d
10.47.0.108
worker4
prometheus-rules-56b978db55-n9rv6
1/1
Running
0
3d
10.34.128.5
worker8
proxy-cnee-ops-center-cnee-console-7bb69c486c-g9gkq
1/1
Running
0
3d
10.47.0.76
worker4
swift-cnee-ops-center-6d5c7c7f8c-7jgz8
1/1
Running
0
3d
10.35.0.164
worker3
thanos-query-hi-res-67b6f485c6-jsrds
1/1
Running
0
3d
10.40.0.149
worker7
thanos-query-hi-res-67b6f485c6-zn4jx
1/1
Running
0
3d
10.34.128.6
worker8
ui-cnee-ops-center-cnee-console-cdbf4488f-qfw8h
1/1
Running
0
3d
10.36.0.141
worker5
root@198:~/5g# kubectl get svc -n cnee -o wide
NAME
TYPE
CLUSTER-IP
EXTERNAL-IP
PORT(S)
AGE
SELECTOR
alertmanager-service
ClusterIP
10.98.15.63
<none>
9093/TCP
3d
component=alertmanager
bulk-stats
NodePort
10.99.169.250
<none>
2222:30643/TCP
3d
component=bulk-stats
cnee-cnee-documentation-documentation-service
ClusterIP
10.96.183.163
<none>
80/TCP
3d
component=cnee-cnee-documentation-documentation
console-ui-cnee-ops-center
NodePort
10.107.57.13
<none>
80:30460/TCP
3d
app=cnee-console,release=cnee-ops-center
elastic-kibana-proxy
ClusterIP
10.104.54.135
<none>
9200/TCP,9300/TCP
3d
component=elastic-kibana-proxy
grafana
ClusterIP
10.99.49.226
<none>
3000/TCP
3d
component=grafana,release=cnee-cnat-monitoring
grafana-dashboard-metrics
ClusterIP
10.108.42.151
<none>
9418/TCP
3d
component=grafana-dashboard,dashboard-category=metrics
helm-api-cnee-ops-center
NodePort
10.99.150.42
<none>
3000:30074/TCP
3d
component=helm-api,release=cnee-ops-center
kibana-cnee-logging-visualization
ClusterIP
10.111.122.73
<none>
80/TCP
3d
component=kibana
ldap-proxy-cnee-cnat-monitoring
ClusterIP
10.106.39.26
<none>
636/TCP,369/TCP
3d
component=ops-center,release=cnee-ops-center
ldap-proxy-cnee-logging-visualization
ClusterIP
10.110.223.215
<none>
636/TCP,369/TCP
3d
component=ops-center,release=cnee-ops-center
ops-center-cnee-ops-center
ClusterIP
10.97.173.210
<none>
8008/TCP,2024/TCP,2022/TCP,7681/TCP
3d
component=ops-center,release=cnee-ops-center
prometheus-hi-res
ClusterIP
10.106.209.210
<none>
9090/TCP
3d
app=thanos-query,thanos-resolution=hi-res
prometheus-rules
ClusterIP
None
<none>
9419/TCP
3d
component=prometheus-alert-rules
proxy-cnee-ops-center
ClusterIP
10.96.243.244
<none>
4001/TCP
3d
app=ui-proxy,release=cnee-ops-center
swift-cnee-ops-center
NodePort
10.103.186.178
<none>
9855:31847/TCP,50055:32652/TCP,56790:30640/TCP
3d
app=swift,release=cnee-ops-center
thanos-peers-hi-res
ClusterIP
None
<none>
10900/TCP
3d
thanos-peer=true
root@198:~/5g# kubectl get ing -n cnee -o wide
NAME
HOSTS
ADDRESS
PORTS
AGE
alertmanager-ingress
alertmanager.cnee-cnatmonitoring.198.18.134.30.nip.io
80, 443
3d
cnee-cnee-documentation-documentation-ingress
docs.cnee-cneedocumentation.198.18.134.30.nip.io
80, 443
3d
console-ui-ingress-cnee-ops-center
console-ui.cnee-opscenter.198.18.134.30.nip.io
80, 443
3d
grafana-ingress
grafana.cnee-cnat-monitoring.198.18.134.30.nip.io
80, 443
3d
helm-api-ingress-cnee-ops-center
helm-api.cnee-opscenter.198.18.134.30.nip.io
80, 443
3d
kibana
kibana.cnee-loggingvisualization.198.18.134.30.nip.io
80, 443
3d
ops-center-ingress-cnee-ops-center
restconf.cnee-opscenter.198.18.134.30.nip.io,cli.cnee-ops-center.198.18.134.30.nip.io
80, 443
3d
root@198:~/5g#
root@198:~/5g#
32. Check CNEE GUI:
Open a browser and paste the highlighted URL.
root@198:~/5g# kubectl get ing -n cnee
NAME
HOSTS
ADDRESS
PORTS
AGE
alertmanager-ingress
alertmanager.cnee-cnatmonitoring.198.18.134.30.nip.io
80, 443
3d
cnee-cnee-documentation-documentation-ingress
docs.cnee-cneedocumentation.198.18.134.30.nip.io
80, 443
3d
console-ui-ingress-cnee-ops-center
console-ui.cnee-opscenter.198.18.134.30.nip.io
80, 443
3d
grafana-ingress
grafana.cnee-cnat-monitoring.198.18.134.30.nip.io
80, 443
3d
helm-api-ingress-cnee-ops-center
helm-api.cnee-opscenter.198.18.134.30.nip.io
80, 443
3d
kibana
kibana.cnee-loggingvisualization.198.18.134.30.nip.io
80, 443
3d
ops-center-ingress-cnee-ops-center
restconf.cnee-opscenter.198.18.134.30.nip.io,cli.cnee-ops-center.198.18.134.30.nip.io
80, 443
3d
Deploy NRF
33. Use global.yaml and nrf.yaml to install the NRF
Note: The yaml files are uploaded to the folder /root/5g.
root@198:~/5g# cat /root/5g/nrf.yaml
ops-center:
product:
autoDeploy: false
helm:
api:
release: cnee-ops-center
namespace: cnee
repository:
url:
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/nrf.2019.01.01-5/
name: nrf
root@198:~/5g#
Use the helm command to install the NRF.
cd /root/5g/
helm upgrade --install nrf-ops-center cnat-nrf/nrf-ops-center -f global.yaml -f nrf.yaml --namespace nrf --debug --devel
34. Once the Ops Center comes up, login using the ClusterIP
and apply the configuration.
Credentials: admin/admin
root@198:~# kubectl get svc -n nrf | grep ops
ops-center-nrf-ops-center   ClusterIP   10.109.34.197   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
root@198:~#
root@198:~# ssh -p 2024 admin@<IP>
admin@10.109.34.197’s password:
Welcome to the CLI
admin connected from 10.32.0.1 using ssh on ops-center-nrf-ops-center-797f4bd88c-dwdxf
product nrf#
35. NRF Configuration
Note: The config files are uploaded on k8s Master. Open another window: cat
/root/config/nrf-conf.yaml
root@198:~# cat /root/CLUS/configs/nrf.cfg
36. Once the Ops Center comes up, login using the ClusterIP.
Please copy paste config line by line.
autowizard false
complete-on-space false
config
Entering configuration mode terminal
product nrf(config)#   <-- from this line, copy-paste the config line by line
37. Type commit before you exit out of Ops-Center to save the
configuration
commit
Commit complete.
end
product nrf#
Refer Appendix for reference configuration
38. Use system mode running to deploy (Make sure
configuration is accurate)
show running-config
config
Entering configuration mode terminal
system mode shutdown
commit
system mode running
commit
end
39. Ops-Center commands to check the status of NF
show running-config
show k8s pods status
show k8s services
show helm charts status
show helm charts version
show system status
40. Check if all pods and services are UP and running.
root@198:~/5g# kubectl get pods -n nrf -o wide
NAME
STATUS
RESTARTS
AGE
IP
NODE
activemq-0
Running
0
2d
10.42.0.70
worker2
activemq-1
Running
0
2d
10.34.128.8
worker8
admin-db-0
Running
0
2d
10.34.128.10
worker8
admin-db-1
Running
0
2d
10.36.0.143
worker5
cps-license-manager-56758fbffd-ltghj
Running
0
2d
10.33.0.47
worker1
datastore-ep-profile-75dbf858f9-mt58b
Running
0
2d
10.40.0.153
worker7
datastore-ep-profile-notification-6d86898db9-z87s4
Running
0
2d
10.42.0.75
worker2
READY
1/1
1/1
1/1
1/1
1/1
2/2
2/2
datastore-ep-subscription-54d78ff455-vjdrl
Running
0
2d
10.43.128.106
worker6
datastore-ep-subscription-notification-7fc46c8d7c-4g669
Running
0
2d
10.34.128.11
worker8
db-admin-0
Running
0
2d
10.34.128.9
worker8
db-admin-config-0
Running
0
2d
10.43.128.105
worker6
db-profile-config-0
Running
0
2d
10.36.0.147
worker5
db-profile-s1-0
Running
0
2d
10.36.0.144
worker5
db-s1-0
Running
0
2d
10.42.0.71
worker2
db-session-config-0
Running
0
2d
10.33.0.49
worker1
db-subscription-config-0
Running
0
2d
10.33.0.50
worker1
db-subscription-s1-0
Running
0
2d
10.47.0.111
worker4
lbvip02-6f59f78478-mrdn6
Running
0
2d
10.34.128.7
worker8
nrf-nrf-nrf-engine-app-blue-586f4dd9d9-6hl5d
Running
1
2d
10.47.0.110
worker4
nrf-rest-ep-644574759-8hgdb
Running
0
2d
10.42.0.76
worker2
ops-center-nrf-ops-center-797f4bd88c-dwdxf
Running
0
3d
10.35.0.185
worker3
patch-server-nrf-cnat-cps-infrastructure-6b76cf465f-mjb29
Running
0
2d
10.40.0.150
worker7
policy-builder-nrf-nrf-engine-app-blue-5fcc688fd4-727pn
Running
0
2d
10.42.0.69
worker2
rs-controller-admin-56fb6bd8bf-p22vp
Running
0
2d
10.40.0.151
worker7
rs-controller-admin-config-c875c68b9-mrnp5
Running
0
2d
10.36.0.142
worker5
rs-controller-profile-config-788cf54fdf-qhdkm
Running
0
2d
10.35.0.191
worker3
rs-controller-profile-s1-666476b659-z5wsn
Running
0
2d
10.33.0.51
worker1
rs-controller-s1-658c58657-scvs4
Running
0
2d
10.36.0.145
worker5
rs-controller-session-config-7949d896f7-l9b8p
Running
0
2d
10.40.0.152
worker7
rs-controller-subscription-config-b7fb5df4c-cxl8d
Running
0
2d
10.34.128.15
worker8
rs-controller-subscription-s1-68d684985-khxd6
Running
0
2d
10.47.0.112
worker4
svn-698f498ccc-hlgbg
Running
0
2d
10.33.0.48
worker1
root@198:~/5g# kubectl get svc -n nrf -o wide
NAME
TYPE
EXTERNAL-IP
PORT(S)
activemq
ClusterIP
<none>
61616/TCP,11099/TCP,11098/TCP
component=activemq
admin-db
ClusterIP
<none>
27017/TCP
component=admin-db-router
datastore-ep
ClusterIP
<none>
8980/TCP,9100/TCP
component=datastore-ep
2/2
2/2
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
4/4
1/1
1/1
2/2
1/1
1/1
2/2
2/2
1/1
1/1
2/2
2/2
CLUSTER-IP
AGE
SELECTOR
None
2d
None
2d
10.104.13.1
2d
datastore-ep-nf-auth
ClusterIP
10.97.12.134
<none>
8980/TCP,9100/TCP
2d
component=datastore-ep-nf-auth
datastore-ep-profile
ClusterIP
10.105.85.172
<none>
8980/TCP,9100/TCP
2d
component=datastore-ep-profile
datastore-ep-subscription
ClusterIP
10.103.100.215
<none>
8980/TCP,9100/TCP
2d
component=datastore-ep-subscription
lbvip02
ClusterIP
10.106.172.95
<none>
11211/TCP
2d
component=cps-memcache
ldap-proxy-nrf-cnat-cps-infrastructure
ClusterIP
10.99.106.184
<none>
636/TCP,369/TCP
2d
component=ops-center,release=nrf-ops-center
ldap-proxy-nrf-nrf-engine-app-blue
ClusterIP
10.96.246.187
<none>
636/TCP,369/TCP
2d
component=ops-center,release=nrf-ops-center
mongo-admin-0
ClusterIP
10.104.71.95
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-admin-0
mongo-admin-config-0
ClusterIP
10.97.229.169
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-admin-config-0
mongo-profile-config-0
ClusterIP
10.98.56.19
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-profile-config-0
mongo-profile-s1-0
ClusterIP
10.103.27.235
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-profile-s1-0
mongo-s1-0
ClusterIP
10.101.51.110
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-s1-0
mongo-session-config-0
ClusterIP
10.110.118.158
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-session-config-0
mongo-subscription-config-0
ClusterIP
10.98.207.223
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-subscription-config-0
mongo-subscription-s1-0
ClusterIP
10.98.124.9
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-subscription-s1-0
nrf-datastore-ep
ClusterIP
10.105.78.104
<none>
8980/TCP
2d
component=cps-datastore-ep
nrf-engine
ClusterIP
10.109.147.15
<none>
8880/TCP,8890/TCP,8085/TCP
2d
component=nrf-engine
nrf-rest-ep
ClusterIP
10.111.184.184
198.18.134.30
8082/TCP,9299/TCP,8881/TCP
2d
component=nrf-rest-ep
ops-center-nrf-ops-center
ClusterIP
10.109.34.197
<none>
8008/TCP,2024/TCP,2022/TCP,7681/TCP
3d
component=ops-center,release=nrf-ops-center
patch-server-nrf-cnat-cps-infrastructure
ClusterIP
10.98.220.64
<none>
8080/TCP
2d
component=patch-server,release=nrf-cnat-cps-infrastructure
policy-builder-nrf-nrf-engine-app-blue
ClusterIP
10.104.82.238
<none>
7070/TCP
2d
component=policy-builder,release=nrf-nrf-engine-app-blue
rs-admin
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=admin
rs-admin-config
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=admin-config
rs-profile-config
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=profile-config
rs-profile-s1
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=profile-s1
rs-s1
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=s1
rs-session-config
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=session-config
rs-subscription-config
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=subscription-config
rs-subscription-s1
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=subscription-s1
svn
ClusterIP
None
<none>
80/TCP
2d
component=cps-subversion
root@198:~/5g# kubectl get ing -n nrf -o wide
NAME
HOSTS
ADDRESS
PORTS
AGE
nrf-engine-ep-ingress
nrf.api.nrf-nrfservices.198.18.134.30.nip.io
80, 443
2d
nrf-rest-ep-ingress
nrf.rest-ep.nrf-nrf-restep.198.18.134.30.nip.io
80,
443
2d
ops-center-ingress-nrf-ops-center
restconf.nrf-opscenter.198.18.134.30.nip.io,cli.nrf-ops-center.198.18.134.30.nip.io
80, 443
3d
policy-builder-ingress-nrf-nrf-engine-app-blue
pb.nrf-nrf-engine-appblue.198.18.134.30.nip.io
80, 443
2d
root@198:~/5g#
root@198:~/5g#
Deploy NSSF
41. Use global.yaml and nssf.yaml to install the NSSF
Note: The yaml files are uploaded to the folder: cat /home/labuser/pod4/nssf.yaml
cd /root/5g
cat /root/5g/nssf.yaml
root@198:~/5g# cat nssf.yaml
ops-center:
product:
autoDeploy: false
helm:
api:
release: cnee-ops-center
namespace: cnee
repository:
url:
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/nssf.2019.01.01-5/
name: nssf
root@198:~/5g#
42. Use the helm command to install the NSSF.
helm upgrade --install nssf-ops-center cnat-nssf/nssf-ops-center -f global.yaml -f nssf.yaml --namespace nssf --debug --devel
43. Check Ops-center is up and running:
root@198:~/5g# kubectl get pods -n nssf -o wide
root@198:~/5g# kubectl get svc -n nssf -o wide
root@198:~/5g# kubectl get ing -n nssf
Once the Ops Center comes up, login using the ClusterIP and apply the configuration.
Credentials: admin/admin
root@198:~/5g# kubectl get svc -n nssf | grep ops
ops-center-nssf-ops-center   ClusterIP   10.109.143.84   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
root@198:~/5g#
ssh -p 2024 admin@<IP address>
root@198:~/5g# ssh -p 2024 admin@10.109.143.84
admin@10.109.143.84’s password:
Welcome to the CLI
admin connected from 10.32.0.1 using ssh on ops-center-nssf-ops-center-6946bb8fb7-mv8ln
product nssf#
44. NSSF Configuration
Note: The config files are uploaded on the k8s Master. Open another window: cat /home/labuser/pod4/config/nssf
cat /root/CLUS/configs/nssf.cfg
45. Once the Ops Center comes up, login using the ClusterIP. Please copy-paste the config line by line.
autowizard false
complete-on-space false
config
Entering configuration mode terminal
product nssf(config)#   <-- from this line, copy-paste the config line by line
46. Type commit before you exit out of Ops-Center to save the
configuration
product nssf(config)#
commit
Commit complete.
end
Refer Appendix for reference configuration
47. Use system mode running to deploy
product nssf#
config
Entering configuration mode terminal
system mode shutdown
commit
system mode running
commit
end
product nssf#
48. Ops-Center commands to check the status of NF
show running-config
show k8s pods status
show k8s services
show helm charts status
show helm charts version
show system status
49. Check if all pods and services are UP and running. (FROM
K8-Master Node)
root@198:~/conf# kubectl get pods -n nssf -o wide
NAME
STATUS
READY
RESTARTS
AGE
IP
NODE
activemq-0
Running
0
3d
10.42.0.73
worker2
activemq-1
Running
0
3d
10.34.128.13
worker8
admin-db-0
Running
0
3d
10.43.128.111
worker6
admin-db-1
Running
0
3d
10.47.0.114
worker4
cps-license-manager-56758fbffd-v4bm7
Running
0
3d
10.43.128.110
worker6
datastore-ep-nssai-availability-589b487585-nnr6s
Running
0
3d
10.36.0.148
worker5
datastore-ep-nssai-availability-notification-69ccf6f4f7-t9mwd
Running
0
3d
10.43.128.109
worker6
db-admin-0
Running
0
3d
10.42.0.74
worker2
db-admin-config-0
Running
0
3d
10.47.0.115
worker4
db-nssai-availability-config-0
Running
0
3d
10.34.128.12
worker8
db-nssai-availability-s1-0
Running
0
3d
10.40.0.155
worker7
db-s1-0
Running
0
3d
10.40.0.157
worker7
db-session-config-0
Running
0
3d
10.43.128.112
worker6
lbvip02-6f59f78478-4cwmv
Running
0
3d
10.36.0.149
worker5
nssf-engine-b7fb7b888-52rtw
Running
0
3d
10.40.0.158
worker7
nssf-policy-builder-nssf-nssf-engine-app-nssf-1-7fdd4969d7f279b
Running
0
3d
10.47.0.116
worker4
nssf-rest-ep-5cc8cb79d5-dcv4r
Running
0
3d
10.40.0.154
worker7
ops-center-nssf-ops-center-6946bb8fb7-mv8ln
Running
0
3d
10.35.0.194
worker3
patch-server-nssf-cnat-cps-infrastructure-76f99f7c45-zs4fj
Running
0
3d
10.47.0.113
worker4
rs-controller-admin-66cf9875f-gl2lj
Running
0
3d
10.36.0.151
worker5
rs-controller-admin-config-f9b786dc9-7tzh4
Running
0
3d
10.36.0.150
worker5
rs-controller-nssai-availability-config-59bf8bd685-dv9mn
Running
0
3d
10.42.0.72
worker2
rs-controller-nssai-availability-s1-5656d9f59f-qqwqs
Running
0
3d
10.47.0.109
worker4
rs-controller-s1-6c8c5cb44c-jk94x
Running
0
3d
10.34.128.14
worker8
rs-controller-session-config-7fdd9d488c-bsjfm
Running
0
3d
10.43.128.115
worker6
svn-767846bfbf-p9lwr
Running
0
3d
10.40.0.156
worker7
root@198:~/conf# kubectl get svc -n nssf -o wide
NAME
EXTERNAL-IP
PORT(S)
activemq
<none>
61616/TCP,11099/TCP,11098/TCP
component=activemq
admin-db
<none>
27017/TCP
component=admin-db-router
TYPE
AGE
ClusterIP
3d
ClusterIP
3d
1/1
1/1
1/1
1/1
1/1
2/2
2/2
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
1/1
4/4
1/1
2/2
1/1
1/1
2/2
2/2
1/1
2/2
CLUSTER-IP
SELECTOR
None
None
datastore-ep
ClusterIP
10.101.218.85
<none>
8980/TCP,9100/TCP
3d
component=datastore-ep
datastore-ep-nssai-availability
ClusterIP
10.110.130.102
<none>
8980/TCP,9100/TCP
3d
component=datastore-ep-nssai-availability
lbvip02
ClusterIP
10.104.163.221
<none>
11211/TCP
3d
component=cps-memcache
ldap-proxy-nssf-cnat-cps-infrastructure
ClusterIP
10.100.230.111
<none>
636/TCP,369/TCP
3d
component=ops-center,release=nssf-ops-center
ldap-proxy-nssf-nssf-engine-app-nssf-1
ClusterIP
10.96.206.238
<none>
636/TCP,369/TCP
3d
component=ops-center,release=nssf-ops-center
mongo-admin-0
ClusterIP
10.105.245.89
<none>
27017/TCP
3d
statefulset.kubernetes.io/pod-name=db-admin-0
mongo-admin-config-0
ClusterIP
10.96.245.39
<none>
27017/TCP
3d
statefulset.kubernetes.io/pod-name=db-admin-config-0
mongo-nssai-availability-config-0
ClusterIP
10.103.204.165
<none>
27017/TCP
3d
statefulset.kubernetes.io/pod-name=db-nssai-availability-config-0
mongo-nssai-availability-s1-0
ClusterIP
10.106.36.157
<none>
27017/TCP
3d
statefulset.kubernetes.io/pod-name=db-nssai-availability-s1-0
mongo-s1-0
ClusterIP
10.103.218.161
<none>
27017/TCP
3d
statefulset.kubernetes.io/pod-name=db-s1-0
mongo-session-config-0
ClusterIP
10.98.92.223
<none>
27017/TCP
3d
statefulset.kubernetes.io/pod-name=db-session-config-0
nssf-engine
ClusterIP
10.96.237.175
198.18.134.34
8883/TCP,8890/TCP,8081/TCP
3d
component=nssf-engine
nssf-policy-builder-nssf-nssf-engine-app-nssf-1
ClusterIP
10.100.244.243
<none>
7070/TCP
3d
component=nssf-policy-builder,release=nssf-nssf-engine-app-nssf-1
nssf-rest-ep
ClusterIP
10.101.149.164
198.18.134.34
8083/TCP,9083/TCP
3d
component=nssf-rest-ep
nssf-rest-ep-internal
ClusterIP
10.104.150.21
<none>
9299/TCP,8882/TCP
3d
component=nssf-rest-ep
ops-center-nssf-ops-center
ClusterIP
10.109.143.84
<none>
8008/TCP,2024/TCP,2022/TCP,7681/TCP
3d
component=ops-center,release=nssf-ops-center
patch-server-nssf-cnat-cps-infrastructure
ClusterIP
10.101.168.227
<none>
8080/TCP
3d
component=patch-server,release=nssf-cnat-cps-infrastructure
rs-admin
ClusterIP
None
<none>
27017/TCP
3d
component=replica-set,set-name=admin
rs-admin-config
ClusterIP
None
<none>
27017/TCP
3d
component=replica-set,set-name=admin-config
rs-nssai-availability-config
ClusterIP
None
<none>
27017/TCP
3d
component=replica-set,set-name=nssai-availability-config
rs-nssai-availability-s1
ClusterIP
None
<none>
27017/TCP
3d
component=replica-set,set-name=nssai-availability-s1
rs-s1
<none>
27017/TCP
component=replica-set,set-name=s1
rs-session-config
<none>
27017/TCP
component=replica-set,set-name=session-config
svn
<none>
80/TCP
component=cps-subversion
ClusterIP
3d
None
ClusterIP
3d
None
ClusterIP
3d
None
root@198:~/conf# kubectl get ing -n nssf -o wide
NAME
HOSTS
ADDRESS
PORTS
AGE
nssf-engine-ep-ingress
nssf.api.nssfnssf-engine-app-nssf-1.198.18.134.30.nip.io
80, 443
3d
nssf-policy-builder-ingress-nssf-nssf-engine-app-nssf-1
nssf.pb.nssfnssf-engine-app-nssf-1.198.18.134.30.nip.io
80, 443
3d
nssf-rest-ep-ingress
nssf.restep.nssf-nssf-rest-ep.198.18.134.30.nip.io
80, 443
3d
ops-center-ingress-nssf-ops-center
restconf.nssfops-center.198.18.134.30.nip.io,cli.nssf-ops-center.198.18.134.30.nip.io
80, 443
3d
root@198:~/conf#
root@198:~/conf#
50. NSSF Table Configuration
Find the NSSF GUI URL:
root@198:~/conf# kubectl get ing -n nssf | grep pb
nssf-policy-builder-ingress-nssf-nssf-engine-app-nssf-1   nssf.pb.nssf-nssf-engine-app-nssf-1.198.18.134.30.nip.io   80, 443   3d
root@198:~/conf#
51. Open the web browser
https://nssf.pb.nssf-nssf-engine-app-nssf-1.198.18.134.30.nip.io (use the full string)
52. Login to NSSF GUI using credentials: admin/admin
Click on → Custom Reference Data
→ AMF Selection and add the row as follows:
→ NRF Selection and add the row as follows:
Please make sure that the NRF id is "http://<master-ip>:8082/", where the master-ip is 198.18.134.30.
→ S-NSSAI Selection and add the row as follows:
Deploy AMF
Use the above-mentioned global.yaml and amf.yaml to install the AMF.
Note: The yaml files are uploaded to the folder /root/5g (from Master).
53. Check the contents of amf.yaml
root@198:~/conf# cd /root/5g/
root@198:~/5g#
root@198:~/5g# cat /root/5g/amf.yaml
ops-center:
product:
autoDeploy: false
helm:
api:
release: cnee-ops-center
namespace: cnee
repository:
url:
https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiq
CaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-chartsrelease/builds/2019.01-5/amf.2019.01.01-5/
name: amf
root@198:~/5g#
54. Use the helm command to install the AMF.
helm upgrade --install amf-ops-center cnat-amf/amf-ops-center -f global.yaml -f amf.yaml --namespace amf --debug --devel
55. Check Ops-center is up and running:
root@198:~/5g# kubectl get pods -n amf -o wide | grep ops
ops-center-amf-ops-center-65c76bc5f-kqdn2   4/4   Running   0   3d   10.43.128.116   worker6
root@198:~/5g#
56. Once the Ops Center comes up, login using the ClusterIP or
Loadbalancer IP and apply the configuration.
Credentials: admin/admin
kubectl get svc -n amf | grep ops
ssh -p 2024 admin@<IP address>
root@198:~/5g# kubectl get svc -n amf | grep ops
ops-center-amf-ops-center   ClusterIP   10.98.127.113   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
root@198:~/5g# ssh -p 2024 admin@10.98.127.113
admin@10.98.127.113’s password:
Welcome to the CLI
admin connected from 10.32.0.1 using ssh on ops-center-amf-ops-center-65c76bc5f-kqdn2
product amf#
57. AMF Configuration
Note: The config files are uploaded on the k8s Master. Open another window:
root@198:~/5g# cat /root/CLUS/configs/amf.cfg
Once the Ops Center comes up, login using the ClusterIP. Please copy-paste the config line by line.
autowizard false
complete-on-space false
config
Entering configuration mode terminal
product amf(config)#   <-- from this line, copy-paste the config line by line
58. Type commit before you exit out of Ops-Center to save the
configuration
product amf(config)#
commit
Commit complete.
end
product amf#
Refer Appendix for reference configuration
59. Use system mode running to deploy
product amf#
show running-config
product amf#
config
Entering configuration mode terminal
system mode shutdown
commit
system mode running
commit
end
60. Ops-Center commands to check the status of NF
show running-config
show k8s pods status
show k8s services
show helm charts status
show helm charts version
show system status
61. Check if all pods and services are UP and running. (from
Master)
root@198:~/conf# kubectl get pods -n amf -o wide
NAME
STATUS
RESTARTS
AGE
IP
NODE
amf-amf-documentation-documentation-5fbb577548-srddh
Running
0
2d
10.43.128.120
worker6
amf-amf-pats-executor-6bf4d7bc4d-gncfb
Running
0
2d
10.33.0.59
worker1
amf-amf-pats-repo-ff988f847-7nd2b
Running
0
2d
10.34.128.18
worker8
amf-amf-protocol-ep-55f444c6f8-rwjjr
Running
0
2d
10.34.128.17
worker8
amf-amf-rest-ep-99dd76796-rhrc8
Running
0
2d
10.47.0.120
worker4
READY
1/1
1/1
1/1
1/1
1/1
amf-amf-sctp-lb-848c69dccb-b8r8g
1/1
Running
0
2d
198.18.134.33
worker3
amf-amf-service-0
1/1
Running
0
2d
10.35.0.196
worker3
amf-mock-tools-7cbc5fb8b7-s4hpc
1/1
Running
0
2d
10.42.0.84
worker2
cassandra-0
1/1
Running
0
2d
10.42.0.83
worker2
datastore-ep-amf-subscriber-6dcdcdd5c6-dm2kv
2/2
Running
1
2d
10.40.0.159
worker7
datastore-ep-amf-subscriber-notification-55b55d76f8-qpkzv
2/2
Running
0
2d
10.35.0.195
worker3
db-amf-subscriber-config-0
1/1
Running
0
2d
10.43.128.119
worker6
db-amf-subscriber1-0
1/1
Running
0
2d
10.36.0.155
worker5
etcd-0
1/1
Running
0
2d
10.33.0.58
worker1
etcd-1
1/1
Running
0
2d
10.40.0.161
worker7
etcd-2
1/1
Running
0
2d
10.47.0.117
worker4
grafana-dashboard-amf-7dcc57c69-ww88g
1/1
Running
0
2d
10.40.0.164
worker7
jaeger-agent-2gtgw
1/1
Running
0
2d
10.33.0.52
worker1
jaeger-agent-8bqr9
1/1
Running
0
2d
10.35.0.199
worker3
jaeger-agent-8g97s
1/1
Running
0
2d
10.47.0.118
worker4
jaeger-agent-8p9fc
1/1
Running
0
2d
10.34.128.16
worker8
jaeger-agent-djk45
1/1
Running
0
2d
10.40.0.160
worker7
jaeger-agent-grp76
1/1
Running
0
2d
10.43.128.107
worker6
jaeger-agent-s5jhl
1/1
Running
0
2d
10.42.0.77
worker2
jaeger-agent-wg7vk
1/1
Running
0
2d
10.36.0.152
worker5
jaeger-collector-7bcbd755f4-hwv9g
1/1
Running
0
2d
10.43.128.118
worker6
jaeger-query-58488968c8-r8x98
1/1
Running
3
2d
10.47.0.119
worker4
lfs-5d444f976c-nxfg7
1/1
Running
0
2d
198.18.134.33
worker3
ops-center-amf-ops-center-65c76bc5f-kqdn2
4/4
Running
0
3d
10.43.128.116
worker6
rs-controller-amf-subscriber-config-5dfbf4f9df-znh5l
1/1
Running
0
2d
10.36.0.154
worker5
rs-controller-amf-subscriber1-7fb674b887-l6d4c
2/2
Running
0
2d
10.35.0.197
worker3
root@198:~/conf# kubectl get svc -n amf -o wide
NAME
TYPE
CLUSTER-IP
EXTERNAL-IP
PORT(S)
AGE
SELECTOR
amf-amf-documentation-documentation-service
ClusterIP
10.105.86.164
<none>
80/TCP
2d
component=amf-amf-documentation-documentation
amf-amf-pats-executor
ClusterIP
10.103.24.66
<none>
8080/TCP,8091/TCP,2222/TCP
2d
component=pats,release=amf-amf-pats-executor
amf-amf-pats-repo
ClusterIP
10.102.0.109
<none>
80/TCP
2d
component=pats,release=amf-amf-pats-repo
amf-datastore-ep
ClusterIP
10.103.196.14
<none>
8980/TCP
2d
component=amf-datastore-ep
amf-mock-tools
ClusterIP
10.110.239.225
198.18.134.30
8099/TCP
2d
component=amf-mock-tools,release=amf-amf-mock-tools
amf-protocol-ep
ClusterIP
10.106.247.182
<none>
8894/TCP,8886/TCP,8070/TCP
2d
component=amf-protocol-ep
amf-rest-ep
ClusterIP
10.107.199.209
198.18.134.31
8883/TCP,8080/TCP,8090/TCP,8870/TCP
2d
component=amf-rest-ep
amf-service
ClusterIP
10.98.12.154
<none>
8882/TCP,8885/TCP,8080/TCP
2d
component=amf-service
automationhost
ClusterIP
10.104.12.187
<none>
3868/TCP,3869/TCP,2775/TCP,25/TCP,161/TCP
2d
component=pats,release=amf-amf-pats-executor
75ransport
ClusterIP
10.105.34.133
<none>
7000/TCP,7001/TCP,9042/TCP,9160/TCP,7070/TCP
2d
app=75ransport,comp-group=75ransport,namespace=amf
etcd
ClusterIP
None
<none>
2379/TCP,2380/TCP,7070/TCP
2d
component=etcd
75ranspo-dashboard-amf
ClusterIP
10.109.91.151
<none>
9418/TCP
2d
component=75ranspo-dashboard,dashboard-category=amf
jaeger-agent
ClusterIP
10.105.22.223
<none>
5775/UDP,6831/UDP,6832/UDP
2d
component=jaeger-agent,release=amf-amf-cluster-infrastructure
jaeger-collector
ClusterIP
10.103.202.0
<none>
14267/TCP,14268/TCP,9411/TCP
2d
component=jaeger-collector,release=amf-amf-cluster-infrastructure
jaeger-query
ClusterIP
10.96.23.25
<none>
16686/TCP
2d
component=jaeger-query,release=amf-amf-cluster-infrastructure
mongo-amf-subscriber-config-0
ClusterIP
10.109.5.44
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-amf-subscriber-config-0
mongo-amf-subscriber1-0
ClusterIP
10.108.109.78
<none>
27017/TCP
2d
statefulset.kubernetes.io/pod-name=db-amf-subscriber1-0
ops-center-amf-ops-center
ClusterIP
10.98.127.113
<none>
8008/TCP,2024/TCP,2022/TCP,7681/TCP
3d
component=ops-center,release=amf-ops-center
rs-amf-subscriber-config
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=amf-subscriber-config
rs-amf-subscriber1
ClusterIP
None
<none>
27017/TCP
2d
component=replica-set,set-name=amf-subscriber1
root@198:~/conf# kubectl get ing -n amf -o wide
NAME                                          HOSTS                                                                                   ADDRESS   PORTS     AGE
amf-amf-documentation-documentation-ingress   docs.amf-amf-documentation.198.18.134.30.nip.io                                                   80, 443   2d
amf-amf-pats-executor                         amf-amf-pats-executor.198.18.134.30.nip.io                                                        80, 443   2d
amf-amf-pats-repo                             amf-amf-pats-repo.198.18.134.30.nip.io                                                            80, 443   2d
amf-mock-tools-ingress                        amf.rest-ep.amf-amf-mock-tools.198.18.134.30.nip.io                                               80, 443   2d
amf-rest-ep-ingress-amf-amf-services          amf.rest-ep.amf-amf-services.198.18.134.30.nip.io                                                 80, 443   2d
jaeger-ingress                                jaeger.amf-amf-cluster-infrastructure.198.18.134.30.nip.io                                        80, 443   2d
ops-center-ingress-amf-ops-center             restconf.amf-ops-center.198.18.134.30.nip.io,cli.amf-ops-center.198.18.134.30.nip.io             80, 443   3d
root@198:~/conf#
62. Check AMF registration to NRF:
kubectl exec -it db-profile-s1-0 -n nrf mongo
profile-s1:PRIMARY> use session
profile-s1:PRIMARY> db.session.find().pretty();
{
"_id" : "040b5ff0-bd6e-43d6-82fa-99234dc73b45",
"tags" : [
"serviceName:Namf_Communication",
"serviceInstanceId:Namf_Communication",
"tai:123;456;10",
"tai:123;456;20",
"tai:123;456;30",
"guami:123;456cisco-amf",
"amfSetId:2",
"amfRegionId:1",
"nfType:AMF",
"plmn:123;456",
"snssai:1:1",
"snssai:2:1",
"snssai:2:3",
"snssai:12:13"
],
"ukTags" : [
"nfInstanceId:040b5ff0-bd6e-43d6-82fa-99234dc73b45"
],
"d" :
BinData(0,"CJb2JBLeAgokMDQwYjVmZjAtYmQ2ZS00M2Q2LTgyZmEtOTkyMzRkYzczYjQ1EAMY
ASIKCgMxMjMSAzQ1NioFCgExEAEqBQoBMRACKgUKATMQAioGCgIxMxAMOgljaXNjby1hbWZKCjE
wLjguNTcuMTCCAQCKAQCSAQCaAVUKATESATIaFwoKCgMxMjMSAzQ1NhIJY2lzY28tYW1mIhAKCg
oDMTIzEgM0NTYSAjEwIhAKCgoDMTIzEgM0NTYSAjIwIhAKCgoDMTIzEgM0NTYSAjMwogEAqgEAs
gEAugEAwgGDAQoSTmFtZl9Db21tdW5pY2F0aW9uEhJOYW1mX0NvbW11bmljYXRpb24aCQoCdjES
AzEuMCIEaHR0cCoJY2lzY28tYW1mOhEKCjEwLjguNTcuMTAgASiaP1IKCgMxMjMSAzQ1NloBBGo
FCgExEAFqBQoBMRACagUKATMQAmoGCgIxMxAM"),
"nextEvalTime" : ISODate("2019-05-01T19:34:17.213Z"),
"purge" : false,
"_v" : NumberLong(1)
}
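If the output is long, you can also query the NRF session database non-interactively from the Master node. This is an optional convenience sketch; it assumes the mongo shell inside the db-profile-s1-0 pod and the tag layout shown above:
kubectl exec db-profile-s1-0 -n nrf -- mongo --quiet --eval 'db.getSiblingDB("session").session.find({tags:"nfType:AMF"}).forEach(printjson)'    # print only the AMF registration document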
Deploy PCF
Use the above-mentioned global.yaml and pcf.yaml to install the PCF.
63. The yaml files are uploaded to the folder /root/5g:
root@198:~# cat /root/5g/pcf.yaml
ops-center:
  product:
    autoDeploy: false
  helm:
    api:
      release: cnee-ops-center
      namespace: cnee
    repository:
      url: https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/pcf.2019.01.01-5/
      name: pcf
root@198:~#
64. Use the helm command to install the PCF:
helm upgrade --install pcf-ops-center cnat-pcf/pcf-ops-center -f global.yaml -f pcf.yaml --namespace pcf --debug --devel
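Optionally, confirm the release was created before moving on (assuming the same Helm v2 client used for the install commands in this guide):
helm ls | grep pcf-ops-center    # the release should show the DEPLOYED status
helm status pcf-ops-center       # lists the resources created by the chart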
65. Verify the Ops-center is UP and running:
kubectl get pods -n pcf -o wide
kubectl get ing -n pcf
kubectl get svc -n pcf | grep ops
66. Once the Ops Center comes up, log in to the ClusterIP over SSH (credentials: admin/admin).
root@198:~# kubectl get svc -n pcf | grep ops
ops-center-pcf-ops-center   ClusterIP   10.100.249.52   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
root@198:~# ssh -p 2024 admin@10.100.249.52
admin@10.100.249.52's password:
Welcome to the CLI
admin connected from 10.32.0.1 using ssh on ops-center-pcf-ops-center-f67c9d9fd-wclrv
product pcf#
67. PCF Configuration
Note: The config files are uploaded on the k8s Master. Open another window and run:
cat /root/CLUS/configs/pcf.cfg
68. Copy and paste the PCF config line by line:
autowizard false
complete-on-space false
config
Entering configuration mode terminal
product pcf(config)#
 From this line onward, copy and paste the configuration line by line.
69. Type commit before you exit the Ops-Center to save the configuration:
product pcf(config)# commit
Commit complete.
product pcf(config)# end
product pcf#
Refer to the Appendix for the reference configuration.
70. Use system mode running to deploy:
product pcf#
config
Entering configuration mode terminal
system mode shutdown
commit
system mode running
commit
end
71. Ops-Center commands to check the status of the NF:
show running-config
show k8s pods status
show k8s services
show helm charts status
show helm charts version
show system status
72. Check if all pods and services are UP and running:
kubectl get pods -n pcf -o wide
kubectl get svc -n pcf -o wide
kubectl get ing -n pcf -o wide
73. Check that you can access the PCF Central and Policy Builder (PB) GUIs:
kubectl get ing -n pcf
pb.pcf-pcf-engine-app-blue.198.18.134.10.nip.io
pb.pcf-pcf-engine-app-blue.198.18.134.10.nip.io/pb
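A quick reachability check from the Master node can confirm the ingress answers before you open a browser (optional; the hostname is the PB ingress shown above, and the exact HTTP status code may vary with the build):
curl -k -s -o /dev/null -w "%{http_code}\n" https://pb.pcf-pcf-engine-app-blue.198.18.134.10.nip.io/pb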
74. Check PCF registration to NRF:
kubectl exec -it db-profile-s1-0 -n nrf mongo
profile-s1:PRIMARY> use session
profile-s1:PRIMARY> db.session.find().pretty();
{
"_id" : "2b80caf8-42e1-395d-95b3-8f2fb828a2fe",
"tags" : [
"serviceName:Npcf_AMPolicyControl",
"serviceName:Npcf_SMPolicyControl",
"serviceInstanceId:2b80caf8-42e1-395d-95b3-8f2fb828a2fe.N15",
"serviceInstanceId:2b80caf8-42e1-395d-95b3-8f2fb828a2fe.N7",
"nfType:PCF",
"plmn:123;456",
"snssai:2:3"
],
"ukTags" : [
"nfInstanceId:2b80caf8-42e1-395d-95b3-8f2fb828a2fe"
],
"d" :
BinData(0,"CJb2JBK8AgokMmI4MGNhZjgtNDJlMS0zOTVkLTk1YjMtOGYyZmI4MjhhMmZlEAcY
ASIKCgMxMjMSAzQ1NioFCgEzEAJKCjEwLjguNTcuMTDCAXcKKDJiODBjYWY4LTQyZTEtMzk1ZC0
5NWIzLThmMmZiODI4YTJmZS5OMTUSFE5wY2ZfQU1Qb2xpY3lDb250cm9sGggKAnYxEgJ2MSIEaH
R0cDoPCgoxMC44LjU3LjEwKPpGUgoKAzEyMxIDNDU2WgEDagUKATMQAsIBdgonMmI4MGNhZjgtN
DJlMS0zOTVkLTk1YjMtOGYyZmI4MjhhMmZlLk43EhROcGNmX1NNUG9saWN5Q29udHJvbBoICgJ2
MRICdjEiBGh0dHA6DwoKMTAuOC41Ny4xMCj6RlIKCgMxMjMSAzQ1NloBBGoFCgEzEAI="),
"nextEvalTime" : ISODate("2019-05-01T19:51:36.369Z"),
"purge" : false,
"_v" : NumberLong(1)
}
Deploy SMF
75. Use the above-mentioned global.yaml and smf.yaml to install the SMF.
Note: The yaml files are uploaded to the folder /root/5g:
cd /root/5g
cat /root/5g/smf.yaml
cat /root/5g/global.yaml
smf.yaml:
root@198:~# cat /root/5g/smf.yaml
ops-center:
  product:
    autoDeploy: false
  helm:
    api:
      release: cnee-ops-center
      namespace: cnee
    repository:
      url: https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JloveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/mobile-cnat-smf/smf-products/2019-01-30_Disktype/
      name: smf
76. Use the helm command to install the SMF:
helm upgrade --install smf-ops-center cnat-smf/smf-ops-center -f global.yaml -f smf.yaml --namespace smf --debug --devel
77. Note: The config files are uploaded on the k8s Master. To review the values file in another window: cat /root/5g/smf.yaml (same contents as shown above).
78. Once the Ops Center comes up, log in to the ClusterIP over SSH (credentials: admin/admin):
root@198:~# kubectl get svc -n smf | grep ops
ops-center-smf-ops-center   ClusterIP   10.111.89.127   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
ssh -p 2024 admin@<IP address>
root@198:~# ssh -p 2024 admin@10.111.89.127
admin@10.111.89.127's password:
Welcome to the CLI
admin connected from 10.32.0.1 using ssh on ops-center-smf-ops-center-564777b65-mwcvm
product smf#
79. SMF Configuration
Note: The config files are uploaded on the k8s Master. Open another window and run:
cat /root/CLUS/configs/smf.cfg
80. Copy and paste the config line by line:
autowizard false
complete-on-space false
config
Entering configuration mode terminal
product smf(config)#
 From this line onward, copy and paste the configuration line by line.
81. Type commit before you exit the Ops-Center to save the configuration:
product smf(config)# commit
Commit complete.
product smf(config)# end
product smf#
Refer to the Appendix for the reference configuration.
82. Use system mode running to deploy:
product smf#
config
Entering configuration mode terminal
system mode shutdown
commit
system mode running
commit
end
83. Ops-Center commands to check the status of the NF:
show running-config
show k8s pods status
show k8s services
show helm charts status
show helm charts version
show system status
84. Verify the Ops-center is UP and running:
root@198:~# kubectl get pods -n smf -o wide
NAME                                                        READY   STATUS    RESTARTS   AGE   IP              NODE
datastore-ep-smf-subscriber-5897b5c94f-ztthl                2/2     Running   3          2d    10.34.128.19    worker8
datastore-ep-smf-subscriber-notification-5dc65c54bf-w7rtj   2/2     Running   0          2d    10.33.0.56      worker1
db-smf-subscriber-config-0                                  1/1     Running   0          2d    10.42.0.82      worker2
db-smf-subscriber1-0                                        1/1     Running   0          2d    10.35.0.192     worker3
etcd-0                                                      1/1     Running   0          2d    10.47.0.122     worker4
ops-center-smf-ops-center-564777b65-mwcvm                   4/4     Running   0          3d    10.47.0.128     worker4
redis-primary0-0                                            1/1     Running   0          2d    10.43.128.108   worker6
redis-secondary-0                                           1/1     Running   0          2d    10.47.0.121     worker4
redis-sentinel-0                                            1/1     Running   0          2d    10.36.0.156     worker5
redis-sentinel-1                                            1/1     Running   0          2d    10.40.0.162     worker7
redis-sentinel-2                                            1/1     Running   0          2d    10.35.0.204     worker3
rs-controller-smf-subscriber-config-586f5d456c-zwmvv        1/1     Running   0          2d    10.43.128.117   worker6
rs-controller-smf-subscriber1-5cd775c977-xqwjk              2/2     Running   0          2d    10.36.0.157     worker5
smf-nodemgr-f4bb96877-psg2p                                 1/1     Running   0          2d    10.40.0.163     worker7
smf-protocol-58bdfcf449-r5mfj                               1/1     Running   0          2d    10.36.0.158     worker5
smf-rest-ep-5c6d9575df-9vqd7                                1/1     Running   0          2d    10.42.0.85      worker2
smf-service-smf-smf-service-78dd64ff55-g4djp                1/1     Running   0          2d    10.35.0.193     worker3
root@198:~# kubectl get ing -n smf
NAME                                HOSTS                                                                                   ADDRESS   PORTS     AGE
ops-center-ingress-smf-ops-center   restconf.smf-ops-center.198.18.134.30.nip.io,cli.smf-ops-center.198.18.134.30.nip.io              80, 443   3d
root@198:~# kubectl get svc -n smf | grep ops
ops-center-smf-ops-center   ClusterIP   10.111.89.127   <none>   8008/TCP,2024/TCP,2022/TCP,7681/TCP   3d
root@198:~#
85. Create a label to designate the worker where the SMF protocol pod is installed:
kubectl label nodes worker5 disktype=ssd1 --overwrite
Note: The call will fail if you miss this step.
If the above command shows an error, run the following:
kubectl label nodes worker5 disktype=ssd --overwrite
kubectl label nodes worker5 disktype=ssd1 --overwrite
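To verify the label was applied (an optional check):
kubectl get nodes -L disktype    # the DISKTYPE column should show ssd1 for worker5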
86. Check if all pods and services are UP and running:
kubectl get pods -n smf -o wide
kubectl get svc -n smf -o wide
kubectl get ing -n smf -o wide
87. Check SMF registration to NRF:
kubectl exec -it db-profile-s1-0 -n nrf mongo
profile-s1:PRIMARY> use session
profile-s1:PRIMARY> db.session.find().pretty();
{
"_id" : "d0f58d9b-ae65-44ab-94bd-3df819ded024",
"tags" : [
"serviceName:Nsmf_PDUSession",
"serviceInstanceId:1",
"dnn:starent.com",
"nfType:SMF",
"snssai:2:3"
],
"ukTags" : [
"nfInstanceId:d0f58d9b-ae65-44ab-94bd-3df819ded024"
],
"d" :
BinData(0,"CJb2JBKLAQokZDBmNThkOWItYWU2NS00NGFiLTk0YmQtM2RmODE5ZGVkMDI0EAQY
ASoFCgEzEAJKCjEwLjguNTcuMTSiAQ0KC3N0YXJlbnQuY29twgE7CgExEg9Oc21mX1BEVVNlc3N
pb24aDgoCdjESCDEuUm4uMC4wIgRodHRwOg8KCjEwLjguNTcuMTQomj8="),
"nextEvalTime" : ISODate("2019-05-01T19:59:50.551Z"),
"purge" : false,
"_v" : NumberLong(1)
}
Register AUSF and UDM with NRF
88. Run the script from the Master K8 node:
cd /root/5g
ls -lrt
./reg_query.sh
Expected:
# ./reg_query.sh
Sending NF Registration for AUSF
{"nfInstanceId":"a4202c7e-b852-4878-8a72-ef3ef9a406d3","nfType":"AUSF","nfStatus":"REGISTERED",
"plmn":{"mcc":"123","mnc":"456"},"sNssais":[{"sst":2,"sd":"3"}],"ipv4Addresses":["198.18.134.10"],
"nfServices":[{"serviceInstanceId":"Nausf_Ueauthentication","serviceName":"Nausf_Ueauthentication",
"version":[{"apiVersionInUri":"v1","apiFullVersion":"1.0"}],"schema":"http",
"ipEndPoints":[{"ipv4Address":"198.18.134.10","transport":"TCP","port":8099}],
"allowedPlmns":[{"mcc":"123","mnc":"456"}],"allowedNfTypes":["AMF"],"allowedNfDomains":["internet"],
"allowedNssais":[{"sst":2,"sd":"3"}]}]}
Status=200
Sending NF Discovery for AUSF
{"validityPeriod":3600,"nfInstances":[{"nfInstanceId":"a4202c7e-b852-4878-8a72-ef3ef9a406d3",
"nfType":"AUSF","plmn":{"mcc":"123","mnc":"456"},"sNssais":[{"sst":2,"sd":"3"}],
"ipv4Address":["198.18.134.10"],"nfServices":[{"serviceInstanceId":"Nausf_Ueauthentication",
"serviceName":"Nausf_Ueauthentication","version":[{"apiVersionInUri":"v1","apiFullVersion":"1.0"}],
"schema":"http","ipEndPoints":[{"ipv4Address":"198.18.134.10","transport":"TCP","port":8099}]}]}]}
Status=200
Sending NF Registration for UDM
{"nfInstanceId":"2602d82a-ac9f-4b5a-993d-bc725b2d770a","nfType":"UDM","nfStatus":"REGISTERED",
"plmn":{"mcc":"123","mnc":"456"},"sNssais":[{"sst":2,"sd":"3"}],"ipv4Addresses":["198.18.134.10"],
"nfServices":[{"serviceInstanceId":"Nudm_UecontextAndSubscriberData","serviceName":"Nudm_UecontextAndSubscriberData",
"version":[{"apiVersionInUri":"v1","apiFullVersion":"1.0"}],"schema":"http",
"ipEndPoints":[{"ipv4Address":"198.18.134.10","transport":"TCP","port":8099}],
"allowedPlmns":[{"mcc":"123","mnc":"456"}],"allowedNfTypes":["AMF","SMF","PCF"],
"allowedNfDomains":["internet"],"allowedNssais":[{"sst":2,"sd":"3"}]}]}
Status=200
Sending NF Discovery for UDM
{"validityPeriod":3600,"nfInstances":[{"nfInstanceId":"2602d82a-ac9f-4b5a-993d-bc725b2d770a",
"nfType":"UDM","plmn":{"mcc":"123","mnc":"456"},"sNssais":[{"sst":2,"sd":"3"}],
"ipv4Address":["198.18.134.10"],"nfServices":[{"serviceInstanceId":"Nudm_UecontextAndSubscriberData",
"serviceName":"Nudm_UecontextAndSubscriberData","version":[{"apiVersionInUri":"v1","apiFullVersion":"1.0"}],
"schema":"http","ipEndPoints":[{"ipv4Address":"198.18.134.10","transport":"TCP","port":8099}]}]}]}
Status=200
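For reference, reg_query.sh presumably wraps the 3GPP TS 29.510 NFRegister operation, which is an HTTP PUT of the NF profile to the NRF management endpoint. The IP and port below are the NRF rest-endpoint values from the Appendix; <nfInstanceId> and the abbreviated body are illustrative placeholders, not the script's exact payload:
curl -s -X PUT "http://198.18.134.30:8082/nnrf-nfm/v1/nf-instances/<nfInstanceId>" \
  -H "Content-Type: application/json" \
  -d '{"nfInstanceId":"<nfInstanceId>","nfType":"AUSF","nfStatus":"REGISTERED"}'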
89. Check that AUSF and UDM are registered in the NRF database:
kubectl exec -ti db-profile-s1-0 -n nrf mongo
profile-s1:PRIMARY> use session
switched to db session
profile-s1:PRIMARY> db.session.find().pretty();
{
"_id" : "a4202c7e-b852-4878-8a72-ef3ef9a406d3",
"tags" : [
"serviceName:Nausf_Ueauthentication",
"serviceInstanceId:Nausf_Ueauthentication",
"nfType:AUSF",
"plmn:123;456",
"snssai:2:3"
],
"ukTags" : [
"nfInstanceId:a4202c7e-b852-4878-8a72-ef3ef9a406d3"
],
"d" :
BinData(0,"CJb2JBLAAQokYTQyMDJjN2UtYjg1Mi00ODc4LThhNzItZWYzZWY5YTQwNmQzEAUY
ASIKCgMxMjMSAzQ1NioFCgEzEAJKCjEwLjguNTcuMTDCAXQKFk5hdXNmX1VFYXV0aGVudGljYXR
pb24SFk5hdXNmX1VFYXV0aGVudGljYXRpb24aCQoCdjESAzEuMCIEaHR0cDoRCgoxMC44LjU3Lj
EwIAEooz9SCgoDMTIzEgM0NTZaAQNiCGludGVybmV0agUKATMQAg=="),
"nextEvalTime" : ISODate("2019-05-01T20:03:26.063Z"),
"purge" : false,
"_v" : NumberLong(1)
}
{
"_id" : "2602d82a-ac9f-4b5a-993d-bc725b2d770a",
"tags" : [
"serviceName:Nudm_UecontextAndSubscriberData",
"serviceInstanceId:Nudm_UecontextAndSubscriberData",
"nfType:UDM",
"plmn:123;456",
"snssai:2:3"
],
"ukTags" : [
"nfInstanceId:2602d82a-ac9f-4b5a-993d-bc725b2d770a"
],
"d" :
BinData(0,"CJb2JBLVAQokMjYwMmQ4MmEtYWM5Zi00YjVhLTk5M2QtYmM3MjViMmQ3NzBhEAIY
ASIKCgMxMjMSAzQ1NioFCgEzEAJKCjEwLjguNTcuMTDCAYgBCh9OdWRtX1VFY29udGV4dEFuZFN
1YnNjcmliZXJEYXRhEh9OdWRtX1VFY29udGV4dEFuZFN1YnNjcmliZXJEYXRhGgkKAnYxEgMxLj
AiBGh0dHA6EQoKMTAuOC41Ny4xMCABKKM/UgoKAzEyMxIDNDU2WgMDBAdiCGludGVybmV0agUKA
TMQAg=="),
"nextEvalTime" : ISODate("2019-05-01T20:03:26.429Z"),
"purge" : false,
"_v" : NumberLong(1)
}
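At this point the session collection should contain one document per registered NF (AMF, PCF, SMF, AUSF, and UDM). A non-interactive count, under the same assumptions as the earlier mongo sketch:
kubectl exec db-profile-s1-0 -n nrf -- mongo --quiet --eval 'db.getSiblingDB("session").session.count({})'    # expect 5 at this stage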
Deploy UDM for SMF
90. Note: UDM is already installed on worker 1. Skip the UDM installation procedure. For the installation procedure, refer to the Appendix.
91. Check if the assigned IP/port is listening:
root@198:~# ssh worker1
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-142-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Fri May 24 00:55:02 EDT 2019

  System load:     0.19
  Usage of /:      28.9% of 72.85GB
  Memory usage:    35%
  Swap usage:      0%
  Processes:       275
  Users logged in: 1

  IP address for eth0:    198.18.134.31
  IP address for eth1:    192.168.10.11
  IP address for docker0: 172.17.0.1
  IP address for virbr0:  192.168.122.1
  IP address for weave:   10.33.0.0

  Graph this data and manage this system at:
    https://landscape.canonical.com/

New release '18.04.2 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
root@Worker1:~# netstat -an | grep 8099
tcp        0      0 198.18.134.31:8099      0.0.0.0:*               LISTEN
tcp        0      0 198.18.134.31:8099      198.18.134.32:48088     ESTABLISHED
root@Worker1:~#
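Optionally, confirm the listener answers HTTP on that socket (any response line indicates the mock server is serving; the exact status depends on the tool):
curl -is http://198.18.134.31:8099/ | head -1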
Deploy UPF
92. UPF is already deployed. The procedure to install the UPF remains the same as for CUPS-UP.
Check the build for UPF (note: the version in your lab may not match this output):
ssh admin@10.1.10.40
Password: Cisco@123
[local]POD7-UP# show ver
Wednesday April 24 16:21:33 EDT 2019
Active Software:
  Image Version:       21.12.M0.private
  Image Build Number:  private
  Image Description:   Developer_Build
  Image Date:          Mon Feb 11 20:43:36 IST 2019
  Boot Image:          /flash/sftp/qvpc-si.70902.private.bin
  Source Commit ID:    373966e83de8848770fb5b5176a9d45981dd3e60
Check the boot file
show boot
Wednesday April 24 16:24:53 EDT 2019
boot system priority 7 \
image /flash/sftp/qvpc-si.70902.private.bin \
config /flash/UPF_POD7.cfg
93. Verify key UPF-specific configuration:
show configuration
Verify user-plane-service configuration:
context ingress
sx-service sx-svc
instance-type userplane
bind ipv4-address 198.18.134.100
exit
user-plane-service user_plane_svc
associate gtpu-service pgw-gtpu pgw-ingress
associate gtpu-service sgw-gtpu-ingress sgw-ingress
associate gtpu-service sgw-gtpu-egress sgw-egress
associate gtpu-service SxU cp-tunnel
associate sx-service sx-svc
associate control-plane-group ingress
context local
control-plane-group ingress
peer-node-id i
If the N4 peers are not up, check the monitor protocol logs (option 49) and the SMF protocol pod logs, then remove and re-add the association:
context ingress
user-plane-service user_plane_svc
associate control-plane-group ingress
94. Check logs from the Master:
kubectl get pods -n smf | awk '{print $1}' | grep protocol | xargs kubectl logs -f -n smf --tail=1
kubectl get pods -n smf | awk '{print $1}' | grep node | xargs kubectl logs -f -n smf --tail=1
95. Check Sx Session Establishment between SMF & UPF:
[local]POD7-UP# show sx peers
Wednesday April 24 16:30:18 EDT 2019
+---Node Type:       (C) - CPLANE       (U) - UPLANE
|
|+--Peer Mode:       (A) - Active       (S) - Standby
|
||+-Association      (i) - Idle         (I) - Initiated
|||  State:          (A) - Associated   (R) - Releasing
|||                  (D) - Released
|||
|||+Configuration    (C) - Configured   (N) - Not Configured
|||| State:
||||
||||+IP Pool:        (E) - Enable       (D) - Disable      (N) - Not Applicable
|||||
vvvvv  Sx Service  Group Name  Node ID        Peer ID   Recovery             No of    Current   Max       Peer
       ID                                                Timestamp            Restart  Sessions  Sessions  State
-----  ----------  ----------  -------------  --------  -------------------  -------  --------  --------  ------
CAAND  5           ingress     198.18.134.13  33554434  2019-04-24:15:57:10  1        0         0         ACTIVE
Total Peers: 1
[local]POD7-UP#
96. Make a test call simulated via the Lattice tool.
Make your 5G Call
97. Run a 5G Call.
98. Collect logs on the AMF rest and service pods:
kubectl get pods -n amf
kubectl logs -f -n amf <pod name> --tail=1
OR
kubectl get pods -n amf | awk '{print $1}' | grep rest | xargs kubectl logs
-f -n amf --tail=1
kubectl get pods -n amf | awk '{print $1}' | grep service | xargs kubectl
logs -f -n amf --tail=1
99. Collect logs on the SMF rest, service, nodemgr, and protocol pods:
kubectl get pods -n smf
kubectl logs -f -n smf <pod name> --tail=1
OR
kubectl get pods -n smf | awk '{print $1}' | grep rest | xargs kubectl logs
-f -n smf --tail=1
kubectl get pods -n smf | awk '{print $1}' | grep service | xargs kubectl
logs -f -n smf --tail=1
kubectl get pods -n smf | awk '{print $1}' | grep nodemgr | xargs kubectl
logs -f -n smf --tail=1
kubectl get pods -n smf | awk '{print $1}' | grep proto | xargs kubectl
logs -f -n smf --tail=1
100. Collect logs on the PCF rest and engine pods:
kubectl get pods -n pcf
kubectl logs -f -n pcf <pod name> --tail=1
OR
kubectl get pods -n pcf | awk '{print $1}' | grep rest | xargs kubectl logs
-f -n pcf --tail=1
kubectl get pods -n pcf | awk '{print $1}' | grep pcf-engine-pcf-pcf-engine-app-blue | xargs kubectl logs -f -n pcf -c pcf --tail=1
101. Collect logs on the NRF rest and engine pods:
kubectl get pods -n nrf
kubectl logs -f -n nrf <pod name> --tail=1
OR
kubectl get pods -n nrf | awk '{print $1}' | grep rest | xargs kubectl logs
-f -n nrf --tail=1
kubectl get pods -n nrf | awk '{print $1}' | grep engine | xargs kubectl
logs -f -n nrf --tail=1
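To capture a snapshot of all NF logs for offline analysis, a small loop such as the following can help (an optional convenience sketch; --all-containers covers the 2/2 and 4/4 pods, and the file names under /tmp are arbitrary choices):
for ns in amf smf pcf nrf; do
  for p in $(kubectl get pods -n $ns -o name); do
    kubectl logs -n $ns $p --all-containers --tail=200 > /tmp/$ns-${p#pod/}.log 2>&1
  done
done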
102. Check the subscriber count on SMF / clear the subscribers from the DB:
kubectl exec -ti db-smf-subscriber1-0 -n smf mongo
smf-subscriber1:PRIMARY> use session
switched to db session
smf-subscriber1:PRIMARY> db.session.count({});
smf-subscriber1:PRIMARY> db.session.remove({});
Check the subscriber count on AMF / clear the subscribers from the DB:
kubectl exec -ti db-amf-subscriber1-0 -n amf mongo
amf-subscriber1:PRIMARY> use session
switched to db session
amf-subscriber1:PRIMARY> db.session.count({});
amf-subscriber1:PRIMARY> db.session.remove({});
Appendix:
103. CNEE Key Configuration Values:
ssh -p 2024 admin@<cnee-ops-center ip>
system mode running
helm default-repository cnee
helm repository cnee
 url https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JLoveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/cnee.2019.01.01-5/
!
k8s namespace cnee
k8s registry devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
aaa authentication users user admin
 uid        9000
 gid        100
 password   $1$sBXc9RYs$hk1wbyU44iOEBn5Ax1jxO.
 ssh_keydir /var/confd/homes/admin/.ssh
 homedir    /var/confd/homes/admin
!
aaa authentication users user readonly
 uid        9001
 gid        100
 password   $1$vI4c9C5G$4jFqSZVZazWEW3peI11D.1
 ssh_keydir /var/confd/homes/read-only/.ssh
 homedir    /var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
command autowizard
!
command enable
!
command exit
!
command help
!
command startup
!
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
104. NRF Key Configuration Values:
 nrf tracing jaeger agent udp host jaeger-agent.amf.svc.cluster.local <the Jaeger agent is running in the AMF namespace>
 nrf profile-settings nf-heartbeat-timer-seconds 604950 <choose a high value for the timer to avoid timeouts between the NRF and the NFs>
 nrf rest endpoint ip <use the NGINX IP for the rest-ep>
 nrf rest endpoint port 8082 <the port is configurable; if you change it, make sure the change is reflected in all NF configurations for NRF discovery/registration>
 nrf engine-group blue <"blue" is just a name; you can choose any NRF name>
 k8s ingress-host-name <>.nip.io <use the NGINX IP>
 replicas <you can increase the number of replicas for HA>
logging default-level debug
logging logger com.cisco
 level debug
!
license MOBILE-CORE
 encrypted-key 25D220C6817CD63603D72ED51C811F9B14BD9210E6461AAEB21AE40EC3C2EC3135915F4E35AAAF9F6853D9AD94F792AC404068FE0EF7420B06FADA05897CFAF74BEEC36E4748B312031880091CF85365
!
nrf rest endpoint ip 198.18.134.30
nrf rest endpoint port 8082
nrf engine-group blue
replicas 1
load-balancing enable true
load-balancing use-weights-from-nf-profile false
!
db profile profile-db-ep-replicas 1
db profile shard-count 1
db subscription subscription-db-ep-replicas 1
db subscription shard-count 1
system mode running
helm default-repository nrf
helm repository nrf
 url https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JLoveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/nrf.2019.01.01-5/
!
k8s namespace nrf
k8s registry devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
aaa authentication users user admin
 uid        9000
 gid        100
 password   $1$p4WCNPuq$5kYxEji2lt.y4zYlX8u6h0
 ssh_keydir /var/confd/homes/admin/.ssh
 homedir    /var/confd/homes/admin
!
aaa authentication users user readonly
 uid        9001
 gid        100
 password   $1$bKVMogip$6twRp/nMG.SvbDJ2HWaJK/
 ssh_keydir /var/confd/homes/read-only/.ssh
 homedir    /var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
command autowizard
!
command enable
!
command exit
!
command help
!
command startup
!
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
105. NSSF Key Configuration Values:
 nssf rest endpoint ip <use the NGINX IP>
 nssf rest endpoint port 8083 <8083 is the default port; you can use any port, but the change must be reflected in the other NFs' configuration>
 nssf engine-group nssf-1 <nssf-1 is the NSSF name and can be anything>
 k8s ingress-host-name <>.nip.io <use the NGINX IP>
 replicas <you can increase the number of replicas for HA>
license MOBILE-CORE
 encrypted-key 25D220C6817CD63603D72ED51C811F9B14BD9210E6461AAEB21AE40EC3C2EC3135915F4E35AAAF9F6853D9AD94F792AC404068FE0EF7420B06FADA05897CFAF74BEEC36E4748B312031880091CF85365
!
nssf rest endpoint ip 198.18.134.34
nssf rest endpoint port 8083
nssf engine-group nssf-1
replicas 1
ns-selection slice-selection-during-registration populate-nsi-info true
!
db nssai-availability nssai-availability-db-ep-replicas 1
db nssai-availability shard-count 1
system mode running
helm default-repository nssf
helm repository nssf
 url https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JLoveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/nssf.2019.01.01-5/
!
k8s namespace nssf
k8s registry devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
aaa authentication users user admin
 uid        9000
 gid        100
 password   $1$Wsur7Q/.$vTj4AnTfkQ3NSgO7HdO/D0
 ssh_keydir /var/confd/homes/admin/.ssh
 homedir    /var/confd/homes/admin
!
aaa authentication users user readonly
 uid        9001
 gid        100
 password   $1$51jkKahQ$zBCbBBwQXHIlp.kQ2zgza1
 ssh_keydir /var/confd/homes/read-only/.ssh
 homedir    /var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
command autowizard
!
command enable
!
command exit
!
command help
!
command startup
!
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
106. AMF Key Configuration Values:
 amf-address <use the NGINX IP>
 http-endpoint base-url http://<NGINX IP and port 8090; make sure this port/IP has no conflict with the SMF port/IP>
 amf-tools enable true <enables creating the mock-tools AUSF/UDM as pods on the AMF>
 amf-tools amf-mock-tool external-ip <use the NGINX IP; this will be the AUSF/UDM IP for the AMF network-function; the default port is 8099>
 network-function nrf http-endpoint base-url <use the NRF IP (NGINX IP) and the NRF port>
 network-function nssf http-endpoint base-url <use the NSSF IP (NGINX IP) and the NSSF port>
 network-function ausf <you have 2 options: either no configuration for http-endpoint base-url http://, so the AMF will discover the AUSF via NRF, or use http-endpoint base-url http://amf-mocktool:8090 so the AMF will not query the NRF for AUSF discovery>
 network-function udm <you have 2 options: either no configuration for http-endpoint base-url http://, so the AMF will discover the UDM via NRF, or use http-endpoint base-url http://amf-mocktool:8090 so the AMF will not query the NRF for UDM discovery>
 network-function smf <you have 2 options: either no configuration for http-endpoint base-url http://, so the AMF will discover the SMF via NRF, or use http-endpoint base-url http://SMF-IP:PORT so the AMF will not query the NRF for SMF discovery>
 network-function pcf <you have 2 options: either no configuration for http-endpoint base-url http://, so the AMF/SMF will discover the PCF via NRF, or use http-endpoint base-url http://PCF-IP:PORT so the AMF/SMF will not query the NRF for PCF discovery>
 sctp endpoint ip-address <choose any worker IP for SCTP; you can also use a worker IPv6 address; it should be routable to the LFS IP (gnb)>
 sctp endpoint port 1000 <any port can be used; it should be reflected on the LFS gnb>
 sctp k8-node-hostname POD4-854-w2 <corresponding worker hostname>
 k8s-amf amf-rest-ep ip-address <NGINX IP>
logging default-level trace
logging logger gfsm
level info
!
amf-global
amf-name cisco-amf
call-control-policy local
disable-init-csr-reg true
disable-rfsp-pcf
false
timers t3560 value 10
timers t3560 retry 3
timers t3550 value 5
timers t3550 retry 3
timers t3570 value 5
timers t3570 retry 3
timers t3513 value 5
timers t3513 retry 3
timers t3522 value 5
timers t3522 retry 3
timers tguard value 30
timers tguard retry 1
timers tidle value 36000
timers tidle retry 1
timers tpurge value 120
timers tpurge retry 1
timers t3502 value 36000
timers t3502 retry 1
timers t3512 value 36000
timers t3512 retry 1
security-algo 1 ciphering-algo 128-5G-EA1
security-algo 1 integity-prot-algo 128-5G-IA1
!
operator-policy local
ccp-name local
!
supi-policy 123
operator-policy-name local
!
plmn-policy 123456
operator-policy-name local
!
!
amf-services am1
 amf-name           cisco-amf
 amf-address        198.18.134.31
 http-endpoint base-url http://198.18.134.31:8090
 serving-network-id sn1
 amf-profile        ap1
operator-policy-name local
guamis mcc 123 mnc 456 region-id 1 set-id 2 pointer 3
tai-groups tg1
!
tais mcc 123 mnc 456 tac 10
!
tais mcc 123 mnc 456 tac 20
!
tais mcc 123 mnc 456 tac 30
!
slices name s1
sst 01
sdt 000001
!
slices name s2
sst 02
sdt 000001
!
slices name s3
sst 02
sdt 000003
!
!
amf-profiles ap1
!
tai-groups name tg1
tais mcc 123 mnc 456 tac 10
name t1
!
tais mcc 123 mnc 456 tac 20
name t2
!
tais mcc 123 mnc 456 tac 30
name t3
!
!
network-function nrf
http-endpoint base-url http://198.18.134.30:8082
!
network-function nssf
http-endpoint base-url http://nssf-rest-ep.nssf.svc.cluster.local:8083/
!
network-function ausf
discovery-profile discover-by-plmn true
!
network-function smf
http-endpoint base-url http://198.18.134.32:8090
discovery-profile discover-by-plmn true
discovery-profile discover-by-slice true
discovery-profile discover-by-dnn true
!
network-function pcf
http-endpoint base-url http://198.18.134.30:9082
discovery-profile discover-by-plmn true
discovery-profile discover-by-slice true
!
network-function udm
discovery-profile discover-by-plmn true
!
sctp endpoint ip-address 10.1.20.13
sctp endpoint port 1000
sctp k8-node-hostname worker3
k8s-amf amf-service no-of-replicas 1
k8s-amf amf-protocol-ep no-of-replicas 1
k8s-amf amf-rest-ep no-of-replicas 1
k8s-amf amf-rest-ep ip-address 198.18.134.31
amf-tools enable true
amf-tools amf-mock-tool external-ip 198.18.134.30
system mode running
helm default-repository amf
helm repository amf
 url https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JLoveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/amf.2019.01.01-5/
!
k8s namespace amf
k8s registry devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
aaa authentication users user admin
 uid        9000
 gid        100
 password   $1$DzzGJ0op$udlVY1o9Sj4lJG10q0ij41
 ssh_keydir /var/confd/homes/admin/.ssh
 homedir    /var/confd/homes/admin
!
aaa authentication users user readonly
 uid        9001
 gid        100
 password   $1$CZQZ5AOe$K9mmVK99GH8RmcdhDiGQf0
 ssh_keydir /var/confd/homes/read-only/.ssh
 homedir    /var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
 command autowizard
 !
 command enable
 !
 command exit
 !
 command help
 !
 command startup
 !
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
107. PCF Key Configuration Values:
 debug tracing jaeger agent udp host jaeger-agent.amf.svc.cluster.local <use the AMF Jaeger agent>
 rest-endpoint ips [] <NGINX IP/Master IP>
 rest-endpoint port 9082 <default rest-ep port used by the other NFs>
 service-registration registry url http://198.18.134.10:8082/nnrf-nfm/v1 <NRF NGINX IP/port>
 k8s ingress-host-name <>.nip.io <NGINX IP>
db spr shard-count 1
db session shard-count 1
license MOBILE-CORE
 encrypted-key 25D220C6817CD63603D72ED51C811F9B14BD9210E6461AAEB21AE40EC3C2EC3135915F4E35AAAF9F6853D9AD94F792AC404068FE0EF7420B06FADA05897CFAF74BEEC36E4748B312031880091CF85365
!
debug tracing type OPENTRACING_JAEGER
debug tracing jaeger agent udp host jaeger-collector.amf.svc.cluster.local
debug tracing jaeger agent udp port 9411
debug logging default-level warn
debug logging logger com.broadhop.utilities.queue.redis.local.RedisMessageCluster
level error
!
debug logging logger com.cisco
level debug
!
debug logging logger io
level warn
!
debug logging logger org
level warn
!
rest-endpoint ips [ 198.18.134.30 ]
rest-endpoint port 9082
rest-endpoint tracing-service-name pcf
service-registration registry url http://198.18.134.30:8082/nnrf-nfm/v1
service-registration services amfService
allowed-plmns 123 456
!
allowed-nssais 2
sd 3
!
!
service-registration services smfService
allowed-plmns 123 456
!
allowed-nssais 2
sd 3
!
!
service-registration profile instance-id pcf-1
service-registration profile plmn mcc 123
service-registration profile plmn mnc 456
service-registration profile snssais 2
sd 3
!
engine blue
 replicas 1
 repository pcf
tracing-service-name pcf
!
system mode running
helm default-repository pcf
helm repository pcf
 url https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JLoveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/pcf.2019.01.01-5/
!
k8s namespace pcf
k8s registry devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
aaa authentication users user admin
 uid        9000
 gid        100
 password   $1$kDHg9NBW$5vgkYdoVRsizGsUP.m5u31
 ssh_keydir /var/confd/homes/admin/.ssh
 homedir    /var/confd/homes/admin
!
aaa authentication users user readonly
 uid        9001
 gid        100
 password   $1$VaxeFYup$0OTiaD2L6WqM6/WnEM/4A1
 ssh_keydir /var/confd/homes/read-only/.ssh
 homedir    /var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
command autowizard
!
command enable
!
command exit
!
command help
!
command startup
!
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
root@Master:~/CLUS/configs#
108. SMF Key Configuration Values
 k8s ingress-host-name <>.nip.io <use the NGINX/Master IP>
 profile dnn starent.com <DNN; by default the DNN is starent.com>
 bind-address ipv4 <> <worker IP, or any interface on the worker that is labeled for SSD>
 profile network-element nrf nrf1 <NRF IP/port for NF registration and discovery>
 profile network-element amf amf1 <AMF rest-ep IP/port, usually the NGINX/Master IP; check the AMF configuration>
 profile network-element pcf pcf1 <if no http-endpoint is configured, the NRF will be used for the discovery/registration procedure>
 profile network-element udm udm1 http-endpoint base-url <the UDM created on the worker; this is not the same as the AMF mock tool>
 n4-peer-address <UPF N4 IP address>
 k8 smf profile protocol node-label ssd1 <worker hostname that is labeled as SSD>
 k8 smf profile rest-ep external-ip [ 198.18.134.14 ] <use any worker interface as rest-ep; you can also use NGINX if the AMF rest-ep uses a different port>
helm default-repository smf
helm repository smf
 url https://tmelabuser.gen:AKCp5ccGGPUZEBuXsu2LkSRXNF45kBz9JLoveT9uvZkS9yXgSHiqCaTWHpVXLZEuoULuf5cPY@devhub.cisco.com/artifactory/mobile-cnat-charts-release/mobile-cnat-smf/smf-products/2019-01-30_Disktype/
!
k8s namespace smf
k8s registry devhub-docker.cisco.com/mobile-cnat-docker-release
k8s single-node false
k8s use-volume-claims false
k8s image-pull-secrets regcred
k8s ingress-host-name 198.18.134.30.nip.io
profile dnn internet
network-element-profile-list chf [ chgser1 ]
charging-profile chgprf1
ssc-mode 1
ipv4-pool name poolv4
prefix 10.100.0.0/24
ip-range start 10.100.0.1
ip-range end 10.100.0.254
 vrf ISP
!
!
profile charging chgprf1
method none
!
profile smf smf1
 node-id          abcdef
 dnn-profile-list [ internet ]
 bind-port        8090
 allowed-nssai    [ slice1 ]
 service name nsmf-pdu
  type    pdu-session
  schema  http
  version 1.Rn.0.0
n4-bind-address ipv4 198.18.134.34
http-endpoint base-url http://smf-service
network-element-profile-list upf [ upf1 ]
network-element-profile-list pcf [ pcf1 ]
network-element-profile-list nrf [ nrf1 ]
network-element-profile-list amf [ amf1 ]
dnn-profile-list [ internet ]
!
!
profile network-element nrf nrf1
http-endpoint base-url http://nrf-rest-ep.nrf.svc.cluster.local:8082
!
profile network-element amf amf1
http-endpoint base-url http://198.18.134.31:8090
!
profile network-element pcf pcf1
http-endpoint base-url http://198.18.134.30:9082
!
profile network-element udm udm1
http-endpoint base-url http://198.18.134.31:8099
!
profile network-element upf upf1
n4-peer-address ipv4 198.18.134.40
n4-peer-port 8805
 keepalive 60
 dnn-list  [ internet ]
!
profile network-element chf chgser1
ip-address 10.8.51.115
port 8099
!
!
k8 smf local etcd endpoint host etcd
k8 smf local etcd endpoint port 2379
k8 smf local etcd no-of-replicas 1
k8 smf local datastore-endpoint smf-datastore-ep:8980
k8 smf local redis-endpoint redis-primary:6379
k8 smf local coverage-build false
k8 smf local service no-of-replicas 1
k8 smf local nodemgr no-of-replicas 1
k8 smf local tracing enable true
k8 smf local tracing enable-trace-percent 100
k8 smf local tracing endpoint host jaeger-collector.amf.svc.cluster.local
k8 smf local tracing endpoint port 9411
k8 smf profile protocol no-of-replicas 1
k8 smf profile protocol node-label ssd1
k8 smf profile protocol external-ip [ 198.18.134.35 ]
k8 smf profile rest-ep no-of-replicas 1
k8 smf profile rest-ep external-ip [ 198.18.134.32 ]
nssai name slice1
sst 02
sdt 000003
!
aaa authentication users user admin
 uid        9000
 gid        100
 password   $1$jxylg3co$KhWSMUkj3VgdGsvquWhn.0
 ssh_keydir /var/confd/homes/admin/.ssh
 homedir    /var/confd/homes/admin
!
aaa authentication users user readonly
 uid        9001
 gid        100
 password   $1$R1ci3Dil$ax/hX2mQ7XNHJaZfIKcko1
 ssh_keydir /var/confd/homes/read-only/.ssh
 homedir    /var/confd/homes/read-only
!
aaa ios level 0
prompt "\h> "
!
aaa ios level 15
prompt "\h# "
!
aaa ios privilege exec
level 0
command action
!
command autowizard
!
command enable
!
command exit
!
command help
!
command startup
!
!
level 15
command configure
!
!
!
nacm write-default deny
nacm groups group admin
user-name [ admin ]
!
nacm groups group bulkstats
user-name [ admin ]
!
nacm groups group crd-read-only
user-name [ admin ]
!
nacm groups group crd-read-write
user-name [ admin ]
!
nacm groups group grafana-admin
user-name [ admin ]
!
nacm groups group grafana-editor
user-name [ admin ]
!
nacm groups group policy-admin
user-name [ admin ]
!
nacm groups group policy-ro
user-name [ admin readonly ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list confd-api-manager
group [ confd-api-manager ]
rule any-access
action permit
!
!
109. UPF Configuration:
config
cli hidden
tech-support test-commands encrypted password ***
license key "\
VER=1|DOI=1548084337|DOE=1563722737|ISS=3|NUM=201851|CMT=dcloud_cnat_u\
pf|LEC=1000|FR4=Y|FSR=Y|FPM=Y|FID=Y|FI6=Y|FLI=Y|FFA=Y|FCA=Y|FTM=Y|FTP=\
Y|FDC=Y|FAA=Y|FDQ=Y|FEL=Y|BEP=Y|FAI=Y|LPP=1000|LSF=1000|LGW=1000|HIL=X\
T2|LSB=1000|FMF=Y|FEE=Y|FIT=Y|FDS=Y|LSE=1000|FGD=Y|FWI=Y|FNQ=Y|FGX=Y|F\
WT=Y|LCU=1000|LUU=1000|FL7=Y|FRD=Y|LTO=1000|FNS=Y|LNS=1000|FTN=Y|SIG=M\
CwCEyJS111wSK4+zFdE6WQiIRlKaM8CFQC9+D7ResO4lsdcXIe4LxZpUApERQ"
system hostname UPF
autoconfirm
iftask restart-enable
context local
interface LOCAL1
ip address 10.1.10.40 255.255.255.0
#exit
ssh key
+B0gwh55fluzwh80505kuz6g6qjj3dyouuq2e8trd3bl6obhzmyeia2px9v6y25mhdm2khuwmda
ghpju1kc1u2jec6y6928xohbi8b1pr40y1z0m40iq6h22ord1cwhn64y125wuqlhkohzv434ubh
wemd2qkr3phw8audz8hrr12fis70ghnm3l3rom2opka817h0m30ni40u2n1k1lwkrenpoqbsw3j
tv19ewvynys3lo0iyd2hic260ov2yixtqio0s3v2os5zyt5jjb0yd9pjgss9ifa2626yygpip9j
x1adz828d7mxxo3jdlk6pvnycap34dy59lqhos9k3owl2o73n0b0z0hd6pzx5za2jv2xwh6fsnv
5qzv1vs3c36zjz3qo1wssjgbexd9up07wcfb04jp51s3fxvvmewor9ko3dpylgpmorc912tqowc
g8cxk2a1t1leo6dk4h2h3l8t0rc3rkbr510qnad3vqqbqg02q7wqowcai6r0zdvyioh9nxbj0r3
omip6vvif50gsaa6txxhxzv3qjfhwrpsjdc93pfts13o0ap5r0tzhgba7idcim2tpb25a22vygf
3gkyhk0ccyl9p2jttqzis07olm2xsken0f08xul06te4600gclly3fp48ekra7pg53w17y2dcrh
iis2db1znw8qzbiv1fjpprbt06dsh21o5epz1y6dqd1x3nohhdhcji80h6tlyohd06ha2jctfu9
ovxqzu21n3khdbwyexl3380bu4dtvs9314s9lakax20kt3j3cub29uo1ns23hh5rmr6diti33wr
j52fyf0cf03wjbm020mq452l2th1ggc3fzt3u79gsl6lwii43of1ppx5noost1ywi0qdwt8vlu1
68iyyhfi0e80\
+B0otsezlz5gzmo0f34p5ba9ix7s3bi54k8m5zujj1jzex2i1g1ytz1mdvi104hoj7o0vopbke3
i8dfl1d0km4320u12d2d2gnum4zq8bw3ctfzh4w8n3e931inzn5143bfl179bsvm0rbwx209ruy
0408a6qd0hfke0nw7xw2z2pi2piwz33se60nqn2ihhz7kmr1v9lt9tkhpjkb13jmazib4bmu43t
wqd4abmhxnx13rrgkhtz9tg319nn9c4n7weqs3az64hs63lety2uq2wft8zuixq1hi4kr1b7kk0
q2thkfefh6tkkd1yoozump004xc2tqsqhnevogqp0yfnqn8j3456f3qn1nek5sx2se3r6vqb4u0
oc1v0d5k06ugfncuv3eol2twgboduo3rg2zyl50jcyl1220q4ir5pvvs1aof9gnnllg240l5x8y
qsbbwhr11sedaitdmjrs0qrh923js606t3ljfokhmpna822mrta02212lrr3holss6mfj7v5344
vn2eibl5p12y5wtpp780bj21qr4181ibbkt004v7jau4nrmpi2ky0c33po022d12zxocpf4w5q6
0lb2ofyp76pgn2hyike9v12ar428stza5im6wvj1v2nzxrdi7yhh1djb5vbzbo14i34axwssja4
r292oj7mbmextkdb3dy7g65fvg7tf048pg2dy77o8j2uhzgyl70duha16tlq9zawplir30y39yl
8bsw4u\
len 937 type v2-rsa
server sshd
subsystem sftp
#exit
subscriber default
exit
administrator admin encrypted password *** ftp
aaa group default
#exit
gtpp group default
#exit
ip route 0.0.0.0 0.0.0.0 10.1.10.19 LOCAL1
#exit
port ethernet 1/1
no shutdown
bind interface LOCAL1 local
#exit
task facility sessmgr max 1
task facility chmgr max 1
task facility chmgr per-sesscard-count 0
mme-manager
congestion-control cpu-utilization threshold 90 tolerance 10
#exit
context SAEGW
interface saegw_up_ingress
ip address 198.18.134.40 255.255.192.0
ip address 198.18.134.41 255.255.192.0 secondary
ip address 198.18.134.42 255.255.192.0 secondary
ip address 198.18.134.43 255.255.192.0 secondary
#exit
subscriber default
exit
aaa group default
#exit
gtpp group default
#exit
gtpu-service SxU
bind ipv4-address 198.18.134.40
exit
gtpu-service pgw-gtpu
bind ipv4-address 198.18.134.43
exit
gtpu-service sgw-gtpu-egress
bind ipv4-address 198.18.134.42
exit
gtpu-service sgw-gtpu-ingress
bind ipv4-address 198.18.134.41
exit
sx-service sx-svc
instance-type userplane
bind ipv4-address 198.18.134.40
exit
user-plane-service user_plane_sv
associate control-plane-group SAEGW
exit
user-plane-service user_plane_svc
associate gtpu-service pgw-gtpu pgw-ingress
associate gtpu-service sgw-gtpu-ingress sgw-ingress
associate gtpu-service sgw-gtpu-egress sgw-egress
associate gtpu-service SxU cp-tunnel
associate sx-service sx-svc
associate control-plane-group SAEGW
exit
ip route 0.0.0.0 0.0.0.0 198.18.128.1 saegw_up_ingress
#exit
context ISP
interface loop1 loopback
ip address 8.8.8.8 255.255.255.255
#exit
interface sgi
ip address 10.1.30.141 255.255.255.0
#exit
subscriber default
exit
apn starent.com
pdp-type ipv4 ipv6
bearer-control-mode none prefer-local-value
selection-mode subscribed sent-by-ms chosen-by-sgsn
exit
aaa group default
#exit
gtpp group default
#exit
ip route 0.0.0.0 0.0.0.0 10.1.30.1 sgi
#exit
control-plane-group SAEGW
peer-node-id ipv4-address 198.18.134.35 interface n4
#exit
user-plane-group default
#exit
port ethernet 1/10
no shutdown
bind interface saegw_up_ingress SAEGW
#exit
port ethernet 1/11
no shutdown
#exit
end
110. UDM for SMF installation procedure:
A standalone UDM is required for PDU registration. This UDM is different from the mock tool on the AMF (which is used for UE authentication/registration) and needs to be installed on a different IP.
 Choose any worker.
 Download smf-mock-servers.tar.gz from this cisco-box.
 tar -xvzf smf-mock-servers.tar.gz
 Install the golang packages as below.
 Download go1.10.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.10.1.linux-amd64.tar.gz
tar -xvzf smf-mock-servers.tar.gz
cd smf-mock-servers/src/smf-mock-servers/
export PATH=$PATH:/usr/local/go/bin
nohup ./run-mock-tools -ip=<worker-IP> > run-mock-tools.log &
netstat -plan | grep 8099
root@cnlab16:~# netstat -plan | grep 8099
tcp        0      0 198.18.134.15:8099      0.0.0.0:*      LISTEN      20824/main
Use this UDM IP in the SMF configuration:
profile network-element udm udm1
 http-endpoint base-url http://<>:8099
!
Login to the Lattice server from the Master node:
ssh client
Password: starent
Download