Executing a Cloud Deployment

Key Concepts: Cloud Deployment vs. Cloud Migration
o Cloud Deployment: Involves placing resources in the cloud. Examples:
  Deploying virtual machines (VMs) and a load balancer to run an application.
  Backing up files to cloud-based object storage.
  Deployment can vary in complexity (e.g., small-scale backups vs. large application setups).
o Cloud Migration: Refers to moving existing resources into the cloud.
  Resources may come from a data center or another cloud provider.
  Does not involve physically moving hardware.
  Example migration process:
    1. Set up a new resource (e.g., a database VM) in the cloud with similar or better specs.
    2. Copy the data from the existing resource to the new one.
  Outcome: Two resources temporarily exist (one in the cloud, one in the data center), but the cloud resource is used going forward.

Important Differences in Migration
o Migrations are not typically "lift and shift" (i.e., directly transferring the resource from one location to another).
o Instead, the process requires:
  Re-creating the environment in the cloud.
  Adjusting systems to accommodate differences between cloud and data center infrastructures.
o Key Takeaways
  Deployment = placing resources in the cloud (e.g., setting up new VMs, backing up files).
  Migration = re-creating and transferring data to new cloud-based resources, not physically moving hardware.
  Migration involves significant adaptations and may require maintaining backups in the original data center for safety.

Cloud vs. Data Center Operations
o Key Point: Cloud operations differ significantly from on-premises (data center) operations.
o Real-Life Example: Load Balancing Challenge
  A fleet of on-premises servers used a software-based load-balancing method that relied on layer 2 multicast.
  This method, while unconventional, worked in the data center.
  Cloud providers do not support layer 2 multicast or broadcast, so migrating these VMs to the cloud wouldn't work without rearchitecting the system.
  Solution: Replace the software-based method with an external load balancer.
o Takeaway
  The cloud requires different operational methods and architectural adjustments compared to data center environments.
  It is important to consider these differences when transitioning from data center to cloud systems.

Understanding Deployment and Change Management

Key Points
o Deployment and change management involve the processes for deploying and changing resources in the cloud.

Scope of Application
o These concepts apply to:
  Initial cloud deployments.
  Any subsequent changes or deployments of new cloud resources.

Takeaway
o Deployment and change management are ongoing processes that are essential for maintaining and evolving cloud operations.

Change Management

Definition and Purpose
o Change management: The process of managing all aspects of upgrades, repairs, and reconfigurations of cloud services.
  Goal: Minimize service disruptions during changes.
  Common in both cloud operations and enterprise data centers.

Typical Steps in the Change Management Process (see the sketch below)
1. Submit a change request.
2. Develop an implementation plan for the change.
3. Create a backout plan (in case the change is unsuccessful).
4. Get approvals from stakeholders.
5. Implement and test the change.
6. Update documentation as needed.
7. Conduct post-change reviews to assess the outcome.

Importance of Change Management
o Critical for successful cloud operations and managing ongoing changes.
o Helps ensure that:
  Cloud architecture meets immediate and future needs.
  Disruptions to operations are minimized.
  Risks associated with changes are mitigated.
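The change management steps above map naturally onto a simple state machine. A minimal illustrative sketch in Python (the class and state names are hypothetical, not taken from any particular ticketing product):

    from dataclasses import dataclass
    from enum import Enum, auto

    class ChangeState(Enum):
        SUBMITTED = auto()
        APPROVED = auto()
        IMPLEMENTED = auto()
        BACKED_OUT = auto()

    @dataclass
    class ChangeRequest:
        requester: str
        description: str
        implementation_plan: str
        backout_plan: str                      # required before approval
        state: ChangeState = ChangeState.SUBMITTED

        def approve(self):
            # Stakeholder sign-off; a real CAB review would happen here.
            if not self.backout_plan:
                raise ValueError("No backout plan; change cannot be approved")
            self.state = ChangeState.APPROVED

        def implement(self, tests_passed: bool):
            # Implement and test; fall back to the backout plan on failure.
            if self.state is not ChangeState.APPROVED:
                raise RuntimeError("Change was never approved")
            self.state = (ChangeState.IMPLEMENTED if tests_passed
                          else ChangeState.BACKED_OUT)

The point of the sketch is that approval is gated on the backout plan existing, and a failed test automatically drives the request into the backed-out state rather than leaving it half-applied.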
Organizational Impact on Change Management
o Smaller organizations: Change management is typically easier and faster; there may not even be a formal process.
o Larger organizations: Change management tends to be more complex and time-consuming. It requires detailed procedures, including:
  Name of the requester.
  Description of the change and the reason for it.
  Development of testing and backout plans.
  Documentation of the risks involved.
  Resource requirements and coordination across teams.
  Assigning responsibilities for design, configuration, deployment, and validation.
  Reviewing dependencies and conflicts with other changes.

Key Considerations
o Primary purpose: Prevent problems, not just enable progress.
o Adaptability: In low-risk scenarios, strict change management procedures may be bypassed with proper approvals. Change management policies should align with the level of risk involved in the proposed change.

IT Perception of Change Management
o Often viewed as a hindrance to progress; however, its value lies in reducing risks and ensuring smooth operations.

Obtaining Buy-In from All Involved Parties

Importance of Keeping Stakeholders Informed
o Essential to update all interested parties on:
  Migration plans and progress.
  Timelines and any changes that could impact current operations.
o Relevant groups may include finance, production, human resources, and other non-IT departments requiring updates.

Role of the Change Review Group
o In medium to large organizations, a formal change review group oversees and approves changes.
  Sometimes called a Change Advisory Board (CAB).
  Composed of managers, architects, and representatives of project stakeholders.
o Responsibilities:
  Manage risk associated with changes.
  Ensure no conflicting changes are scheduled for the same maintenance window.
  Provide multiple perspectives to identify potential issues before implementation.

Key Questions from the Change Review Group
1. What do you expect to gain from the change?
2. What are the risks of implementing or not implementing the change?
3. Which functional departments will be involved in the change process?
4. How long will it take to implement and validate the change?
5. How long will it take to back out the change if necessary?

Impact Considerations
o The group will assess:
  Potential effects on ongoing operations.
  Service level agreements (SLAs) with customers.
  Costs associated with unforeseen outages.

Decision Outcomes for Change Requests
o Change requests may be:
  Approved.
  Denied.
  Returned for further investigation by the review team.

Setting a Realistic Migration Timeline

Importance of Timelines in Migration
o Timelines must be established as part of the planning and implementation process.
o Incremental migrations help:
  Reduce the risk of outages.
  Avoid the need to reverse the migration due to unforeseen issues.

Example: Two-Tier Web Application Migration
o Application setup: A web front end and a database back end, both running in a data center.
o Incremental migration process:
  Migrate the web front end to the cloud.
  Keep the database back end on-premises.
  Update the domain name to point to the cloud servers.
o Reversion strategy: If issues arise, revert by updating DNS records to point back to the original setup (see the sketch below).
o Advantage: Enables testing in a live production environment with real users.
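If the cloud provider happens to be AWS, the DNS cutover (and reversion) in the two-tier example can be scripted with boto3 and Route 53. A sketch under that assumption; the zone ID, hostname, and addresses are placeholders:

    import boto3

    route53 = boto3.client("route53")

    def point_www_at(ip_address, zone_id="Z123EXAMPLE"):
        """Repoint www at either the cloud front end or the original
        data center address; reverting is the same call with the old IP."""
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Comment": "Two-tier migration cutover",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "TTL": 60,  # short TTL so a reversion takes effect quickly
                        "ResourceRecords": [{"Value": ip_address}],
                    },
                }],
            },
        )

    # Cut over to the cloud front end; revert by calling with the old address.
    point_www_at("203.0.113.10")

Note the deliberately short TTL: it is what makes the reversion strategy fast, because resolvers stop caching the old answer within a minute.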
Starting Small for Effective Migration
o Initial focus: Begin with small, noncritical systems that are easy to implement.
o Benefits:
  Builds experience in the migration process.
  Helps determine realistic timeframes for larger, more complex migrations.

Best Practices: Maintenance Window
o Migrations should be performed during a maintenance window:
  A scheduled time for maintenance when planned outages are acceptable.
  Reduces the risk of unintended visible effects on operations.
o Importance: Prevents unexpected issues from causing disruptions during regular operations.

Time Allocation for Migration
o Allocate time for:
  1. Sequential processes required during the migration.
  2. Post-migration testing and validation: Ensure all stakeholders verify system functionality.
  3. Backout procedures if the migration fails: Restore the original site or setup as a fallback.
o Buffer time: Extend the time allocated for each phase to account for unexpected delays.

Key Takeaway
o A well-structured migration timeline includes:
  Incremental deployment.
  Thorough testing and validation.
  Sufficient buffer time.
o These elements are crucial to minimize risks and ensure a successful migration.

Documenting and Following Procedures

Importance of Documentation in Migration
o Complete documentation is critical for a successful migration.
o Key steps include:
  Review and update current documentation.
  Create accurate system diagrams.
  Ensure up-to-date backups of all systems being migrated.
o Sources for information:
  Network monitoring and management systems.
  Device configuration downloads.
  Vendor support documents.

Documentation Before and After Migration
o Post-migration documentation will differ from pre-migration documentation:
  Example: VLANs in data centers may not exist in the cloud.
  Different cloud provider architectures may require changes.
o Result: Two sets of documentation, one from before the migration and one from after.
o Proper documentation ensures ongoing support for cloud deployments.

Key Components of Migration Documentation
o Include:
  Internal and public IP addresses.
  Routing information.
  Firewall placements, including allowed ports and protocols.
  Additional components such as VPNs, load balancers, and application firewalls.
o Regular updates: Documentation must be updated to reflect any changes.

Infrastructure-as-Code (IaC) as Documentation
o IaC allows you to programmatically define and deploy cloud infrastructure using templates (a sketch follows at the end of this documentation topic).
o Advantages:
  IaC templates serve as prewritten documentation.
  Reduces the need to update post-deployment documentation.
o However, you'll still need to update:
  Network diagrams.
  Any manual configuration changes.

Network Planning and Documentation
o Start early and collaborate with the cloud service provider.
o Helps with:
  Correct selection and procurement of necessary networking hardware and software.
  Ordering data circuits for interconnecting locations.

Essential Network Documentation Sections
o Network core: Detailed diagrams showing IP subnets, firewall rules, and redundancy plans. Include configuration scripts for installation, maintenance, and troubleshooting.
o Access and distribution networks: Diagrams for wide area network (WAN) connections, including:
  VPN links.
  Routing tables.
  Access control lists.
  Connections to the cloud provider's network, corporate office, and data centers.
o Network management section: A map showing network operations center (NOC) connections and the devices being monitored by network management systems.
o Services section: Details on caching systems, DNS, logging, load balancers, and so on. Include IDS/IPS and network analyzers.
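To make the "IaC template as documentation" idea concrete, here is a minimal, provider-neutral sketch: a CloudFormation-style structure expressed as a plain Python dict. The resource type names and properties are hypothetical, chosen only to show how the template itself records the facts (subnets, firewall rules, instance sizing) that migration documentation needs:

    import json

    template = {
        "Description": "Two-tier web app: front end in a public subnet",
        "Resources": {
            "WebSubnet": {
                "Type": "Network/Subnet",              # hypothetical type name
                "Properties": {"CidrBlock": "10.0.1.0/24"},
            },
            "WebFirewallRule": {
                "Type": "Network/IngressRule",
                "Properties": {"Protocol": "tcp", "Port": 443,
                               "Source": "0.0.0.0/0"},
            },
            "WebServer": {
                "Type": "Compute/Instance",
                "Properties": {"InstanceType": "medium",
                               "Subnet": {"Ref": "WebSubnet"}},
            },
        },
    }

    # The rendered template doubles as human-readable documentation.
    print(json.dumps(template, indent=2))

Because the template is the deployment source of truth, reading it answers most "what is deployed and how is it wired" questions without a separate document.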
Automating Network Documentation
o Some cloud-based tools can automate network discovery in production environments:
  Create detailed diagrams and configuration documents.
  Continuously monitor and update network changes.
o Off-the-shelf applications are available for this purpose.

Design Phase Documentation
o Network diagrams and IP addressing plans should be created in the design phase.
o Benefits:
  Identify potential issues early and plan remediation.
  Detailed documentation helps both the cloud provider and the consumer.
  Acts as a road map during implementation.
  Serves as a troubleshooting tool during ongoing operations.

Capacity Planning and Expansion
o Network diagrams are vital for capacity planning and network growth.
o They serve as a starting point for planning expansions in the cloud deployment.

Key Takeaways
o Accurate, up-to-date documentation:
  Supports implementation, troubleshooting, and future planning.
  Reduces time wasted during troubleshooting.
  Facilitates smooth operations and growth of cloud deployments.

What Is a Cloud Workflow?

Definition of a Cloud Workflow
o A cloud workflow is a series of steps or activities required to complete a task.
o Purpose: Manages the state of a project using cloud-based workflow architectures.
o Often involves the interoperation of multiple components and applications in the cloud.

Example: E-Commerce Workflow
o Steps required in an online transaction include:
  Shopping cart.
  Checkout.
  Financial transaction.
  Warehousing.
  Shipping functions.
o Characteristics of each step:
  Each step has specific requirements before and after its process.
  An outside event typically initiates each process.

Workflow Functionality in the Cloud
o A cloud workflow service:
  Manages the steps required to complete a process.
  Can include human processes, parallel steps, and sequential steps.
o Acts as a state tracking and coordination system in the cloud (see the sketch below).

Workflow in Cloud Migration
o The cloud migration process can benefit from a workflow-based approach.
o Project management teams can:
  Design and implement workflows.
  Use workflows to track and coordinate migration steps efficiently.
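The e-commerce workflow above is essentially a coordinator that records state and advances an order from step to step. A toy sketch of that idea (this is not any particular cloud workflow service; the step names come from the example, and the handler functions are stubs):

    # Ordered steps of the online transaction; each step's handler must
    # succeed before the workflow advances. The coordinator tracks state.
    STEPS = ["cart", "checkout", "payment", "warehousing", "shipping"]

    def run_workflow(order, handlers):
        """Advance the order through each step, recording state as we go."""
        for step in STEPS:
            order["state"] = step
            if not handlers[step](order):   # an outside event/handler per step
                raise RuntimeError(f"Order {order['id']} failed at {step}")
        order["state"] = "complete"
        return order

    handlers = {step: (lambda o: True) for step in STEPS}   # stub handlers
    print(run_workflow({"id": 42}, handlers))

A real workflow service adds what the sketch omits: durable state storage, retries, parallel branches, and waiting on human approval steps.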
Setting Up Your Cloud for Automation

Cloud Automation Overview
o Cloud automation is a core feature of the virtualized data center.
o Public cloud automation is provided by the cloud service provider and offered through:
  A web dashboard.
  An API (application programming interface).
  An SDK (software development kit).
  A command-line interface (CLI).

Centralized Automation for Hybrid Clouds
o Global cloud management systems:
  Offered by various vendors and service providers.
  Enable centralized management of hybrid cloud deployments.
  Allow automation systems to connect to and manage multiple clouds simultaneously.

Complexity of Cloud Automation
o Automation is a complex and detailed topic covering advanced processes beyond the scope of the Cloud+ exam.
o Future examples will explore automation systems in cloud deployment models.

What Are Cloud Tools and Management Systems?

Importance of Cloud Tools and Monitoring
o Managing and monitoring the deployment is critical for successfully operating a cloud environment.
o Ongoing monitoring ensures:
  All components operate within defined performance ranges.
  Systems are properly configured, secure, and meet performance agreements.

Key Metrics to Monitor in a Cloud Deployment
o Commonly monitored data points include:
  CPU, memory, and disk usage on servers.
  Network interface statistics.
  Application logging.
o Thousands of objects can be monitored; focus on what is important for ongoing operations.

Automated Monitoring and Responses
o Cloud providers offer integrated monitoring tools with orchestration platforms.
o These tools can configure automated responses to specific metrics.
  Example: Provision more instances if CPU utilization on web servers exceeds a threshold.

Options for Monitoring Tools
o You are not limited to tools provided by the cloud provider. Options include:
  Traditional network management tools, extended for cloud services.
  New cloud-specific products and services developed by startups.

Components of a Network Management Solution
o Network management operations center: Houses the systems that monitor and collect information from cloud-hosted devices.
o The FCAPS acronym covers the key management areas:
  Fault management.
  Configuration management.
  Accounting.
  Performance management.
  Security.

Architecture of a Managed Service Operation
o Consists of servers running specialized monitoring applications that request metrics from endpoint devices.
o Management systems collect logs from:
  Servers, network equipment, and storage systems.
  Load balancers, VPN concentrators, and firewalls.

Complexity of Cloud Management
o Cloud services being remote and hosted in a shared environment adds complexity to:
  Meeting compliance requirements.
  Protecting data.
  Performing ongoing maintenance and problem resolution.

Cloud Deployment Models

Overview of Cloud Deployment Models
o Cloud services are delivered through various deployment models.
o Combinations of models are common in the market.

Public
o Definition: Infrastructure designed for use by the general public.
o Providers: Offered by private corporations, government organizations, and academic institutions.
o Hosting and Infrastructure:
  The service is hosted in data centers controlled by the provider.
  Utilizes shared hardware resources.
o Visual Reference: Example infrastructure is depicted in Figure 2.3.

Private
o Definition: Designed for use by a single organization (refer to Figure 2.4). Multiple units within the organization can utilize it.
o Ownership: Can be wholly owned by the organization, owned by a third-party provider, or a combination of both.
o Hosting Options: On-premises (within the organization's facilities) or off-premises (at a hosting facility).
o Hardware Design: Typically uses dedicated hardware, unlike the shared hardware in public clouds.

Hybrid
o Definition: A combination of two or more cloud delivery models (e.g., private, community, or public).
o Examples of Hybrid Model Use:
  Cloud bursting: Used to handle peak processing loads.
  Load balancing: Balances workloads between different delivery models.
o Visual Reference: Example scenarios are illustrated in Figure 2.5.

Community
o Definition: Designed for a specific community of interest and shared by companies with similar requirements (e.g., business needs, regulatory compliance, security, or policy).
o Ownership and Operation: Can be owned and operated by a group of companies, a specialized cloud provider, or other interested parties.
o Hosting Options: Can exist inside or outside a company's data center or hosting facility.
o Visual Reference: Example infrastructure is shown in Figure 2.6.

Network Deployment Considerations

Purpose of the Section
o Provides a broad understanding of networking concepts to assist with cloud network deployments.
o Builds foundational knowledge for more advanced networking topics.

Topics Covered
o Common network protocols.
o Basic configurations.
o Virtual private networks (VPNs).
o IP addressing.
o Security services, such as intrusion detection and prevention.
o The demilitarized zone (DMZ) in the cloud.

Key Difference: Data Center vs. Cloud Networking
o In traditional data centers: Networking devices (e.g., switches, routers, firewalls) are discrete resources you can manage directly.
o In the cloud: The functions of networking devices are abstracted or accessible only through the cloud provider's interface. Devices like virtual switches are invisible and fully managed by the cloud provider.

Example of Cloud Networking
o In the cloud, servers aren't connected to a virtual switch you can log in to; switch functionality is completely abstracted and under the cloud provider's control.
o Relevance of traditional configurations: The specifics of data center switches have limited relevance to cloud environments.
o Recommendation: Pay attention to the subtle differences between cloud networks and on-premises networks.

Network Protocols

Prevalence of Network Protocols
o Dozens of protocols are used in modern networks.

Purpose of Learning Protocols
o To understand well-known port numbers and the applications they represent.
o Helps in configuring and validating firewall rules.

Key Takeaway
o Familiarity with common protocols and their default port numbers is essential for effective network management.

HTTP
o Protocol Name: HTTP (Hypertext Transfer Protocol)
o Default Port: TCP port 80
o Purpose: A common application protocol used by web browsers to access World Wide Web servers in the cloud.

FTP
o Protocol Name: FTP (File Transfer Protocol)
o Purpose: Used to send and receive files between systems on a network. Dates back to the earliest days of networking.
o Command Set: Uses a standard command set for file transfers.
o Default Ports: TCP port 21 (primary); TCP port 20 may also be used.

HTTPS
o Protocol Name: HTTPS (Hypertext Transfer Protocol Secure)
o Default Port: TCP port 443
o Purpose: A combination of HTTP and Transport Layer Security (TLS). TLS encryption secures the connection between the client and server.
o Key Benefit: Prevents interception or manipulation of data during transit.

FTPS
o Protocol Name: FTPS (File Transfer Protocol Secure)
o Purpose: An encrypted version of FTP.
o Default Ports: TCP ports 989 and 990
o Encryption: Utilizes TLS encryption for secure file transfers.

SSH
o Protocol Name: SSH (Secure Shell)
o Purpose: An encrypted replacement for the Telnet protocol; used to access remote devices via a command-line interface.
o Default Port: TCP port 22

SFTP
o Protocol Name: SFTP (Secure File Transfer Protocol)
o Purpose: Similar to FTPS, but tunnels FTP over SSH for secure file transfers.
o Default Port: TCP port 22
o Encryption: Does not use TLS; relies on SSH for encryption.

DNS
o Protocol Name: DNS (Domain Name System)
o Primary Function: Maps human-readable domain names (e.g., example.com) to IP addresses.
o Additional Functions:
  Acts as a database of services for a domain.
  Stores records such as Mail Exchanger (MX) records, which specify the mail servers for a domain.
o Default Ports: Uses both TCP and UDP port 53.

DHCP
o Protocol Name: DHCP (Dynamic Host Configuration Protocol)
o Purpose:
  Enables automatic assignment of IP addressing information to devices on a network.
  Eliminates the need for manual/static IP configuration when connecting to a network.
o Default Ports: UDP ports 67 and 68.

SMTP
o Protocol Name: SMTP (Simple Mail Transfer Protocol)
o Purpose: Used to send email messages between mail servers.
o Default Port: TCP port 25

NTP
o Protocol Name: NTP (Network Time Protocol)
o Purpose:
  Automatically configures system time based on an authoritative reference clock.
  Ensures accurate time synchronization, which is critical for security mechanisms and for event logging and correlation.
o Importance:
  Some authentication mechanisms rely on accurate time.
  Incorrect timestamps in log files can hinder event correlation.
o Default Port: UDP port 123

Network Ports

Overview of Network Ports
o Well-known port numbers: Unique port numbers assigned to specific applications/services, found in the TCP or UDP header under the destination port field.
o Example: When browsing via HTTPS, the browser sends a TCP/IP packet to port 443. The remote server reads the destination port and forwards the packet to the appropriate application (e.g., a web server such as Apache or Nginx).
o Thousands of well-known ports exist; below are some of the most common in the cloud (summarized in the sketch that follows this list):

TCP Port 80
o Reserved for HTTP; handles World Wide Web traffic.

TCP Port 21
o Reserved for FTP applications; FTP servers listen on port 21 for incoming client connection requests.

TCP Port 22
o Used by the SSH (Secure Shell) command-line interface, Secure Copy (SCP), and SFTP (Secure File Transfer Protocol).

TCP Port 25
o Assigned to SMTP (Simple Mail Transfer Protocol); used to route email between mail servers.

TCP and UDP Port 53
o Used by DNS (Domain Name System) for domain name lookups.

TCP Port 443
o Used by HTTPS for secure World Wide Web connections.
o Establishes an encrypted connection between the browser and a secure web server in the cloud.
o Encryption is achieved using the SSL/TLS protocols.

UDP Ports 67, 68, 546, and 547
o Used by DHCP (Dynamic Host Configuration Protocol) to automatically assign network configurations to devices without statically defined IPs.
o UDP ports 67 and 68: Used for DHCP with IPv4.
o UDP ports 546 and 547: Used for DHCP with IPv6.
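For exam drilling, the list above fits in a small lookup table. A sketch built directly from the ports in this section:

    WELL_KNOWN_PORTS = {
        ("tcp", 21):  "FTP (control)",
        ("tcp", 22):  "SSH / SCP / SFTP",
        ("tcp", 25):  "SMTP",
        ("tcp", 53):  "DNS",
        ("udp", 53):  "DNS",
        ("udp", 67):  "DHCP (IPv4)",
        ("udp", 68):  "DHCP (IPv4)",
        ("tcp", 80):  "HTTP",
        ("udp", 123): "NTP",
        ("tcp", 443): "HTTPS",
        ("udp", 546): "DHCPv6",
        ("udp", 547): "DHCPv6",
        ("tcp", 989): "FTPS",
        ("tcp", 990): "FTPS",
    }

    def service_for(protocol, port):
        """Answer 'what application is this firewall rule really allowing?'"""
        return WELL_KNOWN_PORTS.get((protocol, port), "unassigned/other")

    print(service_for("tcp", 443))   # HTTPS

This is exactly the mental lookup you perform when reading or validating firewall rules: destination port plus protocol tells you which application the rule affects.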
Network Configurations
o Ownership of Networks: The cloud service provider owns the networks within its data centers.
o Virtual Private Clouds (VPCs): Most cloud providers allow customers to configure VPCs on top of their networks.
o Configuration Methods: Via a web-based interface, APIs, SDKs, or a command-line interface (CLI).
o Network Configuration Options:
  Routes.
  Access control lists (ACLs).
  Security groups.
  IP address assignment.
o Additional Configurable Network Services:
  Load balancers.
  Application (layer 7) firewalls.
  Content delivery.
  Caching systems.
  DNS services.

Virtual Private Networks

Definition and Purpose of VPNs
o VPNs (virtual private networks) allow for secure encrypted connections over insecure networks (e.g., the Internet).
o Commonly used for encrypted access to cloud services from remote locations (e.g., working from home).

Types of VPN Connections
1. Point-to-Site VPN (Remote Access VPN):
  Enables a secure connection from a single remote user (e.g., a home device) to a network.
  Typically established on demand, as needed.
2. Site-to-Site VPN (Point-to-Point VPN):
  Connects two entire networks over a public network.
  Used to replace private dedicated circuits, reducing costs.
  Tends to be fixed (a permanent connection).

VPN Implementations
o Implementations vary depending on hardware and software:
  Software-based VPNs: Installed on client computers.
  Firewall/router VPN services: Integrated into networking hardware.
  Dedicated VPN concentrators: Standalone devices specifically for managing VPN traffic.

Complexity of VPN Configuration
o Configuring VPN solutions involves a wide range of complex options.

Firewalls and Microsegmentation

Key Differences: Cloud vs. Data Center Firewalls
o In a data center:
  A firewall isolates different network segments (e.g., subnets) and restricts traffic between them.
  Example: One subnet for database servers and another for application servers, separated by a firewall.
  Limitation: Traffic within the same subnet is unrestricted; the firewall cannot see or restrict intra-subnet traffic.
o In the cloud:
  Firewalls are replaced by microsegmentation.
  Packet filtering rules can be applied at the network interface level (per VM).
  Provides granular control over communication between resources, regardless of subnets or hosts.
  Security is more scalable: firewall rules are applied to resources based on their role (e.g., database VMs), not their subnet.

Key Components of Firewall Rules
1. Direction: Inbound or outbound.
2. Source: One or more IP addresses.
3. Protocol: For example, TCP, UDP, or ICMP.
4. Port: The relevant TCP or UDP port, if applicable.
o Implicit deny: By default, firewall rules deny all traffic unless it is explicitly allowed.
  Example: Providers often create a rule for new VMs to allow outbound IPv4 access to the Internet for updates.

Interface
o Interface-Level Rules:
  Often referred to as security groups.
  Control traffic to and from a VM's network interface (allow or deny traffic).
o Analogy: Similar to an access control list (ACL) applied to a switchport in a data center.

Network
o Network-Level Rules:
  Applied to an entire virtual network or subnet.
  Often referred to as ACLs or network security groups (terminology varies by provider).
o Analogy: Similar to an ACL applied to a VLAN in a data center.

Cloud-Specific Firewall Behavior
o Firewalls in the cloud:
  Do not involve discrete firewall devices at the cloud's "edge."
  You cannot place a group of VMs "behind" a specific firewall device.
  Firewall functionality is abstracted via a management interface.
o Firewall rules:
  Created and applied to specific resources, such as VM interfaces or subnets.
  Microsegmentation advantage: Firewall rules move with the resource (e.g., if a VM is relocated to another host, its rules remain active).

Stateful
o Definition: A stateful firewall allows traffic to pass in one direction and intelligently allows return traffic in the opposite direction.
o Example: When a VM connects to a server on the Internet to download updates, the firewall automatically permits return traffic from that server.
o Functionality: Uses connection tracking to identify return traffic and distinguish it from unsolicited traffic.

Stateless
o Definition: A stateless firewall does not automatically allow return traffic.
o Key Requirement: An explicit rule must be created to allow return traffic.
o Analogy: Similar to an access control list (ACL) on a traditional switch.
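On AWS, the interface-level rules (security groups) described above can be managed with boto3. A sketch that allows inbound HTTPS from anywhere; the group ID is a placeholder, and everything not explicitly allowed stays implicitly denied:

    import boto3

    ec2 = boto3.client("ec2")

    # Inbound rule: allow TCP 443 from any IPv4 source. Direction is
    # implied by the API call used (ingress here, egress otherwise).
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",        # placeholder group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0",
                          "Description": "HTTPS from the Internet"}],
        }],
    )

Because security groups are stateful, the return traffic for connections matching this rule is allowed automatically; no mirror-image outbound rule is needed, which is exactly the stateful behavior described above.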
Note
o Firewall Appliance in the Cloud: You can create a VM that runs a firewall appliance, and cloud networks can be configured so that traffic must pass through this appliance.
o Key Distinction: This is different from the native firewall functionality provided by the cloud provider.

Web Application Firewalls

Definition and Purpose
o A web application firewall (WAF):
  Monitors HTTP(S) requests to a web application.
  Detects and blocks exploits aimed at gaining unauthorized access or performing a denial-of-service (DoS) attack.

How WAFs Differ from Traditional Firewalls
o Traditional firewalls: Operate at layer 3 (IP) or layer 4 (TCP/UDP), allowing or denying traffic based on header information.
o WAFs: Inspect application traffic at a deeper level to detect:
  Malicious script injections (e.g., cross-site scripting).
  SQL injection attacks.
  Abnormal query strings.
  They block suspicious traffic before it reaches the application.

Advanced WAF Features
o Block traffic based on geographic location or known malicious IP addresses.

Cloud Provider WAF Services
o Cloud providers often offer WAFs as a managed service.
o Alternatively, WAF appliances can be deployed as VMs in front of applications.

Importance of a WAF
o Ideal scenario: Applications are 100 percent secure and immune to attacks.
o Reality: Applications are often vulnerable and require frequent patches to fix security holes. A WAF provides a shield against newly discovered vulnerabilities before they can be patched.

Application Delivery Controllers

Definition and Functionality
o An application delivery controller (ADC) combines the features of a load balancer, a web application firewall (WAF), and a firewall, designed to work together to enable access to a single application.

Example of ADC Workflow
1. Firewall: Filters incoming traffic, allowing only HTTPS traffic on TCP port 443, and forwards it to the WAF.
2. Web application firewall: Performs deep packet inspection to identify suspicious requests and passes safe traffic to the load balancer.
3. Load balancer: Distributes traffic to the appropriate web server.
4. Dependency on configuration: Success depends on all steps working perfectly; a single misconfiguration in any device can cause the application to fail.

Benefits of an ADC
o All-in-one solution: Combines multiple network functions (firewall, WAF, load balancer) into one device, reducing the need to manage multiple separate appliances.
o Fewer points of failure: Streamlines network management and eliminates concerns about connectivity issues between devices.

Watching Out for the Bad Guys: Understanding IDSs/IPSs

Purpose of IDS/IPS
o Monitor network traffic for patterns of suspicious activity that could indicate a network-based attack or intrusion.
o Both systems operate in real time to detect potential threats.

Intrusion Detection System (IDS)
o Function:
  Monitors network traffic and detects intrusions based on signatures maintained by the vendor.
  Alerts a management system or sends out notifications (e.g., email or text) when an attack is detected.
o Key Point: Does not take action to stop or mitigate the attack; it only monitors and reports.

Intrusion Prevention System (IPS)
o Function:
  Builds on IDS by actively taking preventive measures to mitigate attacks.
  Uses configuration scripts and other methods to stop attacks in progress.
o Interaction with Network Devices: Communicates with routers and firewalls to apply rules that block or minimize the attack's impact.

Key Differences
o IDS: Detection and alert only.
o IPS: Detection and active response to mitigate threats.

Demilitarized Zone

Definition and Purpose
o A DMZ is a section of the network that hosts Internet-facing servers while allowing limited access to internal resources.
o Protects the internal network by isolating the servers that are most exposed to external attacks.

Servers Commonly Placed in a DMZ
o Mail servers
o DNS servers
o FTP servers
o Web servers

Security Benefits
o Prevents Internet-facing servers from being placed alongside sensitive internal servers (e.g., file servers).
o If a server in the DMZ is compromised, only limited access to internal resources is granted.
  Example: An Internet-facing web server might only be able to access a specific database server on a defined protocol and port.

Firewall Policies in a DMZ
o Extensive rules are configured on the firewall to:
  Restrict access to DMZ servers.
  Ensure DMZ servers are used only for their intended purpose.

VXLAN Deployments

What Is VXLAN?
o Virtual Extensible LAN (VXLAN): An encapsulation method designed to overcome the scaling limitations of traditional Ethernet.
o Solves the problem of limited VLANs (4,094 maximum) by enabling millions of isolated virtual networks.

Traditional Ethernet and VLAN Limitations
o In a traditional network, multiple VLANs are used, each corresponding to an IP subnet (e.g., 192.168.1.0/24).
o Ethernet frames support a maximum of 4,094 VLANs, which is sufficient for most single organizations.
o Cloud provider challenge: Providers must support hundreds of thousands of customers, each requiring isolated networks; 4,094 VLANs are insufficient at cloud scale.

VXLAN as a Solution
o VXLAN uses MAC-in-IP encapsulation: It encapsulates an Ethernet frame inside an IP/UDP packet sent to port 4789. A VXLAN header (8 bytes/64 bits) sits between the encapsulated Ethernet frame and the IP header (see the sketch below).
o VXLAN Network Identifier (VNI):
  A 24-bit field in the VXLAN header that differentiates traffic between virtual networks.
  Allows for more than 16 million VNIs (virtual networks).
  Ensures traffic isolation, even if two customers use overlapping subnets (e.g., both using 10.1.0.0/16).

Additional Use of VXLAN
o Enables the creation of layer 2 tunnels across IP networks.
o Facilitates stretched VLANs spanning separate geographic locations.
o Purpose: Allows VMs in the same subnet to migrate between data centers without changing their IP addresses.

Challenges with Stretched VLANs
o Can lead to:
  Split-brain scenarios.
  Difficult-to-troubleshoot network issues.
  Potential data loss.
o Best practice: Avoid stretching VLANs across sites. Instead, use safe VXLAN-based VM mobility solutions that don't involve VLAN stretching.
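The 8-byte VXLAN header is simple enough to build by hand, which makes the layout easy to remember. A sketch using only the Python standard library, following the RFC 7348 format (one flags byte with the I bit set, reserved bits, and the 24-bit VNI):

    import struct

    VXLAN_UDP_PORT = 4789

    def vxlan_header(vni: int) -> bytes:
        """Build the 8-byte VXLAN header (RFC 7348) for a given VNI."""
        if not 0 <= vni < 2**24:
            raise ValueError("VNI is a 24-bit field (over 16 million values)")
        flags = 0x08                  # 'I' bit set: the VNI field is valid
        # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
        word1 = flags << 24           # flags plus 24 reserved bits
        word2 = vni << 8              # 24-bit VNI plus 8 reserved bits
        return struct.pack("!II", word1, word2)

    hdr = vxlan_header(5001)
    print(len(hdr), hdr.hex())        # 8 0800000000138900

In a real deployment this header, plus the encapsulated Ethernet frame, rides inside a UDP datagram addressed to port 4789; the VNI in bytes 4-6 is what keeps each customer's virtual network isolated.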
GENEVE

Key Concepts: Data Plane vs. Control Plane
o Data Plane: Concerned with how data is encapsulated and formatted as it moves across the network.
  Example: VXLAN defines a specific packet format for encapsulating Ethernet frames.
o Control Plane: Concerned with how data is routed and forwarded to its destination.
  Example: VXLAN uses a flood-and-learn approach to traverse the network.

VXLAN Overview
o Flood-and-Learn Approach: When two VMs on different hosts communicate, the first packet is flooded to all virtualization hosts in the network. This behavior can lead to scalability issues in large environments.
o Alternative Control Plane Protocols for VXLAN: VXLAN can work with protocols such as Ethernet VPN (EVPN) and Border Gateway Protocol (BGP) for a more scalable solution.

GENEVE (Generic Network Virtualization Encapsulation)
o Definition: An alternative to VXLAN for transporting Ethernet frames over an IP network.
o Key Difference from VXLAN: GENEVE defines only the packet format and does not dictate the control plane. The control plane is customizable, offering flexibility in how packets are transported.
o Technical Detail: GENEVE uses UDP port 6081 for communication.

Comparison of VXLAN and GENEVE
o VXLAN: Defines both the packet format and the default flood-and-learn control plane; can use scalable control plane protocols such as EVPN with BGP.
o GENEVE: Defines the packet format only; the control plane is user-defined, offering greater flexibility. Uses UDP port 6081 for packet transport.

IP Address Management

Planning IP Address Assignments for Cloud Migration
o Develop a clear plan for assigning IP addresses to devices in both the data center and the cloud.
o Private IP address blocks (RFC 1918) are commonly used to conserve limited public IP addresses.

Private IP Address Management
o Private, non-Internet-routable address blocks (RFC 1918) are commonly used in virtual private clouds (VPCs).
o Cloud providers may allow you to choose a custom address block for your VPC or use pre-assigned address blocks.
o Subnetting: Start with one large address block for the VPC, then divide it into subnets (see the sketch below).
  Benefits of multiple subnets:
    Organize applications or network segments.
    Use security measures such as access control lists (ACLs), security groups, and firewalls to control traffic flow.

Key Considerations for IP Addressing
o Avoid reusing IP addresses already in use by your organization.
o Do not reuse existing data center IP addresses in the cloud: The cloud and data center networks may need to connect, and conflicting IP addresses can cause serious issues.

Public IP Address Management
o Cloud providers own public IP addresses that are Internet-reachable.
o Assign public IP addresses only as needed.
  Example: A web server running on a VM may require a public IP for Internet access; a load balancer fronting multiple web servers would also need a public IP.
o Elastic IP Addresses: Public IPs that can be reserved and reassigned to cloud resources. Useful for maintaining reachability while providing flexibility in resource management.

Note
o Public IPs Not Required: Cloud resources do not need a public IP address if they only need to communicate with internal resources (e.g., within the cloud or the data center).
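Python's standard ipaddress module makes the "one large block, divided into subnets" planning step concrete. A sketch carving an RFC 1918 VPC block into /24 subnets and checking for conflicts with an assumed existing data center range:

    import ipaddress

    # One large RFC 1918 block for the VPC, then divide it into subnets.
    vpc_block = ipaddress.ip_network("10.100.0.0/16")

    subnets = list(vpc_block.subnets(new_prefix=24))   # 256 possible /24s
    web, app, db = subnets[0], subnets[1], subnets[2]
    print(web, app, db)   # 10.100.0.0/24 10.100.1.0/24 10.100.2.0/24

    # Sanity check against the existing data center range (assumed here
    # to be 10.1.0.0/16) so the two networks can later be interconnected.
    dc_block = ipaddress.ip_network("10.1.0.0/16")
    print(vpc_block.overlaps(dc_block))   # False: safe to connect

The overlaps() check encodes the key rule from this section: never reuse data center addresses in the cloud, because the two networks may eventually need to route to each other.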
Network Packet Brokers

Purpose
o Enable packet-level traffic analysis for monitoring all network devices.
o Useful for:
  Detecting malware or hackers.
  Identifying data leakage.

Traditional Packet Analysis
o Achieved via SPAN (Switched Port Analyzer) on switches: traffic is copied to a network monitoring tool for analysis.
o Limitation: In large, high-speed networks, it is challenging to copy every packet and send it to a single device.

What Is a Network Packet Broker?
o A specialized security appliance for packet analysis in modern, complex networks.
o Functions:
  Collects and aggregates packets from devices (e.g., switches, routers, firewalls).
  Processes packets by buffering, deduplicating, and filtering out unnecessary or sensitive information.
  Forwards packets to one or more network monitoring tools.

Benefits
o Provides a centralized collection point for packet analysis.
o Especially useful for environments with both cloud and on-premises resources.

Cloud Integration
o Public cloud providers often support VM traffic sniffing: Traffic can be sent to another VM running a network monitoring tool or a virtual packet broker appliance.
o Alternative approach: Install an agent on each VM to forward packet copies to the packet broker.

Content Delivery Networks

Definition and Purpose
o A content delivery network (CDN) is a highly available service designed to deliver static and dynamic content (e.g., web pages, streaming video) as quickly as possible.

Key Features of a CDN
o Consists of multiple points of presence (PoPs), or edge locations, strategically positioned around the world to be close to end users.
o Content Delivery Process: When a user requests content:
  The CDN determines the best-performing edge location (usually the one physically closest to the user).
  The request is routed to that edge location for servicing.
  Example: A user in Asia will be served from an edge location in Asia, while a user in Mexico will be routed to an edge location in the Americas.
o Performance Benefit: Reduces the amount of data that must travel long distances, providing a faster user experience.

Redundancy and High Availability
o CDNs store redundant copies of content across multiple edge locations, ensuring content is still available even if one location goes offline.

Availability
o All major cloud providers offer CDN services, and several specialized CDN companies also exist.

Service Level Agreements

Definition and Purpose
o An SLA (service level agreement) is a document that:
  Outlines specific metrics (e.g., performance and availability levels).
  Specifies minimum performance standards and penalties for not meeting them.
  Defines data ownership and the rights and responsibilities of both the customer and the provider.

Shared Responsibility Model
o Cloud provider's responsibility: Manages the infrastructure that supports the cloud.
o Customer's responsibility: Manages and secures the resources created in the cloud.
  Examples:
    If you create a VM, you are responsible for its configuration and data.
    For a managed database service, the provider manages the database server and hardware, but you manage the databases you create.

Responsibility Based on Service Model
1. IaaS (Infrastructure as a Service): The customer has greater control and more responsibility (e.g., VM configuration, security, data). Suitable for large organizations with dedicated IT teams.
2. PaaS (Platform as a Service): Shared responsibility for applications, middleware, and the runtime environment.
3. SaaS (Software as a Service): The provider handles most responsibilities (e.g., software updates, infrastructure, security). Ideal for small startups or organizations with limited IT resources.

Cost Considerations
o More responsibility on the provider means a higher cost.
o The classic trade-off: balancing time and money when choosing the right service model.

Matching Data Center Resources to Cloud Resources

Key Similarities
o Hardware and software requirements driven by applications are typically the same in both environments:
  OS versions.
  Storage capacity.
  Memory capacity.
  Processing power.

Key Differences
o Some requirements don't translate directly to the cloud.
  Example: Backing up to tape in a data center will require different backup procedures in the cloud.

Important Consideration Before Migration
o Understand how to translate data center resources to cloud resources.
o Evaluate and scale cloud compute resources to meet your specific requirements.

What Are Available and Proposed Hardware Resources?
VM Configurations Offered by Cloud Providers
o A wide range of configurations tailored to specific requirements:
  General compute.
  Graphics processing units (GPUs) for intensive workloads.
  High I/O configurations for applications such as databases.
  CPU-centric or memory-centric configurations.

Instance Types (Flavors)
o The cloud term for a VM type is typically "instance."
o Each instance type defines key parameters:
  Number of vCPUs.
  Amount of RAM.
  Network and storage I/O performance.
  Other hardware-related parameters (these vary by cloud provider).

Virtualization and Allocation
o Cloud providers virtualize hardware resources and allocate them to VMs based on the chosen instance type.

Physical and Virtual Processors

Advancements in Processing Power
o Multicore processors have significantly increased CPU density and capabilities.
o Physical servers supply processing power to VMs and require sufficient CPU resources to support multiple VMs.

Calculating CPU Requirements
o Similar to determining RAM needs: the total CPU requirements of all VMs hosted on a server must be calculated, and the physical server configured to meet them.

CPU and Server Scaling
o Server motherboards contain multiple slots for CPUs, each with multiple cores.
o A single server can scale to support the processing needs of hundreds of VMs.

Oversubscription
o In some cases, a VM host may be oversubscribed, meaning it hosts more VMs than its resources can fully support.
o As a result, the expected 2 GHz of processing power for a VM may drop to 1 GHz during high demand.

Dedicated VM Hosts
o To avoid resource contention, request a dedicated VM host (not shared with other customers).
o This allows control over the number of VMs per host and prevents CPU contention.

Physical and Virtual Memory

Overcommitting Your Memory Resources

Memory Usage by Virtual Machines (VMs)
o VMs consume RAM on the host server.
o Memory requirements depend on the number of VMs hosted and how each VM is configured.

Ensuring Adequate Memory
o Cloud providers must ensure that sufficient memory is installed on the server to support all hosted VMs.
o Additional memory is allocated for future growth and for the hypervisor's own memory needs.

Modern Server Memory Capabilities
o Memory density continues to increase with modern server designs.
o Other memory considerations include access speeds and error correction capabilities (e.g., ECC memory).
Bursting and Ballooning: How Memory Is Handled

Memory Ballooning
o Definition: A hypervisor function used to reclaim unused memory from running VMs and allocate it elsewhere.
o Purpose: Optimizes the use of installed RAM by reclaiming unused memory from VMs.

How Ballooning Works
1. Memory shortage on the VM host: The host runs low on memory (e.g., due to overcommitment).
2. Hypervisor communication with VMs: Special virtualization tools run on each VM to integrate with the hypervisor, which sends a signal to the balloon driver in each VM.
3. Balloon driver process:
  The balloon driver prompts the VM's OS to allocate unused memory and reserve it, preventing other processes from accessing the reserved memory.
  The balloon driver informs the hypervisor of the freed memory, which the hypervisor can then reclaim.

Trade-Offs of Ballooning
o Performance impact: If the balloon driver reserves too much memory, the VM's OS may move or swap memory to disk, resulting in a substantial slowdown in performance.

Understanding Hyperthreading in a CPU

What Is Hyperthreading?
o Hyperthreading allows a single microprocessor core to appear as two separate CPUs to the operating system.
o Each logical (virtual) processor can be started, stopped, and controlled independently of the other.

How It Works
o Shares the silicon resources on the CPU chip for command execution.
o The technology is transparent to the operating system or hypervisor.

Impact on Virtual Machines
o VMs see two cores even though there is only one physical core simulating two.

Requirement for Hyperthreading
o The hypervisor or operating system must support symmetrical multiprocessing to leverage hyperthreading.

Hardware-Assisted Virtualization: AMD-V and Intel VT-x

Early Virtualization Challenges
o Initially, emulation software was used to provide CPU functionality to hypervisors.
o Problem: This software-based approach led to poor VM performance.

Hardware-Based Virtualization Enhancements
o Solution: Intel and AMD added virtualization support directly in the CPU silicon with specialized microcode, greatly improving hypervisor and VM performance.

AMD and Intel Virtualization Technologies
1. AMD-V (AMD Virtualization): Silicon and microcode extensions from AMD for virtualization; a common feature of AMD CPU releases.
2. Intel VT-x (Intel Virtualization Technology): Intel's equivalent to AMD-V, offering enhanced hardware virtualization.

System BIOS and Performance
o For optimal performance, these features must be enabled in the system BIOS.
o On modern server hardware, these features are usually enabled by default.

CPU Overcommitment Ratios

Definition of CPU Overcommitment
o CPU overcommitment refers to allocating more virtual CPUs (vCPUs) than the available physical CPUs (pCPUs); this is also known as the vCPU-to-pCPU ratio.
o Assumes that not all VMs will fully utilize their allocated CPU resources at the same time.
o Idle CPU cycles are dynamically reassigned to VMs that need more compute resources.

Determining Overcommitment Ratios (see the sketch below)
o Application-dependent:
  CPU-intensive applications require a lower overcommitment ratio to avoid performance issues.
  Light-load applications allow for a higher overcommitment ratio to maximize resource utilization.
o Benefit: Optimizing physical resource usage leads to a lower cost of operations.
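The vCPU-to-pCPU ratio is simple arithmetic, and working one example makes the idea stick. A sketch; the maximum-ratio policy values are illustrative assumptions, not provider guidance:

    def overcommit_ratio(total_vcpus: int, physical_cores: int,
                         threads_per_core: int = 2) -> float:
        """vCPU-to-pCPU ratio; hyperthreading doubles the logical pCPUs."""
        pcpus = physical_cores * threads_per_core
        return total_vcpus / pcpus

    # A host with 2 sockets x 16 cores, hyperthreaded, carrying 40 VMs x 4 vCPUs:
    ratio = overcommit_ratio(total_vcpus=40 * 4, physical_cores=2 * 16)
    print(f"{ratio:.1f}:1")                 # 2.5:1

    # Illustrative policy: CPU-hungry apps get a low ratio, light loads higher.
    MAX_RATIO = {"cpu_intensive": 1.0, "general": 3.0, "light": 8.0}
    print(ratio <= MAX_RATIO["general"])    # True: acceptable for general loads

The same host at 2.5:1 would be overcommitted for a CPU-intensive database fleet but comfortably within range for lightly loaded utility servers, which is the application-dependent judgment this section describes.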
CPU Wait Time
o CPU wait time: The time a VM process or thread waits for access to the physical CPU.
o Causes of wait time:
  Contention for finite physical CPU resources when many VMs run on the same host.
  The hypervisor may pause VMs to ensure equitable CPU access for all VMs.

Monitoring and Performance Tuning
o Hypervisors and monitoring tools collect and display CPU wait statistics, assisting in performance tuning and capacity planning to optimize processing requirements.

Single Root I/O Virtualization

Definition and Purpose
o SR-IOV (single root I/O virtualization) allows multiple VMs to share a single physical NIC on a VM host.
o Virtual NICs: SR-IOV virtualizes the physical NIC into multiple virtual NICs, each connecting to a different VM.

Key Benefits
o Enables rapid, direct communication between the VM and the physical NIC.
o Bypasses the hypervisor: Eliminates the need for the hypervisor to perform virtual switching, resulting in improved network performance.

Templates and Images

Purpose of Templates in Cloud Deployments
o Used to avoid repetition and simplify the deployment of multiple redundant VMs.
o Useful for replicating environments (e.g., production and testing).

Types of Templates
1. VM Templates (Images):
  Definition: Disk images with at least a preinstalled OS; they can also include preinstalled software to save setup time.
  Virtual appliances: Prebuilt VM templates with preconfigured software.
  Custom templates: Create a custom VM configuration and save it as a reusable image. Supports autoscaling by provisioning new VMs from preconfigured templates.
2. Infrastructure-as-Code (IaC) Templates:
  Definition: Text files that describe cloud resources and their configurations.
  Use case: Automate the deployment of complex environments (e.g., production and testing), ensuring the environments are identical and improving consistency and reliability.
  Proprietary templating languages: Each cloud provider has its own language (e.g., AWS, Azure).
    Challenge: Learning multiple languages for multi-cloud setups.
    Solution: Third-party tools with universal templating languages.

Learning Curve and Cloud Differences
o Each provider has unique terminology and implementation details.
  Example: A "virtual network" in one provider may be called a VPC in another.
o Key takeaway: Understanding each provider's architecture is essential, despite shared concepts like network interfaces and elastic block storage.

Physical Resource High Availability

Definition of High Availability
o High availability (HA) ensures minimal downtime by using redundant systems to handle failures.
o Configurations:
  Active/Active: Multiple systems are active and share the workload.
  Active/Standby: One system is active while another is on standby, ready to take over in case of failure.

High Availability in Data Centers
o Applies to critical systems, such as:
  Computing systems.
  Power systems.
  Networking equipment.
  Cooling infrastructure.
o Purpose: Prevent single points of failure that could lead to major outages.

High Availability in Cloud Infrastructure
o Cloud providers implement HA principles similar to traditional data centers, ensuring redundancy and resiliency for computing resources, storage systems, and networking infrastructure.
o Responsibility: In the cloud model, the provider manages HA for the underlying infrastructure, unlike colocation, where the customer handles most system management.
Introducing Disaster Recovery

Overview of DR
o Disaster recovery (DR): Critical for ensuring system resilience and quick restoration after failures.
o Backups and DR must be planned from the beginning; they cannot be afterthoughts or add-ons.

Cloud vs. Data Center DR
o Cloud DR: Easier to implement than in a traditional data center. The infrastructure is pre-built and managed by the cloud provider, simplifying backup configurations.
o Data Center DR: Requires managing hardware, ensuring backups occur, and storing them properly (often off-site). More hardware-dependent, making it complex and time-consuming.

Examples of Cloud-Based DR
1. VM storage volume backups: Schedule regular snapshots stored in a different region to meet off-site requirements.
2. Managed database services: Providers can automatically back up databases and transaction logs.
3. Object storage: Use versioning and replication to retain recoverable file copies for as long as needed.

Recovery in the Cloud
o Easier than data center recovery, but not as straightforward as backups.
o Worst case: Perform a new deployment using backed-up data.
o Best case (if the budget allows): Have redundant components already provisioned to take over immediately in case of failure.

Physical Hardware Performance Benchmarks

Purpose of Performance Benchmarks
o Establish baselines before migrating to the cloud for use in the migration validation process.
o Allows comparison of pre-migration and post-migration performance.

Metrics to Collect
o Disk I/O operations.
o Network throughput and link utilization.
o Error rates.
o RAM usage.
o Storage capacity and performance.

Benefits of Collecting Benchmarks
o Provides a reference point to track deviations in performance.
o Facilitates trending analysis to:
  Identify the need for additional capacity (e.g., CPU, memory, storage, networking).
  Optimize resource allocation based on usage patterns.

Cloud Provider Support
o Most cloud management and monitoring tools can collect, analyze, and monitor benchmarks, and they are often included as part of the cloud provider's service offerings.

Cost Savings When Using the Cloud

Key Advantages of Cloud Economics
o Pay-as-you-grow model:
  No up-front capital expenses for servers, storage, and networking.
  Eliminates the need for massive capital investments in data center operations.
  Money saved can be allocated elsewhere in the organization.
o Flexible capacity:
  No need to purchase capacity for peak usage.
  Resiliency and scaling allow quick addition of capacity, often within minutes.

Cost Management Considerations
o Resource usage: Cloud billing models often charge based on hours of usage.
  Risks:
    Forgetting to shut down unused servers.
    Autoscaling may scale up resources but fail to scale down when the workload subsides.
o Potential for large bills: Provisioned but unused resources can lead to unexpectedly high costs.

Best Practices for Cost Control
o Configure billing alerts (see the sketch below):
  Example: Set an alert to notify you when usage is on track to exceed your budget.
  For personal accounts, set alerts at midpoint thresholds (e.g., $25 if your budget is $50/month) to avoid surprises.
o Delete unused resources: It's the user's responsibility to manage and delete unused cloud resources.
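On AWS, the midpoint-threshold billing alert described above could be implemented as a CloudWatch alarm on the EstimatedCharges metric (this assumes billing metric publication is enabled on the account; the SNS topic ARN is a placeholder):

    import boto3

    # Billing metrics are published in the us-east-1 region on AWS.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-bill-over-25-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                    # evaluate every six hours
        EvaluationPeriods=1,
        Threshold=25.0,                  # midpoint of a $50/month budget
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )

Firing at the budget midpoint, rather than at the full budget, leaves time in the billing cycle to find and delete the forgotten resources before the bill becomes a surprise.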
Energy Savings in the Cloud

Modern Cloud Data Centers
o Many cloud data centers are newer (built within the last 10 years) and implement energy-efficient technologies.
o Energy efficiency helps reduce the operational costs of cloud services.

Higher Utilization Ratios in the Cloud
o Cloud computing benefits from the shared service model, leading to higher server utilization compared to traditional enterprise data centers.
o Enterprise data centers often have idle or low-utilization servers that still consume energy.

Energy Management in the Cloud
o Modern management systems can power off unused servers, storage, and systems, and automatically re-enable resources when needed, further reducing energy consumption.

Shared vs. Dedicated Hardware Resources in a Cloud Data Center

Shared Hardware Resources
o The primary economic advantage of cloud computing is based on shared virtualized resources (compute, storage, and networking); this is the most common and cost-effective model.
o Allows multiple customers to share the same physical hardware through virtualization.

Dedicated Hardware Resources
o Dedicated servers are required in specific scenarios:
  Security regulations mandate hardware isolation.
  Application restrictions or special hardware requirements necessitate a bare-metal server.
o Higher cost than shared resources, because a complete server is allocated to one customer.

Key Comparison
o Shared model: Cost-efficient and widely used.
o Dedicated model: Higher cost, but required for compliance or specialized use cases.

Microservices

Monolithic Applications
o Definition: Traditional applications where all components (e.g., web interface, shopping cart, checkout, financial transactions, shipping) run on one server.
o Characteristics:
  Components are tightly coupled and must run on the same server for communication and coordination.
  Self-contained and relatively simple to deploy.
  Fast interprocess communication, since all components are on the same machine.

Drawbacks of Monolithic Applications
1. Single point of failure: If the server goes down, the entire application fails and all functions stop.
2. Scalability issues: Functions cannot scale independently.
  Example: High traffic on the web interface may overload the server, slowing down or crashing other functions (e.g., checkout, shipping).
3. Tight coupling: All application components are interdependent, making it difficult to update, scale, or maintain individual parts.

Microservices and Cloud Elasticity
o The rise of cloud computing has encouraged developers to rearchitect applications into microservices to take advantage of the cloud's scalability and flexibility.
o Microservices offer solutions to many of the challenges posed by monolithic applications (details covered in later sections).

Containers

Microservices Architecture
o Definition: Application components are decoupled and distributed across multiple servers.
o Advantages:
  Components (microservices) can run on separate servers.
  Allows independent scaling of each component.
  Enables redundant deployment for high availability.

Challenges with Microservices
o Complexity in distribution and scaling: You must decide how to distribute microservices across servers.
  Example:
    Six servers for the web interface and shopping cart.
    Three servers for other components.
    Two servers for warehousing and shipping.
  Adjusting the deployment dynamically (e.g., for increased workload) is difficult.
o Manual deployment issues: Installing and uninstalling microservices individually is inefficient and time-consuming.

Introduction to Containers
o What is a container?
  A lightweight virtual machine used to deploy microservices.
  Runs a Linux executable, isolating the microservice from other processes on the host.
  Allows multiple containers to run on a single VM for cost efficiency.
o Benefits of containers (see the sketch below):
  Provide redundancy: If one container crashes, others remain available, preventing the complete failure of a service on a VM (e.g., the web interface).
  Cost-effective: Multiple containers on one VM amount to "multiple VMs" for the price of one.
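Launching a microservice in a container is nearly a one-liner with the Docker SDK for Python (pip install docker). A sketch assuming a local Docker daemon; the image name and port mapping are illustrative:

    import docker

    client = docker.from_env()           # talk to the local Docker daemon

    # Run a web-interface microservice; map container port 80 to host 8080.
    web = client.containers.run(
        "nginx:latest",
        name="web-interface",
        ports={"80/tcp": 8080},
        detach=True,                     # return immediately; run in background
    )

    print(web.status)                    # e.g., 'created' or 'running'

    # Containers start and stop in seconds, which is what makes scaling
    # and moving microservices practical.
    web.stop()
    web.remove()

Running several such containers side by side on one VM is the "multiple VMs for the price of one" economy described above, and the fast stop/remove cycle is why orchestration platforms can shuffle microservices around so freely.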
o Use container orchestration platforms (e.g., Kubernetes) to automate deployment and management. o Major cloud providers offer: Kubernetes as a service. Proprietary container management services. o Best practice: Use an orchestration platform instead of managing containers manually. Working with Containers Popular Container Platform: o Docker is the most widely used container platform. o In the context of the cloud, "container" often refers to Docker containers. How Containers Work: o Similar to regular VMs but involve nested virtualization (a VM within a VM). o Containers don't require their own OS: They borrow the Linux kernel from the host VM and boot from it. Steps to Create a Container: 1. Build an image: Contains the necessary application files. 2. Launch the container: Specify configurations like: Storage mappings Network configurations Memory and CPU options Storage Mapping in Containers: o Mapped storage points to a folder or storage device on the host VM. o For shared storage across containers on different hosts: Use a shared filesystem, typically offered as a highly available service by cloud providers. Network Interface Mapping: o When a network interface is mapped to a container: It virtualizes the host VM's virtual network interface. Secrets What Are Secrets? Secrets refer to sensitive configuration items needed to deploy and run applications, such as: Database connection strings Authentication keys Usernames and passwords IP addresses and DNS hostnames Challenges with Hard-Coding Configuration: o Non-sensitive parameters (e.g., hostnames, IP addresses) may vary and cannot be hard-coded into the container image. o Sensitive items, like secrets, should not be stored or transmitted in the clear for security reasons. Managing Secrets Securely: o Container orchestration platforms provide tools to store and manage secrets securely. o How it works: When a container launches, it can request the secret (e.g., a database connection string) from the secrets manager over a secure connection. The secrets manager: Decrypts the data on the fly. Passes the secret to the container without storing it or transmitting it in plaintext. Benefits of Using a Secrets Manager: o Prevents exposure of sensitive data. o Ensures secrets are never stored in the container or transmitted unencrypted. o Enhances security and protects critical application configurations. Note: o Debate: There is ongoing debate about whether containers should be considered VMs. o Similarities to VMs: Containers virtualize storage, memory, and CPU, meeting the criteria for a computing machine. Containers can be started, stopped, and suspended just like VMs. o Conclusion: Based on these characteristics, containers can be viewed as a type of lightweight VM. Configuring and Deploying Storage Commonality Between Cloud and Data Center Environments: Storage systems are decoupled from the servers that use them. Server Storage Configuration: o Servers typically do not have large arrays of local hard drives or solid-state drives (SSDs) installed. o Many servers may have no local storage at all. Use of External Storage Systems: o Servers rely on large external storage systems. o These storage systems are interconnected via a Storage Area Network (SAN). Identifying Storage Configurations Three Types of Storage in Cloud Data Centers: 1. Block Storage 2. Object/File Storage 3. Filesystem Storage Relevance: o Storage is a core infrastructure component in any cloud data center.
o These storage types are implemented differently in cloud environments (details likely covered in later sections). Network-Attached Storage Definition: o Network-Attached Storage (NAS) provides network access to a shared filesystem. o Example: A file server on an Ethernet-based LAN hosting shared directories. How NAS Works: o The filesystem resides on the NAS server, which handles the details of storing files on disk. o Clients (e.g., VMs in the cloud) access files on the NAS server via standard protocols: Network File System (NFS) Server Message Block (SMB) / Common Internet File System (CIFS) Technical Details: o SMB/CIFS use TCP port 445. Clients typically do not persistently store copies of data locally. They may temporarily cache files that are in use. Direct-Attached Storage Definition: o Direct-Attached Storage (DAS) refers to storage directly connected to a computing device (e.g., computer, laptop) rather than over a network. Characteristics of DAS: o Common in home and small business environments. o Easiest storage method to implement. o Storage devices include: Hard drives Solid-state drives (SSDs) Flash drives o Any other storage media directly connected to the computer. Connection Types: o DAS storage devices connect to the computer via: ATA SATA SCSI interfaces. DAS in the Cloud: o Exists in the form of high-speed, temporary storage. o Commonly used for temporary cache locations to hold files during processing. o Non-persistent: Data is subject to loss. o Often referred to as ephemeral storage by cloud providers. Suitable for temporary storage needs where data loss is acceptable. Storage Area Networks Definition and Purpose: o A Storage Area Network (SAN) is a dedicated high-speed network that connects storage systems to servers. o Decouples storage from servers to improve scalability, performance, and flexibility. o Separate from the network used for normal VM traffic. Benefits of SANs: o High-speed, redundant, and scalable storage. o Enables seamless VM migration: VMs can move between hosts while maintaining access to storage. Supports stateful moves, allowing applications to continue running during migration. o Facilitates: Maintenance Cloud bursting Fault tolerance Disaster recovery SAN Performance Factors: o Drive performance: SSDs or high-RPM magnetic hard drives are used to reduce read/write latency. o Network performance: High-speed Fibre Channel interfaces are commonly used for SAN connections to ensure speed and reliability. Poor design or contention can negatively impact performance. Advanced Technologies: o NVMe over Fabrics (NVMeOF): A newer and faster host-to-SAN technology. Requires NVMe-supported storage controllers. High speed comes with higher costs in cloud environments. o iSCSI (Internet Small Computer Systems Interface): Uses an Ethernet network adapter to connect to a SAN. Operates over TCP port 3260. Slower than Fibre Channel; suitable for less speed-critical applications. Key Consideration for Cloud Migration: o High-speed storage requirements will likely result in higher costs in cloud environments. Object/File Storage Definition and Overview o Cloud object storage: Highly scalable and reliable way to store files. o Terms “file” and “object” are interchangeable. o Differs from NAS: Access via HTTPS or cloud provider’s proprietary API. Not designed for mounting like traditional filesystems (e.g., NFS, SMB). Cannot store raw blocks of data—only files. Lacks hierarchical folder/directory structures. 
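Example (a minimal Python sketch of API-based object access; the endpoint, bucket name, and bearer token are hypothetical placeholders, and each provider's real API and authentication scheme will differ):

# Object storage is reached over HTTPS rather than mounted as a filesystem.
import requests

ENDPOINT = "https://objects.example-cloud.com"        # hypothetical provider endpoint
BUCKET = "website-image-assets"                       # bucket/container name
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder credential

# Upload: the object is written in its entirety in a single request.
with open("logo.png", "rb") as f:
    resp = requests.put(f"{ENDPOINT}/{BUCKET}/logo.png", data=f, headers=HEADERS)
resp.raise_for_status()

# Download: likewise, the object is read back whole; there is no way to
# modify part of an object in place.
resp = requests.get(f"{ENDPOINT}/{BUCKET}/logo.png", headers=HEADERS)
resp.raise_for_status()
image_bytes = resp.content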
Redundancy in Object Storage o Achieved by storing files in multiple locations. o Example: File A: Stored on three different physical devices, potentially in different data centers. File B: Likely stored on different devices than File A. o Files in object storage are not physically bound together (unlike traditional filesystems). Comparison to Filesystems o In filesystems: Files are bound together. Deleting a folder deletes all its contents. o In object storage: Files are independent. No folders; worst-case scenario is deleting individual files. Buckets and Containers o Files in object storage are grouped into containers or buckets. Example: Separate buckets for image assets from different websites. o Buckets ≠ Folders: Buckets cannot be nested. To delete a bucket, all its files must be deleted first. Object Storage vs. Block Storage o Block storage: Mimics a physical disk; allows OS to create filesystems. Can modify portions of a file by accessing individual blocks. o Object storage: Files can only be read or written in their entirety. To modify even a small part of a file, the entire file must be reuploaded. No direct editing of files; only replacement is possible. Object IDs o Files in object storage are addressed by object IDs (analogous to filenames). Globally unique identifier used to locate data or metadata. Metadata in Object Storage o Metadata: Part of a file/sector header used to identify data contents. Essential for indexing and searching in big data applications. Can include various attributes (e.g., data type, application, security level). o Extended metadata: User-defined attributes for files. Examples: Author Authentication type Username/password Certificates, encoding, or any other specific attribute Allows for creation of sophisticated index schemes for file access and usage. Software-Defined Storage Definition and Overview o Software-Defined Storage (SDS): Not a new storage type, but a new way of using existing storage technologies. Allows virtualization of multiple storage systems into a single virtual storage system. Commonly used in cloud environments. Key Features o Interactions with cloud storage (e.g., elastic block storage) involve SDS systems. When provisioning a storage volume for a VM, you’re interfacing with an SDS system, not directly with a SAN. Control Shift in SDS o Traditional data centers: Storage management is controlled by a SAN administrator. Example: To provision storage for a VM, the SAN admin creates a logical unit number (LUN). Admin handles resizing LUNs, enabling/disabling features like deduplication, encryption, or compression. o SDS changes this dynamic: Control moves to the person provisioning the VM, eliminating reliance on SAN admins for routine tasks. Early Manifestation: Virtual SAN (vSAN) o vSAN: An early implementation of SDS. Comprises multiple VMs across different physical hosts for redundancy. Can use both local storage and SAN storage. Combines storage from multiple systems into a unified pool for provisioning. Advantages of vSAN o Simplifies storage management: If you run out of space, you can request a new LUN to add to the vSAN instead of resizing existing ones. o Offers advanced features: Encryption Deduplication Compression Replication Storage Provisioning Definition and Purpose o Storage provisioning: The process of creating and assigning storage resources, typically as volumes that are mounted or accessed by servers or remote users. o In cloud environments, storage provisioning is as critical as managing compute resources. 
Traditional Enterprise Data Center Provisioning o Complex process requiring a dedicated team of storage engineers. o Tasks involved: Installing and configuring Storage Area Networks (SANs). Creating Logical Unit Numbers (LUNs) and vSANs. Applying security configurations. Mounting new storage volumes on VM hosts using Host Bus Adapters (HBAs). o Ongoing operational tasks: Redundancy. Security. Backup operations. Cloud-Based Storage Provisioning o Cloud operations have automated and simplified the provisioning process. o Steps: 1. Access the cloud management console via a web browser. 2. Choose desired storage options: Replication Backups Volume size Storage type Encryption Service levels/tiers Security parameters 3. Click the Create button. 4. In a few minutes, the storage system is created and deployed. Key Benefit o Cloud provisioning eliminates the complexity of traditional methods, making storage deployment faster and more user-friendly. Thick Provisioning Definition o Thick provisioning: Allocates the entire requested storage capacity at the time of volume creation. Process o When a virtual disk is created: The user specifies a desired storage capacity (e.g., 100 GB). With thick provisioning, all 100 GB is immediately allocated upon disk creation. Key Characteristics o The full storage capacity is reserved upfront, regardless of whether it will be used immediately. o Ensures the requested storage is fully available from the start. Comparison to Alternative o Differs from methods where storage starts small and grows as needed (e.g., thin provisioning). Thin Provisioning Definition o Thin provisioning: Allocates storage capacity on an as-needed basis, preventing waste of unused storage. Key Features o Upon creation, the volume does not reserve the full requested capacity. A smaller amount is initially allocated. Additional capacity is added dynamically as usage increases, up to the maximum volume size. o Appears to have the requested size to the user/VM, but the actual reserved storage on the back-end is much smaller. Example o Creating a 100 GB thinly provisioned volume: The VM sees the volume as 100 GB. However, only 10 GB might be initially reserved on the back-end. Performance Considerations o When the used storage approaches the reserved amount, the backend system dynamically reserves more space. This process may cause a temporary increase in read/write latency. o Recommendation: For performance-critical applications, consider thick provisioning to avoid latency issues. Storage Overcommitment Definition o Storage overcommitment: Allocating more storage space than is physically available by using thin provisioning. Key Features o Thin provisioning ensures each VM's actual disk reservation starts small and expands as needed. o Enables efficient use of storage pools by dynamically allocating storage to virtual resources. Risks o Overcommitment can result in storage overutilization, where physical storage is fully consumed. o If all available storage is used: Write operations will fail. Resources dependent on that storage may experience abrupt failure. Best Practices o Monitor storage usage closely to prevent full storage consumption. o For critical systems where outages are unacceptable, avoid overcommitting storage. Physical to Physical Definition and Context o Physical to Physical (P2P): Migration of legacy applications running on physical server hardware in a data center to a physical server in the cloud. o Necessary for applications that cannot be virtualized. 
o Cloud providers may offer physical or "bare-metal" servers to support this need. Key Characteristics o Uncommon but available from many cloud providers. o Costly, as it requires dedicating server hardware to a single customer. P2P Migration Process 1. Options for migration: Perform a new installation of the OS and application on the target physical server. Conduct a P2P migration to move both the OS and application to the cloud. 2. Migration utilities: Required to handle the migration process. Must account for device drivers and differences in hardware platforms. Utilities may be provided by the cloud provider or third-party software companies. Handling Large Data Volumes o For substantial amounts of data, physical drives can be shipped to the cloud provider. o The cloud provider will perform the data migration on behalf of the customer. Encrypting Your Data at Rest Definition o Data at rest: Data stored on a storage medium (e.g., a drive). o Encryption at rest: The process of encrypting data before it is written to storage. Key Concepts o Ensures no traces of unencrypted data remain on the storage medium. o Encryption keys are used for encrypting and decrypting files. o Keys can be managed by either: Cloud service provider. End user (based on feature offerings). Contrast with Data In-Transit o Data in-transit: Data actively moving across a network. Categories of Encryption at Rest o Categories depend on responsibility for managing the encryption process: Managed by the cloud provider. Managed by the end user. Regulatory Importance o Some regulations mandate encryption of data at rest to protect sensitive information. Server-Side Encryption Definition o Server-side encryption: Encryption is managed and performed by the cloud provider. Key Features o The encryption process is transparent to the user. o The cloud provider: Manages encryption keys. Handles encryption and decryption automatically during data read/write operations. Implications and Risks o The provider can decrypt data at any time, giving them potential access to your data. o In cases of legal compulsion, the provider may be required to turn over decrypted data to authorities. Client-Side Encryption Definition o Client-side encryption: The customer encrypts data before sending it to the cloud provider. Key Features o The customer is fully responsible for managing encryption keys. o The cloud provider has no access to the unencrypted data. Implications and Risks o If the customer loses the encryption key, the encrypted data becomes permanently inaccessible. o Commonly used by organizations with strict security requirements to ensure that even the cloud provider cannot access sensitive data. Token Models Definition o A token is a temporary credential granting access to a resource for a limited period of time. o Used by cloud providers to secure access to resources like: Files in object storage. VMs. PaaS or SaaS applications. Purpose o Tokens authenticate APIs or command-line access to cloud services when a dedicated username and password is not feasible. o Provide time-limited access to ensure security. Example Use Case: E-commerce Website o A website selling downloadable software: Stores the software in cloud object storage. Generates a token and special URL after purchase to grant the user temporary read access to the software file. The token ensures: Limited access duration. Protection against unauthorized downloads (e.g., software piracy). Optional one-time use, so the URL becomes invalid after a single download. 
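Example (a minimal Python sketch of how a time-limited download token might be generated and validated; the signing secret, domain, and URL format are hypothetical, and real providers implement their own signed-URL schemes):

import hashlib, hmac, time

SECRET = b"server-side-signing-secret"   # hypothetical secret held by the website

def make_download_url(path, lifetime_seconds=300):
    # The token binds the file path to an expiration time.
    expires = int(time.time()) + lifetime_seconds
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"https://downloads.example.com{path}?expires={expires}&sig={sig}"

def token_is_valid(path, expires, sig):
    if int(expires) < time.time():
        return False   # the access window has closed
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)   # constant-time comparison

print(make_download_url("/software/installer.zip"))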
Key Benefits o Increased security: Access is temporary and limited. o Prevents unauthorized sharing or prolonged access. o Avoids the need for managing individual user accounts and passwords for every customer. Input/Output Operations per Second Definition o IOPS: A measure of storage performance, referring to the number of input/output (IO) operations performed per second. An IO operation is either a read or write to storage. Importance o Higher IOPS = Faster data storage and retrieval. Cloud Provider Options o Many cloud providers allow customers to choose a guaranteed IOPS level for block storage. Relationship Between Storage Size and IOPS o There is typically a direct relationship between the amount of storage provisioned and the IOPS available: Example: A 20 GB volume might offer 60 IOPS. A 100 GB volume could offer 300 IOPS. o Reason: More allocated storage means data is distributed across more disks, enabling parallel reads and writes for better performance. Compression and Deduplication Definition and Purpose o Compression: Reduces file size by storing redundant information once and using pointers to reference it within a single file. o Deduplication: Reduces storage use by identifying and eliminating redundant data across multiple files or volumes. Key Concepts o File-level compression example: In a book, replace all instances of the word "the" with a symbol to save space without losing information. o Deduplication example: A SAN deduplicates redundant data across all LUNs. Useful for virtual disks containing the same OS, potentially freeing terabytes of space (e.g., reducing the 8 GB OS footprint across hundreds of virtual disks). Benefits o Significantly reduces the storage footprint, resulting in space efficiency. Costs and Trade-Offs o Time-memory trade-off: Reduces storage usage but increases processing time, leading to slower read and write operations. Best Practices o Organize data by speed requirements: For low-latency data: Avoid compression/deduplication to maintain performance. For highly redundant, low I/O data: Use compression/deduplication to maximize storage efficiency with minimal impact. Storage Priorities: Understanding Storage Tiers Definition o Storage tiering: The practice of categorizing and managing data based on its usage, performance needs, and access frequency. Key Data Requirements o Criticality: How important the data is to operations. o Access frequency: Some data (e.g., transactional databases) requires frequent access. Other data (e.g., old corporate records) may need only occasional access. o Performance: Some applications require high read/write speeds to perform well. o Geographical placement: Where the data is physically stored. o Encryption and security: Protecting data from unauthorized access. Purpose of Storage Tiering o Allows different storage tiers to be assigned based on the specific requirements of the data. o Optimizes cost and performance by treating data differently according to its usage. Tier 1 Definition o Tier 1 storage: Used for critical or frequently accessed data. o Stored on the fastest, most redundant, and highest-quality media available. Key Features o Redundancy: Configured to allow one or more disks to fail without data loss or loss of access. o High I/O performance: Supports applications requiring rapid data read/write speeds. o Reliability and durability: Designed for maximum trustworthiness. o Manageability and monitoring: Offers the best management and monitoring capabilities.
Purpose o Ensures continuous access to critical data even in the event of hardware failures. o Ideal for mission-critical applications requiring high performance and reliability. Tier 2 Definition o Tier 2 storage: Used for data with lower performance requirements or data that is accessed infrequently. Key Features o Does not require fast read/write speeds. o Can use less expensive storage devices. o Data can be accessed via a remote storage network. Examples of Tier 2 Data o Email storage o File sharing o Web servers (where performance is important but cost-effective solutions are acceptable). Purpose o Balances performance and cost for data that doesn’t require the high performance of Tier 1. Tier 3 Definition o Tier 3 storage: Designed for rarely accessed data or backups of Tier 1 and Tier 2 data. Examples of Tier 3 media: o DVDs o Tape o Other low-cost storage media. Key Features o Low-cost option for large data volumes. o Slow retrieval times: Data access may take hours. o Focuses on cost-efficiency over performance. Performance and Tier Hierarchy o Lower tier numbers (e.g., Tier 1) = Higher performance, redundancy, and availability. Tier 1 > Tier 2 > Tier 3 in terms of performance. o Tier 3 datasets: Accessed infrequently. No need for high performance like Tier 1 or Tier 2. Multitiered Cloud Storage Design o Flexible tiering: More than three tiers can be used to meet specific performance and cost requirements. o Automation and scripting: Enable data to migrate between tiers based on data retention policies. o Classification of data: Proper tier assignment leads to significant cost savings. Avoids paying for unnecessary performance or capacity. Example Use Case o Current email: Stored in Tier 1 or Tier 2 for frequent access. o Archived email: Moved to Tier 3 for long-term retention. o Regulatory compliance: Retain older data offline (higher tiers) to reduce costs while meeting legal requirements. Managing and Protecting Your Stored Data Cloud Provider Responsibilities o The cloud provider ensures the availability and reliability of the storage infrastructure. o Durability: A measurement of how well the storage system preserves stored data. Example: 100% annual durability would mean no data loss over a year (though no provider guarantees this level, as some data loss is inevitable). Customer Responsibilities o You are ultimately responsible for: Ensuring data integrity. Maintaining data availability. o Despite cloud provider guarantees (e.g., replication, durability, or availability), the responsibility for data protection lies with the customer. Next Steps o Strategies for managing and protecting data will be discussed in the following sections. High Availability and Failover High Availability o Definition: The ability of a system to quickly and automatically recover after a component failure. o Requires redundant components so if one fails, its counterpart takes over. o During a failure: There may be a delay while the redundant component assumes control. o Cloud providers build systems for high availability, but occasional outages can still occur. o Important: High availability ≠ always available. Fault Tolerance o Definition: The ability of a system to remain continuously operational even after a component failure. o Achieved through a resilient design that anticipates and mitigates system failures. o Example of fault tolerance: A server with redundant power supplies: If one power supply fails, the other maintains power, keeping the server operational. 
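Example (a minimal Python sketch of the health-check-and-failover behavior behind high availability; the hostnames and check interval are hypothetical, and in practice a cloud provider's load balancing or failover service performs this logic for you):

import socket, time

PRIMARY, STANDBY = "primary.example.com", "standby.example.com"  # placeholders

def healthy(host, port=443, timeout=2):
    # A component is considered up if it accepts a TCP connection.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

active = PRIMARY
while True:
    if not healthy(active):
        # The redundant component takes over; the brief gap between failure
        # and takeover is why high availability is not "always available."
        active = STANDBY if active == PRIMARY else PRIMARY
        print(f"Failing over; {active} is now active")
    time.sleep(5)   # hypothetical check interval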
Key Difference o High availability: Focuses on quick recovery after failure. o Fault tolerance: Ensures continuous operation without interruption. Cross-Region Replication Definition and Purpose o Data replication: Process of placing copies of data on multiple systems to ensure disaster recovery and resiliency. o Cross-region replication: Replicating data across different regions for additional protection. Importance of Replication o If all data resides in a single availability zone: A zone failure makes the data inaccessible until operations are restored. A catastrophic event (e.g., natural disaster) could lead to permanent data loss. o Replication across multiple zones improves availability and protection. Example o A block storage volume may automatically replicate data across multiple zones. If one zone fails, the data remains intact and accessible from other zones. However, data corruption could still occur and might also be replicated. Best Practices o Replicate and retain backups to meet data retention and availability requirements. o For critical data, use cross-region replication for higher resiliency. o Many cloud providers offer automatic cross-region replication as a configurable feature. Replication Types: Understanding Synchronous and Asynchronous Replications Overview of Cross-Region Replication o For object storage: Replication process is simple: Replicate files to another region every time a file is uploaded. o For frequently changing data (e.g., transactional databases): Replication becomes more complex. Risk: Newly written data may be lost if a failure occurs before it is backed up. Replication frequency is critical to minimize data loss. Synchronous Replication o Definition: Data is written to primary and secondary storage simultaneously. A transaction is marked as successful only after both sites have completed the write. o Benefits: Ensures consistent and synchronized copies. Supports high-end transactional databases. Provides instantaneous failover and fast recovery time objective (RTO). o Use Cases: Critical applications where no data loss can be tolerated. o Drawbacks: High cross-region network transit costs for busy databases. Asynchronous Replication o Definition: Data is written to primary storage first, then copied to the backup site after a delay. Follows a store-and-forward design, providing eventual consistency. o Benefits: Lower cost compared to synchronous replication. o Use Cases: Databases with less frequent write activity where immediate synchronization is unnecessary. o Drawbacks: Delays in data replication can lead to data inconsistencies during failures. Using Mirrors in Cloud-Based Storage Systems Definition o Site mirror: A configuration that deploys redundant resources across multiple regions to ensure continuous operation in case of a region failure. Purpose o Ensures high availability and fault tolerance for critical systems with stringent availability requirements. Mirror Configurations 1. Active-Standby (Hot-Standby): One region serves as the primary (active) region. Data is continuously and immediately replicated to the secondary (standby) region. In case of a primary site failure, the standby mirror takes over processing. 2. Active-Active: Both regions are actively used simultaneously. Provides load balancing and redundancy for critical applications. Key Considerations o Choose the configuration based on performance, availability, and cost requirements. 
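Example (a minimal Python sketch contrasting the two replication modes; write_to() is a placeholder for a real storage write, and the queue-based background copy is one common store-and-forward approach, not any provider's actual implementation):

import queue, threading

pending = queue.Queue()   # store-and-forward buffer for asynchronous replication

def write_to(site, data):
    pass   # placeholder for a real storage write

def synchronous_write(data):
    # Success is reported only after BOTH sites confirm the write,
    # so the copies stay identical at all times.
    write_to("primary", data)
    write_to("secondary", data)
    return "committed"

def asynchronous_write(data):
    # Success is reported after the primary write; the secondary copy
    # catches up later (eventual consistency).
    write_to("primary", data)
    pending.put(data)
    return "committed"

def replicator():
    # Background thread drains the buffer to the secondary site.
    while True:
        write_to("secondary", pending.get())

threading.Thread(target=replicator, daemon=True).start()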
Cloning Your Stored Data Definition and Purpose o Cloud providers enable replication of data on block storage volumes. o Useful for backup, disaster recovery, or data migration. Process of Cloning Block Storage Data 1. Snapshots: Automatically take a snapshot of the block storage volume at regular intervals (e.g., every few hours). Snapshots can be replicated to a different region. 2. Implementation Methods: Use a custom script to automate snapshots and replication. Some cloud providers offer native functionality as part of their elastic block storage system. Data Recovery o To recover data: Use the snapshot to provision a new storage volume. Attach the new volume to a VM for access. Key Benefits o Simplifies data replication and recovery. o Provides a reliable mechanism for cross-region data availability. Using RAID for Redundancy Definition and Purpose o RAID (Redundant Array of Inexpensive Disks): A system that combines multiple drives to achieve: Performance gains. Storage redundancy (fault tolerance). Large storage capacities. Benefits of RAID o Provides fault tolerance while maintaining performance. o Enables creation of large storage volumes by grouping multiple disks. o Increases performance, speed, and volume size when logical units span multiple drives. RAID in the Cloud o RAID management is typically handled by the cloud provider as part of their storage infrastructure. o Hardware RAID: Configured using RAID controllers (hardware cards). o Software RAID: Allows users to combine elastic block storage (EBS) volumes and treat them as a single drive. RAID Levels o Different configurations are referred to as RAID levels. o Each level is suited for specific use cases, balancing performance, redundancy, and storage needs. RAID 0 Definition of RAID 0: o RAID 0 is a storage technique where a block of data is split across two or more disks. o This process of dividing the data across disks is called striping. Data Storage in RAID 0: o Files are broken into blocks of data. o These blocks are then striped (distributed) across multiple disks in the system. o Example: In a RAID 0 array with two disks, half of the file is stored on one disk, and the other half is stored on the second disk. Key Characteristics: o No Redundancy or Error Detection: RAID 0 lacks fault tolerance; if one disk fails, all data is lost. o Despite lacking redundancy, RAID 0 is still classified as a RAID level. Performance: o Allows for parallel read and write operations, making it fast. o Ideal for use as a caching drive to store dispensable data. Disadvantage: o High risk of data loss due to the absence of redundancy. RAID 1 Definition of RAID 1: o RAID 1 is a storage technique where all data is stored on multiple disks. o Commonly referred to as mirroring. Key Characteristics: o Ensures complete data redundancy by duplicating data across two or more disks. o If one disk fails, the data is still accessible from the mirrored disk(s). Performance: o Improved read times: Data can be read from multiple disks simultaneously (in parallel). o No significant improvement in write performance: Data must be written twice (once to each disk), which slows write operations. Disadvantage: o High cost: RAID 1 is the most expensive RAID level because 50% of storage space is used for redundancy rather than increasing capacity. RAID 1+0 Definition of RAID 1+0: o A combination of two RAID levels: RAID 1 (mirroring) and RAID 0 (striping). o Commonly referred to as RAID 10 or RAID 1+0. 
Configuration: o Two or more RAID 1 arrays (mirrored) are first created. o These mirrors are then striped using RAID 0. Key Benefits: o Provides both redundancy and higher performance: Redundancy: Ensures data is safe through mirroring. Higher performance: Striping improves read and write speeds compared to RAID 1 alone. Trade-Off: o Higher cost due to the need for more disks (for both mirroring and striping). RAID 0+1 Definition of RAID 0+1: o A combination of RAID levels where RAID 0 (striping) is performed first, and then the striped data is mirrored using RAID 1. o Essentially the inverse of RAID 1+0 (RAID 10). Configuration: o Stripe first: Data is striped across multiple disks. o Mirror second: The striped data is then duplicated (mirrored) for redundancy. RAID 5 Definition of RAID 5: o Known as striping with parity. o Combines data striping (like RAID 0) with parity for fault tolerance. How It Works: o Requires at least three disks: Two disks for striping and one disk for parity. o Parity calculation: Parity is calculated using the XOR (exclusive OR) operation on data bits. Example: Bit 1 written to Drive 1, bit 0 to Drive 2. Parity bit (result of 1 XOR 0) written as 1 to the parity drive. If a drive fails, its data can be reconstructed by XOR'ing the bits of the other drives. Data Reconstruction Example:
o Drive 1 | Drive 2 | Parity Bit
  1       | 0       | 1
  0       | 1       | 1
  1       | 1       | 0
  0       | 0       | 0
o Any missing data can be recalculated using the parity bit and the surviving drive data. Performance: o Improved read performance: Data is read from multiple disks in parallel. o Reduced write performance: Each write also requires calculating and writing parity (see Disadvantages). o Higher performance with more disks: Using five or more disks is recommended for optimal performance. Advantages: o Provides fault tolerance: Can withstand the failure of one disk. o Efficient use of storage: Requires less disk space compared to other RAID levels with redundancy. o Popular in cloud data centers due to balance between redundancy and performance. Disadvantages: o Write-intensive workloads: Performance slows due to parity calculations. o Rebuild time: When a disk fails, rebuilding the array is time-consuming and impacts performance. o Vulnerability to multiple disk failures: If more than one disk fails simultaneously, the array will fail. Transition to RAID 6: o RAID 6 addresses the risk of multiple disk failures by using two parity drives (covered separately). RAID 6 Definition of RAID 6: o An extension of RAID 5 with enhanced fault tolerance. o Also known as striping with double parity. Key Feature: o Two parity disks are used instead of one (as in RAID 5). o This allows RAID 6 to withstand two simultaneous hard drive failures without losing data. Advantages: o Improved fault tolerance: Can handle two drive failures compared to RAID 5, which can handle only one. Disadvantages: o Slower write performance compared to RAID 5: This is due to the overhead of writing the second parity stripe. Quotas and Expiration Advantages of Cloud Storage: o Offers seemingly unlimited capacity. o However, this may lead to potentially unlimited costs. Tools to Control Storage Costs: 1. Quotas: Limits the amount of data that can be stored on the system. Commonly used with filesystem storage (e.g., NFS or SMB). Prevents users from monopolizing shared storage resources (e.g., storing large personal files like music libraries). Some cloud providers may allow quotas for object storage as well. 2. Object Expiration: Available only for object storage.
Automates the deletion of files after they reach a specified age. Alternatively, files can be moved to a lower-cost storage tier and later deleted. Ideal for temporary data such as logs that don't need to be stored indefinitely. Block Storage: o Does not expand automatically, unlike object or file-level storage. o Quotas and object expiration are unnecessary. o Block storage volumes can be resized manually when needed. Storage Security Considerations Importance of Storage Security: o The purpose of information security is to protect data. o Data spends a significant amount of time in storage, making storage security critical. Regulatory Requirements: o Various privacy regulations may apply to stored data, such as: Data must be stored in its country of origin. Data must be encrypted during transit and at rest on storage devices. Cloud Provider Security Options: o Cloud providers offer a variety of storage security features to meet regulatory and security needs. Access Control for Your Cloud-Based Storage Definition of Access Control List (ACL): o A security mechanism consisting of an ordered list of permit and deny statements. o Used to control and secure access to storage resources in the cloud. Function of ACLs: o ACLs explicitly permit or deny access to specific storage resources. o Similar to network ACLs used for switches, routers, and firewalls. ACL Features: o Each storage object can have an ACL defining the following permissions: Read Write Modify Delete o Permissions can vary by user group: Example: One group may have read-only access to a storage bucket. Another group may have both read and write access to the same bucket. User Group Definitions: o Groups can include: Everyone Authenticated users Anonymous users Administrators Custom groups as defined by the organization. o ACLs filter these groups to control access to storage system objects. Example Use Case: o A customer purchasing software from a website receives a token granting read-only permission to download the file. The token does not allow any other actions, such as deleting the file. The ACL attached to the file contains an access control entry (ACE) specifying the token holder's permissions. Expanded Example: o If a developer needs permission to modify the file, an additional ACE can be added to the ACL. This grants the developer's cloud account the ability to modify and download the file. Understanding Obfuscation Definition of Obfuscation: o A technique used to make information difficult to understand or interpret. Example of Obfuscation: o Using random strings for usernames instead of obvious names like "admin." Purpose: o Security: Obfuscation helps protect data by making it harder for hackers or hijackers to understand or use stolen data. o Defense mechanism against unauthorized access. Malicious Use of Obfuscation: o Can be exploited by malicious actors to hide malware. o Example: Obfuscated code in web pages may appear benign but contain harmful content. Storage Area Networking, Zoning, and LUN Masking Importance of Understanding SAN for Cloud Migration o Migrating to the cloud requires a thorough understanding of storage configurations in the data center. o Challenge: Identifying where data resides for servers using SAN-backed storage. o Knowing how SAN storage operates helps to ask the right questions and avoid leaving critical data out of migration plans. Zoning in Storage Area Networking o Ethernet vs. Fibre Channel Switches: Ethernet switches: Allow open communication by default.
Fibre Channel switches: Prohibit SAN port communication by default until zoning is configured. o Definition of Zoning: A SAN network security process that restricts access between initiators (e.g., servers) and targets (e.g., storage volumes). Allows admins to define which storage volumes a server or virtual machine (VM) can access. o Types of Zoning: Hard Zoning: Defined by groups of SAN ports. Soft Zoning: Defined by worldwide names (WWNs). o Zone Sets: Multiple zones can be grouped into zone sets. Zone sets are activated on the SAN fabric switch. o Practical Applications of Zoning: Restricts storage access to specific OS types (e.g., Windows servers access Windows block storage, Linux servers access Linux logical units). Prevents filesystem corruption (e.g., Linux systems attempting to access Windows filesystems). Ensures that VMs boot from SAN storage rather than local storage, mounting their correct storage volumes. LUN Masking o Definition: Similar to zoning but configured at the storage controller level instead of the SAN switch level. Controls access rights between LUNs (Logical Unit Numbers) and VMs or bare-metal servers. o Purpose of LUN Masking: Restricts access to specific LUNs for security and to prevent unauthorized access. Ensures only the intended servers or clusters access particular storage resources. o Configuration Options: A single server can access a specific LUN. A group of servers (e.g., in a cluster) can share access to the same storage resources. Essential when: A server needs to boot off the SAN and requires exclusive LUN access. Applications require shared access for VMs moving between bare-metal servers. Key Differences Between Zoning and LUN Masking
Feature        | Zoning                                        | LUN Masking
Configured at  | SAN fabric switch                             | Storage controller level
Access Control | Between initiators and targets on the SAN     | Between LUNs and individual servers/VMs
Purpose        | Define which servers can access which targets | Limit LUN access to specific servers/VMs
Summary of Use Cases o Zoning ensures that: Only compatible OS servers access designated storage. VMs boot correctly from SAN without conflicts. o LUN Masking ensures that: LUNs are securely and exclusively accessed by designated servers or server groups. Flexible configurations support both exclusive and shared LUN access scenarios. Hyperconverged Appliances Hyperconvergence: Combines server virtualization with the virtual SAN concept. Traditional Data Centers: Historically relied on centralized storage (e.g., SAN). Shift in Approach: Consolidates compute and storage into a unified system. Hyperconverged Appliances: Virtualization hosts now contain large numbers of drives (providing terabytes or more of storage). Form a single compute and storage cluster. Can be managed as a single unit for simplicity and efficiency. Data Loss Prevention Definition and Purpose o Data Loss Prevention (DLP): Methods to detect and control how data is used. o Differs from traditional access control: Goes beyond binary allow/deny decisions. Considers additional factors such as: Nature/classification of the data. Medium (where data is being written to or read from). Amount of data being accessed. Time of day data is accessed. Use of Machine Learning in DLP o Anomaly Detection: Similar to unsupervised machine learning used for identifying suspicious activity. Example: DLP flags a user copying large volumes of data (e.g., gigabytes) if it's unusual for that user.
o Automatic Classification: DLP uses machine learning to classify data and restrict it based on sensitivity. Example: Data with Social Security numbers is flagged as sensitive and access is restricted. o Outbound Traffic Monitoring: Detects and blocks transmission of sensitive information (e.g., credit card numbers) over the network. Additional DLP Capabilities o Enforcing Encryption: Automatically encrypts external drives when plugged in. Prevents unauthorized access if the device is lost or stolen (e.g., in a parking lot). o Policy Customization: Admins define granular DLP policies to classify and protect data/devices. Policies ensure data is handled appropriately based on its classification. Key Benefits o Protects sensitive information from unauthorized access or loss. o Prevents accidental or intentional data leakage. o Helps enforce compliance with data security standards. Accessing Your Storage in the Cloud Cloud Storage Volumes: o When hosting services with virtual machines (VMs) in the cloud: Storage volumes are created in the cloud. These volumes are mounted locally as disks on the VMs. Remote Access to Cloud Storage: o Cloud vendors provide special client software to synchronize files between local devices and the cloud. o File Synchronization: Files stored on local devices are synced to the cloud. Files are replicated across other devices connected to the same account. Examples: Google Drive, Dropbox, OneDrive. Other Cloud Storage Access Methods: o Amazon S3: Accessed via a standard web browser. o APIs and CLI: Cloud-specific API calls and Command-Line Interface (CLI) tools are used for interacting with cloud storage. These methods are unique to the specific cloud storage offerings. Performing a Server Migration Understanding Server Migration o Goal: Migrate existing servers and applications to the cloud, which is a virtualized environment. o Requires understanding of current server specifications and performance metrics. Gathering Server Specifications o Required Information: CPU, memory, storage, and network specs of existing servers. Baseline performance metrics for these resources: Helps determine if current specs are sufficient or require upgrades for growth and performance. Analyze performance variations based on: Type of applications running. Time-of-day workload increases (e.g., CPU, disk, or I/O activity). o Purpose: Use collected data to properly scale and configure the new VM's hardware profile. Scheduling Migration Downtime o Downtime: Migration may require scheduled downtime. Use standard data center maintenance windows to implement changes. o Implementation Document: Details all required steps for the migration process. Includes a rollback plan to restore changes if needed. Covers validation steps to ensure migration is successful. Addressing Performance Issues o Baseline Data: Used to identify and mitigate existing performance issues during migration. Common performance bottlenecks include: CPU, memory, storage, and network bottlenecks. o Changes to the VM's configuration can address these issues during migration. Post-Migration Validation o Testing Plan: Validate that all changes are operating as expected. Test and verify all components of the new VM (a detailed, time-consuming process). o New Baseline: After migration, establish a new baseline to: Compare with original data. Confirm whether the new server is performing as expected. Key Takeaways o Proper specification gathering and baseline metrics are critical for successful migration.
o An implementation plan and a rollback strategy minimize risks during the migration process. o Post-migration testing and performance validation ensure the new VM meets operational requirements. Different Types of Server Migrations Starting Fresh in the Cloud o Process: Provision new VMs with a clean operating system (OS). Install necessary applications. Migrate data as needed. o Advantages: Fewer potential problems during migration. o Disadvantages: Can be cumbersome and time-consuming. Migrating Existing Machines to the Cloud o Process: Convert existing machines into a cloud-compatible format. Use tools provided by major cloud service providers to automate much of the process. Key Considerations o Approach Selection: Choose between starting fresh or migrating existing machines based on time, complexity, and resources. o Tools and Compatibility: Ensure compatibility of existing servers with the chosen cloud provider's infrastructure. Note: o Bare-Metal Servers: Offered by most cloud providers for specialized use cases. Allow installation of operating systems directly on the physical server. o Use Case: Ideal for applications that do not support virtualization. Physical to Virtual Definition o P2V Migration: The process of migrating a non-virtualized physical server running an operating system and applications to a virtual machine (VM) hosted on a virtual host. Data Copy Mechanisms o Block-Level Copy: Copies each block of data from the physical drive verbatim to a virtual disk. Creates an exact replica of the original disk. o File-Level Copy: Copies only individual files to an already formatted virtual disk. Typically used for: Changing filesystem formats. Migrating to a smaller disk size. P2V Software Utilities o Examples of Tools: VMware vCenter Converter. Microsoft Virtual Machine Manager. o Automated P2V Utilities: Offered by third-party software companies and cloud providers. Simplify and automate the P2V migration process. Key Benefits o Enables seamless migration of physical servers into a virtualized environment. o Offers flexibility to replicate exact configurations or modify disk/file formats as needed. Virtual to Virtual Definition of V2V Migration: o A virtual-to-virtual (V2V) migration is a simpler process compared to a physical-to-virtual (P2V) migration. Process Overview: o Involves taking an existing virtual machine's (VM) virtual disk. o Converts the virtual disk into a format compatible with the cloud provider. o The converted image is used to launch a VM in the cloud. Considerations for V2V Migration: o Hypervisor model and cloud provider requirements: Each hypervisor and cloud service provider may have unique file formats for VMs. o The disk file format of the VM being imported must be supported by the cloud provider. Common VM Disk File Formats: o VDI (Virtual Disk Image): Oracle VirtualBox format. o VMDK (Virtual Machine Disk): VMware format. o VHD (Virtual Hard Disk): Microsoft format. o AMI (Amazon Machine Image): Amazon format. Reference: Figure 2.23 illustrates the process of V2V migration. Virtual to Physical Definition of V2P Migration: o A virtual-to-physical (V2P) migration involves converting a virtual server into a physical server. o Less common compared to other migration types. Use Case: o Typically used when an application vendor does not support running the application in a virtual machine (VM). Key Considerations: o Requires careful attention to hardware and virtualization software compatibility.
o A fresh installation of the operating system and application may be necessary. o Each migration has unique requirements, necessitating research and exploration of options. Reference: Figure 2.24 illustrates the process of V2P migration. Online or Offline Online Migrations: o Preferred due to their shorter time requirement. o Restriction: Limited by the networking bandwidth between: The data center hosting the existing server. The cloud data center where the new VM will be migrated. o If bandwidth is insufficient, an offline migration is used as a fallback. Offline Migrations: o Process: Disk images are stored on physical storage media and shipped to the cloud provider. The cloud provider imports the disk images into cloud storage. Images are then made available for provisioning new VMs. o Drawbacks: Introduces a delay due to the shipping process. Additional time is required for the cloud provider to process and import the images. Migrating Your Storage Data Overview: o Regardless of migration method (P2V, V2V, or V2P), centralized storage data (e.g., from a SAN or NAS) may need to be migrated separately. o Typically, this involves copying files over the network to the cloud environment. Collaboration with Cloud Providers: o The cloud service provider will assist with the transition process. o The migration process depends on the provider's storage offerings and infrastructure (refer to Figure 2.25). Planning the Storage Migration: o Considerations: Bandwidth availability: How much bandwidth is available for uploading the data. Time requirements: Uploading large amounts of data can take significant time: Example: Petabyte-scale storage arrays may take weeks or months to transfer over a network. o Challenges: Slow internet connections may make direct uploads infeasible. Workarounds for Large Data Transfers: o Data transfer appliances: Appliances are shipped to the private data center, connected to the storage network, and used for a local data transfer. After the local transfer, the appliance is shipped to the cloud provider for direct upload to the cloud storage. o Background upload process: Appliances can act as local data stores in the private data center while performing uploads to the cloud in the background. o Shipping container-sized storage systems: Some providers offer large-scale storage systems that can be shipped between the data center and the cloud provider. Cloud Provider Solutions: o Providers have unique offerings to reduce delays associated with transferring large data volumes. Addressing Application Portability Definition and Importance of Application Portability: o Application portability: The ability to migrate applications between cloud providers without altering the application’s architecture. o Benefits of portability: Avoids being locked into a specific cloud vendor due to reliance on proprietary services. Enables flexibility in migration if issues arise, such as: Service level agreement (SLA) breaches. Extended outages. Geographical, regulatory, or pricing issues. Key Concepts: o Traditional Applications: Written to run on standard Linux or Windows servers. Best suited for the IaaS (Infrastructure as a Service) model, as they rely on basic resources (compute, storage, and networking). Portability: Easiest to move between cloud providers. Primary challenges: o Performing V2V conversions. o Rebuilding new VMs from scratch. o Cloud-Native Applications: Designed to leverage proprietary services offered by specific cloud platforms. 
Examples of proprietary services: Object storage. Managed relational and nonrelational databases. Typically labeled with terms like "managed" or "elastic". Portability Challenges: Require rearchitecting the application, potentially including changes to its source code. Specific issues: o Managed databases: Need to change database connection strings when moving providers. o Object storage services: Depend on provider-specific APIs or SDKs; migration involves adapting the application to the new provider’s tools. Key Takeaway: o Application portability should be a central consideration in migration planning to maintain operational flexibility and avoid reliance on proprietary cloud services that limit mobility. Workload Migration Common Procedures Planning and Preparation: o Extensive planning is required before migrating workloads. o Applications selected for migration should be thoroughly tested and evaluated for interoperability. Testing and Validation: o Set up a test environment to: Validate application functionality before moving into production. Ensure the application works as expected in the cloud environment. o During validation, assess the following factors: Performance. Service levels/uptime. Serviceability. Compliance. Security. o Evaluate trade-offs between: Hosting internally in a private data center. Running the application in the cloud. Migration Execution: o Migrations should: Be overseen by a project manager. Be a collaborative effort involving all relevant groups within the organization. o Follow current best practices to ensure a smooth and efficient transition. Examining Infrastructure Capable of Supporting a Migration Purpose: o Analyze the underlying infrastructure and identify potential issues that could impact the migration. Key Factors to Examine and Mitigate: o Data transfer delays: Investigate how long it will take to transfer data. o Downtime requirements: Determine how much downtime is needed for the migration process. o Regulatory or legal concerns: Address any compliance issues related to data migration. o Scheduling the migration: Select an optimal time to minimize operational disruption. Available Network Capacity Impact of Network Bandwidth: o Network bandwidth may limit the feasibility of an online migration. o Large data volumes combined with limited network capacity can result in excessive migration times. Possible Solutions: o Add Internet bandwidth prior to the migration. o Opt for an offline migration if network capacity is insufficient. Planning Considerations: o The project team must: Determine the amount of data to be transferred to the cloud. Calculate the time required for an online migration using current network capacity. Downtime During the Migration Expected Downtime: o Downtime is inevitable during a migration process. Strategies to Reduce Downtime: o Build out cloud infrastructure from scratch instead of migrating VMs directly: Enables prototyping to identify and resolve issues before migration. Planning for Unexpected Issues: o Allocate additional downtime as a buffer in case of unexpected complications. o Include time to roll back to the original state if the migration is unsuccessful. Selecting a Migration Window: o Choose a time window based on: Local organizational policies. Periods of light workload to minimize operational impact. Legal Questions About Migrating to the Cloud Legal compliance requirements must be thoroughly investigated before migration begins. 
This process should be incorporated into the pre-migration planning phase by the project management team. Legal and compliance considerations should be addressed in the initial design phase of the project to: o Ensure the cloud architecture aligns with legal restrictions. o Avoid potential legal complications during or after migration. Local Time Zones and Follow-the-Sun Migration Constraints Time Zone Considerations for Migration: o Plan migrations across multiple time zones carefully to avoid unnecessary downtime. o Account for local time zones of target data centers: Schedule migrations during low-usage periods in the local time zone of the target region. Example: Avoid impacting Asian operations during peak production hours if migrating from Europe to Asia. o Identify and manage time zone constraints as part of the migration plan. Follow-the-Sun Support Model: o Common in cloud computing and other IT disciplines. o Involves round-the-clock global support by leveraging time zones: Operations are active in parts of the world during their business hours. Centers in different time zones hand off operations at the end of their shifts to the next center in a later time zone. Ensures continuous support without downtime. Managing User Identities and Roles Balancing Security and Usability: o Protect cloud resources from unauthorized access while ensuring legitimate users can access resources easily. o Effective user access control requires a balance between security and usability. Key Elements of User Access Control: o Authentication: Identifying the user and proving their identity. o Authorization: Determining what actions the authenticated user is permitted to perform. Access rights define the user's allowed activities. Topics Covered: o Role-Based Access Rights: Assign rights based on an administrative user's role within the network. o Access Control Types: Mandatory Access Control (MAC) vs. Discretionary Access Control (DAC). o Multifactor Authentication (MFA): Adds additional security layers to the login process by requiring multiple authentication steps. o Federations: Concept of federated identity management to enable secure access across multiple systems or organizations. RBAC: Identifying Users and What Their Roles Are Overview of Role-Based Access Control (RBAC): o Definition: Access rights are granted or restricted based on users' roles within an organization. A role is a collection of permissions that specify what activities are allowed or denied on specific resources. Example: A role may permit creating a VM but restrict deleting one. Database administrators might have full database management permissions but be restricted from VM or storage operations. Defining and Assigning Roles: o Roles are defined based on organizational needs and often align with job duties: Example roles: Developers Network Administrators Human Resources o Users inherit permissions when assigned to a specific role. Guidelines for Role Scope: o The scope of a role should: Be broad enough to cover all systems necessary for the user's tasks. Avoid granting excessive access to systems unrelated to the user's responsibilities. Roles for Applications: o Roles aren't limited to people; they can also be applied to applications: Example: VMs running an image processing application could be assigned a role with access to a cloud storage bucket. What Happens When You Authenticate? Definition of Authentication: o Authentication is the process of: Identifying a user. Confirming the user is who they claim to be.
What Happens When You Authenticate?
Definition of Authentication:
o Authentication is the process of identifying a user and confirming that the user is who they claim to be.
o After authentication, permissions are granted based on the user's assigned access rights.
Methods of User Authentication:
o Username and password combinations.
o Variations, such as tokens and biometric methods (e.g., fingerprints, facial recognition).
o Cookies for web access:
  Example: A visitor to an e-commerce site logs in with a username and password. After authentication, a cookie containing an identity token is stored in the browser for subsequent identification.
Application Authentication:
o Applications may need to authenticate with cloud services using:
  The cloud provider's API.
  A security token attached to a role that specifies the application's access permissions.

Understanding Federation
Definition of Federation:
o Federation enables the use of a third-party identity management system to authenticate access to cloud resources.
  Example: Using Microsoft Active Directory for authentication to both corporate and cloud systems.
Benefits of Federation:
o Simplifies cloud migration by allowing users to access cloud resources with their existing corporate login credentials.
o Eliminates the need to assign separate credentials for cloud access.
Standards for Federation:
o Based on industry standards such as Security Assertion Markup Language (SAML), which ensures interoperability between different organizations' systems.
Use Cases:
o Common in cloud-based e-commerce:
  A single browser login integrates with multiple systems, such as shopping platforms, banks, payment processors, and shipping and warranty services.
  Users are not required to log in separately to each system.

Single Sign-On Systems
Definition and Purpose:
o Single sign-on (SSO) reduces the need to log into multiple systems individually by enabling users to log in once and gain access to multiple systems.
o Centralizes authentication across systems, simplifying user administration.
Benefits of SSO:
o Eliminates the need to remember and manage multiple username/password combinations.
o Saves time by reducing the repetitive entry of authentication credentials.
o Ensures secure session termination: logging off from the directory service disconnects the user from all accessed systems.
Examples of SSO Implementation:
o SSO groups: For example, a "web administrators" group grants an administrator access to all cloud web servers in the group without separate logins for each server.
o Directory services using LDAP (Lightweight Directory Access Protocol): Users log into the directory once and, based on their rights, can access network systems without additional logins.
Use Case:
o Simplifies access for administrators managing multiple cloud resources or systems under their control.
o Streamlines access to interconnected systems within an organization.

Understanding Infrastructure Services
Networking Services:
o Services related to networking include:
  IP address management.
  Load balancing.
  Network security devices, such as firewalls and intrusion detection and prevention systems.
  Security services.
  DNS (Domain Name System) services.

Domain Name System
Purpose of DNS:
o Resolves domain names to the IP addresses that the IP protocol uses to connect to remote devices.
  Analogy: Similar to a phone book, where you look up a business name to find its number.
o DNS lookup process:
  A server or workstation queries a DNS server.
  The DNS server replies with the correct IP address for the given domain name.
Technical Details:
o DNS uses TCP and UDP port 53 (see the lookup example below).
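The resolution step is easy to observe from code. The short sketch below asks the operating system's resolver, which in turn queries the configured DNS server, for a name's addresses; example.com is just a placeholder domain.

```python
import socket

# Resolve a hostname to its IP addresses via the system's configured DNS resolver.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # one A (IPv4) or AAAA (IPv6) record address per line
```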
Security Concerns:
o DNS is insecure by default, making it prone to man-in-the-middle attacks:
  Attackers intercept DNS queries and return forged responses (e.g., redirecting users to malware-infected sites).
Security Enhancements:
o DNS Security Extensions (DNSSEC):
  Adds authentication and data integrity checks.
  Detects whether queries or responses have been altered in transit.
  Does not encrypt traffic; intercepted traffic can still reveal the queries made.
o DNS over HTTPS (DoH):
  Provides authentication, data integrity, and encryption of DNS queries.

Dynamic Host Configuration Protocol
Purpose of DHCP:
o Automatically assigns IP addressing information to clients on a network.
o Eliminates the need for manual/static configuration of addressing information when connecting to a network.
Functions of DHCP:
o Provides:
  An IP address.
  Default gateway information.
  DNS information.
  Additional network configuration details as needed.
Technical Details:
o DHCP uses UDP; servers listen on port 67 and clients on port 68.

Certificate Services
Purpose of Certificate Services:
o Offered by most cloud providers to outsource the creation, management, and deployment of Transport Layer Security (TLS) certificates.
o TLS certificates are used for:
  Authenticating websites.
  Encrypting traffic between a website and its clients.
TLS Certificate Import:
o Customers can import third-party TLS certificates into the cloud provider's certificate management service.
Certificate Authority (CA) Role:
o The cloud provider's certificate service can function as a certification authority (CA):
  It issues new TLS certificates.
  These certificates can be used with load balancers and content distribution networks (CDNs).
TLS Certificate Lifecycle:
o Expiration by design: TLS certificates have a fixed expiration date.
o The certificate service fully manages the lifecycle of certificates it issues:
  Automatic renewal before expiration ensures continuous functionality (a sketch for checking expiration dates follows the load-balancing notes below).

Load Balancing
Definition:
o Distributes incoming connections across multiple servers to handle workloads that a single server cannot manage.
Functions of Load Balancers:
o Distribute the workload across target servers.
o Offload tasks that reduce server load, such as encryption, compression, and TCP handshakes.
Key Benefits:
o Redundancy: Ensures continuous availability of services even if one server fails.
o Scalability: Enables multiple servers to work together and share the load efficiently.
How Load Balancing Works:
o The domain name of a website resolves to the IP address of the load balancer's interface, not to the servers hosting the website.
o The load balancer:
  Distributes traffic and connections to one of the many connected servers.
  Monitors server health and stops routing connections to servers with detected issues (see the sketch below).
Use Case:
o Commonly placed in front of web servers.
o Facilitates website scaling by leveraging multiple servers in the cloud.
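To illustrate the distribution and health-check behavior just described, here is a toy round-robin balancer. It is a conceptual sketch only; real cloud load balancers are managed services, and the server names here are hypothetical.

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: rotate across targets, skipping unhealthy ones."""
    def __init__(self, servers: list[str]) -> None:
        self.servers = servers
        self.healthy = set(servers)            # kept current by health checks
        self._cycle = itertools.cycle(servers)

    def mark_unhealthy(self, server: str) -> None:
        self.healthy.discard(server)           # stop routing to a failing server

    def next_server(self) -> str:
        for _ in range(len(self.servers)):     # at most one full rotation
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
lb.mark_unhealthy("web-2")                     # a health check failed
print([lb.next_server() for _ in range(4)])    # web-2 never receives traffic
```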
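And for the certificate-lifecycle notes above: a certificate's expiration date is easy to verify from a client. This sketch fetches a live site's TLS certificate and prints its expiry using only the standard library; example.com is a placeholder host, and the printed date is illustrative.

```python
import socket
import ssl

def cert_expiry(host: str, port: int = 443) -> str:
    """Return the notAfter (expiration) field of a host's TLS certificate."""
    ctx = ssl.create_default_context()  # verifies the chain against trusted CAs
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

print(cert_expiry("example.com"))  # e.g. 'Jan 15 23:59:59 2026 GMT'
```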
Multifactor Authentication
Definition:
o Adds an extra layer of security by combining token-based systems with the traditional username and password model.
MFA Authentication Factors:
o Something you have (e.g., an ATM card or physical device).
o Something you know (e.g., a PIN or password).
o Something you are (e.g., fingerprints or other biometrics).
Examples of MFA:
o ATM: Requires an ATM card (something you have) and a PIN (something you know).
o Secure data center access: Requires presenting an ID (something you have) and fingerprints (something you are).
MFA in Information Systems:
o Typically requires a one-time token during authentication:
  A string (numeric or alphanumeric) that changes at regular intervals.
  Designed to be nonsequential and usually four or more characters long.
  Has a short lifespan and must be entered at login (see the sketch below).
One-Time Token Sources:
o More secure options: A physical key fob or a virtual MFA smartphone app.
  Highly secure, but losing the phone or key fob can lock the user out (account recovery requires contacting the provider).
o Less secure but convenient options: A token sent via email or text message.
  More prone to compromise, but easy to implement and widely supported.
Common Usage:
o Stronger security systems prioritize physical or virtual MFA tokens.
o Websites with large user bases often rely on email or text tokens for ease of use.
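Virtual MFA apps typically generate these one-time tokens with the time-based one-time password (TOTP) algorithm standardized in RFC 6238; the notes above do not name a specific scheme, so treat this as one common instance. The sketch below is a minimal standard-library implementation, and the hard-coded secret is a made-up example, not a real credential.

```python
import base64, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30-second time step
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical base32 shared secret; the server and the MFA app each hold a copy.
print(totp("JBSWY3DPEHPK3PXP"))  # a new nonsequential 6-digit token every 30 seconds
```

Because both sides derive the token from the shared secret and the current time, the token never travels over email or SMS, which is why this approach is considered the stronger option above.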
Firewall Security
Definition:
o A firewall inspects network traffic and compares it against a rules list to determine whether the traffic is allowed or blocked.
o Blocked traffic is not permitted to enter the network.
Default Deny Behavior:
o Known as whitelisting: only explicitly allowed traffic can pass.
o Decisions are based on factors such as:
  Source and destination IP addresses.
  Protocol number.
  Port type and number.
Firewalls in the Cloud:
o Firewalls are typically abstracted rather than existing as discrete hardware devices.
  Cloud firewalls allow rules to be applied to individual cloud resources (e.g., VM network interfaces).
o Virtual firewall appliance:
  A virtual firewall can be provisioned as a VM in the cloud.
  Traffic can be forced to pass through this firewall to reach cloud resources.
  Acts as a firewall at the edge of the cloud network.
Benefits of Virtual Firewalls:
o Logging: Tracks information about every packet entering (ingress) or leaving (egress) the cloud network.
o VPN access: Can provide secure, encrypted access via a virtual private network (VPN).
o Hybrid cloud integration: Can integrate with on-premises firewall management platforms, simplifying management and monitoring of the entire network.
Deployment Options:
o Standard deployments include:
  Cloud-based firewall rules applied to resources.
  Virtual firewall appliances placed at the edge of the network (see the rule-matching sketch below).
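As a final illustration, the sketch below shows default-deny matching over the packet attributes listed above. The rule set, prefix-based source matching, and packet fields are invented for the example and are far simpler than any real firewall's rule engine.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src_prefix: str  # simplistic string-prefix match stands in for real CIDR logic
    protocol: str    # e.g. "tcp" or "udp"
    port: int

ALLOW_RULES = [
    Rule("203.0.113.", "tcp", 443),  # allow HTTPS from one example network
    Rule("198.51.100.", "udp", 53),  # allow DNS queries from another
]

def is_allowed(src_ip: str, protocol: str, port: int) -> bool:
    """Default deny: traffic passes only if an allow rule explicitly matches."""
    return any(
        src_ip.startswith(r.src_prefix) and protocol == r.protocol and port == r.port
        for r in ALLOW_RULES
    )

print(is_allowed("203.0.113.7", "tcp", 443))  # True: matches the HTTPS rule
print(is_allowed("203.0.113.7", "tcp", 22))   # False: no matching rule, denied by default
```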