
AWS Cloud Practitioner Essentials Notes

AWS Cloud Computing Essentials
Module 1 : Introduction To Amazon Web Services
What is Cloud computing?
Answer : Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing.
When selecting a cloud strategy, a company must consider factors such as required
- cloud application components,
- preferred resource management tools,
- and any legacy IT infrastructure requirements.
The three cloud computing deployment models are cloud-based, on-premises, and hybrid.
Cloud-based deployment model: you can migrate existing applications to the cloud, or you can
design and build new applications in the cloud.
- Run all parts of the application in the cloud.
- Migrate existing applications to the cloud.
- Design and build new applications in the cloud.
For example, a company might create an application consisting of virtual servers, databases,
and networking components that are fully based in the cloud.
On-premises deployment model: also known as a private cloud deployment. In this model,
resources are deployed on premises by using virtualization and resource management tools.
- Deploy resources by using virtualization and resource management tools.
- Increase resource utilization by using application management and virtualization
technologies.
For example, you might have applications that run on technology that is fully kept in your
on-premises data center. Though this model is much like legacy IT infrastructure, its incorporation
of application management and virtualization technologies helps to increase resource utilization.
Hybrid deployment model: cloud-based resources are connected to on-premises infrastructure.
You might want to use this approach in a number of situations. For example, you have legacy
applications that are better maintained on premises, or government regulations require your
business to keep certain records on premises.
- Connect cloud-based resources to on-premises infrastructure.
- Integrate cloud-based resources with legacy IT applications.
For example, suppose that a company wants to use cloud services that can automate batch data
processing and analytics. However, the company has several legacy applications that are more
suitable on premises and will not be migrated to the cloud. With a hybrid deployment, the company
would be able to keep the legacy applications on premises while benefiting from the data and
analytics services that run in the cloud.
Benefits of cloud computing
- Trade upfront expense for variable expense
- Stop spending money to run and maintain data centers
- Stop guessing capacity
- Benefit from massive economies of scale
- Increase speed and agility
- Go global in minutes
Module 2 : Compute In The Cloud
When you're working with AWS, those servers are virtual. And the service you use to gain access to
virtual servers is called EC2.
You only pay for what you use, because with EC2 you are billed only for running instances, not stopped
or terminated instances.
EC2 runs on top of physical host machines managed by AWS using virtualization technology. When
you spin up an EC2 instance, you aren't necessarily taking an entire host to yourself. Instead, you
are sharing the host with multiple other instances, otherwise known as virtual machines. And a
hypervisor running on the host machine is responsible for sharing the underlying physical resources
between the virtual machines. This idea of sharing underlying hardware is called multitenancy.
The hypervisor is responsible for coordinating this multitenancy and it is managed by AWS. The
hypervisor is responsible for isolating the virtual machines from each other as they share resources
from the host. This means EC2 instances are secure. Even though they may be sharing resources,
one EC2 instance is not aware of any other EC2 instances also on that host. They are secure and
separate from each other.
When you provision an EC2 instance, you can choose an operating system based on either
Windows or Linux. You can provision thousands of EC2 instances on demand, with a blend of
operating systems and configurations to power your business's different applications.
Beyond the OS, you also configure what software you want running on the instance. Whether it's
your own internal business applications, simple web apps, or complex web apps, databases or
third party software like enterprise software packages.
you have complete control over what happens on that instance.
EC2 instances are also resizable. You might start with a small instance, realize the application you
are running is starting to max out that server, and then give that instance more memory
and more CPU, which is what we call vertically scaling an instance.
In essence, you can make instances bigger or smaller whenever you need to. You also control
the networking aspect of EC2, so you decide what types of requests make it to your server and
whether it is publicly or privately accessible.
AWS has just made it much, much easier and more cost-effective for you to acquire servers
through this Compute as a Service model.
Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity
in the cloud as Amazon EC2 instances.
Imagine you are responsible for the architecture of your company's resources and need to support
new websites. With traditional on-premises resources, you have to do the following:
- Spend money upfront to purchase hardware.
- Wait for the servers to be delivered to you.
- Install the servers in your physical data center.
- Make all the necessary configurations.
By comparison, with an Amazon EC2 instance you can use a virtual server to run applications in
the AWS Cloud.
- You can provision and launch an Amazon EC2 instance within minutes.
- You can stop using it when you have finished running a workload.
- You pay only for the compute time you use when an instance is running, not when it is
stopped or terminated.
- You can save costs by paying only for server capacity that you need or want.
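To illustrate how quickly an instance can be provisioned and released, here is a minimal sketch using the AWS SDK for Python (boto3); the AMI ID and instance type are placeholder values, not part of the course material.

```python
import boto3

ec2 = boto3.client("ec2")  # uses your configured AWS credentials and default Region

# Launch a single small instance; the AMI ID below is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",           # a small general purpose instance type
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# Stop it when the workload is finished; you stop paying for compute time.
ec2.stop_instances(InstanceIds=[instance_id])
```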
Amazon EC2 instance types
AWS has different types of EC2 instances that you can spin up and deploy into your AWS
environment.
Each instance type is grouped under an instance family and is optimized for certain types of tasks.
Instance types offer varying combinations of CPU, memory, storage, and networking capacity, and
give you the flexibility to choose the appropriate mix of resources for your applications. The
different instance families in EC2 are general purpose, compute optimized, memory optimized,
accelerated computing, and storage optimized.
General purpose instances provide a good balance of compute, memory, and networking resources,
and can be used for a variety of diverse workloads like web servers or code repositories.
Compute optimized instances are ideal for compute-intensive tasks like gaming servers, high
performance computing or HPC, and even scientific modeling.
Similarly, memory optimized instances are good for memory-intensive tasks.
Accelerated computing instances are good for floating-point number calculations, graphics processing, or
data pattern matching, as they use hardware accelerators.
And finally, storage optimized instances are good for workloads that require high performance for locally
stored data.
General Purpose Instances -
General purpose instances provide a balance of compute, memory, and networking resources. You
can use them for a variety of workloads, such as:
- application servers
- gaming servers
- backend servers for enterprise applications
- small and medium databases
Suppose that you have an application in which the resource needs for compute, memory, and
networking are roughly equivalent. You might consider running it on a general purpose instance
because the application does not require optimization in any single resource area.
Compute Optimized Instances -
Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Like general purpose instances, you can use compute optimized
instances for workloads such as web, application, and gaming servers.
However, the difference is that compute optimized instances are ideal for high-performance web
servers, compute-intensive application servers, and dedicated gaming servers. You can also use
compute optimized instances for batch processing workloads that require processing many
transactions in a single group.
Memory Optimized Instances -
Memory optimized instances are designed to deliver fast performance for workloads that process
large datasets in memory. In computing, memory is a temporary storage area. It holds all the data
and instructions that a central processing unit (CPU) needs to be able to complete actions. Before a
computer program or application is able to run, it is loaded from storage into memory. This
preloading process gives the CPU direct access to the computer program.
Suppose that you have a workload that requires large amounts of data to be preloaded before
running an application. This scenario might be a high-performance database or a workload that
involves performing real-time processing of a large amount of unstructured data. In these
types of use cases, consider using a memory optimized instance. Memory optimized instances
enable you to run workloads with high memory needs and receive great performance.
Accelerated Computing Instances -
Accelerated computing instances use hardware accelerators, or coprocessors, to perform some
functions more efficiently than is possible in software running on CPUs. Examples of these
functions include floating-point number calculations, graphics processing, and data pattern
matching.
In computing, a hardware accelerator is a component that can expedite data processing. Accelerated
computing instances are ideal for workloads such as graphics applications, game streaming, and
application streaming.
Storage Optimized Instances -
Storage optimized instances are designed for workloads that require high, sequential read and
write access to large datasets on local storage. Examples of workloads suitable for storage
optimized instances include distributed file systems, data warehousing applications, and high-frequency online transaction processing (OLTP) systems.
In computing, the term input/output operations per second (IOPS) is a metric that measures the
performance of a storage device. It indicates how many different input or output operations a device
can perform in one second. Storage optimized instances are designed to deliver tens of thousands
of low-latency, random IOPS to applications.
You can think of input operations as data put into a system, such as records entered into a database.
An output operation is data generated by a server. An example of output might be the analytics
performed on the records in a database. If you have an application that has a high IOPS
requirement, a storage optimized instance can provide better performance over other instance types
not optimized for this kind of use case.
Match each description to an Amazon EC2 instance type:
Ideal for high-performance databases => Memory optimized
Suitable for data warehousing applications => Storage optimized
Balances compute, memory, and networking resources => General purpose
Offers high-performance processors => Compute optimized
1. **General Purpose Instances:**
- Use Case: Well-balanced compute, memory, and network resources. Suitable for a wide range of
applications.
- Remember: "Balanced for general tasks."
2. **Compute Optimized Instances:**
- Use Case: High compute performance for CPU-bound applications, data processing, scientific
modeling.
- Remember: "Compute-power for CPU-intensive workloads."
3. **Memory Optimized Instances:**
- Use Case: Memory-intensive workloads like in-memory databases, real-time big data
processing.
- Remember: "Memory-centric for data-heavy tasks."
4. **Storage Optimized Instances:**
- Use Case: Storage-intensive tasks, like NoSQL databases, data warehousing, log processing.
- Remember: "Storage-focused for data-heavy workloads."
5. **Accelerated Computing Instances:**
- Use Case: Hardware accelerators like GPUs for tasks like machine learning, deep learning,
graphics rendering.
- Remember: "Supercharge with hardware accelerators."
The key to remembering these categories is to associate each with its primary focus:
- General Purpose: Balanced for general tasks.
- Compute Optimized: Compute-power for CPU-intensive workloads.
- Memory Optimized: Memory-centric for data-heavy tasks.
- Storage Optimized: Storage-focused for data-heavy workloads.
- Accelerated Computing: Supercharge with hardware accelerators.
Going through each AWS EC2 instance type category and its corresponding use cases and examples:
1. **General Purpose Instances:**
- **Use Case:** Balanced performance for a wide range of applications.
- **Example:** Small to medium databases, web servers, development and testing environments.
2. **Compute Optimized Instances:**
- **Use Case:** High-performance computing, CPU-bound applications.
- **Example:** Batch processing, media transcoding, high-performance web servers.
3. **Memory Optimized Instances:**
- **Use Case:** Memory-intensive workloads, data analysis, caching.
- **Example:** In-memory databases, real-time big data processing, analytics.
4. **Accelerated Computing Instances:**
- **Use Case:** Workloads that require GPU or FPGA acceleration.
- **Example:** Deep learning, scientific simulations, video rendering.
5. **Storage Optimized Instances:**
- **Use Case:** Storage-intensive applications, large databases.
- **Example:** Data warehousing, NoSQL databases, log processing.
Amazon EC2 pricing
On-Demand
On-Demand Instances are ideal for short-term, irregular workloads that cannot be interrupted.
No upfront costs or minimum contracts apply. The instances run continuously until you stop them,
and you pay for only the compute time you use.
Sample use cases for On-Demand Instances include developing and testing applications and
running applications that have unpredictable usage patterns. On-Demand Instances are not
recommended for workloads that last a year or longer because these workloads can experience
greater cost savings using Reserved Instances.
Amazon EC2 Savings Plans
AWS offers Savings Plans for several compute services, including Amazon EC2. Amazon EC2
Savings Plans enable you to reduce your compute costs by committing to a consistent amount of
compute usage for a 1-year or 3-year term. This term commitment results in savings of up to
72% over On-Demand costs.
Any usage up to the commitment is charged at the discounted Savings Plan rate (for example,
$10 an hour). Any usage beyond the commitment is charged at regular On-Demand rates.
Later in this course, you will review AWS Cost Explorer, a tool that enables you to visualize,
understand, and manage your AWS costs and usage over time. If you are considering your
options for Savings Plans, AWS Cost Explorer can analyze your Amazon EC2 usage over the
past 7, 30, or 60 days. AWS Cost Explorer also provides customized recommendations for
Savings Plans. These recommendations estimate how much you could save on your monthly
Amazon EC2 costs, based on previous Amazon EC2 usage and the hourly commitment
amount in a 1-year or 3-year Savings Plan.
Reserved Instances
Reserved Instances are a billing discount applied to the use of On-Demand Instances in your
account. You can purchase Standard Reserved and Convertible Reserved Instances for a 1-year
or 3-year term, and Scheduled Reserved Instances for a 1-year term. You realize greater cost
savings with the 3-year option.
At the end of a Reserved Instance term, you can continue using the Amazon EC2 instance
without interruption. However, you are charged On-Demand rates until you do one of the
following:
- Terminate the instance.
- Purchase a new Reserved Instance that matches the instance attributes (instance type,
Region, tenancy, and platform).
Spot Instances
Spot Instances are ideal for workloads with flexible start and end times, or that can withstand
interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer you cost
savings at up to 90% off of On-Demand prices.
Suppose that you have a background processing job that can start and stop as needed (such as
the data processing job for a customer survey). You want to start and stop the processing job
without affecting the overall operations of your business. If you make a Spot request and Amazon
EC2 capacity is available, your Spot Instance launches. However, if you make a Spot request and
Amazon EC2 capacity is unavailable, the request is not successful until capacity becomes available.
The unavailable capacity might delay the launch of your background processing job.
After you have launched a Spot Instance, if capacity is no longer available or demand for Spot
Instances increases, your instance may be interrupted. This might not pose any issues for your
background processing job. However, in the earlier example of developing and testing applications,
you would most likely want to avoid unexpected interruptions. Therefore, choose a different EC2
instance type that is ideal for those tasks.
Dedicated Hosts
Dedicated Hosts are physical servers with Amazon EC2 instance capacity that is fully dedicated
to your use.
You can use your existing per-socket, per-core, or per-VM software licenses to help maintain
license compliance. You can purchase On-Demand Dedicated Hosts and Dedicated Hosts
Reservations. Of all the Amazon EC2 options that were covered, Dedicated Hosts are the most
expensive.
Scaling Amazon EC2
Scalability
Scalability involves beginning with only the resources you need and designing your architecture to
automatically respond to changing demand by scaling out or in. As a result, you pay for only the
resources you use.
The AWS service that provides the scaling process to happen automatically for Amazon EC2
instances is Amazon EC2 Auto Scaling.
Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in
response to changing application demand, thereby helping to maintain application availability.
Within Amazon EC2 Auto Scaling, you can use two approaches:
1. dynamic scaling
2. predictive scaling.
- Dynamic scaling responds to changing demand.
- Predictive scaling automatically schedules the right number of Amazon EC2 instances
based on predicted demand.
To scale faster, you can use dynamic scaling and predictive scaling together.
By adding Amazon EC2 Auto Scaling to an application, you can add new instances to the
application when necessary and terminate them when no longer needed.
When you create an Auto Scaling group, you can set the minimum number of Amazon EC2
instances. The minimum capacity is the number of Amazon EC2 instances that launch
immediately after you have created the Auto Scaling group.
For example, you can set the desired capacity at two Amazon EC2 instances even though your application
needs a minimum of one Amazon EC2 instance to run.
"desired instance count" in the context of Auto Scaling refers to the number of instances that
you want to have running in your Auto Scaling Group to handle your application's workload
efficiently.
If you do not specify the desired number of Amazon EC2 instances in an Auto Scaling group,
the desired capacity defaults to your minimum capacity.
The third configuration that you can set in an Auto Scaling group is the maximum capacity.
Because Amazon EC2 Auto Scaling uses Amazon EC2 instances, you pay for only the
instances you use, when you use them.
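A minimal sketch of setting the minimum, desired, and maximum capacity on an Auto Scaling group with boto3; the group name, launch template name, and subnet IDs are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group with min=1, desired=2, max=4 instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                        # placeholder name
    LaunchTemplate={"LaunchTemplateName": "web-template",  # placeholder launch template
                    "Version": "$Latest"},
    MinSize=1,          # at least one instance is always running
    DesiredCapacity=2,  # instances launched right after the group is created
    MaxSize=4,          # never scale beyond four instances
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # placeholder subnets
)
```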
Elastic Load Balancing
**Load Balancing with Amazon EC2 Auto Scaling and Elastic Load Balancing**
1. EC2 Auto Scaling resolves scalability by adjusting instances based on demand.
2. Coffee shop analogy shows uneven customer distribution due to unclear options.
3. A host guides customers to cashier lines; load balancing directs requests to instances.
4. Elastic Load Balancing (ELB) manages load distribution and offers efficiency.
5. ELB operates regionally, ensuring high availability without extra effort.
6. Scaling traffic is seamless with ELB, handling both scaling out and in.
7. ELB decouples backend communication, directing traffic to least busy instances.
8. Backend scaling is efficient, enabling decoupled architecture and dynamic shifts.
9. AWS provides various services, choose the right tool for specific needs.
10. Utilize managed solutions like ELB to optimize efficiency, availability, and scalability.
1. **ELB Overview:** Elastic Load Balancing (ELB) distributes incoming traffic across multiple
resources like Amazon EC2 instances, enhancing performance and availability.
2. **Load Balancer Role:** Acting as a single entry point, the load balancer directs incoming web
traffic to your Auto Scaling group. Requests then spread across multiple resources, preventing
overload on a single instance.
3. **Integration with EC2 Auto Scaling:** While separate services, ELB and EC2 Auto Scaling
collaborate to ensure high application performance and availability in Amazon EC2.
4. **Example - Low Demand:** During low demand, few registers (EC2 instances) are open in the
coffee shop analogy, matching customer needs, preventing idle registers.
5. **Example - High Demand:** As customer numbers rise, more registers open, represented by the
Auto Scaling group. A load balancer directs customers to appropriate registers, achieving even
distribution.
ELB and EC2 Auto Scaling synergy optimizes application performance, balancing traffic and
resources dynamically.
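As a hedged sketch of how the two services collaborate, the snippet below creates a target group and attaches it to an existing Auto Scaling group with boto3, so instances launched by the group register behind the load balancer automatically; the names and IDs are placeholders, and the load balancer, listener, and group are assumed to exist already.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Create a target group that the load balancer forwards traffic to.
tg = elbv2.create_target_group(
    Name="web-targets",              # placeholder name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Attach the target group to the Auto Scaling group so new instances are
# registered behind the load balancer as the group scales out and in.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",  # the group from the earlier sketch
    TargetGroupARNs=[tg_arn],
)
```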
Amazon SQS (Simple Queue Service) and Amazon SNS (Simple Notification Service)
Messaging and Queuing Overview:
Messaging and queuing address the challenges of coordinating communication between software
components without tight coupling.
In the coffee shop analogy, the cashier and barista represent software components communicating
through messages.
Tightly Coupled vs. Loosely Coupled:
Tightly coupled architecture causes issues when one component's failure affects others.
Loosely coupled architecture isolates failures, preventing cascading disruptions.
Introducing a Message Queue:
- A message queue acts as a buffer, decoupling sender and receiver.
- Messages (coffee orders) are placed in the queue (order board) for processing.
- SQS and SNS facilitate this architecture.
Amazon SQS (Simple Queue Service):
- SQS manages message queues, ensuring reliable, scalable communication.
- Messages (payloads) are stored in SQS queues until processed.
- AWS handles underlying infrastructure, auto-scaling, and reliability.
Amazon SNS (Simple Notification Service):
- SNS enables publish/subscribe model for communication.
- Messages are sent to topics, which distribute to subscribers (endpoints).
- Subscribers can be SQS queues, Lambda functions, HTTP endpoints, and more.
SNS for Notifications:
- SNS can also send notifications to end users through various channels.
- End users receive notifications via mobile push, SMS, or email.
- For instance, notifying customers when their coffee order is ready.
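A minimal sketch of both services with boto3, following the coffee-order analogy above; the queue name, topic name, and email address are placeholders.

```python
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

# SQS: the cashier places an order (message) on the queue; the barista polls it later.
queue_url = sqs.create_queue(QueueName="coffee-orders")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="1x latte, extra hot")

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    print("Processing order:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

# SNS: publish once to a topic; every subscriber (email, SQS, Lambda, ...) gets a copy.
topic_arn = sns.create_topic(Name="order-ready")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="customer@example.com")
sns.publish(TopicArn=topic_arn, Message="Your coffee order is ready!")
```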
Additional Compute Services
If you are trying to host traditional applications and want full access to the underlying
operating system like Linux or Windows, you are going to want to use EC2.
If you are looking to host short-running functions, or service-oriented or event-driven
applications, and you don't want to manage the underlying environment at all, look into the
serverless option, AWS Lambda.
If you are looking to run Docker container-based workloads on AWS, you first need to
choose your orchestration tool. Do you want to use Amazon ECS or Amazon EKS? After you
choose your tool, you then need to choose your platform. Do you want to run your containers
on EC2 instances that you manage or in a serverless environment like AWS Fargate that is
managed for you?
**Additional Compute Services:**
- EC2 instances are virtual machines for various use cases.
- EC2 provides flexibility, reliability, scalability.
- Management includes patching, scaling, high availability.
- AWS offers alternatives for compute capacity.
- Serverless approach shifts management burden.
- AWS offers multiple serverless compute options.
- AWS Lambda is a serverless compute option.
- Lambda functions trigger on configured events.
- Lambda automatically manages scalability and availability.
- Lambda suits short processing tasks (<15 minutes).
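A minimal sketch of a Python Lambda function handler of the kind described above; the event shape is hypothetical and depends on whichever trigger you configure.

```python
import json

def lambda_handler(event, context):
    """Runs only when the configured trigger fires; AWS provisions and scales the
    underlying compute automatically, and billing covers execution time only."""
    # 'event' carries the trigger payload, e.g. an S3 upload notification or API request.
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "processed"}
```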
**AWS Container Services:**
- Amazon ECS and Amazon EKS are container orchestration tools.
- Containers package app code, dependencies, configs.
- Docker containers run on EC2 instances.
- Containers isolate processes like VMs, host is EC2.
- Container orchestration is complex and needs tools.
- ECS and EKS manage container orchestration.
- Fargate is a serverless compute platform for ECS and EKS.
**Serverless Computing:**
- Serverless means your code runs on servers, but you don't provision or manage them.
- Focus on innovation instead of server maintenance.
- Serverless scales applications automatically.
- AWS Lambda runs code without server management.
- Lambda charges for compute time during code execution.
- Lambda supports various types of applications.
**Containers:**
- Containers package app code and dependencies.
- Containers can be used for secure, reliable, scalable processes.
- Containerization ensures consistent environments.
- Containers share host OS kernel, reducing resource overhead.
- Containers offer portability across environments.
**Amazon Elastic Container Service (ECS):**
- ECS manages containerized applications at scale.
- ECS supports Docker containers for quick deployment.
- ECS uses API calls to launch and stop Docker apps.
**Amazon Elastic Kubernetes Service (EKS):**
- EKS lets you run Kubernetes on AWS.
- Kubernetes manages containerized apps at scale.
- AWS collaborates with Kubernetes community for updates.
**AWS Fargate:**
- Fargate is a serverless compute engine for containers.
- Fargate manages server infrastructure for you.
- Focus on app development, pay only for required resources.
**Module 2 Summary:**
- Concepts covered in Module 2:
- Amazon EC2 instance types and pricing options.
- Amazon EC2 Auto Scaling.
- Elastic Load Balancing.
- AWS services for messaging, containers, and serverless computing.
**Cloud Computing and AWS:**
- Cloud computing is on-demand IT resource delivery with pay-as-you-go pricing.
- AWS offers various service categories for creating solutions.
**Amazon EC2:**
- EC2 allows dynamic provisioning of virtual servers (EC2 instances).
- Instance family determines the hardware for an instance.
- Instance categories: general purpose, compute optimized, memory optimized, accelerated
computing, storage optimized.
- Scaling options: vertical (resizing) and horizontal (launching new instances).
- Amazon EC2 Auto Scaling automates horizontal scaling.
**Elastic Load Balancer:**
- Distributes incoming traffic across EC2 instances.
- Ensures high availability and fault tolerance.
**EC2 Instance Pricing Models:**
- On-Demand: Flexible with no contract.
- Spot Pricing: Utilizes unused capacity at a discount.
- Savings Plans/Reserved Instances: Contract for discounted rates based on committed usage.
**Messaging Services:**
- Amazon SQS: Decouples system components, messages remain in queue.
- Amazon SNS: Sends messages like emails, texts, push notifications, or HTTP requests to
subscribers.
**Compute Services:**
- AWS offers various compute services beyond EC2.
- Container services: Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service
(EKS).
- Container orchestration tools.
- AWS Fargate: Serverless compute platform for running containers.
- AWS Lambda: Upload code, configure triggers, pay only for actual execution time.
**AWS Global Infrastructure:**
- AWS offers a global infrastructure to meet business needs for running applications, storing
content, and analyzing data.
- Traditionally, businesses had to run applications in their own data centers, but AWS allows
running applications in data centers they don't own.
- Challenges with data centers include potential disasters causing connectivity loss and backup
storage being insufficient.
- AWS builds data centers in groups called Regions, each with multiple data centers.
- Regions are designed to be closer to where business traffic demands, ensuring optimal
performance.
- Data centers within a Region are interconnected with a high-speed fiber network.
- Each Region is isolated from others, and data doesn't move in/out without explicit permission,
ensuring security and data sovereignty.
- AWS Regions comply with local laws and statutes, subjecting data to the regulations of the
Region's country.
**Factors in Choosing a Region:**
1. **Compliance:** Regulatory requirements may dictate where your data must reside.
2. **Proximity:** Select a Region close to your customer base for faster content delivery.
3. **Feature Availability:** Some Regions might not have all AWS features; consider the availability
of required services.
4. **Pricing:** Costs can vary based on location, and AWS has transparent pricing.
**Availability Zones:**
- Each AWS Region consists of multiple Availability Zones (AZs), which are isolated and
physically separate data centers.
- An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking,
and connectivity.
- Availability Zones are designed to prevent large-scale incidents from affecting all resources in one
location.
- AWS recommends deploying across at least two Availability Zones for high availability.
- Many AWS services run at the Region level, serving across multiple Availability Zones for high
availability.
- Regional services ensure high availability without additional effort on the user's part.
In summary, AWS's global infrastructure is based on Regions, each containing multiple
Availability Zones. Selecting the right Region involves considering compliance, proximity, feature
availability, and pricing. Deploying across multiple Availability Zones within a Region enhances
high availability and disaster recovery capabilities. Regional services further contribute to seamless
high availability without extra user configuration.
Edge Locations
1. **Proximity and Customer Service:**
- AWS infrastructure is designed to serve customers better.
- If customers are in new cities, you can build satellite stores for them.
2. **Content Delivery Networks (CDNs):**
- CDNs, like Amazon CloudFront on AWS, cache data closer to customers.
- Amazon CloudFront accelerates data, video, apps, and APIs delivery.
- It utilizes Edge locations around the world for low latency and high transfer speeds.
3. **Edge Locations and Amazon Route 53:**
- AWS Edge locations include more than just CloudFront; they run Amazon Route 53 DNS
service.
- Amazon Route 53 directs customers to correct web locations with low latency.
4. **AWS Outposts:**
- AWS Outposts installs mini Regions in customers' data centers.
- Offers full AWS functionality in customers' own buildings.
- Addresses specific problems that require on-premises solutions.
5. **Key Takeaways:**
- Regions are isolated areas for accessing necessary services.
- Availability Zones allow running across physically separated buildings for high availability.
- AWS Edge locations and CloudFront improve content delivery to customers worldwide.
How to Provision AWS Resources
1. **APIs and AWS Interaction:**
- APIs (Application Programming Interfaces) are used to interact with AWS services.
- AWS services can be provisioned, configured, and managed through API calls.
- Different tools are available for interacting with AWS APIs, including the AWS Management
Console, AWS CLI, SDKs, AWS Elastic Beanstalk, and AWS CloudFormation.
2. **AWS Management Console:**
- Browser-based interface for managing AWS services visually.
- Useful for getting started, learning, and managing resources in a user-friendly manner.
- Provides wizards and automated workflows to simplify tasks.
- Can be used through a web browser or mobile application.
3. **AWS CLI:**
- Command Line Interface for making API calls using the terminal.
- Offers scriptable and repeatable actions, reducing human error.
- Automation is crucial for consistent and predictable cloud deployments.
4. **AWS SDKs:**
- Software Development Kits enable interaction with AWS services through various programming
languages.
- Allows developers to create programs that use AWS services without dealing with low-level
APIs.
- SDKs provide documentation and sample code for different programming languages.
5. **AWS Elastic Beanstalk:**
- Automates the provisioning of Amazon EC2-based environments.
- You provide code and configurations; Elastic Beanstalk handles resource deployment.
- Handles capacity adjustment, load balancing, scaling, and application health monitoring.
- Simplifies environment configuration and management.
6. **AWS CloudFormation:**
- Infrastructure as code tool for defining AWS resources using JSON or YAML templates.
- Supports a wide range of AWS resources, enabling safe and repeatable infrastructure
provisioning.
- Eliminates manual actions, performs automatic rollbacks in case of errors.
7. **Automation and Consistency:**
- Automation is essential for maintaining a successful and predictable cloud deployment.
- Manual provisioning can lead to human errors; automation mitigates this risk.
- Tools like AWS CLI, SDKs, Elastic Beanstalk, and CloudFormation provide automation
capabilities.
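A hedged sketch of infrastructure as code in practice (item 6 above): a tiny CloudFormation template defined inline and deployed with boto3. The stack name and the bucket resource are placeholders for illustration only.

```python
import boto3

# A minimal CloudFormation template (YAML) describing a single S3 bucket.
TEMPLATE = """
Resources:
  NotesBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

# CloudFormation parses the template and provisions the resources for you;
# re-running the same template is repeatable, and failed changes roll back.
cfn.create_stack(StackName="demo-notes-stack", TemplateBody=TEMPLATE)
```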
Module 3 Summary
1. **Global Infrastructure:**
- AWS global infrastructure is composed of logical clusters of data centers known as Availability
Zones (AZs).
- AZs are grouped into Regions, which are spread globally.
- Best practice is to deploy infrastructure across at least two AZs for high availability.
- Services like Elastic Load Balancing, Amazon SQS, and Amazon SNS automatically distribute
across AZs.
2. **Edge Locations and Content Delivery:**
- Edge locations are distributed points that accelerate content delivery to customers.
- Content can be deployed to edge locations to enhance delivery speed.
- AWS Outposts enables running AWS infrastructure within your own data center.
3. **Provisioning AWS Resources:**
- AWS offers various options for provisioning resources.
- AWS Management Console provides a visual interface for managing resources.
- AWS SDK, CLI, Elastic Beanstalk, and CloudFormation are tools for resource provisioning.
- CloudFormation facilitates infrastructure provisioning through code definitions.
4. **Infrastructure as Code:**
- AWS CloudFormation allows defining infrastructure using JSON or YAML templates.
- CloudFormation provisions resources in a repeatable, automated manner.
- Infrastructure can be defined and managed using code, enhancing consistency.
5. **Global Availability and Resource Provisioning:**
- AWS offers globally available infrastructure for resource deployment.
- Provisioning resources is made easy through various tools and methods.
Module 4 Introduction
Networking
Amazon Virtual Private Cloud (VPC) and its key concepts:
1. **Amazon Virtual Private Cloud (VPC):**
- A VPC allows you to create a logically isolated section within the AWS Cloud.
- Resources can be provisioned within this virtual network, which you define.
2. **Public and Private Resources:**
- Resources within a VPC can be categorized into public-facing and private resources.
- Public resources have access to the internet and are typically used for customer-facing services.
- Private resources have no internet access and are often used for backend services like databases
or application servers.
3. **Subnets:**
- Resources within a VPC are organized into subnets.
- Subnets are defined by ranges of IP addresses within the VPC.
- Public subnets are used for resources that need to interact with customers or the internet.
- Private subnets are used for resources that should not interact with customers directly.
4. **Networking:**
- VPCs play a crucial role in network isolation and organization within AWS.
- They allow you to control how resources communicate with each other and the internet.
- VPCs are fundamental to setting up secure and segmented network architectures in AWS.
In summary, Amazon Virtual Private Cloud (VPC) is a key AWS service that allows you to
create isolated virtual networks, organize resources into subnets, and control the network
access of your AWS resources. Understanding VPC concepts is important for designing secure
and efficient network architectures on AWS.
Connectivity to AWS
1. **Amazon Virtual Private Cloud (Amazon VPC):**
- A VPC is like your own private network within AWS.
- It allows you to define a private IP range for your AWS resources.
- Resources like EC2 instances and ELBs can be placed inside a VPC.
2. **Subnets in VPC:**
- Subnets are chunks of IP addresses within a VPC that allow you to group resources together.
- Subnets, along with networking rules, control whether resources are publicly or privately
accessible.
3. **Public and Private Resources:**
- Public resources can access the internet, typically used for customer-facing services.
- Private resources have no internet access and are used for backend services.
4. **Internet Gateway (IGW):**
- An internet gateway is necessary to allow traffic from the public internet to flow into and out of
your VPC.
- It acts like a doorway open to the public and is essential for public-facing resources.
5. **Virtual Private Gateway:**
- A virtual private gateway is used for VPCs with all internal private resources.
- It only allows access to approved networks, such as an on-premises data center or internal
corporate network.
- It allows you to create a VPN connection between a private network, like your on-premises data
center or internal corporate network to your VPC.
- It's like a private bus route for authorized users.
6. **AWS Direct Connect:**
- AWS Direct Connect provides a completely private, dedicated fiber connection from your data
center to AWS.
- It's used for high regulatory and compliance needs, offering a dedicated and secure connection.
- It's like having a dedicated, traffic-free magic doorway to your favorite coffee shop.
7. **Networking Boundaries:**
- Amazon VPC helps establish boundaries around AWS resources, allowing you to organize and
isolate them.
- Subnets and gateways play key roles in controlling network access.
8. **Reducing Network Costs:**
- AWS Direct Connect can reduce network costs and increase bandwidth, providing dedicated,
high-performance connections.
In summary, Amazon VPC is a fundamental AWS service that allows you to create isolated
virtual networks, control access to resources, and establish secure connections between your
data center and AWS.
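A minimal sketch of these building blocks with boto3: a VPC with a private IP range you define, one subnet carved out of it, and an internet gateway attached for public-facing traffic. All CIDR ranges are example values.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the VPC with a private IP range you define.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Carve out one subnet (a chunk of the VPC's IP addresses) for resources.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Attach an internet gateway so public-facing resources can reach the internet.
# (Making the subnet truly public also requires a route table entry to this gateway.)
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```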
Subnets and Network Access Control Lists
1. **VPC Overview**:
- A Virtual Private Cloud (VPC) is like a fortress with controlled access.
- It includes an internet gateway for traffic in and out.
- AWS offers a range of security tools covering network hardening, application security, identity
and access management, DDoS prevention, encryption, and more.
2. **Network Hardening**:
- Subnets are used in a VPC to control access to gateways.
- Network Access Control Lists (NACLs) act as "passport control officers" for packets, allowing
or blocking traffic at subnet boundaries.
- Security Groups control access at the instance level and are "doormen" for instances.
- Security Groups are stateful, while NACLs are stateless.
- A packet's journey involves checks at multiple stages: Security Group, NACL, and Security
Group again for return traffic.
3. **Subnets in VPC**:
- Subnets are used to group resources in a VPC based on security or operational needs.
- There are public and private subnets.
- Subnets within a VPC can communicate with each other.
4. **Network Traffic in a VPC**:
- Traffic enters a VPC through an internet gateway.
- Permissions for traffic are checked using Network Access Control Lists (NACLs).
- NACLs are stateless and check traffic both inbound and outbound at subnet boundaries.
- Security Groups control traffic at the instance level.
- Security Groups are stateful, remembering previous decisions for incoming packets.
5. **Default Rules**:
- AWS provides default NACLs and Security Groups.
- Default NACLs allow all inbound and outbound traffic but can be customized.
- Default Security Groups deny all inbound traffic and allow all outbound traffic.
6. **Custom Rules**:
- Custom rules can be added to NACLs and Security Groups to specify allowed traffic.
7. **Stateful vs. Stateless Filtering**:
- Security Groups perform stateful packet filtering, remembering previous decisions.
- NACLs perform stateless packet filtering and evaluate traffic at subnet boundaries.
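A short sketch of the instance-level "doorman": creating a security group and allowing inbound HTTPS with boto3. The group name and VPC ID are placeholders; because security groups are stateful, the matching return traffic is allowed automatically.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in an existing VPC (placeholder VPC ID).
sg_id = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)["GroupId"]

# Allow inbound HTTPS from anywhere; all other inbound traffic stays denied.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```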
Global Networking
**Domain Name System (DNS):**
- DNS is like the phone book of the internet, translating domain names to IP addresses.
- DNS resolution involves a customer DNS resolver communicating with a company DNS server.
- It's the process of translating a domain name to an IP address.
- DNS resolver forwards requests to DNS servers, which return IP addresses.
**Amazon Route 53:**
- Route 53 is AWS's DNS web service.
- It routes end users to internet applications hosted in AWS and can route users to infrastructure
outside AWS.
- It connects user requests to infrastructure running in AWS, including EC2 instances and load
balancers.
- Route 53 can manage DNS records for domain names and register new domain names.
- It can transfer DNS records from other domain registrars, allowing centralized management.
**Amazon CloudFront:**
- CloudFront is a content delivery service with edge locations for serving content close to users.
- It's part of a content delivery network (CDN) that optimizes content delivery based on geographic
location.
- CloudFront is used to deploy static web assets like images and GIFs closer to users to reduce latency.
- It's used in combination with Route 53 to deliver content efficiently to users.
Module 4 Summary
1. **Simplified Networking with AWS**:
- AWS simplifies networking, making it accessible to a broader audience.
- Networking on AWS is primarily about determining who can communicate with each other.
2. **Virtual Private Cloud (VPC)**:
- VPC is a fundamental concept in AWS, used for isolating workloads.
- It provides a secure and isolated network environment for AWS resources.
3. **Network Security**:
- Network security on AWS includes gateways, Network ACLs (Access Control Lists), and
security groups.
- These components enable security engineers to create secure networks, allowing legitimate
traffic while blocking malicious attacks.
4. **Connectivity to AWS**:
- AWS offers multiple methods for connecting to its services, including VPN (Virtual Private
Network) and Direct Connect.
- VPNs provide secure connections over the internet, while Direct Connect offers dedicated,
private fiber connections.
5. **Global Networks and Edge Locations**:
- AWS provides global networks through its Edge locations.
- Route 53 is used for DNS (Domain Name System), enabling the translation of domain names
into IP addresses.
- CloudFront is used to cache content closer to consumers, improving content delivery and
reducing latency.
6. **Summary**:
- These notes only scratch the surface of AWS networking capabilities.
- It emphasizes the importance of understanding key networking components to get started with
AWS.
Instance Stores and Amazon Elastic Block Store (Amazon EBS)
**Block-Level Storage and EC2 Instances:**
- Block-level storage is used for storing files efficiently, making it suitable for applications like
databases, enterprise software, and file systems.
- EC2 instances have hard drives, and the type of storage they provide varies.
**Instance Store Volumes:**
- Instance store volumes are physically attached to the host running the EC2 instance.
- Data stored in instance store volumes is temporary and gets deleted when the EC2 instance is
stopped or terminated.
- Instance store volumes are suitable for temporary data like scratch data or easily recreated data.
**Amazon Elastic Block Store (EBS):**
- EBS provides virtual hard drives known as EBS volumes that can be attached to EC2 instances.
- Data written to EBS volumes persists between stops and starts of an EC2 instance.
- EBS volumes come in different sizes and types, and users can define their configurations.
- Regular snapshots of EBS volumes are recommended for data backup and recovery.
**EBS Snapshots:**
- EBS snapshots are incremental backups, which means only the changed data blocks are saved
after the initial full backup.
- This differs from full backups, which copy all data each time.
- EBS snapshots are essential for data protection and recovery.
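A minimal sketch of taking an incremental snapshot of an EBS volume with boto3; the volume ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshots after the first one copy only the blocks that changed since the last snapshot.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # placeholder EBS volume ID
    Description="Nightly backup of data volume",
)
print("Snapshot started:", snapshot["SnapshotId"], snapshot["State"])
```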
Amazon Simple Storage Service (Amazon S3)
**Amazon S3 (Simple Storage Service):**
- Amazon S3 is a data store that allows you to store and retrieve a virtually unlimited amount of
data at any scale.
- Data is stored as objects, and objects are organized into buckets.
- The maximum object size that can be uploaded is 5 terabytes.
- Versioning can be enabled to protect objects from accidental deletion.
- You can create multiple buckets and manage permissions for objects.
- Objects can be moved between different classes or tiers of data.
**Storage Classes:**
1. **S3 Standard:**
- Suitable for frequently accessed data.
- Offers high availability with data stored in a minimum of three Availability Zones.
- Higher cost compared to other storage classes designed for infrequently accessed data.
2. **S3 Standard-Infrequent Access (S3 Standard-IA):**
- Ideal for infrequently accessed data.
- Lower storage price but a higher retrieval price.
- Provides high availability with data stored in a minimum of three Availability Zones.
3. **S3 One Zone-Infrequent Access (S3 One Zone-IA):**
- Stores data in a single Availability Zone, lowering storage costs.
- Suitable if data can be easily reproduced in case of an Availability Zone failure.
4. **S3 Intelligent-Tiering:**
- Suitable for data with unknown or changing access patterns.
- Automatically moves objects between tiers based on access patterns.
- Requires a small monthly monitoring and automation fee per object.
5. **S3 Glacier Instant Retrieval:**
- Works well for archived data that requires immediate access.
- Retrieves objects within a few milliseconds.
- Suitable for scenarios where quick retrieval is essential.
6. **S3 Glacier Flexible Retrieval:**
- Low-cost storage for data archiving.
- Retrieves objects within a few minutes to hours.
- Ideal for data archiving needs.
7. **S3 Glacier Deep Archive:**
- Lowest-cost object storage class for archiving.
- Retrieves objects within 12 hours.
- Suitable for long-term retention and digital preservation of rarely accessed data.
8. **S3 Outposts:**
- Creates S3 buckets on AWS Outposts.
- Designed for workloads with local data residency requirements on AWS Outposts.
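A minimal sketch of storing and retrieving an object with boto3, including choosing a storage class at upload time; the bucket and key names are placeholders, and the bucket is assumed to exist already.

```python
import boto3

s3 = boto3.client("s3")

# Upload an object, choosing the storage class that matches its access pattern.
s3.put_object(
    Bucket="my-example-bucket",          # placeholder; must already exist
    Key="reports/2024/summary.csv",
    Body=b"id,total\n1,42\n",
    StorageClass="STANDARD_IA",          # infrequently accessed data
)

# Retrieve the object later; it is addressed by bucket + key.
obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2024/summary.csv")
print(obj["Body"].read().decode())
```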
Comparing Amazon EBS and Amazon S3
**Amazon Elastic Block Storage (EBS):**
- EBS provides block storage.
- It offers sizes up to 16 tebibytes.
- EBS volumes can survive the termination of Amazon EC2 instances.
- Suitable for scenarios where frequent changes and micro edits are made to files.
- EBS allows updates at the block level, saving bandwidth and time when making changes to large
files.
- Typically used with Amazon EC2 instances.
- It's not serverless, as it requires Amazon EC2 instances.
**Amazon Simple Storage Service (S3):**
- S3 provides object storage.
- Offers unlimited storage capacity.
- Individual objects can be as large as 5,000 gigabytes.
- Specializes in write once/read many (WORM) access patterns.
- Boasts 99.999999999% durability.
- Objects have URLs for easy access control and sharing.
- Regionally distributed for high durability and availability.
- Suitable for scenarios with millions of objects, web-enabled content, and backup strategies.
- Cost-effective for large-scale storage needs.
- Serverless, no need for Amazon EC2 instances.
**Use Cases:**
- For scenarios where users upload photos that need to be indexed and viewed by many, S3 is the
preferred choice due to its web-enabled nature, durability, and cost-effectiveness.
- For scenarios involving frequent changes to large files (e.g., video editing), EBS is the better
option as it allows updates at the block level, reducing data transfer requirements.
**Conclusion:**
- The choice between EBS and S3 depends on the specific workload and access patterns.
- S3 is suitable for complete objects and occasional changes.
- EBS is ideal for complex read, write, and change functions.
Amazon Elastic File System (Amazon EFS)
**Amazon Elastic File System (EFS):**
- EFS is a managed file system.
- It's suitable for businesses that require shared file systems across their applications.
- EFS scales automatically based on demand, growing and shrinking as needed.
- Multiple instances can access EFS concurrently.
- EFS is a regional resource, accessible from any EC2 instance in the same Region.
- Ideal for use cases where concurrent access to shared file data is needed.
- Requires no manual provisioning of volumes as it scales automatically.
**Amazon Elastic Block Storage (EBS):**
- EBS volumes store data in a single Availability Zone (AZ).
- It attaches to EC2 instances and is an AZ-level resource.
- EBS is essentially a hard drive, suitable for storing files, running databases, or hosting
applications.
- It doesn't automatically scale when filled up; additional volumes need to be provisioned manually.
- EC2 instances need to be in the same AZ as EBS volumes for attachment.
**Comparison:**
- EFS is a scalable file system suitable for use cases requiring concurrent access to shared file data.
- EBS is a block storage solution attached to EC2 instances and designed for individual instances,
not concurrent access.
- EFS is a regional resource accessible from multiple Availability Zones.
- EBS is an AZ-level resource and requires instances to be in the same AZ for attachment.
Amazon Relational Database Service (Amazon RDS)
**Relational Databases:**
- Relational databases store data in a structured way, allowing relationships between different pieces
of data.
- Data is stored in tables, and SQL (Structured Query Language) is commonly used to query and
manage data.
- Relationships between data are established using common attributes or keys.
- Examples include customer records stored in a "customer" table related to address data in an
"address" table.
- Relational databases are suitable for scenarios where data relationships need to be maintained and
queried.
**Amazon Relational Database Service (RDS):**
- Amazon RDS is a managed service for running relational databases in the AWS Cloud.
- Supported database engines include MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and
others.
- RDS automates tasks like hardware provisioning, database setup, patching, and backups.
- It offers security features such as encryption at rest and in transit.
- Integrating RDS with other AWS services is possible, e.g., using AWS Lambda to query the
database.
- Amazon Aurora is an optimized relational database option compatible with MySQL and
PostgreSQL.
- Aurora offers high performance, reduced input/output (I/O) operations, and high availability.
- Data in Aurora is replicated across multiple Availability Zones, and it provides continuous
backups to Amazon S3.
- Amazon RDS is a valuable choice for businesses to manage relational databases in a cost-effective, secure, and efficient manner in the AWS Cloud.
Amazon DynamoDB
**Amazon DynamoDB:**
- DynamoDB is a serverless, NoSQL database service provided by AWS.
- With DynamoDB, you create tables to store and query data, and data is organized into items with
attributes.
- DynamoDB automatically manages the underlying infrastructure, including scaling up or down
based on demand.
- Data in DynamoDB is stored redundantly across availability zones for high availability.
- It offers millisecond response times, making it highly performant for applications with large user
bases.
- Unlike traditional relational databases, DynamoDB is non-relational and NoSQL.
- It uses simple, flexible schemas, allowing variation in attributes between items in the same table.
- Queries in DynamoDB are based on a subset of attributes designated as keys, making query
patterns simpler.
- DynamoDB is purpose-built and suited for specific use cases; it's not the best fit for all workloads.
- It's fully managed and highly scalable, handling massive workloads without the need for
underlying database management.
**Non-relational databases (NoSQL):**
- Non-relational databases, sometimes called "NoSQL databases," use structures like key-value
pairs to organize data.
- In key-value databases like DynamoDB, data is organized into items (keys) with attributes
(values).
- Attributes can vary between items in the same table, providing flexibility in data storage.
- Query patterns in non-relational databases tend to be simpler and focus on collections of items
from one table.
- Non-relational databases are suited for datasets with some variation and high access rates.
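A minimal sketch of writing and reading items with boto3, showing the flexible schema described above; the table name and attribute names are placeholders, and the table (with "order_id" as its partition key) is assumed to exist.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("coffee-orders")   # placeholder table with key "order_id"

# Items in the same table can have different attributes (flexible schema).
table.put_item(Item={"order_id": "1001", "drink": "latte", "size": "large"})
table.put_item(Item={"order_id": "1002", "drink": "espresso"})

# Gets and queries are keyed on the designated key attributes.
item = table.get_item(Key={"order_id": "1001"})["Item"]
print(item["drink"], item.get("size"))
```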
Comparing Amazon RDS and Amazon DynamoDB
**Amazon RDS (Relational Database Service):**
- Amazon RDS is a managed relational database service provided by AWS.
- It automates tasks such as hardware provisioning, database setup, patching, and backups.
- RDS offers automatic high availability and recovery, reducing the workload for database
administrators.
- Users have control over data, schema, and network settings in RDS.
- It is suitable for use cases that require complex relational joins, making it ideal for business
analytics.
- RDS is designed for relational databases like MySQL, PostgreSQL, Oracle, Microsoft SQL
Server, and others.
- Relational databases excel at handling complex relationships between data in multiple tables.
**Amazon DynamoDB:**
- DynamoDB is a NoSQL database service provided by AWS.
- It uses a key-value pair data model and does not require advanced schema design.
- DynamoDB can operate as a global database with massive throughput and petabyte-scale
potential.
- It offers granular API access for fine-grained control.
- DynamoDB is well-suited for use cases where complex relationships between data are not
required.
- It excels in scenarios involving simple data structures, such as single-table databases.
- DynamoDB eliminates overhead associated with complex functionality, making it extremely fast
and cost-effective.
**Key Takeaways:**
- The choice between Amazon RDS and DynamoDB depends on your specific use case.
- RDS is ideal for scenarios that require complex relational joins and business analytics.
- DynamoDB shines in situations where complex relationships are not needed, and performance and
scalability are essential.
- Both services are purpose-built and should be chosen based on individual workload requirements.
Amazon Redshift
Important information regarding data warehousing and Amazon Redshift for the AWS Cloud Practitioner exam:
**Data Warehousing:**
- Data warehousing is used for historical data analysis, as opposed to operational analysis.
- Traditional relational databases, designed for real-time transactions, may struggle with the volume
and variety of historical data.
- Data warehouses are engineered for big data and are suitable for business intelligence (BI)
projects.
- Data warehouses are used to analyze historical data and answer questions like production
improvement over time.
**Amazon Redshift:**
- Amazon Redshift is a data warehousing service provided by AWS.
- It offers massively scalable storage, with nodes available in multiple petabyte sizes.
- Redshift integrates with Amazon Redshift Spectrum, allowing queries against exabytes of
unstructured data in data lakes.
- Redshift is optimized for business intelligence workloads and can deliver up to 10 times higher
performance than traditional databases.
- It is a fully managed service, meaning users can focus on data analysis instead of managing the
database engine.
**Key Takeaways:**
- Data warehousing is essential for historical data analysis and BI projects.
- Traditional relational databases may not be suitable for handling the volume and variety of
historical data.
- Amazon Redshift is a scalable, high-performance data warehousing service that allows for
efficient analysis of big data.
- Redshift simplifies the management of data warehousing, allowing users to focus on data analysis.
AWS Database Migration Service
**AWS Database Migration Service (DMS):**
- AWS DMS is a service provided by AWS to help customers migrate existing databases onto AWS.
- DMS facilitates secure and easy migration of data between a source and a target database.
- The source database remains fully operational during the migration, reducing downtime for
applications relying on it.
- DMS supports both homogenous and heterogeneous migrations.
- In homogenous migrations, the source and target databases are of the same type, such as MySQL
to Amazon RDS for MySQL.
- For homogenous migrations, schema structures, data types, and database code are compatible
between source and target.
- The source database can be located on-premises, running on Amazon EC2 instances, or be an
Amazon RDS database. The target can also be in Amazon EC2 or Amazon RDS.
- Heterogeneous migrations involve source and target databases of different types, requiring a two-step process:
1. Use the AWS Schema Conversion Tool to convert the source schema and code to match the
target database.
2. Use DMS to migrate data from the source to the target.
- DMS can be used for various use cases, including development and test database migrations,
database consolidation, and continuous database replication.
- Development and test migrations allow developers to work with copies of production data without
affecting production users.
- Database consolidation involves combining multiple databases into a single central database.
- Continuous replication is useful for scenarios like disaster recovery or maintaining data
consistency across geographically separated locations.
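For a rough idea of what a DMS migration looks like in code, the sketch below creates a replication task that does a full load followed by continuous replication, which is what keeps the source database operational during the move. The ARNs, task name, and table-mapping rule are hypothetical, and the endpoints and replication instance are assumed to already exist:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Include every schema and table (hypothetical selection rule).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# "full-load-and-cdc" copies existing data, then replicates ongoing changes.
dms.create_replication_task(
    ReplicationTaskIdentifier="orders-db-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```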
**Key Takeaways:**
- AWS DMS facilitates secure and easy migration of databases onto AWS.
- It supports both homogenous and heterogeneous migrations.
- DMS allows for minimal downtime during the migration process.
- Various use cases include development and test migrations, database consolidation, and
continuous data replication.
- DMS is a valuable tool for organizations looking to migrate, consolidate, or replicate databases
efficiently.
Additional Database Services
**Amazon DocumentDB:**
- Amazon DocumentDB is a document database service that supports MongoDB workloads.
- It's suitable for use cases like content management, catalogs, and user profiles.
- It provides a scalable and managed platform for document-oriented databases.
**Amazon Neptune:**
- Amazon Neptune is a graph database service.
- It's designed for applications dealing with highly connected datasets, such as recommendation
engines, fraud detection, and knowledge graphs.
- Amazon Neptune is suitable for social networking and recommendation engine needs.
**Amazon Quantum Ledger Database (Amazon QLDB):**
- Amazon QLDB is a ledger database service.
- It maintains an immutable system of record where entries cannot be removed.
- Useful for scenarios requiring 100% immutability, such as financial records.
**Amazon Managed Blockchain:**
- Amazon Managed Blockchain allows creating and managing blockchain networks using open-source frameworks.
- Blockchain enables distributed ledger systems without a central authority.
- It may not be suitable for all financial regulatory requirements.
**Amazon ElastiCache:**
- Amazon ElastiCache adds caching layers on top of databases to improve read times for common
requests.
- It supports Redis and Memcached as caching stores.
- ElastiCache can enhance database performance without the need for manual caching management.
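A common way applications use a cache like ElastiCache is the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache. A rough sketch with the redis-py client follows; the endpoint, key naming, expiry, and the `load_product_from_db` stub are all hypothetical stand-ins:

```python
import json
import redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def load_product_from_db(product_id: str) -> dict:
    # Stand-in for a real relational query; replace with your data access code.
    return {"id": product_id, "name": "espresso beans", "price": 12.50}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: skip the database
    product = load_product_from_db(product_id)    # cache miss: query the database
    cache.set(key, json.dumps(product), ex=300)   # keep the result for 5 minutes
    return product
```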
**Amazon DynamoDB Accelerator (DAX):**
- Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB.
- It significantly improves response times, reducing them from single-digit milliseconds to
microseconds.
- DAX is specifically designed for non-relational data stored in DynamoDB.
**Key Takeaways:**
- AWS offers various specialized database services to cater to specific business requirements.
- Amazon DocumentDB is suitable for document-oriented workloads like content management and
user profiles.
- Amazon Neptune is a graph database service for highly connected datasets.
- Amazon QLDB provides immutable ledger functionality, ideal for financial records.
- Amazon Managed Blockchain enables the creation and management of blockchain networks.
- Amazon ElastiCache enhances read times by adding caching layers to databases.
- Amazon DynamoDB Accelerator (DAX) improves response times for DynamoDB's non-relational
data.
Module 5 Summary
1. **Elastic Block Store (EBS) Volumes:**
- Attached to EC2 instances for non-ephemeral storage (data persists when the instance is stopped).
- Provides persistent block-level storage for EC2 instances.
2. **Amazon S3 (Simple Storage Service):**
- Allows you to store objects in AWS through a simple interface or API.
- Object storage service suitable for various data types and sizes.
3. **Relational Databases on AWS:**
- Amazon RDS (Relational Database Service) for managed relational databases.
- Options like MySQL, PostgreSQL, Oracle, and SQL Server are supported.
4. **Amazon DynamoDB:**
- A non-relational database service for key-value pair databases.
- Suitable for workloads that require flexibility and scalability.
5. **Amazon Elastic File System (EFS):**
- Used for file storage use cases, providing scalable and shared file storage.
6. **Amazon Redshift:**
- Data warehousing service for analyzing large datasets.
- Optimized for complex analytical queries.
7. **Database Migration Service (DMS):**
- Aids in migrating existing databases to AWS securely and efficiently.
- Supports homogenous and heterogeneous migrations.
8. **Other Storage Services:**
- Amazon DocumentDB for document-oriented workloads.
- Amazon Neptune for graph databases, ideal for connected datasets.
- Amazon Quantum Ledger Database (Amazon QLDB) for maintaining immutable ledgers.
- Amazon Managed Blockchain for creating and managing blockchain networks.
9. **Caching Solutions:**
- Amazon ElastiCache for adding caching layers to improve read times.
- DynamoDB Accelerator (DAX) for significantly improving DynamoDB's response times.
Module 6 Introduction
1. **Shared Responsibility Model:**
- AWS follows a shared responsibility model where AWS is responsible for the security of the
cloud infrastructure, including data centers and services.
- Customers are responsible for securing their workloads and data within the cloud.
- This model ensures a collaborative approach to security in the cloud.
This model emphasizes that while AWS provides a secure cloud infrastructure, customers are
responsible for implementing security measures within their own workloads and applications to
ensure end-to-end security.
AWS Shared Responsibility Model
**Shared Responsibility Model:**
- Both AWS and the customer share the responsibility for ensuring security on the AWS Cloud.
- AWS is responsible for the security of the cloud infrastructure, including physical data centers,
network security, and underlying services.
- The customer is responsible for securing their data, workloads, applications, and configurations
within AWS services.
- This division of responsibilities ensures a collaborative approach to security.
- Customers maintain control over their content and are responsible for managing access, security
requirements, and configurations.
The shared responsibility model is akin to a homeowner and homebuilder relationship, where the
homebuilder constructs the house (AWS provides the cloud infrastructure), and the homeowner (the
customer) is responsible for securing and managing everything within the house (their data,
applications, and configurations).
This model emphasizes that while AWS provides a secure cloud environment, customers must
implement security measures within their workloads and applications to ensure comprehensive
security.
Key customer responsibilities include securing data, managing user access, configuring security
groups, patching operating systems, and selecting security measures based on specific operational
and security needs.
User Permissions and Access
**AWS IAM Overview:**
- IAM stands for Identity and Access Management, and it is used to manage access to AWS services
and resources securely.
- IAM provides flexibility in configuring access based on specific operational and security needs.
**Root User:**
- When you first create an AWS account, you have a root user who has complete access to all AWS
services and resources.
- It's recommended not to use the root user for everyday tasks but to create an IAM user for those
tasks.
- The root user should be used for specific administrative actions that only the root user can
perform.
**IAM Users:**
- IAM users represent individuals or applications that interact with AWS services and resources.
- By default, IAM users have no permissions associated with them.
- Permissions must be explicitly granted to IAM users, following the principle of least privilege.
- Individual IAM users are recommended even if multiple employees require the same level of
access for added security.
**IAM Policies:**
- IAM policies are JSON documents that allow or deny permissions to AWS services and resources.
- They define what API calls a user can or cannot make.
- Policies are attached to IAM users or groups to grant or deny access.
- The principle of least privilege should be followed when creating policies.
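To make the idea of a policy document concrete, here is a minimal sketch following least privilege: it allows a single API action on a single resource and attaches it inline to an IAM user with the SDK. The bucket name, user name, and policy name are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Allow only s3:ListBucket on one hypothetical bucket; everything else stays denied by default.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::example-reports-bucket",
    }],
}

iam.put_user_policy(
    UserName="analyst-jane",                 # hypothetical IAM user
    PolicyName="ListReportsBucketOnly",
    PolicyDocument=json.dumps(policy_document),
)
```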
**IAM Groups:**
- IAM groups are collections of IAM users.
- Policies can be attached at the group level, simplifying permission management.
- Users added to a group inherit the permissions associated with that group.
- Useful for managing permissions for groups of users with similar roles.
**IAM Roles:**
- IAM roles are used to grant temporary access permissions.
- Roles are similar to users but don't have usernames and passwords.
- Roles can be assumed by users, applications, external identities, and even other AWS services.
- When an identity assumes a role, it takes on the permissions of that role.
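As a sketch of how an identity assumes a role to obtain temporary credentials (the role ARN and session name are hypothetical), AWS STS returns short-lived keys that can then be used to create a client carrying that role's permissions:

```python
import boto3

sts = boto3.client("sts")

# Request temporary credentials for a hypothetical role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditor",
    RoleSessionName="audit-session",
    DurationSeconds=3600,  # credentials expire automatically
)
creds = resp["Credentials"]

# Any client built from these keys acts with the role's permissions, not the caller's.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```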
**Multi-Factor Authentication (MFA):**
- MFA adds an extra layer of security for AWS accounts.
- It requires users to provide multiple pieces of information to verify their identity.
- MFA can be enabled for both the root user and IAM users.
- It enhances account security by requiring a second authentication factor, like a random code from
a device.
AWS Organizations
**AWS Organizations Overview:**
- AWS Organizations is a service that helps manage multiple AWS accounts in one central location.
- It simplifies administration, billing, security, and resource sharing across accounts.
**Key Features:**
1. **Centralized Management:**
- Allows you to manage multiple AWS accounts centrally.
- Useful as your company grows and you need separation of duties.
- Creates a parent container called the "root" for all accounts in your organization.
2. **Consolidated Billing:**
- Provides consolidated billing for all member accounts.
- Allows you to pay for all member accounts using the primary account.
- Offers the potential for bulk discounts.
3. **Hierarchical Groupings (Organizational Units - OUs):**
- Enables you to group accounts into OUs based on business or security requirements.
- Policies applied to OUs affect all accounts within them.
- Useful for isolating workloads or applications with specific security needs.
- Helps manage permissions and resource access effectively.
4. **Service Control Policies (SCPs):**
- Allows control over AWS services and API actions each account can access.
- SCPs specify maximum permissions for member accounts.
- Ensures security by restricting access to specific AWS services and resources.
- Useful for adhering to security and compliance requirements.
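For illustration, an SCP is itself a JSON policy. The simplified sketch below (the OU ID, approved Region list, and policy name are hypothetical, and a production SCP would usually exempt global services) creates a policy that denies actions outside approved Regions and attaches it to an OU:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny any action requested outside the approved Regions (hypothetical compliance rule).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}},
    }],
}

policy = org.create_policy(
    Name="ApprovedRegionsOnly",
    Description="Limit member accounts to approved Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to a hypothetical organizational unit; all accounts under it inherit the restriction.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-examp-12345678")
```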
**Use Case Example:**
- Imagine a company with separate AWS accounts for finance, IT, HR, and legal departments.
- AWS Organizations helps consolidate these accounts into a single organization.
- You can create OUs to group accounts based on their unique needs.
- For example, finance and IT may not need OUs, while HR and legal can be grouped together.
- SCPs can be applied to OUs, restricting access to specific AWS services.
**Overall Significance:**
- AWS Organizations simplifies the management of multiple AWS accounts.
- It streamlines billing, access control, and compliance.
- SCPs and OUs help enforce security and regulatory requirements.
- It's especially valuable as companies expand their AWS usage and need better organization and
control over their resources and permissions.
Compliance
**Compliance and Auditing in AWS:**
- **Importance of Compliance:** Just like industries have specific standards to meet, your AWS
environment must also meet compliance requirements. Compliance ensures that your operations
follow regulations and industry standards.
- **Regulatory Requirements:** Depending on your business, you may need to adhere to specific
regulations. For example, GDPR for European consumer data or HIPAA for healthcare applications
in the US.
- **Shared Responsibility Model:** AWS follows the shared responsibility model. While AWS
provides a secure underlying platform, it's your responsibility to build compliant architectures and
solutions.
- **Data Control:** You have complete control over your data in AWS. AWS offers various
encryption mechanisms to keep your data safe. Data protection configurations are available for
many AWS services.
- **AWS Compliance Programs:** AWS complies with a wide range of assurance programs, and
you inherit AWS policies, architecture, and operational processes best practices. AWS provides
documentation on its compliance through services like AWS Artifact.
- **AWS Artifact:** AWS Artifact is a service that offers access to AWS security and compliance
reports and select online agreements. It consists of two main sections:
- **AWS Artifact Agreements:** You can review, accept, and manage agreements for individual
accounts and all accounts in AWS Organizations. Different agreements are available to address
specific regulations.
- **AWS Artifact Reports:** Provides compliance reports from third-party auditors verifying
AWS compliance with global, regional, and industry-specific security standards and regulations.
- **Customer Compliance Center:** AWS offers the Customer Compliance Center, where you
can access resources related to AWS compliance. This includes customer compliance stories,
whitepapers, documentation, and an auditor learning path.
**Key Takeaway:**
- Compliance is essential in AWS, and you must ensure that your AWS environment meets the
specific standards and regulations relevant to your business.
- AWS provides tools and resources, such as AWS Artifact and the Customer Compliance Center, to
help you achieve and document compliance.
- Remember the shared responsibility model, where AWS secures the platform, and you are
responsible for building compliant solutions on top of it.
Denial-of-Service Attacks
**Understanding DDoS Attacks:**
- **DDoS Attack Objective:** DDoS stands for Distributed Denial-of-Service. The objective of a
DDoS attack is to overwhelm your application's capacity to the point where it can no longer
function. It aims to deny access to your services.
- **Distributed Nature:** DDoS attacks are distributed in nature, meaning they leverage multiple
machines across the internet to launch the attack. The attacker creates an army of zombie bots to
carry out the assault.
- **Attack Strategies:** DDoS attacks involve various strategies, including low-level network
attacks like UDP floods and more sophisticated attacks like HTTP-level attacks.
- **UDP Flood Attack:** An example of a low-level attack is the UDP flood. In this attack, the
bad actor sends a simple request with a fake return address, causing a flood of unwanted data to
your server.
- **HTTP-Level Attacks:** More sophisticated attacks, like HTTP-level attacks, mimic normal
customer requests but come from an army of bot machines. These attacks can disrupt regular
customer access.
- **Slowloris Attack:** A specific attack called Slowloris mimics a slow connection, causing your
production servers to wait for complete requests, effectively exhausting your front-end capacity.
**AWS Defense Against DDoS:**
- **Security Groups:** AWS provides security groups that operate at the network level, allowing
only proper request traffic. This helps mitigate low-level network attacks like UDP floods.
- **Elastic Load Balancer (ELB):** ELB handles HTTP traffic requests and waits for complete
messages, making it resilient against attacks like Slowloris. ELB operates at the region level, adding
a layer of scalability and protection.
- **AWS Shield:** AWS offers specialized defense tools called AWS Shield. AWS Shield
Advanced, in particular, provides protection against sophisticated DDoS attacks. It uses a web
application firewall (WAF) to filter incoming traffic for malicious signatures and employs machine
learning to recognize new threats.
- **Shared Responsibility Model:** The shared responsibility model emphasizes that while AWS
provides a secure platform, you are responsible for building well-architected systems that can
defend against most attacks.
**AWS Shield Levels:**
- **AWS Shield Standard:** AWS Shield Standard is available to all AWS customers at no cost. It
protects against the most common DDoS attacks by automatically detecting and mitigating
malicious traffic in real-time.
- **AWS Shield Advanced:** AWS Shield Advanced is a paid service that provides more
comprehensive protection. It offers detailed attack diagnostics, the ability to detect and mitigate
sophisticated DDoS attacks, and integration with services like Amazon CloudFront, Amazon Route
53, and Elastic Load Balancing. AWS Shield Advanced can also be integrated with AWS WAF for
even more comprehensive protection.
**Key Takeaway:**
- DDoS attacks aim to overwhelm your application's capacity, and AWS provides tools like security
groups, ELB, and AWS Shield to defend against them.
- AWS Shield offers two levels of protection: Standard (free for all AWS customers) and Advanced
(paid service with advanced features).
- A well-architected system, combined with AWS Shield Advanced, can provide robust protection
against DDoS attacks.
Additional Security Services
**Securing Data at Rest and in Transit:**
- **Encryption:** Encryption is the process of securing data in a way that only authorized parties
can access it. It's like having a key to unlock a door.
- **Encryption at Rest:** Encryption at rest refers to securing data while it's idle and stored. AWS
provides server-side encryption at rest for services like DynamoDB. It integrates with AWS Key
Management Service (KMS) to manage encryption keys.
- **Encryption in Transit:** Encryption in transit ensures data is protected while it's being
transmitted between AWS services and clients. For example, secure sockets layer (SSL) connections
are used for data encryption between services like Redshift and SQL clients.
**AWS Key Management Service (KMS):**
- **AWS KMS:** AWS Key Management Service enables encryption operations using
cryptographic keys. Keys are used to lock (encrypt) and unlock (decrypt) data.
- **Access Control:** With AWS KMS, you can control access to keys, specifying which IAM
users and roles can manage them. You can also temporarily disable keys when they are no longer
needed.
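A minimal sketch of KMS in use (the key alias and plaintext are hypothetical): encryption and decryption are both API calls, so the key material never leaves KMS and every use of the key can be governed by IAM and audited in CloudTrail.

```python
import boto3

kms = boto3.client("kms")

# Encrypt with a hypothetical customer managed key referenced by alias.
ciphertext = kms.encrypt(
    KeyId="alias/app-data-key",
    Plaintext=b"card-on-file token",
)["CiphertextBlob"]

# Decrypting is also an API call, which is what makes key usage controllable and auditable.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```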
**AWS WAF (Web Application Firewall):**
- **AWS WAF:** AWS WAF is a web application firewall that monitors network requests to your
web applications. It works with Amazon CloudFront and Application Load Balancer to block or
allow traffic based on web access control lists (ACLs).
- **Blocking Malicious Requests:** AWS WAF can be configured to allow legitimate requests
while blocking those from specified IP addresses, providing protection against malicious traffic.
**Amazon Inspector:**
- **Amazon Inspector:** Amazon Inspector automates security assessments to improve the
security and compliance of applications. It checks for vulnerabilities and deviations from best
practices, providing a list of security findings.
- **Security Findings:** Inspector categorizes security findings by severity and offers detailed
descriptions of each issue along with recommendations for remediation.
**Amazon GuardDuty:**
- **Amazon GuardDuty:** Amazon GuardDuty provides intelligent threat detection for AWS
infrastructure and resources. It continuously monitors network activity and account behavior to
identify threats.
- **Continuous Monitoring:** GuardDuty analyzes data from multiple AWS sources, including
VPC Flow Logs and DNS logs, to detect threats. It provides detailed findings with recommended
steps for remediation.
These security services and practices help ensure the security of your data and resources within
AWS, which is essential for maintaining the integrity of your cloud environment.
Module 6 Summary
1. **Shared Responsibility Model:**
- AWS follows a shared responsibility model.
- AWS is responsible for the security of the cloud infrastructure.
- Customers are responsible for security in the cloud, including securing their data and
configurations.
2. **AWS Identity and Access Management (IAM):**
- IAM provides users, groups, roles, and policies for managing access to AWS resources.
- Users log in with usernames and passwords.
- By default, users have no permissions until policies are attached.
- Groups are used to group users and manage permissions collectively.
- Roles are identities that can be assumed to gain temporary credentials and permissions.
- Policies define permissions and can explicitly allow or deny actions in AWS.
- Identity federation allows integration with existing corporate identity stores.
- Multi-factor authentication (MFA) should be enabled, especially for the root user.
3. **AWS Organizations:**
- AWS Organizations helps manage multiple AWS accounts hierarchically.
- Accounts are used for isolation of workloads, environments, teams, or applications.
4. **AWS Compliance:**
- AWS undergoes third-party audits to demonstrate compliance with various standards.
- The AWS Customer Compliance Center provides information on compliance.
- AWS Artifact offers access to compliance documents.
- Compliance requirements vary depending on applications and areas of operation.
5. **DDoS Attacks and Security:**
- Distributed Denial-of-Service (DDoS) attacks are a threat.
- AWS offers tools like ELB, security groups, AWS Shield, and AWS WAF to combat DDoS
attacks.
- Security groups control inbound and outbound traffic to AWS resources.
- ELB (Elastic Load Balancer) can help distribute traffic and protect against DDoS attacks.
- AWS Shield and AWS WAF provide advanced security against DDoS attacks.
6. **Encryption:**
- Security in AWS includes encryption in transit and at rest.
- Customers are responsible for securing their data.
- Encryption should be applied at every layer, both in transit and at rest.
- Use AWS services to implement encryption to protect your environment.
7. **Security Best Practices:**
- Security is AWS's top priority.
- Security practices may vary between AWS services.
- Follow the least privilege principle when assigning permissions in IAM.
- Secure your data through encryption.
- Read AWS documentation for specific security guidelines for each service.
These key points cover the core concepts related to security and access management in AWS, as
well as strategies for mitigating threats like DDoS attacks and ensuring compliance with AWS
services.
Module 7 Introduction
1. **Importance of Monitoring in AWS:**
- Monitoring is crucial for businesses, even for a coffee shop owner, to ensure that systems and
processes are running smoothly.
2. **Metrics and Data Collection:**
- Businesses, including the coffee shop, can benefit from collecting metrics to measure the
performance of their systems.
- Metrics can include data like the number of coffees sold, average wait times, and inventory
levels.
3. **Role of Monitoring in the Cloud:**
- Monitoring is essential in the cloud, especially with AWS services that dynamically scale.
- Monitoring allows you to keep track of AWS resources to ensure they are performing as
expected.
4. **Use Cases for Cloud Monitoring:**
- In AWS, monitoring can trigger actions based on metrics.
- For example, if an EC2 instance is overutilized, you can automatically launch another instance.
- If an application starts generating a high rate of error responses, you can receive alerts to
investigate the issue.
5. **Upcoming Topics:**
- The upcoming videos will cover various monitoring tools in AWS.
- These tools will help you measure system performance, receive alerts for issues, and assist in
debugging and troubleshooting.
This information highlights the importance of monitoring in AWS, particularly for businesses with
dynamic cloud environments. It sets the stage for discussing monitoring tools in the following
videos.
Amazon CloudWatch
1. **Importance of Monitoring:**
- Businesses, including the coffee shop owner, need visibility into the state of their systems to
ensure they are running smoothly.
- Monitoring allows you to detect issues or anomalies, such as when an espresso machine needs
cleaning or repair.
2. **Introduction to Amazon CloudWatch:**
- Amazon CloudWatch is a web service provided by AWS for real-time monitoring and
management of various metrics.
- Metrics are data points tied to resources, such as the number of espressos made or CPU
utilization of an EC2 instance.
3. **Creating Custom Metrics and Alarms:**
- CloudWatch enables the creation of custom metrics, like "Espresso Count," and setting
thresholds for those metrics.
- CloudWatch alarms can be created to trigger actions when a threshold is reached, such as
sending an alert when the Espresso Count hits 100.
- These alarms can be integrated with SNS for alert notifications.
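The espresso example above can be sketched with the SDK: publish a data point for a custom metric, then create an alarm that fires when the total reaches 100 and notifies an SNS topic. The namespace, metric name, and topic ARN are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one data point for a hypothetical custom metric.
cloudwatch.put_metric_data(
    Namespace="CoffeeShop",
    MetricData=[{"MetricName": "EspressoCount", "Value": 1, "Unit": "Count"}],
)

# Alarm when the 5-minute sum reaches 100; the alarm action is a hypothetical SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="EspressoCountHigh",
    Namespace="CoffeeShop",
    MetricName="EspressoCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:espresso-alerts"],
)
```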
4. **Using CloudWatch Dashboards:**
- CloudWatch offers dashboards to aggregate and display metrics in near real-time.
- Dashboards can show data for multiple resources, allowing proactive monitoring and auto-refreshing to provide up-to-date views.
5. **Benefits of Using CloudWatch:**
- Centralized access to metrics from AWS resources and services, breaking down silos.
- Provides visibility across applications, infrastructure, and services to correlate and resolve issues quickly, reducing mean time to resolution (MTTR) and improving total cost of ownership (TCO).
- Offers insights to optimize operational resources and applications.
6. **Amazon CloudWatch Overview:**
- Amazon CloudWatch is a web service that monitors and manages various metrics, configures
alarm actions based on metric data, and creates graphs to track performance changes over time.
- Alarms can automatically trigger actions based on predefined thresholds, e.g., stopping an EC2
instance if CPU utilization remains low.
- CloudWatch dashboards provide a central location for accessing and monitoring metrics from
various resources.
This information introduces Amazon CloudWatch as a monitoring service that is essential for
tracking and managing metrics, creating alarms, and utilizing dashboards for real-time monitoring
and alerting in AWS environments.
AWS CloudTrail
1. **Introduction to AWS CloudTrail:**
- AWS CloudTrail is a comprehensive API auditing tool used for auditing transactions in IT.
- It is like a digital version of a cash register, ensuring that all API requests made to AWS are
logged and audited.
2. **Key Features of AWS CloudTrail:**
- CloudTrail records every API request made to AWS, regardless of the service or action involved.
- It logs critical details, including who made the request, when it was made, the source IP address,
and the response.
- These logs help in tracking and verifying all changes and actions within an AWS environment.
- CloudTrail provides an audit trail for compliance purposes, demonstrating accountability and
transparency.
3. **Benefits of AWS CloudTrail:**
- AWS CloudTrail is especially valuable for proving that critical settings, such as security group
configurations, have not changed.
- It assists in demonstrating compliance with security and access control measures.
- CloudTrail logs can be stored indefinitely in secure S3 buckets, and tamper-proof methods like
Vault Lock can be used for added security.
4. **AWS CloudTrail Event Example:**
- An example illustrates how CloudTrail can help track changes and actions. In this case, it shows
how an IAM user named Mary was created in AWS IAM.
- CloudTrail records the user who made the request (John), the method used (AWS Management
Console), and the timestamp of the action.
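To make the example concrete, a CloudTrail event is a JSON record. A trimmed, illustrative shape for the CreateUser call described above is shown below; the names, IP address, and timestamp are hypothetical, and real events carry many more fields:

```python
# Trimmed, illustrative CloudTrail event for the CreateUser request described above.
cloudtrail_event = {
    "eventTime": "2024-01-15T14:02:11Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "userIdentity": {"type": "IAMUser", "userName": "John"},  # who made the request
    "sourceIPAddress": "203.0.113.10",                        # where it came from
    "requestParameters": {"userName": "Mary"},                # what was requested
}
```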
5. **CloudTrail Insights:**
- CloudTrail offers an optional feature called CloudTrail Insights, which automatically detects
unusual API activities in an AWS account.
- For instance, it can identify unexpected increases in the launching of Amazon EC2 instances and
prompt further investigation.
This information provides an overview of AWS CloudTrail, its capabilities, and its significance for
auditing and compliance within an AWS environment.
AWS Trusted Advisor
1. **Introduction to AWS Trusted Advisor:**
- AWS Trusted Advisor is an automated advisory service provided by AWS.
- It evaluates your AWS resources against five key pillars: cost optimization, performance,
security, fault tolerance, and service limits.
- Trusted Advisor uses AWS best practices to assess the health and efficiency of your AWS
environment.
- It provides real-time recommendations to help you optimize your resources.
2. **Five Pillars of AWS Trusted Advisor:**
- **Cost Optimization:** Checks for cost-saving opportunities, such as underutilized resources
that can be scaled down or terminated.
- **Performance:** Monitors factors that could affect resource performance, like EBS volumes
and their associated EC2 instance throughput.
- **Security:** Identifies security vulnerabilities, including weak password policies, lack of
multi-factor authentication for root users, and public access to resources.
- **Fault Tolerance:** Advises on strategies for fault tolerance, such as creating EBS volume
snapshots for data backup and improving availability zone (AZ) distribution for EC2 instances.
- **Service Limits:** Notifies you when you approach or hit AWS service limits, helping you
plan for necessary adjustments.
3. **Trusted Advisor Dashboard:**
- The Trusted Advisor dashboard in the AWS Management Console displays checks for each of
the five pillars.
- Checks are categorized into three levels: green check (no problems detected), orange triangle
(recommended investigations), and red circle (recommended actions).
4. **Usage and Benefits:**
- Trusted Advisor offers guidance for optimizing your AWS account, and you can set up email
alerts for checks.
- It helps you align your AWS environment with best practices, improve cost-efficiency, enhance
security, and increase fault tolerance.
- By following Trusted Advisor's recommendations, you can save costs, ensure performance, and
strengthen the security of your AWS resources.
This information provides a comprehensive overview of AWS Trusted Advisor and its role in
evaluating and optimizing AWS resources across multiple key areas.
Module 7 Summary
1. **CloudWatch for Real-time Monitoring:**
- CloudWatch provides near real-time insight into how your AWS environment is behaving.
- It enables you to monitor your system's performance and conditions that may require attention.
- CloudWatch allows you to analyze metrics over time to optimize your system for maximum
performance.
2. **CloudTrail for Auditing:**
- CloudTrail records and audits every action taken in your AWS environment, tracking who
performed the action, when it occurred, and the source of the action.
- It is a comprehensive auditing tool that answers the "who, what, when, and where" questions
regarding AWS activities.
3. **Trusted Advisor for Best Practices:**
- Trusted Advisor compiles a dashboard with over 40 common concerns related to cost,
performance, security, and resilience.
- It provides actionable recommendations to address these concerns and optimize your AWS
resources.
4. **Additional Monitoring and Analytical Tools:**
- While CloudWatch, CloudTrail, and Trusted Advisor are essential tools, AWS offers a variety of
additional monitoring and analytical tools to meet specific business needs.
Understanding these AWS services is crucial for maintaining efficient, secure, and compliant
applications in your AWS environment. These tools play a vital role in monitoring, auditing, and
optimizing your AWS resources.
Module 8
AWS Free Tier
1. **AWS Free Tier:**
- AWS offers a Free Tier, allowing users to try out certain AWS services without incurring costs
for a specified period.
- There are three types of Free Tier offers: Always Free, 12 Months Free, and Trials.
2. **Always Free:**
- Always Free offers do not expire and are available to all AWS customers.
- Examples include AWS Lambda (1 million free requests and 3.2 million seconds of compute
time per month) and Amazon DynamoDB (25 GB of free storage per month).
3. **12 Months Free:**
- These offers are free for 12 months from the initial sign-up date to AWS.
- Examples include Amazon S3 Standard Storage and Amazon EC2 compute time.
4. **Trials:**
- Trials are short-term free trial offers that start from the date you activate a specific service.
- The length of the trial may vary by the number of days or the amount of usage in the service.
- For example, Amazon Inspector offers a 90-day free trial, and Amazon Lightsail provides 750
free hours of usage over a 30-day period.
AWS Pricing Concepts
**AWS Pricing Concepts:**
- AWS offers cloud computing services with pay-as-you-go pricing.
- Three pricing concepts: Pay for what you use, Pay less when you reserve, Pay less with volume-based discounts when you use more.
**Pay for what you use:**
- You pay for the exact amount of resources used for each service.
- No long-term contracts or complex licensing required.
**Pay less when you reserve:**
- Some services offer reservation options providing significant discounts (e.g., Amazon EC2
Instance Savings Plans).
**Pay less with volume-based discounts when you use more:**
- Some services offer tiered pricing, where per-unit costs decrease with increased usage (e.g.,
Amazon S3 storage).
**AWS Pricing Calculator:**
- Allows exploration of AWS services and cost estimation.
- Organize estimates by defined groups (e.g., cost centers) and share them.
**AWS Pricing Examples:**
- Examples of pricing for AWS services, including AWS Lambda, Amazon EC2, and Amazon S3.
**AWS Lambda:**
- Charged based on the number of requests and compute time.
- Offers 1 million free requests and up to 3.2 million seconds of compute time per month.
- Compute Savings Plans for cost savings with usage commitment.
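As a back-of-the-envelope illustration of request-plus-duration pricing, the monthly charge can be estimated as shown below. The rates are placeholders only, the Free Tier is ignored, and actual prices vary by Region, so always check the current price list:

```python
# Placeholder rates for illustration only -- not current AWS prices.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.0000167     # USD

requests_per_month = 5_000_000
avg_duration_seconds = 0.3
memory_gb = 0.5                     # 512 MB of configured memory

request_cost = (requests_per_month / 1_000_000) * PRICE_PER_MILLION_REQUESTS
compute_cost = requests_per_month * avg_duration_seconds * memory_gb * PRICE_PER_GB_SECOND

# Ignores the monthly Free Tier allowance for simplicity.
print(f"Estimated monthly Lambda cost: ${request_cost + compute_cost:.2f}")
```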
**Amazon EC2:**
- Pay for compute time used when instances are running.
- Cost reduction options include Spot Instances, Savings Plans, and Reserved Instances.
**Amazon S3:**
- Pricing includes storage, requests, data retrievals, data transfer, and management features.
- Charges based on object sizes, storage classes, and usage duration.
- Some data transfer is free within AWS Regions, with exceptions.
Billing Dashboard
**AWS Billing & Cost Management:**
- Access the AWS Billing & Cost Management dashboard to manage your AWS bill, monitor usage,
and control costs.
**Dashboard Features:**
- The dashboard displays your month-to-date spend, top services, last month's costs, current costs,
and forecasted amounts.
- Top Free Tier services by usage are listed, along with percentage usage.
- Access other billing tools like Cost Explorer, Budgets, and more.
**Viewing Bills:**
- Access your bill to see the services you're paying for.
- Bills can be expanded to see costs broken down by region.
- Global services are also shown with their respective breakdowns.
Consolidated Billing
**Consolidated Billing in AWS Organizations:**
- AWS Organizations allows you to manage multiple AWS accounts from a central location.
- Consolidated billing is a feature in AWS Organizations that enables you to receive a single bill for
all AWS accounts within your organization.
- This simplifies billing by consolidating the bills from multiple accounts into one, making it easier
to track expenses.
- On the monthly bill, you can review itemized charges for each account, providing transparency
into individual account expenses while still receiving a single monthly bill.
- Benefits of consolidated billing include the ability to share bulk discount pricing, Savings Plans,
and Reserved Instances across the accounts in your organization.
- This allows you to take advantage of volume pricing discounts, even if individual accounts don't
meet the usage thresholds.
- Consolidated billing is a free and easy-to-use feature, simplifying the billing process and cost
management.
AWS Budgets
**AWS Budgets:**
- The information in AWS Budgets updates three times a day.
- AWS Budgets is a tool that allows you to track and manage your AWS expenses to ensure they
stay within your predefined budget.
- Similar to personal budgeting, AWS Budgets enables you to set custom budgets for different
scenarios, including costs and usage.
- When your costs or usage are approaching or exceeding your budgeted amount, AWS Budgets will
send you alerts, allowing you to take proactive action to control your expenses.
- You can configure your budget by selecting a budget type (e.g., cost), giving it a name, specifying
the budgeted amount (e.g., $1000), and setting an alert threshold (e.g., 80%).
- You can receive notifications via email when your costs or usage reach the defined alert threshold.
**AWS Budgets Example:**
- For example, if you have set a budget for Amazon EC2 usage not to exceed $200 for the month,
you can create a custom budget in AWS Budgets to receive an alert when your usage reaches half of
that amount, which is $100.
- AWS Budgets helps you stay informed about your AWS spending and allows you to make
informed decisions about your resource usage to stay within your budget.
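The $200 budget with an alert at $100 can be expressed through the Budgets API as a cost budget with a 50% actual-spend threshold. A minimal sketch follows; the account ID, budget name, and email address are hypothetical:

```python
import boto3

budgets = boto3.client("budgets")

# Hypothetical account and subscriber; alert at 50% of a $200 monthly cost budget.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "ec2-monthly",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 50.0,               # percent of the budgeted amount
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "billing@example.com"}],
    }],
)
```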
AWS Cost Explorer
**AWS Cost Model:**
- AWS follows a variable cost model, meaning you only pay for the resources and services you use.
Your bill fluctuates based on your actual usage and how you use those resources.
**AWS Cost Explorer:**
- AWS offers a service called AWS Cost Explorer, a console-based tool that allows you to visually
analyze your AWS spending.
- AWS Cost Explorer provides a historical view of your spending over the past 12 months, enabling
you to track and understand how your expenses change over time.
- You can use Cost Explorer to identify which AWS services are consuming the most resources and
costs.
- It allows you to group cost data by different attributes such as service, region, and tags.
- Tags are user-defined key-value pairs that can be applied to AWS resources for cost tracking and
management.
- Cost Explorer supports the creation of custom reports to analyze specific cost and usage patterns,
helping identify cost drivers and optimize spending.
- Understanding cost optimization is essential for managing AWS expenses efficiently.
**AWS Cost Explorer Example:**
- An example shows the AWS Cost Explorer dashboard, displaying monthly costs for Amazon EC2
instances over a 6-month period, with costs separated by instance types.
- By analyzing your AWS costs over time, you can make informed decisions about future costs,
budget planning, and resource optimization.
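Cost Explorer data is also available programmatically. The sketch below pulls monthly unblended cost grouped by service for an illustrative date range (the dates are placeholders), which is the same view the dashboard example describes:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Monthly unblended cost grouped by service; the date range is illustrative.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"  {service}: ${float(amount):.2f}")
```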
AWS Support Plans
**AWS Support Plans:**
- AWS offers different support plans designed to cater to the specific needs of businesses, regardless
of their size, ranging from small startups to large enterprises, and across the private and public
sectors.
- Every AWS customer is automatically provided with AWS Basic Support, which is available at no
cost. Basic Support includes features like 24/7 access to customer service, documentation,
whitepapers, support forums, AWS Trusted Advisor, and the AWS Personal Health Dashboard.
- As businesses move critical workloads into AWS, they can opt for higher levels of support to
match their requirements.
- The available AWS Support plans include Basic, Developer, Business, Enterprise On-Ramp, and
Enterprise Support.
**Details of AWS Support Plans:**
- **Basic Support**: Provided for free and includes access to documentation, support
communities, AWS Trusted Advisor checks, and the AWS Personal Health Dashboard for
monitoring service health.
- **Developer Support**: Includes all features of Basic Support and offers direct email support
with a 24-hour response time and a 12-hour response time for impaired systems. Suitable for
businesses experimenting with AWS.
- **Business Support**: Includes features of the previous plans, access to AWS Trusted Advisor
checks, and provides direct phone access to the support team. Offers a 4-hour response time for
impaired systems and a 1-hour response time for systems that are down. It also provides access to
infrastructure event management for an additional fee.
- **Enterprise On-Ramp**: Designed for companies migrating production and business-critical
workloads to AWS. Includes all features of previous plans and offers a 30-minute response time for
business-critical workloads, along with access to a pool of Technical Account Managers (TAMs) for
proactive guidance.
- **Enterprise Support**: Recommended for companies running mission-critical workloads. It
includes all the features of previous plans and offers a 15-minute response time for business and
mission-critical workloads. It provides a designated TAM, proactive reviews, workshops, and deep
dives.
**Technical Account Managers (TAMs):**
- TAMs are included with both Enterprise-level Support plans (Enterprise On-Ramp and Enterprise Support).
- TAMs provide infrastructure event management, Well-Architected reviews, and operations
reviews.
- Well-Architected reviews are used to assess architectures against the six pillars of the Well-Architected Framework: Operational Excellence, Security, Reliability, Performance Efficiency, Cost
Optimization, and Sustainability.
- TAMs play a holistic role, assisting customers not just in handling trouble tickets but in ensuring
their overall success with AWS.
AWS Marketplace
**AWS Marketplace:**
- AWS Marketplace is a curated digital catalog that simplifies the process of finding, deploying, and
managing third-party software within your AWS architecture.
- It offers a variety of payment options and aims to accelerate innovation, facilitate rapid and secure
deployment of solutions, and reduce the total cost of ownership.
- Key ways in which AWS Marketplace supports customers include one-click deployment, enabling
quick procurement and usage of products from thousands of software sellers.
- Most vendors in the marketplace allow users to credit existing annual licenses for AWS
deployment, and many also offer flexible on-demand pay-as-you-go pricing options.
- Free trials and Quick Start plans are available for experimentation and learning about software
offerings.
- The marketplace caters to large enterprises by providing enterprise-focused features like custom
terms and pricing, private marketplace creation, integration into procurement systems, and various
cost management tools.
- AWS Marketplace is a valuable resource for finding, testing, and purchasing software that runs on
AWS, with detailed information on pricing, support, and customer reviews.
- Software solutions are categorized by industry and use case, making it easier for users to find
solutions that address their specific needs.
Module 8 Summary
**1. AWS Free Tier:**
- Three types of offers included in the AWS Free Tier: 12 months free, Always free, and Trials.
These offer different durations of free usage for various AWS services.
**2. Consolidated Billing:**
- Benefits of consolidated billing in AWS Organizations, which allows multiple AWS accounts to
receive a single bill for easier cost tracking.
**3. AWS Cost Management:**
- Tools for planning, estimating, and reviewing AWS costs, including AWS Budgets and Cost
Explorer.
**4. AWS Support Plans:**
- Differences between the five AWS Support plans: Basic, Developer, Business, Enterprise On-Ramp, and Enterprise. These plans offer various levels of support, response times, and features.
**5. AWS Marketplace:**
- How to discover and use software in AWS Marketplace, a digital catalog with third-party software
listings.
Module 9 Introduction
**1. Migration and Innovation:**
- Understand the concepts of migration and innovation in the AWS Cloud.
**2. AWS Cloud Adoption Framework (AWS CAF):**
- Summarize the AWS Cloud Adoption Framework (AWS CAF), which provides guidance for
organizations moving to the AWS Cloud.
**3. Cloud Migration Strategy:**
- Summarize the six key factors of a cloud migration strategy, which are essential for planning a
successful migration to the AWS Cloud.
**4. Data Migration Solutions:**
- Describe the benefits of AWS data migration solutions, including AWS Snowcone, AWS
Snowball, and AWS Snowmobile. These are physical devices used to migrate data in and out of
AWS.
**5. Innovative Solutions:**
- Summarize the broad scope of innovative solutions that AWS offers, showcasing the innovative
capabilities available in the AWS Cloud.
AWS Cloud Adoption Framework (AWS CAF)
For the AWS Cloud Practitioner exam, the key information about the AWS Cloud Adoption Framework (AWS CAF) and its six core perspectives is as follows:
**AWS Cloud Adoption Framework (AWS CAF):**
- AWS offers the AWS Cloud Adoption Framework (AWS CAF) to assist organizations in migrating
to the cloud.
- The AWS CAF provides guidance for a smooth and efficient migration to AWS.
**Six Core Perspectives of the AWS CAF:**
1. **Business Perspective:**
- Focuses on aligning IT with business needs and connecting IT investments to business results.
- Creates a strong business case for cloud adoption and prioritizes cloud adoption initiatives.
- Common roles include business managers, finance managers, budget owners, and strategy
stakeholders.
2. **People Perspective:**
- Supports the development of an organization-wide change management strategy for successful
cloud adoption.
- Evaluates organizational structures, roles, skill and process requirements, and identifies gaps.
- Prioritizes training, staffing, and organizational changes.
- Common roles include human resources, staffing, and people managers.
3. **Governance Perspective:**
- Focuses on aligning IT strategy with business strategy to maximize business value and minimize
risks.
- Addresses skills and processes required for effective business governance in the cloud.
- Manages and measures cloud investments to evaluate business outcomes.
- Common roles include the Chief Information Officer (CIO), program managers, enterprise
architects, business analysts, and portfolio managers.
4. **Platform Perspective:**
- Includes principles and patterns for implementing new solutions on the cloud and migrating on-premises workloads to the cloud.
- Utilizes architectural models to understand and communicate the structure of IT systems.
- Describes the architecture of the target state environment in detail.
- Common roles include the Chief Technology Officer (CTO), IT managers, and solutions
architects.
5. **Security Perspective:**
- Ensures the organization meets security objectives for visibility, auditability, control, and agility.
- Structures the selection and implementation of security controls that meet the organization's
needs.
- Common roles include the Chief Information Security Officer (CISO), IT security
managers, and IT security analysts.
6. **Operations Perspective:**
- Helps enable, run, use, operate, and recover IT workloads as agreed upon with business
stakeholders.
- Defines day-to-day, quarter-to-quarter, and year-to-year business operations.
- Aligns with and supports the operations of the business, identifying process changes and training
needed for successful cloud adoption.
- Common roles include IT operations managers and IT support managers.
The AWS CAF and its six core perspectives offer a structured approach to manage the cloud
migration process by involving different groups within an organization, addressing their distinct
responsibilities, and identifying gaps to create an action plan for successful migration.
Migration Strategies
For the AWS Cloud Practitioner exam, here are the key points regarding migration strategies and
the six R's:
**Migrating to AWS:**
- Migrating to AWS is a process that involves careful planning and selecting the right migration
strategy.
- Six migration strategies are referred to as the six R's, which are Rehosting, Replatforming,
Refactoring/Re-architecting, Repurchasing, Retaining, and Retiring.
- The choice of migration strategy depends on factors like time, cost, priority, and criticality.
**The Six R's - Migration Strategies:**
1. **Rehosting (Lift-and-Shift):**
- Involves moving applications to AWS without making significant changes.
- Offers a quick migration process but may not fully optimize applications.
- Can result in cost savings, even without immediate optimization.
2. **Replatforming (Lift, Tinker, and Shift):**
- Similar to rehosting but includes making some cloud optimizations.
- No core code changes are made during this process.
- Examples include migrating a MySQL database to RDS MySQL or upgrading to Amazon
Aurora for improved performance.
3. **Refactoring/Re-architecting:**
- Involves reimagining the application's architecture using cloud-native features.
- Driven by business needs to add features, scale, or achieve performance improvements.
- Requires significant planning and effort, with the potential for high initial costs.
4. **Repurchasing:**
- Involves moving from traditional software licenses to a software-as-a-service (SaaS) model.
- Example: Migrating from an on-premises CRM system to Salesforce.com.
- May have an upfront cost but can offer substantial benefits.
5. **Retaining:**
- Involves keeping applications in the source environment.
- Suitable for applications that need major refactoring before migration or work that can be
postponed.
- Applications are not migrated immediately.
6. **Retiring:**
- The process of removing applications that are no longer needed.
- Up to 10-20% of an organization's application portfolio may include applications that can be
retired.
Understanding these migration strategies and choosing the appropriate one for each application or
application group is critical for a successful migration to AWS. The selection should consider
factors like cost, time, and business priorities.
AWS Snow Family
Here's the important information about the AWS Snow Family and its devices:
**AWS Snow Family:**
- The AWS Snow Family consists of physical devices designed to transport large amounts of data
into and out of AWS efficiently.
**AWS Snowcone:**
- AWS Snowcone is a compact device that can store up to eight terabytes of data and includes edge
computing capabilities with Amazon EC2 instances and AWS IoT Greengrass.
- To use Snowcone, customers place an order via the AWS Management Console, receive the
device, copy data onto it, and then return it to AWS for data transfer to their AWS account, typically
to an Amazon S3 bucket.
- Use cases include shipping terabytes of data, such as analytics data, video libraries, image
collections, backups, and more.
**AWS Snowball:**
- AWS Snowball comes in two versions: Snowball Edge Storage Optimized and Snowball Edge
Compute Optimized.
- These devices are designed for large-scale data migrations and offer local computing capabilities.
- Snowball Edge devices can be plugged into existing server racks and can run AWS Lambda
functions, Amazon EC2-compatible AMIs, and AWS IoT Greengrass.
- Use cases include capturing IoT device data, image compression, video transcoding, and industrial
signaling.
**AWS Snowmobile:**
- AWS Snowmobile is the largest device in the Snow Family, housed in a 45-foot rugged shipping
container and transported by a truck.
- It can store up to 100 petabytes of data, making it suitable for the largest data migrations and data
center shutdowns.
- Snowmobile is tamper-resistant, waterproof, temperature-controlled, and includes features like fire
suppression and GPS tracking.
- It offers 24/7 video surveillance with dedicated security during transit.
- Data on all Snow Family devices is secure and tamper-resistant, with cryptographic signatures and
automatic 256-bit encryption keys, which can be managed using AWS Key Management Service.
Understanding the AWS Snow Family and its devices is essential for efficiently transferring large
volumes of data to AWS, particularly in cases where traditional methods are limited by bandwidth
constraints.
Innovation with AWS
**Innovations with AWS Services:**
- AWS provides a platform for builders to create and innovate at the pace of their ideas, and it offers
a wide range of services beyond what's covered here.
- Notably, you can migrate to AWS using VMware Cloud on AWS, which lets you lift and shift VMware-based infrastructure to the cloud.
**Machine Learning and Artificial Intelligence:**
- AWS offers a comprehensive set of machine learning and AI services, including pre-trained AI
services for computer vision, language, recommendations, and forecasting.
- Amazon SageMaker allows you to quickly build, train, and deploy machine learning models at
scale and supports popular open-source frameworks.
- Services like Amazon Augmented AI (A2I), Amazon Lex, and Amazon Textract provide ready-to-use AI solutions for various purposes.
- AWS DeepRacer offers a way for developers to experiment with reinforcement learning through a
fun racing environment.
**Internet of Things (IoT):**
- AWS provides technologies for the Internet of Things, enabling connected devices to
communicate globally.
**AWS Ground Station:**
- AWS Ground Station allows you to access satellite communication and pay only for the satellite
time you use, making it more affordable than launching your own satellite.
**Innovation with AWS Services:**
- To drive innovation with AWS services, you should focus on understanding the current state,
desired state, and the problems you want to solve.
**Serverless Applications:**
- AWS offers serverless applications that don't require provisioning or managing servers, and AWS
Lambda is an example of a service to run serverless applications.
- Serverless architecture enables developers to focus on their core product instead of server
management.
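For a feel of what "no servers to manage" looks like in practice, a Lambda function can be as small as a single handler. A minimal sketch follows; the event shape depends entirely on whatever triggers the function, so the `name` field here is hypothetical:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger's payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```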
**Machine Learning (ML):**
- Traditional ML development can be complex and time-consuming, but AWS simplifies it with
Amazon SageMaker, allowing you to build, train, and deploy ML models quickly.
- ML can be used to analyze data, solve complex problems, and predict outcomes.
**Artificial Intelligence (AI):**
- AWS provides various AI-powered services for tasks like code recommendations, code security
with Amazon CodeWhisperer, speech-to-text conversion with Amazon Transcribe, text pattern
analysis with Amazon Comprehend, fraud detection with Amazon Fraud Detector, and building
chatbots with Amazon Lex.
AWS offers a broad ecosystem of services that cater to various use cases, allowing businesses to
innovate and harness the power of cloud technology.
Amazon CodeWhisperer
**Amazon CodeWhisperer Overview:**
- Amazon CodeWhisperer is an AI software coding companion.
- It analyzes code and comments while a developer writes code in their integrated development
environment.
**Benefits for Developers:**
- CodeWhisperer goes beyond code completion by using natural language processing to understand
the comments in a developer's code.
- It generates complete functions and code blocks based on a developer's descriptions.
- CodeWhisperer also analyzes the surrounding code to ensure generated code matches a
developer's style and naming conventions.
- It simplifies code development by automating repetitive tasks and saving developer time.
- Developers can rely on code suggestions that match their coding style, streamlining workflows
and ensuring high-quality code delivery.
**Benefits for Organizations:**
- CodeWhisperer accelerates application development, leading to faster software delivery.
- It optimizes developer time by automating repetitive tasks, allowing developers to focus on critical
aspects of the project.
- CodeWhisperer helps mitigate security vulnerabilities by assessing code against various standards
and best practices.
- It offers an open source reference tracker to protect open source intellectual property.
- CodeWhisperer enhances code quality and reliability, resulting in robust and efficient applications.
- It enables an efficient response to evolving software threats, ensuring the codebase is up to date
with the latest security practices.
Amazon CodeWhisperer is a valuable tool for developers and organizations, enhancing both code
quality and security while optimizing development workflows.
Module 9 Summary
**AWS Cloud Adoption Framework:**
- Provides guidance on cloud migration.
- Offers Business, People, and Governance Perspectives for non-technical planning.
- Provides Platform, Security, and Operations Perspectives for technical planning.
**Six R's of Migration:**
- Rehost
- Replatform
- Repurchase
- Refactor
- Retire
- Retain
**Data Migration Strategies:**
- Use AWS Snowball and AWS Snowmobile for moving massive amounts of data into AWS.
- These physical devices are filled with data and then shipped back to AWS, where the data is
uploaded.
- This approach helps bypass potential network throughput limits and can be more secure than
transferring the data over the internet, even on a high-speed connection.
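As a hedged sketch of how a Snowball data migration might be initiated programmatically, the boto3
example below requests an import job. Every identifier (bucket ARN, address ID, IAM role ARN) is a
placeholder, and the available device and capacity options vary by region and current Snow Family
offerings; many teams create these jobs from the console instead.

```python
# Sketch: requesting an AWS Snowball import job with boto3.
# All identifiers below are placeholders; device and capacity options vary by region.
import boto3

snowball = boto3.client("snowball")

job = snowball.create_job(
    JobType="IMPORT",  # import data from the shipped device into Amazon S3
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-migration-bucket"}  # placeholder bucket
        ]
    },
    Description="Bulk data migration into S3",
    AddressId="ADID-EXAMPLE",  # shipping address created beforehand (placeholder)
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",  # placeholder role
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)

print(job["JobId"])
```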
These concepts are essential for understanding migration strategies and options for moving data to
AWS efficiently and securely.
Module 10 Introduction
**Well-Architected Framework:**
- It is used to evaluate AWS architectures against architectural best practices.
- The framework consists of pillars that assess different categories, including Operational
Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
**Basic principles:**
- Evaluating the quality of an architecture is essential, and it's not solely based on whether it "looks
good."
- There are tools and frameworks, like the Well-Architected Framework, to help assess the quality
of AWS architectures.
- Having the ability to identify deficiencies in architectures, especially concerning reliability, is
crucial for building robust solutions.
Understanding the Well-Architected Framework and its pillars is important for evaluating and
improving the quality of AWS architectures.
The AWS Well-Architected Framework
**Well-Architected Framework:**
- Designed to help architects, developers, and users build secure, high-performance, resilient, and
efficient infrastructure on AWS.
- Composed of six pillars: Operational Excellence, Security, Reliability, Performance Efficiency,
Cost Optimization, and Sustainability.
**Pillar Overview:**
1. **Operational Excellence:**
- Focuses on running and monitoring systems to deliver business value.
- Emphasizes automating changes, responding to events, and continuous improvement.
2. **Security:**
- Prioritizes data integrity and protection using encryption.
- Adheres to AWS security best practices.
3. **Reliability:**
- Concentrates on recovery planning and handling changes to meet demand.
- Addresses recovery from disruptions and system scaling.
4. **Performance Efficiency:**
- Involves using IT and computing resources efficiently.
- Emphasizes informed decision-making, serverless architectures, and global scalability.
5. **Cost Optimization:**
- Focuses on optimizing costs by analyzing and attributing expenditure.
- Recommends adopting a consumption model and using managed services to reduce ownership
costs.
6. **Sustainability (new pillar):**
- Aims to reduce energy consumption and increase efficiency.
- Encourages sustainability goals, maximizing resource utilization, and adopting new, efficient
hardware and software offerings.
**Well-Architected Tool:**
- Provides a self-service tool to evaluate architectures based on the framework.
- Generates a report indicating areas for improvement, following a traffic light system (green,
orange, red).
- Users can customize responses to suit their scenario.
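As a small, hedged sketch of working with the Well-Architected Tool programmatically, the boto3
example below lists workloads that have already been registered for review and prints their risk counts.
It assumes workloads were defined beforehand (for example, in the console); treat the exact output
handling as illustrative.

```python
# Sketch: reading existing AWS Well-Architected Tool reviews with boto3.
# Assumes workloads were already registered for review in the Well-Architected Tool.
import boto3

wat = boto3.client("wellarchitected")

# List registered workloads and show the risk counts from their reviews.
for workload in wat.list_workloads()["WorkloadSummaries"]:
    print(workload["WorkloadName"], workload.get("RiskCounts", {}))
```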
Understanding the Well-Architected Framework and its pillars is essential for designing and
evaluating AWS architectures effectively.
Benefits of the AWS Cloud
**Advantages of Cloud Computing:**
1. **Trade upfront expense for variable expense:**
- Traditional data centers require significant upfront investments in hardware and infrastructure.
- AWS offers a variable cost model, where you pay only for the resources you consume, with no
need for a large upfront investment.
- You can adjust your resources as needed and optimize costs by eliminating unused resources.
2. **Benefit from massive economies of scale:**
- AWS's large-scale operations allow for lower variable costs compared to running your own data
center.
- Aggregated usage from numerous customers leads to better economies of scale, resulting in
lower pay-as-you-go prices.
3. **Stop guessing capacity:**
- In traditional data centers, you must predict and invest in infrastructure capacity, which can lead
to over- or underestimating resources.
- AWS allows you to provision resources as needed, ensuring you have the right capacity without
overcommitting.
4. **Increase speed and agility:**
- AWS provides the flexibility to experiment and innovate by quickly deploying resources.
- You can easily develop, test, and deploy applications, adapt to changing needs, and make agile
decisions.
5. **Stop spending money running and maintaining data centers:**
- Running data centers requires ongoing investment, management, and maintenance.
- AWS takes care of the infrastructure, allowing you to focus on core business activities and
innovation.
6. **Go global in minutes:**
- Expanding to new regions traditionally involves time-consuming and expensive data center
setup.
- With AWS, you can replicate your architecture in new regions in minutes, ensuring global
availability.
Understanding these advantages is crucial for evaluating cloud solutions and making informed
decisions about using AWS.
Module 10 Summary
The six pillars of the AWS Well-Architected Framework:
Operational excellence
Security
Reliability
Performance efficiency
Cost optimization
Sustainability
Six advantages of cloud computing:
Trade upfront expense for variable expense.
Benefit from massive economies of scale.
Stop guessing capacity.
Increase speed and agility.
Stop spending money running and maintaining data centers.
Go global in minutes.