
Cloud Computing and Advanced Computing Technologies

UNIT-1
CLOUD COMPUTING
Bio-computing
Biocomputing, in its essence, is the marriage of computational methods and
biological processes. It harnesses the power of algorithms and data analytics to
unravel the complexities of living systems, providing insights that extend from
molecular interactions to ecological dynamics. Biocomputers use biologically
derived materials to perform computational functions. A biocomputer consists
of a pathway or series of metabolic pathways involving biological materials that
are engineered to behave in a certain manner based upon the conditions (input)
of the system.
The growing amount of biological data being generated has made it difficult
for researchers to effectively store, process, and analyze it using traditional
methods. The use of cloud computing, however, has provided a solution by
offering the scalability and cost-effectiveness required to handle large amounts
of data.
ADVANTAGES
One of the main advantages of biocomputers is their energy efficiency.
While electronic computers require a significant amount of electricity to
operate, biocomputers require only a small fraction of that energy.
As a field, biological computation can include the study of the systems
biology computations performed by biota, the design of algorithms inspired by
the computational methods of biota, and the design and engineering of
manufactured computational devices using synthetic biology components and
computer methods.
Is computational biology important?
Computational biology has been used to assist in sequencing the human
genome. Computational biologists use a wide variety of software. These range
from command-line programs to graphical and web-based programs. Open-source
software provides a platform for developing computational biology
techniques.
Is computational biology the future?
Computational biology, already being adapted for clinical use, is likely to
become part of routine health care in the future, and Markel suspects that
one area where we will see this change is the "internet of things."
Computational biology applications are not limited to research and drug
discovery.
Mobile computing
Mobile cloud computing (MCC) is the method of using cloud technology to
deliver mobile apps. Complex mobile apps today perform tasks such as
authentication, location-aware functions, and providing targeted content and
communication for end users.
Mobile computing refers to the use of portable computing devices such as
smartphones, tablets, and laptops, to access and transmit information
wirelessly over a network. Mobile computing is important because it enables
people to access information and communicate with others from anywhere, at
any time.
Mobile computing is the technology that allows users to connect and transmit
data, audio/voice, and video from one device to another without being tethered
to a stationary area with cabling and wires. Devices supporting mobile
computing include smartphones, tablets, laptops, and wearable devices, such as
smartwatches.
Example of a mobile cloud
Gmail, Outlook, and Yahoo Mail are common examples of mobile email.
When you check your email through your smartphone, you are using mobile
cloud computing technology. Social media: it enables quick sharing of real-time data on social media platforms like Twitter, Instagram, and Facebook.
Application of mobile computing
Communication: Mobile computing has revolutionized the way people
communicate with each other. It has made it possible to make calls, send text
messages, and access social media platforms from anywhere in the world.
Advantages:
The ability to accept payments wirelessly; increased ability to communicate
in and out of the workplace; greater access to modern apps and services;
improved networking capabilities.
Disadvantages:
Connectivity issues: Mobile devices require a reliable internet or cellular
connection to function properly, and connectivity issues can disrupt or prevent
their use.
Security concerns: Mobile devices are often more vulnerable to cyberattacks
and data breaches, which can compromise sensitive information.
Characteristics of mobile computing:
Portability: Users may effortlessly carry mobile computing devices with them
wherever they go because of their lightweight and compact design. One
essential characteristic that sets them apart from conventional desktop
computers is their portability.
Principles of mobile computing:
Some of the main principles which lie behind mobile computing
are portability, social interactivity, connectivity and individuality . Mobile
computing makes use of primarily three different forms of wireless data
connections. These are cellular data services, Wi-Fi connections and satellite
internet access.
Quantum computing
Quantum computers achieve their massive computing power by applying
quantum physics to processing, and when users are given access to these
quantum-powered computers over the internet, it is known as quantum
computing in the cloud.
Quantum computing is a unique technology because it isn’t built on bits that
are binary in nature, meaning that they’re either zero or one. Instead, the
technology is based on qubits. These are two-state quantum mechanical systems
that can be part zero and part one at the same time.
HARDWARE STRUCTURE OF A QUANTUM COMPUTER
1. Quantum Data Plane: the "heart" of a quantum computer.
2. Control and Measurement Plane.
3. Control Processor Plane and Host Processor.
4. Qubit Technologies.
What is quantum computing used for?
From a business-management point of view, the potential applications of
quantum computing fall into four major categories: cybersecurity, data
analytics and artificial intelligence, optimization and simulation, and data
management and searching.
Advantages:
Cloud-based quantum computing offers considerable advantages over
on-premises solutions. For a non-exhaustive but broad sampling, web-based
access enables organizations of all sizes to conduct research on state-of-the-art
hardware and software without the significant costs of building and
maintaining either.
Disadvantages of quantum computing:
 Quantum error correction and environmental sensitivity are major
challenges.
 Post-quantum cryptography is a national security concern.
 Quantum-powered AI could create unintended consequences.
Features of quantum computing
Superposition and quantum entanglement are the underlying properties of
quantum computers. In the case of superposition, qubits can take on multiple
values at the same time, and through quantum entanglement, the state of one
qubit can instantaneously affect other qubits.
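The superposition idea above can be sketched numerically. The toy `Qubit` class below is an illustration, not a real quantum API: it stores the two complex amplitudes of a single qubit and applies the Born rule to turn them into measurement probabilities.

```python
import math
import random

# Toy model: a qubit is a pair of complex amplitudes (alpha, beta),
# normalized so that |alpha|^2 + |beta|^2 = 1.
class Qubit:
    def __init__(self, alpha, beta):
        norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
        self.alpha, self.beta = alpha / norm, beta / norm

    def probabilities(self):
        # Born rule: probability of measuring |0> or |1>
        return abs(self.alpha) ** 2, abs(self.beta) ** 2

    def measure(self):
        p0, _ = self.probabilities()
        return 0 if random.random() < p0 else 1

# Equal superposition: "part zero and part one" until measured.
q = Qubit(1, 1)
p0, p1 = q.probabilities()
print(p0, p1)  # each is 0.5 up to floating-point rounding
```

Measuring collapses the superposition: `measure()` returns a definite 0 or 1, each with probability 0.5 for this state.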
Challenges of quantum computing
The three main challenges we'll look at include quantum decoherence,
error correction and scalability. Each is a major hurdle on the road to
quantum computing, and must be overcome if the technology is to reach full
potential.
Optical computing
Optical computing or photonic computing uses light waves produced by
lasers or incoherent sources for data processing , data storage or data
communication for computing. For decades, photons have shown promise to
enable a higher bandwidth than the electrons used in conventional computers.
Optical computing (also known as optoelectronic computing and photonic
computing) is a computation paradigm that uses photons (small packets of light
energy) produced by laser/diodes for digital computation .Photons have proved
to give us a higher bandwidth than the electrons we use in conventional
computer systems.
Advantages and disadvantages:
Advantages:
 Low power loss.
 Improved reliability.
 Cable flexibility, durability, and low maintenance.
 Thinner and lighter weight.
 Longer distances.
Disadvantages:
 Difficult to splice.
 Cannot be bent at sharp angles.
 Highly susceptible to physical damage.
Advantages of optical computing: high density; small size; minimal
junction heating; high speed; dynamic scaling and reconfigurability into
smaller/larger networks and topologies; vast parallel computing capability; and
AI applications.
Optical computing can be used to perform a variety of operations on signals.
Several processing operations have proven to be more efficient with optical
techniques than their electronic counterparts. These include Fourier
transforms, convolution, correlation, and spectrum analysis.
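The Fourier transform mentioned above is the canonical example of an operation optics performs naturally (a lens can compute it in a single pass of light). For comparison, here is the same operation computed digitally; this is a plain discrete Fourier transform in pure Python, included purely for illustration.

```python
import cmath

def dft(signal):
    # Discrete Fourier transform: spectrum[k] = sum_t signal[t] * e^(-2*pi*i*k*t/n)
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A constant signal has all of its energy in the zero-frequency bin.
spectrum = dft([1, 1, 1, 1])
print([round(abs(x), 6) for x in spectrum])  # [4.0, 0.0, 0.0, 0.0]
```

Convolution and correlation reduce to pointwise multiplication in this transformed domain, which is why optical Fourier processing makes those operations cheap.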
Nano computing
Nanocomputing describes computing that uses extremely small, or
nanoscale, devices (one nanometer [nm] is one billionth of a meter). In 2001,
state-of-the-art electronic devices could be as small as about 100 nm, which
is about the same size as a virus.
Nanocomputers process and perform computations like standard computers
but are sized in nanometers. With fast-moving nanotechnology, nanocomputers
will eventually scale down to the atomic level. Nanorobots, or nanobots, will
be controlled and managed by nanocomputers.
Applications of Nanocomputing:
Greater understanding of disease development through improved
computational models. Improved transportation logistics across the world.
Improved financial modeling to avoid economic downturns. The development
of driverless cars with the ability to process real world driving problems faster
than human drivers.
Characteristics:
The National Institute of Standards and Technology (NIST) lists five essential
characteristics of cloud computing: on-demand self-service, broad network
access, resource pooling, rapid elasticity, and measured service.
Advantages and disadvantages:
Nanotechnology offers the potential for new and faster kinds of computers,
more efficient power sources and life-saving medical treatments. Potential
disadvantages include economic disruption and possible threats to security,
privacy, health and the environment.
Different types of Nano computers
The various types of nanocomputers, such as nanoelectronic, nanomechanical,
nanochemical/biochemical, and quantum computers, will each help make tasks
simpler and easier to perform.
HIGH PERFORMANCE COMPUTING
High-Performance Computing (HPC) involves the use of advanced
computing resources to solve complex problems and perform large-scale
simulations. Here are some key notes on high-performance computing:
Definition of HPC:
HPC refers to the use of powerful computing systems that deliver high
performance and processing speeds to solve intricate problems.
Parallel Processing:
HPC systems often rely on parallel processing, where multiple processors
work simultaneously on different parts of a problem.
Types of parallel processing include task parallelism, data parallelism, and
pipeline parallelism.
Clusters and Supercomputers:
HPC systems are often organized as clusters of interconnected computers or
as supercomputers, which consist of thousands of processors working in
tandem.
Examples of supercomputers include Summit, Sierra, and Fugaku.
Distributed Computing:
HPC often involves distributed computing, where tasks are spread across
multiple interconnected machines for improved performance.
Performance Metrics:
Performance is measured in FLOPS (Floating Point Operations Per Second)
or MIPS (Million Instructions Per Second).
Other metrics include throughput, latency, and scalability.
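Two standard derived quantities, speedup and parallel efficiency, make these metrics concrete. A minimal sketch (the timing figures are made up for illustration):

```python
def speedup(t_serial, t_parallel):
    # how many times faster the parallel run is than the serial run
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    # fraction of ideal linear speedup actually achieved
    return speedup(t_serial, t_parallel) / n_procs

# e.g. a job taking 100 s on one processor and 25 s on 8 processors
print(speedup(100, 25), efficiency(100, 25, 8))  # 4.0 0.5
```

An efficiency of 0.5 means half the added processing power was lost to overheads such as communication and load imbalance.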
Parallel Programming Models:
Common parallel programming models include MPI (Message Passing
Interface) and OpenMP.
MPI is used for distributed memory systems, while OpenMP is suitable for
shared-memory architectures.
High-Performance File Systems:
HPC often requires high-speed, parallel file systems to handle large volumes
of data.
Lustre and GPFS (IBM Spectrum Scale) are examples of file systems used in
HPC environments.
Applications of HPC:
HPC is utilized in various fields, including weather forecasting, molecular
modeling, financial modeling, seismic exploration, and scientific research.
GPU Acceleration:
Graphics Processing Units (GPUs) are commonly used in HPC for their
parallel processing capabilities, particularly in scientific simulations and
machine learning.
Big Data and HPC:
HPC and big data technologies often overlap, with HPC providing the
computational power to analyze and process large datasets.
Energy Efficiency:
Energy efficiency is a significant concern in HPC, with efforts to design more
power-efficient systems and optimize algorithms for better performance per
watt.
HPC Challenges:
Challenges include scalability, load balancing, data movement, fault
tolerance, and software optimization for parallel architectures.
Cloud-based HPC:
Cloud computing platforms offer HPC resources on-demand, providing
flexibility and cost-effectiveness for users with varying computational needs.
Quantum Computing:
Quantum computing is an emerging field that has the potential to
revolutionize HPC by solving certain problems exponentially faster than
classical computers.
HPC Standards:
Organizations like the TOP500 maintain lists of the most powerful
supercomputers, providing a reference for the global HPC community.
These notes provide a broad overview of key concepts in high-performance
computing. Further exploration of each topic can lead to a deeper
understanding of the specific technologies and challenges in the field.
PARALLEL COMPUTING
Parallel computing involves the simultaneous execution of multiple tasks or
processes to solve a computational problem more efficiently. Here are some
key notes on parallel computing:
Parallelism Types:
Task Parallelism: Dividing a problem into smaller tasks or sub-tasks that can
be executed independently.
Data Parallelism: Distributing the data across multiple processors and
performing the same operation on each portion simultaneously.
Parallel Architectures:
Shared Memory Systems: Processors share a common address space,
allowing them to communicate by reading and writing to shared memory.
Distributed Memory Systems: Processors have separate address spaces and
communicate by message passing.
Hybrid Systems: Combine shared and distributed memory architectures for
improved performance.
Parallel Programming Models:
Message Passing Interface (MPI): A widely used standard for writing parallel
programs that run on distributed memory systems. Processes communicate
through message passing.
OpenMP: Used for shared memory systems, it employs compiler directives to
parallelize code by dividing it into threads that execute in parallel.
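The difference between the two models can be sketched with Python threads: a lock-protected shared counter mimics the OpenMP shared-memory style, while a queue of messages mimics the MPI message-passing style. This is an analogy, not actual MPI or OpenMP code.

```python
import threading
import queue

# Shared-memory style (OpenMP-like): threads update one shared variable.
counter = 0
lock = threading.Lock()

def shared_worker(n):
    global counter
    for _ in range(n):
        with lock:          # the lock prevents a race condition on the shared counter
            counter += 1

# Message-passing style (MPI-like): workers send partial results as messages.
results = queue.Queue()

def mp_worker(n):
    results.put(n)          # no shared state; communicate by sending a message

threads = [threading.Thread(target=shared_worker, args=(1000,)) for _ in range(4)]
threads += [threading.Thread(target=mp_worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total_mp = sum(results.get() for _ in range(4))
print(counter, total_mp)  # both styles arrive at 4000
```

The shared-memory version needs synchronization to stay correct; the message-passing version avoids shared state entirely, which is why it scales to distributed-memory machines.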
Parallel Algorithms:
Divide and Conquer: Break a problem into smaller sub-problems, solve them
independently, and combine the solutions.
Pipeline Processing: Divide a task into a series of sub-tasks, and each sub-task is performed by a different processor in a pipeline fashion.
Matrix Multiplication: Efficient parallel algorithms exist for common
operations like matrix multiplication.
Scalability:
A parallel algorithm is considered scalable if it can handle larger problem
sizes as more processors are added without a significant decrease in
performance.
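Amdahl's law quantifies this scalability limit: if any fraction of the program is serial, speedup saturates no matter how many processors are added. A short sketch:

```python
def amdahl_speedup(parallel_fraction, n_procs):
    # the serial fraction caps speedup regardless of processor count
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# with 95% of the work parallelizable, speedup can never exceed 1/0.05 = 20
for p in (8, 64, 1024):
    print(p, round(amdahl_speedup(0.95, p), 2))
```

Even at 1024 processors the speedup stays below 20, which is why reducing the serial fraction matters more than adding hardware.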
Load Balancing:
Distributing the workload evenly among processors to ensure that each
processor completes its tasks at a similar rate, avoiding idle time.
Synchronization:
Managing the order of execution and coordination between parallel tasks to
avoid race conditions and ensure correct results.
Parallel Computing Platforms:
Multi-core Processors: Systems with multiple processing cores on a single
chip.
Graphics Processing Units (GPUs): High-performance parallel processors
designed for rendering graphics but widely used in scientific and parallel
computing.
Cluster Computing: Interconnected computers working together, often using
message passing or other communication mechanisms.
Parallel Libraries:
Libraries such as Intel Threading Building Blocks (TBB), Parallel Processing
Extensions (PPE), and CUDA (for GPU programming) provide tools to
simplify parallel programming.
Challenges in Parallel Computing:
Communication Overhead: The time and resources spent on inter-processor
communication.
Load Imbalance: Non-uniform distribution of work among processors.
Dependency Management: Ensuring correct order and synchronization of
parallel tasks.
Scalability Limits: Some algorithms may not scale efficiently as the number
of processors increases.
Parallel Debugging and Profiling:
Specialized tools and techniques are needed for debugging and profiling
parallel programs due to the added complexity of concurrent execution.
Parallel Computing in Industry:
Used in various fields, including scientific simulations, financial modeling,
data analysis, artificial intelligence, and more.
Understanding parallel computing concepts and mastering parallel
programming models is crucial for harnessing the power of modern
computing architectures and achieving improved computational efficiency.
DISTRIBUTED COMPUTING
Distributed computing involves the use of multiple interconnected computers
to work together on a task or problem. Here are some key notes on distributed
computing:
Definition of Distributed Computing:
Distributed computing refers to the use of a network of computers to solve a
single problem, dividing the workload among multiple machines.
Characteristics of Distributed Systems:
Concurrency: Multiple processes or tasks can be executed simultaneously.
Autonomy: Each node in the system operates independently.
Fault Tolerance: Systems can continue to operate in the presence of failures.
Scalability: Easily expandable to handle increased workload.
Architectures of Distributed Systems:
Client-Server Architecture: Clients request services, and servers provide
those services.
Peer-to-Peer (P2P) Architecture: Nodes communicate directly with each
other, sharing resources and responsibilities.
Communication in Distributed Systems:
Remote Procedure Call (RPC): Enables one program to execute code on
another machine as if it were a local procedure call.
Message Passing: Nodes communicate by sending messages to each other,
often using protocols like TCP/IP or UDP.
Distributed File Systems:
Systems like Hadoop Distributed File System (HDFS) and Google File
System (GFS) allow distributed storage and retrieval of files.
Consistency and Replication:
Consistency Models: Define the rules for how updates become visible to
other nodes in the system.
Replication: Copying data to multiple nodes to improve fault tolerance and
performance.
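One common replication scheme uses quorums: with N replicas, writing to W nodes and reading from R nodes guarantees overlap whenever W + R > N, so a read always sees the latest write. The toy store below is a hypothetical illustration, not a production protocol:

```python
class Node:
    def __init__(self):
        self.data = {}

class ReplicatedStore:
    # N = 3 replicas, W = 2, R = 2: W + R > N guarantees read/write overlap
    def __init__(self, n=3, w=2, r=2):
        self.nodes = [Node() for _ in range(n)]
        self.w, self.r = w, r
        self.version = 0

    def write(self, key, value):
        self.version += 1
        for node in self.nodes[:self.w]:       # write acknowledged by W replicas
            node.data[key] = (self.version, value)

    def read(self, key):
        # query R replicas and return the value with the highest version
        replies = [node.data[key] for node in self.nodes[-self.r:] if key in node.data]
        return max(replies)[1] if replies else None

store = ReplicatedStore()
store.write("x", "a")
store.write("x", "b")
print(store.read("x"))  # "b": the overlapping replica holds the latest version
```

Here the write set and read set share at least one node, so versioned reads never return stale data even though no single replica holds everything.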
Load Balancing:
Distributing the workload evenly among nodes to avoid performance
bottlenecks and ensure efficient resource utilization.
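A simple load-balancing heuristic is Longest Processing Time first: sort tasks by cost, then repeatedly assign the next task to the least-loaded worker. A sketch with made-up task costs:

```python
def assign_lpt(task_costs, n_workers):
    # Longest Processing Time first: largest tasks placed first,
    # each on the currently least-loaded worker
    loads = [0] * n_workers
    placement = {}
    for i, cost in sorted(enumerate(task_costs), key=lambda p: -p[1]):
        w = loads.index(min(loads))
        loads[w] += cost
        placement[i] = w
    return loads, placement

loads, placement = assign_lpt([5, 3, 2, 7, 1], 2)
print(loads)  # [9, 9]: both workers finish at the same time
```

Placing the largest tasks first leaves the small ones to fill in the gaps, which keeps the per-worker totals close and minimizes idle time.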
Fault Tolerance:
Techniques like replication, checkpointing, and recovery mechanisms are
used to ensure that the system can continue functioning in the presence of
failures.
Concurrency Control:
Managing access to shared resources to prevent conflicts and ensure data
integrity in a distributed environment.
Security in Distributed Systems:
Encryption, authentication, and authorization mechanisms are crucial for
ensuring the security of data and communication in distributed systems.
Middleware:
Software that provides common services and facilitates communication
between different software applications in a distributed system.
Grid Computing:
Extends distributed computing to involve the coordinated use of resources
from multiple administrative domains for large-scale computing tasks.
Cloud Computing:
A model of distributed computing that provides on-demand access to a shared
pool of computing resources over the internet.
Edge Computing:
Distributes computing resources closer to the location where they are needed,
reducing latency and improving performance for certain applications.
Challenges in Distributed Computing:
Consistency and Coherence: Ensuring that all nodes see a consistent view of
the system.
Communication Overhead: Managing communication between nodes
efficiently.
Scalability: Ensuring that the system can grow in size without sacrificing
performance.
Applications of Distributed Computing:
Used in various fields, including scientific research, financial modeling, data
analytics, content delivery networks (CDNs), and more.
Understanding the principles of distributed computing is crucial for designing
and implementing scalable, fault-tolerant, and efficient systems in a
networked environment.
CLUSTER COMPUTING
Cluster computing involves the interconnection of multiple computers
(nodes) to work together as a single system to perform computational tasks.
Here are some key notes on cluster computing:
Definition of Cluster Computing:
Cluster computing refers to the use of multiple interconnected computers that
work together as a unified system to solve complex computational problems.
Cluster Types:
Homogeneous Cluster: All nodes have similar hardware and software
configurations.
Heterogeneous Cluster: Nodes have diverse hardware or software
configurations.
Components of a Cluster:
Nodes: Individual computers or servers that make up the cluster.
Interconnect: Network infrastructure that allows communication between
nodes.
Middleware: Software that manages communication and coordination
between nodes.
Communication in Clusters:
Message Passing: Nodes communicate by sending messages to each other
using protocols like MPI (Message Passing Interface).
Shared Memory: Nodes access a common memory space, allowing them to
share data directly.
Parallel Processing in Clusters:
Tasks are divided among nodes, and each node processes its portion
simultaneously, allowing for parallel execution.
High-Performance Computing (HPC) Clusters:
Used for computationally intensive tasks such as scientific simulations, data
analysis, and simulations.
Often employ specialized hardware like GPUs for parallel processing.
Load Balancing:
Distributing computational tasks evenly among cluster nodes to ensure
efficient resource utilization.
Scalability:
Clusters can be scaled by adding more nodes to handle increased
computational workloads.
Fault Tolerance:
Redundancy and replication techniques are used to ensure that the system can
continue functioning in the presence of hardware or software failures.
Cluster File Systems:
Specialized file systems like Lustre and GPFS (IBM Spectrum Scale) are
used to provide high-performance and parallel access to data stored across the
cluster.
Job Scheduling and Resource Management:
Software tools, such as Slurm, Torque, or Kubernetes, manage job scheduling
and resource allocation in a cluster environment.
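A toy first-come-first-served scheduler illustrates the core idea behind such tools (this sketch is not Slurm's actual algorithm): jobs request cores, and the job at the head of the queue waits until some node has enough free cores.

```python
from collections import deque

def fifo_schedule(jobs, node_cores):
    # jobs: list of (name, cores_needed); node_cores: free cores on each node
    free = list(node_cores)
    pending, placed = deque(jobs), {}
    while pending:
        name, cores = pending[0]
        node = next((i for i, f in enumerate(free) if f >= cores), None)
        if node is None:
            break  # FIFO: the head job waits; later jobs do not jump the queue
        pending.popleft()
        free[node] -= cores
        placed[name] = node
    return placed, list(pending)

placed, waiting = fifo_schedule([("a", 4), ("b", 8), ("c", 2)], [8, 8])
print(placed, waiting)  # {'a': 0, 'b': 1, 'c': 0} []
```

Real schedulers add priorities and backfilling (letting small jobs run in gaps without delaying the head job), but the queue-plus-free-resources loop is the same.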
Cluster Topologies:
Star Topology: All nodes connected to a central hub.
Ring Topology: Nodes connected in a circular fashion.
Mesh Topology: Each node connected to every other node.
Virtualization in Clusters:
Virtual machines (VMs) or containers can be used to create isolated
environments on cluster nodes for better resource utilization and flexibility.
Cluster Computing vs. Grid Computing:
Cluster computing involves tightly coupled nodes working together within a
single organization, while grid computing extends computing resources
across different administrative domains.
Applications of Cluster Computing:
Used in scientific research, financial modeling, simulations, data analytics,
and other computationally demanding tasks.
Cloud-Based Cluster Computing:
Cloud platforms offer cluster computing as a service, allowing users to
deploy and manage clusters on-demand without investing in physical
hardware.
Understanding the principles of cluster computing is essential for designing
and managing high-performance computing environments efficiently.
GRID COMPUTING
Grid computing is a distributed computing paradigm that allows the sharing
and coordinated use of resources across multiple administrative domains.
Here are key notes on grid computing:
Definition of Grid Computing:
Grid computing involves the pooling of resources, such as computing power,
storage, and applications, from multiple locations to solve complex problems.
Characteristics of Grid Computing:
Distributed Resources: Resources are distributed across different
geographical locations.
Coordination: Resources are coordinated to work together on a common task.
Heterogeneity: Grids often involve diverse hardware and software
configurations.
Virtual Organizations: Collaboration occurs across different administrative
domains, forming virtual organizations.
Components of Grid Computing:
Resources: Include computing power, storage, data, and applications.
Middleware: Software that facilitates communication, resource coordination,
and user interaction.
Grid Fabric: The physical infrastructure connecting the resources.
Grid Architecture:
Resource Layer: Physical resources such as computers, storage, and
networks.
Fabric Layer: Middleware that manages resource access and communication.
Collective Layer: Coordinates the use of resources for specific applications or
tasks.
Application Layer: User-facing layer for developing and running applications
on the grid.
Grid Middleware:
Globus Toolkit: An open-source toolkit providing essential services for grid
computing, including authentication, resource management, and
communication.
UNICORE (Uniform Interface to Computing Resources): Another
middleware system for grid computing, focusing on easy and efficient access
to distributed resources.
Grid Standards:
Standards like the Open Grid Services Architecture (OGSA) and the Open
Grid Services Infrastructure (OGSI) provide guidelines for developing
interoperable grid services.
Job Scheduling and Resource Management:
Grid computing environments use job schedulers and resource management
systems to allocate resources efficiently among different tasks and users.
Data Management:
Grids often involve large-scale data management, with data distributed across
multiple sites and accessed as needed.
Security in Grid Computing:
Security measures, including authentication, authorization, and encryption,
are crucial for protecting data and ensuring the integrity of grid operations.
Fault Tolerance:
Grid systems incorporate mechanisms to handle failures and ensure the
continuous operation of distributed applications.
Grid Computing vs. Cluster Computing:
While cluster computing involves tightly connected nodes within a single
organization, grid computing extends resources across multiple organizations
and administrative domains.
Applications of Grid Computing:
Used in scientific research, data-intensive applications, simulations, and other
computationally demanding tasks that require collaboration and access to
distributed resources.
Desktop Grid Computing:
Involves harnessing idle computing resources from individual desktop
computers to form a large-scale grid for parallel processing tasks.
Future Trends:
Cloud computing has absorbed some concepts from grid computing, and the
two paradigms often intersect in modern distributed computing environments.
Understanding the principles of grid computing is essential for researchers
and organizations aiming to leverage distributed resources for large-scale and
collaborative computing tasks.
UNIT-II
Principles of Cloud Computing
The term cloud is usually used to represent the internet but it is not just restricted to
the Internet. It is virtual storage where the data is stored in third-party data centers.
Storing, managing, and accessing data present in the cloud is typically referred to
as cloud computing. It is a model for distributing information technology in order to
gain access to resources from the internet without depending on a direct connection
with the server. It uses various web-based tools, and applications to easily receive
resources.
Accessing resources over the internet makes these resources available anytime and
anywhere thereby allowing the user to work remotely. In general, cloud computing
is nothing but the use of computing resources such as hardware and software that are
distributed as services across the network. It centralizes the data storage, processing,
and bandwidth which in turn provide efficient computing to the user. The services
are made available by a cloud vendor based on pay-per-use.
In order to serve large computing resources for solving a single problem, the concept
of computing escalated from grid computing to cloud computing. This computing
makes use of potential ideas of computing power in the form of utility. The main
differences between grid and cloud are that the former substantiates the use of
multiple computers concurrently for solving a specific application. On the other
hand, cloud computing substantiates the use of multiple resources which includes
computing resources in order to serve unified service to the end-user.
Typically, cloud computing holds IT and business resources including server’s
storage, network, applications, and processes. It provides the user needs and
workload dynamically. Apart from supporting the grid, the cloud also supports a
non-grid environment including three-tier web architecture.
Five Essential Characteristics
The essential characteristics of cloud computing define the features required
for a deployment to count as cloud computing. If any of these defining features
is missing, it is not cloud computing. Let us now discuss these essential
features:
1. On-demand Service
Customers can self-provision computing resources like server time, storage,
network, and applications as per their demands, without human interaction
with the cloud service provider.
2. Broad Network Access
Computing resources are available over the network and can be accessed using
heterogeneous client platforms like mobiles, laptops, desktops, PDAs, etc
3. Resource Pooling
Computing resources such as storage, processing, network, etc., are pooled to serve
multiple clients. For this, cloud computing adopts a multitenant model where the
computing resources of service providers are dynamically assigned to the customer
on their demand.
The customer is not even aware of the physical location of these resources. However,
at a higher level of abstraction, the location of resources can be specified.
4. Rapid Elasticity
Computing resources often appear limitless to the cloud customer because
cloud resources can be rapidly and elastically provisioned and released,
scaling outward and inward commensurate with demand.
Computing resources can be purchased at any time and in any quantity depending
on the customers' demand.
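Rapid elasticity is often implemented as an autoscaling rule: provision just enough replicas to cover the current load, within configured bounds. A minimal sketch (the capacity figures are illustrative):

```python
import math

def desired_replicas(load, capacity_per_replica, min_r=1, max_r=10):
    # provision just enough replicas to cover the offered load, within bounds
    need = math.ceil(load / capacity_per_replica)
    return max(min_r, min(max_r, need))

# scale out under heavy load, release resources when load drops
print(desired_replicas(450, 100))  # 5
print(desired_replicas(50, 100))   # 1
```

The same rule releases resources automatically when demand falls, which is what makes pay-per-use pricing workable for the customer.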
5. Measured Service
Monitoring and control of computing resources used by clients can be done by
implementing meters at some level of abstraction depending on the type of Service.
The resources used can be reported with metering capability, thereby providing
transparency between the provider and the customer.
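Metering plus pay-per-use billing can be sketched as a rate table applied to measured consumption. The resource names and prices below are hypothetical:

```python
# Hypothetical unit prices; real providers publish their own rate cards.
RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.02, "gb_egress": 0.09}

def monthly_bill(usage):
    # measured service: every metered resource contributes to the bill
    return round(sum(RATES[resource] * amount for resource, amount in usage.items()), 2)

print(monthly_bill({"cpu_hours": 720, "gb_storage_month": 50, "gb_egress": 10}))  # 37.9
```

Because both sides can inspect the meter readings and the rate table, the bill is transparent to provider and customer alike.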
Cloud Deployment Model
As the name suggests, the cloud deployment model refers to how computing
resources are acquired on location and provided to the customers. Cloud computing
deployments can be classified into four different forms as below:
1. Private Cloud
A cloud environment deployed for the exclusive use of a single organization is a
private cloud. An organization can have multiple cloud users belonging to different
business units of the same organization.
Private cloud infrastructure can be located either on or off the organization's
premises. The organization may own and manage the private cloud itself,
assign this responsibility to a third party (a cloud provider), or use a
combination of both.
2. Public Cloud
The cloud infrastructure deployed for the use of the general public is the public
cloud. This public cloud model is deployed by cloud vendors, Govt. organizations,
or both.
The public cloud is typically deployed at the cloud vendor's premises.
3. Community Cloud
A cloud infrastructure shared by multiple organizations that form a community and
share common interests is a community cloud. Community Cloud is owned,
managed, and operated by organizations or cloud vendors, i.e., third parties.
The community cloud may be deployed on the premises of the community
organizations or on the cloud provider's premises.
4. Hybrid Cloud
When a cloud infrastructure combines two or more distinct cloud models, such as
private, public, and community, it is a hybrid cloud.
While these distinct cloud structures remain unique entities, they are bound
together by specialized technology that enables data and application portability.
UNIT-III
Architecture of Cloud Computing
Cloud computing is one of the most in-demand technologies of the
current time, and it is giving a new shape to every organization by
providing on-demand virtualized services and resources. From small to
medium and medium to large, organizations use cloud computing services
to store information and access it from anywhere, at any time, with only
an internet connection. In this unit, we will look at the internal
architecture of cloud computing.
Transparency, scalability, security, and intelligent monitoring are some of the
most important requirements that every cloud infrastructure should satisfy.
Ongoing research into other important requirements is helping cloud computing
systems develop new features and strategies capable of
providing more advanced cloud solutions.
Cloud Computing Architecture:
The cloud architecture is divided into two parts:
1. Frontend
2. Backend
The figure below represents an internal architectural view of cloud
computing.
The architecture of cloud computing combines both SOA (Service-Oriented
Architecture) and EDA (Event-Driven Architecture). Client
infrastructure, application, service, runtime cloud, storage, infrastructure,
management, and security are all components of the cloud computing
architecture.
1. Frontend :
The frontend of the cloud architecture refers to the client side of the cloud
computing system. It contains all the user interfaces and applications
that the client uses to access the cloud computing
services and resources. For example, a web browser is used to access the cloud
platform.
 Client Infrastructure – Client infrastructure is the part of the frontend
that contains the applications and user interfaces required to access
the cloud platform.
 In other words, it provides a GUI (Graphical User Interface) to interact
with the cloud.
2. Backend :
The backend refers to the cloud itself, which is used by the service provider. It
contains the resources, manages those resources, and provides
security mechanisms. It also includes large-scale storage, virtual
applications, virtual machines, traffic control mechanisms, deployment
models, etc.
1. Application –
The application in the backend refers to the software or platform that the
client accesses; it provides the service in the backend as per the client's
requirements.
2. Service –
Service in the backend refers to the three major types of cloud-based
services: SaaS, PaaS, and IaaS. This layer also manages which type of
service the user accesses.
3. Runtime Cloud –
The runtime cloud in the backend provides the execution and runtime
platform/environment for the virtual machines.
4. Storage –
Storage in the backend provides a flexible and scalable storage service and
the management of stored data.
5. Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software
components of the cloud, including servers, storage, network devices,
virtualization software, etc.
6. Management –
Management in the backend refers to the management of backend components
such as the application, service, runtime cloud, storage, infrastructure, and
security mechanisms.
7. Security –
Security in the backend refers to the implementation of different security
mechanisms that secure cloud resources, systems, files,
and infrastructure for end-users.
8. Internet –
The internet connection acts as the medium, or bridge, between the frontend
and the backend, establishing the interaction and communication between
them.
9. Database –
The database in the backend provides storage for structured data through
SQL and NoSQL databases. Examples of database services include Amazon RDS,
Microsoft Azure SQL Database, and Google Cloud SQL.
10. Networking –
Networking in the backend refers to services that provide the networking
infrastructure for applications in the cloud, such as load balancing, DNS,
and virtual private networks.
11. Analytics –
Analytics in the backend refers to services that provide analytics
capabilities for data in the cloud, such as data warehousing, business
intelligence, and machine learning.
Benefits of Cloud Computing Architecture :
 Makes the overall cloud computing system simpler.
 Helps meet data processing requirements.
 Helps in providing high security.
 Makes the system more modular.
 Results in better disaster recovery.
 Gives good user accessibility.
 Reduces IT operating costs.
 Provides a high level of reliability.
 Provides scalability.
The layers of cloud computing
Cloud computing is made up of a variety of layered elements, starting at the
most basic physical layer of storage and server infrastructure and working up
through the application and network layers. The cloud can be further divided
into different implementation models based on whether it's created internally,
outsourced or a combination of the two.
The three cloud layers are:
 Infrastructure cloud: Abstracts applications from servers and servers from
storage.
 Content cloud: Abstracts data from applications.
 Information cloud: Abstracts access from clients to data.
Anatomy of Cloud Computing
Provisioning and Configuration Module: It is the lowest level of the cloud and
typically resides on bare hardware (as firmware) or on top of the
hypervisor layer. Its function is to abstract the underlying hardware and
provide a standard mechanism to spawn instances of virtual machines on
demand. It also handles the post-configuration of the operating systems and
applications residing on the VMs.
Monitoring and Optimization: This layer handles the monitoring of all
services, storage, networking, and application components in the cloud. Based
on these statistics, it can perform routine functions that optimize the behavior
of the infrastructure components, and it provides relevant data to the cloud
administrator to further tune the configuration for maximum utilization
and performance.
Metering and Chargeback: This layer provides functions to measure the
usage of resources in the cloud. The metering module collects all the utilization
data per domain and per user. This module gives the cloud administrator enough
data to measure ongoing utilization of resources and to create invoices based
on usage on a periodic basis.
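The metering-and-chargeback flow described above can be sketched in a few lines of Python. This is a minimal illustration only: the resource names, per-unit rates, and record format are hypothetical, not those of any real provider.

```python
from collections import defaultdict

# Hypothetical per-unit rates for each metered resource type.
RATES = {"vm_hours": 0.05, "storage_gb_hours": 0.001, "network_gb": 0.01}

def aggregate_usage(records):
    """Sum raw utilization records per domain and per resource type."""
    totals = defaultdict(lambda: defaultdict(float))
    for domain, resource, amount in records:
        totals[domain][resource] += amount
    return totals

def invoice(records):
    """Turn aggregated usage into a per-domain charge for the billing period."""
    return {
        domain: round(sum(RATES[r] * amt for r, amt in usage.items()), 2)
        for domain, usage in aggregate_usage(records).items()
    }

records = [
    ("sales", "vm_hours", 720),           # one VM for a 30-day month
    ("sales", "storage_gb_hours", 72000),
    ("hr", "vm_hours", 360),
]
print(invoice(records))  # {'sales': 108.0, 'hr': 18.0}
```

A real metering module would collect these records continuously from the monitoring layer; the principle of aggregating per domain and applying a rate card is the same.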
Orchestration: Orchestration is central to cloud operations. It
converts requests from the service management layer and from the monitoring
and chargeback modules into appropriate action items, which are then submitted
to the provisioning and configuration module for final closure. Orchestration
updates the CMDB in the process.
Configuration Management Database (CMDB): This is a central
configuration repository in which all the metadata and configuration of the
different modules and resources are kept and updated in real time. The
repository can then be accessed by third-party software and integration
components using standard protocols such as SOAP. All updates to the CMDB
happen in real time as requests are processed in the cloud.
Cloud Lifecycle Management Layer (CLM): This layer coordinates
all the other layers in the cloud. All requests, internal and external, are
addressed to the CLM layer first. The CLM may internally route requests and
actions to other layers for further processing.
Service Catalog (SC): The service catalog is central to the definition of a cloud;
it defines what kinds of services the cloud is capable of providing and at what
cost to the end user. The SC is the first thing drafted before a cloud is
architected. The service management layer consults the SC before it processes
any request for a new resource.
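The catalog lookup that the service management layer performs can be sketched as follows. The catalog entries, offering names, and costs here are hypothetical placeholders used only to show the control flow.

```python
# Hypothetical service catalog: what the cloud can provide and at what cost.
SERVICE_CATALOG = {
    "small_vm":  {"cpus": 1, "ram_gb": 2,  "hourly_cost": 0.02},
    "medium_vm": {"cpus": 2, "ram_gb": 8,  "hourly_cost": 0.08},
    "large_vm":  {"cpus": 8, "ram_gb": 32, "hourly_cost": 0.30},
}

def handle_request(service_name):
    """The service management layer consults the catalog before acting."""
    offering = SERVICE_CATALOG.get(service_name)
    if offering is None:
        return {"status": "rejected", "reason": "not in service catalog"}
    # In a real cloud, an accepted request would be routed to orchestration,
    # which submits an action item to the provisioning module.
    return {"status": "accepted", "offering": offering}

print(handle_request("medium_vm")["status"])    # accepted
print(handle_request("gpu_cluster")["status"])  # rejected
```

Requests for anything outside the catalog never reach provisioning, which is exactly why the SC is drafted before the cloud is architected.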
Network Connectivity in Cloud Computing
Network connectivity in cloud computing refers to the ability of various
computing resources, such as virtual machines, storage, and applications,
to communicate with each other over a network. In a cloud computing
environment, these resources may be distributed across different physical
locations and data centers, so the connectivity between them is crucial for
the overall functionality and performance of cloud-based applications and services.
1. Virtual Networks: Cloud providers often offer virtual networking
solutions that allow users to create and configure their own networks
within the cloud infrastructure. This includes defining subnets, setting up
virtual private networks (VPNs), and managing routing tables.
2. Internet Connectivity: Cloud resources typically have access to the
internet, enabling communication with external services, clients, or other
cloud-based resources. This connectivity is essential for applications that
need to interact with users or external data sources.
3. Inter-Instance Communication: In a cloud environment, different
instances (virtual machines or containers) may need to communicate
with each other. Network connectivity facilitates the flow of data
between these instances, supporting scalable and distributed
applications.
4. Load Balancing: Cloud providers offer load balancing services to
distribute incoming network traffic across multiple instances to ensure
optimal resource utilization and prevent overloading of individual
resources.
5. Security and Isolation: Network security is a critical aspect of cloud
computing. Cloud providers implement security measures, such as
firewalls, security groups, and network access controls, to protect
resources from unauthorized access and attacks. Isolation between
different tenants (users or organizations) is also maintained to enhance
security.
6. Content Delivery Networks (CDNs): Cloud services often integrate
with CDNs to enhance the delivery of content by caching data at
strategically located servers. This improves the performance and reduces
latency for end-users.
7. Scalability: Cloud-based applications often scale horizontally by
adding more instances to handle increased load. Network connectivity
plays a crucial role in enabling communication between these
dynamically scaled instances.
8. Hybrid and Multi-Cloud Connectivity: Organizations may adopt a
hybrid or multi-cloud strategy, leveraging resources from different cloud
providers or combining on-premises and cloud infrastructure. Network
connectivity solutions allow these diverse environments to work
seamlessly together.
9. Monitoring and Management: Cloud providers offer tools and services
for monitoring and managing network resources. This includes real-time
monitoring of network performance, logging, and alerting to ensure
optimal operation.
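The load-balancing idea from item 4 above can be illustrated with a minimal round-robin scheduler. The instance addresses are made up for the example; real cloud load balancers also do health checks, weighting, and connection draining, which this sketch omits.

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests evenly across a pool of instances."""

    def __init__(self, instances):
        # cycle() loops over the pool forever, one instance per request.
        self._pool = itertools.cycle(instances)

    def route(self, request):
        instance = next(self._pool)
        return instance, request

# Hypothetical private IPs of three backend instances.
lb = RoundRobinBalancer(["10.0.1.10", "10.0.1.11", "10.0.1.12"])
targets = [lb.route(f"req-{i}")[0] for i in range(6)]
print(targets)  # each instance receives two of the six requests
```

This even spreading of traffic is what prevents any single instance from being overloaded while the others sit idle.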
Advantages of Network Connectivity in Cloud Computing:
1. Scalability: Cloud services can scale up or down based on demand, and
network connectivity plays a crucial role in enabling this dynamic
scaling.
2. Flexibility and Accessibility: Users can access cloud services from
anywhere with an internet connection, providing flexibility and
accessibility to resources.
3. Cost Efficiency: Cloud services often follow a pay-as-you-go model,
allowing users to pay only for the resources they consume. Efficient
network connectivity contributes to cost optimization.
4. Collaboration: Improved network connectivity facilitates collaboration
among geographically dispersed teams by enabling real-time data
sharing and communication.
5. Resilience and Redundancy: Cloud providers implement redundant
network architectures to ensure high availability and resilience against
failures.
Tools and Protocols for Network Connectivity:
1. Virtual Private Cloud (VPC): Platforms like Amazon Web Services
(AWS) provide VPC, allowing users to create isolated network
environments within the cloud.
2. Load Balancers: Tools like AWS Elastic Load Balancing distribute
incoming network traffic across multiple servers to ensure efficient
resource utilization.
3. Virtual Private Network (VPN): VPNs establish secure connections
over the internet, enabling users to access cloud resources privately.
4. Direct Connect: Services like AWS Direct Connect provide dedicated
network connections from on-premises data centers to the cloud,
enhancing performance and security.
5. Content Delivery Networks (CDN): CDNs, such as Cloudflare or
Akamai, improve the delivery speed of content by caching it closer to
end-users.
6. Transmission Control Protocol/Internet Protocol (TCP/IP): The
foundational protocol suite for internet communication, ensuring reliable
data transmission.
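Item 6 above names TCP/IP as the foundation of reliable transmission. A self-contained sketch of a TCP exchange, using only the Python standard library over the loopback interface, shows the connect/send/receive cycle that all cloud network traffic ultimately rests on:

```python
import socket
import threading

def echo_server(server_sock):
    """Accept one connection and echo back whatever the client sends."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Bind to an ephemeral loopback port so the example is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# TCP guarantees the bytes arrive intact and in order.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"hello cloud")
    reply = client.recv(1024)

server.close()
print(reply)  # b'hello cloud'
```

The same socket primitives underlie the VPNs, load balancers, and CDN connections listed above; those tools add security, distribution, and caching on top of this basic reliable byte stream.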
Figure 1 Cloud Networking Architecture
Applications of Cloud Computing
The application of cloud computing spans across various industries and use cases, offering a wide
range of benefits such as scalability, cost-efficiency, flexibility, and reliability. Here are some common
applications of cloud computing:
1. Infrastructure as a Service (IaaS):
 Hosting websites and web applications: Businesses can leverage cloud infrastructure to host
their websites and web applications, ensuring scalability and high availability without the
need for maintaining physical servers.
 Development and testing environments: Cloud platforms provide on-demand infrastructure
for development and testing, allowing developers to quickly provision resources, build, and
test applications.
2. Platform as a Service (PaaS):
 Application development and deployment: PaaS offerings enable developers to build,
deploy, and manage applications without worrying about underlying infrastructure.
Developers can focus on coding, while the platform handles tasks such as provisioning,
scaling, and maintenance.
 Mobile app development: PaaS platforms provide tools and services for developing mobile
applications, including backend services, data storage, authentication, and push notifications.
3. Software as a Service (SaaS):
 Collaboration and productivity tools: Cloud-based SaaS applications such as Google
Workspace, Microsoft 365, and Slack provide collaboration tools, document management,
email services, and productivity suites accessible from anywhere with an internet connection.
 Customer Relationship Management (CRM): SaaS CRM platforms like Salesforce offer cloud-based
solutions for managing customer relationships, sales pipelines, marketing campaigns,
and customer support.
4. Data Storage and Management:
 Cloud storage: Cloud providers offer scalable and durable storage solutions for storing and
managing data, including object storage, file storage, and archival storage.
 Big Data analytics: Cloud platforms provide services for processing and analyzing large
datasets, including managed data warehouses, real-time analytics, and machine learning
tools.
5. Internet of Things (IoT):
 IoT platforms: Cloud-based IoT platforms enable the collection, storage, and analysis of data
from IoT devices. These platforms provide tools for device management, data processing, and
application integration, facilitating IoT application development and deployment.
6. E-commerce and Online Retail:
 E-commerce platforms: Cloud-based e-commerce platforms offer scalable solutions for
building and managing online stores, including product catalogs, inventory management,
payment processing, and order fulfillment.
7. Media Streaming and Content Delivery:
 Media streaming services: Cloud computing powers video streaming platforms such as
Netflix, Amazon Prime Video, and Disney+, delivering high-quality streaming content to users
worldwide.
 Content delivery networks (CDNs): CDNs use cloud infrastructure to distribute content
efficiently, reducing latency and improving the performance of websites, applications, and
media delivery.
Managing Cloud Computing
1. Resource Provisioning and Optimization:
 Provisioning resources: Managing the allocation of cloud resources such as virtual machines,
storage, and networking to meet the demands of applications and workloads.
 Resource optimization: Monitoring resource usage, identifying inefficiencies, and optimizing
resource allocation to minimize costs and maximize performance.
2. Security and Compliance:
 Data security: Implementing security measures such as encryption, access controls, and
identity management to protect data stored and transmitted in the cloud.
 Compliance management: Ensuring compliance with industry regulations and standards,
such as GDPR, HIPAA, and PCI DSS, by implementing appropriate security controls and audit
trails.
3. Monitoring and Performance Management:
 Monitoring infrastructure: Utilizing monitoring tools to track the performance and health of
cloud resources, including CPU utilization, memory usage, network traffic, and storage
capacity.
 Performance optimization: Analyzing monitoring data to identify performance bottlenecks
and optimize resource configurations for better scalability, availability, and responsiveness.
4. Backup and Disaster Recovery:
 Data backup: Implementing backup strategies to protect against data loss due to accidental
deletion, hardware failure, or cyberattacks, using cloud-based backup solutions or data
replication techniques.
 Disaster recovery planning: Developing and testing disaster recovery plans to ensure
business continuity in the event of natural disasters, outages, or other disruptive events,
leveraging cloud-based disaster recovery services and failover mechanisms.
5. Cost Management:
 Cost monitoring: Tracking cloud usage and costs across different services and accounts, using
cost management tools to analyze spending patterns and identify opportunities for
optimization.
 Cost optimization: Implementing cost-saving strategies such as rightsizing resources, utilizing
reserved instances, and leveraging spot instances to reduce cloud expenses while maintaining
performance and reliability.
6. Automation and Orchestration:
 Automation scripts: Creating scripts and workflows to automate routine tasks such as
resource provisioning, configuration management, and deployment, using tools like AWS
CloudFormation, Azure Resource Manager, or Terraform.
 Orchestration: Orchestrating complex workflows and applications across multiple cloud
services and environments, coordinating tasks and dependencies to ensure seamless
operation and scalability.
7. Governance and Policy Management:
 Policy enforcement: Establishing governance policies and controls to enforce security,
compliance, and operational standards across cloud environments, including access controls,
data retention policies, and service-level agreements (SLAs).
 Cloud governance frameworks: Implementing governance frameworks such as the Cloud
Controls Matrix (CCM) or the AWS Well-Architected Framework to define best practices and
guidelines for cloud adoption and management.
8. Training and Skill Development:
 Training programs: Providing training and certification programs for IT staff to acquire the
necessary skills and expertise in cloud technologies, architecture design, security practices,
and operational best practices.
 Continuous learning: Encouraging continuous learning and professional development to keep
pace with evolving cloud technologies and industry trends, leveraging online resources,
webinars, and community forums.
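The cost-management task in item 5 above often comes down to comparing pricing models for a given workload. The sketch below uses made-up hourly rates (not real provider prices) to show how rightsizing decisions like "reserve or stay on-demand" can be estimated:

```python
# Hypothetical hourly prices for one instance type (not real provider rates).
ON_DEMAND_HOURLY = 0.10
RESERVED_HOURLY = 0.06   # effective rate with a one-year commitment

def monthly_cost(hours_used, reserved=False):
    """Estimate the monthly bill for a single instance."""
    rate = RESERVED_HOURLY if reserved else ON_DEMAND_HOURLY
    return round(hours_used * rate, 2)

def should_reserve(hours_used):
    """A steady, always-on workload usually favors reserved capacity."""
    return monthly_cost(hours_used, reserved=True) < monthly_cost(hours_used)

always_on = 730  # roughly a full month of hours
print(monthly_cost(always_on))                 # 73.0
print(monthly_cost(always_on, reserved=True))  # 43.8
print(should_reserve(always_on))               # True
```

Real cost-management tools apply this comparison across every instance family and account, but the underlying arithmetic is this simple.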
Managing the Cloud Infrastructure
Managing cloud infrastructure means overseeing and controlling the resources, services, and
operations within a cloud computing environment. The main tasks are:
1. Resource Provisioning: Allocate and manage virtualized resources such as computing power, storage,
and network bandwidth based on demand.
2. Monitoring and Optimization: Continuously monitor the performance and utilization of cloud
resources to optimize efficiency and cost-effectiveness.
3. Security Management: Implement security measures to protect data, applications, and infrastructure
from threats and vulnerabilities. This includes access control, encryption, and compliance with
regulatory requirements.
4. Scalability: Ensure that the cloud infrastructure can scale up or down dynamically to accommodate
changing workload demands without disruption.
5. Fault Tolerance and High Availability: Design infrastructure to withstand failures and ensure high
availability of services through redundancy and failover mechanisms.
6. Cost Management: Control costs by monitoring usage, optimizing resource allocation, and leveraging
cost-effective pricing models offered by cloud providers.
7. Automation and Orchestration: Use automation tools and orchestration frameworks to streamline
deployment, configuration, and management tasks, reducing manual effort and human error.
8. Disaster Recovery: Develop and implement disaster recovery plans to minimize downtime and data
loss in case of unexpected events or outages.
9. Compliance and Governance: Adhere to regulatory requirements and internal policies related to data
privacy, security, and compliance while managing cloud infrastructure.
10. Vendor Relationship Management: Establish and maintain relationships with cloud service providers,
ensuring alignment with business objectives, service level agreements (SLAs), and support needs.
Figure 2 Managing Cloud Infrastructure
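The fault-tolerance principle in item 5 above is often implemented as a retry loop with exponential backoff when calling cloud APIs that can fail transiently (throttling, brief network errors). The sketch below is a generic pattern; the flaky() function and delay values are hypothetical stand-ins, not any real provider's API.

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.01):
    """Retry a transiently failing operation with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except OSError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Delay doubles each attempt; random jitter avoids synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Hypothetical stand-in for a cloud API call that fails twice, then succeeds.
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise OSError("transient network error")
    return "ok"

result = call_with_retries(flaky)
print(result)  # ok
```

The backoff keeps a brief outage from turning into a flood of failing requests, which is the same reasoning behind the redundancy and failover mechanisms listed above.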
Managing the Cloud Application
"Managing the Cloud Application" refers to the process of overseeing and maintaining software
applications deployed on cloud infrastructure. Here are some key aspects of managing cloud
applications:
1. Deployment and Configuration: Ensure that applications are correctly deployed and configured in the
cloud environment, including setting up necessary dependencies, environment variables, and
networking configurations.
2. Monitoring and Performance Optimization: Monitor application performance metrics such as
response times, resource utilization, and error rates to identify bottlenecks and optimize
performance. Use tools like application performance monitoring (APM) solutions to gain insights into
application behavior.
3. Scalability and Elasticity: Design applications to scale horizontally or vertically based on changing
workload demands. Utilize auto-scaling capabilities provided by cloud platforms to automatically add
or remove resources as needed.
4. High Availability and Fault Tolerance: Implement redundancy and failover mechanisms to ensure
continuous availability of applications, even in the event of infrastructure failures. Use features like
load balancing and multi-region deployment to enhance fault tolerance.
5. Security and Compliance: Implement security best practices to protect applications and data from
security threats. This includes encryption, access control, authentication, and compliance with
regulatory requirements such as GDPR or HIPAA.
6. Backup and Disaster Recovery: Set up regular backups of application data and implement disaster
recovery plans to minimize downtime and data loss in the event of disasters or outages. Utilize
features like geo-replication and automated backups provided by cloud providers.
7. Cost Management: Optimize costs associated with running cloud applications by right-sizing
resources, leveraging cost-effective pricing models, and monitoring usage patterns. Use tools like cost
management dashboards to track spending and identify cost-saving opportunities.
8. Continuous Integration and Deployment (CI/CD): Implement CI/CD pipelines to automate the
process of building, testing, and deploying application updates. This ensures rapid and reliable
delivery of new features and bug fixes to production environments.
9. Compliance and Governance: Adhere to regulatory compliance requirements and internal
governance policies when managing cloud applications. Ensure that data privacy, security, and
compliance standards are maintained throughout the application lifecycle.
10. Vendor Relationship Management: Maintain effective communication and collaboration with cloud
service providers to address any issues or concerns related to application management. Stay informed
about platform updates, service changes, and support options offered by the cloud provider.
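The auto-scaling behavior in item 3 of the list above is commonly driven by a target-tracking rule: size the fleet so that average utilization moves toward a target. This sketch shows the arithmetic only; the thresholds and limits are illustrative, and real cloud auto-scalers add cooldown periods and health checks.

```python
def desired_instances(current, avg_cpu, target_cpu=0.60, min_n=2, max_n=20):
    """Target-tracking rule: scale so average CPU moves toward the target."""
    if current == 0:
        return min_n
    wanted = round(current * avg_cpu / target_cpu)
    # Clamp to the configured fleet-size limits.
    return max(min_n, min(max_n, wanted))

print(desired_instances(current=4, avg_cpu=0.90))  # 6 -> scale out
print(desired_instances(current=4, avg_cpu=0.30))  # 2 -> scale in
print(desired_instances(current=4, avg_cpu=0.60))  # 4 -> steady
```

The min/max clamp is what keeps scaling both safe (never below a redundant minimum) and affordable (never above a budgeted maximum).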
UNIT-IV
Software as a Service (SaaS)
SaaS is also known as "On-Demand Software". It is a software
distribution model in which services are hosted by a cloud service
provider. These services are available to end-users over the internet,
so the end-users do not need to install any software on their devices
to access them.
There are the following services provided by SaaS providers -
Business Services
The SaaS provider offers various business services to help start up a
business. SaaS business services include ERP (Enterprise Resource
Planning), CRM (Customer Relationship Management), billing,
and sales.
Document Management
SaaS document management is a software application offered by a
third party (SaaS providers) to create, manage, and track electronic
documents.
Example: Slack, Samepage, Box, and Zoho Forms.
Social Networks
As we all know, social networking sites are used by the general
public, so many social networking service providers use SaaS for
convenience and to handle the general public's information.
Mail Services
To handle the unpredictable number of users and the load on e-mail
services, many e-mail providers offer their services using SaaS.
Advantages of SaaS cloud computing layer
1. SaaS is easy to buy
SaaS pricing is based on a monthly or annual subscription fee,
so it allows organizations to access business functionality at a low cost,
which is less than that of licensed applications.
Unlike traditional software, which is sold under a license with
an up-front cost (and often an optional ongoing support fee), SaaS
providers generally price their applications using a subscription fee,
most commonly a monthly or annual fee.
2. One to Many
SaaS services are offered on a one-to-many model, meaning a single
instance of the application is shared by multiple users.
3. Less hardware required for SaaS
The software is hosted remotely, so organizations do not need to
invest in additional hardware.
4. Low maintenance required for SaaS
Software as a service removes the need for installation, set-up, and
daily maintenance by the organization. The initial set-up cost for SaaS
is typically lower than for enterprise software. SaaS vendors price
their applications based on usage parameters, such as the number of
users using the application, which makes SaaS easy to monitor and
enables automatic updates.
5. No special software or hardware versions required
All users have the same version of the software and typically
access it through a web browser. SaaS reduces IT support costs by
outsourcing hardware and software maintenance and support to the SaaS
provider.
6. Multidevice support
SaaS services can be accessed from any device such as desktops,
laptops, tablets, phones, and thin clients.
7. API Integration
SaaS services easily integrate with other software or services
through standard APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider over
an internet connection, so there is no need for any client-side
software installation.
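The pricing advantage described in point 1 above can be made concrete with a small comparison of an up-front licensed purchase against a subscription. All figures here are hypothetical examples, not real product prices:

```python
# Hypothetical figures for comparing licensed software with a SaaS subscription.
LICENSE_UPFRONT = 5000.0        # one-time purchase, per deployment
LICENSE_SUPPORT_YEARLY = 900.0  # optional ongoing support fee
SAAS_MONTHLY_PER_USER = 12.0    # subscription fee

def licensed_cost(years):
    """Total cost of ownership for the licensed model."""
    return LICENSE_UPFRONT + LICENSE_SUPPORT_YEARLY * years

def saas_cost(years, users):
    """Total subscription cost for the SaaS model."""
    return SAAS_MONTHLY_PER_USER * 12 * years * users

# A 10-user team over three years:
print(licensed_cost(3))   # 7700.0
print(saas_cost(3, 10))   # 4320.0
```

Note that which model is cheaper depends entirely on team size and time horizon; with enough users or years, the subscription can exceed the license, which is part of why SaaS pricing is usage-based.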
Disadvantages of SaaS cloud computing layer
1) Security
Since data is stored in the cloud, security may be an issue
for some users. However, cloud computing is not more secure than in-house
deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable
distance from the end user, there may be greater latency when
interacting with the application than with a local deployment.
Therefore, the SaaS model is not suitable for applications that
demand response times in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not
usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of
transferring very large data files over the internet and then
converting and importing them into the new SaaS application.
Popular SaaS Providers
SUMMARY OF SAAS PROVIDERS
Provider               Services
Salesforce.com         On-demand CRM solutions
Microsoft Office 365   Online office suite
Google Apps            Gmail, Google Calendar, Docs, and Sites
NetSuite               ERP, accounting, order management, CRM, Professional Services Automation (PSA), and e-commerce applications
GoToMeeting            Online meeting and video-conferencing software
Constant Contact       E-mail marketing, online surveys, and event marketing
Oracle CRM             CRM applications
Workday, Inc.          Human capital management, payroll, and financial management
Cloud Service Models
There are the following three types of cloud service models:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
Infrastructure as a Service (IaaS)
IaaS is also known as Hardware as a Service (HaaS). It is a
computing infrastructure managed over the internet. The main advantage
of using IaaS is that it helps users to avoid the cost and complexity of
purchasing and managing the physical servers.
Characteristics of IaaS
There are the following characteristics of IaaS:
 Resources are available as a service
 Services are highly scalable
 Dynamic and flexible
 GUI and API-based access
 Automated administrative tasks
Example: DigitalOcean, Linode, Amazon Web Services (AWS),
Microsoft Azure, Google Compute Engine (GCE), Rackspace, and Cisco
Metacloud.
Platform as a Service (PaaS)
The PaaS cloud computing platform is created for programmers to
develop, test, run, and manage applications.
Characteristics of PaaS
There are the following characteristics of PaaS:
 Accessible to various users via the same development application.
 Integrates with web services and databases.
 Builds on virtualization technology, so resources can easily be
scaled up or down as per the organization's need.
 Supports multiple languages and frameworks.
 Provides the ability to "Auto-scale".
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com,
Google App Engine, Apache Stratos, Magento Commerce Cloud, and
OpenShift.
Software as a Service (SaaS)
SaaS is also known as "on-demand software". It is software in
which the applications are hosted by a cloud service provider. Users
can access these applications with the help of an internet connection
and a web browser.
Characteristics of SaaS
There are the following characteristics of SaaS:
 Managed from a central location
 Hosted on a remote server
 Accessible over the internet
 Users are not responsible for hardware and software updates;
updates are applied automatically.
 The services are purchased on a pay-as-per-use basis
Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk,
Cisco WebEx, Slack, and GoToMeeting.
Difference between IaaS, PaaS, and SaaS

What it provides:
 IaaS provides a virtual data center to store information and create
platforms for app development, testing, and deployment.
 PaaS provides virtual platforms and tools to create, test, and deploy
apps.
 SaaS provides web software and apps to complete business tasks.

Access it gives:
 IaaS provides access to resources such as virtual machines, virtual
storage, etc.
 PaaS provides runtime environments and deployment tools for
applications.
 SaaS provides software as a service to the end-users.

Who uses it:
 IaaS is used by network architects.
 PaaS is used by developers.
 SaaS is used by end users.

Scope:
 IaaS provides only Infrastructure.
 PaaS provides Infrastructure + Platform.
 SaaS provides Infrastructure + Platform + Software.
SOFTWARE AS A SERVICE:
Cloud computing is the on-demand availability of computer system
resources, web-based software, or web-hosted software. SaaS is a
business model specific to cloud computing, along with infrastructure.
CHARACTERISTICS OF SAAS:
 Accessibility
 Subscription-based
 Multi-tenancy
 Scalability
 Cost savings
 Security and compliance
 Collaboration
 SaaS harnesses the consumer web
SUSTAINABILITY AS A SERVICE
 Future-proof operations and begin making essential progress
towards sustainability.
UNIT-V
EMC
EMC Cloud Computing is a specialized skill related
to the software and services offered by EMC
Corporation. It involves expertise in designing,
deploying, and managing cloud-based infrastructure and
platforms, including storage and data management
services, virtualization technologies, and cloud security.
EMC IT:
 EMC IT, in the context of cloud computing, typically refers to the
use of cloud-based technologies and services by EMC for its own
internal IT infrastructure and operations. Here are some key points
about EMC IT in cloud computing.
CLOUD ADOPTION:
 EMC IT leverages cloud computing technologies to modernize its
infrastructure, improve agility, and reduce costs. This includes
adopting public, private, and hybrid cloud solutions based on the
organisation's requirements.
INFRASTRUCTURE AS A SERVICE
 EMC IT may utilize IaaS offerings from public cloud providers such
as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud
Platform (GCP) to provision compute, storage, and networking
resources on demand.
GOOGLE CLOUD PLATFORM:
Google Cloud Platform is a set of cloud computing services provided
by Google that allows you to store, manage, and analyze data. It is
also used for developing, deploying, and scaling applications on
Google's environment.
Storing and Managing Data
 Cloud SQL is a fully managed relational database service provided
by Google Cloud Platform (GCP) that allows developers to store,
manage, and access their data in the cloud. This service is designed
to be highly scalable and highly available, making it a great choice
for businesses that require a robust and reliable database solution.
 Cloud Spanner is a fully managed, horizontally scalable relational
database service provided by Google Cloud Platform (GCP). It is
designed to handle large amounts of structured and semi-structured
data with high availability and reliability. Spanner is a global-scale
database that supports ACID transactions, SQL, and multi-region
replication.
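Because Cloud SQL exposes standard MySQL/PostgreSQL interfaces, application code talks to it through an ordinary database driver. The minimal sketch below uses Python's built-in sqlite3 purely as a local stand-in for the managed database; against Cloud SQL you would swap in a MySQL or PostgreSQL driver and the instance's connection details (the table and data here are invented for the example).

```python
import sqlite3

# sqlite3 stands in for the managed relational database: the SQL and
# driver workflow are the same shape as with a Cloud SQL instance.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()
rows = cur.execute("SELECT id, name FROM users").fetchall()
print(rows)  # [(1, 'alice')]
conn.close()
```

The point of the managed service is that provisioning, replication, and backups happen behind this same driver interface.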
CLOUD STORAGE :
Thanks to cloud storage services, you can now store, save and share huge
amounts of data on the web. Cloud storage has many advantages, from cost
efficiency to time efficiency. However, before we dive in to get to know what
the 4 different types of cloud storage services are, let’s talk a bit about cloud
deployment.
Cloud deployment model defines many things for you. For example, it outlines
the location of the server you’re using, who has access to it and controls it, the
way the platform is implemented, and the relationship between the cloud
infrastructure and the user.
There are different cloud storage deployment options, and the way you use
your cloud defines which model it is. The four types of deployment options
each provide different types of solutions to different types of needs. Now,
let's dive into the details.
1. Public Cloud Storage
As the name suggests, public cloud storage supports customers that need to
utilize computer resources, including hardware and software. Basically,
using public cloud storage is like being a tenant living in a big apartment
building with other people (companies) and having a landlord (the service
provider). Just as with apartment rent, the cost of public cloud is lower
than private cloud.
So, who uses public cloud? Public clouds are mostly common for non-critical
tasks such as file sharing or development and testing of an application;
therefore, small-scale companies with no private information use it.
Anonymous users with the authority can also have access to public cloud.
Keep in mind that the service provider fully controls the public cloud
storage, its hardware, software, and infrastructure.
Some of the advantages of public cloud storage are:
Cost efficiency
Consistent
Vast flexibility
Expert monitoring
2. Private cloud storage
When we say “Private”, we can conclude that this cloud model is the
infrastructure that is used solely by a single organization. The infrastructures
found in private cloud storage can be managed either by the organization
itself or by a specific service provided. Since a private cloud storage service is
designed for individual needs, naturally, they are more expensive than other
clouds. However, they are also known to be better in resolving the
organization’s biggest fear, security, and privacy.
So, who uses private cloud? Private clouds are used by large enterprises who
store sensitive and private information such as government agencies, financial
institutions, and healthcare organizations. These private cloud infrastructures
can be located either on-premises or with a third party.
The numerous advantages offered by the private cloud are:
Security
Vast flexibility
High efficiency
Ability to customize
3. Hybrid Cloud Storage
A hybrid cloud storage crosses the concepts of private and public cloud
infrastructures together. How is this possible exactly? When a hybrid cloud
storage is used, critical data is stored in the organization’s private cloud, and
the remaining data is stored in the public cloud. This method allows you to
customize the features and modify it to your needs by using the recourses
given to you. When you think about it, both time and money is used efficiently
by this method.
So, who uses Hybrid cloud storage? Many organizations utilize this specific
model when they need to quickly upgrade their IT infrastructure. Hybrid cloud
serves as an advantage to many businesses because of its flexibility.
The advantages of Hybrid cloud storage are:
 High flexibility
 Customizable
 Cost effective
 Time effective
 Easily controllable
4. Community Cloud
Lastly, we have the community cloud. This deployment model is dedicated to
multiple organizations in the same community, which means it is not public,
open to anyone in need. However, it is also not private, since there is more
than one company using it. Some examples of community cloud are Banks,
universities in common areas, or police departments within the same state.
So, who uses community cloud storage? A simple answer to this would be the
members of the community.
Some of its advantages are:
 Highly flexible to the needs of the community
 Cost efficient
 High security
1.Google Cloud Connect
● Google Cloud Connect was a free cloud computing plug-in for Microsoft
Office 2003, 2007, and 2010 on Windows.
● It could automatically store and synchronize any Microsoft Word document,
PowerPoint presentation, or Excel spreadsheet to Google Docs, in Google
Docs or Microsoft Office formats.
● The Google Doc copy is automatically updated each time the Microsoft Office
document is saved.
● Microsoft Office documents can be edited offline and synchronized later
when online. Google Cloud Sync maintains previous Microsoft Office
document versions and allows multiple users to collaborate, working on the
same document at the same time.
● Google Cloud Connect was discontinued on April 30, 2013, as according to
Google, all of Cloud Connect's features are available through Google Drive.
Features
● Backup: Microsoft Office documents could be manually or automatically
backed up to Google Docs each time they were saved locally.
● Synchronize: Changes made to an Office document on one computer could
sync when the file was opened on another computer.
● Protect: Microsoft Office documents synced to Google Docs could be made
accessible to only one person.
● Share: Microsoft Office documents synced to Google Docs could be made
accessible only to selected people.
● Edit: A shared document could be set to be only viewed by others, or
edited as well.
● Publish: Documents synced to Google Docs could effectively be published
by making them accessible to anyone.
● Collaborate: Multiple users could work on the same document at the same
time.
● Notify: When one person edited a document, others sharing the document
received an email letting them know.
● Print: Google Cloud Print could be used to print to local or remote
network-connected printers.
● Compare: Previous versions were maintained, allowing users to compare
against older versions.
● Roll back: Users could return to a previous version of the document.
● Green: Green computing allows documents to be shared without printing or
sending large files; only links need be sent.
● Mobilize: Google Sync allowed synced documents to be viewed and edited
with most internet-connected mobile devices.
● Storage: 5 GB of Google Drive storage was included for free.
2. Google Cloud Print
● Google Cloud Print was a Google service that allowed users to print from any
Cloud Print-aware application (web, desktop, mobile) on any device in the
network cloud.
● Any printer with native support could connect to the cloud print service
without Google having to create and maintain printing subsystems for all the
combinations of hardware, client devices, and printers, and without users
having to install device drivers on the client, but with documents being
fully transmitted to Google.
● Starting on July 23, 2013, it allowed printing from any Windows application,
if Google Cloud Printer was installed on the machine.
● Google Cloud Print was shut down on December 31, 2020.
Features
● Google Cloud Print was integrated into the mobile versions of Gmail and
Google Docs, allowing users to print from their mobile devices.
● Google Chrome 16 and higher listed Google Cloud Print as a printer option in
the Print Preview page.
● Google Chrome 9 and higher supported printers without a built-in Cloud
Print component through a "Cloud Print Connector".
3. Google App Engine
What is Google App Engine?
● Google App Engine is a cloud computing Platform as a Service (PaaS) which
provides Web app developers and businesses with access to Google’s scalable
hosting in Google managed data centers and tier 1 Internet service.
● It enables developers to take full advantage of its serverless platform.
Applications must be written in one of the supported languages, namely
Java, Python, PHP, Go, Node.js, .NET, and Ruby.
How is GAE used?
● Users can create an account under the GAE section, set up an SDK, and write
application source code. They can use this to test and deploy code in the cloud.
● One way to use GAE is to build scalable application back ends that adapt to
workloads as needed. Another way to use GAE is for application testing: users
can route traffic to different app versions for A/B testing.
● A/B testing is a research methodology for determining user experience. It is a
randomized experiment with two variants, A and B. Also known as split testing
or bucket testing, it is used to compare two versions of a web app against each
other to determine which one performs better under various workloads.
Google App Engine Environments
● Standard Environment
● Flexible Environment
Major Features of Google App Engine
● Language support:
Google App Engine lets users build applications in some of the most popular
languages, including Java, Python, Ruby, Golang, Node.js, C#, and PHP.
● Flexibility:
Google App Engine offers the flexibility to import libraries & frameworks through Docker
containers.
● Diagnostics:
Google App Engine uses cloud monitoring and logging to monitor health and performance
of an application which helps to diagnose and fix bugs quickly. The error reporting
document helps developers fix bugs on an immediate basis.
● Traffic splitting:
Google App Engine automatically routes the incoming traffic to different application
versions as a part of A/B testing. This enables users to easily create environments for
developing, staging, production and testing.
● Security:
Google App Engine enables users to define access rules in Engine’s firewall and utilize
SSL/TLS certificates on custom domains for free.
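The traffic-splitting feature described above can be illustrated as weighted random routing of requests between app versions. This is a simplified sketch of the idea, not App Engine's actual mechanism; the version names and weights are invented for the example.

```python
import random

def pick_version(weights, rng=random):
    """Route one request to an app version according to traffic weights.

    `weights` maps version name -> share of traffic, e.g. {"v1": 90, "v2": 10}.
    Illustrative only: a real platform also supports sticky routing by
    cookie or IP so a given user keeps seeing the same version.
    """
    versions = list(weights)
    return rng.choices(versions, weights=[weights[v] for v in versions])[0]

rng = random.Random(42)  # seeded so the demo is reproducible
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[pick_version({"v1": 90, "v2": 10}, rng)] += 1
print(counts)  # roughly 9000 requests to v1 and 1000 to v2
```

Comparing metrics (latency, conversions) collected per version over such a split is exactly the A/B comparison the text describes.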
4. Amazon Web Services
What is AWS?
● AWS stands for Amazon Web Services, It is an expanded cloud computing
platform provided by Amazon Company.
● AWS provides a wide range of services with a pay-as-per-use pricing model over
the Internet such as Storage, Computing power, Databases, Machine Learning
services, and much more.
● AWS enables both businesses and individual users to host applications
effectively, store data securely, and make use of a wide variety of tools
and services, improving management flexibility for IT resources.
How does AWS work?
● AWS operates its own network infrastructure, establishing data centers in
different regions all over the world.
● Its global infrastructure acts as a backbone for the operations and services
provided by AWS.
● It facilitates users in creating secure environments using Amazon VPCs
(Virtual Private Clouds).
AWS Fundamentals
● Regions
● Availability Zones (AZ)
● Global Network Infrastructure
Advantages Of Amazon Web Services
● AWS allows you to easily scale your resources up or down as your needs
change, helping you to save money and ensure that your application always has
the resources it needs.
● AWS provides a highly reliable and secure infrastructure, with multiple data
centers and a commitment to 99.99% availability for many of its services.
● AWS offers a wide range of services and tools that can be easily combined to
build and deploy a variety of applications, making it highly flexible.
Disadvantages Of Amazon Web Services
● AWS can be complex, with a wide range of services and features that may be
difficult to understand and use, especially for new users.
● AWS can be expensive, especially if you have a high-traffic application or
need to run multiple services. Additionally, the cost of services can increase
over time, so you need to regularly monitor your spending.
● While AWS provides many security features and tools, securing your resources
on AWS can still be challenging, and you may need to implement additional
security measures to meet your specific requirements.
Applications Of AWS
● Netflix
● Airbnb
● Capital One
5.Amazon Elastic Compute Cloud
What is AWS EC2?
● Among the vast array of services that Amazon offers, EC2 is the core compute
component of the technology stack.
● In practice, EC2 makes life easier for developers by providing secure, and resizable
compute capacity in the cloud.
● It greatly eases the process of scaling up or down, can be integrated into several other
services.
Use Cases of Amazon EC2
● Deploying Applications: In an AWS EC2 instance, you can deploy your
application, such as a .jar, .war, or .ear application, without maintaining
the underlying infrastructure.
● Scaling Applications: Once you have deployed your web application in an EC2
instance, you can scale your application based upon demand by scaling the
AWS EC2 instances.
● Deploying ML Models: You can train and deploy your ML models in EC2
instances because EC2 offers networking of up to 400 Gbps and storage
services purpose-built to optimize price performance for ML projects.
● Hybrid Cloud Environment: You can deploy your web application in an EC2
instance and connect to a database that is deployed on on-premises servers.
● Cost-Effective: Amazon EC2 instances are cost-effective, so you can, for
example, deploy your gaming application in Amazon EC2 instances.
Features
● Operating Systems
● Functionality
● Software
● Scalability and Reliability
Advantages of Amazon EC2
● Elastic Web-Scale Computing: Amazon EC2 enables you to increase or
decrease capacity within minutes, not hours or days.
● Completely Controlled: You have complete control of your instances,
including root access and the ability to interact with them as you would any
machine.
● Flexible Cloud Hosting Services.
Disadvantages of Amazon EC2
● AWS imposes resource limits by default, which vary by region.
● You can only launch a certain number of instances per region.
● Hardware-level changes can occur beneath your application, which may
result in poor performance of your applications.
Amazon Simple Storage Services
o S3 is one of the first services produced by AWS.
o S3 stands for Simple Storage Service.
o S3 provides developers and IT teams with secure, durable, highly
scalable object storage.
o It is easy to use with a simple web services interface to store and
retrieve any amount of data from anywhere on the web.
o S3 is a safe place to store files.
o It is object-based storage, i.e., you can store images, Word
files, PDF files, etc.
o The files stored in S3 can be from 0 bytes to 5 TB.
o It has unlimited storage, meaning that you can store as much data
as you want.
o Files are stored in buckets. A bucket is like a folder available in S3
that stores the files.
o S3 uses a universal namespace, i.e., bucket names must be unique
globally. A bucket forms part of a DNS address; therefore, the bucket
must have a unique name to generate a unique DNS address.
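Because bucket names become part of a globally unique DNS address, AWS restricts what a name may look like. The sketch below checks the core rules (3 to 63 characters; lowercase letters, numbers, dots, and hyphens; must begin and end with a letter or number); AWS enforces additional rules not shown here, such as rejecting names formatted like IP addresses.

```python
import re
import string

def is_valid_bucket_name(name):
    """Check the core S3 bucket-naming rules (a simplified sketch;
    AWS applies further restrictions beyond these)."""
    if not (3 <= len(name) <= 63):
        return False
    # Only lowercase letters, digits, dots, and hyphens are allowed.
    if not re.fullmatch(r"[a-z0-9.-]+", name):
        return False
    # Must begin and end with a lowercase letter or a digit.
    alnum = string.ascii_lowercase + string.digits
    return name[0] in alnum and name[-1] in alnum

print(is_valid_bucket_name("my-unique-bucket-2024"))  # True
print(is_valid_bucket_name("My_Bucket"))              # False (uppercase, underscore)
```

Validating the name locally avoids a round trip to the service just to receive a naming error.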
Advantages of Amazon S3
o Create buckets: Firstly, we create a bucket and provide a name for
the bucket. Buckets are the containers in S3 that store the data.
Buckets must have a unique name to generate a unique DNS address.
o Storing data in buckets: A bucket can be used to store an infinite
amount of data. You can upload as many files as you want into an
Amazon S3 bucket, i.e., there is no maximum limit to the number of
stored files. Each object can contain up to 5 TB of data, and each
object can be stored and retrieved by using a unique developer-assigned
key.
o Download data: You can also download your data from a bucket and can
also give permission to others to download the same data. You can
download the data at any time, whenever you want.
o Permissions: You can also grant or deny access to others who want to
download or upload data from your Amazon S3 bucket. The authentication
mechanism keeps the data secure from unauthorized access.
o Standard interfaces: S3 is used with the standard REST and SOAP
interfaces, which are designed in such a way that they can work with
any development toolkit.
o Security: Amazon S3 offers security features by protecting your data
from access by unauthorized users.
S3 is a simple key-value store
S3 is object-based. Objects consist of the following:
o Key: It is simply the name of the object, for example, hello.txt,
spreadsheet.xlsx, etc. You can use the key to retrieve the object.
o Value: It is simply the data, which is made up of a sequence of
bytes. It is actually the data inside the file.
o Version ID: The version ID uniquely identifies the object. It is a
string generated by S3 when you add an object to an S3 bucket.
o Metadata: It is data about the data you are storing: a set of
name-value pairs with which you can store information regarding an
object. Metadata can be assigned to objects in an Amazon S3 bucket.
o Subresources: The subresource mechanism is used to store
object-specific information.
o Access control information: You can put permissions individually on
your files.
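The object model above can be sketched as a tiny in-memory key-value store. This illustrates only the key/value/metadata/version-ID structure, not the S3 API; the class and method names are invented for the example.

```python
import uuid

class ObjectStore:
    """A minimal in-memory sketch of S3's object model: each object has
    a key, a value (bytes), metadata, and a version ID."""

    def __init__(self):
        self._objects = {}

    def put(self, key, value, metadata=None):
        # S3 generates a version string when an object is added; here we
        # mimic that with a random hex ID.
        version_id = uuid.uuid4().hex
        self._objects[key] = {
            "value": value,
            "metadata": metadata or {},
            "version_id": version_id,
        }
        return version_id

    def get(self, key):
        return self._objects[key]["value"]

store = ObjectStore()
store.put("hello.txt", b"Hello, S3!", metadata={"content-type": "text/plain"})
print(store.get("hello.txt"))  # b'Hello, S3!'
```

The dictionary lookup by key mirrors why S3 retrieval is fast regardless of how many objects a bucket holds.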
Amazon Simple Queue Service
o SQS stands for Simple Queue Service.
o SQS was the first service available in AWS.
o Amazon SQS is a web service that gives you access to a message
queue that can be used to store messages while waiting for a
computer to process them.
o Amazon SQS is a distributed queue system that enables web service
applications to quickly and reliably queue messages that one component
in the application generates to be consumed by another component,
where a queue is a temporary repository for messages that are awaiting
processing.
o With the help of SQS, you can send, store, and receive messages
between software components at any volume without losing messages.
o Using Amazon SQS, you can separate the components of an application
so that they can run independently, easing message management between
components.
o Any component of a distributed application can store messages in the
queue.
o Messages can contain up to 256 KB of text in any format, such as
JSON, XML, etc.
o Any component of an application can later retrieve the messages
programmatically using the Amazon SQS API.
o The queue acts as a buffer between the component producing and
saving data and the component receiving the data for processing. This
means that the queue resolves issues that arise if the producer is
producing work faster than the consumer can process it, or if the
producer or consumer is only intermittently connected to the network.
o Suppose you have two EC2 instances polling the SQS queue. You can
configure an autoscaling group so that if the number of messages goes
over a certain limit, say 10, an additional EC2 instance is added to
process the jobs faster. In this way, SQS provides elasticity.
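The buffering and elasticity idea above can be sketched in a few lines: messages pile up in a queue when the producer outruns the consumer, and a simple policy adds workers as the backlog grows. The threshold policy here is invented for illustration; it is not an actual SQS or Auto Scaling API.

```python
from collections import deque

def workers_needed(queue_len, base=1, threshold=10):
    """Toy scaling policy: one extra worker for every `threshold`
    messages waiting in the queue (illustrative, not an AWS API)."""
    return base + queue_len // threshold

queue = deque()
for i in range(25):           # the producer is faster than the consumer,
    queue.append(f"job-{i}")  # so messages buffer up in the queue

print(len(queue), workers_needed(len(queue)))  # 25 3
```

Because the queue absorbs the burst, the producer never blocks and consumers can be scaled independently, which is exactly the decoupling the text describes.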
Let's understand through an example.
Let's look at a website that generates a meme. Suppose a user wants
to upload a photo and convert it into a meme. The user uploads the
photo on the website, and the website might store the photo in S3. As
soon as the upload finishes, it triggers a Lambda function. Lambda
sends the data about this particular image to SQS, and this data can
be "what the top of the meme should say", "what the bottom of the meme
should say", the location of the S3 bucket, etc. The data sits inside
SQS as a message. An EC2 instance looks at the message and performs
its job: it creates the meme and stores it in an S3 bucket. Once the
EC2 instance has completed its job, it goes back to polling SQS. The
best thing is that if you lose the EC2 instance, you still do not lose
the job, as the job sits inside the SQS queue.
Let's look at another example of SQS, i.e., a travel website.
Suppose a user wants to look for a package holiday and the best
possible flight. The user types a query in a browser; it then hits an
EC2 instance. The EC2 instance looks at what the user is looking for
and puts a message in an SQS queue. Worker EC2 instances continuously
poll the queue, looking for jobs to do. Once a worker gets the job, it
processes it: it interrogates the airline service to get all the best
possible flights and sends the result to the web server, and the web
server sends the result back to the user. The user then selects the
best flight according to his or her budget.
If we didn't have SQS, then what would happen? The web server would
pass the information to an application server, and the application
server would query the airline service. If the application server
crashed, the user would lose their query. One of the great things
about SQS is that the data stays queued in SQS even if the application
server crashes: the message in the queue is merely marked as invisible
during a timeout interval window. When the timeout runs out, the
message reappears in the queue, and a new EC2 instance can use this
message to perform the job. Therefore, we can say that SQS removes the
application server dependency.
Queue Types
There are two types of queue:
o Standard Queues (default)
o FIFO Queues (First-In-First-Out)
Standard Queue
o SQS offers the standard queue as the default queue type.
o It allows you to have an unlimited number of transactions per second.
o It guarantees that a message is delivered at least once. However,
sometimes more than one copy of a message might be delivered, possibly
out of order.
o It provides best-effort ordering, which ensures that messages are
generally delivered in the same order as they are sent, but it does
not provide a guarantee.
FIFO Queue
o The FIFO queue complements the standard queue.
o It guarantees ordering, i.e., the order in which messages are sent
is also the order in which they are received.
o The most important features of a FIFO queue are first-in-first-out
delivery and exactly-once processing, i.e., a message is delivered
once and remains available until a consumer processes and deletes it.
o A FIFO queue does not allow duplicates to be introduced into the
queue.
o It also supports message groups that allow multiple ordered message
groups within a single queue.
o FIFO queues are limited to 300 transactions per second but have all
the capabilities of standard queues.
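The FIFO guarantees above (strict ordering plus no duplicates) can be sketched as follows. This is an illustration of the semantics only, not the SQS API, and the deduplication-ID scheme is simplified.

```python
class FifoQueue:
    """Toy FIFO queue: messages come out in send order, and a message
    with an already-seen deduplication ID is silently dropped."""

    def __init__(self):
        self._messages = []
        self._seen_ids = set()

    def send(self, dedup_id, body):
        if dedup_id in self._seen_ids:  # duplicate: not enqueued again
            return False
        self._seen_ids.add(dedup_id)
        self._messages.append(body)
        return True

    def receive(self):
        # Pop from the front so delivery order matches send order.
        return self._messages.pop(0) if self._messages else None

q = FifoQueue()
q.send("m1", "first")
q.send("m2", "second")
q.send("m1", "first")  # duplicate ID: dropped
print(q.receive(), q.receive(), q.receive())  # first second None
```

Real FIFO queues scope deduplication to a time window and support message groups, but the ordering-plus-dedup core is the same.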
SQS Visibility Timeout
o The visibility timeout is the amount of time that a message is
invisible in the SQS queue after a reader picks up that message.
o If the job is processed before the visibility timeout expires, the
message will then be deleted from the queue. If the job is not
processed within that time, the message will become visible again and
another reader will process it. This could result in the same message
being delivered twice.
o The default visibility timeout is 30 seconds.
o The visibility timeout can be increased if your task takes more than
30 seconds.
o The maximum visibility timeout is 12 hours.
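The visibility-timeout behaviour above can be simulated in a few lines. This toy model only illustrates the semantics (hidden while in flight, visible again after the deadline); it is not the SQS API.

```python
class VisibleQueue:
    """Toy simulation of the visibility timeout: a received message is
    hidden for `timeout` time units; if it is not deleted in time, it
    becomes visible again for another reader."""

    def __init__(self, timeout=30):
        self.timeout = timeout
        self.visible = ["job"]   # one pending message for the demo
        self.in_flight = {}      # message -> time at which it reappears

    def receive(self, now):
        msg = self.visible.pop(0)
        self.in_flight[msg] = now + self.timeout
        return msg

    def tick(self, now):
        for msg, deadline in list(self.in_flight.items()):
            if now >= deadline:           # reader failed to finish in time
                del self.in_flight[msg]
                self.visible.append(msg)  # message reappears in the queue

q = VisibleQueue(timeout=30)
q.receive(now=0)    # a reader picks up the message
q.tick(now=29)
print(q.visible)    # []  (still invisible)
q.tick(now=30)
print(q.visible)    # ['job']  (timeout expired, visible again)
```

This is also why the same message can be delivered twice: a slow but still-working reader and the new reader may both end up processing it.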
Important points to remember:
o SQS is pull-based, not push-based.
o Messages are up to 256 KB in size.
o Messages are kept in a queue from 1 minute to 14 days.
o The default retention period is 4 days.
o It guarantees that your messages will be processed at least once.
Microsoft Windows Azure
Microsoft Azure is a cloud computing platform that provides a
wide variety of services that we can use without purchasing and
arranging our hardware. It enables the fast development of solutions and
provides the resources to complete tasks that may not be achievable in
an on-premises environment. Azure Services like compute, storage,
network, and application services allow us to put our effort into building
great solutions without worrying about the assembly of physical
infrastructure.
This tutorial covers the fundamentals of Azure, which will give us an
idea of all the key Azure services we are most likely required to know
to start developing solutions. After completing this tutorial, we can
crack job interviews or obtain different Microsoft Azure
certifications.
What is Azure
Microsoft Azure is a growing set of cloud computing services created
by Microsoft that hosts your existing applications, streamlines the
development of new applications, and also enhances our on-premises
applications. It helps organizations in building, testing, deploying,
and managing applications and services through Microsoft-managed data
centers.
Azure Services
o Compute services: These include Microsoft Azure Cloud Services,
Azure Virtual Machines, Azure Websites, and Azure Mobile Services,
which process the data on the cloud with the help of powerful
processors.
o Data services: This service is used to store data over the cloud
that can be scaled according to the requirements. It includes
Microsoft Azure Storage (Blob, Queue, Table, and Azure File services),
Azure SQL Database, and the Redis Cache.
o Application services: These include services which help us build and
operate our applications, like Azure Active Directory, Service Bus for
connecting distributed systems, HDInsight for processing big data, the
Azure Scheduler, and the Azure Media Services.
o Network services: These help you connect with the cloud and
on-premises infrastructure, and include Virtual Networks, the Azure
Content Delivery Network, and the Azure Traffic Manager.
How Azure works
It is essential to understand the internal workings of Azure so that
we can design our applications on Azure effectively with high
availability, data residency, resilience, etc. Microsoft Azure is
completely based on the concept of virtualization. So, similar to
other virtualized data centers, it also contains racks. Each rack has
a separate power unit and network switch, and each rack is integrated
with software called the Fabric Controller. The Fabric Controller is a
distributed application which is responsible for managing and
monitoring servers within the rack. In case of any server failure, the
Fabric Controller recognizes it and recovers it. Each of these Fabric
Controllers is, in turn, connected to a piece of software called the
Orchestrator. The Orchestrator includes web services and a REST API to
create, update, and delete resources.
When a request is made by the user, either using PowerShell or the
Azure portal, it will first go to the Orchestrator, where it will
fundamentally do three things:
1. Authenticate the user.
2. Authorize the user, i.e., check whether the user is allowed to do
the requested task.
3. Look into the database for the availability of space based on the
resources, and pass the request to an appropriate Azure Fabric
Controller to execute it.
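The three Orchestrator steps above can be sketched as a single request-handling function. All names and data structures here are illustrative assumptions for the example, not Azure's actual interfaces.

```python
def handle_request(user, action, users_db, permissions, capacity):
    """Sketch of the Orchestrator's flow: authenticate, authorize,
    then check capacity before dispatching to a fabric controller."""
    if user not in users_db:                       # 1. authenticate
        return "denied: unknown user"
    if action not in permissions.get(user, set()): # 2. authorize
        return "denied: not permitted"
    if capacity <= 0:                              # 3. capacity check
        return "denied: no capacity"
    return f"dispatched '{action}' to fabric controller"

# Hypothetical user and permission data for the demo.
users = {"alice"}
perms = {"alice": {"create_vm"}}
print(handle_request("alice", "create_vm", users, perms, capacity=5))
# dispatched 'create_vm' to fabric controller
```

The ordering matters: identity is established before permissions are checked, and resources are only allocated once both gates pass.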
Combinations of racks form a cluster. We have multiple clusters within
a data center, and we can have multiple Data Centers within an
Availability zone, multiple Availability zones within a Region, and
multiple Regions within a Geography.
o Geographies: A geography is a discrete market, typically containing
two or more regions, that preserves data residency and compliance
boundaries.
o Azure regions: A region is a collection of data centers deployed
within a defined perimeter and interconnected through a dedicated
regional low-latency network.
Azure covers more global regions than any other cloud provider, which
offers the scalability needed to bring applications and users closer
around the world. It is globally available in 50 regions around the world.
Due to its availability over many regions, it helps in preserving data
residency and offers comprehensive compliance and flexible options to
the customers.
Microsoft Assessment and Planning
Toolkit
Microsoft Assessment and Planning (MAP) Toolkit is a free utility
IT can use to determine whether its infrastructure is prepared for a
migration to a new operating system, server version or cloud-based
deployment.
An IT professional can run MAP Toolkit on their device and take an
inventory of the devices, software, users and infrastructure associated
with any networks they are connected to. Microsoft now recommends
that customers use Azure Migrate rather than the MAP toolkit.
Microsoft Assessment and Planning Toolkit is made up of four main
components, as follows:
 MAPSetup.exe contains MAP as well as the files IT administrators
need to set up a local SQL Server Database Engine.
 readme_en.htm details what administrators need to run MAP Toolkit
and known issues.
 MAP_Sample_Documents.zip provides examples of the types of reports
and proposals MAP Toolkit creates.
 MAP_Training_Kit.zip explains how to use MAP Toolkit and provides a
sample database of the information MAP Toolkit can provide.
MAP Toolkit does not require an agent. It automatically
inventories the devices, software, users and infrastructure in a Windows
or Windows Server deployment and produces a readiness report and
proposal for executives with hardware and software information. The
data can include metrics such as the number of devices, how many
devices run Windows, the number of users and more. The readiness
report can also include information on the applications users work with
and if those applications are compatible with the desktop or server
operating system IT plans to move to.
The Windows 10 Assessment report, for example, shows whether the
hardware in a network is ready for Microsoft's latest OS. The Internet
Explorer Migration Assessment Report details what versions of the
Internet Explorer browser are running in the deployment as well as any
add-ons and ActiveX controls present in Internet Explorer.
MAP Toolkit can detail server utilization information to help IT identify
where servers are, and which servers would be viable to run virtual
desktops on. In addition to the information it provides, MAP Toolkit
delivers recommendations on how IT should proceed with its migration
plan.
MAP toolkit features
Microsoft Assessment and Planning Toolkit is made up of four
main pieces. The first is the installation package -- MAPSetup.exe -- that
contains MAP as well as SQL LocalDB -- a local database of the files IT
needs to set up a SQL Server Database Engine.
The readme_en.htm file details what IT needs to run MAP Toolkit and
any known issues that exist. MAP_Sample_Documents.zip provides
examples of the types of reports and proposals MAP Toolkit creates.
MAP_Training_Kit.zip teaches IT how to use MAP Toolkit and
provides a sample database of the information MAP Toolkit can provide.
Microsoft Assessment and Planning Toolkit requirements
The device running MAP Toolkit must meet certain hardware and
software requirements. The device must have at least a dual-core 1.5
GHz processor, 2 GB of RAM, 1 GB of disk space and a network
adapter card. In addition, the device's graphics adapter must support a
resolution of at least 1024x768.
When it comes to software, the device must have all the latest Windows
updates installed and .NET Framework 4.5. It must run Windows 7 with
Service Pack 1, Windows 8, Windows 8.1 or Windows 10. For Windows
7, the device must run the Professional, Enterprise or Ultimate edition.
For Windows 8 and 8.1 as well as Windows 10, the device must run the
Professional or Enterprise edition. The server the deployment runs on
must be Windows Server 2008 R2 with Service Pack 1, Windows Server
2012, Windows Server 2012 R2, Windows Server 2016 or Windows
Server 2019.
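The minimum requirements listed above translate naturally into a simple pre-flight check. The sketch below is purely illustrative -- the function name and parameters are our own, and it only encodes the thresholds quoted in this section, not Microsoft's actual installer logic:

```python
def meets_map_requirements(cores, cpu_ghz, ram_gb, disk_gb, resolution):
    """Check a machine against the MAP Toolkit hardware minimums listed above."""
    width, height = resolution
    return (
        cores >= 2 and          # at least a dual-core processor
        cpu_ghz >= 1.5 and      # running at 1.5 GHz or faster
        ram_gb >= 2 and         # 2 GB of RAM
        disk_gb >= 1 and        # 1 GB of disk space
        width >= 1024 and height >= 768  # graphics adapter resolution
    )

print(meets_map_requirements(2, 1.5, 2, 1, (1024, 768)))    # True
print(meets_map_requirements(1, 3.0, 8, 100, (1920, 1080))) # False: single core
```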
SharePoint
There are three main editions of SharePoint to start working with:
1. SharePoint Foundation
2. SharePoint Server
3. Office 365
1) SharePoint Foundation
SharePoint Foundation is used to build a standard web-based collaboration platform with
secure management and a communication solution within the organization.
There are the following features of SharePoint Foundation:
o It is used to reduce implementation and deployment resources.
o It provides effective document and task collaboration.
o It offers features to secure your organization's important business data.
o It provides PowerShell support.
o It provides basic search operations.
2) SharePoint Server
SharePoint Server offers all the features of SharePoint Foundation, plus a more advanced
collection of features that you can use for your organization's solutions.
Some additional features of SharePoint Server are given below:
o SharePoint allows you to create and publish web content without writing any complex code.
o SharePoint uses Enterprise Services that allow you to quickly and easily build custom solutions.
o SharePoint Server offers more advanced features that can be implemented within the environment.
o SharePoint Server allows you to connect with external data sources and display business data via Web portals, SharePoint lists, or user profiles.
o It provides enterprise search.
3) Office 365
Office 365 is a cloud-based, multi-platform suite designed to help your business grow. It
provides various apps like Word, Excel, PowerPoint, and more.
The key features of Office 365 are given below:
o Office 365 allows you to communicate and collaborate with co-workers, anywhere, anytime.
o It provides better security.
o It provides a simple way of creating workflows for projects.
o Using Office 365, you can insert links to stored files instead of sending entire files to co-workers, business partners, and friends.
Cloud Service Models
There are the following three types of cloud service models:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
Infrastructure as a Service (IaaS)
IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure
managed over the internet. The main advantage of using IaaS is that it helps users to
avoid the cost and complexity of purchasing and managing the physical servers.
Characteristics of IaaS
There are the following characteristics of IaaS:
o Resources are available as a service
o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google
Compute Engine (GCE), Rackspace, and Cisco Metacloud.
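The "GUI and API-based access" characteristic means an IaaS provider exposes whole servers through a single API call. As an illustrative sketch (not a deployment script), the function below uses the AWS SDK for Python (boto3); the AMI ID is a placeholder and valid AWS credentials would be required before it could actually run:

```python
def launch_server(image_id="ami-0123456789abcdef0", instance_type="t3.micro"):
    """Provision one virtual server on AWS EC2 -- infrastructure via one API call.

    The image_id above is a placeholder, not a real machine image.
    """
    import boto3  # AWS SDK for Python (pip install boto3); needs credentials
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=image_id,
        InstanceType=instance_type,
        MinCount=1,   # launch exactly one instance
        MaxCount=1,
    )
```

The user never purchases or racks a physical server; the same call can be repeated to scale out, which is what "services are highly scalable" means in practice.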
Platform as a Service (PaaS)
PaaS cloud computing platform is created for the programmer to develop, test, run, and
manage the applications.
Characteristics of PaaS
There are the following characteristics of PaaS:
o Accessible to various users via the same development application.
o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be scaled up or down as per the organization's need.
o Supports multiple languages and frameworks.
o Provides an ability to "Auto-scale".
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.
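With a PaaS, the developer supplies only the application; the platform supplies the servers, scaling, and routing. As a sketch, the minimal WSGI application below is the kind of unit a Python PaaS (for example Heroku or Google App Engine, under their standard Python runtimes) would host:

```python
def application(environ, start_response):
    """A minimal WSGI web application -- the deployable unit on a Python PaaS.

    The platform runs this behind its own web server; the developer never
    provisions or patches that server.
    """
    body = b"Hello from a PaaS-hosted app!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```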
Software as a Service (SaaS)
SaaS is also known as "on-demand software". It is software in which the applications
are hosted by a cloud service provider. Users can access these applications with the
help of an internet connection and a web browser.
Characteristics of SaaS
There are the following characteristics of SaaS:
o Managed from a central location
o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates. Updates are applied automatically.
o The services are purchased on a pay-as-per-use basis
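The pay-as-per-use characteristic can be illustrated with a toy metering calculation. All rates and tiers below are invented for illustration; real SaaS billing varies by provider:

```python
def monthly_charge(active_users, rate_per_user=8.0, free_users=5):
    """Bill only for usage beyond a free tier -- no upfront licence cost."""
    billable = max(0, active_users - free_users)
    return billable * rate_per_user

print(monthly_charge(3))    # 0.0   -- within the free tier
print(monthly_charge(25))   # 160.0 -- 20 billable users at 8.0 each
```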
IBM Smart Cloud
IBM Cloud is an open, fast, and reliable platform built with a suite of advanced data
and AI tools. It offers services such as Infrastructure as a Service, Platform as a
Service, and Software as a Service. You can access services such as compute power,
cloud data & analytics, cloud use cases, and storage networking over an internet
connection.
Features of IBM Cloud
o IBM Cloud improves operational efficiency.
o Its speed and agility improve customer satisfaction.
o It offers Infrastructure as a Service (IaaS), Platform as a Service (PaaS), as well as Software as a Service (SaaS).
o It offers various cloud communications services to your IT environment.
SAP Labs
SAP FICO (SAP Finance and SAP Controlling) is a functional component of SAP ERP. It is
used to manage the entire financial data of an organisation. The core functionality of SAP
FICO is generation and management of financial statements which are used for analysis
and reporting. This leads to better planning and critical decision-making. This module can
be easily integrated with other SAP modules such as SAP SD (Sales and Distribution), SAP
PP (Production Planning), SAP MM (Materials Management), SAP QM (Quality
Management), amongst others.
SAP FI
SAP Finance deals with reporting and accounting of overall financial data through balance
sheets or profit and loss statements. It also provides tools for compliance and auditing
which helps organizations to meet various regulatory requirements. It consists of various
sub-modules which deal with a specific area of the accounting process.
Sub-Modules of SAP FI
o General Ledger: General ledger accounts record all the business transactions that take place in the SAP system. It is updated every time a financial transaction is posted in the system by a user.
o Accounts Receivable: This sub-module captures invoicing, payments, approvals, and other related tasks for a customer. It maintains the accounting data of all customers. The data provided by AR is helpful in effective credit management.
o Accounts Payable: This sub-module captures invoicing, payments, approvals, and other related tasks for a vendor. Any financial posting in AP is automatically updated in the general ledger.
o Asset Accounting: This sub-module is used for managing and monitoring fixed assets. It is used to extract detailed information about the transactions that involve fixed assets.
o Bank Ledger: It is used for editing and displaying bank master data, processing cashed cheques, posting bills of exchange, processing returned bills of exchange, processing bank account statements, and financial status reports.
o Consolidation: It is used for managerial or legal consolidation of financial statements. It can also be used for combining statements of multiple entities to provide an overview of an organisation's financial status.
o Funds Management: It helps organizations plan and manage their financial resources by creating budgets and forecasting future cash flows. It helps organizations manage their cash and cash equivalents, including cash forecasts, cash balances, and cash transactions. It also provides tools for monitoring and controlling bank accounts and bank transactions.
o Special Purpose Ledger: It is a receiver system which is used to monitor and evaluate data entered and created in other SAP applications. It is not a sender system for other SAP applications.
o Travel Management: It is used to handle all processes which are part of handling business trips. It allows you to plan, request, and book trips along with creating expense reports for the trip.
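The double-entry behaviour of the General Ledger described above can be sketched in a few lines. This is a toy model, not SAP code; the account names and amounts are invented:

```python
# Every posting debits one account and credits another by equal amounts,
# so the ledger as a whole always sums to zero.
ledger = []

def post(debit_account, credit_account, amount):
    ledger.append((debit_account, amount))    # debit entry
    ledger.append((credit_account, -amount))  # matching credit entry

def balance(account):
    return sum(amt for acct, amt in ledger if acct == account)

post("Cash", "Sales Revenue", 500.0)    # a customer pays an invoice
post("Office Supplies", "Cash", 120.0)  # a vendor invoice is settled

print(balance("Cash"))                  # 380.0
print(sum(amt for _, amt in ledger))    # 0.0 -- the double-entry invariant
```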
SAP CO
SAP Controlling helps businesses plan and manage their financial and internal
management processes. It includes tools for cost and profit centre accounting,
activity-based costing, and internal order management. The module also provides
functionality for budgeting, forecasting, and financial reporting. It provides a
comprehensive solution for financial management and helps businesses make better
decisions by providing accurate and timely financial information.
Sub-modules of SAP CO:
o Cost Element Accounting (CEA): It deals with the recording and reporting of costs incurred in an organization. It also describes the origin of the costs that the organization incurs.
o Cost Centre Accounting (CCA): It helps to record and evaluate the costs incurred by different cost centres in an organization. It deals with the expenses of all internal departments in an organization.
o Activity-Based Costing (ABC): It is a method of measuring the cost and performance of activities and cost objects.
o Profitability Analysis (PA): It helps to determine the profitability of different products, customers, and sales channels. It supports decisions such as pricing of the product, target market segments, and channels of distribution.
o Product Cost Controlling (PC): It helps to determine the costs of products manufactured by an organization. It records manufacturing costs and optimizes efficiency in the process.
o Internal Orders: It helps to manage and track costs related to internal projects, events, non-fixed assets, and other activities.
o Profit Centre Accounting (PCA): It helps to determine the profitability of different profit centres within an organization. It deals with both the expenses and revenue of the company's business lines.
o Budgeting and Forecasting: It provides functionality for budgeting and forecasting, which helps organizations plan and manage their financial resources effectively.
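The Activity-Based Costing idea above can be made concrete with a toy calculation. All activity rates and usage figures are invented for illustration:

```python
# ABC assigns overhead to a product via the activities it consumes:
# overhead = sum over activities of (cost per run x runs consumed).
activity_rates = {"machine_setup": 50.0, "quality_check": 20.0}  # cost per run

def product_overhead(activity_usage):
    return sum(activity_rates[a] * n for a, n in activity_usage.items())

widget = {"machine_setup": 4, "quality_check": 10}
print(product_overhead(widget))  # 400.0 = 4*50 + 10*20
```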
Introduction to SAP HANA Cloud
Overview
SAP HANA Cloud provides a single place to access, store, and process all enterprise
data in real time. It is a cloud-native platform that reduces the complexity of multi-cloud
or hybrid system landscapes. SAP HANA Cloud provides all of the advanced SAP
HANA technologies for multi-model data processing in-memory or on disk. You can
benefit from cloud qualities such as automatic software updates, elasticity, and low total
cost of ownership by using SAP HANA Cloud either as a stand-alone solution or as an
extension to your existing on-premise environment.
The SAP HANA Cloud allows you to consume the SAP HANA database from
applications running on SAP Business Technology Platform, as well as from
applications running on-premise or other cloud services using the standard SAP HANA
clients. The SAP HANA Cloud provides simplified data access to connect all your
information without the need to have all data loaded into a single storage solution.
If you are familiar with multiple tenant databases in SAP HANA on-premise systems,
note that every SAP HANA Cloud, SAP HANA database instance is equivalent to a
single tenant database. For multiple databases, create multiple SAP HANA database
instances. Using SAP HANA Cloud Central or the command-line interface, you can
create and manage SAP HANA Cloud instances in your subaccount.
Developers can bind their applications deployed in the same space to database
instances. SAP Business Technology Platform applications are bound to HDI
containers; every application requires a dedicated HDI container. The SAP HANA
Deployment Infrastructure (HDI) provides a service that enables you to deploy database
development artifacts to so-called containers. This service includes a family of
consistent design-time artifacts for all key SAP HANA database features, which
describe the target (run-time) state of SAP HANA database artifacts, for example:
tables, views, or procedures. These artifacts are modeled, staged (uploaded), built, and
deployed into SAP HANA. Using HDI is not a strict requirement; schemas and database
artifacts can be created at run-time using SQL data definition language in the SQL
console. For more information, see the SAP HANA Cloud Deployment Infrastructure
Reference.
Data lake is an SAP HANA Cloud component composed of data lake Relational Engine
– which provides high-performance analysis for petabyte volumes of relational data –
and data lake Files – which provides managed access to structured, semistructured,
and unstructured data stored as files in the data lake.
Data lake is available in different configurations. You can integrate it into a SAP HANA
Cloud, SAP HANA database instance, or you can provision a standalone data lake
instance with no SAP HANA database integration. You can also enable or disable the
data lake Relational Engine component when provisioning your data lake instance.
To create and manage SAP HANA Cloud instances, use SAP HANA Cloud Central or
the command line interface.
To administer an SAP HANA database, use the SAP HANA cockpit, which provides a
range of tools for administration and monitoring. For more information, see SAP HANA
Cockpit.
To query information about an SAP HANA database and view information about your
database's catalog objects, use the SAP HANA database explorer. For more
information, see Getting Started With the SAP HANA Database Explorer.
All access to SAP HANA Cloud instances is via secure connections on SQL ports.
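Since all access is over secure SQL ports, a client application reaches the database through the standard SAP HANA client. The sketch below uses the hdbcli Python driver; the host, user, and password are placeholders, and the function only sketches the connect-query-close pattern:

```python
def query_hana(host, user, password, sql):
    """Run one SQL statement against an SAP HANA Cloud instance (sketch only)."""
    from hdbcli import dbapi  # SAP HANA client for Python (pip install hdbcli)
    conn = dbapi.connect(
        address=host, port=443,          # HANA Cloud exposes SQL on port 443
        user=user, password=password,
        encrypt=True,                    # all access is via secure connections
        sslValidateCertificate=True,
    )
    try:
        cur = conn.cursor()
        cur.execute(sql)
        rows = cur.fetchall()
        cur.close()
        return rows
    finally:
        conn.close()
```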
Storage Options
SAP HANA Native Storage Extension
SAP HANA native storage extension is a general-purpose, built-in warm data store in
SAP HANA that lets you manage less-frequently accessed data without fully loading it
into memory. It integrates disk-based database technology with the SAP HANA in-memory
database for an improved cost-to-performance ratio. For more information, see SAP HANA
Native Storage Extension.
Virtualization services provided by SAP
SAP provides a landscape administration tool called SAP Landscape Management
(SAP LaMa), formerly named SAP Landscape Virtualization Management (SAP LVM),
which enables the SAP Basis administrator to automate SAP system operations
(e.g., end-to-end SAP system copy/refresh operations).
Example of service virtualization
A financial organization has a unique combination of financial products. To ensure
test coverage, a lot of effort went into finding specific customers with the set of
products needed to test each use case. With a virtualization tool like ReadyAPI,
they can easily set up the expected responses instead.
Virtualization and its types
Virtualization is a technique for separating a service from the underlying physical
delivery of that service. It is the process of creating a virtual version of
something, such as computer hardware. It was initially developed during the
mainframe era.
SAP run on VMware
SAP on VMware Cloud. Transform your SAP landscape by virtualizing,
automating, and standardizing your software-defined data centers. Improve the
efficiency and flexibility of your environment by running SAP environments on
VMware Cloud.
Service virtualization technology in cloud computing
Service virtualization tools monitor traffic between the dependent system and the
application. They use log data to build a model that can replicate the dependent
system's responses and behavior, using inputs such as SQL statements for databases
and XML (Extensible Markup Language) messages for web services.
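The behaviour described above -- replaying a model of the dependent system instead of calling it -- can be sketched as a minimal stub. The recorded requests and responses below are invented for illustration:

```python
# A model built from recorded traffic: each (method, path) maps to the
# response the real dependent system gave.
recorded = {
    ("GET", "/customers/42"): {"id": 42, "products": ["loan", "savings"]},
    ("GET", "/customers/99"): {"error": "not found"},
}

def virtual_service(method, path):
    """Answer from the recorded model instead of the live system."""
    return recorded.get((method, path), {"error": "no recording"})

print(virtual_service("GET", "/customers/42"))
# {'id': 42, 'products': ['loan', 'savings']}
```

Tests can now exercise the application against `virtual_service` without ever touching the real backend, which is exactly what tools like ReadyAPI automate at scale.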
Different types of virtualization
You can go beyond virtual machines to create a collection of virtual resources
in your virtual environment:
 Server virtualization
 Storage virtualization
 Network virtualization
 Data virtualization
 Application virtualization
 Desktop virtualization
virtualization in SAP
Service virtualization is a concept of simulating non-SAP services to perform
testing of connected SAP systems.
Salesforce in cloud computing
Salesforce Sales Cloud is a customer relationship management (CRM)
platform designed to support sales, marketing and customer support in both
business-to-business (B2B) and business-to-customer (B2C) contexts.
Which cloud service provider does Salesforce use?
Amazon Web Services (AWS)
Salesforce, a leading customer relationship management (CRM) company, chose
Amazon Web Services (AWS) as its primary cloud provider in 2016. Today, Salesforce
and AWS have a global strategic relationship focused on technical alignment and
joint development.
Different types of Salesforce
There are four fundamental sales force structures: generalist, market-based,
product-based, and activity-based. These four structures are contrasted in the sales
force structure cube shown in Figure 4-5. A fifth structure, the mixed organization,
is a hybrid of two or more of the four fundamental types.
Which cloud is most used in Salesforce?
Sales Cloud: Sales Cloud is Salesforce's flagship product and is their most popular
CRM software. It helps sales teams manage their customer relationships and
provides them with powerful tools to increase productivity and close more deals.
SALES CLOUD
Sales Cloud is focused on offering functionality to sales reps and sales
managers, with a focus on account acquisition, the sales funnel, and closing deals.
But that doesn't mean Sales Cloud is only about closing deals—it can also support
other teams, too.
Is sales Cloud the same as CRM?
You may have heard the terms Salesforce, Salesforce CRM, and Sales Cloud
used interchangeably. In most cases, people using these names will be referring to
the customer relationship management (CRM) solution created by Salesforce, the
company.
Sales Cloud vs service Cloud
The main difference between Sales Cloud and Service Cloud is that Sales Cloud
helps streamline sales efforts, while Service Cloud helps support agents provide
excellent customer service, and resolve issues before they become a problem.
Is Sales Cloud an app?
Salesforce Sales Cloud is a cloud-based Customer Relationship Management (CRM)
application from Salesforce.
SERVICE CLOUD
Service Cloud enables users to automate service processes, streamline
workflows and find key articles, topics and experts to support customer service
agents. The purpose is to foster one-to-one marketing relationships with every
customer across multiple channels and devices.
Benefits of service Cloud
Service Cloud offers you the advantage of LiveMessage, a customer service
platform that allows you to talk to your customers on their desired channel,
whether it is Facebook Messenger or text messages. This ensures that you can get
in touch with your customers easily and cost-effectively.
Who uses Service Cloud?
Top 20 US companies that use Salesforce Service Cloud (excerpt):

Company Name         Website              Revenue
Design Within Reach  www.dwr.com          2.4 billion USD
Facebook             www.facebook.com     85.96 billion USD
Farmers Insurance    www.farmers.com      11.65 billion USD
Wells Fargo Bank     www.wellsfargo.com   82.407 billion USD
I. Knowledge as a Service (KaaS) combines technology and talent to
deliver knowledge, information, and expertise through a cloud-based platform or
software solution. It can include anything from databases, research, and model
content to professional expertise and analysis.
Example of knowledge as a service
Some providers rely on human curators or subject-matter experts to layer
context on a set of information. These providers are often niche providers dealing
in relatively smaller collections of information. One example of this form of
knowledge as a service provider could be a natural resource prospecting firm.
Examples of knowledge
Knowledge is often distilled into four types: Factual, Conceptual, Procedural, and
Metacognitive.
II. Rackspace
The Rackspace Cloud is a set of cloud computing products and services billed
on a utility computing basis from the US-based company Rackspace. Offerings
include Cloud Storage ("Cloud Files"), virtual private server ("Cloud Servers"),
load balancers, databases, backup, and monitoring.
Rackspace as an example
Amazon EC2 and Rackspace Cloud are examples of IaaS. Platform as a Service
(PaaS) clouds are created, many times inside IaaS Clouds by specialists to render
the scalability and deployment of any application trivial and to help make your
expenses scalable and predictable.
III. VMware
As its name implies, the use of VMware – or 'Virtual Machine' ware – creates a
virtual machine on your computer. This can help businesses better manage their
resources and make them more efficient.
VMware cloud service
VMware Cloud services enable you to determine how resources are used and
where workloads are deployed while applying a single operational model. This
enables you to standardize security, reduce management complexity, and improve
your ROI.
What type of service is VMware?
VMware is a virtualization and cloud computing software provider based in
Palo Alto, Calif. Founded in 1998, VMware is a subsidiary of Dell Technologies.
Is VMware SaaS or IaaS?
VMware defines these service layers as: • Infrastructure as a Service (IaaS) –
Infrastructure containers are presented to consumers to provide agility, automation,
and delivery of components.
IV. Manjrasoft Aneka Platform
Aneka is a cloud application platform. It allows developers to build, deploy, and
manage their applications on private or public clouds. It provides a set of tools and
services for developing cloud applications. It manages the underlying
infrastructure.
Aneka use
Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud
Computing. It acts as a framework for building customized applications and
deploying them on either public or private Clouds.
3 services installed in the Aneka container
Services are divided into fabric, foundation, and execution services. Foundation
services identify the core system of the Aneka middleware, which provides a set of
infrastructure features that enable Aneka containers to perform specialized tasks.
Benefits of Aneka in cloud computing
Aneka is one of the cloud computing platforms for deploying clouds and developing
applications on top of them. It provides a runtime environment and a set of APIs
(Application Program Interfaces) that allow developers to build .NET applications
that run their computations on either public or private clouds.
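Aneka itself targets .NET, so the sketch below is a language-neutral analogy only, not Aneka code: its Task Model (independent units of work distributed across worker containers) resembles an executor pool from Python's standard library:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(n):
    """An independent task with no shared state -- the Task Model's unit of work."""
    return f"frame-{n} done"

# The pool stands in for Aneka's worker containers: tasks are submitted,
# scheduled onto available workers, and results collected in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_frame, range(5)))

print(results[0])  # frame-0 done
```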