
21ODMCT712 Cloud Computing

21ODMCT712-Cloud Computing
Unit-1
Cloud Computing Fundamentals: Computing paradigms, Definition, NIST Model,
Characteristics of Cloud Computing Model, Benefits of Cloud Computing, Cloud
Computing Vs Cloud Services.
Unit-2
Layers in Cloud Computing: IaaS, PaaS, SaaS, DBaaS, BPaaS
Unit-3
Types of Cloud Computing: Public, Private, Hybrid, Community, Layered
Architecture of Cloud Computing and compare it with traditional Client/Server
architecture. Pros and Cons of Cloud Computing, applications. Key Technologies
Enabling
Unit-4
Cloud Computing: Distributed Systems, Mainframe Computing, Cluster Computing,
Grid Computing, Web 2.0, Service Oriented Computing
Unit-5
Cloud Computing Architecture: Distributed Application Design, Automated
Resource Management, Virtualisation, Distributed Processing Design
Unit-6
Cloud Service Management: Service Level Agreement, Service Provider, Role of
service provider in Cloud computing
Unit-7
Scalability: Scale up and Scale Down Services. Cloud Economics and adopt services
using by Amazon, Google App Engine, Microsoft, etc.
Unit-8
Microsoft Azure: Introduction, architecture, Difference between Azure Resource
Manager (ARM) & Classic Portal, Configuration, Diagnostics, Monitoring and
Deployment of web apps.
Unit-9
Resource Management: Introduction to Resource Management, Provision of
resource allocation in cloud computing
Unit-10
Virtualization: Concept of virtualization, Taxonomy of Virtualization Techniques,
Pros and cons of Virtualization, Virtual Machine provisioning and lifecycle,
Virtualization at the infrastructure level, CPU Virtualization, A discussion on
Hypervisors Storage, Virtualization Cloud Computing Defined. Load Balancing.
Requirements, Introduction Cloud computing architecture, On Demand Computing
Unit-11
Data Management: Challenges with data. Data centers, Storage of data and
databases, Data Privacy and Security Issues at different level.
Unit-12
Traffic Manager: Introduction, Benefits, Managing traffic between data centers.
Unit-13
Cloud Storage: Storage account, Storage Replications: LRS, ZRS, GRS, RA-GRS
Unit-14
Types of storage: blob, file, table, and queue
Unit-15
Security: Benefits, security service providers, Identity and Access Management,
AAA administration for Clouds.
UNIT 1: CLOUD COMPUTING FUNDAMENTALS 1
STRUCTURE
1. Learning Objectives
2. Introduction
3. Cloud Computing Paradigm
4. Cloud Computing-Definition and Concepts
5. NIST Model
6. Types of Cloud Computing: Public, Private, Hybrid, Community
7. Summary
8. Key Words/Abbreviations
9. Learning Activity
10. Unit End Questions (MCQ and Descriptive)
11. References
LEARNING OBJECTIVES
At the end of the unit, the learner will be able to understand and have knowledge of the
following aspects of Cloud Computing:
• Introduction to Cloud Computing
• Understanding the NIST Model
• Introduction to Types of Cloud Computing
INTRODUCTION
In the past decade, information technology (IT) has embarked on the cloud computing
paradigm. Although cloud computing is only a different way to deliver computer resources,
rather than a new technology, it has sparked a revolution in the way organizations provide
information and service.
Originally IT was dominated by mainframe computing. This sturdy configuration eventually
gave way to the client-server model. Contemporary IT is increasingly a function of mobile
technology, pervasive or ubiquitous computing, and of course, cloud computing. But this
revolution, like every revolution, contains components of the past from which it evolved.
Thus, to put cloud computing in the proper context, keep in mind that the DNA of cloud
computing essentially derives from its predecessor systems. In many ways, this
momentous change is a matter of “back to the future” rather than the definitive end of the
past. In the brave new world of cloud computing, there is room for innovative collaboration
of cloud technology and for the proven utility of predecessor systems, such as the powerful
mainframe. This veritable change in how we compute provides immense opportunities for IT
personnel to take the reins of change and use them to their individual and institutional
advantage.
CLOUD COMPUTING PARADIGM
In Cloud Computing, scalable resources are provisioned dynamically as a service over the
Internet, promising significant monetary benefits to its adopters. Different layers are outlined
based on the kind of services provided by the Cloud. Moving from bottom to top, the bottom
layer contains basic hardware resources such as memory and storage servers; hence it is
denoted as Infrastructure-as-a-Service (IaaS). Distinguished examples of IaaS are Amazon
Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2). The layer above IaaS
is Platform-as-a-Service (PaaS), which mainly supports deployment and dynamic scaling of
Python- and Java-based applications. One example of PaaS is Google App Engine. On top of
PaaS sits a layer that offers customers the capability to use hosted applications, referred to as
Software-as-a-Service (SaaS). SaaS supports accessing a user's applications through a browser
without requiring any hardware or software to be installed. This approach has proven to be a
universally accepted and trusted service. An Internet connection and a browser are the two
components required to access these Cloud services. Accessing IaaS applications requires
more Internet bandwidth, whereas a web browser with reasonable Internet bandwidth is
sufficient to access SaaS and PaaS applications.
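The layer taxonomy just described can be captured in a small lookup table. The sketch below is purely illustrative: it uses only the example services named in the text, and the `layer_of` helper is a hypothetical function, not part of any real cloud API.

```python
# Toy model of the three service layers described above.
# The example services are illustrative, not exhaustive.
SERVICE_LAYERS = {
    "IaaS": {"examples": ["Amazon EC2", "Amazon S3"],
             "provides": "raw hardware resources (compute, storage, network)"},
    "PaaS": {"examples": ["Google App Engine"],
             "provides": "deployment platform and runtime for applications"},
    "SaaS": {"examples": ["Salesforce.com", "Google Apps"],
             "provides": "complete applications accessed through a browser"},
}

def layer_of(service_name):
    """Return the layer a given example service belongs to, or None."""
    for layer, info in SERVICE_LAYERS.items():
        if service_name in info["examples"]:
            return layer
    return None

print(layer_of("Amazon EC2"))         # IaaS
print(layer_of("Google App Engine"))  # PaaS
```

Moving down the table, each layer depends on the one beneath it: SaaS applications run on PaaS platforms, which in turn run on IaaS hardware.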
The word “cloud” was a euphemism for everything that was beyond the data center or out on
the network. There are several definitions of a cloud assumed by different categories of cloud
users. It is mostly described as software as a service, where users can access a software
application online, as in Salesforce.com, Google Apps and Zoho. It is also described in the
form of infrastructure as a service, where a user does not own the infrastructure but rents it
over time on a server and accesses it through a service such as Amazon Elastic Compute Cloud
(EC2).
Another form of a Cloud is Platform as a service in which certain tools are made available to
build software that runs in the host cloud. Basically a cloud is built over a number of the data
centers, which reflects the Web’s context for loosely coupled systems (i.e. two systems don’t
know about each other), and provides the ability to have virtualized remote servers through
standard Web services to have large computing power. Cloud paradigm also serves as a
business model apart from technology. Through the business model, the cloud makes a new
form of computing widely available at prices so low that they would once have been
considered impossible.
CLOUD COMPUTING-DEFINITION AND CONCEPTS
WHAT IS CLOUD?
The term Cloud refers to a network or the Internet. In other words, we can say that the Cloud
is something which is present at a remote location. The Cloud can provide services over public
and private networks, i.e., WAN, LAN or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM)
execute on the cloud.
WHAT IS CLOUD COMPUTING?
Cloud Computing refers to manipulating, configuring, and accessing hardware and
software resources remotely. It offers online data storage, infrastructure, and applications.
Figure 1.1
Cloud computing offers platform independency, as the software is not required to be installed
locally on the PC. Hence, Cloud Computing is making our business applications mobile and
collaborative.
HISTORY OF CLOUD COMPUTING
The concept of Cloud Computing came into existence in the 1950s with the implementation
of mainframe computers, accessible via thin/static clients. Since then, cloud computing has
evolved from static clients to dynamic ones and from software to services. The following
diagram explains the evolution of cloud computing:
Figure 1.2
BENEFITS
Cloud Computing has numerous advantages. Some of them are listed below:
• One can access applications as utilities, over the Internet.
• One can manipulate and configure the applications online at any time.
• It does not require installing software to access or manipulate cloud applications.
• Cloud Computing offers online development and deployment tools and a programming
runtime environment through the PaaS model.
• Cloud resources are available over the network in a manner that provides platform-independent
access to any type of client.
• Cloud Computing offers on-demand self-service. The resources can be used without
interaction with the cloud service provider.
• Cloud Computing is highly cost effective because it operates at high efficiency with
optimum utilization. It just requires an Internet connection.
• Cloud Computing offers load balancing, which makes it more reliable.
Figure 1.3
RISKS RELATED TO CLOUD COMPUTING
Although cloud Computing is a promising innovation with various benefits in the world of
computing, it comes with risks. Some of them are discussed below:
Security and Privacy
It is the biggest concern about cloud computing. Since data management and infrastructure
management in the cloud are provided by a third party, it is always a risk to hand over
sensitive information to cloud service providers.
Although the cloud computing vendors ensure highly secured password protected accounts,
any sign of security breach may result in loss of customers and businesses.
Lock In
It is very difficult for the customers to switch from one Cloud Service Provider (CSP) to
another. It results in dependency on a particular CSP for service.
Isolation Failure
This risk involves the failure of isolation mechanism that separates storage, memory, and
routing between the different tenants.
Management Interface Compromise
In case of public cloud provider, the customer management interfaces are accessible through
the Internet.
Insecure or Incomplete Data Deletion
It is possible that data requested for deletion may not actually get deleted. This happens for
either of the following reasons:
• Extra copies of the data are stored but are not available at the time of deletion.
• The disk that is to be destroyed also stores data of multiple tenants.
CHARACTERISTICS OF CLOUD COMPUTING
There are five key characteristics of cloud computing. They are shown in the following
diagram:
Figure 1.4
On Demand Self Service
Cloud Computing allows users to use web services and resources on demand. One can
log on to a website at any time and use them.
Broad Network Access
Since cloud computing is completely web based, it can be accessed from anywhere and at any
time.
Resource Pooling
Cloud computing allows multiple tenants to share a pool of resources. Tenants can share a
single physical instance of hardware, database and basic infrastructure.
Rapid Elasticity
It is very easy to scale the resources vertically or horizontally at any time. Scaling of resources
means the ability of resources to deal with increasing or decreasing demand.
The resources being used by customers at any given point of time are automatically monitored.
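The scale-up/scale-down decision described above can be sketched in a few lines. The thresholds, limits, and the `scale_instances` function below are invented for illustration; real autoscalers use provider-specific policies and cooldown timers.

```python
import math

def scale_instances(current, load_per_instance_pct, target_pct=70, min_n=1, max_n=20):
    """Return a new instance count that moves average load toward the target.

    All numbers here are illustrative: target_pct is the desired average
    utilization, and min_n/max_n clamp the fleet size.
    """
    total_load = current * load_per_instance_pct   # total demand, in percent-units
    desired = math.ceil(total_load / target_pct)   # instances needed at target load
    return max(min_n, min(max_n, desired))         # clamp to the allowed range

# Scale up: 4 instances each at 90% load -> grow the fleet
print(scale_instances(4, 90))   # 6
# Scale down: 4 instances each at 20% load -> shrink the fleet
print(scale_instances(4, 20))   # 2
```

The same rule handles both directions of elasticity: the monitored load drives the instance count up under demand spikes and back down when demand falls.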
Measured Service
In this service the cloud provider controls and monitors all aspects of the cloud service.
Resource optimization, billing, and capacity planning depend on it.
NIST CLOUD COMPUTING
Although NIST is credited with having the most succinct and accurate definition of Cloud
Computing, the term itself was first coined nearly 15 years earlier, when Netscape's Web browser
was big news. In 2011, NIST defined cloud computing as a model for enabling ubiquitous,
convenient, on-demand network access to a shared pool of configurable computing resources
(e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and
released with minimal management effort or service provider interaction. This short
description is intended to serve as a means for broad comparisons of cloud services and
deployment strategies while providing a baseline for discussion on the overall best uses for
cloud computing.
NIST's definition identified self-service; accessibility from desktops, laptops, and mobile
phones; resources that are pooled among multiple users and applications; elastic resources that
can be rapidly reapportioned as needed; and measured service as the five essential
characteristics of cloud computing. When these characteristics are combined, they create a
cloud computing infrastructure that contains both a physical layer and an abstraction layer. The
physical layer consists of the hardware resources that support the cloud services (i.e., servers,
storage, and network components). The abstraction layer consists of the software deployed
across the physical layer, thereby expressing the essential characteristics of the cloud per
NIST's definition.
FIGURE 1.5
TYPES OF CLOUD COMPUTING: PUBLIC, PRIVATE, HYBRID,
COMMUNITY
BASIC CONCEPTS
There are certain services and models working behind the scenes that make cloud computing
feasible and accessible to end users. Following are the working models for cloud computing:
• Deployment Models
• Service Models
Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located.
Cloud can have any of the four types of access: Public, Private, Hybrid, and Community.
Figure 1.6
Public Cloud
The public cloud allows systems and services to be easily accessible to the general public.
Public cloud may be less secure because of its openness.
A public cloud is a deployment model that is owned by cloud service providers and made
available to the public. Customers can gain new capabilities on demand without investing in
new hardware or software by tapping into the public cloud. Customers simply pay their cloud
provider a subscription fee or pay only for the resources they wish to use. The vendor is
then responsible for all the administration, maintenance, capacity planning, backups, and
troubleshooting. Each public cloud can simultaneously handle massive amounts of storage,
which allows businesses to handle multiple projects and become more available to their
users at a moment's notice.
Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is
more secured because of its private nature.
Private cloud computing is a deployment model that is purchased and dedicated to a single
client or company in a single-tenant environment where the hardware, storage and network
assume the highest levels of security. Data that is stored in the private cloud's data centre cannot
be accessed by anyone other than the client that owns it. This is a great solution for
organizations that feel as though their data is too sensitive or valuable to put on a public,
community or hybrid cloud.
The private cloud also gives administrators the ability to automate their data centre thereby
minimizing manual provisioning and management which is incredibly important for safe and
secure day-to-day operations to flourish. Better yet, the private cloud is a great solution for
firms wishing to stay PCI and HIPAA compliant as this model allows sensitive data to be
delivered through a fully private cloud deployment within the network configurations that only
they own.
Figure 1.7
Community Cloud
The community cloud allows systems and services to be accessible by a group of
organizations.
NIST defines a community cloud deployment model as one that is used exclusively by a
specific community of consumers from organizations that have shared concerns (e.g., mission,
security requirements, policy, and compliance considerations). It may be owned, managed, and
operated by one or more of the organizations in the community, a third party, or some
combination of them, and it may exist on or off premises. This multi-tenant platform allows
several companies to work on the same platform if they share similar needs and concerns.
Community clouds allow companies to collaborate on joint projects, applications, or research
in a secure setting. This deployment model is great for organizations that need to test-drive
their high-end security products that are driven by compliance and regulatory measures.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public
cloud.
Hybrid cloud deployment models are a collaboration of private and public cloud models in a
single environment. Hybrid clouds are comprised of parallel environments where applications
can easily move between private and public clouds. Hybrid clouds are bound together by
proprietary technology that enables data and application portability. Hybrid clouds offer
IT teams more flexibility, portability, and scalability than other deployment models, which is
the main reason why 58% of global enterprises have integrated a hybrid cloud architecture into
their IT infrastructure. Companies that are constantly transitioning between managing public
cloud projects and building applications of a sensitive nature on their private cloud are likely
to seek out a hybrid cloud solution.
SERVICE MODELS
Cloud computing is based on service models. These are categorized into three basic service
models:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)
Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service,
Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service, and Strategy-as-a-Service.
Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of the service
models inherits the security and management mechanisms of the underlying model, as shown
in the following diagram:
Figure 1.8
Platform as a Service (PaaS): PaaS is a deployment and development platform for
applications provided as a service to developers over the Web. Third-party vendors develop
and deploy software or applications to the end users through the Internet and servers. The cost
and complexity of development and deployment of applications can be reduced to a great
extent by developers using this service. Thus the developers can reduce the cost of buying,
and the complexity of managing, the required infrastructure. It provides all of the
services required to build and deliver web services, supporting the complete life cycle of
web applications entirely from the Internet. This platform consists of infrastructure software,
databases, middleware, and development tools.
Infrastructure as a Service (IaaS): IaaS is a delivery model that provides hardware and
supporting software as a service: hardware such as storage, servers, and network, along with
supporting software such as the operating system, virtualization technology, and file system.
It is an evolution of traditional hosting that allows users to provision resources on demand
without requiring any long-term commitment. Unlike PaaS services, the IaaS provider does
very little management of data other than keeping the data center operational. Deployment
and management of the software services must be done by the end users, just as they would
in their own data center.
Software as a Service (SaaS): SaaS allows access to programs by large numbers of users
through a browser. For a user, this can save costs on software and servers. Service
providers need to maintain only one program, which also saves space and cost.
Typically, a SaaS provider gives multiple clients and users access to applications over the Web
by hosting and managing the given application in their own or leased datacenters. SaaS
providers may also run their applications on platforms and infrastructure provided by other
cloud providers.
SUMMARY
Delivery of Information and Communication Technologies (ICT) services as a utility has
recently received significant consideration through Cloud computing. Cloud computing
technologies will provide scalable, on-demand, pay-per-use services to customers through
distributed data centers. Still, this paradigm is in its infant stage and many challenging issues
have to be addressed. Accordingly, in this chapter, the basics of the cloud computing paradigm
are first introduced, and various cloud computing services are discussed subsequently.
Today, cloud computing has quickly become a buzzword not only in the IT industry but also in
other sectors, such as banking, finance, education, health, utilities, airlines, retail, real estate,
and telecom. Actually, many e-commerce activities have been utilizing many cloud applications
one way or the other. However, when people encounter the definition of “cloud computing,”
they are often puzzled or confused because there are so many different definitions. According
to the research of Luis M. Vaquero et al., there were over 22 different definitions in 2008 alone.
People often ask, “What does cloud computing really mean?” The common answer is again
very tactful: “It is really dependent on what you mean.” The answer actually indicates the
subjectiveness of the cloud definition and the broad spectrum of meanings for the cloud.
When people talk about the cloud-computing paradigm, they often have different purposes in
mind. Thus, the term “cloud” inevitably covers so many aspects of computing. It signifies that
the entire IT industry is transforming from a physical world towards a virtual world. It is not
just an incremental change. It is the latest profound transformation for the IT industry.
KEY WORDS/ABBREVIATIONS
• Artificial intelligence (AI): The capability of a computer system to imitate human
intelligence. Using math and logic, the computer system simulates the reasoning that
humans use to learn from new information and make decisions.
• Business analytics tools: Tools that extract data from business systems and integrate it
into a repository, such as a data warehouse, where it can be analysed.
• Business intelligence (BI) tools: Tools that process large amounts of unstructured data
in books, journals, documents, health records, images, files, email, video and so forth, to
help you discover meaningful trends and identify new business opportunities.
• Cloud: A metaphor for a global network, first used in reference to the telephone
network and now commonly used to represent the Internet.
• Cloud bursting: A configuration which is set up between a private cloud and a public
cloud. If 100 percent of the resource capacity in a private cloud is used, then overflow
traffic is directed to the public cloud using cloud bursting.
• Cloud computing: A delivery model for computing resources in which various servers,
applications, data and other resources are integrated and provided as a service over the
Internet. Resources are often virtualised.
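The cloud-bursting idea defined in the key words above can be sketched as a simple routing rule: requests fill the private cloud first, and only the overflow is directed to the public cloud. The function and units below are invented for illustration, not drawn from any real traffic manager.

```python
def route_request(private_used, private_capacity, demand):
    """Split incoming demand between private and public clouds (cloud bursting).

    private_used / private_capacity / demand are in arbitrary capacity units;
    this is a toy sketch of the overflow rule, not a real routing API.
    """
    free = max(0, private_capacity - private_used)   # remaining private headroom
    to_private = min(demand, free)                   # fill the private cloud first
    to_public = demand - to_private                  # overflow bursts to public
    return {"private": to_private, "public": to_public}

# Private cloud at 80/100 capacity, 30 units of new demand arrive:
print(route_request(80, 100, 30))   # {'private': 20, 'public': 10}
```

When the private cloud has spare capacity, nothing reaches the public cloud; once it is full, all further traffic bursts outward.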
LEARNING ACTIVITY
1. Make a group of four and draw a differentiation chart on various types of Cloud
computing
_____________________________________________________________________________
_________________________________________________________________
2. Discuss the implementation factors of the NIST model in the organization.
_____________________________________________________________________________
_________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. What is cloud computing?
2. What are the benefits of cloud computing?
3. What is a cloud?
4. What are the different data types used in cloud computing?
5. What are different Types of Cloud Computing?
B. Multiple Choice Questions
1. _________ computing refers to applications and services that run on a distributed network
using virtualized resources.
a) Distributed
b) Cloud
c) Soft
d) Parallel
2. ________ as a utility is a dream that dates from the beginning of the computing industry
itself.
a) Model
b) Computing
c) Software
d) All of the mentioned
3. Which of the following cloud concept is related to pooling and sharing of resources?
a) Polymorphism
b) Abstraction
c) Virtualization
d) None of the mentioned
4. ________ has many of the characteristics of what is now being called cloud computing.
a) Internet
b) Software
c) Web Service
d) All of the mentioned
5. Which of the following can be identified as cloud?
a) Web Applications
b) Intranet
c) Hadoop
d) All of the mentioned
Answers
1. b
2. b
3. c
4. a
5. c
REFERENCES
• The NIST Definition of Cloud Computing, NIST.
• Wang (2012). "Enterprise cloud service architectures". Information Technology and
Management. 13 (4): 445–454. doi:10.1007/s10799-012-0139-4. S2CID 8251298.
• "What is Cloud Computing?". Amazon Web Services. 2013-03-19. Retrieved 2013-03-20.
• Baburajan, Rajani (2011-08-24). "The Rising Cloud Storage Market Opportunity
Strengthens Vendors". It.tmcnet.com. Retrieved 2011-12-02.
• Oestreich, Ken (2010-11-15). "Converged Infrastructure". CTO Forum. Thectoforum.com.
Archived from the original on 2012-01-13. Retrieved 2011-12-02.
• Ted Simpson, Jason Novak, Hands on Virtual Computing, 2017, ISBN 1337515744, p. 451.
UNIT 2: LAYERED ARCHITECTURE OF CLOUD COMPUTING
STRUCTURE
1. Learning Objectives
2. Introduction
3. Layered Architecture of Cloud Computing
4. Difference between Client Server Architecture and Cloud Computing
5. Pros and Cons of Cloud Computing
6. Summary
7. Key Words/Abbreviations
8. Learning Activity
9. Unit End Questions (MCQ and Descriptive)
10. References
LEARNING OBJECTIVES
At the end of the unit, the learner will be able to understand and have knowledge of the
following aspects of Cloud Management and Administration:
• Introduction to Layered Architecture of Cloud Computing
• Difference between Cloud Computing and Client Server Architecture
• Advantages and Disadvantages of Cloud Computing
INTRODUCTION
The term Cloud Computing appears in a Google search nearly 54 million times, yet the Cloud
remains an elusive entity to the general population. Those who fit into this category see
cloud-based computing as a near-magical technology that whisks your data into another
dimension for you to summon at a moment's notice (which sounds pretty wizard-like). For
those who work with the technology daily and understand its capabilities, the technology is
much simpler than others would make it seem, even though it does have some technical
nuances.
These nuances can sometimes create confusion as to which category of cloud infrastructure an
individual or organization should utilize to fit their data storage or migration needs. Thankfully,
the National Institute of Standards and Technology (NIST) comprehensively outlined the
definition of Cloud Computing in its September 2011 publication so that IT professionals can
get a better understanding of each category of cloud platform. The highly technical topics that
go along with NIST's definition of cloud computing are enough to turn any mortal's brain to
mush, so we're going to break it all down in an easy-to-digest format using language that even
your non-technical parents can appreciate.
LAYERED ARCHITECTURE OF CLOUD COMPUTING
The figure provides a high-level architecture of cloud computing along with the security
issues at different layers. This section elaborates the functionalities performed at different
layers and also the security requirements at different layers.
USER LAYER
Different types of users, such as customers, application programmers, and administrators,
interact with cloud software through the user layer. This layer consists of two sub-layers.
Application Sub-Layer: The cloud applications are visible through the user layer to the
end-users of the cloud. Normally, the applications are accessed through web portals by the
users, who from time to time are required to pay a fee to use them. The overhead of software
maintenance is handled by this sub-layer, along with the ongoing operation and support costs.
Furthermore, it moves the computing tasks from the user terminal to servers in the
datacenters where the cloud applications are deployed. This in turn minimizes the
hardware requirements from the user's point of view and permits them to gain higher
performance. This approach supports efficient processing of CPU-intensive and
memory-intensive workloads of the users without any huge capital investments in their local
machines.
Figure 2.1
Thus, this sub-layer even simplifies the work of code upgradation and testing, while
protecting intellectual property, from the service provider's point of view. Developers can add
new features through patches easily, without disturbing the end users, as the cloud application
is deployed at the provider's computing infrastructure rather than on the user machines.
Configuration and testing of an application are less complicated using this sub-layer
functionality, since the deployment environment is restricted to the provider's datacenter. In
terms of profit margin to the provider, a continuous flow of revenue is supplied through this
sub-layer, which brings more profits over a period of time.
In spite of all the benefits and advantages of this sub-layer functionality, a number of
deployment issues hinder its broad acceptance. More specifically, the security and availability
of cloud applications are two major challenges that have a direct impact on
Service Level Agreements (SLAs). In addition, managing availability is a challenge that
providers and users of SaaS have to deal with due to possible network outages and system
failures. Additionally, the migration of user data and the integration of legacy applications to
the cloud is another challenge that is also time-consuming for the adoption of SaaS.
Programming Environment Sub-Layer: The users of this layer are cloud application
developers responsible for the development and deployment of applications on the cloud. The
cloud service providers support the development environment with a necessary set of defined
APIs. Developers interact with the environments through the available APIs, which
accelerates deployment and provides scalability support.
Google's App Engine is one example system in this category; it supports a Python runtime
environment and APIs to interact with Google's cloud runtime environment. Through this
approach, the implementation of automatic scaling and load balancing becomes easy for
developers building their cloud application for a cloud programming environment. Integration
of other services (e.g. email, authentication, user interface) offered by the PaaS provider also
becomes easy. Hence, to a large extent the additional effort required to develop cloud
applications can be reduced and is managed at the environment level. In addition, the
developers possess the capability of integrating other services with their applications as and
when necessary. This makes the development of a cloud application a simple task and also
speeds up the development time. In this connection, Hadoop, which supports a deployment
environment on the cloud, can be considered cloud programming, as application developers
are offered a development environment, also referred to as the Map/Reduce framework, for
the cloud. The process of developing cloud applications becomes easy through these cloud
software development environments.
CLOUD SERVICE MANAGEMENT LAYER
This layer provides management of the applications and virtualized infrastructure for
business solutions. It is responsible for providing virtualized resources for services such as
service level management, policy management, metered usage, license management, and
disaster recovery. This layer supports scaling of applications through dynamic provisioning:
allocating resources to applications on demand minimizes the underutilization of resources.
Key components of the Cloud Service Management layer are listed below.
SLA Monitor: When a customer first submits a service request, the SLA Monitor interprets the request and evaluates its QoS requirements to determine whether to accept or reject it. It is also responsible for monitoring the progress of the submitted job. If the SLA Monitor observes any violation of the SLA, it must act immediately to take corrective action.
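The accept-or-reject decision described above can be sketched as a simple admission check. The field names (`cpu_cores`, `max_latency_ms`, and so on) are hypothetical, not any real provider's API:

```python
def admit_request(request, capacity):
    """Accept a service request only if its QoS demands fit current capacity.

    request  -- the customer's QoS requirements (hypothetical fields)
    capacity -- what the provider can currently offer
    """
    if request["cpu_cores"] > capacity["free_cores"]:
        return False  # accepting would violate the SLA: not enough compute
    if request["max_latency_ms"] < capacity["best_latency_ms"]:
        return False  # provider cannot meet the requested latency target
    return True
```

A real SLA monitor would also track the job after admission and trigger corrective action on violations; this sketch covers only the initial admission decision.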
Resource Provisioning: This mechanism tracks the availability of VMs and their resource requirements. It manages the different requests arriving at the virtual servers by creating multiple copies of VMs. The resource provisioner adjusts itself dynamically so that processing is completed as required even at peak loads.
Sequencer & Scheduler: Based on information from the SLA Monitor and Resource Provisioning, the sequencer arranges or prioritizes jobs according to the objectives of the service provider. The scheduler then makes effective resource allocations using the latest status information from Resource Provisioning regarding resource availability and workload processing.
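A minimal sketch of the sequencer side of this pairing, using a priority queue. The job names and priority values are illustrative assumptions; real provider objectives would compute the priority from SLA data:

```python
import heapq

class Sequencer:
    """Orders submitted jobs by a provider-defined priority value
    (lower value = dispatched first)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: preserves submission order

    def submit(self, job_id, priority):
        heapq.heappush(self._heap, (priority, self._counter, job_id))
        self._counter += 1

    def next_job(self):
        # Scheduler side: take the highest-priority job for dispatch.
        return heapq.heappop(self._heap)[2]

seq = Sequencer()
seq.submit("nightly-backup", priority=5)
seq.submit("payroll-run", priority=1)
seq.submit("usage-report", priority=5)
```

Jobs with equal priority are dispatched in submission order, which is one common (though not the only) fairness choice.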
Dispatcher: This module controls the resources selected and assigned by the scheduler to a process. Its work involves context switching, switching of user, jumping to the proper location in the user program to restart that program, and dispatch latency (the time required by the dispatcher to stop one process and start another). It is also responsible for starting the execution of selected service requests on the allocated Virtual Machines.
Accounting: It maintains a record of the actual resource usage by service requests in order to compute the final cost and charge the users. In addition, resource allocation decisions can be improved through the use of historical usage information.
Metering: Users are billed based on their usage of the system. Usually, billing is based on CPU usage per hour or the rate of data transfer per hour. This mechanism also provides customers with information about pricing policies and the different types of services. The customer needs only to select the level or quality of service by stating QoS requirements, without needing to know how the cloud provides the service.
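The pay-per-use billing just described can be sketched in a few lines. The rate constants below are invented for illustration; they are not any provider's actual prices:

```python
def compute_bill(cpu_hours, gb_transferred, cpu_rate=0.05, transfer_rate=0.09):
    """Pay-per-use bill: CPU-hours plus data transfer, at illustrative rates.

    cpu_rate      -- price per CPU-hour (hypothetical)
    transfer_rate -- price per GB transferred (hypothetical)
    """
    return round(cpu_hours * cpu_rate + gb_transferred * transfer_rate, 2)
```

For example, 100 CPU-hours plus 50 GB of transfer at these rates comes to 100 × 0.05 + 50 × 0.09 = 9.50 in whatever currency the plan uses.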
Load Balancer: This mechanism contains algorithms for mapping virtual machines onto physical machines in a cloud computing environment, for identifying idle virtual machines, and for migrating virtual machines to other physical nodes. Whenever a user submits an application workload to the cloud system, a new virtual machine can be created. The mapping algorithm of the Load Balancer then generates a virtual machine placement scheme, assigns the necessary resources, and deploys the virtual machine onto the identified physical resource. Unmanaged and forgotten virtual machines can consume datacenter resources and waste energy, so another Load Balancer algorithm identifies idle virtual machines and shuts them off. To place a virtual machine optimally onto its destination, existing virtual machines may need to be relocated; for this, the virtual machine migration algorithm of the Load Balancer is invoked. In summary, the Load Balancer has the following three sub-modules.

Migration Manager: It triggers live migration of VMs onto physical servers depending on information provided by the VM Mapper. It can also turn a server ‘on’ or ‘off’.

Monitoring Service: This module collects parameters such as application status, workload, resource utilization, and power consumption. It works as a global information provider, supplying monitoring data to support the intelligent actions taken by the VM Mapper. The status information is also used to arrest the sprawl of unmanaged and forgotten virtual machines.

VM Mapper: This algorithm optimally maps incoming workloads (VMs) onto the available physical machines. It collects information from the Monitoring Service from time to time and makes decisions on the placement of virtual machines. The VM Mapper searches for the optimal placement using a genetic algorithm, described in the next chapter.
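To make the VM Mapper's placement decision concrete, here is a sketch using a simple first-fit heuristic over CPU capacity. This is a deliberately simpler stand-in for the genetic algorithm the text refers to, and all host and VM figures are hypothetical:

```python
def first_fit_placement(vms, hosts):
    """Map each VM (by CPU demand) to the first host with spare capacity.

    vms   -- {vm_name: cpu_demand}
    hosts -- {host_name: cpu_capacity}
    Returns {vm_name: host_name or None if no host can fit the VM}.
    """
    placement = {}
    free = dict(hosts)  # remaining CPU capacity per host
    for vm, demand in vms.items():
        for host, cap in free.items():
            if cap >= demand:
                placement[vm] = host
                free[host] -= demand  # reserve capacity on the chosen host
                break
        else:
            placement[vm] = None  # would trigger migration or a new server
    return placement

placement = first_fit_placement({"vm1": 2, "vm2": 3, "vm3": 4},
                                {"h1": 4, "h2": 4})
```

When first-fit fails for a VM (as for `vm3` here), a real mapper would invoke the migration algorithm to consolidate existing VMs and free up capacity.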
Policy Management: Organizations must define clear and unambiguous policies for governance, regulation, security, privacy, etc., to make sure that SLAs are not violated when applications are operated on the cloud. In order to do business within a cloud, cloud consumers and providers must be aligned on guaranteed SLAs and equivalent pricing models. As cloud capabilities improve (for example, virtual supply chains), fully abstracted, policy-driven interactions need to be supported across clouds. Modeling and extending policies to provide integrated services across distributed and heterogeneous business processes and infrastructure has become a major challenge for cloud providers. Policy management has conventionally been fixed within and across the organizational boundaries of enterprise IT platforms and applications. Hence, the global expansion of businesses requires new methods to combine and complement policies within and across external process networks and value chains.
Advance Resource Reservation Monitor: This mechanism guarantees QoS in accessing resources across datacenters. By reserving resources in advance, users are able to complete time-critical applications, such as parallel workflow applications that are real-time in nature and will require a number of resources in the near future. The provider can then predict future demand and usage more accurately. With this information, the provider can maximize revenue at various times by applying policy management to determine pricing. Users are able to decide on resource reservations in advance, according to their needs and net expenses, because these costs are publicized in advance. To successfully plan and manage their operations, it is essential for enterprises to have prior knowledge of expected costs. A guaranteed supply of resources also helps enterprises contemplate and target future expansion more accurately and confidently. Hence, enterprises are able to scale their resource reservations up or down based on short-term, medium-term, and long-term commitments.
Security and Identity Management: Cloud environments must control an identity and security infrastructure in order to enable elastic provisioning and to implement security policies across clouds. Because resource provisioning is done outside the enterprise’s legal boundaries by the cloud provider, it is necessary to control and ensure that sensitive information is protected in accordance with SLAs. The issues that a cloud provider must address before convincing end users to migrate from desktop applications to cloud applications are the safety and security of confidential data stored on the cloud; user authentication and authorization; uptime, downtime, and application performance; and, finally, data backup and disaster recovery, so as to provide reliable SLAs for its cloud services.
Autonomic Management: Each element in the cloud service architecture is endowed with autonomic management capability. The autonomic managers determine the assignment of tasks to each of the available resources, and reassign tasks during workload execution based on the overall progress of the submitted requests. Autonomic managers also adaptively assign tasks in the workload to execution sites to meet given objectives, minimizing total execution time and optimizing for QoS targets; these objectives are imposed by an SLA. An autonomous system can schedule a job to run at a certain date and time, dynamically trigger workloads on unexpected business events, trigger workloads via a web service, monitor workload status information, raise alerts on successful and unsuccessful completions, generate reports on job history, use conditional logic, provide a business-process view linking disparate processes together, and schedule maintenance tasks in the applications, the database, and so on.
Green Initiatives: Green computing refers to the practice of implementing policies and procedures that reduce the impact of computing waste on the environment through improved usage of computing resources. This movement, driven by environmental concern, involves every department in an enterprise, and Green IT has become a driving factor for the IT industry. One particular area of interest in Green IT is the datacenter: IT departments are looking to reduce the carbon footprint of their datacenters to protect the environment. Constraints on available computing power, cooling capacity, and physical space in an enterprise datacenter facility impose serious limitations on the ability to deliver key services. In these circumstances, “going green” in the datacenter is not just about social responsibility; it is a business imperative.
To have a green datacenter, balanced utilization of power, adequate cooling capacity, and efficient infrastructure are the key components. In order to establish a green datacenter, it is important to understand how these components have traditionally been deployed and to know the initiatives to be taken to make the datacenter green. Nowadays, renewable energy sources such as solar and wind, supplying partial or complete power, are chosen by many businesses. Of all these measures, energy efficiency provides the greatest potential for quick return on investment, ease of implementation, and financial justification. There are several successful green datacenter initiatives that help enterprises overcome the energy and capacity limitations, operational vulnerabilities, and constraints that limit today’s datacenters [7].
DATACENTERS LAYER
The Datacenters Layer is at the bottom of the cloud service architecture. Normally, big enterprises with huge hardware requirements that wish to sublease Hardware as a Service (HaaS) are the users of this layer. The HaaS providers operate, manage, and upgrade the hardware on behalf of their consumers for the duration of the lease or contract. This helps the enterprises, as it relieves them from upfront investment in building and managing datacenters. Meanwhile, to maximize profits, HaaS providers maintain the cost-effective infrastructure and technical expertise needed to host the systems. As enterprise users have predefined business workloads, SLAs in this model are stricter due to the severe performance requirements imposed. The HaaS providers realize profits from the economies of scale of building huge datacenter infrastructures with gigantic floor space, power, and cooling costs, as well as operation and management expertise. A number of technical challenges need to be addressed by the HaaS providers in operating and managing their services. The major challenges are efficiency, ease, and speed of provisioning at large scale. Datacenter management, scheduling, and power-consumption optimization are the other challenges that arise at this layer.
Virtual Machines: The virtual machine is the fine-grained unit of computing resource. Cloud users have flexibility over their VMs’ performance and efficiency because they have super-user privileges in accessing their virtual machines, and they can customize the software stack of those virtual machines. Such services are frequently referred to as Infrastructure as a Service (IaaS). Virtualization is the primary technology in a cloud environment: it gives users extraordinary flexibility in configuring their settings without disturbing the physical infrastructure in the providers’ datacenters. The concept of IaaS has become possible through recent advances in OS virtualization. Multiple application environments are supported, with each virtual machine running a different operating system, referred to as the guest operating system.
Virtual Machine Monitor: The VMM is a hardware abstraction layer that acts as an interface between virtual machines and hardware, coordinating the access of all virtual machines to resources. The VMM enables organizations to speed up their response to business dynamics through consolidation of computing resources, which results in less complex management of those resources. Improving resource utilization and reducing power consumption are key challenges for its success.
The Hardware: The hardware offers basic computing resources such as CPU resource,
Memory resource, I/O devices and switches that form the backbone of the cloud.
DIFFERENCE BETWEEN CLIENT SERVER ARCHITECTURE AND
CLOUD COMPUTING
In a client/server architecture, one logs on to a server, authenticating against credentials saved on the server, not on the local computer, even before accessing the computer’s operating system. Cloud access, in contrast, usually occurs without manually supplied credentials: after the user has logged on to the computer or other device, locally saved credentials are used.
Both provide storage for the user’s necessary files, though cloud storage is arguably the more transparent of the two to the user.
Figure 2.2
Client/server architectures are normally deployed in organizations where control of the user’s computer and computer access, such as centrally stored user credentials, operating system updates, and updates to user applications, is centrally administered and directed.
Cloud storage may be a transparent sub-function of a client/server architecture, though the contrary is not true; that is, a client/server architecture is not automatically a sub-function of cloud storage, though we can presumably expect the latter to become the dominant model sooner rather than later. Depending on the cloud, no one can really tell just how secure it is, or whether or not access to user data is truly secure.
The primary difference between cloud computing and traditional networking or hosting is the implementation, and in one word that is “virtualization.” Virtualization allows for extensive scalability, giving clients virtually limitless resources.
In a traditional networking setup, the server is fixed in hardware; if you want to scale up to more users than the current hardware can support, you need to allocate more money for upgrades, and there would still be a limit. With cloud computing infrastructure, multiple servers are already in place at the start; virtualization then renders only the resources that a specific user needs, which gives the cloud great scalability, from the small resource needs of personal businesses to heavy corporate resource needs. A cloud provider is able to scale resources without issues, and the client only needs to pay for what they use. In traditional networking, you need to pay for everything (the hardware, the installation, the maintenance) or rent it all for a fixed monthly price, even if you only need a small amount of resource.
Cloud computing is an external form of data storage and software delivery, which can make it
seem less secure than local data hosting. Anyone with access to the server can view and use
the stored data and applications in the cloud, wherever internet connection is available.
Choosing a cloud service provider that is completely transparent in its hosting of cloud
platforms and ensures optimum security measures are in place is crucial when transitioning to
the cloud.
With traditional IT infrastructure, you are responsible for the protection of your data, and it is
easier to ensure that only approved personnel can access stored applications and data.
Physically connected to your local network, data centres can be managed by in-house IT departments on a round-the-clock basis, but a significant amount of time and money is needed to ensure the right security strategies are implemented and data recovery systems are in place.
In summary, cloud architecture is, or can be, just another kind of client/server architecture in which the user is cleverly insulated from the client/server aspects of its implementation. It all depends on who controls which cloud, and which cloud we are talking about. Expect that in the near future all client/server architectures will look more like the cloud than the networks of old, but it is still pretty much the same thing: remote storage of user data that is modified locally and accessible to the user regardless of which platform they use to access it.
PROS AND CONS OF CLOUD COMPUTING
Pros of Cloud Computing
No cost on infrastructure: Cloud computing is divided into three major categories as per the services: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
In all these categories, one thing is common: you don’t need to invest in hardware or any infrastructure. In general, every organization has to spend a lot to set up its IT infrastructure and hire a specialized team.
Servers, network devices, ISP connections, storage, and software: these are the major things in which you need to invest for a general IT infrastructure.
Minimum management and cost: By selecting the cloud, you save cost in many ways:

Zero investment in infrastructure.

Since you don’t own the infrastructure, you spend nothing on its management or staff to
manage it.

Cloud works on a pay-as-you-go model, so you spend only on the resources that you need. Nothing more!
When you opt for the cloud, the management of its infrastructure is the sole responsibility of
the cloud provider and not of the user.
Forget about administrative or management hassles: Whenever hardware is purchased or upgraded, a lot of time is wasted looking for the best vendors, inviting quotations, negotiating rates, taking approvals, generating POs, waiting for delivery, and then setting up the infrastructure.
This whole process involves lots of administrative and managerial tasks that waste a lot of time.
With cloud services, you just need to compare the best cloud service providers and their plans and buy from the one that matches your requirements. This process doesn’t take much time and saves you a lot of effort. Your system maintenance tasks are also eliminated in the cloud.
Accessibility and pay per use: Cloud resources are easily accessible from around the globe: anytime, anywhere, and from any device, and you have complete access to your resources. This also determines your billing: you only pay for what you use and how much you use, like your phone or electricity bill. With other IT infrastructure, one spends the complete amount in one go, and it is very rare that those resources are used optimally; thus the investment goes to waste.
Reliability: Your infrastructure in the cloud increases the reliability and availability of
applications and services. Cloud services run on pooled and redundant infrastructure which
provides you with a higher availability of IT services.
Data control: Another primary advantage of the cloud is that it centralizes all the data from
multiple projects and branch offices to a single location. You gain complete control over the
data without visiting individual places for checking the information.
Data backup & recovery: Loss of data can significantly impact your business. You might
lose critical information which can cost you a huge sum of money, waste your valuable time
and adversely impact your brand image.
To prevent it, you can automatically backup all the data to the cloud on a regular basis. This
helps you to recover any data in case of accidental deletion, loss because of natural calamity
or if the hard drive crashes.
Huge cloud storage: Most cloud services provide you with free, secure and huge storage space to store all your valuable information.
Although most cloud storage services like OneDrive offer a good amount of free storage, if you use it all, you can always buy more secure storage in the cloud.
Automatic software updates: Updating a system every now and then can be a frustrating
task for enterprises. The IT department needs to update the system for every individual which
not only wastes time but affects productivity.
But if you are using cloud-based applications, they will get automatically updated, without
any involvement from the users.
After discussing the benefits of cloud computing, let’s now discuss some disadvantages of
cloud computing.
CONS OF CLOUD COMPUTING
Requires high-speed internet with good bandwidth: To access your cloud services, you always need a good internet connection with sufficient bandwidth to upload or download files to and from the cloud.
Downtime: Since the cloud requires high internet speed and good bandwidth, there is always a possibility of a service outage, which can result in business downtime. Today, no business can afford revenue or business loss due to downtime or a slowdown from an interruption in critical business processes.
Limited control of infrastructure: Since you are not the owner of the cloud’s infrastructure, you have no control, or only limited access, over the cloud infrastructure.
Restricted or limited flexibility: The cloud provides a huge list of services, but consuming them comes with a lot of restrictions and limited flexibility for your applications or developments. Also, platform dependency, or ‘vendor lock-in’, can sometimes make it difficult for you to migrate from one provider to another.
Ongoing costs: Although you save the cost of buying and managing the whole infrastructure, on the cloud you need to keep paying for services as long as you use them, whereas with traditional methods you only need to invest once.
Security: Security of data is a big concern for everyone. Since the public cloud utilizes the
internet, your data may become vulnerable.
In the case of a public cloud, it depends on the cloud provider to take care of your data. So,
before opting for cloud services, it is required that you find a provider who follows maximum
compliance policies for data security.
For complete security of data on the cloud, one needs to consider a somewhat costlier private
cloud option or the hybrid cloud option, where generic data can be on the public cloud and
business-critical data is kept on the private cloud.
Vendor Lock-in: Although cloud service providers assure you that they will allow you to switch or migrate to any other service provider whenever you want, it is a very difficult process.
You will find it complex to migrate all the cloud services from one service provider to
another. During migration, you might end up facing compatibility, interoperability and
support issues. To avoid these issues, many customers choose not to change the vendor.
Technical issues: Even if you are a tech whiz, technical issues can occur, and not everything can be resolved in-house. To avoid interruptions, you will need to contact your service provider for support. However, not every vendor provides 24/7 support to their clients.
SUMMARY
To conclude, the cloud has its pros and cons, but it has become a mandatory part of every business venture; today, one can hardly operate without enjoying the benefits of cloud computing. With careful precautions and effort, the disadvantages of cloud computing can be minimized. It is true that cloud computing has rocked the business world, and the pros outweigh the cons: the minimized costs, easy access, data backup, data centralization, sharing capabilities, security, free storage and quick testing speak for themselves. The argument becomes even stronger with the enhanced flexibility and dependability.
Cloud computing has recently attracted significant momentum and attention among both academia and industry users, owing to a drastic change in everyone’s perception of infrastructure availability, software delivery and development models. The step-by-step progression from mainframe computers to the cloud computing architecture follows the transition through the personal computer, network, client/server, internet and grid computing models. This rapid shift towards cloud computing has made security a critical issue for the success of information systems and communication. From the security perspective, relocation to the cloud introduces a number of risks and challenges, deteriorating much of the effectiveness of traditional protection mechanisms. The aim of this unit has therefore been, first, to introduce the cloud computing paradigm, its applications, advantages and drawbacks, and second, to evaluate cloud security by identifying some of the challenges in cloud computing.
KEY WORDS/ABBREVIATIONS

Cloud computing types: There are three main cloud computing types, with
additional ones evolving—software-as-a-service (SaaS) for web-based applications,
infrastructure-as-a-service (IaaS) for Internet-based access to storage and computing
power, and platform-as-a-service (PaaS) which gives developers the tools to build and
host Web applications.

Cloud service provider: A company that provides a cloud-based platform,
infrastructure, application or storage services, usually for a fee.

Cloud storage: A service that lets you store data by transferring it over the Internet or another network to an offsite storage system maintained by a third party.

Computer grids: Groups of networked computers that act together to perform large
tasks, such as analysing huge sets of data and weather modelling.

Database sharding: A type of partitioning that lets you divide a large database into smaller databases, which can be managed faster and more easily across servers.
LEARNING ACTIVITY
1. Take two organizations with different architectures and differentiate them from the Cloud Computing Architecture.
___________________________________________________________________________
___________________________________________________________________
2. How is client computing advantageous over other server-based technologies?
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. Discuss the Layered Architecture of Cloud Computing
2. Compare Cloud Computing Architecture with traditional Client/Server architecture.
3. Discuss the various Pros and Cons of Cloud Computing
4. Give examples of few applications of Cloud Computing
5. What are the Advantages of Cloud Computing Architecture?
B. Multiple Choice Questions
1. Cloud computing is an abstraction based on the notion of pooling physical resources and
presenting them as a ________ resource.
a) real
b) virtual
c) cloud
d) none of the mentioned
2. Which of the following is Cloud Platform by Amazon?
a) Azure
b) AWS
c) Cloudera
d) All of the mentioned
3. All cloud computing applications suffer from the inherent _______ that is intrinsic in their
WAN connectivity.
a) propagation
b) latency
c) noise
d) all of the mentioned
4. Cloud computing is a _______ system and it is necessarily unidirectional in nature.
a) stateless
b) stateful
c) reliable
d) all of the mentioned
5. The _____ is something that you can obtain under contract from your vendor.
a) PoS
b) QoS
c) SoS
d) All of the mentioned
Answers
1. b
2. b
3. b
4. a
5. b
REFERENCES
“Where’s The Rub: Cloud Computing’s Hidden Costs”. 2014-02-27. Retrieved 2014-07-14.
“Cloud Computing: Clash of the clouds”. The Economist. 2009-10-15. Retrieved 2009-11-03.
“Gartner Says Cloud Computing Will Be As Influential As E-business”. Gartner. Retrieved 2010-08-22.
Gruman, Galen (2008-04-07). “What cloud computing really means”. InfoWorld. Retrieved 2009-06-02.
Vaughan-Nichols, Steven J. “Microsoft developer reveals Linux is now more used on Azure than Windows Server”. ZDNet. Retrieved 2019-07-02.
Kumar, Guddu (9 September 2019). “A Review on Data Protection of Cloud Computing Security, Benefits, Risks and Suggestions” (PDF). United International Journal for Research & Technology. 1 (2): 26. Retrieved 9 September 2019.
“Announcing Amazon Elastic Compute Cloud (Amazon EC2) – beta”. 24 August 2006. Retrieved 31 May 2014.
UNIT 3: CLOUD SERVICE MANAGEMENT
STRUCTURE
1. Learning Objectives
2. Introduction
3. Cloud – Service Level Agreement
4. Service Provider-Cloud Management
5. How to Choose a Cloud service provider?
6. Role of Service providers in Cloud Computing
7. Scalability: Scale up and Scale Down Services
8. Summary
9. Key Words/Abbreviations
10. Learning Activity
11. Unit End Questions (MCQ and Descriptive)
12. References
LEARNING OBJECTIVES
At the end of the unit, the learner will be able to understand the cloud SLA and the following objectives:

Cloud Service Level Agreement

How to Choose Service Provider

Role of Service Provider

Scalability
INTRODUCTION
Cloud management is the management of cloud computing products and services. Public clouds are managed by public cloud service providers, who manage the public cloud environment’s servers, storage, networking and data center operations. Users may also opt to manage their public cloud services with a third-party cloud management tool. Users of public cloud services can generally select from three basic cloud provisioning categories:
User self-provisioning: Customers purchase cloud services directly from the provider,
typically through a web form or console interface. The customer pays on a per-transaction
basis.
Advanced provisioning: Customers contract in advance a predetermined amount of resources,
which are prepared in advance of service. The customer pays a flat fee or a monthly fee.
Dynamic provisioning: The provider allocates resources when the customer needs them, then decommissions them when they are no longer needed. The customer is charged on a pay-per-use basis.
A service-level agreement (SLA) is a commitment between a service provider and a client. Particular aspects of the service (quality, availability, responsibilities) are agreed between the service provider and the service user. The most common component of an SLA is that the services should be provided to the customer as agreed upon in the contract. As an example, Internet service providers and telcos will commonly include service-level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain-language terms. In this case the SLA will typically have a technical definition in terms of mean time between failures (MTBF) and mean time to repair or mean time to recovery (MTTR); it will identify which party is responsible for reporting faults or paying fees, and the responsibility for various data rates, throughput, jitter, or similar measurable details.
CLOUD – SERVICE LEVEL AGREEMENT
A cloud SLA (cloud service-level agreement) is an agreement between a cloud service provider and a customer that ensures a minimum level of service is maintained. It guarantees levels of reliability, availability and responsiveness for systems and applications, while also specifying who will govern when there is a service interruption.
A cloud infrastructure can span geographies, networks and systems that are both physical and virtual. While the exact metrics of a cloud SLA can vary by service provider, the areas covered are uniform: volume and quality of work (including precision and accuracy), speed, responsiveness and efficiency. The document aims to establish a mutual understanding of the services, prioritized areas, responsibilities, guarantees and warranties provided by the service provider.
What to look for in a cloud SLA
Service-level agreements have become more important as organizations move their systems, applications and data to the cloud. A cloud SLA ensures that cloud providers meet certain enterprise-level requirements and provide customers with a clearly defined set of deliverables.
The defined level of services should be specific and measurable in each area. This allows the quality of service (QoS) to be benchmarked and, if stipulated by the agreement, rewarded or penalized accordingly.
(Figure: enterprise-managed vs. provider-managed areas)
An SLA will commonly use technical definitions that quantify the level of service, such as
mean time between failures (MTBF) or mean time to repair (MTTR), which specify a target
or minimum value for service-level performance.
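These definitions translate directly into arithmetic: steady-state availability can be computed as MTBF / (MTBF + MTTR). A minimal sketch, with hypothetical figures not taken from any real SLA:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A service that fails on average every 1,000 hours and takes 2 hours
# to restore is available about 99.8% of the time.
print(round(availability(1000, 2) * 100, 2))  # 99.8
```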
A typical compute or storage cloud SLA articulates precise levels of service, as well as the recourse
or compensation the user is entitled to should the provider fail to deliver the service as
described. Another area to consider carefully is service availability, which specifies the
maximum amount of time a read request can take, how many retries are allowed, and so on.
The SLA should also define compensation for users if the specifications aren't met. A cloud
storage service provider usually offers a tiered service credit plan that gives users credits
based on the discrepancy between SLA specifications and the actual service levels delivered.
Most public cloud storage services provide details of the service levels that users can expect
on their websites, and these will likely be the same for all users. However, an enterprise
establishing a service with a private cloud storage provider may be able to negotiate a more
customized deal. In this case, the cloud SLA might include specifications for retention
policies, the number of copies that will be retained, storage locations and so on.
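A tiered service credit plan of the kind described above can be sketched as a simple lookup on measured availability. The tier boundaries and credit percentages here are hypothetical illustrations, not any particular provider's terms:

```python
# Hypothetical tiers: (minimum measured availability %, credit as % of monthly bill)
CREDIT_TIERS = [(99.9, 0), (99.0, 10), (95.0, 25), (0.0, 100)]

def service_credit(measured_availability: float) -> int:
    """Credit owed to the customer for the billing period, as % of the bill."""
    for threshold, credit in CREDIT_TIERS:
        if measured_availability >= threshold:
            return credit
    return 100  # unreachable with a 0.0 floor tier, kept as a safe default

print(service_credit(99.95))  # 0   (SLA met, no credit)
print(service_credit(99.5))   # 10
print(service_credit(94.0))   # 100
```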
Figure 3.1
Cloud service-level agreements may be more detailed to cover governance, security
specifications, compliance, and performance and uptime statistics. They should address
security and encryption practices for data privacy, disaster recovery expectations, data
location, as well as data access and portability.
Data protection processes, such as backup and disaster recovery, should also be addressed.
The agreement should outline the responsibilities of each party, the acceptable performance
parameters, a description of the applications and services covered under the agreement,
procedures for monitoring service levels, and a schedule for the remediation of outages.
Examine the ramifications of the cloud SLA before signing. For example, 99.9% uptime, a
common stipulation, translates to roughly nine hours of outage per year (about 8.76 hours).
For some mission-critical data, that may not be adequate. You should also check how terms
are defined.
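The downtime implied by an uptime guarantee is easy to verify. A short sketch converting an uptime percentage into the maximum outage allowed per (non-leap) year:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def max_downtime_hours(uptime_percent: float) -> float:
    """Maximum yearly downtime permitted by a given uptime guarantee."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for uptime in (99.0, 99.9, 99.99):
    print(f"{uptime}% uptime allows {max_downtime_hours(uptime):.2f} hours of outage per year")
```

At 99.9% this works out to about 8.76 hours per year, which is why the percentage alone can be misleading for mission-critical workloads.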
SLAs that scale
Most SLAs are negotiated to meet the needs of the customer at the time of signing, but many
businesses change dramatically in size over time. A solid cloud service-level agreement
outlines intervals for reviewing a contract so that it meets the changing needs of an
organization.
Some vendors even build in notification workflows that indicate when a cloud service-level
agreement is close to being breached so new negotiations can be initiated based on the
changes in scale. When entering any cloud SLA negotiation, it's important to protect the
business by clarifying uptimes. A good SLA protects both the customer and supplier from
missed expectations.
Finally, the cloud SLA should include an exit strategy that outlines the expectations of the
provider to ensure a smooth transition.
SERVICE PROVIDER-CLOUD MANAGEMENT
When a business adopts a cloud solution into their infrastructure, they need to consider how
they’re going to manage it. Many IT teams will take on management responsibilities
themselves, which is certainly a viable option. However, that might not be the best approach
for your organization. You may want to consider a managed service provider (MSP) to help
manage your cloud solution.
A cloud managed service provider lifts some or all cloud management obligations from a
company’s IT team. Depending on the managed service provider, they might offer full
management of a cloud deployment or management of specific cloud services. What are the
benefits of cloud managed service providers, and why should you consider them for managing
your cloud resources?
A cloud service provider is a third-party company offering a cloud-based platform,
infrastructure, application or storage services. Much like a homeowner would pay for a utility
such as electricity or gas, companies typically have to pay only for the amount of cloud
services they use, as business demands require.
Besides the pay-per-use model, cloud service providers also give companies a wide range of
benefits. Businesses can take advantage of scalability and flexibility by not being limited to
the physical constraints of on-premises servers, the reliability of multiple data centres with
multiple redundancies, customisation by configuring servers to your preferences, and
responsive load balancing which can easily respond to changing demands. However,
businesses should also evaluate the security implications of storing information in the cloud,
to ensure that industry-recommended access and compliance management configurations and
practices are enacted and met.
HOW TO CHOOSE A CLOUD SERVICE PROVIDER?
Once you have decided to make the move to cloud computing, your next step is to select a
cloud service provider. It is vital to assess the reliability and capability of a service provider
that you plan to entrust with your organisation’s applications and data. Some things to
consider:
Business health and processes
• Financial health. The provider should have a track record of stability and be in a healthy
financial position with sufficient capital to operate successfully over the long term.
• Organisation, governance, planning and risk management. The provider should have
a formal management structure, established risk management policies and a formal process
for assessing third-party service providers and vendors.
• Trust. You should like the company and its principles. Check the provider’s reputation
and see who its partners are. Find out its level of cloud experience. Read reviews and talk to
customers whose situation is similar to yours.
• Business knowledge and technical know-how. The provider should understand your
business and what you are looking to do, and be able to match that with its technical
expertise.
• Compliance audit. The provider should be able to validate compliance with all of your
requirements through a third-party audit.
ADMINISTRATION SUPPORT
• Service Level Agreements (SLAs). Providers should be able to promise you a basic level
of service that you are comfortable with.
• Performance reporting. The provider should be able to give you performance reports.
• Resource monitoring and configuration management. There should be sufficient
controls for the provider to track and monitor services provided to customers and any
changes made to their systems.
• Billing and accounting. This should be automated so that you can monitor the resources
you are using and their cost, so you don’t run up unexpected bills. There should also be
support for billing-related issues.
TECHNICAL CAPABILITIES AND PROCESSES
• Ease of deployment, management and upgrade. Make sure the provider has
mechanisms that make it easy for you to deploy, manage and upgrade your software and
applications.
• Standard interfaces. The provider should use standard APIs and data transforms so that
your organisation can easily build connections to the cloud.
• Event management. The provider should have a formal system for event management
which is integrated with its monitoring/management system.
• Change management. The provider should have documented and formal processes for
requesting, logging, approving, testing and accepting changes.
• Hybrid capability. Even if you don’t plan to use a hybrid cloud initially, you should
make sure the provider can support this model. It has advantages that you may wish to
exploit at a later time.
SECURITY PRACTICES
• Security infrastructure. There should be a comprehensive security infrastructure for all
levels and types of cloud services.
• Security policies. There should be comprehensive security policies and procedures in
place for controlling access to provider and customer systems.
• Identity management. Changes to any application service or hardware component
should be authorised on a personal or group role basis, and authentication should be required
for anyone to change an application or data.
• Data backup and retention. Policies and procedures to ensure the integrity of customer
data should be in place and operational.
• Physical security. Controls ensuring physical security should be in place, including for
access to co-located hardware. Also, data centres should have environmental safeguards to
protect equipment and data from disruptive events. There should be redundant networking
and power, and a documented disaster recovery and business continuity plan.
ROLE OF SERVICE PROVIDER IN CLOUD COMPUTING
There is a significant growth of cloud adoption across small as well as large enterprises. This
has resulted in a large spectrum of cloud offerings including cloud delivery models and a
variety of cloud computing services that are being provided by cloud hosting companies.
Improved accessibility and security
Cloud adoption not only helps improve business processes and enhances the efficiency of IT
infrastructures but also brings down costs of running, upgrading, and maintaining on-site IT
facilities.
Your business-critical data gains added security in the cloud environment. The data is not
literally placed "up in the cloud" but is distributed across a number of remote data centre
facilities owned and operated by third-party service providers. These facilities consist of
climate-controlled rooms housing enterprise-grade servers, providing seamless protection and
easy accessibility so that business continuity is maintained in spite of any catastrophic event
that may impact your enterprise's main office.
The cloud data centres are designed to house a multitude of servers for storing data under
stringent security controls. The arrangement is aimed at enabling uninterrupted connectivity
among vast networks comprising millions of machines. Cloud computing is leveraged by
end users as well as cloud hosting companies for the enrichment of their services.
Understanding the cloud’s role in businesses
In order to understand the precise reasons for increased cloud adoption in enterprise setups,
we should have in-depth knowledge of the cloud’s attributes that boost business processes.
Cloud services are designed to set your IT staff free from mundane and time-consuming tasks
of maintaining, repairing, and upgrading hardware equipment such as servers. On-site IT
infrastructure in enterprises will be leaner after moving workloads to a cloud data centre. In the
majority of cases, there will be no need to allocate separate space for housing servers and
other IT equipment.
The direct benefit of cloud computing is associated with reduced capital expenditure as
companies need not invest funds in purchasing costly hardware equipment. Mitigation of
hardware costs is also backed by freedom from maintenance and repair costs of web servers.
There is a definite reduction in upfront costs of procurement of cost-intensive software as
well as hardware.
Performance with a promise of security
In comparison with a physical server, cloud hosting delivers better performance. This is
because established web hosting service providers are in a better position to afford
enterprise-grade cloud servers than small or medium-sized enterprises are.
Cloud hosting providers attach great importance to the security of customers’ digital assets by
spending a significant amount of financial and manpower resources. These providers harden
the defences by the implementation of stringent measures such as firewalls, anti- malware and
anti-virus deployments. In addition to this, the host data centres are armed with fortress- like
security for safeguarding physical as well as networking assets.
Greater affordability
By provisioning top of the line hardware and software resources to customers at affordable
prices, cloud hosting service providers help business enterprises reduce their capital as well
as operating costs without impacting performance.
Cloud services go all out by investing huge sums of money to offer world-class resources to
customers at economical prices. Their staff are well equipped to look after routine tasks as
well as technical glitches at any time of day, every day of the week.
Demand-oriented resource provisioning
Users of cloud services are allowed to access the optimum amount of resources in response to
resource requirements. This not only assures guaranteed resource availability but also helps
businesses achieve resource optimization for reduction of operating costs.
Cloud-based infrastructure also enables users to access a variety of resources such as
applications or platforms via any internet enabled device, from any location. These services
are always available on round the clock basis for improved efficiency of enterprises.
Employees can use a number of devices including smart-phones, tablets, and laptops to get
their hands on a multitude of files and folders without the need to make a trip to the office.
Cloud-based solutions are inherently flexible and accessible and businesses can easily keep
their employees well-connected with each other for greater efficiency.
Freedom from maintenance
On-site IT infrastructures are resource intensive and need to be regularly upgraded and
maintained. In contrast, cloud service providers shoulder the entire responsibility of looking
after the performance of servers, bandwidth, network, and software applications. This also
includes periodic upgrades and security patching of operating systems and other business-critical applications.
This kind of infrastructure management requires large teams of software professionals to be
available 24 hours a day, 365 days a year. The majority of companies that adopt cloud
are driven by the need for consistently available, flexible, secure, and well-managed IT
infrastructure in the absence of any on-premises facility.
SCALABILITY: SCALE UP AND SCALE DOWN SERVICES
IT Managers run into scalability challenges on a regular basis. It is difficult to predict growth
rates of applications, storage capacity usage, and bandwidth. When a workload reaches its
capacity limits, the question becomes how to maintain performance while scaling
efficiently. The ability to use the cloud to scale quickly and handle unexpected rapid growth or
seasonal shifts in demand has become a major benefit of public cloud services, but it can also
become a liability if not managed properly. Buying access to additional infrastructure within
minutes has become quite appealing. However, there are decisions that have to be made
about what kind of scalability is needed to meet demand and how to accurately track
expenditures.
Scalability is the capability of a system, network, or process to handle a growing amount of
work, or its potential to be enlarged to accommodate that growth. For example, a system is
considered scalable if it is capable of increasing its total output under an increased load when
resources (typically hardware) are added.
A system whose performance improves after adding hardware, proportionally to the capacity
added, is said to be a scalable system.
Figure 3.2
This is applicable to any system, such as:
1. Commercial websites or web applications that have a large and frequently growing
user base.
2. An immediate need to serve a high number of users for a high-profile event or
campaign.
3. A streaming event that needs immediate processing capability to serve streams to a
larger set of users in a certain region or globally.
4. A work- or data-processing job that requires higher compute capacity than usual.
Scalability can be measured in various dimensions, such as:
• Administrative scalability: The ability for an increasing number of organizations or
users to easily share a single distributed system.
• Functional scalability: The ability to enhance the system by adding new
functionality at minimal effort.
• Geographic scalability: The ability to maintain performance, usefulness, or usability
regardless of expansion from concentration in a local area to a more distributed
geographic pattern.
• Load scalability: The ability of a distributed system to easily expand and contract its
resource pool to accommodate heavier or lighter loads or numbers of inputs;
alternatively, the ease with which a system or component can be modified, added, or
removed to accommodate changing load.
• Generation scalability: The ability of a system to scale up by using new generations
of components. Relatedly, heterogeneous scalability is the ability to use
components from different vendors.
Scale-Out/In / Horizontal Scaling:
To scale horizontally (or scale out/in) means to add more nodes to (or remove nodes from) a
system, such as adding a new computer to a distributed software application.
Figure 3.3
Pros:
• Load is distributed across multiple servers.
• Even if one server goes down, other servers remain to handle the requests or load.
• You can add more servers or remove them depending on the usage patterns or load.
• Perfect for highly available web applications or batch-processing operations.
Cons:
• You need additional hardware/servers, which increases infrastructure and
maintenance costs.
• You may need to purchase additional licences for the OS or other required licensed
software.
Scale-Up/Down/Vertical Scaling:
To scale vertically (or scale up/down) means to add resources to (or remove resources from)
a single node in a system, typically involving the addition of CPUs or memory to a single
computer.
Figure 3.4
Pros
• CPU, RAM or storage can be increased virtually or physically.
• A single system can serve all your data- and work-processing needs once the
additional hardware upgrade is done.
• Minimal cost for the upgrade.
Cons
• Once you are physically or virtually maxed out, you have no further options.
• A crash could cause outages to your business-processing jobs.
We have discussed both approaches to scalability in detail; depending on your needs, you
will have to choose the right one. With the ready availability of cloud computing platforms
such as Amazon AWS and Microsoft Azure, you have many flexible ways to scale out or
scale up in a cloud environment, which provides virtually unlimited resources, provided you
are able to pay for them.
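The scale-out/scale-in decision itself can be automated. Cloud autoscalers apply more sophisticated versions of the rule sketched below; the target utilization and instance limits here are illustrative assumptions, not defaults of any real platform:

```python
import math

def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.5, min_n: int = 2, max_n: int = 20) -> int:
    """Proportional scaling: choose an instance count that brings average
    CPU utilization back toward the target, clamped to the allowed range."""
    needed = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, needed))

print(desired_instances(4, 0.9))  # 8 -> scale out under heavy load
print(desired_instances(4, 0.1))  # 2 -> scale in, but respect the floor
```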
SUMMARY
A service-level agreement is an agreement between two or more parties, where one is the
customer and the others are service providers. This can be a legally binding formal or an
informal "contract" (for example, internal department relationships). The agreement may
involve separate organizations, or different teams within one organization. Contracts
between the service provider and other third parties are often (incorrectly) called SLAs –
because the level of service has been set by the (principal) customer, there can be no
"agreement" between third parties; these agreements are simply "contracts." Operational-level
agreements or OLAs, however, may be used by internal groups to support SLAs. If
some aspect of a service has not been agreed with the customer, it is not an "SLA".
SLAs commonly include many components, from a definition of services to the termination
of agreement. To ensure that SLAs are consistently met, these agreements are often
designed with specific lines of demarcation and the parties involved are required to meet
regularly to create an open forum for communication. Rewards and penalties applying to the
provider are often specified. Most SLAs also leave room for periodic (annual) revisitation to
make changes.
Virtualization is what makes scalability in cloud computing possible. Virtual machines
(VMs) are scalable. They’re not like physical machines, whose resources are relatively
fixed. You can add any amount of resources to VMs at any time. You can scale them up by:
• Moving them to a server with more resources
• Hosting them on multiple servers at once (clustering)
The other reason cloud computing is scalable? Cloud providers already have all the
necessary hardware and software in place. Individual businesses, in contrast, can’t afford to
have surplus hardware on standby.
Virtual machines have evolved over the past few years. Operating systems have added more
functionality and compatibilities allowing for every industry to have a more productive
workflow. Technology has made tremendous leaps in progress as well, especially with
increased internet speeds and 5G decreasing latency times exponentially. Using a virtual
machine (remote desktop) has now become cost-effective and more productive for all
industries, and all businesses.
KEY WORDS/ABBREVIATIONS
• DevOps: The union of people, process and technology to enable continuous delivery
of value to customers. The practice of DevOps brings development and operations
teams together to speed software delivery and make products more secure and
reliable.
• Elastic computing: The ability to dynamically provision and de-provision computer
processing, memory and storage resources to meet changing demands without
worrying about capacity planning and engineering for peak usage.
• Hybrid cloud: A cloud that combines public and private clouds, bound together by
technology that allows data and applications to be shared between them.
• Infrastructure as a service (IaaS): A virtualised computer environment delivered as a
service over the Internet by a provider. Infrastructure can include servers, network
equipment and software.
• Machine learning: The process of using mathematical models to predict outcomes
rather than relying on a set of instructions. This is made possible by identifying
patterns within data, building an analytical model and using it to make predictions
and decisions.
LEARNING ACTIVITY
1. Draft a sample service agreement for cloud computing.
___________________________________________________________________________
___________________________________________________________________
2. Why is the role of the service provider important in cloud computing?
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. What is a service-level agreement?
2. Who is a service provider?
3. Explain the role of the service provider in cloud computing.
4. What is Scalability?
5. Explain different types of Scalability.
B. Multiple Choice Questions
1. _______ blurs the differences between a small deployment and a large one because scale
becomes tied only to demand.
a) Leading
b) Pooling
c) Virtualization
d) All of the mentioned
2. Weinman argues that a large cloud’s size has the ability to repel ______ and DDoS attacks
better than smaller systems do.
a) sniffers
b) botnets
c) trojan horse
d) all of the mentioned
3. The reliability of a system with n redundant components and a reliability of r is
____________
a) 1-(1-r)^n
b) 1-(1+r)^n
c) 1+(1-r)^n
d) All of the mentioned
4. Which of the following architectural standards is working with cloud computing industry?
a) Service-oriented architecture
b) Standardized Web services
c) Web-application frameworks
d) All of the mentioned
5. Which of the following is related to the service provided by Cloud?
a) Sourcing
b) Ownership
c) Reliability
d) AaaS
Answers
1. c
2. b
3. a
4. a
5. a
REFERENCES
• Mills, Elinor (2009-01-27). "Cloud computing security forecast: Clear skies". CNET
News. Retrieved 2019-09-19.
• Peter Mell; Timothy Grance (September 2011). The NIST Definition of Cloud
Computing (Technical report). National Institute of Standards and Technology: U.S.
Department of Commerce. doi:10.6028/NIST.SP.800-145. Special Publication 800-145.
• Duan, Yucong; Fu, Guohua; Zhou, Nianjun; Sun, Xiaobing; Narendra, Nanjangud; Hu,
Bo (2015). "Everything as a Service (XaaS) on the Cloud: Origins, Current and Future
Trends". 2015 IEEE 8th International Conference on Cloud Computing. IEEE. pp. 621–628.
doi:10.1109/CLOUD.2015.88. ISBN 978-1-4673-7287-9. S2CID 8201466.
• "ElasticHosts Blog". Elastichosts. 2014-04-01. Retrieved 2016-06-02.
• Amies, Alex; Sluiman, Harm; Tong, Qiang Guo; Liu, Guo Ning (July 2012).
"Infrastructure as a Service Cloud Concepts". Developing and Hosting Applications on
the Cloud. IBM Press. ISBN 978-0-13-306684-5.
• Griffin, Ry'mone (2018-11-20). Internet Governance. Scientific e-Resources. p. 111.
ISBN 978-1-83947-395-1.
• Boniface, M.; et al. (2010). Platform-as-a-Service Architecture for Real-Time Quality of
Service Management in Clouds. 5th International Conference on Internet and Web
Applications and Services (ICIW). Barcelona, Spain: IEEE. pp. 155–160.
UNIT 4: CLOUD SERVICE MANAGEMENT 2
STRUCTURE
1. Learning Objectives
2. Introduction
3. Cloud Economics
4. Cloud Computing Services by Amazon
5. Cloud Computing Services by Google
6. Cloud Computing Services by Microsoft
7. Summary
8. Key Words/Abbreviations
9. Learning Activity
10. Unit End Questions (MCQ and Descriptive)
11. References
LEARNING OBJECTIVES
At the end of this unit, the learner will be able to understand and have knowledge of the
following aspects of cloud economics:
• Economics related to the cloud
• Services provided by Amazon, Google and Microsoft
INTRODUCTION
By exploring cloud economics in cloud computing, IT teams can gain a far more
sophisticated understanding of their capital and operational expenses. Beyond just the hard
numbers though, they should consider ways that cloud computing can empower and support
the productivity of developers and engineers. Cloud economics goes beyond just cutting
cloud computing costs; it’s about meeting business goals through greater speed and agility.
Understanding the larger perspective in this way will help IT teams choose the best cloud
solution for their needs.
IT teams should also be careful to approach their decisions around cloud economics with
objectivity and an awareness of basic behavioral economics. A host of potential biases and
blind spots can negatively affect their decision making:
• Overconfidence blind spot: Being too confident in your understanding of costs and
project timelines.
• Recency blind spot: Being wowed by the latest technology instead of considering
choices soberly.
• Confirmation blind spot: Letting pre-existing notions or false beliefs affect your
objective review of the information.
• Refactoring and rework blind spot: Underestimating the time and money needed to
refactor applications to run in the cloud.
• Talent reskilling blind spot: Overlooking the cost to retrain or maintain multiple
operations teams.
• Operational costs blind spot: Not paying attention to the full cloud cost structure,
such as provider charges for data egress.
CLOUD ECONOMICS
What is cloud economics?
In the simplest terms, the economics of cloud computing deals with the knowledge
concerning the principles, costs, and benefits of cloud computing. For an organization to
derive the greatest value for the business, it must specifically determine how cloud services
can affect its IT budget, security and IT infrastructure. There is no hard and fast formula for
this; it all depends on assessing the costs pertaining to infrastructure, management, staffing
needs, research and development (R&D), security and support. All these factors are analysed
to determine whether moving to the cloud is the logical next step for the organization’s
specific circumstances and needs.
Making the business case for cloud economics
Before making the leap to cloud, businesses should analyse the economic pros and cons in
depth to get a detailed picture of specific costs and savings. Will it lead to long-term savings
and efficiencies? The answers will vary depending on the organizational needs and
circumstances and on the cloud solution being considered. The goal is to avoid a cloud
adoption strategy that drives up cost, complexity and staffing resources.
When exploring cloud economics for their company, IT and finance managers can follow a
basic process to determine cloud computing ROI and TCO, and use those estimates to help
make their case to executives. The process should include these three elements:
Benchmarking: Calculate the cost of operating your current data centre, including capital costs
over the equipment lifespan, labour costs and any other maintenance and operational costs,
from licenses and software to spare parts.
Cloud costs: Estimate the costs of the cloud infrastructure you’re considering (public cloud,
private cloud, hybrid cloud, etc.). You’ll need a quote from your vendor, but look beyond this
basic pricing structure to consider ongoing fees, labour and training costs, ongoing integration
and testing of apps, as well as security and compliance.
Migration costs: Determine the cost to migrate IT operations to the cloud or to switch cloud
providers. These costs should include labour and expenses to integrate and test apps.
How to calculate the cost of moving to the cloud?
Now here is where the economics of cloud computing comes in action. Let’s take a holistic
approach to calculating the cost of cloud computing.
Total cost of ownership
To put the cost of a cloud solution into perspective, you need to calculate the total cost of
ownership (TCO) for the on-premises option first. You can calculate that by figuring out the
cost of the equipment you need, the cost of capital and the projected lifespan of the
equipment. You can also include installation and maintenance costs.
Cost of your current data centre
That’s the first step: to calculate the amount of time, money and infrastructure required in
running your current data centre. Once you determine the scope and scale of your current IT
infrastructure, it will provide you the baseline to help you calculate the potential cost of the
cloud resources you’ll consume and compare it to current cost levels.
To precisely calculate the cost of your current data centre, make sure to include all aspects.
For example, IT infrastructure consisting of hardware and software that can include physical
servers, software licenses, maintenance contracts, warranties, supplies, material, spare parts,
and anything else that you directly pay for. You need the cost of all these to correctly estimate
how much your current IT infrastructure costs. Then there are operational costs as well,
which include labour, facilities used to house IT hardware, and internet connectivity. These
operational costs are part of the cost of your data centre as well.
Cost of estimated cloud infrastructure
Once the cost of your current data centre is determined, you now need to calculate the
estimated cost of cloud infrastructure. While cloud pricing can vary depending on a number
of factors and can be quite complicated, many cloud providers offer simplified pricing
structures that are easier to understand. Alternatively, you can contact your cloud provider of
choice for a quote.
Cost of cloud migration execution
The next step is accounting for the costs involved in executing the migration of the IT
operations to the cloud. This is determined by the scope of your current IT infrastructure and
how much of it you plan to move to the cloud. There are also costs for integrating and testing
apps, and possibly consultation fees.
Additional post migration cost
Often, many cloud providers require a monthly infrastructure fee to maintain and improve
your new cloud environment. Costs such as continued integration and testing of apps, training,
labour, security and compliance, administration, and others need to be forecast in order to
determine an accurate post-migration budget.
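The cost buckets above (current data centre, cloud infrastructure, migration, and post-migration fees) can be folded into a simple multi-year comparison. All figures below are placeholders chosen to show the shape of the calculation, not real quotes:

```python
def total_cost(upfront: float, annual: float, years: int) -> float:
    """Total cost over the planning horizon (ignoring discounting and inflation)."""
    return upfront + annual * years

YEARS = 5
# On-premises: hardware and licences up front, then maintenance, labour, facilities.
on_prem = total_cost(upfront=400_000, annual=120_000, years=YEARS)
# Cloud: migration and integration up front, then provider fees, training, compliance.
cloud = total_cost(upfront=50_000, annual=150_000, years=YEARS)

print(f"on-premises: {on_prem:,.0f}")  # on-premises: 1,000,000
print(f"cloud:       {cloud:,.0f}")    # cloud:       800,000
```

Note how the comparison flips with the horizon: the cloud option wins here over five years, but its higher annual cost would eventually overtake the on-premises total on a longer horizon.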
CLOUD COMPUTING SERVICES BY AMAZON
Amazon
In 2006, Amazon Web Services (AWS) started to offer IT services to the market in the
form of web services, which is nowadays known as cloud computing. With this cloud, we
need not plan for servers and other IT infrastructure which takes up much of time in
advance. Instead, these services can instantly spin up hundreds or thousands of servers in
minutes and deliver results faster. We pay only for what we use with no up-front expenses
and no long-term commitments, which makes AWS cost efficient.
Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers a multitude of businesses in 190 countries around the world.
Amazon Web Service Architecture
This is the basic structure of AWS EC2, where EC2 stands for Elastic Compute Cloud. EC2 allows users to use virtual machines of different configurations as per their requirements. It allows various configuration options, mapping of individual servers, various pricing options, etc. We will discuss these in detail in the AWS Products section. Following is the diagrammatic representation of the architecture.
Figure 4.1
Note − In the above diagram S3 stands for Simple Storage Service. It allows the users to store
and retrieve various types of data using API calls. It doesn’t contain any computing element.
We will discuss this topic in detail in AWS products section.
Load Balancing
Load balancing means distributing hardware or software load across web servers, which improves the efficiency of the server as well as the application. Following is the diagrammatic representation of AWS architecture with load balancing.
A hardware load balancer is a very common network appliance used in traditional web application architectures.
AWS provides the Elastic Load Balancing service, which distributes traffic to EC2 instances across multiple Availability Zones and supports dynamic addition and removal of Amazon EC2 hosts from the load-balancing rotation.
Elastic Load Balancing can dynamically grow and shrink the load-balancing capacity to adjust to traffic demands, and it also supports sticky sessions to address more advanced routing needs.
Amazon CloudFront: It is responsible for content delivery, i.e., it is used to deliver websites, which may contain dynamic, static, and streaming content, using a global network of edge locations. Requests for content at the user's end are automatically routed to the nearest edge location, which improves performance.
Amazon CloudFront is optimized to work with other Amazon Web Services, like Amazon S3 and Amazon EC2. It also works fine with any non-AWS origin server and stores the original files in a similar manner.
In Amazon Web Services, there are no contracts or monthly commitments. We pay only for
as much or as little content as we deliver through the service.
Elastic Load Balancer
It is used to spread traffic across web servers, which improves performance. AWS provides the Elastic Load Balancing service, in which traffic is distributed to EC2 instances across multiple Availability Zones, with dynamic addition and removal of Amazon EC2 hosts from the load-balancing rotation.
Elastic Load Balancing can dynamically grow and shrink the load-balancing capacity as per
the traffic conditions.
Security Management
Amazon’s Elastic Compute Cloud (EC2) provides a feature called security groups, which is similar to an inbound network firewall: we specify the protocols, ports, and source IP ranges that are allowed to reach our EC2 instances.
Each EC2 instance can be assigned one or more security groups, each of which admits the appropriate traffic to the instance. Security groups can be configured using specific subnets or IP addresses, which limits access to EC2 instances.
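The firewall behaviour just described can be sketched as a rule check: a request reaches the instance only if some rule matches its protocol, port, and source address. The rule shapes and addresses below are simplified assumptions; real security groups have richer semantics.

```python
# Sketch of a security group as an inbound firewall rule list (concept only).
from ipaddress import ip_address, ip_network

def allowed(rules, protocol, port, source_ip):
    """Return True if any (protocol, port, cidr) rule admits the request."""
    return any(
        protocol == r_proto and port == r_port
        and ip_address(source_ip) in ip_network(r_cidr)
        for r_proto, r_port, r_cidr in rules
    )

web_sg = [
    ("tcp", 443, "0.0.0.0/0"),   # HTTPS allowed from anywhere
    ("tcp", 22, "10.0.0.0/16"),  # SSH allowed from the internal subnet only
]
print(allowed(web_sg, "tcp", 443, "203.0.113.9"))  # True
print(allowed(web_sg, "tcp", 22, "203.0.113.9"))   # False
```

The tiered security models mentioned above amount to giving each host type its own rule list, so, for example, only the web tier can reach the database tier's port.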
Elastic Caches
Amazon ElastiCache is a web service that manages an in-memory cache in the cloud. The cache plays a very important role in memory management: it helps reduce the load on the services and improves performance and scalability on the database tier by caching frequently used information.
Amazon RDS
Amazon RDS (Relational Database Service) provides access similar to that of a MySQL, Oracle, or Microsoft SQL Server database engine: the same queries, applications, and tools can be used with Amazon RDS.
It automatically patches the database software and manages backups as per the user's instructions. It also supports point-in-time recovery. There are no up-front investments required, and we pay only for the resources we use.
Hosting an RDBMS on EC2 Instances
Alternatively, users can install an RDBMS (Relational Database Management System) of their choice, such as MySQL, Oracle, SQL Server, or DB2, on an EC2 instance and manage it as required.
Amazon EC2 uses Amazon EBS (Elastic Block Store), which is similar to network-attached storage. All data and logs for databases running on EC2 instances should be placed on Amazon EBS volumes, which remain available even if the database host fails.
Amazon EBS volumes automatically provide redundancy within the Availability Zone, which increases their availability over simple disks. Furthermore, if a single volume is not sufficient for our database's needs, volumes can be added to increase performance for our database.
Using Amazon RDS, the service provider manages the storage and we only focus on
managing the data.
Storage & Backups
The AWS cloud provides various options for storing, accessing, and backing up web application data and assets. Amazon S3 (Simple Storage Service) provides a simple web-services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web.
Amazon S3 stores data as objects within resources called buckets. Users can store as many objects as required within a bucket, and can read, write, and delete objects from the bucket.
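The bucket-and-object model just described can be sketched as a toy class: a named bucket holds objects keyed by name and supports read, write, and delete. This illustrates the data model only, not the real S3 API; the bucket and key names are invented.

```python
# Toy model of bucket/object semantics (the data model only, not S3 itself).

class Bucket:
    def __init__(self, name):
        self.name = name
        self._objects = {}

    def put_object(self, key, body):
        self._objects[key] = body       # write: create or overwrite an object

    def get_object(self, key):
        return self._objects[key]       # read an object by key

    def delete_object(self, key):
        del self._objects[key]          # delete an object

b = Bucket("my-assets")
b.put_object("images/logo.png", b"...bytes...")
print(b.get_object("images/logo.png"))
b.delete_object("images/logo.png")
```

Note that keys like "images/logo.png" look like file paths but are flat names; the hierarchy is a naming convention, which is one reason the model scales so simply.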
Amazon EBS is effective for data that needs to be accessed as block storage and requires
persistence beyond the life of the running instance, such as database partitions and
application logs.
Amazon EBS volumes can be up to 1 TB in size, and these volumes can be striped for larger capacity and increased performance. Provisioned IOPS volumes are designed to meet the needs of database workloads that are sensitive to storage performance and consistency. Amazon EBS currently supports up to 1,000 IOPS per volume, and multiple volumes can be striped together to deliver thousands of IOPS per instance to an application.
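The striping claim is simple arithmetic: if one volume supplies up to 1,000 IOPS, reaching a higher target means striping across several volumes. The figures follow the text above and may differ from current EBS limits.

```python
# Back-of-the-envelope sizing for striped volumes (figures from the text).
import math

def volumes_needed(target_iops, iops_per_volume=1000):
    """Smallest number of striped volumes whose combined IOPS meets the target."""
    return math.ceil(target_iops / iops_per_volume)

print(volumes_needed(4500))  # 5 volumes striped together
print(volumes_needed(1000))  # 1 volume is enough
```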
Auto Scaling
The difference between the AWS cloud architecture and the traditional hosting model is that AWS can dynamically scale the web application fleet on demand to handle changes in traffic. In the traditional hosting model, traffic forecasting models are generally used to provision hosts ahead of projected traffic. In AWS, instances can be provisioned on the fly according to a set of triggers for scaling the fleet out and back in. Amazon Auto Scaling can create capacity groups of servers that grow or shrink on demand.
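Trigger-based scaling as described above can be sketched as a policy function: the fleet grows when a load metric crosses an upper threshold and shrinks below a lower one, within fixed bounds. The thresholds and group sizes are illustrative assumptions.

```python
# Sketch of a scale-out / scale-in trigger for a capacity group (concept only).

def desired_capacity(current, cpu_percent,
                     scale_out_at=70, scale_in_at=30,
                     min_size=2, max_size=10):
    """Return the new fleet size after evaluating one load measurement."""
    if cpu_percent > scale_out_at:
        return min(current + 1, max_size)  # scale out, capped at the group max
    if cpu_percent < scale_in_at:
        return max(current - 1, min_size)  # scale back in, floored at the min
    return current                         # steady state: no change

print(desired_capacity(4, cpu_percent=85))  # 5
print(desired_capacity(4, cpu_percent=20))  # 3
```

The contrast with forecast-based provisioning is that this decision is re-evaluated continuously against live metrics rather than made once ahead of projected traffic.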
Key Considerations for Web Hosting in AWS
Following are some of the key considerations for web hosting −
No physical network devices needed
In AWS, network devices like firewalls, routers, and load-balancers for AWS applications no
longer reside on physical devices and are replaced with software solutions.
Multiple options are available to ensure quality software solutions. For load balancing choose
Zeus, HAProxy, Nginx, Pound, etc. For establishing a VPN connection choose OpenVPN,
OpenSwan, Vyatta, etc.
No security concerns
AWS provides a more secure model, in which every host is locked down. In Amazon EC2, security groups are designed for each type of host in the architecture, and a large variety of simple and tiered security models can be created to enable minimum access between hosts within your architecture, as per requirement.
Availability of data centres
EC2 instances are available in multiple Availability Zones within each AWS Region, which provides a model for deploying your application across data centres for both high availability and reliability.
CLOUD COMPUTING SERVICES BY GOOGLE
Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, file storage, and YouTube. Alongside a set of management tools, it provides a series of modular cloud services including computing, data storage, data analytics, and machine learning. Registration requires a credit card or bank account details.
Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless computing environments.
In April 2008, Google announced App Engine, a platform for developing and hosting web applications in Google-managed data centers, which was the first cloud computing service from the company. The service became generally available in November 2011. Since the announcement of App Engine, Google has added multiple cloud services to the platform.
Google Cloud Platform is a part of Google Cloud, which includes the Google Cloud Platform
public cloud infrastructure, as well as G Suite, enterprise versions of Android and Chrome
OS, and application programming interfaces (APIs) for machine learning and enterprise
mapping services.
Cloud Functions, Google Cloud's functions as a service (FaaS) offering, provides a serverless execution environment for building and connecting cloud services. With Cloud Functions, you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is triggered when an event being watched is fired. Your code executes in a fully managed environment; there is no need to provision any infrastructure or worry about managing any servers.
Cloud Functions can be written using JavaScript, Python 3, Go, or Java. You can take your
function and run it in any standard Node.js (Node.js 10), Python 3 (Python 3.7), Go (Go 1.11
or 1.13) or Java (Java 11) environment, which makes both portability and local testing a
breeze.
Cloud Functions are a good choice for use cases that include the following:
Data processing and ETL operations, for scenarios such as video transcoding and IoT
streaming data.
Webhooks to respond to HTTP triggers.
Lightweight APIs that compose loosely coupled logic into applications.
Mobile backend functions.
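The webhook use case above suggests the general shape of such a function: a single-purpose handler that the platform invokes per request, with no server code of its own. The sketch below illustrates that shape; the StubRequest class is a local stand-in for the framework's request object, not part of any SDK.

```python
# Shape of a single-purpose HTTP-triggered function (illustrative only).

def handle_webhook(request):
    """Respond to an HTTP trigger with a small JSON-style payload.
    The platform calls this once per incoming request."""
    name = request.args.get("name", "world")
    return {"message": f"hello {name}"}

class StubRequest:
    """Local test double standing in for the platform's request object."""
    def __init__(self, args):
        self.args = args

print(handle_webhook(StubRequest({"name": "cloud"})))  # {'message': 'hello cloud'}
```

All infrastructure concerns (listening sockets, scaling to zero, concurrency) live outside the handler, which is what makes such functions easy to test locally with a stub like this.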
Application platform
App Engine is Google Cloud's platform as a service (PaaS). With App Engine, Google handles most of the management of the resources for you. For example, if your application requires more computing resources because traffic to your website increases, Google automatically scales the system to provide those resources. If the system software needs a security update, that's handled for you, too.
When you build your app on App Engine, you can:
Build your app in Go, Java, .NET, Node.js, PHP, Python, or Ruby and use pre-configured
runtimes, or use custom runtimes to write code in any language.
Let Google manage app hosting, scaling, monitoring, and infrastructure for you.
Connect with Google Cloud storage products, such as Cloud SQL, Firestore in Datastore mode, and Cloud Storage. You can also connect to managed Redis databases, and host third-party databases such as MongoDB and Cassandra on Compute Engine, with another cloud provider, on-premises, or with a third-party vendor.
Use Web Security Scanner to identify security vulnerabilities as a complement to your
existing secure design and development processes.
Google Cloud's unmanaged compute service is Compute Engine. You can think of Compute Engine as providing infrastructure as a service (IaaS), because the system provides a robust computing infrastructure, but you must choose and configure the platform components that you want to use. With Compute Engine, it is your responsibility to configure, administer, and monitor the systems. Google will ensure that resources are available, reliable, and ready for you to use, but it is up to you to provision and manage them. The advantage here is that you have complete control of the systems and unlimited flexibility.
When you build on Compute Engine, you can do the following:
Use virtual machines (VMs), called instances, to build your application, much like you would if you had your own hardware infrastructure. You can choose from a variety of instance types to customize your configuration to meet your needs and your budget.
Choose which global regions and zones to deploy your resources in, giving you control over where your data is stored and used.
Choose which operating systems, development stacks, languages, frameworks, services, and
other software technologies you prefer.
Create instances from public or private images.
Use Google Cloud storage technologies or any third-party technologies you prefer.
Use Google Cloud Marketplace to quickly deploy pre-configured software packages. For
example, you can deploy a LAMP or MEAN stack with just a few clicks.
Create instance groups to more easily manage multiple instances together.
Use autoscaling with an instance group to automatically add and remove capacity.
Attach and detach disks as needed.
Use SSH to connect directly to your instances.
CLOUD COMPUTING SERVICES BY MICROSOFT
Microsoft Azure, commonly referred to as Azure, is a cloud computing service created by Microsoft for building, testing, deploying, and managing applications and services through Microsoft-managed data centres. It provides software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), and supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems.
Azure is Microsoft's big enterprise cloud, offered as a PaaS and IaaS service. It is a popular service among developers who write apps with the support of the company's coding tools. Azure offers the capability to save money, work faster, and integrate data and on-premises apps in a powerful, scalable, and flexible way. This feature-filled service offers a hybrid cloud solution, unlike many other cloud providers that force customers to choose between the public cloud and their own data centres. Hybrid cloud solutions are known to offer more efficiency and economy in the storage, backup, and recovery of data.
Support for Azure has expanded from Windows to Linux as well, opening up the services to more users. Clients pay only for the services they need. With Azure, clients can better provision Windows and Linux VM apps; develop modern mobile and business solution apps for Windows, iOS, and Android; gain insights from data; and manage user accounts, syncing with on-premises data directories.
Deployment of Azure services takes less than 5 minutes, as claimed by Microsoft. 57 percent of Fortune 500 companies already use Azure, and the numbers are expected to rise as the capability offered by Azure improves and expands further.
Azure was announced in October 2008 under the codename "Project Red Dog" and released on February 1, 2010, as Windows Azure, before being renamed Microsoft Azure on March 25, 2014.
Design
Microsoft Azure uses a specialized operating system, also called Microsoft Azure, to run its "fabric layer": a cluster hosted at Microsoft's data centres that manages the computing and storage resources of the computers and provisions those resources (or a subset of them) to applications running on top of Microsoft Azure. Microsoft Azure has been described as a "cloud layer" on top of a number of Windows Server systems, which use Windows Server 2008 and a customized version of Hyper-V, known as the Microsoft Azure Hypervisor, to provide virtualization of services.
Scaling and reliability are controlled by the Microsoft Azure Fabric Controller, which ensures that the services and environment do not fail if one or more servers fail within the Microsoft data centre, and which also manages the user's web application, including memory allocation and load balancing.
Azure provides an API built on REST, HTTP, and XML that allows a developer to interact
with the services provided by Microsoft Azure. Microsoft also provides a client-side
managed class library that encapsulates the functions of interacting with the services. It also
integrates with Microsoft Visual Studio, Git, and Eclipse.
In addition to interacting with services via API, users can manage Azure services using the
Web-based Azure Portal, which reached General Availability in December 2015. The portal
allows users to browse active resources, modify settings, launch new resources, and view
basic monitoring data from active virtual machines and services.
Deployment models
Microsoft Azure offers two deployment models for cloud resources: the "classic" deployment model and Azure Resource Manager (ARM). In the classic model, each Azure resource (virtual machine, SQL database, etc.) was managed individually. Azure Resource Manager, introduced in 2014, enables users to create groups of related services so that closely coupled resources can be deployed, managed, and monitored together.
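The difference between the two models can be sketched in code: in the Resource Manager model, related resources live in a group and are deployed or deleted together, instead of being managed one by one as in the classic model. The class and names below are illustrative, not ARM API calls.

```python
# Sketch of the resource-group idea behind Azure Resource Manager (concept only).

class ResourceGroup:
    def __init__(self, name):
        self.name = name
        self.resources = []

    def deploy(self, resource):
        """Add a resource as part of this group's deployment."""
        self.resources.append(resource)

    def delete(self):
        """One operation tears down every resource in the group."""
        removed = list(self.resources)
        self.resources.clear()
        return removed

rg = ResourceGroup("web-app-rg")
rg.deploy("vm-frontend")
rg.deploy("sql-db")
print(rg.delete())  # ['vm-frontend', 'sql-db']
```

In the classic model the equivalent cleanup would require deleting the virtual machine and the database in separate, independent operations.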
SUMMARY
Within a few years, cloud computing has become a technology that affects everyone's lives
on a daily basis. We store our personal files on the cloud and use cloud-based apps to
maintain friendships. IT departments have also taken a big step in going from being doubtful
of cloud security to spending billions of dollars on cloud services. The cloud gives small,
medium and large sized companies the ability to simply rent the apps and servers they need,
instead of having to buy them.
Simply put, cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping lower your operating costs, run your infrastructure more efficiently and scale as your business needs change.
Web-Based Cloud Computing: Companies use the functionality provided by web services and do not have to develop a full application for their needs. Organizations make use of the unlimited storage potential of the cloud infrastructure: they can expand and shrink their storage space as needed without having to worry about dedicated servers on site. Cloud services also allow people to access the functionality of a particular piece of software without worrying about storage or other issues, and companies can run their applications on the cloud service's platform without having to maintain hard drives and servers. Companies that need to store a lot of data can store all of their data remotely and can even create a virtual data center.
Managed Services: These are applications used by the cloud service providers, such as an anti-spam service.
Service Commerce: It is the creation of a hub of applications that can be used by an
organization’s members. It provides organizations the applications they need along with the
services they desire.
KEY WORDS/ABBREVIATIONS

Machine learning algorithms: Help data scientists identify patterns within sets of
data. Selected based upon the desired outcome—predicting values, identifying
anomalies, finding structure or determining categories—machine learning algorithms
are commonly divided into those used for supervised learning and those used for
unsupervised learning

Microsoft Azure: The Microsoft cloud platform, a growing collection of integrated services, including infrastructure as a service (IaaS) and platform as a service (PaaS) offerings.

Middleware: Software that lies between an operating system and the applications
running on it. It enables communication and data management for distributed
applications, like cloud-based applications, so, for example, the data in one database
can be accessed through another database.

NoSQL: NoSQL is a set of nonrelational database technologies—developed with
unique capabilities to handle high volumes of unstructured and changing data.
NoSQL technology offers dynamic schema, horizontal scaling and the ability to store
and retrieve data as columns, graphs, key-values or documents.

Platform as a service (PaaS): A computing platform (operating system and other
services) delivered as a service over the Internet by a provider. An example is an
application development environment that you can subscribe to and use immediately.
LEARNING ACTIVITY
1. Draw a comparative study of Google and Microsoft Azure Services
___________________________________________________________________________
___________________________________________________________________
2. Draw a strategy to estimate the economics
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. Explain Cloud Economics.
2. What are the major points to be kept in mind while going for Cloud Computing
3. What are the major challenges in accepting Cloud Computing
4. Explain different services provided by Amazon, Google App Engine, Microsoft.
B. Multiple Choice Questions
1. The ________ cloud infrastructure is operated for the exclusive use of an organization.
a) Public
b) Private
c) Community
d) All of the mentioned
2. __________ cloud is one where the cloud has been organized to serve a common function
or purpose.
a) Public
b) Private
c) Community
d) All of the mentioned
3. A ________ cloud combines multiple clouds where those clouds retain their unique identities but are bound together as a unit.
a) Public
b) Private
c) Community
d) Hybrid
4. Which of the following benefits relates to creating resources that are pooled together in a system that supports multi-tenant usage?
a) On-demand self- service
b) Broad network access
c) Resource pooling
d) All of the mentioned
5. The _____ is something that you can obtain under contract from your vendor.
a) PoS
b) QoS
c) SoS
d) All of the mentioned
Answers
1. b
2. c
3. d
4. a
5. b
REFERENCES

"Azure Machine Learning Studio". Machine Learning. Retrieved August 27, 2020.

Directory of Azure Cloud Services, Microsoft.com

"How to monitor Microsoft Azure VMs". Datadog. Retrieved March 19, 2019.

Vaughan-Nichols, Steven J. "Microsoft developer reveals Linux is now more used on
Azure than Windows Server". ZDNet. Retrieved July 2, 2019.

"Meet Windows Azure event June 2012". Weblogs.asp.net. June 7, 2012. Retrieved
June 27, 2013.
UNIT 5: MICROSOFT AZURE 1
STRUCTURE
1. Learning Objectives
2. Introduction
3. Azure -Architecture
4. Difference between Azure Resource Manager (ARM) & Classic Portal.
5. Summary
6. Key Words/Abbreviations
7. Learning Activity
8. Unit End Questions (MCQ and Descriptive)
9. References
LEARNING OBJECTIVES
At the end of this unit the learner will be able to understand and have knowledge of the following aspects of Azure architecture:

Architectural Aspects of Azure

Difference between Azure Resource Manager (ARM) & Classic Portal
INTRODUCTION
Microsoft Azure is a public cloud platform featuring powerful on-demand infrastructure and solutions for building and deploying application workloads, as well as a wide variety of IT and application services. You can use Azure as a public cloud provider and as a hybrid extension to existing on-premises infrastructure. Organizations that use Microsoft solutions on-premises can easily extend their infrastructure and operational processes to Azure.
With the growing popularity of Azure, today's systems administrators need to acquire and strengthen their skills on this fast-growing public cloud platform. In this chapter we explore the Azure public cloud platform with a focus on its Infrastructure-as-a-Service (IaaS) features. We cover general architectural features of the Azure cloud, including geographic regions, availability zones, and the Service Level Agreements (SLAs) attached to the core Azure IaaS infrastructure.
Regions, Availability Zones, Availability Sets, and Uptime SLAs
The Azure cloud environment is segmented logically and physically to provide the following:
Geographic availability: Low-latency access to geographic locations for more rapid application and service access.
Geographic resiliency: Multiple points of presence for distributing applications, workloads, and services to allow for high availability.
Core services are available across the entire infrastructure, including Domain Name System (DNS), security, identity and directory services, and others that are often described as oxygen services.
The geographic layout of Azure is divided into locations grouped into regions, and within each region there are physically separated Availability Zones.
Regions
Azure touts the largest public cloud, and it is growing at the fastest rate by percentage of any public cloud to date, with 54 regions as of this writing. A region is defined as an area within a specific geography that does not span national borders and that contains one or more datacenters.
Regional access is an important consideration for many technical and business reasons. Both
deployment considerations and user experience are affected by the availability of multiple
regions. You must also weigh advantages against design considerations and complexity when
using multiregion architectures.
Using multiple regions to support scale-out application and virtual machine deployments provides a way to ensure resiliency and availability. Another use case is ensuring low-latency access for customers within a specific region (e.g., customers in Asia-Pacific geographies would suffer from latency if they were to access a North American region).
There are also specialty regions that are purpose-built to deal with regulatory and
governmental boundaries. These include the following:

US Gov Virginia and US Gov Iowa

China East and China North

Germany Central and Germany Northeast
Each specialty region is designed to solve for specific governmental and security regulations
that require distinct cloud environments for targeted customers with these requirements
(e.g., FedRAMP, DISA).
Regional clouds in China and Germany provide local datacenter operations to be controlled
by country-specific providers, which is a requirement for data sovereignty and other
regulatory boundaries specific to those regions.
Paired Regions
Another feature within Azure is Paired Regions. These regions are in the same geography
but are typically at least 300 miles apart and provide the ability to deploy cross-region
services and applications while maintaining geographic residency.
Paired Regions also have operational processes that ensure that sequential updates occur and
that prioritized regional recovery occurs in the event of an outage. This provides you with
better resiliency options for application and systems architects to use when designing your
Azure solutions.
Specific Azure services have replication options and will take advantage of the paired
region, as the replication target in order to maintain geographic residency for data and
application workloads.
Figure 5-1
Using Paired Regions enables deployment patterns that can include applications that are replicated rather than used in a distributed deployment. This enables active-passive deployment patterns with low-latency access to the second region for rapid recovery in the case of a fault.
Paired Regions services that can be replicated include compute (Azure Virtual Machines),
Storage, and Database services. Additional third-party products are available to replicate
resources and data outside of the native Azure offerings.
AZURE -ARCHITECTURE
Azure as PaaS (Platform as a Service)
As the name suggests, a platform is provided to clients to develop and deploy software. Clients can focus on application development rather than having to worry about hardware and infrastructure. The platform also takes care of most operating system, server, and networking issues.
Pros

The overall cost is low as the resources are allocated on demand and servers are
automatically updated.

It is less vulnerable, as servers are automatically updated and checked for all known security issues. The whole process is not visible to the developer and thus does not pose a risk of a data breach.

Since new versions of development tools are tested by the Azure team, it becomes easy
for developers to move on to new tools. This also helps the developers to meet the
customer’s demand by quickly adapting to new versions.
Cons

There are portability issues with using PaaS: the environment at Azure may differ, so the application might have to be adapted accordingly.
Azure as IaaS (Infrastructure as a Service)
It is a managed compute service that gives complete control of the operating system and the application platform stack to the application developers. It lets users access, manage, and monitor the data centres themselves.
Pros

This is ideal for the application where complete control is required. The virtual
machine can be completely adapted to the requirements of the organization or
business.

IaaS facilitates very efficient design-time portability. This means an application can be migrated to Windows Azure without rework, and all its dependencies, such as the database, can also be migrated to Azure.

IaaS allows quick transition of services to clouds, which helps the vendors to offer
services to their clients easily. This also helps the vendors to expand their business by
selling the existing software or services in new markets.
Cons

Since users are given complete control, they are tempted to stick to a particular version of an application's dependencies. It might then become difficult for them to migrate the application to future versions.

There are many factors that increase the cost of operation, for example, higher server maintenance costs for patching and upgrading software.

There are many security risks from unpatched servers. Some companies have well-defined processes for testing and updating on-premises servers for security vulnerabilities. These processes need to be extended to the cloud-hosted IaaS VMs to mitigate hacking risks.

Unpatched servers pose a great security risk. Unlike PaaS, there is no provision for automatic server patching in IaaS. An unpatched server holding sensitive information can be very vulnerable, affecting the entire business of an organization.

It is difficult to maintain legacy apps in IaaS; they can be stuck with older versions of operating systems and application stacks, resulting in applications that are difficult to maintain and extend with new functionality over time.
It is necessary to understand the pros and cons of both services in order to choose the right one according to your requirements. In conclusion, PaaS has definite economic advantages over IaaS for operating commodity applications, where the cost of IaaS operations can break the business model; IaaS, on the other hand, gives complete control of the OS and application platform stack.
Like other cloud platforms, Microsoft Azure depends on a technology called virtualization,
which is the emulation of computer hardware in software. This is made possible by the fact
that most computer hardware works by following a set of instructions encoded directly into
the silicon. By mapping software instructions to emulate hardware instructions, virtualized
hardware can use software to function like “real” hardware.
Cloud providers maintain multiple data centres, each one having hundreds (if not thousands)
of physical servers that execute virtualized hardware for customers. Microsoft Azure
architecture runs on a massive collection of servers and networking hardware, which, in turn,
hosts a complex collection of applications that control the operation and configuration of the
software and virtualized hardware on these servers.
This complex orchestration is what makes Azure so powerful. It ensures that users no longer
have to spend their time maintaining and upgrading computer hardware as Azure takes care
of it all behind the scenes.
HOW AZURE WORKS
It is essential to understand the internal workings of Azure so that we can design our
applications on Azure effectively with high availability, data residency, resilience, etc.
Microsoft Azure is completely based on the concept of virtualization. So, like any other
virtualized data centre, it contains racks. Each rack has a separate power unit and
network switch, and each rack is integrated with software called the Fabric Controller.
The Fabric Controller is a distributed application responsible for managing and
monitoring the servers within its rack. In case of any server failure, the Fabric Controller
recognizes it and recovers it. Each Fabric Controller is, in turn, connected to a
piece of software called the Orchestrator, which exposes web services and REST APIs to
create, update, and delete resources.
Figure 5.2
When a request is made by the user, either through PowerShell or the Azure portal, it first
goes to the Orchestrator, which fundamentally does three things:
1. Authenticates the user.
2. Authorizes the user, i.e., checks whether the user is allowed to perform the
requested task.
3. Looks into the database for the availability of capacity based on the requested resources
and passes the request to an appropriate Azure Fabric Controller to execute it.
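The three-step flow above can be sketched as a toy orchestrator in Python. All names (users, fabric controllers, capacities) are invented for illustration; this is not Azure's actual API.

```python
# Illustrative sketch of the Orchestrator's request flow described above.
# Users, fabric controllers, and capacities are hypothetical, not real Azure APIs.

class Orchestrator:
    def __init__(self, users, fabric_controllers):
        self.users = users                            # user -> set of allowed actions
        self.fabric_controllers = fabric_controllers  # controller -> free capacity

    def handle(self, user, action, size):
        # 1. Authenticate the user
        if user not in self.users:
            return "authentication failed"
        # 2. Authorize: is the user allowed to perform the requested task?
        if action not in self.users[user]:
            return "authorization failed"
        # 3. Find a fabric controller with enough free capacity and route the request
        for name, free in self.fabric_controllers.items():
            if free >= size:
                self.fabric_controllers[name] -= size
                return f"routed to {name}"
        return "no capacity available"

orch = Orchestrator({"alice": {"create_vm"}}, {"fc-rack-1": 4, "fc-rack-2": 8})
print(orch.handle("alice", "create_vm", 6))   # routed to fc-rack-2
print(orch.handle("bob", "create_vm", 1))     # authentication failed
```

In the real system, of course, authentication and authorization are separate Azure services and capacity placement is far more sophisticated; the sketch only mirrors the order of the three checks.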
Combinations of racks form a cluster. There are multiple clusters within a data centre,
multiple data centres within an availability zone, multiple availability zones within a
region, and multiple regions within a geography.
o Geographies: A geography is a discrete market, typically containing two or more regions,
that preserves data residency and compliance boundaries.
o Azure regions: A region is a collection of data centres deployed within a defined
perimeter and interconnected through a dedicated regional low-latency network.
Azure covers more global regions than any other cloud provider, which offers the scalability
needed to bring applications and users closer together around the world. It is globally
available in 50 regions around the world. This availability across many regions helps in
preserving data residency and offers comprehensive compliance and flexible options to
customers.
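Under stated assumptions, the containment hierarchy above (geography, region, availability zone, data centre, cluster, rack) can be modelled as a nested structure; all names below are invented for illustration.

```python
# A toy model of the containment hierarchy described above:
# geography > region > availability zone > data centre > cluster > rack.
# All names are made up for illustration.
topology = {
    "Europe": {                       # geography
        "West Europe": {              # region
            "AZ-1": {                 # availability zone
                "DC-1": {             # data centre
                    "Cluster-A": ["Rack-01", "Rack-02"],
                },
            },
        },
    },
}

def count_racks(node):
    """Recursively count racks at the leaves of the hierarchy."""
    if isinstance(node, list):
        return len(node)
    return sum(count_racks(child) for child in node.values())

print(count_racks(topology))  # 2
```

The point of the model is only that each level strictly contains the next; real Azure topology is not exposed to customers at this granularity.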
DIFFERENCE BETWEEN AZURE RESOURCE MANAGER (ARM) &
CLASSIC PORTAL.
This cloud platform from Microsoft has been in the market for seven years and has
made significant improvements during that time. One such improvement is the
introduction of a new deployment model called Azure Resource Manager (ARM). With the
announcement of this new model, a range of questions and misconceptions came to light.
It is common to hear questions like: Should I choose the ARM portal or Classic? Should I
upgrade to ARM if I have already deployed on classic? What is the difference between
ARM and Classic?
Figure 5.3
All these questions are valid, and it is, of course, essential to know the technology before
deploying it. There are some stark differences between ARM and Azure classic (the ASM
portal), and this section covers all the major ones to help you make an informed decision.
Classic Azure Portal
The defining feature of this portal is that it is used to create and configure resources that
support only the classic (ASM) deployment model. The network characteristics of a virtual
machine are determined by a mandatory cloud service that serves as a logical container for
virtual machines. This means a VM in classic Azure must live inside a virtual container
called a cloud service, which also implies that one can have multiple VMs inside a single
umbrella called a cloud service.
However, all the VMs under a single cloud service share a single VIP to maintain the
availability of the VMs and load balancing. Furthermore, cloud services in this model
support virtual networks but do not necessarily enforce their use. Along with this, there are
some other characteristics of classic Azure:
 The API set used by ASM is an XML-driven REST API.
 Security features like Network Security Groups on VMs can be configured using Azure
PowerShell.
ARM Portal
There is no dedicated support for cloud services; to provide equivalent functionality,
ARM offers several additional resource types. ARM has a logical container called a
resource group, within which a user can create and configure all resources. This makes
all Azure resource-related tasks easy and streamlined. Most importantly, deletion of
resources is easier in ARM than in the classic portal.
In addition, private portals can also be created by leveraging an on-premises data centre.
Besides these, there are some other benefits of ARM:
 Unlike classic Azure, fine-grained access control with the help of RBAC is possible in
ARM on all the resources in a resource group.
 Deployment using JSON-based templates is possible in ARM.
 The resources on the ARM portal can be logically organised within an Azure subscription
and can be tagged if required.
 Deletion of resources is also easier in ARM than in classic Azure, as the resources are
grouped.
 JSON templates can be created to configure the entire pattern.
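As a hedged illustration of such JSON-based templates, a minimal ARM template that deploys a single storage account might look like the following. The resource type and overall structure are standard, but the parameter name and API version here are examples and may need adjusting for a real deployment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Such a template is submitted to ARM as a whole, and ARM works out the creation order and idempotent updates for the resources it declares.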
As of now, both modes are available to users, and it is necessary to pay attention to the
features that each one offers. Some functions are still present only in the old portal,
but Microsoft is rapidly bringing new functionality to ARM.
Figure 5.4
Having said all that, it is likely that the classic model will become obsolete in the
near future. So, if you are new to Azure, it is a wise decision to deploy with ARM and
harness its advantages. Additionally, it can be cost-effective to outsource these kinds of
business requirements, as an outsourcing company may already have the infrastructure
deployed to implement the model at different locations.
SUMMARY
In this technology-driven world, businesses are increasingly focused on maximizing the
effectiveness of shared resources rather than on the products that differentiate
their projects and offerings. In this pursuit, they consistently develop and deploy
technologies that support their objectives and goals. Companies like Amazon have
invested hugely in computing infrastructure to decrease the costs of maintaining their
expensive existing technology.
With the emergence of more disruptive technologies, cloud computing became a
possibility. Cloud computing is basically a model for enabling ubiquitous, on-demand,
convenient network access to a shared pool of configurable computing resources. And
Microsoft Azure is a cloud platform that provides services to developers to build, deploy,
and manage business applications. It is a breakthrough service that is considered both a
PaaS and an IaaS offering. In fact, the services of the Azure cloud include data storage,
analytics, networking, hybrid integration, identity and access management, Internet of
Things, DevOps, migration, etc.
KEY WORDS/ABBREVIATIONS
 Management groups: Logical containers that you use for one or more subscriptions.
You can define a hierarchy of management groups, subscriptions, resource groups,
and resources to efficiently manage access, policies, and compliance through
inheritance.
 Subscription: A logical container for your resources. Each Azure resource is
associated with only one subscription. Creating a subscription is the first step in
adopting Azure.
 Azure account: The email address that you provide when you create an Azure
subscription is the Azure account for the subscription. The party associated with
the email account is responsible for the monthly costs incurred by the
resources in the subscription. When you create an Azure account, you provide contact
information and billing details, like a credit card. You can use the same Azure
account (email address) for multiple subscriptions. Each subscription is associated
with only one Azure account.
 Identity: A thing that can get authenticated. An identity can be a user with a
username and password. Identities also include applications or other servers that
might require authentication through secret keys or certificates.
 Azure AD account: An identity created through Azure AD or another Microsoft
cloud service, such as Office 365. Identities are stored in Azure AD and are accessible to
your organization's cloud service subscriptions. This account is also sometimes called
a work or school account.
LEARNING ACTIVITY
1. With respect to an organization, draw up a comparative study of ARM and the Classic Portal.
___________________________________________________________________________
___________________________________________________________________
2. Study the Azure Architecture of any healthcare organization.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. Explain Microsoft Azure with its benefits
2. Discuss the architecture of Microsoft Azure.
3. What are the various features of Microsoft Azure?
4. Differentiate between Azure Resource Manager (ARM) & Classic Portal.
B. Multiple Choice Questions
1. Which of the following standards does Azure use?
a) REST
b) XML
c) HTML
d) All of the mentioned
2. What does IPsec in the Azure platform refer to?
a) Internet Protocol Security protocol suite
b) Internet Standard
c) Commodity servers
d) All of the mentioned
3. Which of the following web applications can be deployed with Azure?
a) ASP.NET
b) PHP
c) WCF
d) All of the mentioned
4. A _________ role is a virtual machine instance running the Microsoft IIS Web server that
can accept and respond to HTTP or HTTPS requests.
a) Web
b) Server
c) Worker
d) Client
5. Which of the following elements allows you to create and manage virtual machines that
serve in either a Web role or a Worker role?
a) Compute
b) Application
c) Storage
d) None of the mentioned
Answers
1. d
2. a
3. d
4. a
5. a
REFERENCES
 "Azure Machine Learning Studio". Machine Learning. Retrieved August 27, 2020.
 Directory of Azure Cloud Services, Microsoft.com.
 "How to monitor Microsoft Azure VMs". Datadog. Retrieved March 19, 2019.
 Vaughan-Nichols, Steven J. "Microsoft developer reveals Linux is now more used on
Azure than Windows Server". ZDNet. Retrieved July 2, 2019.
 "Meet Windows Azure event June 2012". Weblogs.asp.net. June 7, 2012. Retrieved
June 27, 2013.
 "Web App Service - Microsoft Azure". Microsoft.
 "Mobile Engagement - Microsoft Azure". azure.microsoft.com. Retrieved July 27, 2016.
 "HockeyApp - Microsoft Azure". azure.microsoft.com. Retrieved July 27, 2016.
 "File Storage". Microsoft. Retrieved January 7, 2017.
UNIT 6: MICROSOFT AZURE 2
STRUCTURE
1. Learning Objectives
2. Introduction
3. Azure Configuration
4. Diagnostics
5. Monitoring and Deployment of web apps.
6. Summary
7. Key Words/Abbreviations
8. Learning Activity
9. Unit End Questions (MCQ and Descriptive)
10. References
LEARNING OBJECTIVES
At the end of this unit, the learner will be able to understand the following aspects of
Azure:
 Configuration of Azure
 Diagnostics features of Azure
 Monitoring and deployment of web apps in Azure
INTRODUCTION
Cloud environments provide an online portal experience, making it easy for users to manage
compute, storage, network, and application resources. For example, in the Azure portal, a
user can create a virtual machine (VM) configuration specifying the following: the VM size
(with regard to CPU, RAM, and local disks), the operating system, any predeployed software,
the network configuration, and the location of the VM. The user then can deploy the VM
based on that configuration and within a few minutes access the deployed VM. This quick
deployment compares favorably with the previous mechanism for deploying a physical
machine, which could take weeks just for the procurement cycle. In addition to the public
cloud just described, there are private and hybrid clouds. In a private cloud, you create a
cloud environment in your own datacenter and provide self-service access to compute
resources to users in your organization. This offers a simulation of a public cloud to your
users, but you remain completely responsible for the purchase and maintenance of the
hardware and software services you provide. A hybrid cloud integrates public and private
clouds, allowing you to host workloads in the most appropriate location. For example, you
could host a high-scale website in the public cloud and link it to a highly secure database
hosted in your private cloud (or on-premises datacenter). Microsoft provides support for
public, private, and hybrid clouds. Microsoft Azure, the focus of this book, is a public cloud.
Microsoft Azure Stack is an add-on to Windows Server 2016 that allows you to deploy many
core Azure services in your own datacenter and provides a self-service portal experience to
your users. You can integrate these into a hybrid cloud through the use of a virtual private
network.
AZURE CONFIGURATION
Azure App Configuration provides a service to centrally manage application settings and
feature flags. Modern programs, especially programs running in a cloud, generally have many
components that are distributed in nature. Spreading configuration settings across these
components can lead to hard-to-troubleshoot errors during an application deployment. Use
App Configuration to store all the settings for your application and secure their access in
one place.
WHY USE APP CONFIGURATION?
Cloud-based applications often run on multiple virtual machines or containers in multiple
regions and use multiple external services. Creating a robust and scalable application in a
distributed environment presents a significant challenge.
Various programming methodologies help developers deal with the increasing complexity of
building applications. For example, the Twelve-Factor App describes many well-tested
architectural patterns and best practices for use with cloud applications. One key
recommendation from this guide is to separate configuration from code. An application's
configuration settings should be kept external to its executable and read in from its runtime
environment or an external source.
While any application can make use of App Configuration, the following examples are the
types of application that benefit from the use of it:
 Microservices based on Azure Kubernetes Service, Azure Service Fabric, or other
containerized apps deployed in one or more geographies
 Serverless apps, which include Azure Functions or other event-driven stateless
compute apps
 Continuous deployment pipelines
App Configuration offers the following benefits:
 A fully managed service that can be set up in minutes
 Flexible key representations and mappings
 Tagging with labels
 Point-in-time replay of settings
 Dedicated UI for feature flag management
 Comparison of two sets of configurations on custom-defined dimensions
 Enhanced security through Azure-managed identities
 Encryption of sensitive information at rest and in transit
 Native integration with popular frameworks
App Configuration complements Azure Key Vault, which is used to store application secrets.
App Configuration makes it easier to implement the following scenarios:
 Centralize management and distribution of hierarchical configuration data for different
environments and geographies
 Dynamically change application settings without the need to redeploy or restart an
application
 Control feature availability in real time
USE APP CONFIGURATION
The easiest way to add an App Configuration store to your application is through a client
library provided by Microsoft. The available methods for connecting your application
depend on your chosen language and framework.
Key groupings
App Configuration provides two options for organizing keys:
 Key prefixes
 Labels
You can use either one or both options to group your keys.
Key prefixes are the beginning parts of keys. You can logically group a set of keys by using
the same prefix in their names. Prefixes can contain multiple components connected by a
delimiter, such as /, similar to a URL path, to form a namespace. Such hierarchies are useful
when you're storing keys for many applications, component services, and environments in one
App Configuration store.
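As a rough local illustration (the store contents and key names are invented), prefix-based grouping amounts to filtering a flat set of keys on a shared, "/"-delimited namespace:

```python
# Illustrative only: grouping flat configuration keys by a "/"-delimited prefix,
# similar to how App Configuration key prefixes form a namespace.
store = {
    "AppA/Service1/Endpoint": "https://a1.example",
    "AppA/Service2/Endpoint": "https://a2.example",
    "AppB/Service1/Endpoint": "https://b1.example",
}

def keys_with_prefix(store, prefix):
    """Return the subset of keys that live under the given namespace prefix."""
    return {k: v for k, v in store.items() if k.startswith(prefix + "/")}

print(keys_with_prefix(store, "AppA"))
```

A single store can thus hold settings for many applications and environments, with each component reading only its own slice of the namespace.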
An important thing to keep in mind is that keys are what your application code references to
retrieve the values of the corresponding settings. Keys shouldn't change, or else you'll have to
modify your code each time that happens.
Labels are an attribute on keys. They're used to create variants of a key. For example, you can
assign labels to multiple versions of a key. A version might be an iteration, an environment, or
some other contextual information. Your application can request an entirely different set of
key values by specifying another label. As a result, all key references remain unchanged in
your code.
Key-value compositions
App Configuration treats all keys stored with it as independent entities. App Configuration
doesn't attempt to infer any relationship between keys or to inherit key values based on their
hierarchy. You can aggregate multiple sets of keys, however, by using labels coupled with
proper configuration stacking in your application code.
Let's look at an example. Suppose you have a setting named Asset1, whose value might vary
based on the development environment. You create a key named "Asset1" twice: once with an
empty label and once with a label named "Development". Under the empty label you put the
default value for Asset1, and under "Development" you put the environment-specific value.
In your code, you first retrieve the key values without any labels, and then you retrieve the
same set of key values a second time with the "Development" label. When you retrieve the
values the second time, the previous values of the keys are overwritten. The .NET Core
configuration system allows you to "stack" multiple sets of configuration data on top of each
other. If a key exists in more than one set, the last set that contains it is used. With a modern
programming framework, such as .NET Core, you get this stacking capability for free if you
use a native configuration provider to access App Configuration. The following code snippet
shows how you can implement stacking in a .NET Core application:
C#
// Augment the ConfigurationBuilder with Azure App Configuration
// Pull the connection string from an environment variable
configBuilder.AddAzureAppConfiguration(options => {
    options.Connect(configuration["connection_string"])
        .Select(KeyFilter.Any, LabelFilter.Null)
        .Select(KeyFilter.Any, "Development");
});
The article "Use labels to enable different configurations for different environments"
provides a complete example.
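The "last set wins" stacking shown above can be simulated locally in Python. This sketch only mimics the behaviour of the .NET Core configuration stack; it is not the App Configuration client, and the store contents are invented:

```python
# Simulates the configuration stacking described above: load the unlabeled
# (default) values first, then overlay the "Development" label on top.
# The store contents are invented for illustration.
store = [
    {"key": "Asset1", "label": None,          "value": "default-value"},
    {"key": "Asset1", "label": "Development", "value": "dev-value"},
    {"key": "Asset2", "label": None,          "value": "shared-value"},
]

def select(store, label):
    """Return the key/value pairs for one label, like a .Select(..., label) call."""
    return {e["key"]: e["value"] for e in store if e["label"] == label}

config = {}
config.update(select(store, None))            # defaults first
config.update(select(store, "Development"))   # label overlay wins on conflict

print(config["Asset1"])  # dev-value
print(config["Asset2"])  # shared-value
```

Asset1 is overridden by the "Development" overlay, while Asset2, which has no labeled variant, keeps its default value.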
App Configuration bootstrap
To access an App Configuration store, you can use its connection string, which is available in
the Azure portal. Because connection strings contain credential information, they're
considered secrets. These secrets need to be stored in Azure Key Vault, and your code must
authenticate to Key Vault to retrieve them.
A better option is to use the managed identities feature in Azure Active Directory. With
managed identities, you need only the App Configuration endpoint URL to bootstrap access to
your App Configuration store. You can embed the URL in your application code (for example,
in the appsettings.json file).
App or function access to App Configuration
You can provide access to App Configuration for web apps or functions by using any of the
following methods:
 Through the Azure portal, enter the connection string to your App Configuration store
in the Application settings of App Service.
 Store the connection string to your App Configuration store in Key Vault and reference
it from App Service.
 Use Azure managed identities to access the App Configuration store.
 Push configuration from App Configuration to App Service. App Configuration
provides an export function (in the Azure portal and the Azure CLI) that sends data
directly into App Service. With this method, you don't need to change the application
code at all.
Reduce requests made to App Configuration
Excessive requests to App Configuration can result in throttling or overage charges. To reduce
the number of requests made:
 Increase the refresh timeout, especially if your configuration values do not change
frequently. Specify a new refresh timeout using the SetCacheExpiration method.
 Watch a single sentinel key, rather than watching individual keys. Refresh all
configuration only if the sentinel key changes.
 Use Azure Event Grid to receive notifications when configuration changes, rather than
constantly polling for any changes.
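A minimal sketch of the sentinel-key pattern, assuming a generic fetch function in place of a real App Configuration client (SentinelRefresher and its parameters are hypothetical names, not SDK API):

```python
import time

# Sketch of the sentinel-key pattern described above: instead of watching every
# key, poll one sentinel and refresh the whole configuration only when it
# changes. The fetch functions stand in for real App Configuration calls.
class SentinelRefresher:
    def __init__(self, fetch_all, fetch_sentinel, cache_expiration=30.0):
        self.fetch_all = fetch_all
        self.fetch_sentinel = fetch_sentinel
        self.cache_expiration = cache_expiration
        self._sentinel = fetch_sentinel()
        self._config = fetch_all()
        self._last_check = time.monotonic()

    def get(self, key):
        now = time.monotonic()
        # Only contact the store once the cache has expired.
        if now - self._last_check >= self.cache_expiration:
            self._last_check = now
            sentinel = self.fetch_sentinel()      # one cheap request
            if sentinel != self._sentinel:        # config changed: full refresh
                self._sentinel = sentinel
                self._config = self.fetch_all()
        return self._config[key]
```

With a long cache expiration, most get() calls are served from the local cache; at most one cheap sentinel request is made per expiration window, and a full refresh happens only when the sentinel value actually changes.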
Importing configuration data into App Configuration
App Configuration offers the option to bulk import your configuration settings from your
current configuration files using either the Azure portal or CLI. You can also use the same
options to export values from App Configuration, for example between related stores. If you’d
like to set up an ongoing sync with your GitHub repo, you can use the App Configuration
GitHub Action, so that you can continue using your existing source control practices while
getting the benefits of App Configuration.
Multi-region deployment in App Configuration
App Configuration is a regional service. For applications with different configurations per
region, storing these configurations in one instance can create a single point of failure.
Deploying one App Configuration instance per region across multiple regions may be a better
option. It can help with regional disaster recovery, performance, and security siloing.
Configuring by region also improves latency and uses separate throttling quotas, since
throttling is per instance. To apply disaster-recovery mitigation, you can use multiple
configuration stores.
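The per-region deployment with a disaster-recovery fallback can be sketched as follows (store names and contents are invented, and a real client would catch a network error rather than a KeyError):

```python
# Sketch of the per-region deployment described above: each region reads from
# its own store, with another region's store as a disaster-recovery fallback.
# Store names and contents are invented.
stores = {
    "westeurope":  {"Greeting": "hallo"},
    "northeurope": {"Greeting": "hello"},
}

def get_setting(key, region, fallback_region):
    """Try the regional store first; fall back to the paired region on failure."""
    try:
        return stores[region][key]
    except KeyError:                      # regional store unreachable or missing
        return stores[fallback_region][key]

print(get_setting("Greeting", "westeurope", "northeurope"))   # hallo
print(get_setting("Greeting", "eastus", "northeurope"))       # hello (failover)
```

Each region normally serves its own values (and consumes its own throttling quota); only when the regional store is unavailable does the application fall back to its paired region.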
DIAGNOSTICS
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from
the guest operating system of Azure compute resources including virtual machines.
Note
Azure Diagnostics extension is one of the agents available to collect monitoring data from the
guest operating system of compute resources.
Primary scenarios
The primary scenarios addressed by the diagnostics extension are:
 Collect guest metrics into Azure Monitor Metrics.
 Send guest logs and metrics to Azure Storage for archiving.
 Send guest logs and metrics to Azure Event Hubs to send outside of Azure.
Comparison to Log Analytics agent
The Log Analytics agent in Azure Monitor can also be used to collect monitoring data from
the guest operating system of virtual machines. You may choose to use either or both
depending on your requirements.
The key differences to consider are:
 Azure Diagnostics extension can be used only with Azure virtual machines. The Log
Analytics agent can be used with virtual machines in Azure, other clouds, and
on-premises.
 Azure Diagnostics extension sends data to Azure Storage, Azure Monitor
Metrics (Windows only), and Event Hubs. The Log Analytics agent collects data
to Azure Monitor Logs.
 The Log Analytics agent is required for solutions, Azure Monitor for VMs, and
other services such as Azure Security Center.
Windows diagnostics extension (WAD)
Windows Event logs: Events from the Windows event log.
Performance counters: Numerical values measuring the performance of different aspects of
the operating system and workloads.
IIS logs: Usage information for IIS web sites running on the guest operating system.
Application logs: Trace messages written by your application.
.NET EventSource logs: Code writing events using the .NET EventSource class.
Manifest-based ETW logs: Event Tracing for Windows events generated by any process.
Crash dumps (logs): Information about the state of the process if an application crashes.
File-based logs: Logs created by your application or service.
Agent diagnostic logs: Information about Azure Diagnostics itself.
Table 6.1
Costs
There is no cost for the Azure Diagnostics extension, but you may incur charges for the data
ingested.
Table 6.2
Data collected
The following tables list the data that can be collected by the Windows and Linux diagnostics
extensions.
LINUX DIAGNOSTICS EXTENSION (LAD)
Syslog: Events sent to the Linux event logging system.
Performance counters: Numerical values measuring the performance of different aspects of
the operating system and workloads.
Log files: Entries sent to a file-based log.
Data destinations
The Azure Diagnostics extension for both Windows and Linux always collects data into an
Azure Storage account.
Configure one or more data sinks to send data to additional destinations. The following
sections list the sinks available for the Windows and Linux diagnostics extensions.
Windows diagnostics extension (WAD)
Azure Monitor Metrics: Collect performance data into Azure Monitor Metrics.
Event hubs: Use Azure Event Hubs to send data outside of Azure.
Azure Storage blobs: Write data to blobs in Azure Storage in addition to tables.
Application Insights: Collect data from applications running in your VM into Application
Insights to integrate with other application monitoring.
Table 6.3
You can also collect WAD data from storage into a Log Analytics workspace to analyse it
with Azure Monitor Logs, although the Log Analytics agent is typically used for this
functionality. The agent can send data directly to a Log Analytics workspace and supports
solutions and insights that provide additional functionality.
Linux diagnostics extension (LAD)
LAD writes data to tables in Azure Storage. It also supports the sinks in the following table.
Event hubs: Use Azure Event Hubs to send data outside of Azure.
Azure Storage blobs: Write data to blobs in Azure Storage in addition to tables.
Azure Monitor Metrics: Install the Telegraf agent in addition to LAD.
Table 6.4
Installation and configuration
The Diagnostic extension is implemented as a virtual machine extension in Azure, so it
supports the same installation options using Resource Manager templates, PowerShell, and
CLI.
MONITORING AND DEPLOYMENT OF WEB APPS.
Azure platform as a service (PaaS) offerings manage compute resources for you and affect
how you monitor deployments. Azure includes multiple monitoring services, each of which
performs a specific role. Together, these services deliver a comprehensive solution for
collecting, analysing, and acting on telemetry from your applications and the Azure resources
they consume.
This scenario addresses the monitoring services you can use and describes a dataflow model
for use with multiple data sources. When it comes to monitoring, many tools and services
work with Azure deployments. In this scenario, we choose readily available services
precisely because they are easy to consume.
Relevant use cases
Other relevant use cases include:
 Instrumenting a web application for monitoring telemetry.
 Collecting front-end and back-end telemetry for an application deployed on Azure.
 Monitoring metrics and quotas associated with services on Azure.
Architecture
This scenario uses a managed Azure environment to host an application and data tier. The
data flows through the scenario as follows:
1. A user interacts with the application.
2. The browser and app service emit telemetry.
3. Application Insights collects and analyses application health, performance, and usage
data.
4. Developers and administrators can review health, performance, and usage
information.
5. Azure SQL Database emits telemetry.
6. Azure Monitor collects and analyses infrastructure metrics and quotas.
7. Log Analytics collects and analyses logs and metrics.
8. Developers and administrators can review health, performance, and usage
information.
Components
 Azure App Service is a PaaS offering for building and hosting apps in managed virtual
machines. The underlying compute infrastructure on which your apps run is managed
for you. App Service provides monitoring of resource usage quotas and app metrics,
logging of diagnostic information, and alerts based on metrics. Even better, you can use
Application Insights to create availability tests for testing your application from
different regions.
 Application Insights is an extensible Application Performance Management (APM)
service for developers and supports multiple platforms. It monitors the application,
detects application anomalies such as poor performance and failures, and sends
telemetry to the Azure portal. Application Insights can also be used for logging,
distributed tracing, and custom application metrics.
 Azure Monitor provides base-level infrastructure metrics and logs for most services in
Azure. You can interact with the metrics in several ways, including charting them in the
Azure portal, accessing them through the REST API, or querying them using
PowerShell or the CLI. Azure Monitor also feeds its data directly into Log Analytics and
other services, where you can query and combine it with data from other sources on
premises or in the cloud.
 Log Analytics helps correlate the usage and performance data collected by
Application Insights with configuration and performance data across the Azure
resources that support the app. This scenario uses the Azure Log Analytics agent to
push SQL Server audit logs into Log Analytics. You can write queries and view data
in the Log Analytics blade of the Azure portal.
DevOps considerations
Monitoring
A recommended practice is adding Application Insights to your code during development
using the Application Insights SDKs, and customizing per application. These open-source
SDKs are available for most application frameworks. To enrich and control the data you
collect, incorporate the use of the SDKs both for testing and production deployments into
your development process. The main requirement is for the app to have a direct or indirect
line of sight to the Applications Insights ingestion endpoint hosted with an Internet- facing
address. You can then add telemetry or enrich an existing telemetry collection.
Runtime monitoring is another easy way to get started. The telemetry that is collected must
be controlled through configuration files. For example, you can include runtime methods that
enable tools such as Application Insights Status Monitor to deploy the SDKs into the correct
folder and add the right configurations to begin monitoring.
Like Application Insights, Log Analytics provides tools for analysing data across sources,
creating complex queries, and sending proactive alerts on specified conditions. You can also
view telemetry in the Azure portal. Log Analytics adds value to existing monitoring services
such as Azure Monitor and can also monitor on-premises environments.
Both Application Insights and Log Analytics use Azure Log Analytics Query Language. You
can also use cross-resource queries to analyse the telemetry gathered by Application Insights
and Log Analytics in a single query.
Azure Monitor, Application Insights, and Log Analytics all send alerts. For example, Azure
Monitor alerts on platform-level metrics such as CPU utilization, while Application Insights
alerts on application-level metrics such as server response time. Azure Monitor alerts on new
events in the Azure Activity Log, while Log Analytics can issue alerts about metrics or event
data for the services configured to use it. Unified alerts in Azure Monitor is a new, unified
alerting experience in Azure that uses a different taxonomy.
Alternatives
This article describes conveniently available monitoring options with popular features, but
you have many choices, including the option to create your own logging mechanisms. A
recommended practice is to add monitoring services as you build out tiers in a solution. Here
are some possible extensions and alternatives:
• Consolidate Azure Monitor and Application Insights metrics in Grafana using the Azure Monitor Data Source for Grafana.
• Datadog features a connector for Azure Monitor.
• Automate monitoring functions using Azure Automation.
• Add communication with ITSM solutions.
• Extend Log Analytics with a management solution.
For more information, see Monitoring for DevOps in the Azure Well-Architected Framework.
Scalability and availability considerations
This scenario focuses on PaaS solutions for monitoring, in large part because they
conveniently handle availability and scalability for you and are backed by service-level
agreements (SLAs). For example, App Service provides a guaranteed SLA for its
availability.
Application Insights has limits on how many requests can be processed per second. If you
exceed the request limit, you may experience message throttling. To prevent throttling,
implement filtering or sampling to reduce the data rate.
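The sampling idea can be sketched in a few lines of Python. This is purely illustrative: Application Insights provides its own built-in sampling, and the function below is an invented stand-in showing only the principle of keeping a fixed fraction of events:

```python
# Illustrative sketch: fixed-rate sampling to keep telemetry volume under a limit.
# Application Insights offers built-in sampling; this shows the idea only.

def sample(events, keep_ratio):
    """Keep roughly keep_ratio of events, deterministically by position."""
    step = round(1 / keep_ratio)
    return [e for i, e in enumerate(events) if i % step == 0]

events = [f"request-{i}" for i in range(100)]
kept = sample(events, keep_ratio=0.25)   # keep roughly 1 in 4 events
assert len(kept) == 25
```

Real telemetry samplers typically hash an operation ID rather than counting positions, so that all events of one operation are kept or dropped together; the volume-reduction effect is the same.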
High availability considerations for the app you run, however, are the developer's
responsibility. For information about scale, see, for example, the Scalability
considerations section in the basic web application reference architecture. After an app is
deployed, you can set up tests to monitor its availability using Application Insights.
Security considerations
Sensitive information and compliance requirements affect data collection, retention, and
storage. Learn more about how Application Insights and Log Analytics handle telemetry.
The following security considerations may also apply:
• Develop a plan to handle personal information if developers are allowed to collect their own data or enrich existing telemetry.
• Consider data retention. For example, Application Insights retains telemetry data for 90 days. Archive data you want access to for longer periods using Microsoft Power BI, Continuous Export, or the REST API. Storage rates apply.
• Limit access to Azure resources to control access to data and who can view telemetry from a specific application.
• Consider whether to control read/write access in application code to prevent users from adding version or tag markers that limit data ingestion from the application. With Application Insights, there is no control over individual data items once they are sent to a resource, so if a user has access to any data, they have access to all data in an individual resource.
• Add governance mechanisms to enforce policy or cost controls over Azure resources if needed. For example, use Log Analytics for security-related monitoring such as policies and role-based access control, or use Azure Policy to create, assign, and manage policy definitions.
• To monitor potential security issues and get a central view of the security state of your Azure resources, consider using Azure Security Centre.
Cost considerations
Monitoring charges can add up quickly. Consider pricing up front, understand what you are
monitoring, and check the associated fees for each service. Azure Monitor provides basic
metrics at no cost, while monitoring costs for Application Insights and Log Analytics are
based on the amount of data ingested and the number of tests you run.
To help you get started, use the pricing calculator to estimate costs. Change the various
pricing options to match your expected deployment.
Telemetry from Application Insights is sent to the Azure portal during debugging and after
you have published your app. For testing purposes, and to keep charges down, only a limited
volume of telemetry is sent.
After deployment, you can watch a Live Metrics Stream of performance indicators. This data
is not stored — you are viewing real-time metrics — but the telemetry can be collected and
analysed later. There is no charge for Live Stream data.
Log Analytics is billed per gigabyte (GB) of data ingested into the service. The first 5 GB of
data ingested to the Azure Log Analytics service every month is offered free, and the data is
retained at no charge for the first 31 days in your Log Analytics workspace.
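The billing model just described (the first 5 GB ingested per month free, the remainder billed per GB) can be sketched as a tiny cost function. The per-GB price used below is a placeholder, since actual rates vary by region and tier:

```python
# Cost sketch for the Log Analytics model described above:
# first 5 GB ingested per month free, remainder billed per GB.
FREE_GB_PER_MONTH = 5

def monthly_ingestion_cost(ingested_gb, price_per_gb):
    """Cost after the free monthly allowance; price_per_gb is a placeholder."""
    billable = max(0.0, ingested_gb - FREE_GB_PER_MONTH)
    return billable * price_per_gb

assert monthly_ingestion_cost(4.0, 2.0) == 0.0    # within the free tier
assert monthly_ingestion_cost(15.0, 2.0) == 20.0  # 10 billable GB
```

For real estimates, use the Azure pricing calculator mentioned earlier rather than a hand-rolled formula.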
SUMMARY
Microsoft Azure is Microsoft's cloud computing platform, providing a wide variety of
services you can use without purchasing and provisioning your own hardware. Azure
enables the rapid development of solutions and provides the resources to accomplish tasks
that may not be feasible in an on-premises environment. Azure's compute, storage, network,
and application services allow you to focus on building great solutions without the need to
worry about how the physical infrastructure is assembled.
Cloud computing provides a modern alternative to the traditional on-premises datacenter. A
public cloud vendor is completely responsible for hardware purchase and maintenance and
provides a wide variety of platform services that you can use. You lease whatever hardware
and software services you require on an as-needed basis, thereby converting what had been a
capital expense for hardware purchase into an operational expense. It also allows you to
lease access to hardware and software resources that would be too expensive to purchase.
Although you are limited to the hardware provided by the cloud vendor, you only have to
pay for it when you use it.
KEY WORDS/ABBREVIATIONS
• Azure tenant: A dedicated and trusted instance of Azure AD that’s automatically created when your organization signs up for a Microsoft cloud service subscription, such as Microsoft Azure, Microsoft Intune, or Office 365. An Azure tenant represents a single organization.
• Single tenant: Azure tenants that access other services in a dedicated environment are considered single tenant.
• Multi-tenant: Azure tenants that access other services in a shared environment, across multiple organizations, are considered multi-tenant.
• Azure AD directory: Each Azure tenant has a dedicated and trusted Azure AD directory. The Azure AD directory includes the tenant’s users, groups, and apps and is used to perform identity and access management functions for tenant resources.
• Custom domain: Every new Azure AD directory comes with an initial domain name, domainname.onmicrosoft.com. In addition to that initial name, you can also add your organization’s domain names, which include the names you use to do business and your users use to access your organization’s resources, to the list.
LEARNING ACTIVITY
1. Give the diagrammatic view of Azure Deployment for web apps.
___________________________________________________________________________
___________________________________________________________________
2. Draw the detailed steps of Monitoring in Azure Cloud Computing.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. Explain with a diagram the Configuration Management of Azure.
2. What are the Diagnostics attributes of Azure?
3. Explain how Monitoring and Deployment of web apps can be implemented through
Microsoft Azure.
4. How does Azure provide support for different web applications?
B. Multiple Choice Questions
1. Which of the following element is a non-relational storage system for large-scale storage?
a) Compute
b) Application
c) Storage
d) None of the mentioned
2. Azure Storage plays the same role in Azure that ______ plays in Amazon Web Services.
a) S3
b) EC2
c) EC3
d) All of the mentioned
3. Which of the following element in Azure stands for management service?
a) config
b) application
c) virtual machines
d) none of the mentioned
4. A particular set of endpoints and its associated Access Control rules for an application is
referred to as the _______________
a) service namespace
b) service rules
c) service agents
d) all of the mentioned
5. Which of the following was formerly called Microsoft .NET Services?
a) AppFabric
b) PHP
c) WCF
d) All of the mentioned
Answer
1. c
2. a
3. a
4. a
5. a
REFERENCES
• "SQL Data Warehouse | Microsoft Azure". azure.microsoft.com. Retrieved May 23, 2019.
• "Introduction to Azure Data Factory". microsoft.com. Retrieved August 16, 2018.
• "HDInsight | Cloud Hadoop". azure.microsoft.com. Retrieved July 22, 2014.
• "Sanitization". docs.particular.net. Retrieved November 21, 2018.
• sethmanheim. "Overview of Azure Service Bus fundamentals". docs.microsoft.com. Retrieved December 12, 2017.
• "Event Hubs". azure.microsoft.com. Retrieved November 21, 2018.
• "Azure CDN Coverage by Metro | Microsoft Azure". azure.microsoft.com. Retrieved September 14, 2020.
• eamonoreilly. "Azure Automation Overview". azure.microsoft.com. Retrieved September 6, 2018.
• "Why Cortana Intelligence?". Microsoft.
• "What is the Azure Face API?". Microsoft. July 2, 2019. Retrieved November 29, 2019.
• "Detect domain-specific content". Microsoft. February 7, 2019. Retrieved November 29, 2019.
UNIT 7: RESOURCE MANAGEMENT
STRUCTURE
1. Learning Objectives
2. Introduction
3. Resource Management
4. Scope of Cloud Computing Resource Management
5. Provision of resource allocation in cloud computing.
6. Summary
7. Key Words/Abbreviations
8. Learning Activity
9. Unit End Questions (MCQ and Descriptive)
10. References
LEARNING OBJECTIVES
At the end of the unit, the learner will be able to understand and have knowledge of the
following aspects of Resource Management in Azure:
• Understanding of Resource Management in Azure
• Scope of Cloud Computing Resource Management
• Provision of Resource Allocation
INTRODUCTION
Cloud computing has become a new-era technology with huge potential in enterprises
and markets. Using this technology, cloud users can access applications and associated
data from anywhere. It has many applications: for example, companies are able to rent
resources from the cloud for storage and other computational purposes, so that infrastructure
cost can be reduced significantly. For managing a large number of virtual machine requests,
cloud providers require an efficient resource scheduling algorithm. Here we summarize
different resource management strategies and their impacts on cloud systems, and we analyze
resource allocation strategies based on various metrics, pointing out that some of the
strategies are more efficient than others in some aspects. So the usability of each of the
methods can vary according to its application area.
A cloud is characterized by elasticity, which allows a dynamic change in the number of
resources based on varying demand from a customer, as well as a pay-as-you-go
opportunity, both of which can lead to substantial savings for customers. Appropriate
management of resources in clouds is essential for effectively harnessing the power of the
underlying distributed resources and infrastructure. The problems range from handling
resource heterogeneity, allocating resources to user requests efficiently, and effectively
scheduling the requests that are mapped to given resources, to handling uncertainties
associated with the workload and the system. As a consumer or user of the cloud, one should
be aware of the ways and means by which cloud resources are allocated to user
requirements, and how applications are executed in a cloud environment. As a
researcher, one can understand the opportunities to dig further and carry on with more
innovations to contribute better solutions to the existing problems.
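The elastic, demand-driven change in resource count described above can be sketched as a simple target-tracking rule. The function and thresholds below are invented for illustration; real autoscalers (in Azure, AWS, and elsewhere) add cooldowns and richer policies:

```python
# Minimal elasticity sketch: pick an instance count from current demand,
# illustrating the dynamic, pay-as-you-go scaling described above.
import math

def instances_needed(requests_per_sec, capacity_per_instance,
                     min_instances=1, max_instances=20):
    """Scale out/in so demand fits capacity, within fixed bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

assert instances_needed(0, 100) == 1        # idle: keep the floor
assert instances_needed(950, 100) == 10     # scale out under load
assert instances_needed(5000, 100) == 20    # capped at the ceiling
```

Because the customer pays only for the instances actually running, scaling back to the floor during idle periods is exactly where the pay-as-you-go savings come from.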
Resource management in cloud computing means the efficient use of heterogeneous and
geographically distributed resources to serve client requests for cloud service provisioning.
Since the resources are spread across multiple organizations with different usage policies,
managing them is a real challenge.
RESOURCE MANAGEMENT
We consider the resource management as the process of allocating computing, storage,
networking and indirectly energy resources to a set of applications, in the context that looks
to jointly meet the performance objectives of the infrastructure providers, users of the cloud
resources and applications. The objectives of the cloud users tend to focus on application
performance. The conceptual framework provides a high level view of the functional
component of cloud resource management systems and all their interactions. This field is
classified into eight functional areas, or resource management activities, as follows:
• Global planning of virtualized resources
• Resource demand profiling
• Resource utilization estimation
• Resource pricing and profit maximization
• Application scaling and provisioning
• Local scheduling of cloud resources
• Workload management
• Cloud management systems
Cloud computing has emerged as a business necessity, animated by the idea of just using the
infrastructure without managing it. Although initially this idea was present only in the
academic area, recently it was transposed into industry by companies like Microsoft,
Amazon, Google, Yahoo! and Salesforce.com. This makes it possible for new start-ups to
enter the market more easily, since the cost of infrastructure is greatly diminished. Various
issues arise as the number of servers becomes immense and dependencies between servers
become complex when cloud systems are managed in a static manner. Cloud computing
providers deliver common online business applications which are accessed from servers
through web browsers.
SCOPE OF CLOUD COMPUTING RESOURCE MANAGEMENT
Business applications hosted in the cloud are probably the most promising cloud service and
the most interesting topic for computer science education, because they give businesses the
option to pay as they go while providing the big-impact benefit of the latest technology
advancements [6]. Resource management decisions by the Cloud Service Provider and Cloud
Service User need accurate estimations of the condition of the physical and virtual resources
which are required to deliver the applications hosted by cloud. The functional elements of
Resource Utilization Estimation provide state estimation for compute, network, storage and
power resources. It also provides input into cloud monitoring and resource scheduling
processes.
The functional elements are mapped to the Cloud Provider and Cloud User roles in line with
an IaaS cloud offering. The cloud service provider is responsible for overseeing the
utilization of compute, networking, storage, and power resources, and for controlling this
utilization via global and local scheduling processes.
Figure 7.1
As shown in Figure 7.1, arrows represent the principal information flows between functional
elements. The diagram shows the responsibilities of the actors in an IaaS environment. The
partitioning is different in the case of PaaS and SaaS environments. The framework is depicted
from an IaaS perspective; however, it is applicable to the PaaS and SaaS perspectives. The
functional elements remain the same, but the responsibility for supplying more of them
rests with the Cloud Provider, whereas in the case of PaaS, the role of Cloud User is split into
a Platform Provider along with an Application Provider. The degree of resource allocation
responsibility falling on each varies depending on the scope of the provided platform. In the
case of SaaS, the Platform and Application Provider are basically the same organization,
which is also the Cloud Provider, and all resource management responsibilities then fall on
that organization.
Resource Management and Virtualization
One of the most important enabling technologies is virtualization. It is the way to abstract the
hardware and system resources from an operating system. In computing, virtualization means
to create a virtual version of a device or a resource, such as a server, storage device, network
or even an operating system, where the framework divides the resource into one or more
execution environments. One of the most basic concepts of virtualization technology
employed in a cloud environment is resource consolidation and management. Hypervisors, or
Virtual Machine Monitors, are used to perform virtualization within a cloud environment
across a large set of servers. These monitors lie between the hardware and the operating
systems. The figure below illustrates one of the key advantages of cloud computing: it allows
for a consolidation of resources within any data centre. Within a cluster environment,
multiple operating systems are managed so that a number of standalone physical machines
can be combined into a virtualized environment. The entire process requires fewer physical
resources than ever before. Thousands of physical machines along with megawatts of power
are required for the deployment of large clouds, which brings forth the necessity of
developing an efficient cloud computing system that utilizes the strengths of the cloud while
minimizing its energy footprint.
Cloud Operations Management System
PROVISION OF RESOURCE ALLOCATION IN CLOUD COMPUTING
Resource allocation strategies can be defined as mechanisms for guaranteeing VM and/or
physical resource allocation to cloud users with minimal resource contention, avoiding over-
and under-provisioning conditions. This requires knowing the amount and types of resources
needed by the applications in order to satisfy the user's tasks; the timing of resource
allocation and its consequences also matter in a resource allocation mechanism.
Resource allocation can be defined as efficiently distributing resources among multiple
users as per their demands for a given period of time. However, resource allocation has
proven to be a bit complicated in cloud computing. Therefore, there is a need to increase the
computing capability for allocating resources. The main aim of smartly allocating resources
is to gain financial profit in the market. This technique also supports a key objective of cloud
computing, i.e. pay-per-use, because the client need not pay for resources that he has not
used. Dynamic resource allocation speeds up workflow execution and allows users to
differentiate among the different policies available.
A resource allocation strategy must avoid the following issues:
1) Over-provisioning: an application receives more resources than actually demanded.
2) Under-provisioning: an application receives fewer resources than actually demanded.
3) Contention of resources: different applications try to access a single resource at the same time.
4) Scarcity: there is a lack of resources.
Shown below is Table 1, which explains the required input for both the service provider and the client.
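The over- and under-provisioning conditions above amount to a simple comparison between demanded and allocated resources. A minimal Python sketch, with invented names purely for illustration:

```python
# Sketch of the provisioning checks above: compare what an application
# demanded with what it was actually allocated.

def provisioning_status(demanded, allocated):
    """Classify an allocation relative to the application's demand."""
    if allocated > demanded:
        return "over-provisioned"
    if allocated < demanded:
        return "under-provisioned"
    return "matched"

assert provisioning_status(demanded=8, allocated=12) == "over-provisioned"
assert provisioning_status(demanded=8, allocated=4) == "under-provisioned"
assert provisioning_status(demanded=8, allocated=8) == "matched"
```

An allocator that tracks this status per application can then reclaim surplus capacity or grant more, which is exactly what dynamic allocation aims for.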
Due to limited resources, various restrictions, and increasing demands from users, there is
a need to allocate resources efficiently to fulfil cloud requirements. The demand for and
supply of resources may not always match, hence there arises a need for different strategies
that allocate resources smartly. Given below are a few strategies that address the issue of
resource allocation in cloud computing.
1) Rule Based Resource Allocation: To reduce the maintenance cost of resources, the
resource allocation algorithm negotiates between multiple users to provide safe access to a
resource across a network. Any failure in efficient negotiation may lead to the failure of the
whole cloud system. In RBRA, the distribution of resources is dynamic and the utilization of
resources is at its peak. Resources are allocated based upon priority, and hence a queue is
formed: if a resource R is being used by a user X, another user Y demands it, and a user Z
demands it at a later instant, then the algorithm creates a queue, giving priority to Y and then
Z. After X has used the resource, Y will use it and then Z will use it. This increases the
performance of the whole system. Priority may also be assigned on the basis of task size; if
the criticality of a task is least, it is assigned last place. After the resources are allocated, the
execution of tasks takes place and results are given to the client.
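The queueing behaviour just described (X holds resource R; Y and Z wait in arrival order) can be sketched directly. This is an illustrative stand-in, not the RBRA algorithm itself:

```python
# Sketch of the rule-based allocation above: requests for a busy resource
# are queued and served in arrival order (X holds R; Y, then Z, wait).
from collections import deque

class Resource:
    def __init__(self, name):
        self.name = name
        self.holder = None
        self.queue = deque()

    def request(self, user):
        """Grant the resource immediately if free, otherwise enqueue."""
        if self.holder is None:
            self.holder = user
        else:
            self.queue.append(user)

    def release(self):
        """Current holder finishes; the next queued user gets the resource."""
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder

r = Resource("R")
for user in ("X", "Y", "Z"):
    r.request(user)
assert r.holder == "X"
assert r.release() == "Y"   # Y waited first, so Y goes next
assert r.release() == "Z"
```

A priority-by-task-size variant would replace the FIFO deque with a priority queue keyed on task criticality.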
2) Optimized Resource Scheduling: This algorithm is based on Infrastructure as a Service
(IaaS). To provide the best results, cloud computing makes use of virtual machines, and in
this algorithm a virtual machine is distributed among many users so as to maximize resource
usage. An improved genetic algorithm is used here to allocate the resources in the finest
possible way.
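The text above cites an improved genetic algorithm; as a rough illustration only (not that algorithm), the following toy Python evolves VM-to-host assignments with elitist selection and single-gene mutation, where fitness penalises host overload. All loads, capacities, and parameters are invented:

```python
# Toy evolutionary sketch of VM placement. Each gene maps a VM to a host;
# fitness is the negative total overload across hosts (0 is a perfect fit).
import random

VM_LOAD = [4, 3, 2, 2, 1]       # resource demand of five VMs
HOST_CAP = [6, 6]               # capacity of two hosts

def fitness(assign):
    """Higher is better: negative total overload across hosts."""
    used = [0] * len(HOST_CAP)
    for vm, host in enumerate(assign):
        used[host] += VM_LOAD[vm]
    return -sum(max(0, u - c) for u, c in zip(used, HOST_CAP))

def evolve(generations=200, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(HOST_CAP)) for _ in VM_LOAD]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(len(child))] = rng.randrange(len(HOST_CAP))
            children.append(child)           # one random mutation each
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
assert fitness(best) == 0   # total load 12 fits exactly into 6 + 6
```

Real schedulers use much larger populations, crossover, and multi-objective fitness (utilization, energy, SLA), but the loop structure is the same.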
3) Fair Resource Allocation for Congestion Control: Whenever resources are being allocated
to any user or any service, there are chances of congestion over the network.
Congestion is a big problem, as it degrades overall performance, and hence must be
controlled. FRA allows fair use of resources among different users, because the need for
resources may vary from user to user. In this technique, whenever a user demands a particular
service, a particular bandwidth is selected and allocated to the client for a particular period of
time. Once no resources are left, all new requests from customers are rejected.
4) Federated Computing and Network System: In this model, both computing resources and
network resources are combined. Therefore, for a combination of resources, FCNS is
required. The resources, whether compute or network, are presented together to FCNS,
which makes use of wavelength division multiplexing and offers the best data transfer with
the least traffic over the network.
SUMMARY
The cloud computing technology enables all its resources as a single point of access to the
customer and is implemented as pay per usage. Even though there are many undisputed
advantages in using cloud computing, one of the major concerns is to understand how the
user / customer requests are executed with proper allocation of resources to each of such
request. Unless the allocation and management of resources is done efficiently in order to
maximize the system utilization and overall performance, governing the cloud environment
for multiple customers becomes more difficult.
The swiftly increasing demand for computation in business processes, file transfer under
various protocols, and data centres has forced the development of an emerging technology
that caters to computational needs and to highly manageable, secure storage. To fulfil these
technological desires, cloud computing is the best answer, introducing various sorts of
service platforms in a high-computation environment. Cloud computing is the most recent
paradigm promising to turn the vision of “computing utilities” into reality. The term “cloud
computing” is relatively new, and there is no universal agreement on its definition. In this
unit, we have gone through different areas of research and novelty in the cloud computing
domain and its usefulness in the field of management. Even though cloud computing
provides many distinguished features, it still has certain shortcomings, along with
comparatively high costs for both private and public clouds. It is a way of congregating
masses of information and resources stored in personal computers and other gadgets and
putting them on the public cloud to serve users. Cloud computing is turning out to be one of
the most explosively expanding technologies in the computing industry in this era. It
authorizes users to transfer their data and computation to a remote location with minimal
impact on system performance. With the evolution of virtualization technology, cloud
computing has emerged as a systematically and strategically distributed model. The idea of
cloud computing has not only reshaped the field of distributed systems but also
fundamentally changed how business utilizes computing today. Resource management in
cloud computing is a hard problem, due to the scale of modern data centres, the variety of
resource types and their interdependencies, the unpredictability of load, and the range of
objectives of the different actors in a cloud ecosystem.
It is a fact that research and analysis of cloud computing is still in its initial period, yet
apparent impacts may be brought by cloud computing. As the prevalence of cloud computing
continues to rise, the need for power-saving mechanisms within the cloud also increases.
While a number of cloud terminologies are discussed in this unit, there is a need for
amendments in cloud infrastructure in both the academic and commercial sectors, where
management of different segments will happen in a short span of time, and green computing
is believed to be one of the major segments of the coming generation of cloud computing. Its
use in the management sector in the modern era not only improves the utilization rate of
resources to address the imbalance in development between regions, but also makes more
extensive use of cloud computing in our work life. Consequently, cloud services must be
designed under the assumption that they will experience frequent and often unpredictable
failures. Services must recover from failures autonomously, which implies that cloud
computing platforms must offer standard, simple and fast recovery procedures. To sum up,
we can conclude that research and development related to cloud computing technology plays
a vital role in the future of resource management and internet technology. Based on ongoing
research efforts and continuing advancements in computing technology, we conclude that
this technology is poised to have a major impact on scientific research as well as
management planning.
KEY WORDS/ABBREVIATIONS
• Cloud Management Platform (CMP) – A cloud management platform is a product that gives the user integrated management of public, private, and hybrid cloud environments.
• Cloud Marketplace – A cloud marketplace is an online marketplace, operated by a cloud service provider (CSP), where customers can browse and subscribe to software applications and developer services that are built on, integrate with, or supplement the CSP’s main offering. Amazon’s AWS Marketplace and Microsoft’s Azure store are examples of cloud marketplaces.
• Cloud Migration – Cloud migration is the process of transferring all of or a piece of a company’s data, applications, and services from on-premise to the cloud.
• Cloud Native – Applications developed specifically for cloud platforms.
• Cloud Washing – Cloud washing is a deceptive marketing technique used to rebrand old products by connecting them to the cloud, or at least to the term cloud.
LEARNING ACTIVITY
1. How are resources managed in Azure Cloud Computing?
___________________________________________________________________________
___________________________________________________________________
2. Draw a comparative study between various resource allocation techniques.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. What is Resource Management?
2. How does Azure help with resource management?
3. What is the provision of resource allocation in cloud computing?
4. Discuss various techniques to implement resource allocation.
5. What are the various methods to manage resources?
B. Multiple Choice Questions
1. Which of the following is used to negotiate the exchange of information between a client
and the service?
a) Compute Bus
b) Application Bus
c) Storage Bus
d) Service Bus
2. Which of the following can be used to create distributed systems based on SOA?
a) Passive Directory Federation Services
b) Application Directory Federation Services
c) Active Directory Federation Services
d) None of the mentioned
3. SQL Azure is a cloud-based relational database service that is based on ____________
a) Oracle
b) SQL Server
c) MySQL
d) All of the mentioned
4. Which of the following was formerly called SQL Server Data Service?
a) AppFabric
b) SQL Azure
c) WCF
d) All of the mentioned
5. Azure data is replicated ________ times for data protection and writes are checked for
consistency.
a) one
b) two
c) three
d) all of the mentioned
6. SQL Azure Database looks like and behaves like a local database with a few exceptions
like _____________
a) CLR
b) CDN
c) WCF
d) All of the mentioned
Answer
1. c
2. d
3. b
4. b
5. c
6. a
REFERENCES
• R. Buyya, S. Pandey, and C. Vecchiola, "Cloudbus toolkit for market-oriented cloud computing," In Proceedings of the 1st International Conference on Cloud Computing (CloudCom '09), volume 5931 of LNCS, pages 24–44. Springer, Germany, December 2009.
• VMware, “Understanding Full Virtualization, Paravirtualization, and Hardware Assist,” VMware, Tech. Rep., 2007. [Online]. Available: http://www.vmware.com/files/pdf/VMware paravirtualization.pdf
• R. Buyya, A. Beloglazov, J. Abawajy, "Energy-efficient management of data centre resources for cloud computing: a vision, architectural elements, and open challenges," In Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA ’10), Las Vegas, USA, 2010.
• Martin Randles, David Lamb, A. Taleb-Bendiab, "A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing," IEEE 24th International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 551–556, 20-23 April 2010.
• D. Minarolli and B. Freisleben, "Utility-based resource allocation for virtual machines in Cloud computing," In Proceedings of the 2011 IEEE Symposium on Computers and Communications (ISCC '11), pp. 410–417, June 28 2011-July 1 2011.
UNIT 8: VIRTUALIZATION
STRUCTURE
1. Learning Objectives
2. Introduction
3. Concept of Virtualization
4. Taxonomy of Virtualization Techniques
5. Pros and cons of Virtualization
6. Virtual Machine provisioning and lifecycle
7. Load Balancing
8. Summary
9. Key Words/Abbreviations
10. Learning Activity
11. Unit End Questions (MCQ and Descriptive)
12. References
LEARNING OBJECTIVES
At the end of the unit, the learner will be able to understand and have knowledge of the
following aspects of Virtualization:
• Understanding of Virtualization
• Pros and Cons of Virtualization
• Introduction to Load Balancing
• Life cycle of a Virtual Machine
INTRODUCTION
When you ‘virtualize,’ you’re splitting a physical machine’s resources into multiple, smaller
parts. That way, you can run multiple operating systems (OS) on the same computer. You’ve
probably seen folks run Windows on macOS as a guest operating system — that’s an example
of virtualization.
Cloud computing is simply virtualization on an epic scale. You’re now taking millions of
virtual machines, and forcing them to run many different environments for hundreds of
millions of users across the world.
Virtualization is a technique for separating a service from the underlying physical
delivery of that service. It is the process of creating a virtual version of something, such as
computer hardware. It was initially developed during the mainframe era. It involves using
specialized software to create a virtual, software-created version of a computing resource
rather than the actual version of the same resource. With the help of virtualization, multiple
operating systems and applications can run on the same machine and the same hardware at
the same time, increasing the utilization and flexibility of hardware.
CONCEPT OF VIRTUALIZATION
One of the main cost-effective, hardware-reducing, and energy-saving techniques used by
cloud providers is virtualization. Virtualization allows sharing a single physical instance of a
resource or an application among multiple customers and organizations at one time. It does
this by assigning a logical name to a physical resource and providing a pointer to that
physical resource on demand. The term virtualization is often synonymous with hardware
virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service
(IaaS) solutions for cloud computing. Moreover, virtualization technologies provide a
virtual environment not only for executing applications but also for storage, memory, and
networking.
Figure 8.1
The machine on which the virtual machine is going to be built is known as the Host Machine, and that virtual machine is referred to as a Guest Machine.
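The logical-name-to-physical-resource mapping described above can be illustrated with a toy sketch (the class and method names here are hypothetical, for illustration only, not part of any real hypervisor API):

```python
# Toy illustration: the virtualization layer assigns a logical name to a
# physical resource and hands out pointers to it on demand, so several
# guests can share one physical instance.

class PhysicalDisk:
    def __init__(self, disk_id, size_gb):
        self.disk_id = disk_id
        self.size_gb = size_gb

class VirtualizationLayer:
    def __init__(self):
        self._mapping = {}              # logical name -> physical resource

    def register(self, logical_name, resource):
        self._mapping[logical_name] = resource

    def attach(self, logical_name):
        # Guests see only the logical name; the layer resolves it to the
        # physical resource on demand.
        return self._mapping[logical_name]

layer = VirtualizationLayer()
layer.register("vm1-disk", PhysicalDisk("sda", 500))

# Two guests share the same physical instance through the same logical name.
g1 = layer.attach("vm1-disk")
g2 = layer.attach("vm1-disk")
assert g1 is g2                         # same physical resource behind the name
```

The point is that both guests receive a pointer to the same physical instance, which is how a single resource serves multiple consumers at once.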
CHARACTERISTICS OF VIRTUALIZATION
1. Increased Security –
The ability to control the execution of guest programs in a completely transparent manner
opens new possibilities for delivering a secure, controlled execution environment. All the
operations of the guest programs are generally performed against the virtual machine, which
then translates and applies them to the host programs.
A virtual machine manager can control and filter the activity of the guest programs, thus
preventing some harmful operations from being performed. Resources exposed by the host
can then be hidden or simply protected from the guest. Increased security is a requirement
when dealing with untrusted code.

Example-1: Untrusted code can be analysed in Cuckoo sandboxes environment.
The term sandbox identifies an isolated execution environment where instructions can
be filtered and blocked before being translated and executed in the real execution
environment.

Example-2: The expression sandboxed version of the Java Virtual Machine (JVM)
refers to a particular configuration of the JVM where, by means of security policy,
instructions that are considered potentially harmful can be blocked.
2. Managed Execution –
Virtualization of the execution environment adds capabilities beyond simply running a guest; in particular, sharing, aggregation, emulation, and isolation are the most relevant features.
Figure 8.2
3. Sharing –
Virtualization allows the creation of separate computing environments within the same
host. This basic feature is used to reduce the number of active servers and limit power
consumption.
4. Aggregation –
Not only is it possible to share physical resources among several guests, but virtualization also
allows aggregation, which is the opposite process. A group of separate hosts can be tied
together and represented to guests as a single virtual host. This functionality is implemented
with cluster management software, which harnesses the physical resources of a homogeneous
group of machines and represents them as a single resource.
5. Emulation –
Guest programs are executed within an environment that is controlled by the virtualization
layer, which ultimately is a program. Also a completely different environment with respect to
the host can be emulated, thus allowing the execution of guest programs requiring specific
characteristics that are not present in the physical host.
6. Isolation –
Virtualization allows providing guests—whether they are operating systems, applications, or
other entities—with a completely separate environment, in which they are executed. The
guest program performs its activity by interacting with an abstraction layer, which provides
access to the underlying resources. The virtual machine can filter the activity of the guest and
prevent harmful operations against the host.
Besides these characteristics, another important capability enabled by virtualization is
performance tuning. This feature is a reality at present, given the considerable advances in
hardware and software supporting virtualization. It becomes easier to control the performance
of the guest by finely tuning the properties of the resources exposed through the virtual
environment. This capability provides a means to effectively implement a quality-of-service
(QoS) infrastructure.
7. Portability
The concept of portability applies in different ways according to the specific type of
virtualization considered.
1. In the case of a hardware virtualization solution, the guest is packaged into a virtual
image that, in most cases, can be safely moved and executed on top of different virtual
machines.
2. In the case of programming-level virtualization, as implemented by the JVM or the
.NET runtime, the binary code representing application components (jars or
assemblies) can run without any recompilation on any implementation of the
corresponding virtual machine.
TAXONOMY OF VIRTUALIZATION TECHNIQUES
There are several techniques for fully virtualizing hardware resources and satisfying the
virtualization requirements (i.e., Equivalence, Resource control, and Efficiency) as originally
presented by Popek and Goldberg. These techniques have been created to enhance
performance, and to deal with the flexibility problem in Type 1 architecture.
Popek and Goldberg classified the instructions to be executed in a virtual machine into three
groups: privileged, control sensitive, and behavior sensitive instructions. Not all control sensitive instructions are necessarily privileged (e.g., on x86), but Popek and Goldberg’s Theorem 1 mandates that all control sensitive instructions be treated as privileged (i.e., trapped) in order to have effective VMMs.
Depending on the virtualization technique used, hypervisors can be designed to be either
tightly or loosely coupled with the guest operating system. The performance of tightly
coupled hypervisors (i.e., OS assisted hypervisors) is higher than loosely coupled hypervisors
(i.e., hypervisors based on binary translation). On the other hand, tightly coupled hypervisors
require the guest operating systems to be explicitly modified, which is not always possible.
One of the Cloud infrastructure design challenges is to have hypervisors that are loosely
coupled, but with adequate performance. Having hypervisors that are operating system
agnostic increases system modularity, manageability, maintainability, and flexibility, and
allows upgrading or changing the operating systems on the fly.
The following are the main virtualization techniques that are currently in use:
Binary translation and native execution:
This technique uses a combination of binary translation for handling privileged and sensitive
instructions, and direct execution techniques for user-level instructions. This technique is very effective in terms of compatibility, as the guest OS does not need to know that it is virtualized. However, building binary translation support for such a system is very difficult, and the translation introduces significant virtualization overhead.
OS assisted virtualization (paravirtualization):
In this technique, the guest OS is modified to be virtualization-aware (allowing it to communicate with the hypervisor through hypercalls, so as to handle privileged and sensitive instructions). Because the guest OS cooperates with the hypervisor directly, paravirtualization can significantly reduce the virtualization overhead. However, paravirtualization has poor compatibility; it does not support operating systems that cannot be modified (e.g., Windows). Moreover, the overhead introduced by the hypercalls can affect performance under heavy workloads. Besides the added overhead, the modifications made to the guest OS, to make it compatible with the hypervisor, can affect the system’s maintainability.
Hardware-assisted virtualization:
As an alternative approach to binary translation and in an attempt to enhance performance
and compatibility, hardware providers (e.g., Intel and AMD) started supporting virtualization
at the hardware level. In hardware-assisted virtualization (e.g., Intel VT-x, AMD-V),
privileged and sensitive calls are set to automatically trap to the hypervisor. This eliminates
the need for binary translation or paravirtualization. Moreover, since the translation is done
on the hardware level, it significantly improves performance.
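The trap-and-emulate idea that underlies all of these techniques can be sketched as a toy model (the opcode names and data structures below are illustrative only, not a real VMM):

```python
# Toy model of trap-and-emulate: sensitive instructions trap to the
# hypervisor, which emulates them against the guest's virtual state;
# innocuous user-level instructions run directly on the hardware.

SENSITIVE = {"HLT", "OUT", "LGDT"}      # sample privileged/sensitive opcodes

def hypervisor_trap(op, guest_state):
    # The VMM applies the instruction's effect to the guest's virtual
    # hardware instead of the real machine.
    guest_state["emulated"].append(op)
    return "emulated"

def run_instruction(op, guest_state):
    if op in SENSITIVE:
        return hypervisor_trap(op, guest_state)   # trap to the VMM
    guest_state["direct"] += 1                    # direct native execution
    return "direct"

state = {"direct": 0, "emulated": []}
for op in ["ADD", "HLT", "MOV", "OUT"]:
    run_instruction(op, state)

assert state["direct"] == 2 and state["emulated"] == ["HLT", "OUT"]
```

Binary translation, paravirtualization, and hardware assistance differ only in *how* the sensitive instructions are intercepted: by rewriting them, by replacing them with hypercalls, or by having the CPU trap them automatically.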
Network Virtualization
Network virtualization in cloud computing is a method of combining the available resources
in a network by splitting up the available bandwidth into different channels, each being
separate and distinguished. They can be either assigned to a particular server or device or stay
unassigned completely — all in real time. The idea is that the technology disguises the true
complexity of the network by separating it into parts that are easy to manage, much like your
segmented hard drive makes it easier for you to manage files.
Storage Virtualization
Using this technique gives the user an ability to pool the hardware storage space from several
interconnected storage devices into a simulated single storage device that is managed from
one single command console. This storage technique is often used in storage area networks.
Storage manipulation in the cloud is mostly used for backup, archiving, and recovering of
data by hiding the real and physical complex storage architecture. Administrators can
implement it with software applications or by employing hardware and software hybrid
appliances.
Server Virtualization
This technique is the masking of server resources. It simulates physical servers by changing
their identity, numbers, processors and operating systems. This spares the user from
continuously managing complex server resources. It also makes a lot of resources available
for sharing and utilizing, while maintaining the capacity to expand them when needed.
Data Virtualization
This kind of cloud computing virtualization technique is abstracting the technical details
usually used in data management, such as location, performance or format, in favour of
broader access and more resiliency that are directly related to business needs.
Desktop Virtualization
As compared to other types of virtualization in cloud computing, this model enables you to
emulate a workstation load, rather than a server. This allows the user to access the desktop
remotely. Since the workstation is essentially running in a data centre server, access to it can
be both more secure and portable.
Application Virtualization
Software virtualization in cloud computing abstracts the application layer, separating it from
the operating system. This way the application can run in an encapsulated form without being
dependent upon the operating system underneath. In addition to providing a level of isolation,
an application created for one OS can run on a completely different operating system.
PROS AND CONS OF VIRTUALIZATION
Benefits of Virtualization in Cloud Computing
i. Security
Security is one of the important concerns during the process of virtualization. Security can be provided with the help of firewalls, which help to prevent unauthorized access and keep the data confidential. Moreover, with the help of firewalls and security measures, the data can be protected from harmful viruses, malware, and other cyber threats. Encryption with secure protocols also protects the data from other threats. So, the customer can virtualize all the data and create a backup on a server on which the data can be stored.
ii. Flexible operations
With the help of a virtual network, the work of IT professionals is becoming more efficient and agile. The network switches implemented today are very easy to use, flexible, and save time. With the help of virtualization in cloud computing, technical problems in physical systems can be solved. It eliminates the problem of recovering data from crashed or corrupted devices and hence saves time.
iii. Economical
Virtualization in cloud computing saves the cost of physical systems such as hardware and servers. It stores all the data in virtual servers, which are quite economical. It reduces wastage and decreases electricity bills along with maintenance costs. Due to this, a business can run multiple operating systems and apps on a single server.
iv. Eliminates the risk of system failure
While performing some task, there are chances that the system might crash down at the wrong time. This failure can cause damage to the company, but virtualization helps you to perform the same task on multiple devices at the same time. The data stored in the cloud can be retrieved anytime and from any device. Moreover, there are two servers working side by side, which makes the data accessible every time; even if one server crashes, the customer can access the data with the help of the second server.
v. Flexible transfer of data
The data can be transferred to the virtual server and retrieved anytime. The customers or cloud provider don’t have to waste time searching hard drives for data. With the help of virtualization, it is very easy to locate the required data and transfer it to the allotted authorities. This transfer of data has no limit and can be carried over a long distance with the minimum charge possible. Additional storage can also be provided, and the cost will be as low as possible.
Cons of Virtualization
Although you cannot find many disadvantages of virtualization, we will discuss a few prominent ones as follows:
i. Extra Costs
You may have to invest in the virtualization software, and possibly additional hardware
might be required to make the virtualization possible. This depends on your existing
network. Many businesses have sufficient capacity to accommodate the virtualization
without requiring much cash. If you have an infrastructure that is more than five years old,
you have to consider an initial renewal budget.
ii. Software Licensing
This is becoming less of a problem as more software vendors adapt to the increased
adoption of virtualization. However, it is important to check with your vendors to
understand how they view software use in a virtualized environment.
iii. Learn the New Infrastructure
Implementing and managing a virtualized environment will require IT staff with expertise in
virtualization. On the user side, a typical virtual environment will operate similarly to the
non-virtual environment. There are some applications that do not adapt well to the
virtualized environment.
VIRTUAL MACHINE PROVISIONING AND LIFECYCLE
When a virtual machine or cloud instance is provisioned, it goes through multiple phases.
First, the request must be made. The request includes ownership information, tags, virtual
hardware requirements, the operating system, and any customization of the request. Second,
the request must go through an approval phase, either automatic or manual. Finally, the
request is executed. This part of provisioning consists of pre-processing and post-processing.
Pre-processing acquires IP addresses for the user, creates CMDB instances, and creates the
virtual machine or instance based on information in the request. Post-processing activates the
CMDB instance and emails the user. The steps for provisioning may be modified at any time
using CloudForms Management Engine.
Figure 8.3
PROVISIONING VIRTUAL MACHINES
There are three types of provisioning requests available in CloudForms Management Engine:
1. Provision a new virtual machine from a template
You can provision virtual machines through various methods. One method is to provision a
virtual machine directly from a template stored on a provider.
IMPORTANT
To provision a virtual machine, you must have the "Automation Engine" role enabled.
To Provision a Virtual Machine from a Template:
1. Navigate to Infrastructure → Virtual Machines.
2. Click (Lifecycle), and then (Provision VMs).
3. Select a template from the list presented.
4. Click Continue.
5. On the Request tab, enter information about this provisioning request.
Figure 8.4
In Request Information, type in at least a First Name and Last Name and an email
address. This email is used to send the requester status emails during the provisioning
process for items such as auto-approval, quota, provision complete, retirement,
request pending approval, and request denied. The other information is optional. If the
CloudForms Management Engine server is configured to use LDAP, you can use
the Look Up button to populate the other fields based on the email address.
NOTE
Parameters with a * next to the label are required to submit the provisioning request.
To change the required parameters, see Customizing Provisioning Dialogs.
6. Click the Purpose tab to select the appropriate tags for the provisioned virtual
machines.
7. Click the Catalog tab to select the template to provision from. This tab is context
sensitive based on provider.
i. For templates on VMware providers:
Figure 8.5
For Provision Type, select VMware or PXE.
i. If VMware is selected, select Linked Clone to create a linked clone to the virtual machine instead of a full clone. Since a snapshot is required to create a linked clone, this box is only enabled if a snapshot is present. Select the snapshot you want to use for the linked clone.
ii. If PXE is selected, select a PXE Server and Image to use for provisioning.
ii. Under Count, select the number of virtual machines to create in this request.
iii. Use Naming to specify a virtual machine name and virtual machine description. When provisioning multiple virtual machines, a number will be appended to the virtual machine name.
8. For templates on Red Hat providers:
i. Select the Name of a template to use.
ii. For Provision Type, select either ISO, PXE, or Native Clone. You must select Native Clone in order to use a Cloud-Init template.
i. If Native Clone is selected, select Linked Clone to create a linked clone to the virtual machine instead of a full clone. This is equivalent to Thin Template Provisioning in Red Hat Enterprise Virtualization. Since a snapshot is required to create a linked clone, this box is only enabled if a snapshot is present. Select the snapshot to use for the linked clone.
ii. If ISO is selected, select an ISO Image to use for provisioning.
iii. If PXE is selected, select a PXE Server and Image to use for provisioning.
iii. Under Count, select the number of virtual machines you want to create in this request.
iv. Use Naming to specify a VM Name and VM Description. When provisioning multiple virtual machines, a number will be appended to the VM Name.
9. Click the Environment tab to decide where you want the new virtual machines to
reside.
i. If provisioning from a template on VMware, you can either let CloudForms Management Engine decide for you by checking Choose Automatically, or select a specific cluster, resource pool, folder, host, and datastore.
ii. If provisioning from a template on Red Hat, you can either let CloudForms Management Engine decide for you by checking Choose Automatically, or select a datacenter, cluster, host, and datastore.
10. Click the Hardware tab to set hardware options.
Figure 8.5
i. In VM Hardware, set the number of CPUs, amount of memory, and disk format: thin, pre-allocated/thick, or same as the provisioning template (default).
ii. For VMware provisioning, set the VM Limits of CPU and memory the virtual machine can use.
iii. For VMware provisioning, set the VM Reservation amount of CPU and memory.
11. Click Network to set the vLan adapter. Additional networking settings that are internal to the operating system appear on the Customize tab.
Figure 8.6
i. In Network Adapter Information, select the vLan.
12. Click Customize to customize the operating system of the new virtual machine. These options vary based on the operating system of the template.
Figure 8.7
13. For Windows provisioning:
i. To use a customization specification from the Provider, click Specification. To select an appropriate template, choose from the list in the custom specification area. The values that are honoured by CloudForms Management Engine display.
NOTE
Any values in the specification that do not show in the CloudForms
Management Engine console’s request dialogs are not used by CloudForms
Management Engine. For example, for Windows operating systems, if you
have any run once values in the specification, they are not used in creating the
new virtual machines. Currently, for a Windows operating system,
CloudForms Management Engine honours the unattended GUI, identification,
workgroup information, user data, windows options, and server license. If
more than one network card is specified, only the first is used.
Figure 8.8
To modify the specification, select Override Specification Values.
ii. Select Sysprep Answer File to upload a Sysprep file or use one that exists for a custom specification on the Provider where the template resides. To upload a file, click Browse to find the file, and then upload it. To use an answer file in Customization Specification, click on the item. The answer file will automatically upload for viewing. You cannot make modifications to it.
14. For Linux provisioning:
i. Under Credentials, enter a Root Password for the root user to access the instance.
ii. Enter IP Address Information for the instance. Leave as DHCP for automatic IP assignment from the provider.
iii. Enter any DNS information for the instance if necessary.
iv. Select Customize Template for additional instance configuration. Select from the Kickstart or Cloud-Init customization templates stored on your appliance.
15. Click the Schedule tab to select when provisioning begins.
i. In Schedule Info, select when to start provisioning. If you select Schedule, you will be prompted to enter a date and time. Select Stateless if you do not want the files deleted after the provision completes. A stateless provision does not write to the disk, so it requires the PXE files on the next boot.
ii. In Lifespan, select to power on the virtual machines after they are created, and to set a retirement date. If you select a retirement period, you will be prompted for when you want a retirement warning.
Figure 8.9
16. Click Submit.
The provisioning request is sent for approval. For the provisioning to begin, a user with the
administrator, approver, or super administrator account role must approve the request. The
administrator and super administrator roles can also edit, delete, and deny the requests. You
will be able to see all provisioning requests where you are either the requester or the
approver.
After submission, the appliance assigns each provision request a Request ID. If an error
occurs during the approval or provisioning process, use this ID to locate the request in the
appliance logs. The Request ID consists of the region associated with the request followed by
the request number. As regions define a range of one trillion database IDs, this number can
be several digits long.
Request ID Format
Request 99 in region 123 results in Request ID 123000000000099.
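Under these rules the Request ID is simply the region number shifted left by twelve decimal digits (one trillion IDs per region) plus the request number; a small sketch of the arithmetic:

```python
# Request IDs combine the region number with the request number. Each
# region spans one trillion (10**12) database IDs, so the region number
# is shifted left by twelve decimal digits.

REGION_RANGE = 10**12

def request_id(region, request_number):
    return region * REGION_RANGE + request_number

# Request 99 in region 123 yields the ID from the example above.
assert request_id(123, 99) == 123000000000099
```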
2. Clone a virtual machine
Virtual Machines can be cloned in other providers as well.
1. Navigate to Infrastructure → Virtual Machines, and check the virtual machine you
want to clone.
2. Click (Lifecycle), and then (Clone selected item).
3. Fill in the options as shown in To Provision from a template using the provided
dialogs. Be sure to check the Catalog Tab.
4. Schedule the request on the Schedule tab.
5. Click Submit.
3. Publish a virtual machine to a template
1. Navigate to Infrastructure → Virtual Machines, and check the virtual machine you want to publish as a template.
2. Click (Lifecycle), and then (Publish selected VM to a Template).
3. Fill in the options as shown in To Provision a Virtual Machine from a Template using the provided dialogs. Be sure to check the Catalog tab.
4. Schedule the request on the Schedule tab.
5. Click Submit.
LOAD BALANCING
Cloud load balancing is defined as the method of splitting workloads and computing resources in a cloud computing environment. It enables enterprises to manage workload or application demands by distributing resources among numerous computers, networks, or servers. Cloud load balancing includes handling the circulation of workload traffic and demands that exist over the Internet.
Traffic on the Internet is growing rapidly, at roughly 100% of the present traffic annually. Hence, the workload on servers is growing fast, which leads to the overloading of servers, mainly for popular web servers. There are two elementary solutions to overcome the problem of overloading on the servers:
• First is a single-server solution in which the server is upgraded to a higher performance server. However, the new server may also be overloaded soon, demanding another upgrade. Moreover, the upgrading process is arduous and expensive.
• Second is a multiple-server solution in which a scalable service system is built on a cluster of servers. It is more cost-effective as well as more scalable to build a server cluster system for network services.
Load balancing is beneficial with almost any type of service, like HTTP, SMTP, DNS, FTP, and POP/IMAP. It also raises reliability through redundancy. The balancing service is provided by a dedicated hardware device or program. Cloud-based server farms can attain more precise scalability and availability using server load balancing.
Load balancing solutions can be categorized into two types –
1. Software-based load balancers: Software-based load balancers run on standard
hardware (desktop, PCs) and standard operating systems.
2. Hardware-based load balancers: Hardware-based load balancers are dedicated boxes which include Application Specific Integrated Circuits (ASICs) adapted for a particular use. ASICs allow high-speed forwarding of network traffic and are frequently used for transport-level load balancing, because hardware-based load balancing is faster in comparison to software solutions.
Major Examples of Load Balancers –
1. Direct Routing Request Dispatching Technique: This approach of request dispatching is similar to the one implemented in IBM’s NetDispatcher. A real server and the load balancer share the virtual IP address. The load balancer takes an interface constructed with the virtual IP address that accepts request packets, and it directly routes the packets to the selected servers.
2. Dispatcher-Based Load Balancing Cluster: A dispatcher does smart load balancing by utilizing server availability, workload, capability, and other user-defined criteria to regulate where to send a TCP/IP request. The dispatcher module of a load balancer can split HTTP requests among various nodes in a cluster. The dispatcher splits the load among many servers in a cluster, so the services of the various nodes appear as a single virtual service on one IP address; consumers interact as if it were a single server, without any knowledge of the back-end infrastructure.
3. Linux Virtual Load Balancer: It is an open-source enhanced load balancing solution used to build extremely scalable and extremely available network services such as HTTP, POP3, FTP, SMTP, media and caching, and Voice over Internet Protocol (VoIP). It is a simple and powerful product made for load balancing and fail-over. The load balancer itself is the primary entry point of a server cluster system and can execute Internet Protocol Virtual Server (IPVS), which implements transport-layer load balancing in the Linux kernel, also known as Layer-4 switching.
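As a rough illustration of the software-based category, a round-robin dispatcher can be sketched in a few lines (the server addresses are made up; real balancers also weigh server load and health):

```python
# Minimal sketch of a software round-robin load balancer, assuming a
# fixed pool of back-end servers. Each incoming request is assigned to
# the next server in the rotation.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._pool = cycle(servers)   # endless rotation over the pool

    def pick(self):
        # Return the back-end server that should handle the next request.
        return next(self._pool)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [lb.pick() for _ in range(4)]

# The fourth request wraps back around to the first server.
assert picks == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
```

Dispatcher-based clusters follow the same shape, but replace the simple rotation with a policy that weighs availability, workload, and capability.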
SUMMARY
Cloud computing is an Internet-based network technology that has seen rapid growth with the advances of communication technology, providing services to customers of various requirements with the aid of online computing resources. It has provisions for both hardware and software applications, along with software development platforms and testing tools, as resources. Such resource delivery is accomplished with the help of services. While the former comes under the category of Infrastructure as a Service (IaaS) cloud, the latter two come under the headings of Software as a Service (SaaS) cloud and Platform as a Service (PaaS) cloud respectively. Cloud computing is an on-demand network-enabled computing model that shares resources as services billed on a pay-as-you-go (PAYG) plan.
Some of the giant players in this technology are Amazon, Microsoft, Google, SAP, Oracle, VMware, Salesforce, IBM and others. The majority of these cloud providers are high-tech IT
organizations. The cloud computing model is viewed under two different headings. The first
one is the service delivery model, which defines the type of the service offered by a typical
cloud provider. Based on this aspect, there are popularly following three important service
models SaaS, PaaS and IaaS. The other aspect of cloud computing model is viewed on its
scale of use, affiliation, ownership, size and access. The official ‘National Institute of
Standards and Technology’ (NIST) definition for cloud computing outlines four cloud
deployment models. A cloud computing model is efficient if its resources are utilized in best
possible way and such an efficient utilization can be achieved by employing and maintaining
proper management of cloud resources. Resource management is achieved by adopting
robust resource scheduling, allocation and powerful resource scalability techniques. These
resources are provided to customers in the form of Virtual Machines (VMs) through a process known as virtualization, which makes use of an entity (software, hardware or both) known as a hypervisor. The greatest advantage of cloud computing is that a single-user physical machine is transformed into multiuser virtual machines. The Cloud Service Provider (CSP) plays a crucial role in service delivery to users, which is a complex task given the available virtual resources.
KEY WORDS/ABBREVIATIONS
• Container – A container is a virtualization instance in which the kernel of an operating system allows for multiple isolated user-space instances.
• Content Delivery Network (CDN) – A content delivery network (CDN) is a network of distributed servers that deliver content to a user based on the user’s geographic proximity to servers. CDNs allow speedy content delivery for websites with high traffic volume or large geographic reach.
• Hypervisor – A hypervisor or virtual machine monitor (VMM) is a piece of software that allows physical devices to share their resources among virtual machines (VMs) running on top of that physical hardware. The hypervisor creates, runs and manages VMs.
• Amazon Web Services (AWS) – Amazon Web Services is a suite of cloud computing services that make a comprehensive cloud platform offered by Amazon.com.
• Customer Relationship Management (CRM) – Customer Relationship Management (CRM) applications allow a business to manage relationships with current and future customers by providing the business with tools to manage sales, customer service, and technical support roles. SaaS CRM applications, such as Salesforce.com, are very popular.
LEARNING ACTIVITY
1. Why is virtualization an important aspect of Cloud Computing?
___________________________________________________________________________
___________________________________________________________________
2. Draw a framework to analyse the various virtualization techniques.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. What is virtualization?
2. Make a detailed study of virtualization techniques.
3. What are the Pros and cons of Virtualization?
4. How do we perform the Virtual Machine provisioning?
5. Define Load Balancing.
B. Multiple Choice Questions
1. Microsoft offers a _______ calculator for the Windows Azure Platform.
a) TCO
b) TOC
c) OCT
d) All of the mentioned
2. The connection between storage and Microsoft’s Content Delivery Network is stated to be
at least _______ percent uptime.
a) 90
b) 95
c) 99.9
d) None of the mentioned
3. Which of the following aims to deploy methods for measuring various aspects of cloud
performance in a standard way?
a) RIM
b) SIM
c) SMI
d) All of the mentioned
4. Which of the following is not the feature of Network management systems?
a) Accounting
b) Security
c) Performance
d) None of the mentioned
5. ___________ is a framework tool for managing cloud infrastructure.
a) IBM Tivoli Service Automation Manager
b) Microsoft Tivoli Service Automation Manager
c) Google Service Automation Manager
d) Windows Live Hotmail
Answers
1. a
2. c
3. c
4. d
5. a
REFERENCES

Gens, Frank. (2008-09-23) “Defining ‘Cloud Services’ and ‘Cloud Computing’,” IDC
Exchange. Archived 2010-07-22 at the Wayback Machine

Henderson, Tom and Allen, Brendan. (2010-12-20) “Private clouds: Not for the faint
of heart”, NetworkWorld.

Whitehead, Richard. (2010-04-19) “A Guide to Managing Private Clouds,” Industry
Perspectives.

Sullivan, Dan. (2011–02) “Hybrid cloud management tools and strategies,”
SearchCloudComputing.com

"Definition: Cloud management", ITBusinessEdge/Webopedia

S. Garcia-Gomez; et al. (2012). "Challenges for the comprehensive management of
Cloud Services in a PaaS framework". Scalable Computing: Practice and Experience.
Scientific International Journal for Parallel and Distributed Computing. 13 (3): 201–
213.

"A Guidance Framework for Selecting Cloud Management Platforms and Tools".
www.gartner.com. Retrieved 2018-11-26.
UNIT 9: TRAFFIC MANAGER
STRUCTURE
1. Learning Objectives
2. Introduction
3. Traffic Manager
4. How clients connect using Traffic Manager
5. Benefits
6. Managing traffic between datacenters
7. Summary
8. Key Words/Abbreviations
9. Learning Activity
10. Unit End Questions (MCQ and Descriptive)
11. References
LEARNING OBJECTIVES
At the end of the unit the learner will be able to understand and have knowledge of the
following aspects of Traffic Manager:

Definition of Traffic Manager

Pros of traffic Manager

Connection with the help of Traffic Manager
INTRODUCTION
Traffic Manager uses DNS to direct client requests to the most appropriate service endpoint
based on a traffic-routing method and the health of the endpoints. An endpoint is any
Internet-facing service hosted inside or outside of Azure. Traffic Manager provides a range
of traffic-routing methods and endpoint monitoring options to suit different application needs
and automatic failover models. Traffic Manager is resilient to failure, including the failure of
an entire Azure region.
Azure Traffic Manager enables you to control the distribution of traffic across your
application endpoints. An endpoint is any Internet-facing service hosted inside or outside of
Azure.
Traffic Manager provides two key benefits:

Distribution of traffic according to one of several traffic-routing methods

Continuous monitoring of endpoint health and automatic failover when endpoints fail
When a client attempts to connect to a service, it must first resolve the DNS name of the
service to an IP address. The client then connects to that IP address to access the service.
The most important point to understand is that Traffic Manager works at the DNS
level. Traffic Manager uses DNS to direct clients to specific service endpoints based on the
rules of the traffic-routing method. Clients connect to the selected endpoint directly. Traffic
Manager is not a proxy or a gateway. Traffic Manager does not see the traffic passing
between the client and the service.
TRAFFIC MANAGER
Because Traffic Manager operates at the DNS level, it allows you to point your domain name
to Traffic Manager with a CNAME record and have Traffic Manager redirect the request to
the correct endpoint based on whichever mode you’re using.
Traffic manager has three modes of operation, which are Priority, Weighted and
Performance.
Let’s run through each option.
The priority option is better known as failover. It works by directing all requests to a primary
endpoint unless that endpoint is down, and then it directs to a secondary endpoint.
It’s common to have a backup of an environment in case of failure. That’s where the priority
method comes in handy.
The way it works is that you specify a list of endpoints in priority order, and traffic manager
will send traffic to the highest priority endpoint that’s available. If you’re thinking about high
availability, especially cross-region availability, this is a fantastic option.
The next mode is weighted, which is similar to round robin in that the intent is to distribute
requests across the different endpoints. Requests are assigned to endpoints at random, but
the chance of any given endpoint being selected is based on weighted values that you define
for each endpoint. If you want an even distribution, assign equal weights to all the endpoints.
Being able to change the weights gives a lot of flexibility, and it’s a great way to perform
canary deployments as well as application migrations.
The final mode is performance mode, and this is where you have geographically separated
endpoints, and traffic manager will select the best one per request based on latency.
By having your endpoints cross region, and using performance based routing you can ensure
that your end-users are getting the best user experience possible, because they’ll be directed
to the endpoint with the lowest latency, for them. This tends to be the “closest” endpoint,
however it’s not a rule.
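The three routing modes described above can be sketched in a few lines of Python. This is an illustrative model only: the endpoint names, record fields and health flags are hypothetical, and in the real service this selection happens inside Traffic Manager's DNS name servers.

```python
import random

def pick_priority(endpoints):
    """Priority (failover): first healthy endpoint in priority order."""
    for ep in endpoints:
        if ep["healthy"]:
            return ep["name"]
    return None

def pick_weighted(endpoints):
    """Weighted: random choice among healthy endpoints, biased by weight."""
    healthy = [ep for ep in endpoints if ep["healthy"]]
    if not healthy:
        return None
    weights = [ep["weight"] for ep in healthy]
    return random.choices(healthy, weights=weights)[0]["name"]

def pick_performance(endpoints, latencies):
    """Performance: healthy endpoint with the lowest measured latency."""
    healthy = [ep for ep in endpoints if ep["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda ep: latencies[ep["name"]])["name"]

# Hypothetical endpoints: the primary is down, so priority routing fails over.
endpoints = [
    {"name": "eu-primary", "healthy": False, "weight": 3},
    {"name": "us-backup", "healthy": True, "weight": 1},
]
```

Setting equal weights in `pick_weighted` gives an even distribution; skewing them (say 95/5) is the canary-deployment pattern mentioned above.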
Traffic Manager offers the following features:
INCREASE APPLICATION AVAILABILITY
Traffic Manager delivers high availability for your critical applications by monitoring your
endpoints and providing automatic failover when an endpoint goes down.
IMPROVE APPLICATION PERFORMANCE
Azure allows you to run cloud services or websites in datacentres located around the world.
Traffic Manager improves application responsiveness by directing traffic to the endpoint with
the lowest network latency for the client.
PERFORM SERVICE MAINTENANCE WITHOUT DOWNTIME
You can perform planned maintenance operations on your applications without downtime.
Traffic Manager can direct traffic to alternative endpoints while the maintenance is in
progress.
COMBINE HYBRID APPLICATIONS
Traffic Manager supports external, non-Azure endpoints enabling it to be used with hybrid
cloud and on-premises deployments, including the "burst-to-cloud," "migrate-to-cloud," and
"failover-to-cloud" scenarios.
DISTRIBUTE TRAFFIC FOR COMPLEX DEPLOYMENTS
Using nested Traffic Manager profiles, multiple traffic-routing methods can be combined to
create sophisticated and flexible rules to scale to the needs of larger, more complex
deployments.
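Nested profiles can be pictured as a small recursive structure: a parent profile whose endpoints may themselves be child profiles with their own routing method. The sketch below is a hypothetical model, not the Azure API; all profile and endpoint names are invented.

```python
def resolve_profile(profile, latencies):
    """Recursively resolve a (possibly nested) profile to a concrete endpoint name."""
    if profile["method"] == "performance":
        # Pick the endpoint with the lowest observed latency.
        chosen = min(profile["endpoints"], key=lambda e: latencies[e["name"]])
    else:  # "priority": first endpoint that is healthy
        chosen = next(e for e in profile["endpoints"] if e.get("healthy", True))
    child = chosen.get("child")
    # If the chosen endpoint is itself a profile, recurse into it.
    return resolve_profile(child, latencies) if child else chosen["name"]

# Outer profile routes by performance; the EU "endpoint" is a child
# profile that does priority-based failover within the region.
eu_region = {"method": "priority", "endpoints": [
    {"name": "eu-primary", "healthy": False},
    {"name": "eu-backup", "healthy": True},
]}
top = {"method": "performance", "endpoints": [
    {"name": "eu", "child": eu_region},
    {"name": "us"},
]}
```

With EU latency lower, the outer profile picks the EU child, whose priority rule then fails over to the healthy backup.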
HOW CLIENTS CONNECT USING TRAFFIC MANAGER
Continuing from the previous example, when a client requests the
page https://partners.contoso.com/login.aspx, the client performs the following steps to resolve the
DNS name and establish a connection:
Figure 9.1
1. The client sends a DNS query to its configured recursive DNS service to resolve the
name 'partners.contoso.com'. A recursive DNS service, sometimes called a 'local DNS'
service, does not host DNS domains directly. Rather, the client off-loads the work of
contacting the various authoritative DNS services across the Internet needed to resolve
a DNS name.
2. To resolve the DNS name, the recursive DNS service finds the name servers for the
'contoso.com' domain. It then contacts those name servers to request the
'partners.contoso.com' DNS record. The contoso.com DNS servers return the CNAME
record that points to contoso.trafficmanager.net.
3. Next, the recursive DNS service finds the name servers for the 'trafficmanager.net'
domain, which are provided by the Azure Traffic Manager service. It then sends a
request for the 'contoso.trafficmanager.net' DNS record to those DNS servers.
4. The Traffic Manager name servers receive the request. They choose an endpoint based
on:
o The configured state of each endpoint (disabled endpoints are not returned)
o The current health of each endpoint, as determined by the Traffic Manager
health checks. For more information, see Traffic Manager Endpoint Monitoring.
o The chosen traffic-routing method. For more information, see Traffic Manager
Routing Methods.
5. The chosen endpoint is returned as another DNS CNAME record. In this case, let us
suppose contoso-eu.cloudapp.net is returned.
6. Next, the recursive DNS service finds the name servers for the 'cloudapp.net' domain.
It contacts those name servers to request the 'contoso-eu.cloudapp.net' DNS record. A
DNS 'A' record containing the IP address of the EU-based service endpoint is returned.
7. The recursive DNS service consolidates the results and returns a single DNS response
to the client.
8. The client receives the DNS results and connects to the given IP address. The client
connects to the application service endpoint directly, not through Traffic Manager.
Since it is an HTTPS endpoint, the client performs the necessary SSL/TLS handshake,
and then makes an HTTP GET request for the '/login.aspx' page.
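The CNAME chain in the steps above can be simulated with a small lookup table. The zone data below is hypothetical and mirrors the example names; a real recursive resolver queries authoritative name servers over the network rather than a local dictionary.

```python
# Hypothetical zone data mirroring the walkthrough: two CNAME hops
# (domain -> Traffic Manager -> chosen endpoint) ending in an A record.
ZONES = {
    "partners.contoso.com": ("CNAME", "contoso.trafficmanager.net"),
    "contoso.trafficmanager.net": ("CNAME", "contoso-eu.cloudapp.net"),
    "contoso-eu.cloudapp.net": ("A", "203.0.113.10"),
}

def resolve(name, zones, max_hops=10):
    """Follow CNAME records until an A record (IP address) is reached."""
    for _ in range(max_hops):
        rtype, value = zones[name]
        if rtype == "A":
            return value
        name = value  # CNAME: restart resolution with the canonical name
    raise RuntimeError("CNAME chain too long")
```

Note the client ends up with only the final IP address; Traffic Manager never sits in the data path.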
The recursive DNS service caches the DNS responses it receives. The DNS resolver on the
client device also caches the result. Caching enables subsequent DNS queries to be answered
more quickly by using data from the cache rather than querying other name servers. The
duration of the cache is determined by the 'time-to-live' (TTL) property of each DNS record.
Shorter values result in faster cache expiry and thus more round-trips to the Traffic Manager
name servers. Longer values mean that it can take longer to direct traffic away from a failed
endpoint. Traffic Manager allows you to configure the TTL used in Traffic Manager DNS
responses to be as low as 0 seconds and as high as 2,147,483,647 seconds (the maximum
range compliant with RFC-1035), enabling you to choose the value that best balances the
needs of your application.
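The caching behaviour described above can be sketched as a minimal TTL-aware cache. The names and addresses are placeholders; a production resolver cache is considerably more involved.

```python
import time

class DnsCache:
    """Cache DNS answers until their TTL expires."""

    def __init__(self):
        self._store = {}  # name -> (answer, absolute expiry time)

    def put(self, name, answer, ttl_seconds):
        self._store[name] = (answer, time.monotonic() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        answer, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL elapsed: force a fresh query
            del self._store[name]
            return None
        return answer

cache = DnsCache()
cache.put("contoso.trafficmanager.net", "203.0.113.10", ttl_seconds=30)
```

A TTL of 0, which Traffic Manager permits, makes every lookup go back to the name servers, trading extra round-trips for the fastest possible failover.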
BENEFITS
The Traffic Manager comes with many benefits for the user:

Increase Performance: The Traffic Manager can increase the performance of your
application, including faster page loading and a better user experience, by serving users from
the hosted service closest to them.

High Availability: You can use the Traffic Manager to improve application
availability by enabling automatic customer traffic fail-over scenarios in the event of issues
with one of your application instances.

No Downtime Required for Upgrade / Maintenance: Once you have configured the
Traffic Manager, you don’t need downtime for application maintenance, patching, or
deployment of a completely new package.

Quick Setup: It’s very easy to configure Azure Traffic Manager on Windows Azure
portal. If you have already hosted your application on Windows Azure (a cloud service,
Azure website), you can easily configure this Traffic Manager with a simple procedure
(setting routing policy).
MANAGING TRAFFIC BETWEEN DATA CENTERS
Datacentres provide cost-effective and flexible access to the scalable compute and storage
resources necessary for today’s cloud computing needs. A typical datacentre is made up of
thousands of servers connected with a large network and usually managed by one operator.
To provide quality access to the variety of applications and services hosted on datacentres
and maximize performance, it is necessary to use datacentre networks effectively and
efficiently. Datacentre traffic is often a mix of several classes with different priorities and
requirements, including user-generated interactive traffic, traffic with deadlines, and
long-running traffic. To this end, custom transport protocols and traffic management
techniques have been developed to improve datacentre network performance.
DATACENTER TRAFFIC CONTROL MANAGEMENT
To enforce traffic control, some level of coordination is needed across the network elements.
In general, traffic control can range from fully distributed to completely centralized. Here we
review the three main approaches used in the literature, namely distributed, centralized and
hybrid.
A. Distributed
Most congestion management schemes coordinate in a distributed way, as it is more reliable
and scalable. A distributed scheme may be implemented as part of the end-hosts, the
switches, or both. Designs that can be fully realized using end-hosts are usually preferred
over ones that need changes in the default network functions or demand additional features at
the switches, such as custom priority queues, in-network rate negotiation and allocation,
complex calculations in switches, or per-flow state information. End-host implementations
are usually more scalable since every server handles its own traffic, which is why popular
transport protocols rely on this type of implementation. Some examples of this approach
include RCP, PDQ, CONGA, Expeditus, and RackCC. RCP and PDQ perform in-network
rate allocation and assignment by allowing switches and end-hosts to communicate using
custom headers; CONGA gets help from switches to perform flowlet-based load balancing in
leaf-spine topologies; Expeditus performs flow-based load balancing by implementing
custom Layer 2 headers and localized monitoring of congestion at the switches; and RackCC
uses ToR switches as a means to share congestion information among the many flows
between the same source and destination racks, to help them converge faster to proper
transmission rates. To implement advanced in-network features, changes to the network
elements might be necessary, and switches may need to do additional computations or
support new features.
B. Centralized
In centralized schemes a central unit coordinates transmissions in the network to avoid
congestion. The central unit has access to a global view of the network topology and
resources, state information of switches, and end-host demands. These include flow sizes,
deadlines and priorities, as well as the queuing status of switches and link capacities. The
scheduler can proactively allocate resources temporally and spatially (across several slots of
time and different links) and plan transmissions in a way that optimizes performance and
minimizes contention. To further increase performance, this entity can translate the
scheduling problem into an optimization problem with resource constraints, the solution to
which can be approximated using fast heuristics. For large networks, the scheduler’s
effectiveness depends on its computational capacity and communication latency to end-hosts.
TDMA, FastPass and FlowTune are examples of centrally coordinated networks. TDMA
divides the timeline into rounds during which it collects end-host demands. Each round is
divided into fixed-size slots during which hosts can communicate in a contention-free
manner.
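The TDMA round-and-slot idea above can be illustrated with a toy scheduler. The demand figures and slot counts below are invented for illustration; real schedulers must also respect network topology and fairness.

```python
def tdma_schedule(demands, slots_per_round):
    """Greedily assign each host its requested number of fixed-size slots,
    in the order demands were collected, until the round is full.
    demands: list of (host, slots_wanted) pairs gathered during the round."""
    schedule = []  # schedule[i] = the host allowed to transmit in slot i
    for host, wanted in demands:
        for _ in range(wanted):
            if len(schedule) == slots_per_round:
                return schedule  # round is full; remaining demand waits
            schedule.append(host)
    return schedule

# Three hosts ask for 2, 3 and 2 slots, but the round has only 5:
# h3's demand is deferred to a later round.
round_plan = tdma_schedule([("h1", 2), ("h2", 3), ("h3", 2)], slots_per_round=5)
```

Because each slot has exactly one transmitter, hosts never contend for the link within a round, which is the core TDMA property.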
C. Hybrid
Using a hybrid system can provide the reliability and scalability of distributed control along
with the performance gains obtained from global network management. A general hybrid
approach is to have distributed control that is assisted by centrally calculated parameters.
Examples of this approach include OTCP, Fibbing, Hedera, and Mahout. OTCP uses a
central controller to monitor and collect measurements on link latencies and their queuing
extent using methods provided by Software Defined Networking (SDN).
SUMMARY
Microsoft Azure Traffic Manager allows users to manage the distribution of user traffic
across various service endpoints that are located in data centres around the world. The
service endpoints supported by the Azure Traffic Manager include cloud services, Web
Apps, and Azure VMs.
Users can also use non-Azure external endpoints with the Traffic Manager. Azure Traffic
Manager utilizes the DNS (Domain Name System) in order to direct client requests to the
most suitable endpoint by applying a traffic-routing method.
The Traffic Manager offers several endpoint monitoring alternatives and traffic-routing
methodologies to suit unique application requirements with auto-failover models. Azure
Traffic Manager is robust and resilient to failures, which also includes the failures of the
whole Azure region.
KEY WORDS/ABBREVIATIONS

Vertical Cloud – A vertical cloud is a cloud computing solutions that is built or
optimized for a specific business vertical such as manufacturing, financial services, or
healthcare.

Virtual Desktop Infrastructure (VDI) – Virtual desktop infrastructure (VDI) is a
desktop operating system hosted within a virtual machine.

Shared Resources – Shared Resources, also known as network resources, are
computing resources that can be accessed remotely through a network, such as a
Local Area Network (LAN) or the internet.

On-Premise – On-premise technology is software or infrastructure that is run on
computers on the premises (in the building) of the person or organization using the
software or infrastructure.

Open Source – Open Source is a development model in which a product’s source
code is made openly available to the public. Open source products promote
collaborative community development and rapid prototyping.
LEARNING ACTIVITY
1. How is traffic managed between data centres?
___________________________________________________________________________
___________________________________________________________________
2. Discuss how data centres play an important role in managing data.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. How does a client connect using Traffic Manager?
2. What is Traffic Manager?
3. What are the benefits of Azure Traffic Manager?
4. How does Azure Traffic Manager help manage traffic between datacentres?
B. Multiple Choice Questions
1. Which of the following “cloudy” characteristics must a cloud management service
have?
a) Billing is on a pay-as-you-go basis
b) The management service is extremely scalable
c) The management service is ubiquitous
d) All of the mentioned
2. How many categories need to be monitored for the entire cloud computing environment?
a) 1
b) 2
c) 4
d) 6
3. Which of the following is a standard protocol for network monitoring and discovery?
a) SNMP
b) CMDB
c) WMI
d) All of the mentioned
4. Which of the following service models provides the least amount of built-in security?
a) SaaS
b) PaaS
c) IaaS
d) All of the mentioned
5. Which of the following services need to be negotiated in Service Level Agreements?
a) Logging
b) Auditing
c) Regulatory compliance
d) All of the mentioned
Answers
1. d
2. d
3. d
4. c
5. d
REFERENCES

Linthicum, David. (2011-04-27) “How to integrate with the cloud”, InfoWorld: Cloud
Computing, April 27, 2011.

Semple, Bryan. (2011-07-14) “Five Capacity Management Challenges for Private
Clouds,” Cloud Computing Journal.

Magalhaes, Deborah et al. (2015-09-19) “Workload modeling for resource usage
analysis and simulation in cloud computing,” Computers & Electrical Engineering

Golden, Barnard. (2010-11-05) “Cloud Computing: Why You Can't Ignore
Chargeback,” CIO.com.

Rigsby, Josette. (2011-08-30) “IBM Offers New Hybrid Cloud Solution Using Cast
Iron, Tivoli,” CMS Wire.

Mike Edwards, Preetam Gawade, John Leung, Bill McDonald, Karolyn Schalk, Karl
Scott, Bill Van Order, Steven Woodward (2017). "Practical Guide to Cloud
Management Platforms". Cloud Standards Customer Council.

Fellows, William (June 2018). "451 Research Cloud Management Market Map". 451
Research Report Excerpt.

"Cloud Computing". www.gartner.com. Retrieved 28 May 2015.

Gamal, Selim; Rowayda A. Sadek; Hend Taha (January 2014). "An Efficient Cloud
Service Broker Algorithm". International Journal of Advancements in Computing
Technology. 6
UNIT 10: DATA MANAGEMENT
STRUCTURE
1. Learning Objectives
2. Introduction
3. Data management strategy in cloud computing
4. Challenges with data
5. Data centers
6. Storage of data and databases
7. Data Privacy and Security Issues at different level.
8. Summary
9. Key Words/Abbreviations
10. Learning Activity
11. Unit End Questions (MCQ and Descriptive)
12. References
LEARNING OBJECTIVES
At the end of the unit the learner will be able to understand and have knowledge of the
following aspects of Data Management:

Understanding of Data Management

Introduction to Data Centres

Security Issues at Data Management

Knowledge of storage of data and databases
INTRODUCTION
IT infrastructure is becoming increasingly complex and enterprises should look out for
scalable data management solutions in order to stay afloat. Eventually, data management in
cloud computing has become the go-to solution for many of them. Companies are extensively
adopting the cloud for it offers cost savings, data availability, flexibility, scalability, and
more. However, if you want to leverage the fullest potential of your infrastructure and have a
possibility to work both in the cloud and on-prem storage transitioning easily between the
two, plan ahead for your data management needs and take a step-by-step approach to meet
them.
Data management is an administrative process that includes acquiring, validating, storing,
protecting, and processing required data to ensure the accessibility, reliability, and timeliness
of the data for its users. It is a broad term that can refer to a role (a data manager), while also
referring to an organizational responsibility.
Within the parameters of data management exists responsibility for the entire data lifecycle,
from collection to consumption. This includes its point of origin (data provenance) and
transformative journey from origination to current point of reference or observation (data
lineage). These attributes are particularly useful in managing data: By describing the journey
of a piece of data, visibility becomes available throughout the data pipeline, checks can be
monitored, and incidents of compromise or failure can be traced directly to sources
DATA MANAGEMENT STRATEGY IN CLOUD COMPUTING
Data migration to the cloud is the real deal that requires a holistic approach. This process is
oftentimes hard. Therefore, it is the primary objectives of your business that should dictate
your strategy in the first place. Positive changes are incremental and no miracle will happen
once you start, not to mention the fact that data management is continuous and must be
constantly monitored after the strategic planning is done.
Figure 10.1
One of the most undesirable effects of a wrong data management strategy, which everybody
risks experiencing, is a substantial increase in costs. Due to the growing complexity of
cloud-driven environments, enterprises’ expenditures can be unreasonably high.
Nonetheless, you can control a budgeting process and do not have to spend as much as one
used to when there was a need for costly servers and systems. Accordingly, developing an
effective strategy to minimize the number of obstacles you might face by considering its key
elements is critical for you. These aspects are the following:
1. A systematic approach to data security.
Overcoming and preventing security challenges should be a data management system’s
primary concern. Firewalls, data encryption, and limiting data exposure are some possible
protective measures. More stringent control is needed for ensuring security in the cloud. Thus, data
governance must be standardized within your enterprise for your data to be secured at rest, in
flight or when going outside the production environment. Make sure you have considered and
employed all possible security cloud services that can help you detect and respond to threats
and actual leakages. Then, it will be easier to comply with existing data management policies.
2. Tiers optimization for specific workloads.
Tiering is, in the first place, meant to add efficiency to your data management strategy and to
derive value from and add value to your data. With tiered storage, frequently accessed objects
will be stored in higher-performing storage pools while the more rarely accessed data objects
whose volume is bigger will be stored in larger-capacity storage pools. Besides, your data
will be structured, which means lower latency.
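A simple tiering policy of this kind can be expressed as a threshold rule. The thresholds and tier names below are arbitrary examples for illustration, not recommendations from any particular storage product.

```python
def assign_tier(accesses_per_day, hot_threshold=10, cold_threshold=1):
    """Place an object in a storage tier based on how often it is read.
    Thresholds are hypothetical; real policies also weigh object size,
    age, and per-tier pricing."""
    if accesses_per_day >= hot_threshold:
        return "hot"    # higher-performing pool, lowest latency
    if accesses_per_day >= cold_threshold:
        return "warm"   # middle ground
    return "cold"       # larger-capacity pool for rarely accessed data
```

In practice such a rule would run periodically over access logs and migrate objects whose tier assignment has changed.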
3. Flexibility in managing multi-structured data.
Multi-structured data make up separate sets of data managed and stored in multiple formats.
So, it is easy to overspend on storage and analytics. Nevertheless, it is the unified data
management that affords flexibility, operational and cost efficiency in your cloud data
analytics.
CLOUD DATA MANAGEMENT MISTAKES TO AVOID
Now that we have highlighted three pillars that your data migration strategy must rest upon, it
is time to define data management challenges in cloud computing and the potential risk
factors that may hinder your efforts.
Figure 10.2
1. No corporate policy.
Any strategic initiative, especially the one that is process-centric, has to comply with the
corresponding policies and standards. Essentially, data management is the tactical execution
thereof and a good idea here is to consolidate as many entities as possible into one system.
Then, one will not only be able to manage data at lower costs but will also do it more
securely. Data that is kept separately and managed in several different ways within one
organization can be hard to access, and the control provided over it may be of insufficient
quality. Centralized and consistent policies will result in more right decisions and fewer
mistakes.
2. Moving all your data to the cloud.
Despite all those great things about cloud computing, enterprises should never forget about
the local file servers, domain controllers and the value they add to your solution. Data-driven
decisions can still be made without driving all you have to the cloud. First, one has to think
over what information can stay in an on-premise server and what should go to a cloud server
for further processing.
3. Limited structure.
Data must be structured. When it is organized, it is accessible and you do not have to waste
your time on searching. Thus, proper classification and strict formats for document names are
essential.
BEST PRACTICES FOR DATA MANAGEMENT IN CLOUD COMPUTING
If there are core principles that lay the foundation for the strategic management of data in the
cloud and certain pitfalls to avoid, then there must be methods and techniques that are, if
compared with the traditional ones, aimed at the operational excellence and overall
improvement of your experience.
Figure 10.3
1. Ensure a sophisticated infrastructure. Everything will work smoothly and efficiently if
there is a possibility to choose whether you want to move data to on-prem storages, to the
cloud, or across different clouds. The cloud is not the only destination of a mass data
migration. The structure has to be sophisticated yet this whole system should have centralized
management.
2. Choose your cloud data management platform. Platforms like this are used for control,
monitoring and other relevant cloud activities. Modern enterprises tend to constantly change
their IT environments by making them larger and more complex. If you do provide such an
infrastructure managing different types of data across various cloud computing services and
local servers, then selecting a single platform is highly reco mmended. This platform
approach will help you maintain a certain level of consistency and reduce bottlenecks.
Besides you can opt for a platform that is native, cloud provider-specific, or available from a
third-party vendor.
3. Leverage the Cloud Data Management Interface. The Cloud Data Management
Interface (CDMI) is a generally accepted interface standard which allows enterprises to
manage data elements and increases the system’s interoperability. Accommodating
requirements from multiple vendors instead of using a storage system with a unique interface
might be challenging, so deploying CDMI-compatible systems is the right thing to do.
4. Create a framework for cloud management first. Before moving data to the cloud,
make sure there is a solid framework. Upon having one established, it will be easier for an
enterprise to say how best to manage its cloud resources. Migration of systems to more
capable platforms is a natural process, but it has to be a conscious and informed decision.
CHALLENGES WITH DATA
Challenge 1: DDoS attacks
As more and more businesses and operations move to the cloud, cloud providers are
becoming a bigger target for malicious attacks. Distributed denial of service (DDoS) attacks
are more common than ever before. Verisign reported IT services, clo ud and SaaS was the
most frequently targeted industry during the first quarter of 2015.
A DDoS attack is designed to overwhelm website servers so a site can no longer respond to
legitimate user requests. If a DDoS attack is successful, it renders a website useless for hours,
or even days. This can result in a loss of revenue, customer trust and brand authority.
Complementing cloud services with DDoS protection is no longer just a good idea for the
enterprise; it’s a necessity. Websites and web-based applications are core components of 21st
century business and require state-of-the-art security.
Challenge 2: Data breaches
Known data breaches in the U.S. hit a record-high of 738 in 2014, according to the Identity
Theft Research Centre, and hacking was (by far) the number one cause. That’s an incredible
statistic and only emphasizes the growing challenge to secure sensitive data.
Traditionally, IT professionals have had great control over the network infrastructure and
physical hardware (firewalls, etc.) securing proprietary data. In the cloud (in private, public
and hybrid scenarios), some of those controls are relinquished to a trusted partner. Choosing
the right vendor, with a strong record of security, is vital to overcoming this challenge.
Challenge 3: Data loss
When business-critical information is moved into the cloud, it’s understandable to be
concerned about its security. Losing data from the cloud, whether through accidental
deletion, malicious tampering, or an act of nature that brings down a cloud service provider,
could be disastrous for an enterprise business. Often a DDoS attack is only a diversion for a
greater threat, such as an attempt to steal or delete data.
To face this challenge, it’s imperative to ensure there is a disaster recovery process in place,
as well as an integrated system to mitigate malicious attacks. In addition, protecting every
network layer, including the application layer (layer 7), should be built-in to a cloud security
solution.
Challenge 4: Insecure access points
One of the great benefits of the cloud is it can be accessed from anywhere and from any
device. But what if the interfaces and APIs users interact with aren’t secure? Hackers can
find these types of vulnerabilities and exploit them.
A behavioural web application firewall examines HTTP requests to a website to ensure the
traffic is legitimate. This always-on device helps protect web applications from security
breaches.
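A very simplified version of such request-rate screening is a sliding-window limiter: flag a client that sends more requests in a short window than a legitimate user plausibly would. This sketch uses assumed thresholds and is not how any particular WAF product works.

```python
from collections import deque
import time

class RateLimiter:
    """Allow a client at most max_requests within a sliding window of
    window_seconds; further requests in that window are rejected."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # client_ip -> deque of recent request timestamps

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(client_ip, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # discard requests that fell out of the window
        q.append(now)
        return len(q) <= self.max_requests

# Illustrative thresholds: at most 3 requests per second per client IP.
limiter = RateLimiter(max_requests=3, window_seconds=1.0)
```

Per-client tracking is what lets a burst from one attacker be throttled without affecting other visitors.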
Challenge 5: Notifications and alerts
Awareness and proper communication of security threats is a cornerstone of network security
and the same goes for cloud security. Alerting the appropriate website or application
managers as soon as a threat is identified should be part of a thorough security plan. Speedy
mitigation of a threat relies on clear and prompt communication so steps can be taken by the
proper entities and impact of the threat minimized.
Final Thoughts
Cloud security challenges are not insurmountable. With the right partners, technology and
forethought, enterprises can leverage the benefits of cloud technology.
DATA CENTERS
Data centers are simply centralized locations where computing and networking equipment is
concentrated for the purpose of collecting, storing, processing, distributing or allowing access
to large amounts of data. They have existed in one form or another since the advent of
computers.
In the days of the room-sized behemoths that were our early computers, a data center might
have had one supercomputer. As equipment got smaller and cheaper, and data processing
needs began to increase -- and they have increased exponentially -- we started networking
multiple servers (the industrial counterparts to our home computers) together to increase
processing power. We connect them to communication networks so that people can access
them, or the information on them, remotely. Large numbers of these clustered servers and
related equipment can be housed in a room, an entire building or groups of buildings. Today's
data center is likely to have thousands of very powerful and very small servers running 24/7.
Because of their high concentrations of servers, often stacked in racks that are placed in rows,
data centers are sometimes referred to as server farms. They provide important services such as
data storage, backup and recovery, data management and networking. These centers can store
and serve up Web sites, run e-mail and instant messaging (IM) services, provide cloud
storage and applications, enable e-commerce transactions, power online gaming communities
and do a host of other things that require the wholesale crunching of zeroes and ones.
Just about every business and government entity either needs its own data center or needs
access to someone else's. Some build and maintain them in-house, some rent servers at colocation facilities (also called colos) and some use public cloud-based services at hosts like
Amazon, Microsoft, Sony and Google.
The colos and the other huge data centers began to spring up in the late 1990s and early
2000s, sometime after Internet usage went mainstream. The data centers of some large
companies are spaced all over the planet to serve the constant need for access to massive
amounts of information. There are reportedly more than 3 million data centers of various
shapes and sizes in the world today.
Why do we need data centres?
The idea that cloud computing means data isn’t stored on computer hardware isn’t accurate.
Your data may not be on your local machine, but it has to be housed on physical drives
somewhere -- in a data centre.
Despite the fact that hardware is constantly getting smaller, faster and more powerful, we are
an increasingly data-hungry species, and the demand for processing power, storage space and
information in general is growing and constantly threatening to outstrip companies' abilities
to deliver.
Any entity that generates or uses data has the need for data centres on some level, including
government agencies, educational bodies, telecommunications companies, financial
institutions, retailers of all sizes, and the purveyors of online information and social
can mean an inability to provide vital services or loss of customer satisfaction and revenue.
A study by International Data Corporation for EMC estimated that 1.8 trillion gigabytes
(GB), or around 1.8 zettabytes (ZB), of digital information was created in 2011 [sources:
Glanz, EMC, Phneah]. The amount of data in 2012 was approximately 2.8 ZB and is
expected to rise to 40 ZB by the year 2020 [sources: Courtney, Digital Science Series, EMC].
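These figures are easy to sanity-check with a unit conversion, since 1 ZB = 10^21 bytes = 10^12 GB, so 1.8 trillion GB is indeed about 1.8 ZB:

```python
GB = 10**9   # bytes in a gigabyte (decimal units)
ZB = 10**21  # bytes in a zettabyte

def gb_to_zb(gigabytes: float) -> float:
    """Convert gigabytes to zettabytes (1 ZB = 10^12 GB)."""
    return gigabytes * GB / ZB

# The 1.8 trillion GB cited for 2011 is 1.8e12 GB.
print(gb_to_zb(1.8e12))
```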
All of this media has to be stored somewhere. And these days, more and more things are also
moving into the cloud, meaning that rather than running or storing them on our own home or
work computers, we are accessing them via the host servers of cloud providers. Many
companies are also moving their professional applications to cloud services to cut back on the
cost of running their own centralized computing networks and servers.
The cloud doesn't mean that the applications and data are not housed on computing hardware.
It just means that someone else maintains the hardware and software at remote locations
where the clients and their customers can access them via the Internet. And those locations
are data centres.
STORAGE OF DATA AND DATABASES
A cloud database is a database that typically runs on a cloud computing platform, and access
to the database is provided as-a-service. Database services take care of scalability and high
availability of the database. Database services make the underlying software stack
transparent to the user.
There are two primary methods to run a database in a cloud:
Virtual machine image
Cloud platforms allow users to purchase virtual-machine instances for a limited time, and one
can run a database on such virtual machines. Users can either upload their own machine
image with a database installed on it, or use ready-made machine images that already include
an optimized installation of a database.
Database-as-a-service (DBaaS)
With a database as a service model, application owners do not have to install and maintain
the database themselves. Instead, the database service provider takes responsibility for
installing and maintaining the database, and application owners are charged according to their
usage of the service. This is a type of SaaS - Software as a Service.
Architecture and common characteristics
Most database services offer web-based consoles, which the end user can use to provision
and configure database instances.
Database services consist of a database-manager component, which controls the underlying
database instances using a service API. The service API is exposed to the end user, and
permits users to perform maintenance and scaling operations on their database instances.
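The manager component can be pictured as a small service API. The class and method names below are hypothetical, chosen only to illustrate the provision/scale/describe operations such services expose:

```python
class DatabaseService:
    """Hypothetical DBaaS manager: provisions instances and exposes
    maintenance and scaling operations through a service API."""

    def __init__(self):
        self._instances = {}  # name -> instance metadata

    def provision(self, name: str, engine: str = "postgres", storage_gb: int = 10) -> dict:
        """Create a new managed database instance."""
        instance = {"name": name, "engine": engine,
                    "storage_gb": storage_gb, "status": "available"}
        self._instances[name] = instance
        return instance

    def scale_storage(self, name: str, storage_gb: int) -> dict:
        """Scaling operation: grow (or shrink) an instance's storage."""
        self._instances[name]["storage_gb"] = storage_gb
        return self._instances[name]

    def describe(self, name: str) -> dict:
        """Maintenance operation: inspect an instance's current state."""
        return self._instances[name]
```

A real provider wraps calls like these behind a REST API and a web console; the application owner never touches the underlying hosts.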
Underlying software-stack stack typically includes the operating system, the database and
third-party software used to manage the database. The service provider is responsible for
installing, patching and updating the underlying software stack and ensuring the overall
health and performance of the database.
Scalability features differ between vendors – some offer auto-scaling, others enable the user
to scale up using an API, but do not scale automatically.
There is typically a commitment for a certain level of high availability (e.g. 99.9% or
99.99%). This is achieved by replicating data and failing instances over to other database
instances.
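The replicate-and-fail-over behaviour behind those availability commitments can be modelled in a few lines. This is a toy simulation, not any vendor's implementation:

```python
class ReplicatedDatabase:
    """Toy model of high availability: writes are copied to every healthy node;
    if the primary fails, a replica is promoted so reads continue."""

    def __init__(self, replica_count: int = 2):
        self.nodes = [{"data": {}, "healthy": True} for _ in range(replica_count + 1)]
        self.primary = 0  # index of the node currently serving reads

    def write(self, key, value):
        for node in self.nodes:            # synchronous replication to all healthy nodes
            if node["healthy"]:
                node["data"][key] = value

    def fail_primary(self):
        """Simulate a primary failure followed by automatic failover."""
        self.nodes[self.primary]["healthy"] = False
        self.primary = next(i for i, n in enumerate(self.nodes) if n["healthy"])

    def read(self, key):
        return self.nodes[self.primary]["data"][key]
```

Because every committed write already exists on the replicas, the promoted node can answer reads with no data loss, which is what the stated availability levels depend on.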
Data model
The design and development of typical systems utilize data management and relational
databases as their key building blocks. Advanced queries expressed in SQL work well with
the strict relationships that are imposed on information by relational databases. However,
relational database technology was not initially designed or developed for use over
distributed systems. This issue has been addressed with the addition of clustering
enhancements to the relational databases, although some basic tasks require complex and
expensive protocols, such as with data synchronization.
Modern relational databases have shown poor performance on data-intensive systems;
therefore, the idea of NoSQL has been utilized within database management systems for
cloud based systems.
Within NoSQL implemented storage, there are no requirements for fixed table schemas, and
the use of join operations is avoided. NoSQL databases have proven to provide "efficient
horizontal scalability, good performance, and ease of assembly into cloud applications." Data
models relying on simplified relay algorithms have also been employed in data-intensive
cloud mapping applications unique to virtual frameworks.
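The schema-less, join-free model can be illustrated with a minimal in-memory key-value store. The keys, field names, and sharding function here are invented for the example:

```python
store = {}  # a minimal schema-less key-value store

# Records need not share a schema: each value is an arbitrary document,
# and adding a new field never requires an ALTER TABLE.
store["user:1"] = {"name": "Asha", "email": "asha@example.com"}
store["user:2"] = {"name": "Ravi", "last_login": "2021-03-01"}

def shard_for(key: str, shard_count: int = 4) -> int:
    """Horizontal scalability: deterministically map a key to one of N shards."""
    return sum(key.encode()) % shard_count
```

Because each record is self-contained and addressed by key, partitioning the data across many servers (horizontal scaling) needs no cross-node joins, which is exactly what makes this model cloud-friendly.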
It is also important to differentiate between cloud databases which are relational as opposed
to non-relational or NoSQL.
SQL databases
are one type of database which can run in the cloud, either in a virtual machine or as a
service, depending on the vendor. While SQL databases are easily vertically scalable,
horizontal scalability poses a challenge that cloud database services based on SQL have
started to address.
NoSQL databases
are another type of database which can run in the cloud. NoSQL databases are built to service
heavy read/write loads and can scale up and down easily, and therefore they are more
natively suited to running in the cloud. However, most contemporary applications are built
around an SQL data model, so working with NoSQL databases often requires a complete
rewrite of application code.
Some SQL databases have developed NoSQL capabilities including JSON, binary JSON (e.g.
BSON or similar variants), and key-value store data types.
A multi-model database with relational and non-relational capabilities provides a standard
SQL interface to users and applications and thus facilitates the usage of such databases for
contemporary applications built around an SQL data model. Native multi-model databases
support multiple data models with one core and a unified query language to access all data
models.
DATA PRIVACY AND SECURITY ISSUES AT DIFFERENT LEVEL.
With the increase in data volumes, data handling has become the talk of the town. As
companies begin to move to the cloud, there is a higher emphasis on ensuring everything is safe
and secure, and that there is no risk of data hacking or breaches. Since the cloud allows
people to work without hardware and software investments, users can gain flexibility and
data agility. However, since the Cloud is often shared between a lot of users, security
becomes an immediate concern for Cloud owners.
Security Issues Within the Cloud
Cloud vendors provide a layer of security to users' data. However, it is still not enough, since
the confidentiality of data can often be at risk. There are various types of attacks, which range
from password-guessing attacks and man-in-the-middle attacks to insider attacks, shoulder-surfing
attacks, and phishing attacks. Here is a list of the security challenges which are
present within the cloud:
Data Protection and Misuse: When different organizations use the cloud to store their data,
there is often a risk of data misuse. To avoid this risk, there is an imminent need to secure the
data repositories. To achieve this task, one can use authentication and restrict access control
for the cloud’s data.
Locality: Within the cloud world, data is often distributed over a series of regions; it is quite
challenging to find the exact location of the data storage. However, as data is moved from
one country to another, the rules governing the data storage also change; this brings
compliance issues and data privacy laws into the picture, which pertain to the storage of data
within the cloud. The cloud service provider has to inform users of the applicable data
storage laws, and of the exact location of the data storage server.
Integrity: The system needs to be designed in such a manner as to provide security and access
restrictions. In other words, data access should lie with authorized personnel only. In a cloud
environment, data integrity should be maintained at all times to avoid any inherent data loss.
Apart from restricting access, the permissions to make changes to the data should be limited
to specific people, so that there is no widespread access problem at a later stage.
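A common way to maintain and verify data integrity is a cryptographic checksum, recorded when data is stored and compared before the data is trusted again. A minimal sketch using Python's standard hashlib:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used to detect any unauthorized modification of stored data."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly revenue report"
stored_digest = fingerprint(original)

# Later, before trusting the stored copy, recompute and compare:
assert fingerprint(b"quarterly revenue report") == stored_digest   # intact
assert fingerprint(b"quarterly revenue report!") != stored_digest  # tampered
```

Even a one-byte change produces a completely different digest, so a mismatch reliably flags corruption or unauthorized edits.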
Access: Data security policies concerning the access and control of data are essential in the
long run. Authorized data owners are required to give part access to individuals so that
everyone gets only the required access for parts of the data stored within the data mart. By
controlling and restricting access, a great deal of control and data security can be
applied to ensure maximum security for the stored data.
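Part access of this kind is often expressed as an access-control list. The roles and dataset names below are made up for illustration:

```python
# A minimal access-control list: each role is granted only the data
# partitions it needs, nothing more (principle of least privilege).
ACL = {
    "analyst": {"sales_summary"},
    "dba":     {"sales_summary", "customer_pii"},
}

def can_read(user_role: str, dataset: str) -> bool:
    """Return True only if the role was explicitly granted access to the dataset."""
    return dataset in ACL.get(user_role, set())
```

Unknown roles get an empty grant set, so access defaults to "deny", which is the safe failure mode for stored data.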
Confidentiality: There is a lot of sensitive data which might be stored in the cloud. This data
has to have extra layers of security on it to reduce the chances of breaches and phishing
attacks; this can be done by the service provider, as well as the organization. However, as a
precaution, data confidentiality should be of utmost priority for sensitive material.
Breaches: Breaches within the cloud are not unheard of. Hackers can breach security
parameters within the cloud, and steal the data which might otherwise be considered
confidential for organizations. Alternatively, a breach can be an internal attack, so
organizations need to place particular emphasis on tracking employee actions to avoid any
unwanted attacks on stored data.
Storage: For organizations, the data is being stored and made available virtually. However,
for service providers, it is necessary to store the data in physical infrastructures, which makes
the data vulnerable to physical attacks.
These are some of the security issues which come as a part of the cloud environment.
However, these are not exactly difficult to overcome, especially with the available levels of
technological resources these days. There is a lot of emphasis on ensuring maximum security
for the stored data so that it complies with the rules and regulations, as well as the
organization’s internal compliance policies.
SUMMARY
Data is created around us every second. The cloud promises the ability to reach it
anywhere, anytime, but the information must also be protected properly with the help of the
right data management solution. Some people may experience difficulties in deciding how to
start working in the cloud or fail to recognize those promised benefits while on their way.
Although the cloud does not come with an instruction manual, it can work for you and your
business, anyway. All you need is to have your data managed well.
Businesses used to have centralized on-premise data warehouses where information was safe.
Yet, as time went by, it became harder to maintain them; highly skilled manpower and
greater maintenance fees are needed now. So, now, in our cloud age, when people are willing
to access their data easily, an innovative management solution is what can help extract the
full value of data to put it to good use. Management methods applied to data that is stored in
the cloud differ from the traditional ones since cloud data analytics has to meet the
requirements of enhanced cloud data security and integrity.
Data management has been rapidly evolving from outdated, locally-hosted storage systems to
a much more versatile and reliable cloud data management module. Although local data
storage was the industry standard for some time, this preference is changing as businesses
become aware of new developments in cloud storage technology.
Over the next few years, more and more companies will migrate to the cloud as their
preferred method of data management. Data will play an increasingly important role in the
ability of organizations to stay competitive in their respective fields. This projection further
emphasizes the need to achieve and maintain an efficient data management structure that will
allow a company to keep pace with a fast-paced and constantly evolving business landscape.
KEY WORDS/ABBREVIATIONS
 Multi-Cloud – A multi-cloud strategy is the concurrent use of separate cloud service providers for different infrastructure, platform, or software needs.
 Multi-Tenancy – Multi-Tenancy is a mode of operation for software in which multiple instances of one or many applications run in a shared environment.
 Microservices – A way of designing applications in which complex applications are built out of a suite of small, independently deployable services.
 Linux – Linux is an open-source operating system, built on Unix, that is used for the majority of cloud services.
 Load Balancing – The process of distributing computing workloads across multiple resources, such as servers.
LEARNING ACTIVITY
1. Discuss various challenges faced in securing data.
___________________________________________________________________________
___________________________________________________________________
2. Draw a list of different threats to Data in Network.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. What is Datacentre?
2. What are the challenges faced to manage data?
3. What is Storage of data and databases?
4. Discuss various Data Privacy and Security Issues in cloud computing.
B. Multiple Choice Questions
1. Which of the following area of cloud computing is uniquely troublesome?
a) Auditing
b) Data integrity
c) e-Discovery for legal compliance
d) All of the mentioned
2. Which of the following is the operational domain of CSA?
a) Scalability
b) Portability and interoperability
c) Flexibility
d) None of the mentioned
3. Which of the following is considered an essential element in cloud computing by CSA?
a) Multi-tenancy
b) Identity and access management
c) Virtualization
d) All of the mentioned
4. Which of the following is used for Web performance management and load testing?
a) VMware Hyperic
b) Webmetrics
c) Univa UD
d) Tapinsystems
5. Which of the following is application and infrastructure management software for hybrid
multi-clouds?
a) VMware Hyperic
b) Webmetrics
c) Univa UD
d) Tapinsystems
Answers
1. d
2. b
3. a
4. b
5. c
REFERENCES
 Hamlen, K., Kantarcioglu, M., Khan, L., and Thuraisingham, B. (2010). Security Issues for Cloud Computing. International Journal of Information Security and Privacy, 4(2), 36-48.
 Levina, N., and Vaast, E. (2005). The emergence of boundary spanning competence in practice: Implications for implementation and use of information systems. MIS Quarterly, 29(2), 335–363.
 Ravishankar, M.N., Pan, S.L., and Leisner, D.E. (2011). Examining the strategic alignment and implementation success of a KMS: A subculture-based multilevel analysis. Information Systems Research, 22(1), 39–59.
 Tiwana, A. (2012). Novelty-knowledge alignment: A theory of design convergence in systems development. Journal of Management Information Systems, 29(1), 15–52.
 Rizwan Mian and Patrick Martin (2012). Executing data-intensive workloads in a Cloud. 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing.
 Yingjie Shi, Xiaofeng Meng, Jing Zhao, Xiangmei Hu, Bingbing Liu and Haiping Wang (2010). Benchmarking Cloud-based Data Management Systems. CloudDB'10, Toronto, Ontario, Canada. ACM 978-1-4503-0380-4/10/10.
UNIT 11: CLOUD STORAGE
STRUCTURE
1. Learning Objectives
2. Introduction
3. Cloud storage
4. Storage account
5. Storage Replications: LRS, ZRS, GRS, RAGRS
6. Types of storage: blob, file, table, queue.
7. Summary
8. Key Words/Abbreviations
9. Learning Activity
10. Unit End Questions (MCQ and Descriptive)
11. References
LEARNING OBJECTIVES
At the end of the unit the learner will be able to understand and have knowledge of the
following aspects of Cloud Storage and Storage Account:
 Definition of Cloud Storage
 Introduction to Storage Account
 Knowledge of Storage Replications
 Life cycle of Virtual Machine
INTRODUCTION
Cloud storage is based on highly virtualized infrastructure and is like broader cloud
computing in terms of accessible interfaces, near-instant elasticity and scalability,
multi-tenancy, and metered resources. Cloud storage services can be utilized from an
off-premises service (Amazon S3) or deployed on-premises (ViON Capacity Services).
Cloud storage typically refers to a hosted object storage service, but the term has broadened
to include other types of data storage that are now available as a service, like block storage.
Object storage services like Amazon S3, Oracle Cloud Storage and Microsoft Azure Storage,
object storage software like OpenStack Swift, object storage systems like EMC Atmos, EMC
ECS and Hitachi Content Platform, and distributed storage research projects like OceanStore
and VISION Cloud are all examples of storage that can be hosted and deployed with cloud
storage characteristics.
Cloud storage is:
 Made up of many distributed resources, but still acts as one, either in a federated or a cooperative storage cloud architecture
 Highly fault tolerant through redundancy and distribution of data
 Highly durable through the creation of versioned copies
 Typically eventually consistent with regard to data replicas
CLOUD STORAGE
Cloud storage is a cloud computing model that stores data on the Internet through a cloud
computing provider who manages and operates data storage as a service. It’s delivered on
demand with just-in-time capacity and costs, and eliminates buying and managing your own
data storage infrastructure. This gives you agility, global scale and durability, with “anytime,
anywhere” data access.
Cloud storage is a model of computer data storage in which the digital data is stored in
logical pools, said to be on "the cloud". The physical storage spans multiple servers
(sometimes in multiple locations), and the physical environment is typically owned and
managed by a hosting company. These cloud storage providers are responsible for keeping
the data available and accessible, and the physical environment protected and running. People
and organizations buy or lease storage capacity from the providers to store user, organization,
or application data.
Cloud storage services may be accessed through a collocated cloud computing service, a web
service application programming interface (API) or by applications that utilize the API, such
as cloud desktop storage, a cloud storage gateway or Web-based content management
systems.
Cloud storage is purchased from a third party cloud vendor who owns and operates data
storage capacity and delivers it over the Internet in a pay-as-you-go model. These cloud
storage vendors manage capacity, security and durability to make data accessible to your
applications all around the world.
Applications access cloud storage through traditional storage protocols or directly via an API.
Many vendors offer complementary services designed to help collect, manage, secure and
analyse data at massive scale.
STORAGE ACCOUNT
An Azure storage account contains all of your Azure Storage data objects: blobs, files,
queues, tables, and disks. The storage account provides a unique namespace for your Azure
Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in
your Azure storage account is durable and highly available, secure, and massively scalable.
Azure Storage features
These features apply to all Azure Storage offerings:
Durability
Azure Storage data is replicated multiple times across regions. There are four ways you can
make sure data is stored redundantly: Locally Redundant Storage (LRS), Zone-Redundant
Storage (ZRS), Geo-redundant Storage (GRS), and Read Access Geo-redundant Storage
(RA-GRS).
Using LRS, three copies of all data are maintained in a single facility within a single region.
With ZRS, three copies of your data are stored across multiple facilities (availability
zones) within a region. Obviously, this will achieve greater durability than LRS. For GRS, six copies of data
are stored across two regions, with three copies in a so-called primary region, and the rest in a
secondary region, usually geographically distant from your primary region. In case of
primary region failure, the secondary region is used as part of a fail-over mechanism. RA-GRS data will be stored just like GRS, except that you get read-only access to the secondary
region.
Geo-redundant Storage (GRS) and Read Access Geo-redundant Storage (RA-GRS) provide
the highest level of durability, but at a higher cost. GRS is the default storage redundancy
mode. In case you need to switch from LRS to GRS or to RA-GRS, an additional one-time
data transfer cost will be applied. But if you choose ZRS, you cannot subsequently change to
any other redundancy mode.
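The copy counts above can be summarized in a small lookup table. This is a sketch of the documented behaviour, not a service API:

```python
# Copy counts and scope per redundancy mode, as described above.
REDUNDANCY = {
    "LRS":    {"copies": 3, "regions": 1},                           # one facility
    "ZRS":    {"copies": 3, "regions": 1},                           # spread across zones
    "GRS":    {"copies": 6, "regions": 2},                           # 3 primary + 3 secondary
    "RA-GRS": {"copies": 6, "regions": 2, "secondary_reads": True},  # GRS + read-only secondary
}

def total_copies(mode: str) -> int:
    """How many copies of each object the given redundancy mode maintains."""
    return REDUNDANCY[mode]["copies"]
```

Reading the table makes the cost/durability trade-off concrete: GRS and RA-GRS double the copy count (and span a second region), which is why they cost more than LRS or ZRS.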
High Availability
With such durable features, storage services will automatically be highly available. If you
choose GRS or RA-GRS, your data will be replicated in multiple facilities across
multiple regions. Any catastrophic failure of one data centre will not result in permanent data
loss.
Scalability
Data is automatically scaled out and load-balanced to meet peak demands. Azure Storage
provides a global namespace to access data from anywhere.
Security
Azure Storage relies on a Shared Key model for authentication security. Access can be
further restricted through the use of a shared access signature (SAS). SAS is a token that can
be appended to a URI, defining specific permissions for a specified period of time. With
SAS, you can access standard stores like Blob, Table, Queue, and File. You can also provide
anonymous access, although it is generally not recommended.
STORAGE REPLICATIONS: LRS, ZRS, GRS, RAGRS
Azure Storage is a managed data storage service in Microsoft Azure cloud which is highly
redundant and protected from any kind of failure as it provides different level of data
replication and redundancy.
Azure Storage Replication Mechanism:
1. Locally Redundant Storage (LRS):
LRS synchronously replicates data three times within a single physical datacentre in a region.
It protects against server rack or storage cluster failure, but cannot survive a
datacentre-level (or Availability Zone-level) failure. It provides at least 99.999999999%
(11 nines) durability of the data.
Figure 11.1
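The "nines" notation converts directly into numbers: n nines of durability means a yearly loss probability of 10^-n per object. A small helper (an illustration, not an Azure API):

```python
def durability_from_nines(nines: int) -> float:
    """Probability that an object survives the year at a durability of `nines` nines."""
    return 1 - 10 ** -nines

def expected_losses(objects: int, nines: int) -> float:
    """Expected number of objects lost per year at that durability level."""
    return objects * 10 ** -nines
```

At 11 nines, for example, storing ten million objects loses on average only about 0.0001 objects per year, which is why these figures are quoted with so many nines.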
2. Zone Redundant Storage (ZRS):
With ZRS, data is replicated synchronously across three Availability Zones (AZs) in an
Azure region, which means that even if one of the AZs completely fails we can still continue to
read and write the data without any interruption or data loss, as each Availability Zone is an
independent physical location within an Azure region. However, ZRS cannot survive an
unexpected failure that impacts the complete Azure region. ZRS provides at least
99.9999999999% (12 9's) durability of the data.
Figure 11.2
3. Geo Redundant Storage (GRS):
With GRS, first data is replicated synchronously three times within a single physical
datacentre in the primary Azure region using LRS mechanism. It then replicates your data
asynchronously to a single physical location in the secondary Azure region. After the data is
replicated to the secondary location, it is also replicated within that location using LRS. GRS
protects against region-level disasters and provides at least 99.99999999999999% (16 9's)
durability of the data.
Figure 11.3
4. Geo Zone Redundant Storage (GZRS):
With GZRS, data is replicated across three Azure availability zones in the primary region and
is also replicated to a secondary geographic region for protection from region level disasters.
GZRS provides at least 99.99999999999999% (16 9's) durability of the data. The key
difference between GRS and GZRS is how data is replicated in the primary region; within the
secondary region, data is always replicated synchronously three times using LRS.
Figure 11.4
Note: Your application or client can’t read or write in secondary Azure region with GRS or
GZRS replication unless there is a failover to the secondary region. If you would like read
access to the secondary Azure region, then configure your storage account to use read-access
geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
If the primary Azure region where the Storage Account resides becomes unavailable due to any
unplanned or planned event, we can manually perform a failover to the secondary region; once
the failover is completed, the secondary region becomes the primary region and we can again
read and write data.
TYPES OF STORAGE: BLOB, FILE, TABLE, QUEUE.
Azure Storage offers several types of storage accounts. Each type supports different features
and has its own pricing model. Consider these differences before you create a storage account
to determine the type of account that is best for your applications. The types of storage
accounts are:
With an Azure Storage account, you can choose from two kinds of storage services: Standard
Storage which includes Blob, Table, Queue, and File storage types, and Premium Storage –
Azure VM disks.
Figure 11.5
Standard Storage account
With a Standard Storage Account, a user gets access to Blob Storage, Table Storage, File
Storage, and Queue storage. Let’s explain those just a bit better.
Azure Blob Storage
Blob Storage is basically storage for unstructured data that can include pictures, videos,
music files, documents, raw data, and log data, along with their meta-data. Blobs are stored
in a directory-like structure called a “container”. If you are familiar with AWS S3, containers
work much the same way as S3 buckets. You can store any number of blob files up to a total
size of 500 TB and, like S3, you can also apply security policies. Blob storage can also be
used for data or device backup.
Blob Storage service comes with three types of blobs: block blobs, append blobs and page
blobs. You can use block blobs for documents, image files, and video file storage. Append
blobs are similar to block blobs, but are more often used for append operations like logging.
Page blobs are used for objects meant for frequent read-write operations. Page blobs are
therefore used in Azure VMs to store OS and data disks.
To access a blob from storage, the URI should be:
 http://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>
For example, to access a movie called RIO from the BlueSky container of an account called
Carlos, request:
 http://carlos.blob.core.windows.net/bluesky/RIO.avi
Note that container names are always in lower case.
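A small helper (hypothetical, not part of any SDK) can build such URIs while enforcing the lowercase-container rule:

```python
def blob_uri(account: str, container: str, blob: str) -> str:
    """Build a blob URI; container names must be lowercase, so enforce that here.
    Blob names, by contrast, may keep their original case."""
    container = container.lower()
    return f"http://{account}.blob.core.windows.net/{container}/{blob}"
```

For example, blob_uri("carlos", "BlueSky", "RIO.avi") yields the same address as the example above, with the container name normalized to lowercase.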
Azure Table Storage
Table storage, as the name indicates, is preferred for tabular data, which is ideal for key-value
NoSQL data storage. Table Storage is massively scalable and extremely easy to use. Like
other NoSQL data stores, it is schema-less and accessed via a REST API. A query to table
storage might look like this:
 http://<storage account>.table.core.windows.net/<table>
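Table entities are addressed by a PartitionKey and a RowKey pair. The in-memory dictionary below only mimics that addressing model to show the schema-less design; it is not the Table service API:

```python
# Entities keyed by (PartitionKey, RowKey), mirroring Table Storage's addressing.
table = {}

def insert_entity(partition_key: str, row_key: str, **properties):
    """Store an entity; properties are arbitrary, so no fixed schema is required."""
    table[(partition_key, row_key)] = properties

def get_entity(partition_key: str, row_key: str) -> dict:
    """Point lookup by the (PartitionKey, RowKey) pair."""
    return table[(partition_key, row_key)]

insert_entity("customers", "001", name="Asha", country="IN")
insert_entity("customers", "002", name="Ravi")  # different properties: schema-less
```

Entities sharing a PartitionKey can be stored together, which is how the real service scales horizontally while keeping point lookups fast.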
Azure File Storage
Azure File Storage is meant for legacy applications. Azure VMs and services share their data
via mounted file shares, while on-premise applications access the files using the File Service
REST API. Azure File Storage offers file shares in the cloud using the standard SMB
protocol and supports both SMB 3.0 and SMB 2.1.
Azure Queue Storage
The Queue Storage service is used to exchange messages between components either in the
cloud or on-premise (compare to Amazon’s SQS). You can store large numbers of messages
to be shared between independent components of applications and communicated
asynchronously via HTTP or HTTPS. Typical use cases of Queue Storage include processing
backlog messages or exchanging messages between Azure Web roles and Worker roles.
A query to Queue Storage might look like this:
 http://<account>.queue.core.windows.net/<queue-name>/messages
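The decoupling role of a queue can be shown with a toy in-process version. Real Queue Storage communicates over HTTP(S) and adds features such as visibility timeouts; this sketch keeps only the enqueue/dequeue shape:

```python
from collections import deque

class MessageQueue:
    """Toy queue mirroring Queue Storage's role: let producers and consumers
    communicate asynchronously without calling each other directly."""

    def __init__(self):
        self._messages = deque()

    def enqueue(self, body: str):
        """A producer (e.g. a web role) posts work."""
        self._messages.append(body)

    def dequeue(self):
        """A consumer (e.g. a worker role) takes the oldest message, or None if empty."""
        return self._messages.popleft() if self._messages else None

q = MessageQueue()
q.enqueue("resize image 42")   # web role posts work...
q.enqueue("send receipt 7")    # ...and the worker role drains it later
```

Because the producer never waits for the consumer, either side can scale or restart independently, which is the point of using a queue between components.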
Premium Storage account:
The Azure Premium Storage service is the most recent storage offering from Microsoft, in
which data are stored in Solid State Drives (SSDs) for better IO and throughput. Premium
storage only supports Page Blobs.
Use general-purpose v2 accounts instead when possible.
 Storage accounts using the classic deployment model can still be created in some locations, and existing classic accounts continue to be supported.
 All storage accounts are encrypted using Storage Service Encryption (SSE) for data at rest.
 Archive storage and blob-level tiering only support block blobs. The Archive tier is available at the level of an individual blob only, not at the storage account level.
 Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) (preview) are available only for standard general-purpose v2, BlockBlobStorage, and FileStorage accounts in certain regions.
 Premium performance for general-purpose v2 and general-purpose v1 accounts is available for disk and page blobs only. Premium performance for block or append blobs is only available on BlockBlobStorage accounts. Premium performance for files is only available on FileStorage accounts.
 Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. Data Lake Storage Gen2 is only supported on general-purpose v2 storage accounts with the hierarchical namespace enabled.
SUMMARY
Computer systems have been steadily moving away from local storage to remote, server-based
storage and processing, also known as the cloud. Consumers are affected too: we
now stream video and music from servers rather than playing them from discs. By keeping
your own documents and media in the cloud, you can enjoy anywhere-access and improve
collaboration. We've rounded up the best cloud storage and file-sharing and file-syncing
services to help you decide which are right for you.
These services provide seamless access to all your important data—Word docs, PDFs,
spreadsheets, photos, and any other digital assets—from wherever you are. You no longer
need to be sitting at your work PC to see your work files. With cloud syncing you can get to
them from your laptop at home, your smartphone on the go, or from your tablet on your
couch. Using one of these services means no more having to email files to yourself or plug
and unplug USB thumb drives.
If you don't yet have a service for storing and syncing your data in the cloud, you should
seriously consider one. Which you choose depends on the kinds of files you store, how much
security you need, whether you plan to collaborate with other people, and which devices you
use to edit and access your files. It may also depend on your comfort level with computers in
general. Most of these services are extremely user-friendly, while others offer advanced
customization for more experienced techies.
KEY WORDS/ABBREVIATIONS

Virtual private data center: Resources grouped according to specific business
objectives.

Standardized interfaces: Cloud services should have standardized APIs, which provide
instructions on how two applications or data sources can communicate with each other.
A standardized interface lets the customer link cloud services together more easily.

Microsoft account (also called an MSA): A personal account that provides access to
your consumer-oriented Microsoft products and cloud services, such as Outlook,
OneDrive, Xbox LIVE, or Office 365. Your Microsoft account is created and stored
in the Microsoft consumer identity account system run by Microsoft.

Resource groups: Logical containers that you use to group related resources in a
subscription. Each resource can exist in only one resource group. Resource groups
allow for more granular grouping within a subscription and are commonly used to
represent a collection of assets required to support a workload, application, or specific
function within a subscription.

Service Administrator: This classic subscription administrator role enables you to
manage all Azure resources, including access. It has the equivalent access of a
user assigned the Owner role at the subscription scope.
LEARNING ACTIVITY
1. Study Azure cloud storage as used in a retail company. How is it implemented?
___________________________________________________________________________
___________________________________________________________________
2. Discuss various storage techniques.
___________________________________________________________________________
___________________________________________________________________
UNIT END QUESTIONS (MCQ AND DESCRIPTIVE)
A. Descriptive Questions
1. What is a storage account?
2. Discuss the various storage replication options.
3. What are the types of storage?
4. How does Azure help in managing database storage?
B. Multiple Choice Questions
1. Which of the following is the functional cloud computing hardware/software stack known
as the Cloud Reference Model?
a) CAS
b) CSA
c) SAC
d) All of the mentioned
2. For the _________ model, the security boundary may be defined for the vendor to include
the software framework and middleware layer.
a) SaaS
b) PaaS
c) IaaS
d) All of the mentioned
3. Which of the following cloud does not require mapping?
a) Public
b) Private
c) Hybrid
d) None of the mentioned
4. In which of the following deployment models is the infrastructure owned by both vendor
and customer?
a) Public
b) Private
c) Hybrid
d) None of the mentioned
5. Which of the following deployment model types is not trusted in terms of security?
a) Public
b) Private
c) Hybrid
d) None of the mentioned
Answers
1. b
2. b
3. a
4. c
5. a
REFERENCES

"A History of Cloud Computing". Computer Weekly.

Louden, Bill (September 1983). "Increase Your 100's Storage with 128K from
CompuServe". Portable 100. New England Publications Inc. 1 (1): 22. ISSN 0738-7016.

Daniela Hernandez (May 23, 2014). "Tech Time Warp of the Week". Wired.

"Box.net lets you store, share, work in the computing cloud". Silicon Valley Business
Journal. December 16, 2009. Retrieved October 2, 2016.

"On-premises private cloud storage description, characteristics, and options".
Archived from the original on 2016-03-22. Retrieved 2012-12-10.

S. Rhea, C. Wells, P. Eaton, D. Geels, B. Zhao, H. Weatherspoon, and J.
Kubiatowicz, "Maintenance-Free Global Data Storage". IEEE Internet Computing,
Vol. 5, No. 5, September/October 2001, pp. 40–49.

Kolodner, Elliot K.; Tal, Sivan; Kyriazis, Dimosthenis; Naor, Dalit; Allalouf, Miriam;
Bonelli,