
CCL Lab manual 17-02-2020

Department of Computer Engineering
Lab Manual
Final Year Semester-VIII
Subject: Cloud Computing Lab
Even Semester
Institutional Vision, Mission and Quality Policy
Our Vision
To foster and permeate higher and quality education with value-added engineering and technology programs, providing all facilities in terms of technology and platforms for all-round development with societal awareness, and to nurture the youth with international competencies and an exemplary level of employability even in a highly competitive environment, so that they are innovative, adaptable and capable of handling problems faced by our country and the world at large.
RAIT's firm belief in a new form of engineering education that lays equal stress on academics and on leadership-building extracurricular skills has been a major contributor to the success of RAIT as one of the most reputed institutions of higher learning. The challenges faced by our country and the world in the 21st century need a whole new range of thought and action leaders, which a conventional educational system in the engineering disciplines is ill-equipped to produce. Our reputation for providing good engineering education with additional life skills ensures that high-grade and highly motivated students join us. Our laboratories and practical sessions reflect the latest practices followed in industry. The project work and summer projects make our students adept at handling real-life problems and industry-ready. Our students are well placed in industry, and their performance makes reputed companies visit us with renewed demand and vigour.
Our Mission
The Institution is committed to mobilizing resources and equipping itself with people and materials of excellence, thereby ensuring that it becomes a pivotal centre of service to industry, academia and society with the latest technology. RAIT engages different platforms such as technology-enhancing Student Technical Societies, cultural platforms, sports excellence centres, an Entrepreneurial Development Centre and a Societal Interaction Cell. It aims to develop the college into an autonomous institution and deemed university at the earliest, with facilities for advanced research and development programs on par with international standards, and to invite reputed national and international institutions and universities to collaborate on issues of common interest in teaching and learning.
RAIT's mission is to produce engineering and technology professionals who are innovative and inspiring thought leaders, adept at solving problems faced by our nation and the world, by providing quality education. The Institute works closely with all stakeholders, such as industry and academia, to foster knowledge generation, acquisition and dissemination using the best available resources to address the great challenges being faced by our country and the world. RAIT is fully dedicated to providing its students with skills that make them leaders and solution providers, industry-ready when they graduate from the Institution.
We at RAIT assure our main stakeholders, the students, of 100% quality in the programmes we deliver. This quality assurance stems from the teaching and learning processes at work on our campus and from teachers who are handpicked from reputed institutions (IIT, NIT, MU, etc.) and who inspire the students to be innovative in thinking and practical in approach. We have put in place internal procedures to improve the skill sets of instructors by sending them to training courses, workshops, seminars and conferences. We also have a full-fledged course curriculum, with deliveries planned in advance for a structured semester-long programme. A well-developed feedback system, drawing on employers, alumni, students and parents, helps us fine-tune the learning and teaching processes. These tools help us ensure the same quality of teaching independent of any individual instructor. Each classroom is equipped with Internet access and other digital learning resources.
The effective learning process on campus comprises a clean and stimulating classroom environment and the availability of lecture notes and digital resources prepared by the instructor, accessible from the comfort of home. In addition, students are provided with a good number of assignments to trigger their thinking process. The testing process involves an objective test paper that gauges the students' understanding of concepts. The quality assurance process also ensures that the learning process is effective. Summer internships and project-based training ensure that the learning process includes practical, industry-relevant aspects. Various technical events, seminars and conferences make the student learning complete.
Our Quality Policy
It is our earnest endeavour to produce high quality engineering professionals who are
innovative and inspiring, thought and action leaders, competent to solve problems
faced by society, nation and world at large by striving towards very high standards in
learning, teaching and training methodologies.
Our Motto: If it is not of quality, it is NOT RAIT!
Departmental Vision, Mission
Vision
To be renowned for high-quality teaching and research activities, with a view to preparing technically sound, ethically strong and morally elevated engineers, and to prepare students to sustain the impact of computer education on social needs, encompassing industry, educational institutions and public service.
Mission
 To provide budding engineers with comprehensive, high-quality education in computer engineering for intellectual growth.
 To provide state-of-the-art research facilities to generate knowledge and develop technologies in the thrust areas of Computer Science and Engineering.
 To provide platforms for sports, technical, co-curricular and extracurricular activities for the overall development that will enable students to be the most sought after in the country and abroad.
Departmental Program Educational Objectives
(PEOs)
1. Learn and Integrate
To provide Computer Engineering students with a strong foundation in the mathematical, scientific
and engineering fundamentals necessary to formulate, solve and analyze engineering problems and
to prepare them for graduate studies.
2. Think and Create
To develop an ability to analyze the requirements of the software and hardware, understand the
technical specifications, create a model, design, implement and verify a computing system to meet
specified requirements while considering real-world constraints to solve real world problems.
3. Broad Base
To provide broad education necessary to understand the science of computer engineering and the
impact of it in a global and social context.
4. Techno-leader
To provide exposure to emerging cutting edge technologies, adequate training & opportunities to
work as teams on multidisciplinary projects with effective communication skills and leadership
qualities.
5. Practice citizenship
To provide knowledge of professional and ethical responsibility and to contribute to society through
active engagement with professional societies, schools, civic organizations or other community
activities.
6. Clarify Purpose and Perspective
To provide strong in-depth education through electives and to promote student awareness on the
life-long learning to adapt to innovation and change, and to be successful in their professional work
or graduate studies.
Departmental Program Outcomes (POs)
PO1: Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO2: Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
PO3: Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
PO4: Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions.
PO5: Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with
an understanding of the limitations.
PO6: The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
PO7: Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for
sustainable development.
PO8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9: Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
PO10: Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.
PO11: Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one's own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
PO12: Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
Program Specific Outcomes:
PSO1: To build competencies towards problem solving with an ability to understand, identify,
analyze and design the problem, implement and validate the solution including both hardware and
software.
PSO2: To build appreciation and acquisition of knowledge of current computing techniques, with an
ability to use the skills and tools necessary for computing practice.
PSO3: To be able to match the industry requirements in the area of computer science and
engineering. To equip skills to adopt and imbibe new technologies.
Index

Sr. No.  Contents                                            Page No.
1.       List of Experiments                                 9
2.       Course Objective, Course Outcomes and
         Experiment Plan                                     10
3.       CO-PO Mapping                                       12
4.       Study and Evaluation Scheme                         14
5.       Experiment No. 1                                    15
6.       Experiment No. 2                                    19
7.       Experiment No. 3                                    22
8.       Experiment No. 4                                    28
9.       Experiment No. 5                                    32
10.      Experiment No. 6                                    37
11.      Experiment No. 7                                    41
12.      Mini Project                                        47
List of Experiments

Sr. No.  Experiment Name
1.  Study of NIST model of cloud computing and implement different types of virtualizations.
2.  Implement Infrastructure as a Service using Openstack.
3.  Explore Storage as a Service for remote file access using web interface and demonstrate
    security of web server and data directory using ownCloud.
4.  Deploy web applications on commercial cloud like Google App Engine / Windows Azure.
5.  Create and access VM instances and demonstrate various components such as EC2, S3,
    SimpleDB, DynamoDB using AWS.
6.  Demonstrate on-demand application delivery and Virtual Desktop Infrastructure using Ulteo.
7.  Case Study on Edge Computing (Content Beyond Syllabus).
8.  Mini Project.
Course Objectives, Course Outcomes & Experiment Plan

Course Objectives:
1. Key concepts of virtualization.
2. Various deployment models such as private, public, hybrid and community.
3. Various service models such as IaaS and PaaS.
4. Security and privacy issues in cloud.
Course Outcomes:
CO1: Develop understanding of NIST model of Cloud Computing and adapt different types of
     virtualization to increase resource utilization.
CO2: Demonstrate Infrastructure as a Service (IaaS) and identity management mechanism in
     various cloud platforms.
CO3: Explore Storage as a Service and analyze security issues on cloud.
CO4: Develop real-world web applications and deploy on commercial cloud to demonstrate
     Platform as a Service.
CO5: Explore on-demand application delivery using Software as a Service model.
CO6: Build a private cloud and implement different service models using open source cloud
     technologies.
Experiment Plan

Module  Week  Experiment Name                                             Course    Weightage
No.     No.                                                               Outcome
1       W1    Study of NIST model of cloud computing and implement        CO1       10
              different types of virtualizations.
2       W2    Implement Infrastructure as a Service using Openstack.      CO6       10
3       W3    Implement Infrastructure as a Service using Openstack.      CO6
4       W4    Explore Storage as a Service using ownCloud for remote      CO3       10
              file access using web interfaces.
5       W5    Deploy web applications on commercial cloud like            CO4       07
              Google App Engine / Windows Azure.
6       W6    Deploy web applications on commercial cloud like            CO4
              Google App Engine / Windows Azure.
7       W7    To create and access VM instances and demonstrate           CO2       10
              various components such as EC2, S3, SimpleDB,
              DynamoDB using AWS.
8       W8    Demonstrate on-demand application delivery and Virtual      CO5       10
              desktop infrastructure using Ulteo.
9       W9    Case Study on Edge Computing (Content Beyond Syllabus)      CO4       03
10      W10   Mini Project.
11      W11   Mini Project.
12      W12   Mini Project.
13      W13   Mini Project.
Mapping Course Outcomes (CO) to Program Outcomes (PO)

Weightage: Practical 100%.

Each course outcome contributes to the program outcomes PO1-PO12 on a scale of 1 (low) to 3 (high):

CO1: Develop understanding of NIST model of Cloud Computing and adapt different types of
     virtualization to increase resource utilization.
CO2: Demonstrate Infrastructure as a Service (IaaS) and identity management mechanism in
     various cloud platforms.
CO3: Explore Storage as a Service and analyze security issues on cloud.
CO4: Develop real-world web applications and deploy on commercial cloud to demonstrate
     Platform as a Service.
CO5: Explore on-demand application delivery using Software as a Service model.
CO6: Build a private cloud and implement different service models using open source cloud
     technologies.
Mapping of Course Outcomes with Program Specific Outcomes

Course Outcomes                                                        PSO1  PSO2  PSO3
CO1  Develop understanding of NIST model of Cloud Computing and         3     2     2
     adapt different types of virtualization to increase resource
     utilization.
CO2  Demonstrate Infrastructure as a Service (IaaS) and identity        3     2     2
     management mechanism in various cloud platforms.
CO3  Explore Storage as a Service and analyze security issues on        2     2     2
     cloud.
CO4  Develop real-world web applications and deploy on commercial       3     3     3
     cloud to demonstrate Platform as a Service.
CO5  Explore on-demand application delivery using Software as a         2     3     3
     Service model.
CO6  Build a private cloud and implement different service models       3     3     3
     using open source cloud technologies.
Study and Evaluation Scheme

Course Code: CSL803        Course Name: Cloud Computing Lab

               Teaching Scheme                Credits Assigned
        Theory   Practical   Tutorial   Theory   Practical   Tutorial   Total
          --        04          --        --        02          --        02

Examination Scheme
Term Work   Practical & Oral   Total
   50             25             75
Term Work:
1. The distribution of marks for term work shall be as follows:
 Laboratory work (experiments) ........................... (15) Marks
 Mini project ............................................ (15) Marks
 Mini Project Presentation & Report ...................... (10) Marks
 Assignments ............................................. (05) Marks
 Attendance .............................................. (05) Marks
 TOTAL ................................................... (50) Marks
Practical & Oral:
Practical and Oral examination will be based on laboratory work, the mini project and the syllabus above.
Cloud Computing Lab
Experiment No. : 1
NIST Model & Virtualization
Experiment No. 1
1. Aim: Study of NIST model of cloud computing and implement hosted virtualization and bare metal
virtualization.
2. Objectives: From this experiment, the student will be able to
 Understand the various characteristics of cloud computing.
 Understand cloud service delivery models and deployment models.
 Implement different types of virtualization.
3. Outcomes: The learner will be able to
 Develop understanding of NIST model of Cloud Computing and adapt different types of
virtualization to increase resource utilization.
4. Hardware / Software Required: Ubuntu, Oracle VirtualBox / VMware, Xen
5. Theory:
Computing as a service has seen phenomenal growth in recent years. The primary
motivation for this growth has been the promise of reduced capital and operating expenses,
and the ease of dynamically scaling and deploying new services without maintaining a
dedicated compute infrastructure. Hence, cloud computing has begun to rapidly transform
the way organizations view their IT resources. From a scenario of a single system consisting
of a single operating system and a single application, organizations have been moving into
cloud computing, where resources are available in abundance and the user has a wide range
to choose from. Cloud computing is a model for enabling convenient, on-demand network
access to a shared pool of configurable computing resources that can be rapidly provisioned
and released with minimal management effort or service provider interaction. Here, the end
users need not know the details of a specific technology when hosting their application,
as the service is completely managed by the Cloud Service Provider (CSP). Users can
consume services at a rate set by their particular needs, and this on-demand service can
be provided at any time. The CSP takes care of all the necessary complex operations on
behalf of the user: it provides the complete system, which allocates the required resources
for execution of user applications and manages the entire system flow.
The best part of cloud computing is that it provides more flexibility than its predecessors,
and it has shown many benefits to the enterprise IT world. Cost optimization is the
frontrunner among them, since the principle of cloud is "pay per use". The other benefits
are increased mobility, ease of use, apt utilization of resources, portability of applications,
etc. This means users are able to access information from anywhere at any time easily,
without leaving the underlying hardware resources idle or unused. Due to these benefits,
today's computing technology has witnessed a vast migration of organizations from their
traditional IT infrastructure to the cloud.
Some of the noteworthy benefits are
 Cost Savings
 Remote Working
 Efficiency
 Flexibility
 Future Proofing
 Morale Boosting
 Resilience without Redundancy
Cloud Computing – NIST Definition:
"A model for enabling convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction."
Figure 1: NIST Model
Cloud Characteristics
 On Demand Self-services
 Broad Network Access
 Resource Pooling
 Rapid Elasticity
 Measured Service
 Dynamic Computing Infrastructure
 Minimally or Self-managed Platform
 Consumption-based Billing
 Multi Tenancy
 Managed Metering
 Cloud Deployment Models
 Public Cloud
Public clouds are owned and operated by third parties; they deliver superior economies of
scale to customers, as the infrastructure costs are spread among a mix of users, giving
each individual client an attractive low-cost, "pay-as-you-go" model. All customers share
the same infrastructure pool, with limited configuration, security protections, and
availability variances. Public clouds are managed and supported by the cloud provider. One
advantage of a public cloud is that it may be larger than an enterprise's own cloud,
providing the ability to scale seamlessly on demand.
 Private Cloud
Private clouds are built exclusively for a single enterprise. They aim to address concerns
about data security and offer greater control, which is typically lacking in a public cloud.
There are two variations of a private cloud:
 On-premise Private Cloud: On-premise private clouds, also known as internal
clouds, are hosted within one's own data center. This model provides a more
standardized process and protection, but is limited in size and scalability. IT
departments also need to incur the capital and operational costs for the physical
resources. It is best suited for applications that require complete control and
configurability of the infrastructure and security.
 Externally Hosted Private Cloud: This type of private cloud is hosted externally
with a cloud provider, where the provider facilitates an exclusive cloud environment
with a full guarantee of privacy. It is best suited for enterprises that prefer not to
use a public cloud due to the sharing of physical resources.
 Hybrid Cloud
Hybrid clouds combine both public and private cloud models. With a hybrid cloud,
service providers can utilize third-party cloud providers in a full or partial manner, thus
increasing the flexibility of computing. The hybrid cloud environment is capable of
providing on-demand, externally provisioned scale. The ability to augment a private cloud
with the resources of a public cloud can be used to manage unexpected surges in
workload.
 Cloud Service Delivery Models
Cloud providers offer services that can be grouped into three categories.
1. Software as a Service (SaaS): In this model, a complete application is offered to the
customer as a service on demand. A single instance of the service runs on the cloud and
multiple end users are serviced. On the customers' side, there is no need for upfront
investment in servers or software licenses, while for the provider the costs are lowered,
since only a single application needs to be hosted and maintained. Today SaaS is offered by
companies such as Google, Salesforce, Microsoft, Zoho, etc.
2. Platform as a Service (PaaS): Here, a layer of software or development environment is
encapsulated and offered as a service, upon which higher levels of service can be built.
The customer has the freedom to build his own applications, which run on the provider's
infrastructure. To meet the manageability and scalability requirements of the applications,
PaaS providers offer a predefined combination of OS and application servers, such as the
LAMP platform (Linux, Apache, MySQL and PHP), restricted J2EE, Ruby, etc. Google
App Engine, Force.com, etc. are some popular PaaS examples.
3. Infrastructure as a Service (IaaS): IaaS provides basic storage and computing capabilities
as standardized services over the network. Servers, storage systems, networking equipment,
data centre space, etc. are pooled and made available to handle workloads. The customer
would typically deploy his own software on the infrastructure. Some common examples are
Amazon, GoGrid, 3Tera, etc.
Figure 2: Service Delivery Models
 Virtualization
Virtualization is the process of creating a virtual form of resources such as hardware,
software, etc. In computing, it is the creation of virtual hardware resources, operating
systems or network resources. Virtualization is essentially a software layer between the OS
and the host machine, and it is of great importance in cloud computing. By means of
virtualization, CSPs are able to create virtual machines in the cloud. Applications are
deployed in virtual machines so that they can be accessed from anywhere in the world in
virtualized form. A VM image is created, and when a user sends a request for a particular
resource, a VM instance is created and access is provided. Users are allowed to access only
the VMs that contain their applications or resources. Virtual machines are endpoint software
layers and need to be protected in an efficient manner. This software layer divides the
resources of the host machine among all the guest OSes. A guest OS has no idea that it is
being managed. The advantage of virtualization is that the CPU is shared among different
OSes: the virtualization layer multiplexes the hardware resources to many OSes. Every OS
thinks it is controlling the hardware, but the switching behind the scenes is done by the
virtualization layer so that the system can host many OSes. Virtual machines are created and
managed with the help of a hypervisor, which is placed on top of the hardware and in turn
runs multiple OSes and applications in a virtualized environment. To the user it appears as a
single OS image per machine, even when multiple OSes are running on the machine; in
reality, a guest operating system runs on the hypervisor, utilizing the underlying hardware
resources of the host.
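This illusion can be checked from inside the OS itself. A minimal sketch for a Linux system (assumes /proc is mounted; the "hypervisor" CPU flag is set by a hypervisor for its guests):

```shell
# If this OS is itself a virtual machine, the CPU flags listed in
# /proc/cpuinfo include "hypervisor"; on bare metal the flag is absent.
if grep -qw hypervisor /proc/cpuinfo; then
    echo "guest: running on top of a hypervisor"
else
    echo "no hypervisor flag: most likely bare metal"
fi
```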
 Virtualization Architecture
A virtual machine (VM) is an isolated runtime environment (guest OS and applications).
Multiple virtual machines can run on a single physical system.
Hypervisor:
The concept of virtualization has become dominant over the past several years, but a number of
issues can arise regarding virtualization and application access in virtualized form. Many software
vendors complained that their application was not supported in a virtual state, or would not be
supported if the end user decided to virtualize it. To accommodate the needs of the industry and
operating environment, and to create a more efficient infrastructure, virtualization has matured into
a powerful platform, and the process revolves around one very important piece of software, called
the hypervisor. Hypervisor software is also known as a Virtual Machine Monitor (VMM) or
virtualization manager. The hypervisor can manage multiple instances of the same operating
system on a single computer system. It manages system resources such as the processor, memory
and storage so that they are allocated according to each operating system's needs. The hypervisor
allows multiple operating systems to run on a single CPU, thereby increasing CPU utilization. It
takes care of the definition and management of virtual resources, providing a solution for system
consolidation. This software provides a convenient and efficient way to share resources amongst
the virtual machines running on top of the physical hardware. There are mainly two types of
hypervisors:
 Bare Metal (Type 1) – runs directly on the host hardware (e.g., Xen)
 Hosted (Type 2) – runs on top of a host operating system (e.g., Oracle VirtualBox)
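Both kinds of hypervisor benefit from hardware-assisted virtualization, which requires the CPU to expose Intel VT-x or AMD-V. A quick check on a Linux machine (a sketch; the flag names are those reported by the kernel):

```shell
# Intel CPUs advertise VT-x as the "vmx" flag; AMD CPUs advertise AMD-V
# as "svm". Hypervisors such as Xen and VirtualBox use these extensions
# for efficient virtualization of unmodified guests.
if grep -Eqw 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization extensions: available"
else
    echo "hardware virtualization extensions: not available"
fi
```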
Virtualization can take many forms depending on the type of application use and hardware
utilization. The main types are listed below.
 Hardware Virtualization
Hardware virtualization, also known as hardware-assisted virtualization or server virtualization,
runs on the concept that an individual independent segment of hardware, or a physical server, may
be made up of multiple smaller hardware segments or servers, essentially consolidating multiple
physical servers into virtual servers that run on a single primary physical server. Each small server
can host a virtual machine, but the entire cluster of servers is treated as a single device by any
process requesting the hardware. The hardware resource allotment is done by the hypervisor. The
main advantages are increased processing power, as a result of maximized hardware utilization,
and improved application uptime.
Subtypes:
 Full Virtualization – Guest software does not require any modifications, since the
underlying hardware is fully simulated.
 Emulation Virtualization – The virtual machine simulates the hardware and becomes
independent of it. The guest operating system does not require any modifications.
 Paravirtualization – The hardware is not simulated; the guest software runs in its own
isolated domain.
 Software Virtualization
Software virtualization involves the creation and operation of multiple virtual environments on
the host machine. It creates a complete computer system, with virtual hardware, that lets a guest
operating system run. For example, it lets you run Android on a host machine that natively runs
Microsoft Windows, utilizing the same hardware as the host machine does.
Subtypes:
 Operating System Virtualization – hosting multiple OSes on the native OS
 Application Virtualization – hosting individual applications in a virtual environment
separate from the native OS
 Service Virtualization – hosting specific processes and services related to a particular
application
 Memory Virtualization
Physical memory across different servers is aggregated into a single virtualized memory pool,
providing the benefit of an enlarged contiguous working memory. You may already be familiar
with a related idea: some operating systems, such as Microsoft Windows, allow a portion of your
storage disk to serve as an extension of your RAM.
Subtypes:
 Application-level control – Applications access the memory pool directly
 Operating-system-level control – Access to the memory pool is provided through the
operating system
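The disk-as-RAM idea mentioned above is exactly what a Linux swap file is. A small sketch, assuming a Linux host with the standard util-linux tools (the file name and 64 MiB size are arbitrary choices for illustration):

```shell
# Create a 64 MiB file that will act as an extension of physical memory.
dd if=/dev/zero of=swapfile.img bs=1M count=64
chmod 600 swapfile.img              # swap areas must not be world-readable
# Write the swap signature (guarded in case mkswap is unavailable here).
if command -v mkswap >/dev/null 2>&1; then
    mkswap swapfile.img
fi
# Activating it requires root and adds the file to the kernel's memory pool:
#   sudo swapon swapfile.img
#   swapon --show    # the file now appears as part of the swap space
```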
 Storage Virtualization
Multiple physical storage devices are grouped together and appear as a single storage device.
This provides various advantages, such as homogenization of storage across devices of multiple
capacities and speeds, reduced downtime, load balancing, and better optimization of performance
and speed. Partitioning your hard drive into multiple partitions is an example of this kind of
virtualization.
Subtypes:
 Block Virtualization – Multiple storage devices are consolidated into one
 File Virtualization – The storage system grants access to files that are stored over multiple
hosts
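Block virtualization in miniature is what every hypervisor's virtual disk does: a plain host file stands in for a guest's block device (VirtualBox .vdi, VMware .vmdk, etc.). A minimal sketch of the trick, assuming GNU coreutils on Linux:

```shell
# Create a 1 GiB *sparse* file: a guest would see a 1 GiB disk, while the
# host allocates real blocks only as the guest actually writes to them.
dd if=/dev/zero of=disk.img bs=1 count=0 seek=1G
ls -lh disk.img    # apparent size: 1.0G
du -h disk.img     # real space used: (almost) nothing yet
```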
 Data Virtualization
It lets you easily manipulate data, as the data is presented as an abstract layer completely
independent of the underlying data structure and database systems. It decreases data input and
formatting errors.
 Network Virtualization
In network virtualization, multiple sub-networks can be created on the same physical network,
and they may or may not be authorized to communicate with each other. This enables restriction
of file movement across networks and enhances security, and allows better monitoring and
identification of data usage, which lets network administrators scale up the network
appropriately. It also increases reliability, as a disruption in one network doesn't affect other
networks, and diagnosis is easier.
Subtypes:
 Internal network – Enables a single system to function like a network
 External network – Consolidation of multiple networks into a single one, or segregation of
a single network into multiple ones
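The "internal network" subtype can be demonstrated with Linux network namespaces: two namespaces joined by a virtual Ethernet (veth) pair behave like two hosts on a private network. A sketch that needs root; the names ns1/ns2, veth0/veth1 and the 10.0.0.0/24 range are arbitrary example choices:

```shell
#!/bin/sh
# Creating namespaces needs CAP_NET_ADMIN; bail out politely otherwise.
if ! ip netns add ns1 2>/dev/null; then
    echo "insufficient privileges (or iproute2 missing); skipping demo"
    exit 0
fi
ip netns add ns2
ip link add veth0 type veth peer name veth1    # a virtual "cable"
ip link set veth0 netns ns1
ip link set veth1 netns ns2
ip -n ns1 addr add 10.0.0.1/24 dev veth0
ip -n ns2 addr add 10.0.0.2/24 dev veth1
ip -n ns1 link set veth0 up
ip -n ns2 link set veth1 up
# The two isolated "hosts" can now talk to each other (and only to each
# other); ignore the failure if ping is not installed.
ip netns exec ns1 ping -c 1 -W 1 10.0.0.2 || true
# Tear the virtual network down again:
ip netns del ns1
ip netns del ns2
```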
 Desktop Virtualization
This is perhaps the most common form of virtualization for the regular IT employee. The user's
desktop is stored on a remote server, allowing the user to access it from any device or location,
so employees can work conveniently from the comfort of home. Since the data transfer takes
place over secure protocols, the risk of data theft is minimized.
 Benefits of Virtualization
• Sharing of resources helps reduce cost
• Isolation: virtual machines are isolated from each other as if they were physically separated
• Encapsulation: virtual machines encapsulate a complete computing environment
• Hardware independence: virtual machines run independently of the underlying hardware
• Portability: virtual machines can be migrated between different hosts
6. Algorithm
I. Hosted Virtualization on Oracle Virtual Box Hypervisor
1. Download Oracle Virtual box from https://www.virtualbox.org/wiki/Downloads
2. Install it on Windows. Once the installation is done, open it.
3. Create a Virtual Machine by clicking on New.
4. Specify the RAM size, HDD size and network configuration, then finish the wizard.
5. To select the installation media, click on Start and browse for the ISO file.
6. Complete the installation and use it.
7. To connect the OS to the network, change the network mode to Bridged Adapter.
II. Installing Xen
1. Install a 64-bit hypervisor
#sudo apt-get install xen-hypervisor-amd64
2. Modify GRUB to default to booting Xen.
#sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/'
/etc/default/grub
#sudo update-grub
3. Set the default toolstack to xm (aka xend).
#sudo sed -i 's/TOOLSTACK=.*\+/TOOLSTACK="xm"/' /etc/default/xen
4. Reboot
#sudo reboot
5. Verify that the installation has succeeded.
#sudo xm list
6. Network Configuration
http://fosshelp.blogspot.com/2013/03/how-to-configure-network-on-ubuntu.html
7. Install Bridge utils
#sudo apt-get install bridge-utils
8. Stop Network Manager
#sudo /etc/init.d/network-manager stop
9. Edit Interface
#vim /etc/network/interfaces
auto lo
iface lo inet loopback
auto xenbr0
iface xenbr0 inet dhcp
bridge_ports eth1
auto eth1
iface eth1 inet dhcp
10. Restart networking to enable xenbr0 bridge.
#sudo /etc/init.d/networking restart
III. Create a PV Guest VM
http://fosshelp.blogspot.com/2013/03/how-to-create-pv-guest-vm-on-ubuntu.html
Manually creating a PV Guest VM
In this section we will focus on paravirtualized (PV) guests. PV guests are guests that are made Xen-aware and can therefore be optimized for Xen. As a simple example we'll create a PV guest in an LVM logical volume (LV) by doing a network installation of Ubuntu (other distros such as Debian, Fedora, and CentOS can be installed in a similar way).
1. Install LVM
#sudo apt-get update
#sudo apt-get install lvm2
2. Create a Volume Group and a Logical Volume.
#sudo mkdir /mnt/vmdisk
#sudo dd if=/dev/zero of=/mnt/vmdisk/mydisk1 bs=100M count=10
#sudo losetup /dev/loop1 /mnt/vmdisk/mydisk1
#sudo pvcreate /dev/loop1
#sudo vgcreate -s 512M myvolume-group1 /dev/loop1
#sudo lvcreate -L 512M -n mylogical_volume1 myvolume-group1
#ls /dev
3. Get the netboot images
#sudo mkdir -p /var/lib/xen/images/ubuntu-netboot
#cd /var/lib/xen/images/ubuntu-netboot
#sudo wget http://mirror.anl.gov/pub/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/xen/initrd.gz
#sudo wget http://mirror.anl.gov/pub/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/xen/vmlinuz
(An alternate mirror for these files is http://archive.ubuntu.com/ubuntu/dists/precise-updates/main/installer-amd64/current/images/netboot/xen/)
4. Set up the initial guest configuration
#vim /etc/xen/ubuntu.cfg
name = "ubuntu"
memory = 256
disk = ['phy:/dev/myvolume-group1/mylogical_volume1,xvda,w']
vif = [' ']
kernel = "/var/lib/xen/images/ubuntu-netboot/vmlinuz"
ramdisk = "/var/lib/xen/images/ubuntu-netboot/initrd.gz"
extra = "debian-installer/exit/always_halt=true -- console=hvc0"
5. Start the VM and connect to the console (-c).
#sudo xm create -c /etc/xen/ubuntu.cfg
6. List VMs
#sudo xm list
7. Conclusion:
Cloud computing provides different computing resources on demand as a shared pool over the internet. Users pay only for their usage, which makes it cost effective. Virtualization means running multiple operating systems on a single machine while sharing all the hardware resources; it helps provide a pool of IT resources that can be shared.
8. Viva Questions:
1. What are the benefits of cloud computing?
2. Compare cloud deployment models.
3. What are the different service delivery models?
4. What are the drawbacks of cloud computing?
5. Explain different types of virtualization.
9. References:
• https://www.redswitches.com/blog/different-types-virtualization-cloud-computing-explained/
• http://www.wideskills.com/cloud-computing/introduction-to-cloud-computing
• https://www.salesforcetutorial.com/introduction-to-cloud-computing/
Cloud Computing Lab
Experiment No. : 2
Infrastructure as a Service using Openstack
Experiment No. 2
1. Aim: Implement Infrastructure as a Service using Openstack.
2. Objective:
1. To explore the Infrastructure as a Service model of cloud computing.
2. To understand how to install Openstack and create virtual machines.
3. Outcome: Students will be able to:
• Build a private cloud and implement different service models using open source cloud technologies.
4. Software/Hardware Used: Ubuntu, Openstack
5. Theory:
OpenStack is a free and open-source cloud-computing software platform that provides services for managing a cloud environment on the fly. It consists of a group of interrelated projects that control pools of processing, storage, and networking resources. It provides users with methods and support to deploy virtual machines in a remote environment. State in OpenStack is maintained in a centrally managed relational database (MySQL or MariaDB). OpenStack provides all the services needed for an IaaS. As software, OpenStack is built from a set of microservices which can be combined into different setups based on actual need. The services provide REST APIs for their users, which can be cloud operators or other services. To make usage of the APIs easier, Software Development Kits (SDKs) are available, which are also developed as projects inside the OpenStack community.
OpenStack Components
OpenStack embraces a modular architecture to provide a set of core services, with scalability and elasticity as core design tenets.
Figure: Components of OpenStack
OpenStack identifies nine key components:
1. Nova: A cloud computing fabric controller and the main part of an IaaS system. It is designed to manage and automate pools of compute resources.
2. Keystone: Provides identity services for OpenStack: a central list of users and permissions mapped against OpenStack services. It provides multiple means of access.
3. Glance: Provides image services to OpenStack. Here "images" refers to virtual copies of hard disks, used as templates for deploying new VMs.
4. Neutron: Provides the networking capability for OpenStack.
5. Horizon: The dashboard of OpenStack and its only native graphical interface.
6. Swift: A storage system for objects and files. Users refer to files by unique identifiers; OpenStack decides where to store and back them up.
7. Cinder: A block storage component, analogous to traditional access to a disk drive.
8. Ceilometer: Provides telemetry services (metering and reporting), which allow OpenStack to provide billing services to users.
9. Heat: The orchestration component of OpenStack. Users can store the requirements of a cloud application in a file that defines what resources are necessary for the application.
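As an illustration of these REST APIs, the sketch below constructs the JSON body that a Keystone v3 password-authentication request (a POST to /v3/auth/tokens) carries; the user name, project name, and password are placeholder values for illustration.

```python
import json

# Body of a Keystone v3 password-authentication request (POST /v3/auth/tokens).
# A client sends this JSON and receives a token to use with the other services.
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",                      # placeholder user
                    "domain": {"id": "default"},
                    "password": "secret",                # placeholder password
                }
            },
        },
        # Scoping the token to a project authorizes project-level operations.
        "scope": {"project": {"name": "demo", "domain": {"id": "default"}}},
    }
}
print(json.dumps(auth_body, indent=2))
```

Every other service (Nova, Glance, Neutron, ...) then accepts the issued token in the X-Auth-Token header of its own REST calls.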
6. Procedure
The steps for installing Openstack using Devstack in a single server (All in one Single machine
setup) are given as follows:
Step 1: Update the Ubuntu repository and install the git package
The current version of Ubuntu OpenStack is Newton, so that's what we are going to install. To begin the installation, we first need to use the git command to clone DevStack.
$sudo apt-get update
$sudo apt-get install git
Step 2: Download the latest git repository for OpenStack
$ git clone https://git.openstack.org/openstack-dev/devstack
Step 3: Open the devstack directory and start the installation by executing the stack.sh shell script
$cd devstack
$./stack.sh
At the initial stage, the installer will ask for passwords for the database, RabbitMQ, service authentication, Horizon and Keystone.
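These prompts can be avoided by supplying the passwords up front in a local.conf file placed in the devstack directory before running stack.sh (a minimal sketch; the password value is a placeholder you should change):

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```

With this file present, stack.sh runs unattended instead of prompting for each password.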
The installer may take up to 30 minutes to complete the installation, depending on the internet bandwidth. Once the installation is done you will see the following screen, which displays the IP address of the dashboard (Horizon), through which you can access OpenStack VMs and resources.
As you can see, two users have been created for you: admin and demo. Your password is the password you set earlier. These are the usernames you will use to login to the OpenStack Horizon Dashboard. Open up a browser and enter the Horizon Dashboard address in your address bar (e.g. http://192.168.0.116/dashboard); you should see a login page.
To start with, log in with the admin user's credentials. From the admin panel, you will need to use the demo user, or create a new user, to create and deploy instances. Take note of the Horizon web address listed in your terminal.
Creating and running Instances
To launch an instance from the OpenStack dashboard, we first need to finish the following steps:
• Create a Project and add a member to the Project
• Create an Image and a Flavor
• Create a Network for the Project
• Create a Router for the Project
• Create a Key Pair
A) Create a Project and add a member to the Project
Login to the dashboard using admin credentials, go to the Identity tab –> Projects and click on Create Project. We can also set the quota for the project from the Quota tab. To create users, go to the Identity tab –> Users –> click on the 'Create User' button, then specify the user name, email, password, primary project and role, and click on Create User to add the user to the OpenStack workspace.
B) Create an Image and a Flavor
To create a flavor, login to the dashboard using admin credentials, go to the Admin tab –> Flavors –> click on Create Flavor. Specify the flavor name (fedora.small), VCPUs, root disk, ephemeral disk and swap disk.
To create an image, go to the Admin tab –> Images –> click on Create Image. Specify the image name, description and image source (in my case I am using a Fedora image file in QCOW2 format which I had already downloaded from the Fedora website).
C) Create a Network for the Project
To create the network and router for the project, sign out of the admin user and login to the dashboard as the local user. For convenience I have set up my network as follows: Internal Network = 10.10.10.0/24, External (Floating IP) Network = 192.168.1.0/24, Gateway of External Network = 192.168.1.1. Now go to the Network tab –> click on Networks –> then click on Create Network and specify the network name as "internal". Click on Next, then specify the subnet name (sub-internal) and the network address (10.10.10.0/24). Click on Next. VMs will get an internal IP from the DHCP server because we enabled the DHCP option for the internal network.
Now create the external network. Click on "Create Network" again and specify the network name as "external". Click on Next. Specify the subnet name as "sub-external" and the network address as "192.168.1.0/24". Click on Next, untick the "Enable DHCP" option and specify the IP address pool for the external network, then click on Create.
D) Create a Router for the Project
Now it is time to create a router. Go to the Network tab –> Routers –> click on '+ Create Router'. Next, mark the external network as "External"; this task can be completed only by the admin user, so logout from the local user and login as admin. Go to the Admin tab –> Networks –> click on Edit Network for "external" and click on Save Changes. Now logout from the admin user and login as the local user. Go to the Network tab –> Routers –> for router1 click on "Set Gateway"; this will add an interface on the router and assign it the first IP of the external subnet (192.168.1.0/24).
Add the internal interface to the router as well: click on "router1", select "Interfaces" and then click on "Add Interface". The network part is now complete, and we can view the network topology from the "Network Topology" tab. Next, create a key pair that will be used for accessing the VM and define the security firewall rules.
E) Create a Key Pair
Go to the 'Access & Security' tab –> click on Key Pairs –> then click on 'Create Key Pair'. This will create a key pair with the name "myssh-keys.pem". Add a new security group with the name 'fedora-rules' from the Access & Security tab. Once the security group 'fedora-rules' is created, click on Manage Rules and allow port 22 (SSH) and ICMP ping from the Internet (0.0.0.0/0).
Click on Add; similarly, add a rule for ICMP.
F) Launch an Instance
Now it is finally time to launch an instance. Go to the Compute tab –> click on Instances –> then click on 'Launch Instance'. Specify the instance name, the flavor that we created in the steps above, choose 'Boot from image' as the instance boot source, and select the image name 'fedora-image'. Click on 'Access & Security' and select the security group 'fedora-rules' and the key pair 'myssh-keys'. Then select Networking, add the 'internal' network and click on Launch. Once the VM is launched, associate a floating IP so that we can access the VM.
Click on 'Associate Floating IP' to get a public IP address, click on Allocate IP, and then click on Associate. Now try to access the VM with the floating IP (192.168.1.20) using the keys. As we can see, we are able to access the VM using the keys; our task of launching a VM from the dashboard is now complete.
Steps to Install and Configure OpenStack on CentOS 7
Step 1: Download a ready-made CentOS virtual machine from https://www.osboxes.org/: select VMs for VirtualBox and download CentOS 7.
Step 2: Extract the downloaded 7zip file, then create a VM inside Oracle VirtualBox, select the virtual hard drive (.vhd) file and provide its path through VirtualBox to load the VM.
Step 3: Start CentOS and execute the following commands
a) Update the CentOS packages
$ sudo yum -y update
b) Install the RDO repositories
$ sudo yum -y install https://www.rdoproject.org/repos/rdo-release.rpm
Check the RDO repositories in your local path: $ ll /etc/yum.repos.d/
c) Install the OpenStack installer called Packstack
$ sudo yum -y install openstack-packstack
Step 4: Configure an answer file
By default the Packstack all-in-one install enables several optional components like Nagios, Glance demo components and demo projects. Those can be unselected by configuring an answer file, so create a new answer file called myanswerfile.txt at the specified path.
Now open the file: $ sudo nano /root/myanswerfile.txt
Set your own Keystone password, say redhat.
Step 5: Install OpenStack with the configured answer file
The installation will take approximately 20-25 minutes depending on your internet bandwidth, CPU and memory. Once the installation is done you will see the following screen. The dashboard, called Horizon, can be accessed using the IP address (in my case 10.0.2.15); the credentials for admin are stored in /root/keystonerc_admin. As we have already set the password in the rc file to redhat, we can login to the portal with username admin and password redhat.
7. Conclusion:
OpenStack is one of the best cloud computing environments in the market. Its ease of linear scalability and open-source nature have attracted many customers, and it proves to be an affordable solution in the longer run. There are two interesting initiatives intended to make using OpenStack easier: StackOps eases deployment and operation of OpenStack by packaging it into distros, and DevStack provides a set of well-documented shell scripts to build a complete OpenStack environment.
8. Viva Questions:
1. What are the different components of OpenStack?
2. How is Infrastructure as a Service implemented using OpenStack?
9. References:
• https://www.slideshare.net/openstackindia/openstack-introduction-14761434
• http://people.redhat.com/mlessard/mtl/presentations/oct2013/OpenstackOverview.pdf
• http://mse-cloud.s3-website-eu-west-1.amazonaws.com/docs/OpenStack-presentation.pdf
• https://www.cisco.com/c/dam/en_us/solutions/industries/docs/gov/openstack-datm.pdf
Cloud Computing Lab
Experiment No. : 3
Storage as a Service using ownCloud
Experiment No. 3
1. Aim: Explore Storage as a Service using ownCloud for remote file access through web interfaces.
2. Objective:
1. To explore Storage-as-a-Service (SaaS) using ownCloud.
2. To understand how users are created and managed, and how security is handled, in ownCloud.
3. Outcome: Students will be able to:
• Explore Storage as a Service and analyze security issues on the cloud.
4. Software/Hardware Used: MySQL, PHP, Apache2 and ownCloud
5. Theory: ownCloud is a suite of client–server software for creating and using file hosting services. It is functionally very similar to the widely used Dropbox, with the primary difference being that the Server Edition of ownCloud is free and open-source, thereby allowing anyone to install and operate it without charge on a private server. It also supports extensions that allow it to work like Google Drive, with online document editing, calendar and contact synchronization, and more. Its openness avoids enforced quotas on storage space or the number of connected clients; instead, hard limits (such as storage space or number of users) are defined only by the physical capabilities of the server.
Features of OwnCloud:
1. Sync and Share Your Data, with Ease
ownCloud is the most straightforward way to sync and share data. You don't need to worry about where or how to access your files; with ownCloud all your data is wherever you are, accessible on all devices, any time.
2. A Safe Home for All Your Data
ownCloud is hosted exclusively on your own private server/cloud so you can rest assured
that your data is under your control. ownCloud is all about your privacy and works to protect your
files. It ensures that access is controlled only by the one who should have control: You.
3. Your Data is Where You Are
When traveling, access ownCloud through your Android or iOS devices. Automatically upload pictures after taking them. Sync files at home or work with the desktop client, keeping one or more local folders synchronized between devices. And wherever you are, the web interface lets you view, share and edit your files alone or with others. Want to integrate third-party storage providers? With its open and flexible architecture, ownCloud offers integrations with Dropbox, Microsoft OneDrive and many more. Wherever you are, your data is with you.
4. Community Driven
With over 50 million users and a very active developer community of over 1,100 contributors, ownCloud is one of the biggest open source projects worldwide. Start reaping the benefits by joining the ownCloud community: get help, contribute to the development team or sign up for exclusive beta versions.
Installation and Configuration of ownCloud
ownCloud can be installed on any flavor of Linux such as Ubuntu, CentOS or Fedora, but Ubuntu is preferable. The steps for installation are as follows.
6. Algorithm
Installing ownCloud
1. Update Ubuntu
sudo apt-get update
2. Upgrade Ubuntu
sudo apt-get upgrade
3. Install the Apache web server
sudo apt-get install apache2 -y
4. Install the MySQL server
sudo apt-get install mysql-server
5. Install the MySQL client
sudo apt-get install mysql-client
6. Check the status of Apache2
sudo service apache2 status
7. Install PHP 7.1 for ownCloud
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.1
8. Install ownCloud
1. Go to owncloud.org
2. Go to the Downloads tab
3. Click on Download Tar in the production release channel under Tarball
4. Extract the downloaded file
5. Move the folder to /var/www/html
sudo mv /home/student/Downloads/owncloud/ /var/www/html/
6. Give permissions to the owncloud folder
sudo chmod 777 /var/www/html/owncloud/
7. Open a browser and type the following URL
localhost/owncloud
9. Create a MySQL database with your database name
sudo mysql -u root -p
mysql> create database databasename;
10. Grant privileges to a user with a password on the database
mysql> GRANT ALL PRIVILEGES ON databasename.* TO 'username'@'localhost' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
mysql> exit
11. Go to localhost/owncloud
1. Create an admin account with a username and password.
2. Enter the MySQL database username created in step 10.
3. Enter the MySQL database password created in step 10.
4. Enter the MySQL database name created in step 9.
5. Click on Finish Setup.
This gives access to the ownCloud portal. The portal has two types of users: the admin user and local users. The admin user can create users/groups, assign storage quotas, assign privileges, and manage user and group activities. A local user is a restricted user who can perform local activities such as uploading or sharing files, deleting local shares, or creating shares.
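Besides the web interface, ownCloud exposes the same files over WebDAV, which is what the desktop and mobile sync clients use. The sketch below prepares (but does not send) a WebDAV upload request; the server address, username, and password are placeholder values, and the DAV path follows ownCloud's documented files endpoint.

```python
import base64
import urllib.request

# Remote file access via ownCloud's WebDAV endpoint (request built, not sent).
# Server, user, and password below are placeholders for illustration.
server = "http://localhost/owncloud"
user = "student"
url = f"{server}/remote.php/dav/files/{user}/notes.txt"

# HTTP Basic authentication header from the user's credentials.
token = base64.b64encode(f"{user}:password".encode()).decode()
req = urllib.request.Request(
    url,
    data=b"hello from the lab",
    method="PUT",  # PUT uploads the file; GET downloads it; DELETE removes it
    headers={"Authorization": f"Basic {token}"},
)
print(req.get_method(), req.full_url)
```

Sending the prepared request with urllib.request.urlopen(req) against a running ownCloud server would create notes.txt in the user's root folder.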
7. Screenshots:
1. ownCloud Portal
2. Group Creation
3. Adding Users to a Group
4. Upload a File or Folder
5. File Shared with a Group of Users
6. File Shared with a Specific User
8. Conclusion: Storage-as-a-Service allows a user to upload, download and share a file with a specific user or a group of users. ownCloud provides this by assigning a login and password to each individual user and enabling group creation. It also allows adding members to a group and sharing files/folders with specific users or groups. Thus we explored Storage-as-a-Service using ownCloud.
9. Viva Questions:
1. Explain Storage-as-a-Service (SaaS).
2. Discuss the features of ownCloud.
10. References:
1. https://owncloud.org/
2. https://www.vultr.com/docs/install-owncloud-9-1-on-buntu-16-04
Cloud Computing Lab
Experiment No. : 4
Deploy Web Application using Google App Engine
Experiment No. 4
1. Aim: Deploy web applications on a commercial cloud like Google App Engine.
2. Objective:
1. To explore different services and the architecture of Google App Engine.
2. To understand how to deploy any web application using Google App Engine.
3. Outcome: Students will be able to:
• Develop real-world web applications and deploy them on a commercial cloud to demonstrate Platform as a Service.
4. Software/Hardware Used: Google App Engine, Google Cloud Shell.
5. Theory:
App Engine is a Cloud-based platform that is quite comprehensive and combines infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). It supports the delivery, testing and development of software on demand in a Cloud computing environment that supports millions of users and is highly scalable. Google extends its platform and infrastructure to the Cloud through App Engine, presenting the platform to those who want to develop SaaS solutions at competitive costs. Google App Engine is a development as well as a hosting platform that powers everything from big business web apps to mobile games, using the same infrastructure that powers Google's worldwide-scale web applications. It is a platform-as-a-service (PaaS) Cloud computing platform that is fully managed and uses built-in services to run your apps. We can start development almost instantly after downloading the software development kit (SDK).
As soon as we have signed up for a Cloud account, we can build our app:
• With the template/HTML package in Go
• With Jinja2 and webapp2 in Python
• With Cloud SQL in PHP
• With Maven in Java
The apps are 'sandboxed' and run on several servers by App Engine. To deal with additional demand, App Engine allocates additional resources to the application.
Features of App Engine
1. Runtimes and Languages
We can use Go, Java, PHP or Python to write an app engine application. We can develop and
test an app locally using the SDK containing tools for deploying apps. Every language has its
own SDK and runtime.
2. Generally Available Features
These are covered by the deprecation policy and the service-level agreement of App Engine. Any changes made to such a feature are backward-compatible and its implementation is usually stable. These include data storage, retrieval, and search; communications; process management; computation; and app configuration and management. Data storage, retrieval, and search include features such as the HRD migration tool, Google Cloud SQL, logs, datastore, dedicated Memcache, blobstore, Memcache and search. Communications include features such as XMPP, channel, URL fetch, mail, and Google Cloud Endpoints. Process management includes features like scheduled tasks and the task queue. Computation includes images.
3. Features in Preview
These features are sure to ultimately become generally available in some future release of App Engine. However, their implementation might change in backward-incompatible ways while they are in preview. These include Sockets, MapReduce and the Google Cloud Storage Client Library.
4. Experimental Features
These might or might not become generally available in future App Engine releases, and their implementation might change in backward-incompatible ways. They are generally available publicly; however, those marked 'trusted tester' are available only to a select user group, which has to sign up to use the features. The experimental features include Appstats Analytics, Restore/Backup/Datastore Admin, Task Queue Tagging, MapReduce, Task Queue REST API, OAuth, Prospective Search, PageSpeed and OpenID.
5. Third-Party Services
Apps can do things not built into the core App Engine product, as Google offers documentation and helper libraries to enhance the platform's capabilities and partners with other organizations to achieve this.
Advantages of Google App Engine
There are many advantages to Google App Engine that help take your app ideas to the next level. These include:
• Infrastructure for Security: Around the world, the Internet infrastructure that Google has is probably the most secure. There has rarely been any unauthorized access to date, as application data and code are stored on highly secure servers.
• Faster Time to Market: Quickly releasing a product or service to market is the most important thing for every business. Streamlining the development and maintenance of an app is critical when it comes to deploying the product fast.
• Quick to Start: With no product or hardware to purchase and maintain, you can prototype and deploy the app to your users without taking much time.
• Easy to Use: Google App Engine (GAE) incorporates the tools that you need to develop, test, launch, and update your applications.
• Rich Set of APIs & Services: Google App Engine has several built-in APIs and services that allow developers to build robust and feature-rich apps.
• Cost Savings: You don't have to hire engineers to manage your servers or do it yourself; you can invest the money saved into other parts of your business.
• Platform Independence: You can move all your data to another environment without much difficulty, as there are not many dependencies on the App Engine platform.
6. Procedure
I. Creating a Google Cloud Platform project
1. To use Google's tools for your own site or app, you need to create a new project on Google
Cloud Platform. This requires having a Google account.
2. Go to the App Engine dashboard on the Google Cloud Platform Console and press the
Create button.
3. If you've not created a project before, you'll need to select whether you want to receive
email updates or not, agree to the Terms of Service, and then you should be able to
continue.
4. Enter a name for the project, edit your project ID and note it down.
5. Click the Create button to create your project.
6. Download a sample app and unzip it.
7. Open Google Cloud Shell.
8. Drag and drop the sample-app folder into the left pane of the code editor.
9. Run the following in the command line to select your project and deploy:
a. gcloud config set project gaesamplesite
b. cd sample-app
c. gcloud app deploy
• Enter a number to choose the region where you want your application located.
• Enter Y to confirm.
10. Now navigate your browser to your-project-id.appspot.com to see your website online.
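The behaviour of the deployed sample is controlled by an app.yaml file in its folder. A minimal sketch for a purely static site is shown below (the runtime and the www/ paths are illustrative and must match the sample app's actual layout):

```yaml
runtime: python39
handlers:
# Serve the site root from the bundled index page.
- url: /
  static_files: www/index.html
  upload: www/index.html
# Serve every other path directly from the www/ folder.
- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
```

gcloud app deploy reads this file to decide which runtime to provision and how to route incoming requests.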
II. Web Application using the Google Cloud SDK
Step 1: Install Eclipse.
Step 2: Install Google Cloud Tools for Eclipse from the Eclipse Marketplace.
Step 3: Create a new "Google App Engine Standard Java Project" in Google Cloud Tools.
Step 4: Install the Google Cloud SDK.
Step 5: Login to Google.
Step 6: Install the App Engine component of the Google Cloud SDK using
gcloud components install app-engine-java
Step 7: Add libraries to the project.
Step 8: Debug the project.
Step 9: Define the server and runtime environments.
Step 10: Start the app in a web browser.
7. Conclusion:
Google App Engine enables us to build web applications leveraging Google's infrastructure. App Engine applications are easy to develop and maintain, and they scale as traffic and data storage needs grow. We can just upload our application and it is ready to serve users; the rest is taken care of by Google Cloud.
8. Viva Questions:
1. How does GAE help application development?
2. What are the features of GAE?
9. References:
• https://www.netsolutions.com/insights/what-is-google-app-engine-its-advantages-and-how-it-can-benefit-your-business/
• https://developer.mozilla.org/en-US/docs/Learn/Common_questions/How_do_you_host_your_website_on_Google_App_Engine
Cloud Computing Lab
Experiment No. : 5
To create and access VM instances and demonstrate various components such as EC2, S3, DynamoDB using AWS
Experiment No. 5
1. Aim: To create and access VM instances and demonstrate various components such as EC2, S3 and DynamoDB using AWS.
2. Objective:
1. To explore Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) using AWS.
2. To create and access a virtual machine with compute using AWS.
3. To explore various components like S3 and DynamoDB using AWS.
3. Outcome: Students will be able to:
• Demonstrate Infrastructure as a Service (IaaS) and the identity management mechanism in various cloud platforms.
4. Software/Hardware Used: Web Browser, AWS Console
5. Theory: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster.
AWS Management Console:
The AWS Management Console brings the unmatched breadth and depth of AWS right to your computer or mobile phone with a secure, easy-to-access, web-based portal. Discover new services, manage your entire account, build new applications, and learn how to do even more with AWS. The console makes it easy to find new AWS services, configure services, view service usage, and much more. From updating user groups to building applications to troubleshooting issues, the Console lets you take action quickly. The Console offers over 150 services you can configure, launch, and test to get hands-on experience with AWS. With the Console's automated wizards and workflows, it's even easier to quickly deploy and test common workloads.
Build and scale powerful applications
Use the Console and its API, CLI, CloudFormation templates, and other toolkits to build scalable architectures in any AWS data center around the world. You can customize your experience by pinning favorites and organizing projects with resource groups and the tag editor.
Manage and monitor your account
Oversee all administrative aspects of your AWS account from your desktop or mobile device. You
can view your usage and monthly spending by service, set up AWS IAM users and groups,
configure permissions, and manage security credentials.
Accessing the AWS Management Console
Requirements: All you need is an existing AWS account and a supported browser.
1. Sign up for an AWS account: Creating an AWS account is free and gives you immediate access to the AWS Free Tier.
2. Enter the Management Console: Log in to your account with your username and password and access the management console on any device.
3. Start building with AWS: Test out services and build your production solution quickly and easily once you're ready.
Amazon EC2:
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable
compute capacity in the cloud. It is designed to make web-scale cloud computing easier for
developers. Amazon EC2's simple web service interface allows you to obtain and configure
capacity with minimal friction. It provides you with complete control of your computing resources
and lets you run on Amazon's proven computing environment.
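The same capacity can also be obtained programmatically. The sketch below uses boto3 (the AWS SDK for Python); the AMI ID and key pair name are placeholders for your own values, and configured AWS credentials and a region are assumed.

```python
def launch_free_tier_instance(ec2_client, ami_id, key_name):
    """Launch a single free-tier-eligible t2.micro instance, return its ID.

    `ec2_client` is expected to be a boto3 EC2 client, e.g.
    boto3.client("ec2", region_name="us-east-1"). `ami_id` and
    `key_name` are placeholders for a real AMI ID and key pair name.
    """
    response = ec2_client.run_instances(
        ImageId=ami_id,           # e.g. a Windows Server free-tier AMI
        InstanceType="t2.micro",  # free-tier eligible general purpose type
        KeyName=key_name,         # key pair used to decrypt the password
        MinCount=1,
        MaxCount=1,
    )
    return response["Instances"][0]["InstanceId"]
```

This mirrors the console walkthrough in the procedure section: choosing an AMI, a hardware type, and a key pair, then launching.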
Amazon S3:
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes
and industries can use it to store and protect any amount of data for a range of use cases, such as
websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices,
and big data analytics. Amazon S3 provides easy-to-use management features so you can organize
your data and configure finely-tuned access controls to meet your specific business, organizational,
and compliance requirements. Amazon S3 is designed for 99.999999999% of durability, and stores
data for millions of applications for companies all around the world.
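The durability figure can be made concrete with a little arithmetic. The sketch below is illustrative only: it treats the eleven-nines figure as a per-object annual loss probability, which is how the figure is commonly explained.

```python
# 99.999999999% durability means the design target for losing a given
# object in a year is 1 - 0.99999999999 = 1e-11.
annual_loss_probability = 1 - 0.99999999999

# If you store ten million objects, the expected number of objects
# lost per year is tiny:
objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_probability  # about 1e-4

# Equivalently, you could expect to lose a single object roughly once
# every 10,000 years.
years_per_single_loss = 1 / expected_losses_per_year
```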
Amazon DynamoDB:
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond
performance at any scale. It's a fully managed, multiregion, multimaster, durable database with
built-in security, backup and restore, and in-memory caching for internet-scale applications.
DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than
20 million requests per second.
Many of the world's fastest growing businesses such as Lyft, Airbnb, and Redfin as well as
enterprises such as Samsung, Toyota, and Capital One depend on the scale and performance of
DynamoDB to support their mission-critical workloads.
Hundreds of thousands of AWS customers have chosen DynamoDB as their key-value and
document database for mobile, web, gaming, ad tech, IoT, and other applications that need
low-latency data access at any scale. Create a new table for your application and let DynamoDB
handle the rest.
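The key-value access pattern behind these numbers can be illustrated with a toy in-memory model (plain Python, no AWS involved). Every item is addressed directly by its partition key and sort key, so a read is a single lookup rather than a scan or a join; the table and item names below are hypothetical.

```python
# A hypothetical in-memory model of DynamoDB's key-value access pattern.
table = {}

def put_item(partition_key, sort_key, item):
    """Store an item under its composite primary key."""
    table[(partition_key, sort_key)] = item

def get_item(partition_key, sort_key):
    """A read is one dictionary lookup, not a scan or a join."""
    return table.get((partition_key, sort_key))

# Items belonging to the same partition key share a partition:
put_item("user#42", "profile", {"name": "Asha", "plan": "free"})
put_item("user#42", "order#1001", {"total": 250})
```

In real DynamoDB the partition key additionally determines which physical partition stores the item, which is what lets throughput scale horizontally.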
6. Procedure:
Step 1-: Log in to the AWS portal and select the EC2 service from the admin console
Step 2-: The EC2 resources page will appear, showing a summary of instances. Now click on
Launch Instance to select the VM instance type.
Step 3-: Select the operating system type as an AMI. In this example we have selected a
Windows Server instance which is eligible for the free tier; click on Next.
Step 4-: Now select the hardware type for the virtual machine. In this example we have selected
the free-tier-eligible general purpose type; click on Next.
Step 5-: Now specify the instance details like the number of instances and networking options
like VPC, subnet, or DHCP public IP, and click on Next.
Step 6-: Specify the storage space for the VM and click on Next.
Step 7-: Click on Add Tag to specify the VM name and click on Next.
Step 8-: Configure the security group to provide access to the VM using different protocols. In
this example we have selected the default RDP protocol.
Step 9-: Now review the instance and click on the Launch button.
Step 10-: To secure the VM instance, it is encrypted using a public key; create a key pair whose
private key decrypts it. Specify a key pair name and download the key pair.
Step 11-: Finally, click on Launch Instances to launch the VM.
Step 12-: Now from the summary page click on View Instances to see the instance state. After
some time you will see the running instance of your VM.
Step 13-: Now click on Connect to get the password for the VM so you can access it over the
RDP protocol.
Step 14-: Select the downloaded key pair file to decrypt the password.
Step 15-: Now connect to the instance using an RDP client, with the IP address/DNS name, the
username and the password decrypted in the last step.
Step 16-: Once you click on Connect, you will see the running Windows virtual machine as
shown below.
Step 17-: You can shut down the instance by selecting Instance State followed by Stop.
Step 18-: You can delete the instance permanently by selecting Instance State followed by
Terminate.
2) To create a simple WordPress app using the Lightsail service in AWS (SaaS)
Step 1-: Open the admin console of AWS and select the Lightsail service
Step 2-: Select the Create instance option
Step 3-: Select the Linux hosting instance
Step 4-: Select WordPress hosting
Step 5-: Specify a name for the instance
Step 6-: Now click on Create to launch the instance
Step 7-: Click on Connect to the instance to get the password for WordPress
Step 8-: Now open the bitnami_application_password file to get the admin password. Copy it
for use in the admin console.
Step 9-: Now reserve a static IP by selecting the Networking option and creating a static IP.
Once the static IP is allocated, open that IP in a browser to see the WordPress website.
Open the WordPress admin console and use the password obtained in Step 8 to open the
WordPress site builder.
Now you can develop a complete WordPress website and use it.
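The static-IP step above can also be scripted. This is a hedged sketch using boto3's Lightsail client; the static IP name and instance name are placeholders, and configured AWS credentials are assumed.

```python
def attach_new_static_ip(lightsail_client, static_ip_name, instance_name):
    """Reserve a static IP and attach it to a Lightsail instance.

    `lightsail_client` is expected to be a boto3 Lightsail client,
    e.g. boto3.client("lightsail"); both names are placeholders for
    the names you chose in the console.
    """
    # Reserve the address first, then bind it to the running instance.
    lightsail_client.allocate_static_ip(staticIpName=static_ip_name)
    lightsail_client.attach_static_ip(
        staticIpName=static_ip_name, instanceName=instance_name
    )
```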
8. Conclusion: AWS provides various services like compute, storage, database, etc. All these
services are available by creating an account with AWS and are accessible through the web
browser. AWS provides Software as a Service through Lightsail by giving access to WordPress to
design web applications. It also provides storage as a service through AWS S3 and a database
service through DynamoDB. Thus we explored all these services using AWS.
9. Viva Questions:
1. Explain various services provided by AWS.
2. Discuss features and benefits of DynamoDB.
10. References:
1. https://aws.amazon.com/
Cloud Computing Lab
Experiment No. : 6
Demonstrate on-demand application delivery and Virtual Desktop Infrastructure using Ulteo.
Experiment No. 6
1. Aim: Demonstrate on-demand application delivery and Virtual Desktop Infrastructure using
Ulteo.
2. Objective:
1. To explore different services and the architecture of Ulteo Open Virtual Desktop.
2. To understand how to access any web application hosted on a Linux/Windows server.
3. Outcome: Students will be able to:
- Explore on-demand application delivery using the Software as a Service model.
4. Software/Hardware Used: VMware, Ubuntu, Windows 10.
5. Theory:
Ulteo Open Virtual Desktop (OVD) was an open-source Application Delivery and Virtual
Desktop infrastructure project that could deliver applications or a desktop hosted on a Linux
or Windows server to end users. It was an open source alternative to Citrix and VMware
solutions.
Desktops and software applications are hosted on a dedicated server infrastructure: user
applications, settings and data are stored on centralized servers, each of which uses
virtualization to run applications for different users. Applications that logically appear on one
user desktop may physically run on different application servers, which allows programs for
different operating systems to execute side by side. For example, a user can run Linux and
Windows programs alongside each other. The system provides for collaboration between users,
who can share documents or files. In addition to creating remote workplaces, OVD is also
suitable for mixed environments in which a Linux user is able to run Windows-based programs
and vice versa. Applications launched in the OVD environment are given access to local
printers, storage devices, USB devices and the sound card installed on the user's machine, and
share a single clipboard. Full-screen mode is supported, as well as opening applications in
separate windows on the local desktop. On disconnection or failure of the local system, the
state of the OVD environment is restored unchanged.
The client software of OVD can operate in three modes:
- Desktop: provides a full virtual desktop;
- Portal: provides access to individual programs;
- Seamless integration: integrates remote applications into the local desktop, where they are
used like regular local programs.
The basic package includes standard software applications such as the OpenOffice office suite,
the Firefox web browser, the Thunderbird e-mail client and the Pidgin instant messaging
program. Additional applications can be installed by the system administrator.
The server part consists of two components:
- The Application Server (for running programs)
- The Session Manager (for managing user sessions)
Modules
Ulteo OVD uses several modules with different roles. The Session Manager, at least one
Application Server and the client component are required, while the others are optional. Each
module comes with a binary installation package for Linux and, in most cases, also for
Windows.
- Session Manager
This server is the central piece of an Ulteo OVD architecture. It manages the session
launching and hosts the administration console. It is the first module to install. Servers
controlled by the Session Manager are known as slave servers.
- High Availability
This add-on in Ulteo OVD 3 allowed setting up two physical Session Managers and databases
in a cold-standby cluster. Data was replicated between the two databases using DRBD, and
failover was handled by the Heartbeat cluster manager. High Availability was a Gold module.
It is no longer included in the source code for OVD 4, nor available from the Premium
repository.
- Application Server
These are slave servers that run the published applications or desktops. They can be running
Linux or Windows, depending on the type of applications or desktop to be delivered. Mixing
Linux and Windows servers in an Ulteo OVD farm is supported. Linux Application servers
can be set up in two modes: either as regular Linux installations with desktop environment,
applications and the Application Server package, or using the Ulteo Subsystem. The Ulteo
Subsystem can be installed on a Linux server with no desktop environment and no
applications. It consists of a chroot jail with a modified Xfce desktop environment and some
standard applications, including LibreOffice, Adobe Reader, Mozilla Firefox and Thunderbird.
Additional applications can be installed within the chroot jail.
- Web Gateway
This slave server module, introduced in OVD 4, allows publishing of Web applications
alongside Linux and Windows applications.
- Hypervisor
The OVD 4 source code includes code for another new type of slave server called Hypervisor,
allowing Ulteo to act as a front end for a VDI. No installation package is provided as of April
2014.
- Client
In order to start an Ulteo OVD session, an Ulteo OVD Client is required. Clients generally
support two modes, application mode (or portal mode) and desktop mode. In application
mode, the user can launch individual applications. In desktop mode the user is presented a full
desktop, which can be either Linux or Windows and may contain applications from the
respective other platform.
- Web Client
All editions of OVD include a Java client. In desktop mode, the desktop is displayed inside the
browser. The portal mode includes a web-based file browser based on AjaXplorer, from which
users can download files, upload files or launch files in a published application. The Web
Client can be installed on the Session Manager or, beginning with OVD 4, on a dedicated
server.
- HTML5 Client
OVD 4 introduced an HTML5 client, which is based on Guacamole and available in both
editions of OVD. It does not require Java but can run in any browser which supports HTML5.
It does not support some features of the other clients, such as client drive mapping and sound.
Apart from this, the look-and-feel is similar to the Java client, including the file browser in
portal mode. The HTML5 client can be installed on the Session Manager or a separate web
server. The server translates all RDP traffic into HTML5 and vice versa, effectively acting as a
gateway. This makes it a suitable solution for deployment across firewalls, as the only traffic
channel between the client and the HTML5 gateway is an HTTP or HTTPS connection.
- Native Client
Native clients are available as Premium modules for Linux, Windows, Android and iOS.
The desktop OS clients support desktop mode or application mode. In application mode, users
can either launch remote applications from the client's main window, or configure the client to
place icons into their start menu, from where they can be launched like local applications. The
tablet clients support only desktop mode. They are available from the respective app stores.
- File Server
Ulteo OVD includes an optional file server to host user profiles or shared folders, ensuring
user access to the same files when using applications from different servers. As of version 4.0,
only a Linux version is available. The File Server may be installed on an Application Server.
Without a file server, shares can still be mounted using the mechanisms of the operating
system, but these shares may not be available on all application servers or application server
platforms, and cannot be accessed from the Web Client's AjaXplorer component.
- Gateway
This slave server module facilitates deployment of Ulteo OVD applications over the Internet
by tunneling connections to application servers through an SSL (443) connection. This
eliminates the requirement to expose individual application servers with a public IP address. It
also eases access for clients which are behind firewalls, as many firewall environments allow
outgoing SSL traffic on port 443 with no further restrictions. The Gateway is a Premium
module.
6. Procedure
The steps for installation and configuration of Ulteo are as follows:
Step 1: Installation
1) Install Ulteo through the DVD, or open the Ulteo OVF file in VMware Player by selecting the
Import VM button.
2) If you don't have an Ulteo OVD DVD-ROM yet, please download the corresponding ISO file
from www.ulteo.com and burn it to a fresh DVD.
3) Insert the Ulteo OVD DVD-ROM into your computer and restart it. If you selected the
DVD-ROM as the first boot device you'll see the boot loader screen.
4) Select Install Ulteo Option
5) The first step is used to select the system language. Choose your language from the list and
click on Forward.
6) In the second step, the system asks you to define your location. Either select a point on the map
or choose one from the Selected city form and click on Forward.
7) The third step is used to define the keyboard layout. Select yours and click on Forward.
8) Then, you have to select the partitioning method. We suggest the automatic method: Erase and
use the entire disk.
9) These questions are about the installed operating system itself: the user login and password used
to access the OS, along with the hostname of the machine.
10) Type a password and confirm it. A useful address is displayed to you for near-future use of
OVD.
11) Then read the installation summary carefully, click on Install and wait until the installation
completes.
12) Finally, click on Restart now to finish the installation process.
Step 2: On the management machine, open the following URLs:
13) https://Ulteo-Server-ipaddress/ovd for client access
14) https://Ulteo-Server-ipaddress/admin for admin access
Step 3: Log in to the admin portal, specifying the username and password as admin.
Under the Servers tab, register the server: click on Manage to add the IP address of the Ulteo server.
a) Go to the Users tab to add multiple users
b) Go to the Users tab, select User Group, create a new user group and add users to it
c) Go to the Applications tab to create an application group
Map the user group to the application group and use the services at the client side.
The administrator panel is limited to the administrator, who can manage applications, users and
groups. Once the admin is logged in to this portal, he can create users, user groups and application
groups, map users to user groups and application groups, and manage applications or install
software based on users' requirements, as shown below.
Ulteo Administrator panel
The Applications menu of the admin panel shows the available applications, which can be mapped
to users or user groups, as shown below.
Ulteo Application Menu
Step 4: At the client side, open https://Ulteo-Server-ipaddress/ovd for client access. Specify the
username and password and access the software added to the application group.
Once the user selects the Access Ulteo option, the login page of the Ulteo session manager is
shown. The user can get a login name and password by filling in the registration form on the main
page of the cloud portal, shown below.
Ulteo user Login Portal
Once the user is validated, he can access the services using portal mode or desktop mode. Both
modes give access to software applications installed on the Linux application server and the
Windows application server. In portal mode the user gets the applications in a vertical pane, as
shown in the figure.
Ulteo Portal mode
In desktop mode, the user gets a full-fledged Linux desktop running in the browser with the
selected applications, as shown below.
Ulteo Desktop mode
7. Conclusion:
Ulteo Open Virtual Desktop is an installable open-source solution for enterprises. It allows you
to give users remote access to desktops and applications. OVD is also suitable for mixed
environments in which a Linux user is able to run Windows-based programs and vice versa.
8. Viva Questions:
1. What are the different modules used by OVD?
2. What are the features of OVD?
9. References:
- https://en.bmstu.wiki/Ulteo_Open_Virtual_Desktop
- http://www.ulteo.com
- https://www.youtube.com/watch?v=tk3I0LggZSk
Cloud Computing Lab
Experiment No. : 7
Edge Computing
Experiment No. 7
1. Aim: Case Study on Edge Computing
2. Objective:
1. To understand the concept of edge computing along with its architecture.
3. Outcome: Students will be able to:
- Develop real-world web applications and deploy them on a commercial cloud to
demonstrate Platform as a Service.
4. Theory:
Edge Computing:
Edge computing is a distributed computing paradigm which brings computation and data storage
closer to the location where it is needed, to improve response times and save bandwidth.
Fig.1 : Edge Computing Infrastructure
The increase of IoT devices at the edge of the network is producing a massive amount of data to be
computed at data centers, pushing network bandwidth requirements to the limit. Despite the
improvements of network technology, data centers cannot guarantee acceptable transfer rates and
response times, which could be a critical requirement for many applications. Furthermore, devices
at the edge constantly consume data coming from the cloud, forcing companies to build content
delivery networks to decentralize data and service provisioning, leveraging physical proximity to
the end user. In a similar way, the aim of Edge Computing is to move the computation away from
data centers towards the edge of the network, exploiting smart objects, mobile phones or network
gateways to perform tasks and provide services on behalf of the cloud. By moving services to the
edge, it is possible to provide content caching, service delivery, storage and IoT management
resulting in better response times and transfer rates. At the same time, distributing the logic in
different network nodes introduces new issues and challenges.
Privacy and security
The distributed nature of this paradigm introduces a shift in security schemes used in cloud
computing. Not only should data be encrypted, but different encryption mechanisms should be
adopted, since data may transit between different distributed nodes connected through the internet
before eventually reaching the cloud. Edge nodes may also be resource constrained devices,
limiting the choice in terms of security methods. Moreover, a shift from centralized top-down
infrastructure to a decentralized trust model is required. On the other hand, by keeping data at the
edge it is possible to shift ownership of collected data from service providers to end-users.
Scalability
Scalability in a distributed network must face different issues. First, it must take into account the
heterogeneity of the devices, having different performance and energy constraints, the highly
dynamic condition and the reliability of the connections, compared to more robust infrastructure of
cloud data centers. Moreover, security requirements may introduce further latency in the
communication between nodes, which may slow down the scaling process.
Reliability
Management of failovers is crucial in order to maintain a service alive. If a single node goes down
and is unreachable, users should still be able to access a service without interruptions. Moreover,
edge computing systems must provide actions to recover from a failure and alert the user about
the incident. To this aim, each device must maintain the network topology of the entire distributed
system, so that detection of errors and recovery become easily applicable. Other factors that may
influence this aspect are the connection technology in use, which may provide different levels of
reliability, and the accuracy of the data produced at the edge that could be unreliable due to
particular environment conditions.
Applications
Edge application services reduce the volumes of data that must be moved, the consequent traffic,
and the distance that data must travel. That provides lower latency and reduces transmission costs.
Computation offloading for real-time applications, such as facial recognition algorithms, showed
considerable improvements in response times as demonstrated in early research. Further research
showed that using resource-rich machines called cloudlets near mobile users, offering services
typically found in the cloud, provided improvements in execution time when some of the tasks are
offloaded to the edge node. On the other hand, offloading every task may result in a slowdown due
to transfer times between device and nodes, so depending on the workload an optimal
configuration can be defined.
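The trade-off described above can be written down as a simple back-of-the-envelope rule (a sketch, not a real scheduler): offload a task only when shipping its input data plus executing remotely beats computing locally. All names and figures below are illustrative.

```python
def should_offload(local_time_s, data_mb, bandwidth_mbps, remote_time_s):
    """Return True if offloading beats local execution.

    Ignores real-world factors such as queuing, energy use and result
    transfer; it only compares the two end-to-end times.
    """
    transfer_time_s = (data_mb * 8) / bandwidth_mbps  # MB -> megabits
    return transfer_time_s + remote_time_s < local_time_s

# Example: a 5 MB frame at 100 Mbps takes 0.4 s to ship; if the edge
# node processes it in 0.5 s versus 2 s locally, offloading wins.
```

This captures why "offloading every task may result in a slowdown": once the transfer term dominates, local execution is the better configuration.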
Another use of the architecture is cloud gaming, where some aspects of a game could run in the
cloud, while the rendered video is transferred to lightweight clients such as mobile, VR glasses,
etc. Such type of streaming is also known as pixel streaming.
Difference between Edge Computing and Cloud Computing

Suitable companies:
- Edge Computing is regarded as ideal for operations with extreme latency concerns. Thus,
medium-scale companies that have budget limitations can use edge computing to save financial
resources.
- Cloud Computing is more suitable for projects and organizations which deal with massive data
storage.

Programming:
- In edge computing, several different platforms may be used for programming, all having
different runtimes.
- Actual programming is better suited to clouds, as they are generally made for one target
platform and use one programming language.

Security:
- Edge Computing requires a robust security plan, including advanced authentication methods
and proactively tackling attacks.
- Cloud Computing requires less of a robust security plan.
6. Conclusion:
Nowadays, more and more services are pushed from the cloud to the edge of the network because
processing data at the edge can ensure shorter response time and better reliability. Moreover,
bandwidth could also be saved if a larger portion of data could be handled at the edge rather than
uploaded to the cloud. The burgeoning of IoT and the universalized mobile devices changed the
role of edge in the computing paradigm from data consumer to data producer/consumer. It would
be more efficient to process or massage data at the edge of the network.
7. Viva Questions
1. What is edge computing?
2. How is edge computing different from cloud computing?
8. References:
- Wearable Cognitive Assistance using Cloudlets (http://elijah.cs.cmu.edu/)
- The Emergence of Edge Computing, Computer 50.1 (2017): 30-39.
- Edge Computing: Vision and Challenges, IEEE Internet of Things Journal, Vol. 3, No. 5,
October 2016.
- K. P. Saharan, A. Kumar, "Fog in Comparison to Cloud: A Survey", Int'l Journal of Computer
Applications (0975-8887), Volume 122, No. 3, July 2015.
Cloud Computing Lab
Experiment No. : 8
Mini Project
Experiment No. 8
1. Aim: To develop a mini project using the concepts studied
2. Objective:
1. To implement different services of cloud computing by using various cloud platforms.
2. To understand and apply different algorithms for load balancing and security measures in a
cloud environment.
3. Outcome: Students will be able to:
- Build a private cloud and implement different service models using open source cloud
technologies.
4. Software/Hardware Used: VMware, Ubuntu, Windows 10, GAE, AWS, OpenStack, etc.
5. Theory:
Using the concepts studied throughout the semester, students shall be able to:
1. Create their private cloud for the institute using the available resources.
2. Apply security concepts to secure a private cloud.
3. Implement efficient load balancing.
4. Compare various virtualization technologies with given resources.
5. Create cloud applications such as a messenger, a photo editing website, your own social
media, etc.
Following are some suggested topics:
1. Minimizing Execution Costs when using Globally Distributed Cloud Services
2. Bug Tracking System
3. University Campus Online Automation Using Cloud Computing
4. Personal Cloud using Raspberry Pi
5. Detecting Data Leaks via SQL Injection Prevention on an E-Commerce
6. Remote Monitoring and Controlling of Industry using IoT
7. Online Bookstore System on Cloud Infrastructure
8. Cloud Based Bus Pass System
9. Cloud Computing for Rural Banking
10. Android Offloading Computation Over Cloud
11. Intelligent Rule-Based Phishing Websites Classification Based on URL Features
12. Cloud Based Attendance System
13. Secure File Storage on Cloud Using Hybrid Cryptography
14. Cloud Based Online Blood Bank System
15. Secure Text Transfer Using Diffie-Hellman Key Exchange Based on Cloud
16. Customized AES using Pad and Chaff Technique and Diffie-Hellman Key Exchange
17. Cloud Based Improved File Handling and Duplication Removal Using MD5
18. Storage and Energy Efficient Cloud Computing Project