Dynamic Auditing protocol in cloud computing
T.Uma mageshwari1, D.Afsa2, M.S.Udhaya3
B.TECH, Information Technology, S.K.P Engineering College, Thiruvannamalai, India.
B.TECH, Information Technology, S.K.P Engineering College, Thiruvannamalai, India.
Assistant Professor/IT, Information Technology, Thiruvannamalai, India.
afsana.skp@gmail.com, umathiru21@gmail.com
ABSTRACT:
In cloud computing, data owners host their data on cloud servers, and users (data consumers) can access the data from those servers. Existing remote integrity checking methods serve only static archive data and therefore cannot be applied to dynamic auditing services; hence, an auditing protocol is developed. Here we propose a protocol that communicates directly with the CSP and is initialized by the auditor. The owner can sign up and upload files into the cloud, and the user can sign up, access the data, and download it through mail.
KEYWORDS: CSP (cloud service provider), auditing protocol, dynamic auditing protocol, batch auditing, AS (authentication server).
I. INTRODUCTION
Cloud storage is an important service of cloud computing [1],
which allows data owners (owners) to move data from their
local computing systems to the cloud. Many owners have started to store their data in the cloud [2]. However, this new paradigm of
data hosting service also introduces new security challenges
[3]. Owners may worry that their data could be lost in the cloud, because data loss can occur in any software or hardware, regardless of the reliability measures cloud service providers take [4]–[8]. Moreover, cloud service providers might be dishonest: they may discard data that has never or rarely been accessed to save storage space, while claiming that the data are still correctly stored in the cloud. Therefore, owners
need to be convinced that the data are correctly stored in the
cloud. Traditionally, owners can check the data integrity based
on two-party storage auditing protocols [9]–[11]. In a cloud storage system, however, it is inappropriate to let either the cloud service providers or the owners conduct the auditing, because neither can be guaranteed to provide an unbiased auditing result. In this situation, third-party auditing is a natural choice for storage auditing in cloud computing. A third-party auditor with expertise and capabilities can audit more efficiently and convince both cloud service providers and owners. Several important requirements for third-party auditing in cloud storage systems have been proposed in previous works [8], [9]. The auditing protocol should have the following properties: 1) Confidentiality. The auditing protocol should keep the owner's data confidential from the auditor.
2) Dynamic Auditing. The auditing protocol should support
the dynamic updates of the data in the cloud storage.
3) Batch Auditing. The auditing protocol should also be able to support batch auditing for multiple owners and multiple clouds. In [13], the authors proposed a dynamic auditing protocol that supports dynamic operations on the data stored on cloud servers, but it may leak the data content to the auditor because it requires the server to send linear combinations of data blocks to the auditor. In [14], the authors extended their dynamic auditing scheme to be privacy-preserving and to support batch auditing for multiple owners. However, due to the large number of data tags, their auditing protocol may incur a heavy storage overhead on the server. In [15], Zhu et al. proposed a cooperative provable data possession scheme that supports batch auditing for multiple clouds, and extended it to support dynamic auditing in [26]. However, their scheme cannot support batch auditing for multiple owners, because the parameters used by each owner to generate the data tags differ, so data tags from multiple owners cannot be combined for batch auditing. Another drawback is that their scheme requires an additional trusted organizer to send a commitment to the auditor during multi-cloud batch auditing, because the scheme applies a masking technique to ensure data privacy; such an additional organizer is not practical in cloud storage systems. Furthermore, both Wang's and Zhu's schemes impose a heavy computation cost on the auditor, which makes the auditor a performance bottleneck.
II. RELATED WORK
Cloud Computing is presently one of the hottest topics in
information technology (IT). Since all essential data are outsourced to a third party, there is always a concern about the cloud service provider's trustworthiness. For data privacy, it is essential for users to encrypt their sensitive data before storing them in the cloud. Yet traditional encryption has some shortcomings. When a secret-key owner wants to search for data stored in the cloud, he or she may need to download all the encrypted data from the cloud server and then decrypt and search them, which is very inefficient and inconvenient if the encrypted data are huge or the client is a mobile user. Otherwise, he must send his key to the cloud server, which then performs the decryption and search procedures; this causes the serious problem that the cloud server obtains the secret key. Many models have been proposed to ensure the integrity of data files. The "Provable Data Possession" (PDP) model [4] ensures the possession of data files on untrusted storage. It uses an RSA-based homomorphic linear authenticator for auditing outsourced data, but this leaks the data to external auditors and hence is not provably privacy-preserving. Juels et al. [5] describe a "Proof of Retrievability" (PoR) model in which spot-checking and error-correcting codes are used to ensure possession and retrievability, but this approach works only with encrypted data. Improved versions of PoR protocols have been proposed that guarantee private auditability, including one that makes use of BLS signatures, but these approaches are not privacy-preserving. Then comes the TPA-based approach to keeping online storage honest. This scheme works only for encrypted files, requires the auditor to keep state, and suffers from bounded usage, which potentially imposes an online burden on users when the keyed hashes are used up. Thus, to provide secure, privacy-preserving cloud storage, many methodologies, frameworks, and protocols have been proposed.
III. EXISTING SYSTEM
Cloud storage is an important service of cloud computing,
which allows data owners (owners) to move data from their
local computing systems to the cloud. More owners have started to store their data in the cloud. However, this new paradigm of data hosting service also introduces new security challenges. Owners may worry that their data could be lost in the cloud, because data loss can occur in any software or hardware, regardless of the reliability measures cloud service providers take. Moreover, cloud service providers might be dishonest: they may discard data that has never or rarely been accessed to save storage space, while claiming that the data are still correctly stored in the cloud. Therefore, the owners need to be convinced that the data are correctly stored in the cloud. Traditionally, owners can check data integrity with two-party storage auditing protocols. In a cloud storage system, however, it is inappropriate to let either the cloud service providers or the owners conduct such auditing, because neither can be guaranteed to provide an unbiased auditing result. In this situation, third-party auditing is a natural choice for storage auditing in cloud computing. A third-party auditor (auditor) with expertise and capabilities can audit more efficiently and convince both cloud service providers and owners.
IV. PROPOSED SYSTEM
An efficient, dynamic auditing protocol is developed to perform secure transfer of files from owner to user. Owners log in and upload the files they want stored with the CSP. The user also has a secret user name and password through which he or she can enter the cloud. The CSP (cloud service provider) sends a secret key to the user's mail id, and the user can view and download the file using the key generated by the CSP. The dynamic auditing protocol comprises the following phases, reflected in the system architecture: tag generation; file segmentation and distribution; and challenge generation and verification. The data owner fragments the entire content into a number of blocks, generates a tag for each block, uploads the data to the server, and forwards the hash code and a random challenge.
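The fragmentation and per-block tag-generation step described above can be sketched as follows. This is a minimal illustration under assumptions of ours: a 4 KB block size and an HMAC-SHA256 tag, since the paper specifies neither.

```python
import hashlib
import hmac

BLOCK_SIZE = 4096  # assumed block size; the paper does not fix one


def fragment(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Split the file content into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]


def generate_tags(blocks: list, key: bytes) -> list:
    """Generate one authentication tag per block; HMAC-SHA256 stands in
    for the unspecified tag algorithm, with the block index bound in."""
    return [hmac.new(key, str(i).encode() + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]


# Owner side: fragment the file, tag each block, compute the hash code.
data = b"example file content " * 300   # 6300 bytes -> 2 blocks
key = b"owner-secret-key"               # illustrative owner key
blocks = fragment(data)
tags = generate_tags(blocks, key)
hash_code = hashlib.sha256(data).hexdigest()
```

Binding the block index into each tag prevents a dishonest server from answering a challenge for block *i* with a different, still-intact block.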
V. MODULES
A. Client module:
In this module, the client sends the request to the server.
Based on the request the server sends the corresponding file to
the client.
B. System modules:
1. User:
Users, who have data to be stored in the cloud and rely on the cloud for data computation, consist of both individual data consumers and organizations.
2. Cloud service provider (CSP):
A CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates a live cloud computing system.
3. Third party auditor(TPA):
An optional TPA, who has expertise and capabilities, is trusted to assess and expose the risk of cloud storage services on behalf of the users upon request.
C. Cloud data storage module:
In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which run concurrently; the user interacts with the cloud servers via the CSP to access or retrieve his data.
1. Cloud authentication server:
The authentication server (AS) functions as any AS would, with a few additional behaviors added to the typical client-authentication protocol, such as forwarding client-authentication information to the masquerading router. The other optional function that should be supported by the AS is the updating of client files.
D. Unauthorized data modification and corruption module:
One of the important issues is to effectively detect any
unauthorized data modification and corruption, possibly due
to server compromise.
E. Adversary module:
1. Weak adversary:
The adversary is interested in corrupting the user’s data files
stored on individual servers. Once a server is compromised,
an adversary can pollute the original data files by modifying
or introducing its own fraudulent data to prevent the original
data from being retrieved by the user.
2. Strong adversary:
This is the worst-case scenario, in which we assume that the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they remain internally consistent. In fact, this is equivalent to the case where all servers are colluding to hide a data loss or corruption incident.
VI. ARCHITECTURE DIAGRAM
VII. AUDITOR IMPLEMENTATION
The auditor monitors the actions taken between the data owner and the cloud server. It receives information such as the data component, the tag-generation key, and a random challenge from the data owner. It then sends a request to the cloud server and obtains the meta-information of the data file. Before processing the request, the cloud server verifies the authentication of the data owner.
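The challenge–verification exchange between auditor and server can be sketched as follows. This is an illustrative simplification with names of our own choosing: real auditing protocols return homomorphic proofs rather than plain hashes, so the auditor never handles raw data.

```python
import hashlib
import secrets


def make_challenge(num_blocks: int, sample_size: int = 3) -> list:
    """Auditor side: pick a random subset of block indices to spot-check."""
    return sorted(secrets.SystemRandom().sample(range(num_blocks), sample_size))


def server_respond(blocks: list, challenge: list) -> list:
    """Server side: return proof material for the challenged blocks.
    (Plain hashes here; a real protocol returns homomorphic proofs.)"""
    return [hashlib.sha256(blocks[i]).digest() for i in challenge]


def auditor_verify(owner_hashes: list, challenge: list, response: list) -> bool:
    """Auditor side: compare the server's response against the per-block
    hashes the data owner forwarded during initialization."""
    return all(owner_hashes[i] == h for i, h in zip(challenge, response))
```

Spot-checking a random subset keeps the auditor's work small: a server that has discarded even a few blocks is caught with high probability over repeated challenges.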
VIII. SERVICE PROVIDER
The data owner hosts the data on the cloud servers. The data is fragmented and encrypted by the data owner, who can access the information whenever required from the cloud server. The auditor accesses the information for auditing purposes if he is authenticated, and submits the access report to the data owner whenever required.
IX. DATA FLOW DIAGRAM
The owner, user, TPA, and CSP sign up with their mail ids and log into the cloud. The owner logs in, browses for a file, and uploads it together with a key that is generated automatically. The CSP has the rights to view the file and verify the key, and only the CSP sends the secret key to the user. The user can download the file using the key sent to their mail. The TPA only views the details of the file transfer, and also of any content modification, if the owner requests it.
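The upload/key-issue/download flow just described can be modelled as a small sketch. The class and method names below are assumptions of ours, not part of the paper, and e-mail delivery of the key is abstracted away.

```python
import secrets
from typing import Optional


class CloudServiceProvider:
    """Toy model of the CSP's role in the data-flow diagram above."""

    def __init__(self) -> None:
        self._files = {}   # file id -> uploaded content
        self._keys = {}    # file id -> secret download key

    def upload(self, file_id: str, content: bytes) -> None:
        """Owner uploads a file to the CSP."""
        self._files[file_id] = content

    def issue_key(self, file_id: str) -> str:
        """Generate the secret key that the CSP mails to the user."""
        key = secrets.token_hex(16)
        self._keys[file_id] = key
        return key

    def download(self, file_id: str, key: str) -> Optional[bytes]:
        """Release the file only if the presented key matches."""
        if self._keys.get(file_id) == key:
            return self._files[file_id]
        return None
```

Keeping key issuance on the CSP side matches the paper's claim that "only the CSP sends the secret key to the user": the owner never has to distribute download keys itself.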
X. NOVEL DYNAMIC AUDITING PROTOCOL
The data owner initializes the data component through fragmentation and encryption, forwards the meta-information about the data file to the server, and also forwards the hash code (i.e., the authentication code) to the cloud server. The CSP authenticates the auditor and sends the meta-information to the auditor, and it forwards the monitoring details whenever the owner or user requests them.
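The owner-side initialization step can be sketched as follows. The meta-information field names are assumptions of ours, since the paper does not define a format, and encryption of the blocks is omitted for brevity.

```python
import hashlib


def initialize_component(data: bytes, block_size: int = 4096) -> dict:
    """Owner-side initialization: fragment the file and build the
    meta-information that is forwarded to the cloud server, together
    with the overall hash code (authentication code)."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return {
        "num_blocks": len(blocks),
        "block_size": block_size,
        "block_hashes": [hashlib.sha256(b).hexdigest() for b in blocks],
        # the hash code, i.e. the authentication code sent to the server
        "hash_code": hashlib.sha256(data).hexdigest(),
    }
```

Because the auditor later sees only this meta-information, not the blocks themselves, the owner's data content stays hidden from the auditor, as the protocol intends.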
XI. ACTIVITY DIAGRAM
The activity diagram shows the activities carried out when the cloud server communicates under the auditing protocol. The third-party auditor communicates with the owner to initialize the data component. The cloud service provider monitors the details of the data owner and also sends responses to the auditor. Unlike traditional approaches, we do not rely completely on third-party auditors; our protocol allows the auditor to monitor only the meta-information of the data component, which provides an abstract view of the data. The data owner receives regular monitoring details.
XII. CONCLUSION
In this paper, we introduced an efficient novel dynamic auditing protocol for secure data manipulation and auditing.
REFERENCES
[1] P. Mell and T. Grance, “The NIST definition of cloud computing,
”National Institute of Standards and Technology, Tech. Rep., 2009.
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, 2010.
[3] T. Velte, A. Velte, and R. Elsenpeter, Cloud Computing: A Practical Approach, 1st ed. New York, NY, USA: McGraw-Hill, Inc., 2010, ch. 7.
[4] L. N. Bairavasundaram, G. R. Goodson, S. Pasupathy, and J. Schindler,
“An analysis of latent sector errors in disk drives,” in SIGMETRICS, L.
Golubchik, M. H. Ammar, and M. Harchol-Balter, Eds. ACM, 2007, pp. 289–
300.
[5] B. Schroeder and G. A. Gibson, "Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?" in FAST. USENIX, 2007, pp. 1–16.
[6] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, “A
cooperative internet backup scheme,” in USENIX Annual Technical
Conference, General Track. USENIX, 2003, pp. 29–41.
[7] Y. Deswarte, J. Quisquater, and A. Saidane, “Remote integrity checking,”
in The Sixth Working Conference on Integrity and Internal Control in
Information Systems (IICIS). Springer Netherlands, November 2004.
[8] M. Naor and G. N. Rothblum, “The complexity of
online memory checking,” J. ACM, vol. 56, no. 1, 2009.
[9] M. Naor and G. N. Rothblum, “The complexity of
online memory checking,” J. ACM, vol. 56, no. 1, 2009.
[10] A. Juels and B. S. K. Jr., “Pors: proofs of retrievability for large files,” in
ACM Conference on Computer and Communications Security, P. Ning, S. D.
C. di Vimercati, and P. F. Syverson, Eds. ACM, 2007, pp. 584–597.
[11] T. J. E. Schwarz and E. L. Miller, “Store, forget, and check: Using
algebraic signatures to check remotely administered storage,” in ICDCS.
IEEE Computer Society, 2006, p. 12.
[12] D. L. G. Filho and P. S. L. M. Barreto, “Demonstrating data possession
and uncheatable data transfer,” IACR Cryptology ePrint Archive, vol. 2006,
p.150, 2006.
[13] F. Sebé, J. Domingo-Ferrer, A. Martínez-Ballesté, Y. Deswarte, and J.-J. Quisquater, "Efficient remote data possession checking in critical information infrastructures," IEEE Trans. Knowl. Data Eng., 2008.