A Novel Re-Authentication Scheme on Cloud Based Storage Services
T.G.V.V. Srinivas(1), P. Suresh Babu(2)
(1) Final M.Tech Student, (2) Associate Professor
(1,2) Dept of CSE, Kaushik College of Engineering, JNTUK University
Abstract: Cloud service providers offer storage services in which outsourced data are stored. When users keep their sensitive information with a cloud service, security and authentication become major concerns. In this paper we therefore propose a new framework consisting of three features: authentication, secure storage, and verification. In our process the data are divided into blocks and encrypted with the RC4 cipher, and password authentication is used for access control. Manipulation of the data can be performed by the data owner only after the data owner's authentication has been verified. The scheme provides mutual trust between the data owner and the cloud service provider, and limits the privileges of access to the data in the cloud service.
I. INTRODUCTION
Several trends are opening up the era of Cloud Computing, an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the "software as a service" (SaaS) computing architecture, are transforming data centers into pools of computing services on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it possible for clients to subscribe to high-quality services from data and software that reside solely on remote data centers. Envisioned as a promising service platform for the Internet, this new data storage paradigm in the "Cloud" brings about many challenging design issues which have a profound influence on the security and performance of the overall system. One of the biggest concerns with cloud data storage is data integrity verification at untrusted servers. [7]
Verifying the authenticity of data has emerged as a critical issue in storing data on untrusted servers. This issue arises in peer-to-peer storage systems, network file systems, long-term archives, web-service object stores, and database systems. Such systems prevent storage servers from misrepresenting or modifying data by providing authenticity checks when data is accessed. However, archival storage also requires guarantees about the authenticity of data on storage, namely that storage servers possess the data; it is insufficient to detect that data have been modified or deleted only when the data are accessed, because by then it may be too late to recover lost or damaged data. Archival storage servers retain tremendous amounts of data, little of which is accessed, and hold data for long periods of time during which there may be exposure to data loss from administration errors as the physical implementation of storage evolves, e.g., backup and restore, data migration to new systems, and changing memberships in peer-to-peer systems.[2,4]
Archival network storage presents unique performance demands. When file data are large and stored at remote sites, accessing an entire file is expensive in I/O costs to the storage server and in transmitting the file across a network. Reading an entire archive, even periodically, greatly limits the scalability of network stores, and the growth in storage capacity has far outstripped the growth in storage access times and bandwidth. Furthermore, the I/O incurred to establish data possession interferes with on-demand bandwidth to store and retrieve data. We argue that clients need to be able to verify that a server has retained file data without retrieving the data from the server and without having the server access the entire file. Previous solutions do not meet these requirements for proving data possession. Some schemes provide a weaker guarantee by enforcing storage complexity: the server has to store an amount of data at least as large as the client's data, but not necessarily the exact same data.[6] All previous techniques require the server to access the entire file, which is not feasible when dealing with large amounts of data.
For example, a storage service provider that occasionally experiences Byzantine failures may decide to hide data errors from its clients for its own benefit. What is more serious is
that, to save money and storage space, the service provider might neglect to keep, or might deliberately delete, rarely accessed data files belonging to an ordinary client. Given the large size of the outsourced electronic data and the client's constrained resource capability, the core of the problem can be generalized as follows: how can the client find an efficient way to perform periodic integrity verifications without a local copy of the data files? To solve this problem, many schemes have been proposed under different system and security models. In all these works, great effort is made to design solutions that meet various requirements: high scheme efficiency, stateless verification, unbounded use of queries, retrievability of data, and so on.
Based on the role of the verifier in the model, all the schemes presented before fall into two categories: private verifiability and public verifiability. Although schemes with private verifiability can achieve higher efficiency, public verifiability allows anyone, not just the client (data owner), to challenge the cloud server for correctness of data storage while keeping no private information. Clients are then able to delegate the evaluation of service performance to an independent third-party auditor (TPA) without devoting their own computation resources. In a cloud service the clients themselves may be unreliable or may be unable to afford the overhead of performing frequent integrity checks. For practical use, it therefore seems more rational to equip the verification protocol with public verifiability, which is expected to play a more important role in achieving economies of scale for Cloud Computing. Moreover, for efficiency considerations, the outsourced data themselves should not be required by the verifier for the verification purpose. [4]
Data outsourcing brings with it many advantages, but also associated risks. Although the client cannot physically access the data on the cloud server directly, without the client's knowledge the cloud provider can modify or delete data, particularly data the client has not used for a long time. Hence, there is a requirement to check the data periodically for correctness, known as data integrity. Here we provide a survey of the different techniques for data integrity. The traditional schemes for data integrity in the cloud are Provable Data Possession (PDP) and Proof of Retrievability (PoR). These two schemes are the most active areas of research in the cloud data integrity field.
Traditional access control techniques assume that the data owner and the storage servers are in the same trust domain. This assumption no longer holds when the data is outsourced to a remote CSP, which takes full charge of the outsourced data management and resides outside the trust domain of the data owner. A feasible solution to enable the owner to enforce access control over data stored on a remote untrusted CSP is to encrypt the data under a certain key which is shared only with the authorized users. Unauthorized users, including the CSP, are unable to access the data since they do not have the decryption key. This simple solution has been widely incorporated into existing schemes that aim at providing data storage security on untrusted remote servers. Another class of solutions utilizes attribute-based encryption to achieve fine-grained access control.
II. RELATED WORK
Data Storage Commitment Schemes:
A storage-enforcing commitment scheme (SEC) is a three-party protocol executed between a message source S, a prover P, and a verifier V. The message source communicates the message M to the prover and the commitment C to the verifier. The verifier V may check whether the prover is storing the message by invoking a probabilistic interactive algorithm, and this algorithm may be executed an unlimited number of times. Once the message is revealed, the verifier may check the commitment by running the algorithm Verify. The scheme has three properties, called binding, concealing, and storage-enforcing.[3]
Privacy-Preserving PDP Schemes:
The data owner first encrypts the file and sends the encrypted file along with the encryption key to the remote server. The data owner also sends the encrypted file, together with a key-commitment that fixes a value for the key without revealing the key, to the TPA. The main purpose of this scheme is to ensure that the remote server correctly possesses the client's data along with the encryption key, while preventing any information leakage to the TPA, which is responsible for the auditing task. Clients, especially those with constrained computing resources and capabilities, can thus resort to an external audit party to check the integrity of outsourced data, and the third-party auditing process should introduce no new vulnerabilities for the privacy of the client's data. In addition to the auditing task, the TPA has another primary task, which is extraction of digital contents.[2]
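Below is a minimal sketch of the key-commitment idea, i.e., fixing a value for the encryption key without revealing it to the TPA. This simple hash-based commitment is only an illustration and is not the specific construction used in [2]; the function names are our own.

import hashlib
import secrets

def commit_key(enc_key: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening); only the commitment is handed to the TPA."""
    opening = secrets.token_bytes(32)                    # random blinding value
    commitment = hashlib.sha256(opening + enc_key).digest()
    return commitment, opening

def check_opening(commitment: bytes, opening: bytes, enc_key: bytes) -> bool:
    return hashlib.sha256(opening + enc_key).digest() == commitment

enc_key = secrets.token_bytes(16)                        # the file-encryption key
commitment, opening = commit_key(enc_key)
# The TPA stores only `commitment` and learns nothing about enc_key; if the key is
# later revealed together with the opening, the commitment can be checked:
assert check_opening(commitment, opening, enc_key)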
The PDP schemes discussed above focus on static or warehoused data, which is essential in numerous applications such as libraries, archives, and astronomical/medical/scientific/legal repositories. On the other hand, Dynamic Provable Data Possession (DPDP) schemes investigate dynamic file operations such as update, delete, append, and insert. There are several DPDP constructions in the literature satisfying different system requirements.[1]
Scalable DPDP:
This process is based entirely on symmetric-key cryptography.
1. Before outsourcing, the data owner pre-computes a certain number of short possession-verification tokens, each token covering some set of data blocks. The actual data is then handed over to the server.
2. Subsequently, when the data owner wants to obtain a proof of data possession, it challenges the server with a set of random-looking block indices.
3. In turn, the server must compute a short integrity check over the specified blocks (corresponding to the indices) and return it to the data owner. A sketch of this token-based flow is given below.
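The following Python sketch illustrates the pre-computed-token flow under our own simplifying assumptions; the block size, number of rounds, and the use of SHA-256/HMAC as the symmetric-key primitives are illustrative choices, not part of the cited construction.

import hashlib
import hmac
import secrets

BLOCK_SIZE = 128   # illustrative block length

def split_blocks(data: bytes) -> list[bytes]:
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def token(nonce: bytes, indices: list[int], blocks: list[bytes]) -> bytes:
    """Short integrity check over a chosen set of blocks."""
    h = hashlib.sha256(nonce)
    for i in indices:
        h.update(i.to_bytes(4, "big") + blocks[i])
    return h.digest()

# Step 1: before outsourcing, the owner pre-computes a few verification tokens.
owner_key = secrets.token_bytes(32)
blocks = split_blocks(secrets.token_bytes(4096))          # the file to outsource
precomputed = []
for r in range(10):                                       # ten future challenges
    nonce = hmac.new(owner_key, b"round-%d" % r, hashlib.sha256).digest()
    idx = [int.from_bytes(nonce[j:j + 4], "big") % len(blocks) for j in range(0, 16, 4)]
    precomputed.append((nonce, idx, token(nonce, idx, blocks)))
# The blocks are handed to the server; the owner keeps `precomputed`.

# Steps 2 and 3: one challenge round. The server recomputes the short check over
# the challenged blocks, and the owner compares it with the stored token.
nonce, idx, expected = precomputed.pop(0)
proof = token(nonce, idx, blocks)                         # computed on the server side
assert hmac.compare_digest(proof, expected)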
Cloud computing, the trend toward loosely coupled networking of computing resources, is unmooring data from local storage platforms. Users today regularly access files without knowing, or needing to know, on what machines or in what geographical locations their files reside. They may even store files on platforms with unknown owners and operators, particularly in peer-to-peer computing environments. While cloud computing encompasses the full spectrum of computing resources, in this paper we focus on archival or backup data: large files subject to infrequent updates. While users may access such files only sporadically, a demonstrable level of availability may be required contractually or by regulation. [10]
Financial records, for example, may have to be retained for several years to comply with recently enacted regulations. A recently proposed notion for archived files is the proof of retrievability (POR) [11]. A POR is a protocol in which a server/archive proves to a client that a target file F is intact, in the sense that the client can retrieve all of F from the server with high probability. In a naive POR, a client might simply download F itself and check an accompanying digital signature; POR protocols and related constructions instead adopt a challenge-response format that achieves much lower (nearly constant) communication complexity, as little as tens of bytes per round in practice. A formal definition of a POR has been given together with a set of POR designs in which the client stores just a single symmetric key and a counter; the most practical of these constructions support only a limited number of POR challenges. An alternative construction is based on the idea of storing homomorphic block integrity values that can be aggregated to reduce the communication complexity of a proof. Its main advantage is that, due to the underlying block integrity structure, clients can initiate and verify an unlimited number of challenges.
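As a concrete, highly simplified illustration of the challenge-response format with a single client-side symmetric key, the sketch below tags every block with an HMAC at setup and later spot-checks one randomly chosen block; the actual POR designs cited above are considerably more refined.

import hashlib
import hmac
import secrets

BLOCK = 128

def block_tag(key: bytes, index: int, block: bytes) -> bytes:
    return hmac.new(key, index.to_bytes(4, "big") + block, hashlib.sha256).digest()

# Setup: the client tags every block, outsources blocks and tags, and keeps only the key.
key = secrets.token_bytes(32)
data = secrets.token_bytes(4096)
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
outsourced = [(b, block_tag(key, i, b)) for i, b in enumerate(blocks)]

# One challenge-response round: the client names a random block index, the server
# returns the block and its tag, and the client re-verifies the tag with its key.
challenge = secrets.randbelow(len(outsourced))
block, tag = outsourced[challenge]                        # the server's answer
assert hmac.compare_digest(tag, block_tag(key, challenge, block))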
III. PROPOSED WORK
In our work we introduce a novel method to limit access control, because private and sensitive data are stored on third-party servers. The person who stores the data is called the data owner, and the database that is allowed to store it is referred to as the cloud service provider. The data stored in the cloud is verified by a trusted party called the trusted authority. The features of our work are as follows:
1) It prevents unauthorized access to the content at the cloud service provider.
2) To access the content, every member has to prove themselves; for this we implement a password-based access control method.
3) It provides trust between the data owner and the cloud service provider.
4) It provides a secure encryption technique for the content to prevent intruder attacks.
[Figure: Architecture of the proposed system. (1) The data owner registers with the cloud; (2) the cloud sends a response; (3) the data owner stores encrypted data blocks and sets a password; (4) the data owner sends the signature and metadata to the trusted authority; (5) the trusted authority requests the signature from the cloud, and the cloud sends it; (6) the trusted authority verifies the signatures; (7) the data owner updates or deletes stored data blocks after proving himself.]
The algorithm is divided into three parts: Initialization and Authentication, Encryption and Storage, and Proof and Verification. All three phases are explained below.
A) Initialization and authentication
Initially, the data owner sends a request to the cloud service to store information. The cloud service then sends a response and provides a unique ID and a secure master key. For registration, the cloud service provider gives the user a verification method.
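A minimal sketch of the registration response is given below; the field names and key lengths are our own illustrative choices, as the paper does not specify them.

import secrets
import uuid

def register(owner_name: str) -> dict:
    """Cloud side: answer a registration request with a unique ID and a master key."""
    return {
        "owner": owner_name,
        "unique_id": str(uuid.uuid4()),        # unique ID returned in the response
        "master_key": secrets.token_hex(32),   # secure master key for the data owner
    }

print(register("data-owner-1"))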
B) Encryption and Storage
After authentication, the data owner encrypts his file using the RC4 algorithm. The steps of RC4 encryption are as follows:
1. Get the data to be encrypted and the selected key.
2. Create two byte arrays.
3. Initialize one array with the numbers 0 to 255.
4. Fill the other array with the selected key.
5. Randomize the first array depending on the key array.
6. Randomize the first array within itself to generate the final key stream.
7. XOR the final key stream with the data to be encrypted to give the cipher text.
Algorithm:
Find the length of the file, L. Divide the file into blocks of equal length, such as 32/64/128/256/512/1024 bytes; we generally prefer 128 bytes, which is compatible with RC4 encryption. Each block is then encrypted with the RC4 algorithm below to give the cipher text.
Input: the input data M and the key array K (the selected key repeated to fill 256 entries, as in step 4 above)
Output: cipher text and signature
1. j = 0;
2. for i = 0 to 255:
3.     S[i] = i;
4. for i = 0 to 255:
5.     j = (j + S[i] + K[i]) mod 256;
6.     swap S[i] and S[j];
Note that this stage only swaps the locations of the numbers 0 to 255 (each of which occurs exactly once) in the state table S, so S ends up as a key-dependent permutation of those values. Once the initialization process is complete, the encryption process may be summarized by the pseudo code below:
7. i = j = 0;
8. for k = 0 to N-1:
9.     i = (i + 1) mod 256;
10.    j = (j + S[i]) mod 256;
11.    swap S[i] and S[j];
12.    pr = S[(S[i] + S[j]) mod 256];
13.    output M[k] XOR pr;
where M[0..N-1] is the input message consisting of N bytes.
A message digest is then generated for the encrypted blocks using a secure hash algorithm. The cloud service generates a password and a security code for the particular file and sends them to the data owner over the Simple Mail Transfer Protocol for security reasons. The encrypted blocks are then stored in the database.
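A minimal, runnable Python sketch of this phase is shown below. It assumes SHA-256 as the unspecified "secure hash algorithm" and reuses one RC4 key for every block, exactly as described above; in practice, reusing an RC4 key across blocks reuses the keystream, so a per-block key or nonce would be needed.

import hashlib
import secrets

BLOCK_SIZE = 128   # the preferred block length from the algorithm above

def rc4(key: bytes, message: bytes) -> bytes:
    """Byte-oriented transcription of the KSA/PRGA pseudo code above."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling (steps 1-6)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in message:                      # keystream generation and XOR (steps 7-13)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def encrypt_and_digest(data: bytes, key: bytes):
    """Split the file into 128-byte blocks, RC4-encrypt each block, and compute a
    SHA-256 digest of each cipher block as its signature."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    cipher_blocks = [rc4(key, b) for b in blocks]
    signatures = [hashlib.sha256(c).hexdigest() for c in cipher_blocks]
    return cipher_blocks, signatures

key = secrets.token_bytes(16)
cipher_blocks, signatures = encrypt_and_digest(secrets.token_bytes(1024), key)
# The cipher blocks are stored at the cloud; the signatures go to the trusted authority.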
C) Proof and Verification
To update the file blocks held by the cloud service, the data owner has to prove himself to the cloud. The cloud checks the data owner using password verification; only after the data owner has proved himself is he allowed to manipulate the file content, such as its blocks.
In the verification process, the auditor verifies the file blocks using the signature sent by the data owner. The auditor requests the signature from the cloud and then compares the cloud's signature with the data owner's. If the two signatures are equal, the file is secure; otherwise it is not, and the verification result is sent to the data owner over the Simple Mail Transfer Protocol.
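The comparison step can be sketched as follows; the per-block SHA-256 signatures match the sketch given in the Encryption and Storage phase, and the function and variable names are our own.

import hashlib
import hmac

def audit(owner_signatures: list[str], cloud_blocks: list[bytes]) -> bool:
    """Auditor: recompute the per-block digests from the blocks held by the cloud
    and compare them with the signatures supplied by the data owner."""
    cloud_signatures = [hashlib.sha256(b).hexdigest() for b in cloud_blocks]
    return len(owner_signatures) == len(cloud_signatures) and all(
        hmac.compare_digest(a, b) for a, b in zip(owner_signatures, cloud_signatures)
    )

# Toy example: the owner's signatures were computed over the same cipher blocks.
cipher_blocks = [b"block-0-ciphertext", b"block-1-ciphertext"]
owner_signatures = [hashlib.sha256(b).hexdigest() for b in cipher_blocks]
print(audit(owner_signatures, cipher_blocks))                          # secure
print(audit(owner_signatures, [b"tampered", b"block-1-ciphertext"]))   # not secure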
IV. CONCLUSION
In this paper we introduced a novel prototype for secure storage of data and secure authentication of content at the cloud service provider. The cloud storage service scheme supports storing dynamic data, where the owner is able to change and access the data stored by the cloud service provider and also to update data on the cloud storage. The data owner enforces block ciphering and confidentiality of the data using the authentication schemes employed in this process.
REFERENCES
[1] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable data possession at untrusted stores," in Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS '07), 2007, pp. 598–609.
[2] F. Sebé, J. Domingo-Ferrer, A. Martinez-Balleste, Y. Deswarte, and J.-J. Quisquater, "Efficient remote data possession checking in critical information infrastructures," IEEE Trans. on Knowl. and Data Eng., vol. 20, no. 8, 2008.
[3] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and efficient provable data possession," in Proceedings of the 4th International Conference on Security and Privacy in Communication Networks, 2008, pp. 1–10.
[4] C. Erway, A. Küpçü, C. Papamanthou, and R. Tamassia, "Dynamic provable data possession," in Proceedings of the 16th ACM Conference on Computer and Communications Security, 2009, pp. 213–222.
[5] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling public verifiability and data dynamics for storage security in cloud computing," in Proceedings of the 14th European Conference on Research in Computer Security, 2009, pp. 355–370.
[6] A. F. Barsoum and M. A. Hasan, "Provable possession and replication of data over cloud servers," Centre for Applied Cryptographic Research, Report 2010/32, 2010, http://www.cacr.math.uwaterloo.ca/techreports/2010/cacr2010-32.pdf.
[7] R. Curtmola, O. Khan, R. Burns, and G. Ateniese, "MR-PDP: multiple-replica provable data possession," in 28th IEEE ICDCS, 2008, pp. 411–420.
[8] A. F. Barsoum and M. A. Hasan, "On verifying dynamic multiple data copies over cloud servers," Cryptology ePrint Archive, Report 2011/447, 2011, http://eprint.iacr.org/.
[9] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: a high-availability and integrity layer for cloud storage," in CCS '09: Proceedings of the 16th ACM Conference on Computer and Communications Security. New York, NY, USA: ACM, 2009, pp. 187–198.
[10] Y. Dodis, S. Vadhan, and D. Wichs, "Proofs of retrievability via hardness amplification," in Proceedings of the 6th Theory of Cryptography Conference on Theory of Cryptography, 2009.
[11] A. Juels and B. S. Kaliski, "PORs: proofs of retrievability for large files," in CCS '07: Proceedings of the 14th ACM Conference on Computer and Communications Security. ACM, 2007, pp. 584–597.
[12] H. Shacham and B. Waters, "Compact proofs of retrievability," in ASIACRYPT '08, 2008, pp. 90–107.
BIOGRAPHIES
T.G.V.V. Srinivas completed his B.Tech in Computer Science and Engineering in 2012. He is pursuing his M.Tech in Computer Science and Engineering at Kaushik College of Engineering. His areas of interest include operating systems, cloud computing, data mining and warehousing, and computer networks.
P. Suresh Babu completed his B.Tech and M.E. in Computer Science and Engineering. He is currently working as an Associate Professor in the Department of Computer Science and Engineering at Kaushik College of Engineering, JNTUK University. He has 4 years of industrial experience and 15 years of teaching experience. His areas of interest include artificial intelligence, neural networks, cryptography & network security, compiler design, and advanced data structures.