International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 2, Issue 8, August 2013
ISSN 2319 - 4847
DATA STORAGE IN CLOUD COMPUTING TO ENSURE EFFECTIVE INTEGRITY AND SECURITY IN MULTI CLOUD STORAGE
Srikanth Murapaka¹, D. Adinarayana², Y. Ramesh Kumar³, M. V. Ramarao⁴
¹Final-year M.Tech Student, ²,³Associate Professor, ⁴Assistant Professor
¹,²,³Avanthi Institute of Engineering and Technology (JNTUK), Cherukupally, Vizianagaram Dist., Andhra Pradesh, India
⁴Shri Vishnu Engineering College for Women, Bhimavaram, Andhra Pradesh, India
ABSTRACT
Many major problems with maintaining the integrity and security of data stored in the cloud have been discussed, and many schemes have evolved to address them. In this paper we propose an efficient and effective scheme with dynamic data support to ensure the correctness of user data in the cloud. The new scheme also supports modification, update and deletion of data blocks. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability, and the problem is exacerbated when the client is a small computing device with limited resources.
Keywords: storage outsourcing, multi-cloud, cooperative, data retention, homomorphic token.
1. INTRODUCTION
Cloud storage is a model of networked enterprise storage where data is stored not only in the user's computer, but in virtualized pools of storage
which are generally hosted by third parties, too. Hosting companies operate large data centers, and people who require their data to be hosted buy or
lease storage capacity from them. The data center operators, in the background, virtualize the resources according to the requirements of the customers and expose them as storage pools, which the customers can then use to store files or data objects. Physically, a resource may span multiple servers, and the safety of the files depends on the hosting provider. Several trends are opening up the era of Cloud Computing, which
is an Internet-based development and use of computer technology. The ever cheaper and more powerful processors, together with the software as a
service (SaaS) [1, 2] computing architecture, are transforming data centers into pools of computing services on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers. Moving data into the cloud offers great convenience to users, since they do not have to deal with the complexities of direct hardware management. Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), offered by one of the pioneering Cloud Computing vendors, are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this shift in computing platform also eliminates the responsibility of local machines for data maintenance. As a result, users are at the mercy of their cloud service providers for the availability and integrity of their data. A
number of security-related research issues in data outsourcing have been studied in the past decade. Early work concentrated on data authentication
and integrity, i.e., how to efficiently and securely ensure that the server returns correct and complete results in response to its clients’ queries [14, 25].
Later research focused on outsourcing encrypted data (placing even less trust in the server) and associated difficult problems mainly having to do with efficient querying over the encrypted domain [20, 32, 9, 18]. More recently, however, the problem of Provable Data Possession (PDP) – sometimes also referred to as Proof of Retrievability (POR) – has appeared in the research literature. The central goal in PDP is to allow a client to efficiently, frequently and securely verify that a server – which purportedly stores the client's potentially very large amount of data – is not cheating the client. In this context, cheating means that the server might delete some of the data or might not store all of it in fast storage, e.g., placing it on CDs
or other tertiary off-line media. It is important to note that a storage server might not be malicious; instead, it might be simply unreliable and lose or
inadvertently corrupt hosted data. An effective PDP technique must be equally applicable to malicious and unreliable servers. The problem is further
complicated by the fact that the client might be a small device (e.g., a PDA or a cell-phone) with limited CPU, battery power and communication
facilities. Hence, the need to minimize bandwidth and local computation overhead for the client in performing each verification. Cloud Computing
is, to use yet another cloud-inspired pun, a nebulously defined term. However, it was arguably first popularized in 2006 by Amazon’s Elastic
Compute Cloud (or EC2, see http://www.amazon.com/ec2/), which started offering virtual machines (VMs) for $0.10/hour using both a simple
web interface and a programmer friendly API. Although not the first to propose a utility computing model, Amazon EC2 contributed to
popularizing the “Infrastructure as a Service” (IaaS) paradigm [4], which became closely tied to the notion of Cloud Computing. An IaaS cloud
enables on-demand provisioning of computational resources, in the form of VMs deployed in a cloud provider’s datacenter (such as Amazon’s),
minimizing or even eliminating associated capital costs for cloud consumers, allowing capacity to be added or removed from their IT infrastructure
in order to meet peak or fluctuating service demands, while only paying for the actual capacity used. The data stored in the cloud may be frequently
updated by the users, with operations including insertion, deletion, modification, appending, reordering, etc. Ensuring storage correctness under dynamic data
updates is hence of paramount importance. However, this dynamic feature also renders traditional integrity assurance techniques ineffective and calls for new solutions. Last but not least, the deployment of Cloud Computing is powered by data centers running in a simultaneous, cooperative and distributed manner. An individual user's data is redundantly stored in multiple physical locations to further reduce data integrity threats. Therefore, distributed protocols for storage correctness assurance are of the utmost importance in achieving a robust and secure cloud data storage system in the real world. However, this important area remains to be fully explored in the literature. Recently, the importance of ensuring remote data integrity has been highlighted by several research works. These techniques, while useful for ensuring storage correctness without requiring users to possess the data locally, cannot address all the security threats in cloud data storage, since they all focus on the single-server scenario and most of them do not consider dynamic data operations. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers or peers. Again, none of these distributed schemes is aware of dynamic data operations.
2. CLOUD ARCHITECTURE
Modern cloud storage is based on highly virtualized infrastructure and has the same characteristics as cloud computing in terms of agility, scalability, elasticity and multi-tenancy, and it is available both off-premise (Amazon EC2) and on-premise (ViON Capacity Services). The concept is believed to have been invented by Joseph Carl Robnett Licklider in the 1960s [2]. However, Kurt Vonnegut refers to a cloud "that does all the heavy thinking for everybody" in his book The Sirens of Titan, published in 1959 [3]. Since the sixties, cloud computing has developed along a number of lines, with Web 2.0 being the most recent evolution. However, since the Internet only started to offer significant bandwidth in the nineties, cloud computing for the masses has been something of a late developer. It is difficult to pin down a canonical definition of cloud storage architecture, but object storage is reasonably analogous. Cloud storage services like OpenStack, cloud storage products like EMC Atmos and Hitachi Cloud Services, and distributed storage research projects like OceanStore [4] or VISION Cloud are all examples of object storage and suggest the following guidelines.
Cloud storage is:
• made up of many distributed resources, but still acts as one
• highly fault tolerant through redundancy and distribution of data
• highly durable through the creation of versioned copies
• typically eventually consistent with regard to data replicas.
The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen
by the client, i.e., the computer user. This includes the client’s network (or computer) and the applications used to access the cloud via a user
interface such as a web browser. The back end of the cloud computing architecture is the cloud itself, comprising various computers, servers and
data storage devices.
Fig. 1: Cloud data storage architecture
Representative network architecture for cloud data storage is illustrated in Figure 1. Three different network entities can be identified as follows: Users, who have data to be stored in the cloud and rely on the cloud for data computation; they consist of both individual consumers and organizations. A CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, and owns and operates live Cloud Computing systems. An optional third-party auditor (TPA), who has expertise and capabilities that users may not have and is trusted to assess and expose the risk of cloud storage services on behalf of the users upon request. Our scheme is based entirely on symmetric-key cryptography. The main idea is that, before outsourcing, the owner (OWN) pre-computes a certain number of short possession-verification tokens, each token covering some set of data blocks. The actual data is then handed over to the storage server (SRV). Subsequently, when OWN wants to obtain a proof of data possession, it challenges SRV with a set of random-looking block indices. In turn, SRV must compute a short integrity check over the specified blocks (corresponding to the indices) and return it to OWN. For the proof to hold, the returned integrity check must match the corresponding value pre-computed by OWN. However, in our scheme OWN has the choice of either keeping the pre-computed tokens locally or outsourcing them – in encrypted form – to SRV. Notably, in the latter case, OWN's storage overhead is constant regardless of the size of the outsourced data. Our scheme is also very efficient in terms of computation and bandwidth.
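To make the interaction above concrete, the following Python sketch walks through one challenge-response round. It is an illustration only, not the authors' implementation: it assumes the integrity check is a SHA-256 hash over a per-token nonce and the challenged blocks, with the nonce and the block indices revealed to SRV only at challenge time.

# Illustrative sketch of the OWN/SRV challenge-response round described above.
# The hash-based check and the 8-byte index encoding are assumptions of this example.
import hashlib, secrets

def integrity_check(nonce, blocks, indices):
    # Short check over the challenged blocks; computable by anyone who knows the nonce.
    h = hashlib.sha256(nonce)
    for i in indices:
        h.update(i.to_bytes(8, "big") + blocks[i])
    return h.digest()

# OWN side, before outsourcing: pre-compute one token for a chosen index set.
blocks = [secrets.token_bytes(4096) for _ in range(16)]   # toy 16-block file
nonce = secrets.token_bytes(32)                           # kept secret until challenge time
indices = [1, 5, 7, 12]                                   # "random-looking" block indices
token = integrity_check(nonce, blocks, indices)           # stored by OWN (or outsourced encrypted)

# Challenge time: OWN reveals (nonce, indices); SRV recomputes the check over its stored copy.
proof = integrity_check(nonce, blocks, indices)
assert proof == token                                     # the proof holds iff the blocks are intact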
2.1. THE CLOUD ECOSYSTEM
Virtual infrastructure management tools for datacenters have been around since before Cloud Computing became the industry’s new buzzword.
Several of these, such as Platform VM Orchestrator (http://www.platform.com/Products/platform-vm-orchestrator), VMware vSphere
(http://www.vmware.com/products/vsphere/), and oVirt (http://ovirt.org/), meet many of the requirements for VI management outlined earlier, providing
features such as dynamic placement and management of virtual machines on a pool of physical resources, automatic load balancing, server
consolidation, and dynamic resizing and partitioning of infrastructure. Thus, although creating what is now called a “private cloud” was already
possible with existing tools, these tools lack other features that are relevant for building IaaS clouds, such as public cloud-like interfaces, mechanisms
to add such interfaces easily, or the ability to deploy VMs on external clouds. On the other hand, projects like Globus Nimbus [3]
(http://workspace.globus.org/) and Eucalyptus [4] (http://www.eucalyptus.com/), which we term cloud toolkits, can be used to transform existing
infrastructure into an IaaS cloud with cloud-like interfaces. Eucalyptus is compatible with Amazon’s EC2 interface and is designed to support
additional client-side interfaces. Globus Nimbus exposes EC2 and WSRF interfaces and offers self-configuring virtual cluster support. However,
although these tools are fully functional with respect to providing cloud-like interfaces and higher-level functionality for security, contextualization and VM disk image management, their VI management capabilities are limited and lack the features of solutions that specialize in VI management. Thus, an ecosystem of cloud tools is starting to form (see Figure 2), where cloud toolkits attempt to span both cloud management and VI management but, by focusing on the former, do not deliver the same functionality as software written specifically for VI management. Although
integrating cloud management solutions with existing VI managers would seem like the obvious solution, this is complicated by the lack of open
and standard interfaces between the two layers, and the lack of certain key features in existing VI managers (enumerated below). The focus of our
work is, therefore, to produce a VI management solution with a flexible and open architecture that can be used to build private/hybrid clouds. With
this goal in mind, we started developing OpenNebula and continue to enhance it as part of the European Union's RESERVOIR project (http://www.reservoir-fp7.eu/), which aims to develop open source technologies to enable deployment and management of complex IT services across different administrative domains. OpenNebula provides much of the functionality found in existing VI managers, but also aims to overcome
the shortcomings of other VI solutions. Namely, (i) the inability to scale to external clouds, (ii) monolithic and closed architectures that are hard to extend or interface with other software, preventing their seamless integration with existing storage and network management solutions deployed in
datacenters, (iii) a limited choice of preconfigured placement policies (first fit, round robin, etc.), and (iv) lack of support for scheduling, deploying,
and configuring groups of VMs (e.g., a group of VMs representing a cluster, which must all be deployed, or not at all, and where the configuration
of some VMs depends on the configuration of others, such as the head-worker relationship in compute clusters).
Fig. 2. The Cloud ecosystem for building private clouds
2.2 STRUCTURE AND TECHNIQUES
In this section, we present our verification framework for multi-cloud storage and a formal definition of CDDR. We introduce two fundamental techniques for constructing our CDDR technique: the hash index hierarchy (HIH), by which the responses to the clients' challenges computed by multiple CSPs can be combined into a single response as the final result; and the homomorphic verifiable response (HVR), which supports distributed cloud storage in a multi-cloud setting and implements an efficient construction of a collision-resistant hash function, which can be viewed as a random oracle in the verification protocol.
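As a rough illustration of the homomorphic property only (the actual HVR construction is not reproduced here), the following toy sketch treats each CSP's response as a random linear combination of its challenged blocks modulo a prime, so that the responses returned by several CSPs combine into the single response a server holding all of the blocks would have returned. The linear-combination form and the modulus are assumptions of this example.

# Toy illustration of a homomorphic verifiable response: per-CSP responses
# combine into the single response of a hypothetical server holding all blocks.
P = 2**61 - 1                                        # example prime modulus

def response(blocks, challenge):
    # blocks: {index: integer-encoded block} held by one CSP
    # challenge: {index: random coefficient} issued by the verifier
    return sum(c * blocks[i] for i, c in challenge.items() if i in blocks) % P

csp1 = {0: 11, 1: 23, 2: 35}                         # blocks stored at CSP 1
csp2 = {3: 47, 4: 59}                                # blocks stored at CSP 2
challenge = {0: 3, 2: 5, 3: 7, 4: 2}

combined = (response(csp1, challenge) + response(csp2, challenge)) % P
assert combined == response({**csp1, **csp2}, challenge)   # homomorphic combination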
2.3 VERIFICATION FRAMEWORK FOR MULTI-CLOUD
Although existing DDR techniques offer a publicly accessible remote interface for checking and managing tremendous amounts of data, the majority of them are incapable of satisfying the inherent requirements of multiple clouds in terms of communication and computation costs. To address this problem, we consider a multi-cloud storage service as illustrated in Figure 1. In this architecture, a data storage service involves three different entities: clients, who have a large amount of data to be stored in multiple clouds and have the permissions to access and manipulate the stored data; Cloud Service Providers (CSPs), who work together to provide data storage services and have sufficient storage and computation resources; and a Trusted Third Party (TTP), who is trusted to store verification parameters and to offer public query services for these parameters. In this architecture, we consider the existence of multiple CSPs that cooperatively store and maintain the clients' data. Moreover, a cooperative DDR scheme is used to verify the integrity and availability of the data stored across all CSPs. The verification procedure is described as follows: first, a client (data owner) uses a secret key to pre-process a file, which consists of a collection of blocks, generates a set of public
verification information that is stored at the TTP, transmits the file and some verification tags to the CSPs, and may then delete its local copy. Then, by using a verification protocol, the client can issue a challenge to one CSP to check the integrity and availability of the outsourced data with respect to the public information stored at the TTP. We neither assume that the CSPs are trusted to guarantee the security of the stored data, nor that the data owner has the ability to collect evidence of a CSP's fault after errors have been found. To achieve this goal, a TTP server is constructed as a core trust base in the cloud for the sake of security. We assume the TTP is reliable and independent and performs the following functions [12]: to set up and maintain the CDDR cryptosystem; to generate and store the data owner's public key; and to store the public parameters used to execute the verification protocol in the CDDR technique. Note that the TTP is not directly involved in the CDDR technique, in order to reduce the complexity of the cryptosystem.
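The sketch below summarizes who stores what in the procedure just described; the class names, the HMAC-based tags and the hash stored at the TTP are illustrative placeholders rather than the concrete pre-processing prescribed by the CDDR technique.

# Schematic of the data flow among client, TTP and CSPs described above
# (all names and the HMAC/SHA-256 choices are illustrative assumptions).
import hashlib, hmac

class TTP:
    def __init__(self):
        self.public_params = {}                       # public verification info per file
    def query(self, file_id):
        return self.public_params[file_id]            # public query service

class CSP:
    def __init__(self):
        self.stored = {}
    def store(self, file_id, blocks, tags):
        self.stored[file_id] = (blocks, tags)         # outsourced blocks and verification tags

class Client:
    def __init__(self, secret_key):
        self.secret_key = secret_key
    def outsource(self, file_id, blocks, ttp, csps):
        tags = [hmac.new(self.secret_key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
                for i, b in enumerate(blocks)]        # pre-process the file with the secret key
        ttp.public_params[file_id] = hashlib.sha256(b"".join(tags)).digest()  # stored at TTP
        for csp in csps:
            csp.store(file_id, blocks, tags)          # transmit file and tags to the CSPs
        # the local copy may now be deleted; later challenges check a CSP against ttp.query(file_id)

ttp, providers = TTP(), [CSP(), CSP()]
Client(b"0" * 32).outsource("file-1", [b"block-0", b"block-1"], ttp, providers)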
2.4 DIVISION OF DATA BLOCKS
We start with a database D divided into d blocks. We want to be able to challenge the storage server t times. We use a
pseudo-random function, f, and a pseudo-random permutation g defined as:
f : {0,1}^c × {0,1}^k → {0,1}^L
and
g : {0,1}^l × {0,1}^L → {0,1}^l.
In our case l = log d since we use g to permute indices. The output of f is used to generate the key for g and c = log t.
We note that both f and g can be built out of standard block ciphers, such as AES. In this case L = 128. We use the PRF f
with two master secret keys W and Z, both of k bits. The key W is used to generate per-round permutation keys, while Z is used to generate challenge nonces. The setup phase is shown in detail in Algorithm 1.
Algorithm 1: Setup Phase
Begin
Choose parameters c, l, k, L and functions f, g;
Choose the number t of tokens;
Choose the number r of indices per verification;
Randomly generate master keys W, Z, K ∈ {0, 1}^k;
For i = 1 to t do
Begin
Round i:
Generate k_i = f_W(i) and c_i = f_Z(i);
Compute v_i = H(c_i, D[g_{k_i}(1)], ..., D[g_{k_i}(r)]);
Compute v'_i = AE_K(i, v_i);
End;
Send to SRV: (D, {[i, v'_i] for 1 ≤ i ≤ t});
End;
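A minimal Python rendering of the setup phase is sketched below. It assumes HMAC-SHA256 as the PRF f and a keyed shuffle as a stand-in for the block-cipher-based PRP g, and it keeps the tokens locally at OWN; in the outsourced variant, each v_i would additionally be authenticated-encrypted as AE_K(i, v_i) before being handed to SRV. All function names are illustrative.

# Illustrative implementation of the setup phase (Algorithm 1); the PRF/PRP
# instantiations and the locally kept tokens are assumptions of this sketch.
import hashlib, hmac, random, secrets

def f(key, i):
    # PRF f: round index -> pseudorandom value (here L = 256 bits)
    return hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest()

def g_indices(key, d, r):
    # Stand-in for the PRP g: the first r indices of a keyed permutation of {0, ..., d-1}
    idx = list(range(d))
    random.Random(key).shuffle(idx)
    return idx[:r]

def token(c_i, D, indices):
    # v_i = H(c_i, D[g_ki(1)], ..., D[g_ki(r)])
    h = hashlib.sha256(c_i)
    for j in indices:
        h.update(D[j])
    return h.digest()

def setup(D, t, r):
    W, Z = secrets.token_bytes(32), secrets.token_bytes(32)   # master keys
    tokens = []
    for i in range(1, t + 1):
        k_i, c_i = f(W, i), f(Z, i)                            # round-i permutation key and nonce
        tokens.append(token(c_i, D, g_indices(k_i, len(D), r)))
    return (W, Z), tokens

# For the i-th challenge, OWN reveals k_i and c_i (or the r indices); SRV recomputes
# token() over its stored blocks and OWN compares the result with tokens[i-1].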
3. DESIGN METHODOLOGIES
The data design transforms the information domain model created during analysis into the data structures that will be
required to implement the software. The data objects and relationships defined in the entity relationship diagram and the
detailed data content depicted in the data dictionary provide the basis for the data design activity. Part of data design may
occur in conjunction with the design of software architecture. More detailed data design occurs as each software
component is designed. The architectural design defines the relationships among the major structural elements of the software, the design patterns that can be used to achieve the requirements, the system architecture, the interface representation, and the component-level detail. During each design activity, we apply basic concepts and principles that
lead to high quality.
Fig. 3: Translating the analysis model into the design model
The interface design describes how the software communicates within itself, with systems that interoperate with it, and
with humans who use it. An interface implies a flow of information (e.g., data and/or control) and a specific type of
behavior. Therefore, data and control flow diagrams provide much of the information required for interface design. The
component-level design transforms structural elements of the software architecture into a procedural description of
software components. Information obtained from the PSPEC, CSPEC, and STD serves as the basis for component design. During design we make decisions that will ultimately affect the success of software construction and, just as important, the ease with which the software can be maintained. To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation and to achieve the following goals: (1) Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact at all times in the cloud. (2) Fast localization of data error: to effectively locate the faulty server when data corruption has been detected. (3) Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, delete or append their data files in the cloud. (4) Dependability: to enhance data availability against Byzantine failures, malicious data modification and server colluding attacks, i.e., minimizing the effect of data errors or server failures. (5) Lightweight: to enable users to perform storage correctness checks with minimum overhead.
4. SECURITY ANALYSIS AND PERFORMANCE EVALUATION
Our security analysis focuses on the adversary model as defined. We also evaluate the efficiency of our scheme via
implementation of both file distribution preparation and verification token precomputation. In our scheme, servers are required to operate only on specified rows in each correctness verification to calculate the requested token. We will show that this "sampling" strategy over selected rows, instead of all rows, can greatly reduce the computational overhead on the server, while maintaining detection of data corruption with high probability. Suppose nc servers are misbehaving due to possible compromise or Byzantine failure. In the following analysis, we do not limit the value of nc, i.e., nc ≤ n. Assume the adversary modifies the data blocks in z rows out of the l rows of the encoded file matrix. Let r be the number of different rows the user asks to check in a challenge. Let X be a discrete random variable defined as the number of rows chosen by the user that match the rows modified by the adversary. We follow the security
definitions in [2, 21] and in particular the one in [2] which is presented in the form of a security game. In practice we
need to prove that our protocol is a type of proof of knowledge of the queried blocks, i.e. if the adversary passes the
verification phase then we can extract the queried blocks. The most generic (extraction based) definition that does not
depend on the specifics of the protocol is introduced in [21]. Another formal definition for the related concept of sub-linear authentication is provided in [26]. If P passes the verification, then the adversary wins the game. As in [2], we say
that our scheme is a Provable Data Possession scheme if for any probabilistic polynomial-time adversary A; the
probability that A wins the PDP security game on a set of blocks is negligibly close to the probability that the challenger
can extract those blocks. In this type of proof of knowledge, a “knowledge extractor” can extract the blocks from A via
several queries, even by rewinding the adversary, which does not keep state. At first, it may seem that we do not need to model the hash function as a random oracle and that we might just need its collision-resistance property (assuming we are using a "good" standard hash function and not some contrived hash scheme). However, the security definition for PDP is an extraction-type definition, since we are required to extract the blocks that were challenged. So we need the random oracle model, but not in a fundamental way: we just need the ability to see the inputs to the random oracle, and we are not using its programmability feature (i.e., we are not "cooking up" values as outputs of the random oracle). We need to see the inputs so that we can extract the queried data blocks. Since we are using just a random oracle, our proof does not rely on any cryptographic assumption but is information-theoretic.
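The sampling argument above can be made concrete: since the r rows are chosen without replacement from the l rows, the probability that at least one of them falls among the z modified rows is P{X ≥ 1} = 1 - Π_{i=0}^{r-1} (l - z - i) / (l - i). The short helper below, with purely illustrative numbers, computes this value.

# Detection probability of the row-sampling strategy described above:
# P{X >= 1} = 1 - P{all r sampled rows avoid the z modified rows}.
def detection_probability(l, z, r):
    p_miss = 1.0
    for i in range(r):
        p_miss *= (l - z - i) / (l - i)
    return 1.0 - p_miss

# Illustrative numbers: with l = 1000 rows of which z = 10 (1%) are modified,
# challenging r = 460 rows detects the corruption with probability above 99%.
print(round(detection_probability(1000, 10, 460), 4))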
5. EXPERIMENTAL RESULTS
Fig. 4: Server Login
Fig. 5: Cloud Authentication Server
Fig. 6: Client Login
Fig. 7: Availability of Resources [8]
6. CONCLUSION
In this paper, we investigated the problem of data security in cloud data storage, which is essentially a distributed storage
system. To ensure the correctness of users’ data in cloud data storage, we proposed an effective and flexible distributed
scheme with explicit dynamic data support, including block update, delete, and append. We rely on erasure-correcting
code in the file distribution preparation to provide redundancy parity vectors and guarantee the data dependability. By
utilizing the homomorphic token with distributed verification of erasure coded data, our scheme achieves the integration
of storage correctness assurance and data error localization, i.e., whenever data corruption has been detected during the
storage correctness verification across the distributed servers, we can almost guarantee the simultaneous identification of
the misbehaving server(s). Through detailed security and performance analysis, we show that our scheme is highly
efficient and resilient to Byzantine failure, malicious data modification attack, and even server colluding attacks. We
believe that data storage security in Cloud Computing, an area full of challenges and of paramount importance, is still in
its infancy, and many research problems are yet to be identified. We envision several possible directions for future
research on this area. We developed and presented a step-by-step design of a very light-weight and provably secure PDP
scheme. It surpasses prior work on several counts, including storage, bandwidth and computation overheads as well as the
support for dynamic operations. However, since it is based upon symmetric key cryptography, it is unsuitable for public
(third party) verification. A natural solution to this would be a hybrid scheme combining elements of [2] and our scheme.
To summarize, the work described in this paper represents an important step forward towards practical PDP techniques.
We expect that the salient features of our scheme (very low cost and support for dynamic outsourced data) make it
attractive for realistic applications. In addition, alongside our research on dynamic cloud data storage, we also plan to
investigate the problem of fine-grained data error localization.
REFERENCES
[1] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. of CCS '07, pp. 598–609, 2007.
[2] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. of SecureComm '08, pp. 1–10, 2008.
[3] T. S. J. Schwarz and E. L. Miller, “Store, Forget, and Check: Using Algebraic Signatures to Check Remotely
Administered Storage,” Proc.of ICDCS ’06, pp. 12–12, 2006.
[4] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, “A Cooperative Internet Backup Scheme,” Proc. of
the 2003 USENIX Annual Technical Conference (General Track), pp. 29–41, 2003.
[5] K. D. Bowers, A. Juels, and A. Oprea, “HAIL: A High- Availability and Integrity Layer for Cloud Storage,”
Cryptology ePrint Archive, Report 2008/489, 2008, http://eprint.iacr.org/.
[6] L. Carter and M. Wegman, “Universal Hash Functions,” Journal of Computer and System Sciences, vol. 18, no. 2, pp.
143–154, 1979.
[7] J. Hendricks, G. Ganger, and M. Reiter, “Verifying Distributed Erasure coded Data,” Proc. 26th ACM Symposium on
Principles of Distributed Computing, pp. 139–146, 2007.
[8] Amazon.com, “Amazon Web Services (AWS),” Online at http://aws. amazon.com, 2008.
[10] A. Juels and B. S. Kaliski Jr., "PORs: Proofs of Retrievability for Large Files," Proc. of CCS '07, pp. 584–597, 2007.
[11] M. Bellare, R. Canetti, and H. Krawczyk. Keying hash functions for message authentication. In CRYPTO’96, 1996.
[12] M. Bellare, R. Guerin, and P. Rogaway. XOR MACs: New methods for message authentication using finite
pseudorandom functions. In CRYPTO’95, 1995.
[13] J. Black and P. Rogaway. Ciphers with arbitrary finite domains. In Proc. of CT-RSA, volume 2271 of LNCS, pages
114–130. Springer-Verlag, 2002.
[14] M. Blum, W. Evans, P. Gemmell, S. Kannan, and M. Naor. Checking the correctness of memories. In Proc. of the
FOCS ’95, 1995.
[15] D. Boneh, G. diCrescenzo, R. Ostrovsky, and G. Persiano. Public key encryption with key-word search. In
EUROCRYPT’04, 2004.
[16] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard. A cooperative internet backup scheme. In
USENIX’03, 2003.
[17] E. Mykletun, M.Narasimha, and G. Tsudik. Authentication and integrity in outsourced databases. In ISOC
NDSS’04, 2004.
[18] M. Naor and G. N. Rothblum. The complexity of online memory checking. In Proc. of FOCS, 2005. Full
version appears as ePrint Archive Report 2006/091.
[19] P. Rogaway, M. Bellare, and J. Black. OCB: A block-cipher mode of operation for efficient authenticated encryption.
ACM TISSEC, 6(3), 2003.
[20] T. S. J. Schwarz and E. L. Miller. Store, forget, and check: Using algebraic signatures to check remotely
administered storage. In Proceedings of ICDCS ’06. IEEE Computer Society, 2006.
[21] F. Sebe, A. Martinez-Balleste, Y. Deswarte, J. Domingo-Ferrer, and J.-J. Quisquater. Time-bounded remote file integrity checking. Technical Report 04429, LAAS, July 2004.
[22] H. Shacham and B. Waters. Compact proofs of retrievability. Cryptology ePrint archive, June 2008. Report
2008/073.
[23] M. Shah, R. Swaminathan, and M. Baker. Privacy-preserving audit and extraction of digital contents. Cryptology ePrint Archive, April 2008. Report 2008/186.
[24] D. Song, D. Wagner, and A. Perrig. Practical techniques for searches on encrypted data. In IEEE S&P’00, 2000.
[25] M. Waldman and D. Mazières. TANGLER: a censorship-resistant publishing system based on document
entanglements. In ACM CCS’01, 2001.
[26] M. Waldman, A. Rubin, and L. Cranor. PUBLIUS: a robust, tamper-evident, censorship-resistant web publishing
system. In USENIX SEC’00, 2000.
[27] B. Sotomayor, R. S. Montero, I. M. Llorente, and I. T. Foster, "Virtual infrastructure management in private and hybrid clouds," IEEE Internet Computing, vol. 13, no. 5, pp. 14–22, 2009.
[28] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and efficient provable data possession," in
Proceedings of the 4th international conference on Security and privacy in communication networks, Secure Comm.,
2008, pp. 1–10.
[29] C. C. Erway, A. Küpçü, C. Papamanthou, and R. Tamassia, "Dynamic provable data possession," in ACM
Conference on Computer and Communications Security, E. Al-Shaer, S. Jha, and A. D. Keromytis, Eds. ACM, 2009, pp.
213–222.
[30] H. Shacham and B. Waters, “Compact proofs of retrievability,” in ASIACRYPT, ser. Lecture Notes in Computer
Science, J. Pieprzyk, Ed., vol. 5350. Springer, 2008, pp. 90–107.
[31] Q. Wang, C.Wang, J. Li, K. Ren, and W. Lou, “Enabling public verifiability and data dynamics for storage security
in cloud computing,” in ESORICS, ser. Lecture Notes in Computer Science, M. Backes and P. Ning, Eds., vol. 5789.
Springer, 2009, pp. 355–370.
[32] Y. Zhu, H. Wang, Z. Hu, G.-J. Ahn, H. Hu, and S. S. Yau, “Dynamic audit services for integrity verification of
outsourced storages in clouds,” in SAC, W. C. Chu, W. E. Wong, M. J. Palakal, and C.-C. Hung, Eds. ACM, 2011, pp.
1550–1557.
[33] K. D. Bowers, A. Juels, and A. Oprea, “Hail: a high-availability and integrity layer for cloud storage,” in ACM
Conference on Computer and Communications Security, E. Al-Shaer, S. Jha, and A. D. Keromytis, Eds. ACM, 2009, pp.
187–198.
[34] Y. Dodis, S. P. Vadhan, and D. Wichs, “Proofs of retrievability via hardness amplification,” in TCC, ser. Lecture
Notes in Computer Science, O. Reingold, Ed., vol. 5444. Springer, 2009, pp. 109–127.
[35] L. Fortnow, J. Rompel, and M. Sipser, “On the power of multiprover interactive protocols,” in Theoretical Computer
Science, 1988, pp. 156–161.
About the Authors
Mr. Srikanth Murapaka is a final-year M.Tech student at Avanthi Institute of Engineering and Technology (JNTUK), Cherukupally, Vizianagaram Dist., Andhra Pradesh, India.
Mr. D. Adinarayana is working as an Associate Professor in the Department of CSE at Avanthi Institute of Engineering and Technology (JNTUK), Cherukupally, Vizianagaram Dist., Andhra Pradesh, India.
Mr. Y. Ramesh Kumar obtained his M.Sc. (Computer Science) degree from Andhra University and later obtained his M.Tech (CST) degree from Andhra University. He is presently working as Associate Professor and Head of the Department (CSE) at Avanthi Institute of Engineering and Technology, Cherukupally, Vizianagaram Dist. He has guided more than 60 Bachelor's-degree students and 40 Master's-degree students of Computer Science and Engineering in their major projects. His research interests include ontology-based information extraction based on Web search, mining and warehousing.
Mr. M. V. Ramarao is working as an Assistant Professor in the Department of CSE at Shri Vishnu Engineering College for Women, Bhimavaram, Andhra Pradesh, India.