International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org  Email: editor@ijaiem.org
Volume 4, Issue 5, May 2015  ISSN 2319-4847

Cloud based Hybrid Model for Authorized Deduplication

Miss Prachi D. Thakar 1, Dr. D. G. Harkut 2

1 M.E. (C.E.) II Year, Prof. Ram Meghe College of Engineering and Management, Amravati, India
2 Department of Computer Science and Engineering, Prof. Ram Meghe College of Engineering and Management, Amravati, India

ABSTRACT

Cloud computing provides a cost-effective environment for outsourcing storage and computation. Many enterprises need to store and operate on huge amounts of data. Cloud provides many features such as application servers and web applications, but our focus is only on data storage. How to manage such huge data in the cloud is a big issue. To make data management elastic in cloud computing, the de-duplication technique is used. Data de-duplication is a compression technique that improves storage efficiency by eliminating redundant data. In this approach we propose a de-duplication technique which differs from traditional de-duplication systems: the user's privileges are also checked. Further, in this model of file-level de-duplication we apply an algorithm that makes de-duplication more efficient. We also focus on the confidentiality of sensitive data in support of de-duplication. We propose a hybrid cloud approach, in which two clouds are maintained: a public cloud and a private cloud. The private cloud works as an interface between the user and the public cloud, and provides a set of private keys to the user. We also present several new de-duplication constructions supporting authorized duplicate check in a hybrid cloud architecture.

1. INTRODUCTION

Cloud computing is the delivery of computing services over the Internet. Cloud services allow individuals and businesses to use software and hardware that are managed by third parties at remote locations. Examples of cloud services include online file storage, social networking sites, webmail, and online business applications. The cloud computing model allows access to information and computer resources from anywhere that a network connection is available. Cloud computing provides a shared pool of resources, including data storage space, networks, computer processing power, and specialized corporate and user applications. People and organizations buy or lease storage capacity from providers to store user, organization, or application data. Cloud storage has gained popularity in recent years, but the vital issue with the cloud is managing the ever-increasing volume of data. To resolve this issue, the de-duplication technique is used.

Data de-duplication is one of the emerging technologies in storage right now because it enables companies to save a lot of money on storage costs. Data de-duplication is a method of reducing storage needs by eliminating redundant data.

Only one unique instance of the data is actually retained on storage media, such as disk or tape. Redundant data is replaced with a pointer to the unique data copy. For example, a typical email system might contain 100 instances of the same one megabyte (MB) file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data de-duplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy. In this example, a 100 MB storage demand could be reduced to only one MB.
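To make this concrete, here is a minimal Python sketch (illustrative only, not the system described later in this paper) of a content-addressed store that keeps one physical copy per unique content and replaces later copies with pointers; the class and method names are our own:

```python
import hashlib

class DedupStore:
    """Minimal content-addressed store: one physical copy per unique content."""

    def __init__(self):
        self.blocks = {}      # fingerprint -> actual bytes (stored once)
        self.references = {}  # logical name -> fingerprint (pointer to the copy)

    def put(self, name, data):
        # Fingerprint the content; identical data always maps to the same key.
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = data   # first copy: store the data itself
        self.references[name] = fp   # every later copy: store only a pointer

    def get(self, name):
        return self.blocks[self.references[name]]

# 100 users attach the same 1 MB file: only one physical megabyte is kept.
store = DedupStore()
attachment = b"x" * (1024 * 1024)
for i in range(100):
    store.put(f"mail-{i}/attachment.doc", attachment)
print(len(store.blocks))      # 1 unique block
print(len(store.references))  # 100 logical references
```

Because the fingerprint is derived from the content itself, the hundredth copy costs only a dictionary entry rather than another megabyte.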

The primary benefit of de-duplication is that it greatly reduces storage capacity requirements. This drives several other advantages as well, including lower power consumption, lower cooling requirements, longer disk-based retention of data (faster recoveries), and, with some vendors, simplified disaster recovery (DR) via optimized replication.

2. LITERATURE REVIEW

Clouds are large pools of easily usable and reachable resources. In the cloud, all resources are connected virtually to create a single system image. These resources can be dynamically reconfigured to adjust to a flexible load (scale), allowing optimum resource utilization. Cloud storage refers to scalable and elastic storage capabilities that are delivered as a service using Internet technologies, with elastic provisioning and usage-based pricing that does not penalize users for changing their storage consumption without notice. A two-cloud approach is suggested for data security and privacy of users in the cloud [1,2,3,4].

Several authors have presented a family of RBAC models in which permissions are associated with roles, and users are made members of appropriate roles. This greatly simplifies the management of permissions. Roles are closely related to the concept of user groups in access control. However, a role brings together a set of users on one side and a set of permissions on the other, whereas user groups are typically defined as a set of users only [5].

For reclaiming space for further use, different file systems have been proposed [6,7,8,9]. Many authors have discussed encryption techniques that can be applied to secure user data, as well as the privacy of users. For authorizing a user on a particular file, many protocols have been derived; combinations of different protocols for proving a user's identity are also discussed [10,11,12,13,14,15].

The benefits and overhead of de-duplication are investigated in [16]. De-duplication for the mobile web is discussed in [17]. A de-duplication approach that reduces the storage capacity needed to store data, and the amount of data that has to be transferred over the network, is presented in [18]. A low-cost multi-stage parallel de-duplication solution has been proposed; the key idea is to separate duplicate detection from the actual storage backup instead of using inline de-duplication, and to partition the global index and detection requests among machines using fingerprint values. Each machine then conducts duplicate detection partition by partition independently with minimal memory usage [19]. The design and implementation of RevDedup are also discussed.

RevDedup is an efficient hybrid inline and out-of-line de-duplication system for backup storage [20]. A multi-level selective de-duplication scheme which integrates inner-VM and cross-VM duplicate elimination under a stringent resource requirement has been proposed. This scheme uses popular common data to facilitate fingerprint comparison while reducing cost, and it strikes a balance between local and global de-duplication to increase parallelism and improve reliability [21]. Four different strategies of de-duplication are specified, depending on whether de-duplication happens at the client side or the server side, and whether it is file-level or block-level. The authors discuss the proof-of-ownership (PoW) protocol and its shortcomings, and present a novel scheme for secure PoW which overcomes those drawbacks. They present three schemes which differ from each other in terms of security and performance, and then compare the third scheme with the existing solution [22].

Different types of data de-duplication strategies, such as inter-user and intra-user, are identified, along with some attacks that may happen while de-duplication is performed. The authors propose a two-phase de-duplication approach which combines inter-user and intra-user de-duplication by introducing de-duplication proxies that sit between the client and the storage server. Intra-user de-duplication is done between the client and the respective proxies, while inter-user de-duplication is done between the proxies and the storage server [23]. Another work differentiates data according to its popularity for secure data de-duplication: unpopular data is considered sensitive and given semantic security, while popular data that is used by many users is considered less sensitive and so given weaker security and better storage efficiency. A multi-layered cryptosystem has been proposed for this [24]. Client-side de-duplication is also discussed. In client-side de-duplication, if a client wants to upload a file to the server, it first sends the hash value of the file; the server checks whether the file is present, and if so informs the client that there is no need to store the file, otherwise it stores the file. But client-side de-duplication introduces a problem: in both cases the server marks the client as the owner of the file, and at that point there is no difference between the owner and the client. Moreover, any person who obtains the hash value can access the file. To overcome this problem, proof-of-ownership was introduced, in which the owner has to prove to the server that he is the actual owner of the file without sending the actual file; a streaming protocol is introduced for this [25]. The contradiction between de-duplication and encryption is discussed in [26]. A de-duplication technique for private data storage is introduced in which the client has to prove his identity using a proof-of-ownership protocol: a client who holds private data proves to the server that he/she is holding it by giving only a summary string of the file, without revealing the entire file. This private data de-duplication protocol is secure in a simulation-based framework [27]. A new concept called Dekey, in which there is no need for dedicated key management by users, has been proposed; convergent key shares are securely distributed across multiple servers [28]. A de-duplication system for VM disk image backup in a virtualization model is designed. De-duplication saves storage space but also introduces fragmentation; to reduce this, RevDedup, a de-duplication system, removes duplicates from old data rather than new data, so that when repeated content is present the old copy is replaced by a reference to the new one [29]. A new model, DupLESS, is proposed, in which a group of affiliated clients encrypt their data with the help of a key server that is separate from the storage server [30].

Cross-user de-duplication is introduced in [31]. The authors demonstrate that de-duplication can be a channel of privacy leakage in cross-user de-duplication: it can be used as a side channel that reveals information about a file. They offer three solutions to this issue: the first is to use encryption, which stops cross-user de-duplication; the second is to perform de-duplication at the server side; and the third is to assign a random threshold to every file and perform de-duplication if and only if the number of copies of the file exceeds that threshold [31]. An approach for client-side de-duplication of encrypted files in cloud storage, taking PoW as a reference, is discussed in [32]. The authors enhance and generalize the convergent encryption method. The proposed scheme protects data confidentiality against both outside adversaries and an honest-but-curious cloud server, and it allows a bounded amount of one-time leakage of the target file before it starts to execute, as compared to PoW [32]. Reverse de-duplication is introduced, in which newer data is written contiguously and older data segments that share chunks with the newer segments reference those chunks [33]. A live de-duplication file system is introduced which enables de-duplicated storage of VM images in an open-source cloud. This file system allows general I/O operations such as read, write, modify, and delete while enabling inline de-duplication, and it mainly targets VM image storage [34]. An in-network de-duplication is proposed for storage-aware software-defined networks, for efficient handling of data and to reduce the overhead of network transmission [35].

Different algorithms have been suggested to find duplicate records in a database [36]. Other authors focus on archival data; this approach adopts a write-once policy, which prevents the accidental destruction of data, and the model also has the ability to coalesce duplicate copies of blocks [37]. To improve the performance of de-duplication, an elasticity-aware algorithm is proposed [38].

2.1 Motivation behind the identified problem

Cloud storage services are becoming very popular nowadays. Cloud provides a better way of storage at an efficient cost. Cloud has many features such as web services, application servers, hosted instances, virtualization, and terminal services. These features attract researchers to make the cloud more efficient and more powerful.

Many people use the cloud as a storage server. As the number of users increases, more space is required, so the issue with the cloud is managing such a huge amount of data. Different techniques have been introduced for data management; the de-duplication technique is one of them. Although de-duplication has many advantages, it also has some security and design issues. This motivates us to propose a model which addresses the security issues of de-duplication and provides authorized de-duplication in the cloud.

3. PROBLEM FORMULATION

This section formulates the problem. It first describes different issues in cloud computing and how authorized de-duplication, which has gained considerable attention from the research community in the recent past, can overcome them; it then specifies the process through which the present work addresses the defined problems.

• Cloud computing provides seemingly unlimited "virtualized" resources to users as services across the whole Internet, while hiding platform and implementation details.

• Today’s cloud service providers offer both highly available storage and massively parallel computing resources at relatively low costs.

• As cloud computing becomes prevalent, an increasing amount of data is being stored in the cloud and shared by users with specified privileges, which define the access rights of the stored data.

• One critical challenge of cloud storage services is the management of the ever-increasing volume of data.

• To make data management scalable in cloud computing, de-duplication has been a well-known technique and has attracted more and more attention recently.

• Data de-duplication is a specialized data compression technique for eliminating duplicate copies of repeating data in storage.

• Instead of keeping multiple data copies with the same content, de-duplication eliminates redundant data by keeping only one physical copy and referring other redundant data to that copy.

• To allow only authorized persons to upload or download their documents, and to prevent the leaking of document information, authorized de-duplication is required; that is, only an authorized person is allowed to perform the duplicate check.

This is done by the owner of the file granting access privileges to other users. In other words, the goal of the adversary is to retrieve and recover files that do not belong to them.
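The following is a minimal Python sketch of such an authorized duplicate check, assuming a hypothetical privilege table held by the private cloud; the names and data structures are illustrative, not the implemented system:

```python
import hashlib

# Hypothetical privilege table kept on the private cloud: which users may
# run a duplicate check against which files.
PRIVILEGES = {"alice": {"report.pdf"}, "bob": set()}
STORED_FINGERPRINTS = {"report.pdf": hashlib.sha256(b"contents").hexdigest()}

def authorized_duplicate_check(user, filename, fingerprint):
    """Answer a duplicate query only for users holding the right privilege."""
    if filename not in PRIVILEGES.get(user, set()):
        raise PermissionError(f"{user} is not authorized for duplicate check")
    return STORED_FINGERPRINTS.get(filename) == fingerprint

# An authorized user learns whether the file already exists...
print(authorized_duplicate_check(
    "alice", "report.pdf", hashlib.sha256(b"contents").hexdigest()))
# ...while an adversary without the privilege learns nothing.
try:
    authorized_duplicate_check("bob", "report.pdf", "deadbeef")
except PermissionError as e:
    print(e)
```

The point of the privilege gate is that the duplicate check itself leaks information ("someone already stored this file"), so it must be answered only for authorized users.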

4. PROPOSED APPROACH

In our proposed approach we consider a college as the client. We consider three types of files in our implementation: doc, txt, and pdf files. Many authors have proposed different techniques for data de-duplication, as discussed in Section 2. As discussed in Section 3, data de-duplication is needed to save a huge amount of space. In this section we introduce our proposed approach, which is somewhat different from the previous approaches.

We propose a model that can protect data by including differential privileges of users in the duplicate check. The owner of the file decides what privileges should be given to other users who want to access that file. In our proposed model we protect sensitive data by using the hybrid cloud concept. In a hybrid cloud there are two clouds, one private and one public. The client first interacts with the private cloud; all duplicate checking and key distribution is done on the private cloud only. A system-generated key is used, so privacy is preserved. The public cloud is used for storage purposes only. The diagrammatic view of our proposed approach is given below.


Figure 1: Diagrammatic view of the proposed approach

In our proposed approach the public cloud is a centralized database whose only purpose is to store files. The private cloud holds multiple colleges' private data, such as the access privileges of users. De-duplication checking is done on the private cloud only. The user sends the file as plain text; encryption of the file is done on the private cloud. The key required for encrypting the file is also generated on the private cloud. The file is then stored in encrypted form in the public cloud.
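As a rough sketch of this flow, assuming Python's third-party cryptography package (whose Fernet construction is AES-based) as a stand-in for the AES step, and with all names illustrative:

```python
import hashlib
from cryptography.fernet import Fernet  # AES-based; stands in for plain AES

PUBLIC_CLOUD = {}        # fingerprint -> encrypted blob (storage only)
PRIVATE_CLOUD_KEYS = {}  # fingerprint -> key (never leaves the private cloud)

def upload(plaintext: bytes) -> str:
    """Private-cloud side: duplicate check, key generation, encryption."""
    fp = hashlib.sha256(plaintext).hexdigest()
    if fp in PUBLIC_CLOUD:       # duplicate check happens on the private cloud
        return fp                # file already stored; nothing is re-uploaded
    key = Fernet.generate_key()  # system-generated key, kept on private cloud
    PRIVATE_CLOUD_KEYS[fp] = key
    PUBLIC_CLOUD[fp] = Fernet(key).encrypt(plaintext)  # only ciphertext leaves
    return fp

def download(fp: str) -> bytes:
    return Fernet(PRIVATE_CLOUD_KEYS[fp]).decrypt(PUBLIC_CLOUD[fp])

fp = upload(b"syllabus for semester one")
assert upload(b"syllabus for semester one") == fp  # second upload deduplicated
assert download(fp) == b"syllabus for semester one"
```

Keeping the key table on the private side matches the requirement above that only ciphertext ever reaches the public cloud.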

4.1 Working of the proposed approach

The flowchart for the proposed approach helps visualize its stepwise working and represents the process of file de-duplication. The flowchart is categorized into two parts: modules and internal working. Figure 2 describes the different modules, the different activities performed by these modules, and the different components of these modules.

Figure 3 describes the internal working of the model, i.e., how de-duplication is done. The cloud administrator has the function of activating a college account, which means providing space on the public cloud; the authority to deactivate a college account is also given to the cloud administrator. The college administrator sends a request for space to the cloud administrator; the cloud administrator approves the request, after which the cloud administrator's role is over.

The members involved in this module are the college, the college staff, the college's passed-out students (alumni), and the college's present students.

The college administrator registers the specified branches of the college and then registers the staff members of the college with some basic information. After staff registration, each staff member fills in his or her personal information and can change the password. The authority to upload documents to the public cloud is given to the college administrator, the college staff, and the passed-out students of the college. The downloading authority for a specific file depends upon the owner of the file: the owner specifies the access privileges for a particular file as well as the de-duplication permission. If a file owner has decided not to give anybody de-duplication permission for a particular file, then the file simply gets stored; by doing this we maintain the privacy of the user. A minimal sketch of this permission model is given below.
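In this illustrative Python sketch, the record layout and function names are our own assumptions, not the implemented system:

```python
from dataclasses import dataclass, field

@dataclass
class FileRecord:
    """Illustrative per-file record: the owner controls both permissions."""
    owner: str
    can_download: set = field(default_factory=set)  # access privileges
    allow_dedup_check: bool = True  # owner may opt out to preserve privacy

def handle_upload(record: FileRecord, is_duplicate: bool) -> str:
    # If the owner denied de-duplication permission, the file is stored
    # as its own copy even when a duplicate exists.
    if is_duplicate and record.allow_dedup_check:
        return "reference existing copy"
    return "store new copy"

private_file = FileRecord(owner="staff01", allow_dedup_check=False)
print(handle_upload(private_file, is_duplicate=True))  # store new copy
```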

De-duplication checking is done while a document is being uploaded. In our module we consider only three types of files: doc, txt, and pdf.

Figure 2: Modules of our model


Figure 3: Internal working of our model

5. EXPERIMENTAL RESULT AND EVALUATION

In this section we discuss one example to show how our work is implemented. After that we compare the evaluation parameters of our proposed approach with existing approaches.

Consider a college that has a private cloud, and a user of that college, with upload rights to the public cloud, who wants to upload a Word file. First the file is divided into pages, and a hash key is generated per page. These hash keys are stored on the private cloud of the college. The file is then encrypted using the AES algorithm on the private cloud and stored on the public cloud.
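A sketch of the per-page hashing step, assuming pages have already been extracted as strings (the helper name is ours):

```python
import hashlib

def page_hashes(pages):
    """One SHA-256 'hash key' per page, recorded on the private cloud."""
    return [hashlib.sha256(p.encode()).hexdigest() for p in pages]

doc = ["page one text", "page two text", "page three text"]
keys = page_hashes(doc)
# These keys are stored on the private cloud; the file itself is then
# AES-encrypted on the private cloud and the ciphertext is pushed to the
# public cloud, as described above.
for i, k in enumerate(keys, 1):
    print(f"page {i}: {k[:16]}...")
```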

If another user wants to upload the same document with some minor changes, say on 2 or 3 pages, then new hash keys are generated only for those 2 or 3 pages, since each hash key is derived from the content of the page. The file will then be shown in the partial-duplicate-document section.
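Under the same assumptions, comparing the stored page hashes with the incoming ones identifies exactly which pages are new, flagging the upload as a partial duplicate:

```python
import hashlib

def fingerprints(pages):
    return {hashlib.sha256(p.encode()).hexdigest() for p in pages}

stored = fingerprints(["page one", "page two", "page three"])
incoming = fingerprints(["page one", "page two", "page three (edited)"])

new_pages = incoming - stored  # only the changed pages need new hash keys
shared = incoming & stored     # overlap flags the file as a partial duplicate
if shared and new_pages:
    print(f"partial duplicate: {len(shared)} pages match, "
          f"{len(new_pages)} new")
```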

5.1 Evaluation

Table 1: Evaluation parameter comparison

Parameter | Proposed approach | Two-phase de-duplication [23] | Multi-layered cryptosystem [24]
Space | Partial de-duplication at file level means more space saving | Simple file-level de-duplication is provided | Not implemented
Security | Authorized de-duplication | Inter-user de-duplication is less secure | Provides more privacy to users who own unpopular files, while privacy is degraded for users with popular files
Bandwidth | If the file is a duplicate then it will not get uploaded | Bandwidth saving for intra-user; more bandwidth saving for inter-user | Not implemented
Computational overhead | Uses the AES algorithm for encryption and decryption | Not implemented | Overhead is caused by convergent and symmetric encryption

6. CONCLUSION AND FUTURE SCOPE

6.1 Conclusion

This work describes the following:

• The existing literature on data de-duplication techniques and hybrid cloud concepts is analyzed and studied to formulate the proposed approach.

• The proposed work is based on a hybrid cloud model.

• The hybrid cloud model is developed to avoid duplicate documents being uploaded to the public cloud.

• It provides secure cloud servers where the organization's access privileges as well as secret keys are stored.

• Authorized de-duplication is done by providing access permission and de-duplication permission.


• The encryption and decryption of documents is done using the AES algorithm.

6.2 Future Scope

Everyone is talking about the benefits of storing data in the cloud: for sharing information among friends, to simplify moving data between different mobile devices, and for small businesses to back up and provide disaster recovery capabilities. But a key question remains: how to manage massive amounts of data in enterprise data centers. De-duplication is emerging as the technique for managing such huge amounts of data. The proposed work provides space saving as well as security for the data, but there are still various aspects which need to be addressed in the future.

• The proposed work addresses only three types of files (txt, doc, and pdf); in the future this work can be extended to other file types such as image, sound, and video files.

• We can also apply a searching technique that speeds up the operation by separating the hash keys according to file type.

REFERENCES

[1]. Jin Li, Yan Kit Li, Xiaofeng Chen, Patrick P. C. Lee, Wenjing Lou, "A Hybrid Cloud Approach for Secure Authorized Deduplication", IEEE Transactions on Parallel and Distributed Systems, vol. PP, no. 99, 2014.
[2]. S. Bugiel, S. Nürnberger, A. Sadeghi and T. Schneider, "Twin clouds: An architecture for secure cloud computing", in Workshop on Cryptography and Security in Clouds (WCSC 2011), 2011.
[3]. G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson and D. Song, "Provable data possession at untrusted stores", in ACM CCS '07, pages 598-609, 2007.
[4]. K. Zhang, X. Zhou, Y. Chen, X. Wang and Y. Ruan, "Sedic: privacy-aware data intensive computing on hybrid clouds", in Proceedings of the 18th ACM Conference on Computer and Communications Security, CCS '11, New York, NY, USA, ACM, pages 515-526, 2011.
[5]. Ravi S. Sandhu, Edward J. Coyne, Hal L. Feinstein and Charles E. Youman, "Role-Based Access Control Models", IEEE Computer, Volume 29, Number 2, pages 38-47, February 1996.
[6]. William J. Bolosky, John R. Douceur, David Ely, and Marvin Theimer, "Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs", in Proceedings of the International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), ACM, 2000.
[7]. J. R. Douceur, A. Adya, W. J. Bolosky, D. Simon and M. Theimer, "Reclaiming space from duplicate files in a serverless distributed file system", in ICDCS, pages 617-624, 2002.
[8]. I. Clarke, O. Sandberg, B. Wiley, and T. Hong, "Freenet: A Distributed Anonymous Information Storage and Retrieval System", ICSI Workshop on Design Issues in Anonymity and Unobservability, July 2000.
[9]. Atul Adya, William J. Bolosky, Miguel Castro, Gerald Cermak, Ronnie Chaiken, John R. Douceur, Jon Howell, Jacob R. Lorch, Marvin Theimer, Roger P. Wattenhofer, "FARSITE: Federated, available, and reliable storage for an incompletely trusted environment", ACM SIGOPS Operating Systems Review, 2002.
[10]. M. Bellare, S. Keelveedhi and T. Ristenpart, "Message-locked encryption and secure deduplication", in EUROCRYPT, pages 296-312, 2013.
[11]. Jiawei Yuan and Shucheng Yu, "Secure and Constant Cost Public Cloud Storage Auditing with Deduplication", IEEE Conference, pages 145-153, 2013.
[12]. Ari Juels and Burton S. Kaliski, "PORs: proofs of retrievability for large files", in CCS '07: ACM Conference on Computer and Communications Security, pages 584-597, 2007.
[13]. Nesrine Kaaniche and Maryline Laurent, "A Secure Client Side Deduplication Scheme in Cloud Storage Environments", IFIP International Conference on New Technologies, Mobility and Security, 2014.
[14]. C. Wang, Z. Guang Qin, J. Peng, and J. Wang, "A novel encryption scheme for data deduplication system", in Communications, Circuits and Systems (ICCCAS), International Conference on, pages 265-269, 2010.
[15]. Qingji Zheng, Shouhuai Xu, "Secure and efficient proof of storage with deduplication", Proc. of ACM Conference on Data and Application Security and Privacy, 2012.
[16]. Chulmin Kim, Ki-Woong Park, KyoungSoo Park and Kyu Ho Park, "Rethinking Deduplication in Cloud: From Data Profiling to Blueprint", Networked Computing and Advanced Information Management (NCM), 7th International Conference on, June 2011.
[17]. Ricardo Filipe, Joao Barreto, "End-to-End Data Deduplication for the Mobile Web", Proceedings of the Tenth IEEE International Symposium on Network Computing and Applications, NCA 2011, August 25-27, 2011.
[18]. Jyoti Malhotra, Priya Ghyare, "A Novel Way of Deduplication Approach for Cloud Backup Services Using Block Index Caching Technique", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 3, Issue 7, July 2014.
[19]. Wei Zhang, Tao Yang, Gautham Narayanasamy and Hong Tang, "Low-Cost Data Deduplication for Virtual Machine Backup in Cloud Storage", in Proceedings of the 5th USENIX Conference on Hot Topics in Storage and File Systems, June 2013.
[20]. Yan-Kit Li, Min Xu, Chun-Ho Ng, and Patrick P. C. Lee, "Efficient Hybrid Inline and Out-of-line Deduplication for Backup Storage", ACM Transactions on Storage (TOS), 11(1), pp. 2:1-2:21, February 2015.
[21]. Wei Zhang, Hong Tang, Hao Jiang, Tao Yang, Xiaogang Li, Rachael Zeng, "Multi-level Selective Deduplication for VM Snapshots in Cloud Storage", Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on, pages 550-557, 2012.
[22]. Roberto Di Pietro, Alessandro Sorniotti, "Boosting Efficiency and Security in Proof of Ownership for Deduplication", 7th ACM Symposium on Information, Computer and Communications Security (AsiaCCS 2012), Seoul, Korea, May 2012.
[23]. Pierre Meye, Philippe Raipin, Frederic Tronel and Emmanuelle Anceaume, "A secure two-phase data deduplication scheme", IEEE Intl Conf on High Performance Computing and Communications, IEEE 6th Intl Symp on Cyberspace Safety and Security, IEEE 11th Intl Conf on Embedded Software and Systems (HPCC, CSS, ICESS), pages 802-809, 2014.
[24]. J. Stanek, A. Sorniotti, E. Androulaki, and L. Kencl, "A secure data deduplication scheme for cloud storage", Technical Report, 2013.
[25]. S. Halevi, D. Harnik, B. Pinkas and A. Shulman-Peleg, "Proofs of ownership in remote storage systems", in ACM Conference on Computer and Communications Security, pages 491-500, ACM, 2011.
[26]. M. W. Storer, K. Greenan, D. D. E. Long, and E. L. Miller, "Secure data deduplication", in Proc. of StorageSS, 2008.
[27]. W. K. Ng, Y. Wen and H. Zhu, "Private data deduplication protocols in cloud storage", in Proceedings of the 27th Annual ACM Symposium on Applied Computing, pages 441-446, ACM, 2012.
[28]. J. Li, X. Chen, M. Li, J. Li, P. Lee, and W. Lou, "Secure deduplication with efficient and reliable convergent key management", IEEE Transactions on Parallel and Distributed Systems, 2013.
[29]. C. Ng and P. Lee, "RevDedup: A reverse deduplication storage system optimized for reads to latest backups", in Proc. of APSYS, April 2013.
[30]. M. Bellare, S. Keelveedhi and T. Ristenpart, "DupLESS: Server-aided encryption for deduplicated storage", in USENIX Security Symposium, 2013.
[31]. D. Harnik, B. Pinkas and A. Shulman-Peleg, "Side Channels in Cloud Services: Deduplication in Cloud Storage", IEEE Security and Privacy Magazine, special issue on Cloud Security, 2010.
[32]. J. Xu, E.-C. Chang, and J. Zhou, "Weak leakage-resilient client-side deduplication of encrypted data in cloud storage", in Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security, ASIA CCS '13, pages 195-206, New York, NY, USA, 2013.
[33]. Zhike Zhang, Preeti Gupta, Avani Wildani, Ignacio Corderi, Darrell D. E. Long, "Reverse Deduplication: Optimizing for Fast Restore", in FAST, 2013.
[34]. Chun-Ho Ng, Mingcao Ma, Tsz-Yeung Wong, Patrick P. C. Lee, and John C. S. Lui, "Live Deduplication Storage of Virtual Machine Images in an Open-Source Cloud", Proceedings of the ACM/IFIP/USENIX 12th International Middleware Conference, 2011.
[35]. Yu Hua, Xue Liu, Dan Feng, "Smart In-Network Deduplication for Storage-aware SDN", Proceedings of ACM SIGCOMM, ACM SIGCOMM Computer Communication Review, Volume 43, Issue 4, pages 509-510, 2013.
[36]. Deepa Karunakaran and Rangarajan Rangaswamy, "Optimization Techniques to Record Deduplication", Journal of Computer Science 8 (9): 1487-1495, 2012.
[37]. S. Quinlan and S. Dorward, "Venti: a new approach to archival storage", in Proc. USENIX FAST, January 2002.
[38]. Yufeng Wang, Chiu C. Tan, Ningfang Mi, "Using Elasticity to Improve Inline Data Deduplication Storage Systems", IEEE International Conference on Cloud Computing (CLOUD), AK, USA, 2014.

AUTHORS

Prachi D. Thakar received her B.E. (Computer Science & Engineering) from SGB Amravati University in 2010 and is pursuing her M.E. (Computer Engineering) from SGB Amravati University. She is currently working as a lecturer in the Dept. of First Year Engineering at Prof. Ram Meghe College of Engineering & Management, Badnera-Amravati.

Dinesh G. Harkut received his B.E. (Computer Science & Engineering) and M.E. (Computer Science & Engineering) from SGB Amravati University in 1991 and 1998 respectively. He completed his master's in Business Management and obtained his Ph.D. in Business Management from SGB Amravati University in 2013 while serving as full-time faculty in the Dept. of Computer Science & Engineering at Prof. Ram Meghe College of Engineering & Management, Badnera-Amravati. His research interests are Embedded Systems and RTOS.
