International Journal of Engineering Trends and Technology (IJETT) – Volume 22 Number 1- April 2015
A Framework for Collective Auditing Method in Third Party Services
D. Suresh 1, C.P.V.N.J. Mohan Rao 2
1 Final M.Tech Student, 2 Professor
1,2 Dept. of CSE, Avanthi Institute of Engineering & Technology, Narsipatnam.
Abstract: Many third-party servers provide storage space for large numbers of data owners, who outsource their information to cloud service providers. As web services grow in popularity, the number of users grows with them, and maintaining security over the network for so many concurrent user transactions is a hard task. We therefore introduce a novel method to verify, or audit, the multiple files of multiple data owners in a cloud.
I. INTRODUCTION
In storage system frameworks it is extremely critical to ensure the confidentiality and security of the stored data. The growing number of people using cloud services creates more threats to data stored with a cloud service. The relevant security parameters include authentication and authorization, availability, confidentiality and integrity, key sharing and key management, and auditing and intrusion detection [1][2].
Occasionally, cloud service providers may be dishonest. They could discard data that has not been accessed, or is rarely accessed, to save storage space, and claim that the data is still correctly stored in the cloud. Consequently, owners need to be convinced that their data is correctly stored in the cloud [3].
Data should be protected during its whole life-cycle. Authentication and authorization are the most essential security services that any storage system should support. Authentication is defined as the process of validating the identity of an entity (called entity authentication or identification) or the source of a message (also called message authentication). The storage servers should check the identity of the producers, consumers, and managers before giving them appropriate access (e.g., read or write) to the data. The act of granting appropriate privileges to clients is called authorization.
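As a minimal illustration of this authenticate-then-authorize flow (our own sketch; the user, object, and privilege names are hypothetical and not part of any proposed protocol):

```python
# Minimal sketch of the authenticate-then-authorize flow a storage
# server performs before serving a read or write (illustrative only).
USERS = {"alice": "s3cret"}                      # entity -> credential (toy store)
ACL = {("alice", "report.dat"): {"read"}}        # (entity, object) -> privileges

def authenticate(user: str, credential: str) -> bool:
    """Entity authentication: validate the claimed identity."""
    return USERS.get(user) == credential

def authorize(user: str, obj: str, op: str) -> bool:
    """Authorization: allow only the privileges granted to the entity."""
    return op in ACL.get((user, obj), set())

def access(user: str, credential: str, obj: str, op: str) -> str:
    if not authenticate(user, credential):
        raise PermissionError("authentication failed")
    if not authorize(user, obj, op):
        raise PermissionError("operation not authorized")
    return f"{op} on {obj} permitted for {user}"

print(access("alice", "s3cret", "report.dat", "read"))
```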
Authentication can be mutual; that is, the producers and consumers of the data may need to authenticate the storage servers in order to build a reciprocal trust relationship. Most organizations require continuous data availability. System failures and denial-of-service (DoS) attacks are extremely hard to anticipate, and a system that embeds strong cryptographic mechanisms yet does not guarantee availability, backup, and recovery is of little use. Typically, systems are made fault tolerant by replicating data or entities that are considered single points of failure. However, replication incurs a high cost of maintaining consistency between replicas [4].
Storage systems must keep audit logs of critical activities. Audit logs are essential for system recovery, intrusion detection, and computer forensics. Extensive research has been carried out in the field of intrusion detection [8]. Intrusion detection systems (IDS) use various logs (e.g., system logs and data access logs) and system streams (e.g., RPCs, network flows) for detecting and reporting attacks. Deploying IDS at multiple levels and correlating these events is important [5][6].
As data gets produced, transferred, and stored at one or more remote storage servers, it becomes vulnerable to unauthorized disclosures, unauthorized modifications, and replay attacks. An attacker can change or modify the data while it travels through the network or when it is stored on disks or tapes. Further, a malicious server can replace current files with valid old versions [7,8]. Accordingly, securing data while in transit as well as when it resides on physical media is critical. Confidentiality of data against unauthorized users can be achieved by using encryption, while data integrity (which addresses the unauthorized modification of data) can be attained using digital signatures and message authentication codes. Replay attacks, where an adversary replays old sessions, can be prevented by guaranteeing freshness of data, i.e., by making every instance of the data unique.
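As a minimal sketch of these last two mechanisms (our own illustration; the key handling and message format are assumptions, not part of the proposed protocol), a message authentication code detects unauthorized modification while a per-message nonce guarantees freshness and defeats replays:

```python
import hmac
import hashlib
import secrets

def protect(data: bytes, key: bytes) -> dict:
    """Attach a per-message nonce and an HMAC tag to a data block.

    The nonce makes each instance of the data unique (replay protection);
    the HMAC tag detects unauthorized modification.
    """
    nonce = secrets.token_bytes(16)                 # fresh randomness per message
    tag = hmac.new(key, nonce + data, hashlib.sha256).digest()
    return {"nonce": nonce, "data": data, "tag": tag}

def verify(msg: dict, key: bytes, seen_nonces: set) -> bool:
    """Reject tampered messages and replayed nonces."""
    if msg["nonce"] in seen_nonces:                 # replay: nonce already used
        return False
    expected = hmac.new(key, msg["nonce"] + msg["data"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, msg["tag"]):   # integrity failure
        return False
    seen_nonces.add(msg["nonce"])
    return True

key = secrets.token_bytes(32)
seen = set()
m = protect(b"block-17 contents", key)
assert verify(m, key, seen)       # first delivery accepted
assert not verify(m, key, seen)   # replayed message rejected
```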
II. RELATED WORK
Protecting data in the cloud involves several challenges, such as:
 The data is highly distributed across the network, which increases management complexity and introduces new vulnerabilities.
 In a decentralized system, the storage is shared by users in various security domains with different policies.
 Extended retention times give attackers larger windows of opportunity.
These factors also raise compatibility issues for the data migration process, including migration between encryption algorithms.
We now briefly review some basic data storage protection mechanisms. Access control typically incorporates both authentication and authorization. Centralized and decentralized access management are two models for distributed storage systems [9].
Both models require entities to be validated against predefined policies before accessing sensitive data. These access privileges need to be periodically reviewed or re-granted. Encryption is the standard technique for providing confidentiality protection. This depends on the underlying encryption keys being carefully managed, including procedures for scalable key sharing, refreshing, and revocation. In distributed settings, secret-sharing schemes can also be used to split sensitive data into multiple component shares. Storage integrity violations can either be accidental (from hardware/software malfunctions) or malicious attacks [4,10]. Accidental corruption of data is commonly protected against by mirroring, or the use of basic parity or erasure codes.
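As a minimal sketch of the parity idea (our own illustration, not drawn from the reviewed literature), a single XOR parity block lets any one lost data stripe be rebuilt from the survivors:

```python
from functools import reduce

def xor_parity(stripes: list[bytes]) -> bytes:
    """Compute the XOR parity block over equal-length data stripes."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripes)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover a single lost stripe from the survivors and the parity."""
    return xor_parity(surviving + [parity])

stripes = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(stripes)
lost = stripes.pop(1)                       # simulate losing one stripe
assert rebuild(stripes, parity) == lost     # parity restores it exactly
```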
Detection of unauthorized data modification requires the use of Message Authentication Codes (MACs) or digital signature schemes. The latter gives the stronger notion of non-repudiation, which prevents an entity from successfully denying an unauthorized modification of data. Data availability mechanisms include replication and redundancy. Recovery mechanisms may also be needed in order to repair damaged data, for instance re-encrypting data when a key is lost. Intrusion detection or prevention mechanisms detect or prevent malicious activities that could result in theft or damage of data. Audit logs can also be used to aid recovery and provide evidence of security breaches, as well as being important for compliance.
General threats to confidentiality include sniffing storage traffic, and snooping on buffer caches and de-allocated memory. File-system profiling is an attack that uses access type, time stamps of last modification, file names, and other file-system metadata to gain knowledge about the storage system's operation. Storage and backup media may also be stolen in order to access data [11][12].
General threats to integrity include storage jamming (a malicious but surreptitious modification of stored data) to alter or replace the original data, metadata modification to disrupt a storage system, subversion attacks to gain unauthorized access and modify critical system data, and man-in-the-middle attacks that change data contents in transit.
General threats to availability include (distributed) denial of service, disk fragmentation, network disruption, hardware failure, and file deletion. Centralized data-location management or indexing servers can be points of failure for denial-of-service attacks, which can be launched using malicious code. Long-term data archiving systems pose additional challenges, such as long-term key management and backward compatibility, which undermine availability if they are not managed carefully [13][14].
General threats to authentication include wrapping attacks on SOAP messages to access unauthorized data, federated authentication through browsers that can potentially open a way to steal authentication tokens, and replay attacks that mislead the system into processing unauthorized operations. Cloud-based storage and virtualization pose further threats. For instance, outsourcing means data owners lose physical control over their data, raising issues of auditing, trust, obtaining support for investigations, accountability, and compatibility of security frameworks. Multi-tenant virtualization environments can cause applications to lose their security context, enabling an adversary to attack other virtual machine instances hosted on the same physical server.
III. PROPOSED SYSTEM
In the proposed work we designed a protocol with three roles: clients (the data owners), the cloud service provider, and the verifier. Clients store their data with the cloud service provider, and multiple data owners are present in the cloud network. Each client stores its data in encrypted form at the cloud service provider.
The plain text is encrypted by the random alphabetical encryption process, which is shown below:
Random Alphabetical Encryption and Decryption Algorithm:

Encryption: P = plain text
Key with variable length (128, 192, or 256 bits)
• Represented as a matrix (array) of bytes with 4 rows and Nk columns, Nk = key length / 32
• Block of length 128 bits = 16 bytes, represented as a matrix (array) of bytes with 4 rows and Nb columns, Nb = block length / 32
• For a block of 128 bits = 16 bytes, Nb = 4

State = X
1. Add_CycleKey(State, Key0): each byte of the state is combined with a block of the round key using bitwise XOR.
for r = 1 to (Nr - 1)
    a. Sub_Bytes(State, S-box): a non-linear substitution step where each byte is replaced with another according to a lookup table.
    b. Transfer_Rows(State): a transposition step where the last three rows of the state are shifted cyclically by a certain number of steps.
    c. Combine_Columns(State): a mixing operation which operates on the columns of the state, combining the four bytes in each column.
    d. Add_CycleKey(State, Keyr)
end for
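The steps above follow the AES (Rijndael) round structure, so a standard AES implementation can stand in for the described cipher when experimenting. Below is a minimal Python sketch assuming the third-party cryptography package; the CBC mode, PKCS7 padding, and 128-bit key are our illustrative choices, not mandated by the scheme:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def encrypt_block_data(plaintext: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt data with AES-CBC; returns (iv, ciphertext).

    AES internally performs the Add_CycleKey / Sub_Bytes /
    Transfer_Rows / Combine_Columns rounds described above.
    """
    iv = os.urandom(16)                              # fresh initialization vector
    padder = padding.PKCS7(128).padder()             # pad to the 16-byte block size
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv, encryptor.update(padded) + encryptor.finalize()

key = os.urandom(16)                                 # 128-bit key -> Nk = 4, Nr = 10
iv, ct = encrypt_block_data(b"data owner's file block", key)
```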
In block-mode operation, if the blocks were encrypted totally independently, the encrypted message could be vulnerable to some trivial attacks. Clearly, if two identical blocks were encrypted with no additional context, using the same function and key, the corresponding encrypted blocks would also be identical. This is why block ciphers are typically used in various modes of operation. Operation modes introduce an additional variable into the function that holds the state of the computation. The state is changed during the encryption/decryption process and combined with the contents of each block. This approach mitigates the problem of identical blocks and may also serve other purposes. The initial value of the additional variable is known as the initialization vector. The differences between block cipher operating modes lie in how they combine the state (initialization) vector with the input block, and in the way the vector value is changed during the computation. Stream ciphers hold and change their internal state by design and generally do not support explicit initialization vector values on their input.
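A toy sketch of this point (our own illustration; the "block cipher" here is a keyed SHA-256 truncation, deliberately insecure): without chaining, identical plaintext blocks yield identical ciphertext blocks, while CBC-style chaining through an initialization vector hides the repetition:

```python
import hashlib

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    """Stand-in 16-byte 'cipher' for illustration only (not secure)."""
    return hashlib.sha256(key + block).digest()[:16]

def ecb_like(blocks, key):
    """Each block encrypted independently: repeated blocks leak through."""
    return [toy_block_cipher(b, key) for b in blocks]

def cbc_like(blocks, key, iv):
    """Each block XOR-ed with the previous ciphertext before encryption."""
    out, prev = [], iv
    for b in blocks:
        mixed = bytes(x ^ y for x, y in zip(b, prev))
        prev = toy_block_cipher(mixed, key)
        out.append(prev)
    return out

blocks = [b"SIXTEEN BYTES!!!", b"SIXTEEN BYTES!!!"]   # two identical blocks
key, iv = b"k" * 16, b"\x00" * 16
assert ecb_like(blocks, key)[0] == ecb_like(blocks, key)[1]          # repetition visible
assert cbc_like(blocks, key, iv)[0] != cbc_like(blocks, key, iv)[1]  # repetition hidden
```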
[Figure 1: System model with data owners, the cloud service, the auditor, and the user. Each data owner (1) encrypts the file and generates signatures, (2) sends meta details to the auditor, and (3) sends the encrypted file to the cloud service. The auditor (4) receives signatures from the multiple owners, (5) collects signatures from receivers, (6) monitors the files, and (7) sends status to the data owners and the user, who (8) obtains the decrypted file.]
KEY GENERATION PROCESS:

KeyGeneration(Ks) → (pt, st, sh). The key generation algorithm takes no input other than the implicit security parameter Ks. It randomly chooses two prime numbers from the prime set P, then calculates the primitive roots of the two primes; these two primitive roots, st and sh respectively, belong to the prime-number group and serve as the tag key and the hash key. It outputs the public tag key pt = g^st in G2, the secret tag key st, and the secret hash key sh. A hash of sh is then calculated using a simple hash function, i.e., the second random value is given as input to a hash function, which works as follows. For example, suppose each input is an integer I in the range 0 to N−1 and the output must be an integer h in the range 0 to n−1, where N is much larger than n. Then the hash function could be h = I mod n (the remainder of I divided by n), or h = (I × n) ÷ N (the value I scaled down by n/N and truncated to an integer), among many other formulas.
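A minimal Python sketch of this key-generation step, under our own simplifying assumptions (toy-sized primes, trial-division primality testing, a brute-force primitive-root search, and illustrative group parameters g and p):

```python
import secrets

def is_prime(n: int) -> bool:
    """Trial-division primality test (fine for toy-sized parameters)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def primitive_root(p: int) -> int:
    """Smallest generator of the multiplicative group mod p (brute force)."""
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(1, p)}) == p - 1:
            return g
    raise ValueError("no primitive root found")

def key_generation(prime_pool: list[int]):
    """Pick two primes; take their primitive roots as the tag and hash keys."""
    p1, p2 = secrets.choice(prime_pool), secrets.choice(prime_pool)
    st, sh = primitive_root(p1), primitive_root(p2)   # secret tag key, secret hash key
    g, p = 5, 2 ** 31 - 1                             # illustrative group parameters
    pt = pow(g, st, p)                                # public tag key pt = g^st mod p
    return pt, st, sh

def simple_hash(I: int, n: int) -> int:
    """The example hash from the text: h = I mod n."""
    return I % n

primes = [q for q in range(1000, 2000) if is_prime(q)]
pt, st, sh = key_generation(primes)
```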
Signature Generation(M, st, sh) → T. The signature generation algorithm takes each data component M, the secret tag key st, and the secret hash key sh as inputs. It first chooses s random values x1, x2, …, xs and computes uj = g^xj in G1 for all j ∈ [1, s]. For each data block mi (i ∈ [1, n]), it computes a data tag as

ti = ( h(sh, Wi) · ∏_{j=1..s} uj^mij )^st

where Wi = FID||i (the “||” denotes the concatenation operation), in which FID is the identifier of the data and i represents the block number of mi. It outputs the set of data tags T = {ti} i∈[1,n].

Chall(Minfo) → C. The challenge algorithm takes the brief information of the data, Minfo, as the input. It selects some different data blocks to construct the challenge set Q and generates a random number vi for each chosen data block mi (i ∈ Q). It computes the challenge stamp R = (pt)^r by randomly choosing a number r ∈ Z*p. It outputs the challenge as

C = ( {i, vi} i∈Q, R )
Proof(M, T, C) → P. The proving algorithm takes as inputs the data M, the data tags T, and the received challenge C. The proof consists of the tag proof TP and the data proof DP. The tag proof is generated as

TP = ∏_{i∈Q} ti^vi

To generate the data proof, it first computes the sector linear combination MPj of all the challenged data blocks, for each j ∈ [1, s], as

MPj = Σ_{i∈Q} vi · mij

Then it generates the data proof DP as

DP = ∏_{j=1..s} e(uj, R)^MPj

It outputs the proof P = (TP, DP).
Verify(C, P, sh, pt, Minfo) → 0/1. The verification algorithm takes as inputs the challenge C, the proof P, the secret hash key sh, the public tag key pt, and the abstract information of the data component, Minfo. It first computes the identifier hash values h(sh, Wi) of all the challenged data blocks (each hash value is calculated using the SHA-256 method) and computes the challenge hash Hchal as

Hchal = ∏_{i∈Q} h(sh, Wi)^vi

Then it verifies the proof from the server by the following verification equation:

DP · e(Hchal, R) = e(TP, g^r)

If the verification equation holds, it outputs 1. Otherwise, it outputs 0.
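To make one audit round concrete, the following toy sketch (entirely our own simplification) replaces the bilinear pairing e(·,·) with private verification in a single modular group, so the verifier holds the secret tag key; it demonstrates the homomorphic bookkeeping of tag generation, challenge, proof, and verification, not the scheme's actual security model:

```python
import hashlib
import secrets

# Toy parameters (illustrative only; real schemes use pairing groups)
p = 2 ** 127 - 1          # a Mersenne prime as the modulus
g = 3

def h(sh: int, Wi: str) -> int:
    """Identifier hash h(sh, Wi) via SHA-256, mapped into Z_p*."""
    d = hashlib.sha256(f"{sh}|{Wi}".encode()).digest()
    return int.from_bytes(d, "big") % (p - 1) + 1

# --- Setup: data owner side --------------------------------------------
st = secrets.randbelow(p - 2) + 1                 # secret tag key
sh = secrets.randbelow(p - 2) + 1                 # secret hash key
s, n = 2, 4                                       # sectors per block, block count
x = [secrets.randbelow(p - 2) + 1 for _ in range(s)]
u = [pow(g, xj, p) for xj in x]                   # u_j = g^{x_j}
M = [[secrets.randbelow(1000) for _ in range(s)] for _ in range(n)]  # data blocks
FID = "file-42"
W = [f"{FID}||{i}" for i in range(n)]             # W_i = FID||i

def tag(i: int) -> int:
    """t_i = (h(sh, W_i) * prod_j u_j^{m_ij})^{st} mod p."""
    base = h(sh, W[i])
    for j in range(s):
        base = base * pow(u[j], M[i][j], p) % p
    return pow(base, st, p)

T = [tag(i) for i in range(n)]

# --- Challenge: auditor picks blocks and random coefficients -----------
Q = [0, 2]
v = {i: secrets.randbelow(100) + 1 for i in Q}

# --- Prove: server combines tags and data ------------------------------
TP = 1
for i in Q:
    TP = TP * pow(T[i], v[i], p) % p              # TP = prod t_i^{v_i}
MP = [sum(v[i] * M[i][j] for i in Q) for j in range(s)]  # MP_j = sum v_i * m_ij

# --- Verify: the auditor holds st here (private verification stands in
# for the pairing-based public check described above) --------------------
Hchal = 1
for i in Q:
    Hchal = Hchal * pow(h(sh, W[i]), v[i], p) % p
rhs = Hchal
for j in range(s):
    rhs = rhs * pow(u[j], MP[j], p) % p
assert TP == pow(rhs, st, p)                      # the audit equation holds
```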
Experimental Analysis:

For the experimental implementation, we verified the auditing protocol through a .NET application. The data owner uploads the data component after segmentation and encryption of the individual blocks. Our approach reduces the time complexity of encrypting the data components dynamically, updates corrupted blocks, and prevents the auditor from misusing the data component, all without losing data integrity, so the proposed approach is more reliable. The following result compares the traditional and proposed approaches.
[Figure 2: Bar chart comparing the traditional and proposed approaches on time complexity, data integrity, and reliability.]
IV. CONCLUSION
In this work we propose a novel dynamic auditing protocol built on stronger cryptographic methods, providing data security over the network for multiple data owners. As a future enhancement, the scheme can be extended to multiple data owners working with multiple cloud services, which could be a significant advance in auditing methodologies.
REFERENCES
[1] P. Mell and T. Grance, “The NIST Definition of Cloud Computing,” technical report, Nat’l Inst. of Standards and Technology, 2009.
[2] M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R.H. Katz, A. Konwinski, G. Lee, D.A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A View of Cloud Computing,” Comm. ACM, vol. 53, no. 4, pp. 50-58, 2010.
[3] K. Yang and X. Jia, “Data Storage Auditing Service in Cloud Computing: Challenges, Methods and Opportunities,” World Wide Web, vol. 15, no. 4, pp. 409-428, 2012.
[4] T. Velte, A. Velte, and R. Elsenpeter, Cloud
Computing: A Practical Approach, first ed., ch. 7.
McGraw-Hill, 2010.
[5] J. Li, M.N. Krohn, D. Mazières, and D. Shasha, “Secure Untrusted Data Repository (SUNDR),” Proc. Sixth Conf. Symp. Operating Systems Design and Implementation, pp. 121-136, 2004.
[6] G.R. Goodson, J.J. Wylie, G.R. Ganger, and M.K.
Reiter, “Efficient Byzantine-Tolerant Erasure-Coded
Storage,” Proc. Int’l Conf. Dependable Systems and
Networks, pp. 135-144, 2004.
[7] V. Kher and Y. Kim, “Securing Distributed Storage:
Challenges, Techniques, and Systems,” Proc. ACM
Workshop Storage Security and Survivability (StorageSS),
V. Atluri, P. Samarati, W. Yurcik, L. Brumbaugh, and Y.
Zhou, eds., pp. 9-25, 2005.
[8] L.N. Bairavasundaram, G.R. Goodson, S. Pasupathy, and J. Schindler, “An Analysis of Latent Sector Errors in Disk Drives,” Proc. ACM SIGMETRICS Int’l Conf. Measurement and Modeling of Computer Systems, L. Golubchik, M.H. Ammar, and M. Harchol-Balter, eds., pp. 289-300, 2007.
[9] B. Schroeder and G.A. Gibson, “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?” Proc. USENIX Conf. File and Storage Technologies, pp. 1-16, 2007.
[10] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, “A Cooperative Internet Backup Scheme,” Proc. USENIX Ann. Technical Conf., pp. 29-41, 2003.
[11] Y. Deswarte, J. Quisquater, and A. Saidane, “Remote Integrity Checking,” Proc. Sixth Working Conf. Integrity and Internal Control in Information Systems (IICIS), Nov. 2004.
[12] M. Naor and G.N. Rothblum, “The Complexity of
Online Memory Checking,” J. ACM, vol. 56, no. 1, article
2, 2009.
[13] A. Juels and B.S. Kaliski Jr., “Pors: Proofs of Retrievability for Large Files,” Proc. ACM Conf. Computer and Comm. Security, P. Ning, S.D.C. di Vimercati, and P.F. Syverson, eds., pp. 584-597, 2007.
[14] T.J.E. Schwarz and E.L. Miller, “Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage,” Proc. 26th IEEE Int’l Conf. Distributed Computing Systems, p. 12, 2006.
BIOGRAPHIES
D. Suresh completed his B.Tech in C.S.E. at Sri Vasavi Engineering College, JNTU (2006-2010), worked as an Assistant Professor of C.S.E. at Sri Sunflower Engineering College, Challapalli (2012), and completed his M.Tech in C.S.E. under JNTUK (2012-2014) at Avanthi Institute of Engineering & Technology, Narsipatnam.
Dr. C.P.V.N.J. Mohan Rao is a Professor in the Department of Computer Science and Engineering, Avanthi Institute of Engineering & Technology, Narsipatnam. He received his PhD from Andhra University, and his research interests include image processing, networks, information security, data mining, and software engineering. He has guided more than 50 M.Tech projects and is currently guiding four research scholars for their Ph.D. He has received many honors, has served on many expert committees, is a member of many professional bodies, and is a resource person for various organizations.