Research Proposal - University of South Australia

University of South Australia
Division of Information Technology, Engineering and
Environment
School of Computer and Information Science
2010
Securing the hash communication channel for the
reinstating of tampered documents
Wayne Gartner
Garwd001@students.unisa.edu.au
Student ID: 100081771
Supervisor: Helen Ashman
Abstract
Traditional cryptographic techniques can verify the integrity of files, but they
cannot recover the true file. This is because the cryptographic hash calculation
loses information, and the collision resistance of cryptographic hashes makes a
brute force search for the correct document infeasible. More recently, however,
the concept of reinstating tampered documents explored by Ashman (2000) and
Moss & Ashman (2002) shows it is feasible to recursively create the necessary
redundancy in hash tree structures so that the brute force search can be
performed rapidly and with negligible error rates.
One focus of the research in this thesis is determining whether there are
significant performance differences between binary and quad tree
implementations of the hash tree structures, according to different quantities
and distributions of tampering. This includes investigating methods for dealing
with persistent tampering, which may be required since the hash trees are
likely to be transmitted in the same way as the original, tampered data. Thus a
second focus of the research will be to add an additional layer of security to
the hash communication channel, examining different cryptographic techniques
and measuring the additional performance overheads against the unsecured
binary and quad tree implementations.
The motivation for exploring different cryptographic and tree implementations
is to determine how much cryptographic security is feasible without making the
process so resource hungry or so slow that simply resending the entire file
would be preferable.
Several established cryptographic techniques will be explored in this
research, such as Blowfish, AES, DES, Triple DES, and Password Based
Encryption using hashes. The key sizes for these encryption techniques will be
varied across 128-bit, 192-bit and 256-bit. The reason for testing so many
different variables is to explore a comprehensive range of realistic options,
and then determine the best solution for each implementation of the code. For
example, a low-powered device such as a mobile phone would have different
requirements than a high-powered server.
The outcomes of this research include an analysis of the performance of
different combinations of hash tree and encryption. There will also be a
working implementation of the reinstating tampered documents technique, with
different trees and cryptographic security.
Contents
1 Introduction
  1.1 Background
  1.2 Motivation
  1.3 Research Question
    1.3.1 Sub Research Question
    1.3.2 Sub Research Question
    1.3.3 Sub Research Question
  1.4 (Research) Assumptions
2 Literature Review
  2.1 Hash Trees
  2.2 Tamper Correcting
  2.3 Cryptographic Techniques
    2.3.1 AES
    2.3.2 Blowfish
    2.3.3 RSA
    2.3.4 SSL
  2.4 Literature Summary
3 Methodology
  3.1 Baselining cryptographic techniques
  3.2 Testing Prototypes with Cryptographic techniques
  3.3 Variables Explained
4 Timeline
5 Future Work
6 References
1 Introduction
1.1 Background
Cryptographic hashes have long been used as a method of integrity checking,
comparing two hashes to determine whether there are any deviations. This idea
was then extended, based on the notion ‘Why can’t we regenerate the original
data from the hashes?’ From that notion, the idea of breaking the document into
smaller pieces and checking their hash values was conceived (Ashman, 2000).
Fundamentally the prototype operates by breaking down the document into more
manageable pieces, for which a brute force search becomes feasible. Instead of
searching for a hash of an entire document, the prototype searches for a piece
that is one character or one byte long (depending on the implementation).
In preliminary work undertaken by the author, a prototype was built in Java
which divided a document into a binary tree (i.e. slicing it in half to create
two child nodes of one parent) until the children were one character long. At
that level, if the hashes did not match, a brute force search could be executed
very rapidly against the limited number of possibilities. Once the match was
found, the reinstated character could be put back into the tree, and after the
recursive process completed, the tree could reproduce the original document.
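
To make the leaf-level search concrete, the following minimal Java sketch
(illustrative only; the names and details are assumptions, not taken from the
actual prototype) brute-forces a single tampered byte against its known MD5
hash:

    import java.security.MessageDigest;

    // Illustrative sketch of the leaf-level brute force: given the MD5 hash
    // of a single (possibly tampered) byte, try all 256 candidate values
    // until one hashes to the expected digest.
    public class LeafBruteForce {
        public static byte reinstate(byte[] targetHash) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            for (int candidate = 0; candidate < 256; candidate++) {
                byte[] hash = md5.digest(new byte[] { (byte) candidate });
                if (MessageDigest.isEqual(hash, targetHash)) {
                    return (byte) candidate; // the reinstated leaf value
                }
            }
            throw new IllegalStateException("no single-byte preimage found");
        }
    }

With at most 256 candidates per leaf, the search at this level completes almost
instantly, which is what makes the recursive scheme practical.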
Communications between the client and the server are requests for hashes (from
the client) and the requested hashes (from the server). The length of a message
sent from the client is determined by the level of the tree and the node that
is required. In the binary tree this is represented by 0 for left and 1 for
right; the quad tree uses 0 for outer left, 1 for inner left, 2 for inner right
and 3 for outer right. The return messages from the server are 25 characters
long for MD5 hashes, or 40 characters long for SHA-1 hashes. The head node of
the tree (the entire document) is hashed using SHA-1, whilst all other nodes
use MD5.
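
A hypothetical illustration of this request format (the prototype’s exact wire
format may differ) is a path string built from the branch choices:

    // A node is addressed by its branch choices from the root, one digit per
    // level: 0/1 for the binary tree, 0-3 for the quad tree. For example,
    // "010" requests the hash of root -> left -> right -> left, so the
    // message length equals the depth of the requested node.
    public class PathEncoding {
        public static String encode(int[] branchChoices) {
            StringBuilder path = new StringBuilder();
            for (int choice : branchChoices) path.append(choice);
            return path.toString();
        }
    }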
Over the next year, a quad tree version of the prototype was built, dividing
the document tree at each level into four segments rather than two. This showed
a significant performance improvement, since there were fewer levels of the
tree, although those levels had more branches. By the end of the summer, the
prototype had been altered to work for arbitrary file types, rather than being
restricted to character documents.
The current prototype sends the redundant data to the receiver on a “just in
time” basis for use in the reinstating process, which significantly reduces the
amount of redundant data both created and subsequently transmitted. The very
early prototype reported in Moss & Ashman (2002) used a “just in case” basis,
where the entire hash tree was calculated regardless of whether, or which parts
of, it was needed to reinstate the document after tampering.
This was the state of the project at the commencement of this research.
For the research being conducted for this thesis, the tests will be conducted
firstly on both the Binary Tree and Quad Tree versions, and secondly for
arbitrary file types. By using the arbitrary file type version, we do not limit
the type of file that can be tested.
1.2 Motivation
The redundant data used to reinstate tampered files is currently transmitted in
plain text, as it is not deemed critical information. However, it is plausible
that if the document’s integrity can be compromised, so can the data sent as
part of the process of reinstating the document, rendering the process useless.
Throughout the transmission process, hashes (and requests for those hashes)
will be sent between the client and server. If the channel is compromised, it
is feasible that a hostile third party could generate the hashes needed to
produce the document they wish the client to reconstruct, voiding the process
and instilling a false sense of integrity in the client.
This project seeks solutions to prevent the persistent tampering from
compromising the final receipt of the correct version of the transmitted
document, considering the use of cryptography to secure the channel.
1.3 Research Question
What cryptographic techniques can be implemented to secure the hash
communication channel without imposing unjustifiable overhead on the process?
1.3.1 Sub Research Question
Can a cryptographic technique prevent persistent tampering from
compromising the integrity of a file?
1.3.2 Sub Research Question
What are the performance and storage overheads of one or more suitable
cryptographic techniques?
1.3.3 Sub Research Question
What is the most effective cryptographic technique?
1.4 (Research) Assumptions
Assumptions for this research:
• Alasdair (Person A) sends a document to Miharu (Person M), with the
opportunity for a hostile third party (Seth, Person S) to tamper with it
• The technique uses a series of requests and responses to transmit the hash
tree data on the just-in-time basis
The justification for using the just-in-time as opposed to the just-in-case
approach is to limit the size of the data transferred between the two parties.
The full hash tree would become too large to justify sending all of the
redundant data, as opposed to sending the smaller, required pieces as needed.
2 Literature Review
There are three main streams of research being reviewed: one applies hash
trees to document integrity, another concerns the concept of tamper correcting
itself and what research has been conducted into it, and the third covers
cryptographic techniques, which feature predominantly as the focus of this
research.
An emerging trend from the review is that there is little research into the
concept of reinstating tampered documents. Research has focused on hash trees
for memory integrity and on cryptography to aid confidentiality, but no work
applies the two concepts together as a form of document integrity reinstating.
The aim of this literature review is to examine what research has been
conducted in each field, breaking it into appropriate subcategories.
2.1 Hash Trees
There is little published literature about hash trees in the context of the
research being conducted. The published works are Ashman (2000) and Moss &
Ashman (2002), both of which outline the fundamental principles of using hash
trees for reinstating tampered documents. The process works by breaking the
document into smaller, more manageable pieces, creating a tree of hashes. This
literature forms the foundation of the research being conducted, with the
prototypes based on the ideas proposed by Ashman and Moss. Note that this is
apparently the only work that reports on using trees of hash data as redundant
data for reinstating tampered documents.
Fangyong et al. (2009) detail in their work how the hash tree, or more
accurately the tree data structure, actually works. In their studies they use
hash trees for disk storage integrity, incrementally working through the tree
to identify any deviation in the hashes. The principle works by having a head
node that spawns child nodes, and these children spawn more children until the
desired level is reached. In the research being undertaken, this desired level
is when the content of the leaf is one byte long.
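
The structure they describe can be sketched in Java as follows (a minimal
assumed form, not the actual prototype), splitting each node’s slice in half
until the leaves hold one byte:

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.Arrays;

    // Minimal binary hash tree: each node stores the MD5 digest of its slice
    // of the document and spawns two children until the slice is one byte.
    class HashNode {
        final byte[] hash;
        final HashNode left, right; // null at the leaves

        HashNode(byte[] data, int from, int to) throws NoSuchAlgorithmException {
            hash = MessageDigest.getInstance("MD5")
                                .digest(Arrays.copyOfRange(data, from, to));
            if (to - from > 1) { // split until the slice is a single byte
                int mid = from + (to - from) / 2;
                left = new HashNode(data, from, mid);
                right = new HashNode(data, mid, to);
            } else {
                left = null;
                right = null;
            }
        }
    }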
Williams & Emin Gun (2004) explore integrity verification of data using hash
trees, not just as an idea but also through the challenges of implementation
and the cost of adding such a system. Their research into hash trees focuses on
memory integrity and discovering that data has been compromised, whilst the
research being undertaken focuses on using the hash trees to reinstate the
compromised data.
2.2 Tamper Correcting
In the context of the research being undertaken, there is no published
literature on using cryptographic hashes as a means of reinstating tampered
documents. However, there are other works on various techniques of tamper
correcting, which show that although there is no implementation in this
context, the concept does exist and is an active research topic.
Cong et al. (2008) discuss a technique that uses watermarking as a means of
detecting tampering in web pages. Their technique can not only detect where the
changes have occurred, but can also repair the tampering to produce the
original document. This principle is similar to the one used in the research
being undertaken, but their work does not contribute anything to the prototypes
being tested.
Hasan & Hassan (2007) and Hassine et al. (2009) explore different reinstating
techniques for images. Hasan & Hassan’s (2007) technique uses embedded
watermarks as a means of reinstating images if they have been tampered with.
Hassine et al. (2009) go a little further, proposing a technique that also uses
embedded watermarks but works after a Vector Quantization attack. Both of these
techniques are based on watermarks, which is too constraining, as it requires
the documents to be images. That is, their work interprets the content of the
files being reinstated and in fact relies on knowledge of typical features in
that form of data, whereas the work in this proposal reinstates documents
without any requirement to interpret the data as an image or any other form.
The concept of using redundant data is not new, and was explored by Nyberg &
Rueppel (1993). Their proposal adds additional data to digital signatures, for
use in an identity-based public key system or as a key agreement protocol.
Although these uses are not relevant to this research, they show that the
concept of using additional redundant data has been explored before.
Xuesong et al. (2008) have designed a technique to make Java programs
tamper-proof. They achieve this through flow signatures derived from regular
expressions; once again, their algorithm interprets the file content and
assumes features typical of the form of data. The use of regular expressions
works for their proposed technique, but for the research being conducted,
hashes serve the requirements better.
2.3 Cryptographic Techniques
For introductory knowledge of the field of cryptography, the book by Singh
(2000) walks through the history of cryptography. The book is not so much
technical literature as a beginner’s guide, stepping through the history of
cryptography from codes used in old England, to ciphers, the Enigma machine,
and the encryption techniques of today.
Nadeem & Javed (2005) examined the performance of four secret key encryption
techniques: DES, Triple DES, AES and Blowfish. Through their testing in Java,
they discovered that in terms of speed, Blowfish was the fastest of the four
algorithms, followed by DES, AES and Triple DES. These results are encouraging,
as Java is also being used for this research and all four of the mentioned
algorithms are being tested. Yan & Ming (2009) conducted similar research,
performance testing RSA, AES and Triple DES. Their results are similar to those
of Nadeem & Javed, finding the AES algorithm the fastest, followed by Triple
DES and RSA.
Wei et al. (2009) and Xinjian et al. (2008) explore two new breeds of attack
faced by cryptography. Although not the focus of this research, their work
provides some interesting details of the new challenges faced by mobile devices
implementing cryptography. Xinjian et al. (2008), implementing the AES
algorithm, show how gathering information from a mobile device, such as its
power consumption, can reveal the secret key. Wei et al. (2009) continue this
line of thought, discussing how differential fault analysis can determine the
secret key.
2.3.1 AES
The Advanced Encryption Standard (AES) is a block cipher that encrypts 128-bit
blocks using symmetric keys that are 128, 192 or 256 bits in length (Parikh &
Patel, 2007).
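
As a point of reference, a minimal sketch of AES encryption through the
standard Java Cryptography Extension (the API the tests are expected to use;
note that on JVMs of this period, 192- and 256-bit keys may require the
unlimited strength policy files):

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    // Encrypt one message with AES under a freshly generated key.
    public class AesExample {
        public static byte[] encrypt(byte[] message, int keyBits) throws Exception {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(keyBits); // 128, 192 or 256
            SecretKey key = keyGen.generateKey();
            Cipher cipher = Cipher.getInstance("AES"); // provider-default mode/padding
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return cipher.doFinal(message);
        }
    }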
The AES cipher is an implementation of the Rijndael algorithm, which was
selected to become the Advanced Encryption Standard. Sanchez-Avila &
Sanchez-Reillol (2001) performed an in-depth analysis of the proposed Rijndael
cipher against the Data Encryption Standard (DES) and Triple DES. They found
that the Rijndael algorithm’s main advantage is that it eliminates the
possibility of weak or semi-weak keys, when compared to DES. However, some felt
Triple DES should have become the standard: Rehman et al. (2002) presented a
paper arguing that Triple DES with a 128-bit key would be a better solution
than the Rijndael cipher.
A small part of the reviewed literature explores implementations of the AES
algorithm. Chi-Feng et al. (2003) looked at implementing a modified version of
the AES algorithm optimised for use in smart cards. Although not necessarily
relevant to the research being undertaken, the authors went into great detail
explaining how the AES algorithm works, which is of relevance to the research.
Liberatori et al. (2007) also looked at a high speed, low cost implementation
of the AES cipher, exploring different ways of physically wiring the AES
operations to enhance performance. Again, this isn’t directly relevant to the
research being undertaken, but it shows that the speeds achieved using Java
will be slower than hardware solutions.
2.3.2 Blowfish
The Blowfish algorithm is a 64-bit block cipher designed by one of the world’s
leading cryptologists, Bruce Schneier, in 1993, and to this day it has not been
cracked (Tingyuan & Teng, 2009). Moussa (2005) points out many positives of the
Blowfish algorithm: it is suitable and efficient for hardware implementation,
it is unpatented with no licensing required, and it uses simplified principles
of DES to provide the same security at greater speed and efficiency.
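
Blowfish is exposed through the same JCE interface; a sketch (illustrative
only, with a caller-supplied key, since Blowfish accepts variable-length keys):

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    // Encrypt one message with Blowfish under a caller-supplied key.
    public class BlowfishExample {
        public static byte[] encrypt(byte[] message, byte[] keyBytes) throws Exception {
            SecretKeySpec key = new SecretKeySpec(keyBytes, "Blowfish");
            Cipher cipher = Cipher.getInstance("Blowfish");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return cipher.doFinal(message);
        }
    }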
Another positive for the Blowfish algorithm is that there is little literature
published about its weaknesses. Mingyan & Yanwen (2009) proposed using Blowfish
in a password management system; since a cryptanalysis contest held to try to
break Blowfish proved unsuccessful, Blowfish can be considered reasonably
secure. This sentiment is mirrored by Meyers & Desoky (2008), who came to the
same conclusion that Blowfish is secure.
2.3.3 RSA
The RSA (Rivest, Shamir and Adleman) algorithm is the only public key cipher
being tested in this research. The foundation knowledge about the RSA algorithm
comes from a book published by Burnett & Paine (2001). Their book describes the
RSA algorithm in great detail, exploring how it works and how the public key
system works. They also explain why the correct implementation of cryptography
is important, reinforcing why the research being conducted is important.
Aboud et al. (2008) also explore how the RSA algorithm is implemented, and
propose a way to improve upon the algorithm, making it more secure and flexible
through the use of the Hill cipher. For the research being undertaken, their
proposed solution will not be used, since there is no proof that it works in
practical applications, so a more traditional implementation of the RSA
algorithm will be used.
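
A sketch of such a traditional implementation via the JCE (the modulus size
here is chosen arbitrarily for illustration):

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import javax.crypto.Cipher;

    // Generate an RSA key pair and encrypt a short message with the public key.
    public class RsaExample {
        public static byte[] encrypt(byte[] message) throws Exception {
            KeyPairGenerator keyPairGen = KeyPairGenerator.getInstance("RSA");
            keyPairGen.initialize(1024); // modulus size in bits (illustrative)
            KeyPair pair = keyPairGen.generateKeyPair();
            Cipher cipher = Cipher.getInstance("RSA");
            cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
            return cipher.doFinal(message); // message must fit within the modulus
        }
    }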
There is also a variety of literature about methods of attacking the RSA
scheme. Giraud (2006) explores the vulnerabilities to fault attacks and simple
power analysis, and countermeasures to protect against them. Ren-Junn et al.
(2005) use a series of proofs to show the potential of the Chinese Remainder
Theorem as a means of speeding up decryption, although the performance gains
are not yet significant. Aboud (2009) shows how, if the public key and modulus
are known, a series of mathematical equations can produce the decryption key;
but the complexity of this function still means RSA is computationally
infeasible to break. However, this literature highlights the need to always
stay ahead in cryptographic research to maintain the effectiveness of each
technique.
2.3.4 SSL
Niansheng et al. (2008) analyse the security and configuration of the Secure
Sockets Layer (SSL). Their work explores how the SSL protocol as a whole works,
how SSL authenticates, and how the handshake protocol works. They also explain
some of the challenges faced when implementing SSL, such as limitations of key
size. Bisel (2007) published work showing where SSL sits in the scheme of cyber
security, demonstrating that it plays a role in E-Commerce and Virtual Private
Networks (VPNs). Since SSL is an important part of cyber security, it is an
important component of the research being conducted.
Liping et al. (2009) explore using a combined symmetric key in an SSL
implementation to reduce encryption costs and to simplify the digital
certificate process. The authors detail how the handshake protocol works, and
the drawbacks they hope to improve upon with their solution. Zhao & Liu (2009)
also propose a way of securing the handshake protocol, adding another layer of
user authentication on the client side and minimising some of the attacks faced
by SSL. However, although these solutions are faster and more secure, they will
not be used in the research being undertaken. Until such solutions become the
industry standard, a more generic SSL implementation will be used.
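
A minimal sketch of such a generic client-side SSL connection through JSSE
(host, port and request format are placeholders; the prototype’s actual
transport may differ):

    import java.io.OutputStream;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    // Open an SSL connection and send one hash request over the secured channel.
    public class SslClientExample {
        public static void sendRequest(String host, int port, String path) throws Exception {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
            try {
                OutputStream out = socket.getOutputStream();
                out.write(path.getBytes("US-ASCII")); // e.g. "010" for a tree node
                out.flush();
            } finally {
                socket.close();
            }
        }
    }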
Fang et al. (2008) explore attacks on, and countermeasures for, the SSL
Protected Trust Model. The authors detail the different attacks the SSL trust
model faces and possible countermeasures to protect against them. They go on to
explore the challenges introduced by the countermeasures, and propose a
solution that reduces the threat of an attack while making the system easier to
use. These threats to SSL must be considered when exploring the security of
each cryptographic technique.
2.4 Literature Summary
From the literature review, it is evident that there is a vast amount of
research into different cryptographic techniques. This research looks to use
these techniques in conjunction with another process, and the review permits a
comparison of the most appropriate techniques.
3 Methodology
To ensure that the tests are consistent and that natural variation is accounted
for, each test will be conducted n times, so that each test has a mean, median,
high, low and standard deviation. The justification for conducting each test n
times is to account for natural variation: since processor activity and the
Java Virtual Machine are uncontrollable variables, conducting the tests
repeatedly will give the best representation of statistical significance. One
of the next tasks is to determine the value of n from preliminary tests, which
will indicate the natural variance in processor activity and the performance of
the Java Virtual Machine.
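
A sketch of the measurement harness this implies (collecting the statistics
listed above over n runs, using the nanosecond timer the prototypes were
modified to use):

    import java.util.Arrays;

    // Run a task n times and report mean, median, low, high and standard
    // deviation of the elapsed times in nanoseconds.
    public class Benchmark {
        public static double[] run(Runnable task, int n) {
            long[] times = new long[n];
            for (int i = 0; i < n; i++) {
                long start = System.nanoTime();
                task.run();
                times[i] = System.nanoTime() - start;
            }
            Arrays.sort(times);
            double mean = 0;
            for (long t : times) mean += t / (double) n;
            double variance = 0;
            for (long t : times) variance += (t - mean) * (t - mean) / n;
            double median = (times[(n - 1) / 2] + times[n / 2]) / 2.0;
            return new double[] { mean, median, times[0], times[n - 1], Math.sqrt(variance) };
        }
    }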
3.1 Baselining cryptographic techniques
Technique              Key Size                   Message Length
AES                    128-bit, 192-bit, 256-bit  10, 25, 40 characters
DES                    128-bit, 192-bit, 256-bit  10, 25, 40 characters
Triple DES             128-bit, 192-bit, 256-bit  10, 25, 40 characters
Blowfish               128-bit, 192-bit, 256-bit  10, 25, 40 characters
PBEWithMD5AndDES       128-bit, 192-bit, 256-bit  10, 25, 40 characters
PBEWithSHA1AndDESede   128-bit, 192-bit, 256-bit  10, 25, 40 characters
SSL                    128-bit, 192-bit, 256-bit  10, 25, 40 characters
Once these tests have been conducted, the results will show which techniques
have better performance or security, and which add excessive overhead for
little benefit.
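
For the two password-based rows in the table above, a sketch using the standard
JCE name PBEWithMD5AndDES (the salt and iteration count are placeholder values
for illustration only):

    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.PBEParameterSpec;

    // Derive a key from a password and encrypt one message with it.
    public class PbeExample {
        public static byte[] encrypt(char[] password, byte[] message) throws Exception {
            byte[] salt = { 1, 2, 3, 4, 5, 6, 7, 8 }; // placeholder 8-byte salt
            PBEParameterSpec params = new PBEParameterSpec(salt, 1000);
            SecretKey key = SecretKeyFactory.getInstance("PBEWithMD5AndDES")
                                            .generateSecret(new PBEKeySpec(password));
            Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
            cipher.init(Cipher.ENCRYPT_MODE, key, params);
            return cipher.doFinal(message);
        }
    }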
3.2 Testing Prototypes with Cryptographic techniques
With these results in mind, each of the techniques will be implemented in the
binary and quad tree arbitrary file type prototypes, and the performance of
each prototype measured against the expected results and against the
performance of the prototype without any additional overhead at the
communication channel.
Technique              Prototype Code                     Key Size                   File Contents Length
AES                    Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
DES                    Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
Triple DES             Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
Blowfish               Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
PBEWithMD5AndDES       Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
PBEWithSHA1AndDESede   Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
SSL                    Binary arbitrary, Quad arbitrary   128-bit, 192-bit, 256-bit  100, 1000, 10000 characters
3.3 Variables Explained
The justification for using 10-, 25- and 40-character messages for baselining
each cryptographic technique is to get a subset of the ‘normal’ length messages
passed through the communication channel. It is hard to determine the precise
average message length, due to the variables of message length and percentage
of tampering.
A 10-character message represents a client requesting data from the tree ten
levels down, a feasible length that will frequently occur throughout the
process.
A 25-character message is the length of the hashes, and therefore the most
common length.
A 40-character message is the response to a request for the SHA-1 hash of the
entire document, which is made at the start of the process. If a 40-character
request occurred during the process of reinstating the document, it would
resemble the worst-case scenario, as it would mean the data requested from the
tree is 40 levels deep.
The file contents lengths are constrained to 100, 1000 and 10000 characters,
and all test files are .txt files. The justification for this is to make it
easier to control the level of tampering that occurs throughout the document,
as it is harder to make measurable changes to other file types that have
additional overheads (such as .docx and .mp3).
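
As a worked check of the request-length reasoning above: the path length equals
the depth of the requested node, and for an n-byte file the binary tree is
roughly log2 n levels deep and the quad tree log4 n, so the largest test file
(10000 characters) yields depths of 14 and 7 respectively. A small Java helper
makes this concrete:

    // Depth of the tree over n leaves with the given branching factor:
    // depth(10000, 2) == 14, depth(10000, 4) == 7.
    public class TreeDepth {
        public static int depth(long n, int branching) {
            int levels = 0;
            for (long span = 1; span < n; span *= branching) levels++;
            return levels;
        }
    }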
4 Timeline
Date                 Comment
Early January 2010   Found supervisor and topic
Mid February 2010    Determined which cryptographic techniques to be used
Early March 2010     Compiled literature review
Mid March 2010       Modified code to operate in nanoseconds for better precision
Mid March 2010       Removed GUI to reduce resources required
April 2010           Baseline performance testing of Binary and Quad prototypes
Early May 2010       Begin writing Research Proposal
May 2010             Continue baseline performance testing of Binary and Quad prototypes
June 2010            Baseline testing of cryptographic techniques
July 2010            Prototype tests with cryptographic techniques
August 2010          Analyse results of tests
September 2010       Compile Thesis
October 2010         Finish Thesis; write conference paper on thesis work
5 Future Work
Future work to be conducted in relation to this research includes:
• Measuring the speed differences between tampering occurring in groups at the
start/middle/end of a document and randomly placed tampering
• Measuring the precise speed differences between the binary/quad tree and
character/arbitrary implementations
• A visualisation tool to show exactly where the tampering has occurred, with
possible use as a forensic recovery tool
6 References
Aboud, SJ 2009, ‘An efficient method for attack RSA scheme’, Second
International Conference on the Applications of Digital Information and Web
Technologies, pp. 587 – 591
Aboud, SJ, Al-Fayoumi, MA, Al-Fayoumi, M & Jabbar, H 2008, ‘An Efficient
RSA Public Key Encryption Scheme’, Fifth International Conference on
Information Technology: New Generations, pp. 127 – 130
Ashman, H 2000, ‘Hashes DO grow on trees – Document integrity at every
level’, Proceedings of Ausweb 2000, Southern Cross University
Bisel, LD 2007, ‘The Role of SSL in Cybersecurity’, IT Professional, vol. 9, no.
2, pp. 22 – 25
Burnett, S & Paine, S 2001, RSA security’s official guide to cryptography,
Osborne/McGraw-Hill
Chi-Feng, L, Yan-Shun, K, Hsia-Ling, C & Chung-Huang, Y 2003, ‘Fast
implementation of AES cryptographic algorithms in smart cards’, IEEE 37th
Annual 2003 International Carnahan Conference On Security Technology, pp.
573 – 579
Cong, J, Hongfeng, X & Xiaoliang, Z 2008, ‘Web pages tamper-proof method
using virus-based watermarking’, International Conference on Audio,
Language and Image Processing, pp. 1012 – 1015
Fang, Q, Zhe, T & Guojun, W 2008, ‘Attacks vs. Countermeasures of SSL
Protected Trust Model’, 9th International Conference for Young Computer
Scientists, pp. 1986 – 1991
Fangyong, H, Dawu, G, Nong, X, Fang, L & Hongjun, H 2009, ‘Performance
and Consistency Improvements of Hash Tree Based Disk Storage Protection’,
IEEE International Conference on Networking, Architecture, and Storage, pp.
51 – 56
Giraud, C 2006, ‘An RSA Implementation Resistant to Fault Attacks and to
Simple Power Analysis’, IEEE Transactions on Computers, vol. 55, no. 9, pp.
1116 – 1120
Hasan, YMY & Hassan, AM 2007, ‘Tamper Detection with Self-Correction
Hybrid Spatial-DCT Domains Image Authentication Techniques’, IEEE
International Symposium on Signal Processing and Information Technology,
pp. 369 – 374
Hassine, A, Rhouma, R & Belghith, S 2009, ‘A novel method for tamper
detection and recovery resistant to Vector Quantization attack’, 6th
International Multi-Conference on Systems, Signals and Devices, pp. 1 – 6
Liberatori, M, Otero, F, Bonadero, JC & Castineira, J 2007, ‘AES-128 Cipher.
High Speed, Low Cost FPGA Implementation’, 3rd Southern Conference on
Programmable Logic, pp. 195 – 198
Liping, D, Xiangyi, H, Ying, L & Guifen, Z 2009, ‘A CSK based SSL
handshake protocol’, IEEE International Conference on Network Infrastructure
and Digital Content, pp. 600 – 603
Meyers, RK & Desoky, AH 2008, ‘An Implementation of the Blowfish
Cryptosystem’, IEEE International Symposium on Signal Processing and
Information Technology, pp. 346 – 351
Mingyan, W & Yanwen Q 2009, ‘The Design and Implementation of
Passwords Management System Based on Blowfish Cryptographic
Algorithm’, International Forum on Computer Science-Technology and
Applications, vol. 2, pp. 24 – 28
Moussa, A 2005, ‘Data encryption performance based on Blowfish’, 47th
International Symposium ELMAR, pp. 131 – 134
Moss, B & Ashman, H 2002, ‘Hash-Tree Anti-tampering Schemes’, IEEE
International Conference on Information Technology and Applications
Nadeem, A & Javed, M 2005, ‘A Performance Comparison of Data Encryption
Algorithms’, First International Conference on Information and
Communications Technologies, pp. 84 – 89
Niansheng, L, Guohao, Y, Yu, W & Donghui, G 2008, ‘Security analysis and
configuration of SSL protocol’, 2nd International Conference on
Anti-counterfeiting, Security and Identification, pp. 216 – 219
Nyberg, K & Rueppel, RA 1993, ‘A new signature scheme based on the DSA giving
message recovery’, ACM, Fairfax, Virginia, United States
Parikh, C & Patel, P 2007, ‘Performance Evaluation of AES Algorithm on
Various Development Platforms’, IEEE International Symposium on
Consumer Electronics, pp. 1 – 6
Rehman, H, Jamshed, S & Absar ul, H 2002, ‘Why triple DES with 128-bit key
and not rijndael should be AES’, IEEE Students Conference, pp. 12 – 13
Ren-Junn, H, Feng-Fu, S, Yi-Shiung & Chia-Yao, C 2005, ‘An efficient
decryption method for RSA cryptosystem’, 19th International Conference on
Advanced Information Networking and Applications, vol. 1, pp. 585 – 590
Sanchez-Avila, C & Sanchez-Reillol, R 2001, ‘The Rijndael block cipher (AES
proposal): a comparison with DES’, IEEE 35th International Carnahan
Conference on Security Technology, pp. 229 – 234
Singh, S 2000, The Code Book, New York, New York
Tingyuan, N & Teng, Z 2009, ‘A study of DES and Blowfish encryption
algorithm’, IEEE Region 10 Conference TENCON, pp. 1 – 4
Wei, L, Dawu, G, Yong, W, Juanru, L & Zhiqiang, L 2009, ‘An Extension of
Differential Fault Analysis on AES’, Third International Conference on Network
and System Security, pp. 443 – 446
Williams, D & Emin Gun, S 2004, ‘Optimal parameter selection for efficient
memory integrity verification using Merkle hash trees’, Third IEEE
International Symposium on Network Computing and Applications, pp. 383 –
388
Xuesong, Z, Fengling, H & Wanli, Z 2008, ‘A Java Program Tamper-Proofing
Method’, International Conference on Security Technology, pp. 71 – 74
Xinjian, Z, Yiwei, Z & Bo, P 2008, ‘Design and Implementation of a DPA
Resistant AES Coprocessor’, 4th International Conference on Wireless
Communications, Networking and Mobile Computing, pp. 1 – 4
Yan, W & Ming, H 2009, ‘Timing Evaluation of the Known Cryptographic
Algorithms’, International Conference on Computational Intelligence and
Security, vol. 2, pp. 223 – 237
Zhao, H & Liu, R 2009, ‘A Scheme to Improve Security of SSL’, Pacific-Asia
Conference on Circuits, Communications and Systems, pp. 401 – 404