
Research Paper
on
Cryptography and Network Security:
Principles and Practices
By:
Prashant Singh
Tanuj Aggarwal
Pratik Sharma
Rohit Yadav
Dronacharya College of Engineering
Abstract
The field of network and Internet security consists of measures to deter, prevent,
detect, and correct security violations that involve the transmission of information.
That is a broad statement that covers a host of possibilities. To give you a
feel for the areas covered in this paper, consider the following examples of
security violations:
1. User A transmits a file to user B. The file contains sensitive information (e.g., payroll records) that is to be protected from disclosure. User C, who is not authorized to read the file, is able to monitor the transmission and capture a copy of the file during its transmission.
This paper focuses on two broad areas: cryptographic algorithms and protocols, which have a broad range of applications; and network and Internet security, which relies heavily on cryptographic techniques.
Cryptographic algorithms and protocols can be grouped into four main areas:
• Symmetric encryption: Used to conceal the contents of blocks or streams of
data of any size, including messages, files, encryption keys, and passwords.
• Asymmetric encryption: Used to conceal small blocks of data, such as
encryption keys and hash function values, which are used in digital signatures.
• Data integrity algorithms: Used to protect blocks of data, such as messages,
from alteration.
• Authentication protocols: These are schemes based on the use of cryptographic algorithms designed to authenticate the identity of entities.
Computer and network security can be described as the protection afforded to
an automated information system in order to attain the
applicable objectives of preserving the integrity, availability, and confidentiality of
information system resources (includes hardware, software, firmware, information/
data, and telecommunications).
Introduction:
This definition introduces three key objectives that are at the heart of
computer security:
• Confidentiality: This term covers two related concepts:
Data confidentiality: Assures that private or confidential information is not made available or disclosed to unauthorized individuals.
Privacy: Assures that individuals control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.
• Integrity: This term covers two related concepts:
Data integrity: Assures that information and programs are changed only in
a specified and authorized manner.
System integrity: Assures that a system performs its intended function in an
unimpaired manner, free from deliberate or inadvertent unauthorized
manipulation of the system.
• Availability: Assures that systems work promptly and service is not denied to
authorized users.
We now provide some examples of applications that illustrate the requirements just enumerated. For these examples, we use three levels of impact on organizations or individuals should there be a breach of security (i.e., a loss of confidentiality, integrity, or availability).
• Low: The loss could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals. A limited adverse effect
means that, for example, the loss of confidentiality, integrity, or availability might
(i) cause a degradation in mission capability to an extent and duration
that the organization is able to perform its primary functions, but the effectiveness
of the functions is noticeably reduced; (ii) result in minor damage to
organizational assets; (iii) result in minor financial loss; or (iv) result in minor
harm to individuals.
• Moderate: The loss could be expected to have a serious adverse effect on
organizational operations, organizational assets, or individuals. A serious
adverse effect means that, for example, the loss might (i) cause a significant
degradation in mission capability to an extent and duration that the organization
is able to perform its primary functions, but the effectiveness of
the functions is significantly reduced; (ii) result in significant damage to
organizational assets; (iii) result in significant financial loss; or (iv) result in
significant harm to individuals that does not involve loss of life or serious,
life-threatening injuries.
• High: The loss could be expected to have a severe or catastrophic adverse
effect on organizational operations, organizational assets, or individuals.
A severe or catastrophic adverse effect means that, for example, the loss
might (i) cause a severe degradation in or loss of mission capability to an
extent and duration that the organization is not able to perform one or
more of its primary functions; (ii) result in major damage to organizational
assets; (iii) result in major financial loss; or (iv) result in severe or catastrophic
harm to individuals involving loss of life or serious, life-threatening
injuries
Security attacks
A useful means of classifying security attacks, used both in X.800 and RFC 2828, is
in terms of passive attacks and active attacks. A passive attack attempts to learn or
make use of information from the system but does not affect system resources. An
active attack attempts to alter system resources or affect their operation.
Passive Attacks
Passive attacks are in the nature of eavesdropping on, or monitoring of,
transmissions.
The goal of the opponent is to obtain information that is being transmitted.
Two types of passive attacks are the release of message contents and traffic
analysis.
Active Attacks
Active attacks involve some modification of the data stream or the creation of a false stream and can be subdivided into four categories: masquerade, replay, modification of messages, and denial of service.
Applying Cryptography
Key Based Cryptography
Currently, most cryptography used in practice is key based; that is, a string of bits (the key) is used to encode the clear text into cipher text and back again to clear text when required. Two types of key-based cryptography exist, based on whether the key is publicly available:
1. In private key cryptography, both the sender and the recipient share a key that must be kept private. In order to communicate with each other, the key must be passed between the two; this process is known as key distribution and is quite complicated and difficult to do properly. The most famous example of this type of cryptography is the Data Encryption Standard (DES); other examples include Triple DES, RC2, RC4, IDEA, and Skipjack. This is also known as symmetric cryptography.
2. In public key cryptography, each party has a pair of keys: one is published to the public, called the public key, while the other is kept secret and known only to the owner, the private key. Anyone wishing to communicate securely with a certain party will encrypt the data with the recipient's public key, which is publicly available; on the other side, only the party that holds the matching private key can decrypt the cipher text. Example public key algorithms: Diffie-Hellman, RSA, and Merkle-Hellman.
The public key system eliminates the key distribution process that hampers all private key systems, since there is no need to communicate secret keys among communicating parties. However, a problem that arises with public key systems is the lack of assurance of the true identity of the party on the other side. For example, if (A) wants to communicate with (B), it will use (B)'s public key, but how can (A) know that the party that sent its public key as being (B) really is (B)? This problem requires a trusted third party that authenticates both (A) and (B) to each other. This trusted third party is known as a Certificate Authority (CA). A CA issues certificates and guarantees their authenticity.
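As a concrete illustration of how a public/private key pair works, the following Python sketch uses textbook RSA with tiny demonstration primes. The numbers are hypothetical teaching values; real deployments use large keys and padding schemes.

```python
# Toy public-key encryption (textbook RSA, tiny primes -- for exposition only).
from math import gcd

p, q = 61, 53                 # small demo primes (hypothetical values)
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, chosen coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

session_key = 42              # a small block of data, e.g. an encryption key
cipher = pow(session_key, e, n)   # anyone encrypts with the public key (e, n)
plain = pow(cipher, d, n)         # only the private-key holder can decrypt
assert plain == session_key
```

Note that only the public key (e, n) ever needs to travel, which is why the key distribution problem of private key systems does not arise here.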
Digital Signatures
The emergence of public key systems has introduced the concept of digital
signature. A sample digital signature scenario goes as follows:
1. (A) encrypts the data to be signed with his/her private key.
2. (A) then encrypts the result from (1) with (B)'s public key and sends it to
(B).
3. (B) decrypts the incoming data with his/her private key and then decrypts
the result with (A)'s public key.
4. If the initial data is obtained then this will authenticate the data and the
sender.
This is a simple example and is not used in practice, since it can be defeated by cutting and pasting from a captured authentic message.
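The four steps above can be sketched directly with textbook RSA (tiny, hypothetical primes; recipient B's modulus is chosen larger than A's so that step 1's output fits as input to step 2):

```python
# A hedged sketch of the naive sign-then-encrypt scenario above, using
# textbook RSA with tiny demo primes (not secure; illustration only).

def make_keys(p, q, e):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse of e (Python 3.8+)
    return (e, n), (d, n)                # (public key, private key)

A_pub, A_priv = make_keys(61, 53, 17)    # sender A
B_pub, B_priv = make_keys(89, 97, 5)     # recipient B (larger modulus)

data = 123
step1 = pow(data, *A_priv)               # 1. A "encrypts" with A's private key
step2 = pow(step1, *B_pub)               # 2. ... then with B's public key
step3 = pow(step2, *B_priv)              # 3. B decrypts with B's private key
step4 = pow(step3, *A_pub)               # 4. ... then with A's public key
assert step4 == data                     # original recovered: data and sender check out
```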
Message Digest
Both public and private key cryptography provide message integrity checks through checksums, but this is not a reliable method: since encryption of messages is usually done in small blocks of text, it is possible to delete or duplicate a section of the message without causing any problems with the checksum.
On the other hand, message digests provide a reliable method to check message integrity. A message digest function, also known as a "one-way function", takes a plain text message and generates from it a short fixed-length string that seems random. This string is known as a hash, and the original text cannot be obtained from the hash, hence the name one-way function.
These attributes of a message digest permit it to act as a digital fingerprint of the
original message. The message's hash will change drastically with slightest change
to the original message.
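This fingerprint property is easy to observe with a standard hash such as SHA-256 (used here only as an example; the argument does not depend on the particular digest function):

```python
# The "digital fingerprint" property: a small change in the message
# changes the digest drastically (SHA-256 shown as an example).
import hashlib

h1 = hashlib.sha256(b"Pay Bob $100").hexdigest()
h2 = hashlib.sha256(b"Pay Bob $900").hexdigest()
print(h1)
print(h2)

# Count differing bits between the two 256-bit digests;
# typically roughly half of the bits flip.
diff = bin(int(h1, 16) ^ int(h2, 16)).count("1")
print(diff)
```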
By combining message digests and encryption, a tamper-proof digital signature method can be used to send messages across the network. The Digital Signature Standard (DSS) is such a method, and it works as follows:
1. The sender runs the message to be sent through the message digest function
and obtains its hash.
2. The sender then encrypts the hash using his/her private key and sends it
along with the original message to the intended recipient.
3. When the recipient gets the message, he/she decrypts the hash using the
sender's public key, and compares the result with the hash obtained from
running the message through the message digest function again.
4. If both hashes are identical, then sender's identity and message integrity are
both verified.
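A minimal sketch of this hash-then-sign flow, with SHA-256 standing in for the digest function and textbook RSA standing in for the signature algorithm that DSS actually specifies (DSA); the key values are toy numbers for illustration only:

```python
# The four steps above, sketched with SHA-256 and textbook RSA in place of
# the actual DSA signature algorithm (tiny demo primes; not secure).
import hashlib

n, e, d = 3233, 17, 2753          # toy RSA key pair: public (e, n), private d

def digest(msg: bytes) -> int:
    # Reduce the hash mod n only because the toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"transfer 100 to account 7"
signature = pow(digest(msg), d, n)    # steps 1-2: hash, then sign with private key

# Steps 3-4 at the recipient: recover the hash and compare.
recovered = pow(signature, e, n)
assert recovered == digest(msg)       # sender identity and integrity verified
```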
What network security and cryptography will provide:
Authentication
The authentication service is concerned with assuring that a communication is
authentic. In the case of a single message, such as a warning or alarm signal, the
function of the authentication service is to assure the recipient that the message is
from the source that it claims to be from. In the case of an ongoing interaction,
such as the connection of a terminal to a host, two aspects are involved. First, at
the time of connection initiation, the service assures that the two entities are
authentic, that is, that each is the entity that it claims to be. Second, the service
must assure that the connection is not interfered with in such a way that a third
party can masquerade as one of the two legitimate parties for the purposes of
unauthorized transmission or
reception.
Two specific authentication services are defined in X.800:
• Peer entity authentication: Provides for the corroboration of the identity of a peer entity in an association. Two entities are considered peers if they implement the same protocol in different systems; e.g., two TCP modules in two communicating systems. Peer entity authentication is provided for use at the establishment of, or at times during the data transfer phase of, a connection. It attempts to provide confidence that an entity is not performing either a masquerade or an unauthorized replay of a previous connection.
• Data origin authentication: Provides for the corroboration of the source
of a data unit. It does not provide protection against the duplication or
modification of data units. This type of service supports applications like
electronic mail, where there are no prior interactions between the communicating
entities.
Access Control
In the context of network security, access control is the ability to limit and control
the access to host systems and applications via communications links. To achieve
this, each entity trying to gain access must first be identified, or authenticated, so
that access rights can be tailored to the individual.
Data Confidentiality
Confidentiality is the protection of transmitted data from passive attacks. With respect to the content of a data transmission, several levels of protection can be identified. The broadest service protects all user data transmitted between two users over a period of time. For example, when a TCP connection is set up between two systems, this broad protection prevents the release of any user data transmitted over the TCP connection. Narrower forms of this service can also be defined, including the protection of a single message or even specific fields within a message. These refinements are less useful than the broad approach and may even be more complex and expensive to implement.
The other aspect of confidentiality is the protection of traffic flow from analysis.
This requires that an attacker not be able to observe the source and destination,
frequency, length, or other characteristics of the traffic on a communications
facility.
Data Integrity
As with confidentiality, integrity can apply to a stream of messages, a single
message,
or selected fields within a message. Again, the most useful and straightforward
approach is total stream protection.
A connection-oriented integrity service, one that deals with a stream of
messages, assures that messages are received as sent with no duplication,
insertion,
modification, reordering, or replays. The destruction of data is also covered
under this service. Thus, the connection-oriented integrity service addresses both
message stream modification and denial of service. On the other hand, a
connectionless
integrity service, one that deals with individual messages without regard
to any larger context, generally provides protection against message modification
only.
We can make a distinction between service with and without recovery. Because
the integrity service relates to active attacks, we are concerned with detection
rather
than prevention. If a violation of integrity is detected, then the service may simply
report this violation, and some other portion of software or human intervention is
required to recover from the violation. Alternatively, there are mechanisms
available
to recover from the loss of integrity of data, as we will review subsequently.
The incorporation of automated recovery mechanisms is, in general, the more
attractive alternative.
Nonrepudiation
Nonrepudiation prevents either sender or receiver from denying a transmitted message. Thus, when a message is sent, the receiver can prove that the alleged sender in fact sent the message. Similarly, when a message is received, the sender can prove that the alleged receiver in fact received the message.
Availability Service
Both X.800 and RFC 2828 define availability to be the property of a system or a
system resource being accessible and usable upon demand by an authorized
system entity, according to performance specifications for the system (i.e., a
system is available if it provides services according to the system design whenever
users request them). A variety of attacks can result in the loss of or reduction in
availability. Some of these attacks are amenable to automated countermeasures,
such as authentication and encryption, whereas others require some sort of
physical action to prevent or recover from loss of availability of elements of a
distributed system. X.800 treats availability as a property to be associated with various security services. However, it makes sense to call out specifically an availability service. An availability service is one that protects a system to ensure its availability. This service addresses the security concerns raised by denial-of-service attacks. It depends on proper management and control of system resources and thus depends on the access control service and other security services.
SPECIFIC SECURITY MECHANISMS
May be incorporated into the appropriate protocol
layer in order to provide some of the OSI security
services.
Encipherment
The use of mathematical algorithms to transform data into a form that is not readily intelligible. The transformation and subsequent recovery of the data depend on an algorithm and zero or more encryption keys.
Digital Signature
Data appended to, or a cryptographic transformation
of, a data unit that allows a recipient of the data unit
to prove the source and integrity of the data unit and
protect against forgery (e.g., by the recipient).
Access Control
A variety of mechanisms that enforce access rights to
resources.
Data Integrity
A variety of mechanisms used to assure the integrity
of a data unit or stream of data units.
Authentication Exchange
A mechanism intended to ensure the identity of an
entity by means of information exchange.
Traffic Padding
The insertion of bits into gaps in a data stream to
frustrate traffic analysis attempts.
Routing Control
Enables selection of particular physically secure routes for certain data and allows routing changes, especially when a breach of security is suspected.
PERVASIVE SECURITY MECHANISMS
Mechanisms that are not specific to any particular OSI security service or protocol layer.
Trusted Functionality
That which is perceived to be correct with respect to
some criteria (e.g., as established by a security policy).
Security Label
The marking bound to a resource (which may be a
data unit) that names or designates the security
attributes of that resource.
Event Detection
Detection of security-relevant events.
Security Audit Trail
Data collected and potentially used to facilitate a
security audit, which is an independent review and
examination of system records and activities.
Security Recovery
Deals with requests from mechanisms, such as event handling and management functions, and takes recovery actions.
Notarization
The use of a trusted third party to assure certain
properties of a data exchange.
The encryption techniques:
Symmetric encryption is a form of cryptosystem in which encryption and decryption are performed using the same key. Symmetric encryption, also referred to as conventional encryption or single-key encryption, was the only type of encryption in use prior to the development of public-key encryption in the 1970s. It remains by far the more widely used of the two types of encryption. We begin with a look at a general model for the symmetric encryption process; this will enable us to understand the context within which the algorithms are used. Later sections examine the two most widely used symmetric ciphers: DES and AES.
Before beginning, we define some terms. An original message is known as the plaintext, while the coded message is called the ciphertext. The process of converting from plaintext to ciphertext is known as enciphering or encryption; restoring the plaintext from the ciphertext is deciphering or decryption. The many schemes used for encryption constitute the area of study known as cryptography. Such a scheme is known as a cryptographic system or a cipher. Techniques used for deciphering a message without any knowledge of the enciphering details fall into the area of cryptanalysis.
There are two requirements for secure use of symmetric encryption:
1. We need a strong encryption algorithm. At a minimum, an opponent should be unable to decrypt ciphertext or discover the key even if he or she is in possession of a number of ciphertexts together with the plaintext that produced each ciphertext.
2. Sender and receiver must have obtained copies of the secret key in a secure fashion and must keep the key secure. If someone can discover the key and knows the algorithm, all communication using this key is readable.
We assume that it is impractical to decrypt a message on the basis of the
ciphertext plus knowledge of the encryption/decryption algorithm. In other words,
we do not need to keep the algorithm secret; we need to keep only the key secret.
This feature of symmetric encryption is what makes it feasible for widespread use.
The fact that the algorithm need not be kept secret means that manufacturers can
and have developed low-cost chip implementations of data encryption algorithms.
These chips are widely available and incorporated into a number of products. With the use of symmetric encryption, the principal security problem is maintaining the secrecy of the key.
Let us take a closer look at the essential elements of a symmetric encryption scheme. A source produces a message in plaintext, X = [X1, X2, ..., XM]. The M elements of X are letters in some finite alphabet. Traditionally, the alphabet usually consisted of the 26 capital letters. Nowadays, the binary alphabet {0, 1} is typically used. For encryption, a key of the form K = [K1, K2, ..., KJ] is generated. If the key is generated at the message source, then it must also be provided to the destination by means of some secure channel. Alternatively, a third party could generate the key and securely deliver it to both source and destination.
Brute-force attack: The attacker tries every possible key on a piece of ciphertext until an intelligible translation into plaintext is obtained. On average, half of all possible keys must be tried to achieve success.
If either type of attack succeeds in deducing the key, the effect is catastrophic:
All future and past messages encrypted with that key are compromised.
We first consider cryptanalysis and then discuss brute-force attacks.
Table 2.1 summarizes the various types of cryptanalytic attacks based on the amount of information known to the cryptanalyst. The most difficult problem is presented when all that is available is the ciphertext only. In some cases, not even the encryption algorithm is known, but in general, we can assume that the opponent does know the algorithm used for encryption. One possible attack under these circumstances is the brute-force approach of trying all possible keys. If the key space is very large, this becomes impractical. Thus, the opponent must rely on an analysis of the ciphertext itself, generally applying various statistical tests to it. To use this approach, the opponent must have some general idea of the type of plaintext that is concealed, such as English or French text, an EXE file, a Java source listing, an accounting file, and so on.
A brute-force attack involves trying every possible key until an intelligible translation of the ciphertext into plaintext is obtained. On average, half of all possible keys must be tried to achieve success. Table 2.2 shows how much time is involved for various key spaces. Results are shown for four binary key sizes. The 56-bit key size is used with the Data Encryption Standard (DES) algorithm, and the 168-bit key size is used for triple DES. The minimum key size specified for Advanced Encryption Standard (AES) is 128 bits. Results are also shown for what are called substitution codes that use a 26-character key (discussed later), in which all possible permutations of the 26 characters serve as keys. For each key size, the results are shown assuming that it takes 1 μs to perform a single decryption, which is a reasonable order of magnitude for today's machines. With the use of massively parallel organizations of microprocessors, it may be possible to achieve processing rates many orders of magnitude greater. The final column of Table 2.2 considers the results for a system that can process 1 million keys per microsecond. As you can see, at this performance level, DES can no longer be considered computationally secure.
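These order-of-magnitude estimates can be reproduced directly under the stated assumptions (1 decryption per microsecond for a single machine, 10^6 keys per microsecond for the parallel system):

```python
# Rough brute-force estimates: average time to search half the key space
# at 1 decryption per microsecond, and at 10**6 keys per microsecond.
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (32, 56, 128, 168):
    keys = 2 ** bits
    avg_us = keys / 2                       # microseconds at 1 decryption/us
    years_single = avg_us / 1e6 / SECONDS_PER_YEAR
    years_parallel = years_single / 1e6     # 10**6 keys per microsecond
    print(f"{bits:3d}-bit key: {years_single:.3g} years (single), "
          f"{years_parallel:.3g} years (parallel)")
```

For the 56-bit case this yields a bit over a thousand years for the single machine, consistent with the DES figure quoted later, and about ten hours for the parallel machine.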
Cryptography
Cryptographic systems are characterized along three independent dimensions:
1. The type of operations used for transforming plaintext to ciphertext. All
encryption algorithms are based on two general principles: substitution, in
which each element in the plaintext (bit, letter, group of bits or letters) is
mapped into another element, and transposition, in which elements in the
plaintext are rearranged. The fundamental requirement is that no information
be lost (that is, that all operations are reversible). Most systems,
referred to as product systems, involve multiple stages of substitutions and
transpositions.
2. The number of keys used. If both sender and receiver use the same key, the
system is referred to as symmetric, single-key, secret-key, or conventional
encryption.
If the sender and receiver use different keys, the system is referred to as
asymmetric, two-key, or public-key encryption.
3. The way in which the plaintext is processed. A block cipher processes the
input one block of elements at a time, producing an output block for each
input block. A stream cipher processes the input elements continuously,
producing output one element at a time, as it goes along.
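Minimal sketches of the two general principles named in item 1, a Caesar-style substitution and a columnar transposition (both are classical toys, not secure ciphers):

```python
# Substitution: each letter is mapped to another letter.
# Transposition: the letters are rearranged; none are lost.

def caesar(text: str, shift: int) -> str:
    # Shift each capital letter by `shift` positions in the alphabet.
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def transpose(text: str, cols: int) -> str:
    # Write row by row into `cols` columns, then read column by column.
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    return "".join("".join(r[c] for r in rows if c < len(r)) for c in range(cols))

pt = "MEETMEAFTERTHEPARTY"
print(caesar(pt, 3))        # substitution: letters replaced
print(transpose(pt, 4))     # transposition: same letters, new order
assert caesar(caesar(pt, 3), -3) == pt          # reversible: no information lost
assert sorted(transpose(pt, 4)) == sorted(pt)   # same letters, rearranged
```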
THE DATA ENCRYPTION STANDARD
The Data Encryption Standard (DES) was adopted in 1977 by the National Bureau of Standards, now the National Institute of Standards and Technology (NIST), as Federal Information Processing Standard 46 (FIPS PUB 46). The algorithm itself is referred to as the Data Encryption Algorithm (DEA). For DES, data are encrypted in 64-bit blocks using a 56-bit key. The algorithm transforms 64-bit input in a series of steps into a 64-bit output. The same steps, with the same key, are used to reverse the encryption.
The DES has enjoyed widespread use. It has also been the subject of much controversy concerning how secure it is. To appreciate the nature of the controversy, let us quickly review the history of the DES.
Since its adoption as a federal standard, there have been lingering concerns about
the level of security provided by DES. These concerns, by and large, fall into two
areas: key size and the nature of the algorithm.
The Use of 56-Bit Keys
With a key length of 56 bits, there are 2^56 possible keys, which is approximately 7.2 × 10^16 keys. Thus, on the face of it, a brute-force attack appears impractical. Assuming that, on average, half the key space has to be searched, a single machine performing one DES encryption per microsecond would take more than a thousand years (see Table 2.2) to break the cipher.
However, the assumption of one encryption per microsecond is overly conservative.
As far back as 1977, Diffie and Hellman postulated that the technology
existed to build a parallel machine with 1 million encryption devices, each of which
could perform one encryption per microsecond [DIFF77]. This would bring the
average search time down to about 10 hours. The authors estimated that the cost
would be about $20 million in 1977 dollars.
DES finally and definitively proved insecure in July 1998, when the Electronic
Frontier Foundation (EFF) announced that it had broken a DES encryption using a
special-purpose “DES cracker” machine that was built for less than $250,000.
The attack took less than three days. The EFF has published a detailed description of the machine, enabling others to build their own cracker [EFF98]. And, of course, hardware prices will continue to drop as speeds increase, making DES virtually worthless.
It is important to note that there is more to a key-search attack than simply
running through all possible keys. Unless known plaintext is provided, the analyst
must be able to recognize plaintext as plaintext. If the message is just plain text in
English, then the result pops out easily, although the task of recognizing English
would have to be automated. If the text message has been compressed before
encryption, then recognition is more difficult. And if the message is some more
general
type of data, such as a numerical file, and this has been compressed, the problem
becomes even more difficult to automate. Thus, to supplement the brute-force
approach, some degree of knowledge about the expected plaintext is needed, and
some means of automatically distinguishing plaintext from garble is also needed.
The EFF approach addresses this issue as well and introduces some automated
techniques that would be effective in many contexts.
Fortunately, there are a number of alternatives to DES, the most important of
which are AES and triple DES, discussed in Chapters 5 and 6, respectively.
The Nature of the DES Algorithm
Another concern is the possibility that cryptanalysis is possible by exploiting the characteristics of the DES algorithm. The focus of concern has been on the eight substitution tables, or S-boxes, that are used in each iteration. Because the design criteria for these boxes, and indeed for the entire algorithm, were not made public, there was a suspicion that the boxes were constructed in such a way that cryptanalysis is possible for an opponent who knows weaknesses in them.
BLOCK CIPHER DESIGN PRINCIPLES
Although much progress has been made in designing block ciphers that are
cryptographically strong, the basic principles have not changed all that much since
the work of Feistel and the DES design team in the early 1970s. It is useful to begin
this discussion by looking at the published design criteria used in the DES effort.
Then we look at three critical aspects of block cipher design: the number of rounds,
design of the function F, and key scheduling.
DIVISIBILITY AND THE DIVISION ALGORITHM
Divisibility
We say that a nonzero b divides a if a = mb for some m, where a, b, and m are integers. That is, b divides a if there is no remainder on division. The notation b | a is commonly used to mean b divides a. Also, if b | a, we say that b is a divisor of a.
The positive divisors of 24 are 1, 2, 3, 4, 6, 8, 12, and 24.
13 | 182; -5 | 30; 17 | 289; -3 | 33; 17 | 0
Subsequently, we will need some simple properties of divisibility for integers, which are as follows:
• If a | 1, then a = ±1.
• If a | b and b | a, then a = ±b.
• Any b ≠ 0 divides 0.
• If a | b and b | c, then a | c:
11 | 66 and 66 | 198, so 11 | 198
• If b | g and b | h, then b | (mg + nh) for arbitrary integers m and n.
To see this last point, note that if b | g, then g is of the form g = b × g1 for some integer g1, and similarly h = b × h1 for some integer h1. Then mg + nh = mbg1 + nbh1 = b × (mg1 + nh1), and so b divides mg + nh.
The Division Algorithm
Given any positive integer n and any nonnegative integer a, if we divide a by n, we get an integer quotient q and an integer remainder r that obey the following relationship:
a = qn + r, 0 ≤ r < n; q = ⌊a/n⌋ (4.1)
where ⌊x⌋ is the largest integer less than or equal to x. Equation (4.1) is referred to as the division algorithm.
Figure 4.1a demonstrates that, given a and positive n, it is always possible to find q and r that satisfy the preceding relationship. Represent the integers on the number line; a will fall somewhere on that line (positive a is shown, a similar demonstration can be made for negative a). Starting at 0, proceed to n, 2n, up to qn, such that qn ≤ a and (q + 1)n > a. The distance from qn to a is r, and we have found the unique values of q and r. The remainder r is often referred to as a residue.
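Equation (4.1) can be checked mechanically; Python's built-in divmod returns exactly the quotient and remainder it defines:

```python
# Equation (4.1) in code: for positive n and nonnegative a,
# a = q*n + r with 0 <= r < n, where q = floor(a/n).
for a, n in [(11, 7), (75, 24), (0, 5)]:
    q, r = divmod(a, n)
    assert a == q * n + r and 0 <= r < n
    print(f"{a} = {q}*{n} + {r}")   # r is the residue of a mod n
```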
ADVANCED ENCRYPTION STANDARD
The Advanced Encryption Standard (AES) was published by the National Institute of
Standards and Technology (NIST) in 2001. AES is a symmetric block cipher that is
intended to replace DES as the approved standard for a wide range of applications.
Compared to public-key ciphers such as RSA, the structure of AES and most
symmetric ciphers is quite complex and cannot be explained as easily as many
other cryptographic algorithms. Accordingly, the reader may wish to begin with a
simplified version of AES, which is described in Appendix 5B. This version allows the
reader to perform encryption and decryption by hand and gain a good
understanding of the working of the algorithm's details. Classroom experience indicates that a study of this simplified version enhances understanding of AES. One possible approach is to read the chapter first, then carefully read Appendix 5B, and then re-read the main body of the chapter. This material also looks at the evaluation criteria used by NIST to select from among the candidates for AES, plus the rationale for picking Rijndael, which was the winning candidate. It is useful in understanding not just the AES design but the criteria by which to judge any symmetric encryption algorithm.
AES STRUCTURE
General Structure
The cipher takes a plaintext block size of 128 bits, or 16 bytes. The key length can be 16, 24, or 32 bytes (128, 192, or 256 bits). The algorithm is referred to as AES-128, AES-192, or AES-256, depending on the key length.
The input to the encryption and decryption algorithms is a single 128-bit block.
In FIPS PUB 197, this block is depicted as a square matrix of bytes.This block
is copied into the State array, which is modified at each stage of encryption or
decryption.
After the final stage, State is copied to an output matrix. These operations are
depicted in Figure 5.2a. Similarly, the key is depicted as a square matrix of
bytes. This key is then expanded into an array of key schedule words. Figure 5.2b
shows the expansion for the 128-bit key. Each word is four bytes, and the total key
schedule is 44 words for the 128-bit key. Note that the ordering of bytes within a
matrix is by column. So, for example, the first four bytes of a 128-bit plaintext
input to the encryption cipher occupy the first column of the in (input) matrix, the second
four bytes occupy the second column, and so on. Similarly, the first four bytes of
the expanded key, which form a word, occupy the first column of the w matrix.
The cipher consists of N rounds, where the number of rounds depends on the
key length: 10 rounds for a 16-byte key, 12 rounds for a 24-byte key, and 14
rounds for a 32-byte key (Table 5.1). The first N − 1 rounds consist of four distinct
transformation functions: SubBytes, ShiftRows, MixColumns, and AddRoundKey,
which are described subsequently. The final round contains only three
transformations, and there is a single initial transformation (AddRoundKey) before
the first round.
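The column-major loading of the State array and the per-key round counts can be sketched in a few lines of Python (the helper names here are ours, not from FIPS 197):

```python
# Load a 16-byte block into the 4x4 AES State array column by column:
# byte i of the input goes to row i % 4, column i // 4 (FIPS PUB 197).
def bytes_to_state(block: bytes):
    assert len(block) == 16
    return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

# Number of rounds per key length in bytes: AES-128/192/256 (Table 5.1).
ROUNDS = {16: 10, 24: 12, 32: 14}

state = bytes_to_state(bytes(range(16)))
print([row[0] for row in state])  # first column = first four input bytes: [0, 1, 2, 3]
print(state[0])                   # first row, one byte per column: [0, 4, 8, 12]
print(ROUNDS[16])                 # AES-128 runs 10 rounds
```

The two print statements make the column-major convention concrete: the first four bytes of the input form a column, not a row, of the State matrix.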
CRYPTOGRAPHIC HASH FUNCTIONS
The following describes a simple hash function: choose primes p and q and compute
N = pq. Choose e relatively prime to λ(N) and less than N. Then a number m
is hashed as follows:

h = e^m mod N

If there is an m′ that hashes to the same value as m, then e^m ≡ e^(m′) (mod N),
so e^(m − m′) ≡ 1 (mod N), which implies that breaking this amounts to finding a
multiple of λ(N) (the order of e divides λ(N)), which is the
hard problem in RSA.
a. Write a function that takes a bitlength and generates a modulus N of that
bitlength, together with an e less than N and relatively prime to λ(N).
b. Show the output of your function from part (a) for a few inputs.
c. Using N and e as arguments, write a function to perform the hashing.
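One way to code the exercise is sketched below, assuming the hash is h = e^m mod N with N = pq and gcd(e, λ(N)) = 1; the function names are ours, and the naive trial-division prime generator is only suitable for small, toy bit lengths:

```python
import math
import random

def random_prime(bits):
    # Toy prime generator (trial division) -- small bit lengths only.
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, top bit set
        if all(p % d for d in range(3, math.isqrt(p) + 1, 2)):
            return p

def make_params(bits):
    # Part (a): modulus N of roughly the requested bitlength, plus an
    # e < N with gcd(e, lambda(N)) == 1.
    p = random_prime(bits // 2)
    q = random_prime(bits // 2)
    while q == p:
        q = random_prime(bits // 2)
    n = p * q
    lam = math.lcm(p - 1, q - 1)  # Carmichael function lambda(N)
    e = random.randrange(2, n)
    while math.gcd(e, lam) != 1:
        e = random.randrange(2, n)
    return n, e

def toy_hash(n, e, m):
    # Part (c): h = e^m mod N.
    return pow(e, m, n)

# Part (b): a few sample outputs.
for bits in (16, 24, 32):
    n, e = make_params(bits)
    print(bits, n, e, toy_hash(n, e, 123456))
```

Because the parameters are random, the printed values differ on every run; what stays fixed is that each hash lies in [0, N) and is deterministic for a given (N, e, m).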
Conclusion
Security often requires that data be kept safe from unauthorized access. The
best line of defense is physical security (placing the machine to be protected behind
physical walls). However, physical security is not always an option (due to cost
and/or efficiency considerations). Instead, most computers are interconnected with
each other openly, thereby exposing them and the communication channels that
they use.
This problem can be broken down into five requirements that must be addressed:
1. Confidentiality: assuring that private data remains private.
2. Authentication: assuring the identity of all parties attempting access.
3. Authorization: assuring that a certain party attempting to perform a
function has the permissions to do so.
4. Data Integrity: assuring that an object is not altered illegally.
5. Non-Repudiation: preventing a party from denying data or a
communication that they initiated.
With regard to confidentiality, cryptography is used to encrypt data residing on
storage devices or traveling through communication channels to ensure that any
illegal access is not successful. Cryptography is also used to secure the process
of authenticating the different parties attempting any function on the system. A
party wishing to be granted a certain functionality on the system must present
something that proves they are indeed who they say they are. That something is
commonly known as credentials, and additional measures must be taken to ensure
that these credentials are only used by their rightful owner. The most classic and
obvious credentials are passwords. Passwords are encrypted, or more commonly
hashed, to protect against illegal usage.
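In practice, protecting stored passwords means salted one-way hashing rather than reversible encryption. A minimal sketch using Python's standard library (the iteration count here is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

# Salted password hashing with PBKDF2-HMAC-SHA256 (Python stdlib).
def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```

The salt defeats precomputed-table attacks, and the many PBKDF2 iterations slow down brute-force guessing; only the salt and digest are stored, never the password itself.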
Authorization is a layer built on top of authentication, in the sense that the
party is first authenticated by presenting the required credentials (passwords, smart
cards, etc.). After the credentials are accepted, the authorization process is
started to ensure that the requesting party has the permissions to perform the
functions needed.
Data integrity and non-repudiation are achieved by means of digital signatures,
a mechanism that relies on cryptographic techniques.
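The sign/verify mechanics behind digital signatures can be sketched with textbook RSA over a deliberately tiny modulus. This is illustrative only: the parameters below are toy values we chose, and real systems use padded signature schemes from a vetted library.

```python
import hashlib

# Textbook RSA signing over a tiny hard-coded modulus -- insecure toy
# parameters, purely to illustrate the sign/verify mechanics.
p, q = 61, 53
n = p * q                 # modulus N = 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent; gcd(17, 3120) == 1
d = pow(e, -1, phi)       # private exponent (modular inverse of e)

def digest(message):
    # Hash the message, then reduce mod n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    return pow(digest(message), d, n)         # signer uses private key d

def verify(message, sig):
    return pow(sig, e, n) == digest(message)  # anyone can check with e

sig = sign(b"hello")
print(verify(b"hello", sig))  # True
```

Because only the holder of d can produce a signature that verifies under e, a valid signature provides both integrity (the hashed message was not altered) and non-repudiation (the signer cannot plausibly deny producing it).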
REFERENCES:
1. William Stallings, Cryptography and Network Security: Principles and Practice.
2. en.wikipedia.org/wiki/Cryptography
3. E-book on network security fundamentals.
4. www.schneier.com/blog/archives/2011/04/security_risks_7.html
5. www.lifehacker.com