Research Paper - ODU Computer Science

Security in the
.NET 2.0 Framework Environment
A Project to Satisfy CS 497 Requirements
Research Paper
Keith Mulkey
UIN #00238076
13 August, 2006
Table of Contents
Section 1 Introduction ............................................................. 3
Purpose ............................................................................................ 3
Section 2 .NET Security .......................................................... 4
2.1 Cryptography ............................................................................. 4
2.1.1 Substitution Ciphers .......................................................... 5
2.1.2 Transposition Ciphers ....................................................... 6
2.1.3 Block Ciphers .................................................................... 7
2.1.4 Product Ciphers ................................................................. 7
2.1.5 Cryptographic Principles ................................................... 7
2.1.5.1 Redundancy ............................................................... 7
2.1.5.2 Freshness ................................................................... 8
2.2 Symmetric-Key Algorithms ...................................................... 8
2.2.1 DES – Data Encryption Standard ...................................... 8
2.2.2 Triple DES ......................................................................... 9
2.2.3 AES – Advanced Encryption Standard ............................. 9
2.3 Asymmetric-Key (Public-Key) Algorithms ............................ 10
2.3.1 RSA ................................................................................. 11
2.4 Digital Signatures .................................................................... 11
2.4.1 MD5................................................................................. 12
2.4.2 SHA-1 .............................................................................. 12
2.5 Public Key Distribution ........................................................... 13
2.6 Authentication ......................................................................... 14
2.6.1 Private-Key Authentication ............................................. 14
2.6.2 Diffie-Hellman Key Exchange ........................................ 15
Keith Mulkey
Page 1
2/15/2016
2.6.3 Key Distribution Center (KDC) ...................................... 16
2.6.4 Needham-Schroeder Protocol ........................................... 16
2.6.5 Kerberos .......................................................................... 17
2.6.6 Public-Key Authentication .............................................. 18
2.7 Authentication in .NET ........................................................... 18
2.7.1 Forms Authentication ...................................................... 19
2.7.2 Windows Authentication ................................................. 20
2.8 Authorization ........................................................................... 21
2.8.1 Code-Access Security ..................................................... 21
2.8.1.1 Evidence and Code Identity .................................... 22
2.8.1.2 Security Policy ........................................................ 23
2.8.1.3 Permissions ............................................................. 23
2.8.2 Role-Based Security ........................................................ 24
Section 1 Introduction
1.1 Purpose
The purpose of this project is to research the built-in security tools available to
developers working in the Microsoft .NET 2.0 Framework environment and to gain
enough knowledge to develop a tutorial application utilizing some of those tools.
This project is not a research project intended to enhance the framework nor is it a
project intended to find weaknesses and security “holes” in the Microsoft product.
This is a learning project. It is intended that this project will satisfy the requirements
of CS 497 Independent Study.
Section 2 .NET Security
There are four basic, closely related areas of security: authentication, integrity control,
non-repudiation, and secrecy.
Authentication is the area of network security that deals with verifying the identity of a
potential user before granting them access to sensitive information on your network.
When we speak of integrity control as it relates to network security, we are referring to
methods that can be used to verify that the message or data received is in fact the same as
the message or data that was originally sent. Repudiation is defined as the act of refusing
to acknowledge a contract or debt. Thus, non-repudiation is the part of network security
that we can use to help substantiate a claim that a message was indeed sent by the
particular individual whom we claim sent it. The security area of secrecy deals with
protecting information from the eyes of unintended recipients. This is what most people
think of when they hear the words “Network Security”.
These are the concerns addressed by cryptography and digital signatures.
2.1 Cryptography
The word cryptography comes from the Greek words for “secret writing” and there
are two basic categories of cryptography: cipher and code. A cipher is an encoding
scheme where characters or bits are transformed one for one without any regard to the
language or the structure of the message. A code on the other hand substitutes one
word for another word. For the most part, codes are not used anymore and therefore
will not be discussed any further in this paper. Suffice it to say however that they did
play a vital role in history, especially during World War II.
In cryptography, the data to be encrypted is known as plaintext. The plaintext is
encrypted using a key and the result is the ciphertext. It is the ciphertext that is then
transmitted from sender to receiver. The idea is that even if the encryption algorithm
is simple and well known, without the key that was used to encrypt the plaintext, the
decryption of the message is extremely difficult if not practically impossible. This
idea is based on Kerckhoffs’ principle: “All algorithms must be public; only the keys
are secret.” The result of this principle is that the cipher is only as strong as the key
and thus the length of the key is a major design issue. Cipher keys that are of interest
to this paper are measured by the number of bits used by the encryption key.
Consequently, a key that is 8 bits in length would have 2^8, or 256, different
combinations (and of course would be very simple to crack). A 128-bit key would
have 2^128, or more than 3.4 × 10^38, different possible keys. A brute-force approach
of simply trying every possible key, at the rate of one key each nanosecond,
would take more than 10^22 years to try every key.
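The brute-force arithmetic above is easy to check directly. A quick sketch (the one-key-per-nanosecond rate is the paper's own assumption):

```python
# Brute-forcing a 128-bit key at one guess per nanosecond (10^9 guesses/second).
keys = 2 ** 128                       # about 3.4e38 possible keys
seconds = keys / 1e9                  # total guesses divided by guesses per second
years = seconds / (3600 * 24 * 365)   # convert seconds to years
print(f"{keys:.2e} keys, {years:.2e} years")  # about 3.40e+38 keys, 1.08e+22 years
```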
2.1.1 Substitution Ciphers
Substitution ciphers are those where each letter or group of letters in the plaintext
message is replaced by another letter or group of letters to arrive at the
ciphertext. The oldest known substitution cipher is the Caesar cipher, where each
letter is replaced by the letter three positions later in the alphabet. For
example, A is replaced by D, B is replaced by E, and Z wraps around to C. In this
simple substitution cipher, the key is 3. Obviously, using today’s English-language
capital-letters-only alphabet, there are only 25 possible keys, and the cipher
would not take long to crack.
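The Caesar cipher can be sketched in a couple of lines (a minimal illustration for the capital-letters-only alphabet described above):

```python
def caesar(text, key=3):
    # Shift each capital letter `key` places forward, wrapping Z around to A.
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A")) for c in text)

print(caesar("ATTACK"))      # DWWDFN  (A->D, T->W, ...)
print(caesar("DWWDFN", 23))  # ATTACK  (shifting by 26 - 3 decrypts)
```

Since there are only 25 useful keys, trying all of them by hand is entirely feasible, which is exactly why the cipher is so weak.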
There are many ways to improve the basic substitution cipher, such as character
mapping, where each character of the alphabet is mapped to a different character
of the alphabet, giving a key space of 26!, or about 4 × 10^26, possible keys. This is
called monoalphabetic substitution. Trying each of these keys, one by one, at the
rate of one per nanosecond, in an attempt to crack the code would take more than 10^10
years to cover the entire key space.
While the monoalphabetic substitution cipher seems to be an excellent choice, if a
cryptanalyst has even a small amount of ciphertext to work with, he or she can
easily break the code using the statistical properties of natural languages. In the
English language for example, the six most commonly used letters are e, t, o, a, n,
and i in that order. Furthermore, the cryptanalyst will evaluate the ciphertext
looking for common digrams and trigrams, two and three letter combinations.
Common digrams are th, in, er, re, and an while common trigrams are the, ing,
and, and ion. Thus the cryptanalyst will evaluate the frequencies of letters,
digrams, and trigrams to determine tentative character mappings and then go from
there. Cryptanalysts can further analyze the message that they are trying to crack
by having some knowledge regarding the possible subject matter.
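The frequency-counting step the cryptanalyst begins with can be sketched as follows (the sample ciphertext is a hypothetical Caesar-shifted English sentence, used here only to show that the statistics survive substitution):

```python
from collections import Counter

def frequencies(ciphertext):
    # Tally letters only; in English plaintext the leaders are e, t, o, a, n, i.
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    return Counter(letters).most_common()

# A monoalphabetic cipher preserves letter statistics: whatever symbol 'e'
# maps to will usually top the list, giving a first tentative mapping.
sample = "wkh wuhdvxuh lv exulhg xqghu wkh rog rdn wuhh"
print(frequencies(sample)[:3])  # [('h', 8), ('u', 5), ('w', 4)] -- 'h' is 'e' shifted by 3
```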
2.1.2 Transposition Ciphers
Unlike substitution ciphers, which retain the order of the plaintext symbols,
transposition ciphers reorder the letters but do not perform any substitutions. The
key for the cipher is a word or phrase that does not contain any repeated letters.
Without going into minute detail, suffice it to say that the key word or phrase is
used to establish the order of the columns of a matrix and the plaintext message
itself is used to establish rows of the matrix. The column order is determined by
the character’s relative order in the alphabet. The ciphertext is then read from the
matrix column by column. A simple example using the word ROCK as the key to
encrypt the message “attack now” would be:
    R O C K
    4 3 1 2
    a t t a
    c k   n
    o w a b

To determine the ciphertext from the above matrix, we read column 1, then
column 2, and so on. Thus the ciphertext is “t aanbtkwaco” (the space in the
plaintext is carried along in column 1). The letters a and b that fill
out the matrix in the last row are padding. For a cryptanalyst to break this code he
or she must first determine that they are working with a transposition cipher
instead of a substitution cipher. For a message of any length, this can be
determined simply by looking at the frequency of the letters used. If the most
commonly used letters e, t, a, etc. fit the normal pattern for plaintext, then the
cryptanalyst can be fairly certain it is a transposition cipher since each letter
represents itself. Next the cryptanalyst tries to determine the number of columns
in the key by looking at digrams and then once this is determined, placing the
columns in the proper order.
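The ROCK example can be reproduced in a few lines (a sketch; the a, b, c… padding convention follows the example above):

```python
def transpose_encrypt(plaintext, key):
    # Pad the message (a, b, c, ...) to a whole number of rows.
    filler = "abcdefghijklmnopqrstuvwxyz"
    while len(plaintext) % len(key) != 0:
        plaintext += filler[0]
        filler = filler[1:]
    # Emit columns in alphabetical order of the key's letters;
    # plaintext[i::len(key)] is column i of the row-by-row matrix.
    order = sorted(range(len(key)), key=lambda i: key[i])
    return "".join(plaintext[i::len(key)] for i in order)

print(repr(transpose_encrypt("attack now", "ROCK")))  # 't aanbtkwaco'
```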
2.1.3 Block Ciphers
A block cipher is a cipher that operates on a fixed number of bits at a time. This
is what is referred to as a block. Examples of a block size are 64 bits or 128 bits.
Block ciphers are used in symmetric-key algorithms (see below) where the
encryption key is applied to each block in turn. If the plaintext message is not an
integral multiple of the block size, then the message is padded to make it so.
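Padding to a whole number of blocks can be done several ways; one common scheme (PKCS#7-style, an assumption here rather than anything the text specifies) records the pad length in each pad byte so it can be removed unambiguously:

```python
def pad(data: bytes, block_size: int = 16) -> bytes:
    # Append n copies of the byte n, where n completes the final block.
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    # The last byte records how many pad bytes to strip.
    return data[: -data[-1]]

padded = pad(b"hello")        # 16 bytes: b"hello" + 11 bytes of value 11
assert unpad(padded) == b"hello"
```

Note that a message already a multiple of the block size still gains one full block of padding; that keeps removal unambiguous.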
2.1.4 Product Ciphers
Product ciphers are a type of block cipher in which a sequence of substitutions,
permutations, and modular arithmetic is applied to each block of data.
There are usually several iterations involved in a product cipher, and while each
iteration by itself may not be secure, the idea is that a sufficiently large number of
iterations will create enough confusion to make decryption very difficult. An
important class of product ciphers is known as Feistel ciphers.
2.1.5 Cryptographic Principles
There are two basic principles that underlie all cryptographic systems:

•  Messages must contain some redundancy
•  Some method is needed to foil replay attacks
2.1.5.1 Redundancy
The first fundamental principle of cryptography states that all encrypted
messages must contain some redundancy. What this means is that all
messages should contain information that is not needed to understand the
contents of the message but can be used to tell whether or not the message is
valid. Using redundancy, we can prevent active intruders from generating
garbage messages that cause the protected system to act on a bogus request.
The redundant information must, however, be of a form that does not simplify
the cryptanalyst’s attempts to crack the message.
2.1.5.2 Freshness
The second fundamental principle is that we need some way to determine
whether a received message is recent, thus limiting the possibility of
replay attacks. One way of doing this is to store received
messages for a short period of time and compare incoming messages with
those stored. If a duplicate is detected, it is discarded. If a message is
received that has a timestamp older than the short timeout period, it too is
rejected.
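The duplicate-plus-timeout check described above can be sketched as follows (the 60-second window and the message-id scheme are illustrative assumptions, not values from the text):

```python
import time

WINDOW = 60.0          # seconds a message is considered fresh (assumed value)
seen = {}              # message id -> arrival time

def accept(msg_id, timestamp, now=None):
    now = time.time() if now is None else now
    # Reject stale timestamps and replayed ids seen within the window.
    if now - timestamp > WINDOW or msg_id in seen:
        return False
    # Forget ids older than the window before recording the new one.
    for mid, t in list(seen.items()):
        if now - t > WINDOW:
            del seen[mid]
    seen[msg_id] = now
    return True

assert accept("m1", 100.0, now=100.0)
assert not accept("m1", 100.0, now=101.0)   # replayed duplicate rejected
assert not accept("m2", 0.0, now=100.0)     # stale timestamp rejected
```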
2.2 Symmetric-Key Algorithms
Today, cryptography still uses the same basic concepts of transposition and
substitution ciphers but the idea of keeping the algorithm simple has vanished.
Modern encryption algorithms place emphasis on making the encryption algorithm
extremely complex, so much so that even in the presence of large amounts of
ciphertext, the cryptanalyst still faces an almost impossible task of cracking the
encrypted message unless the encryption key is obtained. One category of these
complex algorithms is referred to as symmetric-key algorithms. These are so named
because the same key is used to encrypt and decrypt the message. Obviously, when
using symmetric-key algorithms, the exchange of the key itself must be done in a
secure manner. Three such algorithms are discussed below.
2.2.1 DES – Data Encryption Standard
In 1977, the United States National Bureau of Standards (NBS) adopted an
encryption algorithm proposed by IBM and termed it the Data Encryption
Standard (DES). DES was quickly adopted for use in voice grade communication
and other non-digital media and in 1980, the American National Standards
Institute (ANSI) set it as the banking industry’s encryption standard.
DES is a Feistel cipher algorithm that uses a 56-bit encryption key to encrypt 64-bit
blocks of plaintext at a time. DES, however, is no longer secure when used in
its original form. A non-profit group named the Electronic Frontier Foundation
devised a specialized computer for under $250,000 that could crack DES
encrypted messages in less than three days. Furthermore, for a mere $1 million,
one can obtain a hardware device that can crack DES in only 3½ hours.
2.2.2 Triple DES
Triple DES was developed to address the shortcomings of DES and is really quite
simple in concept. It also has the benefit that since it still uses the same
encryption and decryption algorithms as DES, the implementing software is easy
to modify. In Triple DES, two and sometimes three keys are used in three
different stages of the encryption. First, one key is used to encrypt the plaintext
message. Next a second key is used to decrypt the message. Since the second
key is not the same key that was used to encrypt the message in the first stage, the
output of the second stage is still ciphertext. Then in the final stage, the
encrypted-then-decrypted ciphertext is encrypted again either with the first key (if
using only two keys) or with the third key. Another advantage of Triple DES was
that it provides backward compatibility with DES since to revert to DES
encryption, one only need use the same key for all three stages. The three stage
encryption/decryption process of course triples the amount of time it takes to
encrypt and decrypt messages.
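The encrypt-decrypt-encrypt (EDE) staging can be sketched with a toy XOR "cipher" standing in for DES. To be clear, XOR is emphatically not DES; it is used here only because it makes the key arithmetic, and the backward-compatibility property, visible in a few lines:

```python
def toy_encrypt(block, key):
    # Toy stand-in for DES: XOR the block with the key.
    return block ^ key

toy_decrypt = toy_encrypt  # XOR is its own inverse, standing in for the DES D stage

def triple_ede(block, k1, k2, k3):
    # Stage 1 encrypts with k1, stage 2 *decrypts* with k2, stage 3 encrypts with k3.
    return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

block, k1 = 0x1234, 0xAAAA
# Backward compatibility: with one key in all three stages, EDE reduces to single DES.
assert triple_ede(block, k1, k1, k1) == toy_encrypt(block, k1)
```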
2.2.3 AES – Advanced Encryption Standard
In 1997, with the impending demise of DES and to some degree, Triple DES, the
United States Government set out to obtain a new encryption standard. Rather
than attempt to develop a new standard on their own however, the National
Institute of Standards and Technology (NIST) sponsored a “contest” to see who
could come up with the best encryption algorithm. Fifteen encryption proposals
were submitted and in 2000 the NIST announced the selection of the Rijndael
encryption algorithm as the winner. The name of the algorithm derives from the
names of the two Belgian cryptographers that developed it. In November 2001,
Rijndael became the encryption standard for the U.S. Government. While the
Rijndael algorithm is a block cipher, it is not a Feistel or product cipher. Rijndael
is a substitution-permutation cipher. The Rijndael algorithm itself accommodates
separate key lengths and block sizes with either being of length 128 bits to 256
bits in increments of 32 bits. The AES however, dictates that the block size must
be 128 bits and the key lengths can be 128, 192, or 256 bits. To date, the only
successful attacks on the Rijndael encryption have been side-channel attacks.
Side-channel attacks are attacks that do not attack the actual algorithm itself but
instead attack a particular implementation of the algorithm on systems that have
leaked data.
2.3 Asymmetric-Key (Public-Key) Algorithms
While AES certainly seems to be a very secure encryption system, the main weakness
of it and all other symmetric-key encryption systems is the distribution of the keys.
This certainly limits the usefulness of symmetric-key encryption by itself in a public
environment like the Internet. This fact led two researchers at Stanford University in
1976 to propose a new approach to cryptography, one in which the encryption key is
different from the decryption key. The encryption key is made publicly available,
which leads to the name Public-Key Encryption. The decryption key is kept private
and is known only to the recipient of the public-key encrypted message. To establish
a secure communication channel between two computers on the Internet, user A
employs user B’s public key to encrypt a request message destined for user B. User
B upon receiving user A’s request uses his private key to decrypt the message. Since
user B is the only one with knowledge of the correct decryption key, user B is the
only one who can successfully decrypt user A’s request. To reply to user A’s request,
user B employs user A’s public key to encrypt the reply and user A then uses his
private decryption key to decrypt the reply from user B. Thus, a secure
communication channel has been established between the two users. The trick is to
find an algorithm that allows us to decrypt a message using a different key than the
one used to encrypt the message, an algorithm that makes it almost impossible to
determine the private key given the public key, and an algorithm that can’t be broken
with a plaintext attack.
2.3.1 RSA
Many researchers have spent a great deal of time trying to devise public-key
algorithms. In 1978, a trio of researchers at M.I.T. discovered an algorithm that
has withstood all attempts to crack the scheme. The algorithm is named RSA
from the initials of the three researchers’ last names. About the only
drawback of the algorithm is the key length required for good security. The RSA
algorithm requires keys of at least 1024 bits. This is compared to a key length of
128 bits for AES and thus the algorithm is very slow in comparison. So, the
obvious question is “If the algorithm is so slow, what good is it?” The answer is
that in practice, RSA is ideal for distributing a single session symmetric key to a
user and then using AES or Triple DES for the encryption and decryption of large
amounts of data. While RSA is not the only public-key algorithm in existence, it
is probably the most widely used and is the only one discussed in this paper.
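The mechanics of RSA can be seen with textbook-sized numbers. This is a toy sketch only: real keys are 1024 bits or more, as noted above, and real systems add padding to the message before encrypting.

```python
# Key generation from two small primes (real primes are hundreds of digits long).
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+): 2753

m = 65                     # a message, encoded as a number smaller than n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```

The slowness the text mentions comes from these modular exponentiations being performed on thousand-bit numbers, which is why RSA is typically used only to transport a short symmetric session key.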
2.4 Digital Signatures
In society, probably the most often used form of proof that an individual did in fact
agree to a contract is the handwritten signature. Obviously, there is no way of
handwriting your signature on an electronic message. This is the function of the
digital signature. Digital signatures are intended to serve at least two purposes:

•  They can be used by the receiver to verify that the sender of the message is who
   he or she claims to be. This has to do with authentication.
•  They can be used by the receiver to later prove that the message was initiated by
   the sender. This is at the heart of non-repudiation. Of course, for this to be
   a truly useful tool, the digital signature must be devised in such a manner as to
   make it impossible for the receiver to have created the message himself.
To achieve these purposes, cryptography is used for digital signatures, and just as
with encryption algorithms, there are symmetric-key signatures and asymmetric-key,
or public-key, signatures. Like symmetric-key encryption, symmetric-key
signatures involve the use of identical keys at the sending and receiving ends.
Public-key signatures are of course implemented using public-key encryption: one key
is used to encrypt while a different key is used to decrypt. In both cases, the end goal
is that the digital signature includes encrypted information that could only have been
encrypted by the sending party or by a trusted central authority. One digital signature
scheme that works in public-key cryptosystems and is based on one-way hash
functions is called a message digest. The two most widely used message digest
algorithms are MD5 and SHA-1.
2.4.1 MD5
MD5 is one of five message digest algorithms designed by Ronald Rivest of
M.I.T. Mr. Rivest is the “R” in RSA. It works by mangling the bits of the
message in such a way that every output bit is affected by every input bit. MD5
has been widely used to provide assurance that downloaded files have arrived
intact and unaltered. This is done by comparing a publicly
published MD5 sum for a download with the MD5 sum computed for the
downloaded file. MD5 is also widely used to store passwords. The MD5
algorithm works with data in 512 bit blocks and produces a message digest that is
128 bits in length.
2.4.2 SHA-1
SHA-1 is an acronym for Secure Hash Algorithm 1. It was developed by the
United States National Security Agency and approved by the NIST. SHA-1 also
works with 512-bit blocks of data but produces a 160-bit message digest instead
of a 128-bit one.
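Both digests are available in Python's standard `hashlib` module, which makes the download-integrity check described above a one-liner (the sample data is of course a stand-in):

```python
import hashlib

data = b"example download"
md5_hex = hashlib.md5(data).hexdigest()
sha1_hex = hashlib.sha1(data).hexdigest()

# Digest lengths match the text: 128 bits for MD5, 160 bits for SHA-1.
print(len(md5_hex) * 4, len(sha1_hex) * 4)   # 128 160

# Integrity check: recompute the sum and compare against the published one.
assert hashlib.md5(data).hexdigest() == md5_hex
```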
2.5 Public Key Distribution
As mentioned above, public-key encryption is an excellent way of providing
symmetric session keys for secure channel communication and for digitally signing
messages using MD5 or SHA-1 one-way hashing algorithms. The obvious question
that quickly comes to mind is “How do we distribute the public key in the first
place?” One might suppose that the easiest way is to simply post your public key on
your website and then whoever needs it can obtain it from there. The problem with
this approach is that a user retrieving a public key from a public website
cannot be certain that the key they are receiving in fact came from the intended
website. It is not very difficult to hijack a TCP session and thus present a look-alike
website where the intruder can provide his or her own public key. When the user
sends a message encrypted with this “fake” public key to the hijacker’s website, it
is a simple matter to decode the message and gain access to the information that was
supposed to be protected by the secure communications.
Clearly, a better approach to distributing public keys is needed. One idea might be to
have a central distribution center that everyone could go to for public keys. The
obvious problem here is that the center would have to be on line and accessible 24
hours a day, 7 days a week. Any down time whatsoever could not be tolerated.
Additionally, having a centralized distribution center creates a bottleneck and
presents a single point of failure. A better solution is one that would not require a
distribution center to be on line at all. Instead, the central distribution center provides
certificates that people, companies, and organizations can use to verify that the public
key that they have obtained does in fact belong to the party they think it belongs to. An
organization that certifies public keys is called a Certification Authority, or CA for short.
The way it works with a CA is that a user who wishes to distribute his public key has
the public key certified by the CA. The CA issues a certificate that indicates that the
public key is bound to (owned by) the user making the application. The CA generates
a SHA-1 hash value for the certificate and then signs it using its private key. This
certificate can then be posted by the user on his website. When a client wishes to
obtain his public key, the key is provided via the certificate. The client can then
verify that the certificate is indeed valid using the certificate, his own computed
SHA-1 hash value, the CA-computed hash value, and the CA’s public key. The standardized
format for these digital certificates is called the X.509 standard and is in widespread
use on the Internet.
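The hash-then-sign step the CA performs can be sketched by combining the SHA-1 hash with the toy RSA numbers discussed earlier. This is a sketch under loud assumptions: real CA keys are 2048+ bits, the certificate body shown is a made-up placeholder, and the hash is truncated only so it fits the toy modulus.

```python
import hashlib

# Toy CA key pair (same textbook primes as the RSA discussion).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

cert_body = b"subject=example.com;publickey=..."   # hypothetical certificate contents
# Hash the certificate; reduce mod n so the value fits the toy modulus.
h = int.from_bytes(hashlib.sha1(cert_body).digest(), "big") % n
signature = pow(h, d, n)           # CA signs the hash with its *private* key

# Any client can verify with only the CA's *public* key (e, n): recompute the
# hash from the certificate and check it matches the decrypted signature.
assert pow(signature, e, n) == h
```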
2.6 Authentication
Authentication is the process of verifying the identity of a potential user before that
user is granted access to any information or resource. Authentication is often
confused with authorization. The difference however is that while authentication is
the process of verifying identity, authorization is the act of granting specific
permissions once a user has been authenticated.
Having to show identification in the form of a driver’s license before being admitted
to an event is a type of authentication that everyone has encountered at some point in
their life. Obviously though, this form of authentication cannot be used over a wide
area network. Since we are not face-to-face with our desired communication partner,
the problem of authentication becomes a two-way street. Not only does the client
need to be able to present secure credentials to the server, the client must also have
some way of verifying that the server with which he or she is connected is in fact the
server of his or her intent. Secure authentication of course depends on cryptography
and, like other uses of cryptography, authentication can be based on a shared secret
key (private key) or on a public-private key pair.
2.6.1 Private-Key Authentication
Private-key authentication protocols are all basically variations of a
challenge-response protocol. In challenge-response protocols, the two parties share a secret
or private key that they can use to encrypt a random number received from the
other party. There are several variations of this protocol but they all boil down to
a common concept. A client contacts a server providing his or her identity thus
asking to be authenticated. The server responds with a random number. This is
the challenge. The client, using the private key, encrypts the number (usually with
Triple DES or AES) and returns it to the server. This is the response. If the
server can decrypt this number using the shared private key, then the server is
certain that the client is who he claims to be. This, however, does nothing to
assure the client that the server is who it claims to be. Consequently, the client
also sends a random number, the client’s challenge, to the server who then
encrypts the number using the shared private key and returns the ciphertext, the
server’s response, to the client. If the client can successfully decrypt the returned
ciphertext using the shared private key, then the client is certain that the server is
the intended communications partner. As stated above, there are variations on
how and when the identity, random numbers (challenges), and encrypted
responses are sent but the principle is the same.
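The mutual challenge-response loop can be sketched as follows. Note the substitution: an HMAC stands in here for the Triple DES/AES encryption the text describes, purely to keep the example within the standard library; the protocol shape (fresh random challenge, keyed response, verification) is the same.

```python
import hmac, hashlib, secrets

SHARED_KEY = b"pre-shared-secret"   # hypothetical key known only to both parties

def prove(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Keying a MAC over the challenge proves knowledge of the key
    # without ever transmitting the key itself.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server authenticates the client: fresh random challenge, verified response...
server_challenge = secrets.token_bytes(16)
client_response = prove(server_challenge)
assert hmac.compare_digest(client_response, prove(server_challenge))

# ...and the client issues its own challenge to authenticate the server.
client_challenge = secrets.token_bytes(16)
server_response = prove(client_challenge)
assert hmac.compare_digest(server_response, prove(client_challenge))
```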
2.6.2 Diffie-Hellman Key Exchange
Like symmetric-key encryption, a major concern regarding private-key
authentication is the method with which the two parties obtain the secret key to be
used during authentication. There is a protocol that allows a key to be exchanged
in a secure manner. It is known as the Diffie-Hellman key exchange. The basis
for this key exchange is rooted in modular arithmetic and when used, an intruder
that simply “sniffs” the message exchange cannot compute the secret key given
the random numbers that are passed back and forth. The Diffie-Hellman key
exchange however does have a weakness. It is susceptible to a bucket brigade or
man-in-the-middle attack where an intruder can actually insert itself in between
the client and the server, capture the key exchange messages going from the client
to the server and establish a key with the client. Likewise, the intruder initiates a
key exchange with the server and thus can decode all messages from the server.
With the intruder inserted in the middle of the communications, all data can be
clearly read.
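The modular arithmetic at the heart of the exchange fits in a few lines. This is a toy sketch: real deployments use primes of 2048 bits or more, and, as noted above, the basic exchange shown here is still vulnerable to a man-in-the-middle.

```python
# Toy Diffie-Hellman exchange with a tiny prime.
p, g = 23, 5              # public parameters: a prime modulus and a generator

a = 6                     # client's private value (never transmitted)
b = 15                    # server's private value (never transmitted)

A = pow(g, a, p)          # client sends g^a mod p in the clear
B = pow(g, b, p)          # server sends g^b mod p in the clear

# Each side combines its own secret with the other's public value; a sniffer
# who sees only A and B cannot feasibly recover the result.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret   # both arrive at g^(ab) mod p
```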
2.6.3 Key Distribution Center (KDC)
Even if there were no weakness in the Diffie-Hellman key exchange, a practical
consideration renders the technique impractical for actual use:
to authenticate many users simultaneously, you would need a separate key
for each user. For a busy website, this could quickly become unwieldy. Another
approach is to have a central Key Distribution Center (KDC) with which users
can register a private key. The approach is that a client contacts the KDC and
uses its private key, which is known only to the client and the KDC, to encrypt the
identity of a particular server along with a session key to be used in a connection to
that server. The KDC decrypts the message and then uses the server’s
private key to encrypt the client’s identity and session key. This information is
then passed on to the server who then accepts the message as authentic since it
came from the known and trusted KDC. This approach also has a major flaw. It
is susceptible to a replay attack. For a detailed explanation of replay attacks, refer
to Reference 1.
2.6.4 Needham-Schroeder Protocol
One approach to solving the replay attack problem associated with the simple
KDC authentication was developed by Roger Needham and Michael Schroeder
and published in 1978. The Needham-Schroeder protocol involves the use of a
KDC, but instead of the KDC being the “go-between,” it communicates only with
the potential client. The client begins by providing, in plaintext, to the KDC a
random number, the client identity, and the desired server identity. The KDC
uses the client’s private key to encrypt a response that contains the random
number, the server’s identity, a session key, and what is known as a ticket. The
ticket contains the client’s identity and the session key that have been encrypted
using the server’s private key. Upon receiving this information from the KDC,
the client contacts the desired server using the ticket and a new random number
that is encrypted using the session key. The server responds to this challenge by
returning the random number less 1 encrypted with the session key and a random
number of its own. The client then responds with the server’s random number
less one encrypted with the session key. While all of this does defeat the
possibility of a replay attack, a slight weakness was found but was later corrected
by the Otway-Rees protocol in 1987.
2.6.5 Kerberos
Kerberos is an authentication protocol that is based on the Needham-Schroeder
protocol and is the default method for authenticating users in Windows 2000 and later.
The biggest difference between Kerberos and Needham-Schroeder is Kerberos’
assumption that all clocks on the network are fairly well synchronized. Kerberos
involves three servers: the Authentication Server (AS) which verifies users during
login and can be likened to the KDC in terms of knowing the secret key (or
password) for each user; the Ticket-Granting Server (TGS) which issues the
identity tickets and is again like the KDC in that it issues the tickets; and the
server that is actually the target of the client’s request.
To logon using Kerberos, a client contacts the AS and provides his identity in
plaintext. The AS generates a response for the client by using the TGS private
key to encrypt the client’s identity and a session key. The AS then uses the
client’s private key that is known only to the client and the AS to encrypt the
TGS-key encrypted data along with another copy of the session key. This
information is passed back to the client. When this package arrives, the client
workstation asks for the user’s password. The password is used to generate the
user’s private key which is then used to decrypt the package that was received
from the AS. Provided the decryption was successful, the client contacts the TGS
and provides the TGS-key encrypted identity/session key, the desired server’s
identity, and a time stamp that is encrypted using the session key provided by the
AS. In response to this message, the TGS provides the client with an identity
ticket that consists of the client’s identity and a session key that can be used
between the client and the desired server. This identity ticket is then encrypted
with the desired server’s private key thus making it impossible for an intruder to
alter. At this point, the client can establish an authenticated connection with the
desired server using the identity ticket and a time stamp encrypted with the
client/server session key. If the client wishes to establish a different connection
with a different server, it begins by contacting the TGS to obtain a new identity
ticket and client/server session key.
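The three-server flow described above can be modeled in the same toy style. This is a sketch only: the key names and helper functions are hypothetical, real Kerberos checks timestamp freshness and ticket lifetimes, and real deployments use real ciphers.

```python
# Toy sketch of the Kerberos exchange described above.
import time

def encrypt(key, payload):
    return ("enc", key, payload)

def decrypt(key, box):
    tag, k, payload = box
    assert tag == "enc" and k == key, "wrong key"
    return payload

KEY_USER = "key-derived-from-password"   # known to user and AS
KEY_TGS = "key-tgs"                      # known to AS and TGS
KEY_SRV = "key-server"                   # known to TGS and target server

# 1. AS login: the client sends its identity in plaintext and gets back a
#    package sealed with the user's key: a TGS session key plus a TGT.
def as_login(user):
    s_tgs = "sess-client-tgs"
    tgt = encrypt(KEY_TGS, (user, s_tgs))
    return encrypt(KEY_USER, (s_tgs, tgt))

s_tgs, tgt = decrypt(KEY_USER, as_login("alice"))

# 2. TGS request: the TGT plus a timestamp sealed with the TGS session key.
#    (The server identity would select KEY_SRV; fixed here for brevity.)
def tgs_issue(tgt, authenticator, server):
    user, sess = decrypt(KEY_TGS, tgt)
    ts = decrypt(sess, authenticator)    # would be checked for freshness
    s_srv = "sess-client-server"
    ticket = encrypt(KEY_SRV, (user, s_srv))
    return encrypt(sess, (s_srv, ticket))

s_srv, ticket = decrypt(
    s_tgs, tgs_issue(tgt, encrypt(s_tgs, time.time()), "fileserver"))

# 3. The target server opens the identity ticket and learns who is calling
#    and which session key to use; the client never sees KEY_SRV.
user, srv_session = decrypt(KEY_SRV, ticket)
assert user == "alice" and srv_session == s_srv
```

Note how the user's password-derived key is used exactly once, at login; everything afterward rides on short-lived session keys, which is the point of the ticket-granting design.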
2.6.6. Public-Key Authentication
If there is a Public Key Infrastructure in place that is known to both the client and
the server, then public-key cryptography can be used to perform mutual
authentication. The client initiates the process by asking the PKI directory service
for the public key of the desired server. Upon receiving the public key, the client
requests a session with the server by sending its identity and a random number
in a packet that has been encrypted using the server's public key. The server decrypts
the request using its private key and then asks the PKI directory service for the
public key of the client identified in the request. The server, after receiving the
requesting client’s public key from the PKI, responds to the client’s session
request by encrypting a package that contains the client's random number, its
own random number, and a session key. The key used for this is, of course, the
public key of the client. The client then responds back to the server with a
message containing the server’s random number that has been encrypted with the
session key provided by the server. From this point on, all communications with
the server are done using the provided session key.
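The mutual authentication above can also be sketched as a toy model. The key pairs and PKI lookup are simulated stand-ins; a real .NET application would use RSA classes and a genuine certificate directory.

```python
# Toy sketch of public-key mutual authentication as described above.
import secrets

def enc_pub(pub, payload):
    return ("pub", pub, payload)

def dec_priv(priv, box):
    tag, pub, payload = box
    assert tag == "pub" and pub == "pub-" + priv, "wrong key pair"
    return payload

def enc_sym(key, payload):
    return ("sym", key, payload)

def dec_sym(key, box):
    tag, k, payload = box
    assert tag == "sym" and k == key, "wrong session key"
    return payload

# Simulated PKI directory service: identity -> public key.
pki = {"alice": "pub-priv-alice", "bob": "pub-priv-bob"}

# 1. Client looks up the server's public key, then sends its identity and
#    a random number, encrypted with that public key.
na = secrets.randbelow(1 << 32)
msg1 = enc_pub(pki["bob"], ("alice", na))

# 2. Server decrypts with its private key, looks up the client's public
#    key, and replies with both nonces plus a fresh session key.
ident, r_na = dec_priv("priv-bob", msg1)
nb, session = secrets.randbelow(1 << 32), "sess-ab"
msg2 = enc_pub(pki[ident], (r_na, nb, session))

# 3. Client verifies its nonce came back, then proves possession of the
#    session key by returning the server's nonce under it.
c_na, c_nb, c_session = dec_priv("priv-alice", msg2)
assert c_na == na
msg3 = enc_sym(c_session, c_nb)
assert dec_sym(session, msg3) == nb   # server accepts; session established
```

Only the holder of the server's private key can read msg1, and only the holder of the client's private key can read msg2, which is what makes the authentication mutual.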
2.7. Authentication in .NET
The .NET 2.0 Framework contains numerous classes to support user authentication
both in a LAN environment and over the World Wide Web.
In Microsoft Windows based LAN environments, Windows 9x clients use NT LAN
Manager (NTLM) authentication while Windows 2000 and later clients attempt first
to use Kerberos authentication. If the server is unable to support Kerberos, NTLM is
used.
For the Internet, Microsoft ASP.NET implements authentication through three core
authentication models: Forms Authentication, Windows Authentication, and Passport
Authentication. Each of these modules implements the IHttpModule interface which
provides access to the events associated with user authentication. Since these
modules implement a common interface, they each have their own unique
Authenticate event. It should be noted that the PassportAuthenticationModule uses
Microsoft’s Passport authentication service which relies on Microsoft’s Passport
database. This database is the same one used for the free Hotmail email system. The
advantage of using this module is that the credentials required for logon are
maintained by the Passport system freeing your system of the burden of maintaining
the information. The downside is that to use this module, you must obtain and
maintain a Passport usage subscription with Microsoft for an annual fee. Due to this
fact, Passport Authentication will not be discussed any further in this paper.
2.7.1. Forms Authentication
Forms Authentication allows a developer to design his or her own unique login
pages and associated authentication logic while still using ASP.NET to track user
and role information using encrypted cookies. Forms authentication is a ticket-based system, which means that when a user logs in to your web site, they receive
a ticket with user information. The ticket is stored in encrypted form as a cookie
on the user’s hard drive. When each subsequent request is made to the web site,
the ticket is automatically attached to authenticate the user. Upon receiving a
request for a page that is not available to anonymous users, ASP.NET checks for a
ticket from the requesting user. If none is attached, the user is automatically
redirected to the login page. Using Forms authentication, you have complete
control since authentication does not rely on any other external authentication
system (e.g. Windows authentication). One word of caution however; since
Forms authentication uses standard HTML forms for prompting for user
credentials, you must use Secure Sockets Layer (SSL) to encrypt and transmit the
information. Otherwise, the user's logon credentials are transmitted in the clear,
leaving them vulnerable to interception.
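Forms authentication is typically switched on in the application's web.config file. The fragment below is a minimal sketch using the ASP.NET 2.0 configuration elements; the loginUrl, protection, and timeout values shown are placeholder choices for illustration.

```xml
<!-- web.config fragment: enable Forms authentication and redirect
     anonymous users to the login page ("?" denotes anonymous). -->
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="Login.aspx" protection="All" timeout="30" />
  </authentication>
  <authorization>
    <deny users="?" />
  </authorization>
</system.web>
```

With this in place, requests that carry no valid ticket are automatically redirected to Login.aspx, matching the behavior described above.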
2.7.2. Windows Authentication
Unlike Forms Authentication, Windows Authentication is not built into
ASP.NET. Instead, the responsibility of authentication is relegated to Internet
Information Server (IIS). In Windows Authentication, IIS queries the client’s
browser for authentication credentials that map to a Windows user account. If the
user can be authenticated through Windows, then IIS permits the web page
request and forwards the user account and role information to ASP.NET for further
processing. Using Windows Authentication frees you, the developer, and
ASP.NET from the burden of managing user accounts and worrying about
authentication. It also provides functionality that is usually desirable in a
security-conscious environment, such as password expiration, account lockout, and group
membership. You get all of the benefits provided by Windows security settings
since you are using the same user accounts that would be used on your LAN or
Intranet zone. A few drawbacks of using Windows Authentication, however, are
that it is tied to Windows users, it is tied to Windows client machines, and the
flexibility and control of the authentication process are limited and cannot be
customized easily.
Windows Authentication comes in three varieties: Basic Authentication, Digest
Authentication, and Integrated Windows Authentication. Basic Authentication is
the most widely supported authentication protocol and almost all web browsers
support it. The main problem with Basic Authentication is that it is not secure all
by itself. The data that is transferred is not encrypted in any way so it must be
secured in some other fashion such as using Secure Sockets Layer (SSL). Digest
Authentication is very much like Basic Authentication except that the password is
passed as a hashed value instead of clear text. A problem with Digest
Authentication is that due to a different interpretation by Microsoft of a part of the
Digest authentication specification, IIS Digest Authentication only works with
Internet Explorer 5.0 and later browsers. Integrated Windows Authentication is
the most convenient standard for intranet applications since it performs
authentication without any client interaction. But, for Integrated Windows
Authentication to work, both the client and the web server must be on the same
LAN or intranet.
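Enabling this mode in ASP.NET is a one-line configuration change, since the actual credential checking is delegated to IIS. The fragment below is a minimal sketch; the impersonate setting is optional and shown only to illustrate running under the authenticated account.

```xml
<!-- web.config fragment: delegate authentication to IIS/Windows.
     IIS itself must also be configured for Basic, Digest, or
     Integrated Windows Authentication on the virtual directory. -->
<system.web>
  <authentication mode="Windows" />
  <identity impersonate="true" />
  <authorization>
    <deny users="?" />
  </authorization>
</system.web>
```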
Since Windows Authentication requires IIS, there is no Interactive Tutorial
associated with this section of this application. There are any number of good
sources on how to use Windows Authentication. For instance, Pro ASP.NET 2.0
in C# 2005 (ISBN 1-59059-496-7) by Matthew MacDonald and Mario Szpuszta
treats the subject very well in Chapter 22, Windows Authentication.
2.8. Authorization
The Microsoft documentation defines authorization in .NET Framework security as
“the process of limiting access rights by granting or denying specific permissions to
an authenticated identity or principal.” Thus the distinction is made between
authorization and authentication. While authentication is the process of verifying
identity, authorization is the act of granting specific permissions once a user has been
authenticated. Two important features that enable authorization in the .NET Common
Language Runtime (CLR) are role-based security and code-access security.
2.8.1. Code-Access Security
Whereas the protection provided by Windows operating system security and
role-based security is a function of the user that is running an application,
code-access security provides protection as a function of the application that
is running, irrespective of the user running it. This protection
model helps prevent malicious or errant code from accessing files and resources
that it has no business accessing. A few examples of where this protection might
come in handy are:
• Changes to files and directories
• Changes to the Windows registry
• Establishing network or Internet connections
• Creating application domains
• Running native (machine-level, unmanaged) code
The key elements of code-access security are:
• Evidence and Code Identity – this provides credentials to the Common
Language Runtime (CLR) to establish the identity of the running application,
which can in turn be presented for security policy resolution.
• Security Policy – this is the set of rules configured for the application.
These rules are used during policy resolution to determine which
permissions should be granted to the application.
• Permissions – this is the authority that is granted to the application for
accessing operations and resources during runtime.
2.8.1.1. Evidence and Code Identity
Evidence consists of inherent information such as an assembly's strong name,
the hash value of the assembly's content, or a publisher's X.509 certificate, as
well as other types of information that are determined at runtime, such as the
URL from which the assembly was downloaded. All of this evidence
combines to provide the code identity for the assembly. When the CLR loads
an assembly, the identity is compared with the security policy rules to derive
the permissions that are granted to the assembly. There are basically two
types of evidence: host evidence and assembly evidence.
The host evidence consists of the artifacts or objects that are instantiated by
the CLR when the assembly is loaded. This includes things such as the URL
from where the assembly was downloaded, the hash value derived from the
contents of the assembly, the strong name assigned to the assembly at build
time, and the publisher’s signed certificate. Assembly evidence can be any
customized evidence that the developer of the assembly wishes to include; for
example the author of the software or maybe some copyright information.
When combined, the host evidence and the assembly evidence make up the
identity of the assembly.
2.8.1.2. Security Policy
Security policy is the set of rules that you can create that provide the bridge
between evidence and permissions. When an assembly is loaded at runtime,
the assembly’s evidence is gathered and presented to the Common Language
Runtime (CLR). The CLR in turn uses this evidence to determine the code-access permissions that will be granted to the assembly. This process is
known as policy resolution. It is important to note that policy resolution
grants only code-access permissions. The other permission type, identity
permission, is assigned by the CLR as a direct result of certain types of
evidence that is stored with the assembly.
2.8.1.3. Permissions
When the Common Language Runtime (CLR) loads an assembly, it
determines the set of permissions that should be granted to the assembly for
accessing various files and resources on the host computer. But, a somewhat
lesser-known feature of code-access security is that the loaded assembly can
also use permission objects to demand that any other code wishing to use
the services and features provided by the loaded assembly must contain a
specific set of permission objects. Consequently, not only does code-access
security protect the host machine from malicious code but it can also help
protect your code from being invoked by unintended partners. Thus, third
party code that is downloaded from an unscrupulous website can’t use your
finely tuned assembly as a proxy to access resources to which it would
otherwise be denied access.
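The "proxy" protection just described rests on the CLR walking the call stack when a permission is demanded. The sketch below is a simplified model of that idea, not the actual CLR mechanism: assembly names, grant sets, and the demand function are all hypothetical stand-ins.

```python
# Toy model of a code-access permission demand: every assembly on the
# call stack has a granted permission set, and a demand succeeds only
# if every caller holds the demanded permission (a simplified "stack
# walk" in the spirit of CodeAccessPermission.Demand).

class SecurityException(Exception):
    pass

def demand(call_stack, permission):
    """Raise unless every frame on the stack holds the permission."""
    for assembly, grants in call_stack:
        if permission not in grants:
            raise SecurityException(f"{assembly} lacks {permission}")

# An untrusted download calls into a trusted library that was granted
# file access; the demand still fails because the *caller* lacks it.
stack = [("UntrustedDownload", {"Execution"}),
         ("TrustedLibrary", {"Execution", "FileIO"})]
try:
    demand(stack, "FileIO")
    allowed = True
except SecurityException:
    allowed = False
assert not allowed   # the untrusted code cannot proxy through the library

# When every caller holds the permission, the demand passes silently.
demand([("TrustedApp", {"Execution", "FileIO"})], "FileIO")
```

This is exactly why downloaded code cannot use a finely tuned local assembly as a proxy: the untrusted frame on the stack fails the walk even though the library itself holds the permission.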
At runtime, a set of permission objects is assigned to the assembly as it is
being loaded. The content of that set depends on the assembly’s identity
which is established using the evidence that is presented by the assembly.
There are two types of permissions: code-access permissions and identity
permissions.
Code-access permission classes represent actions and resources that are
subject to the .NET security control. These permissions provide detailed
control over what code is allowed to do. For example, your code can control
access to the Windows event log using the
System.Diagnostics.EventLogPermission class or access to the local hard
drive using the System.Security.Permissions.FileIOPermission class.
Identity permissions represent the value of certain types of host evidence
presented by the assembly at runtime. Only host evidence is used for identity
permissions. Assembly evidence is never used to create identity permissions.
Identity permissions are a convenient way to make decisions based on an
assembly’s identity without resorting to the use of Evidence objects. For
example, you can use identity permissions to limit the execution of code to a
particular Zone or allow only code that is signed with a particular publisher’s
certificate to be run.
2.8.2. Role-Based Security
Role-based security is built around the identity and roles of the user on whose
behalf the code is running. This is implemented in .NET by integrating with an
existing user account system such as the Windows user accounts or the .NET
Passport user accounts. In fact, it is not difficult to integrate the role-based
security aspects of .NET with any custom user account mechanism. Irrespective
of the underlying user account system, .NET’s role-based security model provides
a standardized process by which you can make runtime security decisions such as
ensuring a particular group membership exists before granting access to a
resource. The .NET role-based security, however, should not be confused with the
Windows security system. Whereas Windows security protects the integrity of
the entire system and enforces security regardless of the runtime environment,
.NET works at the application level and provides an easy-to-use API which you
can use to control access to functionality and resources.
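A runtime role check of the kind described above can be sketched as follows. This is an illustrative model in the spirit of .NET's IPrincipal.IsInRole; the class and method names here are hypothetical stand-ins, not the .NET API.

```python
# Toy sketch of a role-based security check: code consults the current
# principal's roles before granting access to a protected operation.

class Principal:
    """A minimal principal: an identity plus a set of role names."""
    def __init__(self, name, roles):
        self.name = name
        self.roles = set(roles)

    def is_in_role(self, role):
        return role in self.roles

def delete_report(user):
    # Ensure the required group membership exists before proceeding.
    if not user.is_in_role("Manager"):
        raise PermissionError(f"{user.name} is not a Manager")
    return "deleted"

alice = Principal("alice", ["Manager", "Staff"])
bob = Principal("bob", ["Staff"])

assert delete_report(alice) == "deleted"
try:
    delete_report(bob)
    denied = False
except PermissionError:
    denied = True
assert denied
```

Because the check is expressed against roles rather than individual accounts, the same code works whether the underlying principal comes from Windows accounts, Passport, or a custom user store.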
Works Cited
Brown, Keith. The .NET Developer's Guide to Windows Security. Addison-Wesley,
2005.
Freeman, Adam, and Allen Jones. Programming .NET Security. O'Reilly & Associates,
2003.
MacDonald, Matthew, and Mario Szpuszta. Pro ASP.NET 2.0 in C# 2005. Apress, 2005.
"MSDN2 Library." Microsoft Developer's Network. <http://msdn2.microsoft.com/en-us/library>.
Shepherd, George. Microsoft ASP.NET 2.0 Step by Step. Microsoft Press, 2005.
Tanenbaum, Andrew S. Computer Networks. 4th ed. Prentice Hall PTR, 2003.