Computer Security
THIRD EDITION
Dieter Gollmann
Hamburg University of Technology
A John Wiley and Sons, Ltd., Publication
This edition first published 2011
© 2011 John Wiley & Sons, Ltd
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for
permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of Dieter Gollmann to be identified as the author of this work has been asserted in accordance with the
Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by
the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be
available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names
and product names used in this book are trade names, service marks, trademarks or registered trademarks of
their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This
publication is designed to provide accurate and authoritative information in regard to the subject matter covered.
It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional
advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Gollmann, Dieter.
Computer security / Dieter Gollmann. – 3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-470-74115-3 (pbk.)
1. Computer security. I. Title.
QA76.9.A25G65 2011
005.8 – dc22
2010036859
A catalogue record for this book is available from the British Library.
Set in 9/12 Sabon by Laserwords Private Limited, Chennai, India
Printed in Great Britain by TJ International Ltd, Padstow
Contents

Preface

Chapter 1 – History of Computer Security
  1.1 The Dawn of Computer Security
  1.2 1970s – Mainframes
  1.3 1980s – Personal Computers
    1.3.1 An Early Worm
    1.3.2 The Mad Hacker
  1.4 1990s – Internet
  1.5 2000s – The Web
  1.6 Conclusions – The Benefits of Hindsight
  1.7 Exercises

Chapter 2 – Managing Security
  2.1 Attacks and Attackers
  2.2 Security Management
    2.2.1 Security Policies
    2.2.2 Measuring Security
    2.2.3 Standards
  2.3 Risk and Threat Analysis
    2.3.1 Assets
    2.3.2 Threats
    2.3.3 Vulnerabilities
    2.3.4 Attacks
    2.3.5 Common Vulnerability Scoring System
    2.3.6 Quantitative and Qualitative Risk Analysis
    2.3.7 Countermeasures – Risk Mitigation
  2.4 Further Reading
  2.5 Exercises

Chapter 3 – Foundations of Computer Security
  3.1 Definitions
    3.1.1 Security
    3.1.2 Computer Security
    3.1.3 Confidentiality
    3.1.4 Integrity
    3.1.5 Availability
    3.1.6 Accountability
    3.1.7 Non-repudiation
    3.1.8 Reliability
    3.1.9 Our Definition
  3.2 The Fundamental Dilemma of Computer Security
  3.3 Data vs Information
  3.4 Principles of Computer Security
    3.4.1 Focus of Control
    3.4.2 The Man–Machine Scale
    3.4.3 Complexity vs Assurance
    3.4.4 Centralized or Decentralized Controls
  3.5 The Layer Below
  3.6 The Layer Above
  3.7 Further Reading
  3.8 Exercises

Chapter 4 – Identification and Authentication
  4.1 Username and Password
  4.2 Bootstrapping Password Protection
  4.3 Guessing Passwords
  4.4 Phishing, Spoofing, and Social Engineering
    4.4.1 Password Caching
  4.5 Protecting the Password File
  4.6 Single Sign-on
  4.7 Alternative Approaches
  4.8 Further Reading
  4.9 Exercises

Chapter 5 – Access Control
  5.1 Background
  5.2 Authentication and Authorization
  5.3 Access Operations
    5.3.1 Access Modes
    5.3.2 Access Rights of the Bell–LaPadula Model
    5.3.3 Administrative Access Rights
  5.4 Access Control Structures
    5.4.1 Access Control Matrix
    5.4.2 Capabilities
    5.4.3 Access Control Lists
  5.5 Ownership
  5.6 Intermediate Controls
    5.6.1 Groups and Negative Permissions
    5.6.2 Privileges
    5.6.3 Role-Based Access Control
    5.6.4 Protection Rings
  5.7 Policy Instantiation
  5.8 Comparing Security Attributes
    5.8.1 Partial Orderings
    5.8.2 Abilities in the VSTa Microkernel
    5.8.3 Lattice of Security Levels
    5.8.4 Multi-level Security
  5.9 Further Reading
  5.10 Exercises

Chapter 6 – Reference Monitors
  6.1 Introduction
    6.1.1 Placing the Reference Monitor
    6.1.2 Execution Monitors
  6.2 Operating System Integrity
    6.2.1 Modes of Operation
    6.2.2 Controlled Invocation
  6.3 Hardware Security Features
    6.3.1 Security Rationale
    6.3.2 A Brief Overview of Computer Architecture
    6.3.3 Processes and Threads
    6.3.4 Controlled Invocation – Interrupts
    6.3.5 Protection on the Intel 80386/80486
    6.3.6 The Confused Deputy Problem
  6.4 Protecting Memory
    6.4.1 Secure Addressing
  6.5 Further Reading
  6.6 Exercises

Chapter 7 – Unix Security
  7.1 Introduction
    7.1.1 Unix Security Architecture
  7.2 Principals
    7.2.1 User Accounts
    7.2.2 Superuser (Root)
    7.2.3 Groups
  7.3 Subjects
    7.3.1 Login and Passwords
    7.3.2 Shadow Password File
  7.4 Objects
    7.4.1 The Inode
    7.4.2 Default Permissions
    7.4.3 Permissions for Directories
  7.5 Access Control
    7.5.1 Set UserID and Set GroupID
    7.5.2 Changing Permissions
    7.5.3 Limitations of Unix Access Control
  7.6 Instances of General Security Principles
    7.6.1 Applying Controlled Invocation
    7.6.2 Deleting Files
    7.6.3 Protection of Devices
    7.6.4 Changing the Root of the Filesystem
    7.6.5 Mounting Filesystems
    7.6.6 Environment Variables
    7.6.7 Searchpath
    7.6.8 Wrappers
  7.7 Management Issues
    7.7.1 Managing the Superuser
    7.7.2 Trusted Hosts
    7.7.3 Audit Logs and Intrusion Detection
    7.7.4 Installation and Configuration
  7.8 Further Reading
  7.9 Exercises

Chapter 8 – Windows Security
  8.1 Introduction
    8.1.1 Architecture
    8.1.2 The Registry
    8.1.3 Domains
  8.2 Components of Access Control
    8.2.1 Principals
    8.2.2 Subjects
    8.2.3 Permissions
    8.2.4 Objects
  8.3 Access Decisions
    8.3.1 The DACL
    8.3.2 Decision Algorithm
  8.4 Managing Policies
    8.4.1 Property Sets
    8.4.2 ACE Inheritance
  8.5 Task-Dependent Access Rights
    8.5.1 Restricted Tokens
    8.5.2 User Account Control
  8.6 Administration
    8.6.1 User Accounts
    8.6.2 Default User Accounts
    8.6.3 Audit
    8.6.4 Summary
  8.7 Further Reading
  8.8 Exercises

Chapter 9 – Database Security
  9.1 Introduction
  9.2 Relational Databases
    9.2.1 Database Keys
    9.2.2 Integrity Rules
  9.3 Access Control
    9.3.1 The SQL Security Model
    9.3.2 Granting and Revocation of Privileges
    9.3.3 Access Control through Views
  9.4 Statistical Database Security
    9.4.1 Aggregation and Inference
    9.4.2 Tracker Attacks
    9.4.3 Countermeasures
  9.5 Integration with the Operating System
  9.6 Privacy
  9.7 Further Reading
  9.8 Exercises

Chapter 10 – Software Security
  10.1 Introduction
    10.1.1 Security and Reliability
    10.1.2 Malware Taxonomy
    10.1.3 Hackers
    10.1.4 Change in Environment
    10.1.5 Dangers of Abstraction
  10.2 Characters and Numbers
    10.2.1 Characters (UTF-8 Encoding)
    10.2.2 The rlogin Bug
    10.2.3 Integer Overflows
  10.3 Canonical Representations
  10.4 Memory Management
    10.4.1 Buffer Overruns
    10.4.2 Stack Overruns
    10.4.3 Heap Overruns
    10.4.4 Double-Free Vulnerabilities
    10.4.5 Type Confusion
  10.5 Data and Code
    10.5.1 Scripting
    10.5.2 SQL Injection
  10.6 Race Conditions
  10.7 Defences
    10.7.1 Prevention: Hardware
    10.7.2 Prevention: Modus Operandi
    10.7.3 Prevention: Safer Functions
    10.7.4 Prevention: Filtering
    10.7.5 Prevention: Type Safety
    10.7.6 Detection: Canaries
    10.7.7 Detection: Code Inspection
    10.7.8 Detection: Testing
    10.7.9 Mitigation: Least Privilege
    10.7.10 Reaction: Keeping Up to Date
  10.8 Further Reading
  10.9 Exercises

Chapter 11 – Bell–LaPadula Model
  11.1 State Machine Models
  11.2 The Bell–LaPadula Model
    11.2.1 The State Set
    11.2.2 Security Policies
    11.2.3 The Basic Security Theorem
    11.2.4 Tranquility
    11.2.5 Aspects and Limitations of BLP
  11.3 The Multics Interpretation of BLP
    11.3.1 Subjects and Objects in Multics
    11.3.2 Translating the BLP Policies
    11.3.3 Checking the Kernel Primitives
  11.4 Further Reading
  11.5 Exercises

Chapter 12 – Security Models
  12.1 The Biba Model
    12.1.1 Static Integrity Levels
    12.1.2 Dynamic Integrity Levels
    12.1.3 Policies for Invocation
  12.2 Chinese Wall Model
  12.3 The Clark–Wilson Model
  12.4 The Harrison–Ruzzo–Ullman Model
  12.5 Information-Flow Models
    12.5.1 Entropy and Equivocation
    12.5.2 A Lattice-Based Model
  12.6 Execution Monitors
    12.6.1 Properties of Executions
    12.6.2 Safety and Liveness
  12.7 Further Reading
  12.8 Exercises

Chapter 13 – Security Evaluation
  13.1 Introduction
  13.2 The Orange Book
  13.3 The Rainbow Series
  13.4 Information Technology Security Evaluation Criteria
  13.5 The Federal Criteria
  13.6 The Common Criteria
    13.6.1 Protection Profiles
    13.6.2 Evaluation Assurance Levels
    13.6.3 Evaluation Methodology
    13.6.4 Re-evaluation
  13.7 Quality Standards
  13.8 An Effort Well Spent?
  13.9 Summary
  13.10 Further Reading
  13.11 Exercises

Chapter 14 – Cryptography
  14.1 Introduction
    14.1.1 The Old Paradigm
    14.1.2 New Paradigms
    14.1.3 Cryptographic Keys
    14.1.4 Cryptography in Computer Security
  14.2 Modular Arithmetic
  14.3 Integrity Check Functions
    14.3.1 Collisions and the Birthday Paradox
    14.3.2 Manipulation Detection Codes
    14.3.3 Message Authentication Codes
    14.3.4 Cryptographic Hash Functions
  14.4 Digital Signatures
    14.4.1 One-Time Signatures
    14.4.2 ElGamal Signatures and DSA
    14.4.3 RSA Signatures
  14.5 Encryption
    14.5.1 Data Encryption Standard
    14.5.2 Block Cipher Modes
    14.5.3 RSA Encryption
    14.5.4 ElGamal Encryption
  14.6 Strength of Mechanisms
  14.7 Performance
  14.8 Further Reading
  14.9 Exercises

Chapter 15 – Key Establishment
  15.1 Introduction
  15.2 Key Establishment and Authentication
    15.2.1 Remote Authentication
    15.2.2 Key Establishment
  15.3 Key Establishment Protocols
    15.3.1 Authenticated Key Exchange Protocol
    15.3.2 The Diffie–Hellman Protocol
    15.3.3 Needham–Schroeder Protocol
    15.3.4 Password-Based Protocols
  15.4 Kerberos
    15.4.1 Realms
    15.4.2 Kerberos and Windows
    15.4.3 Delegation
    15.4.4 Revocation
    15.4.5 Summary
  15.5 Public-Key Infrastructures
    15.5.1 Certificates
    15.5.2 Certificate Authorities
    15.5.3 X.509/PKIX Certificates
    15.5.4 Certificate Chains
    15.5.5 Revocation
    15.5.6 Electronic Signatures
  15.6 Trusted Computing – Attestation
  15.7 Further Reading
  15.8 Exercises

Chapter 16 – Communications Security
  16.1 Introduction
    16.1.1 Threat Model
    16.1.2 Secure Tunnels
  16.2 Protocol Design Principles
  16.3 IP Security
    16.3.1 Authentication Header
    16.3.2 Encapsulating Security Payloads
    16.3.3 Security Associations
    16.3.4 Internet Key Exchange Protocol
    16.3.5 Denial of Service
    16.3.6 IPsec Policies
    16.3.7 Summary
  16.4 IPsec and Network Address Translation
  16.5 SSL/TLS
    16.5.1 Implementation Issues
    16.5.2 Summary
  16.6 Extensible Authentication Protocol
  16.7 Further Reading
  16.8 Exercises

Chapter 17 – Network Security
  17.1 Introduction
    17.1.1 Threat Model
    17.1.2 TCP Session Hijacking
    17.1.3 TCP SYN Flooding Attacks
  17.2 Domain Name System
    17.2.1 Lightweight Authentication
    17.2.2 Cache Poisoning Attack
    17.2.3 Additional Resource Records
    17.2.4 Dan Kaminsky's Attack
    17.2.5 DNSSec
    17.2.6 DNS Rebinding Attack
  17.3 Firewalls
    17.3.1 Packet Filters
    17.3.2 Stateful Packet Filters
    17.3.3 Circuit-Level Proxies
    17.3.4 Application-Level Proxies
    17.3.5 Firewall Policies
    17.3.6 Perimeter Networks
    17.3.7 Limitations and Problems
  17.4 Intrusion Detection
    17.4.1 Vulnerability Assessment
    17.4.2 Misuse Detection
    17.4.3 Anomaly Detection
    17.4.4 Network-Based IDS
    17.4.5 Host-Based IDS
    17.4.6 Honeypots
  17.5 Further Reading
  17.6 Exercises

Chapter 18 – Web Security
  18.1 Introduction
    18.1.1 Transport Protocol and Data Formats
    18.1.2 Web Browser
    18.1.3 Threat Model
  18.2 Authenticated Sessions
    18.2.1 Cookie Poisoning
    18.2.2 Cookies and Privacy
    18.2.3 Making Ends Meet
  18.3 Code Origin Policies
    18.3.1 HTTP Referer
  18.4 Cross-Site Scripting
    18.4.1 Cookie Stealing
    18.4.2 Defending against XSS
  18.5 Cross-Site Request Forgery
    18.5.1 Authentication for Credit
  18.6 JavaScript Hijacking
    18.6.1 Outlook
  18.7 Web Services Security
    18.7.1 XML Digital Signatures
    18.7.2 Federated Identity Management
    18.7.3 XACML
  18.8 Further Reading
  18.9 Exercises

Chapter 19 – Mobility
  19.1 Introduction
  19.2 GSM
    19.2.1 Components
    19.2.2 Temporary Mobile Subscriber Identity
    19.2.3 Cryptographic Algorithms
    19.2.4 Subscriber Identity Authentication
    19.2.5 Encryption
    19.2.6 Location-Based Services
    19.2.7 Summary
  19.3 UMTS
    19.3.1 False Base Station Attacks
    19.3.2 Cryptographic Algorithms
    19.3.3 UMTS Authentication and Key Agreement
  19.4 Mobile IPv6 Security
    19.4.1 Mobile IPv6
    19.4.2 Secure Binding Updates
    19.4.3 Ownership of Addresses
  19.5 WLAN
    19.5.1 WEP
    19.5.2 WPA
    19.5.3 IEEE 802.11i – WPA2
  19.6 Bluetooth
  19.7 Further Reading
  19.8 Exercises

Chapter 20 – New Access Control Paradigms
  20.1 Introduction
    20.1.1 Paradigm Shifts in Access Control
    20.1.2 Revised Terminology for Access Control
  20.2 SPKI
  20.3 Trust Management
  20.4 Code-Based Access Control
    20.4.1 Stack Inspection
    20.4.2 History-Based Access Control
  20.5 Java Security
    20.5.1 The Execution Model
    20.5.2 The Java 1 Security Model
    20.5.3 The Java 2 Security Model
    20.5.4 Byte Code Verifier
    20.5.5 Class Loaders
    20.5.6 Policies
    20.5.7 Security Manager
    20.5.8 Summary
  20.6 .NET Security Framework
    20.6.1 Common Language Runtime
    20.6.2 Code-Identity-Based Security
    20.6.3 Evidence
    20.6.4 Strong Names
    20.6.5 Permissions
    20.6.6 Security Policies
    20.6.7 Stack Walk
    20.6.8 Summary
  20.7 Digital Rights Management
  20.8 Further Reading
  20.9 Exercises

Bibliography
Index
Preface
Ég geng í hring
í kringum allt, sem er.
Og utan þessa hrings
er veröld mín.
[I walk in a circle around everything that is. And outside this circle is my world.]
Steinn Steinarr
Security is a fashion industry. There is more truth in this statement than one would like
to admit to a student of computer security. Security buzzwords come and go; without
doubt security professionals and security researchers can profit from dropping the right
buzzword at the right time. Still, this book is not intended as a fashion guide.
This is a textbook on computer security. A textbook has to convey the fundamental
principles of its discipline. In this spirit, the attempt has been made to extract essential
ideas that underpin the plethora of security mechanisms one finds deployed in today’s
IT landscape. A textbook should also instruct the reader when and how to apply these
fundamental principles. As the IT landscape keeps changing, security practitioners have
to understand when familiar security mechanisms no longer address newly emerging
threats. Of course, they also have to understand how to apply the security mechanisms
at their disposal.
This is a challenge to the author of a textbook on computer security. To appreciate
how security principles manifest themselves in any given IT system the reader needs
sufficient background knowledge about that system. A textbook on computer security
is limited in the space it can devote to covering the broader features of concrete IT
systems. Moreover, the speed at which those features keep changing implies that any
book trying to capture current systems at a fine level of detail is out of date by the time it
reaches its readers. This book tries to negotiate the route from security principles to their
application by stopping short of referring to details specific to certain product versions.
For the last steps towards any given version the reader will have to consult the technical
literature on that product.
Computer security has changed in important aspects since the first edition of this book
was published. Once, operating systems security was at the heart of this subject. Many
concepts in computer security have their origin in operating systems research. Since
the emergence of the web as a global distributed application platform, the focus of
computer security has shifted to the browser and web applications. This observation
applies equally to access control and to software security. This third edition of Computer
Security reflects this development by including new material on web security. The reader
must note that this is still an active area with unresolved open challenges.
This book has been structured as follows. The first three chapters provide context and
fundamental concepts. Chapter 1 gives a brief history of the field, Chapter 2 covers
security management, and Chapter 3 provides initial conceptual foundations. The next
three chapters deal with access control in general. Chapter 4 discusses identification
and authentication of users, Chapter 5 introduces the principles of access control, with
Chapter 6 focused on the reference monitor. Chapter 7 on Unix/Linux, Chapter 8 on
Windows, and Chapter 9 on databases are intended as case studies to illustrate the
concepts introduced in previous chapters. Chapter 10 presents the essentials of software
security.
This is followed by three chapters that have security evaluation as their common theme.
Chapter 11 takes the Bell–LaPadula model as a case study for the formal analysis of an
access control system. Chapter 12 introduces further security models. Chapter 13 deals
with the process of evaluating security products.
The book then moves away from stand-alone systems. The next three chapters constitute
a basis for distributed systems security. Chapter 14 gives a condensed overview of
cryptography, a field that provides the foundations for many communications security
mechanisms. Chapter 15 looks in more detail at key management, and Chapter 16 at
Internet security protocols such as IPsec and SSL/TLS.
Chapter 17 proceeds beyond communications security and covers aspects of network
security such as Domain Name System security, firewalls, and intrusion detection systems.
Chapter 18 analyzes the current state of web security. Chapter 19 reaches into another
area increasingly relevant for computer security – security solutions for mobile systems.
Chapter 20 concludes the book with a discussion of recent developments in access
control.
Almost every chapter deserves to be covered by a book of its own. By necessity, only a
subset of relevant topics can therefore be discussed within the limits of a single chapter.
Because this is a textbook, I have sometimes included important material in exercises
that could otherwise be expected to have a place in the main body of a handbook on
computer security. Hopefully, the general coverage is still reasonably comprehensive and
pointers to further sources are included.
Exercises are included with each chapter but I cannot claim to have succeeded to my
own satisfaction in all instances. In my defence, I can only note that computer security
is not simply a collection of recipes that can be demonstrated within the confines of
a typical textbook exercise. In some areas, such as password security or cryptography,
it is easy to construct exercises with precise answers that can be found by going
through the correct sequence of steps. Other areas are more suited to projects, essays, or
discussions. Although it is naturally desirable to support a course on computer security
with experiments on real systems, suggestions for laboratory sessions are not included
in this book. Operating systems, database management systems, and firewalls are prime
candidates for practical exercises. The actual examples will depend on the particular
systems available to the teacher. For specific systems there are often excellent books
available that explain how to use the system’s security mechanisms.
This book is based on material from a variety of courses, taught over several years at
master’s but also at bachelor’s degree level. I have to thank the students on these courses
for their feedback on points that needed better explanations. Equally, I have to thank
commentators on earlier versions for their error reports and the reviewers of the draft of
this third edition for constructive advice.
Dieter Gollmann
Hamburg, December 2010
Chapter 1
History of Computer Security
Those who do not learn from the past will repeat it.
George Santayana
Security is a journey, not a destination. Computer security has been travelling
for 40 years, and counting. On this journey, the challenges faced have kept
changing, as have the answers to familiar challenges. This first chapter will
trace the history of computer security, putting security mechanisms into the
perspective of the IT landscape they were developed for.
OBJECTIVES
• Give an outline of the history of computer security.
• Explain the context in which familiar security mechanisms were originally
developed.
• Show how changes in the application of IT pose new challenges in
computer security.
• Discuss the impact of disruptive technologies on computer security.
1.1 The Dawn of Computer Security
New security challenges arise when new – or old – technologies are put to new use.
The code breakers at Bletchley Park pioneered the use of electronic programmable
computers during World War II [117, 233]. The first electronic computers were built
in the 1940s (Colossus, EDVAC, ENIAC) and found applications in academia (Ferranti
Mark I, University of Manchester), commercial organizations (LEO, J. Lyons & Co.),
and government agencies (Univac I, US Census Bureau) in the early 1950s. Computer
security can trace its origins back to the 1960s. Multi-user systems emerged, needing
mechanisms for protecting the system from its users, and the users from each other.
Protection rings (Section 5.6.4) are a concept dating from this period [108].
Two reports in the early 1970s signal the start of computer security as a field of research
in its own right. The RAND report by Willis Ware [231] summarized the technical
foundations computer security had acquired by the end of the 1960s. The report also
produced a detailed analysis of the policy requirements of one particular application
area, the protection of classified information in the US defence sector. This report was
followed shortly after by the Anderson report [9] that laid out a research programme for
the design of secure computer systems, again dominated by the requirement of protecting
classified information.
In recent years the Air Force has become increasingly aware of the problem of computer
security. This problem has intruded on virtually any aspect of USAF operations and
administration. The problem arises from a combination of factors that includes: greater
reliance on the computer as a data-processing and decision-making tool in sensitive
functional areas; the need to realize economies by consolidating ADP [automated data
processing] resources thereby integrating or co-locating previously separate data-processing
operations; the emergence of complex resource sharing computer systems providing users
with capabilities for sharing data and processes with other users; the extension of resource
sharing concepts to networks of computers; and the slowly growing recognition of security
inadequacies of currently available computer systems. [9]
We will treat the four decades starting with the 1970s as historical epochs. We note
for each decade the leading innovation in computer technology, the characteristic
applications of that technology, the security problems raised by these applications, and
the developments and state of the art in finding solutions for these problems. Information
technologies may appear in our time line well after their original inception. However, a
new technology becomes a real issue for computer security only when it is sufficiently
mature and deployed widely enough for new applications with new security problems
to materialize. With this consideration in mind, we observe that computer security has
passed through the following epochs:
• 1970s: age of the mainframe,
• 1980s: age of the PC,
• 1990s: age of the Internet,
• 2000s: age of the web.
1.2 1970s – Mainframes
Advances in the design of memory devices (IBM’s Winchester disk offered a capacity of
35–70 megabytes) facilitated the processing of large amounts of data (for that time).
Mainframes were deployed mainly in government departments and in large commercial
organizations. Two applications from public administration are of particular significance.
First, the defence sector saw the potential benefits of using computers, but classified
information would have to be processed securely. This led the US Air Force to create the
study group that reported its finding in the Anderson report.
The research programmes triggered by this report developed a formal state machine model
for the multi-level security policies regulating access to classified data, the Bell–LaPadula
model (Chapter 11), which proved to be highly influential on computer security research
well into the 1980s [23]. The Multics project [187] developed an operating system that
had security as one of its main design objectives. Processor architectures were developed
with support for primitives such as segmentations or capabilities that were the basis for
the security mechanisms adopted at the operating system level [92].
The second application field was the processing of ‘unclassified but sensitive’ data such
as personal information about citizens in government departments. Government departments had been collecting and processing personal data before, but with mainframes
data-processing at a much larger scale became a possibility. It was also much easier for
staff to remain undetected when snooping around in filesystems looking for information
they had no business in viewing. Both aspects were considered serious threats to privacy,
and a number of protection mechanisms were developed in response.
Access control mechanisms in the operating system had to support multi-user security.
Users should be kept apart, unless data sharing was explicitly permitted, and prevented
from interfering with the management of the mainframe system. The fundamental
concepts for access control in Chapter 5 belong to this epoch.
Encryption was seen to provide the most comprehensive protection for data stored in
computer memory and on backup media. The US National Bureau of Standards issued a
call for a data encryption standard for the protection of unclassified data. Eventually,
IBM submitted the algorithm that became known as the Data Encryption Standard
[221]. This call was the decisive event that began the public discussion about encryption
algorithms and gave birth to cryptography as an academic discipline, a development
deeply resented at that time by those working on communications security in the security
services. A first key contribution from academic research was the concept of public-key
cryptography published by Diffie and Hellman in 1976 [82]. Cryptography is the topic
of Chapter 14.
In the context of statistical database queries, a typical task in social services, a new threat
was observed. Even if individual queries were guaranteed to cover a large enough query
set so as not to leak information about individual entries, an attacker could use a clever
combination of such ‘safe’ statistical queries to infer information about a single entry.
Aggregation and inference, and countermeasures such as randomization of query data,
were studied in database security. These issues are taken up in Section 9.4.
Thirdly, the legal system was adapted and data protection legislation was introduced
in the US and in European countries and harmonized in the OECD privacy guidelines
[188]; several legal initiatives on computer security issues followed (Section 9.6).
Since then, research on cryptography has reached a high level of maturity. When the
US decided to update the Data Encryption Standard in the 1990s, a public review
process led to the adoption of the new Advanced Encryption Standard. This ‘civilian’
algorithm developed by Belgian researchers was later also approved in the US for the
protection of classified data [68]. For the inference problem in statistical databases,
pragmatic solutions were developed, but there is no perfect solution and the data
mining community is today re-examining (or reinventing?) some of the approaches from
the 1970s. Multi-level security dominated security research into the following decade,
posing interesting research questions which still engage theoreticians today – research
on non-interference is going strong – and leading to the development of high-assurance
systems whose design had been verified employing formal methods. However, these
high-assurance systems did not solve the problems of the following epochs and now
appear more as specialized offerings for a niche market than a foundation for the security
systems of the next epoch.
1.3 1980s – Personal Computers
Miniaturization and integration of switching components had reached the stage where
computers no longer needed to be large machines housed in special rooms but were small
enough to fit on a desk. Graphical user interfaces and mouse facilitated user-friendly
input/output. This was the technological basis for the personal computer (PC), the
innovation that, indirectly, changed the focus of computer security during the 1980s. The
PC was cheap enough to be bought directly by smaller units in organizations, bypassing
the IT department. The liberation from the tutelage of the IT department resounded
through Apple’s famous launch of the Macintosh in 1984. The PC was a single-user machine, the first successful applications were word processors and spreadsheet
programs, and users were working on documents that may have been commercially
sensitive but were rarely classified data. At a stroke, multi-level security and multi-user security became utterly irrelevant. To many security experts the 1980s triggered a
retrograde development, leading to less protected systems, which in fairness only became
less secure when they were later used outside their original environment.
While this change in application patterns was gathering momentum, security research
still took its main cues from multi-level security. Information-flow models and
non-interference models were proposed to capture aspects not addressed in the
Bell–LaPadula model. The Orange Book [224] strongly influenced the common
perception of computer security (Section 13.2). High security assurance and multi-level
security went hand in hand. Research on multi-level secure databases invented
polyinstantiation so that users cleared at different security levels could enter data into
the same table without creating covert channels [157].
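Polyinstantiation can be pictured as making the classification level part of the relation's real key, so that users cleared at different levels each hold their own tuple for the same apparent primary key. The following sketch is purely illustrative; the table, levels, and values are invented:

```python
# Illustrative sketch of polyinstantiation: the same apparent primary
# key ("flight") may occur once per classification level, so a Secret
# update does not collide with (and thereby leak to) an Unclassified row.
LEVELS = {"Unclassified": 0, "Secret": 1}

# The real key of the relation is (flight, level), not flight alone.
table = {
    ("RX-101", "Unclassified"): {"destination": "Hamburg"},
    ("RX-101", "Secret"):       {"destination": "Oslo"},
}

def query(flight: str, clearance: str):
    """Return the highest-level tuple visible at the given clearance."""
    visible = [
        (LEVELS[lvl], row)
        for (f, lvl), row in table.items()
        if f == flight and LEVELS[lvl] <= LEVELS[clearance]
    ]
    return max(visible)[1] if visible else None

print(query("RX-101", "Unclassified"))  # sees only the Unclassified row
print(query("RX-101", "Secret"))        # sees the Secret row
```

An Unclassified user neither sees the Secret tuple nor learns of its existence through a key collision, which is exactly the covert channel that polyinstantiation closes.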
We have to wait for the Clark–Wilson model (1987) [66] and the Chinese Wall model
(1989) [44] to get research contributions influenced by commercial IT applications
and coming from authors with a commercial background. Clark and Wilson present
well-formed transactions and separation of duties as two important design principles for
securing commercial systems. The Chinese Wall model was inspired by the requirement
to prevent conflicts of interest in financial consultancy businesses. Chapter 12 covers
both models.
A less visible change occurred in the development of processor architectures. The
Intel 80286 processor supported segmentation, a feature used by multi-user operating
systems. In the 80386 processor this feature was no longer present as it was not used by
Microsoft’s DOS. The 1980s also saw the first worms and viruses, interestingly enough
first in research papers [209, 69] before they later appeared in the wild. The damage
that could be done by attacking computer systems became visible to a wider public. We
will briefly describe two incidents from this decade. Both ultimately led to convictions
in court.
1.3.1 An Early Worm
The Internet worm of 1988 exploited a number of known vulnerabilities such as brute
force password guessing for remote login, bad configurations (sendmail in debug mode),
a buffer overrun in the fingerd daemon, and unauthenticated login from trusted hosts identified by their network address, which could be forged. The worm penetrated 5–10%
of the machines on the Internet, which totalled approximately 60,000 machines at the
time. The buffer overrun in the fingerd daemon broke into VAX systems running Unix
4BSD. A special 536-byte message to the fingerd was used to overwrite the system stack:
pushl   $68732f         push '/sh, ‹NUL›'
pushl   $6e69622f       push '/bin'
movl    sp, r10         save address of start of string
pushl   $0              push 0 (arg 3 to execve)
pushl   $0              push 0 (arg 2 to execve)
pushl   r10             push string addr (arg 1 to execve)
pushl   $3              push argument count
movl    sp, ap          set argument pointer
chmk    $3b             do "execve" kernel call
The stack is thus set up so that the command execve("/bin/sh",0,0) will be
executed on return to the main routine, opening a connection to a remote shell via
TCP [213]. Chapter 10 presents technical background on buffer overruns. The person
responsible for the worm was brought to court and sentenced to a $10,050 fine and 400
hours of community service, with a three-year probation period (4 May 1990).
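The two immediates pushed at the top of the shellcode listed above encode the argument string for execve. Since the VAX is little-endian, each 32-bit constant holds four ASCII bytes, least significant byte first; a short Python sketch (for illustration only) decodes them:

```python
# Decode the two constants pushed by the worm's shellcode. On the
# little-endian VAX, a 32-bit immediate stores four ASCII bytes with
# the least significant byte first; $68732f ends in a NUL terminator.
def decode(word: int) -> str:
    return word.to_bytes(4, "little").decode("ascii").rstrip("\x00")

part_high = decode(0x68732F)    # pushed first, ends up higher on the stack
part_low  = decode(0x6E69622F)  # pushed second, starts the string
print(part_low + part_high)     # -> /bin/sh
```

Because the stack grows towards lower addresses, the constant pushed second starts the string, which is why the movl sp, r10 immediately after the two pushes captures the address of the complete string '/bin/sh'.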
1.3.2 The Mad Hacker
This security incident affected ICL’s VME/B operating system. VME/B stored information
about files in file descriptors. All file descriptors were owned by the user :STD. For
classified file descriptors this would create a security problem: system operators would
require clearance to access classified information. Hence, :STD was not given access
to classified file descriptors. In consequence, these descriptors could not be restored
during a normal backup. A new user :STD/CLASS was therefore created who owned
the classified file descriptors. This facility was included in a routine systems update.
The user :STD/CLASS had no other purpose than owning file descriptors. Hence, it
was undesirable and unnecessary for anybody to log in as :STD/CLASS. To make
login impossible, the password for :STD/CLASS was defined to be the RETURN key.
Nobody could log in because RETURN would always be interpreted as the delimiter
of the password and not as part of the password. The password in the user profile of
:STD/CLASS was set by patching hexadecimal code. Unfortunately, the wrong field
was changed and instead of a user who could not log in, a user with an unrecognizable
security level was created. This unrecognizable security level was interpreted as ‘no
security’ so the designers had achieved the opposite of their goal.
There was still one line of defence left. User :STD/CLASS could only log in from the
master console. However, once the master console was switched off, the next device
opening a connection would be treated as the master console.
These flaws were exploited by a hacker who himself was managing a VME/B system. He
thus had ample opportunity for detailed analysis and experimentation. He broke into a
number of university computers via dial-up lines during nighttime when the computer
centre was not staffed, modifying and deleting system and user files and leaving messages
from The Mad Hacker. He was successfully tracked, brought to court, convicted (under
the UK Criminal Damage Act of 1971), and handed a prison sentence. The conviction,
the first of a computer hacker in the United Kingdom, was upheld by the Court of Appeal
in 1991.
1.4 1990s – INTERNET
At the end of the 1980s it was still undecided whether fax (a service offered by traditional
telephone operators) or email (an Internet service) would prevail as the main method
of document exchange. By the 1990s this question had been settled and this decade
became without doubt the epoch of the Internet. Not because the Internet was created
in the 1990s – it is much older – but because new technology became available and
because the Internet was opened to commercial use in 1992. The HTTP protocol and
HTML provided the basis for visually more interesting applications than email or remote
procedure calls. The World Wide Web (1991) and graphical web browsers (Mosaic,
1993) created a whole new ‘user experience’. Both developments facilitated a whole new
range of applications.
The Internet is a communications system so it may be natural that Internet security
was initially equated with communications security, and in particular with strong
cryptography. In the 1990s, the ‘crypto wars’ between the defenders of (US) export
restrictions on encryption algorithms with more than 40-bit keys and advocates for the
use of unbreakable (or rather, not obviously breakable) encryption were fought to an end,
with the proponents of strong cryptography emerging victorious. Chapter 16 presents
the communications security solutions developed for the Internet in the 1990s.
Communications security, however, only solves the easy problem, i.e. protecting data
in transit. It should have been clear from the start that the real problems resided
elsewhere. The typical end system was a PC, no longer stand-alone or connected to
a LAN, but connected to the Internet. Connecting a machine to the Internet has two
major ramifications. The system owner no longer controls who can send inputs to this
machine; the system owner no longer controls what input is sent to the machine. The
first observation rules out traditional identity-based access control as a viable protection
mechanism. The second observation points to a new kind of attack, as described by Aleph
One in his paper on ‘Smashing the Stack for Fun and Profit’ (1996) [6]. The attacker
sends intentionally malformed input to an open port on the machine, causing a buffer
overrun in the program handling the input, transferring control to shellcode inserted by
the attacker. Chapter 10 is devoted to software security.
The Java security model addressed both issues. Privileges are assigned depending on the
origin of code, not according to the identity of the user running a program. Remote code
(applets) is put in a sandbox where it runs with restricted privileges only. Because Java is a type-safe language, its runtime system offers memory safety guarantees that prevent buffer overruns and the like. Chapter 20 explores the current state of code-based access control.
With the steep rise in the number of exploitable software vulnerabilities reported in the
aftermath of Aleph One’s paper and with several high profile email-based virus attacks
sweeping through the Internet, ‘trust and confidence’ in the PC was at a low ebb. In
reaction, Compaq, Hewlett-Packard, IBM, Intel, and Microsoft founded the Trusted
Computing Platform Alliance in 1999, with the goal of ‘making the web a safer place
to surf’.
Advances in computer graphics turned the PC into a viable home entertainment platform
for computer games, video, and music. The Internet became an attractive new distribution
channel for companies offering entertainment services, but they had to grapple with
technical issues around copy protection (not provided on a standard PC platform of
that time). Copy protection had been explored in the 1980s but in the end deemed
unsuitable for mass market software; see [110, p. 59]. In computer security, digital rights
management (DRM) added a new twist to access control. For the first time access control
did not protect the system owner from external parties. DRM enforces the security policy
of an external party against actions by the system owner. For a short period, DRM
mania reached a stage where access control was treated as a special case of DRM, before
a more sober view returned. DRM was the second driving force of trusted computing,
introducing remote attestation as a mechanism that would allow a document owner
to check the software configuration of the intended destination before releasing the
document. This development is taken up in Sections 15.6 and 20.7.
Availability, one of the ‘big three’ security properties, had always been of paramount
importance in commercial applications. In previous epochs, availability had been
addressed by organizational measures such as contingency plans, regular backup of
data, and fall-back servers preferably located at a distance from a company’s main
premises. With the Internet, on-line denial-of-service attacks became a possibility and
towards the end of the 1990s a fact. In response, firewalls and intrusion detection systems
became common components of network security architectures (Chapter 17).
The emergence of on-line denial-of-service attacks led to a reconsideration of the engineering principles underpinning the design of cryptographic protocols. Strong cryptography
can make protocols more exploitable by denial-of-service attacks. Today protocols are
designed to balance the workload between initiator and responder so that an attacker
would have to expend the same computational effort as the victim.
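One concrete mechanism for this balancing (one common possibility, not necessarily the protocols the text alludes to) is a hash-based client puzzle: the responder issues a fresh challenge that is cheap to create and cheap to verify, while the initiator must perform a brute-force search before the responder commits any state or CPU time. A minimal sketch:

```python
import hashlib
import os

DIFFICULTY = 12  # leading zero bits required; illustrative value

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(nonce: bytes) -> int:
    """Initiator: brute-force a counter until the puzzle is solved."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return counter
        counter += 1

def verify(nonce: bytes, counter: int) -> bool:
    """Responder: a single hash suffices to check the solution."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY

nonce = os.urandom(16)          # fresh challenge per connection attempt
solution = solve(nonce)         # costs ~2**DIFFICULTY hashes on average
assert verify(nonce, solution)  # costs one hash
```

The asymmetry is the point: the responder verifies in constant time and need not keep per-connection state, while a flooding attacker must pay the brute-force cost for every connection it opens.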
1.5 2000s – THE WEB
When we talk about the web, there is on one side the technology: the browser as
the main software component at the client managing the interaction with servers and
displaying pages to the user; HTTP as the application-level communications protocol;
HTML and XML for data formats; client-side and server-side scripting languages for
dynamic interactions; WLAN and mobile phone systems providing ubiquitous network
access. On the other side, there are the users of the web: providers offering content and
services, and the customers of those offerings.
The technology is mainly from the 1990s. The major step forward in the 2000s
was the growth of the user base. Once sufficiently many private users had regular
and mobile Internet access, companies had the opportunity of directly interacting
with their customers and reducing costs by eliminating middlemen and unnecessary
transaction steps. In the travel sector budget airlines were among the first to offer web
booking of flights, demonstrating that paper tickets can be virtualized. Other airlines