CSCI 262 Computer Security, Fall 2013
Lecture 1
Halim M. Khelalfa
(Main reference: Dieter Gollmann, Computer Security, 3rd edition, John Wiley & Sons, 2011)

Lecture 1, Part I: History of Computer Security

•Learning outcomes of Part I
1. Outline the history of computer security
2. Explain the context in which familiar security mechanisms were originally developed
3. Show how changes in the application of IT pose new challenges in computer security
4. Discuss the impact of disruptive technologies on computer security

•Introduction
1. Security is a journey, not a destination.
2. Computer security has been travelling for forty years.
3. The challenges faced have kept changing.
4. So have the answers to familiar challenges.
5. Security mechanisms must be seen in the context of the IT landscape they were developed for.

•Epochs
• 1930s: people as “computers”
• 1940s: first electronic computers
• 1950s: start of an industry
• 1960s: software comes into its own
• 1970s: age of the mainframe
• 1980s: age of the PC
• 1990s: age of the Internet
• 2000s: age of the Web

Starting Point: Anderson Report, 1972
• In recent years the Air Force has become increasingly aware of the problem of computer security. This problem has intruded on virtually every aspect of USAF operations and administration.
• The problem arises from a combination of factors that includes:
1. Greater reliance on the computer as a data processing and decision making tool in sensitive functional areas;
2. The need to realize economies by consolidating ADP resources, thereby integrating or co-locating previously separate data processing operations;
3. The emergence of complex resource-sharing computer systems providing users with capabilities for sharing data and processes with other users;
4. The extension of resource-sharing concepts to networks of computers; and the slowly growing recognition of security inadequacies of currently available computer systems.

•1970s: Mainframes – Data Crunchers
• Technology: Winchester disk (IBM), 35–70 megabytes of storage.
• Application: data crunching in large organisations and government departments.
• Protection of classified data in the defence sector dominates security research and development.
• Social security applications and the like.
• Security controls in the system core: operating systems, database management systems.
• Computers and computer security managed by professionals.

•1970s: Security Issues
• Military applications:
  • Anderson report
  • Multi-level security (MLS)
  • Bell-LaPadula model
  • Status today: high-assurance systems were developed (e.g. Multics) but do not address today’s issues.
• Non-classified but sensitive applications:
  • DES, public research on cryptography
  • Privacy legislation
  • Statistical database security
  • Status today: cryptography is a mature field; statistical database security is reappearing in data mining.
• Fundamentals of access control.

•1980s: PCs – Office Workers
• Technology: personal computer, GUI, mouse, …
• Application: word processors, spreadsheets, i.e. office work.
• Liberation from control by the IT department.
• Single-user machines processing unclassified data: no need for multi-user security or for MLS.
• Risk analysis: no need for computer security.
• Security evaluation: Orange Book (TCSEC, 1983/85), driven by the defence applications of the 1970s.
•1980s: Security Issues
• Research on MLS systems still going strong: Orange Book, MLS for relational databases.
• Clark-Wilson model: first appearance of “commercial security” in mainstream security research.
• Worms and viruses: research proposals before they appeared in the wild.
• The worm, too, came from Xerox PARC (1982) …
• Intel 80386 drops support for segmentation (Microsoft DOS did not use it).

•1990s: Internet – Surfers Paradise?
• Technology: Internet, commercially used; Web 1.0.
• Applications: World Wide Web (static content), email, entertainment (music, movies), …
• The single-user machine that lost its defences in the previous decade is now exposed to the “hostile” Internet.
• No control over who can send what to a machine on the Internet.
• Buffer overrun attacks: Aleph One (1996), Smashing the Stack for Fun and Profit.
• “Add-on” security controls: firewalls, SSL, the browser sandbox as reference monitor, …

•1990s: Security Issues
• Crypto wars: is widespread use of strong cryptography a good idea?
• Internet security treated as a communications security problem.
• Buffer overrun attacks (Aleph One: Smashing the Stack for Fun and Profit): Internet security is mainly an end-systems issue!
• Java security model: the sandbox.
• New access control paradigms: Trusted Computing, Digital Rights Management (DRM).
• Status today: mature security protocols (IPsec, SSL/TLS), better software security.

•2000s: Web – e-Commerce
• Technology: Web services, WLAN, PKI??
• Web 2.0: dynamic content.
• B2C applications: Amazon, eBay, airlines, on-line shops, Google, ...
• Criminal activity replaces “hackers”.
• Legislation to encourage the use of electronic signatures.
• PKIs have not taken off; e-commerce has essentially evolved without them.

•2000s: Security Issues
• SSL/TLS for secure sessions.
• Software security: the problems are shifting from operating systems to applications (SQL injection, cross-site scripting).
• Security controls moving to the application layer: Web pages start to perform security checks.
• Access control for virtual organisations: e.g. federated identity management.
• Security of end systems managed by the user.

•Disruptive Technologies
• Cheap and simple technologies that do not meet the requirements of sophisticated users, but are adopted by a wider public.
• When the new technology acquires advanced features, it takes over the entire market. What happened to workstations?
• Problem for security: security features may not be required by the applications the new technology is initially used for; when it turns into a platform for sensitive applications, it becomes difficult to re-integrate security.

•Summary
• Innovations (mouse, GUI, PC, WWW, worms, viruses) find their way out of research labs and into the mass market.
• Innovations are not always used as expected: email on the Internet, SMS in GSM.
• Users are inventive, too!
• When new technologies are used in innovative ways, old security paradigms may no longer apply, and the well-engineered ‘old’ security solutions become irrelevant.
• We can start all over again …

Part II: Foundations of Computer Security

•Outline of Part II
1. Some definitions
2. The fundamental dilemma of computer security
3. Data vs. information
4. Principles of computer security
5. The layer below
6. The layer above

•Learning outcomes of Part II
a) Approach a definition of computer security by introducing CIA
b) Explain the fundamental dilemma of computer security
c) Mention some general design decisions that have to be made when constructing secure systems
d) Point out that security mechanisms have to rely on physical or organizational protection to be effective

•Agenda
• Security strategies
  • Prevention – detection – reaction
• Security objectives
  • Confidentiality – integrity – availability
  • Accountability – non-repudiation
• Fundamental dilemma of computer security
• Principles of computer security
• The layer below

Security strategies = protecting information assets
i. Avoidance
ii. Prevention
iii. Detection
iv. Containment and response
v. Recovery
vi. Improvement

Security strategies
1. Avoidance
• Improving security by avoiding configurations that present unnecessary opportunities for problems to occur.
• Example:
  • IF users of systems on a particular network do not require direct access to external networks, and
  • IF inbound connections are forbidden,
  • THEN there is no reason to connect the network to external networks in the first place.

2. Prevention
• Implementation of measures and controls to minimize the possibility of security problems occurring.
• Example: it may be necessary to store different kinds of data on a common file server. To prevent unauthorized access to each kind of data, access controls should allow users to see only those kinds of data they have permission to see (a minimal sketch of such a check follows the list of strategies below).

3. Detection
• Despite all efforts to prevent unauthorized access to information assets and resources, security incidents are bound to occur. It is therefore necessary to implement measures to detect possible information security problems when they occur.
• Example: it may be appropriate to deploy network traffic monitors to alert you to unauthorized connection attempts to your networked systems.

4. Containment and response
• When information security incidents occur, you will have to work quickly to contain the damage and respond to prevent further unauthorized activity.
• Preparation and practice in handling security incidents is an essential part of maintaining readiness to respond when incidents occur.

5. Recovery
• When system failures and security incidents occur, you will need to have resources and data backups available to restore your data, systems, networks, and security infrastructure to a “known-good” state. This means that preparation and ongoing effort must be applied in advance to back up data and systems.

6. Improvement
• New threats to the security of information and information systems are discovered every day. Intruders actively seek ways to infiltrate systems in search of information and resources.
• It is necessary to engage in a continuous effort to sustain and improve the security of the networked information systems under your administrative control. As security incidents occur, lessons learned help to identify areas in need of improvement.
• Staying up to date regarding newly discovered problems and the means to mitigate them is an essential element of a continuous security improvement process.
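The prevention example above (a shared file server with per-category access controls) can be illustrated with a minimal sketch. The categories, user names and permission sets below are invented for illustration only; a real system would rely on the operating system's or file server's own access control mechanism.

```python
# Minimal sketch of prevention through access control on a shared file server.
# Categories, users and permissions are invented for illustration only.

PERMISSIONS = {
    "payroll":   {"hr_clerk"},
    "contracts": {"hr_clerk", "legal_counsel"},
    "marketing": {"hr_clerk", "legal_counsel", "intern"},
}

def may_read(user: str, category: str) -> bool:
    """Grant read access only if the user appears in the category's permission set."""
    return user in PERMISSIONS.get(category, set())

assert may_read("hr_clerk", "payroll")       # authorised access is allowed
assert not may_read("intern", "payroll")     # unauthorised access is prevented
```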
•Example 1 – Private Property
1. Prevention: locks on doors, window bars, walls round the property.
2. Detection: stolen items are missing, burglar alarms, closed-circuit TV.
3. Reaction: call the police, replace stolen items, make an insurance claim …

•Example 2 – E-Commerce
1. Prevention: encrypt your orders, rely on the merchant to perform checks on the caller, don’t use the Internet (?) …
2. Detection: an unauthorized transaction appears on your credit card statement.
3. Reaction: complain, ask for a new card number. Who will cover the fraudulent transaction: the card holder, the merchant, or the card issuer?

Security Objectives (CIAAA)
1. Confidentiality: prevent unauthorised disclosure of information.
2. Integrity: prevent unauthorised modification of information.
3. Availability: prevent unauthorised withholding of information or resources.
4. Authenticity: “know whom you are talking to”.
5. Accountability (non-repudiation): prove that an entity was involved in some event.

Confidentiality
• Secrecy: protection of data belonging to an organisation.
• Privacy: protection of personal data.

More on confidentiality: content or existence of a document
• Confidentiality may concern the content of a document or the very existence of a document.

More on confidentiality: traffic analysis in communication systems
• Unlinkability: two or more items of interest (messages, actions, events, users) are unlinkable if an attacker cannot sufficiently distinguish whether they are related or not.
• Anonymity: a subject is anonymous if it cannot be identified within a given anonymity set of subjects.

•Privacy
• Protection of personal data (OECD Privacy Guidelines, EU Data Protection Directive 95/46/EC).
• “Put the user in control of their personal data and of information about their activities.”
• Now taken more seriously by companies that want to be ‘trusted’ by their customers.
• Also: the right to be left alone, e.g. not to be bothered by spam.

Integrity
• Prevent unauthorised modification of information (prevent unauthorised writing).
• Data integrity: the state that exists when computerized data is the same as that in the source document and has not been exposed to accidental or malicious alteration or destruction.
• Detection (and correction) of intentional and accidental modifications of transmitted data.

•Integrity continued
• Clark & Wilson: no user of the system, even if authorized, may be permitted to modify data items in such a way that assets or accounting records of the company are lost or corrupted.
• In the most general sense: make sure that everything is as it is supposed to be. (This is highly desirable but cannot be guaranteed by mechanisms internal to the computer system.)
• Integrity is a prerequisite for many other security services; operating systems security has a lot to do with integrity.
• Example: a hacker trying to modify an OS access rights table to circumvent access controls.

Availability
• The property of being accessible and usable upon demand by an authorised entity.
• Denial of service (DoS): prevention of authorised access to resources, or the delaying of time-critical operations.
• Perhaps the most important aspect of computer security, but few methods are around.
• Distributed denial of service (DDoS) receives a lot of attention; systems are now designed to be more resilient against these attacks.

•Denial of Service Attack (smurf)
• The attacker sends ICMP echo requests to a broadcast address, with the victim’s address as the spoofed sender address.
• The echo request is distributed to all nodes in the range of the broadcast address.
• Each node replies with an echo to the victim.
• The victim is flooded with many incoming messages.
• Note the amplification: the attacker sends one message, the victim receives many (see the toy model below).
(Diagram: the attacker sends a single echo request to the broadcast address with the victim as the spoofed source; every node on the segment sends its echo reply to the victim.)
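The amplification effect can be made concrete with a small toy model. This is purely illustrative: no packets are sent, and the host names and segment size are made up.

```python
# Toy model of smurf amplification: one echo request with a spoofed source,
# delivered to every host on a broadcast segment, produces one reply per host,
# all addressed to the victim. Illustrative only; nothing is sent on a network.

from collections import Counter

def broadcast_echo_request(spoofed_source, hosts):
    """Deliver one echo request to every host; each host replies to the request's source."""
    replies = Counter()
    for _host in hosts:
        # Each host answers the echo request, addressing its reply to the source
        # field of the request, which the attacker set to the victim's address.
        replies[spoofed_source] += 1
    return replies

segment = [f"host-{i}" for i in range(254)]     # e.g. one /24 broadcast domain
victim = "victim.example.org"                   # hypothetical victim address

replies = broadcast_echo_request(spoofed_source=victim, hosts=segment)
print(f"attacker sent 1 packet; victim receives {replies[victim]} replies")
# -> attacker sent 1 packet; victim receives 254 replies
```

The ratio of replies to requests is the amplification factor: the larger the broadcast domain, the worse the flood at the victim.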
•Accountability
• At the operating system level, audit logs record security-relevant events and the user identities associated with these events.
• If an actual link between a user and a “user identity” can be established, the user can be held accountable.
• In distributed systems, cryptographic non-repudiation mechanisms can be used to achieve the same goal.

Non-repudiation
• Non-repudiation services provide unforgeable evidence that a specific action occurred.
1. Non-repudiation of origin: protects against a sender of data denying that the data was sent.
2. Non-repudiation of delivery: protects against a receiver of data denying that the data was received.

•Reliability & Safety
• Reliability and safety are related to security:
  • similar engineering methods,
  • similar efforts in standardisation,
  • possible requirement conflicts.
• Reliability addresses the consequences of accidental errors.
• Is security part of reliability, or vice versa?
• Safety: a measure of the absence of catastrophic influences on the environment, in particular on human life.

•Security & Reliability
• On a PC, you are in control of the software components sending inputs to each other.
• On the Internet, hostile parties provide input.
• To make software more reliable, it is tested against typical usage patterns: “It does not matter how many bugs there are, it matters how often they are triggered.”
• To make software more secure, it has to be tested against ‘untypical’ usage patterns (but there are typical attack patterns).

•Dependability
• A proposed term that encompasses reliability, safety, and security.
• Dependability (IFIP WG 10.4):
  I. The property of a computer system such that reliance can justifiably be placed on the service it delivers.
  II. The service delivered by a system is its behaviour as it is perceived by its user(s); a user is another system (physical, human) which interacts with the former.

•Distributed System Security
• Distributed systems: computers connected by networks.
• Communications (network) security: addresses the security of the communications links.
• Computer security: addresses the security of the end systems; today, this is the difficult part.
• Application security: relies on both to provide services securely to end users.

•What is computer security?
• What do we do in computer security? We deal with the prevention and detection of unauthorized actions by users of a computer system.
• Why do we do it? Computer security is concerned with the measures we take to deal with intentional actions by parties behaving in some unwelcome fashion.

•Fundamental Dilemma of Computer Security
• Security-unaware users have specific security requirements but no security expertise.
  I. If you provide your customers with a standard solution, it might not meet their requirements.
  II. If you want to tailor your solution to your customers’ needs, they may be unable to tell you what they require.

•Data vs. Information
• Data: physical phenomena chosen by convention to represent certain aspects of our conceptual and real world.
• Information: the meanings we assign to data.
• Possible problems:
  • Covert channels: response time or memory usage may signal information.
  • Inference in statistical databases: combine statistical queries to get information on individual entries (a small illustration follows below).
• Controlling access to information may be very difficult and may need to be replaced by controlling access to data.
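The inference problem can be seen with two aggregate queries that are individually harmless. The schema, names and salaries below are invented for illustration.

```python
# Minimal sketch of inference in a statistical database: two aggregate
# ("statistical") queries combine to reveal one individual's salary.
# The table, names and figures are invented for illustration.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary INTEGER)")
db.executemany("INSERT INTO staff VALUES (?, ?, ?)", [
    ("alice", "sales", 52000),
    ("bob",   "sales", 48000),
    ("carol", "sales", 61000),
])

# Query 1: total salary of the whole department (an aggregate, seemingly harmless).
total_all = db.execute(
    "SELECT SUM(salary) FROM staff WHERE dept = 'sales'"
).fetchone()[0]

# Query 2: total salary of the department excluding Carol (also an aggregate).
total_without_carol = db.execute(
    "SELECT SUM(salary) FROM staff WHERE dept = 'sales' AND name <> 'carol'"
).fetchone()[0]

# Subtracting the two aggregates discloses Carol's individual salary,
# although no query ever asked for a single record.
print("carol's salary:", total_all - total_without_carol)   # -> 61000
```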
•Principles of Computer Security
• The dimensions of computer security (diagram): user (subject) vs. resource (object); application software vs. hardware; the security policy; and the layer of the computer system where the protection mechanism is implemented.

1st Fundamental Design Decision: Where to focus security controls?
• The focus may be on data, operations, or users; e.g. integrity requirements may refer to rules on:
  • the format and content of data items (internal consistency): the account balance in an accounts database is an integer;
  • the operations that may be performed on a data item: credit, debit, transfer, open account, check balance;
  • the users who are allowed to access a data item (authorised access): the account holder and the bank clerk have access to the account.

2nd Fundamental Design Decision: Where to place security controls? (in which layer)
1. Users run application programs.
2. Application programs may use the services of a DBMS, a browser, etc.
3. These services run on top of an operating system, which provides file and memory management and access control to resources such as printers and I/O devices.
4. The OS has a kernel which mediates every access to the processor and to memory.
5. The hardware (the processor, memory) physically stores and manipulates the data held in the computer system.
• Layers, from top to bottom: applications – services (middleware) – operating system – OS kernel – hardware.

Another look at the layer problem
1. Visualize security mechanisms as concentric protection rings, with hardware mechanisms in the centre and application mechanisms at the outside (the “onion model” of protection: applications – services – operating system – OS kernel – hardware).
2. Mechanisms towards the centre tend to be more generic, while mechanisms at the outside are more likely to address individual user requirements.
3. The man-machine scale for security mechanisms combines our first two design decisions: at the man-oriented end, mechanisms are specific, complex, and focus on users; at the machine-oriented end, they are generic, simple, and focus on data.

•3rd Fundamental Design Decision: Complexity or assurance?
• Simplicity gives higher assurance, but simple generic mechanisms may not match specific security requirements.
• A feature-rich security environment? To choose the right features from a rich menu, you have to be a security expert.
• Security-unaware users are in a no-win situation.

•Example: Security Evaluation
• Security evaluation checks whether products deliver the security services promised.
• We have to state:
  • the function of the security system,
  • the required degree of assurance (trust) in its security.
• For high assurance, the security system has to be examined in close detail.
• There is an obvious trade-off between complexity and assurance: the higher an assurance level you aim for, the simpler your system ought to be.
• Feature-rich security and high assurance do not match easily.

•4th Fundamental Design Decision: Centralized or decentralized control?
• Within the domain of a security policy, the same controls should be enforced.
• Centralized control:
  1. If a single entity is in charge of security, it is easy to achieve uniformity,
  2. but this central entity may become a performance bottleneck.
• Decentralized control:
  • A distributed solution may be more efficient,
  • but you have to take added care to guarantee that the different components enforce a consistent policy.

•Security perimeter
• Every protection mechanism defines a security perimeter (security boundary).
• The parts of the system that can malfunction without compromising the mechanism lie outside the perimeter.
• The parts of the system that can disable the mechanism lie within the perimeter.
• Note: attacks from insiders are a major concern in security considerations.

•Exercise
• Identify suitable security perimeters for analysing personal computer (PC) security. Consider the room the PC is placed in, the PC itself, or some security module within the PC when investigating security perimeters.
  a. Is the PC in a protected room, a room shared with colleagues, or a room in a public place?
  b. What are the options for input? Keyboard, data carrier (CD, USB stick, floppy), Internet?
  c. Can users take the PC home or open it?

5th Fundamental Design Decision: Blocking access to the layer below
• Attackers try to bypass protection mechanisms. There is an immediate and important corollary to the second design decision: how do you stop an attacker from getting access to a layer below your protection mechanism?

Examples of bypassing control mechanisms from the layer below
• If you gain system privileges in the OS, you can change the programs and files that contain control data for security mechanisms in the service and application layers.
• If you have direct access to the physical memory devices, you can manipulate the raw data and thus bypass the logical controls of the operating system.

•Access to the Layer Below
• (Diagram: controlled access passes through the security perimeter; the layer below is protected by physical access control and administrative measures.)

•The Layer Below – Examples
1. Recovery tools (e.g. Norton): if the logical organization of memory is destroyed, such tools restore data by reading memory directly and then restoring the file structure. Such a tool can also be used to circumvent logical access control, as it does not care about the logical memory structure.
2. Unix treats I/O devices and physical memory devices like files: if access permissions are defined badly, e.g. if read access is given to a disk, an attacker can read the disk contents and reconstruct read-protected files.

•More Examples – Storage Risks
3. Object reuse (release of memory): in single-processor systems, when a new process is activated it gets access to memory positions used by the previous process. Avoid storage residues, i.e. data left behind in the memory area allocated to the new process.
4. Backup: whoever has access to a backup tape has access to all the data on it. Logical access control is of no help, and backup tapes have to be locked away safely to protect the data.
5. Core dumps: the same story again.

•The Layer Above
• It is neither necessary nor sufficient to have a secure infrastructure, be it an operating system or a communications network, to secure an application.
• Security services provided by the infrastructure may be irrelevant for the application.
• The infrastructure cannot defend against attacks from the layer above.
• Fundamental fallacy of computer security: believing that you must secure the infrastructure in order to protect your applications.

•Summary
• Security terminology is ambiguous, with many overloaded terms.
• Distributed systems security builds on computer security and communications security.
• Two major challenges in computer security are:
  I. the design of access control systems that fit the requirements of the Internet, and
  II. the design of secure software.
• In security, understanding the problem is more difficult than finding the solution.

References
• Dieter Gollmann, Computer Security, 3rd edition, John Wiley & Sons, 2011.