Recommendations for Network and Computer Security

HTASC Network and Computer Security Subcommittee

R. Cecchini, INFN, Firenze
A. Flavell, University of Glasgow
W. Friebel, DESY, Zeuthen
J. Gamble, CERN, Geneva
T. Haas, DESY, Hamburg (Chairperson)
D. Heagerty, CERN, Geneva
P. Moroni, CERN, Geneva
B. Perrot, IN2P3, Orsay
M. Rott, Institute of Physics, Prague
E. Wassenaar, NIKHEF, Amsterdam

30 November, 1999


Outline

  Introduction
  Definition of Scope and Risk Analysis
  Policies
  General Measures
    Infrastructure
    Personnel
    Documentation
  Specific Measures
    Networks
    Operating Systems
    X11
    SSH
  Recent and Future Developments
  Mechanisms of Communication


Introduction

With the advent of the World Wide Web, public interest in and access to wide area networks has risen dramatically. Only a few years ago, network access to computers around the world was limited to a well-confined community of scientists and engineers. Today it is safe to assume that a large and growing fraction of the population in the industrialized countries has access to and regularly uses the Internet. Many of the concerns about computer and network security have thus gained a completely new quality: the situation a few years ago can be compared to a small village, where it was not a problem to trust most inhabitants and to leave doors and windows open. The wide area network environment now more resembles a megalopolis. In this situation it is not only common sense that dictates that adequate resources be devoted to security: a continuous succession of hacking incidents at various HEP sites indicates that a stable situation has not yet been reached. For this reason HTASC has formed a subcommittee with the following mandate:

Mandate of HTASC Computer/Network Security Subgroup
====================================================
Advise HTASC/HEPCCC on Computer and Network Security needs and suggest policies to meet those needs for HEP laboratories and institutes by
- defining computer/network security guidelines for HEP institutions,
- estimating the resources needed to implement such guidelines,
- suggesting means of communication between the institutions in case of security incidents.

This report results from a number of discussions held via email and from a meeting held at DESY on 19 May, 1999. It describes the problems and proposes policies and measures according to the principle of "good practice" as proposed or established in various HEP institutions. In this sense it does not try to be complete or to compete with other existing documents, which in many ways are much more exhaustive. The purpose of this document is to further the goal of common security standards throughout the HEP community.


Definition of Scope and Risk Analysis

A large variety of risks pose dangers to the computing infrastructure and the data integrity at HEP labs and institutes. Examples of more conventional ones are water, fire or theft. The risks due to water and fire have not grown lately and therefore will not be considered further in this report. It should however be pointed out that, with the increased public interest in computing equipment and with the general move to PC-based systems, the risk of theft is growing, as exemplified by major theft incidents at various HEP institutes. In these cases the loss of data often outweighs the loss of equipment. In particular, data obtained from a stolen system may provide information that enables a potential intruder to gain access to the networked computing infrastructure of a HEP institute.
A second important source of risk is internal sabotage. This risk has always existed and for this reason will also not be considered further. It needs to be pointed out, however, that in a university environment the distinction between risks from inside and from outside is not easily drawn.

We shall evaluate the risk from attacks on the computing infrastructure by outside sources and give recommendations on how to reduce the vulnerability. The risks can be classified in the following categories:

- Loss of privacy: an intruder gaining access to accounts can read mail and protected documents. While, in general, HEP data and documents should be considered freely accessible, there are important exceptions. In particular, violations of the privacy of personnel data may have serious legal implications.
- Denial of service: an intruder may abuse or modify the services offered by a running system in order to prevent it from working properly or to bring it to a complete stop.
- Data loss or corruption.
- Damage to accelerator or detector components: an intruder may gain access to accelerator or detector control systems. This could result in serious damage to hardware systems and to people's health.
- Waste of manpower as a consequence of security incident handling and recovery.
- Damage to the reputation of the institution: an intruder may use accounts at a HEP institution to intrude into other places and thus damage the reputation of that institution. This may even result in legal claims against the institution.

Before considering ways to reduce these risks it must be pointed out that silver-bullet solutions do not exist: on one hand it is mandatory to maintain a high level of openness and accessibility to allow free exchange of scientific information; on the other hand potential damage must be avoided. Hence, any measures taken will be compromises.


Policies

Any institution should have a well-defined set of rules and policies concerning the use of computing and networking resources. These should be laid down in a written document available to all users. Users should acknowledge with a signature that they are aware of the rules and policies. This document should contain the following:

- the purpose of the computing infrastructure,
- a definition of authorized use, including a statement that what is not explicitly authorized is formally forbidden,
- a statement that the institution's security officers or their representatives are authorized to monitor and, when necessary, shut down systems and networks in order to protect the institution's computer security,
- the responsibilities of the users to protect the resources from unauthorized use, e.g. by not revealing passwords and by protecting keys, tickets or certificates,
- recommendations for good practice in the use of network and computing resources.

In addition, any institution should have a corresponding document concerning the administration of computing and networking resources. This second document should contain information relevant to system administrators. It should contain:

- the responsibilities of system administrators in matters of security,
- security rules for the administration of network and computer resources,
- instructions for setting up particular types of systems, or references to relevant documents from other sources,
- incident handling procedures.


General Measures

Infrastructure

A number of steps can be taken on the hardware and infrastructure level in order to increase security. It is recognized that some of these measures may be difficult or costly to implement.
However, some basic points should be kept in mind, in particular when new installations or upgrades are being planned. Some examples of such measures are given in the following list:

- Access to computer and network hardware should be controlled: in particular, computer consoles and sensitive network equipment such as routers, hubs or switches should be accessible to authorized personnel only. If PCs or workstation consoles are installed in public areas, it is advisable to connect them to a protected and carefully monitored subnet.
- The location and configuration of all systems should be well documented, and this documentation should be updated regularly.
- Local area networks should be well structured and the topology should be documented.
- Using ethernet switches instead of hubs provides both better network performance and protection against sniffing, although at increased cost.

We note that in many institutions, in particular in a university environment, the influence of HEP groups on the infrastructure is limited.

Personnel

Responsibilities in matters of computer and network security need to be clearly defined. A security officer should be nominated who coordinates all activities. Depending on the size of the laboratory or institution, this task may be a full-time occupation. The security officer should

- define security rules and policies and monitor that they are being applied,
- set up a Computer Emergency Response Team (CERT); the CERT should be reachable at any time using a standard email address (cert@myinstitution.mycountry),
- ensure that people who perform security-sensitive tasks, such as system or network administrators, are properly trained,
- stay informed about recent developments,
- maintain regular contacts with other institutions.

The security officer should be given sufficient authority to perform these tasks efficiently. In addition to these measures, regular training of users and administrators on security matters is very important. Training should raise awareness of the problems and promote recognized good practices.

Documentation

Various documents should be provided. These can be classified in three categories:

- Rules for user conduct: this document should be as compact as possible and clearly outline the user's responsibilities. Users should acknowledge with a signature that they have read it.
- System administrator's guide: this document should lay down the rules for setting up and administering computer and network systems.
- Incident handling procedures: the procedures to follow in case of various incidents should be laid down beforehand.


Specific Measures

Networks

Currently, sniffing and spoofing of IP traffic are among the most serious security threats. The term sniffing is used where attempts are made to listen in on IP traffic in order to obtain sensitive information such as passwords. The term spoofing is used where attempts are made to fake IP addresses. Various ways exist to improve the protection against these threats. A well-structured LAN, split up into small subnets connected by routers, reduces the risks. Accordingly, efforts are ongoing in larger institutions to replace existing, historically grown cabling with structured cabling. Also, well-managed computer systems can be reasonably well protected against these threats. However, any system on a LAN is only as well protected as the least-protected one.

Taking into account the way research is done in HEP laboratories, it is recognized that there may be many systems on a site that cannot easily be brought to an acceptable security standard: older equipment that cannot easily be upgraded exists in many experiments and accelerator controls, and for other systems professional administration may not be available. A "firewall" protecting the whole site, or sensitive areas within it, significantly reduces these problems. Two general types of firewalls are distinguished:

- Packet filters: These work on levels 3 and 4 of the OSI layer model. They can block individual packets based on a set of rules. These rules allow distinctions based on source and destination addresses and on the protocols used. Packet filtering can be implemented fairly easily and at little additional cost, since most routers provide packet filtering capabilities. Packet filters are routinely used in larger HEP institutions to block traffic on a per-host, per-subnet and per-port basis. It is strongly recommended to use packet filters on border routers; even an institution with a completely free access policy should at least implement this measure. (A minimal sketch of such rule-based filtering is given after this list.)
- Application gateways: These work on level 7 of the OSI layer model. Here one can use user and host authentication and can distinguish between different applications. Where large data volumes need to be handled, the implementation of such a firewall may be very costly. A special case of an application gateway is a network monitor as implemented, for instance, at CERN. Such a system listens in on all IP traffic looking for sensitive keywords; when a keyword is detected, the remaining communication is recorded. Regular inspection of the log files is necessary. This system has been able to detect a large fraction of hacking attempts at CERN. Since such a system impinges on employees' privacy, it needs to be pointed out that, at least in some countries, its operation may pose legal problems.
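To make the packet-filter idea referred to above more concrete, the following minimal Python sketch shows how such rules distinguish packets by source and destination address, protocol and port. The addresses, networks and ports are arbitrary examples, and the sketch is purely conceptual: real filtering is configured in the border router itself, not in application code.

  # Conceptual sketch of rule-based packet filtering.  Addresses, networks
  # and ports below are arbitrary examples, not a recommended rule set.
  from ipaddress import ip_address, ip_network

  # Each rule: (action, source network, destination network, protocol, destination port).
  # Rules are evaluated in order; the first match wins, the default is to block.
  RULES = [
      ("allow", "0.0.0.0/0",    "192.0.2.0/24", "tcp", 22),    # ssh from anywhere
      ("block", "0.0.0.0/0",    "192.0.2.0/24", "tcp", 23),    # no telnet from outside
      ("allow", "192.0.2.0/24", "0.0.0.0/0",    None,  None),  # outgoing traffic
  ]

  def decide(src, dst, proto, dport):
      for action, src_net, dst_net, r_proto, r_port in RULES:
          if (ip_address(src) in ip_network(src_net)
                  and ip_address(dst) in ip_network(dst_net)
                  and r_proto in (None, proto)
                  and r_port in (None, dport)):
              return action
      return "block"        # default policy: what is not explicitly allowed is blocked

  # Example: an incoming telnet connection attempt is blocked.
  print(decide("203.0.113.7", "192.0.2.10", "tcp", 23))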
Operating Systems

A large number of different operating systems are currently being used within HEP. They can roughly be classified in three categories:

- UNIX-type systems,
- WINDOWS-type systems,
- others.

The systems in the third category are either used in very specialized applications or are slowly disappearing. These systems will not be covered in this report. Among the UNIX-type systems, LINUX has seen explosive growth within the last year. This, together with the fact that LINUX is free software with the complete source code available to anybody, makes it a major concern in matters of security. WINDOWS-type systems have also spread in HEP, but not quite at the same rate. This report covers some aspects of UNIX security with a strong bias towards LINUX; WINDOWS-type systems will only be covered marginally. Security problems in the WINDOWS world are not discussed as openly as in the UNIX world due to its single-vendor nature.

Here are a number of recommendations for administering and using UNIX/LINUX systems:

- Every system must have a responsible system administrator.
- The system should be kept up to date; security patches should be applied in a timely fashion.
- All network services that are not strictly needed should be explicitly turned off. Candidates that should be turned off are bootps, btx, comsat, cfinger, finger, imap, midinet, netbios-ns, netbios-ssn, netstat, systat, nntp, ntalk, talk, dtalk, pop2, pop3, printer, rplay, rstatd, rusersd, tftp, timed, uucp, gopher, linuxconf and all r-commands (rsh, rexec, etc.). If ssh is used, the following can be turned off as well: shell, login, exec, ftp, telnet.
- An intrusion detection system should be used, either on the network level, such as tcp-wrapper, or on the file level, such as tripwire (a minimal sketch of the file-level idea is given after this list).
- System log files should be inspected regularly.
- System logging should be performed to a remote system in order to prevent system log files from being manipulated.
- "Good" passwords should be used. The quality of the passwords should regularly be checked with a tool like crack.
- The shadow password file mechanism should be used.

This list is clearly incomplete and will change with time.
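The following Python sketch illustrates the file-level idea mentioned in the list above: checksums of selected files are recorded once and later compared against the current state. The watched files and the baseline location are arbitrary examples, not the configuration of tripwire or any other real tool, and a real deployment must also protect the baseline itself, for example by keeping it on read-only media.

  # Sketch of file-level integrity checking (the idea behind tools like
  # tripwire).  Watched files and the baseline path are arbitrary examples.
  import hashlib, json, os, sys

  WATCHED  = ["/etc/passwd", "/etc/inetd.conf"]   # files to monitor (example)
  BASELINE = "integrity-baseline.json"            # where the checksums are stored

  def checksum(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for block in iter(lambda: f.read(65536), b""):
              h.update(block)
      return h.hexdigest()

  def create_baseline():
      state = {p: checksum(p) for p in WATCHED if os.path.exists(p)}
      with open(BASELINE, "w") as f:
          json.dump(state, f, indent=2)

  def verify():
      with open(BASELINE) as f:
          state = json.load(f)
      for path, old in state.items():
          new = checksum(path) if os.path.exists(path) else "MISSING"
          if new != old:
              print("ALERT: %s differs from the recorded baseline" % path)

  if __name__ == "__main__":
      # "init" records the baseline; any other invocation verifies against it.
      create_baseline() if "init" in sys.argv[1:] else verify()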
For WINDOWS-type systems no recommendations are given here. Useful information on the topic of security in WINDOWS systems can be found in many places. Some exemplary references are:

  http://www.cert.org
  http://www.microsoft.com/security
  http://www.ntbugtraq.com

X11

The X Window System poses a serious security risk if it is not properly secured. If an X11 display is insecure, it allows a program running anywhere on the Internet to connect to it, and the connection may be completely invisible to the user. Once connected, that program has full access to the display and the keyboard of the X11 server. The best defense is to prevent unwanted connections in the first place. In particular, X11 applications that run on an off-site host should not connect directly back to an X11 server on the site. Different tools exist to protect X11 connections; two examples are given here:

- MXCONNS is a proxy X11 server. It is a public-domain program which prompts the user for confirmation every time a new X11 connection is made. It does not prevent sniffing.
- SSH tunneling allows encryption of the entire X11 traffic, thus protecting both against unwanted "stealth" connections and against sniffing. This is the recommended method to secure X11 connections.

SSH

ssh (Secure Shell) is a program for logging into a remote machine and for executing commands there. It is intended to replace rsh, rlogin and telnet. It provides secure encrypted communications between two untrusted hosts over an insecure network. X11 connections and arbitrary TCP/IP ports can also be forwarded over the secure channel.

ssh provides different authentication methods. The most relevant for this discussion is based on public-key cryptography: encryption and decryption are done using separate keys, and it is not possible to derive the decryption key from the encryption key. The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. When the user logs in, the ssh program tells the server which key pair it would like to use for authentication. The server checks whether this key is permitted and, if so, sends the user (actually the ssh program running on behalf of the user) a challenge: a random number encrypted with the user's public key. The challenge can only be decrypted using the proper private key. The user's client then decrypts the challenge using the private key, proving that he or she knows the private key without disclosing it to the server. This authentication thus happens without security-relevant information ever going over the network.
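The following toy Python sketch illustrates the challenge-response idea just described. It is not the ssh protocol itself: the key is deliberately tiny, and a real implementation wraps the exchange in further protections, but the flow of public-key challenge and private-key response is the same.

  # Toy illustration of public-key challenge-response authentication.
  # This is NOT the real ssh protocol; the RSA numbers below are far too
  # small for real use and only serve to show the flow of the exchange.
  import random

  # The user's toy RSA key pair: (n, e) is public, d is the private key.
  p, q = 61, 53
  n = p * q                              # modulus, part of both keys
  e = 17                                 # public exponent (known to the server)
  d = pow(e, -1, (p - 1) * (q - 1))      # private exponent, modular inverse (Python 3.8+)

  # Server side: knows only the public key (n, e).
  challenge = random.randrange(2, n)     # random challenge
  encrypted = pow(challenge, e, n)       # encrypted with the user's public key

  # Client side: only the holder of the private key can recover the challenge.
  answer = pow(encrypted, d, n)

  # Server side: the login is accepted if the returned value matches.
  print("authenticated" if answer == challenge else "rejected")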
When the user's identity has been accepted by the server, the server either executes the given command or logs the user in and provides a normal shell on the remote machine. All communication with the remote command or shell is automatically encrypted.

ssh has been promoted as a general replacement for all remote logins. While it is recognized that the general use of ssh reduces the risk of password sniffing, it may generate other problems: if passwords or key pairs have been obtained in some other way, ssh can be used to completely cloak an intruding session, and the encryption makes monitoring of suspicious sessions impossible. Nevertheless, it is considered the best solution known. It should therefore be the method of choice for all remote connections, both for interactive logins and for data transfer. For this reason the availability of ssh clients in all HEP institutions is considered an absolute necessity.


Recent and Future Developments

It is clear from the discussion in the previous sections of this report that views on computer and network security are in constant flux. The major reason for this is the fact that the systems, networks and protocols that are mostly being used were not designed for the enormously widespread use they are seeing today, nor for the corresponding security issues. Due to the enormous commercial interest in wide-area networking, new technologies are appearing that address many of these questions. Examples of such technologies are encryption and digital signatures. These technologies require an independent, trusted certification authority. Commercial certification authorities that issue keys and digital signatures are starting to appear; an example is the company Verisign. Following this lead, larger HEP institutions are starting to set up their own certification authorities. Many questions arise in this context:

- Should HEP institutes rely on commercial certificates?
- Should larger HEP centers set up their own certification authorities?
- How can trust relationships be established based on certificates that are issued by different certification authorities at different HEP sites?
- Is it conceivable that only one certification authority exists for all of HEP?
- How do members of HEP institutions obtain certificates?

In an environment with well-defined trust relationships and reasonable mechanisms to issue certificates, many (but not all) of the problems discussed in this report will be solved. It is therefore strongly recommended that a concerted effort be launched in HEP to arrive at a common strategy on the topic of digital signatures and certificates.


Mechanisms of Communication

Improving computer and networking security is a continuing process. This process requires close collaboration from all involved parties. For this reason we propose to set up a list of security contact persons for all HEP institutes. The larger institutions should promote this process by requiring their collaborating institutes to nominate computer and network security contact persons. The list can be maintained by HTASC.