Brian Tully Tran To
Coen150
5/17/2004

Internet Worms: Methods, Countermeasures and Famous Incidents

1 Abstract

Among the most destructive malicious programs facing today's computer users are computer worms. They cause billions of dollars in damages every year, exploiting common system vulnerabilities in order to wreak havoc on the global economy. Since vulnerabilities will likely be around for a long time to come, it is beneficial to understand the worms that exploit them in order to help prevent them and the damages they inflict. This project examines the vulnerabilities exploited, specifically weak passwords, trap doors and buffer overflows. It also examines how these vulnerabilities are exploited by specific well-known worms, such as the Morris Worm and the Blaster Worm. Lastly, it suggests countermeasures for both the vulnerabilities and the worms themselves.

2 Introduction

In today's computing society there are perhaps no more destructive programs than computer worms. They cause problems on a scale that ranges from corporations all the way down to individual users. The damages they inflict total billions of dollars per year and range from lost productivity to compromised information. According to a report by the U.K.-based security company mi2g Ltd., digital attacks, including worms and viruses, caused more than $8 billion in damages worldwide in January 2003, with the Slammer worm alone accounting for $1 billion. With damages so high, the question remains why worms exist in the first place and why they have not yet been eradicated.

Worms exploit system vulnerabilities. Although security has improved greatly over just the past decade, systems still have faults that can be exploited. Even today, perhaps the two most renowned operating systems, Windows and Unix, carry only class D and class C1 security ratings under the Orange Book. The Orange Book is the de facto standard for many agencies that specifies the rating criteria for the security of systems; class D offers minimal protection and class C1 offers only discretionary security protection. From these ratings alone it is clear that vulnerabilities will be around for a long time to come, so it is beneficial to understand the worms that exploit them in order to prevent them and the damages they inflict [15].

This project will briefly examine the system vulnerabilities exploited, specifically weak passwords, trap doors and buffer overflows. It will also examine how specific well-known worms, such as the Blaster Worm and the Slammer Worm, exploit these vulnerabilities. Lastly, it will suggest countermeasures for both the vulnerabilities and the worms themselves.

3 Background Information

A computer worm is a program that is able to spread functional copies of itself to other computers. There are several necessary criteria for a worm. First, the worm must be able to replicate. Second, it must be self-contained, meaning it does not require a host program. Third, it must be activated by creating processes, which means it needs a multitasking system. Finally, for network worms, replication must occur across communication links. Unlike a virus, a worm does not need a host program to propagate.

There are two major classifications of worms: host computer worms and network worms. A host computer worm is entirely contained in the computer it is running on and uses network connections only to copy itself to other computers. A network worm has multiple segments that run on different hosts, and possibly perform different actions, using the network for several communication purposes, of which propagation is only one [9].

Worms by definition are not malicious. A worm is simply a program designed to replicate; any number of additional tasks may also be performed by it. In today's society worms have a bad reputation for performing malicious tasks, but in fact the first network worms were intended to aid in network management. They took advantage of system properties to perform useful tasks. Malicious worms exploit these same system properties, since the facilities that allow worms to replicate do not discriminate between malicious and benevolent programs.

4 History of Worms

The term "worm" was first used by science fiction author John Brunner in his 1975 novel The Shockwave Rider. In his book Brunner described a totalitarian government that kept control over its citizens by using a powerful computer network. A freedom fighter in the story introduced a "tapeworm" into this computer network that infested the system and forced the government to shut down the network. Researchers writing a paper on experiments in distributed computing noted the similarities between their software and the program used by the freedom fighter, and adopted the name [9].

Worms were originally used as a mechanism to perform tasks in a distributed network environment. In 1982, at the Xerox Palo Alto Research Center (PARC), scientists John Shoch and Jon Hupp experimented with worms to perform distributed computations. Together they developed five worms, each designed to perform particular tasks around the network. The simplest was a "town crier" worm whose job was to post announcements on all the computers of the network. Another, more complicated worm was designed to do computation-intensive tasks. It harnessed the extra computing power of idle computers by operating at night after everyone went home. Before employees arrived in the morning the worm would save all the work done during the night and remain dormant until the next evening [3, 9].

These experiments showed promising performance results for using worms in network management tasks. However, there was great difficulty in managing the worms, namely controlling the number of simultaneously executing copies. After letting the previously mentioned worm execute overnight, the scientists arrived the next morning to find that it had crashed several hosts. Even after the hosts were rebooted, the worm quickly located the newly available machines and proceeded to crash them again. Shoch and Hupp had to develop a "vaccine" to destroy the worm before the computers on the network could function again. Although the worm was created for a good purpose, the malicious implications of such a worm became quite clear: without a "vaccine" or built-in self-destruct mechanism, a worm could crash computers at will with no easy resolution [3, 9].

5 System Vulnerabilities

Malicious worms exploit flaws in the operating system or in system management in order to replicate. Some of the most infamous worms attack availability, although there are worms that attack confidentiality and integrity as well. The flaws exploited are seemingly endless, but common ones include weak passwords, trap doors and buffer overflows. The first two are generally used to gain access to user accounts, while the last is used to gain root privileges.

Perhaps the most common attack is the password attack, which uses a password guessing scheme to break into user accounts. The guessing could be brute-force random guessing, but since brute-force cracking of present-day encryption could take a very long time, worms take advantage of the fact that many people choose simple passwords that are easy to guess. Worms therefore use a dictionary or a list of commonly used passwords to attack systems. The detailed methods behind such attacks are beyond the scope of this project, but a brief understanding of them is useful background for the attacks described in the rest of this section [7].
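
To make the idea concrete, the following is a minimal sketch of the kind of dictionary attack described above. The stolen password hash and the tiny word list are hypothetical placeholders, and a real attack would iterate over a large dictionary; on Linux the program typically needs to be linked with -lcrypt.

    /* Minimal dictionary password-guessing sketch (illustrative only).
     * The target hash and word list below are hypothetical placeholders. */
    #define _XOPEN_SOURCE 700      /* expose crypt() in <unistd.h> on glibc */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical password-file entry: the first two characters of a
         * traditional crypt() hash are the salt. */
        const char *target_hash = "abJnggxhB/yWI";   /* placeholder hash */
        const char *dictionary[] = { "password", "letmein", "qwerty", "guest" };
        size_t words = sizeof(dictionary) / sizeof(dictionary[0]);
        char salt[3] = { target_hash[0], target_hash[1], '\0' };

        for (size_t i = 0; i < words; i++) {
            const char *attempt = crypt(dictionary[i], salt);
            if (attempt != NULL && strcmp(attempt, target_hash) == 0) {
                printf("guessed password: %s\n", dictionary[i]);
                return 0;
            }
        }
        printf("no dictionary word matched\n");
        return 0;
    }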

Another common attack exploits backdoors built into programs. A backdoor is a software feature, usually undocumented, built into a program to sidestep the normal security mechanisms. Often installed for debugging and maintenance, backdoors offer attackers easy access to systems once they are discovered. After accessing user accounts through password or backdoor attacks, buffer overflow attacks are commonly used to gain root privileges [7].

5.1 Buffer Overflow

A common attack used by worms to gain super-user status exploits buffer overflow. A buffer is a contiguous allotted chunk of memory, such as an array. In C and C++ there is no automatic bounds checking on a buffer, which means a program can write past its end. For example, if a program declares an array myArray[10] and then executes myArray[i] = 'T'; with i = 10, the 'T' is written outside the buffer. The write may spill into other parts of the program's memory or, even worse, into a memory area being used by the operating system. Many programs written in C or C++ have these vulnerabilities because there are no run-time checks built into the language. Common functions like sprintf(), scanf(), gets(), and strcpy() do not check that the destination buffer is large enough to hold the data. Generally, buffer overflows occur because the size of an input is not checked before the input is stored in a buffer. Buffer overflow attacks exploit this lack of bounds checking in order to overwrite parts of a running program's memory with malicious code [4, 7].

In an ordinary declaration, a buffer is placed on the stack, which holds the memory the program has allotted. When a program is executed, it runs the set of binary instructions contained in the program, and as it executes it maintains pointers that keep track of memory: a brk pointer tracks the memory obtained with malloc() and a stack pointer points to the top of the stack. When a subroutine call is made, the function parameters and the return address are pushed onto the stack, followed by a frame pointer that is used to reference the local variables and the function parameters. By supplying long, unchecked parameters, an attacker can overflow a buffer on the stack and manipulate the saved return address [4].

When an unchecked parameter on the stack overflows, the long input string overwrites the space allotted for the last return address that was pushed onto the stack. This corrupts the process stack and can alter the program's execution path, because the function's return address has been overwritten. With the original return address lost, an attacker can do one of two things. First, the attacker can inject attack code: the input string that overruns the buffer can itself be executable binary code. For example, the attacker can spawn a root shell that lets the attacker act with root privileges, and with root privileges the attacker controls the computer. Even worse, if the program's input comes from a network connection, the overflow can allow any user anywhere on the network to gain root privileges on the local host. The second type of attack involves changing the return address: the attacker makes the buffer overflow redirect the return address to malicious code already in memory, so that when the function returns it jumps to the attack code instead of returning to where it was called from [2, 4].
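
The sketch below, with hypothetical function names, shows the kind of unchecked copy that makes these attacks possible, alongside the sort of bounded copy recommended in the countermeasures that follow. It demonstrates the missing bounds check rather than an actual exploit.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: strcpy() copies until it sees a terminating zero byte, so
     * an argument longer than 15 characters overruns buf and can overwrite
     * the saved frame pointer and return address above it on the stack. */
    void vulnerable_copy(const char *input)
    {
        char buf[16];
        strcpy(buf, input);               /* no bounds check */
        printf("copied: %s\n", buf);
    }

    /* Safer: never write more than sizeof(buf) - 1 bytes and terminate. */
    void bounded_copy(const char *input)
    {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        printf("copied: %s\n", buf);
    }

    int main(int argc, char *argv[])
    {
        if (argc > 1) {
            bounded_copy(argv[1]);        /* vulnerable_copy(argv[1]) risks a crash */
        }
        return 0;
    }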

5.2 Countermeasures

There are several ways to combat these system vulnerabilities. First, users should choose passwords that are difficult to guess. Second, programmers should not build backdoors into their programs. While both practices will likely improve as people become more concerned with security, neither problem will be eliminated altogether.

As far as the buffer overflow problem is concerned, one solution is simply to write secure code. Since buffer overflows occur when an input is larger than the bounds of the buffer, the best way to deal with the problem is to minimize the use of functions that can cause them. Developers should be educated to avoid functions such as sprintf(), scanf(), gets(), and strcpy(), and to use bounded functions like strncpy() that limit how much data is written to the buffer. Another solution is to use compilers that add bounds checking to the compiled code automatically, without changing the source code, so that any code that attempts to access an illegal address is prevented from executing. The problem with this approach is that it requires the source code to be recompiled, which can be impossible if the application is not open source, and performance may suffer to a considerable extent. There are many other solutions that cannot be covered here, but most of them require extra work on the part of the user, which in itself proves to be a great deterrent to fixing such vulnerabilities [4].

6 Famous Incidents

The first worm that posed a serious computer security threat was the Christmas Tree Worm, which attacked IBM mainframe computers in December 1987. The worm was a chain-letter Christmas card that included a Trojan horse program with a hidden purpose. The card claimed that if the program was executed it would draw a Christmas tree on the display. This was in fact true, but in addition the program sent a copy of itself to everyone on the user's address list. The worm managed to bring down the world-wide IBM network on Christmas day, and it was the first of a series of malicious worms that have plagued society since then [9].

6.1 Morris Worm

The Morris Worm, otherwise known as the Internet Worm, was released onto the Internet on November 2, 1988. The whole purpose of the worm was to copy itself onto other systems, which it did by attacking mail servers. It searched for network connections to any systems it could contact, and whenever it located a computer it tried to break in using several obscure holes in Unix. It exploited a trap door in sendmail that let users execute remote commands on a server, using this trap door to spawn copies of itself and mail them to other computers, commanding them to execute the program. After reaching a new host, the process repeated, and the newly created worm searched for other computers to infect and sent mail messages to them [10].

If the sendmail hole did not work, the worm tried to exploit a buffer overflow vulnerability in the finger daemon, the program that handles finger requests. The finger daemon had room for 512 characters of data; the worm sent 536 characters, and the extra 24 characters got executed as commands. By overflowing the finger daemon's buffer, the worm found another way to execute its code on another computer.
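
A simplified illustration of this kind of hole is sketched below: a fixed 512-character buffer filled by a read routine that has no idea how large the buffer is. This is only an illustration of the pattern, not the actual fingerd source; the historical bug is commonly attributed to a call to gets(), which has since been removed from the C standard, so the sketch uses a bounded scanf() and notes the dangerous unbounded form in a comment.

    #include <stdio.h>

    /* Sketch of a fixed 512-byte request buffer. An unbounded read such as
     * gets(line) or scanf("%s", line) keeps writing past the end of the
     * buffer, much as the Morris Worm's 536-byte finger request overran the
     * daemon's 512-byte buffer. The bounded read below stops in time. */
    int main(void)
    {
        char line[512];

        if (scanf("%511s", line) == 1) {   /* at most 511 chars plus '\0' */
            printf("finger request: %s\n", line);
        }
        /* Dangerous variant, shown only as a comment: scanf("%s", line); */
        return 0;
    }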

The worm also attempted to exploit weak passwords. It had a built-in password guesser that tried to log on to computers using a few hundred common passwords. To infect a remote host, the worm exploited the sendmail or finger vulnerabilities to run a bootstrap loader on the remote machine. The loader then opened a network connection back to the infected machine to load the worm over and execute it. After doing this, the newly infected host collected the names of other hosts to infect and repeated the process [10].

In the end the Morris Worm affected roughly 6,000 systems, inadvertently forcing them to shut down for several days. The worm itself was not designed to harm systems, but a flaw that allowed multiple copies to infect a single machine had the effect of consuming excessive system resources. Morris confessed to creating the worm out of boredom and was convicted in 1990 of violating the 1986 Computer Fraud and Abuse Act. He was fined $10,000 and sentenced to three years' probation.

6.2 Code Red Worm

The Code Red Worm infected more than 250,000 servers in less than nine hours in July of 2001. Several versions of Code Red infected web servers running Microsoft's Internet Information Server (IIS). The worm worked by checking port 80 of connected servers to determine whether a system was vulnerable. When it found an insecure server, the worm would copy itself onto that server and then use the newly infected server to find other servers to infect. It did this by sending an HTTP GET request to the vulnerable server in an attempt to exploit a buffer overflow vulnerability. The buffer overflow attack was directed at idq.dll and allowed the attacker to run malicious code on the affected system [12].

During the first nineteen days of the month the worm looked for other servers to infect, and at the same time it used infected servers to deface web pages requested from them. From day twenty to day twenty-seven, if an exploit was successful, the worm activated and launched a distributed denial of service attack against the White House web site, www.whitehouse.gov. On day twenty-eight the worm slept, making no active connections and launching no denial of service attacks. In the end, over 750,000 servers were infected, causing over $2 billion in damage [12].

6.3 Blaster Worm

The Blaster worm hit the Internet on August 11, 2003. It spread on computers running Windows XP and Windows 2000. The main purpose of the Blaster worm was to launch a port 80 denial of service attack against Microsoft's windowsupdate.com site on August 16, 2003. It spread quickly through the Internet because it scanned for vulnerable systems across ISP address ranges and attacked those that it found. The worm exploited a buffer overflow in the Windows implementation of Remote Procedure Call (RPC); more specifically, the enabling vulnerability was a defect in Microsoft's interface between its Distributed Component Object Model (DCOM) and RPC in Windows NT, 2000, XP and Server 2003. Using DCOM, the worm passed rogue code inside RPC packets carried over TCP/IP in order to inherit the computer's privileges. After the computer was infected, the Blaster worm launched a DDoS attack on Microsoft's windowsupdate.com site to prevent users from receiving the patches that were available there [1, 11].

With an infected computer under its control, the worm probed other hosts using basic port scanning to find machines with TCP port 135 open. Port 135 is used by Microsoft to support RPC. If an open port was found, the worm deposited a variant of a Trojan horse that executed a remote shell on a TCP port. The compromised computer then issued a TFTP request to port 69 to download the actual malware and became an unwilling repeater in the DDoS attack against the windowsupdate.com site [1].
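
As an illustration of the kind of scan described above, the sketch below performs a plain TCP connect() probe against port 135 of a single placeholder address (192.0.2.1, a reserved documentation address). It only reports whether the port answers and carries no exploit; a worm would loop a probe like this over many addresses.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in target;
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }

        memset(&target, 0, sizeof(target));
        target.sin_family = AF_INET;
        target.sin_port = htons(135);                       /* RPC endpoint mapper */
        inet_pton(AF_INET, "192.0.2.1", &target.sin_addr);  /* placeholder address */

        if (connect(sock, (struct sockaddr *)&target, sizeof(target)) == 0) {
            printf("port 135 open\n");        /* a worm would try its exploit here */
        } else {
            printf("port 135 closed or unreachable\n");
        }
        close(sock);
        return 0;
    }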

Overall, more than 1.4 million computers were affected by the variants of the Blaster worm. The problem could have been prevented if users had installed the available patch earlier: Microsoft had released a patch for the RPC and DCOM vulnerability on July 16, 2003, but at least 1.4 million computer users did not bother to install it [1].

6.4 Slammer Worm

Another worm that affected the Internet community was the SQL Slammer worm, also known as the Sapphire worm. The Slammer worm was the fastest-spreading computer worm in history. It began at 5:30 a.m. on January 25, 2003, and within 10 minutes it had affected over 75,000 computers. During this time the infected population doubled every 8.5 seconds, eventually reaching 90 percent of vulnerable hosts. The Slammer worm spread using random scanning: it selected IP addresses at random to infect and in this way eventually found all vulnerable hosts. Another reason the worm spread so quickly through the Internet was its size. The worm consisted of nothing but a simple, fast scanner a mere 376 bytes long, small enough to fit in a single 404-byte packet including headers [8, 14].
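
The random scanning described above amounts to little more than treating pseudo-random 32-bit numbers as IPv4 addresses. The sketch below shows only that address-generation step and prints a handful of candidate targets; it sends nothing on the network.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void)
    {
        char text[INET_ADDRSTRLEN];
        srand((unsigned)time(NULL));

        for (int i = 0; i < 5; i++) {
            /* Combine two rand() calls so that all 32 bits of the address vary. */
            uint32_t addr = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
            struct in_addr in;
            in.s_addr = htonl(addr);
            inet_ntop(AF_INET, &in, text, sizeof(text));
            printf("candidate target: %s\n", text);  /* a worm would probe this address */
        }
        return 0;
    }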

The purpose of this worm was to cause denial of service conditions on several specific Internet hosts and to slow down the Internet in general. The worm exploited buffer overflow vulnerabilities in Microsoft's SQL Server database product, delivering itself in small UDP packets sent to the SQL Server Resolution Service, and the flood of scanning traffic generated by infected hosts was what produced the denial of service effect [8].

Similar to the Blaster worm, the Slammer worm exploited vulnerabilities in Microsoft products that had already been published and for which patches were available: Microsoft had released a patch for SQL Server on July 24, 2002, six months before the attacks occurred [8, 14].

6.5 Sasser

The Sasser worm was first noticed by the Internet community on April 30, 2004. It spread to computers running Windows XP and Windows 2000. Unlike many other worms, the Sasser worm did not travel by email; instead it connected directly to open ports on a computer and needed no user interaction in order to spread. The Sasser worm exploited a buffer overflow in a Microsoft component known as LSASS, the Local Security Authority Subsystem Service. The worm scanned different IP addresses and connected to vulnerable computers through TCP port 445, and it may also have spread through port 139. Both channels are used by a Windows file-sharing protocol. Many of these channels had been blocked by ISPs, but when the worm found a vulnerable system it would install an FTP server and use that server to transfer itself to new computers [5, 13].

The LSASS patch was available before the release of the worm. In fact, several technology specialists believe that the worm was created by reverse-engineering the patch to discover the vulnerability. Because people do not patch their systems on a regular basis, the worm affected over 10,000 computers, and new variants continue to appear regularly [5, 13].

6.6 Countermeasures

Network Associates' McAfee AVERT senior director Vincent Gulotto and Symantec Security Response senior director Vincent Weafer both agree that the main reason worms spread as extensively as they do is that both users and system administrators fail to update their anti-virus software. This is compounded by the fact that, no matter how good the security on one system is, the Internet makes it dependent on other users and their security. Even so, users who install up-to-date patches find themselves in a much better situation: by installing patches regularly, the amount of time during which vulnerabilities can be exploited is kept to a minimum. As was seen with the Blaster worm, a patch had been out for almost a month, but a lack of concern or outright laziness kept people from installing it [6].

It is also important to update anti-virus software on a regular basis. According to Gulotto, this is the best weapon against computer worms, both old and new, because the latest anti-virus software uses heuristics to identify code common to a wide range of viruses and variants. Along the same lines, it is important to make sure firewall software is properly configured and that all unnecessary services, such as web and FTP servers, are disabled when not in use [6].

In addition, it is important to build systems that are secure on the inside as well as at the perimeter. Many enterprise-level networks use tough firewalls to prevent unauthorized entry from the outside but have minimal protections inside, which allows a worm that penetrates the perimeter to cause damage unchecked. The situation is made worse by system administrators who eradicate most of an infection but not all of it because the rate of infection is low; this can lead to continued transmission of old worms and viruses [6].

7 Summary

Worms are not going away anytime soon, and individual users do not have much of a say in how secure their systems are. Admittedly, there are a few operating systems to choose from, but ultimately security depends on the developers of the product. This leaves the user in a predicament: he or she is forced to use an insecure system knowing there are worms that can attack it. Unfortunately, this project cannot offer a feasible alternative other than not connecting to the Internet at all. Despite these unfortunate circumstances, what this project has shown is that with a minimal amount of effort a user can greatly increase the security of an inherently insecure system. By installing up-to-date patches, for instance, the time that elapses between when a vulnerability is discovered and when it is fixed on the system is kept small, which means the window during which the vulnerability can be exploited is also small. Simple procedures like this, and those mentioned above, can significantly reduce the number of computers affected by worms as well as the extent of the damage they do.

8 References

[1] Berghel, Hal. "August, 2003: SoBig, W32/Blaster and the Malware Month of the Millenium." Aug. 2003. ACM. 11 May 2004 <http://www.acm.org/~hlb/coledit/digital_village/dec-03/dv_12-03.html>.
[2] Cowan, Crispin. "Buffer Overflow Attacks." 9 Dec. 1997. 13 May 2004 <http://www.cse.ogi.edu/DISC/projects/immunix/StackGuard/usenixsc98_html/node3.html>.
[3] Drakos, Nikos. "History of Worms." 10 Mar. 1994. 9 May 2004 <http://www.csrc.nist.gov/publications/nistir/threats/subsubsection3_3_2_1.html>.
[4] Grover, Sandeep. "Buffer Overflow Attacks and Their Countermeasures." 10 Mar. 2003. 13 May 2004 <http://www.linuxjournal.com/article.php?sid=6701>.
[5] Legard, David. "Sasser worm expected to hit hard on Monday." 3 May 2004. IDG News Service. 12 May 2004 <http://www.infoworld.com/article/04/05/03/HNsasserworm_1.html>.
[6] Lyman, Jay. "How Computer Worms Work - and Why They Never Die." 13 Nov. 2001. NewsFactor Network. 12 May 2004 <http://www.newsfactor.com/perl/story/14733.html>.
[7] Mahoney, Matt. "Computer Security: A Survey of Attacks and Defenses." 11 Sept. 2000. 12 May 2004 <http://www.cs.fit.edu/~mmahoney/ids.html>.
[8] Moore, David, Vern Paxson, Stefan Savage, Colleen Shannon, Stuart Staniford, and Nicholas Weaver. "The Spread of the Sapphire/Slammer Worm." University of California, Berkeley. 11 May 2004 <http://www.cs.berkeley.edu/~nweaver/sapphire/>.
[9] Nagpal, Shuchi. "Computer Worms - an Introduction." Asian School of Cyber Laws. 10 May 2004 <http://www.asianlaws.org/cyberlaw/library/cc/what_worm.htm>.
[10] Stoll, Cliff. The Cuckoo's Egg. New York: Pocket Books, 1990.
[11] "Blaster Worm." 21 Apr. 2004. Wikipedia. 11 May 2004 <http://en.wikipedia.org/wiki/Blaster_worm>.
[12] "How Computer Viruses Work." LidRock. 10 May 2004 <http://lidrock.howstuffworks.com/virus4.htm>.
[13] "Sasser worm." 16 May 2004. Wikipedia. 11 May 2004 <http://en.wikipedia.org/wiki/Sasser_worm>.
[14] "SQL slammer worm." 21 Apr. 2004. Wikipedia. 11 May 2004 <http://en.wikipedia.org/wiki/SQL_slammer_worm>.
[15] "WebCohort Detects and Stops Attacks Based on New Destructive Oracle Vulnerabilities." 20 Feb. 2003. The Herjavec Group. 8 May 2004 <http://www.herjavecgroup.com/press/feb2003_01.php>.