DREXEL UNIVERSITY
Info 710 Project
Network Incident Investigation, Acquisition, and Response
John Misczak, John Ehring & Chris Grabosky
9/2/2010

Table of Contents

Introduction
Network Forensics Discussion
Types of Forensic Data
    Statistical Data
    Alert Data
    Session Data
    Full Content Data
Relevant Tools
    Packet Sniffing Tools
    Packet Analyzers
    Text Search Utilities
    Trace Tools
    Network Statistics Tools
    Intrusion Detection
    Network Monitoring
Windows Network Evidence Acquisition
    First Trace
        Statistical Analysis
        Alert Data Analysis
        Session Data Analysis
        Full Content Data Analysis
    Second Trace
        Statistical Analysis
        Alert Data Analysis
        Session Data Analysis
        Full Content Data Analysis
UNIX First Trace
    Physical Properties
    Basic Content Analysis
    Alerts
    Sessions
    Full Content Analysis
Comparison Analysis
Conclusion
Works Cited

Introduction

Before forensic experts begin any investigation, they must fully understand the core principles of digital forensics. Most importantly, they must understand the proper protocol that must be followed before any investigation can occur; failure to do so could result in perfectly good digital evidence being thrown out as inadmissible. Additionally, a forensic investigator must have a solid understanding of the types of data involved in a network acquisition investigation. Statistical, alert, session, and full content data can all provide clues about what actually happened during a network event. Once these types of data are understood, a digital forensics expert can begin the analysis using a variety of tools. These tools can change depending on the operating system in question, but they are generally based on the same principles and forensic techniques. Packet sniffing tools, packet analyzers, text search utilities, trace tools, network statistics tools, intrusion detection, and network monitoring can each provide specific data sets to a forensic expert. In our analysis of network incident investigation, acquisition, and response, we will demonstrate typical network acquisition processes using these types of data and tools common to forensic experts. We will conduct a network acquisition process in both the Windows and UNIX environments.
Finally, we will compare these two environments to see what they share and how they differ from one another.

Network Forensics Discussion

Ben Laurie describes network forensics as "using evidence remaining after an attack on a computer to determine how the attack was carried out and what the attacker did" (Laurie, 2004, p. 51). However, this simplistic definition does not do the field justice. Network forensics is a discipline within the greater context of digital forensics which focuses on the collection of data, intrusion detection, and evidence gathering via the monitoring of a network. It can be used to gather evidence after the fact, as Laurie suggested, but it can also be used to find evidence of a future attack or an attack in progress, as well as when a potential attack is likely to take place. Furthermore, network forensics has the potential for usefulness beyond calculated attacks; digital forensic evidence can be used to support prosecution of other crimes, such as the distribution or retention of child pornography.

As Laurie describes, network forensics involves much more than the collection of leftover data, as many cybercriminals are fully aware of their actions and seek to conceal their crimes. By utilizing a number of network forensic techniques, security professionals can help unmask these crimes and find real, usable evidence of wrongdoing and illegal activities which can be used in a court of law. Attackers are likely to distort or remove evidence of their crimes by "modifying logs, deleting core dumps, and installing software...which can't be seen by using standard utilities" (Laurie, 2004, p. 51). It is the duty of a network forensics specialist to understand these potential cover-ups and to look for the clues which can reveal that data has been tampered with or altered.

Many network attacks fail. For example, buffer overflow attacks require the execution of code from a very specific location in order to be successful. This location depends highly on the operating system, its build, and the compilers used. Often, attackers will not know this exact location and will be forced to try several variations before a successful attack is made. Forensics experts can use these failed attempts to help determine what took place and what data could have been modified (Laurie, 2004).

Types of Forensic Data

Statistical Data

Statistical data is the summary information from a log dump that may provide a first direction for forensic analysis (such as which protocol or application was used). This type of data can be created by applications such as tcpdstat (for basic summary information) or Argus (for network flow summary information).

Alert Data

Alert data consists of a more in-depth, pattern-based analysis of a log dump. It can help to identify possibly malicious activity. Intrusion detection applications, such as Snort, are capable of producing this type of data.

Session Data

Session data looks at session information from a higher level in the OSI model and identifies nodes that were communicating, including messages sent and received, as well as information about those messages. Session data can help to identify strange session behavior, such as an overflow of SYN packets or dropped connections, which can be a telltale sign of malicious or criminal activity.

Full Content Data

Full content data looks into the full packet, including control information and user data (payload). This type of data can help to identify actual attack information.
However, obtaining this information is a very time-consuming process due to the large amounts of data involved. Therefore, prior analysis is necessary to identify which packets should be examined in full.

Relevant Tools

Packet Sniffing Tools

Packet sniffing tools, such as tcpdump (for UNIX operating systems) and windump (for Windows operating systems), can be used by network forensics experts to "convert binary data in packets to human readable format", to perform intrusion detection, or to log traffic for later evidence collection (Fuentes & Dulal, 2005, p. 169). These tools can create dumps of communications through the TCP/IP and UDP/IP stacks, which can then be analyzed for inconsistencies or evidence of an attack by understanding the exact correspondence between sender and receiver. Tcpdump, of which windump is a Windows-friendly port, was created in the early 1990s and is one of the most basic packet sniffing tools available; it is a fundamental part of more complicated tools, such as Ethereal (Fuentes & Dulal, 2005).

Packet Analyzers

Packet analyzers, such as Ethereal and the more up-to-date Wireshark (both available on UNIX and Windows platforms, as well as other operating systems), expand upon the work of packet sniffing tools such as tcpdump. Packet analyzers include the functionality of packet sniffers, add a GUI (making the application more user-friendly), and provide filters which can display or capture particular types of records. While similar in function to packet sniffers, packet analyzers "make it easier to make sense of a stream of ongoing network communications" (Fuentes & Dulal, 2005, p. 173). By providing an interface which can more easily make sense of a considerable amount of data, the work of a network analyst is greatly reduced.

Text Search Utilities

Text search utilities, such as ngrep and grep (which stands for "global regular expression print"), print every line in a data set which contains a specific sequence of characters, regardless of whether it is a word or part of another word (Goebelbecker, 1995). These text search utilities are extremely useful to forensics experts because they allow large quantities of log data (or output from network dumps) to be quickly filtered for relevant information.

Trace Tools

Trace tools, such as tcptrace (developed by Shawn Ostermann for the UNIX platform), help to analyze TCP dump files and identify session information. Tcptrace can work with packet sniffers and analyzers, such as tcpdump, windump, Ethereal, and Wireshark, by using their data dumps as input. Once this input is analyzed, tcptrace can provide information such as elapsed time between sending and receiving packets, the specific segments of the packets which were sent and received, throughput, and total bytes (Ostermann, 2003).

Network Statistics Tools

Network statistics tools, such as tcpdstat (developed by Dave Dittrich for the UNIX operating system), "produce a per-protocol breakdown of traffic by bytes and packets, with average and maximum transfer rates" from input gleaned from packet sniffers and analyzers such as tcpdump or Wireshark (Dittrich). These tools allow forensics experts to obtain basic summary information about a network dump.

Intrusion Detection

Many intrusion detection tools which are "capable of performing real-time traffic analysis and packet logging on IP networks" are currently available (Roesch).
Snort is the leading tool in this market, which is one of the reasons we have chosen it as our primary intrusion detection tool. Snort's main uses include protocol analysis, content searching and matching, and the detection of attacks and probes such as stealth port scans and OS fingerprinting attempts, among others (Roesch). For our investigation, we will run Snort in batch mode against already collected data to find patterns in the logs consistent with malicious activity.

Network Monitoring

Though many network monitoring software suites are available, we have chosen Argus for its ability to transform Wireshark/Ethereal output (libpcap data) into session data capable of being analyzed with other tools. Additionally, Argus can itself analyze packet files and "generate summary network flow data" (QoSient, LLC, 2010). Argus can also be used to analyze network streams and produce audits for live networks.

Windows Network Evidence Acquisition

First Trace

The file that our team starts with for the first trace is the first part of the capture file from JBR Bank's incident, labeled s2a.lpc. While this file contains a wealth of network evidence because it is a packet capture from a JBR Bank system, it is difficult to assess the exact cause and severity of the incident by looking at it as a whole. Instead, the preferred method is to work from the outside in, starting with the statistical data of the capture.

Statistical Analysis

In order to perform statistical analysis on the capture file, the team elected to use its first tool, a program called tcpdstat (http://staff.washington.edu/dittrich/talks/core02/tools/tools.html). In order to get tcpdstat to compile properly on a Linux platform, however, the team first had to install the build-essential and libpcap development packages. Once these packages were installed from their respective repositories, tcpdstat compiled and was available for use. The team then used tcpdstat to analyze the capture file and wrote the results of the analysis to a text file for easier viewing; the general form of the invocation is sketched at the end of this subsection. With the file created, its contents can be opened and viewed in any text editor. The team chose to relocate its output and results files to another machine not involved in the forensic analysis in order to prevent any volatile data from being overwritten.

When viewing the results of running the capture file through tcpdstat, a few scenarios came to light that are possibly supported by the capture file's statistics. First, the highest packet counts are fairly typical except for one entry. While IP, TCP, HTTP, and HTTPS are all frequently used protocols in regular web traffic, the high packet count for the "other" field indicates either that a protocol unknown to tcpdstat is used frequently in the trace, or that many different unknown protocols were used a few times each. The latter theory was found to carry more weight once the team noticed that numerous other protocols had just a few packets each. Thus, it is highly likely that someone was attempting to send small amounts of a large variety of traffic in order to see what got through; in other words, a port scan may have been performed on this JBR Bank system. However, there could be many possible reasons why this pattern of traffic occurred that do not involve an attack or a malicious individual; therefore, the team could not arrive at a conclusion at this point in the investigation and moved on to the next type of analysis.
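For reference, tcpdstat reads a libpcap capture directly and prints its report to standard output, so the run took roughly the following form; the output file name is illustrative rather than taken from the case materials:

    tcpdstat s2a.lpc > s2a.tcpdstat.txt

The resulting text file was then copied to the separate review machine mentioned above and opened in an ordinary text editor.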
Alert Data Analysis

While tcpdstat was useful for statistical analysis, it has little to no value as a tool for examining and analyzing network alert data. Thus, the team turned to its second tool, the open-source intrusion detection system Snort. Snort is useful due to its flexibility: it can either be activated as a live, on-the-wire detection system, or it can be run after the fact on a collected batch of data. Since the team already had the capture file, the second option was chosen. Snort was run from the command line with a number of flags that configured it to run in the fashion the team desired. The flag "-c /etc/snort/snort.conf" tells Snort where to look for the rules file that governs how the program functions and what it looks for. The "-r s2a.lpc" parameter gives Snort the location of the file to consider; since Snort was being run from the directory that contained the file, the team did not have to include the full path. The "-b" flag tells Snort to log the packets it processes in binary tcpdump format for this batch-style run, while "-l /var/log/snort" tells the program which directory to place its log files in.

The alert file generated by running Snort contains a number of alerts based on traffic patterns in the capture file. While a first look at the file may yield nothing more than a rather large column of text, one can quickly see that the various incidents are delimited by the character sequence "[**]" surrounding the title of each incident. Even in the first few alerts, the phrases "Attempted Information Leak" and "Web Application Attack" occur multiple times. For each of these attacks, the victim IP address that acts as the destination for attacks and the source for leaks is 103.98.91.41, which the team had been informed was the IP address of the JBR Bank system. Furthermore, the attacks appear to be occurring over TCP port 80, which is typically used by a web server such as the one employed by JBR Bank. Near the end of the alert file, the requests on TCP port 80 stop and numerous other requests are made. These include requests using ICMP, SNMP, and the SOCKS web proxy to gather additional information about the system, such as which services are running and can therefore be used to the attacker's advantage. By first closely examining the key port of the web server and then fanning out to investigate other potential avenues of attack or exploitation, it is clear that the party interacting with the JBR Bank system had intentions other than simply accessing the bank's web site. In order to find out exactly what those intentions might be, the team moved on to the next set of data for this trace.

Session Data Analysis

While tcpdstat and Snort helped the team with statistical and alert data respectively, they fall short when it comes to examining session data. For that purpose, the team elected to use the tool Argus to once again examine the capture file for information that would help the investigation. Once again, flags were used when running Argus from the command line: "-d" switches Argus to background mode, "-r s2a.lpc" tells Argus that the data for review is in the file s2a.lpc, and "-w s2a.argus" instructs Argus where to put its output. Subsequently, the team used Argus' client, "ra", to decipher the Argus data and place it into a text file that could be easily read. The combined invocations are sketched below, and the ra switches are explained in the following paragraph.
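Assembled from the switches described above and in the next paragraph, the two commands took roughly this form; the name of the final text file is illustrative:

    argus -d -r s2a.lpc -w s2a.argus
    ra -a -c -n -r s2a.argus | grep -v drops > s2a.ra.txt

The first command converts the packet capture into Argus flow records, and the second renders those records as readable text, with the grep stage stripping out the periodic status lines that Argus adds.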
Numerous switches were employed yet again: "-a" turns on summary statistics for the data, whereas "-c" formats the output with bytes sent by a system first, followed by bytes received by the system. The "-n" switch prevents ra from replacing IPs and port numbers with known host names and services, allowing the team to more easily connect the dots with the other types of data collected in this investigation. "-r s2a.argus" tells ra to read the data from the s2a Argus file, whereas piping the output through "grep -v drops" removes the status report that Argus would normally add to the text file. The output was line-wrapped so it could be presented at a readable size.

The destination for each line of traffic is TCP port 80, and about six packets were sent and received in each case. More importantly, the frequently seen FIN packets mean that each connection was closed after being used for a brief period of time, suggesting that a number of different vulnerabilities in the web server's port 80 were attempted in order to see what worked and what failed. Later in the output, a number of ports are contacted with one packet each, with the server replying with one packet to each connection as well. The large number of ports involved is either a rather significant coincidence or strong evidence that someone performed a port scan on the JBR Bank system. Now that the team had become suspicious of these connections to multiple ports on the system, it wanted to see the contents of the packets in question. This desire entails more significant analysis, however, in the form of full content investigation.

Full Content Data Analysis

Since the team wanted to examine the entire content of individual packets, the popular packet capture and trace file tool Wireshark was enlisted. Wireshark allowed the team to search for packets based on the timestamp in question (18:55:18 on September 23, 2003) as well as the port numbers (1359, 305, 698, etc.) that were possibly scanned by a malicious individual. In numerous communications from the attacker's system (IP address 95.16.3.79, port 47990) to the JBR Bank web server (103.98.91.41, port 698), a hexadecimal pattern of "5555 5555 5555" was found in the content field of the packet. This pattern was corroborated with network security information sources as being the signature of a reconnaissance tool called Nmap. At this point, the team felt fairly certain that a malicious individual had been gathering intelligence for an attack on JBR Bank's web server; however, the team did not know what the attacker was after or exactly what method he/she would be using. Fortunately, another trace file was also given to the team in the hope that it contained additional information and answers.

Second Trace

Statistical Analysis

With most of the information gleaned from the first capture file, the team turned to the second packet capture for its second trace investigation. Once again, the first type of data to be examined was statistical data, as it gave a broad overview of the trace and allowed the team to choose its next moves. Just like before, the trace file, this time called s2b.lpc, was run through tcpdstat in order to generate summary statistics for review. The resulting output text file, s2b.tcpdstat.txt, is much shorter than last time but still contains enough relevant data to support a few insights and decisions on next action items.
Once again, the protocol labeled "other" accounts for a high amount of traffic; in fact, it represents nearly 84% of all application layer traffic. The team realized that a high amount of traffic unrecognized by tcpdstat could simply be newer protocols being used in the exchange of information; however, with strong evidence of port scanning and vulnerability reconnaissance from the first capture file, the strong presence of unknown protocol traffic is more cause for concern.

Alert Data Analysis

Just as in the first trace, the second capture file was passed through the team's intrusion detection program, Snort, to review any alerts that might be cause for concern. Several of the same switches were used, along with the additional parameter ">snort.stats" at the end of the command, which places the statistics from Snort into a file for later review. Since all of the data needed for review is now in the snort.stats file, the team opened that file and began parsing the information contained within. The team noticed that the number of TCP packets reported here matched that of the output from tcpdstat. Additionally, some of the packets categorized as "other" by tcpdstat are revealed by Snort to be ARP packets, a protocol not detected by tcpdstat. The team also noticed that there were 176 alerts for this capture file, and decided to examine the s2b.alert file generated by Snort.

The first several alerts are concerned with access to a printer through a potentially vulnerable web application. The links provided in the alert supply useful background information; for example, a buffer overflow in an Internet Printing module in the Windows 2000 operating system can possibly allow remote users to acquire root-level privileges. Gaining root would be one of the most desirable achievements an attacker could reach, so any incident related to granting root access is considered serious by the team. However, the attacker attempted this trick through the printing extension three times, suggesting that it may not have worked and that the attacker was simply making sure it was the security of the system, and not his own errors, that caused the attempt to fail. Another alert seen clearly in this log is that of a denial of service, or DoS, attack. This type of attack could potentially bring down JBR Bank's web server, causing the bank to lose public image as well as money due to the unavailability of its banking services. With a few ideas about what types of attacks the intruder may have tried, the team decided to turn to session data to see what connections were made.

Session Data Analysis

Once again, the tool Argus was used on the second capture file in order to detect the traffic and connections going back and forth between the web server and other machines, including those belonging to the attacker. Just like before, the web server (destination for attacks, source for outgoing leaks or transmissions) has the IP address 103.98.91.41. Also, the IP address 95.208.123.64 is seen as the destination for an outgoing FTP connection; this was the same IP address seen attempting to exploit vulnerabilities in the Snort alert data. Furthermore, there are multiple connections through TCP port 60906, an unusually high port number that the team did not recognize at this point in the investigation.
Lastly, the web server successfully establishes a connection with the attacking machine at 20:00:51 on October 1, 2003, as seen by the line indicating the connection on port 6667. With these notable items, port numbers, and timestamps in mind, the team turned to full packet examination once again in order to glean even more details about the attack.

Full Content Data Analysis

The team took a different approach than simply using Wireshark for this trace; rather, they decided to use a tool called tcpflow to narrow the packets down to just those that crossed TCP ports 21, 60906, 1465, and 6667, as these were the ports that became interesting due to previously found evidence. Tcpflow created eight files as a result of this command, all of which follow the nomenclature "sourceIPaddress.portnumber-destinationIPaddress.portnumber". The file named 095.016.003.023.01408-103.098.091.041.60906 provides the first meaningful information for this part of the investigation, as it includes a series of commands. These commands did not tell the team much by themselves, so the team decided to open the response message sent by JBR Bank's web server, which was found by reversing the order of the addresses. A number of files are found on the server, including several that should not be on a bank's web server, most notably nc.exe, a program called netcat that creates TCP connections which can be used as backdoors for attackers.

Knowing that the intruder probably now had an easy way in and out of the system, the team turned its attention to subsequent messages sent between the web server and the attacker's machine. The biggest interest surrounded the flow 103.098.091.041.01465-095.208.123.064.03753, which shows the web server providing the contents of a help file to the remote attacker and an attempted transfer of an executable named "update.exe". Due to information already obtained from JBR Bank, the team knew that a program named PsExec was in some way utilized in the attack. Armed with several types of data, including the full content of several of the packets sent to and from the victim system, the team decided to search for any mention of PsExec using a tool called ngrep. The results from the ngrep search were placed in a file called s2b.ngrep.psexec.txt, which allowed the team to review them at their leisure on a separate machine.

The team noticed several mentions of PsExec in this results file; however, its repeated presence raised more questions than it answered. PsExec's purpose is to establish a connection to a system that an attacker is targeting, typically for the intruder's own malicious purposes; however, it needs administrative rights to do so. The team had already noticed the attacker attempt to gain root through the printer extension vulnerability, but it appeared that the attempt(s) failed. Without any further log files or captures, the team could only hypothesize that the attacker eventually succeeded in compromising the aforementioned vulnerability, or found another similar one that worked.

UNIX First Trace

Physical Properties

Just like the example Windows intrusion, there is another set of data collected from an intrusion of a UNIX system belonging to BRJ, a software company. Although the platform is different, many of the mechanics and goals are the same as in a Windows investigation.
Since it was a remote intrusion, that is to say the system was accessed via the network and not via the keyboard plugged into the computer, network traffic could be captured into a Libpcap-formatted file with tcpdump. This tool captures data from a given interface, in this case the UNIX system's eth0, and records it to disk. However, because there is legitimate traffic in addition to the traffic from the intruder, this file can become massive quickly. Tcpslice was used to make the file smaller by slicing it down to just the applicable pieces. This Libpcap file was delivered to the team for investigation, along with output from Snort such as a portscan log.

The portscan file follows the pattern "date source:port -> destination:port type". Examination of this file shows that an attacker at the IP address 94.90.84.93 issued a portscan against a wide array of ports on a BRJ Software server (102.60.21.3) on September 8 between 2:20:42 and 2:20:45 in the afternoon. This Snort log file, with its 1552 lines (and thus a scan of 1552 ports on the system), simply says that a scan occurred, since it was recorded by the BRJ Software server. However, it does not say how the BRJ Software server responded to the attacker (e.g., whether a port was open).

Basic Content Analysis

Examining the physical properties of the evidence, such as the large size of the Snort portscan log, is the first step of the investigation. By looking into that file and determining that a portscan occurred and the source IP address of the scan, the team is better prepared for the next step, which is basic content analysis. By looking at the Libpcap dump from the network interface, some basic statistics can be gathered using the tool tcpdstat. This tool performs basic information gathering that further narrows the scope for the digital forensic investigators. It shows the start and end times of the capture (roughly 75 minutes long), the total capture size (14.73 MB of data captured from 56377 packets; with the additional information stored, the total Libpcap file comes to 15.59 MB), and the most common protocols. Most data traveled over TCP/IP. At the application layer, the protocols used most were FTP, telnet, and SSH. All of these may be used by an attacker, as they allow file transfer as well as a shell to control the UNIX system. The advantage here is that FTP and telnet are not encrypted, so any data that traversed the network over these application layer protocols was captured in the Libpcap dump and can be analyzed by deep packet inspection (full content analysis). If the attacker used SSH, the information would be encrypted, and full content analysis would yield nothing additional of worth.

Alerts

Because so much traffic used the FTP and telnet protocols, it still would not be easy to jump right into full content analysis. Instead, some additional investigation may yield a way to further narrow the search scope to identify the intruder and what he/she did. This further analysis can be done with Snort. Snort has many rules that are used to identify malicious, strange, or suspicious network traffic. By feeding the Libpcap file through Snort, these predefined rules can be applied to generate files that may identify what the intruder was doing. Using the command snort -c /etc/snort/snort.conf -r s3.lpc -b -l ./SnortOut, Snort generated two files in the SnortOut directory based on the rules prescribed in its configuration file and the contents of the Libpcap (lpc) file.
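Set off for readability, that run and the directory it produces look roughly like the following; the numeric suffix on the binary log is illustrative, since Snort appends a timestamp of its own choosing:

    snort -c /etc/snort/snort.conf -r s3.lpc -b -l ./SnortOut
    ls ./SnortOut
    alert  snort.log.1063020042

The binary snort.log file holds the packets that triggered rules (a result of the -b flag), while the plain-text alert file records the rule matches themselves.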
The one file that will be useful here is the alert file. Snort generated several alerts that should be of concern to the investigative team. The first alert indicates that the intruder attempted to gain access to a privileged user account (such as root), and later entries prove that it was indeed the root account being targeted. To do this, the intruder attempted to exploit a vulnerability in a printing daemon (lprd). Snort rules are useful in that they may include a link to the security site SecurityFocus.com that further explains what was going on. By properly exploiting this security vulnerability, it may be possible to gain access to this privileged account.

The alert was generated by Snort's attack-responses.rules file. The highlighted line in that file shows that Snort raises an alert when the user is changed to root (often user ID 0 on UNIX systems). By gaining access to the root user, much more damage can be done to the system, and the intruder gains much more access to files. The investigators know that the intruder gained root access from the Snort log entry that said "id check returned root", which is created by the highlighted line in the attack-responses.rules file mentioned above. The entry shows that this happened in the time period of the intrusion and from the same IP address as the likely intruder. Additionally, it was sent over telnet, an unencrypted channel, which is something that would not likely come from a BRJ employee, as employees should be using SSH.

There are also many failed login attempts listed in the Snort log. While this is good because it means the person was being locked out, it is still problematic because the log does not show successful login attempts. Because these login alerts eventually stop, it is likely that the intruder finally did gain access to the system. This is further supported by the user ID check mentioned above, which originates on the BRJ Software server on port 2323 (meaning the intruder was able to successfully access the server over port 2323).

Sessions

One possible explanation that would fit the team's findings thus far is that the intruder was able to start a telnet server listening on port 2323 of the BRJ server through the lprd buffer overflow exploit. However, additional investigation must occur to prove this. The additional information could be obtained by looking at the data logged from the telnet sessions. This is done with the tcptrace tool, whose invocation is sketched below. The number of connections between 94.90.84.93 (a system not owned by BRJ and the same address the team has been tracking so far) and 102.60.21.3 (a BRJ system) is what is important here. These connections should not exist; their presence is indicative of a brute force attack on the lpr daemon. The complete/reset status shown in the last column, combined with the packets sent and received by the BRJ system (the two preceding columns), shows that the brute force attack was failing for some time. If it had succeeded, it would look like the first two connections (the legitimate connections between BRJ servers), where the packet count is well above the normal SYN/ACK/RST exchange and an actual session occurs. Unfortunately for BRJ, the attack eventually does work. As seen in the highlighted session 1880, the intruder gains access to the system. Once he/she has access, the BRJ server begins sessions to talk to a remote system. The remote system is a new IP address, 94.178.4.82, and the port number of 21 points to it being an FTP server.
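The connection listing referred to above can be produced with an invocation along the following lines; the -n flag suppresses host and port name resolution, and the output file name is illustrative:

    tcptrace -n s3.lpc > s3.sessions.txt

Each numbered line of the output summarizes one TCP connection: the two endpoints, the packet counts in each direction, and whether the connection completed or was reset.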
The investigators now know that the intruder has compromised the BRJ system, is able to telnet in, gain root access, and FTP out of the system. This is further supported later by traffic originating at the intruder's IP address and successfully accessing the BRJ server over port 514 (which means he/she can access the system and log in with rlogin, rcp, or rsh). More connections appear later. One of them proves that the intruder can SSH to the BRJ server (problematic because, as mentioned previously, these sessions are encrypted, so the investigators are unable to see exactly what was being done through network analysis). What is strange, however, is that the intruder then appears to begin using the BRJ server as a way to attack another server, 94.200.10.71. The first three sessions listed above show that the intruder was able to send a finger request to that machine and then log into it via telnet. This is bad because the activity looks like it is coming from BRJ and not from the attacker who first attacked BRJ. Then, using the BRJ server, the intruder begins to brute force his/her way into the 94.200.10.71 FTP server. Again, this eventually ends, and session 1696 shows a successful session and a large file transfer. After this is the end of the file, but the capture ends strangely as well: traffic is going over port 2323 on the BRJ server, a port that should not be in use. More is going on, so further analysis is needed.

Full Content Analysis

The session analysis focused attention on which sessions need more scrutiny. This attention is given through full content analysis, where the payload of each TCP/IP packet is examined in an attempt to reconstruct the communications between the intruder and the BRJ server. This will allow BRJ to know what may have been taken, modified, removed, installed, or otherwise tampered with on its UNIX system. Starting with sessions 2087 and 2089, the investigators can compare the data from these packets. A diff was done on the output of the tcpflow command that separated these sessions from the rest of the Libpcap capture file. The difference shows what made the attack on the lpr daemon succeed in session 2089 and fail in session 2087. With this information, the investigators know which attempt was the success, so attention can now be focused on the 3879 session.

The tcpflow command extracted two files. The first will be called the "input" file; it contains the commands that the intruder typed into the BRJ system. The other file will be called the "output" file, as it contains what the BRJ UNIX server sent back via telnet in response to the intruder's commands. The best technique here is to pull up both files side by side and, using experience with UNIX and Linux systems, match each output with the appropriate input. For example, the intruder issued commands to gather information about the system, the logged-in user, and the network interface; using the output file, the investigators see exactly the same output that the intruder would have seen after issuing these commands. A basic knowledge of the Linux or UNIX command line is required to align the two files. After doing so, however, the intruder's actions appear to be the following: the intruder created a hidden directory /tmp/.kde, FTPed to 94.178.4.82, and downloaded the binary files that start with "knark" (a rootkit kernel module) to that directory.
The intruder then adds a user "lpd" to the system with the same UID as root and enables rsh access for this user (and, because "++" was specified in the .rhosts file, for all other users as well). After that, he/she does some basic poking around and kills the brute force program that was used to brute force FTP access into the 94.200.10.71 server, likely in an attempt to hide his/her tracks. The intruder later telnets in again, this time with an account for which he/she has the username and password: richard. This access was likely gained through an easy-to-guess (or brute-forced) password. Once in, the intruder executes lpd, which allows root access. After confirming root access by checking the active user ID and username, he/she begins to run commands like netstat and ps to see what is running on the system. Specifically, he or she is looking for a program called datapipe. Using a rootkit, he/she is able to hide the datapipe process.

The intruder repeats this process later by logging in as the compromised account and running /usr/sbin/root lpd /bin/bash, which dumps him/her into a root prompt (which kindly returns "Do you feel lucky today, hax0r"). Once in, he/she attempts to tar up /home and /var/mail and put the archive in the hidden directory /tmp/.kde created earlier. Then, after failing to create an FTP connection to 94.20.1.9, he/she pings that server and sees more than 90% packet loss. The intruder eventually is able to connect to that server via FTP, but the results are unknown from this trace. The intruder logs in again later but, after seeing that the compromised account richard is logged in elsewhere, disconnects, likely to avoid suspicion. That is the last time there is a record of the intruder logging in. However, additional examination should occur on the FTP transfer he/she may have been able to begin. One file shows that the IP address of that FTP server belongs to zeus.anonme.com. This file, created by using the tcpflow command on the Libpcap file to look specifically at port 21 traffic not going to 94.200.10.71 (an extraction sketched below), is the result of the first failed FTP connection to 94.20.1.9 mentioned above. The next file in the sequence shows that the intruder logged in successfully with the username shadowman, switched the transfer mode to binary, and tried to transfer files.tar.gz (the archive of the /var/mail and /home directories he/she created after logging in as richard earlier).

At this point, most of the activity has been reconstructed. However, there is still the matter of when the intruder used SSH. Up until now, that SSH data could not be processed because it was encrypted. Although the SSH channel into the server was secure, if the intruder used an unsecured network resource reaching out of the BRJ machine, the investigators can figure out what was being done. It appears this is what happened: the intruder SSH'd into the BRJ system and, from there, FTP'd out of the server. Because FTP is not encrypted, that data was captured and can be dissected. Based on the server responses, it looks like the intruder logged in anonymously to a Windows NT FTP server. From there the intruder retrieved brutus.pl, net-telnet-3.03.tar.gz, allwords.txt, nat10.tar, john1.6.tar.gz, and datapipe.c. The first three of these were used to perform the dictionary brute force attack on the FTP server mentioned earlier. Because the Libpcap file captures everything, all of these files were recorded.
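The port 21 extraction described above can be reproduced with a command of roughly the following shape; tcpflow accepts a standard pcap filter expression, and the exact expression the team used is an assumption here:

    tcpflow -r s3.lpc 'port 21 and not host 94.200.10.71'

Tcpflow writes one file per direction of each matching TCP connection, named for the source and destination address and port pairs, which is the same nomenclature seen earlier in the Windows investigation.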
Since the datapipe.c file is an uncompiled C source file, it can be read simply by looking at the stream (unlike the tar files, which are binary and would need to be reconstructed). This gives full insight into the whole process of what the intruder was doing.

Comparison Analysis

In performing both the Windows and UNIX evidence gathering activities, the team noticed a few similarities between the two processes, as well as a few major differences. While the two operating systems do not share much in common natively, they both use the same network protocols to interact with other machines and the Internet. Thus, most, if not all, of the network evidence gathered in either investigation was relevant to both; which IP addresses belong to which machines and which ports correspond to which services did not change depending on which operating system was being investigated at the time. This correspondence allowed evidence found in one investigation to be of use in the other, preventing the team from having to reinvent the wheel. Additionally, the use of an unpatched vulnerability allowed the intruder to get his or her foot in the door in both cases. For the Windows machine, it was a buffer overflow in an Internet Printing extension; in the case of UNIX, it was a buffer overflow in the print spooler daemon. Even though the intruder did not have access to an employee's login credentials from the outset, he/she was still able to gain access and perform malicious actions through this weakness. Information available online allowed the team to recognize exactly which vulnerability was used after examining the respective capture files with Snort; if such information was readily accessible, a patch or fix was most likely also obtainable that could have prevented or at least impeded these incidents.

On the other hand, the two investigations yielded a number of dissimilarities, as expected. The Windows network evidence acquisition yielded mostly general details; the team discovered that a port scan was used, which IP addresses were associated with the attack, and that a file was transferred to the victim machine in order to facilitate future access for the attacker. Most of the data examined in the Windows investigation stemmed from the capture files, which were run through a number of tools, including Snort and Argus. Therefore, it was difficult to get a good idea of what exactly the attacker was doing, what he/she was looking for, and how much damage had been done by the time the investigation took place. Conversely, the UNIX investigation provided a surprisingly large number of specifics that the team could use to draw conclusions about the incident. For starters, a record of every command the intruder used through his/her backdoor connection was discovered, allowing the team to follow in the attacker's footsteps as the intrusion happened. Moreover, the specific exploit was identified, along with the affected port numbers and protocols. User credentials created and used by the attacker were also discovered, giving the team solid clues about where to look next on the machine to fully uncover the extent of the incident. This quantity of detail helps the team more accurately assign responsibility for the intrusion and create a response plan that will close the vulnerabilities used and go about bringing the attacker to justice.
Lastly, while the UNIX investigation could be performed using another UNIX machine, the Windows investigation required the use of a Linux machine with a number of prerequisites installed, namely build-essential and the libpcap development package. Such a requirement may prevent a less experienced user from performing the investigation, as a typical Windows user may not be familiar enough with Linux or its dependency and package management system to work through such an examination effectively. A UNIX user, alternatively, would be using the same platform he/she is familiar with to perform the investigation, so the operating system learning curve is non-existent and the user only needs to learn how to use the right tools in the correct fashion. While this point is moot when an experienced forensics team that is well versed in all operating systems is involved, it does create a barrier for aspiring forensic professionals who may be trying to manually troubleshoot and inspect potentially malicious parts of their systems.

Conclusion

Knowledge of how computer systems interact is essential to performing a forensic investigation. Knowing the mechanisms for access to both Windows and UNIX systems meant that the team was ready for an investigation with the correct tools. These tools ranged from packet sniffers, to text editing and comparison tools, to network intrusion detection systems. Each of these tools proved useful from a forensics perspective for analyzing statistical data (summary information), alert data (data that has been flagged as possibly malicious), session data (higher-level information that can reconstruct communication patterns), and full content data (the raw data flowing over a network). These tools were applied to real-world data for both Windows and UNIX systems. The attack on the Windows system was identified as a buffer overflow that eventually allowed access to the system. The UNIX system had a similar fate, with a buffer overflow attack and the ability for the attacker to gain access to that system in order to attack others.

In conclusion, these techniques are useful on virtually any system and may be used for a variety of reasons. These include locating an attacker, determining whether a virus has infected the network, looking for users doing unlawful things or violating policy (such as gaining access to files or copying files off secure systems), and a variety of other purposes. Depending on the level of detail required, the different types of forensic data can be collected and evaluated with these tools in a similar process. Although the intricacies of Windows and UNIX systems differ, the fact that both are networked operating systems means both are vulnerable to attack but can be examined in relatively similar ways.

Works Cited

QoSient, LLC. (2010, March 28). ARGUS - Auditing Network Activity - Getting Started. Retrieved August 5, 2010, from QoSient: http://www.qosient.com/argus/gettingstarted.shtml

Dittrich, D. (n.d.). Tools written/modified by Dave Dittrich. Retrieved August 5, 2010, from University of Washington: http://staff.washington.edu/dittrich/talks/core02/tools/tools.html

Fuentes, F., & Dulal, K. (2005). Ethereal vs. Tcpdump: a comparative study on packet sniffing tools for educational purpose. Journal of Computing Sciences in Colleges, 20(4), 169-175.

Goebelbecker, E. (1995). Using grep: Moving from DOS? Discover the power of this Linux utility. Linux Journal (18es).

Laurie, B. (2004). Network Forensics. Queue, 2(4), 50-56.
Liskov, B., Curtis, D., Johnson, P., & Scheifler, R. (1987). Implementation of Argus. ACM Symposium on Operating Systems Principles, 111-122.

Ostermann, S. (2003, November 4). tcptrace - Official Homepage. Retrieved August 5, 2010, from tcptrace: http://tcptrace.org/

Roesch, M. (n.d.). About Snort. Retrieved August 5, 2010, from Snort: http://www.snort.org/snort