Digital ForensicS magazine
The Quarterly Magazine for Digital Forensics Practitioners
ISSUE 01

INSIDE
/ Detailed Forensic Examination Procedures
/ How secure is your personal data?
/ Expert Witness Advice
/ Facebook Forensics Tool Development
/ Starting out – A beginner's guide to digital forensics
/ LATEST News – Windows 7 launches; the ITC Workshop
/ Book Reviews – Real Digital Forensics; iPhone Forensics
/ Special Offer – 20% off Oxygen Forensic Suite
Competition! Win an Archos 405 Portable Media Player

LEAD FEATURE
THE WEB – HOW MUCH DO WE REALLY KNOW?
Blow-by-blow network analysis of Web requests, by Tim Watson

TR Media
/ EDITORIAL
Digital Forensics Magazine is a quarterly magazine, published by
TR Media Ltd, registered in the UK. It can be viewed online at:
www.digitalforensicsmagazine.com
Welcome to the first issue of
Digital Forensics Magazine…
The response the DFM team has received since first
announcing the magazine, back in June 2009, has been
nothing short of amazing. Our call for authors, thanks
to our friends at ISC(2), brought over 600 enquiries to the Web
site along with dozens of suggestions for articles and product
reviews. This unprecedented response, along with comments
such as, ‘at last’, ‘long overdue’ and ‘just what our industry
needs’ convinced us that this was a worthwhile project. So
without further ado, we are proud to bring you this inaugural
issue of Digital Forensics Magazine.
In this inaugural issue, we have selected a broad range of
articles aimed at both the novice and the established practitioner.
We really hope you enjoy reading all of them, including Jeff
Bryner’s fascinating practical experiences related in “The Diary of
pdfbook: A Tool for Facebook Memory Forensics”.
In each issue, the Legal Department will focus on the
interface between law, cybercrime, and digital forensics,
including such topics as “the fine art of serving as an expert
witness”. This will be especially challenging as cybercrime
transcends international borders, and therefore involves many
different legal systems and jurisdictions. As we already have
an international readership, we shall aim to cover relevant
legal developments in a variety of countries, although the UK
and the USA are likely to predominate. This is one domain in
particular where we can all help each other to identify and
solve emerging problems as a community.
Our News Desk brings you the latest news from around the
world and in future issues, our Jobs section will allow readers
to see who’s hiring in their particular fields. News is already an
integral part of our website, as you might have already seen,
but we will also be introducing the Job Board service just as
soon as we can. If you have already visited our site, you’ll
know that we already cover many news items from around the
world, but you might not know that we are also broadcasting
on other media formats, such as Twitter, Mippen (via iPhone),
and the more traditional RSS. We also have an iPhone
application being developed especially for DFM.
What of our plans for the future? Well, we expect to add a
'From the Lab' department, where we will carry out product
reviews and detailed technical analysis. We have already
established links with the leading vendors of forensic tools
and will use these to help you solve problems you might be
encountering. Everyone on the team really wants to hear
from you with suggestions about what you would like to see
in your magazine.
Enjoy the magazine and let us know what you think.
/ The DFM Team
Editorial Board
Tony Campbell, Roy Isbell, Tim Watson,
Moira Carroll-Mayer, Alastair Clement
Acquisitions
Roy Isbell
Editorial Assistant
Sharon Campbell
News Desk
Matt Isbell
Marketing
Matthew Rahman
Production and Design
Matt Dettmar (Loud Vision Ltd)
Contributing Authors
Chris Bilger, Jeff Bryner, Tony Campbell, Moira Carroll-Mayer,
Bill Dean, Dr Gavin Manes, Peter Membrey, Dr Tim Watson, Eric
Fiterman, Noemi Kuncik, Gary Hinson, James Johnson, Justin
Gillespie, Elizabeth Downing, Michael Harvey, Andy Harbison,
Robert Slade, Matt Isbell
Technical Reviewers
Tony Campbell, Dr Tim Watson, Dr Gavin Manes, Tarak Modi,
Moira Carroll-Mayer, Scott Zimmerman, Roy Isbell, Darin Dutcher
Contact Digital Forensics Magazine
Editorial
Contributions to the magazine are always welcome; if you are
interested in writing for Digital Forensics Magazine or would
like to be on our technical review panel, please contact us on
editorial@digitalforensicsmagazine.com
Alternatively you could telephone us on:
Phone: +44 (0) 203 2393666
News
If you have an interesting news item that you’d like us to cover,
please contact us on: news@digitalforensicsmagazine.com
Advertising
If you are interested in advertising in Digital Forensics Magazine
or would like a copy of our media kit, contact the marketing team
at: marketing@digitalforensicsmagazine.com.
Subscriptions
For all subscription enquiries, please visit our website at
www.digitalforensicsmagazine.com and click on subscriptions.
For institutional subscriptions please contact our marketing
department on marketing@digitalforensicsmagazine.com.
Copyright and Trademarks
Trademarked names may appear in this magazine. Rather than
use a trademark symbol with every occurrence of a trademarked
name, we use the names only in an editorial fashion and to the
benefit of the trademark owner, with no intention of infringement
of the trademark.
Digital Editions
Our collaboration with Yudu allows us to reach our readers in
the most carbon neutral way possible and is proving to be very
successful. “The YUDU Publishing system was designed so that
each publication has the lowest possible electronic footprint,
whilst maintaining the highest quality viewing experience.
However, as hard as we try, we can’t be completely energy free
- the servers that host YUDU editions consume energy - so the
remaining energy consumption is offset with Carbon Clear. They’re
a leading carbon management company who invest in projects
that improve living standards in developing countries, whilst
providing global climate benefits with clean energy projects and
reforestation initiatives.” Yudu, 2009
/ CONTENTS
/ DIGITAL FORENSICS MAGAZINE
ISSUE 01

REGULARS
/ NEWS ... 06
Rounding up some of the most interesting news items affecting the world of digital forensics
/ COMPETITION ... 14
Win an Archos 405 Portable Media Player
/ BOOK REVIEWS ... 48
In this issue we look at Real Digital Forensics and iPhone Forensics
/ 360° ... 50
Introducing our reader feedback section...

FEATURES
/ ANATOMY OF A WEB REQUEST (LEAD FEATURE) ... 08
Tim Watson explores the forensic implications of what happens when you connect to a Website
/ Data Erasure – Fact or Fiction? ... 15
Pete Membrey discusses the complexities of erasing data from magnetic storage devices
/ FORENSIC EXAMINATION OF A COMPUTER SYSTEM ... 21
Gary Hinson and Robert Slade take us through the procedures for investigating a computer system
/ Backup Tape Forensics is Here to Stay ... 39
Gavin Manes, et al. discuss the significance of magnetic tape as a potential goldmine for the investigator
/ BRIEF INTRODUCTION TO COUNTER-FORENSICS ... 43
Noemi Kuncik and Andy Harbison take a trip into the sinister world of counter-forensics

LEGAL
/ EXPERT WITNESS REPORTING ... 30
Do you know what's expected of you in the courtroom? Eric Fiterman offers advice on how to deliver a thorough and objective review of the evidence
/ IMPACT OF FEDERAL RULES ... 34
Bill Dean discusses the impact of the US Government's Federal Rules on e-discovery

TECHNOLOGY
/ THE DIARY OF A PDFBOOK ... 36
Jeff Bryner takes us on a journey of e-discovery in Facebook with this fascinating account of developing his own forensic software tool
/ NEWS
Microsoft Windows 7 Ships
On 22nd October 2009, Microsoft shipped
the latest version of its PC operating system,
Windows 7. Much anticipated by the market,
especially after the total failure of Windows Vista
to displace its predecessor Windows XP, Windows
7 is exciting both Microsoft and the consumer.
In a ploy to compete with free solutions such as
FreeBSD and Ubuntu Linux, this new Microsoft
offering has been scaled back to run on the most
modern low-powered netbooks. It doesn’t come preloaded with
all the bloatware Vista had installed by default. To get standard
applications such as Mail, Movie Maker, and Photo Gallery
you’ll need to download the Windows Live Essentials pack from
the WindowsLive.com Web site.
So, great news all round, right? Well, yes, probably. But what
does such a major new release mean for the digital forensic
examiner? For certain it will bring headaches while you wait
for your favourite forensic examination tools to catch up (or be
redeveloped) and you’ll have to get up to speed pretty quickly to
understand and exploit the new information stores, such as Jump
Lists. Updates to Internet Explorer 8 introduce new capabilities
to combat malware and phishing, with features like SmartScreen
being used to warn a user when a Web site attempts to download
a malicious program. However, IE 8 introduces a new feature
called InPrivate Browsing, similar to Firefox’s Private Browsing,
where no records of any transactions on the Web are stored.
Mozilla says it best: “In a Private Browsing session, Firefox
won’t keep any browser history, search history, download history, web form history, cookies, or temporary internet files.” So
there goes another evidence trail you’ll need to find a replacement for. As far as forensic examination software goes, the usual suspects – EnCase and FTK – both seem to be able to support
Windows 7. More specifically they support the Windows 7 file
system which is still good old NTFS, since the big change known
as WinFS (Windows Future Storage), which would have had
a massive impact for examiners, never quite made it into this
release. WinFS may appear in a subsequent service pack, but
such a radical change to the operating system will need a lot of
planning and cause a lot of pain to OEM software developers.
The conclusion here is that Windows 7 won’t cause serious
problems for forensic examiners. However, it will undoubtedly
allow forensic specialists on the side of the defence to use
the new system to cast doubt on tried and tested Vista and
XP procedures. The sooner Windows 7 can be proven beyond
reasonable doubt to make no significant difference to the
evidence collection process, the sooner forensic examiners
can feel comfortable using it to support a case.
/ Jumping into Jump Lists
Jump Lists could well be a goldmine. These are brand new,
context sensitive, quick-start menus that allow you to
launch applications with specific functionality preselected.
For example, using the Internet Explorer 8 (IE 8) Jump List,
you’ll see frequently viewed websites where clicking on one
will launch IE 8 and take you straight to that site. Using the
Jump List for Windows Media Player 12, you can jump right
into playing your favourite song without having to start the
application first. Basically, a Jump List allows you to use a
single click to get to your preferred end state, saving time and
thus increasing productivity.
But where is all this Jump List data held? It’s stored in two
places: on the file system and in the registry. Using the menu
to flush the Jump List (by unpinning the program or option) is
not enough to remove the data from the system. If you want
to see what was once in a Jump List, but has been removed
through the interface, just take a peek into the registry.
For example, if you suspect your subject has been editing
images in Microsoft Paint, taking a look in HKEY_CURRENT_
USER\Software\Microsoft\Windows\CurrentVersion\Applets\
Paint\Recent File List shows exactly which images were edited.
Our own testing on the Beta version we’ve been playing with
showed that even deleting the registry key still left some
residual information on the drive under the AppData\Roaming\
Microsoft\Windows\Recent\AutomaticDestinations directory.
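For investigators who want to script this check, here is a minimal Python sketch (assuming a Windows 7 system and Python's standard winreg module); the registry path is the Paint example given above, and the AutomaticDestinations path is expanded from %APPDATA%. It is a quick triage aid, not a parser – the .automaticDestinations-ms files themselves are an undocumented binary format.

import os
import winreg

# List Paint's recent files from the registry key discussed above.
key_path = (r"Software\Microsoft\Windows\CurrentVersion"
            r"\Applets\Paint\Recent File List")
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
            print(f"{name} = {value}")      # e.g. File1 = C:\...\photo.bmp
            index += 1
        except OSError:                     # no more values to enumerate
            break

# Jump List files that survive on disk even after the menu is flushed.
dest_dir = os.path.expandvars(
    r"%APPDATA%\Microsoft\Windows\Recent\AutomaticDestinations")
for filename in os.listdir(dest_dir):
    print(filename)                         # *.automaticDestinations-ms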
Digital Forensics – The Future has Begun
A Cybercrime and Forensics Workshop hosted by the Information Technologists Company (www.wcit.org.uk) was held at the
Information Technologists Hall on 12th October 2009. The ITC
was granted livery status in 1992, becoming the 100th Livery
Company of the City of London.
The ITC has over 650 members coming from all sectors of the
ICT field and provides a neutral meeting ground for discussion
of issues that are central to both the profession and the City of
London. The Cybercrime and Forensics Workshop was arranged
by the ITC Security Panel as the first in a series of workshops to
explore the challenges of this emerging and complex area.
The workshop included presentations by Dr Steve Marsh
from the Office of Cyber Security and Andrew Rennison, the
Forensic Science Regulator. In addition presentations were
given by the Metropolitan Police Central e-crime Unit (PCeU),
the Forensics Industry, Academia, and the International Information Systems Security Certification Consortium (ISC(2)).
Representatives of government departments including the
Communications-Electronics Security Group (CESG) attended
the limited invitation-only event. Other delegates included
vendors and forensic service providers, senior academics,
law enforcement organisations, specialist interest groups,
and commercial training organisations. The purpose of the
workshop was to explore ways in which all these groups can
cooperate to solve the problems of our emerging profession.
The workshop was coordinated by Roy Isbell of the ITC
Security Panel, who opened the proceedings by reviewing
the current cybercrime and forensic landscape, and explained
how convergence was increasing the complexity met with by
law enforcement and forensic practitioners. The workshop
explored issues surrounding qualifications, standards, procedures, accredited laboratories, and approved products.
Dr Steve Marsh provided an update on the work being carried
out since the release of the Cyber Security Strategy of the United Kingdom and current ideas about measuring success. Quality standards and procedures were addressed by Andrew Rennison, who discussed the current status of standards work and
the need for a single comprehensive standard, compatible with
existing international standards and auditable for compliance.
Gordon Homes of the PCeU gave a detailed and informative
update on how they are meeting the challenge, with examples
of recent successes and the future challenges they face. Mike
Dickinson from Micro Systemation raised the issue of product
validation and the need for an independent testing organisation
similar to the National Institute of Standards and Technology
(NIST) in the USA. Dr Tim Watson from De Montfort University
reviewed in depth the 83 undergraduate degree courses listed
on the Universities & Colleges Admissions Service (UCAS) Web
site for forensic computing, and discussed the varying structure
and quality of these courses. The final topic was provided by
John Colley of ISC(2), who talked about how special interest
groups can help. Looking at the current worldwide membership
of leading organisations in the security field, he said that no
one really knows how many are practising the discipline, or the
level of their qualifications or competence.
The discussions that took place during the workshop led to
agreement on some problem areas, and identified potential
resolutions that those attending took away to investigate further. The workshop was deemed such a success that follow-up
events are already being planned by the ITC Security Panel, to
focus on specific subjects and bring together those who can
influence and guide the profession in future.
/ NEWS ROUND-UP
Nmap v5
On 16th July 2009, Insecure.org announced the latest release
of network security analyser, Nmap (network mapper), which
it describes as “Open Source software designed for network
exploration or security auditing”. Insecure.org explains how
the program now generates network inventory, manages
server upgrade schedules, and monitors host or server uptime.
Nmap runs on all major operating systems, and v5.00 includes
approximately 600 improvements, of which the following four
are cited as the most important:
• Nmap Scripting Engine (NSE) – a powerful and flexible
scripting tool that allows the user to automate network tasks.
• Ncat – Insecure.org calls this their “Swiss Army Knife”. It
allows more flexible and easier data transfer, redirection, and
debugging than previous versions.
• Ndiff – using this tool, users can directly compare scan results.
Scans can be saved into an archive and changes can be tracked.
• Improved performance – the user can now specify the scan
rate, thus bypassing Nmap’s congestion algorithms and
effectively making scans faster.
This release has been termed the most important since 1997.
Nmap.org (www.nmap.org) reported that Nmap has been named
“security product of the year” by many sources such as Linux
Journal, Info World, and Codetalkers Digest. Moreover, it has
featured in eight films including The Matrix Online and Die Hard 4.
Teen Cyberwarriors
The Government is set to start a competition in 2010 with the
aim to build up the UK’s cybersecurity ‘army’. Teenage ‘techies’
are being encouraged to enter the contest with promises of
salaries that could be as high as six figures. Organisers are
looking to replicate a previous contest run in the USA, where the
aim was to find 10,000 new cyberwarriors. The contest will run
as a series of tasks that include password extraction, website
capture, and defence against cyber attacks. As previously
mentioned, the prizes for the lucky winners will be employment
with the government, but they also include places on courses
and expert mentorship from sponsors, SANS Institute
scholarships, bursaries, and work experience.
US winner Michael Coppola gained extra credit for hacking
into the contest’s own scoring system and awarding himself
10,000 extra points.
AccessData releases Forensic Toolkit 3.0
AccessData has released the Forensic Toolkit 3.0 (FTK 3.0).
Expectations were low after the disappointment of FTK 2.0, but
industry pundits are now hailing the improvements in FTK 3.0
as enough to make it a contender again. Many improvements
have been made to the toolkit, including faster performance
on even the weakest machines, indexing that permits rapid
searching, and distributed processing to help you ‘leverage
CPU resources from up to 4 computers.’
Version 3.0 also offers a variety of new features. RAM
analysis is now possible, allowing the user to sift RAM captures
for passwords, HTML pages and more, search memory strings,
and list all processes. A compelling new feature is its ability to
properly analyse the Apple OS X operating system.
/ LEAD FEATURE
ANATOMY OF A
WEB REQUEST
The Process, Pitfalls and Evidence Trail
by Tim Watson, De Montfort University
/ INTERMEDIATE
It’s something we all take for granted. Browsing the Web
seems so simple that it’s almost as though the information
we’re accessing is sitting on our own computer, waiting for
us to access it. So, how does clicking on a link or typing in a
Uniform Resource Locator (URL) result in the page we request
being displayed? And where in the chain of events are there
opportunities for others to manipulate the process? Whether
you’re a home user concerned about privacy and security, a
malicious attacker determined to undermine both, or a forensic investigator working hard to separate fact from fiction, a
good understanding of this chain of events, the weak links,
and the trail of evidence created when surfing the Web is your
best weapon. In this article, we will follow the journey of a
single Web request from your browser to a Web server and
back again. On the way we’ll encounter pirates and hijackers,
sinister African businessmen, deadly Sirens calling us to our
doom, insidious poisons and Greeks bearing gifts. Hang on to
your hats, it’s going to be a bumpy ride.
We start our journey in the safe harbour of your personal
computer. Against the odds, you’ve managed to keep your
computer free from the myriad digital parasites that feed off
your data and which could interfere with your actions. This is
an important point. If the machine were to be infected with
malware it would provide the first opportunity for an attacker
to manipulate or fabricate your actions. But more of this later.
For now, let’s assume that your computer is clean. (We’ll be
making quite a few assumptions in this article; but rather than
list all the subtleties of each protocol and data structure we’ll
just describe a typical Web page request).
Let’s begin by opening up a browser and typing in a URL –
for the sake of argument we’ll use http://www.thedarkvisitor.com/. Whether you’re using Firefox, Lynx, uzbl, Safari, Internet
Explorer, or any other browser, when you enter this URL the program knows that it needs to send a Hypertext Transfer Protocol
(HTTP) GET request to the associated Web server and that it will
expect an HTTP response in return. Like any other application
program, it uses the underlying operating system to do as much
of the work as possible. Of the many system calls offered by
the operating system’s Application Programming Interface (API)
– either offered directly by operating systems such as Linux or
Mac OS X, or via functions in Dynamic Link Libraries (DLLs) if
you’re using Microsoft Windows – there is a collection of calls
that interact with the operating system’s networking subsystem. The browser uses these to prepare and transmit its GET
request, and then to wait for and receive the response.
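To make this concrete, here is a minimal Python sketch of the exchange just described: ask the operating system for a TCP connection, send a bare GET request, and read back the response. The host name is the one used throughout this article; any reachable HTTP site would behave the same way.

import socket

# Open a TCP connection to the web server's port 80, send a minimal
# HTTP GET, and read until the server closes the connection.
host = "www.thedarkvisitor.com"
sock = socket.create_connection((host, 80), timeout=10)
request = (f"GET / HTTP/1.1\r\n"
           f"Host: {host}\r\n"
           f"Connection: close\r\n\r\n")
sock.sendall(request.encode("ascii"))

response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:                 # empty read: server closed the connection
        break
    response += chunk
sock.close()
print(response.split(b"\r\n", 1)[0])   # status line, e.g. b'HTTP/1.1 200 OK'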
One reason why a browser feels so simple to use is because
the user doesn’t have to explicitly connect to the Web servers
being accessed. A URL just feels like a document name and
accessing it seems as easy as accessing a document on your
computer’s hard disk. The complicated networking is hidden
from the user by the browser, which itself is part of a massively
distributed system.
Superficially, there is little difference between a file browser
that searches for and fetches files on your computer and a Web
browser, which does the same but is not limited to files on your
computer alone. But for a file browser to work it just needs
to run its various program functions on one computer. A Web
browser needs to ask programs running on other computers
around the world to retrieve files on its behalf. At any one time
there are millions of Web browsers and Web servers talking to
each other across the Internet, joined by the common language
of HTTP. In effect, there is one large software application that
spans the globe and part of it is running on your computer.
In an application program that just runs on one computer,
information is passed back and forth between the program’s
functions by putting it in areas of shared memory that all the
functions can read from and write to. In a distributed system, the
same information flows are needed but the computers involved
don’t share the same memory and so they need another way
of exchanging information. They do this by encapsulating the
application information in packets that are sent from one part
of the system to another over a network. So, our browser needs
to package up its GET request into one or more packets and ask
the operating system to send them to the Web server. But where
exactly should it send the data and how will the response find its
way back? We need a destination and a return address.
/ Location, Location, Location
In fact, there are six addresses involved. To see why, it’s best
to consider a real-world situation that has many parallels with
accessing things over the Internet. Let’s consider delivering a
package to someone living in a block of flats.
We can see from Figure 1 that delivering a package from
the sender in one building to the recipient in another looks
easy! But nothing is ever really so easy: which building? which
street? which city? which country? When the delivery arrives
at the intended building, which buzzer should be pushed to
ensure that the correct recipient accepts the package? And if
something needs to be sent in return, where should it go?
Figure 1. Delivering a package
We use a postal address to get the package to the right
building and a flat number to identify which buzzer to press.
Although we only need to supply the destination address, the
mail service will transfer our package from pickup van to one or
more sorting offices, then perhaps on to an aeroplane or train
and eventually into a delivery van. Each step of this journey
requires that the package be put in a container and sent on to
the next stage. Consequently the package is repeatedly boxed,
addressed to the next stage and transferred to a differently
addressed container as it continues on its way. Even when it
arrives at the correct building, if the wrong buzzer is pushed, or
if the intended recipient isn’t listening for the buzzer, then the
package will not reach its destination.
So, we need a postal address and a flat number together with
a correctly addressed container for the first hop of the journey.
Three addresses: one for the building, one for the person in the
building, and one for the next stage of the journey. If we require
a reply, then we need another three addresses to ensure that
the response can find its way back again. Consequently, in our
analogy, as in a Web request, we require six addresses.
Let’s see how these six addresses relate to our Web request.
The request’s journey from your computer to the Web server
takes it through several intermediate devices (such as computers and routers). There are 18 such devices between my computer and http://www.thedarkvisitor.com/. For the destination
address, we have to specify which computer on the Internet to
send it to and which of the programs listening for network input
on that computer should be given the request. These are the
IP address and the port number, respectively. Armed with the
destination IP address, each device along the way can determine the next hop on from itself and can forward the packet by
Figure 2. Data encapsulation – application data (e.g. a DNS request) sits inside a segment, whose header carries the port numbers; the segment sits inside a packet, whose header carries the IP addresses; and the packet sits inside a frame, whose header carries the MAC addresses.
putting it inside another package, called a frame, which uses
another address – a Media Access Control (MAC) address – to
uniquely identify the network interface of the next hop device.
So, we need three addresses: port number, IP address and the
MAC address of the first step of the journey. And we’ll need
another three addresses to ensure that the response gets
back to us. Your operating system will take care of the return
addressing, and it will also supply the MAC address of the first
step of the journey for us, as soon as we tell it the destination IP
address; but we don’t have that yet. To get it, your browser will
need to use another important distributed system. This is the
Domain Name System (DNS).
/ Internet Telephone Directory
Your browser has to construct another request – a DNS request
– to ask a DNS server to resolve the name of the Web server into
an IP address. To understand DNS it’s best to think of it as working like an online telephone directory. You look up the name of
the person you want to contact and the directory supplies you
with the number that you need to use to contact them over the
(telephone) network. On the Internet, it’s IP addresses rather
than telephone numbers that you need.
The IP address of at least one DNS server has been configured on your computer. This happens either statically or dynamically when the computer is powered up, typically via a Dynamic
Host Configuration Protocol (DHCP) request for a DHCP server
to give it an IP address, DNS server addresses, default gateway,
and a variety of other useful information. Since most personal
computers are configured to use DHCP, this is another common point at which an attacker can strike, supplying details of
a rogue DNS server or gateway that will resolve the Web server
names you send into the IP addresses of malicious Web servers,
unrelated to the intended destination.
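In code, the lookup is a single call. A minimal Python sketch of the resolution step the browser performs:

import socket

# Ask the operating system's resolver (and so, indirectly, the
# configured DNS server) for the addresses behind a host name.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.thedarkvisitor.com", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])            # each IP address the name resolves to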
Let’s recap. We want to visit http://www.thedarkvisitor.com/
and your browser needs a destination IP address. To get this,
it has to send a request to a DNS server, so that the name
of the Web server we want to access is resolved to its IP address. The IP address of one or more DNS servers is stored
on your machine and the browser retrieves the first of these.
DNS servers listen on User Datagram Protocol (UDP) port 53,
so the browser constructs a UDP segment addressed to port
53. Hang on, I hear you cry, a segment? We’ve had packets
and frames and now a segment, what do they all do? Well, a
segment is simply another type of digital package: segments
contain application data and are addressed to ports (i.e.
they are addressed to a process on the destination computer with a return address of a process on the originating
computer). As Figure 2 shows, segments are placed inside
packets, which use IP addresses to identify which devices
are communicating. A packet is repeatedly placed inside,
and then removed from a succession of frames as it travels
from device to device, making its way from its source to its
final destination.
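To see the layering at work, the sketch below builds the application data of a standard DNS query by hand – the same fields shown in Figure 3 – and hands it to UDP on port 53; the operating system then supplies the packet and frame around it. The 8.8.8.8 address is only a stand-in for whichever DNS server your machine is actually configured to use.

import socket
import struct

# DNS header: transaction ID, flags (standard query, recursion desired),
# one question, no answer/authority/additional records.
txid = 0x2963                          # matches the transaction ID in Figure 3
header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)

# The question: length-prefixed labels, a zero byte, type A (1), class IN (1).
qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                 for label in "www.thedarkvisitor.com".split("."))
question = qname + b"\x00" + struct.pack(">HH", 1, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(header + question, ("8.8.8.8", 53))   # segment handed to UDP
reply, server = sock.recvfrom(512)
print(f"{len(reply)}-byte DNS response from {server[0]}")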
As we follow the journey from browser to Web server and
back again, we’re concentrating on three questions: How
does the entire process work? What opportunities are there
for an attacker to subvert the process? Where might we find
forensically significant artefacts? We’ve mentioned that an
attacker might use malware to manipulate the Web request
process on the originating host – every forensics investigator worth her salt knows about the Trojan defence and the
importance of showing that submitted evidence takes into
account any malware on the system. Of course, there will
be a number of pieces of information on the host to help
reconstruct what has gone on there. But now, in pursuit of
an IP address for the destination Web server, we are about
to send a DNS request inside a segment, in a packet, and
enveloped in a frame, over the network (see Figure 3). Just
as every forensic investigator knows where to look for evidence on a seized computer, we also need to understand
the forensic artefacts created by network activity that exist
outside the originating computer. If an attacker can trick
your device into wrapping its packet in a frame addressed
to his computer, he can intercept the packet, look inside,
and modify it if necessary before forwarding it on to its
destination. He can also spoof a reply. If we don’t know
where to look for evidence, how will we be able to counter
a defence based on the malicious interception of network
traffic? From the safety of your computer, we’re about to
enter the dangerous, coastal waters of your LAN and the
wild seas of the Internet.
Frame 1 (82 bytes on wire, 82 bytes captured)
Ethernet II, Src MAC: 00:11:d8:0c:0a:36, Dst MAC: 00:22:3f:5c:60:66
Internet Protocol, Src IP: 192.168.1.2, Dst IP: 192.168.1.1
User Datagram Protocol, Src Port: 52598, Dst Port: 53
Domain Name System (query)
Transaction ID: 0x2963
Flags: 0x0100 (Standard query)
Questions: 1
Answer RRs: 0
Authority RRs: 0
Additional RRs: 0
Queries
www.thedarkvisitor.com: type A, class IN
Figure 3. Example DNS query captured using the Wireshark packet analyser
Your browser has constructed a packet ready to be sent to
the DNS server. Your operating system inspects the destination IP address and tries to match it against an entry in its
routing table – a data structure held in RAM that tells the
operating system which network interface to use and which
device to send it to for the next stage of the journey. If it
can’t find a match, an error will be generated. Consequently,
the routing table is forensically significant as it determines
what a device will do with a packet, assuming the host
firewall doesn’t interfere in any way (and thus the firewall
configuration is also important when reconstructing events).
On a typical PC, a route is added to the routing table when a
network interface is configured or when a gateway is defined
to allow access to external networks. However, most of the
intermediate devices in a journey across the Internet will be
routers, and they have dynamic routing tables that change
as the routers talk to each other using a variety of routing
protocols.
/ Ettercap
One of the standard weapons in the hacker’s armoury is
Ettercap, a powerful program that performs Man in the Middle
(MITM) attacks and extracts passwords from network traffic.
Ettercap uses ARP poisoning to redirect switched traffic to
an attacker and can retrieve passwords from many protocols,
including HTTP, IMAP 4, POP, FTP, RLOGIN, SSH1, TELNET, VNC,
ICQ, IRC, MSN, NFS, SMB, MySQL, X11, BGP, SOCKS 5, LDAP,
SNMP, HALF LIFE, and QUAKE 3.
Ettercap can inject, remove and modify packet contents
based on selection criteria supplied by the attacker, and it
can also be used to sniff SSL secured data by presenting
a fake certificate to victims. An attacker can use Ettercap
to passively monitor network traffic and to collect detailed
information about hosts on the network (operating system
used, open ports, running services, etc.). In active mode, it
is easy to kill any network connection by selecting it from a
list, perform DHCP and DNS spoofing, view a victim’s Web
browsing activity in real time, and much more. As a network
security-monitoring tool, Ettercap can be used to spot ARP
poisoning attempts and whether anyone is sniffing every
packet on the network.
Ettercap is open source and freely available for all major
operating systems including Linux, Windows, and Mac OS X.
See the official website at http://ettercap.sourceforge.net/.
Subverting these protocols, or the protocols used by switches to construct loop-free paths through networks,
is another way for an attacker to redirect packet flows and
leap on board the convoy of data, cutlass in hand. Network
logs, if they exist, can reveal both the protocol conversations
and any attempts to subvert them, and switch and router
configurations can add useful corroborating evidence. Most
network devices have the ability to log network traffic and
events and it is good practice to monitor and log network
traffic as it travels across a network. Two standard, opensource tools are commonly used for this – tcpdump (http://
www.tcpdump.org/) and Wireshark (http://www.wireshark.
org/) – and becoming competent in their use is an essential
skill for all network forensic investigators.
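The routing table itself is easy to collect during an investigation. A minimal Linux sketch follows (route print on Windows, or netstat -rn on most systems, exposes the same information); /proc/net/route stores addresses as little-endian hexadecimal.

import socket
import struct

# Print each route as: interface, destination network, gateway.
with open("/proc/net/route") as route_table:
    next(route_table)                        # skip the header row
    for line in route_table:
        iface, dest, gateway, *rest = line.split()
        dest_ip = socket.inet_ntoa(struct.pack("<I", int(dest, 16)))
        gw_ip = socket.inet_ntoa(struct.pack("<I", int(gateway, 16)))
        print(f"{iface}: {dest_ip} via {gw_ip}")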
So, if everything is configured correctly, our DNS request’s
destination IP address will match a row in the routing table,
and the operating system will know which network interface
to use and which device to send it to. The operating system
must wrap the packet in a frame and address it to the correct
device. To do so it needs the relevant MAC address; and we
need to get to grips with yet another protocol. What started
as a simple Web request is rapidly turning into a cascade of
interrelated actions, each one with weaknesses that can be
exploited and a variety of associated pieces of evidence. Let’s
continue on our journey.
/ Asking for Directions
Within your operating system is another RAM-based data
structure called the ARP cache. The Address Resolution Protocol (ARP) is used to enable computers to find out which MAC
address should be associated with a given IP address. Recent
results from ARP requests are stored in the ARP cache.
From an attacker’s perspective, ARP is a wonderful protocol: a
computer broadcasts a request asking for the MAC address associated with a particular IP address and whoever responds to
the requesting computer is believed by it. You can even send a
computer a new MAC address for a given IP address at any time,
and it will just believe you. An analogy would be a banking protocol for paying your credit card bill. You walk into a bank, grab
a paying-in slip, and ask in a loud voice what account number
to use to send money to your credit card company. Whatever
gets shouted back – you don’t even check to see if it came from
the shifty looking character with a below average leg count, eye
patch, idiosyncratic hand replacement, and pet parrot – you
believe the reply and promptly fill out the slip.
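The ARP cache is just as easy to inspect. On Linux it is exposed through /proc/net/arp, as in the sketch below (arp -a gives much the same view on most operating systems); a gateway IP address suddenly paired with an unfamiliar MAC address is precisely the poisoning symptom described next.

# Dump the local ARP cache on Linux: the data structure that an
# ARP poisoner corrupts on its victims.
with open("/proc/net/arp") as arp_cache:
    next(arp_cache)                          # skip the header row
    for line in arp_cache:
        ip, _, _, mac, _, iface = line.split()
        print(f"{ip} is-at {mac} on {iface}")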
Hacker-friendly programs such as Ettercap exploit this protocol vulnerability to poison the ARP caches of a victim device
and its gateway so that all traffic in both directions is redirected through an attacker’s device (see Figure 4). This Man in the
Middle (MITM) attack can be used to make it look as though
the victim has been doing things that he shouldn’t, or to gain
unauthorised access to systems or data. The deadly Siren
calls that lure unwary traffic towards malicious servers can be
identified as suspicious ARP requests and responses within
network logs, as in Figure 5, or in the contents of ARP caches
(before Ettercap cleans up after itself ). This information can
be very useful when determining whether the evidence is
consistent with this type of network tampering.
Figure 4. ARP poisoning in action (network diagram) – before and during the attack: victim 146.227.150.45 (MAC 00:19:d1:a4:22:f4), gateway 146.227.150.254 (MAC 00:14:f6:9a:29:f2), attacker 146.227.150.46 (MAC 00:19:d1:a4:37:5f).
Figures 4 and 5 show Ettercap ARP poisoning in action.
Figure 4 illustrates the network configuration, and Figure 5
lists the packets captured using tcpdump. Working through
the packets shown in Figure 5, the first four show the attacker
(IP address ending with 46) requesting the MAC addresses
of the victim and gateway, and the associated responses.
The next eight packets are malicious ARP replies not associated with any specific request. They target the victim and
gateway, and tell each of them that the other’s MAC address
is 00:19:d1:a4:37:5f, which just happens to be the attacker’s
MAC address. These packets are retransmitted every ten seconds to ensure that the ARP caches stay poisoned. The final
two packets show Ettercap cleaning up after itself by using the
same unrequested reply trick to correct the ARP caches on the
victim and gateway.
17:08:42.918718 arp who-has 146.227.150.45 tell 146.227.150.46
17:08:42.919203 arp reply 146.227.150.45 is-at 00:19:d1:a4:22:f4
17:08:42.928903 arp who-has 146.227.150.254 tell 146.227.150.46
17:08:42.930510 arp reply 146.227.150.254 is-at 00:14:f6:9a:29:f2
17:08:43.940085 arp reply 146.227.150.254 is-at 00:19:d1:a4:37:5f
17:08:43.940093 arp reply 146.227.150.45 is-at 00:19:d1:a4:37:5f
…
17:08:47.983375 arp reply 146.227.150.254 is-at 00:19:d1:a4:37:5f
17:08:47.983390 arp reply 146.227.150.45 is-at 00:19:d1:a4:37:5f
17:08:57.993502 arp reply 146.227.150.254 is-at 00:19:d1:a4:37:5f
17:08:57.993525 arp reply 146.227.150.45 is-at 00:19:d1:a4:37:5f
…
17:09:58.055349 arp reply 146.227.150.254 is-at 00:19:d1:a4:37:5f
17:09:58.055372 arp reply 146.227.150.45 is-at 00:19:d1:a4:37:5f
17:09:58.107659 arp reply 146.227.150.254 is-at 00:14:f6:9a:29:f2
17:09:58.107672 arp reply 146.227.150.45 is-at 00:19:d1:a4:22:f4
Figure 5. ARP poisoning in action (packet capture)
/ Trust No One
After determining the correct MAC address to use, your operating system can now send out the DNS request and receive
a reply. If your computer is using an older or badly configured
wireless LAN, dangers can lurk within weak encryption mechanisms. A rogue wireless access point could be used by attackers to lure you to your doom, another deadly Siren call that
can be hard to resist. Even if the network is trustworthy, the
response from the DNS server may not be. The protocol and
software used by DNS servers to exchange information can
be exploited to poison the DNS cache. This is a particularly
deadly attack as it is so difficult for the users to detect. If you
carefully type in the URL of your favourite online bookshop,
your ISP’s DNS server responds to the browser’s DNS request
with an IP address. How do you know whether the IP address
is correct?
Poisoning a DNS server’s cache is known as pharming
and is a far more effective method of collecting credit card
details, login credentials and personal information than its
more prevalent but less potent relative, the phishing attack,
as seen in the many fake bank security emails and the equally popular 419 scams “from the relatives of deposed African
leaders”. Even if the DNS server is untouched, it is still possible to spoof a DNS response, especially if the transaction
ID is not very random. This is more likely to happen if the
operating system is virtualised, as virtualisation can reduce
the randomness of random number generators. This is also
a problem for Initial Sequence Numbers (ISNs) used to keep
network conversations safe from session hijackers, which we
shall discuss shortly.
So, when we receive our DNS response we’re ready to
send out the HTTP GET request. We have the destination IP
address and since we are using HTTP, your browser will use
destination port 80 unless we tell it to use a different port by
including it after a colon in the URL (e.g. http://www.example.com:8080/). Your operating system will supply the MAC
address of your gateway. Our return details should also be included: a source IP address, a source port number (chosen at
random by the operating system), and a source MAC address
(which will be included in the frame header). However, unlike
the previous DNS request which used UDP, an HTTP request
will use the Transmission Control Protocol (TCP) and before we
see our GET request arriving at the destination Web server and
a response coming back, we need to understand a little more
about TCP.
/ Getting Connected
If I want to send you a short message about my Caribbean
sailing holiday I’ll use a postcard. This is analogous to UDP – a
method of communicating by sending one or more individually
addressed messages. If I want to send you a draft of my large
travel book, sending each page on a separate postcard would
be confusing. Pages will arrive out of order and some may be
missing. You would have to spend time receiving postcards,
sorting them, requesting duplicates of the missing ones etc.
TCP solves this problem: it allows you to set up a connection,
and then TCP does all the sorting and requesting redeliveries,
and even transforms the contents of individual packets into
a stream of data. TCP is like a logical hosepipe – the sender
shoves data in one end and the receiver sees it in the same
order coming out of the other end. Since HTTP is often used to
transfer large Web pages that won’t fit into a single packet, it
is natural for it to use TCP.
/ Pharming and Phishing
Get more information on understanding pharming and phishing:
http://en.wikipedia.org/wiki/Pharming
http://en.wikipedia.org/wiki/Phishing
Unlike UDP, which simply sends a packet whenever someone has any data to transfer, TCP has three phases. First it sets
up the connection using a ‘three-way handshake’, then it uses
the connection to allow both parties to exchange data (i.e.
the same connection is used both to send and receive data by
each endpoint) and finally the connection in both directions is
closed down.
The three-way handshake is shown in Figure 6, which is
from the Request For Comment (RFC) 793 that describes TCP.
The essence of TCP is that each endpoint computer uses a
number to show how much data it has transferred so far. If I
tell you that I’ve sent 30 bytes and you acknowledge that you
received 30, then everything is fine. If I then say that now the
total transferred is 40 bytes, but you say that you still only
have 30 bytes from me, then TCP knows to retransmit the
missing data. If the numbers get too far out then TCP will reset
the connection and another connection will have to be set
up. The numbers are called sequence numbers and an initial
sequence number (ISN) is chosen at random. This protects
against a delayed packet arriving later and interfering with
another connection, and protects against an attacker guessing
the number and forging a TCP packet, which would allow him
to hijack the TCP connection. (As the TCP connection is known
as a session, this attack is called a TCP session hijack – not to
be confused with cookie-stealing session hijacks).
Following the packets in Figure 6, we can see that device
TCP A sends a packet with a control bit set to show that it is a
synchronisation packet – a SYN packet – that synchronises the
connection by sending an ISN. Line 2 shows a reply packet from
TCP B that acknowledges A’s ISN and sends one of its own; this
packet is a SYN-ACK packet. Finally, the connection is set up on
line 3 when A sends an ACK to acknowledge B’s ISN.
Figure 6. TCP three-way handshake (adapted from RFC 793)
TCP A                                              TCP B
1. SYN-SENT    --> <SEQ=100><CTL=SYN>              --> SYN-RECEIVED
2. ESTABLISHED <-- <SEQ=300><ACK=101><CTL=SYN,ACK> <-- SYN-RECEIVED
3. ESTABLISHED --> <SEQ=101><ACK=301><CTL=ACK>     --> ESTABLISHED

Consequently, every HTTP GET request we make is sent by setting up a TCP connection between your computer and the destination Web server – we send a SYN, the Web server sends us a SYN-ACK, and then we complete the connection with an ACK. The next packet sent on this connection contains our GET request. Subsequent packets from the Web server will also be part of this TCP connection, and will contain the Web page we requested or a response containing an error message.
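The first two steps of the handshake are easy to observe for yourself. The sketch below uses the third-party Scapy packet-crafting library (an assumption on our part – it is not part of this article's toolset – and it needs root privileges) to send a bare SYN and print the SYN-ACK that comes back; because the kernel knows nothing of this hand-crafted SYN, it will quietly reset the embryonic connection afterwards, which is harmless here.

from scapy.all import IP, TCP, sr1    # pip install scapy; run as root

# Step 1: send a SYN with an arbitrary ISN of 100, as in Figure 6.
syn = IP(dst="www.thedarkvisitor.com") / TCP(dport=80, flags="S", seq=100)

# Step 2: sr1() returns the first reply - the server's SYN-ACK, which
# acknowledges our ISN (ack=101) and carries the server's own ISN.
synack = sr1(syn, timeout=5)
if synack is not None and synack.haslayer(TCP):
    print(f"flags={synack[TCP].flags} seq={synack[TCP].seq} "
          f"ack={synack[TCP].ack}")          # expect flags=SA, ack=101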
If our simple Web request to http://thedarkvisitor.com/
is similar to the test request that I have just run from my
computer, it will have resulted in 7 TCP connections to four
different Web server IP addresses (the main Web page
contains images that have to be retrieved from other Web
servers) with 235 packets transferred that together contained 80,216 bytes. Assuming that, like my computer, yours
is 17 hops away from the destination Web server, each of
the 235 packets will have been placed in 17 different frames
as they travelled between source and destination and many
ARP request/reply frames will have been generated as these
devices updated their ARP caches. A reasonable estimate is
that this one Web request resulted in well over 4,000 frames
being transmitted.
If we hope to keep our Web browsing private, we can think
again – 17 other computers know what we requested and
what was returned, since the contents of our connections are
unencrypted, and every computer on each of the 17 networks
we passed through could potentially use a MITM attack to
view the contents too.
In this article, we have followed the journey of a Web
request from its origins as a URL typed into a browser, through
the operating system of the source computer, and through
several intermediate devices on its way to the destination Web
server. We’ve seen how ARP and DNS are involved in the route
that it takes, and how a request is sent over a TCP connection
that first has to be synchronised by a three-way handshake.
What appears to most users as a simple click of the mouse
turns out to be a very complicated process involving thousands of frames and many networks.
With so many opportunities for malicious intervention
and manipulation, evidence of a particular Web request or
response is not necessarily the smoking gun it might appear
to be. But any request creates ripples in the digital ocean that
travel widely and can be spotted, if you know what you’re
looking for and where to look. /
/ Author Bio
Dr Tim Watson is the head of the
Department of Computer Technology at
De Montfort University and the leader
of its computer forensics and security
group. With more than twenty years’
experience in the computing industry and
in academia, he has been involved with
a wide range of computer systems on several high-profile
projects and has acted as a consultant for some of the largest
telecoms, power and oil companies. Tim is a regular media
commentator on computer forensics and security.
/ COMPETITION
/ Win an Archos 405 Portable Media Player
with Digital Forensics Magazine
/ Question
Which Special Agent with the IRS is purported to be the
‘Father of Computer Forensics’?
A. MICHAEL ANDERSON
B. MICHAEL DARLING
C. JOHNNY WEISSMULLER
/ To Enter
To enter the competition, all you need to do is send an
email to competition@digitalforensicsmagazine.com with
ARCHOS405 in the subject line, including your name, address,
and phone number with your entry.
TERMS AND CONDITIONS
This competition is open to anyone aged 18 or over, except for
employees of TR Media Limited and their immediate families.
Only one entry is permitted per person. Entries can be submitted
by email only to competition@digitalforensicsmagazine.com.
TR Media shall not be responsible for technical errors in
telecommunication networks, Internet access or otherwise,
preventing entry to this competition. Closing date for all entries is
on 31st December 2009 at 9.30am. Any entries received after that
time will not be included. The first correct entry, chosen at random
by the DFM team, will be notified by email on Monday 11/01/2010.
The winner will be announced in Issue 2 of the magazine and on
the Digital Forensics Magazine website. Submitting your entry
constitutes your consent for us to use your name for editorial or
publicity purposes, should you be the winner. TR Media reserves
the right to change or withdraw the competition and/or prize at any
time. By entering the competition, entrants are deemed to have
accepted these terms and conditions.
“The Archos 405 offers a big, beautiful, glare-free screen, above-average audio quality, SD flash memory expansion, and Mac/PC compatibility, at a very attractive price” – CNET Reviews
/ FEATURE
DATA ERASURE –
FACT OR FICTION?
Don’t give away your personal data along with your old PC!
by Pete Membrey
/ INTERMEDIATE
With stories of lost USB keys, stolen laptops, and
personal data being recovered from second hand
machines, it’s a wonder we’re not too scared to even
touch our computers these days. But how serious is this
risk? Is it really as bad as people seem to be making out?
How easy is it to recover data – and can we prevent
“black hats” from doing it?
We’ve all seen the horror stories in the news about people
recovering information from hardware that they’ve bought
from eBay. Traditionally researchers would buy desktop and
laptop computers and try to recover information from them,
usually with a great deal of success. These cases tend to
make the news because a researcher managed to find private
photos or credit card information, and that makes for an exciting story. After you read the shocking title, you’ll be treated
to a long spiel on why if you don’t erase your disks properly,
someone is going to steal your identity, blackmail your wife,
and post naughty pictures all over the Internet.
It’s not just laptops any more either. These people are
buying up anything that could ever store information. Even a
ten-year-old Nokia mobile phone stores information such as
phone numbers, text messages, and whom you’ve been calling recently. Imagine the juicy titbits that can be pulled back
from an iPhone or a netbook. Indeed, practically every digital
device on the market stores some sort of personal information. Even if you don’t consider your mother’s phone number
very sensitive (though I bet she’d disagree), you still don’t
want just anyone getting hold of it.
/ Deleting or Formatting versus Erasing
So is there any truth to all this? Well, it is certainly very easy to
recover information from a hard disk if it has not been properly
erased. A simple format is definitely not enough – the data
can still be easily recovered. The good news is that erasing a
disk is very straightforward and will prevent all but the most
serious people from recovering your data. This is becoming a bigger
concern as people buy more and more computers. For example, a government survey in New Zealand showed that the
average household owns 1.2 computers (excluding monitors).
[1] This means that the second hand market is likely to be full
of computer equipment – possibly even yours – that has not
been properly erased.
Hard disks aren’t really very intelligent devices, at least
not at the operating system level. To the operating system, a
hard disk appears as a large amount of memory that can hold
ones and zeros, nothing more [2]. As there is no predefined
structure, we need to apply one by formatting the disk. How
this works depends on the filesystem, but basically it arranges
and allocates space in such a way that we can easily store and
retrieve our information. There are two key parts to the filesystem – the index and the data. The data is the information in a
file that has been stored on disk. The index provides a quick
way for the operating system to find that data. In essence, we
pick the file based on information in the index, which is then
used to locate the file on the disk platters.
IF THE HARD DISK IS IN A
HUNDRED PIECES, THERE IS
NO WAY ANY DATA IS GOING
TO BE RECOVERED
When we delete a file what we’re actually doing is removing the entry in the index and telling the filesystem that it can
reuse the space that the file has been using. As an analogy,
think of a map of a wilderness that shows a mountain. Now,
if we remove the mountain from the map, anyone who looks
at it will see empty space. But if that person actually looks
at what’s really there, the mountain will be very easy to spot.
This is why deleting a file is insecure – it doesn’t remove the
data, just the reference to it.
The same can be said of formatting. Whilst this happens on
a much bigger scale (across a whole disk or partition), the disk
itself is not wiped clear. Removing all data on a 500GB hard
disk is not difficult but it does take a considerable amount of
time. So, for performance reasons, a format just creates a new
index and doesn’t touch the data itself.
Most people, when they sell their PCs, simply reformat the
hard disk and reinstall their operating system of choice. This
will certainly overwrite some of the data, but the operating
system will at most take a few gigabytes of space and the
previous owner likely used considerably more than that, which
means the data is still there. The new user won’t be able to see
it of course because the index has no knowledge of the files.
However, using a tool such as WinHex or the Forensic Toolkit
will allow someone who knows how to click ‘next’ a few times
to scan the hard disk and reconstruct the data. This is the issue
that the researchers have been going on about – what most users consider to be a properly wiped disk is in fact anything but.
Properly erasing a disk isn’t very difficult and most operating systems now come with built-in support to do just that.
Now that you know the why and how of standard data recovery, you are also in a position to know how to stop it. We just
need to make sure that every bit of information on the disk
has been erased. The easiest way to do this is to write zeros
to the disk. We start at the very beginning and keep on going
until the disk is full. In effect, every bit on the disk would read
as zero and the data would effectively be irretrievable. For
example, performing this on a Linux machine (perhaps from a
Linux live CD) can be achieved with this command [3]:
cat /dev/zero > /dev/sda
where /dev/zero generates a continuous stream of zeros and
/dev/sda represents the first hard disk. This command will
keep writing zeros until the disk is full.
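The same zeroing is often done with the dd command instead; a minimal sketch, assuming the same /dev/sda target (the block size is purely a performance tweak):

dd if=/dev/zero of=/dev/sda bs=1M

Whichever command you use, double-check the device name first – writing zeros to the wrong disk is unrecoverable.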
/ The Hidden Flaw: Bit Archaeology
This technique looks foolproof at first sight, but there is a way
around it. It is still largely theoretical, but the idea is that if you
know what data was used to overwrite a given bit, you can, with
the correct hardware, work out what was there previously. This
is because hard disks don’t actually write ones and zeros to
disk but instead leave a magnetic charge that’s close to either
of the two. This isn’t a problem from the user’s point of view,
as the drive electronics always return either a one or a zero.
But if we assume that an investigator has this technology, and knows that we'd simply written zeros, he could look at the
magnetic charge and see how close to zero it actually is. From
this, he could make a very good guess as to whether the bit was
previously a one or a zero and thus reconstruct the data.
To get around this problem, we need to randomise things a
bit. There are various different specifications for this, the most
famous by Peter Gutmann [4]. This technique involves several
passes across the disk, each time using a different pattern.
The idea is that if these patterns are followed, several layers
of previous data will be completely random and therefore the
investigator cannot simply look at the voltages to determine
the last “real” value that was stored for each bit.
The DoD has put together its own recommendation, which can
be found in DoD 5220.22-M [5]. Again the idea is that by overwriting the data multiple times, it will become much harder to work
out what that data used to be. By randomly choosing whether
to write ones or zeros, an attacker is denied any easy way of
recovering data. However, the DoD recommendation is for seven
passes whilst other standards suggest up to thirty-five passes [4].
Gutmann’s theory has been challenged, however. Gutmann
insists that the specific patterns he suggests allow for safe
erasure, as they take various different types of hard disk
physical storage into account. There are many ways to store
data physically on the disk, especially in the last decade or
so. Gutmann’s patterns were designed to ensure that the data
would be safely erased, regardless of how the disk works
internally. This proposition has been challenged by Daniel
Feenberg [6] of the National Bureau of Economic Research
(NBER), who insists that on modern hardware a simple erase
is more than sufficient to ensure that data cannot be recovered.
Gutmann is considered an expert, but Feenberg is hardly a
lightweight either. The problem is, they can't both be right –
either zeroing a disk is secure or it isn’t. At least we can safely
say that zeroing a disk is far better than not doing so. It will
prevent the vast majority of people from recovering your data,
if they even bothered to look for it. But what if we want to take
it further? Another Linux command that we can run from our
trusty live CD is:
cat /dev/urandom > /dev/sda
The only difference here is that rather than generating
zeros, this device generates random data. Security experts
will of course gasp at this blasphemy: the /dev/urandom
device isn't really random in a technical sense, but for our
purposes it's close enough – especially as we'll zero the disk
afterwards. Like the previous command, this one writes data
across the entire disk. But unlike the last command, where the
drive is zeroed and appears to be "new", a drive that has been
randomised but not zeroed is very obvious. In other words,
anyone looking at your disk will know that you erased it, even
if they cannot retrieve the information itself.

/ Deleting and Erasing
Deleting files removes them as far as the end-user is
concerned, and all operating systems provide commands for
this purpose. But, because overwriting a large file can take a
long time, and most data is not particularly sensitive, deletion
just removes the links that point to a file's data on the disk.
There are "undelete" tools that can recover such data, so if
you want to make sure no one reads deleted files, you must
overwrite them with fresh data or a meaningless pattern. This
is known as erasing a file.
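To combine the two approaches described above, a minimal sketch that randomises the disk and then zeros it (again assuming /dev/sda is the disk to be erased):

cat /dev/urandom > /dev/sda
cat /dev/zero > /dev/sda

Each command ends with a 'no space left on device' error when it reaches the end of the disk; that is expected and simply means the pass is complete.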
Disk Utility, a tool available on Mac OS X, supports both
the DoD recommendation and Gutmann’s patterns. More
details on how Disk Utility works and the patterns it uses can
be found on Apple’s official support site here http://support.
apple.com/kb/TA24002/.
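If you prefer the command line, the same wipes can be driven through Mac OS X's diskutil tool. A minimal sketch, assuming /dev/disk1 is the target disk; the level numbers shown here (2 for the 7-pass DoD pattern, 3 for the 35-pass Gutmann pattern) should be checked against diskutil's own help text for your OS version:

diskutil secureErase 2 /dev/disk1
diskutil secureErase freespace 2 /dev/disk1

The freespace variant overwrites only unallocated space, which is relevant to the free-space erasure discussed later in this article.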
Most live Linux and Unix CDs come with the shred command
these days. This command is very handy and combines most
of what we’ve looked at already. For example, running the following command will randomise the disk 19 times followed by
a layer of zeros.
shred -vzn 20 /dev/sda
The `v’ option tells shred that you want verbose output. You
should probably leave this on, as randomising a disk can take a
very long time and without seeing updates on the screen, you
will have no idea how things are progressing or whether or not
the machine has crashed. The `z’ option tells shred to zero the
disk and the `n’ option tells it how many sweeps to run.
Like the other commands (although they are rarely used in
this fashion) shred can erase individual files as well as disks.
For example, using /dev/sda1 would erase only the first partition, leaving any others intact. Passing it a filename will cause
shred to fill the file with random data – but it won't actually
delete the file. For that you need the 'u' option. This is off by
default, as most people erase entire disks and you don't want
to remove the disk device once you've finished your erase!
When shred removes a file, it renames it several times to
ensure that it will be difficult to recover.
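Putting those options together, a minimal sketch of securely removing a single file (the filename is purely illustrative):

shred -vzun 3 old-accounts.xls

This performs three random passes, a final pass of zeros, and then renames and removes the file.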
/ Technology Baffles Brains
There is a catch however – and this is caused by how modern
filesystems such as ext3 work. Ext2 used to be the standard filesystem used on most Linux platforms. It was relatively basic, but
it was robust and did the job. Files were written directly to disk,
which caused problems if the write process was interrupted by a
power failure or any other sudden fault. This led to the infamous
“fsck” or “file system check” that would kick off after an unexpected reboot. Because the system doesn’t know exactly what
was going on immediately before the failure, it has to check the
whole disk in order to ensure that everything is as it should be.
Ext3 was a big improvement on ext2, with the main difference
being that it has a journal. Before writing data to the disk, the
system updates the journal, which is essentially a work log. That
way, should there be a power failure, when the server reboots,
all it has to do is consult the work log and tidy up the loose ends.
Microsoft’s NTFS has similar features. For the most part this
works to our advantage. However, when it comes to removing
data securely it can cause problems, because we have no way to
directly access the disk and thus no way to be sure that bits of
the file haven’t remained in the journal or related areas.
Many tools on the market offer to erase `free space’. Mac
OS X comes with built-in support for this. How it works is
straightforward enough. The tool creates a large file and
then keeps on increasing its size until all of the free space on
the filesystem is used up. At this point, every spare piece of
storage space is now in use. (This can have the unfortunate
side effect of causing other programs to crash due to lack of
storage space). In theory this big new file should absorb all
data that isn’t within an existing file. This done, the program
writes random data to the file, thus overwriting all the old
data it has swept up from the disk. The idea is that this will
effectively clean the disk and permanently erase anything that
was deleted previously, but perhaps not securely (although
the Mac does support secure file deletion).

/ Warning!
Since hard drives are made of rigid, strong materials, physically
destroying them is a hazardous activity. When bashing away at
the casing with a hammer, be sure to wear safety goggles and
take all other safety precautions. And if you use a blowtorch
to remove the oxide, remember that the resulting fumes are
seriously toxic. Do it in a very well ventilated area!
By and large this process is rather effective, but there is a
problem. Slack space exists when data taking up a full slot
is overwritten by new data that doesn’t need so much space.
This means that the disk space is still marked as `in use’ and
so the erasure tool cannot allocate it. So even after a complete
sweep of the disk with the OS X erase free space tool, data that
exists in the slack space will not be erased.
This is a problem, the more so because Apple does not
mention anywhere that this is the case. In fairness this is not
Apple’s fault. To reclaim and properly erase slack space is not
a simple procedure; although theoretically possible, it’s not
very practical. With that in mind, erasing free space is a good
way of removing data that you want to be “gone for good”
– it will get the vast majority of it, but some may remain. If
the data were sensitive enough for this to be a problem, full
erasure would be the best solution.
This problem isn’t limited to particular filesystems, but
can also be affected by certain devices. For example, a single
hard disk is a known quantity. A storage area network (SAN)
or network-attached storage (NAS) is somewhat different.
The tools we’ve used work because they overwrite the data in
place. With these new technologies, the disks are effectively
virtual. In other words, there’s no guarantee that the place we
appear to be writing, is where we actually are writing.
This can be a good thing in certain types of disk, especially
solid-state devices, which suffer from wear and tear far more
readily than traditional hard disks. In other words, you don’t
want to keep reading and writing (especially writing) the same
place on the disk, as it will wear out quicker. To avoid this,
most modern disks have technology to spread the data around
evenly in a way that is hidden from the host computer. This
means that when you overwrite a file ten times, although that
will appear to be what’s happened from the operating system
point of view and even if you check with low-level tools, the
data could be in ten separate physical locations on the disk.
Another problem that’s limited to these devices is that many
of them come with a buffer zone. This is an extra amount of
storage space used to spread data more evenly across the disk.
You can’t see this extra space directly as it’s used internally
by the device. Based on how our erasure techniques work, we
know that we have to overwrite everything on the disk. Say
for example that we have a 500GB disk that has a 10% buffer
allocation. This means the disk is 550GB in size but we can only
address 500GB of it. If we do a full erase with shred or any other
tool, there is no guarantee that the extra 50GB will ever be
erased. As we have no way to address it, we can never be sure.
To get around this, SSDs usually come with a built-in erase option that will ensure that everything is properly reset and back
to normal. However it’s worth remembering that traditional
techniques will not work well on these devices; something to
bear in mind before you bet the house on a given procedure.
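On Linux, that built-in erase can often be triggered with hdparm's ATA Secure Erase commands. The sketch below is illustrative only: the drive must support the feature and not be in the 'frozen' state, the password is a throwaway, and /dev/sdX must be replaced – very carefully – with the right device:

hdparm -I /dev/sdX | grep -i erase
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX

The first command checks that Secure Erase is supported; the last hands the whole job to the drive's own firmware, which is the only party that can reach the hidden buffer area.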
/ The Bottom Line: If I had a Hammer
(or a Blowtorch)
At present recovering information from a disk is either easy or
practically impossible. If the user has taken adequate precautions, it’s very unlikely that someone with standard equipment
will be able to recover anything. However, we have all seen the
movies where, with lots of random key presses and impressive
stuff scrolling up the screen, the hero (or heroine) manages
to recover data that would otherwise have been impossible.
This usually goes with a nice technical term such as Magnetic
Resonance Imaging.
Well, this is only slightly science fiction and is becoming more
science fact every day. Hard disk technology is not particularly
accurate. Most people presume that a hard disk writes either a
one or a zero to the platter and then simply reads that information back. What actually happens is that charges tend to be
rather less accurate, sometimes overlap, and are hardly ever
exactly one or zero. In fact when the information is written to
disk the write head only does enough to ensure that when the
data is read, it is obvious which number it should be. As mentioned previously, it is then theoretically possible to look at the
charge on the disk and based on how close to one or zero it is,
determine what the last value probably was.
Depending on whom you ask, this is either a technology
that requires vast amounts of resources to put into play or
something that is becoming standard across forensics labs.
Like the erasure techniques we’ve looked at, there seem to
be a lot of unknowns and no one seems to be sure – or if they
are, they’re keeping quiet about it. At the end of the day, all we
can do is minimise the chances of data recovery and hope that
if these advanced recovery technologies do exist, they’re not
going to be pointed at our data!
As you can see, erasing a disk is not as simple as it might
appear. All of the techniques mentioned have their flaws. As
these techniques get more complicated, the required effort
and cost to recover the data increases enormously, and in reality it will be extremely difficult for any data to be recovered,
even from a disk that has just been zeroed.
That said, if you are really concerned about the contents of
a hard disk, the easiest way to deal with it is to destroy the
hard disk physically. This may sound a bit crazy, but hear me
out. Presumably, whatever is on the hard disk that you want
to erase must be highly sensitive. How much damage would
it cause you if it were to fall into the wrong hands? If you now
compare that cost to the price of a new hard disk, suddenly it
doesn’t seem so bad. If the hard disk is in a hundred pieces,
there is no way any data is going to be recovered.
There are various ways to destroy a hard disk. A very effective way is to remove the platters from inside the drive. You
can either do this with the proper tools, or you can bash them
out with a hammer or an axe. Once you have the platters,
you will see that they are coated with a shiny material. This is
the layer that’s sensitive to magnetism and where the data is
stored. Simply take a blowtorch from your local DIY store and
give the platter the good news. This will vaporize the layer and
ensure that the data is lost forever.
As you can see, erasing a hard disk isn’t as straightforward
as you might first have thought. Although it’s relatively easy to
destroy data, there is always a slight risk that the data might
not actually have been erased. There really isn’t any way to be
entirely sure either.
The simple rule to follow then is this:
If the data you’re worried about is really that sensitive, just
destroy the disk. This is the only way to truly guarantee that
the data will not be recovered. /
References
[1] Ministry for the Environment (2006), 'Ownership of Electrical and Electronic Equipment'. [Online] Available at: http://www.mfe.govt.nz/publications/waste/eee-survey-reportjan06/html/page3.html [Accessed 17 June 2009.]
[2] Casey, E. and R. Dunne (2004), Digital Evidence and Computer Crime: Forensic Science, Computers and the Internet, Academic Press.
[3] Membrey, Peter, Tim Verhoeven and Ralph Angenendt (2009), The Definitive Guide to CentOS, Apress.
[4] Gutmann, Peter (1996), 'Secure Deletion of Data from Magnetic and Solid-State Memory.' [Online] Available at: http://www.cs.auckland.ac.nz/gut001/pubs/secure_del.html [Accessed 19 June 2009.]
[5] US Department of Defense (2004), 'National Industrial Security Program Operating Manual'. [Online] Available at: http://www.dtic.mil/whs/directives/corres/html/522022m.htm [Accessed 19 June 2009.]
[6] Feenberg, Daniel (2004), 'Can Intelligence Agencies Read Overwritten Data? A Response to Gutmann.' [Online] Available at: http://www.nber.org/sysadmin/overwritten-data-guttman.html [Accessed 19 June 2009.]
/ Author Bio
Peter Membrey lives in Hong Kong and is
actively promoting open source in all its
various forms and guises, especially in
education. He had the honor of working
for Red Hat and received his first
RHCE at the tender age of 17. He is now
a Chartered IT Professional and one of
the world’s first professionally registered ICT Technicians.
Currently studying for a master’s degree in IT, he hopes to
study locally and earn a PhD in the not-too-distant future. He
lives with his wife Sarah and is desperately trying (and sadly
failing) to come to grips with Cantonese.
/ FEATURE
FORENSIC EXAMINATION
OF A COMPUTER SYSTEM
A generic technical security procedure
by Gary Hinson and Robert Slade
/ INTERMEDIATE
This paper explains the procedure involved in forensically examining digital evidence such as a hard drive
obtained from a computer system allegedly involved
in a crime. It does not cover “live forensics” – the forensic
analysis of running systems - which requires special skills and
techniques beyond the scope of this procedure. It is extremely
important that the procedure is followed carefully and systematically, since even minor improvisations or mistakes can
compromise (i.e. damage or call into question) the evidence
or the analysis. That, in turn, could lead to a court case being
dismissed. This is not the place to cut corners.
/ The procedure
Figure 1 shows the key activities in the overall process, in the
form of a flowchart. The following sections explain the activities in more detail and include pragmatic guidance.
/ Prepare in advance for forensic
investigations
• Prepare a ‘grab bag’ for use by the forensic investigation
team when called out, containing suitable tools, storage media, notes on procedure, etc.
• Ensure the investigators are adequately trained to use
the tools, and the processes are repeatable and sustainable, regardless of which direction the investigation takes
(e.g. whether the analysis is overt or covert).
• Your in-house resources and expertise may not fully cover all
aspects of digital forensic analysis (e.g. live forensics); or you
may not be sure of always having enough resources to respond
immediately. If so, consider identifying and perhaps contracting
with external specialists so that you can call them in at short
notice, or send properly collected evidence offsite for further
analysis in a secure manner. This kind of prearrangement (a form
of contingency planning) provides a ready source of supplementary skills and additional resources for large or complex
investigations, and greater independence for more sensitive
internal jobs.
• It helps to get ready to visit a crime scene or evidence collection site before you are actually called there. If you have the
luxury of advance warning, you should be able to prepare for a
specific location and situation. Simple things such as booking
transportation and finding somewhere to stay can be done
while waiting for the callout.
• Make sure that you will recognize any source of evidence
by familiarizing yourself with the types of technology likely
to be involved. That does not just mean devices belonging to
the organization, as employees may well be using personal
equipment (such as mobile phones, USB thumbdrives, and
personal digital assistants (PDAs)) at work, whether for
work activities or not, and these may contain useful digital
forensic evidence.
/ Make sure you have the authority to
commence the forensic examination
• The decision to examine a computer system forensically is
normally taken by a suitable manager, typically the person in
charge of the incident management process, a senior manager, or a police officer.
• She should be fully aware that you will be taking the system
offline, and for how long. This could be a long time if you need
to conduct a full analysis, or not so long if you simply make forensic copies of the digital evidence for the analysis and hand
the system back. Even after that, it may take a while for IT staff
to clean up and rebuild a compromised system.
• Find out exactly which system is to be examined and ideally
get this in writing.
• If you will be investigating an employee under suspicion of
malfeasance (such as fraud) or other circumstances where you
do not wish the fact of a probe to be apparent, you need to
factor this in to your planning. The examination should be conducted at a time when the subject, and other people, will not
be around - normally after hours or while the subject is certain
to be somewhere else. Ensure that your actions will not be
observed, and will not leave traces that the subject may find.
Protection against observation by others is important not only
because they may inform the subject, but because if your sus-
picions of the subject become public knowledge, the subject
may have a case for slander or libel. Note that in this case you
may be subject to additional time limitations as well.
/ Secure the scene
• Assuming you are conducting an overt rather than covert investigation, protect the crime scene (which includes
the interior state of the system under investigation) from
interference or damage by controlling physical access to the
scene. This prevents intrusions and interruptions while you
are working.
• The first rule of evidence collection, as in first aid and
medicine, is to “do no harm”. Whatever actions you take
must not damage or corrupt the evidence, particularly the
primary evidence at the scene. Be especially careful to follow
the procedures exactly in order to prevent scene or evidence
contamination by you or other investigators.
• For covert or particularly sensitive investigations, it is always
a good idea to work with at least one partner. Then each investigator can testify to the nature of the work done by the other.
/ Decide quickly what to do about any computer
systems that are currently running
• Depending on the specific circumstances and relevant policies, you may need to make key decisions at the scene. There
is no single correct answer for all situations.
• Will you leave the computer switched on or turn it off? With
an uninterruptable power supply (UPS), it is possible to keep
a system powered on while transporting it to the forensics
laboratory for analysis of live running processes. However,
forensic analysis of live systems is a specialized task beyond
the scope of this procedure.
• Will you maintain network connectivity or simply disconnect
the LAN cable? Cutting off the network connection should
contain a network-based intrusion and stop any further
outflow of information. But remember that a compromised
machine may be using a wireless network connection, so
disconnecting the Ethernet cable may not be enough to stop
a network compromise in its tracks. Another consideration
is that cutting off the network may trigger a ‘scorched-earth’
response (more below).
• It helps to decide in advance what you plan to do in such
situations. Of course, if you do not have the technical capability (skills, procedures and tools) to conduct live forensics,
there is no point in considering that particular option.
/ If you decide to power-down the system,
disconnect the power cable immediately
• Speed is of the essence in this section. The compromise
may be continuing right now for as long as it takes you to agonize over the decision, so quickly double-check that the
system or systems you are dealing with are in fact the right
ones, and disconnect the power cable.
• In the heat of an incident, it is all too easy to pull the wrong
cable, perhaps taking an important system offline with potentially serious consequences, not to mention embarrassment
for those responsible. Don’t do it!
• Do not run a normal system shutdown because, if the
system has been hacked, the hacker may have launched a
‘scorched-earth’ program designed to destroy potentially
incriminating information about the compromise (which of
course is valuable evidence) during the shutdown sequence. It
is a good rule to touch as little as possible – certainly not the
keyboard or even the mouse.
/ Plan your approach
• Now that the urgent tasks have been done, you have a
breathing space to gather your thoughts and prepare for the
remaining activities.
• Open the grab-bag and check the contents. The bag will most likely contain an inventory or
checklist, forensics hardware, software, blank storage media,
evidence forms, procedures, guidelines and other items that
are likely to be useful during the process (e.g. a camera).
Make yourself comfortable: this is going to be a long session!
• Consider videotaping the evidence collection procedure to
create an additional record of exactly what happens at the
scene and perhaps to use for training other investigators and
for lessons-learned exercises.
/ Start the forensics paper trail and collect
primary evidence
• Create a new unique case identifier if necessary, following
your organization’s conventions for case ID formats (for example
based on the date of the initial capture of the evidence). This ID
will be used on all the paperwork associated with this case.
• Use appropriate forensic techniques for all seizures. Use
gloves, evidence bags, tags, etc. when handling physical
items, particularly if they might contain valuable fingerprints
or DNA evidence.
• Ensure you have the right or permission to examine the
evidence. Even where you have authorization from management, it may be worth writing receipts for items taken from
the scene. Generally speaking you are obliged to return seized
items as soon as possible after seizing and analyzing them.
• Start with the most volatile forms of evidence, the things that
might ‘evaporate’ of their own accord or be compromised during
the analysis (for example the contents of RAM and perhaps page/
swap files). The corresponding tools and techniques should ideally have been prepared and rehearsed in advance to save time.
• Create new unique item numbers for every item of evidence
captured. These might include the computer system itself and
its hard drives, USB memory sticks, and CD or DVD media.
Item numbers are normally just serial numbers appended to
the case ID. List all the items on the evidence inventory form.
• Record relevant details about the system and its data storage
devices on evidence record forms, one sheet per evidence item.
• Start recording the chain of custody for each and every item
of evidence.
• It is vital that everyone who handles or examines the evidence from this point forward is noted on the relevant chain
of custody form, so that the chain remains unbroken, and it is
also vital that everyone understands their personal obligations to handle, examine and protect the evidence properly.
• Complete all applicable fields on the forms, carefully and
legibly.
• Keep the forms themselves as safe and secure as the
evidence. The chain-of-custody forms may travel with the associated items of evidence, but the evidence record forms will
normally be kept locked in the safe.
/ Try to obtain passwords, encryption keys etc.
• User passwords, encryption keys, and devices such as smart
cards or tokens may be necessary to investigate a system.
Ideally ask the users and system administrators to write down
any passwords or keys (perhaps retrieved from key escrow)
and to sign the piece of paper, noting the date and time. Secure
this piece of paper as evidence in an evidence bag or sealed
envelope, ideally sealed with tamper-evident evidence tape to
indicate that it has not been examined by anyone not listed on
the chain-of-custody form (this is the same basic process that
will be followed for all other items of evidence). [Note: advise
the user to change their passwords on any other systems where
the same password was used, since it has now been disclosed.]
• Be aware that programs such as Truecrypt allow the user to
configure a ‘duress password’, which gives access to benign
data, while the primary data remain securely stored and, being
strongly encrypted, simply appear to be random bytes on the disk.
• Seize smart cards, tokens, SIM cards etc. as part of the evidence.
• If the user refuses to cooperate or cannot remember passwords, this may make it difficult or impossible to access and
examine strongly-encrypted data, although lower grade encryption may be broken by cryptanalysis or brute force forensic
tools if the analysts have the requisite skills, tools, and time.
/ Remove the hard drives and any other nonvolatile storage media from the machine
• Catalogue storage media on the evidence record forms (one per drive, CD, USB memory stick or whatever), recording identifying details such as model and serial number in each case.
• Keep the media safe. Ideally they should be locked in a suitably certified fireproof safe with dual access controls, except
when they are copied for analysis or taken to court. The original evidence is the most valuable evidence (“best evidence”),
so look after it especially well. It is vital to ensure that there is
absolutely no question of the evidence having been tampered with or compromised, hence the need to maintain the chain-of-evidence records meticulously.
/ Check the system’s real-time clock
• Power-up the machine without any hard drives installed,
checking the BIOS real-time clock and recording the current
date and time, along with the true clock time at that same moment (ideally down to the nearest second). Then power off the
machine and return it either to secure storage or to the owner,
whichever management decides is appropriate.

Figure 2: Be prepared to create multiple copies of the original evidence (original evidence → evidential copies → working copies).

/ Further information
The US Department of Justice's digital forensics process
flowchart (PDF) is primarily aimed at investigative team
leaders, while their guide for first responders is a helpful
briefing to prepare forensics team members for what they
ought to be doing when called out.
• The precise dates and times of file activities, records in
logs, and incidents are often material to a case since they can
establish the exact sequence of key events. Even small timing
discrepancies may be enough to raise doubts and discredit
the evidence, so be careful.
/ Install a hardware “write blocker” before
any captured hard drive is powered up
• This step is vital. While forensic software is designed not to
write to drives being copied, you cannot testify to this in court
unless you actually wrote the software or have proven the
capability beyond all reasonable doubt. The key advantage of
using a commercial hardware write blocker is that its suitability has already been established in court. With the write
blocker in place protecting the original drive whenever it is
powered up, there can be no reasonable claim that you might
have inadvertently or deliberately altered data on the disk.
• Important: remember this step in the unlikely event that you
ever need to take additional copies of the original evidence for
any reason.
/ Create one or more forensic “evidential
images” of each original disk
• The types of copy (evidential and working) that you will
need, and their relationships to each other and the original
evidence, are shown in Figure 2.
• Ideally, use suitable digital forensics hardware (as well as a
write-blocker as mentioned above) to take accurate bit-copy
images of the hard drives, USB sticks, CDs, DVDs, or whatever
original storage media were recovered from the machine.
• It often pays to take multiple evidential images using different forensic software. This may seem excessively cautious
and tedious but is definitely better than trying to defend the
integrity of a single evidential image later in court, particularly
in serious criminal cases.
• Generate both MD5 and SHA1 hashes for the evidential
images, and verify the hashes for every evidential image (or
image segment) against those calculated from the original
media. If you are working with a partner, your partner should
validate the hashes and record the results formally as part of
the case notes (see the command-line sketch after this list).
• Store the evidential images along with the hash values. Ideally you should store these on DVD or, if too large for a DVD,
on external hard drives. These should be purchased especially
for this purpose. Do not try to re-use old DVDs or drives –
penny-pinching now may destroy the entire case. If the volume
of work justifies the expense, you may have access to a disk
farm (NAS or SAN) dedicated for forensics. In that case strong
logical access controls are required to mirror the physical
access controls normally used (for instance to prevent data
from different cases mingling, and to cleanse used storage
thoroughly before reuse).
• One of the evidential images may be kept in reserve to be
given to the counterparty in a court case for their forensic
examination, if required. You may also decide to create an
‘insurance copy’ in case the first evidential copy is compromised. This avoids having to take another copy of the original
evidence (since each additional access slightly degrades its
evidential value). Use one evidential image for your subsequent forensic analyses and remember to keep the chain of
custody up to date at every step.
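As promised above, a minimal command-line sketch of the imaging and hashing steps using common open-source tools. The device and file names are illustrative, and the sketch assumes a healthy drive behind a write blocker; dedicated forensic imagers add the error handling and audit logging that plain dd lacks:

dd if=/dev/sdb of=case001-item01.dd bs=4M
md5sum /dev/sdb case001-item01.dd
sha1sum /dev/sdb case001-item01.dd
sha1sum case001-item01.dd > case001-item01.dd.sha1

The second and third commands hash both the original medium and the image so the two values can be compared; the last records a hash for later re-verification with sha1sum -c.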
/ Physically secure both the original
evidence and all evidential copies
• Store the media containing the evidential images, plus the
original hard drives, the system itself (without its battery if it
is a laptop) and the associated paperwork in a fireproof safe
or other secure area, such as a secure storage facility that is
under constant video surveillance or security monitoring.
• Do not remove any evidence unless: (a) you are going to
analyze it forensically, in which case you must update the chainof-custody forms accordingly and protect the items while being
examined; or (b) you are taking it to court, or handing it over
to law enforcement or legal representatives, in which case you
must still maintain the chain of custody; or (c) the case is over
and the evidence is no longer required. In any event, your forensics procedures should state who may authorize the removal of
evidence from safe storage, and how this authority is recorded.
/ Take one or more working copies of one of
the evidential image/s
• From now on, you will be examining the working copy or copies.
If, for whatever reason, a working copy is damaged or somehow
its integrity is brought into question, it can be replaced with a
new working copy taken from the evidential image.
• Remove the evidential image from safe storage, completing
the chain of custody form.
• Use suitable forensic software to prepare one or more
working copies, ideally in the same way that you created the
evidential image earlier (a minimal sketch follows this list).
• Take additional copies for other analysts if appropriate,
but remember that they may well contain confidential data
and should be suitably protected at all times. Don’t just
leave them lying around as they could be stolen, copied
or damaged.
• Replace the evidential image in the safe, completing the
chain-of-custody form.
• If necessary, retrieve any files or other information that the
user legitimately needs, on their behalf, from the working
copy. (Do not hand over your working copies – they are valuable and probably contain sensitive evidence relating to the
case). This decision and the associated process need to be
documented to minimize the risk of inappropriately disclosing
information or compromising the case.
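As mentioned in the list above, a minimal sketch of taking a working copy from an evidential image and confirming its integrity (file names are again illustrative):

cp case001-item01.dd case001-item01-working.dd
sha1sum case001-item01-working.dd

The hash of the working copy must match the value recorded when the evidential image was created; note the result in the case notes.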
/ Forensically examine the working copies
• Follow sound forensic practices carefully so there is no
doubt in court about what you did to reveal and examine the
evidence, and no question that what you found would also be
found by another competent forensic examiner working from
the same original evidence and following the processes you
may be asked to describe in detail.
• Record what you do and what you find immediately in your
fieldwork notes throughout the analytical process. It is generally advisable to use a hardbound (not loose-leaf) notebook
for this purpose, recording the date and if necessary the time
of every stage or major activity and supporting details where
necessary. You may find it worthwhile also to video your analysis to cut down the amount of note-taking required, but only if
the video is likely to be admissible evidence. (Another reason
for recording a video is that you want it as a training aid). Stay
professional and avoid contaminating notes and evidence
with irrelevant information such as personal notes or doodles.
Superfluous material makes it harder for the court to focus on
the key issues and may even raise concerns about the competence or integrity of the investigative team and process.
• You may need to search for hidden data, deleted files etc.
following sound forensic practices. This can get highly technical
but many tools exist to support this type of work and the investigators should have been adequately trained beforehand.
• If you are not forensically skilled and experienced, or are
unduly concerned about your responsibilities in this case, consider sending a working copy to a predetermined competent
and trustworthy specialist digital forensic examiner who can
conduct the analysis, prepare a forensic report on your behalf
and, if necessary, appear in court as an expert witness to
explain the process and findings.
/ Conduct a post incident review
• Following any significant information security incident and
investigation, it is good practice to conduct a post-incident
review. The primary objectives are twofold: (1) to examine and
if possible address the root causes of the incident, and (2) to
improve the investigative process, learning from how the incident was handled and being better prepared for the next one.
• Such reviews are best led by someone independent, generally a manager, auditor or consultant who was not intimately
involved in the incident investigation but has the trust of the
team and can assess the situation dispassionately.
• Remember that the goal is not to apportion blame but to
learn and improve. It is just as important to identify the things
that went well as those that did not.
• There are diminishing returns as processes mature, so it is
not necessary to review every single incident. The frequency
should be at management’s discretion.
/ Conclusion
The process described in this paper and summarized in Figure
1’s flowchart encapsulates commonplace forensic practices, but
only in a generic way. It should be checked and customized by
competent people to suit your particular circumstances and requirements. The legal rules and practices regarding the admissibility of evidence, for instance, vary between jurisdictions. You do
not want to find yourself in the position of having to defend and
justify your methods on the witness stand unless you are sure of
your ground, and the best time to gain that confidence is now,
while you have the time to research, think and seek qualified advice, and before you are called upon for real. Keep the procedure
succinct and clear to make it simpler to follow in training sessions
or during investigations, and easier to explain in court. /
/ Author Bio
Dr Gary Hinson PhD MBA CISSP has
more than two decades’ experience as
practitioner, manager and consultant
in the field of information security, risk
and IT audit. Gary runs the information
security awareness subscription service
NoticeBored (www.NoticeBored.com)
and spends his days writing awareness materials in an idyllic
hideaway in rural New Zealand. He is also actively involved
in developing the ISO/IEC 27000-series information security
management standards and shares his passion through
www.ISO27001security.com.
/ Author Bio
Robert Slade is a data communications and
security specialist from North Vancouver,
British Columbia, Canada. His research into
computer viral programs started when they
first appeared as a major problem “in the
wild”. He has since become known for “Mr.
Slade’s lists” of antiviral software vendors,
antiviral reviews, antiviral evaluation FAQ, and virus books. A member
of the working group for the VIRUS-L FAQ, he is best known for
a series of review and tutorial articles which have recently been
published as “Robert Slade’s Guide to Computer Viruses”. As
an outgrowth of the virus research, he prepared the world’s first
course on forensic programming (aka software forensics), and
he wrote a book on that, too.
/ FUTURE ISSUES
2 GREAT REASONS
TO SUBSCRIBE
In the next issue of Digital Forensics Magazine
In future issues of Digital Forensics Magazine …
Over the next few issues of Digital Forensics Magazine
we are really excited to be bringing you a programme of
fantastic serial articles from two experts in the industry.
This short introduction should be enough to whet your appetite
and make you look forward to reading the forthcoming articles.
/ DIY Computer Incident Response Team
A key aim of the magazine is to link the latest academic
research to the practical world of the digital forensics investigator. DFM has already established links with universities
in the UK, Australia, and the USA, and we are growing that
network all the time. One such link is with Norwich University,
specifically with the Senior Academic Advisor on the Master
of Science Degree in Information Assurance, Michel Kabay
PhD (www.mekabay.com). We have been working closely with
Dr Kabay to craft a series of four articles that lay out exactly
how to set up an Incident Response unit. This series explains
how to set up your team, how to deal with numerous types of
incident (using electronic records to track incidents), guidelines for triage, and techniques of continued process improvement (for quality and compliance). We think this series will be
a significant contribution to the body of research on incident
response, and a great resource for our readers.

/ Proactive Computer Forensics
Scott Zimmerman's upcoming series on Proactive Computer Forensics looks like a true thought leadership piece. He
will be covering such subjects as activity monitoring, proactive data and log file gathering, and tighter system management that reports forensically relevant information (including
sample Unix scripts). This series will complement the articles
by Dr Kabay, giving you all the detailed knowledge you need to
design, implement and manage an incident response capability based on a scientific approach to data gathering.

We are always interested in commissioning article series,
so if you have an idea contact us via the website:
www.digitalforensicsmagazine.com
/ LEGAL EDITORIAL
LEGAL EDITORIAL
Ahoy there, readers, from the legal wheelhouse of Digital Forensics Magazine!
by Moira Carroll-Mayer
In this edition, articles by Eric Fiterman and Bill Dean take a
look from the practitioners’ viewpoint at two areas of vital importance to digital forensics experts, where their endeavours
must satisfy the requirements of the courts. These are respectively the necessity of answering to standards laid down in the case
of Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993)
for the presentation of expert evidence, and the advantages and
perils arising from the US ‘Federal Rules of Electronic Discovery’.
The issues are as persistent as they are omnipresent. In
2005 the House of Commons Select Committee on Science
and Technology (www.publications.parliament.uk/pa/cm/cmsctech.htm) discussed the inability or disinclination of scientific
experts to explain the what, how and why of their investigations
before the courts. Such failures can only serve to undermine
the clarity, weight and admissibility of evidence. The comments
made by the Select Committee are as relevant today as they
were then; the recommendations just as pertinent. For anyone
presenting digital (or any scientifically based) evidence before
the courts in the USA, the Daubert standards are a fairly reliable
stalwart. Contemporary UK courts adopt Daubert ad hoc and
always subject to measures intended to avoid miscarriages of
justice due to over-reliance upon expert evidence.
In the USA changes to the ‘Federal Rules of Electronic
Discovery’ have increased the demand for digital forensics
experts. This is evident, for example, in the collection and
presentation of digital evidence to a party seeking it. Where
discovery of evidence is resisted on the grounds that it is unreasonable to produce it, an expert may be needed to justify a
resulting protective order, or to establish that information was
deleted in good faith, not as a means of avoiding the revelation of something condemnatory.
This last possibility resonates in the aftermath of the UK
Department for Business, Innovation and Skills (BIS) report,
released on 11th September 2009, into events surrounding
the going into administration of MGRG motors at Longbridge
in England (see http://www.berr.gov.uk/files/file52782.pdf ).
The report, ordered by the UK Government following MGRG’s
collapse leaving debts of almost £1.3 billion, suggests serious
contributory accounting and other irregularities.
Among matters coming to light in the report was the extraordinary behaviour of Peter Beale, Chairman of Phoenix Venture
Holdings (the parent company of MGRG), who early on the
morning following the announcement of inspection by BIS purchased the ‘Evidence Eliminator’ software product. Things do
not look good for Mr Beale because the software is (according
to the report) marketed with these words: “It is a proven fact …
routine Forensic Analysis equipment such as EnCase and F.R.E.D
used by Private and Business Investigators, Law-Enforcement
and others, can recover evidence from parts of your hard drive
that you thought were empty, parts that you had cleaned.”
Evidence Eliminator is also claimed to be “… designed,
tested and proven to defeat the exact same ‘Forensic Analysis’
equipment as used by police and government agencies, etc.”
and enables users to: “… purge [their] PC of hidden computer
data which could later be recovered with Forensic Software to
be used as evidence against [them].” Mr Beale denies that immediately before the BIS inspection he ‘deliberately went out
and got a piece of software to suddenly delete lots of things
on my computer’ and claims he ‘had no concept at that time of
what the DTI investigation would look like and was subsequently absolutely astounded that such things [as data on a
computer hard drive] would even be looked at’.
The authors of the BIS Report allege that Mr Beale deliberately deleted data from the hard drive before they could access it,
while he was fully aware of their intention to image and review
the contents for documents relevant to their investigation.
Whatever the correctness of the assertions made by either
Mr Beale or the BIS, the relevance and timeliness of Bill
Dean’s article on the US Federal Rules on Electronic Discovery
for corporate governance regimes everywhere, in light of the
MGRG debacle, is clear. The BIS report was constrained, by
the absence in the UK corporate governance framework of adequate requirements on digital data retention, from discussing the use by a company director of Evidence Eliminator.
It is a pity that the matter of company directors’ interference
with digital data and the possibility of implications from criminal
and company law are not directly addressed in the report. In the
contemporary environment of private and public loss, anguish
and uncertainty the stakes are raised not only for corporate
policy makers, but also for in-house digital forensics experts and
their compatriots from the further reaches of cyberspace.
In the next edition, I will have exciting news about the
implications of these, not wholly untoward, developments for
digital forensics experts as in their wake, the digital bounty
hunter draws closer to shores far from those of the US. /
/ AUTHOR BIO
Moira Carroll-Mayer, Digital Forensics Magazine’s Legal Editor,
is a lecturer in Procedural and Substantive Law of Forensic
Computing with published articles on Communication Ethics,
Identity Management & the Implications for Criminal Justice,
the Ethical Implications of Nanotechnology, and Digital
Crime & Forensic Science in Cyberspace. Moira is currently
conducting research into the ethical and legal implications of
advanced autonomous weapons systems.
/ LEGAL FEATURE
THE WILY WITNESS
Key Legal Issues and Courtroom Advice
by Eric Fiterman, Methodvue
/ ENTRY
Expert opinion testimony is a legal tool frequently used
in civil and criminal cases involving the collection, acquisition, and examination of digital evidence. The U.S.
Supreme Court has suggested that expert witness testimony
should be based on scientific, repeatable, and reliable methods that are generally accepted by the scientific community.
When delivering expert witness testimony, in particular under
cross examination, forensic practitioners must be aware of
these legal requirements, as well as common pitfalls to avoid
when expressing facts and technical data in the courtroom.
This article explores key legal issues affecting forensic
experts, and provides recommendations for witnesses to
improve the quality and defensibility of opinions delivered in
the court room.
/ What Expert Witnesses Need to Know and Do
In civil and criminal litigation, an expert witness may be
called to provide testimony or opinion on complex matters
in order to interpret and present case circumstances to a
non-technical audience. The expert witness often plays an
important role in the formulation and execution of an effective legal strategy, and is consequently held to some of the
highest legal, ethical, and professional standards in her field.
While expert witnesses come from a variety of backgrounds,
the specialised skill and experience of computer forensic
practitioners is increasingly in demand.
While law enforcement and investigative agencies pioneered many of the methods and techniques used to examine
digital evidence, it is not uncommon for forensic practitioners
to testify in high-stakes civil litigation and business torts.
Modern commerce and governance structures are tightly
coupled with technology: communications, core business
processes, and infrastructure all depend on software and
hardware systems to function properly. When involved in a
legal dispute, businesses are frequently required to furnish
evidence stored in these systems, often with the assistance
of technical experts who are able to extract and present this
information to the court.
It is also important to appreciate the impact and scope of
expert witness testimony for a related class of litigation: intellectual property (IP) litigation. Many businesses invest heavily
in the development and protection of intellectual property,
thus making information an asset with significant intrinsic
value. The ease with which these assets can be misappropriated or stolen is a compelling reason that organisations fight
aggressively to protect their positions, investments, and
competitive advantage.
Businesses aren’t alone. Take a moment to think about your
most sensitive and private information – is there anything you
can identify that isn't stored in electronic form? Your financial
history, health records, online habits, and personal history are
all held on a computer somewhere. Stringent privacy laws in
the EU, and legal requirements governing the unauthorised
disclosure of data (so-called “Data Breach Laws”) create an
environment fraught with legal and technical challenges.
When sensitive information is stored in electronic form,
trained personnel will be required to protect this information
and to explain in legal proceedings how and why it may have
been compromised.
/ Technical and Legal Challenges
When called to provide expert witness testimony, the forensic
practitioner is faced with a number of challenges. For one, an expert may be required to examine evidence involving commercial,
closed, or proprietary systems. This means that the full details of
a component’s operation and functions are not readily available
for inspection. Consider, for instance, Microsoft’s Windows series
of operating systems. Microsoft does not publish every detail
and specification of Windows – the source code for Windows is
protected as proprietary and confidential information.
Consequently, some of the best resources on Windows
forensics are produced by technical experts, who aren’t employed by, or affiliated with, Microsoft. One example is Brian
Carrier’s book, File System Forensic Analysis, which details the
results of Mr. Carrier’s own independent research and findings
related to the mechanics of Microsoft file systems. The inference here is that the expert witness must conduct her own
independent tests or rely on published works and consensusdeveloped tools to perform her analysis, while acknowledging
that this analysis is typically based on observation, without
detailed (or, perhaps, any) inspection of the source materials.
Another challenge faced by forensic experts is that, unlike
the system of common law, technical systems are based on
dynamic, fluid, and changing paradigms. New software development languages, programming models, and
system architectures make it difficult for legal
professionals and juries to attain an understanding of the most elementary technical concepts. Take DNA evidence, for
example. When first introduced into
the legal system, DNA evidence and
expert witness testimony based upon
it, were not well understood, and
were sometimes challenged as new
or unproven science. Since then, DNA
exhibits have gone “mainstream” and
most jurors, conditioned by the popular
media to expect these types of exhibits,
would be surprised if DNA evidence were
not introduced at a criminal trial. Technology,
however, is a moving target, making it difficult
to build broad awareness and acceptance of a single
standard. As an illustration, virtualisation and cloud computing architectures are new territory even for experienced
litigation attorneys with technical backgrounds.
Lastly, like many specialised domains, forensic science
is difficult to explain in laymen’s terms. This problem is not
unique to the computer forensic practitioner, and there are excellent resources for those interested in learning more about
conveying technical information. I have included a short list of
these at the end of this article. The key issue here is that one
of the most important jobs the expert witness is called to do,
is to translate highly technical, often terse information,
into language that can be clearly understood by a nontechnical audience.
/ Opinion Defensibility
The U.S. Supreme Court [Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993)] has held
that expert witness testimony should be based on
scientific, repeatable, and reliable methods that are
generally accepted by the scientific community. What
does this mean and how does this apply to the work of
the forensic practitioner?
The work performed by a forensic practitioner
should be based on proven and documented
methods for collecting, processing, and analysing
digital information. This can be established in a
number of ways. One of the most common is to apply techniques that have been
taught by known and recognised authorities
in the field. This may include departmental or
agency training curricula, academic coursework
and degree programs, or commercial training
credentials. The training should be based on consensus-developed, community-driven standards
to ensure admissibility of the work. Training and
proficiency with industry standard forensic tools
will help support the admissibility of an expert’s
work, but vendor product familiarity should be
only part of the picture.
/ USEFUL RESOURCES
You can learn much more about techniques for presenting technical
information in court at the following Web sites (among others):
American Bar Association (ABA):
http://www.abanet.org
National Institute of Standards and Technology (NIST):
http://www.eeel.nist.gov/oles/forensics.html
National Institute of Justice:
http://www.ojp.usdoj.gov/nij/publications/welcome.htm
If you’re working in an environment that has not yet defined
a formal protocol or standardised approach for performing
forensic work, you may be able to reference tested methodologies without having to reinvent the wheel. One resource
that is frequently overlooked consists of guides and resources
published by the US government. Both the National Institute
of Standards and Technology (NIST) and the US Department
of Justice's Office of Justice Programs provide guidance for
personnel involved in the collection, acquisition, examination,
and presentation of digital evidence.
When developing an expert witness report, the practitioner
must perform her work in a deliberate, documented, and repeatable fashion. The objective is to show that a formal process –
one that is not arbitrary or capricious – was applied to develop
information. The best way to go about this is to have a clearly
documented approach to the steps taken, the tests conducted,
and the results of the work. Independent tests should be supplemented by technical guidance and documentation that supports
the approach taken and interpretation of the findings.
I started my career in law enforcement, and one of the most
important lessons I learned during that time was not to let
my own personal opinions and perspectives interfere with
the objectivity of my work. I learned to look for exculpatory
information when investigating allegations of misconduct
- any indication that there could be cracks in the case I was
putting together. Playing devil’s advocate will condition the
practitioner to fully explore the facts and circumstances of a
case, leading to an objective, defensible, and comprehensive
analysis. It will also help you prepare for potential issues and
counterpoints that may arise during cross-examination.
/ Things to Remember
Giving testimony is not a conversation. When testifying, in particular under cross-examination, the role of the expert witness
is to answer what is asked. I recently had an opportunity to
work with Mr John Moran, an experienced IP litigator and partner at the law firm Holland & Knight LLP, who gave the following example. When asked, "Are you employed?" in a casual
conversation, I may reply by stating, “Yes, I am employed by
Methodvue as Chief Executive”. While this may be appropriate for a conversation, it is typically not the correct response
when testifying. In the example, I answered the question and
volunteered further information about my occupation – thus
answering a question that was not asked.
This simple example illustrates how easy it can be to treat
a deposition as a conversation, which it emphatically is not.
When giving testimony, the better response would be to reply
simply “Yes” to indicate that I am employed. The expert’s role
is to listen actively and respond to the question that has been
asked - awkward for a discussion, but the appropriate and
expected response when testifying. Like any guidance, this
is not a “one size fits all” rule. It is, however, a tried and true
principle to follow when testifying.
Follow an established protocol. Whether learned and practised in training courses, or performed regularly while carrying
out your assigned duties, follow an approach that is sound,
repeatable, and based on industry-accepted standards. This
also means you should treat every case as if it were going to
trial – whether you think it has merit or not. This may sometimes seem tedious or unnecessary, but it is always prudent
to treat every case as if it needs to withstand the scrutiny of a
judge, opposing counsel, and jury. This will improve the quality and admissibility of your work, and the defensibility of the
opinions presented in your report and testimony.
Be objective. We are human beings, and we all tend to view
the world with our own set of experiences, beliefs, and perspectives. The challenge for the expert witness is to put these
considerations aside and conduct an independent, thorough,
and objective review of the evidence in the case. This should
include looking for evidence that could support positions
of value to opposing counsel. This is critical to establish the
“ground truth” and may also help to identify facts and circumstances that erode or devalue opposing witness statements.
Need more incentive? There may be another expert witness
involved in the trial whose sole purpose is to scrutinise and
assess the validity of your work. Your reputation is on the line
– being highly critical of your own work will help move your
performance from run-of-the-mill to highly polished.
Communicate opinions in laymen’s terms. The forensic
practitioner is required to do what appear to be two contradictory things: conduct highly-technical and scientific analysis, in
enough detail to meet stringent legal requirements, while explaining this analysis in a way that can be clearly understood by
a non-technical audience. Giving testimony is not like delivering
a lecture or course to a technical audience. The expert’s job is to
act as a translator of her own work without talking down to the
jury. Graphics, visual aids, and demonstrations can all help the
jury to understand fully the information you present to them. /
/ Author Bio
Eric M Fiterman is a former FBI Special Agent and founder
of Methodvue (http://www.methodvue.com), a US-based consultancy that provides cybersecurity and threat
intelligence services to government and private businesses.
/ LEGAL FEATURE
DISCLOSE OR
DISPOSE?
Companies Operating in the USA Need to Prepare Right Now
by Bill Dean, Sword & Shield Enterprise Security
/ ENTRY
On December 1, 2006, the world of civil litigation changed
forever with the adoption of amendments to the Federal
Rules of Civil Procedure that address electronically
stored information (ESI). As of that date, ESI officially attained
the same importance as paper documents in litigation.
Speaking from the trenches as a senior systems analyst in a
large company, I saw companies fearful of their new responsibilities and concerned about the impact on their operations.
We technologists knew the type of information that was buried deep in computer systems and the difficulties in locating
it. We were relieved to discover that the "Safe Harbor" and the
updated rules were restricted to cases in federal courts.
Now however, many states have adopted rules similar
to those contained in the Federal Rules of Civil Procedure,
concerning electronic discovery and digital forensics in litigation. This alignment of Federal and state rules has given rise to
greater expectation of electronic discovery, so that the sense
of relief described above is no longer justified. Since 70% of
digital information remains unprinted and invisible, the potential for e-discovery is limitless; our work is cut out for us!
For companies or attorneys still wondering what e-discovery
is, the simplest definition is: the extension of the traditional
discovery process to information that is stored on computer
systems. Examples of ESI encountered will include e-mails,
word processing files, spreadsheets, and information from
company databases. This information will reside in a variety of
places such as corporate computers, corporate servers, smartphones, Internet servers, and even home computers.
The good news is that information in an electronic form
can be much easier to cull, analyse, and review. Technology
may have created the problem, but technology can also help
to solve it.
Electronic information can be stored and processed very
efficiently, but there are considerations that warrant caution.
ESI is particularly sensitive to alteration or destruction. Simple
acts such as turning a computer on or off could affect the
integrity and availability of information. Then, of course, there
is the infamous metadata, special-purpose data that contains
unobvious and usually invisible information describing the
nature and context of the ESI, or other useful details. For
these reasons, ESI must be collected and handled forensically
to maintain its integrity.
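The fragility of ESI and its metadata is easy to illustrate. The Python sketch below (ours, with a hypothetical file path) records a file's size, modification time, and cryptographic hash before and after handling, so that any alteration of the ESI can later be demonstrated:

import hashlib, os
from datetime import datetime, timezone

def snapshot(path):
    st = os.stat(path)
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # the access time is deliberately omitted: merely opening the file
    # changes it on many systems, precisely the fragility described above
    return {
        'size': st.st_size,
        'modified': datetime.fromtimestamp(st.st_mtime, timezone.utc).isoformat(),
        'sha256': digest,
    }

before = snapshot('evidence/report.doc')  # hypothetical path
# ... collection or review happens here ...
after = snapshot('evidence/report.doc')
assert before == after, 'the ESI was altered while it was being handled'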
One major obstacle to proper evidential preparation has
been insufficient technical explanation of the rules. This article
will cover some of the technical aspects of the updated rules.
/ Rule 26(b)(2)
This is known as the “two-tiered discovery” rule that divides
ESI into two categories depending on whether it is “reasonably
accessible.” The Federal Rules Advisory Committee had the
foresight to recognise that while some ESI is easy to retrieve,
some is difficult, expensive, or impossible to retrieve. Easily
accessible ESI includes such instances as documents from a file
server, or e-mails, that have not been deleted. ESI considered
not reasonably accessible may include backup tapes from previous years or deleted information requiring forensic analysis to
recover and reconstruct. “Reasonable accessibility” is typically
determined by the expense or effort required to gather the ESI.
However, difficulty of accessing ESI alone will not necessarily
shield parties from their obligation to produce this data.
/ Rule 26(f)
The “meet and confer” requirement in this rule prescribes how
the ESI discovery process should begin by discussing “any
issues relating to preserving discoverable information” and
devising a method that will address “any issues relating to
disclosure or discovery of electronically stored information,
including the form or forms in which it should be produced.”
During the process, there are numerous questions that arise.
What ESI is available? Where is the ESI located? What will be
preserved, and how? What are the parties’ plans for discovery?
In what format will data be provided?
One of the most important steps in preparation is determining who, for each party, is the most knowledgeable about the
ESI systems involved. This person may contribute to:
• The development or provision of a data map of the
ESI systems
• The provision of data retention policies
• The provision of system backup and recovery plans
• Locations of email
• Locations of other relevant data
• Plans for preservation
• Justifying the designation of certain data as not
“reasonably accessible”
/ Rule 34(b)
This rule permits the requesting party to specify the form
or forms in which it wants ESI to be produced. The responding party must also state the form that it intends to use for
producing ESI. If the requesting party does not specify a
form, or if the responding party objects to the requested
form, the Court will resolve the dispute. The purpose of this
rule is to prevent efforts to produce ESI in a format that is
difficult to use. For example, producing thousands of e-mails
in non-searchable PDFs is a potential practice that this rule
addresses. For e-discovery and digital forensics experts, this
may require converting information into different formats for
clients to review.
/ Rule 37(f)
This “Safe Harbor” rule provides that “absent exceptional
circumstances” a court may not impose sanctions on a party
“for failing to provide ESI lost as a result of routine, good
faith operation of an electronic information system”. The rule
recognises the reality that some computer systems routinely
alter and delete information without specific direction from
an operator, and that companies may have reasonable and
appropriate policies that automate this process.
However, a party must not exploit the routine operation
of an information system to thwart discovery by allowing a
system to continue to destroy information that it is required to
preserve. A “good faith” effort would require a party that is in,
or reasonably anticipates, litigation to take steps to prevent
further loss of information. This has given rise to what is
called the “litigation hold.”
While parties were pleased that the “Safe Harbor” relieves
them from the obligation to retain information perpetually,
there are a couple of caveats. If ESI or other information is
routinely purged based on age, then that should occur pursuant to a documented and approved ESI management policy.
And that policy should include a specific “litigation hold”
procedure that provides for appropriately tailored suspension
of the policy when litigation is reasonably foreseeable.
/ Conclusions
For companies that are not prepared for e-discovery, the
time has arrived. To employ a provocative analogy, it is
typical for a company to budget, plan, and test its disaster
recovery procedures in preparation for the possibility of a
disaster. But now that both Federal and State Rules require
e-discovery, a company can be almost certain of receiving
an e-discovery request before it experiences a disaster. To
complicate matters, court decisions like that in the famous
Zubulake v. UBS Warburg, LLC, 229 F.R.D. 422 (S.D.N.Y.
2004) dispute now routinely confer on attorneys the duty to
take affirmative steps to monitor compliance and ensure that
all sources of discoverable information are identified and
searched. Attorneys failing to do so have been sanctioned,
sometimes severely.
I strongly suggest that companies begin working now to
obtain the information outlined in the “Meet and Confer”
section above. Studies have shown that properly planning for
e-discovery, before it occurs, can save companies up to 40% of
the costs otherwise incurred. In the current economic climate,
that kind of saving will surely be welcome.
Requirements at Federal and State levels for operations
such as data preservation, devising electronic discovery plans,
demonstrating “undue burden”, providing evidence disputing
whether or not electronic information was deleted as a “good
faith routine”, and providing specific forms of production,
further demonstrate the need for digital forensics experts. /
/ Author Bio
Bill Dean is the Director of Computer
Forensics for Sword & Shield Enterprise
Security. He has more than 13 years
of experience in the technical field, in
roles such as programmer, systems
support, enterprise systems design
and engineering, virtualisation, digital
forensics, and information security. Bill is a frequent speaker
and published author on the topics of digital forensics and
electronic discovery for numerous legal associations.
/ TECH FEATURE
THE DIARY OF A PDFBOOK
A Tool for Facebook Memory Forensics
by Jeff Bryner, Portland General Electric
/ EXPERT
This article was sent to us for inclusion in DFM. Written in an informal and irreverent style, it brings to life the daily round of research and coding from which emerged a tool to investigate Facebook sessions using a browser. We enjoyed reading it (after we finished deciphering what it meant), and decided that it would adorn – and liven up! – our inaugural issue. So read on, and we hope you learn as much as we did! As the diary is full of technical references, we haven't even tried to explain them. You're on your own this once, but remember: Google is your friend!
Facebook is the largest social networking site with – incredibly enough – over 300 million users (about the same as the total population of the USA!) It is therefore a mine of fascinating information that forensic investigators are very likely to find useful.
As a "public health warning" we should point out that DFM has not tested or validated the tool or any of the activities described in this article, nor will we be setting up an accreditation capability any time soon. We will, however, be testing tools submitted to us in a From the Lab section that will appear in future issues. Until then we welcome any feedback from you on the effectiveness of pdfbook, and any suggested improvements that you might like to share with the developer.
/ DFM Team

Day One
I am in Atlanta for a business trip and I finally give in and join facebook after prompting from my high school reunion's organizing committee. It appears that if I friend enough of them I won't have to go?! I already can see them, know where they live, who they married, kids, etc. I even used facebook to meet up for dinner with my best friend from high school here in Atlanta! OK, I am a facebook user.

Day Two
Seems my article on pdgmail is in the running for best forensic blog entry of the year? Something tells me this is a contest in the same way the USA invites only US teams to its 'world championships', but it's encouraging to be recognized! I should write another tool for something else.

Day Three
Just back from defcon17 and for fun dumped out memory from my facebook session on firefox using pd and did a strings -el on the dump. I recognized some stuff in there... pdfbook was born.

Day Four
Regex is harder than it should be. Thank <insert supreme being of your choice> for kodos: http://kodos.sourceforge.net/

Day Five
So it seems that facebook uses some strange combination of xml, json and html. pdymail was much easier since it was all xml on the back end. I guess I'll start by picking out the obvious json-like structures for my status since I can see that.
They call it 'userInfos' and it has my name, first name, a link to my thumbnail badge photo, the text of my status, and the time of the update. Ooh, the time is even in unix epoch.

Day Six
Uggh... regex is hard.
fbookuserinfosre=re.compile(r"""userInfos.*("name.*?)\}""",re.MULTILINE) is the best that I can do to whittle this down. A good reminder of the usefulness of non-greedy regex, i.e. the ones with ? before the ending string. In this case ("name.*?)\} is capturing everything from "name until the first } encountered.
This is important in memory where there are a million instances of whatever ending character set you need. So ending quickly is less likely to yield false positives.

Day Seven
I gotta remember to not do any of this from work so there's no chance for copyright issues. Not hard since there's selective access to facebook @work but makes for difficult spontaneous brainstorming. Funny also because if we had a facebook investigation I wonder what tool we'd use?

Day Eight
I give up. Munged the userInfos entry into a python dictionary for easy access. Regexing the individual data points was too hard and it works. I see my status update from previous memory dumps I've done:
Name: Jeff Bryner thumbURL: http://profile.ak.fbcdn.net/v228/472/64/q1421688057_3296.jpg status: 2 gamble @the airport or not, that is the question. statusTime: Sun Aug 2 17:35:34 2009
The thumbnail URL can be retrieved using any web tool. No authentication is necessary. So if an investigator was trying to associate a name or face with an entry, this would be one way. Of course the picture you get at the time you run your command may not be the same as when the entry was created.
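[Ed: the effect of that non-greedy qualifier is easy to demonstrate in a couple of lines of Python. The sample string below is our own invention, not real pdfbook output:]

import re

line = 'userInfos:{"name":"Jeff","status":"at the airport"} noise {"x":1}'

greedy = re.search(r'userInfos.*("name.*)\}', line)
lazy   = re.search(r'userInfos.*("name.*?)\}', line)

print(greedy.group(1))  # runs all the way to the last '}' on the line
print(lazy.group(1))    # stops at the first '}' after "name, as the diary describes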
Day Nine
Looks like the stories on your wall are called 'UIIntentionalStory' in facebook-speak. The only plain text info there seems to be the name of the person posting a 'story':
UIIntentionalStory.setup($("div_story_50930182619061234_115316170123"), {"title":"Hide","unfollow":{"users":[{"id":543391123,"name":"Joe Facebook","firstName":"Joe","hideString":"Hide Joe"}]}});
There's a userid that seems consistent throughout, but without having the tool contact facebook the best this will get is the name of someone known to the subject of the memory dump. May be useful; regex is: fbookintentionalstoryre= re.compile(r"""(UIIntentionalStory.setup.*(\"name"\:"(.*)".*"firstName"))""").
An examiner could use this to at least prove that the subject has some sort of relationship with someone posting a story. Though the nature of relationships to flotsam in memory is relative. If you happen upon a politician's page for example you end up with posts from their supporters in your memory, which of course doesn't mean you know the supporter or have a relationship with them.

Day Ten
I spent a good while browsing every interface I could find in facebook to see what other memory structures may be left behind. What? Now my status doesn't even show up? There are a lot of html structures with div classes seeming to refer to facebook styles. I guess they use div css classes in a non-standard way to identify metadata for an entry? Stuff like class="like_box has_likes like_not_exists". This is either strange and/or brilliant but it leaves me with the task of parsing the html, which I don't like since it's brittle, but what about memory forensics isn't?

Day Eleven
Geez, pdgmail was two days tops. pdymail was three just to get the xml output right. Facebook is for the birds. I take it back about the xml, there's no xml that I can find and to get useful information I may have to allow the tool to call back to facebook's API?! That's not very forensicy.

Day Twelve
html hit list:
<div class="UIRecentActivity_Body">
<h3 class="UIIntentionalStory_Message">
<div class="UIStoryAttachment_Copy">
RecentActivity: Gives you entries the user sees when they log into facebook or refresh their page. Stuff like:
Jeff became a fan of <a href="http://www.facebook.com/pages/Fishbone/6519219892?ref=mf" >Fishbone</a>.
There's an 'onclick' tag embedded in the href entry, but it's not useful so I brute forced it out of the output. I guess if you were hacking the API it may be useful?
UIIntentionalStory_Message: Gives you the messages people send to each other's walls. They may belong to the facebook user, or to someone whose wall they visited. They also include the primary user's status messages, aka the "What's on your mind" status box.
example pdfbook output: StoryMessage: Jeff Bryner</a> webgoat..really webgoat is on my mind..glad you asked?
UIStoryAttachment divs are the attachments to story messages that aren't text, like links, pictures, videos, etc. I can't see a good way to parse them automagically. Maybe that's a version 2 feature.

Day Thirteen
Tried it with IE on a sacrificial windows qemu session... of course they issue totally different HTML for IE than firefox: no quotes in the class names, uppercase instead of lower, etc. Brittle... did I mention this would be brittle? OK, back to regex with some flags for case insensitivity, optional quotes, etc. Also I'm remembering from pdgmail that in addition to non-greedy regex, it's important to set an upper boundary. I like doing this with the .{1,X} regex construct. For example:
UIIntentionalStory_Message.{1,90}?UIIntentionalStory_Names says that as long as the UIIntentionalStory_Message is no more than 90 characters away from the following UIIntentionalStory_Names then it's a regex match. This is key to eliminating the false positives in memory strings.

Day Fourteen
I've a feeling that not everyone looks forward to coding over labor day weekend? Reminds me of that bit in Young Frankenstein: Abby someone... Abby Normal.
Hmm, now that I'm finding all this stuff, a lot of it looks to be repeated data. Probably because memory is messy and redundant. I'll add some hashing of the bits that are found and store the results in python dictionaries to limit the duplicates. Not perfect since there'll be a stray character here and there in the flotsam that is memory... but better than needless duplicates.

Day Fifteen
So a wall to wall message is a 'storymessage' with particulars of the sending party and the recipient. The name of the sender is first, then the recipient; both are prefixed by their url. Mine for example is:
<a href="http://www.facebook.com/profile.php?id=1421688057&ref=nf" >Jeff Bryner</a>
So would this be a way to figure out whose facebook traffic you are seeing in the memory dump if the computer you are examining is anonymous in nature? Probably a better way would be to use the status update if it has a 'remove' link, since you can only remove a status if you're the person who initiated it. I'll see if I can target that.

Day Sixteen
Success! I can pick out a likely owner of the memory artifacts by the presence of a remove button in the html. The structure starts with:
<a onclick='ProfileStream.hideStory("div_story_4aa5d7bd29cfd0a12915885", "1421688057", "5377451560287089488", 72, "")
and ends with
<span class="UIButton_Text">Remove</span></a>
The good bit is the 2nd argument in the onclick hideStory function which holds the facebook userid. So now the program collects facebook userids and at the end compares likely owners it found from these structures with the 'remove' button.

Day Seventeen
Well, that's it. A little clean up here and there, consistency in debug output to stderr and I think version one is a wrap. Now I wonder what my twitter account holds... /
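[Ed: by way of illustration, Day Sixteen's owner-spotting trick can be reproduced over a strings dump in a few lines of Python. This is our sketch, not pdfbook's actual code:]

import re
from collections import Counter

mem = open('memorystrings.txt', encoding='utf-8', errors='replace').read()

# the second quoted argument of ProfileStream.hideStory carries the facebook userid
owner_re = re.compile(r'ProfileStream\.hideStory\("[^"]*",\s*"(\d+)"')

for userid, hits in Counter(owner_re.findall(mem)).most_common():
    print(userid, hits)  # the most frequently seen id is the likely owner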
Reference Information
The tool is available from http://www.jeffbryner.com/pdfbook
Example usage: on a windows or linux box, use pd from www.trapkit.de, thus:
pd -p 1234 > 1234.dump
where 1234 is the process ID of a running instance of IE/firefox/browser of your choice.
You can also use any memory imaging software, e.g. mdd, win32dd, etc. to grab the whole memory on the box rather than just one process. You can also use common memory repositories like pagefile.sys, hiberfil.sys, etc. There's a good memory imaging tool reference at http://www.forensicswiki.org/index.php?title=Tools:Memory_Imaging
Transfer the dumped memory to linux and do:
strings -el 1234.dump > memorystrings.txt
pdfbook -f memorystrings.txt
It'll find what it can out of the memory image and spit out its findings to standard out. Happy hunting!
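[Ed: if GNU strings is not available, the -el step – extraction of 16-bit little-endian strings, the form in which Windows applications hold text in memory – can be approximated in Python. A rough sketch; the minimum run of four characters matches the strings default:]

import re, sys

data = open(sys.argv[1], 'rb').read()
# runs of 4 or more printable ASCII characters encoded as UTF-16LE (byte, 0x00 pairs)
for match in re.finditer(rb'(?:[\x20-\x7e]\x00){4,}', data):
    print(match.group().decode('utf-16-le'))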
/ BACKUP TAPE FORENSICS
BACKUP TAPE
FORENSICS
IS HERE TO STAY
Issues and Case Studies
by Gavin W. Manes, James Johnson, Justin Gillespie, Elizabeth Downing, and Michael Harvey, Avansic
/ INTERMEDIATE
Although magnetic tape storage is often perceived as a rarity by digital forensics investigators, there are an increasing number of situations that require tape recovery and
analysis. Many companies use tape backup systems to comply
with various regulatory and statutory requirements, which brings
the forensics issues with their acquisition and investigation to the
forefront. The ability to perform digital forensics investigations on
storage tapes is an important tool in any forensics professional’s
arsenal, and a thorough understanding of the situations and
techniques where these storage devices will appear can alleviate
some of the inevitable issues. This paper summarises the main
challenges to magnetic tape storage forensics, and includes two
case studies of investigations that required backup tape analysis.
/ Introduction
Since the early 1950s, magnetic tape storage has been a
standard backup solution for large data centres due to its low
cost and the compactness of the medium. However, many view
magnetic tape storage as obsolete and therefore little effort has
been devoted to the forensic acquisition and analysis of backup
tapes. Despite the lack of interest in this area, there are several
situations that require forensics investigators to recover and
analyse data from backup tapes. Data recovery professionals
must also be prepared to handle this class of media: in a 2004
survey conducted by the Yankee Group, over 40% of respondents who had occasion to restore systems from tape reported
at least one incident where the information was unrecoverable
due to tape failure [1].
Improvement of forensic techniques for backup tapes is
necessary for a variety of reasons. Certain peculiarities of the
magnetic tape format present unique challenges to the investigator: different types of tapes, proprietary storage formats
and compression algorithms, and the fragility of the magnetic
tape itself can all complicate investigations. The standard
SCSI communication protocol for tape drives precludes low-level acquisition, and tape drives will generally not read past
an End-of-File marker (regardless of what data lies beyond)
without modification of the drive’s firmware.
/ Backup tapes in landmark cases
Backup tapes were the centrepiece of two landmark digital
forensics cases in the USA: Coleman v. Morgan Stanley and
Zubulake v. UBS Warburg [3][4], lending credence to their use
as digital evidence in court. Both of these cases set precedents for the admission and validity of digital evidence in the
modern legal landscape.
/ Coleman v. Morgan Stanley
In Coleman v. Morgan Stanley, Coleman's document production request specified emails from a certain date range, which
according to Morgan Stanley resided on a complex backup
system that required significant resources to recover. It was
later discovered that Morgan Stanley had found backup
tapes containing relevant emails, but had not produced them
in response to the Court’s Order. Furthermore, it was found
that searching these tapes would have been relatively easy,
despite the company’s claims to the contrary. Morgan Stanley
was issued an adverse inference order by the court for failing
to comply with the discovery orders, and was ordered to pay
$1.5 million in damages. Although this decision has since
been reversed, the sanctions relating to the failure to disclose
were not removed.
/ Zubulake v. UBS Warburg
In Zubulake v. UBS Warburg (2003), a wrongful termination suit that ended in a $29 million verdict for the plaintiff,
information was requested which resided on backup tapes.
However, it was found that those tapes had been deleted
after the lawsuit had been filed. An adverse inference
instruction was given to the jury on the basis that UBS
Warburg had failed to preserve emails that it knew to be
relevant to litigation.
In both of these cases, backup tapes proved to be crucial
evidence on which the verdict hinged. Clearly, the proper and
thorough forensic acquisition and investigation of backup
tape media should not be ignored when performing a digital
forensics investigation.
/ Backup Tape Issues
Tape Formats and Hardware Issues
There are many magnetic tape formats, each of which requires
different hardware to read. While internal forensics investigators employed by a company may have to accommodate only
the tape formats in use by that company, a standalone forensics or data recovery company needs to maintain a collection
of tape drives to cover, at least, the most commonly encountered tapes. The most likely formats include DAT, Exabyte, and
DLT, with AIT and LTO types growing in popularity.
Even if a company possesses all of these devices, they
usually will not read past an End of Data marker on the tape,
which can leave unread data on the tape from any previous
backups. Although the SCSI standard specifies a common
interface for all of these drives, it lacks the low-level control commands needed by forensic investigators to make a
complete bitstream copy of the tape. Some drives contain
firmware with a special mode that allows reading past the End
of Data marker. There are currently other proposed solutions
to allow for complete bitstream copies to be created using
customized firmware, but most of the research in this area is
either theoretical or not publicly available.
Archive Types
Beyond the hardware issues, investigators also encounter
problems with the large number of backup archive formats
in use. Tape hardware only provides a medium for the data,
while a staggering variety of software solutions provide different ways of storing the data on the tape. The most common
archive types are the tar and dump formats typically used
on Linux systems and Windows’ built-in NTBackup; however,
most vendors providing backup software solutions have their
own proprietary tape formats [5]. This presents significant
problems for investigators, especially if the software used
to create the tapes is rare or obsolete. These situations may
require the use of a data recovery company.
Integrated Solutions
Solutions to the problems of backup tape forensics will be either
hardware or software related. Hardware considerations for
deploying a backup tape solution in a forensics lab must take
into account the most common types of tapes that the investigator is likely to encounter in the field. Several stopgap solutions
have been developed for the forensic acquisition of backup
tapes, through the use of standard tape management tools, and
occasionally tape recovery software [2]. These tools address the
problem, but require more preparation than would be the case
for a hard drive. Again, the wide range of tapes, archive formats,
and backup recovery software complicates the issue.
Recommendations
Duplication of the backup tape to prevent accidental damage
or spoliation of evidence is highly recommended when attempting to extract information. Safe duplication of tape data
can be achieved, usually in a single read pass, by using dd on
a Linux machine [2][6]. This program creates a set of tape files
which can then be copied to a duplicate tape. All data recovery
operations should be performed on this duplicate tape to prevent unnecessary wear and tear on the original physical media
and to prevent any accidental destruction of evidence.
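As a minimal sketch of that single read pass (the device name, block size, and end-of-data convention here are assumptions, and any real acquisition should follow the methodology in [2]), the tape files can be read off and hashed in Python:

import hashlib

BLOCK = 64 * 1024
file_no = 0
with open('/dev/nst0', 'rb', buffering=0) as tape:  # non-rewinding tape device
    while True:
        md5 = hashlib.md5()
        data_seen = False
        with open('tapefile%03d.dd' % file_no, 'wb') as out:
            while True:
                block = tape.read(BLOCK)
                if not block:  # an empty read means a filemark was reached
                    break
                data_seen = True
                out.write(block)
                md5.update(block)
        if not data_seen:  # two consecutive filemarks: assume end of data
            break
        print('tapefile%03d.dd md5=%s' % (file_no, md5.hexdigest()))
        file_no += 1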
Extraction of the files from the tape can be performed in a number of ways. It is generally acceptable to use the original backup tape software to restore the information onto a hard disk. Several third party solutions exist that can handle a variety of tape formats and are especially useful when the original tape format is unknown. Nucleus Technologies provides a tape backup recovery software product that can handle some of the most basic formats, such as tar and NTBackup. This solution has some features that are desirable to an investigator, such as creating an image of a tape at the time of recovery. Index Engines also provides an offline tape recovery solution: it requires a lease of specialised hardware to attach the tape library, but has a more robust set of compatible formats and is geared towards E-Discovery users. Whichever method is utilised, the software should be used to restore the files from the backup tape onto a hard disk, which can then be imaged or directly analysed as part of an investigation.
/ Forensic uses of backup tapes
Several practical and theoretical uses of backup tapes can arise during digital forensics investigations. Backup tapes are typically used to retrieve data in the event of server failure or retirement, and are superior to backup files due to their inherent resistance to tampering or destruction of data. Forensics professionals can use backup tapes when a server is too large for onsite collection, since the tapes can be taken to a forensics lab and their contents restored onto a RAID system or other large storage device.
There are more innovative uses of backup tapes during forensics investigations. Investigators can build a sophisticated timeline of a computer's history by comparing the current state of the system to a disk image in order to identify signs of tampering. If such nefarious activity did take place, it is unlikely that both the system and the backup tapes would have been modified consistently.
Backup tapes can also be used to ensure compliance with evidentiary rules regarding spoliation. They can show that while evidence existed on a machine at a certain point in time, it was either negligently or maliciously deleted after that date. Using tapes from an intermediate period can also provide information regarding the loss or destruction of data that is unavailable on a disk image of the current system. This information could help provide grounds for a negative inference instruction.
Investigators can use backup tapes to find files and documents that have been deleted, and it is possible to recover information from log files that have been overwritten on the current system. These logs contain information about the history of the system and other resources such as network systems, users, logs, services, etc. These resources may no longer exist and their removal may signal illicit or negligent behaviour; such information could be useful either in the discovery process or to establish grounds for arguing that evidence was not properly preserved.
/ Case studies
The practical uses of backup tapes are illustrated by the following two case studies. Both highlight some of the complexities that can arise during the collection and investigation portion of the digital forensics process when using backup tapes.
Case Study #1
A company required forensics investigation of an email server related to the departure of an employee two years previously. The server had been decommissioned and removed from the company inventory since the events in question, and a newer server had taken its place. The IT staff had performed manual migration of the mailboxes of current employees during the changeover. However, the company had retained backup tapes of the original server, several of which contained information from the time period in question. Since the backup tapes represented the only potential evidence from that time period, it was critically important to recover and examine the information on the tapes in the most forensically sound manner possible.
The forensic acquisition of these backup tapes presented
a challenge for the investigation team. Since the backup
tape was of a format the forensics team currently did not
have available, equipment purchases had to be made, and
extensive research was conducted to determine the soundest
forensic methodology for analysing the tapes. Even though
this case featured only one tape, the amount of time needed
to research the factors involved was significant due to the
presence of unfamiliar technology and the necessity to duplicate the information in a forensically acceptable manner.
The team decided to use the dd program to create a bit-for-bit
copy of the accessible data on the tape, write it onto a newly
purchased tape of the exact same type, and then perform
recovery operations on the cloned tape. As a result, an extra
two weeks of preparation time was added to the investigation.
Even when an investigation team is prepared for dealing with tapes, recovery can be a time-consuming operation.
Case Study #2
The second case study involved a large, mission-critical e-mail
and file server that was to be collected from a company on
the opposing side of a lawsuit. This situation is often called
a “hostile” collection, and makes it even more important
than usual that forensic acquisition occur without unplanned
business interruptions. In this case the terms of the collection
agreement stipulated that the forensics team was limited to
a single day of access to make a “live image” of the machine.
Due to the size of the server, it quickly became clear that even
with a file-system level acquisition of active files, the imaging
process could not be completed during the allotted time.
However, during discussion of alternatives with the company’s IT team, investigators discovered that backup tapes
for the server were available. Indeed, the tapes would provide
information from the server that was more likely to provide
relevant information, since it was from a time closer to the
events in question. Therefore, the investigator decided that
the backup tapes offered an appealing alternative to traditional acquisition.
With the lawyers’ approval, the investigation team collected
a set of eight backup tapes from the company. These included
multiple sets of incremental backups from before and after the
date in question, providing a healthy time frame for investigators to examine.
Due to the large number of tapes acquired from the company, the size of each tape, and the perceived complexity
of analysing a set of incremental backups, an external data
recovery company was employed to restore each tape to hard
disk (at a substantial cost). Once this had been done, the
hard disks provided by the data recovery company were imaged and analysed. Initial analysis of the hard disks revealed
several large file-based backups, which forensic software was
unable to process. So the files were exported and the software
used to create the backups was determined. Each individual
file was then processed and extracted onto disk, after which it
was imaged and processed using forensic tools.
By employing the external data recovery company, the
forensic company incurred large additional costs. However,
these may have been no greater than the alternative costs of
new equipment and processing the incremental backup tape
format. Additionally, they encountered more file-based backups and spent extra time processing those files, in case any
contained relevant data. Although these costs and challenges
may seem like disadvantages to the investigators, there were
also two important advantages. First, they were able to collect
the information from the collection site in a very timely manner. Second, they were able to compare the different states of
the server over the time frame from which they had collected
information. Therefore they could accurately compare the
state of a user’s mailbox or personal files from different periods of time, which assisted in reconstructing the events of the
time period in question.
/ Conclusion
Tape backup systems are an important component of any case
involving corporate lawsuits. Indeed, the involvement of tape
backup systems in civil litigation is increasing, which brings all
of the issues with their acquisition and investigation to the forefront. Many of the same problems encountered by investigators
for the past few years still exist, since little research has been
devoted to this area. Therefore, forensics companies should be
prepared to handle digital information from backup tapes, since
so much can hinge on a company’s ability to retrieve information from them. Future work in this area includes the creation of
a vendor-neutral tool to retrieve data from the large variety of
tapes in corporate use. In the meantime, digital forensics investigators should familiarise themselves with the potential issues,
techniques, and available solutions regarding backup tape
forensics in order to be most effective to their customers. /
References
[1] Gruener, J. and Balaouras, S. (2004). Reexamining Tape Media
Reliability: What Customers Should Know. Retrieved October 14,
2008, from ftp://ftp.compaq.com/pub/products/storageworks/
ECN-11396-Consulting.pdf
[2] Nikkel, B. (2005). Forensic Acquisition and Analysis of Magnetic
Tapes. Digital Investigation, 2(1), 8-18.
[3] Coleman (Parent) Holdings, Inc. v. Morgan Stanley & Co., Inc.,
2005 WL 679071 (Fla. Cir. Ct. Mar. 1, 2005)
[4] Zubulake v. UBS Warburg LLC, 217 F.R.D. 309 (S.D.N.Y. 2003).
[5] McCallister, Michael. SUSE Linux 10 Unleashed, Sams Publishing, 2006.
[6] Watters, Paul. Solaris 10: The Complete Reference, McGraw-Hill
Professional, 2005.
/ Lead Author Bio
Gavin W. Manes Ph.D. received his Doctorate in Computer
Science from the University of Tulsa where he specialized
in information assurance research. He went on to perform
research and teach courses in digital forensics and
telecommunications security at the Institute for Information
Security at the University of Tulsa. He is currently the
founder and CEO of Avansic, a provider of digital forensics
and electronic discovery services. He is well published in
the fields of digital forensics and information security, and
has presented a number of related topics at conferences and
symposia across the country. He also serves as an expert
witness through courtroom testimony, depositions, and
consultation with legal professionals.
/ FEATURE
BRIEF INTRODUCTION
TO COUNTER-FORENSICS
How Investigators Can Find Themselves Looking for What Isn’t There
by Noemi Kuncik and Andy Harbison, Grant Thornton
/ INTERMEDIATE
With the ever-increasing growth of computer performance
and storage abilities, together with the expansion of
networks and falling costs, digital crime is rapidly becoming an everyday worry not only for personal users but also
for corporations of all sizes.
Thanks to the sophistication of today’s computers and networks, and the spread of knowledge about their workings, digital
criminals can not only carry out their plans and mostly remain
anonymous but also cover their tracks. By doing so, they make
it extremely difficult and time-consuming for a digital forensic
investigator to put the pieces together and solve the puzzle.
In the year 2000, the main focus of digital forensic practitioners
was probably cyber-crime investigation. Large-scale electronic
discovery did not really come into widespread use until 2003
or 2004, and the large majority of precedent-setting cases date
from 2003 or later. The highly influential decisions in Zubulake v.
UBS Warburg were made in 2003–2005, and the U.S. Federal Rules of
Civil Procedure were modified to take into account the discovery
of electronically stored information in December 2006.
Digital forensic investigation, especially in criminal cases,
requires a considerable knowledge of computer science.
Nowadays it can be argued that most people working in the
digital forensic field are electronic discovery practitioners,
many of whom have insufficient knowledge about computers
beyond what they see in front of them. Many, perhaps most
of them are trained on a narrow range of commercial tools.
Platform-based certifications such as the EnCase Certified
Examiner (EnCE) are widespread.
Such training focuses on the correct operation of the software in question and the interpretation of the results it displays. It typically does not consider in detail how computers
operate behind the scenes – how they save data and where,
what happens when they are turned on or off, and so on.
This can be a handicap when up against advanced computer
users who deploy sophisticated methods to hide their identities, and cover their tracks by means of counter-forensics.
In most cases misuse, abuse and suspicious activities that
take place inside a computer system can be traced back to their
perpetrators by an adept digital forensic investigator. However,
before such an investigation can even be started the investigator needs to be aware that some form of evidence manipulation has occurred. Often this is more a matter of knowing what
evidence should be present but is not, rather than detecting the
overt use of counter forensic tools. Consequently, investigative
experience and a thorough grounding in computer science are
essential in detecting the use of counter-forensics.
We have seen a considerable upsurge in the use of counter-forensics in the last two years, but this has not been reflected
in the literature. It is our concern that with the proliferation of
electronic discovery, where less attention is typically paid to
forensic and ephemeral data, counter-forensics may go undetected. We have begun to see sophisticated counter-forensic
measures being deployed in many of our cases. It has now
become routine procedure for us to check for prior evidence
tampering in every disk we analyse.
There are many possible causes of this upsurge. Public
awareness of digital forensics has greatly increased with its
growing use in civil and criminal cases. Most developed countries have now seen a number of high-profile legal cases where
digital forensics techniques have been employed with great
success. It is equally possible that the proliferation of police
procedural TV shows in recent years, such as “CSI”, “Forensic
Detectives”, and many others (as well as books), have caused
this increase in counter-forensic sophistication. These shows
regularly include examples of digital forensics techniques in use
and have made people aware of the resources at the disposal
of investigators, and thus of their own risk of exposure. Another
cause may be simply the growing level of IT understanding
among the general public in most developed countries.
In this article we will give a brief outline of counter-forensics, and then discuss the different locations where data can
be found in a computer system, and the different types of
data present. We will focus mainly on the Microsoft Windows
family of operating systems which are installed on most of
the world’s computers, especially PCs. Consequently most
evidence elimination and counter-forensic tools are oriented
towards removing data from Windows and its NTFS file system. The large majority of documents retrieved in electronic
discovery procedures will also be acquired from computers
running some version of Windows.
/ Counter-Forensics
Counter-forensics (or “anti-forensics” as it is often termed in
the USA) is the collective term for techniques intended to complicate, inhibit, subvert, or delay forensic techniques for finding evidence. It can be seen as encompassing a broad range of
techniques from subtle and highly sophisticated data altering
techniques to methods as crude as smashing evidential hard
drives with a hammer. The purpose of counter-forensics is to
make sure that evidence is not discovered and subsequently
disclosed to a court, arbitrator, or some other forum. Additionally, in most cases at least some attempt is made to disguise
the fact that evidence is being altered or withheld. In the vast
majority of cases such tampering with evidence will damage
the interests of those using these counter-forensic techniques.
Counter-forensics is sometimes seen as being mostly about
evidence destruction or erasure, but this is not the whole
story. In many instances, particularly in respect to evidence
in civil cases, it may not be necessary for counter-forensic
techniques to destroy or erase data on evidential media. It
is enough if they make it more difficult for an investigator
or analyst to recover the data. Commercial digital forensics
specialists usually operate within time limits and charge an
hourly or daily rate for their services. Slowing an investigator’s
rate of progress, by disrupting the evidence or converting it
to a format that is difficult to search, increases the costs of
an investigation for the clients or legal professionals who pay
the bills. This can deter them from pushing enquiries as far as
they otherwise might.
As we have already pointed out, many modern digital forensics
practitioners are trained to use specialised software applications
such as EnCase and FTK, but do not understand how the computers they examine actually work at a deep level. Some counter-forensic tool developers have even designed their applications
specifically to defeat common forensic analysis applications.
They realise that many unsophisticated users will unquestioningly believe outputs given to them by their tools, and so can be
easily deceived. Detecting the use of counter-forensic methods is
often a matter of knowing what should be on a hard drive, but is
missing from the drive being investigated. If an investigator does
not understand exactly what should be there in the first place, he
is not going to know if it has been removed.
For example, some counter-forensic applications will
remove the Windows file table entries associated with deleted
files, making it considerably less straightforward to identify
tampered or erased files. Some such programs will simply
remove the first few characters of each file table entry in the
knowledge that Guidance Software’s widely used EnCase
application will not then resolve the entry. However, the mere
fact that such file table entries are missing should warn the
investigator that something is amiss.
Another major problem is that very little research has been carried out on identifying the telltales left by general-purpose counter-forensic tools and determining what data specific tools actually leave behind. The counter-forensic tools used by computer hackers are often highly specific, targeting particular types of forensic evidence. A skilled hacker can surgically remove practically every trace of his presence on a computer. The general-purpose evidence removal tools used by non-hackers are different, and are often either too general – removing data that does not need to be removed – or too specific – failing to remove data that does. It should be possible to work out which tool has been deployed to destroy data by examining what has been removed and what has been missed. Unfortunately, most research appears to be done in the field of hacking counter-forensics, and the more general tools used by less sophisticated IT users have not really attracted much attention. [2]
A related issue is that most research on counter-forensics is done in the context of computer security and counter-hacking rather than the more mundane arena of civil law, despite the fact that counter-forensics can potentially do far more damage when employed in civil cases than it does in hacks. (The prosecution rate for hackers was extremely low even before counter-forensic techniques became common, so their introduction has not greatly altered the overall picture.) [3]
In hacking, counter-forensics may prevent an investigator from pursuing a hacker, but deployed in legal cases it acts directly to subvert the fair resolution of disputes. Its purpose is to “load” the scales of justice by altering the “database” of information available to the court and to the parties to any dispute, necessarily influencing the likely eventual outcome.
/ Forensic evidence on computers: What is there to destroy?
We all know that computers typically store large numbers
of word processing files, spreadsheets, databases, cached
Web pages, emails and other “working” documents as part
of their normal operation. These documents are the material
upon which most legal activity is based. It is also becoming
well known that most operating systems in current use do
not delete files very efficiently, and that deletion does not by
any means guarantee complete erasure of all data from the
disk. The “empty” portions of a hard drive can, in fact, be full
of fragments (more or less complete) of files deleted earlier –
sometimes, much earlier.
Computers are designed to retain and retrieve information
very efficiently and to protect the integrity of files and documents against internal failures and external errors. In consequence they tend to retain surprising numbers of copies of
the files stored on their disks. These are usually deleted when
the software has finished with them, but they persist in the
“empty spaces” of disks long after they are supposedly gone.
/ Anti-Forensics and
Counter-Forensics
“Anti-forensics” is a less satisfactory term than “counter-forensics”. The term “counter-forensics” implies that measures are taken to complicate, inhibit or subvert forensic investigation. “Anti-forensics” suggests that these measures actually prevent forensics being performed, which rarely if ever happens. Even if an evidential hard drive is actually replaced, it is still possible to determine this fact, providing useful evidence to the investigation. “Anti-forensics” is a term originally coined by the computer hacking community, who tend to see forensics in a negative way.
Moreover, the term “anti-forensics” appears to deny Locard’s Exchange Principle, one of the fundamental tenets of all forensic science. In essence, the principle simply states that “every contact leaves a trace”. In digital forensics this means that any action on a computer device can change the data on that device.
Forensic data also accumulates in the “intact” parts of a computer’s file system. Modern computers are designed to be user friendly, which regularly involves giving users hints and reminders as to where important documents are located. In addition, operating systems keep regularly updated lists of links to the programs and documents most often visited by users. The upshot is that computers save a lot of information besides the actual “working” documents stored on the hard drive. This information is of immense value to forensic examiners in gaining an understanding of what computers have been used for over the previous days, months, or even years.
The forensic material on computer hard drives can be
broken up into a number of broad categories, not necessarily
mutually exclusive:
Active Data Active data is the working data on a computer: the operating system, programs, and working files stored on hard drives. The documents, emails, spreadsheets and other data people use day-to-day are active data, and consequently this is the material most often used in administrative, civil and criminal litigation.
Temporary or Replicate Data This is the mass of copied
data stored on hard drives. It is produced in large quantities
by most popular applications. For example, Microsoft Word
automatically makes copies on the hard disk of whatever
documents are currently being written or edited. It does this
so that, if the program or computer crashes, the working document will not be lost. As soon as the completed document is
saved, Word will delete its temporary copies and the user will
often never be aware that they ever existed. But on modern
computers, deletion is not the same as erasure and the data
from Word’s temporary copies of a document can hang around
for a long time in the empty spaces of the disk.
Another common source of replicate data is Internet browser
software like Internet Explorer, Firefox, or Safari. These programs usually store the component parts of the Web pages
they download from the Internet in an area called the “browser
cache”. They do this so they can reuse the data if the user revisits the Web page at a later time. This sometimes saves the computer having to download the full Web page a second time. This
was a big time saver in the past when Internet connections were
typically a lot slower than today. Of course, it also means that
a user’s Web browsing can often be literally reconstructed from
the browser cache, often to his considerable embarrassment.
(Recently, browser suppliers have started providing optional
features for deleting much of this potential evidence).
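Because cached page components and their URLs survive as plain byte strings, even a naive string search across a copied cache file, or an image of unallocated space, can recover a browsing trail. The Python sketch below is illustrative only: real cache formats have indexes and metadata that dedicated tools parse properly, and the input file name is a placeholder.

# Sketch: recover URL strings from a raw binary dump (e.g. a copied
# browser-cache file or an image of unallocated space). Real cache
# formats are structured; this is the crudest possible first pass.

import re

URL_PATTERN = re.compile(rb"https?://[\x21-\x7e]{4,200}")

def extract_urls(path):
    with open(path, "rb") as f:
        data = f.read()
    # Deduplicate while preserving first-seen order.
    seen = {}
    for match in URL_PATTERN.finditer(data):
        seen.setdefault(match.group(), None)
    return [url.decode("ascii", "replace") for url in seen]

for url in extract_urls("cache_dump.bin"):
    print(url)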
Residual Data This is the data left behind in the “empty spaces” of the drive. The two principal repositories of residual data on any computer are the “unallocated” and “slack” spaces.
To understand these we must briefly review how a hard drive operates. In simple terms, hard drives work in the same way as old-fashioned libraries. The files are arranged across the hard drive much as books are organised on the shelves of a library, and like a library the disk maintains an index system, usually called the file table. When a user wants to access a file, the computer does not search through the disk looking for it – that would take far too long. Instead it goes to the file table (the “card index”), looks up exactly where the file is, and then goes straight to it.
When a file is deleted, most file systems do not overwrite the space on the disk where the file was stored – in effect “removing the book from the shelves”. Instead a small note is made on the file table entry (the “index card”) that the file is now “deleted”, and the entry and the space on the disk become available for reuse.
The computer takes this short cut to save time. Hard drives operate very slowly indeed compared to the computer’s processor and memory. They are therefore a potential bottleneck, throttling system performance. Overwriting deleted files would take a lot of time, and in any case they will, in theory, be overwritten with the passage of time, so the file system does not do it. In the meantime, however, a digital forensics specialist may still be able to retrieve the abandoned fragments of those deleted files.
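The “small note” is quite literal. On FAT file systems, for instance, deleting a file simply overwrites the first byte of its 32-byte directory entry with the marker value 0xE5; the rest of the entry, and all of the file’s data, are initially left in place. A minimal Python sketch of reading those markers from a raw directory region follows; the file name and the fixed offsets are simplifying assumptions, and a real parser must also handle long-file-name entries.

# Sketch: list deleted entries in a raw FAT directory region.
# 0xE5 in the first byte marks a deleted entry; 0x00 means
# "no more entries". Long-file-name entries are ignored here.

ENTRY_SIZE = 32

def deleted_entries(path):
    results = []
    with open(path, "rb") as f:
        while True:
            entry = f.read(ENTRY_SIZE)
            if len(entry) < ENTRY_SIZE or entry[0] == 0x00:
                break
            if entry[0] == 0xE5:
                # The first character of the 8.3 name is lost; the
                # remaining ten bytes usually survive.
                name = b"?" + entry[1:11]
                results.append(name.decode("ascii", "replace"))
    return results

print(deleted_entries("dir_region.bin"))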
Sometimes the space on the hard drive used by a deleted file is reused before the corresponding file table entry has been reused. In this case a digital forensics specialist will still be able to determine the name, creation date, size, and other characteristics of the deleted file, even if she is unable to recover the deleted file itself. On other occasions the file table entry is reused, leaving the data intact on the hard disk. The file is then said to be in “unallocated” space. The great majority of the empty space on a hard drive is made up of this “unallocated space”. It is normally a junkyard of different file fragments from documents, system files, and other ephemera.
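Because the file table no longer points at this material, it has to be recovered by “carving”: scanning the raw image for the signature bytes with which common file types begin and end. The Python sketch below carves candidate JPEG images (which open with bytes FF D8 FF and close with FF D9); it is a bare-bones illustration of the idea rather than a substitute for a real carver, and the image file name is a placeholder.

# Sketch: carve JPEG fragments out of a raw disk image by signature.
# Reads the whole image into memory - fine for a sketch, not for a
# multi-terabyte drive.

SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(image_path, max_size=10 * 1024 * 1024):
    with open(image_path, "rb") as f:
        data = f.read()
    count = 0
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start)
        if end != -1 and end - start < max_size:
            with open(f"carved_{count:04d}.jpg", "wb") as out:
                out.write(data[start:end + 2])
            count += 1
        start = data.find(SOI, start + 1)
    return count

print(carve_jpegs("disk.img"), "candidate JPEGs carved")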
Slack space is more persistent. It, too, is a by-product of the way hard disks are organised. In order to further speed up the process of finding the location of a file on the hard drive, the computer divides the drive’s address space into a large number of units called “clusters”. On Windows, clusters are usually 4 kilobytes (4,096 bytes) in length. The start of any document can only lie at the start of one of these clusters. Of course this is rather inefficient in terms of storage capacity: a 1 kilobyte file will still take up 4 kilobytes on the disk, and 3 kilobytes of space will be wasted. This “wasted” space is called the cluster slack (or sometimes the cluster tip).
Now, say a 4 kilobyte file is deleted, and the cluster is later reused by the computer to store a 1 kilobyte file. Obviously the first kilobyte of the cluster will contain the data of the new file, but the remaining 3 kilobytes will still contain the last three kilobytes of the old file. This old data will be preserved on the disk until the new file is itself deleted. Hence, even if a user thinks she has deleted a document, parts of it can persist in slack space for months or years afterwards. See Figure 1 for a visual explanation of how data is stored in clusters and what happens after data is deleted.
Figure 1 – Storing data in clusters
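The arithmetic behind Figure 1 is simple enough to be worth spelling out. A minimal sketch, assuming the common 4,096-byte cluster size:

# Sketch: how much slack does a file leave in its final cluster?

CLUSTER = 4096  # bytes; 4 KB is typical on Windows, but check the volume

def slack_bytes(file_size):
    """Bytes left over in the last cluster allocated to the file."""
    remainder = file_size % CLUSTER
    return 0 if remainder == 0 else CLUSTER - remainder

# A 1 KB file in a 4 KB cluster wastes 3 KB, and if the cluster
# previously held a deleted file, those 3 KB still hold its old data.
print(slack_bytes(1024))  # 3072
print(slack_bytes(4096))  # 0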
Systems Data Modern operating systems accumulate a lot of tracing data within themselves, usually in an attempt to make the computer more user-friendly and to help users work more productively. A lot of this data can be of immense value to a digital forensics investigator. For example it can tell them:
• The files and documents most recently used.
• What folders were opened, and when.
• The creation, last access, and last modification dates of the files stored on the system.
• What users logged onto the computer, and when.
• What devices, such as USB pen drives, have been connected to the computer, and when.
• What web addresses have been typed into web browsers.
• Which programs have been used on the computer, by whom, and how often (if the system offers per-user authentication).
Often this data replicates and corroborates other data stored
in the active spaces, logs and historical data, making it a useful resource for the forensic investigator.
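Some of this is exposed by the most ordinary of APIs. As a small, hedged illustration in Python (the directory path is a placeholder; note also that what st_ctime means differs between platforms):

# Sketch: pull modification/access/creation dates for files in a folder.
# On Windows, st_ctime is the creation time; on Unix it is the time of
# the last metadata change, so interpret it with care.

import os
from datetime import datetime, timezone

def mac_times(folder):
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        st = os.stat(path)
        fmt = lambda t: datetime.fromtimestamp(t, timezone.utc).isoformat()
        print(name,
              "modified:", fmt(st.st_mtime),
              "accessed:", fmt(st.st_atime),
              "created/changed:", fmt(st.st_ctime))

mac_times(r"C:\Users\suspect\Documents")  # placeholder path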
Logs and Historical Data Most computers regularly log the activity and performance of both the operating system and the applications running on them. This is done to help administrators diagnose problems on the system, to help users remember what they did in the past, or for security reasons. Obviously this data can be enormously helpful to a forensic investigator trying to assemble a timeline of events. For example, the key logs found on a Windows computer are:
Event Logs, where applications and the operating system record events such as hardware and software errors, system shutdowns and restarts, and many others.
History Logs, which record every Web address, component and cookie file accessed by the computer, together with the time and date of access. The history logs also record a certain amount of file access data.
API Logs, which record the connection of devices (such as USB drives) to the computer.
Application Logs, which contain details of events logged by applications such as media programs. The events to be written are determined by the developers of each program, not the operating system.
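Once entries from several such logs have been parsed out, assembling the timeline is just a merge by timestamp. A toy Python sketch of the idea (the log contents here are invented placeholders; parsing the logs themselves is the real work):

# Sketch: merge events from several parsed logs into one timeline.

import heapq
from datetime import datetime

def parse(source, rows):
    # rows: (ISO timestamp, message) pairs from some log parser
    return [(datetime.fromisoformat(ts), source, msg) for ts, msg in rows]

event_log = parse("event", [("2009-10-29T09:14:00", "system start")])
history = parse("history", [("2009-10-29T09:20:12", "visited example.com")])
api_log = parse("api", [("2009-10-29T09:18:47", "USB drive connected")])

# Each list is already in time order, so heapq.merge gives the
# combined chronological view without re-sorting everything.
for when, source, msg in heapq.merge(event_log, history, api_log):
    print(when.isoformat(), f"[{source}]", msg)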
/ Conclusion
It is dangerous for digital forensics specialists to ignore the possibility that evidence on the computers they analyse has been tampered with or deleted. Our own experience is that a significant proportion of the computers we analyse have undergone some form of evidence modification before coming into our possession.
The safest approach for an investigator is not automatically to assume that his forensic applications are telling him the whole story. Even the most apparently complete tools have their weaknesses, and investigators should not be afraid to look at the raw data to double-check whatever the software is telling them. Of course, this requires that the investigator understand where the tools he is using are getting the data they display on the screen.
Above all, a complete digital forensics skill set does not begin and end with a platform-specific certification. A good investigator should take every opportunity to expand her knowledge, and should not be afraid to look more deeply at the underlying operating principles of the systems she investigates.
A useful piece of advice, even for commercial digital forensics specialists (for whom every hour must be accountable), is not to be scared to follow your nose. If a forensic tool gives findings that are difficult to explain, or that look inconsistent with the other evidence you are seeing, do not be afraid to look “under the bonnet”. /
REFERENCES
[1] Nelson, S.D., Olson, B.A. and Simek, J.W., The Electronic Evidence and Discovery Handbook, American Bar Association, 2006.
[2] Clarke, A. (Inforenz), How Effective are Evidence Eliminators?, presentation to the COSAC Conference, Kildare, Ireland, 2002.
[3] http://www.usdoj.gov/criminal/cybercrime/cccases.html
/ Author Bios
Noemi Kuncik is an IT forensics specialist with a BA (Honours) degree in Computer Science and a Masters in Computer Science and Informatics from University College Dublin. Noemi worked with Mick Moran of Interpol to create a training programme countering child grooming, and is researching the use of data mining in conjunction with digital forensic investigations.
Andy Harbison is a Director and IT Forensic Lead holding a BSc in Electronic Engineering and MScs in Business Administration and Information Technology. Andy lectures at University College Dublin, the Law Society of Ireland and Dublin City University, and has written articles on computer fraud, electronic litigation and data privacy. He is a regular speaker at conferences.
/ IN THE NEXT ISSUE
COMING SOON…
In the next issue of Digital Forensics Magazine
We’ve got some great content already lined up for the next issue of Digital Forensics Magazine, some of which we want to tell you about right here:
/ Operational forensics
Incident reporting right from the heart of an operational situation.
/ Setting up the optimum digital forensic lab
Not all labs are created equal; learn how your lab should be
equipped and staffed according to the needs of your organisation.
/ Know your enemy
A continuing delve into the sinister world of counter forensics
and some of the tools and techniques often used to defeat the
forensic investigator.
/ Steganalysis
Finding the ghost in the machine. Can forensic investigators ever
know enough about this to include it in investigations? What tools
are out there that can help?
/ Needles, haystacks and the forensic investigator
Bad joke, maybe? No, not at all – we look into peer-to-peer (P2P) file sharing networks and how they can be used for data mining and theft of personal data.
/ Riding the Wi-Fi wave with wireless network
forensics
What artefacts exist to assist the forensic investigator?
On the legal side we’ll continue to look deep into the role of expert witnesses and the impact of varying legislation across jurisdictional boundaries. We’ll also analyse the forensic investigator’s relationship to eDiscovery and what the forensic mindset can contribute to the development of this booming new sector. Also, we’ll take a look at the impact of cloud computing on how we investigate cybercrime. Our “In The Lab” section will introduce the topics of live memory forensics and security visualisation (looking at data patterns and relationships).
The Magazine has access to a large and rapidly growing community of practitioners and academics, whose knowledge we encourage you to tap. Just ask our virtual team of experts for answers to the questions and problems you are encountering. (See the Readers’ Letters section for details on how to pose your questions.) You can submit proposals via our website at www.digitalforensicsmagazine.com
NEXT ISSUE PUBLISHED
FEBRUARY 2010
/ BOOK REVIEWS
BOOK REVIEWS
Real Digital Forensics:
Computer Security and Incident Response
Authors: Keith J Jones, Richard Bejtlich, Curtis W Rose
Publisher: Addison-Wesley
Date of Publication: 3 October 2005
ISBN-13: 978-0-321-24069-9
List Price: £35.90 (UK), $59.99 (USA)
Reviewer: Chris Bilger
Although “Real Digital Forensics: Computer Security and Incident Response” was published as long ago as 2005, it still provides a solid all-round introduction to IT forensics. (A new edition entitled “Real Digital Forensics 2” is planned for mid-2010.) Weighing in at 688 pages, this book covers Windows, Unix and Linux and explains digital forensics from the perspectives of incident response and case law. It also discusses in depth a number of commercial and open source tools used to perform forensic analysis. The DVD which accompanies the book contains several sets of sample intrusion data generated by attacking live systems, and is extremely useful for practice forensic examinations.
The first section, Live Incident Response, shows how to
carry out an incident response process on Windows and
Unix platforms. It covers the types of information to collect
from a machine, what to look for, and why this information is
important in determining that an attacker has compromised
a resource.
The next part, Network-Based Forensics, looks into the
different kinds of data that can be collected on a network. It
examines how to use each type of data in a forensic examination, and describes the tools used to capture different kinds
of data. As before, specific details are given on analysing
evidence on different operating systems.
The third part, Acquiring a Forensic Duplication, is devoted to creating a sound forensic image. It is important that suitable guidelines are followed so the process of creating an image will hold up in a court of law. This is done by following appropriate procedures and using write blocking tools. Detailed information is provided on creating images with commercial and open source products.
Part four, Forensic Analysis Techniques, is the longest section of the book. It covers a myriad of techniques that can be used to squeeze the last drop of useful information from data. The topics include:
• Recovering deleted files
• Electronic discovery
• Reconstructing web browsing and email activity
• Windows registry reconstruction
• Analysis of different forensic tool sets for Windows and Unix/Linux
• Analysing unknown files.
These chapters provide the critical information that is
needed for most forensic examinations.
Part five, Creating a Complete Forensic Toolkit, deals with
tools for Windows and Unix/Linux and how to create a robust
toolkit that will aid a forensic investigator during examinations. It shows how to make sure the tools that are used do
not alter information on the host system. Additional information is given on how to make a bootable Linux distribution that
includes the tools.
The sixth section, Mobile Forensics, discusses forensics
as applied to mobile devices. It covers multiple tools
that can be used for forensic analysis of a Personal Digital
Assistant (PDA). Chapters are devoted to creating duplications
of USB devices and compact flash cards and the analysis of
these devices.
The last section of the book, Online-Based Forensics, looks
into popular on-line email sites and how to track emails sent
through these services. It also investigates ways to determine
domain name ownership. There is an appendix that introduces
the Perl scripting language, which can be useful for sorting
through large amounts of data.
This book is easy to read and comprehend, and its authors
have an abundance of experience in the field of forensics and
incident response. Keith Jones has been an expert witness
on several cases. Richard Bejtlich is Director of Incident Response at the General Electric Company and author of
the TaoSecurity blog; he has written and contributed to a
number of other books on IT security (Extrusion Detection:
Security Monitoring for Internal Intrusions, The Tao of
Network Security Monitoring: Beyond Intrusion Detection…)
Curtis Rose has 18 years of experience in computer forensics and Information Security, and leads teams that conduct
computer examinations.
The authors do a great job of stepping through each chapter
and explaining techniques in a way that is easy to understand.
The section of the book that helped me most professionally
was section five, Creating a Complete Forensic Toolkit, which
explains exactly how to create a bootable toolkit that will not
alter data on the host system. On the whole, this book provides a consistent introduction to a wide array of IT forensics
topics. One topic that feels incomplete, however (perhaps because of the book’s vintage), is mobile device forensics: there is no information on mobile phones or MP3 players. That is an isolated shortcoming. The book introduces and
discusses many of the tools that are widely used in the field,
and its screenshots are helpful in illustrating sample output
from tools. In my opinion “Real Digital Forensics: Computer
Security and Incident Response” is a great resource for any
forensic investigator.
iPhone Forensics
Recovering Evidence, Personal Data &
Corporate Assets
Author: Jonathan Zdziarski
Publisher: O’Reilly
Date of Publication: 17 September 2008
Price: £30.99 (UK), $39.99 (USA)
ISBN: 978-0-596-15358-8
Reviewer: Tony Campbell
I love my iPhone and so should you (he says in a monotone,
robotic voice). But, the real question is, am I just another
Apple fanboy, brainwashed by Steve Jobs’ celebrity industry
presence and marketing genius? Or have I really made a buying decision based on the facts? It’s true that the iPhone is
probably the sexiest piece of kit in this arm of the Milky Way,
but is there something lurking under the glitzy hood, that
could rise up and bite us in the proverbial “you know what”?
Whether you are an individual or an organisation (and on
whatever side of the law you happen to operate), you’ll need
to know exactly how much risk you are taking when you do
business on your iPhone. How secure is your data and, forensically, how many of your daily activities, transactions and
communications are accountable in the eyes of the law?
So, how do you dig into Apple’s prizewinning marrow while
donning the cap of the forensics investigator? That’s the easy
part: pick up a copy of Jonathan Zdziarski’s iPhone Forensics,
published by O’Reilly Media, and you’ll see exactly what’s
going on beneath the glossy veneer. This book is a great
technical companion for computer forensics guys who have
a need (or a calling) to dig into the iPhone platform. True,
it’s a very short book with a high price point (just 113 pages
of technical content for £30.99), so the real proposition is
pitched in terms of technical punch rather than kilograms
of rainforest.
The foreword, written by the enigmatic John T Draper (Cap’n
Crunch), sets the scene for the rest of the book, showing that
it’s fairly easy for investigators to get a bucket load of valuable
data from the iPhone as long as they know where to look.
Zdziarski kicks off with a great introductory chapter that takes
us through the rules of evidence collection and good forensic
practice, before launching into the technical chapters. Even if
it is aimed primarily at the newbie investigator, this introduction gives the book a nice, well-rounded feel.
Chapters 2 and 3 cover the basics of understanding the
iPhone architecture and how to gain access to the underlying
system. These chapters are invaluable and written in an easy
to follow style, but quickly get you to the stage where you are
looking at the iPhone device with its pants pulled well and
truly down. Zdziarski then spends the next three chapters focusing on the forensic recovery of data, and analysing a whole
bunch of interesting tools, such as Foremost and Scalpel. He
then launches into e-discovery where he details techniques
for finding evidence inside iPhone database files (SQLite) and
XML property lists (these contain items such as cookies, account details, and Safari browsing history).
Chapter 6 ties the iPhone forensic investigation to the desktop PC, describing tools and techniques for pairing evidence
between the two platforms. Finally, Chapter 7 cuts to the
chase and explains in terms of specific kinds of investigation
(and real-life cases) which information is the most useful, and
how it would be presented in court.
This book is an excellent resource for any computer forensics investigator. I recommend buying it, and also registering
on O’Reilly’s website for their up-to-date iPhone Forensics
Data Recovery Training and listening to some of the webcasts
by Jonathan Zdziarski himself. For more information on these
resources, see http://search.oreilly.com/?q=iphone+forensics/.
/ LETTERS
360°
Feedback, commentary, industry gossip …
Our 360 section is dedicated to our readers, so tell us what’s on your mind. Did you like our lead feature on the Anatomy of a Web Request? Would you like to hear more about counter-forensics? How about breaking open Google Android in a future issue? Please send us your feedback on anything that’s inspiring you, or even bugging you. We will endeavour to print all the letters we get (space permitting, of course) and we will make sure we reply to each and every one of them. As an incentive, we’ll highlight our favourite letter in each issue, and the writer will receive a prize by way of thanks for the contribution. So, go ahead and get scribbling.
/ Contact Details
Send your letters or feedback to:
360@digitalforensicsmagazine.com
“In human resources or industrial/
organizational psychology, 360-degree
feedback, also known as “multi-rater
feedback,” “multisource feedback,” or
“multisource assessment,” is feedback
that comes from all around an employee.
“360” refers to the 360 degrees in a
circle, with an individual figuratively in
the centre of the circle. Feedback is
provided by subordinates, peers, and
supervisors.” (Wikipedia, 2009)
20% discount
on OXYGEN SOFTWARE
products and services
• Oxygen Forensic Suite
• Oxygen Forensic Suite PRO
• Oxygen Forensic Suite Training in London, UK
(8-9 December 2009);
To receive the discount, readers should enter the dfm2009 coupon code during a website purchase of any of the aforementioned products.
www.oxygen-forensic.com
Windows Forensic Analysis
DVD Toolkit, 2e
ISBN 9781597494229
£34.99, €51.95, $69.95
Malware Forensics
ISBN 9781597492683
£41.99, €49.95, $69.95
Cisco Router and Switch Forensics
ISBN 9781597494182
£35.99, €42.95, $59.95
Mac OS, iPod and iPhone Forensic
Analysis DVD Toolkit
ISBN 9781597492973
£41.99, €49.95, $69.95
Cutting Edge Content in Digital Security
Now Available!
Visit the BRAND NEW www.syngress.com
to purchase these or other great Syngress titles!
Memory and the
ability to retain information.
Refresh Your Knowledge with an (ISC)2®
CISSP® Review Seminar.
The Official (ISC)2 CISSP* CBK Review Seminar is the most comprehensive,
complete review of security systems concepts and industry best practices.
It’s the #1 credential for mid- to senior-level managers who are, or are looking to become, CISOs, CSOs or Senior Security Engineers.
Adding the CISSP title to your name makes you part of the elite cadre of
information security professionals most in demand today. So, if you plan to
build a career in information security, then the CISSP should be your next
career move.
For more information or to register, visit www.isc2.org/emeacissprs09.
*CISSP® stands for Certified Information Systems Security Professional and
is accredited to ANSI/ISO/IEC Standard 17024:2003.