>> Weidong Cui: Hello everyone, it's my great pleasure to introduce Stephen Checkoway. He is
a PhD candidate from UC San Diego and he is working on some cool stuff about car security,
compromising a voting machine and developing techniques related to programming, so today
he will talk to us about his work.
>> Stephen Checkoway: Thank you. So as Weidong said, I am Steve Checkoway, and today I'm
going to talk about some of my research on electronic voting machines and on the security of
modern automobiles. When we think of computers we tend to think of laptops, desktops and
servers, but the truth is the vast majority of processors in use today are embedded invisibly in
devices that we don't normally think of as computers. These range from the trivial like a
greeting card which will play some sounds or flash some lights when you open it up to far more
complex things like planes and trains. There's a very good reason that we have put software
controlled systems in these various devices. It turns out that it is far simpler to design, test and
deploy a software-based system than it is to design, test, and deploy an equivalent hardware-based system. And this is true whether the hardware is an electronic circuit or a piece of
mechanical hardware. One consequence of the ease of software development is that we see
that many of the systems that were formerly purely mechanical systems have been replaced by
software controlled systems. Some examples include things like elevators, slot machines, and
nearly all forms of modern transportation. Now once we have replaced mechanical systems
with software controlled systems, it's almost a natural next step to consider adding some form
of external connectivity to these devices. This can be for monitoring or controlling the devices
and it can take the form of several different things. It can be any of the standard wireless
communication standards or it can be as simple as just taking a USB stick, plugging it into the
device, downloading the data and then physically walking that USB stick over to another
computer or device to use the data. So before I talk about the security of these embedded
devices, I want to talk very briefly about the security of PCs. So back at the dawn of time when
personal computers were coming into vogue, the software that was running these was
extremely buggy and computers were very vulnerable to attack. I claim that it doesn't matter,
or it didn't matter at that time, and there are two basic reasons for this. The first is that there
was not an easy way for an attacker to contact a bunch of computers to take them over at the
same time. He had to target one computer at a time. And the second is there wasn't really much
incentive to do this. There wasn't a lot to be gained aside from notoriety for attacking
computers. Now this starts to change in the late ‘90s and early 2000s, when the internet
becomes much more popular and attackers now have the ability to communicate
with and potentially compromise a bunch of computers at the same time. Now still not really a
big deal because there wasn't a lot of incentive to actually do this. Now that changes once
people realize that they can make money by compromising computers, and for example,
turning them into zombies as part of a botnet. So really what we see is that financially
motivated crime is a real driver behind exploiting PCs. So when we move from a computer to a
device that is able to manipulate the physical world, the stakes for a vulnerable system can be
quite a bit higher. Past work looking at the security of embedded systems has ranged from
toy robots in the home, which can be used to spy on the occupants of the home, to a teenager
in Poland who was able to take control of the trains in his city and actually cause several
collisions (fortunately, nobody was seriously injured as a result), to Stuxnet, which took over
uranium enrichment centrifuges in Iran, and finally researchers have shown that they can
remotely take control of implantable medical devices such as pacemakers and even, in theory,
kill the people who have the devices. Now computer
security broadly speaking is about making the world a better place. To this end there are sort
of three thrusts of research that go on in the security community. The first of these is the
measurement of existing attacks. So here researchers examine what's going on, they look at
the internet, they see the worms and viruses and malware that are being propagated and
measure the effects of them. Basically they see what's out there and what it is that the
evildoers are actually doing with these. The second one is to try to stay one step ahead of the
miscreants and develop new attacks against existing systems or to develop attacks against new
systems that people hadn't previously considered. Now both of these feed into the third thrust
which is defense. The measurement of existing attacks in the development of new attacks
enables researchers to design strong defenses against these attacks to protect people in the
future. Now these are all three important areas, but for the rest of this talk I'm just going to
focus on one of these, on development of new attacks against systems. So first I want to talk
about what happens when we replace mechanical voting with software controlled voting and
then I'm going to spend a little bit of time and talk about how I can remotely take over your car.
And then finally I will conclude. So partly in response to the voting debacle in Florida in 2000,
Congress passed the Help America Vote Act, or HAVA, which had as one of its goals the
replacement of the old punch card and lever style voting machines with something newer.
Many jurisdictions opted to replace their old machines with paperless direct recording
electronic voting machines or DREs. So let's take a look at how these work here. Now DREs
can range in size from the size of a small tablet to these large 200 pound behemoths that have
to be wheeled around. In any case, there is a bit of a logistical problem here in that on the day
of the election, the voting machines need to be in place at the polling places, or the precincts so
that the voters can come in and vote. But since these are so large and cumbersome in some
cases, what happens is the voting machines are delivered several days before the election takes
place and then they just sit around basically unguarded. So this particular picture was taken by
Princeton professor Ed Felten, who has made it his hobby to travel around Princeton in the days
leading up to an election and take pictures of voting machines that are left unattended. On the
day of the election, the poll workers set up the machines and then the voters come in and they
cast their votes using whatever the interface is for that particular machine, either interacting
with a touch screen or as in here, pushing the button that is next to the preferred candidate's
name. So after they cast, or push the cast vote button, the voting machine stores their vote in
its internal memory. Now at the end of the election, what you would like to happen is you
would like to collect all of these machines, aggregate all of the votes together and then figure
out who won the election. But if you think about the problem with delivering the machines to
the polling place, you can see that we have the same problem collecting them all. So what
happens is the votes actually are stored on a results cartridge, so this is about the size of a
paperback book for this one. And these are what are collected at the end of the day. So after
the polls close, the poll workers go around, they remove all of these cartridges and take them
back to the county headquarters where the votes are aggregated and the winner is declared.
So in response to the fact that we now have software that is controlling our elections, a number
of researchers have taken a look at the security of the voting machines that are running the
elections. So this is just a small sampling, but basically, every serious study of a voting machine
has revealed significant technical flaws. These have ranged from being able to violate the ballot
secrecy, where if you can watch the order in which voters cast their votes then after the fact you
can recover how each voter voted; to being able to change all of the votes on a particular voting
machine, or all of the votes countywide; and even, in at least one case, being able to
infect one machine and have the infection spread virally to all of the other voting machines over
the course of the next couple of elections. So you might be thinking, all right, we know that voting
machines are insecure so why am I up here talking about them? Well, the truth is that not
everyone agrees with that assessment. So the local election officials and the voting machine
vendors have been highly critical of these studies. They have said things like giving attackers
access to source code is unrealistic and any system for which source code is given can be
attacked. These studies are unrealistic. Here we are providing source code. These are extreme
circumstances. The hackers have the source code, the operating manuals, unlimited access.
They have only slightly limited access. They have unusual access to machines. The attacks are
highly improbable. You can see sort of a trend here that the people doing the study had too
much access. They were given source code or other insider knowledge that a real attacker
wouldn't have, and in fact here is my favorite of the quotes, "I think my 9 and 12-year-old kids
could find ways to break into the voting equipment if they had unfettered access." So I hope
that this is not the case, but if it turns out that you really do need some sort of insider access to
be able to attack the voting machine, then these responses are a fairly strong indictment of the
previous studies of voting machines. So to that end, I along with researchers at UC San Diego
and Princeton's Center for Information Technology Policy set out to study a voting machine for
which we had absolutely no insider access. The machine we chose to study was the Sequoia
AVC Advantage, which is in fairly wide use in a number of different states. It has several key
features that made this a good machine to study from our point of view. The first is that it has
a very simple design. Any flaws that are found in the voting machine must be flaws in the
software or hardware that was designed by the company and intended to be running on this
voting machine. This is in contrast to some of the other studies which relied on Windows’
AutoRun feature for example to load malicious code onto these machines. As I mentioned we
had no insider access. All we had was a voting machine, which was purchased by Princeton
Professor Andrew Appel from a government surplus site. There was no authentication
required; anybody could have purchased these. He bought a lot of five machines for a total
price of $82 plus $900 in shipping to Princeton, but anybody could have done this. And then it
turns out that this particular voting machine was designed with security in mind, which
seems to be not the norm for voting machines and other embedded devices. In particular, this
machine has a pretty strong defense against code injection, which I will tell you about. So what
is probably a review for many of you, here's how code injection normally works. There is a
program that is running in memory that the attacker wishes to exploit, and the processor
maintains a piece of state called the program counter or instruction pointer which points at the
next instruction to be executed. So the attacker’s task is twofold. First, he causes his own
malicious code to be loaded somewhere else in memory, and then second, he changes the program
counter to point from the legitimate program to the malicious program. Now one way this
can be done, and a pretty easy one, is a buffer overflow on the stack, which allows the attacker
to both inject new code and change the program counter at the same time.
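In C, that classic pattern can be sketched in a few lines. This is purely illustrative; the function name and buffer size are made up, not taken from any machine discussed here:

```c
/* Minimal sketch of a stack buffer overflow: the unchecked copy
   lets a long input run past buf and overwrite the saved return
   address above it, so one bug both plants attacker-supplied
   bytes in memory and redirects the program counter. */
#include <string.h>

void handle_input(const char *input) {
    char buf[16];           /* fixed-size buffer on the stack */
    strcpy(buf, input);     /* no length check: input longer than
                               16 bytes keeps writing up the stack,
                               into the saved return address */
}
```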
On an embedded system, the situation is pretty similar. There is one change, which is that frequently the
program that is running on the embedded system is the only one that's going to be running and
so rather than being loaded from disk, it is stored in a read-only memory chip which is attached
to the motherboard and always mapped into the address space. Here the attacker’s job is
basically the same. First load the malicious code somewhere into memory. Now he can't load
it into the ROM without actually replacing the ROM, but that's okay, the processor is perfectly
willing to execute code anywhere in the address space, so the
attacker just loads his code into the RAM, and then as before he causes the program counter to
change from pointing at the ROM to pointing at his malicious code. On the Sequoia AVC
Advantage, the situation is markedly different. So first it does have this ROM RAM distinction,
where the legitimate program is in the lower half of the address space, in the ROMs and the top
half of the address space is for RAM. But the one difference is that the AVC Advantage has a
piece of hardware such that if the processor ever attempts to fetch an instruction from RAM it
raises what is called a non-maskable interrupt, which basically causes the processor to branch
to a known location where it executes a small piece of code. In this case it prints out an error
message and then goes into an infinite loop waiting for the machine to be turned off. So if we
think about what this means, the attacker cannot change any of the instructions in ROM
without actually replacing the ROM, and the attacker cannot cause any instructions in RAM to
be executed, so there is no way for the attacker to actually run any code that is injected into
the machine. So this was the fundamental assumption that the designers of the voting machine
made when they designed this. Namely, if you can't inject code, then the only program that's
running is the legitimate voting program and so everything should be fine. Now this
assumption is false. So return oriented programming is a technique that was designed to allow
arbitrary, attacker-controlled, Turing-complete behavior without any code injection. So basically,
the way this works is the program has some sequence of instructions in it, and the attacker
looks through them and finds a bunch of short sequences, each of which performs some
discrete action that is useful. So what I mean by a discrete action is something like adding the
contents of two registers together, or storing the contents of a register into memory. Now if
the attacker can arrange for these sequences of code to be executed in the appropriate order,
then he can cause any behavior that he wants. So the
details are technical and I'd be happy to talk about them after the talk, but the upshot is that if
the attacker is able to gain control of the function call stack, then he can induce any behavior
he wants in this program. Now necessarily, because we're relying on the instructions that exist
in the legitimate program, the return oriented attack is specific to the particular program and
the processor on which the program is running. So return oriented programming was designed
and is mostly used in the context of exploiting PCs. Now PCs tend to have software defenses
against code injection. Microsoft, for example, calls it data execution prevention, or DEP, which
basically says that any page of memory can be writable or it can be executable, but never both
at the same time. So the way ROP is typically used is you write a small ROP program that turns
off the software defenses against code injection, and then performs the standard code injection
attack.
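To make "turning off the software defenses" concrete, here is a hedged POSIX sketch of the step a small ROP program typically arranges; it is an illustration of the idea, not the payload of any particular exploit:

```c
/* What a small ROP program usually accomplishes on a PC: re-mark a
   writable data page as executable, defeating the writable-xor-
   executable policy, so that injected bytes can then be run.  The
   ROP chain just arranges a call equivalent to this function;
   `page` must be page-aligned for mprotect to succeed. */
#include <sys/mman.h>
#include <stddef.h>

int make_executable(void *page, size_t len) {
    return mprotect(page, len, PROT_READ | PROT_WRITE | PROT_EXEC);
}
```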
Now this doesn't work on the AVC Advantage precisely because of this hardware.
There is no way to turn this defense off. The consequence of this is that if an attacker wishes to
actually attack the voting machine, he has to write the entire program or the entire exploit in
this return oriented style, and so this is in fact what I did. I wanted to design an exploit which
when loaded onto the machine would steal votes. It would shift votes from one candidate to
another candidate. To make this problem sort of tractable so I didn't have to sit down and
write down a bunch of individual addresses that I would be writing onto the stack, I designed a
very simple assembly like language and a compiler for it that would translate a relatively high
level specification of this vote stealing, and it would translate it into the return oriented style.
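To give a flavor of what the return-oriented style looks like, here is a purely illustrative sketch. The machine's processor is a Z80, so addresses are 16 bits, but every address and gadget meaning below is invented rather than taken from the real exploit:

```c
/* A return-oriented "program" is just data: a sequence of addresses
   of short instruction sequences ("gadgets") that already exist in
   ROM, each ending in a return.  The CPU runs the chain by
   returning from one gadget into the next.  All values invented. */
typedef unsigned short addr_t;    /* 16-bit Z80 addresses */

addr_t fake_stack[] = {
    0x1a2b,  /* gadget: pop a constant off the stack, then ret  */
    0x0002,  /*   the constant: how many votes to shift         */
    0x3c4d,  /* gadget: add the constant to a counter, then ret */
    0x5e6f,  /* gadget: store the counter to memory, then ret   */
    /* ...enough gadgets and data to express any computation,
       without ever executing an injected instruction */
};
```

Roughly speaking, the compiler's job was to emit chains of this shape from the higher-level vote-stealing specification.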
So let's see how this attack would actually happen. So the first thing that happens is the
attacker would need to gain physical access to the voting machine. So this was pretty easy. As I
mentioned near the beginning of the talk, these voting machines are left unattended in the
days before the election so the night before the attacker would go to the machine, pick three
very simple locks. They're so simple that even I can pick them, and I am terrible at lock picking.
Anyone with skill could do it in a couple of seconds, max. Then having gained access to the
machine, you open up the back panel and get at where the operator panel is and where the
results cartridge would be stored. And the attacker has previously loaded this vote stealing
program into a file in the results cartridge, or into a cartridge that's actually an auxiliary
cartridge. It's very similar to the results cartridge. The difference is unimportant, but the point
is that the voting machine has two slots in it, one for the results cartridge, and one for the
auxiliary cartridge. Now in normal use the auxiliary cartridge is left empty, so there's no
problem there. All the attacker has to do is insert his cartridge into the auxiliary port. Then the
attacker navigates a couple of menus on the voting machine’s display and triggers a bug that
causes a very simple buffer overflow on the stack, and this is what initiates the whole vote
stealing program. So there's a rather complicated multistage exploit that goes on here, but at
this point from the attacker’s point of view, he is basically done. All he has to do now is follow
the handy instructions that are provided on the operator panel, namely remove the exploit
cartridge from the voting machine and then turn the machine's power off. So it turns out
that the power on this voting machine is under software control, so there's a power knob but
it's merely advisory. The legitimate software notices when the power knob has been
turned to the off position, and that's when it decides to depower the motherboard. So what
happens instead, once the vote stealing program is installed, is it sits around and waits for the
knob to be turned off, and as soon as it detects that this is the case, it powers off all of its
peripherals but sits around and keeps running.
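Conceptually, the resident code's main loop is something like the following sketch. Every function here is a hypothetical stand-in for machine-specific I/O; this is the idea, not the actual exploit:

```c
/* Conceptual sketch: the power knob is merely an input the software
   polls, so resident malicious code can power down the visible
   peripherals and quietly keep the CPU running. */
#include <stdbool.h>

bool power_knob_is_off(void);           /* read the advisory knob   */
void power_down_peripherals(void);      /* screen, printer, panel   */
void run_normal_startup_sequence(void); /* sounds, messages, etc.   */
void act_like_legitimate_program(void);

void resident_loop(void) {
    for (;;) {
        if (power_knob_is_off()) {
            power_down_peripherals();      /* machine looks dead    */
            while (power_knob_is_off())
                ;                          /* ...but keeps running  */
            run_normal_startup_sequence(); /* then boots "normally" */
        }
        act_like_legitimate_program();
    }
}
```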
So the attacker closes up the voting machine and moves on to the next one or leaves the
polling place. The attacker is now done at this point. Yes?
>>: Once you have physical access to the machine, why do you need such a sophisticated
attack? You could just cut the trace to the non-maskable interrupt line or for that matter just
swap out the ROM with your own attack ROM.
>> Stephen Checkoway: Right, you're absolutely right that an attacker could replace the ROM
or cut the trace for the non-maskable interrupt, so there are two reasons that one wouldn't
want to do this. The first is that this would leave some physical evidence that some attack had
occurred. The board would be damaged. Maybe you could cut the trace in such a way that
people wouldn't notice, or they would think that something had just happened to scrape
against it and cause the damage. Replacing the ROM, though, could readily be detected. All
they would have to do is remove the ROM, stick it in a ROM reader, and realize, hey, this isn't
the right thing. The second reason that one wouldn't do this is that there are a
variety of tamper evident seals that protect the internals of the machine. So you can open it
up, but you can't get at the ROMs without defeating a number of security seals. So people have
shown in the past that it's not too hard to defeat them using a variety of household objects like
hairdryers, but by doing a purely software attack you don't have this problem. You don't have
to defeat any seals and there are no telltale signs left behind. But you are absolutely right that
that could happen. Yeah?
>>: Having an auxiliary port is kind of convenient, I guess, in some sense. Is that a common
feature in these systems? In some ways I guess they need to be extensible, but it seems like a
huge backdoor in terms of letting people put new code or whatever new stuff into the machine.
>> Stephen Checkoway: Well, my understanding is one of the things you can do with this is
aggregate the votes from a bunch of machines onto a single one. You put the results
cartridge in the results cartridge port and you
put a vote aggregating cartridge in the auxiliary port and then you tell it to add the votes. And
so it's a functionality issue. But, yeah, you're right; it seems like, perhaps not a good design
choice, or that it should be secured for sure. So the next day, the poll workers come in and they
go and they set up the voting machine, and they turn the power knob on and now the vote
stealing program has been sitting around running this entire time, and so it detects that the
knob has been turned on and now it performs all of the standard startup sequences. It plays all
of the sounds. It prints out all of the messages and takes the normal amount of time this
machine takes to start up and get into the election mode. So there's really no indication that
anything out of the ordinary is going on. Then, the election runs basically as normal. The
voters come in and they cast their votes and then at the end of the day the polls are closed and
it's only at this point that something different from the legitimate program happens, namely
votes are shifted from one candidate to another candidate. So here on this particular machine,
three votes were cast for George Washington. I know this because I was the one who cast all
three of them. And after I closed the polls and printed out the results tape here, you can see
that two votes have been shifted from George Washington to Benedict Arnold and so here he is
the winner at least on this particular voting machine. So let's take a step back here and ask
what went wrong with the security of this voting machine? Now the obvious thing to say is
well, there was this buffer overflow that enabled me to begin this return oriented attack. But I
don't think that that's the interesting thing. We've known about buffer overflows for quite a
number of years and we know how to defend against them. So really the only thing that's
interesting about the fact that there is a buffer overflow is that people are still writing code
with buffer overflows, even though we do know how to defeat them. Really for me, the more
interesting thing is that they had this security assumption and they relied on it to defend the
voting machine against attack, namely they were assuming that if they could prevent bad code
then they could prevent bad behavior, and it turns out that this was just not true and,
moreover, when the machine was designed no one knew about return oriented programming
and so this defense was actually a pretty good one. It was pretty strong. My understanding is
that the military actually uses something quite similar on the control systems for their weapons.
But what this meant is that once return oriented programming was invented, these voting
machines were simply not secure anymore, and so they didn't remain secure for their entire
service life. And finally, as we were talking about, there is no real way to detect that this attack
happened after the fact. If I had replaced some ROMs then we could tell that these ROMs had
been replaced, but since this was a software-only attack, there was no evidence left behind.
Yeah?
>>: Do you need to leave your cartridge in the slot?
>> Stephen Checkoway: No. You don't leave the cartridge in the slot. Once that message
pops up that says remove the exploit cartridge and then turn off the machine, you take it
out and turn off the machine, and then you're all set. Yes.
>>: [inaudible] code to read itself and put things back to normal afterwards?
>> Stephen Checkoway: In fact, it does, yes, so after it's done, it erases all of its memory, or
erases all of the code that it had written into memory and returns to the legitimate program,
so…
>>: So an easy defense would be, I unplug all of the power for every voting machine and on
voting day, I plug in the power to make sure that it's all clear.
>> Stephen Checkoway: So that is a great point, and in fact, we would expect that the voting
machines wouldn't be plugged in until it came time to actually run the election. So it turns out
that since these voting machines need to work even if there's a power outage, each machine
has a rather large car battery in it which the brochure for the machine claims allows voting for
up to 18 hours without power. So the code will remain resident unless you remove the battery
and unplug the machine. And in fact even in that case, the RAM is battery-backed and so the
code will remain, so you would have to remove everything, pull out all of the batteries and then
go through the entire setup procedure in order to do it.
So if you'll pardon the pun, let's shift gears a little bit and talk about how I can take control of
your car from a distance. So these two statements up here are rather obvious statements, but I
think they need to be said because they are important. The first is that cars are cyber-physical
systems, by which I mean we have software controlling the physical world. And second,
any vulnerabilities in the software that is controlling these systems are potentially life-threatening.
So these are important. The code in them needs to be correct for cars to be safe.
Up until the late 1970s, cars were basically purely mechanical devices. However, fuel efficiency
and environmental concerns led to the introduction of emission control systems in the form of
engine control computers. Now these computers proved so successful that they sparked a
proliferation of computers in the car, and so now almost everything in the car is under
computer control from heating and lights to airbags to the transmission to the engine to the
brakes; basically everything in your car is controlled by a computer at this point. I used to like
to say that basically the parking brake was the last holdout, but now I have ridden in cars where
the parking brake is a button that you press, so at this point everything is under computer
control. Now you might notice that several of these computers need to talk to each other. The
remote keyless entry system needs to talk to the body controller, so that when I push the
unlock button on my key fob the remote control door lock receiver will send a message to the
body controller saying unlock the doors, and then the body controller does it. So to avoid a
potential n-squared communications problem, where every computer would need a direct link
to every other, the designers of the internal architecture for cars have designed them around a
few shared networks. So each computer communicates with
one or more shared buses. So at the same time this is happening, sort of the last of the
holdouts of the mechanical systems is replaced with a software controlled system. So the
accelerator pedal is no longer connected directly to the throttle. Instead, there are simply
sensors in the pedal that detect how far the pedal has been depressed, and the engine
controller reads that and is responsible for controlling how much fuel to mix with air, how far
to open the throttle, et cetera. So while all of this is going on, the cars have gained a significant
amount of external connectivity. So this
has ranged from iPod and USB connectors on the radio, to a number of wireless channels such
as in some cars they are adding Wi-Fi; cars have Bluetooth so that they can communicate with
your cell phone and act basically as a hands-free headset. There is a telematics unit in most
cars these days which is basically a smart phone, so it is a cell phone paired with a moderately
powerful computer, which has a phone number that can be dialed, and in addition
it can make an internet connection, like a 3G internet connection. And finally,
looking forward there are going to be app stores, so we will have third-party code running on
the telematics unit. Now I think that this is a terrible idea, and it sounds like many of the
engineers in the automotive industry agree that this is a terrible idea, however, from what I'm
told by these engineers, the marketing people love their app stores and so this is coming.
There is no stopping it. We will have app stores on cars.
>>: I think they already have them on some cars. Like Kia or something has apps?
>> Stephen Checkoway: Okay. I've seen ads for like Pandora integrating with some cars, so
yeah. All right. Before I tell you all of the things that an attacker can do to a car I want to give
kind of a motivating example here. Here we have arranged for wireless access to this car’s
internal networks. So which car is this? This study was a collaborative effort between
researchers at the University of Washington and UC San Diego; we purchased a pair of identical
make and model, late-model 2009, I guess, automobiles that are sort of generic. They're not
anything high-end or special, you know, $30,000 range. There are hundreds of thousands of
these on the road today. So here we can see my co-author, grad student turned stuntman,
Alexi, and he is going to be driving the University of Washington car
down an unused runway in Blaine and when he gives the signal, we are going to disable his
brakes and you will be able to hear him try to brake. I will point it out when it happens, but it
makes sort of a grinding sound. [video begins].
>> Alexi: Three, two, one, zero. Send the packet at three, two, one, zero. I cannot brake; let
me try accelerating.
>> Stephen Checkoway: That sort of woh, woh sound, that was him trying to brake.
>> Alexi: Okay, now I'm at 20 and I still can't brake.
>> Stephen Checkoway: So I don't know if you caught it, but that contained my favorite line. I
cannot brake; let me try accelerating. [laughter]. The video continues on for another 30
seconds or so, but you might notice at the end of the runway is not too far off and in the end
the parking brake in this case is a mechanical one and that's enough to stop the car. It turns out
that basically anything that is under computer control, which is everything in the car, is under
attacker control if the attacker is able to gain access to the car's internal networks. So this is
just a short list of things. We can turn the engine on and off. We saw the brakes. They can be
completely disabled like they were in the video, or they can be selectively engaged so you can,
say, lock one particular brake. The instrument panel is just under software control, so if you tell
it you're going 140 miles an hour in park, as far as it's concerned, you're going 140 miles an
hour in park. Also the horn; actually, there's an amusing story about the horn which you
should ask me about after the talk. The lights. The heating, it turns out, you can turn up so
high that it is really extremely unpleasant to be inside the car; you can turn it up much
higher than the control on the dash will allow. But basically, everything is under attacker
control here and in fact the attacker can do more, so each of these computers, which are called
ECUs in the industry, can be reprogrammed by the other computers in the car. So if you can
take control of any of those computers, then you can reprogram all of them and therefore
control absolutely everything. And so this is sort of the assumption that was made by the
designers of the car. Namely that you need some form of physical access in order to be able to
tamper with the computer systems. Now if this is true, then it's a pretty reasonable sort of
assumption, because if I have physical access to your car, then I can do all of the sorts of things
that Hollywood has been telling us that we can do to cars if we had access to them for the last
30 years. So I don't need to play around with your computer system. I can cut your brake lines,
or do all of those sorts of things. However, as you might imagine, this assumption is false. So
we can consider attackers who have a variety of different capabilities. So the first one I want to
mention is the attackers who have indirect physical access to the car. Now what I mean
by this is that the adversary does not himself have direct physical access to the car, but he does
have access to some device which has access to the car. So there are a bunch of examples of
these sorts of things. In the corner here you can see a standard diagnostic tool that a mechanic
would plug directly into your car. These are frequently wireless
devices themselves, and so the attacker could take control of the device and now can take
control of all of the cars that it plugs into. There are third-party radios which plug into the car.
Any malicious radio could take control of the car. There are third-party tools that plug directly
into the federally mandated onboard diagnostic port, so here is one, Autobot, which is a
popular one. I mentioned that radios have iPod and USB connectors. They can play a variety of
digital file formats including WMA and MP3. And electric vehicles communicate with their
charging stations as they are being charged. So the point here is that really any one of these
devices that is connected to the car extends the attack surface of the car to that of the device.
The second category of attackers one could consider are those that can send short-range
wireless communication signals, so here something up to say 100 meters or less. Now there are
a variety of these in the car. I mentioned previously that some cars can pair with your cell
phone by way of Bluetooth for hands-free calling. Remote keyless entry has at least two
wireless communication channels. Some cars have Wi-Fi access points in them, using the
telematics unit as their gateway to the internet. There are federally mandated sensors
that spin around inside your car's tires and periodically, about once a second or so,
report the pressure of the tire to a receiver in the car, and looking forward, people are
developing vehicle to vehicle networks where cars will be able to communicate with each other
as they're driving down the road. The final category of attackers one can consider, are those
that can send long-range wireless signals, so this is at a distance of miles or even at a global
scale. So cars can receive a variety of digital radio signals including satellite
radio and HD radio and even the radio data system, which is the signal that's broadcast along
with FM that allows your radio to display artist and track information as you're listening to the
radio. Now in addition to this there are the telematics units, which have cell phones and so can
be accessed worldwide and can create internet connections, and so they can be accessed by
that as well. So of all of these potential attack vectors, we looked at four of them in-depth. We
looked at the manufacturer's standard diagnostic tool, the standard radio media player that is
in the car. We looked at the Bluetooth and the cellular communication in the telematics unit
and the point is that every one of these led to a complete car compromise. So all of these are
vulnerable, and compromising the car using any of these vectors we could control the entire car
including things like the brakes and all of the things I mentioned earlier. So I want to talk about
two of these very briefly. The first of them is the media player, so the car’s radio. What you do
is you basically insert a CD into the radio, and that CD will take over the radio.
There are two ways this can be done. The first is that the radio contains some vestigial code
that the manufacturers weren't even aware existed until we told them that was put there by
the supplier for the radio for re-flashing the radio quickly and easily. So what you do is you take
a standard ISO 9660 formatted CD with a particular file and directory structure and you stick it
into your radio. A rather cryptic message is displayed, and there's no indication of which button
is the right or wrong one here, but if you push the wrong button it will reflash your radio, and
so you can basically overwrite all of the radio’s firmware.
The second attack that we developed exploited a flaw in the radio’s parsing of WMA files; there
is an undocumented header in a WMA file that this code was mis-parsing. So in the end, we
were able to construct a WMA file that would play fine when played on a computer, but when
burned to a CD and inserted into the radio, would take control of the radio. So you might be
thinking… You have a question?
>>: So what was the process that you went through to find these bugs, or these exploits?
>> Stephen Checkoway: So finding the various vulnerabilities to exploit them, we spent a lot of
time looking at the firmware in IDA Pro. My co-author Karl and I, for example, spent a bunch
of time looking at the radios; we extracted the firmware and then just disassembled it, and I
was looking at where it was parsing the ISO 9660 file system, and we found this reflash
capability in it, and then a little bit later Karl was looking through the WMA
parsing code and discovered that there was this vulnerability that led to an extremely tricky
buffer overflow to exploit, which caused him to--normally what would happen is it would just
crash the radio when you tried. I had this tendency to just brick all of the radios that I poked at.
I have a whole stack of dead radios somewhere. Eventually what he ended up doing was
writing a debugger that would run on the radio so we would reflash it using the first attack
vector, and then use the debugger and he repurposed an unused serial port, and then we just
sat there with the debugger poking at values in memory until we could figure out what the right
thing to overwrite was. It was an intense two weeks, I think, looking at the radio.
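For a sense of what a debugger like that can be, here is a sketch of a bare-bones peek/poke monitor of the sort one might run over a repurposed serial port. The serial_* routines are hypothetical stand-ins for the radio's UART code; this is not Karl's actual debugger:

```c
/* Bare-bones serial monitor: read a one-byte command and a 32-bit
   address, then either send back the byte at that address ("peek")
   or overwrite it with the next byte received ("poke"). */
#include <stdint.h>

int  serial_read_byte(void);        /* blocking read from the UART */
void serial_write_byte(uint8_t b);

static uint32_t read_addr(void) {   /* four address bytes, big-endian */
    uint32_t a = 0;
    for (int i = 0; i < 4; i++)
        a = (a << 8) | (uint8_t)serial_read_byte();
    return a;
}

void monitor(void) {
    for (;;) {
        int cmd = serial_read_byte();
        volatile uint8_t *p =
            (volatile uint8_t *)(uintptr_t)read_addr();
        if (cmd == 'r')             /* peek one byte of memory  */
            serial_write_byte(*p);
        else if (cmd == 'w')        /* poke one byte of memory  */
            *p = (uint8_t)serial_read_byte();
    }
}
```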
>>: Is there like a Black Hat community around this kind of thing? I mean, obviously people hack
computers all the time, but this is kind of pretty specialized. I mean, presumably every radio is
different with this kind of software and there are 100 different, like, low-visibility brands.
Is it just you? Are you the only one that ever looked at these things?
>> Stephen Checkoway: There's a big community of people who look at the computers in their
car. I don't think that they have looked at radios before. Usually what they're trying to do,
this tuning community, is make their cars perform better, so they will change
values in tables to cause more fuel to be mixed with air, et cetera. And so they just try to get
the most performance out of their car that they can. I am not aware of other people looking at
this and trying to attack cars. That was the radio. The next one that I want to talk about is the
telematics unit. So this is a diagram of the networking stack for the telematics unit. It has this
3G internet connection, on top of which it runs point-to-point protocol and SSL, and then finally
on top of that is the higher-level telematics functionality. So because it has this SSL, this
seems like a pretty poor candidate to attack unless we wanted to attempt to defeat SSL, which I
did not. However, if you look at a coverage map for 3G in the U.S., you can see that there are
huge regions where you simply cannot get 3G internet. Now since these telematics units need
to be able to work even when they don't have 3G internet connection, it turns out that there is
a second parallel communications stack that you might notice does not have any sort of SSL. So
this is really just a frequency shift keying software modem that runs over the voice channel of
the telematics unit. So it sounds quite a bit like a dial-up modem if you listen to that, if you play
the waveforms. And so this is what we decided to attack, because this looks a lot better. Now
it turns out that the people that wrote the code for the software modem, which is a standard
Airbiquity modem, were different people than those who wrote the telematics code, and it
turned out that they had mismatched assumptions, namely, Airbiquity was designed with the
ability to send rather large packets of data at a time, whereas the rest of the telematics code
knew that it was only going to be sending short packets of data and it was only expecting to
receive short packets of data, and so allocated a correspondingly short buffer on its stack.
However, it was perfectly happy to trust the data that was coming in over the phone, and it
would copy the entire long packet into its short buffer, thus overwriting the stack, as it turned out.
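The shape of that mismatch can be sketched in a few lines of C. The names and sizes here are hypothetical, not Airbiquity's or the telematics unit's actual code:

```c
/* The modem layer faithfully delivers whatever length the sender
   chose; the telematics layer "knows" packets are short and copies
   without checking, so a long packet overruns the stack buffer and
   overwrites the return address. */
#include <string.h>
#include <stdint.h>

#define EXPECTED_MAX 16   /* hypothetical: what the telematics code
                             assumed it would ever receive */

void handle_packet(const uint8_t *pkt, size_t len_from_modem) {
    uint8_t buf[EXPECTED_MAX];        /* short buffer on the stack */
    memcpy(buf, pkt, len_from_modem); /* trusts the modem-supplied
                                         length: no bounds check   */
    /* ... process buf ... */
}
```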
So this led to our attack, which was very simple. Basically there are two parts: first you call the
car, and second you transmit your malicious payload. Now there are two ways you
could go about doing this. One way is you could implement the entire modem protocol, and so
you have this two-way communication with the telematics unit after you have called it up. But
it turns out that we don't actually care what the response is from the modem, so we can just
assume that it is going to send us the expected response, and so all you have to do is record your
half of the conversation as an MP3 and then just play the MP3 into the phone. So here is a
picture of Karl playing from his iPod, playing this exploit song directly into a conference
telephone and actually taking control of the car that is just outside of the picture. So now once
you have taken control of the car, it turns out that you can use many of the other wireless
access points, not access points, but wireless vectors, to control the car after you have
compromised it. So one in particular that was extremely handy is that we can use the
telematics unit’s 3G internet connection and so we would take control of it using any of these
methods, and then you just create an internet connection and what we did was we wrote a
pretty simple IRC chat client that would connect to our server whenever the car was turned on,
and then it would sit around and wait for commands to be sent to it and then it would perform
whatever action we told it to do. So there at the top you can see the University of Washington
car joins the channel, and a little bit farther down UCSD car joins the channel, and finally at the
bottom Karl commands both cars to simultaneously perform a particular action. The action
that he chose was to have it send a packet on its internal network. This particular packet just
caused a chime to happen in both cars simultaneously, but you can imagine that it could have
done anything you wanted. In fact, it's very easy to get a shell on the device using the DCC chat
feature of Irssi.
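To make "send a packet on its internal network" concrete: the internal buses in these cars are CAN buses, and on a Linux host a single-frame send can be sketched with SocketCAN. The interface name, frame ID, and payload below are made up for illustration; this is not the study's actual chime packet:

```c
/* Send one raw CAN frame using Linux SocketCAN.  The frame ID and
   payload are hypothetical placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void) {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr = { 0 };
    strcpy(ifr.ifr_name, "can0");       /* hypothetical interface */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame = { 0 };
    frame.can_id  = 0x123;              /* hypothetical message ID */
    frame.can_dlc = 2;                  /* two bytes of payload    */
    frame.data[0] = 0x01;
    frame.data[1] = 0x02;
    write(s, &frame, sizeof(frame));    /* one frame onto the bus  */

    close(s);
    return 0;
}
```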
>>: So these cars were the same make and model?
>> Stephen Checkoway: Yeah, so we had the same car. So the point of this was to determine if
the vulnerabilities that we found were specific to a particular instance, or to a family of cars.
And it turned out that, for the most part, they applied to the whole family of cars.
>>: How typical are the results across car makes and models?
>> Stephen Checkoway: That's a great question. We don't know the answer. We only looked
at one. There are sort of two competing factors here. One, there are many different suppliers
of components, so in different model years you could have a vastly different component. Your
telematics unit could be implemented by a completely different company using a different
architecture with no shared software. However, the suppliers do provide parts to many
manufacturers and these can be used across various models, so on the one hand you have this
diversity. On the other they tend to reuse the components across multiple cars. So how
applicable it is, we don't know for sure. So now that we have this capability to take control of
cars and then to control them after we've taken them over, a natural question is what is it that
someone might want to do with these capabilities. So it's easy to talk about things like
terrorism. We're going to have all the cars stop working at once or we're going to lock their
brakes or something like that. But I'm not a terrorist and so this is a little bit outside my area of
expertise. I am a computer scientist, and one thing that we have some knowledge of is
financially motivated crime. So let's consider car theft as a computer scientist: we study
people who are financially motivated and who commit crimes using computers. That's part of the
first thrust of the computer security research. So the old way to do car theft is you send your
guy out with a Slim Jim and he breaks into the car and then hot-wires it and drives away. But we
don't have to do that. We can be smarter about it. In fact, we can even sell car theft as a
service if you wish. [laughter]. So here's how this would work. The first thing that you would
do is compromise the car, and you would probably want to compromise a bunch of cars; you
could do it with any of the methods that I've described here. You could call the car and take
control of it. Next you can locate the car using the GPS that's in the car. So now you know
where the car is and you know what kind of car it is, because you know what the VIN number
is, and so that tells you whether it's a car that I care about stealing. Yeah?
>>: Question about [inaudible] how do you call the car, by calling a phone number or by calling
the car sequence ID or something like that?
>> Stephen Checkoway: So you, the question if I understood it is how do you call the car?
>>: Yeah.
>> Stephen Checkoway: You just dial the phone number.
>>: So you know the phone number of that car, or you know the phone number of the driver?
>> Stephen Checkoway: Of the car.
>>: So how you get that?
>> Stephen Checkoway: So that's a great question. How do you get the phone number of the
car? So in our case, we cheated. We pushed the button and said what's our number and it told
us. In reality, you would need to be able to find the numbers for the cars. So one thing that we
thought might be the case is that they would be sort of allocated sequentially and in a block but
the University of Washington car has a 206 area code. The car in San Diego has a San Diego
area code so that is probably not the case, so you would probably need someone that has
access to the database of the cars.
>>: [inaudible] you could just start calling [inaudible].
>> Stephen Checkoway: Or you could just [inaudible] dial. You're absolutely right. You would
choose the ones in your area that you wanted to attack, yes, absolutely. So, yeah?
>>: So obviously you pay all this money for these keys, these electronic keys that are supposedly
encrypted so that only somebody with the key can start your car. Can you bypass that
completely?
>> Stephen Checkoway: That's coming right up. So after you've located the car you send the
guy out who's going to steal it. When he arrives you unlock the doors, start the engine and
bypass all of the antitheft measures, which allows you to, say, shift into drive, and then you just
drive the car away. And so we implemented this whole attack and here Karl has been informed
of where the car is and so he is going to walk out to it. When he gets there the car stealing
program will be run, which will cause the doors to unlock, and the engine will start, which you can't
hear because a University of Washington bus, or Seattle bus, drives by, but trust me, it starts.
Now while he's putting on his seatbelt, the antitheft measures are being bypassed. You can see
that there is no key in the ignition, and he can just shift into drive and drive the car away.
[laughter]. Okay. So that's car theft. One might consider doing something a bit different,
rather. So cars have, in addition to GPS, microphones inside the cabin, so that you can use
the car as a hands-free headset essentially. So we can use the car to
listen to what's going on inside of it. Here's how this proceeds. You start as before by
compromising the car, using any of the methods you want. Then you can have the car just
continuously report its GPS coordinates; I used a Twitter clone so that the car would just
tweet its location every 15 seconds or so. Then you listen for audio, or rather you listen for
voices in the cabin, just using standard voice detection, and then compress the audio and
stream it off to another server.
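The "standard voice detection" step can be as simple as an energy threshold over short frames of microphone samples; the frame size and cutoff below are invented for illustration:

```c
/* Crude voice-activity detection: call a frame "speech" when its
   average energy exceeds a fixed threshold. */
#include <stdint.h>

#define FRAME     160        /* e.g., 20 ms of audio at 8 kHz  */
#define THRESHOLD 500000L    /* hypothetical energy cutoff     */

int frame_has_voice(const int16_t *samples) {
    long energy = 0;
    for (int i = 0; i < FRAME; i++)
        energy += (long)samples[i] * samples[i] / FRAME;
    return energy > THRESHOLD;  /* nonzero: start recording    */
}
```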
So we implemented this as well, but the video that we produced was perhaps a bit too cute to
show in this setting, so I will skip it. [laughter]. So let's
take a step back here and ask what went wrong with the car? So there are two things that I
think are really important that happened here. First is that cars have had a real lack of
adversarial pressure. No one, to the best of our knowledge, is attacking cars, and so the
manufacturers haven't had any incentive to design security mechanisms for the car. The
second reason is that the cars are not built by the manufacturers themselves; they don't have
the capability to produce all of the components that go into the car. Instead, what they do is
they design specifications for each of the computers in the car, and then third-party suppliers
bid on contracts to build the components to the specifications. Presumably the lowest-cost
bidder wins, and they produce that component, which is returned to the manufacturer
essentially as a black box widget. So the car companies have no idea what goes on inside
these components unless they do the same sorts of analyses that we did, which
goes on inside these components unless they do the same sorts of analyses that we did which
they don't do because they didn't have any reason to do so. So this leads to a couple of
problems. One is that you have these incorrect assumptions between various suppliers, so the
telematics code, for example, had a different assumption than the modem code had.
And in fact, almost every bug or every vulnerability that we found in the car fell at one of the
boundaries of these components. So this has some ramifications for other industries such as
the airline industry, which builds planes basically the same way that cars are built by having
third-party suppliers produce a bunch of the components. And finally, as with the voting
machine, there is really no forensic evidence left after the fact that this occurred. We
developed attacks that would essentially erase themselves from memory as soon as they were
complete, and there is no audit log left that could be found. Even if the car has a black box that
is recording traffic, you may be able to detect that an attack has occurred if you, say, happened
to have logged that the command to lock the brakes was issued right before the attack
happened, but you wouldn't really figure out when the car was attacked. You wouldn't
know who did it or how it was attacked. So my interest in looking at these is not to identify
specific vulnerabilities in these embedded systems. Really, I want to understand how these
vulnerabilities arise in the first place. So it's helpful to look at the commonalities between
these two rather different systems. In both cases the systems contained a number of very
simple vulnerabilities, like buffer overflows using strcpy or memcpy. So what this is
telling us is that people are still writing code poorly. They're not doing the very
simple things to prevent the most basic of attacks. But more than that, even when these
systems are designed with some security in mind, they're not designed with defense in depth in
mind. So they have one mechanism that's going to protect the car or the system. They're going
to require physical access to get to the networks, or they're going to have, you know, as in
the voting machine, this hardware that prevents code injection. And the assumption is that
these defenses won't be bypassed so when they are, there's no additional protection and so if
you think about the car, for example, there's no reason that my taking over the radio should
ever let me take control of the brakes. That simply shouldn't happen. In both cases the attacks
were really undetectable. There's no way to tell after the fact that an attack has occurred. And
finally, it's rather difficult to update the software that is running on these computers, so you
can imagine something like the Chrome model where basically whenever a new update goes
out, everybody's browser more or less is updated basically at the same time. In both of these
cases, this is not the case; for the voting machine, somebody would have to go through and
replace each ROM individually, and for the cars, the owners of the cars would have to take
them to the dealership, and then they could update the software for each individual car. But
many people don't even go to dealerships. They just take it to their neighborhood mechanic,
who might not have the ability to do this. However, in the case of the cars, this is actually
starting to change with firmware over the air, so that you can update your firmware remotely.
This has its own security implications, but if this is done right, then perhaps this is starting to
be solved. So looking
forward, the embedded systems community can really learn a lot from the PC world when it
comes to security and so in particular, it can avoid making the same mistakes that the PC world
has made. So these are things like defending against simple buffer overflows. This is running
your code through standard tools that can check statically that these sorts of things cannot
happen. This is fuzzing your own software to make sure that it behaves correctly even in the
presence of input that you didn't expect. So all of these sorts of things they could start doing
immediately, and could really make their systems a lot safer. However, I claim that the
embedded systems world can go quite a bit further in terms of what they can do for security.
So there are a number of techniques that can defeat all of the attacks that we used, because
we just needed the simplest thing possible, and that, in fact, can provably defend against a
large class of attacks. So control flow integrity, developed here at Microsoft, could be
employed, and that would prevent a huge class of attacks. Now software vendors have not
implemented these
sorts of things in products, because they're not willing to take the performance overhead as we
were talking about earlier. So this means that even though we know how to defend against all
sorts of attacks, we simply don't. But in the embedded systems world you are typically not on
the bleeding edge of performance, and so if it takes, say, twice as much computing power to
implement CFI, well, that's no real barrier. You just buy the next faster processor in the catalog
and the problem is more or less solved for you.
Yeah?
>>: I don't buy that. It seems like here anybody in the chain wants to sell their thing at the
lowest cost. So if you need more processing to get more security and your competitor is not
doing that, then he is going to sell it for cheaper and you are going to lose a bid. So isn't it
exactly the same ecosystem as the PC ecosystem?
>> Stephen Checkoway: No, because, well, I guess it depends on whose perspective you're
thinking of. If you're thinking of the third-party suppliers, they perhaps don't want to do this
because they don't want to, they don't want to say well, we can build your part but it's going to
cost us, you know, an extra dollar per component, but if the manufacturer said, no, you really
need to do this, all of you, if that's part of the specification…
>>: [inaudible] regulation, right, I mean in some sense if there was a regulation that said that
you had to do this then people would [inaudible] the cost. The same is probably true for PCs. If
there was a regulation that said that you had to run this kind of security measure, then
everyone would just bite the bullet and, you know. So is that a reasonable direction?
>> Stephen Checkoway: I think it is. Well, I don't know if regulation is necessary, but I think
that the manufacturers can decide that they want to do this, that they want all of their suppliers
to do it. The suppliers would then already have to implement this functionality, and since they
want to use the same components across multiple cars (they want to sell to multiple
manufacturers), once they have this capability, I feel like one manufacturer could take the lead
and say, this is what you need to do. But I would hope that they would actually get together and
decide that, hey, this is an important problem to solve. Could the PC world do this? I think they
could. I don't know that they will. I'm not going to say that they don't care enough, but they're
not doing this. I don't know. At least it seems like the automotive industry has decided that
security is important to it, and so hopefully they will do things like this. Yeah, Josh?
>> Josh: Perhaps the market question is, do you have any idea how much it would cost to
actually put good security in each car? And then the question becomes, would consumers pay
that additional amount?
>> Stephen Checkoway: Right, and so I don't know how much it would cost. After having done
this whole study, we decided that cars were basically insecure by design. All of the computers on
the network are trusted and there's no real way to get around that. So you have to basically
throw this away and start over with a new internal computer architecture that is designed with
security in mind. And the car manufacturer that we spoke to came to exactly the same
conclusion, and in fact they have initiated a worldwide effort to redesign their cars' internal
architecture with security as a focus. They have hired a bunch of people working on security and
so forth.
>>: It seems like it would be a substantial liability issue for the car manufacturers now that
you've published these results. They have millions, hundreds of millions of cars with these known
defects, and you've published lots of details about these defects, so they certainly know about
them. If I were a lawyer [laughter] I would drag these papers into court, or drag you into court,
and say, you know, you told them about it and they didn't do anything about it, and the next
generation of cars that came out the next year had the same flaws in it, so it's their fault.
>> Stephen Checkoway: So they're definitely--so the manufacturer that we talked to was
extremely responsive to us. They wanted more information from us all the time. They wanted to
know what it was that they could fix immediately that would make their cars better. They
wanted to know specifics, and they wanted principles that would help them secure the cars as is.
In at least one case we told them about a flaw, I think in the telematics unit, just in time for
them to get it fixed before they shipped the next model year. So they've taken this very
seriously. They seem to want to add security to their cars. Now, is this a liability issue? I'm not a
lawyer, so I don't want to speak to that.
>>: I guess it's a very related question, is there evidence that this is actually happening?
>> Stephen Checkoway: I am not aware of any evidence that this is happening. The only thing
that I know of that is really similar, well, there was the rental car company with the disgruntled
employee who did something like disable all of the cars using whatever third-party system they
had in there to disable cars. But I've been told that you can buy a device that performs a fairly
similar attack: it will bypass the antitheft measures in a car and will start the engine. So you can
buy these things; you break into the car, you plug it into the onboard diagnostic port, and then it
does the rest for you. So those exist. Whether people are attacking cars remotely, I think the
answer is probably not, but I don't know for sure, and we were certainly approached by a bunch
of government agencies and military branches that wanted this technology [laughter]. So some
people are interested in it. Yeah?
>>: How good would it be to design two networks, where one is only for internal car control and
the other is completely for entertainment and [inaudible]?
>> Stephen Checkoway: Right. That is certainly a good idea. This is what the airline industry
does: to the best of my knowledge, they keep the infotainment network separate from the rest
of the avionics. You could certainly do something like that. In fact, the cars that we looked at
had two networks, and our initial hypothesis was that one of them was the low-security network
and the other was the high-security network, just based on the computers that were on each.
However, after speaking to the engineers at the car company, we found out this was not the
case; the split was purely a matter of meeting hard real-time constraints in communication. So
one could, in fact, separate the networks in many cases, but there are some components that
really do legitimately need to talk to multiple networks, and those are the places where I think
you should focus your security. The telematics unit needs to be able to monitor and control all of
the networks, so it needs to sit on all of them, and that's something where you really need to put
a lot of effort into security.
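A minimal sketch of that kind of boundary in C: a gateway that forwards only a whitelisted set of
message IDs from the infotainment network toward the control network. The IDs and function
names here are hypothetical placeholders, not drawn from any real car:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical message IDs the infotainment side may legitimately
     * send toward the control network (e.g., volume and display status). */
    static const uint32_t allowed_ids[] = { 0x2F0, 0x2F1 };

    /* Stub: a real gateway would transmit on the control bus here. */
    static void forward_to_control_bus(uint32_t id, const uint8_t *data, uint8_t len) {
        (void)id; (void)data; (void)len;
    }

    static bool id_allowed(uint32_t id) {
        for (unsigned i = 0; i < sizeof allowed_ids / sizeof allowed_ids[0]; i++)
            if (allowed_ids[i] == id)
                return true;
        return false;
    }

    /* Called for every frame arriving from the infotainment network;
     * anything not on the whitelist is silently dropped at the boundary. */
    void gateway_filter(uint32_t id, const uint8_t *data, uint8_t len) {
        if (id_allowed(id))
            forward_to_control_bus(id, data, len);
    }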
All right. So these were both pretty big projects that I have worked on, and I had a large number
of collaborators without whom these projects would have been significantly more difficult. And
with that, thank you very much.
[applause].
>>: How old a car should I buy?
>> Stephen Checkoway: How old a car should you buy? So the thing to keep in mind is that
computers have made cars much safer. They are safer today than they were, [laughter] so you
should buy whatever car you most enjoy driving.
>>: Do you have a story about a horn?
>> Stephen Checkoway: Yes, I do have a story about a horn. At the University of Washington
they were fuzzing the car's internal networks: they would target a computer, push a button on a
laptop that would send a random packet of data, and then examine the physical state of the car
to figure out what had changed. This is how we figured out how to lock the brakes, for example.
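The packet-sending side of that setup might look roughly like the following C sketch, written
against Linux SocketCAN; the interface name "can0" is an assumption, and the actual test rig
was not necessarily SocketCAN-based:

    /* Send one random CAN frame; run repeatedly, observing the car
     * between packets, to fuzz the bus in the way described above. */
    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        if (s < 0)
            return 1;

        struct ifreq ifr;
        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);   /* assumed interface */
        if (ioctl(s, SIOCGIFINDEX, &ifr) < 0)
            return 1;

        struct sockaddr_can addr;
        memset(&addr, 0, sizeof addr);
        addr.can_family = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;

        srand((unsigned)time(NULL));
        struct can_frame frame;
        memset(&frame, 0, sizeof frame);
        frame.can_id  = rand() & 0x7FF;                /* random 11-bit ID */
        frame.can_dlc = rand() % 9;                    /* 0 to 8 data bytes */
        for (int i = 0; i < frame.can_dlc; i++)
            frame.data[i] = rand() & 0xFF;

        write(s, &frame, sizeof frame);
        close(s);
        return 0;
    }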
So while they were fuzzing the computer that was responsible for honking the horn, they sent it
the honk-horn command by mistake, or rather, they simply didn't know what they had sent. At
the time we were unaware that it was pretty easy to stop: you could send one additional
command which meant stop performing all of these silly actions, or even just unplug the
computer, and within 10 seconds it would stop. But we didn't know this at the time, so the horn
just started blaring, and it got the attention of the University Police, who came over and
demanded to know what they were doing. So, you know, you've got a bunch of grad students
sitting around a car that is just honking, and everybody is like this, and they said, well, you
know, we're doing research. And the officer said, on the horn? [laughter]. In later experiments
we either disabled the horn or, I think at the University of Washington, they just replaced it with
a small speaker that made a little buzz [laughter]. And related to that, we disabled the airbag, if
you were wondering. When the manufacturer found out what we did, they said, do not play with
the airbag. It can kill you. This is not… Any other questions?
>>: Is the airbag on the same electronic, you know, sensor network? In other words, it seems
like that would be a very dangerous thing; deploying an airbag in the middle of somebody driving
down the road would be pretty nasty.
>> Stephen Checkoway: Yeah, extremely.
>>: I mean it seems like, one command.
>> Stephen Checkoway: Yeah, I think you can, but I don't know what it was. I don't know how
to do it. We certainly didn't do it. There was definitely something there: we could spoof the fact
that the airbag had been deployed, but that didn't actually deploy it.
>>: Could you disable the airbag?
>> Stephen Checkoway: Yes.
>>: How did you do that?
>> Stephen Checkoway: Pulled the fuse.
>>: Oh.
>> Stephen Checkoway: Yeah, so, we really didn't want… We had no interest in the airbag
going off. And actually I spent most of my time in the passenger seat, so even if the driver-side
airbag went off, no problem. [laughter]. All right. Thank you. [applause].