>> Galen Hunt: I'd like to welcome you to our talk this morning. It's my pleasure to introduce Nickolai
Zeldovich. Nickolai has the distinction of being the only candidate I've ever invited to interview for a job at
Microsoft who turned me down -- was it two days? -- three days before the interview and said, oh, I'll take
this job at MIT instead. Go figure, right? Anyway, I'm really glad to have him here.
When he decided not to interview, we did decide it would be great to have him come up and give his talk on HiStar and get to meet people. He's David Mazières's student at Stanford, or I guess you've graduated now, so you're not Dave's anymore, huh?
And he's headed off to MIT this fall as a professor. So we're really quite pleased to have him here.
>> Nickolai Zeldovich: Thanks very much. So I'm going to tell you in this talk how we can build secure
systems out of buggy code by using this idea of information flow control.
So let's see, why is it so difficult to build secure systems in the first place? Well, today a single bug in almost any line of code in our systems can usually lead to some kind of security vulnerability.
So it's not surprising then that we are seeing very simple bugs in applications leading to disclosures of many Social Security and credit card numbers. And, ironically, even security software itself often has bugs in it as well.
So, for example, Symantec's virus scanner, which is supposed to improve the security of our systems, actually had a bug in it that exposed 200 million machines on the Internet to attack.
So our current strategy for dealing with this kind of security mess is to try to come up with techniques to find and fix every one of these bugs, from buffer overflows to SQL injection attacks and so on.
But, in practice, this just turns into an arms race with the attacker over who is going to be the one who finds the next bug, whether it's going to be us or the attackers.
And experience shows that we can't actually eliminate all these bugs in the first place. So this will just keep
going on and on. So I claim that the strategy is simply not sustainable in the long run.
So to understand this problem in a bit more detail, let's look at an example application here to understand
what does it mean for a system to be secure. So here we have a simple web application such as a job
search website.
And at a high level, the web application itself is concerned with security of different user profiles. So, for
example, making sure that one user cannot look at someone else's resume in the system.
One level down, the web server, is concerned with insuring which web browsers can make what different
http requests to the system. Further down the operating system probably has some mechanism for
protecting different Unix users from one another and the hardware provides some hardware mechanism for
protecting the kernel from application code.
What all this means, taken together, is that if we want the whole system to be secure, code at every one of
these layers has to be correct. Even if we somehow managed to build a perfectly secure hardware platform and a perfectly secure operating system, the application as a whole could still be compromised by bugs anywhere up the stack.
So this is just not going to happen, of course, because we have millions of lines of code at each one of
these layers that has to be correct in the current model.
So what can we do about this problem? Well, I claim that as long as the security of our systems continues
depending on all of this code being correct, we're basically doomed.
However, in this talk I will try to convince you that we can still potentially build secure systems despite bugs
in all of this code.
So to understand how this is going to work, let's step back for a second and rethink how security should
work. So at a high level, most security concerns we have in a system have to do with the data in the
system, not necessarily with the code.
So, for example, I don't want my financial data being sent out to an attacker's website. And my password
from my system shouldn't be sent to some phishing system.
And one user's profile on a web server shouldn't be getting sent to a different user's web browser.
Suppose we could actually control all these data movements in a system, somehow, in the operating system. Then as long as our data was secure, we wouldn't actually have to care what the code was doing with our data.
And this would achieve our goal of building a secure system without having to care whether the code is
buggy or not.
>>: I have a question. I apologize. What's the -- I just want to know one thing, the code, what's the
difference between buggy and being secure?
>> Nickolai Zeldovich: We're roughly assuming it's the same thing. Buggy code can probably be taken over by an attacker, so to a rough approximation we assume that it is controlled by an attacker to some degree.
>>: Well, then by definition isn't that buggy code insecure?
>> Nickolai Zeldovich: So the goal of -- in this case we're sort of differentiating between functionality and security. We're saying that we'd like to build a system that should always be secure, but if the code tries to violate security, the system will stop functioning rather than break your data privacy guarantees, for example.
>>: So you're separating security-related functionality of the code?
>> Nickolai Zeldovich: In some sense, yes. So basically what we want to say is that here's a system as a
whole. It's either going to perform this function or it's not going to perform anything at all.
So currently we have all these data movement policies in our systems -- so, for example, making sure that a user's profile can only be sent to his own web browser by the web server, and so on.
But all these policies are being enforced by code all over the place in all these different layers. And as a
result code enforcing all these policies has to be correct, as indicated by the yellow color here. So this
code has to be trusted.
And what I will argue for is that this idea of protecting data in a system should be abstracted into a common mechanism provided by the lowest level in the system, either by hardware or by the operating system. And the security goals of all the higher levels should be reduced to this common protection mechanism.
So, in particular, I will argue that this common mechanism should be about controlling data movement in a system, which actually cuts across different layers of abstraction.
So what I mean by this is that if you think about data in a system, we have the same data in memory pages and in files, and it can also be thought of as representing things in applications, such as your user profile in a web server or your tax information in a financial application.
And all this means is that any security policy that we have about these high-level abstractions in an application can also be directly translated into a data movement policy on the low-level abstractions that actually store the data in hardware or in the operating system.
And we can control these data movement policies at these low level abstractions as well. And this is
fundamentally what's going to allow us to enforce these data movement security policies in a small amount
of code.
And I will show you later how about 100 lines of code can enforce the security policy for a fairly complex application. So this idea of controlling data movement in systems has been around for a while. It was originally used in the '70s in military systems to ensure that top secret data cannot be written to unclassified files by applications.
However, even though these military operating systems did provide an information flow control mechanism
at a low level, they didn't actually allow higher-level applications to make use of this mechanism. What this means is that if we were to write a web server on a military operating system, we would have to basically implement our security policies from scratch again, making our entire software stack part of the trusted computing base.
So some more recent work, such as the Jif programming language or the Flume system, has actually allowed higher-level applications to make use of an information flow control mechanism.
So, for example, in Jif and Flume, a web application can use the information flow control mechanism provided by the underlying system to enforce its own security policies.
However, in all these systems there's still millions of lines of code in the remaining components here
indicated by the yellow color that have to be correct in order for the system to be secure.
And what I will argue in this talk is that information flow control should be the fundamental protection
mechanism provided by the lowest level in a system and all other protection in the system should be built
on top of this one mechanism.
And in this talk I will show you how information flow control can be enforced by a small 20,000-line kernel
or even by the hardware itself.
So, although this seems like a tall order, the information flow control mechanism we're going to be talking about is actually fairly simple. What we're going to do is associate a label with every piece of data in the system.
And as I mentioned earlier this label is going to apply to data at all different levels of abstraction from
hardware to application data and this label is going to follow the data as it moves around through the
system and ultimately this label is going to specify what can happen to the data. So, for example, it can
make sure that any copy of my Social Security number cannot be sent to an attacker's website.
And if we get this one simple mechanism right, the hope is that most of the other code in the system is not
going to have to worry about security. However, even though this mechanism seems fairly straightforward, it's not at all obvious how we build all other protection in the system on top of this one mechanism.
So, for example, how do we implement things like user accounts with this simple information flow control
mechanism? Or if we do that, how do we give users and applications access to the same protection
mechanism that's being used to constrain them in the first place?
Or what happens when applications and users misuse this mechanism and create a process that they can
no longer communicate with or kill? Or how do we even manage a system where the system administrator
might not have the rights to talk to most of the processes on his own machine?
So in this talk I will talk to you about three systems we built at Stanford that follow this idea of building
information flow control as the lowest level protection mechanism and using it for all other protection
primitives higher up.
So the bulk of the talk is going to be about this operating system called HiStar that we've built that
controls information flow in a small OS kernel.
I will then also briefly mention a hardware system called Loki that shows that if we can modify the hardware, we can actually shrink the trusted computing base even further and enforce information flow control largely in the processor itself.
And, finally, I will briefly mention a distributed system called DStar that extends this idea of information flow control to a distributed system of multiple machines.
So let me start out by talking about information flow control in this HiStar operating system. And I'll start
out with an example application of a virus scanner.
So here on the left we have a virus scanner process that reads our private data and perhaps stores some
temporary state. On the right we have an update process that receives new versions of our virus database
from the network.
So I don't expect any of these processes to be malicious by design. But, as I mentioned earlier, even Symantec's virus scanner had vulnerabilities in it.
So to the extent possible we should probably treat both the virus scanner and the update process as
malicious here. The question is: If these guys are malicious, can we ensure that our system is still secure?
In particular, can we ensure that our files are not corrupted or not sent over the network?
So at some high level this seems like the perfect job for information flow control. We want to associate
some kind of red label with our private user files on the left there. It will also be associated with the virus
scanner process and any temporary state that it saves.
But this label will somehow prevent the virus scanner from sending our data out onto the network. So, let's
see, can we actually implement such a policy in an operating system like Linux today?
So what can go wrong? Well, one obvious thing is that the virus scanner can just take our data and send it
out onto the network directly.
So we can fix this by restricting access to sockets in some manner. But then the virus scanner can also
collude with the update process, and if they're both malicious, it can pass the data to the update process
which will send it on to the network in turn.
This guy, by definition, needs access to the network. To fix this, we need to somehow restrict interprocess
communication in the system as well. But then the virus scanner can also write our private data into a temporary file and the update process will read it out from there.
Or the virus scanner can set its process title to the contents of your private file and the update process will
run PS to observe it. Or the virus scanner can modulate disk space usage in some detectable fashion or lock different regions of the virus database and so on. Or it can even take over some unwilling process in the
system using a debugging mechanism and have that process leak the data out.
So it seems like this list of ways of leaking data on Linux just keeps going on and on. Is there any hope for
us to actually enforce such a simple information flow control policy here?
So it turns out that the problem is that the Unix interface, at which I've been describing things up to this point, is too high level for controlling information flow. So this Unix API, shown as the blue line on the left here, has two fundamental problems with it.
One is that there's just too many ways for data to move around inside of the kernel as I illustrated with all
these examples just now. And the second problem is that Unix fundamentally associates protection with
processes and files. Not with the data inside of these objects.
As a result, it's very easy for a process to launder data by reading the data from one file and writing it out
onto a different file with different protection permissions.
So as a result, using this Unix API as a security boundary, as indicated by this red line, is simply a bad idea if we want to have any guarantees of enforcing information flow control here.
So to solve this problem in HiStar, the operating system that we've designed, we split up this traditional boundary: a lower-level security boundary is provided by the kernel, while the same existing Unix API is provided at a much higher level. There is a Unix library in our case that translates this traditional Unix API into our much lower-level system call interface, which is what actually enforces security in our case.
And the question here is: How do we actually design this kernel mechanism such that, on the one hand, the kernel is small and simple and we have strong enforcement of our security guarantees, but at the same time the interface is general enough for us to implement a Unix library on top of it, to provide all the things we have come to expect from an operating system, and to capture all the security policies we might want from this operating system as well?
So at a high level, the approach we're going to take is as follows. The kernel is going to provide a fairly simple interface that consists of six kernel object types, which I will tell you about in a little bit. And these object types are going to be expressive enough for us to build a Unix-like environment on top of them.
And the only security mechanism provided by the kernel will be an information flow control mechanism. It's going to be egalitarian, in the sense that any process in the system can make use of the mechanism, not just the system administrator. And I'll show you how we can build everything in this system from these simple primitives.
What gives us some hope that we found the right primitives is that the same set of simple mechanisms
seem to be solving a whole bunch of different problems for us, as I'll be illustrating in the rest of the talk.
So I will start off by telling you a little bit more about these kernel mechanisms. And then I will show you
how they can be combined together to achieve some interesting functionality.
Then I will argue for why these mechanisms provide better security guarantees than Unix does today and
finally show you some applications of these mechanisms.
So to start off, here are the six kernel object types that our kernel has. Four of them are probably familiar
to anyone who has worked on an operating system before. We have segments, threads, address spaces and devices.
And what this means is that we can implement all the things you come to expect from an operating system.
But we also have two unique kernel object types that HiStar provides that are key to achieving our
security guarantees.
We have a gate object that provides interprocess communication for us, and a container object that is similar to a directory in Unix, which stores all other kernel objects. So every kernel object is stored in some other container.
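To make that interface concrete, here is a minimal sketch of those six object types as a C enumeration; the identifiers are illustrative, not the kernel's actual names.

```c
/* The six HiStar kernel object types, sketched as an enum.
 * Identifiers are illustrative, not the kernel's actual names. */
enum kobj_type {
    KOBJ_SEGMENT,       /* variable-length chunk of memory, like a file */
    KOBJ_THREAD,        /* unit of execution */
    KOBJ_ADDRESS_SPACE, /* maps segments into a virtual address space */
    KOBJ_DEVICE,        /* network card, console, and so on */
    KOBJ_GATE,          /* protected IPC entry into another domain */
    KOBJ_CONTAINER,     /* holds other objects, like a directory */
};
```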
And I will describe these kernel objects in a little more detail later on. Each kernel object has a label associated with it that provides information flow control in our case. In particular, this label describes the security properties of the data contained inside that object. And the way I will describe labels in this talk is by analogy to colors.
So here I have a thread object in the middle and two segments analogous to files, effectively on either side
of it. I will use the color yellow to indicate objects that have my private data inside of them.
So I might label all of these three objects with this yellow no smoking symbol which means they contain my
yellow private data inside of them. And the rule that kernel enforces is that yellow data can only flow to
other yellow objects.
So in this case the thread can read and write both of these segments. We can also have multiple colors of
data in the system. So if this file on the right is labeled purple, that means it contains someone else's
private data and the thread is not allowed to either read or write this file.
And, finally, we can have multiple colors of data in the same object's label. So if the thread in the middle is labeled both yellow and purple, that means it can see both of our private data: it can read both of our files, but not write them, because that would leak information about the other user.
This would seem to form a perfectly secure system, in the sense that all of my data is perpetually labeled yellow, but at some point I want to actually get the data out of the system. So for that reason we introduce the notion of ownership of data that allows a process to bypass this mechanism. If this thread in the middle has ownership of this yellow data, indicated by a yellow star symbol here, it means that it controls this yellow data and can take yellow data from an object on the left and write it out into a non-yellow object on the right.
So I'm explicitly trusting it to bypass these restrictions and what the thread might do is encrypt my data, for
example.
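To make the flow rule concrete, here is a minimal sketch in C of the check just described: data may flow from one object to another only if every category (color) tainting the source also appears on the destination, unless the acting thread owns (has the star for) that category. HiStar's real labels also carry per-category levels; this sketch collapses them to a simple taint set.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t category_t;           /* a "color" */

struct label {
    const category_t *cats;            /* categories tainting the data */
    size_t ncats;
};

static bool label_contains(const struct label *l, category_t c) {
    for (size_t i = 0; i < l->ncats; i++)
        if (l->cats[i] == c)
            return true;
    return false;
}

/* Information may flow from src to dst iff every category tainting src
 * is either present on dst or owned (starred) by the acting thread. */
bool can_flow(const struct label *src, const struct label *dst,
              const struct label *owned) {
    for (size_t i = 0; i < src->ncats; i++) {
        category_t c = src->cats[i];
        if (!label_contains(dst, c) && !label_contains(owned, c))
            return false;
    }
    return true;
}
```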
So anyone in the system can actually allocate a new color, and they get the corresponding star, and they can then use this color to specify information flow restrictions on any data they themselves introduce.
So in this sense, the label mechanism is egalitarian, because every process in the system is equivalent. Anyone can request these colors. And as far as the kernel is concerned, all colors are equally powerful.
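A sketch of what that egalitarian allocation might look like, reusing `category_t` and `struct label` from the previous sketch; the system call names here are assumptions, not HiStar's actual interface.

```c
/* Assumed system-call wrappers -- HiStar's real interface is named
 * differently, but the shape is the same: any thread may allocate a
 * fresh category and automatically receives ownership (the star). */
category_t sys_category_alloc(void);
int sys_segment_create(uint64_t container, const struct label *lb);

void protect_my_data(uint64_t home_container) {
    category_t c = sys_category_alloc(); /* new color; caller gets the star */
    struct label lb = { .cats = &c, .ncats = 1 };
    /* Data in this segment can now reach only objects labeled with c,
     * or pass through threads that own c -- that is, through us. */
    sys_segment_create(home_container, &lb);
}
```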
So these kernel object types and these labels that I've just described to you form the entire low-level kernel interface provided by the HiStar kernel itself. Now I will tell you how we can actually make use of this mechanism at a higher level to provide some more interesting functionality.
So to start with, we can implement all the standard operating system abstractions. So, for example, a Unix process in HiStar is composed of a container that has a thread executing in an address space object, and there are separate segments providing all the different pieces of memory you want.
And this actually results in a fairly usable Unix-like system. So my laptop is actually running this HiStar
operating system. And I can switch to a different virtual terminal and log in as myself. List my files. Check
that I'm running under my user account. List the running processes and so on.
So it actually results in a fairly usable Unix-like environment. The interesting thing is that it's not just another clone of Unix or Linux. If you recall earlier, we had this virus scanner example where we wanted to make
sure that a malicious virus scanner could not corrupt or disclose our private data onto the network.
So we can easily achieve this in HiStar by labeling my private data yellow; then the virus scanner and
any temporary data it saves will also be labeled yellow. But this yellow label will automatically prevent the
virus scanner from disclosing my data out onto the network or to anyone else in the system.
And the nice thing here is there's no need anymore to audit this virus scanner code for security. The
operating system automatically ensures that the only thing it can do is to look at my private data but not to
disclose it out onto the network.
But there's a problem here, namely, how do I actually get the output from the virus scanner out onto my terminal, given that my terminal might not actually be labeled yellow in the system? This is exactly the
reason we introduced the notion of ownership earlier.
So we have a small process called wrap in our system that has the corresponding yellow star which allows
it to declassify this yellow data and send it out onto my terminal. And this wrap process is fairly small. Just
140 lines of code. And the nice thing here is that the small piece of code can enforce this fairly interesting
security policy on our large and frequently changing application, without having to understand what the
application is exactly doing.
Similarly, we can use the same information flow control mechanism to ensure that a malicious virus
scanner cannot corrupt my private data either. So we can use a separate purple color here to label the virus scanner and its temporary files as well. As long as my files are not labeled purple, the operating system's information flow control rules ensure that the virus scanner cannot overwrite my data, although it can still send the output to my terminal here.
>>: You mentioned the PS trick at the beginning, the scanner setting its process title. How do you prevent that from happening?
>> Nickolai Zeldovich: So what actually happens -- so basically what we do in the kernel is we say we want security to take precedence over functionality in all cases. So it is up to the Unix library to implement something that looks like PS, but it cannot violate information flow control rules, because the kernel enforces that.
So as a result, the implementation of PS in our user-space library can only list the processes that you have the right to know exist.
So when I run PS in my example here, you only see the processes in my terminal, for example. You don't see every single process in the system, because allowing you to see that might violate information flow rules.
>>: In your earlier example you mentioned covert channels through the disk, so another process in the system can't (inaudible) the disk size?
>> Nickolai Zeldovich: So we have -- we don't allow any processes -- we have no notion of kind of how
much space there is or how much space has been taken up. We have a notion of quotas. But those are
designed in a way that I will describe maybe later. But they don't actually leak information.
So we've actually implemented this wrap application that I just mentioned earlier. This wrap program can run any standard Unix application you want that runs on HiStar, but it provides these two extra security guarantees that you cannot get on Unix today.
In particular, the application you run is not going to be able to corrupt any of your files. And it's not going to
be able to disclose anything onto the network. And it does this by setting this label on the application and
running it and this is all done in 140 lines of code. And the kernel then does all the heavy lifting, if you will,
of enforcing the security policy. And all of this together can enforce the security policy on any application.
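Roughly, a wrap-like launcher boils down to something like the following sketch; the system calls are assumed names and the real 140-line program differs, but the idea is just to allocate fresh colors and taint the child with them, without handing over the stars. It reuses `category_t` and `struct label` from the earlier sketch.

```c
/* Sketch of a wrap-like launcher, with assumed syscall names. Two
 * fresh categories give the two guarantees: a secrecy color taints
 * the child so its output cannot reach the network, and an integrity
 * color keeps its writes away from my files. Only wrap keeps the
 * stars, so only wrap can declassify output to the terminal. */
category_t sys_category_alloc(void);
int sys_process_spawn(uint64_t container, const char *path,
                      char *const argv[], const struct label *taint);

int wrap_exec(uint64_t container, const char *path, char *const argv[]) {
    category_t cats[2];
    cats[0] = sys_category_alloc();       /* secrecy color   */
    cats[1] = sys_category_alloc();       /* integrity color */

    struct label lb = { .cats = cats, .ncats = 2 };

    /* Spawn the child tainted with both colors but WITHOUT the stars:
     * it can read my files, but everything it writes stays inside the
     * sandbox until wrap, which owns the colors, declassifies it. */
    return sys_process_spawn(container, path, argv, &lb);
}
```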
So, for example, I can run wrap on simple commands like LS and it seems to work. I can also run wrap on
this virus scanner I downloaded from SourceForge, and it seems to scan my home directory for viruses and
doesn't find anything.
But the interesting thing here is that I don't actually have to trust the virus scanner to do the right thing. Even if it were trying to be malicious, it wouldn't be able to corrupt any of my data or disclose my mail to some Hotmail account or anything like that.
So, in particular, if I get this mystery (inaudible) script from a friend of mine that on a Unix system would disclose my private key and remove my home directory, if I run it under wrap, automatically nothing bad
can happen. It can't actually create any files in a public directory and it can't actually modify my home
directory or any of the files in it.
>>: So based on what you just said about the disk or the quota, you are trying to prevent a covert channel.
But do you think you can prevent all covert channels?
>> Nickolai Zeldovich: So what we aim for is that we want to ensure that our design has no covert
channels in it. And what this means is that, of course, our implementation will have various kinds of covert
channels in it. But because the covert channel is not inherent in the design, we hope to be able to fix any
particular covert channel that bothers you without breaking applications.
And this means that you can kind of tweak the security knob, if you will, at run time without having to
redesign your whole system to increase security.
>>: But adding to this question: assuming that you're not controlling the hardware, you're not going to be able to stop all of these hardware-based covert channels, right?
>> Nickolai Zeldovich: Fundamentally, yes. If you have a single operating system running multiple security levels, some covert channels are going to be unavoidable.
What we want is basically a design that has no inherent covert channels, which means that any covert channel bound that you would like to achieve is theoretically achievable by mitigating the covert channels in some way, without having to break applications.
So you can maybe allocate a separate CPU for every process, or maybe you have to have very coarse-grained scheduling or flush caches on each context switch; you can do this and pay a performance cost for security, but it doesn't require a redesign.
>>: I'm having trouble seeing how this would help prevent something like a SQL injection attack, where you have a process that gets some data over the network, does high-level processing on it, and decides to update a database. It seems like if you're just at that label view, the same thing is happening: you've gotten the request and done a good update of the database, versus gotten the request and done a malicious update of the database. It seems in both cases the application makes a database request, but at a high semantic level the database is doing what it wasn't supposed to, like dropping a table.
>> Nickolai Zeldovich: What we'd ultimately like is to express security properties using these labels: for example, that regular website users shouldn't be able to drop a database table. Or, for example, if you're doing a select query on a database, we'd like to associate a different label with each user's rows in the database, because the application can be malicious and send all kinds of select queries at once. It needn't even be a SQL injection attack. But the output from the database comes back labeled, and you can only send the right user's data to the right user's web browser.
>>: If it's a label, there's some shared state (inaudible). In other words, if you were to individually label every (inaudible), there would be, say, metadata about the database which all users have to modify; whenever anybody inserts a (inaudible) into the database, there's some structure that describes the database that has to be modified. If you drop a table you might just be sharing --
>> Nickolai Zeldovich: Absolutely. And at some point we have to have some code that is correct in this case. And what we're really going after is that we want to minimize the amount of trusted code for
any particular action here.
So, ultimately, yes, we need to have some code that manages the set of tables in the database and the schema for all of them and so on.
>>: There you're talking about the TCB also including this application code.
>> Nickolai Zeldovich: The TCB for the application will include some of its own code for the database or something.
>>: Trusted virus attack.
>> Nickolai Zeldovich: Absolutely. Something has to do something correctly. What we'd really like to do is provide you the mechanism to have a small amount of code enforce a security policy for a much larger application.
For example, this wrap program has to be correct, otherwise it can do anything with my privileges and send
my data anywhere it wants. But the nice thing is this 140-line wrap program can set up these labels and
then run the application in a way that enforces security regardless of the size or the intentions or the
complexity of the application.
And, hopefully, the same can be the case for a web server. And we actually have a web server I'll show you later. It doesn't have a SQL database per se; it uses the file system. But I think the same ideas will hopefully apply there. So you will have to trust some part of the database, but not the entire database, hopefully.
>>: That is part of the HiStar trust (inaudible).
>> Nickolai Zeldovich: It's a program that's not part of the fully trusted kernel. Whenever you invoke the
program, you are trusting it in some sense with your user privileges. So it's like a program in Unix. So you
run LS. You're trusting it, but it's not part of the TCB if you don't run it.
So it's not a part of the TCB for the web server.
>>: So wrap itself has no special permissions?
>> Nickolai Zeldovich: Wrap itself has no special permissions. In fact, I might have two versions of wrap. I can run wrap wrap LS, and the first wrap is trusted, the second wrap is not trusted, and the LS is kind of twice isolated. So it's kind of --
>>: Does that mean if I have wrap I can run a legacy application, and the legacy application doesn't have to
understand the labeling?
>> Nickolai Zeldovich: Exactly. This is actually the GNU coreutils LS -- I haven't modified it in any way. I've just recompiled it for HiStar. So we have a compatibility layer that allows us to do this. Legacy applications that need persistent data, for example, will need a slightly different wrap that enforces a different security policy, one that allows data to persist between invocations, for example, but it will also be fairly small.
>>: Earlier you ran LS without wrap. Is that when you first --
>> Nickolai Zeldovich: Sorry?
>>: You ran LS or P.
>> Nickolai Zeldovich: I ran LS.
>>: You don't need wrap to get that up.
>> Nickolai Zeldovich: Right. So, basically, if I don't run wrap, it doesn't get any kind of colors. It gets my
stars representing my privileges, but it's not restricted in any way.
>>: I see.
>> Nickolai Zeldovich: So this LS could do anything it wants.
>>: Still get your stars but --
>> Nickolai Zeldovich: Exactly, but wrap gives it my stars but also restricts it in some way.
>>: One more question. It seems like HiStar could have mechanisms that would test the integrity of the programs that you have allowed to operate from your TCB, like certificate validation and things of this sort.
>> Nickolai Zeldovich: So the operating system doesn't have any mechanism for attesting user-space programs. You can do it yourself, though. For example, you could modify the shell to read the SHA-1 hash of a binary before running it; and executing a binary is all implemented in the user-space library, so you can ensure the same binary you checksummed is the one you load into an address space and start executing. So you avoid time-of-check-to-time-of-use problems by the fact that the library is in fact loading the executable and creating the address space and so on.
>>: During these demos have you ever been asked (inaudible) without wrap.
>> Nickolai Zeldovich: Not during a practice talk. I recovered. [Laughter].
>>: That's why he's going to MIT. [Laughter].
>> Nickolai Zeldovich: So now let me move on to this question of why HiStar is providing better security than Unix here.
Let's take a look at another example of the way information can leak in a Unix system. Here we have the
same virus scanner process on the left and update process on the right. And we want to make sure the
scanner process can't leak data to the update process here.
Suppose the kernel can enforce the guarantee. But what happens if these two processes share a read-only file descriptor, perhaps representing the virus database? It might seem like a reasonable configuration, because the file descriptor is read-only, so neither guy can write to the file. It turns out each file descriptor in Unix has a seek offset associated with it, so the scanner process can potentially seek the file descriptor back and forth, and the update process can look at the seek pointer and say, aha, that's the information you're trying to leak to me.
So the reason this is a problem in Unix is that these file descriptors, along with all these other kernel abstractions, process titles, file locks and so on, are implemented as part of the kernel, inside of the trusted computing base. So you have to get every one of these abstractions right in order for your security to hold.
And the way HiStar gets around this problem is by implementing all of these different abstractions in the
Unix library above the security boundary. So as a result we don't have to reason about all these different
abstractions in order to enforce security correctly.
So in particular, the file descriptor problem is not a problem on HiStar almost by design, in the sense that every file descriptor in HiStar is implemented with a separate segment object that stores all the data associated with the file descriptor, including the seek pointer.
If the scanner process on the left were to be labeled yellow, it would automatically not be able to update
any of the data in the file descriptor segment because of a single access control check enforced by the
kernel, namely whether information can flow from one object to another.
Or, in other words, just object read and write checks. So in this way, these labels become the only security building block we need in the operating system to enforce security.
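For instance, the per-descriptor state kept by the Unix library might look roughly like this; the field names are illustrative, not the library's actual layout.

```c
#include <stdint.h>

/* Sketch of per-file-descriptor state the user-space Unix library
 * keeps in its own labeled segment (illustrative field names).
 * Because the seek offset lives inside a kernel-checked object, a
 * tainted scanner process simply cannot update it in a way visible to
 * an untainted process: the ordinary read/write label check covers it,
 * with no special-casing in the kernel. */
struct fd_segment {
    uint64_t dev_id;       /* which file/device implementation */
    uint64_t object_id;    /* segment holding the file's data */
    uint64_t seek_offset;  /* the state that leaked on Unix */
    uint32_t flags;        /* O_RDONLY, O_NONBLOCK, ... */
    uint32_t refcount;     /* shared across dup() and fork() */
};
```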
So on one hand it's a very low-level mechanism, in the sense that it controls what information can flow from one object to another. It's also a very expressive mechanism, in the sense that it can be used in our case to implement both Unix user IDs and group IDs and all the things that you expect in a Unix system, and it can also be used for things you cannot even implement in Unix, for example, the security policy of this wrap application that I showed earlier.
This label mechanism is egalitarian, in the sense it allows anyone to allocate a new color in the system and
get the corresponding star and specify more information flow restrictions as they would like.
In fact, the system doesn't even have any inherent notion of super user privileges for the system
administrator. So this is not to say we don't have a system administrator in our system.
Of course, we need some guy to have privileges to back up our data or to reset our password if we forget it. But we do this all very explicitly in HiStar. So if we have two users, Alice and Bob here, each with their own privileges indicated by the yellow and purple stars, they very explicitly give their privileges to the system administrator's shell so he can back up their data.
The privileges of this system administrator are all implemented by convention inside of this Unix library, above the security boundary.
So if the user Bob here on the right wants to actually have some secret data that's not accessible to the system administrator, he can easily do so.
So, for example, if he has this SSH agent process that on Unix holds your private key for authenticating to other machines on the network, he can easily label it with some new green color. And as long as he doesn't give the corresponding green star to the system administrator, the system administrator cannot get at his private key.
Now, of course, there's a potential problem here. Namely, what happens if this SSH agent process goes nuts, allocates more colors, refuses to talk to either the user or the system administrator, and starts using up CPU cycles? What can we do about this process at this point?
The way we solve it in HiStar is by very explicitly controlling all the resource allocation. So every user in the system has a home container, analogous to a home directory, that stores all of the kernel objects representing their processes and files. And when user Bob here starts a new SSH agent process, it starts inside of his home container.
And at this point, even if this process allocates a whole bunch of different colors that prevent it from talking to anyone on the system, the user can still delete that SSH agent process from his home container, and the kernel will then garbage collect it for you.
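A sketch of why this works, with an assumed system call name: deleting the process's container entry is a pure resource operation on the container hierarchy, so it needs no right to communicate with the runaway process or even to read its labels.

```c
#include <stdint.h>

/* Assumed call: drop a child object from a container. Deletion is a
 * pure resource operation on the container hierarchy, so it needs no
 * right to communicate with the runaway process or read its labels;
 * the kernel then garbage-collects everything rooted in it. */
int sys_container_unref(uint64_t container, uint64_t object_id);

int kill_runaway_agent(uint64_t home_container, uint64_t agent_process) {
    return sys_container_unref(home_container, agent_process);
}
```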
And, in fact, this garbage collection mechanism, or container mechanism, is also what allows us to manage the system without any superuser access. So in HiStar, the only special privilege that the system administrator has is access to the top-level container in the system.
This means that the system administrator can give out resources to different users by creating subcontainers, and can revoke those resources by deleting those subcontainers, but it does not give the system administrator the right to look at the data inside of those containers.
What this means is that even if the system administrator's account is compromised the worst thing that the
attacker can do is either reclaim all resources or to give himself more resources. He cannot compromise
the data of other users on the system.
>>: System administrator has the privilege to install (inaudible) to test the memory?
>> Nickolai Zeldovich: In our case he does not. So there are no loadable kernel modules in HiStar at all.
>>: So there's a super administrator then that's configuring the system?
>> Nickolai Zeldovich: So there's some guy who has physical hardware access, and we don't try to prevent him from compromising the system. So, for example, when you install HiStar initially, you, of course, install some kernel, and that has to be correct, and we don't try to solve that problem. You could have some checks in the BIOS to make sure you have a signed kernel or something like that, but at some point you have to have a root of trust, and in our case it's the kernel.
>>: How (inaudible) after HiStar is running, differentiate: one guy has physical access to the HiStar machine, but the other guy remotely cannot?
>> Nickolai Zeldovich: What differentiates them is the fact that the guy with physical access can put in a CD, reboot the machine, and boot off the CD.
>>: It can only happen in certain ways?
>> Nickolai Zeldovich: In effect, the same container mechanism also allows us to implement the file system in HiStar as well. Of course, you need some sort of mechanism to store data persistently in an operating system to implement a file system. The way Unix does this is by providing two different mechanisms for storing data in memory and on disk.
So in memory you call malloc or brk to allocate pages of memory. But on disk, to allocate the space there, you open a file, you call read and write, and you close the file descriptor.
What this also means is that there are two very different ways of protecting data in memory and on disk that are not the same in Unix, and it is actually difficult to understand what the protections on data are when it bounces back and forth between memory and disk.
We actually solve this problem in HiStar by reusing the exact same mechanisms I've already been telling you about to implement the file system as well. So the only thing we need to do in HiStar to implement this is to provide a single level of storage, much as in the Multics or EROS operating systems in the past. What all this means is that all the kernel objects I've been talking to you about are stored persistently on disk, and their in-memory versions are just a cache of their disk contents.
Once we have this idea of persistent store, we can implement the file system with the same kernel objects.
Directories are implemented by containers and every container has a special file name segment that maps
file names to the actual objects implementing that file.
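The file-name segment can be pictured as a simple array of slots mapping names to object IDs; this layout is illustrative, not HiStar's actual on-disk format.

```c
#include <stdint.h>

#define FN_MAXLEN 64   /* illustrative limit, not HiStar's actual format */

/* One entry in a directory's special file-name segment: a directory is
 * a container (whose label guards the objects inside) plus a segment
 * holding an array of these slots. */
struct dirent_slot {
    char     name[FN_MAXLEN];  /* file name within this directory */
    uint64_t object_id;        /* segment (file) or container (subdirectory) */
    uint32_t in_use;           /* whether the slot is occupied */
};
```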
The nice thing about implementing the file system with the same abstractions is that we can use the same
protection mechanism for file system security as well. So if I want to make sure that my mail doesn't end up on my website, I just label my mailbox segment with, for example, this yellow symbol, and the kernel will automatically ensure that my mail cannot end up in my public HTML directory.
And one other nice thing about a single-level store is that it also allows us to avoid superuser privileges for the system administrator. So on Unix, the root user needs superuser privileges to start everything when the system boots up.
But on HiStar, processes don't actually notice a hardware reboot because of the single-level store. So I
will quickly demonstrate this here.
So if I try to reboot my machine -- I have to log in as root. But if I type "reboot", what's actually going on is the kernel is now checkpointing all the kernel objects implementing this Unix environment to disk. And when it writes them all out to disk, it will physically reboot my machine, and, at that point, when the kernel boots up again, it will restore the Unix environment -- I'm sorry for the mis-imaging. So when the kernel boots up again, it will resync at some point, hopefully. Sorry.
So when the kernel comes back up it basically reads the whole environment back from disk and continues
executing exactly where we left off. So the nice thing is that, for example, if you remember this user that had an SSH agent process running, this SSH agent will continue running without any need to trust the system administrator to restart it when the machine reboots. Question?
>>: How are you doing with consistency, now that you've basically got objects stored on disk? The consistency of the objects, say, if HiStar crashes.
>> Nickolai Zeldovich: We have a log, dirty bits on memory pages, all the usual tricks.
So, of course, we want to reboot the machine sometimes for management reasons, if some software is running away and using up all the CPU cycles, and we can do this in HiStar as well; it's just that we have a separate command for doing it.
So we have a command called ureboot that can reboot all the user-space code and the operating system without having to restart the kernel. And all it needs to do is to delete all the process containers in the system, while keeping the file system containers intact, and start a new init process.
And in a few slides I will show you how the gate mechanism can actually allow this (inaudible) process to operate without requiring any superuser privileges as well.
So now let me move on to the question of how we know that HiStar is enforcing any of these information flow control guarantees correctly. What gives us some hope is that information flow control gives us a very precise security goal of what we want to achieve: namely, whenever information flows from one object to another, all we need to do is compare the labels on those two objects to understand whether the operation should be allowed or not.
And this means that we don't have to actually reason about high-level semantics to understand what security checks should be performed. And of course the long-term goal here is to actually verify the security of our implementation; unlike Unix, we have a very precise security goal in mind here. We've done some model checking and static and dynamic analysis to try to do this.
Of course, we haven't verified HiStar (inaudible) yet. But in the meantime the question is how we at least ensure that our design of the operating system here is sound.
In the context of information flow control, this boils down to the question of what we do about covert channels in the system. So in HiStar we classify covert channels into two broad categories. One is covert channels that are not part of the specification. As I mentioned earlier, these kinds of covert channels are nice in the sense that, while they're probably unavoidable, applications, or at least well-behaved applications, don't depend on the presence of these covert channels.
All this means is that we can mitigate them or somehow introduce noise in the system to reduce their bit
rates and so on, without breaking these well-behaved applications.
A much more troublesome kind of covert channel, of course, is one inherent in the specification of the system, because these kinds of covert channels we cannot mitigate or avoid without breaking applications. And the way HiStar avoids these covert channels is by explicitly labeling everything that's part of the specification. So we have a label attached to every kernel object. We have a label for every resource in the system, represented by containers, and so on.
But there's a potentially subtle problem here: namely, what do we do about the label of the label itself? If you remember, earlier we had this example of a seek pointer associated with a Unix file descriptor, and potentially a user can modulate the label much the same way he can modulate the seek pointer to leak information.
And to illustrate this problem I will give you this example of a malicious secret process on the left that has a
secret bit of data that it would like to leak onto the network. And, well, suppose we label it yellow to prevent
it from doing so. It turns out that in many systems it can still leak this one bit of data out onto the network
with the help of two colluding processes and the way it does so is by sending a message to one of the two
processes depending on the bit it would like to leak.
So, oftentimes, sending a message to a process in an operating system that provides information flow control propagates the label to the recipient process. So in this case colluding process one, which receives a message, becomes also yellow, and it can no longer talk to the network. So now any network attacker can say, aha, colluding process one cannot talk to the network anymore; that must be the bit he was trying to leak to me. And in this way we can actually bypass the information flow control restriction.
Turns out that this problem has actually been a thorn in the side of information flow control for some time.
A number of systems have tried to deal with this. For example, programming languages like Jif rely on compile-time checks to avoid these problems: if there are no checks at run time, you cannot leak data this way. But it turns out that compile-time checks are too restrictive. They cannot support dynamic applications like the wrap example I was showing earlier.
Some operating systems, like IX or Asbestos, give up and say this is an inevitable property of dynamic label systems. It turns out they just didn't quite find the right abstraction to provide in the kernel to enable this functionality.
And military systems that, of course, care very much about avoiding covert channels use fixed labels on
everything to ensure security. But they can only do so because they have very few labels in the first place.
They just have secret, top secret and so on.
They cannot implement very dynamic applications like this wrap example either. So the way HiStar avoids this problem in its design is to have immutable labels for all non-thread objects in the system. This means that once an object is created, it has a fixed label, and the attacker cannot modulate that label to convey information.
But because immutable labels are too restrictive for our dynamic applications, we actually allow threads
and only threads to change their own label. And the reason that it's safe for a thread to change its own
label is that any information that goes into this decision for a thread to change its own label is already
available to the thread so it can leak it directly.
So it doesn't hurt us to allow it to change its label just the same. And, of course, we only allow a thread to add and not to remove categories in its label, to ensure that it cannot uncolor itself in some sense.
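In code, the one permitted label change might be checked like this sketch, reusing the label helpers from earlier: a thread may freely add categories to its own label, but may drop one only if it owns it.

```c
/* Sketch of the one label change the kernel permits: a thread raising
 * its OWN label. Categories may always be added; one may be dropped
 * only if the thread owns (has the star for) it. Reuses the label
 * helpers from the earlier sketch. */
bool thread_label_change_ok(const struct label *cur,
                            const struct label *proposed,
                            const struct label *owned) {
    for (size_t i = 0; i < cur->ncats; i++) {
        category_t c = cur->cats[i];
        if (!label_contains(proposed, c) && !label_contains(owned, c))
            return false;   /* trying to shed a taint it doesn't own */
    }
    return true;            /* adding categories is always safe */
}
```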
So to show you how this simple idea of allowing a thread to change its own label lets us implement all kinds of interesting application behaviors, let's take a look at this example of a job search website application here.
On the right here we have a database server process that has a database of job listings. And on the left
we have a job search process that would like to query this database for potential job openings. And what
we'd like to make sure is that the fact that this guy is looking for a job is not revealed to anyone in the
system, not even the database itself. And we'll ensure this with this yellow label on the job search process
itself.
But how can these two very differently labeled processes communicate with one another if they cannot change each other's labels? The way we achieve this in HiStar is by using the gate abstraction I briefly mentioned earlier. What this gate abstraction does is that when the job search process on the left sends a query to the gate object, it switches the client's own thread into the address space and protection domain of the database server process.
So what this means is that this client thread is now running in the database server's address space with its privileges, and it can now read the job listings database, process the query and send the results back to the client's own address space.
All without having to ever change the label of any thread other than the client's thread itself. So in this way we can actually exchange and process information between differently labeled processes without having to change the label of anything other than the client's own thread.
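A sketch of what a gate call might look like from the client's side, with assumed call names; the key point is that it is the calling thread that crosses into the server's address space, carrying its own taint with it.

```c
#include <stdint.h>

/* Sketch of a gate call from the client's side (assumed names). The
 * gate stores the server's address space and privileges; entering it
 * moves the CALLING thread into that protection domain, so no other
 * thread's label ever has to change. */
struct gate_args {
    uint64_t request_seg;   /* segment holding the client's query */
    uint64_t reply_seg;     /* segment the results are written into */
};

int sys_gate_enter(uint64_t gate_id, struct gate_args *args);

int query_database(uint64_t db_gate, struct gate_args *args) {
    /* The client's thread resumes at the server's entry point with the
     * server's privileges, still carrying the client's own taint. */
    return sys_gate_enter(db_gate, args);
}
```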
So, in fact, this gate mechanism also helps us avoid superuser privileges for restarting applications as well. If you remember, earlier we had this ureboot command that killed all the processes running in the system; but there's no need to kill gates, because they're passive objects that don't actively use CPU time. The ureboot command leaves the gate objects intact, and they store the privileges of the database server across process restarts.
So to summarize the design of the HiStar kernel: I've presented to you a very small number of mechanisms that, I hope, seem to be solving a whole bunch of different problems for us. The container mechanism I showed you allows us to control the allocation of resources in the system and to implement the file system. And while I didn't actually mention it, it allows us to discover the labels of other objects in the system as well. The gate abstraction allows us to implement interprocess communication with immutable labels and allows us to avoid superuser privileges in the system.
These labels themselves are very versatile, in the sense that they allow us to implement Unix users, this wrap application example, the security of a database and, as I will show in a little bit, the security of a web server as well. And all these abstractions are provided by a small 20,000-line kernel that everything else builds on top of.
So as I'm running out of time, we'll quickly go through a couple of applications, and I will tell you largely about how we can actually build a web server out of untrusted code in HiStar.
So here's the architecture of a traditional web server like Apache on Linux. There's some process that accepts connections from the web browser and hands them off to an OpenSSL library for decryption; then the plaintext request is passed on to a component that parses the HTTP request, which then in turn runs some application, in this case maybe generating a PDF file using Ghostscript. All this code put together is about a million lines of code that have to be fully trusted in order for the web server to be secure.
So if any one of the million lines of code here were malicious, it could disclose the information of any one of these users. We can greatly shrink this TCB on HiStar by, for example, labeling each user's data with a different color here.
If we run a separate instance of the application code labeled with that user's color, we can ensure that the only way this data can escape is through a component that has the user's corresponding star, which in this case would be the HTTP parser. So as long as this HTTP component is secure, even malicious application code could not disclose one user's data to a different user's web browser.
But this is potentially problematic in the sense that the HTTP component is now fully trusted. We can avoid that on HiStar as well by breaking out the users' privileges into a series of different authentication agents, one per user. If we have a separate authentication agent for every user, we can spawn a separate instance of this HTTP process, which will explicitly authenticate each user by sending the user's password to the authentication agent and getting the privileges back.
So now we have no fully trusted user code in this application.
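Putting the pieces together, the per-request flow might be sketched like this; all the helper names are assumptions for illustration, not the actual web server code, and it reuses `category_t` and `struct label` from the earlier sketch.

```c
/* Sketch of the per-request flow (assumed helper names). Each user's
 * data has its own category; only the small HTTP front end and that
 * user's authentication agent ever hold the matching star. */
struct request {
    int         conn;       /* TCP connection to the browser */
    const char *user;
    const char *password;
};

category_t auth_agent_login(const char *user, const char *password);
uint64_t   spawn_app_instance(uint64_t container, struct request *req,
                              const struct label *taint);
void       send_response(int conn, uint64_t reply_seg, category_t owned);

void handle_request(uint64_t container, struct request *req) {
    /* 1. The agent returns the user's star iff the password checks
     *    out: one bit plus the ownership grant, nothing more. */
    category_t user_cat = auth_agent_login(req->user, req->password);

    /* 2. Run an untrusted application instance tainted with user_cat;
     *    its output can only escape through holders of the star. */
    struct label lb = { .cats = &user_cat, .ncats = 1 };
    uint64_t reply = spawn_app_instance(container, req, &lb);

    /* 3. Declassify exactly this user's reply to this user's browser. */
    send_response(req->conn, reply, user_cat);
}
```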
>>: If I want to leak information, I can still leak it out by sending an image request, where the image request goes to my server and the query string for the image has all the information I want to leak out.
>> Nickolai Zeldovich: The threat model here is this application kind of being malicious. You mean sort of
giving a web page back to the web browser that then performs something else?
>>: Elsewhere.
>> Nickolai Zeldovich: Perhaps in this case you can enforce that the object returned has to have file type PDF or something along those lines.
>>: Okay.
>> Nickolai Zeldovich: So, of course, that's a valid point. And kind of the end-to-end goal in our case is to
be able to label parts of the document as well so that you know that, well, this part of the HTML document
was synthesized by the application, so I shouldn't trust it as much, so I shouldn't send any of this data to
outside servers. So if we can extend this idea of information flow control into the web browser where we
can label different parts of the HTML DOM, hopefully we can solve that as well. That's kind of the
information flow control view of this problem.
So we can also ensure that the OpenSSL library is not part of our trusted computing base either. If we label each instance of the OpenSSL library with a different color, one per TCP connection, we can ensure that even if the OpenSSL library is compromised, the worst thing it can do is send your data unencrypted, maybe back to your own web browser, but it cannot send your unencrypted data to someone else on the network.
>>: How does the HTML module know what image is safe to declassify?
>> Nickolai Zeldovich: The guarantee we're going for here is ensuring that any data generated from your data only ends up at your web browser. So it's kind of connecting the pipes in the right direction, or connecting the dots in the right way, not necessarily interpreting the data along the way.
So you could, of course, try to do some data interpretation but that seems much more prone to error. So
this is a very limited property that we are achieving here.
So we can also ensure that the SSL library can't disclose your private key either, by having a separate RSA daemon.
But as I'm running out of time, what I'll say is basically that by using information flow control, we
can enforce the security of this web server in a fairly small fraction of the overall size of the web server. We
can enforce these data flow security properties in about 6,000 lines of trusted code, out of a total of a million
lines of code here.
So I'll probably stop here rather than bore you to death. Okay -- I thought it was an hour long. All right.
Sounds good.
So it turns out that not all of these 6,000 lines of code even have to be implemented specifically for
this web server.
Because the label mechanism is egalitarian, we can actually use the same authentication
component I mentioned earlier in the web server for authenticating Unix users as well. These
authentication agents have a very simple function: they store a particular color's star,
and they give it back to you if you provide the right password. They don't care what the color is used for,
whether it's a web server user or a Unix user. In fact, we use the exact same code, line for line, for both the web
server and our Unix library.
One nice thing is that because these labels are egalitarian, we can actually get some interesting
functionality here. For example, because each user controls their own authentication agent, we can
allow each user to supply their own password-checking code. So if you want to accept one-time
passwords or implement a challenge-response scheme for log-in, you can do so without having to get the
system administrator's approval or having to get your code verified in any way.
Now, of course, this introduces a potential problem; namely, what happens if you mistype your user name
and give your password to the wrong authentication agent in the system? It turns out that information flow
control can solve this problem as well, because we can ensure that the authentication agent you send
your password to can only give you back one bit of data, namely whether it's giving you the star or not, and cannot
leak your password, or anything about it, to anyone else in the system.
So to summarize, I've shown you how this information flow control mechanism provided by an operating
system can actually help us enforce application security guarantees in a
small amount of trusted code.
So what we need is a small part of the application to specify the security policy in terms of labels. So I
showed you how about 6,000 lines of code can do this in a web server and about 140 lines of code for this
wrap example I showed you earlier.
Then what we need is a 20,000-line kernel that can enforce these security guarantees for a wide range of
applications from the virus scanner I showed you earlier to ensuring password secrecy during log in or the
privacy of user data in a web server application as well.
And then the nice thing is that the rest of the application code in the system can be buggy, to a first
approximation, without compromising the security guarantees.
>>: Is this like a social engineering attack? Like, I'm a user, (inaudible) I want to run. I launch that program, which
now has my own privilege. And then that program may encrypt all my files and just throw away the key
and send me an extortion e-mail saying, give me $100 and I'll give you the key.
>> Nickolai Zeldovich: Right. So we don't have a complete solution for that problem. But what we
do have, I think, is a mechanism that hopefully makes it easier for application developers, or for the web
browser that downloads the application in the first place, to specify certain security policies on the
application when you run it.
So maybe when you download arbitrary code, it first gets run in wrap. For most applications that might be
okay. Or maybe we have to extend wrap in some sense to provide a separate data store for every single
application, so that it can store its own data persistently but not touch other applications' data, and so on.
Of course, there's some trade-off in the user having to run everything in wrap, or maybe everything gets run
in wrap by default and the user needs some escape hatch to store data persistently, and so on. But I think
we're in better shape, because this mechanism gives you the power to run applications you don't
understand and still get some guarantees about what they're doing.
So I will very briefly mention what we did in hardware. By using tagged memory in
hardware, we were able to push a lot of these information flow control policies into the processor itself.
What our hardware does is associate a 32-bit tag with every word of memory in the system, and we have a
small security monitor now running underneath the kernel which translates our labels into tags on physical
memory.
And what this means is that now we can enforce security in about half the amount of trusted code we
had before. So instead of about 12,000 lines of code in the kernel, we have about 5,000 lines of trusted
security monitor code, which runs underneath the kernel and enforces security on physical memory.
And the kernel is relegated to just minimizing covert channels at this point. If you compromise the kernel
you get a bigger covert channel but you don't compromise Unix file permissions for example.
>>: Loki needs to understand some OS abstractions, though. I mean, the OS state.
>> Nickolai Zeldovich: It does not really need to understand it. Whenever the kernel
allocates memory or relabels memory, it tells the security monitor, hey, here's the label for that piece of
memory, and the security monitor translates the label into a tag on the memory and maintains a mapping
between the tags and the labels.
It just enforces tagging on memory and implements a context switch operation.
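A minimal sketch of the label-to-tag mapping that answer describes; the class and method names are illustrative, not Loki's actual interface (and in the real design the 32-bit tag is the physical address of the label object, whereas a counter stands in for that here):

```python
# The kernel reports a label for each piece of memory; the monitor assigns
# one 32-bit tag per distinct label and tags the physical addresses.

class SecurityMonitor:
    def __init__(self):
        self.tag_of_label = {}   # frozen label -> 32-bit tag
        self.label_of_tag = {}   # 32-bit tag -> label, used on access checks
        self.mem_tag = {}        # physical address -> tag
        self._next_tag = 1

    def set_label(self, addr, label):
        """Called by the kernel when it allocates or relabels memory."""
        tag = self.tag_of_label.get(label)
        if tag is None:
            tag = self._next_tag & 0xFFFFFFFF   # stay within 32-bit tag space
            self._next_tag += 1
            self.tag_of_label[label] = tag
            self.label_of_tag[tag] = label
        self.mem_tag[addr] = tag

    def label_at(self, addr):
        """Recover the label that the hardware tag stands for."""
        return self.label_of_tag[self.mem_tag[addr]]

mon = SecurityMonitor()
mon.set_label(0x1000, frozenset({"alice"}))
assert mon.label_at(0x1000) == frozenset({"alice"})
```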
>>: At the low level, how does Loki know that this part of the kernel is the owner of this?
>> Nickolai Zeldovich: It comes about through the egalitarian mechanism of how the colors get allocated in
the first place. Anyone can say, I need a new color; that request gets passed down to the security monitor. The security
monitor says, this process now owns this color, now has this star, and it only allows that process to
declassify that data. So the security monitor knows about context switches.
>>: (Inaudible) context switches.
>> Nickolai Zeldovich: It understands context switches, yes.
>>: How far does this go? You said it keeps colors with registers, processes...
>> Nickolai Zeldovich: We don't actually label registers themselves; we label the entire register set
with one value, as the current label of the PC basically accounts for all the registers as well.
The guys I worked with on this -- Michael Dalton, Hari Kannan, and Christos Kozyrakis --
have a version of this that does finer-grained tracking through registers and so on, which works better for
some cases. But it turns out to have inherent covert channels that we're really trying to avoid here, and that's
part of the reason why we don't label individual registers.
>>: How big is the color space and how are they allocated?
>> Nickolai Zeldovich: The color space in HiStar is 2^61, for various reasons, but it's nearly
infinite. The monitor associates these tag values with labels. Labels in our case are kernel
objects, so they have some physical memory address,
and our tag values are actually the physical memory addresses of the label objects. Of course, the space of our
tag values is 2^32, which cannot possibly encompass all the possible labels we can construct. But the labels
active at any given time have to fit in your physical memory, and as a result the 32-bit tag space is
sufficient for them.
>>: And they're allocated randomly?
>> Nickolai Zeldovich: Yes. Our color values are allocated by block-cipher encrypting a linear counter.
>>: So the colors -- can I create a covert channel by allocating colors until I get the right combination?
>> Nickolai Zeldovich: That's the hope: by having a Blowfish block cipher encrypt the counter,
hopefully the values are effectively the output of a PRNG, so there's no predicting them. Of course, you know that
once --
>>: You take one, and if it fits the data I want to transmit, keep it, if not give it back?
>> Nickolai Zeldovich: You don't give it back. The kernel just keeps a counter that forever keeps
incrementing, so a color is never reused. There's no way to give it back. You can drop the star, but the color
value never comes around again.
>>: Right. So I can drop the color, right?
>> Nickolai Zeldovich: But no one else will get it.
>>: I was just thinking, could I create a covert channel out of the set of colors I have right now --
say, whether the last bit of the highest one is a zero or a one -- and allocate colors until I get one of those?
>> Nickolai Zeldovich: You could do that, but no one else -- if there's some restriction on you that prevents
you from leaking data, which is presumably what you're trying to circumvent, that restriction will prevent
other people from looking at your label as well.
>>: Okay.
>> Nickolai Zeldovich: So whatever colors you allocate no one will know what you've kept or what you
threw away.
>>: Okay. So your colors aren't knowable.
>> Nickolai Zeldovich: Right.
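A hedged sketch of the allocation scheme just discussed: encrypt an ever-incrementing counter under a keyed permutation, so values look unpredictable but never repeat. The system uses Blowfish; the small Feistel network below, built on HMAC-SHA256, is purely an illustrative stand-in, not HiStar's code:

```python
# Allocate unpredictable, never-reused color values by encrypting a
# monotonic counter with a keyed permutation.

import hmac, hashlib, os

KEY = os.urandom(32)
HALF = 31                     # two 31-bit halves -> a 62-bit color space

def _round(rnd, x):
    msg = bytes([rnd]) + x.to_bytes(4, "big")
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % (1 << HALF)

def encrypt_counter(ctr):
    left, right = ctr >> HALF, ctr & ((1 << HALF) - 1)
    for rnd in range(4):      # a Feistel network is a permutation...
        left, right = right, left ^ _round(rnd, right)
    return (left << HALF) | right   # ...so distinct counters give distinct colors

_counter = 0
def alloc_color():
    global _counter
    _counter += 1             # never decremented: a color is never reused
    return encrypt_counter(_counter)

assert len({alloc_color() for _ in range(1000)}) == 1000
```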
And, finally, we actually extended all this information flow control to distributed systems as well. The
high-level idea there is that you can encode these labels in network messages and have each
machine enforce the information flow restrictions locally. This turns out to require only about
5,000 more lines of trusted code on each machine, to translate between the network and the local
operating system representations of these information flow restrictions.
And, of course, we have to have some mechanism to figure out which machines are trusted to handle what
data; I'll be happy to discuss this more with you later, but we had a paper at NSDI that describes how
we do this.
And this actually allows our High Star web server that I described earlier to scale to multiple machines by
just trusting this extra 5,000 lines of code to transmit labeled messages on the network.
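A sketch of that idea: carry the label on each network message and enforce the flow rule locally on receipt. The wire format and function names here are illustrative assumptions, not the actual protocol:

```python
import json

def make_message(payload, label):
    return json.dumps({"label": sorted(label), "payload": payload}).encode()

def deliver(raw, receiver_label):
    """Hand the payload to a local thread only if its label already
    covers every color tainting the message; otherwise refuse."""
    msg = json.loads(raw)
    if set(msg["label"]) <= receiver_label:
        return msg["payload"]
    return None    # delivering it would leak tainted data

wire = make_message("alice's resume", {"alice"})
assert deliver(wire, {"alice"}) == "alice's resume"
assert deliver(wire, {"bob"}) is None
```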
So to summarize, I've shown you how we can use information flow control to build secure systems out of
buggy code, by providing information flow control in all these different kinds of systems.
So thanks for your time. We have a bunch of papers, some live CDs of the operating system, and all the
source code available on our website. Thanks.
(Applause.)
>>: Any questions?
>>: So as far as applications like Outlook -- I want it to leak my data when I send a message to somebody.
How do we apply this model? Make some kind of subcomponent that has the yellow star, so that's the only part I trust
to leak my data when I want it to?
>> Nickolai Zeldovich: Right. So perhaps one analogy is the file selection dialogue would have some
privileges so that if the user explicitly selected a file in that application, then that file gets declassified for
that application. So you have to kind of basically declassify based on the user's intent somehow.
>>: So then if, say, the untrusted portion tries to e-mail all of my documents, there still would have to be more
of the user saying, oh, no, I really don't want that to happen?
>> Nickolai Zeldovich: Right. Yes. So maybe it tries to say, well, would you like to e-mail this file?
So I don't know, maybe the users will end up saying yes anyway. But I don't have an answer to that.
[Laughter]
>>: It's still dependent in some cases on the user --
>> Nickolai Zeldovich: At some point we want the user to get his work done, and we have a mechanism to
allow that, namely the star, and someone gets it.
I don't know what the right UI is. Maybe drag and drop is better than the file dialogue, so that you
don't have to prompt. I don't know.
>>: I had a question. When you were talking about the programmable password modules, you said you
could label it so that it only had one bit of information. So how would you do, say, logging? I want to
log invalid password attempts.
>> Nickolai Zeldovich: That's the cool thing. We have a fairly elaborate scheme there: it's actually a
three-phase log-in. First you say, hey, I'd like to log in. What that does is create a set of gates for
you that you can then use to log in, and it logs this attempt.
Then one of the gates you invoke with a label that prevents it from leaking your password anywhere,
and that gate checks your password. It gives you back an intermediate star that says, this star is
good for the real star if you give it to this other gate, which will log the fact that you logged in
successfully. So then you give this intermediate star to the third gate, and it says, I've logged your
successful log-in attempt and I'm giving you back the real thing. It turns out you can do this; it just takes a
little bit more construction.
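A rough sketch of that three-phase protocol; the gate structure and all the names are illustrative, not HiStar's actual gates:

```python
# Phase 1 creates the gates and logs the attempt; phase 2 checks the
# password under a label that keeps it from leaking and yields only an
# intermediate token (one bit of information either way); phase 3 logs
# success and trades the token for the real star.

import secrets

LOG = []

def begin_login(user, real_pw="hunter2"):
    LOG.append(f"attempt: {user}")
    token = secrets.token_hex(8)        # intermediate star, not yet the real one

    def check_gate(password):
        # Runs tainted: its only possible output is the token or nothing.
        return token if secrets.compare_digest(password, real_pw) else None

    def claim_gate(presented):
        if presented == token:
            LOG.append(f"success: {user}")
            return f"star({user})"      # the real privilege
        return None

    return check_gate, claim_gate

check, claim = begin_login("alice")
intermediate = check("hunter2")
assert claim(intermediate) == "star(alice)"
assert LOG == ["attempt: alice", "success: alice"]
```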
>>: In the web server example you had OpenSSL as a largely trusted thing. Supposing OpenSSL had a
bug such that it used only zeros, would this mechanism --
>> Nickolai Zeldovich: Yes.
>>: Is there any protection?
>> Nickolai Zeldovich: That seems very difficult to protect against in our model. All we're really going
for is that data flows to the right destination; it's hard for us to understand what the
semantics are. So, of course, if it doesn't encrypt at all, or it has a poorly seeded RNG, then we're in
trouble.
>>: (Inaudible).
(Applause)