>> Tom Ball: It's my great pleasure to welcome Joe Hendrix and his colleagues from Galois Incorporated. They're a small but growing company down in Portland, Oregon that works on applying formal methods, proof technology, and domain specific languages to building secure and reliable systems. Joe got his PhD at the University of Illinois in 2008 and he was here at Microsoft in the security engineering center in 2009. He's been at Galois since 2009 working on formal analysis and verification tools, and he's going to tell us about In Systems We Trust today, so welcome.
>> Joe Hendrix: Thanks. Thanks for having me here and listening. I should warn you this is my first time giving this presentation, so I might run over or things might be unclear. Feel free to ask questions on that. Why is this my first time? That's a good question. A little bit about Galois. We're a little over ten years old. We're 45 people. We basically do research in
computer systems for particular customers, so we sort of do sponsored research. Our mission
is to create trustworthiness in critical systems and we've got a lot of people working in
programming languages, formal methods, security, things that are relevant to Microsoft and
we're just down the road basically. To talk about where our motivation is coming from, in something that I thought would resonate with Microsoft: in the old days, products kind of drove the tech industry and people would always want new products. They would get adoption. They would get sales. They would build up enormous companies, enormous value, and they would sort of drive the marketplace, and making sure that these were as secure as possible is a hard job, but it's sort of taken a backseat in the mind share, at least, to the current trends, devices and services. This is a data center in Virginia. I think that's Microsoft in there. We all have relatives that stare at iPhones all the time, including myself. There's not really a geography in cyberspace, so all of these massive connections are building up. Security in this context is much harder because it's on a global scale. It's not just about securing
your individual products, but securing a whole ecosystem or a whole computer that's the size of
several football fields. If we look towards the future, things are going to get, it's not clear what
the security implications of emerging trends are, but technology is getting more integrated into
our lives. Right now your phone sits in your pocket. You may stare at it occasionally. In the
future we are having augmented reality, virtual reality drones that can fly anywhere, spy on
things, and do interesting work. You have Kinect in your home so you have a lot of technology
that is sort of coming into our lives in a sort of a potentially more intrusive or more intimate
way than what I think a lot of current technology is. I think the question that motivates me and
a lot of other people is what's going to make this next generation more secure than the last. I
think given the scale of things and the intimacy, I think that's going to be really hard. I'm not
going to have an answer for this, but I think this is sort of one thing that we should be striving
to work towards and it's something that spans, no single company is going to be able to do this
or no single part of the industry, no single government; it's a hard problem. Like many bad talks
I am not going to actually answer that question. I'm going to go into some specific technical
results that we have, and these all, I think, fit under this umbrella of trustworthy computing and dealing with trust, and I'm just going to go through these projects and sort of
loosely tie them back to the theme at the very end. If you have questions on any of these, stop
me and we will do them. The first one is the verification of cryptographic implementations.
That's the one I'm personally most familiar with. There's also the other ones I sort of have a
passing knowledge of. What is this about? We all know that cryptography is used all over the
place now in devices and it's just a small part of really large systems. This is, I think, a diagram
from Android from a couple of years ago and if you look in one of these tiny boxes, you will see
SSL, which I think means OpenSSL. If you were able to peer inside this box you would see the cryptographic implementations that sit within OpenSSL. OpenSSL is a much larger framework and I'm
just talking about these tiny parts of the primitive operations. However, if there's errors in the
implementation, that can compromise the integrity of the entire system. All of this ecosystem
is built on many trusted components and cryptography is an essential one of those. It turns out
that bugs are quite prevalent. There was a NIST report from 2008 that found bugs in half of the cryptosystems that they looked at, so basically the government had a 50-50 chance of buying correctly implemented cryptography. This seems pretty tragic given that
people build bridges all the time and they don't fall down very often except in very famous
cases, but this is partially because they are connected to the real world and in this case this was
the Tacoma Narrows and there were harmonic effects that people just weren't predicting.
Cryptographic systems are a digital artifact. They're not dealing with the real world, so there
should be much greater confidence that they are going to be done correctly than something
like a bridge, but in fact, in practice, that's not the case today. One motivating bug for the work
that I'm talking about which is primarily about elliptic curve crypto comes from a bug found in
OpenSSL. In this particular routine which does a modular reduction on a particular elliptic curve
P384, there was an error in the routine and it was an edge case that occurred in fewer than 1 in 2 to the 29 inputs, and it's actually way lower than that. I've run many more tests and haven't found it; mathematically you can show that it's at most that. It was found just before they released OpenSSL 0.9.8g, but it wasn't considered a blocking bug because it was so rare and it wasn't clear how to exploit it, and so they didn't even implement a fix until six months later, so even though we trust OpenSSL, it had this known bug in it. That was in 2007, and in 2012 a group of cryptographers found that they could implement an adaptive attack to recover your keys
just for this bug by itself. There's some mitigations if you're using ephemeral keys, but several
Linux distributions were still unpatched, so they still had this bug from this OpenSSL that came
out in 2007 and the authors basically felt like well, this is too hard for the cryptographic
community to deal with, so let's see if we can have formal verification of cryptographic
implementations. As it turns out, we've actually been working on that problem at Galois for
about 10 years, so one of the first projects Galois did was to develop a language called Cryptol, which we call the language of cryptography.
Cryptol began as a specification language and a code generation language, so the idea was you
write your code in this language and then you generate an implementation that's correct. That
got some traction but something that's come up in recent years is that our customers are
getting third-party implementations and they want to know that those implementations are
correct. To do that we built some verification tools so that we can verify a third-party
implementation is functionally equivalent to a Cryptol specification. This doesn't all rest on ourselves; we rely on third-party tools for the SAT and SMT solving, so we've had a great relationship with Alan Mishchenko, working with ABC, and we have also worked with Yices. Yes?
>>: [indiscernible] third-party tools, are they choosing those because they're faster or more
features or just because?
>> Joe Hendrix: So ABC, ABC has some nice features that make it particularly good I think for
crypto. Yices, I think, Z3, we've done a little bit of testing on it and it's slightly faster than Yices
but Yices sort of solved the same problem.
>>: They are choosing other crypto libraries rather than yours?
>> Joe Hendrix: Because maybe it's for some special hardware or they have some special
needs. The other thing is they have a suite of, they don't care just about functional correctness,
but also timing and other properties, so independent experts develop those things. And we are
only focused on the functional correctness aspect, not the timing analysis and all of that.
>>: They are building their own compilers for Cryptol?
>> Joe Hendrix: Sorry, no. They're building their own cryptographic implementations, their
own implementation of AES or their own implementation of…
>>: Without using Cryptol?
>> Joe Hendrix: Without using Cryptol, so just in C.
>>: [indiscernible] Cryptol spec is…
>> Joe Hendrix: Is the same as a C.
>>: Is the same as their C code?
>> Joe Hendrix: Yes.
>>: Wow.
>> Joe Hendrix: For Java or, yeah.
>>: So that's wow, excellent. [laughter].
>> Joe Hendrix: Yes. We also have some orchestration work we're doing in Haskell, but I'm not going to talk about Haskell at all except to say that Cryptol is implemented in Haskell, so all of this is about C or Java verification. We tell our customers that basically we're doing exhaustive test coverage. AES takes a 128 bit key and a 128 bit message block, and we're effectively showing that it's equivalent to the Cryptol spec on all 2 to the 256 key and message pairs. A little bit
about Cryptol. I'm not going to go into details. There's a book and we have a website,
Cryptol.net but it's a declarative specification language. It's tailored to the crypto domain, so
we have a lot of primitives for bit vectors and shifts and things that appear in cryptographic
primitives. It was designed with feedback from cryptographers from the NSA. They just helped with the language design. They weren't telling us which algorithms to use. They can
write whatever algorithms they want in it and we're currently working on a Cryptol 2 and I think
the release of that under an open-source license is planned for early spring. I think it will be
Apache style. Some examples of Cryptol: we have bit vectors, denoted by these brackets, and so this is saying that add is a function that takes two 384 bit numbers and returns another 384 bit number, and we have a primitive for vector addition, so it's a pretty easy implementation. Here I have an example of extending the value by one bit, so this just adds an extra 0 at the most significant end of x, and here I can do the inverse by just reversing that and dropping the bit at the end. And then with this we can implement things like modular addition fairly easily, and Cryptol supports polymorphic types, so you can have n as any finite number here. We also support infinite streams if you want them, so that's why we have this fin of n here.
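To make the flavor of that example concrete, here is a small Haskell sketch of the same idea: a fixed-width word type, an extend/drop pair, and modular addition. This is an illustration only, not Cryptol syntax and not Galois code; the Word384 wrapper and the helper names are assumptions made for the example.

    -- Toy model of a 384-bit word as an Integer kept in range, for illustration.
    newtype Word384 = Word384 Integer deriving (Eq, Show)

    mkWord384 :: Integer -> Word384
    mkWord384 n = Word384 (n `mod` 2 ^ 384)   -- wrap into the 384-bit range

    -- Plain addition on 384-bit words (wraps around, like addition on a
    -- fixed-width bit vector).
    add :: Word384 -> Word384 -> Word384
    add (Word384 x) (Word384 y) = mkWord384 (x + y)

    -- Modular addition: reduce the (at most 385-bit) sum by a modulus p,
    -- which is the operation the field arithmetic later in the talk needs.
    addMod :: Integer -> Word384 -> Word384 -> Word384
    addMod p (Word384 x) (Word384 y) = Word384 ((x + y) `mod` p)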
Once you've written your spec in Cryptol (I've alluded to this), we have a tool suite, the Cryptol workbench, so we can create software implementations. We have done some work on generating hardware implementations from that, and then we can also generate formal models that can be used to verify third-party implementations against the Cryptol.
>>: [indiscernible]
>> Joe Hendrix: Right now we have binary releases for Cryptol 1 and Cryptol 2 will be under
open source release.
>>: [indiscernible]
>> Joe Hendrix: Yeah, Cryptol 2 is basically this left side and this, I think we're still discussing
how to best release it. Our tools are available to our clients, but we're basically still kind of
getting it polished to a releasable state and then we'll have to figure out how to best release
that. My hope is we can release it on open-source license or at least parts of this. In terms of
what we can verify, so I've talked about Cryptol. We also have a symbolic execution tool for JVM byte code so we can verify crypto written in Java, and we have tools for building models of C via LLVM, so we have an LLVM symbolic execution engine, and then we have a tool for VHDL that I'm a little less familiar with, but I think it uses Icarus Verilog.
>>: [indiscernible] execution [indiscernible] verification condition generation whether the user
provides [indiscernible]
>> Joe Hendrix: Actually, generally, we just unroll loops. In Cryptol loops have finite bounds so
we can actually just…
>>: You are doing explicit sets based on numeration?
>> Joe Hendrix: Yes.
>>: Plus some smarts.
>> Joe Hendrix: Plus some smarts to, like both of these now will, when you, well, I'll go into
that. So we just use forward symbolic execution to generate logical terms that define the
results of the code. Just to see what would happen if we were to say execute this, so say x and
y are symbolic expressions, then when you hit this branch it will execute this path and this path
and then merge the results back to get an if-then-else expression. If we have a loop we can do the same thing, so this is a loop that came from an actual crypto program. Loops will have bounds of this form, and if we have branches we use a post-dominator analysis to identify where they should merge. In the end we get basically these two terms. You can think of this as a giant expression that represents what one implementation does and this is another giant expression, and we can show the two equivalent through various techniques, and I thought I'd talk a little bit about those techniques.
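As a rough illustration of that merging step, here is a toy Haskell sketch of forward symbolic execution with if-then-else merging at join points. It is my own miniature model, not Galois's tool; the expression and statement types are invented for the example.

    import Data.List (nub)

    -- Toy symbolic expressions: variables, literals, addition, if-then-else.
    data SymExpr = Var String | Lit Integer | Add SymExpr SymExpr
                 | Ite SymExpr SymExpr SymExpr
      deriving Show

    -- A tiny statement language: assignments and branches.
    data Stmt = Assign String SymExpr | If SymExpr [Stmt] [Stmt]

    type State = [(String, SymExpr)]

    val :: State -> String -> SymExpr
    val st x = maybe (Var x) id (lookup x st)

    evalE :: State -> SymExpr -> SymExpr
    evalE st (Var x)     = val st x
    evalE _  (Lit n)     = Lit n
    evalE st (Add a b)   = Add (evalE st a) (evalE st b)
    evalE st (Ite c a b) = Ite (evalE st c) (evalE st a) (evalE st b)

    -- Forward symbolic execution: run both sides of a branch, then merge each
    -- variable's two results into a single if-then-else term at the join point.
    exec :: State -> [Stmt] -> State
    exec st []                  = st
    exec st (Assign x e : rest) = exec ((x, evalE st e) : filter ((/= x) . fst) st) rest
    exec st (If c t f : rest)   =
      let c'     = evalE st c
          stT    = exec st t
          stF    = exec st f
          merged = [ (v, Ite c' (val stT v) (val stF v))
                   | v <- nub (map fst stT ++ map fst stF) ]
      in exec merged rest

Running exec [] [If (Var "b") [Assign "x" (Lit 1)] [Assign "x" (Lit 2)]] binds x to Ite (Var "b") (Lit 1) (Lit 2), which is the kind of merged term that then gets compared against the other implementation.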
>>: One of the problems of growing old is that one's eyesight confounded by little green dots.
>> Joe Hendrix: Okay, I'll stop doing that.
>>: Can you point with the mouse instead?
>> Joe Hendrix: I can just use my hand I think. I'm not very practiced at this. Okay. So I think
basically there's three techniques that we found to be super useful. One is the SAT solving and
you can replace that with SMT and the story is the same. We take the two implementations as
circuits, so you create what people call a miter, where you just XOR the outputs together and you ask whether that is satisfiable. It was interesting. Wow. That really jumped ahead on me. Sorry.
Sorry. There we go. Now I know why. Wrong talk. Okay. Amazingly, this works. Another
technique that we found quite useful that's implemented in ABC is a technique called SAT
sweeping, and the idea behind SAT sweeping is that it uses test case generation to identify equivalences between implementation A and implementation B, so on various random inputs we can identify nodes that compute the same value on those inputs. That suggests candidate internal equivalences. ABC will then check the smaller expressions to show that they're equivalent and we sort of merge the terms together. Say you had two parts, one part of A and one part of B, and you could show that they are equivalent; then you can just merge those, and you keep doing this and you end up showing the whole things are equivalent. So you only have these small SAT problems, essentially, with this increasingly large shared structure between them.
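To make the miter construction concrete, here is a minimal Haskell sketch; it illustrates the question being asked, not how ABC or Galois's tools represent it (they use and-inverter graphs and a real SAT solver, whereas this just enumerates small inputs).

    -- A circuit, modeled as a function from input bits to output bits.
    type Circuit = [Bool] -> [Bool]

    -- The miter of two circuits: XOR corresponding outputs. Any input that
    -- makes some output bit True witnesses a disagreement.
    miter :: Circuit -> Circuit -> [Bool] -> Bool
    miter f g xs = or (zipWith (/=) (f xs) (g xs))

    -- Brute-force "satisfiability check" over all inputs of a given width;
    -- only feasible for tiny examples, which is exactly why the real flow
    -- hands the miter to a SAT solver instead.
    allInputs :: Int -> [[Bool]]
    allInputs 0 = [[]]
    allInputs n = [ b : bs | b <- [False, True], bs <- allInputs (n - 1) ]

    counterexample :: Int -> Circuit -> Circuit -> Maybe [Bool]
    counterexample width f g =
      case filter (miter f g) (allInputs width) of
        (x:_) -> Just x     -- an input where the implementations differ
        []    -> Nothing    -- equivalent on every input of this width

    -- Example: a one-bit adder written two different ways.
    addA, addB :: Circuit
    addA [x, y] = [x /= y, x && y]
    addB [x, y] = [(x || y) && not (x && y), x && y]

Here counterexample 2 addA addB is Nothing; change addB's carry to x || y and it becomes Just [False, True].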
>>: [indiscernible] some sub domain of the input? Partition, or does it correspond to input
partitioning in proving that for a particular partition implementations are equivalent and then
on that…
>> Joe Hendrix: We just give them whole inputs and then we basically get labels for all of our internal AIG nodes, and those give us, basically, an equivalence relation on terms: on the sets of inputs that we've tested, we know that these two terms agree. It turns out that crypto has this nice property that it mixes the inputs up together, so that if nodes have the same value on a few random inputs, then they are almost certainly the same function. They may be structurally different, but they're almost certainly the same function. What this will do is it will pose a small SAT problem where you have two small parts and show that those are equivalent and then use that to sort of…
>>: How do you get, how are you getting that small SAT problem?
>> Joe Hendrix: Because we find two nodes that have the same value on a set of inputs, so we give the same inputs to implementation A and implementation B, and we have some node whose values were, say, a 1 followed by a 0 followed by a 1, and then we find another node in implementation B
that has that same fingerprint and that suggests that they may be equivalent. So the SAT solver
will either come back and say yes. They're equivalent and then we can sort of use that to
simplify the problem, or no. They're not equivalent and then we have an input that we know
distinguishes those nodes.
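A sketch of that fingerprinting step, again as toy Haskell rather than anything from ABC: evaluate every internal node on a handful of shared inputs and group the nodes whose value vectors match; each group is a set of candidate equivalences to hand to the solver.

    import Data.List (groupBy, sortOn)

    -- An internal node from either implementation, modeled as a function of
    -- the primary inputs and tagged with a name for reporting.
    data Node = Node { nodeName :: String, nodeFun :: [Bool] -> Bool }

    -- The fingerprint of a node: its values on a fixed list of sample inputs.
    fingerprint :: [[Bool]] -> Node -> [Bool]
    fingerprint samples n = map (nodeFun n) samples

    -- Group nodes by fingerprint. Groups with more than one member are
    -- candidate equivalences; a SAT call then confirms or refutes each pair.
    candidates :: [[Bool]] -> [Node] -> [[String]]
    candidates samples nodes =
      [ map (nodeName . snd) grp
      | grp <- groupBy (\a b -> fst a == fst b)
                       (sortOn fst [ (fingerprint samples n, n) | n <- nodes ])
      , length grp > 1 ]

Confirmed pairs get merged; a refuted pair yields a new input that distinguishes the nodes, which is exactly the loop described above.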
>>: Is it a partial input or is it a…
>> Joe Hendrix: It's a complete input.
>>: A complete input.
>>: So some set [indiscernible]
>> Joe Hendrix: We had the best luck with AIGs and we haven't been using BDDs.
>>: So it's not part of ABC?
>> Joe Hendrix: This is all part of ABC. We haven't implemented this. We have studied this
implementation a bit. That's been really useful for crypto because it has this property that
mixes up all the inputs really fast. If you had programs that sort of, you might get false
equivalences in other programs. And then finally, a technique we use, and this one relies on SMT, is to just decompose a problem. If we have a large problem, we identify, say, pieces that come from different subroutines, we prove that those are equivalent, and then we can cut them out of the problem, abstract them as uninterpreted functions, and show the rest is equivalent. This technique is essential in basically the more complex cryptographic
verifications. Just to kind of give you a flavor for some of the algorithms that we might
consider, there is a suite called Suite B that has been approved for use by the Department of
Defense. It basically is aimed around providing transport level security for secure e-mail or
secure shells, and it uses basically four approved algorithms for Suite B. There's AES, SHA2, ECDSA, which is the elliptic curve digital signature algorithm, and ECDH, which is elliptic curve Diffie-Hellman. These protocols are built on these four primitives and we're focused on the primitives. To kind of get a feel for how big these things are, we've done a verification of AES 128 that came from Bouncy Castle. We've also done one for C, but the Bouncy Castle one was 817 lines of code. For C we verified an implementation from libgcrypt and that was actually smaller.
And then ECDSA we needed to do a verification in Java and I evaluated different ECC
implementations and none of them were fast enough so we ended up writing our own and it
was around 2300 lines. That was all in Java. To sort of get a feel for how much effort is
required, we can take AES, use our symbolic execution and ABC, and in less than 40 minutes come back with a proof that the OpenSSL or Bouncy Castle implementation is equivalent to the Cryptol spec. It's fully automatic. The AIG that was generated by our symbolic execution engine was a megabyte. For SHA-384, when we last tried this we couldn't carry it out fully automatically, so what we did is we looked for certain low-level methods, low-level primitives, and we showed that those were equivalent, the Cryptol and the C methods, and we sort of grafted the implementations in so they were now identical. I think there were nine algorithms and one top-level thing, and using that it took ABC a couple of hours to verify. This shows that there is not a clear relationship between lines of code and verification difficulty. Yes?
>>: All of these little interactive things that you do, the book keep, is it straightforward enough
that the bookkeeping for it you can keep in your head, or is it also mechanically checked
somewhere in the system?
>> Joe Hendrix: Yes. In this case, yes, I think the bookkeeping was actually just a make file. In
this case we built a tool that keeps track of lemmas and manages the proof orchestration
basically.
>>: So the AIGs are just like the SAT formula?
>> Joe Hendrix: Yes.
>>: Okay. It's a circuit representation?
>> Joe Hendrix: Yes. They're called And-Inverter Graphs.
>>: In the second case, was it just that it blew up? The AIG blew up and you had to decompose
or what was the problem?
>> Joe Hendrix: Essentially that. I think part of it is that SHA involves a lot more use of XOR and
that was a little bit more problematic than the particular structure of…
>>: Do you use the SAT solver? It's just one call to the SAT solver?
>> Joe Hendrix: Generally, yeah.
>>: We have a paper in FMCAD about decomposing problems modularly and that would be, that second one might be a really interesting case for Sam Bayless if there was
like a way to do modular decomposition of the formula and give independent SAT solvers and
have them communicate. I wonder if we could help out there.
>> Joe Hendrix: We also have the ability to export in a format called BLIF which lets us define
functions and we could potentially use that to give you some format. And then finally, the
ECDSA which is probably going to be the focus for the rest of this talk because it was the most
amount of work. Just trying to get an AIG from that turned out to not work, so I let it run for a
while and eventually our simulator crashed when the AIG was five gigabytes. Maybe I could buy more RAM. I don't know. But what we ended up doing is we built a tool to decompose the verification, and this ended up having 48 steps. But in the end it only took ten minutes once we had all these decompositions…
>>: [indiscernible]
>> Joe Hendrix: What's that?
>>: The ten minutes, so it's end-to-end?
>> Joe Hendrix: Yes. All 48 steps, so if you are willing to do more manual labor then the verification time goes down, which shouldn't surprise people too much. I find elliptic curves
really neat, so I thought that I would just talk about them, if you haven't seen them before. An elliptic curve is basically a curve that satisfies an equation of this form. You have this y which is squared and you have this cubed x on the right, and these a's, b's and c's can be arbitrary. I picked one and here's a diagram of it. You can see, because y is squared, it's basically a mirror image
around the x-axis and that's what that looks like over the real numbers. Cryptographers don't
like the real numbers so they use finite fields and so this is a diagram over the field of, a prime
field with 19 elements, and I just changed the axes so you have zero sort of in the center so you
can still see that mirror image which is still true there. It kind of looks like a frog. That's what
cryptographers will work on. Of course, they use more than 19 elements. They might use 2 to
384, but that's a very boring drawing because it's just white. Once you have this curve, the neat property is that you can define a group on it. If you have two points, P and Q, and you draw a line between them, there is going to be a unique third point, denoted by negative R there, and in the group defined by an elliptic curve they say R is equal to the negation of that point that you get from the intersection. Pictorially, basically the sum of P and Q is just defined to be that R point down there, and if you want to negate a point you just negate its y coordinate, and that defines negation. And there's a few special cases, but they're not worth going into, I think. The neat thing is that you can write this out algebraically, so here's, when P and Q are different, the formula that you would need to write out the sum of P and Q.
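The formula itself is on the slide rather than in the transcript, but for the short Weierstrass form y^2 = x^3 + ax + b the standard chord rule for distinct points P = (x_1, y_1) and Q = (x_2, y_2) with x_1 not equal to x_2 is the textbook one:

    \lambda = \frac{y_2 - y_1}{x_2 - x_1}, \qquad
    x_3 = \lambda^2 - x_1 - x_2, \qquad
    y_3 = \lambda\,(x_1 - x_3) - y_1, \qquad
    P + Q = (x_3, y_3)

All of the operations on the right are ordinary field operations, which is why the same formula keeps working once the reals are swapped for a prime field.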
>>: [indiscernible] P and Q?
>> Joe Hendrix: Yes. It's just how it's defined. It's a particular group and they just defined it
that way. I think there's certain nice properties about that.
>>: [indiscernible]
>> Joe Hendrix: Yes. The neat thing is that once you have that definition then you can transfer it to a finite field, so the geometric intuition of drawing a line between two points and
getting that third point, doesn't really hold when you're working in this finite field, but the
formulas still work out. Algebra is nice. The reason people are interested in elliptic curves for
cryptography purposes is that they give you what is called a one-way function. If you have a scalar value k and a point P on the curve, so a point that satisfies one of those equations, then you can compute the product k times P, which is just P added to itself k times, quite quickly, but if you only know the result Q and P, it's hard to find k. There's no sub exponential
algorithms known, though I think quantum algorithms could be potentially used to break this if
you could build a quantum computer. This primitive is basically the workhorse for ECDSA and
ECDH.
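The "quite quickly" part is the standard double-and-add trick. Here is a generic Haskell sketch of it, written against abstract group operations so it does not depend on any particular curve arithmetic; the function and parameter names are my own, not from the verified code.

    -- Double-and-add scalar multiplication: computes k*P with O(log k)
    -- doublings and additions.
    --   zero : the identity element (the point at infinity)
    --   add  : the group law (point addition)
    --   dbl  : point doubling
    scalarMul :: point
              -> (point -> point -> point)
              -> (point -> point)
              -> Integer        -- scalar k, assumed non-negative
              -> point          -- base point P
              -> point
    scalarMul zero add dbl = go
      where
        go k p
          | k <= 0    = zero
          | odd k     = add p (go (k `div` 2) (dbl p))
          | otherwise = go (k `div` 2) (dbl p)

For a 384-bit k this is only a few hundred doublings and additions, while going the other way, recovering k from k*P and P, has no known sub-exponential classical algorithm, which is the asymmetry ECDSA and ECDH rely on.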
>>: [indiscernible] very large?
>> Joe Hendrix: Yes. It's not quite as large as RSA, so it's like 384 bits or so. Here is an example
of the NIST P384 curve. The prime number that they use for that finite field is this number right
here, which has been carefully chosen so that you can do efficient 32-bit arithmetic to compute
your modular reductions. The curve equation is fairly simple except for this coefficient b which
I just wrote out in hex and that's what b is. With this particular project our goal was to verify an
efficient implementation of ECDSA over this particular curve written in Java against a Cryptol
spec and we are free to use known optimizations so we can take the crypto math literature and
just sort of trust that they knew what they were doing in terms of writing different algorithms
out and use that to write the specification. And then the implementation can use both those same algorithms and low-level tricks to improve performance. Yes?
>>: [indiscernible] particular [indiscernible] curve, the coefficients and the b [indiscernible]
back there. These are the [indiscernible] as well. They're fixed across different instantiations of
this…
>> Joe Hendrix: Yes. They are part of this spec.
>>: They are part of the spec so what, could it be that these numbers were carefully chosen?
>> Joe Hendrix: Potentially, yes. This I think probably not, but this number is actually the SHA1
hash of some other number, so you might have some reason, but I'm not going to say that this
is necessarily secure.
>>: [indiscernible] generate b and [indiscernible]
>> Joe Hendrix: That would be, yeah, I see some work on that. One thing is that the curve, the
size of the curve will change so you need to compute the size of the curve and your points and
things like that. This is just part of the NIST spec and that's what we were…
>>: [indiscernible] with respect to this [indiscernible]
>> Joe Hendrix: Yeah, random thing, yeah. Yeah. It would be interesting, fun to explore what
we can do better in this space. Essentially, one of the things you need to know is how many
points satisfy that equation which is called point counting and that's a more heavyweight
operation, so you don't want to do that every time you do a signature. That's the curve we took, and basically when you look at an implementation at a certain level it looks like this sort of hierarchy, where at the low level you have operations on finite fields, and this is where you might want to change your field. Then you have operations on points, and these tend to be generic to all elliptic curves, and then above that you have your one-way functions. It turns out you can combine two scalar multiplications by doing them at the same time. And then finally at the top you have the digital signature algorithm and the key agreement.
Just to show you a little bit what this would look like in Cryptol and I don't think I'll dwell on this
too much, but since we support 384 bit numbers we can write out 2 to the 384 directly. Modular addition can be written over arbitrary ranges so we can specialize it to different sizes. That's modular addition in Cryptol. The same field addition in Java is more tedious. Here's how we wrote our field prime down; I'll leave it as an exercise to see that this is the same number. Since Java doesn't support 384 bit numbers natively and we needed to be efficient, we couldn't use the BigInteger library, so we encoded 384 bit numbers as integer arrays with 12 elements. In that encoding this is what that field prime is. And then field addition basically calls a couple of other methods. In the addition you see there's this for loop that iterates through the elements, and we needed to capture the addition overflow. The only way I could see to do that was to convert everything to longs, which is what this does, and sort of chop things up, and that's what this does. This is about as efficient as I could make it in Java. In terms of
performance, using all these tricks we were able to get something that's fairly competitive with
OpenSSL, so on the right, that's Bouncy Castle and ours is about an order of magnitude faster at
64 bits. We still couldn't beat OpenSSL because in C I have access to a full multiplier, which I don't in Java.
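A rough Haskell sketch of the encoding just described, to make the carry handling concrete: 12 limbs of 32 bits, with the sums widened to 64 bits the way the Java code widens to long. This mirrors the idea, not the verified implementation.

    import Data.Word (Word32, Word64)
    import Data.Bits (shiftR, (.&.))

    -- A 384-bit number as 12 little-endian 32-bit limbs, like the int[12] in Java.
    type Fe = [Word32]

    -- Add two limb vectors, propagating the carry through 64-bit intermediates.
    -- Returns the result limbs plus the final carry-out, which a modular
    -- reduction step would then fold back in.
    addLimbs :: Fe -> Fe -> (Fe, Word64)
    addLimbs xs ys = go 0 xs ys
      where
        go carry [] [] = ([], carry)
        go carry (x:xs') (y:ys') =
          let s           = fromIntegral x + fromIntegral y + carry :: Word64
              limb        = fromIntegral (s .&. 0xffffffff) :: Word32
              (rest, out) = go (s `shiftR` 32) xs' ys'
          in (limb : rest, out)
        go _ _ _ = error "addLimbs: mismatched limb counts"

The reduction step that folds the carry back in modulo p is where the rare edge cases live, both in the OpenSSL bug mentioned earlier and in the bug described later in this talk.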
>>: [indiscernible] Java [indiscernible]
>> Joe Hendrix: Yes. The blue is our Java.
>>: And the Cryptol doesn't compile directly?
>> Joe Hendrix: We do have a Java compiler, but we wrote this by hand rather than through
Cryptol.
>>: [indiscernible] spec [indiscernible] spec
>> Joe Hendrix: We could. We wouldn't get something as efficient as this. So that's where we
are with our implementation. To actually do this orchestration, it turned out what we had to
do after some initial experiments was to build a tool that let us prove things at this level and
then use the proofs here to sort of simplify the proofs here and then scale our way up this tower. If we didn't have this infrastructure, just using SAT-based equivalence checking we could show addition, subtraction and doubling were correct. We couldn't even show a multiply equivalence, much less the rest. Now just our symbolic…
>>: [indiscernible]
>> Joe Hendrix: Size of the problem. Yeah, and this involves a 384 bit multiply in it so a known
hard problem. Using symbolic execution we could get the models up to about this size, but
then basically the scalar multiply calls these things hundreds of times, so the models just got too large, in the gigabyte range, so we couldn't even…
>>: Is it possible, is there any hope? Probably not with cryptography of this sort to say well, we
can do a proof scaling down, you know, just some smaller model size and then just we have a
generalization [indiscernible]? It just doesn't work out.
>> Joe Hendrix: Yeah. I think it would be fun. There has been some work on verifying
multiplication circuits where they can test it on a few inputs. You can extend that. I haven't
pushed that direction at all, but it would be really fun I think to see how far we could go.
Basically these large numbers only show up as constants in key places at this level, and actually
they don't show up here, but they show up a little bit here. It should be parametric and ignore
the size. What we built was a tool that we are calling SAWScript, so SAW stands for the
software analysis workbench and it can cut through your verification problems. And it allows
you to specify the behavior of Java methods including side effects and you can use Cryptol
functions to describe what those methods do. Then these method specifications that you
write, they can be used as statements we've proven, so proof obligations or as lemmas to help
verify later methods. Then we have a small tactic language to help users control verification
steps, but it's mostly intended for automated verification; you just have to manually write your steps. One thing our specifications had to capture was the lengths of arrays, because that's not part of the Java type, plus any extra assumptions on inputs, whether some of the arguments can alias other things, and basically what the post condition for the method is when it terminates. So I think this is fairly standard in a lot of tools. And then we had this little tactic
language. Here's the specification for field addition, so we just pulled in, this top line is basically
pulling in the Cryptol implementation through a file that's called an SBV file. We define the
field prime constant directly, and then we can write out that field addition in Java takes three arguments and we can freely allow them to all alias each other, and then we had to say that the field prime variable equals, basically, splitting this 384 bit number into the 12 elements, as a precondition, and that's what this assert means. Then as our post condition, we call that ensure, we said that the value of z, which is the argument used to store the result, is equal to splitting the result of the imported Cryptol function applied to the join of x and y. And then our tactic was just rewriting using some rules that were given above here and then calling Yices and
get it to verify the resulting simplification. That was almost instantaneous. We could also put
ABC here, but Yices turned out to be slightly faster. Once we had that specification for say
these operations, then we could use that in the other ones, so this was, there was an ec double
and it involves calls to the field routines, and we can just use those specs, so that instead of diving into the byte code for field addition, the symbolic execution tool will use the spec and just step over it. That's how it works, basically. And in terms of results we were
successful. We were able to verify the Java implementation against the Cryptol spec. This is
written badly. Basically what we're saying here is you could try to independently validate the
specification using testing or some other mechanism. The specification is now the trusted part, and you don't necessarily have a reason to trust it except that it is written in this functional language and is hopefully clear to the mathematicians. We ended up finding three bugs in the verification.
One was that we failed to clear some intermediate results and that was caught because we
have to specify all the post conditions, so we wanted their procedure to clean up so you
wouldn't leak information. We had a particular boundary condition on an input that wasn't
very interesting. And then it turns out that there was a bug in our modular reduction routine just
as there was in OpenSSL’s. Ours was slightly different. At the very end of it we had this cleanup
that would deal with some overflow and there was some code that looked like this and the bug
was in that second line. I had an equals and it should have been a plus equals and the previous
code had verified that this overflow was between 0 and 5 and it was actually incredibly unlikely
to be 4 or 3. It was mostly, usually 0. Random testing, just our initial random testing didn't find
it. Once I found the bug with ABC, which just took 20 seconds, I ran random testing to see if I could find it, and after a couple of hours testing did find it. So maybe if you just test things enough you would find it, but it turns out that there was actually some more stuff later on in this code which would have an even more remote chance of being triggered. It just happened to be that code.
>>: [indiscernible]
>> Joe Hendrix: These were just random, yeah.
>>: And did you intend to write equals there?
>> Joe Hendrix: No. It was a typo. Copy paste error and I was responsible for writing this. I
really wanted it to be correct, so I was embarrassed when I found that. It was neat that ABC
found this in just 20 seconds. And this was a fairly complicated modular division routine. And it's not like a memory error or something like that. The heuristics you would normally use to drive concolic execution wouldn't necessarily find this. And then in terms of our overall effort,
we ended up using the rewriter 30 times. It was just a nice tool to have so we had some rules
to simplify formulas. We used Yices 23 times. We used ABC 13 times. We needed Yices at the
higher levels because we had uninterpreted functions for say field addition, but ABC could
often find things and it was better at finding counter examples in many cases. We did have a
feature where we could do inductive assertions and we needed that in only one procedure.
That was for modular division, so there's basically a GCD algorithm that has a while loop and so
we used the induction.
>>: So here you have one proof checking technology that's trusted, the rewriter plus Yices plus
ABC. Do you have thoughts about also, like, generating proof certificates and having them
independently checked?
>> Joe Hendrix: We have been thinking about that. Right now I think the thing that I'm trusting
the least is the semantics for our language and I think one thing that I'd really like to look at is if
we can do declarative language semantics and somehow drive the simulator from it. That's I
think the thing that I'm most focused on. If we sort of trust that interpreter, then trying other solvers that can generate certificates would be useful.
>>: Do you find that they are not demanding this?
>> Joe Hendrix: This is way better than the state-of-the-art which is testing and that, so they
haven't demanded that of us. I think what they're most interested in is just making the tools
more usable so that they can get broader adoption and then maybe come back more. So that's
it on verification and that's going to be the thing I go into the most detail on. I'll talk about
some other projects we have. We don't just do crypto. One line of work that we've been
working on for a while is on separation and virtualization, so this is more architecture than verification. The Department of Defense has basically this approach called MILS where you have different levels of security and you want separation between those levels of security. One kind of product that started out in the early days of Galois was work on a file system that
was sort of aware of these multiple levels and enforced integrity in the sense that you had a
low network and you had a high network and high could read from low but low could never
read from high. So high, think secret stuff and low think not so secret stuff. You don't mind if
secret people can have access to non-secret information but you don't want the converse. And
this was all implemented as a block access controller, so you had actually physically separate
disks for like top secret, secret and unclassified. This block access control would mediate
between these three components, and this was actually written in Isabelle and I think the implementation was derived from the Isabelle code. That was verified to be correct. Sort of more
recent work that we've been doing is on virtualization. We have a project where we're trying to
build secure mobile phones, so one challenge is that basically when people want to keep these
domains separate right now, they use separate computers, so you see that with the separate
disks here. They use entirely separate computers, separate phones, so they're carrying all this
stuff around. They can drop them and lose them and it's just more secure to have things
physically separate but it's a headache. What we are looking at is can we have a trusted
portion on a phone and then use software to enforce the integrity between this trusted and
non-trusted domain. In this case we've been working with Nicta, and they have a microkernel called seL4 and it's been proven correct, so many of you have probably seen presentations; they have actually shown that there's no buffer overflows or null pointer dereferences, all using Isabelle. So that's one project we have; another, more recent one, is trying to really
leverage virtualization. Once you have this separation, you can start to decompose kind of all the drivers and everything into separate domains and start to look at things at a really fine-grained security level. We have sort of a microkernel here and a separation layer that is responsible for setting up communication, and then you might have an Android or a Linux and it talks to your drivers in a completely separate domain, so there's a much more rigid separation of the information and there's more structured mechanisms for communicating.
>>: Do you worry about cross-channel attacks in that kind of environment?
>> Joe Hendrix: Yes.
>>: How do you [indiscernible] that doesn't happen?
>> Joe Hendrix: I think with this work so far these domains are kind of statically laid out. Across
channels, like timing between the things. I don't know. That's an issue in this stuff. I don't
know what their approach is for it. Short answer. More recently we've been working on
actually getting this running on a real device and so one of the things is a proof of concept we
have and this was joint work with SRI and LG and Android Linux where they are using instead of
seL4, OKL4 which is another L4-based hypervisor and we have Android where it uses entirely
virtual device drivers for its operation so it isn't directly talking to things, but it can call out to
these other device drivers and independent modules. For the access to the block device there
is a layer in between that performs encryption, so any data that is written to the block device
has to be encrypted by this separate layer and the same with network traffic, so it will let you
set up a secure pocket that can only talk to other people through encrypted channels and can
only write to the device with encryption. And that's running on an older LG phone.
>>: It could still be that someone installs my latest evil app on Android, but once it runs there it sees everything unencrypted, right?
>> Joe Hendrix: Yes. If you have an evil app running on your high domain, it can see everything
unencrypted, but because of the secure IP on the network interface, it can't send that to
anywhere other than through that encrypted network, so it doesn't have access to the normal
internet, basically, so yeah.
>>: Did you ever look at singularity as a model for some of these [indiscernible]? Because they
got the software isolation [indiscernible]
>> Joe Hendrix: Yeah. It's kind of neat. Yeah. I know the people working on this are familiar
with that stuff but I don't know to what degree they investigate it.
>>: Just to be clear, you've got multiple apps running, both secure and unsecure on the same
hardware at the same time?
>> Joe Hendrix: At the same time. So you do have these cross-domain timing issues, yeah. And I wish I had a good answer for you, because yes, that's a good point. I think, now it's sort of coming back to me, I think the main thing is there is actually a full mode switch, so you don't have things running at exactly the same time; you have to stop things and then start up the next thing. I think that's the approach that they're using, so you sort of switch into your secure mode and the secure mode will take over the whole display
and all of that, so you have to have a big switch, which is hopefully less work than switching
phones. Yes?
>>: Are you guys looking at hardware support for this at all? Because I think if you kind of keep
going further down the stack, eventually you can watch things in hardware. Like, for example,
you said you switch modes, but then [indiscernible] time between mode switches you've got
hardware seize and stuff like get some information from one level or the other, but if you use
may be hardware support that runs below all of that, then that's your trusted block.
[indiscernible]
>> Joe Hendrix: I know one of the things we've looked at is TrustZone, mostly for the attestation part, so we can attest to what the device is running, but I don't know enough details
about that. Another project that we have is called FUSE, and Rogan, who is visiting here today, is the project lead for that, and this is about finding interactions between applications. Something that is interesting is that systems, Android in particular, have a fine grained permission system on apps so you can restrict greatly what an app does. One of those permissions is the ability to communicate with other apps, and by virtue of communication, you could have apps collude to get sort of greater capabilities than either one of them has alone. So one sort of
small example we found on that is the Samsung Galaxy S3 ships with an app that's called Kies
and it allows apps to be installed from the file system, so if there is some app on your file
system, it will just install it so that it can be running. And then there is this Clipboard Save
Service, and I don't know what that does but it has an API that other apps can use to write
arbitrary data to the file system. You now have this vector where maybe this one seems safe and maybe this one doesn't seem so bad because it's only installing things that are on the file system, but integrated together they get a little bit more risky. And this wasn't found by us, but there is a blog link that describes that actual attack in more detail. The basic approach of FUSE is that it will do analysis on individual binaries for scalability purposes. It sort of creates an extended
manifest of what that app does and then there's a graph analysis that will connect these parts
together to sort of figure out how things are communicating, and then there's interactive
visualization and I think you would need to talk to Rogan for more details on that. But just to
see why interactive visualization is really important, there's a diagram for I think just 20 apps on
Android and basically everything is talking to everything else, and this graph has nodes for permissions, because those are used for establishing communication, and nodes for the apps. And then if you want to filter things you can look at a much sparser graph that just shows, say, the interaction between a few components. Those are purple and the rest of the Android system is in green, and it's still a really dense thing, so it's interesting how much everything basically talks to everything else, and we can use this analysis to chain things together to prioritize our analysis. Yeah, so I think another project that is quite interesting that we have
been working on is basically trying to build higher assurance cyber-physical systems.
This was part of a DARPA funded project and it was motivated by some work done by a team
that was called the CarShark team, and they basically took a, I don't think they took this particular car, but with the one they took they found that you could exploit cars through many different mechanisms. If you had access to a bus on the car that is called the CAN bus, there is
basically no isolation between the different components. Since everything was sort of
connected you could hijack the cars. You could get your little program running on the
dashboard through compromising the entertainment system through a…
>>: [indiscernible]
>> Joe Hendrix: Yes.
>>: [indiscernible].
>> Joe Hendrix: Anyway, so that motivated DARPA project to try to figure out how can we
make higher assurance autonomous vehicles and other systems. So Galois is working with a bunch of other people. We are part of the air team, which works on systems for unmanned aerial vehicles. Rockwell Collins is the prime on it and they're working with Mike Whalen at Minnesota, and they are responsible for the integration architecture. They do a lot of work on architecture description languages. There is also a red team for this that is trying to find flaws in the systems, and that's AIS and Draper Labs. Galois's part is kind of working on quadcopters and the autopilot for that. Boeing actually has a
military vehicle that they are applying this stuff to. And then we are also working with Nicta
because Nicta brings sort of secure operating systems and networking and that, so we're
focused primarily on the autopilot part. The name of our project is called SMACCM Pilot;
SMACCM is Secure Mathematically Assured Composition of Control Models, and it has a nice acronym that they like in military circles. All of this is released under a BSD license and we use GitHub
for the source, so it's all available if you want to try out some of it. Our platform is basically Arduinos, so we started actually with this ArduPilot, which runs on an 8-bit Arduino processor. Not a lot of power, but it can run on a plane. One of the people at Galois has a ton of experience with this open source project called ArduPilot, which lets you write flight plans in a sort of map style interface, and the plane will go off and fly the flight plan that you drew. It's a very simple way to control a plane, so you don't need to worry too much about the actual mechanics of flight. You just tell it where you want it to go and it will fly there. It's starting to see commercial use in farming, so farms have an interest in having a plane or helicopter survey a field you don't want to walk around, so this is a nice way to do it and you get elevation. You can also use infrared and all these other things to see different aspects of your crops. ArduPilot comes out of a website called do-it-yourself drones, and this is targeting the hobbyist, enthusiast market for autopilots and UAVs. One of the first problems we sort of ran into: we want to use ArduPilot and we want to work with the community, but ArduPilot kind of had this monolithic design; many people kind of stitched things together. There was all sorts of platform-specific stuff integrated right in with the algorithms, and there wasn't really separation of the things you might care about, like communication security or security critical or safety critical pieces. And there wasn't really
any testing or verification, sort of anything goes. So one of the first things an engineer at Galois
did is we wrote a hardware abstraction layer so that ArduPilot now instead of being linked
directly to Arduino can be separated and you can have different types of applications running
on that and this is now all part of ArduPilot. In addition to this sort of improvement to
ArduPilot, we have, on the research side, been working on a few lines of work, and this has been going on for the last year with a few people working on it. One of the first things is a
language for writing memory safe code, low-level code that can run on there. The second is a
language called Tower for stitching these things together, and the final is the actual autopilot
itself, so we're building both the tools and using them. In the design of the language I know
that they considered many alternatives and came up with writing an embedded DSL within Haskell that is called Ivory, and the key features of Ivory are that they want to provide memory safety, timing safety through making it easier to do worst-case execution analysis, and functional correctness, but because it was low level, it needed to do some bit manipulation and
memory area manipulation because we don't have a full operating system running on these
Arduinos. And they need to be interoperable with some other C code. And lastly, because we
were working with the red team we wanted to generate readable C code so that they could
make some progress. I think that if the red team just couldn't understand anything that maybe
defeats the purpose of having the red team.
>>: [indiscernible]
>> Joe Hendrix: Yes.
>>: Do you have dynamic features like memory allocation, dynamic memory allocation, virtual
dispatch and all of that?
>> Joe Hendrix: No. As an embedded DSL, basically Ivory is implemented as a Haskell library, so
the language itself is defined in a package that's called Ivory language and the compiler which is
used to generate the C code from Ivory is written in another package and we did this this way
because basically building a programming language is a lot of work. There's a lot of sort of work
in building a type checker and parser and by writing Ivory within Haskell, we get those things
more easily. And we can also leverage Haskell as sort of a macro language, so you can write Haskell programs to build up your Ivory programs, which lets us do some neat things in composing and building basically more complex structures in our language without having to implement them
ourselves. The system isn't very big, so the language itself is under 4k lines of code. What this
means in terms of our compilation for getting to the executable, basically you take the Ivory
user program and the Ivory compiler and compile those together using Haskell to get an
executable. You run the executable and it will generate C code for you and then the C code can
be compiled using a standard C compiler along with libraries from the small RTOS we used and
other user level libraries. That's how Ivory works. Ivory is just about sort of the individual
functional code. Tower is about task scheduling and communication, so it lets you specify that. You can specify timing and triggering behaviors and you can define how these things communicate. There's some support both for shared memory and queues. Because of the design of Ivory, Tower sits on top of Ivory as macros, so it's just code that manipulates Ivory expressions, and so we get the same type safety and correctness as Ivory and we didn't need a new code generator for that.
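To illustrate the embedded-DSL-plus-code-generator pattern being described, here is a tiny Haskell toy of my own; the real Ivory and Tower APIs look nothing like this. The point is that programs are ordinary Haskell values, so ordinary Haskell functions can act as the macro layer that builds them up, and a separate function renders them as C.

    -- A toy expression language embedded in Haskell, standing in for the idea
    -- behind Ivory.
    data Expr
      = IntLit Integer
      | VarRef String
      | AddE Expr Expr
      | MulE Expr Expr

    -- A toy "compiler": render an expression as C source text.
    toC :: Expr -> String
    toC (IntLit n) = show n
    toC (VarRef v) = v
    toC (AddE a b) = "(" ++ toC a ++ " + " ++ toC b ++ ")"
    toC (MulE a b) = "(" ++ toC a ++ " * " ++ toC b ++ ")"

    -- Haskell as the macro language: an ordinary function that builds up an
    -- embedded program, here a Horner-style polynomial evaluation.
    horner :: [Integer] -> Expr -> Expr
    horner coeffs x = foldr (\c acc -> AddE (IntLit c) (MulE x acc)) (IntLit 0) coeffs

    -- putStrLn (toC (horner [1,2,3] (VarRef "x"))) prints
    -- (1 + (x * (2 + (x * (3 + (x * 0))))))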
>>: But with that you have higher-level properties. You presumably want that, you know, a task whose event is scheduled eventually executes.
>> Joe Hendrix: Yes.
>>: Are you doing any sort of [indiscernible]
>> Joe Hendrix: We are not looking at those properties. The scope of the red team was mostly
on these low-level bugs and so we focused mostly on that. There are some other groups that
are looking at the higher-level issues and is the autopilot correct, is an interesting question, but
we haven't done that.
>>: [indiscernible] denial of service [indiscernible]
>> Joe Hendrix: Yeah. So we're not there; I think we would say higher assurance software, not the end of the story. Just to kind of show you what our part is: we ended up having to
upgrade processors, so this is the current board and this fits a lot of things on it, so it has a lot
of different sensors on it. Galois’ part is this part in the middle and on top of that we interact
with some C code for the sort of the sensor to autopilot stage and then we, again, rely on C
code for actually doing the modem and the motors. Those parts could have memory safety
errors. They haven't been verified. Again, all of this is available on SMACCMPilot.org. If you
want to try it out, feel free. I think the…
>>: Yes?
>>: [indiscernible]
>> Joe Hendrix: Yes. I think it's 16-bit, has a little bit more RAM.
>>: So what was the, what did you need out of your better processor?
>> Joe Hendrix: I don't know.
>>: [indiscernible] what is the final program size because there isn't that much RAM.
>> Joe Hendrix: Yeah. It's on the order of thousands of lines, and the generated code is, I think, under 2000, so I don't know how much k that would come out to, but it's still fairly small. I think the thing that this board was giving us was the extra sensors. Finally,
this is the last project I'm going to talk about. This is about computing on encrypted data. The
work I'm going to talk about is from Galois but we worked closely with Cybernetica on some of
the stuff so I thought I would mention them. They have been helpful in helping us learn about
this domain. The goal of this project is to basically perform computations where the algorithm
is running on encrypted data; there's just a specific algorithm you are trying to run and it doesn't have access to the plaintext data. This came out in the news I would say about four or five years ago, when a breakthrough came in something called fully homomorphic encryption, which showed how you could actually do this on encrypted words.
Now the challenge with fully homomorphic encryption is that basically you need millions of bits
just to encode a single like memory word, plaintext word, so it's a huge blowup in the size of
the programs that you need to worry about. There's been some other work on something
called secure multiparty computation, and this still does not perform like executing on normal hardware, and it has a slightly weaker attacker model, but it's much more efficient, and so we have actually implemented apps that use secure multiparty computation. The relationship to trust is that in doing this we can basically run computations on hardware that's less trusted, so we don't have to trust the service that's doing the computation as much if we can use techniques like this. Secure multiparty computation is designed against
adversaries that are honest but curious. The idea is you have some number of machines, and here we have three, and if you have a single value you encode it across the three machines. If you had a value x, you can separate it into the sum of three values x1, x2, x3, and this could actually be over an arbitrary ring. Each of these servers will know one of those values, but just knowing one of those values won't tell you what x is. You would basically have to have collusion between the servers for them to learn that value. This reduces trust because no one server can compromise things; you have to have actual collusion. These x sub i values are drawn randomly, just with the property that they encode x. What you can do is simulate computation on the unencrypted value x by computation on the x sub i's. Addition is fairly easy: if you want to do addition you can do it locally, so if you have two numbers that are split in this way, x and y, you can compute your share of the sum just by adding the two shares that you know. Multiplication, however, will require some communication, so you actually have to exchange messages, and there's an algebraic formula for it. If you work out the math you'll see that it computes the same thing; it is really just the expansion of the product.
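Spelled out, this is just the algebra behind that statement, with three additive shares per value (the actual protocol also re-randomizes shares as it goes):

    x \cdot y
      = (x_1 + x_2 + x_3)\,(y_1 + y_2 + y_3)
      = \sum_{i=1}^{3} \sum_{j=1}^{3} x_i\, y_j

Each server holds only its own x_i and y_i, so the diagonal terms can be computed locally, and it is the cross terms that force the servers to exchange messages.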
So you have this communication pattern where the servers are exchanging values. They only exchange enough information so that they can do that computation, and periodically there's a cleaning process where you generate new random shares, so that no server builds up enough information to recover the actual values. All of these computations have to be designed explicitly by hand so they satisfy this correctness property and don't exchange so much information that the secret is no longer secure.
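Here is a minimal Haskell sketch of the additive sharing just described. It is an illustration only: it works over the integers modulo a fixed modulus rather than whatever ring a real system uses, and the random shares are supplied by the caller instead of being drawn from a secure generator.

    -- Additive secret sharing over the integers modulo m: a value x is split
    -- into three shares that sum to x (mod m); any single share on its own
    -- reveals nothing about x.
    m :: Integer
    m = 2 ^ 32

    -- Split x into three shares, given two random ring elements r1 and r2
    -- (a real implementation would draw these from a cryptographic RNG).
    share :: Integer -> Integer -> Integer -> (Integer, Integer, Integer)
    share r1 r2 x = (r1, r2, (x - r1 - r2) `mod` m)

    -- Reconstruct the secret from all three shares.
    reconstruct :: (Integer, Integer, Integer) -> Integer
    reconstruct (x1, x2, x3) = (x1 + x2 + x3) `mod` m

    -- Addition is local: each server adds the shares it already holds, so the
    -- shares of the sum need no communication at all.
    addShares :: (Integer, Integer, Integer) -> (Integer, Integer, Integer)
              -> (Integer, Integer, Integer)
    addShares (x1, x2, x3) (y1, y2, y3) =
      ((x1 + y1) `mod` m, (x2 + y2) `mod` m, (x3 + y3) `mod` m)

For example, reconstruct (addShares (share 11 7 100) (share 3 9 23)) is 123, and no single component of either tuple betrays 100 or 23. Multiplication is what breaks this locality, because of the cross terms shown above.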
One application that we have looked at is actually executing a filter, where the filter is responsible for taking documents and removing some data from them. But we don't want the filter to know what those documents are, so we wanted it to operate on encrypted files. The approach for doing this is to first encrypt the document with AES and then turn the filter computation into a shared computation. That will
look something like this, where you have your computer that has the document. You encrypt it
with AES. And then you send it to the mail server and you also have to send the AES key, but
you encrypt the AES key to these servers that are running the secure mail filter so they get
copies of it but only the servers can decrypt the message and that can also be in the shared
computation. Then they will do the filtering, and they will re-encrypt the result, and you get this encrypted message which, if these computers share the AES key, and there's mechanisms for doing key exchange, they can decrypt. This looks something like this, where you are sort of taking your document, AES encrypting it, sending separate copies to the servers and then coming back and running your filter. And as a commuting diagram it looks something like this, just to make everything quite explicit. You take your document, you encrypt it, and you encrypt that using the share encryption scheme, so you get the shared AES encryption of the document, and then you run AES in the context of the shared computation, so you have to run this AES over these shared values, and then you get your document back, and this is all still within the purple space. You run the filter to get your new document, you re-encrypt it, and it's still in the shared form. And then you can un-share things, so you run your decryption and you get your AES-encrypted document back. That's how you get the whole thing. Here's
some performance numbers just on how long AES takes. Basically we can do about 320 AES
blocks a second, which is around 30 k. To improve performance, because there's both communication and computation, we've found that if you have two threads doing this simultaneously you can get better performance, so less time per block, by having multiple threads doing it and you sort of interleave communication and computation. That's it on this. This
is just sort of a random selection of the various projects we've worked on. I'd be happy to talk
about more. I think in this higher level goal of trustworthiness of the next generation, I don't
know, but it's sort of a theme motivating a lot of this work and that's it.
>> Tom Ball: Thank you very much. [applause]. Anymore questions?
>>: The first part [indiscernible] do you have a sense for how much [indiscernible] or did you
just turn the crank and something popped out?
>> Joe Hendrix: For the SHA case it was basically an afternoon. AES is almost instantaneous.
You just have to write a little test case to get your spec. ECC was spread out over three months
but we were also doing tool development during that time, so I think we could get it down to: if I had a spec now in Java it would probably take a couple of weeks or a month maybe to do
that.
>>: [indiscernible] AES and SHA, were the specs fairly close to the implementation?
[indiscernible] per SHA is very [indiscernible]
>> Joe Hendrix: I think one major thing is the implementations tend to use tables to accelerate
things, whereas, our spec wouldn't do that, so that's probably the biggest difference. The ECC,
there are a lot more differences because of this encoding scheme.
>>: There was this attempt to keep them looking close together, though? Did you actually do
literate specifications [indiscernible] which is where these books [indiscernible]
>> Joe Hendrix: Yeah.
>> Tom Ball: Okay. Great. Thanks again. [applause]