>> Kristin Lauter: Okay. So welcome everyone. Today we are very pleased to have Miran Kim
visiting us from Seoul National University in Korea. Seoul National University is widely
acknowledged to be one of the best universities in Korea. Miran will be speaking to us on a
Guide to Applications of Homomorphic Encryption. Miran has done important work on
maintaining and extending various libraries for homomorphic encryption and has won the Edit
Distance contest in the iDASH Secure Genome Analysis competition last year. Thank you,
Miran.
>> Miran Kim: Thank you for introducing me and giving me the opportunity to talk about my
work. Today I'd like to talk about a Guide to Applications of Homomorphic Encryption. I'll
briefly explain homomorphic encryption, then describe our basic arithmetic under
homomorphic encryption, such as binary circuits and arithmetic circuits over the integers; and
finally I'll show you how to apply our primitive circuits to real-world problems such as
private database query processing and secure DNA sequence analysis.
Homomorphic encryption is an encryption function which permits computation on encrypted
data without decryption. More precisely, it is an algebraic-structure-preserving map between
the plaintext space and the cipher text space. For the plaintext space we use, for example, a
polynomial ring modulo an integer P and a cyclotomic polynomial, but we usually use its small
subring Z sub P, the integers from zero to P minus 1.
This definition allows us to evaluate any arithmetic circuit homomorphically, and many
homomorphic encryption cryptosystems have been suggested, such as Gentry's original
scheme and follow-up works like DGHV10, BGV12, BLLN13 and so on. In this
talk I focus on minimizing the number of sequential multiplication gates in a circuit, which is
called the depth, and the number of homomorphic multiplications, which can be considered the
computational complexity. This is because the error in the cipher text grows exponentially or
linearly with the multiplicative depth, and if the error is too large we cannot compute
anymore, so the multiplicative depth is very important in homomorphic encryption.
Now I'll explain our basic arithmetic under homomorphic encryption. Suppose we are given
mu-bit integers; we consider these four primitive circuits: equality, comparison,
minimum and addition. The equality circuit determines the equality of two inputs, the
comparison circuit outputs one if X is less than Y, the minimum circuit outputs the
minimum of two inputs, and last, the addition circuit just adds two numbers over Z. The
binary circuits are expressed as polynomials of degree mu, so they can be evaluated with a
homomorphic encryption of depth log mu. As for the complexity, equality requires mu
multiplications, and for the other circuits the complexity is quadratic in the bit
length mu. However, they require many cipher texts: for only one mu-bit integer
we need mu cipher texts.
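As a plaintext-level sketch of the bitwise circuits just described (an illustration, not the talk's HElib implementation): over Z_2, XOR is addition and AND is multiplication, equality is a product of per-bit agreements, and less-than scans from the most significant bit.

```python
# Plaintext-level sketch of the bitwise equality and comparison circuits
# over Z_2, assuming mu-bit inputs given MSB-first. Under encryption every
# XOR/AND below would act on cipher texts; here we use plain bits.

def eq_circuit(x, y):
    """EQ(x, y) = prod_i (1 XOR x_i XOR y_i): 1 iff all bits agree."""
    r = 1
    for xi, yi in zip(x, y):
        r &= 1 ^ xi ^ yi          # AND of per-bit equality
    return r

def lt_circuit(x, y):
    """LT(x, y) = 1 iff x < y, scanning from the most significant bit."""
    r = 0
    for i in range(len(x)):
        prefix = 1
        for j in range(i):        # all higher bits must agree
            prefix &= 1 ^ x[j] ^ y[j]
        r ^= prefix & (1 ^ x[i]) & y[i]   # first position with x_i=0, y_i=1
    return r

def to_bits(n, mu):
    """MSB-first bit list of an mu-bit integer."""
    return [(n >> (mu - 1 - i)) & 1 for i in range(mu)]
```

The nested products are what make the comparison quadratic in mu, as stated above; balancing the product trees gives the log mu depth.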
Now consider these circuits over the integers. Suppose that we take a large message space;
then the addition can be performed with only one circuit, but we have to choose P so as to
perform these operations without overflowing in the message space. And we can revise the
previous circuits using these facts: XOR and AND can be written with plus, minus and times,
which are arithmetic operations over the integers. For bits A and B we know that these
properties are satisfied, so using this very simple observation the primitive circuits can be
computed efficiently with the same depth and the same complexity. However, there are still
many cipher texts, because for the equality and comparison circuits we use the bitwise
encoding and each bit is considered as an element of the large message space.
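The simple observation above can be written out explicitly; this sketch shows the standard identities for bits a, b in {0, 1} expressed with only plus, minus and times, so they make sense in any Z_p.

```python
# Boolean gates on bits a, b in {0, 1} expressed with integer arithmetic
# only, so the binary circuits carry over to a large message space Z_p.

def XOR(a, b):
    return a + b - 2 * a * b      # a XOR b = a + b - 2ab

def AND(a, b):
    return a * b                  # a AND b = ab

def NOT(a):
    return 1 - a                  # NOT a = 1 - a
```

Substituting these into the bitwise circuits keeps the same multiplicative depth and complexity, exactly as claimed in the talk.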
On the other hand, we can revise the circuits using Fermat's little theorem. Let's recall
Fermat's little theorem: for a prime P, X to the P minus 1 is zero if and only if X is zero, so we
can consider two important functions. The first one is the function Chi sub K, which is the
characteristic function of K; it can be expressed using Fermat's little theorem, and then the
equality circuit can be seen as just Chi sub zero. Chi sub K is a polynomial of degree P minus
1, so the depth is log of P minus 1, about log P, and the complexity is also log P.
The other function is the step function, which outputs one if and only if X is between zero
and P minus 1 over 2. This step function can be expressed as a sum of Chi sub K for K from
zero to P minus 1 over 2, so the comparison circuit and minimum circuit can be expressed
using this step function. As you see, the step function is also a polynomial of degree P minus 1,
so the depth is about log P; for the complexity, computing one Chi sub K needs log P
multiplications and we have P minus 1 over 2 of the Chi sub K's, so the total complexity is big O
of P times log P. The comparison and minimum circuits have the same complexity, as follows.
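The two functions just described can be sketched directly (a plaintext illustration with a small prime; the modulus and the comparison wrapper are illustrative): chi_k(x) = 1 - (x - k)^(p-1) mod p, and the step function is the sum of the chi_k over the first half of Z_p.

```python
# Sketch of the Fermat's-little-theorem circuits over Z_p, p prime.
# chi_k(x) = 1 - (x - k)^(p-1) mod p is 1 iff x = k (degree p - 1);
# step(x) = sum_{k=0}^{(p-1)/2} chi_k(x) is 1 iff 0 <= x <= (p-1)/2.

P = 13  # small prime for illustration

def chi(k, x, p=P):
    return (1 - pow(x - k, p - 1, p)) % p

def step(x, p=P):
    return sum(chi(k, x, p) for k in range((p - 1) // 2 + 1)) % p

def eq(x, y, p=P):
    """Equality circuit: chi_0 applied to the difference."""
    return chi(0, (x - y) % p, p)

def lt(x, y, p=P):
    """Comparison: x < y iff y - x mod p lies in [1, (p-1)/2].
    Correct only when |x - y| <= (p-1)/2, as in the talk's setting."""
    return (step((y - x) % p, p) - eq(x, y, p)) % p
```

Evaluating the degree-(p-1) power by repeated squaring gives the log P depth mentioned above.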
>>: Can I ask you a question? Does this range, it's like a range query, right? It's like asking if
it's in this range. Does it work if you change the endpoints of the range, or does it only work
because it's zero and P minus 1 over 2?
>> Miran Kim: You mean if we change the range then it will be changed?
>>: Will it still work and still do it the same way?
>> Miran Kim: Maybe it will work.
>>: So it's like a range query function.
>> Miran Kim: Yes. In the next slide I will talk about what happens if we change the range. And
we have a very nice advantage using integer encodings: there are not so many cipher texts. For
one mu-bit integer we just have one cipher text. And the next slide: if the domain is
restricted to an interval, for example X is from zero to L or from P minus L to P
minus 1, then the characteristic function and the step function can be expressed
using the Lagrange interpolation formula. If we pre-compute the M sub K's as constant values,
then these are polynomials of degree two times L, so the total depth is about log of two times L.
So if the difference between two inputs is less than L, then the equality, comparison and
minimum circuits can be represented using this function and they can be evaluated with a
homomorphic encryption of depth log L, and the complexity is as follows.
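A small sketch of this restricted-domain optimization (parameters P and L here are illustrative): when the difference of the inputs is known to lie in [-L, L] mod P, the step function only needs to be correct on 2L + 1 points, so Lagrange interpolation gives a degree-2L polynomial instead of degree P - 1.

```python
# Lagrange interpolation of the step function on the restricted domain
# {0,...,L} union {P-L,...,P-1}: a polynomial of degree 2L, not P - 1.

P, L = 101, 3

def lagrange_eval(points, x, p=P):
    """Evaluate the interpolating polynomial through `points` at x mod p."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

# step(d) = 1 on differences d in {0..L}, 0 on d in {-L..-1} (mod P)
domain = [(d % P, 1 if d >= 0 else 0) for d in range(-L, L + 1)]

def lt_small(x, y, p=P):
    """x < y, valid only when |x - y| <= L."""
    d = (y - x) % p
    return (lagrange_eval(domain, d, p) - (1 if x == y else 0)) % p
```

Under encryption one would evaluate the pre-computed polynomial coefficients homomorphically; here the interpolation just shows why depth about log(2L) suffices.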
In short, we have the following results, given two mu-bit integers. If we use bitwise encoding
then all the circuits have depth log mu, but the number of cipher texts is very large. In the
case of integer encoding we first take P about 2 to the mu, so the addition can be performed as
just a modular addition in the message space; if we use Fermat's little theorem then the
depth is mu, which is bigger than that of bitwise encoding, but the number of cipher texts
is very small; and if the difference between two inputs is very small then the depth can be
reduced to log L, also with a small number of cipher texts. So you can choose bitwise
encoding or integer encoding according to the function that you want to evaluate.
And we have another optimization, the Single-Instruction-Multiple-Data technique, called the
SIMD technique. In short, we pack several messages into a single cipher text so we can perform
an operation simultaneously on all of the messages. More precisely, suppose that the
cyclotomic polynomial factors modulo a small P into a product of polynomials which are
irreducible and have the same degree; then the message space is isomorphic to a product of
polynomial rings modulo P and the irreducible factors. These are called the plaintext slots, and
we first embed our messages into the plaintext slots, encode them into a plaintext polynomial
using the Chinese remainder theorem, and encrypt it into a cipher text. We can also make
extensive use of the same techniques to move data across plaintext slots.
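A miniature of this packing, with made-up toy parameters: over Z_13 the polynomial x^2 + 1 factors as (x - 5)(x + 5), since 5^2 = -1 mod 13, so Z_13[x]/(x^2 + 1) has two plaintext slots given by evaluation at the roots. Real schemes use a large cyclotomic with hundreds of slots, but the algebra is the same.

```python
# Two-slot miniature of SIMD packing in Z_13[x]/(x^2 + 1), whose slots
# are evaluation at the roots 5 and 8 of x^2 + 1 mod 13. One polynomial
# carries two messages; ring operations act on both slots at once.

P = 13
ROOTS = (5, 8)  # roots of x^2 + 1 mod 13, since 5^2 = 8^2 = -1 (mod 13)

def pack(m1, m2, p=P):
    """Inverse CRT: find a + b*x with a + b*r_i = m_i for each root."""
    r1, r2 = ROOTS
    b = (m1 - m2) * pow(r1 - r2, p - 2, p) % p
    a = (m1 - b * r1) % p
    return (a, b)                 # coefficients of a + b*x

def slots(poly, p=P):
    """CRT decode: evaluate the polynomial at each root."""
    a, b = poly
    return tuple((a + b * r) % p for r in ROOTS)

def ring_mul(f, g, p=P):
    """Multiply (a1 + b1 x)(a2 + b2 x) mod (x^2 + 1, p), using x^2 = -1."""
    a1, b1 = f
    a2, b2 = g
    return ((a1 * a2 - b1 * b2) % p, (a1 * b2 + b1 * a2) % p)
```

One ring multiplication multiplies both slots componentwise, which is exactly why packing amortizes the cost over all messages.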
Now I'll show you how to apply our primitive circuits to real problems. In 2011 there was a
massive attempt to access user accounts on the Sony PlayStation network. From these attacks
we've learned that we should encrypt sensitive information and store it in encrypted form to
protect it from such kinds of data theft. And here is the query that we considered in our work.
Suppose that we have a database table with a department column; then we can consider this
private search-and-compute query: we first search for the employees who are in the research
department and then aggregate their salaries.
There are several options. The first thing we can think about is searchable encryption, but it
only supports the search operation and it is weakly secure, because the correspondence
between plaintexts and cipher texts remains and leaks the data distribution. The other option
is using cryptosystems with layers of different encryption schemes, such as CryptDB and
Monomi. But they cannot support multiplication, so if you want to multiply, then the
result values should be encrypted in advance, or the encrypted attributes have to be
decrypted. So we thought that homomorphic encryption schemes appeared perfect for
performing such a database query with a single encryption scheme.
The following picture illustrates our approach. Suppose that a server holds encrypted DB
data consisting of tuples and attributes. Let us denote by T sub I bracket A sub J the value of
attribute A sub J in tuple T sub I. In the preparation step the DB user generates a public key
and secret key for homomorphic encryption, publishes the public key to the server, and then
encrypts their data and stores it in the encrypted database.
In the query submission step, a DB user first encrypts all the messages in a query Q under the
public key, say Q bar; the messages can be keywords. The database server compiles Q bar into
Q bar star using a query processor, applying our techniques. Finally, the DB server evaluates
Q bar star over the encrypted data and returns the encrypted result to the DB user. Let's think
of our search-and-compute queries. We first check the equality of a keyword-type attribute
and a keyword: if they are the same then it outputs one, and otherwise it outputs zero. Then
we multiply the attributes by this result and aggregate over the tuples. But if we encrypt all
the messages in a bit-by-bit manner, then addition operations over encrypted data are very
expensive despite our optimization, because addition includes a very expensive carry
operation, and the carry operation requires many multiplications.
So our solution is as follows. First we use an integer message space. More precisely, for
keyword attributes we use bitwise encoding because they are only used for the search
operation, and numeric-type attributes are encoded as integers because they are used for the
compute operation. Using this encoding we can reduce the depth of search-and-compute
queries to big O of log mu for mu-bit attributes; this is just the depth of the search operation.
We also use the SIMD technique: in our experiments we usually embed about 600 messages
in one cipher text and perform the queries simultaneously on all the messages. For a
performance improvement we represent each numeric-type attribute in radix 2 to the
omega, and we have the following equation. From this equation we know that it suffices
to compute just these red functions over the integers. We choose a large plaintext modulus P
in order to perform these operations without overflowing in the message space. After the
server evaluates this function and the user decrypts the result, the user only has to shift
and add.
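The search-and-sum query with the radix decomposition can be sketched at the plaintext level (the record layout, radix and digit count here are illustrative, not the talk's parameters): the server forms sum over tuples of EQ(key, keyword) times each radix-2^omega digit of the value, and the user shifts and adds after decryption.

```python
# Plaintext-level sketch of the private search-and-sum query: the server
# homomorphically computes per-digit sums of match * digit, where the
# numeric attribute is kept in radix 2^omega digits so each digit sum
# stays small; the user reassembles the total with shifts and adds.

OMEGA = 4                                      # radix 2^4 digits

def digits(n, ndig, w=OMEGA):
    return [(n >> (w * j)) & ((1 << w) - 1) for j in range(ndig)]

def search_and_sum(records, keyword, ndig=8):
    """records: list of (key, value); returns the per-digit sums the
    server would compute under encryption."""
    sums = [0] * ndig
    for key, value in records:
        match = 1 if key == keyword else 0     # stands in for the EQ circuit
        for j, d in enumerate(digits(value, ndig)):
            sums[j] += match * d               # homomorphic mul + add
    return sums

def shift_and_add(sums, w=OMEGA):
    """Done by the user in the clear after decryption."""
    return sum(s << (w * j) for j, s in enumerate(sums))
```

Note that the per-digit sums can exceed 2^omega; that is fine, since the plaintext modulus P is chosen large enough to avoid overflow, and the final shift-and-add restores the true total.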
Here are our experimental results. We used the BGV scheme and the HElib library by IBM, and
we set keyword-type attributes to 10 bits and numeric-type attributes to 30 bits. For the
equality search and sum query with one hundred tuples, we took the plaintext modulus to be
about 2 to the 15 and we packed 600 messages into one cipher text; the running time is
about four seconds. And the comparison search and sum query with one hundred tuples took
about 13 seconds.
Now I want to talk about secure genome analysis.
>>: Before you go, can I ask some questions about the previous [inaudible]? How much
difference is there between the search part and the sum part? So can you explain again how
you do the search?
>> Miran Kim: For the search operation we use the equality circuit and comparison circuit, as I
described above, and the sum is just addition; because we chose the large message space it is a
free operation. Only the search operation requires
>>: [inaudible].
>> Miran Kim: It is very important to accelerate the timing.
>>: And here capital M was the number of records?
>> Miran Kim: Yeah. You may assume that there are 100 people and it's the number of people
in the database. Many genome projects display genotype information in public databases, so
genomic data has become publicly accessible, and one can recover physical characteristics
from genomic data, so we want to use the potential of homomorphic encryption for secure
analysis of this data. In our model the data owner wants to store large amounts of data in the
cloud, and many users may interact with the same data over time.
In our solution the cloud can handle all that interaction through computation on encrypted
data, so it doesn't require further interaction from the data owner. This could be most
interesting in the situation of this picture: the data owner is a hospital or clinic and the third
party is the patient. Suppose the hospital would like to use cloud services for analyzing a lot
of patients' data. The hospital generates a public key and secret key, and then patients can
upload their data directly using the public key; the hospital sends genomic queries to the
cloud, the cloud evaluates them with the public key and returns the encrypted results to the
hospital, where they can be decrypted with the hospital's secret key.
Now I'll explain the edit distance, which is a measure quantifying the dissimilarity of two
strings. In more detail, it is the minimum number of edit operations, namely mismatch,
insertion and deletion, needed to transform one string into the other. For example, suppose
that we have two strings, alpha and beta; then we can align them by inserting gaps. The first
column, where the characters are the same, is called a match, and the last column, where
they differ, is called a mismatch. In one column alpha prime needs an insertion of A in order
to transform into beta prime, so it is called an insertion, and in another column a character of
alpha prime needs to be deleted in order to transform into beta prime, so it is called a
deletion. So in this case the edit operations are two mismatches, one insertion and one
deletion, for a total of four, and this is a candidate for the edit distance, because the edit
distance is the minimum over all such sets of edit operations.
And here is the concrete Wagner-Fischer edit distance algorithm. Lines one to six are just the
initialization step, and lines 9 and 10 are the real computation step. We first check whether
the characters of the inputs are the same or not, then do an addition, then compute a
minimum; in this way we build the matrix D sub I, J by updating its entries, and the edit
distance is the last value D sub N, M. From another viewpoint, if we follow the diagonal path
then we have to check the equality of the inputs; if we move one unit down it is a deletion,
and if we move one unit right it is an insertion, and in those cases we have to add one to the
previous values.
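The algorithm on the slide can be sketched as follows; the equality check, the addition and the minimum in the inner loop are exactly the primitive circuits that get evaluated under encryption.

```python
# The Wagner-Fischer edit distance recurrence: initialize the borders,
# then fill D[i][j] from an equality check, additions, and a minimum.

def edit_distance(alpha, beta):
    n, m = len(alpha), len(beta)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):            # initialization (lines 1-6)
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):         # main computation (lines 9-10)
        for j in range(1, m + 1):
            t = 0 if alpha[i - 1] == beta[j - 1] else 1   # equality circuit
            D[i][j] = min(D[i - 1][j - 1] + t,            # match/mismatch
                          D[i - 1][j] + 1,                # deletion
                          D[i][j - 1] + 1)                # insertion
    return D[n][m]
```

All entries D[i][j] with i + j fixed depend only on earlier anti-diagonals, which is the parallelism exploited in the encrypted version below.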
Now I'd like to explain our encrypted edit distance algorithm. Suppose that we are given two
inputs, alpha and beta, over an omega-bit alphabet; that means each character is omega bits.
Then for line nine we have to check the equality, which is expressed using the equality circuit
as described above, and we add one at each of the N steps along the X axis and each of the M
steps along the Y axis. So the elements of the matrix are always less than N plus M minus one,
and it suffices to assume that the D sub I, J's are log of N plus M minus one bits, say mu bits.
For one round, if we use bitwise encoding then the total depth is the depth of the equality
circuit on omega bits, plus the depth of the addition circuit on mu bits, plus the depth of the
minimum circuit on mu bits, so the total depth is about two times log mu. In the case of
integer encoding the depth of the addition is zero, but the depth of the minimum circuit on mu
bits is about mu, so we can conclude that bitwise encoding is better than integer encoding in
our encrypted edit distance algorithm.
In short, given encryptions of these kinds of values, one can apply the above operations to
compute the encryption of D sub I, J with depth two times log mu, which is approximately big
O of log log of N plus M, and it is possible to compute the D sub I, J's simultaneously whenever
I plus J is a fixed value. Continuing this way we obtain the encryption of the edit distance with
a homomorphic encryption of the depth shown: the first term is the depth of one round and
the second term is the number of rounds. However, this requires too large a depth. For
example, to compute the edit distance of DNA sequences of length eight we would need a
homomorphic encryption of depth 120, so it's infeasible to evaluate; it would take several
days or several months.
So we optimize the algorithm using block computation. First divide the edit distance matrix
into sub-blocks of size tau plus one, solve the edit distance problem in each block, compute
the blocks diagonally, and update the entries. Then we note that the edit distance can be
evaluated with only this depth: the first term is the depth of one block and the second term
is the number of rounds. Here is the result: if tau is big Omega of log log of N plus M, then
the depth of one diagonal round is big O of tau, so the total depth can be reduced to big O of
N plus M. Here are our experimental results with the same scheme and the same library. For
DNA sequences of length eight we packed 682 messages into one cipher text; the total time
is five hours and the amortized time is about 27 seconds.
And now I'd like to introduce the Secure Genome Analysis Competition from last year, which
was sponsored by the National Institutes of Health. GenomeWeb and Nature reported on this
competition, and the teams were from Microsoft, IBM, Stanford, MIT and so on. I also
participated, with researcher Kristin, and we won the edit distance task.
For the DNA sequence comparison task, suppose that two individual genomes were randomly
selected from the Personal Genome Project and we want to compare the two sequences
against a reference genome which is publicly known. The information about the two genomes
was provided as Variant Call Format files. At first it was a strange file format, so let me explain
some of its concepts. First there is the single nucleotide polymorphism, which is A, G, C or T,
as you know; REF is the reference bases; ALT is the alternate non-reference string, which
means the user's alternative genome; and SVTYPE is the type of structural variant relative to
the reference. For example, if the non-reference string doesn't have any SNPs then the
SVTYPE is deletion, and if the non-reference string has some SNPs while the reference doesn't,
then the SVTYPE is insertion.
Before we begin we have something to do. The first thing is to curate the data using the
positions in the VCF files of the two users; that is, we arrange the information and make a
merged list. Then we define two encodings, E and F: the first encoding E tells us whether the
genotype is missing or not in the list, and the second encoding F specifies the variant with
respect to the reference, so if the SVTYPE is insertion or deletion then it is zero and otherwise
it is one.
Then the user's string can be encoded as a binary string of length 15. By the first rule each
SNP is represented by two bits, with A as 00, G as 01, C as 10 and T as 11, and these are
concatenated with each other; we pad a 1 at the end of the string, because all-A strings would
be all-zero strings and we have to distinguish them, and then pad zeros to make a 15-bit
string. This is because we assume that it is enough to compare seven SNPs between two
users: they take up to 14 bits by the first rule, and then we pad the 1, so it's up to 15 bits. So
all of a user's strings can be seen as 15-bit binary strings, say S sub I, and in the case of a
missing genotype, that is, when E sub I is zero, it is encoded just as the zero string.
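The 15-bit encoding just described can be sketched as follows (the function and table names are illustrative): two bits per base, a 1 marking the end so that all-A strings are distinguishable from the zero string, then zero padding, with a missing genotype mapping to the all-zero string.

```python
# Sketch of the 15-bit encoding of a user's string: two bits per base
# (A=00, G=01, C=10, T=11) for up to seven bases, a 1 to mark the end,
# then zero padding; a missing genotype encodes as the zero string.

BASE_BITS = {"A": "00", "G": "01", "C": "10", "T": "11"}

def encode_string(s, present=True, max_bases=7):
    width = 2 * max_bases + 1              # 15 bits for seven bases
    if not present:                        # missing genotype: zero string
        return "0" * width
    s = s[:max_bases]                      # truncate very long variants
    bits = "".join(BASE_BITS[c] for c in s) + "1"
    return bits.ljust(width, "0")
```

Truncation is the "we just cut" step mentioned in the discussion below: variants longer than seven bases are compared only on their prefix.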
>>: Can I make a comment? So this length 15 that actually came from the data, right? That was
because there could've been up to 15 entries?
>> Miran Kim: Because in the file some genotypes have length 170, which is too large to
compute over encrypted data, so we just cut. Sometimes this outputs a different value, but it
was correct in our experimental results.
>>: What was 170 sometimes?
>> Miran Kim: 170 length.
>>: The insertion?
>> Miran Kim: ALT strings. In this example it has just one character, but sometimes it has
100 or 180 SNPs, so it's large.
>>: [inaudible] arbitrary graph.
>> Miran Kim: For the performance of our experiment, yeah. And here is the Hamming
distance algorithm. Suppose that Alice and Bob have genotypes over many SNPs, say X sub I
and X sub I prime, and for a fixed position define the Hamming distance at that position as
follows: if the SVTYPE is insertion or deletion then the Hamming distance is defined as zero;
if one of them is missing or the genotypes are different then the Hamming distance is one;
and otherwise it is zero. Then aggregate the Hamming distances over the positions. Here is
our strategy: first Alice and Bob pack their data into plaintext slots respectively and encrypt
with the public key, and then we evaluate the circuit computing the Hamming distance at
each position in parallel. If we use bitwise encoding we evaluate the first circuit, and if we
use integer encoding we use the second circuit; after evaluation of the H sub I's and
decryption, we aggregate the H sub I's.
Here is the task that we won. We devised an approximate algorithm to compute the edit
distance based on the reference sequences; it is not always exact, but it has some advantages:
it calculates based on a blockwise metric, so it enables parallel computation. Note that the
full Wagner-Fischer edit distance algorithm described above is computed in a recursive way,
so its depth is too large.
>>: What are you doing? [inaudible]?
>> Miran Kim: In homomorphic encryption the depth is the number of sequential
multiplicative gates. It is very important to the
>>: [inaudible] make your depth more than about 20 for a homomorphic encryption.
>>: [inaudible].
>> Miran Kim: Here is the approximate edit distance. For a fixed position we define a
genotype length and an edit distance as follows. For the genotype length: if the SVTYPE is a
deletion, meaning there are no SNPs in the user's genome, then take the length of the
reference, and otherwise she has the SNPs, so take the length of the ALT; call it D sub I. The
edit distance E sub I is zero if and only if they have the same genotype, and otherwise we
take the maximum of the lengths; finally we aggregate the E sub I's.
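The per-position definition above can be sketched as follows (field names and the SVTYPE label are illustrative): a genotype length per position, then a per-position cost that is zero on agreement and the maximum of the lengths otherwise, summed over positions.

```python
# Sketch of the blockwise approximate edit distance from the slide:
# per position, d_i is the reference length for a deletion SVTYPE and
# the ALT length otherwise; e_i is 0 when the genotypes agree and
# max(d_i, d_i') otherwise; the result is the sum of the e_i.

def genotype_length(ref, alt, svtype):
    return len(ref) if svtype == "DEL" else len(alt)

def approx_edit_distance(recs_a, recs_b):
    """recs_*: lists of (genotype, genotype_length) per merged position."""
    total = 0
    for (ga, da), (gb, db) in zip(recs_a, recs_b):
        total += 0 if ga == gb else max(da, db)
    return total
```

Because every position contributes independently, the e_i can all be evaluated in one parallel round under encryption, unlike the recursive Wagner-Fischer fill.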
And here is the evaluation circuit. For bitwise encoding we use the first evaluation circuit,
and for the integer encoding we use the second circuit. Here are the implementation results,
with the HElib library by IBM and with the BLLN scheme in our library at Microsoft. We are
given two DNA sequences with 5000 genotypes. The GHS scheme in HElib took about 15
seconds for the Hamming distance and about 40 seconds for the edit distance, and the BLLN
scheme took 68 seconds for the Hamming distance and 110 seconds for the edit distance. As
I said, it previously took 27 seconds to evaluate just 8 sequences, so these results are a big
improvement over the previous ones.
>>: Because the length [inaudible] eight so that’s a huge improvement.
>> Miran Kim: Yes, from eight to 5000. We also provide a theoretical comparison between the
GHS scheme and the BLLN scheme. They use the same plaintext space and cipher text space,
but the main difference between the two schemes is the encoding of the messages: the GHS
scheme puts the messages in the least significant bits of the cipher text, while the BLLN
scheme puts the messages in the higher bits of the cipher text. Also, the GHS scheme uses odd
cyclotomic polynomials, so there is no known security reduction, but the BLLN scheme uses
cyclotomic polynomials of degree a power of two, so there is a security reduction to the
shortest vector problem; in that sense the BLLN scheme is more secure than GHS. Given these
two facts, the remaining issue is the multiplicative depth M: if M is large then the cipher text
modulus Q of the BLLN scheme is larger than that of GHS, but the bit size of the cipher text is
less than for GHS.
And finally, I want to introduce my ongoing work, which is floating-point homomorphic
encryption. It was motivated by a project with Samsung: they want to compute a recommender
system over encrypted data. A recommender system recommends to users movies or those
kinds of things they may want to see. We thought that we first needed approximate
arithmetic, so we constructed the concept of floating-point homomorphic encryption. It
outputs an encryption of the most significant bits of the addition and multiplication of
messages, while the magnitude of the significand is reduced; the significand is the part of a
floating-point number that contains its significant digits. Our scheme supports more efficient
encrypted floating-point arithmetic than previous encryption schemes, which only support
modular operations. Moreover, our scheme can be applied to large-integer computation and
the multiplicative inverse. For example, with an 80-bit security parameter, our implementation
shows that our floating-point homomorphic encryption takes 1.8 seconds to compute an
encryption of a 15-bit-precision multiplicative inverse, given a 20-bit-precision number. This is
the end, and thank you for your attention.
>>: Questions?
>>: Is there any way [inaudible] like to the integer or maybe to the floating-point one or are
they like different domains completely?
>> Miran Kim: For bitwise encoding we use just Z sub 2 as the message space, but for integer
encoding we can use Z sub P for a large P. They have different plaintext spaces.
>>: If there would be some way to do this like using bootstrapping you wouldn't do
bootstrapping you would change the parameters. There might be some way to do that. It
would be very interesting.
>> Miran Kim: Yeah. As you know, bootstrapping requires much computation; as far as I know,
bootstrapping for the integer scheme requires six levels, so it's not a cheap operation. But in
my work I don't use bootstrapping; I just want to use the encryption to support a small
number of operations. So with my parameters, after evaluating we cannot compute anymore,
but we have the correct answer.
>>: You can do bootstrapping with only six levels? What parameters sizes?
>> Miran Kim: With 80-bit security parameters.
>>: But what size for Q and N?
>> Miran Kim: I don't know the exact parameters. It was just for the integer homomorphic
encryption scheme, not the [indiscernible] scheme. One of my friends is researching
bootstrapping and it's his work; he said it requires six levels for bootstrapping in the integer
scheme, but I don't know the
>>: It's more for this.
>>: What kind of techniques do you use in the floating-point thing?
>>: Can you go back one slide? Like the multiplicative inverse?
>> Miran Kim: Yeah. Multiplicative inverse, and we want to apply it to approximate
[indiscernible] series. In our scheme we cannot compute the exact value, but we can compute
approximately to our desired result, so
>>: How many levels does that take to do the, for example, the [inaudible]? Like degree 10 or
[inaudible]?
>> Miran Kim: I remember [indiscernible]. I'll talk with you later; I'll show you our results.
>>: I wanted to ask a question.
>>: [inaudible] we require the runtimes in seconds or tens of seconds. How much would you
benefit from parallelization? So [inaudible] or the cost [inaudible]. Can you dial it down to
milliseconds? Can you do that?
>> Miran Kim: So you mean what is the advantage for the parallel computation?
>>: How far can we optimize this? What’s the potential there?
>> Miran Kim: As far as I know, the total time is the same, but before encryption we encode
several data items into the plaintext slots of one message. Let me show my slide.
>>: So there's like several steps in doing this
>>: [inaudible].
>> Miran Kim: It depends on how many we embed in the slots. The amortized time is the total
time divided by the number of slots, and we compute the same operations over many
messages.
>>: So your implementation is not [inaudible] based or just [inaudible] based?
>> Miran Kim: It's [indiscernible]-based.
>>: [inaudible]?
>> Miran Kim: I heard that it is more correct to call this the batching technique.
>>: I mean it's a form of data [inaudible]. This is interesting [inaudible] in that this
parallelization is happening inside the algorithm; it's not that [inaudible].
>>: [inaudible]. There are ways to push this beyond [inaudible]. Like he was talking about
[inaudible].
>>: I think there are additional levels of parallelization that are not happening here. One is,
if you look even at the lower level of the [inaudible] 15 multiplications, the parallelization is
really not being used there, and there's another level on top of this that could be
implemented, which is kind of expanding your capability by doing multiple CRT essentially on
the plaintext, and you can do that in parallel, so there are at least three levels of doing types
of parallelization.
>>: It sounds like [inaudible].
>>: Yeah. It really is.
>>: So I have a question. At a high level, a lot of what you're doing for the different
applications and tasks is examining these trade-offs between, really, the depth and the
cipher text size, because with the integer encoding you reduce the cipher text size so
dramatically, but for multiplication, comparison and equality checking you're increasing the
depth, so there's this kind of balancing act that you're doing for different applications, which
also affects the parallelization possibilities as well. But it's still an open question, kind of
optimizing for any given task: here's a task, optimize this encoding choice and these
trade-offs. So I just wonder what you think about that. Would you have any ideas for a kind
of general optimization scheme?
>> Miran Kim: In my case, if I need many additions then I usually take the integer encoding,
because addition is free; but if I don't need the addition operation then I usually choose the
bitwise encoding. So I think it depends on what you evaluate, but someday perhaps we can do
that, and we can consider better circuits for equality and comparison.
>>: Do you think like the type [inaudible] you use could be automated, then? Like you showed
us several techniques for how to improve these assorted algorithms. Do you think that logic
that you obviously did by hand could be put into a compiler or something as a general
[inaudible]?
>> Miran Kim: It's not.
>>: I don't blame you. It looks tough.
>>: At least partially be able.
>> Miran Kim: Could you say that again? I’m sorry.
>>: [inaudible] compilers that encode those making this process a little bit more [inaudible].
But as long as you can discover strategies [inaudible].
>> Miran Kim: Any other questions?