>> Amy Draves: Thank you for coming. My name is Amy Draves and I am here to introduce
John MacCormick who is joining us as part of the Microsoft Research Visiting Speaker Series.
John is here today to discuss his book Nine Algorithms That Changed the Future: The Ingenious
Ideas That Drive Today's Computers. John has a PhD in computer vision from Oxford and is currently
a professor at Dickinson College. His work spans several subfields of computer science
including computer vision, large-scale distributed systems and education. Please join me in
giving him a very warm welcome.
[applause]
>> John MacCormick: Thank you very much indeed Amy and Microsoft for the invitation to be
here. It's actually a real privilege and pleasure to be here. I worked for Microsoft for four years
I guess, 2003 to 2007 in the Silicon Valley lab of Microsoft research, but it's been some years
since I've been back in Redmond and it's really, as I say, a privilege and a pleasure to be here.
You would have seen the title of the book and you probably have guessed that I'm going to talk
a little bit about algorithms, but that presents a little bit of a problem for this audience because
I'm guessing that a significant fraction of the people in the room know more about algorithms
than I do. I don't propose to explain a bunch of algorithms to you. I'm actually kind of
interested in just understanding the audience a little bit. I understand there are people online
as well, but the people in the room, how many people consider themselves to be a computer
scientist? Raise your hand if you consider that. How many people have at least some formal
training in computer science like an undergraduate major or maybe a PhD or whatever? Right.
Okay. As I feared, you all understand algorithms probably better than I do, but what I want to
do instead is ask, and offer some answers to, some major questions about algorithms. The three
things that I want to address today are related to the book, and to how we can communicate the
ideas about algorithms to the general public. That's the basic idea of the book, by the way: to
take nine different types of algorithms and try to explain them, without assuming any background
in math or computer science, to anyone who is interested in these ideas but doesn't have such a
background. So the three questions I want to address are, number
one, how can we even define what an algorithm is to say a general computer user, a member of
the general population. Secondly, what is the role that algorithms play in our society today?
How do they affect people out there in the world? And finally, why might it be a good idea for
people in general, not just computer scientists, but a significant fraction of the population to
understand something about algorithms and computational thinking? Why might it be a good
idea for that to be the case? Those are the three questions that I want to cover. Let me attack
the first one obliquely, about defining algorithms for the general public. You would've noticed
that in the title of the book we have this word algorithms and there is a somewhat interesting
story about that because when I had finished about half of the manuscript for this book, I
managed to get a contract with Princeton University Press to publish it, and I drove over to
Princeton to chat with the editor and get some advice on how to finish the manuscript and she
was enthusiastic. She set up a meeting for me with the director of marketing of the press and
this was a few years ago now, so I don't remember much about that meeting, but one thing is
ingrained in my brain which was that the director told me under no circumstances was I to use
the word algorithm in the title or the subtitle of the book. That was a scary word, a technical
word and it was going to cut the sales in half and just no, no, no, no. So I thought, okay fine, I
went back and finished the manuscript eventually, sent it back to them, came up with some
nice warm fuzzy titles. No, that's not what I want to talk about yet. Let me see. Oh yeah, yeah,
here we go. So this is actually the proof that I sent to them on October 31, 2007. Look at this
nice friendly title With Genius at Your Fingertips. But they came back to me with this nine
algorithms thing and I was like, what's going on? So I did a little bit of informal market research
myself just testing with friends and family and it actually seemed to work. People seem to like
this. They weren't put off by the word algorithms. In addition, I met again with this director
person and he told me oh, yeah, everyone knows what an algorithm is. This is great. It's going
to make it sound sciencey and interesting and exciting [laughter] and, you know, people are
going to be right into it. So, great. What's going on? It turns out that there was actually a
genuine shift; the director was probably right on both occasions. There has actually been a
flowering of the use of the word algorithm in our culture just over the last few years,
and one way that we can sort of cheaply and easily verify that is to go to the New York Times
website and just do a search for the word algorithm. If you do everything since 1851, you get
over 10,000, which I think is just the maximum that they return, but restricting just to the past
30 days you see here we get about 38 results. That suggests to me that every day if you read
the entire New York Times cover to cover you are probably going to see this word algorithm
and they're talking about something for some reason. So that word is out there now and
people are seeing it and people therefore, have some idea what it means even if they may not
know precisely. Another sort of anecdotal thing I like to say about this relates to the movie The
Social Network. How many people have seen that movie? It's a supposedly true story about the
founding of Facebook. Near the start of that movie, the Mark Zuckerberg character, who would
later drop out of Harvard and found Facebook, is having a late-night hacking session in his
dorm room, and his friend Eduardo comes in, and Zuckerberg turns to him and says, Eduardo, I
need the algorithm that you used to rank chess players. Eduardo is sort of like, uh, especially
when he finds out that Zuckerberg wants this algorithm for a slightly unsavory purpose. But
Zuckerberg is saying, I need the algorithm; I need the algorithm. This may or
may not be exactly your idea of exciting cinema, but to me I'm very excited about that scene
and the reason is that millions of people saw that movie and for many of them this may be the
first time that they really saw the word algorithm used in context and got some idea of what it
means. I mean, a couple of moments later we see Eduardo and Mark writing a bunch of math
formulas on the window of the dorm room and hacking away, and so people get some idea that
it has something to do with math and computers and getting things ranked or something like
that. Okay so, we have this flowering of the word algorithm and so hopefully many people
have been exposed to the word. But how can we take a step further and explain to them what
it really means? What really is an algorithm and how can we explain it to somebody who
doesn't have a background in math or computers or programming? I'm a little interested, just
stepping back into the world of technical computer science for a moment, I mean, would anyone
in the audience be willing to volunteer a technical definition? What do the computer scientists
consider to be a definition for an algorithm? Do I have any willing volunteers or is this going to
be like one of my undergraduate classes where [laughter] I have to victimize someone? No
way? Stony silence, looking down, okay. Don't worry. I'm used to that. [laughter] So here we
go; this is a commonly used textbook by Peter Linz. It defines an algorithm for a function as a
Turing machine which, given any input on its tape, eventually halts with the correct answer;
that is, it computes the value of that function for that input. So this is a pretty formalized definition
with a bunch of fancy symbols and stuff as well. It doesn't have to be that extreme. If we look
at other textbooks on automata and language theory we see some somewhat more informal
definitions but still meaning Turing machines; Hopcroft and Ullman, Lewis and Papadimitriou
are here. Then if we take a step up to CLRS, the sort of algorithms Bible here, right at the start
of the book they give this nice informal definition as a well-defined computational procedure
that takes an input and produces an output. But I would argue that all of these are not very
useful for explaining the concept of algorithm to everyday computer users. Can we do better?
Well, one thing that's often proposed is to make an analogy with a recipe and cooking, and sure,
that's a sequence of steps that leads to a result, but to my mind this is a somewhat unhelpful
and perhaps even dangerous analogy. The problem is that cooking is rather imprecise; you can
vary what you do. And there's no real concept of a variable input that produces a different
output: you hope to bake the same cake every time. Except that there is randomness as well,
which we don't really want to get into in our definition. So can we do better? This is
something, obviously, that I've thought about somewhat
and I try to address in the book. One thing that I realized is that actually everyone knows a
bunch of algorithms. We learn them in grade school. One of the very first algorithms that we
learned is addition. If I pull someone off the street and ask them to add two 5-digit numbers
(not in their head; they are allowed to use pencil and paper), and then I ask them, well, how do
you add two 5-digit numbers, they're going to be able to do it. They know a bunch of steps in
their head that they can follow that is guaranteed to terminate and produce the correct
answer. And when you explain this algorithm, it's actually pretty sophisticated. You write the
numbers down so that the digits line up. You start at the rightmost column and add the two digits
there. If the result is 10 or more, you put a one up in the next column and then put the
second digit down below. If it is less than 10, you just put the digit down there. Now move to
the next column and add those numbers, and so on. This actually has all the classic elements of,
you know, what we as computer scientists consider an algorithm to be. We have this notion of
iteration: do this until you get to the last column. We have this notion of if statements: if the
answer is 10 or more, do this; otherwise, do that. So I think this really captures a lot of the
important aspects of an algorithm and its computational nature, and moreover, everyone
knows this, right? As soon as you present that example, hopefully they can latch onto it, and of
course there are a bunch of others, multiplication, long division and so on, but again, everyone learns them
in grade school. These algorithms are out there. The one disadvantage of that approach is that
it could possibly give the misleading impression that algorithms are about arithmetic, but at
least we have this notion of a very formal rigorous computational procedure which I quite like.
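To make that concrete, here is that same column-by-column procedure as a short Python sketch. It is purely illustrative, and the function name is my own; Python's built-in addition does this for us already.

```python
# A sketch of the grade-school addition algorithm described above:
# line up the digits, work right to left, and carry when a column
# sums to 10 or more. (Illustrative only.)

def add_by_columns(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings."""
    a, b = a.zfill(len(b)), b.zfill(len(a))       # line the digits up
    carry = 0
    result = []
    for da, db in zip(reversed(a), reversed(b)):  # rightmost column first
        column = int(da) + int(db) + carry
        if column >= 10:                          # 10 or more: carry a one
            carry, column = 1, column - 10
        else:
            carry = 0
        result.append(str(column))
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_by_columns("90125", "10538"))           # 100663
```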
Another thing that one can do in trying to explain algorithms to a general computer user is to
tease apart the notion of algorithms on the one hand and computers on the other. One piece
of evidence for this is that algorithms have existed for millennia, far longer than computers have.
We know that the Babylonians in ancient times had algorithms for solving equations. We know
that the Greek mathematician Euclid wrote down an algorithm for computing the greatest
common divisor of two numbers, and that algorithm, by the way, is over 2,000 years old, but it's
modern in the sense that we use it all the time. Probably the cell phones in this room have
actually run that Euclidean algorithm dozens of times since I started talking, because it is a
component of certain cryptographic protocols that phones run. So here's an example of something
that's over 2,000 years old, an algorithm that is still relevant today in our computational devices:
tablets, cell phones, laptops and so on. That's my answer to the first question: how can we
explain what an algorithm is to a general computer user who does not have a technical
background?
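Since Euclid's algorithm is so compact, it is worth seeing whole. Here is a minimal sketch in Python, my own rendering of the classic procedure: repeatedly replace the pair of numbers with the smaller one and the remainder.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

print(gcd(3668, 504))     # 28
```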
My second question was: what is the role of algorithms in our society? I don't propose to give a
deep or profound answer to this. Rather, I just want to give a laundry list of algorithms that I
believe affect computer users every day, not just computer scientists, but everyone, and this is
going to tie in pretty closely to the book, because the algorithms I'm going to talk about are the
nine types of algorithms that give the book its title. As I was careful to state
in the introduction to the book by the way, there is some poetic license in the title. The book
talks about many more than exactly 9 algorithms. It talks about 9 different classes of
algorithms for solving 9 different types of problems. So the things that it talks about are the
things that I believe are examples of things that affect ordinary computer users. Number one,
web searching is a huge one that just burst onto the scene in the last 15 years and we really
can't imagine the world without it now, so there are actually two different chapters on that.
One of them covers indexing algorithms that are used in search engines like Bing and Google
and the other one is PageRank, this famous algorithm that many people believe helped launch
Google to the forefront of the search engine industry back in the late ‘90s. And then what else
have we got? Public-key cryptography, the internet would be a very different place without
these algorithms that were invented back in the 1970s, but now we just can't do anything that
relies on security on the internet without this public-key crypto. Error correcting codes. We
can't live without those. Transmitting wireless signals just wouldn't work without error
correction. Compression algorithms, we depend on those for the efficiency of the transmission
and storage that we do. What else? Pattern recognition, so here this is a personal bias since
that's really where my original research interests lay in grad school, so I actually talked about
three different algorithms there, decision trees, neural networks and just nearest neighbor
classification. And again, that was a little bit of a stretch at the time I was first formulating
the contents of the book: is it really true that ordinary computer users employ those
pattern recognition algorithms every day? But I had a stroke of good fortune there, because I
think now it is much more common for ordinary computer users to use these various automated
type-in correction systems for on-screen keyboards, for instance. They're using pattern
recognition to guess the best word that you are trying to type, and handwriting recognition is
used much more commonly by people in everyday situations now than it was in the past.
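To give a flavor of how simple the core of such a guesser can be, here is a toy nearest-neighbor sketch. It is my own illustration, not the algorithm any particular keyboard uses; real systems rely on far more sophisticated statistical models.

```python
# A toy nearest-neighbor "type-in correction": classify the typed string
# as the vocabulary word nearest to it, here using Levenshtein (edit)
# distance as the measure of nearness.

def edit_distance(s: str, t: str) -> int:
    """Minimum number of single-character insertions/deletions/substitutions."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # delete from s
                           cur[j - 1] + 1,              # insert into s
                           prev[j - 1] + (cs != ct)))   # substitute
        prev = cur
    return prev[-1]

def nearest_word(typed: str, vocabulary: list[str]) -> str:
    """Return the vocabulary word closest to what was typed."""
    return min(vocabulary, key=lambda w: edit_distance(typed, w))

print(nearest_word("algorthm", ["algorithm", "logarithm", "rhythm"]))  # algorithm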
Then there are also database algorithms. This is kind of an unsexy topic, but one that I believe
is very important and really affects our society. So I actually chose to talk about a few
different things there: write-ahead logging, two-phase commit for replicated databases, and the
fundamental ideas behind the relational model. Computer users are, of course, not aware that
any of this is happening, but every transaction you perform on the internet is relying on all of
these ideas in order to give us the rock-solid reliability and
dependability that we've just gotten totally used to in our interactions with online services.
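To hint at the write-ahead logging idea behind that reliability, here is a toy sketch of my own, nothing like a production implementation. The rule: durably record what you are about to do before touching the data, so that after a crash you can finish any transaction interrupted partway through.

```python
# Toy write-ahead log. Logging the *target* balances makes replay
# idempotent, which is what makes crash recovery safe. (Assumes one
# outstanding transaction per account pair, for simplicity.)

log = []                                   # stands in for a durable on-disk log
accounts = {"checking": 100, "savings": 0}

def transfer(src, dst, amount):
    new_src, new_dst = accounts[src] - amount, accounts[dst] + amount
    log.append(("intend", src, new_src, dst, new_dst))  # 1. log the intention
    accounts[src] = new_src                # 2. only then change the data;
    accounts[dst] = new_dst                #    a crash here leaves a clue
    log.append(("done", src, dst))         # 3. record completion

def recover():
    """Replay every intention that never got its matching 'done' record."""
    finished = {(r[1], r[2]) for r in log if r[0] == "done"}
    for r in log:
        if r[0] == "intend" and (r[1], r[3]) not in finished:
            _, src, new_src, dst, new_dst = r
            accounts[src], accounts[dst] = new_src, new_dst  # safe to redo

transfer("checking", "savings", 30)
recover()                                  # no-op here; nothing unfinished
print(accounts)                            # {'checking': 70, 'savings': 30}
```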
And then also digital signatures; this is a great one. Again, many aspects of the internet
wouldn't work without them, so the book gives a fully fledged description of RSA digital signatures.
After all of these sort of whizbang chapters saying how great computers are and aren't they
wonderful, I try to temper that a little bit with a final chapter about undecidability, but trying to
address it very concretely, so the basic idea is to explain why it is that no computer program
could ever be written that can analyze other programs and find all of the bugs in them. This is a
problem that computer scientists refer to as undecidable, and this was first proved by Church
and Turing back in the 1930s.
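The heart of that impossibility argument fits in a few lines. Here is a sketch of the classic contradiction in Python-flavored pseudocode, specialized to halting; both function names are hypothetical, and `halts` is assumed to exist only for the sake of contradiction.

```python
# Sketch of the Church/Turing impossibility argument.

def halts(program, data):
    """Hypothetical perfect analyzer: True iff program(data) would halt."""
    ...   # cannot actually be written; that is the point of the proof

def troublemaker(program):
    # Do the opposite of whatever `halts` predicts about running a
    # program on its own source code.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    else:
        return            # predicted to loop forever, so halt at once

# Does troublemaker(troublemaker) halt? If `halts` says yes, it loops; if
# `halts` says no, it halts. Either way `halts` gave a wrong answer, so no
# such analyzer can exist -- and a program that finds all bugs in all
# programs would let us build one, so it cannot exist either.
```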
Let me just check to see if I missed anything. This is the table of contents. I think I actually hit
them all, so it is 9 classes of algorithms: if you take 11 and subtract 2 for the introduction and
the conclusion you get 9, so
that's where that comes from. So like I was saying, this last chapter on what is computable has
a link with Alan Turing and since it's Turing's hundredth birthday this year I'm going to indulge
in a slight digression on Turing. Turing did an undergraduate degree at King's College, Cambridge,
and later became a fellow there, and I lived in Cambridge for a while in the early 1990s. One of
the things you discover when you get there -- well, I'm sure you know the English are sort of
masters of eccentricity and one of the things is that when you get there they have all sorts of
funny words for things that seem to be specific to Cambridge. For instance, does anyone know
what an undergraduate student who is majoring in computer science is called in Cambridge?
Anyone know that? No. A compski, as in compsci but with the sci rendered as ski for no apparent
reason. Same with people who do natural sciences such as physics, chemistry, et cetera: they are
natskis. Mathematicians are mathmos. That's just a little sampling of all these funny words that
you start getting exposed to. So I had a friend who was an undergraduate just starting and she
was at King's College, Cambridge, which is the college where Turing was. Unsurprisingly, the
student computer room at that college is named in his honor; it's called the Turing Room, but
my friend had never heard of Alan Turing so the first time she went down to the computer
room and she saw the sign on the door, Turing Room and she thought oh, well, this must be
another one of these funny Cambridge words that they have. There must be a verb, to ture,
which means to use a computer, and of course when you go into the computer room and start
using the computer you are turing, and hence the Turing Room. So I don't think she was
psychologically scarred by that mistake. She did go on to get a PhD in linguistics [laughter].
Anyway, that's my Turing digression to celebrate his hundredth birthday. Turing's influence is
phenomenal, and even in a relatively light book like this he came up in a surprising number of
places. I wasn't trying to write a book championing Alan Turing or anything like that, but he
just naturally comes up in a few places.
That's my way of answering the second question which is what is the role of algorithms in our
society? I really think that they are pervasive now. They affect everyone, not just computer
scientists and I've answered it just by giving a laundry list of examples. You might wonder why I
didn't include things like sorting algorithms, hashing, some of these classic algorithms which are
the first thing that you learn in the computer science curriculum as an undergraduate. Well,
it's not because I don't think that those are important or interesting algorithms, far from it, but I
just wanted to focus on things that actually solve a concrete, specific problem that an
everyday computer user could really understand and say, yes, this interesting algorithm solves
that problem. Whereas something like a sorting algorithm is used in almost any large
computer program; any large piece of software is going to have a sorting routine at some point,
and so that's why I chose not to focus on that kind of thing. I guess that brings me to my third
question. Actually, before I do that, let me just give you one other hint of the content of the
book. I'd like to do a demo in which I transmit my credit card number to my host Deborah
sitting at the back there. Hi, Deborah. I assume that you know your own cell phone number, is
that correct? Yes. Good. Okay. Don't worry; this isn't going to be scary. I just want you to
focus on the last two digits of your cell phone number. So get those in your mind. Then what
I'm going to do is I'm pretending that I'm a computer and Deborah is a computer and I am going
to transmit my credit card number to her over the internet modeled by the sound traveling
through the room and everyone else sitting around here, we can think of you as malicious
hackers that have hacked into one of the routers maybe on the internet or maybe you have
hacked into my computer. Anyway you can see every single thing that's transmitted between
us and, you know, obviously in this analogy you can hear everything I am saying to Deborah. So
this is what I'm going to do. To keep things simple I'm just going to use a two-digit credit card
number. Okay. I'm going to take that two-digit credit card number, add it to the last two digits
of Deborah's phone number, and the result I'm going to get is 65. So I take that 65 and transmit
it over the internet to Deborah. No one out there knows my credit card number unless you happen to
know Deborah’s cell phone number. I know that that's a slight hole in this otherwise brilliantly
secure scheme. So Deborah without wanting to put you on the spot, are you now in a position
to compute my credit card number? Would you know how to do it?
>> Deborah: No.
>> John MacCormick: No, okay. So I took her cell phone number, two digits. I added my two
digit credit card number and that gave me a total, so to get back the credit card number I would
just subtract off the cell phone number. So have you computed it? What did you get? Okay.
Let's see if you were right. [laughter] Hm, well done. [laughter] I don't know if that shows up on
the cameras there at home, but this is a 13, and I do not have 100 different credit cards in here
with 100 different possible answers. That was the correct number. So I know especially for this
audience that that is a totally cheesy, oversimplified demo of cryptography, with all sorts of
security problems as well as imperfections in the analogy with symmetric-key cryptography as
used on the internet. Nevertheless, the reason I wanted to do that demo is not to help you
understand crypto. Everyone in this room already has that. I just wanted you to see that this is
one of the simple ways that you can explain a very interesting and somewhat sophisticated
idea, symmetric-key crypto, to someone who has never encountered it before, without using
any mathematics or computer science background.
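As code, the whole demo is just addition and subtraction with a shared secret. Here is a minimal sketch; the secret 52 is inferred from the 13 and 65 in the demo, and the modulo-100 wrap-around is my own small generalization.

```python
# The demo as code: sender and receiver share a secret (the last two
# digits of Deborah's phone number) and simply add or subtract it.
# Totally insecure, but it conveys the idea of a shared-secret
# (symmetric) cipher.

SHARED_SECRET = 52              # known only to the two communicating parties

def encrypt(card_digits):
    return (card_digits + SHARED_SECRET) % 100  # all that eavesdroppers see

def decrypt(transmitted):
    return (transmitted - SHARED_SECRET) % 100  # receiver undoes the addition

sent = encrypt(13)              # the two-digit "credit card number"
print(sent, decrypt(sent))      # 65 13
```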
The chapter on public-key crypto takes that very simple example as a starting point and then
builds on it with more and more complex examples until it gets to a fully fledged description,
with no oversimplifications, of the Diffie-Hellman key exchange protocol. That, of course,
addresses exactly what was missing in the demo: Deborah and I already had a shared secret, her
cell phone number, whereas on the internet, when you contact a computer for the first time,
you don't have any established shared secret, and that's the thing that public-key crypto really
solves for us.
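For the curious, the Diffie-Hellman idea can also be sketched in a few lines with toy numbers. These parameters are made up and far too small to be secure; real deployments use numbers hundreds of digits long.

```python
# Toy Diffie-Hellman key exchange. Alice and Bob agree publicly on a
# prime p and a base g, each picks a private number, and they exchange
# only g^private mod p. Both then compute the same shared secret
# without ever transmitting it.

p, g = 941, 627                           # public parameters (toy-sized)

alice_private = 347                       # never transmitted
bob_private = 781                         # never transmitted

alice_public = pow(g, alice_private, p)   # sent over the open internet
bob_public = pow(g, bob_private, p)       # sent over the open internet

alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

assert alice_shared == bob_shared         # same secret on both ends
print(alice_shared)
```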
Now let me move on to my third question: why might it be a good idea for the general public to
have some understanding of algorithmic and computational thinking? One of the objectives of
the book is to try to spread these ideas, and an understanding of them, to a general audience.
Why might that be a good idea? I have two different answers
for this. The first one relies on our notion of informed citizens and what it takes to be a good
citizen in our society. There's a pretty strong consensus that it is a good idea for citizens in our
society to have knowledge in various different areas. Many different colleges, for instance,
require you to attain competence in a foreign language and to study one or two courses in the
physical sciences often with a lab science requirement. Often there will be an American culture
requirement, a requirement to study literature or something in the arts and humanities. And
the philosophy behind this is to have informed citizens who can make our society better. I
would argue that to this set of things that it is a good idea to know about (not required, of
course, but a good idea), we should add algorithmic and computational thinking. I
really think now that our society has gotten to that point where we are all affected by these
kinds of ideas and it would be really good if people in general just had some exposure to that
and some understanding of that, especially people like political representatives and business
leaders and so on, but also just everyone out there. I mean, I really do believe that if you
accept at all this principle that a broad education helps to create a good society, then I think
you also need to buy into the idea that algorithmic and computational thinking belongs in that
broad education. I'm not specifically pushing the idea that everyone should learn how to code,
by the way, or that we need more software developers, so let's make sure we get into high
schools and teach coding. That is an excellent idea, but it's a slightly different objective from
the one I am talking about right now. I know Microsoft has been active in doing
this and, you know, that's an absolutely wonderful program. I saw it discussed in the New York
Times I think two or three weeks ago and that's just great. But I also want to push an even
broader notion of just understanding algorithmic computational thinking whether or not one is
learning to program. So that's my first answer to why it might be a good idea for the general
population to understand algorithms. My second one is an even more general argument and I
normally explain this using an analogy or metaphor to do with the night sky. On a clear night
when the stars are all out one thing that I like to do and I'm sure many people in this room like
to do is just gaze up at the night sky and enjoy it. It just fills you with wonder and awe and
almost a sense of excitement, just to look at it. But it turns out I actually know almost nothing
about astronomy. I wish I knew more about it; I just don't. I haven't prioritized it. I haven't
made time to learn about astronomy, but I bet you that if I did know even just a little bit of
astronomy, my enjoyment and wonder and awe of looking at the night sky would be greatly
increased. And I think the same thing is true of our use of computers. I mean computers and
cell phones and tablets et cetera are amazing devices. It's exciting to use them. It's really cool,
but if you just understand a little bit about the algorithms, and the types of computational
thinking that go into them, I believe that amplifies your sense of wonder and awe
and excitement at using those devices. And if I'm being honest, that's the real reason that I
wrote this book, just because I was hoping to transmit a little bit of that sense of wonder and
awe that I actually feel when I am using a computational device. I just want to say thanks again
to Microsoft for hosting me. I really am delighted to be here and thank you to everyone for
coming and thanks for your attention.
[applause]
>> John MacCormick: So I guess we have time for questions now, right? Okay. Fire away.
>>: Since you had a crypto example, I couldn't help noticing that you have a chapter on public-key
cryptography and, far apart from that, a separate chapter on digital signatures. It seems like an odd
structure, since digital signatures are part of public-key cryptography.
>> John MacCormick: Yes, it's partly because I didn't want to cover material that is too similar
in consecutive chapters. If this was a textbook, you would put them together because it would
help the student to build on that, but I was just aiming for variety here. It's meant to be an
enjoyable read, so it's not optimized for perfect understanding. On the other hand, it turns out
that because of the particular aspects I chose to address in these chapters, they are also not as
overlapping as you might think, and so the public-key crypto chapter does Diffie-Hellman, not RSA. I
know they are similar ideas, but this one actually explains RSA digital signatures, and there are some
quite different aspects as to how to set these up. On the other hand, I also explicitly
acknowledge the similarities; once we've gotten far enough into the chapter, I say, by the way, we've
seen a few of these things before. Then we start with a simple example and move towards this
one and that one.
>>: Do you know which chapters are most popular?
>> John MacCormick: The most popular chapters?
>>: Yes.
>> John MacCormick: There's a way to get that off of Amazon but I haven't done it for a while
and I don't remember what happened last time I did. I don't think I will try to do it live, but it's
a good question. I should go back and check that out. Any other questions? Yeah?
>>: I noticed that the algorithms [inaudible] either focus on the human-machine
interface [inaudible] searching for interfaces dealing with the encryption and decryption of
information [inaudible] or just access to information. I don't notice a lot of hardware-level
things that are not at the human-machine interface level, nor web search engine [inaudible].
>> John MacCormick: Do you have any examples in mind?
>>: Like switching for the lower levels of the network. Algorithms that deal with routing of
data at the machine level and not at the user interface.
>> John MacCormick: Yeah. I don't want to feel defensive about this. I just picked -- the
bottom line is I just picked algorithms that I think are really exciting and people should know
about. I mean, that's what really happened, but I did try to justify it in a few ways. I already
talked about why I didn't choose these classic computer science curriculum things like sorting,
hashing and so on. Also, I wanted to really emphasize theoretical concepts because one of the
things I get -- when I teach an introductory computer science class, I always ask the students
what is computer science? What do you think a computer scientist does? These are people
who are not going to major in computer science, by the way. Probably half of them are there
because my college requires them to do it, to get a lab science credit and computer science
counts as a lab science for those purposes. So I ask them and one of the common answers is
something to do with programming and another common answer is something to do with
hardware. So I really wanted to choose algorithms that would demonstrate the ideas. Programming
in particular is an essential prerequisite for talking about computer science accurately and for
implementing and experimenting with it, but it's not in itself the set of ideas that computer scientists
work with. The ideas are these theoretical algorithms, and I tried to choose ones that are
examples of that. Now, it may be that the networking algorithms that you just gave an example
of fit into that and I'm not going to try to defend why that particular thing isn't in there. But
another thing is this just reflects my personal biases of my career. I mean I worked in a
distributed systems lab for a few years and I think that's definitely biased what I happen to
think of as interesting and important.
>>: So it seems, to stretch the third question, that you were talking about the second
question, which is what the effect is on society and people. These [inaudible] probably would
have the most impact, the ones dealing with the hardware, which are really abstracted away, in a way
invisible [inaudible].
>> John MacCormick: Yeah, for instance, error correcting codes are totally invisible to end
users, but as I tried to explain, our computers actually wouldn't work without them. I mean,
every time you store data on a hard drive or something like that, it just doesn't work without
the error correction and the same with transmitting data over a connection with any kind of
unreliability like a wireless connection. So, you know, general users are completely oblivious of
that and yet I think number one, this is a really interesting thing. I mean it's just so cool when
you first learn about error correcting codes and it's almost magical. But yeah, number two, the
users are oblivious of it. So I think, yeah, low-level networking algorithms probably
would be a candidate to go in here. Yes?
>>: In terms of things that users are oblivious to, [inaudible] what that is, but [inaudible]
databases, is the average user usually oblivious to that?
>> John MacCormick: I think they are oblivious to the very important ideas that were built up in
the database community over the decades. I've given it this sort of grandiose subtitle, The
Quest for Consistency, but there's something to that. I know that sometimes you
try to buy a book or something online and it hangs, and you wonder, did that purchase go
through or not? So things aren't perfect, but actually databases are amazingly rock solid. I
mean, people have really worked hard to make sure that transactions occur exactly once, and
that we have atomicity, so that if I withdraw money from this bank account, it's not going to disappear;
it will appear in that one. The money will be somewhere. These are very important properties that
we really depend on, I mean again, if things like that didn't work there are lots of things about
the way society functions now that just couldn't happen, certainly not with anything like the
efficiency and ease that they do. I would argue that general computer users are not aware of
that. I mean, it just seems normal to us. My online banking where I can see my balance and
that transfer happened just fine, even though my browser crashed. When I go back and look at
it again later all of my accounts are in a consistent state. None of my money has disappeared.
So I think there is a bit of magic there and the specific things that I chose to talk about in that
chapter, again, are somewhat influenced by personal biases, but they are definitely not things
that the general population would be aware of: write-ahead logging, two-phase commit, and
very elementary aspects of the relational model. Yes?
>>: So we see in your book some algorithms that people may not be aware of, but that are
very, very useful and in widespread use. How about algorithms that you'd like to see in
widespread use, that may be useful, but that quite a few people haven't caught onto yet? Any
thoughts in mind?
>> John MacCormick: So the question was, are there algorithms that I wish had caught on more
but aren't used, whether or not people know about them, and the heckle from somebody else
was probability, right? There's nothing really that I wish was being used more. It's a market of
ideas, right, and these are a selection of the things. I don't claim they are all of the things that are
important, but a selection of things that I think are really important and interesting, and it would
be great if more people out there knew about them. Probability, yes, very important; it is also
among the more challenging things to explain in a nontrivial way. Some of the more trivial aspects,
yes, but getting at the heart of it is a little bit tricky. The pattern recognition chapter does
mention probability in a couple of places, but I'll admit that I was a little bit light on that, and it
would be interesting to try to give a popular yet meaningful account that goes a little bit deeper
into some of those probabilistic ideas.
>>: What would you say is the age level you are targeting with your writing? Is this
the kind of thing you would like to see, for instance, in a high school course?
>> John MacCormick: When I wrote the book, my target audience that I had in mind was sort
of late teen up to and including adult. I mean, I sort of thought of the books that inspired me
when I was young. It would be totally hubristic to compare this to [inaudible] or something like
that, but it's that sort of target audience that I had in mind and so in particular, I'm not naïve
enough to think that when I say the general population, anyone at all would be interested in
picking this up and reading it. I had in mind people that sometimes buy science magazines or
sometimes read the science section of the New York Times or some other paper. You know,
they have some interest in science or technology and may have bought other popular math
physics books. Popular computer science books, it turns out, are really not much of a
category; it's actually surprisingly thin. That's what I had in mind, and I didn't really think high
school, but I've since been told that a lot of people seem to consider this actually quite suitable
for use with high school kids. I've had a few people already adopting it as a college book, not as
a textbook. It's definitely not designed as a textbook, but as perhaps one of several books used
in a first-year seminar that addresses something like technology and society. So it's
a range of audiences, but the short answer is: people who have some interest in science but no
background in computer science. Having said that, it seems that even very well
educated computer scientists with blue-chip PhDs actually seem to enjoy it. Among the
best reviews the book has had have been written by highly educated computer scientists,
so there seems to be some real interest; even if you understand what a digital
signature is, apparently computer scientists enjoy reading that chapter, for instance, to
see how I chose to explain it to someone who doesn't understand the basic math required to
really do it.
>>: To feed your hubris a little bit, look at the company you are in, the "you might also like" section
on Amazon. Pretty good company you're in. [laughter]
>> John MacCormick: Right. Yeah, it's nice to be in that company, but it doesn't imply
anything about sales. It implies [laughter] a correlation, not a similar level. But I agree, it's a
nice selection.
>>: This is kind of a putting-your-memory-on-the-spot question, so I apologize, but can you think
of any interesting misperceptions about computer science that you run into, maybe with your
students or…
>> John MacCormick: The main one is this complete misunderstanding of what computer
scientists do. I'm sure everyone in this room has experienced that, even with close friends,
family and parents, I mean, the classic party conversation is oh, computer science. Do you do
software or hardware? And how do you even answer that question? Because it betrays -- I don't
want to use a harsh word like ignorance -- it betrays a misperception of what computer
science is about. The answer is neither. I do programming; especially when I was a full-time
industrial researcher at Microsoft Research, sure, I wrote computer programs all the time, but
that's not an end in itself. So I would say that that is far and away the main misperception that
I've encountered. Misperceptions about algorithms, I have nothing springing to mind right now, but there
must be some good ones out there.
>>: For things that people think the computer can do or…
>> John MacCormick: Yeah, in my opinion, actually the general population is uncannily good at
somehow intuiting what a computer should be capable of doing. We tend to
anthropomorphize them. I mean, it's very commonplace now to say, oh, it's thinking, or it
thinks that 45 minutes is enough time to change terminals in Chicago, you know, and it's wrong. I'm
talking about it like it was thinking about it, but you'll also hear someone say oh, it should know
that. It should be able to take account of that, and they are right. That's a simple fact that can
be programmed into a computer. I mean this is totally anecdotal now. I haven't researched
this, but it seems that the general population does have a pretty good instinct for what
computers should be able to do, and I don't know how or why they have that.
>> Amy Draves: Let's again thank the speaker and thank you so much.
>> John MacCormick: Thank you.
[applause]