>> Kirsten Wiley: Good afternoon. My name is Kirsten Wiley. I'm here today to
introduce and welcome Robert Vamosi, who is visiting us as part of the Microsoft
Research Visiting Speakers Series. Robert is here today to discuss his book:
"When Gadgets Betray Us: The Dark Side of Our Infatuation With New
Technologies."
In an age where devices have unprecedented access to our personal data, we
trust that the information we use our gadgets for will remain private, when
actually we leave behind trails of personal information that others can exploit.
Robert Vamosi is an award-winning journalist and analyst who has been covering
digital security issues for more than a decade. He is a senior analyst for Mocana, a
hardware security start-up, a contributing editor at PCWorld, a blogger at
forbes.com and a former senior editor at CNET.
Please join me in welcoming Robert to Microsoft.
[applause]
>> Robert Vamosi: Thank you. Appreciate the turnout today. So at the very end
of Annie Hall, Woody Allen's character tells a joke. He's Alvy Singer in the
movie, and the joke is, he's going to a psychiatrist.
He says: Doctor, my brother thinks he's a chicken. And the doctor says: Well,
that's terrible. Have you turned him in? And the character responds: Well, no,
we need the eggs.
So I think that sets up what I'm about to talk about today. And that is: We love
our gadgets. We can't do without them. And I, too, have gadgets. But at some
level we also understand that there are trade-offs. There's always a trade-off
between convenience and security.
And we are increasingly entrusting our gadgets more and more. We're elevating
them to a higher level than maybe they deserve. Now, something like this
happened 20 years ago, this trade-off between convenience and security, with
software.
And we had this interesting back and forth with the user, where we had all of
these potential controls either up front, where it was confusing to the end user, or
we masked them and put them behind a bunch of walls and the user just clicked
the button and they were secure. So we went back and forth on software. I think
today we've made a lot of progress.
Twenty years on, we now know we need to update our software. We also know
that we have firewalls and antivirus protection. But hardware, hardware is
about 20 years behind where software is today. A lot of people are not thinking
about hardware, and I'm talking about embedded chips, as being in the same
place where we were with the PC 20 years ago.
So five years ago it didn't really matter if your TV or DVD player had a
vulnerability in it. It didn't really matter because chances are it wasn't really
connected to anything.
Today, our digital TVs reach out to the Internet and they stream Netflix and
Pandora, and all these great things, but how many times have you updated, have
you even thought about updating the firmware on your TV?
Probably not. It's not something that we're accustomed to doing. And that's just
a TV. There are other gadgets that we also need to update because they're
equally vulnerable.
So I talk about embedded systems in the book. And embedded systems are
different, because you don't have a layer of operating system, you don't
necessarily have applications. It's a chip. It's a chip that's running some
programming code. And oftentimes it's constrained by memory. It's constrained
by resources and it doesn't run a lot of code.
But that said, sometimes it can do powerful things, whether it's opening a garage
door or powering a pacemaker. And if it fails, again, it can be inconvenient, like
not letting you in your garage; or catastrophic, it could kill you.
So I'm just trying to open a dialogue and get people thinking about all the
gadgets that we have and all the things that those gadgets are secretly doing,
because we're looking at the bling bling, we're looking at what's up front. We're
wanting to play Angry Birds. We're not necessarily thinking of all the
interchanges and trade-offs that are going on in the back end.
Today, there are five times more gadgets connecting to the Internet than PCs.
That is why it is a concern. And if you think that security by obscurity is going to
play a role in this, I think you might be wrong.
DARPA, for example, has commissioned a survey. There's a study going on in
Southern California right now that is looking at all the IP addresses. And what
they're doing is they're looking for gadgets that connect to the Internet that are
not protected. They're looking for access.
And they're recording all of them. And this is very interesting, because if they
find an IP address for something inside a military base that is insecure, you've
got a vulnerability now.
So DARPA's very interested in finding those. A lot of those gadgets don't do
much. They're sensors. They're sensors in roads. They're traffic lights. They're
sensors. But a few of the others are given a lot more responsibility, and those
are the ones that we really should be protecting.
To give you another real world example of why this is important, we just had an
attack on the PlayStation. This is a gaming system of all things. And somebody
took the time and effort to attack these systems, and it caused Sony to pull the
whole network down while they reconfigure it. The danger to Sony is that
somebody could steal their resources.
They have a service that they offer through this gaming network; that is games,
also videos, also music, and somebody has decided that there's monetary value
there, that they wanted to go after it.
So what happened was Sony had security. They had security. The security they
had was good for the gaming community that was using that gaming console.
Somebody took it a level higher.
So the security they had wasn't robust enough to scale to that next level of
attack. They had a public and private key encryption for updating the firmware
on their gadgets.
That's usually good enough. The private key is never ever shared. It's baked
into the chip. The public key is out there. So when the update comes, the chip
says: I recognize it. Handshake done. Everything's taken care of. It's
authenticated. Great. Some kid reverse engineered the private key, made it
available out on the Internet.
Now anybody can make a firmware update for the gaming consoles. Again, this
is theft of services. There's monetary value attached. It was something that
somebody thought they could do.
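The firmware-signing scheme just described can be sketched in miniature. This is a toy model: real consoles use asymmetric (public/private key) signatures, while the sketch below uses a single secret key via HMAC purely to illustrate the trust model, and every name and value here is hypothetical.

```python
import hashlib
import hmac

# Toy model of signed firmware updates. Real gadgets use asymmetric
# signatures; a single HMAC key stands in here so the sketch needs no
# third-party crypto library. The trust model is the same: whoever
# holds the signing secret can produce updates the chip will accept.

VENDOR_KEY = b"secret-baked-into-the-chip"  # hypothetical signing key

def sign_firmware(signing_key: bytes, firmware: bytes) -> bytes:
    """Vendor signs an update with a key that is never meant to be shared."""
    return hmac.new(signing_key, firmware, hashlib.sha256).digest()

def chip_accepts(firmware: bytes, sig: bytes) -> bool:
    """The chip verifies the signature before installing the update."""
    expected = hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

official = b"official firmware v2.0"
official_ok = chip_accepts(official, sign_firmware(VENDOR_KEY, official))

# Once the signing key is reverse engineered and published, anyone can
# sign a homebrew image and the chip cannot tell the difference:
forged = b"homebrew firmware, premium services unlocked"
forged_ok = chip_accepts(forged, sign_firmware(VENDOR_KEY, forged))

print(official_ok, forged_ok)
```

Both checks pass, which is exactly the problem the talk describes: the scheme is only as strong as the secrecy of the signing key.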
But were we thinking of gaming consoles as something that would be attacked?
Probably not. So there's a theme in the book where I talk about the fact that after
ten years of looking at security, talking to security researchers here at Microsoft
and elsewhere, I've more or less become convinced that you never are
100 percent secure.
Security is always a moving target. Something is always changing, and you
always have to be adapting to that. What you can do, though, is try to put as
many obstacles between you and the bad guy.
So at the beginning of the book, I talk about locks, mechanical locks. Locks have
become important at hacking conferences now. There are lock-picking villages
where you can go into a room and there will be a table with various locks
arrayed, anything from the Master gym lock that you all probably have, or the
Schlage lock that locks your home, to the more expensive $100 models that
protect military facilities.
And there are people in the room who have tools who will help you pick those
locks. In most cases it's a fairly trivial task once you know what's going on
inside.
What's going on inside is that these mechanical locks, these metal locks, have
imperfections. Metals have impurities.
So in the creation of these locks, those impurities give way to vulnerabilities. And
if you know how to find and exploit those vulnerabilities, boom, the lock opens. Trivial
task. So lock picking has become a sport and a challenge at security
conferences. It's pretty valid. But you wouldn't necessarily think that a
conventional lock would be of interest to a computer hacker, but it is.
So I use that and I contrast it with what's happening now with cars,
automobiles. An automobile is probably the second most expensive thing that
we purchase in our lives, next to a home. And we had a key. And the key
wasn't very good, because they found ways to jimmy the lock and break in
and steal a car.
They could also hotwire the car and drive off with it. So the automobile
community came up with the vehicle immobilizer chip, which is that plastic part
at the top of the key. That chip needs to be there when you insert the key into
the ignition; it mates with something in the steering column saying, yes, this is the
correct key for this car. When that's there, it unlocks systems in the car and
allows you to drive that car as long as you want.
When you don't have that vehicle immobilizer chip there, what's called the valet
key, when you don't have it, you can drive it for a short distance and the car will
lock up. This immobilizer chip has been credited with a rapid drop in car thefts in
the United States.
I talk about that in the book. From 2000 to, I think, 2009, the latest reporting
year, every year there was a dropoff in auto thefts. So it was clearly doing
something good.
So parallel to that we had the key fob. And everybody loves the key fob. You go
into a parking lot, go beep, beep, your car lights up, doors pop open and you can
get inside. That's convenient, that's very nice. We took it a step further. Now it
doesn't just open the car doors, it also allows you to sit down in the driver's seat
and push a button.
By wirelessly communicating, it says you're supposed to be the driver, I'm going
to allow you to drive the car. That's the new vehicle immobilizer. Small problem.
The codes that they're using aren't very robust. In some cases they were 40 bit,
which is a little old. So for keyless entry, opening the car door, there was
KeeLoq, and the way the code rolled over and the way it was used across auto
brands was pretty trivial; it was something someone could predict.
Once it got out on the Internet someone could sit in a parking lot and capture an
exchange as somebody walked to their car and after they drove off, find another
car in the parking lot with the same model, same make, and use that same code
to figure out how to get into that other car. They wouldn't steal your car,
necessarily, they'd steal the other car in the parking lot. But, nonetheless, it was
crackable.
Extrapolate that further. We now have these little key fobs that allow you to get
in the car and also turn on the car and drive off.
In Europe, there are very expensive high-end cars that were doing this for a
couple of years before the United States had them. David Beckham is an
example; he drove a state-of-the-art car in Madrid, and he had it stolen. The
first car -- I'll admit -- he had stolen because his assistant didn't engage the
security system.
Can't have security if it's not engaged.
So that was an obvious one. The second one, he went to a shopping mall in the
middle of the day to have lunch with his sons. He came out, broad daylight, the
car gone.
The thing about it is they didn't smash the window. They didn't jimmy the lock;
they didn't hotwire the car. Chances are they used a laptop, because they knew
what they were doing, they knew the car that he drove and they knew what they
wanted. It could seat seven. The theory is they were using it to traffic people
across countries in Europe.
So they wanted the car intact. And they took it. So here you had a keyless
ignition car that was stolen out of a parking lot in broad daylight because Moore's
Law has caught up with everybody. A conventional laptop you can buy from Dell
or any other company is probably fast enough now to run these algorithms. And
the thing is, in the ideal world maybe there are a trillion codes out there.
Well, the manufacturers take a chunk. They don't take the whole trillion, they
take a chunk. And within that chunk it's possible to brute force and find the
codes that are there. They do something else. They do something called code
hopping, where the next code appears random, and so on; but you just need to
capture a couple of exchanges and you can figure out what that code-hopping
sequence is and you can predict what the next exchange is.
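The predictable code hopping being described can be illustrated with a toy generator. Everything here is hypothetical and vastly simpler than any real keyfob cipher: if the next code is just a fixed affine step of the previous one, capturing three consecutive exchanges is enough to recover the step and predict every code that follows.

```python
# Toy rolling-code generator: next = (A*code + C) mod 2**16.
# A and C are the "secret" parameters shared by fob and receiver
# (hypothetical values, chosen only for this sketch).
M = 2 ** 16
A, C = 37, 1235

def next_code(code: int) -> int:
    return (A * code + C) % M

# The fob steps through the sequence as the owner locks and unlocks the car.
codes = [0x3A7]
for _ in range(4):
    codes.append(next_code(codes[-1]))

# An eavesdropper captures three consecutive codes in the parking lot...
c0, c1, c2 = codes[0], codes[1], codes[2]

# ...and solves for the generator: c2 - c1 = a*(c1 - c0)  (mod M).
d10 = (c1 - c0) % M
d21 = (c2 - c1) % M
a = (d21 * pow(d10, -1, M)) % M   # d10 is odd here, so invertible mod 2**16
c = (c1 - a * c0) % M

predicted = (a * c2 + c) % M      # the attacker's guess at the fourth code
print(predicted == codes[3])      # True: the "random" hop is predictable
```

Real schemes use nonlinear ciphers precisely to prevent this kind of recovery; the point of the sketch is only that a weak sequence plus a couple of captured exchanges is fatal.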
You can also do a replay attack. There are a variety of ways you can execute
this. And I call it out in the book because, like I said, we all think of the lock that
is protecting our home, that's protecting our gym locker, as being secure. We
shouldn't. What we should be doing is we should be layering our security. Just
because you have a metal lock on your door doesn't necessarily mean that your
house won't get broken into.
What about the windows? There are other ways in which people can still get in.
A dedicated attacker is going to defeat whatever you put in his or her way. So
the point is you want to put as many obstacles between you and them, so they
look at you and say, I'm not going to mess with you, I'm going to go over there;
that person has no security whatsoever, I'm going to break into his house, take
his things. That's what security is. It's a bit of a game with the bad guy.
I admit it, it's not like you can master security. What you can do is get a frame of
mind where you start thinking in these terms and you start thinking how could
somebody defeat what I'm doing and realize that a dedicated hacker will do that.
Now, let me put some perspective on it. Just because I talk about these things in
the book, just because I demonstrate or researchers demonstrate in the book
that this is possible, I don't mean that every single one of us should go home,
throw out our gadgets or change our locks or whatever. We're not targets,
necessarily.
When we talk about the acquisition of data from our gadgets, it seems like, wow,
someone can really profile me with all this information that's being collected
about me. But chances are I don't think that's really happening. I don't think it's
Minority Report, where you walk by and the ads start talking to you personally.
I think it's more Raiders of the Lost Ark, where your data is filed in a vast
warehouse with the Ark of the Covenant in the far, far corner; that's where your
personal data is. And unless somebody's specifically coming after you, I don't
think you're going to have as many problems as you might think.
I'm not in the conspiracy category on this, I'm just saying we should be aware
that gadgets do collect information and sometimes the means by which we
secure that information is not perfect. And we need to recognize that.
Now, I talked about cars. I talked about locks. There are less obvious gadgets in
our world that can also lead to security problems.
For example, I'm staying in a hotel. And I have a remote control that accesses
my TV. One of the things I can do is check my folio and billing for my room,
pretty amazing. I can check out without even going down to the lobby.
Well, there's a problem with this. It's sort of a reverse security model. And I'm
going to read you the intro to Chapter 2, because I talk about a hacker who is
not really a hacker; he's a security researcher. I should qualify that. When I say
the word hacker today, I'm talking about hacking in the old-school sense,
someone who takes something apart. I want to try to reclaim that word for what
it was originally intended to mean.
I'm a member of the media. I've done stories on this, and I'm very clear when I
write about it: I say criminal hacker if that's what I mean. Now I say cyber criminals. So
this is an individual named Adam Laurie, who discovered a few interesting things
about his remote control in his hotel room a few years ago.
The premium movie playing on Adam Laurie's hotel TV screen may not necessarily
be the one he paid for. Perhaps not one intended for his room at all. One night,
out of boredom, Laurie said he became interested in his hotel room's TV remote
handset, and in the process of exploring it gained access to premium services, to
other guests' accounts and to the hotel's main billing server.
Unless they're accessing the weather channel or CNN, most people do not give
the common hotel TV remote a second thought. But again most people are not
Adam Laurie. He is the chief security officer and a director of a London-based
networking company called The Bunker Secure Hosting, housed inside a
decommissioned missile silo in Kent, England. His frequent
travels and speaking engagements are the result of Laurie's world-renowned
expertise in wireless vulnerabilities found in many gadgets today, including hotel
TV remote systems.
Laurie, who still uses the nickname Major Malfunction, discovered the possibilities
after, I believe, tinkering with the infrared codes via his laptop one night in a
Holiday Inn room. Setting down his laptop, Laurie said he wanted to retrieve a
cold beer from his previously unlocked mini bar.
Somehow he managed to change one critical value via the TV and locked the
mini refrigerator. If only to rescue his beer, Laurie said he was compelled to
rediscover the exact numeric value that would unlock it. Of course, one thing led
to another.
Infrared signals on consumer gadgets are easily overlooked. Security by
obscurity. Compare the very basic radio-frequency controls used in
garage door openers. Garage door openers can be manually configured via a DIP
switch circuit with eight on/off switches. That leaves 256 possible code
combinations.
Laurie has demonstrated at various security conferences a script he created that
can run through all 256 combinations in a matter of minutes. With a laptop computer
and a radio antenna, he can open just about any garage door.
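The eight-switch arithmetic is easy to check: two positions per switch gives 2**8 = 256 codes, few enough to enumerate exhaustively. Here is a minimal sketch of that enumeration; all names are hypothetical, and a real attack would key an RF transmitter rather than call a function.

```python
from itertools import product

def door_opens(installed_code, guess):
    """Stand-in for transmitting one candidate code at the door."""
    return guess == installed_code

# The victim's eight DIP switch settings, chosen arbitrarily here.
installed_code = (1, 0, 1, 1, 0, 0, 1, 0)

attempts = 0
found = None
for guess in product((0, 1), repeat=8):   # all 2**8 = 256 possible codes
    attempts += 1
    if door_opens(installed_code, guess):
        found = guess
        break

print(f"opened after {attempts} of 256 attempts")
```

At a few transmissions per second, the entire code space takes minutes, which is why a fixed 8-bit code offers essentially no security.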
With TV remotes, very few industry standards exist for infrared television remote
signals. Those that do are proprietary. For example, a Sony TV remote won't
work with a Samsung TV, but it might work with another Sony product such as a
Sony DVD player. No encryption or authentication is required to use a remote.
No authentication handshake says that only a Sony remote can work with
Gadget No. X or connect the TV to Gadget No. Y. This gives us the convenience
of universal remotes, even though they require some initial programming by the
end user, if only to tell the universal remote which proprietary code to use.
Unlike the home version, hotel TV remotes include additional groups of codes.
The home edition includes volume, channel select and text mode. The hotel
version includes codes for the alarm clock, pay TV, checkout and administration
such as housekeeping.
Hotels, however, use an inverted security model in which the end gadget, in this
case the TV, filters the content. In other words, premium movies are broadcast
all the time. You just need a way to access them.
Instead of residing on a central server, access control is literally in the hands of
the paying hotel room occupants, whether they realize it or not. Laurie found that
he needed only a computer running the Linux operating system, an infrared
transmitter and a USB TV tuner to access these extra groups of codes.
While staying at a Hilton Hotel in Paris he automated his attack which enabled
him to snap photographs of the various channels he could see and manipulate.
If he had malicious intent, Laurie could have zeroed his mini bar balance,
watched premium movies or surfed other people's e-mail. Instead, Laurie
decided to deface the hotel welcome screen, take a photo, then restore the
screen to its previous condition, later using the photo to show the hotel staff what
he had been able to accomplish.
If the system had been designed properly, Laurie said, I shouldn't be able to do
what I've been able to do. Yet, the ability to access mini bar records through the
hotel television shouldn't be surprising.
Hotel TVs are connected by a coaxial cable to a little metal box. So are the
room's premium TV channels, voice over IP, the mini bar, GameBoy and Wii
entertainment systems. This bundling of premium services is convenient from
the hotel's point of view. Management doesn't have to rewire each room every
time it adds a new service.
And it's convenient from the guest's perspective. They can check out anytime
and bypass the front desk. But there's a flaw in all of this convenience. Using a
computer TV tuner and a laptop keyboard as the remote control, Laurie said he's
able to access information intended for other rooms within the hotel.
Thus, Laurie can change the code and see the billing information for another
room, any Web mail that person may be reading at the moment or the premium
porn channels that guest might be watching at that moment.
The hotel assumes that only -- sorry -- only you can see your account
information. It further assumes that most people are not connecting their laptop
computers to their room's TV and accessing the hotel's private configuration
codes.
For the most part, that's true. But Laurie isn't the only security researcher to
publicize this particular design flaw. Paul "Pablo" Holman of the Shmoo Group has
also gone public with his own findings on hacking hotel room TV remotes.
Another security researcher used a basic cable converter box purchased on
eBay to intercept hotel codes in his room and others have also found additional
ways to defeat hotel TVs such as plugging the coaxial cable into their laptop
directly.
Realizing that his otherwise trivial hotel TV remote more or less holds the keys to
the entire kingdom, Laurie has experimented at other hotels. He's seen only
three or four different back end systems used.
By one estimate, one of up to 16,000 possible code combinations is required to
unlock the services on any given hotel system. Each new location could take
Laurie hours to decipher. To speed that process, he created an automated
script to divine a particular hotel's relevant codes in about half an hour.
Laurie has no plans to release that script to the public. It exists only to further his
own curiosity. In one hotel Laurie inserted an unblocked porn channel image
onto the background of the welcome page, temporarily and only to show the
executive staff. Similarly, he once accessed the hotel's main server and had the
option of crashing the entire system if he had wanted.
That's where Laurie as a researcher differs from the criminals sometimes referred
to in the media as hackers. Laurie uses his experiences to educate people about
the dark side of common gadgets. But what if somebody really wanted to be
malicious? Could that person use a common gadget to get sensitive information
about us?
The lack of authentication allowed Laurie to gain access where he should not. In
many systems we take for granted, this lack of authentication is all too common
because the designers have not thought through the various ways in which someone
could attack. As we will see, our growing need for convenience makes us accept
clever shortcuts in exchange for security, shortcuts that may in some cases
cost us money or in extreme cases our lives.
So in the book I go on and I talk about other things. I've already talked about the
car. I certainly talk about this. The smartphone, the mobile gadget. And there
are other gadgets such as electronic parking meters. The toll transponders that
are used now on roadways. Toll transponders tend to be unencrypted. One
could walk through a parking lot, sniff a code from a car and clone it onto
their own transponder, and now someone else is paying for their toll usage.
The other thing about the toll transponders is on the back end, we know that our
cell phone records are being kept by our phone companies. That's something
that we sort of accepted a long time ago. There was this period where you could
roam, leave the network and go somewhere else. The phone company needed
to know when that happened so they could charge you. So they keep detailed
records for billing. We understand that.
Well, the thing about municipal systems is that those records are easy to get
compared to phone records. Phone records are very hard. You need a court order
and you need a reason to go after someone's phone records. But if it's transponder
records, it's fairly trivial right now. There isn't a lot of court precedent around
that. If I'm a lawyer and I'm representing a company, and the company suspects
that this employee who is supposedly working from home isn't, we can look at his
records and see how many times he crossed this particular toll road, how many
times he went over a bridge during working hours.
Or if I'm a divorce lawyer and I'm building a case against a spouse, I can look at
those records and I can say, wow, you have this pattern of every Tuesday at 2:00
going out and coming back an hour later. Where are you going? So these
records have been used. And it's a case where we are in the early, early stages
of all this location data.
We're collecting it, and we don't yet know how it's going to play out. People are
coming up with very creative means. I just mentioned how lawyers are
interpreting this data.
We talk about hackers thinking outside the box. Here are lawyers thinking outside
the box. The data is there. How can I use it?
I talk about that. I talk about medical stuff. There's a lot of gadgetry going into
our bodies, and sometimes that gadgetry needs to be configured by a doctor or
needs to be monitored by a doctor. And some of these gadgets are now going
on to the Internet.
Again, as you start connecting things to the Internet, you're exposing any
vulnerabilities inherent in them. What is really scary about the pacemaker attack
that I describe in the book is that researchers found they didn't need to know
which programming language was used. They didn't need to know anything
about it. They were able to throw gibberish at it. That was enough to cause the
pacemaker to go out of sync and start firing rapidly or slowly. And that could
really mess up somebody's heart. It could also crash the pacemaker which could
kill somebody.
Here's an example where fuzzing, just throwing random gibberish at gadgets,
can cause problems. Going back to the car, there are researchers down in
Southern California and here in Washington state who have been looking at
automobiles.
In particular since the whole news story around Toyota, with the braking and
whether that was caused by an electrical manufacturing defect or user error. It
was ultimately found to be user error.
What they used to find that out were the black boxes in the cars, which recorded
how many times somebody hit the brake and how fast they were going. Our cars
are wired. They store up to 40 different pieces of information on average. That's
going to be standardized by the government. They're going to regulate what
those 40 need to be.
But it includes random things, like whether you're wearing a seat belt or not,
whether the antilock brakes fired correctly, whether the lights were on at night.
So using some of this data from Toyota, the government has now said, wow,
what else can be done with our cars. So these researchers from Washington
State and California are showing they can go in and they can attack a car.
Cars now have on average about 70 discrete computer systems in them. And
you don't need to know what language they're programmed in. The researchers
found they could fuzz, they could throw gibberish, and they could randomly fire
the brakes, randomly turn on the lights. They could randomly cause the volume
in the car to go very, very loud, so that the driver could not turn it down.
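Fuzzing, as used against the pacemaker and these car systems, needs no knowledge of the target's internals. A minimal sketch of the idea, with a deliberately fragile toy parser standing in for the device firmware; everything here is hypothetical.

```python
import random

def device_handle(packet: bytes):
    """Toy firmware routine: expects [msg_type, length, payload...]."""
    msg_type = packet[0]
    length = packet[1]                       # attacker-controlled field
    payload = packet[2:2 + length]
    if len(payload) != length:               # malformed frame
        raise RuntimeError("device wedged")  # stand-in for a crash/reset
    return msg_type, payload

random.seed(1)                               # reproducible run
crashes = 0
TRIALS = 1000
for _ in range(TRIALS):
    # Throw random gibberish of random short lengths at the device.
    fuzz = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
    try:
        device_handle(fuzz)
    except (RuntimeError, IndexError):       # the device fell over
        crashes += 1

print(f"{crashes} of {TRIALS} random packets crashed the device")
```

The fuzzer never inspects the parser; it simply observes that random input reliably wedges it, which mirrors how the researchers crashed systems without knowing their programming language.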
They can cause a lot of mischief. One of the ways they can get in is a little
diagnostic port under your steering column. It's there by law. That port is
an entryway. When you take your car to a garage, someone who is supposed to
do that will hook a diagnostic tool to it, and it will tell them what's wrong with your
car. All the computer codes will come out and say this is misfiring, that's misfiring,
et cetera. It's great for diagnostics. But somebody with criminal intent can go in
there and track how many times you go from point A to point B. They can put a
GPS on there. They can track where you're going. The other thing is we have
this tire pressure monitoring system.
And it is wireless, because if you think about it, the wheel is turning. You don't
want it wired to the car. This is a wireless connection to the car. Somebody has
found that they can intercept that wireless signal and it gives them entry into some
of the computer systems in the car.
Now, the thing about the car is that it's not necessarily unified. I know Microsoft
is working with Ford on a system, and there are a couple of different theories
about how that's going to play out as a platform. I think that's great.
But at the moment a lot of the cars on the road have discrete little systems. So
when you attack one, you're not necessarily attacking all the others. But some
day that will be possible. You'll have a whole platform that is the car. I'm worried
about the cars that are on the road, because here on the west coast, the cars
tend to stay on the road a lot longer than on the east coast. So we'll have a lot of
hackable cars out there in the near future.
And is somebody thinking about that? Are we planning for how that might play
out? And now I've talked about breaking things. And I spend about 80 percent
of the book talking about all the things that can go wrong.
I do, however, conclude that there is hope. And I talk about some of the more
positive uses for how we can use some of the data that's collected, and some of
the gadgets that we have.
For example, the Apple controversy around location data. Well, that really didn't
surprise me, because I know that companies have been doing that for years.
They've been building databases of where Wi-Fi networks are and they've been
building databases of where cell phone towers are to position people.
I've known that for a while. I know that because when I turn on the TV in the
morning and I look at the traffic patterns, the daily commute, I see the red, yellow
and green on my familiar roadways of the Bay Area, and that's coming from
cellular phones. And it's anonymous. And it's aggregated. And it's providing a
service.
So if we find ways where we can use this data so that it benefits the public good,
I think there's a lot of potential here in some of this random data that's being
collected by our gadgets.
It's not all evil. Similarly, we can have a wired home. We have an aging
population. And rather than put them all in nursing homes, we can create homes
that have sensors that tell loved ones or healthcare professionals: Did that person
wake up in the morning? Did they take their pills? Did they leave their stove on
overnight? Various things that can help keep them in their homes.
Again, we're going to have to surmount some problems: how do we make sure
that users are authenticated before they get that data, and is that data being
encrypted, et cetera, et cetera. But those things can be worked out.
The other thing going on with healthcare that I think is very positive is that several
of the manufacturers in the space are coming together, forming alliances, and
self-certifying. They're identifying a baseline for security and saying, we are
meeting that baseline. And, fine, maybe it's too low, maybe it's too high, but
they're doing it. And it's an industry standard.
And I think that's great that they're doing it. Much like in the financial services
space, where we had PCI come out of it: the payment card industry got together
and came up with their DSS, the Data Security Standard. They are coming together
rather than the government imposing it upon them.
So whether it's coming from the government, whether it's coming from the
industry working together, or from consumers saying I'm not going to buy these
devices, I'm hoping to have a dialogue here where people recognize that just like
we secure our laptop computers and desktop computers from software
vulnerabilities, we need to be thinking about how our gadgets are secure, and
maybe there will be a seal of approval some day where you'll buy a gadget
because it's got that Underwriters Laboratories seal of approval, or whatever body it is
saying that gadget is secure to whatever specs are at the time. And of course it's
going to have to be a moving target. And I work for a company that does device
security. Perhaps my company will be leading the way.
But as a disclaimer, I wrote the book before I worked for the company, so it was
a nice convergence of things coming together today. So we've got about a
half hour here. I'd love to hear your questions. I'm sure you have a few. Or I
could tell some more stories from the book.
Yes.
>>: Well, three, four years ago there was quite a bit of concern about security
around electronic voting. I'm curious if you've seen any advances, or should we
be even more concerned? Because that's not something we can choose whether to
purchase. This is a thing you have to use, so how secure is it?
>> Robert Vamosi: That's where your government gets involved because they
make the purchasing decision. I chose not to put it in the book for a variety of
reasons. One, it seems there are a lot of changes going on. The companies that
were in the space in 2000 have left that space. In 2004, in another presidential
election, we revisited it, and those companies were caught doing various things.
They've now made changes and so forth. Where we stand today, I think we
need to have a paper trail with the voting systems. It can't just be a digital thing.
There are voting systems that do the paper trail.
Where I live in San Francisco, we have an optical system. So it's digital but
there's still the optical paper that I fill out and insert in the machine. So later in a
recount we can have that tabulated. That was missing before, and I'm glad the
researchers pushed for it.
A footnote: one of the researchers who led the call against the voting systems is
now working for the Federal Trade Commission. He's like the privacy czar. His
name is Ed Felten. I expect to see some privacy regulation and discussions
come out of the FTC that will benefit everybody.
>>: You're falling into a common trap with voting: confounding paper with
verifiability. Because what we want in our voting systems is to be able to verify
accuracy. And paper is not the same as being able to verify.
Verifiable systems can be paperless. You can have paper-based systems that
aren't verifiable. And the kind of verifiable paper trail that you're talking about
really only allows a voter to verify the accuracy of a vote up until the time it's
dropped in the ballot box and then it's completely out of the voter's control.
There are ways of doing much better. That's where we should be focusing.
>> Robert Vamosi: Do you have an example?
>>: There are verifiable election technologies that give full end-to-end verifiability
so every voter can be sure that their vote has been accurately counted in the
process. There are electronic versions and there are paper versions of this.
>> Robert Vamosi: I've looked at various systems. There was one that used a
smart card. Everybody was issued a smart card, and you went into your voting
machine and voted with it. The problem was it used a method that allowed
somebody outside the voting booth to see how you voted. And the other thing
was it didn't actually erase the vote after it was recorded on the machine. So you
had layers upon layers upon layers of votes on the smart card, and you could go
back later and see how all these people voted.
>>: That's not the kind of system I'm talking about.
>> Robert Vamosi: I know. I'm just saying there are theories out there, and I
think what's interesting is we have to see how they are used, how people use
them and misuse them, until we get it right. We've come a long way from 2000,
when we first started using electronic voting systems. I think we rushed to put
them in, and we're doing something similar right now with smart meters. We're
rushing to put smart meters in every home in America, mainly because other
countries have already done it. So obviously we have to do it, too.
In theory, it's a great idea. A smart meter will allow you to monitor your individual
electricity usage. Now you can look and see, wow, I'm burning a lot of energy,
let's turn off all these lights, and so forth; it's really expensive right now. It's
really great. But the problem is the rush to put these meters in our homes: the
government is giving a kickback, saying the utilities will be paid some money if
they get all their homes hooked up to a smart meter right now.
So they're going out and they're just buying smart meters, and this company is
mass producing them. There are a couple of companies. They're mass
producing them. Did anybody check to see if they're secure? One thing about
the smart meters is that they wirelessly communicate. Did anybody check to see
if they're secure? No.
So I went to a conference, the Black Hat Briefings in Las Vegas; I've done a lot
of research at the Black Hat conferences over the last couple of years. These researchers
demonstrated how they could put a worm on one of these smart meters that
would go from house to house to house, across neighborhoods, streets, and
towns. Suddenly you could potentially have a blackout because of a worm.
And the utility would be like: What happened? I don't know. Why did everything
turn off? It could potentially take them a day or two to figure that out and to
reverse it.
There's no protection on it. And mind you, these are just little sensors in the
home, but nonetheless, you can shroud them in security. You can make it so the
meter doesn't accept an update from an outside source; it has to authenticate
the update first. And similarly it needs to encrypt the data as it's talking to your
refrigerator and your washer and dryer. None of that has been worked out.
There was a rush to put it out there. The same thing was true with the
voting systems. It was cool, it was new, they ripped open the box. They put it
out and they said everybody's using the voting system.
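As a sketch of what the authenticated-update safeguard for smart meters could look like, here is a minimal, hypothetical illustration: an update is only accepted if it carries a valid HMAC tag under a key shared with the utility. The key, function names, and protocol are my own invention for illustration, not any vendor's actual design.

```python
# Hypothetical sketch: a meter that rejects firmware updates lacking a valid
# HMAC tag, blocking the kind of unauthenticated update a worm relies on.
import hashlib
import hmac

SHARED_KEY = b"utility-provisioned-secret"  # placeholder key for illustration

def sign_update(firmware: bytes) -> bytes:
    """Tag the utility would attach to a legitimate firmware image."""
    return hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()

def accept_update(firmware: bytes, tag: bytes) -> bool:
    """Meter-side check; compare_digest resists timing side channels."""
    return hmac.compare_digest(sign_update(firmware), tag)

legit = b"firmware v2.1"
tag = sign_update(legit)
print(accept_update(legit, tag))            # legitimate update is accepted
print(accept_update(b"worm payload", tag))  # tampered payload is rejected
```

Real deployments would use per-device keys or public-key signatures rather than one shared secret, but the principle, verify before you install, is the same.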
Then later we went: Wait, that wasn't right. How come we had more votes in the
ballot box than people who lived in the particular county? Wow, how did that
happen? And they were all for a particular political party. It was just interesting
how it played out.
So unfortunately I think we have to have some of that. We have to have some
experimentation, because sitting in a room, thinking about your gadget doesn't
tell you all the different ways that someone's going to use it.
Much the way it was when Facebook came out. A lot of people were like, this is
cool: I'm going to put my birthday on there, and everyone will know my birthday
now. I'm going to put my hometown, my mother's maiden name, all this
information, and share it with the world.
Five years later we're like, oh, that was a really bad idea. But more to the point,
we started putting up pictures of that beer bash we had, or of running around
doing crazy stuff.
Now we're trying to get a job. Whoops. That data is there. Now, there's a huge
growth in what's called reputation services. And these are companies that will go
and help you scrub your Facebook, your social media experience, wipe it from
the Internet. And in some cases it's a matter of populating blogs and things to
help push that stuff way, way down.
So it's still there. You can't really get rid of it on the Internet. But you can push it
way, way down. I think we needed to go through that experimentation. Now it's
not social media, it's gadgets. And we're discovering, for example, that I don't
need a separate digital camera anymore; my phone has one. In fact, it's not a
phone anymore, it's a computer. I can access the Internet, I can write notes, I
can do all sorts of great things with this.
Small problem. When you start marrying technologies on one device, you might
not think about the logical consequences of it. So we talked about location. This
phone is tracking my location, and every time I take a picture it's mating that
location with the picture, because the file format allows for that. It allows for
longitude and latitude to be stored along with the color values and all this other
information: the EXIF file format. Well, that might not be a problem, but I post my pictures up to
the Internet. Okay. So now somebody comes along and writes a script and says,
let's look for all the longitudes and latitudes that have been posted to Twitter, all
the TwitPics up there. Now they've discovered all my pictures and where I am at
any moment in time. And maybe I like to take pictures of my house or loved
ones. They can start to profile me: because they have longitude and latitude,
they know where I go about my day, because I posted pictures. Suddenly digital
photographs online become a privacy threat. And did we think about that? No.
We had to go through this process.
If you're interested, there's a website, ICanStalkU.com, that shows the latest
TwitPics with their longitude and latitude. You can go out, see the photograph
and the person, and use that to look them up on Facebook or whatever else you
want to do.
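To see why those geotags are so easy to harvest: EXIF stores GPS coordinates as plain degree/minute/second values plus a hemisphere letter, which a few lines of code turn into map-ready decimal degrees. This is my own minimal sketch, not code from the talk; the coordinates are made up.

```python
# EXIF stores GPS position as degree/minute/second rationals with an "N"/"S"
# or "E"/"W" reference. A scraper converts that to signed decimal degrees.
from fractions import Fraction

def dms_to_decimal(degrees: int, minutes: int, seconds: int, ref: str) -> float:
    """Convert an EXIF-style DMS triple to signed decimal degrees."""
    value = Fraction(degrees) + Fraction(minutes) / 60 + Fraction(seconds) / 3600
    # South latitudes and west longitudes are negative on a map.
    if ref in ("S", "W"):
        value = -value
    return float(value)

# Hypothetical geotag: roughly San Francisco (37 deg 46' 30" N, 122 deg 25' 10" W).
lat = dms_to_decimal(37, 46, 30, "N")
lon = dms_to_decimal(122, 25, 10, "W")
print(round(lat, 4), round(lon, 4))
```

Once the values are decimal degrees, plotting every photo an account has ever posted is a one-line mapping call away, which is exactly the profiling risk described above.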
>>: Maybe it's a generational thing, but I think if you talk to anybody under the
age of 25, their response is likely: so what? What is your answer to people who
say, who cares if they track my longitude and latitude?
>> Robert Vamosi: I just said it. We're in a stage of experimentation. So maybe
for the generation under 25 it doesn't play out as a problem. Maybe people
aren't stalking you because you have longitude and latitude turned on.
Maybe the consequences aren't as grave as I'm making them out to be. I'm older
than 25, and I'm thinking it cannot be a good thing. So I'm going to differ with
you on that and say that it's probably going to be a bad thing. I know that
younger people have a bit of invincibility about them. I was young once. I get
that. So it doesn't necessarily play out.
But as I said, people who posted the beer bong pictures on Facebook and
everything are now trying to get corporate jobs, and in retrospect they're looking
back and saying I shouldn't have done that and need reputation services to clean
that out.
So the other thing I will say about privacy: one thing I find is that privacy is
personal. Some people are extroverts, and they frankly don't care if their Social
Security number is out there or this information is out there. I've
talked to people and they've tried to convince me I'm wrong. I'm more of an
introvert. I personally don't want that. So I think what we need are the
controls. We need to have at least the option, we need to be able to opt out if we
really feel strongly about it.
So getting back to location, I noticed that Apple has an update going on with their
iPhone and potentially they're allowing the end user to say: Don't track my
location. And I think that's a great first step. And we're just going to have this
give and take going on, much the way we did with software 20 years ago, where
we start to learn what the boundaries are and what's safe behavior, what's not
safe behavior. We're all just going to have to figure that out.
But I want to raise awareness that we should at least be thinking about it and
talking about it. There was another question over here, yes.
>>: Just a comment. As I understand it, there's a difference between how
under-25-year-old girls look at the problem of publishing information on
Facebook and how under-25-year-old guys look at it. So I think it's important to
dig a little deeper into the anecdotal data before drawing conclusions.
My question is, turning it around and the last discussion, what are the incentives
that the marketplace would offer, that businesses would have, that users would
have to enable the protections that you're suggesting, the controls you just
mentioned, what's going to make it come into being? What's the business
model, if you will?
>> Robert Vamosi: Yeah, I totally get that. And I think the backlash that you
saw -- I mean, I've been on TV the last couple of weeks talking about the whole
location debate. And I think the backlash caused a manufacturer of a product to
change what they're doing.
And now potentially they can ride that and say: Look, we give you the option.
We're sensitive to your concerns. So it can be a marketable thing. You can start
to market security as something that's good.
I mentioned earlier the idea of Underwriters Laboratories, just having some
symbol on the box that says this gadget has been secured to these standards,
whatever those standards are.
And it starts to differentiate when someone goes into a store and says I don't
know which one to buy, they can start to identify the ones that have that label.
Kind of like the Energy Star thing. It started off small, and over time it's starting
to take hold; we have more and more Energy Star products. So I think there's
potential there that a company could benefit from; we're just in the early stages
of it. Before, you didn't want to say, I am the most secure company in the world.
You did not want to say that, because that just caused everybody to throw rocks
at you.
But this is different when we're talking about gadgets. And I think people
understand that there's a difference. And so it's okay to actually brand and
market on security.
>>: So I travel a lot. I have a new passport. What are the specific vulnerabilities
of this new passport and what can I do to protect myself?
>> Robert Vamosi: Right. The United States did something interesting. They
put a shield around the passports. The passports that we have now, and those of
40 other countries, are RFID-enabled. They have a chip inside, and what it
broadcasts out are three files. There's an international standard that has been
agreed upon, so it's not just the U.S. that came up with it; these 40 other
countries all worked together on it. One thing they decided was to make it
readable, because not every country has strong encryption.
So they came up with a couple of different ways it can be read. There's a
machine-readable code at the bottom when you open the passport that allows
the customs agent to read it.
Back to my point: it's an RFID device. It can be powered by any reader that you
pass by. And the shielding in the United States is pretty effective. However, if the
passport is open a little bit, it can still leak out. And they didn't do what I talk
about in the book, which two security researchers recommended: a way to
negate that field, basically putting another chip in there to confuse it, so that only
someone physically in its presence can open it up and read it, rather than having
it broadcast out at a certain distance.
So for the moment just keep it closed. I mean, we at least have that protection.
In the UK, they do not. And I talked about Adam Laurie in my reading. Adam
Laurie appears later in the book. He showed me how he was able to read a
passport that he had not seen. The passport was obtained in the United
Kingdom; the newspaper the Daily Mail paid him to do this research. It was
mailed on the same day, so he at least knew the day on which it was issued, and
he knew it expires ten years from that day. Turns out that's part of the code for
unlocking it. He knew the address to which it was mailed. He did not open the
envelope to look at it, by the way. So he looked up the name and found the
holder's birthday, which is also part of the code that unlocks the passport. Using
those pieces of information he was able to infer the rest, and he was able to
crack the code in about four hours.
He was able to access those three files that are stored on the RFID chip in the
passport without opening the envelope and looking inside.
And one of the files is a photograph. The other is your biometric information.
And the third file says: These other two files have not been tampered with. It's a
control file. So he was able to pull all three of those and pull them up and project
them up on a screen. It was pretty awesome.
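A back-of-the-envelope sketch of why that attack worked so fast: the passport's access key is derived from printed fields (document number, birth date, expiry date), so each field the attacker pins down divides the search space. The numbers below are my own illustrative assumptions, not figures from the book.

```python
# Illustrative key-space estimate for a passport access key derived from
# document number, date of birth, and expiry date. Knowing the birthday and
# expiry date in advance collapses the space an attacker must search.
def search_space(doc_numbers: int, birthdays: int, expiry_dates: int) -> int:
    """Number of candidate keys an attacker must try."""
    return doc_numbers * birthdays * expiry_dates

DOC_NUMBERS = 10**9     # assume a 9-digit document number
BIRTHDAYS = 365 * 100   # any day over a ~100-year span
EXPIRIES = 365 * 10     # any day in a 10-year validity window

full = search_space(DOC_NUMBERS, BIRTHDAYS, EXPIRIES)
known = search_space(DOC_NUMBERS, 1, 1)  # birthday and expiry known exactly
print(f"search space shrinks by a factor of {full // known:,}")
```

Over a hundred-million-fold reduction from two looked-up facts, which is how a brute-force job that would otherwise be infeasible fits into an afternoon.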
So I also talk about biometrics in the book. That's another area. The book
covers a lot of territory. I talk about like fingerprint scanners and how fingerprint
technology is not what you see on CSI. It's not perfect. And in fact the FBI has
been caught in some errors. I cite a few examples of people who were wrongly
accused, because at best you can only match so much of a fingerprint and call it
a match.
They should really go further. So fingerprinting is not perfect. I also talk about
retina scans and a few other common biometrics and some of the flaws that are
in there.
What I recommend on biometrics is layering. If you're going to use biometrics
don't make it your one and only source of authenticating the individual. Have a
password, have a few other things in there as well.
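The layering advice can be sketched as a simple multi-factor policy. This is my own illustrative example, not a scheme from the book: a biometric match counts as one factor among several, and access requires at least two independent factors.

```python
# Illustrative multi-factor policy: a biometric match alone is never enough;
# require at least `required` independent factors to pass before granting access.
def authenticate(factors: dict[str, bool], required: int = 2) -> bool:
    """factors maps a factor name (fingerprint, password, token, ...) to
    whether that check passed."""
    return sum(1 for passed in factors.values() if passed) >= required

# A spoofed fingerprint alone fails...
print(authenticate({"fingerprint": True, "password": False}))
# ...but fingerprint plus password succeeds.
print(authenticate({"fingerprint": True, "password": True}))
```

The design point is that a false fingerprint match (or a stolen password) on its own cannot unlock anything, so the imperfection of any one biometric is contained.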
Back to your point with the passport. The passport was designed to save time. It
was designed so that the signal could be read from 30 feet away: John Doe is
walking up to the customs agent, so get his file ready. It didn't work out that way;
it hasn't saved time at the customs station, for one. Two, the 30 feet, which is
typical for a passive RFID tag, that is, a tag without a battery, can be exploited.
In the book I talk about Chris Paget, down in the Bay Area, who has extended
that 30 feet to 270 feet.
Turns out that the frequency used by EPC Gen 2 RFID tags falls in a ham radio
band. So all you need to be is a ham radio operator to obtain the equipment
necessary to transmit to an RFID tag, and you can read everybody's RFID tags
in the room. It's pretty interesting. So if that's possible, I can sit at an airport and
watch you go by. And maybe I'm evil, and maybe I'm looking for somebody from
a particular country, because the country code is going to show up, and I can
then use it to detonate something if I want.
Or I could use it to target you specifically. I don't know your name, but I know
you're from this country that I don't like. So there are risks there. And I don't
think they really thought it through. They were just thinking of the convenience of
having people presenting their passports very quickly and easily.
Any other questions? One more.
>>: What about like the [inaudible] payment and so forth, credit cards and so
forth? Because I find a little bit of the public fear overblown, in that what you can
scan out of them is not much different from when I give my waiter my credit card.
Even then he's able to grab more information out of it than that. I was wondering
what your thoughts were on those kinds of systems?
>> Robert Vamosi: Right. At the end of the book I talk about mobile wallets and
my hope that we can get the mobile device secure enough to make them
possible. Some of the cool things you could do: instead of having a hotel key
card, you could use your phone to access your hotel door and unlock it.
We're not there yet. What's going on with mobile payments right now is pretty
limited in what you can do, and what's going on with mobile banking is pretty
limited. You can't actually do transfers.
The banks themselves aren't sold on the security to the point that they were with
online banking when they offered it back in 2005. I was an early adopter of
online banking. I'm not an early adopter of mobile banking, in part because I
don't think the services are robust.
I can't do that much. I can access my bank balance but I can't necessarily
transfer money yet. I know they're starting to experiment with that. So back to
your point about mobile payments, going up to a cash register and waving it:
NFC, Near Field Communication, has also got its vulnerabilities, and they need
to be locked down. Even though it requires close contact and so forth, I don't
really have an opinion yet on whether it's secure.
If I were to try it, maybe I'd change my opinion on it. But I haven't had the
experience yet. My phone doesn't have an NFC chip in it yet.
>> Kirsten Wiley: Okay. Thank you.
[applause]