EPIT
Ethics and Politics of Information Technology
An introduction to the materials and the project.
Directions for Use: This stack of paper contains three main parts: this introduction, abridged and annotated transcripts of interviews with three individuals, and short commentaries (less than 2,000 words each) on the transcripts or on related issues. You will see that the cases are uneven; some have more information than others, since we are at different stages with each of the interviewees. We do not expect everyone to read everything in detail. You might want to start by leafing through the abridged transcripts, which have italicized sections that give a kind of overview of the issues discussed. The commentaries vary in length; some are more like glossary entries, others are more like analyses of arguments or issues. Here is what's inside:
Introduction
Process
Questions for Workshop participants
To Do:
DAN WALLACH: INTERVIEW 1
DAN WALLACH INTERVIEW 2
Bug: Anthony Potoczniak
SHALL WE PLAY A GAME? Tish Stringer
Regulatory Capture: Ebru Kayaalp
Reputation: Chris Kelty
Security: Ebru Kayaalp
Good and Bad Bug hunting: Chris Kelty
Ethics: Ebru Kayaalp
Paper: Ebru Kayaalp and Chris Kelty
Clue: Chris Kelty
AES conference paper: Anthony Potoczniak
PETER DRUSCHEL INTERVIEW #1
PETER DRUSCHEL INTERVIEW #2
Commentary on Peter Druschel
Practice what you preach: Hannah Landecker
Emergent phenomena: Ebru Kayaalp
Degrees of Remove: Hannah Landecker
Hierarchy vs. heterarchy: Tish Stringer
Scalability: Chris Kelty
Worrying about the internet: Hannah Landecker
MOSHE VARDI INTERVIEW #1
Commentary on Moshe Vardi
Introduction
This project started with a demand and a question. The demand, which is always palpable here at Rice,
was to "build bridges" between the sciences and humanities, or engineering and social sciences. The
nature of this demand is poorly understood, but it is easy enough for us to respond to (and, because it involves the anthropology department, it offers a kind of quick and easy field-site for our research). The question, on the
other hand, was of a different nature: why is there always an implicit assumption that "ethics" is the
province of ethicists, philosophers, social scientists or humanists, rather than scientists themselves? Is
there some logic to this supposed division of labor? We wanted to simply insist that there is no such
division, and to do a project that would give people a way to understand why "ethics"—insofar as it is a
useful label at all—happens everywhere and all the time: that it is a practical and a situated form of
reason which can be understood (and studied) as an empirical phenomenon. Scientific research never
happens in a vacuum and we are particularly sensitive to any claim that starts with the phrase "The social
impacts of X." Years of work in science studies, history of science, and philosophy of science have made
it clear that such a statement is only possible after the real social action is already over. Therefore our
answer to the question about this division of labor is rather arch: people who study the "impacts of" or the
"social effects of" or the "ethics of" science are just sweeping up after Parliament has gone home. The
real action happens as science is conducted, as tools are built, as people publish papers, and before anyone
realizes what happened. This does not mean that scientists and engineers recognize their own research
interests as part of any "ethical" issues (in two of the three cases presented here, this seems to be the
case), or indeed as anything other than boring old technical work. But rather than let the "ethical" issues be defined by someone else, somewhere else, and after the fact, and then apply some kind of pseudo-scientific method to analyzing them, we set out to conduct interviews and conversations tabula rasa: we want to listen, to discuss, and to try to figure out what aspects of the work of individuals in
computer science might usefully be addressed as "ethical"—perhaps especially where the practitioners
themselves deny the existence of any ethical issues.
Process
So, we devised a process—and this document is the midpoint—which we thought might not only answer
this question, but be useful for allowing other people to continuously answer the question of this putative
ethical division of labor, and to use as a template for posing new ones. Two aspects of this process are
experimental. First, we wanted to find a way for a number of people to collaborate on a project which has
only the loosest definition and goals at the outset, and to allow these collaborators to follow up on
whatever issues seemed most interesting to each of them. Second, we wanted to find a way to track the
progress of this investigation, to represent it, and then to make it available as a complete process—not just
a final paper that reports results, but a potentially extensible and reusable document. The process, to date,
has been the following.
1. Identify subjects and people of interest. In this particular case, this would have been difficult, or at least arbitrary, if it weren't for the specific scholarly interests of CK, who knew, to some extent, where to look and who was dealing with issues that might potentially be understood under this broad ethical frame. Arbitrary isn't necessarily bad, though; finding good interlocutors is an avowed art in anthropology.
2. Read papers, presentations, proposals, (and existing interview material). We asked our
interviewees for their scientific work, as well as any material they thought might be of interest to
us. We read it, tried to understand what we could and discussed it.
3. Come up with questions. Together we discussed what kinds of questions we thought might be interesting to ask. Usually these discussions were fascinating, because they allowed us to see what each of us had keyed in on in the readings, and to discover which parts were alien, and to whom.
4. Conduct interview. Usually with 2-3 people, on video, 1-1.5 hrs.
5. Transcribe interviews. This part sucks.
6. Return to step 2 for a second, perhaps a third interview.
7. Annotate and discuss the transcripts. This part has proven difficult for a couple of interesting reasons. Ideally, what we would like is something like a "wiki"—a place where we can all work on the same transcript, add our own annotations, and respond to others' annotations, until we have a certain spread and density of foci. At first we achieved this by printing out the transcripts and cutting them up with scissors—a very old-school process, but a very enjoyable and productive one nonetheless. We would discuss our annotations, then collate them into one document that showed where we overlapped (or not) in our interests.[1]
8. From these annotations and overlapping issues, CK would write "Missions" which the others were
to use in order to write short commentaries. They were meant to give a bit of coherence to the
various entries, without constraining people in what they might write.
9. Research group would write, read and discuss the various commentaries.
10. Workshop. This is where you come in…
Questions for Workshop participants
The goal of having a workshop on these materials is to recruit fresh eyes. We want the workshop to be a direct extension of ongoing research (and a collaborative one: meaning credit for being part of the research and conceptualization). Rather than presenting results, we are hoping for something more like a game of exquisite corpse, so to speak. Often when people give talks, they say, "This is a work in progress; I'm really hoping your questions will help me refine it." We're trying to be more aggressive, and more open, about this: "This is a work in progress; we're hoping you will take over its body and make it say something interesting." Zombie metaphors aside, here are some guidelines to think about:
1) Research. What other questions can be asked? We make no claims to have alighted on the only interesting aspects of these interviews; certainly there are other interesting issues. And those issues may well not be manifest in the transcripts: the point of running a workshop at which the interviewees are also present is that these issues might be pushed further—think of the workshop as a little like a third interview.
2) Re-usability. We want this set of documents to have multiple offspring: conference papers, papers in journals, and use in courses. Rather than aiming at a fixed and final document with one form, we want to try to figure out what form the collected documents should take in order to be something another scholar might stumble on, make use of, and ultimately add his or her own contributions to. In terms of research the question is: what would make this material something sociologists, anthropologists, historians, or rhetoricians would make use of? Here there are two questions: 1) the issue of making it practically deep enough to be reusable (all we really have are transcripts), and 2) the issue of propriety—research in anthropology and science studies is heavily invested in having projects that are unique and different from any others—what's the point of this material if it appears to belong to someone else? In terms of pedagogy, however, there are a number of ways this material can be made re-usable: in classes, as reading or as material to practice with, or as some kind of "case study" which can be given either to social scientists and humanities scholars or to scientists, engineers, ethicists, and philosophers.
3) The archive. One of the long-term goals of this project is to breed a number of other projects that are similar enough that we start to build an archive of material loose enough that it can be reconfigured and juxtaposed in surprising ways. Given interviews with computer scientists, nanotechnologists, environmental scientists, chemists, etc., all conducted in a roughly similar form, what kinds of new archival objects are possible? What researchable objects? What kinds of tools for making qualitative materials into something that generates new research questions?

[1] We've had several discussions about using Qualitative Data Analysis software (e.g., ATLAS.ti and NVivo), but in our opinion such packages have three shortcomings: 1) they are not really collaborative (each copy of the program represents one user, who labels his or her own annotations), so it would require a bit of a workaround to have the annotations represent different people; 2) they tend to store documents in non-standardized formats, or to store documents and annotations separately, making it non-obvious how one would export an annotated document either to the web or to a word processor; and 3) they are not web-accessible and not coordinated—there is no way for everyone to work on the same transcript at the same time.
To Do:
Come up with specific questions to ask either the interviewees or the research group for each of
the three sessions (interviewees may want to ask questions about commentaries, or about other
interviewees).
Offer examples from your own research areas that might deepen or expand on any of the issues raised in the transcripts or the commentaries. We are particularly interested in stories, interpretations, and comparisons that can be added to this archive in a Talmudic fashion.
Come up with questions or suggestions related to the issues of archives, publication, re-usability,
pedagogy, case-studies, etc. We've left some time on the agenda for more general discussion and
brainstorming about these issues.
A Note on Formatting. Some of the commentaries will cite parts of the transcripts by line number.
These citations refer to the full transcripts, rather than the abridged. Part of the vision thing here is to
have these documents all linked together, so that it is easier to find the context of a commentary, or to
move from a transcript to a commentary. We plan on using Rice's own homegrown Connexions project
(cnx.rice.edu). We ain't there yet, but we're close. For the time being, all of these materials are available
on the project web page, which is:
http://frazer.rice.edu/epit
username: epit
password: hallelu7ah
Dan Wallach: Interview 1
Abridged and annotated interview with Assistant Professor Dan Wallach of Rice University. The
interview was held on October 9, 2003. Key: DW=Dan Wallach, CK=Chris Kelty, TS=Tish
Stringer, AP=Anthony Potoczniak.
Dan Wallach is a young professor in the Computer Science department at Rice University. He is a security specialist, a field that seems to have come into its own only in the last 10 years (though the basic principles were set out much earlier[2]). He is fond of his hobbies, which include photography and swing dancing. He both seeks out attention and has been rewarded with it through his involvement in a series of high-profile events in the overlapping spheres of security, politics, the internet, and free speech. Many of these involve direct engagement with the law, lawyers, corporate legal counsel, or others. Dan's experience with United Media,[3] which threatened suit over his "inline" display of Dilbert comics, is an early example. No court case was filed. Perhaps more famously, Dan was involved, with Princeton Professor Edward Felten, in one of the first legal challenges to the 1998 Digital Millennium Copyright Act. At issue was the attempt by a consortium of music companies (the Secure Digital Music Initiative) to prevent Felten et al. from publishing a paper which explained the insecurity of a watermarking scheme (the SDMI had issued a challenge to break the scheme, which Felten et al. had done, but rather than stay mum and accept the small prize, they decided to publish the result). The suit was eventually dismissed, and the paper eventually published.[4] Dan's most recent exploits have involved the newly available electronic voting machines, and the company Diebold in particular—issues which we addressed in this interview.

[2] Saltzer and Schroeder, "The Protection of Information in Computer Systems," CACM 17(7) (July 1974). http://www.cs.Virginia.edu/~evans/cs551/saltzer.
[3] http://www.cs.rice.edu/~dwallach/dilbert/
[4] Scott A. Craver, Min Wu, Bede Liu, Adam Stubblefield, Ben Swartzlander, Dan S. Wallach, Drew Dean, and Edward W. Felten, "Reading Between the Lines: Lessons from the SDMI Challenge," 10th USENIX Security Symposium, Washington, D.C., August 2001. http://www.usenix.org/events/sec01/craver.pdf

Both the first and second interviews with Dan show him to be concerned not only with the issues but with the way in which they are stated. After the first transcription, Dan was horrified to see that we had dutifully transcribed all of his ums and ahhs, and he made a stoic effort to restrain himself thereafter. In this abridged transcript, we have removed such phatic keys, but have left the areas where we noted a concern with "watching what you say," which is how we started:
DW: Actually, in a couple of weeks I’m getting official media training from the Rice PR people.
CK: I’m supposed to go do that myself actually, I’ve heard it’s actually pretty intense.
DW: It starts at 8:30 and they’re gonna go all the way 'til noon.
CK: There’s a lotta words you’re not supposed to use, I’ve heard "content" is out… I’m not exactly sure
what you're supposed to say…
Perhaps due to his extensive involvement with lawyers, Dan is generally hyper-aware of the need to
watch what he says, so he asked at the outset:
DW: Right, One last thing before we get into it; there are some things that I could talk about but it might
be things that are not yet public information, so should I just should I censor myself to the extent that I
don’t tell you anything that I wouldn’t be embarrassed to see, or for the world to know about?
CK: Well, you should probably…I mean from our standpoint this is not going to go to anybody except us and you right now—so you don’t necessarily need to censor yourself, but you can, you should mark those things when you say them; that way when we give you the transcript you can say this is not public and it should be crossed off.
DW: Ok, there’s only a small number of things I can think of but I just need to know whether I have to
keep that, that guard up.
CK: Yeah, we’re not gonna publish this [transcript], it’s not going anywhere besides you, me, Tish,
Anthony, and three other people in the anthropology department who are part of this project.
DW: At some point it’ll be synthesized and you’ll be writing papers and all that?
CK: Right. [We'll give you a copy] you’ll have final say on it
DW: Ok, Ok
The interview questions we designed at the outset focused on a variety of political issues as well as "ethical issues" concerning definitions and practices of the security researcher, the advice he gives regarding privacy and security, and questions about funding and research. We started, however, with a broad biographical question, which in Dan's case yielded more than half of the first interview. Much of what we found most interesting about these stories dealt with issues of teaching, pedagogy, learning by doing, and growing up steeped in computer culture. In certain cases, as at the outset, Dan showed a penchant for something like "reverse ethnography"—a phenomenon common amongst hackers and geeks—in which he half-mockingly, half-earnestly offers "social" or "ethnological" explanations for his own behavior.
CK: So, maybe you can start by telling us a little bit about your biography, go as far back as you want.
Maybe tell us a little bit about how you came to be a computer science researcher, um whatever sort of
particular turning points, events, or people, might have influenced that,
DW: Ok, but now as anthropologists you know that whenever you’re taking a life history that you know
that your subject lies.
CK: Yes, yes, we’re expecting you to lie, we’re ready for it [laugh]
DW: Ok, so you’re
CK: But, you know what, that’s what’s interesting about anthropology is that the lies are just as
interesting as the truth.
Dan's life of high profile participation has a particular genesis.
DW: Ok, so I guess let’s rewind all the way back until like say the fourth or fifth grade and we’ll progress
rapidly, so my father is a computer engineer, he designed super computers and he was the chief architect
of the Data General "MV8000 Eclipse," and there was a book written called The Soul of a New Machine by Tracy Kidder; it won the Pulitzer Prize.
CK: Yes, I know it.
DW: and like one of the chapters is about my dad.
CK: Wow
Pedagogy, Tacit Knowledge, and informal learning
Dan grew up in Cupertino, "Apple's backyard," with plenty of access to computers. By the time he reached junior high, his father had moved the family to Dallas to start Convex Corporation.
DW: My dad would go to work on Saturdays and I’d go with him. I would go and use one of the
terminals in some random office that wasn’t being used—most people weren’t there on Saturday. Half
the time I’d just play computer games, but I was also starting to learn how to program for real. This was
BSD Unix on VAXs. And so I had experience [like] today what kids get with Linux I had equivalent
experience way back then. One of my early mentors and role models was the system administrator for
Convex, a fellow named Rob Kolstad. Rob (before he was at Convex) was involved in the, I think it’s
called the Pilot Project, which was at Illinois Urbana…that got a lot of kids involved in computing very
early. And Rob was one of the people who implemented one of the predecessor systems to Net News—
you know, back before there was Usenet, Pilot had some equivalents. So Rob basically taught me
programming in C and taught me a lot of what I learned about computers back then. I think I took a
computer class or whatever in junior high but that was like a toy compared to going on Saturdays to my
dad’s office and playing on the big machines. We still didn’t have a home computer…
CK: Yeah, you had a computer company.
DW: They had a laser printer, that was really cool. I could like typeset reports and hand them in to
school and back then you know, nobody else had a LASER printer.
CK: Sure, there's nothing like typesetting your reports for high school!
DW: Yeah, yeah, so I was using troff at the time, and learned a lot about using random Unix utilities, and
that helped me today with the sort of LaTeX things that I do. I’m quite comfortable working with these
things that I learned way the hell back when. Rob had an interesting teaching style, when I would have
some problem—say if I had a program I was working on that was similar to grep, that’d let you search for
text within files; whenever I’d walk into Rob’s office and say I’m having a problem with something, he’d
bring it up on his screen and he would show me how you could fix this or fix that and then he’d throw it
away. So, then I’d have to go do it myself.
CK: It was all about getting you to see the insight in it, not actually [doing it for you].
DW: But he wouldn’t actually do it for me, actually he would do it in front of me, but he wouldn’t give
me the result of doing it in front of me. So, he wouldn’t type everything out, he knew all these weird key
sequences and I’d interrupt him and say, "What did you just type?" So, that’s why I learned a lot of these
little nuanced key strokes that aren’t documented, but that vi would support. I learned them from Rob and
now, you know, I see my own students say, "What did you just type?" A sort of a cultural passing on
CK: Absolutely. Tacit knowledge, we call it. Lots of tacit knowledge.
DW: Sure, yeah. Tacit knowledge, picked up lots of that!
CK: There you go, spoken like an anthropologist.
DW: Yeah. Yeah, that’s it. I’m not comfortable with your vocabulary, but I’ll roll with it.
Dan's childhood steeped in computers and programming did not determine his life aims, however. He relates how photography became a fascination in high school and college.
DW: I was Mr. Yearbook photographer, you know, I was inseparable from my camera, it was with me
everywhere I went, travel, whatever, I always had my camera, and I had one of these honking side flash
things so that way, you know, I was I was prepared. Even though when I was a senior I was in charge of
a staff of like 10 photographers, probably a 1/3 of the yearbook pictures are me. That’s maybe an
exaggeration. I’m lying. But, it was, it was clear that I was doing a lot more than my share of the work.
But that was Ok, ‘cause I loved it. So when it was time to apply to college, I actually was faced with sort
of a hard decision. If I want to be a photographer or a computer nerd… I spent one summer at Santa
Barbara, one summer at Stanford, and had taken photo classes, in summer ’88 at Stanford, my photo
instructor was a former lab assistant of Ansel Adams. So I learned a lot of "tacit knowledge" about
photography… All sorts of, I mean things that today you just do in Photoshop. But doing it in a
darkroom with wet chemistry is a lot harder. And I learned a lot of that, I was really enjoying it. And,
you know, I learned that at Berkeley—the only way at Berkeley with 30,000 students, a huge campus, the
only way you could take a photography class is if you are majoring in Environmental Design. Not even
art majors could take photo classes. It was you know the sort of thing that Rice kids don’t understand
how well they have it.
CK: Right
DW: So, I sort of realized that computers were where it was gonna be, and you know, computer science, I
certainly enjoyed it and was good at it, so it was a perfectly sensible thing for me to major in, that’s what I
was gonna do. And you know I didn’t take all that many, I’ve probably taken more pictures in the last
year than I did as an undergrad.
Dan decided to major in Computer Science at Berkeley. The researchers here note that one of the interviewers (CK) uses this chance to participate in the discourse of legitimation through heroic undergraduate undertakings.
DW: So as an undergrad I took all the computer science courses. I remember my freshman CS course,
CS 60A, I believe was taught by a guy named Brian Harvey. When he first walked out, we thought he
was the janitor and we were waiting for the real instructor to show up. He was this guy in a tee shirt,
walking… this overweight guy lumbering out and we thought, "Ok, well where’s the instructor?" "Ok, he
is the instructor," and he was a really fantastic instructor. I mean, he was one of these people, you know,
every every department has somebody who is not a CS researcher, more like a CS education researcher
studying [teaching]. So he taught this course just amazingly well. And it was based on MIT 6.001.
Famously one of the hardest CS curriculums, and one of the hardest classes in that curriculum. The MIT
joke is that it’s like drinking from a fire hose.
CK: Right. I took that one, from Rodney Brooks actually.
DW: Yeah, so I took 6.001 in Berkeley. And it’s amazing how much MIT culture pervades—the fact that
MIT course numbers were meaningful outside of MIT.
CK: Yeah, that’s really kind of sick, actually.
DW: Yeah…. that you would know what I’m talking about. So, I remember taking great pride in the fact
that I only lost 3 out of 350 points total in the entire semester.
CK: Um hm [Aside: Just for the record, CK did not offer that he too nearly aced his 6.001 course.
Modesty is a virtue in anthropology].
DW: So back then, I was sort of… I rocked. Computer stuff was easy. I was trying to minor in physics,
‘cause I had a really inspirational high school physics teacher who I loved. And it took me five semesters
of physics at Berkeley to finally break my back. Somewhere along the way, where it was like tensors of
inertia, and you know, I mean I survived quantum mechanics which was an accomplishment, but upper
division mechanics just broke me. I just couldn’t deal with it anymore. And at that point I decided Ok,
I’m just a CS major, I went off and took other courses for fun. I thought it was decadent, but I went off
and took a class in Greek tragedy.
Dan proceeded to garner the successful connections that come with majoring in computer science and
having a computer engineer father—working for a startup firm, interning at NASA, developing Repetitive
Strain Injury "I had totally trashed my wrists from too much typing and everything else. You can pan
your camera over here and see, now I’ve got a shiny keyboard, and I’ve got a fancy mouse, and I’ve got,
you know, notice how there’s no arm rests on this chair, I forcibly removed them, they’re sitting in a pile
in a corner somewhere." Eventually he began to think more carefully about a life in research, about the
need for mentors and letters of recommendation, and he chose a group to work with.
DW: So, early on one of my academic advisors had tried to talk me into working for him in his research
group. This guy did real-time multimedia, video streaming stuff. So, I went up to him and said, "Hey
Larry (Larry Rowe), can I join your group?" He said, "Sure." So, I ended up implementing two things
for them. They had this video streaming thing that would move uncompressed video off of one computer
on to another and display it and they had this complicated video disk thing so they could grab video into
the machine. So I wrote the audio re-sampling support, so that as you played it [makes noises of pitch
changing], the audio would speed up or slow down. And then I wrote the first half of what eventually
became the Berkeley MPEG encoder. So, about once a year I get an email from somebody who notices
my name buried in the comments somewhere asking me some weird tech question and I’m like, "I don’t
know, I haven’t touched that code for ten years!"
CK: Yeah. So it sounds like this was a kind of positive environment, you could just walk up to someone
and say, "Can I be in your group?"
DW: Oh, yeah, now that I’m a professor I understand this better. When the talented student says, "I want
to give you free labor." The answer is, "Yes, right, let me introduce you to my group." And, among
other things, Larry taught me two very good lessons. One lesson was, he drew this little curve on the
board. He said, "This is some research topic. Down here nobody knows about it yet. And then
everybody’s publishing and learning a lot about it, and then you get up to the place where it smoothes out,
and there’s not—now you’re just doing incremental things. But the excitement is over. And the trick is
to get on down here and get off up here." And as much as you can ride these curves, that’s the ticket to
academic success. So that was a valuable lesson I got from Larry. And probably the most valuable
lesson I got from one of his grad students, Brian Smith, when I was discussing this whole grad school
thing with Brian, at one point he said, "You just have to have blinders. You look at the thing in front of
you and you do it. Ok, I need to apply to grad school. Ok, I need to choose a grad school. Ok, I need to,
but if you try to look at the whole pipeline all the way through to becoming a professor, and beyond,
you’ll go crazy."
CK: He didn’t mean you shouldn’t think of other things related to it, just that you shouldn’t think about
[the whole]
DW: You should, you should look at solving the problem that’s in front of your face. And you shouldn’t
think about three steps down the road. ‘Cause, you’ll go crazy. It’s easier just to focus on the immediate
problem. That was a really valuable piece of advice. ‘Cause it’s so easy to get lost in the totality of trying
to project your career all the way to your retirement. And it’s just, it’s better just to not worry about it and
instead focus on the here and now. So that was a very valuable piece of advice from Brian. I’ll get back
to him later when I’m becoming a professor. Brian is a fountain of wisdom. So, I had this experience
working in a real research group. And, I was enjoying it. You know, just the stimulation of, like at one
point Brian and I were having lunch and he said So let me explain how JPEG works. And he whips out a
piece of paper and starts drawing equations and explaining it. And you should go read this CACM article
and whatever. Just the amount of, the being taught stuff one on one like that is so valuable. It’s much
better than anything in class. And so I learned a lot. Working for Larry for the year.
Dan's work on the MPEG encoder allowed him to produce a paper that was published very early on (research he conducted as an undergraduate), something not unusual for computer science grad students. Another pedagogical story is contained in the following excerpt:
DW: And so, I dragged my, my advisor, Michael Cohen, was Mr. Radiosity, global illumination, that sort
of thing. And here I was, dragging him into the world of digital video, which he knew nothing about. But
he sort of came along for the ride. I remember when I had the first complete draft of my paper, he said
"let me take a spin at it." And, magic happened, and suddenly it felt like a SIGGRAPH paper. It said all
the same things, but it sounded better. I don’t know how he… it’s like he waved a magic wand and it
looked, it felt, like a research paper. That’s a skill that I realize that now I have to do with my students.
You know, I’m as much a copy editor, as a cheerleader, as a research guidance system, as everything else,
I have to do it all. That was my first introduction to you know making good writing [sic].
CK: Right.
Finding Bugs: The story of Dan's "first ever ethical dilemma"
Dan continued his summer internships and his informal mode of learning throughout grad school. But by the second year of grad school, he was looking for a project. Princeton had lost all of its graphics people, so his interest in graphics had no group or outlet, and no faculty expertise to guide him.
DW: Ok, summer of ’95, recall, Netscape 1.0 was barnstorming all over the place. Hot Java had just
come out and for the first time web-pages could MOVE and DO THINGS. Netscape had announced they
were gonna license Java. And, oh, by the way, Microsoft shipped Windows 95, it was a very busy
summer. A lot happened then. So in fall of ’95, I’m back in grad school, I’m TA-ing the graphics course
and I’m not real happy with it ‘cause the woman they have teaching it—I think I could do a better job
than she was doing, you know, I was sort of, I was a grumbly grad student. And so I was having coffee
one evening with another grad student, Drew Dean. And we were discussing this new Java thing. And the
discussion turned to the question "So do you suppose it’s secure like they say it is?" More context: mere
months earlier, Dave Wagner and Ian Goldberg[5] had published this huge security flaw that they had found
in the Netscape SSL random number generator, and they’d gotten a huge amount of press from finding
bugs in Netscape. So this is in the back of our heads, and we’re sort of talking about possible security
issues and Java. And so, "well, what the heck? Let’s go have a look."
CK: Was this something that you talked about regularly? Security issues?
DW: Oh, we just talked, [Drew and I were friends]
CK: [Or was this like any old conversation]
DW: [We talked about everything] Y’know, this is back in the days before Slashdot, you actually
discussed things with your friends. So, you know, one of the topics of the day was this new Java thing,
and Drew and I got along well so that’s what we happened to be talking about that night. Sitting there in
the café that night, we filled up two sheets of paper with hypotheses of things that may or may not be
broken. The next day we downloaded all the Java code and started having a look. In the space of two
weeks we found a whole bunch of interesting bugs and problems. So Sun is saying "Its great, it’s secure,
it’s wonderful," and what we’d found was that it was really horribly broken.
CK: Um hm.
DW: So, "Hey Drew, let’s write a paper." "No, no I’m really busy." "Hey, Drew, no really, let’s write a
paper." And turns out that sometime soon, there was a deadline for IEEE Security and Privacy. "Sounds
like a security conference. We could send a paper there." "Oh, fine." So, we didn’t sleep much for the
next week.
Dan and Drew's decision to write a paper represents one of the most important aspects of his graduate career. It was both an essential career move and, as he explains, his "first ever ethical dilemma."
DW: We didn’t know what papers to cite. We just went to the library and sort of found the shelf of
security proceedings and read all of it. Well, skimmed. In hindsight, both of us sort of occasionally
remark to each other now that when we look back on it, we wrote a pretty damn good paper, given that
we didn’t know anything. The conference was blind review, we submitted this paper with our names
removed. "Now what do we do?" So, first ever ethical dilemma. We have a result, we have to make
some decisions. Like, we have some attack code, do we want to release it to the world? Probably not.
We have a paper that describes some attacks. Do we wanna release that to the world? Yeah. So, we
announced it on a mailing list called the Risks Digest[6] and on the comp.risks newsgroup. So we said,
"we found some bugs, Abstract, URL." And then, as we’re reading the web logs, we’re getting hits from
all sorts of places you never see in your web logs. You know, "what’s that?" Oh, Airforce Open Source
Intelligence, "what the hell’s that?" You never see that when you read web logs. Well, you know, all
sorts of really unusual people were coming and downloading our paper.
CK: So you’d actually written code that would exploit these security flaws?
DW: Yes.
CK: That was part of writing the paper?
DW: Yes, but we kept that all private.
CK: But it was published in the paper?
DW: The paper said, "you can do this, this, and this." But the paper didn’t have code that you could just type in and run. And we never, to this date, we never released any of our actual attack applets. I mean, we would say what the flaws were well enough that anybody with a clue could go verify it. But we were concerned with giving a, you know, "press a button to attack a computer"
CK: [Right] Press here to exploit flaws.
DW: Yeah, "would you like to bomb Moscow?" We didn’t want to make it quite that easy for somebody else. So that was the decision that we made at the time. Which in hindsight was actually a reasonable decision. We tried to find people inside Sun to let them know about all this. But what? File a bug report? We didn’t know who to talk to, we didn’t know anything. So, we, we tried to file a bug report but nothing really came of it. So, we’re like, "screw it!" So, we announced the paper. Then the right people at Sun found us.
CK: Uh huh. Magically. Surprise.

[5] Wagner and Goldberg, "Randomness and the Netscape Browser: How Secure Is the World Wide Web?" Dr. Dobb's Journal, January 1996. http://www.ddj.com/documents/s=965/ddj9601h/
[6] http://catless.ncl.ac.uk/Risks/17.43.html#subj8
Dan, like many computer scientists, enjoys pointing out the man behind the curtain. He enjoys it when he can find a way to show up any prideful individual or company, and he had chosen a pretty big target.
DW: Yeah, we, we got their attention. It was interesting, Bill Joy, who recently retired from Sun, sent us
this long detailed, point by point email—just to us—rebutting our paper. Point by point, explaining why
we were wrong. And he was completely off base.
CK: And he wrote Java. Well, in addition to…
DW: He didn’t write Java. That’s revisionist history. [James] Gosling did all the real work on Java.
CK: And Steele, or…
DW: Yeah, Steele came along much later. Gosling started it. Gosling and Arthur Van Hoff and a cast of
other characters like that. Gosling worked in Joy’s research lab so, you know, Joy gets more credit than
he deserves for being a father of Java. No, Gosling, anyway, that’s a separate...
CK: But he wrote the letter, not Gosling.
DW: Yes. And his letter implied that he didn’t know anything about what he was talking about. His
letter demonstrated that he was completely clueless. So, we’re like, "Hm, that was weird."
CK: So was that worrying in the sense that (this decision to release the code or not) the people that should
know at Sun seemed to not understand what you thought anybody with a clue should be able to
understand?
DW: So, our opinion of Sun soured very quickly. Because the person who was supposedly in charge of
fixing all these things—she put up Sun’s Java security FAQ, wrote a bunch of things like that, but she
was also sort of in denial mode
Others took Dan seriously, however, such as Netscape. Netscape had by this point developed a "Bugs Bounty" program in which they would pay $1000 to anybody who found a serious security flaw in the Java implementation in Netscape. Dan and Drew were awarded $1000. The story of how Sun finally "got a clue" is an important part of Dan's understanding of his "first ever ethical dilemma." But meanwhile, the significance of the experience of exploring the Java source code, discovering flaws, and getting a paper accepted to a conference on a topic neither of them had touched before was starting to dawn on them.
DW: So, we published this paper, we submit a paper, we release it online, some fireworks happened, but
not much. We didn’t end up in the New York Times, we got like USA Today. So, I went back to TA-ing
the graphics course, Drew went back to working on benchmarking standard ML or whatever the hell he
was doing. And, over the course of the next month or two, it gradually dawned on both of us: why are
we trying to do blah when we could be doing security? This is clearly a hot topic. So, I approached Ed
Felten, who at the time was doing distributed systems, and I said, "Hey Ed, I know there’s no security
people here, but you’re like the closest thing and I’ve been doing this security stuff, how’d you like to be
my advisor while I’m working on security?" Then, he thought about it for all of a second or two, and he
said, "Sure, I’ll be your advisor." At the time I thought I was the luckiest guy in the world. And again,
looking back, I can see that when somebody who’s self-motivated and is gonna do their own thing
anyway shows up and wants to share the credit with you and you just help them along, you say "Sure!
Sign me up!" A lesson I’ve since used myself as a professor, I’ve allowed myself to be dragged all over
the map, because other people have good instincts too, and I run with them. Although, one of the things I
have to learn is that people don’t always have good instincts, and, and part of my job is to apply good
taste. These are skills I’ve had to develop more recently. But I digress. But you want my, you want me to
digress, right?
CK: Well, if you feel like digressing.
The IEEE security conference that year (May 1996) took place in Oakland. Sun was headquartered in the Bay Area.
DW: So, the time finally came to present the paper. Meanwhile we had found more flaws and had
amended the paper, and finally, we’re presenting the paper. Some people had arranged a little small
workshop after the conference hosted at Sun. So after the conference we headed down to Sun’s campus,
and Ed, Drew and I got to meet up with a whole bunch of big names in computer security that I hadn’t
known before. And so suddenly all these big names were all in a room talking about our work. Which
was an interesting thing, suddenly I got to see who I thought was cool and who I thought was clueless.
One of the things that hit me after that was that I really liked this guy at Netscape, Jim Roskind. So, I got
back to Princeton and we’re emailing him saying, Hey, Jim, looking for a summer internship, how ‘bout
Netscape? Couple of minutes later the phone rings. So, he talked to me for like an hour. And next thing
I know I’m flying out to California again for an interview. And it wasn’t so much a job interview as it
was like a debriefing. Or it was just that all of these people wanted to chat with me. So, it was like this
day long interview that was in some sense grueling but in some sense they weren’t trying to figure out if I
was qualified, they all were just, like, lining up to chat with me. And so I got to be an intern at Netscape
in the summer of ’96. That was back when Netscape was riding high and Microsoft was sort of a joke in
terms of web browsing.
CK: Can I ask you just to back up here, In terms of uncovering all of these flaws in Java, did you have a
sense at the time that you were, that Java just had a lot of shortcomings and they were easy to find and no
one was looking for them? Or that you actually knew how to look for things that other people weren’t
looking for?
DW: More the former. But all the flaws were there. Some were subtle, some were really obvious. Just
nobody had bothered to look. And we were the first ones out the gate.
CK: And so in that sense, was Netscape reacting to the fact that you were the first one, or did they
actually see you as someone who knew how to look for [flaws]?
DW: I mean, clearly we demonstrated that we were capable of finding security flaws. And, we’d also
demonstrated that we were somewhat responsible in how we dealt with them. I mean today, the notion of
how you should contact a vendor, I mean Microsoft has a formal procedure for how you can tell them
about a security flaw. In ’96, nobody had anything like that. A lot of this is congealed out of… there’s a standard practice that we have today in 2003 that didn’t exist in 1996.
Formal procedures for finding, announcing, or reporting security flaws in widely distributed software did not really exist prior to about this time, according to Dan (it may be an open question whether there were more formal procedures among some companies than among others, or between the corporate and the university worlds—such as with Free Software projects). However, for a brief period coinciding with the rise of the dot-com economy, there was in fact a kind of uncertain experimentation with and testing of procedures for dealing with security flaws. By 2003, such procedures had become not only formal but highly political—as in the case of Microsoft's practice of once-a-month security updates, and the arguments about the relative security of freely available source code.
CK: Right, right. So, back to Netscape, sorry.
DW: No, this is all germane, so anyway, so at Netscape I had the chance to be on the receiving end of
some, of security bugs. There was a kid at Oxford, David Hopwood, who for a while was living off bugs
bounties. He just kept finding one flaw after another. Some of them were just really ingenious.
CK: That’s great!
DW: And he found out I was an intern at Netscape and said, "Shit, I can do that too." The next thing you
know we had David Hopwood working there as well. So I saw that some of my Princeton colleagues
kept finding bugs and they got bugs bounties and of course I didn’t because I was employed at Netscape.
It was all a good time. You know, I had a chance to see these bug fire drills for how Netscape would
respond to bugs and get patched versions out within two days or one day or whatever. Now I was on the
designing end, now I was working on how you might do signed Java applets, how you might give more
privileges, and trying to do some amount of reengineering to Java to reduce its vulnerability to security
holes. And so I had a chance to develop some ideas that Jim Roskind and myself and another guy Raman
Tenneti cooked up that evolved into what today is called "stack inspection," it's a security architecture
used in Java, used in C#, and programming language theory people in 1996 said "this is a stupid hack,
what are you doing?" But today, eight years later? Seven years later, a whole bunch of programming
theory people are finally writing papers on it and analyzing and studying it, and I’m being asked to review all these papers that are follow-ons to work I did seven years ago. So it’s delayed gratification.
That we really were on to something…
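For readers unfamiliar with the term, the following is a minimal, hypothetical sketch of the stack-inspection idea in Python (a simplified model for illustration only, not the actual Java or C# machinery Dan describes; the frame names and permission labels are invented):

    # A toy model of "stack inspection" (illustration only; not the real Java/C# implementation).
    # Each stack frame records which permissions were granted to the code that created it.
    # A sensitive operation is allowed only if every frame, from the most recent one down to
    # the first "privileged" frame, holds the required permission.

    class Frame:
        def __init__(self, owner, permissions, privileged=False):
            self.owner = owner                    # e.g., "browser core" or "downloaded applet"
            self.permissions = set(permissions)   # permissions granted to that code
            self.privileged = privileged          # True if this frame asserted privilege

    def check_permission(stack, needed):
        """Walk the call stack from the most recent frame downward."""
        for frame in reversed(stack):
            if needed not in frame.permissions:
                raise PermissionError(f"{frame.owner} lacks {needed!r}")
            if frame.privileged:
                return  # a trusted frame takes responsibility; stop checking older frames
        # Reached the bottom of the stack: every caller held the permission, so allow it.

    # Trusted browser code reading a file on its own behalf: the check passes silently.
    trusted_stack = [
        Frame("browser core", {"read:/tmp"}, privileged=True),
        Frame("file library", {"read:/tmp"}),
    ]
    check_permission(trusted_stack, "read:/tmp")

    # The same request made while an untrusted applet is on the stack: the check fails.
    untrusted_stack = [
        Frame("browser core", {"read:/tmp"}),
        Frame("downloaded applet", set()),   # no permissions granted to downloaded code
        Frame("file library", {"read:/tmp"}),
    ]
    try:
        check_permission(untrusted_stack, "read:/tmp")
    except PermissionError as error:
        print("denied:", error)

The essential point of the technique is that permission to perform a sensitive operation depends not just on the code currently executing but on everyone on the call stack, which is how trusted browser code and untrusted downloaded code can coexist in one runtime.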
Dan's work at Netscape eventually led to part of his PhD research, and represented, for him, a good example of successful industry/academic collaboration. Dan's sense that he had been at the beginning of something leads to his next, and very significant, ethical explanation.
DW: Ok, so now there’s this idea that grad students [could find bugs]. Wagner and Goldberg started it,
Drew and I kept it going… there was an idea now that grad students, anywhere, who don’t know
anything, can find bugs and get famous for it. Cool! It was even possible to write special programs that
would allow you to find bugs by comparing the official and unofficial versions. So I could come up with
a long list of bugs and then take them to Netscape and say: I have a long list of bugs, give me a stack of
money or I’ll go public! But that’s called blackmail. The reaction from Netscape probably would not be
friendly. In fact, you would likely end up with nothing in the end.
CK: Um hm.
DW: Whereas, you know, we had given our stuff away—but we’ll be happy to tell you about it, we would
tell Sun. We had sort of informally agreed with Sun, "We will give you a couple days warning before we
publicize our next flaw."
CK: Um hm.
DW: So, we’ll tell you, but the clock is gonna be ticking.
CK: Um hm.
DW: And we’re not just gonna sit on it until, until you’re ready. That was sort of an informal deal
between us and Sun. These days (in 2003), it’s considered better to wait for the vendor to ship a fix, but a
lot of it depends on how hard it is to exploit. If it’s something like, say, the most recent Windows RPC
vulnerability, so that 25 days after Microsoft shipped the patch there was the MS Blaster worm that
caused a huge to-do—that’s a vulnerability that allowed for some very serious, allowed for the Blaster
worm to happen. So, because of the seriousness, it makes sense that if you find something that has, that
can be exploited in that fashion, that you don’t want to announce it to the world, at least not in a fashion
that a skilled person could reproduce it, until the vendors have a chance to fix it. On the flip side, the Java
stuff that we were finding, yeah there were security holes, but you would have to trick me into surfing to a
website that you controlled before you could hijack my browser from me using our attacks. So, the
severity of what we were finding wasn’t as severe as some of these other things, so... there’s an ethical balance, and I think we struck the right balance at the time. Especially given that neither we nor Sun knew
what the hell we were doing. And, interestingly, one of the questions you might ask yourself is, could we
have done better if we had say tried to commercialize or blackmail or what have you? And I think the
conclusion is that no, in the end it was better for us financially to give our stuff away free. ‘Cause it
created a huge amount of good will; it gave us a reputation that we didn’t have before, so reputations are
valuable. It’s probably part of why I have this job now, I developed a reputation. Also, a vice president at
Sun, Eric Schmidt, who’s now the president of Google, had a slush fund. He said, You guys are cool,
here, thud, and a $17,000 Sun Ultrasparc just arrived one day for us. So, he sent us a very expensive
computer gratis because he thought our work was cool. Y’know, that was, that was a really valuable
experience.
Dan's explanation of the ethical issues (an explanation which condenses a number of them) relates to a more common and widespread discussion conducted amongst geeks and hackers, one which, in various cases, draws on some notions familiar to anthropology.
CK: Did you experience that as a kind of quid pro quo, like if you keep doing it this way, that we have a
relationship? That kind of a statement
DW: Quid pro quo is
CK: Obviously not an explicit one, but
DW: I think a better way to think about it is as like a gift economy. We give them a gift, they gave us a
gift.
CK: That’s anthropology
DW: Gift economy’s an anthro term?
CK: Oh, yeah.
DW: So, yeah, we, we …
CK: You can interview me about that later...
DW: Yay. So I don’t have to tell you how a gift economy works. Sun helped reinforce in us that we were
doing the right thing by giving us a nice toy. And we were also developing a good reputation. It was
clear that we had a good formula going for how to publicize bugs, how long to embargo them. And we
never, ever released code to anybody except the vendor. So, to Sun or Netscape we’d say, "Here is the
source code that exercises the exploit. If you don’t believe us that it works, here try it." Visit this
webpage and file by that name will be deleted from your hard drive. You know, I believe we used
\tmp\javasafe.not. And you’d observe the file would disappear, and the only way that that could
happen… So, for Sun, we would give them everything.
Dan's experience with finding security flaws was thus both ethical and practical. The reputation could
lead to better jobs, but it also helped him develop a kind of methodology.
DW: If you find a serious flaw in a widely used program, then, you know you have the, some amount of
fame, from, y’know, your name being associated with having found it. And y’know I also realized that
there’s a whole research pipeline here. Find a bug, write a paper about why you, why the bug was there,
fix the bug, write a paper about how you fixed the bug, repeat. This is a research paradigm. This, you
can build a career doing this.
Definitions of Security
"Who is going to decide what to say about whom?" We inverted this question from Dan's own definition
of security "Who wants to learn What about Whom?" in order to try to find out more specifically what
"security" means in the world of CS research.
DW: I don’t know exactly where I wrote it but I can see myself saying that.
CK: Right. So, we were wondering if we changed this slightly and said we were interested in rather than
what the first one said, what if we said, "Who is going to decide What to say about Whom?" Which in
some ways brings you into the picture. Right, since you have to decide whether or not you’re going to
say something about the things that you reveal. Is that still security?
DW: Well, that’s sort of the ethical issue about discussing security.
CK: But that’s different than "Who wants to know What about Whom?"
DW: Yeah. Security itself is confidentiality, integrity, or denial of service. You know, confidentiality is your, "do your secrets stay secret? Do only the people who you want to learn something learn it?" Integrity is, "is your data safe from being modified? Are only the people who are allowed to change something
able to change something?" Denial of service is "can you prevent people from making the whole thing
unusable?" So, that’s like a more formal definition of security, it’s a combination of confidentiality,
integrity, and protection against denial of service. But, the ethical issue of doing security research is
choosing who learns about what, when, and in how much detail. I mean, that’s the ethical dilemma. It’s
no big deal to find a flaw, but who do you tell about it, how do you exploit the knowledge that you have
about it?
CK: But that ethical dilemma is not also a security dilemma?
DW: Well, ethics of security—is that security? Whatever, I don’t know. It’s something that security people certainly worry about. Is that security per se?
CK: I’m just thinking if you put yourself in the position of Microsoft or Netscape or Sun or whatever,
part of their issue of security is whether or not you (Dan Wallach) are going to release that information
about them. From their perspective your ethical decision is a security risk for them.
DW: Yes, Yes, I agree with that. I'm trying to see if I can restate it in my own words...So, certainly, from
the perspective of somebody being attacked, the ethical behavior of the person who has—attacked is not
the right word. From a vendor’s perspective, ethical behavior is different… the ethical behavior of the
person who finds the fault with their product—the vendor might have a different idea of what’s the best
behavior. I won’t say what’s most ethical ‘cause I’m not sure what that means. But certainly from
Microsoft’s perspective, the less the world finds out about it, the better.
CK: Right. And they can call that security, I suppose, they might refer to that as the security of their
product, while security researchers refer to it as security by obscurity, right?
DW: Absolutely. So, from a vendor’s perspective, the optimal outcome is that it gets quietly fixed and
nobody knows about it. Whereas, from the—I don’t want to use the term hacker because that means
different things to different people, so why don’t I just say: from the security researcher’s perspective,
you have different incentives. One, as a researcher I have the publish or perish incentive going. But also
as the service to the community, it benefits the community for me to make the community know about the
flaw. Because then that means that the community will understand the importance of applying the fix.
Right now Microsoft, once a week, they have “an important critical Windows security vulnerability fix-patch-mumble” and they never tell you anything about whether, about what, how it works or how widely
vulnerable that it is or anything. There’s a lot of information Microsoft doesn’t give you. I mean, they
don’t give you a lot of information almost to a fault…
Dan's sense is that revealing information is a tool for encouraging people to pay attention to a flaw, and for encouraging a vendor to fix it. But his understanding of the meaning of this symbolic act is limited to that functional aspect of urging a vendor to fix something, as is evident in this exchange.
DW: So, we felt at the time and a lot of people certainly agreed that the only way you can persuade some
vendors to fix the problem is to shine the bright light of day on the problem and then that forces the
vendor to come up with a solution. The vendor can’t just camp on it.
TS: What did you think about the fact that the Blaster worm actually had a message in it about "fix your software"?
DW: That was somebody’s idea of a joke, I think. You know, "Hey, Billy Gates, fix your damn
software," whatever. I mean the Blaster worm didn’t actually cause much damage, I mean yeah, it
gunked up the works, but the Blaster worm didn’t delete any of your files, it didn’t damage anything that
couldn’t be cleaned up. So that tells you that whoever wrote it has… I mean, they would disagree with
me ethically. My ethics say, you find this vulnerability, you don’t release it into the wild. To me it’s
perfectly ethical to attack my own computer. I own it. When you do an experiment on mice you have to
like, get it signed off. If you do experiments on people you have to go through all manner of whatever,
but luckily computers don’t have rights, at least not yet. So I can torture and mutilate my machine to my
heart’s content: it’s my computer. And once I prove that I can do the attack, then, that’s it. Whereas
some other people believe that the way to influence change is go from a hypothetical problem to a real
honest-to-god problem that the whole world has to deal with. And clearly whoever wrote the Blaster
worm had that in mind. To them, it was ethical to do because they were gonna effect change. Or y’know,
maybe they were one of these Linux activists, or had some activist bent in them, some anti-Microsoft
activism. And they felt that by reducing people’s perception of the quality of Windows, they might be
able to shift market share. I wouldn’t put it past some activists to wrap that kind of logic through their
brain.
CK: Um hm
DW: Needless to say, I don’t agree with that. I think it’s the wrong way to go.
TS: But do you think in both cases, people are coming from an ethical position in which they think
they’re doing the right thing?
DW: Oh, I’m certain that whoever the hacker was who did Blaster thought that their actions were
perfectly ethical. In their own warped worldview. But they were helping some other cause by damaging
people who were using Windows. And this is the same kind of logic as "only bad people get AIDS."
Replace AIDS with Blaster worm and bad people with Windows machines and it’s the same kind of
faulty logic. You know, you shouldn’t have to inflict pain on hundreds of millions of computer users just
to prove a point.
Being able to "think like a really good bad guy" is part of the security researcher's repertoire, and Dan agrees that this is part of the procedure. His understanding, however, of who or what constitutes good and bad has been affected by his experience with these high-profile cases.
DW: The words good and bad… in recent work I’ve been doing on voting security, we actually are
regularly tongue-tied about the words “good” and “bad.” Oh, we just found this flaw, "that’s great!"
Wait, "No! That’s terrible!" Y’know those two words “good” and “bad” get really fuzzy. And it’s not
clear. The meaning of them becomes deeply ambiguous because you're wearing a, you're chan-, you're
wearing a black hat, you’re wearing a white hat, so, I’ve been, I don’t have a better vocabulary
replacement for good and bad. So, you know, you can say "That’s a good attack," "this is bad for people
who use the system." You have to start qualifying everything you say, which makes your speech get very
awkward and formalized, which kind of sucks. ‘Cause, I like to speak casually, and colloquially, and I’m
forced to be specific about my speech, which makes me slow down, to watch my words, this is what--I’m
sure--when I go to the "how to talk to the press training" they’ll beat me up on this mercilessly. I do
much better with print reporters than video. ‘Cause the print reporters will get rid of all the uh’s and um’s
and it all comes out sounding much better.
We asked Dan about the relationship between computer security and other kinds of security. He offered us a long and detailed story of a computer security researcher, Matt Blaze, who chose to study master-keyed locks. He explained how they work, and how Matt Blaze's decision to publish conflicted with a very different ethic amongst locksmiths:
DW: Anyway so if this was a computer security paper, Matt Blaze would simply publish it and say
there’s a flaw with this lock and everybody would nod and say that’s the end of it. But when he
published this paper, the real locksmiths freaked out. ‘Cause, they’d known about this for years, but it
was like, it was sort of a, you had to be in the fraternity of locksmiths…
TS: A guild secret
DW: it’s a guild secret. Yeah, it’s exactly a guild secret. And now that the secret’s out, bad guys could
take advantage of it. And Matt took a lot of heat for, for having publicized a guild secret. I like the term.
Although Matt also dug up some writings from a famous locksmith from the nineteenth century, saying
that of course the bad guys will know exactly how these locks work. So the good guys should feel free to
discuss it ‘cause the bad guys sure are. So just because it’s a guild secret doesn’t mean that the bad guys
don’t also know the guild secret. For all you know there could be a bad guy working for the guild.
CK: So, it seems that these are sort of different understandings of why other people are or aren't ethical.
The people that keep the guild secrets assume that only bad people with bad ethics break locks. Whereas
the computer security researchers seem to publicize these things and put them out in the open because
they think anybody might be driven to do that.
DW: Yeah, and they think, "We need locks that, we wanna have locks that work, despite the fact that
everybody knows how they work."
CK: Because anybody could be bad… it’s a little bit more paranoid, it sounds like.
DW: Or that, well, there are bad people out there, and bad people can learn things. It’s hard to imagine
that a bad person couldn’t go disassemble a bunch of locks, and figure it out by themselves. And who’s
to say that the mafia doesn’t know this same secret already. But it’s good for the people who run Rice
University to know that their master key locks are vulnerable. And then they can make a decision about
whether they want to re-key the locks in some different fashion. That’s a policy decision that the
locksmith guild was taking out of the hands of their customers. Y’know, Rice should make the decision
about whether it’s willing to live with that vulnerability, not the locksmith guild.
CK: So you think there's a parallel there with computer vendors, then. That some vendors are making the decision for their customers…
DW: Right. And so my ethical position is that there should be full disclosure between the vendor and the
customer. Of course vendors don’t like that idea very much. Because vendors sell GOOD things, not
BROKEN things. And so my ethical position is that there should be full disclosure. You should know
what you’re getting into when you install a piece of software. Or when you buy a particular brand of lock
or what have you.
Privacy, Policy and Security
One of the services Dan tries to provide via his web site is a Privacy FAQ, which details some of the more complicated issues in the relationship between employers and employees: what people can do at work and what employers can learn about it. The concern about who is learning what from whom seems to be a similar case, and we asked Dan whether he was suggesting that consumers need to be experts to protect their privacy.
CK: In the privacy FAQ you say, "if you have things that you’d rather that your employer didn’t know
about, you probably shouldn’t do them at work." And there’s another line there that’s sort of, "many
employers consider the fact that you’re using their computers and their connections to imply that they
have a right to say how you can use them." And in some ways part of our discussion when we were
talking about this turned on the fact that if you don’t want people to know about these things or if you
want to be secure, as an individual, you need to either be an expert, you either need to have the kind of
technical skill or technical expertise to know what you’re doing, or be subject to other people’s expertise.
Do you think that that kind of distinction is relevant to security research? That, that people are in this
position of having either to be really smart people who know what they’re doing or, you know, having
their choices, their rights, sort of curtailed because they give that right to someone else, whether it’s their
employer or [someone else]?
DW: Kind of. I mean what you’re saying certainly sounds reasonable. I mean clearly if you don’t
understand what’s going on under the hood of your car, then you’re at the mercy of the mechanic, who
might invent the name of a contraption that’s broken and charge you a ridiculous price to fix it.
CK: Right
DW: And to some extent, the more educated a consumer you are, if you are aware of the names of most
of the parts of a car, and if you’re aware that your transmission is not supposed to fail after 10,000
miles… you know, knowledge is power. And so, I wouldn’t say that consumers need to be experts. But
knowledge is power and they need to be aware that the Internet connection from this computer out to the
Internet runs through a bunch of gear owned by Rice. And if Rice had a corporate policy that says, "We
can read our employees' email," I don't know if they do or they don't…
CK: See, now there's an interesting question 'cause you've already advised people not to do things at work that they don't want their employer to know about…
DW: Well, you know, if I was doing something that I didn’t want Rice to know about, I wouldn’t do it on
Rice gear. I’d do it on a home computer with a home net connection, where my packets didn’t go through
Rice. Now, Rice by and large doesn’t care that much.
Dan's interest in workplace privacy is perhaps one reason why he was chosen to be on the Rice University IT Policy committee, where he has been involved in restraining some of the more authoritarian urges of the IT staff. His concern is that any policy needs to be carefully drafted to make clear what the specific dos and don'ts are. This notion of a specific policy, a body of law that constrains people's actions, sits uneasily with the discussion of security. An open question concerns the relationship between the technical creation of secure technologies and the institutional creation of policies.
DW: And likewise, as an employee, you should be aware of the policies and procedures of your
employer.
TS: So, can I ask you what Rice's policies are?
DW: I actually don’t know all of Rice’s policies and procedures. I mean, I’m on the IT Security
Committee so in theory I’m helping to write those policies and procedures, and in fact, Vickie Dean had a
draft a while ago, that if you read it, gave Rice the right to go rummaging around inside your computer all
they wanted, for "security purposes." But if you read it on its face, it gave them a huge amount of power
they don’t have today.
TS: ‘Cause it’s not implemented, or?
DW: Or just because right now there’re only de facto policies at Rice--not formally written policies. And
the de facto policy at Rice is somewhat limited. And the thing that they wrote gave them the ability to do
things that they wouldn’t do today necessarily. And I called her on it, I said, Vickie, we can’t do this. It
allows you to do X Y Z, which will give people a lot of heartburn. She was like, well, it wasn’t meant to
give us those privileges… Yeah, but it DOES give you those privileges. So, you need to go back and
draft it more tightly… Part of why Rice wants the ability to do somewhat intrusive things to your
computer is to check if your computer’s infected with the latest virus or worm or what have you. And
that way, they have the ability to audit your computer to make sure it’s not messed up. And if it is
messed up, then they want to be able to go in and fix it. Or at the very least, pull the plug on it. And so
they're trying to formalize that as, as a formal written Rice policy. It's still a work in progress. So, to some
extent, you know, these ethical dilemmas are driven by security needs. It’s hard to disentangle them.
There are certain problems IT people HAVE to solve. And so some of those problems, you know, like
dealing with computers on campus that are infected with the latest worm, they have to have enough power
to be able to clean up the mess. That’s their job.
CK: Um hm
DW: And everybody agrees that it’s good to have people that go cleaning up after these messes. There’s
no ethical dilemma there. On the other hand, if you don’t craft the policy very carefully, it gives them the
privilege to do other behaviors that are less ethically clear. And if you permit an organization to do
something, then will they do it or not, you know, whatever, is s-, you’d like to have things written tightly.
Such that you aren’t giving them any extra privileges beyond what they need to get their job done.
CK: Um hm
DW: In fact, that’s actually a, a basic computer security principle. You shouldn’t have a program able to
do more on a computer than it has to do. And so that basic security programming principle applies to the
design of policies for real world organizations. This is all interrelated. And it isn’t like all these
computer security principles were thought up out of whole cloth. You know, they apply, this all, there’s
a, the landmark paper in this was Saltzer and Schroeder in 1975, and they based a lot of their ideas no
doubt on just the ethics of things in the real world.
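The principle Dan describes here is usually called "least privilege." As a concrete (and entirely hypothetical) illustration of what he means by a program not being "able to do more on a computer than it has to do," here is a minimal Python sketch of our own, not taken from the interview or from Saltzer and Schroeder: it assumes a Unix-like system, and the function names, the "nobody" account, and the log-file path are all illustrative. The helper gives up root privileges before it touches any data, and asks only for read access because reading is all the task needs.

import os
import pwd

def drop_privileges(username="nobody"):
    """If running as root on a Unix-like system, switch to an unprivileged account."""
    if os.geteuid() != 0:
        return  # already unprivileged; nothing to give up
    account = pwd.getpwnam(username)
    os.setgid(account.pw_gid)  # drop the group first, then the user
    os.setuid(account.pw_uid)

def count_lines(path):
    """The task only reads, so it only ever asks for read access."""
    with open(path, "r") as handle:
        return sum(1 for _ in handle)

if __name__ == "__main__":
    drop_privileges()
    print(count_lines("/var/log/syslog"))

The analogy to the IT-policy discussion is direct: the script is granted only what its job requires, so a mistake in count_lines cannot be parlayed into anything the program was never entitled to do, just as a tightly drafted policy does not hand an organization privileges beyond what it needs to get its job done.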
Dan's definition of the activity of looking for bugs and security flaws or, as we put it, "being a really good bad guy," is an illuminating and somewhat poetic one.
DW: I don’t have a good recipe for being a good bad guy. Being a bad guy is like solving a puzzle. You
have a whole bunch of pieces, it’s like junkyard wars. You’ve got all this crap, and they tell you to build
a hovercraft. The analogous thing is you have all this software that’s out there, and your goal is to break
in. How can you put together the pieces and the parts that you have lying around, in order to accomplish
this goal. It doesn’t have to be beautiful. It doesn’t have to be pretty, it just has to work. So, in a certain
sense, it’s just a form of computer programming. I mean, this is what programmers do. A programmer
deals with a bunch of broken stuff: bugs in Linux, bugs in Windows, bugs in web browsers, and still
produces something that works most of the time. And then works around all these flaws. So to some
extent, finding and exploiting a security hole is no different than the normal programming process, only
you’re trying to do things that you’re not supposed to be able to do at all. You know, I’m supposed to be
able to write HTML and have it appear on your web browser. That’s a documented feature. But I’m not
supposed to be able to take over your web browser and take over your whole computer. That’s not a
documented feature. So, you’re trying to take advantage of undocumented features; and trying to make
the system dance in a way it was never meant to go. So, it’s just a harder version of the regular
programming problem, being a good bad guy. And then, of course, the better bad guy you can be, the
better good guy you can be to fix it. You know, or in order to build a system that’s robust, you have to
look at it from the perspective of somebody trying to break it.
Finally, Dan's concerns about ethics are intuitive (he balks at offering a definition), but they are also
clear enough that he can use them to teach classes by telling stories. The attitude is summed up in the
phrase "Doing the right thing ethically is almost certainly the right thing professionally." This might be
a place to bring up that old saw about a Protestant Ethic…
DW: All I know is that there’s behavior that is good and there’s behavior that is bad. And in my
computer security course, my second lecture, I sit down and tell them a bunch of stories, I’ve told most of
the stories here to you guys today. Stories of how to behave and how not to behave. And my ideas on
how to do security research without, without running afoul of the law. And to make sure in the end you
look like a good guy. And so I teach that by telling stories.
CK: But presumably some of those stories get at the difference between ethics and legality; in the sense
that sometimes things that are against the law or are allowed by the law, are not, are still an issue for
[ethics]
DW: Right, but I like to draw a contrast between my experience with finding Java security flaws and with
the idea of demanding payment for finding bugs. And how, I came out better in the end. Yeah, I was
able to land a good summer job, we got the free computer, and you know, I got the reputation that
followed me into, y’know, and helped me get my job. Demanding payment is of dubious legality and
clearly unethical. And it’s impractical, in practice it isn’t worth any of those costs. So, that’s the way I
like to explain it—that doing the right thing ethically is almost certainly the right thing professionally as
well.
CK: Interesting.
DW: You know, people reward you for doing the right thing, often in ways that you can’t anticipate.
That's the lesson I've learned over the years--that there's a nice synchronicity between doing
what’s right and what’s gonna help you in the end. And it’s nice because [then] there’s no quandary
anymore. You do the right thing, and it’s gonna work out for you the best. You’re not sacrificing
anything to do the right thing. There’s no downside to doing the right thing. At least, that I’ve found.
Dan Wallach Interview 2
Abridged and annotated interview with Assistant Professor Dan Wallach of Rice University. The interview was held on Tuesday, November 18th. Key: DW=Dan Wallach, CK=Chris Kelty, HL=Hannah Landecker
Interview two was conducted after reading and responding to the first transcript, and the questions and
discussion are organized and directed by material we found most interesting in the first case. Most of the
researchers were surprised to hear Dan Wallach characterize his situation with Sun as a "gift economy," so the interview starts there.
Gifts and Bugs
CK: I thought I would seize immediately on the gift-economy stuff, since you brought it up last time…
DW: Mmm hmm hmm.
CK: And since it was something everybody knew something about, and so…um… you said last time, we
were talking about you getting a giant computer from Sun for having found bugs and you said it was like
a gift-economy, because I asked whether this was quid pro quo, whether this was a signal from them that
said you can have this computer as long as you keep doing things the way you’re doing it, and you said
it’s more like a gift economy…
DW: Course, I don’t actually know formally what a gift economy… I mean probably what I know about
a gift economy is what I read from Eric Raymond’s “The Cathedral and the Bazaar”…
CK: Probably… that’s the only way it circulates it turns out.
DW: Which may or may not have any correlation to what anthropologists consider to be a gift
economy…
CK: It's… sort of… it's close, he's not entirely wrong actually, I've thought a lot about this actually. One of the things about the gift economy classically is that it's a constant flow, it's not just "I give you a gift, you give me a gift," but what it does is create an obligation between people, it creates a relationship, and can
actually include more than just two parties… it was a way of theorizing society originally, that it included
everybody, and so I give you a gift, you give Hannah a gift, Hannah gives some one else a gift and it
produces a social bond between all of us, right. But the question that came up was, we wanted to know
what the difference is, in some ways, between say working for a company that pays you to find bugs and
maintaining the kind of impartiality that the University gives you to find bugs for whoever. Now you still
clearly have a relationship with these corporate entities, but they’re not paying you to find bugs. So what
you gain from maintaining this impartiality, why not go work for a company and find bugs, where you
could presumably be paid better?
DW: Well the university pays me quite well, better than many of these companies might. And it’s not all
that exciting to continue looking for bugs over and over again. If that’s your job, then that’s what they
expect you to do. To keep finding bugs, and that’s a real drag. The hope--which may or may not have
any correlation to the reality--the hope is that you find a bunch of bugs and convince them that there’s
something broken with their process and therefore they need to build a better system such that you won’t
be able to find bugs anymore…that’s the hope.
CK: Is that the exciting part? Re-building the system, or coming up with a better way of doing it?
DW: So there are two exciting parts, the first exciting part is realizing that there's something broken--not
just with the software but with the process that created the software. That the people who made the
software didn’t understand the ramifications of what they were doing…and here’s an opportunity to
convince them that they need to take security more seriously, and what have you, and maybe you need to
convince them by demonstration, because other things don’t work as well. And once you’ve convinced
them, then in theory they’ve learned and they’ll go off and do the right thing, and now you can move on
to, either another target, or…I mean, I have a whole talk about this… My research pipeline is Step 1: Find Bug Somewhere. Step 2: Write a paper about it. Step 3: Build a better mousetrap, which is what every other academic in the world does. And then Step 4: Look for a new victim. So that's my research pipe.
Finding bugs is a nice way of motivating the rest of the work. If anything it helps convince the rest of
computer science academia that, when I’m writing a paper that says “Here’s how to do better Java
Security” I’ve already convinced them that Java Security is a problem. Whereas if you go straight to step
3, they’re like: “Hey why are you wasting our time on this, why is this a problem?” I didn’t realize this
when I was doing it, but it all makes perfect sense now. It’s analogous actually to phone companies
selling services to help people call you up at home at 9pm when you’d rather be eating dinner, and then
selling you services to block the people from calling you. (Laughter). I mean you create a market for
your own products… So, it’s sort of an academic variant on the same theme.
Corporations, impartiality and Dan-as-Super-hero
Dan's involvement in the Diebold e-voting machines controversy, which is a substantial part of this transcript, is a natural point of comparison for us. The question of who Dan is protecting, or serving, seems to be a less than obvious one.
CK: Do you think that by focusing on commercial products and products that are out in the world rather
than on academic projects, other, you know, your peers and other things, that you risk compromising the
impartiality that you have here, in the university?
DW: Do I risk compromising my impartiality by looking at commercial vs. non-commercial products?
CK: Yeah.
DW: I don’t think so, ‘cause I also look at non-commercial things. I’ve been working with Peter Druschel
on security for peer-to-peer systems, and we've been looking not at the commercial peer-to-peer things like
Kazaa or Gnutella, we’ve been looking at our own homebrew stuff—Pastry and various other things built
here in house. So that's an example of where I could have gone out and beaten up on the real-world
systems, but instead I was beating up on our local ones, cause the real-world ones are too easy to beat up
on. Not enough challenge there…they were never designed to be robust, so instead we’re beating up on
our own systems. So…
CK: So do you think you can then say that you pick commercial products based on whether or not they
have some claim to being robust or secure?
DW: I s’pose. There’s no fun beating up on an easy target. And it’s better to… hm. What’s the way to
say this? (Computer makes a bleeping noise, as if in response). That isn’t the way to say it. (Laughter
and cackling). If it’s a matter of expectations, if the people who are using the system have an expectation
of security, and that expectation has no grounding in reality, then that makes it an interesting system to
analyze. The emperor has no clothes. (Computer makes another noise). I am gonna shut this thing up
now. Ok. (Laughter). So it has nothing to do with what’s commercial and noncommercial. It has to do
with, where do a substantial number of people have expectations that aren’t met by reality.
CK: It makes sense. So, a sort of partially related question to this: you mentioned last time that part of the ethical issue in doing this kind of work on commercial companies is learning how to maintain a good relationship with them. Right? Like learning how to release the results in a way that isn't
damaging to them but still…
DW: The companies are actually secondary to their customers. I mean, if I damage a company, you
know, I’m sorry. But if the customers who are dependent on their products are damaged by something
that I did, then that’s a problem. So do I really care about Diebold’s financial health? No. Do I care
about the voters who are using Diebold Equipment? Hell yes. I care about the democracy for which
Diebold has sold products. I could really care less about Diebold’s bottom line. And particularly since
they have not exactly played ball with us, I feel no particular need to play ball with them.
CK: Right. So you don’t always need a good relationship then. By good relationship you mean you don’t
wanna hurt the people that they are trying to serve as customers.
DW: Yeah. I, if you will, I serve the masses, not the…. choose your favorite like Marxist term,
“bourgeois elite” [laughter].
CK: Running Dog Lackeys of the Capitalist Elite? (Laughter). Alright.
The example of blackmail and the distinction between good and bad bug hunting.
CK: When we talked about it last time, you had used, as an example, the idea of blackmailing a company
with a list of bugs. You suggested there was a system that allowed one to exploit something that was, I
suppose, in some way the good will of Sun [Netscape], you know, “find us a bug and we’ll pay you.” So,
um, is there a clear line for you between the place where, um, uh, finding bugs turns into a kind of
exploitation of a process? Or is it something that you kinda feel, and have to know intuitively, case by case?
DW: It's hard to make a real generalization. I think when you are doing something whose sole purpose seems to be to line your own pockets… then…of course that's capitalism. Right? But in this case, it seems that… it's just blackmail, I shouldn't have to, it's clearly unethical. You know, blackmail in the
traditional world. You know… Give us money… “give us a lot of money or your daughter dies,” you
know that sort of thing. It is not just unethical, it's illegal. So, Gün is lucky that nobody tried to go after
him. I guess I worry about who gets hurt. And well, I mentioned earlier that I obviously have no love lost
for a company like Diebold. I am not trying to hurt them. I am not deliberately trying to injure them. I am
trying to protect all of their customers. And if that requires fatally wounding Diebold to protect their
customers, that’s fine. So in the context of Java, protecting all the millions of people surfing the web is
what it’s all about. And if it’s more effective to protect them by wounding the vendor, then that’s fine. If
it is more effective to protect them by helping the vendor, then that is preferable. Because the vendor has
a lot more leverage than I do. The vendor can ship a new product. You know, it would be great if
Diebold, if I could somehow talk them into having this epiphany that they’ve been wrong all along. But
we guessed quite possibly correctly that Diebold wouldn’t be amenable to having a friendly discussion
with us about our results. We assumed, without any information but almost certainly correctly that if
we’d approached Diebold in advance they would have tried to shut us up with a cease and desist before
we ever got out the door. And had they done that, that would have cost more, effectively caused more injury to their customers, who we actually care about. So therefore, that was how we came to the decision
that we weren’t gonna talk to Diebold, we instead just came right out the door with our results.
After the 2000 elections, Congress passed the "Help America Vote Act" (HAVA), which was designed primarily to upgrade the voting infrastructure after the national trauma of dealing with hanging and pregnant chads. From Dan's standpoint this act was basically a giant handout to companies like Diebold, ES&S, Sequoia, and smaller firms such as Hart InterCivic (which makes Houston's eSlate voting machines). Dan has a theory about why the machines are built the way they are, and it is a combination of small-mindedness, incompetence, and hatred of paper. Dan's explanations also draw in an unusual term whose origins are far from computer science: "regulatory capture."
DW: [After HAVA] the states had money, federal money, a one time shot to buy new gear, and there was
this perception that paper is bad, therefore lack of paper is good. And, oh by the way, whenever you talk
to an election official, the people who manage elections for a living… they hate paper. It weighs a lot, you
have to store it, you know, if it gets humid, it sticks, and oh god it can be ambiguous. So, they would just
love to do away with paper and then you have these vendors who of course will sell whatever their
customers—which is to say election officials—want to buy. Regardless of what the real customers—the
electorate—would prefer to use. The electorate doesn’t matter at all, of course, it’s all about the election
officials. I’m letting a little bit of cynicism seep into the discussion here. The checks and balances are
supposed to come from these independent testing authorities. But it comes… are you familiar with the
concept of regulatory capture….
CK: It sounds fairly obvious but explain to me.
DW: So the concept is… We see this a lot with the modern Bush administration, where “oh, all you
polluting people, why don't you figure out your own standards and regulate yourself." Or if there is a federal organization that's responsible for setting the standards, who are the people setting the standards but the people who are being regulated? You know, if they're setting their own standards, if
they’re defining what clean means in their own terms whether that is you know, smoke stack emissions or
auto emissions, or all the other things the EPA is messing up right about now. That’s an example of
regulatory capture. You know, where the people who are being regulated have effectively taken over the
people regulating them. So the voting industry has that problem in spades. What standards there are, are
weak; the people who certify them don’t know what they are doing and federal election commission
standards that are meant to be minimal standards, like, God anything ought to be at least this good, are
considered to be maximal standards instead, you know, anybody who has passed this bar is worth buying.
Everything is completely messed up. So the checks and balances that are supposed to keep this
simplification feedback loop between the vendors and their customers, the testing authorities that are
supposed to balance that out, aren’t. So that’s the state of affairs that we’re in, plus this huge injection of
money. Election officials don’t understand computer security, they don’t understand computers. They
don’t know what a computer is. They still think that computers are like The Jetsons or something. Or
maybe they think it's like what they see on, you know, Law and Order…
As is clear in other parts of this transcript, part of Dan's activism relies on his exquisite management (or strategic use) of his reputation. In the case of the voting controversy, Dan explains how this figured in. In the absence of any actual machines or source code to evaluate, Dan can only insist, based on his credentials, that the machines are insecure.
DW: So, Jew Don Boney [former Houston City Council member] invited two national experts and apparently Peter Neumann (of SRI International) didn't actually want to bother to fly out to Houston. Instead he
said “Well there’s that Wallach guy, he knows a little bit, he’s a good security guy…give him a call”.
They ended up getting all three of us, and that was my first exposure to these voting systems. And right
away I clued in to this lack-of-paper problem. In fact I was opening their system and pulling out this
flash card and waving it in front of City Council saying "Look, I can do this, so could somebody
else…this is bad.” But of course, nothing happened, and we bought it anyway. I think it got a brief
internal mention in the Houston Chronicle. I tried to sell a story to the Houston Press and they weren’t
buying it. They did eventually run a story written by, who is this guy, Connelly? [Richard Connelly] In
The Houston Press column where they rip on local politicians, he had an article saying how, you know,
Grandma wasn’t going to be able to comprehend this thing, written tongue in cheek. And I wrote a letter
to the editor and even though I mercilessly hacked it down, because I knew that they tended to have very
short letters, they still hacked my short letter down to the point where it was almost incomprehensible and
didn’t have any of my attributions, it just said Dan Wallach, Houston, TX. As opposed to Professor Dan
Wallach, Rice University. Normally when it’s a letter from an expert, they usually include the affiliation.
So they sort of denied me my, they denied my affiliation, which pissed me off. And that was about it in
terms of my voting involvement. You know, Hart Intercivic won that little battle, and I got to learn who
on the City Council had a clue and who on the City Council was clueless. I got to learn that Beverly
Kaufman, our County commissioner is … (silence) … insert string of adjectives that shouldn’t be printed.
I can’t think of any polite thing to say about her, so I’ll just stop right there.
CK: Did they at any point actually let you look at these machines or anything, or was it just like, you’re
an expert on security, so we'll put you on it?
DW: The latter, you’re an expert on security. Here they are sitting in front of you. At the time I had
challenged Bill Stotesbury by saying "I would like to audit these machines, I would like to read their
source code." They said, "Only under a non-disclosure agreement." I said, "No way." They said, "No
deal.” I mean they would be happy to let me look at their source code under a non-disclosure, but that
means I can’t tell anybody else the results and that violates the point of me wanting to look at it, which is
to, that the public should know about what they’re voting on. That’s the whole point of doing an audit, it
seems to me. Although, it turns out that these independent testing authorities that I mentioned, their
reports are classified. Not military kind of classified, but only election officials get to read them. They
are not public. So you joe-random voter wanna, you know, are told “Just trust us! It’s certified.” But you
can’t even go…not only can’t you look at the source code, but you can’t even read the certification
document. So you have absolutely no clue why it was certified, or you don’t even know who you are
supposed to trust. You’re just told “trust us.” Needless to say, this sets off all kinds of warning bells in
the back of my head. But there’s very little I could do about it.
Over the course of the next year and a half, other computer science professors started to take note of the problem, in particular Stanford Computer Science Professor David Dill. Dill's career was built on work in formal verification, the subject of our interviews with Moshe Vardi (see that interview for more on formal verification). In this case, however, verification was moot, because he recognized, as Dan had, that the machines were designed in such a way that no one could practically verify them in any meaningful way. Dill lives in Santa Clara County.
DW: So Santa Clara County was trying to decide what it was going to buy and Dill, like any other educated computer scientist, said: "What do you mean, there's no paper? The software could do the wrong thing." It's intuitively obvious to a computer scientist that this is a bad thing – having all this trust placed in software, because software is malleable. So Dill was somewhat late to the party, but boy, he is an activist. I mean he's a very competent activist. And he was on sabbatical at the time. So he just threw all of his energy behind this voting issue, and went from nothing to being one of the premier experts in
the area. Because it’s not really that deep of an area to get an understanding of. So Dill contacted Peter
Neumann and Rebecca Mercuri, existing experts in the area, and they also put him in touch with me and
we wrote an FAQ all about why paper in your voting system is a good thing, and why we’re advocating
on behalf of this voter verifiable audit trail. So, we wrote that FAQ, it was originally something that was
meant to be submitted just to the Santa Clara County people who were making this decision. But
somehow they managed to suck Dill into the nationwide issue and he set up this, I think it’s
verifiedvoting.org. And… basically he’s turned his sabbatical issue into a crusade and he’s very
competently using his mantle of Stanford professor. He’s leveraging his prestige in all the right ways.
In addition to David Dill, an activist named Bev Harris appeared on the scene, with what seemed to be the source code to one of Diebold's voting machines. Dan here tells the complicated story of how he and his colleagues at Johns Hopkins decided to produce a report detailing the flaws in this "illegally" obtained source code.
DW: So a number of other activists got involved in this issue. A very curious woman named Bev Harris
out of Seattle, claims that while she was Google searching, she stumbled across a Diebold FTP site that
had gigabytes and gigabytes of crap there for the downloading. Is that actually true? I don’t know. It
really doesn’t matter, because it was there and she grabbed it all. And in July of 2003, she had written
sort of an exposé about the "GEMS" Global Election Management System, the back-end computer with the database that tabulates all the votes. She explained exactly how simple it was to
compromise that machine; it just used a simple Microsoft Access database, no passwords, if it was online,
anyone on the Internet could connect to it, edit the votes, no audit log, or the audit log that there was was
easy to work around, etc. And at the same time, she had announced that, "Oh by the way, here’s a
website in New Zealand that has all those gigabytes of original material that somehow was liberated from
Diebold." That was intriguing.
CK: (Laughter).
DW: So Bev Harris subscribes to the conspiracy-theorist opinion that clearly these voting machines are being used to compromise elections and here is further evidence of the evils of the Republicans, what have you. And the problem with that sort of an attitude is, whether or not it's true, it tends to make people
[disbelieve you]. So Dill said, "This is an interesting opportunity, here’s all this code." And we could try
to leverage that whole academic prestige thing into making a statement that can’t be as easily ignored as
things that Bev Harris does herself. So Dill started shaking the grapevine, and ended up organizing a
nice group of people to do the analysis. It was initially a much larger group, including the people at
Johns Hopkins, people here, some other folks in California, and we got the EFF (Electronic Frontier
Foundation) involved very early to help… One of the things that we learned as a result of the earlier
SDMI work is that it’s very, very important to dot your i’s, cross your t’s, to understand exactly the
ramifications of everything that you do, to cover your ass properly, in advance of any lawsuit. Another
important lesson we learned at the time, is that if you’re gonna do research that might drag your ass into
court, get your university counsel onboard BEFORE you do anything. [In the case of SDMI] I didn’t go
to talk to Rice Legal until the cease and desist letter had shown up, and the day I walked over there, they
were having Richard Zansitis’s (Chief Legal Counsel) welcome to Rice party. Welcome to Rice… help!
CK: [Laughter].
DW: So this time, I was sort of slowly ramping up, getting my research group ready. I made sure
everybody was comfortable with the legal ramifications of what they were doing. There were still some
legal risks involved so I made sure everybody was cool with that. Meanwhile, the Hopkins people were
racing ahead, Adam Stubblefield and Tadayoshi Kohno were busily pulling all nighters analyzing the
code, and Avi Rubin calls me on like a Thursday or a Friday, and says "The state of Maryland just
announced they’re purchasing 56-odd million dollars worth of Diebold gear, we’ve gotta get this out the
door, like now." And I still haven’t got Rice Legal to buy off. Rubin: “It’s gonna go out the door in like a
couple days, do you want your name on there or what?” "AAAh." So, we go chugging down to Rice
Legal, I’m like, "I need an answer, NOW."
CK: They knew about this, they were just like trying to figure out what to do?
DW: Well, yeah. So this was like August. August in Houston, nobody’s here. My department chair, out
of town. Legal Counsel, out of town. President out of town. Dean of Engineering, out of town. So, what
ended up happening was, I ended up putting together a meeting, where I had the Associate Dean of
Engineering, the Vice Provost and the acting CS Chair, and one member of the Rice legal office,
conferenced in with Cindy Cohn from the EFF, she’s their head counsel, director, whatever, she’s the big
cheese over at the EFF. And I got her and Rice Legal Guy on the phone. Rice Legal Guy was skeptical
in advance of this conversation. He’s like, you know, violation of a trade secret is a class C felony in
Texas. Go to jail. You wanna do that? So he was, he was very skeptical. And I got him and Cindy on
the phone together. And the rest of us were sitting around listening to the speaker phone. Rice Legal Guy
and Cindy—after they got past the "hi nice to meet you,"—very quickly spiraled into dense legalese. The
rest of us could understand there was a conversation going on, and could pick out lots of words that we
understood, but it was clearly a high-bandwidth legal jargon conversation. And, at the conclusion of that
conversation, Rice Legal Guy said, “Go for it.” Well, actually, he didn’t say that, he said, “I’m gonna go
talk to the President." Rice Legal Guy briefed the President and the President's opinion was: "This is core
to what the University’s about, the University will support you." I knew I could do whatever I want—it's
academic freedom, the University couldn't stop me, but what I was asking was, I wanted the University to
stand behind me. What I wanted was, if the President were ever called by somebody from the press, I
wanted him to say, “We support Wallach.”
CK: Right.
DW: And all this was happening concurrently with Avi Rubin and company talking to a reporter from the
New York Times, before I’d even got the sign off from Malcolm. And in fact the original New York
Times article didn’t mention Rice. It was unclear, and, by the time we told the reporter, yes Rice is one
of the co-authors, it was too late. The article was out of his hands into the editor’s hands, and oh by the
way, the editors hacked the hell out of the story. Hopkins had thought they’d negotiated in exchange for
an exclusive, that they’d get page A-1, front cover. And then you know, something blew up in Iraq, or
whatever, and we got bumped to like page A-8 or something, on the front of the National section, which
is buried inside, we weren’t happy about that. But, dealing with the media is another topic entirely.
The actual paper that Dan wrote with Avi Rubin, Dan's (and CK's!) former student Adam Stubblefield, and Tadayoshi Kohno was based on the material that Bev Harris had found on an open server. In this section, Dan talks a little about the mechanics of conducting research on such material.
CK: Maybe you can talk a little bit about what was involved with getting a Diebold machine, doing the
research…
DW: We never got a Diebold machine.
CK: You never did…
DW: I’ve still never touched one in person.
CK: Ah.
DW: Never. What we had was the source code to a Diebold machine. So, the, the legal decision hinged
on what we could and could not analyze from this chunk of stuff that we'd downloaded from this web server. This is complicated. There were all kinds of different files. Some of them were just, you could
read them directly, some of them were in encrypted zip files. Zip encryption is not very strong, in fact the
passwords were posted on some website. And they were people’s first names. So, it wasn’t like we
didn’t have the technical ability to read any of these files. But the legal analysis hinged on three different
theories of how we could get into trouble. [First] Copyright. Just straight old boring copyright. Well, if
we shipped our own Rice voting machine using their code, that would be a clear violation of copyright.
But, reading their code and quoting from it, that’s fair use. [Second] DMCA. The only DMCA theory
that could apply is if somehow there was an anti-circumvention device that we worked around. So,
encrypted zip files are kind of like an anti-circumvention device. So, the legal ruling was, don’t look at
those. Not that you can’t, just don’t. And the third theory was that we were violating their trade secrets.
Now a trade secret as it happens is an artifact of state law, not federal law. So it’s different everywhere.
And in Texas in specific, violation of trade secret, and the definition of what it means to violate it is this
complicated thing. Which was sufficiently vague for us that it would really come down to a judge. But
violation of trade secret is a class C felony. Go to jail.
CK: Right. But in Maryland?
DW: Maryland it’s not. Maryland it’s a civil issue. Texas it’s a criminal issue. So these things vary by
state. Maryland has other intellectual property issues. They have UCITA. Which is this DMCA extension of
doom… So, the conclusion was that the trade secret issue was only an issue inasmuch as it was actually
still a secret. But here it was on a website in New Zealand, where the whole world could download it, and
it was very publicly announced that it was on this website in New Zealand, and had been there for about
three weeks at the time that we were forced to make a decision. Cindy Cohn called the New Zealand ISP
saying, “Have you gotten any cease and desist letters?” And they said, “No, and we have what on our
website?” (laughter). So that told us that Diebold hadn’t been taking any actions to protect their trade
secrets. As a computer scientist you think: either it is or it’s not a secret. That isn’t the way lawyers
think. Lawyers think that it's kind of a secret, and how much of a secret depends on how much effort
you spend to defend the secret. So if you aggressively go after those leaks, and try to contain it, then you
can still make trade secret claims, but they're, they're diluted somehow, but they're still valid. But if
you’re not taking aggressive steps to stop it, then it’s not, it loses its secrecy. But there was still some
vagueness. And that was the, the calculated risk that we had to take. That we were prepared to do the
work despite that calculated risk, and were, and, EFF was signed up to defend us in the event that we
landed in court based on that. And you know, that, like I said, you know. So, our decision was to go for
it. The Vice Provost remarked while we were in this meeting that in his whatever 20, 30 years at Rice,
it’s the most interesting decision he’d ever been involved in.
CK: Hm.
DW: I’m not sure what that means. And this guy’s a biologist. He’s involved in all sorts of whacky
ethical decisions that those guys have to make, right?
CK: Right.
DW: Is it ethical to stick hot pokers in mice? Cute little rabbits? Our decision was that reading the source code was close enough to being, you know, legally OK, that the risk was acceptable.
CK: Right. But back up a second, I mean, from a security researcher's standpoint, is it sufficient to have only the source code to do this research?
DW: Diebold would say no. We would say yes. In our paper, we had to say in several places, "We don’t
know how this would exactly work, but it would be one of these three ways and no matter what, it would
still be vulnerable because of this." So our paper had a number of caveats in it, but despite those caveats,
our results were absolutely devastating to Diebold. So in addition to all the usual sorts of editing that
happens to a paper before you ship it out the door, we also had Cindy Cohn reading over the paper, and
mutating our language in subtle but significant ways. In fact, one of her, one of her sort of subtle
additions had to do with our overall stance. She added a sentence along the lines of, you know, this
system is far below the standards of any other high assurance system. Far below what you’d expect for
anything else. I forget her exact words, but I saw that sentence quoted extensively. She’s good. So our
paper was, it generated quite a splash. Particularly coming only three or four days after Maryland had
announced this 56-odd million dollar deal with Diebold. As a direct consequence of our paper, the
Maryland state government commissioned an independent study with a vendor who we since think might
have had a conflict of interest. That’s SAIC. And SAIC’s report which took them like a month to write,
they reached, the full report was 200 some pages. The Maryland State government then redacted, excuse
me, redacted it down to about 40 pages. And even from what we could read, it said, there are some
serious problems here. And then the Maryland officials said, "It’s great! We’re gonna go ahead and use
it." And we’re like, "it doesn’t say you should go ahead and use it. That’s not what that says!"
Dan's discussion of electronic voting systems is dense with problems of transparency, interests, conflicts
of interest and the question of whether democracy requires complete openness at a technical level.
DW: Diebold went after us every which way they could. All of which was wholly without merit.
There… Avi Rubin, as it turns out, is on a number of technical advisory boards, including for a company
called VoteHere.
CK: Um hm.
DW: Which was trying to sell software to firms like Diebold to put in their voting hardware. And they
had just announced a deal with Sequoia, one of Diebold’s competitors. So, an independent reporter had
figured out that Avi had this potential conflict of interest, and Avi had literally forgotten that he was a
member of that advisory board, because they hadn’t contacted him, you know, nothing, he had no
financial benefit. So he disavowed the whole thing, gave them back their stock, resigned from that
technical advisory board, but it’s, it’s common in responses that we’ve seen, where they try to say that our
work has no merit because we had a conflict of interest. We hear a lot of that. What we don’t hear is how
many of these election officials were given campaign donations by members of the election industry, like
Diebold. There’s a certain amount, not a certain amount, there’s a vast amount of hypocrisy…in terms of
conflict of interest accusations…There’s no reason why every aspect of our election system shouldn’t be
public and open to scrutiny. And by keeping these things hidden, you’re not, there’s no hidden strength
that you’re protecting. What you’re hiding are weaknesses…
In the previous interview, we noted Dan's interest in watching what he says, and the difficulty of
controlling a message. Here (and again at the end of this tape) he addresses it directly.
HL: It’s so interesting hearing about all the things you have to learn that you would think that a computer
scientist would never have to learn. You're learning the intricacies of wording, of things that you know
are going to be really high profile, you’re learning the intricacies of state law about trade secrets…what’s
the experience of—all of a sudden having the kind of work that you do with technical objects tied into
all of these legal and political issues?
DW: Honestly I think it’s a hoot. The Rice Media Relations office, this is mentioned in the transcript last
time, I mentioned I was going there to be briefed, yeah they were gonna teach me how to deal with the
media, they set up a video camera, and they interviewed me about voting security, and then we sat down
and watched the video tape. And they were pointing out things that I could have done better, how I could
have kept my answers more precise. They told me all these things that I never would have thought about,
like on TV, the question from the interviewer is never aired cause they’re always cutting things so tight.
So it makes sense why politicians never actually answer the question that was asked of them, cause they
know that the question is hardly ever gonna be heard next to the answer. So instead they stay on message.
And so now I know how to stay on message: have your 3 points, know what they are, be able to state
them precisely, the better you know them the less you will say Uh Um Oh etc.
CK: Right.
DW: So I got briefed in all that, and then I read my own transcript and horror of horrors, I’m reading
myself doing all the things I’m not supposed to be doing!
HL: But we’re not journalists.
CK: So we actually don’t care if you do that. We’re actually, as anthropologists, we’re diametrically
opposed to journalists, we prefer the long meandering story to the, to the simple points, cause there are
always simple points, we can get simple points from the media, what we can’t get is the complex point…
DW: Well, I'm hardly as refined as a modern-day politician is. So I don't think you should be too worried
about me overly filtering what I’m saying. But, I don’t like to see what I say being as ungrammatical as
it can be. So the experience has been… every computer geek is fascinated by where technology intersects
with public policy and the law. The DMCA really raised a lot of people’s consciousness. These election
issues are a regular issue on Slashdot. I mean Slashdot’s a good barometer of what matters to your
friendly neighborhood geek. And intellectual property law is actually very intriguing to your average
geek, even though to the general population it’s almost entirely irrelevant. And especially when that
technology intersects with things like, "god-dammit I want my MP3s!" And when you start saying no I
can’t have my MP3s, instead I get whatever stupid crap system you’re pushing because it will somehow
make it more difficult for teenagers to pirate music. That pisses the technologist off. And turns them into
some kind of an activist.
HL: So does that mean that you enjoy these experiences so much that you would continue to actively
seek out things which happen at this intersection of technology and public policy, or is the type of work
you do just inevitably going to lead you to…
DW: Mmm. That’s a curious question. I don’t say…I don’t think seeking it out is the right answer, but it
seeks me out. This election stuff, definitely…the people who were more involved, people like David Dill
and all them, literally sought me out, and said "we need help." And I can’t say no to something as good
as that. Most of my work is boring academic stuff that ten people in the world care about, just like what
most academics do. I mean…you go look at my vita, and you’ll see, most of my papers have nothing to
do with tearing down the Man (laughter).
Dan's definitions of ethics, and the relationship to his "academic" "boring old technical stuff"
DW: When something as topical and relevant as this comes along, I certainly think now that it’s part of
my duty to do something about it. In the same way that I imagine that the folks over in political science
work on things that are much more obscure, the economists…you know if you’re an economist and you
spend all your time you know studying the widget economy, and then somebody comes to you and says:
what do you think about the recovery? Is it real or not? You know, it’s not really what you’re working on
and publishing papers about, but you know, you know enough about macroeconomics and you can read
the paper probably better than other people and read between the lines, and you can speak with some
authority, so you do. So in the same way, I feel I’m doing exactly what the rest of those folks do.
CK: Mm hm.
DW: Which is unlike what most computer scientists do. I mean computer scientists as a bunch are antisocial. Very, very few computer scientists do well in front of a camera…I’m learning. My advisor Ed
Felten was a master. He wasn’t when we started, but he developed into it, and got very good at it.
Ridiculously good. And… in a certain sense I’m following in the footsteps of my advisor. Seeing that I
can build a successful career, I just have to maintain a balance between meaty, technical, nerdy stuff
and…ethically ambiguous stuff.
CK: Well, one of the things you said in the last interview that everyone seized on who read it was this
phrase “doing the right thing ethically is almost always doing the right thing professionally,” and I think
it’s interesting, in terms of computer science, all the obscure work tends not to fall into that hopper
because maybe there’s occasionally there’s the same kind of ethical issues, but I think what you were
referring to was these more high profile ones…
DW: Yeah, I guess what I probably meant to say with that quote was, when we’re talking about boring
technical things, “look I made a web server run 10% faster,” there are no ethical dilemmas there, but
when you’re talking about publishing a paper that criticizes a major voting industry firm, now there are
some ethical questions that you have to resolve. And, you have to make some decisions about what
you’re going to do. And in terms of the question of what’s right ethically and what’s right professionally
– what would it mean to do the wrong thing professionally? Well, you know, I could have tried to
blackmail Diebold, I could have tried, I could have just ignored university council’s advice and done
other things – there’s a lot of different ways I could have approached the problem. But the way we did
approach the problem produced results that, professionally people have applauded us for, and everything
that we did was very carefully reasoned about, with lawyers and the works to make sure that it was legal
and ethical and all that. The earlier drafts of the paper had much more inflammatory language than the
final paper that went out. Is that an ethical issue? I’m not sure, but by keeping a very cool professional
tone, it helped set us apart from the conspiracy theorists and gave our voice more authority.
CK: Mm-hm.
DW: One of the things my fiancée criticized me about, rightfully, in some of my early interviews, I would
use phrases like, “these voting machines are scary,” she was like, “scary is, like, a juvenile word, that’s a
word that kids use. Say, these voting systems are unacceptable.” And simple changes in the wording –
to me, ‘scary’ and ‘unacceptable’ are synonyms, but to the general public, those words – scary is a
juvenile word and unacceptable is an adult word. You know, unacceptable hits you in a way that scary
doesn’t. On my door, I’ve got a copy of This Modern World, by Tom Tomorrow, where he had Sparky
his penguin dressing up as a Diebold voting machine for Halloween – because it’s scary.
CK: Well you mentioned last time too, that the voting thing has made you pay a lot more attention to
your words because – I love the example where you said, “we found a flaw, that’s great, wait, no that’s
bad!” Like, part of it seems to me to be the good and bad thing, but also this is particularly fraught
because of the Republican/Liberal or the Republican/Democrat or any of those things. Can you say more
about your problem with watching your words?
DW: Well you will never in public see me say anything in support of or against a political candidate of
either stripe now. Of course I have political opinions, but you’re not going to hear me say them,
because…I absolutely have to be non-partisan, it’s a non-partisan issue. And I don’t care who you want
to vote for, you should, if you believe at all in our democracy you should believe everyone gets exactly
one vote, and those get tallied correctly. As it turns out, I’ve never actually registered for a political
party. Now I can actually say, good thing, “I’m non-partisan. I’m not a Democrat. I am not a
Republican.”
CK: And in terms of the technical analysis, does it descend to that level too?
DW: [It’s important…yeah…yeah] I mean technically it’s important for me to be able to say, you know,
this is a machine, these are its goals, this is the security threat it has to be robust against. It’s not. And I
need to be able to say that in a very clinical fashion. Have people used it to compromise elections? I
have no clue. Is it technically feasible? You bet. And that’s where I have to draw the line.
CK: Sure – but even in saying that it’s technically feasible don’t you – this is like imagining how to be a
really good bad guy, don’t you have to be able to imagine some specifics?
DW: Oh sure, and we go into a lot of those specifics. Yeah we hypothesize, perhaps you have a precinct
that’s heavily biased in favor of one candidate over the other. If you could disable all those voting
machines, you could disenfranchise that specific set of voters and swing the election to the other
candidate. When I say it that way, I’m not talking about the Democrat and the Republican. In a paper
that we just shipped this weekend, it was a demo voting system that we built called Hack-A-Vote, to
demonstrate how voting systems could be compromised. In fact, it was a class project in my security
course. Phase One, the students were responsible for adding Trojan Horses to the code. Their assignment
was: you work for the voting company, you’ve been hired to compromise the election, do it. And then
Phase Two: you’re an auditor, here are some other students’ group projects, find all of the flaws… can
you do it? So in that assignment I put students in both roles within the same assignment. And we wrote
a paper about that assignment and one of the things we were very careful to do in…we have some screen
shots… in the initial version of the paper one of my students had just gone and figured out who was
running in the 2000 presidential election and had Gore, Bush, Nader, etc. I said, “nope, get rid of it.” We
have another set of ballots where we have the Monty Python Party, the Saturday Night Live Party, and the
Independents, so you could have an election between John Cleese and Adam Sandler and Robin
Williams. And, that’s something that lets you avoid any sense of partisanship.
Perfect Security. The notion of a "threat model" sheds some light on the way computer security
researchers think about security. Perfect security is a pipe-dream, but relative security is not.
CK: Do you have a sense that software can be perfectly secure?
DW: It can’t.
CK: It can’t?
DW: Ever.
CK: So how do you formalize that as a researcher, how do you, is it always just a contextual thing, or, is
it “secure in this case?”
DW: Well, secure with respect to some threat model. You can build a server that’s secure with respect to
people trying to communicate with it over the network, but can you build it secure with respect to people
attacking it with a sledgehammer? Probably not. That’s a threat that’s outside of our threat model, so we
don’t try to do it. Although for some systems that is part of the threat model. Military people worry
about that sort of thing.
CK: So is part of the problem then in terms of dealing with corporations who have software they’re
selling to customers trying to figure out what their threat model is?
DW: Well part of the problem is when industries – the threat model they’ve built the system to be robust
against, isn’t the threat model it’s going to face. Most of these voting systems are built to be robust
against being dropped on the floor, or losing electricity for the election, and you can put a big lead acid
battery in there, and you can test it by dropping it on the floor ten times and you can build a system that’s
physically robust against those sorts of issues. But if your threat is that somebody tries to reprogram the
machine, or if your threat is that somebody brings their own smart card rather than using the official one,
they didn’t appear to have taken any of those threats seriously, despite the fact that those are legit threats
that they should be robust against. What galls us is that the answer is so simple, it’s this voter-verifiable
audit trail. Once you print it on paper, then your software can’t mess with it anymore, it’s out of the
software’s hands. And the voter sees it and the voter says, yes, that’s what I wanted or no, that’s not what
I wanted. That means you don’t care if the software’s correct anymore. What could be better than that?
CK: Really. You’ve just given them free rein to write the crappiest software that they can.
DW: Yeah, now you’ve designed the system where they can write crap software!
CK: All you need to do is have a print button!
DW: So to me it’s obvious, but somehow that doesn’t work. And why it doesn’t work is some
complicated amalgam of whiny election officials who don’t want to have to deal with paper, poorly
educated election officials who don’t want to have to lose face over having made a very bad decision.
Similarly, vendors who don’t want to lose face after having told the world that their systems are perfect,
and this whole, you know, voting-industrial-complex where the whole process from soup to nuts is simply
broken, too many people have too much invested in that process, and we’re saying that the emperor has
no clothes. And saying it in a cool, professional, technical language. Hopefully sends chills down the
right people’s spines. Of course what we’ve found is, in fact, that isn’t enough, we have to go out and
give talks, and testify, and talk to the press.
CK: How far would you be willing to push that, maybe not you personally, but even just in terms of
lending your support? We talked last time about the Blaster Worm which you clearly said you were
opposed to as a procedure of, say, let’s call it, civil disobedience. How far would you be willing to push
the voting stuff, if someone decided using your paper, that they could go in and rig it so that Monty
Python or John Cleese did win an election, is that something that you think would demonstrate the
insecurity of these systems properly, or is that a dangerous thing to do?
DW: I think it’s dangerous to go dork with actual live elections where there’s real candidates being
elected. First off, it’s very, very illegal. And oftentimes people who try to prove a point that way, like
the character who went and left some box cutters inside airplane bathrooms, the security people didn’t
have much of a sense of humor about it. And I don’t really want to go to jail, and if I can prove my point
without needing to go to jail, then that’s all the better. And enough people are taking what we’re saying
seriously that I don’t feel need to do a stunt like that, ‘cause that’s all it is, it’s just a stunt. Instead, I’d
rather be in a debate where you’ve got me and you’ve got Beverly Kauffman or somebody where I can
debate them in a public forum and I want to win or lose a debate.
During the time that we conducted this interview, another set of hackers liberated a set of memos from a
Diebold site, including a now famous one purporting to be the CEO's words promising to "deliver the
election" to Bush in 2004. Dan explained that his "liberated" documents had nothing to do with this new
set of memos. Diebold's reactions to these events seem to suggest that they were not running a very tight
ship. Dan suggests that their cease-and-desist letter was a joke that amounted to saying "Stop that!
Or… we’ll say ‘Stop that’ again!" Only one of the voting machine companies has since suggested that
anyone might be able to examine their source code (Vote Here), while the rest have refused to open up.
Meanwhile, Dan continues to teach his students both to build secure systems and to hack into each other’s
systems to understand why they are not.
HL: And when you teach this as an exercise in this class that you were telling us about, I presume you do
that partly because it’s something that you’re working on, so you get some ability to teach what you’re
working on, but also do you see it as teaching these students about these things you were talking about,
about technology and public policy?
DW: It’s an assignment that covers a lot of ground compressed into a small assignment. In prior years, I
had an assignment where phase one was to design a Coke-machine that you pay with a smart card. And
phase two was you’d get somebody else’s Coke Machine and beat up on it. And then Phase three you’d
take that feedback from the other groups in phase two and make your thing better. And the genius of that
assignment was among other things it avoided slow delay of grading, because you got immediate
feedback from other groups, because they, they had their own deadlines. So I just took that idea and I
replaced the Coke-machine with a voting machine. And I had you start off being evil rather than starting
off being good, but otherwise it mirrors an assignment that I’d already carefully designed to have this
duality of being on both sides of a problem. So I kept that duality but I stirred in a topical problem… And
I’ve found that students will work harder when they consider their work to be cool. So in a certain
sense, I’m trying to trick people into doing more work (laughter).
CK: We knew you were evil!!
DW: Mwah-hahh-hah (but it works!)…
Lastly, Tips for Effective Message Control, Courtesy of Rice University
CK: Okay I think we’ve pretty much reached the end. Um, but I just noticed this while we were sitting here, cause
you talked about it, I just want to share on camera…
DW: Oh yeah…
CK: Tips for effective Message Control…
DW: These are handouts from the Rice Media Center people.
CK: That’s so Orwellian…Okay, good lord. (Reading) Pre-Interview exercises. "Find out as much as
you can about the news media outlet interviewing you," and the…and …(reading) Did you do all these
things?
DW: I don’t remember getting an email from you?
CK: “Try describing your program or job in one sentence using clear concise language. Write down and
repeat to yourself 1-3 points you want to get across, and practice answering questions in advance,
especially ones you don’t….” What I like are the "During the Interview" suggestions… “Stay focused,
don’t confuse your interviewer by bringing in extraneous issues.” Why do computer scientists have so
many toys on their desk?
DW: Uhhhh…
CK: (Laughter). “Use flag word: The most important thing to remember… the three most important
things… for example… etc.” “Be interesting, tell a brief anecdote that will illustrate your point.” We’re
opposed to that one as well… (we say) Tell a long complex, meandering anecdote to illustrate your point,
so that we have something to WORK with!" “Stay positive. Don’t repeat any part of a negative question
when you are answering it.” “Speak from the public’s point of view.” That’s hard, that’s a really hard
one. How do you know when you’re in the public? “Speak past the interviewer, directly to the public.”
“State your message, then restate it in a slightly different way. Do this as many times as you can without
sounding like a broken record.” (Laughter) That’s a good one, isn’t it? And I like this one especially:
“Bridge from an inappropriate question to what you really want to say.” It all comes down to like, “say
the same thing over and over again, and don’t answer the questions you don’t want to.”
DW: Yeah, I mean this is what your politicians call “staying on message”.
CK: Right, Effective message control.
DW: And here’s some more…
CK: Oh, "Ethics and Procedures! “Tell the truth, the whole truth and nothing but the truth.” “Stay on the
record.” “face to face”.
DW: I like this: “You are the University.”
CK: Wow.
DW: I mean they emphasize this, because when I’m talking to a reporter, the distinction between a
professor… Professor X says, “blah!”—people say “Oh, Rice University says, ‘blah!’” People read
between the lines, so you are speaking on behalf of your employer whether you want to be or not…
CK: Right. You Are The University! On that note… thanks again, for your time.
DW: No problem
Commentary on Dan Wallach
Bug: Anthony Potoczniak
Main Entry: bug
Etymology: origin unknown
1 a : an insect or other creeping or crawling invertebrate b : any of several insects
commonly considered obnoxious
2 : an unexpected defect, fault, flaw, or imperfection
3 a : a germ or microorganism especially when causing disease b : an unspecified or
nonspecific sickness usually presumed due to a bug
4 : a sudden enthusiasm
5 : ENTHUSIAST <a camera bug>
6 : a prominent person
7 : a concealed listening device
-- Oxford English Dictionary
An unwanted and unintended property of a program or piece of hardware, especially one
that causes it to malfunction. [...] The identification and removal of bugs in a program is
called "debugging".
--Free Online Dictionary of Computing
The term “bug” is a metaphor used in computer science to refer to a flaw or defect in the source code of
software that causes programs to malfunction. The Oxford dictionary documents early usage of “bugs” in
the sciences as early as the late nineteenth century. The word had become incorporated into the
rhetoric of professional inventors and companies to signify performance problems in technical services
like telecommunications. Anecdotes from the early days of computing also describe “bugs” literally as
insects that sacrificed themselves in circuitry, causing entire systems to fail. Similarly, we find the
metaphor of “bugs” not only to be pervasive in computer science, but also to be a term increasingly
associated with the craft of programming.
Dan describes two types of bugs in computers: hardware bugs and software bugs. The former are more
problematic because their flaws are embedded in the circuitry of the hardware and often require physical
tampering or replacement (1813-1816). Software “bugs,” on the other hand, can be ephemeral and, if
detected, relatively easy to fix. For the most part, bugs can be remedied with workarounds or “patches,”
which replace broken code with code that works.
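For readers who have never looked at source code, the following is a minimal and purely hypothetical illustration, written in Java (it is not drawn from any of the software discussed in the interviews), of what a software bug and its “patch” can look like in practice:

    // A classic "off-by-one" bug: the loop walks one step past the end of the
    // array, so the program crashes with an ArrayIndexOutOfBoundsException.
    public class OffByOne {
        static int sum(int[] values) {
            int total = 0;
            for (int i = 0; i <= values.length; i++) {  // BUG: should be i < values.length
                total += values[i];
            }
            return total;
        }

        // The "patch" replaces the broken loop bound with a correct one.
        static int sumFixed(int[] values) {
            int total = 0;
            for (int i = 0; i < values.length; i++) {   // fixed bound
                total += values[i];
            }
            return total;
        }

        public static void main(String[] args) {
            int[] data = {1, 2, 3};
            System.out.println(sumFixed(data));  // prints 6
            System.out.println(sum(data));       // crashes: the bug in action
        }
    }

The point of the example is only that the defect is tiny, easy to overlook, and entirely fixable once someone notices it; the bugs discussed in the interviews are of course far subtler.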
It is hard to imagine that a bug in software can be anything but bad. From the perspective of a consumer
or user, bugs are obnoxious when software begins to fail. But not all bugs cause software failure.
Therefore, the innate quality of bugs is less categorical in the discipline. Dan treats bugs as
synonymous with “security holes”, “broken code” and “undocumented” features of software, which only the
computer hacker will know about (1354-1369). The ethical dilemma occurs when the hacker exploits the bug
by taking over the end user’s operating system or by blackmailing an organization for personal gain
(1657-1663). Bugs thus become part of a larger discussion of establishing professional rules within the
discipline for upholding proper ethical behavior.
The development of powerful computers and programs specifically designed to test hardware and source
code has made the chore of finding “bugs” easier to accomplish. During the interview, Dan describes this
development of using technology to detect broken code as a turning point for tackling flaws in machines,
especially the more elusive hardware bugs (1803-1811). Interestingly, using machines to find bugs has
also been the basis of implicit criticism in the field. During the dot-com boom, when startup companies
were trying to accelerate the development of stable, secure software, they were offering cash incentives to
outsiders to join in the bug hunt. Dan suggests that entrepreneurial individuals might have taken
advantage of these rewards by developing homemade programs to automate the detection of software
flaws. Part of Dan’s critique of this methodology, for example, was that compiling a long list of bugs
and demanding payment for it is tantamount to blackmail (712-729).
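To make the idea of “automating the detection of software flaws” a little more concrete, here is a deliberately toy sketch, again hypothetical and again in Java, of the general principle: a program feeds many random inputs to a routine under test and reports any input that makes the routine crash. The tools Dan alludes to were far more sophisticated, but the logic of mechanized bug hunting is similar.

    import java.util.Random;

    // Toy "fuzzer": hammer a routine with random inputs and report any crash.
    public class TinyFuzzer {
        // Stand-in for the code under test; a real campaign would target a
        // parser, a network protocol handler, a browser component, etc.
        static int parsePercentage(String s) {
            int value = Integer.parseInt(s.replace("%", ""));
            return 100 / value;   // deliberate flaw: fails when value == 0
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            for (int i = 0; i < 10000; i++) {
                String input = rng.nextInt(5) + "%";   // random inputs like "0%".."4%"
                try {
                    parsePercentage(input);
                } catch (RuntimeException e) {
                    System.out.println("bug found: input \"" + input + "\" -> " + e);
                    break;   // one crash is enough for the demonstration
                }
            }
        }
    }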
An interesting theme in Dan’s interview is the way bugs, once they are found, are handled by
companies. Bugs have transformed the way companies respond to serious flaws in software. The 1990s were
one such turning point, when private companies were striving to make a better product. For example, Dan
discusses the difficulty of getting the attention of company software developers at Sun to address security
problems in their software. Of course, at the time there existed a formalized system for reporting bugs,
like filing a “bug report” (533-540), but these notices yielded no immediate, appropriate response from
the company. It took public announcements on an Internet forum to get the attention of the right people
(508-511). Later bug reporting developed into a more rigorous, grassroots process that harnessed the
talent pool of the programming community. Graduate students played a key role in the development of a
kind of culture that became more sensitive to bugs. During his internship at Netscape, Dan refers to these
episodes as “bug fire drills”, which indicate a well-calibrated response effort. Once a bug was discovered,
software had to be reengineered, a corrective patch developed, and a mechanism for its distribution
made ready within a short period of time (668-682).
Bugs have also altered computer science in the last 20 years. Dan draws distinctions among bug finding in
at least three areas that differ in their contexts and spheres of activity: academia, the business setting, and the
open-source computing community. In academia, bugs can bring instant fame and notoriety to the
anonymous graduate student (475), popularity and prestige among peers and professional institutions
(509-513), and even an expensive piece of hardware from an appreciative vendor as an expression of
thanks (788-792). Bugs also provided quite an enviable publishing paradigm for professors and graduate
students alike. In essence, the model is a 5-step recursive algorithm: 1) find the bug, 2) announce it in a
paper, 3) find a solution, 4) publish the results, and 5) start again. Besides eking out at least two papers
for each bug, sometimes researchers received bonus recognition by having the discovered bug named
after them (834-838). Any specific examples?
Within a business setting, “bugs” translated into hard cash. In the mid-90s, startup companies like
Netscape introduced a “bugs-bounty” program where individuals (usually graduate students) were offered
$1000 for each discovered bug (671-677). Although Netscape established a competitive norm for
identifying bugs in mission critical software, the program nevertheless caused some resentment among
participants, who perceived other people exploiting these incentives (712-729, 1657-1663).
In contrast to these examples of incentives that glamorized the process of finding bugs, the other-side-of-the-coin perspective of finding bugs also existed. Identifying and fixing a software bug remained a thankless
job, a “drag” to some extent, especially within the open source community (829-831), where personal
satisfaction was the sole driving force in the effort. Dan however felt that finding bugs over and over in a
computer system was an indication of a larger problem with the process. From an academic or an
employee’s point of view, Dan found professional satisfaction in being able to leverage the existence of
“bugs” to build a better system (1575-1582). Bugs, therefore, could be used as a legitimate reason to
redesign computer systems (834-838, 1585-1595).
At the end of the first interview, Dan makes a remarkable comparison between bugs and the field of
computing: “bugs” form the basis of all computer programming (1354-1369). “Bugs” are the
“undocumented” features of a piece of software, which programmers try to exploit for the good or the
bad. Working with bugs, whether finding them in their own software, or in someone else’s, is the primary
role of the programmer. Programmers deal with bugs in the way that many people deal with more
mundane aspects of everyday living: they strive “to produce something that works most of the time.” For
Dan, there is no difference between programming and discovering bugs. Finding and exploiting a security
hole is analogous to the normal programming process. For Dan there is something very gratifying when
attempting to transcend the limits of the code and make the machine “dance in a way it was never meant
to go.” Bugs offer the programmer a path of “trying to do things that you’re not supposed to be able to do
at all” (1354-1369).
SHALL WE PLAY A GAME? Tish Stringer
The beginning of an etymology of “hacker”
905-906
DW: … I don’t want to use the term hacker because that means different things to different people, so
why don’t I just say from the security researcher’s perspective…
Dan himself offers us the best introduction for the importance of exploring the term “hacker”. It means
different things to different people. To some it conjures visions of socially awkward teens, squirreled
away alone in a basement working feverishly to take out their angst at an unjust world by launching
attacks on other computers or stealing private information. For others it simply means working, fixing, or
just being really good at what you do on a computer. And Dan himself uses it in both these ways, good
and nefarious. In both of the cases listed here, the person who is called a “hacker” is shadowy and
unethical, either because they inflict perceived harm or because they are stealing. In both cases the
“hacker” is opposed to Dan’s more ethical and academic way of doing things.
961-965
DW: Oh, I’m certain that whoever the hacker was who did Blaster thought that their actions were
perfectly ethical. In their own warped worldview. But, y’know, they were helping some other cause
by, by damaging people who were using Windows. And this is the same kind of logic that y’know,
"only bad people get AIDS." Y’know, replace AIDS with Blaster worm and bad people with
Windows machines and it’s the same kind of faulty logic.
2409-2417
DW: … subsequent to our paper, some hacker broke into some Diebold system and carted away a
bunch of private email.
CK: So those aren’t the same thing, then?
DW: Those are separate.
CK: Oh OK. That’s important to know.
DW: Yeah. So some hacker carted away all this, this treasure trove of internal Diebold emails and
that’s what’s now being mirrored all over the place. So I have absolutely no involvement in that.
I especially like the image here of the hacker as pirate, carting off treasure. “Pirate” is a very similar word,
carrying both valued, even valorized, connotations and evil, maligned ones. But what do hackers say
about themselves? Are they shadowy figures of doom? The following definition is from the Jargon File,
an online dictionary of terms commonly used in computer oriented culture, “a comprehensive
compendium of hacker slang illuminating many aspects of hackish tradition, folklore, and humor.” The
Jargon File is edited by Eric Raymond, who considers himself to be an “observer-participant
anthropologist in the Internet hacker culture.” Hence the awkward, distant formality that seems to be
dangerously close to home. Eric’s work reminds me a little of the Nacirema, in that style makes it “true”.
From the Jargon File:
hacker: n.
[originally, someone who makes furniture with an axe]
1. A person who enjoys exploring the details of programmable systems and how to stretch their
capabilities, as opposed to most users, who prefer to learn only the minimum necessary.
RFC1392, the Internet Users' Glossary, usefully amplifies this as: A person who delights in
having an intimate understanding of the internal workings of a system, computers and computer
networks in particular.
2. One who programs enthusiastically (even obsessively) or who enjoys programming rather than
just theorizing about programming.
3. A person capable of appreciating hack value.
4. A person who is good at programming quickly.
5. An expert at a particular program, or one who frequently does work using it or on it; as in ‘a
Unix hacker’. (Definitions 1 through 5 are correlated, and people who fit them congregate.)
6. An expert or enthusiast of any kind. One might be an astronomy hacker, for example.
7. One who enjoys the intellectual challenge of creatively overcoming or circumventing
limitations.
8. [deprecated] A malicious meddler who tries to discover sensitive information by poking around.
Hence password hacker, network hacker. The correct term for this sense is cracker.
The term ‘hacker’ also tends to connote membership in the global community defined by the net
(see the network). For discussion of some of the basics of this culture, see the How To Become A
Hacker FAQ. It also implies that the person described is seen to subscribe to some version of the
hacker ethic (see hacker ethic).
It is better to be described as a hacker by others than to describe oneself that way. Hackers
consider themselves something of an elite (a meritocracy based on ability), though one to which
new members are gladly welcome. There is thus a certain ego satisfaction to be had in
identifying yourself as a hacker (but if you claim to be one and are not, you'll quickly be labeled
bogus). See also geek, wannabee.
This term seems to have been first adopted as a badge in the 1960s by the hacker culture
surrounding TMRC and the MIT AI Lab. We have a report that it was used in a sense close to
this entry's by teenage radio hams and electronics tinkerers in the mid-1950s.
The Jargon File actually labels the maligned sense of the term “cracker” rather than “hacker”. In this
case, “hacker” is more of a badge of honor, to be bestowed upon you by someone else. I find the original
meaning of “hacker” as one who makes furniture with an ax fascinating. It seems to me that someone
crafting furniture with an ax is probably not making very finely detailed furniture but rather going at it in
a crude manner. In either sense of the word, valued or despised, both definitions rely on expert
knowledge and skill, something seemingly far from making a chair with an ax.
I found one other interesting related entry for our purposes in the Jargon File:
hacker ethic: n.
1. The belief that information-sharing is a powerful positive good, and that it is an ethical duty of
hackers to share their expertise by writing open-source code and facilitating access to
information and to computing resources wherever possible.
2. The belief that system-cracking for fun and exploration is ethically OK as long as the cracker
commits no theft, vandalism, or breach of confidentiality.
Both of these normative ethical principles are widely, but by no means universally, accepted
among hackers. Most hackers subscribe to the hacker ethic in sense 1, and many act on it by
writing and giving away open-source software. A few go further and assert that all information
should be free and any proprietary control of it is bad; this is the philosophy behind the GNU
project.
Sense 2 is more controversial: some people consider the act of cracking itself to be unethical, like
breaking and entering. But the belief that ‘ethical’ cracking excludes destruction at least
moderates the behavior of people who see themselves as ‘benign’ crackers (see also samurai,
gray hat). On this view, it may be one of the highest forms of hackerly courtesy to (a) break into
a system, and then (b) explain to the sysop, preferably by email from a superuser account,
exactly how it was done and how the hole can be plugged — acting as an unpaid (and
unsolicited) tiger team.
The most reliable manifestation of either version of the hacker ethic is that almost all hackers are
actively willing to share technical tricks, software, and (where possible) computing resources
with other hackers. Huge cooperative networks such as Usenet, FidoNet and the Internet itself
can function without central control because of this trait; they both rely on and reinforce a sense
of community that may be hackerdom's most valuable intangible asset.
These two definitions, taken together, show that the auto-ethnographic work of a community self-identified as “hackers” differs strongly from the more commonly understood, perhaps media-generated,
definition of the rogue figure out to do evil from the basement of their parents’ home.
Regulatory Capture: Ebru Kayaalp
What does regulatory capture mean?7 In the Economist, it is defined as such:
“Gamekeeper turns poacher or, at least, helps poacher. The theory of regulatory capture was set out by
Richard Posner, an economist and lawyer at the University of Chicago, who argued that “Regulation is
not about the public interest at all, but is a process, by which interest groups seek to promote their
private interest ... Over time, regulatory agencies come to be dominated by the industries regulated.”
Most economists are less extreme, arguing that regulation often does good but is always at RISK of
being captured by the regulated firms.”
Richard Posner’s approach to regulation exactly echoes the Chicago School’s traditional attitude toward the market
economy. The Chicago School of Economics became famous through the theories of Milton Friedman, who
had been influenced by the ideas of Hayek. Friedman and his colleagues in Chicago support the
deregulation of markets, free trade, and the retreat of state intervention. It was an assault on the
macroeconomic assumptions of Keynes, which ended up as a thoroughgoing critique of antitrust law,
administrative regulation, tax policy, trade and monetary theory. In brief, they support the theory of a
competitive market as a regulatory system. Elsewhere Posner wrote that "The evils of natural monopoly
are exaggerated, the effectiveness of regulation in controlling them is highly questionable, and regulation
costs a great deal."8
According to the Chicago School of Economics, governments do not accidentally create monopolies in
industries. Rather, they too often regulate at the insistence, and for the benefit, of interest groups who turn
regulation to their own ends. For them, administrative regulation serves the regulated entities rather than
the consumers.
We can simply define ‘regulatory capture’ as the capture of ‘regulators’ by the regulated9. ‘Capture’
means that responsible authorities act to protect the same illegal practices that they are charged with
‘policing’. ‘Regulator’ is the class of professionals and authorities within corporations, organizations or
jurisdictions having formal administrative and legislative responsibilities for maintaining accountability
within those units. Examples of the ‘regulator’ class might include auditors and accountants, lawyers and
police, medical practitioners and nurses, government and private industry ‘watchdog’ authorities,
researchers and scientists. The ‘captors’ are supposed to be ‘regulated’. They might include major
industries, important customers, large corporations, political associations, professional elites, community
leaders, and organizations.
There are many examples of ‘regulatory capture’ in different sectors, such as medicine and banking. An
example can be given from the e-voting system as well: as the Los Angeles Times reported on Nov. 10,
former California Secretary of State Bill Jones is now a paid consultant to Sequoia. As secretary of state
until 2003, he regulated the company's voting related services; now he works for them. Or Diebold
employs Deborah Seiler, who was chief of elections under California Secretary of State March Fong Eu10.
7 My aim here is not to map out the genealogy of the concept “regulatory capture.” However, I think that it is necessary to cite
some of the basic usages of this concept to shed light on how Wallach uses it with a meaning different from the one originally
given to it in economics and law.
8 One of his law students tells the story of Posner, as a part-time professor at the University of Chicago Law School, coming into
the first day of class and writing the word "Justice" on the blackboard. He then turned and said to his class that he did not want to
hear that word in his course. Posner's approach is not a deliberative one, but a cost-benefit analysis of law. This information is taken
from Ralph Nader’s article “Microsoft Mediation,” In the Public Interest, November 22, 1999.
9 This definition is taken from the article “Regulatory Capture: Causes and Effects”, by G. MacMahon, at
www.iipe.org/conference2002/papers/McMahon.pdf
10 It is Diebold's president, Bush contributor Walden O'Dell, who stated in an Aug. 14 fund-raising letter to Ohio Republicans: "I
am committed to helping Ohio deliver its electoral votes to the president next year." O'Dell has since stated that he regrets the
wording in the letter: "I can see it now, but I never imagined that people could say that just because you've got a political favorite
that you might commit this treasonous felony atrocity to change the outcome of an election." See
www.diebold.com/whatsnews/inthenews/executive.htm.
In the interview, Dan Wallach first uses the concept “regulatory capture” while arguing that checks and
balances should come from independent testing authorities. He then carries the argument to a different
level. He suggests that people (regulators) are setting their own standards and defining their own terms.
The standards they have are weak standards. “The people who certify them don’t know what they are
doing.” “Election officials don’t understand computer security, they don’t understand computer security,
they don’t understand computers. They don’t know what computer is” (emphases are mine) (1727-34).
Therefore, he shifts his criticism from “independent testing” to “expertise testing”. In other words, he
criticizes officials’ lack of knowledge about computers rather than the question of whether the testing is
“objective”, “independent”, or driven by “self-interest”.
If the problem here is defined in terms of “regulatory capture”, it should not matter whether the officials
have adequate knowledge about computers or not. Even if they know everything about computers, they
can still manipulate this knowledge simply for their own interests. And it would be easier if they knew more
about computers, which simply gives them more power to manipulate others.
Interestingly, while explaining the case of Avi Rubin (who was on a technical advisory board
and who was claimed to have a potential conflict of interest in his criticism of Diebold), Wallach simply
supports him by suggesting that Rubin “had no financial benefit” (2104). Wallach never mentions the
concept of “regulatory capture” in this case. Instead, he says that “We hear a lot that [we have a conflict
of interest]. What we don’t hear is how many of these election officials were given campaign donations
by members of the election industry, like Diebold (2109-10).” I believe that Wallach’s attitude cannot be
simply explained as a contradiction. It signifies more than that.
First of all, there is no normative situation in his description. His argument depends mainly on the
motivations of the actors. When he uses terms such as ‘independent test’, ‘conflict of interest’, and ‘self-interest’, he does not mention the concept of ‘regulatory capture.’ For Wallach these concepts (self-interest and regulatory capture) refer to different problems. For example, just being on the advisory board
of a company is not necessarily bad if the guy is a “good guy.” Wallach appears to be much more focused
on the knowledge of people in defining “regulatory capture.”
Second, Wallach’s explanation of “regulatory capture” diverges radically from the original meaning of the
concept. There is nothing wrong with that, but it is necessary to note that when his usage of the word is
deconstructed it raises an interesting point about regulation. As I mentioned before, his understanding of
the concept is more about officials not knowing computers. What he proposes, therefore, is to leave this task to
the experts in this area. What he criticizes, then, is not the attempt at regulation itself (as the Chicago
School does) or the “self-interested” motivations of actors (he criticizes this in a different context)
but regulation done by non-experts. Nevertheless, regulation by the experts, as I mentioned before,
might be corrupt and be defined as “regulatory capture” as well. Therefore, the trouble for Wallach is not
regulation itself but regulation, whether self-interested or not, done by non-experts. This is similar
to the way arguments about objectivism and “truth” are combined and mixed in many cases as if they
were mutually exclusive, and the way subjectivism always comes with the idea of “ignorance” and the non-scientific.
Does regulatory capture mean regulation by the experts? Or does it also mean deregulation in computer
science as originally defined by economists?
Reputation: Chris Kelty
DW: And I think the conclusion is that no, in the end it was better for us financially to give our stuff
away free. 'Cause it created a huge amount of good will, it built, it gave us a reputation that we didn't
have before, so reputations are valuable, it's probably part of why I have this job now, is I developed
a reputation. Also, a vice president at Sun, um, Eric Schmidt, who's now the president of Google, had
a slush fund. He said, You guys are cool, here, thud, and the $17,000 Sun Ultrasparc just arrived one
day for us. So, he sent us a very expensive computer gratis because he thought our work was cool.
Y'know, that was, that was a really valuable experience (769-775)
In the field of computer science, roughly since 1993, the notion of reputation and the associated notion of
social capital have become terms that actors themselves use to explain a phenomenon that is arguably
present in any economy, but that seems most strange to computer programmers and scientists. The
phenomenon is that of non-calculative exchange. It isn't simply non-monetary, for there are ways of
justifying trades and exchanges in terms of a simple (calculable) notion of barter (one in which each
exchanged item is compared to a third non-present measure of worth). Non-calculable exchange,
however, is what is represented for Dan Wallach when, in return for the manner in which he announced a
bug, he is given a Sun Sparc workstation worth $17,000. Such an exchange can't be captured by a notion
of barter, precisely because, like all gift economies, it includes no notion of clearance (cf. Mauss,
Bourdieu, Logic of Practice). That is, the transaction is not over when the gifts are exchanged. For each
gift given extends a time horizon within which an obligation is set up. The gift of the workstation binds
Dan to a particular set of norms (roughly, releasing info about bugs first to a company, and then to the
public). The repetition of these norms builds the reputation. The more Dan cleaves to the norms set up
by these gifts, the more his reputation grows. Should he at any point choose to evade these norms (as,
perhaps, in the case of Diebold, where he goes straight to the public), two things must follow: a) he must
already have enough of a reputation to take this risk (otherwise there are no norms to guide action) and b)
he risks the reputation, and thereby risks its repeated strengthening by following the norms of practice set
up in this instance.
CK: It seems like the incentives too are different, that if, if you can y'know, if there's an incentive like
a bounty, to find bugs, that's different than in the open source world where finding bugs is kind of a
drag but at least you can fix it. Right, or
DW: Yeah, and but there's still reputation to be gained
CK: Yeah
DW: If you find a serious flaw in a widely used program, then, you know you have the, some amount
of fame, from, y'know, your name being associated with having found it. And y'know I also realized
that there's a whole research pipeline here. (829-839)
DW: Right, but y'know, I mean, like a, a contrast I like to draw is between my experience with finding
Java security flaws and with the idea of someone demanding payment for finding bugs.
CK: Um hm
DW: And how, I came out better in the end. Yeah, I was able to land a good summer job, we got the
free computer, and you know, I got the reputation that followed me into, y’know, and helped me get
my job.
CK: Um hm
DW: So, y’know, demanding payment is of dubious legality and clearly unethical
CK: Um hm
DW: And impractical, it just didn’t, wa-, in practice isn’t worth any of those costs. (1414-1423)
Security: Ebru Kayaalp
According to Wallach, security is a combination of confidentiality, integrity, and protection against denial of service.
Confidentiality means that only the people you want to learn something learn it. Integrity means that
only the people who are allowed to change something are able to change it. Denial-of-service protection means
preventing people from making the whole thing unusable (866-870).
His definition of security is based upon the “exclusion” of other people who might threaten the system.
As he states, the machines must be “secure with respect to some threat model (2327).” “Most of these
voting systems are built to be robust against being dropped on the floor, or losing electricity for the
election…(2337-8).” But the machines are not robust enough against the bad intentions of human beings.
The idea here is not to maintain security by developing more robust computers or software (as Peter
Druschel repeated many times in his interview11) but to eliminate all kinds of threats by not sharing
knowledge about the computers with the “bad” guys and/or by keeping surveillance over individuals in
general. This argument implies that every computer is vulnerable and that maintaining security is only
possible through the control of individuals: “excluding” the “bad” guys and “policing” individuals (and
remember Dill’s speech at Rice in which he gives the example of voters who might do anything else
behind curtains with the computers. It was kind of interesting to see that his solution is to bring more
control over individuals through panopticon-like surveillance systems). In other words, in his definition of
security the key actors are individuals (not machines) who can manipulate the machines.
While talking about an abstract “security problem”, Wallach puts himself in the position of a subject who
holds knowledge/power. However, while talking about Microsoft, he is in the position of a customer
who wants the company to share the information it holds: “They never tell you anything about whether,
about what, how it works or how widely vulnerable that it is or anything. There is a lot of information
Microsoft does not give you (915).” Furthermore, Wallach suggests that “there should be full disclosure
(emphasis is mine) (1093)”. He explains his rejection of auditing machines elsewhere by arguing that he
was asked to do that only under a non-disclosure agreement. He believes that the public should know
about what they are voting on: “you don’t even know who you are supposed to trust. You are just told
“trust us (1789-90).” Therefore, his conclusion is that the election system should be transparent, and
companies should share every kind of knowledge with the public. “There is no reason why every aspect
of our election system shouldn’t be public and open to scrutiny. And by keeping these things hidden,
you are hiding weaknesses” (2124-28). But he has already accepted that there will always be weaknesses
in the machines (“no software can be perfectly secure” (2321)). If we think of his definition of security, the
problem here actually is not the weaknesses of machines but the people who might manipulate them.
There seems to be a disparity in his discussion of security and transparency, especially when he speaks
from different subject positions, such as customers, voters, and experts. As in his discussion of regulatory
capture, Wallach’s starting point is human motivations/human nature. As a matter of fact, what links his
arguments about transparency and security is his desire to control the “bad guys”.
11 As far as I remember, Druschel formulates his idea of security in a drastically different way. For him, if everyone
behaves in the way they should behave, they all win, which means they will all benefit from the system. On the other
hand, if someone misbehaves s/he will be denied service (Wallach also mentions this) and will be taken out and
isolated. Here, for me, the basic difference between their approaches is that Druschel’s scenario first allows all people
to play this game or to participate in this system, whatever you call it. And then, if someone does something wrong,
the system (not the person) deprives him/her of the benefits of the system. I think that this principle is based
upon the idea of “inclusion” instead of the “exclusion” formulated by Wallach. His is a negative definition of
“security”. To me, Wallach has a kind of Hobbesian belief that human nature is bad. Therefore, his ideas are more
skeptical and built on the possibility of threats, not ones that have actually occurred. In this system, punishment (not sharing
the knowledge) is given somewhat arbitrarily according to the judgments of Wallach, whereas in Druschel’s case, it is
given after something wrong is done.
Transparency is being demanded of Diebold on behalf of the voters. In many sections, Wallach uses
customers and voters as synonyms. Interestingly, his argument sometimes turns into a struggle for
democracy (“serving the masses”(1653)) instead of just protecting the rights of the customers. However, a
comparison of his different attitudes in the two cases of Sun (which did play ball with them) and
Diebold (which was assumed not to play ball with them) shows that his attempt is not just about
protecting the rights of customers; it is also oriented toward getting some respect/recognition in return for
his “services.” In the Sun case, knowledge was shared just with the company, not with the customers,
whereas in the latter, knowledge was shared with the public which certainly brought prestige and fame to
Wallach.
Good and Bad Bug hunting: Chris Kelty
Perhaps the most striking example of ethics in the interview with Dan Wallach is his story of the better
and worse ways of finding, and then publicizing, bugs in software. Dan is very fond of repeating his
"research pipeline": "there's a whole research pipeline here. Find a bug, write a paper about why you,
why the bug was there, fix the bug, write a paper about how you fixed the bug, repeat. This is a research
paradigm. This, you can build a career doing this. (836-838)" This lather-rinse-repeat methodology
sounds so clean when he tells it this way, but as suspected, there is a lot more that goes into each iteration.
First, finding a bug sounds like a simple procedure: most of us manage to do it every time we sit down at
a computer. But none of us would necessarily know how exactly to find a bug, deliberately.
Being a bad guy is like solving a puzzle. You have a whole bunch of piec-, it's like junkyard wars.
You've got all this crap, and they tell you to build a hovercraft. And you, and so the analogous thing is
you have all this software that's out there, and your goal is to break in. How can you put together the
pieces and the parts that you have lying around, in order to accomplish this goal. It doesn't have to be
beautiful. It doesn't have to be pretty, it just has to work. So, in a certain sense, it's just a form of
computer programming. I mean, this is what programmers do. A programmer deals with a bunch of
broken, bugs in Linux, bugs in Windows, bugs in web browsers, and still produce something that
works most of the time. And then work around all these flaws. So to some extent, finding and
exploiting a security hole is no different than the normal programming process, only you're trying to
exploit things that are not, you're trying to do things that you're not supposed to be able to do at all.
You know, I'm supposed to be able to write HTML and have it appear on your web browser. That's,
it's a documented feature. But I'm not supposed to be able to take over your web browser and, y'know,
take over your whole computer. That's not a documented feature. So, you're trying to take advantage
of undocumented features. And to make the system dance in a way it was never meant to go. (1354-1368)
Making the system dance in a way it was never meant to go means understanding it at least as well as the
person who built it, or perhaps knowing some tricks that the person who built it doesn't. Second, fixing
the bug could require some kind of innovation. Dan, for instance, fixed bugs in the Java programming
language by implementing a technique called "stack inspection" -- something which he only belatedly got
credit for (682-689); a brief sketch of the idea appears just after this paragraph. The technique of writing a paper about each of these stages distinguishes the
economy of academic work from that of the paid programmer. Academics, as is pointed out in many of
these interviews, value the finished paper which documents their work far more than any programming
that they do. It's more valuable for career and reputation than a well built application. Paid programmers
on the other hand are generally excluded from any such economy of reputation; they may be well
respected in a firm (and the experience of open source programming blurs the line between the in-firm
reputation and the out-of-firm reputation) and they very likely will do no more than file a report to a
manager confirming that the bug no longer exists (cf. Ellen Ullman's book "The Bug", which tells the story
of this kind of environment in somewhat excruciating detail).
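For the curious, here is a minimal sketch of the stack-inspection idea mentioned above, written in Java. It illustrates the general mechanism rather than Dan's actual implementation, and it relies on the classic (now deprecated) java.lang.SecurityManager API: before a sensitive operation proceeds, the runtime checks that every caller currently on the call stack has been granted the required permission, so untrusted code cannot launder a dangerous request through trusted code.

    import java.io.FilePermission;

    public class StackInspectionSketch {
        static void readSensitiveFile(String path) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                // Walks the current call stack and throws SecurityException if
                // any frame lacks permission to read the file.
                sm.checkPermission(new FilePermission(path, "read"));
            }
            System.out.println("permission granted for " + path);
            // ... the actual file access would go here ...
        }

        public static void main(String[] args) {
            // With no security manager installed (the default), the check is
            // skipped; on older JDKs, running with -Djava.security.manager and a
            // policy file makes the check bite. Recent JDKs have deprecated and
            // are removing this mechanism entirely.
            readSensitiveFile("/tmp/example.txt");
        }
    }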
Outside of this relatively simple pipeline, however, is a set of decisions, attitudes, and intuitive judgments
which weigh heavily on some very important choices: 1) whether and where to look for a bug, 2)
whether and where to report the bug, and 3) how to negotiate 1 and 2 with the owner of the software.
Dan teaches by example. It is obviously one of his most successful techniques. In this case, he tells a
story to his students of how he has handled these decisions, and how others have. In telling the story of
his first experience with finding a bug in Java in 1995, Dan says:
DW: So, first ever ethical dilemma. We have a result, we have to make some decisions, like, we have
some attack code, do we want to release it to the world? Probably not. We have a paper that describes
some attacks. Do we wanna release that to the world? Yeah. So, we announce it on a, a mailing list
called the Risks Digest. Comp.risks newsgroup. So we said, we found some bugs, basically we said:
abstract, URL. ...
CK: So you'd actually written code that would exploit these securities?
DW: Yes
CK: That was part of writing the paper?
DW: Yes, but we kept that all private.
CK: Um hm
DW: So this was sort of
CK: But it was published in the paper?
DW: The paper said, you can do this, this, and this.
CK: Uh huh
DW: But the paper didn't have code that you could just type in and run.
CK: Ok
DW: And we never, and to this date we never released any of our actual attack applets. I mean, we
would say what the flaws were, well enough that anybody with a clue could go verify it. But we were
concerned with giving, you know, giving a y'know press a button to attack a computer
(506-533)
Dan's decision to keep it secret played well with Sun. They "gifted" him with a $17K Sparc workstation
(cf. "gift economy"). Somewhat as a result of Dan's novel form of security research (no one was finding
bugs in this unsolicited manner), Netscape (who had implemented Java in their browser) developed a
procedure called "bugs bounty" which would pay people who found bugs "Find a bug, get $1000."
Netscape even went so far as to hire Dan on as an intern, where he got a chance to learn about the inside
procedure at a company regarding bugs, which had a certain appeal for enterprising kids like Dan:
DW: There was a kid at Oxford. Uh, David Hopwood. Who for a while was living off bugs bounties.
He just kept finding one flaw after another. Some of them were just really ingenious.
CK: That's great.
DW: And he found out I was an intern at Netscape and said, shit, I can do that too. The next thing you
know we had David Hopwood working there as well. And that was a really, so one of the interes-, so I
saw so some of my Princeton colleagues kept finding bugs and they got bugs bounties and of course I
didn't { } them because I was employed at Netscape. It was all a good time. You know, I had a
chance to see these bug fire drills for how the vendors, how Netscape would respond to bugs and get,
get patched versions out within you know two days or one day or whatever
(671-679)
As a result of this experience, Dan takes a pretty hard line on the distinction he draws between the right
and wrong ways to find bugs: Dan's way was to find the bug and to then more or less privately alert the
bug-owner (in this case, Sun) in order to give them a chance to prepare for, fix or deny the existence of
the bug Dan had found; he distinguishes this from an activity he refers to as blackmail:
DW: When I was at Netscape, Ok, there’s this idea now that grad students, y’know, so Wagner and
Goldberg started it, Drew and I kept it going, there was an idea now that grad students, anywhere, who
don’t know anything, can find bugs and get famous for it. It was even possible to write special
programs that would allow you to find bugs by comparing the official and unofficial versions. So I
could come up with a long list of bugs and then take them to Netscape and say: I have a long list of
bugs, give me a stack of money or I’ll go public! But that’s called blackmail. The reaction from
Netscape probably would not be friendly. In fact, you would likely end up with nothing in the end.
CK: Uh huh.
DW: Whereas you know, we had given our stuff away.
CK: Um hm.
On the one hand, it's easy for Dan to suggest that blackmail is wrong, and that there are better, more strategic ways to deal with the finding and publicizing of bugs. On the other hand, it is possible to see just how particular and situated a notion of ethics can become. The idea of writing a program to automatically find bugs is a product of a particular time period and context—one which Dan himself helped to bring about—the late 90s, when the finding of bugs had suddenly been given an economic value. It is essentially the introduction of a cash economy where only an informal economy existed before (Sun's gift to Dan). To put a value on bugs, to, in effect, create a market for finding bugs, can only produce one kind of incentive: he who finds the most bugs wins. But something about such behavior really bugs Dan: it isn't fair play, though only by some standard of fair play (some set of conventions) that is imperfectly understood, novel, and highly situated within the world of bug-finding, internet-savvy computer science graduate students in the late 1990s.
The point of highlighting these facts is not to impugn any particular person's ethics (and certainly not Dan's) but to raise a couple of important questions:
How are "ethical decisions" culturally local and historically specific, rather than personal, universal, or philosophical?
What role does "economic incentive" play in ethical behavior, especially if "economic" is understood to encompass both the informal (gift) and the formal (money) economies of academic and corporate worlds?
If such decisions are in fact local and specific, what kind of pedagogy is appropriate? How should someone like Dan teach his students about right and wrong, especially when things change rapidly, and often as a consequence of one's own actions?
Paper: Ebru Kayaalp and Chris Kelty
This insistence on not using paper in voting systems reminds me of the fascination, in third-world countries, with using technology for any reason, however absurd. Technology always means more than its functions. It symbolizes being a more developed and modern country. Using paper signifies something primitive and backward. Paper becomes an "undesired" object, which must stay in a country's past. Computers, by contrast, even though they might have some bugs, symbolize the future, power, and progress. Therefore, I think the hatred of paper is not just because "it weighs a lot, you have to store it, it gets humid, it sticks and it can be ambiguous," as Wallach argues. These are practical issues, and the same mess can happen in the case of computers as well. I would prefer to express the hatred of paper in general in terms of its "denial of coevalness" (Fabian).
One of the common comparisons with the US, when it comes to elections, is India--the world's largest
functioning democracy, with four times as many people and a considerably higher illiteracy rate.
Interestingly, at the same time that the United States has been struggling with its voting technology and the cultural rejection of paper, India has been conducting its own experiment—one which has been
continuing off and on since 1989. Arun Mehta, who runs a couple of tech-related mailing lists and lives
in Delhi, wrote the following about his involvement in the first round of interest (on the India-gii mailing
list, Dec. 5, 2003):
>Who made them?
BEL and ECIL.
>Who audited them?
Funny that you should ask... (older list-members who have heard this before are begged indulgence)
when EVMs were first introduced in 1989, George Fernandes and Jaya Jaitley (who were then in
opposition) called a bunch of local computer people to give their opinion of whether these could be
used for hijacking an election. We all said, of course -- depends how you program it. But what about
the demo that the companies had offered? We all shook our heads -- easy to write software that
behaves one way in a demo, totally different in an actual situation. Now, that, for a layperson was
difficult to follow.
Therefore, one Sunday morning, I sat down and wrote 3 hijacked-EVM programs. What these needed
to do, in order to elect the desired person, was to determine two things:
1. Was this a demo, or the real thing? Easy: if it lasts more than 2 hours, it is the real thing.
2. Which person to elect? For 2, I had several options:
a. Elect whoever "wins" in the first half hour. If you, as a candidate, know this, you send your few
supporters to queue up as soon as voting starts.
b. Type in a secret code when you go in to vote. There was a c, but I forget how that worked.
Anyway, I showed these programs to George, who was very excited. A press conference was called,
on the dais was VP Singh (then leader of the Janata Dal, a couple of months later Prime Minister),
George, my computer and me. We didn't ask that EVMs not be used -- merely that there should be
transparency, in light of the obvious pitfalls, and a paper trail. The hall was full of media people and
photographers. The next day, this was front page headline news (it even got coverage on BBC and
New York Times – but not on TV, which was a government monopoly then) and made the editorials
in the next few days. VP Singh said that in the aeroplane, people were actually sending him notes
through the stewardess on the subject! In that campaign he repeatedly asked, how can the source code
and circuit diagram only be accessible to one political party (I wonder how many politicians would
even have recognised them, if someone had mailed these to them!)
Seshan was then the election commissioner, he ruled that EVMs would not be used in that election.
The new government that came in set up a committee to look into EVMs, one of the people on it was
Professor PV Indiresan. I had a brief look at their report at the election commission, basically they
checked to see if someone could mess with the connectors and stuff – it did not seem to address the
questions we asked. I never received a copy of the report for detailed study.
Gradually, over the next decade, EVMs were introduced into elections, and AFAIK, there was only
one case where their role was questioned, in a Bangalore election, where it was alleged that the
opponent, the local political heavy, had access to surplus machines at the factory, and had switched
them with the real ones after the election and before counting.
Over the last four months, probably as a result of the media coverage of US voting machines, a small but persistent group of people have started to question whether Indian EVMs are dangerous or risky in the same ways as US machines. Most seem to agree that there is a need for a voter-verifiable paper trail, but all are in general agreement that the system, even with its flaws, beats the old paper-ballot and ink-thumbprint systems that it replaces. The notion that there is a relationship between national economic (or cultural) development and the choice to use (good new) electronic voting machines over (bad old) paper ballots is more than just irrationality (as Dan suggests it is). Indeed, the fact that this debate about modernity and development can happen in a putatively developed and a putatively developing nation at the same time seems proof enough that the categories of "development" have themselves become irrational—perhaps, ironically, because of technology?
Clue: Chris Kelty
DW: I mean, we would say what the flaws were well enough that anybody with a clue could go verify it. But we were concerned with giving [people] a y'know "press a button to attack a computer" (525-528)
DW: You know, Hart Intercivic won that little battle, and I got to learn who on the City Council had a
clue and who on the City Council was clueless. I got to learn that Beverly Kaufman, our County
commissioner is ... (silence) ... insert string of adjectives that shouldn't be printed. I can't think of any
polite thing to say about her, so I'll just stop right there.
(1772-1775)
There are always colloquial ways of forming distinctions. For Dan Wallach, the colloquialism "having a clue" seems to be a particularly useful one. It signifies expertise, know-how, knowledge, but also a certain distinction within knowledge: some people are smarter than others, quantitatively and qualitatively. These are people "with a clue." The origin of the colloquialism seems a bit obscure, but in the American context it is definitely tied to several pop-culture references: Clue, the board game (in which people collect clues to discover who committed a murder, where, and with what weapon), and the 1995 movie Clueless, with Alicia Silverstone and based on Jane Austen's Emma, which popularized the phrase in an unforgettable high-school jargon. Other permutations include "cluetrain" (as in, "Clue train is leaving the station: do you have tickets?"), clue stick, and clue-by-four (as in "He needs to be hit upside the head with a clue-by-four"). The last is a registered trademark (http://www.thereisnocat.com/showme487.html).
DW: So, y'know, Joy gets some, Joy gets more credit than he deserves for being a father of Java. No,
Gosling, anyway, that's a separate...
CK: But he wrote the letter, not Gosling
DW: Yes. And his letter implied that he didn't know anything about what he was talking about.
CK: Hm
DW: His letter demonstrated that he was completely clueless.
CK: Hm
DW: So, we're like, hum, that was weird.
CK: So did that, was that worrying in the sense that, you know, this decision to release the code or not,
the people that should know at Sun seem to not understand what you thought anybody with a clue
should be able to understand?
(562-572)
The use of “clue” and “clueless” to make these distinctions is not frivolous. One crucial function is that it
gives the speaker a way of suggesting that someone who should understand something (Bill Joy), does not
in fact understand--without needing to explain exactly how. The shorthand expands to: I could explain in
detail why this person should understand, but they should 1) be able to figure it out themselves and 2) not
waste my time. Similarly, to give someone the credit for having a clue means including them amongst the
knowing, without specifying how:
DW: So, Netscape wrote us a thousand dollar check.
CK: Um hm
DW: So, Netscape clearly had a clue.
(581-584)
Or again by implication:
DW: Which is, and one of the, and that was an interesting, suddenly I got to see who I thought was
cool and, who I thought was cool, and who I thought was clueless, and, one of the things that hit me
after that was I really liked this guy at Netscape, Jim Roskind. So, I got back to Princeton and we're
emailing him saying, Hey, Jim, looking for a summer internship, how 'bout Netscape? Couple of
minutes later the phone rings. So, he talked to me for like an hour. And next thing I know I'm flying
out to California again for an interview. And it wasn't so much a job interview as it was like a
debriefing. Or it was just, all of these people wanted to chat with me.
(633-639)
Finally, a more standard usage:
these independent testing authorities that I mentioned, their reports are classified. Not military kind of
classified, but only election officials get to read them. They are not public. So you joe-random voter
wanna, you know, are told 'Just trust us! It's certified.' But you can't even go--not only can't you look at
the source code, but you can't even read the certification document. So you have absolutely no clue
why it was certified, or you don't even know who you are supposed to trust. You're just told 'trust us.'
(1785-1790)
AES conference paper: Anthony Potoczniak
The Technology of Peers, Security, and Democracy:
Studying Ethics and Politics in Computer Science Research Practices
Anthony Potoczniak, Chris Kelty, Hannah Landecker
Prologue
A group of anthropologists, philosophers, and computer scientists is involved in a collaborative project at Rice University called the Ethics and Politics of Information Technologies, one of whose goals is to understand how the political and ethical practices of experts manifest themselves in the everyday decision-making processes of science. The project is organized as a set of iterative interviews with scientists, one of whom I will talk about this afternoon. The scientist I will focus on is Dan Wallach, a computer science professor at Rice University. His area of expertise is computer security.
To publish or not to publish
Two computer science graduate students are sitting in a café talking about Java, the new wonder programming language that promised to revolutionize the way computer users would experience the Internet. One says to the other: "So do you suppose it's secure like they say it is?" It was 1995, and many things were converging in the world of information technologies. Besides the introduction of Java, Microsoft released a completely revamped operating system called Windows 95. The Internet startup company Netscape had adopted Sun Microsystems's Java code into its web browser and was distributing the software for free. This seemed like a really novel approach to gaining market share, but it also put a lot of Internet users at risk. Two scientists had just published an article regarding a huge security flaw in the web browser. This article received a lot of press and also brought to the public's attention the risks associated with flawed programming.
In contemplating this question: "do you suppose it's as secure as they say it is?" Dan Wallach and his friend downloaded the entire library of source code from Sun and began combing through the code painstakingly, line by line. After a brief two weeks, the two had come up with at least a dozen bugs. Their discovery countered Sun's public pronouncements that Java was "great, secure, and wonderful." In fact, to the astonishment of these young graduate students, they found the code to be "horribly broken."
The decision whether or not to go public with this information reveals how these young scientists were thinking about the impact of their research on the discipline. As with the article on the huge security flaw in Netscape's browser, it is apparent that notoriety and recognition could be motivating factors for these graduate students. Dan already had a family legacy of fame: his father, Steve Wallach, was one of
the main characters in Tracy Kidder's Pulitzer Prize-winning book The Soul of a New Machine. But what were these two graduate students going to do with this sensitive information? First, they tried contacting the Java developers at Sun Microsystems, but no one from the company responded. Dan seemed perplexed. While deciding how to proceed, they stumbled upon a conference announcement on security; and here I continue with Dan's own narrative:
“So we submitted this paper [about our findings to a IEEE conference on security
and privacy] with our names removed from it. Now what do we do? So, first ever
ethical dilemma. We have a result, we have to make some decisions, like, we
have some attack code. Do we want to release it to the world? Probably not. We
have a paper that describes some attacks. Do we wanna release that to the world?
Yeah. So, we announce it on a mailing list called the Risks Digest newsgroup…”
Needless to say, very soon after these findings were posted on the newsgroup, the right people at Sun found Dan. The decision to publish the article and to cooperate with the software vendor probably helped Dan land a summer internship at Netscape and get a $17,000 computer from Sun as a "thank you" gift for doing good work.
This brief episode prompts several questions about our project to study scientists:
• How do you speak ethnographically with scientists who have a dense technical and arcane language?
• Why should anthropologists care about bugs in software?
• How do you get from two scientists talking in a café about security to a crisis in American democracy?
The world of bugs
The term “bug” is a metaphor used in computer science to refer to a flaw or defect in the source code
of software that causes programs to malfunction. Anecdotes from the early days of computing describe
“bugs” literally as live insects that caused entire systems to fail.
First computer bug (1945)
The first actual "computer bug" was found and identified at Harvard University in 1945 by the team of the late Grace Murray Hopper. A 2-inch moth was discovered trapped in a relay switch of one of the earliest modern computers. The computer operators removed this bug and taped it into the group's computer log with the entry "First actual case of bug being found," and noted that they had "debugged" the machine, thus introducing the term "debugging" a computer program. Since that time, the term "computer bug" has extended beyond the notion of mere software failure and now encompasses larger categories, synonymous with terms like "security holes," "broken code," and "undocumented features" within software.
Bugs since the early 1990s have had a life of their own in both the public and private sectors. In academia, bugs provided, and still provide, an enviable publishing opportunity for computer science professors and graduate students alike. Within the private sector, "bugs" translated into hard cash. Startup companies like Netscape introduced a "bugs bounty" program in which individuals outside the company - usually graduate students - were offered $1,000 for each discovered bug. These cash incentives for bugs were instrumental in formulating proper professional practice. It is interesting to note that what was deemed ethical concerning bugs was not really explicit prior to the 90s. Bugs before this time existed in the cocoon of academic scientific research, and debugging was a necessary craft of programming in the discipline. Only after the introduction of consumer software did bugs pose a social liability and security risk. In other words, bugs became unmoored from the discipline itself and, through incentive programs like the "bugs bounties," became commodities.
Dan shared with us a fascinating story about a graduate student's attempt to profit from bugs. John Bugsby, which is a pseudonym, developed a sophisticated program called a byte-code verifier, which automated the process of finding flaws in programs. After a short period, Bugsby had compiled a long list of program bugs and eventually went to Netscape to collect his money. The company balked at this demand, and Mr. Bugsby threatened to go public with the information. Dan criticizes this graduate student's "ransom" style and equates it with blackmail. Netscape eventually "acquired" this list of bugs, but the graduate student never got the money he demanded.
Brazil (1985)
Bugs can also have societal implications, as we find in Terry Gilliam's 1985 classic science fiction movie, Brazil. A bug gets trapped in the computer's printer and causes a misprint of a last name. The substitution of the letter "B" for the letter "T" causes an unfortunate administrative error: we find the hapless Mr. Buttle arrested, hooded, and sentenced to death in the living room of his home by the government SWAT team.
Bugs in democracy
The 2000 US presidential election and the Florida voting controversy that accompanied it show how quickly certain technologies like punch-card ballots could lose the public's trust. For Americans, the chad "symbolized the failure of the entire political system" (Lucier, 444). One of the "positive" aftereffects of the 2000 election was the sudden infusion of government funds to update and standardize out-of-date voting machines. Private companies like Diebold - a company that had capitalized on its reputation for providing secure cash transactions - came onto the scene. This is where our fearless hero, Dan Wallach, again enters the picture.
Dan's current activism in electronic voting technologies was influenced by his initial experience working with local government officials in Houston, TX. Dan, now a computer science professor at Rice University and a recognized national expert in computer security, was invited by council members to review the security and reliability of e-voting machines. Dan's hacker mentality and his ability to think like a really good "bad guy" revealed several possible security vulnerabilities during these meetings. During one meeting, Dan removed by hand the memory card of an e-voting machine that stored all the election information and waved it at the council members, saying: "If I can remove this, someone else could." Despite the questions about security and the inability of security experts to independently review the voting machine's software, the city council nevertheless acquired the voting system.
Later Dan teams up with several computer activists who also question the public's trust in software's ability to count votes accurately. Their attention turns to Diebold, which was in the process of selling over $68 million worth of voting equipment to the state of Maryland. Diebold, within the computer security world, is a controversial company. It has kept its voting technology a secret and hasn't allowed anyone access to inspect its voting hardware or software. This issue of secrecy came to the forefront when a substantial amount of actual source code from Diebold's voting machine mysteriously appeared on a publicly accessible web server in New Zealand. Leveraging academic prestige, a group of scholars from several universities, including Dan, published a paper describing Diebold's security flaws, which eventually forced the Maryland state government to postpone the multimillion-dollar purchase of voting machines pending a formal committee audit of the system.
In writing an article revealing security flaws in Diebold's source code, Dan and his colleagues risked
the legal consequences of violating copyright laws and disclosing trade secrets. In defending his actions to
go after Diebold, Dan says the following:
“Companies are actually secondary to their customers. If I damage a company, you know,
I’m sorry. But if the customers are damaged by something that I did, then that’s a
problem. So do I really care about Diebold’s financial health? No. Do I care about the
democracy for which Diebold has sold products? Hell yes. I could really care less about
Diebold’s bottom line. And particularly since they have not exactly played ball with us, I
feel no particular need to play ball with them.”
The transition that occurred between Dan the hacker graduate student and Dan the computer science professor and e-voting machine activist provides an interesting case study for anthropologists, and it poses a paradoxical situation: what makes two situations that appear so similar elicit such different modes of interaction from the same scientist?
On the one hand, we can view Dan's decision to work with Sun Microsystems as altruistic toward the company's well-being. He goes the extra mile, so to speak, to establish a professional rapport and to help the software company fix its software and provide a better product to its consumers. On the other hand, when Diebold comes onto the scene as a major player in the e-voting machine market, Dan's approach is different. He chooses not to work with Diebold and goes out of his way to criticize the company for not cooperating with the community of security experts, of which he considers himself a part.
Bugs as gifts
One way to explain the discrepancy in the way Dan experiences these two scenarios is to view computer bugs in Maussian terms, that is, bugs as part of a "system of total services." The software "bug" of the 1990s shares attributes with a gift around which a norm of reciprocity has been established. As bugs become unhinged from the esoteric domain of academia and become part of a larger social structure, social institutions like public trust and safety are engaged in relation to how bugs affect society.
Mauss describes the gift as part of a system of total services, in which the acts of giving and receiving represent forms of reciprocal exchange involving all social institutions. These forms are prevalent in archaic societies and differ from societies like ours, in which market economies drive the exchange of commodities. As Mauss describes it, a gift contains an innate power that obliges the receiver to act upon it, creating a relationship or form of contract between the parties. This obligation consists of giving and receiving, inviting and accepting, and reciprocating. Rejecting one or another of these forms breaks the alliance and is tantamount to war. Similarly, a gift has the explicit appearance of being given generously and without self-interest, when in fact the opposite is true: there exists an obligation to reciprocate.
In essence, bugs in software follow a similar chronological development, circulating in a system that is not yet regulated by formal laws and contracts. Thus, bugs during the 1990s were part of a total system of services, in which social norms play an important role in establishing the proper conduct for handling gifts. As bugs become a greater concern in consumer software development, they are slowly transformed into commodities. In Dan's narrative, we find two systems working concurrently, in which the bug exists both in a gift economy and in a market economy.
Dan's concerted effort to find bugs and present them as a "challenge gift" to Sun obliges the software company to reciprocate. At first Sun does not respond, and the "gift" of bugs is left unreciprocated until Dan publishes the information publicly. Eventually, Dan establishes a social bond with Sun; however, he is never explicit about expecting anything in return. As a result of accepted
professional behavior, Sun reciprocates and rewards Dan with the gift of an expensive computer, which in Dan's mind reinforces this notion of proper ethical practice. In contrast, Dan considers Mr. Bugsby, who demanded money for finding many bugs, unethical, because the graduate student was not working within a gift economy but rather within a market-driven system. Dan thus places ethical behavior within the realm of the gift economy, and the unethical within capitalism.
What about the current Diebold affair? Although it is difficult to outline briefly the complex collaborative relationship between academia and the private sector, we can nevertheless also understand Dan's decision not to work with Diebold in Maussian terms. Dan disregards his obligation to work with Diebold directly because he feels that the company is in no position to receive and reciprocate such "gifts" -- as he expresses colloquially in terms of "not playing ball with us." As a matter of fact, this breakdown of alliance is described by Mauss within the gift economy as a failure to initiate exchange, which causes two parties to be at war with one another, in this case a war litigious in nature. Dan chooses instead to be an advocate for the company's consumers – the voters, who Dan feels are placed in a position of even greater social risk. The lack of transparency and the inability to openly verify the voting machine show - in Dan's view - how companies like Diebold are willing to accept "gifts" only on their own terms.
Epilogue
At the beginning of my presentation I posed the question: how do we as anthropologists speak ethnographically with scientists who have a densely technical and arcane language? One way is to ask them about the everyday decisions they make and the conversations they have among themselves, in laboratories or in coffee houses. Our collaborative project, which has brought together many different disciplines, examines the day-to-day practices of scientists in their laboratories. We know that science has changed our society, but how does society impact science? Our ongoing study has shown that there are many complex social factors at play even while science is being made. I described in this presentation how even a core technical practice of computer science like finding bugs is tightly intertwined with social norms and behaviors that exist outside the discipline. Incidentally, Dan himself uses the term "gift economy" to characterize the favorable working relationship he established with the software company. However, Dan had no idea that "gift economy" is also a very common term of study in anthropology. Thus it was interesting that, in the course of our discussions about bugs, we anthropologists were also able to inform the scientist about something that is in fact commonly practiced in society.
Peter Druschel Interview #1
Abridged and annotated interview with Professor Peter Druschel's research group at Rice University. This interview was conducted on Thursday, Jan 20th, 2004. Key: PD=Peter Druschel, CK=Chris Kelty, EK=Ebru Kayaalp, AP=Anthony Potoczniak, AS={Atul, Animesh, Ainsley, Alan}.
Peter Druschel is a Professor in the Computer Science Department of Rice University. He has worked on
operating systems, distributed systems, ad-hoc networking and currently on peer-to-peer systems. We
chose Peter and his research interests for obvious reasons: the national trauma surrounding music piracy
has centered primarily on technologies for sharing and downloading files—commonly called "file-sharing software." Only a few of the existing file-sharing systems could fairly be characterized as "peer to peer." Napster, for instance, relied on a central database that kept a record of who was sharing what music on which computer—without it, individuals and their computers were just as isolated as before. Gnutella (or Kazaa or Morpheus), on the other hand, allows users to search each other's machines for music directly, and when something isn't found, the search is continued on the next machine in the network—eventually, two users might connect to transfer a file from one to the other. Since these software programs work primarily with the notion of a "file," there is nothing about them that requires them to be used for sharing music files—that just happens to be what people are most interested in sharing (second, of course, to pornography). For this reason, Peter and his research group are necessarily involved in a highly politically charged environment: one in which they are working on a technology primarily identified with law-breakers. Their quest to research and to demonstrate the "non-infringing" uses of peer-to-peer software thus puts them in an interesting position. On the one hand, they must be able to build and demonstrate principles related to the broad class of uses of a peer-to-peer system (such as p2p mail, chat, or secure archival storage); on the other hand, they must also consider these systems with respect to what legal or illegal uses they might have, and what kinds of political or legal consequences are involved.
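To make the contrast described above concrete for readers outside computer science, here is a deliberately simplified sketch of the two search styles: a Napster-style central index that maps file names to the peers holding them, and a Gnutella-style query that is simply passed from neighbor to neighbor with a hop limit. All of the names and data structures below are invented for illustration; real protocols add message identifiers, deduplication, timeouts, and much else.

# Illustrative sketch only: two ways of finding a file in a file-sharing network.

# 1) Napster-style: a central index knows who has what.
class CentralIndex:
    def __init__(self):
        self.index = {}                                  # filename -> set of peer names
    def register(self, peer, filename):
        self.index.setdefault(filename, set()).add(peer)
    def search(self, filename):
        return self.index.get(filename, set())

# 2) Gnutella-style: no index; ask your neighbors, who ask theirs (flooding).
class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []
    def search(self, filename, ttl=3, seen=None):
        """Forward the query to neighbors, decrementing a hop count (ttl)."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return set()
        seen.add(self.name)
        hits = {self.name} if filename in self.files else set()
        for n in self.neighbors:
            hits |= n.search(filename, ttl - 1, seen)
        return hits

# Tiny example: peers a, b, c in a line; only c has the file.
a, b, c = Peer("a", []), Peer("b", []), Peer("c", ["song.ogg"])
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
print(a.search("song.ogg"))        # {'c'}: found by forwarding, no central database

idx = CentralIndex()
idx.register("c", "song.ogg")
print(idx.search("song.ogg"))      # {'c'}: one lookup, but a single point of control

The centralized index is also a single point of legal and technical control, which is part of why Napster was comparatively easy to shut down; Peter returns to this point below when he notes that in peer-to-peer systems "there is no point at which a governing body or an institution can exercise control."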
Novelty
In our first interview, we talked with Peter and his students about working on these areas. From the get
go, it was obvious that it wasn’t only the politically salient aspects of these technologies that intrigued
them, but their genuine novelty in terms of computer science.
AS[1]: I’m Atul Singh. I am a 3rd year grad student in computer science department. When I joined Rice,
Peter was working on a lot of projects in peer-to-peer stuff, and peer-to-peer at that time was very
interesting for a lot of reasons. First main reason was, it was a very new area, so you could do a lot of things, which many people haven't done. Secondly, you didn't need much of the background to know a lot of things in peer-to-peer, because it was a very recent area; and because you could do a lot of things you can do in peer-to-peer, it was very interesting to see where you can publish papers in that field.
AS[2]: I'm Animesh. I am also 3rd year grad student here. So I did my B.Tech in India and then immediately came here for the PhD program. And when I came here Peter was introducing this [p2p] which was exciting. I mean the area was all about convincing a big research community, which in fact did not have strong opinions in this field. It was still pretty new. It's like a challenge in which you have to turn the heads around.
AS[3]: I’m Ainsley Post. Actually I did my undergraduate at Georgia Tech. and then I came here. I am a
2nd year student and when I first came here I was kind of not sure, what I was gonna work on. But I took
Peter’s class. What interested me about peer-to-peer is kind-of how extreme the technology is—how it is
taking regular distributed systems and kind of taking it as far as it can go and it brings up a lot of harder
technical problems that are more interesting, more extremely unusual. So it makes things more
challenging.
AS[4]: I’m Alan Mislove. Actually I was an undergrad here in Rice and I sort of my last semester here
ended up taking Peter's Comp515 class; it was sort of a project class. We were working on top of
Pastry. And I found the project really really interesting, it was such a new area. I think there is a lot you
can do with it, more than anyone realized. So, I decided to stay here at Rice. I am a second year grad
student...
PD: In 2000, about 3 years ago now, I took a sabbatical, which I spent partly at Microsoft Research lab in
Cambridge. And that was the time when Gnutella and Freenet kind of hit the streets and the public was
interested in this file sharing thing and there was a lot of discussion about copyright issues and ethics and
the technology behind it, and music industries, lawsuits and so forth. We started looking at this more or
less for our enjoyment as a technical challenge. We were looking at these protocols and how they work
and whether one could do better. And then kind of one thing followed the other, we actually came up with
very interesting technical ideas, and realized very soon that this technology had applications far beyond
just file sharing. You can, essentially, build any kind of distributed system... well I shouldn’t say any, but
a large class of distributed systems (that we used to build in a completely different fashion), based on this
peer-to-peer paradigm. One would end up with a system that is not only more robust, but more scalable.
But also fundamentally, it lends itself to this kind of vast grass-roots style of organization; meaning an
individual, using this technology, could potentially distribute a large amount of information to a large
community without necessarily needing the approval of an institution or organization; and without having
to have access to capital or some sort of business model to raise the funds to put the necessary
infrastructure in place. Because, in effect, everyone who participates adds another piece of resources to
this and jointly they carry the entire system—nobody needs to provide dedicated resources. So over a
couple of months it became clear to me that it was interesting not only from a technical perspective, but
also in terms of the applications. In computer science, actually, we rarely have the opportunity to work on
things that have fairly direct impact on society... to really sort of stir things up. Mostly we deal about
things that make something more efficient or perhaps more scalable, it will cover a larger geographic area
or reach more people. But, we really have the opportunity to develop something that fundamentally seems
to change the impact of the technology on society.
A question that recurs throughout the interview is the question of how such novel research is perceived by
other CS researchers, and how the world outside of CS (both commercial and governmental) sees it.
CK: In terms of the kinds of research in computer science or the potential directions that you could have
chosen to go, how unusual is this? Is it something that other computer scientists say you shouldn’t be
doing, because it’s a flash in the pan, or it’s not gonna yield anything or something like that? Is there a
danger there?
PD: I think I will describe this as perhaps something that happened in two phases. Initially when we
started working on this, a fair amount of other groups in the area also started approaching the subject out
of curiosity. But when it became clear what the applications might be, what fundamentally this
technology can do, I think there was really a sort of division in the field. There’s a lot of people who
fundamentally reject the idea of building systems that have among other things the ability to support
things that are subverting the legal system. Essentially, they allow you to do things that are not approved
by the government. And there is sort of a deep skepticism by many people whether there are going to be
commercial applications of this technology—legal and commercial applications. And you continue to
face this problem of trying to convince people on one hand that it’s OK to work on things that may not
have a commercial impact, but then may have a more direct impact on society. And on the other hand,
that what we are doing is in fact likely to yield results. But I think that is actually typical of a good
research project. If everybody is going to agree that this is going to have an impact then you probably
shouldn’t bother, and just hand it over to industry.
CK: How do you get funded for this research? If there is not a clear commercial application and you
can’t clearly go to industry and say “fund us to research this,” how do you think about getting money or
convincing people?
PD: Well, I think I'm talking a little out of the shop here... But I think the classical approach is doing this
is to disguise it a little bit. If you write a proposal you want to emphasize the things that you think have a
commercial impact and perhaps emphasize things bits of the picture that are fairly convincing to
someone, who is looking to solve a particular problem. The government right now is very concerned
about making technology more secure more robust. Well, this technology among other things can do that
potentially. So of course this is what we emphasized in our research proposals. And actually the response
was very positive. We haven’t really had any problem at all raising funds for this, for this kind of work.
Of course we are also now in a bind where we need to produce results, but that’s a problem we like to
have. On the other hand, we have also gotten some funding from Microsoft, for instance, to do this work
and by and large all the big computer vendors are working internally on this technology. I think there is a
sort of a broad realization that it is not clear where it would go, but ignoring it is too risky. So, everybody
wants to have at least a foot in the door, wants to develop some local expertise and some competence.
Hierarchy, Complexity, Experiment.
Because the students were present it was easy to ask about the nature of the work in Peter’s research
group, and especially, the question of organization and hierarchy—since this is a central intellectual
concern in p2p research as well.
CK: So, Can you say more about the structure of how research projects work in Rice Computer Science?
Do graduate students get to pick or choose? Do they get to work one-on-one? How much does the funding
determine where you can work?
PD: It really depends on the faculty member. I think everyone has their style. There's faculty members
who have a very well defined research agenda and funding that, from organizations like DARPA that have well specified deliverables, means you really don't have a lot of flexibility. There is work that needs
to be done by day X and for a grad student to join the faculty member, the grad student has to basically
really work on that project and produce results by a certain deadline. I think other faculty members have
one or two different projects and maybe even three projects going on at the same time. And the students
have a lot more flexibility in choosing what they want to do. And quite often graduate students also have
a lot of impact what the next project is going to be. If someone shows up I am interested in working on X
and it’s not totally outside of my radar screen I might just get interested in it and then if you have initial
results you write a proposal and you get the funding.
AS[2]: I think the good part of peer-to-peer research is that, because it is so new, not every part of the
design space has been explored. So we can very easily convince Peter that this is a good part, and we can
do that … But I mean this has negative parts too—not everything is possible in this world. So you might
want to do something but then you need some advice whether you can really do this job or not. And for
that we need some experience, so the faculty comes to the picture at that time. This gives a lot of
flexibility, because it is a very new field. It is very exciting.
PD: Another aspect is I think it is important for grad students to also have an impact on what they are
doing, what direction their project takes. On the other hand, you need to set also some useful restraints.
Because if you have a number of graduate students and everyone is working on something more or less
unrelated, it just doesn’t scale very well. It stretches my own expertise too far. You have to work yourself
into an area and stay in it by keeping up with the literature. And it also is the fact that there is a synergy
within the group. If everyone works on something unrelated, nobody helps each other, and there is no
synergy there. It is important to have some amount of coordination. This can be as one or two, even three
major projects but beyond that it gets difficult. And the nature of the work in experimental computer
science, at least, what we’re doing, in these is such that one can accomplish very little alone. One really
has to depend on feedback and exchange with other people. Even small groups in many other areas even
with more theoretical computer science - somewhere you work in the corner you come up with these
major results does not work in our area. You might succeed in doing that but if you are working on it by
yourself, chances are you will be overtaken by the time you have it resolved.
CK: What about collaboration across universities? Do people have a lot of work with other universities
or other research institutes?
AS[3]: I spent the summer at UC Berkeley with the Ocean Store project. And right now we are working
on a project which is related to peer-to-peer stuff. So, right now we are extending that in some directions
to see how far we can go in that area. We also collaborate with Microsoft software. So there is fairly large
amount of collaboration out there.
PD: It’s actually one of the positive aspects of this... The base funding of our project is from a large NSF
grant that involves 5 institutions. And it’s unusual for a project of that size... it is very well coordinated.
So we meet twice a year. The students run a student workshop within that, so they have exchange on their
level...
CK: Do you use any peer-to-peer system to distribute the documents or code?
[laughter]
PD: Yeah. That’s actually part of what we're doing.
CK: And what about collaboration internationally? I know you have connections in Germany and in
Cambridge UK. Are there others?
PD: Yeah. We have one more set of collaborators in Paris that are working with us on this sort of email system and they have their own little project on the side and they are using our code after meeting with them several times. They might come to us in the future and then, of course, the folks in Cambridge in England. I think there may be other folks downloading the program but that is more one-way than an exchange.
CK: Do you have a sense that within peer-to-peer research or generally that there is … it is localized or it
is different from country to country or there are different kinds of groups working independently on this?
Does it not divide in that way?
PD: It is a little bit difficult to tell. I should probably say that work in experimental computer systems is
fairly focused, I would say maybe 95% is in this country. There are a few clusters in Europe, one or two
in Japan. It is not very broad based. So, from that perspective is hard to tell.
In all of the interviews, some question about the difference between theoretical and experimental work in
CS and in other disciplines emerged. In all the cases, the definition was slightly different, but in Peter’s
case, there is the added complexity (which is more apparent in the second transcript) of whether the
experimental software is in wide use, and by whom, as part of the experiment. Part of the confusion
stems from the repeated use of terms that seem to refer to humans (such as trust, reputation, incentives,
bribery) but are in fact used to refer to experimental "agents". In this section we ask about these terms
and what they mean.
CK: How do you define this difference between experimental and theoretical computer science research?
PD: So theoretical computer scientists are essentially applied mathematicians. They prove theorems about things. They think about underlying mathematical structure, algorithms, logic and so forth.
Whereas, we are more on the engineering side. We build things, we build prototypes of large systems and
evaluate them. We are interested in the actual construction process of the computer system and so forth.
It is more an engineering discipline than something of a classical science.
AS[1]: Can I say one thing? So, this field, the peer-to-peer system is not just like the theoretical or
experimental systems. It has generated interest in theoretical people too. So, you can find a lot of
publications at top-notch theoretical conferences where people publish and people talk about theoretical
aspects of how you should build a peer-to-peer systems that have these nice properties. And also in some
technical conferences where people look at certain aspects of the peer-to-peer systems and they say these
are the properties you should have or they basically compare whatever existing peer-to-peer systems there
are and which of these systems have these nice properties…nice, optimal properties. So, I mean this has
generated interest not only in our systems community, also in theoretical community.
CK: One of the most fascinating things about reading (what we can understand of) your papers and what I know from my own research is that there is a lot of lingo in the peer-to-peer world that is straight out of the social sciences. I am thinking of words like trust, accountability, reputation, responsibility, bribery, collusion, incentives, community. So I wonder, when you think about and use these terms, do you go to social science work, talk to other people about what these terms mean? How come they are so salient in peer-to-peer research?
PD: That’s actually interesting question. Actually I’ve never thought about it. But, actually most of the
words you just mentioned have been in use in the security community long before peer-to-peer appeared
on the horizon.
CK: How does something like reputation or trust figure? Is it a situation where you sit down and say “I
am interested in how trust works; how can I program that?” or is it something that falls out of
experimenting with these systems?
PD: Well it's more like... Trust is actually, the impact of trust on peer-to-peer systems… is primarily we
don’t have it.
CK: That’s what we say it in social science too. [Laughter].
PD: The conventional distributed systems that were built prior to [p2p] always assumed that there were
preexisting trust relationships. Everybody knows I can trust this node or this entity to do X. And in this
new world, you don’t have this. And in some sense, you realize it is actually an artificial concept that we
in these early distributed systems just assumed to exist, because it makes things easy. But in real systems
that model more closely real, human relationships, it doesn’t exist, at least not a priori. So, yes, in this it’s
actually kind of interesting because it’s simply symptomatic of what we deal with in our work everyday.
We have to really rethink a lot of the concepts in distributed systems that actually existed a long time
before. Because nothing seems to quite fit here, which makes it exciting of course.
CK: So, is the work on peer-to-peer systems, in some ways really dependent on what’s come before?
On the particular ways the internet is built for instance, or the particular ways the distributed systems have
been built in the past, or is it just assumptions about those things?
PD: It is surprisingly different. It is really quite different. We’ve always assumed distributed systems to
some extent in many fields of engineering (it is actually much broader than just computer science). But
the way you built [these] systems is to construct the hierarchy where the things here depend on lower
level things and there are clear well defined relationships between different entities. Everything has a
well-defined role and relies on certain other components to get it done and trust works along this
hierarchy too. And the hierarchy is constructed simply by the assumption of deciding to figure out the
system. You map it out on a drawing board. You hire a contractor who installs all the components and
arranges them and configures them in this hierarchical structure. Then it exists and you turn it on and it
works. It doesn’t grow organically. It is designed. It is artificially put together. It is constructed, whereas
these peer-to-peer systems, there is no such predefined hierarchy, there are no well defined roles. In
principle every entity has a completely symmetric role in the system. There are no prearranged trust
relationships and that, of course, allows it to grow organically. It allows you eventually to start a large
system without putting anything into place, without putting a lot of money, putting things in place. And at
the same time because of its lack of relationships, there is no point at which a governing body or an
institution can exercise control. That’s, of course, what leads to these legal issues.
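The "completely symmetric role" Peter describes is the core design move of structured overlays such as Pastry (the system Alan mentions working on top of, above): every node takes an identifier in a large circular ID space, responsibility for data is divided by proximity in that space, and a node joins simply by inserting itself into the ring. The sketch below is not Pastry (real systems add routing tables, leaf sets, and replication so lookups are efficient) but a minimal illustration of how responsibility can emerge from identifiers alone, with no designated coordinator; all names here are invented.

# Minimal illustration of a "symmetric role" overlay: every node is identified
# by a hash in a circular ID space, and each key is stored on whichever node's
# ID follows it on the ring. No node is special; joining is just inserting
# yourself into the ring.
import hashlib

RING = 2 ** 16   # small ID space for readability

def node_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

class Overlay:
    def __init__(self):
        self.nodes = []                       # sorted list of (id, name)
    def join(self, name):                     # organic growth: anyone can join
        self.nodes.append((node_id(name), name))
        self.nodes.sort()
    def owner(self, key: str) -> str:
        """The node responsible for a key is the first node clockwise from it."""
        k = node_id(key)
        for nid, name in self.nodes:
            if nid >= k:
                return name
        return self.nodes[0][1]               # wrap around the ring

overlay = Overlay()
for peer in ["alice.example", "bob.example", "carol.example"]:
    overlay.join(peer)
print(overlay.owner("some-file.txt"))         # responsibility emerges from IDs alone

Because every node runs the same rule, there is no drawing board, no contractor, and no single configuration step of the kind Peter contrasts this with; the structure is simply a consequence of whoever happens to have joined.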
CK: And then so because of that situation do you need to have a theory of how something grows? How it
grows organically in order to do the design, the different ways in which it can grow? How do you do that?
How do you think about that?
PD: There are in fact mathematicians, who for a long time have studied systems - systems that are
evolving in this nature that you have components that have a few simple well-defined properties and they
study what happens when you put them together, let them interact. For instance, the stripes of a zebra.
There is a certain set of proteins with very well defined limited functions that satisfy nonlinear equations
that are very simple. If you put them and let them interact, totally unexpectedly they tend to form these
stripe patterns. So, this is an example and here the nice thing is that people have actually been able to
write down the equations that exactly govern how the set of symmetric entities forms the stripe pattern.
And we would love to be able do this kind of thing in peer-to-peer system and write down the formulas or
the individual formula based on the individual entities we’ve created and be able to precisely predict what
pattern will emerge. We are far from that unfortunately. That’s where we would like to go, of course.
And that's of course also how biological things work. What I mean is, there is a huge gap right now. And even though peer-to-peer [systems] have made a step towards this kind of work, in terms of not relying on constructed hierarchies and prearranged relationships within components, our understanding of emergent properties of such systems - systems whose properties emerge from well-defined, very simple, small components - is very, very limited.
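The zebra-stripe example Peter reaches for is usually formalized as a reaction-diffusion ("Turing pattern") system: two substances, an activator and an inhibitor, each obeying a simple local rule plus diffusion, whose interaction nonetheless produces global stripes or spots. A generic form of such equations, given here only to illustrate the kind of "write down the formulas and predict the pattern" analysis that Peter says peer-to-peer research cannot yet do (the particular reaction terms f and g vary from model to model), is:

\frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u,v), \qquad
\frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u,v)

Here u and v are the local concentrations of the two substances, D_u and D_v their diffusion rates, and f and g the local reaction terms; patterns emerge when the inhibitor diffuses sufficiently faster than the activator.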
One of the issues of understanding p2p systems in terms of these quasi-human characteristics is that it
leads to speculation about costs, benefits, and definitions of human beneficence or maleficence that might
determine how a system is designed...
PD: In fact this is a very good example, because in classical computer systems one would simply have a
way of identifying cheaters or malicious folks and then having a way of punishing the—you rely on the
centralized authority that has a way of detecting cheaters and then punishing them. But fundamentally you
can’t do this in a peer-to-peer system—there is no authority that can do this. So you have to design, you
try to add simple properties to the individual entities such that a behavior emerges that actually puts
people who cheat at a disadvantage, which occurs naturally, whenever they deviate from what they are
supposed to be doing, they actually hurt themselves. And this is an example of an emergent property that we tried very hard to create, but our understanding is very limited right now. A lot of this has to do with trial and error, with intuition right now, and experimentation. We are far from being able to write
down a few equations and say this is what is going to happen.
AP: What will be considered to be malicious behavior in peer-to-peer, just a concrete example?
PD: Well, for instance an entity that actually behaves with the goal of disrupting the system, like denying
other people service or a particular form of malicious behavior which is usually called just "greediness."
It is to say: I want to use the service but I refuse to contribute anything to the system. So, I want to
download content but I am not making anything available and I am not willing to make my computer
available to help other people find content, I'm just completely greedy.
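The "greediness" Peter describes is what the peer-to-peer literature usually calls freeriding, and the design response he alludes to (making cheating self-defeating without any central authority) is typically some local reciprocity rule: each peer preferentially serves those who have served it. The sketch below is only an illustration of that general idea, loosely in the spirit of the tit-for-tat "choking" used by systems like BitTorrent; the accounting scheme and all names are invented, and nothing here is drawn from Peter's group's actual designs.

# Illustrative local reciprocity rule: no central authority, yet a peer that
# never contributes ends up at the back of everyone's queue.
class LocalLedger:
    """Each peer keeps its own private record of bytes exchanged with others."""
    def __init__(self):
        self.received = {}   # peer -> bytes they have sent us
        self.sent = {}       # peer -> bytes we have sent them

    def record(self, peer, got=0, gave=0):
        self.received[peer] = self.received.get(peer, 0) + got
        self.sent[peer] = self.sent.get(peer, 0) + gave

    def upload_order(self, requesters):
        """Serve first those who have given us the most relative to what they
        have taken; pure freeriders sort last without anyone having to
        identify or punish them globally."""
        def score(p):
            return self.received.get(p, 0) - self.sent.get(p, 0)
        return sorted(requesters, key=score, reverse=True)

ledger = LocalLedger()
ledger.record("generous-peer", got=500, gave=100)
ledger.record("freerider", got=0, gave=400)
print(ledger.upload_order(["freerider", "generous-peer", "stranger"]))
# ['generous-peer', 'stranger', 'freerider']

No peer identifies cheaters globally or punishes anyone; the freerider simply ends up last in every neighbor's local queue, which is roughly the "emergent" disadvantage Peter describes wanting to engineer.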
Collaborating with Hackers and the Worries about the Internet
Because most of the existing peer-to-peer systems (such as Gnutella, BitTorrent or Freenet) have actually been designed outside of academia, Peter's group has of necessity tried to collaborate with people from corporations, open source projects, hackers, and other unconventional partners. We asked how this collaboration was going.
PD: It has actually been somewhat difficult. I can't say that there has been a lot of constructive collaboration, and it is partly because these folks are a sort of special breed of mostly self-taught programmers or computer scientists, who don't really appreciate what we are trying to do, which is to further our fundamental understanding of how these things work. They're focused on making their systems work today, without necessarily having the desire to understand fundamentally why it works or what it means for the future, how it extends. I think they may be driven by this desire to do something that has an impact in their world. And so we have contacts and there are some interesting exchanges. These folks, for instance, have data derived from the actual operation of large systems that we don't have, and we would love to have those things because it helps us to design our systems better. We also think we have a lot of technology that we could give to them. And there has been some success in them adopting things, but by and large, it hasn't been as successful as one might think. The other thing is that all these large-scale systems that are out there today are really, I suspect, attractive to the folks who put them up because they do something that is illegal, at least by many people's estimation, or in the grey zone of the legal system. There's a certain attraction to doing something, getting some content, some value, for free; doing something that is not approved by the legal system or by society is a certain part of the attraction. I think what I am trying to say here is that I am not altogether sure that these systems could actually hold up if they had to fend for themselves strictly on commercial terms. If they had to compete with other systems on strictly commercial terms, I'm not sure they would work well enough at all to have a following or an audience. [The content they deliver determines the technical structure of these projects] to a very large extent. These systems are pretty much not good for much of anything else. Because, for instance, they don't reliably tell you if something you are looking for exists in the system. That doesn't matter if you are not paying for something; you are not very upset if you are not getting what you are looking for. It's a freebie… but if you actually paid for this service, if you relied on it for getting your work done or for something critical in your life, you wouldn't be happy if they couldn't guarantee that something that exists in the system will show up when you look for it.
AS[2]: In fact the tension between these two communities was obvious from several papers, in the sense that their point is that theirs are "unstructured overlays," as we call it in peer-to-peer work.
CK: Unstructured?
AS[2]: Overlays. The research community advocates research ideas, and effort goes into actually building a well-researched [system], a structured system with good guarantees. There are papers that have been published which say "hey, we have this unstructured overlay," and in practical terms, for some workloads, it does well enough; their papers would say "it works in these environments," but you don't actually get the guarantees when there are different types of workloads, ones that were not prevalent in those scenarios.
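A toy contrast between the two approaches (our sketch, not taken from the interview; the network size and degree are arbitrary): a TTL-limited flood in an unstructured overlay only reaches a fraction of the nodes, so a rare item can be missed, whereas a structured overlay deterministically maps every key to a responsible node:

```python
# Toy contrast (ours): TTL-limited flooding in an unstructured overlay versus
# deterministic key placement in a structured one.
import random

random.seed(1)
N = 1000
# each node knows 4 random other nodes
neighbors = {n: random.sample([m for m in range(N) if m != n], 4) for n in range(N)}
holder = random.randrange(N)        # exactly one node stores the rare item

def flood_coverage(start, ttl):
    frontier, seen = {start}, {start}
    for _ in range(ttl):
        frontier = {m for n in frontier for m in neighbors[n]} - seen
        seen |= frontier
    return seen

reached = flood_coverage(start=0, ttl=4)
print(f"TTL-4 flood reached {len(reached)} of {N} nodes; rare item found: {holder in reached}")
# In a structured overlay (a DHT), every key is mapped to a responsible node,
# so a lookup is guaranteed to reach it:
print("structured lookup for key 42 always lands on node", 42 % N)
```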
CK: Are there places where your goals match with these two communities in terms of getting something done, or alternatively places where you appear complicit with them, where you wouldn't want to be? Does it put you in a position of having to defend them, or to defend the work they do?
PD: It hasn't been a problem. Mostly we get accused of creating technology that, by many people's estimation, is only good for supporting these kinds of illegal activities. That's mainly the kind of flak we get. I don't think there has been a close enough interaction that we are directly associated with it. I think this would become an issue if we were to actually distribute software based on our technology that was actually a file sharing system. It would be an interesting legal situation. But we haven't done this yet because it doesn't fit particularly well in our research agenda. Again, we are interested in demonstrating that this technology is actually valuable for doing the things that require a lot more rigor, correctness, and robustness than their systems can provide. So I am not sure it fits altogether in our research agenda to do this. But if it did, it would be an interesting question to what extent we would have problems with the funding agencies and with the university in associating ourselves with [them].
PD: It is also partly interesting that these guys are very independent. In those few cases where they have picked up technology from our research community, they have used it by re-implementing their own software using some ideas we had demonstrated, rather than actually using the software that came out of a research project. So that also gives you a certain level of uplift, of course, [but] in general they don't give credit very well. So it is actually not easy for the public to track where these ideas came from.
CK: They talk an awful lot about credit and reputation. I guess it is not surprising that they don’t do a
very good job of crediting.
PD: I think it is partly because they are not aware; they don't think like we do in terms of intellectual property. They don't publish anything, they don't write papers. You've got to look at their software. In fact you can't even get them to read papers [laughter].
AS[1]: And they don't make the protocol public, right? So you don't know from whom they got those ideas.
CK: They don’t make the protocol public, why not?
AS[1]: Yeah. I mean in most of the cases … Kazaa and a bunch of other protocols, people just have the big ideas, calling it this and that. I mean, the details are not published.
PD: Kazaa is a little bit different. It is actually a commercial product. But yeah.
In addition to concerns about other peer to peer groups and the software they create, we asked about the importance of the internet and the companies that control, and to some extent determine, the structure the internet has. Since peer to peer projects rely on an interconnected computer network of some kind,
we wanted to know how control decisions by companies like Cisco, Verizon, SBC or IBM might affect
their work.
PD: There are a couple of things. It is certainly true in terms of firewalls and NATs, which are being introduced to the internet more and more, and which take advantage of the fact that, prior to peer-to-peer, the information flow was really mainly one-way, like in traditional book publishing: resources from a few sources to a lot of people. It is very one-way and they take advantage of that. But there are ways of getting around these things, which makes life a little bit harder for us, but it doesn't stop peer-to-peer. What is a little bit worrying is that increasingly ISPs are also designing their own networks under the assumption that information flows from a few servers at the center of the Internet down to the users, not the other way around. ADSL and cable modems are all strongly based on that assumption… And the more that technology is used, the more this is a problem, actually. It is actually a chicken-and-egg problem. These ISPs designed or created these systems in that fashion because in the web that's how information flows, empirically speaking. It goes from the server to you. There isn't any large amount of information flow the other way, because not everyone publishes things; they just consume. It's just a producer-consumer relationship. In peer-to-peer systems everybody produces, and so the question arises: what will happen? Let's assume that there are going to be more really interesting peer-to-peer applications that people want to use. If people just start using them, is that going to create enough pressure for ISPs to have to react and redesign their networks? Or is the state of the art in the internet actually putting such a strain on these systems that they never really take off, because there is not enough bandwidth there? It is hard to say, but it is a concern. The net was originally really peer-to-peer in nature; it was completely symmetric. There was no a priori bias against end users at the fringes of the network injecting data, or injecting as much data as they consume. Only with the web having that structure did the network also evolve in this fashion, and now we want to try to reverse this, but it's not clear what's going to happen. Personally, I think if there is a strong enough peer-to-peer application, and there is strong enough motivation for people to use it, ISPs will have to react. That would fix the problem.
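A back-of-the-envelope illustration of the asymmetry Peter describes (our numbers, typical of consumer ADSL lines of the period, not data from the interview):

```python
# Back-of-the-envelope numbers (ours; typical consumer ADSL figures of the era)
# showing why links provisioned for downloading starve peer-to-peer uploading.
downstream_kbps = 1536    # e.g. a common ADSL tier
upstream_kbps = 256

print(f"a peer can serve back at most {upstream_kbps / downstream_kbps:.0%} "
      f"of the rate at which it can consume")

# With everyone participating, aggregate p2p throughput is capped by the total
# upstream capacity, not the (much larger) downstream capacity:
n_peers = 100
print(f"{n_peers} such peers can jointly upload about "
      f"{n_peers * upstream_kbps / 1000:.0f} Mbit/s, though they could jointly "
      f"download {n_peers * downstream_kbps / 1000:.0f} Mbit/s")
```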
Another technological change that seems to affect the ability of a peer to peer technology to take full advantage of the internet is the "trusted computing" initiative. Peter discussed this briefly.
PD: There are a few other technologies that are potentially a little bit more risky. There is this "trusted computing" initiative that basically tries, on the positive side, to handle all these security problems and viruses by letting a remote site assert that a remote computer is running a particular set of software. In other words, with this technology I can assert through the network whether you are running a particular type of software, and whether it is corrupted or not, before I even talk to you. There are lots of wonderful benefits this can have; it would really help tremendously with security break-ins, viruses, things like that… but it also allows an important site that provides value to the network, like, say, CNN.com or Google or something like that, a service that a lot of people want to use, to basically exercise control. It allows them to require you to run a particular piece of software and nothing else. So that could reintroduce the centralized control that we actually don't want in peer-to-peer. That's a little bit scarier. But it doesn't really have to do with the internet per se.
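A greatly simplified sketch of the attestation idea (ours; real trusted-computing designs use TPM hardware and asymmetric keys rather than the shared secret used here): the verifier only talks to a client whose signed software measurement matches an approved value, which is exactly the lever for control that Peter finds worrying:

```python
# Greatly simplified illustration (ours, not the actual Trusted Computing spec):
# the client reports a signed hash ("measurement") of its software, and the
# verifier only proceeds if the measurement matches a known-good value.
import hashlib, hmac

ATTESTATION_KEY = b"secret-known-only-to-the-attesting-hardware"  # stands in for a TPM key
APPROVED_SOFTWARE_HASH = hashlib.sha256(b"approved client v1.0").hexdigest()

def measure_and_sign(software_bytes):
    measurement = hashlib.sha256(software_bytes).hexdigest()
    signature = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, signature

def verifier_accepts(measurement, signature):
    expected = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    genuine = hmac.compare_digest(signature, expected)       # really measured by the hardware?
    approved = measurement == APPROVED_SOFTWARE_HASH         # the one piece of software we allow?
    return genuine and approved

m, s = measure_and_sign(b"approved client v1.0")
print("approved client accepted:", verifier_accepts(m, s))   # True
m2, s2 = measure_and_sign(b"some other client")
print("other client accepted:", verifier_accepts(m2, s2))    # False: the control Peter worries about
```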
CK: Where is that initiative coming from?
PD: Well, it's mainly driven by industry right now: Microsoft, HP, IBM. There is some question whether they can pull it off technically, but if they can, it's actually a little bit scary, in terms of some of the negative things they want to do using it. Apart from that, along more mundane lines it also gives companies like Microsoft a wonderful tool to force you to use their software. If they were not under some sort of legal strain to behave, they could very well use it to force you to run Internet Explorer and not Netscape, for instance. It would be very easy to do that.
Peter talked briefly at this point about the issue of selling peer to peer as a security-friendly technology because of its power to decentralize control and to eliminate single points of attack (similar to the somewhat apocryphal story that the origin of Paul Baran's packet switching technology and the ARPAnet was their geographic robustness against nuclear attack). We also talked briefly about the similarity to certain other kinds of networks, such as power grids (and though we didn't discuss it directly, this relates to the recent resurgence of interest in social networks and graph-theoretic understandings of randomly connected networks).
Following this discussion we asked more specifically about the software they have created (called
"Pastry") and how they use and distribute it.
CK: What about distributing the software and using it, distributing it as open source or free software? Is that an issue now in the computer science world at all, or is that something that has become more of an issue, either for your peers in the computer science world or in terms of liability institutionally?
PD: I think on one hand it's become expected in the field that one publishes a prototype. Our prototypes are the artifacts we create: we measure them, we study them, and we publish about them. It's become expected to make them available so other people can verify your findings. So from that perspective it is pretty commonplace. We also think it is an enormously effective way of effecting technology transfer to industry or to the free software community.
CK: And when do you make the decision to do that? Presumably, you keep it to yourself until you’ve
done some research or published something? or do you make it public all along?
PD: We usually make it public along with the first publication. The first paper comes out, and we try to make the prototype available. We are usually not nearly as protective of stuff as some people in the natural sciences are… about data sets, for instance. They have a huge investment in those and they typically keep them under wraps for years until they milk every aspect of them. I think we are more concerned with actually… it's a kind of double-edged sword: if you keep things to yourself, perhaps you can milk another paper out of it before somebody else does. But your credibility often suffers too, because people cannot independently verify whether what you found actually holds water, and you also deny yourself the opportunity of directly influencing other work. So I think in our field generally, one comes down on the side of being generous and making stuff available as soon as possible, so people can independently verify things. It is good for us that people are using our things and confirming our results, right? It helps our visibility. It is a good thing.
So, we work on something, we submit it, we make the stuff available, make it available on the web. So our colleagues who work in the same area will know what we're doing.
CK: And what about patenting? Is that happening in this field?
PD: It’s happening routinely, of course, with those folks who work in the industry labs, which is a little
bit almost perverse, because technology that is patented is basically useless.
CK: Right. Because you want everyone to use it…
PD: It’s almost destructive. I suspect that some of these companies actually may have, Microsoft
probably you know is perhaps somehow deliberately doing this…you develop something, then you patent
it, you take it out of the arena.
CK: What about here locally? Does the technology transfer office pressure you to patent things, or do you think about doing it by yourself?
PD: They encourage us. Pressuring, no. We have done it a couple of times; it's happened a small number of times. I personally think it's not that interesting. It's not the thing that gets you tenure or a position in the field. There's a potential financial reward if something actually gets licensed and the licensed technology gets used widely. But I think most people in academia, if they were driven primarily by monetary concerns, wouldn't be in academia.
Brief excursion into the inevitable gender question
One of the questions that has resurfaced regularly in this project is the question of gender. CS has one of the less impressive gender ratios of the science and engineering fields, and the number of theories about why this is so is large, and generally not very convincing. We spoke briefly with the students about why this was so.
EK: What is the number of women in computer science? I feel like I am the only woman in this building.
[Laughter]
PD: It requires an explanation. [Laughter]
AS[3]: I think this is a bad one. 10 % or so.
PD: It's very low. It is very low. All of us were at a large conference in our field [recently], 500 people, and I think probably less than 10% were women. It is a very, very interesting phenomenon, because there are significantly more women in mathematics, for instance, and even in engineering, in chemical engineering, electrical engineering, mechanical engineering, than in computer science. It is very interesting; a lot of people have thought about this and studied this for a long time.
CK: You guys have any theory? Any theories?
AS[1]: I’ve just observed that most of the girls are in the CAM group here.
CK: Computer Aided Manufacturing?
AS[4]: In our particular subfield there's even less women than in computer science in general.
AS[2]: I think the reason is we end up spending a lot of time in our labs. Girls find that this is not a good
way to socialize. [Laughter]
EK: What do you mean? [Laughter]
PD: It is pretty well established that, I guess, in high schools computer science is strongly associated with hackers, nerds that sit at the computer all night, and this is probably even, by some studies at least, reinforced by high school counselors or high school science teachers and so forth, you know, the idea that computer science is about hacking all night. I think this probably has something to do with that.
A Day in the life of a CS Laboratory
Our interview was conducted in a large open room filled with bean bags, whiteboards covered with scrawls, several workstations, maps of the internet, and a clear sense that at any time of day, you would likely find someone there. We asked questions about the nature of this collaborative work in the peer to peer "laboratory".
AS[3]: I think the time when the day of a graduate student starts varies. Some people wake up early, some people wake up late. But whenever they come to the lab, there are a couple of people in the lab already. You discuss with them, and weekly we have group meetings, I mean progress meetings: the work we have done, the work we are supposed to do. And then, depending on the workload, you can basically divide the life of a graduate student into two parts, at least in computer science: whether it is deadline time or not deadline time. If it is deadline time, we are always here. If it's not deadline time, we are almost never here. You come, you work for some time, and then you go home, or go to bed, the recreation center, the gym, whatever. That's what I would say about our life.
AS[2]: So, I would make one more remark here: in computer science things are a bit different, I mean a lab is not necessary, compared to other sciences where most of the research happens in labs. The stuff that we are doing, we could have done it sitting at different desktops in our own offices. But this lab was built with the intention, and it has fully served that purpose, of joint work, you know; people communicate with each other much more, we discuss things; if I am facing a problem with my program, if I have a bug, it is much more convenient to talk to each other, and you know it gives a healthy and speedy growth in the education process.
PD: I would say this is rather unusual, what we have. Most labs of our colleagues are full of machines; in fact over there is our other lab, and it is full of machines. Normally every graduate student has their office, which they share with one or two officemates, and all these guys do have their offices too. But we created this specifically to overcome the problem of everybody sitting at their desks, not really involved in each other's work. And I think it has somewhat helped to have this lab. There is a lot more interaction; people are a lot more aware of what others are doing, and thereby learn more.
PD: Usually we have one weekly group meeting to discuss what happened and what needs to be done. There are many additional, sort of ad hoc meetings to discuss things... So we are actually very fortunate to work with other groups, and our funding is basic-research oriented. We have to jointly worry about our software distribution, which is a little bit of a chore, dealing with requests or bug reports from outside people who use the software. But we are not on a schedule where we have to show a demo to a funding agency every three months, or deliver preliminary software under contract, which is, you know, what happens in many, many groups. And from that perspective we are less rigorous in our management. It is pretty much self organizing.
AS[4]: Self organizing? [Laughter]
AS[3]: That’s peer-to-peer.
CK: All the components are defined equally. [Laughter]
PD: I am a little bit more equal, but....
CK: The ultra-peer.
Finally, as happens in these situations, we were asked the hard question.
AS[1]: What is the formal definition [of ethics]?
CK: Well, there isn't one really. This is one of the things about our attitude towards this project. Philosophers disagree vigorously about what constitutes ethics. But there are a number of traditions of
thinking about what ethics is, formally, in philosophy. And some of those are more interesting to us than others. We care about politics. But we also think that one of the most important aspects of ethics is that it is part of practice, part of what people do every day. So there is the idealization of ethics, which is about what you should do, and a lot of people are freaked out by ethics because of this notion that there is something you should do, and that if you do it right then you would be ethical; and then there are the things you do every day and the reasons you decide to do one thing rather than another. We are interested in actually getting at that, at people making those decisions.
PD: I think it is most prominent in our daily lives simply because ethics is a large part of the academic exchange. You are supposed to give credit to other people's contributions, other people's intellectual contributions. That comes up in the group: who should be an author on the paper, and so forth. What is the appropriate threshold? We have to make these decisions all the time. When we have a project, there are people directly working on it, there are people giving feedback, acting as a sounding board; at what level do you say that warrants authorship? And different people have different takes on this. I tend to be very generous… there is no harm in being inclusive. But then that sometimes leads to problems; I've often been accused of being too inclusive… Then the primary authors of the work say, why is there this rat-tail of authors on there when I have done most of the work? So that's an issue that we confront every day, and also fairly citing other people's work, fairly portraying other people's work when we write sections in our papers. I think it's really an important issue. It is critical that these guys get a take on how it is done in practice as part of their graduate education. In terms of our research, actually, I think it rarely ever really comes into play, but interestingly in peer-to-peer it actually does. There is this potential concern that you are only working on something that is fundamentally just used by people who want to break the law.
CK: Or the theoretical aspects of it, like making these decisions about what it means for someone to be
misusing resources, and creating a system for punishing people for their behavior. Isn’t this a model of
ethics?
PD: But this kind of ethics is sort of … It's actually... one can precisely define it. In terms of a utility function: if you want to encourage behavior that maximizes the common benefit, it's more of an economics thing. I wouldn't have thought of this as ethics, but I suppose one could...
CK: What would you think of it as? Would you think of it as politics? Fairness?
PD: Just… economics… We try to turn it into an economics problem: setting up conditions that make it in the interest of everyone to do the right thing.
AS[1]: I mean, the other thing is, when you basically tag someone as being unethical or misbehaving... probably the reason why we don't face so many problems, at least for the systems that we have now, is that it's more of a continuous function. It's not like you have a speed limit, you cross 70, and then you get a ticket… it's not like that… it's like, the more you abuse the system, you know, it's continuous, it gracefully degrades.
PD: I suppose it's that way in human societies; there is not a structural threshold. You can be greedy or selfish, and the more you act that way, the fewer friends you have. There is not really a structural error.
AS[1]: Yeah, there's no structural error. That works as a feedback control, right, where the person gets a chance to correct his own behavior when he sees that.
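The "continuous function" the student describes might look something like the following toy rule (our illustration, not their actual mechanism): the quality of service a peer receives degrades smoothly with its contribution ratio instead of being cut off at a threshold:

```python
# Toy illustration (ours) of graceful degradation: the service quality a peer
# receives is a smooth function of its contribution ratio, with no hard cutoff.
def service_quality(uploaded_mb, downloaded_mb):
    ratio = uploaded_mb / max(downloaded_mb, 1.0)   # contribution ratio
    return min(1.0, ratio)                          # degrades continuously toward 0

for uploaded in (0, 100, 250, 500, 1000):
    q = service_quality(uploaded, downloaded_mb=500)
    print(f"uploaded {uploaded:4d} MB against 500 MB downloaded -> quality {q:.2f}")
```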
CK: Right. Well, in some ways this is, in social science, the distinction between law and norms. The law is based in the state or in the government… or it can be based at any level, basically. But it's made
explicit, and it is like a speed limit: this is what you can do and what you can't do. Whereas norms tend to be informally understood, things that people engage in through practice and then come to understand. Nobody has ever told them to do it this way, but either by watching other people or by figuring it out themselves, they do it. So social science is, more generally, interested in the distinction between those two things. What makes the technical things so interesting is that they all have to be explicitly defined; whether they are laws or norms, they are still technical. I find that quite fascinating actually.
Peter Druschel Interview #2
Abridged and annotated interview (#2) with Professor Peter Druschel at Rice University. This interview was conducted on Thursday, Jan 20th, 2004. Key: PD=Peter Druschel, CK=Chris Kelty, TS=Tish Stringer, AP=Anthony Potoczniak
On Being Comfortable
Our second interview with Peter Druschel (this time without the students) turned into a very enthusiastic three-way conversation between CK, TS and PD. After reading the first transcript we found ourselves principally interested in the metaphorical or literal uses of social terminology, the implications of decentralization, the question of "what exactly Pastry is," and how people would use it. We began by asking about the various interlocutors (us and them) which populated the previous interview, where "them" referred alternately to hackers, commercial peer to peer products, theorists, and other university/industry groups. TS had recently had experience using a peer to peer client (BitTorrent) on the Rice campus, which she explained.
TS: I study a collaborative video project, a network of video activists around the world and they’ve been
using BitTorrent, together with a particular kind of codec to exchange videos around the world that
people are then able to re-edit and stuff, and it's sort of a new breakthrough. So I was writing a paper on
how they were doing it and I wanted to install and run BitTorrent on my own computer in my office in
Sewall. So somehow I’m kind of a red flag for our department IT guy, he likes to have a big handle on
everything I’m doing, so I like to alert him, you know, to make him feel in the loop, so I talked to him
about putting it on and using it and he said: "Well, Rice’s policy is that it’s OK that you have this on your
computer, but what I want you do is connect to the network, pull down the video you wanna see, and
disconnect from the network immediately. You’re not allowed to leave this running and leave it open."
It’s curious to me that the work that you’re doing is supported by Rice, but then what Rice is doing in
policy is making me a "bad user" of the network [in Peer to Peer terms], cause I’m not providing
resources back to the network.
PD: Right. Well this is really an interesting question, because in many ways what we're trying to do runs counter to the established set of business models that govern IT technology today, but also to the sort of conventions that make IT administrators, you know, people like the one you're dealing with, comfortable. They like to have control over things, and they like their users to be passive in the sense of consuming information, downloading stuff, but not active in the sense of publishing information. Because that makes them... that makes Rice potentially legally vulnerable to suits related to the intellectual property of what you contribute. And then, what you're downloading there might consume too many network resources and cause problems for Rice members, if you're consuming too much. I suspect those are the main two things. And the third, perhaps, is that whenever you have a server running on a computer that can be contacted by outsiders, that's potentially a crack, an open door for security attacks. These are all legitimate concerns, but under conventional distributed systems, where you have a strict separation of consumers of information from providers of information, it was okay to impose these kinds of constraints and rules of engagement, because it made everybody feel more comfortable; you were on the safe side legally, in terms of your network consumption, and in terms of computer security. But now we are trying to move away from this, and that makes everybody feel sort of uncomfortable. So a lot of it has to do with control. I'm not convinced that there is a legitimate concern that if people like you started to offer information via BitTorrent, that would consume a significant fraction of the network bandwidth available to other Rice members. But it's sort of a deviation from the current system, where everybody feels like they have a handle on what the network consumption is, where they all feel like: there are so many people on the Rice campus, they can at most be surfing the web so many hours a day, and in doing that
they can generate only so much traffic, because you just can't consume more; it's a manual process: there's a user, a human, at the end of this information flow. Whereas if you are providing information, there is no way of telling how many people outside might be connecting to your BitTorrent client and downloading stuff, and it just makes people uncomfortable.
Peter suggested that they consider it not only their duty, but an absolute necessity, to make people understand another model of managing resources. To that end, the group has developed several things that let them "practice what they preach," such as a "serverless email" system and a peer to peer chat system. These projects are all based on Pastry, the software they develop. Over the course of this interview we slowly came to an understanding of what Pastry was and how it was used. Pastry itself is not a peer to peer application, but an implementation of a protocol on which other applications might be built. Nonetheless, we wanted to know how many people (besides Peter's group) were using Pastry, and if they communicated using it.
PD: There are about, I don't know, it's hard to estimate, but probably between 50 and 100 sites that use Pastry, either to do their own research or companies that evaluate the technology, to varying degrees. I know of one in New York that apparently is seriously thinking about integrating it into one of their products. But, um, there's no deployed service right now; it's really at a stage where people are evaluating the technology.
CK: And in terms of the kind of questions you ask using Pastry, you don’t need to have lots of actual
users using it in order to ask those questions?
PD: Actually we do. We'd love to have a large deployed user base, because that would allow us to actually derive data. About many of those questions we can merely speculate right now, in the absence of that data... We ask questions like: how do you make this secure, how scalable is it? We can sort of touch on these questions via simulation… We can simulate a very large Pastry installation with 100,000 or 500,000 nodes and see if it still works. What's a lot more difficult is to say, let's see now how it performs with an assumed one million users, when you don't have a good model of the workload that would be generated by that many users using an application based on Pastry. Or if you don't know what fraction of the users would be free-loaders trying to get service without contributing resources. You have to work with purely synthetic models and you really don't have any good grounding in how this would look in practice. We'd love to have that, but the thing is, trying to push Pastry, or doing our own [p2p application]... we've been thinking about this a lot. What would it take to get a large user base? On the one hand, you obviously have to provide the support to make this available and usable to a large community of naïve users, which is one side; the other thing is that you would have to find an application that's attractive enough to attract that many people, and then of course the obvious thing is you get into file-sharing, but then you get into these legal issues and then you'd really get phone calls from Rice lawyers [laughter].
CK: In a perfect world you would have all of these users, and you would let them basically be humans,
and do what humans do, and you could just let it run. But in the simulation case, you have to make certain
assumptions about what human nature is like, about whether they’re going to manipulate…be bad,
basically, be bad apples or manipulate the system… do you actually have discussions about that kind of
thing, like what you imagine users to be like?
PD: Typically we don't, because part of the reason is that, even if we cooked up some sort of hypothesis about what we think is reasonable behavior, you would never get it past the reviewers, right? If you try to publish it, people would say, oh, where's the data for that? What we typically do instead is evaluate the system, or simulate it, over an entire range of parameters: say, this is how the system performs over a range of some parameter, such as the fraction of
freeloaders… and then we don't have to commit to a particular model or assumption of how many freeloaders there are. But of course you can only push that so far as well, because there are only so many parameters you can vary and still fit the data into a paper, or even simulate it… it's very crude. Another thing that we regularly do is simply pull data from another system, like the world wide web. Under some more or less credible assumptions, you could argue that, for instance, if you had a peer to peer cooperative web-caching system, then it's okay to take web traces, which are available on the internet, so you know, traces of actual activity from web users, and play them against a simulated system that internally uses a peer to peer content delivery system. There's a question, obviously: is this really valid? Would the data in a peer to peer system look like web data looks? Of course it doesn't; if you look at BitTorrent, the stuff is completely different from what we find on the web. It's much larger, because it's primarily music or video, and not so much text pages. But still it's one data point.
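The evaluation style Peter describes, sweeping over a parameter such as the fraction of freeloaders rather than committing to one behavioral model, might look roughly like this (our sketch; the node counts, request model, and success metric are invented for illustration):

```python
# Our sketch of a parameter sweep: rather than assuming how many freeloaders
# there will be, simulate the system across a whole range of freeloader fractions.
import random

def simulate(n_nodes, freeloader_fraction, requests=2000, fanout=5, seed=0):
    rng = random.Random(seed)
    freeloaders = set(rng.sample(range(n_nodes), int(n_nodes * freeloader_fraction)))
    served = 0
    for _ in range(requests):
        # each request is forwarded to a few random peers and succeeds if any
        # of them is a contributor willing to serve it
        candidates = rng.sample(range(n_nodes), fanout)
        if any(c not in freeloaders for c in candidates):
            served += 1
    return served / requests

for fraction in (0.0, 0.2, 0.4, 0.6, 0.8, 0.9):
    rate = simulate(n_nodes=1000, freeloader_fraction=fraction)
    print(f"freeloader fraction {fraction:.1f} -> request success rate {rate:.3f}")
```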
CK: Does it strike you as odd that, um, I mean if you actually did manage to have an experiment where
you had 100,000 or 500,000 users of Pastry, would it still be called an experiment in that sense? It strikes
me as odd that you would have actually produced a situation in which real people use it for real uses and
yet for your purposes it would be an experiment. Does that strike you as odd at all?
PD: Well I think it's, it doesn't really, although I am convinced the thing would take on a life of its own, right, and quickly become much more than an experiment. So there is a case, actually: one of our competitors, if you will, or fellow researchers, who are working on a similar protocol, have actually gone this route and, with the help of some open software folks, are putting together a music sharing system. So they are running it based on their protocol; well, they are a little bit craftier, they call the protocol something a little bit different, they pretend like they are not part of this, [but there is some sort of agreement]. I think it is actually a system that has about 40-50 thousand users by now. Now what I don't know is how successful they have been in actually getting all the data, because obviously there are privacy concerns.
CK: And not just privacy concerns but a kind of ethical one; I mean, this is a university group presumably getting this data? Do you guys have thoughts about this?
PD: Yeah, we haven’t gone this route so I can’t really say but this is certainly something I would be
thinking about too.
Hierarchy, Complexity, Experiment, once more with feeling!
Theory vs. Experiment. In an attempt to compare across interviewees, we raised the question of computer
science as a "science of the artificial", and the comparison with natural sciences.
CK: We have a variety of questions which—one is about this distinction between experimental and
theoretical. [Moshe Vardi’s] way of narrating it is to say that the distinction is between sciences of the
artificial and natural sciences. Do you see it that way as well? He says: we create computers and
therefore the science we are doing is the science of the artificial.
PD: Yeah, I see it this way, although there is another distinction with the theoretical work. Moshe in
some sense uses a completely different method to evaluate things that is not based on experiment. But,
certainly yeah, this is something that I am keenly aware of. I mean there are sort of fundamental
differences in the way science is done in the natural sciences and the way we do it. Many of the natural
scientists probably would say well what we are doing is not real science anyway. Because we create
something and then study it.
CK: But, and we actually discussed this, in terms of the natural sciences, if you think about biology, it is
actually doing the same thing these days. I mean, they are creating their own organisms and studying
them. Right? You create a plasmid and you study that. Right? And so, it is not that far in the sense that
there is something about this distinction that seems a little bit peculiar.
PD: Frankly enough, you rarely run into this sort of discussion as part of a peer review process, because the peers really are part of the same community. So there's generally a basic understanding of what computer science is and whether or not it's a science. You know … and you rarely actually find that topic even discussed among computer scientists. It is usually a discussion you find yourself in when you are meeting with colleagues from the school of sciences, or like in the promotion and tenure committee, or something like that, where you are trying to explain to them why this particular assistant professor should get tenure.
Complexity. One of the clear questions, if not clearly understood by us, was what makes something like hierarchy and complexity so fruitful for computer science, and whether it is meant in the same way by different researchers.
CK: We talked with Moshe at length about the subject of complexity. He said that the only way (for formal verification, obviously he's interested in how you do that), the only way you can get complexity is by having hierarchy. You actually have to have some sort of hierarchy within which tasks are delegated all the way down to the lowest level; you can't get complexity without it. And that seems to be, and I don't know whether or not it is, in conflict with your assertions, your kind of biological and social metaphors about spontaneous organization or the lack of hierarchy. Do you see that as necessarily a conflict?
PD: No, actually not, because I think there are hierarchies in our systems too; it is just that they are more conceptual.
CK: Can you explain them? How they work?
PD: So, for instance, you know, in many of these peer-to-peer overlay networks, the reason that they perform well, the reason you can find things reliably in them, is because they conform to a very specific graph structure, which has a hierarchical embedding. It is just that the role a node plays within this
hierarchy is not a priori fixed. It is dynamically decided on. So a node can appear somewhere and say
"Where is the best place in that hierarchy that I can fit in, and what role and what task can I play at this
moment?" And as the network changes, that role can change. Whereas in sort of the more classical
distributed systems, it was a much more literal hierarchy that was embedded in the way you would sort of
basically put up a machine somewhere in a room and the fact that it was in this room on the Rice campus
and in Houston basically already determined what role it would play in that overall hierarchy. This is
much more rigid; the hierarchy is really embodied in the system as it physically exists.
CK: So the predetermined hierarchy, the kind of, the thing that... the graph structure you are mentioning, is that the same as the protocol, or are those different?
PD: No. A protocol would be, if you will, a recipe that allows nodes to organize themselves in a way that conforms to the particular graph structure or hierarchy. So really the structure is, in some sense, an invariant that you are trying to maintain in the system. And the protocol is a recipe that allows you to actually maintain it.
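To make the "structure as invariant, protocol as recipe" idea concrete, here is a very reduced sketch (ours, not Pastry's actual code or API) of prefix-based routing of the kind Pastry uses: each hop forwards a message to a known node whose identifier shares a longer prefix with the key, so lookups converge in a small number of steps no matter which node happens to play which role:

```python
# A very reduced, illustrative sketch (ours) of prefix routing in the style of
# Pastry: node IDs are hex strings, and each hop forwards to a node whose ID
# shares at least one more leading digit with the key.
import hashlib

DIGITS = 8
nodes = sorted({hashlib.sha1(str(i).encode()).hexdigest()[:DIGITS] for i in range(200)})

def shared_prefix(a, b):
    n = 0
    while n < DIGITS and a[n] == b[n]:
        n += 1
    return n

def routing_table(node):
    # the invariant the protocol maintains: for each prefix length p and next
    # digit d, remember one known node whose ID starts with node[:p] + d
    table = {}
    for other in nodes:
        p = shared_prefix(node, other)
        if other != node and (p, other[p]) not in table:
            table[(p, other[p])] = other
    return table

def route(start, key):
    hops, current = [start], start
    while shared_prefix(current, key) < DIGITS:
        p = shared_prefix(current, key)
        nxt = routing_table(current).get((p, key[p]))
        if nxt is None:     # no node shares a longer prefix with the key;
            break           # real Pastry would now consult its leaf set
        current = nxt
        hops.append(current)
    return hops

key = hashlib.sha1(b"some object").hexdigest()[:DIGITS]
path = route(nodes[0], key)
print("key", key, "routed in", len(path) - 1, "hops; responsible node:", path[-1])
```

Each hop strictly lengthens the shared prefix, which is why the same structure holds regardless of which physical machine plays which role.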
CK: [So, it is a] kind of like an expression of that graph structure or something.
PD: Right, right. If you think about it (I'm walking on thin ice here of course, it is not my area), the social structure within an ant hill has a particular structure that serves the purpose of survival of the species, or survival of the hive or whatever it is called. And the protocol would essentially be the
instinctive programming of each individual ant, the way it acts, so that each one plays a role towards maintaining that hierarchy, towards furthering that overall structure. So that is certainly self organizing and very decentralized, but underlying it is a particular social organization that persists over time.
TS: But what about the role of the programmer or the programming team who writes the protocols to allow certain kinds of actions to happen? What about their role in hierarchy or non-hierarchy? It seems like, obviously, you kind of get to decide what the options for nodes organizing themselves are?
PD: Well that is a sort of very different type of hierarchy in which we would decide or determine how a
project comes into existence and whose ideas get actually reflected in the final system, or which direction
a team takes. But that is quite separate and orthogonal to how the actual software... how the system
organizes itself once it is deployed.
TS: But say you deployed version one and you find that users are able to do something you don't really like; there is a higher percentage of bad apples than you want, so you rewrite some protocol and release version two. Doesn't that sort of give you a whole lot more control over the system and the users, in terms of how it can organize itself and what they can do?
PD: You could, although... You would like to be able to envision any sort of overall structure you want and then be able to turn the crank and come up with the recipe that individual nodes or individual ants have to follow to make that overall organization happen. But our ability actually to take that step [is limited]. Well, actually, there are two steps involved: there is a step from what ultimately is the goal you want to achieve, what's the behavior of the system or what steady state of the system you want to achieve. From that you have to take a step towards the structure that the system has to adopt, one that will actually drive you always towards that goal. And from that you then have to derive the recipe for the individual participants. And our ability to take these two steps, given an arbitrary starting point, is very limited; our understanding of how to do this methodically is very limited. So it is usually the other way around: you play in small steps with the recipes. Occasionally we see, "oh, if you do this, then ha! that structure emerges," and now let's see if that structure is actually good for something. And if we're lucky it works.
CK: So, it is a little bit reversed from what you described earlier?
PD: Right, right. But making marginal changes, if we have a structure that does something but it isn't quite what we want, going back and asking how we have to change that small, individual recipe to make that happen in a controlled manner, that change is extremely difficult.
TS: Are there social ways, versus programming ways, that you can train users to use the system in ways that produce the effect that you are looking for? Like giving out FAQs; like, how do you teach people to be good users of the system? How are they supposed to learn those manners? Like, how am I supposed to know it is not really polite of me to shut off my computer? How do you find that stuff out?
PD: I mean, this is sort of yet another layer to this. It's sort of introducing human behavior into this, and it is something that we are just starting to think about in terms of these incentive mechanisms. So the general thinking there is that it is probably pointless to try to come up with a recipe for human users and assume that they would be motivated to follow that recipe. Instead, the thinking is that you need to provide incentives such that everybody, in acting selfishly, is gonna do the right thing. So you try to play with this Nash equilibrium idea. If everybody acts to their own advantage, if you somehow figure out that "leaving your computer on" gives you access to much more content or makes downloading things much faster, then you will do it and we don't even have to tell you. And that is what people are currently trying to do: set up a system so that if everybody just acts in their own interest, then they will also maximize the
common good. So if we can set things up in such a way that, automatically, no matter what you are trying to do, by leaving your computer on twenty-four hours a day the service you are getting is much better, then… we think that is the way to go. Then people will actually do it.
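A toy version of the incentive calculation Peter describes (all numbers invented, purely for illustration): if the mechanism rewards peers who stay online with enough extra download value, then the selfish best response coincides with the behavior the system needs:

```python
# Our toy illustration of incentive design: choose mechanism parameters so that
# the selfish best response ("leave my computer on?") is also the behavior that
# helps the system. All numbers are invented.
ELECTRICITY_COST_PER_DAY = 0.5     # cost (arbitrary units) of leaving the machine on

def payoff(stays_online, bonus_for_staying_online):
    base_download_value = 3.0
    value = base_download_value + (bonus_for_staying_online if stays_online else 0.0)
    cost = ELECTRICITY_COST_PER_DAY if stays_online else 0.0
    return value - cost

for bonus in (0.2, 1.5):
    best = max((True, False), key=lambda s: payoff(s, bonus))
    print(f"bonus={bonus}: selfish best response is to stay online -> {best}")
```

With a bonus smaller than the cost of staying on, the selfish choice is to go offline; once the mechanism makes the bonus large enough, staying online becomes the equilibrium behavior, without anyone being told what to do.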
CK: So, I guess that … I guess that is like the example of trusted computing or something, that the
opposite answer would be to introduce a system which actually did control all of those ways in which
software was used?
PD: Right, right. Exactly, exactly. And of course there are weak forms of what you are saying. For instance, many of these Kazaa clients that you find on the market now, when you shut them down, they appear to be shutting down, so the window disappears, but they actually keep running in the background. So again, this is an ethical issue: is it really entirely kosher to do something in the background that the user doesn't realize? But of course it's an effective way of increasing the fraction of people actually online.
Humans? Machines? Societies? Organisms? Our confusions…
Our fascination with questions that seem properly to belong to the social sciences (reputation, trust, incentive, rules, norms, etc.) continues here, as we tried to figure out what the terms "punishment" and "incentive" referred to: humans or machines, or some hybrid?
TS: Is there an opposite version of what you are talking about, not the incentive version but like a punishment system? Is there any way to punish a bad user on the system?
PD: We have been thinking along those lines. The difficulty is you have to have a stick in your hand. So if you are talking about storage systems, the idea is you have this peer-to-peer system and everybody can back up their disks by injecting backup copies of their local disk into the network. Then a conceivable punishment would be to threaten to delete all your backups. It seems to work OK. Although there is a question there as well: a user could play with the expectations and say, if I am inserting this thing N times into different systems, what are the chances of being caught doing what I am doing, and what are the chances that being caught means losing all my copies around the same time? It is not as clearly effective. But with [systems like] BitTorrent, where you are just instantaneously using a service, where you are downloading something, and once you are done with it you couldn't care less, it is very difficult to punish someone. So a general theme in these untrusted systems is also what are called reputation-based systems. The punishment could be attached to your identity. So if we could somehow reliably track your identity and blacklist you, that would be a conceivable punishment. But the difficulty is that … to fix your identity you would have to have some sort of global central organization that attaches to you a label that you cannot change. And that is very difficult. So you'd have to have some sort of a network social security number that you couldn't simply change.
TS: What about using something like keys, where, you know, over time they build up reputation and if
you have to revoke them, then you would have to start over without any kind of …
PD: There are many systems that try to do this. In fact, um… in some of the practical systems that are out there today, when you first join, you get a very low degree of service. And only as you prove to be a good citizen, you have your client on a lot of the time and you are providing good content that is of interest to others, do you get access to more services. But the difficulty is, all of these are vulnerable to what we call a Sybil attack. If you work with multiple identities, and you periodically change identities, then all of these things are basically ineffective. The reputation-based system is a little bit better than the others, because there, you know, there is a cost to assuming a new identity. You have to build up your
reputation anew. But the ones based on punishment are very ineffective. You just act badly, and as soon as you get caught, you just reappear under a different pseudonym.
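The weakness Peter points to, that punishment attached to a cheap identity can be escaped simply by rejoining under a new pseudonym (the Sybil problem), can be sketched as follows (our illustration; the names and numbers are arbitrary):

```python
# Our sketch of why punishment-based schemes fail when identities are cheap:
# a blacklisted cheater simply rejoins under a fresh pseudonym, whereas a
# reputation scheme at least imposes the cost of rebuilding reputation from zero.
import itertools

blacklist = set()
reputation = {}
new_id = itertools.count()

def join():
    ident = f"peer-{next(new_id)}"
    reputation[ident] = 0          # newcomers start with no reputation (low service)
    return ident

def caught_cheating(ident):
    blacklist.add(ident)

# A cheater cycles through identities: the blacklist never slows it down...
cheater = join()
for _ in range(3):
    caught_cheating(cheater)
    cheater = join()               # "whitewashing": the punishment is escaped for free
print("identities burned by the cheater:", len(blacklist))

# ...but under a reputation scheme, each fresh identity starts at reputation 0,
# so the cheater keeps paying the cost of being treated as an unknown newcomer.
print("reputation of the cheater's current identity:", reputation[cheater])
```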
Among the wealth of social science metaphors, the ones which fascinated the research group were those
related to biological and organic themes.
CK: Umm.. On that subject of metaphors and analogies, we noticed that there are many biological and
organic ones to the kind of … like the ant hill one you just used, for instance. Is there any particular
reason why you are sort of drawn to those models of emergence and…
PD: Because I think they are probably the closest real-world examples that exist for the kinds of systems we are interested in. They are truly decentralized, self-organizing systems. Interestingly, there are very few examples of man-made systems that adopt that sort of strategy. Right? And it seems, interestingly, not to be a natural way for humans to think about design, or synthesis. You know, you generally think in terms of top-down, right? This is what I want, and how do I lay out something that conforms to this, as opposed to starting from the bottom: how do I craft bricks that, if I put them together, give me a nice house? You always tend to think top-down when it comes to synthesis.
TS: There are some social science researchers who study decentralized organizations organized into nodes; and this is probably a very unpopular example (I can come up with ones that I like a lot better), but the popular one right now is "terrorist cells," and this is a very effective organizational structure specifically because it goes against a hierarchy and there is no way to just cut the head off it. So there are social structures that resemble this, and a lot of people are working on this… So, the ant hill, you think, is the best one?
PD: I don’t know. I haven’t really thought about this enough …You’re right. There might very well be
human organizations that are providing just as good examples.
TS: I thought ant hill was more interesting…
PD: There are many grassroots organizations that are organized in a similar fashion. They spring up around different things, and are loosely coordinated. And interestingly, they are organized that way not because they want to defend themselves against some legal threat, although that could be a reason too, but usually because there is not the financial backing for some sort of organized approach, at least not initially, right, though maybe later on. Yeah, I have to admit, we haven't thought a great deal about, um, examples that could guide us in these systems.
CK: Well, the very purest models of free market economies are the ones that people would probably be most likely to seize on here, I suppose. In their purest form: self-interested individuals maximizing their utility independent of each other, and independent of the desires of each other, and out of that you get self-organizing systems of production, basically. The sad fact, unfortunately, is that this exists nowhere in reality. And so, again, what's interesting for me is the potential experimental case, where you're really dealing with real cases, actually dealing with real humans; that in many ways is what makes it so interesting.
PD: But these systems tend to, in the end, depend on what I would call centralized infrastructure: banks
and stock exchanges, and courts, without which these markets probably wouldn’t function.
CK: That's right, exactly. There is no example of a free market in that sense. There is no free market that doesn't have... well, banks are a really good example: institutions to guarantee money. You know, you gotta have money to have markets and somebody's gotta guarantee money, so you have to have governments. The purely institution-free model of a free market society is impossible.
PD: So this may be bad news for us. We’re also not sure that we can build a system that is meaningful
and does something interesting without coming back to this identity thing. Right? Nobody has figured
out how to do something, to build a system that really has strong properties.
CK: Is there something interesting about the research that you are doing that suggests that there are interesting properties at smaller scales, rather than this test of whether it's scalable or whether it's really got interesting properties when everybody uses it? Are there interesting things that occur when a thousand people use it? Or things that might not be the same as a top-down hierarchy, that might be better?
PD: Sure. One aspect of this work is that we hope that some of these mechanisms we try to push to these large scales in open environments, where you don't have any control, would carry over into much more controlled environments. Say, a network within a single organization. And that these properties would still be useful, for instance, to make it more robust. So forget all the aspects of centralized infrastructure: it may still be administered in a conventional sense, a single organization oversees it, but in the event of a failure it is more robust, because it is decentralized in its embodiment. That's an advantage.
Part of the issue with changing to a "peer to peer mindset" as mentioned earlier, is the level of comfort
which people have, or more accurately, the level of control they feel they have. This leads Peter to
discuss issues of risk, blame, and trust that people might have for various kinds of systems.
PD: So this is an argument that we are often having within computer science as well. There are people who say: what you're doing there is more or less crazy, because if I want a high-assurance system, I'm glad to accept the fact that I have to have an expensive centralized organization and expensive, highly qualified staff to run it, because at least then I have control. I know where to go when there are problems. I have someone to blame if something goes wrong. Whereas what you are doing there gives me no handle on this. Of course, the counterargument is that, perhaps, if you have such a flexible, decentralized, self-organizing system, you won't have these problems. And you won't need to blame anyone. But, you know, it's a long shot.
CK: It would require people to think differently about risk and blame to some extent, when something
goes wrong. Right?
PD: This brings us back to the beginning of this discussion. People feel comfortable right now with hierarchies, with having control exerted at some point; then you know exactly who to blame. And they are uncomfortable the minute they feel like they're losing control over some aspect of the system.
TS: It seems really interesting that the kind of software that you're building would force a social change, that you would have to think differently about risk and blame and hierarchies. And it's interesting too that you're not thinking about having a way to train people socially into this change. Like I asked, whether there is a social way of training people; there are incentives and punishments, but you're not creating texts on, like, how to be a good user or anything like that, or how it works…
PD: I suspect now that as you say it, I probably have a somewhat cynical view on that. I suppose we
really think in terms of relying or appealing to the good will of people. The very way these systems
currently function is that most of the users are actually idealists, who use the system because they are
interested in technology or interested in sharing information, and they believe in it. These systems are
currently not very secure. By manipulating the software, you could fairly easily cause the whole system
to shut down, just by manipulating your own client.
TS: Well, I think, this brings up a big issue of trust. How do you as a provider of Pastry convince users
that the software is secure? That by leaving their computer open all the time is not leaving it vulnerable to
hacks or anything like that. How do you …is it because you’re at a reputable institution or what
mechanisms could you use to establish validity and trust.
PD: So I have to emphasize again that we’re not providing anything to end-users right now. In fact, what
we’re packaging, what we’re distributing is not something you can pick up and use for any particular
purpose right now. You have to be a computer scientist to know what to do with it. You’d have to use it
to build a real application that will be useful to the average user. In one sense, we’re kind of one layer
isolated from an end-user. Our consumers, if you will, are computer scientists or developers in industry or
perhaps open-software developers. So, in that sense, it’s for us a different problem. For us to convince
these folks that this is secure, we basically open up the source code, we open up all our sources, and we
provide justification in the form of mathematical proofs or other studies to try to convince them that this
is secure. This is of course a way to convince technical people. The next level problem of say somebody
buys our story and thinks it’s secure and then builds a product and tries to sell it or download it and make
it available for free to an end user community, it would be a totally different one. And obviously
convincing people that it’s secure by basically providing all the gory technical details—that is not gonna
work for an internet user.
TS: This explains why we couldn’t understand what pastry did. [Laughter] We were all like, What does
Pastry do? It’s not just that we couldn’t get it... that is so great.
Going public and working in private
In our earlier conversation with Peter and his students, it was clear that the students themselves could
have taken Pastry and created a file-sharing application, but that they consciously were not doing so. We
asked whether this was an ethical issue, a decision to let industry do commercial applications.
PD: This gets at two different issues here right? For instance five years ago before this bubble burst in
the stock market, [creating commercial software] was commonplace, especially in computer science.
Probably half of the grad students here were involved in some sort of start-up company; trying to
commercialize some ideas they may or may not have started here. So there was certainly not anything
ethically that held them back. Although I was very concerned at the time because they were not doing
research, they were not finishing their Ph.D.s or not really furthering their education, not doing things that
in the set of incentives that we have here, really help them in anyway. It is more like going to the
gambling halls and hoping that you’re going to get rich, right? And of course it is not only my students
but it was the same thing among my colleagues, you know, I was both at Rice and internationally or at
least nationally, I mean, I am, I was a minority during that time. I was never involved in a startup
company, at least not directly as a partner or something. So most of most of my colleagues did it and as a
result, you had hordes of computer science departments that were abandoned. There were grad students
lingering without professors being around during the day -- they just showed up to give classes.
Otherwise they were gone. That was really a problem. But it is probably not the issue you were after...
CK: No, no, that is very interesting to compare. You know, because you are suggesting that that is not a
problem now. … post-bubble breaking.
PD: But so right now there is just not this pressure here. I mean most people are disillusioned and most
people now realize that the chances of actually getting rich are slim. It is a lot of hard work and long
nights and burn out and also the VCs are not as, you know, as easy and they’re not dishing out money, it’s
just not as easy to do this now. But I suspect what [the students] meant is more sort of the ethical issues of
putting together a file sharing system that at least has the potential of doing something illegal or you
know, with uncertain outcomes for your personal fortune and certainly uncertain outcomes for my ability
to attract grants and so forth.
Our last set of questions for Peter concerns how he got into the topic and his experience working on it in
both academic and corporate settings.
AP: What led you to start research in the p2p arena and was there an attraction, an incentive?
PD: I was actually on sabbatical at the time. So I was generally in a mode where I was open for all kinds
of things and not over-committed with other stuff. So any sort of new idea, of course, falls on fertile
ground when one isn’t committed to anything particular and it was a time when a lot of discussion went
on in the newspapers about Napster. And we were looking at protocols like Freenet and Gnutella that had
just come out and we just tried to understand what they did, how they tried to attack this legal problem
that Napster had. And it sort of became pretty clear pretty quickly actually that the protocols themselves
were not very sophisticated. But the underlying approach was precisely the self-organized, decentralized
way of doing it. We were more or less mocking these protocols, but at some point the question came out:
could you actually do something real, you know in this model, could you build a protocol that can
provably find something that would exist for instance in a large network and still maintain the self
organization and the decentralized nature? And that was sort of the, you know, the spark that got us
motivated.
CK: Was this when you were at Microsoft Research? They seem like such big enemies of this kind of
thing. I mean obviously they are a huge organization, they always have their foot in everything, but
maybe you can describe a little bit what it is like there and I mean how it was different from the university
and how these kinds of ideas might have come into those research centers?
PD: Yeah actually I learned a few interesting things about this environment as a result of this experience
because you can be in a research lab, but very few companies have real research labs anymore. But
Microsoft does, because they have lots of money and they are doing well. You can, so to speak, fly under
the radar screen of a company there. You are left alone, you can work on things, nobody asks what you
are doing as long as you periodically say “well, I am working on the area X, Y and Z and this is somehow
loosely connected to perhaps future products at Microsoft.” You can work for months on something
without even talking to anyone about it. And even then, when you talk to your manager and perhaps the
lower management, they are usually not judging this immediately based on whether it is in line with
corporate strategy or product lines and so forth. It is only when you attract a lot of visibility either inside
the company, you know from the product groups or outside through publications that sort of the higher
levels of the administration become involved and interested. So this eventually happened with Pastry.
And basically it meant that… well they became interested in the intellectual property so they basically
have been told no longer to collaborate closely with us. So, that close collaboration as a result
unfortunately has stopped with Microsoft.
CK: Because they can’t have the intellectual property right?
PD: The problem is that their policy is they don’t even wanna apply for patents if there are outsiders
involved.
CK: Oh wow.
PD: And frankly that’s fine with me too. Because I don’t want to do patents on any of this either. It
seems to be kind of counter-productive, if you want to have an open environment.
CK: But is it still possible to collaborate with people there if it is not under the auspices of… I mean what
if there are really smart people there that you wanna work with?
PD: Well, that is one of the things if you work with one of these companies, that is the principal cost,
you have to accept their intellectual property policies. But by and large, because these people are left
alone, it is a fertile ground for all kinds of ideas.
CK: Sure.
Commentary on Peter Druschel
Practice what you preach: Hannah Landecker
Much of our discussion ranged around something of obvious fascination to social scientists looking at
technology: the way that technical parameters or practical approaches to organizing information transfer
can also function to regulate or give form to social interaction. This is very abstract and many of the
discussions were at a fairly abstract level about potential users, “freeloaders,” and so on. However, it
took very specific form in what I would call the “practice what you preach” moments, in which it became
clear that research on peer-to-peer systems is also about using the technology produced in the course of
research as the mode of interaction between collaborators. A peer-to-peer system is used to distribute
documents and code between researchers collaborating on peer-to-peer systems. While this recursive
structure of using what you make in order to make it better might seem intuitively obvious (you build it,
why not use it), this is a very concrete example of how technical choices are also choices about how to
work with others.
There is an abstract discussion about building centralized hierarchical systems with pre-set trust
relationships versus growing peer-to-peer systems with symmetric elements at the beginning of our
interview.
the way you build systems is to construct the hierarchy, where the things here depend on lower level
things and there are clear, well-defined relationships between different entities. Everything has a
well-defined role in it and it relies on certain other components to get it done, and trust works along this
hierarchy too. And the hierarchy is constructed simply by the decision to configure the
system. You map it out on a drawing board. You hire a contractor who installs all the components and
arranges and configures them in this hierarchical structure. Then it exists and
you turn it on and it works. It doesn’t grow organically. It is designed. It is artificially put together. It
is constructed. Whereas with these peer-to-peer systems, there is no such predefined hierarchy, there are no
well-defined roles. In principle every entity has a completely symmetric role in the system. There are
no prearranged trust relationships and that, of course, allows it to grow organically, which is so
interesting: it allows you eventually to start a large system without putting anything into place,
without putting a lot of money in, putting things in place. And at the same time, because of its lack of
relationships, there is no point at which a governing body or an institution can exercise control.
This description, which contrasts design, artificiality, predefinition and construction against organic
growth, symmetry, and lack of predefinition, is echoed in a discussion of the lab’s own collaborative
relationships with others, in which the workflow is described as “pretty much self-organizing.” While
observations of their own behavior as very “peer-to-peer,” even in the sense of the spatial layout
of the lab and its bean-bag furniture, were often accompanied by laughter (“you didn’t have to tell them
we sleep here”), and Peter describes himself as being “a little more equal,” it is clear that this is a
serious approach to treating other individuals in a group in a particular way. Everyone is a producer.
Similarly, during another passage, it is not just the local culture of the laboratory or a collaborative group
that is subject to a technical choice that is also a choice about how to make people relate to one another,
but the local University community. Peter describes a system of serverless email being used within the
group, and he says that it is “more than a responsibility” to expand the user base and to convince local IT
administrators at Rice to accept it. At a slightly less literal level, Peter and his students talk about the
importance of research agendas being influenced by the students’ own interests and not being fully
determined from the top down, and the general importance of doing research in a context rich in feedback
and exchange, because it’s faster and better than trying to do it alone.
What difference does it make, to take the cliché of “practice what you preach” seriously in computer
science? Does it have different implications for computer scientists because they are actually building
tools that constrain certain types of social interaction and allow others?
Emergent phenomena: Ebru Kayaalp
While talking about building peer-to-peer systems, Peter Druschel asserts that there is neither a predefined
hierarchy nor prearranged trust relationships in the structure of these “emerging” systems. In principle
every entity has a completely symmetric role in the peer-to-peer system, which grows organically (403-406). He gives the social structure of an anthill as an example, which is “certainly self organizing and
very decentralized but underlying it is a particular social organization that exists over time” (1557-9).
Druschel has a specific model in mind when he talks about organically growing, decentralized and
non-hierarchical systems. He points to mathematicians “who for
long time have studied systems- systems that are evolving in this nature that you have components that
have a few simple well-defined properties and they study what happens when you put them together, let
them interact” (416-419). Presumably John Conway is one of the mathematicians Peter has in mind
without naming him.
John Conway, a professor of finite mathematics, became well known outside of the world of mathematics
shortly after his invention of the Game of Life. Martin Gardner, Conway’s friend, described the Game of
Life in Scientific American in 1970.12
The Game of Life is not a game in the conventional sense. There are no players, and no winning or losing.
The basic idea is to start with a simple pattern of organisms, one organism to one cell. Conway
applies his “genetic laws” (ibid) to these organisms/cells and observes how they change from generation to
generation (that is, which die, which survive, and where births occur)13. These “genetic laws” are simple:
1. Survivals: every counter with two or three neighboring counters survives for the next generation.
2. Deaths: each counter with four or more neighbors dies (is removed) from overpopulation. Every
counter with one neighbor or none dies from isolation.
3. Births: each empty cell adjacent to exactly three neighbors (no more, no fewer) is a birth cell. A
counter is placed on it at the next move. (ibid)
It is necessary to mention that all births and deaths occur simultaneously.
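As an illustration of how little machinery these three laws require, here is a minimal sketch in Python (our addition; the finite wrap-around grid and the five-cell "glider" starting pattern are simplifying assumptions, since Conway's board is in principle unbounded):

    # A minimal sketch of Conway's "genetic laws" above. The finite wrap-around grid
    # and the five-cell "glider" starting pattern are our simplifying assumptions;
    # Conway's board is in principle unbounded.

    def step(live_cells, width, height):
        """Apply one generation; all births and deaths happen simultaneously."""
        counts = {}
        for (x, y) in live_cells:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        neighbor = ((x + dx) % width, (y + dy) % height)
                        counts[neighbor] = counts.get(neighbor, 0) + 1
        new_cells = set()
        for cell, n in counts.items():
            # survival with 2 or 3 neighbors, birth on exactly 3 neighbors
            if n == 3 or (n == 2 and cell in live_cells):
                new_cells.add(cell)
        return new_cells  # every other counter dies of isolation or overpopulation

    if __name__ == "__main__":
        cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a "glider"
        for generation in range(4):
            print("generation %d: %s" % (generation, sorted(cells)))
            cells = step(cells, width=8, height=8)

Run for a few generations, the glider drifts diagonally across the grid: a small instance of higher-level behavior emerging from purely local rules.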
The game is an example of a cellular automaton14, a system in which cells are updated
according to a set of rules based on the states of neighboring cells. The game is known as one of the
simplest examples of what is called “emergent complexity” or “self-organizing systems”15. The idea here
is that high-level patterns and structure emerge from simple low-level rules. In addition to the Game
of Life, some other examples of emergence are connectionist networks, the operating system and
evolution.
Martin Gardner explicitly mentions the analogy with the rules of nature in the Game of Life: “Because of
its analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing
class of what are called “simulation games” --- games that resemble real-life processes.”16
This analogy drawn with nature proposes the idea that the study of simple animals would lead us to
discover things about more complex animals, such as human beings. Exactly this idea shows itself in
evolutionary theory: high-level behavior emerges over the course of evolution as a consequence of simple
dynamic rules for low-level cell dynamics. In evolution the mechanisms are simple but the results are very
complex.

12 Martin Gardner, (October 1970), “Mathematical Games. The fantastic combinations of John Conway’s new solitaire game ‘life’”, Scientific American, 223, 120-123.
13 The key argument here is that no pattern can grow without limit. Put another way, any configuration with a finite number of counters cannot grow beyond a finite upper limit to the number of counters on the field.
14 The cellular automaton was invented in the 1940s by the mathematicians John von Neumann and Stanislaw Ulam. In the 1950s it was studied as a possible model for biological systems. See: www.mathworld.wolfram.com/CelularAutomaton.html
15 (www.math.com/students/wonders/life/life.html)
16 Gardner.
There are several points that I want to discuss here. First, this argument "emerges" from a taken-for-granted
assumption that animals represent the “past”, “primitive” and “simple” stages of human beings. It
assumes a continuity between animals and human beings, denoting a teleological and reductionist
approach rather than acknowledging the uniqueness of both.

The animal-human analogy is an extension of the argument that there is continuity from simple to
complex systems. The ultimate goal is to explain the obscurity and ambiguity of the
system by breaking the whole down into its “simple” elements. This argument recalls Herbert Spencer’s
ideas about organisms: cells combine to make up organisms, and organisms themselves combine to make up
“superorganisms”, or societies. Along similar lines, the Game of Life is said to be a study of how “the stripes on
a zebra can arise from a tissue of living cells growing together.”17 Druschel gives exactly the same
example in his interview: “the stripes of zebra, actually it has found out that there is a certain set of
proteins with very well defined limited functions that satisfy nonlinear equations that are very simple. If
we put them and let them interact, totally unexpectedly they tend to form these stripe patterns. So, this is
an example and here the nice thing is that people have actually been able to write down the equations
exactly govern and how the set of symmetric entities forms the stripe pattern.” (419-424)
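The equations alluded to here are, in standard accounts of such pattern formation, reaction-diffusion equations of the kind Alan Turing proposed; we cannot be sure this is the exact model Druschel has in mind, but their general form is a useful illustration of what "a few simple equations" governing symmetric entities looks like:

\[ \partial_t u = D_u \nabla^2 u + f(u,v), \qquad \partial_t v = D_v \nabla^2 v + g(u,v) \]

Here u and v are the concentrations of two interacting proteins (an activator and an inhibitor), f and g are simple local reaction terms, and D_u and D_v are diffusion rates. When the inhibitor diffuses sufficiently faster than the activator, spatially periodic patterns such as stripes or spots emerge, even though no single component encodes a stripe.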
However, Druschel regrets that in the case of peer-to-peer systems we are “far from being able to write down a few
equations and say this is what is going to happen” (452-3). But this is also the case for “emergent
phenomena.” In the Game of Life, “You will find the population constantly undergoing unusual,
sometimes beautiful and always unexpected change.”18 Although this unexpectedness is an outcome of
both systems, Druschel attributes it only to peer-to-peer systems. But why?

Since the rules of both systems are known from the start, I can say that according to Druschel the
difference between the two systems, the source of ambiguity, lies in the agents. While cells in the Game of
Life or in any organism are defined as non-autonomous entities whose actions are predictable, human
beings are seen as agents with autonomy over their decisions, whose reactions and behaviors
are thus unpredictable19. It is not surprising to see that Druschel wants to find out the “reasonable behavior” of
human beings in order to figure out the possible consequences of a peer-to-peer system. It is again not surprising to
see that he resorts to “human nature” theories (see human nature).
To some extent, it is correct that cells in the Game of Life have a more predictable set of actions than those
of human beings in peer-to-peer systems. However, the reason cannot be explained by the passivity and
dependency of the cells. The explanation is rather related to the element of temporality, which works
differently in the two systems. As I mentioned before, in the Game of Life the births and deaths occur
simultaneously. Therefore, the emergence is synchronic, since it does not occur over time but over levels
present at a given time20. If we translate the situation into the terms of the disagreement between Levi-Strauss
and Bourdieu about gift exchange, we can say that cells’ actions represent the absolute certainty laid down by
the ‘automatic laws’ of Levi-Strauss, whereas human beings’ actions are very unpredictable because of
the element of temporality introduced by Bourdieu.

With this idea of ‘automatic laws’ of the cycle of reciprocity, Levi-Strauss ‘reduces the agents to the status of
automata or inert bodies moved by obscure mechanisms towards ends of which they are unaware’21. This
is exactly true for the cells in the Game of Life, which are also called cellular automata. However, in the
case of human beings, Bourdieu introduces ‘time, with its rhythm, its orientation and its irreversibility,
substituting the dialectic of strategies for the mechanics of the model’22. This process is neither a
conscious act nor an unwitting automatic act; rather, it is a strategic one.

17 (www.math.com/students/wonders/life/life.html)
18 Gardner.
19 Bruno Latour’s concept of actants might be a good criticism of this perspective. According to Latour, not only human beings but also machines, cells, bacteria, etc. have agency, which makes life unpredictable.
20 This point is also the difference between emergent phenomena and evolution, which happens over a long period of time.
21 Pierre Bourdieu, (1990), The Logic of Practice, translated by Richard Nice, Stanford: Stanford University Press, p. 98.
This argument about temporality points to another distinction, between rules and strategies. Bourdieu
claims that rules provide explicit representations and lay out everything simultaneously. Rules,
grounded on empty, homogeneous time, impose symmetry and certainty, although actual practice is
asymmetrical, irreversible and uncertain. Druschel’s definition of malicious behavior in peer-to-peer systems
is a good example of this:
“Well for instance an entity that actually behaves in a way with the goal to disrupt the system, like
denying other people a service or a particular form of malicious behavior which is usually called
greediness. It is to say I want to use the service but I refuse to contribute anything to the system. So I
want to download content but I am not making anything available and I am not willing to make my
computer available to help other people find content, I am just completely greedy” (486-473)
The peer-to-peer system, just like gift exchange, is based on certain tacit rules. The people who participate in
the system are supposed to obey these rules. However, these are normative rules, which are often broken
by human beings, as in Druschel’s example. Instead of following the rules imposed on them, people
develop strategies to manipulate these rules and benefit from the system. However, the
strategy or manipulation is neither a consequence of “human nature” nor a conscious act developed by
human beings. It is rather a practice developed in the social (see human nature).
22 ibid, p.99.
Degrees of Remove: Hannah Landecker
How close to useable, in-the-world, consumer technology to get.
Within computer science, there are many different modes of working, which range from the highly
theoretical to the highly practical. The results of this work therefore can be closer or further from a
product form, which many non-computer scientist users have access to, and use to many different ends.
One of the things that seemed obvious to the interviewers was that the question of making an end-product
in the form of software that could potentially be used for illegal activity would be a fraught one, that
different scientists might react to very differently. One interesting and unexpected result of the interviews
with Peter was the realization that this question was too narrow to capture the many different routes by which he
arrives at what he calls a position “kind of one layer isolated from an end-user”. The question of
potential illegal uses is only one aspect of forming a position on the spectrum that runs from
theoretical to applied work. The decision about how close to the end product to go would not simply be
made on the basis of how a scientist felt about the potential uses of a single application.
The decisions that computer scientists make about what part of this spectrum to occupy are not one-time
once-and-for-all decisions, but are a series of decisions large and small, perhaps not even always
explicitly articulated as decisions, but as better ways to go about things. In our discussions with Peter and
his graduate students, a series of observations about this kind of positioning between the theoretical and
the applied work of building new kinds of computer systems arose in the context of discussions of
funding sources, entrepreneurship, pedagogy, insulation from legal entanglements, and the role of industry.
This came out strongly in the passages in which we asked: if it was fairly trivial to take the next step
from the technologies they had to something that a lot of people could use, why not take that step and
produce file-sharing software?
Instead of answering the question very directly, Peter replied with a pedagogical answer. “This gets at
two different issues here right? For instance five years ago before this bubble bursts, you know in the
stock market, this was, you know, common place especially computer science. Probably half of the grad
students here were involved in some sort of start-up company; try to commercialize some ideas they may
or may not have started here. So there was certainly not anything ethically that held them back. Although
I was very concerned at the time because they were not doing research, they were not finishing their
Ph.D.s or not really furthering their education, not doing things that in the set of incentives that we here,
have here, you know really help them in anyway.” He discusses how grad students and faculty were
caught up in the dot-com boom and computer science departments were abandoned, with some faculty too
busy to do anything but show up occasionally and teach, because their attention was elsewhere. Although
Peter realized that this is not exactly what we were asking, for him it was an answer. In a sense, he
reinterpreted the narrow question as a broader one: What are computer science professors for?
This came up in different ways in other places. In the opening discussion of coming to work on peer-to-peer
systems, although the “end-products” – specific applications such as
Gnutella and Freenet – were what sparked his interest in the subject in the first place, they were too
specific to be intellectually interesting. He gives the sense that one had to be one (or several) steps back
to understand the fundamental questions underlying these specific applications. So here the position is
not pedagogical or ethical but about being far away enough from end applications to be close enough to
“fundamentals.” The conversation circled back to this topic again, and Peter made the distinction
between his type of work and the programmers building peer-to-peer products. These programmers, he
says, are
a sort of special-breed of mostly self taught programmers or computer scientists, who don’t really to a
large extent appreciate actually what we are trying to do, further our fundamental understanding of
how these things work. They’re focused on making their systems work today without necessarily
having the desire to understand fundamentally why it works or what it means for the future, how it
extends. I think they may be driven by this desire, which is to do something that has an impact in their
world.
Therefore, an interest in fundamentals is cast as a question of worlds and futures, a choice of which world
to have impact on, in what time scale.
In other places, the work of positioning is described as a balancing act between things that are interesting
enough, but not so interesting that you should just hand it over to industry to do. Passages that discuss
faculty “style” in choosing defined research agendas and well-specified deliverables versus less defined
research agendas are also concerned with positioning on the spectrum from theoretical to applied work.
The different factors going into positioning thus range from a responsibility to educate and be educated,
to positioning in relation to industry and funding sources, to the slightly less tangible sense of
fundamentals being set back and on a different time scale than applications with specific impacts now on
specific types of people. This is not simply a worry about whether or not to build something that operates
in the gray zones of legality.
Hierarchy vs. heterarchy: Tish Stringer
We really have the opportunity to develop something that fundamentally seems to change the impact of
the technology on society. – Peter Druschel
Peter’s version of p2p directly challenges established hierarchies of many kinds, including: institutions
(120-124, you don’t need the approval of the institution to distribute), governments (2270-2272, there is nowhere
to go and sue), the contemporary IT hierarchy (395-404, the hierarchy of how you design systems; 1211-1230,
IT professionals are comfortable when users are consumers of information; 1853-87, you no longer know
who to blame), law (460-463, hierarchies use police to punish; 147-153, it can be used for illegal
purposes), hard drive and server capacity (2214-2221, you don’t need servers), networks (1526, traditional
distribution systems have a hard-wired hierarchy built into them), and ISP infrastructure, ADSL and cable
modems (626-632, designed for a one-way flow of traffic).
In contrast, when he describes the organization of p2p he is talking about a heterarchy. He compares it to
a grassroots organization (1760-1764, like a grassroots org it has the benefits of avoiding the law and running
on little money), to an anthill (1554-1559, decentralized and self-organizing but within a social structure that
exists over time; the biological programming of each ant’s actions is like a protocol that tells nodes how to organize on
the network), to something organic.
In the most elegant “scalable” terms what Peter is describing is a revolutionary way of thinking; originally
about moving and storing data but then extending to rethinking networks, then hardware, then the way IT
managers work, then how humans interact with data, moving from consumers to producers, ultimately
challenging their reliance on states, institutions and capital. This is not your mother’s bureaucracy.
Moreover, he is proposing all of this while attempting to avoid the tar pit of “illegal downloading” in his
research. At this stage, it is not so hard for him and his team to do, considering that the main users of the
Pastry architecture are other computer scientists. The fact that their project is one step away from end
users gives them a legal cushion to pursue their work. However, without end users, there is no user data to
show their work in action, only theoretical models. For this revolution to be successful, certain key
arguments would have to be shown to work with real user bases of differing sizes.
Some of those questions would be: Is the heterarchy more robust? Is a decentralized system more
secure? Is scale a determining factor? If it works on a small scale, will it be scalable with more users?
Scalability: Chris Kelty
(this is excerpted from another paper called "Scale, or the fact of" written Spring 2000)
…It is nearly impossible to speak with a computer scientist or engineer without hearing the word
"scalable" (usually it is accompanied by "robust and secure"). A strange adjective, and even stranger
intransitive verb that suggests the cancellation of the imprecise use of "scale" to mean bigness (as in
"large scale"), and substitutes instead a delicate notion. A building and a train may be large or small—
may be built, as one says, to scale. But, when something is both big and small at the same time, then it
scales. Still, buildings and trains are too tangible for this intransitive miracle, it is a use of the word that
could only find subjects in the twentieth century. Here the OED tips the balance: "To alter (a quantity or
property) by changing the units in which it is measured; to change the size (of a system or device) while
keeping its parts in constant proportion."23 Scale the amount, add a zero, measure in gigabytes. What
could be more familiar in the world of measurement than the convenience of exponentiation. But
consider the second usage, where things— significantly systems or devices— become larger, but their
parts stay the same. It is a familiar usage today:
"Does your business scale?" "Yes, our product scales," "this web server is scalable." No need for a billion
servers to serve a billion hamburgers, because this baby scales. Microsoft hosts a yearly "Scalability
Day," advertised by banners that say things like: "Did somebody say Scalability?" (Ears brimming with
American media will hear a quote of a McDonalds advertisement, perhaps too subtly connected to the
billions and billions of that old-economy model of scale.) Scalability is defined on hundreds of mailing
lists, technical and otherwise. "Scalability, reliability, security" form a buzzword triumvirate. Servers
should scale, or succumb to too much traffic, but business plans should also scale, or risk the shame of
missed market opportunity— regret does not scale. People warn of 'scalability myths,' there are helpful
programming hints and "Scalability Killers"24 to avoid. One can try to write algorithms that scale (i.e. that
can solve problems of arbitrary size), or try parallel processing (i.e. scale resources to help a non-scaling
algorithm solve problems). "Clustering" is a popular solution that allows for scalable web-sites that access
increasingly large databases of material.
The subtlety of 'to scale' often gives the slip to journalists and PR agents, who try returning scale to pure
size, or pure speed; an example from The Standard, January 3rd, 2000:
On the Internet, if you can't scale – if you can't get really big really fast – you're nowhere. And it's not
enough for just your technology to be scalable. Your entire business model has to have scalability, as
well; you need to be able to quickly extend your business into new markets, either horizontally or
vertically. "Will it scale?" is one of the first questions venture capitalists ask.25
Interesting choice of words, "if you can't scale, you're nowhere". True, it implies the opposite, that if you
can, you will be everywhere. But the choice indicates something else, you are no where, not no thing.
This is a story, a word, a metaphor maybe, of the internet as industrial market, geographical
manufacturing region, cyber-space. A topical imagination where fastness and largeness make more sense
than in the pure scripts of a world that is both big and small at the same time, which precludes it from
23 scale, OED, v. XIV p. 563, 1991
24 George V. Reilly, "Server Performance and Scalability Killers," Microsoft Corporation, February 22, 1999, http://msdn.microsoft.com/workshop/server/iis/tencom.asp
25 http://www.thestandard.com/ archives from January 3, 2000, my italic.
being precisely some where. Such a topographical insistence is a result of weak language, less than of
weak imaginations; not a result of the actual strangeness of a world saturated by computing, yet
indifferent to its physical organization, because language itself forces spatial figurations, prepositions
serve topoi on top of topoi. Fact is, if you can scale, you could be any where.
Try another example on for size. Gnutella, named for GNU (of GNU's Not Unix fame, from the Free
Software Foundation) and the tasty hazelnut-chocolate spread "popular with Europeans" is a simple
elegant tool for creating your own instant mini-internet, in order to share anything you want: music,
movies, porn, pictures, data. Its self-description:
Gnutella client software is basically a mini search engine and file serving system in one. When you
search for something on the Gnutella Network, that search is transmitted to everyone in your Gnutella
Network "horizon". If anyone had anything matching your search, he'll tell you.
So, time to give a brief explanation of the "horizon". When you log onto the Gnutella network, you are
sort of wading into a sea of people. People as far as the eye can see. And further, but they disappear
over the horizon. So that's the analogy...
And what of the 10000-user horizon? That's just the network "scaling". The Gnutella Network scales
through segmentation. Through this horizoning thing. It wouldn't do to have a million people in the
horizon. The network would slow to a crawl. But through evolution, the network sort of organizes
itself into little 10000-computer segments. These segments disjoin and rejoin over time. I leave my
host on overnight and it will see upwards of 40000 other hosts.
Here the where of there is earthly, it is the size of the planet, and divides into horizons, time-zones of a
sort, a metaphor of spatiality familiar to pilots and phenomenologists alike. This horizon, however, is by
no means geographical, but simply numerical. It is as one color thread in a tapestry, distributed
throughout the landscape connected by a quality. It is a statistical distribution, dotting a sample of billions
with 10,000 dots. The crucial difference here, is that everyone's horizon is different— not completely, but
enough that the horizons 'scale' to include the whole internet. What connects people first is not a physical
network, not a system of wires that has a necessary geographical component. What connects people is a
script, an instruction, a set of commands, a question and answer (or a 'negotiation' in telecommunications
terms, specifically, a 'ping' and a 'pong' that contain minimum information about origin and destination, IP
address and connection speed. It is nonetheless these pings and pongs that make up more than 50% of the
data on the Gnutella network, making scaling a serious problem for the designers. Imagine if everyone
were to ask their neighbors that question, and ask them to ask their neighbors, and so on— the world
would quickly end in queries without answers...). What connects people is not propinquity, not
community, what connects people in the world of tasty hazelnut spreads is, in short, a protocol, a simple
script, a set of messages. Your IP address can be static or dynamic, a proxy or a masquerade, but it doesn't
matter where it is, or how long it exists, only that it be connected to others, which are connected to others,
which are connected to others. This is the non-spatial space of 'it scales.'
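The "horizon," in protocol terms, is just a bound on how far a flooded ping travels: each host forwards it to its neighbors, who forward it to theirs, until a hop count (a time-to-live) runs out. A toy sketch of that dynamic, with an invented topology and TTL values rather than Gnutella's actual defaults:

    # A toy sketch (not Gnutella's actual protocol or parameters) of TTL-bounded
    # ping flooding: the "horizon" is whatever the flood reaches before the TTL runs out.
    import random

    def make_overlay(n_nodes, degree, seed=0):
        """Build a random overlay graph: each node links to roughly `degree` random peers."""
        rng = random.Random(seed)
        neighbors = {i: set() for i in range(n_nodes)}
        for i in range(n_nodes):
            while len(neighbors[i]) < degree:
                j = rng.randrange(n_nodes)
                if j != i:
                    neighbors[i].add(j)
                    neighbors[j].add(i)
        return neighbors

    def flood_ping(neighbors, origin, ttl):
        """Flood a ping from `origin`; newly reached nodes forward it until the TTL expires.
        Returns (hosts reached, ping messages sent)."""
        seen = {origin}
        frontier = [origin]
        messages = 0
        for _ in range(ttl):
            next_frontier = []
            for node in frontier:
                for peer in neighbors[node]:
                    messages += 1              # every forwarded ping is one more message
                    if peer not in seen:
                        seen.add(peer)
                        next_frontier.append(peer)
            frontier = next_frontier
        return len(seen), messages

    if __name__ == "__main__":
        overlay = make_overlay(n_nodes=20000, degree=4)
        for ttl in (3, 5, 7):
            reached, msgs = flood_ping(overlay, origin=0, ttl=ttl)
            print("TTL=%d: horizon of %d hosts, %d ping messages" % (ttl, reached, msgs))

Even in this toy model, the number of ping messages grows much faster than the number of new hosts reached, which is the sense in which pings and pongs come to dominate the traffic and make scaling "a serious problem for the designers."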
The cliche of air-travel shrinking our world in bringing the remote close has nothing to do with this kind
of scale. Rather, if you must think of planes, think instead of the tall Texan crammed in the seat next to
you, and the conversation that will lead to the person you both have in common, and the meaning of the
inevitable phrase "it's a small world." These conversations are the new scale of the twentieth century…
(back to our regularly scheduled program)…
When Peter talks about scalability, he is primarily fascinated by a kind of paradigm shift—a change from
conventional distributed systems, systems that could certainly be large scale, but would not quite be
"scalable" in the same way:
We realized very soon that this technology had applications far beyond just file sharing. You can,
essentially, build any kind of distributed system, while I shouldn’t say any, but a large class of
distributed systems that we used to build in a completely different fashion based on this peer-to-peer
paradigm. One would end up with a system that is not only more robust, but more scalable.(119-122)
The scalability of peer-to-peer systems comes from the fact that one can harness an already existing, and
large-scale network (namely the internet) to create a super-network where adding "systems or devices"
happens for other reasons (I buy a computer, you buy a computer, computers appear on desks everywhere
for myriad reasons—not because Peter says: "I will build a network!"). But as with almost all
engineering terms, an inversion necessarily occurs, and scalability is something that comes to be applied
to almost any situation:
Another aspect is I think it is important for grad students to also have an impact on what they are
doing, what direction their project takes. On the other hand, you need to set also some useful
restraints. Because if you have a number of graduate students and everyone is working on something
more or less unrelated, it just doesn’t scale very well. It stretches my own expertise too far. (233-237)
The idea of students "scaling"—in any sense other than recreational negative geotaxis—seems an odd
concept, but for anyone who works with engineers, especially internet and computer engineers, the notion
has become nearly second nature. The conversion it represents is one from a geographical orientation—
horizons—to a scriptural one—connections. Perhaps the most important aspect of this inversion (for
social scientists) is that engineers now routinely refer to this orientation as "social" (and often by
reference to "communities"). The internet has transformed our understandings of speed and scale, but
rather than create some new language for propinquity, proximity, or distance, it has co-opted an older
one—social networks are now all the rage; people are building social software, social forums,
communities, etc. The only social theory that seems to have embraced this transformation and rejected
the language of the social is, ironically, systems theory—which wholeheartedly embraces the engineering
and biological descriptions of growth, transformation, and change.
Worrying about the internet: Hannah Landecker
It is one thing to make a technology that works, it is another thing to have the internet work with it. How
ISPs are designing their networks has an impact on the potential uses of the things Peter is working on.
“What is a little bit worrying is that increasingly ISPs are also designing their own networks under the
assumption that information flows from a few servers at the center of the Internet down to the users
not the other way around. ADSL and cable modem all are strongly based on that assumption…And
the more that technology is used the more this is a problem actually. It is actually a chicken-egg
problem. …”
Which brings up the question of how much computer scientists have to worry about the internet. In
the same way that citizens of a democratic state are supposed to worry about the government, are
computer scientists, by their occupation, more responsible for the internet and its shape? Its state, its
design, its control, its commercialization, its regulation. Here the worry is that if the ISPs design their
networks for information flow from a few servers to consumers who only consume and do not
themselves produce, these are not systems that will be compatible with peer-to-peer systems in which
everyone produces.
“Let’s assume that there is going to be more really interesting peer-to-peer applications that people
want to use and if people just start using them, is that going to create enough pressure for ISPs that to
then have to react to that and redesign the networks, or is the state of the art in the internet actually
putting such a strain on these systems that they never really take off?”
You could build the most excellent peer-to-peer system in the world, but it has to work in the larger
context of how the internet is configured. Here the given solution, what to do in face of worry about the
internet, is to build something that people want to use, so that ISPs have to react to it. Build it and they
will come, is the proposed mode of intervention in the shape of the internet.
“… the net was originally actually really peer-to-peer nature I think. It was completely symmetric.
There was no a priori bias against end users of, at the fringes of the network, at front users of injecting
data or injecting as much data as they consume. Only with the web having that structure strongly,
having that structure, the network also evolved in this fashion and now we want to try to reverse this
but it’s not clear what’s going to happen. I think, personally, I think if there is a strong enough peer-to-peer application, and there is strong enough moderation for people to use it, ISPs will have to react. It
would fix the problem.”
Would it?
Moshe Vardi Interview #1
Video Interview #1 with Moshe Y. Vardi, Professor of Computer Science, Rice University.
This interview was conducted on January 9, 2004. Key: MV=Moshe Vardi, CK=Chris
Kelty, TS=Tish Stringer, AP=Anthony Potoczniak
Moshe Vardi is a professor of computational engineering at Rice. He's also the director of one of Rice's
institutes, the Computer and Information Technology Institute (CITI), as well as a member of the
National Academy of Engineering. Moshe's interests are avowedly abstract—logic, automata theory, multi-agent knowledge systems, and model theory—but they have a curiously successful practical application:
computer-aided software verification. Software verification has been a much disputed field within
computer science, with a history of exchanges between proponents and detractors. He has written a very
philosophical work called "Reasoning about Knowledge" which formalizes several issues in formal
"group" epistemology, that is, knowledge as a problem for several knowing agents. We found this work
strange, but fascinating, especially as it allowed the authors to speak about inanimate objects like wires
"reasoning" about something. Moshe's work, among all of the interviewees was easily the most alien to a
group of anthropologists (though not so to the philosopher among us, Sherri Roush). The interview we
conducted here, however, does attempt to go into some detail on issues that connect to the other
interviews as well (including issues of complexity and hierarchy). We began in this case by asking some
biographical questions.
MV: Well, I really got into computer science when I was 16. I just finished high school, I finished a bit
early. I knew absolutely nothing about computers. Computers were just not on the horizon. There were
computers at the time, but I grew up on a Kibbutz, computers were just not something I had heard about.
I saw an ad in the paper for a computer programming course in Tel Aviv university. It cost 50 Israeli
pounds and given the currency has since then changed several times, I have no idea how much money it
was. I asked my father if he would give me the money to take the course—it was a two-week course, and
I fell in love with programming. At the time, computers were a very remote thing, you know, there it was
in a glass house; we basically wrote the coding [by hand], we didn’t even have access to the key punch
machine. We had coding sheets, you wrote your program, and it went to a keypunch operator, she made
mistakes, by the time you finished correcting her mistakes, you start catching your own mistakes—
everything took one day. And I just fell in love with it. There is something really interesting about it:
you see a fascination that teenage boys have with computers, and I don’t know that people have studied
this, but it's particular to boys, particular to a certain age. And the only theory I have is the complete
control that you have. Something about being completely in control. I don’t know. But I think it’s a
worthy topic for [its own sake].
CK: Even at such a distance? With the coding sheet…
MV: Right! Even then, there’s something about it, I just fell in love with it. I was going to major in
physics. So I added computer science as a minor. Then I went to do my military services and when I was
going back to graduate school, I kind of had to make a decision. Before that you could have both, then I
had to make a decision, and I chose to continue with computer science. Logic was kind of a really an
afterthought. The only logic I took in college was a course in philosophy, like a distribution requirement.
It was a book that you might be familiar with, by Copi. It is by now in edition 137, I suspect. It’s a very,
very widely used book. It’s also utterly boring—it bored me out of my skull. And I thought logic is just
a hum-drum, boring topic. I remember going to the professor and I said, "I have enough mathematical
background, do you mind if I don’t show up for classes?" I just came in and took the exam, writing logic
off, completely writing it off. Then when I went to graduate school [at the Weizmann Institute], there
was a course in mathematical logic. I had a wonderful teacher, and I just loved it. I thought, maybe I
should have studied logic, but, I didn’t realize there would be such a connection [to computers]. And then
I started working on my research, and the connection came about that there is an intimate connection
between logic and computer science. So it was rather serendipitous that I got into this.
CK: One of our first reactions to reading this stuff was: Isn’t this kind of a funny thing for a computer
scientist to be doing? It might seem obvious to you that it’s computer science, but to us…
MV: No! it’s not, it’s not at all. And actually, in the class I teach, the first lecture I give is a historical
perspective on this. I start by trying to give students this background, and say why there is such a
connection, because it’s not an obvious connection at all. I find it actually a very surprising connection
between logic and computer science…
CK: Where did it come up for you, biographically or historically, when did you realize that there was
this interesting connection?
MV: So, I was starting working on my master's; it was on a decision problem in database design, and I
was just completely stuck. I mean there was this problem and I just couldn’t make any progress. And I
was taking a course in computability, and I learned about undecidability. And, I said, "Ah! Some things
are undecidable!" so I went back to my adviser and said, you know this problem where I’m completely
stuck on, maybe I’m stuck on cause it’s undecidable! And he said, "No, come on, you’re taking a course
in computability, this is a database, this is a practical field, I don’t think we can run into this kind of
thing." So we dismissed it at the time. So I was stuck on the problem, and what we very often do is we
shift the problem, we change it a little bit. I generalized the problem, and in fact I was able to show it's
undecidable. Fifteen years later, with much progress, somebody else showed that my original problem
was indeed undecidable. [Later] my advisor ran into a logician, and somehow they start talking about
what we'd done, and this logician said, "Oh this all can be formulated in logic," which I did not realize
before and he showed it to me, and I said "oh!" So I just started reading more and educating myself in an
indirect way. [Some] people thought that I had really formal training in logic, but my training in logic
was actually very rudimentary. I took, as a student, probably two classes.
Part of Moshe's experience was at the IBM Almaden Research Center in California, where he had an
experience similar to the one Peter Druschel describes: being left alone to do basically whatever he was
interested in doing.
MV: At the time, IBM was making lots of money, and they were pretty happy to have people just doing
good science. The slogan at the time was that IBM wants to be famous for its science and vital to IBM
for its technology. But they were pretty happy if you just did good science, that was in some sense
connected to IBM’s business. And there was a connection [in my research] even that stuff about
knowledge was connected to distributed systems and to robotics, so the connections were there. So they
were pretty happy to let people do that. People in IBM jokingly called it IBM University, because we
kind of could ignore IBM. We got the money from IBM, we could do what we wanted. What a
wonderful period.
CK: I’ve heard that about Bell labs too…
MV: Used to be like that for Bell Labs. They let you do whatever you want, so it was completely self-directed research, curiosity-driven research as we call it, and the company was rich. I mean the period
when Ma-Bell was a monopoly, they had lots of money, so sure, why not?
CK: How do you negotiate the boundary between basic and applied research in this field? What is it
about computer logic that gets considered basic research as opposed to that which is applicable? Or do
you sell it all as potentially applicable?
MV: Well, this is one advantage we have over mathematicians, even though what we do can be highly
mathematical, it’s almost always relevant to applications. Because ultimately that’s what drives this field;
ultimately that’s why if you look at the average salary of a CS professor, it’s probably higher than the
average salary of a math professor. Now, you have to understand, in the United States, computer science
is typically an experimental field. You know, the people who do the more theoretical work are at most
maybe ten percent. But even the people who do theoretical work, they always say the motivation is to
understand computational problem, even people who do the most basic stuff, which is probably
complexity theory, they would explain the need to solve problems faster or understand why we cannot
solve them faster. People who do the most abstract logic, they would relate to something to semantic
programs, there always will be some practical motivation. Even if you have people who say, "I do
theoretical computer science," they will rarely say “I do pure science,” or “I do basic science”, they
would think of computer science as an applied field, within which you can do theoretical stuff.
CK: Is it differentiated from computer engineering?
MV: That distinction is very controversial. I think it is a very artificial distinction, because sometimes I
do very abstract things, but the applications are in computer engineering. So eh, I think that the right
structure is that computer science/engineering is really one field. When it’s not it has more or less to do
with what economists call, what do they call it… “path dependency.” Because things emerge in a
particular way, and because eh, well electrical engineers don’t want to give up computer engineering
because there’s money and students that come with it, but that’s a whole other… more of an academic
politics, rather than a judgment about disciplines.
CK: And practically speaking, when you apply for grants, you don’t have to say…
MV: We always talk about applications. We always talk about applications. I’d say this is very natural in
the field. Computer science is an applied field… If you’re doing mathematics then [you say] we are
going to develop mathematical techniques, and history has proven that it’s a good thing for society
because ultimately you get someone who makes use of it, but I don’t have to worry about it. The
mathematician says: I will just understand Riemannian manifolds, I don’t even have to explain why this is
useful, all I have to say is “other mathematicians care about it.” Computer science is different. Every
proposal will say why this is useful to computing, and computing is after all, a tool. Ok, so we say this
will help us solve problems faster, cheaper, prettier, whatever, but it's an applied field. Now, not everybody will agree with me; I think there are some people who would say that some parts of computer science are really like mathematics. And they are very similar to mathematics in the sense that the activity is of a mathematical nature. But even these people, if you push them on why this is interesting, some of them will take the same position and say it is interesting because it is interesting
mathematically, and usually most of these people then do it in a math department. And get the salary of a
math professor [laughter]. And get the grants of a math professor.
Moshe was inducted into the National Academy of Engineering for "contributions to verification"—meaning
his work on software verification. We asked him to try to give us a basic course on the subject of
software verification.
CK: In terms of trying to get a handle on some of this, could you maybe describe a little bit in more
detail the work you did on software verification?
MV: I was a post-doc and I went to a conference and I wanted to save some money. So I shared a room
with another graduate student and he started telling me about what he was doing. He was working on
verification and that’s how this collaboration started—just by trying to save money. And this is the
serendipity effect. So I have been working on this now for twenty years and it started as more theoretical
research. I had a unique way of doing it, which to me looks very elegant. But it turns out that elegance
was more than just an aesthetic value. It made it easier to understand and it was easy to implement.
Elegance in this case, as is very often the case—which is why we like elegant theories—ended up making for a more useful theory. Over the years, more and more people picked up on that and it kind of became one of the standard ways to do things. So today there are various tools, software tools. You know
a little bit of automata theory? You remember a little bit? So, automata theory is something that started in
the 50s. You know Turing Machines? A Turing machine has an infinite tape, it is a very useful
abstraction for computability, but it doesn’t correspond to real computers because they don’t have infinite
capacity—they all have a very finite amount of memory. So, Turing machines go back to the 30s; and in
the 50s, people said "can we have a model for finite computers?" And so, out of this came a
mathematical model for a finite device. And before there were punch cards there were paper tapes. They
were just these thin tapes that you kind of fed into the machine. So, there was this idea of a little finite device which has a mathematical model: you feed it a finite tape and in the end it just flushes, it gives
you 0/1, it just says good/bad. Okay? This is actually very nice because it also made a connection to
linguistics: you can imagine that if you feed it a sentence it will say whether it is a grammatical sentence
or not. So, there is a connection to Chomsky's work in language theory. And then in the 60s, in the
early 60s, people started asking "what happens if we feed this machine an infinite tape?" And they said,
"why would you? This is crazy, why would we feed it infinite tapes?" Well, this actually came to people
that wanted to understand the relationship between these machines and the theory of the natural numbers.
Now you can give me any natural number, so you could imagine: that’s why you need to feed infinite
tapes. So, it was very you know highly abstruse to say the least. And so, people developed a theory of
automata on infinite words, and on infinite trees to make it even more elaborate. Ok? And so, this is a
theory developed by logicians in the early 60s, early to late 60s, and nobody thought there would be any connection of this to computer science. There doesn't seem to be any connection.
But then in the late 70s people said, "well, sometimes we are interested in computer programs that run and give us a result that is just the right result. But sometimes we are interested in [programs that are not supposed to terminate]." Think of Windows. You want to say: Windows does what it is supposed to do. Well, it is not supposed to crash. [Laughter …]. In principle, it is supposed to run forever. So, if you want to describe computations like that, you really ought to think about a tape that starts but never ends. So, a connection came about
between this unlikely kind of theory of automata on infinite words and a way of talking about computing
processes. And that was the connection that I made together with some other people in the early 80s. And
first of all, it just gave a very beautiful theory and later on people actually went and said, "well let’s
implement it." So there are various tools used in industry today based on this. So, it is actually kind of
funny to go and talk to software developers in the industry and hear them talk about all these mathematical languages that, just twenty years ago, even many mathematicians didn't know about, because they were considered to be too abstract.
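To make this concrete for readers of the transcript, here is a minimal sketch, in Python, of the two models Moshe describes: a finite automaton that reads a finite tape and just says good/bad, and a monitor that watches an in-principle endless stream of events, which is roughly how one talks about programs (like Windows) that are never supposed to terminate. The automata, the property, and the event names are our own toy examples, not anything taken from Moshe's papers or tools.

# Part 1: a deterministic finite automaton (DFA), the "little finite device"
# that reads a finite tape and answers good/bad. This toy DFA accepts binary
# strings containing an even number of 1s.

DFA = {
    "start": "even",
    "accepting": {"even"},
    "transitions": {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
}

def run_dfa(dfa, tape):
    """Feed a finite tape to the automaton; return True (good) or False (bad)."""
    state = dfa["start"]
    for symbol in tape:
        state = dfa["transitions"][(state, symbol)]
    return state in dfa["accepting"]

print(run_dfa(DFA, "1010"))  # True: two 1s
print(run_dfa(DFA, "111"))   # False: three 1s

# Part 2: a monitor for a computation that "starts but never ends." The toy
# property is that an "ack" must never occur unless a "request" is pending.
# A violation of such a property, if it happens at all, shows up on some finite
# prefix, so the monitor can flag it even though the run itself is unbounded.

def monitor(events):
    pending = 0  # requests awaiting acknowledgment
    for i, event in enumerate(events):
        if event == "request":
            pending += 1
        elif event == "ack":
            if pending == 0:
                return "violation at step %d: ack without a pending request" % i
            pending -= 1
    return "no violation observed on this prefix"

print(monitor(["request", "ack", "request", "request", "ack"]))
print(monitor(["ack"]))  # flags the violation immediately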
CK: And practically speaking, this is for designing chips, designing any kind of software?
MV: This is for designing any kind of software, mostly it's useful for what we call “control intensive
software”. Some programs just do very intensive computing, such as doing signal processing, when you
run an algorithm. But when you look at what happens for example when you send email on the internet,
other than taking the initial data, and putting it in packets, after that you don’t really touch the data. What
you want is to send it to different places, and route it and synchronize and make sure you put things
together. This is what we call control intensive programming. So, especially today in a very highly
networked world, much of computing is really just people talking to each other—I mean "people"—
agents, processes, programs talking to each other. If you use a dial-up modem, you hear this ‘be be beep’,
right? It's R2D2 and R2D3 talking to each other; the two modems are trying to synchronize—and a tremendous amount of this is happening. Emailing, a lot of networking protocols, much of what's done in a
chip today is really of that nature. So, yes on a chip, there is a piece that would be called the arithmetical
unit that does plus and times and things like that. But the biggest challenge today is to do things fast and
to do things fast you have to take many pieces and do them in parallel and then put things together. So, it
is all about control. So it turns out that when you look at numerical computation, these [verification] techniques are not very good. But for control, the issues of coordination, synchronization, and communication, that is where these techniques have been very useful.
Hierarchy and Complexity
From the other interviews we had done, specifically with Peter, we became interested in the nature of
thinking about hierarchy and complexity in computer science. On the one hand, Moshe's verification
techniques seemed to require a very detailed top-down approach, which we thought contrasted with the
peer-to-peer ethic, and with something like an open-source approach to verification. But according to
Moshe (and Peter as well) there is no contradiction. Moshe tried to explain hierarchy and complexity to
us using an example from chip design.
MV: No, no. The picture that I drew is kind of an idealistic picture, but the reality is that a full chip today
is enormously complex. So, today the full chip is millions of lines of code. The only thing more complicated than this is Windows. That's more complex even than a full chip—about an order of magnitude more complicated. But it is not a monolithic piece, right? It is divided between hundreds of people. So, one of the big issues of research for us is what we call "modular verification," which asks, "Can you define the interface between different pieces in a precise enough way that I can take a small piece and replace the rest of the big thing around it by a small description of how it is supposed to behave, as far as this little piece is concerned?" And then I am dealing with a much smaller object.
Most theories of design engineering basically say that large systems are very complex and the way to be
able to comprehend them is to divide and conquer. You divide them into modules and you define very
nicely the boundaries. And if you have a good definition of the boundaries, then I can play with my
module and it is not going to screw up everything everybody else is doing. The methodology says to have
very clear boundaries, components, modules.
CK: So, you could even put some form of verification within open source software?
MV: Absolutely. In fact we would facilitate it, because the guy who is only dealing with one piece can ignore the rest; all he or she has to know is that there is some invariant that you have to keep. As long as
you obey some specification, then you don’t have to worry about what other people are doing.
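A rough illustration, in Python, of the "modular" idea in this exchange: a single piece is checked in isolation against a small interface specification (an invariant), and everyone else relies only on that specification, never on the piece's internals. The module, the invariant, and the commands below are all invented for the example; real modular verification tools work on hardware or program models, not toy Python functions.

def counter_module(state, command):
    """A tiny 'piece': a saturating counter with capacity 10."""
    if command == "inc":
        return min(state + 1, 10)
    if command == "reset":
        return 0
    return state  # unknown commands leave the state alone

def interface_invariant(state):
    """What the rest of the design is allowed to assume about this piece."""
    return 0 <= state <= 10

def check_module(module, invariant, commands):
    """Check the piece by itself: from every state the invariant allows,
    every command must lead back to a state the invariant allows."""
    for state in range(0, 11):
        for command in commands:
            new_state = module(state, command)
            assert invariant(new_state), (state, command, new_state)
    print("module respects its interface invariant")

check_module(counter_module, interface_invariant, ["inc", "reset", "noop"])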
TS: So, in the chip design example, is there anyone that has an overall vision of the whole project?
MV: I think the key to human civilization is the concept of hierarchy. We have a large country, what is it now? Almost 300 million people. If you try to manage it very tightly, you get what happened in the
Soviet Union—it’s impossible to manage it very tightly. So we don’t; we try to break it into pieces, we
divide the work, and say, "you’re responsible for this at the federal level. And someone else is responsible
at the lower level." So we build an incredibly complex organization. Think of a company like IBM that
has 300,000 employees. How does it work? Nobody can have a picture of what 300,000 people are
doing. So there are people who think at different levels. There are the people who design the Pentium—what is it now, Pentium 4? So someone is thinking about Pentium 5, the next Pentium—and so there is a small group of people, and they will be called the architects, and they would have the high-level view:
what is the approach? How is the Pentium 5 different than Pentium 4 and what are we going to do? You
divide the levels up. Intel takes a chip and divides it into pieces, every piece is called a cluster. So there
will be some number of clusters—probably not a huge number of clusters, I don’t know exactly, I would
guess not more than 8 clusters. There would be cluster managers and somebody would be in charge of the
whole project. And there would be a group of people who do architecture at that level. The clusters will be different pieces of the chip, the cluster that does this, the cluster that does that… For example, there is something called instruction decoding: there is a stream of instructions coming from the software, the compiler generates this stream of instructions… load… add… but they are all coming in binary. So
somebody has to go and figure out what they are supposed to do. So that’s a piece of the chip. And a
cluster is divided into units; and units are divided into blocks, and a person sits and works on a block. So
there are different people, and they think at different levels. It's the same here: we have a department chair, then we have deans and we have the Provost, and then… it's the same… hierarchy is the key to building anything complex. If we try to do things in a big way, monolithically, doing what we call monolithic design, it's just too complex. Nobody can keep this complexity in their head.
CK: Does that raise the issue of how people trust machines or trust software? I mean, at a certain point
do you say: Okay, here’s some software, and someone says, well, it’s very complex, I’m going to need
more software, and whether it works correctly… and then you say: well, someone else needs to write
more software, to make sure that that software… Is there a point at which even the architects have to
throw up their hands and say: we just have to trust this?
MV: Think of… today you have people who use mathematical theories. And today we have some very
complex theories where you are required to use analytic number theory, you bring some number theory
into this, and a mathematician will not hesitate to take a theorem from analysis that has been used before… not some obscure theorem, but one that has been kind of vetted around. I'm not going to
prove everything for myself, right, I’ll just go by trust, right? You get on the light rail don't you?
[laughter]? You basically said, well… there are lots of people who, that’s their expertise, and I assume
they know what they are doing. Most of the time it works, sometimes it doesn’t. We have accidents. It’s
true in any complex system, there is no one person that has the whole thing in mind… So our large complex systems can fail, and usually when they fail, it's almost never that one person, Homer Simpson, you know, pushed the wrong button. I mean, it's almost never like that. So when you build a large chip, it's a
system, a team effort of several hundred people. There is cross-checking and double-checking, in fact,
today, it is estimated that about 70% of the design work is actually what is called the validation effort. So when you look at the people who design the chip, more money goes to the people who actually check what the designers are doing than to the people actually sitting down and writing code.
On the subject of complexity and trust, Moshe suggested a 1979 paper, "Social Processes and Proofs of
Theorems and Programs" by De Millo, Lipton and Perlis. The paper was one of the first strong critiques
of the idea of program verification, based on the suggestion (dear to science studies of many stripes) that
mathematical proofs are social processes, not logical demonstrations, and that it is the community of
mathematicians that decides what proofs will be used, not any logical or algorithmic set of steps. They
suggest, therefore, that it is a dream that verification might replace proofs in mathematics, because no one
would be willing to read such an unwieldy thing as a program verification. What's interesting in both the
paper and in Moshe's description is the issue of a probabilistic notion of truth achieved through the most
demandingly logical analyses. As he says, it isn’t that it is "some heavenly truth" only that it increases
trust.
MV: Why do we believe proofs? It’s actually a very nice paper. It’s interesting to say why the
conclusions are wrong in the paper, it’s not trivial. But the analysis says, that proofs are believed as a
social process, it’s a vetting-out out process, essentially. You know Pythagoras thought of a great
theorem, right, he was very excited, and called his graduate students: "Let’s me show you this wonderful
theorem I thought of last night." And the students say: "that looks good." But he says, you know, "my
students are probably afraid to tell me I'm wrong." So he went to Archimedes, which was how many hundred years later? So let's say he went to his neighbor, and ran it by him, etc. So they have a
social analysis… and I think in particular because of this you guys should take a look at the paper. It’s
really an anthropology paper about verification, about the validity of verification. And so they said that
the reason we believe mathematical theorems is because there is a social process that leads to acceptance.
It’s not as if there is some god given truth, there is no Platonic truth that we don’t know, but we just
believe in it. It’s a social result. So in that sense he says that the mathematical theorem is really a social
construct. Post-modernists would absolutely love it. Everything is a social construct. So, the argument
was that, if you try to prove a program, the proof will sometimes be a humongous object. The program is very complex, so the proof will be, somebody comes to you with a stack like this. They said there will be no social process to vet it. So why should we believe these proofs? That was the argument. So we should not believe in a proof that cannot go through the social process. Today the way we get around all of
this is that we have proofs that are checked by computers. So we kind of boot-strap it and you can say,
why would you trust the computer that checks the proof? You have infinite regress in some sense. But
you know, today, I think most people think of verification not as something that will give you, like, some heavenly stamp of approval of correctness. But all I say is that if you went through this process
then my trust increases. That’s all I can say. It’s not about some absolute correctness, but increasing trust.
Moshe gave us a paper he had co-written, "On the unusual effectiveness of logic in computer science"—a
play on the 1960 paper by E.P. Wigner called "The unreasonable effectiveness of mathematics in the natural sciences." We discussed this paper briefly, and tried to understand why exactly Moshe and his co-authors
were so surprised that logic is unusually effective in computer science. Moshe suggested that very few
programmers actually think logically, but rather pragmatically, and that it is therefore a surprise that
logic works so well to describe the programs they end up writing.
MV: So what I remember in my experience when I started with programming is that it’s extremely
frustrating, because suddenly you have to be so precise, you are the programmer, and you run the program
and the first time you don’t get what you expected. And you start by thinking: “the compiler is wrong,
Intel is wrong, something is wrong, because I know it’s got to work.” Then you discover, no, I wasn’t
quite precise enough here. I wasn’t quite precise enough there. I said: “check that X is less than Y, and it
should be X is less than or equal to Y." Otherwise this search doesn't work. You have to be incredibly precise. Okay. So, the essence of very much of what computer scientists do is to write very, very precise descriptions. So suddenly, it's not enough that we know how to express ourselves, we have to have a degree of precision… it's almost unheard of. And the one discipline that dealt with precise description was logic. So this is why I think logic was waiting there to be discovered. People had already studied the concept of a language with formal semantics and precision in the same way. Because what
did they do: they tried to take mathematics and abstract it. Usually the way mathematics is written is in
an imprecise way. You look at mathematical papers, and they are written ambiguously, in a mixture of symbols and English. And when people wanted to develop the foundations of mathematics, they wanted to make it precise. There are some famous German mathematical books from the 19th century, where the trend was: "don't use words, words are confusing." Okay. "Just use symbols…" There was this German mathematician, Landau, who was famous for that. It was really incredibly formal—almost no natural language. So when people started developing, late in the 19th century and early 20th century, the foundations of mathematics, part of the goal was to get away from natural language: we have to have a language that is completely formal, with a completely precise meaning. And that is essentially what we need when we want to talk to a computer. We need a language that is completely formal, with a completely precise meaning.
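The "less than" versus "less than or equal" slip Moshe describes is easy to reproduce. Here is a sketch, in Python, of a standard binary search in which changing the loop condition from <= to < silently makes the search miss some targets; the example is ours, not Moshe's.

def binary_search(xs, target):
    """Return the index of target in the sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:           # writing "lo < hi" here is the classic imprecision:
        mid = (lo + hi) // 2  # the last remaining candidate would never be examined
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7], 7))  # 3 with <=; with < it would return -1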
Moshe's book, Reasoning about Knowledge, was co-written with three other people, all computer scientist/logician/philosophers. We talked briefly here about the mechanics of writing a book with four
authors and the "revision control system" they used to manage it. We also discussed some of the give and
take necessary for a collaborative project to work in the end.
CK: How do 4 people write a book together?
MV: The mechanics, actually, there is something called a revision control system…We developed our
own homebrew system, which in my opinion was much more effective. We developed this sort of
protocol of how you modify files. The first line of the file said who has the token for the file. And you
cannot touch the file unless it says you have the token. And if you sent it to someone and you forgot to
change it, they will say “no, I can’t receive it from you unless you give me the token.” Most of us were
good about it—Halpern was the bad guy, we kept jumping on him. And he always said, “just this time.”
We were very brutal, we said: "no. No token, you cannot work on the file." [Laughter] So we kind of developed a very primitive, but very effective, protocol for how to do the collaborative work. We were work
colleagues, but also friends. We had a very good atmosphere. We were just down the hall from each
other, so we would write and go and we would have incredible arguments. Somebody would write a
chapter and somebody says “I don’t know…” So you say: “you write it! You don’t like what I write, you
write it” “No no no, you wrote it first, you improve it, then I will take it.” But, we would have, actually,
it’s interesting that all four of us are Jewish, so there is a culture of open debate. And people usually
don’t get offended if somebody disagrees with them. And when we got into bitter disagreements, we
would have a weighted voting scheme, where you voted, but you also said how important this is to you on
a scale of 1-10. Of course, this only works if you're honest about this. Otherwise everybody says 10 or one! There's only one thing that still bugs me to this very day, which is a minor thing, but there is something that comes from modal logic, the concept of a "rigid designator", which is something that in different modal contexts has the same meaning. And the question is: what is the opposite of
rigid, what do you think? What should be the opposite of rigid?
CK: Flaccid?
MV: No, flaccid has all kinds of connotations! No, flexible. But my co-author thought it should be "non-rigid."
CK: Non-rigid? Oh really, as opposed to flexible?
MV: So we had a big debate, and I lost on this one. This is the only thing that really for some reason
really mattered to me, and for some reason annoyed me.
CK: Maybe you would have won if you had suggested ‘flaccid’ [laughing]?
MV: Yeah maybe I’d have won if I’d suggested ‘flaccid.’ I going to propose it back for the next edition,
that we should have flaccid designators.
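For the curious, a minimal reconstruction in Python of the token convention Moshe describes: the first line of each file names the current token holder, and nobody may modify the file unless that line names them. The file format and the function names are our own guesses at the simplest possible version, not the co-authors' actual tooling.

def token_holder(path):
    """Read the current holder from the file's first line, e.g. 'TOKEN: Vardi'."""
    with open(path) as f:
        first_line = f.readline().strip()
    return first_line.split(":", 1)[1].strip()

def edit_file(path, author, new_text):
    """Refuse to touch the file unless the author holds the token."""
    if token_holder(path) != author:
        raise PermissionError("%s does not hold the token for %s" % (author, path))
    with open(path, "a") as f:
        f.write(new_text + "\n")

def pass_token(path, current_holder, successor):
    """Hand the token on by rewriting the first line."""
    if token_holder(path) != current_holder:
        raise PermissionError("%s cannot pass a token they do not hold" % current_holder)
    with open(path) as f:
        lines = f.readlines()
    lines[0] = "TOKEN: %s\n" % successor
    with open(path, "w") as f:
        f.writelines(lines)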
One of the most interesting aspects of our discussion concerns the arguments of the book Reasoning
about Knowledge and the question of how loosely or tightly the formal epistemology describes or can be
applied to humans, as opposed to logical systems, computers, wires or other non-humans. We asked first
if it played any role in how Moshe thought about running an institute or dealing with people, and he
insisted not--that formal epistemology was really at best an idealized version of how people think. We
asked whether that meant it shouldn't be applied in moral situations.
MV: This stuff has application to philosophy, to computer science, but also to economics. Because if
you think about what economics is, it’s about what happens when you have systems with lots of agents
and they trade and they do things together. It’s all about multi-agent systems. And we had meetings to
bring all these things together, which were fascinating meetings. I remember sitting with a well-known
economist from Yale and I said: "but this doesn't really describe people. Don't you see this as a problem?" And he says: "Don't you see this as a problem?" I said: "I deal with computers, I deal with
artificial reality.” [Laughter]. When I talk about “know” I don’t have to imbue with any kind of cognitive
meaning, I just say it’s a convenient way of talking about it." When I say, "when the person received the
acknowledgement, it knows that the message was received," I don’t have to think about epistemology, it's
just a way of talking about it. It’s a convenient way of talking about it. It’s a very different if you try to
say that’s what people mean when they say: “I know.” So, there is a concept called the “possible world
approaches” where you say “I know the fact,” if in all the scenarios that are considered to be possible, that
fact holds. And it’s not clear that that’s what people really mean when they say “I know”. I know that the
table is, you know, formica.
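A small sketch, in Python, of the possible-worlds reading of "know" that Moshe gives: an agent knows a fact if the fact holds in every world the agent considers possible. The worlds and the acknowledgement example below are our own toy reconstruction of his message/ack illustration, not anything taken from Reasoning about Knowledge.

# Three candidate worlds, described by which facts hold in each.
WORLDS = {
    "w1": {"ack_received": True,  "message_delivered": True},
    "w2": {"ack_received": False, "message_delivered": True},   # delivered, ack not yet seen
    "w3": {"ack_received": False, "message_delivered": False},  # not delivered
}

# Which worlds the sender considers possible, from the standpoint of each world.
POSSIBLE = {
    "w1": {"w1"},        # the ack has been seen, so only w1 remains possible
    "w2": {"w2", "w3"},  # no ack yet: the sender cannot tell w2 and w3 apart
    "w3": {"w2", "w3"},
}

def knows(world, fact):
    """The sender knows `fact` at `world` iff it holds in every world it considers possible."""
    return all(WORLDS[w][fact] for w in POSSIBLE[world])

print(knows("w1", "message_delivered"))  # True: once the ack arrives, delivery is known
print(knows("w2", "message_delivered"))  # False: without the ack, w3 is still possible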
CK: But you're suggesting that it won't work for humans but that it will work for wires. You say at the
beginning of the book…
MV: It will work in the sense that it gives me a way to design systems. Ok? So, if I think that my agents
should have some properties, that they may not be aware of some of the possibilities, then I can build it into my theory--but it's in an artificial world, it's a make-believe world.
CK: And it's one in which you can actually control what an agent knows.
MV: And even “knows” it’s not something the agent really “knows.” What does it mean that “the wire
knows?” It’s just an interpretations I put on it from theoutside. I can say “the wire knows that the wire
knows,” but there is [still] some naked reality there, and I choose to put an the interpretation on it. It’s
much more difficult to do it on human epistemology because it’s hard to take ourselves out of the picture.
It’s hard for me to dictate to you what you should mean by the word “know” if you object to this
meaning. So, I think it is much harder to do it in a human setting. But that is the beauty of computer science: it is the science of the artificial.
CK: I suppose a kind of perverse question would be: is there anything to be gained by imagining a way
in which you can ascribe belief to computer agents, wires, things that are manifestly not human but where
you might want a richer description of what it is they are doing. That seems a little bit science-fiction-y
though…
MV: But we did. I don't care if you say my wires are logically omniscient, if there is no other consequence of their knowledge. But let's suppose they need to act on it. They need to compute this knowledge. And it takes time to compute. So, how do we factor this in? One of the attempts in the book was to try to deal with this, with a concept called algorithmic knowledge. Yes, you know the consequences of your knowledge, but it takes time to get to know them, and how do we factor this in? What
we're interested in is a way to build systems. It means that I want my wires to act on the basis of their
knowledge, and they need to compute it, so….
CK: So what you’re saying is that it's like a wire could know that it could know something, but it has to
decide whether or not it’s worth it to…
MV: …To spend the resources on it. So one of the more advanced chapters of the book is called
algorithmic knowledge, and there we try to deal with the issue of: ok, how do you actually gain the
knowledge. And again, I think it's an idealized picture appropriate to artificial systems, but again I think it's very difficult; any logical attempt to try to capture human cognition is, in my opinion, doomed to fail.
CK: What about…cases where people understand enough to deliberately keep something secret? [Cases
where you would] attribute some kind of malicious intent…which would turn it into presumably an
ethical question almost…but it would be machine ethics I suppose…
MV: Well there have been some attempts in computer science to reason about cryptography and privacy
and authentication. So there have been attempts actually to take this theory and apply it to that context.
The whole concept of cryptography is that there is a message, and I know what it is, and you see it but
you don’t what it means, right? So there was actually an attempt to relate this… I think they were
moderately successful, but not successful enough that people are using them. People in philosophy have
tried to look at various issues, for example you know the surprise quiz paradox? Or the hangman
paradox. Do you know that one? So, a professor tells his students, I’m going to give you a quiz on one
of the days next week, and you will be surprised. So the students sit down and think "Okay, by Friday, if
we didn’t get the quiz on any earlier day, surely we are going to get it on Friday, right, but then we won’t
be surprised, so we know the teacher is not going to give it to us on Friday." So Friday’s out. So now we
know that Thursday must be the last day, so we’re not going to be surprised on Thursday either, now they
eliminate every day, and they say, the conclusion is, it can't happen, he won't give us a quiz, because we
are not going to be surprised. He gives them the quiz on Tuesday, and they were very surprised
[laughter].
So, it’s a very cute little, little puzzle and the question is, where did they go wrong? Their reasoning
seemed to be impeccable, but the Professor told them he was going to give them a quiz and he was right,
they were surprised. So can you give a logical account of this—a paradox is a good way to kind of test
your tools—and this is actually not an easy one… But as far as ethics… I never thought this might have
ethical implications. If there are, you'd have to think about that. Reasoning about Knowledge is not the right title for this book, because it's really "Reasoning about Group Knowledge." Even from the very beginning, the muddy children puzzle: it's an analysis about a group of children; it's just not interesting if there's only one child and one parent, right? That's the one uninteresting case… because it only becomes interesting when there are multiple children, so the whole book is really about group knowledge. So you have to think of ethical issues that come up that have to do with group interaction and group
knowledge. But I’ve never actually, this is the first time I’ve thought about that.
Moshe talked a bit more about how this theory could be used in Economics, and how one of the seminal
papers in the field was by an economist, Robert Aumann's 1976 paper "Agreeing to Disagree" in the Annals
of Statistics. At issue in that work is the question of how two people with the same prior knowledge
cannot "agree to disagree". Although the original paper is not about buyers and sellers precisely, Moshe
explained that it has implications for the possibility of trading. Our follow-up question, still pushing on
the ethics button, concerned the case where some people in a group know enough to know what to keep
secret in order to gain an advantage.
MV: In fact we spent enormous amounts of effort to analyze a statement of the following form: you and I have common knowledge of something that these two don't have common knowledge about. And this turned out to be
very, very subtle, and especially when we try to iterate these kinds of things, the mathematical analysis of
this is actually very subtle. To really give it precise semantics you end up doing what we call trans-finite
reasoning, which is that you have to think of infinite numbers beyond the countable, and I spent a whole
summer trying to prove a theorem [doing that]. In order to get some intuition, I used a blackboard and I
drew lines like this [draws long straight lines on chalkboard], just to somehow give me some graphical
anchor for thinking about trans-finite numbers. So I would have the blackboard and nothing on it but
these very long lines. I was in Israel for the summer and my wife would come sometimes to pick me up
for lunch, and she’d see me in my office and I’m staring at the blackboard with these long lines
[laughter]. Okay, then she comes again at dinner, and she catches me in the same position, you know,
staring at the blackboard and there are these long white lines. And she says: This is what they pay you to
do? [laughter].
MV: So I agree with you, that when you come to analyzing group situations, I just never thought… I have to say, you kind of caught me off guard; I never thought about the ethical applications of this. An
interesting topic to think about.
Our attention turned here to what Moshe considered the "more mundane" but more practical ethical issues. We discussed the kinds of issues he teaches and talks about as a question of "research ethics" and its comparison with medical ethics.
MV: Once a year, I give [the Rice Undergraduate Scholars Program] a lecture on ethics. The issues are
much more mundane, they don’t really require much philosophical sophistication. These [issues we have
been discussing], it’s interesting to play with these kind of questions, but usually the ethical issues that
people cope with [are more mundane]. An interesting issue that came up recently, for instance, is when
you have papers with multiple authors and it turns out that there is some fraud perpetrated in the paper:
what is the responsibility of the authors? If this was done by one author, who perpetrated it, what is the responsibility of the other authors? Surely, when they get the credit, they want to say oh, everybody gets
the credit for the paper [laughter]. But, when something’s wrong they say, oh no, he did it. Once we
invited Jerry McKenny [from philosophy]; he gave a guest lecture and he tried to explain to us how ethicists think about ethics—does the action have to be ethical? Does the rule have to be ethical? I mean, there are various ways to think formally about ethics. But most of the time, for people to behave
ethically, you know, I think they can ignore most of the philosophers…
CK: One of our group, Valerie, suggested that bioethics and medical ethics are really coherent fields
because they have a set of principles which everyone makes reference to—like beneficence, do no harm,
non-maleficence or justice—and therefore the kinds of questions that they ask and the answers that they
give in practical situations are a lot more coherent. Do you think it would help science and engineering
outside of medicine to have that kind of an approach? Does that seem like something that's needed? Or are these even more basic practical questions?
MV: There is no question that if you look at research ethics and compare it to medical ethics, I think
medical ethics is a much more sophisticated field. I mean there are really deep questions that people have
to decide: the meaning of life, when does life start, all kind of things like that, right? Research ethics is
mostly about human decency. People think that ethics is about scientific fraud. And I say, not really,
because I mean when we teach an ethics class, we don’t start the ethics class, ethics 101—"thou shalt not
kill, right?" [laughter] So there are certain things—fudging data is criminal and it’s immoral and its
unethical—whatever you want, so end of story, there’s nothing to say about it. What is an ethical
question? Well, you know, you run the experiment, and there’s a power fluctuation on one of the points,
but if you drop this point then you have no paper. Okay, and there are good reasons why you think the
power fluctuation did not affect the experiment… Ethics is about how to handle tricky situations. Not
about what—thou shalt not kill—that’s easy. In most of the cases that I saw, it’s really about how to be a
mensch, it's just about human decency.
I think it could be useful to have some principles. For example, a big issue in research ethics is that of
authorship. When I talk to students about it, I tell them that authorship is like sex. And the graduate
students, in particular, are like “what is he going to say here?” I explain that everyone understands that
sex is one of the crucial components of marriage, but very few people go into it by writing down precise
prenuptial agreements or spelling out everybody’s expectations and defining rules of conduct, right?
People are just supposed to wing it, right? Well that’s what happens with authorship rights. Of course,
everyone will agree that it's one of the most crucial components of scholarship, you know, putting your
name on a scholarly work, that's the essence of scholarship, right? (And in fact the second book of Djerassi, The Bourbaki Gambit, is about this issue of authorship.) But you try to find out what the rules for
authorship are and you find that there are none. It’s not well defined, you can’t find almost any rules that
define who should be an author, it varies from field to field, I mean it’s a mess. And no wonder, …just as
there are many many problems with sex, there are many, many problems with authorship, it's rife with
problems. So it would be useful, it would benefit us to reach a level of understanding where we have a better understanding of what authorship is—it would, if it were possible, but I'm not sure it's possible. So
I think research ethics generally doesn’t have any of the sophistication that you find in bio-medical ethics.
CK: Usually when the ethical dilemmas that you are talking about are fuzzy or ambiguous, they tend
towards political issues. And I’m not talking about capital P political issues, I’m talking about political in
the sense of “how should we live?” that old Aristotelian question, which is also “how should we live
together?” So I mean there are ways in which those questions about authorship are also political ones.
Who gets credit and how? And who should decide, and whether or not the way in which credit is
distributed is just, right, those are political questions…
MV: Right, right. You know, I have very little background in this, as you recall, my first philosophy
course was logic and I didn’t like it [laughter]. And the rest is just from some reading here and there. I
also find these kinds of high-brow discussions highly stimulating, but they don't quite give you concrete
answers. Can I sit down and say ok, what do I do now at the end of the day? We can discuss what is just,
but what do I do at the end of the day? What I usually tell students is about clear communication, about
being explicit rather than implicit about things, about human decency, but I’m not going to define for
them what is humanly decent… I mean you can go and ask, but what is it really? What is a decent
behavior? We usually stop there, we don’t get the definition of decency.
CK: So there’s a reliance on a common sense or an everyday notion…or even to some extent professional
ethics, in the sense that you get inducted into it through practice?
MV: Well you can socialize into it… Professional ethics is very different from morality. I give them an
example: take submitting a paper to two journals. Is it immoral? Nobody would think you are doing
something immoral if you submit it to two journals. It so happens that professional ethics says “thou shalt
not do it, this is a bad thing.” Why is it a bad thing? Well there are historical reasons why people frown
on this—it had to do with expense of doing this, with the desire to appropriate credit fairly and things like
that, so if you didn’t know about it… we had an interesting issue recently, I was running a program earlier
in the year, and we had a submission from Algeria. And we discovered that the same paper was
submitted to two conferences. And so the discussion started, "what’s the right way to handle this?" And
do these people know about these rules, I mean, who knows? They sit there in Algeria, maybe nobody
told them it’s not okay to do it. So there was a big debate on this.
CK: Well that’s sort of what I meant by political, I don’t mean…
MV: These are simply social conventions—what is the appropriate behavior in particular contexts—and
every society has them. I'm a transplant from one society to another, and it's a big thing. For example: do gifts have to be wrapped? You know, to me, it still looks like a colossal waste of effort [laughter]: why did this person spend all this time wrapping it in this paper, just so the next person has to tear it off? I'm still puzzled by this, but this seems to be the convention here: to call it a gift, you have to wrap it. And in
expensive paper. OK. You know, where I come from, you want to give someone a book, you just give
them the book. Give them the book!
CK: Clearly, you never made it past your logic class in philosophy… to the gift class.
MV: To the gifting class… gifting 101! So society has conventions, and scientific society is a group of people with their own conventions. And very much of what we teach the students is just "what are the
expectations in this, what are the social expectations, how are you supposed to behave?"
TS: So do you teach them that in a class, or do you just teach them about that? Do you actually go
through what the expectations are?
MV: Well, I haven’t taught the class for a while but when we do that, yes. Much of what we taught in
the class was: these are the norms, these are the conventions, here is the process of publishing a paper,
here is how you are supposed to behave, here are the norms of reviewing the paper. So, submitting the
paper is relatively easy. But the reviewer is this mysterious anonymous figure with a background nobody
knows. And the reviewers (well actually we call it the referee, which is a somewhat harsher term), if you
think about it, the referee is of crucial importance: papers get accepted or rejected, careers are being made
by these people in the wings, there you have real ethical issues, you know, what’s the proper way of doing
this? So yes, much of what we talked about is just: how are you supposed to behave, what are the rules of behavior, in this particular society?
And in fact it changes: there are some principles that will be uniform, but some are not. Going back to
authorship—order of authors. Take people in this computer science department and you’ll find that
because each one belongs to a sub-discipline here—I am in one area and other people are in other areas—
and they will tell you that there are other rules…
TS: It’s not just alphabetical order?
MV: In my area, it’s alphabetical, and I love it. I think it’s think it’s simplest of rules, I love this rule of
alphabetical order, even though I’m usually the victim of this [laughter]. But it’s a simple rule. The
systems people tell me they usually but the younger people in front.
TS: Oh, it’s not the old, the most senior people first?
MV: No, it’s the most junior one in front.
TS: So you’re kind of teaching a class on manners? Basically… or expectations… that’s a great idea!
[laughter].
MV: Think about it this way, what do you know when you meet a person? You say hello, you reach out
your hand and you say “my name is so and so” right? How do you know that’s what’s supposed to
happen?
TS: It just happened to me the other day that I held out my hand and said "hello, my name is so and so" and he said "Oh, in my religion we don't touch." [Laughter].
MV: Okay, but actually, at the very least, somebody taught him how to behave. Cause the more awkward
thing…
TS: It was pretty awkward. But I’m an anthropologist…
MV: But how do you know that’s what you are supposed to do? Well you absorbed it over time, your
mother told you, you probably don't even remember it, it's in the deep past. Well, even though I didn't start out being a scientist, many things that I do, I don't even know how I know them. You osmose it, you grab it, you get socialized; you start as a graduate student, and gradually someone tells you "no no, that's
not the way you are supposed to…" and after a while you don’t even remember how you know it. And
when I tell it to students, there are certain things I tell them in passing, because I think it's obvious, and I see students say: "Oh, really? That's the way it's supposed to happen?" And I say, yeah, doesn't everybody know that? And they say, no, nobody ever told us about it. I think the students who went to
[the class] thought it was wonderful. I mean they just, instead of waiting to gather a bit of information
here and there, suddenly somebody tells them the whole thing, there is a secret book [laughter]. And
somebody now opens the book for you, so it was a lot about just professional norms rather than about
ethics, cause part of the ethics is: follow the norms.
Discussion turned here to a series of questions about the interface between legal and ethical issues,
ranging from intellectual property to terrorism-related issues. Because Moshe is a full professor, head of
CITI and a respected member of the campus community, he is probably more attuned to these issues than
most other faculty. He discussed a few of the issues he has faced recently.
MV: The one place where I think you do get into legal issues mostly has to do with intellectual property.
Because there, you have issues of patents, you have agreements with companies, you know, we sometimes have cases where there is interplay between [law and ethics]. That's the only place I can think of
where we run into legal issues. Well, of course, if you perpetrate scientific fraud and it was federally
funded research, I’m sure you have violated god knows how many federal laws, probably wire fraud, mail
fraud, whatever, but that’s not what we’re talking about. So I think the only place where we run into legal
issues is intellectual property.
CK: What about now, with work on terror, you know, related to terrorism?
MV: We haven’t seen it yet. In fact, there are issues coming right now, there is a new issue coming up
having to do with export control. This is a very hot issue now for our society, the IEEE, which is the Institute of Electrical and Electronics Engineers. They recently stopped receiving – they also run several journals – they recently stopped receiving submissions from scientists in embargoed countries.
TS: They won’t accept them?
MV: They won’t accept submissions. The reason is that they were sending them referee’s reports. So
there was some ruling by some government agency that said that sending a referee’s report is export of
data. So there is a big fight now.
CK: Just the referee’s report, not even publishing…
MV: Well you know, publishing the paper doesn’t seem to be a violation of any law, I mean you’re not
allowed to send them information. Here they are sending you information, nobody could argue… So
that has been ruled out, so they don’t take submissions from them. Right now, that’s a big controversial
issue. When petitions are circulated, you know it’s a big issue. So again there you could be running into
issues of what happens if the government requires something that you think is unethical. These are new
issues, so far we haven’t really run into these issues but times change so this might become an issue.
TS: There are a lot of laws about import/export of different kinds of software, right?
MV: Right, so what happened was that the government would classify some piece of research; it
doesn’t even have to be government funded. I could write something: how to make a nuclear bomb in the
bathtub, and the government could say "this is classified." If it’s classified, then I can’t publish it. Now
the thing with export is that it’s a very sneaky way of restricting speech, not by saying it’s classified, but
by saying "you cannot export it." And then to take it even further, to say that when you write the referee
report on a paper submitted from Cuba, you are violating export law. I think that someone really ought to
sue the government, and this ought to go to the Supreme Court. Most of the scientific societies don't want
to piss off the government. I don’t know what will happen. The problem is, we need someone who is
willing to [do it]. I think these guys in the government really went way out of line on this one, to say that
the referee report is export of data. It is some government agency, I don’t even know how high it goes, to
make this ruling.
TS: This kind of gets back to what you were talking about at the beginning, when you were talking about hierarchy being like sort of the ultimate structure, because there's nobody… somebody in another office,
the patent office, or whatever office, doesn’t know that this one person has passed an obscure law about
the referee’s report, there’s nobody with sort of an overall vision. It’s some obscure agency, I can’t even
remember which agency that is dealing with enforcing this data export, maybe the commerce department.
I don’t think that Bush woke up some morning and said okay, let’s not send referee’s reports to Cuba
anymore. He could have said, "let’s be tough on embargoed countries," and then there was a chain of
people who said let’s be tougher on… and eventually one bureaucrat says, oh I think that means we
should not send them referee’s reports. But it’s just like anything else: we have mechanisms of fighting
it, you can go to congress, you can fight it in the courts, we'll see what happens. But, so what I'm saying is that so far the only legal issues that have come up are issues of IP, but I could see other issues coming up now, and it's a different world.
Our interview ended with a brief discussion about our research and the question of where it would be
published.
MV: Where do you publish these things, where can you publish it?
CK: Well, there’s a journal called Social Studies of Science, which would probably be an appropriate
place, another one called Science, Technology and Human Values. Maybe in one of the anthropology
of…
MV: The Social Studies of Science, is this the strong school or the weak school?
CK: [Laughter] Well, actually both; the distinction has weakened since then [laughter]. The journal Social Studies of Science was the place where the strong and weak debates were carried out…
MV: So how is the Edinburgh School doing?
CK: The Edinburgh School seems to be doing okay, I think there's only one or two people left. A man named David Bloor died recently…
MV: Oh yeah, he was one of the strongest, the strongest of the strong…
CK: Yes, well the debates never really reached much resolution, like these debates do, they sort of
petered out.
MV: Yes, history will have to judge them [laughter].
Commentary on Moshe Vardi