>> Peter Lee: So let me say hi to everyone here and watching over the web. It's really kind of
cool and fun for me to have a chance to introduce Scott Draves, also known as Spot. So Scott
and I go way back because I was trying to think back and I believe Scott was my fourth PhD
student, but the sixth one to finish. Take that as you may, and he finished his PhD at Carnegie
Mellon in 1997. It was sort of a strange matchup because I was doing fairly theoretical work in
compilers and programming languages. Scott, by the time he started his PhD work, was already
working with Andy Witkin in computer graphics. There was sort of a linkup because Scott
had some very, very interesting ideas that have now become mainstream in computer graphics
in using compiler techniques to optimize some graphics codes and oftentimes at runtime, and
so it was a very interesting research thesis and a PhD dissertation on that. And then there were
also, of course, important contributions just in algorithms, and the most famous one being on
Fractal Flames, which is a fairly hairy thing. I was noticing there is a pretty good Wikipedia page
that has, I think, the simplest account I've seen, but it's a little bit complicated because it's a
fractal concept involving nonlinear functions instead of affine transformations, so mathematically
it's a little bit dense. But what you get out of it is absolute beauty, and so then that leads to Scott
as an artist and actually a fairly important artist working in electronic media today. For some
time Scott had gone astray and was working at a company called Google. [laughter]. Luckily
he's seen the light and joined a firm called Two Sigma, and he has been doing some interesting
open source work in visualization technology, and apparently there'll be some public announcements
about some things being put out within a few days. But the other side then, besides Fractal
Flames and contributions to computer graphics is a contribution to electronic art and mostly
notably something called Electric Sheep. I'm just really thrilled that in our kind of new Studio
99 that we are going to have for a period of time the privilege, the great privilege of hosting
Electric Sheep. I won't steal any thunder from Spot because I'm assuming you'll tell us a little
bit about what it's all about, but it is really cool beyond belief and if you think back, especially
back to the mid-1990s when some of these concepts were really unknown in the computing
world, I think both the longevity and impact of this has been significant. And I think it's a very
good thing that Electric Sheep has really inspired a lot of people, not just in major art
institutions around the world and arts competitions, but also numerous books, science books
and so on, many other pieces of graphic design, industrial design have been influenced by
Fractal Flames and by Electric Sheep. Lastly, I think many people already know this, but Spot
also has a connection to Microsoft because, and I know this sounds completely impossible, but
Spot is Rich Draves's brother. [laughter]. And so it's…
>> Scott Draves: And no relation to Amy Draves [laughter].
>> Peter Lee: Right. No relation to Amy Draves. So anyway, with that let me welcome Spot to
Microsoft Research and Electric Sheep to Microsoft Research. Take it away. Thanks.
[applause].
>> Scott Draves: Thank you Peter. Thank you very much for the kind introduction. And thank
you all for your attention here and for your time and thank you for the virtual audience too. I'm
going to back up this slide here. This is the title page. I think most of the slides I'm going to
show are art images instead of text. I'm going to try to show you more of a direct experience of
the work rather than a traditional presentation. The thing I'm going to be
talking about today is really infinite animation, and so the Electric Sheep is an infinite animation,
so first I want to tell you how it was made, why it was made, its history, where it's coming from,
why I do it. And, in particular, I want to also look at the definition of what an
infinite animation is. It's very easy, you can imagine, just to generate a lot of random
numbers and show them on the screen. It's easy to be infinite, so I don't mean infinite in that
sort of trivial sense. I mean something a little deeper, and we will get to that in the talk. As far
as sort of the content, the topics we're going to cover will be things like distributed
computation, because the Electric Sheep actually runs all over the internet; there are
thousands of computers involved. And collective intelligence, because, you know, it's not just
computers but people, so there is sort of a crowdsource component. There's the Flame
algorithm which Peter mentioned which is just sort of the actual rendering part, the thing that
draws pictures. There's an open source component, which is, you know, not only are
we taking interaction from the audience, but we're actually taking code from them.
That's an important part of the process. There are genetic algorithms, so there's an
evolutionary component and a genetic component and there's a mathematical component
which is, when I'm talking about the Flame algorithm, I'll talk about
the equation behind it and the algorithm that solves it. Really, all of those are just different
components or different techniques with one goal in mind which is the creation of an artificial
life form. How many people here have heard of artificial intelligence? Everybody. How many
people have heard of artificial life? Geez, everybody. This is an unusual audience. Usually it's
about five or ten percent, but just like artificial intelligence is trying to create intelligence
through software, artificial life is trying to create life through software, life in digital form.
Whereas most people have a pretty easy idea of understanding like artificial intelligence and
talking to a computer and what intelligence is and what a computer might do that's intelligent;
life is sort of a tougher sell. People think of life as being sort of very squishy. How could a
computer, how could that involve chemistry? How could it involve, you know, the softer,
messier aspects of life? So when I say artificial life, I don't mean duplicating
the wetness of life, but I do mean capturing the essence of life, some more
abstract properties, which begs the question: what is life? And so I'm going to
address that. Finally, this isn't just a software creation; the Electric Sheep isn't just a software
life form. Like I said, there's a human component as well. There's all the users and the
interactions and the programmers and stuff like that and so there's really together computers
and the people they form what I call a cyborg mind and that's because, you know, there's a
combination of man and machine. There are lots of ways of combining
these ingredients, and so what I'm going to address is, you know, the relationship between
man and machine and what that is going to be going forward. This image
here that I've been showing is just sort of a title slate. Does anyone recognize it? The artwork, of
course, is the Electric Sheep, or it's actually a high resolution special edition version of the
Electric Sheep. You may have, if you just go to the website before coming to this talk or maybe
you know of the Electric Sheep, there is a free version that lots of people are running. This is
made from that but is better than that in some ways. The location, does anyone recognize it?
>>: It's not the [indiscernible], is it?
>> Scott Draves: That's close.
>>: [indiscernible].
>> Scott Draves: You overheard me say it?
>>: Yes.
>> Scott Draves: Yeah, so it's another New York museum. This is the main atrium of MoMA,
the Museum of Modern Art several years ago. Before that happened, there was this image.
This image was really a key and one that really got the ball rolling. The name is Frame Number
149 and it was made in about, I think, 1991; I made it in Japan. It was the
first image I made with this Flame algorithm that was really different from the related work that
had been going on before. I had been sort of playing with these equations for years and getting
various results. This was the first time I felt like I had something. And this image in particular
sticks out because a couple of years after I made it, I came back and showed
it to Andy Witkin, who was my advisor at that time, and he was like, this is cool.
You should send it to this art competition called the Prix Ars Electronica, which he had actually just
won the year before. I was like, okay. Sure. I didn't really think of myself as an artist. I had
been sort of doing computer graphics for many years. Actually, going back to my childhood
what got me into programming was doing computer graphics and sort of making little
interactive visual spirally toys, you can imagine, Spirograph and things like that, in code at
home, alone in the dark, trying to entertain myself, trying to figure out how I could program
the computer to do something which I didn't really expect. That had gotten me into programming,
but of course I followed a respectable career path and studied in college and in grad school,
and I imagined myself going on to be a professor or a scientist or something like
that. I didn't think of myself as an artist, but Andy said, you should send it to this art
competition, because this is art. And so I did. This was back in the day; I actually had
to get a camera and point it at the screen and take pictures and send 35mm slides in the
mail to Austria, where they have support from the government to do art. It turns out I got a
prize as well, and suddenly I realized, you know, this isn't just something that I enjoy and that
maybe I can secretly show my friends, but that it's actually a thing. It's computer art and I'm an
artist and I should do more of it. Here was another image from that same series. This one is
number 191. I'm not going to remember all of the numbers for all of the art I'm going to show
you, especially as the years go by they have gotten more and more digits and at some point I
sort of gave up. Sometimes they are recorded and sometimes they are not and I lose track.
This one, if you look at it, was actually made with a more primitive design process than what I
use now, but the reason I picked it, the reason I liked it, is that it's an homage
to Duchamp's painting Nude Descending a Staircase. I don't know if you
can see sort of like a form of a woman there sort of spiraling like limbs in motion, stuff like that.
But that's the kind of aesthetic or kind of thing that I'm trying to go for. It's not just, not just
something to look at, but something to think about. That's what I'm thinking about when I look
at it. Of course, most people don't know that. They might think their own thoughts while
they're looking at it. But what's important, or what is sort of different or interesting about
these images and this algorithm is sort of just the style. It's not made with normal computer
graphics. These are not lines and polygons. This isn't the sort of CAD/CAM metaphor that was
the birthplace of computer graphics. That was something I had been doing by day. Those were
the tools I used all the time, so this was sort of my escape or attempt to go beyond that.
Really, it's, instead of the hard sort of mechanical straight lines, this is softness and curves and
instead of sort of black-and-white, this is subtlety and haziness and mistiness. It took a while,
but as soon as the web showed up I released this code as open source, in probably 1994, right
after Netscape was downloadable. I said this is cool. I want to make a webpage. What am I
going to put on it? I'll put these pictures on it and I wanted to put not just the pictures, but I
wanted to put the code up there too so that other people could make their own pictures,
because for me it was really about the style and this process and sort of this interaction instead
of the particular images. But I wanted to engage people at a deeper level, and of course, just
like there's the 1 percent rule, most people just look at the pictures. That's fine, but a few of
them can go deeper. So the code was sort of floating around the net. It got put into things like
GIMP. There were command-line versions. It got put into Photoshop and After Effects. I
think this image was sort of, as the years went by I was working on the code and improving it in
various ways, and because it was open source, other people made their own improvements and
sent stuff back to me. I'm going to get into the actual mathematical detail of the
algorithm a little later, but it was very interesting that, so this image was maybe around 2000,
so quite a bit later than the first ones. Here you go. Here's another example. One thing
about this algorithm that's really different: the good thing about it is that the images
look good, they have this sort of subtlety, but the bad thing is they are very expensive to
make. It takes about an hour per frame to render on
a regular PC, so those first images I showed you were just still frames. These images now were
frames from animation, so computers had become so much faster I was able to do animation.
Here's another example. Let me get into sort of the definitions or how they were made and the
mathematical part. And let me also just say these equations are from a paper I wrote and so if
you want to, the last official sort of peer-reviewed sort of technical publication that I wrote on
the Electric Sheep is called The Evolution and Collective Intelligence of the Electric Sheep, which
was published in a book called The Art of Artificial Evolution, 2007 by Springer Verlag and the
editor was Penousal Machado, if I remember correctly. There were a couple of other earlier
versions of the paper at conferences, but that was sort of the book version. That was
2007, so the system has evolved a lot since then, but it was a good snapshot, and most
of the basic details and techniques are unchanged. Certainly these equations are still
the fundamental, the basis for the rendering algorithm. Let me just go through it. The way
these Electric Sheep images are made, with the Flame algorithm, is that the images are recursive
pictures, and what that means is a recursive picture is a picture that is made up out of itself.
And that's exactly what this equation says. If S is a subset of the plane, a subset of R², that's a
picture, an image, and S is defined to be the union of several transformed copies
of itself: S = T₁(S) ∪ T₂(S) ∪ … ∪ Tₙ(S). The T sub i are functions from R² to R², so those are
maps that transform the image plane. Simple examples of functions that transform the plane are
affine maps, like scale, rotate, translate. Those are the 2-D functions that everybody knows. But
the Flame algorithm generalized that, went beyond that. What we started doing is, these
functions are, for example, an affine transform, a 2 x 3 matrix applied to x and y,
but then I'm also going to apply another little function here called V sub j and
take a dot product. Here are examples of the V sub j's. I call them variations. That's why that's
a V. They are just nonlinear transformations of the plane. The space of functions from the
plane to the plane is infinite-dimensional. Most of it is useless and insane, so I'm
only scratching the surface here, and what these are is a few sort of hand-picked blobs of
algebra that do sort of visually interesting and useful things to images. When I first developed
the Flame algorithm I had about 10 of them, and I released the code. And it's pretty
easy to extend the code, add another couple of lines of
algebra to it, and then see how it looks. After I released it as open source other people started
doing that and trying their own things out and some of them were cool and those were sort of
incorporated back into our version, the official version. These do various things; a lot of
times I'm treating the point as a complex number, so this is essentially
taking the inverse of it. This one sort of folds the whole plane into a square. These are
just twisting and bending and ultimately cutting, so the functions don't have to be continuous.
They can have discontinuities. There are pretty much no constraints on them. That's the
definition of a recursive picture. Now just to give you some sort of intuition about how that
works, here are two transformations of a square, so here are two linear functions. This is, you
know, just rotation and this is scaling, and if you union them together you get this sort of
funny tangram shape, but then the idea here is that you iterate and if you apply that
operator again, you apply those same two transformations to this shape, you end up with two
copies of that and then there are four squares. And if you apply it again you end up with this
shape and if you apply it an infinite number of times you end up with a fractal. This was pretty
much the state of the art when I started working with these equations. I think this is called an
iterated function system. They were, I don't know, popularized by this guy Barnsley. He made
an image of a fern, actually not too different from this spiral, that was really
popular. But it's really lacking: it's just a set, so it's binary membership. You are either in it or
you are out of it, so there's no color, there's no shading, and the shapes themselves are pretty
simple. Just to give you an example of this: if you use the standard drawing algorithm
for IFS you end up with something like this, so when I said this was the state of the art, it would
be things that look like this. But the Flame algorithm produces things that look like this. So
where does it get the extra color and the shape and stuff like that? How about this? How do
you, if you look at this, how is this a recursive picture? You can sort of see some repeated
elements like okay. There is this little squiggly thing, okay. And then we've got more squiggles
over here, so those are probably transformations. You can imagine there is some function that
is mapping one of them to the other, but if you look at the image as a whole, it's pretty hard to
sort of reverse it and think of how it sort of came from this process. Let me tell you how the
Flame algorithm does it. So basically it solves the equation with a pseudo-particle system and I
call it a pseudo-particle system because it's simpler than a particle system. The particles don't
have state and they don't really interact with each other. You basically generate them and
forget about them. So the Flame algorithm takes the IFS, which I already showed
you, and then adds the color, the shading, and the nonlinear functions. The color comes
from adding a third dimension to the iteration: instead of a function
from R² to R², we just say okay, R³, add another coordinate. But instead of showing it in 3-D,
which you could do, when you draw the particle you take that third coordinate and
use it as an index into a palette. The idea of a
palette is really important, because previously some people had done the idea of,
you know, coloring according to some equation, like okay, the denser the
particles, the bluer the particle, or something like that. There's some formula to give
you the color, but when you do your color that way, you end up with a sort of rainbow
mishmash; you end up with what looks like a scientific paper
instead of art, and, you know, for art you need really total freedom in picking your colors. You
need a palette, and so I used a whole other paper and a whole other process to
derive palettes from photographs of the natural world, like landscapes or people, or famous
paintings. I would take a scan of a van Gogh and algorithmically extract the palette from it and
then use those colors as the lookup table for rendering Flames. That's the color, and then the
shading is derived from essentially a high-dynamic-range-like process. So it's really just the
density of particles, and some people had sort of done this a little bit before. The idea
is just basically when you draw the particles, you are really just accumulating brightness. Every
particle is adding a little bit of luminosity. The trick is to do it in a nonlinear fashion. If you just
do a histogram, just add brightness, you end up with a few really bright points and a lot of sort
of murky dark stuff that you can't see, and if you turn up their brightness so that you can see
the dark stuff, well, then the rest of the image is blown out. The same problem shows up in
photography. If I were doing this research today, I would probably just snap my fingers and
throw an algorithm at it, but this was 1990 and HDR didn't exist, so I just had to make
something up. I did the obvious thing, which is to take the logarithm of the density, and so the
brightness is determined that way. And then there's actually, if you just do that though you still
end up with sort of a dotty effect. You end up with some noise in the dark areas. So in order
to get something really smooth, to really escape from the idea of "I see the
particles, I see the digitalness behind it," you need some sort of smoothing as well. This
showed up a little bit later in the development of the algorithm, but the idea is
essentially a nonlinear, intelligent filter, a filter as in just blurring the image,
because if you've got noise and you just blur, that smooths out the noise. But of course you
don't want to just blur your whole image because then you just have a blurry image. You only
want to blur the parts of the image that don't have a lot of particles, so the size of the filter
changes in inverse proportion to the particle density. It's a standard technique now. That's
pretty much how the Flame algorithm works. It's very simple. Here's the actual pseudocode.
Boom. Generate a lot of particles, ten billion particles. When I say x equals a random function
of x, what that means is basically just take the particles and send them through the
functions. You have a bunch of functions from the plane to the plane, so if you've got a
particle on the screen, pick one of the functions at random and apply it to the particle. That gives
you a new particle; draw it on the screen and just iterate. Then accumulate, that's what I mean
about doing the histogram with the logarithm. That's how the Flame algorithm works. That's
what gives you images that look like this. Sure, question?
>>: How do you seed it? How do you start with something? Do you start with a number of
seeds or just one particle?
>> Scott Draves: Yeah, I skipped that part. I seed that with a random position.
>>: But a single position?
>> Scott Draves: A single position and then it iterates many thousands of times. There are
some caveats though: if you did that you would end up with random particles on the screen.
That would be no good. Actually, it's only a small problem because it's only a few
particles; there are billions of particles, so ten extra particles don't matter. What happens though
is because of the attractive nature, because the image sort of pulls the particles in, after 20
iterations you're in the image. So basically what it actually does is it iterates 20 times and
throws them out. It doesn't draw them. Then it starts drawing them, and then it draws, you
know, many thousands of them and then it actually throws that out and picks a new random
seed, because sometimes you get stuck in a backwater. So there are a few little caveats,
complications. There are also other things that go wrong: if the particle goes to infinity
or you divide by 0 or some other numerical problem, basically when that happens just throw it
out. Pick a new random starting seed. One thing you'll notice about images like this though is
that they are, it looks 3-D. There seems to be what I call this illusion of occlusion, like, you
know, this white thing is in front of this orange stripy thing. This white thing is in front. If I go
all the way, to where did that come from, because it's not, there is no third coordinate. It's
really strictly the 2-D algorithm and the answer is it's really just dominance, or sort of
saturation. There are a lot more of these white particles here than there are of the red particles,
and so if you draw a million white particles and a thousand red particles on a particular pixel, it
comes out white.
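[Editor's note: a minimal sketch of the chaos-game loop just described, under stated assumptions. The two transforms, their affine coefficients, and all the sizes below are invented for illustration; `spherical` and `sinusoidal` are classic Flame variation functions, and the loop shows the single random seed, the 20 warm-up iterations, the color coordinate, and the log-density histogram. A real renderer uses billions of particles, a much larger histogram, and a palette lookup on the color coordinate.]

```python
import math
import random

def spherical(x, y):
    # invert the point through the unit circle: (x, y) / r^2
    r2 = x * x + y * y + 1e-9
    return x / r2, y / r2

def sinusoidal(x, y):
    # bend the whole plane into the square [-1, 1] x [-1, 1]
    return math.sin(x), math.sin(y)

# Each transform: affine coefficients (a..f), a variation V, a color index.
# The map applied is V(a*x + b*y + c, d*x + e*y + f).
TRANSFORMS = [
    ((0.5, 0.0, 0.25, 0.0, 0.5, 0.0), spherical, 0.0),
    ((0.6, -0.3, 0.0, 0.3, 0.6, 0.2), sinusoidal, 1.0),
]

def render(n_particles=20000, size=64, seed=1):
    """Run the chaos game and return a log-density brightness grid."""
    rng = random.Random(seed)
    hist = [[0.0] * size for _ in range(size)]
    # seed with a single random position (plus a color coordinate)
    x, y, clr = rng.uniform(-1, 1), rng.uniform(-1, 1), rng.random()
    for i in range(n_particles):
        (a, b, c, d, e, f), variation, col = rng.choice(TRANSFORMS)
        x, y = variation(a * x + b * y + c, d * x + e * y + f)
        clr = 0.5 * (clr + col)       # drift toward the transform's color index
        if i < 20:
            continue                  # warm-up: don't plot until converged
        px = int((x + 2.0) / 4.0 * size)
        py = int((y + 2.0) / 4.0 * size)
        if 0 <= px < size and 0 <= py < size:
            hist[py][px] += 1.0       # accumulate the density histogram
    # tone map: brightness is the logarithm of the particle density
    return [[math.log1p(v) for v in row] for row in hist]
```

In a full renderer the color coordinate `clr` would be used per plotted particle as an index into a palette; here it is tracked but unused, just to show the third dimension of the iteration.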
>>: You could generalize this to volumes, generate a volume of these, and then render
that afterwards.
>> Scott Draves: Totally you could. That's right. If you go with four coordinates, three spatial
and one color, or as many as you want, yeah. It's just that volume rendering is a little bit harder.
These are already pretty expensive to draw. But I'll get more into that a little later. So that's
the very quick answer. The logarithm actually plays into that as well
because you might think that well a thousand and then a million, obviously you are not going to
see the thousand at all. But you do, because of the log effect, and so really the Z
coordinate is essentially coming from the density. And if you didn't want to rewrite
all the software with an extra coordinate, you could just back-compute the Z
coordinate from the density, but that requires a two-pass algorithm, because you don't
know the density until you've already generated all the particles, so you have to do it twice;
but big deal.
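[Editor's note: a toy sketch of those two ideas, the log-density tone mapping and the second pass that reads a pseudo-depth back out of the same density. The function names and the normalization to [0, 1] are my own arbitrary choices, not the renderer's actual code.]

```python
import math

def tone_map(histogram):
    """Map raw particle counts to brightness in [0, 1] via log density."""
    peak = max(max(row) for row in histogram) or 1.0
    scale = math.log1p(peak)
    return [[math.log1p(v) / scale for v in row] for row in histogram]

def pseudo_depth(histogram):
    """Second pass: treat denser (brighter) pixels as nearer to the viewer."""
    return [[1.0 - b for b in row] for row in tone_map(histogram)]
```

The log compresses the huge range between a few very bright points and the murky dark regions, which is exactly why a pixel with a thousand particles is still visible next to one with a million.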
>>: So for any given image are we seeing only one parameter set?
>> Scott Draves: That's right. This is a single parameter set, a single set of coefficients for those
matrices, a single configuration of the variation functions.
>>: And is it likely to be the case then, because it looks like we're seeing several different
patterns of things going on, is that the nature of the convergence properties, or did it
happen to be that one of the redundancies you got tended to spray stuff around the outside
and another one gave you a long stream path and another one gave you [indiscernible]
>> Scott Draves: Pretty much. In almost all cases it doesn't matter what the random seed is. It
converges onto the, you get sucked onto the image after just a few iterations because every
iteration generates another bit. After ten bits you are there. The different shapes that you see
are really just transformations of the thing and they are just highly distorted. These functions
are not linear. They go crazy and they have cuts in them so you might just be seeing part of the
whole thing. It's, it gets pretty hard to understand at some point and that's part of what I'm
actually trying to create here is something which I don't understand and something that you
don't understand and something which nobody understands. So let me just show you a few
more of these images and you can see the occlusion really clearly in this one, for example. And
here's an example where there's sort of occlusion, but it's really ambiguous; it's also
transparency and it's kind of hazy and who knows what's going on. Oh, sorry. But that's just
an example. But let me just return to that point that I was saying of like trying to sort of create
something which I don't understand. You know, it's like going back to what I was doing with
the computer at home. If you think of the computer as your slave, as just an automaton
that does what you tell it, then why would it be fun, how could it be entertaining to program it,
because how could you surprise yourself? You tell the computer to do something. Then it does
it, and then you have what you were already thinking, so that's not surprising. To me, that's not
entertaining. What I was really looking for, or what I'm still really looking for is a way of getting
something which I didn't already know that I wanted. I want to exceed my own imagination
and I want to surprise myself. So I want to create an algorithm where, you know, I get out more
than I put in, and that's where we get to sort of a multiplier factor here. I do x
amount of work and then I get maybe 10x of interest, entertainment, or time spent, you know,
watching this thing before it becomes boring, before I've seen everything it can do and maybe
move on to something else. So, you know, if that multiplier is sort of big enough,
that's where we get the infinite animation. Anyway, in order to kind of get at this idea of
the multiplier, I want to show some of my other projects, which look
completely different and use different algorithms, different techniques, just in order to make
that abstract notion more clear. Here's one of them. This is called the Fuse algorithm. It's sort
of similarly named to the Flame algorithm and also early ‘90s, developed in the early ‘90s and
this one works with an input image, so this is not abstract, right? It
doesn't come purely from mathematics. It actually comes from a photograph of my face,
in this case an image of my face and my friend's face at the same time. The way
this algorithm works is it takes the input images and breaks them up into lots of little tiles, lots of
little squares and then builds the output image by assembling them in a way that sort of makes
sense locally, but there's also randomness, so globally the image has been scrambled. I was
really inspired by William S. Burroughs's cut-up technique, where he does automatic
writing by literally cutting up the pages of a book, throwing them down randomly, and
then finding sentences or phrases in the results, sort of like throwing the I Ching. This was his
way of finding inspiration. There was also, I guess, I'm not sure if it actually came
from that, but there's also a thing in Emacs called Dissociated Press, which was a way of
generating text from input text by scrambling it; it builds sort of a Markov
model of the text and then generates a new text. I wanted to do something like that, but a
visual version of it, and I came up with this. Again, when the web came out I put it on a
webpage, you know, released the code as open source, and later on when GIMP came out I
put a GUI on it and it got widely used and showed up on album covers by rock bands and
stuff like that. And then a little later this one actually got published: eventually it
got rediscovered by, what were their names, Efros and Freeman, and they published a paper called
Image Quilting for Texture Synthesis, SIGGRAPH 2001, essentially exactly the same thing, where
basically, you take little rectangles and you find the sum-squared difference of the images in
the overlap of the rectangles, and you try to minimize that. You search
for a good tiling match. Their algorithm was actually better. They had this sort of minimum-cut
thing to reduce the visibility of the tile seams. If you look at these images you can sort of see the
tiles, because I just drew the tiles. Efros's algorithm, instead of drawing the whole
rectangle, drew a part of the rectangle; it cut out the part that minimized
the visual jump between the tiles, so that was a big improvement. After they
published it, it actually became like a whole field in, I think, computer graphics. There are
workshops on it. You can see it at SIGGRAPH, stuff like that. Not only can you sort of let the
algorithm run wild, but you can also steer it. In this case I gave it two functions to optimize: one was matching with itself, and I gave it another goal function, which was to match a target image.
So this is a self-portrait. It's just an image of my face but made out of pictures of my eyes; if you look in the hair, it's all eyes. And you can see the text, which is just a funny thing: at the bottom it says copyright 1993, which is actually not accurate, because I made the images before that, but I think that's the date when I actually put the copyright notice on. I didn't really know much about it at the time. That was my e-mail address, and it says finger me for more info. Does anyone know about that? No, nobody in the audience knows about that. [laughter].
>>: [indiscernible].
>> Scott Draves: That was before you had a home page. One more sort of past project -- sorry, I forgot to connect it to the idea. The idea here again was to try to multiply my effort. This program is really simple: it's literally 50 lines of code to do the rectangles, the sum-squared difference of pixels, the random grab, just iterate. The idea here is to
sort of multiply the interestingness: I spend a day writing this program and then I can feed it images all day long, and every one that comes out is cool and different.
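The ~50-line program just described can be sketched along these lines. This is a toy reconstruction, not the original code; the tile size, overlap width, and number of random tries are illustrative choices:

```python
import random

def ssd(a, b):
    """Sum of squared differences between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def scramble(src, tile=8, overlap=2, tries=20, seed=None):
    """Cut-up tiler: assemble the output from randomly grabbed source
    tiles, keeping the candidate whose left/top overlap strips best match
    what is already placed. `src` is a 2-D list of grayscale pixels."""
    rnd = random.Random(seed)
    h, w = len(src), len(src[0])
    out = [[0] * w for _ in range(h)]
    step = tile - overlap
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            best, best_err = None, float("inf")
            for _ in range(tries):
                sy = rnd.randrange(h - tile + 1)   # random grab from source
                sx = rnd.randrange(w - tile + 1)
                cand = [row[sx:sx + tile] for row in src[sy:sy + tile]]
                err = 0
                if x > 0:  # strip overlapping the tile to the left
                    err += sum(ssd(out[y + r][x:x + overlap],
                                   cand[r][:overlap]) for r in range(tile))
                if y > 0:  # strip overlapping the tile above
                    err += sum(ssd(out[y + r][x:x + tile],
                                   cand[r]) for r in range(overlap))
                if err < best_err:
                    best, best_err = cand, err
            for r in range(tile):
                out[y + r][x:x + tile] = best[r]
    return out
```

The Efros-Freeman improvement mentioned above would replace the whole-rectangle paste at the end with a minimum-cost seam through the overlap region.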
Some of them work better than others. It was really endless entertainment. Here's another manifestation of that idea of using the algorithm to amplify your creative output. This one is called Bomb. You can actually read the word Bomb there, kind of.
This was from a little later, the mid-'90s, and it was a visual musical instrument. By that I mean it's something like a musical instrument, something you can play, but instead of producing music in real time it produces visuals in real time.
Initially it was something I made just for my own entertainment, but then there was a crazy party at CMU called the Beaux-Arts Ball, which I believe has been shut down now for being too -- somebody probably died. I don't know what happened.
>>: I think right after you left, somebody died. [laughter].
>> Scott Draves: They started using the software and, boom, basically I discovered that you could project visual music along with music and it's like a light show, like from the ‘60s but digital. This worked. It ran on a little PC laptop; I think I had a 486. I don't
remember how many megahertz, but it was just written in C, using primarily cellular automata, which is a very well-known technique, and reaction-diffusion, which I learned from Andy Witkin. Running reaction-diffusion in real time was hard to implement: I had to do everything in integer arithmetic, and at really low resolution, 320 x 200 pixels, I could only get like ten frames per second. It wasn't strictly cellular automata; I would totally break the rules. I would do things like run the Flame algorithm to generate particles, draw those particles into the cellular automaton, and then let the energy flow off of them. Or I would run multiple cellular automata simultaneously at different scales of time and space, so maybe one cellular automaton is running slowly at low res while another one is running fast at high res. Go ahead. Question?
>>: Yes. Did you ever throw in the i algorithm? Because it seems like once every month you really freak yourself out with one of these pictures. Like if I was just dancing at a party and a dude had like eyes for hair, that's a medical incident, possibly. So what's the art of figuring out which of these to compose? I mean, is it just sort of trial and error?
>> Scott Draves: There was a lot of trial and error, yeah. In the coding it was pretty much throw everything in. Some things, of course, didn't work at all, but anything that was even remotely interesting I left in the code, because the idea was that it was an instrument you could play in real time, and so it was up to the performer to pick and choose, gauge the mood and the audience, and figure out what was going to fly. So, a little bit of a punt on that to the performer. The software was, again, released as open source on the web and was widely used, so I don't know how different people addressed it. I do know one thing I did, though, was put in what I called auto mode.
This is one thing a regular musical instrument doesn't have. If the software was unattended for more than a couple of minutes, it would start to play itself, pretty much randomly pushing buttons and switching modes, but with a reflective function: it would watch what was on the screen too and make sure there was enough going on that something interesting stayed up there. It measured the information content of the screen so that it would stay good. Basically, the idea was that at shows you're performing and then you have to go to the bathroom, so I could just walk away from the console and not worry about it. Or maybe I just got bored and wanted to go join the party. I don't know. But this was definitely, again, lots of complex systems used to generate as much content as possible from a finite program, one that could essentially generate content indefinitely using the techniques of complex systems. It got used sort of
popularly. For me it was really an art project, so I wrote a sort of theoretical treatise about the concept behind it and what I was trying to do, which was to create this artificial life form that existed both on screen, as a sort of squishy organic thing, cellular pixels squirming around, and also as living code being transmitted over the internet, with different programmers creating their mutant versions of it, the code evolving, and both of these feeding back on each other.
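Stepping back to the auto mode for a moment, its screen-watching loop could look roughly like this. This is a toy stand-in, not Bomb's actual code; the entropy measure, threshold, and button count are all assumptions:

```python
import math
import random

def screen_entropy(pixels):
    """Shannon entropy (in bits) of the pixel-value histogram -- one
    plausible stand-in for 'information content of the screen'."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def auto_mode_step(pixels, press_button, threshold=1.0, buttons=16,
                   rnd=random):
    """If the frame looks too flat (low entropy), push a random 'button'
    to switch modes, imitating the unattended auto mode."""
    if screen_entropy(pixels) < threshold:
        press_button(rnd.randrange(buttons))
        return True
    return False
```

A flat black frame has zero entropy and triggers a random mode change; a busy frame scores high and is left alone.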
>>: [indiscernible] apart.
>> Scott Draves: That's right. And it was successful. This showed at major art festivals in Europe and stuff like that. Let me return to the Electric Sheep now, where the Electric Sheep meets the Flame algorithm. The Electric Sheep started in 1999, so it was like eight years after the Redgren [phonetic] algorithm. It took a long time before it went to the next level and became network enabled. This is where the distributed computing part comes into play. The thing about that Flame algorithm is that it makes nice images but it's really slow, because it takes billions of particles to get all that nice shading, so it can't run in real time. If it's an hour per frame, my regular PC produces just one second of animation per day. I
would have to wait months to generate a decent amount of animation. In ‘99, SETI@home, the Berkeley project by David Anderson, came out and I thought, wow, that's a great idea. I want to do that; I want access to that kind of computer power, but instead of searching for the alien signal, I'm going to create my own aliens. I'm going to bring these sheep to life. This is
a still frame, just the capture of the console of the distributed renderer showing that a bunch of
frames have been rendered but there's a bunch of question marks that haven't been finished
yet. The concept is really simple. It's a screensaver: when you're not using your computer, the screensaver comes on and contacts my server, saying give me some work to do. You download the parameters, the genetic code, which is just a small XML text file, and then it goes to work rendering, cranks the math for however long it takes, and then uploads a PNG file to the server. So the server is collecting images from clients all over the world and putting them into the grid, and once the whole sequence is complete, once it has all the frames, it compresses them into an MPEG video file, and that video gets downloaded by all users and shows up on their screens as animation. Everybody is watching animation that was collectively created by everybody who's watching it. I'm talking longer than I anticipated, but because this is such a technical crowd, I want to say a few things about the distributed renderer, which is one of the more interesting points, especially the challenge of getting reliable results from it. Now, it's pretty clear that different computers run at different speeds, but especially, as I was commiserating with James, you don't know when somebody is really going to deliver any work to you. I guess any manager knows this: you can assign projects to everybody and then you only get a fraction of them back. In the screensaver world that's just because your screensaver comes on and goes to work, and then I come back to my computer, move the mouse, and I need my computer now. The client has to relinquish control immediately, so I'd get about half the jobs back. That part is pretty straightforward: well, guess what, you time them out and you send them out again. Eventually you get all the work back. That's sort of the easy part. If you're doing only one sheep, you might waste a lot of time, because when you have only five frames left almost everybody is idle. But if you're doing a whole bunch of sheep simultaneously, you can keep the pipeline flowing. The harder part comes from a couple of unexpected directions.
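Before getting to the harder part: the timeout-and-reissue bookkeeping of the easy part might be sketched roughly like this. This is an illustrative sketch, not the actual Electric Sheep server code; the names and structure are invented:

```python
import time

class FrameQueue:
    """Hand out frame jobs, time out the stragglers, and reissue them
    until every frame comes back. Names and structure are illustrative."""
    def __init__(self, frames, timeout=3600.0):
        self.timeout = timeout
        self.pending = {f: None for f in frames}   # frame -> time assigned
        self.done = {}

    def get_work(self, now=None):
        now = time.time() if now is None else now
        for frame, assigned in self.pending.items():
            # unassigned, or assigned so long ago it is presumed lost
            if assigned is None or now - assigned > self.timeout:
                self.pending[frame] = now
                return frame
        return None   # nothing to hand out right now

    def put_result(self, frame, image):
        if frame in self.pending:          # ignore duplicate/stale uploads
            del self.pending[frame]
            self.done[frame] = image

    def complete(self):
        return not self.pending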
One of them was bugs in the code. Some of the images coming back were just wrong in various ways. It turned out, for example, that the Windows random number generator was different from the Linux random number generator, and they had to agree, so we put a random number generator into the code instead of using the system library function, stuff like that. The result of bugs like that is flickering animations: different computers differ, so the thing jumps. We fixed a few bugs, but there were still jumps. It was still glitchy. I couldn't find any more bugs, so I was kind of stymied, and eventually I realized that some of the bugs were really weird. I would get back a half image: the top half of the image would be fine and the bottom half would be black. How did that happen? What kind of code could do that? I think what's going on is essentially bad RAM, or even cosmic rays. There are bits being flipped randomly in memory, especially given that this is running on the public's PCs, literally grandma's PC, which is really old and probably just plain crashes sometimes, and she just reboots it. She doesn't fix the RAM. She doesn't upgrade it. How can I fix that bug and, more generally, how can you create a reliable computer out of this unreliable mechanism? And the answer is the same as in communication: you use
redundancy. I can send the same job to several computers, get back the results, and check that they agree; then it's probably right. But to do that you have to throw away much of your compute power: with two-way redundancy your computer is half as fast. Who wants that? So I came up with what I think is a cool solution, which is specific to graphics, and actually to this renderer. I don't send out exactly the same job; I do four-way redundancy, but I shift the image by half a pixel up, down, left, and right for the four jobs. When I compare the results they're not going to be exactly the same, just a little bit different, so instead of an exact match I use a threshold. The advantage is that because the images have been shifted a little, when I average them together I get anti-aliasing. Plus, when I send out the different jobs I use different random number seeds, so the random number generator is consistent but the seeds differ, and the particles come out differently in each of the four frames. The little bit of noise left over from the randomness of where the particles landed is never completely smoothed out, however much we try to filter and smooth it, so averaging together more images smooths it out even more. So actually no work is wasted, but we get something which is
perfectly reliable. And then the final stage: when you're doing stuff at internet scale, you not only have to defend against bugs, you have to defend against attackers. We had people hacking the client to upload porn instead of rendering, and, of course, that porn would show up on everybody's screen. But that same redundancy fixed that bug as well, so it was interesting that once you build a really reliable system like that, it can defend itself in multiple ways. I'm going to skip ahead a little because I'm running out of time. So
that's sort of the rendering process, and then the design process is done using Darwinian evolution. So it taps not only into the distributed computing power of the internet but into the distributed intelligence of all the people behind the computers. While you're watching, if you choose to, you can vote on whether you like what you see; I use the Roman justice system, so people can vote up, I like it, or down, I don't. Things that are voted down die and disappear, and images that are voted up mate with each other and reproduce. That's the genetic algorithm, and the Flame algorithm's parameterization is particularly amenable to it. What really makes the whole project work year after year is the fact that they really do evolve; it comes up with new things. Here's an example of a tiny fragment of a family tree. This one is actually pretty old, I think from like 2005. Here's a larger fragment, from a more recent one; I just made this a couple of days ago.
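One generation of the vote-driven breeding just described can be sketched as a toy, with the `crossover` and `mutate` operators supplied by the caller. The real server's selection is more involved; these names and signatures are illustrative:

```python
import random

def evolve_step(population, votes, crossover, mutate, size, rnd=random):
    """Downvoted genomes die; upvoted ones mate to refill the flock.
    `votes` maps genome index -> net vote count."""
    survivors = [g for i, g in enumerate(population) if votes.get(i, 0) >= 0]
    parents = [g for i, g in enumerate(population) if votes.get(i, 0) > 0]
    if len(parents) < 2:              # not enough winners: fall back
        parents = survivors or list(population)
    flock = list(survivors)
    while len(flock) < size:
        if len(parents) >= 2:
            a, b = rnd.sample(parents, 2)
        else:
            a = b = parents[0]
        flock.append(mutate(crossover(a, b)))
    return flock[:size]
```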
>>: But [indiscernible] don't necessarily work that way. If you go back to the last one, me, for example, I happen to really like the red one in the top row, and the purple one, the fourth one over. They're both really interesting to me for their own reasons, but their child is atrocious to me. [laughter].
>> Scott Draves: Yeah.
>>: [indiscernible] [laughter].
>> Scott Draves: Real evolution works that way too. [laughter]
>>: I guess what I'm saying though is it's not clear to me that the parameter space that these
are in is edited that way.
>> Scott Draves: It's not, but on average it does push it into the right direction.
>>: It's like, if you track fitness, for example, across generations, you would see it monotonically increasing over time if you smooth it. In other words, if I take the seeds versus their descendants, say ten generations down, roughly speaking they will be more attractive than their forebears.
>> Scott Draves: Yes, but measuring attractiveness is actually really tricky. I'm measuring it by voting, basically a popularity contest. I don't get more and more votes over time; people don't get more and more excited about it. The total number of votes is constant.
>>: But the average image should have more of a positive [indiscernible]
>> Scott Draves: It turns out what you like is influenced by what you've just seen.
>>: So I might give on average half up half down no matter what I'm seeing?
>> Scott Draves: Yeah. What people really like is to see things that are new and different. They like novelty. They like excitement. So what I've actually seen in the Electric Sheep is less like the traditional optimization problem, which is climbing a mountain, and more like the fashion industry, where hemlines go up and down, except it's infinitely dimensional. It just keeps wandering around and new things happen, but it's not really a global climb in that sense.
>>: Just speaking for myself over time as I've watched Electric Sheep, my capacity to appreciate
intense complexity in sheep changes over time. I think early on I couldn't be attracted to very,
very intensely complex sheep and then, but over time it's changed.
>> Scott Draves: That's right. We are influenced by our experience, and as you look at them more, that part of your brain develops and you start to appreciate the rare weirdos for their weirdness; because you've seen so many normal, beautiful ones, the exotic jumps out at you. Lots of things going on. It's really subjective. Not that high up the mountain of aesthetics, even in the foothills, people start to disagree, but people do agree about the plain: almost nobody wants to look at a black image. Nobody wants to look at a dot. A lot of what the genetic algorithm is doing is really just avoiding the total disasters, more than picking out the highest points in the whole mountain range. Anyway, another thing that happened that was really interesting is we
don't only use the genetic algorithm, Darwinian evolution. We can do better: we also do intelligent design, and that is done with this genetic editor. It's additional software you can download where you can open up the genetic code of an Electric Sheep and see this skeleton. The triangles represent the matrices, and all these numbers are the coefficients for the variations; there's more to it than I can put on one screen here, but there's a palette and there's a preview. So you can draw one of these pictures pretty quickly if you're okay with this kind of quality. When I say it takes an hour per frame, you know, if you use fewer particles you can make it as fast as you want, but…
>>: Have you tried GPU implementations?
>> Scott Draves: There have been some GPU implementations, but they are not really that much faster, maybe 10x over a regular CPU. I haven't done one personally; it's on my vague agenda. They've been buggy and not entirely compatible, and they haven't really caught on. What's really interesting about this designer program is that I didn't write it. It wasn't my idea and I didn't want it. I think a lot of people, when they release stuff as open source, like open source because they think, oh, people are going to fix my bugs for free, or I'm going to get a job later, or something like that; basically they're going to get free labor or get something that they want. But the really great thing about open source is that you get something that you didn't want. Manipulating…
>>: [indiscernible] expect [indiscernible] didn't want.
>> Scott Draves: Certainly. And, you know, I still don't use this program. I don't want this program; it's incredibly tedious. I have no interest in dragging around these little triangles to make an image. There are people who like that, but it's bizarre and very hard to get what you want. Again, the relationship between the genome and the images is extremely unpredictable. If you have an image in your mind, coming up with the genetic code to produce it is practically impossible. So it's frustrating if that's what you want to do.
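A random-mutation helper over a flame-like genome, of the sort the editor just described could offer, might be sketched like this. The genome layout here, a list of transforms each holding an affine "matrix" and variation "weights", is a deliberate simplification of the real XML genome:

```python
import random

def mutate(genome, rate=0.1, scale=0.2, rnd=random):
    """Jitter a random fraction of the genome's numbers with small
    Gaussian noise, returning a new child genome."""
    child = []
    for xform in genome:
        child.append({
            "matrix": [c + rnd.gauss(0, scale) if rnd.random() < rate else c
                       for c in xform["matrix"]],
            "weights": [w + rnd.gauss(0, scale) if rnd.random() < rate else w
                        for w in xform["weights"]],
        })
    return child

def explore(genome, n=9, rnd=random):
    """Karl Sims-style explorer: offer n mutants of the current genome
    for the user to pick from and recurse on."""
    return [mutate(genome, rnd=rnd) for _ in range(n)]
```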
>>: Let's say [indiscernible] design the [indiscernible] idea where you can steer it a little by…
>> Scott Draves: It does. This is the classic Karl Sims mutation explorer; I think it says mutation on there. That's right. This actually appeared even before this program, in the Photoshop version: after I initially open sourced the Flame algorithm, before there was any animation, Kai's Power Tools licensed it and put it into their product. It was in Photoshop, and that version had this explorer, but it did not have this; this was not Kai's style. But you can use both simultaneously, and they mix great together. This program doesn't have the crossover operator, only mutation. Crossover is actually way cooler than mutation, but this program doesn't support it. This program has now been forked, because it was also open source, because it was based on my code. There are many versions of it now, some of them do support the crossover operator, and many artists use it to make their own work. Anyway, as far as the Electric Sheep goes, you can sort of make your
own sheep. There are human designers as well. You can download the genetic code of a sheep and edit it, or you can start from scratch; it's up to you. When you've got something you like, you upload it to the server, and then that image gets distributed-rendered and shows up on everybody's screen. People vote on it and, if it's successful, it
breeds and reproduces and evolves, and so the AI, the genetic algorithm, starts riffing off of the human designs and, of course, humans can see what the AI is doing and vice versa, so it's all just one big gene pool. An open question is: with human designs going in and evolution happening, the two are both competing and collaborating with each other, so which one is better, who is really winning this game, who is helping more? When I was talking sort of about that
multiplication factor, this was the question: people are putting designs into the sheep, and then the sheep are tweaking them and playing with them and making more stuff like that. Because I have a database of all the designs, how popular they were, and who made them, it was pretty easy to add it up and see what the answers were. This was actually published in that Art of Evolution book. The answer at that time for the Electric Sheep was that the ratio of computer-generated to human-generated was about one-to-one. It was kind of a tie, so the genetic algorithm was essentially doubling the creative work of the human designers. Since then I've massively improved the genetic algorithm, I think it's literally five times better, something like that, so maybe the ratio is now ten instead of two. I'm kind of doing that by cheating: I'm literally filtering the genetic algorithm, so instead of just sending every child out to the public, I look at hundreds of potential children, just still images, and pick some of them out. It's not very publishable in the field of genetic algorithms, because a human is not allowed to pick the winners. It's a dubious scientific contribution, but in fact it works great for the art project, and it's just another layer of man and machine going back and forth, reflecting around, becoming one giant mishmash, and I don't really know who came from what or which one from who. How did those Sneetches lines go, the star-bellies?
So this is not a really new idea; it's really an open question. I think it was Ada Lovelace, who worked with Babbage, who first asked the question: can the machine do anything new? Can a computer really be creative? Her answer was no, it only does what we tell it. Alan Turing, much later, had the opposite answer, and in his paper Computing Machinery and Intelligence he took up Lovelace's question. His metaphor was a nuclear pile: humans can put some stuff in and the computer can recombine it, but eventually it kind of peters out, so you get some energy generation, but does it go supercritical? Can it really become a self-sustaining reaction? Maybe it's just a matter of improving the algorithms and cranking that number up, and eventually it can be self-sustaining.
>>: That's the singularity.
>> Scott Draves: Exactly. He was pretty prescient, as usual. Let me just put this a little more in context as far as where this art has gone. Here's an example from SIGGRAPH: I think it was 2006 when they picked my art as the graphic identity for the conference. The theme was evolve, so it was an obvious choice. The calendar, the website, all of the collateral was Electric Sheep for that year. Here is a shot in Germany; this is public art. Does that mean it's over?
>>: [indiscernible] evolving.
>> Scott Draves: Thank you.
>>: [indiscernible] I'll get it.
>> Scott Draves: This is the diagram for generation 243. Part of what I'm trying to do is create this artificial life form, and part of being alive is that it sustains itself rather than just being a parasite, so I started making high-resolution artwork from the free screensaver. The problem with the screensaver is that it's great to have 450,000 nodes in your render engine, but it sucks to have 450,000 computers downloading videos from your server. I was really limited by bandwidth, and therefore by the quality of the video I could produce. I was also stuck trying to sell this thing as art and make the project self-sustaining, because it's a free online thing; who is going to buy that? Why should I pay for it when I can always download it, right? The solution I developed was to use the screensaver network as a factory to produce limited-edition art. I could render at arbitrary resolution, but instead of distributing it to all the users, I would just download one copy, and that's something I could put in a museum or a gallery or sell, hang on the wall. And so generation 243 is one of those pieces, and what they
are is particular collections of sheep. Another problem with the screensaver is that it's popular,
right? Popularity is good; isn't it? Well, how about pop music? Who loves that, right?
[laughter].
>>: Youth?
>> Scott Draves: So what I think is beautiful is not necessarily what is most popular in the screensaver, and so to make these pieces, I look through the thousands of things that come out of the factory, pick a small subset that I think are truly beautiful, and re-render them at higher resolution, higher quality, and especially higher resolution in time: slow motion. That's because when things are moving fast it's easy to get people's attention, like TV, but when something is truly beautiful you need time to appreciate it, and the slower something moves, the more like a painting it is. That was the market I was trying to capture. Instead of being like five seconds long, I made sheep that are a minute long. Anyway,
so this is a diagram of a graph. Each thumbnail in the graph is a sheep, is a genetic code, is an image, and the edges of the graph, the lines, are transitions between those points in the genetic space. If you interpolate between two genomes you get animation, and you can see each node of the graph has several edges coming out of it. To play the piece, you wander the graph at random, and every time you come to a node you have a choice of which way to go next. Using this framework, I can create pieces that play indefinitely and don't repeat. They're not a loop; content does come back, they have themes, and you can learn the piece if you watch it long enough, but you have to watch for a really long time until you've seen the whole thing. This one is actually not that large; there might be a hundred edges in it, I don't remember. It probably takes a couple of weeks of continuous watching to see the whole thing, until it's traversed every edge. I've made larger pieces as well, pieces with a thousand edges. Disk drives are really cheap, so you can make a big piece, and with one of those you'd have to watch for a whole year before you've seen the whole thing, so it's effectively infinite. It's not truly infinite, it's a fixed piece, but it has so many different designs in it that it can hold your attention for a really long time. This was at Burning Man. I used to go there.
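The graph playback described a moment ago is just a random walk over the sheep graph. A minimal sketch, where node labels stand in for genomes and the adjacency-list shape is an assumption:

```python
import random

def play(graph, start, steps, rnd=random):
    """Wander the sheep graph: at each node, pick one of its outgoing
    edges at random. `graph` maps a node to its list of neighbors;
    each edge stands for a rendered transition between two genomes."""
    node = start
    path = [node]
    for _ in range(steps):
        node = rnd.choice(graph[node])
        path.append(node)
    return path
```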
That kind of culture was a big inspiration for my work, this notion of radical self-expression. This is how I learned to do immersive projection. I think I was the first VJ to perform at Burning Man. I took a projector, which was of course destroyed by the dust, but it was good times while it lasted. Of course you would bring a dome or some kind of screen, like spandex, stretch the spandex, and have a screen to project your stuff, and people enjoy the art. One year I made the screen and showed the art, and that night a windstorm came and just destroyed the screen, blew it away. So then we were like, what are we going to do? And I had the realization that the playa, the surface, the desert ground of Burning Man, is completely flat and featureless and it's the ultimate screen. So we just got the projector up high enough, about ten feet off the ground, and projected down onto the ground at a place where people walked by, and people would just walk into the art; they could see it on the ground and they could see it on themselves, and it was really fun. So then I took that into the gallery and museum experience, just projecting everywhere, and stopped worrying about getting the perfect rectangle on the screen. There's stuff on the wall, there's a window, there's a door, there's a person, just whatever turns up. It's really fun. This was actually in Hungary a couple of years ago. This
was in the 21c Museum Hotel, a new hotel chain from basically the Walmart people; it's a hotel that is also a museum, so you get to live with the art. This was the grand opening, and then they bought the art too. This was in Moscow, a big party; this was from the rave days, and it was projected a lot like this. This was in a New York City club, lots of flat screens. This is at a museum in Madrid, a very traditional triptych presentation. And this is a gallery in Williamsburg at a solo show two years ago, showing a sculpture element. I wanted to go beyond just the television: you can see the computer has a custom-built metal frame, and the playback computer is connected with a big metal braided cable, so it's supposed to look like a body, the head and the body, as if it's thinking; this is the computer's dream. This is a close-up of the playback device. It's a commodity little PC board that's been thermally bonded to a big chunk of heatsink aluminum; I took the fan off, so it's completely solid-state. Question?
>>: We have just one minute left so we can get out to the reception.
>> Scott Draves: Okay. Because of the open source stuff, other people started using it. This is Paul Simon's album: some artist, without my permission or knowledge, used my software to make stuff like this, and Paul Simon, I guess his staff, probably not personally, put it on his album cover. I'm sure Paul Simon has no idea where this came from. And Stephen Hawking, his last book, The Grand Design, same thing, the cover. I can't walk into a bookstore without seeing this all over the place. If you search for superstring, a third of the results are Flames. [laughter]. So it's caused some kind of pop-science thing, and I don't think there's any relationship between the mathematics of superstrings and the Flame algorithm, but I do think there's something about the Flame algorithm and the visual system that makes it more than just a coincidence that this works. Oh, I just want to thank my employer Two Sigma. I did, in fact, go to work for Google for a while. I did this full-time for quite some time, but I was really a starving artist in New York City. Then the crash came and I had to quit and get a day job again, so I went to Google, but it sucked, and now I work for this hedge fund, which is actually very interesting, and I can tell you more about it at the reception. So that's the end of the talk. I think we're going to go outside.
>>: If everyone could just kind of move out into the atrium and then you can follow up with
questions there. [applause].