>> Stewart Tansley: Well, good morning, gentlemen. I'm pleased to introduce, if
you don't know, Professor Seth Goldstein, who talked earlier on brain in the bottle,
and now he's going to talk about some other of his work; systems
nanotechnology is the bigger picture, I think.
He prefers to call it realizing claytronics. Without further ado, I'll hand it over to Seth
and we'll get going. Thank you.
>> Seth Copen Goldstein: Thank you, Stewart. Feel free to ask questions. This
is a project that has a lot of people involved, I'm sort of representing a group. We
started this project about four years ago, and there was a sort of you know,
looking for a market need for the research. We were thinking about the desire for
people to communicate with each other and the fact that we're often not in the
same room, as is true now, I gather.
So from a computer science point of view, we can think about the telephone as
being sort of an abstract communication device, where it senses some physical
phenomenon in the case of a telephone it senses sound waves, encodes them
as a string of ones and zeros, sends it over the net, and then there's some
actuator that takes that string of ones and zeros and turns it into a
simulacrum of the physical phenomenon. So it's not the exact same sound
waves, but it's a representation of the sound waves that fools the listener into
actually thinking they're talking to the person.
So the telephone has been around for a long time and it's sort of a bummer
because it really hasn't changed much in a hundred years. I mean, despite some
recent stuff in video conferencing and such, it's basically the same device that
it's always been. And that seems like a crime because basically the telephone is
just a computer, right?
So we were thinking about how we could improve the telephone, and basically
from the abstract point of view, I mean the telephone takes voice to voice, it has
an -- its actuators are speakers, that's something that's easy to do. Televideo
essentially takes photons and sounds and uses a monitor and speakers to make
a representation of the input phenomena.
And if we want to take that next step, we need to do what we call telepario:
capture not just the sound waves and the photons but also, let's say, a 3D model of
whatever it is that is being captured, and now we need a new kind of output
device that can recreate the physical phenomena it captured.
So it's not cloning, it's not the exact same thing, but it's something that has
physical form and can change dynamically, so you could have a conversation with
someone and not know whether they're there or they're somewhere else
and they're being reproduced.
We call this material claytronics. That's what this talk is about. To give you an
idea because that was sort of abstract, I have here a video that was put together
for us by the Entertainment Technology Center at CMU, and none of this is real,
it's all CGI, but just to give you an idea of what's going on, those spheres which
have been shrinking down to a small size, each one is one unit of claytronics. I'll
sometimes refer to that as a claytronic atom.
And so you can imagine you have lots and lots of these things and they're in
this -- set into this table, so there's this sort of whole pile of them in the table, and
these people are sitting around a table designing a car, but instead of drawing it
on a screen looking at 3D representations, they actually have this claytronic stuff
so they can touch it and feel it and move it around.
And every one of these units has a computer in it; it's running computations, so
the constraints that you would put into the CAD program are essentially running
on the claytronics. So you'll see, for instance, when they change the shape of the
trunk or the window, everything else moves in proportion. It's not just like
Play-Doh, it's actually computational material or what we call programmable
matter.
In addition to having physical form and being able to sense its environment and
change shape, it can also change color. So that's how we could reproduce
the fact that you might, you know, not be in the same room but appear to be
in the same room.
So this is the sort of the broad picture and this to me is sort of one of the near
term applications of claytronics. Okay?
So that's what I'm going to talk about. In short, claytronics is one instance of the
broad class of materials of programmable matter, it's a programmable material, it
can actuate, it can sense the environment and most importantly it can under
the -- by running a program it can change its shape or form.
And there's other instances of programmable matter as well which I'll talk about
in a minute.
So just to go over one more time sort of the high level long term goal: capture a 3D
object, encode it as a 3D model, transmit it over the wire -- all that stuff is stuff that
other people have worked on or is working already -- and the idea here is to
recreate it. In the near term the individual units might be large, and so you get
this coarse grained representation, just the way video screens used to have really
big pixels, and then over time the units will get smaller and finer
grained and you'll get a better, higher fidelity resolution.
So like any good research project, you should know when you can declare
victory, and I like to think the victory dance here will be done when we can
sort of pass the Turing test for appearance. So when you can't tell whether I'm here
or back in Pittsburgh, because the units are fine grained enough and there are enough of
them and I have sufficient physical form that I can exert forces here instead of
there. Okay?
So as I said, claytronics is one instance of programmable matter. You know, you
could think of modular robots as another instance of this. I mean, it's just very
coarse grained pixels or voxels, so to speak. You know, down at the nanoscale
people have been designing molecules that can be influenced by their
environment to change their shape, change their electrical properties, change
their optical properties and all these things share the property that sort of the
whole is greater than the sum of the parts. The individual units are quite useless
and meaningless, but when you put them together, you get something very
interesting. Okay.
So that's the sort of demand pull side. As far as the technology push side, it
sounds very fantastic, but I think this isn't a matter of if we get there, it's a matter
of when we get there, and so I'm agnostic about what we will build programmable
materials out of in the long range. But in the near term we can think of using
photolithography, I mean these are essentially computers.
And you know, Moore's law is pretty amazing, so amazing that I think it's hard to
metabolize what it really means, so I like to think of Moore's law in pictures, you
know, 1972 with (inaudible), in the late '90s, the PlayStation 2, you know, equal
power, equal amount of performance, you know, one of them you could plug
into the wall. If you go back further, the Apollo 11, the Furby. I mean this
is pretty amazing.
If we go back further in time to like the ENIAC and then, you know,
today you can go into a stationery store and buy these little musical greeting
cards -- this one is falling apart -- so the amazing thing about this is it's not
just -- it doesn't just have more computational power than this room full of
hardware, which had a mean time to failure of two minutes and cost many millions
of dollars and required a power substation and all that kind of stuff, but this also
has a sensor, it knows when I opened up the card, it has an actuator, a speaker,
it has a self contained power source. In a sense, it's a disposable computer and,
you know, the paper costs as much as the processor, practically, in this kind of
situation.
So, you know we keep moving down this road and these individual units if we
can somehow use photolithography to make them ought to be inexpensive and
have plenty of room on them to put as much processing power and
communication stuff as you would want.
So our goal right now anyway is to think about how to harness sort of the
monolithic manufacturing process that you get from photolithography to create
programmable matter. At least this kind of programmable matter, claytronics.
And the sort of driving mantra of everything we do in our group is about scaling. So
it's scaling both up in numbers, that's really the software challenge. If I'm going to
be represented -- you know, if I'm going to be composed of millions and millions of
units, and there have to be millions and millions of them because they are all
very tiny, to represent all of the minute details, I have to come up with
programming models and methods that will allow me to control, you know, this
sort of massively distributed system. And not only is it a massively distributed
system, but it has to work in the uncertainty of the environment.
So that's the software challenge scaling up in numbers. And then of course
there's the hardware challenge of reducing the size of this essentially a robot
down to submillimeter dimensions.
So I'm going to talk about both of those things. I'm going to start by talking about
the hardware challenge, just so that I can hopefully convince you that this is a
when, and when is actually fairly soon, so that we can all think about what I
think is really the fundamentally harder challenge, and that is the software
challenge.
So I want to use Moore's law. And the immediate sort of gotcha that you think
about is the fact that we need to create 3D objects, right? Essentially we
can think of them as spheres so that they can move around in 3D, but
photolithography is a 2D printing process; it creates 2D objects, dies.
So when I initially sort of posed this challenge a few years ago there was a
researcher at AFRL, Rob Reed, who came up with a proposed solution. And he
said let's start off with silicon on insulator wafers. These are sort of standard
wafers you can buy for any standard lithography process.
The thing about silicon-on-insulator wafers is there's a very thin layer of silicon
and then a layer of insulator, which is like silicon dioxide, for instance, and then
the rest of the wafer.
And so the idea here would be to print on top of this insulator a die, but instead of
making it a square or rectangular die, let's make it in the shape of this flower.
Okay? And if we do the right thing here, we should have enough area on this for
our processor and everything else.
And then we'll do a process where we lift the die off of the bottom insulator,
and you can engineer the stress of those layers so it will curl into a ball. Actually,
MEMS -- micro-electro-mechanical systems -- devices in general tend
to either have to fight the stress that's always there or harness the stress, so we
just want to harness the stress.
And so this is a great picture so in fact he actually made one, and that's a really
old picture, I should update that. But instead I can actually show you a movie
about recent progress so you can see that looks a lot nicer than the one we saw
on the previous page, but this is also sitting on a board where we can put charge
on these various wires, these dark lines, and use electrostatic forces
to move the sphere around.
So just that we're clear about what I'm claiming here, this is just a demonstration
of the manufacturing process. There's no transistors on this thing, this is just
pure silicon essentially. And so it's being moved around with an external source.
And so of course the challenge is to get all this external field generation and the
processor and everything on to that, on to that dye.
But the basic idea of creating a 3D object out of this 2D process is possible.
So I should -- this is more recent work. We started off the project by thinking
about how to make our lives easier. Let's not be at the millimeter scale, let's
think about what we could do at a slightly larger scale and also let's not worry
about going in 3D, let's just work on the plane and set out some of our principles.
So one of the things that is important, I think, to succeeding here is to make
each of the units as simple as possible. I know these don't look that simple, but
in this case the idea was to avoid all moving parts. So I want to make it as simple
as possible, and for that sphere I'm not going to have any moving parts, and yet I
want to be able to move the units.
And so this is an example of essentially a robot -- the ensemble itself, let's call that
one robot -- that moves, yet doesn't have any moving parts.
Okay? So the way these work is these coils, of course, are essentially
electromagnets running around the surface, and when a unit wants to move
around another unit, they communicate with each other and polarize their
magnets in the right direction, and of course also communicate with the other robots
to sort of hold on to each other so that you have a bigger mass over here, and
they can start moving together.
So the idea here is sort of what we call the ensemble principle: that we can
sacrifice some functionality of the individual because that functionality can be
made up by the group. Okay? And that way we can hopefully simplify the
individual units, so that we can throw away sort of the expensive stuff, the stuff
that's likely to fail or be hard to manufacture, and replace that with essentially
software. So you make the software problem harder.
So this is a little video of those robots performing the first sort of shape
transformation, if you want to sort of stretch your imagination a little bit, right?
They go from a line to a triangle to a line. And they do that by communicating
with each other and turning their magnets on the right polarities moving around.
So that was work that's now maybe a year or so old. We're pushing harder on
the submillimeter scale robots now. We're also building larger robots, but we're
pushing hard on this, and the idea here is to again simplify the problem. So
instead of trying to build a sphere directly that can move around in 3D, we want
to verify that we can use electrostatics to do all the things we want to do, and we'll
do it in a cylinder instead. So these are pictures of some more recent devices
that we fabricated, and essentially these started out as a rectangle of silicon with
the right stresses, and so it curls up into a cylinder, and we're in the process of
taping out something that will have enough computation and actuators on it so
that it will be able to either roll clockwise or counterclockwise, depending on what
the frequency of the signal is that it hears.
And this is just the same thing as the previous one where it's externally actuated.
So we want to take all that electronics and put it inside.
Let me just talk about this one mechanism that I've mentioned a few times,
electrostatics, and show you how this one mechanism can actually solve almost all of our
problems as far as sensing and actuation go. The computation still has to
happen on a processor, but I think this one mechanism solves our need to
have things adhere to each other -- because it's going to be made up of millions
of individual units that have to be able to stick together -- to be able to move around each
other, to communicate with each other, and also to transfer power, to make sure
everyone has sufficient energy to carry out their tasks. Okay?
So imagine I have two units, this blue unit and this pink unit, and they're next to
each other, and we're going to design them such that at the surface there are
essentially metal plates. These metal plates are right on the surface and they're
covered by dielectric, some kind of insulator. And we'll arrange it so that the
processor, using switches, can put arbitrary charge on any one of those
plates. Okay? So the way it's going to carry out its various tasks is by
distributing charge in some kind of smart manner. Okay?
So if I just look at two of these plates here from each one of these, I deposit
oppositely charged particles on them, then they're going to stick together. So
that's pretty easy, right? I rub a balloon, I put it on the wall, it sticks to the wall.
As you shrink down in size and the surface area to volume ratio gets bigger and
bigger, you can actually get significant forces here that hold these together. I
mean these are the forces that hold us together, so it should work.
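To make that scaling argument concrete, here is a minimal back-of-envelope sketch in Python of the parallel-plate force versus the weight of a unit. All of the numbers -- plate area, voltage, gap, unit size -- are assumed for illustration, not the project's figures.

import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def plate_force(area_m2, volts, gap_m, eps_r=3.0):
    # Attractive force between oppositely charged parallel plates:
    # F = eps * A * V^2 / (2 * d^2), the ideal parallel-plate approximation.
    return eps_r * EPS0 * area_m2 * volts**2 / (2.0 * gap_m**2)

def sphere_weight(radius_m, density_kg_m3=1000.0):
    # Weight of a solid sphere; density of water as a worst case.
    return density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3 * 9.81

# Assumed numbers: a 1 mm diameter unit with a plate covering about 10%
# of its cross-section, 10 V across a 1 micron dielectric gap.
r = 0.5e-3
F = plate_force(0.1 * math.pi * r**2, volts=10.0, gap_m=1e-6)
W = sphere_weight(r)
print(f"plate force / unit weight ~ {F / W:.0f}x")   # roughly 20x with these numbers

Because the force scales with surface area and the weight with volume, the ratio only improves as the units shrink, which is the point being made above.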
If I want to rotate these, then I just have to play with this charge so for instance
instead of just trying to stick together, I could make these charges opposite in
polarity and these charges the same and then that would cause the blue one and
the pink one to rotate together towards the top of the page.
So it's the same underlying hardware being used for both adhesion and actuation.
So communication is maybe a little bit less obvious, but if we put an AC signal
across these two blue plates -- essentially, these two plates are a capacitor,
right? It's just two plates separated by dielectric. So any kind of charge I put on
one side is going to be mirrored on the other side.
So if I essentially put an AC signal on one side, it's going to be mirrored on the
other side, so I can use that to do both communication, I can just modulate the
frequency or amplitude of that, and I can also do it to do power transfer. So, you
know, this AC signal causes the load to be dropped on that resistor. I mean, it
wouldn't be a resistor it would be a much more complicated circuit.
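To make the coupling idea concrete, here is a small, illustrative Python estimate of the capacitance two facing plates would form and the current an AC drive could push across it; the plate size, gap, frequency, and voltage are all assumed values, not the project's.

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coupling_capacitance(area_m2, gap_m, eps_r=3.0):
    # Two facing plates separated by a thin dielectric form a capacitor.
    return eps_r * EPS0 * area_m2 / gap_m

def peak_coupled_current(cap_F, freq_hz, v_peak):
    # i(t) = C dv/dt, so a sine drive gives I_peak = 2*pi*f*C*V_peak.
    return 2.0 * math.pi * freq_hz * cap_F * v_peak

# Assumed numbers: 100 x 100 micron plates, a 1 micron gap, 10 MHz at 5 V.
C = coupling_capacitance((100e-6) ** 2, 1e-6)
I = peak_coupled_current(C, 10e6, 5.0)
print(f"coupling capacitance ~ {C * 1e12:.2f} pF")
print(f"peak coupled current ~ {I * 1e6:.0f} uA")
# Data can ride the same plates by keying the carrier's frequency or
# amplitude; power rides it by rectifying whatever couples across.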
But the idea is that the center of these units is filled with a supercapacitor that
holds a significant amount of charge, and as things run out of charge they send
requests to be sent packets of power and their neighbors send them power.
In the example that I started with, where you had that stuff in the conference table,
you can imagine the surfaces of that box are pumping power in by these AC
waves, and then the units near the surface are sending that throughout the entire
ensemble so everybody gets power.
>>: (Inaudible).
>> Seth Copen Goldstein: It's very inefficient and almost all of our heat will come
from this, but it's efficient enough and the supercapacitors are big enough that we
don't have to spend all of our time sending power around.
>>: (Inaudible).
>> Seth Copen Goldstein: No. The other thing is that you only need to power
the units -- you only need to really send significant power to the units on the
surface. The ones inside don't really have to do very much. They just have to be
there for routing, essentially.
And so it's not like you have to make sure everybody's getting power throughout
the entire ensemble.
The nice thing about -- I mean assuming you're not like operating in wood or
something is when you put the charge on the plates you can leave it there and
they'll stick together without having to do anything active.
Okay. So just as a sanity check to make sure that this is basically possible, we
can sort of do basic evaluation of how much area we have on this unit so that
would tell us how many transistors we can get and how much it's going to weigh,
and you can just do some basic calculations to figure out what you get.
So for instance we'd have enough area in a 90 nanometer process, yeah, in a 90
nanometer process, to get, you know, something like an ARM7 and about
256K of memory, which is a lot more processor than, you know, this industry
started out with.
And that's a 90 nanometer process. If we go to a 65 or a 45 nanometer process,
we have more than we need in some sense.
If we calculate the kinds of forces we can generate, if we put sort of metal plates
along the surface for that, for doing that capacitive coupling, we should be able to
move it about five body lengths a second, which is pretty impressive for a robot,
right? I mean, that's 30 feet a second for me is five body lengths, right?
And we can use this for communication and essentially for the power distribution if
we do this thing right. It takes about a microsecond to fill up the supercapacitor.
If you start -- if there's a full one talking to an empty one, it takes about a
microsecond. So you have plenty of time to do other stuff. And the supercapacitor
itself would hold enough energy to execute about 200 million
instructions or to move about two million body lengths.
So this is very much in the realm of practicality. I mean it's just a matter of doing
it in some sense.
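Here is a sanity check in the same back-of-envelope spirit; every parameter here (energy density, energy per instruction, fill fraction) is an assumption for illustration, not a measured value from the project.

import math

radius_m = 0.5e-3                                # ~1 mm diameter unit
density  = 1000.0                                # worst case: water, as above
mass_kg  = density * (4.0 / 3.0) * math.pi * radius_m ** 3

# Assume the storage core reaches roughly 1e4 J/kg (supercapacitor-like)
# and takes up about half of the unit's mass.
energy_J     = 0.5 * mass_kg * 1e4
energy_per_i = 1e-11                             # ~10 pJ per instruction, assumed

print(f"unit mass            ~ {mass_kg * 1e6:.2f} mg")
print(f"stored energy        ~ {energy_J * 1e3:.1f} mJ")
print(f"instructions/charge  ~ {energy_J / energy_per_i:.1e}")
# With these assumptions the answer lands in the hundreds of millions of
# instructions, the same order of magnitude as the figure quoted above.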
>>: (Inaudible) for that (inaudible). I mean the speed of --
>> Seth Copen Goldstein: Yes, the group velocity can be bigger in some sense,
right?
>>: If you have the (inaudible) then you will have something will have to move
faster than 500 (inaudible).
>> Seth Copen Goldstein: Okay. So that's pulling yourself up around against
gravity, so you can actually do something we call collective actuation. You can
imagine if you put let's say six of these spheres in a three by two row and you
rotate the outer two, right, then you're going to get this leverage effect to get
more force, but also at some point you'll get less force but faster movement. And
so we can do things like that to get things to move faster.
In that video, I imagine a lot of that force is actually coming from the fingers being
directed. So -- and so maybe that video is 2012, so, what, I have four
years; we'll get back to that. Are there any --
>>: (Inaudible) Since you've done these calculations, these tend to be very light,
very light right now.
>> Seth Copen Goldstein: Actually we assume a density of water, because the
supercapacitor itself is not -- it's actually lighter than water, but just as a worst
case we assume water.
>>: And then how would that (inaudible).
>> Seth Copen Goldstein: Actually, it wouldn't look like this. After we would sort
of finish making this sphere, we would polymerize it and it would essentially be
like a glass bead. So it wouldn't feel like skin, that's for sure. Although, you
know, our sense of touch has less to do with what we're touching than the
vibrations that it makes when we move across it.
>>: (Inaudible) consistency when you have to be very careful how you touch.
>> Seth Copen Goldstein: Oh, so our calculations show that if we give -- not at a
millimeter, but at something a little bit smaller than a millimeter, we could
actually -- it's like -- it would appear to be like a credit card. So it would have that
Young's modulus. So it would be pretty stiff. It would be good.
So you couldn't make a hammer out of it probably -- well, you could make a
crappy hammer out of it. Any other questions about the hardware?
Okay. So that was the easy part. Now comes the hard part. So there are two broad
areas that the software challenges fall into. The one on the
bottom, programming the unit, is a general problem that all robots have, right? I
mean, just programming robots to do the right thing is hard. There's no question
about it. But lots of people are tackling that, and so we're not really focused on
that. I mean, we do have to program individual units, but our research
isn't really pushing in this direction. What we're really focused on is programming
the ensemble. We've got thousands or millions of units, and how do we get them
to act like a coordinated whole? And that's one of the main sort of scientific
challenges of the project.
So there's two ways that we can approach this, and one very, at least deceptively,
attractive method is to try and do things that are called emergent behavior,
where you have a program for an individual unit, you have a whole bunch of units
that work together, and you get this amazing effect, you know, birds flocking,
ants going back to the food and finding the shortest path around obstacles, I
mean all kinds of things from biology.
And this is an example of how that emergent behavior might work. On the left
here is this one statement program in a language that we're developing called
Meld, which you could sort of think of as a descendant of Datalog. It looks a lot
like Prolog. Essentially you have some facts that you want to prove and then the
means to prove them and unlike in a purely logic programming language, some
facts if you prove them have side effects.
So for instance this fact that we're trying to prove is actually a system primitive
where if you prove move around X, Y, P, it moves the unit at X -- the unit that X
stands for -- around unit Y, to this point. Okay? So if you can ever satisfy all these
requirements, X moves around Y to the point. And that -- and the reason why it's like
this is they can't move on their own, right, they have to coordinate with their
neighbors. So X moves around Y to someplace.
So what does this program say, basically? It says first of all X and Y have to be
neighbors to prove that. Well, if you're going to move X around Y, they'd better
be neighbors, so that seems pretty reasonable. And then we have these other
facts that are actually -- you could think of them as underlying facts the system provides
you with that have to do with your sensors. So if you remember Prolog, if you
have something that's an unknown, it gets filled in essentially when it gets proven.
So here we say that the brightness level at the unit X is N and the brightness
level at unit Y is M. Okay? So we essentially just did readings from some photo
sensors we had. And then we want to make sure the point is vacant. In other
words, the point that's next to Y is vacant, there's no one there. Okay? So if all
of these things are true -- X is next to Y and there's nobody sitting at the point, and the
brightness level at X is less than the brightness level at Y -- then we will have
proved this fact and X will move to the point. Okay. This is a very simple program.
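Since the slide itself isn't reproduced here, this is a rough Python paraphrase of that one rule; it is not Meld syntax, and helpers such as neighbors(), brightness(), vacant(), and move_around() are hypothetical stand-ins for the sensor facts and the system primitive described above.

def phototropic_step(x, ensemble):
    # Try to prove moveAround(X, Y, P) for some neighbor Y and vacant point P.
    for y in ensemble.neighbors(x):                  # neighbor(X, Y)
        n = ensemble.brightness(x)                   # brightness(X, N)
        m = ensemble.brightness(y)                   # brightness(Y, M)
        if n < m:                                    # X is dimmer than its neighbor
            for p in ensemble.points_around(y):
                if ensemble.vacant(p):               # vacant(P)
                    ensemble.move_around(x, y, p)    # the side-effecting primitive
                    return True
    return False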
>>: (Inaudible).
>> Seth Copen Goldstein: We're reading some photo sensor. So we're sensing
the environment. Yes, this is just some example.
So the idea here was to make this ensemble phototropic, okay? So here's a
light bulb, here's our ensemble, and the ensemble moves to the light bulb. So
you'll notice a couple of things about this program. And these are the things that
make this sort of emergent behavior type program so attractive.
It didn't say anything about obstacle avoidance, it doesn't say anything about
trying to fit through holes, and yet -- by the way, sorry, these are two views, right,
from the back and the front -- the ensemble goes through this hole and
avoids all these obstacles and makes its way to the light bulb. So these
programs are very, very robust.
It's also nice and short. I mean it's a very short program. It doesn't take up a lot of
resources. It doesn't even say which point around Y to move to. It could be that X
and Y are positioned like this, and it's brighter over there, and then X moves away
from the light; it just picks a random point. It's just that, you know, over time, generally
dimmer units are moving to brighter areas. Okay? It also doesn't say anything about keeping
things connected. You notice we lose some units here.
If we want to make sure that they also stay connected, this takes like about four
lines instead of one. I can't fit that on the slide. But this is, you know, one of the
things about emergent behavior type programs, these programs that use
stochastic properties is you have to be willing to sacrifice some individuals. So
you're never going to use this behavior to like balance your bank accounts, even
if you had a million bank accounts.
I don't think this is the right approach. People have been thinking about
emergent behavior -- in some sense you could think that cellular automata
are also an example of emergent behavior, right? You have these small little rule
sets, you put some pixels down like in the Game of Life, and you get these cool
things that come out. But we still don't have any general methods for proving what
the ensemble effect is from the individual programs. People are trying hard --
lots of smart people have been trying very hard for a long time -- and it
seems daunting to me.
So what are the attributes of the right kind of programming approach? You want it
to be an ensemble level thing. This isn't really ensemble level thinking -- you're sort
of writing the program for one unit and you're seeing what happens. Ensemble level
thinking is what I think we need to do.
One positive attribute of this program is that it's very small, so it's concise. And
it's certainly scaleable; no matter how many units I have, this is going to work
well. But this program is not amenable to proof. I don't know how to prove that the
entire ensemble is going to move to the light based upon this one line program,
but I think that's a necessary thing.
When we're talking about programs that are distributed across thousands or
millions of units, I think we want to be able to try and prove something about
them. So by necessity, given the state of our automated proof tools, you know,
theorem provers and such, we need to make these programs as short and
concise as possible.
And finally we would like to make them something that this thing does have:
robust to uncertainty -- you know, environmental uncertainty, failures, defects, et
cetera. We've got millions of units; inevitably, we're going to have lots of failures.
Okay?
At a very high level, and also when I'm trying to pat myself on the back, I guess, I
feel like what we're trying to do here is come up with some kind of
thermodynamics of computing. Okay? So thermodynamics has this really great
property that it lets you think about ensembles as ensembles, so it -- it
embodies ensemble level thinking.
If I have a box of gas and I want to double its temperature, I do not think about
the gas particles and how I tweak their velocities; I just want to say I want to
take this box and I want to halve the volume. And I know that by controlling this
aggregate behavior, the behavior of the entire ensemble, I'm going to double the
temperature of the ensemble. And that's the same way I would like to think about
programming thousands or millions of processors: have sort of ensemble
level knobs and have the compiler figure out how to translate that into the programs
that run on each of the individual units. Just like when I halve the volume of that
box, all of a sudden the velocities start doubling because they're bouncing
against each other more, there's more energy in the box.
So, you know, this idea of ensemble level thinking has been around in, you know,
physics and chemistry and thermodynamics for a long time; they deal with 10
to the 23 elements, so, you know, if you don't do
ensemble level thinking you can't reason about it at all. Traditional computer
science is focused on one unit, you know, one robot, one thread. I mean, we
have trouble writing programs for four cores or, you know, 16 processors, and so
this idea of trying to get an ensemble effect, to control ensembles with some kind of
ensemble level description, is sort of the grand goal of the software part of this
project.
Okay. So I've sort of described, in effect, the attributes I think the programs should
have, and of course that means we need to work on compiler technology that
takes these sort of ensemble level descriptions and compiles them to the
individual units and manages the message flow and where state is stored and all
that kind of stuff.
As soon as the programmer has to start worrying about that kind of stuff, you're
hosed. And then I think, also, we want to somehow, in the algorithms,
harness the fact that it's a distributed problem with lots and lots of actors instead
of fighting it.
So if we think about this program here, it's using the fact there's lots of these
elements. If there were just two elements here, you wouldn't make any progress to
the light -- you know, it would take forever, maybe, if it did work, right?
We're using the fact that there's lots and lots of (inaudible). Okay.
So let me give you something concrete. I'm going to talk about a programming
language that we've been talking about for a couple of years now, and it's very
different than the sort of imperative approach that, you know, where everything is
focused on the unit. Okay? So just to start off -- I've
already sort of described Meld to you a little bit: we have these facts that we want
to prove and the means by which we prove them. And this is a tiny little
program that gets three units to move to some point, okay?
So it's not thousands of units and it doesn't really scale. But basically the idea, for
instance, is we prove some fact that says that unit S has some distance D
from the destination, right? So S is at point P and we calculate the Euclidean
distance D. So that's very simple.
And then we have two units, S and T, and if they're neighbors and S has some
distance and T has some distance and S's distance is greater than T's distance,
then we say that S is further away from the destination than T, and so then we
can move S around T to some point U if S is further than T and S is further than
U and T and U aren't the same. Okay? So it's a very simple program that gets
these three -- actually it can be more than three -- units to walk towards the
destination.
And that takes about four pages of C++ code. So what's being hidden under the
covers here? Well, the various facts that we've proven are distributed amongst the
units; they're not sitting in one central database, you know. The distance that S
has and the distance that T has are sitting on two different units. Somehow S
and T have to communicate that, so they've got to figure out what's going on.
So all of that message traffic has to be done. When S does move around T, then
S has a new position; we've got to delete the old distance fact that S had and then
reestablish its new distance. So all of that management of state and message
distribution is handled sort of under the covers. Okay?
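As a rough paraphrase of the rule logic just described (again in Python, not Meld; the position, neighbor, and move_around arguments are assumed data structures, not the real runtime):

import math

def try_step(s, destination, position, neighbors, points_around, vacant, move_around):
    # distance(S, D): Euclidean distance from S's point to the destination.
    d_s = math.dist(position[s], destination)
    for t in neighbors[s]:
        # farther(S, T): S and T are neighbors and S is farther from the goal.
        if d_s <= math.dist(position[t], destination):
            continue
        for u in points_around(t):
            # Move S around its closer neighbor T to a vacant point U
            # that is itself closer to the destination than S is.
            if u != position[t] and vacant(u) and d_s > math.dist(u, destination):
                move_around(s, t, u)     # system primitive with a side effect
                return True
    return False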
So the way to think about this is that each of these units has their own database
with their own facts and that as the program evolves we prove new facts and
delete facts that aren't true anymore. You know, A has facts like it has neighbors
B and C, and it's at some point. And if I move A, then I have to delete the fact
that B has neighbor A and that A has neighbor B and that A was at some
point, because it's now at some new point. Okay. So all that is managed by the
compiler.
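A minimal sketch of that per-unit fact database idea; the class and the fact encoding here are illustrative, not the actual Meld runtime.

class FactStore:
    def __init__(self, unit_id):
        self.unit = unit_id
        self.facts = set()          # e.g. ("neighbor", "B") or ("at", (3, 1))

    def assert_fact(self, fact):
        self.facts.add(fact)

    def retract(self, fact):
        self.facts.discard(fact)

def move_unit(a, new_point, stores, positions, neighbors):
    # When A moves, retract the facts that stopped being true and
    # re-derive the ones that replace them, on A and on its old neighbors.
    stores[a].retract(("at", positions[a]))
    for b in neighbors[a]:
        stores[a].retract(("neighbor", b))
        stores[b].retract(("neighbor", a))   # this retraction travels as a message
    positions[a] = new_point
    stores[a].assert_fact(("at", new_point))
    # fresh neighbor facts get derived once adjacency is sensed again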
The way it works is basically fairly straightforward right now -- our compiler is still pretty
naive. The way things are broken up is that we run the rules on one of these units --
we could arbitrarily pick any of them, but generally it's the first unit -- and
then we create versions of these rules that run on the
remote units; they prove their piece of it, and they sort of transmit it back to
the unit that the rule is running on. Okay?
And so that's how we manage to distribute the state, handling deletion and side
effects is a talk in and of itself. So I'm happy to talk about that offline.
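A simplified, runnable illustration of that splitting: a rule like farther(S, T) needs facts from both units, so T proves its piece and ships it to S, which owns the rule. The plain list standing in for the network, and all of the names, are assumptions for illustration.

messages_to_s = []

def run_on_t(t_id, t_distance):
    # Runs on unit T: its contribution is just its own distance fact.
    messages_to_s.append(("partial_farther", t_id, t_distance))

def run_on_s(s_id, s_distance):
    # Runs on unit S: join the local fact with whatever the remote units sent.
    proved = []
    for tag, t_id, t_distance in messages_to_s:
        if tag == "partial_farther" and s_distance > t_distance:
            proved.append(("farther", s_id, t_id))
    return proved

run_on_t("T", 2.0)
print(run_on_s("S", 5.0))   # -> [('farther', 'S', 'T')]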
Okay. So let me give you sort of the high level picture of how this works and why
I'm pleased to be talking about it. So one thing is, obviously, the programs are all
a lot smaller. We haven't written any really big programs, but, you know, in
these sample programs, they're usually about 20 times
shorter. They're short enough that you can actually think about them; you know,
they fit on one page of paper.
And also amazingly enough, the most important metric, which is sort of how
much message traffic there is that's running between these units, basically the
Meld programs do as well as the C++ programs. Okay? In fact, they do at least
as well as the C++ programs. So this should actually not be possible, right? If
you're the programmer, C++ gives you total control, you know, it's an imperative
language and you ought to do the right thing; there's no reason why Meld should
ever send fewer messages.
So what's the explanation for that? Well, there's two possible explanations.
Particularly for the morph program, which is the most complicated of these three
and essentially is a program that will change something from one shape to
another -- which is sort of the basic requirement for claytronics -- you'll see that
significantly fewer messages were sent by Meld than by C++. So explanation
number one is the person who wrote the Meld program is just smarter than the
person who wrote the C++ program. That's likely to be true, because I wrote the
C++ program and my graduate student wrote the Meld program.
So that's only part of the explanation though. I think the main reason is that the
Meld program is very, very small and inherently parallel. As a matter of fact you
have to work to make it serial, whereas the C++ program it's a lot of pages of
code and it was just easier to attack it serially because I was writing -- I was
figuring out what every unit was doing, and I wanted to avoid the sort of normal
concurrency mistakes and so as a result, it does things a little bit slower, it just
takes longer to change from one shape to another. There's more steps
performed, and so the Meld program does better.
Did you have a question?
Okay. So we've been using Meld to write lots of different applications. This is
just a video showing one of the things the ensemble has to do when you boot it
up: every unit has to figure out where it is. Given the fact that you have noisy
sensors and maybe defective units, you need to do that in some fairly robust way,
and you'd also like it to scale to large ensembles. And so this is just something
where we started off with this cube; they don't really know where they are, we just
project in space where they think they are. And so everyone sort of
starts off randomly, and then units will try and join up with their neighbors and do
a rigid body alignment. And so this is what happens here.
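A minimal, noise-free sketch of the localization idea: pick a seed and flood coordinates outward from it using each unit's sensed offsets to its neighbors. The real system also has to reconcile noisy, conflicting estimates via the rigid body alignment just mentioned; this only shows the propagation structure, and the data layout is an assumption.

from collections import deque

def localize(seed, relative_offset, neighbors):
    # relative_offset[(a, b)] is b's offset as sensed from a.
    coord = {seed: (0, 0, 0)}
    queue = deque([seed])
    while queue:
        a = queue.popleft()
        ax, ay, az = coord[a]
        for b in neighbors[a]:
            if b not in coord:
                dx, dy, dz = relative_offset[(a, b)]
                coord[b] = (ax + dx, ay + dy, az + dz)
                queue.append(b)
    return coord

# Tiny example: three units in a line along x.
nbrs = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
offs = {("A", "B"): (1, 0, 0), ("B", "A"): (-1, 0, 0),
        ("B", "C"): (1, 0, 0), ("C", "B"): (-1, 0, 0)}
print(localize("A", offs, nbrs))   # A at the origin, B and C along x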
Okay. And so we've got programs for figuring out where you are, for shape
morphing, for deciding on where you want to go. We have debugging tools --
debugging these things can be hard, but they actually have turned into a
language in and of themselves. In other words, when you find bugs in these
programs, it's generally not that there's a bug in the
program on one unit, but rather that the relationship between the state on different
units is in error. So how do you figure that out? Well, what you'd like to do is
write some predicate that says, you know, if A is one and B is two, then C ought
to be three, but it's evaluated over the entire ensemble, and if ever that predicate is
false -- think about it as like a distributed assertion -- then you can stop and debug
your program.
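A small sketch of such a distributed assertion: a predicate over the state of more than one unit, checked across the whole ensemble, that stops the run when it becomes false. The state layout here (plain dictionaries) is an assumption for illustration.

def check_distributed_assertion(state, predicate):
    # state maps a unit id to that unit's local facts; the predicate sees two units.
    for a in state:
        for b in state:
            if a != b and not predicate(a, state[a], b, state[b]):
                raise AssertionError(f"assertion failed between {a} and {b}")

# Example: neighbor links must be symmetric across the ensemble.
def symmetric_links(a, sa, b, sb):
    return (b in sa["neighbors"]) == (a in sb["neighbors"])

state = {
    "A": {"neighbors": {"B"}},
    "B": {"neighbors": {"A", "C"}},
    "C": {"neighbors": set()},      # C forgot about B: a cross-unit bug
}
check_distributed_assertion(state, symmetric_links)   # raises for B and C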
It turns out you can also use that very efficiently for writing programs. So I want
to talk about shape planning for just a couple of minutes, because that's one of
the basic things we want to do with the ensemble. And so basically we're looking
at stochastic approaches. And as I said, in addition to building these small sort of
microrobotic units, we're also building modular robots at scales from like three or
four centimeters on a side up to actually two meters on a side. And all -- like almost
every, I shouldn't say almost, like every modular robotic system I've ever seen,
the units tend to be nonholonomic, in other words, you might have two
configurations that are next to each other in the configuration graph, but to get
there you've got to go all the way around, like sort of like a clock is the best
example, you know. Between 12 and 12:01, you know, that's just one tick on the
clock, but if you want to go from 12:01 to 12, you've got to go all the way around.
So that's a nonholonomic system.
So when you have a nonholonomic system and you want to do motion planning,
it can be very difficult, because you think you can get to that configuration easily
but you can't. And so coming up with metrics to evaluate what path you ought to
take is hard.
So basically these nonholonomic constraints make life difficult. There's also
other constraints like global constraints. Like if G wasn't here E could not move
to F because that would disconnect the ensemble. That's like a global constraint.
Global constraints are actually good. They might take a while to evaluate, but
they restrict the space of possible plans that you can use. And so we have
developed a way that we can take any modular robotic system and turn it
into a holonomic system, essentially by creating meta modules: we put a
bunch of modules together and they act as if they were one.
And so this is an example of, you know, our little magnetic catoms moving
around; we can't necessarily move this guy over to here in one step because it's
blocked, you know, you have to make lots of steps.
But if we were thinking about each one of those units being a meta module
composed of lots of modules, then we could. We could move this guy directly
from here to here. What we have is some prewritten plans that basically say if
you want to move a unit down to this spot, then it's going to take all of its units
and fill this other unit up -- so this goes from being an empty unit to a full unit -- and then
it spits the unit out.
So this is one step in the configuration graph. And I don't have to move any
other units. And so using Meld, we were able to write essentially a one-page
planner that works for all the modular robotic systems. Okay?
So the reason why I sort of go through this is because the very neat thing about
this Meld program is that we can prove that it's correct. Okay? And this is the
thing that particularly excites me about this.
So we can prove that it's complete: if there is a plan that gets you from, you
know, from shape A to shape B, we will find it. Okay? It might not be the
most optimal plan -- in fact it's almost definitely not, because it's a stochastic planning
process. Yes?
>>: (Inaudible) how this plan is represented and this global plan, how is it
(inaudible).
>> Seth Copen Goldstein: Yes, I haven't really told you anything about the
planner. So I'll discuss this, then I can finish this. There's two parts to what I
mean by provably correct: one is that it's complete -- if there is a plan, it
will be found -- and the second one is that it's sound, in the sense that it will never
disconnect the ensemble. The fact that we can prove that about this is to me a
positive indication that we're going to make progress in being able to write
programs -- massively distributed programs -- that are provably correct.
And the proof right now is still some -- it's not completely automatic, because
the translation from Meld to Twelf, which is a language for doing formal proofs by
computer, happens partially by hand. But we essentially translate this and then
Twelf essentially proves the fact. Yes?
>>: (Inaudible)
>> Seth Copen Goldstein: Good point. But right now we just prove that the
algorithm works if there are no failures. But the proof system will enable us to
deal with things like node failures and other things as well.
So we will be able to actually prove whether -- how uncertain -- I believe we'll be
able to prove how uncertainty tolerant it is. We haven't been able to do that
yet. This is the first step.
The fact that we can prove this at all is great as far as I'm concerned. The way
the planner works is basically that the target shape is distributed -- in this particular
case, everybody has a copy, not of where they should be, but of
what the target shape is. They don't know what their current shape is,
but they know what their current location is. And basically if you're next to a
space that should be in the target shape and it's vacant, you ask people to send
you resources. And then you spit those resources out into the target shape.
It does, but it doesn't need to, actually, only the ones that are adjacent to the
target shape need to actually have the target shape, and they would propagate it.
But right now just to make it easier, everyone has a copy of it.
The ones that aren't adjacent to the target shape, they don't use it at all, ever.
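A greatly simplified sketch of that planner step: a vacant cell of the target shape that touches the current shape gets filled from a spare module somewhere else. The grid, the resource routing, and the meta-module mechanics are abstracted away, and the connectivity guarantee the real planner proves is ignored here.

import random

def neighbors_of(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def planner_step(occupied, target, rng=random):
    # One stochastic step: fill one vacant target cell adjacent to the shape.
    frontier = [c for c in target if c not in occupied
                and any(n in occupied for n in neighbors_of(c))]
    spares = [c for c in occupied if c not in target]
    if not frontier or not spares:
        return False                       # nothing left to fill, or nothing to move
    occupied.remove(rng.choice(spares))    # stands in for "send me resources"
    occupied.add(rng.choice(frontier))     # spit the resource into the target
    return True

occupied = {(0, 0), (1, 0), (2, 0)}        # current shape: a row
target = {(0, 0), (0, 1), (0, 2)}          # target shape: a column
while planner_step(occupied, target):
    pass
print(occupied == target)                  # True once the shape is reached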
>>: (Inaudible).
>> Seth Copen Goldstein: Yes, that's right. So what we do is we pick one unit to be the
seed unit. So they run a program to give themselves a consistent
coordinate system. That consistent coordinate system, as you said, is just
internally consistent. Somebody's got to be told they are 0,0,0. And so you just
pick somebody. It doesn't really matter. You could use some external reference
for that. And so that seems like a second order detail to me.
You could have some special catom or some special place in the
environment. Okay. So I haven't really talked about applications of this, but this
is a list of some of the things that we're thinking about, like a 3D fax machine,
which sort of doesn't require moving or motion, you know, all the way down to,
you know, not being there to appearing to be there, right? There's a large
collection of applications, but I think that, you know, in some sense this has less
to do with the applications of claytronics than understanding ensembles and
scaling, and forming a basis -- the way I like to think about it is forming a basis
for understanding how we control systems with really a lot of
particles. Because a million particles just isn't that many.
But you know things in the future like systems nanotechnology and stuff.
So at this point I'll entertain more questions. I'm done.
>> Stewart Tansley: Thank you, Seth.
(Applause).
>>: So you skipped over the details of the truth maintenance system. What sort
of -- at a high level (inaudible) going on there -- do you have to avoid or accommodate
the issues of things moving and changing too fast to keep up with the updates
and the like?
>> Seth Copen Goldstein: They can't actually move -- well, if the system is
working, they can't move faster than the updates, because they're only going to
move when they have a proof for that update. And then the deletion -- so
communication happens faster than -- I mean, if you have a proof
that you can move and the system is working right, your neighbors shouldn't
have proofs that they can move. So they could only start moving once you
have -- the fact that you aren't where you were is sort of propagated to the
people that need to know that. Does that make sense?
So there is a very -- I don't know what you -- about truth systems, but there is a
very interesting thing that we're wrestling with right now and that is the difference
between belief and truth. So for instance, take that coordinate system: everyone
figures out what their globally consistent coordinate system is, and it turned out
that I happened to be the seed module at 0,0,0, okay? And then I move. So now
there's this big question. If I was 0,0,0, I'm still 0,0,0, and so I didn't actually move,
everybody else moved with respect to me.
I mean, there's this notion of what I believe to be true and what's true, and so for
instance also, if I have my valid coordinate system and there's no unit over here,
no one has the knowledge of what that coordinate is. I could sort of infer it. So
some of the research that we're working on right now is how you take belief
that should be universally true and turn it into truth, so that once I have
this consistent coordinate system and I move, my coordinate actually changes. I
don't drag everybody else around with me, so to speak. Does that make any
sense at all?
>>: It sounds like it would be better not to have one unit linked to the 00,
(inaudible) but to simply have a belief for every unit where it should be in some
abstract (inaudible) and that belief should actually be really represented as belief
in multiple hypotheses.
>> Seth Copen Goldstein: The question is how do you initially seed the right
hypothesis. So we start out no one knows where they are.
>>: (Inaudible). Randomly. They have.
>> Seth Copen Goldstein: Exactly. So what you need to do is be able to
conclude that you have enough knowledge, that everyone believes the same
coordinate system exists and then turn that into truth.
And the hard part is that transition. Anyway, I believe that that's the hard part.
As I said, this is work we're doing right now.
>>: So one way you could -- every node is trying to figure out where it is in the
global plan, and if -- as long as it has uncertainty, you still have to keep on
sending messages around. Once there is no uncertainty, then that can only
happen if they're all in the right spot?
>> Seth Copen Goldstein: So the planner -- I would like to divorce this notion of
belief and truth from the planner, because that statement didn't really hold true.
The planner works in a way such that you only have to have belief for it to work;
you don't actually need the truth.
But this notion of, as long as there's uncertainty you have to keep proving things,
and when the uncertainty disappears you want to turn this into truth, is the right
one. For the planner itself, you don't need to actually globally understand
your position in the plan; the plan is stochastic and random.
>>: (Inaudible) not -- it's something that's not sitting on these units or it's sitting
on these units.
>> Seth Copen Goldstein: It is.
>>: Each one of these units?
>> Seth Copen Goldstein: Each one of these units is saying what should I do,
should I move left, should I move right, you know, oh, that guy over there asked
for some resources, I don't have anything else to do, I'll go there. Someone else
might think they want to go there, too. So we have to coordinate.
>>: (Inaudible) reputation uncertainties on.
>> Seth Copen Goldstein: No.
>>: Or no?
>> Seth Copen Goldstein: Well, we don't have it as a first class object. We have
the notion of I could move, possibly, if it's okay, and someone else has that same
thing, and then we would negotiate and then say okay, I'm not going to move. So
there is a sense in which we have facts that indicate uncertainty, but there's no
first class sort of, you know, I believe this with X percent.
>>: So this to me is like all the literature (inaudible).
>> Seth Copen Goldstein: I think that's true.
>>: But that's exactly the problem --
>> Seth Copen Goldstein: That's the -- so there's multiple levels to this problem,
too. There's one thing in which we all believe with 100 percent certainty some
fact, and then we can turn that into a truth that's also true. There's other things
where I can say I had 80 percent belief.
And you know, if we all have 80 percent belief, maybe we could think about
turning them to truth and we might be in error, so there's different levels of this.
It's an interesting research problem which is just exactly what we're trying to do.
>>: (Inaudible) global trick that makes things work a little faster, and that is the
belief you're sending is not a belief of what -- it's not about your state, it's about
the messages that are coming from your neighbors. And that's being transformed to
send over. So your state is defined by what everybody tells you to do, but what
you submit is not what you are but what --
>> Seth Copen Goldstein: They believe.
>>: So you're making one step further, one step faster.
>> Seth Copen Goldstein: Yeah. I don't know how we would integrate this. I
mean, I think that it is useful and we need to explore it, but we have more basic
problems to tackle first at this point, I think. Yes?
>>: Do you have a feel for how this (inaudible) approach scales to environments
where the atoms don't entirely control where they are, (inaudible) forces
stronger than you and they are going to be (inaudible) affects -- the ground shifts a
little bit, or if someone tries to shake your hand and they push a little too hard?
>> Seth Copen Goldstein: No, I don't know. I mean these are really hard
problems I think.
>>: (Inaudible) but it seems at that point you have nodes that may believe things
that are simply wrong and --
>> Seth Copen Goldstein: I would hope that if that were true, they would very quickly --
based on trying to prove other things, and other nodes trying to prove things with
the wrong belief -- be corrected and told that, you know, there's issues
here.
You can also imagine that I talk about each one of these programs as being its
own program, you know, you're going to be running localization as well as a
planner, as well as some sensors and that hopefully there would be some quick
feedback loops that would keep you honest, so to speak.
>>: (Inaudible).
>> Seth Copen Goldstein: I mean, if you really could understand -- if we
could make some progress on taking simple rules and figuring out what the
ensemble effect is, that is a more attractive way to program these things, and I think it
will naturally be more robust.
Hopefully going top down will shed some more understanding on what it
means to go bottom up. Maybe there's some subclass of things or some
subclass of expressiveness in which you can prove those things. But that's a
very long range kind of thing. Okay. Thank you.
>> Stewart Tansley: Thank you very much.
(Applause)