transcript - New Mexico Computer Science for All

Emergency Response
Hi, my name's Stephen Guerin, working here in Santa Fe at a company
called Redfish and another one called Simtable. We're an applied
complexity company, where we're taking ideas coming out of places like
Santa Fe Institute, UNM, Los Alamos and Sandia, looking at complex
systems and finding applications for them in the real world.
Some of the tools that we're using are things like agent-based modeling,
which we're learning in tools like NetLogo; machine learning; machine
vision; and statistics and probability. These kind of come together into ways that
people can gather around a physical interactive table to look at emergency
management issues: how will a fire spread, how will water move in
a dam break, what's the social side of an evacuation (who's going to
shelter in place, who might evacuate), and also the traffic dynamics
arising from these incidents.
Instead of traditionally presenting our results up on a screen or on a wall,
we're taking that same projector
and projecting it down onto surfaces around the room and then
making those surfaces interactive by watching the same surface with a
camera. This poses nice challenges: how do you detect where a laser
pointer is clicking on a very non-uniform surface and map that back to the
projector, or how do I detect where somebody's hand or body is?
These are all nice problems in machine vision, which I think is at the core of
computer science today in many applications, alongside simulation,
big data and analytics, as well as machine learning.
What we have up on top here, maybe out of frame as we pan over,
is a projector, just an off-the-shelf projector projecting
on a table, with a web camera and a Mac mini bolted on the back, and
that's the full extent of the computation that's going on. We're projecting
that down onto a table here, and we'll turn off the lights so you can
get a better view of it. [02:00] Basically you're seeing the camera taking a
picture of this table and reflecting these white borders back. I don't know
if you can see my hands in here, or the laser pointer. First of all, the
camera's image is upside-down, flipped, and it's a wide-angle lens, so the
image is a little bowed.
The first algorithm we're going to run takes the projector
space and converts the X and Y of the projector into a binary code.
There's a particular kind of binary code called a Gray code. We're now
going to project that Gray code for all our position
coordinates and let the camera take a photo. For every pixel
in the camera, it's learning its position in camera space and converting
that to projector space. Once the camera registers that, we're able to bring up
GIS information. Right now the table is flat and we're projecting the area
of Santa Fe here, for instance, and we also have the ability to use my
laser pointer and make the surface interactive. The first thing I'm going to
do is put it into 3D scan mode and I'm going to make some arbitrary hills.
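The Gray-code registration described a moment ago can be sketched minimally. This is an illustration of the encoding and the stripe patterns, not the production calibration code; the widths and bit counts are placeholders.

```python
def to_gray(n: int) -> int:
    """Encode an integer as its reflected binary Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Decode a reflected Gray code back to a plain integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def stripe_patterns(width: int, bits: int) -> list:
    """One on/off stripe pattern per bit plane. Projecting these frames
    in sequence and thresholding the camera image gives each camera
    pixel the Gray code of the projector column it observes; decoding
    that code yields the camera-to-projector mapping."""
    return [[(to_gray(x) >> b) & 1 for x in range(width)]
            for b in range(bits)]
```

Gray codes are preferred over plain binary here because adjacent columns differ in exactly one bit, so a thresholding error at a stripe boundary costs at most one column of registration error.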
Imagine a firefighter wanted to train on how fire behaves when it's going
through a valley or through a saddle point. We have the ability now to
project some lines on the table, a sort of greyscale stripe pattern.
Based on how the stripes move and the displacement of the camera from
the projector, there's enough information to recover the height of the
sand. So that scanning process lets us use the sand as real terrain
information in a fire.
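Here is a toy model of why the stripe displacement encodes height. It assumes projector and camera are mounted at the same height above the table, separated by a known baseline, and uses nothing but similar triangles; the real system's calibration is certainly more involved.

```python
def height_from_shift(shift: float, baseline: float, distance: float) -> float:
    """Recover surface height from the lateral stripe shift seen by the camera.

    Toy geometry: projector and camera both sit `distance` above a flat
    reference plane, separated horizontally by `baseline`. A point raised
    by h shifts its stripe on the reference plane by
        shift = baseline * h / (distance - h),
    so by similar triangles
        h = shift * distance / (baseline + shift).
    Any consistent length unit works.
    """
    return shift * distance / (baseline + shift)
```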
The other way we like to use this is loading a known topography, like in
Santa Fe. Let me turn off all these different layers for you first. We start
off with the colors [04:00] of the rainbow. We can also click on any one
location in here and fly to that position in Google Earth. Now we're
registered in GIS space. What I'm going to do now with the colors of the
rainbow is I'm going to move the sand from the low points, kind of the
red points, and using the colors of the rainbow, ROY G BIV, red, orange,
yellow, green, blue, indigo and violet, we're going to make the terrain of
Santa Fe. Take my trusty piece of wood here and get the bulk of the sand
to the east in Santa Fe in the East Mountains. This is north on the table.
I'm just going to bring the sand in here roughly and then we'll do finer
detail with the hands. So right now we're forming the ski basin. Here's
the Santa Fe watershed coming down here with the McClure and Nichols
reservoirs, Cerro Gordo. This is Hyde Park coming through here up to the
ski basin. This is Thompson Peak in the east. Part of this is called tangible
computing. It lets people interact with the real surface and actually form
the surface, with a little bit of muscle memory, since people learn in
different ways. Some people can just read a contour map, like an
expert, but others learn contours and elevation in a different way, and
being able to form the terrain with their hands has some advantage.
This is roughly Santa Fe with the mountains in the east. Now we can layer
on different pieces of information. Here I'll turn
on hill-shading: if you think of a raster or bitmap of elevations, we
can look at every patch, like in NetLogo, look
at its eight neighbors, and figure out what direction that patch is facing.
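That eight-neighbor calculation, slope and aspect from an elevation raster plus a sun-dependent shade value, might look like the following. This uses Horn's method and one common hillshade formula; exact conventions vary between GIS packages, so treat the details as an illustration.

```python
import math

def aspect_and_hillshade(elev, row, col, sun_azimuth_deg=315.0,
                         sun_altitude_deg=45.0, cell=10.0):
    """Slope and aspect of one cell from its eight neighbors (Horn's
    method), then a hillshade value in [0, 1] for a given sun position.
    `elev` is a list of lists of elevations; `cell` is the raster
    resolution (10 m for the USGS data mentioned in the talk)."""
    # 3x3 neighborhood labeled  a b c / d e f / g h i
    a, b, c = elev[row-1][col-1], elev[row-1][col], elev[row-1][col+1]
    d, f = elev[row][col-1], elev[row][col+1]
    g, h, i = elev[row+1][col-1], elev[row+1][col], elev[row+1][col+1]
    dzdx = ((c + 2*f + i) - (a + 2*d + g)) / (8 * cell)
    dzdy = ((g + 2*h + i) - (a + 2*b + c)) / (8 * cell)
    slope = math.atan(math.hypot(dzdx, dzdy))
    aspect = math.atan2(dzdy, -dzdx)              # one common GIS convention
    azimuth = math.radians(360.0 - sun_azimuth_deg + 90.0)
    altitude = math.radians(sun_altitude_deg)
    shade = (math.sin(altitude) * math.cos(slope) +
             math.cos(altitude) * math.sin(slope) * math.cos(azimuth - aspect))
    return aspect, max(0.0, shade)
```

On flat terrain the slope term vanishes and the shade reduces to the sine of the sun's altitude, which is a handy sanity check.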
We call that aspect in GIS. Then we can color or shade it based on
where the sun is. [06:00] Here, I'll move the sun to the east or the west,
and we can put a little bit more detail on here. The elevation data comes
from the USGS at a 10-meter resolution per pixel. We can now layer
on things like the roads as polylines, or the structures, which are
points; these are houses. We can also come in and inspect certain
areas, so I can ask what the fuel or vegetation type is in any one of these
pixels. It's like I'm inspecting a patch; it's a patch variable. Ultimately
we're going to have a fire model in here that wants to move uphill and
downwind; it'll be a function of the fuel type as well as the strength and
direction of the wind, which is indicated here as a single vector.
While I was inspecting the patch layer we were showing elevation, but I
can actually show the fuels layer also. Here's ponderosa pine,
pinyon-juniper, and grass and [inaudible 07:01]. Once we have this loaded,
I've got all the features necessary to light a fire and have a cellular
automaton model of how fire spreads.
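A stripped-down cellular-automaton sketch of that kind of spread is below. The weights for slope, wind, and fuel are made up for illustration; the talk doesn't give the real model's rules.

```python
import random

def spread_probability(uphill: float, downwind: float, fuel: float,
                       base: float = 0.3) -> float:
    """Chance that fire jumps to a neighboring cell. The weights are
    illustrative: uphill and downwind neighbors are favored, scaled by
    how flammable the fuel type is."""
    p = base * fuel
    p *= 1.0 + 0.5 * max(0.0, uphill)     # fire runs uphill
    p *= 1.0 + 0.5 * max(0.0, downwind)   # and with the wind
    return min(1.0, p)

def step(burning, burned, width, height, prob, rng):
    """One tick: every burning cell tries to ignite each of its eight
    neighbors with probability prob(nx, ny, dx, dy), then burns out."""
    ignited = set()
    for (x, y) in burning:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < width and 0 <= ny < height:
                    if (nx, ny) not in burned and (nx, ny) not in burning:
                        if rng.random() < prob(nx, ny, dx, dy):
                            ignited.add((nx, ny))
    return ignited, burned | burning
```

With `prob` closed over the elevation, fuel, and wind layers, repeated calls to `step` trace a fire perimeter outward from the ignition point.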
Let's put this guy into fire mode and we can start up a fire, maybe down
where Upper Canyon and Cerro Gordo intersect here. It's maybe easier
to see on the terrain view here. We have a fire spread now that's a
function of the direction of the wind and the slope. I can speed up time
and watch that thing spread or we can also simulate what if there was
spotting behavior up on the hills as the wind is pushing it. We can also
think about the human response of where would I put maybe an air
tanker to slow down the head of the fire, the direction in which it's going,
which is a very dangerous place to put human resources so we'd want to
use our airplane to slow down the fire there. We maybe [08:00] put our
hand crews at the heel or the base of the fire, downhill
and upwind of it. These guys
will have a certain production rate. It'll be easier to see if I turn off the
roads here. So these guys are making their line in a certain way. We can also
introduce things like a bulldozer team, a resource that might arrive
later as the fire progresses, and compare their production rate to
the hand crews' over time. They're able to dig a lot more line and
contain this fire.
This is the physical aspect of a fire. We can also turn on the roads and the
structures and for every house we can simulate an evacuee, or in this
case, one-and-a-half evacuees per house, and start to look at where we'd
expect congestion to be. Now we can have the fire service interacting
with public safety or the police, who are going to be in charge of the
evacuation. Typically these guys train separately on their part of the
problem. This lets them come together around the common problem
and deal with those issues.
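The agent-level congestion model isn't described in detail here, but a back-of-the-envelope version of the same question, how long it takes everyone to get out, just divides total vehicles by outbound road capacity. Every name and rate below is my own placeholder, not a calibrated traffic parameter.

```python
import math

def clearance_time_minutes(n_houses: int, lanes_out: int,
                           evacuees_per_house: float = 1.5,
                           persons_per_car: float = 2.0,
                           lane_flow_cars_per_min: float = 15.0) -> float:
    """Lower-bound clearance time: total vehicles divided by the total
    outbound flow capacity. The agent-based table simulates this at the
    level of individual cars instead, which is what reveals *where* the
    congestion forms, not just how long it lasts."""
    cars = math.ceil(n_houses * evacuees_per_house / persons_per_car)
    return cars / (lanes_out * lane_flow_cars_per_min)
```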
This is a first instance of using agent-based modeling in the real world,
kind of a new form of human-computer interaction that takes advantage
of machine learning, simulation and a lot of statistics. Think of this as a
new way of solving problems. This is Simtable.