
>> Zhengyou Zhang: Okay, let's get started. I'm Zhengyou Zhang from the Microsoft Research NExT project called MIX, which is Media Interaction and eXperiences. Today, it's my great pleasure to introduce Yi-Ping Hung. I have known him for more than thirty years, actually. He is an expert in computer vision, graduated from Brown University, and he's now a professor at National Taiwan University. And in the last few years, he has spent a lot of time thinking about interaction using multimedia technology, including computer vision, and especially paying attention to, you know, e-heritage, like the Dunhuang caves. Yi-Ping, please.
>> Yi-Ping Hung: Okay, I can just talk here, yeah? Yeah, thank you, Zhengyou. Thank you for this opportunity to present our recent research results on virtual touring of Dunhuang. So I don't know how many people in the audience have visited Dunhuang, but Dunhuang is in the western part of China, at the starting point of the Silk Road, where in ancient times people moved silk from China to Europe. But because the route goes through the desert, it was really quite dangerous. So many merchants would pray to the Buddha, saying that if they came back safely, they would do something here, yeah? So eventually, they built a lot of caves like this one, one by one, starting from 366 AD. There are many sites in Dunhuang; one of the most famous is the Mogao Grottoes, which consist of more than seven hundred caves, and you can see that they built staircases so that the tourists can visit. However, the Dunhuang Academy doesn't like too many people going into the caves, because it ruins the wall paintings, so not everybody can go into every cave. Of course, VIPs can, but not the general tourists. [laughs] So we think we can do some virtual touring for people who really want to enjoy it, but without damaging the caves. And in a cave, one of the important figures is what I call a flying spirit, okay. It's just like an angel. You can see a lot of symbols like this in Dunhuang; it has almost become the symbol of Dunhuang. And you can see the flying spirits in almost every cave. I'd say it's just like angels everywhere, just like there are a lot of spirits here attending this talk, yeah.
So in our work, we tried to let people move around in a cave like a flying spirit. Okay, I will show one of these demos; this is what we recently did in a gallery near National Taiwan University. We let people wear the Oculus, so they feel like they're really flying in a cave and can enjoy the wall paintings or the ceiling. Yeah. And the idea is: the Oculus Rift is a really immersive headset, so you won't be able to see your surroundings, but if we put a camera in front of it, then at the beginning, you will see your surroundings through camera see-through, but when you look at this wall painting, we recognize you are looking at it, and then we let the tourist immerse into the virtual space through there. So it's like a gate that allows you to move into the cave, and now, you can see the audience member moving his body. If you jump, you can almost jump to the ceiling; it's just like you are on the moon, you know, very low gravity. You jump up to the ceiling, and you can begin to slide down, so you can move from one part of the cave to another just like a flying spirit. In the future, we are trying to let people also walk and jump, yeah, but to move around, you need to do self-positioning, and there's some technology that we are still developing. Many people are developing that; I will mention it later, yeah.
>>: I think that [indiscernible] two wall …
>> Yi-Ping Hung: Oh, this is the …
>>: [indiscernible] initial impression …
>> Yi-Ping Hung: … yeah, initial impression you are in the …
>>: Little cave—okay—and then ….
>> Yi-Ping Hung: … like a corner of the cave, and then you can …
>>: Expand the …
>> Yi-Ping Hung: … get into the virtual space through this symbol.
>>: Okay.
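The talk does not specify how the headset decides that the user is looking at the painting before opening the gate. A minimal sketch of one plausible trigger, assuming OpenCV feature matching between the passthrough camera feed and a reference photo of the mural; the file name, thresholds, and camera index are illustrative, not the team's actual pipeline:

    import cv2

    # Recognize the reference mural in the headset's passthrough feed; once it
    # is seen, hand off from camera see-through to the virtual cave renderer.
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    ref = cv2.imread("painting.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    ref_kp, ref_des = orb.detectAndCompute(ref, None)

    MIN_GOOD_MATCHES = 60  # tuned empirically; illustrative value

    def sees_painting(frame_bgr):
        """True when enough ORB features match the reference painting."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if des is None:
            return False
        good = [m for m in matcher.match(ref_des, des) if m.distance < 40]
        return len(good) >= MIN_GOOD_MATCHES

    cap = cv2.VideoCapture(0)            # headset-mounted camera (assumed)
    immersed = False
    while not immersed:
        ok, frame = cap.read()
        if not ok:
            break
        immersed = sees_painting(frame)  # then fade into the virtual cave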
>> Yi-Ping Hung: This teleport, yeah. Yeah, okay, so … but before we worked on the Dunhuang project, we were really on another project, called virtual exhibition, for the National Palace Museum. So I will first introduce our first project, which is for the National Palace Museum; it's what we call the magic crystal ball. It was based on that experience that we were eventually able to work on Dunhuang.
>> video: The National Palace Museum in Taiwan, one of the world’s top museums. China’s imperial
treasure belonged to just one man, the Emperor. And the pieces were created for his eyes only.
>> Yi-Ping Hung: Okay, this film was made by National [indiscernible]
>> video: Now, the National Palace Museum of Taiwan wants to place these artifacts in the hands of
everyone [indiscernible]
>> Yi-Ping Hung: I think I will just move to …
>> video: … use expertise in three-D imaging.
>> Yi-Ping Hung: So this is what we did: we used the object-movie technique to capture images of the artifact, and then we can put this object into a virtual environment. So for example, this artifact originates from Beijing, but if we can digitize it, then we can show it in Beijing, yeah, where it originally came from, without physically moving it over there. And for this project, we built something called the magic crystal ball; we use an optical device to let people see a floating image, and when you move your hand, you can control this floating object. We put a glass there, so you won't be able to touch the object. Yeah, so that's what we did for … this is a very famous collection, yeah, the jade cabbage, in the National Palace Museum.
So that's our first project. And after we did that, we then tried to work on the virtual exhibition of Chinese paintings for the National Palace Museum. So this is our project on making an interactive, multi-resolution tabletop system. The project really started in 2004, and eventually we got something published in 2008 at the Tabletop conference, but it was also at that time that I found Microsoft had announced the Surface, yeah?
>>: And maybe you heard about it.
>> Yi-Ping Hung: [laughs] Yeah, but in fact, when we were developing it in 2004, I didn't know there was a Surface project at Microsoft. But at the beginning, we were doing something different from Microsoft. It's rear projection; the multi-touch is the same, but what's different is we have multi-resolution … we have two resolutions. We have one projector which provides the screen for the whole table, but we also have another projector, which we call the fovea projector, projecting onto a steerable mirror, so we are able to provide a higher-resolution projection in the center. And the motivation for this kind of multi-resolution display is that humans really have a multi-resolution vision capability. We have a fovea region, which has very dense cone sensors; we can see very high-resolution, acute visual detail in the flower we are looking at, but if we want to see the detail of another flower, we have to move our gaze over there, right? So we think maybe we can use this kind of human constraint; we can build a display that doesn't need to provide high resolution everywhere, just where you are looking, where you are paying attention.
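A minimal sketch of the two-projector idea, assuming the attention point arrives as pixel coordinates in the artwork scan; the resolutions, the fovea fraction, and the blanking step are illustrative assumptions:

    import numpy as np
    import cv2

    TABLE_W, TABLE_H = 1024, 768    # base (whole-table) projector, assumed
    FOVEA_W, FOVEA_H = 1024, 768    # fovea projector, covering a small region

    def split_frames(artwork, cx, cy, fovea_frac=0.25):
        """artwork: full-resolution scan; (cx, cy): attention point in scan px."""
        h, w = artwork.shape[:2]
        fw, fh = int(w * fovea_frac), int(h * fovea_frac)
        x0 = int(np.clip(cx - fw // 2, 0, w - fw))
        y0 = int(np.clip(cy - fh // 2, 0, h - fh))

        # Base projector: the whole artwork, downsampled to table resolution.
        base = cv2.resize(artwork, (TABLE_W, TABLE_H),
                          interpolation=cv2.INTER_AREA)

        # Blank the attended region in the base frame so the two projections
        # do not overlap where the fovea projector shines.
        bx0, by0 = x0 * TABLE_W // w, y0 * TABLE_H // h
        bx1, by1 = (x0 + fw) * TABLE_W // w, (y0 + fh) * TABLE_H // h
        base[by0:by1, bx0:bx1] = 0

        # Fovea projector: the attended crop at near-native resolution.
        fovea = cv2.resize(artwork[y0:y0 + fh, x0:x0 + fw], (FOVEA_W, FOVEA_H))
        return base, fovea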
So in 2011, after we finished the hardware, we talked to the National Palace Museum; based on our earlier achievement with the magic ball, they trusted us. So they began to provide us some of the collection; so now, if you go to the National Palace Museum, since 2011, you can see two tables there: one showing Chinese painting, one showing Chinese calligraphy, what they call the "must-see" collection. They probably have more than a few thousand paintings, but a few dozen of them they think are the really important ones. But you won't be able to see them when you go to the National Palace Museum, because each painting is only shown maybe one month every few years. So when you go over there and say, "Hey, I want to see this painting," they say, "Sorry, you have to wait another two or three years to see it." So that's what you can see in the National Palace Museum … okay, let me show you, yeah. Okay, so this is the table functioning. If you are interested in a certain painting, you can just touch it, [music] and you can see the detail inside, and you can see the surrounding area nearby in lower resolution. Okay. So this part is provided by the fovea projector, but remember I said that, in our original design, the fovea projector projects onto a mirror that rotates. However, when we put it into the museum … well, the real idea is: if I want to see somewhere, I just touch my hand there, or in the future, maybe I just fixate my gaze there, and the system will give me the high-resolution part. But the museum didn't like the sound; it makes the environment sound like a machine shop, because you hear the nnnn, nnnn, nnnn. So they asked us to just fix it. So even though the initial design had a rotatable mirror, when we moved it to the National Palace Museum, when you see it, it's fixed in the center; you have to move the painting with your hand, yeah. So it's not exactly the same as our original design, but it still works. And I know many people have tried that system when they visit the National Palace Museum. Even the researchers themselves are not allowed to see all the paintings all the time, you know, because you have to ask for permission to get a painting, to extend it, and to find where it is. So they are also using our system to do their research, yeah.
And after that, we began to work on something on top, and for this one, we worked on tangible objects on top of the surface. So this is a simple project, but it's also the basis of what we did on Dunhuang, so I am going to show it to you a little bit. So here, we show that you can put a figure on the tabletop, and you can navigate on a map. Okay, so this just gives you an idea of what we did initially, and we also tried to work on something beyond the tabletop, so I'll use a video to show what we did there.
>> video: This tabletop has been engineered to display map projections …
>> Yi-Ping Hung: Okay.
>> video: Researchers from the National Taiwan University have invented a tabletop projection
system…
>> Yi-Ping Hung: Okay.
>> video: … to create high-resolution maps in [indiscernible]
>> Yi-Ping Hung: So what we did is: we tried to not just use the tabletop surface to display the virtual objects; we tried to also use the space above it. So we tried three different devices there; one is what we call the …
>> video: … turning that into an interactive surface [indiscernible]
>> Yi-Ping Hung: … the interactive lamp. Okay, so it's like you are moving a lamp to where you want to pay attention, but there's a pico projector that provides high resolution where you are looking.
Sorry.
>> video: [indiscernible] That information will be projected back onto the map to allow users to absorb
the interim areas of specific interest. Researchers say [indiscernible]
>> Yi-Ping Hung: So the lamp provides not just the projector; there's also a camera that can see where the lamp is looking. So it projects just what should be projected there, okay. And we have also tried an interactive window; we put a camera behind the pad as well, so that it can show what you are supposed to see from any angle, yeah.
>> video: [indiscernible] scanning and may have a mobile device like the iPad or the iPhone, that means
that identifies scanning positions over…
>> Yi-Ping Hung: Okay, so the idea is you can use the space above the window … above the surface, yeah.
>>: So the projector, the lamp, gives you the high-resolution image.
>> Yi-Ping Hung: Yeah. Yeah, instead of having the mirror …
>>: Yeah, I know.
>> Yi-Ping Hung: … with the Fovea projector, we are using a Pico Projector …
>>: Uh huh.
>> Yi-Ping Hung: … and there's a camera positioned where the projector is, and then the projector can project the right image on the fovea region. So that's why …
>>: But then you have to suppress the low-resolution image underneath.
>> Yi-Ping Hung: Yeah, yeah. So the camera detects where the lamp is looking, then the system suppresses the low-resolution part and just puts the high-resolution part on it.
>>: Okay.
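A minimal sketch of that suppress-and-replace step, assuming the lamp's camera has already located the quadrilateral on the table covered by the pico projector; the calibration mapping and resolutions are illustrative assumptions:

    import numpy as np
    import cv2

    def composite(base_frame, hires_tile, table_quad, pico_size=(854, 480)):
        """table_quad: 4x2 float32 corners (base-frame px) under the lamp."""
        # 1. Suppress: black out the low-res content where the lamp projects.
        cv2.fillConvexPoly(base_frame, table_quad.astype(np.int32), 0)

        # 2. Replace: warp the high-res tile into the pico projector's frame.
        th, tw = hires_tile.shape[:2]
        src = np.float32([[0, 0], [tw, 0], [tw, th], [0, th]])
        # The true mapping would come from projector-camera calibration;
        # here we assume the tile fills the pico frame exactly.
        dst = np.float32([[0, 0], [pico_size[0], 0],
                          [pico_size[0], pico_size[1]], [0, pico_size[1]]])
        H = cv2.getPerspectiveTransform(src, dst)
        pico_frame = cv2.warpPerspective(hires_tile, H, pico_size)
        return base_frame, pico_frame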
>> Yi-Ping Hung: Yeah, yeah. Yeah, so it's another way of doing it, but this gives the viewer tangible feedback …
>>: Yeah.
>> Yi-Ping Hung: … because it's like you are moving a lamp …
>>: Right.
>> Yi-Ping Hung: … but usually, a lamp provides a brighter light; now, it provides a better resolution, yeah. And this has something to do with what you are going to see: the tabletop display provides you with a two-D view, right? But this display provides a three-D perspective, yeah. Yeah, okay, so based on what you have just seen, we began to approach Dunhuang; we really began to approach them starting from maybe 2008, but it took some time to discuss what to do. So we really began this project in 2011, but in 2010, we came to an agreement on what we were supposed to do, yeah. So after that meeting, our project really kicked off.
So what we are going to do in that project is to develop an interactive multimedia system for virtually touring the Mogao caves, yeah. At the beginning, we were working on this kind of fixed platform, but in the middle, we began to shift to something more mobile, yeah. So I'm going to first introduce what we did on this system; it's interactive, with one horizontal display, one vertical display, and one mobile display, trying to use this space to allow people to tour the cave. Okay. And in this part, I'll first introduce our work on digital content, since we produced some digital content to integrate into this cave, and we also tried one system with horizontal, vertical, and mobile displays, and this one with just the mobile display, yeah.
And for the content, we have worked on three different kinds of content: one is story animation; one is the two-D paintings; and the other is the three-D statues. For the story animation: basically, when you go to the cave, you probably see a painting, but it's still; it's not moving. Because this work is done together with some historians, we are interested in showing the story that's in it. So they give us a story, and we can make an animation. For example, this animation shows Mount Wutai; that's in cave sixty-one, and when you walk into that cave, you will see the big cave … oh, sorry. I really need the internet, but okay. That's okay.
>>: We can connect the internet [indiscernible] any secure …
>> Yi-Ping Hung: Huh? Oh, okay.
>>: … via …
>> Yi-Ping Hung: I forgot to connect it.
>>: [indiscernible]
>> Yi-Ping Hung: Yeah? Yeah?
>>: [indiscernible]
>>: Do you want to …
>> Yi-Ping Hung: Okay, let see.
>>: Do you want to connect it to internet?
>> Yi-Ping Hung: It should be connected. I don’t know why it’s not. Let me do it again. Is it now?
[murmuring] Oh, sorry. Oh … I’ll skip this one, but anyway …
>>: [indiscernible] you should try to connect.
>> Yi-Ping Hung: This one, yeah?
>>: Yeah, and …
>>: Open.
>>: Open.
>>: Just open.
>>: Open.
>> Yi-Ping Hung: Open on this one?
>>: Yeah
>>: Yeah.
>>: Yeah, you needed to go to the throttle.
>> Yi-Ping Hung: Oh, throttle, okay.
>>: ‘Kay?
>>: Okay.
>>: ‘Kay.
>> Yi-Ping Hung: ‘Kay?
>>: In the pipeline, they say you need an endpoint.
>> Yi-Ping Hung: Okay. Let’s see …
>>: Okay, see you have …
>> Yi-Ping Hung: Yeah, okay. Good. Yeah, sorry about this; I forgot to do it. Okay, let me see if I can see. No? Ah, I tried … oh, yeah, yeah. Okay, so it takes that. So we made story animations just like this for different parts of the painting, and this animation, for example, this one, it shows … oh, but sorry, it's in Chinese, okay? [laughs] It builds this map of Dunhuang, okay. Yeah, in the future, we are translating it into English, yeah. Okay, so this kind of animation explains the story that the painting is trying to express. Yeah, yeah. It takes too much time; I think I will just skip this one. Let's just forget this. If you are interested, you can go to our project website. Our website shows all the stories. Yeah, yeah.
Okay, let me move to this one; this is two-D painting restoration. When you go to the caves, a lot of the paintings are really deteriorated, but it's not that easy to do physical restoration, because once you do the restoration incorrectly, it's ruined. But you can do it digitally fairly easily, okay. So that's what many people in Dunhuang are working on. And with that kind of digital restoration, we can easily overlay it onto the painting with our interactive system. And another one is something like this: in this case, you won't be able to see any statue here, yeah, but you can see, for example, a lion's tail here, so you know there was a lion, and since there was a lion, you know which Buddha was there. So the historians go to see different Buddhas in different caves or different temples, and we try to come up with something. But unfortunately, different researchers have different opinions; you know, one researcher says, "Okay, you should have a figure like this," and another says, "No, it should have a figure like that." So there's really no conclusion at the end. But since we are doing digital restoration, we just made different kinds of possibilities. Yeah, yeah. So everybody will be happy. [laughter] For me, I don't really see much difference, you know? [laughs] But for them, one will even say, "Okay, this Buddha has two feet like this," and the other says, "Okay, the Buddha should have the feet like that." And I don't know, yeah, yeah. So we just work on different versions of it, and then we can display them in the cave in a virtual way.
So what's the difference between going to the Mogao caves physically and going to a museum to see the collection, for example, at the Getty, yeah? There, you can duplicate the complete scene of a cave; usually, it takes a lot of effort; otherwise, you won't have this kind of spatial experience. And also, if you don't have a guide, you won't be able to know the stories. So what we are trying to do is integrate all the digital content into our system, and eventually, we hope to move this system into the visitors' center so that the tourists can use it, yeah. So this is what we did. I think I will just move on, because of the time; I think I will just move to …
>>: You have time. You have [indiscernible]
>> Yi-Ping Hung: Huh? I have time? I have some other stuff at the end, [laughs] so I think I will show these three parts. We have three devices: one is called the interactive figurine, so you can explore the cave with a tangible device; then you have the iFlash, like having a flashlight in the cave, so you can see the paintings; and then you have the iWindow, which allows you to have a three-D, interactive window to look at the restored three-D statues. Okay, so let me first show the first one; we use this figurine to allow the user to move around and to raise up its head with this kind of device. It's really a small figure whose head can be raised; when you raise up the head, a marker underneath moves, so the camera underneath the table will know that the head is raised, and then you can see the ceiling. Okay, so I will show you the interactive figurine. Okay, so it's moving around; it's like panning left and right; and then if you want to see the ceiling, you just move the head up, and you can see the ceiling. Okay, so that's how this system allows the user to move around in a cave, yeah.
>>: Can you see the ceiling straight up?
>> Yi-Ping Hung: For this one, not yet. [laughs] Yeah, because for this one, there are the constraints of the physical design, but in the system we are shipping there, we changed our design; we are not using the physical, optical approach, because it's not that reliable. We are now using an IMU inside, so the head will be allowed to go up further. With that one, you can probably only see up to maybe sixty degrees; you can see the ceiling, but not all of it. For our next one, the head will be able to move up more; we just put an inertial measurement unit inside, yeah.
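A minimal sketch of the IMU version, assuming a complementary filter over gyro and accelerometer readings to estimate the head's pitch; the axis conventions and blend factor are illustrative assumptions:

    import math

    class HeadPitch:
        """Complementary filter: gyro for fast response, accel to fix drift."""
        def __init__(self, alpha=0.98):
            self.alpha = alpha
            self.pitch = 0.0   # radians; 0 = level, +pi/2 = straight up

        def update(self, gyro_rate, accel_x, accel_z, dt):
            """gyro_rate: rad/s about the figurine's side axis; accel in m/s^2."""
            accel_pitch = math.atan2(accel_x, accel_z)   # from gravity vector
            gyro_pitch = self.pitch + gyro_rate * dt     # integrate the rate
            self.pitch = (self.alpha * gyro_pitch
                          + (1.0 - self.alpha) * accel_pitch)
            return self.pitch

    # Each frame the renderer tilts the in-cave camera by tracker.update(...),
    # so raising the figurine's head looks up at the ceiling, with no table
    # camera or marker needed.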
>>: So if you compare it with a joystick, why is the figurine better?
>> Yi-Ping Hung: With a joystick … I think it's the user experience. It's kind of: you move yourself into this figurine and move around, and it's like you; it's more intuitive.
>>: I see. Proxy.
>> Yi-Ping Hung: Yeah, it's like a proxy, yeah. Okay, and this one is the idea of using a flashlight: you can see the restored two-D painting, okay. There, we are just using a Microsoft Kinect to figure out the gesture, and if you hold your fist, you see the painting as it was one thousand years ago, right? And then as you gradually extend your hand, you go back to what we see now, yeah. There's also a demo video showing that. Okay, so if you move to this part of the painting and push the button, you activate what we call the restoration flashlight; then you can restore the part of the painting you want to restore, yeah. Okay.
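A minimal sketch of the flashlight crossfade, assuming hand openness has already been estimated from the Kinect skeleton as a value in [0, 1]; the blend rule is an illustrative assumption:

    import numpy as np

    def flashlight_view(restored, current, openness):
        """openness in [0, 1]: 0 (fist) shows the restored painting as it was
        a thousand years ago, 1 (extended hand) shows its present state."""
        t = float(np.clip(openness, 0.0, 1.0))
        # Per-pixel linear blend inside the flashlight spot.
        return ((1.0 - t) * restored.astype(np.float32)
                + t * current.astype(np.float32)).astype(np.uint8)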
So the last one is for showing the three-D restoration of a statue, okay. Here, we are also using a Kinect to track this window, this pad, okay. I think I will just show this with the video again. This is very similar to what I've shown you with the interactive pad showing three-D objects, but here, I'm using the pad to look at the empty platform, and through this pad, you can see the restored three-D objects, all those three-D statues, yeah. So only through this window are you able to see the restored statues.
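A minimal sketch of the window rendering, assuming the Kinect supplies the pad's position and facing normal; the renderer would pair this view matrix with an ordinary projection to draw the restored statues:

    import numpy as np

    def look_through_window(pad_pos, pad_normal, up=(0.0, 1.0, 0.0)):
        """Right-handed look-at view matrix viewing along the pad's normal."""
        f = pad_normal / np.linalg.norm(pad_normal)        # forward
        r = np.cross(f, up)
        r /= np.linalg.norm(r)                             # right
        u = np.cross(r, f)                                 # true up
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = r, u, -f
        view[:3, 3] = -view[:3, :3] @ pad_pos              # move eye to origin
        return view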
>>: [indiscernible] is it three-D?
>> Yi-Ping Hung: Three-D.
>>: Okay.
>> Yi-Ping Hung: Yeah, it's a three-D figure, yeah. And then, what we are working on now is building this system, and this system will be shipped to the visitors' center before the summer; yeah, before this summer, yeah.
>>: Before the summer?
>> Yi-Ping Hung: Before the summer, yeah. This is our plan, yeah; it should be done. It's not that hard, because we have finished all the prototypes. Now, we are just trying to make the system more reliable, yeah. So the problem with this platform is that it's not that easy to move. Because when we finished this project, Dunhuang said, "Oh, it's good. Can you ship us the system?" Then we began to find out that it's not that easy to move the system around. Okay, so at a certain point, we tried to build this mobile display. The need really came from them, because they asked us to move our system to Turkey; at one time, they were showing these paintings in Turkey, and they asked if we could show our results over there. Because it's not that easy to move the platform there, we were thinking, "Oh, maybe we can just move a small window, a small pad." Okay, so with the pad, it's like a portal that allows you, when you are looking at the painting, to kind of move into the space. It's just like Doraemon's Dokodemo Door; you can walk into the space through this window. So that's what we did for this project, yeah.
So we have this teleport for Dunhuang. For example, you just look at the painting; you point your pad at this, and suppose there's some sound … ah, okay. Generally, there's a sound to make you feel like you are now walking into the building, and then you can move in the building while you are physically not in that space, okay. And over there, we are trying to let people walk in the space. Yeah, but for that version, people found it's still tiresome to hold something while walking around; you don't really like to hold up a pad for too long; it makes your hand very tired. So we began to use this wearable device, and I have shown that video to you. Maybe I will show another one, okay. After we worked on these flying spirits, which allow you to jump to the ceiling and begin sliding in the cave, we had another idea: maybe you can move around in a cave just like Spider-Man. So what does Spider-Man do? Spider-Man can use his spider silk to move around, right? So [music] I asked my student to make a demo of this, okay. If you want to go into this cave, you can just put on your wearable device; all you need is, maybe, a HoloLens in the future, okay. [laughs] Then you can wear a smartwatch, okay. So you can look around. If you want to see that painting, maybe very high on the ceiling, you can just throw your hand. Then you … I think you can see a silk; not yet, huh? Oh, okay, jump; after they jump to the ceiling, if they are interested in that painting, okay, they kind of use the spider silk to move over there, okay. If you want to move to another part, you just use the spider silk again, and now my student is also using swinging, yeah; so you can, like, attach a spider silk to the ceiling, and you can go up, and then you can swing over [laughter] to another part of the cave. So that's what we are working on: the spider silk, yeah.
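A minimal sketch of the swinging motion, assuming a simple rope-length constraint applied to the player's position each frame; the gravity and timestep values are illustrative:

    import numpy as np

    GRAVITY = np.array([0.0, -9.8, 0.0])

    def swing_step(pos, vel, anchor, rope_len, dt=1.0 / 60.0):
        """One physics step of a player hanging from a silk at 'anchor'."""
        vel = vel + GRAVITY * dt
        pos = pos + vel * dt
        radial = pos - anchor
        dist = np.linalg.norm(radial)
        if dist > rope_len:                        # rope taut: project back
            radial /= dist
            pos = anchor + radial * rope_len
            vel -= radial * np.dot(vel, radial)    # keep tangential velocity
        return pos, vel

Keeping only the tangential velocity component when the rope goes taut is what produces the pendulum-like swing seen in the demo.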
Okay, so you can go to our website to see what we have done for the fixed platform. The mobile part is not on the website yet, because it was just done in the last year, but the platform part is already on the website. Okay, so for this part, I have to thank the Chiang Ching-kuo Foundation for their help, and if I have time, I will touch on another topic; it's also using this kind of technology to let you see some virtual stuff, allowing you to cross the boundary between your physical world and the virtual space. And this is the work we are doing in the NTU Intel center. Intel has funded a center at National Taiwan University; in the center, there's a focus group working on intelligent transportation systems, which tries to use the Internet of Things, or Internet of Vehicles, to help people have a safer or more efficient driving experience. So this is the situation: in the future, all the cars will be able to broadcast their information to other cars. So for example, for myself, when I'm driving every morning, sometimes I need to get to the exit, but I don't want to get into these lanes too early, because if I move over too early, this lane goes very slowly, right? But if I go too late, I won't be able to cut in, yeah. So … but every morning, I don't know … Google Maps tells me whether it's crowded or not, but it doesn't tell me where the end of the queue is. Okay, this happens a lot for me in Taipei; I don't know whether it does in Seattle, [laughter] but in Taipei, I want to find the most efficient time to cut in. So in the future, I want to be able to see the situation, either with a giraffe view or with a see-through car view.
Okay, so the giraffe view is like a virtual giraffe; it's not a real giraffe, yeah. [laughter] And the see-through car shows you the things behind the car ahead. You can do it in two ways: you can either put a camera there and try to generate a view, or you can just render some meaningful objects, so you will be able to know the traffic situation. And when we were working on this project, we found a USDOT report saying that if we can just provide left-turn assist and intersection movement assist, we can really avoid a lot of accidents and save a lot of lives, because a lot of the time, when you want to make a left turn, your sight is blocked by the car over there. So when you make the turn, you may have a collision, yeah. And I will show you one example here; this is a backup at the exit that I told you about. This is a tragedy, so you had better prepare for what you are going to see; this is really a tragedy, yeah. Okay, because the driver cannot see the traffic backup, he tries to cut into the lane, but unfortunately, by the time he finds out, it's already too late, yeah? This really happened a few months ago; last year, yeah. So what if, at this point, the driver were able to see a see-through car, or had a giraffe view, and could know that there's really already a traffic backup, so don't cut in through there, yeah.
So that's what we did in our laboratory; we tried to simulate different situations, to let people know … we were trying to figure out the best way to deliver this kind of information, okay. So this is what we call the transparent car; at certain times, it can become transparent, but for what you have seen here …
>>: They don’t want to see [indiscernible]
>> Yi-Ping Hung: Oh, no, they don't want to hit the … yeah. Okay, what you have seen there is done in a simulator, but to make it really come true, you need to know the position of each vehicle. So we are also working on the self-positioning; the way we did it, I probably didn't put it here, yeah. Okay, so I didn't put it in this slide, but what we are using is what Microsoft developed a long time ago, like Photo Tourism. If all the cars drive through the same place all the time, and they just transmit the images to the cloud, then we can process the images and generate a three-D structure of the surroundings. So you can use this information to find where you are. What we have accomplished is that we are able to position each car to sub-meter accuracy, and if each car is now positioned with sub-meter accuracy, and it's also observing all the other cars, then you can broadcast that information. So every car will know the exact position of the nearby traffic, and then the problem is how to deliver that information to the driver. So what we were working on here is to use either the transparent car or the giraffe view, and right now, we are trying to deliver this kind of information using a wearable display. Of course, you can also use a head-up display, but using a wearable display is more advantageous, because you can have many other applications; for example, you can also have this kind of transparent pillar. Okay, because sometimes a pillar will also block your sight and create some dangerous [indiscernible] Okay, so that's what we are trying to do; we are not just using this kind of technology for virtual touring of Dunhuang art; we are also using it for other applications, like driving, yeah.
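A minimal sketch of the broadcast side, assuming each car sends a small beacon carrying its self-positioned location; the message fields, freshness window, and thresholds are illustrative assumptions, not a real V2V standard:

    import time
    from dataclasses import dataclass

    @dataclass
    class Beacon:
        car_id: str
        x: float       # meters along the road (from sub-meter positioning)
        lane: int
        speed: float   # m/s
        stamp: float   # send time

    class TrafficModel:
        """Keeps the latest beacon per car; flags a backup ahead in my lane."""
        def __init__(self, max_age=1.0):
            self.latest = {}
            self.max_age = max_age

        def on_beacon(self, b):
            self.latest[b.car_id] = b

        def backup_ahead(self, my_x, my_lane, horizon=200.0, slow=2.0):
            """True if any fresh beacon ahead in my lane is near-stopped;
            the see-through or giraffe view would then visualize it."""
            now = time.time()
            return any(
                b.lane == my_lane and my_x < b.x <= my_x + horizon
                and b.speed < slow and now - b.stamp < self.max_age
                for b in self.latest.values()
            )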
Okay, and for the display part, in fact, we are also working on a project at National Taiwan University; we try to use this kind of smart display to let people at different sites communicate. That project is still ongoing, but before that project, we worked on this one, what I call … it's really a smart wall that knows who is watching. And with this technology, I worked on a series of interactive art installations. Okay, this work is what I call "I am"; I am, because people will think about who they are, and the Buddha really teaches us that you should not be so attached to who you are. Maybe you think you are somebody, but it may be that your physical body is not that real, okay. I think I will just show the video of this one, okay. [music] Okay, so Andy Warhol said that in the future, everyone will be world-famous for fifteen minutes. So we put this wall in a subway station, and when people sit down, they see many different people who watched it before, and we choose the famous person who looks most similar to you, yeah. Who do you think … who does the system think you look like? Yeah. But then that project evolved into another one; we tried to let people see that nowadays, most young women wear makeup, so eventually, they look so much alike. So with this work, I tried to let people think that everything is not real, okay; what you see is not really what you see. So what I want to say is what the Heart Sutra says: what you see is not really what you see, yeah. Okay, so …
>>: That can be said of anything.
>> Yi-Ping Hung: Yeah, yeah. [laughs] So for this work, eventually, at one point, we show the baby faces, and it shows the old ladies' faces and transforms … all these figures that you see are really virtual; there's really no one who looks like that, for the babies and for the old ladies. They are all … how do you say … we mix different pictures to make similar-looking ones. So they are really all synthesized from similar ones; this old lady, for example, is a mix. And for this wall, we have a camera behind the wall; it captures the audience sitting in front of the wall, and we are able to detect the attention of the audience. So if you are looking at a certain part, okay … so the interesting thing is: all the babies look very similar, and all the old men look similar, too, right? We are supposed to express ourselves when we are young, but nowadays, since we have the same standard of beauty, everybody uses plastic surgery, so all the ladies look so similar. Yeah, so we have this work. Okay. So now, it's doing face recognition to recognize who looks most similar to you, yeah. Okay, so at the end of this work, we let all the figures become you, which means what you see is really just your projection. Okay, what you see is just your projection in this work, and eventually it goes to nothing, okay. So it's like, okay, I give you a book: there's really no self, no others, yeah.
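A minimal sketch of the look-alike selection, assuming face embeddings are precomputed for a gallery of famous faces by some descriptor model (the embedding function itself is outside this sketch); the cosine-similarity choice is an illustrative assumption:

    import numpy as np

    def most_similar(visitor_emb, gallery):
        """gallery: {name: embedding vector}; returns the best-matching name."""
        def cosine(a, b):
            return float(np.dot(a, b)
                         / (np.linalg.norm(a) * np.linalg.norm(b)))
        return max(gallery, key=lambda name: cosine(visitor_emb, gallery[name]))

    # Hypothetical usage: name = most_similar(embed(face_crop), famous_faces)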
>>: Feeling very spiritual now. [laughter]
>>: It must be this.
>> Yi-Ping Hung: Yeah, after working on the Dunhuang project, yeah, you become more spiritual. [laughter] So after that project, I shifted what I had just done to another work called Smiling Buddha. Okay, the Smiling Wall was first shown at the ACM Multimedia Art Exhibition in 2013, and then last year, we demonstrated this Smiling Buddha at Ars Electronica in Linz. And this year, many countries have already asked us to show this work, this Smiling Buddha. Okay, so I will show the Smiling Buddha video [music]; similar idea, but different content. Okay, so now, you can see we are detecting his attention, so you see the background is changing. The background changing means that the user's attention is changing, but if he holds his attention on a certain figure for three seconds, it projects his smile there, and it goes on to attract other smiles. So what I want to express there is that what you see is really your own expression. In fact, that's what the Buddha told us, yeah: everything is illusory; the only thing real is your heart. So what you perceive is what you experience, what you generate. So eventually, we display all the smiles. Our camera is doing smile detection, so it detects your different smiles and eventually displays them on the wall. So if you smile a lot, then you can see a lot of smiles surrounding you at the end of this work. And this wall kind of remembers all the smiles it has seen from different visitors. And each smile is like a light generated from the Buddha, and so what I want to express there is that your smile can change the world; change your world and the whole world, yeah. So this is our project related to the Buddha, yeah, okay. So I think I have finished my talk already, yeah.
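A minimal sketch of the three-second dwell trigger plus smile detection, assuming the wall's tracker reports which figure is currently being looked at; OpenCV's stock smile cascade is one plausible detector, and the parameters are illustrative:

    import time
    import cv2

    smile_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    DWELL_SECONDS = 3.0
    current_target, dwell_start = None, 0.0

    def update_attention(target_id):
        """True once the same figure has been attended for three seconds."""
        global current_target, dwell_start
        if target_id != current_target:
            current_target, dwell_start = target_id, time.time()
            return False
        return time.time() - dwell_start >= DWELL_SECONDS

    def is_smiling(face_gray):
        """face_gray: grayscale crop of the visitor's face."""
        smiles = smile_cascade.detectMultiScale(
            face_gray, scaleFactor=1.7, minNeighbors=20)
        return len(smiles) > 0

    # When update_attention(...) fires and is_smiling(...) is True, the wall
    # would project the captured smile onto the attended figure.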
>>: Thank you very much.
>> Yi-Ping Hung: Thank you. [applause]
>>: [indiscernible]
>>: That’s good.
>> Yi-Ping Hung: Yeah, a question? Yeah?
>>: I'm just curious about this expression recognition. Do you only do smiles, or do you do any other expressions, like …?
>> Yi-Ping Hung: Okay, in fact, my students have done a lot of expression recognition before, yeah. But for this project, for this artwork, we only work on smiles. And to tell the truth, a smile is easier to detect than the other expressions, yeah. It's not that easy to detect all the different kinds of expressions; even humans sometimes misclassify an expression, yeah.
>>: Right.
>> Yi-Ping Hung: Unless you are a good actor; you know, a good actor can make a very precise expression, yeah. But sometimes, if you are angry, people may feel you are frightened, you know. [laughs] Yeah.
>>: Yeah, I don't know … I guess you read it somewhere in those books, the Buddha's books; I don't believe Buddha … is Buddha only interested in smiles? Or is it always [indiscernible]
>> Yi-Ping Hung: For this work, okay. [laughter] Buddha is probably interested in everything, [laughter] but for this work, we are interested only in smiles, so … yeah, yeah.
>>: [indiscernible] I see.
>> Yi-Ping Hung: Because I think a smile is something very important …
>>: Yeah.
>> Yi-Ping Hung: … for us to generate.
>>: Sometimes … sorry.
>> Yi-Ping Hung: A smile can change you from the inside, yeah.
>>: Right. So sometimes, people go to a temple—right—in front of a Buddha, and they have concerns.
>> Yi-Ping Hung: Yeah.
>>: They want to pray for something. Like suppose … so maybe somebody goes to your wall …
>> Yi-Ping Hung: Yeah.
>>: … and then maybe they have some concern in their mind …
>> Yi-Ping Hung: Yeah.
>>: … and then you can detect that. That’s not … it’s not gonna be a smile; it’s gonna be something
else—some other expression.
>> Yi-Ping Hung: Yeah, yeah, yeah, okay.
>>: Maybe you show something—I don’t know—maybe you show some …
>> Yi-Ping Hung: Maybe in the future, yes, but … [laughs]
>>: Maybe the … okay, so [indiscernible]
>> Yi-Ping Hung: … but not now, not now. [laughter] It's very difficult to detect one's emotion with just a single camera, yeah. Probably, we can detect brain waves [laughs] in the future, yeah. Yeah?
>>: You were using your pad to do your … but how were you doing your tracking? Were you tracking
with—you know—orientation sensors within the device, or were you doing physical tracking?
>> Yi-Ping Hung: So when we do that in front of the platform, we are using Kinect, really.
>>: Oh.
>> Yi-Ping Hung: Yeah, but when we are walking around, we are using the inertial measurement … the inertial sensors, like the gyro and accelerometer, yeah. And then, we are also working on a cam … trying to use the camera to …
>>: Figure out.
>> Yi-Ping Hung: Yeah, to build the environment, the structure of the environment, and then position within it, yeah, using that structure. Just like at Google; they have a project called Google Tango, yeah.
>>: Yeah.
>> Yi-Ping Hung: But there, they are using a depth camera to reconstruct it, yeah. But we try to do it without the depth camera, just with the …
>>: It’s possible, yes. Yeah, ‘cause charge your phone, which costs LAN.
>> Yi-Ping Hung: Yeah.
>>: So if you’re doing an immersive—coming back to Dunhuang—you’re doing an immersive experience
…
>> Yi-Ping Hung: Yeah.
>>: … and you said, “You need a guide.”
>> Yi-Ping Hung: Yeah.
>>: What do you think’s the most effective way?
>> Yi-Ping Hung: In the immersive …?
>>: Yeah, say you’re doing cave sixty-one …
>> Yi-Ping Hung: Yeah, yeah, yeah.
>>: … and you want the person to be able to tour the cave …
>> Yi-Ping Hung: Yeah, it’s …
>>: … and to learn.
>> Yi-Ping Hung: Yeah. Yeah, so …
>>: The education part.
>> Yi-Ping Hung: Yeah, for that, I think a flying spirit will be very useful. Okay, so maybe one of these flying spirits can fly along with you; the flying spirit will probably show you where things are. I don't know if … I'm going to attend GDC in San Francisco next week, yeah, the Game Developers Conference. They have a game called "Journey" that got six awards two years ago. In "Journey," the characters are not allowed to speak; they just move around, and you have to use this kind of … it's not even body gesture; they move closer or farther away, and then you kind of know each other. It's a very interesting interaction. Okay, so I think, in the future, in a Dunhuang cave, I'm trying to work on something similar to that; it allows you to be like a spirit, and maybe there's another spirit guiding you to tour the space. Yeah, we haven't worked through that part yet, but this is a good direction: guidance without speech …
>>: That's sort of the question of how to convey the information, how to communicate that …
>> Yi-Ping Hung: Yeah.
>>: … and yet have an awe-inspiring experience.
>> Yi-Ping Hung: Uh huh. So I think that needs some good design, because in each cave, there are many different hot spots for the experts; they know what the interesting parts are; but the tourists have their own interests. So it's important to be able to detect what the tourist's interest is. I'm working on this kind of smart display work because I think the display should also know your attention. If I pay more attention … if I put my gaze on a certain painting, for example, for longer, then the system probably knows I'm more interested in that, but the system probably doesn't know exactly which part, so at that time, maybe a good angel will come out and try to communicate with you. But exactly how to do it, I don't know yet. I hope we can work on that in the future. Yeah, but that's a good direction, yeah, yeah. Yes?
>>: So for the flying spirits virtual reality, has anyone experienced motion sickness as they fly? Because they stay in place, but they fly in the virtual world, too.
>> Yi-Ping Hung: Uh huh. Yeah, I thought with the Oculus, most people feel okay, but it depends; it differs from person to person. Some people will feel kind of dizzy or so. So you can see we built some rails for protection, yeah, yeah, but we are now working on something … I think we are trying to work on something where you can go from the physical world to the virtual space, easily transitioning from one to the other. Especially with, you know, HoloLens, this kind of stuff; sometimes, you can block the outside, and sometimes, you can let people see through. So at this moment, we are using the camera see-through approach; when you open the camera, you will be able to see what's outside, right? When you close it, you are immersed. So I think, in the future, it's a good research issue to figure out: when do people feel uncomfortable? How much of the real space has to be seen, and how much of the virtual space should be seen? Sometimes, you don't need complete immersion; sometimes, you show complete immersion. Because our body is in this physical world, but we want to tour a virtual space, so how much of the virtual one are you going to show? So that's what we are thinking of working on, but this is a good issue, I think. And people who work on the wearable devices try to reduce the latency, yeah. Latency is one of the major issues that cause this kind of uneasiness, yeah. Yeah, any other questions?
>> Zhengyou Zhang: No more questions? Okay, let's thank Yi-Ping again. [applause]
>> Yi-Ping Hung: Okay, thank you.