>> Hrvoje Benko: Hello. Good morning. Thank you all for coming. My name is
Hrvoje Benko. I'm a researcher here at Microsoft Research. And it's a great
pleasure of mine today to present to you Professor Yoshifumi Kitamura from
Tohoku University in Japan.
Tohoku is actually located in Sendai.
And I met Professor Kitamura many years ago in Osaka, where I saw his IllusionHole
display. That was the first really convincing multiuser stereoscopic display
I had seen, and it launched me into this area. So I'm hoping to see
some really fun interfaces coming from Professor Kitamura right now. So without
further ado.
>> Yoshifumi Kitamura: Thank you very much, Dr. Benko.
I'm Yoshifumi Kitamura from Tohoku University, Japan. First of all, you may
not know me well, so let me begin with a brief self-introduction.
I started my career at Canon in Tokyo. Then I moved to ATR, the Advanced
Telecommunication Research Laboratories, located in Kyoto, just on the border
between Kyoto and [inaudible].
Both places are close to Osaka. I stayed several years at ATR, then moved to
Osaka University, where I started research on interactive techniques. I moved
to Tohoku University just last year.
You may not be familiar with the locations of these cities in Japan, so here
is a very short introduction.
Osaka and Sendai are roughly a one-hour flight apart. Osaka University is a
relatively new national university in Japan. In Sendai we have Tohoku
University, which was established more than 100 years ago; it belongs to the
first group of universities established in Japan.
Maybe you don't know much about Tohoku University, because not many
researchers there work in this field, so in this case Wikipedia is [laughter]
very helpful. As described there, Tohoku University has 18,000 students and is
one of the Imperial Universities: the first group of universities established
in Japan by the government, seven universities founded at the beginning of the
new age after the [inaudible] period.
There are several rankings, but please note that engineering and technology
are relatively strong at Tohoku University. And I am now starting
human-interface-related research there. Anyway, if you are interested, please
visit for yourself. So, as Dr. Benko mentioned, Tohoku University is located
in Sendai. I think you have heard the name of Sendai several times this year
in TV news programs or newspapers.
Actually, we had an earthquake on March 11th. The epicenter was here, very
close to the coast of Japan, in the Pacific Ocean. Soon after that, we had a
tsunami, and a very large area was affected by it. Sendai, with a population
of one million, is the biggest city in that area, and part of Sendai city was
affected by the tsunami.
Have you ever been to Sendai? This is a picture of Sendai city observed from
a hill toward the downtown, looking from the west side to the east. You can
see the downtown business area; after the earthquake this scenery did not
change at all. But behind the buildings is the seashore. This is just the
beginning of the introduction of Sendai. This is the Pacific Ocean and the
seashore, and the blue areas are the areas affected by the tsunami.
The rings from here to here are approximately five kilometers apart; the
tsunami reached up to approximately five kilometers inland. Here is the
downtown area, approximately 10 or 15 kilometers, about 10 miles, from the
seashore. The previous picture was taken from here, in this direction.
My office is located here, and my house is located here, so they were safe.
Anyway, soon after the tsunami, I visited these areas myself to see what had
actually happened. Maybe you saw several pictures of those areas; this one,
for example, was taken from here looking in this direction, to the west.
Everything here disappeared. Some time after that, I went to that area for
the [inaudible], to help people who had to repair their own houses: cleaning
up the mud, cleaning up everything, carrying away what had been floated away
by the tsunami, things like these and like this.
Anyway, thanks to your help, Sendai and Japan are now recovering. So now I'm
here. Let me start my talk today.
I moved to Tohoku University approximately one year ago, and there I started
a project on interactive content design. "Content" sometimes refers to an
object, a piece of an image, or a music file used in an application, but we
are going to treat content in a much wider sense. Of course, content must be
observed and enjoyed by humans, and between the human and the computer, or
content, we have to consider the input and output devices, what is sometimes
called the user interface system. So, in addition to the content, I have to
consider these interactive systems. Through the interaction between human and
computer, the human enjoys more meaningful content, and the content gains new
value through that interaction.
And through the interaction, the humans become happy. That is my pleasure.
In addition, it's not enough to consider only one person: if someone in this
situation gets angry, or conversely becomes very happy, the other people are
affected by that change. So in order to make them all happy, we have to
consider the whole environment, including the computers, the content, and the
humans.
So this is the stage on which I am going to consider interactive content
design at Tohoku University. I'm going to introduce some related research
like this. Some of these projects were already started at Osaka University
and continue at Tohoku University; others have just started but have already
obtained good results so far. And let me discuss a little the possibilities
and feasibility of the project.
Anyway, I'm going to start with displays. I wanted to develop a 3D, or
three-dimensional stereoscopic, display that is useful in situations like
these: children playing with their friends using blocks, dolls, or sometimes
cars. Through these experiences children learn how to communicate with
others. Instead of individual mobile devices, I prefer these kinds of
techniques, which support face-to-face communication for children or younger
people.
But, as you know, it's difficult to find a three-dimensional display that is
useful for this kind of situation. First, the three-dimensional display must
support multiple users: it must be a multi-user stereoscopic display.
Suppose several users are looking at one stereoscopic display. One user
observes from here and another from this direction; the appearance must be
different for each of them, since they observe the same object from different
directions. Moreover, if a user moves from here to there, the appearance must
change. So what I wanted to develop is a stereoscopic display that provides
motion parallax for each user.
So I wanted to develop a three-dimensional display with which multiple users
can see and interact simultaneously. If this kind of three-dimensional
display is achieved, we can use it in the situations mentioned above.
Interaction with a three-dimensional display is not so easy. The very common
way to interact with a computer is the mouse, which is a very indirect way;
the keyboard is also very indirect. But maybe you will agree that the most
popular recent technique is the direct multi-touch user interface.
Thank you very much; I borrowed these images from the website.
A similar technique should be usable for three-dimensional displays. I said
"interaction," but more precisely it is direct multi-touch interaction that
is important, even for a three-dimensional display.
What kind of three-dimensional display is feasible for this purpose? There
are several different approaches so far. The hologram is feasible in
principle, but it is still difficult to generate one interactively or to
change the scene interactively. Stereoscopic or autostereoscopic displays
using binocular stereo are feasible.
A volumetric display is, of course, also feasible. Consider, for example, a
stereoscopic or autostereoscopic display with one user standing in front of
it, observing the Space Shuttle from the side. If he moves to the left, the
displayed object must change like this. To achieve that, the user's position
must be detected in some way, using some kind of tracker or other device.
And, of course, if he moves to the right, the object must change so that he
observes the same object from the right side. This is called motion parallax.
This is an interactive stereoscopic display for a single user.
But what happens if three people are standing in front of the display? The
position of user A is measured, and A is the primary user at this moment; A
observes the Space Shuttle correctly from the side. But what happens to B and
C, standing at other positions? They observe distorted images, because the
display shows the image only for A. Both B and C see distorted images like
that, while they should observe images like these. So the stereoscopic
display must generate and show left and right stereoscopic images for all
three people: six images in total, generated simultaneously.
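As a rough sketch of what "six images simultaneously" means in code (this is an editorial illustration, not the actual display implementation; the head positions and the 65 mm interpupillary distance are assumptions):

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd=0.065):
    """Left and right eye positions from one tracked head.

    head_pos:  3D midpoint between the eyes, metres
    right_dir: vector pointing to the user's right
    ipd:       interpupillary distance, metres (~65 mm is a common value)
    """
    head = np.asarray(head_pos, dtype=float)
    right = np.asarray(right_dir, dtype=float)
    right /= np.linalg.norm(right)
    half = 0.5 * ipd * right
    return head - half, head + half

# Three tracked users around the display -> six viewpoints per frame.
heads = [((0.0, 0.5, 1.0), (1.0, 0.0, 0.0)),
         ((0.8, 0.5, 0.6), (0.6, 0.0, -0.8)),
         ((-0.8, 0.5, 0.6), (0.6, 0.0, 0.8))]
views = [eye for h, r in heads for eye in eye_positions(h, r)]
print(len(views))  # 6 images to render simultaneously
```

Each of the six viewpoints would then drive one off-axis render of the scene per frame.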
That is multiple-user stereo. We have opportunities to see stereoscopic
images in a theater; that is multiple-user stereo, but it's not interactive.
This one is interactive: the users can change their viewing points and
viewing directions freely. But I don't want my children to wear head-mounted
displays in this kind of situation.
So the binocular stereoscopic approaches have limitations. How about
volumetric displays? I think this one is popular: as you know, a screen
rotates very fast inside the ball. In similar approaches, screens rotate in
the theater, mirrors rotate here, and the [inaudible] rotates inside this
cylinder. In all of these cases the interaction must be indirect, because
users cannot reach their fingers to the exact position of the image.
So we developed the IllusionHole, a multi-user stereoscopic display that we
developed almost 10 years ago. You may already know the details of the
IllusionHole, but let me introduce the principle a little.
The IllusionHole consists of a normal horizontal display and a display mask
with a hole in the center. When there are three users, each user can observe
a stereoscopic image through the hole.
For example, if user A looks at the display through the hole, he can observe
only the image drawn in the area for A; the other areas, the image areas for
B and C, are occluded by the mask. From the viewpoint of B, only the image
area for B can be observed. This is the basic principle.
And according to the positions measured by a tracker, the viewing areas
change as each user moves.
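The masking geometry can be sketched as follows (an illustrative reconstruction, not the published IllusionHole code; it assumes a circular hole in a mask at height `mask_height` above the display and projects it from each eye onto the display plane):

```python
def drawing_region(eye, hole_center, hole_radius, mask_height):
    """Project a circular hole (in a mask at z = mask_height) from the
    user's eye onto the display plane z = 0.

    eye:         (x, y, z) eye position, with z > mask_height
    hole_center: (x, y) of the hole in the mask plane
    Returns (center_x, center_y, radius) of that user's drawing region.
    """
    ex, ey, ez = eye
    hx, hy = hole_center
    # A ray from the eye through a point at height mask_height reaches
    # z = 0 at parameter t = ez / (ez - mask_height).
    t = ez / (ez - mask_height)
    cx = ex + t * (hx - ex)
    cy = ey + t * (hy - ey)
    return cx, cy, hole_radius * t

# Eye directly above the hole at twice the mask height: the region sits
# under the hole, scaled by a factor of 2.
print(drawing_region((0.0, 0.0, 0.4), (0.0, 0.0), 0.05, 0.2))
# -> (0.0, 0.0, 0.1)
```

Because each user's eye position gives a different projected region, the regions separate on the display and the mask hides each user's region from the others.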
Just a moment.
[music played].
>> Yoshifumi Kitamura: As you can see, multiple users can reach their hands
directly to the image, and the positions in the workspace, including the
stereoscopic image, are completely shared by multiple people. If someone
points at the head of the Space Shuttle, the other users can recognize that
he is reaching his finger to the top of the Space Shuttle. This is a very
important feature of the IllusionHole.
The [inaudible] is very easy: we draw a pair of stereoscopic images according
to the measured position of each user, with adequate disparity, so that the
images fuse at the same three-dimensional point.
By using this feature, we can build [inaudible] applications. One of them,
started just recently, is direct multi-touch interaction with the
stereoscopic images on the IllusionHole, inspired by the recent trend of
multi-touch interfaces; we are trying to achieve something similar on the
IllusionHole. The complete system is still under development; one example is
shown in the movie.
Since I presented the idea of the IllusionHole, we have started many joint
projects with people outside the laboratory, for example medical researchers
and telecommunication companies.
One of the most recent results of these collaborations is the
three-dimensional fax e-mail. Suppose that at the imaging site there is a
very unique object, shared by three people in this case; this is a typical
doll, in Osaka anyway.
If they want to share the same object with a physically remote site, how can
they transmit the shape of the object to the other side? Taking pictures or
sending CAD data is useful, but not good enough. So we take several images
around the object; then, using an image-based rendering technique, we
generate stereoscopic images according to the users' positions and display
them on the IllusionHole at the remote site.
So this is called three-dimensional fax e-mail. I'll skip this part. You may
notice that the body of the IllusionHole looks similar at the imaging site
and the display site. At this moment we designed them separately, but in the
next stage the imaging and display sites will be designed as a combined unit.
Anyway, we're still developing applications and joint projects based on the
original idea. Okay.
Okay. The next topic is also related to displays. We can use several displays
in a room; for example, here we have one, this one, and the other one, and
meetings are held in this kind of situation. Recently, in more and more cases
we also have tabletop displays like this. We call this a multiple-display
environment. It is very useful, but there are some problems, mainly
distortion and connectivity.
Almost all graphics systems assume that the viewing position is on the center
axis of the display, so if the user observes from an incorrect direction, the
image is distorted. For a tabletop display, for example, the center axis is
here, but the user almost always has to observe from here. This is not good
for the user.
Another problem is connectivity. Ordinary computers now support multiple
displays, but usually the image planes must be arranged simply to the left,
right, or above one another.
For example, in this typical situation one image is spread over two displays,
but if you change the viewing position, the continuity is distorted like this.
In collaboration with Miguel Nacenta and Sriram Subramanian, we solved this
problem using perspective windows and perspective cursors. A perspective
window is presented to the viewer so that it is always perpendicular to the
direction to the user: as the user moves, the virtual image always reorients
to face him. Perspective windows are useful when one image is spread over
several displays; even in that case the image is displayed seamlessly, like
this.
In addition to wall-type and tabletop displays, we can use a mobile display
like this. The perspective cursor moves according to the user's position: the
cursor moves according to connectivity generated on the fly. If the user
moves a little, the connectivity changes, and the displayed position of the
cursor changes accordingly.
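One way to think about the perspective cursor is as a ray from the user's eye: the cursor is drawn where that ray intersects a display's plane. A minimal sketch under that assumption (not the actual Perspective Cursor implementation; it assumes planar displays and a tracked eye position):

```python
import numpy as np

def cursor_on_display(eye, direction, plane_point, plane_normal):
    """Intersect the eye ray with one display's plane.

    Returns the 3D point where the cursor should be drawn on that
    display, or None if the ray is parallel to the plane or points
    away from it.
    """
    eye = np.asarray(eye, float)
    d = np.asarray(direction, float)
    p0 = np.asarray(plane_point, float)
    n = np.asarray(plane_normal, float)
    denom = d.dot(n)
    if abs(denom) < 1e-9:
        return None
    t = (p0 - eye).dot(n) / denom
    return eye + t * d if t > 0 else None

# A wall display occupying the plane x = 2: the eye ray from (0, 1, 1)
# lands on the wall at (2, 1, 0.5).
hit = cursor_on_display((0, 1, 1), (1, 0, -0.25), (2, 0, 0), (1, 0, 0))
print(hit)
```

Testing the ray against each display's plane in turn, and drawing the cursor on whichever surface it actually reaches, gives the seamless cross-display behavior described above.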
[music played].
>> Yoshifumi Kitamura: Okay. Let's move to the next topic, physical types of
interfaces. This is also a [inaudible] project. Maybe you know the name
ActiveCube. Do you know it? You don't. Anyway, ActiveCube has three main
features. The first is real-time 3D modeling: when a user connects or
disconnects a cube, the computer system automatically recognizes the
connection or disconnection. As a result, the computer always knows the shape
the user is constructing.
For this purpose, each block has a CPU inside. Using this, the user can
interact with the cubes directly: for example, a virtual object in the
computer is controlled by the real physical object in front of the user, and
the user no longer has to manipulate a mouse or keyboard. Input and output
devices are integrated with the ActiveCube software, so the user can directly
interact with the cubes.
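A minimal sketch of the host-side bookkeeping such a system might do (hypothetical; the real ActiveCube protocol with per-block CPUs is more involved):

```python
# Face offsets in the cube grid: +x, -x, +y, -y, +z, -z
FACE = {"+x": (1, 0, 0), "-x": (-1, 0, 0), "+y": (0, 1, 0),
        "-y": (0, -1, 0), "+z": (0, 0, 1), "-z": (0, 0, -1)}

class CubeModel:
    """Host-side model of an ActiveCube-style construction: each
    connect/disconnect report updates the map of occupied grid cells."""

    def __init__(self, base_id):
        self.pos = {base_id: (0, 0, 0)}  # cube id -> grid position

    def connect(self, existing_id, face, new_id):
        # A new cube reported on one face of an existing cube.
        x, y, z = self.pos[existing_id]
        dx, dy, dz = FACE[face]
        self.pos[new_id] = (x + dx, y + dy, z + dz)

    def disconnect(self, cube_id):
        del self.pos[cube_id]

    def shape(self):
        # The shape the user is constructing right now.
        return set(self.pos.values())

m = CubeModel(base_id=1)
m.connect(1, "+x", 2)
m.connect(2, "+z", 3)
print(m.shape())  # {(0, 0, 0), (1, 0, 0), (1, 0, 1)}
```

The point is that every connect/disconnect event keeps the host's shape model consistent with the physical construction, which is what makes the later applications possible.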
[music played].
>> Yoshifumi Kitamura: So, to search for and retrieve a kind of object,
instead of typing on a keyboard, the user can simply show an example by
connecting cubes.
And using the constructed object, the user can play.
[music played].
>> Yoshifumi Kitamura: We can control the flying airplane to reach and cross
the river.
[music played].
>> Yoshifumi Kitamura: As you can see, ActiveCube always maintains
consistency in terms of shape and functionality. Using this feature, we
developed several kinds of applications. The one shown in the last moment of
the video is the Cognitive Cube, which measures human spatial ability. A
target shape is displayed in front of the subject, and the subject is asked
to construct the same object using ActiveCube. ActiveCube can record the
timing of which block is connected at which position.
So the system automatically records the process of the user's construction.
By analyzing this, for example, this horizontal axis indicates the time
transition, and the vertical axis indicates the similarity to the target
shape; 100 percent means equal to the target shape. We measured younger
people, elderly people, and people with mild Alzheimer's disease. Younger
people are very fast, elderly people are next, and people with mild
Alzheimer's disease have some difficulties: for example, sometimes they
progress toward the target shape and sometimes not. So we found it is
possible to measure and classify human spatial abilities.
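One plausible way to score a logged construction timeline against the target shape (illustrative only; the actual Cognitive Cube metric may differ):

```python
def similarity(current_cells, target_cells):
    """Percent similarity of the constructed shape to the target:
    target cells correctly occupied, penalised for blocks placed
    outside the target. One plausible scoring, not the published one."""
    current, target = set(current_cells), set(target_cells)
    correct = len(current & target)
    wrong = len(current - target)
    return 100.0 * max(correct - wrong, 0) / len(target)

target = {(0, 0, 0), (1, 0, 0), (1, 0, 1)}
# Logged construction events: (seconds elapsed, cells occupied so far)
log = [(0, {(0, 0, 0)}),
       (4, {(0, 0, 0), (1, 0, 0)}),
       (9, {(0, 0, 0), (1, 0, 0), (1, 0, 1)})]
curve = [(t, similarity(cells, target)) for t, cells in log]
print(curve)  # similarity rises to 100.0 as construction completes
```

Plotting this curve (time on the horizontal axis, similarity on the vertical axis) gives exactly the kind of graph described above, and its slope and detours distinguish the subject groups.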
Based on this, we recently started a study with the University of Haifa,
using it as a tool for children with Developmental Coordination Disorder. I
think one of the movies works.
The blinking box shows the next block to be connected; if the user makes a
mistake, the [inaudible]. And using the constructed object, the children can
play.
[music played].
>> Yoshifumi Kitamura: Israel is a very different country, but using this
technique, the researchers there are trying to develop a tool to help the
children.
Anyway, we don't have enough time to explain more details about our research
projects on physical tangible interfaces. You may know the Funbrella or the
Fusa2 Touch Display, which we demonstrated at SIGGRAPH E-Tech, but we don't
have time to explain those works. If you are interested, please Google them
or search the website. Anyway.
I have to go to the next topic: map navigation techniques. We have several
map navigation techniques. The motivation comes from problems in current
navigation systems: they impose a physical load, because we have to switch
between scrolling, using a scroll bar, and zooming, pressing a button
[inaudible]. So you have to move [inaudible]; it's very tough.
And during that kind of work, users lose the context easily. So we developed
what we call the anchored navigation technique, a pan, zoom, and tilt
coupling method. It is useful for decreasing the physical load on the user.
The pan, zoom, and tilt coupling is based on an anchor point given by the
user. The anchor point of user interest is always kept inside the viewport,
so the technique is useful for maintaining the spatial context.
This is a rough sketch to show how the system works, but I think it's a
better idea to show the movie. This is anchored zoom; the red one is the
anchor. The user can scroll like this, but the anchor point always remains in
the viewport. With a double [inaudible] we can easily set a new anchor point.
With only one action, the user can achieve both scrolling and zooming.
Another example is anchored zoom and tilt. By tilting the camera like this,
the user can observe a larger area, and nearby areas can be observed very
closely. The area around the current position appears a little smaller, but
that is acceptable because it is distant. So this is the idea of anchored
navigation for maps.
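The pan-zoom coupling around an anchor can be sketched as a standard "zoom about a point" computation (an editorial sketch, assuming a simple `screen = (world - offset) * scale` mapping; not the published technique's code):

```python
def anchored_zoom(offset, scale, anchor_world, new_scale):
    """Zoom while keeping the anchor fixed at its current screen position.

    Screen mapping assumed: screen = (world - offset) * scale.
    Returns the new offset to use with new_scale.
    """
    ax, ay = anchor_world
    ox, oy = offset
    # Screen position of the anchor before zooming:
    sx, sy = (ax - ox) * scale, (ay - oy) * scale
    # Solve (anchor - offset') * new_scale == (sx, sy) for offset':
    return (ax - sx / new_scale, ay - sy / new_scale)

offset, scale = (10.0, 20.0), 1.0
anchor = (15.0, 25.0)
new_offset = anchored_zoom(offset, scale, anchor, new_scale=2.0)

before = ((anchor[0] - offset[0]) * scale, (anchor[1] - offset[1]) * scale)
after = ((anchor[0] - new_offset[0]) * 2.0, (anchor[1] - new_offset[1]) * 2.0)
print(before, after)  # the anchor's screen position is unchanged
```

Driving both the scroll offset and the scale from one input gesture, with the anchor pinned like this, is what lets a single action combine scrolling and zooming.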
I have to go to the next topic: interactive content with recursive
multi-layered experience. I wanted to develop an interactive system in which
users generate their own alter egos in a virtual environment. This is the
real environment, and in the virtual environment their alter egos interact
with the alter egos generated by other people in that world.
Of course, in the real environment the users can also interact with each
other. Based on this idea, we developed the Video Agent: interactive video
content generated from live video taken from the real world. Real-world
creatures, such as humans or animals, continue to behave autonomously based
on events encountered in cyberspace. This is the overview of the technique.
Images of several actions are recorded by video cameras in advance and stored
in a database. In cyberspace, each video agent behaves autonomously:
according to events, its emotional and physiological parameters change, and
based on those parameters, images are retrieved from the database and
displayed continuously. This is the image-recording process: several typical
actions are recorded in advance and stored in the database like this.
Each video agent has emotional and physiological parameters whose values
change according to events. Based on this video agent technology, we
developed the Osaka Developing Story. Like this: the Osaka Developing Story.
Osaka is my previous city. Why Osaka? As you may know, Osaka is historically
a mecca for entertainment: it has Manzai, Rakugo, and other traditions, and
the first Universal Studios in Japan, and in Asia, was established in Osaka.
Moreover, the people in Osaka are more humorous [laughter], maybe more than
in Seattle or Belmont.
Okay. Based on the video agent technique, we developed the Osaka Developing
Story to demonstrate at a big event in Osaka. We introduced an online capture
and restoration technique: in the video agent we demonstrated at SIGGRAPH
E-Tech, we used only offline capture.
In the Osaka Developing Story we introduced online capture, so a real person
can easily participate in the virtual space and appear before the public. We
had several ideas: for example, the world design is downtown Osaka almost a
hundred years ago, and the background is an animated movie like this. I think
it's a good idea to show the next movie.
[music played].
>> Yoshifumi Kitamura: This part is almost similar to the previous video
agent.
[music played].
>> Yoshifumi Kitamura: This was a collaboration with a TV broadcasting
company in Osaka. The gentleman and the lady beside me are real TV casters at
the broadcasting company, so they appear in TV programs every day.
To change the emotional parameters, coins give a positive effect and fire
gives a negative effect, so by giving coins or fire we can control the
agents' actions indirectly. We can also directly give them trajectories like
this.
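A toy sketch of the coin/fire parameter mechanism (the parameter scale, effect sizes, and clip labels are all assumptions for illustration, not the system's actual values):

```python
# Event effects on the agent's emotional parameter (assumed 0..1 scale)
EVENT_EFFECT = {"coin": +0.25, "fire": -0.3}

# Pre-recorded action clips labelled with the emotion range they fit
# (clip names and ranges are illustrative)
CLIPS = [("action_for_joy", 0.7, 1.0),
         ("walk",           0.3, 0.7),
         ("angry",          0.0, 0.3)]

class VideoAgent:
    def __init__(self):
        self.emotion = 0.5  # start neutral

    def on_event(self, event):
        # Clamp the parameter to the 0..1 range after each event.
        self.emotion = min(1.0, max(0.0, self.emotion + EVENT_EFFECT[event]))

    def next_clip(self):
        # Retrieve the stored action whose range matches the current state.
        for name, lo, hi in CLIPS:
            if lo <= self.emotion <= hi:
                return name

agent = VideoAgent()
agent.on_event("coin")    # positive event: emotion rises to 0.75
print(agent.next_clip())  # -> action_for_joy
agent.on_event("fire")    # negative events pull the emotion down
agent.on_event("fire")
print(agent.next_clip())  # -> angry
```

Indirect control works by injecting events that move the parameters; the retrieved clip then changes as a side effect, which matches the coin-and-fire behavior shown in the demo.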
[music played].
>> Yoshifumi Kitamura: This part is me participating in the [inaudible]. So
now I'm interacting with myself in the virtual environment; this is what I
meant by recursive interaction. He's getting angrier. So the real
participants [inaudible].
[music played].
>> Yoshifumi Kitamura: Yes. We demonstrated this at an event in Osaka in
2009, and 180 people participated in the demo and became agents like this.
This was a collaboration with the TV broadcasting company, so this kind of
setup was very easy for us; by myself it would have been very difficult, but
with them it was easy.
Anyway, in cooperation with the TV technicians, we recorded several actions
for the on-site participants: walk, jump, turn, and an action for joy. This
is an example of the action for joy. Participants can go inside the
cyberspace and walk, jump, and so on. For the action for joy, for example,
could you make an action for joy right now? Like that.
In Osaka, it looks like this. Very similar [laughter]. But remember, this is
the mecca of entertainment; humor is very witty in Osaka. For example, like
this. Anyway, the participants looked very happy, and I was also very happy
because I understood that they were happy. And I attached message boards to
their alter egos.
For example, one [inaudible] in Japanese was, and I'm not sure this is the
correct translation, "Wow, he looks much more lively than I do." People in
everyday life seem to have difficulties with business, at home, or in
relationships with others, things like that.
But in this space they are free from those kinds of problems, so from their
viewpoint their alter egos are very active and cute. I'm very satisfied with
the results of this research and these demonstrations. Based on this
experience, I'm going to develop a new type of interactive content using this
technique. The details are not decided yet, but I'm going to continue this
kind of research. Anyway.
The final topic I'm going to introduce briefly is controlling the atmosphere
of a conversation space.
As I said, I want to make all the people in the environment happy. For
example, when some people are talking with each other, some of them are very
tired, some of them speak very actively with each other, some of them are in
low spirits, and others are very warm and engaged.
In that situation, I think everyone should be happy, so I started this
project. First, the environment must be sensed by adequate sensors, and we
have to model and estimate the activity of the atmosphere: we have to
establish the relationship between human nonverbal, and sometimes verbal,
information extracted from the sensors and the activity of the atmosphere.
Then, using the measured atmosphere, we can control the content shown and
displayed in the environment. Using this loop, I want to control the
environment.
It is [inaudible] too. So I started with a very preliminary experiment using
a triadic conversation environment. We measured the relationship between the
atmosphere and human nonverbal information extracted from sensors:
accelerometers and microphones.
This is the detail of the sensors and apparatus. Anyway, we administered
several measurements like this [inaudible], carefully examined each of them,
and found that the activity of the atmosphere can be represented using
speaking time, hand acceleration, and head-rotation frequency, with
relatively high [inaudible].
Anyway, it seems possible to measure the activity of the atmosphere in a
room. So how can we use this result? Possible applications include
brainstorming sessions, drinking parties, or buffet-style parties, for
example.
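The reported finding can be sketched as a weighted combination of the three features (the weights and normalization here are purely illustrative assumptions, not the measured model from the experiment):

```python
def atmosphere_activity(samples, w_speech=0.5, w_hand=0.3, w_head=0.2):
    """Estimate conversation 'activity' from per-participant features.

    samples: one dict per participant, values normalised to 0..1:
      speaking_ratio - fraction of the time window the person spoke
      hand_accel     - mean hand-acceleration magnitude
      head_rotation  - head-rotation frequency
    Returns a single 0..1 activity score for the group.
    """
    per_person = [w_speech * s["speaking_ratio"]
                  + w_hand * s["hand_accel"]
                  + w_head * s["head_rotation"]
                  for s in samples]
    return sum(per_person) / len(per_person)

lively = [{"speaking_ratio": 0.8, "hand_accel": 0.7, "head_rotation": 0.6}] * 3
quiet  = [{"speaking_ratio": 0.1, "hand_accel": 0.1, "head_rotation": 0.2}] * 3
print(atmosphere_activity(lively), atmosphere_activity(quiet))
```

In the closed loop described above, this score would be fed back to the participants and used to select the content displayed in the room.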
This kind of system should be useful there. The students were strongly
motivated because they thought this system might be useful at a drinking
party where girls and boys come together to drink, join the party, and become
friends. In Japanese we call it a [inaudible]: a group of younger students
from women's schools and men's schools come together, meet, chat, and become
friends.
So they built a demo system like this. This is the very first preliminary
version, but they really recreated this kind of situation, using cups with
accelerometers and microphones installed, three-dimensional trackers like
that, and demonstration images displayed as before. Whether it works in
practice, I'm not sure yet.
Anyway, the measured activity is displayed as feedback to the participants.
[video played].
>> Yoshifumi Kitamura: The interests of each participant are asked in
advance, and based on that information, we display topics.
[video played].
>> Yoshifumi Kitamura: Anyway, this project has just started; we have to
calculate the activity more accurately before it can be used in applications.
So this is just the beginning.
So, I introduced several research activities that I started in Osaka and
continue at Tohoku University. Anyway, my purpose is to make everyone happy
through interactive content. Thank you very much for your kind attention, and
for the support from friends all over the world. Thank you very much for
listening.
[applause].
>> Hrvoje Benko: We have time for a couple of questions [inaudible], and then
we'll go to lunch.
>>: So [inaudible] I found it interesting that, you know, you're still
modulating spatially, right, to get the left and right image -- I'm sorry, in
time -- and then you spatially divide the images for the three people. So
you've done a kind of mix: if you had a projection system that was fast
enough, you could just have separate images for each person with their
shuttered glasses. And I guess you could imagine, if you had the right
particulars or whatever, doing it all spatially as well, based on where
people are looking from.
So it's an interesting mix that you've gotten there between, you know, the
spatial and time domains for multiplexing it.
>> Yoshifumi Kitamura: To achieve the --
>>: Yeah.
>> Yoshifumi Kitamura: Stereoscopic?
>>: Yeah.
>> Yoshifumi Kitamura: Yes, this is possible. There are several types of
stereoscopic display that we used in the prototypes of the IllusionHole: some
of them use active shuttering, and some use passive stereo with polarization.
And of course we can combine active and polarized approaches: for example,
with left and right images in both active and passive form, we can display
four different kinds of images.
But we always have to struggle with the time domain, the limitation of time
multiplexing, so now I am trying a spatial solution. The approach is a little
different, but of course we can combine them.
>>: Have you ever tried actually building the large IllusionHoles that you
illustrated in the videos?
>> Yoshifumi Kitamura: Yeah, just -- yeah. We didn't have a sponsor. I wanted
to discuss it with Disney World or Universal Studios, but it did not
[inaudible].
>>: Did you try, like, wall-mounted versions rather than strictly tabletop displays?
>> Yoshifumi Kitamura: Say again?
>>: Did you try wall-mounted versions of the Illusion --
>> Yoshifumi Kitamura: Wall-mounted. Wall-mounted. I showed it in the movie.
It's just on the -- I calculated the parameters, but it's not actually built
yet.
>>: You said it was one of your goals to make it fun for people to interact
with your systems. Do you have any metric for how you measure fun?
[laughter]
>> Yoshifumi Kitamura: That's a good question. Yes: in the final project I
showed today, within that framework I want to examine happiness or fun,
something like that. That project is a joint project with psychologists, so I
hope to get some of the kind of information you mentioned in --
>>: I'm just curious whether you can actually use the data or some results
from your second-to-last project, where you actually captured when people
appeared to have fun. I know those are exaggerated, but it seems kind of
interesting to actually use that to detect fun if your --
>> Yoshifumi Kitamura: Yes, I think so. Without using explicit sensors, I
want to change the internal parameters of the virtual existence of myself,
for example, or yourself, so that it behaves automatically. That's one of the
[inaudible] of that system.
>> Hrvoje Benko: Any more questions? All right. Well, thanks again to our --
[applause].
>> Yoshifumi Kitamura: Thank you very much.