>> Hello. Hello!
[CHEERS AND APPLAUSE]
Welcome. Welcome to the Design Expo 2015. In case you're wondering about the
slide show that was going as you guys were arriving, this was an inclusive
visual identity that we came up with. It was a joint effort here at Microsoft
and outside. We created the main symbol that you can see in color, and we asked
other people to reinterpret that symbol, which is a capital letter D in a way
that represents inclusive design. So that's what you were looking at. And
this event is public, open to the public. So feel free to tweet. If you tweet,
use the #Dexpo15, which is our hashtag for this event. I was the first one to
use it, so please fill that up. So, starting with just a bit of background about this. There's been so much feedback; I'm kind of getting out of that mode. Background about Design Expo: It's about identifying some key
challenges that the world is facing, and giving them out to top design schools
around the world and see how they interpret the problem. And another goal is to
enhance long-term relationships across the schools and between Microsoft and the
schools. We had a couple of success stories from the past, when two schools met here at Design Expo -- a Brazilian school and a school from the Netherlands -- and they started an exchange program. We also had one of our Design Expo alums go on to found a company, which happened to be Foursquare. So he passed through this room
just like you guys. So we like to foster this community with Microsoft and also
across the schools. And this is the 12th year we're doing Design Expo. So been
doing this for a while. As for the process: we start at the end of the year by selecting the topic, and then we select a set of schools -- usually eight schools worldwide, and we rotate through the schools -- and then we assign Microsoft liaisons, which are people who travel to the schools to meet
with them and give them feedback throughout the process. And each school starts
a semester-long course on the project, and then the liaison goes back to the
school to select the final project that is here. So you guys, if you're here,
it's because you're a winner. So be happy for yourselves -- you made it -- and we're super happy to see what you have.
[applause]
And today's the day they present, and they present for eight minutes and then
get feedback from the amazing critics here. The design challenge here is inclusive design. In 2014 the World Health Organization radically revised their definition of disability, defining it as a context-dependent condition rather than an attribute of the person. This way of looking at disability makes us realize that we all go through disabilities throughout our days and throughout our lives -- from being in the sun and not being able to see my screen, to being somewhere really noisy and not being able to hear, to driving, when I am cognitively disabled and visually impaired while doing that task. So we asked the students to look at a spectrum from
permanent disability to temporary to situational. An example of permanent would be somebody with an amputated arm; a temporary case would be a broken arm, for instance; and a situational case would be a new parent who has to do everything one-handed because they're holding a baby in the other arm, or someone carrying groceries out of the store. Just to say that it happens throughout our days and throughout our lives. And we think that by designing for the one percent, we end up benefitting everybody. We just think it's a good design process. These are our participating schools: We have
Pontifical Catholic University from Rio, Brazil; University of Applied Sciences
Potsdam, Germany; Carnegie Mellon University from the U.S.; Korea Advanced
Institute of Science and Technology, KAIST, from Korea; it's their first year doing Design Expo. We have Delft University of Technology from the Netherlands. And the NYU Shanghai program. Art Center College of Design.
And our neighbors, University of Washington.
[Applause]
We have amazing critics sitting right here in front of you, and I will go ahead
and introduce them. I hope you like the pictures I picked for you guys. It was
on the Web, so I feel -- so Rob Girling is co-founder of a local design and
innovation firm Artefact. His career started at Apple in 1991 and 1992, when he
won the Apple Student Interface Design competition for concepts around mobile
and personal computing. Rob then joined Microsoft for ten years. He started at
the Office team. He then worked in games and eventually became the design
manager responsible for the user experience and branding for Windows XP. He
worked briefly on Vista before joining IDEO as a senior interaction designer. It was at IDEO that he decided to found Artefact, which has been around for almost ten years now. And prior to founding
Artefact Rob worked at Sony Computer Entertainment of America. Thank you, Rob.
We're very happy to have you.
[Applause]
>> Our next critic is Angelo Sotira, who has met Grumpy Cat in person. So we can get his autograph later. Angelo is an American entrepreneur and co-founder of the online community DeviantArt, which most of you probably know about. DeviantArt is a social network for artists and art enthusiasts and a platform for emerging and established artists who publish, promote and share their work with an enthusiastic artistic community. DeviantArt has over 32 million
registered users and it attracts over 65 million unique visitors per month. The
community members upload over 160,000 pieces of artwork every day.
Angelo co-founded DeviantArt at 19 years old. It was not his first company. He founded another company four years earlier, when he was still a baby: a music file-sharing site called Dimension Music, which he sold in 1999 before starting DeviantArt. Thank you, Angelo.
[Applause]
I forgot to say he was born in Greece, and he's an Aquarian like myself. Next
we have Wendy March, senior designer for Intel's New Devices Group. You're not
for the New Devices Group? Is that not you? Okay. That's what was on your
website. What's your role? [indiscernible]
Experience manager for Intel's New Devices Group. Her current work focuses on speech interfaces and new forms of interaction with physical devices. She has
also led projects that explore new forms of interaction with mobile devices that
are enabled by sensors, which happen to be the topic of last year's Design Expo.
And advanced imaging technologies, especially new types of gaming. She also led
projects that included reimagining the mass robots, the future of smart streets, and how the design of digital money can reflect our social values. Wendy has an MA in computer-related design from the Royal College of Art in London, a school that has participated in Design Expo, and a Master of Science in information systems from the University of Brighton. Prior to working at Intel, Wendy worked at IDEO and interned at Apple early in her career. Thank you, Wendy,
for being here.
[Applause]
I want to give special thanks to our veteran, Mike Kasprow. Mike is just over
here. He's our honorary critic. He's been helping students present better and
giving them creative feedback for the last nine years of Design Expo. And Mike
is now a senior vice president and executive creative director at Proximity in
Toronto, Canada. I learned that Toronto, Canada was voted the best city for
five years in a row. And Mike has a spare bedroom. [Laughter]
[Applause]
And thank you to the liaisons and thank you to [indiscernible] for coming up
with the visual identity and the whole visual identity team. Thank you.
[Applause]
We're ready to start with our first team, Rio de Janeiro, Pontifical Catholic
University, with MESH: reliving memories, emotions and impressions.
[Applause]
>> Hello. First of all, it's a huge pleasure to be here today. We are from
Pontifical Catholic University of Rio de Janeiro in Brazil. And my name is
Patricia and these are my team colleagues -- Edy, Jessica and Raphael. We are
here today to present our project which is focused on the visually impaired. So
there are about 285 million people visually impaired worldwide, and according to
the World Health Organization, there are -- sorry -- there are four levels of visual acuity that go from normal vision to blindness. We decided to focus first on the last three levels for our interviews. So, first we asked: who are they, what do they like to do, which challenges do they face? And our first conclusions were that there were some subjects that came up -- they talked about their feelings, about where they like to travel, how they interact with people -- and there was this particular subject that was memory.
We talked with Aliny. Here's Aliny. She's a girl with severe visual
impairments. And she has the habit of keeping a journal. Actually, she blogs.
So she likes to share her experiences and feelings online and talk to -- share
with people. And then we started talking about memory with the other interviewees as well, and they talked about storytelling and relating memories to events and places. So we realized there is a strong connection between people and their recall of past experiences.
>> Among these statements we found three of them, three quotes that were
interesting for us to work with: "Most of the tools to keep memories rely on vision." "My memory is like a photo album to me." "Data power means nothing to me. The most important is what I've done there." So those three phrases show us that memories are related to places and events, and that fully sighted people rely on visual aspects such as photography and video. And how about the visually impaired? They don't have this. They don't have the visual aspects. But they still have the
connections between places, events and their moments. That's the point.
There's so much more to memory than only visual aspects.
>> So what about you guys, what do you do to remember, what is memory to you?
We have a lot of ways of keeping memories on a daily basis. For instance, when you check in at your favorite restaurant, or when you go out somewhere special and you check in on social media -- the bottom line is, remembering is our way of treasuring our past. That's why we created MESH, for memory sharing. MESH is a
system that allows users to capture, relive and share memories through a
wearable device using an online platform. With MESH you can capture and relive sensations, and you can share and discover memories to and from others.
>> But how do you capture a memory? Well, every interaction with the MESH service is done through this little device here. To capture a memory you just need to tap on the bottom of the device, and when you do that, sound will be recorded by the microphone here, the temperature of your skin will be recorded by a sensor behind your ear, and a sensor here at the user's temple will record the heartbeat. The representation of this memory will also be done by this same device, in those three parts. And for reliving a memory you just need to ask for it using those three characteristics -- the name of the user that recorded the memory, the place where it was recorded, and the date it was recorded. For instance -- you just have to tap here and say, relive my memory on April 15th, 2012, in Rio de
Janeiro. And if you're not looking for any specific memory and you want to
search for something interesting, you just have to tap it here again and say
relive memories randomly, and then walk around looking for memories that will
astonish you. But every memory by default is only personal and private. And
you need to share it or make it public for other people. And you can do that in two ways. You can share directly to someone's MESH, or you can use a social network such as Facebook or Twitter. When you record a memory, or when you relive one, you just tap and say, share on my Facebook, and then tag your friends -- for example, tag Maria Bonelli.
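A minimal sketch, in Python, of the capture-and-relive flow just described, assuming a simple in-memory store; the field and function names are illustrative and are not taken from the team's actual implementation.

    import random
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Memory:
        user: str                 # who recorded it
        place: str                # where it was recorded
        recorded_on: date         # when it was recorded
        audio: bytes = b""        # microphone recording
        skin_temp: list = field(default_factory=list)   # sensor behind the ear
        heartbeat: list = field(default_factory=list)   # sensor at the temple
        shared: bool = False      # memories are personal and private by default

    class MemoryStore:
        def __init__(self):
            self.memories = []

        def capture(self, memory):
            self.memories.append(memory)

        def relive(self, user, place, recorded_on):
            # "Relive my memory on April 15th, 2012, in Rio de Janeiro"
            return [m for m in self.memories
                    if (m.user, m.place, m.recorded_on) == (user, place, recorded_on)]

        def relive_random(self):
            # "Relive memories randomly" -- only shared memories are discoverable
            public = [m for m in self.memories if m.shared]
            return random.choice(public) if public else None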
>> So you might be wondering who MESH was designed for. By all means, for everyone, with different levels of visual acuity.
[Music]
>> My love, I miss you a lot. I wish you could be here with me. [Foreign language]
>> Wow. You're the best thing I have in my life.
[Music]
[Heartbeat]
>> Thank you.
[Applause]
THE MODERATOR: Thank you. Now off to the judges' comments. Want to start, Rob?
>> Sure. What a commendable job, guys. As you saw, I'm cheating a little,
because I saw the presentation on Monday and we had some concerns with the way
it was presented. And you guys have totally addressed all of that and then
some, and totally nailed the timing and you've done a great job. So
congratulations on that component of this, which is an important component,
right, the storytelling of your method and process. I'm really struck by the
power of the video to convey the idea. I think the simple voice user
interaction for recall based on place and voice key phrases is probably a pretty
good start at the UI. There's going to be some challenges there on that April
12th day in 2012, when there was quite a lot of activity, and trying to sort of discriminate between different types of activities could be quite challenging. But I think it's a commendably simple user experience, at least at this level. And I think the form factor as well deserves some credit -- it's very discreet.
It actually looks quite fashionable. There's the nice wearable kind of
aesthetic to it. And so I just wanted to congratulate you guys on a job well
done in communicating the power of your idea in a quite challenging space. So
well done.
>> I really agree with that feedback. I really like the result. Like I love
the way that you ultimately post this to Facebook and you have a visualization
of the experience and that little kind of heartbeat piece. I think the more you
guys expand on that piece and that resulting product, the more you'll drive
people to want to engage in that behavior. So that's really, really emotional
and really, really, like, human and really great. So I think that will
transcend nicely in social spaces. I'm not so sure about kind of the hardware
components or the wearable components. I think you've done a phenomenal job
with it so far, and I look forward to seeing you guys progress on the hardware
piece. So, yeah, terrific.
>> Thank you.
>> Well done. I have worked on something that goes around your ear. It's
really hard getting things to fit people's ears. So I think that was nice.
Very kind of simple and you thought about that. I thought it was a very
interesting space to think about, as you say most memory-related things tend to
be visual. Although people have for centuries written novels and journals. But
it's an incredibly important thing for people. So I think that was a very
interesting space to focus on. I would have liked to have seen a little more
prototyping happening -- I think to understand both what you're capturing and then the prototyping of kind of playing it back, and having some sort of way of showing that, you know, this is how people felt about it and this is how it
worked and this is what a memory was made of. I think that would have been
really an important part. So I think for another project that kind of bringing
that out would be really important. Very nice. Thank you. They're all
terrified of being over time.
[Applause]
>> Okay, we have three minutes for questions, but you guys should switch. I
mean, the next team should start coming and we'll open for questions. Phil?
I'll repeat the question.
>> [indiscernible]
>> So the question is -- I need to repeat the question for folks on the video -- that you guys talked about recording sound, but ambient sound is another great
thing to be recorded. Have you included that?
>> Yes, we have. We decided to cut this from the presentation in terms of time, but we had two devices, and each one of them would have one microphone. So we used a kind of binaural sound capture for environmental sound recording. It was part of the project, but we decided to cut it because it was
not the focus of the user experience.
>> Any more questions from the audience? Yes, in the back, two in the back.
What's the length of the recording? What's the length, the duration?
>> Hi. We didn't really decide on a length, but it should have one -- yeah, because you could record forever; you can forget that the device is on and it's going to be recording everything. So, yeah, we would have to decide on a length, and that's a good idea. Thank you.
[Applause]
>> Thank you. Now we have Team Tracktile, explore a city with your hands and
ears, from University of Applied Sciences Potsdam in Germany. And they have
physical prototypes to share with you guys.
>> Hello. Hello, hello. Can you hear me? Yeah. Nice. Okay. Hello
everyone. We are Patrick, Cecile and Johannes from the University of Applied
Sciences in Potsdam, Germany, and we are proud to present our project,
Tracktile. So we started with a simple question: How do blind and visually
impaired people explore a city? So we humans are very visual creatures, and
when we think about how a blind or visually impaired person would explore a city
we think there's a lot they can't experience and a lot they're missing because
they just can't use their eyes. But we didn't actually want to define visual
impairment as a deficit, as something that removes those affected from an
essential part of human experience. Instead what we wanted to do was try to see
if there was something to be gained for everyone from the way they experienced
the world. So how can we explore a city without our eyes? First, there are
your immediate physical surroundings, just the space around you. So what's the
floor like? Are you walking on grass? Is there pavement, maybe? Is it
raining? Are you perhaps cold or uncomfortable? Yeah. Then you start to
listen and maybe you can hear strangers talking. Maybe there's a park across
the street, and you can hear the birds in the trees. Or maybe there's a street
just up a block and you can hear the cars rushing past. All right. Then
there's your sense of orientation, the knowledge you have of your surroundings.
So, do you know where you're going? Do you know the name of the street? Do you
know that there's a great coffee shop just to your right? So all these
different states of feeling, of knowing, of hearing, of remembering, these make
up our experience of the cityscape. But there are other people in the city as
well and all of them, they have their own experiences, their own stories and
their own memories. And we wanted to make it possible to share those with
friends, with just anyone really. We wanted to give you the means to explore or
discover an entire city in a new way to connect with it, to learn about it,
perhaps even to fall in love with it. Tracktile is an electronically and
digitally augmented tactile map that makes it possible to explore a city with
your hands and with your ears. And Patrick will now talk about the magic behind
Tracktile.
>> Tracktile is not only an object but there's a whole service around it.
The first thing you do is go to the website and select your destination and the sites you want to see -- for example, Berlin, and Checkpoint Charlie.
Then you order your customized map. When it arrives you can really get in touch
with the cityscape by feeling out streets, buildings, parks and water through the different surface structures we use. But what makes Tracktile really different from other tactile maps? With Tracktile you can use your ears to
discover even more. There are three different audio layers on the map which can
be activated separately. The first one, which is on every map, includes the
basic information about a city. Like street names, important buildings, and
points of interest. So the second layer is the community layer where you can
access different content, shared and created by other Tracktile users. There
are helpful pieces of information and reviews, but also sound recordings of the
noises of a city to really feel the atmosphere. But you can also create this
content yourself during your trip, and this content is included in the third
layer, your personal one. You can sync and store these audio recordings onto your map, which makes it easier to relive your memories. And you can also share them with the community, if you like. So let's have a closer look at Tracktile
in real life. Now you will hear the basic layer, which helps you to prepare
your trip and understand a city layout.
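A minimal sketch, in Python, of the three audio layers just described, assuming each touchable region of the map is linked to layered audio content; the names are illustrative, not Tracktile's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class MapRegion:
        name: str                                       # e.g. "Checkpoint Charlie"
        basic: list = field(default_factory=list)       # street names, points of interest
        community: list = field(default_factory=list)   # content shared by other users
        personal: list = field(default_factory=list)    # your own synced recordings

        def play(self, layer):
            # Only the layer that has been activated plays when the region is touched.
            return {"basic": self.basic,
                    "community": self.community,
                    "personal": self.personal}[layer]

    checkpoint = MapRegion(
        "Checkpoint Charlie",
        basic=["Checkpoint Charlie, Friedrichstrasse"],
        community=["It was quite an experience standing at the Berlin Wall."])
    checkpoint.personal.append("bells_recording.wav")   # synced after the trip
    print(checkpoint.play("community"))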
>> Unter den Linden. Pariser Platz, Brandenburger Tor.
>> The community layer gives you the possibility to learn more from the blind
and visually impaired community.
>> [German]
Great beer garden and a beautiful park near the river. Checkpoint Charlie: It was quite an experience standing on the border of the Berlin Wall.
>> You can use our fully accessible app for sound recordings during your trip.
And you would be surprised how visually impaired people can use smartphones.
[Bells chiming]
[Birds chirping]
[Crowd noise]
[Bird chirping]
>> After your trip, you can sync and store your recordings to your map and use
it to relive and share your memories. Now to Cecile.
[Crowd noise]
[Bells chiming]
[Birds chirping]
>> Now, you've just seen our second prototype in action, but how did we get
there? To evaluate if our concept could be realized, we started prototyping in
an early stage. This is a work-in-progress picture of our first prototype. It
is made of paper and as you can see it's already electronically augmented and
functional. And this is the finished prototype. It shows a map of downtown
Berlin. We did a lot of testing with different papers and structures in order
to explore how to represent streets and how to distinguish buildings from them.
We experimented with braille but we decided against it in the end because we
don't want to exclude those who can't read it. We also have a prototype with us
and we'd love it if you would try them out later in our booth in the open
showcase. So we engaged with the community and got in touch with a lot of
online users on Facebook groups and they gave us insights about orientation and
navigation in the city and traveling with visual impairments. During our first
research phase we met Michael, who is very active in the blind community and who
likes to travel to new places. He tested our first prototype and later also the
others and gave us valuable feedback about distances and sizes on the map layout
and ways to improve them. He also said that this map would help him to prepare
his next journey, help him understand the street layout, and give him more autonomy in addition to his cane. Also, we met with the mobility group of
the Blind and Visually Impaired Association in Berlin, and they gave us
qualified feedback about our form and style and really important facts about
tactile patterns and marks. If you want to learn more about our research, we
have really detailed documentation on our website, which is Tracktile.co. So
our research led us to our development of the second prototype. And here we
focused more on formal and aesthetic properties like surface structure and the
overall look and feel. We see [indiscernible] and use different patterns for
the different kinds of parks, green spaces and water. And we varied the sizes
and routes of streets. During our research, we felt that there was a real
demand for a portable version. And so we developed one. We developed a third
prototype which is foldable and fits into your pocket. So you can explore a
city while you are on the go. Well you can try it out later in our booth.
>> It was just working.
>> It's too nervous, apparently.
>> We just tried it back there and it worked.
>> The best thing is to try it out yourself.
[Applause]
>> It's a nervous map. Thank you.
>> Just a second. So it was our goal to create more than a tool -- a beautiful, aesthetic, perhaps even poetic object, something that has a quality for everyone, whether they're blind, visually impaired, or sighted. And we believe that Tracktile is a beautiful way to
explore a city with your hands and with your ears. Thank you.
[Applause]
>> Thank you, Potsdam. Come see them at the booth today from six to eight so
you can use it. We've seen it work. Now off to the judges' comments. Who would
like to start?
>> So thank you very much. That was very nice despite that moment that we've
all had. Absolutely kind of always happens. So I thought one of the
interesting things about that presentation was sort of, like, the arc of your research. You came in with the research and kind of questions about, well, it's a little bit clunky; but then you moved on to it being sort of foldable and much like a general city map. And the way that you thought through that process -- you obviously don't have much time to present, but it kind of showed. I'd really like to see that you thought through lots of the issues and explored them and prototyped it and really kind of pushed that. So I think that
was great. Of course I still have lots of questions about it. But you have a website, so maybe I should go there. But kind of knowing --
>> You can meet us later in the booth.
>> Or see it in the booth later. I thought it was a really nice project.
You'd also come out both with a sort of functional but also aesthetic thing,
which is really nice to see that you kind of pushed both angles of that and
pushed something that you really tried to make usable by the community that you
were designing for. So that's great.
>> So I love this. I mean, I love this more than I can put to words because
it's exactly what you might imagine I might love especially the community layer,
which -- so I think anytime that you can take something physical or something
that's hardware that can become the center point for the connection for your
community you have an incredibly powerful platform. I'm sure I'm probably going
to pitch you your own idea here, but think about the cumulative effect if you can get this into the hands of the people that need it, and about the community connection point. So first you start off navigating your city, but ultimately
the community layer allows those members to begin to connect. And that
connection point brings like people together. I can tell you beyond a shadow of
a doubt when that occurs in society, it makes the world a better place for the
people that are benefitting from it. So I commend you on this thing. This is
really, really exciting. I'd love to talk to you later about it. But I don't
know -- rave reviews, great feedback. I don't know the exact technological
things with it. This is not my field. So you guys know a lot more than me.
But assuming that you can get to a good price point and into the hands of the
demographic, it's very, very exciting, and I love the intention behind the
project. This is just great.
>> Likewise, I'm very, very -- I have a lot of desire for one of these on my wall -- connecting me to a place that I love. And I'm sure many people in the audience
would be able to identify with that idea of being sort of somehow connected
through some object, a large-scale wall installation or something. I actually
liked the way you both are helping the clearly disabled user and going after
that use case, which is very difficult, and at the same time, like, choosing not to include braille, which is quite a bold choice in this project,
particularly to make it more inclusive of everybody, right? So this is
something that both visually impaired and non-visually impaired can both enjoy.
I think there is a question still around the sort of is this for preparation
before learning the city? Or is it for experiencing the city while you've got
the mobile version? Or is it for reflecting on the city? And perhaps in
conclusion there, it's all three and it's equally good at all aspects of that,
which is even more commendable, because you would have thought it would be
specifically sort of dialed into one of these, and I think you've done a good
job making it address each of those kinds of scenarios. So congratulations.
Great presentation, too, visually.
[Applause]
>> We don't have time for questions. Questions later at the booth. Thank you, guys.
>> Next up we have Carnegie Mellon University with GatherWell, creating shared
knowledge.
>> Hi, I'm Jane, and this is Lorraine and David. We're from Carnegie Mellon
University and we're here to share a concept called GatherWell. I'd like to
introduce you to Laura. So Laura was admitted to the hospital for pneumonia
just a few days ago. Unfortunately, her experience there was overwhelming. She
mainly worked with Dr. Lee, but she also came in contact with a number of other
medical professionals. Her husband, Nathan, tries to visit as often as he can,
but he also has to take care of their baby at their home. On top of managing
communications with her doctors and her family, she also has a ton of
information given to her from test results to home-care instructions. So Laura
is having to deal with all this and it's really overwhelming. But imagine if -- yeah, so she's not alone. It's really not uncommon to have this experience.
And in our early research stages, we came across one patient who we talked with,
and she had 18 different medical-care providers in one hospital stay. Another
patient told us about how she had a surgeon and a physical therapist give her
conflicting information. And she left not knowing what to do. And this is not
an uncommon experience. In the midst of all this, you can imagine how much more
difficult it might be if Laura was deaf. She's not alone in this. In one
study, 41 percent of deaf patients left their health appointment feeling
confused about their medical appointment. And this is a really critical moment
when they leave because if they're not understanding their treatment, their
illness, then it can really lead to readmissions, unnecessarily. And hospitals
want to prevent readmissions because of exponential costs. So we wanted to
learn more about healthcare and the field of communication. In order to do so
we first interviewed 26 individuals from patients to nurses and social workers,
academic researchers as well as designers in the field of healthcare.
We also conducted workshops such as role playing and card sorting. We observed
nurses and social workers in the hospitals. And we also did usability tests,
through experience prototyping and rapid concept testing in two senior living
communities and a medical center's innovation center. So this is a photo from
one of the experience prototyping sessions where we had a participant pretend
to be a deaf patient interacting with a doctor and we had real-time
transcription happening. And here's a clip from that session.
>> Do you have any questions about what an ACE inhibitor is?
>> It's really great to see it on the wall. My family doctor is telling me I
might have to go on ACE inhibitors, but that's all he said. He never explained
it. If you could explain a little further, that would be great.
>> So what this experience prototyping really taught us, as well as some of the
other research, is that a lot of the interactions we want are possible today.
But what we want to do is think about how we can push the idea further and think
about the future. So we decided to design an ecosystem of multiple technologies
that can really create the seamless experience. And now Lorraine will talk more
about our key interactions and insights.
>> So -- hello? After spending much time -- okay. After spending much time
going through our research, meeting and interviewing these people, we've
consolidated our research into three key insights. So one of them is that
discharge information is often very cryptic and lengthy because it is a process
that involves many procedures and conversations. And discharge papers that aim
to consolidate all of this into a single piece of paper or several pages of
paper may not ensure that the patients are ready to leave the hospital. So in
response to that, GatherWell summarizes and captures each conversation that
happens with tagging and visualization, so that instead of having to sift
through pages of paper, you can easily extract key information very quickly.
And secondly, we learned that medical interaction is a group experience that
involves many, many stakeholders, and they have to be on the same page. So in
response to that, GatherWell also offers remote virtual conversation where
stakeholders can join medical conversations anytime regardless of location.
Lastly, we heard that hearing disabilities create major barriers to communicating medical information, both delivering it and receiving it. When we
were talking to a deaf interviewee, she revealed that deaf patients at hospitals are oftentimes very, very reliant on in-hospital translators, with very
limited independence. So in response to that, GatherWell offers real-time live
translation and transcription of American sign language so that they can be
independent and depend less on human assistance. And also the service offers
both translation and transcription, in ASL (American Sign Language) and in text, because through research we learned that members of the deaf community sometimes prefer one or the other, as ASL is considered a separate language from English. So here is a video that demonstrates the concept.
[Music]
>> So we're really excited about what this combination of technologies can
provide. It's not just a simple translation service but a combination of
multiple technologies. So for a conversation between a patient and the doctor,
we're using augmented reality glasses to give that live translation. Caregivers, in this case the husband, can also join the conversation through
remote access. And then also all of the stakeholders can be involved through
the transcription and the summary capabilities so they can review those
conversations at a later time. And it's also interesting because this
technology can also span across a spectrum of other types of people. Not just
those in the deaf community. For example, somebody who is temporarily disabled,
like, say, somebody is waking up from anesthesia, they could use this,
especially the transcription abilities, to review the conversations that happen.
Also somebody that may be situationally disabled, like somebody in a foreign
country could use this, for the live-translation capabilities, as well as remote
access because they could invite their doctor at home to join the conversation.
And then beyond that, just really any patient can use this technology to kind of
bring everyone to the table and be on the same page on the patient's journey to
wellness. Thank you.
[Applause]
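A minimal sketch, in Python, of the tagged-conversation idea behind the GatherWell concept as presented: each conversation is stored with tags so key information can be pulled out without sifting through pages of discharge paper. The field names are illustrative assumptions, not GatherWell's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class Conversation:
        participants: list                        # e.g. ["Laura", "Dr. Lee", "Nathan (remote)"]
        transcript: str                           # real-time transcription of the visit
        tags: dict = field(default_factory=dict)  # tag -> summary snippet

    def extract(conversations, tag):
        # Pull one kind of key information across every conversation in the stay.
        return [c.tags[tag] for c in conversations if tag in c.tags]

    visit = Conversation(
        participants=["Laura", "Dr. Lee"],
        transcript="...",
        tags={"diagnosis": "pneumonia",
              "follow-up": "see Dr. Lee in two weeks"})
    print(extract([visit], "follow-up"))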
>> Thank you, CMU. Who would like to start?
>> Great job, guys. I also saw this on Monday. It's come a long way. You've
done a great job answering some of the questions and clarifying some of the
points that you've obviously done a great deal of work on. I'm a big fan of the
sort of the transcribing, collecting and presenting back to patients what is the
recommended course of action, the next steps, how to get to your next
appointment, follow-ups, all the kind of cryptic stuff that -- and even the
service that just did that part alone would be a huge step forward for a lot of people, because I'm sure we're all disabled to some degree cognitively post-illness, just trying to sort of remember what the courses of action are. I feel
like this may be a project that sort of suffers from like one too many ideas a
little bit, and the augmented reality translation layer -- where essentially there's a superimposition on the caregiver doing an animation of the actual sign language, since she prefers sign language to just the text in this scenario. I feel like whilst that probably will happen at some point in the future, it's, I think, probably a fairly distant moment. And I'm not sure the sort of lift is worth the benefit right now, and there are some issues around, what if she was gesturing something important gesturally, and how would the sign language layer know that was not something that needed to sort of take over or sort of crush it, if you know what I mean. So I had some questions around
that aspect of it but overall I think it's a very commendable effort and would
have a wide variety of potential beneficiaries. So congratulations.
>> I wish I could be more helpful. I think from the aesthetic here this is
obviously very useful technology. I love the clean design. I mean, it feels
like a special purpose sort of Skype in design. So that's really, really
attractive and seemingly very functional. I think to your points, there is
probably quite a bit of complexity in there. You wouldn't really know exactly
how well this works unless you were kind of in the dynamic. And so I'm kind of
lacking that perspective. But elegant from your presentation. So congrats.
Seems like a lot of hard work.
>> Very nice. I agree, I think, with Rob. It's really nice but maybe a bit
overly complicated and maybe taking like one part of that would have been easier
to both express the idea in a short time but also easier to engage us, your
audience, I think, in kind of really what was going on in that AR video. I was, like, completely confused. I think, also, you do have a really great idea
here about the importance. Nobody understands -- no one can remember anything
the doctor said as soon as they go outside the door. It's a well-known
phenomenon. It's completely, like, no matter how much -- that's why they say
take a friend and take notes. You basically sit there completely befuddled and then leave and have no idea what they said. So it's that importance of
conveying that information, whether you're deaf or not. Actually, being in a foreign country, or being in your own country and speaking another language, can be a huge barrier. Having to wait around in pain until someone comes who can explain what's going on in a language you can understand is terribly traumatic for people. So I think that would be brilliant. So I think really focusing on what's at the core there, which is really how do you explain stuff to people so they can remember -- I think that's really powerful and well worth exploring
more.
>> We have time for one question from the audience. No questions? Okay. So thank you so much, CMU.
[Applause]
Next up we have KAIST, the Korea Advanced Institute of Science and Technology, from
Korea. It's their first time participating in Design Expo. Their project is
called Sparkle. It's a visual feature description system.
>> Hey. This is Sparkle from KAIST Korea. I'm Cheolho Jeon, and these are my
team members Jonghyuk, Sungbae and Sungwon. Let's start with how our idea
emerged and led us here. 285 million people in this world, which is approximately one in 25 people, have visual impairments. Some of them have no vision at all, but most of them have a sense of color. We conducted a contextual inquiry with people with
visual impairment. We followed them shopping and we volunteered to help them
shopping. And we found out this: I would like a pink [indiscernible] cardigan.
They say it's trendy, says a woman with a visual impairment. Trendiness and
fashion matter to sighted people. And not surprisingly they also matter to
people with visual impairment. But how do they wear and choose clothes without
visions? So this is the very question that led us here to Redmond. How do
people with visual impairment see clothes? With this question in mind we
conducted more interviews with people with visual impairments and we found out
there are two kinds of information during the shopping. First, nonvisual
information like thickness, material, texture, can be gathered directly by the
people with visual impairments through touch. Second, visual information,
however, can only be gathered indirectly through the help of another person.
Assistants -- clerks, friends, family -- need to verbally communicate
and describe the colors and patterns to people with visual impairment.
And we noticed that there's a great deal of inconvenience during this process.
Especially around describing colors and
patterns. Imagine you're describing these clothes to your friend. How would
you describe the pattern to your friend? What about colors? Please take a look
at these two colors. How would you describe the difference between two colors?
I can say the two colors are blue, and one is darker and one is lighter. But
those are not the only things we see. We know the two are similar but
definitely different. And it is hard for us to verbally describe the
difference. And this is what happens during the process. Assistants, even if
they're eager to help, they struggle to verbally describe the difference. And
we wanted to sparkle-up the situation. We decided to help and improve the
verbal description process. For that, we're going to use crowd power and
technologies. There are two people involved in the shopping process.
Assistants are going to use the Sparkle app to better describe what they see.
And people with visual impairment will use the Sparkle Band to better recognize
patterns in the clothes and this together is called Sparkle. And here's our
video, and after the video [indiscernible] will continue the presentation.
Thank you.
[Music]
>> Jill lost her vision just a few years ago. Despite her disability she still
actively enjoys a cultural life. Here she feels like she should go out and buy
some new clothes. [indiscernible] is a good brother but certainly not a fashion
star. [indiscernible] tells her brother to take her shopping. Her brother is
resistant at first, because he wants to enjoy the weekend. But as his sister
insists, he says yes. He is a good brother.
[Music]
>> The two arrive at the department store. [indiscernible] is trying to buy
some cool clothes for spring and summer. It seems like she has to pick. She
wonders what the pattern looks like. All the patterns are too complicated
for her brother to explain. She does not just stand there and watch. She uses
her Sparkle. The Sparkle then delivers information about [indiscernible]
displayed on the braille display.
[Indiscernible] uses the Sparkle app. The Sparkle Band sends the visual data to
the Sparkle app and the Sparkle app displays suitable description words of the
visual data on the screen. [Indiscernible] describes how the clothes look and
feel in more detail with the help of Sparkle. With her good brother and
useful Sparkle she continues to shop. They shop and shop and shop until the
brother begs her to stop. It seems like it was a nice day, at least for one of them, wasn't it?
[Applause]
>> Let's talk some more about details. There are three main stakeholders in
our system. First, everyone who can see, denoted by green or gray. Second, the assistants of people with visual impairment, denoted by green. And finally, there are people with visual impairment, marked red. We will describe how each user group contributes to and takes advantage of our service. Let's say we have a
piece of clothing that has some complex patterns. Using the camera, Sparkle
band can transform the pattern information into braille that people with visual
impairment can feel with their fingertip. So, how does it actually work? We
have a software demo for this. The braille on the surface of the band will move
up and down to create a texture that resembles the pattern. The color of each
cell represents the height of each braille pin. So the whiter the cell, the higher the corresponding braille pin. Now, let's see how sighted people and
assistants are involved in our system. First, sighted people. When they shop
for clothes online they'll be asked to do a small task for the people with
visual impairment. The assistant will ask them to describe the colors they see
with a few words. This processed data will be stored at the central database
and it will be used by Sparkle app, Sparkle Band and other services. This
processed data can be useful for everyone even sighted people. For example,
Sparkle can offer advanced searching and rich color dictionary. With Sparkle's
rich color vocabulary, made by people, we can now search with a query like "baked-tea three-piece" and expect some clothes which match that
description. Furthermore, now we have a color dictionary that also contains
descriptors of emotions and feelings attached to colors. Designers can easily
reference and pick colors accordingly. As you saw in the video, the assistants
can use the Sparkle app to get recommendations and describe colors easier. Do
you still remember these colors? We got some help from the crowd through
surveys and this is what we got. Using the system, assistants can say about the
color on the right, this is a light blue color which reminds me of a cloudy day and
makes me relaxed. So people who acquired visual impairment can imagine colors
easily with the descriptions given and people born with visual impairment can
share the feelings that the sighted people feel from the color. And we will
talk some more about the future applications of Sparkle. First, independence is
important to all of us, but some of us don't fully enjoy it. We want to help
people with visual impairment to shop freely on their own with Sparkle. Sparkle
can guide them to the shopping mall they're looking for and let them choose
clothes and describe colors and patterns to them. And, second, clothes are not
the only ones that are hard to explain. Furniture, accessories, drawings.
There are tons of things that are hard to describe. Sparkle can make this describing easier. Thank you. Any questions?
[Applause]
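A minimal sketch, in Python, of the pattern-to-braille mapping shown in the software demo: the camera image is reduced to a grid of cells, and the brightness of each cell sets how high the corresponding pin on the band is raised. The grid size and scaling here are assumptions, not the team's actual parameters.

    def pattern_to_pins(gray_image, rows=4, cols=12, max_height=5):
        """gray_image: 2D list of grayscale values in 0..255."""
        h, w = len(gray_image), len(gray_image[0])
        cell_h, cell_w = h // rows, w // cols
        pins = []
        for r in range(rows):
            row = []
            for c in range(cols):
                # Average brightness of this cell of the pattern...
                cell = [gray_image[y][x]
                        for y in range(r * cell_h, (r + 1) * cell_h)
                        for x in range(c * cell_w, (c + 1) * cell_w)]
                brightness = sum(cell) / len(cell)
                # ...and the whiter (brighter) the cell, the higher the pin is raised.
                row.append(round(brightness / 255 * max_height))
            pins.append(row)
        return pins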
>> Thank you, Korea. Who would like to start?
>> Great job, guys. I really like this concept. I think you guys have done an
amazing job actually. You've got a ton of detail in here. There's the software
service that you've designed, a platform for crowd-sourced information and
subjective, qualitative descriptions of color. There's a hardware device in
here, it's obviously got its own gamut of scenarios that you've only lightly
touched on here. Not only that, the graphic communication work, the design of
your presentation is really beautiful, the character design and all of those
things. I think there's just tons to commend you on, and even looks like there
was some software image processing work done just to do some live video
presentation. And let me just say that quite a lot of that happened between
Monday and today. So even more impressive for how far it's come. So well done.
>> Thank you.
>> So I think that was really interesting. I really like many aspects of this.
I thought the braille, the sort of representation of patterns by sort of a
tactile-pattern thing was really interesting. I don't know, it would be great
to see some kind of progression on that and testing and how can people really
feel it and what does it mean to have polka dots -- a lot of understanding
patterns is really interesting. I liked the way that you thought about color
and different ways of explaining that as well. I think it would have been great
to see -- I was not completely convinced by the way that the wrist thing went
with the app and the description of the color versus the pattern, I think that
felt a little bit kind of disconnected. Apart from that, it was really nice,
and you did a great job explaining it.
>> Yeah, I think this is really, really neat. I mean, it took me -- maybe I'm a
slow study -- it took me a while to get -- because the pattern that was taken
off the rack had texture to it. It threw me off. You probably already heard
this feedback. So I won't belabor the point. But once I got it I started to
kind of appreciate the kind of expansiveness of what you guys were going for
here. This is a lot of vision. I always worry in those situations that it's a lot to put in the platform. But if you can produce that entire experience, I see the
vision. I think it's really, really exciting. So what I was most excited about
was the end result. I thought that the crowd-sourced actual description was
really kind of very helpful, very meaningful kind of in that response. So just
getting that part right in terms of how people give you the right words and how
you discern that these are the right ones for this thing and for each of those
cases -- each color, each garment -- that's a very expansive problem, even if
you don't have hardware or anything else contributing to that. So I really,
really like that part. I see that as a piece of it that in and of itself has a
lot of value. I don't know if that helps you at all. But I do enjoy the
project. I like the branding. I like the look. Congrats. A lot of great
work.
>> Thank you.
[Applause]
>> We have time for one question. Cindy.
>> [indiscernible]
>> So the question was if you guys did any thinking about shopping online,
because when you shop online you have to rely a lot on the metadata that the
website provides.
>> So we actually met and have communicated with people with visual impairments
and we asked about the online shopping. And the answer was, like, we were
depressed to hear that they do not enjoy online shopping as much as we do. But
they said that they use online shopping only in combination with offline shopping. They go to the offline mall and try things on, then go back home and
order it online because it's cheaper. And we did not quite think about how to
make the online shopping better. That's all. Yes.
>> Thank you.
[Applause]
>> So next up we have Fuzzy Bird. A buddy for autistic kids that supports them
in good and bad times. This is Delft University of Technology, from the
Netherlands, and I'll switch you over to their presentation.
>> Thank you. So we're Delft. My name is Max.
>> And I'm Sophie. And we would like to present to you the Fuzzy Bird, which
is a buddy for autistic children in good times and in bad times. This is a
photo taken during the exhibition where we presented the Fuzzy Bird at Delft.
Our other teammates are Joseph and Astrid, and in the middle you can see Thomas,
who unfortunately couldn't make it. We are in the first year of our master's program, called Design for Interaction, in Delft, and we learn to design for real people. But for this project that turned out to be quite a challenge. In
America, one in 68 children is diagnosed with autism, which is a wide spectrum
of disorders with a shared set of symptoms. Thomas, an eight-year-old boy, is one of them. He's diagnosed with high-functioning autism. He really likes repetitive behavior, structure and order. And he really likes to feel and squeeze and touch different kinds of textures and materials. But in daily life, he sometimes has a hard time with social interaction. And Thomas has a fear of
uncertainty. During this project, in the context of inclusive design, we tried
to design for the 1 percent, but see how all the new insights we gained could
also benefit us all. So, like, for Emma, who is a very shy girl who doesn't dare to talk to other people, or Noah, who is just very nervous about his first day
at school. But let's pretend I'm Thomas, and I feel very tense. So I'll go to
my Fuzzy Bird and I'll start interacting with it. I will start squeezing and
touching, and when I squeeze the left green wing, you will see the Fuzzy Bird
tilt his head to the left and green lights will start to glow. It's a very
direct and understandable reaction for Thomas, which is very nice because
otherwise it would confuse me and now it reassures me. When I start hugging the
bird, the bird will start vibrating, and this cuddling movement will comfort me.
And by petting the head [chirping] the bird will make a happy sound. It will
make him very lively and friendly. I will trust the bird and I will even find
myself mirroring the bird. And after interacting with the bird, Thomas feels
comfortable again.
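A minimal sketch, in Python, of the direct, predictable input-to-response mapping just described: each touch input always triggers the same simple reaction, and unknown inputs do nothing, so the bird never behaves unpredictably. The event names and actuator calls are illustrative stand-ins for the real hardware, not the team's actual code.

    RESPONSES = {
        "squeeze_left_wing": ["tilt_head_left", "glow_green"],
        "hug":               ["vibrate_gently"],
        "pet_head":          ["chirp_happily"],
    }

    def on_touch(event, actuators):
        # Unknown inputs are ignored, so the bird never reacts unpredictably.
        for action in RESPONSES.get(event, []):
            actuators[action]()

    # Stub actuators standing in for the motors, LEDs and speaker.
    actuators = {name: (lambda n=name: print(n))
                 for actions in RESPONSES.values() for name in actions}
    on_touch("squeeze_left_wing", actuators)   # -> tilt_head_left, glow_green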
>> So now you've met the Fuzzy Bird and you've met Thomas, and I will go a little
bit more into detail about the process and tell you about the insights. So our
process was quite a roller coaster. It had some good times but also some bad times. So we started off the project with Eggy. Eggy, yeah, Eggy had some
flaws in the way it interacted with the user. Its movement was quite
unpredictable. And that caused a lot of stress for the user. So that led to the insight that we shouldn't only focus on what the product should do, but also on what it shouldn't do. So, developing our concept, we decided to lay the egg
back in its nest to see what would happen. So we waited for some time and
suddenly Fuzzy Bird was born. [laughter] So also with this I can give you an
answer to the age-old question, what came first, the chicken or the egg?
[laughter] So with this concept, we went to see Helma, Helma Van Rijn, sorry,
Helma Van Rijn. She's a Dutch award-winning designer who is designing learning
toys for kids that are diagnosed with autism. She gave us some pretty straightforward advice: You are not designing for the kids you used to be;
you're designing for the one percent and you should really know your target
group. Because autism has such a broad spectrum, some things work at certain
people but don't work with other people. So you have to simplify your concept
to see what works and what doesn't. So that led to a little trouble. When
you're designing and you're doing research it's sometimes hard to synchronize
the way of conceptual thinking with the way of programming and broadening your
design. It's always important to test out your interactions to see if they are
working. So sometimes you have to simulate or mimic the technical part in order
to see if it's working. That led to our final, or not to a final, but led to a
working prototype which we tested at a primary school in Delft. It had some
kids with special needs, including Sam, who has high-functioning autism.
So we would like to share this impression with you that we made and I really
would like to point out the change over time of the facial expression of Thomas
or, sorry, Sam, and see how he interacts with the Fuzzy Bird. So Sam is
normally quite closed and shy, trying not to have eye contact. He starts to
explore what he can do with Fuzzy Bird. He starts hugging. That comforts him.
And he dares to speak up. We really think that user testing is one of the most
rewarding things in research and design, because you can really test out your
concept and see how it works. So I would also like to quote one of the teachers
who told us that when Bob was going back to class after playing with Fuzzy Bird,
he started interacting with other kids, telling about his experience with the
Fuzzy Bird. So we think that the Fuzzy Bird can have an effect not only in the short term but maybe also in the long term. So here we are now with the
Fuzzy Bird here in Redmond. But we also want to take a look into the future to
see what the Fuzzy Bird can do. So I would like to quote one of the
psychologists who stopped by at our expo in Delft, and she told
us that the Fuzzy Bird reminded her of a transitional object. That's an object
used by therapists to gain the child's trust and to make him open up and tell
about difficult events. So with these insights of inclusive design and
designing for the one percent, we can try to find insights that can help us in
designing for the rest, and create a benefit not only for Thomas but also for Emma and for Noah. So I would really like you to come to our
showcase and experience the Fuzzy Bird yourself. Thanks.
[Applause]
>> Thank you. That's a great picture. That also happened between Monday and Wednesday. So who would like to start?
>> You guys know you're doing well with this. I don't need to tell you. I
think -- I don't know, I need a Fuzzy Bird. Sometimes work gets kind of
stressful. And so it seems like a rather complicated issue, right? So you're
user testing and what you guys -- how many people have you actually tested?
>> We've tested with eight different kids. Boys and girls varying from the age of five to ten.
>> Very interesting. It's all kind of in the results, right, for what you're
going for here. And so a lot more testing, a lot more iterating would be great.
I think this concept is really, really strong. It could probably keep going.
There's probably quite a bit of innovation, quite a bit of additional features
that you could keep adding and a lot more that you can do with it. So that's
very exciting and what a great -- this is a platform. Look at this platform.
What an awesome platform. So, lovely, lovely. I'll leave it to you for some
critique.
>> Sure. Great job, guys. I think this is probably the most complete concept
out of all the kind of work we're seeing. I love the journey that you're
getting here, the fact that your prototype is fully functioning and you pulled
off the presentation and demonstration of it really perfectly. That's very
commendable and that you subjected this to user evaluation and really learned
through prototyping. That's all like just total magic for me in terms of just
getting out of the theory, building it and really learning from the hard knocks
and the experiences of seeing it in the hands of your target audience. Totally
commendable. Love the work. Great job.
>> Yeah, very thoughtful. Great storytelling. Beautiful job of putting the
story together and telling it, and talking about the bits that didn't quite
work -- like Eggy, which just wasn't there. And the story, it was very neat and
very nice. I think the only thing that I would have liked to have seen, just
because you've done such a good job, was something else: thinking about other forms for
this. I know you went from Eggy to bird. But I kept kind of feeling a little
unsure about the form and the bird and the audience you were going for, even
once you get to school, would you take that to school with you? And, so, kind
of thinking about the world of stuffed animals and what was going to work for
people and it being a platform, basically. So that's just kind of next steps
really, like what else could you do. But a really nice project.
>> Thank you.
[Applause]
>> We have time for questions from the audience.
>> Hi. Really interesting project. I'm very curious if you were able to
identify which aspects of Fuzzy Bird the children were responding to? Was it
the form of the interaction? Was it the physical shape and the presence of a
face or no face or did you have a way of testing that to find out?
>> I think they're reacting to both. We did not test how they reacted if it
was not a recognizable animal, a bird without a face, so it might be very
interesting to see if that's actually necessary. But it was the direct feedback
that the bird gave and, I think, definitely the softness and, somewhere, the
cuteness. I'm not sure if you need the face as such, but the movement, that
reaction.
>> Also, the interaction with the bird is sort of human-like in a small way.
So you're touching it, you're trying different stuff, you're cuddly. That can
help with things that have to do with autism.
>> Okay. Thank you so much.
[Applause]
>> Next up we have NYU Shanghai with -- you can cheer for your school if you
want -- with Adapt Tool, helping amputees use tools directly without a
prosthetic hand as an intermediary step.
>> Hello, we're the NYU Shanghai team, and this is our product, Adapt Tool.
I'm Kadallah Burrowes from Washington D.C. in the United States.
>> My name is Ellina and I'm from [indiscernible], Russia.
>> My name is [indiscernible] and I'm from Shanghai, China.
>> My name is [indiscernible] and I'm from Beijing, China.
>> So this is our project, Adapt Tool, and Adapt Tool helps amputees to use
tools directly without using a prosthetic hand as an intermediate step. So our
school is located in China, and in China the total population of amputees is
even bigger than the total population of the Netherlands. [laughter] Yes,
Netherlands, hi. And the biggest problem those amputees are facing is a 73
percent unemployment rate, because most jobs and careers rely heavily on
holding tools. So we had the chance to interview an amputee who lost both of
his arms in a firework accident. His name is Mr. Ku, and he works at the China
Disabled Persons' Federation. Mr. Ku confirmed that this unemployment problem
does exist, and he also showed us how he comes up with creative ways of using
tools in his daily life.
So in the next two videos you will see him combing his hair and also drawing.
[Foreign language]
>> So the way he does it now, he has this wristband and he tucks the hairbrush
inside. However, this process takes time and the connection is also really
loose, so sometimes the hairbrush even falls down. The way he draws now, he is
using both of his arms, and his drawings are beautiful. As Mr. Ku notes, one
really important thing that both of his tools, the pencil and the hairbrush,
lack now is stability. Stability is really important. As you may notice,
Mr. Ku is not using a prosthetic hand. When you lose your arm the obvious
solution would be to get a new one. However, in our research we found out that
this solution doesn't always work. [Foreign language] Mr. Ku thinks that it's
better not to use a hand at all than to have a bad version of it.
Unfortunately, most of the prosthetic hands now, they are not really efficient,
they are not really reliable, and they are also quite expensive, because the
conventional idea of designing a prosthetic hand is to design something that
will look like a human hand. But in our research we found out that in our daily
life, most of the time we use our hands for holding tools and using them. So we
thought, why don't we just keep the grip?
>> So in our research we found out there are some people already doing this. So
this man actually attached the chain saw directly to his arm with a ball joint.
We love this idea but we also want to be able to use multiple tools. So we
looked at Aimee Mullins who has 12 pairs of prosthetic legs, and we want to use
this idea of interchangeable legs to design a new prosthetic arm. So here is
Adapt Tool, combining direct attaching and interchangeability. It works like
this: You just wear it. So how can this -- [laughter] yeah, right -- so how
can this thing help Mr. Ku? So he's walking in his garden trying to operate in
space. He uses two arms and it's super complicated. But Adapt Tool will make
everything so easy. With Adapt Tool, you just put a tool into this tool socket,
you click a button, and the tool is fastened. And this is how it works, the
design of the mechanism. So with Adapt Tool working like this, just one arm is
enough for effective operation.
>> With more complex tools and more complex jobs, you need to be able to do
more than just hold the object that you're using. [Foreign language]
>> In this video we saw Mr. Ku operating a water hose. And while most people
are able to just clench their fists in order to operate it, for Mr. Ku it became
quite a complex affair where he had to use both of his arms. But you can
imagine that if Mr. Ku were trying to use something a little more dangerous,
such as a screwdriver or a drill, you wouldn't really want him doing it that
way because it could be quite risky. So with the use of
Adapt Tool, where we plan on using multiple sensors, specifically myoelectric
sensors, we're able to give amputees more control than they've had before.
>> Myoelectric sensors read the electrical signals that your muscles give off
when you contract them. And through Adapt Tool we plan on allowing them to map
out specific contractions to the actions that the tool is taking. In this video
we see someone who has used a pretty basic prototype of myoelectric controller
to operate and turn on and turn off a drill. And if we apply this idea to Mr.
Ku and his garden hose then you can imagine instead of having to use two arms to
operate, he would instead be able to clench his fist and aim the water at
wherever he needs to water.
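A minimal sketch of that kind of myoelectric mapping, assuming a single normalized EMG channel with hypothetical thresholds and hooks; none of this is the Adapt Tool's actual code:

# Sketch: map one myoelectric (EMG) channel to a tool on/off action, in the
# spirit of the garden-hose example. Thresholds, sample values, and the
# switch_tool callback are hypothetical placeholders, not Adapt Tool code.

ON_THRESHOLD = 0.6   # normalized activation that turns the tool on (clench)
OFF_THRESHOLD = 0.3  # lower release threshold adds hysteresis against chatter

def map_contractions(samples, switch_tool):
    """Walk a stream of normalized EMG samples and toggle the tool."""
    tool_on = False
    for activation in samples:
        if not tool_on and activation > ON_THRESHOLD:
            tool_on = True
            switch_tool(True)    # clench detected: start the water or drill
        elif tool_on and activation < OFF_THRESHOLD:
            tool_on = False
            switch_tool(False)   # relax detected: stop

if __name__ == "__main__":
    fake_samples = [0.1, 0.2, 0.7, 0.8, 0.75, 0.25, 0.1]  # made-up readings
    map_contractions(fake_samples, lambda on: print("tool on" if on else "tool off"))

The gap between the two thresholds is just one simple way to keep a noisy muscle signal from toggling the tool rapidly; a real controller would be calibrated per user.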
>> In case an amputee wants to use even more complicated tools, he may need
configuration. So we decided to add a touch screen to the Adapt Tool, but we
weren't sure if it was practical for an amputee to use a touch screen until we
saw this video. [Foreign language] Here we can see Mr. Ku using his cell phone,
which has a touch screen. It tells us that it's possible for an amputee to use
a touch screen. So on the touch screen of Adapt Tool -- sorry, on the touch
screen -- you can adjust the settings of a specific tool.
For example, in the case of a water hose you can adjust the water pressure by
moving the button on the touch screen. And the last thing I want to mention
about the Adapt Tool is the sleeve. The sleeve is the part where the amputee
puts his arm into the Adapt Tool. It's very close to a normal prosthetic
sleeve, but we want to make it more comfortable, so we use a special material,
memory foam, which adjusts its form to fill the space between the arm and the
Adapt Tool. So the amputee won't feel pressure if he wears the Adapt Tool for a
long time. And we also have supportive bands around the shoulder to make sure
that the Adapt Tool won't fall off even if the tool is heavy.
>> So this talk has been technical, but we don't want to lose sight of the most
important part of the project -- the user. So we asked Mr. Ku what he thought
about the Adapt Tool. [Foreign language]
>> Thank you for your time.
[Applause]
>> Shanghai, who would like to go first? I see a hand, not a tool.
>> That was a really fascinating, really great project. And I think actually
we all really need a round of applause for Mr. Ku because what a brilliant kind
of research participant. He was so good. That was a huge strength of your
project, that you had this really interesting person and you went into such
detail, that you really saw and understood what he was trying to do. And so you
came out with this kind of output of basically your arm becoming sort of more
like a drill and drill bits and that whole kind of idea of taking the hand out
was really a great kind of step forward. I really liked that. The end reminded
me of those multi-colored pens that you get, where the green bit pokes out and
you change colors. I thought that was lovely. Also the way you thought through
the technical things, like using myoelectricity -- how would I control it, what
are you trying to do? So I thought that was really impressive and really interesting and kind
of trying to solve a very fascinating problem. For one very specific person,
but you could see the equivalent of Holland -- the Netherlands -- was going
to be using it. So that's great. Well done.
>> You guys have seen Inspector Gadget. I mean, this is the kind of thing you
see and you sort of go, this doesn't exist already? I can't believe it doesn't
exist already. So for you guys to come up with it hits all the criteria you
want to look for. It just feels right. And so it seems like a thing in
the world that will make a lot of lives better. So congrats. That's really,
really -- that's the whole goal, right? Great work.
>> Yeah, I echo both those sentiments. I think Mr. Ku is just the most amazing
find, and I think -- I love how, when you encountered him, I don't know the full
genesis of the story, but I think you went from designing a tool for a single
amputee to one that would work for a double amputee, which is upping the
challenge even more for yourself. And again, this "why doesn't this exist?" is
sort of a puzzling question. And it's definitely something that needs
to exist. It sort of feels empowering in so many different and diverse
scenarios for so many people. So great job. Great concept, really simple
concept. Super challenging execution. Would love to have seen more prototyping
and something to evaluate for Mr. Ku and get some consideration going in that
design. But the concept itself, super strong. So great user research. Well done.
[Applause]
>> Any questions from the audience?
>> [indiscernible] you could have sold everyone in the audience one of those.
It was unbelievable. I was reaching for my wallet, it was so good.
>> I totally agree.
>> No questions? Thank you so much, guys, it was great.
[Applause]
>> Next up we have the Art Center College of Design from the U.S. They're
going to present Radical Sensing: Super Smelling Neuro Prosthetic. Hold onto your
seats now.
>> Hi, I'm Selwa Sweidan [indiscernible].
>> My name is Jay Hong.
>> And we're from Art Center College of Design's Graduate Media Design
Practices program. So, instead of seeing disability as something to fix, our
professor's brief was to invert disability and design for a super power.
Radical sensing is rooted in the sense of smell. And radical sensing imagines a
future in which we've chosen to replace our noses with a super smelling neuro
prosthetic or a post-nose. It filters. It shares. And it remembers scent.
>> The super smelling neuro prosthetic amplifies, isolates and [indiscernible]
through hand gestures and customization to your post-nose.
>> For centuries scholars have placed smell at the bottom of a hierarchy of
senses. Condillac, Darwin and Kant have all described the sense of smell as
the least useful of the senses.
>> But we chose scent.
>> Yeah, we chose scent as a way to really design for the emotive, nonrational,
intuitive aspects of brain computing. And, really, thanks to brain-scanning
imaging technology, the hegemonic divide between what we know and what we don't
know, between the rational and the nonrational, collapses. So here we see
a scan -- going back a little bit, a little forward -- here we are. Here we see
a scan of the olfactory pathways in the brain. And really the sense of smell is
actually the most complex of the senses because of how it functions and how it's
mapped through the brain. So we spoke to experts in computational neurology,
anaplastology, brain computer interface hardware development and also we
collaborated with an independent perfumer.
>> These are our three noses. We imagine fitting them to a person's face once
their noses are removed. So it is a radical idea. Why not a radical
prosthetic?
>> For example, this one is inspired by Techno. This is our Techno nose. And
here we have our sleepy nose to protect you while you sleep, and this one
challenges the architecture of the nose.
>> So what does it mean for the replacement of a body part to become
aspirational?
>> So currently there are no neuro prosthetics for smell. So how do you design
for something that doesn't exist? This led us to the idea of working with
performers. By performers, we mean those with specialized training in movement
and dance.
>> Radical sensing is in dialogue with these performers and body researchers.
Second from the left is Lucy McRae, who works with the boundaries of the human
body and the dynamism of technology.
>> So there are clear advantages to working with performers.
>> Their special vocabulary allows for rich messaging and interpretation in a
very collaborative manner. It is a symbolic and metaphoric process. It turns
formal movement into narrative, which we then work off of.
>> So we created this methodology which we started calling performative
prototyping, and instead of being a narrowing process it's very divergent and
generative. And this allowed us to really work with concepts that don't exist.
>> Over time, we made working prototypes and these were used for our
performative prototyping iterations.
>> And these iterations led to these three areas. So GCMS is a technology used
by the perfume industry and scientists to analyze and measure scent. E-nose
technology takes us to a portable scale. So if you imagine that this could
merge with a neuro prosthetic of the nose, this would be a huge change in the
way we could record and store scent. This is how the memory function would
work. And based on this technology, here we are showing two performers recording and
sharing scent.
>> To begin we've conducted three studies with a diverse group of performers.
We were interested in the relationship between language and smell. So we asked
the performers to verbalize their reactions.
>> So that led us to think would a smell memory device facilitate a richer
discourse around smell, and would that lead to a smell literacy?
>> To move on to augmentative function, we tested stereo smell. We asked the
performers to do improvisations through long, extended straws, which then made
us think about control.
>> So initially our questions were what does it mean to augment smell to be
more precise or focused or enhanced? But later our questions became about what
does it mean to relinquish control of smell, willingly or unwillingly.
>> This led us to sharing. This test is about the sharing dynamic between
leader and follower.
>> And we took this idea of sharing and intimacy to the extreme by giving the
performers a very restrictive prototype which they described as painful.
[laughter]
>> We repeated the tests with performers who knew each other very well. And
they were able -- it was less challenging for them and they were able to move
more quickly.
>> So since they were moving so much more confidently, it led us to consider
how familiarity affects ability. And during our debrief, they mentioned the
synchronization of breath, which allowed them to move easily.
>> And then through this it allowed us to consider: could the breath be used as
a cue in the sharing of smell?
>> So to recap, we used the methodology of performative prototyping to reach
these two areas.
>> With memory, how does a smell memory device impact the language and
discourse around smell, and smell literacy? With control, what does it mean to design for an
ability to control or relinquish control of smell? And finally with sharing
what does sensorial consensus mean? Can we use the breath to facilitate
consensus?
>> We took a radical approach to look into the future to explore designing
possibilities around super powers and ability.
>> In the process, we came up with a methodology that we're calling
performative prototyping which led us in a different direction than if we had
taken a more conventional approach. Radical sensing looks at neuroscience,
physical computing and performance research to really explore the potentials of
designing around a radical augmentation of smell and also a radical prosthetic.
Radical sensing imagines a future in which we've chosen to replace our noses
with a super smelling neuro prosthetic or a post-nose. Thank you.
[Applause]
>> Thank you, Art Center. Who would like to go first?
>> I'm -- where do you start?
[laughter]
>> It's a wonderful project and I commend you on sort of taking the approach of
looking at augmentation of powers, super powers, and sort of exploring this from
a very different sort of starting point. The performative prototyping, I think,
is absolutely genius, and I think the video communication storytelling that you
do in this presentation is, like, amazing. I mean, you've just got all sorts of
storytelling going on about what you've explored and how you explored it.
And I'm left very much with the impression -- perhaps not as radically as you
suggest in this incredibly bizarre moment where we see this void in the nose
cavity -- that there could be devices that enhance or give us
super powers of smell or recall or as you said record and share. There's
something definitely super exciting there, and I'm compelled to imagine a future
where we have a heightened sense of smell and are able to share those
incredible moments. But anyway, just very bold, very creative. Great job.
>> I definitely -- certainly the subject matter is fascinating, right,
especially when you're exploring memory as it pertains to smell. This is
something that I've always been very curious about. I find my sense of smell
fluctuates. Different times in my life it's more vibrant. Other times it's
less, different months. The result here could be really, really fascinating.
This seems like one of those projects that really contributes to a base of
knowledge that we're going to need to get there. I really just like the
presentation, I think, and the project and the work, and to sort of be you for a
second. Because there's a lot of art in this. There's a lot of exploration in
this, there's a lot of creativity in it, and it's a lot of out-of-the-box
thinking, and so that's very exciting. What will result in terms of practical
applications here? I don't know, but this step, I think, is really exciting to
see. So great presentation. Thank you.
>> Yep, echoes. I love the way you kind of went out, way out, and that kind of
forced you to really think about some of the things and actually what would that
be like. And I think that's something that for some of the other projects I
think that would have been really helpful. What would it be like to wear this?
What would it be like to use it, to be that, to have that as part of my daily
life? If I capture a smell and I give it to somebody else, what's that going to
be like? Those are really interesting questions to think about and the way you
did it was incredibly evocative. I think that was great. So very kind of
different, a different way of doing that. And so I really enjoyed that. And in
taking just a complete departure: what did it mean to sort of reverse that
feeling about contextual disability -- well, could you, being differently
abled, as it was called for a long time, actually, yeah, be differently abled?
I would like a supernose. It's kind of a really different way to think about it. So I
thought it was a great project. Really liked it.
>> Thank you very much.
[Applause]
>> Any questions from the audience? Stella?
>> Thank you very much for your presentation. And so here I own up. I am
someone with this invisible disability of having mostly no sense of smell. And
what I love about your project is that it actually shows things that I do. For
example, when I go for a walk together with my husband, he tells me, now it
smells of grass, and I go, is that really? And then I'm going, okay, maybe I
can smell just a little bit of it. So I try to go [sniffing]. So actually what
this is is one of the behaviors that you showed in your movies. And so what I
think is really nice about it is it's beyond the product, because actually it
enables me to recognize that this is something that we're doing and I can talk
about it. So it's actually a direct mediator of experiences without the product
in the sense that I'm using a product, but actually it enables people to share
experiences. So I love that.
>> Thank you so much for sharing your experience. I'd love to talk to you
more. [laughter]
>> Any more questions? Clay Shirky?
>> The performative prototyping idea seems to even transcend your project as a
general technique. As you guys were doing that, did you think, oh, I know what
else this would be useful for. Like, when would you use that again for what
other kinds of projects?
>> Thank you. And that's a really good question. I think we've been
meditating on it or trying to understand. But I guess it started when we took a
dance choreographic practice I have and applied it to some design research. I
found it to be a very satisfying experience. But I think, at least from my
perspective, at this moment, I see it as really applying to brief creation and
maybe insight finding when the space is very broad. I think kind of how we
talked about it, when you're really open and you need to be led down a different
path. But besides that, it's just really fun. [laughter]
>> Thank you.
>> Questions? Unless somebody has a question. Yeah.
>> Okay. So back to Claire's point, I mean, there is some earlier work around
sort of informants and some early papers around that, but I think that's always
been about using performance to tell the idea, much more than that sort of
prototyping -- using those performances and dances because, as you say, they're
so good at thinking with their bodies. Yeah, very much so, that was a great idea.
>> Thank you.
>> Any other questions or comments? Okay. Thank you so much, Art Center.
[Applause]
>> Next up we have our home team, University of Washington, with Loom,
a storytelling experience for your tight-knit community.
>> So we are from the University of Washington Division of Design. I'm
Jennifer. This is Charlotte. Catherine, Jaewon and Kendall, and we're excited
to share our project with you. Loom is a tool for friends and family to
collectively weave stories. A bit about how we got here: we were really
interested in the social experiences of older adults, an area that is far less
explored or designed for than, for example, the medical aspects of aging.
To better understand this space, we reached out to the Wallingford Community
Senior Center and were able to spend time there. We served lunch and visited a
few times, spending time with people at the center to better understand the
social context of older adults. We did this through observation
and simply sitting down and having conversations with them. One thing we
noticed was the popularity of tabletop games like dominos or bridge. And the
director of the center shared a really valuable insight with us: that
activities that people think of as more sedentary or passive serve a very
important purpose in keeping people connected and for cognitive stimulation.
One particularly moving example of this at the senior center is the knitting
circle that has been around for 40 years, getting people to just sit down and
engage in conversation. And we ran with this metaphor of the knitting circle as we
designed a physical prompt for storytelling. So Loom is a tool to just get
people sitting and talking and to weave conversations and memories.
>> So when people talk about a family photo, they usually start with one story
and then branch off into many different stories related to that one story. Loom
actually captures the connections between these memories in addition to the
memories themselves. And once we had this idea we wanted to try and explore and
expand it with Yoko and Ichie, a great couple we met at the senior center. So
we quickly developed our prototype, and then asked them to share their family
stories with us and our video is a documentation of the experience they had.
[Music]
>> Next month it will be 58 years.
[Music]
He's 92 and I'm 85.
>> He likes to look at old pictures. Nice memories will come back and so he
enjoys that. Every time he gets a chance, he loves to tell the stories. The
stories helps.
>> My father went up to Mt. Rainier and he made it up. When you look at the
picture, it looks like the tallest one.
>> And I think it's good for our family, our children and grandchildren, to
know what kind of life he had and what kind of hardship he overcame and he never
gave up.
>> My father went up to Mt. Rainier and he made it up.
[Music]
>> He had lots of friends. Although most people passed away. Looking at the
old pictures and he likes to think about people he met all through his life.
[Music]
>> That must be the flag that my father put up there.
[Music]
>> So I think the story comes alive. It's true to them.
>> It was nice talking to you. I'm glad I know all about you now. I know that
-- [laughter] -- bye.
>> So by spending time with Ichie we saw how excited he was to use Loom to
share his stories and recount his family history, but we also learned how to
further develop our concept. In the video we used prototypes made out of old
iPhones and realized that the size was actually a little bit small, which leads
us to imagine that the Loom system could be applied across a whole variety of
larger devices. This means that even old devices could be upcycled by simply
adding a wooden frame to give them more warmth and a friendlier feeling. We
also realized that we take the complexity of touch interaction for granted; for
Ichie, distinguishing between tap and tap-and-hold wasn't really possible. We
decided to minimize the touch interface and focus more on voice direction to
create a more accessible multiple-input experience. The way the system works,
when Ichie tells stories, Loom records them and attaches them to a specific
photo. This content lives in the cloud, and photos can come from existing photo
sharing services such as Facebook or Instagram, which Loom could build on top
of, or people could choose to upload individual photos. Because this content is
in the cloud, Ichie can access the photos not only when he's at home but when
he's in the senior center or visiting family and friends, and having multiple
devices allows for the same type of collaborative activity that prompts
storytelling that the knitting circle we discovered at the senior center does.
And because you can use any number of devices, the system is really flexible in
allowing you to tell a great variety of stories.
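Purely as an illustrative sketch of the kind of record such a system might keep -- every class and field name below is an assumption for illustration, not Loom's actual design -- the photo-plus-story data model could look roughly like this:

# Sketch: a cloud-backed record tying audio stories (and pointing gestures)
# to a photo, so any device can fetch and replay them. All names and fields
# are hypothetical; this is not the team's implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Gesture:
    timestamp_s: float   # seconds into the recording
    x: float             # normalized position on the photo, 0..1
    y: float

@dataclass
class StoryThread:
    speaker: str                              # e.g. "Ichie" or "Yoko"
    audio_url: str                            # recording stored in the cloud
    gestures: List[Gesture] = field(default_factory=list)

@dataclass
class Photo:
    source: str                               # "facebook", "instagram", or "upload"
    image_url: str
    threads: List[StoryThread] = field(default_factory=list)

    def add_story(self, speaker: str, audio_url: str) -> StoryThread:
        thread = StoryThread(speaker=speaker, audio_url=audio_url)
        self.threads.append(thread)
        return thread

# Example: one photo with a jointly told story; because only URLs and metadata
# live in the record, any of Ichie's devices can pull the same content down.
photo = Photo(source="upload", image_url="https://example.com/rainier.jpg")
story = photo.add_story(speaker="Ichie", audio_url="https://example.com/story1.ogg")
story.gestures.append(Gesture(timestamp_s=12.5, x=0.4, y=0.2))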
>> So as you saw in our video we toyed with this idea of arrangements. But
what we discovered was that the placement of the devices isn't really what
matters. What's more important is that Loom captures the individual threads
that connect these stories together. For example, in this photo, when Yoko and
Ichie were telling the story together, the story is linked to the gestures that
they make on the photograph, visualizing the experience of that conversation,
like who told which part of the story and what area of the photograph they
pointed to. This narrative context adds a deeper layer of meaning to the
photos, and these can be played back and shared with family and friends so that
they can get details that are not immediately apparent in that photograph.
Shared across generations, these memories come alive. And we think Loom has the
potential to apply to other contexts such as schools, the workplace or museums
-- really any number of social contexts where a physical object can prompt
conversation. One of the most touching experiences
that we had throughout this process is that moment captured in the video where
Mary says: You talk, I know that you talk now. And it's because she had never
really heard him speak. This is because Ichie has some hearing loss, and
that sometimes prevents him from being able to engage in conversation. But we
really saw him sort of come alive and open up and tell these amazing stories,
with those family photos in front of him, to the point where we basically
couldn't get him to stop talking. And this really speaks to the potential
transformative value of Loom. Like an actual loom, this is a tool you weave
with. And the outcome is a continuously growing fabric composed of the
narrative threads from the most important people in your life. Thank you.
[Applause]
>> Thank you, University of Washington. Who would like to go first?
>> So where are you guys now in the product development phase? Is this
something that's out and people can download it as an app, or where is it?
>> [indiscernible]
>> Still at the conceptual level.
>> To explore this idea.
>> All right. Cool.
>> That's really great. I really find this to be very meaningful. And I
really like it. I always debate, you know, there's -- what you're introducing
is essentially a microphone with a photo. So the technology is actually very,
very simple. But what you're providing is the context for this to occur, and
that's an area that can work or sometimes doesn't work. It really depends on
the thing. Here I think it really can work. I think there's a lot of potential
to the idea, because while a network like, I don't know, Facebook could have
the functionality to record sound over a photo -- that would be a feature for
Facebook -- the contextual nature of having an app that gives you that
closeness to your family is actually the only place where you really want that
depth for most photos, I think, or where you would really consume that
experience. So that's really exciting. I really like the brand. I think you
guys have really nailed the word, it's just so good for it. I just really like
it. And I think the simplicity of it really, really helps, especially with the
demographic that you're working with. So I encourage you to look at -- I
haven't done this research, but any networks that pertain to family that
already exist could be threatening to you, or you might want to become that
network directly, as a thought. But I think the brand would work for both. So
it's very promising, and I'm happy to talk to you guys and those that worked on it.
>> Yeah, very nicely presented and put together project. Very, very engaging
video. I did have to -- it would have been embarrassing if I had cried. That
was lovely. The video was really well done, and a little bit slushy, but that's okay.
The fact that you thought about the inclusiveness of your video and included
subtitles, kind of a good touch. I kept thinking about almost like the literal
out-of-box experience here, and thinking you kind of fudged this a little bit
with "well, the photos can come from anywhere." But really thinking about how
you move from the online repository of photos to how many of these things you
actually have, and how that picture gets on it and what that's like, I feel
like that would be incredibly central to the experience of setting it up. I
think that would feel really important. And I think it's worth spending time on
that: much like you thought about "I can't do all that swiping and stuff" and
had a very simple interaction, how you make that whole setup experience work
would be vital. But
really nice project.
>> Yeah, I commend you guys on the video storytelling, and there's this little
detail, which probably everybody noticed but which I noticed again on viewing
it a second time, which is the fact that the wife of the central participant is
actually narrating the story. You sort of forget the storytelling elegance of
that: it's not you guys telling the story, it's actually somebody who's been
affected by the concept and seen the transformative effect it had on their
partner. So that video is really quite masterful. Congratulations on that. On
the concept, I feel very similar. Because it's a simple idea -- tying the
storytelling to the picture, and having that recording and the sharing and the
sort of longevity of those ideas and stories for a family -- it's such a simple
thing, but there are so many kind of horrible barriers to getting even those
photographs digitized, and where to put them, and the setup experience. Again,
I feel like you definitely skipped over a lot of that detail. Maybe it's all
done; I don't know the extent to which you explored this. But it feels like
much of the power of this idea would come to life in those moments, and that it
could be a very successful service. I'm sure it would have actual real
commercial value if you could get those experiences nailed and really think
about that long user experience, from first touch to, several years later, a
relative stumbling upon and hearing the story for the first time. So
there's some great work here. I encourage you to keep going. It's really
solid.
[Applause]
>> Any questions from the audience? Richard, thanks.
>> This is great. Really enjoyed it. I really like that idea particularly of
giving old devices a second life. That kind of renewal. I think one question
for you is actually about the future, as much as about the system as it is
currently. Because what you're doing to some extent is building a system of
legacy. So there's a question about what happens to all this content and what
happens to all the artifacts when these people unfortunately have kind of passed
away. And I wonder what impact that has on how you think about the service. Is
there a commitment you can make to these kinds of services and systems that
maybe lasts 30, 40, 50 years as these objects become an important central part
of families to reminisce about people who are a part of their ancestry? Have
you thought at all about that, kind of the long term of these kinds of systems?
>> We thought some about that. Mostly that we saw this as an experience for
these older people, but also really as an experience for ourselves too -- like
if you think about your grandparents and the kinds of stories that they have
that you haven't captured, and they're getting older and older. So really any
generation below or after would benefit from that, and so I guess for us it was
also thinking of a different way to archive things that people share with you.
Every time we've shared this concept with people, even the personal places we
came from, the response is, oh, I have a grandparent with Alzheimer's, or I
have a grandparent who was in World War II, and these are all experiences that
start vanishing even before that person has passed away, because those
intricacies just aren't there anymore. So, yes, I don't know if we understood
your question quite correctly, but I guess, yeah, we imagined that no matter
where the technology or systems that would hold this kind of content go, there
would be people who want that there, because you always will care about your
own personal family. And I think even business people who just want to make
money, money, money, they always care about their family, and that I guess is
an incentive.
>> Even them.
>> Continue supporting this type of content. Thinking about it practically, on
the real.
>> Thank you.
>> Hold on, Mike, there was a question in the back. Kate Holmes.
>> Hey, beautiful project. Thanks for sharing. And I was interested to hear a
little bit more about your experience as designers through the course of this
project, because I do believe one of the most important pieces of inclusive
design is the inclusion of people in the process -- that both we as designers
and also the people we work with in our designs change through the process of
creating some kind of solution. So I'm interested to hear what that was like in
your journey; it was clear you had spent some dedicated time understanding and
getting to know the people you shared with us today. So would love to hear more
about that. How was that experience?
>> Yeah, absolutely. It was a really humbling experience to be able to kind of
be welcomed into the center and especially welcomed into Ichie and Yoko's home.
We really had a really nice, like, time doing this. We started to have like a
real meaningful relationship. We started -- we gave them flowers, and then some
of us graduated and they gave us graduation presents and then we cried in the
parking lot, for I kid you not, 15 minutes. So I guess on another note, it was
also kind of a frustrating experience because of how meaningful these short-term
visits and things like that were. It was frustrating that we could only spend X
amount of time dedicated to research and spending time with these people, and
we want to be able to develop longer research cycles or periods for this kind
of work, especially with the idea of designing for accessibility.
>> I think the other thing that was sort of different about our process is that
we have this idea that you have to go in with this preconceived idea and then
you go and test it and see what it's like, and we went there without any idea
what we were going to see. We went there with a completely blank slate just to
spend time there. And the knitting circle, those kinds of things that we
observed turned into the idea itself. I don't know if I explained that very
well. But, yeah, that we didn't already go there with this concept in mind. I
think we really grew from that experience.
>> So along a similar line, I'm fascinated -- I haven't asked you this question
yet, but it's been in the back of my head. So you're probably 20-somethings who
likely fall asleep and the last thing to hit the floor is your cell phone or
your smartphone, right, because like all of us you're probably on it all the
time. And weirdly, the import of photographs to us has seemingly lost some of
its meaning. I mean, I think about an Onion article that said "girl finishes
uploading 1 million pictures from one-week trip to Paris." Like we're just
constantly flashing photographs to the point that they lose a great deal of meaning. When
you're embedded in the ethnographic research in the senior center, not crying in
the parking lot, and you see the importance that these photographs have to these
people, not only as a memory but as a way of knitting together an idea of the
life that they led, did that change your relationship with the way that you're
using photographs on your cell phones and the way you're taking Instagram
photos of your food? Did it change your perception of that slightly?
>> I think maybe we haven't changed our perception, but thinking about the
potential for digital photographs is interesting. Our professor kind of put it
in this way that we're able to kind of make digital photographs physical again
and create artifacts out of them, even just with the photos not with the Loom
system.
>> I guess it kind of maybe has changed our perception or maybe this is
something we knew but kind of realized again: even though, yes, we're taking a
million pictures for a one-week trip to Paris, there's always that couple that
stands out. So for every 50 croissants there's one of me petting this dog that
reminds me of the dog that passed away, that kind of sticks in my mind. I think
for me personally it was realizing -- we both went on a trip recently to Asia
-- even while being there, which of these photos am I going to care about
later? Is it the cool lights in the elevator, or seeing the reaction of my
friends to those cool lights?
>> Thank you. Thank you so much, UW.
[Applause]
>> Oh, this is a wrap. This was really a group effort. It was months of
liaisons and professors and students and volunteers working together. So it's
great to see it all coming together, and we all leave inspired and hopefully
different designers. And we have a showcase tonight. All the groups are going
to be in the showcase together with Imagine Cup students; it's from 6:00 to
8:00 p.m. at the Thunder and Trident here in Building 92, and we also have
another showcase tomorrow. And Lilly wants to say something.
>> So thank you all for coming. Special thanks to a few people. Mike, who
comes here all the time and helps us and has spent the last couple of days
getting feedback.
[Applause]
And Carolyn, I don't know where she is. She's probably out there with Sara.
But say thanks to them on the way out. And special thanks to Melissa. Melissa.
[Applause]
For coordinating this. You were a liaison and you did all the branding and
everything, so thank you very much. It was awesome.
[Applause]
>> Thank you, Lilly. Thank you, Lilly, for being the sponsor, and Curtis Wong
for actually starting Design Expo, and thank you so much to our critics. You guys
were amazing.
[Applause]
And thanks to Steve Kaneko for having the party on Friday. I hope you guys can make
it. We heard that his house looks really nice. Thank you all. This is great.
[Applause]