
Low-Fidelity Prototype
Introduction and Mission Statement
Our system uses the camera functionality of the mobile phone to take a picture of foreign words or text and then
translates it into the user's native language. The purpose of this application is to let people who enter an area
whose language is foreign to them interact with their surroundings comfortably. The experiment we are
conducting will help us discover the flaws in our design, as well as the components the users liked. We wanted
to find out which elements were confusing for the user and how we might make them less confusing, so that we
can build further on our current design and improve the application's usability. Our mission for this experiment
is to improve our design and make it an application that will be practical to use.
Eric Chung - Computer: Played the role of computer and wrote the prototype pages (plus sketches).
Michael So - Greeter/Notetaker: Greeted the user, took down notes, paying special attention to critical
incidents. For this document, wrote the Environment Section under Method, the Results for K, and the
Discussion section.
Henry Su - Greeter/Notetaker: Took down notes, paying special attention to critical incidents. For this
document, wrote the Participants section, the Test Measures, the Results for Alpha and Beta, the Consent
Form, and the list of Critical Incidents in the Appendix.
Jeremy Syn - Facilitator: Introduced the tasks to the participants and led them through the testing, answering
questions when needed.
Everyone did the Demo Script.
Prototype
When the user starts the application, the first screen to appear is the camera screen. This is modeled as closely
as possible on the Android camera application (or the phone's built-in camera application); as currently
designed, it is based on traditional phone camera applications. The picture takes up most of the screen, and
the picture-taking options appear in a strip at the bottom. These include the menu that is typically included
with a camera, the take option (which takes the picture), and the zoom function, which zooms in and out of
the picture. These functions are mapped to the "menu" key (for the pop-up menu), the up and down keys (for
zoom), and the "select" key (for taking the picture). From this screen there are only two ways to reach a
different part of the application: going to the options screen from the pop-up menu (which we will explain
later), and taking the picture.

Taking the picture brings the user to a similar screen whose bottom portion has changed. The picture is still
there, except that the picture just taken is frozen on the screen. Below it are the texts "To: [language]" and
"From: [language]" next to each other, with the "[language]" strings replaced by the names of actual
languages. These determine what language the text will be translated to and what language it is being
translated from. There is a pop-up menu like before, except that the menu options are now geared toward
translation. The link to the options screen is still there (which will be explained later). There is an option to
"retake", which takes the user back to camera mode; a "From: ~" option, which changes the "From:"
language, and a similar option for "To: ~"; a "Save" option, which saves the picture; and a "Send" option,
which lets the user share the picture with friends.

The last option is "Crop". This is a special function that allows the user to select a portion of the picture to
translate, in case the area to be translated isn't being recognized correctly. Once selected, the pop-up menu
grays out and the select button changes to the "begin crop" function. The arrow keys move an on-screen
cursor with which the user chooses a location to begin the crop. After the user presses select (to choose a
starting point), the user moves the cursor with the arrow keys to select the area to crop: as the arrow keys are
pressed, a rectangle is drawn from the original location to the current location, representing the area that
would be cropped. After the select key is pressed again, everything outside the rectangle is grayed out,
indicating a crop, and the select key returns to its "translate" function.

The translate function takes the user to another screen. This screen is like the previous one, except that crop
is no longer an option and the picture cannot be translated again. The picture is still there (with the grayed-out
areas if it was previously cropped). If there was an error recognizing text, an error screen appears suggesting
that the user retake the picture, which returns the user to the second screen. Otherwise, this last screen also
shows the translations (numbered in order of relevancy), with arrows to the side indicating whether one can
scroll up or down with the arrow keys to access more translations (or parts of translations).

The options menu has four options (which can be accessed by typing the corresponding number as well as
with the traditional arrow keys plus select): Favorites (which takes the user to a numbered list of saved
translations with an optional search box), Tourist Guide (which takes the user to a page of information
about a country of interest, like a web page), Select Languages (which allows the user to switch the To and
From languages, as on the second screen we mentioned), and Add/Remove Dictionaries (which takes the
user to a screen of stored dictionaries, with the option to search the internet for more or updated dictionaries
to use with the program).
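
To make the screen flow concrete, the following minimal Java sketch models the key mapping described
above. All names here (Screen, Key, FlowSketch) are our own illustration rather than actual application
code, and the flow is simplified to just the transitions mentioned.

// Hypothetical model of the prototype's screen flow; not actual application code.
enum Screen { CAMERA, PREVIEW, TRANSLATION, OPTIONS }
enum Key { UP, DOWN, SELECT, MENU }

final class FlowSketch {
    Screen current = Screen.CAMERA;

    void handleKey(Key key) {
        switch (current) {
            case CAMERA:
                // Up/down zoom in and out; the screen itself does not change.
                if (key == Key.SELECT) current = Screen.PREVIEW;      // take the picture
                else if (key == Key.MENU) current = Screen.OPTIONS;   // via the pop-up menu
                break;
            case PREVIEW:
                // The pop-up menu here offers Retake, From, To, Save, Send, and Crop.
                if (key == Key.SELECT) current = Screen.TRANSLATION;  // translate
                else if (key == Key.MENU) current = Screen.OPTIONS;   // via the pop-up menu
                break;
            case TRANSLATION:
                // Up/down scroll through the numbered translations.
                if (key == Key.MENU) current = Screen.OPTIONS;
                break;
            case OPTIONS:
                // Favorites, Tourist Guide, Select Languages, Add/Remove Dictionaries.
                break;
        }
    }
}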
Method
Participants:
The first participant, Alpha, was an EECS major. We picked him because he was a translation device user.
Alpha likes movies, and codes in his spare time. He is fairly proficient technology-wise. He dislikes
inconsistent people. The second participant, Beta, was an English major. We picked him because, first, he
is bilingual in English and Spanish, and second, he frequently encounters foreign restaurant menus that he'd
like to translate without asking the waiter or waitress. In terms of technological level, Beta uses a computer
every day, and is competent with programs such as Photoshop, Dreamweaver, and Office applications. He
owns a Blackberry, but only uses its calculator, texting, and calling features. Beta likes videogames, and
dislikes not getting enough sleep. The third participant, K, was a pre-pharmacy chemical biology major. His
technological background is fairly typical: he doesn't code, but he uses Office applications and a media
player, and plays computer games. He only uses his cell phone for calling, and very rarely for taking
pictures. K likes video games and food, and dislikes being rushed. He is an ESL speaker (he learned Korean
before English) and a translation device user, so he fits our user profile nicely.
Environment:
There were two testing environments. One was the Mac lab on the 3rd floor of Soda Hall. The lo-fi
prototype was set up by the Computer (aka Eric) on one of the desks in the lab and chairs were arranged such
that the Computer and user were able to sit comfortably and manipulate the lo-fi prototype. On the
opposite side of the room from where the prototype was being set up, the Greeters/Notetakers (aka Henry
and Michael) greeted the user and handled the other preliminary tasks, such as the consent form and
the user profile. After the preliminary tasks were done, the user was seated in front
of the prototype. The Facilitator (aka Jeremy) stood close beside the user while the Computer sat on the
other side of the user. The second environment was in a user's apartment in the living room. Everyone sat
on the floor. While the Computer was setting up the prototype, the Greeter and Notetaker did the preliminary
work.
Tasks:
The first task that we assigned to our users was to make use of the multiple-translations capability of our
application. When looking up a translation for a certain image, we provide the option of viewing alternative
translations in case the meaning of the words is ambiguous or hard to understand in that particular context.
To look at the multiple translations, the user goes through the normal process of taking a picture and
pressing the translate function to receive the translation of the image. The translations of the text then
appear below the image. The user can scroll through the different translations by pressing the down arrow
key. Arrows appear beside the translations to indicate that the user can scroll down, and a grayed-out arrow
indicates the end of the list.
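
As an illustration of this scrolling behavior, here is a minimal Java sketch; the class name, method names,
and the assumed number of visible rows are hypothetical, not part of the prototype.

import java.util.List;

// Illustrative sketch of scrolling a ranked translation list; names are assumptions.
final class TranslationScroller {
    private final List<String> translations; // numbered in order of relevancy
    private int top = 0;                      // index of the first visible entry
    private static final int VISIBLE = 3;     // assumed number of visible rows

    TranslationScroller(List<String> translations) {
        this.translations = translations;
    }

    void pressDown() { if (downArrowActive()) top++; }
    void pressUp()   { if (upArrowActive()) top--; }

    // Per the design, the down arrow grays out at the end of the list.
    boolean downArrowActive() { return top + VISIBLE < translations.size(); }
    boolean upArrowActive()   { return top > 0; }
}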
The second task that we assigned was to have the user save the translated image to his favorites list. We
provide the ability to store images with their corresponding translations on the phone so they can be viewed
again later. This may happen when the user wants to look for something but doesn't remember what the sign
looks like: he can find the image in the list based on its translation and then look for signs that match the
sign in the image. To do this, while on the screen with the image and its translations, the user clicks the
menu button and selects the save option. The application then notifies the user that the image has been
saved to the favorites list. To bring up the favorites list, the user clicks the menu button at any time and
selects the options menu; the favorites list is the first item in that menu.
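
The save-and-retrieve behavior for this task could look like the following Java sketch; the class, the
methods, and the confirmation string are our own assumptions about the design.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the favorites list; not actual application code.
final class Favorites {
    record Entry(String imageId, String translation) {}
    private final List<Entry> entries = new ArrayList<>();

    // "Save" from the menu stores the picture with its translation and
    // acknowledges the user.
    String save(String imageId, String translation) {
        entries.add(new Entry(imageId, translation));
        return "Saved to Favorites";
    }

    // Favorites is reached via Menu -> Options; the user can search by translation.
    List<Entry> search(String query) {
        return entries.stream()
                .filter(e -> e.translation().toLowerCase().contains(query.toLowerCase()))
                .toList();
    }
}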
The third task that we assigned was to have the user take a picture of a sign and then crop out unnecessary
objects that might interfere with the text-recognition program. This is useful in situations where the
camera is zoomed in as far as it will go and the text still can't quite be centered, with a lot of extra things in
the background such as trees or buildings. To perform this task, the user takes the picture and, on the next
screen, before pressing the translate button, goes to the menu options and selects the "crop" option. A cursor
then appears on the screen, and the user moves it with the arrow buttons. Pressing the middle button begins
the crop, and pressing it again completes the cropped area. The user can then press the translate button to
translate just the area inside the dotted outline.
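
The two-press crop interaction can be modeled as a small state machine; this Java sketch is illustrative
only, and the state names and rectangle math are our own.

// Hypothetical state machine for the crop interaction; not actual application code.
final class CropTool {
    enum State { MOVING, DRAGGING, DONE }
    private State state = State.MOVING;
    private int x, y;            // current cursor position
    private int startX, startY;  // first corner, fixed on the first select press

    void moveCursor(int dx, int dy) {
        if (state != State.DONE) { x += dx; y += dy; }
    }

    // The first press anchors the rectangle; the second press completes the crop.
    void pressSelect() {
        if (state == State.MOVING)        { startX = x; startY = y; state = State.DRAGGING; }
        else if (state == State.DRAGGING) { state = State.DONE; }
    }

    // Rectangle drawn from the anchor to the current cursor position.
    int left()   { return Math.min(startX, x); }
    int top()    { return Math.min(startY, y); }
    int width()  { return Math.abs(x - startX); }
    int height() { return Math.abs(y - startY); }
}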
We chose these tasks because they exercise the application's features and options and let the user interact
directly with the user interface, which allows us to closely observe how users react to our interface.
Procedure:
We started off the experiment with our Greeters greeting our participants and trying to make them feel
comfortable. The participants were then asked to read and sign a consent form. After filling out the form,
our Greeters, Michael and Henry, asked our participants basic questions to help us get an idea of what kind of
background they had with technology and mobile devices. Meanwhile, our Computer was setting up the
interface prototype to prepare for the experiment. After we finished the short interview with the
participant, we introduced our product to him and informed him that we would now begin the experiment. The
Facilitator, Jeremy, then began the experiment by showing the participant a demonstration of how to perform
the most basic function, which was to take a picture of a sign in a foreign language and then get the
translation of the sign. He also showed how to bring up the menu and how to change the language of
the translation. The Computer, Eric, controlled all the mechanics of the prototype: whenever the user pressed
a button on the paper prototype, he would modify the prototype as it would react to that action. After the
demonstration, the Facilitator introduced the three tasks that the user was to perform, and then allowed him to
try to perform them. Meanwhile, our Note-Takers, Michael and Henry, were busily taking notes,
recording any questions or comments from the participant and writing down when the user was
having an easy or a hard time completing a task. When the user was having trouble, Michael and Henry
recorded the sequence of actions the user took, so that we could all examine it afterwards to see the user's
train of thought and decide what changes should be made to the interface accordingly. The Facilitator watched
the user perform these tasks closely and provided assistance if the user was confused and needed help. After
the user finished the tasks, we asked for his feedback and questions about the interface. Once the
participant had given us all his feedback and answered all our questions, we concluded the experiment
by thanking him and assuring him of the confidentiality of his information.
Test Measures:
During the user tests, we looked out for several things. First, we noted exactly when and where the
participant made a mistake. This way, we know what part to analyze when doing the next design iteration.
We also noted the approximate time it took each user to complete each task. This is important because the
longer the user takes, the more likely he or she is having trouble, and so we could concentrate on improving
those areas on our next design. We also noted the steps that the user took to accomplish each of the three
tasks. This is noteworthy because we want to know how users would accomplish the functions we have
provided. It is significant to compare the steps they took versus the steps we the designers feel it should
take. If the steps were more, it tells us that perhaps we need to alert the user of a more convenient path. If
the steps were less, it tells us that there is perhaps a better way that we the designers did not realize and
should enforce as the best way to accomplish the task. Otherwise if the steps were the same, we figure that
our way may not need to be improved. Furthermore, we tried to encourage the user to criticize the interface
after the tests, and we purposely did not try to defend our design. This was a way to elicit as many comments
as possible before we sit down and analyze them, to see if we should incorporate them into our design.
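
For example, comparing a user's observed action sequence against the sequence we expected could be as
simple as the following Java sketch; the method and the example steps are hypothetical.

import java.util.List;

// Illustrative comparison of observed vs. expected step counts; not actual tooling.
final class StepComparison {
    static String assess(List<String> observed, List<String> expected) {
        if (observed.size() > expected.size())
            return "More steps than expected: consider surfacing the shorter path.";
        if (observed.size() < expected.size())
            return "Fewer steps than expected: the user found a path we should adopt.";
        return "Matches the designed path.";
    }

    public static void main(String[] args) {
        // Hypothetical example: a user detours through Options before finding Crop.
        System.out.println(assess(
            List.of("take picture", "Menu", "Options", "Back", "Menu", "Crop"),
            List.of("take picture", "Menu", "Crop")));
    }
}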
Results:
Alpha: When trying to open the Menu, Alpha pressed the Call button instead: the Call button was placed
right under the on-screen "menu" button, so it seemed logical to him that pressing "call" would activate the
Menu, when in fact it doesn't. Also, on the multiple-translations screen, Alpha wasn't sure whether there were
additional translations, as it wasn't obvious whether one could keep scrolling down the list. Regarding the task
about saving the picture, Alpha was confused about when and where the picture was saved, because after
pressing "save", there was no indication. This was further aggravated by the fact that although there was a
"save" button on the Menu, there was no "load" button (one had to go to Menu->Options to go to the
Favorites list). As for the cropping task, Alpha was confused about how to activate the cropping feature. Alpha
accidentally went into Menu->Options, because he didn't see anything named "crop" in the Menu pop-up (it
was actually named "select"). Alpha made a lot of comments about the interface after the user test, too. He
observed that in the favorite translations list, if you pressed a number, it was ambiguous whether it would
select the corresponding translation in the list, or if the number would go into the search box. Also, Alpha
thought that it might be a nuisance if there was no indication of what language the translation was for, in the
favorites list. This would be especially annoying if multiple translations were saved under the same English
word. Alpha also pointed out that the "select language" button in Menu->Options sounded like it would
change the application's interface language. Furthermore, Alpha noticed that the translate to/from language
could be changed from two different locations, and whether changing one affects the other is ambiguous.
Beta: On the first task (multiple translations), Beta did not know where to go after selecting the translation
language (was it Home, or Back or...?). Beta did not have any issues with the task about saving/loading from
the Favorites list, however. For the cropping task, Beta tried to zoom first, then took the picture. He
proceeded to Menu->Options, and reached a dead end. When trying to get out of Options, Beta was confused
whether to use Back or Cancel. Beta then realized that there was a "select" button at the bottom of the Menu
list that is used for cropping.
K: K questioned the usefulness of our application. If someone were on vacation in a foreign country, they
would have a tour guide to provide the necessary translations; if someone were in a foreign restaurant, they
could ask the restaurant owner or a worker what to order, or for translations of the food on offer. K also
questioned the need for a cropping tool: he pointed out that the zoom function provided on the camera
interface would be enough to capture the text the user wants translated. K asked whether our translation
application could recognize what language the text in the picture is without the user explicitly saying so.
K was confused about how to get out of the pop-up menu after he hit Menu. K wondered what would happen
on the Favorites List screen if there were two pictures that translated to the same word (e.g. "barbershop").
K also wondered whether the translation device would be able to translate text that is skewed or rotated in
the picture. For the cropping task, K was unsure how to crop the image using the arrow keys.
Discussion
Alpha: Regarding saving the picture: when the user saves a picture to his Favorites list, the interface
will be changed so that a window pops up alerting the user that the image was saved and where it was
saved (i.e. "Saved to Favorites"). When Alpha was attempting the cropping task, the label "Select" in the
pop-up menu did not register to him as the cropping function, so we decided to change "Select" to
"Crop". In response to Alpha's comment on the Favorites screen, the list will no longer be numbered. This
will avoid the confusion a user may have when pressing a number.
Beta: Beta's test results do not add to the list of changes that our next iteration will incorporate (all of Beta's
problems were experienced by another user already, and accordingly will be resolved in the next design).
K: Even though K does not see our application as being useful in real life, we feel he has not considered
other factors. On a tour in a foreign country, the tour guide may not always be accessible, and there may be
times when the tourists are released on their own, or when a tourist wants to venture out on his or her own.
At those times, our mobile application will prove worth using. And at a foreign restaurant, there may be
instances where the restaurant owner or workers are unable to communicate because they do not speak or
write the customer's language. In these instances, using our application would prove more convenient.
Basically, our application will prove to be a convenient alternative for translations. Therefore our user
interface should be designed to maximize the convenience and minimize the hassle of the interaction.
K did think the zoom feature made the crop feature obsolete. It is true that the zoom function can remove
unwanted scenery from the picture, but zooming loses the precision and control offered by our crop feature.
The zoom function typically reserves the up and down arrow keys for zooming in and out, respectively, so if
the user zooms in and wishes to re-center the image so that the text to be translated is in a more suitable
position, the user is unable to do so. Unless we remapped the zoom keys and added keys for re-centering the
image, zooming would not be sufficient to frame the image the way a user may want. Therefore the crop
feature will remain, giving the user precision over what text is to be translated.
Note: the experiment does not reveal the speed of the internal mechanics of our application.
Appendix
Critical Incidents logged by Observers:
Interview #1:
-accidentally pressed phone call button for Menu
-when viewing multiple translations, pressed center button, but it did nothing
-did not see any indication of the end of the translations list
-for the saving picture task, did not know when the picture got saved
-took a while to figure out that Favorites list was in Menu->Options
-for the cropping task, went to Menu->Options, and had to go back because it was the wrong place
Interview #2:
-for the multiple translations task, did not know where to go after selecting translation language
-tried Home and Back
-for cropping task, tried to zoom first
-then went to Menu->Options, and had to go back because "crop" was not under Options
-did not know whether Back or Cancel should be used to get out of Options
Interview #3:
-thought the device would be able to recognize the language automatically
-for the multiple translations task, first tried to zoom
-then pressed Menu, and realized that it was not fruitful
-tried getting out of Menu by pressing Menu again
-for the Favorites task, did not see anything like "retrieve" in Menu
-took a while to realize that Favorites was in Menu->Options
-was worried that if there was another picture of a barbershop saved, then would not know which one is the
right language
-for the cropping task, thought that zooming would be sufficient
-was confused whether there was a difference between Menu->Options in Camera Mode and Translation
Mode
-was worried that the picture's angle may affect the translation results
-overall, was bothered that he needed to repeatedly push Up and Down to navigate the Menu
INFORMED CONSENT FORM
You are invited to participate in a study of user interface design. We hope to learn how usable and effective
our prototype is. You were selected as a possible participant in this study because you might be a potential
user of our application.
If you decide to participate, we will have you complete a series of tasks that the application will support,
using our paper prototype. The entire procedure should take less than 45 minutes. Physical discomforts may
include paper-cuts, because you will be shifting around index cards. Emotional discomfort may include
slight psychological uneasiness from the pictures on the index cards. There are no direct benefits, but you
may learn something new from this experiment, and if our product is ever commercialized, you will know
that you have helped us.
Any information that is obtained in connection with this study and that can be identified with you
will remain confidential and will be disclosed only with your permission. Some anonymous information will
be submitted to the professor, as this is a class project. Although we will try our best to keep your personal
information confidential, we cannot make any guarantees, and in the event that your personal information is
leaked, we will try to notify you as soon as possible.
Your decision whether or not to participate will not prejudice your future relation with the University of
California at Berkeley. If you decide to participate, you are free to discontinue participation at any time
without prejudice.
If you have any questions, please do not hesitate to contact us. If you have any additional questions later,
please contact Henry Su at henrysu@berkeley.edu who will be happy to answer them.
You will be offered a copy of this form to keep.
_______________________________________________________________________
You are making a decision whether or not to participate. Your signature indicates that you have
read the information provided above and have decided to participate. You may withdraw at any
time without penalty or loss of benefits to which you may be entitled after signing this form
should you choose to discontinue participation in this study.
_____________________________________ __________________________
Signature
Date
Eric Chung, Michael So, Henry Su, Jeremy Syn
DEMO SCRIPT
Henry: Hi. I'm Henry, and I will be a note-taker.
Michael: Hi. I'm Michael, and I will also be a note-taker.
Eric: I'm Eric, and I will be acting as the computer.
Jeremy: I'm Jeremy, and I will be the facilitator, the only person you can talk to when you
are working on the tasks.
Michael: You will be the user. We appreciate your participation in our experiment. Would you
like some candy or refreshments?
Henry: Here's a consent form for you to read. If you agree, please sign it. Here's a pen.
...After filling out form and getting comfortable...
Michael: OK, we need some information about you to complete your user profile.
Henry: First, what is your major?
(Wait for answer)
Michael: What is your technological background?
(Wait for answer)
Henry: What are your likes and dislikes?
(Wait for answer)
Michael: Can you classify yourself as a {ESL, vacationer, English speaker/reader who wants to
read a restaurant menu written solely in a foreign language, but doesn't understand the
language completely, translation device user, or an everyday person who may want to
communicate in any area with a foreign language}?
(Wait for answer)
Henry: OK, now we can continue with the experiment.
Michael: As you've read in the consent form, by participating in this experiment, you will be
helping us evaluate a user interface design. Our application is a mobile application that is a
point and shoot translation device.
Jeremy: First, we will demo a simple sequence of taking a picture and obtaining a translation.
So first I start the application, and it boots up in camera mode. I aim at some foreign text I can't read,
and then I click the middle button to take the picture. Then we get to another screen to manage the
image. Currently the language is set to read from the
wrong language so I will now change the language to read from. So I press the MENU button,
and a menu will pop up. I select the option "From: ~", and it brings up a list of languages to
choose from. I move the cursor to "Korean" and then I select it using the middle button. On
the bottom of the screen you can now see the "From:" text field changed from "French" to
"Korean". I click the middle button to translate the text in the image we have. We can now
see the translation of the text on the bottom of the screen. And that concludes our simple
demonstration. Okay, now we're going to assign you three tasks to perform on our device.
First, we want you to take a picture of some text, as I just did, and then look for multiple
translations of that same text. We will busily take notes during this time.
(User proceeds to perform the task)
(Answer any questions asked)
Jeremy: Okay now we want you to save the picture and translation into the device, and then
once that's done, bring up the picture and translation from your favorites list.
(User proceeds to perform the task)
Jeremy: Lastly, we want you to take a picture and then crop around the text to filter out all
unnecessary objects that may interfere with recognizing the text.
(User proceeds to perform the task)
Henry: Thank you for your participation. The information we extract from this experiment is
very valuable to our project.