SEMIOTICS IN THE CONTEXT OF USABILITY STUDY
Jan Brejcha
jan.brejcha@ff.cuni.cz
Charles University, Prague
Faculty of Arts
Institute of Information Studies and Librarianship
http://jan.brejcha.name
Curriculum Vitae
Jan Brejcha has been a Ph.D. student at the Institute of Information Studies and Librarianship of
Charles University since 2007. His main interest is the semiotic evaluation of user interfaces.
Abstract
In the design and testing of user interfaces (UIs), humanities disciplines play an ever growing
role, as they place ICT systems in their social context. This paper examines current methods of
user study in comparison with semiotic analysis, which aims to emancipate the users, designers,
and programmers of interactive systems. We argue that UIs built according to heuristics
(guidelines, manuals) may be syntactically valid but semantically flawed, as they often fail to
communicate the intended meaning. Proper usability testing is expensive in both time and
resources. We prepare the ground for experimentally testing the hypothesis that a theory-based
analysis tool would produce usability assessments faster, with fewer resources, and with better
results than usability testing.
1 Introduction
The current fast-paced development of Information and Communication Technologies (ICT)
is driven mainly by technical professions, which often lack the means for a deeper reflection
on the results of implementation in the social context. The absence of such reflection may
lead to ill-tailored, ineffective, difficult-to-use, and sometimes even dangerous applications.
Therefore, interdisciplinary groups of specialists should participate in the development of
ICT.
This trend leads to incorporating experts from the humanities into development teams, which
allows implicit knowledge from other domains to be used [Mackay and Fayard, 1997]. The
main reason for this is the shift of emphasis from how something works to why it works in the
human context, which is the basis for evaluation. The proposed research focuses on the
meaning of technology products, the values they represent, and the user experience they
convey. For this purpose, the point of departure of ICT design needs to be de-constructed
(and re-interpreted).
We argue that a research and evaluation method grounded in semiotics can provide the
missing bridge between the needs of the different participants in the HCI community.
Among HCI practitioners in industry, there is a strong need for cost-effective usability
evaluation, which has led to the development of discount usability evaluation methods that
can be applied from the first iterations of a product. These methods, however, do not always
follow the human-machine interaction from a narrative or discourse standpoint
[Connolly et al., 2006], which contrasts with the way people tend to think.
2 Evaluation approaches and methods
Currently, usability evaluation methods can be divided into three main groups:
(1) analytic evaluation, (2) usability testing, and (3) field studies [Sharp et al., 2007, 591]. In
Table 1 we show the general methods, as well as their main representatives.
Table 1. General usability evaluation methods.

General method        Main representatives
Analytic evaluation   Heuristic evaluation; Cognitive walkthrough; Semiotic inspection method
Usability testing     Usability testing; Communicability evaluation
Field studies         Field studies
Among the most widely known in the HCI community are heuristic evaluation
[Nielsen and Molich, 1990], cognitive walkthrough [Polson et al., 1992], and usability testing
[Whiteside et al., 1988]. While these approaches, together with their variants, have been
widely adopted, they provide a framework neither for analyzing the intrinsic values of the UI
nor for analyzing its possible interpretations.
It is mainly to allow for this different take that the semiotic engineering approach has evolved
[de Souza, 2005]. It is based on the idea of analyzing the signs [Peirce, 1955; Andersen,
1997], codes, messages, and discourses that take place in the communication between
designers, computers, and users. In the Peircean tradition, a sign is anything that represents, or
stands for, something in one's perspective. In the UI, signs can be icons, buttons, menus,
windows, pointers, etc. Semiotic engineering also looks into the meta-communication that
takes place during the user's interaction with the system. According to this view, the system is
built according to the designer's understanding of the user's needs. This understanding is
encoded in all parts of the UI, and when the system is used, it speaks on the designer's
behalf.
In the semiotic inspection method [de Souza et al., 2006], which forms part of the above
approach and follows the analytic inspection tradition, the evaluator examines how the intended
message gets through to the user by means of help, documentation, and static and dynamic
interaction signs. This is done by (a) examining signs in documentation and help contents,
(b) examining static interface signs, (c) examining dynamic interaction signs, (d) collating and
comparing meta-communication messages, and (e) appreciating the quality of the overall
designer-to-user meta-communication. In the communicability evaluation [Prates et al.,
2000], which mimics the usability testing method, a video recording of the user is analyzed by (a)
tagging the communication breakdowns with a predefined set of utterances, (b) mapping
the tags to typical HCI problems, and (c) semiotic profiling, in which an
expert evaluator reconstructs the original designer's meta-communication.
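To make the tagging and mapping steps concrete, the following minimal sketch tags breakdowns on a recording timeline and interprets them as typical HCI problems. Only a few of the predefined utterances from Prates et al. [2000] are listed, and the tag-to-problem mapping is our illustrative assumption, not part of the method's specification.

```python
# A minimal sketch of steps (a) and (b) of communicability evaluation.
# The utterances are a subset of the predefined set of Prates et al. [2000];
# the breakdown data and the problem mapping are hypothetical.

from dataclasses import dataclass

# Illustrative mapping from tags to typical HCI problem classes (assumption).
TAG_TO_PROBLEM = {
    "Where is it?": "sign is hard to find (navigation/layout)",
    "What's this?": "meaning of the sign is unclear (labeling)",
    "Oops!": "unintended action (feedback/affordance)",
    "I give up.": "task abandoned (severe breakdown)",
}

@dataclass
class Breakdown:
    timestamp: float  # seconds into the video recording
    tag: str          # one of the predefined utterances

def map_breakdowns(breakdowns):
    """Step (b): interpret tagged breakdowns as typical HCI problems."""
    return [(b.timestamp, b.tag, TAG_TO_PROBLEM[b.tag]) for b in breakdowns]

session = [Breakdown(12.5, "Where is it?"), Breakdown(47.0, "Oops!")]
for t, tag, problem in map_breakdowns(session):
    print(f"{t:>6.1f}s  {tag:<14} -> {problem}")
```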
Each method gathers a different type of data and is therefore suitable for a different purpose.
However, practitioners seldom have the information needed to choose the right method
for their work. There is also a lack of theoretical background for applying a method
reliably and consistently. For this reason, it is important to have a formal method for
comparing the outcomes of different usability evaluation methods. One project we can
rely on to document the actual usefulness of our approach is the model proposed by
Andre [2000; Hartson et al., 2003]. His model is based on the work of Bastien and Scapin
[1995] and Sears [1997], and identifies three comparison measures:
(1) reliability: evaluation results should be independent of the individual performing the
usability evaluation; (2) thoroughness: the results should contain as many usability problems
as possible; and (3) validity: the problems found should be real problems.
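Hartson et al. [2003] express thoroughness and validity as simple ratios over sets of problems, which the following sketch computes; the problem sets here are hypothetical, and reliability is omitted since it is measured as agreement across evaluators rather than over a single result set.

```python
# A minimal sketch of the thoroughness and validity measures, following the
# ratio definitions in Hartson et al. [2003]; the problem sets are made up.

def thoroughness(reported, real):
    """Share of the real problems that the method actually found."""
    return len(reported & real) / len(real)

def validity(reported, real):
    """Share of the reported problems that are real problems."""
    return len(reported & real) / len(reported)

real_problems = {"P1", "P2", "P3", "P4"}   # problems known to exist
reported = {"P1", "P2", "P5"}              # problems a method reported

print(thoroughness(reported, real_problems))  # 2/4 = 0.50
print(validity(reported, real_problems))      # 2/3, approx. 0.67
```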
3 Research problem and future work
Following this work, we propose to analyze the interaction language that is inherent to each
UI. By this we mean the pragmatics (i.e., the goal or concept), semantics (i.e., the meaning and
functionality), and syntactics (i.e., the grammar used to create sentences) involved in building
such an interface. This is discussed in the computing domain, e.g., by Foley [1995], Mullet and
Sano [1995], and Winograd and Flores [1987]. Other relevant sources include the theory of
speech acts of Austin [1976], Searle [1969], and Grice [1975], and the concept of language
games of Wittgenstein [1953].
We argue that UIs built according to heuristics may be syntactically valid but semantically
flawed, as they often fail to communicate the intended meaning, e.g., through internal
inconsistency or the choice of improper conceptual models.
To deal with this problem, we propose building a lightweight software tool for gathering
contextual (semantic) annotations. Using such a tool, the designer's intended meaning and
the user's interpretation of that meaning could easily be compared, and the design modified
in a later iteration. The evaluation would take place in an interaction timeline
environment, where user comments and the corresponding UI hierarchy (i.e., the position on
the interaction path, together with the related time-stamp) would be captured.
Figure 1: Mockup of semantic annotation using expandable tags
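To make the captured data concrete, the following minimal sketch shows one possible shape of an annotation record on the interaction timeline; all names and fields are our assumptions, since the paper does not fix a data model.

```python
# A minimal sketch of the annotation record the proposed tool could capture;
# the field names are assumptions, not part of the paper's design.

from dataclasses import dataclass

@dataclass
class Annotation:
    timestamp: float  # seconds from the start of the session
    ui_path: str      # position in the UI hierarchy / interaction path
    tag: str          # picture/button/text tag chosen by the user
    comment: str      # the user's free-form annotation

note = Annotation(
    timestamp=132.4,
    ui_path="MainWindow/Toolbar/ExportButton",
    tag="button",
    comment="The icon suggests printing, not exporting.",
)
```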
The tool would support both aforementioned methods, semiotic inspection and
communicability evaluation, by allowing the user to tag (e.g., with a picture/button/text tag)
the UI signs she has problems with (e.g., uninformative or misleading text, or a breakdown of
the intended interaction) and to add an annotation to this tag. In the analytic inspection
scenario, an evaluator would use the tool to mark the parts of the discourse between the
designer, user, and system, in order to analyze the flow of the communication through a
semiotic and/or content analysis. In the experimental (communicability) evaluation scenario,
the tool would gather annotation data from different users, which would then be grouped and
mapped onto the usability issues in focus at the time of testing, as sketched below. From these
we could generate a report and suggest possible improvements to the UI. One benefit of user
annotation is that the user is not put under examination conditions, where stress can play a
major role in the quality of the gathered data. Instead, the user can comment on the UI at her
own pace and in her own environment, which promotes a sense of anonymity and security.
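One way to perform the grouping step is sketched below, reusing the Annotation record from above: annotations from several users are clustered by their position in the UI hierarchy so that recurring breakdowns stand out in the report. Grouping by UI path is our assumption; the paper leaves the grouping criterion open.

```python
# A minimal sketch of grouping annotations from different users by UI
# location and listing the recurring ones for the report (assumption:
# recurrence across users indicates a usability issue worth reporting).

from collections import defaultdict

def group_annotations(annotations):
    groups = defaultdict(list)
    for a in annotations:
        groups[a.ui_path].append(a)
    return groups

def report(annotations, min_count=2):
    """Print UI locations annotated at least `min_count` times."""
    for path, items in group_annotations(annotations).items():
        if len(items) >= min_count:
            print(f"{path}: {len(items)} annotations")
            for a in items:
                print(f"  [{a.tag}] {a.comment}")
```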
During development, the results from such a supportive tool should be evaluated against
the results of other methods in terms of reliability, thoroughness, and validity, as mentioned
above, to produce a clear comparison. Our hypothesis is that it would produce better, more
detailed results while saving time and resources compared to the widely used methods.
To preserve the computer-based semantic data, such as the type of the tagged object or its
position in the UI hierarchy, a UI description markup language (e.g., XML-based) should be
used. The majority of currently marketed annotation tools rely solely on screen capture, which
strips away the semantic meta-data that could be used to better contextualize the
meta-communication evaluation.
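As an illustration of preserving such semantic meta-data, the following sketch serializes an annotation to an XML element that keeps the object type and UI-hierarchy position alongside the comment; the element and attribute names are our assumptions, not an existing UI description language.

```python
# A minimal sketch of serializing an annotation to XML so the semantic
# meta-data survives alongside the comment; the names are assumptions.

import xml.etree.ElementTree as ET

def to_xml(annotation):
    elem = ET.Element("annotation", {
        "timestamp": str(annotation.timestamp),
        "ui-path": annotation.ui_path,  # position in the UI hierarchy
        "tag": annotation.tag,          # type of the tagged object
    })
    elem.text = annotation.comment
    return ET.tostring(elem, encoding="unicode")

# Produces, e.g.:
# <annotation timestamp="132.4" ui-path="MainWindow/Toolbar/ExportButton"
#             tag="button">The icon suggests printing, not exporting.</annotation>
```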
4 Final remarks
This paper has presented a summary of the work done in the field of semiotic evaluation, as
well as the current research goals set to build a firm ground for further developing this
approach. The proposed tool could be used to streamline the gathering of qualitative data for
further semiotic analysis by semiotic engineering evaluators. The limitation of the proposed
approach is that, in order to be effective, it must be carried out by experts in both usability
and semiotic engineering.
References
1. ANDERSEN, P. B. (1997). Theory of computer semiotics. Cambridge University
Press.
2. ANDRE, T. S. (2000). Determining the Effectiveness of the Usability Problem
Inspector: A Theory-Based Model and Tool for Finding Usability Problems. Ph.D.
dissertation, Virginia Polytechnic Institute and State University.
3. AUSTIN, J. L. (1976). How to do things with words: The William James lectures
delivered at Harvard University in 1955. Oxford University Press, Oxford.
4. BASTIEN, J. M. C., & SCAPIN, D. L. (1995). Evaluating a user interface with
ergonomic criteria. International Journal of Human-Computer Interaction, 7(2), 105-121.
5. CONNOLLY, J. H., CHAMBERLAIN, A., & PHILLIPS, I. W. (2006). A discourse-based approach to human-computer communication. Semiotica, 2006(160), 203-217.
6. DE SOUZA, C. S. (2005). The semiotic engineering of human-computer interaction.
MIT Press.
7. DE SOUZA, C. S., LEITÃO, C. F., PRATES, R. O., & DA SILVA, E. J. (2006). The
semiotic inspection method. In Proceedings of the VII Brazilian Symposium on Human
Factors in Computing Systems (Natal, RN, Brazil, November 19-22, 2006). IHC '06,
vol. 323. ACM, New York, NY, 148-157.
8. FOLEY, J. D. (1995). Computer graphics: Principles and practice. Addison-Wesley
Professional.
9. GRICE, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax
and Semantics, Volume 3: Speech Acts. Academic Press, New York, 41-58.
10. HARTSON, H. R., ANDRE, T. S., & WILLIGES, R. C. (2003). Criteria for
evaluating usability evaluation methods. International Journal of Human-Computer
Interaction, 15(1), 145-181.
11. MACKAY, W. E., & FAYARD, A. L. (1997). HCI, natural science and design: A
framework for triangulation across disciplines. Proceedings of the Conference on
Designing Interactive Systems: Processes, Practices, Methods, and Techniques, 223-234.
12. MULLET, K. and SANO, D. (1995). Designing Visual Interfaces: Communication
Oriented Techniques, Sunsoft Press.
13. NIELSEN, J., & MOLICH, R. (1990). Heuristic Evaluation of User Interfaces. CHI
1990 Proceedings, 1-5.
14. PEIRCE, C. S. (1955). Logic as semiotic: The theory of signs. Philosophical Writings
of Peirce, 98-119.
15. POLSON, P. G., LEWIS, C., RIEMAN, J., & WHARTON, C. (1992). Cognitive
walkthroughs: A method for theory-based evaluation of user interfaces. International
Journal of Man-Machine Studies, 36(5), 741-773.
16. PRATES, R. O., BARBOSA, S. D., and DE SOUZA, C. S. 2000. A case study for
evaluating interface design through communicability. In Proceedings of the 3rd
Conference on Designing interactive Systems: Processes, Practices, Methods, and
Techniques (New York City, New York, United States, August 17 - 19, 2000). D.
Boyarski and W. A. Kellogg, Eds. DIS '00. ACM, New York, NY, 308-316.
17. SEARLE, J. R. (1969). Speech acts: An essay in the philosophy of language.
Cambridge University Press.
18. SEARS, A. (1997). Heuristic Walkthroughs: Finding the Problems Without the
Noise. International Journal of Human-Computer Interaction, 9(3), 213-234.
19. SHARP, H., ROGERS, Y., & PREECE, J. (2007). Interaction Design:
Beyond Human-Computer Interaction. Wiley.
20. WINOGRAD, T., & FLORES, F. (1987). Understanding computers and cognition: A
new foundation for design. Addison-Wesley.
21. WITTGENSTEIN, L. (1953). Philosophical Investigations, trans. G. E. M. Anscombe.
New York.
22. WHITESIDE, J., BENNETT, J., and HOLTZBLATT, K. (1988). Usability
engineering: Our experience and evolution. In Helander, M. (Ed.), Handbook of
Human-Computer Interaction, North-Holland, Amsterdam, 791-817.