
LITERATURE REVIEW
GIUSEPPE FRAU
DEEP BLUE SRL
UNIVERSITÀ DI ROMA 3
INTRODUCTION AND PROBLEM STATEMENT
LITERATURE REVIEW
DESCRIPTION OF AUTOMATION
INTERACTING WITH AUTOMATION
SITUATION AWARENESS
FEEDBACK – WARNING – ATTENTION – BOREDOM
WORKLOAD
INTERACTION TECHNIQUES
ENABLER TECHNOLOGIES
REFERENCES
INTRODUCTION AND PROBLEM STATEMENT
Overcoming the traditional human-automation issues is a challenge that has
been addressed by a large body of research. Design choices such as selecting the
right level of automation (or opting for adaptive automation) and correctly
distributing roles and tasks between the human and the system are important
decisions that deserve robust foundations in order to “build the system right”.
However, what do we do after those choices are made? How does the automation
affect the actual interaction with the operator, and how do we cope with that?
Choosing the right level of automation does not guarantee a sound interaction;
neither does the right task allocation. Failures can still occur in the actual
interaction due to poorly designed interfaces.
It is necessary to map the system design choices to the most suitable
interaction techniques and user-interface design methods. Moreover, in recent
years several technologies have matured, enabling and making accessible new
means of interaction.
We are in the middle of three big innovation trends: a first trend represented
by the predictable progress of hardware (Moore’s Law and [3]), a second one,
in user interfaces, characterized by “long periods of stability interrupted by
rapid changes” [3], and finally the ATM one, involved in the major innovation
started with SESAR.
Figure 1 - Qualitative representation of HW, user interface and ATM progress over time
This literature review will lay the foundations for my research project, which
aims to answer the above-mentioned questions:
- What is the impact of automation on the actual interaction and user
interfaces?
- How do we link the system design choices to the suitable interaction
techniques?
- How do we take advantage of the new interaction techniques?
LITERATURE REVIEW
The Human-Automation Interaction research field has made a wide effort to
produce theoretical frameworks and guidelines for the design of highly
automated systems. Researchers and designers have been trying to use these
results to develop effective proofs of the actual correctness of the underlying
theories.
This literature review will go through the main theoretical findings of the
automation field and through some of the main practical realizations, trying to
highlight the gaps between these two worlds.
DESCRIPTION OF AUTOMATION
We can identify several dimensions of automation. In [16], for example, O’Hara
and Higgins identified six dimensions (applied to the nuclear-power-plant
case): levels, functions, processes, modes, flexibility and reliability. However,
not all of them are applicable in every field.
As for the level dimension of automation, the work of Parasuraman [17] is
widely used as the framework of reference, at least in the ATM world.
Parasuraman identifies four broad classes of functions to which automation can
be applied, and highlights how automation is represented by a continuum of
solutions from manual control to fully automated systems.
Figure 2 - Four classes of functions in Parasuraman work
The authors also propose a ten-level granularity of automation, mainly for the
Decision and Action Selection stages.
Figure 3 - Levels of automation for Decision and Action Selection
Two broad sets of criteria are then identified to help designers select the most
suitable levels of automation: human performance is the first criterion, while
automation reliability and the cost of decision and action consequences
represent the second.
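As a concrete anchor, the ten-level scale referenced above can be written down as a simple lookup table; the wording below is a paraphrase of the levels popularized by Sheridan and Verplank and reused in [17], and the helper predicate is an illustrative assumption rather than part of the model:

```python
# Paraphrased ten-level scale for the Decision and Action Selection stage [17].
LEVELS_OF_AUTOMATION = {
    1: "The computer offers no assistance; the human takes all decisions and actions",
    2: "The computer offers a complete set of decision/action alternatives",
    3: "The computer narrows the selection down to a few alternatives",
    4: "The computer suggests one alternative",
    5: "The computer executes that suggestion if the human approves",
    6: "The computer allows the human a restricted time to veto before executing",
    7: "The computer executes automatically, then necessarily informs the human",
    8: "The computer informs the human only if asked",
    9: "The computer informs the human only if it decides to",
    10: "The computer decides everything and acts autonomously, ignoring the human",
}

def requires_human_consent(level: int) -> bool:
    """Illustrative cut-off: up to level 5 the human is still in the
    decision loop before any action is executed."""
    return level <= 5
```

A designer can use such a table in early design discussions to name the level under consideration unambiguously.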
As the authors state, their model does not claim to provide comprehensive
design principles, but it can be used in the early stages of design to get
support about what types and levels of automation to implement and what issues
designers are more likely to face as consequences of their choices. Moreover,
several limitations of the practical applicability of this work (to some extent
anticipated by the authors) have been found recently [21]. Save and Feuerberg
developed a new level-of-automation taxonomy after trying to apply the
traditional automation frameworks to concrete automation examples. They
noticed that the traditional theories are still useful to understand the
variable nature of the automation support, but sometimes not applicable to
concrete cases, basically because specific levels of automation are not
specified for each class of functions. The authors argue that each class of
functions (Information Acquisition, Information Analysis, Decision and Action
Selection, and Action Implementation) has its own levels of automation, and
that these differ both in the number of levels and in the nature of the
support. A limitation of this approach is the loss of comparability between the
different functions: if each function has its own levels, it is no longer clear
how a level of one function can be directly compared with a level of a
different function.
The same seminal work of Parasuraman [17] can also represent the basis for the
function dimension of automation. The classes of functions to which automation
can be applied are identified as information acquisition, information analysis,
decision and action selection, and action implementation. The automated
functions are mapped onto the cognitive functions of the human being. Similar
functions have been identified in the field of nuclear power plant automation.
O’Hara [16], for example, refers to a set of five functions classified as
follows:

- Monitoring and detection: the set of activities used to extract information
and establish the status of the plant.
- Situation assessment: the activities involved in evaluating the current
condition, that is, deciding whether the status of the system is acceptable
and, if not, what the reasons are.
- Response planning: deciding which actions are considered suitable to be
implemented in the current situation.
- Response implementation: the actual execution of the decided actions.
- Interface management: the actions required to configure the human-machine
interface, navigating and arranging information.
The automation mode dimension would deserve a dedicated literature review in
itself, as there is no commonly accepted precise definition of the word “mode”.
At the risk of being simplistic, we can use O’Hara’s statement: “modes involve
performing the same functions in different ways. They provide capacity for a
system to do different tasks or to accomplish the same task using different
strategies under changing conditions”. Jamieson and Vicente [12] provide a
sufficient overview of the mode challenges, such as mode transition design and
mode awareness, and they offer a set of principles for automation design facing
those issues. Degani [6] defines a mode as “the manner of behaviour of a given
system”, in which a behaviour is defined by the machine’s input, output, and
state as a function of time. In his definition, Degani points out that a
mode/behaviour can be imposed either by the machine or by the operator. He
classifies modes according to two attributes: in terms of functions (for
presenting different interface formats and for allowing different control
behaviours) and in terms of transitions (those engaged by the operator, the
automatic mode transitions, and mixed manual/automatic transitions).
Figure 4 - Observed frequencies of Mode Types
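Degani’s view of a mode as a manner of behaviour with operator-engaged, automatic and mixed transitions can be illustrated with a minimal state-machine sketch; the mode names and transition rules below are hypothetical examples, not data from [6]:

```python
# Hypothetical sketch of Degani's mode classification: each transition is
# tagged as MANUAL (operator-engaged), AUTOMATIC (machine-engaged), or BOTH.
MANUAL, AUTOMATIC, BOTH = "manual", "automatic", "both"

# (current_mode, event) -> (next_mode, transition_type); autopilot-like example
TRANSITIONS = {
    ("HEADING", "pilot_selects_nav"): ("NAV", MANUAL),
    ("NAV", "glideslope_captured"): ("APPROACH", AUTOMATIC),
    ("APPROACH", "go_around"): ("HEADING", BOTH),
}

def step(mode: str, event: str):
    """Return the next mode and who engaged the transition;
    stay in the current mode if the event is not handled."""
    return TRANSITIONS.get((mode, event), (mode, None))
```

Tagging each transition this way makes explicit which mode changes can surprise the operator, namely the automatic ones.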
The flexibility dimension regards the capacity of automation to change the
responsibility and the allocation of a certain task or activity. Flexibility
can be characterized as:
- a change in the level of automation for a given task, depending on some
context parameters or operator selections;
- delegation: a change in the task allocation of a subset of tasks. For
example, a task belonging to the action implementation class can be dynamically
assigned to the operator or to the system on the basis of context information
or, again, of operator selections.
Flexible automation may have effects on several dimensions of human
performance such as mental workload, situation awareness and user acceptance
[13].
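The two forms of flexibility above can be sketched as simple policies that pick a level of automation, or a task owner, from context parameters; all thresholds, parameter names and level numbers are illustrative assumptions:

```python
def select_automation_level(workload: float, reliability: float) -> int:
    """Hypothetical adaptive-automation policy: raise the level of automation
    when operator workload is high, but only if the automation is reliable."""
    if workload > 0.8 and reliability > 0.95:
        return 7  # execute automatically, then inform
    if workload > 0.5:
        return 4  # suggest one alternative
    return 2      # offer the full set of alternatives

def delegate(task: str, context_ok: bool, operator_choice=None) -> str:
    """Hypothetical delegation rule: a task goes to 'system' or 'operator'
    based on context information, unless the operator explicitly chooses."""
    if operator_choice in ("system", "operator"):
        return operator_choice
    return "system" if context_ok else "operator"
```

Keeping the policy explicit like this also makes it testable against the human-performance criteria discussed earlier.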
Lastly, the reliability dimension regards the degree of correctness in the
results or behaviour of the automated system. Studies [7] and [19] report
effects on performance and workload due to low automation reliability, and they
give general guidelines for mitigating those effects when full reliability
cannot be guaranteed.
INTERACTING WITH AUTOMATION
As shown in the previous section, automation is characterized by several
dimensions. These dimensions can be used to analyse a given automated system:
depending on the required accuracy, one can select fewer dimensions, try to
define new ones or, again, consider intermediate values in each dimension. By
varying the state of a given dimension, we can possibly measure different
effects on the quality of interaction.
The following part of this literature review aims to address the impacts of
automation on the front-end operator. For each impact, a set of pertinent
studies is reported along with results and possible guidelines on the
mitigation of the impacts.
SITUATION AWARENESS
Situation awareness (SA) is a topic widely discussed in the scientific
literature. Research studies aim to address how to effectively measure
situation awareness, how to define it and what it is impacted by. This section
will focus on the latter point.
Several works show evidence of automation affecting situation awareness:
Parasuraman in [17] points out that automating the decision-making functions
may reduce operator awareness, because humans are inclined to pay less
attention to changes made by other agents.
[4] compared pilots’ SA across three automation concepts that varied in the
responsibility for traffic separation. In the first concept, pilots could use a
conflict resolution tool that enabled self-separation and bad weather
avoidance. Concept 2 had the air traffic controller take care of the traffic
separation task, but pilots still had the same tools on the flight deck. In the
last concept, ground-based automation was introduced and used for conflict
detection and resolution; the flight deck was equipped with a tool that allowed
pilots to deviate in case of bad weather. The results showed that pilots had
higher SA in the first concept. This confirms both Parasuraman’s study and [5],
who performed a similar experiment on pilots and obtained higher SA in the
manual and interactive concepts of automation.
SA, however, is also affected by other dimensions. High workload, for example,
is considered to lead to low SA due to cognitive tunnelling.
In general, comparing studies about SA is a delicate task: SA may easily be
influenced by the methodology adopted during the experiment. SA is also
affected by the user interface designed for a given experiment; thus, a
promising automation concept may produce low SA values due to poor user
interface design.
FEEDBACK – WARNING – ATTENTION – BOREDOM
Feedback is considered to be one of the main problems with automation [15].
Back in 1990, Norman claimed that the problem with automation is not automation
itself but rather the way it is designed, with poor and inadequate use of
feedback, which is connected to the insufficient level of intelligence of the
automation: Norman stated that automation should be made either less
intelligent or sensibly more intelligent in order to correctly support the
operator in complex domains. But is automation more intelligent after more than
two decades? Probably it is, but the tasks to be supported have also become
more complex. It is conceivable that these two elements (the intelligence of
automation and the complexity of the tasks) will follow parallel trends over
time.
Norman also claims that the reason why automated systems provided poor feedback
was that feedback was under-considered by designers. This under-consideration,
in turn, was due to the fact that automation does not really need feedback or
dialogue to work unless something goes wrong. Feedback is certainly an
important concept to be studied in human-automation interaction; however, there
is a need to expand this concept: feedback is a piece of information a user
expects after executing a certain action. With automated and autonomous
systems, it is necessary to prompt some information to the operator even if it
is not related to a previous command, and this information may be the basis for
a possible action by the operator: an augmented notification, a feedforward.
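The distinction can be made concrete with a small sketch: a feedback message answers a previous command, while a feedforward notification arrives unprompted and may become the basis for an operator action. All names and fields below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notification:
    text: str
    in_reply_to: Optional[str] = None  # id of the command that caused it, if any

    @property
    def kind(self) -> str:
        # Feedback answers a prior user command; feedforward is
        # system-initiated information offered before any command.
        return "feedback" if self.in_reply_to else "feedforward"

ack = Notification("Route accepted", in_reply_to="cmd-42")
alert = Notification("Conflict predicted in 6 minutes")
```

Modelling the two kinds under one type makes it easy for an interface to style or prioritize them differently.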
Attention and boredom have been studied by several researchers. Cummings in [2]
set up a low-task-load experiment organized in three different levels that
required operator input every 10, 20 and 30 minutes. The operators had to
control unmanned aerial vehicles in a simulation environment. As expected, the
top performer was the one who spent most of the time in an attention or
divided-attention state.
Figure 5 - Attention Management as a function of time (M. Cummings)
Surprisingly, the other top four performers spent at least one third of their
time in a distraction state, suggesting that low-task-load environments that
induce boredom can be effectively managed with attention switching, and that
supporting this practice can be a promising design strategy. The authors also
highlighted that there was no observable relationship between the amount of
distraction and performance.
In a study from 2001 [14], Nikolic and Sarter explored the solution of
splitting tasks and information about automation across different sensory
modalities. They ran an experiment in which feedback about automation was
presented in the peripheral vision of the operators and compared this condition
with the normal one, in which all the information is presented directly. They
found evidence that peripheral visual displays lead to better detection
performance for unexpected events and faster response times. Moreover, this
kind of feedback did not affect the performance of concurrent visual tasks more
than traditional feedback mechanisms.
A similar but more comprehensive approach was used by Hameed, Ferris, Jayaraman
and Sarter in [10], where both peripheral visual and tactile cues were used to
support task and interruption management. In this experiment the cues were used
to indicate the need to pay attention to a separate visual task. Each cue was
characterized by three main parameters: the domain, the importance and the
expected completion time, respectively represented by the location, frequency
and duration of the cue. The detection rate was higher with both tactile and
peripheral cues compared to the baseline condition, but the results also showed
that erroneous task switches are usually due to misinterpretation of the cues.
Thus, deeper research on how to implement appropriate cues for specific domains
is required.
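The cue encoding used in [10] (domain mapped to location, importance to frequency, expected completion time to duration) can be sketched as a simple mapping; the concrete locations, frequencies and cap values below are illustrative assumptions, not those of the study:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    location: str        # encodes the domain of the interrupting task
    frequency_hz: float  # encodes its importance
    duration_s: float    # encodes its expected completion time

def encode_cue(domain: str, importance: str, completion_s: float) -> Cue:
    """Hypothetical encoding: domain picks a body/display location,
    importance picks a pulse frequency, completion time sets duration
    (capped so the cue itself stays short)."""
    locations = {"navigation": "left wrist", "communication": "right wrist"}
    frequencies = {"low": 1.0, "medium": 2.0, "high": 4.0}
    return Cue(locations[domain], frequencies[importance], min(completion_s, 3.0))
```

Making the semantic-to-physical mapping explicit is exactly where the misinterpretation risk noted above would be designed out.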
The automotive domain shows advanced results in this field: J. Scott and R.
Gray in [22] examined the effectiveness of rear-end collision warnings
presented as tactile, visual and auditory cues. They recorded reaction times
(the time elapsed between the warning presentation and the driver’s braking
action) for the different warning modalities and showed significant advantages
for tactile cues compared to visual or auditory ones. A basic difference from
the previous studies is the absence of the interpretation component of the cue:
drivers did not have to understand why the warning was presented; they had to
hit the brake in any case.
Some guidelines on the design of multimodal information presentation are given
in [20] by Sarter. The guidelines are divided into specific topics such as the
selection of modalities, the mapping between modalities and tasks or types of
information, the use of multiple modalities at the same time, and the
adaptation of multimodal information presentation.
WORKLOAD
Workload reduction is one of the reasons for the introduction of automation.
Unfortunately, this benefit has not been as straightforward as expected before
the introduction of automation. As Sarter and Woods observed [9], workload (and
errors) was not reduced but only unevenly distributed over time (generating the
so-called clumsy automation) and sometimes also between the operators working
as a team. The authors also point out how workload changed in the quality
dimension: the typical tasks are no longer of the active control type but shift
to supervisory control, generating attentional needs partly covered in the
previous section.
In [8] Endsley analysed the effects of the level of automation on performance,
situation awareness and workload. Results showed that operator workload was
reduced when the decision-making portion of the system was automated. This
reduction, however, did not lead to significant performance improvements.
Figure 6 - Workload distribution in Endsley study
Endsley proposed a 10-level LOA taxonomy and monitored the workload at each
level. The results are summarized in Figure 6.
INTERACTION TECHNIQUES
There is no formal definition of interaction technique but the common
understanding of this concept usually involves the following components:

a combination of input/output devices (e.g. keyboard, mouse and
monitor; or touchscreen tabletop etc.)

one or more software modules that translate the user inputs into
computer commands and provides feedbacks and notifications for the
user(s)
The way the input/output devices are used and the type of language resulting
from the enabled dialogue are what we can usually refer as interaction
techniques.
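These two components, devices plus a translating software layer, can be sketched as a minimal dispatch loop; the device names and command bindings below are hypothetical:

```python
# Minimal sketch of an interaction technique's software side: raw device
# events are translated into application commands, and feedback is returned.
def translate(event: dict):
    """Map raw input events to computer commands (hypothetical bindings)."""
    bindings = {
        ("mouse", "click"): "select_object",
        ("touchscreen", "two_finger_pinch"): "zoom",
        ("keyboard", "ctrl+z"): "undo",
    }
    return bindings.get((event["device"], event["gesture"]))

def dispatch(event: dict) -> str:
    command = translate(event)
    # Feedback closes the dialogue: the user sees the effect of the input.
    return f"executed {command}" if command else "ignored"
```

In this view, changing the interaction technique means changing the binding table and its feedback, not the application commands themselves.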
Over the last years, several new interaction techniques have been used by
researchers in different fields. The ATM field has its own examples too,
although for many reasons (e.g. preserving safety, the cost of certifications,
the heterogeneity of technology providers, etc.) it is not the leading field in
introducing this kind of innovation.
Many researchers have spent considerable effort on tabletop systems. Persiani,
De Crescenzio, Fantini and Bagassi in [18] realized TABO, a tabletop-based
interface to support the analysis of complex simulated environments. TABO was
used for the visualization of Air Traffic Control scenarios, adopting a “God’s
eye view” mode and enabling multi-user collaboration. Users were allowed to
interact with the interface through a wireless mouse.
Figure 7 - The TABO interface
The objective of the experiment was to enhance the perception of depth and give
more awareness of the vertical separation of airplanes.
In 2011 Conversy and his colleagues realized a comprehensive system to support
Air Traffic Control collaboration with a tabletop [1]. Conversy highlights how
new trends in ATC systems tend to reduce the collaboration dimension in the
work environment, although collaboration itself is a key to safety and
efficiency. After several interviews and workshops with real users, they
identified a set of requirements to support collaboration and applied them in
the development of a multi-user tabletop surface.
Recently, a more sophisticated interaction style has been used by Hurter [11]
to find a solution that could replace the paper strips used by air traffic
controllers. Hurter developed a novel system for ATC that allows interaction
through augmented paper and a digital pen. In this way controllers can still
use tangible artefacts while taking advantage of the support provided by the
digital system.
Figure 8 - The Strip'TIC system in [11]
Strip’TIC was designed to support en-route controllers in the separation
assurance task. The authors received good feedback about the virtual/physical
strip duality and about the interaction with the digital pen, especially for
the selection task.
The Strip’TIC system is a good example of the transition between the
traditional interaction styles represented by WIMP interfaces and the more
natural interfaces that have been gaining importance in recent years. This
transition is not trivial, and neither is the design of new interfaces using
new technology. In general, the risk is the one outlined by Wigdor and Wixon in
[23], that is, designing new interactions by simply transposing the old ones.
For example, the GUI interaction designed to work with the mouse is (or at
least should be) radically different from the one enabled by natural gestures.
There is a considerable amount of research trying to define methods and
guidelines for the design of so-called post-WIMP user interfaces. This kind of
research started in the 1990s, right after the commercial affirmation of WIMP
interfaces. It highlights the principal drawbacks of WIMP GUIs, as in [3],
where we can find:

- the complexity of the interface grows faster than the complexity of the
application;
- expert users often abandon the point-and-click actions (which are the basis
of WIMP GUIs) in favour of keyboard commands;
- WIMP interfaces were designed for office-style tasks;
- 3D data is not easily explorable;
- they are designed for a single desktop user who controls objects that have no
autonomy; the interaction is one channel at a time;
- “WIMP GUIs based on the keyboard and the mouse are the perfect interface only
for creatures with a single eye, one or more single-jointed fingers, and no
other sensory organs”.
Despite all these disadvantages, controller working positions are currently
developed on top of the WIMP paradigm.
The new interaction paradigms will probably not replace WIMP GUIs in every
domain, just as GUIs did not replace command-line interfaces [23]. Each new
paradigm will more likely find its own niche in which it is the most suitable
and most used. Research activity in those fields (and ATM is no exception) is
needed to identify the most promising interaction techniques. The research I
will carry out will focus on the introduction of a gesture-based interface.
There are several motivations for this choice:

- The TRL of the devices that enable the gesture-based interaction style is by
now high enough to permit reliable utilization. Gesture interactions have
gained importance with the success of touch devices, but there is a growing
interest in a new generation of devices that allow free-hand gesture
interaction (see ENABLER TECHNOLOGIES for details).
- Gestures can be designed to be as natural as possible, and they can provide
high engagement in the interaction process.
- The devices can easily be found on the market and their price is usually much
lower (around 100€) than that of tabletops or complex virtual reality systems.
Moreover, they can also be used as monitoring tools to detect different
conditions of the user (boredom, distraction, engagement, fun, etc.).
ENABLER TECHNOLOGIES
At the moment three similar technologies have been identified for the
development of this research.
Figure 9 - The Leap Motion controller
The Leap Motion Controller. The Leap (www.leapmotion.com) is a small controller
device able to track the user’s hands and fingers in 3D space with very high
precision (1/100th of a millimetre). All the sensing technology and gesture
recognition software run inside the device, which is equipped with a dedicated
CPU and a set of LEDs and sensors for motion capture. The Leap is currently
under testing by a selected but large group of developers and will be delivered
in May 2013 together with a rich set of APIs.
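A controller of this kind is typically consumed by polling frames of hand and finger positions; the sketch below uses a hypothetical generic interface and fabricated sample data, not the actual Leap Motion API:

```python
# Hypothetical frame-polling sketch for a hand-tracking controller; the
# Tracker class stands in for whatever interface the vendor SDK exposes.
class Tracker:
    def poll_frame(self):
        # A real SDK would return live data; here we fake one frame:
        # fingertip positions in millimetres (x, y, z above the device).
        return {"hands": [{"fingers": [(0.0, 120.5, -10.2), (15.3, 118.0, -8.7)]}]}

def count_fingers(frame) -> int:
    """Count tracked fingertips across all hands in a frame."""
    return sum(len(hand["fingers"]) for hand in frame["hands"])

frame = Tracker().poll_frame()
```

An application layer would map such frame data to gestures, which is exactly the translation step described in the INTERACTION TECHNIQUES section.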
Figure 10 - The DUO controller
DUO (http://duo3d.com/) is an open-source alternative to the Leap Motion. It
promises the same performance and uses the same technologies. Its features are
listed below:
- low latency;
- low cost and fully customizable;
- a rich set of APIs and easy integration with existing software.
Figure 11 - MYO controller
MYO is a piece of wearable technology that can recognize movements and gestures
by combining motion sensing with measurements of the electrical activity of the
arm muscles. It also provides haptic feedback to the user. It is going to be
shipped in early 2014 and, like the other devices, will be accompanied by a
rich set of APIs for developers.
REFERENCES
1. Conversy, S. and Gaspard-Boulinc, H. Supporting Air Traffic Control collaboration with a tabletop system. Proceedings of the …, (2011).
2. Cummings, M., Hart, C., Thornburg, K., and Mkrtchyan, A. Boredom and Distraction in Multiple Unmanned Vehicle Supervisory Control. 4196, 617, 1–33.
3. Van Dam, A. Post-WIMP user interfaces. Communications of the ACM 40, 2 (1997), 63–67.
4. Dao, A., Brandt, S., Bacon, L., and Kraut, J. Conflict resolution automation and pilot situation awareness. Human Interface and the …, (2011), 473–482.
5. Dao, A.V., Brandt, S.L., Battiste, V., and Vu, K.L. The Impact of Automation Assisted Aircraft Separation. In Human Interface. 2009, 738–747.
6. Degani, A. Modes in human-automation interaction: Initial observations about a modeling approach. …, 1995. Intelligent Systems for the 21st …, (1995), 64–65.
7. Dixon, S.R. and Wickens, C.D. Automation reliability in unmanned aerial vehicle control: a reliance-compliance model of automation dependence in high workload. Human Factors 48, 3 (2006), 474–486.
8. Endsley, M. Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, (1999), 462–492.
9. Sarter, N.B., Woods, D.D., and Billings, C.E. Automation Surprises. Human Factors, (1994), 1–25.
10. Hameed, S. and Ferris, T. Using informative peripheral visual and tactile cues to support task and interruption management. Human Factors: The … 51, 2 (2009), 126–135.
11. Hurter, C., Lesbordes, R., and Letondal, C. Strip’TIC: exploring augmented paper strips for air traffic controllers. Proceedings of the …, (2012).
12. Jamieson, G.A. and Vicente, K.J. Designing Effective Human-Automation-Plant Interfaces: A Control-Theoretic Perspective. Human Factors: The Journal of the Human Factors and Ergonomics Society 47, 1 (2005), 12–34.
13. Miller, C.A. and Parasuraman, R. Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control. Human Factors 49, 1 (2007), 57–75.
14. Nikolic, M.I. and Sarter, N.B. Peripheral Visual Feedback: A Powerful Means of Supporting Effective Attention Allocation in Event-Driven, Data-Rich Environments. Human Factors: The Journal of the Human Factors and Ergonomics Society 43, 1 (2001), 30–38.
15. Norman, D. The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’. … of the Royal …, (1990).
16. O’Hara, J., Higgins, J., Fleger, S., and Barnes, V. Human-system Interfaces for Automatic Systems. (2010).
17. Parasuraman, R., Sheridan, T.B., and Wickens, C.D. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 30, 3 (2000), 286–297.
18. Persiani, F., De Crescenzio, F., and Fantini, M. A Tabletop-Based Interface to Simulate Air Traffic Control in a Distributed Virtual Environment. (2007), 6–8.
19. Rovira, E., McGarry, K., and Parasuraman, R. Effects of Imperfect Automation on Decision Making in a Simulated Command and Control Task. Human Factors: The Journal of the Human Factors and Ergonomics Society 49, 1 (2007), 76–87.
20. Sarter, N.B. Multimodal information presentation: Design guidance and research challenges. International Journal of Industrial Ergonomics 36, 5 (2006), 439–445.
21. Save, L. and Feuerberg, B. Designing Human-Automation Interaction: a new level of Automation Taxonomy. hfes-europe.org, (2012).
22. Scott, J.J. and Gray, R. A Comparison of Tactile, Visual, and Auditory Warnings for Rear-End Collision Prevention in Simulated Driving. Human Factors: The Journal of the Human Factors and Ergonomics Society 50, 2 (2008), 264–275.
23. Wigdor, D. and Wixon, D. Brave NUI World. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2011.