Users’ Internal HMI Information Requirements for Highly Automated Driving

Merle Lau, Marc Wilbrink, Janki Dodiya, and Michael Oehl

German Aerospace Center (DLR), Lilienthalplatz 7, 38108 Brunswick, Germany
{merle.lau,marc.wilbrink,janki.dodiya,michael.oehl}@dlr.de
Abstract. The introduction of highly and fully automated vehicles (SAE levels
4 and 5) will change the drivers’ role from an active driver to a more passive on-board user. Due to this shift of control, secondary tasks may become primary
tasks. The question that arises is how much information needs to be conveyed
via an internal Human-Machine Interface (iHMI) to fulfill users’ information
requirements. Previous research on iHMI regarding lower automation levels has
shown that users require different information at the respective levels. The present study
focuses on how users’ information requirements change for highly automated
driving (SAE level 4) when the on-board user is distracted with a secondary task
as opposed to when the user is non-distracted. Twelve participants experienced
different driving conditions and were asked to rate their attention distributions to
other traffic participants. Results show clearly that users rated their attention
distribution to other traffic participants significantly lower in automated distracted mode compared to automated non-distracted mode and manual driving.
Furthermore, the question of users’ information requirements was translated into
iHMI design preferences. For this purpose, four different iHMI prototypes based
on a 360° LED light-band communicating via color-coded interaction design,
which proved to work well for lower levels, were evaluated regarding the
information richness level sufficient for users for highly automated driving (SAE
level 4). Results show that the sufficient information richness level is conditional
upon gender. Implications for future research and applied issues will be
discussed.
Keywords: Highly automated driving · Internal HMI · Intelligent HMI · User-centered design
1 Introduction
The development of automated vehicles (AV) will change the roads of tomorrow and
will shift the drivers’ role from an active driver to a more passive on-board user [1]. In
conditional automation (SAE level 3), the driver needs to be able to take control over
the vehicle as soon as the driving system reaches its limits, whereas in higher
automation levels (SAE level 4 & 5), the on-board user will be more or less decoupled
from the driving task [2]. Due to this shift of control, the task of vehicle control will
become increasingly irrelevant for the on-board user and secondary tasks may become
primary tasks [3]. Therefore, possible consequences for the on-board user need to be
considered, e.g., the increasing allowance to execute non-driving related activities
(NDRA) [4].
The question that arises is how much information needs to be transmitted by a
vehicle’s on-board or so-called internal Human-Machine Interface (iHMI) to meet
users’ information requirements. An iHMI serves as communication channel between
the vehicle and the on-board user, ensuring a well-working collaboration by transmitting sufficient information about the vehicle’s behavior (e.g., driving mode), which plays a key role for the driver’s trust in automation [3, 5]. Recent research on iHMI design
focusing on lower automation levels (up to SAE level 3) has shown that information
supporting the monitoring process becomes more relevant compared to information
that is necessary for executing the driving task [6]. So far, there has been little research
as to what extent the on-board users’ information requirements will change in terms of
higher automation levels (SAE level 4 & 5). Therefore, the present study focuses
especially on the change of on-board users’ information requirements for highly
automated driving (SAE level 4).
When translating users’ information requirements into design preferences in terms
of iHMI, former research shows promising results by using the peripheral vision of the
driver as iHMI modality. The presentation of information via a LED light-band showed
great potential even in lower levels of automation [7–9]. However, since the driver’s tasks and her or his information requirements change at higher automation levels, the interaction strategies need to change as well. Even if no driving-related task remains with
the on-board user, the interaction between her or him and the AV stays important since
the information communicated by the iHMI plays a key role regarding acceptance and
trust in AV [10]. Since the on-board user does not need to perform any safety-critical system interventions, a certain level of unobtrusive information could be enough to ensure the transparency of the AV. Therefore, besides the investigation of users’ information requirements for highly automated driving, the present study focuses on the
desired information richness levels for a LED light-band iHMI design.
2 Users’ Information Requirements (Part 1)
In part 1, the present study investigates the on-board users’ information requirements
for highly automated driving (SAE level 4). Therefore, the attention distributions to
interacting traffic participants when driving in different levels of automation, i.e.,
manual and highly automated driving (SAE level 0 & SAE level 4), are in focus.
Furthermore, the study investigates the changes in attention distributions when the on-board user of a highly automated vehicle is distracted and occupied with NDRA as opposed to when the user is non-distracted.
2.1 Method
A qualitative in-depth interview using a within-subject design was conducted with twelve participants (six female) aged between 23 and 75 years (M = 48.00; SD = 23.90).
In a fixed-base driving simulator, all participants experienced three driving conditions
(manual driving, highly automated driving being non-distracted, highly automated
driving being distracted) in an urban left-turn scenario. The selected scenario was a
video recording of a partially signalized left-turn intersection at Kastanienallee in
Braunschweig, Lower Saxony (Germany). All participants possessed a valid German
driver’s license and were familiar with the selected scenario since they had experienced
the exact intersection in real life before. The participants’ affinity for technology was
rated M = 4.75 (SD = 0.80) on a 6-point scale (from “completely disagree” to
“completely agree”) with the ATI questionnaire [11]. The participants received an
expense allowance of 10 € per hour. The complete experiment was recorded on video
for later reference with the participants’ consent.
The purpose of the study was explained and consent forms were signed by the
participants. The participants were asked to take a seat in a fixed-base driving simulator consisting of two projection displays positioned in front and on the left of the on-board user. A 360° LED light-band was installed in the interior of the driving simulator, communicating via a color-coded interaction design. A blue LED light-band means the car is driving in highly automated mode and a white LED light-band means the car is in
manual mode. Firstly, the participants saw a pre-recorded video from the driver’s
perspective of the left-turn scenario for one minute. During the video, the participants
were instructed to keep their hands on the wheel to imagine manual driving. After this,
the participants were provided with a snapshot of the left-turn scenario based on a
segmentation approach [12] and were asked to rate their attention distribution to other
traffic participants on a 7-point Likert scale (from 0 = “not important” to 7 = “important”). After this, to experience highly automated driving, the participants drove in
highly automated mode on a preselected route in a virtual environment two times. The
first run was non-distracted. The second run was distracted, which means the participant performed a secondary task, i.e., reading a magazine while driving in the AV.
After both simulation runs, the participants were provided again with the snapshot of
the left-turn scenario based on a segmentation approach [12]. They were asked to
imagine driving in an AV for both runs (being non-distracted vs. being distracted) and
to rate their attention distribution to other traffic participants on a 7-point Likert scale
(from 0 = “not important” to 7 = “important”).
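The color coding described above (blue for highly automated mode, white for manual driving) is essentially a mapping from driving mode to light-band color. A minimal Python sketch of such a mapping is shown below; the enum, color values, and function names are illustrative assumptions, not taken from the study's simulator software.

```python
from enum import Enum


class DrivingMode(Enum):
    MANUAL = "manual"                 # SAE level 0
    HIGHLY_AUTOMATED = "sae_level_4"  # SAE level 4


# Hypothetical mode-to-color mapping (RGB) for the 360° LED light-band,
# following the color coding used in the simulator: white = manual driving,
# blue = highly automated mode.
MODE_COLORS = {
    DrivingMode.MANUAL: (255, 255, 255),
    DrivingMode.HIGHLY_AUTOMATED: (0, 0, 255),
}


def light_band_color(mode: DrivingMode) -> tuple:
    """Return the RGB color the LED light-band should display for a driving mode."""
    return MODE_COLORS[mode]


if __name__ == "__main__":
    print(light_band_color(DrivingMode.HIGHLY_AUTOMATED))  # -> (0, 0, 255)
```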
2.2 Results
The participants’ attention distributions given to other traffic participants were categorized into three types:
• Type 1: Other traffic participants with direct interaction
• Type 2: Other traffic participants with indirect interaction
• Type 3: Environment surrounding the traffic situation
In Fig. 1, the mean attention ratings for types 1 to 3 for all three driving conditions (manual
driving, highly automated driving being non-distracted, highly automated driving being
distracted) are shown.
Fig. 1. Driver’s subjectively rated (0–7) mean attention given to other traffic participants for manual driving (at the top) vs. automated driving being non-distracted (mid) vs. automated driving being distracted (bottom).

For manual driving, type 1 traffic was rated highest, i.e., vehicles M = 6.75 (SD = 0.62) and cyclists M = 6.61 (SD = 0.88). Moreover, type 2 traffic was rated with M = 4.27 (SD = 0.70). Type 3 was rated as not important. During highly automated driving being non-distracted, type 1 vehicles were rated M = 1.83 (SD = 0.65) and type 1 cyclists with M = 1.25 (SD = 0.51). Type 2 traffic scored M = 1.63 (SD = 0.66). Type 3 was rated with M = 0.17 (SD = 0.17). Moreover, for highly automated driving being distracted, type 1 vehicles scored M = 1.00 (SD = 1.28) and type 1 cyclists were rated with M = 0.33 (SD = 0.65). Type 2 traffic was rated M = 0.08 (SD = 0.29) and type 3 M = 0.58 (SD = 0.39).
Non-parametric Friedman tests were used for analyzing the data due to violated normal distributions. Results show that the mean attention differs significantly between the three driving conditions for type 1, i.e., for vehicles (χ²(2) = 22.29, p = .00) and cyclists (χ²(2) = 21.33, p = .00), and for type 2 traffic (χ²(2) = 15.94, p = .00). Post-hoc Dunn-Bonferroni tests showed that the mean attention for type 1 vehicles is significantly higher for manual driving compared to highly automated driving being non-distracted (z = 4.19, p = .00) and highly automated driving being distracted (z = 3.16, p = .01), with the highest attention for manual driving and the lowest for highly automated driving being distracted. The same effect is found for type 1 cyclists and type 2 traffic (p < .05).
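To illustrate the analysis reported above, the following Python sketch runs a non-parametric Friedman test across the three driving conditions using SciPy. The ratings in the arrays are illustrative placeholders, not the study's raw data; the Dunn-Bonferroni post-hoc comparisons would follow a significant omnibus result.

```python
import numpy as np
from scipy import stats

# Illustrative placeholder ratings (0-7) of attention to type 1 vehicles from
# twelve participants, one array per driving condition (not the study's raw data).
manual = np.array([7, 7, 6, 7, 7, 6, 7, 7, 5, 7, 7, 7])
automated_non_distracted = np.array([2, 1, 2, 2, 3, 1, 2, 2, 1, 2, 2, 2])
automated_distracted = np.array([1, 0, 1, 0, 2, 0, 1, 3, 0, 1, 2, 1])

# Non-parametric Friedman test for the repeated-measures (within-subject) design,
# used here because the normality assumption is violated.
chi2, p = stats.friedmanchisquare(manual, automated_non_distracted, automated_distracted)
print(f"Friedman chi^2(2) = {chi2:.2f}, p = {p:.4f}")

# A significant result would then be followed up with pairwise Dunn-Bonferroni
# post-hoc comparisons between the three conditions (not shown here).
```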
3 Internal HMI Design Evaluation (Part 2)
In part 2, the users’ information requirements were translated into design preferences in
terms of an iHMI. Therefore, four paper-pencil iHMI prototypes using a 360° LED
light-band as the key modality for human-machine interaction were presented and evaluated, focusing on the users’ desired information richness levels.
3.1 Method
Participants experienced the second part of the study right after the first part on the
same day. Therefore, the same sample as in part 1 was used (see Sect. 2.1). Four
different iHMI prototypes (paper-pencil) using a 360° LED light-band were presented
on a computer screen to the participants. The iHMI prototypes describe different levels
of information richness with a color-coded interaction design (Fig. 2).
Fig. 2. Different information richness levels for an AV’s LED light-band iHMI design.
Notes. LED light-band color: dark blue = AV in highly automated mode; light blue bar = pedestrian detected by AV; red = AV will brake; green = AV will turn left.
Richness level 1 consists of a head-down display (HDD) including a speedometer and the LED light-band giving information about the current driving mode (automated vs.
not automated). The dark blue colored LED light-band indicates that the AV is in
highly automated mode (SAE level 4). The information available to the user increases
with higher information richness levels. Richness level 2 consists of the richness level 1
information plus the perception of other traffic participants by the AV (e.g., pedestrian
in the direct path of the vehicle). An additional light blue colored bar on the LED light-band (directly under the object in the environment) indicates that a traffic participant was detected, e.g., a pedestrian. In addition to the previously described information, richness
level 3 includes the intention of the AV, i.e., the next maneuver (e.g., braking) which is
displayed via a red colored bar on the LED light-band. Additionally, richness level 4 includes the path, i.e., the next trajectory of the AV, which is conveyed via a green
colored bar on the LED light-band. After the presentation of the iHMI prototypes,
participants were asked to state the information richness level of the iHMI LED light-band that would be sufficient for them to feel well-informed, for both the automated non-distracted and the automated distracted mode.
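Because the four prototypes are cumulative (each richness level adds cues on top of the previous one), their structure can be summarized compactly. The Python sketch below encodes the levels and the color-coded light-band cues from Fig. 2; the data structure and names are illustrative assumptions, not part of the study's materials.

```python
# Cues added by each information richness level of the LED light-band iHMI,
# following the color coding of Fig. 2; higher levels include all lower-level cues.
RICHNESS_CUES = {
    1: [("highly automated driving mode active", "dark blue band")],
    2: [("traffic participant detected, e.g., pedestrian", "light blue bar")],
    3: [("next maneuver / intention of the AV, e.g., braking", "red bar")],
    4: [("planned path / next trajectory, e.g., left turn", "green bar")],
}


def cues_for_level(level: int):
    """Return all cues displayed at a given richness level (levels are cumulative)."""
    return [cue for lvl in range(1, level + 1) for cue in RICHNESS_CUES[lvl]]


if __name__ == "__main__":
    for info, color in cues_for_level(3):
        print(f"{color}: {info}")
```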
3.2 Results
For the iHMI design evaluation, four different information richness levels for a LED
light-band were evaluated regarding the information richness level sufficient for users
(see Fig. 3).
Fig. 3. Information richness level sufficient for users for the iHMI LED light-band (automated being non-distracted vs. automated being distracted).
For automated non-distracted driving, 33% found richness level 1 and 17% richness level 2 sufficient. Furthermore, 25% required richness level 3 and 25% richness level 4. For automated distracted driving, 33% rated richness level 1 and 17% richness level 2 as sufficient. 42% of the participants stated richness level 3 and 8% richness level 4 to be sufficient (Fig. 4).
Fig. 4. Sufficient information richness level for male (M) and female (F) participants for the iHMI LED light-band (automated being non-distracted vs. automated being distracted).
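The gender split shown in Fig. 4 is a simple cross-tabulation of the participants’ choices by condition and gender. A sketch of how such a breakdown could be computed is given below; the responses in the data frame are illustrative placeholders, not the study's raw data.

```python
import pandas as pd

# Illustrative placeholder responses (one row per participant and condition):
# which information richness level was rated as sufficient (not the study's raw data).
responses = pd.DataFrame({
    "gender":    ["M", "M", "M", "F", "F", "F", "M", "M", "M", "F", "F", "F"],
    "condition": ["non-distracted"] * 6 + ["distracted"] * 6,
    "level":     [1, 1, 2, 3, 4, 4, 1, 1, 3, 2, 3, 3],
})

# Percentage of participants per condition and gender choosing each level,
# analogous to the breakdown shown in Fig. 4.
shares = pd.crosstab(
    [responses["condition"], responses["gender"]],
    responses["level"],
    normalize="index",
) * 100
print(shares.round(1))
```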
4 Discussion
In this study a qualitative in-depth interview was conducted to get a deeper insight into
on-board users’ information requirements for highly automated driving (SAE level 4).
Regarding the users’ attention distributions to other traffic participants, results show
clearly that users rated their attention distribution to other traffic participants
significantly lower during distracted mode as opposed to not being distracted, and especially in comparison with manual driving. It becomes evident that traffic participants in
direct and indirect interaction that were given a high rating during manual driving are
no longer as important for the user during highly automated driving being distracted.
The results are consistent with previous findings for lower automation levels (up to SAE level 3), which show that users require different information at different automation levels [7]. Based on individual statements by the participants, it was noted
that children and, to some extent, cyclists are of great importance due to their sometimes unpredictable behavior, especially during manual driving. Since children were not included in the stimulus material of this study, it is necessary to particularly
consider other road users, e.g., children, requiring special attention in further research.
Referring to the users’ desired information in terms of an LED light-band iHMI,
results show that when driving highly automated, 33% of the participants found the lowest information level sufficient, which just indicates that the automated driving mode is active and thereby ensures the user’s mode awareness. 17% found level 2 in both
conditions (non-distracted vs. distracted) sufficient. But in the distracted scenario, 42% of the participants required the higher information richness level 3 compared to 25% (non-distracted). Only 8% of the participants required the highest information richness level 4 (distracted) vs. 25% (non-distracted). Thus, they did not want to be informed about the AV’s path in the distracted condition. Here, participants stated that they just wanted to be well-informed in case of any critical interaction so that they can quickly gain a minimum of situation awareness. A closer look shows that users’ desired information
richness levels are especially conditional upon gender. Whereas male preferences were
towards lower information richness levels, i.e., 67% found just level 1 sufficient for
both conditions, female preferences were towards higher information richness levels so
that they felt well-informed by the automation. This might be discussed in terms of trust in automation: a longer exposure of the female participants to the automation might have created higher trust. The use of paper-pencil prototypes can be seen as a
limitation; therefore, more realistic prototypes and simulations that allow experiencing the AV’s driving dynamics, and thereby provide additional information to the user, should be used in future research. Overall, the findings can be seen as a first outlook on how on-board users’ information requirements will change during highly automated
driving (SAE level 4) and how these information requirements can be translated into
design preferences in terms of an iHMI LED light-band.
References
1. Meyer, G., Deix, S.: Research and innovation for automated driving in Germany and Europe.
In: Meyer, G., Beiker, S. (eds.) Road Vehicle Automation. LNM, pp. 71–81. Springer, Cham
(2014). https://doi.org/10.1007/978-3-319-05990-7_7
2. Society of Automotive Engineers: Taxonomy and definitions for terms related to driving
automation systems for on-road motor vehicles. SAE, Michigan (J3016_201806) (2018)
3. Carsten, O., Martens, M.H.: How can humans understand their automated cars? HMI
principles, problems and solutions. Cogn. Technol. Work 21(1), 3–20 (2018). https://doi.org/
10.1007/s10111-018-0484-0
4. Kun, A.L., Boll, S., Schmidt, A.: Shifting gears: user interfaces in the age of autonomous
driving. IEEE Pervasive Comput. 15(1), 32–38 (2016). https://doi.org/10.1109/MPRV.2016.
14
5. Bengler, K., Rettenmaier, M., Fritz, N., Feierle, A.: From HMI to HMIs: towards an HMI
framework for automated driving. Information 11(2), 61 (2020). https://doi.org/10.3390/
info11020061
6. Beggiato, M., Hartwich, F., Schleinitz, K., Krems, J., Othersen, I., Petermann-Stock, I.:
What would drivers like to know during automated driving? Information needs at different
levels of automation. In: 7th Conference on Driver Assistance (2015). https://doi.org/10.
13140/rg.2.1.2462.6007
7. Dziennus, M., Kelsch, J., Schieben, A.: Ambient light – an integrative, LED based
interaction concept for different levels of automation. In: 32. VDI/VW Gemeinschaftstagung
Fahrerassistenzsysteme, Wolfsburg, 8–9 November 2016, pp. 103–110 (2016)
8. Dziennus, M., Kelsch, J., Schieben, A.: Ambient light based interaction concept for an
integrative driver assistance system: a driving simulator study. In: de Waard, D., et al. (eds.)
Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2015 Annual
Conference, Groningen, NL, pp. 171–182 (2016)
9. Pfromm, M., Cieler, S., Bruder, R.: Driver assistance via optical information with spatial
reference. In: 16th International IEEE Conference on Intelligent Transportation Systems
(ITSC 2013) (2013)
10. Wilbrink, M., Schieben, A., Oehl, M.: Reflecting the automated vehicle’s perception and
intention. In: IUI 2020: 25th International Conference on Intelligent User Interfaces, Cagliari,
Italy, pp. 105–107 (2020). https://doi.org/10.1145/3379336.3381502
11. Franke, T., Attig, C., Wessel, D.: A personal resource for technology interaction:
development and validation of the affinity for technology interaction (ATI) scale. Int.
J. Hum.-Comput. Interact. 35(6), 456–467 (2018). https://doi.org/10.1080/10447318.2018.
1456150
12. Fastenmeier, W., Gstalter, H.: Contribution of psychological models and methods for the
evaluation of driver assistance systems. Zeitschrift für Arbeitswissenschaft (ZfA) 62, 15–24
(2008)