>> Mary Czerwinski: Okay. Welcome. It's my pleasure today to introduce
Dr. David Woods from The Ohio State University in Columbus, Ohio. Dave is a full
professor there, a professor of cognitive engineering and human factors, in fact.
I've known Dave since like 1989 or '88, I think. We have a shared
pedigree because we're both perceptual-cognitive psychologists and our advisors
go way back. Dave's expertise lies in the area of critical decision making,
usually under uncertainty and pressure and in life-threatening situations, although
he's done work in all facets of cognitive engineering.
I remember when I first met him at Johnson Space Center he was working very
much in the nuclear control industry as well. So he's done lots of fascinating work
with lots of different kinds of methodologies, and now he's going to talk to us a little
bit about what he's learned about design, which will be interesting to hear. So
welcome, David.
>> David Woods: Thank you. So, quickly, before I do the big zinger to bring in all
the people sitting at their desks: we do lots of cool things. We're about a million,
million and a half operation at Ohio State. And the talk is going to hit, in an indirect
way, several of the themes in the lab.
How do we make systems more resilient? Everything is brittle, right, under pressure to be more
efficient, more lean. This is an overarching theme especially in the safety area,
but in all aspects of complex systems.
What is rigor in information analysis and synthesis? This is the data overload
problem. How do we find relevant data? How do we not make mistakes, which
are often forms of premature narrowing or shallow, low-rigor analysis?
We're doing a lot of stuff with sensors: how do you integrate feeds from
multiple sensors, be they a robot in a search and rescue kind of situation or
surveillance networks, and make sense of them, drawing on our perception and
attention background.
You'll hear a little bit about how we're trying to help synchronize activities in
crisis-management-like situations. This is relevant to lots of areas, healthcare as well.
And in particular we, like you and like a lot of these organizations dealing with
new human-interactive technologies, have been worried about how we
innovate. The world is changing rapidly and we're constantly surprised by
what the actual effects of new technology are.
So let's start the talk. Users are in revolt against their systems. All right. John
Graham, a colonel and a doctor, has been travelling back and forth to the Middle East, and
this was his summary line of what's going on as we deploy lots and lots of
new technologies into soldiers' hands: users are in revolt against their systems.
Well, that's old news to us, right. We know, right, users protest when people
introduce clumsy technology: it thwarts the purposes of people, it creates
workload bottlenecks, knowledge bottlenecks, attentional bottlenecks, so people
work around them. Right. How long have we noticed problems like that? How
many generations of theses have we gone through to show our chops, that we can
go out and find these bottlenecks and fix them?
But that's not why John said this. When he said users are in revolt, he meant a
deeper thing, that they're in revolt against the design process. They're in revolt
against outsiders thinking that they can anticipate and outdesign the difficulties
and surprises and pressures of their world.
They're in revolt because they feel with modern technology they actually have
some design power. And they are going to go and create systems or modify
systems to make them work for them in their critical situations. And those are really
the themes I want to talk about.
Another way we've summarized John's point is what we call the law of
stretched systems. And the way I like to present this -- this result
goes way back. You could say Winograd said it in '86, you could say
Jack Carroll said it in '88 with the task-artifact cycle, things like that. What are we talking
about here? Well, it was summarized really nicely in the late '90s, during a major
deployment of new technologies into a complex world.
And the summary, the after-action report, said all this equipment deployed was
supposedly designed to ease the burden on the operator, simplify tasks and
reduce fatigue. Instead, these advances were used to demand more. Almost
without exception, right, operation of the system
required exceptional human expertise, commitment and endurance.
There's a natural synergy, right, between human factors, technology and tactics,
so that effective leaders will exploit advances to the limit, asking people to do
more and to do it more complexly.
The law of stretched systems, right: systems under pressure, leaders under
pressure, will exploit improvements, new capabilities. The end result will be
operational roles under greater pressure.
So where are we? We're really pointing out that people are adaptive, goal
seeking, meaning seeking, explanation building, attention focusing, learning
agents. And this is really a contrast to most framing in HCI: people are limited.
Oh, those poor, poor people, they're so biased. They will fall apart under
uncertainty.
But we will come and save them. Somehow it's only those other people who are
limited; it's not us who hang out in research centers and innovation labs and
know how to do all these fancy digital things, right, or user testing or whatever,
whether we're the great algorithm people or the latest sensor people, whatever the latest
technology is. It's those other people, right, who are the problem.
So we design artifacts as resources, and the paradigm, the stance
we want to push, is that our designs are stimulants, are triggers of adaptive
cycles. And two general kinds of adaptive cycles happen, all right.
The law of stretched systems reminds us that we usually have expansive
adaptations. If there's a capability that's meaningful, people will find it, like
they're exploring a niche in an ecology. We may see the early adopters; what
are they? They're active agents exploring and discovering new niches in their
information ecology, right? We've heard these analogies before from other
leaders in the field, right?
And they will take advantage of that capability, and in that process they transform
the nature of activity, the goals they're seeking, right, what's exceptional, what's
typical, what are standard roles. Transforming activity, coupling, et cetera.
Now, what we usually see is that most systems introduce bottlenecks: attentional,
workload, knowledge bottlenecks, right. And so what do we see users do?
Various kinds of gap-filling adaptations, work-arounds and things, in order to
be responsible agents, to work around these new complexities, to achieve their
goals.
All right. So our theme, and what we really want to push in our organization
and other related organizations, is that we need to really move beyond
thinking about devices and objects and features, right. It's not enough to think
about how we're going to support engagement and experience and activities in the
world, right. Instead we have to recognize and develop the foundations and
the techniques and the concepts and the designs that treat our innovations as
releasing cycles of adaptive behavior: how do we model those, how do we
trigger those, how do we understand those processes?
We're trying to amplify the adaptiveness and resilience of human systems, and
we're not playing these games about overcoming human limits.
So the question is: what are the concepts, measures, techniques that we can
use and develop, what do we need to do? So here's the quick summary. Design
triggers adaptive cycles, and there are two kinds of adaptive cycles. That means we
have design opportunities at three levels, right. One, people can do
expansive adaptations by taking advantage of a capability to grow expertise in a
role; I'll give you a couple quick examples.
Two, to better synchronize activities over wider scope and ranges, right,
hyper-connectivity. And three, expanding a system's potential for future adaptive action,
resilience: how do we make systems better able to deal with surprise and
change in the world and still accomplish goals, or even redefine what are the
goals that are meaningful to accomplish?
Now, I'll illustrate that with a study of crisis management that we've done
recently. I'll give a quick introduction to the new work that's emerging on
modelling resilience of systems. I will try to spend more of the time focusing on,
if you take this adaptive stance, what does it tell us about how to model technology
change, and talk a little bit about our linked expansion-constriction model. And we'll
demonstrate that with the new study we just finished on the electronic intensive
care unit and how that's been introduced to supplement actual intensive care units. So
that's the plan.
So what we're saying is, if we take an adaptive stance, it means technology is
going to trigger these adaptive cycles. You can think of it as three cross-linked levels.
Growing expertise in a role, which partly is related to how a role participates in
joint activities with other groups, all right, and so we have our distributed work
perspective, synchronizing activities. But synchronizing activities is related to the
ability to expand adaptive capacity, right, as an emergent property, the resilience of a
system. What makes for resilience of a system? Well, better synchronization; for
example, cross-checks. We've done a lot of studies on how to make cross-checks
work better in health care. If you don't have good cross-checks, what do you
have? You have coordination surprises; you have a system that is much more
brittle. And likewise with expertise in a role.
Now, the backdrop for this is a very old concept in human systems. And I think,
going back and looking through old, old writings, you know, things even older
than I am, right, 40 years ago instead of 30 years ago, we have not taken
seriously a notion that has been around since the origin of human systems
research and human factors, which is the concept of fitness.
Now, if you take an ecological perspective, you have to wrestle with what
fitness is. If you take an adaptive or co-adaptive system stance, you don't just
have to define fitness, you have to have an operational measure or
approach to fitness. In fact, you can define an adaptive system as something
that's struggling to maximize fitness. It never achieves it in a long-term sense,
because in achieving a certain level of fitness, what happens? The world adapts.
In human systems, effective leaders will take advantage of your success. So in
health care safety issues, patient safety discussions, we don't go back and say,
oh, you're terrible in health care, if you just copied aviation you'd be much safer.
We come back and say, no, because you're successful, right, you actually
create new problems. Because you're so successful in making certain
healthcare procedures work better than they ever have, what have you done?
You've created a system where, if you have a certain disease that fits one of the
silos, you get the best care ever. But if you don't fit the silo, or you have a chronic
condition that crosses silos, this health care system is highly fragmented. So that
second level becomes the critical one.
We're really expert in a role; we're really poor at synchronizing or maintaining
continuity across roles in health care.
So, fitness. I just want to run through this really quickly, because it ends up being a
foundational idea that we have to take seriously. We all do it, we all talk about it indirectly;
we've got to take it seriously, we've got to better model it, better understand how to
measure it. What's fitness? The search and struggle for fitness is another
way to think about design: shifting away from usability testing and saying what
I'm doing is a kind of fitness management. How do I explore, how do I change
definitions of what's fit?
So what's fitness? The simplest notion of fitness, if you go back, is that there are
various kinds of demanding events and changing situations in the world, all right.
As those events occur, people develop behavioral responses, all right, to
respond to these situations. Pick any field, right, we're learning agents: as we
confront various situations, we gain experience. What do we do? We learn; we
take responses, we get feedback, we learn.
So we can in effect plot these different proto-dimensions and then say, oh,
wait a minute, the match between those is a kind of definition of fit. How do those
relate to each other? Notice fitness is a relational variable. As you learn more in
a local way, what happens, right? You get a set of behavioral responses that
becomes more fit relative to the experienced dimensions of the situations.
As long as you're experiencing them and getting good feedback, right, you will learn,
no matter what. You're a learning system; you will learn something. We can turn
it off but only under very extreme conditions, right? We are explanation-building
learning systems.
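Since fitness here is a relational variable, a match between the demand dimensions a role actually experiences and its repertoire of behavioral responses, a toy calculation can make the idea concrete. This is a minimal sketch only, not anything from the studies discussed: the demand categories, the repertoire, and the simple coverage ratio are all illustrative assumptions.

    # Toy illustration of fitness as a relational variable: how well a repertoire
    # of behavioral responses covers the demands a role actually experiences.
    # All categories and counts below are made up for illustration.

    from collections import Counter

    # Frequencies of demand types the role has experienced (hypothetical).
    experienced_demands = Counter({
        "routine_case": 120,
        "equipment_fault": 15,
        "surge_in_load": 8,
        "novel_anomaly": 3,   # the surprises that challenge the normal repertoire
    })

    # Demand types the current response repertoire handles well (hypothetical).
    response_repertoire = {"routine_case", "equipment_fault", "surge_in_load"}

    def fitness(demands, repertoire):
        """Fraction of experienced demand episodes the repertoire covers.

        Fitness is relational: it is defined only by the match between the
        two, not by the demands or the repertoire alone.
        """
        total = sum(demands.values())
        covered = sum(n for kind, n in demands.items() if kind in repertoire)
        return covered / total if total else 0.0

    print(f"base fitness: {fitness(experienced_demands, response_repertoire):.2f}")

    # A surprise shifts the demand profile; fitness drops even though the
    # repertoire itself has not changed -- the relation changed.
    experienced_demands["novel_anomaly"] += 30
    print(f"after surprise: {fitness(experienced_demands, response_repertoire):.2f}")

The drop in the second number is the point of the next part of the talk: the interesting question is what happens when the demand profile moves outside what the base repertoire was tuned to.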
What happens, though? What we've learned about the world, given what we've
experienced, gets challenged, right; the world's complicated, extra things happen.
Things go beyond the normal boundaries. Surprise occurs. Now, that's a
fundamental factor, right? Lucy Suchman pointed out to us, with
respect to the limits of procedures, that there are always going to be, for practical and
theoretical reasons, situations that challenge the boundaries of a procedure system.
Make the procedures more complicated and we create new situations in which
following those procedures will break down, because of the variability and
potential for surprise in the real world. So the potential for surprise means any
algorithmic system has limits.
Actually, we were warned about this by Norbert Wiener back in the late '50s,
early '60s. He warned us, in a variety of stories in The Human Use of Human
Beings, even though he's one of the fathers of modern computing, about
the dangers of literal-minded algorithms, literal-minded agents.
Why? Because there will be surprises that occur, and the agent will continue to act
as if these are the classes of events that occur when it's really in a different
situation.
So as we confront these kinds of complicating factors, these mismatches and
breakdowns occur, and we start to develop coping strategies, we develop extra
adaptations. In some sense this is already a definition of resilience. A resilient
system is one that can respond effectively, not in terms of building this base level
of fitness, but rather respond when complicating factors or surprises challenge
the normal ways it behaves to handle the normal range of variations in the
situations it experiences.
How well does it invoke extra adaptations to take care of these surprises and
complicating factors, learning something new about the system?
So what I end up with, playing with our M.C. Escher image here, is this
mantra: in design we either hobble or support people's natural ability to grow
expertise, synchronize activity or adapt resiliently.
All right. We stimulate their adaptive capability or we undermine their adaptive
capability. There is no neutral. There is no neutral. That's an odd thing. You're
either getting in their way and they're going to work around you, or you are giving
them resources that they can seize upon in adapting
relative to their goals and pressures and the systems they exist in.
And that's a really different stance. So, a couple quick examples so you know
there are real specifics in our field about all this. Expertise in a role: well, we can
run through things like Paul Feltovich and Rand Spiro's cognitive flexibility theory. How do
we escape oversimplifications or the reductive tendency in our work, how do we
recognize boundaries, how do we avoid premature narrowing in information
analysis, how do we broaden search, how do we revise assessments, how do we
reconceptualize or reframe, if you look at the sensemaking work from Karl Weick and
Gary Klein?
In all of these, all right, expertise in a role is more than just being expert at one
piece of things, having autonomy, autonomous action on one skill, one piece of
things; it's connecting those together and being able to revise and switch.
Synchronizing joint activities: a lot of interesting things going on. How do we
integrate diverse perspectives? We talked about that quickly today and last
night. Anticipated reciprocity: Professor Elinor Ostrom at IU -- I don't know if you ever took a
course from her, she was there when you were there, in the political science
department -- has some fabulous work on reciprocity as a
model of trust. It's redundant to put anticipated in there, but I want to
emphasize that reciprocity is anticipated, right? I will do something in my role
that may risk or consume resources, risk outcomes in my role, but I'll do it
because it helps you in your role overcome difficulties or take advantage of
opportunities, and together we will better achieve overarching goals in our
system.
And if you analyze things like the tragedy of the commons, you'll find that anticipated
reciprocity is a fundamental requirement for effective joint activity, where we don't
fragment and spin off into you doing your role well, me doing my role well, and the
total system going to hell in a handbasket despite that.
This turns out to be very important in accountability models in patient safety, for
example, in healthcare areas. Directing and redirecting attention, judging
interruptibility: a lot of these are classic phenomena; you can put different
labels on them in CSCW kinds of work, collaborative joint activity. And our
favorite one is enhancing cross-checks. So that's what we spend a lot of time
trying to study and enhance: how do you make more effective cross-checks to
enhance this collaboration and synchronization? Healthcare, crisis
management and layered sensing are huge opportunities for us right now.
So let's do a study. We've been using crisis management as a natural
laboratory. It's our standard method. We go out and work with real people who
are responsible for carrying out significant, risky tasks. It's
neat because people will spend money when bad, really bad things can happen.
But we do it in part because it's a great laboratory for getting data, because these
people really have to work at what they do; they know bad things will
happen, maybe to themselves or to people close by, if they don't make good
decisions.
One of the studies we just finished is a critical incident analysis with a major
metropolitan fire department somewhere between Boston and Philadelphia on
the East Coast. Wonder where that could be. And they made available some of
their firefighter injury and death cases. When we break out the ones that just
happened because this is just dangerous stuff and start looking at the
coordination surprises, we start to see that there's a variety of things they do to
prepare for these episodes that create cohesion. Remember, fires are relatively
spatially contained, at least initially, so they approach and initiate their activities
effectively. But given the inherent difficulties of communicating, what we find is
they start to work at cross-purposes.
And working at cross-purposes is one of the classic signatures of a
synchronization breakdown.
So, opposing lines: one group's hose line is driving the fire towards the other
group. Venting can do it too; if you vent inappropriately, you can actually increase
the intensity of the fire towards the other group -- do you remember the movie
Backdraft? And in a huge percentage of the cases these kinds of working
at cross-purposes play out, someone gets injured, they reconfigure, they have a
new crisis within the crisis, which is to rescue an injured
responder, and eventually they have some kind of resolution to that situation.
So there are some very interesting things you can look at about the
synchronization. But I want to tell you about a different study. It also illustrates
the way we use natural labs. In this case, the new capability is that you
can put sensors out in the world, and you can understand remotely what before you
couldn't. If you were a commander, for example, in a crisis, a disaster, a chemical
plant fire, say, you were remote, you weren't on the scene; there were people on
the scene, you were trying to share information, you had different roles. Well, with
new sensor technologies you can be on the scene even though you're in the
command center. One of the ways is a UAV flying around: you get the video
feed back in the incident command post. People love it. And if you've been in
any of these incident command posts or military command posts with these UAVs,
everyone loves the UAV feed. So we have this new capability, people adapt to
exploit it; in fact they are so captured by it that they over-utilize it, over-rely on it.
And everybody who sees this goes, I wonder if this is creating a danger. It's
changing the roles, creating the successes and creating new vulnerabilities, new
forms of breakdown. So they said let's study it. How do we study it?
Well, we have a bunch of reasons to worry about over-relying on one
data channel, right: premature closure, framing. Your hypothesis generation
is framed by the image, and you may be unable to revise effectively if the data
comes in from a different information channel.
So what do we do? Well, we create a staged world, right. We want to have a
realistic rendering, but we want to be able to repeat the problem and run it for a
variety of real people. So we were able to track down eight real incident
commanders in the Midwest who do this, with a reasonable amount of experience.
We set up a command post setting; that's pretty easy to do, because what are
they? Rooms with lots of paper and a couple of computer screens. The only thing
unrealistic is we didn't put as much noise in the background as would really be
going on, with people coming in and out and talking in the background.
What did we do? Well, we flew around, took pictures of a chemical area, we
created a fire digitally, and so we simulated a UAV feed. Not quite as realistic.
So then how do you design the problem? Well, in this case we took
a real accident that happened a few years ago in England, we modified it a
little bit, and we made it a garden path problem.
In a garden path problem, the imaging channel suggests an initial plausible
diagnosis; later, subsequent data comes in that's contrary to the current diagnosis,
and it comes in outside the imagery, outside the UAV video feed, all right, giving
them a second opportunity to understand what's going on.
All right. So the question becomes: do they revise? And what we end up finding
is that seven of the eight commanders went down the garden path, right; they were
stuck in the initial plausible diagnosis, they over-relied on the video channel. And
when you look at the details of what they did -- again, only one avoided the trap --
the ones who were caught were using fewer data sources, had limited cross-checking
between the data sources, and there were a variety of anomalies in the data that they
did see, relative to their hypothesis, that they didn't recognize and follow up; their
hypothesis really didn't account for everything they had seen either.
All right. So they were doing a poor job of understanding what was going on in a
dynamic, uncertain situation.
Now, let's pause for a moment. In some ways you can say that's a classic
situation we're in. New technology has an upside and a downside, cognitively
and collaboratively. Here we're identifying that upside and that downside. You
don't want to throw out the baby with the bath water and say don't use this stuff.
They will use it. It creates a new challenge for us: let's design better
visualizations that help people balance diverse, heterogeneous
information sources. Some will be more compelling under some circumstances
or most of the time.
All right. Some abstract data sources may be less
compelling. Sometimes they may tend to believe people on the ground, maybe
they don't believe people on the ground, maybe they can't translate what they're
getting from the ground because it's in a verbal form versus a
sensor data form that's reporting certain readings about contaminants, versus a
concrete visual. Very diverse formats. How do we integrate and balance those?
Great research questions for us, fabulous for the visualization community, very
important, very relevant. But there we're being reactive to an adaptive cycle.
Now, we're trying to influence the adaptive cycle. So in some sense I'm coming
back and saying, what do we have to do? We have to get ahead of these
adaptive cycles. How do we look further ahead, how do we start to understand
what's going to trigger what kind of adaptive cycle? What's going to dominate?
Are the expansive adaptations going to dominate in the video feed example, the
UAV example? Are the bottlenecks going to dominate? In this example we can
predict pretty much what's going to happen, right: the new capabilities are highly
attractive, the deficiencies only show up when you actually do this, right, and
you don't usually do this, so we don't notice when you do it wrong.
And when you do it wrong, there's always a simple excuse, right: it's human
error. What do we need? More technology to overcome those erratic other
people, not as smart as we are. Instead of recognizing, right, that these are
fundamental challenges for any human system. Human system, because the
system, however automated, however many fancy algorithms, however many
virtual technologies, in the end is about human purposes, right. We're trying to
have a safe world, an effective world, an energy-efficient world, equity in and
access to healthcare, et cetera.
So where we're really at is that it's time to take advantage of all the advances in
ecological and adaptive systems modelling and start to bring them in to understanding
and designing human systems. That means we have to start thinking about that
third level. How do we expand adaptive capacities? How are systems resilient
and brittle? How do we see hidden dependencies in coupled systems,
fundamental in software reliability and software dependability issues?
How do we track side effects in replanning? We adjust; we have all this great
new information, we can see that situations are evolving rapidly, we can take
advantage of that instantaneous information, let's change direction to take
advantage of this or to cope with this new information. What happens? We miss
side effects, other effects that are associated with the changes in plan and
direction and activities, and so we end up with that as a typical kind of failure.
Rigor in information analysis. We have access to so much data that we think we
have enough data to make a reasonable decision on, for example, a space
launch. Is a foam impact a safety-of-flight risk, or is it just a throughput risk that
we have to handle on the turnaround for the next launch of the space
shuttle? So we have low-rigor, shallow analyses that just find, right, it's only a
productivity issue. Why? Because they're under high productivity pressure.
And you have these rationalization and discounting processes going on. No one
sees that it's in fact a low-rigor analysis. In fact, if you looked at the real data,
right, these strikes were more energetic than any models they actually had could predict.
They were striking parts of the vehicle that weren't the
usual parts they were worried about structurally, so no one knew the
structural limits of the leading edge of the wing where it actually struck on
Columbia. So you end up with a situation where there are these key warning signs,
right, you have a shallow, actually zero-rigor, analysis, you move ahead, and all
of a sudden, boom, you're surprised by an unsurvivable vehicle.
The complexity costs of creeping featurism are a great example. Some people might
say they're a great Seattle example. You know, we put out systems
and we keep adding things. How do we recognize when the complexity costs
dominate the incremental fitness value? Right, what do we need? A fitness space.
Complexity costs mean looking at an aggregate over the fitness space, shifting our fitness
definitions from the very local, right, to more global things that require taking a broader
stance on what people are trying to accomplish in the world.
And so we have to think about how systems are resilient and brittle and develop
models like that.
Now, adaptive capacity, all right, that's been our theme all along today: to say
that what we need, as partners in different research organizations, is to take this adaptive
stance seriously and develop these new tools. And one of the interesting things
about adaptive capacity is that it's about the future. Measures of adaptive
capacity are about your ability to respond to future surprises. But the only way
we can assess that kind of adaptive capacity is to look at how the system
responded to past opportunities or disrupting events.
So we look at how you adapted in the past, right, in order to assess your potential
for adapting meaningfully to future surprises, even though we don't know what
the future surprise is going to look like.
So adaptive capacity becomes a generic kind of thing, a generic kind of system
capability that we can measure based on what the system does now, or in the
recent past, but it's an important variable because it tells us how the system is likely
to behave in the future even though we can't predict the exact disrupting events.
In fact, that is the definition of surprise, right: future events will
challenge, in smaller or larger ways, current planful activities. If you just behave
according to plan, the routine, typical contingencies that you've done in the past,
how likely is that to work in the face of future events?
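One way to make "assess adaptive capacity from past disruptions" concrete is to score, over a log of past disrupting events, how often the system had to go beyond its routine repertoire and whether the extra adaptation still met the goals. The sketch below is purely illustrative; the event log, its fields, and the scoring rule are assumptions, not anything from the studies in the talk.

    # Illustrative indicator of adaptive capacity from a log of past disruptions.
    # Each record says whether the routine plan sufficed, whether an extra
    # (improvised) adaptation was invoked, and whether goals were still met.
    # The log and the scoring rule are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Disruption:
        routine_sufficed: bool   # did planful, routine responses cover it?
        extra_adaptation: bool   # was an extra adaptation invoked?
        goals_met: bool          # were the system's goals still accomplished?

    def adaptive_capacity_score(log: list) -> float:
        """Fraction of past disruptions that exceeded routine plans and were
        still handled successfully via extra adaptation.

        A backward-looking proxy only: it uses how the system stretched before
        as an indicator of how it may respond to surprises it has not seen yet.
        """
        beyond_routine = [d for d in log if not d.routine_sufficed]
        if not beyond_routine:
            return 0.0  # never stretched, so no evidence of adaptive capacity
        handled = [d for d in beyond_routine if d.extra_adaptation and d.goals_met]
        return len(handled) / len(beyond_routine)

    log = [
        Disruption(True, False, True),
        Disruption(False, True, True),   # surprise handled by improvising
        Disruption(False, True, False),  # surprise overwhelmed the adaptation
        Disruption(False, False, False), # brittle: no extra adaptation at all
    ]
    print(f"adaptive capacity proxy: {adaptive_capacity_score(log):.2f}")

The design choice worth noting is that the score ignores routine successes entirely: only the episodes that went beyond plan carry information about the capacity to handle the next, unpredicted surprise.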
So what's a great place where this plays out? Emergency medicine is a great
place to think about this. And in emergency medicine, you can look at the
system and say, oh, emergency medicine is designed to handle varying
loads. Well, one of the interesting things is there's a report from two years ago
from the Institute of Medicine saying that the emergency department is the brittle
point in the national healthcare system. Why? Well, ask the head of the
Emory ED. The Emory ED handled the Atlanta Olympic bombing, so an
Olympic year. What were the casualties there? Two dead, twelve wounded, I believe,
something on that order? I haven't memorized the exact statistics.
The head of the Emory ED is very proud of how they handled that mass casualty event,
and he has stated publicly that if the Olympic bombing happened today his ED
could not handle it, much less anything larger. Could not handle it. Erosion of
expertise. A more rigid environment. Inability to shift rules and lines
of authority. Depleted physical resources. Right. EDs are under-resourced
relative to increasing demands: more cases, more diverse cases flowing through.
A recent study showed that EDs start to sacrifice normal quality-of-care
indicators almost as soon as patients show up at the door. Normally you'd think
an ED would start sacrificing things as surge levels got high; it turns out you
see some things being dropped as soon as you get a couple of difficult patients. It
doesn't take much to challenge these systems.
So what do you see in an ED? You see a lot of adapting. As loads go up, you can
learn a lot, right, about cognitive strategies and how they change, how they utilize
physical resources in new ways, how they change patterns of teamwork,
assignments, roles, communication strategies, all of these things adapting in order
to keep the system intact.
What's a failure? Well, we have interesting things going on. If you're in charge
of the ED and you're starting to run out of capacity,
what are you doing? You're anticipating: can I handle the patients,
right? So the logical thing from your point of view is to say, I'm getting too close to
capacity, I might mishandle a patient, so we'll divert patients to other EDs,
other hospitals. A patient dies on the way to the other one, or waiting at the other
ED. What happens? Scandal in the paper, right? Who is to blame? This
hospital. What does your hospital director do? Gives you a new policy: you can't
divert.
So is diversion a failure or is diversion an adaptive strategy? Both. Right. Now
you can't divert, so how do you handle patients? Well, now it's your mistake, not
the hospital's mistake, if you can't handle all the patients you have to deal with.
Surge capacity modelling in EDs is one of the hot places where we're trying to
understand this. Again, it's a place that needs this: as you innovate new
technologies, how do they support adaptive capacity, the ability to change and
escalate strategies as loads escalate, to stay resilient? On the other hand,
it's a perfect natural laboratory for us to model and understand these processes,
all right, and to ask how we design technologies to support them.
So this is the big thing. How do you manage a system's resilience? How do you
assess brittleness? How do you do this kind of stuff? We can draw on
ecological and mathematical modelling of large-scale complex systems.
There are a variety of basic principles that we don't have time to go into
today.
How are we on time?
>> Mary Czerwinski: About 15 minutes.
>> David Woods: Okay. Let me skip that example. I already hit this. Let's skip
up to the technology change issue.
So if we're taking this adaptive stance, then what this tells us is we have to think
more about models of technology change. So this is an old animation we did a
long time ago: the black box of new technology hitting ongoing fields of practice,
creating a variety of reverberations of change.
All right. An ongoing field of practice defined by artifacts, various kinds of agents in
collaboration, given the demands and strategies of the world. The black box hits it,
processes of transformation occur, new complexities appear, people adapt in various
ways, failure breaks through, usually gets blamed on human error; but in fact we
see new capabilities, new forms of coupling as people take advantage of them,
new tempos, new complexities for people to deal with.
How do we understand and anticipate the side effects of these changes? Well, the
classic model of technology change is technology adoption: early and late adopters,
the Rogers S-curve. Now, there are some other models that are starting to move
beyond this, but this is the classic one.
Now, this is an enormously weak model for understanding what's happening with
computer-based and digital technologies. Jonathan isn't here. This is the part
where I would talk all about him.
>> Mary Czerwinski: Say hi to him. He's on vacation.
>> David Woods: Yes. In Ohio, of all places. And so one of the great examples
of what you can't explain with this is systems that fail. And you can't explain systems
that fail due to workload bottlenecks, right: the classic phenomenon that we
see over and over again where, in my role as an administrator, I get
all the benefits if you take on the new workload associated with the introduction of
this new technology. But you're already under workload pressure. So you get no
advantages, but you're supposed to do these things so that I, in my role, get the
advantages. And normally those things fail.
And you can read this writ large in many of the medical information technology
failures. So for three decades they've been trying to put in computerized medical
records. What often happens is exactly Grudin's law, right: the people
who pay the workload penalty don't get benefits for their role, and they are under
workload pressure or even workload-saturated. So naturally these systems don't
go very far.
These models also don't explain adaptive expansions, right, where successes don't take
the form of what designers anticipated, but users exploit the capabilities to
do new things that no one predicted: the law of stretched systems, co-adaptive
processes. So what did we do?
So we said, let's take these adaptive ideas. Even in a simple descriptive way
we've got to start moving forward, taking advantage of these other areas
that have developed different ways to do multi-agent co-adaptive simulation
and things like that. A simple way to think about a role in the world is that a role is like a
performance track, and the performance track is defined, at least when a role is
well practiced in a relatively stable environment, with a stable background system for
funneling people into the role. Think of it as a nice geometrically regular track, a
performance track.
The performance track is defined by facets over four dimensions. We've got our
workload dimension, right; Grudin's law, we have to have a workload dimension.
We have an expertise dimension, right; technology change can introduce new
capabilities for us to exploit in terms of new forms of expertise.
We have an economic boundary, right: efficiency, productivity, economic gain.
We introduce these things and people want to see productivity or economic
gains from them. And then we have -- we've been debating whether to call this a
safety or a risk or an adaptability dimension. Let's just say for right now, think of this as a
risk kind of dimension, a variety of facets where we are anticipating that things
can break down and go badly wrong, all right. And we do a variety of hedges
and adaptations so that we don't have major failures in some sense. Okay?
Four classic kinds of dimensions, all right. This role could be an individual or a whole
group. An ICU could be modelled as a role, all right. An ICU doc could be
modelled as a role. An emergency room could be modelled as a role, or an
individual subteam within that could be seen as a role, the cardiac group, right.
And we have these dimensions, right.
So what happens when you throw new technology at it? Well, you can have
constrictions or expansions. So we have a simple way to show it: our track gets, right,
bottlenecks thrown in. You can't go on the straight path. You've got to go
around. Or we could break the path and say you've got to fill the gap; there's a
hole in the path, you've got to reconnect the path in order to continue to do your
job. A simple visual metaphor for the work-around, right: gap filling.
But you also have the possibility of expansions. So think of expansions like niches.
Now, notice an expansion is different. With a constriction you run into
something: if you just keep going, you run into something, so you've got to go
around. You've got to deviate from the usual things to make it work.
In an expansion, you can keep going in the old way, but a new niche has opened
up. That's the way you would think of it in ecological systems terms. A new
niche has opened up, and the people who explore, who are thrown off the routine track,
start to discover, wait a minute, here's something new I can take advantage of.
What happens over time, all right, is, right, we renormalize, because we
experience these as regularities in our environment. We take advantage of this,
that defines new routines, new practices, right; we have standard ways to work
around, we learn these, we develop these, we transmit these to other people
coming in, boom, boom, boom, we're now into a regular performance shape again.
A regular track again.
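To make the performance-track idea a bit more tangible, here is a minimal sketch of a role with the four facets named above (workload, expertise, economic, risk), where a technology change lands as either a constriction or an expansion on a facet, and workarounds and exploited niches eventually renormalize into new routines. The class names, example changes, and the renormalization rule are illustrative assumptions, not an implementation of the speaker's model.

    # Sketch of a role as a "performance track" with four facets, where a
    # technology change perturbs the track as a constriction or an expansion.
    # Names and the renormalization rule are illustrative assumptions.

    from dataclasses import dataclass, field
    from enum import Enum

    class Facet(Enum):
        WORKLOAD = "workload"
        EXPERTISE = "expertise"
        ECONOMIC = "economic"
        RISK = "risk"

    @dataclass
    class Change:
        facet: Facet
        kind: str          # "constriction" (bottleneck, gap) or "expansion" (new niche)
        description: str

    @dataclass
    class Role:
        name: str
        pending: list = field(default_factory=list)    # deviations off the track
        routines: list = field(default_factory=list)   # renormalized practices

        def hit_by(self, change: Change) -> None:
            """A technology change perturbs the track on one facet."""
            self.pending.append(change)

        def renormalize(self) -> None:
            """Over time, workarounds and exploited niches become new routines."""
            for c in self.pending:
                verb = "work around" if c.kind == "constriction" else "exploit"
                self.routines.append(f"{verb}: {c.description} ({c.facet.value})")
            self.pending.clear()

    icu = Role("physical ICU")
    icu.hit_by(Change(Facet.WORKLOAD, "constriction", "new documentation steps"))
    icu.hit_by(Change(Facet.EXPERTISE, "expansion", "remote specialist consults"))
    icu.renormalize()
    print(icu.routines)

The point of the structure is the talk's point: the same role object accumulates both kinds of change, and after renormalization you can no longer tell from the routines alone which started as a bottleneck and which as a niche.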
And you see this very nicely as people develop new experimental procedures in
healthcare: they start to move them out to other settings, and they become
routine, accepted, paid-for-by-insurance kinds of measures.
All right. So let's watch this happen, and the place we went to look is the
electronic ICU. So what's an electronic ICU? It's a remote facility meant to support
the physical ICU. What's the physical ICU? That's where the actual patients are.
So the electronic ICU gives you access to nurses and doctors in a remote facility who are
looking in, doing vital sign monitoring and other forms of observation and communication.
For example, they can have video, remote cameras they can control to look at
the patient and the equipment set up around the patient, maybe telemetry
on vital signs monitoring; they can help out the nurses.
Why would you want to do this? Well, the ICUs are under a lot of pressure. And
you can see that as economic pressure, you can see it as expertise pressure. All
right. There's a shortage of experienced nurses, of qualified people. In rural
environments there can be a shortage of specialized expertise for different
conditions in the ICU. You can have a small hospital that has a general ICU but
may not have enough people to cover all of the different specialty issues that
may arise for a particular patient.
So you can expand access to specialized expertise, a lot of different things.
So what did we do? Well, we did the classic kind of stuff. What do you start
with? You go out and look.
So we got access to an actual EICU and we hung out with them and observed what
they did. Based on that, we started to do a cognitive task analysis of the
different ways the EICU is being designed and operated to support the
physical ICU. Well, what was really critical was that they were logging their
interventions, in other words when did the EICU intervene to help the physical
ICU (or maybe the physical ICU didn't think of it as much help, but there was
some interaction). And since we had these logs, we had the ability to do a
longitudinal study: what kinds of interventions were happening, and how were they changing over
time? Well, if we're going to look at adaptive systems, we've got to start adopting
longitudinal methods. We can't just look once; we can't just ask how people
react at one point in time on a learning curve. How else are we going to start to
recognize how they find the value and adapt it into their situation?
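Given an intervention log like the one described, the longitudinal question (what kinds of interventions, and how is the mix shifting over time) comes down to tallying intervention categories by time period and watching the proportions move. The sketch below is only illustrative: the categories, the quarter labels, and the example records are assumptions, not the actual EICU data.

    # Illustrative longitudinal tally of EICU intervention logs: count
    # intervention categories per quarter and watch how the mix shifts.
    # Categories and example records are made up for illustration.

    from collections import Counter, defaultdict

    # (quarter, category) pairs standing in for timestamped log entries.
    log = [
        ("2007Q1", "anomaly_recognition"), ("2007Q1", "best_practice_reminder"),
        ("2007Q2", "anomaly_recognition"), ("2007Q2", "specialist_consult"),
        ("2007Q3", "billing_documentation"), ("2007Q3", "billing_documentation"),
        ("2007Q3", "anomaly_recognition"),
    ]

    by_quarter = defaultdict(Counter)
    for quarter, category in log:
        by_quarter[quarter][category] += 1

    for quarter in sorted(by_quarter):
        counts = by_quarter[quarter]
        total = sum(counts.values())
        mix = ", ".join(f"{cat}: {n / total:.0%}" for cat, n in counts.most_common())
        print(f"{quarter}: {mix}")

    # A rising share of billing_documentation would be an early sign of the
    # trajectory discussed later in the talk: administrative load crowding
    # out the monitoring and expertise functions.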
We started out worried about, well, gee, these things are going to support
anomaly recognition especially, so monitoring help. Nurses like to call it extra
eyes on the patient. Sensemaking kinds of functions: for example, we saw
cases where the people in the EICU could step back and take a big-picture
view, right, revise: wait a minute, is the patient wet or dry? Are we late or early? Are
we overreacting to something, driving, overdriving them into the wrong state, that
kind of thing?
The people in the physical ICU, right, have a lot of physical tasks to carry out.
They deal with the patient's family, a variety of attendings and residents coming in, a
lot of interruptions, right. So all of a sudden you can start
saying, hmm, the EICU is valuable because it's a more stable environment
relative to monitoring.
A lot of potential pluses. Specialized expertise, that was a key one we thought
would be valuable. What we didn't anticipate was best practice reminders. Maybe that's
sort of a sensemaking kind of thing, but basically it's giving feedback to the physical ICU on
various interventions they need to be doing. In some ways this is stepping back
from the detailed flow and saying, there are other tasks you haven't carried out yet
that should be on your priority list.
One we didn't realize would be there was mentored learning. The EICU nurses
tended to be fairly experienced nurses, and the kinds of ICUs that needed
an EICU tended to have more junior nurses. They were licensed, but they were more
junior in experience, very junior in experience levels. Yeah?
>>: What was the mechanism for the EICU to communicate with the ICU?
>> David Woods: They had television linkages. They had video -- well, there
was two-way communication. They ended up calling, all right; they had
dedicated lines to call and talk to them about what was going on.
That's one of the predicted trajectories: the EICU can now become a source of
interruption, and inappropriate interruptions, if they don't have effective common
ground. How do you get effective common ground? Is the video interplay critical,
right? Not just to see what's going on with the patient but also to see what's
going on in the physical ICU and the nursing loads, to know when to interrupt and
when to interact appropriately given what is going on.
So you can think of the ICU as being under a variety of pressures, a variety of
constrictions: economic constrictions, expertise constrictions and workload
constrictions. What do you see? You see people taking advantage: we can
avoid risk, we can adapt in a variety of ways, by taking advantage of these remote
monitoring capabilities. You see a transition. We started this study two and a half
years ago, and the number of EICUs in the country has doubled, or gone up two and
a half times, from when we started the study.
So we weren't at the very beginning, but we were early in the migration, and it's
been taking off. So this is a success story. People are adapting to take
advantage of this. Different ones are configured differently because of their
particular geographic area concerns, and we're starting to see it normalize into a
new pattern of activities.
What we are interested in is predicting what's going to happen next. Well, we
saw one of these things happen because it was a longitudinal observation.
There was one thing on that list I didn't mention: billing. Billing. So what
happened? All right. Hospital administrations are under various pressures -- these
are notional, don't take them as the literal point from the data, but notional -- that
create constrictions, and from the hospital's point of
view finances, right, are a big one. They are looking for ways
to be financially healthier, and here's this EICU, which in their environment is a resource
that they can seize upon and use to adapt hospital procedures. So what do they
do?
They start introducing workload into the EICU, saying: EICU, your job is to monitor
the physical ICU to make sure they enter things in a way that maximizes our
ability to bill. Not improve patient care, not access specialized expertise.
What do you see? You see this thing where a success can start evolving into
something not quite as successful, or into the grounds for an accident that we could then
come in and investigate and go, oh, look at how this was a system failure.
Look at how the origins of this accident began years before, in
organizational decisions; how this is an organizational accident, not human error.
I don't want to be there. I've been there enough. I want to prevent those things
from happening by being proactive. So we saw this trend happening. So part of
our analysis is trying to project adaptive trajectories. And how do we do that? Well,
without going into all the details, I just want to illustrate how we
related a variety of sustainability conditions. So that's one of our new themes
here: you have a new capability that's seen as a niche, and effective leaders
start to recognize the niche and exploit the niche and expand the niche.
As they do that, they have a variety of effects. It's a linked expansion-constriction:
expanding on one dimension and its facets can create constrictions on other
facets, other dimensions, for that role or for other roles.
So the issue is: let's identify sustainability conditions and say, if those aren't met,
what are we going to see? We're going to see erosion. Instead of benefits on
the expertise and quality dimensions or risk dimensions, we're going to
see the gains on the financial dimension erode those expertise gains, and we'll start to see
the risk of failure go up.
So we can start to see some of the common patterns that our study revealed.
So the extra eyes, right, better monitoring, is great, but that assumes the EICU
isn't task-loaded with other additional administrative or other kinds of tasks. If
you've got a bunch of people sitting around and not much is happening, what are
you going to do? You're going to load them more. How are you going to load them
more? Well, if you're monitoring three physical ICUs from one EICU, why not do five?
Wait a minute, we can get more billing. Let's do seven. Whoops, wait -- all of a
sudden we see the economic interest of the entity running the EICU increasing the
workload. Either direct monitoring load or other additional tasks
get transferred to the EICU, and all of a sudden that monitoring expertise goes away.
Best practice reminders: those are interruptions from the physical ICU's point
of view. If you don't have mechanisms for effective common ground, what are
you going to get? You're going to have bad collaboration. We've studied these
things over and over again, and in healthcare in particular interruptions are a very
simple correlate of mistakes: high interruption, more mistakes in these routine
activities. Administrative billing tasks add complexity. The sensemaking, your ability
to step back and take a big-picture view, wasn't one of the high-frequency
gains, but it was potentially a critical gain relative to the outcome for your
patient; if your relative, or you, were in the ICU, right, you'd care about that a lot.
Again, as tasking goes up, how are they going to be able to step back and take a
big-picture view, make these high-level judgments that may be difficult for
the nurse on the physical scene because of all the activities and
interruptions that are going on there?
Specialist expertise. Well, one of the interesting things about the EICU that
nobody noticed is that part of the reason the EICU is working right now is because
there's a pool of available expertise: older, experienced nurses who no longer
want to deal with the physical rigors of being on shift in the ICU. So you've got
experienced nurses, older nurses, who want to work in a different style. They can
have a better lifestyle, or they can work shorter shifts part-time in the EICU than
they would work in the physical ICU.
Now, when that experience base gets consumed, will some of these benefits still
hold? Mentored learning: you can't have mentored learning if the people in the
EICU aren't much more experienced than the people in the physical ICU. You're
not going to have the same access to specialized expertise; this isn't simply
physician expertise, it can also be experienced nursing expertise. So the
sustainability condition is that there's an experienced pool
available to draw on.
So what we're starting to do is to say, hey, wait a minute, as a field we
know a lot about these adaptive trajectories. We use them all the time in the way
we design studies and the way we make recommendations and critique
interfaces and run usability studies and innovate new designs.
But we don't organize any of our knowledge around adaptive trajectories; we
don't organize our knowledge and techniques around assisting and predicting
and steering these adaptive trajectories. But maybe we should.
>>: So the idea of the experience pool, and perhaps all of those adaptations that
you talked about, (inaudible) thinking of them as sort of directional, that the
experience pool only benefits when the EICU people have more experience than the
ICU people? The flip of that, and this may be true for lots of these different sorts of
adaptations, is maybe the people in the original system, in the ICU in this case, can
now provide additional learning for someone that might be at the remote
location. I may not want that new person in my ICU but, yeah, sure, they
can look over the shoulder in the EICU and be out of the way and get this sort of high
level view but still get --
>> David Woods: What did he do? What did he do? He just did an expansive
adaptation. He said, wait a minute, if I want to train new nurses, all right, what do I
do? I start bringing them through the EICU as a mechanism to get them up to
speed before they have to deal with everything, as a way to get concentrated and
safe learning, because in the medical world we don't want people learning on real
patients. That's one of the trends out there, right; we're doing more crisis
simulation, and other forms of medical simulation are starting to penetrate and
spread. It's still relatively early, even though they're available in most
metropolitan areas, at least one or two centers. But that becomes, oh, wait a
minute, another trend, another set of constrictions, here's a resource I can take
advantage of, right. And so that's what we want to do: we're saying, how do we
start mapping these potential trajectories?
Now, one of the other ideas I didn't put up on the slide is: how do you monitor? We
may not be able to absolutely predict ahead, but at least we can detect the early
emergence of these trends, right? And so your idea says, I don't know for sure
they're going to adapt it for training, and I'm not going to make a Las Vegas bet
that their adaptation is going to be new training methods. On the other hand,
that's one of the potential trajectories.
Notice we can come up with this by applying our general knowledge of human
systems and new technology in HCI to these things. We don't have to know a lot
about this domain to know, hey, that's one of the potential trajectories. Then the
question is to set up monitoring conditions.
So now we set up innovation monitoring conditions. How would we notice if that
was turning out to be adaptively valuable to the user community, these human
systems? Then all of a sudden you go, wait a minute, it turns out that as training we
can help you take advantage of that, and do that in a good way, rather than in a
poor way. Remember the crossed benefits and weaknesses in our UAV example
for incident command: the technology capability created new
vulnerabilities at the same time it provided new benefits for the incident
commander.
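One way to read "set up innovation monitoring conditions" in concrete terms is as a small set of named signals, each tied to a candidate trajectory, checked against observations you are already logging. The sketch below is an illustration of that reading, not the speaker's method; the trajectory names, thresholds, and observation fields are all assumed.

    # Illustrative "monitoring conditions" for candidate adaptive trajectories:
    # each condition is a named predicate over simple observations, and a
    # condition that fires flags a trajectory as emerging. All values are
    # hypothetical.

    observations = {
        "trainee_hours_in_eicu_per_month": 40,
        "billing_tasks_share": 0.35,
        "interruptions_per_shift": 12,
    }

    # Each monitoring condition pairs a candidate trajectory with an early signal.
    monitoring_conditions = {
        "EICU used as a training pathway":
            lambda o: o["trainee_hours_in_eicu_per_month"] > 20,
        "billing load crowding out monitoring":
            lambda o: o["billing_tasks_share"] > 0.25,
        "EICU becoming an interruption source":
            lambda o: o["interruptions_per_shift"] > 10,
    }

    emerging = [name for name, check in monitoring_conditions.items()
                if check(observations)]
    print("emerging trajectories:", emerging)

The design choice is the one the talk argues for: you don't have to predict which trajectory wins, only name the plausible ones in advance and watch for their early signatures.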
And so we don't want to get into these simple games -- you just have to take the bad
with the good, over-automation versus human error, back and forth -- games that don't
get us anywhere in terms of overall system design, system capability. So we need to
start being able to use our knowledge, which is already halfway there, at least, to be
able to talk about emerging patterns across local adaptations. All right. That
means we have to start integrating these multiple perspectives at different levels,
right; so we had the hospital administration, the ICU, the physician, right, the nurse,
nurse training, all these different perspectives interacting.
How do we anticipate adaptive traps, when they're going to get stuck? A classic
example we're all dealing with right now is corn ethanol. Great example of an
adaptive trap.
And it's also an example of an adaptive florescence. Can we recognize linked
sets of adaptive expansions, where one adaptation -- there's an expanded niche,
somebody adapts to take advantage of it, that creates another expansion, which
somebody takes advantage of, which creates another expansion -- that's an
adaptive florescence. The emergence of corn ethanol was an adaptive
florescence. It turned out to be an adaptive trap when you took a larger
perspective, one that we're stuck in right now.
The news driving over was cellulose-based ethanol; that will get us out
of the adaptive trap. Notice the key theorem behind an adaptive stance, behind
adaptive cycles: yesterday's solutions produce today's surprises that become
tomorrow's challenges. It's not about right or wrong, good or bad; it changes the
way we do the metrics problem. What we're trying to do is anticipate how people
will adapt. When they adapt, they will be maladaptive in the sense that it will
create risks from a certain perspective. And there will be benefits. That's one of
the fundamental things about adaptive behavior: locally adaptive behavior does
something for those who adopted the behavior.
From a larger perspective we can later come back and go, boy, was that strange
behavior, right, and call it error, right. But we have to recognize, and that's been
20 years of the new look at error, that error is fundamentally adaptive
behavior: locally adaptive behavior that is globally maladaptive when you take a
larger perspective and a larger set of information.
So the challenges I want to leave you with are: can we project and anticipate the
multiple unintended consequences of proposed changes? What methods do we
have, are we using those methods, are we getting too local, are we being locally
adaptive and globally maladaptive, like in complexity creep?
Can we forecast trajectories of adaptive responses? And can we discriminate
what's promising so that we can actually participate in steering change towards
expansive adaptations? Back to Colonel Graham's comment, our users are in
revolt. They are not waiting for us to figure it out. They are not waiting for us to
create new capabilities, new visualizations; they are going to do it, they are going
to adapt, because they are under pressure to accomplish goals and they face
real risks of bad consequences. Whether we're talking about intensive care
units, emergency departments, military systems, crisis management systems,
people are adapting, and in that adaptive process how do we find those open
periods where we can help them, right, rather than being reactive, or rather than just
keep throwing new little innovations hoping one or another will penetrate the
walls of adaptation, the adaptive traps they are stuck in. It's a whole new
stance and paradigm. But the good news is we actually have most of the work
already available, right.
In the end, we already know a lot about how to release human adaptive power; it
is through our design and innovation process. But we have to harness and link that,
and model, and change our techniques a little bit in order to connect to
our customers and the end users in the world a little differently. Thank you.
(Applause)
>> Mary Czerwinski: Any questions?
>> David Woods: Comments? Arguments? Yes?
>>: I've been using (inaudible) from crisis management, trying to adapt it to long
crisis management situations like life science research, which is kind of -- it's not
actually -- it's a bit of a crisis management situation, but there are no human
(inaudible) involved directly. However, it does involve high cost if you think about
the (inaudible). So do you think (inaudible) situations where there's no fire,
nobody is dying, but it's still very critical time-wise or financially?
>> David Woods: This is an old, old debate in some ways, right, which is: oh,
yeah, there's the sort of everyday experience, it's the business world, it's
whatever, versus there are these high-risk, critical, specialized situations. And one
of the changes that the technological world has produced for us is that it has linked
those two together, right?
The stuff I used to do with mission control or with the nuclear control room is
now happening in the business world. They are talking about resilience not
because there are lives at risk but because the financial stability of the organization
is at risk.
If you hang around people who make lots of money at these high levels, they don't
act like it doesn't matter, right. When they lose out on their bonus, they get really
mad, right. When the company goes under and they're out of work, they're not
happy campers. So what's at stake -- you know, when you say it's a
human system, there's always stuff at stake. And the other way to look at it is
to be a parent, because when you're a parent and you look at your early teen, the
pre-teen for example, or whatever, and they are just obsessed with the crisis in
junior high, in middle school, and as an adult you're like, oh, come on, it doesn't
matter a bit what clothes you wear or what style or which camp you're in,
you know, or whatever, but to them everything in the world depends on that.
So again, it's a classic example of balancing perspectives at multiple scales. So
that's the paradigm, and that's some of the other work we're doing: again,
combining knowledge from multiple disciplines, putting it together. We're trying to
look at what are called polycentric control models. Polycentric control
models actually come out of the tragedy of the commons; they come from social
scientists like Elinor Ostrom at IU. It is a new way to think of
supervisory control, or managing adaptive systems, and what you're doing is
saying: I've got partial autonomy at different levels, so I've got multiple centers of
control, each with partial autonomy, and how do I balance that out?
And so they have to be in sort of a creative tension across those levels. And that
seems to be one of the key drivers to creating systems that are effectively
resilient in the face of change and that avoid a kind of classic pattern of
maladaptation which is called the tragedy of the commons. I don't know how
many people know the tragedy of the commons.
>>: (Inaudible).
>> David Woods: Yes. Or, you know, Enron. Or, you know, another thing would be
to look at Al Roth's work. Another great example of this; he's the husband of one
of the cognitive engineering, cognitive human factors people -- I always like to say it
that way. You know, he's a famous microeconomist, but he's the husband of a
great human factors person.
And he has a great paper on how markets unravel, sort of a synthesis on
experimental microeconomics, and again, with markets unraveling you start to see
these multiple-level operations going on. And so he's been doing this on
transplant exchange programs, for example, or matching systems, like how do
you place residents or medical students in residency programs. So matching
systems are a couple of the examples, transplant exchange systems; the kidney
exchange system right now is working better, finding better matches, because of
the stuff he designed.
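As a rough illustration of the kind of mechanism behind the residency match being
referred to here, this is a minimal deferred-acceptance (Gale-Shapley style) sketch;
the applicants, programs, and preferences are invented, and real match systems handle
capacities, couples, and incomplete lists that this toy version ignores:

    # Toy sketch of deferred-acceptance matching (invented data, capacity 1 per program).
    def deferred_acceptance(applicant_prefs, program_prefs):
        free = list(applicant_prefs)                 # applicants still proposing
        next_choice = {a: 0 for a in applicant_prefs}
        held = {}                                    # program -> applicant tentatively held
        rank = {p: {a: i for i, a in enumerate(prefs)}
                for p, prefs in program_prefs.items()}
        while free:
            a = free.pop()
            p = applicant_prefs[a][next_choice[a]]   # a's next most-preferred program
            next_choice[a] += 1
            if p not in held:
                held[p] = a
            elif rank[p][a] < rank[p][held[p]]:      # program prefers the new applicant
                free.append(held[p])
                held[p] = a
            else:
                free.append(a)
        return {a: p for p, a in held.items()}

    applicant_prefs = {"ann": ["city", "mercy", "general"],
                       "bob": ["city", "general", "mercy"],
                       "cal": ["mercy", "city", "general"]}
    program_prefs = {"mercy": ["bob", "ann", "cal"],
                     "city": ["ann", "cal", "bob"],
                     "general": ["cal", "bob", "ann"]}
    print(deferred_acceptance(applicant_prefs, program_prefs))

The design point is the one being made in the talk: the quality of the outcome comes
from the mechanism as a whole, not from any one participant's local view.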
So these are great examples that we can take these adaptive approaches and
that we can do things that improve design. Now, one of the things, when I work
with the designers at OSU, is that it's an interesting challenge, because for an industrial
designer it's not the normal thing you design anymore. Right. And for HCI it's
not the normal thing you design anymore either. We're used to thinking there's an object,
there's a -- our system still has kind of an object quality to it: here's the
visualization, right, you have a name for it. All right. And we can get that named
thing into the larger system because they make use of a fisheye lens, or
they make use of this other technique or visualization or GUI feature that we
created.
What I'm saying is that our design object is quite different when we take the adaptive
systems stance. Now, the adaptive systems stance has been around for a while. It's not new.
What's new, I think, is that a whole bunch of people are saying it's matured, now is
the time to move. All right. There are enough resources to draw on. And don't let
the people who do adaptive system models do it by themselves, because they'll
screw it up when it comes to things like emergency rooms and intensive care
units and genomics and all the other areas of hyperactivity.
Everybody's experiencing brittleness in a highly hyper-interconnected,
hyper-pressured world, and they want to be able to respond more agilely, they
want to have more foresight. These are the things that we're trying to do. Now,
that's why I said these are cross-linked levels, right. Because if you're going to
enhance adaptive power, what are you going to do? You're going to come up with
better CSCW systems. What are you going to do? You're going to come up with
better expertise systems, like visualization aids and things that give you better
feedback, better ability to sort through massive amounts of data, right?
But the reason this matters is because of how it helps synchronize, and how
synchronization matters is how it releases adaptive power; we've got to get those
three levels interconnected. Yeah.
>>: So sometimes it seems like you can be both (inaudible) and supportive.
>> David Woods: Yes.
>>: The EICU is a good example. I am no longer in the space, so I'm not hands
on, you know, there's some things that I clearly can't get through a video camera,
but at the same time you said there are benefits like taking the high level view.
So has this just become a larger balancing problem now or perhaps --
>> David Woods: Well, the multiple effects problem -- that's why I use the crisis
management example, because that's a really intense, specific example, and this
was the first study that gave empirical support to the observation, the
anecdotal observation. And it doesn't have any design implications yet. We've
got to get a better balance across, and better sorting of, the different information sources.
But the mixed effects problem, right, that's what you're pointing out, technology
change has mixed effects. And the adaptive stance says look for and anticipate
how those mixed effects are going to play out and then if we're an effective field
we ought to know a bunch about how those mixed effects are likely to play out.
So when we come back and somebody says, oh, this is going to be great, it's
going to be a remote monitoring facility, we ought to be able to say, even though
we've never walked into an ICU, oh, there's a variety of things that a remote
monitoring facility might do that would be effective and there's a variety of traps
that might happen in a remote setting -- because it's remote from the real world.
So we would say what's the value of being on the scene? Is there something
special about being on the scene? And so all of a sudden you start going what is
it about being on the scene that's different than only looking at things remotely?
Well, one of the things that comes out, for example -- this was shown pretty nicely
in the movie Black Hawk Down, or the book actually; we don't know if it's really
true or not, so I'll say the book, which was really great -- is the remote monitoring,
right: the commander was monitoring the battle, all right, from feeds on the
helicopters over it.
And what is striking is he had no feel for how things were going downhill. All
right. So with remote monitoring, I know in advance that remote monitors
are more susceptible to late recognition and intervention when things
are sliding downhill. They don't see the slide downhill early. We call it going
sour. It's a going sour signature leading to an accident. So they're late at
recognizing the going sour signature. Right. So there's something special about
being on the scene. Now, it's not enough to just be on the scene, because we've
got lots of going sour cases where there was a resident on the scene who wasn't
expert enough to recognize that things were going downhill until there was a
collapse in the physiology.
A good place to study this is intensive care units, operating rooms,
anesthesiology kinds of places. But there are others. We've done it with aircraft
automation; the same thing happens with aircraft automation. So we know this is the
case. We also know that in highly automated worlds it's hard to recognize going sour
signatures. Why? Because the automation keeps responding to the early trouble,
hiding the trouble from the human supervisor until things have gone to hell, and
then the automation goes, here, dude, take care of it, and the guy
goes, oh, no, what do I do now?
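A minimal sketch of that masking dynamic, assuming a toy compensating controller with
limited authority; the variables and numbers are invented, and the only point is that the
value the supervisor sees stays flat while the underlying disturbance grows, until the
automation saturates and hands the problem over:

    # Toy illustration: automation quietly compensates for growing trouble,
    # so the displayed variable looks healthy until its authority runs out.
    def run(steps=25, target=100.0, authority=10.0):
        disturbance = 0.0
        for t in range(steps):
            disturbance += 0.7                        # the underlying trouble keeps growing
            correction = min(disturbance, authority)  # what the automation can still absorb
            displayed = target - disturbance + correction
            note = "  <-- automation saturated, handed to the human" if disturbance > authority else ""
            print(f"t={t:2d}  disturbance={disturbance:5.1f}  displayed={displayed:6.1f}{note}")

    if __name__ == "__main__":
        run()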
And so there's a really cheesy Japanese reenactment of a real aviation incident
where this happened, a near miss: an airplane plunged
20,000 feet before the pilots regained control. So these things really do happen.
So we know this pattern, and so remote monitoring has this potential mixed effect.
And notice the other reason I used the crisis management UAV example:
when we pay the penalties for mixed effects can be different from when
we get the advantages. So the UAV feed gives us advantages right off the bat
that everybody sees; they want to take advantage of that new opportunity. They
don't want to give it up.
The penalties are later, in cognitive and collaborative performance in an actual
disaster where they get trapped in the wrong assessment. They have an
incomplete analysis of all the data.
So one of our contributions would be to help people, all right, see that mix. And
then one of our jobs should be: how do we compensate for that mix, so we
get the advantages that the opportunity represents? Because they're going to
adapt to take advantage of the capabilities anyway if they're under pressure to do
more.
>> Mary Czerwinski: You have a question back here, Dave.
>> David Woods: Yes?
>>: I was just curious, is any work being done to investigate, so, the systems that
you described, the EICU and the ICU in the hospital, integrating those together
into a broader network of knowledge, to get that holistic picture, especially
in epidemiology, I (inaudible) so you can identify trends and --
>> David Woods: That would be a great thing to get with a public health school
that is connected with some public health departments in the area and work on.
I mean, we've been trying to do that in Ohio and if we can get more funding or
whatever, that's a great opportunity.
What's the short answer? In healthcare, what do they think? They think that you
can do this bottom up. Collect all the data; once you have the complete digital
system for healthcare, all the data is somewhere in the digital system. So then what
do you do? Because there's too much data and you can't find anything meaningful,
you must have data mining algorithms, and the data mining algorithms will
tell you, aha, here is the outbreak, and da, da, you're done, and it will tell you this,
and now you know that it was this pattern or trend, and now you know what's
going on.
Will data mining do some things from massive healthcare databases? Sure.
One argument, a Google argument, is that because we can have ultra-massive
databases on basic human activities, all we need is bottom-up data mining. We
have massive amounts and we can find these little emerging trends that matter.
I don't think -- yes, you will have some successes. Will it work overall? I don't
think so. It's not really a holistic approach. A holistic approach would
combine a bottom-up approach with top-down or middle-out approaches.
>>: Okay, so there's no real work being done --
>> David Woods: I've been crying out for 12 years now -- it's 2008, and we started the
patient safety movement in 1996. I've been crying out for 12 years saying, guys, it's all
about overcoming the fragmentation, and you can't overcome the fragmentation if you
try to say one big massive database will allow us to integrate everything. You
have to have other kinds of mechanisms. Top-down may be too hard to initiate
given the current structure, so the issue is how do you do middle-out, how do
you do decentralized, how do you get emergent properties, what are the key
levers that will give you emergent properties to create continuity? But that's the
big, big target in healthcare: how do you get emergent continuity over virtual or
distance interactions?
And you're right, can we get combinations of perspectives? We've done it in the
national airspace system: we've decentralized authority and we've created better
coordination mechanisms between airline dispatch and strategic air traffic
control, so we know when to back off and how to back off when risk is introduced
into the system, and how to handle disrupting events in meaningful ways. We've
done it in a fairly large-scale system.
But the national airspace system is tiny compared to the national healthcare system.
>>: (Inaudible) applied to Southwest I believe it was for managing their main
crews, luggage ramp, all that.
>> David Woods: Well, we constantly run into people trying to transfer business
world solutions, which are about efficiency and lean in hyper-controlled
settings, to really messy, high-potential-for-surprise settings.
Healthcare is a high-potential-for-surprise world, and those solutions depend on
controlling the environment. So the classic case applies to automation, right? If you
increase the level of automation, the level of autonomy, the automation in the system,
what's going to happen? I can tell you exactly what's going to happen, all right.
There will be human roles developed whose job is to close, to align, the
context gap. In other words, their job is to make sure the world matches the
assumptions behind the automation. As long as the world that the automation
runs in matches its assumptions, it will do things that make everything
look real smooth, hyper-smooth, hyper-efficient. The problem is, if the context
gap grows, the automation will keep doing what it thinks is right for the
world it thinks it's in.
That's actually that UAV crash I skipped over; that's the example I use for this.
1999, a UAV has a problem. It takes six months in 1999 to plan one UAV mission,
a Global Hawk. Obviously they're a little faster now. Many people were involved in
it.
Onboard failure, software contingency plan, return to base. It lands, stops, goes to
look up the next thing to do, and there's a bug in the software. Nobody
had evaluated the interactions across software modules, so instead of looking up a
taxi speed, the speed it looks up is the last descent speed. So it tries to taxi
and make a 90-degree left turn while accelerating to 150 knots. Needless to say,
the physics doesn't work, and it goes careening off and crashes in the desert.
What's there? You have the automation doing the right thing in the wrong world.
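A minimal sketch of that failure mode, assuming a toy phase-to-speed lookup shared across
modules; the parameter names, values, and the missing reset are invented for illustration
and are not the actual Global Hawk software:

    # Toy illustration: the lookup itself works, but it runs against a stale
    # world model, so the automation does the right thing in the wrong world.
    SPEED_TABLE = {"descent": 150, "taxi": 10}        # knots

    def contingency_return_to_base(mission_state):
        # Another module was supposed to set the phase to "taxi" after touchdown;
        # that cross-module interaction was never evaluated, so the reset is missed.
        phase = mission_state.get("phase", "taxi")
        commanded_speed = SPEED_TABLE[phase]
        print(f"commanding ground maneuver at {commanded_speed} knots")

    # After the onboard failure the shared state still says "descent":
    contingency_return_to_base({"phase": "descent"})  # -> 150 knots on the taxiway

The code does exactly what it was built to do; what is wrong is the world it thinks it is
in, which is the context gap nobody was monitoring.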
Ariane 501: software blows up the rocket launch. Software modules interact. A
test piece of software they didn't realize would actually be feeding data -- feeding
non-live data -- to one of the monitoring programs that says whether it's on course or
not. So the monitoring program thinks the rocket's off course when it's not off course,
and blows it up.
Literal-minded machines, right. And that was what Norbert Wiener was
reminding us of and warning us about, that danger. So people will be there to
align and close the context gap. So again, what do we have? A trajectory: there's
a technology change, and there are certain things we know that are likely to happen or
could happen. What do we monitor? Are those things happening, are the support
features for that in place, or are we going to end up again in one of these
mixed-effect situations?
The classic mixed-effect situation is where we end up with some successes and
problems at one scale, some successes and problems at the other scale, and
we end up debating human error versus over-automation. I went through this with the
aviation industry; we're going through this in the healthcare industry. And you just point
back and forth. You know, there's a screw-up: well, doesn't matter, just put more
technology in. Or: the screw-up means that you have too much technology, stop the
technology, right, just have people do it. And of course neither of those is a
stable point out there.
>> Mary Czerwinski: Okay. We should probably stop there just to save his
voice.
>> David Woods: I'm actually fine. I don't know why I've lost it here.
>> Mary Czerwinski: Thank him again.
(Applause)