>> Mary Czerwinski: Hello everybody. Welcome. It is my pleasure today to introduce Colleen
Stiles-Shields from Northwestern University. It's my pleasure to have her and her
colleague Stephen here today to talk about the very fascinating work that they're doing with
mobile health interventions. She is affiliated with the Feinberg School of Medicine and also
a center for behavioral intervention technologies that she will tell you about. Her work is very
complementary to the work that we've been doing in this area and I'm super happy that she's
here, so take it away, Colleen.
>> Colleen Stiles-Shields: Thank you. Thank you very much, Mary. I'm absolutely thrilled to be
here. As well as being from the Feinberg School of Medicine, I'm also from the Center for
Behavioral Intervention Technologies, and I come from this with a background in social work as
well as clinical psychology, so aside from just being thrilled to be here in general, I'm really
excited about the hopefully good discussions that come afterwards given the varied expertise in
this room and beyond. Today I'm going to be talking a little bit about a different way to
improve behavioral intervention technologies and that's through harnessing more human
features of the apps. To start off, I'm going to begin with the very obvious problem and that is
the pervasive nature of mental health problems. In a given year 19 percent of the U.S.
population will meet criteria for a mental health disorder, which translates to roughly
58 million adults in need of mental health services a year. To put this into perspective, 21 to
30 million per year of that is depression alone. That's a fair number of people. However, of those
roughly 58 million people, only 21 percent are receiving adequate care. Surprisingly, given the
number of undergrads you find on a college campus who say they are psych majors, the
issue of pervasive mental health problems is further compounded by the fact that we simply don't have
enough behavioral change professionals in the field to meet this need of 58 million people a
year requiring treatment. We currently have about half of what we need, which is around
300,000. Further, there are access barriers beyond the numbers in our workforce. For example,
time constraints: there are a lot of people who work full time, have child care, have a
commute, so maybe they've got 30 minutes on their lunch break and that's really all they can give.
Lack of access, or geography: conveniently, there are a lot of therapists in Seattle and a lot
of therapists in Chicago, but unfortunately in a lot of places in our country there simply aren't
many therapists. Further, costs: not only the cost of going to therapy because
insurance doesn't cover a lot, but costs of maybe childcare or transportation, a lot of hidden
costs. Transportation as well, I actually see a family at the University of Chicago right now that
takes about two buses across an hour and a half to two hours each way to come and see me
each week and Chicago does not have winters like Seattle's, so now not only are they waiting
for two buses but they're waiting for two buses in very bitter cold. I don't think I could actually
overcome that barrier and I'm impressed that they do. Finally, there are symptoms inherent in
some conditions that really are going to interfere with maintenance and initiation so
depression, for example, we expect people with depression to have low motivation and to have
things that aren't really feeling that fun for them, so it's already hard to get them out the door to
get to these things. Taken together, changes in treatment delivery are necessary to reach more
people and overcome these barriers. Cue the entrance of behavioral intervention
technologies, which I will hereafter refer to as BITs. BITs are applications that use
technologies such as mobile phones, computers, tablets and sensors to support behaviors that
improve health, mental health and overall well-being. I say BITs rather than mHealth or
eHealth because I'm really focusing on technologies centered on behavioral change
specifically, and not broader technologies in health. That's why I'm sectioning out BITs from the
overall umbrella of mHealth and eHealth. As the title may indicate, BITs are a major focus of
my lab, the Center for Behavioral Intervention Technologies, or CBITs. Today I'm going to be
focusing on BITs delivered through smartphones, and I chose this focus for a few reasons.
Smartphones are rapidly increasing in computing capacity while dropping in cost. They also
overcome a variety of disparities. For example, ethnic minorities are more likely than
Caucasians to own a smartphone, to access the web via a phone, or to download health apps
in general. For this reason and more, they are forecasted to be the dominant platform for
communications and web access. This makes them a good platform on which to figure out how and why BITs
work on them. What do we know so far? Good news and bad news. I'm someone who likes to
get the bad news out of the way first, so let's jump in. While there are a lot of health apps and
many of them are geared towards mental health, they are really not getting downloaded.
They're really skewed to a very small number. Further, if they are downloaded, they are
really not getting used. In 2014 a group looked at a commercial diet app: of roughly
200,000 downloads, 86.4 percent were never used, 11 percent were tried a few times, and 2.6
percent used it 10 times or more. Now the 2.6 percent, these are people who were already engaged in
diet tracking, so really not the people who were having trouble, the people they were trying to
target to help them track their diet. Additionally, if you think about treatments for depression,
for instance, something that happens fewer than 10 times is probably not going to have a huge
intervention punch, and especially it doesn't give you a lot to monitor. That 2.6 percent using it 10 times or
more is a really, really big problem, because that means people are not getting the dosage, not
getting treatment, and they are likely not going to get better.
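The breakdown cited above works out as follows. This is a minimal sketch recomputing the cited figures from the talk's percentages, assuming the round 200,000-download total:

```python
# Breakdown of the commercial diet app's roughly 200,000 downloads,
# using the percentages cited in the talk (86.4 / 11 / 2.6).
total_downloads = 200_000
categories = {
    "never used": 0.864,
    "tried a few times": 0.11,
    "used 10+ times": 0.026,
}

for label, fraction in categories.items():
    # e.g. "used 10+ times: 5,200 users"
    print(f"{label}: {fraction * total_downloads:,.0f} users")
```

The striking point is the tail: only about 5,200 of 200,000 downloaders reached the 10-use mark.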
>>: [indiscernible] than just general apps?
>> Colleen Stiles-Shields: What was the first part, sorry?
>>: Are these statistics different than just random apps, where people are more or less using
[indiscernible] apps? Again, if you take a random set of 40,000 apps,
you will see some kind of distribution in usage. Do you know if it's similar to what you
see here? Is it better, worse?
>> Colleen Stiles-Shields: I think that's a great question and to be honest I don't think I'm well-informed
enough on the literature of the overall usage of apps across the board. What I can
say is that from the health app side, and specifically the mental health side, this study is
pretty representative, and the tail-off is just terrible. I wish I could speak to the broader
community about how that looks across entertainment apps. I'm going to do that after this
talk and check that out, so thank you. This is the bad news. Good news: I get to work with a
bunch of scary smart people who are part of a larger community of very, very smart people that
are trying to figure multiple ways of getting to this problem, of how I can get people to use the
apps more, get the treatments and get the dosage. As a psychologist, I am personally
interested in addressing this problem by seeing how relationships may be able to help people
use BITs more. The solution boils down to two main approaches. The first one is to harness
human connection with helping professionals, or connected health, for instance, whether that's
a known person or an unknown person the user is being connected with. Secondly, we can
harness the human connection that might be seen within the interactions with the app. Let's
first discuss the role of human relationships with BITs in apps. In 2011 Mohr and colleagues
proposed the supportive accountability model to provide a framework for what the evidence
was saying, and the evidence was saying that adherence to BITs is benefited by human support.
The human support provided by a helping professional, which I'll call a coach for right now,
enhances adherence to prescribed behaviors in the BIT through a few ways. First, the coach
must be seen as demonstrating behaviors consistent with bond, so liking, trust and respect.
And must also be seen as someone that's benevolent and having the necessary expertise for
their role. Further, the coach must complete various specific behaviors. For instance, framing
the relationship as one with reciprocity, so the patient can expect very specific benefits from
the coach if they both do what they agreed to do. What is agreed upon must be done so
collaboratively defining very clear goals and expectations from the start. These goals are clear
to the point that the patient also gets why they're happening and how they're related to what
their overall treatment goal is. Finally, it's got to be clear that performance monitoring will
occur and it's not to wag a finger if someone doesn't really do what they're supposed to do.
The coach can really help the user to use the BIT to its full potential so it's clear from the
beginning that the coach is going to see what the user puts into the BIT. Taken together, the
idea is if they're seen a certain way and they perform certain actions, accountability will
increase. The supportive accountability model predicts that adherence is enhanced by these support
variables provided by the coach. However, this relationship to adherence is believed to be
mediated by the user's motivation. If the user has really high motivation, they probably need
less communication from their coach. But if low, they probably need a lot more. The tricky
thing is conditions like depression, we expect that people are going to start with a low
motivation and hopefully with symptom improvement get to a higher motivation. But if you
have any experience in treating mental health disorders, you know that it's really not a linear
relationship from this step to this step. It's a lot more of a tango going on of two steps forward
and one step back. This mediation really needs to be monitored carefully so that, based on where the
motivation is, communication can adjust appropriately and the overall adherence
improves. To understand this idea a little bit better, let's look at a couple of ways that this has
been put into place. Taking examples from treating depression, the most common treatment
for depression is actually through antidepressant medication. As most people go to their
physicians for medication, primary care is therefore the de facto treatment site for the
treatment of depression. But it has really bad outcomes. Only 20 percent are typically
symptom-free after eight months of treatment. That's not really a good average. The reason
this happens is usually attributed to poor adherence from the patients. They are really not
taking the medications the way they're prescribed and when they were prescribed.
Additionally, physicians have this communication barrier where they do not follow-up to
optimize the dose and they are missing a lot of information from the patient. A target for
harnessing supportive accountability in the treatment of depression in primary care is the
existing helping relationship between the patient and their physician. For this purpose CBITS
has developed MedLink which is designed to increase adherence and accountability for people
with depression, initiating a new antidepressant medication. It does this by bolstering the
reciprocity in the existing relationship between the patient and the physician. Since it's an
already existing relationship, we start with the assumption that the bond and some trust in the
expertise is already there, but will be really bolstered by increasing communication and
transparency between the two. The MedLink system does a lot of really cool things, but I am
going to be keeping it at a high level as to what supports the accountability model. On the
patient end they are provided a MedLink app and a cellularly enabled pillbox which tracks that
their daily medication has been taken and when. The data from the pillbox as well as data from
weekly assessments that are conducted through the app provide information to the patient
about if and when it might be recommended that they reach out to their provider.
Additionally, it's made very clear to them what data their provider will be getting from this
entire system. That's something in usability testing that patients were really responsive to
because they noted a lot of failure points in the relationship. When they get into the office
they kind of don't really remember what the major problems with their meds were when they
were at home. Or they give really inaccurate data. Or there's just the experience a lot of us have
had: you go into your primary care doctor's office and it's very clear they have a very short
amount of time to be with you, and that's hard to overcome sometimes, and you get that feeling
of no, no, please sit and wait and listen to all of my problems. They were really excited that
their physicians would get this data and know that it was accurate without them really having to do
that themselves. On the physician side, the data is collected and combined into this physician
information view which you see, and it includes depression scores over time, side effect
information as well as adherence. Mary?
>> Mary Czerwinski: So when do they take the PHQ-9? Do they do that every day?
>> Colleen Stiles-Shields: Every week. Every week, and we adjust the PHQ-9 to a one-week time
frame as well, so it's not collecting two weeks of information every week. The depression
scores over time, the side effects, the adherence, these are all decision points that would really
help the physician to figure out what contact they want to have with their patient, if they want
to change dose, if they need to provide some psychoeducation about common side effects, for
instance, and how long they might expect that to be. This is really helpful information, but
usually it's something the physician just doesn't have, especially if the patient isn't calling to
schedule an appointment in the first place. With the information that both parties have, the
expectation about each role becomes much more explicit. There's a lot more communication
and there's a lot more follow-up of what each side is doing.
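The PHQ-9 mentioned here is a standard nine-item depression measure: each item is scored 0 to 3, summed to a 0 to 27 total with conventional severity bands. As a rough illustration of how depression scores like the ones on the physician view could be computed (this is not MedLink's actual code, just a sketch of standard PHQ-9 scoring):

```python
# Sketch of standard PHQ-9 scoring: nine items, each rated 0-3,
# summed to a 0-27 total with the usual severity cutoffs (5/10/15/20).
def phq9_severity(item_scores):
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    if total < 5:
        band = "minimal"
    elif total < 10:
        band = "mild"
    elif total < 15:
        band = "moderate"
    elif total < 20:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

print(phq9_severity([1, 1, 2, 1, 0, 1, 2, 1, 1]))  # (10, 'moderate')
```

Tracking this total week over week is what produces the depression-over-time curve the physician sees.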
>>: If they are taking the PHQ-9 every week and you see measures of side effects, they must be taking
a much longer questionnaire.
>> Colleen Stiles-Shields: Good question. The weekly questionnaire includes the PHQ-9, and then
they get the PRISE and, if necessary, the FIBSER as well, so we get overall information
about what domains the side effects are in and what impact they're having, so we can
track that change. It's expected that in the first four weeks you're going to get some side
effects going.
>>: But is there any issue with them taking such a long questionnaire or are they okay with it?
>> Colleen Stiles-Shields: Thus far it's been okay. We have tried to streamline it as much as
possible and, forgive me, I want to say the FIBSER is the optional one based on
how they answer the PRISE. If the side effects aren't too impairing, they are just taking two
rather short ones. We've built into the assessment model that, based on how they answer, they
get specific didactic content afterwards. We tried to note that as a benefit to the patient: the
information that you're giving is not only going to go to your physician, but you're also going to
get some real-time feedback about it. So far so good. The people are answering it, and we also
have benefited from the fact that in the research lab we have the research assistants that are
also following up pretty carefully so this isn't real world yet. We'd like it to be but it's still too
early. The beauty of research assistants, they'll follow-up and make sure that things are
answered. Applying this model to this problem sounds pretty good. I'm painting a nice picture.
What does the data tell us so far? In an ongoing field trial of MedLink, all patients were taking
their meds for the first four weeks which is one of our assessment time points. The average
adherence was 84 percent with five taking it every day, two missing about once a week and one
person missing more than once a week. The exciting thing about that one person is because of
the data that we're getting through the app, we know that they were not totally adherent
mostly because of the side effects. The reason that's exciting is that if that's something
that's communicated to their physician, that's a really adjustable problem, whether it's through
psychoeducation or just playing with the meds a little bit. So 84 percent, in school that would
be a B, like how does that sound? Compared to what usually is happening at primary care
within the first four weeks only about 40 percent are still taking their meds. The field trial is
eight people, but we're pretty excited about that B so far. Additionally, the depression severity
is improving likely because they are getting the treatment and follow-up that is really
recommended for their depression. A further reflection of this is that the communication and
follow-up between the patient and physicians is so much higher than is typically seen in the first
four weeks: seven of our eight patients had some type of physician follow-up, with six office
visits, four phone calls and two messages through the EMR system. Again, this level of
communication is really unheard of in primary care between the physician and the patient, so
we're pretty excited. We'll learn a lot more when the randomized control trial starts which is
pretty soon, but this is exciting initial field data that the supportive accountability model is
connecting the patient with a known provider and it seems to be helping adherence which
therefore helps overall treatment goals. What about harnessing supportive accountability
without having a known person to hook them up with, which is usually what we are dealing
with? To examine that we'll consider the other way of treating depression which is therapy.
The therapy with the greatest evidence base for the treatment of depression is cognitive
behavioral therapy or CBT. As the name suggests, CBT targets depression through interventions
related to changing cognitions and behavior to ultimately impact mood. BITs, therefore, take
intervention principles from CBT and attempt to change thoughts and behaviors over time.
However, if this is happening face-to-face a person would be paired with a therapist and that
would be the known provider. But the BIT is sort of taking the role of the therapist. What
human support can we therefore harness? That's where the term Coach comes back in and
that's where we are harnessing support with an unknown person that's new to the BIT user.
Coaching for BITs became common due to improved adherence associated with coach
involvement, which really started to be noticed in web-based BITs. The coach and user
meet over the phone. There's no face-to-face interaction whatsoever. They then have weekly 5
to 15 minute phone calls that can be replaced with messaging after a certain point, or
messaging can occur with the phone calls as well. The coach's role is to use the supportive
accountability model to increase adherence and help the user to use the BIT in the best way
possible for them to get the treatment. As coaches are not actually therapists, this also
increases generalizability in practice, because it really increases the number of people who
might be able to fill this role, which is one of the barriers we are trying to target. We just don't
have enough behavioral change specialists. It's exciting that it overcomes this barrier and has
positive outcomes. The other thing that we are starting to notice in the data is that having this
expectation that there's going to be this phone call or this message once a week really helps
establish usage patterns. It's similar to how you might cram in your homework right
before class. People are really logging into the BIT to use it immediately before or after their
coaching call. What we've noticed is the most successful users are the ones that get into this
pattern. People may even do a Friday Saturday Sunday pattern and the coaching contact is
really increasing that accountability to kind of get them into a pattern even if it's just around
that coaching call.
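For reference, the adherence percentages reported earlier (for example, the 84 percent over the first four weeks of the MedLink field trial) boil down to days-taken over days-expected from the pillbox log. A minimal sketch, with entirely hypothetical dates and a made-up set of missed days:

```python
# Hypothetical pillbox log: the pillbox records an opening each day the
# medication is taken. Adherence = days taken / days expected.
from datetime import date, timedelta

start = date(2016, 1, 4)
days_expected = 28  # the first four weeks of treatment
# Assume the pillbox logged an opening on every day except these four:
missed = {date(2016, 1, 9), date(2016, 1, 16), date(2016, 1, 23), date(2016, 1, 30)}
taken = [start + timedelta(days=i) for i in range(days_expected)
         if (start + timedelta(days=i)) not in missed]

adherence = len(taken) / days_expected
print(f"adherence: {adherence:.0%}")  # prints "adherence: 86%"
```

The same per-day record also tells the physician which days were missed, which is what made the side-effect explanation for the one less-adherent patient visible.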
>>: [indiscernible]
>> Colleen Stiles-Shields: Technically, right now the coaches we are using are therapists, but
we are working to extend this model so that it could just be BA-level people that are very trained
in the BIT, so they have the expertise, which is part of the model, but don't necessarily do CBT.
It's really the BIT that is going to do the CBT.
>>: [indiscernible]
>> Colleen Stiles-Shields: Great question. The first call is usually a little bit longer. It's a little
bit more of a walk-through and getting some history gathering. Otherwise, kind of going back
to the model, the calls usually start off by the coach acknowledging what they notice in terms
of the usage. Say they used like a thought tracker, the coach might make a comment like I saw
that the thought tracker was used really well this week and you were really using those
principles, or you are using the thought tracker and I'm a little confused about some of the data
you were putting in and I want to make sure that this is really a helpful tool for you. Following
the model it starts off with really the expertise and the accountability and showing what each
side has done and what work they've done towards their goals. But then there is a little bit of the
bond as well: how are you doing, how are things going. And then, unless there is some explicit
question, it's goal setting for next week, because it's really trying to teach them certain
skills from CBT. What skills are you really going to practice and put to use with the BIT in your
life this week?
>>: [indiscernible] the difference without this app in having the therapist calling and being on
the same frequency, will they achieve the same result? Or does the app, which actually finds the
real users, really matter? Because if you are just calling to ask him how he takes the
pill, he probably won't tell you a lie, right?
>> Colleen Stiles-Shields: I think it's interesting and a lot of the questions, like going back to our
MedLink app, there's a lot of information they want relayed in a certain way, but they said that
it's easier sometimes to tell an app than a human calling and checking in. It's a different…
Did I get to your question? Okay. Moving back. Another thing that we've noticed is that when
we create BITs without coaching, there seems to be a request for it in the user feedback that
we receive, particularly around the guidance in using the BIT effectively. We've gotten
responses like I would like more leading to help walk me through the features and to really
know when I should be using it, or I'd like more discussion questions or encouragement on how
to use it and I could really use more instruction. People kind of want that extra human support
to help them use the tool the way it was designed to and give them that support and
encouragement. Additionally, when coaching is provided, bond is very often noted as
well as improved confidence in using the BIT to its full potential. We get things like it's really
nice to talk to the coaches. Having the interaction was helpful. I was able to read through the
notes with the coach to figure out feature functions later on my own. This is really the key. We
want people to feel empowered to have the information so they know how to use it and know
that they can do this on their own in real-life.
>>: [inaudible]
>> Colleen Stiles-Shields: Right. To coach them to use the app. As much as our coaching data
so far has been related to web-based interventions I also wanted to provide a snapshot of what
we have seen so far in our coached apps. We have an app called Mobilyze which we did an eight
week pilot for in the last year with a sample of eight. Mobilyze has didactic content through
text video and audio, interactive tools, notifications and feedback. Through the eight week trial
we had six participants complete all eight weeks. Six out of eight, and you go, well... but I keep
thinking back to that 2.6 percent that used the health app more than 10 times. This is a different
ratio than we were expecting based on what the data was already giving us. Further, it goes
back to the supportive accountability model really bolstering the use: people are actually
getting a higher dose of treatment and that seems to be impacting their depression. We had
significant improvements on every measure of depression that we had at pre and post. Again, it's
only eight people. We're going to do much larger trials, but so far so good. The take away here
appears to be that human support has some real benefits. Bolstering BITs with human support
with a known person like a primary care physician or an unknown person like a coach,
harnesses supportive accountability through bond and legitimacy to increase explicit
accountability on both parts. This seems to increase adherence, which means participants are
getting a higher dosage of treatment, which appears to be benefiting their depression, which is
really our goal in giving out these BITs. The stars of my presentation are actual CBITs people,
and that's why, even if it's not quite a great quality picture, I wanted to mention that in this slide.
Now we're going to shift gears a little bit. The second way of improving adherence and use of
BITs delivered through apps is to design them in very specific social ways. Before I get to explaining
what that means, let me back up and briefly explain what led to this idea and why we think it's
important. We're actually going to back up to me being a wide-eyed 22-year-old fresh out of
undergrad and starting my first clinical internship. The first time I sat down with a client and
really felt this strong mutual connection, internally I was like wow. This is a really cool feeling
and I need to find out how to make this happen more. That really cool feeling is a lot better
known as therapeutic alliance. Therapeutic alliance was defined by Bordin in the '70s as the
relational bond formed between a therapist and client through collaborative work, mutual trust
and establishing and reaching treatment goals. Usually this construct is broken down into
goal agreement, task agreement and bond. These probably sound like familiar constructs
because of the supportive accountability model we were just talking about, and that's actually not a
coincidence. Beyond feeling cool, therapeutic alliance is really an important factor in mental
health treatment. For example, in CBT as we discussed a few minutes ago, which has the
strongest evidence base for the treatment of depression, therapeutic alliance is given this title
of a critical nonspecific factor. What that means is the level of therapeutic alliance that is found
between the therapist and client when they are doing therapy is an important predictor of how
well the therapy does, regardless of really what is said in the room. If you pick up a CBT book,
they mention that therapeutic alliance is great because it's this important critical
nonspecific factor, but it's not one of the specific change tactics of CBT. It's not targeting the
cognition. It's not targeting behavior. It's simply the feeling between the person and their
therapist. Again, it's kind of broken down to: are we really agreeing on my overall goals
for treatment? Are we really agreeing on the tasks that are going to get us there? And do I
really feel like I like this person and this person respects me? Strangely, it took us
a really long time to figure this out. Bordin described this in the 1970s, and as you know,
Sigmund Freud had been around a lot longer. We came to realize that sometimes how the
person feels about the therapist is actually much more important than what the therapist is
saying. Okay. Therapeutic alliance is a big deal. More exciting, because I am interested in both
tele-health and BIT work, recent findings also indicate that therapeutic alliance can be effectively
established without visual cues. In fact, when we compared therapeutic alliance ratings for 162
people who were randomized to face-to-face CBT to 163 people who were randomized to
telephone CBT, where they never laid eyes on their therapist or saw a picture of their therapist, we
found that there were no significant differences in therapeutic alliance from both the therapist
and client perspectives in both treatments, and that these similar ratings were consistent
across treatment duration. The even more exciting thing is that without those visual cues the
people who were on the telephone CBT, their level of therapeutic alliance was just as important
in predicting how well they would do in their treatment as it was for the face-to-face folks. This
is the stuff that I geek out about because I get really excited about the human connection and
the fact that you can take someone, I could call someone in Utah, for instance, and never lay
eyes on them and never see a picture of them and they could feel just as connected and feel
that we are just as on board with the goals and tasks as I would be with someone who is down the
street from me in Chicago and coming into an office. It's really exciting. At the same time we
were finding these outcomes in tele-health, I was also conducting usability testing for a
MedLink app. Despite me sitting down next to each participant that came in and really asking
how well the app is working and how well it's meeting their needs based on how we designed
it, their word choices were really interesting. I got a lot of: oh look, it totally knew what I was
thinking, or, I feel like this just knows me and tells me what I didn't get or remember from my doctor.
They really never said anything about the designers or CBITs or even me. I didn't design it,
but I was the human there representing that. It was a lot more about the app's choices and
behaviors, which kind of struck me. I started feeling more like I was getting information about a
being rather than an app, and it led me to two questions. First, do people have sort of a social
response to technology, and if so, is there a way that we can harness that and build a
relationship that is similar to therapeutic alliance to increase the use and outcomes? Those
questions lead straight to those two men Byron Reeves and Clifford Nass who in 1996 literally
wrote the book on the many studies the group had conducted regarding a phenomenon
referred to as the Computers as Social Actors, or CASA, paradigm. I'm sure a lot of you are familiar
with this work. I, however, was brand new to it, so bear with me as I walk you through my
thought process a little bit. CASA holds that people treat computing technology as social actors
and conform to the same social behaviors they would with other people even though they
would think it unreasonable to be polite, for example, to a piece of machinery. To give an
example of this, can I ask for two brave volunteers for a very quick poll? Those two hands went
up first. Who wants to go first? Okay. How am I doing with the presentation so far?
>>: You are doing fine.
>> Colleen Stiles-Shields: Oh, thank you. How am I doing?
>>: Great.
>> Colleen Stiles-Shields: Great, cool. Thank you. All right. Glad to hear it. That's the kind of
feedback I enjoy getting. However, it's likely that there was some social influence going on
here. One, it made me feel good which I appreciate, but it also made it less likely that I would
burst into tears in front of everyone which makes everyone in the room a lot more comfortable.
But it's likely that had I left the room and someone else had walked in and said how is Colleen
doing, that we would see a little bit more variance in the responses. A little more like "okay," or
"she talks kind of fast and has an accent," instead of the great feedback that I just got, thank
you. And it's not that you were lying. It's more that you were being polite, which is something that
we are culturally normed to do. And in fact, social psych has found this: when a person asks
about herself, people will give more positive responses than when a different person
asks the same questions. And because people are less honest when a person asks about
herself, the answers will be more homogenous than when a different person asks the same
questions. It turns out that this applies to computers. To evaluate this, people with computer
experience were brought in to a lab and told they would be given a tutorial on a computer and
then after the tutorial receive some type of evaluative test. Because the program was
being evaluated or in development, they would then evaluate how the computer did on the
tutorial. People were brought in. They sat down at computer 1 and computer 1 gave them 20
facts such as according to a Harris poll, 30 percent of all American teenagers kiss on the first
date, scandalous, I know. And then people were asked to rate on a scale how much
information they felt they knew about that fact. Based on that they were led to believe that
the computer would give them special facts after that, but really it gave the same info in the
same order afterwards. People then completed the test of how well they understood the facts
and computer 1 told the person how they did based on how they answered the test and then
computer 1 said by the way, I think I did a great job. Please evaluate me. Half of the samples
stayed on computer 1 and evaluated it and half the sample walked 3 feet over and rated on an
identical computer that we'll call computer 2. Those that stayed on computer 1 rated it as
significantly more friendly, significantly more competent and the answers had significantly less
variance. Every single subject was debriefed afterwards and asked would you ever change your
answers to make a computer feel better. Very confidently, everyone said no, that is silly, I
would definitely not do that. When they put this out to the world they got a lot of feedback
and a lot of concern about confounders. Very briefly I'll go through a couple. I could talk up
here all day about it. They replicated the study and instead of having people go to computer 2
for half of the sample, the other half of the sample went down the hall and answered paper and
pencil questionnaires, same thing. They replicated this again and instead of computer
1 delivering all of this information via text, it had a specific voice. Part of the sample stayed at
computer 1 and rated that voice and how well it did. Part 2 switched over to computer 2 that
had a different voice to give the ratings and another part, again, did the paper and pencil, same
thing. Be it text or audio, people were changing their answers to be a little bit nicer to the
computer. This research and many, many other studies conducted by CASA researchers have
been conducted using a methodology that simply takes social psychology findings, looks for the
word human or person and replaces it with whatever computing technology they want to test
out. For example, what we just went through would be when a computer asks a user about
itself, the user will give more positive responses than when a different computer asks the same
question. Because people are less honest when a computer asks about itself, the answers will
be more homogenous than when a different computer asks the same questions. This
methodology has been effective even though participants, even those in the polite-to-the-computer
group, frequently describe that they really think it's weird to follow social norms with pieces of
technology. This group explains these mismatches as basically an evolutionary problem. Technology
has evolved so much faster than our brains that our default is to be social. It's sort of like when
you watch a scary movie. You get really freaked out but you're not in the danger and neither
are the actors except for some career choices, but that's what they link it to. CASA isn't the
only social attribution to computing technology theory, but it is the most cited of this
phenomenon. You may also have heard of it as the media equation or other things listed at the
bottom of the screen. CASA has been demonstrated using the methodology we just discussed
in over 100 studies with the computers and this work has expanded to many other types of
computing technology and characteristics of it such as voices. It is of note that this
phenomenon seems particular to computing technology. For instance, we don't really have
social interaction with our dishwasher. We don't really hold other pieces of technology
responsible in anger inducing situations. You don't really get mad at your car when you're in
traffic, but you do get mad at your computer when you get an e-mail you really don't want.
There's something about the computing factor that brings out the social nature that technology
in general doesn't. Yes?
>>: Has someone studied the difference between different age groups? I might imagine that we
are old enough that we developed our social skills without computers and then maybe use
computers in the same way we use humans, whereas my son is used to communicating with
his device and he might have different expectations. Is that
something that has been studied?
>> Colleen Stiles-Shields: Yes. I'm actually about to get to that. If I don't fully address that
bring it back up again. Perfect timing. CASA demonstrates that people make the social
attributions, but beyond the evolutionary explanation, they don't really have the data to
explain more of why this happens and under what circumstances it could be promoted, which is
what I'm interested in with this whole therapeutic alliance idea. Another theory, which developed
out of moderators of CASA that started to be identified, such as computer experience, is the
computing technology continuum of perspective, or CTCP. It conceptualizes a continuum in
regard to how technology is viewed. The continuum is anchored at one end by a locally simplex
perspective that views technology as a machine that's programmed, controllable and alterable
by humans. The other end is a globally complex perspective, which views technology as an
autonomous entity that is capable of exerting some type of control or influence on a person's
life. The hypothesis and research findings really indicate that most people are somewhere in
the middle of that. However the CTCP and CASA literature provide us a lot of insight into
characteristics of both the technology and user that can really start to shift where someone is
on that spectrum, which is something that I really want to do. I was really excited to find this.
Very quickly, because again, this may be old hat to a lot of people in this room, some tech
characteristics they have noticed that improve task performance and social attributions.
Things like distance: people remember things a lot better if something on the screen is
closer, so an agent might come up close on the screen. It's a fine line, because much like in
real life, if someone got really close to your face to explain something, that would creep you
out. A lot of these have fine-line instances. Additionally, flattery and praise from a computer,
even if unwarranted, really increase feelings of likability and
socialness towards it. Politeness goes a long way. Additionally, expertise: if a computing
technology that's going to do some type of didactic or tutorial at the beginning establishes its
expertise, people will remember that task better and they will do better in the future and will
perform it more quickly than if it's believed the technology is more of a generalist. Additionally,
putting people on teams: it seems that having that camaraderie with the technology really
bolsters that feeling like I'm having this social interaction with this thing and we're all in it
together. There are a lot of things they also found that might interfere with it. People do not
like being criticized in general, but this goes with computing technology as well, specifically if
it's unwarranted. Computers really lose their credibility if they criticize you when you really
shouldn't have been criticized. Additionally, boastfulness goes really badly in forming that social
cue. It's an interesting finding, because if you want to really establish that the computer is an
expert, it needs to not be braggy about it. There are a lot of things that depend. The gender of a
voice really changes what social characteristics people attribute and how they might perform.
Different voices really aren't lumped together; different personalities are ascribed to them.
Even the presence, so Clippy in Microsoft Word, they have found that if Clippy is in the corner
really not doing anything and we assume the user isn't really noticing them, women actually
have some social inhibition when they are performing tasks and they are being observed by
that agent. That's good to know depending on what type of thing you're going to be building.
Some user characteristics have come out, too. Women have been found across the board to be
more social towards computing technology and to make more attributions. Additionally, higher levels of education also seem to move you more towards the simplex end of the
model. And context of use: for example, I'm in Seattle and I have no sense of direction and
I've never been here before. In a context like that I might actually attribute a lot more
intelligence to my app because it knows where I am and I don't. However, I might not see it as
intelligent when I'm at home and I know exactly where I am. Then there are a lot of psychosocial
factors, which, as a psychologist, I'm really interested in, because these are factors that we're
hoping to change and have the BIT respond to as people change. Anxiety and
depression are incredibly comorbid, so it's important to know this: people with social anxiety
are more likely to be much more honest with computing technologies and also make more
social attributions to them, so they are really building that relationship and telling it more
things than they would a human. Being more neurotic, viewing one's locus of control as
external and outside of oneself, which, again, is very common in
depression, or having lower self-esteem or lower self-efficacy really plays up how much social
attribution people make. Again, I was very excited to stumble upon this literature because a
lot of it turned out to be very applicable to what we are trying to design for BITs. Aside from
being very exciting and telling us a lot, how can we actually apply these findings to BITs to
increase the attributions and possibly the relational factors to BITs and users over time? We
can start from what we already know in terms of developing BITs. Current
psychological theoretical models, while efficacious in face-to-face treatments, really haven't been
sufficient to inform the design and development of BITs. Mohr, Schueller and colleagues
proposed a BIT model to provide a framework for this need. I'm going to present a very high-level overview of this, but I highly recommend the paper in JMIR, or speaking to Doctor
Schueller afterwards if you have more questions about it. In developing a BIT, this model
breaks down the theoretical aspects and instantiation to the four w's and hows one needs to
figure out to design a BIT. The theoretical includes the why. Why do you want to make this
BIT? What are your clinical aims to get people to get out and do things, to improve their
depression? Really, what are you trying to do and how much do you need to engage with it to
get that clinical aim? The conceptual how: what change strategies are you going to choose? For
CBT a lot of times people do go the route of either the cognitive change or the behavioral
change, like what tactics are you going to use to get to the why that you want. What I'm
actually much more interested in is the instantiation side. If we are thinking about social
attributions, why the instantiation and not the theoretical? That goes back to that whole
critical nonspecific factor thing in therapeutic alliance. We know that therapeutic alliance
impacts how well someone does without actually being a specific change tactic. It's really
more the intangible feel that is established. That's why I'm much more interested in the
instantiation side and what's being expressed there to the user. The instantiation side question
we want to consider is the what. The what is: what elements are you going to engage with
your user? Are you going to have messaging? What type of information delivery mechanisms
are you going to use? Will there be logging, those types of things? As well as the technical how
which really gets to the characteristics of the BIT. What mediums of delivery are you choosing,
like text versus audio versus video? And then finally, the when, which includes the workflow. If
you think about a BIT that is going to have to be around for a while, the user is going to have to
interact with it for a while to really enact that change, and the workflow of
what they are getting when is really important. The social attributions literature provides BIT developers with
design implications for the instantiation of BITs. For example, for the what of a BIT I
might want to develop, developers might consider some of the following from the CASA and CTCP
literatures. If things are liked more when they give praise, even when it's unwarranted, I might design a
BIT that has a lot of positive reinforcement and praise through notifications or banners on home
screens. Or if there are inhibitions on tasks when females are observed by agents, then I really
want to make sure that the agents do not feel present when they are going to be conducting
certain tasks or learning new tasks. If there is a positive bias when they are evaluated on the
same platform, I'm really going to want to launch assessments and evaluations in a different
app or platform; especially if we are tracking how those assessments change over
time, I want to make sure they're as accurate as possible. If it's liked when they demonstrate
intelligence, then for my feedback reports or visualizations I'm probably going to use some
passive data collection or do some analytics to provide some information that does not directly
result from the user's input, so the BIT kind of knows stuff without being told. For the how:
masculine voices are perceived as friendlier and more influential, so I might
decide that my introduction information will be delivered via masculine-voiced audio. If
experts are believed and liked more and more information is retained, I'm really going to make
sure that the user knows the expertise of my BIT, but if it's liked less when it praises itself, then I
don't want it to be boastful about this. If a BIT is liked most when its personality adapts to the user's
personality, then I'm going to do some tailored information delivery and some messaging styles
to match the user's personality. For the when: again, if we're going to do the intelligence piece, I'm
probably going to come up with some really interesting algorithms, like when a certain behavior
is noticed, a different element is launched. Or I'll make sure that it's an event-based progression
rule, because that seems more targeted and planned than an open format. If it's important that
they demonstrate emotions and caring, then maybe I'm going to set something that
launches a notification noting concern or caring when it notices something
that's different from the user's pattern. Or if it's liked more when it demonstrates that it's acting out
of a plan, then there are going to be really clearly defined frequency and deployment progressions.
Then the user will really feel that the BIT has a plan of where they're going. It's kind of getting
into the goal and task agreement aspect of it, like we've got a plan and we're in this together
and this is how we are getting there. These design considerations work through the lens of the
BIT model and the social attributions literature and inform the ultimate design of the BIT. So
we've got that. But what we need is a framework for understanding and conceptualizing this
research so we can generate hypotheses and evaluate them to further our understanding;
social attributions and their impact on BITs don't stop at the creation of the BIT. To better
understand the effect through usage, we've developed a social attributions to BITs model.
We've already covered the design piece. The design piece is in purple, but beyond this we
know that BIT developers will consider their intended users when making decisions. However,
we don't really get to pick out who uses our BITs. They usually don't fit in that well-defined
box that we make for them of what we expect. Therefore, the variability and intensity of
expected user characteristics among those who actually use the BIT will inform the ultimate level. This is
the information we went through that's informed by the literature which goes into the ultimate
design of the BIT, but then really, what the actual user versus the intended user brings to the
table, these two together will create the social attributions to our BIT initially. It's expected
that much like a relationship, a therapeutic relationship, these levels will change over time and
particularly considering that we want the user to have improved symptoms, that's a change
that we hope will happen, but we also expect that the BIT itself will change based on the
responses and machine learning that we incorporate within it. The first half of the model is
really the initial and then we expect there to be some changes much like there is in therapy that
reinforces that. Ultimately, that leads to changes in usage, which, again, is the
ultimate goal. People use it more, get the dosage they need, and hopefully they start
feeling better.
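[Editor's note: the "when a certain behavior is noticed, a different element is launched" rule mentioned for the when could be sketched roughly as below. This is a minimal illustration only; the function names, the activity measure, and the z-score threshold are hypothetical and not part of the talk.]

```python
# Hypothetical sketch of an event-based rule: when passively sensed
# behavior deviates from the user's recent pattern, the BIT launches a
# caring notification; otherwise it continues its planned progression.

from statistics import mean, stdev

def deviates_from_pattern(history, today, z_threshold=2.0):
    """Flag a day whose activity falls far below the user's recent baseline."""
    if len(history) < 3:
        return False  # not enough data yet to define a pattern
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat baseline: any change is a deviation
    return (mu - today) / sigma > z_threshold

def next_action(history, today):
    """Pick the BIT's next move: express concern, or follow the plan."""
    if deviates_from_pattern(history, today):
        return "notify: 'Noticed you've been less active. Everything okay?'"
    return "continue planned lesson progression"
```

A real BIT would of course tune the baseline window and threshold empirically, but the shape of the rule, noticing a deviation and responding with an element that expresses caring, is the point.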
>>: [indiscernible]
>> Colleen Stiles-Shields: Right, we have also definitely seen that when people are doing pretty
well there is less use.
>>: That must be problematic for your research.
>> Colleen Stiles-Shields: It is problematic for our research, but if that's something that we find,
like in therapy when someone starts doing better, you'd go to biweekly sessions, and then you
go to monthly sessions and then it's kind of like okay. You're good. You got the stuff. If that is,
if the BIT is developed in such a way that they have that social relationship with it where they
feel like it is there and they got that safety net that they can go back to, then maybe that
lowering in usage over time is actually really reflective of what we want and expect from face-to-face treatment.
>>: [indiscernible] get out of their face as they get better and then remind them.
>> Colleen Stiles-Shields: As they get better, yeah.
>>: [indiscernible]
>> Colleen Stiles-Shields: Cool.
>>: But if you are measuring the success of your app, more or less usage is not necessarily the
metric that says whether it's useful or not.
>> Colleen Stiles-Shields: Right. Ultimately, we want to see if the depression scores went
down. That's the metric we want.
>>: [indiscernible]
>> Colleen Stiles-Shields: Exactly. So far what we've seen with the really short term BITs, 4 to 8
weeks, ideally, we would like someone to hold on for 4 to 8 weeks, but a lot of times that
doesn't happen. In those short times there does seem to be a little bit of a dosage effect
because at the beginning is really when you are going to get the big punch. How long this
arrow goes in time is yet to be defined. The take away from the second step of harnessing the
human side of apps is that we know that people make social attributions to computing
technology. The literature is there to show us this. But we can take this information and use it
to inform our design decisions to work towards improving the usage adherence to apps for
better outcomes. In the future, we would like to continue to explore and refine both models.
Both the supportive accountability model and the social attributions to BITs model. A few
field trials are underway or have recently wrapped up some of which we've seen today and
some larger RCTs are coming where we're hoping again to really focus on some of this. But one
specific future project we have is hoping to incorporate both of those. I've got an F31, which
is a research training grant from the National Institutes of Health, to complete a
project whose first part is a usability phase and second part is a small pilot trial. I will be
taking elements of apps that are designed more for behavioral activation, the "B" part of CBT,
and cognitive therapy, the "C" part of CBT. We really don't have much data on how those
work up against each other in an app, which is very interesting because they are really the go-tos
for treating depression. In usability testing I am hoping to kind of Frankenstein the most
usable app that we can make for both of those and those designs will ultimately inform the
apps that will be tested in those small pilot trials. We'll see how the BA and CT apps go against
each other and against a control group. Coaching will be involved in the trials so that will
further help refine and understand the supportive accountability model. We're also hoping to
track the social attribution levels throughout treatment and start really testing this theory out
and seeing how it works. I'm getting towards the end of time and I am at my thank you page
where not all of my names are popping up, which is a fun thing that happens when you're in
front of a lot of people on camera. Ultimately, I would like to very quickly thank CBITs and the
many, many people that contributed to this work, as well as the funding from the
National Institute of Mental Health and, most importantly, thank you all for being here and
taking time out of your day to be a part of this talk. [applause]. I didn't have any questions pop
up here so I'm excited to see a hand.
>>: Have you also gotten any sort of deeper research or study into game design, specifically,
even for mobile apps that have a stickiness factor to it? People come back and back. Have you
done any sort of looking into why certain things are sticking and how that can be applied to this
area?
>> Colleen Stiles-Shields: I haven't too much so I'm really excited you brought that up. I would
like to look into the stickiness of BITs more.
>>: [indiscernible] I mean that's a whole field into itself that they really are very thoughtful
about how they're engaging their users and giving them rewards along the way to keep them
going.
>> Colleen Stiles-Shields: Yes, definitely, very cool. Thank you. Any other questions?
>> Mary Czerwinski: All right let's thank Colleen.
>> Colleen Stiles-Shields: Thank you. [applause]