Structured Outcomes Tools - Children’s Homes Quality Standards

Structured Outcomes Tools
Chris Taylor MSc (Psych)
Updated July 2015
Introduction
The Children’s Homes Regulations (2015) place considerable emphasis on the impact of care
provided. Ofsted inspectors are interested in a child’s experience of care and their progress,
and homes now have to provide evidence of how they help children and young people make
progress in their education, health, and social, emotional and psychological well-being, and
how children and young people are prepared for the future. It is therefore necessary to think
about what we mean by progress, and how it is measured.
There is a risk in seeing outcomes as somehow separate from other aspects of care, whereas
what is needed is an integrated approach to assessment and evaluation. A good outcomes
framework builds confidence in presenting a robust argument for how a home makes a
positive difference in the lives of the children living there. Outcomes measures also help
monitor the quality of practice issues that relate to the Quality Standards. As well as
providing evidence for good impact, outcomes measures are part of what we need to be able
to think reflectively about practice and the quality of care provided.
Baselines
Outcomes that are set for the whole population (e.g. achieving five GCSEs or above) may not
be so useful for showing progress for disadvantaged groups (such as children in care) because
the starting point for these children may be very different. It is increasingly recognized that
progress and achievement need also to be thought about in terms of “distance travelled”, and
the Regulations and the inspection framework place considerable emphasis on being clear
about a child’s starting point. For this to be possible it is important to establish good
baselines (where the child is before your service has any impact) and to be clear about how
the home defines and measures children’s experiences, progress and outcomes. As
expectations for practice to be evidence-based increase, so thinking around what is evidence
develops. Caregivers’ subjective sense of their child’s progress is not without merit, but it
carries a greater degree of credibility when backed up by structured assessments and
outcomes measures.
Assessment is a process, not a single event. An initial assessment is a systematic process to
acquire an accurate, thorough picture of a young person’s strengths and weaknesses, in order
to support their development to meet existing and future challenges, and should take place
before deciding the activities or interventions provided by the home. Good initial
assessment focusses on the child’s needs and their strengths. The presence of positives is not
the same as the absence of negatives, and an individual’s “signature strengths” should be
identified and employed towards their progress.
An initial assessment assesses levels of need and establishes baselines against which progress
can be measured. It informs initial plans to build on strengths and work on deficits. Initial
assessment supports a review, at some later point, to evaluate progress. In simple terms, this
Plan-Do-Review approach allows for a purposeful, evidence-based approach to the care
being delivered in the home. Throughout this process, thought needs to be given as to who
will provide feedback to the young person. It is their assessment, and there are strong ethical
objections to carrying out assessments without the subject’s consent.
No initial assessment will reveal everything about a person. It is important to acknowledge
what is unknown, and to put information into context, including typical child
development, race and culture, and recent events in the young person’s life. It is also
important that there is a structured assessment of risks.
Structured assessment of risk
Risk is here defined as a combination of the estimated probability that an event will occur and
consideration of the consequences if it does. Because risk is not a simple dichotomy (risk
vs. no-risk) but a continuous variable (degrees of risk), it is important to use as scientific a
method as possible for identifying the probability of target behaviours occurring, as well as
setting up good monitoring processes.
A structured approach to risk assessment is more likely to reduce harm and provide
defensible practice than unstructured approaches that often rely on “gut instinct”.
Unstructured approaches are prone to bias. Typically, we are more likely to notice risks that
match our own subjective notion of what is “risky”. We also tend to overlook “regression to
the mean” (the tendency of extreme presentations to be followed by more typical ones), so
decisions made after a temporary fluctuation in how risky a young person appears are often
unreliable indicators of future behaviour. Additionally, we may have a distorted sense of risk
due to our tendency to give more weight to information that is familiar or salient. We also
have a tendency to misjudge risk based on estimates of what we already perceive the risk to
be, anchoring our prediction at previously defined points in our own thinking.
A more scientific approach follows a set structure, illustrated in the sketch after this list:
1. Identify specifically what the target behaviour is. Thus, we should avoid a catch-all
like “self-harm” and identify specific behaviours (e.g. cutting, swallowing batteries,
etc.) depending on what the person does. In this example, each self-harming
behaviour would need to be thoroughly risk assessed. New behaviours may also
emerge, which again need to be addressed. This approach ensures the accuracy and
relevance of any risk assessments are maintained.
2. Examine the reasons for the behaviour. This includes exploring and understanding
the young person’s own account of triggers, function and frequency of the target
behaviour, those of staff and other adults working with the young person, and relevant
reports. It is also important to examine the relevant knowledge base of professional
training and published materials.
3. Identify factors that might increase or decrease the target behaviour. This includes
known triggers as well as changes in protective factors (e.g. staff changes) and
increases in challenges in the person’s life.
4. Estimate the probability of the target behaviour occurring and assess the
consequences that come from the event (the harm).
5. Assess the acceptability of the risk.
6. Take steps to reduce the probability of the target behaviour occurring.
7. Take steps to reduce the harm if the target behaviour occurs.
8. Specify how the assessment will be monitored and reviewed.
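A structured record of this kind can live in a spreadsheet or a care-planning system; how it is held matters less than that each step is captured for each specific behaviour. Purely as an illustration, the sketch below holds one such record in Python. The field names, the 1-5 scales and the probability-times-harm score are assumptions made for the example; nothing here is prescribed by the Regulations or the Quality Standards.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessment:
    """One record per specific target behaviour (step 1), not a catch-all category."""
    target_behaviour: str                 # e.g. "cutting" rather than "self-harm"
    reasons: str                          # the young person's account, staff views, reports (step 2)
    review_date: date                     # step 8: when the assessment will be reviewed
    triggers: list[str] = field(default_factory=list)            # factors that increase it (step 3)
    protective_factors: list[str] = field(default_factory=list)  # factors that decrease it (step 3)
    probability: int = 1                  # assumed scale: 1 (rare) to 5 (almost certain) - step 4
    harm: int = 1                         # assumed scale: 1 (minor) to 5 (severe) - step 4
    reduction_steps: list[str] = field(default_factory=list)     # steps 6 and 7

    def risk_score(self) -> int:
        """Illustrative score combining probability with consequences (harm)."""
        return self.probability * self.harm

    def is_acceptable(self, threshold: int = 8) -> bool:
        """Step 5: a crude acceptability check against an agreed threshold (assumed)."""
        return self.risk_score() <= threshold


# One record per behaviour; a young person may have several such records.
record = RiskAssessment(
    target_behaviour="cutting",
    reasons="Young person links it to distress after family contact; staff reports agree",
    review_date=date(2015, 10, 1),
    triggers=["family contact", "staff changes"],
    protective_factors=["trusted key worker", "evening check-ins"],
    probability=3,
    harm=3,
    reduction_steps=["agree a safety plan", "increase support around contact days"],
)
print(record.risk_score(), record.is_acceptable())
```

The value of the structure is that each specific behaviour carries its own probability, harm, mitigation steps and review date, rather than the young person carrying a single global “risk” label.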
Measuring change
Just as when assessing risks, there is a strong need to use some structured measures to
evaluate progress. There is also some advantage in using measures that are familiar in the
field and recognizable to commissioners and Ofsted inspectors. Consideration also needs to
be given to resources. Some structured outcomes measures require significant levels of
training and clinical expertise. Cost is also a consideration; licences can be expensive and not
all suitable measures are free to use. There are important ethical issues in the incorrect use
of outcomes measures, as well as significant business risk. The ICHA outcomes study group
identified a range of suitable measures; although some of those mentioned in their initial
report require specific levels of clinical training, others are free to use and are suitable for the
qualification levels found in residential childcare. It is also worth noting that CAMHS will
recognize the usefulness of measures they use in their own services, some of which are free
to use and do not require high levels of clinical training. The summary provided below is not
exhaustive. It focuses on measures that meet some of these criteria. Before making a
decision, it is important to consider how data will be collected and processed and how reports
will be written. Whatever measures are chosen, they need to be practicable for use in your
organization. The Child Outcomes Research Consortium (CORC) have some online training
videos (http://www.corc.uk.net/resources/implementation-support/training-videos/) and
several measures can be downloaded from their website
(http://www.corc.uk.net/resources/measures/).
Any assessment is to some degree a product of, and dependent on, the environment in which
the assessment occurs, and organizations will choose how to control for this, perhaps by
using a range of qualitative, quantitative and psychometric data and “triangulating” between
these. Ultimately the Registered Manager, or their wider organization, will need to make
decisions about how change is measured.
Summary of possible measures
Several structured measures that can show distance travelled are briefly summarised below,
from which a home might pick two or perhaps three. In selecting measures it is important to
have something that is achievable as well as useful, and to bear in mind that assessment is an
intrusive process, which should only be carried out for the benefit of the person being
assessed. This is only a brief summary, and additional information is available for all the
measures discussed here.
SDQ (Strengths and Difficulties Questionnaire) is used in CAMHS and was among the
measures identified by ICHA. It is a well-validated 25-item screening questionnaire for
childhood psychological function. SDQs are completed whilst in placement by care staff,
teachers (if applicable) and the young person, and can be scored on-line. On-line scoring also
provides a useful framework for reporting.
As the SDQ is sensitive to change in both strengths and difficulties, it is repeated every
three to six months. SDQs are good at detecting conduct, hyperactivity, depressive and some
anxiety disorders, but are poor at detecting separation anxiety or phobias. Goodman cautions
against over-interpretation of scores and suggests that if results “seem wrong, they probably
are.” Copies of the SDQ and more information are available from http://youthinmind.com/
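Because the SDQ is repeated, its scores lend themselves to simple before-and-after reporting. The short sketch below is one illustrative way of holding repeated total difficulties scores and showing the change from baseline; the dates, informant labels and scores are invented for the example, and in practice the on-line scoring already produces a report.

```python
from datetime import date

# Repeated SDQ total difficulties scores for one young person (illustrative data only).
# Lower scores indicate fewer reported difficulties; the informant is recorded because
# carer, teacher and self-report versions are completed and scored separately.
sdq_scores = [
    {"date": date(2015, 1, 10), "informant": "carer", "total_difficulties": 22},
    {"date": date(2015, 4, 14), "informant": "carer", "total_difficulties": 19},
    {"date": date(2015, 7, 20), "informant": "carer", "total_difficulties": 15},
]

baseline = sdq_scores[0]["total_difficulties"]
latest = sdq_scores[-1]["total_difficulties"]
change = latest - baseline   # a negative change means fewer difficulties reported

print(f"Baseline {baseline}, latest {latest}, change {change:+d}")
```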
BERRI (Behaviour, Emotional well-being, Risks to self and others, Relationships, Indicators
of developmental or psychological conditions) was also identified by ICHA. It is a
comprehensive assessment of needs that is completed as a baseline and then repeated three
monthly. Although there is at present little research to establish its validity, it is intuitive and
straightforward to use and requires no specialist training. BERRI identifies recent life events
for context and provides a score based on frequency and difficulty across five domains. A
scoring and reporting system is available.
The GBOs (Goal Based Outcomes) measure is used to identify what parents and children want to
achieve from their contact with a service and measure how far they have achieved their goals.
Goals are identified in free text, and the respondent rates how close they are to each goal on a
10-point scale. After setting baselines for each goal, the respondent (often a child, but this could
be a parent or carer – just be clear about whose goals they are) is usually asked at six-month
intervals to rate how close they are now to these goals. In CAMHS services, goals could
include 'Getting through a whole day at school' or 'Coping with my child's tantrums', but the
same approach can be used for placement goals. Setting clear goals helps a placement feel
purposeful, and GBOs allow for progress to be noted, even when problems remain.
Changing scores over time are indicative of progress being made, and scores can be
represented graphically. Duncan Law has written a useful guide to GBOs
(http://www.ucl.ac.uk/ebpu/docs/publication_files/GBOs_Booklet) and an Excel spreadsheet,
adapting GBOs for placement goals, is also available; download Placement Goal Based
Outcomes from http://christaylorsolutions.org.uk/publication/forms/.
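Because each goal is rated on the same scale at the baseline and again at each review, “distance travelled” on a goal is simply the latest rating minus the baseline rating. The sketch below illustrates this; the goal wording and scores are invented for the example, and the spreadsheet mentioned above does the same job with the added benefit of producing charts.

```python
# Goal Based Outcomes: ratings of how close the young person feels to each goal,
# recorded at the baseline and then at (roughly six-monthly) reviews.
# Goal wording and scores here are invented for illustration.
goals = {
    "Getting through a whole day at school": [2, 5, 7],   # baseline, review 1, review 2
    "Feel settled at the home":              [3, 4, 6],
}

for goal, ratings in goals.items():
    baseline, latest = ratings[0], ratings[-1]
    distance_travelled = latest - baseline
    print(f"{goal}: {baseline} -> {latest} (distance travelled {distance_travelled:+d})")
```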
NCBRF (Nisonger Child Behaviour Rating Form – Ohio State) is a standardized instrument
for assessing child and adolescent behaviour. Professionals working in the mental health
field are welcome to copy and use the NCBRF for clinical and research purposes. It is
available for parent and teacher responses, and usefully, is available in a version suitable for
Learning Difficulties. Overall scores are generated for Positive Social behaviours and for
Problem behaviours. More information from http://psychmed.osu.edu/ncbrf.htm.
ICHA are keen that some form of resilience scale is adopted. The RS (Resilience Scale) is
widely recognized as the best instrument for the study of resilience in adolescents. The RS is a
25-item questionnaire that takes about five minutes to complete. Although the RS can be
administered by a range of professionals, a licence ($150) and a user guide, which includes a
detailed 10-page explanation of resilience ($90.00), are required. Alternative measures for
resilience could include the Adolescent Resilience Scale (ARS) and the Connor-Davidson
Resilience Scale, which have acceptable credibility, although both are in need of more study
for adolescents.
There are over twenty versions of the Outcomes Star, which allows for a visual
representation of change in multiple domains. The Outcomes Star is said to both measure
and support progress for service users towards self-reliance or other goals. The Stars are
designed to be completed collaboratively as an integral part of key-working. Licensing is
needed (£330.00 - £660.00 per annum) and training is recommended for three people in each
team (£195.00 per head). More information from http://www.outcomesstar.org.uk/.
Conclusion
Ultimately you will need to decide what works in your organization. You may even wish to
explore measures other than those highlighted here. I’ve been selective and hope this is a
practical list. Many are measures I have used myself, and all of them I have seen in use and
found useful. It is important not to have too many and to think about how you will roll
out any you select, taking into account cost, practicality and usefulness.