Detection of Effort and Malingering

Detection of Effort and Malingering: State of the Art
Jason Gravano, M.S.
6/30/14
Speaking euphemistically?
• ‘‘cognitive effort,’’ ‘‘mental effort,’’ ‘‘insufficient effort,’’ ‘‘poor effort,’’
‘‘invalid effort,’’ or even ‘‘faked effort.’’
• response bias, disingenuous, dissimulation, non-credible, malingered,
or non- or sub-optimal effort
Bigler, 2012
Outline
• Introduction to Symptom Validity testing
• Definitional Issues
• Assumptions
• Measures of cognitive “Effort”
• Measures of self-report validity
• “Diagnosing” malingering
Validity of the Evaluation: the crux of the issue?
• We do not measure cognition or symptoms directly, but rather the behavioral output of cognition (test scores, performances, self-reports, etc.)
• NP tests require the participant's best effort/cooperation for our inferences
about their cognition to be valid
• Likewise, self-report of symptoms may not be taken at face value
• Thus, in instances where the evaluation informs clinical or legal decision
making (arguably, all cases), validity checks become necessary
Bigler, 2012
Validity of the Evaluation: the crux of the issue?
• Especially because research has shown repeatedly that experienced
experts are inaccurate in identifying valid versus invalid ability
performances from mere observation of behavior or test scores
• E.g.:
• Ekman, O’Sullivan, & Frank, 1999
• Faust, 1995
• Faust, Hart, Guilmette, & Arkes, 1988
AACN 2010
Symptom Validity Testing:
• Methods and instruments designed to detect exaggeration or
fabrication of neuropsychological dysfunction, as well as somatic and
psychological symptom complaints.
AACN 2010
Detection
• Symptom validity tests
• “effort measures”
• Any measure that is normally performed (and in some cases, perfectly performed) by a
wide range of patients who have bona fide neurologic, psychiatric, or developmental
problems
• Have to try to perform poorly?
• Performance validity/Response validity
• Negative response bias
AACN 2010
Keep it consistent, people.
• Invalid performances/report should manifest as inconsistencies:
• In ability
• Consistency within a measure (hard vs. easy)
• Consistency within a cognitive domain
• Consistency within a battery
• Consistency between two or more testing sessions
• Consistency of NP results with real-world performances
• Consistency with expected profile of disorder (& severity)
• In self-report of symptoms
• More on this later…
“Effort” Measures
…now with more political correctness!
Effort measures
• Typically rely on extremely low ceilings
• Measures that are so easy, "anyone can do them"
• Inference: If they don’t do well, they must not be “trying”
• If performance is poor, raises questions about the validity of your
examination
• Can be used as supporting evidence for malingering, especially when
performance is below chance
Effort measures
• Embedded measures (usually found after publication to have utility)
• Examples: CVLT-II Forced Choice; Reliable Digit Span (a sketch of RDS scoring follows this slide); WMS Faces; WCST "other" categories; TMT time
• PROS: face valid, take no extra time, relevant to the validity of the specific measure, hard(er) to coach
• Stand-alone measures (designed purposefully for evaluation of effort)
• Examples: TOMM (Test of Memory Malingering); Word Memory Test; Portland Digit Recognition Test
• PROS: created for this purpose, likely better research base
AACN 2010
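To make the embedded-measure idea concrete, here is a minimal sketch of the commonly used Reliable Digit Span computation (longest forward span plus longest backward span at which both trials are passed). The protocol data are hypothetical, and no cut score is given on purpose: cutoffs should come from current research on your population, not from this sketch.

```python
# Minimal sketch of Reliable Digit Span (RDS) scoring, assuming the usual
# Wechsler Digit Span format of two trials per span length.
# RDS = longest forward span with BOTH trials correct
#     + longest backward span with BOTH trials correct.
# Cutoffs vary by population; consult current research, not this sketch.

def longest_span_both_trials(trials: dict[int, tuple[bool, bool]]) -> int:
    """trials maps span length -> (trial 1 correct, trial 2 correct)."""
    passed = [length for length, (t1, t2) in trials.items() if t1 and t2]
    return max(passed, default=0)

def reliable_digit_span(forward: dict[int, tuple[bool, bool]],
                        backward: dict[int, tuple[bool, bool]]) -> int:
    return longest_span_both_trials(forward) + longest_span_both_trials(backward)

# Hypothetical protocol: both trials passed through 5 digits forward
# and through 3 digits backward.
forward = {3: (True, True), 4: (True, True), 5: (True, True), 6: (True, False)}
backward = {2: (True, True), 3: (True, True), 4: (False, False)}
print(reliable_digit_span(forward, backward))  # -> 8
```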
• Embedded measures can take advantage of floor effects:
• E.g., apply a cutoff set at the .05 level of a severe TBI distribution to a case of mild TBI
• Stand-alone cognitive effort tests
• Forced-choice
• Scoring significantly below chance is direct evidence of deliberate underperformance (see the binomial sketch after this slide)
• “invalid performance can be identified using thresholds that are well
above a level that is significantly below chance”
• Non-forced-choice
• evaluate random responding
• unrealistically slow or erroneous responding
• inconsistency of response patterns
AACN 2010
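To make the "significantly below chance" criterion concrete, here is a minimal sketch assuming a hypothetical 50-item, two-alternative forced-choice format; real instruments differ in length, and the validated cut scores referenced above sit well above this binomial threshold.

```python
# Minimal sketch of the "significantly below chance" logic for a
# two-alternative forced-choice effort test. The 50-item format and the
# example score are hypothetical, not taken from any specific instrument.
from scipy.stats import binom

def below_chance_p(correct: int, items: int, alternatives: int = 2) -> float:
    """One-tailed probability of scoring `correct` or fewer by guessing alone."""
    chance = 1.0 / alternatives
    return binom.cdf(correct, items, chance)

score, items = 14, 50
p = below_chance_p(score, items)
print(f"{score}/{items} correct: p = {p:.4f} under pure guessing")
# p < .05 here, i.e., performance significantly below chance, the kind of
# result treated as direct evidence of deliberate underperformance.
```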
• Bimodal distribution?
• Cut scores =/= chance performance
• Based on validation studies
Bigler, 2012
Effort testing studies
• Rogers (2008)
• (a) simulation;
• (b) criterion groups (‘‘known’’ groups);
• (c) differential prevalence designs
AACN 2010
Differential prevalence design
• Assumes that response bias varies as a function of context (forensic,
clinical, incentives offered, etc.)
• Useful in initial validation studies
• Lacks objective accuracy standards (base rates are not 100%)
AACN 2010
Simulation
• Random assignment of non-clinical participants to
• simulate or fake
• Control
• Often first, proof of concept studies
• Lacks generalizability
AACN 2010
Criterion “known groups”
• A priori criteria (often an "established" or "gold standard" measure + Slick criteria for malingering) to define groups
• Criteria chosen to have a low false positive rate
• Occasionally take out individuals with "borderline" performance on the gold standard test… so how does it perform across the board?
AACN 2010
Issues Bigler (2012) raises about SVT research:
• Most SVT studies do not have IRB approval
• No systematic lesion/localization studies of SVT performance
• Implication: there may be some lesions that disrupt SVT performance
• Neurogenic drive & motivation problems?
• Psychological/Psychiatric factors may influence SVT performance
• Cut scores will misclassify
• Is Type I or Type II error worse? (see the base-rate sketch after this slide)
• Is the entire eval invalid?
Bigler, 2012
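A short sketch of why cut scores inevitably misclassify and why base rates matter. The sensitivity, specificity, and base rates below are hypothetical round numbers, not published figures for any particular SVT; the point is how quickly positive predictive value drops in low-base-rate settings.

```python
# Minimal sketch: how a cut score's hit rates interact with the base rate.
# All input values are hypothetical, chosen only to illustrate the arithmetic.

def predictive_values(sensitivity: float, specificity: float, base_rate: float):
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives (Type I)
    fn = (1 - sensitivity) * base_rate        # false negatives (Type II)
    tn = specificity * (1 - base_rate)        # true negatives
    ppv = tp / (tp + fp)  # P(invalid | failed the cut score)
    npv = tn / (tn + fn)  # P(valid | passed the cut score)
    return ppv, npv

for base_rate in (0.10, 0.30, 0.50):
    ppv, npv = predictive_values(sensitivity=0.80, specificity=0.90, base_rate=base_rate)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
# At a 10% base rate, even .90 specificity means roughly half of all
# "failures" are false positives; weighing Type I vs. Type II error
# therefore depends heavily on the evaluation context.
```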
Resources for effort measures & embedded validity measures
• Sweet, 2009
• Boone, 2007
• Hom & Denney, 2002
• Larrabee, 2007
• Vickery, Berry, Inman, Harris, & Orey, 2001
• Babikian, Boone, Lu, & Arnold, 2006
• Heinly, Greve, Bianchini, Love, & Brennan, 2005
• Iverson & Tulsky, 2003
• + the other readings from this week
AACN 2010
Consensus Recs: Effort
• Use tools designed (or re-designed) to detect sub-optimal effort
• Use both stand alone and embedded measures
• Use multiple measures across the eval to account for varying effort (see the sketch after this slide)
• Encourage optimal effort (through rapport)?
• Make sure the normative samples are appropriate
• Not all effort measures are created equal. Do some homework to see
which ones perform best with your population.
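One reason the recommendations call for multiple measures: requiring failure on two or more indicators is far more specific than any single cut score. The sketch below assumes statistically independent measures and a hypothetical 10% false-positive rate per measure; real effort measures are correlated, so treat the numbers as illustrative only.

```python
# Minimal sketch: chance of a credible examinee "failing" k or more of n
# effort indicators, ASSUMING independence and a hypothetical 10%
# false-positive rate per indicator (both are simplifying assumptions).
from math import comb

def p_at_least_k_failures(n_measures: int, k: int, fp_rate: float) -> float:
    return sum(comb(n_measures, j) * fp_rate**j * (1 - fp_rate)**(n_measures - j)
               for j in range(k, n_measures + 1))

print(p_at_least_k_failures(5, 1, 0.10))  # ~0.41: one isolated failure is common
print(p_at_least_k_failures(5, 2, 0.10))  # ~0.08: two or more failures is much rarer
```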
Symptom Reporting
Self-Report
• Disorder-specific inventories; symptom inventories
• E.g., PTSD, depression, etc.
• When evaluating the validity of self-report, clinicians should, if possible, include measures that possess an internal means of assessing response bias.
• In general, disorder-specific inventories and symptom checklists that
do not contain effective means for determining response bias and
possible response invalidity should not be used in isolation.
AACN 2010
Self-Report
• General personality inventories
• Strong validity research & design
• Keep up with research, appropriate populations
• Don't "interpret" clinical scales when validity is at issue
• Be conservative in assuming intentionality (without supporting evidence)
AACN 2010
Review: Validity Scales on the MMPI-2
• Traditional validity scales
• L – "Lie": presenting oneself favorably
• F – "Infrequency": elevated with overreporting ("faking bad") and random responding
• K – "Defensiveness"
• Fb – infrequent items at the end of the test; random responding at the end of the exam?
• VRIN – Variable Response Inconsistency (consistency)
• TRIN – True Response Inconsistency (consistency)
• Other
• Infrequency Psychopathology scale (Fp; Arbisi & Ben-Porath, 1995)
• F-K (Gough, 1950); see the sketch after this slide
• Obvious minus Subtle Scale (O-S; Greene, 1991; Wiener, 1948)
• Dissimulation scale (Gough, 1954, 1957) for the MMPI-2 (Ds2; Dsr2)
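For reference, the F-K (Gough) index listed above is simply the F raw score minus the K raw score. The sketch below uses hypothetical raw scores and a purely illustrative threshold; published cutoffs vary widely by setting.

```python
# Minimal sketch of the Gough F-K dissimulation index: MMPI-2 F raw score
# minus K raw score, with larger positive values in the overreporting
# direction. The cutoff and raw scores below are hypothetical.
def f_minus_k(f_raw: int, k_raw: int) -> int:
    return f_raw - k_raw

ILLUSTRATIVE_CUTOFF = 15  # not a recommended value; consult current norms

score = f_minus_k(f_raw=26, k_raw=8)
flag = "possible overreporting" if score > ILLUSTRATIVE_CUTOFF else "no flag"
print(f"F-K = {score}: {flag}")
```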
MMPI-2
• Other, newer scales that may be important:
• FBS-r on the MMPI-2-RF
• RBS – Response Bias Scale: predicts insufficient effort on neuropsychological evaluation (Gervais et al., 2007a)
• HHI – Henry-Heilbronner Index: exaggeration of somatic or "pseudoneurological" symptoms (Henry et al., 2006, 2008)
Butcher 2003 on FBS
• Originally termed “Fake Bad Scale”
• The FBS was developed by identifying items rationally on a content basis, utilizing unpublished frequency counts of malingerers' MMPI responses and the authors' subjective observations of personal injury claimants (Lees-Haley et al., 1991).
• Item pool contains 43 items that include somatic symptoms, unusual beliefs, and deviant attitudes.
• Scores for the FBS were then compared between:
• 20 personal injury claimants who appeared "notably credible,"
• A group of 25 personal injury claimants who appeared clearly to be malingering,
• 16 medical outpatients asked to simulate emotional distress caused by a motor vehicle accident,
• 15 medical outpatients asked to simulate emotional distress caused by a toxic exposure,
• 36 medical outpatients asked to simulate emotional distress caused by stress on the job (Lees-Haley et al., 1991).
FBS
• The cumulative FBS literature suggests that the scale continues to differentiate groups ("malingerers" or "over-reporters" vs. valid responders) as well as, and under certain conditions better than, other MMPI-2 validity scales (including all of the F-family scales).
• In particular, two factors that are particularly relevant to practicing
neuropsychologists, effort status and TBI, substantially moderated
FBS magnitudes.
Nelson, 2010
FBS
• Early criticisms of FBS
• The basis for assigning individuals to the notably credible group and the
malingered group was not explained.
• Original norming was questionable (private practice)
• No T scores, just mean data
• Cutoffs were not intended for mental health arenas
• FBS may not measure a single dimension
• Items contained on the FBS are generally not those that are rare or
infrequently endorsed
• Low overlap with other validity scales
• FBS does correlate with psychopathology…
• Gender specific cutoffs?
Butcher 2003
Psychological/Somatic
• Inference: Symptom exaggeration, symptom promotion, fabrication
• Evaluation:
• In depth diagnostic eval
• Onset, course, symptom picture, comorbidities, treatment efforts, response
to treatment
• Inconsistent, contradictory data noted & documented
AACN 2010
Onset
• What is the typical age of onset?
• When was the first symptom (e.g., after the arrest?)
• Personal history (abuse, neglect, etc.)
• Family history
AACN 2010
Symptom Presentation
• Keep an eye out for atypical symptoms
• Layman’s understanding of the disorder
• Atypical combo of symptoms
• **Distinct clinical presentations and/or prevalence of
psychopathology may differ among some cultures and ethnic groups
(Draguns & Tanaka-Matsumi, 2003).
AACN 2010
Course
• Consistency of symptoms
• Expected trajectory
AACN 2010
Response to Treatment
• Have they responded to treatment? Why or why not? When? Prior to
litigation/compensation?
AACN 2010
Consensus Recs: Symptom Reporting
• Use self-reports that include validity measures
• Make sure the normative samples are appropriate
• Evaluate response bias for ability measures and for symptom report separately
Malingering
Malingering
• In considering the diagnosis of malingering, the clinician is explicitly
making a determination of intent: more specifically, a determination
of intentionally exaggerated symptoms and/or intentionally
diminished capability with the goal of obtaining an external reward.
• To do this:
• the context of the evaluation
• overall presentation of the examinee
• background information
• history information gathered during interview
• observations, neuropsychological tests, and measures of response bias
AACN 2010
Effort =/= Malingering
• Failed Effort measure =/= malingering, just as passing one =/= not malingering
• Assessing effort is one dimension of the evaluation of Malingering
• equating ‘‘poor effort’’ with malingering is an oversimplification
• It takes effort to fake!
• The process of detecting malingering is one in which consideration is given to
multiple dimensions of behavior that differentiate malingering from other
entities
• factitious disorder
• conversion disorder
• cogniform disorder
• somatoform disorder
AACN 2010
Diagnostic classifications for Malingering
• DSM
• NP operationalized systems
• Bianchini et al., 2005
• Slick, Sherman, & Iverson, 1999
• Reliable classification indices; operationalized
• Guidelines recommend incorporation of multiple sources of
data/information:
• AACN Practice Guidelines for Neuropsychological Assessment and Consultation; AACN, 2007
• Specialty Guidelines for Forensic Psychologists; Committee on Ethical Guidelines for Forensic
Psychologists, 1991
AACN 2010
“Diagnosing” malingering
• Context of the evaluation
• Clinical vs. forensic
• Gain, secondary gain, external gain, financial gain
• ***MUST HAVE SOMETHING TO GAIN***
• Necessary, but far from sufficient
• Examples: financial reward, compensated time away from work,
avoidance of military duty, relief from legal consequences, and
obtaining medications/narcotics
AACN 2010
Assessing intent?
• The best way to assess intent is by ruling out other possible conditions
(e.g., psychological, neurological, developmental) that might otherwise
explain the suspicious behavioral presentation and by requiring the
presence of multiple improbable performances and/or atypical
symptomatic complaints.
• The differential diagnosis of malingering is a clinical process that:
• (1) requires careful analysis on the part of the examiner,
• (2) is based on objective criteria,
• (3) incorporates indicators that have established classification accuracy, and
• (4) combines clinical judgment with the results of scientifically validated measures in this process.
AACN 2010
The possibility of malingering
• (1) disparity between real-world observations and either test
performance or self-report,
• (2) inconsistency between type or severity of injury and test
performances,
• (3) inconsistency between an individual's behavior when he/she is aware of being evaluated versus when unaware of being evaluated, and
• (4) inconsistency across serial testings that cannot be explained by an
underlying neurological process or known psychiatric condition.
AACN 2010
The assessment
• (1) review of records;
• (2) psychosocial history obtained by interviewing;
• (3) observations of the claimant's behavior during the assessment period;
• (4) consideration of information from collateral sources, such as significant others, employers, etc., when available and appropriate;
• (5) formal psychological/neuropsychological testing;
• (6) response validity assessment procedures; and
• (7) surveillance video/audio, when available.
• ‘‘Does what I am learning about this claimant make sense in light of the
putative claim, the diagnosis, history and totality of the presentation?’’
AACN 2010
• Unlike psychometric indicators of invalid performance, objective
standards for the evaluation of inconsistencies between the clinical
presentation and evidence of capacities outside the clinical setting
may not exist.
• In the absence of such standards, a determination that an
inconsistency indicates invalidity should be cautious and conservative
and in some cases may be left to the trier-of-fact.
AACN 2010
Consensus Recs: Malingering
• Consider & investigate “real world” performance
• Be aware of potential litigation cases
• Be aware of discrepancy between test results and known-group
performances
• Evidence for/against malingering should be considered incrementally
Consensus recs: Practice
• List SVIs in reports, but be cautious of test security
• Consider serial evaluations to discriminate unrealistic performances
(e.g., lack of practice effects)
• Use collateral informants (when appropriate)
• Don’t rely on the manuals and cut scores: use up-to-date research
AACN 2010
Questions for the Class!
• If you see invalid performance in testing, how should you address it?
• Could we use and interpret effort measures in disorders affecting intention or motivation (e.g., subcortical disorders)?
• What do you do if you suspect there is genuine neurological disorder
& malingering?
Consensus recs: Research
• How to interpret multiple effort measurements
• Populations that “fail” effort measures despite effort
• Deliberate intent to feign
• Future ability tests should have validity indicators
• Pediatric samples?
• Rule out or rule in?
AACN 2010
The end