Wisconsin Criminal Justice Study Commission
Summary of Commission meeting held on 5/23/07. The meeting was held at the
Wisconsin State Bar Center in Madison.
Commission Members present: Dan Blinka (acting chairman), Floyd Peters, Emily
Mueller, Suzanne O’Neill, Kelli Thompson, Fred Fleishauer, Judy Schwaemle, Penny
Beerntsen, Ken Hammond, Keith Findley, Gerry Mowris, Nannette Hegerty, Michael
Smith, Jerry Buting, Michael O’Hear
Also present: Lou Wilcynski (Assistant Administrator, Division of Law Enforcement
Services, Wisconsin Department of Justice), Jerry Geurts (Director, Madison Crime
Laboratory)
Not present: Mike Malmstadt (chairman), John Charewicz, Enrique Figueroa, Scott
Horne, Bob Donohoo, Noble Wray, Cheri Maples, Steve Glynn, Gerard Randall, Bill
Grosshans
Staffed by: Byron Lichstein
The meeting began with Byron introducing two new members: Michael O’Hear, a
Professor at Marquette Law School who will be taking Thomas Hammer’s place, and
Judy Schwaemle, Deputy District Attorney of Dane County (2).
Presentation by Michael Saks
Next, the Commission heard a presentation from Michael Saks, a Professor of Law and
Psychology at Arizona State University (2). Saks began by showing a National
Geographic video about arson investigation. The video described several indicators that
for years were relied upon as proof that a fire was intentionally set rather than accidental
(2-6). These indicators, which were based on anecdotal observation and experience, have
now been disproven by scientific experiments. Scientists now agree that these indicators
do not reveal whether a fire was intentional or accidental. Saks stated that this process—
using scientific experiments to test assumptions—needs to be applied to many of the
forensic sciences (6).
Saks then switched from arson to “individuation” forensic techniques—those that
purport to determine whether a specimen from a suspect matches a specimen found at the crime
scene (6). Saks said he reviewed the first 86 DNA exoneration cases and found that
forensic science errors were the second leading contributing factor (behind eyewitness
error) (7). Saks noted that the few validation studies of forensic sciences (such as hair
comparison, bite-mark identification, and handwriting analysis) show a high rate of false
inclusions (7-8). Saks added that, for most of these techniques, there has been little
attempt to research the methods to test whether their assumptions are valid (8). (The
National Institute of Justice noted this problem in a recent publication.) Saks discussed
ongoing efforts to initiate such research (8-9).
Saks said the forensic individuation sciences are a two-step process: 1) evaluate whether
two things are alike, 2) evaluate the meaning of the fact that the two things are alike (10).
Saks discussed several specific cases of error in forensic individuation sciences—the Ray
Krone bite-mark case from Arizona, and the Brandon Mayfield fingerprint case (Madrid
train bombing case) (10-11). In the Mayfield case, not only was the initial examiner
wrong, but two peer-reviewing experts were also wrong.
Saks discussed how the second stage of forensic science individuation—evaluating the
meaning of a match—necessarily relies on “random match probabilities,” meaning the
likelihood that a person who did not contribute the crime scene sample would nonetheless match it (12).
Random match probabilities can be determined only through a statistical analysis of how
common the suspect’s characteristic is in the general population. Most of the forensic
sciences lack such population data. In the absence of data, some forensic analysts make
guesses about population statistics or even fabricate statistics (15). Saks showed a
videotape of such testimony by a microscopic hair analyst (15).
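To make the statistical point concrete, here is a minimal sketch (in Python) of the product-rule arithmetic behind a DNA random match probability. The loci and frequencies are invented for illustration, not figures from the presentation:

    # Illustrative random match probability (RMP) calculation under the
    # "product rule." The loci and frequencies are invented; real casework
    # draws them from published population databases.
    from math import prod

    # Population frequency of the suspect's genotype at each locus,
    # assumed to be inherited independently (hypothetical numbers).
    genotype_frequencies = {
        "locus_A": 0.10,  # 10% of the population shares this genotype
        "locus_B": 0.05,
        "locus_C": 0.08,
    }

    # RMP: the chance that a random person who did not contribute the
    # crime scene sample would nonetheless match at every locus.
    rmp = prod(genotype_frequencies.values())
    print(f"RMP = {rmp:.6f} (about 1 in {1 / rmp:,.0f})")

This arithmetic is possible for DNA only because published population databases supply the per-locus frequencies; for hair, bite marks, and handwriting, no such data exist, which is why any probability quoted for those techniques is at best a guess.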
Saks discussed “observer effects,” the phenomenon by which forensic examiners’
decisions are influenced by expectations and outside information (13).
Saks discussed the subjectivity involved in declaring a match in many of the forensic
sciences (14). This is in part because most of the methods do not rely on statistics or on
scientific experiments.
Saks said that extremely misleading testimony or outright fabrication by forensic scientists was
present in about 27% of the DNA exoneration cases (17). He said that this is possible
because most criminal defense attorneys—particularly those representing poor clients—
lack the resources and skills to challenge the evidence (17).
Michael Smith asked how common false negatives are (18). Saks said that data on
false negatives are less accessible because we never find out about them (19).
Michael O’Hear asked about crime lab accreditation (20). Saks said accreditation is a
good thing, but is not the answer to the problem, in part because the existing standards aren’t
high enough.
Hegerty asked whether training for arson investigators has changed since the time of the
Willingham case (20). Saks said it’s hard to know the general state of training
nationwide, although he noted that gains in knowledge in medicine have often taken as
much as a generation to completely take hold (20).
Blinka said that one lesson from the Willingham case is that the new information about
arson investigation was presented to the courts and the governor, but it didn’t prevent the
execution. This suggests that education for judges and lawyers is one important remedy
(21), to ensure that the system makes appropriate use of scientific knowledge.
Findley noted that arson investigation is ahead of many of the other forensic sciences, because
with arson the scientific community has become involved with research and guidelines,
whereas in other forensic sciences no research or scientific guidelines exist (22).
Presentation by David Faigman
The Commission next heard a presentation from David Faigman, a professor of law at
the University of California, Hastings College of the Law in San Francisco.
Faigman began by discussing the difference between validity and reliability. He said
validity deals with the question of whether we’re actually measuring what we think we’re
measuring, while reliability deals with consistency across different examiners (22-23).
As an example, he said that with polygraphs, different examiners are very consistent with
each other (meaning the test has good reliability), but accuracy rates (whether the
examiners reach a correct conclusion about truth-telling) are not particularly high,
meaning the test lacks good validity (23). This result (good reliability, bad validity) is
not surprising, because all polygraph examiners use the same methods, but the methods
aren’t scientifically sound (23).
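Faigman’s distinction can be made concrete with a small numerical sketch (the data are invented): two examiners applying the same method can agree with each other perfectly (high reliability) while both being wrong about the ground truth much of the time (low validity):

    # Invented data illustrating reliability vs. validity. Each tuple:
    # (examiner 1's call, examiner 2's call, the subject's actual state).
    cases = [
        ("deceptive", "deceptive", "truthful"),   # both wrong
        ("deceptive", "deceptive", "deceptive"),  # both right
        ("truthful",  "truthful",  "truthful"),   # both right
        ("deceptive", "deceptive", "truthful"),   # both wrong
        ("truthful",  "truthful",  "truthful"),   # both right
        ("deceptive", "deceptive", "deceptive"),  # both right
    ]

    # Reliability: how often the two examiners agree with each other.
    reliability = sum(e1 == e2 for e1, e2, _ in cases) / len(cases)

    # Validity: how often an examiner's call matches the ground truth.
    validity = sum(e1 == truth for e1, _, truth in cases) / len(cases)

    print(f"Reliability (inter-examiner agreement): {reliability:.0%}")  # 100%
    print(f"Validity (accuracy against truth):      {validity:.0%}")     # 67%

Because every examiner applies the same method, agreement stays perfect even when the method itself is unsound, which is the pattern Faigman described for polygraphs.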
Faigman said most crime lab oversight deals with reliability (making sure results are
consistent across examiners), but not validity (testing the accuracy of the underlying
methods) (23). Faigman said that the first part of his talk will assume that we have valid
methods and discuss how to make the application of those methods reliable. (After
lunch, he discussed validity.)
First, he discussed quality control methods for laboratories. He noted that California is
moving toward a state forensic science commission, which would have oversight of the crime
laboratories and regulate the use of forensic evidence in court (24). Such a commission
could utilize audits, proficiency testing, and inspections.
He also suggested that forensic scientists be separated from prosecutors and investigators,
so that they won’t view themselves as part of the prosecution team. Similarly, he
suggested that forensic scientists be shielded from other evidence in a case, so that their
conclusions aren’t tainted by information irrelevant to the forensic analysis (24). A
related idea is requiring “blind testing,” such that forensic examiners don’t know which
specimen comes from the suspect, and also don’t know the results of prior examinations
by other forensic scientists. Finally, another related idea is “evidence lineups,” such that
a forensic analyst is presented with fillers, to ensure that the analyst can distinguish the
correct specimen from other similar ones (25). Currently, most forensic analysis is more
like a “show-up,” the single-suspect eyewitness procedure that most courts have deemed suggestive.
Evidence lineups would be very easy with fingerprints or ballistics, because agencies already
have databanks with potential fillers.
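The evidence lineup proposal can be sketched as a simple procedure (the function, databank, and specimen IDs below are hypothetical, meant only to show the structure): the suspect’s specimen is shuffled among similar fillers, so the examiner does not know which specimen the investigation points to:

    # Hypothetical sketch of an "evidence lineup." The examiner compares
    # the crime scene sample against a shuffled set and is never told
    # which specimen came from the suspect.
    import random

    def build_evidence_lineup(suspect_specimen, filler_databank, n_fillers=5):
        """Return the suspect's specimen shuffled among similar fillers."""
        lineup = random.sample(filler_databank, n_fillers) + [suspect_specimen]
        random.shuffle(lineup)
        return lineup

    # E.g., fillers drawn from an existing fingerprint databank (as noted
    # above, AFIS-style databases already hold potential fillers).
    databank = [f"print_{i:04d}" for i in range(1000)]
    print(build_evidence_lineup("suspect_print", databank))

Only the administrator who assembled the lineup knows which specimen is the suspect’s, mirroring a double-blind eyewitness lineup.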
As to court-related reforms, Faigman suggested having more defense experts (27).
However, he noted that the typical criminal case does not allow enough time for the
prosecutor and defense attorney to thoroughly study and understand the scientific
principles and potential problems underlying a particular forensic technique (28). He
noted that there aren’t many defense experts in these fields, because most of the practitioners
come up through law enforcement ranks. He also said some jurisdictions do not provide
sufficient funds for defense experts (28).
Faigman said we need better education for judges and lawyers about the scientific
method, validity, reliability, and statistics (28). Law schools aren’t doing a good enough
job of teaching these subjects. Better education also includes strengthening the ethical obligations
of prosecutors relating to the use of certain forensic evidence. He suggested creating
specialization in public defenders’ offices, such that a public defender’s office has a core
group of scientifically literate attorneys who handle forensic science issues regularly (29).
Mowris suggested the possibility of creating specialization among judges, as well (30).
Faigman also suggested placing limits on what forensic scientists can testify to (31). He
suggested limiting forensic testimony to those tasks that have been empirically tested and
supported. This would more accurately reflect the reality that certain kinds of forensic
analysis have been tested and validated, while others have not (31). A related idea is
limiting forensic testimony to discussion of factual similarities between a suspect’s trait
and a crime scene trait, rather than conclusions about whether the suspect matches or
doesn’t match. This would reduce the effect of exaggerated or misleading statistics (32).
Other options for addressing exaggerated forensic testimony include jury instructions or
expert panels to assess the relevance and scope of proffered forensic testimony (32).
The Commission then took a break for lunch (32).
After lunch, Faigman resumed with the second part of his presentation—improving the
validity of forensic sciences (32). He began by stating there are essentially two separate
communities: forensic science practitioners and academic scientists. The two
communities historically have not worked together. Because most forensic sciences have
little or no application outside the criminal justice system, they haven’t been tested or
challenged in other marketplaces.
Faigman said DNA should be the gold standard for forensic science (33). First, DNA
analysis has random match probabilities, unlike the other forensic sciences. In addition,
DNA analysis originated from mainstream academic science in the late 1980s—its
central principles were debated and refined in scientific laboratories. The other forensic
sciences were applied in a law enforcement context but never studied in the academy
(34). Academic scientists ignored the practical forensic sciences because they were
non-theoretical. Conversely, practicing forensic scientists did not collect data or research
their disciplines because they were not trained as scientific researchers, but rather as applied
practitioners, and because they do not have the time to do research (34).
Faigman then said that the way to make forensic sciences more scientific is to find a way
to get academic scientists interested in studying the forensic sciences (35). The way to do
this is to make the research questions appealing to academics, and to have sufficient
funding. Faigman described an institute he and Saks are trying to put together to
facilitate academic research on the forensic sciences (35-37). He emphasized that
improving the forensic sciences will improve crime-fighting, just as the development of
DNA testing has (36).
Faigman then took questions from the Commission members (38). Schwaemle pointed
out that, just as the forensic sciences are split from the academic scientific community,
the same is true of forensic pathologists: Dane County has had difficulty getting people
from the medical school interested in performing autopsies (38-39).
Blinka then asked Jerry Geurts from the State Crime Lab to offer his thoughts. Geurts
said some of the issues discussed by Saks and Faigman don’t fall within the Crime Lab’s
ambit (39). He also said the Wisconsin Crime Lab started out in 1947 with an oversight board,
but the board was later dissolved when the Crime Lab was brought into the DOJ (39).
Further, the Crime Lab had a UW council that worked with the lab on academic pursuits
until the mid-90s, when the council was eliminated as non-essential.
Faigman discussed the conditions under which a forensic science commission could be
effective (40). He said he would want the commission to go beyond oversight and
evaluate validity concerns. Blinka asked what other states’ experience with
forensic science commissions has been. Byron said he could provide that information to
the members before the next meeting (41).
Buting asked Faigman to address non-crime lab forensic science testimony, such as
shaken baby syndrome, sex offender profiles, etc. (41-42). Buting also noted that
Wisconsin has one of the most liberal admissibility standards anywhere—essentially any
relevant expert testimony can come in (42). Faigman and Saks discussed whether
admissibility standards should be more stringent—whether Wisconsin should be a
Daubert state (42-45). Faigman and Saks both said that the extent to which a method has
been scientifically researched should figure into whether testimony based on that method
is admissible (43-44). Faigman said asking a particular forensic scientist whether his/her
method is “generally accepted” is not a rational way to decide admissibility—any
technician will claim his/her discipline is generally accepted.
Faigman suggested that, in addition to considering trial-level admissibility standards, the
Commission should also examine appellate standards of review (47). The current
appellate standard is “discretionary,” which means it’s very unlikely that an appellate
court will reverse a trial court’s decision on admissibility. Faigman suggested it would
make more sense for admissibility decisions on scientific evidence to be reviewed by
appellate courts without deference to trial courts, because the usual reasons for deference
to trial courts don’t apply (47). With scientific evidence, the questions of validity and
reliability of a technique can be reviewed by an appellate court just as easily as by a trial
court. Changing the appellate standards of review would also produce greater uniformity
in appellate decisions—currently, appellate courts sometimes end up affirming trial
courts that came to opposite conclusions (48).
Faigman and Saks said that a forensic science commission could publish reports
providing an opinion on the admissibility of various techniques (49). These reports
would not be binding on courts, but they could be advisory. Moreover, since no other
forensic science commissions are doing that kind of work, it could become a source of
prestige for the commission and institutions involved.
Blinka pointed out that plaintiffs’ lawyers in civil cases have opposed stiffening
admissibility standards for scientific evidence (50).
O’Hear asked Faigman to elaborate on ethical obligations of prosecutors in handling
forensic evidence (50). Faigman said there aren’t clear ethical guidelines for the use of
forensic evidence. He said a forensic science commission could examine that question.
Emily Mueller and Fred Fleishauer asked about court-appointed experts, and specifically
how a court can find one (51-52). Faigman discussed several options, including
contacting the American Association for the Advancement of Science (AAAS), which
helps judges find experts in a particular scientific discipline (52).
Findley asked Geurts how difficult it would be to implement blind testing for forensic
analysis at the State Crime Lab (52). Geurts said he thinks blind testing would be a good
idea, but that it raises practical questions (54). He said that, in large cases where the
crime lab does many different kinds of forensic analysis, it becomes very difficult to
insulate analysts from what other analysts have done. He also said that it can be difficult
to shield an analyst from the submitting agency, because in some cases the analyst needs
to communicate with the submitting agency about where a particular item was found and
why it needs to be tested (54). Findley asked whether the crime lab could have a
gatekeeper who would handle those kinds of threshold questions and then pass the
evidence on to the actual analyst without the extraneous information (54). Geurts said
there might be some possibilities there, but that you’d need a separate gatekeeper for each
forensic discipline. Geurts also noted that the crime lab increasingly uses robots and
initial computerized analyses such as AFIS (fingerprints)—this is one method for
implementing blind testing (54).
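Findley’s gatekeeper suggestion might look something like the following sketch; the field names and case record are invented for illustration. The gatekeeper handles threshold communication with the submitting agency and forwards only analysis-relevant fields to the analyst:

    # Hypothetical sketch of a crime lab "gatekeeper": strip a submission
    # of contextual details before it reaches the analyst.
    ANALYSIS_FIELDS = {"item_id", "evidence_type", "test_requested"}

    def gatekeep(submission):
        """Forward only the fields the analyst needs to run the test."""
        return {k: v for k, v in submission.items() if k in ANALYSIS_FIELDS}

    submission = {
        "item_id": "2007-0412-03",
        "evidence_type": "latent print",
        "test_requested": "comparison",
        "suspect_name": "J. Doe",                # withheld from the analyst
        "detective_notes": "suspect confessed",  # withheld from the analyst
    }
    print(gatekeep(submission))  # only the three analysis fields remain

As Geurts noted, a working version would need a separate gatekeeper for each forensic discipline and a channel for legitimate questions back to the submitting agency.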
Geurts also said the crime lab has tried to implement proficiency testing, in which they
attempt to send a test case to a particular analyst and evaluate the analyst’s performance
(without letting the analyst know that he/she is being tested) (55). Geurts said this
typically has not worked, because the analyst usually finds out that the test is fake (55).
Blinka said that the Commission should submit a list of questions to the crime lab, and
give them a chance to respond before the next meeting (55). The list should include
information about what kinds of analyses the crime lab does, and what the practical effect
of certain changes would be (56). He then said that, before the next meeting, the
Commission should also review information about what other jurisdictions are doing with
forensic science commissions (55). He also said we should consider what kind of
education we’re providing to prosecutors and public defenders on scientific/forensic
issues (56).
Faigman said that his list of priorities would be the following: 1) funding for defense
experts and training of public defenders in forensic sciences, 2) expanded discovery of
the information underlying forensic analysis, 3) education of lawyers and longer-term
research into the validity of certain forensic techniques (56).
Findley asked what the crime lab’s rules are on discovery (56). Geurts said they provide
all the information to the District Attorney, who then complies with discovery requests
(57). He said that, by statute, they’re prohibited from giving information to anyone other
than the submitting agency—whether the submitting agency is the prosecution or defense
(57). Mowris said that, in his cases as a defense attorney, anytime he’s had a question
about something the crime lab did, the prosecutor was willing to let him ask questions
directly to the crime lab (57). Buting said that the fault often lies with the defense bar because
they fail to ask for the underlying material and don’t know how to interpret it once they
have it (57). Buting said defense attorneys need expert help to interpret the material.
Findley and Schwaemle discussed what the discovery statute requires as far as material
underlying a forensic analysis (58). Schwaemle said almost any judge or prosecutor
would provide the defense with the notes and data underlying a forensic analysis (58).
Findley agreed, but said that, in the cases where the prosecutor or judge doesn’t agree, the
discovery statute should require disclosure (58).
Buting suggested that judges, prosecutors, and defense attorneys should have more joint
education sessions (59). Mowris said he raised that possibility with Chief Justice
Abrahamson when he was State Bar President, and that she thought much of the training
for lawyers wouldn’t be relevant to judges, and vice versa (60). However, Mowris said
that, on this topic, there might be more overlap.
Discussion of Eyewitness Identification Field Study
Ken Hammond then updated the Commission members on research into eyewitness
identification. He said the Wisconsin Dept. of Justice is working with five Wisconsin law
enforcement agencies that will participate in a national field study evaluating whether
sequential or simultaneous presentation is superior for lineups and photo arrays (61). The
study is a response to a recent report from Illinois suggesting that simultaneous presentation
is superior to sequential, contrary to prior scientific research. Hammond then asked whether the
Commission had any thoughts on how to protect the participating agencies who will be
asked to use a simultaneous presentation—he said some defense attorneys may claim that
those agencies are using a method that is not recommended by Wis. DOJ (61). Faigman
suggested an analogy to medical studies, in which the FDA monitors studies to determine
whether a particular drug is working, and, if the results strongly show that it is working,
the FDA can stop the study and provide the drug to those subjects who were getting the
placebo (62).
Findley suggested that the DOJ or the Commission could issue a statement saying that the
agencies doing simultaneous lineups are doing so because of doubts about whether sequential
or simultaneous is superior (62). This would give the agencies cover. Mowris said it
might make sense for DOJ to say that, at the time they issued the guidelines, sequential
was clearly best, but now that there’s doubt it makes sense for agencies to participate in a
new field study (63).
Blinka said the Commission should get the details on the study and then determine what,
if anything, the Commission can do (65).