Peer reviewer training, part I: What do we know about peer review?
Dr Trish Groves
Deputy editor, BMJ
What do editors want from papers?
• Importance
• Originality
• Relevance to readers
• Usefulness to readers and, ultimately, to patients
• Truth
• Excitement/“wow” factor
• Clear and engaging writing
Peer review
• As many processes as there are journals or grant-giving bodies
• No operational definition; usually implies “external review”
• Largely unstudied until the 1990s
• Benefits come through improving what’s published rather than sorting wheat from chaff
What is peer review?
• Review by peers
• Includes:
– internal review (by editorial staff)
– external review (by experts in the field)
BMJ papers
• All manuscripts are handled by our online editorial office at http://submit.bmj.com
• The website uses a system called Benchpress
• Reviewers are recruited by invitation, through volunteering, and by authors’ suggestions
• The database also includes all authors
• We monitor reviewers’ workload for the BMJ
• We rate reviewers’ reports on a 3-point scale
BMJ peer review process I
• 7000 research papers submitted, about 7% accepted
• Approximate numbers at each stage:
– 1000 rejected by one editor within 48 hours
– a further 3000 rejected with a second editor
– within one week of submission, 3000 read by a senior editor; a further 1500 rejected
– 1500 sent to two reviewers; then 500 more rejected
– approx 1000 screened by the clinical epidemiology editor, and more rejected
BMJ peer review process II
• 400-500 go to the weekly manuscript meeting, attended by the Editor, an external editorial adviser (a specialist or primary care doctor), and a statistician…
• …and the full team of BMJ research editors, plus the BMJ clinical epidemiology editor
• 350 research articles accepted, usually after revision (the full funnel is tallied in the sketch below)
• Value added by commissioned editorials and commentaries
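
Taken together, the two slides above describe a triage funnel. Here is a rough tally in Python of the approximate figures as stated; the slides’ round numbers do not reconcile exactly (350 of 7000 is 5%, against the quoted 7%), so treat every count as indicative only.

```python
# Rough tally of the BMJ triage funnel, using the approximate
# figures quoted on the slides above (indicative only).
submitted = 7000
stages = [
    ("rejected by one editor within 48 hours", 1000),
    ("rejected with a second editor", 3000),
    ("rejected after reading by a senior editor", 1500),
    ("rejected after review by two external reviewers", 500),
]
remaining = submitted
for stage, rejected in stages:
    remaining -= rejected
    print(f"{stage}: -{rejected}, {remaining} remain")

# ~1000 are then screened by the clinical epidemiology editor, more are
# rejected, and 400-500 reach the weekly manuscript meeting.
accepted = 350
print(f"accepted: {accepted} ({accepted / submitted:.0%} of submissions)")
```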
BMJ peer review process III
• Always willing to consider first appeals, but authors must revise the paper and respond to the criticisms, not just say the subject is important
• Perhaps 20% accepted on appeal
• No second appeals: they always end in tears, and there are plenty of other journals
What we know about peer review
Research evidence
Peer review processes
• “Stand at the top of the stairs with a pile of papers and throw them down the stairs. Those that reach the bottom are published.”
• “Sort the papers into two piles: those to be published and those to be rejected. Then swap them over.”
Some problems
• Means different things at different journals
• Slow
• Expensive
• Subjective
• Biased
• Open to abuse
• Poor at detecting errors
• Almost useless at detecting fraud
Is peer review reliable?
(How often do two reviewers agree?)
NEJM (Ingelfinger F, 1974)
• Rates of agreement only “moderately better than chance” (kappa = 0.26; a worked example of kappa follows below)
• Agreement greater for rejection than for acceptance
Grant review
• Cole et al, 1981 – real vs sham panel agreed on 75% of decisions
• Hodgson C, 1997 – two real panels reviewing the same grants, 73% agreement
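
Cohen’s kappa scales observed agreement against the agreement expected by chance alone: 0 means no better than chance, 1 means perfect agreement, so 0.26 is only modestly better than chance. A minimal sketch in Python; the 2×2 table of verdicts is hypothetical, chosen only to land near the NEJM figure, and is not Ingelfinger’s actual data.

```python
# Cohen's kappa for two reviewers' accept/reject verdicts.
# The counts below are hypothetical, for illustration only.
#                 reviewer B: accept   reviewer B: reject
a_accept = [40, 60]    # reviewer A: accept
a_reject = [35, 165]   # reviewer A: reject

n = sum(a_accept) + sum(a_reject)             # 300 papers in total
p_observed = (a_accept[0] + a_reject[1]) / n  # both agree: 205/300

# Chance agreement from the marginal accept/reject rates of each reviewer
pa = sum(a_accept) / n                        # A's accept rate
pb = (a_accept[0] + a_reject[0]) / n          # B's accept rate
p_chance = pa * pb + (1 - pa) * (1 - pb)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed {p_observed:.2f}, chance {p_chance:.2f}, kappa {kappa:.2f}")
# -> observed 0.68, chance 0.58, kappa 0.24: most of the raw agreement
#    here is what chance alone would produce.
```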
Are two reviewers enough?
• Fletcher and Fletcher, 1999 – at least six reviewers, all favouring rejection or all favouring acceptance, are needed to yield a statistically significant conclusion (p<0.05; a quick check follows below)
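
One plausible reading of that figure (not necessarily the authors’ published calculation): under a null hypothesis in which each reviewer’s verdict is a 50/50 coin flip, the two-sided probability of a unanimous panel is 2 × 0.5^n, which first drops below 0.05 at n = 6.

```python
# Two-sided sign-test p-value for n reviewers all reaching the same verdict,
# assuming each verdict is a 50/50 coin flip under the null hypothesis.
for n in range(2, 8):
    p = 2 * 0.5 ** n  # P(all accept) + P(all reject)
    flag = "  <- first n with p < 0.05" if p < 0.05 <= 2 * 0.5 ** (n - 1) else ""
    print(f"{n} unanimous reviewers: p = {p:.4f}{flag}")
# n=5 gives p = 0.0625; n=6 gives p = 0.0313, crossing the 0.05 threshold.
```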
Should we mind if reviewers don’t agree?
• Very high reliability might mean that all reviewers think the same
• Reviewers may be chosen for differing positions or areas of expertise
• Peer review decisions are like diagnostic tests: false positives and false negatives are inevitable (Kassirer and Campion, 1994)
• Larger journals ask reviewers to advise on publication, not to decide
Bias
Author-related
• Prestige (author/institution)
• Gender
• Where they live and work
Paper-related
• Positive results
• English language
Prestigious institution bias
Peters and Ceci, 1982
Resubmitted 12 altered articles to psychology journals that had already published them
Changed:
• title/abstract/introduction – only slightly
• authors’ names
• name of institution, from prestigious to an unknown fictitious name (e.g. “Tri-Valley Center for Human Potential”)
Peters and Ceci – results
• Three articles recognised as resubmissions
• One accepted
• Eight rejected (all because of poor study design, inadequate statistical analysis, or poor quality; none on grounds of lack of originality)
How easy is it to hide authors’ identity?
• Not easy
• In RCTs of blinded peer review, reviewers correctly identified the author or institution in 24-50% of cases
Reviewers identified (open review) – results of RCTs
In RCTs, asking reviewers to sign their reports made no difference to the quality of reviews or the recommendations made
• Godlee et al, 1998
• van Rooyen et al, 1998
• van Rooyen et al, 1999
Open review on the web
Various experiments and evaluations are underway…
What makes a good reviewer? – results of RCTs
• Aged under 40
• Good institution
• Methodological training (statistics & epidemiology)
What might improve the quality of reviews?
• Reward/credit/acknowledgement?
• Careful selection?
• Training?
• Greater accountability (open review on the web)?
• Interaction between author and reviewer (real-time open review)?