
Editorial: How to Write an Effective Results and Discussion for
the Journal of Pediatric Psychology
Dennis Drotar, PhD
Department of Pediatrics, Cincinnati Children’s Hospital Medical Center
All correspondence concerning this article should be addressed to Dennis Drotar, PhD, MLC 7039, 3333 Burnett Avenue, Cincinnati, OH 45229-3936, USA. E-mail: dennis.drotar@cchmc.org

Journal of Pediatric Psychology 34(4) pp. 339–343, 2009
doi:10.1093/jpepsy/jsp014
Advance Access publication March 10, 2009
Journal of Pediatric Psychology vol. 34 no. 4 © The Author 2009. Published by Oxford University Press on behalf of the Society of Pediatric Psychology.
All rights reserved. For permissions, please e-mail: journals.permissions@oxfordjournals.org

Authors face the significant challenge of presenting their results in the Journal of Pediatric Psychology (JPP) completely yet succinctly, and of writing a convincing discussion section that highlights the importance of their research. The third and final in a series of editorials (Drotar, 2009a, 2009b), this article provides guidance for authors in preparing effective results and discussion sections. Authors should also review the JPP website (http://www.jpepsy.oxfordjournals.org/) and consider other relevant sources (American Psychological Association, 2001; APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008; Bem, 2004; Brown, 2003; Wilkinson & the Task Force on Statistical Inference, 1999).

Presenting Results

Follow APA and JPP Standards for Presentation of Data and Statistical Analysis

Authors' presentations of data and statistical analyses should be consistent with publication manual guidelines (American Psychological Association, 2001). For example, authors should present the sample sizes, means, and standard deviations for all dependent measures and the direction, magnitude, degrees of freedom, and exact p levels for inferential statistics. In addition, JPP editorial policy requires that authors include effect sizes and confidence intervals for major findings (Cumming & Finch, 2005, 2008; Durlak, 2009; Vacha-Haase & Thompson, 2004; Wilkinson & the Task Force on Statistical Inference, 1999).

Authors should follow the Consolidated Standards of Reporting Trials (CONSORT) when reporting the results of randomized clinical trials (RCTs) in JPP (Moher, Schulz, & Altman, 2001; Stinson, McGrath, & Yamada, 2003). Guidelines have also been developed for nonrandomized designs, referred to as the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004) (available from http://www.trend-statement.org/asp/statement.asp). Finally, studies of diagnostic accuracy, including sensitivity and specificity of tests, should be reported in accord with the Standards for Reporting of Diagnostic Accuracy (STARD) (Bossuyt et al., 2003) (http://www.annals.org/cgi/content/full/138/1/W1).

Authors may also wish to consult a recent publication (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008) that contains useful guidelines for various types of manuscripts, including reports of new data collection and meta-analyses. Guidance is also available for manuscripts that report observational longitudinal research (Tooth, Ware, Bain, Purdie, & Dobson, 2005) and qualitative studies involving interviews and focus groups (Tong, Sainsbury, & Craig, 2007).

Provide an Overview and Focus Results on Primary Study Questions and Hypotheses

Readers and reviewers often have difficulty following authors' presentation of their results, especially for complex data analyses. For this reason, it is helpful for authors to provide an overview of the primary sections of their results and to take readers through their findings in a step-by-step fashion. This overview should follow directly from the data analysis plan stated in the method (Drotar, 2009b).

Readers appreciate the clarity of results that are consistent with and focused on the major questions and/or specific hypotheses described in the introduction. Readers and reviewers should be able to identify which specific hypotheses were supported, which received partial support, and which were not supported. Nonsignificant findings should not be ignored. Hypothesis-driven analyses should be presented first, prior to secondary and/or more exploratory analyses (Bem, 2004). The rationale for the choice of statistics and for relevant decisions within specific analyses should be described (e.g., the rationale for the order of entry of multiple variables in a regression analysis).
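Where the standards above call for effect sizes and confidence intervals alongside the usual descriptives, the computations are small enough to script. A minimal sketch in Python, using hypothetical scores for two independent groups; the large-sample normal approximation to the standard error of d is an assumption of this sketch, and small samples warrant a t-based or bootstrap interval instead:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical scores for two groups (illustration only; not real data)
treatment = [72.0, 75.0, 78.0, 80.0, 69.0, 74.0, 77.0, 81.0]
control = [65.0, 70.0, 68.0, 72.0, 66.0, 71.0, 69.0, 73.0]

def describe(scores):
    """Return n, mean, and sample SD -- the basic descriptives to report."""
    return len(scores), mean(scores), stdev(scores)

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, ma, sa = describe(a)
    nb, mb, sb = describe(b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled

def d_confidence_interval(a, b, level=0.95):
    """Approximate CI for d from its large-sample standard error
    (normal approximation; prefer t-based or bootstrap CIs for small n)."""
    na, nb = len(a), len(b)
    d = cohens_d(a, b)
    se = ((na + nb) / (na * nb) + d**2 / (2 * (na + nb))) ** 0.5
    z = NormalDist().inv_cdf((1 + level) / 2)
    return d - z * se, d + z * se

d = cohens_d(treatment, control)
lo, hi = d_confidence_interval(treatment, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

For the hypothetical data shown, the effect is large but the interval is wide, which is itself informative for readers judging the precision of the estimate.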
Report Data That Are Relevant to Statistical Assumptions

Authors should provide appropriate evidence, including quantitative results where necessary, to affirm that their data fit the assumptions required by the statistical analyses that are reported. When assumptions underlying statistical tests are violated, authors may use transformations of the data and/or alternative statistical methods, and should describe the rationale for doing so.

Integrate the Text of Results with Tables and/or Figures

Tables and figures provide effective, reader-friendly ways to highlight key findings (Wallgren, Wallgren, Persson, Jorner, & Haaland, 1996). However, authors face the challenge of describing their results in the text in a way that is not highly redundant with information presented in tables and/or figures. Figures are especially useful for reporting the results of complex statistics, such as structural equation modeling and path analyses, that describe interrelationships among multiple variables and constructs. Given constraints on published text in JPP, tables and figures should always be used selectively and strategically.

Describe Missing Data

Reviewers are very interested in understanding the nature and impact of missing data. For this reason, it is important to include information concerning the total number of participants and the flow of participants through each stage of the study (e.g., in prospective studies), the frequency and/or percentages of missing data at different time points, and the analytic methods used to address missing data. A summary of cases that are missing from analyses of primary and secondary outcomes for each group, the nature of missing data (e.g., missing at random or missing not at random), and, if applicable, the statistical methods used to replace missing data and/or understand its impact (Schafer & Graham, 2002) are useful for readers.

Consider Statistical Analyses that Document Clinical Significance of Results

Improving the clinical significance of research findings remains an important but elusive goal for the field of pediatric psychology (Drotar & Lemanek, 2001). Reviewers and readers are very interested in the question: what do the findings mean for clinical care? For this reason, I strongly encourage authors to conduct statistical evaluations of the clinical significance of their results whenever it is applicable and feasible. In order to describe and document clinical significance, authors are strongly encouraged to use one of several recommended approaches, including (but not limited to) the Reliable Change Index (Jacobson, Roberts, Berns, & McGlinchey, 1999; Jacobson & Truax, 1991; Ogles, Lambert, & Sawyer, 1995), normative comparisons (Kendall, Marrs-Garcia, Nath, & Sheldrick, 1999), or analyses of the functional impact of change (Kazdin, 1999, 2000). Statistical analyses of the cost effectiveness of interventions can also add to clinical significance (Gold, Siegel, Russell, & Weinstein, 1996). Authors who report data from quality of life measures should consider analyses of responsiveness and clinical significance that are appropriate for such measures (Revicki, Hays, Cella, & Sloan, 2008; Wyrwich et al., 2005).

Include Supplementary Information Concerning Tables, Figures, and Other Relevant Data on the JPP Website

The managing editors of JPP appreciate the increasing challenges that authors face in presenting the results of complicated study designs and data analytic procedures within the constraints of JPP policy for manuscript length. For this reason, our managing editors will work with authors to determine which tables, analyses, and figures are absolutely essential to include in the printed text of the article versus those that are less critical but nonetheless of interest and can be posted on the JPP website in order to save text space. Specific guidelines for submitting supplementary material are available on the JPP website. We believe that increased use of the website to post supplementary data will not only save text space but will also facilitate the communication among scientists that is so important to our field and encouraged by the National Institutes of Health.

Writing the Discussion Section

The purpose of the discussion is to give readers specific guidance about what was accomplished in the study, its scientific significance, and what research needs to be done next. The discussion section is very important to readers but extremely challenging for authors, given the need for a focused synthesis and interpretation of findings and presentation of relevant take-home messages that highlight the significance and implications of their research.
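Several of the clinical-significance analyses recommended in the results guidance above reduce to short computations. As one concrete illustration, the Reliable Change Index (Jacobson & Truax, 1991) can be sketched as follows; the pre/post scores, normative SD, and reliability here are hypothetical placeholders, not values from any real measure:

```python
from math import sqrt

def reliable_change_index(pre, post, sd_norm, reliability):
    """Jacobson-Truax RCI: the change score divided by the standard
    error of the difference. |RCI| > 1.96 indicates change that is
    unlikely (p < .05) to reflect measurement error alone."""
    se_measurement = sd_norm * sqrt(1 - reliability)  # SE of measurement
    se_difference = sqrt(2) * se_measurement          # SE of a difference score
    return (post - pre) / se_difference

# Hypothetical example: a symptom score drops from 30 to 18 on a measure
# with normative SD = 10 and test-retest reliability = .85
rci = reliable_change_index(pre=30, post=18, sd_norm=10, reliability=0.85)
print(f"RCI = {rci:.2f}")
```

In this hypothetical case the RCI exceeds 1.96 in absolute value, so the improvement would be described as reliable change rather than measurement noise.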
Organize and Focus the Discussion

Authors are encouraged to ensure that their discussion section is consistent with and integrated with all previous sections of their manuscripts. In crafting their discussion, authors may wish to review their introduction to make sure that the points most relevant to their previously articulated study aims, framework, and hypotheses are identified and elaborated.

A discussion section is typically organized around several key components presented in a logical sequence: synthesis and interpretation of findings, description of study limitations, and implications, including recommendations for future research and clinical care. Moreover, in order to maximize the impact of the discussion, it is helpful to discuss the most important or significant findings first, followed by secondary findings.

One of the most common mistakes that authors make is to discuss each and every finding (Bem, 2004). This strategy can result in an uninteresting and unwieldy presentation. A highly focused, lively presentation that calls the reader's attention to the most salient and interesting findings is most effective (Bem, 2004). A related problematic strategy is to repeat findings in the discussion that have already been presented, without interpreting or synthesizing them. This adds length to the manuscript, reduces reader interest, and detracts from the significance of the research. Finally, it is also problematic to introduce new findings in the discussion that have not been described in the results.

Describe the Novel Contribution of Findings Relative to Previous Research

Readers and reviewers need specific guidance from authors in order to identify and appreciate the most important new scientific contribution of the theory, methods, and/or findings of their research (Drotar, 2008; Sternberg & Gordeeva, 2006). Readers need to understand how authors' primary and secondary findings fit with what is already known, as well as how they challenge and/or extend scientific knowledge. For example, how do the findings shed light on important theoretical or empirical issues and resolve controversies in the field? How do the findings extend knowledge of methods and theory? What is the most important new scientific contribution of the work (Sternberg & Gordeeva, 2006)? What are the most important implications for clinical care and policy?

Discuss Study Limitations and Relevant Implications

Authors can engage their readers most effectively with a balanced presentation that emphasizes the strengths yet also critically evaluates the limitations of their research. Every study has limitations that readers need to consider in interpreting its findings. For this reason, it is advantageous for authors to address the major limitations of their research and their implications rather than leaving it to readers or reviewers to identify them. An open discussion of study limitations is not only critical to scientific integrity (Drotar, 2008) but is also an effective strategy for authors: reviewers may assume that if authors do not identify key limitations of their studies, they are not aware of them.

Description of study limitations should address specific implications for the validity of the inferences and conclusions that can be drawn from the findings (Campbell & Stanley, 1963). Commonly identified threats to internal validity include issues related to study design, measurement, and statistical power. The most relevant threats to external validity include sample bias and specific characteristics of the sample that limit generalization of findings (Drotar, 2009b).

Although authors' disclosure of relevant study limitations is important, it should be selective and focus on the most salient limitations (i.e., those that pose the greatest threats to internal or external validity). If applicable, authors may also wish to present counterarguments that temper the primary threats to validity they discuss. For example, if a study was limited by a small sample but nonetheless demonstrated statistically significant findings with a robust effect size, this counterargument deserves reviewers' consideration.

Study limitations often suggest important new research agendas that can shape the next generation of research. For this reason, it is also very helpful for authors to inform reviewers about the limitations of their research that should be addressed in future studies and to offer specific recommendations for accomplishing this.

Describe Implications of Findings for New Research

One of the most important features of a discussion section is the clear articulation of the implications of study findings for research that extends the scientific knowledge base of the field of pediatric psychology. Research findings can have several kinds of implications, such as the development of theory, methods, study designs, or data analytic approaches, or the identification of understudied and important content areas that require new research (Drotar, 2008). Providing a specific agenda for future research based on the current findings is much more helpful than general suggestions. Reviewers also appreciate being informed about how specific research recommendations can advance the field.
Describe Implications of Findings for Clinical Care and/or Policy

I encourage authors to describe the potential clinical implications of their research and/or suggestions to improve the clinical relevance of future research (Drotar & Lemanek, 2001). Research findings may have widely varied clinical implications. For example, studies that develop a new measure or test an intervention have greater potential clinical application than a descriptive study that is not directly focused on a clinical application. Nevertheless, descriptive research, such as the identification of factors that predict clinically relevant outcomes, may have implications for targeting clinical assessment or interventions concerning such outcomes (Drotar, 2006). However, authors should be careful not to overstate the implications of descriptive research.

As is the case with recommendations for future research, recommendations for clinical care should be as specific as possible. For example, in measure development studies it may be useful to inform readers about what next steps in research are needed to enhance the clinical application of a measure.

This is the final in a series of editorials intended to be helpful to authors and reviewers and to improve the quality of the science in the field of pediatric psychology. I encourage your submissions to JPP and welcome our collective opportunity to advance scientific knowledge.

Acknowledgments

The hard work of Meggie Bonner in typing this manuscript and the helpful critique of the associate editors of the Journal of Pediatric Psychology and Rick Ittenbach are gratefully acknowledged.

Conflict of interest: None declared.

Received February 4, 2009; revisions received and accepted February 4, 2009

References

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.
APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What do they need to be? American Psychologist, 63, 839–851.
Bem, D. (2004). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. L. Roediger III (Eds.), The compleat academic: A career guide (2nd ed., pp. 105–219). Washington, DC: American Psychological Association.
Bossuyt, P., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., et al. (2003). The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Annals of Internal Medicine, 138, W1–W12.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Cumming, G., & Finch, S. (2005). Inference by eye:
Confidence intervals and how to read pictures of data.
American Psychologist, 60, 170–180.
Cumming, G., & Finch, S. (2008). Putting research in
context: Understanding confidence intervals from one
or more studies. Journal of Pediatric Psychology.
Advance Access published December 18, 2008;
doi:10.1093/jpepsy/jsn118.
Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366. Retrieved September 15, 2004, from http://www.trend-statement.org/asp/statement.asp.
Drotar, D. (2000). Writing research articles for publication. In D. Drotar (Ed.), Handbook of research methods in clinical child and pediatric psychology (pp. 347–374). New York: Kluwer Academic/Plenum Publishers.
Drotar, D. (2006). Psychological interventions in childhood
chronic illness. Washington, D.C.: American
Psychological Association.
Drotar, D. (2008). Thoughts on establishing research significance and preserving scientific integrity. Journal of Pediatric Psychology, 33, 1–3.
Drotar, D. (2009a). Editorial: Thoughts on improving
the quality of manuscripts submitted to the Journal
of Pediatric Psychology: Writing a convincing
introduction. Journal of Pediatric Psychology, 34,
1–3.
Drotar, D. (2009b). Editorial: How to report methods in the Journal of Pediatric Psychology. Journal of Pediatric Psychology. Advance Access published February 10, 2009; doi:10.1093/jpepsy/jsp002.
Drotar, D., & Lemanek, K. (2001). Steps toward a clinically relevant science of interventions in pediatric settings. Journal of Pediatric Psychology, 26, 385–394.
Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology. Advance Access published February 16, 2009; doi:10.1093/jpepsy/jsp004.
Gold, M. R., Siegel, J. E., Russell, L. B., & Weinstein, M. C. (1996). Cost-effectiveness in health and medicine. New York: Oxford University Press.
Jacobson, N. S., Roberts, L. J., Berns, S. B., & McGlinchey, J. B. (1999). Methods for defining and determining clinical significance of treatment effects: Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67, 300–307.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.
Kazdin, A. E. (1999). The meanings and measurement of clinical significance. Journal of Consulting and Clinical Psychology, 67, 332–339.
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York: Oxford University Press.
Kendall, P. C., Marrs-Garcia, A., Nath, S. R., & Sheldrick, R. C. (1999). Normative comparisons for the evaluation of clinical significance. Journal of Consulting and Clinical Psychology, 67, 285–299.
Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association, 285, 1987–1991.
Ogles, B. M., Lambert, M. L., & Sawyer, J. D. (1995). Clinical significance of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Journal of Consulting and Clinical Psychology, 63, 321–326.
Revicki, D., Hays, R. D., Cella, D., & Sloan, J. (2008). Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. Journal of Clinical Epidemiology, 61, 102–109.
Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147–177.
Sternberg, R. J., & Gordeeva, T. (2006). The anatomy of impact: What makes an article influential? Psychological Science, 7, 69–75.
Stinson, J. N., McGrath, P. J., & Yamada, J. T. (2003). Clinical trials in the Journal of Pediatric Psychology: Applying the CONSORT statement. Journal of Pediatric Psychology, 28, 159–167.
Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19, 349–357.
Tooth, L., Ware, R., Bain, C., Purdie, D. M., & Dobson, A. (2005). Quality of reporting on observational longitudinal research. American Journal of Epidemiology, 161, 280–288.
Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473–481.
Wallgren, A., Wallgren, B., Persson, R., Jorner, U., & Haaland, J.-A. (1996). Graphing statistics and data: Creating better charts. Thousand Oaks, CA: Sage.
Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals. American Psychologist, 54, 594–604.
Wyrwich, K. W., Bullinger, M., Aaronson, N., Hays, R. D., Patrick, D. L., Symonds, T., & The Clinical Significance Consensus Meeting Group. (2005). Estimating clinically significant differences in quality of life outcomes. Quality of Life Research, 14, 285–295.