
Science 206: Journal Critique Guidelines

GUIDELINES TO A CRITICAL EVALUATION OF PUBLISHED REPORTS
(Journal Critique)
Prepared by Dr. Merab A. Chan
Being able to evaluate published reports critically is a stepping-stone to conducting
excellent research. Understanding these guidelines is therefore essential.
1. Choose a published research article. The article must report an experimental,
quantitative study, and it must have been published within the last five years.
2. Read the whole article and write a summary of what the study is about.
3. Journal critique proper
Evaluate the following components of the paper:
a. The research hypothesis
Identifying the research hypothesis is the first step in reviewing an article.
Make sure you answer the following questions: 1) Why was this research
performed? 2) Does it have relevance to you? 3) Is there any practical or
scientific merit to it? If not, there is no need to read any further.
b. Variables studied
Before reading the report in detail, ask yourself what variables would shed
light on the research hypothesis. Then, identify the variables included in the
report. Which are the dependent variables? Which are the independent
variables? Are the variables relevant to the research hypothesis? Determine
whether the variables were studied appropriately. Furthermore, identify the
scale of measurement used for recording each variable. It is at this point that
you consider the methods used to obtain the data. Were state-of-the-art
clinical or laboratory techniques used? Do these methods produce precise
measurements?
Examples of variables in many human studies: age, race, gender
c. The study design
The appropriate methods of analysis are determined by the study design.
Identify the study design and the study population to determine the relevance
of the results.
Consider the following questions:
1) Is the study experimental or observational?
2) What are the study units and how were they selected?
3) Are the study units a random sample, or were they chosen merely because
it was convenient to study these particular units?
4) What was the target population?
5) How does the target population compare with the study population?
6) How was the study conducted?
7) At what point was randomization used?
8) Was an adequate control group included?
d. Sample size
The sample size must be large enough for the results to be reliably
generalized. Statistical tests used to declare results significant or not
significant depend upon the sample size. The greater the number of study
units in an investigation, the more confident we can be of the results. The
larger the sample size, the more powerful the test and the more sensitive the
test to population differences. Thus, with a very large sample size, even trivial
population differences can be declared significant.
On the other hand, in a small sample even a large difference may be
non-significant. Findings based on only one or two subjects cannot be expected
to be typical of large populations.
Note: Determine how large a difference could have reasonably been detected
by the sample size used, especially in studies that report no significant
differences.
A good study that reports negative results will also quote the power of the
statistical tests used.
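One way to act on this note is sketched below. It is a rough, illustrative
calculation only; the group sizes, standard deviation, significance level, and
power are hypothetical, not taken from any particular article. A normal
approximation to a two-group comparison gives the smallest difference in means
that the reported sample size could plausibly have detected.
    # Minimal sketch (hypothetical numbers): smallest detectable difference in means
    # for a two-group comparison, using a normal approximation to the two-sample test.
    from scipy.stats import norm

    alpha = 0.05          # two-sided significance level (assumed)
    power = 0.80          # desired power (assumed)
    n1, n2 = 30, 30       # group sizes reported in the article (hypothetical)
    sd = 12.0             # assumed common standard deviation of the outcome

    se = sd * (1.0 / n1 + 1.0 / n2) ** 0.5   # standard error of the difference
    z_alpha = norm.ppf(1 - alpha / 2)        # critical value of the test
    z_power = norm.ppf(power)                # quantile for the desired power
    detectable = (z_alpha + z_power) * se    # smallest difference detectable

    print(f"Smallest detectable difference is about {detectable:.1f} units")
If the article reports "no significant difference" but this detectable
difference is large, the study may simply have been too small to find anything.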
e. Completeness of the data
A good study will include all the data available in the various analyses
performed.
Be careful of reports in which a portion of the data has been discarded as
outliers because of gross errors or other unusual circumstances. If an
investigator discards the data that do not support the research hypothesis,
scientific objectivity is lost.
Note: If a large proportion of the study units (say, more than 20%) has a
significant amount of incomplete data, then the credibility of the results
should be questioned.
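When the analysis dataset is available, the 20% rule of thumb can be checked
directly. The sketch below is illustrative only; the file name and data layout
(one row per study unit) are hypothetical.
    # Minimal sketch (hypothetical data): fraction of study units with any missing value.
    import pandas as pd

    df = pd.read_csv("study_data.csv")           # hypothetical file, one row per study unit
    incomplete = df.isna().any(axis=1).mean()    # proportion of rows with a missing field

    print(f"{incomplete:.0%} of study units have incomplete data")
    if incomplete > 0.20:                        # the 20% rule of thumb from these guidelines
        print("More than 20% incomplete; question the credibility of the results.")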
f. Appropriate descriptive statistics
Be sure you understand what the numbers in the text and tables represent, and
what is graphed.
Take note of the following:
1) Tables and graphs give an overview of the findings of the study. Be
watchful that these are not misleading.
2) Distinguish between the sample standard deviation and the standard error
of an estimated parameter. If the coefficient of variation is not reported,
calculate it for some of the important variables (see the sketch after this
list).
3) Check if numbers in the text agree with numbers in the tables and graphs.
If not, then the paper is poorly written.
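As a reminder of how these quantities differ, the sketch below computes the
sample standard deviation, the standard error of the mean, and the coefficient
of variation; the measurements are made up for illustration.
    # Minimal sketch (made-up values): SD, standard error, and coefficient of variation.
    import statistics

    values = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]   # hypothetical measurements of one variable

    mean = statistics.mean(values)
    sd = statistics.stdev(values)              # sample standard deviation (n - 1 denominator)
    se = sd / len(values) ** 0.5               # standard error of the estimated mean
    cv = sd / mean                             # coefficient of variation (relative spread)

    print(f"mean={mean:.2f}  SD={sd:.2f}  SE={se:.2f}  CV={cv:.1%}")
The standard error describes the precision of the estimated mean, not the
spread of the individual measurements; confusing the two makes results look
far more precise than they are.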
g. Appropriate statistical methods for inferences
Determine if the statistical methods used are appropriate for the study design
and scale of measurement. This requires identification of the null hypothesis
that is being tested. If multiple significance tests are performed, multiply each
p-value by the number of tests performed (a Bonferroni-type adjustment) to obtain
an upper bound for the overall significance level.
A p-value is the probability of observing a difference at least as large as the
one seen in the sample purely as a result of random variation, when in fact
there is no difference in the populations from which the samples came.
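The adjustment for multiple testing described above can be applied directly to
the p-values quoted in a report. The sketch below is illustrative; the
p-values themselves are hypothetical.
    # Minimal sketch (hypothetical p-values): Bonferroni-type upper bound for multiple tests.
    p_values = [0.012, 0.048, 0.300, 0.007]    # p-values reported for several tests (made up)
    k = len(p_values)

    adjusted = [min(p * k, 1.0) for p in p_values]   # multiply each p-value by the number of tests
    print("Adjusted p-values:", adjusted)
    print("Still significant at 0.05 after adjustment:",
          [p for p in adjusted if p < 0.05])
Note how a result that looks significant in isolation (p = 0.048) may no longer
be significant once the number of tests is taken into account.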
Take note of the following:
1) Check if the names of the statistical tests used in the analysis are stated in
the methods section of the report.
Examples of statistical tests: Student’s t-test, paired t-test, analysis of
variance, multiple regression analysis, etc.
2) Be wary of reports that state p-values without indicating the specific
method used to analyze the data (see the example after this list).
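As an illustration of reporting the test by name alongside the p-value, the
sketch below runs an unpaired Student's t-test on two made-up groups; the data
are hypothetical and serve only to show the form a clear statement should take.
    # Minimal sketch (made-up groups): naming the test used alongside the p-value.
    from scipy.stats import ttest_ind

    group_a = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]   # hypothetical outcome values, group A
    group_b = [4.2, 4.6, 5.0, 4.4, 4.8, 4.1]   # hypothetical outcome values, group B

    result = ttest_ind(group_a, group_b)        # two-sample (unpaired) Student's t-test
    print(f"Student's t-test (unpaired): t = {result.statistic:.2f}, p = {result.pvalue:.3f}")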
h. Logic of the conclusions
Do the conclusions make sense?
Note: Many research reports contain unsound conclusions, most often because the
concluding statements are improperly stated or because no appropriate
comparisons were made.
4. Attach a photocopy of the article.
Reference:
Elston, R.C. and Johnson, W.D. 1995. Essentials of Biostatistics. 2nd Ed., Info Access &
Distribution Pte Ltd., Singapore.
FORMAT OF PAPER TO BE SUBMITTED
Must be typewritten, font size 12, Times New Roman.
Write appropriate headings for the different sections of the critique proper.