Outline of Analyses – Cognitive Tests

EDL 880
Dr. Jeffrey Oescher
I. Entering data
A. Determine the structure of your data set
1. How many variables
2. Names of the variables
a. Use obvious names
b. Think ahead for potential problems like reversing Likert responses
c. Consider labeling the variables only if it will help you in the analysis or
presentation of the data
3. Types of variables
a. SPSS sometimes does not convert string variables to numeric or recognize string
variables in procedures (e.g., ANOVA)
b. Use numeric variables when possible
c. Record the codes for the values you assign to the levels of any variables (e.g., 0
for females and 1 for males) or use a mnemonic to remember these codes (0 for
females and 1 for males - 0 is the first number and 1 is the second, while 'f' is the
first letter in the alphabet and 'm' the second)
4. Coding the identification of the respondents and/or surveys
a. Usually numeric and consecutive
b. Coding additional information into an ID such as group membership
5. Entering data
a. Enter the actual response to any given item (e.g., a, b, c, or d for multiple-choice
items, t or f for true-false items, etc.)
b. Methods of entering data
i. By yourself
ii. Using a colleague to help - one reads and the other records
iii. Using Scantron data
6. Saving your data set
a. Use names that are self-explanatory
b. Number your data sets as you modify them (e.g., fe1.sav, fe2.sav, fe3.sav, etc.)
c. Always keep a backup copy of the most recent data set.
II. Cleaning data
A. Always check the data set for "dirty" data such as typos, incorrectly keyed items, missing
data, etc.
B. Use the SPSS analysis procedure FREQUENCIES for each item and for any
demographic variable for which this is reasonable (i.e., one with no more than
three or four values)
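The FREQUENCIES check above can be sketched outside SPSS as well. The Python fragment below (item names and response values are invented for illustration) tallies every value recorded for each item so that typos, stray codes, and missing data stand out:

```python
# A minimal sketch of a FREQUENCIES-style check in Python rather than
# SPSS. Item names (q1..q3) and response values are hypothetical.
from collections import Counter

# Each row is one respondent's answers; '9' and '' stand in for a
# keying error and a missing response.
responses = [
    {"q1": "a", "q2": "b", "q3": "c"},
    {"q1": "d", "q2": "9", "q3": "c"},   # '9' is an obvious keying error
    {"q1": "a", "q2": "b", "q3": ""},    # missing response
]

def frequencies(rows, item):
    """Tally every value recorded for one item, including bad ones."""
    return Counter(row[item] for row in rows)

for item in ("q1", "q2", "q3"):
    print(item, dict(frequencies(responses, item)))
```

Any value outside the legitimate set (here, a-d) points to a line of data worth re-checking against the original survey or answer sheet.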
C. Corrections
1. Correct any obvious mistakes with data entry
2. Eliminate the item response if no mistake can be established
3. Eliminate the entire observation if there is a preponderance of missing items (e.g., 90%)
D. Save the data set under a new name so as to preserve the original data
III. Create and/or modify variables
A. Create any variables needed
1. Item scores (i.e., correct or incorrect)
2. Total scores
3. Subscale scores
B. Use the SPSS transform commands COMPUTE or RECODE as well as any appropriate
functions such as NMISS or MEAN
1. Always recode or compute variables with names other than those that currently exist
2. Usually score missing responses as a wrong answer unless there is a compelling
reason not to do so
C. Check one or two observations to be certain the transformations are being done correctly
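As a check on the logic of the COMPUTE/RECODE step, here is a minimal Python sketch of the same transformation. The answer key and response are hypothetical; missing responses are scored as wrong, per the guideline above:

```python
# Hypothetical answer key; a missing response ('') is scored as
# wrong (0), following the guideline above.
key = {"q1": "a", "q2": "c", "q3": "b"}

def score_items(response, key):
    """Recode raw responses into 0/1 item scores (missing -> 0)."""
    return {item: int(response.get(item) == correct)
            for item, correct in key.items()}

def total_score(item_scores):
    """Total score = sum of the 0/1 item scores."""
    return sum(item_scores.values())

resp = {"q1": "a", "q2": "", "q3": "b"}    # q2 is missing
scores = score_items(resp, key)
print(scores, total_score(scores))         # {'q1': 1, 'q2': 0, 'q3': 1} 2
```

Running one or two observations through by hand, as item C advises, is exactly this kind of spot check.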
IV. Analyze the data
A. Review the proportion of responses to each alternative
1. Use the SPSS analysis procedure FREQUENCIES for each item
2. Review the proportions of students choosing each of the alternatives to an item
a. Examine the difficulty index (i.e., the proportion of students choosing the correct
answer) to be certain the majority of students have answered each item correctly
b. Examine the proportions of students choosing each alternative to be certain the
responses are distributed across all alternatives
3. Consider revising any item for which difficulties have been identified
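The difficulty index and the distribution of responses across alternatives can be computed directly. The Python sketch below uses invented choices for a single item whose keyed answer is 'a':

```python
# A sketch of the difficulty-index check: the proportion choosing
# each alternative, and the proportion correct. Data are invented.
from collections import Counter

choices = ["a", "a", "b", "a", "c", "a", "d", "a"]  # 8 students; key = 'a'

def proportions(choices):
    """Proportion of students choosing each alternative."""
    n = len(choices)
    return {alt: count / n for alt, count in Counter(choices).items()}

def difficulty(choices, correct):
    """Difficulty index = proportion choosing the correct answer."""
    return choices.count(correct) / len(choices)

print(proportions(choices))
print(difficulty(choices, "a"))   # 5 of 8 correct = 0.625
```

Here a majority answered correctly and every distractor drew at least one response, which is the pattern items A.2.a and A.2.b ask you to verify.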
B. Review all item reliabilities1
1. Calculate the item reliabilities for all alternatives to an item
a. This is the correlation between a student's choice of an alternative (i.e., 0 or 1)
and their total test score
b. Prior to running this analysis you must create a variable designating each
student's selection of each alternative for the item (i.e., 0 or 1)
2. Use the SPSS analysis procedure CORRELATIONS to calculate the item reliability
index for each alternative to each item
3. The reliability index for the correct answer to an item is known as the discrimination
index for that item
a. The guide to interpreting the value of this index is that it should be greater than
+0.30
b. Higher discrimination indices (e.g., +0.50, +0.60, etc.) indicate the item
differentiates well between those students who know the material in comparison
to those who do not
1 These analyses apply to situations where a total score is being calculated. If subscales are
being calculated, the analyses should be used for the items comprising that subscale and the
subscale score.
c. Lower discrimination indices (e.g., +0.20, +0.15, etc.) suggest something is not
working correctly with this item as students who get it correct have lower scores
than those students who answer it incorrectly
4. The reliability indices for any incorrect answers are known generally as item
reliabilities
a. The choice of an incorrect response (i.e., 1) should be associated with a lower
total test score
b. The guide to interpreting the value of this index is that it should be negative (e.g.,
-0.15, -0.20, etc.)
c. Larger negative item reliability indices indicate the item differentiates well
between those students who do not know the material in comparison to those
who do
d. Smaller or positive item reliability indices suggest something is not working
correctly with a particular alternative
5. Consider revising any item for which the item reliability data is suspect
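The item-reliability index described above is a Pearson correlation between a 0/1 alternative-choice indicator and the total test score. The Python sketch below, with invented data, shows the arithmetic that the SPSS CORRELATIONS procedure performs:

```python
# A sketch of the item-reliability computation: the correlation
# between choosing an alternative (0/1) and the total test score.
# The student data below are invented for illustration.
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# 1 = student chose this alternative, 0 = did not; paired with each
# student's total score.
chose_correct = [1, 1, 1, 0, 0, 1]
totals        = [9, 8, 7, 4, 3, 8]

disc = pearson(chose_correct, totals)
print(round(disc, 2))   # well above +0.30, so the item discriminates
```

For a distractor, the indicator would flag students choosing that wrong alternative, and a well-functioning distractor yields a negative correlation, as described in item 4 above.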
C. Review the reliability index for the test or scale
1. Calculate a KR 20 or Cronbach's alpha for the test as well as subscales
2. Use the SPSS analysis procedure RELIABILITY to calculate these reliability indices
a. This procedure is found under the ANALYZE and then SCALE pull-down menus
b. The RELIABILITY procedure will calculate Cronbach's alpha which is a more
generalized form of the KR 20
3. Interpret the reliability index relative to the decisions being made with the test or
scale results
a. Reliability indices greater than +0.70 are usually acceptable for tests that are not
standardized
b. Factors affecting reliability indices
i. Test length
a) The reliability index for a longer test is likely going to be higher than that
for a shorter test
b) Using tests with very few items (e.g., a subscale based on three items) is
not recommended
ii. Variability of scores
a) A test on which students score very similarly (i.e., homogeneous scores)
will have a lower reliability index than a test on which students' scores
vary greatly (i.e., heterogeneous scores)
b) Tests in which students all do very well - a goal of good instruction - will
have lower reliability indices due to the homogeneity of scores
c) Tests in which students all do very poorly - a situation often found in
pre-assessment of students' knowledge - will have lower reliability indices
due to the homogeneity of scores
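For dichotomously scored items, the KR 20 can be computed by hand as a check on the SPSS RELIABILITY output. The sketch below uses an invented 5-student-by-4-item score matrix:

```python
# A sketch of the KR 20 computation (the special case of Cronbach's
# alpha for 0/1 items). The score matrix is invented.
from statistics import pvariance

def kr20(item_matrix):
    """item_matrix: rows = students, columns = 0/1 item scores."""
    k = len(item_matrix[0])
    totals = [sum(row) for row in item_matrix]
    # p = proportion correct on each item, q = 1 - p
    pq = []
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / len(item_matrix)
        pq.append(p * (1 - p))
    return (k / (k - 1)) * (1 - sum(pq) / pvariance(totals))

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 2))   # → 0.8
```

Note how the formula depends on the number of items (k) and on the variance of the total scores: lengthening the test or spreading out the scores raises the index, which is exactly the behavior described in items b.i and b.ii above.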
D. Review the descriptive statistics for the scores of those in the sample
1. Calculate descriptive statistics (i.e., mean, standard deviation, n, etc.) for the total
score and any subscale scores
2. Use the SPSS analysis procedure DESCRIPTIVES to calculate these statistics
3. Interpret the scores
a. Test scores
i. Usually scores are interpreted in a criterion-referenced manner (e.g., John
answered 97% of the items correctly, Sally mastered three of the four
objectives, Michelle can correctly add and subtract single-digit numbers, etc.)
ii. Some test scores are interpreted in a norm-referenced manner (e.g., Jim
made the lowest score in the class, Ronnie's scores fell in the middle of the
pack, Sharon had the best paper, etc.)
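The DESCRIPTIVES step amounts to computing n, the mean, and the standard deviation for each score. A minimal Python sketch with invented total scores:

```python
# A minimal sketch of the DESCRIPTIVES step: n, mean, and standard
# deviation for a set of invented total scores.
from statistics import mean, stdev

totals = [18, 22, 25, 19, 24, 21]

print("n =", len(totals))
print("mean =", round(mean(totals), 2))
print("sd =", round(stdev(totals), 2))
```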