Adventures in CPS
Aaron Bruck
Towns Group Meeting
September 25, 2007
Goals for the summer
► Complete CPS projects
   2 different projects
   ► Quantification and categorization of CPS questions
   ► Linking CPS results to results on exams
► Look into new ideas for research (and possibly an OP)
   Scientific Literacy
   Assessment tools
What is CPS?
► Students use “clickers” to respond to instructor-generated questions
► These responses are stored in an online database
► Students receive grades based on the number of questions answered correctly
Categorization
► We decided to categorize the questions in the following ways:
   Solo vs. Buddy
   Definition vs. Algorithmic vs. Conceptual
   Using Bloom’s Taxonomy
   Using the Smith/Nakhleh/Bretz Framework¹
► We also compared our analyses with those of Mazur² to make sure we were looking for the right things.

¹Smith, K. C., Nakhleh, M. B., Bretz, S. L. An Expanded Framework for Analyzing General Chemistry Exams. Journal of Chemical Education. In press.
²Fagen, A. P., Crouch, C. H., Mazur, E. (2002) Peer Instruction: Results from a Range of Classrooms. The Physics Teacher, 40, 206-209.
Categorization, cont.
► Here are the results from one of the sections (others followed a similar trend):

Bloom's Taxonomy     # of questions   # solo   # buddy
Knowledge (1)              44            41        3
Comprehension (2)          44            34       10
Application (3)            27            18        9
Analysis (4)                3             1        2
Synthesis (5)               0             0        0
Evaluation (6)              0             0        0
Total                     118            94       24

Bloom's Taxonomy     # of questions   # Definition   # Algorithmic   # Conceptual
Knowledge (1)              44               40              3               1
Comprehension (2)          44               14             11              19
Application (3)            27                4             21               2
Analysis (4)                3                0              1               2
Synthesis (5)               0                0              0               0
Evaluation (6)              0                0              0               0
Total                     118               58             36              24
More categorization
Smith/Nakhleh Framework

Bloom's Taxonomy    # questions  # Definition  # A-MaMi  # A-MaD  # A-MiS  # A-Mu  # C-E  # C-P  # C-I  # C-O
Knowledge (1)            44           42           0         0        1        0      0      1      0      0
Comprehension (2)        44           15           1         0        7        0      1     16      4      0
Application (3)          27            4           7         4        6        3      0      1      2      0
Analysis (4)              3            0           0         0        0        1      0      2      0      0
Synthesis (5)             0            0           0         0        0        0      0      0      0      0
Evaluation (6)            0            0           0         0        0        0      0      0      0      0
Total                   118           61           8         4       14        4      1     20      6      0
Results by Category
► A two-tailed t-test (solo/buddy) and one-way ANOVAs (all others) were performed to test for statistical differences in the data (an example of how such tests can be run in code follows the table below)
► Analyses showed no significant differences between any of the categories and how the students performed on the questions
► The only exception was the solo/buddy comparison for one professor
                 Solo vs. Buddy    Bloom’s Taxonomy    Question Type (D/A/C)    Smith/Nakhleh/Bretz Framework
                 t        p        F        p          F        p               F        p
Professor A   -3.189   0.002*    0.730    0.485      0.307    0.820           0.285    0.942
Professor B    0.049   0.962     1.301    0.277      1.102    0.352           1.102    0.429
Professor C   -0.579   0.564     1.001    0.371      2.456    0.067           1.923    0.064
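As a rough illustration (not the original analysis code), comparisons like those above can be run in Python with SciPy; the data values and variable names below are made up for the example:

# Minimal sketch: given per-question percent-correct scores grouped by
# category, run the tests described above. Example data are hypothetical.
from scipy import stats

# Percent correct on solo vs. buddy questions (hypothetical values)
solo = [62.0, 55.5, 70.1, 48.3, 66.7]
buddy = [71.2, 68.4, 75.0, 80.3, 64.9]

# Two-tailed independent-samples t-test (solo vs. buddy)
t, p = stats.ttest_ind(solo, buddy)
print(f"solo/buddy: t = {t:.3f}, p = {p:.3f}")

# One-way ANOVA across the three question types (definition/algorithmic/conceptual)
definition = [58.1, 61.0, 72.4, 66.3]
algorithmic = [49.5, 55.2, 60.8, 52.7]
conceptual = [63.9, 57.4, 68.0, 59.1]
F, p = stats.f_oneway(definition, algorithmic, conceptual)
print(f"D/A/C: F = {F:.3f}, p = {p:.3f}")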
Solo/Buddy Analysis
► Prompted by the unusual results, we further investigated the solo/buddy analysis
► We also looked at pairs of solo/buddy questions asked one after the other:

Solo/Buddy     N     Mean      Std. Deviation   Std. Error Mean
1              8     45.5875   19.09786         6.75211
2              8     70.2525   17.42225         6.15970

T-test results: t = -2.699, p = 0.017 (significant difference)
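The reported t value can be reproduced from the summary statistics in the table; here is a minimal SciPy sketch assuming an independent-samples test with pooled variance (a reconstruction for illustration, not the original analysis code):

# Sketch: recompute the t-test from the reported group summaries (N, mean, SD).
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=45.5875, std1=19.09786, nobs1=8,   # group 1
    mean2=70.2525, std2=17.42225, nobs2=8,   # group 2
    equal_var=True,                          # pooled-variance assumption
)
print(f"t = {t:.3f}, p = {p:.3f}")  # expected: t ≈ -2.699, p ≈ 0.017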
That’s great, but…
► We found a significant difference between solo and buddy questions…but is it worth anything?
► Our next step was to see if this apparent difference in performance due to style of question translated into better test scores on the exams.
Exam Analysis
► We compared exam questions with questions asked in class using CPS.
► Surprisingly, we found very few questions on the exams that directly or indirectly corresponded to CPS questions.
► Each exam was analyzed individually before pooling all of the data to determine any and all effects.
Exam Analysis
           # solo   # buddy   # neither
Exam 1       17        0         40
Exam 2       21        9         31
Exam 3       10       10         36
Exam 4       29       10         64
Totals       77       29        171

           % correct solo   % correct buddy   % correct neither
Exam 1        68.119            n/a               57.5222
Exam 2        56.5675           62.1900           66.1966
Exam 3        68.1138           67.9493           54.5532
Exam 4        66.1699           50.3368           60.3920
Totals        64.2338           60.0887           59.5438
Question Effects
                 F value   p value
Exam 1            3.508     0.066
Exam 2            2.162     0.124
Exam 3            2.718     0.075
Final Exam        2.793     0.066
Pooled Exams      1.632     0.197

Per instructor…
Professor A       1.032     0.361
Professor B       0.341     0.712
Professor C       1.468     0.236
All analyses showed no significant differences at the p = 0.05 significance level.
Instructor Effects
► We also ran an analysis to check for any instructor effects that could have possibly skewed the data.
► Results showed no significant differences at the p = 0.05 level:

Instructor Effects   F value   p value
Exam 1                0.54      0.586
Exam 2                0.484     0.619
Exam 3                0.108     0.898
Final Exam            1.255     0.289
Pooled Exams          0.987     0.374
Is CPS better than nothing?
► A final analysis was performed between exam questions that correlated to CPS questions and those that did not.
► Unfortunately, no significant differences were found, though the average score was higher for CPS questions.
CPS vs. Nothing Results

Descriptive Statistics: Percent Correct
                         N     Mean      Std. Deviation   Std. Error
1 (CPS-linked)          106    63.0998   17.25487         1.67594
2 (no CPS question)     171    59.5438   20.13811         1.54000
Total                   277    60.9046   19.13260         1.14957

Results of ANOVA: Percent Correct
                  Sum of Squares    df    Mean Square
Between Groups        827.451        1      827.451
Within Groups      100204.068      275      364.378
Total              101031.520      276

F = 2.271    Sig. = .133
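The ANOVA table can be rebuilt from the group summaries alone; the sketch below (SciPy, for illustration only and not the original analysis) recomputes the sums of squares, F, and p from the reported N, mean, and SD of each group:

# Sketch: rebuild the one-way ANOVA from the reported group summaries.
from scipy.stats import f as f_dist

groups = [(106, 63.0998, 17.25487),   # group 1: exam questions linked to a CPS question
          (171, 59.5438, 20.13811)]   # group 2: exam questions with no CPS counterpart

N = sum(n for n, _, _ in groups)
grand_mean = sum(n * m for n, m, _ in groups) / N

ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)

df_between = len(groups) - 1   # 1
df_within = N - len(groups)    # 275

F = (ss_between / df_between) / (ss_within / df_within)
p = f_dist.sf(F, df_between, df_within)
print(f"F = {F:.3f}, p = {p:.3f}")   # expected: F ≈ 2.271, p ≈ 0.133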
Conclusions
► CPS is an effective lecture tool that engages students interactively with the content
► Most CPS questions are low-level questions in terms of Bloom’s Taxonomy and other categorization tools
► Students seem to learn content through interaction with their peers when using CPS, though this does not necessarily correlate to success on exams
What else did I do?
► Research Questions
   In the event that I need to do a project other than the NSDL project, what avenues are available?
   Could any of these ideas turn into a possible OP in the following months?
► Ideas of interest
   Scientific Literacy
     ► What is the value of a textbook?
     ► Could other materials help?
   Assessment
     ► Immediate feedback assessment technique (IFAT)
        Could it work in chemistry?