Jordan, S. (2014). Computer-marked assessment as learning analytics. Presentation at CALRG Conference, Milton Keynes, 10-11 June 2014.

Computer-marked assessment as
learning analytics
Sally Jordan
Department of Physical Science
CALRG-C, 11th June 2014
What this presentation is about
• The potential of “assessment analytics”;
• The use of computer-based assessment as a
diagnostic tool (at the individual student level);
• The analysis of responses to computer-based
assessment at the cohort level to provide information
about student misunderstandings and student
engagement;
• A consideration of factors that affect student
engagement;
• The future: student engagement as assessment?
[using examples from my work]
Relevant literature
• Definitions of learning analytics e.g. Clow (2013, p. 683):
“The analysis and representation of data about learners
in order to improve learning”;
• But assessment is sometimes ignored when learning
analytics are discussed. Ellis (2013) points out that
assessment is ubiquitous in higher education whilst
student interactions in other online environments are not;
I will also argue that analysing assessment behaviour
enables us to monitor student behaviour in depth;
• Assessment literature is also relevant e.g. Nicol &
Macfarlane-Dick (2006) state that good feedback
practice “Provides information to teachers that can be
used to shape teaching”.
Analysis at the individual student
level: Diagnostic testing
Analysis at the cohort level:
Student errors
• At the most basic level, look for questions that students struggle with;
• Look at responses in more detail to learn more about the errors that students make;
• This can give insight into student misunderstandings (a minimal tallying sketch follows below).
• So what topics in Maths for Science do students find
difficult?
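[Not from the original slides: a minimal Python sketch of the kind of cohort-level tally that can surface the questions students struggle with and their commonest wrong answers. The log file name and the columns question_id, response and is_correct are assumptions for illustration only.]

```python
# Hypothetical sketch: tally iCMA responses at cohort level to find the
# questions with the highest failure rates and their commonest wrong answers.
# Assumes a CSV export with columns: question_id, response, is_correct.
import csv
from collections import Counter, defaultdict

attempts = defaultdict(int)      # total attempts per question
wrong = defaultdict(Counter)     # tallies of incorrect responses per question

with open("icma_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        qid = row["question_id"]
        attempts[qid] += 1
        if row["is_correct"].strip().lower() not in ("1", "true", "yes"):
            wrong[qid][row["response"].strip()] += 1

# Rank questions by failure rate, then show each question's commonest errors.
for qid in sorted(attempts, key=lambda q: sum(wrong[q].values()) / attempts[q],
                  reverse=True):
    rate = sum(wrong[qid].values()) / attempts[qid]
    print(f"{qid}: {rate:.0%} incorrect; commonest errors: {wrong[qid].most_common(3)}")
```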
So what topics in Maths for
Science do students find difficult?
Analysis of student responses to
individual questions
Gives information about student errors, linked to
their misconceptions. The confidence in the
findings is increased when
• The questions require a ‘free-text’ (constructed)
response;
• The questions are in summative use (students
are trying);
• Similar errors are seen in different variants.
See Jordan (2014)
Why is the answer 243? (instead
of 9)
The question was:
Evaluate 3^(6/3)
Students were evaluating (3^6)/3 = 3^5 = 243
Instead of 3^(6/3) = 3^2 = 9
For another variant the answer was
5000 instead of 100
The question was:
Evaluate 10^(4/2)
Students were evaluating (10^4)/2 = 10000/2 = 5000
Instead of 10^(4/2) = 10^2 = 100
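[Not from the original slides: the precedence error can be reproduced directly, since Python's ** and / follow the same convention the students missed: a fractional exponent must be bracketed.]

```python
# Illustration of the order-of-operations slip behind the answers above.
print(3 ** (6 / 3))   # 9.0    -- intended: 3^(6/3) = 3^2
print(3 ** 6 / 3)     # 243.0  -- what students did: (3^6)/3 = 3^5
print(10 ** (4 / 2))  # 100.0  -- intended: 10^(4/2) = 10^2
print(10 ** 4 / 2)    # 5000.0 -- what students did: (10^4)/2
```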
Measuring student engagement…
“750 students used my iCMA”
When do students do iCMAs?
(overall activity)
When do students do iCMAs?
(impact of deadlines)
When do students do iCMAs?
(typical patterns of use)
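[Not from the original slides: a sketch, under assumed data, of how the deadline-related patterns in these plots can be measured by bucketing iCMA attempt timestamps by the number of days before the relevant cut-off date. The submission and deadline values below are invented for illustration.]

```python
# Hypothetical sketch: when do students attempt iCMAs relative to a deadline?
from collections import Counter
from datetime import datetime

# (attempt timestamp, deadline) pairs -- invented example data
submissions = [
    ("2014-02-10 21:15", "2014-02-14 23:59"),
    ("2014-02-14 09:05", "2014-02-14 23:59"),
    ("2014-02-14 22:30", "2014-02-14 23:59"),
]

days_before = Counter()
for submitted, deadline in submissions:
    s = datetime.strptime(submitted, "%Y-%m-%d %H:%M")
    d = datetime.strptime(deadline, "%Y-%m-%d %H:%M")
    days_before[(d - s).days] += 1   # 0 = deadline day, 1 = the day before, ...

for days, count in sorted(days_before.items()):
    print(f"{days} day(s) before deadline: {count} attempt(s)")
```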
Length of responses to short-answer questions
Student engagement with
feedback
Student engagement with
feedback (identical question)
Module A
Module B
General conclusions
• Analysis of student responses to interactive
computer-marked questions can give information
about student misunderstandings and student
engagement with assessment;
• Generally, students do what they believe their
teachers expect them to do;
• Engagement with computer-marked assessment
can act as a proxy for more general engagement
with a module (and so act as an early warning if
engagement is not as deep as we might wish).
The future?
• Redecker, Punie and Ferrari (2012, p. 302)
suggest that we should “transcend the testing
paradigm”; data collected from student interaction
in an online environment offers the possibility of
assessing students on their actual interactions,
rather than adding assessment separately.
References
Clow, D. (2013). An overview of learning analytics. Teaching in
Higher Education, 18(6), 683-695.
Ellis, C. (2013). Broadening the scope and increasing the
usefulness of learning analytics: The case for assessment
analytics. British Journal of Educational Technology, 44(4), 662-664.
Nicol, D. & Macfarlane-Dick, D. (2006). Formative assessment
and self-regulated learning: a model and seven principles of good
feedback practice. Studies in Higher Education, 31(2), 199-218.
Redecker, C., Punie, Y., & Ferrari, A. (2012). eAssessment for
21st Century Learning and Skills. In A. Ravenscroft, S.
Lindstaedt, C.D. Kloos & D. Hernandez-Leo (Eds.), 21st Century
Learning for 21st Century Skills (pp. 292-305). Berlin: Springer.
For more about what I’ve discussed
Jordan, S. (2011). Using interactive computer-based
assessment to support beginning distance learners of science,
Open Learning, 26(2), 147-164.
Jordan, S. (2012). Student engagement with assessment and
feedback: Some lessons from short-answer free-text e-assessment questions. Computers & Education, 58(2), 818-834.
Jordan, S. (2013). Using e-assessment to learn about learning.
In Proceedings of the 2013 International Computer Assisted
Assessment (CAA) Conference, Southampton, 9th-10th July
2013. Retrieved from http://caaconference.co.uk/proceedings/
Jordan, S. (2014). Adult science learners’ mathematical
mistakes: an analysis of student responses to computer-marked questions. European Journal of Science and
Mathematics Education, 2(2), 63-87.
Sally Jordan
Senior Lecturer and Staff Tutor
Deputy Associate Dean, Assessment
Faculty of Science
The Open University
sally.jordan@open.ac.uk
blog: http://www.open.ac.uk/blogs/SallyJordan/