Amruth KUMAR, Adrian MARIES
Ramapo College of New Jersey,
505 Ramapo Valley Road, Mahwah, NJ 07430, USA
{amruth, amaries}@ramapo.edu
Abstract.
We conducted a within-subjects study of the effect on student learning of presenting an open student model in the form of a taxonomic concept map. We found a significant improvement in learning on two questions related to the topic of the tutor, and no pre-post improvement on questions unrelated to the topic of the tutor.
These preliminary results suggest that students may learn concepts indirectly by viewing the open student model presented as a taxonomic concept map after each problem.
Keywords: Concept maps, Open student model, Intelligent tutors, Programming
1 Introduction
Making the student model available to students can make them aware of their own knowledge, or the lack of it, and, in turn, improve their learning [3,4]. One survey shows that students want access to their learner models [1]. When the student model is presented in tabular format, however, it is often difficult to understand [6].
Therefore, visualization of data is a critical part of the open student model [7]. Concept maps, with their ability to reveal the relationships among concepts, have been proposed as a mechanism to present the learner model [1,2,5,8].
Can students learn concepts by simply viewing the open student model presented as a concept map after each problem, especially concepts that are not explicitly mentioned in the problems on which they are tutored? In fall 2006, we conducted a within-subjects evaluation to answer this question. We used a tutor on arithmetic expressions for our evaluation. It presents problems on evaluating arithmetic expressions (e.g., 5 + 4 % 8), and provides demand feedback.
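For illustration (a minimal sketch we provide here; it is not part of the tutor itself), the kind of expression the tutor poses can be evaluated directly in C++. Because % binds more tightly than +, the example expression groups as 5 + (4 % 8):

    #include <iostream>

    int main() {
        // % (remainder) has higher precedence than +, so the
        // expression groups as 5 + (4 % 8); 4 % 8 == 4.
        std::cout << 5 + 4 % 8 << std::endl;  // prints 9
        return 0;
    }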
2 Evaluation
We used the traditional pre-test-practice-post-test protocol to evaluate the hypothesis.
First, the subjects answered a pre-test consisting of 9 questions, 6 related and 3 unrelated to arithmetic expressions. Next, they worked with the tutor for 15 minutes solving expression evaluation problems and reading the feedback. After solving each problem, the subjects were shown their open student model as a taxonomic concept map (domain concepts are nodes, links are is-a and part-of relations, and the map is an and-or tree), with their percentage completion of each concept graphically displayed in
the corresponding node. Finally, the subjects answered a post-test consisting of the same 9 questions as on the pre-test.
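The tutor's internals are not described here; as a hypothetical sketch only, a taxonomic concept map of this kind could be represented in C++ as an and-or tree whose nodes carry a completion percentage:

    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical sketch; the tutor's actual data structures may differ.
    enum class LinkType { IsA, PartOf };   // taxonomic relations on links
    enum class NodeType { And, Or };       // and-or tree: And = all children
                                           // required, Or = alternatives

    struct ConceptNode {
        std::string name;            // domain concept, e.g., "% operator"
        NodeType type;               // how this node's children combine
        double percentComplete;      // mastery shown graphically in the node
        std::vector<std::pair<LinkType, ConceptNode*>> children;
    };

A root concept such as "arithmetic operators" whose children are the individual operators would, by itself, reveal the answers to overview questions such as question 1 below.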
The subjects were students in a Psychology course who used the tutor over the web, on their own time, as part of the course requirements. The questions on the pre-test and post-test were:
1. How many arithmetic operators are available in the C++ programming language? 1/2/3/4/5/6
2. Pick ALL the operators among the following that are C++ arithmetic operators (check all that apply): <, *, !, /, %, ^
3. How many of the following numbers are prime: 2, 3, 4, 5, 6, 7, 8, 9, 10?
4. Which of the following C++ operators have 'integer' and 'real' types (check all that apply)? <=, >, &&, +, /, ^
5. For which of the following C++ operators is 'Dividing by Zero' an issue (check all that apply)? >=, ||, %, ^, !
6. What is the sum of the internal angles of a triangle? 90/180/270/360/450/600
7. To how many C++ arithmetic operators does the issue of 'Precedence' apply? none/only one/all/only two/only three
8. Pick all the issues that apply to all the C++ arithmetic operators (check all that apply): Coercion, Correct evaluation, Error, Associativity, Real
9. Which of the following are types of operators in C++ (check all that apply)? Relational, Abstraction, Repetition, Logical, Selection, Assignment
Questions 3, 6, and 9 are not related to arithmetic expressions and were meant to serve as control questions for each subject. Answers to questions 1, 2, 4, 5, 7, and 8 cannot be synthesized without an overview of the domain of arithmetic operators, such as that provided by a concept map, since these questions are not about any particular operator, but rather about groups of operators. During the problem-solving session, the tutor never explicitly provided the answers to any of the questions. However, the answers to questions 1, 2, 4, 5, 7, and 8 were evident from examining the open student model presented as a taxonomic concept map after each problem.
3 Analysis
Twenty-three students participated in the evaluation. We used negative grading to penalize guessing: if a question had n answer options, m of which were correct, the student received 1/m points for each correct answer selected and lost 1/(n - m) points for each incorrect answer selected. For each subject, we combined the scores on all the related questions (Questions 1, 2, 4, 5, 7, and 8) and the scores on all the unrelated questions (Questions 3, 6, and 9).
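To make the grading rule concrete, here is a sketch of it in C++ (our illustration; the scoring code actually used in the study is not shown):

    #include <vector>

    // Negative grading: with n options, m of them correct, each correct
    // pick earns 1/m points and each incorrect pick loses 1/(n - m).
    double scoreQuestion(const std::vector<bool>& selected,
                         const std::vector<bool>& correct) {
        const int n = static_cast<int>(correct.size());
        int m = 0;
        for (bool c : correct) if (c) ++m;    // number of correct options
        double score = 0.0;
        for (int i = 0; i < n; ++i) {
            if (!selected[i]) continue;       // unselected options score 0
            if (correct[i]) score += 1.0 / m;         // correct pick
            else            score -= 1.0 / (n - m);   // incorrect pick
        }
        return score;  // 1.0 = exactly the m correct options selected
    }

On question 2, for example, with n = 6 options and m = 3 correct answers (*, /, %), a student who selected only * and / would score 2/3 of a point, while one who also selected ^ would fall back to 1/3.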
We conducted a 2 × 2 within-subjects ANOVA on the collected data, with pre-post and related-unrelated as the two within-subjects factors. We found a significant main effect of question type (related versus unrelated) [F(1,22) = 6.725, p = 0.017]: students scored 1.833 points on related questions versus 1.238 points on unrelated questions. We also found a significant main effect of time (pre versus post) [F(1,22) = 7.659, p = 0.011]: students scored 1.246 points on the pre-test versus 1.825 points on the post-test. Finally, we found a significant interaction between question type and time [F(1,22) = 16.754, p < 0.001]: whereas scores on related questions improved from 1.175 points on the pre-test to 2.491 points on the post-test, scores on unrelated questions changed from 1.318 points to 1.159 points.
We did a post-hoc analysis by question. Only the pre-post changes on questions 2, 5, and 6 were statistically significant at the p < 0.05 level. On question 6, which is unrelated to the topic of the tutor, the average score decreased from 0.782 to 0.652. Among the related questions, on question 2, the average score improved from 0.202 to 0.694; the correct answers to this question were *, /, and %. On question 5, the average score improved from 0.043 to 0.565; the correct answer to this question was %, an operator unique to programming languages.
For both questions, it is reasonable to argue that students selected only the operators on which they had solved problems. However, not all the students solved problems on all the arithmetic operators during practice, and most of the students did not solve any problem on the dividing-by-zero error with the % operator. Moreover, students had no basis for ruling out the distracters in either question except by inspecting the concept map. By using negative grading, we discounted random guessing. Since the subjects were unaware that we were using negative grading, it could not have prompted them to be conservative in their answers and select only the operators on which they had solved problems. Finally, since the questions were of an overview nature, deducing answers to them from individual problems required more synthesis and recall than could reasonably be expected of Psychology students using the tutor in a not-for-grade exercise.
Our preliminary conclusion is that students may learn some concepts that are not explicitly covered by a tutor simply by examining the open student model presented as a taxonomic concept map of the domain. A confound of our evaluation is that subjects could have earned partial credit on the test questions by selecting only the operators on which they had solved problems. Therefore, further evaluations are needed to confirm or refute this conclusion. If confirmed, it would be another reason in favor of presenting the open student model as a taxonomic concept map of the domain.
Acknowledgements: Partial support for this work was provided by the National Science Foundation under grants CNS-0426021 and DUE-0631738. This work was also supported in part by a grant from the Ramapo Foundation.
References
[1] Bull, S., 2004. Supporting learning with open learner models. Proceedings of the Fourth Hellenic Conference on Information and Communication Technologies in Education, Athens, Greece, pp. 47-61.
[2] Bull, S., Mangat, M., Mabbott, A., Abu Issa, A.S., Marsh, J., 2005. Reactions to inspectable learner models: seven year olds to university students. Proceedings of the Workshop on Learner Modeling for Reflection, International Conference on Artificial Intelligence in Education, pp. 1-10.
[3] Dimitrova, V., Brna, P., Self, J.A., 2002. The design and implementation of a graphical communication medium for interactive open learner modelling. Proceedings of the 6th International Conference on Intelligent Tutoring Systems, LNCS vol. 2363, Springer-Verlag, London, pp. 432-441.
[4] Kay, J., 1997. Learner know thyself: student models to give learner control and responsibility. Proceedings of the International Conference on Computers in Education, Association for the Advancement of Computing in Education (AACE), pp. 17-24.
[5] Mabbott, A., Bull, S., 2004. Alternative views on knowledge: presentation of open learner models. Proceedings of the Seventh International Conference on Intelligent Tutoring Systems, Springer, Berlin/Heidelberg, pp. 689-698.
[6] Mazza, R., Dimitrova, V., 2004. Visualizing student tracking data to support instructors in web-based distance education. Proceedings of the 13th International World Wide Web Conference (Alternate Track), ACM Press, New York, NY, pp. 154-161.
[7] Mazza, R., Milani, C., 2005. Exploring usage analysis in learning systems: gaining insights from visualisations. Workshop on Usage Analysis in Learning Systems, 12th International Conference on Artificial Intelligence in Education (AIED 2005), Amsterdam, The Netherlands, pp. 65-72.
[8] Zapata-Rivera, J.D., Greer, J.E., 2004. Interacting with inspectable Bayesian student models. International Journal of Artificial Intelligence in Education, 14(2), pp. 127-163.