Online Assessments in the Eyes of Students Taking an Assessment Course
Yuankun Yao
University of Central Missouri
USA
yyao@ucmo.edu
Abstract
This study investigated whether the assessments designed for an online class were
effective in the eyes of the students. Students enrolled in an online graduate level
assessment course were asked to provide their views on the weekly discussions, weekly
quizzes, assignments, and projects used for the course. The research questions for the
study were: 1) Did the assessments serve distinctive purposes? 2) Were the students
satisfied with the overall design of the assessments? The students liked the assessments
used for the online course and believed that the various assessment activities
complemented each other. Implications of the study are discussed.
Introduction
The value of online courses has been increasingly affirmed in recent research studies.
Kirtman (2009), for instance, compared student learning outcomes from a face-to-face class
with those of an online class. The study found that the learning outcomes based on paper
grades were similar between the online and face-to-face classes. In addition, the students
were very satisfied with the online class. Warren and Holloman (2005) found that student
learning outcomes in an online section of a course were no different from those in the
face-to-face section, whether they were based on portfolio scores from external evaluators,
self-assessment scores by the students themselves, or final grades. In addition, no
significant difference was found in student satisfaction with the course across the two
sections.
Critics of online courses, though, may argue that the learning outcomes of online courses
are overestimated, as students may be able to consult references and textbooks when
answering questions on an online test. This may explain the discrepancy in assessment
results in a study by Prince, Fulton, and Garsombke (2009), which compared student test
scores between proctored exams and non-proctored online exams. The study found that the
scores from the proctored exams were significantly lower. The results seem to suggest that
the responses students provide on a non-proctored online exam do not accurately reflect
their true knowledge or understanding.
While it is true that the use of references and other materials during an online exam may
misrepresent student learning, such use may not be totally inappropriate. Khare and
Lam (2008), for instance, suggested that open-book exams may be justified from the
constructivist and situated learning perspectives when learners are asked to apply their
learning to ill-defined, real-life contexts, a task particularly suitable for learning at
the higher levels. The constructivist rationale was also utilized by Wu, Bieber, and
Hiltz (2008) when they investigated the usefulness of a participatory examination approach
in engaging students and enhancing student learning. In that study, students were guided by
their instructor and made responsible for working with their peers in designing exam
questions, answering them, and grading the answers. The students reported that they enjoyed
the process and preferred this type of assessment to traditional exams.
Online assessments also have the capacity to provide students with instant feedback (Khare &
Lam, 2008). The immediacy of feedback for students is an important characteristic of formative
assessments, a tool valuable to students during the learning process. Other researchers also
suggested the need to utilize technology to enhance assessment strategies for online courses (e.g.,
Buzzetto-More & Alade, 2006).
Gaytan and McEwen (2007) investigated characteristics of online assessments perceived as
effective by students and faculty. Both the faculty and the students indicated a preference for a
wide variety of clearly explained assessments, and the need for continual, immediate, and
detailed feedback. In addition, both the faculty and students listed self-assessment as an effective
assessment strategy.
This study was designed to determine whether the assessments designed for an online class
were effective in the eyes of the students. The research questions for the study were:
1) Did the assessments serve distinctive purposes? 2) Were the students satisfied with the
overall design of the assessments used for the online course?
Methods of Study
This study took place in two online sections of a graduate level assessment class offered
to candidates for a Master of Arts in Teaching degree at a Midwestern university. The
researcher taught both sections of the class.
One of the activities used for the online course was a weekly discussion. Each week, the
students were required to make at least two responses, either directly to the prompt or to
another student's post. They were divided into groups of five or six people for the
discussions and were instructed to respond to postings made by their group members,
although they also had access to postings in the other groups. The instructor monitored
student postings in all groups and made brief comments or responses in most cases. The
discussion topic was often in the form of an open-ended question. While the discussion was
primarily designed as a learning opportunity for students to discuss and make further
inquiries, it was also meant as an assessment in the sense that both the instructor and the
students were able to monitor whether the students had the right knowledge or
understanding.
A second weekly activity for students enrolled in the online course was a quiz. Each quiz
contained five multiple-choice questions that mostly assessed student understanding of the
most important topics covered during the week. Student responses were automatically scored,
and instant feedback was provided if an answer was incorrect. Each student had two attempts
at the weekly quiz.
In addition to the discussions and quizzes that were given on a weekly basis, there were also four
assignments that students were required to complete. The assignments focused on topics that
were covered in one or more lessons. Unlike the weekly quizzes that mainly dealt with factual
knowledge and conceptual understanding, the assignments were meant to assess student learning
at the application and other higher levels.
Finally, two projects were used as summative assessments of student learning. The first project
was for students to design a unit assessment and provide a critique of the assessment in terms of
its measurement qualities. The project was given around the middle of the semester, by
which time students had learned about measurement issues such as validity and test bias,
and how to construct test items. The second project was for the students to conduct a
review of a research study on an assessment practice that applied to their own area of
teaching and provide a critique of the study.
In the semesters prior to the time of the study, the instructor used a midterm exam and final exam
for the course. The exams were replaced by the weekly quizzes in the current semester.
The data were collected through a survey administered to the students via Blackboard.
The survey contained 10 Likert-scale questions and three open-ended questions. The study was
approved by the institutional Human Subject Review Committee. Around the middle of the Fall
2011 semester, students from the two sections of the assessment class were asked to take the
survey. There were twenty students in one section and seventeen students in the other section.
The students were told that participation in the survey was optional and that their
decision whether or not to participate would not affect their grades in the course or harm
them in other ways.
Results of Study
Student responses to the survey were downloaded from the survey website three weeks after the
day the survey was made available to the students. Out of 37 students from the two sections of
the graduate assessment course, 26 students responded to the survey, including one student who
skipped some of the survey questions. The response rate was 70.3%.
An exploratory factor analysis was run to see how the survey items loaded on common
factors. Since some of the survey items were negatively worded, those items were recoded
before conducting the factor analysis. Principal component analysis was used to extract the
components. Three components were extracted from the data, with the first component
accounting for 47.5% of the total variance, the second accounting for 13.2%, and the third
accounting for 10.1%. The rotation method used was Varimax with Kaiser normalization. As a
result of the rotation, the first component accounted for 31.1% of the total variance, the
second component 21.4%, and the third 18.4%. The first component comprised five of the 10
survey items. It covers the general purpose of the various assessment activities for the
course, including the formative and summative purposes of the different assessments, the
need for various assessment activities, and the number of assessments. The second
component, consisting of three survey items, differentiates the features of the weekly quiz
and the assignments. The third component, consisting of two survey items, asked students
whether the four assignments were redundant and whether they preferred having two exams
rather than two projects.
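The extraction and rotation procedure described above can be illustrated in code. The
snippet below is a sketch only: it uses randomly generated Likert-style responses rather
than the actual survey data, and a plain varimax rotation without Kaiser normalization, so
its numbers will not reproduce those reported in this study.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor loading matrix (no Kaiser normalization)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # SVD step of the standard varimax algorithm
        u, s, vt = np.linalg.svd(
            loadings.T
            @ (rotated**3 - (gamma / p) * rotated @ np.diag((rotated**2).sum(axis=0)))
        )
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion - criterion < tol:
            break
        criterion = new_criterion
    return loadings @ rotation

# Hypothetical data: 26 respondents x 10 Likert items scored 1-5
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(26, 10)).astype(float)

# Principal components from the inter-item correlation matrix
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort components by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain three components; loadings are eigenvectors scaled by sqrt(eigenvalue)
loadings = eigvecs[:, :3] * np.sqrt(eigvals[:3])
rotated = varimax(loadings)

# Percent of total variance explained, before and after rotation
pct_before = (loadings**2).sum(axis=0) / eigvals.sum() * 100
pct_after = (rotated**2).sum(axis=0) / eigvals.sum() * 100
```

Because varimax is an orthogonal rotation, it redistributes variance among the retained
components without changing their combined total, which is why the three rotated components
in the study still account for roughly the same overall share of variance (about 70.8%)
as before rotation.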
Summary statistics for student responses to the 10 survey items are provided in Table 1.
As can be seen, student perceptions of the assessments used for the course were very
positive. On a 1-5 scale, students generally agreed that the discussions served formative
purposes (Mean=4.16, SD=0.90), that the quizzes focused on student understanding of basic
concepts (Mean=4.48, SD=0.65), that the assignments emphasized applications (Mean=4.52,
SD=0.65), and that the projects served summative purposes (Mean=4.08, SD=0.76). They also
agreed that the course assessments were well designed (Mean=4.27, SD=0.72). The students
felt the discussions and quizzes were necessary and that the assignments were not
redundant. They preferred the projects to comprehensive exams, and they did not feel there
were too many assessments for the class.
Conclusions and Discussions
This study examined the purposes of various assessment activities used in an online
graduate level assessment course. The students felt that each assessment activity served its
distinctive purpose. The assessment activities were found to complement each other. No activity
was considered unnecessary or redundant. The students also felt that both formative and
summative assessments were used in the course.
The positive responses of the students regarding their perceptions of the assessments used for the
online course suggest that the use of multiple types of assessments for online classes is important.
In an online environment, the instructor is not physically present to watch for visual cues
as to how the students are progressing in their learning. The use of regular formative
assessments such as the weekly discussions and weekly quizzes can compensate for this
potential shortcoming.
Since the weekly quizzes were newly implemented, the instructor/researcher was not sure how
students would react to the frequent quizzing. To the surprise of the researcher, the students did
not feel there were too many quizzes. Some respondents even suggested in their answers to
the open-ended questions that they hoped more questions would be used in the quizzes.
Several students also appreciated the fact that they got a second chance to answer the quiz
questions. The opportunity for students to revise their answers does not seem to be
normally available with exams given in face-to-face classes, yet such opportunities are
beneficial for the learning process when the purpose of the assessment is primarily
formative. The quizzes had in a sense become a tool for the students to self-assess and
improve their learning.
In previous semesters, the instructor had heard student complaints when the final exam was
replaced by a final project in the course. Those students felt that without a final exam
they would lack the motivation to study for the course. In the current semester, when the
study was
carried out, no complaints were heard regarding the absence of the exams. One possible reason
was that the need for the exams was partly fulfilled by the use of the weekly quizzes, since the
majority of the questions on the final exams were multiple choice items and assessed conceptual
understanding. The weekly quizzes would give students sufficient motivation to study for the
course.
Admittedly, a weekly quiz is not a perfect substitute for a comprehensive exam; the former
is formative, whereas the latter is summative. Nevertheless, the assignments and
the final projects, if designed carefully, can fulfill the summative functions of the final exam,
while also avoiding the issues with non-proctored online exams (Prince, Fulton, & Garsombke,
2009).
References
Buzzetto-More, N. A., & Alade, A. J. (2006). Best practices in e-assessment. Journal of
Information Technology Education, 5, 251-269.
Gaytan, J., & McEwen, B. C. (2007). Effective online instructional and assessment strategies.
The American Journal of Distance Education, 21(3), 117-132.
Khare, A., & Lam, H. (2008). Assessing student achievement and progress with online
examinations: Some pedagogical and technical issues. International Journal on E-Learning,
7(3), 383-402.
Kirtman, L. (2009). Online versus in-class courses: An examination of differences in
learning outcomes. Issues in Teacher Education, 18(2), 103-116.
Prince, D. J., Fulton, R. A., & Garsombke, T. W. (2009). Comparisons of proctored versus non-
proctored testing strategies in graduate distance education curriculum. Journal of College
Teaching and Learning, 6(7), 51-62.
Warren, L. L., & Holloman, H. L. (2005). On-line instruction: Are the outcomes the same?
Journal of Instructional Psychology, 32(2), 148-151.
Wu, D., Bieber, M., & Hiltz, S. R. (2008). Engaging students with constructivist participatory
examinations in asynchronous learning networks. Journal of Information Systems
Education, 19(3), 321-330.
Table 1.
Student perceptions of assessments used in an online educational assessment course
_____________________________________________________________________________
Item                                                            N   Mean    SD
_____________________________________________________________________________
The weekly discussion serves as a formative assessment
  for the course.                                              25   4.16  0.90
Participating in the weekly discussion is a waste of time
  for me.                                                      25   2.04  1.06
The weekly quiz is designed to assess my understanding of
  basic concepts in each lesson.                               25   4.48  0.65
The weekly quiz is unnecessary work for the students.          25   1.60  0.82
The four assignments focus on the application of important
  concepts and skills in this class.                           25   4.52  0.65
The assignments just repeat work already included in the
  quizzes and projects.                                        25   2.00  0.82
The projects in this class served summative assessment
  purposes.                                                    25   4.08  0.76
I wish that we had two comprehensive exams instead of two
  projects for this course.                                    25   1.92  1.00
The combination of the weekly discussions, quizzes,
  assignments, and projects represents a good assessment
  design for this course.                                      26   4.27  0.72
I feel there are too many assessments in this class.           26   1.85  0.68
_____________________________________________________________________________