
Introductory Computer Programming,
Problem Solving and
Computer Assisted Assessment
Charlie Daly, DCU
John Waldron, TCD
Preview: A Problem
A Programmable Robot
A Problem
• Solving a maze
• Program a robot to solve a maze
• Analyse a program that is supposed to solve a maze
But ...
• Anybody can check if the program works.
– Create a maze and check if the robot can solve it.
• And provide feedback if it doesn’t work
– Show what the robot did.
So ...
• Certain problems are very challenging ...
• ... but they can be checked very simply
• ... and easy to provide useful feedback to faulty programs (a minimal checker is sketched below)
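
A minimal sketch of the checking idea in Python. Everything here is an assumption made for illustration (the grid encoding, the move vocabulary, the run_robot helper); it is not RoboProf's actual interface.

MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def run_robot(maze, start, goal, moves):
    """Replay a robot's move sequence on a grid maze; return (solved, path)
    so a failing attempt can be shown to the student step by step."""
    rows, cols = len(maze), len(maze[0])
    r, c = start
    path = [(r, c)]
    for m in moves:
        dr, dc = MOVES[m]
        nr, nc = r + dr, c + dc
        # Ignore moves that would walk into a wall or off the grid.
        if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != "#":
            r, c = nr, nc
        path.append((r, c))
        if (r, c) == goal:
            return True, path
    return False, path

# Example: '#' is a wall; the robot starts top-left and must reach bottom-right.
maze = ["..#",
        ".##",
        "..."]
solved, path = run_robot(maze, (0, 0), (2, 2), "SSEE")
print("solved" if solved else "failed", path)

Checking a submission is just replaying it; the hard part, writing a program that solves every maze, stays with the student.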
• End of Preview
Mathematical proofs are similar: easy to check, difficult to conceive.
Talk Outline
• Programming Courses are not working
• Programming ability is difficult to assess
• The Solution: Proper Assessment
• Implementation issues (software and peopleware)
• Results
• Conclusions
Programming Courses are not working
• Students in Introductory Programming Courses do not learn to program!
An international multi-institutional study of introductory programming courses (ITiCSE 2001):
"Do students in introductory computing courses know how to program at the expected skill level?"
No!
What is Wrong?
Assessment
"The spirit and style of student assessment
defines the de facto curriculum"
“Assessing Students”, Derek Rowntree '77
What is Wrong?
• It is difficult to assess programming ability in a traditional written exam.
• Programming exercises are subject to plagiarism; a serious problem in introductory programming courses.
• If you do not assess something, the students will not learn it.
What's wrong with Exams
• Two sides of programming
– language syntax (easy to examine)
– problem solving (hard to examine)
... but only one exam
• Unoriginal (repeat) questions
• Marks for attempting a question
• Assuming insight where none exists
• Objectivity
(The lecturer doesn't want to fail the whole class.)
What's wrong with Assignments
• Most students don't see a problem with using somebody else's code ("if I understand it, it's OK")
• Plagiarism is a huge problem.
• Lecturers frequently know it's happening and do nothing.
(Students don't understand that there is a difference between writing code and understanding it.)
The Bright Shining Lie
• Lecturers think the students know how to program
– they passed the exams
• Students think they know how to program
– they passed the exams
• Students have an excuse: for students, education is about passing exams
Talk Outline
• Programming Courses are not working
• Programming ability is difficult to assess
• The Solution: Proper Assessment
• Implementation issues (software and peopleware)
• Results
• Conclusions
The Solution: Proper assessment
• Come up with original, challenging problems.
• Mark properly; only give marks if the solution is completely correct.
• Allow the student to get computer feedback (compiler errors, testing): programming is a process.
• Make students aware of the assessment!
Context: the course
• Introductory Programming Course
– one semester: 12 weeks of lectures
• Students
– 400 (300 pure Computing)
– no prior programming experience
– education ≡ passing exams
• Exams
– programming exams in weeks 6 and 12 (weighting 30%)
– written exam in week 16 (weighting 40%)
RoboProf
• Automated program marker
• WWW interface
• Runs a student's program with different inputs and checks that the program produces the correct output (a minimal sketch follows below).
• The student is shown the result and, if not correct, may modify and resubmit their program.
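
A minimal sketch of that run-and-compare loop in Python. Everything here is assumed for illustration (the command line, the (input, expected-output) test pairs, the sum.py placeholder submission); the talk does not describe RoboProf's actual implementation.

import subprocess

# Hypothetical test cases: (stdin text, expected stdout) pairs.
TESTS = [("3 4\n", "7\n"), ("10 -2\n", "8\n")]

def grade(command, tests, timeout=5):
    """Run the submission on each input; on failure, show the input and
    both outputs so the student gets feedback without seeing a solution."""
    passed = 0
    for stdin_text, expected in tests:
        try:
            result = subprocess.run(command, input=stdin_text,
                                    capture_output=True, text=True,
                                    timeout=timeout)
            actual = result.stdout
        except subprocess.TimeoutExpired:
            actual = "<timed out>"
        if actual == expected:
            passed += 1
        else:
            print(f"FAIL on input {stdin_text!r}: "
                  f"expected {expected!r}, got {actual!r}")
    print(f"{passed}/{len(tests)} tests passed")

# "sum.py" is a placeholder for a student submission that adds two numbers.
grade(["python3", "sum.py"], TESTS)

Resubmission then amounts to running the checker again on the updated program.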
The Programming Exam
• 3 questions (of increasing difficulty)
• 2 hours (first half hour without computer)
• Not open book, but students may use a 'cheatsheet'
• During the exam the students
– write a program
– submit it to RoboProf
– view feedback
– may resubmit without penalty
The Programming Exam
• Students know their result at the end of the exam.
– Fewer complaints
– Greater insight into their ability
– More likely to listen when subsequently shown a correct solution
• General effect: formative assessment works.
RoboProf vs Manual Exam
• Standard CAA advantages
– Objective
– Fast feedback
• Avoids the problem of the manual marker interpreting the student solution
• Models the standard program development process (computer feedback)
• Huge resource requirements (~400 PCs)
[Figure: two scatter plots of Written Exam score (0–100) against Programming Exam score (0–100).]
RoboProf vs Manual Exam
• Results: not much difference.
• Markers only awarded marks for programs that clearly looked like they would work.
• Students knew that the marking system was similar to the programming exams.
Student Impressions
"I liked the web-based test"

Problems solved   Count   Mean
0                 17      2.563
1                 43      2.302
2                 81      1.863
3                 45      1.578
Student Impressions
"The marking system was fair"

Problems solved   Count   Mean
0                 17      3.882
1                 43      3.548
2                 81      3.138
3                 45      2.773
Comparison with previous years
• There were too many simultaneous changes to the course to draw firm conclusions (programming language, number of students, lecturers, facilities).
• Results of the last three years:
– 1999: 30%
– 2000: 20%
– 2001: ?
Conclusions
• CAA can evaluate problem-solving ability
– using appropriate problems
• Programming problems have added advantages
– Models the program development process
– Can show errors without showing the solution (see the sketch below)
– Avoids the inherent problems that humans have when assessing programming problems
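
To illustrate "errors without the solution" with the maze preview: a hypothetical feedback renderer that overlays the robot's actual path on the maze. Everything here (the grid encoding, render_path, the example path) is invented for illustration, not taken from RoboProf.

def render_path(maze, path, goal):
    """Mark the cells the robot visited so a student can see where their
    program wandered, without being handed a working solution."""
    grid = [list(row) for row in maze]
    for r, c in path:
        grid[r][c] = "*"
    gr, gc = goal
    if grid[gr][gc] != "*":
        grid[gr][gc] = "G"  # goal that was never reached
    return "\n".join("".join(row) for row in grid)

# A robot that only moved one step south before its program ended:
print(render_path(["..#", ".##", "..."], [(0, 0), (1, 0)], (2, 2)))
# *.#
# *##
# ..G

The student sees exactly what their robot did and where the goal was, but nothing about how a correct program would get there.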
Future Work
• Improve feedback
• Make it adaptable (so that it can be used by other institutions)