Teaching Programming and Debugging through Antagonistic
Programming Activities
A. Gaspar* & S. Langevin
University of South Florida at Lakeland, FL, USA
This paper discusses the design and preliminary evaluation of two constructivist pedagogical strategies for
early programming courses. These activities are meant to help students acquire the necessary skills to read,
write, debug and evaluate code for correctness. They have been tested in two undergraduate programming
courses at the authors’ institution: COP 2510 Programming Concepts, meant for students with no preliminary
programming experience and using Java, and COP 3515 Program Design, meant as an intermediate programming
course using the C language to strengthen students’ existing skills and prepare them for upper-level system-oriented courses (e.g. operating systems). In both courses, what we termed Antagonistic Programming
Activities (APA) have been introduced. We discuss how they impacted students’ learning by (1) refocusing
the learning and teaching dynamics on the programming thought process itself rather than its outcome (a
complete, finished solution) and (2) enabling competitive learning to be leveraged in order to introduce testing
techniques without distracting students from the course’s learning objectives.
Keywords: programming, debugging
_____________________
*Corresponding author. USF Lakeland, Information Technology Department, 3433 Winter Lake Road,
Lakeland, FL, 33803, USA. Email: alessio@lakeland.usf.edu
INTRODUCTION
Problem Statement
We strongly believe that programming involves more than thinking of a design and typing the code to
implement it. While coding, professional programmers are actively on the lookout for syntactical glitches,
logic flaws and the interaction of their code with the rest of the project. Debugging and programming are
therefore not to be seen (and taught) as two distinct skills but rather as two intimately entwined cognitive
processes. From this perspective, teaching programming requires instructors to also teach students how to
read code rigorously and critically, how to reflect on its correctness appropriately and how to identify
errors and fix them. Previous work has underlined that students who struggle in programming
courses often end up “coding without intention” (Gaspar and Langevin, 2007b). They search for solved
exercises (slides, Google, Krugle) whose description is similar to that of the new problem they are
expected to solve. They then cut and paste the solution and iteratively modify it, sometimes almost
randomly, until it compiles and passes the instructor’s test harness. This problem is further exacerbated by
CS-1 textbooks which only require students to modify existing code for an entire semester, thus ignoring
the creative side of programming. Breaking this cognitive pattern requires programming courses to engage
students in activities specifically crafted to develop their critical thinking skills along with their
understanding of code and of its meaning. This can be achieved by improving their capability to
rigorously trace code and identify errors of various natures.
What are Antagonistic Programming Activities (APA)?
APAs are a class of programming assignments aimed at honing the students’ debugging skills. They are
active learning activities which can be used during class meetings or in distance education settings (given
appropriate supporting technologies). These activities also leverage peer learning in a constructivist
manner; very little of the solution is directly passed to the students by instructors or peers. Instead,
students are put in situations where their errors are pointed out and they are left to determine their sources.
These practices are also based on engaging students in activities which are antagonistic in nature (e.g. one
student develops a solution, the other develops a test harness to identify its flaws). While these activities
mostly involve students scrutinizing and analyzing each other’s code, we consider them the first
necessary step toward having students internalize debugging skills to the point where they will similarly
scrutinize their own code as they write it.
We have evaluated APA in two programming courses taught in spring 2007. COP 2510 Programming
Concepts is a first-time programming course for Information Technology, Computer Science and
Computer Engineering majors. It uses the Java language along with the BlueJ environment (Kolling et al,
2003). COP 3515 Program Design is meant as a follow-up to COP 2510 and was taught to IT majors only.
The C language was used for this course to focus on strengthening students’ skills and exposing students to
low-level concepts (program stack, heap, etc.) in preparation for system-oriented senior-level courses (e.g. operating
systems).
Paper Organization and Description of Activities
The two following sections will discuss the design and preliminary evaluation of two APA variants: (1)
Test-driven APA, which leads students to critically analyze each other’s code in search of logical flaws, and
(2) student-led live coding, which teaches them to criticize code as it is being developed by a peer instead
of simply evaluating the end result. We conclude by summarizing our work and detailing future directions
we will be exploring.
Test-Driven Antagonistic Programming Activities
This section identifies the teaching and learning issues which motivated this first APA variant. We then
discuss its implementation from the pedagogical perspective and present early quantitative evaluation
results along with lessons learned from the instructor’s perspective.
Problem at hand
The very idea of APA, leveraging competitive learning dynamics to further engage students, is not new;
consider as examples game environments (e.g. Robocode in Bierre et al., 2006) and programming contests
(e.g. ACM). Unlike pair programming pedagogies (McDowell et al. 2003), such activities improve student
learning and motivation levels through competition rather than collaboration. The similarity of intent
and difference in means between these two pedagogical strategies motivated us to look further into their
potential complementarity.
Our experience with Robocode revealed that competition can often lead students to focus too much on the
final result (i.e. a tank “blasting its way” to victory) rather than the means (i.e. learning to program). In an
artificial intelligence (AI) course, this effect would manifest through the fact that the code winning the
competition would be a better illustration of how one can “play the system” rather than a solid
implementation of one of the AI approaches discussed in the lecture. In a first programming course, it
would be most likely the code quality which would suffer from the imperative necessity to win.
Interestingly enough, Ken Schwaber also described this need to sacrifice code quality when under pressure
in professional settings as being almost a “second nature” for developers (Schwaber, 2006). Obviously, we
might want to steer away from nurturing such a reflex in novice programmers. This raises the question of
how APAs can escape this drawback.
Design of the activities
Test-driven APAs achieve this by switching the focus from the end result (e.g. “my code is performing
better than yours”) back to the programming process itself (e.g. right coding practices, code correctness).
This allows the instructor to introduce the notion of test harness and have students start early to
appropriately test the code they produce. As mentioned in Langr’s text (Langr, 2005), this is not a goal
whose importance should be underestimated.
In our Test-driven APA, we asked students to pair up and, unlike in pair programming activities, start by
both independently developing a solution for the same problem. Once their solutions were coded, we
introduced the idea of test harness by asking them to verify independently how their code functioned with
a diversified set of inputs. After this second step, they exchanged test harnesses and tested their
classmate’s code. Our goal is to lead students to start thinking during the development phase in terms of
“how would my code react to this input?” and “what is a programmer most likely to overlook in this
code?”. The evolution of CS1 programming courses’ contents indicates that programming is now
perceived as an activity that goes beyond the mere writing of code but requires many skills such as
designing, implementing, testing and debugging. The activity we propose in this section helps sharpen
several of these skills without focusing solely on implementation. Trying to make a classmate’s
code fail by developing an appropriate test harness shifts the competitive focus away from the resulting
code’s efficiency and toward its correctness. The resulting learning environment is authentic and
leverages competitive learning dynamics to engage students while avoiding distracting their learning from
the real objectives of the course.
The following exercise illustrates how this type of active learning activity was implemented in both COP
2510 and COP 3515 during spring 2007. The exercise was adapted from one of Nick Parlante’s JavaBat
exercises available at http://javabat.com/:
The squirrels in Palo Alto spend most of the day playing. In particular, they play if the temperature is
between 60 and 90 (inclusive). Unless it is summer, then the upper limit is 100 instead of 90. Using
Raptor, write a flowchart which is going to ask the user to provide a temperature value (between 0
and 130) and a number summer which will be equal to 0 or 1. Depending on the values that were
passed to you by the user, you will determine whether the squirrels are playing or not and display on
the screen an appropriate string.
This first step was followed by individual code testing:
Make sure you extensively test your program. This time you will write down the tests you have been
performing on paper as follows:
A table was provided for students to design and then record their testing experiments. This table had 4
columns labeled “Value for TEMP”, “Value for SUMMER”, “Expected outcome” and “Observed
outcome”. Finally, the next exercise extended the testing to their neighbor’s code. At this point the
objective was clearly to “attack” each other’s code through the test harness in order to “prove” it wrong.
Now that you have developed both a flowchart and a series of test cases to make sure your program
works, exchange seats with your neighbor and run your tests on their program. Try to find tests which
will prove their code wrong, keep track of these results and show them to your classmate once you
think you can’t find any more bugs in their code.
Regardless of the students’ preliminary programming experience, this activity allowed them to play both
the developer and observer roles on each assignment rather than taking turns (unlike pair programming
practices). This might be beneficial when trying to introduce code testing practices at such an early stage
insofar as students don’t have to understand and adapt to a role description. More importantly, this
assignment introduced the testing mindset as a competitive game which is more likely to motivate students
and which we hope will help them develop a solid programming thought process.
Early Evaluations
We used the above as part of a series of early active learning activities using the Raptor flowchart
interpreter (Carlisle et al., 2005). These have been used both in COP 2510 and COP 3515. In the former,
the pace was slower (3+ weeks) since these were meant for absolute neophytes. In the latter, they were
used as a warm-up session (1-2 weeks maximum) to ensure all students were starting at the same level.
An anonymous online survey was used to evaluate how students perceived these Raptor-based sessions. In
particular a Likert scale question was used to evaluate the above-described test-driven exercise: “Indicate
your level of agreement with the following statement: The flowchart-testing exercises were useful”. The
responses were as follows; In COP 2510, 10% (1) of the students strongly disagreed, 10% (1) disagreed,
30% (3) were neutral, 30% (3) agreed and 20% (2) strongly agreed. In COP 3515, responses indicated that
80.0% (4) of students agreed and 20.0% (1) strongly agreed.
These results are an early evaluation attempt. Due to the low enrollments, conclusions can only be viewed
as an illustration of the students’ perception of our pedagogical efforts in the specific settings of this study.
Students with preliminary programming experience seemed to better appreciate these exercises than
absolute beginners. We believe that code testing might have been perceived as distracting by novice
programmers who were trying to acquire the fundamentals. Also, students who have already taken
programming courses are more likely to be open to learning new methods or concepts while a portion of
absolute beginners is simply put off by programming in itself. It is therefore delicate to evaluate whether
their dislike of such activities is due to the activities themselves or their aversion to programming.
From the instructor’s perspective, this type of activity had several pedagogical impacts on teaching:
1. It allowed the early introduction of code testing practices, which can be positive given their impact in
corporate settings.
2. It also helped the instructor to address common misconceptions which novice programmers have about
program correctness (Kolikant, 2005).
3. It allowed for an innovative application of peer learning dynamics. Unlike most peer learning
dynamics, which are instructivist in nature (Willis et al., 1994), this one allowed students to correct
each other without explicitly exchanging fragments of “correct” code. Instead, they corrected each
other through the exchange of test harnesses specifically crafted to fail each other’s code.
This latter point is particularly interesting insofar as it relates to a well-known, yet poorly published and
documented, teaching practice. We have often taught sorting by having students develop their own
algorithms and, later, compare classic sorting algorithms to these generally naïve versions to understand
their pros and cons. In this process, we developed a specific approach to teach students about the flaws of
their first attempts at sorting.
Our objective was to help students engage in discovery learning and generate their own solutions in order
to experience the constraints and difficulties of the problem at hand. Therefore, in order to minimize
instructivism, we decided to guide them by pointing out the flaws of their programs without explicitly
correcting them. This was achieved by asking them to run their programs on data sets built on the fly to
reveal the specific errors they made. In the case of sorting algorithms, it is particularly easy to design an
array which will reveal a bad sorting algorithm’s flaws. Once the students’ algorithms would fail to sort
the array, the instructor’s role was to help them debug their code and come, by themselves, to the
realization of the source of the error(s). It would obviously be faster to simply point out the error and
replace the erroneous code fragment with a correct one. However, this would amount to dictating the
correct solution to the student piece by piece, which is arguably barely better than giving them the whole
solution directly and disregarding their attempts.
We will further develop this principle in the activities described in the next section. For now, it is
important to realize that by developing test harnesses in an antagonistic manner, students reproduce this
same constructivist teaching practice. While the original requires the instructor to have enough time to
devote to each student, and might therefore fail to scale up successfully to larger classes, this “peer
troubleshooting” allows reaping the same pedagogical benefits through a peer learning dynamic which can
scale up to larger classes.
Student-led live coding and Peer Reviewed Programming
This section identifies the teaching and learning issues which motivated student-led live coding activities
and then discusses their implementation from both the pedagogical and technological perspectives. Early
quantitative evaluations are also presented along with lessons learned from the instructor’s perspective.
Problem at hand
The previous section presented a possible implementation of APA. Like almost any programming
instruction method, this activity could be further improved by refocusing the learning dynamics on
programming skills rather than their outcome. Let us explain this last statement by considering how
assignments are handled in most textbooks and courses. Step #1: the problem is presented in plain
English. Step #2: students develop their own solution(s). Step #3: the book (or an instructor) explains the
design and implementation of one or several correct solutions. During such activities, the instructor is
mostly involved in the first and third steps. During in-class exercise sessions, the situation is slightly better
since many instructors tutor each student into developing their own solution(s) during step #2. However,
the reliance is still strong on a skillful presentation of a reference solution as a way to teach programming.
What is wrong with such a widely spread approach?
First, this teaching process can hardly be considered constructivist insofar as it leads students who were
unsuccessful in their attempts to simply discard them and adopt the reference solution. While it still allows
students to engage in some experimentation on their own (often unsupervised), the process relies
essentially on the acceptance of the reference solution as a way of teaching programming. Doing so
prevents the instruction process from identifying and addressing the specific misunderstanding that misled
the students. This means that the cognitive model of the solution developed so far by students is simply
ignored thus preventing instruction to leverage constructivist practices.
Second, this approach instructs students through the repeated presentation of problem-solution pairs. This
exacerbates an increasingly disturbing tendency of students to solve new problems by matching their
description to a similar, previously solved exercise and then modifying its solution until it “fits” the new
specifications. This approach to the programming activity, which can lead to “coding without intention”
(Gaspar and Langevin, 2007b), has become increasingly common over the past decade, undoubtedly
supported by the availability of code examples on the web (e.g. Krugle.com). While analogical thinking
and code reuse are key to professional programmers’ practices, they are abused by novice programmers
who reuse that which they don’t understand. Students end up perceiving problem-solving as a matter of
pattern matching a new problem against a large dictionary of existing solutions. This, in turn, further distances
them from a correct perception of program correctness (Kolikant, 2005) and the capability
to critically analyze and debug code.
Students no longer focus on acquiring the programming thought process but rather memorize solutions
they will re-apply and adapt to new problems. This can be particularly detrimental to students who have
high sequential or precision scores in learner-type evaluation instruments such as the LCI
(http://letmelearn.org/). For these students, the natural need to work with well-specified problems, to know
exactly what the exam will be about, is exacerbated by such instructional methods. As a result, they might
feel comforted to be asked to learn problem-solution pairs almost by heart (high student satisfaction
measures in surveys) and their performance might even improve if the exams are aligned with testing such
memorization of problem-solution pairs (high student performance). However, they fail to be taught the
creative side of programming and problem-solving and therefore the very skills which are necessary to
solve computing problems.
Design of the activities
These arguments motivated the programming education community to develop new strategies to refocus
the learning process on programming skills themselves rather than their outcomes. The next subsection
will introduce one such strategy and detail how we improved it and its relation to debugging instruction.
A rather straightforward solution consists in having instructors engage in “live coding” sessions during
which they develop solutions from scratch in front of their students. This pedagogical strategy is often
discussed in the context of the apprentice model (Kolling et al., 2004) and successfully refocuses the
teaching activity on the programming skills themselves instead of on memorizing finished code solutions.
However, this successful re-aligning of the teaching practice with learning outcomes still fails to leverage
constructivist dynamics; students are still spectators as they are instructed about the correct thought
processes involved in solving a given exercise in very much the same way they were instructed using
complete solutions before. They are still left to bridge their own attempts with the correct process. It is
admittedly easier for them to do so insofar as the thought process is made explicit this time and
therefore offers more opportunities to relate parts of it to the students’ own erroneous attempts. However,
while instruction has been successfully refocused on presenting the right contents (programming thought
process vs. solution only), the manner in which this content is taught could still be further improved.
Henceforth, we will refer to this activity as instructor-led live coding.
Although it has not yet been extensively published and studied, instructors have reported letting students
take the lead and solve programming problems “live” in front of their classmates. We coined the term
student-led live coding in order to clearly differentiate such approaches from the above instructor-led
variant. The manner in which students alternately take the lead in such activities is influenced by the
available classroom technology. For instance, some instructors (Hailperin, 2006) circulate a wireless
keyboard among students to engage them in live coding sessions. We used a similar device for our first
experimentations with this approach in spring 2007. The classroom was equipped with PCs for each
student to work on their programming exercises while the instructor circulated among them to help. At the
beginning of each programming exercise, a student was picked and given a wireless keyboard connected
to the podium PC whose screen would be projected for everyone to see. The student was then working in
parallel on the exercise with his/her classmates, thus allowing the instructor to point other students to
his/her work when appropriate (hesitation at an important step, error committed by others as well, etc.).
This aspect of the activity differentiates it qualitatively from instructor-led live coding in many respects.
First, the errors which students are most likely to commit are easier for their classmates to relate to,
compared to errors “faked” by instructors who expect them from novice programmers. Second, these errors
also change from year to year as new generations of students reach us with ever-evolving cognitive biases. By
focusing the live coding on students, there is no need for instructors to study and find which errors they
should fake when leading a live coding session. Instead students themselves bring forth their
misunderstanding, bias, and common errors to which the instructor can then react. Student-led live coding
also allows for the instructor to point classmates’ attention to the difficulties encountered by other students
while they develop their solution. This allows one to leverage these sessions as a student-focused way to
teach the programming thought process itself. Other peer learning approaches tend to focus on students
exchanging concepts or explanations, or evaluating each other’s solutions with respect to correctness or
value through discussion in class. However, in all of the above scenarios, it is the complete program which
is the focus of attention rather than the programming process itself.
From these perspectives alone, student-led live coding has a strong potential to transform instructor-led
live coding into a genuine active learning experience in which students are more than spectators. There is
another aspect of these activities that has an interesting pedagogical potential as well; once the time
allocated to the exercise is over, the entire class works on evaluating the solution the chosen student came
up with and discusses the various design decisions made during the exercise. This allows the instructor
to help students hone higher-level skills from Bloom’s taxonomy (Bloom, 1956) by learning to rigorously
read and evaluate code. We then go one step further by asking students to fix problems they perceived in
the student-developed solution. Finally, the activity is generally concluded by having other students take the
wireless keyboard and present alternative solutions. This last part can be even more interesting for
students since it allows the instructor to craft exercises which accept a diversified range of solutions. Such
exercises have been argued to be particularly efficient during student-led live coding insofar as they
allow the instructor to pass the keyboard to students with very different solutions and focus the whole
class’s attention on understanding each possible alternative (Gaspar and Langevin, 2007a).
Early Evaluations
We implemented a series of student-led live coding sessions, mostly in the COP 3515 program design
course during spring 2007. These sessions allowed us to evaluate the idea of using a wireless keyboard to
engage students in active learning while also evaluating some ideas of our own concerning the nature of
the exercises best suited for this type of activity. The following lessons were learned from this experience.
Let us start with the students’ perspective; we used Likert-scale questions to get an idea of how they
perceived student-led live coding activities as opposed to the instructor-led ones. The questions follow:
(1) “Seeing the instructor code ‘live’ solutions was more useful than seeing the final solutions directly”.
Students responded to this question at 40% (2) “Agree” and 60% (3) “Strongly Agree”.
(2) “Seeing classmates code ‘live’ solutions was more useful than seeing the final solutions directly”.
Responses were 40% (2) “Agree” and 60% (3) “Strongly Agree”.
(3) “Seeing classmates code ‘live’ solutions was more useful than seeing the instructor do so”. This time,
100% of the responses were “Strongly Agree”.
The first two questions indicate that students perceived both student and instructor-led variants as equally
valuable. Once again, due to the small population, it is difficult to extrapolate this specific observation but
it will be interesting, as more data is collected, to keep an eye on this trend. The third question was meant
to specifically prompt students for a preference between the two. All respondents felt that the student-led
live coding activities were more useful. This is consistent with the fact that this variant is much more
engaging for students due to its active learning nature.
We also used an open-ended question to prompt students for positive or negative feedback on the live
coding approach. Below are the responses which were relevant to the student-led live coding activities.
R01 - “It made us participate in the class and help each other code.”
R02 - “When the students typed we often ran into problems and the instructor could explain
everything as we went it, if we didn’t catch it before hand, we would try to compile, and run
into problems and fix them from there, where when the code is already finished I didn’t feel
like I learned as much, what to look for, common problems, etc.”
R03 - “The problem with live coding is the time spent on each program. If we could limit the amount
time spent on each program and just come up with the solution after a certain time frame. That
way the whole class does not get stuck on one portion of the program and be able to focus on
the solution of the entire program.”
R04 - “This was by far the most helpful to me. It helped me get involved. It was also helpful to see
others make errors, and then for us to find out what was wrong with the code they typed.”
The improvement of students’ engagement has been clearly identified (R01, R02, R04) along with the
improved learning resulting from observing others while they code (R02, R04). The downside expressed
is that it can be frustrating for students to have to “wait” while the others are catching up on the example.
This results from the use of a single wireless keyboard, and the fact that the code review is still guided by
the instructor. This student’s critique also underlines that, for this approach to be scalable to large classes,
the centralization of the activity through a single instructor must be addressed. These issues are the focus
of our current work with Dr. Simon’s Ubiquitous Presenter Workshop group.
From the technological point of view, several lessons were also learned. First, each of our students was
working on a remote Linux server. This came in handy to speed up the switch from one student to another
on the podium PC as they could resume working in their own personal environment in a matter of seconds
without requiring file transfers. Second, alternatives to the wireless keyboard are definitely worth
investigating since students barely agreed to the statement “The wireless keyboard/mouse was useful in
implementing the students' live coding activities” (20% (1) disagreed, 40% (2) agreed and 40%(2)
strongly agreed). At first sight, alternatives such as Classroom Presenter (Anderson et al., 2006),
Ubiquitous Presenter (Wilkerson et al., 2005), or Message Grid (Pargas, 2006) might look promising.
However, they would only be suitable to present students’ finished code rather than expose their
thought process while developing. Tools such as DyKnow and Elluminate might be more accommodating
with respect to this “live” aspect since they allow for application or desktop sharing. Such tools would
also be suitable for distance learning.
Finally, from the pedagogical perspective, it is clear that student-led live coding is worth exploring further
for its active learning and engaging features but also for its focus on the programming process itself rather
than finished code. We also found that assignments with multiple valid solutions had the most potential in
terms of student engagement and classroom discussion. To give an example, we used the following
assignment in COP 3515 using the C language. It took place within the first 4 weeks of the course and was
part of a series of exercises focused on recursive programming:
Implement an iterative and a recursive version of a function which will return how many times its
(strictly positive integer) argument can be divided by two until you get a non-null remainder.
Examples:
F(4) → 2 time(s)
F(5) → 0 time(s)
F(6) → 1 time(s)
This assignment was purposely open to interpretation but, after a short lecture on recursion, a handful of
classic examples and a couple of take-home exercises, our students were ready to innovate (or to realize they
had to spend some more time on the topic). Students came up with a wide range of solutions, including:
- Classic iterative solutions (while loop).
- Classic recursive solutions, in which the result is built as the recursive calls return (this is the solution
which was expected from students due to its similarity with the lecture examples). It is interesting to note
that, while developing this solution, some students suggested making the recursive call as F( no/=2 ); while
not strictly incorrect, this kind of remark is a good starter for a class discussion and explanation of
what superfluous code can mean. The stack diagram was used to show the students how this difference
impacts the local variables. This helped them realize that, although not leading to a bug per se, this
approach is comparable to walking in a direction by taking 3 steps forward and 1 step back.
- Recursive solutions in which the result is constructed while making the recursive calls instead of as
they return. This solution wasn’t originally planned for this lecture, and a student came across it.
Explaining this new possibility in detail (using stack diagrams) helped reinforce the understanding of
the classic recursive solution. This was true even for students who didn’t think of this alternative at
first or who needed a refresher on the workings of the stack.
Finally, two more categories of solutions really surprised us. The lecture on recursion took place right after
explaining variables' scope and duration based on their location in the stack and heap segments. Some
students, armed with this freshly acquired knowledge, immediately saw a way to apply it to the problem at
hand. While discussing incorrect versions of the above functions, one bug caused the count variable to
never be modified, either when passed to the recursive call or when incremented before being returned.
This motivated some students to come up with a fix, which led to solutions using either a global variable or a
static local variable. These solutions allowed for a discussion of the potential problems that could emerge
from exposing a global variable when other programmers try to reuse this code.
Discussion & Future Work
This paper presented two Antagonistic Programming Activities which propose to improve the teaching of
programming skills by focusing students’ learning efforts on the programming process itself and by
leveraging competitive learning dynamics. For each activity, we discussed early quantitative evaluations,
taken from the students' perspective, as well as qualitative evaluations from the instructor's point of view.
While the focus of this paper was to establish and justify our approaches from a pedagogical and instructor
viewpoint, our future work will focus on gathering more data through student surveys so that we can
confirm the trends that this paper sketched out. We will also further develop the peer learning dimension
of this work in cooperation with faculty from other disciplines.
References
Anderson, R., Anderson, R., Chung, O., Davis, K. M., Davis, P., Prince, C., Razmov, V., Simon, B. (2006)
Classroom Presenter – A classroom interaction system for active and collaborative learning, workshop
on the impact of pen technologies on education
Anderson, R. et al., Supporting active learning and example based instruction with classroom technology,
SIGCSE 2007
Bierre, K., Ventura, P., Phelps, A., Egert, C. (2006), Motivating OOP by blowing things up: an exercise in
cooperation and competition in an introductory java programming course, SIGCSE
Biggs, J. (2003), Teaching for Quality Learning at University, Buckingham: Open University Press/McGraw
Hill Educational
Bloom, B. S. (1956), Taxonomy of educational objectives: The classification of educational goals. Handbook
I, cognitive domain, Longmans
Bruner, J., (1960), The Process of Education. Cambridge, Massachusetts: Harvard University Press
Carlisle, M.C., Wilson, T.A., Humphries, J. W., Hadfield, S.N. (2005), Raptor: a visual programming
environment for teaching algorithmic problem solving, Proceedings of the 36th SIGCSE technical
symposium on Computer science education, St Louis, Missouri, USA, 2005, pp. 176-180
Gaspar, A., Langevin, S. (2007a), Active learning in introductory programming courses through student-led
“live coding” and test-driven pair programming, accepted to EISTA 2007, International Conference
on Education and Information Systems, Technologies and Applications, July 12-15, Orlando, FL
Gaspar, A., Langevin, S. (2007b), Restoring “Coding With Intention” in Introductory Programming Courses,
submitted to SIGITE 2007, proceedings of the international conference of the ACM Special Interest
Group in Information Technology Education, July 12-15, Orlando, FL
Hailperin, M. (2006), SIGCSE-members mailing list, posts #34 and #37, July 10th 2006, accessed on
2/22/2007 at http://listserv.acm.org/archives/sigcse-members.html
Kolikant, Y.B.D. (2005), Students' alternative standards for correctness, Proceedings of the international
workshop on computing education research ICER
Kolling, M., Quig, B., Patterson, A., Rosenberg, J. (2003), The BlueJ system and its pedagogy, Journal of
Computer Science Education, special issue on learning and teaching object technology, Vol. 13, No. 4,
12/2003
Kolling, M., Barnes, D.J. (2004), Enhancing apprentice-based learning of Java, 35th SIGCSE technical
symposium on computer science education, pp. 286-290
Koza, John R. (1992), Genetic programming: on the programming of computers by means of natural
selection, MIT Press
Langr, J. (2005), Agile Java: Crafting code with Test Driven Development, Pearson
Mayer, H.A. (1998), Symbiotic coevolution of artificial neural networks and training data sets, Lecture Notes
in Computer Science, Springer, Vol. 1498
McDowell, C., Hanks, B., Werner, L. (2003), Experimenting with pair programming in the classroom, ACM
SIGCSE Bulletin Proceedings of the 8th annual conference on Innovation and technology in computer
science education ITiCSE '03, Volume 35 Issue 3
Pargas, R.P. (2006) Reducing lecture and increasing student activity in large computer science courses,
SIGCSE 2006
Popper, Karl R. (1972), Objective Knowledge, An Evolutionary Approach. Oxford University Press
Schwaber, K. (2006), Online resource, Google Tech Talk webinar, Scrum et al., 9/5/2006, available at
http://video.google.com/videoplay?docid=7230144396191025011&q=ken+schwaber+google+tech+talks&total=6&start=0&num=10&so=0&type
=search&plindex=0, last accessed 7/27/2007
Searle, John R. (1984), Minds, Brains, and Science. Cambridge, Mass.: Harvard University Press.
Wilkerson, M., Griswold, W.G., Simon, B. (2005), Ubiquitous Presenter: Increasing Student Access and
Control in a Digital Lecturing Environment. SIGCSE’05, February 23-27, 2005, St. Louis, Missouri,
USA
Willis, C.E., Finkel, D., Gennet, M.A., Ward, M.O. (1994). Peer learning in an introductory computer science
course, SIGCSE 1994