A Context for Computers in Education
by
Michael Huggett
B.Sc. University of Toronto, 1999
A MASTER’S ESSAY SUBMITTED IN PARTIAL FULFILLMENT OF
THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
in
THE FACULTY OF GRADUATE STUDIES
DEPARTMENT OF COMPUTER SCIENCE
THE UNIVERSITY OF BRITISH COLUMBIA
April 2001
© Michael Huggett, 2001
Contents

1 Introduction
2 Technology in Education
3 A Non-Technological Alternative
4 Appropriate Application of Technology
5 The Intelligent Tutoring System (ITS)
5.1 Student Modeling
5.2 Making Uncertainty More Certain
5.3 Novel Approaches
5.4 ITS Effectiveness
6 Out of the Lab
7 Politics, Economics, and Tradition
8 The Future of Schools
9 Conclusion
References
“I believe that the computer presence will enable us to so modify the learning environment outside the classroom that
much if not all the knowledge the schools presently try to teach with such pain and expense and such limited success will
be learned, as the child learns to talk, painlessly, successfully, and without organized instruction.” Seymour Papert
[Papert]
1 Introduction
While the Information Age continues to advance in leaps and bounds, the institution which would best
prepare us for its challenges seems bound to the past. Instead of tackling difficult structural problems of
pedagogy and bureaucracy, education is allowing itself to be distracted by the flash and dazzle of
technology. Some advocates claim that computers are the answer to education’s woes, but their ‘quick-fix’ point of view amounts to little more than throwing money at the problem. Unfortunately, without a
more fundamental re-examination of education’s priorities, past errors seem likely to be repeated. While
some research has suggested that the computers already in classrooms have had no significant impact on
learning, still more computers are being pushed into classrooms with apparently reckless haste, with no
consideration of potential adverse effects; across North America, arts programs are being slashed to
make way, even though evidence suggests that the arts best develop crucial thinking skills in children
[Oppenheimer]. As a result, students’ increasingly mass-media-defined perspectives become ever more
narrowed.
Steve Jobs, co-founder and CEO of Apple Computer, is certainly in a position to comment –
I used to think that technology could help education. I've probably spearheaded giving away more computer
equipment to schools than anybody else on the planet. But I've had to come to the inevitable conclusion that the
problem is not one that technology can hope to solve. What's wrong with education cannot be fixed with technology.
No amount of technology will make a dent… We can put a Web site in every school - none of this is bad. It's bad only
if it lulls us into thinking we're doing something to solve the problem with education. [Wolf]
Computers are being pushed into an institution that is fundamentally outdated, a product and
reflection of a bygone era. Education today is founded on assumptions that have been passed down for
centuries – indeed, one might claim that our first great educator lived some two millennia ago. But while
Socrates based his Method on a genuine intuition for an individual student’s learning style, such
pedagogical luxury faded with the advent of the Machine Age. A sudden demand for a skilled, educated
work force drove mass education into the mainstream, and emphasis inevitably shifted from teaching to
training. Due perhaps to economies of scale, but also partly due to industrial imperatives of discipline
and respect for authority, educators came to rely on the lecture as the primary, unquestioned method of
mass instruction.
And yet, there is evidence that the group lecture is actually the worst way to learn a subject,
while one-on-one tutoring is the best [Bloom 84, Cohen, Graesser, peer]. Placing industry ahead of
enrichment has not served students well. “In the first half of this century, schools were designed on the
‘factory model,’ in which thousands of students traveled through enormous, anonymous high schools
like products on an assembly line” [Mosle]. As the manufacturing base expanded further, disciplined,
reliable workers were essential to drive the economy; schools became a means of preparing citizens for
their limited role as economic drones. Education today is a direct descendant of this tradition, and retains a functional interest in weeding out ‘defects’ – not so much in really teaching students as in running them through a homogenizing and filtering process which, by real-life standards, could only be considered artificial.
Criticism of such traditions comes from all sides. While one would expect Nobel prize winners,
the foremost products of formal education, to praise it as their guiding light, some have surprisingly
negative comments. Bertrand Russell, winner of the 1950 Nobel Prize in literature, said that education is
“one of the chief obstacles to intelligence and freedom of thought.” [Schilpp] Albert Einstein, the most
hallowed of laureates, went so far as to say –
It is, in fact, nothing short of a miracle that the modern methods of instruction have not yet entirely strangled the holy
curiosity of inquiry; for this delicate little plant, aside from stimulation, stands mainly in need of freedom; without this
it goes to wreck and ruin without fail. It is a very grave mistake to think that the enjoyment of seeing and searching
can be promoted by means of coercion and a sense of duty. [Schilpp]
Author and educator Grace Llewellyn was so disgusted with formal education that she became a leading advocate of the ‘unschooling’ movement, which seeks to educate students virtually anywhere but in an institutional
setting: “Learning is not a product of teaching - that was the most radical concept for me to get.” People,
she says, “are born learning. They learn how to walk, how to talk. They're basically little scientists. If we
don't stop that process, it will continue.” [Llewellyn]
Fred Keller, distinguished psychologist and educator, sees shortcomings in more pedagogical
terms. Psychologists recognize that learning is much more an individual than a group phenomenon. “The
traditional group method assumes that all the students are much the same, but everyone knows this isn’t
true. Some students will move quickly through the material; others more slowly.” [Chance] This is
perhaps the most detrimental aspect of traditional education: “if the material is cumulative, as it is in mathematics, science, and languages, then the slower student gets further and further behind.” [Chance]
Many slow learners lose motivation and ultimately curtail their studies, yet speed of learning is not
related to depth—it is worth noting that Einstein himself was a slow learner, leaving one to wonder how
many similarly ‘slow’ students have been lost along the way.
2 Technology in Education
The simple introduction of computers into the existing educational system, then, cannot hope to improve
the situation. Applying a new tool according to an old paradigm can arrest its potential; although there
are always latent possibilities (both good and bad) in any technology, they are only tacitly implied, and
may not be powerful enough to overwhelm established standards. Despite whatever power of social
reform technology is claimed to possess, in schools it has a general history of failure. Fifty years ago,
radio was touted as the great reformer that would bring the world into classrooms. Soon filmstrips, and
finally television, promised great benefits. [Troxel] Computer critic Clifford Stoll writes that televised
education –
... has been around for twenty years. Indeed, its idea of making learning relevant to all was as widely promoted in the
seventies as the Internet is today. So where's that demographic wave of creative and brilliant students now entering
college? Did kids really need to learn how to watch television? Did we inflate their expectations that learning would
always be colorful and fun? [Stoll]
Technology critic Neil Postman is similarly emphatic in his indictment of technological panaceas: “I
thought that television would be the last great technology that people would go into with their eyes
closed. Now you have the computer.” [Postman]
Unfortunately, the clamour for computers is growing. Seduced by technology, American teachers
in a 1997 poll ranked computers higher in importance than reading classical and modern literature, than
studying history and the sciences, and than discussing drugs or family problems. [Oppenheimer] Research claiming a net benefit from using computers in education has been consistently disappointing. “The circumstances are artificial and not easily repeated, results aren’t statistically reliable, or, most frequently, the studies did not control for other influences, such as differences between teaching methods,” says Edward Miller, former editor of the Harvard Education Letter. “Most knowledgeable people agree that most of the research isn’t valid. It’s so flawed it shouldn’t even be called research. Essentially, it’s just worthless.” [Oppenheimer]
If one follows this line of reasoning, it becomes apparent that, if computers are to be used in a
truly effective manner, they should be integrated into a system of learning based more on human needs
than industrial imperatives – in short, one that consciously acknowledges the cognitive mechanisms by
which people actually learn. The question is, are computers strictly necessary to manage this system?
3 A Non-Technological Alternative
Mastery Learning is not a new idea. Indeed, it was first suggested by Benjamin Bloom in 1963 [Bloom 76, Bloom 84], and its popularity has been growing gradually ever since. Its premise is simple: a program of
instruction is laid out as a sequence of ‘units’ that chart a course through a subject, and which typically
increase in complexity. The purpose and goal of each unit is explicitly specified. In order to proceed
through the program, each unit should be mastered, that is, learned to an ‘A’ level, before proceeding to
the next. In contrast to current classrooms, fast and slow learners are not bound together in lock-step, which avoids the boredom and frustration common to group instruction. Instead, each student may take the time they
need to master a unit before continuing.
Tests are merely diagnostic. Indeed, if the purpose of a test is to reflect a student’s knowledge,
why should students be forever bound to a grade that they received while they were still in the process
of learning? If you were to receive an average grade in a subject, but then through further study refine
your knowledge, why should it not be possible to ‘upgrade’ the mark to reflect your improved level of
understanding? That is precisely the point of professional certification – why can’t the same principle be
applied in academia? In Mastery Learning students are allowed to retake a test in a particular unit as
often as desired, until they have demonstrated mastery of the material and thus the readiness to continue
to the next unit. Fred Keller, developer of a mastery system called the Personalized System of Instruction (PSI), defends the purpose of such stringency: “What’s the point in letting the student squeak by? If he
doesn’t really understand the material, then we’re just kidding ourselves in thinking he’s getting an
education.” On the other hand, “there is no penalty for flunking unit tests. What matters is mastery of the
material, not whether the student has achieved mastery quickly or slowly.” [Chance]
Bloom summarizes Mastery Learning in four rules, which stand in stark contrast to traditional
education’s essentially pessimistic view of human ability. First, “a normal person can learn anything that
teachers can teach.” In other words, according to Mastery advocates, given enough time, 90% of
students can learn their lessons to a mastery level, that is, earn an ‘A’. This directly challenges the
established notion that most students have ‘average’ (or worse) capability, and are incapable of
understanding above a ‘C’ grade level. Second, “individual learning needs vary greatly.” Alternate
explanations should be available, so that a problem may be approached from a perspective which seems
sensible to the learner. Third, “under favourable learning conditions, the effects of individual differences
approach the vanishing point, while under unfavourable learning conditions, the effects of individual
differences are greatly exaggerated.” Certainly students differ in their distribution of skills and aptitudes,
but mastery educators argue that variations in student achievement have more to do with man-made
factors; given some flexibility these differences have proven no hindrance to understanding. Fourth,
“uncorrected learning errors are responsible for most learning difficulties.” If student errors are
corrected in the context in which they arise, confusion will not accumulate. This point alone can lead to
significant increases in performance. [Bloom 76, Bloom 84, Chance]
Perhaps the most surprising (and to many, outrageous) aspect of the mastery system is its de-emphasis of grades. Quite simply, if you complete a series of units, you get an ‘A’; your transcripts need
only list the subjects which you have mastered. Says Keller, “If [a student] has mastered quadratic
equations or Pavlovian conditioning, who needs grades?” [Chance]
One university-level program in the sciences claims great success with such techniques –
There are many advantages to this approach. Students create their own set of notes as they create their own
understanding of a topic. They are active learners and demonstrate long term retention. In class they are more skillful
at using concepts from previous courses and previous lessons. They develop stronger skills for deciphering complex
written material, and they display levels of self confidence that are not seen with ordinary didactic instruction.
Students demonstrate this by the ease with which they approach new problems in the text and the freedom they show
when volunteering an explanation of a topic or problem during class. The students also become more aware of their
own skills and more proficient at communicating complex concepts to teachers and classmates. [Zielinski]
Thus mastery methods give students the skill and confidence to teach themselves and others. Ideally,
shouldn’t this skill be the ‘hidden agenda’ of all teaching? Surely schools should foremost teach
intellectual independence. Students will not forever be in school; they should be adequately prepared to
deal with the unforeseen challenges that they will face throughout their lives.
Mastery learning, however, is not without its costs. First and most obvious is that, since students
are learning at different rates, one of two problems will arise: either fast students will have to wait for
slower students to catch up, or teachers will have to spread themselves more thinly across the syllabus,
simultaneously instructing students at various stages of progression. This has prompted some critics to
lament that teachers will need to work harder –
The main drawback of mastery learning is the increased need for instruction, since for all students to achieve a 95%
mastery of a subject, the instructional component should be increased by 10% to 20%. This cannot be considered a
reasonable demand on teachers since most teachers do not have that kind of time to spare. [Horton]
– or that the need for inspired teachers itself poses a restriction –
The problem with model programs … isn’t that they don’t work (they usually do) but that their successes almost
always depend on factors that cannot be endlessly reproduced: limited numbers of particularly talented and dedicated
educators who are drawn to new and innovative programs. [Mosle]
4 Appropriate Application of Technology
Fortunately, if correctly introduced, computers are ideally suited to precisely such problems, as many educators since Papert have pointed out –
In many ways, computers are the ideal teacher. Unlike their human colleagues, computers are never too harried to
answer a question, never too distracted to notice that a student is puzzled. They always proceed at each child's own
pace, presenting information in a variety of ways until students show that they understand the material. The best
computerized tutors can capture and hold a child's attention for hours. [Cetron]
This provides individualized instruction and appropriate learning time for students, and learning becomes a rewarding
experience for them. Aside from the initial preparation of materials, this also frees the teacher to help with remediation
of students with greater learning problems. [Caissy]
The results of such computer-assisted mastery methods regularly generate surprising testimonials in the
media, and in some unexpected environments –
As a freshman at McDonough 35 High School in inner-city New Orleans, Corey Flagg completed the entire Algebra I
curriculum—all 15 chapters of the textbook—by the second week of February. He used a computerized education
system called I CAN Learn. It was no small feat: Students in blackboard-taught classes at McDonough are lucky to be
on chapter 10 by the end of the school year in June. [Button]
Nonetheless, as computerized teaching gains ground, it gives new life to worries that “my kids are going
to turn out to be robots. What concerns me is that … their entertainment and education may be so
thoroughly reliant upon computers that their opportunities for creative expression and exploration will
be gone” [Halpern], and that they will ultimately fail to appreciate more pedestrian printed sources.
Indeed, there seems little need to build a system that claims to do it all, replacing notebook, text, and
other learning aids in one stroke. Putting such extreme faith in technology makes little sense –
Why should we discard excellent textbooks and other familiar teaching materials? Suppose instead that we begin by
asking what the machine can do well, see if it is possible to assign some tasks to the machine, other tasks to books and
printed materials and some tasks to verbal interaction between the teacher and student. In other words, let's explore a
systems view of the teaching process. [Tyree]
And this, in fact, is what some research programs have been doing, in a Mastery-congruent manner, for
the last 25 years. If human resources are strained by Mastery Learning, then perhaps here the computer
can help. Let us consider for a moment the state of the art in research on computerized instruction.
5 The Intelligent Tutoring System (ITS)
Largely unheralded outside of the academic circles which study them, ITSs nonetheless represent the
future of computer-based learning. Born at the crossroads of artificial intelligence, cognitive science,
and education, ITSs stand in contrast to the more traditional drill-and-practice approach in which
computers are often used as sophisticated workbooks. The emphasis is on adding intelligence to
computerized education: “[artificial intelligence] programming techniques empower the computer to
manifest intelligence by going beyond what’s explicitly programmed, understanding students’ inputs,
and generating rational responses based on reasoning from inputs and the system’s own database.”
[Shute] ITSs seek to extend and add subtlety to interactions along several dimensions: by offering
appropriate advice and explanations at the appropriate time, by gauging a student’s readiness to advance
to new material, by adaptively planning the presentation of a lesson, by giving feedback on progress,
and by choosing the timing and style of remediation.
Historically, the ITS follows from Computer Assisted Instruction (CAI) systems. While there is some debate over the finer points of what the ‘I’ in ‘ITS’ actually entails, the ITS always represents a big step forward from CAI’s rigidity –
An ITS differs from CAI in that: (a) instructional interactions are individually tuned at run-time to be as efficient as
possible, (b) instruction is based on cognitive principles, and (c) at least some of the feedback is generated at run-time,
rather than being all canned. [Wes Regian, in Shute]
The intent of the “I” in ITS was to explicitly recognize that a tutoring system needs to be exceedingly flexible in order
to respond to the immense variety of learner responses. CAI, as the forerunner of ITS, didn’t have the range of
interactivity needed for learning. In fact, the movement...to ITS was to further distance the new type of learning
environments from the rigidity of CAI. [Elliot Soloway, in Shute]
Breaking away from the stereotype of impersonal, machine-like rote learning, part of the promise
of many research-based approaches derives from their ongoing basis in cognitive theory. Getting the
computer to behave more intelligently partly involves “examining issues related to the representation
and organization of knowledge types in human memory ... classic CAI used pages of text to represent
knowledge, but that had little psychological validity.” [Shute] Indeed, much of the seminal ITS work has
explored the implications of fundamental psychological research into learning, memory, reasoning,
collaboration, and retention. Cognitive scientist John Anderson is one of the most widely-cited pioneers
of the ITS field –
Each of [our] tutors has had a production system model of the skills they were supposed to teach. This involved
developing a set of rules which in combination would perform the target skill. A typical tutor would involve as many
as 500 of these rules. These rules reflected a cognitive model which we thought could underlie successful problem
solving in that domain. [Anderson 91]
This perspective sometimes takes form as a revelation; when a tutoring module was added to MYCIN,
the first practical expert system, the developers came to the realization that their role as educators was
not to shovel facts into empty heads so much as to assist the students in building mental models, in
“making concrete the reasoning process,” developing natural representations of the way the domain
knowledge was organized –
This is an amazing change. Ten years ago I thought I was trying to teach parameters and rules, and now I’m saying
that I want to teach the student to be an efficient model builder. What can we tell the student that will help him
critique the model that he’s constructing? [Clancey]
The irony is that researchers, of necessity stripped of conventional classroom assumptions, purposely or
otherwise employ at least some of the principles of Mastery Learning as defined by Keller and Bloom
[Bloom 76, Bloom 84, Chance]. While not all systems hold the student back until ‘mastery’ of a unit has
been demonstrated (especially see discovery environments, below), without the need to keep an entire
class on the same page, self-paced learning is so suited to computerized instruction that it is evident in
all ITS systems. Furthermore, since the computer must be able to deal effectively with a student’s
confusion, efforts are typically made to provide it with a sufficiently large database of alternative hints
and explanations. The material is also frequently subdivided into independent discrete tasks or steps – in
fact, in order to make the difficult task of student evaluation manageable, ITSs require that students’ use
of the interface be discretized into events or states that the computer can recognize, and that the
student’s level of understanding be modeled in terms of relevant cognitive parameters. Just as classical
Mastery was developed to be more responsive to the needs of the learner, so too do ITSs seek to adapt
themselves intelligently to their users. This is the foundation of perhaps the single most important facet
of an ITS: the way in which it forms a picture of an individual user, termed the student model, in order
to adapt to particular needs and aptitudes –
The main promise of computer tutors ... lies in their potential for moment-by-moment adaptation of instructional
content and form to the changing cognitive needs of the individual learner, and our task... is to find principles which
can guide the construction of tutors which fulfil that promise. [Ohlsson]
5.1 Student Modeling
Unfortunately, while considered crucial by most researchers, student modeling is also by far the most
difficult aspect of constructing “intelligent” teaching systems.
The task is a big one, so fraught with difficulties that some would deem student modelling an “intractable problem”.
The intense difficulty of constructing and dynamically updating models of student understanding is one reason why,
more than 2 decades after computers have been introduced to learning in schools, the promise of computers to provide
individualized instruction has not yet been fulfilled. [Katz 92]
The reasons for this difficulty are obvious: human students lack the predictability of algorithms or
machines, their behaviours are more complex and ambiguous, and are not particularly consistent over
time. This introduces a significant amount of uncertainty to the problem, which in turn calls for methods designed to cope with it.
Student modeling – the task of building dynamic models of student ability – is fraught with uncertainty, caused by
such factors as multiple sources of student errors, careless errors and lucky guesses, learning and forgetting. [Katz 92]
Advocates of student models would wish to go beyond the analyses of student performance in terms of surface
mistakes. They would like to isolate the underlying misconceptions which are the “cause” of the mistakes, because
remedying such misconceptions might eradicate a whole set of mistakes. But defining, representing, and recognizing
such misconceptions is even more difficult than identifying a procedural mistake. [Self]
Such uncertainty has led to a thriving specialization in ITS research: how to interpret a vast and
ambiguous range of human behaviours in a consistent, usefully accurate way. When reasoning from a
position of certainty, one uses deduction, the use of fundamental axioms to build more complex facts – if X implies Y and X is true, then Y must be true. When reasoning from a position of uncertainty, however, induction is the tool – if X implies Y and Y is true, then X is more plausible. The most popular
approach to handling uncertainty is numerical and probabilistic. Bayesian Networks (BNs, also called
Belief Networks) are a simple and consistent set of rules for induction, model selection, and comparison.
Formally, BNs are a way to calculate a posterior probability distribution over a set of query variables, given exact values for a set of evidence variables. [Baldi]
An agent gets values for evidence variables from its percepts (or from other sources of reasoning), and asks about the
possible values of other variables so that it can decide what action to take. [Russell]
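To make the inductive step concrete, the core update can be written as Bayes’ rule, stated here in generic symbols H and E for illustration (these symbols are not drawn from the cited sources):

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)}

Here H might be the hypothesis that a student has mastered a concept and E an observed behaviour; the prior belief P(H) is revised into the posterior belief P(H | E) as evidence arrives.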
The Bayesian approach provides a principled, mathematical approach which shows how to
change existing beliefs in the light of new evidence; thus it allows scientists to combine new data with
their existing knowledge or expertise. In a Bayesian model, if one element can be considered a
precondition to another, then the second element should be considered more or less probable based on
the probability of the first. A BN is a graph possessing certain properties. A set of random variables
represents the nodes of the network, and a set of directed links connects pairs of nodes in such a way
that if a link points from a precursor node X to a node Y, then X has direct influence on Y. Each node
also has a conditional probability table (CPT) which defines the influence that its precursor (“parent”)
nodes have upon it; Y may have an arbitrarily large number of precursor nodes which point to it. To
avoid infinite looping, a network built on these properties should have no cycles (and is therefore a
DAG, or Directed Acyclic Graph). [Russell] As a set of familiar concepts are combined to provide
evidence of an outcome, Bayesian networks facilitate a common-sense interpretation of statistical
conclusions. [Gelman]
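A minimal sketch in Python may make this structure concrete. The node names and the helper function below are placeholders invented for illustration (they do not come from the essay or its sources); the network topology is stored as a mapping from each node to its parents, and a simple depth-first check confirms that the structure contains no cycles.

    # Placeholder topology: two root causes point to one effect, which in turn
    # has two observable consequences (the same shape as Figure 1 below).
    parents = {
        "Cause1": [],
        "Cause2": [],
        "Effect": ["Cause1", "Cause2"],
        "Report1": ["Effect"],
        "Report2": ["Effect"],
    }

    def is_dag(parents):
        """Return True if following parent links never revisits a node in progress."""
        state = {node: "unvisited" for node in parents}

        def visit(node):
            state[node] = "in_progress"
            for parent in parents[node]:
                if state[parent] == "in_progress":   # back edge: a cycle exists
                    return False
                if state[parent] == "unvisited" and not visit(parent):
                    return False
            state[node] = "done"
            return True

        return all(visit(n) for n in parents if state[n] == "unvisited")

    print(is_dag(parents))   # True: this topology qualifies as a valid BN structure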
A popular introduction to Bayesian belief networks (judging by this writer’s exposure to undergraduate and graduate courses) is Russell and Norvig’s Burglary / Earthquake example (fig. 1)
[Russell]. A new alarm system is installed which reliably responds to burglaries, but is also occasionally
triggered by minor earthquakes. Two persons are tasked to respond to the alarm. John always calls when
he hears the alarm, but sometimes confuses it with other alarm sounds, and calls then too. Mary likes
loud music and occasionally misses the alarm completely.
Figure 1: A simple Bayesian belief network. [Burglary and Earthquake each point to Alarm; Alarm points to JohnCalls and MaryCalls.]
The topology of the network in Figure 1 represents the general structure of causal relationships
between elements of the domain; thus the probability of the alarm going off is directly affected by
burglaries and earthquakes, but whether John or Mary respond only depends upon whether the alarm
goes off; they do not witness burglaries nor feel minor earthquakes. Factors such as Mary listening to
loud music or John confusing various alarm types are accounted for in the uncertainty inherent in the
links from Alarm to JohnCalls and MaryCalls. It would be very difficult in a practical sense to
determine and weigh all the various reasons why a person may not respond as expected: John and Mary
may simply not be home, may be asleep, may have water in their ears, and so forth. For the alarm as
well, there are potentially an infinite number of reasons why it might go off (e.g. construction, small
animals, sun flares, etc.) or might not (e.g. faulty wiring, poorly tuned sensors, etc.). The probabilities
used summarize these potentials, and therefore allow a BN to function simply but approximately
correctly in a complex world, while still allowing the accuracy of the approximation to be improved
further by adding more relevant information.
Once the topology is fixed, the conditional probability table (CPT) in each node must be set,
such that the probability of the node being true is stated for every truth combination of its parent nodes.
Thus Alarm would have a stated probability of being true (i.e. probability of ringing) for each of the
following: neither burglary nor earthquake occurring (lowest probability), just an earthquake occurring
(slightly higher), just a burglary occurring (highest), and both a burglary and earthquake occurring (also highest). Thus more generally, there are 2^n probabilities which must be specified in a node with n
parents. A node with no parent nodes, such as Burglary or Earthquake above, is initialized with
just the prior probability of each of its possible values–in this case the probabilities of both true and
false–that the event might or might not occur.
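The following sketch puts these pieces together for the network of Figure 1. The conditional probability values are assumptions chosen only for demonstration (they are not taken from the essay or its sources), and the posterior probability of Burglary is computed by brute-force enumeration of the full joint distribution, which is feasible here because the network has only five Boolean nodes.

    from itertools import product

    # Illustrative CPTs (assumed values). Each entry gives P(node = True)
    # for one combination of parent truth-values.
    P_B = 0.001                                        # prior P(Burglary)
    P_E = 0.002                                        # prior P(Earthquake)
    P_A = {(True, True): 0.95, (True, False): 0.94,    # P(Alarm | Burglary, Earthquake):
           (False, True): 0.29, (False, False): 0.001} # 2^2 entries for 2 parents
    P_J = {True: 0.90, False: 0.05}                    # P(JohnCalls | Alarm)
    P_M = {True: 0.70, False: 0.01}                    # P(MaryCalls | Alarm)

    def p(value, prob_true):
        """Probability that a Boolean variable takes `value`, given P(True)."""
        return prob_true if value else 1.0 - prob_true

    def joint(b, e, a, j, m):
        """Probability of one complete assignment, factored along the network links."""
        return (p(b, P_B) * p(e, P_E) * p(a, P_A[(b, e)]) *
                p(j, P_J[a]) * p(m, P_M[a]))

    def posterior_burglary(j_obs, m_obs):
        """P(Burglary | JohnCalls = j_obs, MaryCalls = m_obs) by exact enumeration."""
        totals = {True: 0.0, False: 0.0}
        for b, e, a in product([True, False], repeat=3):
            totals[b] += joint(b, e, a, j_obs, m_obs)
        return totals[True] / (totals[True] + totals[False])

    print(posterior_burglary(True, True))   # belief in a burglary after both neighbours call

With these particular numbers the posterior comes out to roughly 0.28: the two calls together raise the belief in a burglary far above its prior, yet the network still judges a false alarm more likely than a break-in.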
In the context of education, uncertainty is common, since student reactions are inherently “noisy”
and difficult to interpret. Here BNs can be used to determine the probability that a hypothesis about a
student’s ability is correct, given the (sometimes conflicting) evidence of behaviour. Certain activities
which the student has performed are taken as evidence that the student has understood a concept, and the
student can thereby be represented as a collection of cognitive parameters that indicate learning. A
specification of the relationship between the activities and the concepts makes it possible to predict
learning based on the evidence of behaviour, or conversely, to infer that prerequisite cognitive skills
have been achieved based on a high solution score.
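As a minimal illustration of this kind of evidence-based updating, the sketch below models a single hidden variable (“the student knows the concept”) with one observable consequence (“the student answers correctly”), allowing for careless slips and lucky guesses. The prior belief and the slip and guess probabilities are assumed values chosen for the example, not figures from any of the cited systems.

    # Assumed parameters for the sketch.
    P_KNOWS_PRIOR = 0.5   # initial belief that the student knows the concept
    P_SLIP = 0.1          # P(incorrect answer | student knows)
    P_GUESS = 0.2         # P(correct answer | student does not know)

    def update_belief(p_knows, answered_correctly):
        """One Bayesian update of P(knows) after observing a single answer."""
        if answered_correctly:
            likelihood_knows, likelihood_not = 1.0 - P_SLIP, P_GUESS
        else:
            likelihood_knows, likelihood_not = P_SLIP, 1.0 - P_GUESS
        numerator = likelihood_knows * p_knows
        return numerator / (numerator + likelihood_not * (1.0 - p_knows))

    belief = P_KNOWS_PRIOR
    for correct in [True, True, False, True]:   # an observed answer sequence
        belief = update_belief(belief, correct)
        print(round(belief, 3))                 # rises with correct answers, dips after the error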
A major advantage of the Bayesian approach in general, and for educators in particular, is its
relative ease of use, as “it is usually easy for a domain expert to decide what direct conditional
dependence relationships hold in the domain.” [Russell] This is especially important when the “domain
expert” is a teacher or administrator working in a relatively non-technical field such as education: a
BN’s causes and effects are represented in a hierarchical tree-like form that makes intuitive sense to
non-scientists, and it can be constructed and debugged using user-friendly graphical tools. Domain
experts are sometimes able to work with them effectively after just a few hours of use [Horvitz].
Despite their widespread appeal for dealing with uncertainty, Bayesian networks nonetheless
present some problems. First, in order to calculate any probabilities at all, the network must be
initialized with some prior beliefs, which can be problematic:
... if a tutoring system is being deployed for the first time in a new school with a different type of pupil than before,
there may be no way of obtaining a meaningful prior distribution ... such cases are often handled through the
assignment of equal prior probabilities to all hypotheses; but as advocates of [other approaches] point out, this method
does not distinguish between a state of ignorance about a variable and a genuine belief that all of its values are equally
probable. It also means that valuable observational evidence may end up being combined with largely arbitrary prior
assumptions. [Jameson]
Thus BNs can be criticized for their “high knowledge engineering demands”; that is, although domain
experts can be taught to work with the networks, these “knowledge engineers” then face a difficult
challenge in specifying variables’ prior probabilities and the conditional probabilities of the links. [Katz 92] Sometimes this can be reduced to an educated guess:
In most of the systems...most or all of the required numbers were apparently entered by the designers on the basis of
intuitive judgment. Even in cases where systematically collected empirical data were used, the designers themselves
warn against optimistic assumptions about the accuracy of the numbers... [Jameson]
Second, defining the structure of the network is also problematic. “We require not only that each
variable is directly influenced by only a few others, but also that the network topology actually reflects
those direct influences with the appropriate set of parents. Because of the way that the [BN’s]
construction procedure works, the ‘direct influencers’ will have to be added to the network first if they
are to become parents of the nodes they influence.” [Russell] Adding nodes in the wrong order can result
in links that represent tenuous relationships and require difficult and unnatural probability judgments,
such as assessing the probability of Earthquake given Burglary and Alarm. Bad design can lead
to poor representation of relationships, and the specification of a lot of unnecessary numbers.
Third, the conditional probability table at each node requires many numbers, even when a node
has a relatively small number of parents. While the relationships between a node and its parents are not
arbitrary and can often be reduced to a type classification, in the worst case, filling in a CPT can take a
lot of time and experience with all the possible conditioning cases. [Russell]
Fourth, and more generally, Bayesian networks can be demanding of computation; in fact, it has
been proven that the exact application of inference techniques (as used in BNs) is generally NP-hard,
and even some of the approximate applications are similarly complex. Thus, while it may be possible to
calculate a result, it may not always be practical; although seldom mentioned by researchers in the user
modeling field, this is often a problem. [Jameson]
The question, then, is why Bayesian inference, with these shortcomings, should be so compelling among all possible methods. The perhaps unexpected answer is that it can be shown in a strict
mathematical sense that the Bayesian approach is the only consistent way of reasoning in the face of
uncertainty, [Baldi] and as such has made previously difficult problems solvable. While on occasion
researchers may make tenuous assumptions about the objectivity of their methodology, the Bayesian
approach requires that all assumptions be made explicit –
Bayes theorem and, in particular, its emphasis on prior probabilities has caused considerable controversy. The great
statistician Ronald Fisher was very critical of the “subjectivist” aspects of priors. By contrast, a leading proponent I.J.
Good argued persuasively that “the subjectivist (i.e. Bayesian) states his judgements, whereas the objectivist sweeps
them under the carpet by calling assumptions knowledge, and he basks in the glorious objectivity of science”
[Jameson]
This direct quantification of uncertainty means that there is no impediment in principle to fitting models
with many parameters and multiple layers of relations, and this flexibility and generality allow the
Bayesian approach to cope with very complex problems. The result is a conceptually simple numerical
method for coping with multiple parameters. [Gelman]
5.2 Making Uncertainty More Certain
Perhaps the trickiest aspect of formalizing any method, numerical or otherwise, lies in its reliance on
researchers’ initial assumptions of what human activities should be observed and how they should be
interpreted. Attempts to refine these assumptions by consulting with domain experts, or by running
empirical studies on human cognitive characteristics, are compromised by the difficulty of preserving
nuances of meaning as the observations are formally represented in the system. In terms of the Bayesian
inference methods mentioned above, the problem is as much how to reify subtly interrelated concepts as
how to set the network’s initial states –
A vague concept has to have a membership function, and the various pieces of input data for a complex rule have to
be combined according to some operators. These internal representations can in principle be just as unrealistic as
arbitrarily chosen input probabilities... So the problem remains of how to choose the right ones. [Jameson]
How well do the data structures and procedure calls in the network correspond to the structures and skills that we
expect people to learn? From the network designer’s point of view, the psychological validity can be improved or
denigrated by choosing one structural decomposition instead of another ... Measuring the “correctness” of a particular
network is a problematic issue as there are no clear tests of validity. [Brown]
The problem in observing human behaviour is the same regardless of whether recording is done by
human or machine –
One major issue ... is what Smith & Geoffrey (1968) have termed the “two-realities” problem -- the fact that the notes
as recorded cannot possibly include literally everything that has actually transpired. Hence, a source of potential bias
is the possibility of selective recording of certain types of events. [Schofield]
The problem is not just what, but also how much is recorded. Student activities are the inputs to a
diagnosis which then infers what the student understands. The fewer activities recorded, or the coarser
the granularity of observation, the more difficult diagnosis becomes. Bandwidth becomes the measure of
the amount and quality of input [VanLehn]; ideally (for education perhaps, but not for privacy) machines
would be able to read minds, but this is currently impossible. By either actively asking enough
questions, or by observing at a sufficiently fine level (e.g. the ‘symbol-by-symbol’ basis of Anderson’s
LISP tutor [Anderson 95]), an ITS can obtain indirect information that approximates the students’ mental
states. In more complex problem solving situations, such as chess games or algebra problems, mental
states may be beyond reach, but the machine can still observe and react to the trail of intermediate states
which the student produces, from the posed problem to its eventual solution (e.g. Andes [Conati],
Sherlock II [Katz 92], Self’s logic tutor [Self], and various systems reviewed in [Collins]). The problem is akin to
inferring the traits of an invisible person–you can see objects move as they are being manipulated, but
cannot directly see the cause. The difficulty of inferring complex deep cognitive structures from a
limited set of interactions is clear.
One researcher has suggested ways to bypass problems inherent in student modeling by
proposing pragmatic heuristics, which aim at reducing the complexity of the task. For example, don’t
guess: ask the student to tell you what you want to know; don’t waste resources diagnosing problems
that you can’t fix; don’t be pedantic – students may have valid reasons for their mistaken beliefs; also it
is misleading for a machine to feign omniscience – perhaps it is more realistic to adopt instead a “fallible
collaborator” role. [Self]
Other researchers have teased out methods to automatically ‘debug’ students’ faulty reasoning. A
single misconception about, say, subtraction could cause a student to err on every question on a test,
even though in each case the student is consistently applying just a single incorrect operation.
Debugging one problem based on apparent gibberish in answers can be impossible for a human teacher,
but perhaps not for machines:
By being able to synthesize such deep-structure diagnostic models automatically, we can provide both a teacher and
an instructional system with not only an identification of what mistakes a student is making, but also an explanation of
why those mistakes are being made. Such a system has profound implications for testing, since a student need no
longer be evaluated solely on the number of errors appearing on his test, but rather the [fewer] fundamental
misconceptions which he harbours. [Brown]
Thus the advantage of this approach is that bug-aware systems may be able to provide very specific
feedback to the student about the nature of the misconception. [Shute] In terms of organization, the idea
is that –
[Such bug-aware] systems represent both misconceptions and missing conceptions. The most common type of student
model in this class employs a library of predefined misconceptions and missing conceptions. The members of this
library are called bugs. A student model consists of an expert module plus a list of bugs. This bug library technique ...
diagnoses a student by finding bugs from the library that, when added to the expert model, yield a student model that
fits the student’s performance. [VanLehn]
While finding the appropriate bug in a large library might be expensive, there is evidence that the
judicious choice of a specific numerical technique (such as Dempster-Shafer theory over Bayesian networks) can be “very efficient in tracking down and identifying the bug a student has, when applied to
an intelligent buggy system.” [Tokuda]
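To illustrate the bug-library idea in miniature, the toy sketch below pits a correct two-digit subtraction procedure against the well-known ‘smaller-from-larger’ bug (subtracting the smaller digit from the larger in every column, ignoring borrowing), and diagnoses a student by finding which library entries reproduce all of their answers. The code and its numbers are invented for illustration; they are not drawn from BUGGY, DEBUGGY, or the other systems discussed here.

    def correct_sub(a, b):
        """The correct subtraction procedure."""
        return a - b

    def smaller_from_larger(a, b):
        """Buggy procedure: in each column, subtract the smaller digit from the larger."""
        result, place = 0, 1
        while a > 0 or b > 0:
            result += abs(a % 10 - b % 10) * place
            a, b, place = a // 10, b // 10, place * 10
        return result

    BUG_LIBRARY = {"correct": correct_sub, "smaller-from-larger": smaller_from_larger}

    def diagnose(test_items, student_answers):
        """Return the library procedures consistent with every observed answer."""
        return [name for name, proc in BUG_LIBRARY.items()
                if all(proc(a, b) == answer
                       for (a, b), answer in zip(test_items, student_answers))]

    items = [(42, 17), (63, 28), (50, 36)]
    student = [35, 45, 26]            # every answer follows the buggy procedure
    print(diagnose(items, student))   # ['smaller-from-larger']

A diagnosis of this kind explains the errors rather than merely counting them, which is exactly the “profound implications for testing” noted in the quotation above.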
However, the buggy approach has some drawbacks. First, the student’s errors must be matched
with a bug in the system’s “bug library” in order to be recognized; bugs unanticipated by the system’s
designers may pass unnoticed and un-remediated. “The correct model must contain all of the knowledge
that can possibly be misunderstood by the student or else some student misconceptions will be beyond
the modeling capabilities of the system.” [Brown] Should the system fail, “it may totally misdiagnose the
student’s misconceptions.” [VanLehn] The system may also try to match the error to some combination of known bugs; however, it must then consider the “nonobvious” interactions of multiple bugs: “in general,
the interactions between bugs can be arbitrarily complex.” [Brown]
Second, several distinct bugs may generate the same answer. In the worst case, a student may
have several misconceptions which actually combine to produce a correct answer; how then should a
machine diagnose such an ‘error’?
Third, for student errors which do not match any bugs in the library, it would be desirable to
expand the library to include new sources of error (indeed, this would be consistent with the definition
above of what makes an ITS ‘intelligent’). While “bugs have to be hand-coded into the network now...
one can envision generatively producing bugs by a set of syntactic transformations” or for non-syntactic
bugs, by inferring from some semantic theory. [Brown] Along these lines, bugs may be constructed ad
hoc during diagnosis from “bug parts” using production rules (as seen in Langley & Ohlsson’s ACM
system), rather than being predefined (and combined, as seen in Brown & Burton’s BUGGY and
DEBUGGY systems). One advantage of this is that libraries based on rules rather than distinct cases
may be made smaller while specifying as many bugs; the implication is that this may actually make the
problem of filling bug libraries easier, although ultimately the online analysis demands seem to be
higher. [Langley & Ohlsson, in VanLehn].
Last, the system may not be able to discern between a genuine student error and a careless slip.
While the WEST system [Burton R] “has compiled into it diagnostic routines for many typical errors that
a student is apt to make (such as precedence errors in arithmetic),” the authors can only hint at the
problem of handling careless errors: “if a student makes a potentially careless error, be forgiving. But
provide explicit commentary in case it was not just careless.” [Burton R] However WEST does employ
some subtle alternative strategies in its attempt to detect “mind bugs” in the student’s understanding of
the structure of the game, or to detect an “alteration in the spirit of the game” wherein the student “no
longer cares about winning ... but instead in psyching out the actual teaching strategies embedded in the
system.” Nonetheless, the system’s approach to interventions is very conservative – in the face of an
uncertain situation, “if the student is doing something completely ‘off the wall’” the system is unlikely
to intervene.
5.3 Novel Approaches
Other explorations have led to some compelling and even counter-intuitive findings with respect to
learning and instruction. For instance, Latent Semantic Analysis (LSA) is a statistical technique that
compresses a large corpus of texts into a space of 100 to 500 dimensions [Graesser]; experiments using
the AutoTutor system with LSA are based on the premise that a knowledgeable tutor is not a critical
requirement. In a study of human tutoring sessions, the authors –
– found that the human tutors and learners have a remarkably incomplete understanding of each other’s knowledge
base and that many of each other’s contributions are not deeply understood. It is not fine-tuned student modeling that
is important, but rather a tutor that serves as a conversational partner when common ground is minimal. [Graesser]
It is this role that AutoTutor is designed to fill. Students type natural-language answers to questions
which AutoTutor poses (using a talking head with synthesized speech); the response set of words typed
by the student is compared with sets of words related to good answers; with LSA as its evaluation
module, AutoTutor “exhibits the performance of an intermediate expert,” which compares well with
“the vast majority of human tutors”. AutoTutor scores the student on how well they answer a series of
questions, using “simply the mean of the...scores for all the previous learner turns in the tutorial
session,” while each individual answer score is “based on its resonance with the ideal good answer of
the current topic.” AutoTutor does not otherwise formulate a student model, and yet the authors’
experiments show that it performs very well, apparently due to its ‘conversational’ nature.
AutoTutor simulates the normal tutor’s attempt to collaboratively construct answers to questions, explanations, and
solutions to problems. It does this by formulating dialog moves that assist the learner in an active construction of
knowledge, as the learner attempts to answer the questions and solve the problems posed by the tutor. Thus AutoTutor
serves as a discourse prosthesis, drawing out what the learner knows and scaffolding the learner to an enhanced level
of mastery. [Graesser]
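The scoring idea can be sketched very simply. LSA proper first compresses a large corpus into a few hundred latent dimensions via singular value decomposition; the toy code below substitutes plain word-vector cosine similarity as a stand-in for that step, and the ideal-answer text, the learner turns, and the function names are all invented for illustration rather than taken from AutoTutor.

    from collections import Counter
    from math import sqrt

    def cosine(text_a, text_b):
        """Cosine similarity between bag-of-words vectors of two texts."""
        va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = (sqrt(sum(c * c for c in va.values())) *
                sqrt(sum(c * c for c in vb.values())))
        return dot / norm if norm else 0.0

    ideal_answer = "force equals mass times acceleration"
    learner_turns = [
        "the force is the mass times the acceleration",
        "heavier things fall faster",
    ]
    turn_scores = [cosine(turn, ideal_answer) for turn in learner_turns]
    print(turn_scores)                          # each turn's 'resonance' with the ideal answer
    print(sum(turn_scores) / len(turn_scores))  # running mean, the session-level score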
From an educational perspective, ITS research into collaboration is of particular interest. In an
attempt to reach the goal of better instruction through individualized tutoring [Bloom 84, Cohen], ITSs
have typically been designed for use by individual learners. Some researchers have followed a different
path, in designing collaborative systems in which students work together at one computer, according to
the notion that students are likely to learn better in the presence of their peers [peer] since they can
combine forces to help each other overcome errors in reasoning or deficient knowledge. Indeed, “many
researchers have shown impressive student gains in knowledge and skill acquisition from collaborative
learning environments,” [Shute] and there are a dizzying variety of social and psychological paradigms
which seek to explain and measure such human interactions. [Dillenbourg] A critical issue with respect to
student modeling is that the stakes have been raised: the machine now has to form a model based on a
jointly shared rather than individual problem-solving space. There is also the issue of unintended or
unforeseen collaborative effects resulting from the introduction of an ITS to a classroom. Anderson et
al. found that even in cases where students were working at individual machines, “there is a constant
banter of conversation...in which different students compare their progress and help one another...we
have come to realize that our tutors would be less successful if such peer learning were not available.”
[Anderson 95] In one intensive study involving a “state-of-the-art artificially intelligent geometry proofs
tutor”, researchers noted that the role of the human teachers had also been affected: “specifically, the
teachers’ behavior changed from that of a rather distant expert to that of a collaborator...rather than
addressing the entire class in a relatively formal manner the teacher worked on an individual basis with
students,” and noted that as teachers kept busy responding to student requests, students took a more
active role in initiating the interaction. [Schofield]
A different view takes the computer itself as the collaborator, as suggested above by Self (as an
alternative heuristic to student modeling). In such case, “an interesting issue concerns the necessity to
have a plausible co-learner.” [Dillenbourg] The problem is that human learners are not necessarily very
tolerant of a computerized collaborator’s silly mistakes, and tend to avoid further interaction with a
simulated peer if it is repeatedly wrong. Still, “the advantage of human-computer collaborative systems
for the study of collaboration is that the experimenter can tune several parameters” in the computerized
collaborator [Dillenbourg], or alternatively can use the computer, as in the case of the Clarissa system, as a
“test-bench for examining collaborative activity,” generating “some implications for a new range of
software agents capable of plausible collaborative behaviour.” [Burton M] In any case, additional research
is required to test the efficacy of single versus collaborative learning, with or without computers. [Shute]
Another interesting approach involves discovery learning environments (or microworlds), typically consisting of a simulation environment with a simple interface and some tools. In an attempt to
make computerized learning more flexible, such environments allow students to more freely explore and
interact with them. [Burton R, Collins, Shute] The student is placed in the potentially motivating situation of
being able to form a hypothesis about the environment, and is given the tools needed to investigate.
Contrasting heavily with “shoveling facts into empty heads,” microworlds encourage an active
relationship between the learner and the knowledge and skills to be acquired, a relationship claimed vital
by the legendary developmental psychologist Piaget [Shute], in that things discovered by oneself tend to be better remembered and more highly valued. The designers of the Smithtown system, for example –
– believe that discovery learning can contribute to a rich understanding of domain information by enabling students to
access and organize information themselves...Thus, applying interrogative skills is the ‘active process’ that leads to
learning in discovery situations...since Smithtown was designed to be a guided discovery environment, there is no
fixed curriculum. [Shute 90]
Instead, students generate their own hypotheses and test them by executing a series of actions in the
environment; the series of actions comprising a student’s ‘experiment’ represents their answer, and is
evaluated against known good and bad behaviours. In exploring scientific principles in the domain of
microeconomics, students are provided with various tools: a notebook, spreadsheets, plots, a calculator,
and a point-and-click interface in which they construct hypotheses in English and submit them to the
system. Other notable discovery systems provide interactive instruction in the domains of geometry,
algebra, spatial reasoning, logical reasoning about errors, troubleshooting skills, steam plants, and LISP
programming (as reviewed in [Burton R]), and physics [Conati], each imposing different degrees of
structure on what the student is allowed to do.
While adaptable to a wide range of users, the problem with microworlds is that not all students are sufficiently motivated or skilled in exploratory behaviour to explore the environment effectively; [Shute] in short, they may not have the necessary experience to form and test hypotheses, although this behavioural shortcoming can itself become a target for effective remediation. [Bunt] As
computer power increases and supporting technologies are refined, virtual reality (VR) will undoubtedly
be used to enhance immersive learning environments. Research to date indicates its generally positive
value for learning: VR can “make information more accessible through the evolutionarily-prepared
channels of visual and perceptual experience,” [Shute] making the discovery experience even more
impressive.

(Footnote: This is incidentally also a stated goal of some Mastery approaches, which themselves become a type of ‘discovery environment.’ [Chance, Tyree, Zielinski])
5.4 ITS Effectiveness
Poised at the apex of computerized instruction, and given the enormous time, effort, and diversity of approaches involved in developing ITSs in their various forms, how well do they actually perform? While
one might expect that better individualization of instruction would lead to more efficient skill
acquisition, results of evaluations are split: “for some learning situations and some curricula, using fancy
programming techniques may be like using a shotgun to kill a fly.” [Shute] The issue is clouded by the
fact that researchers often do not test their systems in controlled studies, and there is little agreement
over standardized reporting of results [Shute]. Shute & Psotka’s meta-evaluation of six such studies noted
that while all appeared very positive regarding the effects of the systems, “we are familiar with other
(unpublished) tutor-evaluation studies that were conducted but were ‘failures.’” Nonetheless, the
generally positive trend was viewed as encouraging, and of the six reviewed, “the findings indicate these
systems do accelerate learning with no degradation in final outcome.” [Shute] One of these six was the
LISP tutor [Anderson 95]; in one evaluation students completed exercises 30% faster than controls, and performed 43% better on a post-test when using the tutor; a second study also showed significant gains of 64% and 30% respectively. Another of the six was the Smithtown discovery
environment [Shute 90] mentioned above; control and experimental groups showed similar improvement
in their post-test scores, although the Smithtown group received half as much instruction, in line with
results for the LISP tutor. In short, students using either system learned faster with no loss of
performance as compared to traditional learning methods.
Other systems show even more positive outcomes. The Practical Algebra Tutor (PAT) [Koedinger]
was introduced to three urban high schools in Pittsburgh, and was made an integral part of the 9th grade
algebra course. Compared to traditional methods, students using PAT scored 15% higher than their
classmates on standardized tests, and a staggering 100% higher on tests targeting curriculum objectives
of the Pittsburgh Urban Mathematics Project (PUMP), for which PAT was designed (PUMP itself is
consistent with the curriculum recommendations of the National Council of Teachers of Mathematics).
The authors also provide anecdotal evidence that PAT is popular with teachers as well, who “like the
way that the tutor accommodates a large proportion of student questions and frees teachers to give more
individualized help to students with particular needs.” This support was pivotal in convincing the
Pittsburgh school board to expand the program to other schools. As the authors state, “this study
provides further evidence that laboratory tutoring systems can be scaled up to work, both technically and
pedagogically, in real and unforgiving settings like urban high schools.” [Koedinger]
6 Out of the Lab
Unfortunately, successes on the scale of the PAT tutor are rare, and even the designers of the Grace
Tutor [McKendree] concede only a “qualified” success for their own system (which
was very closely based on the LISP tutor):
There is still the major hurdle of having the tutor taken from the developers hands and used every day in real
classrooms. We feel that this is possibly the major challenge facing educational technologies such as ITSs today.
There are countless examples from conferences and labs of clever, effective, well-designed systems which are not
being used to any great extent. [McKendree]
The key problem, as they see it, is that of introducing new technologies into old environments;
instruction based on the traditional “accumulation model” sees students as “a storehouse for facts and as
long as the facts are right, effective learning will occur.” This leads to “a serious mismatch between the
cognitive theories and learning-by-doing approaches that underlie ITSs and the traditional classroom
instruction.” [McKendree]
Conversely, the developers of ITSs themselves sometimes seem to avoid contentious issues. For
example, PAT employs a “client-centered design” to fit into a client-prescribed curriculum, rather than
suggesting possible alternatives. It is also notable that while many
debates continue over cognitive and technical details, none of the researchers seem to discuss the
issue of grading. While this could be seen as congruent with Mastery Learning principles, it could also
be viewed as a symptomatic avoidance of education’s thornier issues:
I would like to believe that a decade of research in this area has given [the researchers] a solid perspective on what to
teach, how to teach it, and how to assess the effect of that instruction. Instead of providing guidance to educators in
this area, [they] seem willing to abrogate this responsibility and to settle into the role of technologists, teaching what
the current curriculum dictates regardless of the appropriateness. [Anderson 95]
In fact, since ITSs are often narrowly constructed as test-beds for cognitive theories or
algorithmic strategies, relatively few of these systems have taken a comprehensive approach which
would adapt them into full-featured, commercially available products. Robust, reliable systems are
difficult to develop; even in better-funded industry labs, promising developments are sometimes stripped
from major products as they head to market. [Horvitz 98] For researchers, this represents a vicious circle:
big challenges require big money, and the discrete bits of research that do manage to get completed are
often too small to have an effect on the outside world.
While a handful of systems have been successfully implemented and retained by the institutions
in which they were tested [Gertner, Koedinger], and some ITS-based course materials are now available for
institutional purchase [Carnegie], other well-conceived and useable systems, through no fault of their
own, have fallen into disuse due to purely logistical factors, such as lack of user feedback channels or a
maintenance plan [Katz 98]. Of the school environment, one professor of education laments –
A looming crisis in most schools is the lack of technical personnel to keep technology working and running ... I have
seen schools where teachers have given up on computers because they cannot get the school district to fix them in a
timely manner ... for now the greatest problem for computer acceptance in "curriculum integration" is not its
applicability to school-related tasks but rather the inability to get service and keep it running. Long-term funding for
technology is essential for maintenance and repair, scheduled replacement, training, and support. [Marsh]
Unfortunately, the ongoing technical challenges of implementation still distract researchers from
more general pedagogic issues of curriculum, such as how computer use should be integrated into the
existing syllabus (though PAT’s success is instructive), and how educational software can be made more
easily adaptable, so that it stays relevant to a curriculum which continues to evolve.
As such, researchers do not always have a clear perspective on the realities of daily use, while those who
best understand the curriculum – the teachers themselves – may be overwhelmed by the task
and the technology involved in ITS development. Tellingly, “the construction of a ‘pseudo-Socratic’
machine is costly. Best estimates of costs range between 100 and 400 hours…to build a one hour
tutorial. Once built the programs are generally not easy to modify, making them unsuitable for many
areas of…teaching” [Tyree]. Teachers are not expected to write their own textbooks anymore; neither
should they be expected to develop their own computer tutorials from scratch and at onerous cost.
If better approaches, algorithms, and machines are to become standardized, two
parallel developments offer progress in the short term. The first lies in changes taking
place in academia. For better or worse, widespread cutbacks in academic funding and escalating
academic costs (not to mention the large discrepancy between academic and industrial salaries) have
encouraged research institutions to take a more entrepreneurial tack, to patent and develop their own
commercial products [Reichman]. As a result, there has been a mushrooming of academic start-ups which
promote the fruits of research for hard cash; in the field of educational systems there are a growing
number of examples [Carnegie, WBT, WebCT]. Taking this trend one step further, a university might well
fund the development of sophisticated computerized materials for its own courses, thereafter generating
some income by adapting and selling them to outside interests. It would indeed be costly for individual
teachers to develop their own materials; it would be vastly preferable to establish a consistent, organized
system of production whose foundation is built more on empirically-derived Mastery-consistent
principles than on the profit motive. It seems unlikely that academic research will ever become primarily
concerned with the bottom line; its concern is instead with theory and empiricism, which if properly
pursued should discover and illuminate hidden causes and processes. While science is itself imperfect, its
stated goal is an abstracted ‘truth’.
Alternatively, the second solution may lie with industrial educational-software publishers; many
already offer both completed tutorials and tutorial-editing tools. While many industrial products are
well-designed and useful, a general caveat might be that their primary function is to generate profit and
assure corporate survival. As such they are clearly more likely to be influenced by market surveys,
which are often used to discern what people intuitively prefer [Jackson], or to cannibalize popular
existing products, or even to advertise trademarks and teach brand loyalty (as seen in educational products
from the Walt Disney Corporation and Warner Brothers [Disney, Warner]). In a market economy, all other
concerns are by definition secondary to profitability. The hope is then that industry can be convinced of
the value of existing research, although this may be difficult; as yet there is scant empirical evidence that
the cost and risk of implementing a more sophisticated system would yield a commensurate advantage.
Regardless of the means of production, once the materials are prepared, research-based systems
could have a substantial impact on a Mastery-type school environment. More sophisticated models
with smaller grain size and more robust inference algorithms can make mastery learning more reliable,
and can better support the difficult and sometimes tedious tasks of record keeping, low-level
curriculum organization, and tracking of student understanding and progress. Furthermore, the initial
investment in development time would be amortized across many sessions, as “teachers do not have to
devote time to repeatedly individualize rates of instruction or to prepare additional lesson plans. The
schedule of the educational institution can remain unchanged” as the system is assimilated into the
school [Koohang]. This assumes, of course, that educational software is specifically designed to integrate
easily into existing curricula; as mentioned above, this is a vital issue which ITSs must ultimately
address.
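As a very rough illustration of the kind of fine-grained record keeping and progress tracking described above, the sketch below shows one possible shape such bookkeeping might take. It is a minimal sketch only: the skill names, mastery threshold, and moving-average update rule are illustrative assumptions rather than any particular system's method, which would more likely rest on a calibrated inference procedure such as a Bayesian student model.

from dataclasses import dataclass, field
from typing import Dict, List

MASTERY_THRESHOLD = 0.85  # assumed criterion for "mastered"; purely illustrative

@dataclass
class StudentRecord:
    """One probability-of-mastery estimate per fine-grained skill."""
    name: str
    mastery: Dict[str, float] = field(default_factory=dict)

    def record_attempt(self, skill: str, correct: bool, weight: float = 0.2) -> None:
        # Nudge the estimate toward 1.0 or 0.0 after each attempt; this simple
        # moving average stands in for whatever inference the tutor really uses.
        prior = self.mastery.get(skill, 0.5)
        target = 1.0 if correct else 0.0
        self.mastery[skill] = prior + weight * (target - prior)

    def skills_needing_review(self) -> List[str]:
        # Skills still below the mastery criterion, e.g. for a teacher's report.
        return [s for s, p in self.mastery.items() if p < MASTERY_THRESHOLD]

student = StudentRecord("example student")
student.record_attempt("solve linear equation", correct=True)
student.record_attempt("factor quadratic", correct=False)
print(student.skills_needing_review())   # both skills, early in the unit

Even so simple a record, kept automatically for every exercise attempted, would relieve a teacher of much of the routine bookkeeping that mastery learning otherwise demands.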
Another potential hurdle is that a flexible mastery-type curriculum needs to allow students to redo
assignments and retake unit tests at will. Schools may argue that they do not have the resources to
administer them – and under the current system of education this would be true. Nonetheless, textbook publishers
typically offer hundreds of test questions with every text that they publish; ITS publishers should do the
same. A unique randomized sample of these questions could then be assembled ad hoc whenever a test
was requested. Once in place, such a system would become ever easier to administer [Tyree].
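A sketch of how such ad hoc test assembly might look follows. It assumes a simple question bank keyed by curriculum unit; the bank contents, unit names, and test length are hypothetical placeholders rather than any publisher's actual format.

import random

# Hypothetical publisher question bank: curriculum unit -> question identifiers.
QUESTION_BANK = {
    "unit 3: linear equations": [f"q3-{i}" for i in range(1, 201)],
    "unit 4: polynomials":      [f"q4-{i}" for i in range(1, 201)],
}

def assemble_unit_test(unit, n_questions=20, seed=None):
    # Draw a unique random sample whenever a test (or retest) is requested;
    # sampling without replacement guarantees no repeated questions on one test.
    rng = random.Random(seed)
    pool = QUESTION_BANK[unit]
    if n_questions > len(pool):
        raise ValueError("question bank too small for requested test length")
    return rng.sample(pool, n_questions)

# Each retake draws a fresh test from the same bank.
first_attempt = assemble_unit_test("unit 3: linear equations")
second_attempt = assemble_unit_test("unit 3: linear equations")

With a large enough bank, two retakes are unlikely to share many questions, so a test remains meaningful however often a student attempts it.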
Furthermore, fears abound that human teachers will disappear. “I've had people say to me: ‘Oh,
you're trying to replace the teacher,’” says Dorelia Harrison, a teacher at McDonough. “No. It has given
me more time to do individualized instruction, and a lot of it” [Button]. Mastery proponents are quick and
unanimous in their insistence on teachers as the strong link in the chain. Says Keller, “The teacher has a
great deal to do. It is the teacher who designs the course, prepares the study guides and questions, selects
and trains the proctors [teaching assistants], handles disputes on test scoring, and generally supervises
the entire process.” [Chance]
Thus human interaction should always be valued over programmed computer responses; computers
are in no way replacements for teacher or textbook. “A software program and a teacher are vastly
different; the teacher humanistically responds in ways the software program cannot” [Caissy]. ITSs
belong within a larger instructional context – as research has shown, computers are best used as tools
to develop specific skills; they cannot so easily convey an infectious enthusiasm for ideas.
7 Politics, Economics, and Tradition
Unfortunately, the exhaustive deliberations and finely-tuned cognitive models of ITSs have had little
influence to date on the growing public fervour for educational technology. Citing much of the flawed
research that so concerned Edward Miller [Oppenheimer], at the turn of the 21st century the
Clinton Administration adopted a proposal to put 40 billion dollars of high-tech teaching tools into
schools [USGov], even while much equipment already in place goes unused. The initiative’s advisory panel is
populated two-to-one by representatives of the computer industry. Given the nature of North American
capitalism, it is not too cynical to suggest vested interests; clearly industry would like to see large sums
spent on its products, regardless of the results.
One of the fundamental criticisms leveled against computer-assisted instruction is that an infatuation with hardware
has minimized the concern about the educational merits of the courses. Critic after critic has bemoaned the poor
quality of the material. As the market has grown, the rush to produce software has resulted in a lowering of quality.
[Rosenberg]
When panel member Esther Dyson, the oft-quoted and influential “queen of cyberspace”, was asked
what discussion the group had had about the potential downside of computerized education, she
answered that there hadn't been any [Oppenheimer]. The new Bush administration shows every indication
of continuing in Clinton’s footsteps, albeit with less fanfare. [USGov] This apparent obsession with
technology seems like a wasteful distraction to some.
The limitations of technology in providing the answer to educational problems are revealed in a simple insight
underlying what has come to be called the Comer process, after Dr. James Comer, a Yale University psychiatrist: “...a
child’s home life affects his performance at school, and ... if schools pay attention to all the influences on a child, most
problems can be solved before they get out of control. The Comer process ... encourages a flexible, almost custom-tailored approach to each child.” These observations seem so obvious and so unglamorous compared to high
technology that it is not surprising that it has taken so long for adequate attention to be paid to them. [Rosenberg]
Philosophers such as Foucault and critics of first-world industrial practice would find the
government’s initiative recklessly irresponsible [Wenk]. Members of Clinton's panel may have blindly
sung the praises of their products for education, but as Foucault so passionately argued, tools – and the
technology that they embody – are not neutral.
They are born into, developed for, and applied on behalf of the interests of power…the major purpose of television is
the accumulation of wealth, a purpose that some futurists also claim for software and for education generally. Many
observers believe that the development of technology and education along these lines has reached a dangerous point –
where ‘technological solutions’ threaten to overturn democracy altogether. [Howley]
In one lengthy education supplement, the New York Times exposed numerous examples of how donors
of education technology were seeking to control the schools’ curriculum [Oppenheimer].
On the other hand, the reaction of the educational establishment to methods which propose
change, even if there are clear advantages, is equally disturbing. Kenneth Koedinger, co-developer of
PAT and a professor of computer science at Carnegie-Mellon University, notes that although the I CAN
Learn system proved successful in an inner-city school setting, it makes conservative educators uneasy
in its challenge to the existing order: “having a system whose major virtue is student achievement—I
mean, it's sad but true—is not necessarily a winner.” [Button] Fred Keller similarly does not offer much
hope of progress. After ten years of research which proved the significance of Mastery Learning, “the
evidence was so clear. I thought, ‘Well, we have got a better mousetrap and the world will beat a path to
our door.’ It didn’t happen. I had no idea how immovable the educational establishment was” [Chance].
Albert Shanker, for 22 years the president of the American Federation of Teachers, has long struggled
with educational administration:
Our persistent educational crisis shows that we've reached the limits of our traditional model of education. Given our
present and foreseeable demographic, economic, social, and educational circumstances… the capacity for responding
to new challenges must itself be institutionalized. Unfortunately, the bureaucratic nature of our system of public
education makes it impossible for our schools to work in these ways. [Shanker]
Schools have many reasons to perpetuate bias, however artificial. One is the cachet of intellectual
elitism; if everyone earns an A, those who benefit most from the existing system (and their children)
lose their power advantage. Another reason to keep the current grading system involves the companies
that recruit graduates; they need some simple way of distinguishing between applicants. If everyone
earns an A, hirings become less clear-cut. In the face of such attitudes, educators like Keller see the key
obstacle to student achievement as “the force of tradition. We’ve built a structure around group
instruction to serve and protect it.” [Chance]
It is reasonable to assume, therefore, that while schools will be willing recipients of vast
technological donations, they will only employ them within their accustomed, traditional paradigm.
Unfortunately, the effective introduction of ITSs may generally require
...substantial changes in both teachers’ and students’ behaviour. For example, effective usage of intelligent tutors is
likely to require much greater role change on the teachers’ part than usage of more traditional drill and practice
applications in which computers are often used as sophisticated electronic workbooks and thus fit much more readily
into established classroom roles and routines. [Schofield]
Unfortunately, says educator L.J. Perelman, “The common practice of trying to simply add-on
technology to education while actively prohibiting transformation of the rest of the system’s social
infrastructure is just what has made so much of the technological experiment in education fruitless.”
[Perelman]
8 The Future of Schools
The advent of security guards, metal detectors, and even police patrols at an increasing number of high
schools on the one hand, and bullying, stabbings, and shootings by students on the other, only
reinforces the notion of school as a “minimum-security” institution. If sophisticated, adaptive educational
software such as ITSs could be used at home, schools – with their increasingly negative connotations –
could become more of a part-time option than a full-time requirement.
A growing number of wage earners work and shop at home by computer; this trend will continue
to expand to other areas of life. Indeed, there are already ITSs that function effectively over an internet
connection [Brusilovsky]. In the post-secondary arena, “wired” education is taking firm root as more
universities and colleges offer on-line courses, while a few pioneers even eschew campuses altogether to
adopt an exclusively on-line presence [CEW]. Political considerations aside, there is no obvious reason
why this paradigm should not be extended to child education; the possibility is already being explored at
Canada’s own Virtual High School. Promisingly, their self-paced on-line courses –
– have animations, visuals and auditory devices to reinforce the curriculum. They are also scripted to provide
feedback for formative problem solving. These online courses also have formative evaluation vehicles built into them
to give the student feedback about the quality of their learning. [VHS]
Each course is moderated by a human teacher who monitors student progress and answers any questions
which may come up. In answer to the question of whether it would be better to take courses in a typical
high school classroom, their response is:
Yes, a typical high school class offers a host of benefits and advantages that an Internet delivered course can never
hope to achieve. However, these courses have benefit in that they allow the student to develop their own self-motivation skills, event management skills and other attitudes necessary for survival in today's academic and work
milieu. [VHS]
By contrast, the “benefits” of a regular classroom are those that can never be fully replaced by
technology. Schools are socialization centers for those who want or need socialization, and can offer
precisely what is not available in the home: labs, workshops, recreational facilities, and expert
remediation. But clearly, the virtual approach as stated has its own benefits: an introduction to personal
responsibility, where success (made more likely through self-pacing and individualized feedback) has a
positive impact on one’s future, and failure costs little more than time, since one can always try again
without the stigma of being publicly ‘held back’ a year. Students who would enroll in the virtual school
are listed as –
Those students who cannot go to a regular high school for a whole host of reasons; confined to home due to a
disability; home-schooled; parents; students who cannot get a particular course in their high school; students who
cannot get a particular course at a particular time; summer-school students; students who would like to fast track or
upgrade, etc. [VHS]
The implications for homeschooling are compelling. In this context, educational software in
general, and ITSs in particular, would greatly assist work-at-home teacher-parents. While some people
may find it disturbing that computers would ‘separate’ family members to be each quietly absorbed in
their own tasks (and thus not ‘communicating’), this way of life was common until family shops gave
way to mass-production at the beginning of the 20th century. Instead, by keeping the family in closer
proximity, computer-assisted education may actually improve family coherence; it is arguable that the
often-repeated complaint that children today receive inadequate parental supervision may be partly due
to the fact that, with the Machine-Age insistence on mandatory attendance at school and work, family
members see precious little of each other as it is. If parents’ work requires that they leave home every
day, supervision of homeschooling is indeed an issue. In such cases, schools could assist by more
actively tracking progress and providing supplemental social structure.
Thus a ‘hybrid’ model of education emerges, wherein students study mostly at home with the help of
an ITS, and attend local schools on a drop-in basis to discuss problems, run experiments, participate in
sports and clubs, and attend assemblies and dances.
9 Conclusion
While other influential sectors of society race forward as fast as the pace of discovery can take them,
education lags behind. In particular, it proves remarkably slow to accept the growing wealth of
knowledge regarding human cognition, clinging instead to familiar procedures and practices which were
developed in another time, for a purpose which no longer applies. The forecasts of our societal future,
and the future of the planet we are so busily corrupting, grow worse daily. If we are to redress our evils,
and live more by our ideals, we must lay the foundation for coming generations by developing the best
possible system of education, one that creates a citizenry which can deal confidently and intelligently
with the challenges to come.
There are admittedly many dangers implicit in information technology: too much enthusiasm for
computers sends the message that the digital world is somehow more compelling than the real one; the
manipulation of software can be mistaken for the manipulation of concepts; there are some who see in
computers the power to encourage children to sit still and be quiet; and certainly, placing a child in front
of a computer will not necessarily solve her problems if she was never read to by her parents. But
consider for a moment how different, how much better, society could be if its basis – education – were
founded on principles of enlightenment instead of clockwork, where most of its members were raised to
be knowledgeable and independent, and thus empowered. Naturally, those who profit by ignorance
might oppose such circumstances, but if all were to enjoy learning and to learn well, we would be
taking a first small step toward a better world.
References
[Anderson 95] Anderson, J.R., Corbett, A.T., Koedinger, K.R. & Pelletier, R. Cognitive Tutors: Lessons Learned.
The Journal of the Learning Sciences, v4 n2, pp.167-207, Lawrence Erlbaum Associates, Inc. 1995.
[Anderson 91] Anderson, J.R. & Pelletier, R. A Development System for Model-Tracing Tutors. Proceedings of
the International Conference of the Learning Sciences, 1-8. Evanston, IL, 1991.
[Baldi] Baldi, P. & Brunak, S. Bioinformatics. The MIT Press, 1998.
[Bloom 76] Bloom, B.S. Human Characteristics and School Learning. NY: McGraw-Hill, 1976.
[Bloom 84] Bloom, B.S. The 2 Sigma Problem: The Search for Methods of Instruction as Effective as
One-to-One Tutoring. Educational Researcher, 13, pp. 4-16, 1984.
[Brown] Brown, J.S. & Burton, R.R. Diagnostic Models for Procedural Bugs in Basic Mathematical Skills.
Cognitive Science, n2, pp. 155-192, 1978.
[Brusilovsky] Brusilovsky, P. Course Sequencing for Static Courses? Applying ITS Techniques in Large-Scale
Web-Based Education. In Intelligent Tutoring Systems, proceedings of the 5th International Conference, ITS 2000,
Montreal, Canada, June 19-23, 2000. Berlin: Springer-Verlag 2000.
[Bunt] Bunt, A., Conati, C., Huggett, M. & Muldner, K. On Improving the Effectiveness of Open Learning
Environments through Tailored Support for Exploration. To appear in the proceedings of the 10th International
Conference on Artificial Intelligence in Education (AI-ED 2001) in San Antonio, Texas, May 19-23, 2001.
[Burton M] Burton, M., Brna, P. & Pilkington, R. Clarissa: A Laboratory for the Modelling of Collaboration.
International Journal of Artificial Intelligence in Education v11, pp. 79-105, 2000.
[Burton R] Burton, R.R. & Brown, J.S. An Investigation of Computer Coaching for Informal Learning Activities.
In Sleeman D. & Brown J. S. (Eds.), Intelligent Tutoring Systems, pp. 79-98. New York: Academic Press 1982.
[Button] Button, G. Algebra made easy. Forbes Magazine, September 22 1997.
[Caissy] Caissy, G. Evaluating Educational Software: A Practitioner’s Guide. Phi Delta Kappan, v66 n4,
1984.
[CEW] Canadian Education on the Web. A comprehensive list of online educational resources compiled
by the Ontario Institute of Studies in Education, available online at
http://www.oise.on.ca/~mpress/eduweb.html and /distance.html
[Carnegie] Carnegie Learning Inc., Pittsburgh PA. Available online at http://www.carnegielearning.com
[Cetron] Cetron, M. Reform and Tomorrow's Schools. Technos Quarterly For Education and
Technology v6 n1, Spring 1997.
[Chance] Chance, P. The Revolutionary Gentleman. Psychology Today, pp.44-48, September 1984.
[Clancey] Clancey, W.J. From GUIDON to NEOMYCIN and HERACLES in Twenty Short Lessons. The AI
Magazine, August 1986.
[Cohen] Cohen, P.A. Kulik, J.A. & Kulik, C.C. Educational Outcomes of Tutoring: A Meta-Analysis of Findings.
American Educational Research Journal, v19 n2, pp. 237-248, Summer 1982.
[Collins] Collins, A. & Brown, J.S. The Computer as a Tool for Learning Through Reflection. In H. Mandl & A.
Lesgold (eds.), Learning Issues for Intelligent Tutoring Systems, pp. 1-18, New York: Springer, 1988.
[Conati] Conati, C., Gertner, A., VanLehn, K. & Drudzel, M.J. On-Line Student Modeling for Coached Problem
Solving Using Bayesian Networks. In Jameson A., Paris C., Tasso C., (eds.) User Modeling; Proceedings of the
Sixth International Conference, UM97. New York: Springer Wien, 1997.
[Dillenbourg] Dillenbourg, P., Baker, M., Blaye, A. & O’Malley, C. The Evolution of Research on Collaborative
Learning. In Spada, H. & Reimann, P. (Eds) Learning in Humans and Machines: Towards an Interdisciplinary
Learning Science. Oxford, UK; New York: Pergamon, 1995.
[Disney] Disney Corporation educational products, available online at http://disney.go.com/educational/.
[Gelman] Gelman, A., Carlin, J., Stern, H., & Rubin, D. Bayesian Data Analysis. Chapman & Hall / CRC 1995.
[Gertner] Gertner, A.S., Conati, C. & VanLehn, K. Procedural Help in Andes: Generating Hints Using a
Bayesian Network Student Model. AI in Education, pp.106-111, AAAI Press / The MIT Press 1998.
The Andes system has been in ongoing use at the US Naval Academy since 1997.
[Graesser] Graesser, A.C. ,Wiemer-Hastings, P., Wiemer-Hastings, K., Harter, D., Person, N., & the TRG. Using
Latent Semantic Analysis to Evaluate the Contributions of Students in AutoTutor. Interactive Learning
Environments, in press.
[Halpern] Halpern, S. Beware of Computerized Baseball Cards, Robot Children. The Chronicle, Duke
University Press, October 26 1995.
[Horton] Horton, L. Mastery Learning. Bloomington, IN: Phi Delta Kappa Educational Foundation, 1981.
[Horvitz] Horvitz, E. http://www.auai.org/BN-Testimonial.html
[Horvitz 98] Horvitz, E., Breese, J., Heckerman, D., Hovel, D., & Rommelse, K. The Lumière Project:
Bayesian User Modeling for Inferring the Goals and Needs of Software Users. Proceedings of the
Fourteenth Conference on Uncertainty in Artificial Intelligence, July 1998.
[Howley] Howley, C. & Howley, A. The Power of Babble: Technology and Rural Education. Craig B.
Howley, Appalachia Educational Laboratory; Aimee Howley, Associate Dean, College of Education,
Marshall University, 1995.
[Jackson] Jackson, W. Methods: Doing Social Research. Scarborough, Ontario: Prentice-Hall Canada Inc., 1995.
[Jameson] Jameson, A. Numerical Uncertainty Management in User and Student Modeling: An Overview of
Systems and Issues. User Modeling and User-Adapted Interaction, n5, 1996.
[Katz 92] Katz, S., Lesgold, A., Eggan, G., & Gordon, M. Modelling the Student in Sherlock II. Journal of
Artificial Intelligence in Education, v3 n4, pp. 495-518, 1992.
[Katz 98] Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordon, M., & Greenberg, L. Sherlock 2: An
Intelligent Tutoring System Built on the LRDC Tutor Framework. In C. P. Bloom & R. B. Loftin (Eds.),
Facilitating the development and use of interactive learning environments. Mahwah, NJ: Erlbaum. pp. 227-258,
1998.
[Koedinger] Koedinger, K.R. & Anderson, J.R. Intelligent Tutoring Goes To School in the Big City. International
Journal of Artificial Intelligence in Education, n8, pp.30-43 1997.
The PAT system has been in ongoing use at several Pittsburgh high schools since 1997.
[Koohang] Koohang, A., & Stepp, S. Computer Assisted Instruction: A support for the Mastery
Learning System. Alex A. Koohang, North Carolina Wesleyan College, Sidney L. Stepp, Southern Illinois
University at Carbondale – “Published Viewpoints” 1992.
[Llewellyn] Llewellyn, G. Real Lives: Eleven Teenagers Who Don't Go to School. Lowry House,
Eugene Oregon, 1993.
[Marsh] Marsh, G.E. A Brief History of Instructional Technology. From the graduate course
AIL 601: Theories of Learning Applied to Technological Instruction, College of Education, University
of Alabama. Available online at
http://www.bamaed.ua.edu/ail601/overview.htm
[McKendree] McKendree, J., Radlinski, B. & Atwood, M.E. The Grace Tutor: A Qualified Success. In
C. Frasson, G. Gauthier & G.I. McCalla (eds.) Intelligent Tutoring Systems: Second International
Conference, ITS ’92 Proceedings, pp. 677-684, Berlin: Springer-Verlag, 1992.
[Mosle] Mosle, Sara. Public Education’s Last, Best Chance. The New York Times Magazine, sec.6/p.37
August 31, 1997.
[Ohlsson] Ohlsson, S. Some Principles of Intelligent Tutoring. In R.W. Lawler & M. Yazdani (Eds.), Artificial
Intelligence and Education (v1): Learning Environments and Tutoring Systems, pp. 203-37. Norwood, NJ: Ablex
1987.
[Oppenheimer] Oppenheimer, T. The Computer Delusion. The Atlantic Monthly. Volume 280, No. 1;
pp. 45-62. July 1997.
[Papert] Papert, S. Mindstorms: Children, Computers, and Powerful Ideas. NY: Basic Books 1980.
[peer] Peer Learning Citations:
Behm, R. Ethical Issues in Peer Tutoring: A Defense of Collaborative Learning. Writing
Center Journal, v10 n1, pp. 3-12, Fall-Winter 1989.
Fuchs, D. Peer-Assisted Learning Strategies: Making Classrooms More Responsive to
Diversity. A paper of the National Institute of Child Health and Human Development, 1996.
Forman, E. Learning in the Context of Peer Collaboration: A Pluralistic Perspective on
Goals and Expertise. Cognition and Instruction v13 n4 pp. 549-64, 1995.
McCauliffe, T. Status Rules of Behaviour in Scenarios of Peer Learning. Journal of
Educational Psychology, v86 n2, pp. 163-72, June 1994.
[Perelman] Perelman, L.J. School’s Out: Hyperlearning, the New Technology, and the End of
Education. NY: William Morrow 1992.
[Postman] Postman, N. Technopoly: The Surrender of Culture to Technology. NY: Vintage Books 1992.
[Pride] Pride, M. Homeschool Goes High Tech. Practical Homeschooling No. 6. 1994.
[Reichman] Reichman, J.H. & Samuelson, P. Intellectual Property Rights In Data: An Assault On The Worldwide
Public Interest In Research And Development. Vanderbilt Law Review, n50, January 1997.
Draft available at http://www.ksg.harvard.edu/iip/acicip/REISAMDA.HTM
[Rosenberg] Rosenberg, R. The Social Impact of Computers. 2nd ed., Chapter 6 – Computers and Education, San
Diego, CA: Academic Press 1997.
[Russell] Russell, S.J. & Norvig, P. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ:
Prentice Hall 1995.
[Schilpp] Schilpp, P. Albert Einstein: Philosopher-Scientist. NY: Library of Living Philosophers, Inc., 1951.
[Schofield] Schofield, J.W. & Evans-Rhodes, D. Artificial Intelligence in the Classroom: The Impact of a
Computer-Based Tutor on Teachers and Students. Arlington, VA: Office of Naval Research, Cognitive Science
Program 1989.
[Schooler] Schooler, L.J. & Anderson, J.R. The Disruptive Potential of Immediate Feedback. Proceedings of the
Twelfth Annual Conference of the Cognitive Science Society, pp.702-708, Cambridge, MA 1990.
[Self] Self, J.A., Bypassing the Intractable Problem of Student Modeling. In C. Frasson and G. Gauthier (eds.).
Intelligent Tutoring Systems: At The Crossroad Of Artificial Intelligence And Education (pp.107-123). Norwood,
NJ: Ablex Publishing Corporation 1990.
[Shanker] Shanker, A. A Proposal for Using Incentives to Restructure Our Public Schools. Phi Delta
Kappan January 1990.
[Shute 90] Shute, V.J. & Glaser, R. Large-scale evaluation of an intelligent tutoring system: Smithtown.
Interactive Learning Environments, v1, pp. 51-76, 1990.
[Shute] Shute, V.J. & Psotka, J. Intelligent Tutoring Systems: Past, Present, and Future. In Handbook of
Research on Educational Communications and Technology. Jonassen, D.H. (ed.) NY: MacMillan 1996.
[Stoll] Stoll, C. Silicon Snake Oil: Second Thoughts on the Information Highway. NY: Doubleday, 1995.
[Tokuda] Tokuda, N. & Fukuda, A. A Probabilistic Inference Scheme For Hierarchical Buggy Models.
International Journal of Man-Machine Studies, v38, pp. 857-872, 1993.
[Troxel] Troxel, S. Innovation for the Common Man: Avoiding the Pitfalls of Implementing New
Technologies. Paper to the 2nd Annual Conference on Rural Datafication, Minneapolis, Minn. Liberty
University 1994.
[Tyree] Tyree, A.L. Cost-effective Computer Tutorials. University of Sydney Press, 1992.
[USGov] The President's Educational Technology Initiative. Drafted by the Clinton Administration, at
http://www.whitehouse.gov/WH/EOP/OP/edtech/ (since deleted).
Note that Congress initially agreed to $2 billion; the bulk of the suspiciously large $40 billion figure involves
equipment presumably to be donated by industry.
No Child Left Behind, drafted by the Bush Administration, available online at
http://www.whitehouse.gov/news/reports/no-child-left-behind.html#9
[VanLehn] VanLehn, K., Student Modeling. In Polson, M. & Richardson J. (eds.), Foundations of
Intelligent Tutoring Systems, pp. 55-78. Hillsdale, NJ: Erlbaum, 1988.
[VHS] The website of the Virtual High School, FAQ in the Introduction, at
http://www.virtualhighschool.com/frame_main.htm
[Warner] Warner Brothers, example of an educational game available online at
http://www.warnerbros.com/pages/kids/games/crosswordpuzzle.jsp.
[WBT] WBT Systems: TopClass, Dublin, Ireland. Available online at
http://www.wbtsystems.com.
[WebCT] WebCT: World Wide Web Course Tools, Vancouver, Canada. Available online at
http://www.webct.com/.
[Wenk] Wenk, E. New Principles for Engineering Ethics. Bent, Tau Beta Pi, pp. 18-23, 1988.
[Wolf] Wolf, G. Steve Jobs: The Next Insanely Great Thing. Wired Magazine, February 1996.
[Zielinski] Zielinski, T.J. The Mastery Learning Alternative to Physical Chemistry Lecture. Chemistry
Department, Niagara University, NY 14109, 1996. Available online at
http://www.niagara.edu/~tjz/dpapers/essay2.htm