Table of Contents

I. Introduction to the Question
II. The Question
III. Purpose of this Paper
IV. Drill and Practice
V. Tutorial Systems
VI. Simulations, Microworlds, and Programming
VII. Intelligent Tutoring Systems
VIII. Summary
IX. A Postscript - Speculations on Three Decades of Computers in Education
X. References
Uses of Computers in Education:
Drill-and-Practice, Tutorial, Simulation,
and Intelligent Tutoring Systems
Epistemological, Ontological, and Methodological Dimensions
Submitted to: Professor Michael Streibel
University of Wisconsin-Madison
Ph.D. Preliminary Examination Question
October 26, 1992
Submitted by: David C. Gibbs
3140 Ehrlinger Road
Janesville, WI 53546
(608)754-0153
I. Introduction to the Question
Use of computers in educational settings dates to the earliest days of
mainstream computing. As mainframe machines made their way from government
and military applications to large businesses, research institutions acquired
computing facilities.
While university researchers had been involved in the
development of digital computers (Goldstine, 1972), widespread use came somewhat
later when business and military needs for hardware and software development
exceeded the capabilities of private companies. Timesharing computers arrived on
campuses where faculty across a range of disciplines used them primarily as number-crunchers for research endeavors.
A new area of study emerged inasmuch as
government and industry needed specialists to program, develop, and teach about
computers themselves (Capel, 1992).
Early instructional efforts were directed at
learning the languages needed to control the computer. Not until 1964, when John
Kemeny at Dartmouth created the programming language BASIC (which itself
became an object of study), was the computer used for learning about something other
than itself.
Faculty soon realized the computer could be utilized for instruction
within all instructional areas, and many set about the task of making it fit whatever
their current instructional interests happened to be.
Of course computers, and more recently microcomputers, are simply the latest
technology to arrive and be heralded as the saviors of education. Like television
before them, and radio and film before that, computers were embraced by reformers and
thrust into use wherever possible (Cuban, 1986). Far too often, their use was seen as
implicitly beneficial and accepted without question (Sloan, 1985).
Enthusiastic educators asked "HOW?" instead of "WHY?" (Cuffaro, 1985), and it became heresy to
critically examine their use.
II. The Question
Critically discuss the epistemological, ontological, and methodological dimensions of drill-and-practice, tutorial, simulation, and intelligent-tutoring-system computer use in classroom settings.
III. Purpose of this Paper
This paper shall examine the classroom use of computers, focusing upon four specific implementations: drill-and-practice, tutorials, simulation, and intelligent tutoring systems.
Examples of each implementation shall be given with an
explanation of the intentions of the creator of each style.
The epistemological,
ontological, and methodological dimensions accompanying use of the computer in
such a way shall be revealed through critical analysis. Lastly, the implications of
such computer use will be examined.
IV. Drill and Practice
In 1966, Patrick Suppes gave a glowing description of the (potential) use of
computers in education; computer assisted instruction (CAI) would offer millions of
schoolchildren "access to what Philip of Macedon's son Alexander enjoyed as a royal
prerogative: the personal services of a tutor as well-informed and responsive as
Aristotle" (Suppes, 1966, page 3). Writing in Scientific American, Suppes spoke of individualizing instruction in terms of evaluating prerequisite knowledge, customizing the rate and style of learning, monitoring achievement, and the remarkable efficiency of instruction. At that time, his CAI programs had been in use a little over two years.
A professor of philosophy and psychology at Stanford, Suppes holds world views shaped by his disciplines and, coincidentally, consistent with features of digital
computers. Logic consists of discrete formulas and propositions; behavior is made up
of discrete stimulus-response units (Solomon, 1986). Both of these views are present
in his drill-and-practice environment, consisting of the subject areas of mathematics,
reading, and language arts.
A learner in the CAI framework is essentially a "black-box"; the object of
predetermined stimuli offered with the hope of obtaining desired responses.
Knowledge consists of encoded "facts"; these facts are manipulated according to rules.
Learning has occurred when the student's responses to problems posed by the
computer match those expected by the program, within some range of achievement.
Consider the following methodology of CAI first published in 1965 (Suppes,
1980):
"It consists of approximately fifty sections, each introducing a new
"concept", followed by problems to be worked. The number of problems
varies considerably from concept to concept. A criterion of successive-number-correct has been set for each section. If the student meets the
criterion he proceeds immediately to the next concept (or to review
section if scheduled); if he cannot meet the criterion he finishes the
section and goes into a remedial branch which presents the same
concept in a new way." (page 222)
"It", in this case, is Suppes' program for first graders. Surprisingly enough, that is an
appropriate description of much drill-and-practice software of today.
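The branching logic of the passage above is simple enough to sketch in code. The following rendering (in Python, for concreteness) is a hypothetical reconstruction: the data layout, criterion value, and function names are illustrative assumptions, not Suppes' actual program.

```python
# A minimal sketch of the quoted branching logic: meet the criterion of
# successive correct answers and advance at once; fail it, finish the
# section, and take a remedial branch presenting the same concept anew.
# All names and data here are illustrative assumptions.

def ask(problem):
    """Present the stimulus and collect the learner's typed response."""
    return input(problem["prompt"] + " ").strip()

def run_section(problems, criterion=3):
    """Return True as soon as the successive-number-correct criterion is met."""
    streak = 0
    for problem in problems:
        if ask(problem) == problem["expected"]:
            streak += 1
            if streak >= criterion:
                return True          # proceed immediately to the next concept
        else:
            streak = 0               # a miss resets the run of correct answers
    return False                     # finished the section without meeting it

def run_course(concepts):
    for concept in concepts:
        if not run_section(concept["problems"]):
            run_section(concept["remedial"])   # same concept, presented anew

# Example: a one-concept course in first-grade arithmetic.
course = [{
    "problems": [{"prompt": "2 + 3 =", "expected": "5"},
                 {"prompt": "4 + 1 =", "expected": "5"},
                 {"prompt": "3 + 3 =", "expected": "6"}],
    "remedial": [{"prompt": "Count on from 2: 2 + 3 =", "expected": "5"}],
}]
# run_course(course)   # uncomment to run interactively
```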
The implications of CAI/drill-and-practice are many. Students learn answers
are right or wrong, and right answers are those associated with rewards. Speed and
accuracy are important for advancement towards the educational goals. Contrary to
Suppes' (1980) encouragements, the teacher's role becomes largely one of manager;
the classroom culture resembles that of "work" more than that of learning, due to the
inordinate emphasis on the technological delivery system (Streibel, 1986). Computer
assisted instruction seen in this light serves both teachers, who delegate rote learning
tasks to the computer, and the technological society, apt to benefit from efficient and
productive (albeit unidimensional) workers.
V. Tutorial Systems
Tutorial courseware systems emerged from the wishes of instructional designers
to extend the rote learning of drill-and-practice to include what was seen as a
fundamental part of learning: dialogue. If CAI could not be of value in encouraging
the (non-behavioral) skills of reflection and critical thinking, for example, perhaps the
computer could serve as "tutor", complete with question-and-answer capabilities.
Among the earliest developers of tutorial courseware systems was Alfred Bork,
a physics instructor interested in learning through dialogue. Bork's (1980) idea of
dialog is
"a 'conversation' between a student and a teacher, where the student is
at a computer display and the teacher is conducting the dialog through
the medium of a computer program" (page 15)
He developed tutorial programs consisting of three types of sub-systems: on-line tests,
remedial dialogues, and interactive proofs (Bork, 1980). A typical dialog may begin
with an on-line test to check student progress against some model of the student
programmed into the tutorial. (Depending upon the sophistication of the program, a
model may be constructed as the session unfolds. In either case, the learner is subject
to constant "quality control" (Streibel, 1986).) A remedial dialog may be invoked by
the program should the on-line test indicate a performance deficiency. Interactive
proofs allow a student to examine data and examples illustrating concepts and
subsequently pose their own models. Bork's background in physics and mathematics
led him to construct programs in those areas; those subjects, rooted in formulas and
algorithms, were easier to represent on a computer.
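Bork's three sub-systems suggest a straightforward control flow. The sketch below is a hypothetical rendering; the threshold, data layout, and names are assumptions made for illustration, not Bork's courseware.

```python
# Bork's sub-systems as one control flow: an on-line test scores the
# learner against a model updated as the session unfolds; a deficiency
# invokes a remedial dialogue; otherwise an interactive proof is offered.

PASS_THRESHOLD = 0.8   # assumed criterion for a "performance deficiency"

def online_test(topic):
    """Score the learner on the topic's test items (fraction correct)."""
    results = [input(q + " ").strip() == a for q, a in topic["test_items"]]
    return sum(results) / len(results)

def remedial_dialogue(topic):
    print(topic["remedial_text"])        # re-present the concept

def interactive_proof(topic):
    print(topic["proof_prompt"])         # learner examines data, poses a model

def tutorial_session(student_model, topic):
    score = online_test(topic)           # constant "quality control"
    student_model[topic["name"]] = score # model constructed as session unfolds
    if score < PASS_THRESHOLD:
        remedial_dialogue(topic)
    else:
        interactive_proof(topic)
```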
Human-computer dialogues are decidedly different from the human-human
dialogues they hope to imitate. The most obvious difference is that of communication.
When humans engage in dialogue, they speak to each other. When humans engage
in dialogue with computers, they type phrases on the keyboard and computers write
words on display screens. (This was true in Bork's time, is true in 1992, and given
the difficulties of speech recognition, is apt to be the case for many years to come.)
The implications of this kind of "interactive" communication process shall be
discussed below. Less obvious but perhaps more important are the differences in
control and the view of learner. Human-human dialogues involve shared control,
even in the presence of power differentials. Learners in human-computer dialogues
have only pseudo-control, and then only over such delivery factors as rate, route, and
timing (Streibel, 1986). A good human tutor engaged in dialogue will attempt to
understand and empathize with the student; the computer can only describe the
learner in preprogrammed generic terms, based upon established rules.
A learner in a tutorial system, then, is a rule-following individual; individual
inasmuch as s/he will be cast as one of a finite number of "types" of learner that have
been prepared by the programmers. Knowledge once again consists of discrete units,
but in this case with more importance (than in CAI) placed upon the rules, formulas,
and algorithms that connect the atoms. What it means "to know" is to satisfy the
generic inquiries put forth by the tutor. Thus, learning takes place when the tutee
can manipulate the atoms, according to the rules, in a procedural fashion, all to the
satisfaction of the tutor.
This analysis, not surprisingly, echoes that of the behaviorist CAI framework. Hoping
to exceed the limitations of drill-and-practice, tutorial systems utilize more
sophisticated software techniques, a dialogue interface, and the guise of more
challenging subject matter. Nonetheless, they are unable to escape the behaviorist
and technological paradigm.
Manipulation of the generic learner towards
predetermined responses under algorithmic control only extends the limitations.
Who is served by tutorial programs? Students, denied the opportunities for
meaningful interaction with a human tutor, are clearly NOT served. Developing
procedural problem-solving techniques (a theme to be repeated in the next section), that is, learning to "think like a computer", legitimizes one way of thinking.
Reflection, intuition, and simply "messing around" are not a part of learning through
tutorial systems. Viewed this way, society in general is not served. Teachers may
benefit, if class sizes of 32 students can be reduced through the use of tutorial
systems. The real winner from computerized dialogs (and CAI for that matter) is the
technical society; its citizens are learning valuable lessons both explicit (keyboard
skills, procedural analysis, sub-skill mastery) and implicit (respect for technology as
authority, validity of machine intelligence, and reification of social divisions (Apple,
1992)).
VI. Simulations, Microworlds, and Programming
In the late '70s it was possible to categorize uses of the computer in one of three
ways: tutor, tool, and tutee (Taylor, 1980). Use of the computer as a tireless drill
instructor or as a skilled master of dialogue involves it in a role as tutor. Using the
machine to process text (word processing) or data (spreadsheets or data managers)
establishes its use as a tool. Allowing the learner to teach the computer inverts the
tutor relationship, putting the machine in the role of tutee. Of course, it is intended
that the learner gains in understanding from teaching the computer; Taylor (1980)
states "because you can't teach what you don't understand, the human tutor will
learn what he or she is trying to teach the computer" (page 4). It is this use of the
computer that encompasses simulations, microworlds, and programming. This section shall examine the simulation and microworld uses of the computer.
Perhaps the most prominent representative of this use of computers is
Seymour Papert, the creator of the LOGO environment. At one level, LOGO is a
complete programming language, although Papert would prefer it to be thought of as
a microworld. In LOGO, the child moves a cursor (an anthropomorphized turtle)
around the screen using a set of commands. Turtle geometry, Papert's replacement
for Euclid's axiomatic approach, involves a constructive computational approach
(Papert, 1980). Papert hopes to utilize the child's real-life experiences as the basis for
the development of mathematical intuition, and a way of thinking about their own
thinking.
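Python's standard turtle module is itself modeled on Papert's turtle, so a few lines can convey the constructive flavor of turtle geometry: a square emerges from local, body-syntonic commands rather than Euclidean coordinates.

```python
# Turtle geometry in the LOGO spirit, using Python's standard turtle
# module. The square is built from "walk forward, turn right" commands
# rather than from global coordinates or axioms.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # LOGO: FORWARD 100
    t.right(90)      # LOGO: RIGHT 90

turtle.done()        # keep the drawing window open
```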
Papert views the ends of formal education to be giving a student the ability to
build and use cognitive structures to learn about his/her learning as a problem is
solved. He sees the learner in rather traditional Piagetian terms - intellectual growth
occurs in stages and as a result of assimilating and accommodating new objects
within existing structures when possible, and building new structures if necessary
(Papert, 1980). His hope is that the microworld will provide the student with the
experiences necessary to prod the cognitive structure model (Solomon, 1986).
Manipulation of the learning process is profound in Papert's approach. He is
proposing fundamental changes in the way in which students encounter the objects
which cause accommodation and assimilation. He is trying to help children bridge
the gap between the physical skills of the Piagetian sensorimotor stage and the theories of the stage of formal operations (Papert, 1980).
Simulations are software programs that model some physical process or system
taken from the real world. They are created by software engineers working with
domain experts to synthesize the elements of a physical system into its quantifiable
components and the rules which relate them. Note the presence of a key assumption
here: it is possible to construct a formal model of the system.
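A minimal sketch can make the assumption concrete: the system is reduced to quantifiable components (state variables) and the rules which relate them (an update function). The toy trail journey below is hypothetical, not any published courseware.

```python
# A simulation as the text defines it: quantifiable components plus
# the rules relating them. Values and names are invented for the toy.

state = {"miles_left": 2000, "food": 500, "day": 0}

def step(state, pace, rations):
    """One simulated day: the rules relating the quantified components."""
    state["miles_left"] -= pace
    state["food"] -= rations
    state["day"] += 1

# The loop's fixed arguments stand in for the learner's "decisions".
while state["miles_left"] > 0 and state["food"] > 0:
    step(state, pace=20, rations=5)

if state["miles_left"] <= 0:
    print(f"Arrived on day {state['day']}.")
else:
    print(f"Out of food on day {state['day']}.")
```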
A popular simulation intended to aid elementary children in learning
American history is entitled Oregon Trail.
Students "travel" west, keeping close
account of their "supplies" as the program monitors their "decisions" and reacts
accordingly (Grady, 1983).
Another popular program from the same software
company, Three Mile Island, simulates the crisis at a nuclear power plant.
Simulations are also used in the physical sciences, where complexities of real systems
can be simplified, allowing students to gain an understanding of the underlying
principles (O'Shea and Self, 1983).
Learners are given the opportunity to make
decisions and observe the consequences, gaining an understanding of the model along
the way.
Simulations may offer learners the opportunity to explore a situation otherwise
out of the reach of direct experience. But at what expense? The experiences offered
by the simulation are mechanized; the only choices allowed are those offered by the
computer.
Sensory experiences are reduced to keyboard and screen abstractions.
Time can be compressed from years to seconds. And perhaps their most telling fault,
they conceal what is missing
(Chandler, 1992), and their assumptions remain
unchallenged due to their invisibility.
In this framework, knowledge is once again specifiable facts, represented
explicitly within the computer.
Formal operations or rules again govern the
relationships between the facts. This is easily recognized when using simulation
programs with a restricted set of data settings and operators. It is not so obvious in
programming or using the microworld. After all, programming is thought to give
control to the users.
Someone "teaching" the microworld or programming the
computer is not restricted to the operations supplied, however, because new
relationships can be created. This is the not-so-obvious part: even the opportunities
to create are restricted by the limitations of the computer itself. Created operations
must fit within the abstract, representable forms of knowledge and the allowable
formulas and algorithms. They will favor quantitative, declarative, and procedural
knowledge over intuitive, tacit, and interpretative knowledge (Streibel, 1986).
To complete the ontological and epistemological dimensions of this framework:
the tutor (the student learner) is a symbol manipulator, just like the tutee (the
computer). Knowledge remains facts explicitly represented with formal operations
defined on those representations. Learning consists of learning to act as if you were a
computer, that is, successfully applying the rules to the facts. In a simulation, that
means making the right choices in order to "win". When programming, successfully
applying the rules means (at least) two things: first, writing syntactically correct
statements, and then determining their semantic correctness by testing them
against all imaginable data sets.
The use of computers for simulations and microworlds moves away from the
behaviorist outcomes of the CAI and tutorial environments. The paradigm instead
attempts to establish computer ways of thinking. The view of mind as information
processor results, with the further de-legitimizing of non-technological ways of
learning and thinking.
In spite of many impassioned pleas for student control of the computer
(Luehrmann, 1980), one of the great ironies of placing the student in the role of
teacher is that the computer is still in control of what's learned. In this case, it (the
computer) can only be taught what it is capable of learning, as determined by the
creators of the microworld, simulation, or programming language. Another limitation of its ability to learn is its strict adherence to a grammar, which the
student MUST learn before s/he can teach the computer.
While this obstacle is
present in each of the environments in question, it is formidable when programming.
Lastly, the only "teachable" items are those which are easily represented on a digital
computer. The emotional, aesthetic, moral, and affective dimensions of learning are
abandoned.
VII. Intelligent Tutoring Systems
The computer assisted instruction (CAI) of the '60s and '70s was praised for its
ability to individualize instruction, although its principal skill was that of selecting
problems at a level of difficulty appropriate to a student's performance. CAI systems
formed "models" of the student based upon which problems were done correctly, and
which were done incorrectly. Programs that extended student modeling to include an
assessment of student knowledge, prepared their own model of expert behavior, and
could thus "coach" the student towards solution of the problem were said to represent
intelligent computer assisted instruction, or ICAI (Sleeman and Brown, 1982).
Subsequent work has further refined both the programs and the terminology.
Intelligent tutoring systems (ITSs) are computer programs that use the
techniques of artificial intelligence while carrying on an interaction with a student
(Clancey, 1987). In order to qualify as an ITS, a program must pass three tests of
intelligence (Burns and Capps, 1988):
"First, the subject matter, or domain, must be 'known' to the computer
system well enough for this embedded expert to draw inferences or solve
problems in the domain. Second, the system must be able to deduce a
learner's approximation of that knowledge. Third, the tutorial strategy
or pedagogy must be intelligent in that the 'instructor in the box' can
implement strategies to reduce the difference between expert and
student performance." (page 1)
ITSs exhibit another feature not found in CAI programs: the subject material is
separated from the teaching method. This separation of domain knowledge from the
procedures that use it exemplifies declarative knowledge representation (Clancey,
1987).
Intelligent tutoring systems have evolved considerably since Sleeman and
Brown first described them in 1982.
Research suggests there are at least four
modules in the standard architecture: an expert, a student model, a tutor, and an
interface. The expert module contains the domain knowledge, which may require
years to produce. Because of the time and expense of acquiring and representing
expert knowledge, many ITSs have been built upon existing expert systems. The
student diagnosis module infers a model of the student's current understanding of the
subject matter. The goal is to adapt the instruction to meet the student's particular
needs. There are three dimensions to student diagnosis: 1) available knowledge of
student information; 2) distinguishing between the types of knowledge to be learned;
and 3) assessing differences between students and experts (Van Lehn, 1988).
Diagnosis is typically done only upon the student's answer to a question; this is partly
a limitation of programs but more often a result of how the computer is used. Many
times the intermediate steps are not available (to the computer) because they were
done on scratch paper or in the student's head. The types of knowledge the system
attempts to capture about a student are procedural (flat and hierarchical) and
declarative. To assess the differences between students and experts the system will
typically rely upon the domain knowledge in the expert module, thus sharing that
knowledge base. The student's knowledge is viewed as a subset of the expert's, so
"missing conceptions" are noted but not "misconceptions". More advanced systems
take the student diagnosis farther, attempting to model misconceptions and
erroneous and incorrect knowledge (Burns and Capps, 1988).
The tutor or
instructional module controls the selection and sequencing of material for the student,
responds to student inquiries regarding instructional goals and content, and
determines strategies for delivering help. The human-computer interface in an ITS
determines the communication process between user and the system. The goal of
design is to make the interface transparent. First-person interfaces allow users to become direct participants in the domain, as is done in manipulating icons on the Macintosh personal computer. Second-person interfaces give the user a command
language.
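The four-module architecture just described can be summarized as a skeleton. The class and method names below are illustrative assumptions of my own, since each real module is a sophisticated program in its own right (Fink, 1991).

```python
# A skeletal rendering of the standard ITS architecture: expert module,
# student model, tutor, and interface. Method bodies are deliberately
# left open; this is a sketch of the division of labor, not a system.

class ExpertModule:
    """Domain knowledge, shared with the diagnostic module."""
    def __init__(self, rules):
        self.rules = rules               # e.g. production rules or frames

    def solve(self, problem):
        """Draw inferences well enough to solve problems in the domain."""
        ...

class StudentModel:
    """The learner's knowledge, viewed as a subset of the expert's."""
    def __init__(self):
        self.known = set()               # "missing conceptions" = expert - known

    def diagnose(self, response, expert):
        """Infer current understanding, typically from answers alone."""
        ...

class Tutor:
    """Selects and sequences material; determines strategies for help."""
    def next_step(self, student, expert):
        """Choose what to present so as to narrow the expert-student gap."""
        ...

class Interface:
    """First- or second-person communication with the learner."""
    def present(self, material): ...
    def collect(self): ...
```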
The focus of the collected papers anthologized in Polson & Richardson (1988)
was to describe the state of the art in each of the distinct components of ITSs
described above.
More recent work has recognized the difficulties of trying to
separate the modules of ITSs.
The human-machine communication dimension
consists of the expert module, the interface, and the student.
The instructional
dimension includes the expert, student model, and the interface. The knowledge
dimension consists of the domain simulation, the expert, and the interface (Burns &
Parlett, 1991). A complete description of these dimensions is beyond the scope of this
paper and probably unnecessary for a critical analysis of ITS.
Nonetheless, the
complexity and interdisciplinary nature of the dimensions point out the difficulties in
producing ITSs. Each module is a sophisticated software program in its own right,
and yet must be interrelated with the others (Fink, 1991).
A critical analysis of ITS shall begin with a description of a session. The user
logs on to the system, probably with a name and password. If this is the initial
session, the system may need some information before proceeding; otherwise it will
access its history "file" on this user. The starting point for this session is determined,
either from the history, or perhaps from a brief question and answer session to
determine the user's knowledge state at this moment. As the interaction ensues, the
instructional module is accessing the expert module in order to determine what to
ask, to evaluate a response, and to determine what to ask next. The student diagnostic module, meanwhile, is also accessing the expert module, building a model
of the student, based upon the responses. At each step, decisions are made based
upon goals as specified by the difference between the constructed model of the student
and the model of the expert. Recall that representation of knowledge was a key issue
for both the student diagnostic module and the expert module. Procedural and declarative knowledge are both necessary: procedural knowledge in order to answer
"How?" questions and declarative to answer the "What?" questions. At the close of
the session, the user gracefully exits as the system updates its history file.
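The session just described amounts to a main loop interleaving the modules. The sketch below uses the assumed class names from the architecture skeleton above; the loop structure is my interpretation of the walk-through, not any particular system.

```python
# The session walk-through condensed to a main loop: load or create a
# history, interleave tutoring, interaction, and diagnosis, and update
# the history "file" on a graceful exit.

def run_session(user, expert, tutor, interface, histories):
    student = histories.get(user, StudentModel())    # history file or fresh model
    while True:
        question = tutor.next_step(student, expert)  # what to ask next
        if question is None:
            break                                    # session goals met
        interface.present(question)
        response = interface.collect()
        student.diagnose(response, expert)           # rebuild the student model
    histories[user] = student                        # update the history file
```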
Within an ITS, knowledge is contained in the expert module in the forms
familiar to artificial intelligence. Production rules contain knowledge in the form of if-then statements: IF you have a headache, THEN take an aspirin. The expert system
MYCIN contains approximately 450 such rules making up a production system
(Clancey, 1982). Frames, or schema, are nodes (objects) connected by links, indicating
their relationships. Related schema and their links form schema-systems (O'Shea
and Self, 1983). In any case, knowledge consists of the storing of abstract symbols in
some data structure. Learning takes place in an ITS when the student narrows the
difference between the system's representations of the expert and the student. The
successful student, then, will be able to manipulate the abstract symbols just as the
expert would. A learner is a goal-driven symbol manipulator. Of course, it is the hope
of the designers of ITSs that in the process of manipulating symbols to match those of
the expert, there are accompanying shifts or alterations in the cognitive make-up of
the learner.
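A toy production system makes this representation concrete: knowledge as IF-THEN rules over a set of facts, applied by forward chaining until nothing new fires. The headache rule is the text's own example; the second rule is an invented companion.

```python
# A toy production system: IF-THEN rules over a working set of facts.
# MYCIN's roughly 450 rules operate on the same principle at scale.

rules = [
    ({"headache"}, "take an aspirin"),
    ({"fever", "stiff neck"}, "consult a physician"),
]

def forward_chain(facts):
    """Add each rule's conclusion whenever its conditions all hold."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"headache"}))   # -> {'headache', 'take an aspirin'}
```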
Designers of ITSs and ICAI programs have gone to great lengths to gather the
knowledge of experts, synthesize it into representable components, and make it
available to users. The computer is thus used as a conduit to provide learners with
access to knowledge. But the computer as conduit exacts a toll on the process; only
certain types of knowledge are representable; human elements of knowledge
acquisition are left behind; and ways of knowing not collapsible to rules or frames are
discarded as non-representable.
Just as tutorial systems extended the behavioral paradigm of CAI, ITSs extend
the cognitive information processor paradigm of simulations and programming. The
programs are more complex, the interfaces more sophisticated, and the systems more
highly regarded.
Input-process-output has been replaced by the abstract
manipulation of symbols, but it is still a specific type of learning: think like a
computer.
VIII. Summary
The emergence of the digital computer in education is a relatively recent event
spanning the last three decades. This paper has critically assessed four styles of computer use in classroom settings since their introduction.
There is a clear progression in the use of computers that accompanies the
predominant learning paradigm of its time. An interesting question is to what extent the computer itself has impacted this paradigmatic shift. Behaviorism influenced
the early applications of CAI and tutorial systems, while simulations and intelligent
tutoring systems have moved steadily toward the cognitive information processing
schemes. O'Shea and Self (1983) applaud the move
"from a behaviouristic to a cognitive approach to teaching and learning
in that they view computers as devices for implementing not rigid,
mechanistic, statistically-based teaching systems, but ones which treat
the student as a thinking, understanding and contributing individual."
(page 120)
As this paper has attempted to outline, however, even such respect for the learner is
not without its price when the computer is used to mediate the learning environment.
IX. A Postscript - Speculations on Three Decades of Computers in Education
The direction future applications may take is open to conjecture.
Research in artificial intelligence is proving extremely problematic, which is probably
good, inasmuch as educators seem so eager to relate (if not equate) human knowledge
with computer knowledge. Each advancement in instructional computing has been
tagged "intelligent" by proponents, and yet we are just beginning to realize how
"dumb" computers are today.
One aspect that is quite easy to follow over the thirty years in question is the "technologizing of education." Teachers are increasingly becoming managers of instruction. As is happening in many fields, teaching is becoming de-skilled;
pressures of too many students and too many subjects have forced teachers to deliver
someone else's instruction - be it computer-based or otherwise. There are no areas of school life exempt from technological restructuring, perhaps due to pressures from
within and without to prepare students for life in the information society (Apple,
1992). Apple describes many critical issues present in schooling today attributable to
the "technologizing", and calls for a social literacy to offset the technical bias. As the
United Kingdom prepares to invoke the National Curriculum, complete with a newly
created "foundation subject" of Technology (Barnett, 1992), others are calling for a
shift in how technology should be viewed (Beynon, 1992; Capel, 1992).
In summary, it is probably safe to conclude that future computer-based
learning environments will continue to reflect the predominant learning paradigm;
that each new application will be thought of as "intelligent"; and, the impact will be to
further technologize education. Only through a shift in the way in which technology
is viewed, as a result of a shift in the way in which technology is used, can such
consequences be avoided.
References
Apple, M. (1992). Is the New Technology Part of the Solution or Part of the Problem
in Education? In J. Beynon & H. Mackay (Eds.), Technological Literacy and the
Curriculum (pp. 105-124). London: The Falmer Press.
Barnett, M. (1992). Technology, Within the National Curriculum and Elsewhere. In J.
Beynon & H. Mackay (Eds.), Technological Literacy and the Curriculum (pp.
84-104). London: The Falmer Press.
Beynon, J. (1992). Introduction: Learning to Read Technology. In J. Beynon & H.
Mackay (Eds.), Technological Literacy and the Curriculum (pp. 1-37). London:
The Falmer Press.
Bork, A. (1980). Preparing student-computer dialogs: Advice to teachers. In R. Taylor
(Ed.), The Computer in the School: Tutor, Tool, Tutee (pp. 15-52). New York:
Teachers College Press.
Burns, H., Parlett, J., & Redfield, C. (1991). Intelligent Tutoring Systems. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Burns, H., & Capps. (1988). Foundations of intelligent tutoring systems: An introduction. In M. Polson & J. Richardson (Eds.), Foundations of Intelligent Tutoring Systems (pp. 1-20). Hillsdale, NJ: Lawrence Erlbaum Associates.
Capel, R. (1992). Social Histories of Computer Education: Missed Opportunities? In J.
Beynon & H. Mackay (Eds.), Technological Literacy and the Curriculum (pp.
38-64). London: The Falmer Press.
Chandler, D. (1992). The purpose of the computer in the classroom. In J. Beynon & H.
Mackay (Eds.), Technological Literacy and the Curriculum (pp. 171-196).
London: The Falmer Press.
Clancey, W. (1982). Tutoring rules for guiding a case method dialogue. In D. Sleeman & J. Brown (Eds.), Intelligent Tutoring Systems (pp. 1-8). New York: Academic Press.
Clancey, W. (1987). Methodology for building an intelligent tutoring system. In G. Kearsley (Ed.), Artificial Intelligence and Instruction (pp. 193-228). Reading,
MA: Addison-Wesley.
Cuban, L. (1986). Teachers and machines. New York: Teachers College Press.
Cuffaro, H. (1985). Microcomputers in Education: Why is Earlier Better? In D. Sloan (Ed.), The Computer in Education: A Critical Perspective (pp. 21-30). New
York: Teachers College Press.
Fink, P. (1991). The role of domain knowledge in the design of an intelligent tutoring
system. In H. Burns, J. Parlett & C. Redfield (Eds.), Intelligent Tutoring
Systems (pp. 195-224). Hillsdale, NJ: Lawrence Erlbaum Associates.
Goldstine, H. (1972). The Computer from Pascal to von Neumann. Princeton, NJ:
Princeton University Press.
Grady, D. (1983). What every teacher should know about computer simulations.
Learning, 11(8), 34-46.
Luehrmann, A. (1980). Should the computer teach the student, or vice-versa? In R.
Taylor (Ed.), The Computer in the School: Tutor, Tool, Tutee (pp. 129-135).
New York: Teachers College Press.
O'Shea, T., & Self, J. (1983). Learning and Teaching With Computers. Englewood
Cliffs, NJ: Prentice-Hall Inc.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York:
Basic Books.
Polson, M., & Richardson, J. (1988). Foundations of Intelligent Tutoring Systems.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Sleeman, D., & Brown, J. (1982). Introduction: Intelligent tutoring systems. In D.
Sleeman & J. Brown (Eds.), Intelligent Tutoring Systems (pp. 1-8). New York:
Academic Press.
Sloan, D. (Ed.). (1985). The Computer in Education: A Critical Perspective. New York: Teachers College Press.
Solomon, C. (1986). Computer environments for children. Cambridge, MA: The MIT
Press.
Streibel, M. (1986). A Critical Analysis of the Use of Computers in Education. ECTJ,
34(3), 137-161.
Suppes, P. (1966). The uses of computers in education. Scientific American, 215(3),
206-220.
Suppes, P. (1980). Computer-based Mathematics Instruction. In R. Taylor (Ed.), The
Computer in the School: Tutor, Tool, Tutee (pp. 215-230). New York: Teachers
College Press.
Taylor, R. (1980). The Computer in the School: Tutor, Tool, Tutee. New York: Teachers
College Press.
Van Lehn, K. (1988). Student modeling. In M. Polson & J. Richardson (Eds.),
Foundations of Intelligent Tutoring Systems (pp. 55-78). Hillsdale, NJ: Lawrence Erlbaum Associates.