University Teaching of Natural Phenomena
Described by Mathematical Models
Thesis Submitted for the Degree
Doctor of Philosophy
by
Guy Ashkenazi
Submitted to the Senate of the
Hebrew University in Jerusalem
July 1999
This work was carried out under the supervision of
Prof. Nava Ben-Zvi and Prof. Ronnie Kosloff
I would like to thank
Prof. Ronnie Kosloff, for the enthusiasm that gave birth and endurance to this project,
and
Prof. Nava Ben-Zvi, for guiding, supporting and caring for me.
This work is dedicated
To my parents, Miriam and Asaria
To my wife Sigalit
and especially
To my children, Amit and Einat.
Table of Contents
Synopsis
Chapter 1 - Introduction
  1.1 The Art of Curriculum Making
    Arts of Eclectic
    Technological Motivation
    Objective
  1.2 The Case Study
    Statement of the Problem
    Technological Motivation
    Objective
    Validity of the Model
  1.3 Outline of the Thesis
Chapter 2 - Theoretical Framework
  2.1 Meaningful Learning
    Meaningful vs. Rote Learning
    Integrative Reconciliation vs. Compartmentalization
    The Practice of Meaningful Learning
    Advance Organizers
  2.2 Epistemology
    Positivism vs. Constructivism
    Epistemology and Learning Strategies
  2.3 The Construct of Subject Matter in the Physical Sciences
    Models and Natural Phenomena
    Finding the Meaning in the Construct
    Natural Phenomena as Advance Organizers
Chapter 3 - A Potentially Meaningful Content
  3.1 What Do We Teach?
    Curricular Change
    The Computer as an Opportunity for Curricular Change
    Objective
  3.2 Application to Elementary Quantum Mechanics
    Wave-Particle Duality
    De-Broglie’s Relation
    Quantization of Energy
    Quantum Dynamics
Chapter 4 - A Meaningful Learning Set
  4.1 The Learning Environment
    The Computer Lab
    Greater Personal Commitment
    The Feedback Cycle
    An Integrated Learning Environment
  4.2 Student Performance
    Active Learning – Guided vs. Open Ended
    Symbolic vs. Visual Representation
    Team vs. Individual Work
Chapter 5 – Conclusion
  5.1 Arts of Eclectic – Revisited
    Cognitive Theory of Meaningful Learning
    Epistemological Constructivism
    Content Selection
    The Learning Environment
    The Role of the Computer
  5.2 Generic Models
    A Potentially Meaningful Content
    An Integrated Learning Environment
Bibliography
Synopsis
This is an investigation of practical ways to introduce computer technology into
the university milieu, in order to promote meaningful learning of scientific theories. The
main objective is to define a systemic approach for the incorporation of computer
technology into existing curricula, firmly rooted in both theory and practice. The courses
for which this investigation best applies are university courses that deal with natural
phenomena described by mathematical models.
This work asserts three points that are fundamental to curriculum development:
1. Curriculum development should follow an explicit and coherent theoretical framework.
2. The learning environment should play a major role in curricular decision making.
3. The process of curriculum development should be involved with actual teaching, accompanied by a real-time feedback and rectification mechanism.
The first part of the work combines learning theories from the behavioural sciences and
epistemological approaches from the philosophy of science. The emphasis is on the
relation between constructivist epistemology and meaningful learning. The theoretical
innovation of this work is the creation of a framework, which integrates both theories.
According to this framework, every scientific theory is divided into two parts – a set of
natural phenomena on one hand, and their corresponding models on the other. Each part
taken alone does not provide definite meaning, and their full meaning comes only from the
construct that unites these two parts. In order to achieve meaningful learning, it is crucial
to discuss the full construct and the relations within it. This discussion is promoted by
using natural phenomena as advance organizers for teaching scientific theories. In this
way, abstract models are taught in the context of perceptual observations.
The second part of the work deals with possible ways to integrate computer technology
into the university curriculum. The emphasis in this part is that new educational technology
is not limited to change in teaching methods, but is an opportunity for change in the
content of university courses as well. The innovation in this field is the re-examination of a
problematic field in university teaching – quantum mechanics – and its reconstruction
according to the theoretical framework, in view of the practical abilities of computer
technology.
The first two parts propose a theoretical change in the content of university courses. The
practical ability to implement their conclusions for a specific course was investigated by a
case study, and is described in the third part of the work. The development process was
combined with actual delivery of the course, with a feedback and rectification mechanism
between development and delivery. The development process resulted in a total
reconstruction of the course. The emphasis was on a full integration of computer
technology in the course – in the lectures, recitations and homework. The computer served not only as a tool for conveying new content, but also constituted a learning environment
supportive of active learning. This learning environment did not replace the traditional
lecture forum, but rather enhanced it. The use of computer technology opened new
channels of communication, which contributed to student-teacher interaction.
These three parts, taken together, define a process for integrating new technology into an
existing curriculum. This process can be generalized to define a generic model of
curriculum selection, in university teaching of natural phenomena described by
mathematical models:
1. The teaching of models should be integrated with the teaching of natural phenomena.
2. For each concept taught, an illustrative natural phenomenon should be introduced first, to serve as an advance organizer.
3. If the selected phenomenon is not adequately described by traditionally taught models, new models may be introduced into the curriculum. Computer visualization and simulation can be used to overcome the traditional inability to deal with these models.
4. New models should be considered because they may constitute a part of contemporary scientific methods, or illustrate fundamental concepts of the subject matter better than traditional models.
5. After the inclusion of new models, the significance of traditional models should be re-evaluated. If the new models serve the same purpose better, the curricular hierarchy should be changed accordingly.
This model is intended for reconstructing existing curricula in a theory-directed manner and in view of practical considerations. It emphasizes the relation between mathematical models and natural phenomena, and so promotes the meaningful learning of both. The curricular selection process described is deeply involved with the abilities of computer-aided instruction. This dependency should be reflected in the learning environment, by making computer technology an integral part of it. This defines a model for the use of computer technology in all stages of instruction:
1. Use of a computer lab as the main mode of student-computer interaction, to:
   a) Promote students’ active learning.
   b) Constitute a real-time feedback and rectification mechanism.
2. Use of a computer for in-class demonstrations, to establish a common visual language between lecture and lab.
3. Use of an Internet-based interface for course materials, to:
   a) Provide the means to introduce computer-related assignments for homework.
   b) Allow students to review all types of course materials through a single, easy-to-use and familiar user interface.
Chapter 1 - Introduction
1.1 The Art of Curriculum Making
When teaching science to university students, the ultimate objective is to make them adept
in the ways of science. The problem is that the scientific knowledge base in every field is so vast that there is no way to master all of it even in a lifetime, let alone in a one-semester course. More than that, scientific knowledge is not a static database, but is constantly changing and growing. The art of curriculum making is to find a minimal basis set of knowledge and skills, whose successful incorporation by the student will grant him:
1. An elementary understanding of the field at hand – knowledge of its main concepts and methods of observation, and the relations between them.
2. The ability to acquire further knowledge in this field, as required for specific purposes.
Curriculum making is an art, because the task of extracting such a set from the immense
scientific knowledge base is not algorithmic or straightforward. There are many possible
sets that fulfill these two requirements; the curriculum is intended for many different
students with divergent pedagogical needs; and there is no objective method to determine
which of several curricula is the best. This complexity calls for a sophisticated use of
theoretical and practical methods, synthesized to achieve mutual accommodation.
Although these methods can be described and exemplified, they cannot be reduced to
generally applicable rules. Rather, in each instance of their application, they must be
modified and adjusted to the case in hand. In a series of papers on curriculum building,
Schwab (1978a-c) describes such an approach, which he calls “arts of eclectic”.
Arts of Eclectic
Theories of curriculum and of teaching and learning can help in curricular decision making
in two ways. First, theories can be used as bodies of knowledge. They provide a kind of
shorthand for some phases of deliberation and free the deliberator from the necessity of
obtaining firsthand information on the subject under discussion. Second, the terms and
distinctions, which a theory uses for theoretical purposes, provide a framework for
classification and categorization of particular situations and facts. They reveal similarities
among subjects and disclose their variance. However, such theories cannot, alone,
determine what and how to teach. These curricular questions arise in concrete situations of
particular time, place, person and circumstance. Theory, in contrast, contains little of such
concrete particulars. Theory achieves its theoretical character, its order, system, economy
and generality only by abstraction from such particulars. Each theory treats only a portion
of the complex field from which educational problems arise, and isolates its treated portion
from the portion treated by other theories.
The eclectic arts are arts by which theory is adapted for practical use. This can be done by
synthesizing several theories to compensate for the partial view of each, and by
accommodating theory with practical methods. In this work, the theoretical synthesis is
between learning theory (behavioral sciences) and epistemology (philosophy of science).
The practical side relies heavily on innovative use of computer technology. To assure
agreement between theory and practice, curriculum development should be involved with
actual instruction. A feedback mechanism brings about mutual accommodation of the two.
Technological Motivation
During the last decade, in view of the overwhelming development of computer technology,
numerous attempts were made to utilize this new technology for teaching purposes.
Distinct abilities of the computer, such as interactivity, database management,
computational power and communication were used to solve specific teaching needs or
learning difficulties. Many software titles for science education are available on the
market. At the end of the 20th century, almost every educational institute is equipped with
the infrastructure and hardware to support extensive use of computers for teaching. But in
spite of all the efforts, actual use of computers in teaching is sporadic, especially at the
university level.
After a decade of singular solutions, a more systemic approach is overdue. This work
suggests a framework for greater exploitation of existing computing infrastructure. This
can be achieved by integrating technology into the curriculum, with impact on the content
as well as on the teaching environment.
Objective
Define a systemic approach for the incorporation of computer technology into existing
curricula, firmly rooted in both theory and practice.
1.2 The Case Study
As stated earlier, a practical approach cannot be reduced to generally applicable rules.
Rather, in each instance of its application, it must be modified and adjusted to the case in
hand. To provide a concrete example of the generic approach, it was applied to an established course taught at the Hebrew University. “Introduction to Chemical Bonding” is an
undergraduate chemistry course, taken by chemistry majors in their fourth semester. It is
an introductory course to quantum mechanics. It serves as the basis for advanced courses
in spectroscopy and quantum chemistry, taken in the fifth and sixth semesters,
respectively. Traditionally it consisted of two 1½-hour lecture sessions for the entire class, and one 45-minute recitation session conducted in three smaller groups. The completion
of this course is a requirement for majoring in chemistry, and typical enrollment is 60-70
students.
Statement of the Problem
The course in quantum mechanics was selected because it is known as a difficult subject to
teach, as well as to learn. The writer first perceived this through personal experience as a student and as a teaching assistant in this course. It was later confirmed by personal discussions with other students, graduates and professors at the Hebrew University. Other
research groups that deal with teaching quantum mechanics have described it as “a difficult
and abstract subject, and there has been little success in teaching it at all levels” (Redish &
Steinberg, 1997), and even as the students’ “most feared and disliked course” (Whitnell et
al, 1994).
The difficulties in this course can be traced to its complex and abstract axiomatic base,
formulated in purely mathematical terms. Even those students who succeed in this course
don't have a feeling of “understanding” after its completion. Although they exhibit good
technical ability, they can only apply it in a mathematical context, and not in a physical
context. They lack the ability to relate the abstract models they learned with concrete
scientific phenomena.
Technological Motivation
Before entering the field of science education, the writer conducted research in the field of quantum molecular dynamics. Perplexed by the abstract models and complex numerical computations, he found visualization an essential tool for understanding his own research. As visualization technology became more powerful, yet faster and cheaper, the prospect of using it for educational purposes seemed a natural extension of this research. At first, this technology was restricted to a special classroom equipped with high-performance Silicon Graphics workstations. But soon, the ability to make real-time calculations for three-dimensional visualizations became available at the PC level. It is possible to view these visualizations on a large screen in class, using a PC projector. It is also possible to communicate these visualizations via the Internet to a remote PC. The technological message is clear – computer visualization could now be made an integral part of the university curriculum, as the blackboard or the library textbook once were. The technological difference of a computer with respect to a blackboard or a textbook is a matter that curriculum cannot ignore, and it was thus selected as the focus of this work.
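To make the technological claim concrete, the following sketch (not a reproduction of the actual course software, and written with present-day NumPy and Matplotlib rather than the tools available at the time) animates a freely evolving one-dimensional Gaussian wavepacket in real time on an ordinary PC. All parameter values are illustrative, in dimensionless units (ħ = m = 1).

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# A minimal sketch of PC-level, real-time quantum visualization.
# Dimensionless units (hbar = m = 1); all parameter values are invented.
x = np.linspace(-10.0, 40.0, 1000)   # spatial grid
sigma0, k0 = 1.0, 2.0                # initial width and mean momentum

def psi(x, t):
    """Analytic free-particle Gaussian wavepacket (hbar = m = 1)."""
    s = sigma0 * (1.0 + 1j * t / (2.0 * sigma0**2))   # complex, time-dependent width
    norm = (2.0 * np.pi * s**2) ** -0.25
    return norm * np.exp(-(x - k0 * t) ** 2 / (4.0 * sigma0 * s)
                         + 1j * k0 * (x - 0.5 * k0 * t))

fig, ax = plt.subplots()
line, = ax.plot(x, np.abs(psi(x, 0.0)) ** 2)
ax.set_ylim(0.0, 0.45)
ax.set_xlabel("x")
ax.set_ylabel("probability density")

def update(frame):
    # Redraw the probability density as the packet translates and spreads.
    line.set_ydata(np.abs(psi(x, 0.05 * frame)) ** 2)
    return (line,)

ani = FuncAnimation(fig, update, frames=200, interval=30, blit=True)
plt.show()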
Objective
Integrate computer visualization into the teaching of the course “Introduction to Chemical
Bonding”, to help the students relate its mathematical models with natural phenomena.
In order to achieve this objective, the entire course was built from scratch. Both the content
and the learning environment were changed, in view of the new capabilities supplied by
computer technology.
Validity of the Model
Although this work deals with a specific course in a specific curricular context, most of the
conclusions drawn in this work should be valid for similar courses in similar curricula.
Elementary quantum mechanics is a subject common to many courses at different levels, in
physics, chemistry, biology and engineering. Other fields in physical chemistry, such as
thermodynamics and chemical kinetics, also share the difficulty of complex mathematical
models that should be related to physical reality. Consequently, these courses share many of
the problems and possible solutions suggested here. On a broader view, the relation
between mathematical models and natural phenomena is the prime concern of this work.
This relation gives a general frame for validity – a similar approach should be valid for all university-level teaching of natural phenomena described by mathematical models.
1.3 Outline of the Thesis
This work asserts three points that are fundamental to curriculum development:
1. Curriculum development should follow an explicit and coherent theoretical framework, based on theories from the behavioral sciences and from the philosophy of science.
2. The learning environment, with its potential capabilities and limitations, should play a major role in curricular decision making.
3. The process of curriculum development should be involved with actual teaching, accompanied by a real-time feedback and rectification mechanism.
The first point stems from the recognition that the structure of scientific subject matter
cannot uniquely define the curricular content selection. There are many coherent
substructures that fulfill the basic requirements of a curricular basis set, as defined in the
beginning of this chapter. To select between several options, decisions should be made.
These decisions always carry implicit pedagogical and epistemological considerations.
These considerations should be made explicit, and put forward in a coherent theoretical
framework. To account for both pedagogical and epistemological considerations, this
framework should be based on theories from behavioral sciences and from philosophy of
science. The need for an explicit framework is twofold. On the one hand, it consciously
guides curricular decision making in a consistent manner. This results in an integrative,
rather than a fragmented, curriculum. On the other hand, the abstract nature of theory
weakens its applicability for concrete situations. An explicit formulation helps identify the
weaknesses of the adopted theories as ground for decision in a particular situation.
Acknowledging these theoretical weaknesses is the first step in finding practical ways to
overcome them. In this work a synthesis was made between learning theory and
constructivist epistemology. This synthesis is presented in the second chapter –
“Theoretical framework”.
The second point stresses that the learning environment is not subordinate to the content.
Teaching technology determines what parts of the subject matter can be taught effectively,
and what background is needed for their teaching. It also determines how much material
can be covered during a given time. In these aspects it bridges between the theoretical side
of curriculum development and the practical side of classroom teaching. The correlation
between theory and practice raises new considerations for curricular decisions. This is why
the learning environment is more than just a tool to bring about the learning of pre-decided
content. The ways in which advancements in technology can change an existing
curriculum are discussed in the third chapter – “A Potentially Meaningful Content”.
A curriculum is a solution to a concrete problem. In contrast, the first two points
considered are general in nature. As such, the principles derived from them are not
guaranteed to be suitable for a certain time, place, person and circumstance. Rather, in each
instance of their application, they must be modified and adjusted to the case in hand. The
third point accounts for that. The process of curriculum development should be involved
with the actual teaching of the course, with real time feedback obtained from the students.
The process of adjustment of the general principles of chapters two and three to a specific
case study is described in chapter four – “A Meaningful Learning Set”.
These three points, taken together, define a process for integrating new technology into an
existing curriculum. Through the actual application of this process to the course “Introduction to Chemical Bonding”, specific and general conclusions are drawn. These
conclusions are summarized in the last chapter.
Chapter 2 - Theoretical Framework
Being a thesis in science education, this work has its theoretical roots both in learning
psychology and in scientific subject matter. The fundamental learning paradigm is that of
cognitive psychology, identifying human knowledge and learning with a personal
cognitive structure and its development. Within the paradigm, this work relies on the
theory of meaningful learning, which subsumes under it the concept of advance organizers.
The scientific subject matter is examined from an epistemological perspective, taking a constructivist approach. Constructivism is considered here from a philosophy of science
point of view, and not in the more widespread context of cognitive psychology. This view
suggests that for a scientific theory to be potentially meaningful it should be viewed as an
integrated process of knowledge construction and not as an aggregate of definite
conclusions. This requires teaching scientific subject matter as a synthesis of models and
phenomena rather than as the extraction of principles from facts.
These two theoretical foundations are connected by a study which shows that students’ conceptions of the structure and origin of scientific knowledge are intimately linked to their approaches to learning science, through the confrontation of positivist and constructivist epistemologies.
The first section of this chapter describes the theory of meaningful learning, and is based
on the work of Ausubel, Novak & Hanesian (1978). The second section deals with
epistemological views of science and their impact on students’ learning, based on the work
of Edmondson & Novak (1993). The third section synthesizes the previous two and is the
theoretical innovation of this work.
2.1 Meaningful Learning
Cognitive theory regards the human nervous system as a data-processing and storing
mechanism so constructed that new ideas and information can be meaningfully learned and
retained only to the extent that they are relatable to already available concepts or
propositions which provide ideational anchorage. Without this ideational anchorage,
human beings, unlike computers, can incorporate only very limited amounts of discrete and
verbatim material, and can retain such material only for very short intervals of time, unless
it is greatly overlearned and frequently reproduced, as in the case of rote learning.
Meaningful learning, on the other hand, makes use of this information-processing and
storing mechanism – the cognitive knowledge structure. A learner who undergoes a
meaningful learning process has first to relate new material to relevant established ideas in
his own cognitive structure. He should recognize in what ways it is similar to and in what
ways different from related concepts and propositions. Then he should translate it into the
frame of reference of his own experience and vocabulary, and often to formulate what is
for him a completely new idea, requiring him to reorganize his existing knowledge
accordingly. Such a learning process accounts for the psychological organization of
knowledge as a hierarchical structure, in which the most inclusive concepts occupy a
position at the apex of the structure and subsume progressively more highly differentiated
subconcepts and factual data.
This section will deal first with the distinction between the two learning processes –
meaningful and rote. Following is the issue of discriminability and its possible
consequences – integrative reconciliation or compartmentalization. Finally the practice
needed to achieve meaningful learning will be considered, with an emphasis on the
technique of advance organizers.
Meaningful vs. Rote Learning
The distinction between meaningful and rote learning is defined in terms of two criteria:
nonarbitrariness and substantiveness. The first criterion, nonarbitrariness, implies some
plausible or reasonable basis for establishing the relationship between the new material and
the ideas already present in the student’s cognitive structure. This may be a simple
relationship of equivalence, as when a synonym is equated to an already meaningful word
or idea. In more complex instances, as when new concepts and propositions are learned,
the new concepts may be related to existing ideas in cognitive structure as examples,
derivatives, subcategories, special cases, extensions or qualifications. They may also
consist entirely of new combinations, superordinate (more inclusive) of the new material
and existing ideas. The second criterion, substantiveness (or nonverbatimness), implies that
the content learned is not dependent on the exclusive use of particular words and no others.
The same concept or proposition could be expressed in different words without loss or
change of meaning.
The advantage of meaningful learning over rote memorization is attributed to these two
properties. First, by nonarbitrarily relating new material to established ideas in his
cognitive structure, the learner can effectively exploit his existing knowledge as an
ideational and organizational matrix for the incorporation of new knowledge. Nonarbitrary
incorporation of a learning task into relevant portions of cognitive structure, so that new
meanings are acquired, implies that the new learning material becomes an organic part of
an existing, hierarchically organized ideational system. As a result of this type of
anchorage to cognitive structure, the newly learned material is no longer dependent for its
incorporation and retention on the frail human capacity for assimilating and retaining
arbitrary associations. The anchoring process also protects the newly incorporated
information from the interference effects of similar materials, previously learned or
subsequently encountered, that are so damaging in rote learning. The temporal span of
retention is, therefore, greatly expanded. Second, the substantive or nonverbatim way of
relating the new material also serves to maintain its identifiability and protect it from
interference. Much more can be apprehended and retained if the learner is required to
assimilate only the substance of ideas rather than the verbatim language used in expressing
them.
Integrative Reconciliation vs. Compartmentalization
One of the obstacles in the process of meaningful learning is the issue of discriminability
of new learning material from previously learned ideas. In the effort to simplify the task of
interpreting the environment and its representation in cognitive structure, new learning
material often tends to be apprehended as identical to previously acquired knowledge,
despite the fact that objective identity does not exist. Under these circumstances, the
resulting meanings obviously cannot conform to the objective content of the learning
material. In other instances, the learner may be aware that new concepts in the learning
material differ somehow from established concepts in his cognitive structure but cannot
specify the nature of the difference. The learner may interpret this difference as a
contradiction between the established ideas and the new concepts, and adopt one of three
ways of coping with this apparent contradiction:
1. Dismiss the new concepts as invalid.
2. Compartmentalize them as isolated entities apart from previously learned knowledge.
3. Attempt integrative reconciliation.
Compartmentalization may be considered a commonly employed form of defense against
forgetting, particularly in learners with a low tolerance for ambiguity. By arbitrarily
isolating concepts and information, one prevents confusion with, interaction with, and
rapid destructive assimilation by, more established contradictory ideas in cognitive
structure. But this is merely a special case of rote learning. Through much overlearning,
relatively stable rote incorporation may be achieved, at least for examination purposes.
But, on the whole, the fabric of knowledge learned in this fashion remains unintegrated and
full of contradictions, and is therefore not very viable on a long-term basis.
Integrative reconciliation, on the other hand, is an effort to explicitly explore relationships
between related ideas, to point out significant similarities and differences, and to reconcile
real or apparent inconsistencies. The destruction of artificial barriers between related
concepts reveals important common features, and promotes acquisition of insights
dependent upon recognition of these commonalities. Integrative reconciliation encourages
meaningful learning by making use of relevant, previously learned ideas as a basis for
incorporating related new information. It discourages rote learning by the reduction of
multiple terms used to represent concepts that are intrinsically equivalent. Finally, it helps
discriminate between similar but different concepts, and keep them as distinct established
ideas in the learner’s cognitive structure.
The Practice of Meaningful Learning
In order to initiate meaningful learning, two conditions must be met:
1. A Meaningful Learning Set – the learner manifests a disposition to relate the new learning task nonarbitrarily and substantively to what he already knows.
2. A Potentially Meaningful Content – the learning task is relatable to the learner’s structure of knowledge on a nonarbitrary and substantive basis. Since meaningful learning takes place in particular human beings, meaningfulness depends on two factors:
   a) The objective nature of the material to be learned.
   b) The availability of relevant content in the particular learner’s cognitive structure, that will serve as ideational anchorage for relating.
It follows that cognitive structure itself – that is, the substantive content of the learner’s
knowledge in a particular subject-matter area at any given time, and its organization,
stability, and clarity – should be the major factor influencing meaningful learning and
retention in this same area. Potentially meaningful concepts and propositions are always
acquired in relation to an existing background of relevant concepts, principles and
information, which provides a basis for their incorporation, and makes possible the
emergence of new meanings. The content, stability, clarity and organizational properties of
this background should crucially affect both the accuracy and clarity of the emerging new
meanings and their immediate and long-term retrievability. According to this reasoning, it
is largely by strengthening salient aspects of cognitive structure that meaningful new
learning and retention can be facilitated. When we deliberately attempt to influence existing
cognitive structure so as to maximize meaningful learning and retention, we come to the
heart of the educative process.
Deliberate manipulation of the relevant attributes of cognitive structure for pedagogic
purposes can be accomplished:
1. Programmatically, by employing suitable principles of ordering the sequence of subject matter, constructing its internal logic and organization.
2. Substantively, by using for organizational and integrative purposes those unifying concepts and propositions in a given discipline that have the widest explanatory power, inclusiveness, generalizability, and relatability to the subject matter of that discipline.
One of the strategies that can be employed for deliberately enhancing the positive effects
of cognitive structure generally in meaningful learning involves the use of appropriately
relevant introductory materials – the advance organizers.
Advance Organizers
Advance organizers are introductory materials, which provide ideational scaffolding for
the stable incorporation and retention of the more detailed and differentiated material of
the learning task, and, in certain instances, to increase discriminability between the new
material and apparently similar or conflicting ideas in cognitive structure. Advance
organizers help the learner to recognize that elements of new learning materials can be
meaningfully learned by relating them to specifically relevant aspects of existing cognitive
structure. To achieve that, the organizer is introduced in advance of the learning material
itself, and must be:
1. Maximally clear and stable in its own right, and stated in familiar terms.
2. Presented at a higher level of abstraction, generality and inclusiveness, to provide greater explanatory power and integrative capacity.
The principal function of the organizer is to bridge the gap between what the learner
already knows and what he needs to know before he can meaningfully learn the task in
hand. This can be achieved in three ways:
1. Provide ideational anchorage for the new material in terms that are already familiar to the learner.
2. Integrate seemingly new concepts with basically similar concepts existing in cognitive structure.
3. Increase discriminability between new and existing ideas, which are essentially different but confusingly similar.
Since the substantive content of a given organizer is selected on the basis of its
appropriateness for explaining and integrating the material it precedes, this strategy
satisfies the substantive as well as the programmatic criteria for enhancing the positive
transfer value of existing cognitive structure on new meaningful learning.
Advance organizers are expressly designed to further the principle of integrative reconciliation. They do this by explicitly drawing upon and mobilizing all available, similar concepts in cognitive structure that are relevant for and can play a subsuming and
integrative role in relation to the new learning material. This maneuver can effect great
economy of learning effort, avoid the isolation of essentially similar concepts in separate
compartments that are noncommunicable with each other, and discourage the confusing
proliferation of multiple terms to represent ostensibly different but essentially equivalent
concepts. In addition, it may increase the discriminability of genuine differences between
the new learning material and the seemingly analogous ideas in the learner’s cognitive
structure. An organizer should depict clearly, precisely, and explicitly the principal
similarities and differences between ideas in a new learning passage on the one hand, and
existing related concepts in cognitive structure on the other. Constructed on this basis, the
more detailed ideas and information in the learning task would be grasped with fewer
ambiguities, fewer competing meanings, and fewer misconceptions suggested by the
learner’s prior knowledge of the related concepts. And as these clearer, less confused new
meanings interact with analogous established meanings during the retention interval, they
would be more likely to retain their identity.
2.2 Epistemology
Epistemology refers to one’s conception of the structure of knowledge and of knowledge production.
Research has shown (Edmondson & Novak, 1993) that students’ epistemologies have an
important link to their choices of learning strategies and whether or not they integrate what
they learn. The two epistemological views considered are positivism and constructivism.
Positivism vs. Constructivism
Positivist views of science focus on the “objective” study of phenomena and support a
notion of knowledge that is discovered through observation, unfettered by previous ideas
or beliefs. These views of science are based on the pursuit of universal truths and new
knowledge through logic, mathematical application, and objective experience. This view of
science stands in contrast to a constructivist approach, which posits a view of knowledge
as a construction based on previous knowledge that continually evolves, and does not exist
independent of human experience. Truth is based on coherence with our other knowledge,
not correspondence between knowledge and objective reality.
Epistemology and Learning Strategies
In their research, Edmondson and Novak conducted interviews, which focused on students’
conceptions of scientific knowledge, their belief in absolute truths, their role in the
generation of knowledge, and their approaches to learning. Two views of science emerged
as a result of these interviews. The first general view rested on conceptions of structure,
method, and experimentation, and tended to support assumptions of a positivist
epistemology. The second view emphasized ways of thinking about problems or questions,
and was more compatible with a constructivist epistemology. The students who were
identified as positivists tended to be rote learners oriented to grades, while the students
who were identified as constructivists used meaningful learning strategies. The primary
goal of the latter’s approach to learning was to develop a deep understanding of the
material.
For most students, a dichotomy existed between the way they viewed the world as
individuals, as students, and the way they viewed it as scientists. A high level of
integration cannot occur if students maintain two or more systems of knowledge-making
that do not intersect. Parallel ways of knowing foster compartmentalization, not integration
and synthesis. They imply the existence of separate, objective truths that are domain-specific and constant. This view of the structure of knowledge invites rote learning
strategies, because on the surface rote strategies appear to be the most efficient. They
subtly endorse a student’s passivity, because knowledge appears to be self-evident. The
fact that students have become adept at keeping their school learning so separate from their personal experiences, and that they are able to ignore or discount conflicts between their intuition and what they are taught, has contributed greatly to the maintenance and reinforcement of parallel ways of knowing.
Not only do most students have epistemological ideas that tend to be positivistic in orientation; typical elementary and college-level science courses tend to exacerbate the problem, moving students toward more positivistic views. Furthermore, the synergism between learning approach (e.g., rote learning) and epistemological orientation (e.g., positivism) leads toward the stabilization of learning strategies that are ineffective for understanding science, and hinders the development of positive attitudes toward science.
By making epistemological issues explicit to the students, educators can help them move
beyond thinking procedurally and toward thinking “like real scientists”. By emphasizing
the active role of the knower in the construction of knowledge and encouraging students to
reflect on their learning, educators invite students to move away from rote learning
strategies and toward more meaningful ones.
An orientation to the material, meaningful learning strategies, and a constructivist
epistemology share an emphasis on integration, an active role for the knower, and the
assumption that truth is intimately involved with our experiences. Advocating meaningful
learning carries an implicit endorsement for the adoption of a constructivist epistemology.
2.3 The Construct of Subject Matter in the Physical Sciences
Building on the conclusions of Edmondson & Novak, the relationship between a
constructivist epistemology and meaningful learning will be further explored. Usually
when employing a constructivist epistemology in the context of science education, the
emphasis is put on the active role of the observer in the construction of scientific
knowledge. The personal construction of knowledge and its reconstruction by social
interaction (like classroom learning) is a major concern of constructivist science pedagogy
(Driver, 1988). The focus of this work is somewhat different, not dealing with the
development of personal knowledge, but with science as public knowledge. Instead of
centering on an individual student’s construction of science, science itself is viewed as a
process of human construction. In particular, this work is concerned with the structure of
established scientific subject matter, and the meanings embedded in the structure in the
process of its construction. This structure serves as the base from which content is
extracted for the scientific curriculum.
Models and Natural Phenomena
Any mature physical theory consists of three parts: a set of observed phenomena, a
theoretical model and a set of correspondence rules that link the model to observation
(Smit & Finegold, 1995). The first two parts, natural phenomena and models, are the focus
of this work, which examines the interplay between them and its consequent curricular
applications. To emphasize this dual nature of a scientific theory, the integration of natural
phenomena and models will be referred to from here on as “the construct of subject matter
in the physical sciences”, or in short – “the construct” (Figure 2-1).
Figure 2-1: A generic structure of a scientific theory – “the construct”. [Diagram: a theoretical model linked to observed phenomena by correspondence rules.]
As in Section 2.2, an epistemological perspective will be used to examine the construct of
subject matter. In addition to positivism and constructivism, this section will also reference
an instrumentalist view (Hodson, 1985). Although all three epistemological views accept
the generic construct, different epistemologies associate different meanings with each part.
The positivist and instrumentalist associated meanings will be addressed first, followed by
a discussion on the curricular consequence of these meanings.
Figure 2-2: A positivist interpretation of Figure 2-1. [Diagram: observed phenomena are used to discover the scientific truth, which in turn explains the phenomena.]
Figure 2-3: An instrumentalist interpretation of Figure 2-1. [Diagram: the theoretical model is fitted to the observed scientific truth, and serves to organize it.]
1. Positivism – Observations serve as a tool to discover the scientific truth. Models are extracted from objective experience through logical conclusion and mathematical application, and are science’s best approximation to the laws of nature (Figure 2-2).
2. Instrumentalism – Observations are the scientific truth. Models are convenient fictions that fit the observed data, and serve as tools to organize and predict facts (Figure 2-3).
These two views accept both phenomena and models as an integral part of a physical
scientific theory. But is this acceptance reflected in the curricular preferences of these
views? In both views, the teaching of theoretical models can be made separate from the
teaching of observation, because there is a distinction between observable entities and
theoretical ones, and each has an objective autonomous existence: models are absolutely
defined by mathematical equations, and observations can be recorded as objective facts.
On one hand, this implies that the teaching of observation may concern itself mainly with
the techniques of observation, and not with a description of observed phenomena, because
mere facts can be looked up in a reference if needed. On the other hand, in theoretical
courses most effort should be put in the imparting of knowledge about models, because
they embody non-trivial concepts, while facts are self-evident. Students can then use these
models during their education period as basic principles to explain and understand
subsequent facts (positivism), or as tools to systematize them (instrumentalism). When the
students graduate and become researchers, they’ll be able to exploit their laboratory
techniques to devise new experiments and then use these models to explain the newfound
facts (positivism), or fit the results with a known model and generate more facts
(instrumentalism). Either way, there is an intrinsic separability of the construct that is
exploited for the simplification of subject matter, allowing the teaching of theoretical
models to be detached from observed phenomena.
While positivism and instrumentalism hold opposite views as to which part of the construct
corresponds to the scientific truth, constructivism goes even further and opposes the
fundamental notion of “truth” as correspondence. It rejects the positivist belief that models
correspond to some basic “laws of nature”, and adopts an instrumentalist view in which
models don’t have objective existence. On the other hand, it doesn’t accept the
instrumentalist notion that observations correspond to some objective “facts”, since
observations are based on previous conceptions, and subjected to different interpretations.
Instead of viewing truth as correspondence to some external reality, constructivism sees
truth as internal coherence between the two parts of the construct (Staver, 1998). For
constructivists, observations, objects, events, data, laws and theory do not exist
independently of observers, and so can only be checked in an observer dependent way.
This means that statements of a theory can not be measured for their closeness to some
absolute truth, but only for their truth in relation to each other. The measurement of reality
is replaced by the degree of viability of a theory. When correspondence with a reality
outside the construct is lost, “truth” can only come from coherence inside the construct, so
neither part can be fully understood on its own:
1. Models comprise both the core of scientific knowledge (substantive structure) and the method of its creation (syntactic structure). A model is not a mere reflection of an existing law of nature (positivism), or a completed tool whose value is judged by its usefulness in organizing and predicting observations (instrumentalism). Together with a set of relevant observations it forms a process – of construction (from observation to model), and validation (from model to observation). Models differ in their relation to observation – each has a particular set of phenomena it applies to, and specific ways it can be validated or deployed. This difference is not apparent from looking just at the models – it may be appreciable only by reviewing the diverse ways different models interplay with observation. This process of construction and validation is crucial for the understanding of the model: “To understand science is to know how scientific models are constructed and validated” (Hestenes, 1992).
2. Observations are not an objective recording of facts, and are model dependent: “Knowledge won through enquiry is not knowledge merely of the facts but of the facts interpreted” (Schwab, 1962). Observations are always made in the context of a specific model, which determines:
   a) The questions asked – which relevant data to choose from an infinitely large possible data set (Schwab, 1978d).
   b) The terms with which to represent the answer – the choice of terms can emphasize certain facts in a situation at the price of obscuring others. The observed facts are interpreted in the terms of the model. Science grows not only by increased precision and discovery of new phenomena but also by redefinition and replacement of terms, so as to illuminate larger aspects of phenomena or to relate aspects of phenomena previously disjoined (Schwab, 1978e).
From a constructivist viewpoint, the construct of subject matter in the physical sciences is
composed of two mutually dependent parts, which describe a dynamic process of
construction of knowledge, and not just a static collection of facts. The observed
phenomena, on one hand, are the building blocks from which a theoretical model is
constructed, but on the other hand they owe their existence (in terms of interpretation) to
the same model (Figure 2-4).
Figure 2-4: A constructivist interpretation of Figure 2-1. [Diagram: observed phenomena are used to build the theoretical model, which in turn interprets the phenomena.]
This work is not concerned with the philosophical debate between these three viewpoints.
Its goal is not to decide which one is closer to the “truth”. Rather, it is concerned with their
pedagogical outcome. Hodson (1985) studied the connection between philosophy of
science, science and science education. He writes: “In presenting theoretical knowledge in
science it is important that the nature and purpose of theory is made apparent… merely to
learn theory without examining its empirical basis is little better than the rote
memorization of facts”. Section 2.2 showed that there is a relation between students’ learning strategies and their epistemological views, and that a constructivist epistemology
endorses meaningful learning. In this section, a connection between constructivist
epistemology and an integration of natural phenomena and models was established. This
kind of integration is not a part of positivism or instrumentalism. While constructivist
epistemology has an inherent tendency to integrate observed phenomena with theoretical
models, positivism and instrumentalism tend to separate them. Taking integration as a
measure of potentially meaningful content, constructivism is the epistemology of choice
for delivering scientific knowledge in a meaningful way.
Finding the Meaning in the Construct
The integrative power of constructivism is best seen in the distinct approach that it takes
when considering scientific “facts”. Numerous factual statements can be found in any
textbook, and because these are just “facts” (for a positivist or an instrumentalist),
they deserve no further clarification – the only reason for their inclusion in the text is to
exemplify some principle or substitute unknown variables in the end-of-chapter exercises.
Take for example the following statements:
1. The mass of this ball is 0.1 kg.
2. The mass of Pluto is 1.27×10²² kg.
This section will examine these statements from a constructivist point of view. While such
statements seem to be mere descriptions of direct observation, they are usually
interpretations of some other observations. They carry with them a hidden meaning, which
is related both to the original observation, and to the model that was the basis for the
interpretation.
The first statement seems to be no more than the result of an experimental measurement of
the mass of an object. But what kind of experiment yielded this result? Newtonian
mechanics provides two separate models from which an operational definition of mass can
be constructed. One model is Newton’s second law, which defines the inertial mass of an
object, and the other is Newton’s universal gravitation law, which defines its gravitational
mass. Newtonian mechanics does not distinguish between the two definitions, and uses a
single term for both. The reason the model can ignore this difference comes from
experiment – the two distinct definitions of mass can be united through the observation that
bodies of different gravitational mass but equal shape fall with the same acceleration. This
observation means that according to Newtonian mechanics, the two different experiments,
using two different operational definitions of mass, should yield the same result. It turns
out that the term “mass”, which explicitly means “a measure of the amount of substance”
(observation), has an implicit meaning of accepting the equivalence between gravitational
and inertial masses (model), which in turn is based on some other observation.
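To make the two operational definitions explicit, they can be written schematically (this formulation is added here for illustration, and does not appear in the original texts):
\[
m_i = \frac{F}{a} \;\; \text{(second law: inertial mass)}, \qquad
F_g = G\,\frac{m_g M}{r^2} \;\; \text{(gravitation law: gravitational mass)}.
\]
For a freely falling body the two combine to give
\[
a = \frac{m_g}{m_i}\cdot\frac{GM}{r^2},
\]
so the observation that all bodies fall with the same acceleration implies that the ratio $m_g/m_i$ is a universal constant, which Newtonian mechanics sets to one by using the single term “mass”.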
In the first statement, the mass of the object was determined by measuring a property of the
object itself (e.g. its weight or its resistance to acceleration by a force other than
gravitation). In the second statement, the mass specified is of a free falling body (affected
only by gravitation). The trajectory of such an object is independent of its own mass,
which means its mass cannot be determined by measuring its own orbital parameters. The
mass of a planet can only be inferred from measurements of its gravitational influence on
some other object’s trajectory, and a model should be constructed to determine which
object could be used as a reference. Even before it was discovered, the existence of Pluto
was predicted in order to explain apparent deviations in the orbits of Uranus and Neptune,
and its mass was calculated to account for these deviations – different models gave results
ranging from 2 to 6 Earth masses. When Pluto was discovered in 1930, the models changed
to fit its computed orbit, yielding new values for its mass, ranging from one Earth mass
down to 1/10 of an Earth mass. The large variation in values came from using an indirect
reference – measuring small perturbations caused by a planet to an orbit determined
primarily by the sun. All these models rely on knowledge of the masses of Uranus and
Neptune, which were accepted as accurate since they were determined by direct
measurement (it was observed that these planets have satellites that are influenced
primarily by each planet’s own field of gravity). In 1978, when Pluto’s satellite Charon was
discovered, a direct reference for Pluto was available, and a new model for determining its
mass was constructed. Once again, the same theory gives two different operational
definitions for measuring the same quantity, based on different experimental observations.
This time, the results disagree: according to the observed orbit of the new satellite, the
mass of the Pluto-Charon pair turned out to be only about 1/500 Earth masses! Such small
mass could not account for the deviations in the orbits of Neptune and Uranus, and so the
two models are inconsistent¹. The term “mass”, though again used explicitly as an
observation, has different implicit meanings according to the model it presupposes, which
in turn relies on other observations.
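For illustration, the “new model” is essentially Kepler’s third law applied to Charon’s orbit. Using present-day orbital values (assumed here for illustration, not quoted from the original sources – a semi-major axis of about 1.96×10⁷ m and a period of about 6.39 days):
\[
M_{\text{Pluto+Charon}} = \frac{4\pi^2 a^3}{G T^2}
\approx \frac{4\pi^2\,(1.96\times 10^{7}\,\text{m})^3}{(6.67\times 10^{-11}\,\text{m}^3\,\text{kg}^{-1}\,\text{s}^{-2})\,(5.52\times 10^{5}\,\text{s})^2}
\approx 1.5\times 10^{22}\,\text{kg},
\]
roughly 1/400 of an Earth mass – the same order of magnitude as the 1/500 figure quoted above.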
Examining these two examples, it is obvious that although they share the same semantic
structure, the scientific meaning of each is dramatically different. Treating these two
statements as mere facts that can be read from the scale of some measuring device does
the scientific method great injustice. Each observation is intimately involved with a model
that is used for its interpretation, and each model is constructed in view of other
observations. The full scientific meaning is never found in the model or the observation
taken separately, but is dependent on both. Understanding of models or phenomena can
only be achieved by examining the full construct of the theory that describes them. The
relations between all the components of a theory are numerous and complex – the
unraveling of these relations is what makes science a coherent construct of inter-related
concepts rather than an aggregate of isolated facts.

¹ Ten years later, the mystery was solved. The deviations in the orbits of Uranus and Neptune were found to originate from inaccuracies in the determination of their masses. In 1989, the spacecraft Voyager II passed by Neptune and yielded more accurate masses for the outer planets. When these updated masses were inserted into the numerical integrations of the solar system, the residuals in the positions of the outer planets disappeared.
Sometimes students fail to find the meaning of a model they have learned because they
seek the meaning outside of the scientific construct of models and phenomena. In his
introduction to a popular science talk on the subject of quantum electrodynamics, Richard
Feynman (1985) said: “The reason that you might think you do not understand what I am
telling you is, while I’m describing to you how Nature works, you won’t understand why
Nature works that way. But you see, nobody understands that. I can’t explain why Nature
behaves in this peculiar way… The essential question is whether or not the theory gives
predictions that agree with experiment.” In science, the meaning of a model does not come
from some metaphysical reasoning of why the model should be true, because such
reasoning has to rely on assumptions that come from outside the scientific construct, such
as religious, social or esthetic beliefs². The scientific meaning of a model is directly related to
the observations the model applies to. To understand the meaning of a model, one should
first know how nature works, i.e. what natural phenomena are associated with this model,
and then learn how the model is constructed to account for these phenomena, and how it
can be validated by them.
To get back to the “facts”, constructivism changes the traditional textbook balance between
models and observation. By emphasizing the scientific meaning of each model and “fact”,
it promotes natural phenomena from being subordinate to the model, a source for examples
and exercises, to an integrative part of the understanding of science.
² While such considerations can determine which of several valid models will be accepted as the preferable model, they are inferior to the scientific criterion of validation. This is why Einstein’s objection to the probabilistic nature of quantum mechanics – “God does not play dice” – although backed by theological reasoning and the social prestige of a Nobel Prize winner, was scientifically rejected.
Natural Phenomena as Advance Organizers
How can natural phenomena be incorporated into the curriculum, in a way that emphasizes
their role in scientific thinking? How should their interconnection with models be
disclosed, so the intricate network of scientific meaning can be unraveled? What sequence
of presentation will help the students integrate the new meanings into their existing
cognitive structure?
A possible answer to these questions is found in a general chemistry textbook written by
James Birk (1994). Trying to find a new balance between chemical concepts (models) and
descriptive chemistry (phenomena), Birk followed a non-traditional sequence of teaching –
demonstrating relevant chemical phenomena before the presentation of the underlying
chemical concepts. In the introduction he wrote: “Students seem to understand and retain
material more easily if it starts with an investigation of matter that reveals some factual
material, then develops models or principles that explain these observations, and finally
applies these principles to some new area of chemistry”. Each chapter starts with an
illustrative example of some chemical phenomenon – this example serves as an advance
organizer, to which the newly learned chemical concepts are related through the rest of the
chapter.
Can natural phenomena serve as advance organizers? In Section 2.1, advance organizers
were postulated to be:
1. Maximally clear and stable in their own right, and stated in familiar terms.
2. Presented at a higher level of abstraction, generality and inclusiveness, to provide greater explanatory power and integrative capacity.
The second requirement suggests that models, rather than phenomena, should be used as
the ideational anchorage to which subsequent learning will be related. This may be true for
high-school science teaching, or for teaching non-mathematical models in higher
education. This is not true for university teaching of natural phenomena that can be
described by mathematical models – mainly in the physical sciences – in which models fail
to fulfill the first requirement. When teaching physical sciences at the university level,
most of the inclusive principles are hard to conceptualize. Following are some
characteristic examples of why inclusive principles don’t satisfy this requirement:
1. The second law of thermodynamics states that the entropy change in a closed system is always greater than or equal to zero. The term “entropy” is hard to conceptualize, because it does not embody a perception of some everyday phenomenon. Entropy can be related to an everyday perception of disorder, to its thermodynamic definition (the ratio of heat to temperature), or to the statistical mechanics concept of the number of states, but the term itself remains abstract. Because of its use of abstract terms, this inclusive principle would not be clear as an advance organizer.
2. Newton’s first law states that a body will maintain its velocity and direction of motion if no forces act upon it. This law apparently contradicts everyday experience, in which moving bodies tend to slow down and stop. While terms like “motion”, “velocity” and “force” do embody a perception of everyday phenomena, their employment in the characterization of an ideal limit of everyday experience can become counter-intuitive. Because of its use of an idealization, which contradicts existing cognitive structure, this inclusive principle would not be stable as an advance organizer.
3. The measurement postulate of quantum mechanics associates with every physical measurement a mathematical operator, whose eigenvalues are the possible results of the measurement; the probability of getting a specific result is determined by projecting the state of the system, represented by a wavefunction, on the associated eigenfunction (a schematic formulation is given after this list). Quantum mechanics abandons the description of nature by perceptual properties, such as position and velocity (Newtonian mechanics) or heat and work (thermodynamics), altogether. It adopts a new formulation, in which nature is described in terms of mathematical concepts, like “operator” and “wavefunction”, and physically measurable quantities are calculated through mathematical manipulations, like “projecting”. The underlying mathematics requires knowledge of operator algebra and differential equations. For many students these mathematical concepts are new, and should be specifically learned for this purpose. Because this inclusive principle can only be formulated as a complex mathematical model, it cannot be stated in familiar terms before teaching the relevant mathematics. But if the relevant mathematics is taught before the inclusive principle, its teaching will lack an advance organizer.
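In symbols, the postulate reads (a standard formulation, added here for concreteness):
\[
\hat{A}\,\psi_n = a_n\,\psi_n, \qquad
P(a_n) = \left|\int \psi_n^{*}(x)\,\Psi(x)\,dx\right|^{2},
\]
where $\hat{A}$ is the operator associated with the measured quantity, its eigenvalues $a_n$ are the possible results of the measurement, and the probability of obtaining $a_n$ is the squared projection of the state $\Psi$ on the eigenfunction $\psi_n$.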
If the two requirements for advance organizers are mutually exclusive in university level
physical sciences, which one should be kept? When considering the principal function of
the organizer – to bridge the gap between what the learner already knows and what he
needs to know – it is obvious that the requirement for clarity, stability and familiarity is
more important for its proper functioning.
Can natural phenomena fulfill this requirement? Natural phenomena are perceptual, and
many times are perceived by the student in his everyday life. At the same time, they form
the basis for the construction and validation of abstract models. This duality enables them
to bridge between perception and abstract ideas, between what the learner already knows
and what he needs to know. Even if a phenomenon is unknown to the student, it can be
described by perceptual terms, which are familiar to him from his everyday life, and so
create a clear image of it in his cognitive structure. If the new phenomenon seems to
contradict an already acquired perception of reality, it is harder to reject a clearly
demonstrated perceptual “fact” than it is to ignore an abstract hypothesis. This makes
natural phenomena a stable addition to the learner’s cognitive structure (although, at first,
he might associate an alternative conception with his perception of this phenomenon).
Natural phenomena also carry an integrative capacity as advance organizers. A single
phenomenon can be described by several models, each one presupposing different
assumptions and illuminating a different aspect of the phenomenon. This gives the
opportunity to explain how seemingly different models can describe the same phenomenon,
to acknowledge their common features and variance, and to demonstrate their agreements
and discrepancies with observed data. Teaching models in the physical context of natural
phenomena helps to integrate seemingly different models that deal with similar
phenomena, and to increase discriminability between confusingly similar models by
emphasizing the different aspects of the phenomena each pertains to.
This work will demonstrate how incorporating computer-centered teaching can promote
the employment of natural phenomena as advance organizers in the highly complex and
abstract field of quantum mechanics.
Chapter 3 - A Potentially Meaningful Content
In the previous chapter, a theoretical framework of meaningful learning was established.
According to this framework, a basic requirement for promoting meaningful learning is the
availability of a potentially meaningful content. Since a scientific theory is a construct
consisting of models and observations, such a content should include a combination of
both. There are two reasons for including natural phenomena as an integrative part of the
content. The first is that the scientific meaning of a concept or an observation often
emerges from the interplay between the two parts of the construct. This meaning cannot be
fully appreciated from examining only one part of the theory. The second reason is that
natural phenomena are well suited to be used as advance organizers. These organizers help
the student bridge the gap between his everyday experience and the abstract concepts of
scientific theory.
This chapter deals with the practical implementation of the theoretical framework into a
university level curriculum. A major factor is the use of computer technology to facilitate
this implementation. The first section demarcates the technological aspects of content
development, and specifies the incentives for curricular change. The second section centers
on a specific subject matter – elementary quantum mechanics – and contrasts the
conventional and technological approaches to its curricular structure. This section is the
outcome of an extensive development process, and is the major practical innovation of this
work.
3.1 What Do We Teach?
When developing a scientific curriculum, it is important to discern between two facets of
the curriculum. One is the selected content, with its coherent inner structure and its relation
to the rest of scientific knowledge. The other is the learning environment, which comprises
the teaching methods and any technological mediators.
With the advent of a new educational technology, educators strive to utilize it to improve
their teaching. Usually the first impact of the new technology is on the learning
environment – the content is well established and tested, but now there are new ways of
delivering it. The working assumption is that content is self-evident (Laws, 1996), and the
question that guides the development process is: “how do we teach?”
But educational technology can also have an impact on another side of the teaching process
– the selection of subject matter for teaching. Such an approach opens the well-established
curriculum for critical reexamination, with an emphasis on the adaptation of the content in
combination with a change in teaching methods. The focus of this chapter is on the
content: the working assumption is that content is far from being self-evident, and the
guiding question is: “what do we teach?”
Curricular Change
As stated in the introduction, the goal of curriculum making is to find a minimal basis set
of knowledge and skills that can be transmitted to the student. The successful transfer of
this set should fulfill two objectives. The first is that the student would have an elementary
understanding of the field at hand. The second objective is that the student would be able
to autonomously acquire further knowledge in this field.
Once a successful curriculum for a specific field is established and used for many years,
there can be several incentives to its reexamination and consequent change:
1. A change in the underlying scientific knowledge base.
2. A change in the guiding epistemology.
3. New educational technology.
The most obvious reason for curricular change is a change in the underlying scientific
knowledge base. As new experimental methods replace old ones, new concepts take
precedence, while others become obsolete. The curriculum should reflect these changes in
order to keep the students updated with current scientific knowledge. Even if the basic
concepts stay unchanged, modern observations might be better suited to illustrate these
concepts, as technology brings more accurate and more direct evidence of natural
phenomena.
The underlying epistemological view determines which elements of the knowledge base
are more likely to be incorporated in the curriculum. An instrumentalist curriculum will
prefer observations, and a positivist one will favor models. A constructivist approach will
accommodate both, to emphasize the interaction between observations and models. A
change in the guiding epistemology should therefore result in a change in curriculum.
New educational technology may have impact on both content selection and the learning
environment. It may facilitate the presentation of natural phenomena that were too
complex, expensive or hazardous to present otherwise. It may offer different
representations of the same model, to suit different learning styles and intuitions. It may
even change the way the teacher interacts with his students, and the students with one
another. In this manner, teaching technology determines what parts of scientific
knowledge can be taught effectively, and what background is needed for their teaching.
The content selected for a curriculum should be transmitted to the students. The learning
environment determines, to a large extent, if a specific content can or cannot be transmitted
successfully.
The first incentive is manifest in the subject field of quantum mechanics in chemistry,
which is the case study of this work. While the theory of quantum mechanics was well
established by 1930, new experiments that test its validity have been developed ever since.
Describing only the experimental background that brought about its initial development, an
approach that many textbooks share, does great injustice to this rich and productive field.
Many of these new experiments offer a more intuitive display of microscopic properties,
which makes them better suited for educational demonstration of quantum phenomena.
LEED (Low Energy Electron Diffraction) is a good example of such an experimental
technique. Its educational relevance is demonstrated in Section 3.2 under “Wave-Particle
Duality”. Ultra-fast spectroscopy is a modern experimental method based on quantum
theory, which opened a new research field in chemistry. The change in the conceptual
hierarchy this new field brings to chemistry is described under “Quantum Dynamics”.
Concerning the second incentive, most physical chemistry textbooks are inclined toward a
positivist view of science. This is in contrast to the conclusion of the second chapter, that a
constructivist approach is more appropriate for a meaningful learning of scientific
concepts. Spectroscopy is an experimental branch of chemistry, and the differences
between a positivist and a constructivist treatment of this subject are contrasted under
“Quantization of Energy”.
These two incentives are not new in the field of quantum mechanics. However, they had to
wait for their fulfillment until the realization of the third – a new educational technology
based on computer visualization and simulation. Only through this technology could
modern phenomena and a constructivist epistemology be incorporated in an effective way
into the curriculum.
The Computer as an Opportunity for Curricular Change
Computer technology offers many advantages over traditional educational technology.
This work identified three technological issues that have direct impact on the domain of
content development:
1. Graphical capability.
2. Computational capability.
3. Real time capability.
Many university level science courses deal with abstract mathematical models. The
traditional way of communicating these models is in a formal symbolic mathematical
language. The graphical capability of the computer facilitates an alternative to this
symbolic representation. Through two- or three-dimensional visualization many abstract
mathematical concepts can be given a concrete graphical representation. For many (though
not all) students, such a concrete representation is easier to follow and manipulate, and
provides an intuitive bypass to the abstract symbolic representation. In some courses,
where the manipulation of mathematical models is an objective, visualization can serve as
an aid for developing intuition and understanding. In others, where mathematical models
are just a mediator for the representation of natural phenomena, visualization can be used
to eliminate altogether the need for formal mathematical manipulation. In this way
complex mathematical models can be incorporated into the curriculum, even if the students
lack the formal background for their manipulation. Graphical manipulation of these models
can provide the conceptual understanding of the represented natural phenomena, without
the (unnecessary) technical ability required for formal manipulation.
Traditional teaching usually restricts both the model and its prediction to be described
symbolically. This restriction allows only for the solution of analytically solvable models –
those models that can be symbolically manipulated to give a prediction. But for most
natural phenomena, these simple models don’t give an adequate representation. The
computational capability of the computer facilitates a more versatile approach. Starting
from a symbolic model, the computer can calculate numerically its prediction, and present
it in a graphical representation. Removing the restriction for a symbolic representation of
the prediction widens the range of models that can be discussed. Utilizing numerically
solvable models means that teaching is no longer restricted to a few ideal situations that are
seldom encountered in nature. Real observations can thus be included in the curriculum,
and the validity of both simple and more complex models can be discussed.
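As a minimal illustration of this capability (the pendulum example is mine, not taken from the course), a few lines of Python suffice to compute the prediction of a model that is not solvable in elementary functions, alongside its analytically solvable idealization:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Full pendulum model: theta'' = -sin(theta). Not solvable in elementary
# functions, but a numerical prediction is immediate.
def pendulum(t, y):
    theta, omega = y
    return [omega, -np.sin(theta)]

theta0 = 2.0                           # large initial angle (radians)
t = np.linspace(0, 20, 1000)
sol = solve_ivp(pendulum, (0, 20), [theta0, 0.0], t_eval=t)

# Idealized (harmonic) model: theta'' = -theta, solved analytically.
plt.plot(t, sol.y[0], label="numerical solution of the full model")
plt.plot(t, theta0 * np.cos(t), "--", label="analytic solution of the ideal model")
plt.xlabel("time"); plt.ylabel("angle"); plt.legend(); plt.show()
```

Plotting the two together makes the point of the paragraph visually: the analytically solvable idealization is only an approximation, and its range of validity can be read directly off the graph.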
Even when traditional teaching did use graphical representation of models, these
representations were restricted by the media they were represented on – like blackboard or
paper. These media types are two dimensional and static. The real time capability of the
computer, taken together with its graphical and computational capabilities, gives the ability
to animate such representations. This relieves the restriction of having to view
two-dimensional projections of three-dimensional objects. Virtual rotation of the
representations of these objects allows viewing them from all angles in real time. But more
important is the possibility to incorporate the dimension of time into model representation.
As nature is constantly changing, giving only a static representation of it is a great
limitation toward its understanding. The ability to animate dynamical calculations removes
this limitation.
The integration of computer technology into the course “Introduction to Chemical
Bonding” offers many examples of its impact on the content taught. In this course, the
abilities to perform a Fourier transform or to solve a differential equation are neither a
prerequisite nor an objective of the course. Yet only through these mathematical
manipulations can some basic concepts of quantum mechanics be revealed. Two ways to
overcome this formal mathematical barrier through visualization are described in Section
3.2 under “De-Broglie’s Relation” and “Quantization of Energy”. The anharmonic
oscillator is a basic model in the representation of spectroscopic observations, yet it does
not have an analytic solution. A numerical approach to the teaching of this model is
described under “Quantization of Energy”. The incorporation of dynamics into the
curriculum is discussed under “Quantum Dynamics”.
Objective
Define a model for selecting content from a scientific knowledge base, in view of the
capabilities of computer aided instruction, so as to build a potentially meaningful
curriculum.
3.2 Application to Elementary Quantum Mechanics
The practical considerations described in Section 3.1, along with the theoretical
considerations of Chapter 2, served as guidelines for a thorough curricular change in an
existing university course. This course deals mainly with elementary quantum mechanics,
as described in the introduction under “The Case Study”. Four concepts were selected from
this course for presentation in this work, each described in a different subsection. On the
one hand, these concepts were chosen because they are fundamental concepts, whose
incorporation is crucial for understanding quantum mechanics. On the other hand, they
exemplify the need for curricular change, and the advantage gained by utilizing computer
technology in their teaching.
In the theoretical framework, natural phenomena were suggested as the most suitable
advance organizers for meaningful teaching of scientific models. This approach will be
employed throughout the section. For the sake of readers who are not familiar with the
models of quantum mechanics, each subsection opens with a description of an experiment.
This experiment serves as an advance organizer for teaching the concept appearing in the
title. The appropriate quantum mechanical model for this concept follows. After the
scientific background is established, the use of computer technology in teaching the
concept is demonstrated. Following that, the conventional approach for teaching the same
concept is described. Suggestions for curricular change will ensue by contrasting the new
and conventional approaches.
When choosing a phenomenon to serve as an advance organizer in the field of quantum
mechanics, the major consideration should be the students’ prior knowledge of classical
mechanics. Classical mechanics deals with macroscopic objects, with which the student
has been interacting all his life. Many models of classical mechanics are thus familiar
and intuitive. But the models appropriate for macroscopic objects often fail when applied
in the microscopic domain. On the one hand, the students shouldn’t resort entirely to
(intuitive, but often incorrect) classical mechanics when dealing with quantum mechanical
problems. On the other hand, they shouldn’t get the impression that quantum mechanics is
totally alien to classical mechanics, as it shares some of the concepts and intuition of
classical mechanics. In many cases, these concepts and intuition can be called upon to
simplify the solution of quantum mechanical problems. It is important to demonstrate for
which phenomena these similarities exist, and for which they fail. As stated in the
theoretical framework, the destruction of artificial barriers between related concepts
reveals important common features, and promotes acquisition of insights dependent upon
recognition of these commonalities. The advance organizers should therefore differentiate
between the two kinds of mechanics and draw some boundaries, but also point out the
similarities and the domains of overlap between the two. In this way they can help the
student to integrate seemingly new concepts with basically similar concepts existing in his
cognitive structure, and to increase discriminability between new and existing ideas which
are essentially different but confusingly similar.
Because numerous similar courses are being taught worldwide, there is no one
conventional method for teaching this subject. For the sake of simplicity, however, an
“average” conventional approach was composed by reviewing four popular textbooks, and
combining common features of content and structure. These textbooks, all titled “Physical
Chemistry” (Castellan, 1983; Barrow, 1988; Levine, 1988; Atkins, 1995), were chosen for
being standard textbooks of comparable courses in Israel and in the United States, as
determined by a random selection of syllabi from the Internet.
Most of the figures that appear in this chapter are also available as interactive computer
programs on the accompanying CD-ROM.
Wave-Particle Duality
By the time students take a course in introductory quantum mechanics, they have already
been exposed to such phrases as “light is an electromagnetic wave” and “light is composed
of particles called ‘photons’, which travel at the speed of light”. The concept of particles
that behave as waves is a basic concept in quantum mechanics, but also a very difficult one
to understand. This is because everyday perception of phenomena draws a clear distinction
between the discrete nature of particles and the continuous nature of waves. Wave-particle
duality only exists in the microscopic world. This model was developed to account for
certain phenomena associated with light, and later with electrons, atoms and molecules. In
order to understand this model, such phenomena should first be exhibited. The
phenomenon that was chosen to serve as an advance organizer is a feeble light interference
experiment (Reynolds & Spartalian, 1969). In this experiment, a low intensity light beam is
passed through a Fabry-Perot interferometer, and the transmitted light is recorded on a
photographic plate. At very low intensities, discrete dots appear on the plate. This behavior
corresponds to a particle model of light – each photon is scattered by the film and strikes
the plate at a single point. At high intensities, the result is radically different: a pattern of
alternating dark and light areas appears. This behavior corresponds to a wave model of
light – constructive and destructive interference, caused by different path lengths of the
light passing through the interferometer, determines the continuous intensity change. At
intermediate intensities, the two pictures blend (Figure 3-1). Individual dots can still be
identified, but these dots aggregate to form the alternating light and dark areas of the
interference pattern. This experiment serves as an integrative organizer, showing that the
two apparently incompatible models are actually two faces of the same phenomenon.
Figure 3-1: Interference patterns created by a very low intensity light source. Left: After a short
exposure, an apparently random pattern of discrete dots marks the positions of single photons
striking the photographic plate. Right: As the exposure time gets longer, the discrete dots merge
into a continuous diffraction pattern.
The quantum mechanical model for this phenomenon is Born’s interpretation of the wave
function. According to Born, the wave model is appropriate for calculating the intensity
distribution at high intensities, where individual particles cannot be discerned. Each
individual particle is not distributed over space like a wave, and can be found only in a
discrete position when measured. The wave model does not determine the trajectory of a
single particle. Rather than determination, it gives only the probability of finding the
particle at a given position at a given time. This probability determines the average
distribution of individual particles. At low intensities, each photon has a probability to
strike at many different locations, and the pattern of photons appears as a random
distribution of individual dots. As the number of particles increases, statistical deviations
from the average become negligible. Thus at high intensities, the probabilistic particle
model gives the same predictions as the continuous wave model.
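This statistical argument is easy to make concrete in a few lines of Python (a sketch of the underlying idea, not the actual courseware): photon positions are drawn at random from the wave-model intensity, and the fringe pattern emerges only as counts accumulate:

```python
import numpy as np
import matplotlib.pyplot as plt

# Wave model: interference intensity across the photographic plate.
x = np.linspace(-1, 1, 400)
intensity = np.cos(8 * np.pi * x) ** 2      # alternating light and dark fringes
prob = intensity / intensity.sum()          # Born's rule: intensity -> probability

rng = np.random.default_rng(0)
fig, axes = plt.subplots(1, 3)
for n, ax in zip([50, 500, 50000], axes):
    hits = rng.choice(x, size=n, p=prob)    # each photon strikes at a single point
    ax.hist(hits, bins=80)                  # accumulated counts on the plate
    ax.set_title(f"{n} photons")
plt.show()
```

At 50 photons the counts look like a random scatter of dots; at 50,000 the histogram reproduces the continuous interference pattern, just as in Figure 3-1.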
To clarify this model, an interactive computer simulation of a related experiment was
devised.
Figure 3-2: A simulation of a LEED (Low Energy Electron Diffraction) experiment. Top Left: A
photograph of a real LEED experiment, showing a hexagonal diffraction pattern created by
electrons scattered from a metal crystal. Top Right: A computer simulation of the same experiment,
showing an overview of the experimental setup – an electron gun, a metal target and a fluorescent
screen. Bottom Left: A simulation of a high intensity electron beam, showing a continuous
diffraction pattern. Bottom Right: A simulation of a low intensity electron beam, showing the
discrete nature of single electrons striking the screen.
In this experiment, electrons scattered from a metal crystal create a diffraction pattern on a
fluorescent screen (Figure 3-2). This phenomenon was chosen for three reasons. First, it
demonstrates that the wave-particle model is also applicable to electrons. Second, this
phenomenon is the basis of a standard experimental technique for measuring
crystallographic properties, called LEED (Low Energy Electron Diffraction). Third, it
facilitates the demonstration of the phenomenon in an interactive manner. A slider controls
the intensity of the electron beam. At one extreme, high electron intensity gives a
continuous diffraction pattern. At the other extreme, very low intensity shows random
flashes on the screen, marking a single electron collision at a time. A smooth transition
from one extreme to the other can be achieved by slowly varying the slider’s position. The
ability to change the intensity of the beam and see the dynamic change in pattern discloses
the underlying probabilistic model in an intuitive visual way.
The conventional way of teaching this subject also starts from demonstrating natural
phenomena. The difference is in the selection of the demonstrated phenomena, where each
phenomenon only exhibits one aspect of the model. The most common references for
particle behavior are the cathode-ray experiment by Thomson (1897) for electrons and the
Einstein model of the photoelectric effect (1905) for photons. As for wave behavior, the
electron diffraction experiment by Davisson and Germer (1927) is usually cited, while the
wave properties of light were not demonstrated in any of the books reviewed (probably
assumed to be well known at this stage). Since none of these exhibits duality, the wave and
particle models seem to be incompatible, each one suitable for explaining a different set of
observed phenomena: “Under certain experimental conditions, an electron behaves like a
particle; under other conditions, it behaves like a wave. It is something that cannot be
adequately described in terms of a model we can visualize” (Levine, 1988).
Confronted by two apparently incompatible models, students try to compose their own
synthesis to settle inconsistencies in their understanding of light (Smit & Finegold, 1995).
They sometimes think of light as a transverse wave motion through a sea of photons (the
way sound waves propagate through air), or as particles following a sinusoidal trajectory.
These are mental images based on their previous perception of “particle” and “wave”.
However, the meaning of “particle” and “wave” in the microscopic domain does not come
from everyday analogies of bullets and water waves, nor is it an abstract concept. It is
related to some concrete phenomena. While the early experiments showing the
wave-particle duality of light and matter were sufficient for the pioneers of quantum
mechanics to devise their own models of nature³, it seems that the less enlightened students fail to
follow. But there is no need to limit the students to the knowledge present at the time the
model was first developed. Scientific progress presents modern phenomena that
demonstrate clearly the transition between wave-like and particle-like behavior. Using
computer technology to simulate such experiments encourages the incorporation of modern
phenomena into the curriculum. The ability to do so in an interactive manner enhances the
comprehensibility these experiments offer.
³ Even for them, the probabilistic model was hard to fathom. The first time Born suggested his interpretation, he did so in a footnote, not being confident enough to present it in the text itself.
Figure 3-3: An experiment that measures the relation between a particle’s velocity and its
wavelength. Top Left: The experimental setup. Helium atoms emerge from an oven with a thermal
distribution of velocities and random direction. Passing the atoms through a narrow slit (skimmer)
creates a directed beam of atoms. Passing the beam through a velocity selector (chopper) narrows
the velocity distribution. A crystal of Lithium Fluoride then diffracts this nearly monochromatic
beam. A manometer measures the angular distribution of the scattered atoms. Bottom Left: A
computer simulation that models the operation of the chopper. The apparatus is composed of two
notched discs on a rotating axis. Atoms with different velocities (denoted by different colors) strike
the first notched disc in a random pattern. Those atoms that pass through the first notch meet the
second disc in a velocity sorted order. The fast (red) atoms get there before the second notch, while
the slow (purple and blue) only arrive after the notch has passed. Only atoms having the
appropriate velocity (pink) pass through the second notch. Increasing the angular frequency of the
axis will allow faster atoms to pass through both notches. Right: Experimental results. The angular
distribution of the scattered atoms clearly shows the first order diffraction peak. As the angular
frequency of the chopper increases, the angle of the first peak decreases, which corresponds to a
shorter wavelength.
De-Broglie’s Relation
In 1905 Einstein devised a particle model to explain the photoelectric effect. In 1916
Millikan validated this model experimentally. By that time, the wave-particle duality for
light was well established. Based on this model, De-Broglie suggested in 1924 that
particles should also exhibit wave-like properties, and that the wavelength of a moving
particle should be inversely proportional to its momentum (its velocity multiplied by its
mass). This model was validated in 1927 by Davisson and Germer for electrons, and in
1930 by Stern for atoms and molecules (Trigg, 1971). The latter experiment was chosen as
the advance organizer for teaching the concept of De-Broglie’s relation. In this experiment,
a beam of Helium atoms is velocity selected and diffracted from a Lithium Fluoride crystal
(Figure 3-3). The angular distribution of the scattered atoms is measured. The distribution
exhibits a sharp diffraction peak, whose angle is inversely proportional to the velocity of
the particles. This experiment was chosen because it demonstrates clearly the concepts of
velocity and wavelength, to which De-Broglie’s relation applies. In the first part of the
experiment, it is easy to visualize the operation of the velocity selector (“chopper”) by
associating a particle model with the motion of the atoms. Each atom is localized – it
should be in one place at one time and in another place at another time in order for the
chopper to operate. In the second part, the diffraction pattern in the angular distribution has
to be associated with a wave model, because a fully localized particle cannot interfere with
itself. The simplest model for describing diffraction patterns is a monochromatic wave.
This wave is a global phenomenon, i.e. it exists over all space simultaneously. Thus it can
interfere with itself constructively or destructively to create a diffraction pattern. The angle
between successive constructive interference peaks is proportional to the wavelength of the
monochromatic wave. However, this simple wave model cannot account for the first part
of the experiment, because a global monochromatic wave is a poor description for a
localized particle that moves through space. This experiment illustrates the need for a more
elaborate wave model, which allows interference of semi-localized particles.
In quantum mechanics, the state of a particle is represented by a function of position. The
square of the absolute value of this function determines the probability density of finding
the particle at some position. According to this model, a single monochromatic wave describes a particle with
equal probability to be everywhere. To localize the particle, a wave-packet model is
employed. A wave-packet is a sum of many waves, each multiplied by a weight. The
waves interfere constructively and destructively to give a position dependent probability
distribution. The shape of the resulting distribution is dependent on the weight of each
component in the superposition. The weight function of a given wave function can be
calculated by performing a mathematical manipulation. This manipulation is called a
Fourier transform, and is usually beyond the scope of the undergraduate chemistry
curriculum.
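The superposition itself is simple to state in code. The following sketch (mine, loosely mirroring the program described below, with a gaussian weight function assumed) builds a localized packet from 33 plane-wave components:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 800)
ks = np.linspace(-4, 4, 33)            # 33 wave numbers k
k0 = 2.0                               # center of the weight function
A = np.exp(-(ks - k0) ** 2)            # gaussian weights A(k)

# Wave-packet: a weighted sum of plane waves e^{ikx}.
psi = sum(a * np.exp(1j * k * x) for a, k in zip(A, ks))

plt.plot(x, np.abs(psi), label="|psi(x)| (envelope)")
plt.plot(x, psi.real, alpha=0.5, label="Re psi(x)")
plt.xlabel("x"); plt.legend(); plt.show()
```

The components interfere constructively near $x = 0$ and destructively elsewhere, producing the localized packet; the relation between the weights $A(k)$ and the packet’s shape is exactly the Fourier-transform relation mentioned above.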
Figure 3-4: An interactive construction of the wave-packet model. The program sums 33 wave
components of the form $\psi_k(x) = e^{ikx}$, each multiplied by a factor of $A(k)e^{i\varphi(k)}$. The interface
consists of two graphic equalizers: one controls the amplitudes of the wave components ($A(k)$ –
top right in each program window), and the second controls their phases ($\varphi(k)$ – bottom right).
The display shows the resulting superposition in the range $[-4, 4]$. The contour of the
superposition denotes its absolute value, which determines the probability density. The fill color of
the superposition denotes the phase, where red is ~$0$, pink is ~$\pi/2$, blue is ~$\pi$ and purple is ~$-\pi/2$.
Top Left: A single wave component with $k = 0$. Middle Left: A single wave component with $k =
0.25$. Bottom Left: The sum of the above. Note that where the two functions have the same phase
(at $x = 0$) they interfere constructively, and where they have opposite phases (at $x = 4$) they
interfere destructively. Top Right: A sum of 17 wave components, where $A(k)$ is a gaussian
centered at $k = 0$. All components interfere constructively around $x = 0$ and destructively elsewhere,
to produce a localized wave-packet centered at $x = 0$. Middle Right: A similar wave-packet,
centered at $k = 2$. This represents a particle with non-zero momentum. Bottom Right: The dynamics
of a free wave-packet. The time-dependent change of phase causes the wave-packet to move.
To overcome this mathematical barrier, an interactive computer program for
superimposing waves was devised (Figure 3-4). In this program, a set of sliders controls
the superposition weights of 33 monochromatic wave components. These weights can be
selected so as to make all the waves interfere constructively in a restricted domain, and
destructively outside of it. Thus, a localized wave-packet is constructed. The relation
between the positions of the sliders and the resulting wave-packet is mathematically
equivalent to performing a Fourier transform, but doesn’t require any mathematical skills
in doing so. The basic concept of constructing a localized wave-packet from a series of
global monochromatic waves is thus conveyed without any mathematical prerequisites.
Furthermore, the dynamics of such a wave-packet can be demonstrated. The dynamics are
governed by the time-dependent Schrödinger equation of a free particle. According to this
equation, the rate of change in time of the phase of each wave component is proportional to
the square of its wave number, denoted $k$. The relative change of phase induces motion of
the wave-packet. The quadratic dependence on $k$ causes the relative change to be larger
for higher $k$ values. Consequently, wave-packets with higher $k$ values will move faster,
and have greater momentum. On the other hand, the wavelength of a wave-packet, denoted
$\lambda$, is related to its wave number: $\lambda = 2\pi/k$. Therefore, the dynamics of a free wave-packet
show that the shorter the wavelength, the greater the momentum, as stated in De-Broglie’s
relation.
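Stated compactly (in units where the particle’s mass and Planck’s constant are set to 1, the convention used for all equations in this chapter), the argument runs:
\[
\psi(x,t) = \sum_k A(k)\, e^{i(kx - \omega_k t)}, \qquad \omega_k = \frac{k^2}{2},
\]
so the phase of each component advances at a rate proportional to $k^2$, and the packet centered at wave number $k$ moves with the group velocity
\[
v_g = \frac{d\omega_k}{dk} = k = \frac{2\pi}{\lambda}.
\]
In these units momentum equals velocity, so momentum is inversely proportional to wavelength – De-Broglie’s relation.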
A relevant question at this point would be: “If a wave-packet is a localized wave, does it
still exhibit wave-like behavior which depends on the global nature of waves, like
diffraction?”. This question is a difficult one to answer, because it deals with the
diffraction of wave-packets. Diffraction calculations for a monochromatic wave are
straightforward, and can be found in any basic optics textbook. Free particle wave-packet
propagation is a bit harder, still it has an analytic solution that can be found in advanced
textbooks of quantum mechanics. However, the diffraction of wave-packets is a highly
complex problem, and its solution requires mastery of advanced topics such as scattering
theory. This obstacle is again overcome by using computer technology. In this case, the
computational capability of the computer facilitates simulation of the propagation of a two-dimensional wave-packet through two slits in a barrier (Figure 3-5). This problem does not
have an analytic solution, and the simulation is calculated numerically. The solution is
dynamically presented as an animation, and clearly shows that wave-packets behave at first
as particles and later as waves, and obey De-Broglie’s relation.
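The text does not specify the numerical scheme; one standard choice, sketched here under that assumption, is the split-operator Fourier method, which alternates the potential evolution (applied in position space) with the kinetic evolution (applied in wave-number space):

```python
import numpy as np

# Grid, and an initial gaussian wave-packet moving in +y toward a two-slit barrier.
n, L, dt = 256, 40.0, 0.05
x = np.linspace(-L/2, L/2, n)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
KX, KY = np.meshgrid(k, k)

k0 = 3.0                                            # initial wave number
psi = np.exp(-(X**2 + (Y + 10)**2) / 8 + 1j*k0*Y)   # packet centered at y = -10

# Barrier at y = 0 with two slits, modeled as a high potential wall.
V = np.zeros((n, n))
wall = np.abs(Y) < 0.5
slits = (np.abs(X - 2) < 0.7) | (np.abs(X + 2) < 0.7)
V[wall & ~slits] = 200.0

# One split-operator step: half potential, full kinetic, half potential.
expV = np.exp(-0.5j * V * dt)                # exp(-i V dt/2)
expK = np.exp(-0.5j * (KX**2 + KY**2) * dt)  # exp(-i k^2 dt/2), with m = hbar = 1
for _ in range(200):
    psi = expV * psi
    psi = np.fft.ifft2(expK * np.fft.fft2(psi))
    psi = expV * psi

# np.abs(psi)**2 now shows the interference pattern beyond the slits.
```

Doubling `k0` halves the wavelength and doubles the packet’s speed, reproducing the comparison between the two panels of Figure 3-5.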
Figure 3-5: Animations of two-dimensional wave-packets passing through two slits. The red, pink,
blue and purple colors denote the phase of the wave function, as in Figure 3-4. The brown color
denotes a reflective energy barrier, with two slits in it. The background is green, denoting
negligible probability of finding the particle. Left Panel: A gaussian wave-packet with a wave
number of 0.3. Right Panel: A similar wave-packet with a wave number of 0.6, i.e. half the
wavelength. Top: The initial wave-packets at t = 0. Middle: The wave-packets at t = 2. Notice that
the shorter wavelength wave-packet moves twice as fast as the other does. Bottom: The
wave-packets at t = 8. The part of the wave-packets that passed through the slits creates an
interference pattern. The zero-, first- and second-order diffraction peaks are marked. The angle of
the first order diffraction peak is inversely proportional to the momentum of the wave-packet.
The concept of wave-packets is neglected in conventional teaching at the undergraduate
level. It is usually dealt with in graduate quantum mechanics courses. In the textbooks
reviewed, it is only mentioned once (Atkins, 1995), and even then the explanation is very
limited by the static nature of printed, two-colored graphs. In all cases, it is not clear when
a particle’s momentum should be treated as an indication of its motion, and when as an
indication of its wavelength: “In chemistry we proceed most simply and effectively by
thinking of electrons as particles. Under some circumstances these particles behave in a
way that can be described by using the methods of wave mechanics” (Barrow, 1988). It
seems that the association of momentum with velocity and motion is a part of classical
mechanics, while in the quantum domain particles behave entirely differently, and
momentum is only associated with wavelength.
From the point of view of learning theory, the wave-packet model is most suitable for
teaching the concept of De-Broglie’s relation. On the one hand, it legitimizes the use of
intuitive classical concepts of position and motion when dealing with free particles. By this
it offers an opportunity to integrate new knowledge into existing cognitive structure. On
the other hand, it sets limits to the validity of these concepts – wave-packets clearly
demonstrate Heisenberg’s uncertainty principle⁴ (Figure 3-6). In this way it increases
discriminability between the new quantum wave model and the existing classical
perception of particles. By performing these two functions, the wave packet model
promotes integrative reconciliation and thus meaningful learning.
Does the fact that the wave-packet model is absent from conventional teaching mean its
pedagogical value was not acknowledged? The single instance in which it does appear
suggests otherwise. It is probably due to the technical difficulty of illustrating the abstract
concept of wave-packets, or due to the mathematical prerequisites for dealing with it
rigorously, that the teaching of this subject was postponed. In this subsection, the impact of
computer visualization on the ability to teach the concept of wave-packets at the
undergraduate level was demonstrated. This is an example of how computer technology
can influence curriculum decision making, by eliminating prerequisites of prior knowledge
and facilitating the incorporation of advanced abstract models at an early stage.
⁴ Heisenberg’s uncertainty principle states that measurements of position and momentum have an inherent finite accuracy. Reducing the uncertainty in measuring the position of a particle always increases the uncertainty in measuring its momentum, and vice versa.
Figure 3-6: An example of Heisenberg’s uncertainty principle. Left: A localized wave-packet is
composed of many wave components, each with a different wavelength and hence associated with
a different momentum (according to De-Broglie’s relation). Right: If the number of wave
components is smaller, i.e. the uncertainty in momentum is reduced, the resulting wave-packet is
more spread-out, which means its uncertainty in position is increased.
Quantization of Energy
One of the first goals of quantum mechanics was to explain the observation that energy in
the molecular world often exhibited a discrete, rather than a continuous, behavior. The
advance organizer chosen to demonstrate the quantization of energy is the visible spectrum
of molecular gaseous Iodine (Figure 3-7). In this experiment, a sample of I₂ is irradiated
with visible white light. After passing through the sample, the light is diffracted onto a
photographic plate. The diffraction separates the white light into its colored components,
so longer wavelengths are shifted to the left side of the plate, and shorter wavelengths are
shifted to the right. The light striking the plate darkens the photographic material. A clear
progression of discrete bright stripes can be seen on the plate, meaning only a small
amount of light with the corresponding wavelengths has passed through the sample.
Figure 3-7: The visible absorption spectrum of I₂ molecules. At the left side of the spectrum, the
gas only absorbs at discrete wavelengths. The spacing between adjacent absorption lines decreases
as the wavelength shortens. At 4995Å, the absorption becomes continuous, i.e. any shorter
wavelength is absorbed.
The quantum mechanical model of this phenomenon is that some of the light is absorbed
by the Iodine molecules, and so does not reach the photographic plate. According to
Einstein’s model of electromagnetic radiation, light is composed of particles, called
photons, each carrying a specific amount of energy determined by its wavelength. When a
photon hits a molecule, it vanishes and its energy is transferred to the molecule, which
becomes excited. A photon cannot be partially consumed, so if the molecule cannot accept
all of its energy it does not accept any. Since only specific wavelengths are observed to be
absorbed, there must be only a discrete set of energy levels the excited molecule can have.
Photons with energies that match these energy levels are absorbed. All other
photons pass through the sample and strike the plate.
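For concreteness (the numerical value is computed here from the standard relation, not quoted from the text), the photon energy at the onset of continuous absorption is
\[
E = \frac{hc}{\lambda} = \frac{(6.63\times 10^{-34}\,\text{J s})(3.00\times 10^{8}\,\text{m/s})}{4995\times 10^{-10}\,\text{m}} \approx 4.0\times 10^{-19}\,\text{J} \approx 2.5\,\text{eV},
\]
which, as discussed below, corresponds to the threshold for dissociating the excited molecule.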
Why can the excited Iodine molecule have only a discrete set of energies? A quantum
mechanical model for this behavior treats the molecule as two masses connected by a
spring. When the masses are displaced from their equilibrium position by a distance $x$, the
spring induces a potential change, denoted $V(x)$. The state of the molecule is represented
by a wave function, denoted $\psi(x)$. For the excited molecule to be in a state with a specific
energy $E$, its wave function must satisfy the stationary Schrödinger equation⁵:
\[
-\frac{1}{2}\,\frac{\partial^2 \psi(x)}{\partial x^2} + V(x)\,\psi(x) = E\,\psi(x)
\]
This is a second order differential equation. It does not have a general solution. Rather,
there are only a few potential functions for which this equation is analytically solvable.
The harmonic potential $V(x) = \frac{1}{2}x^2$ is one of them, and so it serves as a first
approximation for solving this equation. This approximation is justified for small
displacements, for which every potential behaves like a harmonic one. However, having an
analytic solution doesn’t mean having a simple solution. The analytic treatment of the
harmonic oscillator is complex and lengthy (a few pages long), and requires knowledge of
differential equations. More than that, the solution is unique to this problem and gives no
global insights. Still,
behind all the mathematical manipulations, there is a single important physical concept.
While there is a mathematical solution for every value of E, only a discrete set of solutions
obeys certain boundary conditions. These boundary conditions are imposed by the demand
that the square of the wave function could be interpreted as a probability distribution.
⁵ For the sake of simplicity, the units in all equations have been chosen so as to make all non-relevant constants (such as the particles’ mass, the spring constant and Planck’s constant) equal to 1.
Figure 3-8: A model demonstrating that the demand for boundary conditions forces quantization of
energy. Starting from an initial value on the left, the program numerically integrates the second
order differential Schrödinger equation, $\partial^2\psi(x)/\partial x^2 = 2[V(x) - E]\,\psi(x)$, where $E$ is a free
parameter associated with the energy of the resulting function. A solution to this equation has a
physical meaning only if it is bounded (does not diverge to $\pm\infty$). Shown are the harmonic potential
curve $V(x)$ in green and the integrated function $\psi(x)$ in blue or red. Top Left: The integration
carried out with $E = 3.49$. The resulting function diverges to minus infinity for large values of $x$.
Top Right: The integration carried out with $E = 3.51$. The resulting function diverges to plus
infinity for large values of $x$. Bottom Left: The integration carried out with $E = 3.50$. The
resulting function goes to zero for large values of $x$, and so obeys the boundary condition. Bottom
Right: The integration carried out with $E = 4.50$. For a harmonic potential, the allowed
energies are equally spaced (yellow lines).
To illustrate this physical concept, without getting into the unnecessary mathematical
complications, an interactive computer program was devised (Figure 3-8). In this program,
the differential equation is solved by numeric integration. The value of $E$ can be changed
by a slider. For most values of $E$, the resulting wave function diverges to $\pm\infty$. Such a
wave function does not qualify as a probability distribution, as infinite probability has no
physical significance. There are wave functions that go to zero at $x = \pm\infty$, thus obeying
the physical boundary condition. This happens only for a discrete set of energies. In
showing this, the model is suitable for describing the discrete spectrum of the molecule.
For a harmonic oscillator, the spacing between energy levels is constant.
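A minimal sketch of such a numeric integration (my illustration, not the actual program; simple Euler stepping is assumed for brevity) shows how the boundary condition singles out discrete energies:

```python
import numpy as np

def integrate_psi(E, V, x_max=6.0, dx=1e-3):
    """Integrate psi'' = 2[V(x) - E] psi from the left; return psi at x_max."""
    x, psi, dpsi = -x_max, 1e-6, 1e-6   # small starting values deep inside the wall
    while x < x_max:
        dpsi += 2.0 * (V(x) - E) * psi * dx
        psi += dpsi * dx
        x += dx
    return psi                          # diverges unless E is near an allowed energy

V = lambda x: 0.5 * x**2                # harmonic potential

# Scan E; a sign change of psi(x_max) brackets an allowed energy.
Es = np.arange(0.1, 6.0, 0.05)
tails = [integrate_psi(E, V) for E in Es]
for E1, E2, t1, t2 in zip(Es, Es[1:], tails, tails[1:]):
    if np.sign(t1) != np.sign(t2):
        print(f"allowed energy between {E1:.2f} and {E2:.2f}")
# For the harmonic potential, this brackets energies near 0.5, 1.5, 2.5, ...
```

Only near the allowed energies does the integrated function avoid divergence, so the sign of its tail flips there – exactly the behavior the slider in Figure 3-8 lets students discover by hand.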
However, a closer inspection of the spectrum reveals that this is the case only for the lower
energy absorption lines (v’ < 30). For higher energies, the spacing between adjacent
absorption lines gets smaller as the wavelength gets shorter. Starting at 4995Å, the
absorption becomes continuous, i.e. any shorter wavelength is absorbed. The harmonic
oscillator model cannot account for these observations. A refinement of the model is
needed to accommodate the new observations. This is achieved by changing the potential
function to an anharmonic potential, which becomes a constant function for large interatomic separations. Because the program uses numeric integration, the fact that an
anharmonic potential doesn’t have an analytic solution has no effect on it. Just by changing
the potential function, the same program gives the desired results (Figure 3-9). An
anharmonic potential has a finite set of discrete energies, which grow closer as energy
increases (shorter wavelengths). After the threshold energy is reached, the integrated wave
function stays bound for all energies. This corresponds to a continuous spectrum. It is
important to note that in this region, the wave function has a probability of finding the two
atoms at infinite separation, which corresponds to dissociation of the molecule into two
atoms. This prediction can be tested experimentally, and indeed in the region of continuous
absorption, atomic Iodine can be identified in the radiated vessel.
Figure 3-9: A program similar to the one described in Figure 3-8, using an anharmonic potential curve. Left: For an anharmonic potential, the spacing between adjacent energy levels decreases as the energy increases. Right: When the energy exceeds the dissociation threshold, every solution is bounded, and so all energies are allowed.
The epistemological reasoning behind the technological approach is constructivist in
nature. It starts from a phenomenon, then constructs a model to describe it. The model is
analyzed, compared again with experiment and refined. At the last stage, a prediction
based on the model is made, and validated through experiment. In all steps, the relation
between model and observation is emphasized. The terms used to interpret the experiment
are taken from the model, and the range of validity of each model is determined by
experimentally testing its predictions. Conventional textbooks take a different
epistemological approach to this subject. Models and observations are treated separately.
The models of quantum mechanics are established first, entirely on a theoretical basis, in a
chapter called “Quantum Mechanics” or “Quantum Theory”. Experimental evidence for the validity of these models is presented much later, in a chapter called “Spectroscopy”. For
example, in Castellan (1983), the harmonic oscillator model is introduced on page 491.
Even though this model is announced as “applicable to real physical oscillators… for
example, the vibration[6] of a diatomic molecule such as N2 or O2”, the discussion of
experimental evidence associated with the harmonic oscillator is postponed until page 628.
The concept of an anharmonic oscillator only appears in the chapter about spectroscopy.
Even then, it is not described as a theoretical model in its own right. Rather, an
experimental parameter called “anharmonicity” is added to the harmonic solution. This
parameter serves as an empirical correction term, to account for the observed spectroscopic
behavior of real molecules.
[6] It is interesting to note that the model of molecular vibration has assumed the status of a fact in this sentence. This is indicative of a positivist view of nature. In this view, models need not be associated with phenomena, but rather have an independent existence as “truths” or “facts”. Following that, the nature of molecular vibration is explained in classical terms, without justifying why these terms are still valid in quantum mechanics.

Because of the complex mathematical derivation, only one of the reviewed textbooks solves the problem rigorously. All the others simply state the formula for harmonic energy levels. Instead of associating the solution of the harmonic oscillator with observed spectroscopic phenomena, they justify the solution by analogy to another model – the particle in a box. The particle in a box is the simplest model in quantum mechanics. It has a simple analytic solution that can be derived in a few lines. This solution exhibits many fundamental quantum concepts, most notably the quantization of energy. For these reasons, it is conventionally the first model to be taught in introductory quantum mechanics courses. Many of the new concepts of quantum mechanics are introduced through this
model. It serves as a simple tool for analyzing complex problems, to a first approximation,
and so became part of the quantum mechanical language. It is also the basis for some
advanced models in statistical thermodynamics and solid state theory. Unfortunately, there
is a small problem with this model. There is no discrete physical phenomenon that can be
directly associated with this model[7]. Hence, no evidence for the validity of its prediction of
the quantization of energy is available. The particle in a box is a simple, robust and useful
model. But it is just a model, and as such is not suitable for serving as an organizer. It lacks
the concrete anchorage in the student’s existing cognitive structure, as elaborated in the
theoretical framework. When used as an organizer, the new abstract model of the harmonic
oscillator is anchored to yet another abstract model, rather than being associated with a
concrete observation.
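For reference, the “few lines” solution mentioned above is the familiar result for a particle of mass $m$ confined to a box of length $L$ with impenetrable walls:

$$\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \qquad E_n = \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}}, \qquad n = 1, 2, 3, \ldots$$

Quantization follows directly from requiring $\psi$ to vanish at both walls – the same boundary-condition argument as above, but with no observed spectrum against which the predicted $E_n$ can be tested.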
Computer technology offers an alternative. It facilitates the teaching of harmonic and
anharmonic energy levels to students with little or no mathematical background. By doing
this, many easily observed phenomena become appropriate for discussion. These
phenomena can be used as advance organizers demonstrating the concept of quantization
of energy. Even though computer technology seems more suitable for teaching the model
part of scientific theories, it offers an opportunity for a change in the observation part as
well. The ability to discuss more complex models broadens the range of relevant natural
phenomena that can be incorporated into the scientific curriculum. With this, a constructivist approach is more easily followed, and the relation between models and
phenomena can be thoroughly investigated.
Another important corollary is that the ability to teach a model no longer depends on
having a simple analytic solution for it. This can change the priorities in curriculum
decision making, concentrating on physical concepts rather than mathematical
manipulations. In the example above, the particle in a box can still be considered as a
practical tool, but it need not be treated as a cornerstone of quantum mechanics teaching.
[7] In most textbooks, an electron in a conjugated hydrocarbon is taken as an example of a particle in a box. Although this physical system bears some similarities to the model, its broad continuous spectrum doesn’t reveal any evidence for the quantization of energy.
Figure 3-10: An ultra-fast pump-probe experiment demonstrating an oscillating time-dependent molecular property. Left: A schematic diagram of the energetics of the experiment. I2 molecules in their ground electronic state are irradiated by a short laser pulse at t = 0, which induces a transition to an excited electronic state (I2*). After a short time delay, at t = Δt, the excited molecules are irradiated by a second pulse. This pulse has enough energy to ionize the excited molecule only when it is stretched out from equilibrium, but not if it is contracted. Right: Experimental result for the amount of ionized molecules as a function of the time delay (Δt) between the pump and probe pulses. The ion signal changes periodically in time, indicating that the excited I2* molecule oscillates between its contracted and stretched configurations.
Quantum Dynamics
By the end of the 19th century it became accepted that molecules undergo vibrational
motion, in which the atoms oscillate around their equilibrium positions. This model was
successful in describing heat capacities of solids and gases at high temperatures. In the
beginning of the 20th century, quantum mechanical treatment of the same model was
successful in extending the range of its validity to all temperatures, down to absolute zero. The same treatment also provided a framework for the interpretation of the discrete
spectrum of molecules. As described in the last subsection, the solution of the quantum
model gives a discrete set of wave functions, which obey the stationary Schrödinger
equation. According to quantum mechanics, these functions are stationary. This means that
if a diatomic molecule’s state is described by one of these functions, its probability
distribution does not change over time. This behavior is completely different from the
original classical model, in which the atoms change their positions in an oscillatory manner
over time. Still, it is adequate for describing the above mentioned phenomena, without
having to include any dynamics or motion into the model. Nevertheless, the terms
“vibrational energy” and “vibrational spectrum”, suggesting molecular motion, have
endured.
Only in the last decade have developments in laser technology allowed for real-time investigation of molecular motion. Femtosecond (= 10⁻¹⁵ second) pump-probe studies of
gas phase chemical reactions (Zewail, 1988) reveal new phenomena which demand a
dynamic approach. The advance organizer chosen for teaching the concept of quantum
dynamics is such a pump-probe experiment of molecular Iodine (Fischer et al, 1995). This
experiment was chosen because it deals with the same molecular system as in the previous
subsection, and so serves as an integrative organizer between stationary and dynamic
quantum theory. In this experiment, a property of the molecule is measured at different
times (Figure 3-10). The result of the measurement clearly changes over time, which
means that the state of the system is not stationary. If the ionization rate is associated with
the bond length of the molecule, the oscillating ion signal indicates that the molecule
undergoes vibrational motion.
In quantum mechanics, dynamics is generated by the time-dependent Schrödinger
equation. According to this equation, only the phase of a wave function with specific
energy (as determined by the stationary Schrödinger equation) changes over time. The
change is proportional to the wave function’s energy. Since the change is only in phase, the
absolute value of a single stationary function does not change, and its probability density
remains constant. The case is different when the state of the system is described by a
superposition of several stationary functions. Each function’s phase changes by a different
amount. Since the relative phase of superimposed waves determines the areas of
constructive and destructive interference, the relative change of phase creates a time-dependent interference pattern. The absolute value of the superposition changes over time, and so does the probability density. This is the quantum equivalent of motion.
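This argument can be made explicit for the simplest case of two superimposed stationary states. Writing (as a sketch consistent with the description above, with real coefficients, real stationary functions $\varphi_0, \varphi_1$ of energies $E_0, E_1$, and $\hbar = 1$ as before):

$$\Psi(x,t) = c_0\,\varphi_0(x)\,e^{-iE_0 t} + c_1\,\varphi_1(x)\,e^{-iE_1 t},$$

$$|\Psi(x,t)|^{2} = c_0^{2}\varphi_0^{2}(x) + c_1^{2}\varphi_1^{2}(x) + 2\,c_0 c_1\,\varphi_0(x)\,\varphi_1(x)\cos\!\big[(E_1 - E_0)\,t\big].$$

A single term gives a time-independent density, but the cross term makes the density of the superposition oscillate with period $2\pi/(E_1 - E_0)$ – the quantum vibrational motion shown in Figure 3-11.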
To help visualize this process, an interactive computer program was devised (Figure 3-11). In this program, the dynamics of a single stationary wave function or of a superposition of
several functions is shown. The stationary states, calculated for a harmonic potential, are
the same as in the previous subsection. A superposition of these stationary states exhibits
an oscillatory motion around the equilibrium distance. When the molecule is more likely to be contracted, the ion signal is expected to be low. When it is more likely to be stretched, the ion signal should be high. This behavior is validated by the experiment.
Figure 3-11: A time dependent model of an oscillating molecule. The green line denotes the
potential, with an equilibrium point at x = 0. The contour of the solid plot denotes the wave
function’s absolute value, and the fill color denotes the phase, as in Figure 3-4. Top Left: The
ground state of the oscillator. Middle Left: The first excited state, with a node (zero probability) at
the equilibrium distance, and equal probability of being contracted or stretched. Bottom Left: The
superposition of the above at t = 0, which shows greater probability of finding the molecule
contracted. Right Panel: The same superposition at t = τ/4, τ/2, and τ (where τ is the vibrational period), exhibiting an oscillatory motion from contracted to stretched and back.
Figure 3-12: Long-exposure photograph of a white pendulum bob swinging against a black
background. The picture is taken from an educational article (Nelson, 1990). This article
erroneously asserts that the quantum mechanical probability distribution reflects the motion of the
particle in the same way in which the density of the image on a long exposure photograph reflects
the motion of a macroscopic object.
Conventionally, quantum dynamics is not part of the undergraduate curriculum. The time-dependent Schrödinger equation is only briefly mentioned, and no indication of the
existence of non-stationary states is given: “In an isolated atom or molecule, the potential
energy is independent of time, and the system can exist in a stationary state… We shall
mainly be interested in the stationary states since these give the allowed energy levels”
(Levine, 1988). Failing to demonstrate any dynamic phenomena in the context of quantum
mechanics leads to incorrect interpretation of the concept of stationary states. Since
classical concepts of position and motion already exist in the student’s cognitive structure,
he tries to interpret the new quantum concepts using classical ones. In the minds of many
students, the quantum concept of a stationary probability distribution is translated into a
classical picture of an average motion (Figure 3-12). This picture is misleading, because it
is inconsistent with the wave model that is essential for the correct predictions of quantum
mechanics. Predictions based on the classical picture may be wrong, and lead to conflicts
with the correct quantum mechanical predictions. These conflicts hinder the incorporation
of quantum mechanics models into the student’s existing cognitive structure, as many of
the students find it confusing, inconsistent and illogical.
An example of such a contradiction can be found by examining Figure 3-11 (Middle Left
panel). In the quantum mechanical picture, the wave function of this state has a node at the
equilibrium point, meaning there is zero probability of finding the oscillator at this point.
According to the classical picture, the oscillator is in constant motion, alternately stretching
and contracting. Since it spends some of its time contracted and some of its time stretched
out, it must pass through the equilibrium point. This is clearly in contradiction to the
quantum mechanical picture, which excludes the oscillator from being at this point.
Students often ask (Nelson, 1990): “How do particles get across nodes?” The question
itself reveals the misconception. According to quantum mechanics, the state of the
oscillator is described by a wave function, not by position and motion. The wave function
doesn’t have to “get across” because it is present simultaneously at both sides of the
equilibrium point. Quantum dynamics further clarifies the subject. If we want the oscillator
to “get across” the equilibrium point, it should first be localized at one side in order to “get
across” to the other side. Since each stationary state is equally likely to be found on
both sides, this can only be done by superimposing two or more stationary functions, as
demonstrated in Figure 3-11 (Bottom Left panel). This superposition has no problem “getting across”, because it is not stationary, and exhibits oscillatory motion from one side to
the other. And so, the question has no meaning for a stationary state, and has a simple
answer for a superposition. It is only by forcing classical concepts on a quantum
mechanical description that the alleged paradox arises. For this reason it is important to
expose the student to the models of quantum dynamics. Having seen that, the student can
associate his prior conception of motion with the correct dynamic quantum description of
motion, instead of falsely associating it with the stationary model. Quantum dynamics
serves as an organizer for the stationary model – it helps the student to integrate seemingly
new concepts with basically similar concepts existing in his cognitive structure, and to
increase discriminability between new and existing ideas, which are essentially different
but confusingly similar.
To summarize, the motivation to add quantum dynamics to the undergraduate curriculum
is twofold. First, femtochemistry is an exciting new field of chemistry, and students should
be exposed to it as part of their general chemical education. Second, and more important,
quantum dynamics can serve as an organizer for concepts which are already accepted to be
part of the curriculum. This may change the conventional hierarchy of concepts in quantum
mechanics, and so change the considerations for curriculum decision making. This change
is facilitated by the availability of relevant modern phenomena, and by the ability to
demonstrate them and their associated models.
Chapter 4 - A Meaningful Learning Set
The previous chapter dealt with the first requirement for promoting meaningful learning,
which is the availability of a potentially meaningful content. But providing the student with
a potentially meaningful content is not enough. In order to fulfill the potential, the student
must also possess a meaningful learning set – he should manifest a disposition to relate the
new learning task in a meaningful way to what he already knows. Such a disposition can
be encouraged by a supportive learning environment. The previous chapter also
demonstrated the advantages offered by computer technology. It is clear that the question
is not whether computers should be incorporated into future teaching environments, but
how they should be incorporated. This chapter gives a possible answer to this question, by
describing a computer based learning environment that supports a meaningful learning set.
The purpose of this chapter is to demonstrate how the theoretical considerations of the
previous chapters can be reconciled with practical reality. It does not pretend to be a
description of the best way to teach introductory quantum mechanics. Rather, it is an actual
account of how this topic was delivered over two years in a case study course. From this
experience general conclusions can be inferred and key variables identified. The emphasis
is on the real-time feedback and rectification process the course underwent during its development and delivery. While the success of a course is a function of a particular time, place
and person, the feedback mechanism has general applicability in a wide range of
circumstances.
The content of this chapter is based on the personal experience of the author during the
development and delivery of the course. The author has been intimately involved with all
phases of the course, including:
1. Analysis of content and reconstruction of the course material.
2. Development of computer programs.
3. Observation of all the lectures.
4. Participation as a teaching assistant in all computer lab sessions.
5. Correcting home exercises.
6. Conducting unstructured personal interviews with 20% of the students (randomly selected). The interviews were conducted in the fourth week of the semester, and in the week before the final exam.
4.1 The Learning Environment
A new approach for teaching four basic concepts of quantum mechanics was illustrated in
the previous chapter. This is just a small example of the extensive analysis of content
carried out for the case study course “Introduction to Chemical Bonding”. The course was
completely reconstructed, using the theoretical basis and the technological approach so far
established. The reconstructed course was delivered in two consecutive academic years. In
order to exploit the advantages offered by computer technology, this technology had to be
introduced into the learning environment. A major consideration in the introduction of
computer technology to an existing course was to preserve the existing university milieu.
When integrating a new approach into an existing milieu, it is better to refrain from drastic
changes to it. Thus, the traditional lecture forum was not replaced, but rather enhanced, by
computer technology.
The Computer Lab
The first introduction of technology into the course was through interactive computer labs,
which replaced the traditional recitation sessions. To allocate more time for active
computer interaction, the traditional schedule for the course was changed. The recitation
sessions were extended from 45 minutes to 1½ hours, at the expense of 45 minutes of
lecture time, so the total time frame was conserved. The conservation of the time frame
was part of the effort to minimize global changes, which might have an effect on the existing university milieu. The computer lab took place in a classroom equipped with Silicon Graphics workstations. Three lab sessions were held each week, with 20 students attending each session. The students interacted with the computer in pairs, guided by a lab worksheet.
Each week the lab concentrated on a single interactive computer program (for example,
Figure 3-4), demonstrating a specific model in quantum mechanics. Each computer
program consisted of two parts – a simple graphic user interface (sliders, buttons and
checkboxes), and a visualization window. The students would interactively change the
model’s parameters through the user interface, and watch the result of their actions in the
visualization window. The students performed most actions by moving sliders, because
these provide a continuous control of the value of each parameter. As all calculations were
carried out in real time, this resulted in a continuous change of the visualization. Instead of
seeing only two pictures – before and after the change, the students would observe a
gradual change from the initial to the final state. When several parameters have to be
changed concurrently, this procedure isolates the effect of each parameter on the final
result.
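As an illustration of this interaction pattern (a sketch only, not the original course software), a few lines with a standard plotting library show how a slider can drive continuous, real-time recalculation; the displaced Gaussian density used here is a hypothetical stand-in for the course’s models:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

x = np.linspace(-5.0, 5.0, 500)
density = lambda x0: np.exp(-(x - x0) ** 2)   # toy probability density

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)              # leave room for the slider
line, = ax.plot(x, density(0.0))
ax.set_xlabel("x")
ax.set_ylabel("probability density")

# Dragging the slider recomputes and redraws the curve continuously,
# so the change appears gradual rather than as two before/after snapshots.
slider_ax = fig.add_axes([0.15, 0.1, 0.7, 0.04])
x0_slider = Slider(slider_ax, "x0", -3.0, 3.0, valinit=0.0)

def update(val):
    line.set_ydata(density(x0_slider.val))
    fig.canvas.draw_idle()

x0_slider.on_changed(update)
plt.show()
```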
The worksheets gave specific instructions as to which parameters should be changed and
how. Each instruction for change was followed by a set of guiding questions that drew the
students’ attention to the result of their actions. After several sets of instruction and
guiding questions, an integrative question was asked. The answer to this question was the
conclusion which the students were expected to deduce from their work. The three types of activities were clearly marked in the worksheet by distinct symbols – one for an instruction, a question mark for a guiding question, and one for a conclusion. This cycle would repeat several times in each lab. At the end of the lab, the students were assigned written exercises for homework.
A typical lab session would begin with a short presentation of the computer program by the
teaching assistant, followed by self-paced work in pairs. At the end of the lab, the teaching
assistant would go over the main conclusions of the lab. This ensured that the slower-paced
students would at least get their final results right, even if they didn’t manage to follow all
the intermediate steps. During the self-paced work, the students could call the teaching
assistant to their station, and ask for clarification or guidance.
Greater Personal Commitment
In a traditional frontal recitation session, personal communication between the teaching
assistant (TA) and the students is limited. The TA spends most of his time at the
blackboard, writing equations and explaining them. Students can ask questions, but only to a limited extent, as the TA has to personally deliver a specified amount of content in a given time. The TA can ask the students questions, but usually only a small number of students actively participate in class. The situation is totally different when the recitation session
is replaced by a computer lab. This is for two reasons:
1. More than one TA can attend each lab session.
2. Most of the content is delivered by the computer. This frees the TAs to give individual attention to students who need it.
In this course, a ratio of 3 TAs to 20 students was found to be adequate. Students needing clarification of the instructions in the worksheet could call a TA to their station. The TAs
would monitor the progress of all students, and offer guidance to students encountering
difficulties in reaching the correct conclusions. In a personal dialogue, the source of
difficulties could be traced, and an individually suitable explanation would be given. More
advanced students would try to go beyond the specific instructions in the worksheet. A TA
could encourage or discourage such attempts, according to the given circumstances and
time. The personal involvement with the students was so extensive that the TAs knew all of the students by name, which is very uncommon in traditional recitation sessions.
The Feedback Cycle
In a traditional course, it is not uncommon for the lecturer to find himself surprised at the
results of the students’ final exams. While the personal communication in a recitation
session is inadequate, during a lecture it is almost non-existent. Usually, the only feedback
mechanism is the final exam. This makes the feedback cycle a full semester long, which
means amendments to the course could be made only in the following academic year.
Having the students’ homework handed in and checked shortens the feedback cycle down
to two weeks, which is still too long for effective rectification.
The situation is different when part of the teaching takes place in a computer lab. As
described in the previous subsection, personal communication is enhanced, and so
feedback time is drastically reduced. This facilitates an effective feedback and rectification
mechanism. At the shortest time-scale, students’ difficulties can be detected and addressed individually and immediately. If the same difficulty arises for many students in
the first lab session, the lab itself could be improved for the next sessions in the same
week. If the problem is traced back to material taught during the lecture, the lecturer could
be notified and his teaching improved by the next lecture. This mechanism was utilized
during the delivery of the course “Introduction to Chemical Bonding”, in combination with
the traditional, long time-scale mechanisms of homework assignments, midterm quiz and
final exam.
An Integrated Learning Environment
The most important comment obtained from the students during the first year of delivery
was that the lab sessions seemed to be disconnected from lecture material. While the
computer lab emphasized visual representation of models and perceptual concepts, the
lecture maintained the traditional formal symbolic representation of quantum mechanics.
Most students found it very hard to relate the two. In order to make both lecture and lab
speak the same language, it became necessary to integrate computer technology into the
classroom as well.
Some unsuccessful attempts were made to do so in the first year. Overhead transparencies
of snapshots from the computer screen were printed, but they lacked the dynamic nature of
computer simulation. Videotapes of computer animation were recorded, but these lacked
interactivity, and the video projection equipment was of poor quality. These technological
drawbacks discouraged frequent use of these materials, and the lecture was still
communicated mainly in symbolic terms. But near the beginning of the second year, it
became evident that a significant change in technology opened new possibilities for
classroom integration. Up until that time, only high-end graphic workstations had the
ability to make real time calculations for three-dimensional visualizations. As computer
technology advanced, this ability became available at the PC level. At the same time, PC projection technology had advanced to a level where a high-resolution portable projector
could be purchased at an affordable price. Thus it became possible to show computer
visualizations on a large screen in class. The ability to use a PC, rather than a high-end
workstation, for classroom demonstrations also made possible further accessibility outside
the formal lab and lecture sessions. The availability of on-campus PC classrooms and students’ home PCs offered an opportunity to use the same computer programs for homework
assignments and individual review of course materials.
Following these new technological prospects, a new developmental effort commenced.
First, most of the Silicon Graphics computer lab programs were ported to the PC platform.
Second, the limitation of using only one computer program per week was relaxed, as the
presentation of computer visualizations was no longer restricted to the computer lab. And
so, many new computer simulations and visualizations were written for other concepts and
models, not previously treated in the computer lab. These programs were intended either
for classroom demonstration, or for pre-lab home exercises. The pre-lab exercise was
targeted at reviewing material that was studied in previous courses. This included
mathematical concepts (e.g. complex numbers, Gaussian distribution, spherical
coordinates, etc.) and classical mechanical concepts (e.g. rigid rotor, harmonic oscillator,
center of mass coordinates, etc.), which serve as the basis for the quantum mechanical
model presented in the lab.
Consequently, an integrated learning environment was created, as can be viewed on the
accompanying CD-ROM. An Internet browser was chosen to serve as the common
interface for all course materials. The first reason for this choice is its ability to display text,
graphics, animations and interactive simulations with a simple point-and-click user
interface. This interface is also familiar to most students, judging from the increasing
popularity of Internet surfing. The second reason is the accessibility offered by the
Internet, for students to use the course materials outside of the formal sessions. In this
manner, technology bridged the gap between formal symbolic concepts and perceptual
ones, by enabling the same visual language to be used in lectures, labs and homework.
4.2 Student Performance
An obvious question at this point would be: “Does the new learning environment improve
student performance, as compared with traditional teaching?” From direct observation during the labs, and from the personal interviews conducted, it became evident that for many students the use of computers posed an obstacle to learning, rather than a support. For these students, the main concern was adjusting to the new learning
environment, rather than utilizing it for learning purposes. This behavior is not unique to
this course. Edmondson and Novak (1993) report that “the introduction of constructivist
learning tools… is resisted by the students and tends to have little impact on their learning
approaches or epistemological views”. They also cite other studies, which illustrate the
difficulty of moving students toward more meaningful learning approaches through
isolated efforts in a single course. One difficulty is that often there is a decrease in
individual performance that accompanies the implementation of a new skill or program,
known as a “Performance Dip”. Over time, with adequate support and as an integral part of
a course or program, this decline will reverse, and performance will climb to a level that is
usually higher than the original level. Another difficulty is that elementary science courses
tend to be presented with a strongly positivistic orientation and course evaluation
frequently requires extensive verbatim recall of information. This is in contrast with the constructivist approach advocated by this work, and its striving for meaningful learning.
For these reasons, a comparative study of student performance is pointless at this stage. Only when this work’s approach becomes extensively employed at all levels of the curriculum might such a quantitative study be in place. Thus, the following subsections are not to be taken as a measurement of the success of this approach, but as key variables that should be addressed in future research.
Active Learning – Guided vs. Open Ended
One of the pedagogical features of the computer lab is the active role of the student in the
learning process. Computer labs were designed for self-paced work by the use of
worksheets. The structure of the assignments, described previously, led gradually from
technical manipulations, through guided observation, to a summarizing conclusion. This
format allows for a wide spectrum of learning styles. It can be designed to give only
general directions and loose guidance, or supply step by step manipulations and structured
questions.
In the beginning of the first year, the format of the worksheet tended toward open-ended
questions and directions. This was due to the author’s optimistic view of the students’ motivation for self-learning. This view proved to be naive, as most students didn’t manage to finish the labs in a reasonable time. During the interviews, some students expressed their
preference for traditional laboratory work or recitation sessions, in which the procedures and results are known before they actually have to perform them. They wanted the questions to be demonstrated and explained by a TA, and then have similar exercises given
as homework. This shows a preference for rote technical mastery over meaningful
learning.
As a compromise, the amount of material covered in a single lab was reduced, and step by
step directions were given. This approach had its drawbacks as well. Students could breeze
through the guided activities just by answering the trivial questions, but without thinking
about the meaning of their answers. This would give them a false sense of success and
understanding. Usually they would get stuck at the concluding questions, and ask for a
TA’s assistance. Then, the entire cycle of instruction, guidance and conclusion would have
to be repeated through personal instruction by the TA.
Symbolic vs. Visual Representation
During the interviews, the students were questioned about their preferred representation for mathematical models. Some students showed enthusiasm for the visual
computer representation, claiming it gives them intuitive understanding of the
mathematical model. Others regarded it as “a plaything”, which doesn’t help at all. These
students preferred the symbolic mathematical representation. Most students showed a
combination of the two views. They enjoyed the visual representation, and admitted it
helps them to understand the basic concepts. On the other hand, they weren’t ready to rely
only on this representation. Some said that they can only believe a rigorous mathematical
proof, and not a hand-waving explanation based on visualizations. Others were worried
about their ability to perform the visual manipulations without the help of the computer,
particularly in the final exam. They would rather have a concrete symbolic algorithm for solving a
problem.
An important outcome of the interviews was that every visual model became explicitly connected with its symbolic form. This was achieved by displaying the mathematical formula for the model in the simulation window.
Team vs. Individual Work
The students were instructed to work in pairs. Occasionally, some students would work
alone, for instance when their regular partner was missing from class. It was observed that
students working alone always lagged behind the rest of the group. Teaming such students in pairs always resulted in their catching up and keeping pace with the group.
First of all, the discussion between two students usually helped them overcome difficult
questions in the lab. More than that, when a pair of students reached a question to which
both didn’t have an answer, their common inability to solve it legitimized a call for
assistance. Students working alone were more reluctant to call for a TA’s assistance,
perhaps not knowing if their question was “good” enough. They would rather stare at the
computer for a long time, until addressed by a TA who spotted their need.
However, some pairs didn’t have a good working relationship, and one of them would do all the work while the other passively watched. This conflicts with the desire for self-paced active learning by all students.
Chapter 5 – Conclusion
5.1 Arts of Eclectic – Revisited
In a series of essays on curriculum development, Schwab (1978a-c) claims that no single
theory can be comprehensive enough to encompass the complex field from which
educational problems arise. He suggests an eclectic approach, which combines the use of
several theories, each having a partial view of the subject and a restricted domain of
validity, together with practical considerations and reference to real students. First, the
theories are used as bodies of knowledge. They provide a kind of shorthand for some
phases of deliberation and free the deliberator from the necessity of obtaining firsthand
information on the subject under discussion. Second, the terms and distinctions, which a
theory uses for theoretical purposes, provide a framework for classification and
categorization of particular situations and facts. They reveal similarities among subjects
and disclose their variance. This framework is then used in conjunction with practical
considerations for the discussion and concrete solution of curricular questions, which arise
in actual situations of particular subject matter and individual students.
This work investigated possible ways to integrate computer technology into the university
curriculum, in order to promote meaningful learning of scientific theories. To achieve this,
it followed the eclectic approach. It started by synthesizing the cognitive theory of
meaningful learning and epistemological constructivism into the theoretical framework of
Chapter 2. Then, this framework was applied to the subject field of introductory quantum
mechanics in Chapter 3. Finally, a mechanism for adjustment of the predetermined
curriculum to specific needs of individual students was described in Chapter 4. The major
practical issue considered is the availability of an innovative educational technology – that
of interactive computer simulation and visualization.
Cognitive Theory of Meaningful Learning
The first step in the eclectic use of a theory is to define its partial view and domain of
validity. This theory focuses on the cognitive structure of individual students, and the
process of incorporating new knowledge into the existing structure: “If we had to reduce
all of educational psychology to just one principle, we would say this: The most important
single factor influencing learning is what the learner already knows. Ascertain this and
teach him accordingly” (Ausubel et al, 1978). It disregards other parameters such as
personal motivation and ability to practice meaningful learning techniques, individual
learning preferences and social interaction. Examples of its validity are mostly taken from
the humanities, social studies and biology, but not from the physical sciences.
The second step is to identify key terms that will be used for classification and
categorization of particular situations and facts. The theory states that in order to initiate
meaningful learning, two conditions must be met: the curriculum must supply a potentially
meaningful content and the student must possess a meaningful learning set. The terms
“potentially meaningful content” and “meaningful learning set” were used to divide the
practical work into two separate tasks:
1. The task of constructing a potentially meaningful content was associated with curricular development and reconstruction, as was discussed in Chapter 3.
2. The task of promoting a meaningful learning set was associated with the learning environment, as was discussed in Chapter 4.
Another key term adopted from this theory is “advance organizer” in its two versions:
“integrative reconciliation” and “progressive differentiation”. These terms served as
criteria for selecting natural phenomena as advance organizers in Section 3.2. The
phenomena were chosen for their ability to integrate seemingly new concepts with
basically similar concepts existing in cognitive structure, and to increase discriminability
between new and existing ideas, which are essentially different but confusingly similar.
Epistemological Constructivism
The incorporation of a theory from the philosophy of science into the theoretical
framework increased the domain of validity of the cognitive theory. The epistemological
theory focuses on scientific theories, which are the knowledge base from which curricular
materials are selected for teaching. This theory allows for the examination of specific
knowledge fields, and the meanings associated with this knowledge. This work focused on
content from the physical sciences, especially content related to university level teaching
of natural phenomena described by mathematical models. For this domain of university
science teaching, the appropriate advance organizer is not an abstract inclusive principle,
as implied by the cognitive theory of meaningful learning, but rather a concrete
observation, as elaborated in Section 2.3.
Two important terms adopted from this theory are “model” and “phenomena”. The
distinction between the two is the basis for the curricular analysis carried out in Section
3.2.
Content Selection
The practical capabilities of visualization and simulation offered by computer technology
were considered in conjunction with the theoretical framework, and guided the process of
curricular development described in Section 3.2. In this case, theory served as a knowledge base, according to which meaningful content was defined. The students were referenced only within this theoretical frame, so it is possible that real students would find the
content selected less meaningful than other possible selections. On the other hand, it is
impossible to field-test each individual selection on its own. Following an explicit
theoretical framework in curricular selection produces a coherent theme for the entire
curriculum, which gives the whole an added value over its individual components.
The Learning Environment
The practical capabilities of interactive learning, multimedia presentation and distant
communication offered by computer technology were considered in conjunction with the
theoretical framework, to produce an integrated learning environment, described in Section
4.1. This learning environment supported a feedback and rectification mechanism, which made it possible to reference the students in a practical situation. Areas not covered by the
theoretical framework, such as personal motivation and ability to practice meaningful
learning techniques, individual learning preferences and social interaction, were addressed
in a practical manner in Section 4.2. This practical mechanism gave specific solutions for
individual problems, but did not constitute a definite solution, because different instances
of the problem would require different solutions. It did, however, identify key variables for
future reference and accommodation. These variables can be dealt with through further theoretical analysis, or left in the domain of the practical.
The Role of the Computer
In both practical aspects it was demonstrated that a computerized learning environment is
more than a mediator for the delivery of a finalized predetermined curriculum. In the
context of content selection, the ability to visualize and simulate complex models allowed
for the inclusion of new models into an existing curriculum. This, in turn, broadened the
range of appropriate natural phenomena that could be discussed and used as advance
organizers. In the context of the learning environment, the computerized learning
environment supported an effective feedback and rectification mechanism. This
mechanism allowed for an interdependent development and delivery process, in which the
curriculum was changed in real time to accommodate the individual performance and ability of the students.
5.2 Generic Models
This work focused on a specific case study. The content selection process was carried out
for the subject field of introductory quantum mechanics, and the learning environment was
constructed for the course “Introduction to Chemical Bonding”. In Section 1.2, however,
the conclusions of this work were proposed to be valid for a broad range of subject fields
and courses. The next two subsections define a more general view of Chapters 3 and 4.
A Potentially Meaningful Content
The building of a potentially meaningful curriculum for teaching basic concepts of
quantum mechanics was demonstrated in Chapter 3. These concepts were analyzed based
on the theoretical framework of Chapter 2, in view of the capabilities of computer-aided
instruction. This analysis directed a curricular selection process, in which pedagogic
materials were extracted from the scientific knowledge base. This process can be
generalized to define a generic model of curriculum selection, in university teaching of
natural phenomena described by mathematical models:
1. The teaching of models should be integrated with the teaching of natural phenomena.
2. For each concept taught, an illustrative natural phenomenon should be introduced first, to serve as an advance organizer. To qualify as an advance organizer, this phenomenon should:
   a) Provide ideational anchorage for the new concept in terms that are already familiar to the learner. The phenomenon should be maximally clear in its own right – preferably a direct observation that can be described in perceptual terms rather than abstract ones.
   b) Integrate seemingly new concepts with basically similar concepts existing in cognitive structure.
   c) Increase discriminability between the new concept and existing ones, which are essentially different but confusingly similar.
3. If the selected phenomenon is not adequately described by traditionally taught models, new models might be introduced into the curriculum. These models can be mediated by computer visualization and simulation. Models that should be considered for this are:
   a) Models that use complex symbolic mathematics, but have simple visualizations.
   b) Dynamic (time-dependent) models.
   c) Three-dimensional models.
   d) Numerically solvable models.
4. The inclusion of such models should be considered if they:
   a) Describe modern phenomena, which constitute a part of contemporary scientific methods.
   b) Illustrate fundamental concepts of the subject matter better than traditional models.
5. After the inclusion of new models, the significance of traditional models should be re-evaluated. If the new models serve the same purpose better, the curricular hierarchy should be changed accordingly.
This model is intended for reconstructing existing curricula, in a theory-directed manner and in view of practical considerations. It emphasizes the relation between mathematical
models and natural phenomena, and so promotes the meaningful learning of both.
An Integrated Learning Environment
In Chapter 4, a practical framework for the incorporation of potentially meaningful content
into an existing university milieu was demonstrated. The curricular selection process
described is deeply involved with the capabilities of computer-aided instruction. This dependency should be reflected in the learning environment, by making computer technology an integral part of it. Computer technology is used at all stages of instruction:
1. Use of a computer lab as the main mode of student-computer interaction, to:
   a) Promote students’ active learning.
   b) Constitute a real-time feedback and rectification mechanism.
2. Use of a computer for in-class demonstrations, to establish a common visual language between lecture and lab.
3. Use of an Internet-based interface for course materials, to:
   a) Provide the means to introduce computer-related assignments for homework, especially in order to prepare students for the computer lab.
   b) Allow students to review all types of course materials (lecture slides, lecture notes, computer animations and interactive programs) through a single, easy to use and familiar user interface.
This defines an integrated learning environment for a single course. This integrated
learning environment supports a meaningful learning set – it helps the students relate
things they learn in class, lab and home. However, students are often reluctant to adapt
themselves to a new learning environment. Therefore, an effective learning environment
should extend throughout the curriculum. This requires concurrent change of several
courses at different stages of teaching, especially at the freshman level. Through this
systemic change, university courses would supply a potentially meaningful content, and
students would possess a meaningful learning set.
Bibliography
Atkins, P.W. (1995). Physical Chemistry, 5th Edition. Oxford, UK: Oxford University
Press.
Ausubel, D.P., Novak, J.D., & Hanesian, H. (1978). Educational Psychology – A Cognitive
View (2nd Edition). New York: Werbel & Peck.
Barrow, G.M. (1988). Physical Chemistry, 5th Edition. New York: McGraw-Hill Book
Company.
Birk, J.P. (1994). Chemistry. Boston, Massachusetts: Houghton Mifflin Company.
Castellan, G.W. (1983). Physical Chemistry, 3rd Edition. Menlo-Park, California:
Benjamin/Cummings Publishing Company.
Driver, R. (1988). Theory and practice II: a constructivist approach to curriculum
development. In Fensham, P. (Editor) Development and Dilemmas in Science Education.
London: Falmer, 133-149.
Edmondson, K.M., & Novak, J.D. (1993). The interplay of scientific epistemological
views, learning strategies, and attitudes of college students. Journal of Research in Science
Teaching, 30, 547-559.
Feynman, R.P. (1985). QED: The Strange Theory of Light and Matter. Princeton, New
Jersey: Princeton University Press.
Fischer, I., Villeneuve, D.M., Vrakking, M.J.J., & Stolow, A. (1995). Femtosecond wave-packet dynamics studied by time-resolved zero-kinetic energy photoelectron spectroscopy.
Journal of Chemical Physics, 102, 5566-5569.
Herzberg, G. (1950). Molecular Spectra and Molecular Structure, 2nd Edition. New York:
Van Nostrand Company.
Hestenes, D. (1992). Modeling games in the Newtonian world. American Journal of
Physics, 60, 732-748.
Hodson, D. (1985). Philosophy of science, science and science education. Studies in
Science Education, 12, 25-27.
Laws, P.M. (1996). Undergraduate science education: a review of research. Studies in
Science Education, 28, 1-85.
Levine, I.N. (1988). Physical Chemistry, 3rd Edition. New York: McGraw-Hill Book
Company.
Nelson, P.G. (1990). How do electrons get across nodes? A problem in the interpretation of
the quantum theory. Journal of Chemical Education, 67, 643-646.
Redish, E.F., & Steinberg, R.N. (1997). A New Model Course in Quantum Mechanics for
Scientists and Engineers. http://www.physics.umd.edu/rgroups/ripe/perg/qm/nsf.htm
Reynolds, G.T. & Spartalian, K. (1969). Interference effects produced by single photons. Il
Nuovo Cimento, 61B, 355-364.
Schwab, J.J. (1962). The teaching of science as enquiry. In Schwab, J.J. & Brandwein, P.F.
(Editors) The Teaching of Science. Cambridge, Massachusetts: Harvard University Press, 3-103.
Schwab, J.J. (1978a). The practical: a language for curriculum. In Schwab, J.J. Science,
Curriculum, and Liberal Education. Chicago, Illinois: The University of Chicago Press,
287-321.
Schwab, J.J. (1978b). The practical: arts of eclectic. In Schwab, J.J. Science, Curriculum,
and Liberal Education. Chicago, Illinois: The University of Chicago Press, 322-364.
Schwab, J.J. (1978c). The practical: translation into curriculum. In Schwab, J.J. Science,
Curriculum, and Liberal Education. Chicago, Illinois: The University of Chicago Press,
365-383.
Schwab, J.J. (1978d). Education and the structure of the disciplines. In Schwab, J.J.
Science, Curriculum, and Liberal Education. Chicago, Illinois: The University of Chicago
Press, 229-274.
Schwab, J.J. (1978e). Scientific knowledge and liberal education. In Schwab, J.J. Science,
Curriculum, and Liberal Education. Chicago, Illinois: The University of Chicago Press,
68-104.
Smit, J.A.A., & Finegold, M. (1995). Models in physics: perceptions held by final-year
prospective physical science teachers studying at South African universities. International
Journal of Science Education, 17, 621-634.
Staver, J.R. (1998). Constructivism: Sound theory for explicating the practice of science
and science teaching. Journal of Research in Science Teaching, 35, 501-520.
Trigg, G.L. (1971). Wave Properties of Matter. In Trigg, G.L. Crucial Experiments in
Modern Physics. New York: Van Nostrand – Reinhold, 105-115.
Whitnell, R.M., Fernandes, E.A., Almassizadeh, F., Love, J.J.C., Dugan, B.M., Sawrey,
B.A. & Wilson, K.R. (1994). Multimedia chemistry lectures, Journal of Chemical
Education, 71, 721-725. http://www-wilson.ucsd.edu
Zewail, A.H. (1988). Laser femtochemistry. Science, 242, 1645-1653.
-69-
‫לימוד אוניברסיטאי של תופעות טבע‬
‫הניתנות לתיאור ע"י מודלים מתמטיים‬
‫חיבור לשם קבלת תואר‬
‫דוקטור לפילוסופיה‬
‫מאת‬
‫גיא אשכנזי‬
‫הוגש לסינט האוניברסיטה העברית בירושלים‬
‫אב תשנ"ט‪ ,‬יולי ‪9111‬‬
‫‪-‬א‪-‬‬
‫‪-‬ב‪-‬‬
‫עבודה זו נעשתה בהדרכתם של‬
‫פרופ' נאוה בן‪-‬צבי ופרופ' רוני קוזלוב‬
‫‪-‬ג‪-‬‬
‫‪-‬ד‪-‬‬
‫תוכן העניינים‬
‫תקציר ‪VII ........................................................................................................................................‬‬
‫פרק א' ‪-‬‬
‫מבוא ‪1 ..............................................................................................................................‬‬
‫אומנות הפיתוח הקוריקולרי ‪1 ..............................................................................................................‬‬
‫אומנויות אקלקטיות ‪1 .........................................................................................................................‬‬
‫מוטיבציה טכנולוגית ‪2 ........................................................................................................................‬‬
‫מטרה ‪2 .............................................................................................................................................‬‬
‫מקרה הבוחן ‪3 ......................................................................................................................................‬‬
‫הגדרת הבעיה ‪3 .................................................................................................................................‬‬
‫מוטיבציה טכנולוגית ‪3 ........................................................................................................................‬‬
‫מטרה ‪4 .............................................................................................................................................‬‬
‫תחום תקפות המודל ‪4 .........................................................................................................................‬‬
‫ראשי הפרקים של התיזה‪5 ....................................................................................................................‬‬
‫פרק ב' ‪ -‬מסגרת‬
‫תיאורטית ‪7 ...........................................................................................................‬‬
‫למידה משמעותית ‪7 .............................................................................................................................‬‬
‫למידה משמעותית כנגד למידת שינון‪8 ...................................................................................................‬‬
‫שילוב כנגד הפרדה ‪9 ..........................................................................................................................‬‬
‫יישום של למידה משמעותית‪10 ............................................................................................................‬‬
‫מארגנים מוקדמים ‪11 ..........................................................................................................................‬‬
‫אפיסטמולוגיה ‪12 ................................................................................................................................‬‬
‫פוזיטיביזם כנגד קונסטרוקטיביזם ‪13 .....................................................................................................‬‬
‫אפיסטמולוגיה ואסטרטגיות למידה‪13 ...................................................................................................‬‬
‫מבנה תחום התוכן במדעים הפיזיקליים ‪14 ............................................................................................‬‬
‫מודלים ותופעות טבע ‪15 .....................................................................................................................‬‬
‫מציאת המשמעות במבנה ‪19 ................................................................................................................‬‬
‫תופעות טבע כמארגנים מוקדמים‪22 ......................................................................................................‬‬
CHAPTER 3 - MEANINGFULLY LEARNABLE CONTENT ........................................... 25
WHAT DO WE TEACH? .................................................................... 25
Curricular Change .................................................................... 26
Computing as an Opportunity for Curricular Change .................................... 28
Objective ............................................................................ 29
APPLICATION TO BASIC QUANTUM MECHANICS ............................................... 30
Wave-Particle Duality ................................................................ 31
The de Broglie Relation .............................................................. 35
Quantization of Energy ............................................................... 41
Quantum Dynamics ..................................................................... 47
CHAPTER 4 - DISPOSITION TOWARD MEANINGFUL LEARNING ................................... 53
THE LEARNING ENVIRONMENT ............................................................. 54
The Computer Laboratory .............................................................. 54
Increasing Personal Commitment ....................................................... 55
The Feedback Loop .................................................................... 56
A Generalized Learning Environment ................................................... 56
STUDENT PERFORMANCE .................................................................. 58
Active Learning - Guidance vs. Openness .............................................. 58
Symbolic vs. Visual Representation ................................................... 59
Individual Work vs. Teamwork ......................................................... 60
CHAPTER 5 - SUMMARY .................................................................. 61
ARTS OF ECLECTIC ..................................................................... 61
The Cognitive Theory of Meaningful Learning .......................................... 61
Epistemological Constructivism ....................................................... 62
The Choice of Content ................................................................ 63
The Learning Environment ............................................................. 63
The Role of the Computer ............................................................. 63
GENERIC MODELS ....................................................................... 64
Meaningfully Learnable Content ....................................................... 64
A Unified Learning Environment ....................................................... 65
BIBLIOGRAPHY ......................................................................... 67
HEBREW ABSTRACT ....................................................................... ז
Abstract
This work deals with the integration of a new learning technology, the computer, into the university curriculum. The domain found most suitable for this integration is the study of natural phenomena that can be described by mathematical models.
This work makes three main points about curriculum development:
1. The development process should be explicitly grounded in a theoretical framework.
2. The learning environment is a decisive factor in determining the content.
3. The development process should be interleaved with practical implementation in teaching, accompanied by a system of feedback and correction.
The first part of the work brings together learning theories from the behavioral sciences and epistemological approaches from the philosophy of science. The emphasis is on the connection between constructivist epistemology and meaningful learning. The theoretical contribution of the work is a framework that unifies these two theories. Within this framework, every scientific theory is divided in two: a collection of natural phenomena on one side, and the models that correspond to them on the other. Neither component has meaning on its own; each draws its meaning from the structure that unites the two. To bring about meaningful learning, it is therefore important to present the entire structure and the relations within it. To this end, the natural phenomena serve as an advance organizer for the study of a scientific theory, so that the abstract models are learned in the context of the concrete natural phenomena.
The second part deals with ways of integrating computing technologies into university teaching. The emphasis here is that a new teaching technology can change not only how content is taught, but can also have a decisive influence on what is taught. The contribution in this area is a re-examination of a problematic domain of university teaching, quantum mechanics, and its reconstruction according to the theoretical principles and in light of the practical capabilities of computing technology.
The first two parts treat the change in teaching content theoretically. The feasibility of applying their conclusions to a specific course was tested in a case study, described in the third part. The development process was carried out in tandem with its implementation in the field, with a feedback and correction loop operating between implementation and development. The process involved a comprehensive revision of the entire course, with an emphasis on full integration of computing capabilities into the lectures, the recitation sessions and the homework. In this setting the computer served not only as a means of delivering new content, but also constituted a learning environment that supports active learning. Even so, the customary forums of lecture and recitation were not abandoned; rather, the use of the computer opened additional channels of communication that contributed to teacher-student interaction.
Taken together, the three parts define a course of action for integrating computing technology into existing courses. This course of action can be generalized into a generic model of curriculum development for the university teaching of natural phenomena that can be described by mathematical models:
1. The study of models should be integrated with the study of natural phenomena.
2. Each concept should be preceded by a natural phenomenon that exemplifies it and serves as an advance organizer for the study of the topic.
3. If the natural phenomenon is not adequately described by the traditionally taught models, the introduction of new models into the curriculum should be considered. Computer simulations can be used to overcome the former inability to present these models (see the sketch after this list).
4. New models deserve consideration because they can serve as the basis for advanced scientific methods, or can demonstrate basic concepts of the content domain more clearly than the traditional models.
5. Once the new models have been incorporated, the importance of the traditional models should be reconsidered. Where the new models are better suited to the goal, the conceptual hierarchy of the curriculum should change accordingly.
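To make point 3 concrete, here is a minimal sketch, not taken from the thesis, of the kind of computer simulation that can present an otherwise hard-to-teach model: the quantum dynamics of a free Gaussian wave packet, propagated on a periodic grid by a split-operator (FFT) scheme. Atomic units and all parameter values are illustrative assumptions.

# A minimal sketch (not code from the thesis): free-particle quantum
# dynamics on a periodic grid. Atomic units (hbar = m = 1) are assumed,
# and all parameters below are illustrative.
import numpy as np

N, L = 512, 100.0                                  # grid points, box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)  # position grid
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)         # momentum grid

x0, k0, sigma = -20.0, 2.0, 2.0                    # initial center, momentum, width
psi = np.exp(-((x - x0) ** 2) / (4 * sigma ** 2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N)) # normalize on the grid

dt, steps = 0.05, 200
kinetic_phase = np.exp(-1j * (k ** 2) / 2 * dt)    # free propagator in k-space
for _ in range(steps):
    psi = np.fft.ifft(kinetic_phase * np.fft.fft(psi))

# The packet center should move by k0 * dt * steps = 20 units, making the
# link between momentum and motion visible rather than purely symbolic.
center = np.sum(x * np.abs(psi) ** 2) * (L / N)
print(f"packet center: {center:.2f} (expected near {x0 + k0 * dt * steps:.2f})")

A demonstration of this kind lets students watch the time evolution of the packet, rather than only manipulating its equations.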
This model is intended for the reconstruction of existing courses in a manner guided by theory and informed by practical considerations. It emphasizes the relation between the mathematical models and the natural phenomena, and thereby supports meaningful learning of both parts of a scientific theory. The curriculum development process described here is a direct outgrowth of the capabilities of computer-assisted learning. This dependence should be reflected in the learning environment, by making computing technology an integral part of it. From this, a model can be defined for the use of computing technology at every stage of teaching:
1. Use a computer laboratory as the principal mode of interaction between student and computer, in order to:
(I) enable independent learning by the students;
(II) provide a real-time system of feedback and correction.
2. Use the computer for demonstrations in the lecture hall, in order to create a shared visual language between the lecture and the laboratory.
3. Use an Internet-based user interface that links all the course learning materials (a sketch follows this list), in order to:
(I) make it possible to assign computer-based homework;
(II) give students access to all the course materials for review through a single, simple and familiar user interface.
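As an illustration of point 3 only, and not of the interface actually built for the course, the following sketch generates a single HTML index page linking the course materials; all file names and section titles are hypothetical.

# A purely illustrative sketch (not the thesis' actual interface):
# generate one simple HTML index page linking hypothetical course materials.
from pathlib import Path

materials = {                       # hypothetical titles and paths
    "Lecture demonstrations": "demos/index.html",
    "Laboratory worksheets": "lab/worksheets.html",
    "Homework assignments": "homework/index.html",
}

links = "\n".join(
    f'<li><a href="{href}">{title}</a></li>' for title, href in materials.items()
)
page = f"""<html>
<head><title>Course Materials</title></head>
<body>
<h1>Course Materials</h1>
<ul>
{links}
</ul>
</body>
</html>"""

Path("index.html").write_text(page)  # one familiar entry point for students

A static page of this kind is enough to give students a single, familiar entry point to every part of the course.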