The Role of Search Strategies and Feedback on a Computer-Based Collaborative Problem
Solving Task
A Proposal Presented to the
Faculty of the Graduate School
University of Southern California
San-hui (Sabrina) Chuang
University of Southern California
Committee:
Dr. Harold O’Neil, Jr. (Chair)
Dr. Robert Baker
Dr. Dennis Hocevar
Dr. Edward Kazlauskas
Dr. Audrey Li (Outside Member)
2677 Orchard Ave. #1
Los Angeles, CA 90007
(323) 731-8344
sanhuich@usc.edu
Table of Contents

ABSTRACT ................................................................. 4

CHAPTER I: INTRODUCTION .................................................. 6
    Background of the Problem ............................................ 6
    Purpose of the Study ................................................. 8
    Significance of the Study ............................................ 10

CHAPTER II: LITERATURE REVIEW ............................................ 12
    Relevant Studies ..................................................... 12
    Collaborative Problem Solving ........................................ 14
        Assessment of Collaborative Learning Processes ................... 15
        Assessment of Problem Solving .................................... 18
            Measurement of Content Understanding ......................... 18
            Measurement of Problem Solving Strategies .................... 21
            Measurement of Self-Regulation ............................... 24
        Summary .......................................................... 24
    Information Seeking .................................................. 26
        Information Seeking in Education ................................. 26
            The Information Age and its Impact ........................... 26
            Information Seeking and Help Seeking ......................... 28
            Curriculum Change for Tomorrow's Workforce Readiness ......... 30
        Electronic Information Seeking/Searching ......................... 32
            The Information Age and Computer Technology .................. 32
            Search Strategies on the World Wide Web ...................... 33
            Technical Considerations in Electronic Information ........... 38
        Summary .......................................................... 39
    Feedback ............................................................. 40
        Definition and Feedback Type ..................................... 40
        Feedback Characteristics ......................................... 43
            The Amount of Information .................................... 43
            Timing of Feedback ........................................... 46
            Representation of Feedback ................................... 49
        Summary .......................................................... 52
    Summary of the Literature ............................................ 53

CHAPTER III: METHODOLOGY ................................................. 59
    Research Hypotheses .................................................. 59
    Research Design ...................................................... 59
    Pilot Study .......................................................... 59
        Method of the Pilot Study ........................................ 62
            Participants ................................................. 62
            Networked Knowledge Mapping System ........................... 62
            Simulated World Wide Web Environment ......................... 67
        Measures ......................................................... 71
            Group Outcomes Measures ...................................... 71
            Individual and Group Teamwork Process Measures ............... 72
            Relevant Information Found ................................... 72
            Information Seeking and Feedback Behavior Measures ........... 73
        Data Analysis .................................................... 74
    The Main Study ....................................................... 74
        Method ........................................................... 74
            Participants ................................................. 74
        Measure .......................................................... 74
        Procedure ........................................................ 74
            Collaborative Training Task .................................. 75
            Boolean Search Strategies Training Section ................... 75
        Data Analysis .................................................... 77

REFERENCES ............................................................... 80
Appendix A: Self-Regulation Questionnaire ................................ 92
ABSTRACT
Collaborative problem solving and cooperation skills are considered necessary skills for
success in today's world. Cooperative learning refers to learning environments in which small
groups of students work together to achieve a common goal, and problem solving is “cognitive
processing directed at achieving a goal when no solution method is obvious to the problem
solver” (Mayer & Wittrock, 1996, p.47). Thus, collaborative problem solving is defined as
problem solving activities that involve interactions among a group of individuals.
Hsieh (2001) successfully improved a computer program to evaluate student collaborative
problem solving and team processes on a computer-based knowledge mapping group task with a
simulated Web space as an information source. The program also provides feedback to the
participants. The feedback content tells the collaborative group how much each concept in their map needs to be improved, sorting the concepts into three feedback categories: concepts that need a lot of improvement, concepts that need some improvement, and concepts that need a little improvement. Even though
her feedback provided participants a direction as to “what” area to improve for search and task
performance, her feedback did not provide practical tips on “how” to improve the performance.
In addition, in her study, “search” was significantly negatively related to performance.
This proposal argues that by providing search strategies training and feedback, students
will become more efficient in locating the information needed. Therefore, this study seeks to
demonstrate that with search strategies training and feedback, students should perform better, in general, than those in Hsieh’s (2001) study.
There are three research hypotheses in this study. Hypothesis 1 is that students with task-specific/dependent adapted knowledge of response feedback will be more likely to use decision-making and leadership messages in their communication than students with task-general/independent adapted knowledge of response feedback. Hypothesis 2 is that students will
perform better in the concept mapping task by receiving task-specific/dependent adapted
knowledge of response feedback than by receiving task-general/independent adapted knowledge
of response feedback. The last hypothesis is that information seeking using search strategies
such as Boolean operators will have positive effects on problem solving strategies and group
outcome performance on the concept map.
The research design is an experimental one. Students will be randomly assigned into two
person groups as either searcher or leader. Then the groups will be randomly assigned into two
feedback treatment groups (task specific versus task general). The dissertation will consist of a
pilot study and a main study. There are two purposes of the pilot study. First, it will be used to
assess the functionality of the computer system with the newly implemented feedback treatment.
Second, the pilot study will be used to assess if the revised predefined messages will permit
students (leaders and searchers) in each group to construct the knowledge map collaboratively
after deleting the messages of low reliability. The pilot study will be done during October while
the data collection of the main study will be done in December. In general, descriptive statistics
(e.g., means, standard deviations) will be used to estimate all measures. In addition, MANOVA
will be used to examine the relationships between outcome and process variables (i.e., variables
of teamwork process and information seeking process with concept maps).
CHAPTER I
INTRODUCTION
Background of the Problem
People in the 21st century face inevitable and rapid changes in every aspect of their lives.
Traditional education systems do not sufficiently address the needs of our students in this ever-changing society. Whereas teachers were once viewed as knowledge transmitters, in this new information age it is not possible to transmit enough updated knowledge to students for solving
the problems they face throughout their lifetimes. In this era where change is the only constant,
the greatest legacy education can provide for students is a will to learn and to continue learning
as personal circumstances change throughout life (Covington, 1998).
In view of the changing needs in the workforce, the National Center for Research on
Evaluation, Standards, and Student Testing (CRESST) examined five studies on workforce
readiness skills. All five studies identified higher order thinking, interpersonal and teamwork
skills, and problem solving as the necessary generic skills needed for success in today's world
(O’Neil, Allred, & Baker, 1997). Among these skills, graduates and employees rate
thinking, decision making, communications skills, teamwork and cooperation skills as the most
important ones (Sinclair, 1997).
Teachers and professors at all educational levels have recognized the changes in the
requisite work force skills and have tried their best to prepare their students. They incorporate
problem solving and cooperative learning in daily instruction. They encourage students to
develop higher order thinking skills through problem solving, and they hope that through
constant practice students will harness cooperative skills. At the end, they hope to stimulate
students’ motivation in learning during school and self-learning after graduation (Shapiro, 1999).
Recently, many educational assessment programs have also used collaborative small-group tasks, in which students work together to solve problems or to accomplish projects, to evaluate learning results (Webb, Nemer, & Chizhik, 1998). In reality, when students enter the
work force, they inevitably have to work in groups. Empirically, collaborative problem solving
has been shown in educational research to enhance students’ cognitive development (Zhang,
1998; Webb et al., 1998).
O’Neil (1999) defines problem solving as consisting of three facets: content understanding, domain-specific problem-solving strategies, and self-regulation. A good problem solver understands the content well (content knowledge), possesses specific intellectual skills
(problem-solving strategies), is able to plan use of her resources and skills and, during the
process, monitors her own progress toward the end goal of solving the problem (self-regulation).
Content understanding can be evaluated using concept maps, which are graphical
representations consisting of terms (concepts) and links (interrelationships between concepts). A concept map is a useful tool for understanding a complex content area, especially in science learning (Hurwitz
& Niemi, 1999). Self-regulation, which includes planning, self-checking, self-efficacy, and
effort, can be measured by questionnaires (O’Neil, 1999). Assessment of problem solving
strategies is more problematic. Most traditional assessment relies on questionnaires, self-reporting, interviews, or natural observations. However, those methods are not able to capture the
essence of the problem solving process (Chung, O’Neil, & Herl, 1999). Recently, with the
emergence of computer technology, researchers have used computer-simulation as an assessment
tool for problem solving skills (e.g., Herl, O’Neil, Chung, & Schacter, 1999; Hsieh 2001;
Schacter, Herl, Chung, Dennis, & O’Neil, 1999). Computer technology with its powerful
relational database makes capturing problem solving processes easier and less costly, especially
in a large scale test setting. For example, O’Neil, Wang, Chung, & Herl (1998) and Hsieh (2001)
used a computer-simulated teamwork task to evaluate problem solving and measure the thinking
processes involved. These processes were recorded by the computers and through predefined
messages that participants used to communicate with their team members. However, Hsieh’s
(2001) study showed that one of the problem solving strategies, searching, was significantly
negatively related to team performance.
Search strategy is one of the problem solving strategies attracting considerable attention
in educational research. Cyveillance (2000) estimated that the World Wide Web contained more than two billion publicly accessible information pages and would continue to grow exponentially. The World Wide Web (WWW) has the advantage of being easily updated and accessed in real time. In schools, students use the Internet more and more each day, either
for email or for academic projects (U.S. Census Bureau, 1999). Currently, over 95 percent of
public schools have computers (National Center for Education Statistics, 2000). According to
Becker (1999), at least 40-45% of American classrooms are linked to the Internet, and almost
90% of teachers perceive the World Wide Web as an important research source for their teaching.
More and more schools demand that students do research on the World Wide Web so that they can
satisfactorily complete their projects (Breivik, 1998; Ercegovac, 1998; Roblyer, 1998). In fact,
research is the most common classroom Internet use (Becker, 1999).
Purpose of the Study
The primary purpose of this study is to improve Hsieh’s (2001) results by providing more
task-related feedback to the participants. Hsieh provided feedback on the task progress. A
simulated Web space used for information seeking and feedback accessing was used in Hsieh’s
(2001) study. She demonstrated the following: (a) students who received adapted knowledge of
response feedback significantly outperformed students with knowledge of response feedback, and
(b) information seeking and feedback accessing were positively correlated with students’
outcome performances. However, surprisingly, within information seeking, searching for the
adapted knowledge of response groups was negatively related to group outcome. She did not
offer an explanation.
In order to improve searching, this author suggests providing search strategies training and including search tips in the feedback. The rationale is that even though Hsieh’s (2001) feedback provided participants information about how much of the task they had accomplished, and direction as to “what” area of their search and their concept map to improve, it did not provide practical tips on “how” to improve the map. This proposal argues that by providing search
strategy training and feedback, students will become more efficient in finding the relationships
between the concepts and in turn improve the overall result of the map. Therefore, the author
will modify the original task by providing examples on one specific type of search strategy, use
of Boolean operators, in addition to the original information on what area the participants should
improve next. Because the participants are high school students, they are considered to lack the prerequisite knowledge to do the task. By providing search tips that have proven to be
effective, participants in the proposed study should perform better than Hsieh’s (2001) students
in general. In addition, increased use of the Boolean operators when doing the simulated web
search is expected. With the search tips, students should be able to get more relevant
relationships among the concepts and should be able to construct a better map through efficient
information seeking and self-monitoring of their performance through feedback.
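The kind of Boolean narrowing this training targets can be sketched in code. The following is a minimal illustrative sketch: the page collection, query terms, and function names are hypothetical and are not drawn from the study's actual simulated Web space.

```python
# Minimal sketch of how Boolean operators (AND, OR, NOT) narrow a search.
# The page collection and terms below are hypothetical illustrations,
# not the study's actual simulated Web space.

pages = {
    "A": "acid rain damages forests and lakes",
    "B": "acid rain forms from sulfur dioxide emissions",
    "C": "ozone depletion harms forests",
}

def matches(text, required=(), any_of=(), excluded=()):
    """AND over `required`, OR over `any_of`, NOT over `excluded`."""
    words = set(text.lower().split())
    if not all(t.lower() in words for t in required):
        return False
    if any_of and not any(t.lower() in words for t in any_of):
        return False
    return not any(t.lower() in words for t in excluded)

def search(required=(), any_of=(), excluded=()):
    """Return titles of pages satisfying the Boolean query."""
    return [title for title, text in pages.items()
            if matches(text, required, any_of, excluded)]

# A broad single-term query returns more pages than one narrowed
# with AND; NOT excludes off-topic pages.
print(search(required=("forests",)))                       # pages A and C
print(search(required=("acid", "rain")))                   # pages A and B
print(search(required=("forests",), excluded=("ozone",)))  # page A only
```

The design point the training makes is visible here: each added AND term or NOT exclusion shrinks the result set, so fewer irrelevant pages have to be read before the needed relationship is found.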
Based on Hsieh’s (2001) study and adding search training and feedback, the proposed
study will evaluate student collaborative problem solving and team processes on a computer-based knowledge mapping group task with special attention to the effectiveness of information
seeking/searching. Hsieh (2001) compared the effects of two levels of feedback, i.e., knowledge
of response feedback and adapted knowledge of response feedback. Her study indicated that the
group with adapted knowledge of response feedback outperformed the group with knowledge of
response feedback. The proposed study will compare two levels of adapted knowledge of
response feedback. One is Hsieh’s (2001) general task-progress feedback (how much progress is
made) and the other one is the improved one with more specific task feedback (how to better
search the simulation web).
The dissertation will consist of a pilot and a main study. A pilot study will be run
because even though Hsieh (2001) has proven the feasibility of the computer simulation program
and her participants were able to successfully complete the task with the predefined messages,
some scales and messages had unexpectedly low reliability. Those messages will be deleted and
a pilot study will be run to test whether a group of a searcher and a leader can still complete the
task using the remaining messages.
Significance of the Study
Problem solving skills and teamwork skills have been regarded as the most important
skills for both the workplace and for the school context. A new problem-solving skill, arising from advances in computer technology and the World Wide Web, is the use of search strategies. The
present study is significant for the following four reasons. First, it represents one of the earliest
attempts to use computer simulations to report and capture the process of cooperative problem
solving with an emphasis on search strategies. Second, while some studies have been done in the
area of feedback and collaborative problem solving (Tudge et al., 1996), very few studies have
explored the effects of types of feedback on collaborative learning in computer-related instruction. The present study will investigate the effects of two different types of feedback (task-specific/dependent and task-general/independent) on collaborative problem solving. Third, use of search strategies has been considered in education as a characteristic of expert problem solving. Research has shown that with simple problems, there is not much difference between a
novice search and an expert search (Kubeck, Miller-Albrecht, & Murphy, 1999). However,
when it comes to more difficult problems, expert search is by far more efficient and more
accurate in locating the information needed. With the participants in Hsieh’s (2001) study
achieving less than satisfactory results in the map construction (highest possible score = 20; mean for all students = 9.9), the current task should be considered a more difficult one.
Therefore, by adding the feedback on search strategies, this study seeks to demonstrate that
students should be able to locate the information needed in a timely fashion and, at the end,
achieve better results in the map construction due to better problem solving strategies. Fourth,
the computer simulation used in this study not only records but also evaluates students’ process
and performance on cooperative problem solving. In addition, it also reports back to students
during the process on how they are doing and how they can improve. It is a dynamic assessment
tool (Grigorenko & Sternberg, 1998). The study also represents an attempt to improve the
current computer simulation program. It does so by identifying searching as one of the problem-solving processes and adding it to the program. Hopefully, the program will eventually become an automated learning and assessment tool.
CHAPTER II
LITERATURE REVIEW
The purpose of this chapter is to review studies and literature that are relevant to this
study. The organization of the review has five parts. First, studies that are directly related to and
make this study possible will be discussed. Second, collaborative problem solving and its
assessment will be addressed. The third section will focus on information searching in education
and information science. Fourth, the effects of different types of feedback in learning will be
discussed. The fifth section is a brief summary of this chapter.
Relevant Studies
Three studies are closely related to this study. The first one is by Schacter et al. (1999).
They used a simulated Internet Web space to measure individual problem solving strategies on
networked computers and they found that content understanding scores for participants increased
significantly with access to a simulated Web space. Information seeking processes such as
browsing, searching, and accessing feedback had improved students’ performance significantly
in the posttest.
The second study, by Chung et al. (1999), used computer-based collaborative knowledge mapping to measure team processes and team outcomes. Team processes were measured through
predefined messages and each message belonged to one of six teamwork skills (i.e., adaptability,
coordination, decision-making, leadership, interpersonal skills, and communication) in the
CRESST taxonomy of teamwork (O’Neil, Chung, & Brown, 1997). Unfortunately, in Chung et
al.’s (1999) study, there were no significant positive relationships found between most team
processes and outcome measures; surprisingly, decision-making and communication were found
to have negative effects on outcome performance.
Hsieh (2001) hypothesized that the lack of useful feedback and the type of task involved (not a real group task) may have influenced the results. Therefore, based on these two hypotheses, the third study, by Hsieh (2001), attempted to improve the Chung et al. (1999) study
by changing the nature of the tasks to a real “group task” and by providing more extensive
feedback. A group task is a task where no single individual possesses all the resources and
where no single individual is likely to solve the problem or accomplish the task objectives
without at least some input from others (Cohen & Arechevala-Vargas, 1987). Therefore, Hsieh
modified the original task into a real group task by assigning specific roles (leader vs. searcher)
respectively to each group member in order to meet the requirement of a group task where no
single individual possesses all the resources (information, knowledge, heuristic problem-solving
strategies, materials, and skills) and that no single individual was likely to solve the problem or
accomplish the task objectives without at least some input from others (Cohen & ArechevalaVargas, 1987) In addition, in order to improve the group performance, she implemented two
levels of feedback (knowledge of response feedback and adapted knowledge of response
feedback) on group map construction processes. Feedback accessing was demonstrated to be significantly related to students’ performance in Schacter et al.’s (1999) study.
By combining the methodologies of Schacter et al.’s (1999) and Chung et al.’s (1999) studies, Hsieh (2001) successfully demonstrated that adapted knowledge of response feedback produced better group concept map performance than knowledge of response feedback.
While doing so, she also evaluated student collaborative problem solving and team processes on
a computer-based knowledge mapping group task with a simulated Web space as an information
source. She demonstrated that decision making and leadership were positively related to group
concept map performance. However, searching in her study was unexpectedly negatively related
to group concept map performance. In addition, even though her feedback provided participants
a direction as to “what” area to improve for search and task performance, her feedback did not
provide practical tips on “how” to improve the performance.
By building on the methodology of Hsieh’s (2001) study and providing more task-specific feedback on search strategies, this study attempts to evaluate student collaborative problem solving and team processes on a computer-based knowledge mapping group task.
Collaborative Problem Solving
Currently the terms cooperative learning, collaborative learning, and group work are used interchangeably in the educational literature. Cooperative learning refers
to learning environments in which small groups of students work together to achieve a common
goal (Underwood & Underwood, 1999). A problem is an unknown resulting from any situation
where a person seeks to fulfill a need to accomplish a goal (Jonassen, 1997), and problem solving
is “cognitive processing directed at achieving a goal when no solution method is obvious to the
problem solver” (Mayer & Wittrock, 1996, p.47). Collaborative problem solving thus refers to
problem solving activities that involve interactions among a group of individuals (Zhang, 1998).
A review of the educational literature revealed that cooperative learning methods have
attracted special attention over the past two decades (Brown, 2000). Rich educational
environments encourage students to construct their own knowledge, to solve problems
collaboratively, and to raise their interest and motivation to continue to learn (Shapiro, 1999).
Recently, collaborative learning groups have also been used as educational assessment programs
in which students work together in small groups and try to solve problems or to accomplish
projects (Webb, Nemer, & Chizhik, 1998). According to Johnson and Johnson (1997), a
successful cooperative learning group has the following characteristics: (a) a clearly defined goal,
(b) a cooperative structure, (c) shared responsibility, (d) individual responsibility and
accountability, (e) communication among members, (f) consensus in decision-making, (g)
interpersonal skills, and (h) acceptance and support for members in the group. The first four
characteristics are task-related characteristics and the latter four are group member
characteristics.
Assessment of Collaborative Learning Processes
This study will use the teamwork processes model developed by CRESST researchers as
measurement of collaborative learning processes. The CRESST model consists of six skills.
They are “(a) adaptability—recognizing problems and responding appropriately, (b)
coordination—organizing group activities to complete a task on time, (c) decision making—
using available information to make decisions, (d) interpersonal—interacting cooperatively with
other group members, (e) leadership—providing direction for the group, and (f)
communication—clear and accurate exchange of information” (O’Neil et al., 1997, p. 413).
Adaptability. Adaptability refers to the group’s ability to “monitor the source and nature
of problems through an awareness of team activities and factors bearing on the task” (O’Neil et
al., 1997). That is, adaptability is mainly used for the detection and correction of problems. In a
concept mapping task, adaptable teams should detect problems with their knowledge map at a
deep (semantic) level and at a surface level, by identifying inaccuracies as well as the strength
and significance of a relationship, and by recognizing that the given set of concepts and links should be included in their map. In the present study, students are low in prior
knowledge about environmental sciences, thus the author does not expect any effect of
adaptability on performance.
Coordination. Coordination is defined as a group’s “process by which group resources,
activities, and responses are organized to ensure that tasks are integrated, synchronized, and
completed within established temporal constraints” (O’Neil et al., 1997, p. 413). Therefore, in a concept mapping task, coordinating strategies will include drawing on members’ domain expertise to determine the relationships between concepts and remaining conscious of the time constraints in order to respond appropriately. Because the task is a one-shot event and the time for the task is fairly short, group maturation effects are not expected (Morgan et al., 1993).
Decision-Making. Decision-making is defined as a group’s “ability to integrate
information, use logical and sound judgment, identify possible alternatives, select the best
solution, and evaluate the consequences” (O’Neil et al., 1997, p. 415). Effective teams employ
decision-making that takes into consideration all available information and thus decision-making
is regarded as playing a significant role in the outcome map performance (Chung et al., 1999). In
addition, Chung et al. (1999) indicated that compared to group members who lacked prior
knowledge, group members with relevant prior knowledge might be more likely to engage in
substantive discussions concerning the relationships.
In the present study, participants are expected to have little prior knowledge in the
domain of environmental science. However, through seeking information from the simulated
Web space, the author believes that the participants will have the opportunity to engage in
substantive discussion about the relationships. Thus, decision-making is expected to have a
positive effect on group performance.
Interpersonal Skill. Interpersonal skill is defined as “the ability to improve the quality of
team member interactions through the resolution of team members’ dissent, or the use of
cooperative behavior” (O’Neil et al., 1997, p. 416). Interpersonal processes are important since
they minimize intragroup conflict as well as foster team interdependence (Weng, 1999). In the concept mapping task, the group consists of only two members, so such conflict is highly unlikely to arise. Therefore, no relationship between interpersonal skills and group performance is expected in this study.
Leadership. Leadership is defined as “the ability to direct and coordinate the activities of
other team members, assess group performance, assign tasks, plan and organize, and establish a
positive atmosphere” (O’Neil et al., 1997, p. 417). In the current study, leadership is expected to
have an effect on group outcome performance.
Communication. Communication is defined as “the process by which information is
clearly and accurately exchanged between two or more team members in the prescribed manner
and by using proper terminology, and the ability to clarify or acknowledge the receipt of
information” (O’Neil et al., 1997, p. 417). According to O’Neil et al. (1997), communication (1) promotes the transmission and reception of support behaviors as well as the detection and
correction of error, (2) helps team members synchronize their activities and affects the quality of
decision-making, (3) affects the character of team cohesion, and (4) establishes operational
norms among team members.
Following Chung et al.’s (1999) and Hsieh’s (2001) measures, instead of evaluating the communication dimension as a separate category, an overall communication measure (including adaptability, coordination, decision-making, interpersonal relations, and leadership) was used in the computer-based concept mapping task study. In the present study, overall communication
will be used to evaluate the relationship between teamwork process and group outcome
performance. Communication is expected to have an effect on group performance, since neither the searcher nor the leader alone can complete the task successfully without constant communication with the other.
Assessment of Problem Solving
Although the need for problem-solving skills is well documented, the assessment tools for problem-solving skills can still be improved (O’Neil, 1999). The National Center for Research
on Evaluation, Standards, and Student Testing (CRESST) has developed a problem-solving
assessment model with three sub-elements: (a) content understanding; (b) problem-solving
strategies; and (c) self-regulation (O’Neil, 1999). The following section will discuss how the
assessment of these three elements is done.
Measurement of Content Understanding
Cognitive theories of education stress that learned knowledge should be organized into long-term memory for later access. In addition, the expertise literature also
suggests that experts’ understanding of domain knowledge is not just awareness of the concepts
but also of the connections among the concepts (Schau & Mattern, 1997). Concept maps have
been extensively used in K-12 classrooms, especially in the understanding of science (Schau et
al., 2001). Various research studies on concept maps also show them to be effective for teaching,
learning, and assessment purposes (Herl et al., 1999; Hurwitz & Abegg, 1999; Ruiz-Primo, Schultz, & Shavelson, 1997; Schau, Mattern, Zeilik, Teague, & Weber, 2001).
Concept maps. A concept map is a graphical representation that consists of nodes and
links. Each node represents an important term (standing for concept) in the domain of
knowledge. Links are used to represent the relationships of nodes (concepts) (Hurwitz & Abegg,
1999). A proposition is the combination of two nodes and a link. It is the basic and the smallest
unit in a concept map used to express the relationship between two concepts (Dochy, 1996).
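The node–link–proposition structure just described maps directly onto a simple data structure. The sketch below is illustrative only; the class names, concept terms, and link labels are hypothetical and not taken from the study's actual mapping task.

```python
# Illustrative representation of a concept map: nodes (concepts),
# labeled links, and propositions (node-link-node triples).
# All concept and link names here are hypothetical examples.

from typing import NamedTuple

class Proposition(NamedTuple):
    """Smallest unit of a concept map: two concepts joined by a link."""
    source: str
    link: str
    target: str

class ConceptMap:
    def __init__(self):
        self.propositions = set()

    def add(self, source, link, target):
        self.propositions.add(Proposition(source, link, target))

    @property
    def concepts(self):
        """All nodes that appear in at least one proposition."""
        return ({p.source for p in self.propositions}
                | {p.target for p in self.propositions})

cmap = ConceptMap()
cmap.add("bacteria", "causes", "disease")
cmap.add("sunlight", "drives", "photosynthesis")
print(len(cmap.propositions))   # 2 propositions
print(sorted(cmap.concepts))    # the four distinct concepts
```

Treating the proposition, rather than the node, as the unit of the map is what makes the later scoring schemes possible: two maps can be compared proposition by proposition.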
Ruiz-Primo et al. (1997) proposed conceptualizing concept maps as an assessment tool in
science. For them, concept mapping as an assessment tool is further distinguished by three parts:
(a) a task for students to use their knowledge in a domain, (b) a response format for students, and
(c) a scoring system which accurately evaluates the student’s performance. Table 1 lists the
concept map components and variations identified in their study.
Table 1
Concept Map Components and Variations Identified (Ruiz-Primo et al., 1997)

Component 1: Task
- Task demands. Students are asked to do one of the following:
  - Fill in a map
  - Construct a map from scratch
  - Organize cards
  - Rate relatedness of concept pairs
  - Write an essay
  - Respond to an interview
- Task constraints. Students may or may not be:
  - Asked to construct a hierarchical map
  - Provided with the concepts used in the task
  - Provided with the concept links used in the task
  - Allowed to use more than one link between nodes
  - Allowed to physically move the concepts around until a satisfactory structure is arrived at
  - Asked to define the terms used in the map
  - Required to justify their responses
  - Required to construct the map collectively
- Content structure. The intersection of the task demands and constraints with the structure of the subject domain to be mapped.

Component 2: Response format
- Response mode. Students respond in one of the following ways:
  - With paper and pencil
  - Orally
  - On a computer
- Leader. The leader can be one of the following two:
  - Student
  - Teacher or researcher

Component 3: Scoring system
- Components on the map to be scored. Focus is on the following three components:
  - Propositions
  - Hierarchy levels
  - Examples
- Use of a criterion map. The student’s map is compared with an expert’s map. Experts can be one of the following:
  - One or more experts in the field
  - One or more teachers
  - One or more top students
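When a criterion map is used, scoring often reduces to comparing the student’s propositions against the expert’s. The sketch below illustrates one simple overlap-based scoring rule; it is a hypothetical illustration of the idea, not the actual rubric used by Ruiz-Primo et al. (1997):

```python
def criterion_map_score(student_map, expert_map):
    """Return the proportion of expert propositions present in the student map.

    Each map is a set of (concept, link, concept) tuples.
    """
    if not expert_map:
        return 0.0
    return len(student_map & expert_map) / len(expert_map)

# Hypothetical example: the student matches one of the expert's two propositions.
expert = {("sunlight", "required for", "photosynthesis"),
          ("photosynthesis", "produces", "oxygen")}
student = {("sunlight", "required for", "photosynthesis"),
           ("oxygen", "part of", "atmosphere")}
score = criterion_map_score(student, expert)  # 0.5
```

Real scoring schemes are usually more forgiving, for example giving partial credit for defensible links the expert did not use; the set intersection here only captures exact proposition matches.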
Several researchers have successfully used concept maps to measure students’ content
understanding in science (e.g., Aidman & Egan, 1998; Herl et al., 1999; Schacter et al., 1999;
Schau et al., 2001). In two studies, Schau et al. (2001) used select-and-fill-in concept maps to
measure secondary students’ understanding of science and postsecondary introductory astronomy
students’ content understanding. In the first study, the students’ performance on the concept map
correlated significantly with a traditional assessment, a standardized middle school
multiple-choice test (r = .77 for eighth grade and r = .74 for seventh grade). This correlation
provides evidence of the validity of concept maps as an assessment tool. In the
second study, concept maps were compared with both a multiple-choice test and a
relatedness-ratings assessment. The results were as follows. The concept maps had an internal
consistency of .83 (N = 93). In addition, the map scores showed a mean increase from 30%
correct at the beginning of the semester (SD = 11%) to 50% correct at the end (SD = 19%),
dependent t(83) = 10.09, p < .0005, d = 1.30. Finally, correlations between concept map scores
and both the multiple-choice test scores and the relatedness-ratings assessment were large (e.g.,
for concept maps and the multiple-choice test, r = .51, N = 93; for concept maps and relatedness
ratings, r = .52, N = 89).
More recently, CRESST has developed a computer-based knowledge mapping system
that was used in two studies. In the first, students constructed a group map through collaborating
over a network (Chung et al., 1999), and, in the second, students constructed individual maps
through searching a Web space (Schacter et al., 1999). In both studies, the map construction
contained 18 environmental science concepts (i.e., atmosphere, bacteria, carbon dioxide, climate,
consumer, decomposition, evaporation, food chain, greenhouse gases, nutrients, oceans, oxygen,
photosynthesis, producer, respiration, sunlight, waste, water cycle), and seven relationships or
links (i.e., causes, influence, part of, produces, requires, used for, uses). Students were asked to
use these terms and links to construct concept maps through the computer.
Measurement of Problem Solving Strategies
According to O’Neil (1999), problem solving strategies are either domain-independent or
domain-dependent. Domain-independent strategies are usually applied over several subjects
while domain-dependent strategies are more specific to the subject area.
Three examples of domain-independent strategies will be discussed here. The first is the
use of multiple representations (O’Neil, 1999). Brenner, Mayer, Moseley, Brar, Duran, Reed,
and Webb (1997) studied multiple representations strategies in learning algebra. Multiple
representations as a problem-solving strategy in math is defined as translating the words of a
problem into more than one other mode of representation, e.g., diagrams, pictures, concrete
objects, equations, number sentences, verbal summaries, or even the problem solvers’ own
words (Cardelle-Elawar, 1992). Algebra problems in the study were represented in multiple
formats, and students learned to solve them in cooperative groups. Students receiving the
multiple-representation treatment did significantly better on the posttest: they were more
successful at representing the word problems in different formats and better at solving them.
Choi-Koh (2000) found similar effects of a multiple-representations strategy on learning
quadratic minimum values in math.
The second example of domain-independent strategies is the use of analogies (O’Neil,
1999). For example, Bernardo (2001) studied high school students learning word problems in
basic probability. He found that students who used an analogies strategy were significantly better
at: (a) transferring problem information between analogous source and target problem, (b)
retrieving the analogous source problem, and (c) applying the retrieved analogous information.
The third example is search strategies. Electronic information seeking has attracted
considerable attention in education and library science, and it can no longer be avoided: the
World Wide Web already contained more than two billion publicly accessible information pages
by 2000 and continues to grow exponentially (Cyveillance, 2000); over 95 percent of public
schools have computers (National Center for Education Statistics, 2000); and about 90% of
classrooms have Internet access (Smith & Broom, in press). More and more schools require
students to do research on the World Wide Web to complete their projects satisfactorily (Smith
& Broom, in press), and research is the most common classroom use of the Internet (Becker,
1999). These access figures are reviewed in more detail in the Electronic Information
Seeking/Searching section below.
However, merely getting online to the World Wide Web does not automatically result in
getting the information needed. Information seeking is viewed as a complex problem solving
activity that involves memory, decision-making, learning, creativity, and intelligence (Kubeck et
al., 1999). According to Smith and Broom (in press), students and teachers alike still lack basic
information technology knowledge and skills. In addition, the current curriculum, instruction,
and assessments do not adequately make use of the capabilities of today’s networked information
systems (Smith & Broom, in press). Electronic information seeking is not a single step but a
process of interrelated steps: determining the information needed, choosing topics to pursue,
locating sites, locating information to increase overall domain understanding, analyzing and
evaluating the information found, and finally ending the search and returning to solving the
problem (Lazonder, 2000).
In order to assess information seeking strategies in a concept mapping task, CRESST
created a simulated Internet Web space (Schacter et al., 1999). Schacter et al. (1999) used this
simulated Internet Web space to measure individual problem solving strategies and they found
that content understanding scores for participants increased significantly with access to a
simulated Web space. Information seeking processes such as browsing, searching, and accessing
feedback improved students’ performance significantly in the posttest.
This study will also use a closed, content-controlled, simulated Web space rather than the
actual World Wide Web, for four reasons. First, on the real Web, search queries sometimes end
in a broken page, that is, a URL whose contents cannot be accessed or displayed because of
server problems or because the URL no longer exists. Second, real pages take time to download,
and students grow frustrated with the World Wide Web when they must wait a long time for a
page. Given the time constraints of the task, both of the first two problems could introduce
frustration, a variable this author wishes to exclude. Third, scoring would be impossible with the
whole World Wide Web, because it is impossible to count all of its pages, let alone assign each a
relevance score. Fourth, a simulated Web space maintains continuity with both Schacter et al.’s
(1999) and Hsieh’s (2001) studies.
Measurement of Self-Regulation
Self-regulation includes metacognition and motivation (O’Neil, 1999). O’Neil and Herl
(1998) proposed examining metacognition through two aspects, planning and self-checking, and
motivation through self-efficacy and effort. These four components make up the measurement of
self-regulation in problem solving in the CRESST model. Planning comes first because one must
have a goal and a plan to achieve it, while self-monitoring or self-checking is assumed to be an
essential mechanism for monitoring progress toward the goal.
In addition, self-efficacy is defined as one’s confidence in being capable of accomplishing a
particular task, whereas effort is the extent to which one works hard on a task. Based on their
model, O’Neil and Herl (1998) also created a questionnaire for self-regulation assessment in
problem solving. All four components (planning, self-checking, self-efficacy, and effort) are
assessed using 8 questions each.
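A questionnaire structured this way (four subscales of eight items each) lends itself to straightforward subscale scoring. The sketch below is a generic illustration of that structure, not O’Neil and Herl’s (1998) actual instrument or scoring procedure; the item scores shown are invented:

```python
def subscale_means(responses):
    """Compute the mean score for each self-regulation subscale.

    responses: dict mapping subscale name -> list of eight item scores.
    """
    return {name: sum(items) / len(items) for name, items in responses.items()}

# Hypothetical responses on a 1-4 agreement scale.
responses = {
    "planning":      [3, 4, 3, 3, 4, 3, 2, 3],
    "self_checking": [2, 3, 3, 2, 3, 3, 3, 2],
    "self_efficacy": [4, 4, 3, 4, 4, 3, 4, 4],
    "effort":        [3, 3, 3, 3, 3, 3, 3, 3],
}
means = subscale_means(responses)  # one mean per component
```

Keeping the four components as separate subscale scores, rather than a single total, preserves the distinction between metacognition (planning, self-checking) and motivation (self-efficacy, effort) in the CRESST model.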
Summary
Collaborative problem solving is considered a necessary skill for success in today's world
and schooling. Cooperative learning refers to learning environments in which small groups of
students work together to achieve a common goal, and problem solving is “cognitive processing
directed at achieving a goal when no solution method is obvious to the problem solver” (Mayer
& Wittrock, 1996, p.47). Thus, collaborative problem solving is defined as problem solving
activities that involve interactions among a group of individuals. Figure 1 shows the components
of collaborative problem solving and their relationships to one another.
Figure 1. Collaborative problem solving. The diagram divides cooperative problem solving into
cooperative learning (adaptability, coordination, decision making, interpersonal, leadership, and
communication) and problem solving (content understanding; problem-solving strategies, both
domain independent and domain dependent; and self-regulation, which comprises motivation,
with effort and self-efficacy, and metacognition, with self-checking and planning).
As seen in Figure 1, cooperative problem solving is first divided into two components,
cooperative learning and problem solving. According to O’Neil et al. (1997), cooperative
learning can be further assessed through six cooperative skills (adaptability, coordination,
interpersonal skills, decision making, leadership, and communication). According to O’Neil
(1999), problem solving has three areas (content understanding, problem-solving strategies, and
self-regulation). Content understanding may be measured using a concept map. Problem-solving
strategies can be domain dependent or domain independent. Self-regulation has two main
components (motivation and metacognition), each of which in turn has two components:
motivation consists of effort and self-efficacy, and metacognition consists of self-checking and
planning.
Information Seeking
In this section, this author reviews information seeking in education and library science.
The review consists of two parts: first, the literature on information seeking in educational
research is reviewed; then, relevant research on information seeking as a problem-solving
strategy is drawn from the information science and library science literature.
Information Seeking in Education
The Information Age and its Impact
According to Jukes, Dosaj, and Macdonald (2000), as humanity has moved from the
industrial age into the information age, the meaning of being literate has changed also. When
public education first became widespread in the 1800s, being literate meant having the
knowledge to read and write. In the 1900s, the term expanded to mean “well-read,” i.e., being
knowledgeable about a variety of cultural subjects, such as art, literature, and the classics. In the
information age, as the vast amount of information increasingly comes to dominate people’s
lives, their established knowledge base is being superseded by new knowledge at an astonishing
rate every minute of every day. This characteristic of the information age is diminishing the
value of older, fact-based knowledge, while at the same time increasing the value of the skills
required to find and process information effectively. Therefore, being literate in the 21st century
is no longer just “having the knowledge to read and write”; it means the ability to locate and
manage knowledge and information effectively. By contrast, the information illiterate lack the
motivation or desire to locate, evaluate, and use information in a meaningful way (Burdick,
1998).
Research in education in the past two decades has recognized and stressed the importance
of information seeking in the information age (Small, Sutton, & Miwa, 1998). Researchers have
advocated abandoning the Industrial Age idea of school success based primarily on knowing
facts, recognizing that in a rapidly changing world, today’s facts can become irrelevant and
outdated tomorrow. It is essential to spend more time teaching children
information finding and processing skills rather than rote memorization and repetition of certain
facts. This new emphasis on the skills of information seeking and processing should
substantially change the way curriculum and assessment programs are structured (Covington,
1998).
Covington (1998) also stresses changing the focus of school curriculum and assessment
programs as a way to prepare students for the 21st century, in which they will inevitably face
rapid change in every aspect of their lives. Traditional education systems do not sufficiently
address students’ needs in this ever-changing society. At the center of this change is the
information explosion, with its growing glut of facts. Covington advocates a change in the
teacher’s role. Teachers were once viewed as knowledge transmitters; in the information age,
however, it is impossible to transmit enough up-to-date knowledge to students to solve the
problems they will face throughout their lifetimes. He suggests that in an era where change is the
only constant, the greatest legacy education can give students is a will to learn and to continue
learning as personal circumstances change throughout life. A will to learn throughout life is an
educational concept defined as self-regulated learning.
Information Seeking and Help Seeking
According to Karabenick and Keefer (1998), information is defined as anything that
helps people progress toward their goals. Zimmerman (1998) studied the academic
self-regulatory learning processes of experts in diverse disciplines such as music, sports, and
professional writing. He found that most experts locate and manage information effectively.
Because other individuals and resources can provide assistance or information, help seeking is
considered part of information seeking, and it is one form of information seeking that experts
use. Help seeking is defined as obtaining the information necessary to assist learning. For
example, experts in professional writing frequently obtain literary advice or feedback from
colleagues. Feedback seeking, a form of both information seeking and help seeking, is crucial to
expert performance. Experts in sports and music often return to their coaches or teachers when
flaws develop in their performance. Interestingly, self-regulated learners do not view help
seeking as a sign of social dependence or low competence. On the contrary, students who are not
self-regulated learners tend to avoid asking for information to improve their learning or
performance, because they are concerned about adverse social consequences (Ryan & Pintrich,
1997; Zimmerman, 1998).
Similarly, Ryan and Pintrich (1997) examined two types of help-seeking behavior in a
math classroom: avoidance of help seeking and adaptive help seeking. Avoidance of help
seeking describes situations in which a student needs information from others but refuses to seek
it; students who avoid help seeking put themselves at a disadvantage in learning and
performance. Adaptive help seeking refers to a student seeking hints about the solution to a
problem. Ryan and Pintrich found that information seeking from peers and teachers is related to
both perceived cognitive competence and social competence. Adolescents who felt comfortable
and skillful in relating to others were less likely to perceive threats from peers when asking for
information while trying to solve a math problem. In addition, adolescents with higher
self-perceived cognitive competence were more likely to seek information from peers without
feeling vulnerable.
In expanding the research on help seeking and help avoidance, Karabenick and Keefer
(1998) also investigated the pros and cons of help seeking and found they were linked to the type
of helping resource. Informal resources such as classmates, friends, and family are generally not
considered as threatening as formal ones such as teachers or study groups. The tradeoff,
however, is that informal resources may not have the expertise required to offer the exact
information needed.
It is clear that information seeking cannot be studied as a single, independent issue; it is
shaped by cultural, social, and motivational factors (Borgman, 2000; Butler, 2000; Ryan &
Pintrich, 1997; Zimmerman, 1998). For example, Butler (2000)
examined information seeking in task-involving versus ego-involving conditions among pupils
in Grades 4 through 8. Pupils were further distinguished by whether or not they had acquired a
differentiated concept of ability, defined as the realization of the distinction between ability and
effort. This acquisition usually occurs around ages 11-12, when students have had considerable
experience in school and begin to realize that effort does not equal ability; at the same time,
because of social comparison, they begin to perceive that expending a lot of effort on a task
indicates low ability. Butler found that pupils who had acquired the differentiated concept of
ability behaved
differently in their information seeking behaviors. In the task-involving condition, pupils with a
differentiated concept of ability strived to seek information relevant for mastery of the material,
while in the ego-involving condition, they strived to outperform others and the type of
information they sought was normative comparison feedback. Surprisingly, pupils who had not
acquired a differentiated concept of ability responded to both conditions with the same behavior.
They strove to assess normative success and feedback but displayed neither the costs of ego
involvement nor the benefits of task involvement. Even though the information age has been
with us for decades and information seeking has been widely studied, the curriculum does not yet
reflect corresponding changes.
Curriculum Change for Tomorrow’s Workforce Readiness
Covington (1998) suggested an overall change in school curriculum and assessment
programs. According to him, schools today do not prepare students well for the new skills
required in the 21st-century workplace. Schools grapple with only the most obvious feature of
the information explosion, namely the mastering of more information, by trying to make learning
faster, sometimes through computer-based instruction or by requiring students to spend more
time. This, however, is not a solution: in the information age there will never be enough time to
master an infinite amount of information. A more strategic approach for schools is to instruct
students in broader, future-oriented skills, including a keen sense of which information is
relevant and which is not for solving problems in everyday life and in the workplace (Eisenberg,
1997).
Other researchers and organizations also echo Covington’s view. In view of the changing
needs in the workforce, the National Center for Research on Evaluation, Standards, and Student
Testing examined commonalities in five studies on workforce readiness skills. Among them,
higher order thinking, teamwork skills and problem solving were identified as the necessary
generic skills needed for success in today's world (O’Neil, 1997). A critical element within
teamwork skills is decision making, which in turn requires seeking, gathering, and organizing
information (Borgman, 1999). Durham, Locke, Poon, and McLeod (2000) examined goal-setting
theory and the relations among group goals, group efficacy, information seeking strategies, and
group performance. They found that seeking task-relevant information was the only direct
predictor of group performance among these variables.
O’Neil (1999) claims that with computer networks and advances in technology,
information becomes more and more accessible. It is imperative that students learn how to
locate, organize, discriminate among, and use information stored in digital formats to make
decisions and to solve problems. Effective electronic information seeking strategies are required
problem-solving skills for students coping with future job and learning demands. Electronic
information seeking provides multiple sources of information, giving problem solvers the
opportunity to explore and compare alternatives before they settle on a final solution.
With the recognition of the importance of information seeking, especially electronic
information seeking in education, the next section will discuss the relevant research on electronic
information seeking/searching from library science and information science.
Electronic Information Seeking/Searching
Electronic information seeking has attracted considerable attention in education and
library science. This attention is inseparable from advances in computer technology.
The Information Age and Computer Technology
Computers, the Internet, and the World Wide Web are here to stay. Cyveillance (2000)
estimated in 2000 that the World Wide Web contained more than two billion publicly accessible
information pages and would continue to grow exponentially. According to the National Center
for Education Statistics (2000), over 95 percent of public schools have computers. In addition,
about 90% of classrooms have Internet access (Smith & Broom, in press).
On the students’ side, according to Becker (unpublished), almost 50 percent of the U.S.
student population uses computers and the Internet at school several times per week. At home,
57 percent of all students have a computer and 33 percent have access to the Internet. More and
more schools require students to do research on the World Wide Web to complete their projects
satisfactorily (Smith & Broom, in press). In fact, research is the most common classroom use of
the Internet (Becker, 1999).
However, merely getting online to the World Wide Web does not automatically yield the
information needed. According to Smith and Broom (in press), students and teachers alike still
lack basic information technology knowledge and skills. In addition, current curriculum,
instruction, and assessment do not adequately make use of the capabilities of today’s networked
information systems (Smith & Broom, in press). In the workplace, information seeking skills are
urgently needed as well. According to a study by Wilkins, Hassard, and Leckie (1997), even
university professional and managerial staff, when they conduct projects of longer duration, have
difficulty narrowing the scope of their research and trouble identifying and collecting the
necessary and relevant information.
Tyner (2001) compared the World Wide Web to the largest bookstore in the world, but
one in which the books and journals have no ISBNs or cover pages and there is no central
catalogue. Fortunately, two kinds of tools help locate the information needed: search engines and
subject directories (Tyner, 2001). Search engines let users key in words or concepts that are run
against a database; the engine then retrieves the pages containing the keywords. While search
engines are broadly similar, slight differences exist among them, and effective search strategies
come only with learning and practice.
Search Strategies on the World Wide Web
Given the importance of search strategies today, researchers are paying attention to
effective search strategies. What are the characteristics of effective search strategies? Tyner
(2001) listed the steps toward effective searching. They are (1) formulate the research question
and its scope, (2) identify the important concepts within the question, (3) identify search terms
for the concepts, (4) consider synonyms and variations of the terms, and (5) prepare search logic.
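Steps (3) through (5) of this process can be sketched as a simple Boolean query builder. The fragment below is an illustration only; it assumes the common convention of joining synonyms with OR and distinct concepts with AND, which is the form most search engines of the period recognized:

```python
def build_query(concept_synonyms):
    """Build a Boolean query from one synonym list per concept.

    Synonyms within a concept are joined with OR (any may match);
    the concept groups are joined with AND (all concepts must match).
    """
    groups = ["(" + " OR ".join(synonyms) + ")" for synonyms in concept_synonyms]
    return " AND ".join(groups)

query = build_query([["dolphins", "porpoises"], ["habitat", "range"]])
# -> "(dolphins OR porpoises) AND (habitat OR range)"
```

The design mirrors the search-preparation steps directly: each inner list is the output of identifying a concept and its synonyms, and the outer AND join is the prepared search logic.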
Searching the World Wide Web resembles searching a database: research has shown that people
who have trouble searching in information database environments also have trouble searching
the World Wide Web (Fidel et al., 1999; Kafai & Bates, 1997).
Novices’ and Experts’ Search
For novices, the challenge of searching on the World Wide Web is defining a proper
starting point and developing the procedures necessary to complete the information search (Hill,
1997). An information search is not a single step but a multiplicity of closely related, interlinked
steps. Hill (1997) further found that the more familiar adults were with the structure of the World
Wide Web, the more they employed problem-solving strategies, such as integrating new
information, taking varied viewpoints on the information they found, and extracting the relevant
details. Unfamiliarity, on the other hand, tended to be associated with delayed query formation,
difficulty in defining search terms and options, and a feeling of being easily lost in the system.
Lazonder (2000) proposed a process model of Web searching, shown below in Figure 2.
Figure 2. Lazonder’s (2000) process model of Web searching. The model runs from the start of a
search to its end in two phases: locating the site (identify the search goal, select a search strategy,
execute the search strategy, and monitor the search) and then locating the information within it.
In addition, using his model he tested novice and expert searching behavior on the World
Wide Web. He found that experts significantly outperformed novices in the first part of the
search, namely locating the relevant pages, but there was no significant difference between
novices and experts in locating information within the pages once found. He concluded that
novice users’ foremost training need related exclusively to locating websites, and he suggested
that novice users’ ability to locate websites containing the exact information needed could be
greatly improved by teaching them how to monitor their own searching and how the World Wide
Web and computer systems are constructed. Khan and Locatis (1998) found that novice users
usually were not aware of the full potential of the search engine. They might not know the
Boolean operator options, and they might not know they could set the default of the search
engine to multiple-word search rather than just one-word search. On the other hand, expert
database searchers located the information needed faster, used more search logic in their queries,
such as Boolean operators, and used quality keyword sets and synonyms (Klein et al., in press).
Synonyms are necessary and especially useful for retrieving relevant information in the
humanities and social sciences: Knapp, Cohen, and Juedes (1998) found that 96.5% of a sample
of 2,524 common-noun subject headings from the humanities had quasi-synonymous terms.
Effect of Age and Computer Experience on Searching Strategies
Mead, Sit, Rogers, Jamieson, Rousseau, and Gabriel (2000) examined the effects of
general computer experience and age on library system searching among 40 novice library
system users, 20 of them college students and 20 older adults, who were asked to perform 10
Boolean searches on an unfamiliar library system. The results demonstrated that within both the
college student group and the older adult group, high computer experience was associated with
significantly better task performance. In addition, older adults overall performed significantly
worse in their searches, so age may mediate the effects of computer experience: younger adults
tend to have more computer experience than older adults. Older adults also made more syntax
errors with Boolean logic.
Researchers have found that even children as young as five were able to use the Web to
surf for information, while older children such as junior high school students begin to distinguish
among search engines and to use Boolean operators (Kafai & Bates, 1997). In addition to computer
experience, verbal ability was found to influence the search results, too. People with higher
verbal ability were better at coming up with search keywords and alternative terms if the
keywords used did not yield satisfactory results (Bilal, 1998; Fidel et al., 1999; Kafai & Bates,
1997; Schacter, Chung, & Dorr, 1998).
Assessment of Search Strategies
A lot has been said about the importance of search strategies in library science and on the
World Wide Web; however, tools for assessing the effectiveness of those strategies are still
lacking. Schacter, Herl, Chung, Dennis, and O’Neil (1999) were among the first in educational
research to develop a computer-based assessment tool for such strategies.
Schacter et al. (1999) used a simulated Internet Web space to measure individual problem
solving strategies on networked computers and they found that content understanding scores for
participants increased significantly with access to information on a simulated Web space.
Information seeking processes such as browsing, searching, and accessing feedback improved
students’ performance significantly in the posttest.
Klein et al. (in press) investigated how best to assess students' Web fluency, defined as
students' level of effective use of search strategies on the World Wide Web through training
and/or experience. They first identified three categories of measures: (a) outcomes of the
searches (e.g., information selection and search efficiency), (b) the processes involved (e.g., the
navigational techniques used, such as "back" and "forward", and the use of search logic such as
"AND"), and (c) students' attitudes toward the World Wide Web (e.g., positive and negative).
Based on these three measures, they created
an assessment tool called Web Expertise Assessment to explore students’ World Wide Web
searches. The subjects were grade 7 to grade 12 students classified as experienced web users.
The Web Expertise Assessment contains four important components: (a) an on-line
search engine, (b) a Web-based information space, (c) a navigation toolbar, and (d) an automatic
logging capability. Its simulated Web space contains around 500 pages of information on
multiple topics. The results showed that students had difficulty searching: fewer than half of the
searches were rated "good," confirming other studies' conclusion that there is an urgent need to
teach search strategies. Students were, however, able to redirect their searches and critically
review the information they located before starting a new search.
Effects of Training on Boolean Search Strategies
According to Lindroth (1997), five major search engines on the World Wide Web (Alta
Vista, Excite, InfoSeek, HotBot, and Webcrawler) recognize Boolean search logic. Researchers
have strongly advocated training in the use of Boolean logic (Fidel et al., 1999; Lindroth, 1997).
For example, Lindroth (1997) suggested that training start with students brainstorming keywords
and identifying synonyms related to the topic to be searched. Second, students should decide
which Boolean operators to use (AND, OR). Using a Venn diagram to show the effect of each
Boolean operator helps students visualize the difference (Lindroth, 1997; Mead et al., 2000).
Azzaro and Cleary (1994) also successfully taught college students Boolean search logic using
Venn diagrams.
Similarly, Lazonder (2000) explored novice users' training needs in searching for information on
the World Wide Web. He suggested that instructing novices in search logic, such as Boolean
operators, and in advanced system knowledge about the computer network would help improve
the quality of their searches. However, none of these studies adopted an experimental or
quasi-experimental design; they were based on naturalistic observation. This author did not
find any statistical evidence that training improved search strategy performance. In addition,
Klein et al. (in press), in their attempt to assess students' Web expertise, found that even with a
one-on-one training session on how to use Boolean operators, the search task remained difficult
for the students in their study. One explanation is that when students are struggling with a
search task that is new and difficult for them, it is hard to remember to apply newly learned
search strategies such as Boolean logic. Therefore, in addition to training on how to use Boolean
operators, feedback containing search strategies can act as a helpful reminder during a novice's
electronic information seeking. This dissertation will implement such a strategy.
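The effect of the Boolean operators discussed above can be illustrated with a short sketch that treats each search term as the set of documents containing it, mirroring the Venn diagram approach: AND is set intersection and OR is set union. The mini-corpus and function names below are invented for illustration only.

```python
# Illustrative sketch: Boolean operators as set operations over a tiny
# hypothetical document collection (all data invented for illustration).

docs = {
    1: "training search strategies on the web",
    2: "boolean logic and venn diagrams",
    3: "search engines recognize boolean logic",
    4: "feedback in computer based instruction",
}

def matching(term):
    """Return the set of document ids whose text contains the term."""
    return {doc_id for doc_id, text in docs.items() if term in text.split()}

# AND narrows the result set (intersection), OR broadens it (union),
# corresponding to the overlapping and combined regions of a Venn diagram.
and_result = matching("boolean") & matching("search")
or_result = matching("boolean") | matching("search")

print(sorted(and_result))  # documents containing both terms
print(sorted(or_result))   # documents containing either term
```

The sketch makes concrete why novices who omit AND retrieve far larger, less relevant result sets.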
Technical Considerations in Electronic Information Seeking on the World Wide Web
The World Wide Web is not without drawbacks. First, search queries sometimes end in
broken pages because of server problems or misplaced links. Second, page access time can be
long; some have joked that WWW stands for "World Wide Wait." Slow information transfer can
leave students frustrated (Tsai, 2000). Therefore, a closed, content-controlled, database-like
Web information search system is better than the actual World Wide Web for assessing search
behavior and for training, for three reasons. First, with a simulated Web, search queries never
end in a broken page. Second, access time is shorter than on the actual Web, which matters
especially when a time constraint is an important aspect of the task. Third, scoring page
relevance for assessment purposes is easier: if the whole World Wide Web were used, it would be
impossible to count all the pages on the Web, let alone assign a relevance score to each page.
Summary
Research in education in the past two decades has recognized and stressed the importance
of information seeking in the information age. This new emphasis on the skills of information
seeking and processing should change the way curriculum and assessment programs are
structured (Covington, 1998). Information is defined as anything that helps people progress
towards their goals. Help seeking is also considered as part of information seeking. Research on
expertise has shown that experts locate and manage information effectively (Zimmerman, 1998).
Two forms of information seeking that experts use frequently are help seeking and feedback seeking.
With computer networks and advances in technology, information is becoming more and
more accessible. Effective electronic information seeking is a problem-solving skill students will
need to cope with future job and learning demands. Many students and
teachers alike still lack basic information technology knowledge and skills (Smith & Broom, in
press). In addition, current curriculum, instruction, and assessment do not adequately make use
of the capabilities of today’s networked information systems. Research has shown that effective
searching is not just one single step. It includes at least the following five steps: (1) formulating
the research question and its scope, (2) identifying the important concepts within the question,
(3) identifying search terms for the concepts, (4) considering synonyms and variations of the
terms, and (5) preparing the search logic (Tyner, 2001).
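Steps 2 through 5 of Tyner's (2001) process can be sketched as a small pipeline that turns concepts and their synonyms into a prepared Boolean query string. The topic, terms, and function name below are invented for illustration, not taken from Tyner.

```python
# Sketch of steps 2-5 of Tyner's (2001) search process: from identified
# concepts, through synonyms, to a prepared Boolean query (terms illustrative).

concepts = {
    "feedback": ["feedback", "response"],        # concept -> synonyms/variants
    "instruction": ["instruction", "teaching"],
}

def prepare_query(concepts):
    """Join each concept's synonyms with OR, then join the concepts with AND."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    return " AND ".join(groups)

print(prepare_query(concepts))
# e.g. (feedback OR response) AND (instruction OR teaching)
```

The OR groups implement step 4 (synonyms and variations), while the AND join implements step 5 (search logic across the question's key concepts).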
Novices' and experts' searching behaviors are very different. Novice users are usually
unaware of the search engine's full potential, of Boolean operator options, and of multiple-word
searching. Expert database searchers, on the other hand, locate needed information faster and
use more search logic, such as Boolean operators, in their queries (Lazonder, 2000). Research
has concluded that there is a need for training on Boolean search strategies, and a few studies
have also demonstrated its effectiveness.
Feedback
The use of feedback to inform learners of their state and progress is ubiquitous at all
levels and in all kinds of educational disciplines and contexts. Feedback is an important element
in learning and motivation (Nabors, 1999). With feedback, learners can
restructure their knowledge, correct what they do wrong, and further support their metacognitive
processes. Extensive research has been done in the area of feedback. The purpose of this section
is to review literature on the feedback types and effects. The organization of this section will be
divided into two main parts. First, the author will address the definition and different types of
feedback in a general traditional classroom/learning situation. Second, the author will describe
the current implementation of feedback and its effects according to its various characteristics.
Definition and Feedback Type
Historically, research on feedback started with teacher feedback. According to
Rosenshine and Stevens (1986), teacher feedback is an important teaching tool. Pintrich and
Schunk (1996) distinguished four types of teacher feedback based on the type of information
included: performance feedback, motivational feedback, attributional feedback, and strategy
feedback. Table 2 gives definitions and examples for each type; it has been adapted, with some
modifications, from Pintrich and Schunk (1996, p. 336).
Table 2
Teacher Feedback Types
_____________________________________________________________________________________
Type            Description                                   Examples
_____________________________________________________________________________________
Performance     Provides students with information on         "That's right."
                the accuracy of their work; may include       "That's correct. Do not
                corrective information.                       forget to deduct the number."

Motivational    Provides information on progress and          "You are getting better."
                competence; may include social                "Great job."
                comparison and persuasion.

Attributional   Links student performance with one or         "You've been working hard and
                more attributions.                            you are doing well."

Strategy        Links student performance with one or         "The tutoring after school works;
                more strategies used.                         you are now very good at this."
_____________________________________________________________________________________
The Implementation of Different Types of Teacher Feedback
Even though researchers identified four types of teacher feedback (Pintrich & Schunk,
1996), their implementation has not been equal. Foote (1999) found that among the 20 teachers
he observed over a 10-week period, there was significantly high use of performance feedback but
significantly lower use of attributional feedback. The situation is the same in the classroom and
in computer-based instruction: currently, the majority of computer-based instruction programs
that include feedback utilize only performance feedback, and only a few have tried to include
both performance and motivational feedback. Strategy feedback was used in Hsieh's (2001)
study; beyond that, this author has not found literature dealing with strategy feedback in
computer-based instruction programs. This study will implement strategy feedback on the
students' concept mapping task on a networked computer system.
In addition to being implemented differently in classrooms and in computer-based
instruction, the four feedback types serve different functions in helping students learn. A study
by Croy et al. (1993) incidentally suggested different effects for two types of teacher feedback:
performance and motivational feedback. Croy et al. (1993) compared human-supplied versus
computer-supplied feedback in an experimental design. The feedback content was a diagnosis of
a student's current work process, decided jointly by the instructors of both groups; the only
difference was the method of delivery. One group received feedback through a computer-assisted
learning program, while the other group had a special meeting with the instructor in which the
feedback was delivered orally. The results showed that students in the human-supplied feedback
group performed significantly better on exams, were more responsive in class, and held more
positive attitudes toward the instructors and other students.
The researchers offered two possible explanations for these unexpected results. First,
even though feedback was delivered to both groups at the same time, students in the
human-supplied feedback group had a chance to ask for further explanation or clarification of
the diagnostic feedback during the group meeting, whereas students in the computer-supplied
group had to wait until at least the next day to get clarification from the teacher if they had
questions about the feedback. This extra explanation or clarification was essential to learners'
monitoring of their own progress, which increased students' self-efficacy and resulted in better
performance. The second explanation is related to the first: the researchers suspected that the
further explanation or clarification of the feedback during the meeting with the teacher
increased learner motivation, which in turn led to better exam performance. In other words,
even though the feedback content contained only performance feedback, the human-supplied
group benefited not only from performance feedback but also from motivational feedback during
interaction with the instructors. This additional motivational feedback might explain the better
performance of the human-supplied group. To sum up, different types of feedback serve specific
functions in instruction. More research needs to be done so that the different feedback types can
be better implemented in computer-based learning programs.
Caution for Evaluating Feedback Effectiveness
Even though the effects of feedback on learning are recognized, these effects are
contingent upon many characteristics of the student and the situation. Students' self-efficacy,
control, and anxiety, as well as the subject matter, task difficulty, test type, and the timing and
presentation of feedback, are all important variables when incorporating feedback into
instruction. Therefore, when reviewing feedback research, these variables should be taken into
account in order to get a whole picture of feedback effects.
Feedback Characteristics
This section will look at feedback research with special attention to feedback
characteristics and their effects on learning. These characteristics include the amount of
information included, timing and representation. All feedback in the studies discussed in this
section was performance feedback unless otherwise specified.
The Amount of Information Included in the Feedback
Based on the amount and type of information included in the feedback, three categories
are identified: verification feedback, elaboration feedback, and adapted feedback.
Verification Feedback
Three types of verification feedback are frequently discussed in educational research:
(a) knowledge of response, (b) knowledge of correct response, and (c) answer until correct.
Knowledge of response only informs the learner whether her answer is correct or incorrect.
Knowledge of correct response not only points out whether the answer is correct or incorrect but
also informs the learner of the correct answer (Clariana, Ross, & Morrison, 1991). Answer until
correct requires learners to keep responding to an item until they produce the correct answer.
Elaboration Feedback
Elaboration feedback provides an explanation for why the answer is correct or incorrect,
in addition to knowledge of response. Within elaboration feedback, Ross and Morrison (1993)
distinguish three more types: task-specific elaboration feedback, instruction-based elaboration
feedback, and extra-instructional elaboration. Task-specific elaboration feedback concentrates on
the current test item. The format is either a restatement of the correct answer or inclusion of
multiple-choice alternatives. Instruction-based elaboration uses text explanations from the
lesson. Extra-instructional elaboration includes examples or analogies not included in the
original instruction.
Adapted Feedback
Sales (1993) introduced the term adapted feedback in computer-based instruction:
feedback customized to users' needs rather than delivered in one fixed form. Sales examined the
computer-assisted and computer-based learning programs then available and found that
feedback, when included, was not carefully designed to maximize its effect; usually a program
contained only one fixed type of feedback with a very limited capacity to communicate with the
learner. Sales gave one example of adapted feedback: when a software product is used by pairs
of learners, the feedback may be adapted to that instructional situation by incorporating
comments that address both learners and encourage cooperation and mutual support during the
instruction.
Effectiveness of Types of Feedback with Different Amounts of Information
Generally speaking, when task difficulty is medium, knowledge of correct response is
superior to knowledge of response, which is in turn superior to no feedback at all (Pridemore &
Klein, 1995); however, what counts as a medium difficulty level was not clearly defined. Vispoel
(1998) studied the psychometric characteristics of computerized adaptive and self-adapted
vocabulary tests. He found that students completed the task in less time when feedback was
given than when it was not, and this difference was most pronounced when students' prior
knowledge was at medium to high levels.
However, Pridemore and Klein (1995) did not find a significant difference in final test
scores between an elaboration feedback group and a no-feedback group. One possible
explanation they offered is that students in the no-feedback group had nothing to rely on and
therefore had to read the text more carefully, so their posttest performance was as good as that
of the elaboration feedback group. Their major finding was that the elaboration feedback group
and the no-feedback group performed much better than the knowledge of response group. They
also demonstrated that the more elaborated the information given, the more positive the
students' overall attitudes. Relatedly, Nagata and Swisher (1995) used natural language
processing in computer-assisted language learning to design intelligent computer feedback that
provided additional rules about the nature of the errors (extra-instructional elaboration
feedback). They found that with the addition of metalinguistic rules, students' grammar
proficiency increased significantly more than in the group given only task-specific elaboration
feedback.
In addition, Clark and Dwyer (1998) found that elaboration feedback was better than
verification feedback for knowledge, comprehension, application, and analysis learning
objectives. This is consistent with the findings of a meta-analysis by Bangert-Drowns, Kulik,
Kulik, and Morgan (1991), which concluded that elaboration feedback produced better
performance than verification feedback.
However, task difficulty can also influence the effects of feedback. Waldrop, Justen, and
Adams (1990) found no significant difference between verification feedback and elaboration
feedback on student achievement when the task was very difficult and the material was very
new to learners. Students' prior knowledge also played a role in feedback effectiveness: for
students with low prior knowledge, verification feedback and elaboration feedback did not differ
significantly in their effects on learning outcomes (Waldrop et al., 1990).
Timing of Feedback
Two types of timing are most frequently investigated in feedback research: immediate
feedback and delayed feedback (Kulik & Kulik, 1988). Kulik and Kulik (1988) defined immediate
feedback as performance feedback provided to a learner or examinee as quickly as the
computer's hardware and software allow after a test item or after a whole test. They defined
delayed feedback as performance feedback provided after a specified delay interval following a
test. Furthermore, immediacy and delay are relative rather than absolute notions; they have
meaning only in contrast with each other.
Effects of Immediate and Delayed Feedback on Learning
Some research has concluded that immediate feedback is better than delayed feedback,
while other studies conclude the opposite. As mentioned before, feedback cannot be considered
separately from the task type involved. Task type is defined by the kind of learning result that
learners should achieve; examples include vocabulary learning, procedural knowledge, problem
solving, and concept learning. When task types are taken into account, the research findings
become more consistent and the seemingly contradictory findings can be resolved.
Kulik and Kulik (1988) conducted a meta-analysis of 53 studies of verbal/list learning
and feedback timing. They concluded that immediate feedback was better at helping learners in
regular classroom or programmed instruction settings. Delayed feedback was better only in
verbal/list learning when the instructional items reappeared on the later test; in this condition,
the exact questions and feedback served as part of the instruction for the later test.
Two interpretations were proposed to explain why delayed feedback was more helpful in
test-acquisition conditions. The first explanation was the delay-retention effect (DRE).
Brackbill et al. (1962) defined the delay-retention effect as the phenomenon that learners’ initial
error responses interfered proactively with the acquisition of the correct answer when the student
saw immediate feedback after responding. This hypothesis was also called the interference-preservation hypothesis in Kulhavy and Anderson (1972). The second possibility is that delayed
feedback in the task serves as a repetition that reinforces the item again for the learners.
Thus, compared to the immediate feedback group, the delayed feedback group had two
chances for processing and therefore performed better. The second hypothesis was
confirmed by Clariana, Wagner, and Lucia (2000). They found among items with differential
difficulty, retention of initial learning responses was greater for delayed feedback compared to
immediate feedback across all items, but the result was more pronounced with difficult items.
Adding to Kulik and Kulik's (1988) findings, Schroth (1992) found that for concept
learning, delayed feedback was better than immediate feedback. The explanation was that
delayed feedback allowed transfer of the concept, whereas immediate feedback did not. Lee and
Zalatimo (1990) reported similar results: when learning to solve analogy problems, students
given delayed feedback scored higher on the posttest than the immediate feedback group. King
et al. (2000) conducted a study of public speaking performance improvement using immediate
and delayed feedback, and their conclusions were consistent with previous research: immediate
intervention was more effective when automatic processing occurred, while delayed feedback was
more effective for tasks involving deliberate processing effort. Specifically, immediate feedback
was effective in modifying eye contact behavior in communication, while delayed feedback was
better at helping learners expand and lengthen planned instruction. These findings were
consistent with the delay-retention effect. It is clear that both immediate and delayed feedback
have effects, depending on instructional and assessment needs. One type of assessment in which
feedback has been constantly investigated is dynamic testing; depending on the program,
feedback provided in dynamic testing can be considered either immediate or delayed. The next
section focuses on feedback in dynamic testing.
Effects of Feedback in Dynamic Testing
According to Grigorenko and Sternberg (1998), "Dynamic testing is a collection of
testing designed to quantify not only the products or even the processes of learning but also the
potential to learn" (p. 75). To fulfill this claim, dynamic testing assesses not only end products
but also learning processes, which makes it quite different from traditional, static testing that
assesses only learned end products. Another difference between dynamic and static testing is the
role of feedback (Grigorenko & Sternberg, 1998): in traditional static testing, feedback about
performance is usually not given during the test, whereas in dynamic testing, feedback is given
during the test to help assess learning. In dynamic testing, an examiner presents a sequence of
gradually more difficult tasks; after each performance, the examiner gives the test taker
feedback and continues until the examinee either solves the problem or chooses to give up. The
basic goal of dynamic testing is to see whether and how test takers change when feedback is
given. However, there is no agreement about how much information should be included in the
feedback, and different approaches to dynamic testing currently vary in the amount of
information their feedback contains (Grigorenko & Sternberg, 1998).
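The dynamic testing procedure just described — progressively harder tasks, with feedback after each response, until the examinee succeeds on all items or gives up — can be sketched as a simple loop. All task data, the examinee stand-in, and the feedback wording are hypothetical.

```python
# Sketch of a dynamic testing session (illustrative; tasks and feedback invented).

def run_dynamic_test(tasks, respond):
    """Present gradually harder tasks, giving feedback after each response,
    until the examinee gives up (returns None) or the tasks run out."""
    log = []
    for difficulty, (prompt, solution) in enumerate(tasks, start=1):
        answer = respond(prompt)
        if answer is None:  # examinee chooses to give up
            log.append((difficulty, "gave up"))
            break
        # Unlike static testing, feedback is given *during* the test.
        feedback = "correct" if answer == solution else "incorrect, hint provided"
        log.append((difficulty, feedback))
    return log

tasks = [("2 + 2", 4), ("12 * 12", 144), ("17 * 23", 391)]
examinee = lambda prompt: {"2 + 2": 4, "12 * 12": 144}.get(prompt)  # gives up on the hardest

print(run_dynamic_test(tasks, examinee))
```

The session log captures both products (which items were solved) and process (where feedback was needed and where the examinee stopped), which is the information dynamic testing aims to quantify.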
Representation of Feedback
Representation is an important aspect of feedback research as well. Two categories of
feedback representation are frequently compared: visual and verbal. A visual form of feedback
may present information in a graphic format, while a verbal form usually contains linguistic
information coded in text or words. Before discussing the effects of visual and verbal feedback,
a brief introduction to cognitive load theory and dual coding theory is needed.
Cognitive Load Theory and Dual Coding Theory
Cognitive load theory has two assumptions. The first is that human cognition has two
parts: working memory and long-term memory. Working memory is limited in capacity and
duration, while long-term memory is unlimited. The second assumption is that schemata are
cognitive structures that allow information temporarily stored in working memory to be
transferred into long-term memory, thus reducing working memory load. It follows that when
feedback presents more information than working memory can handle, the excess information is
lost rather than used by the learner. This situation is called cognitive overload. One theory that
offers solutions for reducing cognitive load is dual coding theory.
Dual coding theory was proposed by Paivio (Paivio, 1991; Sadoski & Paivio, 1994). It
suggests that human cognition comprises two processing systems, visual and verbal: the visual
system handles graphical information and the verbal system handles linguistic information. The
two systems are separate and are activated by different kinds of information. Paivio (1991) also
suggested that words and pictures are mentally coded and decoded differently; for example,
pictures are likely to be coded both visually and verbally, whereas words are usually coded
verbally rather than visually. Following these assumptions, Paivio (1991) proposed that if
information is coded both visually and verbally, learners' chances of acquiring it are doubled.
Cognitive load theory and dual coding theory together have been widely used to explain
feedback presentation research.
Effects of Visual and Verbal Feedback
Mousavi et al. (1995) demonstrated that mixing auditory and visual representation
modes reduced cognitive load in mathematics learning. Their research also revealed that a
visual-audio presentation mode promotes deeper understanding of materials than a visual-visual
presentation mode.
Rieber et al. (1996) experimented with the effects of animated graphical feedback and
textual feedback on 41 undergraduates in a computer-based simulation program concerned with
the laws of motion. They found that when given animated graphical feedback, subjects
performed better, completed the game task in less time and were less frustrated.
Given that visual feedback is generally more successful than verbal feedback, Park and
Gittelman (1992) found that within visual displays, animated visual feedback was more effective
than static visual feedback for college students learning electronic troubleshooting skills.
Similarly, Lalley (1998) demonstrated that video feedback, a visual-verbal presentation mode,
led to better learning outcomes on computerized biology multiple-choice tests than textual
feedback; the high school students in his study also strongly preferred the video feedback.
Likewise, O'Neil et al. (2000) examined training applications of a virtual environment
simulation, choosing understanding of an F-16 aircraft's fuel system as the learning task. They
fixed most basic instructional design variables and varied only the feedback representation (e.g.,
audio versus text feedback). They showed that the audio group did better than the on-screen
pop-up text group on three measures: the transfer test, the matching test, and the knowledge
mapping test.
Computer-Based Feedback
Computers, with their interactive multimedia, are well suited to feedback delivery
(Dempster, 1997). In essence, computers combine almost all of the delivery systems mentioned
earlier: they can deliver legible written feedback, audio feedback, video feedback, and even
synchronous feedback with both teacher and student in a computer-mediated conference.
Theoretically, a computer should be able to combine the benefits of all media while eliminating
each medium's disadvantages in providing feedback. For example, Dempster (1997) noted that
computers can be programmed to give immediate feedback via a variety of feedback strategies
such as answer until correct, knowledge of response, and knowledge of correct response.
However, the success of computer-delivered feedback still depends on the program's ability to
verify the correctness of learner input and link errors to their causes.
Summary
Feedback has long been regarded as an important resource for assisting learners in the
learning process. Even though feedback's effect on learning is generally recognized as positive,
its specific effects are contingent upon many characteristics of the student and the task involved.
In order to get a whole picture of how feedback influences learning, this review has addressed
five aspects of feedback: teacher feedback type, amount of information, timing, representation,
and delivery system.
Historically, research on feedback started with teacher feedback. Four types of teacher
feedback were identified (Pintrich & Schunk, 1996). They were performance feedback,
motivational feedback, attributional feedback, and strategy feedback. Currently, most computer
programs utilize only performance feedback; only a few programs have tried to include both
performance feedback and motivational feedback. No programs include attributional feedback.
Based on the amount and type of information included in the feedback, three categories were
distinguished. They were verification feedback, elaboration feedback, and adapted feedback
(Sales, 1993). The research findings have shown that elaboration feedback and adapted feedback
are more effective than verification feedback. Verification feedback in turn is better than no
feedback.
Two types of feedback timing have been most frequently investigated: immediate and
delayed feedback. Immediate feedback is more effective for rote memory items and verbal
learning, while delayed feedback is more effective for concept learning and problem solving
(Kulik & Kulik, 1988). Immediate feedback is widely used by dynamic testing proponents, who
combine testing with instruction to measure learning potential rather than learned products as
in traditional testing. Two categories of representation have been frequently compared: the
visual form and the verbal form. Researchers have consistently found visual feedback to be more
effective than verbal-only feedback (Rieber et al., 1996). Because of cognitive load limitations
and the dual coding system in human learning, a visual-verbal presentation is more instructive
than a visual-visual or verbal-only presentation of feedback (Paivio, 1991).
Last, the delivery medium is an important aspect of feedback research as well. Among
the delivery systems available at present, the computer has the most advantages: in theory, it can
combine the advantages of different modes of representation (e.g., written feedback or
video-taped feedback).
Summary of the Literature
The purpose of this chapter has been to review the literature on collaborative problem
solving and its assessment, information searching in education and information science, and the
effects of different types of feedback in education.
Cooperative learning, in which groups of students work together to construct their own
knowledge, to solve problems, and to raise their motivation to learn, is a consistent educational
goal. Recently, collaborative learning groups have also been used in collaborative problem
solving assessment programs. The National Center for Research on Evaluation, Standards, and
Student Testing (CRESST) researchers developed a model for the measurement of collaborative
learning processes. The CRESST model consists of six skills: adaptability, coordination,
decision-making, interpersonal relations, leadership, and communication (O’Neil et al., 1997).
CRESST has also developed a problem-solving assessment model with three subelements: content understanding, problem-solving strategies, and self-regulation (O’Neil, 1999).
Concept maps have been extensively used in K-12 classrooms. Research has shown that
computerized concept mapping systems are effective for assessment of content understanding.
The measurement of problem solving strategies differs according to each strategy.
Problem solving strategies can be either domain-independent or domain-dependent. Three
examples of problem solving strategies in the literature are multiple representations, the use of
analogies, and information seeking. At CRESST, the measurement of self-regulation is done
through the use of a questionnaire. Self-regulation includes metacognition and motivation.
Metacognition further includes planning and self-checking, and motivation consists of
self-efficacy and effort.
One frequently investigated problem solving strategy, information seeking, has received
considerable attention in education. Recently, with advances in
computer networking, research in electronic information seeking as a problem solving strategy
has also been stressed. Studies have shown that a good search is not completed in one single
step, but rather through a process of interlocking steps (Lazonder, 2000). It involves
determining the information needed, choosing topics to pursue, locating sites, locating
information to increase overall domain understanding, analyzing and evaluating the information
found, and finally ending the search and returning to solving the problem. Research has shown
that people who have problems searching information in database environments also have
problems when searching on the World Wide Web. Many researchers have tried to understand
electronic information searching by observing novices and experts search. They have found that
novices have a hard time defining a proper starting point and also deciding on the procedures
necessary to complete the search. On the other hand, experts were more familiar with the World
Wide Web structure and frequently used search logic such as Boolean operators during their
search. Experts also used additional problem-solving strategies, such as integrating new
information, taking varied viewpoints on the information they found, and extracting the relevant
details. The literature strongly suggests that novice users’ foremost training need is related to
locating websites using the search logic employed by the search engine. Researchers have
advocated a strong need for Boolean logic training; however, there is limited research evidence
showing that Boolean logic training is effective for improving searching behavior
(Lazonder, 2000). In addition, a training session alone will not make the use of Boolean logic
automatic. A reminder, such as feedback during the search task, is needed before the use of
Boolean logic can become automatic.
Feedback has been regarded as an important resource to assist learners in the learning
process. Feedback effects are contingent upon many characteristics of the student and the task
involved. Five categories of feedback (teacher feedback, complexity, timing, representation, and
delivery systems) were addressed in the literature.
Historically, research on feedback started with teacher feedback. Four types of teacher
feedback were identified (Pintrich & Schunk, 1996). They were performance feedback,
motivational feedback, attributional feedback, and strategy feedback. Currently, most computer
programs utilize only performance feedback; only a few programs have tried to include both
performance feedback and motivational feedback. Attributional feedback is rarely seen. Based
on the amount and type of information included, feedback can be categorized into three types:
verification feedback, elaboration feedback, and adapted feedback (Sales, 1993). The research
findings have shown that elaboration feedback and adapted feedback are better than verification
feedback. Verification feedback, however, is better than no feedback at all.
Two types of feedback timing were frequently investigated: immediate and delayed
(Kulik & Kulik, 1988). Immediate feedback is more effective when dealing with rote memory
items and verbal learning, while delayed feedback is more effective when dealing with concept
learning or problem solving. Two categories of representation were frequently compared: visual
and verbal forms. Researchers consistently found visual feedback to be more effective than
verbal-only feedback (Rieber et al., 1996). Because of cognitive load limitations, the
split-attention effect, and the dual coding system in human learning, visual-verbal feedback is
more effective than a visual-visual or verbal-only presentation (Paivio, 1991).
Lastly, the delivery medium is an important aspect of feedback research. Computers have
the most advantages because, in theory, they can combine the advantages of various modes
(e.g., written feedback or video feedback).
According to the literature reviewed above, it is clear that when the task is difficult and
when students have low prior knowledge, providing more task-specific feedback to the
participants should improve their learning results. Hsieh (2001) provided feedback on task
progress, using a simulated Web space for information seeking and feedback access. She
demonstrated that information seeking and feedback are positively related to students’ outcome
performance. However, even though her feedback provided participants with a direction as to
“what” area to improve, it did not provide practical tips on “how” to improve the map.
This proposal argues that by providing database/Internet search tips, students will become
more effective and efficient in locating the relationships among the concepts, in turn
improving the overall quality of their map constructions. Therefore, this researcher will modify
Hsieh’s (2001) original task by providing examples of one specific type of Internet/database
search tip (e.g., Boolean operators) in addition to information on which areas the participants
should put more effort into. Because the participants are high school students, they are assumed
to lack the prerequisite knowledge to do the task. By being provided with search tips that have
been demonstrated to be effective, participants in this study should perform better than Hsieh’s
(2001) participants in general. In addition, increased use of Boolean operators during the
simulated Web search is expected. With the search tips, students should be able to identify more
accurate and relevant relationships among the concepts and should be able to construct a better
map through efficient information seeking and through self-monitoring their performance via
feedback.
By including Hsieh’s (2001) adapted feedback and adding task-specific feedback on
search strategies, this study will investigate student collaborative problem solving and team
processes on a computer-based knowledge mapping group task with special attention to the
effectiveness of information seeking/searching. The effects of two levels of feedback Hsieh
compared were knowledge of response feedback and adapted knowledge of response feedback.
She demonstrated that the group with adapted knowledge of response feedback outperformed the
group with knowledge of response feedback. However, searching in her study was unexpectedly
negatively related to group concept map performance. This study will compare two levels of
adapted knowledge of response feedback: one provides general task-progress feedback (how
much progress has been made), and the other provides more specific task feedback (how to
better search the simulated Web).
CHAPTER III
METHODOLOGY
Research Hypotheses
Hypothesis 1: Students with task-specific adapted knowledge of response feedback will be more
likely to use decision-making and leadership messages in their communication than students with
adapted knowledge of response feedback.
Hypothesis 2: Students will perform better on the concept mapping task when receiving
task-specific adapted knowledge of response feedback than when receiving adapted knowledge
of response feedback.
Hypothesis 3: Searching using Boolean operators will have positive effects on problem solving
strategies and group outcome performance on the concept map.
(a) The more Boolean operators used, the higher the group outcome score on the concept
maps.
(b) The task-specific adapted knowledge of response feedback group will use more
Boolean search operators than the adapted knowledge of response feedback group.
Research Design
The design is experimental. Students will be randomly assigned to two-person groups,
with one member as searcher and the other as leader. Each group will then be randomly assigned
to one of two feedback treatments (task-specific adapted knowledge of response feedback versus
adapted knowledge of response feedback). The dissertation will consist of a pilot study and a
main study.
Pilot Study
The rationale for the pilot study is that although Hsieh (2001) demonstrated that her
participants were able to successfully complete the task with the predefined messages, some
scales had unexpectedly low reliability. Table 3 shows the reliability of the teamwork skills
questionnaire items and scales for Hsieh’s study.
Table 3
Item-Total Correlations and Alpha Reliability of the Teamwork Process (N = 110)

Corrected item-total correlations, listed in message order within each scale:
Adaptability (messages 1, 2, 18, 32, 33, 34): .28, .35, .23, .09, .14, .33
Coordination (messages 15, 20, 21, 22, 23, 24): -.11, -.27, .11, -.04, .09, -.02
Decision Making (messages 8, 9, 10, 11, 13, 14): .00, .04, -.01, .03, .33, .39
Interpersonal (messages 26, 27, 28, 29, 30, 31): -.05, .34, .21, .21, .17, .18
Leadership (messages 3, 4, 5, 6, 7, 17): .34, -.02, .00, .16, -.20, -.19
Communication (messages 12, 16, 19, 25, 35, 36): -.17, -.11, -.06, -.13, .06, .08

α if item deleted (in column order): .43, .40, .28, .40, .44, .43, .31, -.14, .37, -.38, -.22, -.30,
-.23, .31, .25, .38, .25, -.06, .04, .72, .10, .09, .11, .14, .13, -.11, -.01, -.19, -.03, .03, .06, .05,
-.02, .04, -.01, -.30, -.11, -.23, .24, .18, -.02, -.04
As may be seen in Table 3, the reliability ratings ranged from -.23 to .43. Using the rule
of thumb that α ≥ .7 is acceptable, the reliability of her scales is not acceptable. She raised the
reliability by deleting some of the items after examining the “alpha if item deleted” values. The
final items used in her study analysis are shown in Table 4.
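The scale statistics discussed above (corrected item-total correlations, Cronbach’s alpha, and alpha if item deleted) follow standard formulas and can be sketched in a short script. This is a generic illustration, not Hsieh’s (2001) actual analysis; the function names and the respondents-by-items array shape are assumptions.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn."""
    items = np.asarray(items, dtype=float)
    return np.array([
        cronbach_alpha(np.delete(items, j, axis=1))
        for j in range(items.shape[1])
    ])

# Example: three perfectly consistent items yield alpha = 1.0
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
```

Examining `alpha_if_deleted` values, as Hsieh did, identifies items whose removal raises the scale’s overall reliability.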
Table 4
Final Item-Total Correlations and Alpha Reliability of the Teamwork Process (N = 110)

Scale             Messages         α
Adaptability      2, 34            .53
Coordination      23, 24           .63
Decision Making   13, 14           .74
Interpersonal     28, 29, 30, 31   .74
Leadership        3, 6             .24
Communication     35, 36           .22
Using the rule of thumb that α ≥ .7 is acceptable, only the reliabilities of decision making and
interpersonal relations are acceptable. Even though she revised the scales based on the new data
set, there was no cross-validation. In summary, this writer’s pilot study will have two
purposes: (a) testing the feasibility of the new programming for the new feedback condition; and
(b) testing whether, after deleting those unreliable items and adding new ones, the modified
predefined messages will still allow students to successfully complete the task. In addition, a
statistical approach to item reduction will also be explored through the use of exploratory factor
analysis, following the logic outlined by Weng (1999).
In the main study, the effects of two levels of feedback, i.e., task specific adapted
knowledge of response feedback and adapted knowledge of response feedback will be
investigated.
Method of the Pilot Study
Participants
Following USC Human Subject Approval, data in the pilot study will be collected in
October 2001. Twelve English-speaking adults (6 females and 6 males), aged 20 to 30, will
participate in this pilot study. All of the participants will be graduate students at the University
of Southern California. Each participant will be paired with a partner, i.e., two persons per
group, for a total of 6 groups. Each participant will have a specific role in the task, either leader
or searcher. Leaders construct the knowledge maps, whereas searchers conduct information
seeking in the Web environment to help the leaders construct the map through the predefined
messages. The pilot study will be conducted in Taper Hall 216. The author has the
approval of the current director of the language center at USC to use its computer equipment
and computer room for the study.
Networked Knowledge Mapping System
Table 5 lists the specification for the networked knowledge mapping system that will be
used in this dissertation.
Table 5
Domain Specifications Embedded in the Software (adapted from Hsieh, 2001)

Scenario: Create a knowledge map on environmental science by exchanging messages in a
collaborative environment and by searching relevant information from a simulated World Wide
Web environment.

Participants: Student team (two members)
  Leader: the one who does the knowledge mapping
  Searcher: the one who accesses the simulated World Wide Web environment to find relevant
  information and ask for feedback

Knowledge map terms (Nodes): Predefined: 18 important ideas identified by content experts:
atmosphere, bacteria, carbon dioxide, climate, consumer, decomposition, evaporation, food
chain, greenhouse gases, nutrients, oceans, oxygen, photosynthesis, producer, respiration,
sunlight, waste, and water cycle

Knowledge map terms (Links): Predefined: 7 important relationships identified by content
experts: causes, influences, part of, produces, requires, used for, and uses

Simulated World Wide Web environment: Contains over 200 Web pages with over 500 images
and diagrams about environmental science and other topic areas

Bookmarking applet: A function in the program that provides a way for the searcher to
bookmark Web pages he/she finds during the search

Training: All students will go through the same training session, which will include the
following elements:
• how to construct the map
• how to search
• how to communicate with the other group member

Feedback: Feedback is based on comparing the group’s knowledge map performance to that of
the experts’ maps

Two Levels of Feedback
  Adapted knowledge of response feedback (task general): Includes knowledge of response
  feedback and messages about how much improvement students have made in the current map
  compared with the previous map, but does not contain search strategies for electronic
  information seeking. Representation: graphics plus text
  Task-specific adapted knowledge of response feedback: Includes knowledge of response
  feedback, messages about how much improvement students have made in the current map
  compared with the previous map, and useful search strategies for electronic information
  seeking. Representation: graphics plus text

Timing of feedback: Both types of feedback used in this study can be either immediate or
delayed because feedback access is controlled by the searchers

Type of Learning: Collaborative problem solving

Problem solving measures
  Knowledge map: Content understanding and structure: (a) semantic content score; (b)
  organizational structure score; (c) the number of concepts; and (d) the number of links
  Relevant information found: Web pages sent to bookmarks are scored on a four-point scale
  Information seeking: Browsing, searching, focused browsing, and bookmarking
  Feedback: The number of times students request feedback for their knowledge map
  Self-regulation: Planning, self-checking, self-efficacy, and effort

Team processes measures: Adaptability, coordination, decision making, interpersonal,
leadership, communication
The Java system to be used in this study will be similar to Hsieh’s (2001) system, which
was based on Schacter et al.’s (1999) study of individual problem solving and Chung et al.’s
(1999) study of collaborative assessment. For the purpose of this study, a Java programmer and
the author will work together to modify the original feedback to include task-specific information.
Leaders in each group can add concepts to the knowledge map and link concepts to other
concepts via concept selection and via link selection on the menu bar respectively. In addition,
they can move and erase concepts as well as links. Searchers in each group can both seek
information from the simulated Web environment and can also access feedback about their group
map. The leader cannot search the simulated Web environment and the searcher cannot construct
the concept map. Each member in the group must collaborate to successfully perform the task.
User interface. An illustration of user interfaces in this system is shown in Figure 3.
Figure 3. User interfaces.
The display is partitioned into three major sections. The lower left part of the screen
displays the messages in the order sent by members; for example, as may be seen in Figure 3,
one message reads “Which concept should we focus on first?” The lower right part of the screen
consists of numbered message buttons (1, 2, 3, 4, 5, 6, and so on, as may be seen in Figure 3).
All predefined messages will be listed on handouts given to every student. When a participant
clicks a button, the corresponding message is sent to the partner’s computer. The third part, on
the top left-hand side of the screen, is the area where students construct the map, as seen in
Figure 3.
Message handouts. Participants will receive a paper handout that lists the 14 messages,
grouped by common functions. In addition, the list of concepts and links will be provided on this
handout as well. The handout is shown in Table 6.
Table 6: Message Handout (Pilot Study)
Concepts and Links…
Add concepts & Links
1. What about [C]?
2. Let’s [add/erase] [C].
3. Let’s do [C]-[L1]-[C] instead of [L2].
Information from Web
4. Feedback shows we should work more on [C].
5. Feedback shows [C] in good shape. Don’t waste time
on it.
Keeping Track of Progress…
6. We only have X minutes left.
7. How’s your searching?
Messages about the Group…
8. You’re doing great – keep going.
9. Good idea.
10. Yeah! We have made some progress.
11. We’re doing better and better.
Quick Responses…
12. Any ideas?
13. O.K.
14. No.
Concepts…[C]
atmosphere
bacteria
carbon dioxide
climate
consumer
decomposition
evaporation
food chain
greenhouse gases
nutrients
oceans
oxygen
photosynthesis
producer
respiration
sunlight
waste
water cycle
Links…[L]
causes
influences
part of
produces
requires
used for
uses
As may be seen in Table 6, messages are categorized for students’ easy understanding.
The categories include (a) concepts and links, (b) information from the Web, (c) keeping track
of progress, (d) messages about the group, and (e) quick responses. Table 7 shows the predefined
messages categorized according to the collaborative teamwork message categories.
Table 7: Predefined Messages
Messages
Adaptability: Monitor the source and nature of problem through an awareness of
team activities and factors bearing on the task
Any ideas?
What about [C]?
Coordination: Process by which team resources, activities, and responses are
organized to ensure that tasks are integrated, synchronized, and completed within
established temporal constraints
We only have X minutes left.
How’s your searching?
Decision-making: the ability to integrate information, use logical and sound
judgment, identify possible alternatives, select the best solution, and evaluate the
consequences
Feedback shows we should work more on [C].
Feedback shows [C] is in good shape. Don’t waste time on it.
Interpersonal: the ability to improve the quality of team member interactions
through the resolution of team members’ dissent, or the use of cooperative
behavior
You’re doing great – keep going.
Good idea.
Yeah! We have made some progress.
We’re doing better and better.
Leadership: the ability to direct and coordinate the activities of other team
members, assess team performance, assign tasks, plan and organize, and establish a
positive atmosphere
Let’s do [C]-[L1]-[C] instead of [L2].
Let’s [add/erase] [C].
Communication: the process by which information is clearly and accurately
exchanged between two or more team members in the prescribed manner and by
using proper terminology, and the ability to clarify or acknowledge the receipt of
information.
O.K.
No.
Simulated World Wide Web Environment
The simulated World Wide Web will run on a PC under Windows NT™. The same
World Wide Web environment used in Hsieh’s (2001) study will be used in this study as well.
The Web environment contains over 200 Web pages with over 500 images and diagrams about
environmental science. Ninety percent of the information was downloaded from the Internet and
ten percent of the information was adapted from science textbooks and other science unit
materials. In addition, Schacter et al. (1999) and Hsieh (2001) also provided a bookmarking
applet to allow individual students to send Web pages they found during their searching process
directly to related concepts in the knowledge map.
Bookmarking applet. This study will use Hsieh’s (2001) version of the bookmarking
applet. In Hsieh’s (2001) study, only searchers could access the database, so it is reasonable that
only the searchers in each group, not the leaders, have the right to use the bookmarking applet.
However, when the searchers send related bookmarks to the concepts, the leaders can view these
results by selecting the corresponding concepts on the knowledge map.
Feedback. During the information seeking process, every searcher will be allowed to
access feedback to learn how well the group is performing. Feedback will be divided into two
categories: adapted knowledge of response feedback and task-specific adapted knowledge of
response feedback. Both types of feedback are based on comparing students’ knowledge map
performance to that of the experts’ maps. Adapted knowledge of response feedback in the
present study is the same as Hsieh’s (2001) feedback, which tells students that each concept on
the map needs a little, some, or a lot of improvement, plus what areas of their knowledge map to
improve so their results will be better next time. For example, as may be seen in Figure 4, food
chain needs a lot of improvement according to previous feedback. If current feedback shows
food chain only needs some improvement now, the computer will show a message like “You
have improved ‘food chain’ from the ‘a lot of improvement’ category to the ‘some improvement’
category”. Then a message such as “It is most useful to investigate information for the ‘A lot’
and ‘Some’ categories rather than the ‘A little’ category. For example, improve ‘atmosphere’ or
‘climate’ first rather than ‘evaporation’” will show at the bottom. The purpose
of adapted knowledge of response feedback is to help students by pointing out concepts that need
modification. On the other hand, task-specific adapted knowledge of response feedback includes
the information contained in adapted knowledge of response feedback plus Boolean search
strategy tips.
The author predicts that task-specific adapted knowledge of response feedback will
benefit students’ knowledge map construction more than adapted knowledge of response
feedback. Figure 4 shows an example of the feedback provided in Hsieh’s (2001) study.
Figure 4. Feedback in Hsieh’s (2001) study

Your map has been scored against an expert’s map in environmental science. The feedback tells you:
• How much you need to improve each concept in your map (i.e., A lot, Some, A little)
Use this feedback to help you search to improve your map.

A Lot: atmosphere, bacteria, decomposition, oxygen, waste, respiration, nutrients
Some: climate, carbon dioxide, food chain, photosynthesis, sunlight, water cycle, oceans, consumer, producer
A Little: evaporation, greenhouse gases

Improvement: You have improved “food chain” from needing “A lot of improvement” to the “Some
improvement” category.
Strategy: It is most useful to investigate information for the “A lot” and “Some” categories rather
than the “A little” category. For example, improve “atmosphere” or “climate” first rather than
“evaporation”.
As seen in Figure 4, her feedback provided participants with a direction as to “what” area
to improve for search and task performance, but it did not provide practical tips on “how” to
improve the performance. Figure 5 shows an example of the feedback provided in the present
study.
Figure 5. Feedback in the present study (search tips adapted from the FirstSearch help guide at http://newfirstsearch.oclc.org/)
Your map has been scored against an expert’s map in environmental science. The feedback tells you:
• How much you need to improve each concept in your map (i.e., A lot, Some, A little)
Use this feedback to help you search to improve your map.

A Lot: atmosphere, bacteria, decomposition, oxygen, waste, respiration, nutrients
Some: climate, carbon dioxide, food chain, photosynthesis, sunlight, water cycle, oceans
A Little: evaporation, greenhouse gases

Improvement: You have improved “food chain” from needing “A lot of improvement” to the “Some
improvement” category.

General Strategy: We have scored your concept map against experts’ maps to give you feedback.
Concepts were categorized into 3 categories: concepts that need a lot of improvement (there were
no matches with the experts’ maps), concepts that need some improvement (at least 20% of links
matched the experts’ maps), and concepts that need a little improvement (all others). It is most
useful to investigate information for the “A lot” and “Some” categories rather than the “A little”
category. For example, improve “atmosphere” or “climate” first rather than “evaporation”.

Search strategy 1: Use the Boolean operator AND to combine search terms when you need to
narrow a search. AND retrieves only records that contain all search terms. Use this operator to
narrow or limit a search.

If you type:                                It searches for:
oxygen AND atmosphere                       Only records containing both oxygen and atmosphere
oxygen AND atmosphere AND carbon dioxide    Only records containing all three search terms
                                            (oxygen, atmosphere, and carbon dioxide)

Search strategy 2: If searching with the Boolean operator AND finds no relevant pages, use the
Boolean operator OR to retrieve all records that contain one or both of the search terms. Use this
operator to expand a search.

If you type:                                It searches for:
oxygen OR atmosphere                        Records containing oxygen, records containing
                                            atmosphere, and records containing both
oxygen OR atmosphere OR carbon dioxide      Records containing oxygen, atmosphere, or carbon
                                            dioxide, and records containing any combination of
                                            two or all three search terms
As seen in Figure 5, the feedback provides participants with a direction as to “what” area
to improve for search and task performance, and also with practical tips on “how” to improve
the performance.
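As an illustration only, the AND/OR semantics in the search tips above can be sketched as a simple filter over page text. The page contents and function names here are hypothetical; this is not the actual search engine of the simulated Web environment.

```python
def matches_all(text, terms):
    """AND: the page must contain every search term (narrows a search)."""
    text = text.lower()
    return all(t.lower() in text for t in terms)

def matches_any(text, terms):
    """OR: the page may contain any of the search terms (expands a search)."""
    text = text.lower()
    return any(t.lower() in text for t in terms)

# Hypothetical page texts standing in for the simulated Web environment.
pages = {
    "page1": "Oxygen enters the atmosphere through photosynthesis.",
    "page2": "The atmosphere traps heat from sunlight.",
    "page3": "Bacteria aid decomposition in the soil.",
}

and_hits = [p for p, text in pages.items()
            if matches_all(text, ["oxygen", "atmosphere"])]
or_hits = [p for p, text in pages.items()
           if matches_any(text, ["oxygen", "atmosphere"])]
# AND returns only page1; OR returns page1 and page2.
```

As in the feedback tips, AND retrieves only records containing every term, while OR retrieves records containing any term.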
Measures
The measures in this study are adapted from Schacter et al. (1999), Chung et al. (1999),
and Hsieh (2001).
Group Outcome Measures
Group outcome measures will be computed by comparing a group’s knowledge map to
the maps of a set of four experts (Schacter et al., 1999). There are four group outcome measures
used in this study: (a) semantic content score; (b) organizational structure score; (c) the number
of concepts; and (d) the number of links. All of these measures were developed by Herl (1995).
The following describes how each outcome is scored.
First, the semantic score is based on the semantic links in the experts’ knowledge maps
and is calculated by categorized map scoring. Under this method, all links are categorized into
causal and set classifications; for example, “causes” and “influences” are classified in the causal
category. Every proposition in a map is compared against all propositions in the experts’ maps.
The average score across all experts is the semantic score of the map.
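The categorized-scoring idea can be sketched as follows. This is a hedged reading of the description above, not Herl’s (1995) published algorithm: the link-to-category assignments beyond “causes”/“influences”, and the rule that a proposition matches when it connects the same concept pair with a link of the same category, are both assumptions.

```python
# Assumed link-category assignments; only "causes"/"influences" = causal
# is stated in the text, the rest are illustrative guesses.
LINK_CATEGORY = {
    "causes": "causal", "influences": "causal", "produces": "causal",
    "requires": "causal",
    "part of": "set", "used for": "set", "uses": "set",
}

def categorize(prop):
    """Reduce a (concept, link, concept) proposition to its link category."""
    c1, link, c2 = prop
    return (c1, LINK_CATEGORY[link], c2)

def semantic_score(group_map, expert_maps):
    """Average, over experts, of the number of group propositions whose
    (concept, link-category, concept) triple appears in that expert's map."""
    group = {categorize(p) for p in group_map}
    per_expert = [len(group & {categorize(p) for p in e}) for e in expert_maps]
    return sum(per_expert) / len(expert_maps)

group_map = [("sunlight", "causes", "photosynthesis")]
experts = [
    [("sunlight", "influences", "photosynthesis")],  # causal link: matches
    [("sunlight", "part of", "photosynthesis")],     # set link: no match
]
# semantic_score(group_map, experts) -> 0.5 (matches one of two experts)
```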
Second, the organizational structure score evaluates the similarity between a group’s and
the experts’ knowledge maps. A metric measuring the degree of similarity between
neighborhoods of terms in networks, C, is used to calculate this score (Goldsmith, Johnson, &
Acton, 1991). C ranges from 0 to 1. A score of 0 means that there is no similarity between a
group’s knowledge map and an expert’s knowledge map; in contrast, a score of 1 means a
perfect structural match between the group’s and an expert’s knowledge map.
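A neighborhood-similarity index in the spirit of C can be sketched as below. This is an assumption for illustration, averaging the Jaccard overlap of each concept’s neighborhood in the two maps, not Goldsmith, Johnson, and Acton’s (1991) exact formula.

```python
def closeness(map_a, map_b):
    """Average neighborhood overlap between two maps given as sets of
    undirected (concept, concept) edges. Returns a value in [0, 1]:
    0 = no shared structure, 1 = identical neighborhoods."""
    def neighbors(edges):
        nbrs = {}
        for a, b in edges:
            nbrs.setdefault(a, set()).add(b)
            nbrs.setdefault(b, set()).add(a)
        return nbrs

    na, nb = neighbors(map_a), neighbors(map_b)
    concepts = set(na) | set(nb)
    sims = []
    for c in concepts:
        sa, sb = na.get(c, set()), nb.get(c, set())
        union = sa | sb
        sims.append(len(sa & sb) / len(union) if union else 1.0)
    return sum(sims) / len(sims)

m1 = {("oxygen", "atmosphere"), ("sunlight", "photosynthesis")}
m2 = {("bacteria", "decomposition")}
# closeness(m1, m1) -> 1.0 (perfect structural match)
# closeness(m1, m2) -> 0.0 (no shared structure)
```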
Finally, the remaining measures are counts of how many concepts, links, and links per
term are used on a knowledge map.
Individual and Group-Level Teamwork Process Measures
These two measures will be used to evaluate the engagement of the individual and the
whole group in each of the team processes (i.e., adaptability, coordination, decision-making,
interpersonal, leadership, and overall communication). Individual-level team process measures
will be computed by the total number of messages sent from an individual in each teamwork
process category, whereas group-level teamwork process measures will be counted by adding the
number of messages each member in a group sent from each teamwork process category. That
is, if a group leader A sends 7 messages from the adaptability category, the group leader A’s
individual-level adaptability score will be 7. In addition, if both the leader and the searcher in a
group each send 7 messages from the adaptability category, the group-level adaptability score
will be 14.
This method will be used for all team process categories except communication.
Communication will be measured by adding the total number of messages sent in all team
process categories.
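The counting rules above can be sketched from a message log. The log format is an assumption; the message-to-category assignments follow Tables 6 and 7, and treating communication as the total message count follows the rule just described.

```python
from collections import Counter

# Category assignments per Table 7, keyed by the pilot-handout message
# numbers in Table 6.
CATEGORY = {
    1: "adaptability", 12: "adaptability",
    6: "coordination", 7: "coordination",
    4: "decision making", 5: "decision making",
    8: "interpersonal", 9: "interpersonal",
    10: "interpersonal", 11: "interpersonal",
    2: "leadership", 3: "leadership",
    13: "communication", 14: "communication",
}

def process_scores(sent_messages):
    """sent_messages: list of (sender, message_number) tuples.
    Returns individual-level and group-level counts per category.
    Group-level communication is the total number of messages sent,
    overriding the per-category tally for messages 13 and 14."""
    individual = Counter()
    group = Counter()
    for sender, num in sent_messages:
        cat = CATEGORY[num]
        individual[(sender, cat)] += 1
        group[cat] += 1
    group["communication"] = len(sent_messages)
    return individual, group

# The worked example from the text: leader and searcher each send 7
# adaptability messages (message 1, "What about [C]?").
log = [("leader", 1)] * 7 + [("searcher", 1)] * 7
individual, group = process_scores(log)
# leader's individual-level adaptability score is 7; group-level is 14.
```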
Relevant Information Found Measures
Relevant information found will be measured by students' bookmarks of Web pages.
Schacter et al. (1999) scored each Web page by its relevance to each concept. A score of
0 indicated that the Web page did not contain any information about the assigned concept.
A score of 1 meant that the concept appeared but was not explicitly defined in the Web page. A
score of 2 meant that the concept was explicitly defined and a brief definition of the concept was
provided in the Web page. Finally, a score of 3 meant that the Web page provided not only the
definition of the concept but also information about how the concept related to other
concepts in the domain of environmental science. Two judges rated each Web page,
so each concept had two scores, resulting in 36 scores (18 concepts) per Web page. The interrater
reliability scores for all Web pages for each concept were calculated by Schacter et al. (1999)
using Pearson product-moment correlation coefficients. Rater agreement for scores on each of
the 18 concepts ranged from r = .74 to .89.
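The interrater statistic used above (the Pearson product-moment correlation) can be computed directly from two raters' score lists; a self-contained sketch, with invented 0-3 relevance scores for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two judges' 0-3 relevance ratings for the same set of Web pages (invented data).
judge1 = [0, 1, 2, 3, 2, 1, 0, 3]
judge2 = [0, 1, 3, 3, 2, 0, 1, 3]
r = pearson_r(judge1, judge2)
assert 0.74 <= r <= 0.89  # falls in the agreement range reported above
```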
Information Seeking and Feedback Behavior Measures
Information seeking and feedback behavior measures consist of four measures: (a)
browsing; (b) focused browsing; (c) searching; and (d) feedback requesting.
Browsing will be measured by how many times the searchers select Web pages from
the hypertext directory or glossary, or click on any hypertext link within the Web pages.
Focused browsing will measure the degree to which searchers seek specific information
about a concept. Since there are 18 concepts in the study, scores will range from 0 to 18; if a
group conducts focused browsing on 7 concepts, its focused browsing score will be 7. The more
concepts the searchers focus on in their browsing, the higher the score the group will earn.
For searching, one point will be awarded for simple searching. A group will be awarded
an additional point if the search conducted is one involving search strategies such as Boolean
searching, truncation, adjacency searching, exact word searching, or entering multiple keywords
into a search string.
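As an illustration, the search-scoring rule above might be automated as follows; the pattern checks are simplified assumptions about how each strategy would be detected, not the study's actual instrument:

```python
import re

def score_search(query):
    """1 point for any search; +1 if the query shows an advanced strategy:
    Boolean operators, truncation, adjacency/exact-phrase searching, or
    multiple keywords in one search string."""
    q = query.strip()
    if not q:
        return 0
    advanced = (
        re.search(r"\b(AND|OR|NOT)\b", q) is not None  # Boolean searching
        or "*" in q                                     # truncation
        or '"' in q                                     # exact word / adjacency
        or len(q.split()) > 1                           # multiple keywords
    )
    return 2 if advanced else 1

assert score_search("oxygen") == 1                 # simple search
assert score_search("oxygen AND atmosphere") == 2  # Boolean strategy
assert score_search('"carbon dioxide"') == 2       # exact-phrase strategy
```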
Feedback requesting will be measured based on how many times the group requested
feedback about their knowledge map. In Hsieh’s (2001) study, feedback requesting was
positively related to the knowledge map score (r = .42, p < .01). That is, the more the students
accessed feedback, the better the map they constructed. Table 6 presents the relationships
between the group outcome and the information seeking and feedback measures in Hsieh's
(2001) study.
Table 6
Correlations of Outcomes and Problem Solving Strategies (Hsieh, 2001)

                                Group Outcome (Knowledge Map Score)
Problem Solving    Adapted Knowledge of       Knowledge of               Total
Variables          Response Group (N = 29)    Response Group (N = 29)    (N = 58)
Browsing                .27                       -.13                      .13
Searching               .14                       -.40                     -.08
Feedback                .45*                       .41*                     .42**

* p < .05 (2-tailed). ** p < .01 (2-tailed).
Data Analysis
Descriptive statistics will be used to estimate all the measures (e.g., means, standard
deviations).
The Main Study
Method
Participants
Participants will be 120 Chinese-American junior high and high school students (60
teams), from 15 to 18 years of age. All of the participants were born and raised in the United
States, and all speak English as their first language.
Measures
Measures in the main study will be the same as those in the pilot study, except that the
predefined messages will be modified according to the results of the pilot study.
Procedure
Following USC Human Subjects approval, data will be collected in winter 2001. One
hundred and twenty students will be randomly distributed into 60 groups, and each participant
will be randomly assigned a role (leader or searcher) within the group. Because the present study
evaluates the effects of type of feedback on map performance, the 60 groups will be randomly
assigned to receive either adapted knowledge of response feedback or task-specific adapted
knowledge of response feedback. To maintain experimental control, only 5 groups will
participate in each research session, with each group randomly assigned to treatment.
Therefore, 12 days are expected to complete the data collection.
The estimated time for the whole experiment is 85 minutes. First, there will be a
collaborative training session of about 20 minutes for both leaders and searchers on how to use
the computer system, how to construct the knowledge map collaboratively, and how to access
feedback from the Web environment. Everyone will receive the same training. After that, there
will be a 15-minute session on how to conduct Boolean search strategies. Each group will then
have 50 minutes to complete as much of the task as it can.
Collaborative Training Task
The training sessions for the task-specific adapted knowledge of response feedback group
and for the adapted knowledge of response feedback group are identical.
Group Scenario. Both leaders and searchers will be trained how to communicate with
each other by predefined messages. The researcher will also explain the responsibilities for each
role.
Leaders. Leaders will be trained how to add (erase) concepts and create (delete) links
between concepts.
Searchers. Participants in the role of searcher will receive hands-on training in using the
Netscape Web browser, searching Web pages for related information, bookmarking relevant Web
pages, and accessing feedback about their group map.
Boolean Search Strategies Training Session
The instructional design chosen for the Boolean search strategies training is the model
developed by O'Neil and Duesbury (1996). Table 7 shows the five essential components
of the model.
of the model.
Table 7
Cognitive Instructional Strategy Object (adapted from O'Neil and Duesbury, 1996)

The Strategy
– Communicate the metacognitive or cognitive strategy
– Communicate a description of the context
– Confirm or teach subordinate skills
– Describe and demonstrate the cognitive strategy
– Provide practice and feedback
According to this model, this author designed the following training on Boolean search
strategies. Table 8 shows the components of Boolean search strategies training.
Table 8
Components of Boolean Search Strategies Training

The Strategy
– Communicate the search strategies principle
– Communicate a description of the context: using the strategies to improve the
construction of the concept map
– Confirm or teach subordinate skills: checking the understanding of the Boolean
operators
– Describe and demonstrate the search task strategies: the trainer demonstrates their
use with a real topic and shows the real search results
– Provide practice and feedback: students will be given example topics to search using
Boolean operators and will receive feedback
First, students will be given a handout on Boolean logic with Venn diagrams. The
explanation handout is reproduced below. Five minutes will be devoted to reading it and to
discussion of the Boolean operators at the blackboard.
Explanation handout: describe and demonstrate search task strategies
What is the Boolean operator “AND”? What does it do?

The Boolean operator AND is used to combine search terms when you need to narrow a search.
AND retrieves only records that contain all of the search terms. Use this operator to narrow or
limit a search.

If you type:                                 It searches for:
oxygen AND atmosphere                        Only records containing both oxygen and atmosphere
oxygen AND atmosphere AND carbon dioxide     Only records containing all three search terms:
                                             oxygen, atmosphere, and carbon dioxide

If your search results end up with no relevant information found, then it is time to consider
using OR.
What is the Boolean operator “OR”? What does it do?

The Boolean operator OR is used to retrieve all records that contain one or both of the search
terms. Use this operator to expand a search.

If you type:                                 It searches for:
oxygen OR atmosphere                         Records containing oxygen, records containing
                                             atmosphere, and records containing both oxygen
                                             and atmosphere
oxygen OR atmosphere OR carbon dioxide       Records containing oxygen, records containing
                                             atmosphere, records containing carbon dioxide, and
                                             records containing any combination of two or all
                                             three search terms
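The AND/OR behavior described in the handout corresponds to set intersection and union over the records that match each term; a toy retrieval sketch, with an invented record collection for illustration:

```python
# Toy collection of records (invented) to demonstrate Boolean retrieval.
records = {
    1: "oxygen levels in the atmosphere",
    2: "oxygen in ocean water",
    3: "carbon dioxide and the atmosphere",
    4: "oxygen and carbon dioxide in the atmosphere",
}

def matching(term):
    """Set of record ids whose text contains the term."""
    return {rid for rid, text in records.items() if term in text}

# AND narrows a search: only records containing both terms (intersection).
both = matching("oxygen") & matching("atmosphere")
# OR expands a search: records containing either term (union).
either = matching("oxygen") | matching("atmosphere")

assert both == {1, 4}
assert either == {1, 2, 3, 4}
```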
Data Analysis
In general, descriptive statistics (e.g., means, standard deviations) will be used to estimate
all measures. In addition, MANOVA will be used to examine the relationships between the
outcome and the process variables (i.e., teamwork process and information seeking process
variables) on the concept maps. The data analysis for each hypothesis is detailed below.
Hypothesis 1: Students with task-specific adapted knowledge of response feedback will be more
likely to use decision-making and leadership messages in their communication than students with
adapted knowledge of response feedback.
Descriptive statistics (e.g., means, standard deviations) will be used to estimate all
collaborative processes such as decision-making and leadership messages. In addition,
MANOVA will be used to examine the relationships between feedback and collaborative
processes (e.g., decision-making and leadership). If the MANOVA is significant, follow-up
univariate ANOVAs will be run.
Hypothesis 2: Students will perform better in the concept mapping task by receiving
task-specific adapted knowledge of response feedback than by receiving adapted knowledge of
response feedback.
Descriptive statistics (e.g., means, standard deviations) will be used to estimate group
outcome measures. A t-test will be used to test the significance of the difference between the
two groups' scores on the concept map.
Hypothesis 3: Searching using Boolean operators will have positive effects on problem solving
strategies and group outcome performance on the concept map.
(a) The more Boolean operators used, the higher the group outcome score on the concept
maps.
(b) The task-specific adapted knowledge of response feedback group will use more
Boolean search operators than the adapted knowledge of response feedback group.
Descriptive statistics (e.g., means, standard deviations) will be used to estimate measures
of search strategies using Boolean operators. A Pearson product-moment correlation will be
used to estimate the relationship between the number of Boolean operators used and group
outcome performance on the concept map. For (b), a t-test will be used to test the significance
of the difference between the two groups' use of Boolean operators.
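As a sketch of the planned t-tests, the pooled-variance independent-samples t statistic can be computed directly from two groups' scores; the concept-map scores below are invented for illustration:

```python
import math

def independent_t(a, b):
    """Pooled-variance two-sample t statistic, with df = n1 + n2 - 2."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    ss1 = sum((x - m1) ** 2 for x in a)           # sum of squared deviations
    ss2 = sum((x - m2) ** 2 for x in b)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)             # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Invented concept-map scores for the two feedback conditions.
task_specific = [14, 12, 15, 13, 16]
adapted = [11, 10, 13, 12, 9]
t, df = independent_t(task_specific, adapted)
assert df == 8
assert t > 0  # task-specific group scored higher in this invented sample
```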
References
Aidman, E. V., & Egan, G. (1998). Academic assessment through computerized concept
mapping: validating a method of implicit map reconstruction. International Journal of
Instructional Media, 25 (3), 277-294.
Azzaro, S., & Cleary, K. (1994). Developing a computer-assisted learning package for
end users. CD-ROM professional, 7, 95-101.
Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The
instructional effects of feedback in test-like events. Review of Educational Research, 61,
179-212.
Becker, H. J. (1999). Internet use by teachers: Conditions of professional use and
teacher-directed student use. Teaching, Learning, and Computing: 1998 National Survey [On-line].
Center for Research on Information Technology and Organizations, the University of California,
Irvine, and the University of Minnesota. Available:
http://www.crito.uci.edu/TLC/FINDINGS/internet-use/startpage.htm
Becker, H. J. (Unpublished). Who’s wired and who’s not. [On-line]. University of
California, Irvine. Available:
http://www.gse.uci.edu/doehome/DeptInfo/Faculty/Becker/packard/text.html [2000, July 5].
Bernardo, A. B. (2001). Analogical problem Construction and transfer in mathematical
problem solving. Educational Psychology, 21(2), 137-150.
Bilal, D. (1998, October). Children’s search processes in using World Wide Web search
engines: An exploratory study. Paper presented at the meeting of the American Society of
Information Science Annual Conference, Pittsburgh.
Borgman, C. L. (1999). Books, bytes, and behavior: rethinking scholarly communication
for a global information infrastructure. Information Services and Use, 19(2), 117-125.
Borgman, C. L. (2000). The premise and the promise of a global information
infrastructure. In C. L. Borgman (Ed.), From Gutenberg to the global information Infrastructure:
access to information in the networked world. Cambridge, MA: The MIT Press.
Brackbill, Y., Bravos, A., & Starr, R. H. (1962). Delay improved retention of a difficult
task. Journal of Comparative Psychology, 62, 148-156.
Breivik, P. S. (1998). Student learning in the information age. Phoenix, AZ: Oryx Press.
Brenner, M. E., Mayer, R. E., Moseley, B., Brar, T., Duran, R., Reed, S., & Webb, D.
(1997). Learning by understanding: The role of multiple representations in learning algebra.
American Educational Research Journal, 34(4), 663-689.
Brown, N. W. (2000). Creating motivation in education (pp. 1-2). New York, NY:
Falmer Press.
Burdick, T. A. (1996). The premise and diversity in information seeking: gender and the
information search styles model. School Library Quarterly, 25, 19-26.
Burdick, T. A. (1998). Pleasure in information seeking: reducing information aliteracy.
Emergency Librarian, 25, 13-17.
Butler, R. (2000). Information seeking and achievement motivation in middle childhood
and adolescence: the role of conceptions of ability. Developmental Psychology, 35(1), 146-163.
Cardelle-Elawar, M. (1992). Effects of teaching metacognitive skills to students with low
mathematics abilities. Teaching and Teacher Education, 8, 109-121.
Choi-kol, S. S. (2000). A problem-solving model of quadratic min values using computer.
International Journal of Instructional Media, 27(1), 73-82.
Chung, G., O’Neil, H. F., Jr., & Herl, H. E. (1999). The use of computer-based
collaborative knowledge mapping to measure team processes and team outcomes. Computers in
Human Behavior, 15, 463-493.
Clariana, R. B., Ross, S. M., & Morrison, G. R. (1991). The effects of different feedback
strategies using computer-administered multiple-choice questions as instruction. Educational
Technology Research and Development, 39(2), 5-17.
Clariana, R. B., Wagner, D. M., & Lucia C. R. (2000) Applying a connectionist
description of feedback timing. Educational Technology Research and Development, 48(3), 5-21.
Clark, K., & Dwyer, F. M. (1998). Effects of different types of computer-assisted
feedback strategies on achievement and response confidence. International Journal of
Instructional Media, 25(1), 55-63.
Cohen, B. P. & Arechevala-Vargas (1987). Interdependence, interaction, and
productivity (Working Paper No. 87-3). Stanford: Center for Sociology Research.
Covington, M. V. (1998). The future and its discontents. In M. V. Covington, The will to
learn: a guide for motivating young people (pp.1-26). Cambridge, United Kingdom: Cambridge
University Press.
Croy, M. J., Cook, J. R., & Green, M. G. (1993). Human-supplied versus
computer-supplied feedback: An empirical and pragmatic study. Journal of Research on
Computing in Education, 26(2), 185-204.
Dempster, F. N. (1997). Using tests to promote classroom learning. In R. F. Dillon (Ed),
Handbook on testing, (pp. 340-341). U.S.: Greenwood Press.
Dochy, F. J. R. C. (1996). Assessment of domain-specific and domain-transcending prior
knowledge: Entry assessment and the use of profile analysis. In M. Birenbaum & F. J. R. C.
Dochy (Eds.), Alternatives in assessment of achievements, learning process and prior knowledge
(pp. 93-129). Boston, MA: Kluwer Academic Publishers.
Durham, C. C., Locke, E. A., Poon, J. M. L., & McLeod, P. L. (2000). Effects of group
goals and time pressure on group efficacy, information-seeking strategy, and performance.
Human Performance, 13(2), 115-138.
Eisenberg, M. B. (1997). Big 6 tips: Teaching information problem solving. Emergency
Librarian, 25(2), 22.
Ercegovac, Z. (1998). Information literacy: Search strategies, tools & resources. Los
Angeles: InfoEn Associates.
Fidel, R., Davies, R. K., Douglass, M. H., Holder, J. K., Hopkins, C. J., Kushner, E. J.,
Miyagishima, B. K., & Toney, C. D. (1999). A visit to the information mall: Web searching
behavior of high school students. Journal of the American Society for Information Science, 50,
24-37.
Foote, C. (1999). Attribution feedback in the elementary classroom. Journal of Research
in Childhood Education, 13(2) 155-166.
Goldsmith, T. E., Johnson, P. J., & Acton, W. H. (1991). Assessing structural
knowledge. Journal of Educational Psychology, 83(1), 88-96.
Grigorenko, E. L. & Sternberg, R. J. (1998). Dynamic testing. Psychological Bulletin,
124(1), 75-111
Herl, H. E. (1995). Construct validation of an approach to modeling cognitive structure
of experts’ and novices’ U.S. history knowledge. Unpublished doctoral dissertation, University
of California, Los Angeles.
Herl, H. E., O’Neil, H. F., Jr., Chung, G., & Schacter, J. (1999). Reliability and validity
of a computer-based knowledge mapping system to measure content understanding. Computers
in Human Behavior, 15, 315-333.
Hill, J. R. (1997). The World Wide Web as a tool for information retrieval: An
exploratory study of users’ strategies in an open-ended system. School Library Media Quarterly,
25, 229-236.
Hsieh, I. (2001). Types of feedback in a computer-based collaborative problem solving
group task. Unpublished doctoral dissertation, University of Southern California.
Hurwitz, C. L., & Abegg, G. (1999). A teacher's perspective on technology in the
classroom: computer visualization, concept maps and learning logs. Journal of Education,
181(2), 123-43.
Internet Exceeds 2 Billion Pages. (2000). Available at:
http://www.cyveillance.com/us/newsroom/pressr/000710.asp
Johnson, D. W., & Johnson, F. P. (1997). Joining together. Boston: Allyn and Bacon.
Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured
problem-solving learning. Educational Technology Research & Development, 45, 65-94.
Jukes, I., Dosaj, A., & Macdonald, B. (2000). Understanding the InfoWhelm. In I. Jukes,
A. Dosaj, & B. Macdonald (Eds.), Net.Savvy (pp. 6-9). Thousand Oaks, CA: Sage Publications
Company.
Kafai, Y., & Bates, M. J. (1997, Winter). Internet Web-searching instruction in the
elementary classroom: Building a foundation for information literacy. School Library Media
Quarterly, 25, 103-111.
Karabenick, S. A., & Keefer, J. A. (1998). Information seeking in the information age.
In S. A. Karabenick (Ed.), Strategic help seeking: implications for learning and teaching.
(pp.219-250). Mahwah, NJ: Lawrence Erlbaum Associates.
Khan, K. & Locatis, C. (1998). Searching through cyberspace: The effects of link
display and link density on information retrieval from hypertext on the World Wide Web.
Journal of the American Society for Information Science, 49(2), 176-182.
King, P. E., Young, M. J., & Behnke, R. R. (2000). Public speaking performance
improvement as a function of information processing in immediate and delayed feedback
interventions. Communication Education, 49(4), 365-374.
Klein, D. C. D., Yarnall, L., & Glaubke, C. (in press). Using technology to assess
students’ Web expertise. In H. F. O’Neil, Jr., & R. S. Perez (Eds.), Technology applications in
education: A learning view. Mahwah, NJ: Erlbaum.
Knapp, S. D., Cohen, L. B., & Juedes, D. R. (1998). A natural language thesaurus for the
humanities: the need for a database search aid. The Library Quarterly, 68(4), 406-430.
Kubeck, J. E., Miller-Albrecht, S. A., & Murphy, M. D. (1999). Finding information on
the World Wide Web: Exploring older adults’ exploration. Educational Gerontology, 25,
167-183.
Kulhavy, R. W., & Anderson, R. C. (1972). Delay-retention effect with multiple-choice
tests. Journal of Educational Psychology, 63, 505-512.
Kulik, J. A., & Kulik, C. C. (1988). Timing of feedback and verbal learning. Review of
Educational Research, 58(1), 79-97.
Lalley, J. P. (1998). Comparison of test and video as forms of feedback during computer
assisted learning. Journal of Educational Computing Research, 18(4), 323-338.
Lazonder, A. W. (2000) Exploring novice users' training needs in searching information
on the WWW. Journal of Computer Assisted Learning Special Issue: Approaches to the design
of software training, 16(4), 326-335.
Lee, W., & Zalatimo, S. (1990). Computer-assisted instruction with immediate feedback
versus delayed feedback in learning to solve analogy items. International Journal of Instructional
Media, 17(4) 319-329.
Lindroth, L. (1997). How to …use Internet search tools. Teaching pre K-8, 28, 14-15.
Mayer, R. E., & Wittrock, M. C. (1996). Problem-solving transfer. In D. C. Berliner &
R. C. Calfee (Eds.), Handbook of educational psychology (pp. 47-62). New York, NY:
Macmillan Library Reference USA, Simon & Schuster Macmillan.
Mead, S. E., Sit, R. A., Rogers, W. A., Jamieson, B.A., & Rousseau, G. K. (2000)
Influences of general computer experience and age on library database search performance.
Behavior & Information Technology, 19(2), 107-123.
Morgan, B. B., Salas, E., & Glickman, A. S. (1993). An analysis of team evolution and
maturation. The Journal of General Psychology, 120 (3), 277-291.
Mousavi, S. Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing
auditory and visual presentation modes. Journal of Educational Psychology, 87(2), 319-334.
Nabors, M. L. (1999). New functions for “old Macs”: Providing immediate feedback
for student teachers through technology. International Journal of Instructional Media, 26(1),
105-107.
Nagata, N., & Swisher, M. V. (1995). A study of consciousness-raising by computer: the
effect of metalinguistic feedback on second language learning. Foreign Language Annals, 28(3)
337-347.
National Center for Education Statistics. (2000). Internet access in U.S. public schools
and classrooms: 1994-1999 (NCES # 2000086). Washington, DC: U.S. Department of
Education.
O’Neil, H. F., Jr. (1999). Perspectives on computer-based performance assessment of
problem solving. Computers in Human Behavior, 15, 225-268.
O’Neil, H. F., Jr., & Duesbury, R. (1996). Effect of type of practice in a computer-aided
design environment in visualizing three-dimensional objects from two-dimensional orthographic
projections. Journal of Applied Psychology, 81(3), 249-260.
O’Neil, H. F., Jr., & Herl, H. E. (1998). Reliability and validity of a trait measure of
self-regulation. Los Angeles: University of California, Center for Research on Evaluation,
Standards, and Student Testing (CRESST).
O’Neil, H. F., Jr., Allred, K., & Baker, E. L. (1997). Review of workforce readiness
theoretical frameworks. In H. F. O’Neil, Jr. (Ed.), Workforce readiness: Competencies and
assessment. (pp. 3-25). Mahwah, NJ: Lawrence Erlbaum Associates.
O’Neil, H. F., Jr., Chung, G. K. W. K., & Brown, R. S. (1997). Use of networked
simulations as a context to measure team competencies. In H. F. O’Neil, Jr. (Ed.), Workforce
readiness: Competencies and assessment (pp. 411-452). Mahwah, NJ: Lawrence Erlbaum
Associates.
O’Neil, H. F., Jr., Mayer, R. E., Herl, H. E., Niemi, C., Olin, K. & Thurman, R. A.
(2000). Instructional strategies for virtual aviation training environments. In H. F. O’Neil, & D.
H. Andrews (Eds.), Aircrew training and assessment, (pp. 105-130). Mahwah, NJ: Lawrence
Erlbaum Associates.
O’Neil, H. F., Jr., Wang, S., Chung, G., & Herl, H. (1998). Draft final report for
validation of teamwork skills questionnaire using computer-based teamwork simulations. Los
Angeles: University of California, National Center for Research on Evaluation, Standards, and
Student Testing.
Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal
of Psychology, 45(3), 255-287.
Park, O., & Gittelman, S. S. (1992). Selective use of animation and feedback in
computer-based instruction. Educational Technology Research and Development, 40(4), 27-38.
Pintrich, P. R., & Schunk D. H. (1996). Motivation in education. (pp. 336-339).
Englewood Cliffs, NJ: Prentice Hall.
Pridemore, D. R., & Klein, J. D. (1995). Control of practice and level of feedback in
computer-based instruction. Contemporary Educational Psychology, 20, 444-450.
Rieber, L. P. (1996). Animation as feedback in computer simulation: Representation
matters. Educational Technology Research and Development, 44(1), 5-22.
Roblyer, M. D. (1998). The other half of knowledge. Learning and Leading with
Technology, 25(6), 54-55.
Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. C. Wittrock (Ed.),
Handbook of research on teaching (pp.376-391)
Ross, S. M., & Morrison, G. R. (1993). Using feedback to adapt instruction for
individuals. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback
(pp. 177-195). Englewood Cliffs, NJ: Educational Technology Publications.
Ruiz-Primo, M. A., Schultz, S. E., & Shavelson, R. J. (1997). Concept map-based
assessment in science: Two exploratory studies (CSE Tech. Rep. No. 436). Los Angeles:
University of California, Center for Research on Evaluation, Standards, and Student Testing
(CRESST).
Ryan, A. M., & Pintrich, P. R. (1997). “Should I ask for help?” The role of motivation
and attitudes in adolescents’ help seeking in math class. Journal of Educational Psychology,
89(2), 329-341.
Sadoski, M., & Paivio, A. (1994). A dual coding view of imagery and verbal processes in
reading comprehension. In R. B. Ruddell, M. R. Ruddell, & H. Singer (Eds.), Theoretical
models and processes of reading, (pp. 582-601). Newark, DE: International Reading
Association.
Sales, G. C. (1993). Adapted and adaptive feedback in technology-based instruction. In
J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 159-176).
Englewood Cliffs, NJ: Educational Technology Publications.
Schacter, J., Chung, G. K. W. K., & Dorr, A. (1998). Children’s Internet searching on
complex problems: Performance and process analyses. Journal of the American Society for
Information Science, 49, 840-849.
Schacter, J., Herl, H.E., Chung, G. K.W.K., Dennis, R.A., & O’Neil, H.F., Jr. (1999).
Computer-based performance assessments: A solution to the narrow measurement and reporting
of problem-solving. Computers in Human Behavior, 15, 403-418.
Schau, C. & Mattern, N. (1997). Use of map techniques in teaching applied statistics
courses. American statistician, 51, 171-175.
Schau, C., Mattern, N., Zeilik, M., Teague, K., & Weber, R. (2001). Select-and-fill-in
concept map scores as a measure of students' connected understanding of science.
Educational & Psychological Measurement, 61(1), 136-158.
Schroth, M. L. (1992). The effects of delay of feedback on a delayed concept formation
transfer task. Contemporary Educational Psychology, 17, 78-82.
Shapiro, W. L. (1999). Collaborative problem solving in a large-scale space science
simulation. A paper presented at the Annual Meeting of the American Educational Research
Association, Montreal, Canada. (ERIC Document Reproduction Service No. ED 431 599).
Sinclair, K. E. (1997). Workforce competencies of college graduates. In H. F. O’Neil, Jr.
(Ed.), Workforce readiness: Competencies and assessment (pp. 103-120). Mahwah, NJ:
Lawrence Erlbaum Associates.
Small, R. V., Sutton, S., & Miwa, M. (1998). Information seeking for instructional
planning: an exploratory study. Journal of Research on Computing in Education, 31(2) 204-219.
Smith, M. S., & Broom, M. (in press). The landscape and future of the use of technology
in K-12 education. In H. F. O’Neil, Jr., & Perez, R. S. (Eds.), Technology applications in
education: a learning view.
Tsai, C.-C., Lin, S. S. J., & Yuan, S.-M. (2000). Taiwanese high school
science students’ views of using a WWW-based concept map testing system. International
Journal of Instructional Media, 27(4), 363-368.
Tudge, J. R. H., Winterhoff, P. A., & Hogan, D. M. (1996). The cognitive consequences
of collaborative problem solving with and without feedback. Child Development, 67,
2892-2909.
Tyner, R. (2001). Sink or swim: Internet search tools & techniques. Available:
http://techlearning.com/db_area/archives/TL/200102/picksofmonth.html
U.S. Census Bureau. (1999). Computer use in the United States: Population
characteristics. Washington, DC: U.S. Department of Commerce.
Underwood, J. & Underwood, G. (1999). Task effects on co-operative and collaborative
learning with computers. In K. Littleton & P. Light (Eds.), Learning with computers (pp.10-23).
New York, NY: Routledge.
Vispoel, W. P. (1998). Psychometric characteristics of computer-adaptive and
self-adaptive vocabulary tests: The role of answer feedback and test anxiety. Journal of
Educational Measurement, 35(2), 159-167.
Waldrop, P. B., Justen, J. E., & Adams, T. M. (1990). Effects of paired versus individual
user computer-assisted instruction and type of feedback on student achievement. Educational
Technology, 30(7), 51-53.
Webb, N. M., Nemer, K. M., & Chizhik, A. W. (1998). Equity issues in collaborative
group assessment: Group composition and performance. American Educational Research
Journal, 35(4), 607-651.
Weng (1999). A reliability and validity study of Chinese version of a teamwork skills
questionnaire. Unpublished doctoral dissertation, University of Southern California.
Wilkins, J. L., Hassard, & Leckie, G. J. (1997). University professional and managerial
staff: information needs and seeking. College & Research Libraries, 58, 561-574.
Zhang, J. (1998). A distributed representation approach to group problem solving.
Journal of the American Society for Information Science, 49(9), 801-809.
Zimmerman, B. J. (1998). Academic studying and the development of personal skill: a
self-regulatory perspective. Educational Psychologist, 33(2), 73-86.
Appendix A
Trait Thinking Questionnaire
Name (please print): ________________________________________________________________________________
Teacher: __________________________________________
Date: _____________________________________
Directions: A number of statements which people have used to describe themselves are given
below. Read each statement and indicate how you generally think or feel on learning tasks by
marking your answer sheet. There are no right or wrong answers. Do not spend too much time on
any one statement. Remember, give the answer that seems to describe how you generally think
or feel.
Each statement is rated on a 4-point scale:
1 = Almost Never   2 = Sometimes   3 = Often   4 = Almost Always

1. I determine how to solve a task before I begin.
2. I check how well I am doing when I solve a task.
3. I work hard to do well even if I don’t like a task.
4. I believe I will receive an excellent grade in this course.
5. I carefully plan my course of action.
6. I ask myself questions to stay on track as I do a task.
7. I put forth my best effort on tasks.
8. I’m certain I can understand the most difficult material presented in the readings for this course.
9. I try to understand tasks before I attempt to solve them.
10. I check my work while I am doing it.
11. I work as hard as possible on tasks.
12. I’m confident I can understand the basic concepts taught in this course.
13. I try to understand the goal of a task before I attempt to answer.
14. I almost always know how much of a task I have to complete.
15. I am willing to do extra work on tasks to improve my knowledge.
16. I’m confident I can understand the most complex material presented by the teacher in this course.
17. I figure out my goals and what I need to do to accomplish them.
18. I judge the correctness of my work.
19. I concentrate as hard as I can when doing a task.
20. I’m confident I can do an excellent job on the assignments and tests in this course.
21. I imagine the parts of a task I have to complete.
22. I correct my errors.
23. I work hard on a task even if it does not count.
24. I expect to do well in this course.
25. I make sure I understand just what has to be done and how to do it.
26. I check my accuracy as I progress through a task.
27. A task is useful to check my knowledge.
28. I’m certain I can master the skills being taught in this course.
29. I try to determine what the task requires.
30. I ask myself, how well am I doing, as I proceed through tasks.
31. Practice makes perfect.
32. Considering the difficulty of this course, the teacher, and my skills, I think I will do well in this course.
Copyright © 1995, 1997, 1998, 2000 by Harold F. O’Neil, Jr.