Qualifying Examination

Richard Wainess
Rossier School of Education
University of Southern California

to

Dr. Harold O'Neil (Chair)
Dr. Richard Clark
Dr. Edward Kazlauskas
Dr. Janice Schafrik
Dr. Yanis Yortsos (Outside member)

14009 Barner Ave.
Sylmar, CA 91342
Home Phone: (818) 364-9419
E-Mail: wainess@usc.edu

In partial fulfillment of the requirement for the degree
Doctor of Philosophy in Education
in Educational Psychology and Technology

1. Review the theoretical and empirical literature on the impact of games on learning and motivation. Please focus on training of adults and include a discussion of various game characteristics, such as fun, competition, fantasy, and challenge.

The purpose of this review is to describe: the differences between games and simulations; the motivational aspects of games; how games are currently being used; possible learning outcomes attributed to games; and some issues pertaining to individual differences, such as gender.

While instructional games have been touted as the next great educational tool, research hasn't supported these hopes or claims. Richard Clark's famous remarks that media, such as video games, do not improve learning but are simply a means of delivering content (CITE) continue to prove true. Therefore, the question becomes, "If instructional games do not improve learning, then why bother creating them?" The answer is relatively simple. As Clark (YEAR) states, media do not improve learning. That means books and lectures, which are both media (i.e., delivery vehicles for instruction), do not improve learning either. And that's correct. A lecture, in and of itself, does not improve learning. Neither does a book. It's the instructional methods that are incorporated into the book or into the lecture that cause learning (Schacter & Fagnano, 1999). According to Cobb (1997), while there may be no unique medium for any job, that does not mean that one medium isn't better than another or that determining which is better is not a worthwhile empirical question. We use books and lectures because they're practical—they achieve some form of efficiency. Books are portable, can be read just about anywhere, and provide a convenient way to re-access large amounts of information. Lectures provide an opportunity for spontaneous changes to presentations and for engaging in real-time dialog with the instructor and among classmates. Video games also have the potential to work well for specific situations. As Cobb (1997) argued, no medium is intrinsically better than another, but one may be better than another depending upon instructional needs. If computers can provide opportunities for learning, they should be considered, and not patently rejected. However, they should also not be patently accepted. Computers are not teaching devices. The instructional content they deliver and the instructional methods embedded in that delivery are the teaching devices (CITE). Computers are simply powerful instruments with robust capabilities for delivering experiences that are either distinct from those of other media or, in certain circumstances, more practical than using other media. For example, computers can deliver games with a combination of features and capabilities not available with other electronic media. Researchers generally agree that games provide motivational outcomes (CITE). What is questioned is their learning outcomes, as measured by retention and, more importantly, transfer.
While there is inconsistent evidence of learning outcomes (Lee, 1999; CITE), agreement on the role of motivation in learning (CITE) is relatively stable. Many, if not most, of the current models of learning and problem solving include affective components, including motivation, as critical to learning (e.g., the CANE model: Clark, 1999; O'Neil's Problem Solving Model: O'Neil, YEAR; GIVE MODEL NAME: Waller, Knapp, & Hunt, 2001). While it can easily be argued that motivation does not mean learning will occur (CITE), it can equally be argued that motivation supports learning by encouraging mental effort and persistence (CITE).

Researchers have cited a large number of benefits of computers and, particularly, of computer games. Taylor, Renshaw, and Jensen (1997) commented that technology, in the form of computer-assisted instruction (CAI), incorporates a wealth of techniques that promote motivation and has the potential to tremendously improve educational effectiveness. It has also been argued that some empirical evidence exists that games can be effective tools for enhancing learning and understanding of complex subject matter (Cordova & Lepper, 1996; Ricci, Salas, & Cannon-Bowers, 1996). And Leemkuil, de Jong, de Hoog, and Christoph (2003) argued that games and simulations provide students with a framework of rules and roles through which they can learn interactively through a live experience; students can tackle situations they might not be prepared to risk in reality, and they can experiment with new ideas and strategies. Mayer, Mautone, and Prothero (2002) also commented that when learning by doing in a physical environment is not feasible, learning by doing can be implemented using computer simulations. Mayer, Mautone, and Prothero further commented that, in learning by doing in virtual environments, students actively work in realistic situations that simulate authentic tasks for a particular domain. According to Cross (1993), experiential learning is the process of gaining knowledge through experience and behavior, and games are commonly used tools for experiential learning. However, for games to be effective, they must embed sound instructional strategies and appropriate content. According to Garris, Ahlers, and Driskell (2002), recent research has begun to establish links between instructional strategies, motivational processes, and learning outcomes. The researchers argued that people learn from active engagement with the environment, and this experience, coupled with instructional support (i.e., debriefing, scaffolding), can provide an effective learning environment. GIVE OTHER EXAMPLES?

In the early 1970s, Duke (1995) developed a series of games for the United Nations Educational, Scientific, and Cultural Organization (UNESCO) for use in underdeveloped countries. The games showed promise as a way to quickly provide a cogent model for urgent problems, such as nutrition planning and economic planning. Also in the 1970s, a new type of client for gaming began to emerge as, increasingly, the leadership of large public and private organizations sought to locate new methods for developing strategic vision. These clients included international banks, railroads, pharmaceutical companies, and chemical companies (Duke, 1995).
Resnick and Sherer (1994) commented that computerized games and simulations can be used like any other professional tool to deal with clients' conflicts, their current troubled situation, or with future dilemmas, and Rieber (1996) argued that games offer an organizational function based on cognitive, social, and cultural factors, all related to play. According to Salas, Bowers, and Rhodenizer (1998), simulation is a way of life in many aviation training environments. For example, military, commercial, and general aviation all use simulations to train a variety of tasks. And in the past decade, there has been considerable interest in using computer-simulated (virtual) environments (VEs) for training spatial knowledge (Waller, 2000).

DEFINITIONS

One of the first problem areas in research into games and simulations is terminology. Many studies that claim to have examined the use of games did not use a game (CITE). At best, they used an interactive multimedia program that exhibits some of the features of a game, but not enough features to actually be called a game. A similar problem occurs with simulations. A large number of research studies use simulations but call them games (CITE). Because the goals and features of games and simulations differ, it is important when examining the potential effects of the two media to be clear about which one is being examined. However, there is little consensus in the education and training literature on how games and simulations are defined.

According to Ricci, Salas, and Cannon-Bowers (1996), computer-based educational games generally fall into one of two categories: simulation games and video games. Simulation games model a process or mechanism relating task-relevant input changes to outcomes in a simplified reality that may not have a definite endpoint. They often depend on learners reaching conclusions through exploration of the relation between input changes and subsequent outcomes. Video games, on the other hand, are competitive interactions bound by rules to achieve specified goals that are dependent on skill or knowledge and that often involve chance and imaginary settings (Randel, Morris, Wetzel, & Whitehill, 1992).

Games

According to Garris, Ahlers, and Driskell (2002), early work in defining games suggested that there are no properties that are common to all games and that games belong to the same semantic category only because they bear a family resemblance to one another. Betz (1995) argued that a game is being played when the actions of individuals are determined by both their own actions and the actions of one or more other actors. Dempsey, Haynes, Lucassen, and Casey (2002) commented that a game is a set of activities involving one or more players. It has goals, constraints, payoffs, and consequences. A number of researchers agree that games have rules (Crookall, Oxford, & Saunders, 1987; Dempsey, Haynes, Lucassen, & Casey, 2002; Garris, Ahlers, & Driskell, 2002; Ricci, 1994). Researchers also agree that games have goals and strategies to achieve those goals (Crookall & Arai, 1995; Crookall, Oxford, & Saunders, 1987; Garris, Ahlers, & Driskell, 2002; Ricci, 1994). Many researchers also agree that games have competition (Dempsey, Haynes, Lucassen, & Casey, 2002) and consequences such as winning or losing (Crookall, Oxford, & Saunders, 1987). Betz (1995) further argued that games simulate whole systems, not parts, forcing players to organize and integrate many skills.
Students learn about whole systems through their individual actions, individual actions being the students' game moves. Crookall, Oxford, and Saunders (1987) also noted that a game does not intend to represent any real-world system; it is a "real" system in its own right. According to Duke (1995), games are situation specific. If well designed for a specific client, the same game should not be expected to perform well in a different environment.

Simulations

In contrast to games, Crookall and Saunders (1989) viewed a simulation as a representation of some real-world system that can also take on some aspects of reality for participants or users. Similarly, Garris, Ahlers, and Driskell (2002) wrote that a key feature of simulations is that they represent real-world systems, and Henderson, Klemes, and Eshet (2000) commented that a simulation attempts to faithfully mimic an imaginary or real environment and content that cannot be experienced directly, for such reasons as cost, danger, accessibility, or time. Berson (1996) also argued that simulations allow students to engage in activities that would otherwise be too expensive, dangerous, or impractical to conduct in the classroom. Lee (1999) added that a simulation is defined as a computer program that relates variables together through cause-and-effect relationships. Rieber (1992, 1996) discussed microworlds, a variant of simulations. He described microworlds as small representations of content areas or domains that can be recognized by an expert, and simulations as designed to mimic real-life experiences, such as a flight simulator (Lee, 1999). Thiagarajan (1998) argued that simulations do not reflect reality; they reflect someone's model of reality. According to Thiagarajan, a simulation is a representation of the features and behaviors of one system through the use of another. Elements of a simulation correspond to selected elements of the system being simulated. Some simulations focus on the physical features of a real-world object, such as a model airplane, while others focus on the processes and interactions of real-world events, such as mathematical equations that predict the number of traffic fatalities during a holiday weekend (Thiagarajan, 1998). At the risk of introducing a bit more ambiguity, Garris, Ahlers, and Driskell (2002) proposed that simulations can contain game features, which leads to the final definition: sim-games.

Sim-Games

Thus, it is not too improper to consider games and simulations as similar in some respects, keeping in mind the key distinction that simulations propose to represent reality and games do not (Garris, Ahlers, & Driskell, 2002). Combining the features of the two media, Rosenorn and Kofoed (1998) described simulation/gaming as a learning environment where participants are actively involved in experiments, for example, in the form of role-plays, simulations of daily work situations, or developmental scenarios. Being away from the real workplace, participants have the freedom to make wrong decisions and to learn from them. This paper will use the definitions of games, simulations, and sim-games offered by Gredler (1996), which combine the most common features cited by the various researchers and yet provide clear distinctions between the three media.
According to Gredler, games consist of rules that describe allowable player moves, game constraints and privileges (such as ways of earning extra turns), and penalties for illegal (nonpermissible) actions. Further, the rules may be imaginative in that they need not relate to real-world events (p. 523). This definition is in contrast to a simulation, which Gredler (1996) defines as "a dynamic set of relationships among several variables that (1) change over time and (2) reflect authentic causal processes" (p. 523). In addition, Gredler describes games as linear and simulations as nonlinear, and games as having a goal of winning while simulations have a goal of discovering causal relationships. Gredler also defines a mixed metaphor referred to as simulation games or gaming simulations, which is a blend of the features of the two interactive media: games and simulations.

A major design weakness in game studies is that most studies compare simulations to regular classroom instruction (lecture and/or classroom discussion). However, the instructional goals for which each can be most effective often differ. The lecture method is likely to be superior in transmitting items of information. In contrast, simulations have the potential to develop the students' mental models of complex situations as well as their problem-solving strategies (Gredler, 1996).

MOTIVATIONAL ASPECTS OF GAMES

According to Garris, Ahlers, and Driskell (2002), motivated learners are easy to describe. They are enthusiastic, focused, and engaged; they are interested in and enjoy what they are doing; they try hard; and they persist over time. Furthermore, they are self-determined and driven by their own volition rather than external forces (Garris, Ahlers, & Driskell, 2002). Ricci, Salas, and Cannon-Bowers (1996) defined motivation as "the direction, intensity, and persistence of attentional effort invested by the trainee toward training." Similarly, according to Malouf (1987-1988), continuing motivation is defined as returning to a task or a behavior without apparent external pressure to do so when other appealing behaviors are available. And more simply, Story and Sullivan (1986) commented that the most common measure of continuing motivation is whether a student returns to the same task at a later time. In general, these descriptions of motivation include the concept of continued motivation: persistence.

With regard to video games, Asakawa and Gilbert (2003) argued that without sources of motivation, players often lose interest and drop out of a game. However, there seems to be little agreement among researchers on what those sources are—the specific set of elements or characteristics that lead to motivation in any learning environment, and particularly with educational games. According to Rieber (1996) and McGrener (1996), motivational researchers have offered the following characteristics as common to all intrinsically motivating learning environments: challenge, curiosity, fantasy, and control (Davis & Wiedenbeck, 2001; Lepper & Malone, 1987; Malone, 1981; Malone & Lepper, 1987). Malone (1981) and others also included fun as a criterion for motivation. For interactive games, Stewart (1997) described some of the same elements as above, but also included additional motivational elements: goals and outcomes.
Locke and Latham (1990) also commented that one of the most robust findings in the literature on motivation is that clear, specific, and difficult goals lead to enhanced performance. They argued that clear, specific goals allow the individual to perceive goal-feedback discrepancies, which are seen as crucial in triggering greater attention and motivation. Clark (2001) argued that motivation cannot exist without goals. The role of goals will be discussed in question 2. The response to this question will focus on fantasy, control and manipulation, challenge and complexity, curiosity, competition, feedback, and fun.

Fantasy

Research suggests that material may be learned more readily when presented in an imagined context that interests the learner than when presented in a generic or decontextualized form (Garris, Ahlers, & Driskell, 2002). Malone and Lepper (1987) defined fantasy as an environment that evokes "mental images of physical or social situations that do not exist" (p. 250). According to Garris, Ahlers, and Driskell (2002), games involve imaginary worlds; activity inside these worlds has no impact on the real world; and when involved in a game, nothing outside the game is relevant. Rieber (1996) commented that fantasy is used to encourage learners to imagine that they are completing the activity in a context in which they are really not present. However, Rieber also described endogenous and exogenous fantasies. Endogenous fantasy weaves relevant fantasy into a game, while exogenous fantasy simply sugar-coats a learning environment with fantasy. An example of an endogenous fantasy would be the use of a laboratory environment to learn chemistry, since this environment is consistent with the domain. An example of an exogenous fantasy would be using a hangman game to learn spelling, because hanging a person has nothing to do with spelling. Rieber (1996) described endogenous fantasy, but not exogenous fantasy, as important to intrinsic motivation, and further commented that, unfortunately, exogenous fantasies are a common and popular element of many educational games. According to Malone and Lepper (1987), fantasies can offer analogies or metaphors for real-world processes that allow the user to experience phenomena from varied perspectives. A number of researchers (Anderson & Pickett, 1978; Ausubel, 1963; Malone & Lepper, 1978; Malone & Lepper, 1987; Singer, 1973) argue that fantasies in the form of metaphors and analogies provide learners with better understanding by allowing them to relate new information to existing knowledge. According to Davis and Wiedenbeck (2001), metaphor also helps learners to feel directly involved with objects in the domain so that the computer and interface become invisible. The relationship of analogy and metaphor to learning is discussed in question 2.

Control and Manipulation

Hannafin and Sullivan (1996) define control as the exercise of authority or the ability to regulate, direct, or command something. Control, or self-determination, promotes intrinsic motivation because learners are given a sense of control over the choices of actions they may take (deCharms, 1986; Deci, 1975; Lepper & Greene, 1978). Furthermore, control implies that outcomes depend on learners' choices and, therefore, learners should be able to produce significant effects through their own actions (Davis & Wiedenbeck, 2001).
According to Garris, Ahlers, and Driskell (2002), games evoke a sense of personal control when users are allowed to select strategies, manage the direction of activities, and make decisions that directly affect outcomes, even if those actions are not instructionally relevant. However, Hannafin and Sullivan (1996) warned that research comparing the effects on learning achievement of instructional programs that control all elements of the instruction (program control) and instructional programs in which the learner has control over elements of the instructional program (learner control) has yielded mixed results. Dillon and Gabbard (1998) commented that novice and lower aptitude students have greater difficulty when given control, compared to experts and higher aptitude students, and Niemiec, Sikorski, and Walberg (DATE) argued that control does not appear to offer any special benefits for any type of learning or under any type of condition.

Challenge and Complexity

Challenge, also referred to as effectance, competence, or mastery motivation (Bandura, 1977; Csikszentmihalyi, 1975; Deci, 1975; Harter, 1978; White, 1959), embodies the idea that intrinsic motivation occurs when there is a match between a task and the learner's skills. The task should be neither too easy nor too hard, because in either case the learner will lose interest. Similarly, Malone and Lepper (1987) have claimed that individuals desire an optimal level of challenge; that is, tasks that are neither too easy nor too difficult to perform. Stewart (1997) commented that games that are too easy will be dismissed quickly. According to Garris, Ahlers, and Driskell (2002), there are several ways in which an optimal level of challenge can be obtained. Goals should be clearly specified, yet the probability of obtaining that goal should be uncertain, and goals must also be meaningful to the individual. They further argued that linking activities to valued personal competencies, embedding activities within absorbing fantasy scenarios, or engaging competitive or cooperative motivations could serve to make goals meaningful (Garris, Ahlers, & Driskell, 2002).

Curiosity

According to Rieber (1996), challenge and curiosity are intertwined. Curiosity arises from situations in which there is complexity, incongruity, and discrepancy (Davis & Wiedenbeck, 2001). Sensory curiosity is the interest evoked by novel situations; cognitive curiosity is evoked by the desire for knowledge (Garris, Ahlers, & Driskell, 2002). Cognitive curiosity motivates the learner to attempt to resolve the inconsistency through exploration (Davis & Wiedenbeck, 2001). Curiosity is identified in games by unusual visual or auditory effects, and by paradoxes, incompleteness, and potential simplifications (Westbrook & Braithwaite, 2002). Curiosity is the desire to acquire more information; it is a primary component of the players' motivation to learn how to operate the game (Westbrook & Braithwaite, 2001). Malone and Lepper (1987) noted that curiosity is one of the primary factors that drive learning and is related to the concept of mystery. Garris, Ahlers, and Driskell (2002) make the distinction between curiosity and mystery to reflect the difference between curiosity, which resides in the individual, and mystery, which is an external feature of the game itself.
Thus, mystery evokes curiosity in the individual, and this leads to the question of what constitutes mystery (Garris, Ahlers, & Driskell, 2002). Research suggests that mystery is enhanced by incongruity of information, complexity, novelty, surprise, and violation of expectations (Berlyne, 1960), incompatibility between ideas and inability to predict the future (Kagan, 1972), and information that is incomplete and inconsistent (Malone & Lepper, 1987).

Competition

Studies on competition within games and simulations have produced mixed results, due to preferences and reward structures. A study by Porter, Bird, and Wunder (1990-1991) examining competition and reward structures found that the greatest effects of reward structure were seen in the performance of those with the most pronounced attitudes toward either competition or cooperation. The results suggested that performance was better when the reward structure matched the individual's preference. According to the authors, the implications are that emphasis on competition will enhance the performance of some learners but will inhibit the performance of others (Porter, Bird, & Wunder, 1990-1991). Yu (2001) investigated the relative effectiveness of cooperation with and without intergroup competition in promoting student performance, attitudes, and perceptions toward the subject matter studied, computers, and interpersonal context. With fifth-graders as participants, Yu found that cooperation without intergroup competition resulted in better attitudes toward the subject matter studied and promoted more positive interpersonal relationships both within and among the learning groups than cooperation/competition did (Yu, 2001). The exchange of ideas and information both within and among the learning groups also tended to be more effective and efficient when cooperation did not take place in the context of intergroup competition (Yu, 2001).

Feedback

Feedback within games can also easily be provided in order for learners to quickly evaluate their progress against the established game goal. This feedback can take many forms, such as textual, visual, and aural (Rieber, 1996). According to Ricci, Salas, and Cannon-Bowers (1996), within the computer-based game environment, feedback is provided in various forms including audio cues, score, and remediation immediately following performance. They argued that these feedback attributes can produce significant differences in learner attitudes, resulting in increased attention to the learning environment.

Fun

Learning that is fun appears to be more effective (Lepper & Cordova, 1992). Quinn (1994, 1997) argued that for games to benefit educational practice and learning, they need to combine fun elements with aspects of instructional design and system design that include motivational, learning, and interactive components. According to Malone (1981a, 1981b), three elements (fantasy, curiosity, and challenge) contribute to the fun in games (as cited in Amory et al., 1999). While fun has been cited as important for motivation and, ultimately, for learning, there is no empirical evidence supporting the concept of fun. This might be due to the fact that fun is not a construct but, rather, represents other concepts or constructs. Relevant alternative concepts or constructs are play, engagement, and flow. Play is entertainment without fear of present or future consequences; it is fun (Resnick & Sherer, 1994).
According to Rieber, Smith, and Noah (1998), play describes the intense learning experience in which both adults and children voluntarily devote enormous amounts of time, energy, and commitment and, at the same time, derive great enjoyment from the experience. This is termed serious play to distinguish it from other interpretations which may have negative connotations (Rieber, Smith, & Noah, 1998). Webster et al. (1993) found that labeling software training as play resulted in improved motivation and performance. According to Rieber and Matzko (2001), serious play is an example of an optimal life experience. Csikszentmihalyi (1975, 1990) defines an optimal experience as one in which a person is so involved in an activity that nothing else seems to matter, termed flow or a flow experience. When completely absorbed in an activity, he or she is "carried by the flow," hence the origin of the theory's name (Rieber & Matzko, 2001). Rieber and Matzko (2001) also contend that a person may be considered in flow during an activity when experiencing one or more of the following characteristics: hours pass with little notice; challenge is optimized; feelings of self-consciousness disappear; the activity's goals and feedback are clear; attention is completely absorbed in the activity; one feels in control; and one feels freed from other worries. And according to Davis and Wiedenbeck (2001), an activity that is highly intrinsically motivating can become all-encompassing to the extent that the individual experiences a sense of total involvement, losing track of time, space, and other events. Davis and Wiedenbeck also argued that the interaction style of a software package is expected to have a significant effect on intensity of flow. However, Rieber and Matzko contend that play and flow differ in one respect: learning is an expressed outcome of serious play but not of flow (Rieber & Matzko, 2001).

Engagement is defined as a feeling of directly working on the objects of interest in the world rather than on surrogates. According to Davis and Wiedenbeck, this interaction or engagement can be used along with the components of Malone and Lepper's (DATE) intrinsic motivation model to explain the effect of an interaction style on intrinsic motivation, or flow (Davis & Wiedenbeck, 2001). Garris, Ahlers, and Driskell (2002) commented that training professionals are interested in the intensity of involvement and engagement that computer games can invoke, and that the "holy grail" of training professionals is to harness the motivational properties of computer games to enhance learning and accomplish instructional objectives. Garris, Ahlers, and Driskell further argued that engagement in game play leads to the achievement of training objectives and specific learning outcomes (Garris, Ahlers, & Driskell, 2002).

LEARNING/OUTCOMES

Druckman (1995) concluded that games seem to be effective in enhancing motivation and increasing student interest in subject matter, yet the extent to which this translates into more effective learning is less clear (Garris, Ahlers, & Driskell, 2002). Anything that contributes to the increase of emotion (the quality of the design of video games, for example) reinforces the attraction of the game but not necessarily its educational interest (Brougere, 1999, p. 140).
Although students generally seem to prefer games over other, more traditional, classroom training media, reviews have reported mixed results regarding the training effectiveness of games (Garris, Ahlers, & Driskell, 2002). In sum, liking the simulation does not translate to learning (Salas, Bowers, & Rhodenizer, 1998). According to Ricci, Salas, and Cannon-Bowers (1996), the results of their study provided evidence that computer-based gaming can enhance learning and retention of knowledge. They further commented that positive trainee reaction might increase the likelihood of student involvement with training (i.e., devoting extra time to training), but it is not a necessary factor for enhanced learning.

Simulations and games have been cited as beneficial to a number of disciplines and for a number of educational and training situations, including aviation training (Salas, Bowers, & Rhodenizer, 1998), aviation crew resource management (Baker et al., 1993), military mission preparation (Spiker & Nullmeyer, n.d.), laboratory simulation (Betz, 1995), chemistry and physics education (Khoo & Koh, 1998), urban geography and planning (Adams, 1998; Betz, 1995), farm and ranch management (Cross, 1993), language training (Hubbard, 1991), disaster management (Stolk, Alexandrian, Gros, & Paggio, 2001), and medicine and health care (Westbrook & Braithwaite, 2001; Yair, Mintz, & Litvak, 2001). For business, games and simulations have been cited as useful for teaching strategic planning (Washburn & Gosen, 2001; Wolfe & Roge, 1997), finance (Santos, 2002), portfolio management (Brozik & Zapalska, 2002), marketing (Washburn & Gosen, 2001), knowledge management (Leemkuil, de Jong, de Hoog, & Christoph, 2003), and media buying (King & Morrison, 1998). Playing games is a way of learning laws of logic and methods of thinking. Older adults can benefit from these experiences as much as younger populations (Weisman, 1994).

In addition to teaching domain-specific skills, games have been used to teach generalized skills. Since the mid-1980s, a number of researchers have used the game Space Fortress, a 2-D, simplistic arcade-style game with a hexagonal "fortress" in the center of the screen surrounded by two concentric hexagons and a space ship, to improve abilities that transferred far outside gameplay, such as improving the results of fighter pilot training (Day, Arthur, & Gettman, 2001). According to several researchers (Day, Arthur, & Gettman, 2001; Gopher, Weil, & Bareket, 1994; Shebilske, Regian, Arthur, & Jordan, 1992), Space Fortress includes "important information-processing and psychomotor demands" (p. 1024).

When the business game method was pitted against the case approach, and when case-based evaluation criteria were not employed, the game approach was superior to cases in producing knowledge gains. Less can be stated, however, regarding the relationship between gaming procedures and learning outcomes (Wolfe, 1997). Whether verbal information, motor skills, or intellectual skills are the object of the instruction, computer games can be designed to address specific learning outcomes (Dempsey, Haynes, Lucassen, & Casey, 2002).

In a series of five experiments, Green and Bavelier (2003) showed the potential of video games to alter visual selective attention, using a popular action video game, Medal of Honor (by Electronic Arts). The control group played Tetris, a popular game requiring visual-motor control.
While both the treatment group and the control group improved in visual selective attention, the amount of improvement was significantly higher in the treatment group (Green & Bavelier, 2003). By forcing players to simultaneously juggle a number of varied tasks (detect new enemies, track existing enemies, and avoid getting hurt, among other tasks), action-video-game playing pushes the limits of the three rather different aspects of visual attention. It leads to detectable effects on new tasks and at untrained locations after only 10 days of training. Therefore, although video-game playing may seem to be rather mindless, it is capable of radically altering visual attention processing (Green & Bavelier, 2003).

In a series of two experiments with college students (8 video game experts and 8 novices) testing visual attention strategies, experienced video game players showed a marked ability to avoid focusing on low-probability areas for target objects, as compared to novice video game players. And compared to novices, video game experts were faster responders at both the low- and high-probability locations (Greenfield, DeWinstanley, Kilpatrick, & Kaye, 1994). Taken together, the two experiments showed that skilled or expert video game players had better skills for monitoring two locations on a visual screen and that experimental video game practice could alter the strategies of attentional deployment so that the response time for the low-probability target was reduced (Greenfield, DeWinstanley, Kilpatrick, & Kaye, 1994).

According to Leemkuil, de Jong, de Hoog, and Christoph (2003), much of the work on the evaluation of games has been anecdotal, descriptive, or judgmental, but there are some indications that they are effective and superior to case studies in producing knowledge gains, especially in the area of strategic management (Wolfe, 1997). According to Garris, Ahlers, and Driskell (2002), in an early meta-analysis of the effectiveness of simulation games, Dekkers and Donatti (1981) found a negative relationship between duration of training and training effectiveness. Simulation games became less effective the longer they were used (suggesting that perhaps trainees became bored over time).

The ultimate test of knowledge and skill acquisition is usually not in the knowing but in the ability to use knowledge appropriately—in the translation of knowledge into behavior (Ruben, 1999). Moreover, coming to know, and especially being able to use knowledge and skills generally, requires reinforcement, application, repetition, and often practice in a variety of settings and contexts, in order for it to become fully understood, integrated, and accessible in future situations (Ruben, 1999). According to Ricci, Salas, and Cannon-Bowers (1996), the proper assessment of training effects involves the examination of transfer or retention of the skills toward which training was directed.

According to assimilation theory, there are two kinds of learning: rote learning and meaningful learning. Rote learning occurs through repetition and memorization. It can lead to successful performance in situations identical or very similar to those in which a skill was initially learned. However, skills gained through rote learning are not easily extensible to other situations, because they are not based on deep understanding of the material learned.
Meaningful learning, on the other hand, equips the learner for problem solving and extension of learned concepts to situations different from the context in which the skill was initially learned (Davis & Wiedenbeck, 2001; Mayer, 1981). Meaningful learning takes place when the learner draws connections between the new material to be learned and related knowledge already in long-term memory, known as the "assimilative context" (Ausubel, 1963; Davis & Wiedenbeck, 2001). Meaningful learning results in an understanding of the basic concepts of the new material through its integration with existing knowledge (Davis & Wiedenbeck, 2001).

However, there is general consensus that learning with interactive environments such as games, simulations, and adventures is not effective when no instructional measures or support are added (Leemkuil, de Jong, de Hoog, & Christoph, 2003). According to Thiagarajan (1998), simulations can be used for instruction, awareness, performance assessment, team building, transfer, research, and therapy. However, if not embedded with sound instructional design, games and simulations often end up as truncated exercises mislabeled as simulations (Gredler, 1996). Gredler further commented that poorly developed exercises are not effective in achieving the objective for which simulations are most appropriate: developing students' problem-solving skills (Gredler, 1996). In other words, outcomes are affected by the instructional strategies employed (Wolfe, 1997). The generally accepted position is that games themselves are not sufficient for learning but that there are elements of games that can be activated within an instructional context that may enhance the learning process (Garris, Ahlers, & Driskell, 2002). There are a number of empirical studies that have examined the effects of game-based instructional programs on learning (Garris, Ahlers, & Driskell, 2002). For example, both Whitehall and McDonald (1993) and Ricci et al. (1996) found that instruction incorporating game features led to improved learning (Garris, Ahlers, & Driskell, 2002).

de Jong and van Joolingen (1998), after reviewing a large number of studies on learning from simulations, concluded, "there is no clear and univocal outcome in favor of simulations. An explanation why simulation based learning does not improve learning results can be found in the intrinsic problems that learners may have with discovery learning" (p. 181). These problems are related to processes such as hypothesis generation, design of experiments, interpretation of data, and regulation of learning. After analyzing a large number of studies, de Jong and van Joolingen (1998) concluded that adding instructional support to simulations might help to improve the situation (Leemkuil, de Jong, de Hoog, & Christoph, 2003).

Berson (1996) cited Becker (1990) as stating that common problems encountered throughout the literature on computer effectiveness in the social studies include: (a) design flaws exacerbated by poor data collection procedures; (b) inadequate analysis of data and insufficient presentation of results; and (c) poor description of the methodology, including the setting and conditions under which the program was implemented.
With regard to research into the effectiveness of computers in social studies, methodological problems persist in the areas of insufficient treatment definitions and descriptions, inadequate sampling procedures, and incomplete reporting of statistical results. Overall, there is a paucity of empirical evidence, and most conclusions are impressionistic. Consequently, there is not satisfactory evidence on which to base decisions to integrate computers into social studies instruction (Berson, 1996). While benefits were found with MIF, it was not compared to other forms of learning; therefore, it is unclear what "benefit" could be derived from using the software (Henderson, Klemes, & Eshet, 2000). When students were asked to assess what they learned from the learning unit, students reported learning negotiating skills and the role that timing and deadlines played in the buying and selling process. Learning outcomes were not assessed (King & Morrison, 1998).

In meta-analyzing a number of studies and meta-analyses on video games, Lee (1999) commented that effect size never tells us under what conditions students learn more, less, or not at all compared with the comparison group. For instructional prescription, we need information dealing with instructional variables, such as instructional mode, instructional sequence, knowledge domain, and learner characteristics. If we don't know how these variables are connected to learning outcomes, there is no way to prescribe appropriate conditions of instruction for specific target learners. As a result, findings of these studies cannot contribute to the quality of instruction in various educational settings (Lee, 1999).

REFLECTION/DEBRIEF

A game cannot be designed to directly provide learning. A moment of reflexivity is required to make transfer and learning possible. Games require reflection, which enables the passage from play to learning. Therefore, debriefing (after-action review) appears to be an essential contribution to research on play and gaming in education (Brougere, 1999). Participants in a simulation are not in a position to learn anything worthwhile unless they are required and encouraged to reflect on the experience through the process of debriefing. Structured approaches to debriefing are more effective for learning than unstructured approaches (Thiagarajan, 1998).

Debriefing is the review and analysis of events that occurred in the game itself. Debriefing provides a link between what is represented in the simulation/gaming experience and the real world. It allows the participant to draw parallels between game events and real-world events. The debriefing allows us to transform game events into learning experiences. Debriefing may include a description of events that occurred in the game, analysis of why they occurred, and the discussion of mistakes and corrective actions. Learning by doing must be coupled with the opportunity to reflect on and abstract relevant information for effective learning to occur and for learners to link knowledge gained to the real world. Debriefing and scaffolding techniques provide the guidance and support to aid this process (Garris, Ahlers, & Driskell, 2002). Students should be prepared and encouraged to study and critique the simulation model, as it provides them with a degree of insight into urban processes that is not otherwise possible (Adams, 1998).
Results of the Leemkuil, de Jong, de Hoog, and Christoph (2003) study indicated that the four-phase approach (introduction, instruction, engagement, and reflection/debriefing) was an effective approach to learning. The debriefing process not only provided the opportunity to improve learning; it also provided the opportunity for feedback on the simulation, which resulted in many suggestions for improvement. For example, users indicated the game could be more challenging and competitive (Leemkuil, de Jong, de Hoog, & Christoph, 2003).

Using a simulation game, the "experimentarium," Rosenorn and Kofoed (1998) examined the nature of various phases of reflection in a learning process: reflection-before-action, reflection-in-action, and reflection-on-action. Reflection-before-action occurs before learning and involves learners considering the types of problems they hope to solve more successfully. Reflection-in-action occurs during a pause in learning and involves learners reviewing goals, asking themselves whether they are moving in the right direction, and making necessary adjustments. Reflection-on-action occurs after learning is completed, when learners consider what was learned, why the outcome turned out as it did, and how the outcome may be applied in the near future. According to the authors, these reflection periods contribute to the depth and durability of the learning as well as to changes in attitudes. To examine their assumptions, participants worked in the experimentarium, a virtual room for experiments that, according to the researchers, are removed from the daily life of an organization, giving employees, managers, and consultants the opportunity to develop and test new ideas.

Tools such as performance measurement, cognitive and task analysis, scenario design, and feedback and debriefing mechanisms are necessary to ensure learning in simulation-based training systems (Salas, Bowers, & Rhodenizer, 1998). Debriefing is too important to be added on as an afterthought to an interactive simulation, especially one used for training, increasing awareness, or team building. No simulation package can be considered complete without an extensive debriefing guide (Thiagarajan, 1998).

CONCLUSION/DISCUSSION/SUMMARY

Modern computer technology has made possible a new and rich learning environment: the simulation. In an instructional simulation, students learn by actually performing activities to be learned in a context that is similar to the real world. Instructional simulation is used in most cases as unguided discovery learning. Students can generate and test hypotheses in a simulated environment by examining changes in the environment based on their input. Unlike traditional classroom instruction, in which students' roles are passive in most cases, this particular type of instruction requires students to be involved in their learning in an active way (Lee, 1999).

Educators and trainers began to take notice of the power and potential of computer games for education and training back in the 1970s and 1980s (Donchin, 1989; Malone, 1981; Malone & Lepper, 1987; Ramsberger, Hopwood, Hargan, & Underfull, 1983; Ruben, 1999; Thomas & Macredie, 1994).
Computer games were hypothesized to be potentially useful for instructional purposes and were also hypothesized to provide multiple benefits: (a) complex and diverse approaches to learning processes and outcomes; (b) interactivity; (c) the ability to address cognitive as well as affective learning issues; and, perhaps most importantly, (d) motivation for learning. According to Ricci, Salas, and Cannon-Bowers (1996), motivation can be defined as "the direction, intensity, and persistence of attentional effort invested by the trainee toward training."

Currently, the increased power and flexibility of computer technology is contributing to renewed interest in games and simulations. This development coincides with the current perspective of effective instruction in which meaningful learning depends on the construction of knowledge by the learner. Games and simulations, which can provide an environment for learners' construction of new knowledge, have the potential to become a major component of this focus (Gredler, 1996).

If we are able to participate in games and simulations, it is because as children we learned to master rules. We may even ask ourselves whether play does not prepare us for a number of learning situations characterized by a more or less explicit dimension of simulation, which supposes mastery of the second degree and of rules specific to certain situations. This is probably why the Romans had the same name, ludus, for play and for school, and why the teacher was called magister ludi (Brougere, 1999). Like other forms of instruction, simulations and games are likely to be more effective with some students than with others (Gredler, 1996). According to Ruben (1999), the theoretical foundations for simulations, games, and other forms of interactive, experience-based learning had been in place at least since the writings of Aristotle and the practices of Socrates.

The public should not accept the rhetoric that technology makes learning easier and more efficient, because ease and efficiency are not prerequisite conditions for deep and meaningful learning (Schacter & Fagnano, 1999). Schacter and Fagnano (1999) make the more important distinction that computer technologies, when designed according to sound learning theory and pedagogy, have substantially improved, and can substantially improve, student learning. Computer-based instruction (CBI) has been shown to moderately improve student learning and achievement (Schacter & Fagnano, 1999). Schacter and Fagnano (1999) conducted a meta-analysis of 12 meta-analyses on computer-based instruction, comprising a total of 546 individual studies, with subjects from elementary, secondary, precollege, special, and college institutions. When computer technologies are designed around principles gleaned from learning theories and implemented systematically, one can argue that the effect that these technologies have on student learning and achievement is both powerful and transformative. Technologies designed around educational and psychological theory compare favorably to other education reform efforts because they have embedded proven teaching principles into the technology. Thus, one gets the effects of both the teaching reform and the technology (Schacter & Fagnano, 1999). Taylor, Renshaw, and Jensen's (1997) findings from two experiments involving high school students suggest that the effectiveness of CAI may go beyond basic cognitive processes, such as rote memory.
While previous work has found that CAI can be as effective as traditional teaching methods for rote memory, it has not always been shown to be more effective (Taylor, Renshaw, & Jensen, 1997). According to Cobb (1997), Clark's work has prompted educators to be skeptical of inflated media claims; to notice when expensive media are promoted where cheap media would do; to center instructional designs on the learner rather than the medium; and to track learning effects to instructional causes at the lowest level of analysis possible (medium attribute rather than medium per se, method rather than medium, message rather than method).

There have been a great number of experimental studies examining the instructional value of simulation. In most of these studies, researchers used expository instructional methods, such as traditional classroom lectures or computer-based tutorials, for comparison groups. The research results from these studies were conflicting (Lee, 1999).

Instructional games offer the opportunity for the learner to learn by doing, to become engaged in authentic learning experiences. However, people do not always learn by doing. Sometimes we learn by observing; sometimes we learn by being told. "Learners are not passive blotters at which we toss information; nor are they active sponges that absorb all they experience unaided. We must temper our enthusiasm for the gaming approach with knowledge that instructional games must be carefully constructed to provide both an engaging first-person experience as well as appropriate learner support" (Garris, Ahlers, & Driskell, 2002, p. 461).

Games, simulations, and case studies have an important role in education and training in putting learning into a context. Furthermore, they are constructivist environments in which students are invited to actively solve problems. Games and simulations provide students with a framework of rules and roles through which they can learn interactively through a live experience. They can tackle situations they might not be prepared to risk in reality, and they can experiment with new ideas and strategies (Leemkuil, de Jong, de Hoog, & Christoph, 2003). They involve individual and group interpretations of given information, the capacity to suspend disbelief, and a willingness to play with the components of a situation in making new patterns and generating new problems (Jaques, 1995, as cited in Leemkuil, de Jong, de Hoog, & Christoph, 2003). A type of learning environment that is very close to games is the simulation. Simulations resemble games in that both contain a model of some kind of system, and learners can provide input (changes to variable values or specific actions) and observe the consequences of their actions (Leemkuil, de Jong, de Hoog, & Christoph, 2003).

Play is traditionally viewed as applying only to young children (Rieber, 1996). There is also a sense of risk attached to suggesting an adult is at play. Work is respectable; play is not. Another misconception is that play is easy. Quite the contrary: even as adults we tend to engage in unusually challenging and difficult activities when we play, such as sports, music, hobbies, and games like chess (Rieber, 1996). Play is a difficult concept to define. Play appears to be one of those constructs that is obvious at the tacit level but extremely difficult to articulate in concrete terms—we all know it when we see it or experience it. Its definition can also be culturally and politically constrained (Rieber, 1996).
Nevertheless, according to Rieber (1996), play is generally defined as having the following attributes: (a) it is usually voluntary; (b) it is intrinsically motivating, that is, it is pleasurable for its own sake and is not dependent on external rewards; (c) it involves some level of active, often physical, engagement; and (d) it is distinct from other behavior by having a make-believe quality (Blanchard & Cheska, 1985; Csikszentmihalyi, 1990; Pellegrini, 1995; Pellegrini & Smith, 1993; Yawkey & Pellegrini, 1984). The commonsense tendency is for people to define play as the opposite of work, but this is misleading (Rieber, 1996). Computer games offer a new possibility for wedding motivation and self-regulated learning within a constructivist framework, one which strives to combine training and education, practice and reflection, into a seamless learning experience (Rieber, Smith, & Noah, 1998). There are clearly many media for any instructional job, but this does not mean they all do it at the same level of efficiency—whether economic, logistical, social, or cognitive (Cobb, 1997).

References for Question 1

Adams, P. C. (1998, March/April). Teaching and learning with SimCity 2000.
Amory, A., Naicker, K., Vincent, J., & Adams, C. (1999). The use of computer games as an educational tool: Identification of appropriate game types and game elements.
Asakawa, T., & Gilbert, N. (2003). Synthesizing experiences: Lessons to be learned from Internet-mediated simulation games.
Baker, D., Prince, C., Shrestha, L., Oser, R., & Salas, E. (1993). Aviation computer games for crew resource management training.
Barnett, M. A., Vitaglione, G. D., Harper, K. K. G., Quackenbush, S. W., Steadman, L. A., & Valdez, B. S. (1997). Late adolescents' experiences with and attitudes toward videogames.
Bauza, G. B., & Gelabert, M. E. (1995, June). The Hakayak's last odyssey: A computer game with a difference.
Bell, H. H., & Crane, P. M. (1993). Training utility of multiship air combat simulation.
Benne, M. R., & Baxter, K. K. (1998). An assessment of two computerized vocabulary games reveals that players improve as a result of review.
Bernard, K. J. (1997, December). Strategic management games: A review [Electronic Version]. Simulation & Gaming Special Issue: Teaching Strategic Management, 28(4), 395-422.
Berson, M. J. (1996, Summer). Effectiveness of computer technology in the social studies: A review of the literature. Journal of Research on Computing in Education, 28(4), 486-499.
Betz, J. A. (1995/1996). Computer games: Increase learning in an interactive multidisciplinary environment.
Brougere, G. (1999, June). Some elements relating to children's play and adult simulation/gaming.
Brozik, D., & Zapalska, A. (2002, June). The PORTFOLIO GAME: Decision making in a dynamic environment.
Carr, P. D., & Groves, G. (1998). The Internet-based operations simulation game.
Carvalho, G. F. (1991). Evaluating computerized business simulators for objective learning validity.
Choi, W. (1997). Designing effective scenarios for computer-based instructional simulations: Classification of essential features.
Cobb, T. (1997). Cognitive efficiency: Toward a revised theory of media. Educational Technology Research and Development, 45(4), 21-35.
Cross, T. L. (1993, Fall). AgVenture: A farming strategy computer game.
Davis, S., & Wiedenbeck, S. (2001). The mediating effects of intrinsic motivation, ease of use and usefulness perceptions on performance in first-time and subsequent computer users. Interacting with Computers, 13, 549-580.
Dempsey, J. V., Haynes, L. L., Lucassen, B. A., & Casey, M. S. (2002). Forty simple computer games and what they could mean to educators.
Duke, R. D. (1995, December). Gaming: An emergent discipline. Simulation & Gaming, Silver Anniversary Issue (Part 4), 426-438.
Fabiani, M., Buckley, J., Gratton, G., Coles, M. G. H., Donchin, E., & Logie, R. (1989). The training of complex task performance.
Forbus, K. D. (2001). Articulate software for science and engineering education.
Garris, R., Ahlers, R., & Driskell, J. E. (2002). Games, motivation, and learning: A research and practice model.
Gopher, D., Weil, M., & Bareket, T. (1994). Transfer of skill from a computer game trainer to flight.
Gopher, D., Weil, M., & Siegel, D. (1989). Practice under changing priorities: An approach to the training of complex skills. Acta Psychologica, 71, 147-177.
Gredler, M. E. (1996). Educational games and simulations: A technology in search of a research paradigm.
Green, C. S., & Bavelier, D. (2003, May 29). Action video game modifies visual selective attention.
Greenfield, P. M., deWinstanley, P., Kilpatrick, H., & Kaye, D. (1996). Action video games and informal education: Effects on strategies for dividing visual attention.
Henderson, L., Klemes, J., & Eshet, Y. (2000). Just playing a game? Educational simulation software and cognitive outcomes.
Herselman, M. E. (2000). University students benefiting from the medium of computer games: A case study.
Hindle, K. (2002, June). A grounded theory for teaching entrepreneurship using simulation games.
Hubbard, P. (1991, June). Evaluating computer games for language learning.
Keys, J. B. (1997). Strategic management games: A review [Electronic Version].
Khoo, G.-S., & Koh, T.-S. (1998). Using visualization and simulation tools in tertiary science education [Electronic Version].
King, K. W., & Morrison, M. (1998, Autumn). A media buying simulation game using the Internet.
Kirriemuir, J. (2002, February). Video gaming, education, and digital learning technologies: Relevance and opportunities.
Lee, J. (1999). Effectiveness of computer-based instructional simulation: A meta-analysis.
Leemkuil, H., de Jong, T., de Hoog, R., & Christoph, N. (2003). KM Quest: A collaborative Internet-based simulation game.
Malouf, D. (1987-1988). The effect of instructional computer games on continuing student motivation.
Moreno, R., & Mayer, R. E. (2002). Learning science in virtual reality multimedia environments: Role of methods and media.
Noyes, J. M., & Garland, K. J. (2003). Solving the Tower of Hanoi: Does mode of presentation matter? Computers in Human Behavior, 19, 579-592.
Park, O.-C., & Gittelman, S. S. (1995). Dynamic characteristics of mental models and dynamic visual displays. Instructional Science, 23, 303-320.
Porter, D. B. (1995). Computer games: Paradigms of opportunity.
Prislin, R., Jordan, J. A., Worchel, S., Semmer, F. T., & Shebilske, W. L. (1996, September). Effects of group discussion on acquisition of complex skills. Human Factors, 38(3), 404-416.
Resnick, H. (1994). Introduction: Electronic tools for education and training.
Resnick, H., & Sherer, M. (1994). Computerized games in the human services: An introduction.
Ricci, K. E. (1994, Summer). The use of computer-based videogames in knowledge acquisition and retention.
Ricci, K. E., Salas, E., & Cannon-Bowers, J. A. (1996). Do computer-based games facilitate knowledge acquisition and retention?
Rieber, L. P. (1996). Seriously considering play: Designing interactive learning environments based on the blending of microworlds, simulations, and games.
Rieber, L. P., & Matzko, M. J. (2001, January/February). Serious design for serious play in physics.
Rieber, L. P., Smith, L., & Noah, D. (1998, November/December). The value of serious play.
Rosenorn, T., & Kofoed, L. B. (1998). Reflection in learning processes through simulation/gaming. Simulation & Gaming, 29(4), 432-440.
Ruben, B. D. (1999, December). Simulations, games, and experience-based learning: The quest for a new paradigm for teaching and learning.
Salas, E., Bowers, C. A., & Rhodenizer, L. (1998). It is not how much you have but how you use it: Toward a rational use of simulation to support aviation training. The International Journal of Aviation Psychology, 8(3), 197-208.
Salies, T. G. (2002). Promoting strategic competence: What simulations can do for you.
Salomon, G. (1983). The differential investment of mental effort in learning from different sources. Educational Psychologist, 18(1), 42-50.
Santos, J. (2002, Winter). Developing and implementing an Internet-based financial system simulation game.
Schacter, J., & Fagnano, C. (1999). Does computer technology improve student learning and achievement? How, when and under what conditions? Journal of Educational Computing Research, 20(4), 329-343.
Spiker, V. A., & Nullmeyer, R. T. (n.d.). Benefits and limitations of simulation-based mission planning and rehearsal. Unpublished manuscript.
Standen, P. J., Brown, D. J., & Cromby, J. J. (2001). The effective use of virtual environments in the education and rehabilitation of students with intellectual disabilities.
Stewart, K. M. (1997, Spring). Beyond entertainment: Using interactive games in web-based instruction.
Stolk, D., Alexandrian, D., Gros, B., & Paggio, R. (2001). Gaming and multimedia applications for environmental crisis management training.
Story, N., & Sullivan, H. J. (1986, November/December). Factors that influence continuing motivation. Journal of Educational Research, 80(2), 86-92.
Taylor, G. L., & Disinger, J. F. (1997, Spring). The potential role of virtual reality in environmental education [Electronic Version]. The Journal of Environmental Education, 28, 38-43.
Taylor, H. A., Renshaw, C. E., & Jensen, M. D. (1997). Effects of computer-based role-playing on decision making skills. Journal of Educational Computing Research, 17(2), 147-164.
Tennyson, R. D., & Breuer, K. (2002). Improving problem solving and creativity through use of complex-dynamic simulations.
Thiagarajan, S. (1998, September/October). The myths and realities of simulations in performance technology. Educational Technology, 38(4), 35-41.
Thiagarajan, S. (2001, May). Fun in the workplace.
Waller, D. (2000). Individual differences in spatial learning from computer-simulated environments. Journal of Experimental Psychology, 6(4), 307-321.
Waller, D., Knapp, D., & Hunt, E. (2001, Spring). Spatial representations of virtual mazes: The role of visual fidelity and individual differences. Human Factors, 43(1), 147-158.
Walters, B. A., Coalter, T. M., & Rasheed, A. M. A. (1997). Simulation games in business policy courses: Is there value for students [Electronic Version]?
Washbush, J., & Gosen, J. (2001, September). An exploration of game-derived learning in total enterprise simulations.
Weisman, S. (1994). Computer games for the frail elderly. In H. Resnick (Ed.), Electronic tools for social work practice and education (pp. 229-234). Binghamton, NY: The Haworth Press.
Westrom, M., & Shaban, A. (1992, Summer). Intrinsic motivation in microcomputer games.
Whitehill, B. V., & McDonald, B. A. (1993, September). Improving learning persistence of military personnel by enhancing motivation in a technical training program. Simulation & Gaming, 24(3), 294-313.
Winn, W., & Jackson, R. (1999, July/August). Fourteen propositions about educational uses of virtual reality.
Wolfe, J. (1997, December). The effectiveness of business games in strategic management course work [Electronic Version].
Wolfe, J., & Roge, J. N. (1997, December). Computerized general management games as strategic management learning environments [Electronic Version].
Yair, Y., Mintz, R., & Litvak, S. (2001). 3D-virtual reality in science education: An implication for astronomy teaching.
Yildiz, R., & Atkins, M. (1996, May). The cognitive impact of multimedia simulations on 14-year-old students [Electronic Version].
Yu, F.-Y. (2001). Competition within computer-assisted cooperative learning environments: Cognitive, affective, and social outcomes. Journal of Educational Computing Research, 24(2), 99-117.

2. Review the theoretical and empirical literature on the relationship of cognitive load to learning. Please, include a discussion of cognitive load in relationship to interactive media (e.g., multimedia and games). Be sure to focus on types of cognitive load (e.g., intrinsic, germane, and extraneous load).

Educational technology as a field now seems in a mood to move beyond the issue of whether media contribute to learning, to acknowledge that media are here to stay in any case, and to drop the learning issue without resolving it (Cobb, 1997). However, Cobb (1997) contends that the issue can be resolved in a more principled manner with one minor adjustment to Clark's position. He suggested that if a recurrent concept in Clark's discourse, "efficiency," is expanded to include "cognitive efficiency," then media choices become connected to learning, in some circumstances (Cobb, 1997). For first-time users, engagement appears to have resulted simply from the novelty of learning a new computer application, regardless of the interaction style (Davis & Wiedenbeck, 2001).

When the human operator has to master a very complex task, it may be advisable to train different task components separately (Fabiani et al., 1989). Briggs and Naylor (1962) and Naylor and Briggs (1963) proposed that two dimensions are crucial in determining the amenability of a task to part training: task complexity and task organization. Complexity refers to the load imposed by each task component taken in isolation, while organization refers to the processing demands that originate from the interactions among different task components. Briggs and Naylor claim that part-task training is most efficient when task complexity is high while task organization is low. This is because it is under these conditions that practice on individual task components makes it easier for the trainee to determine the optimal means for dealing with each part, without the distraction imposed by other task components. Thus, the trainee's conception of the task is clarified and transfer of training can occur (Fabiani et al., 1989). However, one of the main advantages of part-task training—enabling the subjects to perform parts in isolation—is also one of its main drawbacks.
This is because skills practiced in isolation may not integrate well with each other, and may not transfer well to the whole task. In addition, even if one agrees that it would be desirable to adopt some form of part-task training, it is not always obvious how the task should be partitioned into components (Fabiani et al., 1989).

Gopher and his colleagues (Gopher et al., 1989) combined concepts derived from schema models and attention theory to develop an approach to training that depends upon shifts in attention and emphasis. They assumed that it is preferable to expose the subject to the entire task throughout the training period. This assures integration by avoiding the partitioning of the task. However, part-task training was achieved by emphasizing different task components during different phases of training. This allowed the trainee to focus on the component being emphasized without losing sight of the whole task (Fabiani et al., 1989).

The hierarchical approach to training, developed by Frederiksen and White (1989), drew from theories of the role of mental models in learning to devise a set of "problem environments" which shaped the development of the trainee's mental model. In addition, Frederiksen and White determined the subject's optimal strategy through an analysis of the task based on a principled decomposition of its component skills. Then, a battery of training sub-tasks, none of which need bear any similarity to the whole task, was developed. The training was designed to emphasize the hierarchical nature of the sub-tasks. Sub-tasks administered later in training required skills taught in previous sub-tasks, and the subject was led to incorporate the elements of the optimal strategy in an integrated fashion (Fabiani et al., 1989).

Both regimes (emphasis change and hierarchical) were successful in improving subjects' performance in a complex perceptual-motor task—the Space Fortress game. Those who received both treatments achieved the highest performance improvement, yet the hierarchical group achieved the highest performance in absolute terms. A repeat of the studies at the University of Illinois produced less extreme results; the differences were potentially attributed to differences in subjects (Fabiani et al., 1989). In addition, and perhaps more importantly, the exposure to the whole task—the standard Space Fortress game—varied considerably for subjects in the two training regimes. Participants in the Gopher et al. study played the whole game 200% to 400% more often than those in the Frederiksen and White study. Therefore, differences may have been due to differences in familiarity with the whole task (Fabiani et al., 1989). In the Fabiani et al. study comparing an integrated and a hierarchical approach to learning Space Fortress, care was taken to eliminate the differences in training schedule and subject pool. It is equally important to assess the degree to which acquired skills are robust to interference. Therefore, the researchers also examined the degree to which subjects were capable of performing the standard Space Fortress task concurrently with several other tasks. These concurrent tasks formed a battery designed to assess the load placed on different components of working memory (Fabiani et al., 1989). The study included 33 university students (all right-handed males, 18-24 years old).
There were three groups: a control group that learned to play the game as a whole task, a treatment group (the integrated group) that received emphasis training while playing the whole game, and the hierarchical group that trained on subtasks and eventually the entire game (Fabiani et al., 1989). The integrated group began with scores below the control group, but eventually outperformed the control group. The hierarchical group performed more poorly than either of the other groups in early stages, but eventually outperformed both the control and the integrated groups. During the dual-task (interference) stage of the training, the hierarchical group's performance gap over the other two groups increased. However, the hierarchical group was least affected by the less disruptive secondary tasks but more affected by the more disruptive secondary tasks. The integrated training group was more resistant to disruption by the presence of concurrent secondary tasks than the other two groups (Fabiani et al., 1989). When performance was divided into 28 variables (e.g., number of fortress hits, shooting efficiency, percent of foe mines killed), the hierarchical group outperformed the integrated group on 20 of the variables and the control group on 22 of the variables.

In the initial stages of the study, a screening test of shooting ability was conducted. At the end of training, the low scorers in the hierarchical group performed best, followed by the control group, then the integrated group. It appears that integrated training is detrimental, or at best of no value, for subjects with low screening scores. For the high scorers, the curve of the integrated group is only slightly below that of the hierarchical group, and above the control group. And for the medium scorers, the curve of the integrated group is intermediate with respect to the curves of the other two groups (Fabiani et al., 1989).

In summary, on most performance variables, subjects trained with either of the part-task training methods achieved higher scores than did subjects trained on the whole task. The hierarchical group achieved superior performance when the game was performed alone. The integrated group's performance was more resistant to disruption by concurrently performed secondary tasks. The training regime and the initial capability of the subject, as measured by the aiming screening task, interacted in determining the effectiveness of training. Subjects who scored high in the screening task taken before training began did well regardless of the part-task training method to which they were subjected. On the other hand, the method of training did make a difference for subjects with low and medium screening scores. The hierarchical method was particularly beneficial, and the integrated method particularly detrimental, for those subjects who scored poorly on the screening task (Fabiani et al., 1989).

Improving science and engineering education is a critical problem for technological societies, which, in addition to needing scientists, engineers, and technicians, need a scientifically literate population in order to make wise decisions. Forbus (2001) believes a new kind of educational software, articulate software, can help solve this problem. Articulate software understands the domain being learned in human-like ways, and can provide explanations and coaching to help learners master it.
Articulate software is made possible by advances in artificial intelligence, particularly qualitative physics, combined with the ongoing revolution in computer technology (Forbus, 2001). By embedding human-like models of entities and processes in software, the software's understanding can be used to provide explanations that are directly coupled to how specific results are derived. These explanations can delve into topics that traditional software cannot handle, for example, why a process was considered to occur and why a specific approximation makes sense (Forbus, 2001).

In addition to their commercial popularity, computer games have captured the attention of training professionals and educators. There are several reasons for this professional interest. First, there has been a major shift in the field of learning from a traditional, didactic model of instruction to a learner-centered model that emphasizes a more active learner role. This represents a shift away from the "learning by listening" model of instruction to one in which students learn by doing (Garris, Ahlers, & Driskell, 2002).

Space Fortress has a substantial history as a research instrument for complex problem-solving tasks, and it is the instrument used in a number of the studies discussed here. According to Day, Arthur, and Gettman (2001), Space Fortress includes "important information-processing and psychomotor demands" (p. 1024). Space Fortress is a visually simplistic, 2-D video game, with a hexagonal "fortress" in the center of the screen surrounded by two concentric hexagons and a space ship. The ship's path and rotation are controlled by a joystick; missile firing is controlled by the mouse. Participants try to destroy the moving fortress by shooting missiles, while trying to avoid collisions with the fortress and with mines that periodically appear. Participants benefit by shooting foe mines, but are penalized for shooting friendly mines. Additionally, bonus events occur which require specific mouse actions. The ship operates in frictionless space, meaning that once it is in motion, it continues to move at a constant speed unless altered by another joystick movement to speed it up, slow it down, or stop it. Speed and movement can also affect score. The various events and conditions already described result in points being added or deducted. To achieve a maximum score, subjects must destroy the fortress, defend themselves, destroy all mines, manage their resources of missiles and bonus points, and avoid being hit by either the fortress or the mines (see Arthur, Strong, Jordan, Williamson, Shebilske, & Regian, 1995, for a detailed description of the game).

Gopher, Weil, and Bareket (1994) stated that both flight training and Space Fortress include continuous and discrete manual control, visual and spatial orientation, procedural knowledge involving long- and short-term memory information, and high attention demands under severe time constraints. Verbal communication was also introduced into the game to simulate the demands of the flight situation. Gopher, Weil, and Bareket applied two approaches to learning: emphasis change and hierarchical part-training. Under the emphasis change approach, subjects practiced the whole game at all times, but they were led through instructions and auxiliary feedback indicators to vary their focus of attention on different aspects of the game during different game trials.
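To make the idea of an emphasis-change schedule concrete, the following is a minimal, hypothetical sketch (in Python) of how emphasis might be rotated across blocks of whole-game trials. The component names, block size, and trial count are illustrative only and are not taken from Gopher, Weil, and Bareket (1994) or Fabiani et al. (1989).

    # Hypothetical sketch of an emphasis-change training schedule (illustrative only).
    # The trainee always plays the full game; only the component that receives
    # attention and auxiliary feedback changes from block to block.

    from itertools import cycle

    COMPONENTS = ["ship_control", "mine_handling", "resource_management"]  # placeholder labels

    def build_schedule(n_trials, block_size=4):
        """Return the emphasized component for each whole-game trial."""
        emphasis = cycle(COMPONENTS)
        current = next(emphasis)
        schedule = []
        for trial in range(n_trials):
            if trial > 0 and trial % block_size == 0:
                current = next(emphasis)  # rotate the emphasis at each new block
            schedule.append(current)
        return schedule

    if __name__ == "__main__":
        for trial, focus in enumerate(build_schedule(12), start=1):
            print(f"Trial {trial:2d}: whole game, feedback emphasizes {focus}")

The point of the sketch is simply that the whole task is never decomposed; only the component receiving attention and feedback changes from one block of trials to the next.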
Under this method, participants were exposed to the full load of the task and taught alternative ways of coping with it. In hierarchical part-training, the whole task is decomposed, and before subjects are introduced to the full game, they are led through a sequence of simplified part games, which gradually become more integrative and complex. Subjects are given verbal tips on recommended behavior, based on subject matter experts.

The Gopher, Weil, and Bareket (1994) study involved 58 flight school cadets, with one group learning Space Fortress and a control group that did not receive any video game training. One group of cadets, the full training (FT) group, was given both emphasis change and hierarchical part-task methods. The other group, the emphasis only training (EOT) group, was given emphasis change and attention-management procedures. The control group consisted of cadets who were matched in ability to the experimental groups. Cadets in each experimental group were trained for 10 one-hour sessions consisting of 10 to 14 trials of 2 or 3 minutes each. Transfer effects from the game training to actual flight were tested during eight flights (45-60 minutes each) of the transition stage to the high-performance jet trainer.

The results from the study by Gopher, Weil, and Bareket (1994) provide strong support for the emphasis change approach for teaching generalizable skills. Subjects in the FT group obtained significantly higher final game scores on all measures of game performance, compared with the EOT group. Despite the large differences in final game scores, the FT and EOT groups did not differ in subsequent flight performance. The game group was significantly better in its flight performance than the non-game group: about one-third of the subjects in the game group were included in the highest score category, whereas only 3.4% were in the lowest category. None of the non-game subjects were included in the highest score category, whereas 28.6% were in the lowest category. The game group increased its probability of graduation by 30%; indeed, the most significant result was that the percentage of graduates from the game group was twice that of the non-game group. The authors contend that part-task training appears to focus trainee attention on task-specific elements, while emphasis-change training results in more generalizable skills. Therefore, while the FT approach resulted in higher game scores (which would benefit from the specific focus) than the EOT approach (which emphasized generalizable skills), it did not transfer to higher scores in flight performance, because only the generalizable skills, which both groups acquired from the game experience, were transferable to actual flight.

Information that is not held in working memory will need to be retained by the long-term memory system. Storing more knowledge in long-term memory reduces the load on working memory, which results in a greater capacity being made available for active processing. When problem solving, if the various rules have been learned and their application practiced, this information can be held in long-term memory. Thus, once the individual is familiar with the problem, he or she will be in a better position to plan how to solve it. There are two important issues here. First is the role of experience in aiding the formation of mental representations, reducing memory loads, and facilitating planning activities.
Second are the implications of having a display of the problem that acts like an "external memory" and provides the user with information about the problem at all times. It is reasonable to conclude, therefore, that an important characteristic of using a computer is that it reduces the load on working memory (Noyes & Garland, 2003). The use of "memory" is of interest here, since it might be argued that the display screen is merely providing an external representation of the problem rather than a memory. However, whatever the terminology, there are many advantages to having this situation when problem solving (Noyes & Garland, 2003): it reduces the load on internal working memory; storing less information in internal working memory means there is less chance of forgetting information, which reduces the chance of the problem solver making errors; problem solvers may consider the task to be less cognitively complex because of the reduced load on working memory, and hence feel more confident about solving the problem; and it allows the user to focus on solving the problem rather than on remembering the rules (Noyes & Garland, 2003).

In their simplest form, verbal protocols require individuals to report their thoughts as they carry out the task. This is particularly appropriate for tasks that involve sequential processing, because it mirrors the consecutive nature of the thought processes; it is then relatively easy to talk through solving the problem. Verbal protocols, and in particular the so-called "thinking aloud" techniques, have been shown to aid problem solving, and this benefit has been well documented (Ahlum-Heath & DiVesta, 1986; Berry, 1983; Ericsson & Simon, 1993; as cited in Noyes & Garland, 2003).

The Tower of Hanoi is a well-known problem-solving task that has been used many times in experimental settings (see Anderson & Douglass, 2001; as cited in Noyes & Garland, 2003). It involves a problem space about which the problem solver has very little specific domain knowledge, and solvers need to acquire additional knowledge to decompose a goal into sub-goals. They need to learn how to evaluate the outcomes of their actions in order to sort the actions they carry out in terms of their contribution to solving a sub-goal (and ultimately, the overriding goal of solving the Tower of Hanoi puzzle). It is a relatively straightforward task with a set of very simple instructions that can be easily represented (Noyes & Garland, 2003). The Tower of Hanoi puzzle comprises a number of vertical pegs and doughnut-shaped disks of graduated sizes that fit onto these pegs. At the start of the problem-solving exercise, all the disks are arranged in pyramid form on one of the end pegs, with the largest disk on the bottom. The "problem" is to move all of the disks from this end peg to the other end peg, subject to a number of constraints: (1) only one disk can be moved at a time; (2) a disk cannot be placed on a disk that is smaller than itself; and (3) no disk can be put aside. Any number of disks can be used; the minimum number of moves is 2^N − 1, where N equals the number of disks. However, five disks and three pegs provide a problem of sufficient difficulty that can be solved within a relatively short period, as only 31 moves need to be carried out (Noyes & Garland, 2003).
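Because the optimal solution is generated by a simple recursion, the 2^N − 1 move count is easy to verify in code. The following minimal Python sketch (illustrative only; it is not part of the Noyes and Garland materials) produces the optimal move sequence and confirms that five disks require 31 moves.

    # Minimal recursive Tower of Hanoi solver (illustrative sketch only).
    # Moving n disks from a source peg to a target peg via a spare peg
    # takes 2**n - 1 moves, the minimum noted above.

    def hanoi(n, source, target, spare, moves):
        """Append the optimal move sequence for n disks to the list `moves`."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)  # clear the way for the largest disk
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

    if __name__ == "__main__":
        moves = []
        hanoi(5, "A", "C", "B", moves)
        print(len(moves))  # prints 31, i.e., 2**5 - 1

The point for the present discussion is not the algorithm itself but that the puzzle's rules and state are compact enough to be held either in the head or on an external display, which is what makes the task useful for examining working memory load.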
In summary, it was hypothesized that for a simple problem-solving task such as the Tower of Hanoi, having access to a model of the problem would benefit performance in terms of more successful problem solving (i.e., completion of the puzzle) and more efficient problem solving (i.e., fewer moves and faster times; Noyes & Garland, 2003). In the first experiment, participants made fewer moves using the mental representation than the physical and computer models, and more people gave up when trying to solve the puzzle using the physical model; the physical model also took the most moves. This suggests that problem solving "in the head" is more efficient than using a computer (Noyes & Garland, 2003).

The computer presentation of the Tower of Hanoi puzzle provides a means of representing the problem pictorially. Thus, it provides an "intermediate representation" between the physical and mental models. Compared to the physical model, manipulation of the computer-generated version of the puzzle was very easy and involved a "drag and drop" mouse operation to move the disks on the screen. Thus, individuals could very quickly elicit the desired moves; perhaps this ease of operation resulted in them not focusing on reaching the end-point by the most efficient means, so that a "trial and error" approach was adopted (Noyes & Garland, 2003). Problem solvers did not have to be too careful about making sure the next move was the right one (Noyes & Garland, 2003). In all three experiments, participants were faster when using the computer version of the puzzle in terms of moves per second (Noyes & Garland, 2003).

In the case of the Tower of Hanoi puzzle, making the moves relies on internal working memory, but the person also needs to apply the restriction rules to making the moves. This information, although not shown on the screen in the same way as the puzzle, is "present" in the computer; hence, the user has access to an "external (working) memory." Further, there is little cost of interruption in carrying out the task, as information is not lost if, for example, concentration momentarily lapses. This may help explain the greater number of moves when using the computer version: participants had so much information present on the display screen that there was no need to be totally focused on solving the problem (Noyes & Garland, 2003). In essence, it could be argued that display-based problem solving reduces the complexity of the mental processes involved by reducing the loads on working memory (Noyes & Garland, 2003). When presented with a computer model of the Tower of Hanoi, there is no need to make any effort to form one's own mental representation, because there is an external representation on the display screen. Consequently, the problem solver faced with the computer version of the problem is immediately at a disadvantage, because he or she does not gain the benefit of having to apply himself or herself to the problem before beginning to solve it (Noyes & Garland, 2003). Further, the computer's representation of the problem may not match the solver's internal representation. In effect, the computer model may provide so-called "cognitive clutter" that interferes with the optimum route for problem solving. In contrast, solving the problem using only a mental representation allows one to build a strong representation of the problem, and this results in more efficient problem solving (Noyes & Garland, 2003).
A further explanation may lie in the use of verbal protocols. Individuals solving the puzzle using the mental representation version of the Tower of Hanoi were required to talk through the moves they were making, that is, to think aloud. This form of protocol analysis is particularly appropriate to transformation problems such as the Tower of Hanoi (Noyes & Garland, 2003).

One of the difficulties associated with the use of any computer-generated model is the nature of the interface. This is particularly the case when considering problem solving, as the ergonomics of the display and the user's interaction with it can influence the ease with which the problem can be solved. The importance of the design of the interface must not be overlooked because, as Zhang (1991) pointed out, the external representations of the problem provide memory aids. Hence, the design of these can change the nature of the task. The precise design of the computer-generated Tower of Hanoi will, therefore, influence the solving of the puzzle. This needs to be taken into account when generalizing from Tower of Hanoi studies (Noyes & Garland, 2003).

Mental models explain the human cognitive processes of understanding external reality, translating reality into internal representation, and utilizing it in problem solving (Park & Gittelman, 1995). Mental model formation depends heavily on the conceptualizations that individuals bring to a task. When interacting with the environment, with others, and with the artifacts of technology, people form mental models of themselves and the things with which they interact (Norman, 1983; as cited in Park & Gittelman, 1995).

We can learn something from a source of information, given that it carries some potentially useful information, if we perceive it to warrant the investment of effort needed for the learning to take place (Salomon, 1983). Salomon's (1983) argument is that learning, in its generic sense, greatly depends on the differential way in which sources of information are perceived, for these perceptions influence the mental effort expended in the learning process. This argument comprises two ideas. First, the amount of mental effort learners invest in extracting information from a source, discriminating among its information units, remembering the information, or elaborating it in their minds is influenced by the way they perceive that source (Salomon, 1983). Second, it is argued that learning is strongly influenced by the amount of mental effort learners invest in processing the material—that is, the "depth" or "thoughtfulness" with which they process it (Salomon, 1983). It is often assumed that what determines effort investment is the difficulty of the stimulus or task—that is, its novelty or complexity, or the amount of "cognitive capacity" that it uses as a function of its content density or structural complexity (Salomon, 1983). But do learners' justified or unjustified perceptions of a medium's quality—its typical attributes and the tasks one usually performs with it—influence their learning as well (Salomon, 1983)?

There are at least two kinds of elaboration to be considered. Elaboration can be automatic, carried out by well-mastered mental processes over which a person exercises little conscious control, and which are carried out with great ease in large chunks. Such elaborations would usually be the result of much repeated practice and training (Salomon, 1983, p. 43).
Elaboration can, however, be controlled and nonautomatic, requiring attention and effort. Such elaborations would generally be applied to relatively new, complex, or otherwise less practiced material. Given a specific level of relevant skill mastery, it is the employment of controlled, effortful elaborations that improves learning, in the sense of better recall, more generated inferences, and better integration of the material in memory (Salomon, 1983, p. 43). Mental effort, relevant to the task and material, appears to be the feature that distinguishes between mindless or shallow processing on the one hand, and mindful or deep processing on the other. Little effort is expended when processing is carried out automatically or mindlessly (Salomon, 1983). Mindlessness refers to the ostensibly inattentive behavior of otherwise intelligent people, that is, the absence of conscious processing (Salomon, 1983). According to Salomon (1983), mindfulness refers to a cognitively active state characterized by the conscious manipulation of the envisioned elements (Langer & Imber, 1980). Shallow processing refers to automatic processing of well-rehearsed features; deep processing refers to the effortful employment of nonautomatic elaborations (Salomon, 1983).

Mental effort investment and motivation are not to be equated. Motivation is a driving force, but for learning to actually take place, some specific, relevant mental activity needs to be activated. This activity is assumed to be the employment of nonautomatic, effortful elaborations (Salomon, 1983). Mental effort invested in processing means the employment of nonautomatic elaborations performed on the material (Salomon, 1983). All things being equal, the amount of mental effort should be a combined function of one's mastery of the relevant mental skills and the nature of the stimulus to be processed for a particular task. One would expect that, given a particular stimulus task and a desired level of performance, children with a better mastery of relevant skills will invest less effort in processing a unit of material than children who have a poorer mastery of the requisite skills (Salomon, 1983). Better skill mastery implies more automaticity of skill employment, and hence, by definition, a smaller amount of mental effort is needed to reach the same pre-set level of message comprehension by the more skillful child. Similarly, more demanding, difficult, or novel stimuli are generally expected to evoke more effort investment than simple stimuli (Salomon, 1983).

The reason attributed to children's shallow processing of television is the medium's shallowness, pictoriality, "crowdedness," and rapid pace. On the other hand, the more serious, deeper treatment of print is claimed to reflect the more demanding nature of that medium, its relative abstractness, and its imagery-generation requirements (Salomon, 1983). The nature of the stimuli, their complexity, novelty, structuredness, pace, and the like, in interaction with learners' abilities, affects performance or learning outcomes only to some extent. Perceptions, in the sense of dispositions, preconceptions, attitudes, and attributions, also play an important role in the way one treats information. Furthermore, perceptions do not always, nor necessarily, reflect the true nature of the given material (Salomon, 1983).
Langer and Benevento (1978) have shown that when people perceive a message as highly familiar in structure, they forgo any detailed processing of its content and respond to it mindlessly. Such mindlessness can take place based on the message's structural features (Salomon, 1983). Strong preconceptions or perceptions of some material, source, or medium may thus affect the actual investment of mental effort, and hence learning (Salomon, 1983). The material presented on TV is perceived to be shallower and less variable than the material presented in print, even when the content areas (e.g., adventure stories, sport, science) are held constant (Salomon, 1983).

Perceived self-efficacy refers to subjective judgments of how well one can execute a course of action, handle a situation, learn a new skill or unit of knowledge, and the like (Salomon, 1983). Perceived self-efficacy has much to do with how a class of stimuli is perceived: the more demanding it is perceived to be, the less efficacious perceivers would feel about it; and the more familiar, easy, or shallow it is perceived to be, the more efficacious they would feel in handling it (Salomon, 1983). It follows from the above that perceived self-efficacy should be related to the perception of demand characteristics (the latter includes the perceived worthwhileness of expending effort), and that both should affect effort investment jointly (Salomon, 1983). Expectancy theory tells us that two factors are involved here: the importance of a particular yield, and the price to be paid for it. If one learns that information from a certain source is not very important, why should more effort be invested in it (Salomon, 1983)?

Learning is the process of acquiring knowledge; thinking is the process of employing knowledge (Tennyson & Breuer, 2002). The purpose of Tennyson and Breuer's (2002) article is to present an instructional method that has been shown to significantly improve higher-order thinking strategies by enhancing the processes described above. The method employs computer-managed simulations that present contextually meaningful problem situations and require learners to prepare solution proposals. The simulation assesses the proposals and shows learners the consequences of their decisions while also iteratively updating the situational conditions. This type of simulation, unlike conventional simulations that are used for the acquisition of knowledge, is complex-dynamic, requiring students to fully employ their knowledge base by generating solutions to domain-specific problems (Tennyson & Breuer, 2002).

An important contribution of cognitive psychology in the past decade has been the development of theories and models to explain the processes of learning and thinking. The value of these theories is that they offer operational definitions of not only how learning and thinking occur, but also why they occur. The why explanation provides a more direct means for understanding how instructional methods may accomplish predictable improvements in both learning and thinking (Tennyson & Breuer, 2002). Tennyson and Breuer (2002) proposed an Interactive Cognitive Learning and Thinking Model for cognitive learning based on a complexity theory perspective. The stages include: external environment and behavior (action); sensory receptors (memory); executive control (meta/automatic); cognitive strategies; affects; and knowledge base. The executive control includes perceptions, attention, and resources (effort).
The cognitive strategies include construction of new knowledge and strategies; differentiation, for selection of existing knowledge; and integration, for restructuring and elaboration of knowledge. Affects include motivation, feelings, attitudes, emotions, anxiety, and values. The knowledge base includes declarative knowledge (knowing that), procedural knowledge (knowing how), and contextual knowledge (knowing why, when, and where) (Tennyson & Breuer, 2002). In Tennyson and Breuer's (2002) model, there are bi-directional connections between the external environment and the sensory receptors, the sensory receptors and executive control, executive control and cognitive strategies, executive control and affects, and executive control and the knowledge base. The cognitive processes of differentiation, integration, and construction of knowledge are abilities that can be improved by effective instructional methods (Tennyson & Breuer, 2002). In sequential information processing models, the labels short-term memory and working memory are used synonymously to describe many of the functions of the executive control component (Tennyson & Breuer, 2002). There is agreement in the psychological field that the knowledge base has no capacity limits and that knowledge is considered permanent, although it may become difficult to retrieve in certain situations (Tennyson & Breuer, 2002).

The knowledge base consists of domains of knowledge that can be described as complex networks (or schemas) of information (e.g., concepts or propositions). Within a domain, knowledge is organized into meaningful modules called schemata. Schemata vary per individual according to amount, organization, and accessibility (Tennyson & Breuer, 2002). Motivation influences both attention and maintenance processes (Tennyson & Breuer, 2002). Values and feelings influence the criteria associated with acquisition of contextual knowledge. Anxiety as an affect variable influences much of the internal processing abilities; along with emotions, anxiety can be a serious interfering variable in the cognitive system (Tennyson & Breuer, 2002).

Differentiation is defined as the twofold ability to understand a given situation and to apply appropriate contextual criteria by which to selectively retrieve specific knowledge from the knowledge base (Tennyson & Breuer, 2002). Integration is the ability to elaborate or restructure existing knowledge in the service of previously unencountered problem situations (Tennyson & Breuer, 2002). Construction is the ability to both discover and create new knowledge in novel or unique situations (Tennyson & Breuer, 2002). Higher-order thinking strategies involve these three cognitive processes: differentiation, integration, and construction of knowledge (Tennyson & Breuer, 2002). The more fully developed the knowledge base in memory, the greater the opportunities for differentiation and integration and, possibly, creation of knowledge (Tennyson & Breuer, 2002). The cognitive processes of differentiation, integration, and construction of knowledge are abilities that can be improved by effective instructional methods; intelligence, on the other hand, seems not to be directly influenced by instructional conditions (Tennyson & Breuer, 2002). Thinking strategies represent a continuum of conditions ranging from low-order automatic recall of existing knowledge to high-order construction of knowledge.
From low to high, the strategies are recall, problem solving, and creativity (Tennyson & Breuer, 2002). Recall represents the retrieval of knowledge from memory. Recall strategies involve an automatic differentiation of knowledge from the existing knowledge base. A higher-order recall strategy is employed in more complex situations in which new, previously unencountered conditions are part of the problem. With recall, the integration of all appropriate schemata is required to succeed at a task (Tennyson & Breuer, 2002). Problem solving is associated with situations dealing with previously unencountered problems. That is, the term problem solving is most often defined for situations that require employing knowledge in the service of problems not already in storage. In these types of situations, the thinking strategies require the integration of knowledge to form new knowledge (Tennyson & Breuer, 2002). A first condition of problem solving involves the differentiation process of selecting knowledge that is currently in storage using known criteria. Concurrently, this selected knowledge is integrated to form new knowledge. Cognitive complexity within this condition focuses on elaborating the existing knowledge base (Tennyson & Breuer, 2002). Problem solving may also involve situations requiring the construction of knowledge by employing the entire cognitive system. Therefore, the sophistication of a proposed solution is a factor of the person's knowledge base, level of cognitive complexity, higher-order thinking strategies, and intelligence (Tennyson & Breuer, 2002).

The highest order of human cognitive processing is the creation of a problem situation. Rather than having the external environment dictate the situation, the individual internally creates the need or problem. The highest cognitive condition exists when the individual not only creates the situation but also constructs both the new knowledge and the criteria necessary for a solution. Constructing knowledge involves the entire cognitive system. Creativity seems to involve both the conscientious deliberations of differentiation and integration and the spontaneous integrations that operate at a metacognitive level of awareness (Tennyson & Breuer, 2002).

Simulation in educational computing is a widely employed technique to teach certain types of complex tasks. The purpose of using simulations is to teach a task as a complete whole instead of in successive parts. For example, simulations are used in aviation training to replicate the complex interaction of the numerous variables needed to successfully pilot an airplane; learning these variables simultaneously is necessary to fully understand the whole concept of flying. Tennyson and Breuer (2002) define these types of situations as task-oriented because the educational objective is to learn the variables (declarative and procedural knowledge) and their context (conceptual knowledge). The assumption in complex-dynamic simulations is that the student has acquired sufficient knowledge to proceed in the development of thinking strategies employing the cognitive processes of differentiation, integration, and construction (Tennyson & Breuer, 2002).

The Minnesota Adaptive Instructional System (MAIS) is basically a computer research tool used to investigate instructional variables associated with improving learning according to individual differences and needs.
As such, the instructional variables are represented in adaptive instructional strategies that, in turn, are monitored for each student by an expert tutor system using artificial intelligence techniques (Tennyson & Breuer, 2002). The MAIS consists of two main components: (a) a curriculum (or macro) component, which maintains a student model (i.e., the cognitive, affective, and memory models of each student) and a curricular-level knowledge base; and (b) an instructional (or micro) component, which adapts the instructional strategies according to moment-to-moment learning progress and need. Both components are managed by expert tutor systems (Tennyson & Breuer, 2002). Unlike task-oriented simulations, complex-dynamic simulations do not necessarily employ the computer as an instruction delivery system. The main purpose of the computer in a complex-dynamic simulation is to manage the simulation, with the student doing most of the learning activities with resources other than the computer. Depending on the learning situation, the computer could certainly be used as a learning and instructional resource (Tennyson & Breuer, 2002). Research indicates that intra-group interactions in problem-solving situations contribute to the development of cognitive complexity because learners are confronted with other group members' different interpretations of the given simulation conditions (Tennyson & Breuer, 2002). An important issue in cooperative learning is the procedure used to group students. Tennyson and Breuer's research shows that, for the development of thinking strategies, group membership should be based on similarity of ability in cognitive complexity (Tennyson & Breuer, 2002).

The expectations a user has about a computer's behavior come from mental models, while the "expectations" a computer has of a user come from user models. The two types of models are similar in that they produce expectations that one "intelligent agent" has of another. The fundamental distinction between them is that mental models occur inside the head while user models occur inside a computer. Thus, mental models can be modified only indirectly, by training, while user models can be examined and manipulated directly (Allen, 1997). Models are approximations to objects or processes which maintain some essential aspects of the original (Allen, 1997, p. 49). In cognitive psychology, mental models are usually considered to be the ways in which people model processes. The emphasis on process distinguishes mental models from other types of cognitive organizers such as schemas. Models of processes may be thought of as simple machines or transducers which combine or transform inputs to produce outputs (Allen, 1997). A mental model synthesizes several steps of a process and organizes them as a unit. A mental model does not have to represent all of the steps that compose the actual process (e.g., the model of a computer program need not be a detailed account of the computer's transistors) (Allen, 1997). Mental models may be incomplete and may even be internally inconsistent. The representation of a mental model is, obviously, not the same as the real-world processes it is modeling (Allen, 1997). Because mental models are not directly observable, several different types of evidence have been used to infer their characteristics: predictions, explanations and diagnosis, training, and other methods. Users predict what will happen next in a sequential process and how changes in one part of the system will be reflected in other parts of the system.
Explanations about the causes of an event and diagnoses of the reasons for malfunctions reflect mental models. People who are trained to perform tasks with a coherent account of those tasks complete them better than people who are not trained with the model. Evidence is also obtained from reaction times, eye movements, and answers to questions about processes (Allen, 1997).

Models of mental models may be termed conceptual models. Conceptual models include: metaphor; surrogates; mappings, task-action grammars, and plans; and propositional knowledge (Allen, 1997). Metaphor uses the similarity of one process with which a person is familiar to teach that person about a different process. Metaphors are rarely a perfect match to the actual process, and incorrect generalizations from the metaphor can result in poor performance on the task (Allen, 1997). Surrogates are descriptions of the mechanisms underlying the process. For a pocket calculator, surrogate models would describe its functions in terms of registers and stacks. Surrogate models are not well suited to describing user-level interaction (Allen, 1997). Another class of conceptual model describes the links between the tasks the users must complete and the actions required to complete those tasks. Mappings are suitable for describing learnability and as a basis for design (Allen, 1997). Grammars are of interest because of their ability to describe systematic variations of complex sequences (Allen, 1997). Planning models can also integrate tasks and actions (Allen, 1997). According to Allen (1997), Laird (1983) describes propositional knowledge as the basis for most logical thinking.

Although mental models have been studied in physics and mathematics, the vast majority of research on them has been based on computer-human interaction. Many aspects of human interaction with computers involve complex processes; thus, people who interact with computer systems must have some type of mental model of these processes (Allen, 1997). The most important practical application of understanding students' mental models is for training (Allen, 1997). Selection of appropriate text and graphics can aid the development of mental models. Training materials may highlight text, or include diagrams or other techniques, for improving the learners' mental models (Allen, 1997). Scaffolding is the process of training a student on core concepts and then gradually expanding the training (Allen, 1997). Animation of data or scenarios which evolve over time should be especially useful for developing mental models because the causal relationships in a process can be clearly illustrated (Allen, 1997). According to Allen (1997), Gonzales examined many properties of animations and found that factors such as the smoothness of the transitions were important for performance on tasks which had been presented with the animations.

In user models (the computer's model), the task expert has information about what the user is trying to accomplish and possible strategies for accomplishing those goals. The situation expert contributes knowledge about the environment in which the user is trying to complete the task (Allen, 1997). User models are often said to adapt to users. However, there are different senses in which a model may be adaptive. In the simplest sense, a model is adaptive if it gives different responses to different categories of users.
A more interesting sense is that a model adapts as it gains experience with the individual user (Allen, 1997). Feedback uses output from the model to refine it (Allen, 1997). The main criterion for the effectiveness of a user model is its ability to predict important behavior that facilitates the user's activities. Among the components contributing to this are relevance, accuracy, generality, adaptability, ease of development and maintenance, and utility (Allen, 1997). Relevance requires that the model make predictions that apply to the target behavior or user goals. Accuracy requires that the model make correct predictions. Generality requires robustness despite changes in tasks, situations, and users; the model should also be scalable. Adaptability requires that the model be able to respond to changes in user behavior. Ease of development and maintenance concerns whether the effort in maintaining the user model is worthwhile. And utility means that the model should improve the user's behavior (Allen, 1997). Instructional interaction between a computer and a human being may be viewed as a specialized conversation (Allen, 1997). Personalization of tutoring may be modeled by observing the conversations between tutors and students (Allen, 1997).

Although there is no precise definition of worked examples, they share certain family resemblances. As instructional devices, they typically include a problem statement and a procedure for solving the problem; together, these are meant to show how other similar problems might be solved. In a sense, they provide an expert's problem-solving model for the learner to study and emulate (Atkinson, Derry, Renkl, & Wortham, 2000). Atkinson, Derry, Renkl, and Wortham (2000) hold that the explicit understanding of learning processes obtained through controlled experimentation, including laboratory experimentation, is an important part of the scientific knowledge base about teaching and learning, which, in turn, has had a significant positive impact on instructional research and practice in classrooms. Transfer from laboratory to classroom is possible because, while there are many differences between laboratory and classroom environments, there are also many constants across settings in terms of students' basic neural and cognitive processes, as well as the structure of the interventions and materials investigated (Atkinson, Derry, Renkl, & Wortham, 2000, p. 185). The worked examples literature is particularly relevant to programs of instruction that seek to promote skills acquisition, a goal of many workplace training environments as well as of instructional programs in domains such as music, chess, athletics, programming, and basic mathematics (Atkinson, Derry, Renkl, & Wortham, 2000, p. 185). Although early research demonstrated that worked examples were instructionally effective, their review suggests specific factors that moderate their effectiveness. These include (1) intra-example features, in other words, how the example is designed, particularly the way the example's solution is presented; (2) inter-example features, principally certain relationships among multiple examples and practice problems within a lesson; and (3) individual differences in example processing on the part of students, especially the way in which students "self-explain" the examples (Atkinson, Derry, Renkl, & Wortham, 2000, p. 186).
According to Atkinson, Derry, Renkl, and Wortham (2000), instructional materials requiring a student to split attention among multiple sources of information might impose a heavy cognitive load. The imposition of a heavy cognitive load was thought to negate the benefits of studying worked examples. Tarmizi and Sweller (1988) labeled this phenomenon the split-attention effect and hypothesized that it interfered with the student's acquisition of schemas representing the basic domain concepts and principles that students should learn from examples (Atkinson, Derry, Renkl, & Wortham, 2000). Summary of intra-example features. Examples should be constructed to maximally integrate all sources of information—including diagrams, text, and aural presentation—into one unified presentation, since splitting students' attention across multiple, non-integrated information sources may cause cognitive overload and impair learning. However, when an example display is complex, simultaneous aural explanation must be accompanied by a method for explicitly directing students' attention to pertinent parts of the example as it is being described or discussed. Otherwise, students will expend too much effort trying to locate those parts of the example that the aural presentation is referencing, which creates cognitive overload. In addition, because subgoal tasks within complex problems typically represent important conceptual ideas that students need to learn, instructional effectiveness is enhanced when examples clearly demarcate a problem's subgoal structure, either by labeling each step or by visually isolating steps in an example display (Atkinson, Derry, Renkl, & Wortham, 2000). Research on explanation effects suggests that self-explanations are an important learning activity during the study of worked examples. Unfortunately, the present research suggests that most learners self-explain in a passive or superficial way. Among the successful learners, there seem to be different subgroups employing different self-explanation styles (anticipative reasoning and principle-based explanations). Both of these styles can be fostered by instructional methods. Direct training appears to be effective, as are structural manipulations of examples such as adding subgoal labels, utilizing an integrated format, or using "incomplete" examples. Less promising are the data on improving self-explaining (and problem solving) through setting social incentives to explain, such as inducing students to prepare to tutor others. In particular, students who have no prior tutoring experience and who are novices within the domain being tutored appear to experience stress and overload when asked to provide instructional explanations (Atkinson, Derry, Renkl, & Wortham, 2000). The authors postulate that learning from worked examples causes learners to develop knowledge structures representing important, early foundations for understanding and using the domain ideas that are illustrated and emphasized by the instructional examples provided. These representations guide problem solving and may be conceptualized as representing early stages in domain schema development and in the acquisition of expertise in accordance with Anderson's model of skills acquisition (Atkinson, Derry, Renkl, & Wortham, 2000, p. 202). Through use and practice, these representations are expected to evolve over time to produce the more sophisticated forms of knowledge that experts use.
Even after expertise is achieved, learners can benefit from study of examples representing the performance of other experts (Atkinson, Derry, Renkl, & Wortham, 2000). Worked-examples lessons will promote transfer if they include variability. This means that examples within lessons should differ from each other in terms of numerical values and form (Atkinson, Derry, Renkl, & Wortham, 2000, p. 204). There is evidence that the structure of worked examples enhances students' self-explanation behavior. Moreover, there is evidence that students' self-explanation behavior during study in turn mediates learning. However, it has not been determined that the effects of example structure on learning outcomes are fully mediated by self-explanation (Atkinson, Derry, Renkl, & Wortham, 2000). In addition to example structure, situational factors, such as training and social incentives, can foster self-explanation (Atkinson, Derry, Renkl, & Wortham, 2000). Problems are often presented to students as cases, such as medical cases, and students are guided by a tutor as they analyze cases and seek solutions, for example, diagnoses and treatments (Atkinson, Derry, Renkl, & Wortham, 2000). Critics of worked examples may claim that students exposed to worked examples are not able to solve problems with solutions that deviate from those illustrated in the examples, cannot clearly recognize appropriate instances in which procedures can be applied, and have difficulty solving problems for which they have no worked examples (Atkinson, Derry, Renkl, & Wortham, 2000). The current view suggests, however, that examples can in fact help educators achieve the goal of fostering adaptive, flexible transfer among learners (Atkinson, Derry, Renkl, & Wortham, 2000). Worked-out examples typically consist of a problem formulation, solution steps, and the final answer itself (Atkinson, Renkl, & Merrill, 2003). Research indicates that exposure to worked-out examples is critical when learners are in the initial stages of learning a new cognitive skill in well-structured domains such as mathematics, physics, and computer programming (Atkinson, Renkl, & Merrill, 2003). Although worked-out examples have significant advantages, their use as a learning methodology does not, of course, guarantee effective learning (Atkinson, Renkl, & Merrill, 2003). According to Atkinson, Renkl, and Merrill (2003), Chi and her colleagues (Chi et al., 1989) at first postulated that the self-explanation effect principally involved inference generation on the part of a learner. That is, by self-explaining, the learner is inferring information that is missing from a text passage or an example's solution. However, because of some inconsistencies between this view and some of the findings in the self-explanation literature, Chi (2000) revised this initial view by suggesting that the self-explanation effect is actually a dual process, one that involves generating inferences and repairing the learner's own mental model. In the latter process, it is assumed that the learner engages in the self-explanation process if he or she perceives a divergence between his or her own mental representation and the model conveyed by the text passage or example's solution. According to Chi, this new viewpoint extends the inference-generation view by suggesting that "each student may hold a naïve model that may be unique in some ways, so that each student is really customizing his or her self-explanation to his or her own mental model"
(p. 196; as cited in Atkinson, Renkl, & Merrill, 2003). According to Atkinson, Renkl, and Merrill (2003), their findings on the usefulness of a learning environment that combines fading worked-out steps with self-explanation prompts support the basic tenets of one of the most predominant contemporary instructional models, namely the cognitive apprenticeship approach (Collins, Brown, & Newman, 1989). This approach suggests that learners should work on problems with close scaffolding provided by a mentor or instructor (Atkinson, Renkl, & Merrill, 2003). According to Atkinson, Renkl, and Merrill (2003), this approach is characteristic of Vygotsky's (1978) "zone of proximal development," in which problems or tasks are provided to learners that are slightly more challenging than they can handle on their own. Instead of solving the problems or tasks independently, the learner must rely—at least initially—on the assistance of more capable peers and/or instructors to succeed. According to this approach, learners will eventually make a smooth transition from relying on modeling, to scaffolded problem solving, to independent problem solving (Atkinson, Renkl, & Merrill, 2003). In other words, this model advocates the fading of instructional scaffolding during this transition. Correspondingly, the authors' partially worked-out examples provide a scaffold that permits learners to solve problems they could not successfully solve on their own. The instructional scaffolding—in the shape of worked-out solution steps—is gradually faded in their learning environment (Atkinson, Renkl, & Merrill, 2003). Learners are encouraged to reflect on their problem-solving process and to try to identify ways of improving it. For instance, they are encouraged to reflect on the problems that they have missed or to try to explain how to generate the correct solution, a process that can increase the likelihood that the correct solution procedure will be internalized by the learner (Atkinson, Renkl, & Merrill, 2003). Overall, the use of prompts that encourage the learners to figure out the principle that underlies a certain solution step can be recommended for several reasons, including the following: (a) it produces medium to high effects on transfer performance, (b) these effects are consistent across different age levels (university and high school), (c) it does not interfere with fading, (d) it is very easy to implement (even without the help of computer technology), and (e) it requires no additional instructional time. This prompting procedure is, however, not without its drawbacks. Because this procedure is designed to elicit principle-based explanations, it is ideally suited for well-structured domains such as mathematics and physics that contain clearly identifiable domain principles "under" each solution step (Atkinson, Renkl, & Merrill, 2003). As one can imagine, not all domains contain such clearly identifiable principles. Hence, it is worth noting that the prompting procedure can only be applied in an unmodified manner when each solution step can be explained by a principle within the domain (Atkinson, Renkl, & Merrill, 2003). New technologies, such as the use of multimedia, can afford rich opportunities for constructivist approaches in the field of education (Bailey, 1996). Simply put, constructivism is learning by assembling meaning from pieces of reality (D'Ignazio, 1992; as cited in Bailey, 1996).
Active learning becomes a reality when the learner is not a passive nonparticipant who easily ignores or forgets the encounter. The initial "piece of reality" is participation in the process. Constructivists, then, advocate student-centered learning which is self-directed, which has personal relevance to the learner, and which is manifested by a form of active demonstration (Bailey, 1996). Self-directed learning is more likely to have personal relevance, and as new technology is assimilated with personalized associations, meaning and retention are increased (Bailey, 1996). When using technology, the learner must initially focus on the acquisition of skills and knowledge related to learning the technology. However, once these are mastered and acclimated, they may become—much like writing, typing, or keyboarding—tools for conveying information (Bailey, 1996). The difference in this tool is that so many differing modes of communication are possible in the context of the technology of multimedia, and deciding which is appropriate is in itself a higher-order thinking decision (Bailey, 1996).
O'Neil's Problem Solving Model
O'Neil's Problem Solving Model (O'Neil, 1999; see Figure 2 below) is based on Mayer and Wittrock's (1996) conceptualization: "Problem solving is cognitive processing directed at achieving a goal when no solution method is obvious to the problem solver" (p. 47). This definition is further analyzed into components suggested by the expertise literature: content understanding or domain knowledge, domain-specific problem-solving strategies, and self-regulation (see, e.g., O'Neil, 1999, in press). Self-regulation is composed of metacognition (planning and self-checking) and motivation (effort and self-efficacy). Thus, in the specifications for the construct of problem solving, to be a successful problem solver, one must know something (content knowledge), possess intellectual tricks (problem-solving strategies), be able to plan and monitor one's progress towards solving the problem (metacognition), and be motivated to perform (effort and self-efficacy; O'Neil, 1999, pp. 255-256). Each of these problem-solving elements would have to be taught and assessed in the game context.
Fig 2. O'Neil's Problem Solving Model: problem solving comprises content understanding, problem-solving strategies (domain specific and domain independent), and self-regulation (metacognition: planning and self-monitoring; motivation: effort and self-efficacy).
Baker and Mayer's CRESST Model of Learning
The CRESST model of learning (Baker & Mayer, 1999) links the components required to assess problem solving in technology environments. The model is composed of six families of cognitive demands: five families—content understanding, collaboration, problem solving, communication, and self-regulation—all radiating from learning, the sixth family of cognitive demands. The shift from unidimensional measures of a construct to multidimensional domains is rooted in the work of Glaser (1963), Hively, Patterson, and Paige (1968), and Baker and Popham (1973). In the CRESST model, "each family consists of a task that can be used as a skeleton for the design of instruction and testing" (Baker & Mayer, 1999, p. 275). For example, understanding tasks involve explanation, which in turn involves a variety of actions such as having students read opposing views, invoke prior knowledge, and organize and write a valid explanation.
This task framework supports many different learning domains, such as history or science. For problem solving, the task is instantiated in different domains so that a set of structurally similar models for thinking about problem solving is applied in science, mathematics, or social studies. In each domain, there is a need to identify the problem, understand content, understand key principles, and fit solutions to constraints. The six families of the CRESST model support all forms of learning.
Fig 3. Baker and Mayer's CRESST model of learning: families of cognitive demands (content understanding, collaboration, communication, problem solving, and self-regulation, all radiating from learning).
When the goal of instruction is meaningful learning (or student understanding), assessments of problem-solving transfer are called for (Baker & Mayer, 1999). Assessments that focus solely on the quantitative issue of how much was learned are based on a view of learning as knowledge acquisition, i.e., that learning involves adding pieces of information to one's memory. In contrast, assessments that also focus on the qualitative issue of how knowledge is structured and used by the learner are based on the view of learning as knowledge construction, i.e., that learning involves making sense out of presented material by building a mental model (Baker & Mayer, 1999). Problem solving is cognitive processing directed at transforming a given situation into a desired situation when no obvious method of solution is available to the problem solver (Mayer, 1990, as cited in Baker & Mayer, 1999). A problem exists when a problem solver has a goal but does not know how to reach it, so problem solving is mental activity aimed at finding a solution to a problem (Baker & Mayer, 1999). A promising direct approach to knowledge representation, more parsimonious than a typical performance assessment, is knowledge or concept mapping, in which the learner constructs a network consisting of nodes (e.g., key words or terms) and links (e.g., "is part of," "leads to," "is an example of") (Baker & Mayer, 1999, p. 274). In problem solving, the skeletal structures are instantiated in content domains, so that a set of structurally similar models for thinking about problem solving is applied to science, mathematics, and social studies. These models may vary in the explicitness of problem representations, the guidance about strategy (if any), the demands of prior knowledge, the focus on correct procedure, the focus on convergent or divergent responses, and so on (Baker & Mayer, 1999). In each academic area, there is the need to identify the problem, the need to understand content provided and omitted, the need to understand the key principle(s) at work, and the need to fit solutions to constraints (Baker & Mayer, 1999). Domain-specific aspects of problem solving (the part that is unique to geometry, geology, or genealogy) involve the specific content knowledge, the specific procedural knowledge in the domain, any domain-specific cognitive strategies (e.g., geometric proof, test and fix), and domain-specific discourse (O'Neil, 1998, as cited in Baker & Mayer, 1999). Cognitive complexity is a concept at the heart of problem solving. It minimally requires that students process material beyond the recognition or recall level. Typically, cognitively complex tasks have either implicit or explicit multiple steps by which the test taker must proceed to develop an adequate solution (Baker & Mayer, 1999).
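As a purely illustrative sketch (my own construction, not a procedure taken from Baker & Mayer, 1999), a knowledge map of the kind described above can be represented as a small labeled graph in which nodes are key terms and links carry relation labels such as "is part of" or "leads to":

# Minimal, hypothetical sketch of a knowledge (concept) map:
# nodes are key terms; links are labeled relations between terms.
from collections import defaultdict

class KnowledgeMap:
    def __init__(self):
        # adjacency list: source term -> list of (relation, target term)
        self.links = defaultdict(list)

    def add_link(self, source, relation, target):
        self.links[source].append((relation, target))

    def propositions(self):
        # each (source, relation, target) triple is one scorable proposition
        return [(s, r, t) for s, edges in self.links.items() for r, t in edges]

# A map a learner might construct for a lesson on problem solving
km = KnowledgeMap()
km.add_link("metacognition", "is part of", "self-regulation")
km.add_link("planning", "is an example of", "metacognition")
km.add_link("effort", "leads to", "persistence")
for proposition in km.propositions():
    print(proposition)

A learner's map could then be compared with an expert's map by counting shared propositions, one common way such maps are scored (a general observation about concept-map scoring, not a claim about Baker and Mayer's specific procedure).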
Two trends in technology are certain: the cost of computer technology will continue to drop, and technology of all sorts will become easier to use (Baker & O'Neil, 2002). Dreary intellectually and predictable pedagogically, despite cuter, more active graphics, our learning systems will need massive rethinking to make them useful for the challenges facing instruction for both children and adults (Baker & O'Neil, 2002, p. 611). One key to their ultimate utility will be the degree to which technology can be used simultaneously to teach and to measure better, more deeply and speedily, the complex tasks and propensities needed for learners to achieve and to continue to learn in a rapidly changing society (Baker & O'Neil, 2002, p. 611). Problem solving is a family of cognitive demands that can be required in many subject areas. The term problem solving goes far beyond the application of algorithms (e.g., subtraction rules) to simple tasks (Baker & O'Neil, 2002). This definition of problem solving (Baker & Mayer, 1999; O'Neil, 1999) is already an important component of educational reform efforts designed to raise the expertise of students (Baker & O'Neil, 2002). Problem-solving tasks can take a third form, dealing with simulations and problems for which there is no known solution, but which present, like the first case, a rapidly changing scenario, for instance, with chance or the probability of existing faults occurring as "surprises" during the examination sequence (Baker & O'Neil, 2002). Here, the intellectual task for the learner varies and includes assimilation and incorporation of useful strategies and a running internal record of the degree to which any combination of procedures or actions is likely to optimize the outcome (Baker & O'Neil, 2002). Not only can problems be obscured or embedded in distracting settings, or presented in complex language, problems can also be provided sequentially to learners in a computerized setting. Solving the first part of a task in a particular way may lead to a conditional representation of the second part of the task. Contingent tasks may, on the other hand, approximate reality, for there are consequences of correct and incorrect paths (Baker & O'Neil, 2002). Both domain-independent and domain-dependent knowledge are usually essential for problem solving. Domain-dependent analyses focus on the subject matter as the source of all needed information (Baker & O'Neil, 2002). Domain-independent analyses are those that attempt to capture the general strategies that are in use across subject matters. These approaches should engender not general notions of intelligence testing, but rather the attributes of performance that could be expected to transfer over a wide task domain (Baker & O'Neil, 2002). The use of visual materials to complement regular instruction has become a common instructional technique at all levels of education. However, its integration into the instructional environment has not realized its promise of increased effectiveness and efficiency in terms of optimizing student learning. Analyses of existing visual-related research have failed to provide generic guidelines for the integration of visualization to improve learning (Baker & Dwyer, 2000). Learning will be more complete as the number of cues in the learning environment increases. An increase in realism in the existing cues increases the probability that learning will be facilitated (Baker & Dwyer, 2000).
In a meta-analysis of eight studies involving 2,000 college and high school students, Baker and Dwyer (2000) argued that an overall effect size of .71 demonstrated the general positive effect that visualization can have in facilitating student achievement. However, (a) the realism continuum is not an accurate predictor of instructional effectiveness, (b) not all types of visuals are equally effective in facilitating achievement of different educational objectives, (c) color can be an important instructional variable in facilitating achievement of specific types of educational objectives, and (d) the type of visualization most effective for facilitating different educational objectives may be dependent on the method of presentation (Baker & Dwyer, 2000). Visuals which contain the essence of the message to be transmitted should be more effective in facilitating achievement than the more realistic illustrations which have to be coded by the central nervous system before being transmitted (Baker & Dwyer, 2000). The effectiveness of the simple line presentations (color) may have resulted because the use of color made the visuals more attractive and students attended to them more vigilantly (Baker & Dwyer, 2000). This explanation is suggested since the only difference between the black-and-white and color treatments was that the color version consisted of blue lines on a pink background rather than black lines on a white background and provided no additional information (Baker & Dwyer, 2000). The effectiveness of the detailed shaded drawing presentation may have resulted because the realistic detail in the visuals was accentuated by color, enabling the students to identify and interact with the relevant characteristics (Baker & Dwyer, 2000). While attempting to focus on a mental activity, most of us, at one time or another, have had our attention drawn by extraneous sound (Banbury, Macken, Tremblay, & Jones, 2001). Sounds often seem to intrude on our awareness, without our invitation or, apparently, control (Banbury, Macken, Tremblay, & Jones, 2001). Evidently, these are instances in which our capacity to focus, to attend selectively to thoughts or events, suffers some kind of breakdown (Banbury, Macken, Tremblay, & Jones, 2001). There may be occasions when the system designer may wish to capture the attention of the person, and knowledge of auditory distraction can be put to good use in the design of auditory warnings and alarms (Banbury, Macken, Tremblay, & Jones, 2001). Banbury, Macken, Tremblay, and Jones (2001) reviewed a range of recent studies that focus on establishing the conditions under which a person may be distracted while undertaking a relatively complex mental task. Generically, these are known as irrelevant sound studies (Banbury, Macken, Tremblay, & Jones, 2001). Because hearing is omnidirectional and has the capacity to receive information at all times, even in darkness or during sleep, it has been dubbed "the sentinel of the senses" (Banbury, Macken, Tremblay, & Jones, 2001, p. 13). Clearly, even when our attention is fastened to one activity, the brain is registering a range of other events; otherwise, how do we manage to switch attention between sources of information so purposefully and so adroitly (Banbury, Macken, Tremblay, & Jones, 2001)? The general procedure for the irrelevant sound paradigm is straightforward.
The participant undertakes a short-term memory task involving recall of the order of a sequence of verbal items (usually visually presented). While the task is being undertaken, irrelevant sound is played, either narrative speech or isolated events at about one item per second. The participants are told to ignore any sound they hear and are reassured they will never be required to report any feature of it (Banbury, Macken, Tremblay, & Jones, 2001). Because the memory task and irrelevant event are presented in different sensory modalities, the effect cannot be attributed to some kind of interference (or masking) at the sensory level. Instead, the disruption must be attributable to a confluence of processing from the eye and the ear at some level beyond the sensory organs. This can be described as a breakdown in attentional selectivity. Despite the intent of the person to concentrate on the memory task, the irrelevant sound intrudes (Banbury, Macken, Tremblay, & Jones, 2001). One explanation of why the unattended sound disrupts performance is that the disruption is based on a conflict of content between what is seen and what is heard. The other explanation is that interference arises between concurrent common processes (specifically, the degree to which the two activities draw on the ordering of material in the brain). This latter account, the changing state hypothesis, is part of a more general model of working memory based on a blackboard analogy called the object-oriented episodic record (Banbury, Macken, Tremblay, & Jones, 2001). The irrelevant sound effect can be explained by supposing that interference results from a conflict based on similarity of process between relevant and irrelevant sequences, not similarity of content (Banbury, Macken, Tremblay, & Jones, 2001). The disruptive effect of irrelevant sound on performance is independent of the level of the sound (the volume) (Banbury, Macken, Tremblay, & Jones, 2001). Above three voices, the disruption is a decreasing function of the number of voices. This effect is readily understood in terms of the masking of one sound by another. When the sound contains a relatively large number of voices, words are no longer individually distinguishable. In particular, there is evidence that changes in energy at the boundary of the sounds are important in determining the degree of disruption (Banbury, Macken, Tremblay, & Jones, 2001). The paper page with orderly rows of alphanumeric symbols, and occasionally images, is no longer the only nor, in many cases, even the dominant resource for contemporary readers (Bangert-Drowns & Pyke, 2001, p. 214). Electronic media are increasingly a preferred means of information and entertainment (Bangert-Drowns & Pyke, 2001, p. 214). In general terms, texts are any relatively permanent structures for the storage, organization, and accessibility of a coherent body of information (Bangert-Drowns & Pyke, 2001). Electronic texts are information structures stored by and accessible through nonprint, electronic media (Bangert-Drowns & Pyke, 2001). Bangert-Drowns and Pyke (2001) developed a taxonomy of student engagement with interactive computer media for text interpretation. The taxonomy consisted of seven levels, ranging from a high of literate thinking to a low of disengagement. According to the authors, disengagement occurred when "navigational and operational competence or interest is so lacking, the student declines purposeful interaction" (Bangert-Drowns & Pyke, 2001, p. 226).
Also according to the authors, "the taxonomy's 'higher' levels presuppose navigational and operational competence" (Bangert-Drowns & Pyke, 2001, p. 233). Referring to Corno's analysis of volition in learning, Bangert-Drowns and Pyke (2001) argued that their higher taxonomic levels reflect increasing capacity to employ metacognitive strategies to monitor progress toward goals. Volitional capacities, strategic prioritization of goals, and perseverance in pursuit of personal interests appear clearly in self-regulated interest (Bangert-Drowns & Pyke, 2001). In contrast to more traditional technologies that simply "deliver" information, current computerized learning environments offer greater opportunities for interactivity and learner control (Barab, Young, & Wang, 1999). Nodes refer to the information units being displayed (e.g., paragraphs of text, pictures, sets of questions), while links refer to the connections among nodes (Barab, Young, & Wang, 1999). Hypertext programs may simply offer sequencing and pace control, or they can allow the learner to decide which, and in what order, information will be accessed (Barab, Young, & Wang, 1999). Learners are able to make navigational choices by activating clickable areas, allowing them to jump from one location to another (Barab, Young, & Wang, 1999). Increased affect does not necessarily lead to increased learning (Barab, Young, & Wang, 1999). In spite of the intuitive and theoretical appeal of hypertext environments, empirical findings yield mixed results with respect to the learning benefits of learner control over program control of instruction (Niemiec et al., 1996; Steinberg, 1989; as cited in Barab, Young, & Wang, 1999). It appears that learner control wields a double-edged sword; for some users, it can extend their intellectual performance, while for others, it may not facilitate performance—possibly even leaving the user lost in a maze of information (Barab, Young, & Wang, 1999). In generative learning (see Wittrock's generative model in Wittrock, 1974, 1978), learners are not passively receiving instruction but are actively engaged in the construction of meaning as it relates to their beliefs, experiences, current goals, and the context in which learning is occurring (Barab, Young, & Wang, 1999). According to the results of a study by Barab, Young, and Wang (1999), increased levels of learner control are beneficial when students are using a hypertext program to solve a specific problem. In their study, university students were free to navigate directly to those nodes of information they deemed appropriate. In the first of two experiments, which involved problem solving, those students did significantly better at the problem-solving task than those who proceeded through the document in a linear manner (Barab, Young, & Wang, 1999). In the second study, which involved reading comprehension, there were no differences between groups (Barab, Young, & Wang, 1999). The amount of learner control, from the perspective of the learner, afforded by a hypertext system should not be assumed to be high simply because the instructional designer created the opportunity to visit links. Rather, it must be thought of as a construct that is codetermined by the learner's perceptions of the affordances of the hypertext in relation to his/her particular goals (Barab, Young, & Wang, 1999).
It appears that various goals and their inherent constraints (i.e., the goal paths they establish) will affect both the process and product of learning (Barab, Young, & Wang, 1999). In a four-factor MANOVA design, an exploratory study by Baylor (2001) experimentally investigated the influence of navigation mode (linear, nonlinear), distracting links (presence, absence), sensation-seeking tendency (high, low), and spatial-synthetic ability (high, low) on perceived disorientation and incidental learning (accuracy of main point, example generation) in web navigation. Incidental learning is conceptualized in two ways in this study: (a) from the macro level of text processing, as one's effectiveness at figuratively "getting the gist" of the website content and developing a schematic mental representation to determine the main point; and (b) from the micro level of text processing, as one's effectiveness at generating and recalling examples from the content. The distinctions between macro and micro levels of processing are made for the purpose of describing this study (Baylor, 2001). Disorientation is defined here as a user's perception of his/her uncertainty of location (Baylor, 2001). While the implementation of a three-dimensional spatial environment is technically feasible and would solve some disorientation problems for the learner, the use of such an environment, with its visualization facilitation, may provide the learner with too much information about locating information without letting the user discern the structure and meaning of the information (Baylor, 2001). Disorientation is a problem for learning in open-ended learning environments, both in terms of the navigational issue from the user's perspective and in terms of the external geography of the website (Baylor, 2001). In terms of navigation mode, two contrasting instantiations are linear and nonlinear. In a linear navigation mode, the user moves through the website sequentially and is only allowed to move forward or backward through the content; thus, the sequence of web pages is controlled by the website. A nonlinear navigation mode is one in which the user has options to "jump" to any location within the website at any time, providing more flexibility and control for the user (Baylor, 2001). An advantage of a nonlinear navigation mode (typical of hypertext-based systems) is that a learner could navigate in a personally meaningful way to access information (Baylor, 2001). A disadvantage of a nonlinear navigation mode is that it may not have the coherence that would be provided when the learner is forced to process the information in a more systematic way (from beginning to end). Specifically, in a nonlinear mode, the learner may not be able to determine how the overall content is globally represented (Baylor, 2001). In traditional forms of navigation, one must determine spatial position in relation to landmarks or astral location to decide on the means of moving toward a goal. In a virtual world, the feeling of being lost while navigating may result from a lack of connection among the physical representations of the world. This suggests the need for some sort of mapping or landmarking to serve as cues (Baylor, 2001). The presence of distracters (extraneous and seductive details) had a negative effect on both example generation and understanding the main point of the content (Baylor, 2001).
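To make the contrast between navigation modes concrete, the following is a minimal, hypothetical sketch (my own construction, not drawn from Baylor, 2001) of a small website whose pages can be traversed either linearly or by jumping to any page:

# Hypothetical sketch: a small website navigated in linear or nonlinear mode.
PAGES = ["home", "overview", "details", "examples", "summary"]

def linear_next(current, direction):
    # Linear mode: only forward or backward moves are allowed,
    # so the sequence of pages is controlled by the site.
    i = PAGES.index(current)
    if direction == "forward" and i < len(PAGES) - 1:
        return PAGES[i + 1]
    if direction == "back" and i > 0:
        return PAGES[i - 1]
    return current  # the requested move is not permitted

def nonlinear_jump(current, target):
    # Nonlinear mode: the user may jump to any page at any time.
    return target if target in PAGES else current

print(linear_next("overview", "forward"))    # -> details
print(nonlinear_jump("overview", "summary")) # -> summary

In the linear sketch the site controls the sequence, while in the nonlinear sketch control, and with it the risk of disorientation, shifts to the user.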
Disorientation was negatively correlated with the learners' ability to generate examples and to define the main point of the content (Baylor, 2001). A moderately high effect size indicated that participants (average age 30 years, predominantly white, with 56% males) reported more disorientation with the linear navigation mode as compared to the nonlinear navigation mode. This indicates that users are more used to and more comfortable with the nonlinear format of websites than when forced to navigate in a linear configuration (Baylor, 2001). Interestingly, the expected role of prior knowledge in facilitating orientation was not supported. Participants in the linear mode had marginally more prior knowledge than those in the nonlinear mode, yet the linear mode exhibited the higher level of disorientation. Therefore, navigation mode may be a greater factor than prior knowledge in influencing orientation (Baylor, 2001). The presence of distracters (extraneous and seductive details) negatively affected example generation scores (Baylor, 2001). That the pupil of the eye dilates during mental activity has long been known in neurophysiology (Beatty, 1982). Only recently has this phenomenon been used as a tool in investigating human cognitive processing (Beatty, 1982). Dilations occur at short latencies (100 to 200 msec) following the onset of processing and subside quickly once processing is terminated. Perhaps more important, the magnitude of pupillary dilation appears to be a function of the processing load or "mental effort" required to perform the cognitive task (Beatty, 1982). Pupillary dilations related to cognitive load occur both during the processing of new information in working memory (e.g., hearing and repeating a series of numbers) and during retrieval of existing knowledge from long-term memory (e.g., recalling a series of numbers; Beatty, 1982). Rehearsal strategies that improve performance on a working memory task act to reduce the amplitude of the task-evoked pupillary response (Beatty, 1982). Selective attentional processing of sensory information occurs under conditions of high information load, when it is not possible to process adequately all incoming information (Beatty, 1982). An iconic interface uses images to represent actions and objects that can be invoked or manipulated by a user. There are a variety of icon types which convey meaning in different ways. For example, representational icons are meant to represent actual physical objects and to inherit the properties of those objects, while abstract icons are meant to convey abstract concepts such as fragility (Benbasat & Todd, 1993). Representational icons are the most common type of icon employed in computer interfaces (Benbasat & Todd, 1993). When interfacing with a computer, a user is typically focused on some "primary cognitive task" which may relate to problem solving, analyzing, reading, or writing. Attention devoted to the interface may interfere with the primary task. Since text-based processing is associated with cognition, more interference will result between a cognitive task and a text-based interface which demands the use of the same cognitive resources (Benbasat & Todd, 1993). The less effort required to use the interface, the more likely it is that the primary task will be successfully completed. If the iconic interface draws on a perceptual resource pool and the primary task draws on a cognitive pool, then overall performance will improve (Benbasat & Todd, 1993).
In evaluating the advantages of iconic interfaces, it is important to remember that there is a difference between advantages attributable to some inherent property of icons and those that are attributable to specific implementations (Benbasat & Todd, 1993). It is often argued that iconic interfaces will be easier to use because they represent a collection of familiar objects; thus, inference from the icons to system functions will be facilitated. While this may be true, the advantage likely comes not from the icons per se, but from an implementation which permits users to employ metaphors by which to map known attributes of familiar objects (Benbasat & Todd, 1993). According to the authors, the true advantage of icons may come from the fact that visual cues can be processed more rapidly than text-based cues, and that an icon may carry more information than a text-based cue (Benbasat & Todd, 1993). Because the icons are present on screen, syntax errors are eliminated (i.e., the syntax is predefined; Benbasat & Todd, 1993). It is argued that by facilitating the use of metaphor, iconic systems lead to significant advantages. While it is true that most iconic interfaces rely on metaphors, such as the "desktop" or "office" metaphor, this is a design and implementation issue, not an icon issue. Though it may be less compelling, there is no reason why text-based cues could not be employed in lieu of icons to represent such things as folders and documents (Benbasat & Todd, 1993). Another advantage of icons is that common characteristics can be carried across applications through consistent application of icons (such as a "quit" icon). However, this is a feature or advantage that is not unique to iconic interfaces but rather is a property of good design (Benbasat & Todd, 1993). The ability to represent objects rather than abstract concepts through icons is another claimed advantage. However, there is no real reason to believe that icons are the only way to represent objects in system interfaces. Yet, it is possible that icons provide a superior way of representing objects. This, however, is yet to be determined (Benbasat & Todd, 1993). The disadvantages of iconic interfaces arise primarily from difficulties in implementation rather than from any inherent properties of icons. For example, it is difficult to design icons to convey the desired meaning without invoking other connotations. The interpretation of a user and the intent of the designer may be different. When this happens, problems arise and semantic errors occur. Such ambiguity in meaning arises because there is no universal set of icons or principles to guide icon design (Benbasat & Todd, 1993). Broadly defined, direct manipulation interfaces incorporate the concept of physical manipulation of a system of interrelated objects which are analogous to objects found in the "real world" (Shneiderman, 1983, as cited in Benbasat & Todd, 1993). Object representations may take on a variety of forms. However, they are most commonly represented as icons, although it is possible to provide text-based implementations of the objects or combined text-icon presentations (Benbasat & Todd, 1993). According to Benbasat and Todd (1993), Hutchins, Hollan, and Norman (1986) developed a model to explain the effects of a direct manipulation interface. They claim that the "directness" of an interface results from the commitment of fewer cognitive resources in order to complete a given task.
Cognitive effort is minimized if the system interface maps directly into the user's view or mental model of a specific task. Directness is argued to be a function of two factors: the first is distance, which must be minimized, and the second is engagement, which must be maximized. Distance refers to the notion of the gap between the user's thoughts and the way they can be accomplished. Engagement relates to the degree of involvement the user experiences with the system. Under conditions of high engagement or involvement, the system interface becomes transparent and the user has the perception of working with the actual objects of interest, rather than through an abstract computer system (Benbasat & Todd, 1993). There are several advantages to direct manipulation. First, novices can learn basic functionality quickly, because the system incorporates a model of the task as held by the user (Benbasat & Todd, 1993). Second, experts with both the system and the task and/or task domain can work extremely rapidly (Benbasat & Todd, 1993). Third, intermittent or casual users can retain operational concepts. Casual users may have to go through a learning period each time they use an application. With a direct manipulation interface, such relearning will be reduced since the interface maps into the user's model of the task. Fourth, users can immediately see if their actions are furthering their goals (Benbasat & Todd, 1993). It should be noted, however, that none of these advantages are unique to direct manipulation (Benbasat & Todd, 1993). There are also a number of disadvantages of direct manipulation. Some are inherent disadvantages and some are disadvantages of implementation (Benbasat & Todd, 1993). In terms of inherent disadvantages, first is model specificity. Direct manipulation interfaces gain much of their power from the development of specific models which the user can understand and apply. Such specific models may sacrifice flexibility. Users may be required to learn many specialized systems rather than fewer generalized ones. Second is constraint of the solution space. The success of direct manipulation systems depends on their ability to capture a user's model faithfully. As a result, it is unlikely that they will lead to new ways to think about problems. Rather, the interface will reinforce current thinking, thus discouraging innovative solutions. And third is repetitive operations. Repetition, or looping, of functions can be tedious to perform in a direct manipulation environment (Benbasat & Todd, 1993). In terms of implementation disadvantages, first is the question of whose model of interaction is to be built into the system. To design a general interface, one assumes that there is a prototypical user's model to draw upon. This may not be the case. The second disadvantage is precision in manipulation. In a direct manipulation system, invoking a command requires precise manipulation by the user. Virtually every user of a mouse-driven interface has experienced the frustration of attempting detailed work on a screen and having incorrect objects activated (Benbasat & Todd, 1993). In a study involving 48 university students (27 males and 21 females), little or no advantage was found for icons. However, the authors did state that these results may not generalize to other applications, such as games (Benbasat & Todd, 1993).
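The repetitive-operations disadvantage can be illustrated schematically with a sketch of my own (not from Benbasat & Todd, 1993): a command-style or scripted interaction can apply one instruction to many objects at once, whereas a direct manipulation interaction requires one physical gesture per object:

# Hypothetical sketch contrasting a typed command (one instruction covering
# many objects) with direct manipulation (one user gesture per object).
import fnmatch

files = [f"report_{i}.txt" for i in range(100)]

def archive(name):
    pass  # stands in for the underlying system operation

# Command-style interaction: one typed instruction covers every matching object.
def run_command(command):
    _verb, pattern = command.split()
    targets = fnmatch.filter(files, pattern)
    for name in targets:
        archive(name)
    return len(targets)

print(run_command("archive report_*.txt"), "objects archived with one command")

# Direct manipulation: each object must be selected and dragged individually,
# so the number of gestures grows with the number of objects.
gestures = 0
for name in files:
    archive(name)  # each call stands in for one click-and-drag gesture
    gestures += 1
print(gestures, "separate gestures required")

The contrast is schematic; it is meant only to show why an operation over many objects is cheap to express in a command language but tedious as a sequence of individual gestures.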
The general study of human-machine interaction began in WWII with a focus on understanding the psychology of soldiers interacting with weapon and information systems. After the war, human-machine interaction began to examine more broadly the relationship of work and computer product environments. Human-computer interaction (HCI) developed from this work and is a multi-disciplinary field involving computer science, psychology, engineering, ergonomics, sociology, anthropology, and design. HCI is concerned with the design, evaluation, and implementation of interactive computing systems for human use (Berg, 2000). HCI is generally used to mean human-computer interaction, but sometimes is described as human-computer interface or man-machine interface (MMI; Berg, 2000). WAINESS PHD QUALIFYING EXAM 55 The literature on HCI focuses in part on cognitive processes (mental processes), especially in terms of the capacities of users and how these affect users’ abilities to carry out specific tasks with computer systems (Berg, 2000). The cognitive aspects of HCI include motor, perceptual, and cognitive systems, as well as two types of memory: working and long-term (Berg, 2000). Usability refers to the degree to which a computer is effectively used by its users in the performance of tasks (Berg, 2000). According to Berg (2000), designing for experienced users is difficult, but designing for a broad audience of unskilled users presents a far greater challenge (Berg, 2000). Interface design is a subset of HCI and focuses specifically on the computer input and output devices such as the screen, keyboard, and mouse, and has its roots in the ergonomic study of instrument panels during WWII (Berg, 2000). In addition to visual interface issues, the HCI literature also touches on topic related to visual perception and how the specifics of human visual perception may impact human-computer interaction (Berg, 2000). Interface metaphors are often discussed in HCI literature as they pertain to interface design. Interface metaphors work by exploiting previous user knowledge of a mental model (Berg, 2000). Research suggests that metaphors stand in the way of making new connections and associations and that, while similar representations creatd by metaphors can be useful, they can also be detrimental to user behavior under specific conditions, particularly if the metaphor does not fie appropritately (Berg, 2000). The term animation is used to describe movements of either text or graphics on the computer screen (Berg, 2000). Agents are active and ever-present software components that perceive, appear to reason, act, and communicate. They are also referred to as guides and personal assistants (Berg, 2000). For the past 50 years, computers have had a profound effect on humans and have advanced our lives in immeasurable ways (Chalmers, 2000). According to Hokanson and Hooper (2000), computers were predicted to improve both teaching and student achievement. Students would learn more through computers: test scores would rise, students would remember more, and they would learn at a faster rate. Computer-assisted education would help students prepare to compete in a modern, global workforce. Despite continued optimism, we now find criticisms and concerns being raised regularly. Principal among the complaints is the failure to find an improvement in learner’s performance (Hokanson & Hooper, 2000). 
Two of the most significant developments during the 1980s in the domain of humancomputer interaction (HCI)—direct manipulation (DM) and graphical user interfaces (GUI)— combine to form direct manipulation interfaces (DMIs). These two innovations were introduced and proposed, hand in hand, as vehicles to user-friendly design promoting ease of use and improved task performance (Kaber, Riley, & Tan, 2002). Hypertext and hypermedia make wide use of the graphical user interface (GUI), which operates on the metaphorical premise of direct manipulation and engagement by the users (Brown & Schneider, 1992). Hypermedia relies heavily on the use of windows, icons, menus, and pointer systems (Brown & Schneider, 1992). In addition, interface metaphors such as the Microsoft Windows “desktop” metaphor, are widely used. Some of the newest metaphor developments can be found in interfaces created for presenting information structures, multimedia, group work, and virtual reality (Neale & Carroll, WAINESS PHD QUALIFYING EXAM 56 1997, p. 442). Along with the proposed benefits of metaphors, the way in which the user interacts with the computer environment has also been suggested to influence performance. Allen (1997) has suggested that interaction between a computer and a human being may be viewed as a specialized conversation. According to de Jong, de Hoog, and de Vries (1993), one way of interacting with a computer, where objects can be manipulated directly, is found in the so-called direct manipulation metaphor. In the early 1980s, Shneiderman coined the term direct manipulation along with its key concepts. With direct manipulation, objects on the screen are representations of real world objects, and interactions with that world, in the simplest form, are manipulated through clicking and dragging with a mouse. Because there is a minimum of syntactic knowledge required by the user, he can concentrate fully on the semantics of the objects and the actions of the task (de Jong et al., 1993). While this is expected to lead to improved performance (Benbasat & Todd, 1993), it has not always been the case. Review of the Literature This review is divided into three parts. First is a discussion of mental models and their relationship to computer interfaces. In particular, the second part of the discussion centers on metaphors, a specific type of mental model which is used in modern computer interfaces. The last section is a discussion of the three interface types, which are defined by their mode of interaction: command, menu, and direct manipulation. In this third section, research results are discussed along with conflicting findings and possible causes for those conflicts. Mental Models Models are approximations of objects or processes which maintain some essential aspects of their original form (Allen, 1997), and mental models are usually considered the way in which people model processes. This emphasis on process distinguishes mental models from other types of cognitive organizers such as schemas (Allen, 1997). The majority of the research on mental models has been with studies of computer-human interaction. Many aspects of humaninteraction with computers involve complex processes; therefore people who interact with computer systems must enlist a mental model for those processes (Allen, 1997). According to Eberts and Brittianda (1993), the user forms a mental model of how the computer system or program works, which then guides the user’s actions and behaviors. 
The mental model can be thought of as the user’s understanding of the relationships between the input and output of the computer so the user can predict the output that would be produced by possible inputs (Eberts & Brittianda, 1997). In addition to mental models, other models relevant to human-computer interaction include user models and data models. “The expectations a user has about a computer’s behavior come from mental models, while the ‘expectations’ a computer has of a user comes from user models (Allen, 1997, p. 49). A Graphical User Interface (GUI) is a type of representation of a data model from the perspective of user interaction; It determines how the data are displayed to the user. Therefore, the GUI can only be effectively designed after the data model has been developed (Stary, 1999). Computer-related design tasks, including software design of educational applications and video games, may involve the interaction of several mental models. They may include models of the capabilities of the tools, models of the partially completed work, and models of the user’s interests and capabilities (Allen, 1997). A number of visual and auditory components can aid in the development of mental models, including text, graphics, and animation (Allen, 1997). Metaphors WAINESS PHD QUALIFYING EXAM 57 One mental model, the metaphor, uses “the similarity of one process with which a person is familiar to teach that person about a different process” (Allen, 1997, p. 50). Metaphors also help learners feel directly involved with objects in the domain so the computer and interface become invisible (Wiedenbeck & Davis, 2001). There are several types of metaphors, including activity metaphors, mode of interaction metaphors, and task domain metaphors. Activity metaphors are determined by the user’s highest level goal; for example, controlling a process, communicating, or playing a game (Neale & Carroll, 1997). Mode of interaction metaphors organize the principal nature in which users think about interacting with the computer; these metaphors are task independent. The third type of metaphor, the task domain metaphor, provides an understanding for how tasks are structured. Most of the user interface literature discusses metaphors at the task domain level (Neale & Carroll, 1997). According to Neale and Carroll (1997), the mode of interaction metaphor can be divided into three interaction categories: conversation, declaration, and model world. Two of these categories, the conversation and model world metaphors, will be discussed here, due to their relevance to command and direct manipulation interfaces. The conversation metaphor creates a conversational interface (e.g. command line) which functions as an implied intermediary between the computer and user, and is modeled after human to human conversations (Neale & Carroll, 1997). The model world metaphor is what most in the user interface community thinks of when working with metaphors. The model world is usually based on the metaphor of the physical world, and the user interacts directly with the modeled world. A combined metaphor, the collaborative manipulation metaphor, is a combination of the conversational and model world metaphors (Neale & Carroll, 1997). The ways in which a metaphor is incorporated into a mental model are difficult to examine and probably vary greatly from user to user. 
In addition, a metaphor can be counterproductive because the metaphor is rarely a perfect match to the actual process and incorrect generalizations from the metaphor can result in poor performance on the task (Allen, 1997). Metaphor mismatches can occur for several reasons. Small dissimilarities between the source and target domains cause mismatches. Combining several metaphor source domains will typically result in mismatches among the metaphor mapping of the composite; the metaphors in the composite can be inherently different, often directly contradicting each other. Mismatches can also occur when the user’s task characteristics and goals change (Neale & Carroll, 1997). Interface Style Interface design is an effective way to accommodate user differences (Sein, Olfman, Bostron, & Davis, 1993). According to Kaber et al. (2002), Graphical User Interfaces (GUIs) were, in part, an outgrowth of direct manipulation, implying that the term interface includes both the screen design and the style of interaction. Wiedenback and Davis (1997) contend that interaction style may have a strong impact on perceptions of software and ultimately on its use, particularly for users who are not computer professionals and who are characterized by an irregular or less-intense pattern of use (Wiedenback & Davis, 1997). Three types of interfaces are defined by the literature, based on their interaction style: conversational (or command), direct manipulation, and menu. Command interface. The conversational interface requires the user to read and interpret either words or symbols which tell the computer to perform arithmetic operations and processes (Brown & Schneider, 1992). In conversational interfaces, the user typically uses a keyboard to type commands telling the computer what he or she wants to have happen. Often these commands are similar to, but still unlike, natural languages (de Jong et al, 1993). A more WAINESS PHD QUALIFYING EXAM 58 common term for the conversational interface is command interface (or command line interface). Command interfaces are operated by the user typing a command string in a language and syntax recognized by the system. The user must remember an array of commands, as well as their syntax. And since several command lines, or a single complex line, may be required to achieve the desired outcome, the user must also structure a sequence of actions correctly to achieve the intended result (Wiedenback & Davis, 1997). Because interactions are carried out using a keyboard, rather than by pointing, clicking, and dragging with a mouse, the results of the actions are often not as visible to the user as when using the other two interface types; direct manipulation and menu (Wiedenback & Davis, 1997). Direct manipulation interface. Researchers credit Schneiderman with coining the phrase direct manipulation in the 1980s (Brown & Schneider, 1992; Eberts & Brittianda, 1993; Kaber et al., 2002; Phillips, 1995). Direct manipulation (DM) is a collective term that refers to a style of HCI for user interfaces showing the properties of continuous representation of objects and actions of interest, object manipulation with physical interaction with icons and buttons rather than the use of complex syntax, and rapid incremental reversible operations with rapid, visible feedback (Eberts & Brittianda, 1993; Kaber et al., 2002). The direct manipulation interface (DMI) is defined as one in which the user has direct interaction with the concept world; the domain. 
The user is able to perceive a direct connection between the interface and what it represents (Brown & Schneider, 1992). Broadly defined, direct manipulation interfaces represent the physical manipulation of a system of interrelated objects analogous to objects found in the real world. While the object representations may take on a variety of forms, they are most commonly represented as icons, although it is possible to provide a text-based implementation of the objects or combined text-icon presentations (Benbasat & Todd, 1993). DMIs allow users to carry out operations as if they were working on the actual objects in the real world. The gap between the user’s intentions and the actions necessary to carry them out is small. These two characteristics of direct manipulation are referred to as engagement and distance. High engagement combined with small distance leads to a feeling of directness in a system (Wiedenbeck & Davis, 1997). Engagement is defined as a feeling of working directly with the objects of interest in the world rather than with surrogates (Wiedenbeck & Davis, 2001). Engagement refers to the perceived locus of control of action within the system (Frohlich, 1997). A critical determinant of the level of engagement is whether users feel they are the principal actors within the system. In systems based on a conversational style of interaction, the locus of control appears to reside with a “hidden intermediary” (Frohlich, 1997, p. 465). This interaction is considered indirect because the user is not directly engaged with the objects of interest. In systems based on a graphical style of interaction, with use of a pointing, clicking, and dragging device, the locus of control appears to reside with users who manipulate the objects of interest (Frohlich, 1997). This creates a sense of engagement. The cognitive effects of direct manipulation can be described in terms of distance (Frohlich, 1997; Wiedenbeck & Davis, 2001). Distance is a cognitive gap between the user’s intentions and the actions needed to carry them out. This distance is in part a syntactic distance, consisting of the translation of user intentions to a language and syntax understood by the computer. It is also partly a semantic distance, consisting of the translation of a user’s “real world” understanding of the task to its computer-implemented form (Wiedenbeck & Davis, 2001). With direct manipulation, the syntactic distance is reduced by presenting the user with a predefined list of visible options. The semantic distance is reduced by the use of an interface metaphor that allows the user to click and drag familiar objects in a well-understood context (e.g., the Windows or Macintosh desktop metaphor). The metaphor is most often complemented with icons meant to evoke the metaphor in a concrete, visual way (Wiedenbeck & Davis, 2001). According to Frohlich (1997), distance refers to “the mental effort required to translate goals into actions at the interface and then evaluate their effects” (p. 466). Each intended action must span a cycle of goal, action, and result. Interfaces which make it easier for users to perform these cycles are said to be more direct (Frohlich, 1997).
Menu interface. In a menu style of interaction, objects and possible actions are represented by a list of choices, usually as text. Menus are similar to direct manipulation in that they help guide the user, which reduces mental burden.
The menu may help to structure the task and eliminate syntactic errors (Wiedenbeck & Davis, 1997). However, menu-based systems are generally less direct than DMIs because the hierarchical structure of the menus provides a kind of syntax that the user must learn. As a result, users do not feel as directly connected to the objects they are manipulating through their actions (Wiedenbeck & Davis, 1997).
Comparing interfaces. A number of studies have been conducted comparing command, direct manipulation, and menu interfaces, some with consistent results and some without. The findings of studies comparing menu to command line interfaces have been relatively consistent. Overall, recognition and categorization may be faster for pictures than text (Benbasat & Todd, 1993). Menu interfaces lead to better task performance than command interfaces, which is attributed to a smaller gap between user intentions and actions with menu interfaces (Wiedenbeck & Davis, 2001). The results of studies comparing DMI to menu or DMI to command line have been less consistent. Wiedenbeck and Davis (1997) suggested that direct manipulation interfaces lead to more positive perceptions of ease of use than does a command interface. With elementary school students, Brown and Schneider (1992) found that DMI was more comfortable, enhanced the speed of problem solving, and was less frustrating compared to the conversational interface. Sein et al. (1993) contended that because a direct manipulation interface provides an “explicit, comprehensible, analogical conceptual model of the computer system, it can reduce the demands placed upon subjects to internalize system states, which in turn leads to improved performance” (p. 615). In support of this view, de Jong et al. (1993) found direct manipulation interfaces enhanced the efficiency of task performance for both simple and complex tasks, with the improved performance more pronounced for complicated tasks. Other findings for direct manipulation interfaces are mixed or unclear. In an analysis of empirical studies into the benefits of icons, and therefore direct manipulation, Benbasat and Todd (1993) found no clear advantage for the use of icons. According to Kaber et al. (2002), although direct manipulation may minimize cognitive distance and maximize engagement, the interface is less effective from the perspective of repetitive or complex tasks, particularly those where one action is to affect multiple objects. They argue that, to achieve semantic directness (the distance between the user’s intentions and the objects and operations provided by the system), the user should be able to communicate those intentions to the system in a simple and concise manner at all times. The need for repetitive actions in order to affect multiple objects is not supported by DM and, therefore, increases mental effort and the amount of time needed to complete a task (Kaber et al., 2002). In a comparison of the DMI to the command interface, Westerman (1997) found that the performance strategies of novices were relatively insensitive to command complexity, while experts were aware of this factor and used the command line less frequently as complexity increased. With regard to experts, Frohlich (1997) found that performance slows, rather than speeds up, with direct manipulation interfaces, for two reasons. First, as was also suggested by Kaber et al. (2002) and Westerman (1997), the language of DM limits complex actions.
Second, use of familiar real-world metaphors may limit users to existing ways of doing things; while this may make learning and remembering easier for novices, it is more constraining for experts. In communications, Frohlich (1997) found that direct manipulation interfaces increased the cognitive load on conversational partners, even though they decreased the interactional work between them. A number of causes have been suggested to account for the discrepancies in the findings for direct manipulation interfaces. Eberts and Brittianda (1993) questioned the validity of interface comparison studies. They suggested that comparing performance differences across interface designs is difficult because the predicted execution times are intrinsically different for each interface and, therefore, difficult to compare (Eberts & Brittianda, 1993). In contrast, Benbasat and Todd (1993) argued that direct manipulation interfaces are often compared to both command and menu-type interfaces in studies. The menu interface eliminates the confounding effect of time on performance found with command line interfaces. Also, since the menu interface is usually made up of menu panels containing a list of options which may be words or icons, the selection of menus facilitates an experimental design to test the main and interaction effects of direct manipulation versus menus, and text versus icon-based interfaces (Benbasat & Todd, 1993). With this in mind, in a study of adult learners, Benbasat and Todd (1993) found no performance advantages for icon-based systems, when compared to other interfaces. However, both Benbasat and Todd (1993) and Frohlich (1997) have suggested that the icons themselves may be influencing the findings. A number of factors have been found to affect the value and usability of icons: specifically, complexity, meaningfulness, and concreteness. These factors combine to define an icon’s distinctiveness. Distinctiveness refers to whether one icon can be confused with other icons (McDougall, de Bruin, & Curry, 2000). According to McDougall et al. (2000), icon complexity is concerned with the level of detail used in constructing the icon’s imagery. It is particularly important when simple icons are presented against a complex array, or when complex icons are presented against a simple array. Icon meaningfulness refers to how well an icon presents the user with its intended function, that is, how much it portrays the action it generates. And icon concreteness is the degree to which an icon depicts real-world objects users are familiar with (McDougall et al., 2000). The effects of these various characteristics are influenced by the way in which icons are grouped, the concreteness of one icon compared to other icons, and the complexity of an icon compared to the complexity of other icons. According to the researchers, meaningfulness, rather than complexity, appeared to be the primary determinant of icon distinctiveness when the concreteness of the icon arrays was varied. Concrete icons (i.e., those pictorially representing real-world objects) were seen as more meaningful against an abstract array. Conversely, abstract icons were seen as more meaningful against a concrete array. When arrays of simple and complex icons, consisting of a mixture of both abstract and concrete icons, were presented, both of these effects were observed (McDougall et al., 2000). In their study, the effects of icon concreteness were found to be short-lived and limited to users’ early experience with an icon set.
By contrast, the effects of icon complexity were most apparent in tasks involving a search component and did not diminish as a result of experience (McDougall et al., 2000). While Benbasat and Todd (1993) found little or no performance advantages for icons, and while Frohlich (1997) found no general advantages to using icons rather than textual menus, Frohlich did suggest that in particular cases icons may be better because of the additional information they carry. According to Benbasat and Todd (1993), icons may lead to improved performance for novices and casual learners because of the superiority of visual memory over verbal memory. Frohlich (1997) argued that the quality of the icon can affect both user performance and study results. Frohlich has contended that poorly designed icons can actually be worse than labels because those icons carry less information. And even with well-designed icons, it is difficult to convey the desired meanings without invoking other connotations. When this happens, problems arise and semantic errors occur. Ambiguous meanings arise because there is no universal set of icons or principles to guide icon design (Benbasat & Todd, 1993). A final possible confound in the findings with regard to direct manipulation interfaces may be due to how specific interface implementations are defined. Many so-called direct manipulation interfaces include elements from several interface styles, and are more accurately referred to as mixed-mode interfaces (Frohlich, 1997). They include menus and windows, as well as conversational interaction such as dialog boxes, fill-in forms, and command languages (de Jong et al., 1993; Phillips, 1995). The Macintosh operating system is one such example. While it is typically referred to as a direct manipulation interface, it covers a range of interactions involving a pointing device and keyboard for menu selection, dragging, and drawing, along with dialog boxes and text entry (Phillips, 1991). Pure direct manipulation interfaces, according to Frohlich’s framework, would be “model-world interfaces based on Action in/Action out modality involving only the media of sound, graphics, and motion. Dialog boxes, forms, and short-cut commands are not part of this definition” (Frohlich, 1997, p. 478). Using this framework, many interfaces which have traditionally been thought of as direct manipulation interfaces are in actuality mixed-mode interfaces (Frohlich, 1997).
Conclusion
Computers assist people in performing tasks of an increasingly difficult, complex, and comprehensive nature, and using an application can be made easier by introducing “transparent” interfaces (de Jong et al., 1993). A fundamental motivation of graphical user interfaces (GUIs) is to improve the medium and content of human-computer communication. The implementation of such an interface can be achieved by providing visual representations of the concepts or items of interest through the use of objects which can be directly manipulated by a pointing or selecting device such as a graphics tablet, mouse, or trackball (Edmonds, O’Brien, Bayley, & McDaid, 1993). According to Kaber et al. (2002), at the heart of the direct manipulation concept is the promotion of manual interaction with objects, rather than the use of a communications language and syntax, to reduce the mental load placed on the learner’s cognitive system.
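As a rough illustration of the contrast Kaber et al. (2002) describe, the same operation can be expressed as a typed command, which requires recall of a verb and its syntax and is interpreted by an implied intermediary, or as direct manipulation of a visible object. The sketch below is hypothetical; none of the names come from the cited literature.

    # Hypothetical sketch: deleting a document via a command language versus
    # via direct manipulation of the object itself.

    documents = {"report.txt": "...contents..."}

    # Conversational/command style: the user recalls and types a command string;
    # a parser (the "hidden intermediary") interprets it and acts on the objects.
    def run_command(line: str) -> None:
        verb, _, target = line.partition(" ")
        if verb != "delete" or target not in documents:
            raise ValueError("unrecognized command or target")  # syntactic distance
        del documents[target]

    # Model-world/direct-manipulation style: the user acts on the visible object
    # (e.g., drags its icon onto a trash object); no command syntax intervenes.
    def drop_on_trash(icon_name: str) -> None:
        documents.pop(icon_name, None)

    run_command("delete report.txt")   # requires recall of the verb and its syntax
    # drop_on_trash("report.txt")      # requires only recognizing and dragging the icon

The point of the sketch is only the location of the burden: in the first case the user must remember and correctly structure the command; in the second, the options are visible and acted on directly, which is the reduction in syntactic distance and the increase in engagement described above.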
The DM paradigm has been found to support ease of learning for novice users, reduced error rates, and decreased computer-related anxieties (Kaber et al., 2002). These characteristics, along with a greater perception of the relationship between input and output, can be found in a variety of systems ranging from video games, interactive graphics packages, and spreadsheet programs to computer-aided design systems, virtual-control systems, and many office systems (Kaber et al., 2002). The premise behind the expectation that direct manipulation interfaces would improve performance is that, according to Benbasat and Todd (1993), when interacting with a computer a user is typically focused on some cognitive task. Attention devoted to the interface may interfere with that task. Since text-based processing is associated with cognition, more interference should be expected. The belief was that, with direct manipulation interfaces, the reduced effort required for interaction would leave more mental resources available for the learning task (Benbasat & Todd, 1993). However, findings do not always support this expectation. Instead of an interface that would provide benefit in all situations, these interfaces seem to improve selected aspects of usability on a restricted set of tasks (Frohlich, 1997). Possibly, the solution lies in mixed interfaces. According to Svendsen (1991), some tasks should have a command interface, others a direct manipulation interface, and yet others some kind of hybrid interface. As Frohlich (1997) suggested, manual interfaces are not always better than conversational ones, and combined interfaces can leave the choice very effectively in the hands of users. Ultimately, the challenge is to fine-tune computer interfaces to make computers easier to use and accessible to all learners (Chalmers, 2000).
The purpose of this review is to explore the various task-related constructs and conditions that affect motivated behavior and, ultimately, mental effort. “Motivation generates the mental effort that drives us to apply our knowledge and skills. Without motivation, even the most capable person will not work hard” (Clark, 2003, p. 21). Motivated behavior involves attempting and persisting at academic achievement tasks (Corno & Mandinach, 1983), and learning is strongly influenced by the amount of mental effort (the depth or thoughtfulness) learners invest in processing material (Salomon, 1983). Mental effort is defined as “working ‘smarter’ at either a new or old performance goal” (Condly, Clark, & Stolovitch, in press, p. 1). A number of factors affect motivation and mental effort. In an extensive review of motivation theories, Eccles and Wigfield (2002) discuss Borkowski and colleagues’ motivation model, which highlights the interaction of the following cognitive, motivational, and self-processes: knowledge of oneself (including goals and self-perceptions), domain-specific knowledge, strategy knowledge, and personal-motivational states (including attributional beliefs, self-efficacy, and intrinsic motivation). In a study of college freshmen, Livengood (1992) found that psychological variables (i.e., effort/ability reasoning, goal choice, and confidence) are strongly associated with academic participation and satisfaction.
And Corno and Mandinach (1983) commented that students in classrooms actively engage in a variety of cognitive interpretations of their environments and themselves which, in turn, influence the amount and kind of effort they will expend on classroom tasks. Task characteristics can be divided into three broad categories: (a) the nature and content of the task, (b) the learner’s perceptions and interpretations of the task, and (c) the context in which the task occurs, all of which can affect task perceptions, motivations, and mental effort. The nature and content of the task includes elements such as task difficulty and whether the task is collaborative or individualistic, as well as the task’s domain, the information to be learned, and the instructional elements applied to the task. Individual perceptions and interpretations of the task are based on a number of personal factors such as goal orientation, self-efficacy, expectancies for success, and the value placed on the task. The context in which the task occurs includes a variety of elements such as the classroom structure (e.g., whether a classroom is collaborative or competitive), the instructional design, the presence or absence of rewards or other incentives, the nature of the evaluative processes, the amount and type of instructional support offered, and the goal orientation of the classroom. Each of the components within and across the three categories interacts to create a complex network of influences and interdependencies, which ultimately affect motivation and mental effort. The various components can be referred to as task characteristics, since each explicitly defines the task or applies interpretations to the task that alter perceptions and the personal definition of the task. Each component serves to either support or undermine the investment of mental effort. This review is based on the relationships and constructs defined in the CANE (Commitment And Necessary Effort) model of motivation (Clark, 1999). The model is a compilation of a number of smaller, disparate motivational models (see Pintrich & Schunk, 2002). The one area of divergence from the CANE model in this review is with respect to persistence and mental effort. In the CANE model, persistence and mental effort are seen as distinct indicators of motivation that do not directly interact. Other researchers (e.g., Miller, Greene, Montalvo, Ravindran, & Nichols, 1996; Thompson, Meriac, & Cope, 2002) suggest that mental effort can be an indicator of persistence, creating a relationship where persistence is an independent variable and mental effort is a dependent variable. This review adopts a blend of these two perspectives: persistence is treated as an indicator of mental effort for findings that explicitly connect the two and for findings that refer to persistence in a way that suggests the application of mental effort. I have divided this review into four sections: goal setting and goal orientation, expectancy-value theory and self-efficacy, instructional design, and cognitive engagement and self-regulation. Each of these four sections is further divided into subsections. The goal setting and goal orientation section includes an introduction to goal orientation, followed by discussions of task orientation and performance orientation. The expectancy-value theory and self-efficacy section is subdivided into an introduction to expectancy-value theory, followed by task value and self-efficacy.
The instructional design section is subdivided into an introduction, task difficulty, support and feedback, collaboration, and incentives. The final section, cognitive engagement and self-regulation, is subdivided into an introduction, followed by cognitive engagement and effective strategy use.
Review of the Literature
Eccles and Wigfield (2002) discuss Pintrich and colleagues’ model of the relations between motivation and cognition. The model incorporates a variety of components including student characteristics (such as prior achievement levels), the social aspects of the learning setting (e.g., the social characteristics of the task and classroom interactions between students and teachers), several motivational constructs derived from expectancy-value and goal theories (expectancies, values, and affect), and various cognitive constructs (e.g., background knowledge, learning strategies, and self-regulatory and metacognitive strategies). Both the cognitive and motivational constructs are assumed to influence students’ involvement with their learning and, consequently, achievement outcomes (Eccles & Wigfield, 2002). Students’ achievement values determined initial engagement, and their self-efficacy facilitated both engagement and performance in conjunction with cognitive and self-regulation strategies. In sum, the social cognitive view of self-regulation emphasizes the importance of self-efficacy beliefs, causal attributions, and goal setting in the regulation of behavior directed at accomplishing a task or activity (Eccles & Wigfield, 2002).
Goal Setting and Goal Orientation
Individuals without specific goals (such as “do your best”) do not work as long as those with specific goals, such as “list 70 contemporary authors” (Thompson et al., 2002; Locke & Latham, 2003). Goal setting theory, according to Thompson et al. (2002), is based on the simple premise that people exert effort toward accomplishing goals. Goals may increase performance as long as a few factors are taken into account, such as acceptance of the goal, feedback on progress toward the goal, a goal that is appropriately challenging, and a goal that is specific (Thompson et al., 2002). Goal orientation theory is concerned with the prediction that those with high performance goals and a perception of high ability will exert great effort, and those with low ability perceptions will avoid effort (Miller et al., 1996). Whether or not a person adopts a goal is influenced not only by his or her view of personal ability but also by other, salient evaluation criteria (Bong, 2001). In a study by Ames and Archer (1988) on the relationship of goal orientation to task choice and to the selection and use of effective learning strategies, the researchers found that the use of learning strategies may be related to whether students adopted a mastery or performance goal orientation in the classroom. A mastery goal orientation is one in which students undertake challenging tasks for the sake of learning and improving abilities. Those who adopt a performance goal orientation are concerned with how their abilities are perceived or evaluated by others. Those with a performance orientation try to validate their superior ability or receive an extrinsic incentive (Jagacinski & Nicholls, 1984; Bong, 2001). Depending on the situation, those with a performance orientation may either try to demonstrate their ability (performance-approach) or hide a perceived lack of ability (performance-avoidance).
Those with a mastery orientation also might try to demonstrate ability (mastery-approach) or avoid a situation where they are not entirely sure of their ability to succeed (mastery-avoidance; Archer & Scevak, 1998). For mastery oriented learners, effort is seen as a way to increase ability and to succeed. For performance oriented students, effort is seen as a sign of inability and, therefore, the appearance of effort is to be avoided (Archer & Scevak, 1998). There are a number of alternative terms for mastery orientation, including intrinsic orientation, task orientation, task-involved orientation, and learning orientation. Alternatives to performance orientation include extrinsic orientation, ability-focused orientation, ego orientation, and ego-involved orientation (Jagacinski & Nicholls, 1984; Ames & Archer, 1988; Archer & Scevak, 1998; Coffin & MacIntyre, 1999; Bong, 2001). In this review, the terms mastery and performance will be used for these two constructs, respectively.
Mastery orientation. With mastery orientation, the belief is that more effort will lead to greater mastery. If we try hard and increase mastery, that success leads to a greater feeling of competence. Mastery is an end in itself, pursued for challenge, curiosity, and mastery (Jagacinski & Nicholls, 1984). A mastery orientation can be fostered by the way a task is structured, by the nature of the evaluative system in which instruction is embedded, by the level of autonomy afforded students, and by the opportunity to work collaboratively with other students (Archer & Scevak, 1998). For example, providing students an opportunity to resubmit assignments as a way to improve skills and grades has been found to promote a mastery orientation. Informational feedback, as opposed to ranking feedback, has also been found to promote mastery orientation; informational feedback gives students an indication of strengths and weaknesses and where to focus future effort (Archer & Scevak, 1998). According to Covington and Omelich (1984), mastery oriented learning structures promote a number of factors thought to initiate and sustain task involvement, persistence, and improved performance. When students perceived their class as emphasizing a mastery goal, they were more likely to use effective learning strategies, prefer challenging tasks, enjoy their class more, and believe that effort and success covary (Ames & Archer, 1988).
Performance orientation. In contrast to mastery orientation, individuals who are performance oriented hold a differentiated conception of ability (i.e., effort and ability covary), because their assessment of ability is based on normative information (comparison to others). Perceived success occurs when they demonstrate superior ability by outperforming peers rather than by displaying high effort or personal improvement (Fry & Duda, 1997). Activation of the differentiated conception of ability will be likely when learners are directly concerned with evaluating their own or another’s ability, such as with academic test performance and grading systems, where competition with others is emphasized (Jagacinski & Nicholls, 1984). When a performance orientation was salient to students, there was a tendency to see the work as too difficult, reflecting a maladaptive motivational pattern that was unlikely to support continued effort (Ames & Archer, 1988). Evaluative conditions can have this effect.
Testing situations commonly involve norm-referenced evaluations of performance, increasing the likelihood that a differentiated conception will be activated. The differentiated conception is necessary for adequate or objective evaluations of ability; if we don’t compare our effort and performance with that of others, we can’t tell whether our performance is due to task difficulty or effort, as opposed to ability (Jagacinski & Nicholls, 1984). High effort in mastery-involving situations can lead to feelings of competence, accomplishment, and pride. High effort in performance-involving situations generally results in lower feelings of competence (Jagacinski & Nicholls, 1984).
Expectancy-Value Theory
Expectancy-value theories propose that the probability of behavior depends on the value of a goal and the expectancy of obtaining that goal (Coffin & MacIntyre, 1999). Expectancies refer to beliefs about how we will do on different tasks or activities, and values have to do with incentives or reasons for doing the activity (Eccles & Wigfield, 2002). From the perspective of expectancy-value theory, goal hierarchies (the importance and the order of goals) also could be organized around aspects of task value. Different goals may be perceived as more or less useful, or more or less interesting. Eccles and Wigfield (2002) suggest that the relative value attached to a goal should influence its placement in a goal hierarchy, as well as the likelihood a person will try to attain the goal and therefore exert mental effort.
Task value. Task value refers to an individual’s perceptions of how interesting, important, and useful a task is (Coffin & MacIntyre, 1999). Interest in, and perceived importance and usefulness of, a task comprise important dimensions of task value (Bong, 2001). Citing Eccles’ expectancy-value model, Townsend and Hicks (1997) stated that the perception of task value is affected by a number of factors, including the intrinsic value of a task, its perceived utility value, and its attainment value. Thus, engagement in an academic task may occur because of interest in the task, or because the task is required for advancement in some other area (Townsend & Hicks, 1997). According to Corno and Mandinach (1983), a task linked to one’s aspirations (a “self-relevant” task) is a key condition for task value. However, task value can be affected by other perceptions. For example, if a person has a performance orientation, it is predicted that motivated behavior should decrease on self-relevant tasks if the performance of significant others is interpreted as relatively more successful (Corno & Mandinach, 1983). Low task relevance can have a similar effect. If, for example, the results of a test are nonconsequential (they have no utility value), or if the student perceives a test as nonconsequential, he may not invest sufficient effort on complex (and therefore more mentally taxing) test items (Wolf, Smith, & Birnbaum, 1995). In addition, participation in any task may also carry negative aspects or costs which can affect the individual’s perception of the task. These costs may include the amount of effort necessary for success or the loss of valued alternative activities. According to Townsend and Hicks (1997), because of the limitations of time and energy, a student’s decision to participate in a valued academic task might result in an inability to participate in another highly valued activity, such as a social activity.
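A minimal numerical sketch may help fix these components. It assumes the common multiplicative expectancy-value form and a simple additive decomposition of task value into intrinsic, utility, and attainment components minus perceived cost; the functional form and the numbers are illustrative assumptions, not part of the models cited here.

    # Hypothetical illustration: motivated behavior as expectancy x value, with
    # task value decomposed into intrinsic, utility, and attainment components
    # minus perceived cost (all quantities on an arbitrary 0-1 scale).

    def task_value(intrinsic: float, utility: float, attainment: float, cost: float) -> float:
        return intrinsic + utility + attainment - cost

    def motivated_behavior(expectancy: float, value: float) -> float:
        return expectancy * value

    # The same academic task, with and without a high cost (e.g., a conflicting,
    # highly valued social activity): the cost term lowers the task's overall
    # value and, with it, the predicted investment of effort.
    print(motivated_behavior(0.7, task_value(0.8, 0.6, 0.7, cost=0.9)))  # ~0.84
    print(motivated_behavior(0.7, task_value(0.8, 0.6, 0.7, cost=0.1)))  # ~1.40

The only point of the sketch is the direction of the relationships: raising expectancy or any value component increases predicted effort, while raising cost decreases it.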
Thus, an activity in one life domain may have high intrinsic, utility, and attainment values, yet may act as an obstacle to success in an activity in some other life domain. The cost of involvement in the first activity would decrease the overall value of that activity (Townsend & Hicks, 1997). Goal satisfaction or dissatisfaction in any domain may be related to how activities in other domains are perceived, through task value, suggesting that Eccles’ expectancy-value model of motivated behaviors can be used to consider not only academic achievement behaviors but also achievement in the wider sense of social goals (Townsend & Hicks, 1997). In addition, social satisfaction also influences the value of social tasks and their position in the goal hierarchy. The more socially satisfied a person is, the greater the perceived value and the lower the cost. For those low in social satisfaction, a classroom structure that supports the social domain, such as a classroom that promotes collaboration and cooperative learning, can have a positive effect on students’ task values. For example, students in a math or language classroom with a cooperative goal structure reported higher task values for those classes (Townsend & Hicks, 1997). There are other ways a classroom can be structured to increase perceived value. Miller et al. (1996) suggested that an emphasis on the coordination of proximal goals with distal valued outcomes (future consequences) is one such solution. The distal goals are expected to help sustain effort in academic areas that are of low interest to students. The proximal goals are to help promote the utilitarian component of task value. Archer and Scevak (1998) suggest that choice in a task or topic can promote interest, another component of task value.
Self-efficacy. Academic self-efficacy is a student’s beliefs about his or her capabilities to perform academic tasks at specific levels (Bong, 2001). People’s beliefs about their ability to successfully perform a task influence their willingness to attempt the task, the level of effort they will expend on the task, and their persistence in the face of challenge (Miller et al., 1996). Self-efficacy can also determine the goal orientation of a student. According to Livengood (1992), students low in confidence in their intelligence tend to be performance oriented, seeking to validate their ability and perform in order to look good, even at the risk of not learning. Those high in confidence tend to be mastery oriented and participate in activities to develop their abilities and increase mastery. The goal orientation of the task, whether it is performance or mastery oriented, can affect students differently, depending on the students’ levels of self-efficacy. Jagacinski and Nicholls (1984) suggested that people who perceive themselves as able will perform equivalently in performance and mastery situations. Those with low self-efficacy will perform worse in performance oriented situations than in task oriented situations. Performance in task oriented situations appears to be equivalent regardless of whether a person has low or high self-efficacy. In performance situations, low self-efficacy students will perform poorly due to fears of negative appraisals; they see performance tasks as a test of their abilities. Those with high self-efficacy perform well because they do not have that overemphasis on being evaluated.
They are more concerned with the task and the learning process; they tend to approach the performance situation as if it were a mastery situation (Jagacinski & Nicholls, 1984). In contrast to self-efficacy, which is somewhat global, task value and goal orientation are more domain specific. How much value students attach to particular subject matter and their preferences toward task mastery and challenge in that subject vary across domains (Bong, 2001). Furthermore, task value (importance, usefulness, and intrinsic interest) may play a more meaningful role than self-efficacy in guiding students to a mastery orientation. In a study of high school students, Bong (2001) found that task-value perceptions were clearly differentiated across diverse subjects. In addition, mastery orientation followed the same pattern as task value, suggesting that the two constructs share similar cross-domain associations. In contrast, self-efficacy perceptions were only moderately correlated across subjects (Bong, 2001).
Instructional Design
Self-efficacy interacts with a number of theories, including attribution theory, social cognitive theory, and achievement theory. In classrooms of students at cognitive levels where attributional explanations for behavior make sense, the situations most likely to induce attributions vary. According to Corno and Mandinach (1983), these situations can include grading, testing, and other evaluation procedures; skill training or drilling exercises; and problem solving or competitive games, as well as assignments involving the simultaneous application of various academic skills (e.g., leading a discussion, writing) and performance of sex-typed tasks by students of the opposite sex. Social cognitive theory posits that those who perceive a more positive outcome will work harder to increase learning, and will therefore perform better (Wiedenbeck & Davis, 2001). According to Miller et al. (1996), theories of achievement motivation built around competence-related goals have suggested that students’ desires to increase their knowledge, understanding, or skills (i.e., mastery orientation) are major factors in guiding their level of engagement in academic tasks. However, the extent to which students hold valued long-term goals and the extent to which they perceive their current school experiences as related to the attainment of those goals must also be considered. This suggests that educators must enlist a variety of tools in their efforts to foster cognitive engagement and learning (Miller et al., 1996). Achievement motivation is also enhanced to the extent that the learner perceives a positive relationship between the amount of study time expended and the rewards (e.g., proximal rewards such as grades) attained. This covariation strengthens the saliency of effort as a primary cause of one’s successes and failures (Covington & Omelich, 1984). There are a number of ways a classroom environment can be structured to encourage learning and enhance student motivation. For example, providing multiple retesting opportunities, in which students work toward a goal with a predetermined level of achievement, is an effective teaching and motivational tool because it provides feedback regarding what is yet to be learned (Covington & Omelich, 1984). According to Hughes, Sullivan, and Mosely (1985), the few studies dealing explicitly with the effects of teacher evaluation on continuing motivation (i.e., return to task) have only partially supported the contention that motivation is reduced by teacher evaluation.
One explanation is the nature of the environment in which the evaluation occurred. One effective format for evaluation is to first allow students to gain mastery of a task and only evaluate performance after the task becomes relatively easy for the majority of students. Teacher comments and judgments, while students are still learning to perform the task well, should take the form of constructive feedback designed to help students improve their performance, rather than evaluation for some other purpose, a process similar to the instructional and evaluative approach recommended for mastery learning (Hughes et al., 1985). A second example of how to improve learning and student motivation is to provide a classroom environment that supports social interactions. According to Townsend and Hicks (1997), social satisfaction should be higher in classrooms where teachers utilize methods of instruction that provide greater opportunities for involvement and affiliation with other students. One such form is cooperative learning, where small groups of students work together to accomplish shared goals.
Goal orientation. Goal orientation also plays a significant role in how students utilize mental effort, as well as in their attitudes. Ames and Archer (1988) commented that when we ask why students fail to use effective learning strategies, we may not be giving enough attention to the conditions of learning that may affect the use of learning strategies. We may need to consider how the student perceives the goal orientation of the learning environment. Situational demands can affect the salience of specific goals, which in turn results in differential patterns of cognition, affect, and performance (Ames & Archer, 1988). For example, when social comparison is made salient, students focus on their ability, and these self-perceptions mediate performance and affective reactions to success and failure. By contrast, when absolute standards, self-improvement, or participation are emphasized, students focus more on mental effort and task strategies (Ames & Archer, 1988). In many classrooms, the informational cues that serve to emphasize one goal or another are often mixed and tend to be inconsistent over time. Further, students in the same classroom may differ in the degree to which they focus on certain cues, as well as in how they interpret them (Ames & Archer, 1988). The degree to which a classroom climate emphasizes mastery orientation, rather than performance orientation, is predictive of how students choose to approach tasks and engage in learning (Ames & Archer, 1988). However, it is the students’ perception of the classroom orientation that matters more than the teacher’s intended orientation. Archer and Scevak (1998) found that the way lecturers approach their teaching, in the attitudes and behavior they display, is related to students’ motivation to learn. Student teachers who perceived the lecturer to be encouraging a mastery orientation made use of the types of study strategies that are expected to enhance understanding, they enjoyed their tutorials, they saw the subject as relevant to their future (teaching) careers, and they were willing to tackle difficult rather than easy tasks. This adaptive approach was displayed not only by the highly competent students but also by students who saw themselves as only average or below average (Archer & Scevak, 1998). Another instructional practice that can foster mental effort is related to absolute grading standards (criterion-based assessment).
However, while absolute grading standards contribute to performance improvements, it is the level of standards expected, rather than whether they were defined in relative or absolute terms, that primarily affected the increased performance. This raises the question of the optimal motivational level of task difficulty (Covington & Omelich, 1984).
Task difficulty. According to Davis and Wiedenbeck (2001), cognitive curiosity, which arises from situations in which there is complexity, incongruity, and discrepancy, motivates the learner to attempt to resolve the inconsistencies through exploration. Salomon (1983) suggests that learning, and the amount of mental effort expended, greatly depends on the differentiated way in which sources of information are perceived, and that those perceptions influence the mental effort expended in the learning process. The amount of mental effort learners invest in extracting information from a source, discriminating among information units, remembering, or elaborating is influenced by the way they perceive that source. Perceptions of a source refer to the mental effort requirements of the message, its attributions (e.g., depth, complexity, importance), the tasks to be performed, as well as the context in which the learner is exposed to the source (Salomon, 1983). According to Archer and Scevak (1998), task difficulty is an elusive thing to define. One influential component in that definition is the probability of error or the time or effort required to avoid error, pointing to the importance and interrelatedness of the subtasks that constitute smaller and simpler cognitive skills (Archer & Scevak, 1998). Crawford (1978) comments that the instructional difficulty level that best facilitates learning has been examined in a number of different contexts. The findings have indicated that no single best difficulty level exists for optimally promoting knowledge acquisition for all types of learners in all situations (Crawford, 1978). A 50% difficulty level is suggested for individuals with a high need for achievement, since this is neither too easy and boring nor overly difficult and frustrating. For individuals with a strong fear of failure and a low need for achievement, instruction that is either very low or very high in difficulty is predicted to be optimal, either because there is a very low probability of failure at the low difficulty level or because there is an excuse for failure (an opportunity for external attribution) at the high (above 50%) degree of difficulty. Therefore, these learners would prefer difficulty levels tending toward the extremes (0% or 100%; Crawford, 1978). According to Clark (2003), an impossible task is one where the perceived probability of success is less than 15%. In a study by Archer and Scevak (1998), participants performed better when trials were more difficult to initiate. These results are consistent with theories in which attention (mental effort) is allocated in response to the high level of task difficulty. When initial tasks are made more difficult, allocation mechanisms are “tricked” into investing more attention and effort than is necessary (or for longer than is necessary), so that on subsequent tasks cognitive performance benefits from that initial boost initiated by the trial-initiation demands.
This would suggest that conditions of difficult trial initiation result in relatively increased cognitive arousal, which in turn yields corresponding increases in the capacity of available attention (Archer & Scevak, 1998). These findings have implications for instructional practices, particularly in the form of computer-based instruction or drills, which are frequently designed with the goals of making procedures as easy as possible and introducing material slowly. The findings of Archer and Scevak suggest that such practices may be counterproductive, because students are unlikely to sustain mental effort if the initial tasks are too easy and do not produce high mental effort demands. Therefore, each type or dimension of task difficulty should be carefully considered in the design and analysis of tasks, to determine the optimal initial levels of task difficulty for eliciting and sustaining attention, accelerating learning, and improving performance (Archer & Scevak, 1998). In contrast, Hughes et al. (1985) suggested that students have been shown to return to a task at a greater rate as they feel more competent on the initial task. Return to task was significantly higher when subjects initially were given an easy task rather than a hard one. As students’ performance improved, they returned to task more often. Return rates were also significantly higher for students who reported they thought they did not perform well on initial tasks. And students returned to the easier task at a higher rate than they did to the harder task (Story & Sullivan, 1986). The effects of task difficulty on performance may be moderated by other variables such as goal orientation. Jagacinski and Nicholls (1984) commented that presentation of a moderately difficult or challenging task (i.e., at the 50% difficulty level) in mastery oriented conditions should generate an expectation that higher effort would lead to more mastery, thereby demonstrating higher ability. As long as the task is not perceived as too difficult to support a gain in mastery, all individuals should apply high effort and perform effectively (Jagacinski & Nicholls, 1984). However, if the same moderately difficult task were presented in a performance oriented context, individuals might face the dilemma that although high effort could increase performance, it could also become a demonstration of low ability. Individuals who believe their ability is low (as compared to others) would expect to perform poorly (relative to others), even if they tried hard, and therefore demonstrate low ability. For these individuals, low effort might be seen as a way of reducing the degree to which failure would imply low ability (Jagacinski & Nicholls, 1984). Crawford (1978) suggested that learners with strong cognitive structures learn optimally under less redundant (i.e., more difficult) conditions. However, for less able students, the instruction is probably best if it proceeds in smaller steps and presents the information in a more redundant format. For these less able students (or those who perceive themselves as less able), success on a task appears to improve performance on subsequent attempts at the same task, and success on one task affects the speed of learning on the second task. However, success on one task does not always facilitate success on a subsequent task (Crawford, 1978). Small steps and prior success may not be beneficial to all students, depending on their goal orientation.
According to Latta (1978), for students who are not mastery oriented but must master a difficult task, prior success can be detrimental to the learning process. In contrast, prior success helps students with a mastery orientation when attempting to master a difficult task (Latta, 1978).
Support and feedback. A positive, personalized, and encouraging comment may not be powerful enough to motivate students to return to task and exert mental effort (Story & Sullivan, 1986). The context of the comment is an important mediator. For example, challenging tasks may be less threatening, and possibly even more attractive, to students who view the situation as emphasizing the process of learning, encouraging effortful activity, and deemphasizing the negative consequences of making errors, that is, a mastery orientation (Ames & Archer, 1988). According to Hughes et al. (1985), students returned to task more often after a hard activity under self-evaluation or after an easy activity under teacher evaluation. It was suggested that the low return to task on difficult tasks under teacher evaluation was due to the threat of exposure. Students felt threatened that their poor performance would be observed and evaluated by the teacher. This threat reduces motivation and therefore reduces return to task (continued mental effort). By providing self-evaluation for difficult tasks, that threat was removed. As a result, students commonly perceived performance on a hard task as more of a challenge under self-evaluation and as more of a threat under teacher evaluation (Hughes et al., 1985). One type of feedback that seems particularly helpful in motivating students is success feedback. Success feedback may function as a reinforcer, a cue for eliminating errors, or an incentive (Latta, 1978). The immediate effects of success feedback will lead to better performance by individuals low in achievement orientation compared to those high in achievement orientation. An individual high in achievement orientation is predominantly motivated to approach success, while a person low in achievement orientation is predominantly motivated to avoid failure. Therefore, the probability of success on a task is an important moderating factor. The differences between those with high and low achievement orientation occur because those high in achievement orientation prefer to work on tasks with a probability of success of about .5, while those low in achievement orientation prefer to work on tasks with a probability of success closer to either 1.0 or 0.0. Thus, any facilitation of performance by success feedback observed on the first task should be moderated by initial achievement orientation, with success exerting a more positive impact on individuals initially low in achievement orientation (Latta, 1978).
Collaboration. Cooperative task structures are situations in which two or more individuals are allowed, encouraged, or required to work together on a task. The task structures used in cooperative (collaborative) learning situations can be divided into two categories: task specialization and group study. With task specialization, each group member is responsible for a specific part of the group activity. With group study methods, all group members study together and do not have separate tasks (Slavin, 1984). According to Slavin (1984), there are several reasons collaborative tasks might be expected to improve student achievement.
Collaborative tasks can promote peer tutoring, group discussions, and controversy, all of which appear to increase comprehension. However, the effects of cooperative learning tasks on achievement depend on the behaviors of the group and the characteristics of participants, as well as other factors. Cooperative environments have been found to be beneficial in some circumstances and harmful in others (Slavin, 1984). Cooperative learning (i.e., collaboration) can either support or deter mental effort, depending on student attitude and on classroom structure. Students may use collaboration as a way of doing less. For example, Archer and Scevak (1998) found that some students who worked with a partner stated they chose to partner in order to halve the workload. Others, though, chose collaboration for positive reasons. Some students chose to work with others to increase the number of ideas generated (Archer & Scevak, 1998). In a study that used television as the medium for content delivery, Klein, Erchul, and Pridemore (1994) found that students who worked alone performed better than those who worked cooperatively. The structure in which the students worked on the first tasks influenced their preferences for the way subsequent tasks were structured. Students working alone expressed more interest in individual activities, while those who worked cooperatively expressed a desire for activities that required cooperative learning (Klein et al., 1994). However, the results of the study may have been skewed, due to the nature of the incentive structure (the rewards). Klein et al. (1994) suggested that the results of their experiment indicated that the positive effects of these methods on student achievement resulted from the use of cooperative incentives, not from the use of cooperative tasks. Slavin (1984) contended it is not just the administration of rewards, but the nature of the rewards, that may affect outcomes. Slavin stated that the most successful cooperative learning methods do little to alter the content or delivery of instruction. While the methods do change the way students study, the group study aspect of the cooperative learning methods has not been found to contribute to achievement effects. However, the evidence indicated that a simple change in a classroom incentive system produces relatively consistent changes in student achievement (Slavin, 1984).
Incentives. Students who are unmotivated to learn do not learn (Slavin, 1984). Student motivation refers to students’ interest in doing academic work and learning academic material. Continuing motivation (persistence) is defined as returning to a task or behavior without apparent external pressure to do so when other behavioral alternatives are available (Malouf, 1997-1998). Classroom incentives refer to methods teachers use to motivate students to do academic work and learn materials (Slavin, 1984). Student motivation is influenced in part by classroom incentives, but also by such factors as interest in the task, parents’ interest in the students’ achievement, and students’ perceptions of their abilities and chances of success (Slavin, 1984). According to Malouf (1997-1998), several factors have been found to influence the effects of inducements upon subsequent effort, including the power of the inducement, the initial level of motivation, the effects on self-perceived competence and task enjoyment, and the relationship between inducement and behavior.
For example, incentives based on mental effort have been shown to produce a performance gain of 20% (Condly et al., in press). A clear distinction should be made between the terms reward and reinforcer. A reinforcer acts to strengthen a behavior (e.g., by increasing its rate, intensity, duration, or quality). If a reward is delivered but no strengthening of behavior is observed, it cannot be said that reinforcement has occurred. The majority of studies on reduced continuing motivation have not reported strong reinforcement effects on behavior (Malouf, 1983). Continued effort is only one of several possible ways in which rewards may influence behavior. Rewards may also convey information about the probability of future reinforcement, promote the development of skills which may allow a student to enjoy previously unenjoyable activities, or convey information about a student’s ability or competence. The net effect of a reward on subsequent behavior may be from a combination of these and other messages conveyed by the reward (Malouf, 1983). A number of researchers (e.g., Hughes et al., 1985; Coffin & MacIntyre, 1999) have commented that, in many cases, extrinsic motivation (rewards) decreases initial intrinsic motivation (interest) and may even interfere with the process of learning. This effect, known as the over-justification effect, commonly occurs when both intrinsic and extrinsic reasons for participating in the task are present. Because there is an overabundance of justification, the attribution of intrinsic interest is discounted by the presence of the external incentive. In general, this occurs because extrinsic rewards may distract attention away from a student’s interest in and enjoyment of a task, as well as from the actual process of learning (Coffin & MacIntyre, 1999). The over-justification hypothesis has been used to explain the apparent negative effects of rewards on intrinsic interest. The plausible explanation is that offering external motivators for an inherently interesting activity will result in a reduction of interest in the activity. This prediction is an outgrowth of self-attribution theory and of the study of personal causation (Hughes et al., 1985). According to self-attribution theory, the reasons for engaging in activities are perceived and inferred from the environment. When there are no external motivators, the reason for pursuing an activity is attributed to personal interest and desire. However, when an external motivator is introduced, the reason for engaging in the activity is attributed to that external force. Personal causation hypothesizes that for a person to be motivated to pursue an activity, she must feel she is the cause of that action. Rewards change this perception of personal causation and thus undermine intrinsic interest (Hughes et al., 1985). A number of researchers dispute the over-justification hypothesis. For example, Hughes et al. (1985) believe there are inconsistencies in the over-justification hypothesis. They suggest that the lack of a consistent relationship across studies between teacher evaluation and continuing motivation may indicate that the over-justification hypothesis does not adequately explain the relationship between grades and motivation. For example, it appears the hypothesis applies only when an activity is initially of high interest; only then can an external reward reduce that interest (Hughes et al., 1985).
Eisenstein (1985), too, commented that rewards that undermine interest for initially high-interest subjects appear to raise interest for initially low-interest subjects. Other researchers have found a number of factors that also might affect over-justification. According to Miller et al. (1996), it may be the nature of the reward, and not just any reward, that affects intrinsic interest. Immediate extrinsic rewards are typically presented in a manner that reduces a person's sense of self-determination. However, the pursuit of distant outcomes (distant rewards), rather than proximal rewards, is likely to be viewed as self-determined rather than imposed; the result would be continued intrinsic interest (Miller et al., 1996). Malouf (1983) suggested that exogenous rewards (rewards unrelated to a task) may support the over-justification effect, while endogenous rewards (rewards related to the task) do not. Eisenstein (1985) also found that endogenous rewards enhance an activity so that the activity itself is the end, whereas when the rewards are exogenous, the activity simply becomes a means to an end (Eisenstein, 1985). In addition to offering rewards for individual work, rewards can also be offered to those working in a group. According to Slavin (1984), there are two primary components of cooperative learning methods: a cooperative task structure and a cooperative incentive structure. Cooperative learning methods always involve cooperative tasks, but not all of them involve cooperative incentives. Cooperative task structures are situations in which two or more individuals are allowed, encouraged, or required to work together on some task, coordinating their efforts to complete the task. The critical feature of a cooperative incentive structure is that group members are interdependent for a reward they will share if they are successful as a group. Cooperative incentive structures usually involve cooperative tasks, but the two are conceptually distinct (Slavin, 1984). There are three types of incentive structures used in cooperative learning methods: rewarding the group, rewarding the individual, or offering no rewards at all. A group reward structure provides all group members the same reward, based on the performance of the group as a whole. An individual reward structure provides each individual in the group with a reward, based on that individual's performance (Klein et al., 1994). Through a meta-analysis of 46 field experiments on cooperative learning, Slavin (1984) suggested that the optimum reward structure for group tasks is group rewards, because rewards based on group performance create group member norms supporting performance; group members try to make the group successful by encouraging each other to excel. In support of Slavin's comments, a meta-analysis by Condly et al. (in press) found a 48% increase in performance for team-based incentives. Slavin hypothesized that groups create an internal, very sensitive, and very effective socially based reward system for each other, in which they pay a great deal of attention to each other's efforts and socially reinforce efforts to help the group achieve its goal. The group is also likely to apply social disapproval to group mates who are underperforming or playing around instead of learning (Slavin, 1984). The individual reward system for a group can also take the form of a competitive reward system, and promote a competitive learning environment.
In a competitive learning mode, rewards are restricted to top performers (or the top group), so the likelihood of a student or group receiving a reward is reduced by the presence of other able students or groups. In contrast, under an individualistic reward structure, the likelihood of attaining a reward does not depend on the performance of others. Such noncompetitive conditions lead to a classroom mastery orientation, where improvement in performance over time becomes the basis for evaluation and self-improvement becomes a dominant goal (Covington & Omelich, 1984). An alternative approach to the competitive reward structure is to reward the group based on the highest scores. In an analysis of a number of studies, Slavin (1984) found that when the group was rewarded based on the highest scores, high achievers learned the most, while low achievers learned the most only when the group depended on their scores. Student achievement is best enhanced by cooperative learning methods that use group rewards for individual learning, and by learning methods that maintain high individual accountability for students. Cooperative learning in which groups are rewarded on the basis of the sum of all members' scores provides the greatest learning benefit, and therefore the greatest expenditure of mental effort, for all group members (Slavin, 1984). Cognitive Engagement and Self-Regulation. Students in classrooms actively engage in an array of cognitive interpretations of their environments and themselves. This, in turn, influences motivation in the form of the amount and type of effort exerted (Corno & Mandinah, 1983). Goals initiate and direct behavior, and the content of the goals helps to determine the strategy used for achieving them (Rosswork, 1977). According to Corno and Mandinah (1983), evidence suggests that students use varied processing strategies to carry out common academic tasks. These strategies are variations of self-regulated learning, and students differ in their spontaneous use of these variations. Students apply different cognitive engagement strategies because tasks vary in novelty, difficulty, and competitive features; because teachers provide different types of instruction and guidance; and because students have different goals, past experiences with the task or the domain, general ability levels, and mental sets (Corno & Mandinah, 1983). Jones, Yokoi, Johnson, Lum, Cafaro, and Kee (1996) also supported the effect of the availability and accessibility of relevant knowledge on strategy processing. Cognitive engagement. According to Corno and Mandinah (1983), there are four forms of cognitive engagement: self-regulation, task focus, resource management, and recipience. Each form is defined by the amount of acquisition (alerting, monitoring, and high-level planning) and transformation (selectivity, connecting, and low-level planning) processes used. Transformative processes are cognitive processes that directly help in generating knowledge (Corno & Mandinah, 1983). Examples of transformative processes include hypothesis generation and data interpretation (de Jong, de Hoog, & de Vries, 1993). Transformation processes (i.e., selecting, connecting, and planning) have both metacognitive and cognitive features; they can activate other cognitive schemata that may be relevant for the task (Corno & Mandinah, 1983). According to de Jong et al.
(1993), alertness, monitoring, and high-level planning are predominantly information acquisition processes; the information is gathered primarily from the environment. Acquisition processes bound and control the transformation processes. The acquisition processes are viewed as metacognitive because they regulate the transformation processes. The transformation processes have both metacognitive and cognitive aspects (Corno & Mandinah, 1983). de Jong et al. (1993) defined similar processes, using the term regulative processes, which combines some aspects of both acquisition and transformation. de Jong et al. (1993) stated that regulative processes help manage learning through processes such as monitoring, planning, and verifying, and that monitoring and planning together can be called navigation. For this discussion, Corno and Mandinah's terms and definitions will be used. Considered the highest of Corno and Mandinah's four forms of cognitive engagement, self-regulation consists of specific cognitive activities, such as deliberate planning and monitoring, which learners carry out as they encounter academic tasks (Corno & Mandinah, 1983). Self-regulation processes include elaboration, problem solving, decision making, integration, and planning (Corno & Mandinah, 1983). According to Eccles and Wigfield (2002), self-regulated learners have three important characteristics: they use an assortment of self-regulated strategies; they are self-efficacious; and they have numerous and varied self-determined goals. Self-regulated learners engage in three important processes: self-observation (monitoring personal actions); self-judgment (evaluation and comparison to a performance goal or other standard, such as the performance of others); and self-reaction (reaction to performance outcomes). When these reactions are favorable, students are more likely to persist and apply mental effort. Reactions to failure are of particular importance. The favorableness of a learner's reaction to failure is determined by how the learner interprets his or her difficulties and failures (Eccles & Wigfield, 2002). Corno and Mandinah (1983) suggested that self-regulated learners are forever increasing, deepening, and manipulating specific content networks or associative memory networks, including the strength of the bonds between propositions. Therefore, self-regulated learning is an effort to deepen and manipulate the associative network in a particular area (including non-academic domains) and to monitor and improve that deepening process. In the second form of cognitive engagement, task focus, students activate relatively more information transformation processes than acquisition processes; selectivity, connecting new information to existing knowledge, and task-specific planning are the key cognitive activities. Task focus is appropriate when tasks require quick analytic responses and little self-checking or use of external cues. Task focus can be promoted by instruction that systematically eliminates the irrelevant features of an object, idea, argument, or event; for example, by demonstrating the steps a learner would take in determining the information relevant to completing a task, and in sorting and chunking that information into meaningful categories.
Task-focused instruction would emphasize the separation of relevant from irrelevant information, and further emphasize that only the relevant information is important for achieving the desired performance. Task-focused instruction should also emphasize the importance of using what the student already knows to help categorize and anchor new information in memory, and to visualize changes in design and visual fields. This type of instruction can help students prepare for some types of achievement and ability tests (Corno & Mandinah, 1983). The third form of cognitive engagement is resource management. According to Corno and Mandinah (1983), although self-regulated learning is the highest form of cognitive engagement, it is somewhat taxing. When tasks create cognitive demands, students may engage in self-regulated learning, or they may shift the mental burden by calling on available external resources, such as a knowledgeable peer; this process of acquiring external cognitive resources is termed resource management. With resource management, learners intentionally avoid the mental effort of carrying out information transformation on their own, instead enlisting the help of others for some or all task components (Corno & Mandinah, 1983). The social character of the classroom setting can encourage resource management. Cooperative learning environments, where group work or peer support is encouraged, are an example of classroom situations that can encourage resource management (Corno & Mandinah, 1983). Recipience, the fourth form of cognitive engagement, is a form of passive response or learning in which the environment provides much of the transformation and low-level monitoring processes, a condition termed short-circuiting. In these environments, most of the mental burden is removed from the learner and provided by an external source, similar to resource management. The difference is that with resource management, the learner must enlist external support; with recipience, the external support is automatically provided to the learner through the instructional process. For example, advance organizers that provide short-circuiting promote the use of recipience (Corno & Mandinah, 1983). Short-circuiting organizers include charts and diagrams, summaries and reviews, outlines and marginal notes, markers of important points, and advance organizers. Whether or not these organizers provide short-circuiting depends on the type and extent of the content they contain. They only short-circuit if they provide most or all of the to-be-learned information. In these instances, all the student has to do is memorize the information provided by the organizers. No transformational mental processes are required, just acquisition processes; and only some of the acquisition processes are required, such as rehearsal. In addition to short-circuiting immediate learning, an implicit message is sent that learning is rote or associational, rather than requiring problem solving and mental elaboration. The result of short-circuiting is reduced development of cognitive skills, compared to the amount of development promoted by either self-regulation or resource management (Corno & Mandinah, 1983). In contrast, if the role of the organizer is simply to guide learning (if the organizer provides only key terms or information, or examples that support and assist learning), short-circuiting does not occur (Corno & Mandinah, 1983). Effective strategy use.
When confronted with tasks, learners automatically use the knowledge and skills they have already acquired and perceive to be relevant to goal attainment (Locke & Latham, 2002). There are a number of ways the various cognitive strategies can be utilized by students and by instructional design to promote learning. In addition, classroom instruction can be designed to assist learners in gaining and developing cognitive strategies, that is, to help learners learn how to learn. For example, while short-circuiting is generally viewed negatively from an educational perspective, it can serve as a learning tool. For low- or even average-ability students, short-circuiting can be beneficial. For these learners, short-circuiting can provide achievement of immediate, lower-level objectives, thereby increasing task-specific efficacy expectations for those students. Those students are then more apt to apply higher-order cognitive strategies (Corno & Mandinah, 1983). For high-achieving students, even though they should be more knowledgeable and aware of effective learning strategies, their use of those strategies is dependent on their perception of the goal emphasis of the class (Ames & Archer, 1988). Task perceptions also affect high performers' strategy selections. According to Corno and Mandinah (1983), in some instances, high-achieving students may prefer low-level processes, such as short-circuiting, as a way to shortcut certain learning requirements. In contrast, these more able students use active mental approaches for complex tasks. In classroom environments, able learners shift between active and less active learning processes as interest or task perceptions dictate (Corno & Mandinah, 1983). According to Corno and Mandinah (1983), able learners have cognitive strategies for accomplishing tasks that may not be present in the repertoires of less able learners. Less able learners may approach tasks passively (recipience) or by seeking external assistance (resource management), because they are unfamiliar with higher-order processes. Students must be taught alternative cognitive engagement strategies, alternatives that are more effective for some tasks (Corno & Mandinah, 1983). The context of the learning environment, as well as the instructional design, can affect the development and use of various cognitive strategies. According to Eccles and Wigfield (2002), some environments do not allow much latitude in the choice of activities or approaches, making self-regulation more difficult. Corno and Mandinah (1983) added that learning is less self-regulated when some of the processes are overtaken by classroom teachers, other students, or features of written instructions (short-circuiting). Instructional design methods can be utilized to foster not only the use of cognitive strategies but also the development of those strategies and an awareness of when to use them. One example is classroom recitation. According to Corno and Mandinah (1983), classroom recitation is when a teacher conducts a lesson dialog involving repetition of goals and content, asking students questions to cognitively engage them and elicit responses, and responding to student questions and comments. An advantage of classroom recitation is that it can encourage cognitive engagement on several levels, without employing any one instructional strategy long enough to harm the motivation or performance of students with differing abilities.
It may also reduce the likelihood of any one cognitive engagement strategy becoming automatic or habitual (Corno & Mandinah, 1983). Another possible determinant of strategy use is future consequences. Future consequences are "anticipated and valued distant consequences thought to be at least partially contingent on task performance but not inherent in the performance itself" (Miller et al., 1996, p. 390). The researchers commented that future consequences contribute to the explanation of variance in both academic engagement (e.g., effort, strategy use, self-regulation) and achievement, even when controlling for other goals and perceived abilities. A need to please the teacher was also found to increase self-regulation, and it covaries with reported use of self-regulatory behaviors such as setting goals, monitoring progress, and making adjustments in study behavior (Miller et al., 1996). Self-set goals have also been shown to lead to more task commitment and better task strategies for learners with high self-efficacy (Locke & Latham, 2002). In general, people with high efficacy are more likely than those with low efficacy to develop effective task strategies (Locke & Latham, 2002). Knowledge can be stored in memory in a variety of forms. One way is in isolated and disconnected pieces of information, often the result of learning by rote; much of the knowledge that students acquire in school seems to be in this form. In contrast, knowledge can be organized into large, interconnected bodies, where pieces of knowledge are conceptually linked to other pieces. This network of interconnections can extend and link to other information to broaden the range of cognitive activities, such as answering a variety of domain-specific questions, drawing analogies, making inferences, and generalizing to other domains (Blanton, 1998). A second aspect of the information-processing approach, which is also an integral part of instructional design, concerns the activation of relevant background knowledge (Blanton, 1998). The media selected for the design must be consistent with the operational objectives. Media can be books, pamphlets, brochures, handouts, slides, film strips, television, computers, and so on (Blanton, 1998). Hypertext and hypermedia production tools are making wide use of the graphical user interface (GUI). This interface operates on the metaphorical premise of direct manipulation and engagement by the user. Authors of hypermedia constructs are relying heavily on the availability of windows, icons, menus, and pointer systems in producing and implementing software presentations (Brown & Schneider, 1992). The direct manipulation interface (DMI) is defined as one in which the subject has direct interaction with his or her concept world. The subject has the ability to perceive a direct connection between the interface and what it represents (Brown & Schneider, 1992). In a study with eighty-seven elementary school students, grades three through six, students had little trouble assimilating the direct manipulation interface, and had more difficulty with the conversational computer interface. While the study examined attitudinal differences and found a DMI preferable to a conversational interface, learning outcomes were not examined (Brown & Schneider, 1992).
According to Brunken, Plass, and Leutner (2003), Sweller (1999) distinguished three types of load: one type that is attributed to the inherent structure and complexity of the instructional materials and cannot be influenced by the instructional designer, and two types that are imposed by the requirements of the instruction and can, therefore, be manipulated by the instructional designer. The cognitive load caused by the structure and complexity of the material is called intrinsic cognitive load. The complexity of any given content depends on the level of item or component interactivity of the material, that is, the number of information units a learner needs to hold in working memory to comprehend the information (Pollock, Chandler, & Sweller, 2002, as cited in Brunken, Plass, & Leutner, 2003). Cognitive load imposed by the format and manner in which information is presented, and by the working memory requirements of the instructional activities, is referred to as extraneous cognitive load, a term that highlights the fact that this load is a form of overhead that does not contribute to an understanding of the materials (Brunken, Plass, & Leutner, 2003). Finally, the load induced by learners' efforts to process and comprehend the material is called germane cognitive load (Gerjets & Scheiter, 2003; Renkl & Atkinson, 2003; as cited in Brunken, Plass, & Leutner, 2003). According to Brunken, Plass, and Leutner (2003), both extraneous and germane load can be manipulated by the instructional design of the learning material. Among the instructional strategies that have been found to reduce extraneous cognitive load and optimize germane cognitive load are worked examples (Kalyuga, Chandler, Tuovinen, & Sweller, 2001); goal-free activities (Sweller, 1999); and activities based on the completion effect (van Merrienboer, Schuurman, de Croock, & Paas, 2002), the modality effect (Brunken & Leutner, 2001; Mayer & Moreno, 2003; Sweller, 1999), and the redundancy effect (Sweller, 1999), as cited in Brunken, Plass, and Leutner (2003). Cognitive load theory is based on several assumptions regarding human cognitive architecture: the assumption of a virtually unlimited capacity of long-term memory, schema theory as the account of mental representations of knowledge, and limited-processing-capacity assumptions about working memory (Brunken, Plass, & Leutner, 2003). According to Brunken, Plass, and Leutner (2003), the Baddeley (1986) model of working memory assumes the existence of a central executive that coordinates two slave systems: a visuospatial sketchpad for visuospatial information such as written text or pictures, and a phonological loop for phonological information such as spoken text or music (Baddeley, 1986; Baddeley & Logie, 1999). It is also assumed that both slave systems are limited in capacity and independent from one another, in that the processing capacities of one system cannot compensate for a lack of capacity in the other (Brunken, Plass, & Leutner, 2003). For each of the two working memory subsystems, the total amount of cognitive load for a particular individual under particular conditions is defined as the sum of the intrinsic, extraneous, and germane load induced by the instructional materials.
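This additive definition, together with the overload condition discussed below, can be summarized compactly; the notation is introduced here for illustration only and does not appear in the cited sources:

```latex
L_{total} = L_{intrinsic} + L_{extraneous} + L_{germane},
\qquad \text{overload as } C - L_{total} \rightarrow 0,
```

where C denotes the processing capacity of the relevant working memory subsystem (visual or auditory).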
Therefore, a high cognitive load can be a result of a high intrinsic cognitive load (i.e., a result of the nature of the instructional content itself). It can, however, also be a result of a high extraneous or germane cognitive load (i.e., a result of activities performed on the materials that produce a high memory load). In other words, the same learning material can induce different amounts of memory load when different instructional strategies and designs are used for its presentation, because the different cognitive tasks required by these strategies and designs are likely to result in varying amounts of extraneous and germane load (Brunken, Plass, & Leutner, 2003). If the difference between total cognitive load and the processing capacity of the visual or auditory working memory approaches zero, then the learner experiences a high cognitive load or overload (Brunken, Plass, & Leutner, 2003). The foundation and implications of CLT can be especially well investigated in the context of multimedia learning, because the use of this technology as an instructional medium involves perceiving and processing information in different presentation modes and sensory modalities. A process theory that supplements CLT in the description of the cognitive processes in multimedia learning was introduced by Mayer (2001) as the generative theory of multimedia learning (Brunken, Plass, & Leutner, 2003). Two of the principal foundations of the generative theory of multimedia learning are the dual-coding assumption and the dual-channel assumption (Brunken, Plass, & Leutner, 2003). The dual-coding assumption refers to the presentation mode of the information and posits that verbal material (e.g., written and spoken text) and pictorial material (e.g., pictures, graphics, and maps) are processed and mentally represented in separate but interconnected systems, an assumption taken from dual-coding theory (Paivio, 1986). The dual-channel assumption refers to the sensory modality of information perception and proposes that visual information (e.g., written text) and auditory information (e.g., spoken text) are processed in different systems that correspond to the visuospatial and phonological subsystems in Baddeley's (1986) working memory model (Brunken, Plass, & Leutner, 2003). According to Brunken, Plass, and Leutner (2003), the generative theory of multimedia learning combines these two assumptions with a generative approach to learning (Wittrock, 1974, 1990) by stating that learners actively select relevant visual and verbal information from the learning material and organize it in visual and verbal working memory, respectively, by building associative connections. Learners then integrate these mental representations, as well as prior knowledge, by building referential connections (Mayer, 2001). The modality effect may best illustrate how these principles allow for the design of multimedia instruction that enhances learning outcomes. Focusing on the sensory modality of information, this principle states that knowledge acquisition is better facilitated by materials presented in a format that simultaneously uses the auditory and the visual sensory modalities than by a format that uses only the visual modality (Mayer, 2001). Using CLT, the modality effect can be explained by describing the memory load condition for each of the treatments. The picture-and-text variant induces a higher load in visual working memory, because both types of information are processed in this system. The picture-and-narration variant induces a lower amount of load in visual working memory, because auditory and visual information are processed in their respective systems. Thus, the total load induced by this variant of the instructional materials is distributed among the visual and the auditory systems (Brunken, Plass, & Leutner, 2003).
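As a rough illustration of this channel-splitting argument (the symbols are mine, not from the cited sources), let p, t, and n denote the load imposed by a picture, by equivalent written text, and by equivalent narration:

```latex
\text{Picture and text:}\quad L_{visual} = p + t, \qquad L_{auditory} = 0
\text{Picture and narration:}\quad L_{visual} = p, \qquad L_{auditory} = n
```

Roughly the same amount of material is presented in both variants, but in the second variant the load in each channel stays further from that channel's capacity.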
Cognitive load can be treated as a theoretical construct, describing internal processes of information processing that cannot be observed directly (Brunken, Plass, & Leutner, 2003). The various methods of assessing cognitive load that are currently available can be classified along two dimensions: objectivity (subjective or objective) and causal relation (direct or indirect). The objectivity dimension describes whether the method uses subjective, self-reported data or objective observations of behavior, physiological conditions, or performance. The causal relation dimension classifies methods based on the type of relation between the phenomenon observed by the measure and the actual attribute of interest (Brunken, Plass, & Leutner, 2003). Self-report questionnaires on the amount of mental effort individuals feel they exerted are an example of subjective-indirect measurement. Self-reports of the difficulty level of materials are an example of subjective-direct measurement. Analyzing performance outcomes is an example of objective-indirect measurement. And neuroimaging techniques (e.g., MRI), physiological techniques (e.g., pupillary response), and dual-task analysis are examples of objective-direct measurement (Brunken, Plass, & Leutner, 2003). Human-computer interaction (HCI), as a multidisciplinary and multifaceted area, is strongly influenced by technological, organizational, and socioeconomic factors (Bullinger, Ziegler, & Bauer, 2002). Preferential selection refers to choices that we make about what is, and what is not, attention-worthy (Calvert, Watson, Brinkley, & Bordeaux, 1989). The purpose of their study was to examine the effects of presentational features on children's preferential selection and memory for information presented in an oral story format and depicted in a computer microworld. As expected, preschoolers preferentially selected and recalled words that had been presented with moderate levels of action better than words that had been presented with no action (Calvert, Watson, Brinkley, & Bordeaux, 1989). Action was both inherently interesting to children, as demonstrated by their preferential selection scores, and memorable to children, as demonstrated by their free recall scores. Objects presented without sounds were better recalled than objects presented with sounds. Sex differences in children's preferential selection scores suggested that action is more inherently interesting to boys than to girls (Calvert, Watson, Brinkley, & Bordeaux, 1989). How easily users, or learners in the case of educational technology, become disoriented in a computerized text may be a function of the user interface (Chalmers, 2003). One area where disorientation can be a problem is in the use of links. Links enable users to expand their knowledge to include thousands of related topics. Although links create the advantage of exploration, there is always the chance that the explorer may get lost, not knowing where they were, where they are going, or where they are (Chalmers, 2003).
Learning theories have traditionally been applied to venues of instruction such as textbook instruction, classroom instruction, and one-on-one tutoring. However, it cannot be assumed that learning theories applied to these venues can automatically be applied to learning with computers (Chalmers, 2003). Schemas are generally thought of as ways of viewing the world and, in a more specific sense, ways of incorporating instruction into our cognition. According to Chalmers (2003), Satzinger (1998) described schema theory as including knowledge structures that represent concepts in human memory, including procedural knowledge of how to use the concepts (Chalmers, 2003). Piaget proposed that learning is the result of forming new schemas and building upon previous schemas (Chalmers, 2003). Scaffolding is a term used to describe the process of forming and building upon a schema. Interface scaffolding refers to schema support for computer-assisted learning. A key component of one kind of interface scaffolding is that it can be made fadeable; that is, interface scaffolding can be faded in or out as needed. This fading can be a function of the learner or the computer. In learner-induced fading, learners decide whether or not to show the scaffold. The trouble with this idea is that learners may not make good decisions about which scaffolding to show and which to hide. In computer-induced fading, the computer decides whether or not to fade the scaffolding, based on a model of the learner's understanding. The main problem with this approach is that an extensive model of the learner's knowledge may be hard to specify or evaluate in more open-ended domains (Chalmers, 2003). In addition to schemas, another closely related cognitive learning theory is that of cognitive load. Cognitive load is a term used to describe the amount of information processing expected of the learner. Intuitively, it makes sense that the less cognitive load a learner has to carry, the easier learning should be. In fact, researchers have proposed that working memory limitations can have an adverse effect on learning (Sweller, 1993; Sweller & Chandler, 1994; Yeung, 1999; as cited in Chalmers, 2003). In addition to learning, students also need to retain information if they are to use their knowledge beyond the learning situation. Retention refers to the amount of knowledge which can be remembered after a given amount of time (Chalmers, 2003). Retention can be subdivided into two types, depending on the amount of time which has elapsed between the point of learning and the point of recall. These subdivisions are called short-term retention (i.e., in working memory) and long-term retention (i.e., in long-term memory; Chalmers, 2003). Short-term retention is assessed during or immediately after the material has been presented. Long-term retention is assessed at least one week after the material has been presented (Chalmers, 2003). To enhance retention, a number of techniques have been suggested. One of these techniques is chunking, that is, grouping multiple pieces of information into chunks (Chalmers, 2003). Good screen design leads to completing lessons in less time and with a higher completion rate (Chalmers, 2003). Outline organizers may be presented in the form of an agenda before a tutorial or lecture (Chalmers, 2003). Post organizers are used to help learners summarize information. They can appear in the form of a summary at the end of a chapter or lecture (Chalmers, 2003).
Graphic organizers are organizers of information in a graphic format. Graphic organizers can be described as spatial displays of text information that can be provided to students as study aids that accompany text (Chalmers, 2003). A continuous organizer is an organizer that is continuously updated and context sensitive (Chalmers, 2003). Concept maps contain nodes and linkages to identify interrelationships between pieces of knowledge; the learner generates the concept maps (Chalmers, 2003). Through hypertext, using associative links and taking advantage of the structure of the information, learners are encouraged to explore and find the information they need, then progress to other learning activities (Chou & Lin, 1998). After initial information needs have been satisfied, the next stage is knowledge acquisition: integrating new knowledge with existing knowledge (Chou & Lin, 1998). One hundred twenty-one students (98 males and 23 females) from two mid-sized universities in northern Taiwan, enrolled in a required freshman Introduction to Information Technology course, participated in an experiment using a hypertext learning course that required a search task and offered three map types: a global map, a local map, and a local tracking map. The global map showed the entire hierarchical structure, listing the names of the 94 nodes contained in the course. The names represented the concepts taught within the nodes, and a tree-like overview map provided the conceptual structure of the information. Using the map, learners could find where they were, where they had visited, and where they had not visited (Chou & Lin, 1998). The local maps could be described as parts of the global map showing particular knowledge areas in the course. They focused only on neighborhoods of activated nodes, that is, one level above and two levels below the current node (or concept). Users were always in the current map, but did not know exactly where they were in the overall courseware. Local maps were updated once a user moved to a node outside the current local map (Chou & Lin, 1998). The local tracking map was similar to the local map, but always showed the activated node in the center of the map in a "You-are-here" fashion. The local tracking maps were updated whenever the users moved to other nodes (Chou & Lin, 1998). The experiment included five treatments: one treatment for each of the three map types, one no-map group, and one group that received all three maps (Chou & Lin, 1998). Map type caused significant effects on the subjects' search steps, search efficiency, and development of cognitive maps. Subjects in the global map and all-maps groups took fewer steps (jumping from one node to another using either the global map or hot keys) than subjects in the other three groups. The all-maps group used the global map 84% of the time they used a map; therefore, it was concluded that the global map helped learners find particular information in fewer steps (Chou & Lin, 1998). The search steps for the local map and local tracking map groups were similar to those of the no-map group, suggesting that the limited scope of the local maps gave them little advantage over having no map (Chou & Lin, 1998). The search efficiency of the global and all-maps groups was better than that of the other three groups (Chou & Lin, 1998).
Given no time limit, the map type, including having no map, did not affect search-task completion, suggesting that map usage was beneficial not to task completion but to task efficiency and speed (Chou & Lin, 1998). The problems of cognitive overhead and disorientation are interrelated. Cognitive overhead is the additional mental effort learners must make in order to choose which links to follow and which to abandon from a large number of options. Not knowing one's location and having to make continual decisions can be distracting and complicate the learning journey in a hypertext environment. These two problems can become even more serious if the hypertext system has a large number of nodes and links (Chou, Lin, & Sun, 2000). Sixty-four students (51 males and 13 females) from a mid-sized university in northern Taiwan participated in a study using a hypertext course for a search task, with next and previous buttons for moving forward or backward a screen at a time, and hot keys for jumping to specific screens. Two types of maps were provided: global maps and local maps. There were three randomly assigned groups: global map, local map, and no map. Participants were assessed on completion of the search task and on their ability to create concept maps of the relationships among the course nodes, that is, concepts (Chou, Lin, & Sun, 2000). Map type significantly affected subjects' search steps, re-visitation ratio, hyper-jumping, and cognitive map development. Subjects in the global map group took fewer steps (jumping from one node to another using either the global map or the hot keys) than subjects in the other two groups (Chou, Lin, & Sun, 2000). The search steps for the local map group were not significantly different from those of the no-map group (Chou, Lin, & Sun, 2000). Subjects in the global map group had a lower re-visitation ratio and lower hyper-jumping scores than those in the other two groups (Chou, Lin, & Sun, 2000). The no-map group had the highest mean score for cognitive map development, followed by the global map group, then the local map group (Chou, Lin, & Sun, 2000). The difference between the no-map and the global map groups was not significant, but the difference between both of these groups and the local map group was significant (Chou, Lin, & Sun, 2000).
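To make the map structures used in these two studies concrete, the following is a minimal sketch; the course hierarchy, node names, and function names are hypothetical and do not come from Chou and Lin (1998) or Chou, Lin, and Sun (2000).

```python
# Minimal sketch of the global/local map idea from the hypertext map studies.
# A course is modeled as a tree of concept nodes; the data are hypothetical.

course = {
    "Intro to IT": ["Hardware", "Software", "Networks"],
    "Hardware": ["CPU", "Memory"],
    "Software": ["Operating Systems", "Applications"],
    "Networks": ["LAN", "Internet"],
}

def global_map(tree):
    """The global map lists every node in the hierarchy."""
    nodes = set(tree)
    for children in tree.values():
        nodes.update(children)
    return nodes

def parent_of(tree, node):
    """Return the parent of a node, or None for the root."""
    for parent, children in tree.items():
        if node in children:
            return parent
    return None

def local_map(tree, current):
    """A local map shows only the neighborhood of the activated node:
    one level above and two levels below the current node."""
    nodes = {current}
    parent = parent_of(tree, current)
    if parent:
        nodes.add(parent)                        # one level above
    for child in tree.get(current, []):          # one level below
        nodes.add(child)
        nodes.update(tree.get(child, []))        # two levels below
    return nodes

print(sorted(global_map(course)))
print(sorted(local_map(course, "Hardware")))
# ['CPU', 'Hardware', 'Intro to IT', 'Memory']
```

A local tracking map would use the same neighborhood but re-center the display on the current node after every move, consistent with the "You-are-here" description above.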
Once we are committed to a goal, we must make a plan to achieve it. A key element of all goal-directed planning is our personal assessment of the skills and knowledge required to achieve the goal. A key aspect of self-efficacy assessment is our perception of how novel and difficult the goal is to achieve. The ongoing result of this analysis is hypothesized to determine how much effort we invest in the goal (Clark, 1999). It is presumed that if we perceive a task as very difficult, that perception reflects an analysis of our own task-relevant skills. The usual solution to such perceptions is to attempt to increase self-efficacy or self-regulation perceptions and thereby reduce the perception that tasks are difficult. The problem with this strategy is that when people lack knowledge or skills, no increase in their self-efficacy alone, apart from a concurrent increase in knowledge and skills, will increase performance (Clark, 1999). Effort is primarily influenced by specific and detailed self-efficacy assessments of the knowledge required to achieve tasks (Clark, 1999). Automated expertise, developed over many hundreds of hours of practice, requires no cognitive effort to express (Clark, 1999). Effort diminishes at either exceptionally low or exceptionally high self-efficacy levels, and the relationship between self-efficacy and effort follows the shape of an inverted "U" (Clark, 1999). According to Clark (1999), Sweller (1988, 1994) has considerable evidence that when the "cognitive load" of a task exceeds the capacity of working memory, effort ceases. Clark further commented that Paas and Van Merrienboer (1993) have provided evidence that excessive cognitive load reduces both mental effort and performance (Clark, 1999). The more novel the goal is perceived to be, the more effort will be invested, until we believe that we might fail. At the point where failure expectations begin, effort is reduced as we "unchoose" the goal to avoid a loss of control. This inverted-U relationship suggests that effort problems take two broad forms: overconfidence and underconfidence (Clark, 1999). The level of mental effort necessary to achieve work goals can be influenced by adjusting perceptions of goal novelty and the effectiveness of the strategies people use to achieve goals (Clark, 1999). Motivation generates the mental effort that drives us to apply our knowledge and skills. Without motivation, even the most capable person will not work hard (Clark, 2003). Motivation is the result of our beliefs about what makes us successful and effective (Clark, 2003). Easy goals are not motivating (Clark, 2003). People's beliefs about whether they have the skills required to succeed at a task are perhaps the most important factor in the quality and quantity of mental effort people invest in their work (Clark, 2003). People will more easily and quickly choose to do what interests them (Clark, 2003). The term metacognition continues to be used in two distinct ways: the conscious and purposeful reflection on various aspects of knowing and learning, and the unconscious regulation of knowledge structures and learning that some information-processing theorists posit to be under the control of executive processes (Clements & Nastasi, 1999). The basic contention of achievement goal theory is that, depending on their subjective purposes, achievement goals differentially influence school achievement via variations in the quality of cognitive self-regulation processes (Covington, 2000). Cognitive self-regulation refers to students being actively engaged in their own learning, including analyzing the demands of school assignments, planning for and mobilizing their resources to meet these demands, and monitoring their progress toward completion of assignments (Covington, 2000). There are several cognitive and other factors that may be important in using VEs. These include individual differences that may affect efficient use of VEs, the effectiveness of passive exploration of a VE as opposed to active exploration, the kinds of features or cues within a VE that facilitate tracking one's position during movement through the VE (i.e., navigation), and adverse sensory factors associated with immersion (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). The term navigation refers to a process of tracking one's position in a physical environment to arrive at a desired destination. A route through the environment consists of either a series of locations or a continuous movement along a path (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000).
Navigation becomes problematic when the whole path cannot be viewed at once but is largely occluded by objects in the environment. These can include walls or large environmental objects such as trees, hills, or buildings. Under these conditions, one cannot simply plot a direct visual course from the start to the finish location. Rather, knowledge of the layout of the space is required. Maps or other descriptive information may provide this knowledge (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). Effective navigation of a familiar environment depends upon a number of cognitive factors. These include working memory for recent information; attention to important cues for location, bearing, and motion; and, finally, a cognitive representation of the environment which becomes part of long-term memory, a cognitive map (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). This representation is what permits the local perceptual cues to be of use in tracking or maintaining a sense of "knowing where one is." The representation also permits the generation of expectancies for encountering future landmarks (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). A cognitive map makes some aspects or attributes of the world explicit, while permitting other aspects to be computed or approximated as needed (i.e., they are implicit; Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). The interaction of cognitive style and environment type indicates that visual-spatial ability is an important predictor of navigation performance in the absence of flow-field information, but when this information is provided, the cognitive style groups perform similarly (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). In a series of five maze experiments, Cutmore, Hine, Maberly, Langford, and Hawgood (2000) found the following. A simple VE that presented the human with nothing more than a series of bare frames, each providing a view of a VE maze room, supported the acquisition of spatial knowledge about the VE. Compass headings did not seem to help exit-finding tasks. Also, while landmarks provided useful cues, males utilized them significantly more often than females (Cutmore, Hine, Maberly, Langford, & Hawgood, 2000). In the field of psychology, there has been a demise of the behaviorist view in favor of the cognitive view of learning. A behaviorist view of learning emphasizes teaching strategies that involve repetitive conditioning of learner responses. A cognitive view places importance on the learners' cognitive activity and the mental models they form (Dalgarno, 2001). There has also been a gradual rejection of the assumption, held by many cognitivists, that there is some objectively correct knowledge representation. The alternative view, termed constructivist, is that, within a domain of knowledge, there may be a number of individually constructed knowledge representations that are equally valid. The focus of teaching then becomes one of guiding learners as they build on and modify their existing mental models, that is, a focus on knowledge construction rather than knowledge transmission (Dalgarno, 2001). Typically, a simulation is defined as a model of a real-world environment (Dalgarno, 2001), while a microworld is defined as a model of a concept space, which may be a very simplified version of a real-world environment, or it may be a completely abstract environment (Dalgarno, 2001).
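Returning to the cognitive-map idea above, the explicit/implicit distinction can be illustrated with a minimal sketch; the landmark data and function names are hypothetical and are not drawn from Cutmore et al. (2000).

```python
import math

# A toy "cognitive map": landmark positions are stored explicitly,
# while relations such as distance and bearing are computed on demand
# (i.e., they remain implicit until needed). The data are hypothetical.
landmarks = {
    "entrance": (0.0, 0.0),
    "fountain": (3.0, 4.0),
    "exit":     (6.0, 1.0),
}

def distance(a, b):
    """Implicit attribute: straight-line distance, computed as needed."""
    (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
    return math.hypot(x2 - x1, y2 - y1)

def bearing(a, b):
    """Implicit attribute: compass-style bearing from a to b (degrees,
    measured from the +y axis), also computed only when required."""
    (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

print(round(distance("entrance", "fountain"), 1))  # 5.0
print(round(bearing("entrance", "exit"), 1))       # 80.5
```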
The existence of bookmarks is important not so much to avoid disorientation problems as to enable recovery from an eventual episode of disorientation. The bookmark mechanism allows the user to mark a node in the hypermedia document so that he or she can reach that node at any time during the navigation process and from any point in the hypermedia document (Dias, Gomes, & Correia, 1999). History lists can also be used (Dias, Gomes, & Correia, 1999). With one eye on the future, many educators and literary scholars are predicting nothing less than a paradigm shift in the manner in which we understand the learning experience and the education process, as a result of hypermedia technologies in general and the World Wide Web in particular (Dillon & Gabbard, 1998). Previous researchers have buried navigational elements within their studies of the effects of learner control and interactivity on achievement and attitude. Consequently, the results of the many studies are conflicting. Within the definition of learner control there also exist elements of interactivity and navigation. These terms are not synonymous, although the research sometimes treats them as such. Interactivity implies a relationship between the learner and the instructional module, with varying degrees of engagement. Navigation is a function of interactivity along with feedback, pacing, inquiry, and elaboration. The presence of interactivity creates an opportunity for navigation (Farrell & Moore, 2000/2001). Cognitive load is "the total amount of mental activity imposed on working memory at an instance in time" (Cooper, 1998). The major factor that contributes to cognitive load is the number of elements that need to be attended to at any one time during learning (Cooper, 1998). Message complexity, stimulus features, and additional cognitive demands inherent in hypermedia may combine to exceed the cognitive resources of some learners (Daniels & Moore, 2000). Giving learners control and autonomy over an environment can either facilitate learning or lead to disorientation and confusion (Dias, Gomes, & Correia, 1999). Learner control, defined generically, is "student choice of practice items, reviewing and feedback" (Niemiec, Sikorski, & Walberg, 1996, p. 158); specific to computers, it is "giving learners control over elements of a computer-assisted instructional program" (Hannafin & Sullivan, 1995, p. 19). Navigation refers to the paths learners choose through information to accomplish various cognitive and learning tasks, and is a relational property among parts of a system (individual, task, hypermedia, learning context) (Barab, Bowdish, & Lawless, 1997); navigation "determines the amount and quality of information retrieved from a hypermedia source" (Farrell & Moore, 2000, p. 170). Cognition refers to the intellectual processes through which information is obtained, represented mentally, transformed, stored, retrieved, and used. Learning is "the product of the interaction among what learners already know, the information they encounter, and what they do as they learn" (Bruning, Shraw, & Ronning, 1999, p. 6). Hypermedia refers to "environments in which the information representation and management system is developed around a network of multimedia nodes connected by various links" (Barab, Bowdish, & Lawless, 1997, p. 23); more generically, it is a term covering hypertext, multimedia, and related applications involving the chunking of information into nodes that can be selected dynamically (McKnight, Dillon, & Richardson, 1991).
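To ground these definitions, here is a minimal sketch of a hypermedia document as a network of nodes connected by links, with a bookmark mechanism and a history list of the kind Dias, Gomes, and Correia (1999) describe; the class design and node names are hypothetical.

```python
# Minimal sketch of a hypermedia document: nodes connected by links,
# with a bookmark mechanism and a history list that support recovery
# from disorientation. Class design and node names are hypothetical.

class HypermediaDocument:
    def __init__(self, links, start):
        self.links = links          # node -> list of linked nodes
        self.current = start
        self.history = [start]      # history list of visited nodes
        self.bookmarks = set()      # nodes the user has marked

    def follow_link(self, target):
        """Traverse a link from the current node, recording the visit."""
        if target not in self.links.get(self.current, []):
            raise ValueError(f"No link from {self.current} to {target}")
        self.current = target
        self.history.append(target)

    def bookmark(self):
        """Mark the current node so it can be reached again at any time."""
        self.bookmarks.add(self.current)

    def go_to_bookmark(self, node):
        """Jump directly to a bookmarked node from any point."""
        if node in self.bookmarks:
            self.current = node
            self.history.append(node)

doc = HypermediaDocument(
    links={"home": ["lesson1", "lesson2"], "lesson1": ["quiz"], "lesson2": []},
    start="home",
)
doc.follow_link("lesson1")
doc.bookmark()
doc.follow_link("quiz")
doc.go_to_bookmark("lesson1")
print(doc.history)  # ['home', 'lesson1', 'quiz', 'lesson1']
```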
The evidence suggests that instructional methods and information content, not media, improve learning (Clark, 2001). Instructional methods are external representations of internal cognitive processes that are necessary for learning but which learners cannot or will not provide for themselves (Clark, 2001). Examples of instructional methods include providing learning goals; using examples (e.g., demonstrations, simulations, and analogies); monitoring (in the form of practice exercises and tests); feedback (synchronous or asynchronous); and selection (highlighting important information). Novice and lower-aptitude students have the greatest difficulty with hypermedia (Dillon & Gabbard, 1998). Learner control does not appear to offer special benefits for particular learners or under specific conditions (Niemiec, Sikorski, & Walberg, 1996). A more positive attitude can indicate less learning (Salomon, 1984). There is recent evidence that some collaborative group work results in learning losses for some participants (Stipek, 2003). Media, including computer-based instruction and distance learning, do not increase learning (Bernard et al., AECT 2003). Six extensive meta-analyses of distance and media learning studies in the past decade have found the same negative or weak results (see Bernard et al., 2003). Recommended interface practices include the use of navigation maps (Chou & Lin, 1998; Ruddle et al., 1999; Chou, Lin, & Sun, 2000); the use of menus (Benbasat & Todd, 1993; Farrell & Moore, 2000-2001); attention to the grouping, location, color, concreteness, complexity, and distinctiveness of icons (McDougall et al., 2000; Niemela & Saarinen, 2000; Zammit, 2000); and the use of simple controls in support of theory. Interactivity improved learner understanding only when it was used in a way that minimized cognitive load and allowed for two-stage construction of a mental model (Mayer & Chandler, 2001). Part-whole is better than whole-part for transfer, and part-part and part-whole are better than whole-whole for transfer. Intelligence is not a function of how hard the brain works, but rather of how efficiently it works; this efficiency may derive from a more focused use of brain areas relevant for good task performance (Gerlic & Jausovec, 1999). Gerlic and Jausovec (1999) conducted EEG studies of brain activity during multimedia performance. The results of the study showed a clear difference between multimedia presentations and text presentations. The video and picture presentations increased activity of the occipital and temporal lobes, while the text presentation increased activity of the frontal lobes. Findings from this study support prior medical findings that one of the basic functions of the temporal cortex is the processing of auditory input, while the exclusive function of the occipital lobes is vision. It is also believed that the occipital cortex is involved in imagery. The prefrontal cortex appears to be involved in controlling and monitoring our thoughts and actions, and the frontal lobes control working memory (Gerlic & Jausovec, 1999). These findings suggest that it is reasonable to believe that multimedia presentations trigger visualization strategies such as mental imagery, which is critical to many kinds of problem solving and discovery (Rieber, 1995, as cited in Gerlic & Jausovec, 1999). Another explanation for the reported differences could be that video and picture presentations increased occipital activity because they included visual material, whereas the text presentation had no such material.
However, the authors cite a number of reasons that this is a less plausible explanation of the differences (Gerlic & Jausovec, 1999). The EEG study also showed that gifted individuals exhibited lower mental activity when involved in learning the material. These differences were more pronounced for the video presentation than for the text presentation, and could indicate a tendency for multimedia presentations to be less effective for gifted students. However, one must bear in mind that the tasks used in the study were rather simple, involving only knowledge about facts. It is questionable whether a similar trend would have been obtained for more complex information to be learned (Gerlic & Jausovec, 1999). Working memory refers to the limited capacity for holding information in mind for several seconds in the context of cognitive activity (Gevins, Smith, Leong, McEvoy, Whitfield, Du, & Rush, 1998). Overload of working memory has long been recognized as an important source of performance errors during human-computer interaction and is particularly acute in unskilled users, for whom unfamiliar procedures are likely to require greater commitment of cognitive resources. Furthermore, overload of working memory capacity has been found to be a limiting factor in the early stages of procedural skill acquisition. As a result, the need to minimize working memory load has been cited as a primary guiding principle for the design of intelligent tutoring systems (Gevins et al., 1998). "As shared symbol systems, media are potent cultural tools for the selective sculpting of profiles of cognitive processes" (Greenfield, Brannon, & Lohr, 1994, p. 87). A medium is not simply an information channel; as a particular mode of representation, it is also a potential influence on information processing (Greenfield, Brannon, & Lohr, 1994, p. 88). Each medium has its particular design features, such that it presents certain kinds of information easily and well and other kinds poorly and with difficulty. Each medium, therefore, presents certain opportunities to construct particular kinds of representations. As a consequence, each medium stimulates different kinds of representational processes; it provides a particular kind of cognitive socialization (Greenfield, Brannon, & Lohr, 1994). From the point of view of development and socialization, video games are particularly important because they affect children during the formative years of childhood, when socialization is taking place (Greenfield, Brannon, & Lohr, 1994). Video games go beyond print and photography in their presentation of two-dimensional representations of three-dimensional space. The consumer must be able to interpret not only static two-dimensional images as three-dimensional space, but dynamic images as well. Additionally, the user must not only interpret, but also mentally transform, manipulate, and relate, dynamic and changing images (Greenfield, Brannon, & Lohr, 1994). It is the transfer of this skill to spatial contexts outside the game that is the focus of the present research. The question is: Can video game practice develop transferable skills in manipulating three-dimensional spatial representations (Greenfield, Brannon, & Lohr, 1994)?
Although we were not able to demonstrate the predicted experimental effect of shortterm practice of a game on mental paper folding, we were able to show a causal relationship of expertise gained over long-term and mental paper folding (Greenfield, Brannon, & Lohr, 1994). Effective use of mnemonic strategies has been characterized as developing through three stages. During the first stage, children are not capable of utilizing the strategy effectively. This difficulty is referred to as a mediational deficiency. During the second stage, children still do not use the strategy spontaneously; however, they are now capable of using the strategy effectively if specifically instructed to do so. This failure of children to spontaneously utilize a strategy which they are actually capable of using is referred to as a production deficiency. The final stage involves mature use of the strategy, by which time children produce the strategy spontaneously while performing strategy-appropriate tasks (Guttentag, 1984). In a study of second graders, it was found that the mental effort requirement of instructed cumulative rehearsal was significantly greater for production deficient children than for children who normally utilized a cumulative rehearsal strategy spontaneously. One possible explanation for this finding is that the decrease with age in the mental effort required of strategy use resulted form an increase with age in spontaneous use of the strategy. That is, because practice generally decreases the mental effort requirement of task performance, the children who use a cumulative strategy spontaneously may simply have been more highly practiced at using the strategy than were the children classified as production deficient (Guttentag, 1984). Alternatively, the mental effort requirement of strategy use may be one factor affecting children’s strategy selection. That is, there may be a tendency for children to avoid using strategies which require a very large expenditure of mental effort on their part (Guttentag, 1984). Haggas and Hantula (2002) conducted a study with university students to determine the effects of covert and overt computer responses to performance. An example of a covert format would be “THINK of the correct answer. When you have though of the correct answer, click the READY button to see if you were right.” Clicking the READY button brought up feedback: “The correct answer is [ ].” In the overt format, the display was “CLICK on the correct WAINESS PHD QUALIFYING EXAM 89 answer.” Clicking on the answer brought up differential feedback. A correct answer called up feedback in black text with an orange box, such as, “Answer choice [ ] is CORRECT!!”. An incorrect answer called up feedback in black text with a light blue box, for example, “Incorrect. The correct answer is [ ]” (Haggas & Hantula, 2002). The majority of participants showed preference for the overt format, and the difference between time taken to complete covert and overt questions was not significant. However, a negative relationship was found between the time taken to complete the program and the number of overt questions answered quickly (Haggas & Hantula, 2002). The seductive detail effect is the reduction of retention caused by the inclusion of extraneous details (Harp & Mayer, 1998). Seductive details are details that not part of the to-belearned material but tend to enhance the presentation of the material. Many models of learning (e.g. 
the CANE model: Clark, 1999) include the executive processes of selecting, organizing, and integrating. Selecting involves paying attention to the relevant pieces of information in the text (Harp & Mayer, 1998). Organizing involves building internal connections among the selected pieces of information, such as causal chains (Harp & Mayer, 1998). Integrating involves building external connections between the incoming information and prior knowledge existing in the learner's long-term memory (Harp & Mayer, 1998). It is believed that seductive details interfere with some or all of these three metacognitive processes (Harp & Mayer, 1998). According to the distraction hypothesis, seductive details do their damage by "seducing" the reader's selective attention away from the important information. A possible solution is to leave the details, but guide the learner away from them and to the relevant information (Harp & Mayer, 1998). According to the disruption hypothesis, seductive details are damaging because they interrupt the transition from one main idea to the next. In order for the reader to be able to construct a coherent mental model of the chain of events leading to the formation of lightning, links between the steps in the causal chain must be constructed in working memory. Because seductive details are presented between the steps of the causal chain, the reader is not able to see how to link the steps. As a result, the learner interprets each step as a separate, independent event, rather than as part of a causal chain. A way to solve the problem and keep the seductive details is to provide support that helps the reader to more effectively organize the important main ideas. For example, rewriting a passage by using organizational signals, such as preview sentences and number signals, in a passage about a process should help the reader to realize the steps explained in the passage are related to one another (Harp & Mayer, 1998). According to the diversion hypothesis, the learner builds a representation of the text organized around the seductive details, rather than around the important main ideas contained in the lesson. In this case, seductive details prime the activation of inappropriate prior knowledge as the organizing schema for the lesson. If the diversion hypothesis is correct, then revising a lesson by presenting all of the irrelevant information at the beginning of the lesson would exacerbate the seductive details effect. Conversely, revising the passage by moving the seductive details to the end of the lesson would reduce the seductive details effect, because the details would come after the important information and, therefore, too late to become the central component of the schema (Harp & Mayer, 1998). In their experiment on seductive details, Harp and Mayer (1998) found that the diversion hypothesis is the most likely explanation for the effect. Therefore, one way to discourage inappropriate schema activation is to delay the introduction of seductive information until after the reader has processed the important material. Another way, of course, is to simply not introduce seductive details at all (Harp & Mayer, 1998). (A small illustrative sketch of the reordering idea follows this passage.)

Many educators believe that young children do not have the cognitive capacity to interact with and make sense of the symbolic representations of computer environments.
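The sketch below is illustrative only: it shows how lesson segments tagged as seductive details could be moved to the end of a lesson, in line with the diversion hypothesis discussed above (Harp & Mayer, 1998). The segment structure and content are hypothetical, not materials from the study.

    # Hypothetical sketch: move seductive-detail segments after the core causal chain.
    def reorder_segments(segments):
        """Place segments tagged as seductive details at the end of the lesson."""
        core = [s for s in segments if not s["seductive"]]
        extras = [s for s in segments if s["seductive"]]
        return core + extras        # details arrive too late to anchor the organizing schema

    lesson = [
        {"text": "Warm, moist air rises and cools.", "seductive": False},
        {"text": "A golfer was once struck by lightning twice!", "seductive": True},
        {"text": "Charge separation builds an electric field.", "seductive": False},
    ]
    for segment in reorder_segments(lesson):
        print(segment["text"])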
Early childhood educators believe that young children learn best by investigating with their senses, by examining that which is tactile and tangible (Howland, Laffey, & Espinosa, 1997). Simply because use of computers may be categorized as a concrete activity, we cannot assume this means that children’s involvement with computers necessarily results in high quality learning (Howland, Laffey, & Espinosa, 1997). The experiential mode is reactive and automatic, resulting in a response without conscious thought. Because the relevant information needed for decision-making already exists in our memory, our actions can be driven by the events as they occur. Computer games and drill and practice computer lessons result in this type of cognition (Howland, Laffey, & Espinosa, 1997). The reflective mode, on the other hand, is reasoned and conceptual, allowing the thinker to consider various alternatives. This type of explorative and discover orientation is at the heart of the developmentally appropriate practices we hope will take place in primary education (Howland, Laffey, & Espinosa, 1997). During “event-driven” computer games, one engages in the experiential mode by immersion in the recurring challenges and events. Although experiential learning can be a good motivator, the act of experiencing can easily become the sole outcome, with little or no actual thinking, connecting to other concepts, or generating new ideas (Howland, Laffey, & Espinosa, 1997). The challenge of computer-using primary educators is to find and use computational environments that meet the requirements of presenting “meaningful and manipulable” developmentally appropriate activities which do not simply rely on experiential cognition which may defeat the educational purpose of the activity (Howland, Laffey, & Espinosa, 1997). Intensity and direction are two variables of motivation which can be influenced by internal characteristics of the task or by extrinsic outcomes, such as rewards or praise (Howland, Laffey, & Espinosa, 1997). While computer games can provide fantasy, and while fantasy has been defined as a motivating factor (Malone, 1980), different children are drawn to different fantasies. While one child may be intrigued by the pretend notion of being a deep sea diver, another child might find that particular fantasy irrelevant to his own fantasy preferences (Howland, Laffey, & Espinosa, 1997). Curiosity also plays a role in motivation. An environment which is too simple to a child will fail to spark curiosity just as surely as one which proves too difficult (Howland, Laffey, & Espinosa, 1997). This point is similar to Vygotsky’s (1978) zone of proximal development (Howland, Laffey, & Espinosa, 1997). An optimal computer environment for learning might be one that matches the motivating factors of fantasy and curiosity with the child’s motivation toward mastery and competence (Howland, Laffey, & Espinosa, 1997). It is incorrect to consider language as correlative of thought; language is a crrelative of unconsciousness. The mode of language correlative to consciousness is meanings. The work of WAINESS PHD QUALIFYING EXAM 91 consciousness with meanings leads to the generation of sense, and in the process consciousness acquires a sensible (meaningful) structure (Hudson, 1998). The interface of a contemporary CBI program frequently can be likened to a control panel from which users access information in an oftentimes sophisticated and complicated piece of software (Jones, Farquhar, & Surry, 1995). 
One theoretical construct in the field of cognitive psychology is the notion of cognitive strategies. Cognitive strategies include rehearsal strategies, elaboration strategies, organization strategies, affective strategies, and comprehension monitoring strategies. These strategies are cognitive events that describe the way in which one is processing information (Jones, Farquhar, & Surry, 1995). Metacognition is a type of cognitive strategy that has executive control over other cognitive strategies. In the context of learning through a computer-based learning environment, metacognition refers to the activities of a user when monitoring, regulating, and orchestrating learning processes (Jones, Farquhar, & Surry, 1995). Strategy selection, attention, goal setting, and goal checking are four individual strategies within metacognition. These categories can be grouped into two major categories: (1) control processes and (2) monitoring processes (Jones, Farquhar, & Surry, 1995). Control processes: As the executive controller of cognitive processes, metacognition selects the appropriate strategy for the task at hand. The selection of a cognitive strategy depends upon the individual's understanding of the current problem or cognitive situation. Personal experiences in solving similar tasks and using various strategies will affect the selection of a cognitive strategy (Jones, Farquhar, & Surry, 1995). Control processes: To aid in the learner's attention to the content, an individual can also choose to attend to particular cognitive strategies. This strategy, attention, is important in follow-through, completing, and correctly performing the steps of subordinate cognitive strategies (Jones, Farquhar, & Surry, 1995). Monitoring processes: Cognitive processes such as learning and problem solving begin with the identification of a goal. In learning, this might be an understanding of a particular topic. In problem solving, the goal would be to find a solution (or the best solution) to the problem (Jones, Farquhar, & Surry, 1995). Goal setting guides the cognitive strategies in a certain direction. Goal checking refers to the monitoring processes that check to see if the goal has been accomplished, or if the selected strategy is working as expected. The monitoring process is active throughout an activity and constantly evaluates the success of other processes. If a cognitive strategy appears not to be working, an alternative may then be selected (Jones, Farquhar, & Surry, 1995). (A minimal sketch of this control-and-monitoring cycle follows this passage.)

Metacognitive guidance includes many familiar methods, such as advance organizers, graphic representations of problems, and hierarchical knowledge structures. These instructional methods should be used to aid the novice in developing an expert's awareness of the problem space. Teaching the student problem space representational skills may be the most effective way to turn a "poor" novice problem solver into a "good" novice problem solver (Jones, Farquhar, & Surry, 1995). Metacognition is the "management" of thought processes as one learns and solves problems. Learners using CBI are presented with large amounts of information and asked to manage that information to solve a particular problem or learn about a particular topic. In order to assist the management of information, the interface should provide users with relevant data about the program, how to use the program, where they are in the program, and how well they are doing (Jones, Farquhar, & Surry, 1995). Not every program needs a metaphor.
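A minimal sketch, not taken from Jones, Farquhar, and Surry (1995), of the control and monitoring processes described above: select a strategy, follow it through, check the goal, and switch strategies when monitoring indicates the current one is not working. The functions and task are hypothetical.

    # Hypothetical sketch: metacognitive control (strategy selection, attention)
    # plus monitoring (goal checking, switching to an alternative strategy).
    def solve_with_metacognition(goal_reached, strategies, max_attempts=3):
        for strategy in strategies:              # control: strategy selection
            for _ in range(max_attempts):        # control: attention / follow-through
                result = strategy()
                if goal_reached(result):         # monitoring: goal checking
                    return result
            # monitoring judged this strategy ineffective; try an alternative
        return None

    # Hypothetical usage: trying rehearsal before elaboration on a recall task.
    rehearse = lambda: "partial recall"
    elaborate = lambda: "full recall"
    print(solve_with_metacognition(lambda r: r == "full recall", [rehearse, elaborate]))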
Not all programs can support a metaphor. Study the content carefully and decide what the program is intended to do. Providing users with a theme can be more helpful than a forced or inappropriate metaphor (Jones, Farquhar, & Surry, 1995). If a metaphor can be used, use a metaphor that reflects the program’s content. Users should not have to learn the meaning of a metaphor along with the content of the program (Jones, Farquhar, & Surry, 1995). Provide maps so that users can find where they are, and allow provisions to jump to other information of interest from the map (Jones, Farquhar, & Surry, 1995). Provide visual effects to give users visual feedback that their choices have been made and registered by the program (Jones, Farquhar, & Surry, 1995). Designers should provide users with visual or verbal cues to help them navigate through unfamiliar territory. Overviews, menus, icons, or other interface design elements within the program should serve as advance organizers for information contained in the program (Jones, Farquhar, & Surry, 1995). Provide cues such as maps and menus as advance organizers to help users conceptualize the organization of the information in the program (Jones, Farquhar, & Surry, 1995). Metacognition, or the management of cognitive processes, involves goal-setting, strategy selection, attention, and goal checking (Jones, Farquhar, & Surry, 1995). The user interface can indicate the content covered in the program through the user of advance organizers such as menus. Placing checkmarks after visiting a section will indicate to the user sections that have been visited. However, it should be noted that visiting a section doesn’t necessarily mean that the learner viewed or engaged in the content (Jones, Farquhar, & Surry, 1995). In this article, we survey evidence that a large number of cognitive load theory (CLT) effects that can be used to recommendation instructional design are, in fact, only applicable to learners with very limited experience (Kalyuga, Ayers, Chandler, & Sweller, 2003). With additional experience, specific experimental effects can first disappear and then reverse. As a consequence, the instructional design recommendations that flow from the experimental effects also reverse (Kalyuga, Ayers, Chandler, & Sweller, 2003). We call the reversal of cognitive load effects with expertise the expertise reversal effect. Like all cognitive load effects, it originates from some of the structures that constitute human cognitive architecture (Kalyuga, Ayers, Chandler, & Sweller, 2003). Working memory limits profoundly influence the character of human information processing (Kalyuga, Ayers, Chandler, & Sweller, 2003). Only a few elements (or chunks) of information can be processes at any time without overloading capacity and decreasing the effectiveness of processing. Conversely, long-term memory contains huge amounts of domainspecific knowledge structures that can be described as hierarchically organized schemas that allow us to categorize different problem states and decide the most appropriate solution moves (Kalyuga, Ayers, Chandler, & Sweller, 2003). Controlled use of schemas requires conscious effort, and therefore, working memory resources. However, after having being sufficiently practiced, schemas can operate under automatic, rather than controlled, processing. 
Automatic processing of schemas requires minimal working memory resources and allows for problem solving to proceed with minimal effort (Kalyuga, Ayers, Chandler, & Sweller, 2003). CLT (see Sweller, 1999, and Sweller, van Merrienboer, & Paas, 1998, for recent reviews) is based on the assumptions that schema construction and automation are the major goals of instruction, but these goals can be thwarted by the limited capacity of working memory. Because of the limited capacity of working memory, the proper allocation of available cognitive resources is essential to learning (Kalyuga, Ayers, Chandler, & Sweller, 2003). Experts possess a larger (and potentially unlimited) number of domain-specific schemas. Hierarchically organized schemas represent experts' knowledge in the domain and allow experts to categorize multiple elements of related information into a single, higher level element. When confronted with a specific configuration of elements, experts are able to recognize the pattern as a familiar schema and treat (and act on) the whole configuration as a single unit. When brought into working memory, a single, high-level element requires considerably less working memory capacity for processing than the many low-level elements it incorporates, thus reducing the burden on working memory. As a consequence, acquired schemas, held in long-term memory, allow experts to avoid processing overwhelming amounts of information and effectively reduce the burden on limited-capacity memory. In addition, as already mentioned, experts are able to bypass working memory capacity limits by having many of their schemas highly automated due to extensive practice (Kalyuga, Ayers, Chandler, & Sweller, 2003). The level of learner experience in a domain primarily influences the extent to which schemas can be brought into working memory to organize current information. Novices lack sophisticated schemas associated with a task or situation at hand. For these inexperienced learners, no guidance for handling a given situation or task is provided by relevant schemas in long-term memory. Instructional guidance can act as a substitute for missing schemas and, if effective, acts as a means of constructing schemas (Kalyuga, Ayers, Chandler, & Sweller, 2003). If the instructional presentation fails to provide necessary guidance, learners will have to resort to problem-solving search strategies that are cognitively inefficient, because they impose a heavy working memory load (Kalyuga, Ayers, Chandler, & Sweller, 2003). Expertise reversal effect: In contrast, experts bring their activated schemas to the process of constructing mental representations of a situation or task. They may not need any additional instructional guidance because their schemas provide full guidance. If, nevertheless, instruction provides information designed to assist learners in constructing appropriate mental representations, and experts are unable to avoid attending to this information, there will be an overlap between the schema-based and the redundant instruction-based components of guidance (Kalyuga, Ayers, Chandler, & Sweller, 2003). Cross-referencing and integration of redundant components will require additional working memory resources and might cause a cognitive overload. This additional cognitive load may be imposed even if a learner recognizes the instructional materials to be redundant and so decides to ignore that information as best as he or she can (Kalyuga, Ayers, Chandler, & Sweller, 2003).
For more experienced learners, rather than risking conflict between schemas and instruction-based guidance, it may be preferable to eliminate the instruction-based guidance (Kalyuga, Ayers, Chandler, & Sweller, 2003). Split attention effect: When dealing with two or more related sources of information (e.g., text and diagrams), it is often necessary to integrate mentally corresponding representations (verbal and pictorial) to construct a relevant schema and achieve understanding. When different sources of information are separated in space or time, this process of information integration may WAINESS PHD QUALIFYING EXAM 94 place an unnecessary strain on limited working memory resources. Intensive search-and-match processes may be involved in cross-referencing the representations. These search-and-match processes may severely interfere with constructing integrated schemas, thus increasing the burden on working memory and hindering learning (Kalyuga, Ayers, Chandler, & Sweller, 2003). Superiority of physically integrated materials that do not require split attention over unintegrated materials that do require split attention and mental integration before they can be understood provides and example of the split-attention effect (Kalyuga, Ayers, Chandler, & Sweller, 2003). Redundancy Effect: Physical integration of two or more sources of information to reduce split attention and cognitive load is important if they sources of information are essential in the sense that they are not intelligible in isolation for a particular learner. Alternatively, if they sources are intelligible in isolation with one source unnecessary, elimination rather than physical integration of the redundant source is preferable (Kalyuga, Ayers, Chandler, & Sweller, 2003). Whether two sources of information are unintelligible in isolation and so require integration or whether one source is redundant and so should be eliminated does not depend just on the nature of the information, it also depends on the expertise of the learner. A source of information that is essential for a novice may be redundant for an expert (Kalyuga, Ayers, Chandler, & Sweller, 2003). Text coherence depends on the learner’s expertise. Text that is minimally coherent for novices may well be fully coherent for experts. Providing additional text is redundant for experts and will have negative rather than positive effects, thus demonstrating the expertise reversal effect (Kalyuga, Ayers, Chandler, & Sweller, 2003). Modality effect: Using a combination of both auditory and visual sources of information is an alternative way of dealing with split attention. According to dual-processing models of memory and information processing, the capacity to process information is distributed over several partly independent subsystems. As a consequence, effective working memory capacity can be increased by presenting some information in an auditory and some in an visual modality (Kalyuga, Ayers, Chandler, & Sweller, 2003). Many studies (Mayer, 1997; Mayer & Moreno, 1998; Mousavi, Low, & Sweller, 1995) have demonstrated that learners can integrate words and diagrams more easily when the worls are presented in auditory form rather than visually, providing an example of the modality effect (Kalyuga, Ayers, Chandler, & Sweller, 2003). However, auditory explanations may also become redundant when presented to more experienced learners. 
Kalyuga, Chandler, and Sweller (2000) demonstrated that if experienced learners attend to auditory explanations, learning might be inhibited (Kalyuga, Ayers, Chandler, & Sweller, 2003). Worked example effect: Worked examples consisting of a problem statement followed by explanations of all solution details represent a case of fully guided instruction. Exploratory learning environments, discovery learning, or problem solving, however, represent a form of less or even relatively unguided instruction. A considerable number of studies, such as Quilici and Mayer (1996), demonstrated that when adults learned laws of mechanics from unstructured simulations (designed as free exploration), the results were significantly worse than those for an example-based, tutorial condition (Kalyuga, Ayers, Chandler, & Sweller, 2003). When solving unfamiliar problems, learners normally use a means-end search strategy directed toward reducing differences between current and goal problem states by using suitable operators. These activities are unrelated to schema construction and automation and are cognitively costly because they impose a heavy working memory load (Sweller, 1988, as cited in Kalyuga, Ayers, Chandler, & Sweller, 2003). Providing worked examples instead of problems eliminates the means-ends search and directs a learner's attention toward a problem state and its associated moves (Kalyuga, Ayers, Chandler, & Sweller, 2003). Of course, worked examples should be appropriately structured to eliminate unnecessary cognitive load due to, for example, split-attention effects (Kalyuga, Ayers, Chandler, & Sweller, 2003). As learners' experience in a domain increases, solving a problem may not require a means-end search and its associated working memory load, due to partially, or even fully, constructed schemas. When a problem can be solved relatively effortlessly, analyzing a redundant worked example and integrating it with previously acquired schemas in working memory may impose a greater cognitive load than problem solving. Under these circumstances, practice in problem solving may result in more effective learning than studying worked examples, because solving problems may adequately facilitate further schema construction and automation (Kalyuga, Ayers, Chandler, & Sweller, 2003). Worked examples are most appropriate when presented to novices, but they should be gradually faded out with increased levels of learner knowledge and replaced by problems (Kalyuga, Ayers, Chandler, & Sweller, 2003; Renkl & Atkinson, 2003). (A small sketch of this fading idea follows this passage.)

Some material imposes an intrinsically high cognitive load because the elements that must be learned interact and so cannot be processed in isolation without compromising understanding. Learners must process many interacting elements of information simultaneously in working memory, where understanding is defined as the ability to process all necessary interacting elements in working memory simultaneously. However, the assessment of element interactivity is always relative to the level of expertise of an intended learner. If the learner holds an appropriate set of previously acquired domain-specific schemas, the whole set of interacting elements may be incorporated into a schema and regarded as a single element. Conversely, a novice learner may need to attend to each of the elements and learn all interactions between elements individually.
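A hedged sketch of the fading idea summarized above (Kalyuga et al., 2003; Renkl & Atkinson, 2003): novices receive a full worked example, intermediate learners receive a partially faded example, and more knowledgeable learners receive a plain problem. The thresholds, scoring scale, and task steps are hypothetical illustrations, not values from the cited studies.

    # Hypothetical sketch: fade worked-example steps as learner knowledge grows.
    def choose_task(knowledge_score, solution_steps):
        """Return the solution steps to show worked out; the rest are left to the learner."""
        if knowledge_score < 0.3:                 # novice: full worked example
            return solution_steps
        if knowledge_score < 0.7:                 # intermediate: fade the later steps
            keep = max(1, len(solution_steps) // 2)
            return solution_steps[:keep]
        return []                                 # more expert: plain problem solving

    steps = ["identify knowns", "select formula", "substitute values", "solve"]
    print(choose_task(0.2, steps))   # all steps shown
    print(choose_task(0.5, steps))   # first half shown
    print(choose_task(0.9, steps))   # no steps shown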
If element interactivity is sufficiently high for the learner, this mental activity will overload the limited capacity of working memory and cause a learning failure (Kalyuga, Ayers, Chandler, & Sweller, 2003). How can novices acquire the schemas necessary to allow the processing of very highelement interactivity material if they cannot process all of the element in working memory simultaneously and if those interacting elements cannot be processed in isolation because they interact? See the Space Fortress dyadic protocol studies (Kalyuga, Ayers, Chandler, & Sweller, 2003). Another method is to initially present the information as isolated elements of information (Kalyuga, Ayers, Chandler, & Sweller, 2003). However, this method may not be beneficial to expert learners (Kalyuga, Ayers, Chandler, & Sweller, 2003). The Imagination Effect: The imagination effect occurs when learners are asked to imagine the content of instruction learn more than learners simply asked to study the material. More knowledgeable students who held appropriate prerequisite schemas found imagining procedures and relations more beneficial for learning compared with studying working examples, where less knowledgeable students found imagining procedures and relations had a negative effect compared with studying worked examples (Kalyuga, Ayers, Chandler, & Sweller, 2003). The process of mental imagining is closely associated with constructing and running mental representations in working memory. Because inexperienced learners have no appropriate schemas to support this process, attempts to engage in imagining are likely to fail (Kalyuga, WAINESS PHD QUALIFYING EXAM 96 Ayers, Chandler, & Sweller, 2003). When asked to study worked examples rather than imagine procedures, novices can construct schemas of interacting elements, an essential first step to learning (Kalyuga, Ayers, Chandler, & Sweller, 2003). Schema is defined as a cognitive construct that permits people to treat multiple subelements of information as a single element, categorized according to the manner in which it will be used (Kalyuga, Chandler, & Sweller, 1998). Schema have a dual function: storing learned information in long-term memory and reducing the burden on working memory by allowing multiple elements of information to be treated as a single element (Kalyuga, Chandler, & Sweller, 1998). Automation allows information to be processed with less working memory resources than if not automated. Schemas are stored in long-term memory with varying degrees of automaticity. A schema can be stored and retrieved from long-term memory either in fully automated form or in a form that requires conscious consideration of each of the elements and their relations (Kalyuga, Chandler, & Sweller, 1998). If a schema can be brought into working memory in automated form, it will make limited demands on working memory resources, leaving more resources available to search for a possible solution problem (Kalyuga, Chandler, & Sweller, 1998). Cognitive load theory, which incorporates this architecture, has been used to design a variety of instructional procedures, based on the assumption that working memory is limited and that skilled performance is driven by automated schemas held in long-term memory (Kalyuga, Chandler, & Sweller, 1998). Split attention effect (Kalyuga, Chandler, & Sweller, 1998). Redundancy effect (Kalyuga, Chandler, & Sweller, 1998). Limits of working memory (Kalyuga, Chandler, & Sweller, 2000). Split attention effect (Kalyuga, Chandler, & Sweller, 2000). 
Many ways to deal with split attention (Kalyuga, Chandler, & Sweller, 2000). Modality effect (Kalyuga, Chandler, & Sweller, 2000). Redundancy effect (Kalyuga, Chandler, & Sweller, 2000). Problem solving may inhibit schema construction and automation because the strategy normally used to solve problems, means-end analysis, imposes a heavy working memory load that interferes with learning. A means-end strategy is directed towards reducing differences between current and goal problem states. To use the strategy, the solver must simultaneously consider the current problem state, the goal state, the difference between the current and goal states, the relevant operators and their relations to the differences between the current and goal states, and lastly, any subgoals that have been established (Kalyuga, Chandler, Tuovinen, & Sweller, 2001). Providing solution examples instead of problems should reduce cognitive load because it obviates the need for means-end search and instead requires learners to study each example state and its associated move or moves (Kalyuga, Chandler, Tuovinen, & Sweller, 2001). There are established conditions under which the worked example effect does not occur. If a worked example is structured in a manner that imposes a heavy cognitive load, there is no reason to predict that worked examples will be superior to solving the equivalent problems, and the effect should disappear (Kalyuga, Chandler, Tuovinen, & Sweller, 2001). The redundancy effect can also affect the value of worked examples. For more experienced learners, some of the worked example information may be unnecessary, because the information is already known to the learner and, therefore, redundant. Trying to incorporate that redundant information with the schema already in working memory can create more cognitive load than necessary and even overload working memory (Kalyuga, Chandler, Tuovinen, & Sweller, 2001). For these learners, problem solving would be superior to worked examples, because problem solving allows them to use existing schemas to solve a goal condition, and does not require the inclusion of redundant schema information (Kalyuga, Chandler, Tuovinen, & Sweller, 2001).

In the adaptive training (AT) method, the training system monitors and evaluates the performance of a student. Based on this evaluation, a new level of task difficulty is set for practice in an effort to maintain optimal learning conditions for the individual trainee (Mane, Adams, & Donchin, 1989). Two hypotheses underlie the concept of AT: (1) the learning of a complex perceptual-motor skill is better accomplished if the learner starts with a less difficult version of the task and then makes the transition to an increasingly more difficult version of the task; (2) learning of a task is better when the transition from one level of difficulty to another is based on the individual's level of proficiency rather than a fixed-order transition (Mane, Adams, & Donchin, 1989). (A minimal sketch of this adaptive difficulty adjustment follows this passage.) Part training (PT) is a method in which parts of the task are presented in isolation. A number of part training methods, such as pure part, progressive part, repetitive part, retrogressive, and isolated parts, have been developed (Mane, Adams, & Donchin, 1989). PT can be an effective method of training because it provides the student with an opportunity to study in isolation the relationship among a subset of the elements in the task.
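A minimal sketch of the adaptive training idea described above (Mane, Adams, & Donchin, 1989): difficulty is raised or lowered based on the trainee's recent performance rather than on a fixed schedule. The thresholds, scoring, and difficulty scale are hypothetical, not parameters from the study.

    # Hypothetical sketch: adjust task difficulty from recent performance.
    def adapt_difficulty(difficulty, recent_scores, target=0.75, band=0.10):
        """Raise difficulty when the trainee is above target accuracy, lower it when below."""
        accuracy = sum(recent_scores) / len(recent_scores)
        if accuracy > target + band:
            return min(10, difficulty + 1)
        if accuracy < target - band:
            return max(1, difficulty - 1)
        return difficulty

    level = 5
    for block in ([1, 1, 1, 1, 0], [0, 0, 1, 0, 0]):   # simulated practice blocks (1 = success)
        level = adapt_difficulty(level, block)
        print(f"next difficulty level: {level}")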
When training a task as a whole, it is often difficult to determine which of several factors in a situation determines the outcome of any given action, or to isolate the relationship of two variables from the influence of other variables. Breaking the task into parts is a good way to overcome that difficulty (Mane, Adams, & Donchin, 1989). In an experiment using a flight simulator computer game, when comparing PT to the whole task control group, there is a clear advantage which persists throughout the entire period of training. Subjects in the PT group performed better in game aspects which were directly related to the PT manipulation, with shorter performance times. PT also resulted in higher transfer performance (Mane, Adams, & Donchin, 1989). Successful problem solving depends on three components—skill, metaskill, and will— and each of these components can be influenced by instruction. When the goal of instruction is the promotion of nonroutine problem solving, students need to possess the relevant skill, metaskill, and well. Metacognitiion—in the form of metaskill—is central in problem solving because it manages and coordinated the other components (Mayer, 1998). Perhaps the most obvious way to improve problem solving performance is to teach the basic skills. The general procedure is to analyze each problem into the cognitive skills needed for solution and then systematically teach each skill to mastery (Mayer, 1998). One approach is to break apart a task into its component skills and then systematically teach each skill to mastery (part task component training). In this approach, any large task can be broken down into a collection of “instructional objectives” (Mayer, 1998). Another method is to break the task into a hierarchy of components (Mayer, 1998). Metaskills (or metacognitive knowledge) involves knowledge of when to use, how to coordinate, and how to monitor various skills in problem solving (Mayer, 1998). An important instructional implication of the focus on metacognition is the problem solving skills should be learned in the context of realistic problem-solving situations. Instead of using drill and practice on component skills in isolation—as suggested by a skill-based WAINESS PHD QUALIFYING EXAM 98 approach—a metaskill-based approach suggest modeling of how and when to use strategies in realistic academic tasks (Mayer, 1998). Rather than practicing of basic component skills in isolation, successful comprehension strategy instruction requires learning within the context of real tasks. By embedding strategy instruction in academic tasks, students also acquire the metacognitive skills of when and how to use the new strategies (Mayer, 1998). Effort theory and interest theory yield strikingly different educational implications. The effort theory is most consistent with the practice of teaching skills in isolation, and with using instructional methods such as drill-and-practice. The interest theory is most consistent with the practice of teaching skills in context, and with using instructional methods such as cognitive apprenticeship (Mayer, 1998). Individual interest refers to a person’s dispositions or preferred activities, and therefore is a characteristic of the person. Situational interest refers to a task’s interestingness, and therefore is a characteristic of the environment. Interest theory predicts that students think harder and process the material more deeply when they are interested, rather than uninterested (Mayer, 1998). 
Interest theory also predicts that an otherwise boring task cannot be made interesting by adding a few interesting details, such as seductive details (Mayer, 1998). Self-efficacy theory predicts that students work harder on a learning task when they judge themselves as capable than when they lack confidence in their ability to learn. Self-efficacy theory also predicts that students understand the material better when they have high self-efficacy than when they have low self-efficacy (Mayer, 1998).

Simple user interaction in a multimedia explanation refers to user control over the words and pictures that are presented in the multimedia explanation, namely, the pace of the presentation. Simple user interaction may affect both cognitive processing during learning and the cognitive outcome of learning (Mayer & Chandler, 2001). The parts-first hypothesis asserts that learners are more likely to experience cognitive overload when the whole presentation is given first (whole-part) and it therefore cannot serve as an effective context for organizing the subsequent parts presentation. Instead, when the parts presentation comes first (part-whole), learners can build separate component models for each of the key parts of the system. These component models will serve as chunks that can be more easily organized into a mental model when the whole presentation is given (Mayer & Chandler, 2001). In a whole-whole presentation, learners receive the entire multimedia explanation and then receive it again. In a part-part presentation, learners receive the parts presentation and then receive it again. In their experiment, the PW group performed better on the transfer test than the WP group did, and the PP group performed better on the transfer test than the WW group did. For measures of deep understanding, there was a clear advantage of PW presentation over WW presentation and of PP presentation over WW presentation. PW and PP seem the best methods for deep learning as measured by transfer (Mayer & Chandler, 2001). (A brief sketch of learner-paced, parts-first presentation follows this passage.)

The locus of the redundancy effect seems to be at the point of visual attentional scanning, as posited by the split-attention hypothesis. The onscreen text competes with the animation for visual attention, thus reducing the chances that the learner will be able to attend to relevant aspects of the animation and text (Mayer, Heiser, & Lonn, 2001). We interpret the redundancy effect as a new piece of support for the cognitive theory of multimedia learning and, in particular, the idea that humans possess separate visual and auditory processing channels that are each limited in capacity (Mayer, Heiser, & Lonn, 2001). According to a cognitive theory of multimedia learning, not all techniques for removing redundancy are equally effective. For example, in the case of multimedia explanations consisting of animation, narration, and on-screen text, one effective solution is to remove the onscreen text, but it does not follow that the same benefits would occur by instead removing the narration (Mayer, Heiser, & Lonn, 2001). The coherence effect refers to situations in which adding words or pictures to a multimedia presentation results in poorer performance on tests of retention or transfer (Mayer, Heiser, & Lonn, 2001). Seductive details (Mayer, Heiser, & Lonn, 2001). Mayer defines multimedia as the presentation of information in two or more formats, such as in words and pictures (Mayer, 1997; Mayer & Moreno, 1998).
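The sketch below illustrates, in hypothetical form, the part-whole sequencing and simple pacing control discussed above (Mayer & Chandler, 2001). The segment content and the pacing mechanism are invented for illustration and are not the materials used in the study.

    # Hypothetical sketch: present parts first, at the learner's pace, then the whole.
    part_segments = [
        "Part 1: cool, moist air rises and condenses into a cloud.",
        "Part 2: ice crystals collide and charge separates within the cloud.",
        "Part 3: the potential difference discharges as a lightning strike.",
    ]
    whole_presentation = "Whole: the full narrated animation, start to finish."

    def present_part_whole(parts, whole, wait_for_learner=lambda: None):
        for part in parts:              # parts first: build component models as chunks
            print(part)
            wait_for_learner()          # learner-controlled pace (e.g., a Continue button)
        print(whole)                    # whole last: organize the chunks into one model

    present_part_whole(part_segments, whole_presentation)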
According to the dual-processing theory, visually presented information is processed, at least partially, in visual working memory, whereas auditorily presented information is processed, at least partially, in auditory working memory (Mayer & Moreno, 1998). The major result of these studies is a split-attention effect in which students learned better when pictorial information was accompanied by verbal information presented in an auditory rather than a visual modality (Mayer & Moreno, 1998). The results also extend previous research on contiguity effects in which students learned better when an animation depicting the workings of a scientific system and the corresponding narration were presented concurrently rather than successively (Mayer & Moreno, 1998). According to the dual-processing theory of working memory, students learn better in multimedia environments when words and pictures are presented in separate modalities than when they are presented in the same modality (Mayer & Moreno, 1998). In split-attention situations, the learner's attentional resources (or central executive resources) are used to hold words and pictures in visual working memory, so that there is not enough left over to build connections between words and pictures. In contrast, when learners can concurrently hold words in auditory working memory and pictures in visual working memory, they are better able to devote attentional resources to building connections between them (Mayer & Moreno, 1998). In split-attention situations, an overload in visual working memory reduces the learner's ability to build coherent mental models that can be used to answer transfer questions. In contrast, when words are presented in auditory working memory and pictures are presented in visual working memory, the learner is better able to organize representations in each store and integrate across stores (Mayer & Moreno, 1998).

We define multimedia learning as learning from words and pictures, and we define multimedia instruction as presenting words and pictures that are intended to foster learning. The words can be printed or spoken. The pictures can be static or dynamic (Mayer & Moreno, 2003). We define meaningful learning as deep understanding of the material, which includes attending to important aspects of the presented material, mentally organizing it into a coherent cognitive structure, and integrating it with relevant existing knowledge. Meaningful learning is reflected in the ability to apply what was taught to new situations (i.e., problem-solving transfer). In our research, meaningful learning involves the construction of a mental model of how a causal system works (Mayer & Moreno, 2003). A central challenge facing designers of multimedia instruction is the potential for cognitive overload, in which the learner's intended cognitive processing exceeds the learner's available cognitive capacity (Mayer & Moreno, 2003). Dual-channel assumption (auditory and visual channels): (Mayer & Moreno, 2003). Limited processing capacity (working memory) (Mayer & Moreno, 2003). This is the central assumption of Chandler and Sweller's (1991; Sweller, 1999) cognitive load theory. Cognitive overload defined (Mayer & Moreno, 2003). Three kinds of load: essential processing, incidental processing, and representational holding.
Essential processing refers to cognitive processes that are required for making sense of the presented material, such as the five core processes in the cognitive theory of multimedia learning—selecting words, selecting images, organizing words, organizing images, and integrating. Incidental processing refers to cognitive processes that are not required for making sense of the presented material but are primed by the design of the learning task. Representational holding refers to cognitive processes aimed at holding a mental representation in working memory over a period of time (Mayer & Moreno, 2003). Reducing cognitive load can involve redistributing essential processing, reducing incidental processing, or reducing representational holding (Mayer & Moreno, 2003). Split attention effect: (Sweller, 1999) (Mayer & Moreno, 2003). Modality effect (Mayer & Moreno, 2003). Segmenting (part task): (Mayer & Moreno, 2003). Pretraining (part task): (Mayer & Moreno, 2003). Weeding involves making the narrated animation as concise and coherent as possible, so the learner will not be primed to engage in incidental processing (Mayer & Moreno, 2003). Signaling provides cues to the learner about how to select and organize the material (Mayer & Moreno, 2003). Aligning words and pictures (spatial contiguity) (Mayer & Moreno, 2003). Eliminating redundancy (Mayer & Moreno, 2003). Temporal contiguity effect (Mayer & Moreno, 2003). Constructivist learning occurs when learners construct meaningful mental representations from presented information (Mayer, Moreno, Boire, & Vagge, 1999). A design principle is a technique for constructing multimedia environments that foster constructivist learning. Although learners are not physically active in the multimedia environment, it may possible to promote some degree of cognitive activity that results in constructivist learning (Mayer, Moreno, Boire, & Vagge, 1999). Internal connections = selecting relevant information of the modal and organizing them into causal chains (Mayer, Moreno, Boire, & Vagge, 1999). External connections (aka referential connections). = integrating the internal connections to one another and with relevant prior knowledge (Mayer, Moreno, Boire, & Vagge, 1999). Constructivist learning occurs when learners are able to build referential connections between corresponding aspects of the visual and verbal representations of a multimedia presentation (Mayer, Moreno, Boire, & Vagge, 1999). Constructivist learning is fostered when the learner is able to hold a visual representation in visual working memory and a corresponding verbal representation in verbal working memory at the same time. The model implicates working memory (or cognitive load) as a major impediment to constructivist learning (Mayer, Moreno, Boire, & Vagge, 1999). WAINESS PHD QUALIFYING EXAM 101 The contiguity effect: learners perform better on retention and transfer when they view animated materials concurrently with corresponding narration than when the animation is viewed either before or after the narration (Mayer, 1997; Mayer, Moreno, Boire, & Vagge, 1999). If modalities must be presented successively, rather than concurrently, reducing the material to smaller bites reduces the detrimental learning effects of the contiguity effect (Mayer, Moreno, Boire, & Vagge, 1999). An explanation is a description of a causal system containing parts that interact in a coherent way. A change in one part causes a change in another part (Mayer & Sims, 1994). 
Multimedia learning occurs when students use information presented in two or more formats to construct knowledge. This definition also applies to the term multimodal, since learners are exposed to more than one sense modality, rather than multimedia, which refers to the idea that the instructor uses more than one presentation medium (Mayer & Sims, 1994). The dual coding theory involves three processes: a verbal explanation is presented along with a visual explanation; then, in working memory, the learner constructs mental representations of the two explanations and accesses relevant prior knowledge from long-term memory; and lastly, the two representations are combined or linked with referential connections (Mayer & Sims, 1994). Contiguity effect: From the dual coding theory, it is expected that meaningful learning occurs in working memory when multiple modes of information are processed and linked with referential connections. This in turn leads to better transfer effects. Therefore, if the material is not presented concurrently, this process is ill-supported (Mayer & Sims, 1994).

Students performed better on problem-solving transfer when the voice in the multimedia message was from a human speaking with a standard accent rather than a human speaking with a foreign accent or a machine voice. These results are consistent with social agency theory and cognitive load theories. Social agency theory suggests that social cues in a multimedia message can prime the social conversation schema in learners. Once the social conversation schema is activated, learners are more likely to act as if they are in a conversation with another person. Thus, at least to some extent, the social rules of human-to-human communication come into play. Therefore, the learner tries harder to make sense of what the computer is saying by engaging in deep cognitive processing. The deep processes include selecting relevant information for further processing, organizing the pieces of information into coherent representations, and integrating verbal and visual representations with each other and with prior knowledge. Deep cognitive processing results in meaningful learning outcomes, which enable learners to apply (or transfer) what they have learned to new situations (Mayer, Sobko, & Mautone, 2003).

Concrete icons enable users to use their everyday knowledge about the objects they depict to understand the likely function of the icon. The effects of icon concreteness are short-lived and limited to users' early experience of an icon set, when users are unsure of the meaning of icons. The effects of icon complexity, in contrast, are most apparent in tasks involving a search component and do not diminish as a result of experience (McDougall, de Bruijn, & Curry, 2000). The distinctiveness of an icon, and the features underpinning distinctiveness, vary depending on the nature of the array in which an icon is presented. Concrete icons in an array of abstract icons become distinct. Abstract icons in an array of concrete icons become distinct. Contrast is the distinguishing factor (McDougall, de Bruijn, & Curry, 2000).

Virtual reality (VR) is a multi-sensory, highly interactive, computer-based environment in which the user becomes an active participant in a virtually real world. A first-person point of view, freedom in navigation, and interaction are essential for a computer environment to be characterized as a VR environment, or VE (virtual environment; Mikropoulos, 2001).
A virtual environment designed to educate the user is called a virtual learning environment. It should have an educational objective and provide users with experiences they would otherwise not be able to experience in the physical world (Mikropoulos, 2001). VR proposes the adaptation of technology to people and not the opposite (Mikropoulos, 2001). The physical structure of the human brain is affected by the way it is used. Different kinds of experiences configure the brain, especially children's brains. The reorganization of children's brains is an important factor in the educational process, specifically in the case of the involvement of media and educational technology (Mikropoulos, 2001). The goal of this article is to compare the electrical brain activity taking place in virtual versus real environments. A further goal is to measure and analyze the cognitive changes that users of educational VR systems experience and to evaluate the consequences of such a kind of educational software (Mikropoulos, 2001). Electroencephalography (EEG) shows the electrical activity of a number of neurons that can be recorded from the scalp. Techniques have been developed to extract information from the signals recorded in order to obtain an understanding of the brain processes underlying psychophysical and cognitive functions (Mikropoulos, 2001). College students were exposed to an educational VE with landscapes for geography and astronomy teaching, buildings and rooms for environmental and physics education, and the inside of cells for biology teaching. Movements in these environments were compared to movements in real-world counterparts, with the real-world versions occurring first (Mikropoulos, 2001). Subjects were more attentive when navigating in the virtual world. Less mental effort was used in the real-world version of tasks than in the virtual version. All findings can be attributed to experience in the real world versus inexperience in the virtual world. Overall, though, the findings reported similar brain activity for the same task in both the real and virtual environments. This activity is connected with visual perception, attentional demands, and mental effort. The results thus indicate that users behave similarly in virtual and real environments. They also indicate that virtual reality provides educational environments in which students concentrate, perceive, and judge, as a result of less eye movement and alpha signal diminution. Additionally, there is a need for users to be trained in and comfortable with VR (Mikropoulos, 2001).

Contiguity effect: Temporal-contiguity effect and spatial-contiguity effect. Spatial = modalities integrated or physically separated. Temporal = order of presentation. Contiguity = split-attention effect (Moreno & Mayer, 1999). Modality principle = dual-channel effects (Moreno & Mayer, 1999). Mixed modalities are better (Moreno & Mayer, 1999). Suggests replacing the split-attention effect with multiple terms, namely the spatial-contiguity effect, temporal-contiguity effect, and modality effect, according to the situation, because they result in different effects on working memory (Moreno & Mayer, 1999). Arousal theory suggests that adding entertaining auditory adjuncts will make a learning task more interesting, because it creates a greater level of attention so that more material is processed by the learner (Moreno & Mayer, 2000a).
WAINESS PHD QUALIFYING EXAM 103 Adding extraneous sentences or illustrations, referred to as seductive details, results in poorer retention and transfer performance, even when the material was meant to entertain (Moreno & Mayer, 2000a). The coherence principle or theory holds that auditory adjuncts can overload the auditory channel (or auditory working memory). Any additional material (including sound effects and music) that is not necessary to make the lesson intelligible or that is not integrated with the rest of the materials will reduce effective working memory capacity and thereby interfere with the learning of the core material, and therefore, resulting in poorer performance on transfer tests (Moreno & Mayer, 2000a). The more relevant and integrated sounds are, the more they will help students’ understanding of the materials (Moreno & Mayer, 2000a). On the surface, seductive details and auditory adjuncts seem similar. However, the underlying cognitive mechanisms are quire different. Whereas seductive details seem to prime inappropriate schemas into which incoming information is assimilated, auditory adjuncts seem to overload auditory working memory (Moreno & Mayer, 2000a). The self-reference effect on memory is based on a very efficient mechanism to process material that is very familiar to oneself. Personalizing the context improves learning by helping learners interpret and interrelate important information in the familiar versus abstract problem statement (Moreno & Mayer, 2000b). Providing personalized message in media communication seems more likely to ease the processing of the message by being more consistent with the social rules and schemas of normal conversations (Moreno & Mayer, 2000b). A self-reference effect for problems-solving transfer in multimedia messages was observed across five experiments: Student who learned by means of a personalized explanation (either as speech or as on-screen text) were better able to use what they learned to solve new problems than students who received a neutral monologue (Moreno & Mayer, 2000b). The beneficial effects of introducing self-referencing into a multimedia science lesson occur independently of the behavioral interaction required during a computer lesson. When the presentation is linear, so that students are required only to watch an animation while listening to or reading an explanation, and when students are required to make choices by clicking on the computer screen, self-referencing seems to promote the mental interactions needed to actively involve the learner in the process of understanding (Moreno & Mayer, 2000b). Learning environments can vary in immersion from no immersion (such as illustrated text) to low immersion (such as an educational game presented using a computer display and speakers) to high immersion (such as a computer display presented using a head-mounted display [HMD] and earphones; Moreno & Mayer, 2002). The Design-A-Plant game puts learners on an alien planet where they must make a plant flourish. The games uses a static 3D environment with the plant centered horizontally on the screen. It uses a pedagogic agent who offers individualized advice concerned the relation between plant features and climate conditions (Moreno & Mayer, 2002). The questions for the study included: do the same instructional design principles there were discovered with a non-immersive medium also apply to low-immersion media (e.g., desktop games) and more immersive media (e.g., HMD games)? 
The researchers focused on retention, transfer, and program ratings (Moreno & Mayer, 2002). Modality effect (Moreno & Mayer, 2002). The study provided evidence that students felt a stronger sense of presence in more immersive VREs. Also, students who learn in a more immersive VRE do not necessarily learn a computer-based lesson more deeply than students in a less immersive VRE (Moreno & Mayer, 2002). The researchers argued the lack of media effects might have been due to the low quality of the graphics and a less compelling environment (Moreno & Mayer, 2002). Modality effects appear to be consistent across non-, low-, and high-immersive environments (Moreno & Mayer, 2002). Because some media may enable instructional methods that are not possible with other media, it might be useful to explore instructional methods that are possible in immersive environments but not in others (Moreno & Mayer, 2002).

People have a limited working memory that is able to hold and process only a few items of information at a time (Mousavi, Low, & Sweller, 1995). People have a huge long-term memory that is effectively unlimited in size (Mousavi, Low, & Sweller, 1995). Schema acquisition is a primary learning mechanism. Schemata have the functions of storing information in long-term memory and of reducing working memory load by permitting people to treat multiple elements of information as a single element (Mousavi, Low, & Sweller, 1995). Automation of cognitive processes, including automatic use of schemata, is a learning mechanism that also reduces working memory load by effectively bypassing working memory. Automated information can be processed without conscious effort (Mousavi, Low, & Sweller, 1995). A limited working memory is central to this architecture and central to cognitive load theory (Mousavi, Low, & Sweller, 1995). Split-attention effect (Mousavi, Low, & Sweller, 1995). It can be reduced through dual-modality presentations (Mousavi, Low, & Sweller, 1995).

In general, the data (using Space Fortress) show facilitation in skill acquisition through the employment of various part-task procedures and specific instructional strategies over the baseline control conditions. However, there are a number of caveats (Newell, Carlton, Fisher, & Rutter, 1989). The part-task effect is strongly influenced by the nature of the part task selected for prior practice. It appears that only part tasks that reflect “natural” units of coordinated activity facilitate skill acquisition (Newell, Carlton, Fisher, & Rutter, 1989). Control of difficulty levels did not affect performance (Newell, Carlton, Fisher, & Rutter, 1989).

Results showed that both the presence of icons (versus textual indicators) and the spatial grouping of icons speeded the search for a target file (Niemela & Saarinen, 2000). The results support the notion that icons, by their pictorial nature, may have other inherent properties that lead to improved user performance at the interface (Niemela & Saarinen, 2000). The grouping of items reduces the number of items to be searched. Spatially close items tend to be grouped, but humans are also able to attend selectively to spatially scattered subsets of elements (Niemela & Saarinen, 2000). In this study, grouping based on both the spatial closeness and the similar appearance of icons seemed to enable more efficient search than did visual grouping based on the similarity of icons alone (Niemela & Saarinen, 2000).
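The claim that grouping reduces the number of items to be searched can be made concrete with a back-of-the-envelope model. This is an illustration constructed here, not a computation reported by Niemela and Saarinen (2000); it assumes a serial, self-terminating search in which the user first identifies the target’s group and then scans within that group:

\[
E[\text{comparisons}]_{\text{ungrouped}} \approx \frac{n+1}{2},
\qquad
E[\text{comparisons}]_{\text{grouped}} \approx \frac{g+1}{2} + \frac{n/g+1}{2}.
\]

For example, with n = 16 icons arranged in g = 4 groups of four, the expected number of comparisons drops from about 8.5 to about 5, which is directionally consistent with the faster search times the authors report for grouped icons.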
Cognitive load theory (CLT) originated in the 1980s and underwent substantial development and expansion in the 1990s (Paas, Renkl, & Sweller, 2003). Low element interactivity refers to environments in which each element can be learned independently of the other elements, with little direct interaction between the elements. High element interactivity refers to environments in which there is so much interaction between elements that they cannot be understood until all the elements and their interactions are processed simultaneously. As a consequence, high-element-interactivity material is difficult to understand (Paas, Renkl, & Sweller, 2003). Element interactivity is the driver of the first category of cognitive load, intrinsic cognitive load, because the demands on working memory capacity imposed by element interactivity are intrinsic to the material being learned. Different materials differ in their levels of element interactivity, and thus intrinsic cognitive load, and this load cannot be altered by instructional manipulations; only a simpler learning task that omits some interacting elements can be chosen to reduce this type of load (Paas, Renkl, & Sweller, 2003).

Working memory, in which all conscious cognitive processing occurs, can handle only a very limited number of novel interacting elements, possibly no more than two or three. Long-term memory can contain vast numbers of schemas: cognitive constructs that incorporate multiple elements of information into a single element with a specific function (Paas, Renkl, & Sweller, 2003). With schema use, a single element in working memory might consist of a large number of lower level, interacting elements, which, if processed individually, might have exceeded the capacity of working memory (Paas, Renkl, & Sweller, 2003). The automation of schemas, so that they can be processed unconsciously, further reduces the load on working memory (Paas, Renkl, & Sweller, 2003).

As well as element interactivity, the manner in which information is presented to learners and the learning activities required of learners can also impose a cognitive load. When that load is unnecessary and so interferes with schema acquisition and automation, it is referred to as extraneous or ineffective cognitive load. Cognitive load theorists spend much of their time devising alternative instructional designs and procedures that reduce extraneous cognitive load compared to conventionally used procedures (Paas, Renkl, & Sweller, 2003). Extraneous cognitive load is primarily important when intrinsic cognitive load is high, because the two forms of cognitive load are additive (Paas, Renkl, & Sweller, 2003). The last form of cognitive load is germane or effective cognitive load. Germane cognitive load is influenced by the instructional design. The manner in which information is presented to learners and the learning activities required of learners are factors relevant to levels of germane cognitive load. Whereas extraneous cognitive load interferes with learning, germane cognitive load enhances learning (Paas, Renkl, & Sweller, 2003).

Cognitive load theory is concerned with the development of instructional methods that efficiently use people’s limited cognitive processing capacity to stimulate their ability to apply acquired knowledge and skills to new situations (i.e., transfer). CLT is based on a cognitive architecture that consists of a limited working memory, with partly independent processing units for visual/spatial and auditory/verbal information, which interacts with a comparatively unlimited long-term memory (Paas, Tuovinen, Tabbers, & Van Gerven, 2003).
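One way to keep the relationships among the three types of load straight is to render the additivity claim schematically. The following inequality is only a summary device for the verbal claims above, not a formula given by Paas, Renkl, and Sweller (2003):

\[
L_{\text{intrinsic}} + L_{\text{extraneous}} + L_{\text{germane}} \;\le\; C_{\text{working memory}}.
\]

On this reading, intrinsic load is fixed for a given task and learner, so instructional design can only reduce extraneous load and redirect the freed capacity toward germane load while the sum stays within working memory limits.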
Central to CLT is the notion that working memory architecture and its limitations should be a major consideration when designing instruction (Paas, Tuovinen, Tabbers, & Van Gerven, 2003). The most important learning processes for developing the ability to transfer acquired knowledge and skills are schema construction and automation. According to CLT, multiple elements of information can be chunked as single elements in cognitive schemas, which can be automated to a large extent. They can then bypass working memory during mental processing, thereby circumventing the limitations of working memory. Consequently, the prime goals of instruction are the construction and automation of schemas (Paas, Tuovinen, Tabbers, & Van Gerven, 2003). Cognitive load is not simply considered a by-product of the learning process but a major factor that determines the success of an instructional intervention (Paas, Tuovinen, Tabbers, & Van Gerven, 2003).

Cognitive load can be defined as a multidimensional construct representing the load that performing a particular task imposes on the learner’s cognitive system. The construct has a causal dimension, reflecting the interaction between task and learner characteristics, and an assessment dimension, reflecting the measurable concepts of mental load, mental effort, and performance (Paas, Tuovinen, Tabbers, & Van Gerven, 2003). Task characteristics that have been identified in CLT research are task format, task complexity, use of multimedia, time pressure, and pacing of instruction. Relevant learner characteristics comprise expertise level, age, and spatial ability (Paas, Tuovinen, Tabbers, & Van Gerven, 2003). Mental load is the aspect of cognitive load that originates from the interaction between task and subject characteristics. Mental load provides an indication of the expected cognitive capacity demands and can be considered an a priori estimate of the cognitive load (Paas, Tuovinen, Tabbers, & Van Gerven, 2003). Mental effort is the aspect of cognitive load that refers to the cognitive capacity that is actually allocated to accommodate the demands imposed by the task; thus, it can be considered to reflect the actual cognitive load. Mental effort is measured while participants are working on a task (Paas, Tuovinen, Tabbers, & Van Gerven, 2003). Performance, also an aspect of cognitive load, can be defined in terms of learners’ achievements. It can be determined while people are working on a task or afterward (Paas, Tuovinen, Tabbers, & Van Gerven, 2003).

Mental models explain the human cognitive processes of understanding external reality, translating reality into internal representations, and using those representations in problem solving (Park & Gittelman, 1995). Mental model formation depends heavily on the conceptualizations that individuals bring to a task. When interacting with the environment, with others, and with the artifacts of technology, people form mental models of themselves and the things with which they interact (Park & Gittelman, 1995). People sometimes develop dynamic characteristics of mental models, representing the direction of processes, motion, and changes over time. These findings indicate that the dynamic characteristics of mental models seem to be determined primarily by subjects’ understanding of the system’s features and functions, more than by the visual content of the externally presented training materials or the system itself (Park & Gittelman, 1995).
Motional or motion cues simulating system functions (visible or invisible) in visual displays seem to facilitate the formation of the dynamic characteristics of mental models (Park & Gittelman, 1995). Simulations that include the actual movements in a task are more directive and effective for teaching the dynamic nature of a given task and for aiding the formation of dynamic characteristics of mental models for the task (Park & Gittelman, 1995).

Worked-out examples usually consist of a problem formulation, solution steps, and the final solution itself (Renkl & Atkinson, 2003). In later stages of skill acquisition, the emphasis is on increasing speed and accuracy of performance, and skills, or at least subcomponents of them, should become automated. During these stages, it is important that learners actually solve problems as opposed to studying examples (Renkl & Atkinson, 2003). Cognitive skills refer to learners’ capabilities to solve problems from intellectual domains such as mathematics, medical diagnosis, or electronic troubleshooting. Cognitive skill acquisition is, therefore, a narrower term than learning. For example, it does not include acquisition of declarative knowledge for its own sake, general thinking or learning skills, general metacognitive knowledge, and so on (Renkl & Atkinson, 2003). Intrinsic load (Renkl & Atkinson, 2003). Germane load (Renkl & Atkinson, 2003). Extraneous load (Renkl & Atkinson, 2003). Fading worked-out solution steps (Renkl & Atkinson, 2003). Worked-out examples defined (Renkl, Atkinson, Maier, & Staley, 2002).

Animation can avoid being a distractor to learning if it is clear to the learner that the animation (e.g., a moving spaceship) is not part of the to-be-learned material (Rieber, 1996). The term seductive details is used to describe highly interesting but unimportant text segments. These segments usually contain information that is tangential to the main themes of a story, but they are memorable because they deal with controversial or sensational topics (Schraw, 1998). Dyadic protocol (Shebilske, Regian, Arthur, & Jordan, 1992). Learner-controlled instruction was superior to program-controlled instruction with regard to student performance on a novel procedural task (Shyu & Brown, 1995). Prior knowledge did not significantly contribute to performance based on control type (Shyu & Brown, 1995).

Direct manipulation with visual feedback was the most user-friendly interface in the study (Svendsen, 1991). Results indicate that direct manipulation may hinder effective problem solving, because the interface is so supportive of thoughtless action that the user neglects to look for rules where these are called for (Svendsen, 1991). When a program is characterized as user friendly, it entails that users are able to learn the functionality of the program fairly quickly, are able to use this functionality, and like using the program (Svendsen, 1991).

The focus on authentic learning tasks, whole tasks that are based on real-life tasks, can be found in practical educational approaches such as subject-based education, the case method, problem-based learning, and competency-based learning (van Merrienboer, Kirschner, & Kester, 2003). A severe problem with authentic whole tasks is that learners may have difficulty learning because they are overwhelmed by the task complexity (van Merrienboer, Kirschner, & Kester, 2003).
Scaffolds, according to their original meaning within educational psychology, include all devices or strategies that support students’ learning (van Merrienboer, Kirschner, & Kester, 2003). In both cognitive apprenticeship learning and the authors’ framework, scaffolding explicitly pertains to a combination of performance support and fading. Initially, the support enables a learner to achieve a goal or action not achievable without that support. When the learner achieves the desired goal, support gradually diminishes until it is no longer needed (van Merrienboer, Kirschner, & Kester, 2003). Because excessive or insufficient support can hamper the learning process, it is critical to determine the right type and amount of support and to fade it at the appropriate time and rate (van Merrienboer, Kirschner, & Kester, 2003). CLT emphasizes the need to fully integrate support for novice learners with the task environment; otherwise, split-attention effects increase extraneous cognitive load because learners have to mentally integrate information from the task environment with the given support (van Merrienboer, Kirschner, & Kester, 2003).

It is clearly impossible to use highly complex learning tasks from the start of a course or training, because this would yield excessive cognitive load for the learners. The common solution is to let learners start their work on relatively simple learning tasks and progress toward more complex tasks (van Merrienboer, Kirschner, & Kester, 2003). In part-task approaches, complex performances are broken down into simpler parts that are trained separately or, in a part-whole approach, are gradually combined into whole-task performance. It is not until the end of the training program that learners have the opportunity to practice the whole task (van Merrienboer, Kirschner, & Kester, 2003). Part-task approaches to sequencing are highly effective for preventing cognitive overload, because the load associated with a part of the task is lower than the load associated with the whole task (van Merrienboer, Kirschner, & Kester, 2003). However, part-task approaches to sequencing, and instructional models driven by separate objectives, do not work well for complex performances that require the integration of skills, knowledge, and attitudes and the extensive coordination of constituent skills in new problem situations (van Merrienboer, Kirschner, & Kester, 2003). Whole-task approaches attend to the coordination and integration of constituent skills from the very beginning, and they stress that learners quickly develop a holistic vision of the whole task that is gradually embellished during the training (van Merrienboer, Kirschner, & Kester, 2003).

Learning tasks are often equated with conventional problems. Such tasks confront the learner with a given state and a set of criteria for an acceptable goal state. There is overwhelming evidence that such conventional tasks are exceptionally expensive in terms of working memory capacity (e.g., means-ends analysis; van Merrienboer, Kirschner, & Kester, 2003). Learning tasks that take the form of worked examples confront learners not only with a given state and a desired goal state but also with an example solution. Studying those examples as a substitute for performing conventional problem-solving tasks may be beneficial, because it focuses attention on problem states and associated solution states and so enables learners to induce generalized solutions or schemas (van Merrienboer, Kirschner, & Kester, 2003).
A disadvantage of worked-out examples is that they do not force learners to study them carefully (van Merrienboer, Kirschner, & Kester, 2003). An alternative to worked examples is completion tasks, which present a given state, a goal state, and a partial solution to learners, who must complete the solution. Completion tasks combine the strong points of worked-out examples and conventional learning tasks. Like conventional learning tasks, they directly encourage learners to be active, because learners have to complete the solution, which is only possible through careful study of the partial example provided by the completion task (van Merrienboer, Kirschner, & Kester, 2003).

Information needed to perform learning tasks can be presented in two ways. One way is to present the necessary information before the learners start working on a learning task or series of tasks. The other way is to present the necessary information precisely when the learners need it during task performance (just-in-time information). CLT does not yield an unequivocal answer to the question of which of the two ways is best (van Merrienboer, Kirschner, & Kester, 2003). In contrast to supportive information, procedural information pertains to consistent task components or recurrent task aspects that are performed as routines by experts. These task components can become automated. CLT not only indicates that procedural information is best presented when learners need it, but it also raises two related design issues. First, presenting procedural information precisely when it is needed to perform particular actions prevents temporal split-attention effects. Second, presenting procedural information so that it is fully integrated with the task environment prevents spatial split-attention effects (van Merrienboer, Kirschner, & Kester, 2003). Supportive information may be helpful in performing the nonrecurrent aspects of learning tasks. It is best presented before a class of equivalent learning tasks, and it is critical that the learners elaborate on it so that it can be easily retrieved from long-term memory when necessary for the learning task. Elaborations are used to develop schemas whereby nonarbitrary relations are established between new information elements and the learner’s prior knowledge (van Merrienboer, Kirschner, & Kester, 2003).

Split-attention effect (Yeung, Jin, & Sweller, 1997). Redundancy effect (Yeung, Jin, & Sweller, 1997).

Trial and error in computer gaming is defined as the absence of a systematic strategy in playing a game. This particular strategy involves actions and reactions to circumstances, consequences, and feedback within the game framework. Knowledge of how to play the game is accumulated through observation and active participation in the gaming process, not by reading rules and instructions (Dempsey, Haynes, Lucassen, & Casey, 2002). In this study, strategies for playing computer games included trial and error, reading instructions, relying on prior knowledge or experiences, and developing a personal game-playing strategy. Trial and error was by far the predominant strategy across all game types (126 of the 160 games played). It was often employed even in cases where participants reported that they knew a more efficient strategy (Dempsey, Haynes, Lucassen, & Casey, 2002).

Self-efficacy is defined as one’s belief about one’s ability to successfully carry out particular behaviors (Davis & Wiedenbeck, 2001).
Problem solving is defined as the intellectual skill of proposing solutions to previously unencountered problem situations (Tennyson & Breuer, 2002). Creativity is defined as the cognitive skill of creating a problem situation and proposing a solution or solutions (Tennyson & Breuer, 2002).

It goes without saying that the most efficient medium would not necessarily be ideal for every stage of learning. The goal is to have a principled and empirical way to calculate optimal information distributions at various points in different types of learning processes, including, of course, terminal distributions (Cobb, 1997). Airline pilots are destined always to share major parts of their cognitive work with their instruments; trapeze artists, to get most of the work packed into their heads. The way forward in media design is to model learner and medium as distributed information systems, with principled, empirically determined distributions of information storage and processing over the course of learning (Cobb, 1997). A distribution-of-information analysis suggests that schematized information is to a large extent preprocessed in a consumer culture, and so imposes a low memory demand when called up for problem solving. But unfamiliar relations between decontextualized letters and numbers are fully processed in working memory, with predictably poor results (Cobb, 1997).

References for Question 2

Allen, R. B. (1997). Mental models and user models. In M. Helander, T. K. Landauer & P. Prabhu (Eds.), Handbook of Human Computer Interaction: Second, Completely Revised Edition (pp. 49-63). Amsterdam: Elsevier.
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181-214.
Atkinson, R. K., Renkl, A., & Merrill, M. M. (2003). Transitioning from studying examples to solving problems: Effects of self-explanation prompts and fading worked-out steps. Journal of Educational Psychology, 95(4), 774-783.
Avouris, N., Dimitracopoulou, A., & Komis, V. (2003). On analysis of collaborative problem solving: An object-oriented approach. Computers in Human Behavior, 19, 147-167.
Bailey, D. H. (1996). Constructivism and multimedia: Theory and application; innovation and transformation. International Journal of Instructional Media, 23(2), 161-165.
Baker, E. L., & Mayer, R. E. (1999). Computer-based assessment of problem solving. Computers in Human Behavior, 15, 269-282.
Baker, R., & Dwyer, F. (2000). A meta-analytic assessment of the effect of visualized instruction. International Journal of Instructional Media, 27(4), 417-426.
Banbury, S. P., Macken, W. J., Tremblay, S., & Jones, D. M. (2001, Spring). Auditory distraction and short-term memory: Phenomena and practical implications. Human Factors, 43(1), 12-29.
Bangert-Drowns, R. L., & Pyke, C. (2001). A taxonomy of student engagement with educational software: An exploration of literate thinking with electronic text. Journal of Educational Computing Research, 24(3), 213-234.
Barab, S. A., Young, M. F., & Wang, J. (1999). The effects of navigational and generative activities in hypertext learning on problem solving and comprehension. International Journal of Instructional Media, 26(3), 283-309.
Bargh, J. A. (2002). Beyond simple truths: The human-Internet interaction. Journal of Social Issues, 58(1), 1-8.
Baylor, A. L. (2001). Perceived disorientation and incidental learning in a web-based environment: Internal and external factors. Journal of Educational Multimedia and Hypermedia, 10(3), 227-251.
Beatty, J. (1982). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 91(2), 276-292.
Benbasat, I., & Todd, P. (1993). An experimental investigation of interface design alternatives: Icon vs. text and direct manipulation vs. menus. International Journal of Man-Machine Studies, 38, 369-402.
Berg, G. A. (2000). Human-computer interaction (HCI) in educational environments: Implications of understanding computers as media [Electronic Version]. Journal of Educational Multimedia and Hypermedia, 9(4), 349-370.
Blanton, B. B. (1998). The application of the cognitive learning theory to instructional design. International Journal of Instructional Media, 25(2), 171-177.
Brown, D. W., & Schneider, S. D. (1992). Young learners’ reactions to problem solving contrasted by distinctly divergent computer interfaces. Journal of Computing in Childhood Education, 3(3/4), 335-347.
Brunken, R., Plass, J. L., & Leutner, D. (2003). Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1), 53-61.
Calvert, S. L., Watson, J. A., Brinkley, V. M., & Bordeaux, B. B. (1989). Computer presentational features for young children’s preferential selection and recall of information. Journal of Educational Computing Research, 5(1), 35-49.
Castelli, C., Colazzo, L., & Molinari, A. (1998). Cognitive variables and patterns of hypertext performances: Lessons learned for educational hypermedia construction [Electronic Version]. Journal of Educational Multimedia and Hypermedia, 7(2-3), 177-206.
Chadwick, J. (1992). The development of a museum multimedia program and the effect of audio on user completion rate. Journal of Educational Multimedia and Hypermedia, 3(1), 331-340.
Chou, C., & Lin, H. (1998). The effect of navigation map types and cognitive styles on learners’ performance in a computer-networked hypertext learning system [Electronic Version]. Journal of Educational Multimedia and Hypermedia, 7(2/3), 151-176.
Chou, C., Lin, H., & Sun, C.-t. (2000). Navigation maps in hierarchical-structured hypertext courseware [Electronic Version]. International Journal of Instructional Media, 27(2), 165-182.
Clark, R. E. (1999). The CANE model of motivation to learn and to work: A two-stage process of goal commitment and effort [Electronic Version]. In J. Lowyck (Ed.), Trends in Corporate Training. Leuven, Belgium: University of Leuven Press.
Clark, R. E., & Estes, F. (1999, November/December). Authentic educational technology: The lynchpin between theory and practice. Educational Technology, 39(6), 5-13.
Clements, D. H., & Nastasi, B. K. (1999). Metacognition, learning, and educational computer environments [Electronic Version]. Information Technology in Childhood Education, 10, 5-38.
Corno, L., & Mandinach, E. B. (1983). The role of cognitive engagement in classroom learning and motivation. Educational Psychologist, 18(2), 88-108.
Covington, M. V. (2000). Goal theory, motivation, and school achievement: An integrative review. Annual Review of Psychology, 51, 171-200.
Cutmore, T. R. H., Hine, T. J., Maberly, K. J., Langford, N. M., & Hawgood, G. (2000). Cognitive and gender factors influencing navigation in a virtual environment. International Journal of Human-Computer Studies, 53, 223-249.
Dalgarno, B. (2001). Interpretations of constructivism and consequences for computer assisted learning. British Journal of Educational Technology, 32(2), 183-194.
Daniels, H. L., & Moore, D. M. (2000). Interaction of cognitive style and learner control in a hypermedia environment. International Journal of Instructional Media, 27(4), 369-383.
Davidson-Shivers, G. V., Shorter, L., & Jordan, K. (1999). Learning strategies and navigation decisions of children using a hypermedia lesson [Electronic Version]. Journal of Educational Multimedia and Hypermedia, 8(2), 175-188.
Dias, P., Gomes, M. J., & Correia, A. P. (1999). Disorientation in hypermedia environments: Mechanisms to support navigation. Journal of Educational Computing Research, 20(2), 93-117.
Dienes, Z., & Fahey, R. (1998). The role of implicit memory in controlling a dynamic system. The Quarterly Journal of Experimental Psychology, 51A(3), 593-614.
Dillon, A., & Gabbard, R. (1998, Fall). Hypermedia as an educational technology: A review of the quantitative research literature on learner comprehension, control, and style. Review of Educational Research, 63(3), 322-349.
Eberts, R. E., & Bittianda, K. P. (1993). Preferred mental models for direct-manipulation and command-based interfaces. International Journal of Man-Machine Studies, 38, 769-785.
Farrell, I. H., & Moore, D. M. (2000). The effect of navigation tools on learners’ achievement and attitude in a hypermedia environment. Journal of Educational Technology Systems, 29(2), 169-181.
Feldman, S. (2001). The link, and how we think: Using hypertext as a teaching & learning tool. International Journal of Instructional Media, 28(2), 153-158.
Fletcher-Flinn, C. M., & Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): A meta-analysis. Journal of Educational Computing Research, 12(3), 219-242.
Flottemesch, K. (2000, May/June). Building effective interaction in distance education: A review of the literature. Educational Technology, 40(3), 46-51.
Friedrichsen, P. M., Dana, T. M., & Zembal-Saul, C. (2001). Learning to teach with technology model: Implementation in secondary science teacher education [Electronic Version]. The Journal of Computers in Mathematics and Science Teaching, 20(4), 377-394.
Frohlich, D. M. (1997). Direct manipulation and other lessons. In M. Helander, T. K. Landauer & P. Prabhu (Eds.), Handbook of Human Computer Interaction: Second, Completely Revised Edition (pp. 463-488). Amsterdam: Elsevier.
Gerlic, I., & Jausovec, N. (1999). Multimedia: Differences in cognitive processes observed with EEG [Electronic Version]. Educational Technology Research and Development, 47(3), 5-14.
Gevins, A., Smith, M. E., Leong, H., McEvoy, L., Whitfield, S., Du, R., & Rush, G. (1998). Monitoring working memory load during computer-based tasks with EEG pattern recognition methods. Human Factors, 40(1), 79-91.
Guttentag, R. E. (1984). The mental effort requirement of cumulative rehearsal: A developmental study. Journal of Experimental Child Psychology, 37, 92-106.
Haggas, A. M., & Hantula, D. A. (2002). Think or click? Student preference for overt vs. covert responding in web-based instruction. Computers in Human Behavior, 18, 165-172.
Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of cognitive interest in science learning. Journal of Educational Psychology, 90(3), 414-434.
Hokanson, B., & Hooper, S. (2000). Computers as cognitive media: Examining the potential of computers in education. Computers in Human Behavior, 16, 537-552.
Howland, J., Laffey, J., & Espinosa, L. M. (1997). A computing experience to motivate children to complex performances [Electronic Version]. Journal of Computing in Childhood Education, 8(4), 291-311.
Hudson, B. (1998). Group work with multimedia: The role of the computer in mediating mathematical meaning-making [Electronic Version]. The Journal of Computers in Mathematics and Science Teaching, 17(2/3), 181-201.
Jones, M. G., Farquhar, J. D., & Surry, D. W. (1995, July/August). Using metacognitive theories to design user interfaces for computer-based learning. Educational Technology, 35(4), 12-22.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23-31.
Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional design. Human Factors, 40(1), 1-17.
Kalyuga, S., Chandler, P., & Sweller, J. (2000). Incorporating learner experience into the design of multimedia instruction. Journal of Educational Psychology, 92(1), 126-136.
Kashihara, A., Kinshuk, Oppermann, R., Rashev, R., & Simm, H. (2000). A cognitive load reduction approach to exploratory learning and its application to an interactive simulation-based learning system. Journal of Educational Multimedia and Hypermedia, 9(3), 253-276.
Kozma, R. (2000). The relationship between technology and design in educational technology research and development: A reply to Richey [Electronic Version]. Educational Technology Research and Development, 48(1), 19-21.
Mane, A. M., Adams, J. A., & Donchin, E. (1989). Adaptive and part-whole training in the acquisition of a complex perceptual-motor skill. Acta Psychologica, 71, 179-196.
Mayer, R. E. (1998). Cognitive, metacognitive, and motivational aspects of problem solving. Instructional Science, 26, 49-63.
Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: Does simple user interaction foster deeper understanding of multimedia messages? Journal of Educational Psychology, 93(2), 390-397.
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more material results in less understanding. Journal of Educational Psychology, 93(1), 187-198.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence of dual processing systems in working memory. Journal of Educational Psychology, 90(2), 312-320.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.
Mayer, R. E., Moreno, R., Boire, M., & Vagge, S. (1999). Maximizing constructivist learning from multimedia communications by minimizing cognitive load. Journal of Educational Psychology, 91(4), 638-643.
Mayer, R. E., & Sims, V. K. (1994). For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. Journal of Educational Psychology, 86(3), 389-401.
Mayer, R. E., Sobko, K., & Mautone, P. D. (2003). Social cues in multimedia learning: Role of speaker’s voice. Journal of Educational Psychology, 95(2), 419-425.
McDougall, S. J. P., de Bruijn, O., & Curry, M. B. (2000). Exploring the effects of icon characteristics on user performance: The role of icon concreteness, complexity, and distinctiveness. Journal of Experimental Psychology: Applied, 6(4), 291-306.
Mikropoulos, T. A. (2001). Brain activity on navigation in virtual environments. Journal of Educational Computing Research, 24(1), 1-12.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91(2), 358-368.
Moreno, R., & Mayer, R. E. (2000a). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92(1), 117-125.
Moreno, R., & Mayer, R. E. (2000b). Engaging students in active learning: The case for personalized multimedia messages. Journal of Educational Psychology, 92(4), 724-733.
Moreno, R., & Mayer, R. E. (2002). Learning science in virtual reality multimedia environments: Role of methods and media. Journal of Educational Psychology, 94(3), 598-610.
Mousavi, S. Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87(2), 319-334.
Newell, K. M., Carlton, M. J., Fisher, A. T., & Rutter, B. G. (1989). Whole-part training strategies for learning the response dynamics of microprocessor driven simulations. Acta Psychologica, 71, 197-216.
Niemela, M., & Saarinen, J. (2000, Winter). Visual search for grouped versus ungrouped icons in a computer interface. Human Factors, 42(4), 630-635.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4.
Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63-71.
Park, O.-C., & Gittelman, S. S. (1995). Dynamic characteristics of mental models and dynamic visual displays. Instructional Science, 23, 303-320.
Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example study to problem solving in cognitive skill acquisition: A cognitive load perspective. Educational Psychologist, 38(1), 13-22.
Renkl, A., Atkinson, R. K., Maier, U. H., & Staley, R. (2002). From example study to problem solving: Smooth transitions help learning. The Journal of Experimental Education, 70(4), 293-315.
Rieber, L. P. (1996). Animation as a distractor to learning. International Journal of Instructional Media, 23(1), 53-57.
Salomon, G. (1983). The differential investment of mental effort in learning from different sources. Educational Psychologist, 18(1), 42-50.
Schacter, J., & Fagnano, C. (1999). Does computer technology improve student learning and achievement? How, when and under what conditions? Journal of Educational Computing Research, 20(4), 329-343.
Schraw, G. (1998). Processing and recall differences among seductive details. Journal of Educational Psychology, 90(1), 3-12.
Shebilske, W. L., Regian, W., Arthur, W., Jr., & Jordan, J. A. (1992). A dyadic protocol for training complex skills. Human Factors, 34(3), 369-374.
Shyu, H.-y., & Brown, S. W. (1995). Learner-control: The effects of learning a procedural task during computer-based videodisc instruction. International Journal of Instructional Media, 22(3), 217-230.
Svendsen, G. B. (1991). The influence of interface style on problem solving. International Journal of Man-Machine Studies, 35, 379-397.
van Merrienboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking a load off a learner’s mind: Instructional design for complex learning. Educational Psychologist, 38(1), 5-13.
Yeung, A. S., Jin, P., & Sweller, J. (1997). Cognitive load and learner expertise: Split-attention and redundancy effects in reading with explanatory notes. Contemporary Educational Psychology, 23, 1-21.

3. Review the theoretical and empirical literature on the impact of scaffolding on learning. Include a discussion of types (e.g., graphical scaffolding) and contexts (e.g., K-12).

Mayer, Mautone, and Prothero (2002) commented that a major instructional issue in learning by doing within simulated environments concerns the proper type of guidance, which they refer to as cognitive apprenticeship. Using a discovery-based geology simulation game, the researchers argued that the results of their study indicate that adding pictorial scaffolding to the learning materials led to improved performance on a transfer test for both high- and low-spatial students in the Profile Game.

Moreno and Mayer (2000) have shown how personalization can improve learning (based on performance outcomes in both retention and transfer), drawing on the theories that “self-referential language promotes the elaboration of the instructional materials” and that “personalized messages are more consistent with our schemas for communicating in normal conversations and therefore require less cognitive effort to process” (p. 725). Their study focused on the use of an active pedagogical agent, a form of scaffolding in which an animate object (either visual and auditory, or auditory only) provides support during learning.

In a series of five random-assignment experiments using University of California, Santa Barbara psychology students as subjects, Moreno and Mayer (2000) examined the impact of personalization of multimedia messages on learning outcomes. The experiments were based on the two assumptions quoted above (p. 725; see Moreno & Mayer, 2000, for evidence supporting these assumptions). In each experiment, a computer program was used to teach how lightning works (students were pretested for prior knowledge). One group was given neutral messages, while the other received personalized messages.

Results in the Moreno and Mayer (2000) experiments supported the use of personalized messages to increase performance. In all five experiments, those receiving personalized messages (whether textual or auditory) scored significantly higher on transfer tests, while results for retention varied. Retention increased for the game group when a pedagogical agent was added to the game. An interesting result was that, even though the addition of the pedagogical agent increased retention, the favorableness rating for using a pedagogical agent, with or without personalization, was not significant. The researchers suggested the lack of significance might have been due to the nature of the questions and that a more sensitive set of survey questions might produce different results.
As a result of the study, Moreno and Mayer (2000) argued that “multimedia science programs can result in broader learning if the communication model is centered around shared environments in which the student is addressed as a participant rather than as an observer” (p. 731). It should be noted that, while Moreno and Mayer referred to the instrument as a game, it appears to fit Gredler’s definition of a simulation, not a game or simulation game (Gredler, 1996).

An overview of player position was considered an important feature in adventure games. Players reported that help functions, hints, and examples were necessary in adventure, miscellaneous, and word games. Mystery, intrigue, and suspense were pleasing to some players. Many liked the idea of games with familiar scenarios or stories (Dempsey, Haynes, Lucassen, & Casey, 2002).

Instructional supports include the following elements listed by Alessi (2000): explaining or demonstrating the phenomenon or procedure; giving hints and prompts before student actions; giving feedback following student actions; providing a coach, advice, or help system; providing dictionaries and glossaries; providing user controls not needed in noninstructional simulation; and giving summary feedback or debriefing (Leemkuil, de Jong, de Hoog, & Christoph, 2003).

The success of a VR system depends highly on the friendliness of the user interface. Upon entering the dynamic 3-D virtual representation of the solar system, the user has to project himself or herself into this “reality” and adopt new vantage points, which is by no means an easy cognitive task, especially at young ages (Yair, Mintz, & Litvak, 2001). The loss of orientation and the “vertigo” feeling that often accompany learning in a virtual environment are minimized by the display of a traditional, two-dimensional dynamic map of the solar system. The map helps to navigate and orient the user and facilitates an easier learning experience (Yair, Mintz, & Litvak, 2001). The Touch the Sky, Touch the Universe program lets students interact directly with various forms of multimedia that simulate resources used by practicing scientists. Journeys through the virtual simulations of the solar system and the Milky Way help students bridge the gap between the concrete world of nature and the abstract realm of concepts and models. As students examine images, manipulate three-dimensional models, and participate in these virtual simulations, they enhance their understanding of scientific concepts and processes. Students are not simply passive recipients of prepackaged multimedia content; they can use a variety of computerized tools to view, navigate, and analyze a realistic three-dimensional representation of space (Yair, Mintz, & Litvak, 2001).

Three simulation packages were selected, DRAX, FLOWERS, and LAB, because each contained a different type of simulation: physical, procedural, and process simulations, respectively. This categorization is related directly to different types of mental processing and is particularly useful in studies of conceptual learning (Yildiz & Atkins, 1996). A physical simulation requires the learner to construct a mental model of how a system functions based on causal relationships between entities that form part of that system. Free discovery or guided discovery methods may be embedded in this type of simulation (Yildiz & Atkins, 1996).
A procedural simulation is one designed to train the user to perform certain tasks in a systematic way, correcting anomalies, mistakes, or disturbances that may arise. Feedback on errors made, and the opportunity to repeat procedures many times, are characteristic features of this type of simulation (Yildiz & Atkins, 1996). A process simulation tries to help the student understand the progression of a dynamic system. Normally it is run several times with different initial values for the parameters (Yildiz & Atkins, 1996).

DRAX, which fits the general characteristics of a physical simulation, was designed to improve students’ understanding of how electricity is made in power stations by enabling them to obtain a surrogate experience of what a coal-fired power station is like, what happens in each of the main buildings, and the process by which electricity is made (Yildiz & Atkins, 1996). FLOWERS, which contains the characteristics of a procedural simulation, was designed to illustrate the probabilistic nature of experimental results and to teach students scientific investigation methods. It included a wide range of statistical tools that could be called up as required by the user and were intended to improve students’ skills in constructing and interpreting graphs. Users were placed in the situation of conducting an experiment in growing flowers in which they had control over four key interrelated variables that affect growth: nitrogen, temperature, potash, and length of daylight (Yildiz & Atkins, 1996). LAB, which fit the characteristics of a process simulation, enabled students to understand the relationships between gravity, speed, height, time, and so on. The LAB was a room of on-screen experiments relating to energy. On-screen tools allowed the users to measure distance, time, and velocity in several different ways (Yildiz & Atkins, 1996).

A study using the three simulations was conducted with 2296 students aged 11 to 18 years, randomly selected from two schools in North East England. A test was designed to cover the specific learning objectives of each simulation (Yildiz & Atkins, 1996). Results indicate that Interactive Video (IV) simulations can interact in complex ways with both gender and prior achievement characteristics. Nevertheless, DRAX, the physical simulation based on the power station, produced the greatest cognitive gain. The reason for this may well lie in the design of this simulation, which applied several important principles derived from learning theories. For example, at every point it enabled students to obtain advance information (scaffolding) about what they could do and what they could expect; it helped students to relate new information to what they already knew from school physics; and it made use of animations, computer graphics, and games to reinforce nascent understanding. It also allowed students to decide their own learning route through the material, and it gave students immediate feedback on how they were doing with the on-screen experiments (Yildiz & Atkins, 1996).

By comparison, although FLOWERS, the procedural simulation, provided some conceptual scaffolding and had real-life relevance, it gave students little freedom of choice about how to solve the problem they were presented with. It also required a sophisticated approach (e.g., holding one variable constant while altering another). Due to curriculum time constraints, the task seemed to be beyond the capability of the middle- and low-achieving students (Yildiz & Atkins, 1996).
The process simulation, LAB, lacked an explicit explanatory framework. There were no links to real-life referents or examples of the application of the principles of physics being demonstrated. The feedback had to be worked out by the students themselves by interpreting the on-screen readouts of distance, speed, and so on, making it more difficult to develop explicit hypothesis-test-interpret-hypothesis-test chains. For the middle-achieving students, the facility to repeat the same experiment many times seemed to have been helpful, perhaps building confidence in their understanding. For high-achieving students, the lack of challenge and variety may have become obstacles to developing understanding and may have been the factors that led to a lower score on the post-test than on the pre-test. For pupils with low prior achievement, the lack of clear learning goals and advice may have prevented learning from occurring (Yildiz & Atkins, 1996).

References for Question 3

Carroll, W. M. (1994). Using worked examples as an instructional support in the algebra classroom. Journal of Educational Psychology, 86(3), 360-367.
Cary, M., & Carlson, R. A. (1999). External support and the development of problem-solving routines. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(4), 1053-1070.
Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.
de Jong, T., de Hoog, R., & de Vries, F. (1993). Coping with complex environments: The effects of providing overviews and a transparent interface on learning with a computer simulation. International Journal of Man-Machine Studies, 39, 621-639.
Gerjets, P., & Scheiter, K. (2003). Goal configurations and processing strategies as moderators between instructional design and cognitive load: Evidence from hypertext-based instruction. Educational Psychologist, 38(1), 33-41.
Jimenez, L., & Mendez, C. (2001). Implicit sequence learning with competing explicit cues. The Quarterly Journal of Experimental Psychology, 54A(2), 345-369.
Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When problem solving is superior to studying worked examples. Journal of Educational Psychology, 93(3), 578-589.
Kee, D. W., & Davies, L. (1988). Mental effort and elaboration: A developmental analysis. Contemporary Educational Psychology, 13, 221-228.
Kee, D. W., & Davies, L. (1990). Mental effort and elaboration: Effects of accessibility and instruction. Journal of Experimental Child Psychology, 49, 264-274.
Kee, D. W., & Davies, L. (1991). Mental effort and elaboration: A developmental analysis of accessibility effects. Journal of Experimental Child Psychology, 52, 1-10.
Khine, M. S. (1996). The interaction of cognitive style with varying levels of feedback in multimedia presentation. International Journal of Instructional Media, 23(3), 229-237.
Mayer, R. E., Mautone, P., & Prothero, W. (2002). Pictorial aids for learning by doing in a multimedia geology simulation game. Journal of Educational Psychology, 94(1), 171-185.
Mautone, P. D., & Mayer, R. E. (2001). Signaling as a cognitive guide in multimedia learning. Journal of Educational Psychology, 93(2), 377-389.
Murphy, N., & Messer, D. (2000). Differential benefits from scaffolding and children working alone. Educational Psychologist, 20(1), 17-31.
Mwangi, W., & Sweller, J. (1998). Learning to solve compare word problems: The effect of example format and generating self-explanations. Cognition and Instruction, 16(2), 173-199.
Neale, D. C., & Carroll, J. M. (1997). The role of metaphors in user interface design. In M. Helander, T. K. Landauer & P. Prabhu (Eds.), Handbook of Human Computer Interaction: Second, Completely Revised Edition (pp. 441-462). Amsterdam: Elsevier.
Quilici, J. L., & Mayer, R. E. (1996). Role of examples in how students learn to categorize statistics word problems. Journal of Educational Psychology, 88(1), 144-161.
Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example study to problem solving in cognitive skill acquisition: A cognitive load perspective. Educational Psychologist, 38(1), 13-22.
Renkl, A., Atkinson, R. K., Maier, U. H., & Staley, R. (2002). From example study to problem solving: Smooth transitions help learning. The Journal of Experimental Education, 70(4), 293-315.
Rosswork, S. G. (1977). Goal setting: The effects on an academic task with varying magnitudes of incentive. Journal of Educational Psychology, 69(6), 710-715.
Seaward, M. R. (1998). Interactive assistants provide ease of use for novices: The development of prototypes and descendants. Computers in Human Behavior, 14(2), 221-237.
Tarmizi, R. A., & Sweller, J. (1988). Guidance during mathematical problem solving. Journal of Educational Psychology, 80(4), 424-436.
Tuovinen, J. E., & Sweller, J. (1999). A comparison of cognitive load associated with discovery learning and worked examples. Journal of Educational Psychology, 91(2), 334-341.
van Merrienboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for complex learning: The 4C/ID-model. Educational Technology Research & Development, 50(2), 39-64.
Wiedenbeck, S. (1989). Learning iteration and recursion from examples. International Journal of Man-Machine Studies, 30, 1-22.
Wu, A. K. W., & Lee, M. C. (1998). Intelligent tutoring systems as design. Computers in Human Behavior, 14(2), 209-220.