Linda Heidenrich
SED 625 CE 3: Comparing the Impacts of Computer-Assisted versus Direct Instruction

This study was conducted in Taiwan with 159 tenth graders enrolled in a mandatory earth science course. Over a two-week time frame, one group learned the content of debris flow hazards through a problem-based computer program while the other group learned through direct instruction. The students were tested before and after the experimental period with an Earth Science Achievement Test. The test consisted of thirty multiple-choice questions, was validated by three high school teachers and three professors, and drew its questions from the first three levels of Bloom's taxonomy. The results indicated that the students who worked on the problem-based computer program scored significantly higher on the questions requiring knowledge and comprehension, but both groups scored equally poorly on the application questions.

This study is an example of good research for several reasons. First, the researcher subjected both groups to computer usage "to control possibly extraneous variables resulting from the computer-novelty effect" (Chang, 2001, p. 150). Also, the researcher drew on previous research by Bransford (1979) on how students learn to create his experimental design.

Yet the study could have been improved in its experimental design. First, the researcher goes into great detail regarding each task the computer-assisted students were assigned while explaining very little about the direct instruction method. If students were asked to complete the same tasks regardless of instructional method, the study is more valid than if the direct instruction group never had the chance to problem solve or present their individual discoveries. In problem solving and designing a presentation, the computer-assisted group was able to develop its content knowledge in more depth, which means the results may reflect not the instructional method but the depth at which the students explored debris flow hazards and their effects. This question became more prominent when analyzing the results, because both groups performed equally on the application questions presented on the assessment. This would suggest both groups had access to problem-solving and application assignments, but that is never explicitly stated.

Another component of the study, more specifically its publication, that could have been improved is the reporting of data. The researcher published only standard deviations for the scores, not the raw scores. This provides some insight into which components of the assessment were successful, but it requires the reader to extrapolate the data for themselves.

This year I have been committed to using computer programs as a more integral part of my biology curriculum. In the age of MySpace and instant messaging, students are much more involved with computers than they are with individuals. The connection made between a student and a computer screen cannot be duplicated between student and teacher. The computer allows for more exploration based on individual students' needs, a feat that cannot be matched by a teacher in a class of forty students. The problem I have when my class goes to the computer lab is that the students rarely complete the assignment required of them; they become so involved in the computer program that they forget to complete the work.
This complete engagement is exciting as a teacher, but I need physical evidence of standards mastery, so I often have to interrupt my students at the peak of their engagement to remind them to complete their work.