Toward A Taxonomy Of Online Reading Comprehension Strategies

Lisa Zawilinski, University of Connecticut
Amy Carter, Clemson University
Ian O’Byrne, University of Connecticut
Greg McVerry, University of Connecticut
Theresa Nierlich, University of Connecticut
Donald J. Leu, University of Connecticut

Abstract

This paper provides an overview of work taking place on the TICA (Teaching Internet Comprehension to Adolescents) project, funded by the Institute of Education Sciences, U.S. Department of Education. It then presents the methods and analytic approaches used to explore a central question: Which skills and strategies appear to be important for successful online reading comprehension? This part of the project uses a series of online reading comprehension protocols with think alouds to identify the skills and strategies that appear to be important for successful online reading comprehension. These think-aloud protocols were conducted with 53 of the most skilled adolescent online readers, selected from a population of 1,025 seventh graders in 12 diverse and economically challenged school districts in South Carolina and Connecticut. In South Carolina, we work in largely rural school districts; in Connecticut, we work in largely urban school districts. These populations were selected since they are most at risk of being left behind in an age of online information should they drop out of school (Finn, 1989, 1993; Friedman, 2005; Wylie & Hunter, 1994) and since this is an age group that is especially engaged by the use of the Internet (Chandler-Olcott & Mahar, 2003; Lankshear & Knobel, 2003; Reinking, 2001).

The 53 seventh-grade students who provided think-aloud data were selected on the basis of their survey scores, which characterized them as the highest scoring students in online reading comprehension. They also met several other criteria: (1) frequency of Internet use, in and out of school; (2) use of the Internet for the most diverse range of purposes; (3) expertise in explaining the strategies they employed during online reading; and (4) comfort in explaining strategies to an adult. Following selection, these 53 students participated in a short interview to verify that they were skilled users of the Internet. We used think-aloud methods recommended by Afflerbach (2002) to elicit strategy use during online reading comprehension. During each of three sessions, participants were asked to read online and to think aloud, using both researcher-selected and student-selected reading assignments. Students' online reading was recorded using Camtasia software (http://www.techsmith.com/camtasia.asp). This tool created a real-time movie of all online actions on the screen as well as an audio recording of verbal think-aloud data. Camtasia recordings and transcripts of verbal protocols were coded and analyzed using constant-comparative (Bogdan & Biklen, 2003; Merriam, 1988) and abductive (Onwuegbuzie & Leech, 2006) methods. NVivo analytic tools were used to search for patterns across online reading actions and verbal reports of online strategy use. Common patterns of skill and strategy use during online reading comprehension were then categorized into larger skill sets (e.g., defining, or understanding, a problem; locating information; analyzing information; synthesizing information; communicating information) and used to inform a taxonomy of online reading comprehension skills demonstrated by proficient online readers.
This paper presents the procedures used for conducting these think-aloud verbal protocols and provides online links to video examples. It reports a preliminary description of the skill sets among adolescents assessed to be the most proficient in reading for information on the Internet. Finally, the authors share the website created for this project, making many of these methodological tools available to other researchers. Please visit: http://www.newliteracies.uconn.edu/iesproject/documents.html

Paper presented at the annual meeting of the National Reading Conference, Austin, TX, November 30, 2007.

Please Do Not Quote Without Permission of The Authors

This paper is available online at: http://docs.google.com/Doc?id=dcbjhrtq_27mv3xjr

Toward A Taxonomy Of Online Reading Comprehension Strategies

This paper provides an overview of work taking place on the TICA (Teaching Internet Comprehension to Adolescents) project, funded by the Institute of Education Sciences, U.S. Department of Education. It also describes the procedures we use to determine which skills and strategies appear to be important for successful online reading comprehension. Finally, it provides a preliminary description of the online reading comprehension skills and strategies that are emerging in our work. This work is important so that we might begin to thoughtfully expand definitions of reading comprehension instruction to include the new skills and strategies required to read and comprehend information on the Internet. It is also important to ensure that students who have access to the Internet the least, those in diverse and economically challenged school districts, do not get left behind in an age of online information and learning. The work is predicated on the fact that the Internet has become an essential context for reading comprehension and learning in the twenty-first century.

The Internet Is This Generation's Defining Technology For Reading

The Internet has become a vital new dimension of reading in schools (International Reading Association, 2002; National Center for Education Statistics, 2003), in daily life (Lebo, 2003), and in the workplace (Friedman, 2005; U. S. Department of Commerce, 2002). In 2005, for example, over one billion people, one sixth of the world's population, were reading on the Internet (Internet World Stats: Usage and Population Statistics, n.d.). Most of this growth took place in the past five years (de Argaez, 2006). At this rate, nearly half of the world's population will be reading online within five years.

This historic and global change in the technology of reading is especially pronounced among adolescents, redefining social practices of literacy (Coiro, Knobel, Lankshear, & Leu, in press). Consider the case of Accra, Ghana, for example, where a recent study (Borzekowski, Fobil, & Asante, 2006) indicates that 66% of 15-18 year olds attending school, and 54% of 15-18 year olds not attending school, report having gone online previously. In Japan, 98% of homes have access to the Internet with broadband 16 times faster than that found in the U.S. (Bleha, 2005).
In the U.K., 74% of children and young people aged nine to nineteen have access to the Internet at home, and most of these (84% in all) are daily or weekly Internet users (Livingstone & Bober, 2005). In the U.S., 87 percent of all students between the ages of 12 and 17 report using the Internet; nearly 11 million do so daily (Pew Internet and American Life Project, 2005). Similar patterns are commonly found in other nations. These data suggest that the Internet is rapidly becoming a defining technology for literacy and learning, especially for adolescents (Alvermann, XXXX; Reinking, 2001).

The Internet Requires Additional Online Reading Comprehension Skills And Strategies

Traditionally, we have tended to assume that online reading comprehension is isomorphic with offline reading comprehension (Coiro, 2003; Leu, Zawilinski, Castek, Bannerjee, Housand, Liu, & O'Neil, in press). Data are appearing, however, that question this assumption; online reading comprehension appears to require a set of additional comprehension skills and strategies. One study, among highly proficient sixth-grade students (Coiro & Dobler, in press), found that online reading comprehension shared a number of similarities with offline reading comprehension but that online reading comprehension was also more complex and included a number of important differences. A second study (Leu, Castek, Hartman, Coiro, Henry, Kulikowich, & Lyver, 2005) found no significant correlation between scores on a state reading comprehension assessment and performance on an assessment of online reading comprehension. The results from this second study also suggest that new skills and strategies may be required during online reading. A third study (Coiro, 2007), using a regression model, found that while offline reading comprehension and prior knowledge contributed a significant amount of variance to the prediction of online reading comprehension, additional, independent variance was contributed by knowledge of students' online reading comprehension ability. All of these studies point to the conclusion that additional skills and strategies are required during online reading comprehension. The recent report of the RAND Reading Study Group (2002) captures the essence of the problem: "… accessing the Internet makes large demands on individuals' literacy skills; in some cases, this new technology requires readers to have novel literacy skills, and little is known about how to analyze or teach those skills" (p. 4).

Students Who Require Our Assistance The Most May Be Receiving It The Least

The mistaken assumption that online reading comprehension is identical to offline reading comprehension is unfortunate, but we see it throughout educational systems. It is most visible, perhaps, in U.S. state assessment policy, where not a single state reading assessment required by No Child Left Behind (NCLB) legislation (No Child Left Behind Act of 2001, 2002) measures students' ability to read and comprehend information online. No state assessment of reading measures students' ability to read search engine results to select the most appropriate link for a particular task, read a web page to select the most likely link where information might appear, critically evaluate information online for reliability or accuracy, or read information contained in email messages, blogs, and wikis (Leu, Ataya, & Coiro, 2002). Thus, many of the important aspects of online reading comprehension are never assessed. As a result, unfortunately, there is also little incentive to teach them.
Indeed, our current policies in reading, with their focus on testing skills and strategies required for offline reading comprehension, may actually be exacerbating the very problem they seek to solve (Henry, 2006). This is especially true for those students who require our support the most -- students in economically challenged school districts who have access to the Internet at home the least. Economically challenged school districts in the U.S. are under the greatest pressure to raise reading test scores. These districts are forced to focus limited resources on raising performance on state assessments that solely evaluate offline reading comprehension; they have little time or interest in providing instruction in online reading comprehension skills (Henry, 2006). In more privileged districts, however, students have greater access at home and attend schools that provide more opportunity to use the Internet, since the pressure from NCLB to raise offline reading comprehension scores is substantially less (Henry, 2006). The cruelest irony of NCLB may be that we end up helping those the least who require our help the most.

What Are The Skills And Strategies Required For Successful Online Reading Comprehension?

It is clear that the Internet is quickly becoming this generation's defining technology for reading. It also seems clear that new comprehension skills appear to be required during online reading. Finally, students who need our assistance the most with online reading comprehension may actually receive it the least because of recent public policy initiatives. Despite these observations, there is relatively little understanding of how reading comprehension instruction should be conceptualized or conducted in relation to online information (Castek, 2007; Coiro, 2003; International Reading Association, 2002; RAND Reading Study Group, 2002; Reinking, 1997). What are the skills and strategies required for successful online reading comprehension? This important question is the initial focus of our work.

To contextualize our study of this issue, we first present an overview of our entire project. Next, we describe the multiple theoretical perspectives used to frame our investigations, arguing that the issues and the contexts associated with online reading comprehension instruction are far too complex for any single perspective to account for all that is taking place. Then, we present the methodological approach we have taken to identify the major skills and strategies associated with successful online reading comprehension, providing examples of the strategies reported by our student informants and an emerging taxonomy of online reading comprehension skills and strategies. Finally, we share several videos of think-aloud verbal protocols collected during online reading comprehension tasks to illustrate this approach.

An Overview Of The TICA Project

What are the skills and strategies required for successful online reading comprehension? How should they be taught? These questions are the focus of this three-year reading comprehension research grant, funded by the Institute of Education Sciences, U.S. Department of Education. Research teams from Clemson University and the University of Connecticut are conducting this work. It takes place in seventh-grade classrooms within diverse and economically challenged districts in South Carolina and Connecticut. In South Carolina, we work in largely rural school districts; in Connecticut, we work in largely urban school districts.
These populations were selected since (1) they are most at risk of being left behind in an age of online information should they drop out of school (Finn, 1989; Friedman, 2005; Wylie & Hunter, 1994), (2) they appear to have fewer opportunities to develop these skills because of the nature of state assessments and pressure from NCLB (Henry, 2006), and (3) this is an age group that seems especially attracted to the use of the Internet (Chandler-Olcott & Mahar, 2003; Lankshear & Knobel, 2003; Reinking, 2001).

The primary goal of Year 1 was to develop a theoretical, data-driven framework for producing high levels of online reading comprehension, engagement, and school learning among seventh-grade students in economically challenged school districts. During Year 1, we collected and analyzed two sources of information to inform subsequent work: (a) a survey of students in our target population aimed at characterizing Internet use at home and at school; and (b) verbal protocol data obtained with think-aloud procedures from high volume Internet readers in our target population as they read informational texts obtained from the Internet. In addition, we began preliminary field testing as we prepared for the design studies scheduled for Year 2.

In Year 2 we are field-testing the viability of various approaches to implementing Internet Reciprocal Teaching (IRT), an adapted model of Reciprocal Teaching (Palincsar, 1986; Palincsar & Brown, 1984). During this year, we seek to understand how best to increase online reading comprehension among adolescents at risk of becoming dropouts. Specifically, we are conducting a design experiment to generate formative data aimed at (a) refining the intervention and its implementation, (b) identifying key variables to control or manipulate in the conventional experimental field trials during Year 3, and (c) ensuring fidelity of the intervention across diverse contexts during Year 3.

The major goal of Year 3 is to conduct an experiment with random assignment of treatment conditions at the teacher level, using hierarchical linear modeling (HLM) procedures. The proposed experimental design will test the effects of our adapted reciprocal teaching approach designed to increase online reading comprehension and knowledge of effective Internet reading comprehension strategies. The experiment in the third year will span most of the academic year.

The Importance of Multiple Theoretical Perspectives

Theoretical orientations toward the changing nature of literacy on the Internet typically reflect a single tradition of inquiry or a single theoretical lens from sociolinguistics, psycholinguistics, information science, cognitive theory, or sociocultural theory (Coiro, Knobel, Lankshear, & Leu, in press). To develop viable approaches to online reading comprehension, we believe it is essential to bring multiple theoretical perspectives to bear, recognizing the multiple realities (Labbo & Reinking, 1999) that exist. Research questions related to online reading comprehension occur in contexts that are far too complex for any single perspective to account for all that is taking place. Thus, we initially framed this project around the idea that the complexities of contending with digital literacies are best understood by adopting rich, complex, and multiple theoretical lenses.
From the beginning, we conceptualized our work within a new literacies of the Internet perspective (Leu, Kinzer, Coiro, & Cammack, 2004), a perspective that integrates theoretical views from information science, cognitive science, sociocultural theory, critical theory, multiliteracies, and sociolinguistics. In addition to a preliminary set of principles informing us about how the new literacies of the Internet are likely to be acquired, this perspective identifies five processes that take place during online reading comprehension:

"The new literacies of the Internet and other ICT include the skills, strategies, and dispositions necessary to successfully use and adapt to the rapidly changing information and communication technologies and contexts that continuously emerge in our world and influence all areas of our personal and professional lives. These new literacies allow us to use the Internet and other ICT to identify important questions, locate information, analyze the usefulness of that information, synthesize information to answer those questions, and then communicate the answers to others." (Leu et al., 2004, p. 1570)

We also draw upon important insights generated from dual coding theory (Paivio, 1990), cognitive flexibility theory (Spiro & Jehng, 1990), multiliteracies (New London Group, 1996), and other theoretical frameworks as they help us to enrich our understanding of the data that are emerging.

Methods For Identifying Online Reading Comprehension Skills and Strategies

A central issue in this research is to develop a preliminary taxonomy of the skills and strategies that appear to be important for online reading comprehension. This taxonomy will be important to our subsequent work, especially the instructional treatment study in Year 3. While we could have selected participants from a population of highly skilled adult online readers or from students in higher achieving, and more economically privileged, school districts, we determined that grounding this aspect of the work in our target population had several advantages. Most importantly, it contextualized the skills that emerged within the experiences of our target students. This made these skills especially relevant to the instructional intervention that we planned for Year 3. In particular, we sought to identify skills at a level that would be helpful to our higher achieving students in online reading comprehension as well as to our lower achieving students. Using adults who were highly proficient, but outside of our target population, may have provided us with skills and strategies that did not match the full range of needs among students in our schools. Thus we sought informants who were the highest performing online readers in the schools we were studying, middle schools in diverse and economically challenged districts in South Carolina and Connecticut. We invited these students to serve as informants, using think-aloud procedures (Afflerbach, 2002).

Using An Online Survey To Select Our Participants

Our initial sample included a total population of 1,025 seventh graders in 14 middle schools (eight in South Carolina and six in Connecticut) whose parents completed permission forms and who completed an online survey of Internet use (Henry, Mills, Rogers, & Witte, 2006). This survey of Internet use was designed to help us identify approximately 50 of the most skilled and experienced online readers in our total population to serve as our informants.
This survey measured: (1) online reading comprehension ability, (2) frequency of Internet use, (3) diversity of Internet use, (4) expertise in explaining online reading comprehension strategies, and (5) comfort in explaining strategies used when reading on the Internet to an adult. Each area of the survey is described in detail below, followed by the procedures used in selecting the student informants for this portion of our work. The entire survey may be viewed online at: http://camss.clemson.edu/READING/servlet/Page1.

Online Reading Comprehension Ability. We sought students who were skilled in online reading comprehension. Online reading comprehension ability was calculated from three different types of elements on the survey, comprising a total of 16 items. Seven were rubric-scored performance items that evaluated the quality of the strategies students used during online reading tasks such as evaluating the reliability of information on the Internet, locating information for a given purpose, or downloading an email attachment. Each was scored from 0-2, with 0 representing "no response" or "incorrect response," 1 representing a "somewhat skilled response," and 2 representing a "highly skilled response." Five were multiple choice items that evaluated skills such as reading a set of search engine results to determine the best choice for an information problem. Each multiple choice item was scored as 0 for "incorrect" and 2 for "correct." Four items asked how frequently students reported evaluating information they read on the Internet at home or at school. Each was scored from 0-2, with 0 representing a response of "never" or "less than once a week," 1 representing a response of "once a week" or "a few times a week," and 2 representing a response of "once a day" or "several times a day." The highest possible total score for online reading comprehension ability was 32. The mean score for online reading comprehension ability among the total population was 12.13 (SD = 5.15). Quartiles were developed from the entire set of scores, which ranged from 0-27. Scores in the top quartile ranged from 17-27.

Inter-rater reliability for the rubric-scored responses was calculated using Fleiss' Kappa (1971), a statistic for measuring reliability when coding qualitative/categorical variables with multiple raters. A random sample of 50 sets of student responses from both research sites was drawn for the seven skill items to be scored, and the kappas for each scoring rubric were calculated among four independent raters. Fleiss' Kappa has a maximum value of 1.00 and, since four raters were used, a kappa of .7 or better was desired for the purposes of this investigation. Reliability for the seven skill item rubrics on these 50 sets of survey responses ranged from .75 to .96, which was deemed acceptable. The four raters then divided the data set and scored the remainder of the entire set of student responses independently, using these same scoring rubrics.
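To make the reliability computation concrete, here is a minimal sketch of how Fleiss' Kappa could be calculated for one rubric item scored 0, 1, or 2 by four independent raters. The data are randomly generated and purely hypothetical, and the function is simply an illustration of the standard formula; it is not the scoring code used in this project.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for an N x k count table, where ratings[i, j] is the number
    of raters who assigned response i to category j (same rater count per response)."""
    n_responses = ratings.shape[0]
    n_raters = ratings[0].sum()
    # Proportion of all assignments that fall in each category.
    p_j = ratings.sum(axis=0) / (n_responses * n_raters)
    # Extent of agreement among raters for each response.
    p_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()            # mean observed agreement
    p_e = np.square(p_j).sum()    # agreement expected by chance
    return float((p_bar - p_e) / (1 - p_e))

# Hypothetical data: 50 student responses to one rubric item, each scored
# 0, 1, or 2 by four independent raters (rows = responses, columns = raters).
rng = np.random.default_rng(0)
scores = rng.integers(0, 3, size=(50, 4))
# Convert raw scores to an N x 3 count table: how many raters chose each category.
table = np.stack([(scores == c).sum(axis=1) for c in range(3)], axis=1)
print(round(fleiss_kappa(table), 3))
```

With random ratings such as these, kappa hovers near zero; values in the .75 to .96 range reported above require substantially more consistent ratings across the four raters.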
Frequency of Internet use. We also sought students who used the Internet frequently. Twenty-seven survey questions asked students to report time spent on a variety of Internet activities in school, and another, parallel, set of twenty-seven items asked students to report time spent on the same tasks outside of school. Students were asked to report on the frequency with which they performed Internet activities with items such as these: "use search engines to locate information for a school assignment;" "read websites to help with short homework assignments;" "check a second source to see if the online information I found is true;" "look to see who wrote the online information I found before I use it;" and "play online games." Students rated their Internet use on a 6-point Likert scale: 0 = Never, 1 = Less than once a week, 2 = Once a week, 3 = A few times a week, 4 = Once a day, 5 = Several times a day. Scores were totaled across in-school and out-of-school use to identify students who reported using the Internet most frequently. The total possible score on these items was 270, representing a student who reported doing each item on the survey several times a day, both in school and out of school. Total frequency of use scores ranged from 0-178. The mean score was 57.56 (SD = 34.93). Scores for frequency of use were divided into quartiles.

Diversity of Internet use. We also sought students who used the Internet for diverse purposes. The same survey questions used to determine the frequency of use on various Internet tasks were also used in a diversity of use analysis. If a student responded to any of the 54 survey items from the frequency of use analysis (27 in school and 27 outside of school) with "never," they received a score of 0 on that item. If they responded with "Less than once a week" or a response indicating more frequent use, they received a score of 1, indicating that they used the Internet for this particular purpose. The scores on these 54 items were then summed to derive a diversity of use score. The total possible Diversity of Internet Use score was 54. Scores ranged from 0-50. The mean was 23.43 (SD = 9.16). Scores were then divided into three levels to indicate low (1-18), moderate (19-35), and high (36-54) general diversity of use. A rating of 0, 1, or 2 was assigned to each participant to indicate their level of general diversity, where 0 = low, 1 = moderate, and 2 = high diversity of use.
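As a rough illustration, the sketch below shows one way the frequency of Internet use and diversity of Internet use scores just described could be computed from a single student's 54 Likert responses. The response vector and function names are hypothetical; this is not the project's actual scoring code.

```python
from typing import List

def frequency_of_use(responses: List[int]) -> int:
    """Sum of 54 Likert ratings (27 in-school and 27 out-of-school items),
    each 0 = Never ... 5 = Several times a day; the maximum possible score is 270."""
    assert len(responses) == 54 and all(0 <= r <= 5 for r in responses)
    return sum(responses)

def diversity_of_use(responses: List[int]) -> int:
    """Score each item 1 if the student reported any use at all
    ("Less than once a week" or more), else 0; the maximum possible score is 54."""
    return sum(1 for r in responses if r >= 1)

def diversity_level(score: int) -> int:
    """Collapse the diversity score into 0 = low, 1 = moderate, or 2 = high,
    using the cut points reported above (1-18, 19-35, 36-54)."""
    if score <= 18:
        return 0
    if score <= 35:
        return 1
    return 2

# A hypothetical student reporting light but fairly varied use.
student = [1, 0, 3, 2, 0, 1] * 9  # 54 responses
print(frequency_of_use(student))                    # 63
print(diversity_of_use(student))                    # 36
print(diversity_level(diversity_of_use(student)))   # 2 (high)
```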
Expertise in explaining online reading comprehension strategies. We also sought students who could provide a clear explanation about the strategies they use when reading on the Internet. One survey question was used to determine student expertise in providing a clear explanation about strategy use. This question evaluated students' ability to explain strategy use while reading and communicating on the Internet. The item asked: "You are looking for reliable websites about the rain forest. Your friend has sent you this list of four website addresses with no other information. If you had to predict which link would lead to the MOST reliable information about rainforests, which link would you choose? Why did you choose this answer?" The written explanation for this item was rubric scored with a possible score of 0 (no response or an incorrect response), 1 (a less skilled response), or 2 (a highly skilled response). The score on this item was used to represent expertise in explaining online reading comprehension strategies. The total possible score was 2. Scores ranged from 0-2. The mean was .71 (SD = .70).

Comfort in working with an adult and thinking aloud. Finally, we sought students who would be comfortable thinking aloud in front of an adult while reading online. The survey included this question: "Rate how comfortable you would be explaining to an adult (or thinking aloud) about where you go and how you read on the Internet." Responses included: Not at all comfortable (0), a little comfortable (1), somewhat comfortable (2), and very comfortable (3). The total possible score was 3. Scores ranged from 0-3. The mean was 2.04 (SD = .91).

Procedures for student selection. To select our think-aloud informants, the primary selection criterion was online reading comprehension ability. We included only students who scored within the top quartile of scores on this variable (17-27 out of 32). Then, within this population, we excluded students who scored in the bottom quartile on the frequency of use score, had a low (0) diversity of Internet use rating, had a score of 0 on the item used to measure expertise in explaining online reading comprehension strategies, or had a score of 0 on the item used to measure comfort in working with an adult and thinking aloud. Parental permission was sought from all 63 students who met these criteria. Fifty-five of the sixty-three permission forms were returned. Two students decided to stop participating after the first think-aloud session. This left a final sample of 53 informants (25 from Connecticut and 28 from South Carolina) for the three think-aloud sessions. A comparison of the mean scores of the selection variables between the total sample and the informants appears in Table 1.

-------------------------
Table 1 Here
-------------------------
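The multi-step screen described under "Procedures for student selection" can be thought of as a simple filter over the survey measures. The sketch below is a hypothetical illustration of that logic; the record fields and example values are invented, not actual student data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SurveyRecord:
    orc_ability: int         # 0-32; top quartile in this sample was 17-27
    frequency_quartile: int  # 1 = bottom quartile ... 4 = top quartile of frequency of use
    diversity_level: int     # 0 = low, 1 = moderate, 2 = high diversity of use
    expertise: int           # 0-2 rubric score on the strategy-explanation item
    comfort: int             # 0-3 comfort thinking aloud with an adult

def eligible(s: SurveyRecord) -> bool:
    """Apply the selection criteria described above."""
    return (
        17 <= s.orc_ability <= 27      # top quartile in online reading comprehension
        and s.frequency_quartile > 1   # exclude bottom quartile of frequency of use
        and s.diversity_level > 0      # exclude low diversity of use
        and s.expertise > 0            # exclude 0 on the explanation item
        and s.comfort > 0              # exclude 0 on comfort thinking aloud
    )

# Hypothetical records: the first meets every criterion, the second fails the screen.
students = [SurveyRecord(21, 3, 2, 2, 3), SurveyRecord(19, 1, 2, 1, 2)]
informants: List[SurveyRecord] = [s for s in students if eligible(s)]
print(len(informants))  # 1
```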
Sequence of Think Aloud Tasks

Our student informants completed three different online reading sessions, each lasting from 40 minutes to an hour, in the spring of the school year. Each session consisted of a short interview and a series of online reading comprehension activities during which the student informants were asked to think aloud while reading on the Internet; the first session also contained a short training session on thinking aloud while reading online. The three sessions provided online reading comprehension tasks that varied along two dimensions. One dimension involved the extent to which students completed teacher-determined versus student-determined online reading comprehension tasks. Session 1 provided students with a teacher-determined online reading task. Session 2 included teacher-determined tasks as well as an activity where students selected tasks from a list. In Session 3, students engaged in an online reading comprehension activity that was student-determined.

The second dimension involved variation in the interruptive nature of the think-aloud task, ranging from uninterrupted thinking aloud to a video-prompted think aloud. It appears that an important trade-off exists in the methodological choices one must make with think-aloud verbal protocols (Afflerbach, personal correspondence). To the extent that the experimenter does not interrupt reading during think-aloud protocols, the data obtained are less likely to be influenced by experimenter probes, but less data are obtained. To the extent that the experimenter probes thinking aloud, more data can be gathered and thinking can be evaluated during particular aspects of reading comprehension. We sought to obtain both unprobed and probed think-aloud data and to make those decisions carefully and systematically. In the first session, students completed an uninterrupted think aloud on the reading task. In the second session, students completed a structurally prompted think aloud, being prompted at fixed structural locations by the experimenters to think aloud while reading online. In the third session, students completed a think aloud followed by a video-stimulated think aloud while viewing the video of their online reading. By varying both the nature of the reading activity (from teacher-determined to student-determined) and the nature of the think-aloud activity (from uninterrupted think aloud to structurally prompted think aloud to video-stimulated think aloud), we sought to collect strategy use data within a wide range of possible online reading and think-aloud conditions.

Session 1: An Uninterrupted Think Aloud. The first session used an uninterrupted think-aloud procedure to provide us with baseline data on how students would perform if they were not interrupted by prompts, but rather spontaneously thought aloud while reading. Following a short training session on thinking aloud, students were given the targeted online reading comprehension task for Session 1 along with a prompt to encourage them to think aloud throughout the session ("Can you tell me what you are thinking?"). Each student received a short, fictitious message from another class, asking them to locate and evaluate the reliability of a "spoof site," "Save the Pacific Northwest Tree Octopus" (http://zapatopi.net/treeoctopus/), provide three reasons for their answer, and summarize the most important information from that site in one or two sentences. They were asked to send their information via IM, email, or a posting at a blog site. Following the activity, students were interviewed to ensure that they were familiar with the term "reliable," an important concept in the task. When asked what this term meant, all reported meanings indicating that they understood the term (e.g., "It means that you can trust it;" "It means it will always be there for you;" or "It's like a friend that you can trust."). The entire session was recorded using Camtasia software, a tool that records a video file containing all screen movement as well as audio data from the interview and think aloud. The complete protocol used in Session 1 is available at: http://www.newliteracies.uconn.edu/iesproject/documents.html

Session 2: A Structurally Prompted Think Aloud. For the second session, each student was randomly assigned to one of two different online reading tasks: (A) choose two questions from a set of eight at a blog site (http://newliteracies.typepad.com/subject_surfi/), use the Internet to research them, and post answers at the blog; or (B) evaluate two websites for reliability, explain the evaluation in a word processing document sent by email, and then, if time permitted, revise an entry on Wikipedia and explain how they figured out what to do in a document sent as an email attachment. The entire session was recorded using Camtasia software. The protocols used in Sessions 2A and 2B are available at: http://www.newliteracies.uconn.edu/iesproject/documents.html

Both tasks required students to think aloud ("Can you tell me what you are thinking?") at three types of structural locations during the online reading comprehension task: (1) when students were reading any web page, (2) when students were about to click on a link and make an interactive decision about what to read next, and (3) when students were entering key words into a search engine.
This resulted in students being prompted to explain their thinking at structural points such as the following: reading and choosing the questions to respond to; reading any web page; reading search engine results; reading information to establish a site's reliability; reading blog entries; reading to figure out how a blog works and where to post a response; reading while composing a blog response; selecting a search engine; reading just before clicking on a search engine result; reading when about to click on ANY link; reading when entering search engine key words; and reading when selecting the link to the compose section of the blog.

Session 3: A Video Stimulated Think Aloud Protocol. The third session provided students with an opportunity to read online about a topic of personal interest in a student-determined online reading comprehension activity. At the end of Session 2, each student was asked to come to the next session with an important question that they were interested in reading and learning more about on the Internet. They were also asked not to use the Internet to read about this question until the upcoming session. Twenty-five students were randomly assigned to an uninterrupted think-aloud condition, following procedures used in Session 1; the other twenty-five students were randomly assigned to a structurally prompted think-aloud condition, following procedures used in Session 2. After each informant determined that the online reading comprehension activity was completed, they also completed a video-stimulated think aloud, using the video of the session they had just completed. The Camtasia video and audio recording of each student's reading session was played back, and students were invited, using structurally prompted think-aloud procedures, to explain what they were thinking at each important structural location. The complete protocol used in Session 3 is available at: http://www.newliteracies.uconn.edu/iesproject/documents.html

Camtasia Video Recording of Online Screen Reading and Think Aloud Data

Camtasia software was used to record all sessions. This software records all screen activity on the computer and all audio spoken by the student and researcher. Thus, video files for all sessions were recorded and saved on each computer used in the online reading and think-aloud sessions. Each video and audio file was transcribed. A time-stamped description of video actions was added to each transcript so that verbal reports could be analyzed alongside students' navigation decisions within one document. An example of an especially interesting recording of an entire think-aloud session may be viewed at: http://www.newliteracies.uconn.edu/iesproject/videos/

Missing Data

A total of 159 think-aloud sessions were scheduled (53 students x 3 sessions each). Seventeen sessions (approximately 10%) were not used in this analysis because of technical or other problems; typically, the audio recording quality was insufficient to be useable. These recording failures were distributed relatively evenly across the three sessions. A final total of 142 sessions were used in our analysis. Forty students had complete audio and video recordings for all three sessions.

Analytic Procedures

Following the completion of the think-aloud sessions, 142 transcripts were developed from the Camtasia video recordings.
These included the transcription of the full audio recording as well as descriptions of all on-screen movements made by each informant as they read online. Then, all think-aloud commentary was parsed into idea units (Anderson, Reynolds, Schallert, & Goetz, 1977) for analysis. An example appears at: http://www.newliteracies.uconn.edu/iesproject/videos/

The analysis of transcriptions was completed following both constant-comparative (Bogdan & Biklen, 2003; Merriam, 1988) and abductive (Onwuegbuzie & Leech, 2006) methods. Abductive coding methods (Onwuegbuzie & Leech, 2006) employ both inductive and deductive coding procedures. Initial codes were deduced from the theory base of online reading comprehension developed by Leu, Kinzer, Coiro, and Cammack (2004), which included developing questions, locating information, analyzing the usefulness of information, synthesizing information, and communicating information; these categories were inserted initially as primary nodes in the developing coding system. The "branches" that emerged from these primary nodes were derived through inductive procedures and constant-comparative analysis, involving several of the researchers who had both administered verbal protocols and transcribed the data. The data were parsed into strategy units and inserted under the primary categories. In the process, ancillary nodes were also developed. The researchers then discussed the various elements of each strategy -- those that clearly extended the existing codes as well as those that did not. New categories and extensions of categories grew from these discussions until the need to extend was exhausted from the initial set of transcripts. The codes were then inserted into the NVivo 7 project file with distinctive descriptions agreed upon by the researchers. For example, under the general category of locating information, strategies for reading search engine results were categorized as follows:

1. Click and Look: Action based on proceeding systematically through search engine results (i.e., beginning with the first link and progressing listwise, one link after the other).
2. Description Reading: Action based on specific reading of search result descriptions (i.e., identifying bolded words from keyword input, related words, etc.).
3. Touring Results Page: Action based on scrolling through the results page prior to close reading or a change of keywords (i.e., a virtual text walk).
4. URL Reading: Action based on specific reading of the URL (i.e., identifying certain elements of the URL such as .com, .edu, .gov, etc.).

It should also be noted that this analytic process was informed by work taking place concurrently on a number of other independent research studies, including ones by Coiro (2006), Henry (2006), Castek (2007), XXXX, XXXX, and XXXX, among others. Insights from these studies often informed work on this project. As the coding of the verbal protocols continues, opportunities to revisit and revise the developing taxonomy continue to arise, and descriptions of categories may need to be reconsidered. This process is carried out through discussions among researchers until agreement is reached and categories and descriptions are revised. This approach thus serves to ground the taxonomy in existing and developing theories of Internet reading comprehension while allowing new information to emerge from the data.
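To make the shape of the developing coding system concrete, the following sketch shows one hypothetical way the deductively derived primary nodes and the inductively derived branches could be represented and printed as an outline. The labels under locating information mirror the four strategies listed above; the nested structure itself is only an illustration, not the NVivo project file.

```python
# A minimal sketch, assuming a nested-dict representation. The primary nodes come
# from Leu, Kinzer, Coiro, and Cammack (2004); the branches under "locating
# information" are the four search-results strategies listed above.
coding_scheme = {
    "developing questions": {},
    "locating information": {
        "reading search engine results": {
            "Click and Look": "proceeds through results systematically, one link after the other",
            "Description Reading": "reads the result summaries (bolded keywords, related words)",
            "Touring Results Page": "scrolls through the results page before close reading or new keywords",
            "URL Reading": "reads elements of the URL (.com, .edu, .gov, ...)",
        },
    },
    "analyzing the usefulness of information": {},
    "synthesizing information": {},
    "communicating information": {},
}

def print_outline(node: dict, depth: int = 0) -> None:
    """Print the coding scheme as an indented outline."""
    for label, children in node.items():
        print("  " * depth + "- " + label)
        if isinstance(children, dict):
            print_outline(children, depth + 1)
        else:  # a leaf holds a short description of the strategy
            print("  " * (depth + 1) + children)

print_outline(coding_scheme)
```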
Examples of Online Reading Comprehension Strategies

In this section, we present examples of two categories of strategies that appear to be especially important for successful online reading comprehension: locating information and analyzing the usefulness of information. While our analysis is ongoing, it is becoming clear that online reading comprehension skills in these two areas may be "circuit breaker" skills; having them leads to successful online reading comprehension, while lacking them often leads to failure. There may be a useful analogy to decoding skill during offline reading comprehension, where it is very difficult to successfully comprehend offline text without adequate decoding skill. Online, it appears equally difficult to successfully comprehend information if you cannot locate it or if you fail to think analytically about the information that you encounter. Sometimes the two interact, as when a reader analyzes search engine results to determine the most useful link for their informational purpose.

An Example Of Successful Search Strategies. One example of a student who is very skilled at locating information online is a student we shall call Heather. In the interview portion of Session 3, Heather indicates that she knows how skilled she is at locating information with a search engine. She describes knowing two online reading comprehension skills that others might not -- selecting appropriate key words and carefully reading the "small part," the summary under each item that is listed in a set of search engine results:

R: What do you think you know about reading on the Internet that some kids might not know?
S: (long pause) … being able to pick out key words. Some students aren't very good at picking out topics or whatever. Or being about to read the small part underneath it and determining if that's what you're looking for or not. To be able to read the summary.

Heather has recently been diagnosed with a disease called CMV. Since this third session is a student-directed reading task, Heather has decided to read in order to learn exactly what this disease is and how you get it. When it is time to begin reading online, Heather indicates that she knows about search engines and goes immediately to Google:

S: Now, I like to use google because I'm just familiar with it.
[14:00 Opens new IE page. Goes to Google by typing URL in address bar.]

Heather also shows us that she knows how to use key words to locate information:

[14:23 Types in "causes of CMV" in Google search bar.]
S: I'm going to look up the causes of CMS because that's one of the questions.
[14:35 Opens first result in list of search engine results.]
S: I'm going to read and see what it says.

Heather also shows us how she carefully analyzes the information that she encounters at this first location, monitoring what she reads and comparing it to her needs:

[15:03 Begins scrolling down to read the information at the first page she opens.]
S: I'm just kind of scanning it right now. I'm trying to see if it tells what causes it. It tells what it's related to. I don't really see much about what it is. Okay, I'm not seeing the question I was looking for.
[15:57 Hits the back button to return to search results.]
S: I'm going to go back and read some more of these and see what it says.

She also shows us that it is useful to actually read the summaries below each search engine result.
Heather does not use the "click and look" strategy often used by less successful online readers in our sample, working their way down the list of results, clicking and looking quickly at each one without ever reading the summary:

[16:12 Is reading search results this time, not just clicking to see if the page has what she wants.]
R: Okay. As you're reading here, what are you thinking? What are you looking for?
S: I'm looking for some of the main causes for it, but I'm not really finding much about what I'm looking for.

And, finally, Heather knows how to reduce the set of key words to find a broader set of results and locate information on how one contracts this disease:

S: So, I think what I'm going to do is I'm going to go up here and change it to just CMV. Because I'm not finding much…
[16:33 Modifies key word to "CMV"]
R: And what made you decide to do that?
S: Because when you type in the causes of CMV, it tells you what it causes in people instead of the cause of getting it.

Heather shows us four important online reading comprehension skills related to locating information online. She knows: that one can use a search engine to locate information; how to make key word searches related to her question; how to analyze the results summaries to determine if any meet her information needs; and how to expand a key word search when the initial results produce too narrow a set of informational resources. Heather eventually locates the information she is looking for because she has a sophisticated set of online reading comprehension skills to help her locate information.

An Example Of Less Successful Search Strategies: The ".com Approach." An example of a student who is not as skilled at locating information online is a student we shall call Barbara. Barbara begins the first online reading session seeking to locate the site called Save the Pacific Northwest Tree Octopus. Like many of our less successful online readers, Barbara does not appear to know about search engines. Instead, she uses a ".com strategy" to locate information, typing the name of the topic she is looking for into the address window and adding a .com at the end:

[13:38 … highlights address in address bar, types in www.savethepacificnorthwesttreeoctopus.com in the address bar, presses enter and waits]
[14:16 the url could not be retrieved]

Because this does not produce the site, Barbara tries clicking on the full URL option provided by the browser:

[14:16 clicks the link http://www.savethepacificnorthwesttreeoctopus.com and waits]
[14:55 clicks link again, waits]
[15:14 clicks back button]

Then she continues using her .com strategy, revising the name by dropping the word "pacific" in the address window:

[15:22 highlights address bar, types in www.savethenorthwesttreeoctopus.com, presses enter and waits]
[15:55 the url could not be retrieved, clicks the link http://www.savethenorthwestreeoctopus.com and waits]

Next, she tries another option, still using a .com strategy; she drops the word "tree" in the address window:

[16:16 highlights the address bar, types in www.savethenorthwestoctopus.com, presses enter and waits]
[16:54 the url could not be retrieved]

S: I wonder why it's not coming up.
[long pause] [indecipherable] [long pause]

Finally, she tries yet another variation of the .com strategy, adding back the term "pacific" but dropping the term "tree":

[17:10 highlights the address bar, types in savethepacificnorthwestoctopus and presses enter and waits]
[18:01 MSN search page appears with nothing found resembling savethepacificnorthwestoctopus]

You can see that Barbara does not appear to know about search engines, or she chooses not to use one. Barbara also does not know how to make key word searches, using, instead, a ".com strategy" to try to locate information. (One suspects she might look for information on the Revolutionary War by entering www.revolutionarywar.com or information on George Washington by entering www.georgewashington.com.) Being limited with location skills in this manner makes it difficult to successfully complete any online reading comprehension task. You can also see that a contrastive approach like this, focusing on both successful and unsuccessful online reading comprehension strategies, provides a richer understanding of the full dimensions of the skill sets that are important for online reading comprehension.

Analyzing The Usefulness Of Information – Effective And Ineffective Strategies

Our data indicate that the analysis of information takes place at multiple points in time during online reading comprehension. It happens as readers analyze the question that they have, analyze the results listed from a search engine to decide which will be most useful, critically evaluate the reliability of information at a site they visit, analyze their synthesis of information to see if it meets the needs of the task they are trying to complete, and analyze the message they are composing to make certain it makes sense. One form of analysis may be especially important to consider during online reading – the analysis of how reliable information might be at a web site. This skill was evaluated during the first session with our informants, when we asked them to gather information and evaluate the reliability of the spoof site, Save The Pacific Northwest Tree Octopus (http://zapatopi.net/treeoctopus/). Most of our higher performing online readers reported that they thought this site was reliable, with slightly more than half of our students (27) reporting it to be "very reliable." Only 6 out of 53 students reported that the site was unreliable, and each of these had just participated in a lesson that used this site, teaching students to be suspicious of information online. Few of our informants showed any indication of questioning the information at this site. And, when told that the site was not reliable, some of the students insisted that we were wrong. Then, when asked to do so, very few students could use the Internet to find information that would demonstrate the site was either reliable or unreliable. Most seemed not to question the reliability of information online during reading, even though many students reported, during an interview, that you can't really trust information that you find on the Internet.

An Example Of A More Successful Reader With The Online Analysis Of Information. Pat was one of the few students who was able to determine that the site, Save the Pacific Northwest Tree Octopus, did not provide reliable information. She read the information at the site quickly:
She read the information at the site 28 quickly: [18:08 results page opens, clicks the first result to Save The Pacific Northwest Tree Octopus] [18:21 page opens, scrolls up and down page] Pat then began to post his response at the blog: [20:35 clicks on blog from task bar, scrolls up and down page] [20:52 clicks on a previous post] [21:12 clicks on post a comment link] [21:22 puts cursor in comment box, begins to type] Her post at this blog read: “The most important info that the world needs toknow is that your website is bogus. There is know such thing as a tree octopus,they can only live in water. What kind of a joke are you guys trying to pull. No one is going to belive your bogus requests, so stop trying. the info on the website is unreliable in all ways.” The most important thing to notice in this video and think aloud is that Pat did not draw upon any strategy to demonstrate that the site was not reliable, she only relied upon a previous classroom experience. This happened with all of the other five students who indicated that this site was not reliable. None searched for information about what other people were saying on the Internet about Save the Northwest Pacific Tree Octopus or visited a site such as snopes.com to see if it was listed as a hoax site. An Example Of Less Successful Reader With The Online Analysis Of Information. An example of a student who failed to analyze the reliability of information presented at Save The Endangered Pacific Northwest Tree Octopus was a student we shall call Ali. Rather 29 than initially thinking about the source and evaluating the reliability of information at this site, Ali’s first thought is about how much information is available and that she is going to copy and paste some of that information to complete the task. R: So tell us about what you’re thinking. [26:00] Scrolls down and up again S: I’m thinking there’s a lot of information and, um, that I’m gonna copy some of it and paste it onto document Microsoft. [26:23] Highlights and copies text I think I’m gonna copy “The Pacific Northwest tree octopus (Octopus paxarbolis) can be found in the temperate rainforests of the Olympic Peninsula on the west coast of North America.” Copy and I’m gonna paste [26:34] Maximizes document from “Start” bar [26:35] Pastes onto document The information that Ali eventually copies and pastes into the blog entry, is extensive and challenging for most seventh graders to understand. Only the final line, where she evaluates the site and indicates that she finds it reliable, is in her own words: From: Anonymous [anonymous-comment@blogger.com] To:* XXXXXX *Subject:* [Mrs. Gonzales' Class] paxarbolis) can be found in the temperate rainforests of the Olympic Peninsula on the west coast of North America. These solitary cephalopods reach an average size (measured from arm-tip to mantle-tip,) of 30The Cephalopods ("head-foot") are the mollusk class Cephalopoda characterized by bilateral body symmetry, a 30 prominent head, and a modification of the mollusk foot, a muscular hydrostat, into the form of arms or tentacles. Teuthology, a branch of malacology, is the study of cephalopods.-33 cm. Reaching out with one of her eight arms, each covered in sensitive suckers, a tree octopus might grab a branch to pull herself along in a form of locomotion called tentaculation; … The history of the tree octopus trade is a sad one. 
Their voracious appetite for bird plumes having exhausted all the worthy species of that family, the fashionistas moved on to cephalopodic accoutrements during the early 20th Century. Tree octopuses became prized by the fashion industry as ornamental decorations for hats, leading greedy trappers to wipe out whole populations to feed the vanity of the fashionable rich. … its effects still reverberate today as these millinery deprivations brought tree octopus numbers below the critical point where even minor environmental change could cause disaster

I think that this website save the pacific Northwest Tree octapus is very relible and that it gives a lot of information and I would use it again if i am looking fro an endagered Species

Then, when told that the information at this site was unreliable, Ali told the researcher that the researcher was wrong. She remained convinced it was reliable, even after being told that it was not:

R: You, um, what if I told you that this site was not at all reliable and that the information was not true.
S: I would say that you were wrong and that maybe you used a different a website and it's just called the same thing because the stuff I found out was everything I needed to find out and some other stuff that I didn't need to know so I think it's very reliable and I disagree with you.

Summary

As is evident from these examples, our seventh-grade informants demonstrated a variety of skills and strategies, ranging from poor to very effective. As we use these examples to refine our analytic coding scheme, we begin to think about how to apply what we have learned to help less skilled readers and to translate our analytic codes into more interpretable skill sets that can be used for assessment and instruction.

An Evolving Preliminary Taxonomy

The first step in the process we are following is the development of a preliminary coding scheme. Our analysis of think-aloud data from our informants has led us to generate a coding scheme that captures the general nature of the skills and strategies that our informants use during online reading comprehension. This analytic scheme is presented in Figure 1. It has been helpful in beginning to understand the large set of data that we have in our transcriptions and videos of online reading. It continues to undergo refinement as we evaluate its utility to capture all aspects of our think-aloud data.

-----------------
Figure 1 Here
-----------------

This work has also been helping us understand that our coding scheme, by itself, may not be maximally useful for informing practice. As we begin to move from our verbal protocol analyses to thinking about how these patterns might inform our Internet Reciprocal Teaching lessons in the classroom, we have found it helpful to summarize our observations in a preliminary "teacher-friendly" list of items. As a result, we have also been developing a somewhat parallel taxonomy that is more usable for guiding classroom instruction and assessment. We have tried to turn what we have been learning, in terms of the patterns of more successful and less successful strategy use, into action statements that might come from a highly skilled online reader (e.g., "I know how to…"). We believe that statements like these, derived from the patterns in our data, are more likely to be helpful to classroom teachers and to inform the design of assessment instruments. Our developing classroom instructional taxonomy appears in Figure 2.
-----------------Figure 2 Here -----------------
Discussion
(At the end, mention the importance of this work in relation to the NCLB influence we see in schools.)
References
Afflerbach, P. (2002). The use of think-aloud protocols and verbal reports as research methodology. In M. Kamil (Ed.), Methods of literacy research (pp. 87-103). Hillsdale, NJ: Erlbaum Associates.
Anderson, R. C., Reynolds, R. E., Schallert, D. L., & Goetz, E. T. (1977). Frameworks for comprehending discourse. American Educational Research Journal, 14, 367-381.
Bleha, T. (2005, May/June). Down to the wire. Foreign Affairs. Retrieved December 15, 2005, from http://www.foreignaffairs.org/20050501faessay84311/thomas-bleha/down-to-the-wire.html
Bogdan, R. C., & Biklen, S. K. (2003). Qualitative research for education: An introduction to theories and methods (5th ed.). Boston: Allyn & Bacon.
Borzekowski, D. L., Fobil, J. N., & Asante, K. O. (2006). Online access by adolescents in Accra: Ghanaian teens' use of the Internet for health information. Developmental Psychology, 42, 450-458.
Castek, J. (2007). An examination of classroom instruction that integrates the new literacies of online reading comprehension: Exploring the contexts that facilitate acquisition and the learning outcomes that result. Doctoral dissertation proposal, University of Connecticut.
Chandler-Olcott, K., & Mahar, D. (2003). "Tech-savviness" meets multiliteracies: Exploring adolescent girls' technology-mediated literacy practices. Reading Research Quarterly, 38, 356-385.
Coiro, J. L. (2007). Exploring changes to reading comprehension on the Internet: Paradoxes and possibilities for diverse adolescent readers. Doctoral dissertation, University of Connecticut.
Coiro, J., Knobel, M., Lankshear, C., & Leu, D. J. (Eds.). (in press). Handbook of research on new literacies. Mahwah, NJ: Lawrence Erlbaum.
de Argaez, E. (2006, January). Internet world stats news, 14. Retrieved December 15, 2006, from http://www.internetworldstats.com/pr/edi014.htm#3
Finn, J. D. (1989). Withdrawing from school. Review of Educational Research, 59, 117-142.
Finn, J. D. (1993). School engagement and students at risk. Washington, DC: U.S. Department of Education, National Center for Education Statistics. Available at: http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=93470
Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.
Friedman, T. L. (2005). The world is flat: A brief history of the twenty-first century. New York, NY: Farrar, Straus and Giroux.
Henry, L. A. (2006). Exploring new literacies pedagogy and online reading comprehension among adolescents and their teachers: Issues of social equity or social exclusion? Doctoral dissertation proposal, University of Connecticut. Available at: http://newliteracies.uconn.edu/lahenry/proposal_HENRY.pdf
Henry, L. A., Mills, C., Rogers, A., & Witte, J. (2006). An online survey of Internet use among adolescents at risk of dropping out of school. In D. Reinking & D. J. Leu (Chairs), Developing Internet reading comprehension strategies among adolescents at risk to become dropouts. Poster presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
International Reading Association (IRA). (2002). Integrating literacy and technology in the curriculum: A position statement. Newark, DE: International Reading Association.
Internet World Stats (2006). Internet Usage Statistics – The Big Picture: World Internet Users and Population Stats. Retrieved from http://www.internetworldstats.com/stats.htm
Labbo, L. D., & Reinking, D. (1999). Multiple realities of technology in literacy research and instruction. Reading Research Quarterly, 34, 478-492.
Lankshear, C., & Knobel, M. (2003). New literacies: Changing knowledge and classroom learning. Milton Keynes: Open University Press.
Lebo, H. (2003). The UCLA Internet report: Surveying the digital future, year three. Los Angeles: UCLA Center for Communication Policy. Retrieved March 26, 2007, from http://www.digitalcenter.org/pdf/InternetReportYearThree.pdf
Leu, D. J., Ataya, R., & Coiro, J. (2002, December). Assessing assessment strategies among the 50 states: Evaluating the literacies of our past or our future? Paper presented at the National Reading Conference, Miami, FL.
Leu, D. J., Castek, J., Hartman, D., Coiro, J., Henry, L. A., Kulikowich, J., & Lyver, S. (2005). Evaluating the development of scientific knowledge and new forms of reading comprehension during online learning. Final report submitted to the North Central Regional Educational Laboratory/Learning Point Associates.
Leu, D. J., Jr., Kinzer, C. K., Coiro, J., & Cammack, D. (2004). Towards a theory of new literacies emerging from the Internet and other ICT. In R. B. Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1570-1613).
Leu, D. J., & Reinking, D. (2005). Developing Internet comprehension strategies among adolescent students at risk to become dropouts. U.S. Department of Education, Institute of Education Sciences research grant. Retrieved June 20, 2006, from http://www.newliteracies.uconn.edu/ies.html
Leu, D. J., Zawilinski, L., Castek, J., Banerjee, M., Housand, B., Liu, Y., & O'Neil, M. (in press). What is new about the new literacies of online reading comprehension? In A. Berger, L. Rush, & J. Eakle (Eds.), Secondary school reading and writing: What research reveals for classroom practices. Chicago, IL: National Council of Teachers of English/National Conference of Research on Language and Literacy.
Livingstone, S., & Bober, M. (2005). UK children go online: Final report of key project findings. London: LSE Report, April 2005. Retrieved March 28, 2007, from www.children-go-online.net
Merriam, S. B. (1988). Case study research in education: A qualitative approach. San Francisco: Jossey-Bass.
Morgan, D. L. (2007). Paradigms lost and pragmatism regained: Methodological implications of combining qualitative and quantitative methods. Journal of Mixed Methods Research, 1(1), 48-76.
National Center for Education Statistics (2003). Dropout rates in the United States 1998. Washington, DC: U.S. Department of Education.
New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002). Retrieved December 10, 2003, from http://www.ed.gov/policy/elsec/leg/esea02/index.html
Onwuegbuzie, A., & Leech, N. L. (2006, October). Qualitative data analysis: A step-by-step approach. Presentation provided by Education Technology Services of Clemson University, Clemson, SC.
Paivio, A. (1990). Mental representations: A dual-coding approach. New York: Oxford University Press.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117-175.
Palincsar, A. S. (1986). Reciprocal teaching. In Teaching reading as thinking. Oak Brook, IL: North Central Regional Educational Laboratory.
Pew Internet & American Life Project (2005). Teens and technology. Retrieved April 15, 2006, from http://www.pewinternet.org/topics.asp?c=4
RAND Reading Study Group. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND.
Reinking, D. (1997). ME and my hypertext:) A multiple digression analysis of technology and literacy (sic). The Reading Teacher, 50, 626–643.
Reinking, D. (2001). Multimedia and engaged reading in a digital world. In L. Verhoeven & C. Snow (Eds.), Literacy and motivation. Mahwah, NJ: Lawrence Erlbaum.
Spiro, R., & Jehng, J. C. (1990). Cognitive flexibility and hypertext: Theory and technology for the nonlinear and multidimensional traversal of complex subject matter. In D. Nix & R. Spiro (Eds.), Cognition, education and multimedia: Exploring ideas in high technology. Hillsdale, NJ: Erlbaum.
U.S. Department of Commerce: National Telecommunications and Information Administration (2002). A nation online: How Americans are expanding their use of the Internet. Washington, DC: U.S. Department of Commerce.
Wylie, V. L., & Hunter, W. A. (1994). The dropout problem: Can schools meet the challenge? NASSP Bulletin, 78, 74-80.
Table 1. A comparison of mean scores on the selection criteria used to identify student informants for the think aloud sessions.
                                                     Total Sample (n=1,025)        Informants (n=53)
Selection Criterion                                  Range    Mean     SD           Range    Mean     SD
Online Reading Comprehension Ability (Max = 32)      0-27     12.14    5.15         17-27    20.55    2.40
Frequency of Internet Use (Max = 270)                0-178    57.56    34.93        70-176   103.72   25.97
Diversity of Internet Use                            0-50     23.43    9.16         26-50    34.59    6.77
Expertise in Explaining Strategies                   0-2      0.71     0.70         1-2      1.36     0.48
Comfort in Working With Adult and Thinking Aloud     0-3      2.04     0.91         1-3      2.42     0.64
Figure Captions
Figure 1. A preliminary coding scheme of online reading comprehension strategies.
Figure 2. What do good online readers know? A preliminary taxonomy of skills and strategies for instruction and evaluation.
Figure 1.
I. Question or Problem: Identifying a question or defining a problem.
   A. Understanding the Audience: Student makes specific reference to audience or personal needs in relation to identifying or developing questions.
   B. Developing a Question, Problem, or Topic: Includes narrowing, expanding, and/or refining questions.
   C. Shift in Question, Problem, or Topic: Student changes question or problem based on the availability of information or presentation of new information (abandonment of original question).
   D. Understanding the Question: Strategies for checking or understanding the question (rereading the task/self-monitoring).
II. Locate: Using the Internet to locate an information resource (using a search engine and/or other methods).
   A. Keywords: Keyword entry strategies: enters a keyword or words to search in a search engine.
      1. Spelling: Strategies for obtaining the correct spelling of keywords.
   B. Reading Results: Strategies for reading and selecting an information source on a page of search engine results.
      1. Click and Look: Action based on proceeding systematically through search engine results (i.e., begins with the first link and progresses listwise, one link after the other).
      2. Description Reading: Action based on specific reading of search results (i.e., identifies bolded words from keyword input, related words, etc.).
      3. Touring Page: Action based on scrolling through the results page prior to close reading or a change of keywords (i.e., a virtual text walk).
      4. URL Reading: Action based on specific reading of the URL (i.e., identifying certain elements of the URL such as .com, .edu, .gov, etc.).
   C. Search: Methods and strategies for searching for information on the Internet.
      1. Engine: Using a search engine to locate an information resource (includes strategies or reasons for choosing a search engine).
      2. Dotcom Strategy: Typing the name or topic in the address bar and adding .com to the end.
      3. Other: Using other methods (not a search engine) to locate an information resource.
   D. Webpage Reading: Reading strategies used at a webpage to locate information or decide where to go next (not reading search engine results).
      1. Design Elements: Reading of visual features and attributes of a webpage to target information (i.e., menu options, links, headings, images, etc.).
      2. Summarize: Verbal summarization of information during webpage reading.
      3. Text Walk: Actions based on a quick review of a webpage (similar to touring of a results page) prior to close reading.
III. Evaluate: General statements or actions about critically evaluating information.
   A. Accuracy: Evaluating information based on the degree to which it is correct; the extent to which information contains factual and updated details that can be verified by consulting alternative and/or primary sources.
      1. Accuracy Confirmation: Student confirms or rejects information from a secondary source.
      2. Accuracy Shift in Reading: Student changes information source based on the accuracy of information.
   B. Bias and Stance: Evaluating information in relation to the stance an author takes (the lenses, viewpoint, and/or agenda embedded within the information).
      1. Bias-Stance Confirmation: Student confirms or rejects an information source based on the bias or stance of a secondary source.
      2. Bias-Stance Shift in Reading: Student changes information source based on the bias or stance of the author or sponsor of the webpage.
   C. Relevancy: Evaluating information in relation to its utility/relevancy to the question or problem: the information's level of importance to a particular reading purpose or stated information need.
      1. Relevancy Confirmation: Student confirms or rejects an information source based on its level of relevancy.
      2. Relevancy Shift in Reading: Student changes information source based on its level of relevancy.
   D. Reliability: Verification of information for reliability: the information's level of trustworthiness based on information about the author and the publishing body.
      1. Reliability Confirmation: Student confirms or rejects an information source based on the reliability of the information.
      2. Reliability Shift in Reading: Student changes information source based on its level of reliability.
IV. Synthesize: Integrating information from multiple resources.
   A. Combination of Text and Multimedia: Synthesizing information across both text-based and visual sources of information (a combination of the other two elements).
   B. Multimedia-based: Synthesizing information using multiple visual/audio sources (i.e., visual images, charts, diagrams, videos, audio, etc.).
   C. Text-based: Synthesizing information across multiple text sources.
V. Communicate: Use of one or more of the designated ICTs to share a response.
   A. Audience and Purpose: Skills and strategies used that relate to the purpose of communicating information.
      1. Audience (Content): Monitoring communication of information for audience or voice (i.e., formal versus informal writing styles).
      2. Tool Selection (Form): Identification of why and how students choose a particular tool for the communication of information.
   B. Formatting: Selection of particular fonts and colors, adding clip art, and inserting images into documents communicated via ICTs.
   C. Note Taking: Identification of skills and/or strategies used for note-taking purposes.
      1. Hard: Paper/pencil-based note taking.
      2. Soft: Electronic/digital note taking.
   D. Proofreading: Strategies used for revising and editing a written response.
   E. Technical Aspects: Identification of technical aspects related to using ICTs (i.e., interfaces, etc.).
      1. Blog: Skills related to posting a comment to a blog.
      2. Email: Skills related to sending an email message, including attachments.
      3. Instant Messenger: Skills related to using instant messenger.
VI. Other Strategies: Strategies outside the theoretical framework.
   A. Instruction: Statements related to instruction for using the Internet.
   B. Technical Skill: Technical skills that show specific knowledge about the computer and/or software interface (not related to communication tools as identified in other nodes; i.e., use of the back button, copy/paste function, find feature, etc.).
Figure 2.
Asking Questions/Solving Information Problems
1. I know what a really good question is.
2. I know who my audience is and what their needs are.
3. I know that revising the question, when I get new information, often makes it better.
4. I know how to use the Internet to develop background knowledge and make my question better.
5. I know that I need to remember my question and not get distracted.
Locating Information: Search Engines
1. I know how different search engines work.
2. I know when to change and use a different search engine.
3. I know simple strategies for making my search more specific.
4. I know advanced search strategies and when they could be useful.
5. I know how to search generally for useful key words when I do not know much about the topic.
6. I know how to use quotation marks in my search terms and when these are most useful.
7. I know how to read search engine results and usually do so.
8. I know how to evaluate possible search engine results.
9. I know how to search for images as well as text.
Locating Information: Web Pages
1. I know the typical "geography" of websites.
2. I know how to quickly skim information at a website to find the information I need, the link to take me there, or when I need to go somewhere else.
3. I can predict what kind of information will be at most links.
4. I know how to evaluate information that I find on a web site.
5. I know when I have located a useful website.
Evaluating Information
1. Understanding – I know when information makes sense to me.
2. Relevancy – I know when information meets my needs.
3. Accuracy – I know how to verify information with another source.
4. Reliability – I know how to tell when information can be trusted.
5. Bias – I know that everyone "shapes" information and how to evaluate this.
6. Stance – I am a "healthy skeptic" about online information.
Synthesizing Information
1. I know how to construct the information I need as I read selected information.
2. I know which information to ignore when I read.
3. I know how to put information together, and make inferences when it is missing, to answer my question.
4. I know when I have my answer.
5. I know how to integrate multiple media sources to derive my answer.
Communicating Information
1. I understand my audience's needs.
2. I know how to construct a clear and unambiguous message so that the reader knows what I mean.
3. I know how NOT to make people upset with me from the way I write my message.
4. I know how/when to access and publish information on a blog.
5. I know how/when to access, revise, and publish information on a wiki.
6. I know how/when to use email.
7. I know how/when to use IM.
8. I know how/when to attach a document to my messages.
9. I know how/when to use other communication tools.
Author Notes
The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305G050154 to The University of Connecticut. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.