Alternative measures of knowledge structure: as measures of text structure and of reading comprehension
May 14, 2012, BSI Nijmegen, the Netherlands
Roy Clariana, RClariana@psu.edu
Clariana, R.B. (2010). Multi-decision approaches for eliciting knowledge structure. In D. Ifenthaler, P. Pirnay-Dummer, & N.M. Seel (Eds.), Computer-Based Diagnostics and Systematic Analysis of Knowledge (Chapter 4, pp. 41-59). New York, NY: Springer. link

Overview
• Introduction
• I am an instructional designer and a connectionist, so my language may be a little different; also, slow me down if my accent is difficult
• My intent today is to describe my research on several approaches for measuring Knowledge Structure (KS) and, along the way, to describe tools and perhaps show additional ways of thinking about text, knowledge, comprehension, and learning

KS: Encompassing theoretical positions
• Cognitive structures (de Jong & Ferguson-Hessler, 1986; Fenker, 1975; Korz & Schulz, 2010; Naveh-Benjamin, McKeachie, Lin, & Tucker, 1986; Shavelson, 1972)
• Conceptual networks (Goldsmith et al., 1991)
• Conceptual representations (Geeslin & Shavelson, 1975; Novick & Hmelo, 1994; McKeithen, Reitman, Rueter, & Hirtle, 1981)
• Conceptual structures (Geeslin & Shavelson, 1975; Novick & Hmelo, 1994)
• Knowledge organization and knowledge structures (McKeithen et al., 1981)
• Semantic structures (Gentner, 1983; Riddoch & Humphreys, 1999)

KS: Encompassing theoretical positions
• Spatial knowledge (de Jong & Ferguson-Hessler, 1996; Dunbar & Joffe, 1997; Jee, Gentner, Forbus, Sageman, & Uttal, 2009; Korz & Schulz, 2010; Schuldes, Boland, Roth, Strube, Krömker, & Frank, 2011)
• Categorical knowledge (Candidi, Vicario, Abreu, & Aglioti, 2010; Matsuka, Yamauchi, Hanson, & Hanson, 2005; Stone & Valentine, 2007; Wang, Rong, & Yu, 2008)
• Conceptual knowledge (de Jong & Ferguson-Hessler, 1996; Edwards, 1993; Gallese & Lakoff, 2005; Hallett, Nunes, & Bryant, 2010; Rittle-Johnson & Star, 2009)

KS: My sandbox model
Our symbolic connectionist view:
• Knowledge structure (or structural knowledge) refers to how information elements are organized, in people and in artifacts
• In a departure from most theories, we propose that knowledge structure is pre-propositional, but that KS is the precursor of meaningful expression and the underpinning of thought
• Said differently, knowledge structure is the mental lexicon, consisting of weighted associations (which can be represented as vectors) between knowledge elements

KS is worth measuring
• Measures of content knowledge structure have been empirically and theoretically related to memory, classroom learning, insight, category judgment, rhyme, the novice-to-expert transition (Nash, Bravaco, & Simonson, 2006), and reading comprehension (Britton & Gulgoz, 1991; Guthrie, Wigfield, Barbosa, Perencevich, Taboada, Davis, Scafiddi, & Tonks, 2004; Ozgungor & Guthrie, 2004)
• Findings also support combining individual knowledge structures to form group mental models (Curşeu, Schalk, & Schruijer, 2010; DeChurch & Mesmer-Magnus, 2010; Johnson & O'Connor, 2008; Mohammed, Ferzandi, & Hamilton, 2010; Pirnay-Dummer, Ifenthaler, & Spector, 2010)
Applied to reading comprehension: KS as a measure of the situation model (Ferstl & Kintsch, 1999)
• Textbase (the text's semantic content and structure; van Dijk & Kintsch, 1983)
• Situation model (the integration of the 'episodic' text memory with prior domain knowledge; van Dijk & Kintsch, 1983); also called the mental model of the text, the text model, or the discourse model
Ferstl, E.C., & Kintsch, W. (1999). Learning from text: structural knowledge assessment in the study of discourse comprehension. In van Oostendorp and Goldman (Eds.), The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum.

Visually
(Figure: three networks over the same set of terms, such as contingency, TQM, leadership, classical, humanistic, quality, management, empowerment, motivation, and productivity, showing the text base, the situation model from the pre-reading list recall, and the updated situation model from the post-reading list recall.)
Ferstl, E.C., & Kintsch, W. (1999). Learning from text: structural knowledge assessment in the study of discourse comprehension. In van Oostendorp and Goldman (Eds.), The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum.

A KS measure of the situation model
• Ferstl & Kintsch (1999) used pre- and post-reading list-cued, partially-free recall to elicit KS of the birthday story (which yields asymmetric matrices)
• Participants – 42 undergraduate students (CU Boulder)
• Pre-reading cued-association KS task: students were shown, one at a time and in random order, a 60-word list of birthday-related terms on the computer, and were then given the list on paper with 3 blanks beside each term and asked to write in the 3 terms from the list that came to mind
• Reading: students then read the 600-word birthday story
• Post-reading cued-association KS task: the same as the pre-task, filling in the list again
Clariana, R.B., & Koul, R. (2008). The effects of learner prior knowledge when creating concept maps from a text passage. International Journal of Instructional Media, 35 (2), 229-236.

Results
• Established that the KS cued-association paradigm was appropriate for assessing background knowledge and text memory
• This KS approach facilitated interpretation, depicting how the text 'added to' the post-reading situation model (see their Figure 10.4, p. 260); it provided a different way to think about reading comprehension (p. 268)
• Test-retest reliability may be a problem for this KS approach
Ferstl, E.C., & Kintsch, W. (1999). Learning from text: structural knowledge assessment in the study of discourse comprehension. In van Oostendorp and Goldman (Eds.), The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum.
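For concreteness, here is a minimal sketch, not Ferstl & Kintsch's actual materials or scoring code, of how list-cued recall responses of this kind can be turned into an asymmetric proximity matrix; the five terms and the responses are invented for illustration.

```python
# A minimal sketch (invented data) of turning list-cued recall responses into
# an asymmetric proximity matrix: each cue term gets a row, and the (up to 3)
# terms written beside it on the response sheet get a 1.
import numpy as np

terms = ["birthday", "cake", "candles", "party", "gift"]
index = {t: i for i, t in enumerate(terms)}

# cue -> terms the participant wrote in the 3 blanks beside that cue
responses = {
    "birthday": ["party", "cake", "gift"],
    "cake":     ["candles", "birthday"],
    "candles":  ["cake"],
    "party":    ["birthday", "gift"],
    "gift":     ["party", "birthday"],
}

prox = np.zeros((len(terms), len(terms)), dtype=int)
for cue, recalled in responses.items():
    for r in recalled:
        prox[index[cue], index[r]] = 1   # directed: cue -> recalled term

print(prox)                       # asymmetric association matrix
print((prox != prox.T).any())     # True when associations are not reciprocal
```

Because each cue elicits its own recalls, the cell (cue, recalled) need not equal the cell (recalled, cue), which is why this elicitation method yields asymmetric matrices.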
Another KS measure of the text base (or situation model?)
• In Clariana & Koul (2008), we asked students to draw concept maps (KS) of a text
• Participants – 16 graduate students in a science instructional methods course (Penn State GV)
• First, students discussed concept maps in class
• Then, working in dyads (8 pairs), students were given a 255-word passage on the heart and circulatory system and asked to create a concept map of it
• KS data sources
  – 8 dyad concept maps of the text
  – 1 expert concept map of the text
  – A Pathfinder network (PFNet) map of the text formed automatically by the ALA-Reader software
Clariana, R.B., & Koul, R. (2008). The effects of learner prior knowledge when creating concept maps from a text passage. International Journal of Instructional Media, 35 (2), 229-236.

Data
• 26 terms were identified across all of the maps and the text
• Each dyad's concept map link lines (and the text concept map) were entered into a 26 x 26 half matrix, giving (n2-n)/2 pairwise comparisons
• Each matrix was analyzed using Pathfinder KNOT
(Figure: a fragment of a concept map, right ventricle, pulmonary artery, lungs, pulmonary vein, left atrium, oxygenated, deoxygenated, next to the corresponding link array holding a 1 for every pair of terms joined by a link line.)
Clariana, R.B., & Koul, R. (2008). The effects of learner prior knowledge when creating concept maps from a text passage. International Journal of Instructional Media, 35 (2), 229-236.

Data as percent overlap
• Percent overlap was calculated as the links the two networks have in common divided by their average total links
• e.g., a dyad PFNet with 6 links and the expert PFNet with 8 links share 4 links: % overlap = 4 / ((6 + 8) / 2) = 4 / 7 = 57% (a small worked sketch of this calculation appears a couple of slides below)
Clariana, R.B., & Koul, R. (2008). The effects of learner prior knowledge when creating concept maps from a text passage. International Journal of Instructional Media, 35 (2), 229-236.

Data as percent overlap
Table 1. The average percent of agreement for each pair of concept map networks (the number of network propositions is shown in parentheses). The table compares the five non-science-major dyads (Dyads 1, 4, 5, 7, 8), the three dyads that included a science major (Dyads 2, 3, 6), the ALA-Reader text PFNet (28 propositions), the text, and the expert map (16 propositions); agreement among the non-science dyads was near zero to the low 20s, while the dyads with a science major agreed with the text at roughly 50-65%.
An aspect of measurement reliability and validity: in the epigraph to Educational Psychology: A Cognitive View, Ausubel (1968) says, "The most important single factor influencing learning is what the learner already knows."
Clariana, R.B., & Koul, R. (2008). The effects of learner prior knowledge when creating concept maps from a text passage. International Journal of Instructional Media, 35 (2), 229-236.

Percent agreement with the 255-word text: the strong influence of prior domain knowledge
(Figure 3. The relationship between the number of propositions in the dyad concept maps and the average percent agreement with the 255-word text passage; * marks dyads with a science major. The expert map and the three science-major dyads fall at roughly 50-70% agreement, while the non-science dyads fall below about 25%.)
• Only those with prior domain knowledge could adequately 'capture' the text
Clariana, R.B., & Koul, R. (2008). The effects of learner prior knowledge when creating concept maps from a text passage. International Journal of Instructional Media, 35 (2), 229-236.
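Here is the small worked sketch of the percent-overlap score referred to above; the two link sets are invented so that they reproduce the 4 / ((6 + 8) / 2) = 57% example, and the term names are only placeholders.

```python
# A small sketch of the percent-overlap score: links in common divided by the
# average number of links in the two networks. The link sets are invented to
# reproduce the 4 / ((6 + 8) / 2) = 57% example from the slide.
def percent_overlap(links_a, links_b):
    # store each undirected link as a frozenset so that a-b equals b-a
    a = {frozenset(l) for l in links_a}
    b = {frozenset(l) for l in links_b}
    common = len(a & b)
    return common / ((len(a) + len(b)) / 2)

dyad_pfnet   = [("heart", "artery"), ("artery", "lungs"), ("lungs", "vein"),
                ("vein", "heart"), ("heart", "blood"), ("blood", "oxygen")]      # 6 links
expert_pfnet = [("heart", "artery"), ("artery", "lungs"), ("lungs", "vein"),
                ("vein", "heart"), ("heart", "valve"), ("valve", "ventricle"),
                ("ventricle", "artery"), ("lungs", "oxygen")]                    # 8 links

print(round(percent_overlap(dyad_pfnet, expert_pfnet), 2))  # 4 shared links -> 0.57
```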
ALA-Reader papers
ALA-Reader converts text into KS:
Clariana, R.B., Wallace, P.E., & Godshalk, V.M. (2009). Deriving and measuring group knowledge structure from essays: The effects of anaphoric reference. Educational Technology Research and Development, 57, 725-737.
Clariana, R.B., & Wallace, P. E. (2007). A computer-based approach for deriving and measuring individual and team knowledge structure from essay questions. Journal of Educational Computing Research, 37 (3), 209-225.
Koul, R., Clariana, R.B., & Salehi, R. (2005). Comparing several human and computer-based methods for scoring concept maps and essays. Journal of Educational Computing Research, 32 (3), 261-273.
Clariana, R.B. (2010). Deriving group knowledge structure from semantic maps and from essays. In D. Ifenthaler, P. Pirnay-Dummer, & N.M. Seel (Eds.), Computer-Based Diagnostics and Systematic Analysis of Knowledge (Chapter 7, pp. 117-130). New York, NY: Springer.
Also see the HIMAT/DEEP software and the Hamlet software.

KS for influencing learning
• e.g., Trumpower et al. (2010) used knowledge structure of computer programming, represented as network graphs, to pinpoint knowledge gaps
• KS was elicited as pairwise comparisons and data-reduced to networks using Pathfinder KNOT
• Learners' networks were then compared to an expert referent network
Trumpower, D.L., Sharara, H., & Goldsmith, T.E. (2010). Specificity of structural assessment of knowledge. Journal of Technology, Learning, and Assessment, 8(5). Retrieved from http://www.jtla.org.

KS for influencing learning
• The problems were intended to be complex enough that the solution depended on the integration of several interrelated concepts (relational)
• The presence of subsets of links in participants' PFNets differentially predicted performance on two types of problems, thereby providing evidence of the specificity of knowledge structure
Trumpower, D.L., Sharara, H., & Goldsmith, T.E. (2010). Specificity of structural assessment of knowledge. Journal of Technology, Learning, and Assessment, 8(5). Retrieved from http://www.jtla.org.

Protein structure as an analogy of knowledge structure in reading comprehension
Christian Anfinsen received the Nobel Prize in Chemistry in 1972:
• Linear sequence of amino acids -> enzyme structure -> enzyme function
Is like:
• Linear sequence of words in a text -> knowledge structure -> retrieval function

Amino acid linear sequence -> enzyme structure -> function
APRKFFVGGNWKMNGKRKSLGELIHTLD GAKLSADTEVVCGAPSIYLDFARQKLDAKI GVAAQNCYKVPKGAFTGEISPAMIKDIGA AWVILGHSERRHVFGESDELIGQKVAHAL AEGLGVIACIGEKLDEREAGITEKVVFQET KAIADNVKDWSKVVLAYEPVWAIGTGKT ATPQQAQEVHEKLRGWLKTHVSDAVAV QSRIIYGGSVTGGNCKELASQHDVDGFLV GGASLKPEFVDIINAKH
Triose Phosphate Isomerase: http://www.cs.wustl.edu/~taoju/research/shapematch-final.pdf

Read: the linear sequence of words in a text
(Figure 1, p. 136: an adult reader's eye fixation pattern over a passage.)
Hyona, J., & Lorch, R.F. (2004). Effects of topic headings on text processing: evidence from adult readers' eye fixation patterns. Learning and Instruction, 14, 131-152.

Knowledge structure -> retrieval function -> retrieval structure
(Figure: a linear knowledge structure over the terms pandas, live, exclusively, in the wild, the climate, today, and imminent extinction, and the retrieval structures it supports.)
• A -> B (propositional knowledge): Where do pandas live? In the wild
• A -> B, C, D (relational knowledge): What do we know about pandas today? Pandas are heading towards extinction in the wild due to climate change

Read -> KS -> retrieval function -> retrieval structure
• A -> B (propositional knowledge): Where do pandas live?
In the wild
• Relational: A -> B, C, D (relational knowledge): What do we know about pandas today? Pandas are heading towards extinction in the wild due to climate change

Summary of the introduction
• KS cuts across theories; we support connectionist views
• KS is worth measuring; it correlates with many kinds of performance
• KS can be measured in different ways
• KS has been used to visually represent the reading comprehension situation model
• KS has been used to visually represent the text structure
• Specific KS structure leads to specific cognitive performance
• Enzyme analogy: linear chain -> structure -> function

Measuring knowledge structure
My foundation and trajectory for measuring KS:
• Vygotsky (in Luria, 1979) and Miller (1969) card-sorting approaches
• Deese's (1965) ideas on the structure of associations in language and thought
• Kintsch and Landauer's ideas on representing text structure, and latent semantic analysis
• Recent neural network representations (e.g., Elman, 1995)
Jonassen, Beissner, and Yacci (1993)

Dave Jonassen's summary of KS measures…
(Figure, after Jonassen, Beissner, & Yacci, 1993, page 22: three phases, knowledge elicitation (elicit responses), knowledge representation (represent responses), and knowledge comparison (compare responses), with the three example studies placed on it: Trumpower, Sharara, & Goldsmith, 2010, similarity ratings; Ferstl & Kintsch, 1999, free recall; Clariana & Koul, 2008, concept maps and written text.)

Dave Jonassen's summary …
(Figure, Jonassen, Beissner, & Yacci, 1993, page 22: knowledge elicitation methods include similarity ratings, card sorts, word associations, free recall, ordered recall, written text, graph building, and concept maps; knowledge representation methods include dimensional scaling solutions (MDS, principal components, cluster analysis), trees (hierarchical clustering, additive trees, ordered trees), and networks (minimum spanning trees, link-weighted Pathfinder nets, semantic proximity); knowledge comparison methods include quantitative and qualitative graph comparisons, relatedness coefficients, C of PFNets, and expert/novice comparisons.)
To show different KR, let's do an example …

Knowledge Representation (KR)
• Multidimensional scaling (MDS) - a family of distance and scalar-product (factor) models. MDS re-scales a set of dis/similarity data into distances and produces the low-dimensional configuration that could have generated them (e.g., see http://www.tonycoxon.com/EssexSummerSchool/MDS-whynot.pdf)
• Pathfinder Knowledge Network Organizing Tool (KNOT) algorithms take estimates of the proximities between pairs of items as input and define a network representation of the items. The network (a PFNet) consists of the items as nodes and a set of links (which may be either directed or undirected, for non-symmetrical or symmetrical proximity estimates) connecting pairs of the nodes. (See http://interlinkinc.net/KNOT.html)

Pathfinder Network (PFNet) analysis
• Pathfinder seeks the least weighted path to connect all terms, aiming for n-1 links if possible
• Pathfinder is a mathematical approach for representing and comparing networks; see http://interlinkinc.net/index.html
• Pathfinder data reduction is based on the least weighted path between nodes (terms); so, for example, Deese's 171 data points become 18 data points. Only the salient or important data is retained.
• Pathfinder PFNet uses include, for example:
  – Library reference analysis
  – Search Google to see many more examples of how Pathfinder can be used
Note that Ferstl & Kintsch (1999) used Pathfinder.

Deese (1965), free recall data (p. 56)
• Participants are shown a list of related words, one at a time, and asked to free recall a related term
(Figure: an excerpt of the resulting association matrix for eight of the terms, moth, insect, wing, bird, fly, yellow, flower, and bug; for example, wing-bird is 44 and insect-bug is 33.)
Full array (n x n): 19 x 19 = 361
Half array ((n2 - n)/2): ((19 x 19) - 19)/2 = 171
Deese, J. (1965). The structure of associations in language and thought. Baltimore, MD: Johns Hopkins Press, page 56.

Deese (1965), free recall data (p. 56), all 19 terms
(Figure: the full 19 x 19 association matrix over moth, insect, wing, bird, fly, yellow, flower, bug, cocoon, color, blue, bees, summer, sunshine, garden, sky, nature, spring, and butterfly.)
Full array (n x n): 19 x 19 = 361
Half array ((n2 - n)/2): ((19 x 19) - 19)/2 = 171
Deese, J. (1965). The structure of associations in language and thought. Baltimore, MD: Johns Hopkins Press, page 56.

Using MDS in SPSS
• Start SPSS and open this Deese data file
• Under Analyze, select Scale, then select Multidimensional Scaling (ALSCAL)…
  1. Move the variables from left to right
  2. Create distances from the data
  3. Model
  4. Options
(In the Model and Options dialogs, select all of these options; how-to on the next page.)

MDS of the Deese data
(Figure: the derived stimulus configuration, Euclidean distance model, in two dimensions; terms such as summer, spring, garden, sunshine, nature, and bees fall near each other, sky, blue, color, and yellow fall near each other, and moth, cocoon, butterfly, flower, fly, insect, bug, wing, and bird fall near each other.)

Side issue: MDS obtains alternate visual representations (e.g., enantiomorphism)
(Figure: two MDS maps of the same Dutch city data, Amsterdam, Utrecht, The Hague, Nijmegen, Eindhoven, that are mirror images of each other.)
Both are "correct solutions". WARNING!! Like geographic data, for example, MDS solutions may be oriented or reflected in different ways.
(Describe Ellen Taricani's 2002 dissertation; handing out teacher maps post-reading is a bad idea.)
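As a counterpart to the SPSS ALSCAL steps above, here is a minimal sketch of the same kind of analysis using scikit-learn's metric MDS on a precomputed dissimilarity matrix; the five-term similarity matrix is only an illustrative excerpt in the spirit of the Deese values, and converting similarities to dissimilarities as 100 minus the value is an assumption, not the procedure used in the talk.

```python
# A minimal sketch of MDS on association data: convert similarities to
# dissimilarities, fit a 2-D metric MDS, and inspect the stress value.
import numpy as np
from sklearn.manifold import MDS

terms = ["moth", "insect", "wing", "bird", "butterfly"]
similarity = np.array([            # illustrative values, Deese-style percentages
    [100,  12,  12,  12,  15],
    [ 12, 100,   9,   9,  12],
    [ 12,   9, 100,  44,  13],
    [ 12,   9,  44, 100,  12],
    [ 15,  12,  13,  12, 100],
], dtype=float)
dissimilarity = 100 - similarity   # higher association -> smaller distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

print(mds.stress_)                 # check how strained the 2-D solution is
for term, (x, y) in zip(terms, coords):
    print(f"{term:10s} {x:7.2f} {y:7.2f}")
```

As with the SPSS solution, the configuration can come out rotated or mirror-imaged; only the relative distances are meaningful.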
How good is the MDS representation for displaying the relationships in the raw data?
• Many dimensions (in this case 19) are reduced to 2 dimensions
• Check the "stress" value to estimate how strained the results are
(Figure: the same two-dimensional derived stimulus configuration of the Deese data as above.)
MDS is an algorithmic, computational approach rather than one based on a distribution model, so no assumptions about the data structure are required…

PFNet of the Deese data
(Figure: the Pathfinder network over the 19 Deese terms, sky, summer, blue, spring, sunshine, color, garden, yellow, flower, nature, butterfly, cocoon, moth, wing, bird, bees, fly, insect, and bug, connected by the retained salient links.)

MDS and PFNet of the exact same data from Deese
(Figure: the SPSS MDS configuration and the Pathfinder KNOT PFNet side by side; on a second copy of the slide, blue lines reproduce the PFNet links on top of the MDS plot.)
• Pathfinder KNOT PFNet: local structure, verbatim, proposition specific
• SPSS MDS: global structure, relational, fuzzy, gist

MDS and PFNet data reduction
• MDS uses all of the raw data to reduce the dimensions in the representation; if the stress is not too large, global clustering is likely to be good but local clustering less so, and the MDS distances between terms within a tight cluster are more likely to misrepresent the relatedness in the raw data.
• Pathfinder uses only the strongest relationship data (typically 80% of the raw data is discarded). Pathfinder analysis provides "a fuller representation of the salient semantic structures than minimal spanning trees, but also a more accurate representation of local structures than multidimensional scaling techniques." (Chen, 1999, p. 408)
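To make the Pathfinder data reduction concrete, here is a minimal sketch of the PFNET(r = infinity, q = n - 1) rule, one standard Pathfinder variant, not the KNOT program itself: after association strengths are converted to distances, a direct link is kept only when no indirect path has a smaller minimax distance (the largest single step along the path). The four-term strengths and the strength-to-distance conversion are invented for illustration.

```python
# A minimal sketch of Pathfinder's PFNET(r = infinity, q = n - 1) reduction.
import numpy as np

def pfnet_links(dist):
    """dist: symmetric matrix of pairwise distances (0 on the diagonal)."""
    n = dist.shape[0]
    minimax = dist.copy().astype(float)
    for k in range(n):                       # Floyd-Warshall, but with the
        for i in range(n):                   # largest step along a path as
            for j in range(n):               # the path's "length"
                via_k = max(minimax[i, k], minimax[k, j])
                if via_k < minimax[i, j]:
                    minimax[i, j] = via_k
    # keep a direct link only when it is itself a minimax path
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i, j] <= minimax[i, j]]

strength = np.array([                 # invented association strengths
    [0, 9, 2, 1],
    [9, 0, 8, 2],
    [2, 8, 0, 7],
    [1, 2, 7, 0],
], dtype=float)
dist = strength.max() + 1 - strength  # stronger association -> shorter distance
np.fill_diagonal(dist, 0)
print(pfnet_links(dist))              # 6 raw pairs reduced to 3 salient links
```

With these invented values the 6 raw proximities reduce to the 3 strongest links (a chain), mirroring the way Deese's 171 proximities reduce to 18 links above.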
Dave Jonassen's summary …
(Figure: the Jonassen, Beissner, & Yacci, 1993, page 22, summary again, with 'distance data' added among the elicitation methods; Sabine Klois used …)

Poindexter and Clariana
• Participants – undergraduate students in an intro Educational Psychology course (Penn State Erie)
• Setup – complete a demographic survey and a how-to-make-a-concept-map lesson
• Text-based lesson interventions – instructional text on the "human heart" with either a proposition-specific or a relational lesson approach
• KS measured as 'distances' between terms in a concept map (a form of card sorting) and also as concept map link data, both analyzed with Pathfinder KNOT
Poindexter, M. T., & Clariana, R. B. (2006). The influence of relational and proposition-specific processing on structural knowledge and traditional learning outcomes. International Journal of Instructional Media, 33 (2), 177-184.

Treatments
• Relational condition: participants were required to "unscramble" sentences (following Einstein, McDaniel, Bowers, & Stevens, 1984) in one paragraph in each of the five sections, or about 20% of the total text content
• Proposition-specific condition (following Hamilton, 1985): participants answered three or four adjunct constructed-response questions (taken nearly verbatim from the text) provided at the end of each of the five sections, for a total of 17 questions covering about 20% of the total text content (no feedback was provided)
Poindexter, M. T., & Clariana, R. B. (2006). The influence of relational and proposition-specific processing on structural knowledge and traditional learning outcomes. International Journal of Instructional Media, 33 (2), 177-184.

DK and KS posttests
• DK - Declarative Knowledge (Dwyer, 1976)
  – Identification drawing test (20)
  – Terminology multiple-choice items (20), declarative knowledge; e.g., the lesson text states A -> B, the posttest asks A -> ?(B, x, y, z) (explicitly stated)
  – Comprehension multiple-choice items (20), inference required; e.g., given A -> B and B -> C in the lesson text, the posttest asks A -> ?(C, x, y, z) (implicit, not stated)
• KS - Knowledge structure
  – Concept map link-based common scores
  – Concept map distance-based common scores
Poindexter, M. T., & Clariana, R. B. (2006). The influence of relational and proposition-specific processing on structural knowledge and traditional learning outcomes. International Journal of Instructional Media, 33 (2), 177-184.

Note that declarative knowledge multiple-choice posttest items are sensitive to the linear order of the lesson text
If the lesson text is A -> B, paraphrasing the stem (A') and/or transposing stem and response (B -> A) to create posttest questions influences performance. When the MC posttest item is:
• Identical to the lesson (A -> B): 77%
• Transposed from the lesson (B -> A): 71%
• Paraphrased from the lesson (A' -> B): 69%
• Both transposed and paraphrased (B -> A'): 67%
Bormuth, J. R., Manning, J., Carr, J., & Pearson, D. (1970). Children's comprehension of between and within sentence syntactic structure. Journal of Educational Psychology, 61, 349-357.
Clariana, R.B., & Koul, R. (2006). The effects of different forms of feedback on fuzzy and verbatim memory of science principles. British Journal of Educational Psychology, 76 (2), 259-270.

Recording link and distance data in a concept map
(Figure: a student's concept map of the seven heart terms, left atrium, lungs, oxygenate, pulmonary artery, pulmonary vein, deoxygenate, and right ventricle, recorded two ways: a distance array holding the on-screen distance between every pair of terms, and a link array holding a 1 for every pair the student joined with a link line; both arrays hold (n2-n)/2 pairwise comparisons.)

Distance raw data reduction by Pathfinder KNOT
(Figure: the 21 distance data points of the distance array are reduced by Pathfinder KNOT to a 6-link Pathfinder network over the same seven terms.)

Example of link and distance PFNets for the same concept map
(Figure: the student's concept map scored from its link data, next to the Pathfinder network derived from its distance data; the two networks are similar but not identical.)

Means and sd
(Table: means and standard deviations on the ID, TERM, COMP, and map-based KS posttests for the control, proposition-specific, and relational treatment groups.)
Poindexter, M. T., & Clariana, R. B. (2006). The influence of relational and proposition-specific processing on structural knowledge and traditional learning outcomes. International Journal of Instructional Media, 33 (2), 177-184.

Analysis
• MANOVA (relational, proposition-specific, and control) with five dependent variables: ID, TERM, COMP, Map-prop, and Map-assoc
• COMP was significant, F = 5.25, MSe = 17.836, p = 0.015; none of the other dependent variables were significant
• Follow-up Scheffé tests revealed that the proposition-specific group's COMP mean was significantly greater than the control group's COMP mean (see the previous table)
Poindexter, M. T., & Clariana, R. B. (2006). The influence of relational and proposition-specific processing on structural knowledge and traditional learning outcomes. International Journal of Instructional Media, 33 (2), 177-184.

Correlations
(Table: correlations among ID (drawing), TERM (MC), COMP (MC), and the map-based KS scores (Map-prop, Map-link, Map-distance, Map-assoc); all shown correlations are significant at p < .05. TERM, a verbatim A -> B measure, relates more strongly to the map link scores, whereas COMP, an inference A -> C measure, relates more strongly to the map distance scores.)
Compare to Taricani & Clariana next.
Poindexter, M. T., & Clariana, R. B. (2006). The influence of relational and proposition-specific processing on structural knowledge and traditional learning outcomes. International Journal of Instructional Media, 33 (2), 177-184.
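Here is a rough sketch of how the two half-arrays described a few slides above, a link array and a distance array, can be recorded from one drawn concept map; the term coordinates and links are invented, and using straight-line screen distance is an assumption about how 'distance' was operationalized.

```python
# A rough sketch (invented coordinates and links) of the two (n^2 - n)/2
# half-arrays recorded from a concept map: link data (did the student draw a
# link between the pair?) and distance data (how far apart do the pair sit?).
from itertools import combinations
import math

terms  = ["left atrium", "lungs", "oxygenate", "pulmonary artery",
          "pulmonary vein", "deoxygenate", "right ventricle"]
coords = {"left atrium": (40, 200), "lungs": (160, 60), "oxygenate": (250, 60),
          "pulmonary artery": (160, 140), "pulmonary vein": (100, 120),
          "deoxygenate": (250, 180), "right ventricle": (40, 100)}
links  = {frozenset(p) for p in [("right ventricle", "pulmonary artery"),
                                 ("pulmonary artery", "lungs"),
                                 ("lungs", "pulmonary vein"),
                                 ("pulmonary vein", "left atrium"),
                                 ("lungs", "oxygenate"),
                                 ("right ventricle", "deoxygenate")]}

for a, b in combinations(terms, 2):          # 21 pairwise comparisons for 7 terms
    linked   = 1 if frozenset((a, b)) in links else 0
    distance = math.dist(coords[a], coords[b])
    print(f"{a:17s} {b:17s} link={linked} dist={distance:6.1f}")
```

The distance half-array can then be submitted to Pathfinder KNOT, which reduces the 21 distances to the few salient links, as in the slides above.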
Compare the correlation results to a related follow-up investigation
                 Poindexter & Clariana (2006)    Taricani & Clariana (2006)
                 Term        Comp                Term        Comp
Link data        0.77        0.53                0.78        0.54
Distance data    0.69        0.71                0.48        0.61
Taricani, E. M., & Clariana, R. B. (2006). A technique for automatically scoring open-ended concept maps. Educational Technology Research and Development, 53 (4), 61-78.

Clariana and Marker (2007)
• Participants – 68 graduate students in an INSYS intro ISD course
• Computer-based lesson – text, graphics, and questions on instructional design; participants were either asked to generate headings for each section or not
• Seven sections, referred to as A through G, each covering a component of the Dick and Carey model
• KS measured with a sorting task and a new listwise task
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Posttests
• Declarative knowledge
  – 30-item constructed-response terminology test, 15 items from lesson sections B, D, and F (called "used") and 15 items from sections A, C, E, and G (called "not used")
• Knowledge structure
  – The posttest focuses on the 15 terms used in sections B, D, and F
  – Listwise rating task agreement scores (compared to linear and cluster referents)
  – Sorting task agreement scores (compared to linear and cluster referents)
(The list and sorting tasks were used by Sabine Klois; note that this sorting task is not the same as card sorting.)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Listwise rating task … (available at: www.personal.psu.edu/rbc4)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Sorting task …
"Drag related terms closer together and unrelated terms further apart. When done, click CONTINUE."
Terms: Preinstructional activities, Instructional strategy, Performance context, Learner analysis, Goal analysis, Target population, Entry behaviors, Delivery system, Feedback, Job aid, Transfer, Intellectual skill, Concept, Verbal information, Psychomotor skill
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

An example student PFNet
(Figure: one student's Pathfinder network over the 15 posttest terms, each labeled with its lesson section and position: Goal analysis B1, Verbal information B2, Concept B3, Intellectual skill B4, Psychomotor skill B5, Target population D1, Learner analysis D2, Entry behaviors D3, Performance context D4, Transfer D5, Pre-instructional activities F1, Delivery system F2, Job aid F3, Instructional strategy F4, Feedback F5.)
Show how to count linear and nonlinear links here … (one way to do this count is sketched just below)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link
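The speaker note above asks how linear and nonlinear links are counted; the sketch below shows one hedged way to operationalize it, not necessarily the exact scoring used in Clariana & Marker (2007): a link between adjacent terms of the same lesson section (e.g., B1-B2) counts as linear, and any other link counts as nonlinear. The student links are invented.

```python
# One hedged way to count linear vs. nonlinear links in a student PFNet whose
# terms are labeled by lesson section and position (B1..B5, D1..D5, F1..F5).
def classify(link):
    a, b = link
    same_section = a[0] == b[0]                      # same letter prefix
    adjacent = abs(int(a[1:]) - int(b[1:])) == 1     # neighboring positions
    return "linear" if same_section and adjacent else "nonlinear"

student_links = [("B1", "B2"), ("B2", "B3"), ("B3", "B5"),
                 ("D1", "D2"), ("D3", "F4"), ("B4", "D4")]   # invented links

counts = {"linear": 0, "nonlinear": 0}
for link in student_links:
    counts[classify(link)] += 1
print(counts)   # {'linear': 3, 'nonlinear': 3} for these example links
```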
Means and standard deviations
(Table: means and standard deviations for the cued recall and KS posttests by treatment group.)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Analysis
• The cued recall and sorting task posttest data were analyzed by a 2 (Treatment: Headings vs. No Headings) x 2 (Posttest: cued recall and sorting task) mixed ANOVA. The first is a between-subjects factor and the second is the within-subjects factor.
• The Treatment main effect was not significant, F(1, 61) = 0.220, MSE = 0.045, p = .94. The Posttest repeated measure was significant, F(1, 61) = 18.874, MSE = 0.022, p < .001, showing that the mean cued recall test score (M = 0.59) was greater than the mean sorting task score (M = 0.47). Finally, the anticipated disordinal interaction of the Treatment and Posttest factors was significant, F(1, 61) = 5.119, MSE = 0.022, p = .027
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Generate headings when reading: better 'structure' but worse 'recall'
(Figure: the disordinal interaction of declarative knowledge and knowledge structure (KS) scores for the Headings and No Headings groups.)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Comparison of listwise and sorting KS
• Listwise task (more linear), i.e., A1 -> A2
• Sorting task (more relational), i.e., A1 -> A3 or A4 or A5
(Figure: mean linear and non-linear scores for the No Headings and Headings groups on each task.)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Correlations of interest
Table 1. The No Headers and Headers treatment group correlations (from Clariana & Marker, 2007).
No Header treatment group (N = 32)          A        B        C        D       E
A. CR posttest (15 max.)                    1
B. Sorting task (linear)                    0.24     1
C. Sorting task (non-linear)               -0.02    -0.37*    1
D. Listwise task (linear)                   0.62**   0.30    -0.21     1
E. Listwise task (non-linear)               0.08     0.04     0.20     0.00    1
Header treatment group (N = 31)
A. CR posttest (15 max.)                    1
B. Sorting task (linear)                    0.22     1
C. Sorting task (non-linear)                0.49**   0.09     1
D. Listwise task (linear)                   0.44*    0.36*    0.39*    1
E. Listwise task (non-linear)               0.37*    0.30     0.30     0.04    1
* p < .05; ** p < .01
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Brain scans – in proficient readers, text with no headings requires right hemisphere activity to achieve coherence (more work); some students will not be able to form coherence.
"Consistent with previous studies…the right middle temporal regions may be especially important for integrative processes needed to achieve global coherence during discourse processing." (p. 1317, St. George, Kutas, Martinez, & Sereno, 1999)
(Figure: brain images for the headings and no-headings conditions, left and right hemispheres; see http://brain.oxfordjournals.org/cgi/reprint/122/7/1317)

Review - Generate headings when reading: better 'structure' but worse 'recall'
(Figure: the declarative knowledge and knowledge structure (KS) interaction again.)
Clariana, R.B., & Marker, A. (2007). Generating topic headings during reading of screen-based text facilitates learning of structural knowledge and impairs learning of lower-level knowledge. Journal of Educational Computing Research, 37 (2), 173-191. link

Comments
• The better-structured knowledge of the Headings group (i.e., more like the author's text schema) should allow the learners to use that knowledge more flexibly (Jonassen & Wang, 1993), which should influence the reader's ability to form inferences and comprehend the lesson text, but this apparently comes at the expense of text details.
• These results are consistent with, and help explain, previous investigations reporting that learners who generate headings score lower than no-headings control groups on lower-order outcomes but score higher on inference and comprehension tests (Dee-Lucas & DiVesta, 1980; Jonassen et al., 1985; Wittrock & Kelly, 1984). (These papers are listed on the next screen.)

Generative learning (relational lesson tasks) DK/KS reversal reference list
• Dee-Lucas, D., & DiVesta, F. F. (1980). Learner-generated organizational aids: Effects on learning from text. Journal of Educational Psychology, 72(3), 304-311.
• Jonassen, D. H., Hartley, J., & Trueman, M. (1985, April). The effects of learner generated versus experimenter-provided headings on immediate and delayed recall and comprehension. Chicago: American Educational Research Association (ERIC ED 254 567).
• Wittrock, M. C., & Kelly, R. (1984). Teaching reading comprehension to adults in basic skills courses. Final Report, Project No. MDA 903-82-C-0169. University of California, Los Angeles.

MDS explanation: read a text containing the terms a through i
Connectivity matrix (Kintsch, 1998): each word is linked to itself and to the next word in the reading order (a-b, b-c, …, h-i), giving a simple linear-chain link array (no color).
(Figure: the MDS of this no-headings link array strings the nine terms out.)

Same reading with the terms a through i, but with section headings
Headings (i.e., the color names red, blue, and green): in the link array, terms a-c are additionally linked to "red", d-f to "blue", and g-i to "green".
(Figure: the MDS of this with-headings link array.)

MDS of the two connectivity matrices
(Figure: the no-color-names MDS next to the color-names MDS; with the heading links added, the terms form tighter clusters.)
Context (like topic headings) may alter memory structure in a regular way, and we can think about it visually.
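A small sketch of the two connectivity matrices just described, in the style of the Kintsch (1998) link arrays above: nine terms read in a line, each linked to itself and to the next term, and, in the headings version, each term also linked to its section heading ("red", "blue", or "green"). Treating the headings as three extra columns is the only assumption beyond the slide.

```python
# Build the no-headings and with-headings connectivity matrices for a text
# read as the linear chain a -> b -> ... -> i.
import numpy as np

words = list("abcdefghi")
n = len(words)

# no headings: each word linked to itself and to the next word
no_head = np.eye(n, dtype=int)
for i in range(n - 1):
    no_head[i, i + 1] = 1

# with headings: three extra context columns, one per block of three terms
headings = ["red", "blue", "green"]
with_head = np.zeros((n, n + len(headings)), dtype=int)
with_head[:, :n] = no_head
for i in range(n):
    with_head[i, n + i // 3] = 1   # a-c -> red, d-f -> blue, g-i -> green

print(no_head)
print(with_head)
```

Submitting these two arrays to MDS (as sketched earlier) reproduces the contrast on the slide: the heading links pull each block of terms into a tighter cluster.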
Explanation using Lawrence Frase's matrix multiplication to explain inference
Read A -> B and B -> C: a model of the effects of context (such as headings) on verbatim and inference activation.
(Figure: a small association matrix over the terms A, B, and C, plus a context node, is multiplied by itself (mmult). In the control panel, the read proposition association strength is 0.3, the context association strength is 0.3, and each term's self-association is 0.9. Subtracting the with-context output from the no-context output gives verbatim A -> B = -0.033 and inference A -> C = -0.089; the negative differences mean the context condition produces more activation. Also notice the B -> A, C -> A, and B -> C activations. A small numerical sketch of this idea appears at the end of this section.)
Frase, L.T. (1969). Structural analysis of the knowledge that results from thinking about text. Journal of Educational Psychology, 60 (6, monograph, part 2), 1-16.

Clariana and Prestera (2009)
• Background color as a weak context variable
• Participants – 80 graduate students in an INSYS intro instructional design course
• Computer-based lesson – text, graphics, and questions with feedback on ISD, presented in 5 sections, each section covering a component of the Dick and Carey model (items with feedback should produce STRONG A -> B effects)
• Intervention – the lesson was presented either with or without a color band on the left margin (this use of color should have WEAK relational effects)
Clariana, R.B., & Prestera, G.E. (2009). The effects of lesson screen background color on declarative and structural knowledge. Journal of Educational Computing Research, 40 (3), 281-293. link

Example lesson screen
(Figure: the same lesson screen with and without the color band.)
Clariana, R.B., & Prestera, G.E. (2009). The effects of lesson screen background color on declarative and structural knowledge. Journal of Educational Computing Research, 40 (3), 281-293. link

Posttests
• Declarative knowledge vocabulary posttest – 18 constructed-response (fill in the blank) items and 18 multiple-choice terminology items (strong A -> B)
• Knowledge structure posttest – sort the 36 vocabulary terms (the same sorting task as in Clariana & Marker, 2007, above)
• Results: the anticipated disordinal interaction of Subtest and Lesson Color was significant, F(1, 71) = 5.008, MSe = 0.618, p = .028, with lesson color enhancing structural knowledge scores and inhibiting declarative knowledge scores
Clariana, R.B., & Prestera, G.E. (2009). The effects of lesson screen background color on declarative and structural knowledge. Journal of Educational Computing Research, 40 (3), 281-293. link

Lesson and posttest means
(Table: lesson and posttest means by lesson color condition.)
Clariana, R.B., & Prestera, G.E. (2009). The effects of lesson screen background color on declarative and structural knowledge. Journal of Educational Computing Research, 40 (3), 281-293. link

Another disordinal interaction of declarative and structural knowledge
(Figure: the declarative knowledge and structural knowledge interaction for the color and no-color conditions.)
Clariana, R.B., & Prestera, G.E. (2009). The effects of lesson screen background color on declarative and structural knowledge. Journal of Educational Computing Research, 40 (3), 281-293. link
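Here is the hedged numerical sketch of the matrix-multiplication idea attributed to Frase above: read "A -> B" and "B -> C", then multiply the association matrix by itself so that indirect activation (A -> C, the inference) appears; a context node stands in for a topic heading. The self-association of 0.9 and the read/context associations of 0.3 echo the slide's control panel, but the rest is invented and is not Frase's or the slide's exact model.

```python
# A toy spreading-activation model: squaring the association matrix produces
# indirect (inference) activation, and a context node raises activation.
import numpy as np

labels = ["A", "B", "C", "context"]

no_context = np.array([
    [0.9, 0.3, 0.0, 0.0],   # A -> B read directly
    [0.0, 0.9, 0.3, 0.0],   # B -> C read directly
    [0.0, 0.0, 0.9, 0.0],
    [0.0, 0.0, 0.0, 0.0],   # no heading, so the context node is inert
])

with_context = no_context.copy()
with_context[:3, 3] = 0.3   # every proposition also linked to the heading
with_context[3, :3] = 0.3
with_context[3, 3] = 0.9

for name, m in [("no context", no_context), ("with context", with_context)]:
    spread = m @ m          # one step of spreading activation
    print(name,
          "verbatim A->B:", round(spread[0, 1], 2),
          "inference A->C:", round(spread[0, 2], 2))
```

With these invented weights, the context condition yields higher verbatim and inference activation than the no-context condition, in the same direction as the differences reported on the slide.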
Section summary
• Different measurement approaches are better at prompting memory for linear or cluster KS
• Linear lesson tasks establish linear KS, and relational (generative) lesson tasks establish relational KS
• Models can account for verbatim and inference outcomes
• Next section - alternative measures of KS

For KS, more terms may be better
• Goldsmith et al. (1991) examined the relationship between the number of terms included in a Pathfinder network analysis (elicited pairwise) and the ability of the resulting PFNets to predict end-of-course grades
• But only if these are really IMPORTANT terms (Clariana & Taricani, 2010)
(Figure: predictive validity rises from near 0 toward about 0.7 as the number of terms increases from 0 to 30.)
Goldsmith, T.E., Johnson, P.J., & Acton, W.H. (1991). Assessing structural knowledge. Journal of Educational Psychology, 83 (1), 88-96.
Clariana, R.B., & Taricani, E. M. (2010). The consequences of increasing the number of terms used to score open-ended concept maps. International Journal of Instructional Media, 37 (2), 163-173. link

Raw data reduction by Pathfinder KNOT
• The raw data form a half array of (n2 - n)/2 proximities; KNOT tries to form a path with n - 1 links
  – Terms = 10: raw data = 45, PFNet = 9, PFNet as % of raw data = 20%
  – Terms = 20: raw data = 190, PFNet = 19, PFNet as % of raw data = 10%
  – Terms = 30: raw data = 435, PFNet = 29, PFNet as % of raw data = 7%
(Figure: the number of raw pairwise data points grows quadratically with the number of terms; a quick check of these counts appears a few slides below.)
(Methods that elicit pairwise associations fatigue participants with more than 20 to 30 terms.)

KS measurement
• More terms are better, but eliciting KS by pairwise comparisons becomes a problem with more than about 20 terms
• So we need a valid and efficient measure of KS; recall from above that:
• Ferstl & Kintsch (1999) used a more efficient cued-recall list approach (3 recalls for each term)
• Clariana & Marker (2007) added a 'listwise' approach, with one recognition retrieval for each term, and a 'sorting' approach (dragging all terms around on the screen at the same time)
• Do 'listwise' and 'sorting' results compare with the more traditional and accepted 'pairwise' approach? If yes, then these two can handle large lists of terms.

Clariana and Wallace (2009)
• Compared pairwise, listwise, and sorting
• Participants – 84 undergraduate students in business
• All students completed 3 computer-delivered KS measures – listwise, pairwise, and sorting (in randomized order) – using the 15 major concepts of the course
• Students were grouped for analysis into high and low groups based on a median split of their end-of-course multiple-choice exam
Clariana, R.B., & Wallace, P. E. (2009). A comparison of pairwise, listwise, and clustering approaches for eliciting structural knowledge in information systems courses. International Journal of Instructional Media, 36, 139-143.

The three approaches: pairwise, listwise, sorting
(Figure: screenshots of the three tasks; the sorting screen reads "Drag related terms closer together and unrelated terms farther apart. When done, click CONTINUE" over the terms computer literacy, internet, networks, WWW, applications, communications, ergonomics, input, output, system unit, CPU, operating system, ethics, privacy.)
Clariana, R.B., & Wallace, P. E. (2009). A comparison of pairwise, listwise, and clustering approaches for eliciting structural knowledge in information systems courses. International Journal of Instructional Media, 36, 139-143.
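Here is the quick check of the counts on the data-reduction slide above: the raw half-array holds (n^2 - n)/2 proximities while a PFNet aims for roughly n - 1 links, so the retained fraction shrinks as the term list grows.

```python
# Reproduce the Terms = 10 / 20 / 30 counts from the data-reduction slide.
for n in (10, 20, 30):
    raw   = (n * n - n) // 2        # half-array of pairwise proximities
    pfnet = n - 1                   # the n - 1 links a PFNet aims for
    print(f"terms={n:2d}  raw pairs={raw:3d}  PFNet links={pfnet:2d}  "
          f"retained={pfnet / raw:.0%}")
```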
Sorting and listing are faster than pairwise
• Time to complete the tasks:
  – pairwise approach, X = 447.4 s (sd = 140.6)
  – listwise approach, X = 193.3 s (sd = 79.6)
  – sorting approach, X = 115.5 s (sd = 62.7)
• Concurrent / convergent validity: do the 3 elicitation tasks obtain similar raw data and PFNet data?
Clariana, R.B., & Wallace, P. E. (2009). A comparison of pairwise, listwise, and clustering approaches for eliciting structural knowledge in information systems courses. International Journal of Instructional Media, 36, 139-143.

Individuals' raw data arrays were not similar (correlations)
Table 3. Relatedness correlations of individual and group average raw proximity data.
                  Individuals                              Group average
Group             PxL           PxS           LxS          PxL      PxS      LxS
Low (n = 41)      0.31 (.09)    -0.21 (.15)   -0.30 (.14)  0.68     -0.63    -0.79
High (n = 43)     0.31 (.16)    -0.25 (.19)   -0.29 (.13)  0.68     -0.67    -0.78
P – pairwise, L – listwise, S – sorting
Therefore, the 3 approaches do not elicit the same raw data associations; individuals' raw data seem idiosyncratic, flaky, or noisy. However, the group-average raw data are much more alike (averaging within a group 'smooths out' idiosyncrasy).
Clariana, R.B., & Wallace, P. E. (2009). A comparison of pairwise, listwise, and clustering approaches for eliciting structural knowledge in information systems courses. International Journal of Instructional Media, 36, 139-143.

% overlap based on 'group average' PFNet common scores (intersection)
(Table: percent overlap among the group-average PFNets from the pairwise, listwise, and sorting tasks for the low and high groups, and between each of these and a linear and a non-linear referent. Overlap among the six group-average PFNets ranges from about 43% to 79%; overlap with the linear referent ranges from 29% to 50%, overlap with the non-linear referent is 9% for every task, and the two referents share 0% overlap with each other.)
Clariana, R.B., & Wallace, P. E. (2009). A comparison of pairwise, listwise, and clustering approaches for eliciting structural knowledge in information systems courses. International Journal of Instructional Media, 36, 139-143.

Sabine's experts
(Table: percent overlap among the PFNets of Experts A-D and the expert average, computed separately for the pairwise, listwise, and sorting tasks; the overall average overlap for the pairwise task is 0.49, and expert agreement is highest for the listwise task.)

Next directions for KS research?
• Continue to find valid and efficient KS approaches
• And close with a few provocative comments …

A 1st-year undergraduate textbook in IST: an obvious 'collage'

Web reading F-pattern?
Heatmaps from user eye-tracking studies of three websites. The areas where users looked the most are colored red; the yellow areas indicate fewer views, followed by the least-viewed blue areas. Gray areas didn't attract any fixations.
http://www.useit.com/alertbox/reading_pattern.html

Gaze plots of the 4 main classes of web search reading behavior: navigation-dominant, search-dominant, tool-dominant, and successful.
http://www.useit.com/alertbox/fancy-formatting.html

Sources of eye-tracking
• http://www.miratech.com/blog/eye-trackinglecture-web.html
• http://www.youtube.com/watch?v=X60VPJDLAeM&feature=player_embedded

Altered reading due to web experience?
• If students are not reading linearly, or are using (or not using) headings and other text signals (color, underline, highlights) differently, then the KS will be different
• If students are not reading linearly, or are using (or not using) headings and other text signals (color, underline, highlights) differently, then the KS will be different • Specific KS can accomplish specific kinds of mental ‘work’ and other KS other work (the protein analogy) • So determining how today’s students read hypertext and web materials, and whether this transfer back to paper-based text is an important question • KS is one tool that can complement existing measures and help explain this 91 Term activation across sentences 7 6 1 2 5 3 Axis Title 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 0.92 terms knight rode forest country dragon princess kidnap free (freed) marry (married) hurried fought death (killed) armor thankful 4 4 5 3 6 7 2 13 1 10 7 0 4 8 9 10 11 12 1 13 92