Using CogAT
David Lohman
Institute for Research and Policy on Acceleration, Belin-Blank Center & Iowa Testing Programs, University of Iowa
http://faculty.education.uiowa.edu/dlohman/

Topics
- Distinguishing between ability & achievement
- Overview of CogAT
- Comparing CogAT with other ability tests
- Interpreting CogAT scores
- General issues in selection
- Identification of talent in special populations
- Combining achievement, ability, & teacher ratings: the Lohman–Renzulli matrix

Distinguishing between ability and achievement

Puzzlements for common interpretations of ability & achievement
- Is ability more biologically based? Most studies show the same heritability for IQ (Gf) and achievement tests (Gc).
- Is lower relative achievement than ability "underachievement"? But there are an equal number of "overachievers."
- Status scores (IQ, PR) show good stability. But one must keep getting better to retain that IQ: between ages 9 and 17, r(true IQ) = .75, and 60% of those in the top 3% at age 9 are NOT in the top 3% at age 17.
- Are fluid abilities invested in experience to produce particular constellations of crystallized abilities? Only for very young children; thereafter, crystallized abilities -> fluid.

Level 1. Nominalism (most people here)
- "Ability" and "achievement" are separate (the jangle fallacy; T. Kelley, 1927).
- [Diagram: separate "Ability" and "Achievement" circles.]

Level 2. Oh, oh – there's more overlap than uniqueness here!
- It's all "g" (any indicant will do), or it's all just a product of experience.
- Or preserve stage 1 beliefs: purge ability of visible achievement (e.g., measure "process" or use only "nonverbal" measures).
- [Diagram: overlapping "Ability" and "Achievement" circles.]

Level 3. Island kingdoms – things get even more complicated (most scholars of human abilities)
- Effects of language, culture, and experience on the development of ability ("All abilities are developed" – Anastasi).
- Experience alters the structure of the brain.
- Mental processes do not exist independently of knowledge.

Example of the Flynn Effect
[Figure: gains in Wechsler–Binet IQ for the U.S. White population, 1910–1990, plotted on the 1995 IQ scale. Sources: J. Horgan (1995) and D. Schildlovsky (2000).]

[Figure: proportion of variance in WISC Full Scale IQ at age 7 accounted for by genetic factors, as a function of socioeconomic status (low to high). Turkheimer et al. (2003), Psychological Science, 14(6); N = 319 twin pairs; 43% White, 54% Black; most families poor.]

Fluid–Crystallized Continuum
- Cognitive abilities run from general fluid ability (Gf) at the fluid end to science achievement, math achievement, social studies achievement, knowledge of literature, and specific factual knowledge at the crystallized end.
- Physical skills run, by analogy, from general physical fitness to specific skills such as basketball, football, volleyball, swimming, wrestling, field hockey, and cycling.

A common ability–achievement space
Level 4. Systems theories (a handful)
- Aptitude theory (Richard Snow): sidesteps the issue of defining intelligence; starts with expertise, the contexts in which it is developed and displayed, and readiness to learn in those contexts.

Overview of CogAT

Some history
- The Lorge–Thorndike Intelligence Test became the Cognitive Abilities Test.
- Form 1 (1974); Forms 2–3 (no Composite score); Form 4 (Thorndike & Hagen; Composite score); Form 5 (Hagen); Form 6 (Lohman & Hagen).
- Co-normed with the ITBS & ITED.

Primary uses of CogAT
- To guide efforts to adapt instruction to the needs and abilities of students
- To provide an alternative measure of cognitive development
- To identify students whose predicted levels of achievement differ markedly from their observed levels of achievement

Primary Battery (K–2)
- General reasoning ability, measured through three batteries: Verbal Reasoning (Oral Vocabulary, Verbal Reasoning), Quantitative Reasoning (Relational Concepts, Quantitative Concepts), and Nonverbal Reasoning (Figure Classification, Matrices).
- No reading; tests untimed (paced by the teacher); students mark directly in the booklet.

Multilevel Battery (grades 3–12)
- General reasoning ability, measured through three batteries: Verbal Reasoning (Verbal Classification, Sentence Completion, Verbal Analogies), Quantitative Reasoning (Quantitative Relations, Number Series, Equation Building), and Nonverbal Reasoning (Figure Classification, Figure Analogies, Figure Analysis).
- Tests timed; separate answer sheet; common directions.
- Three separate test batteries, not one.

Scores
- Raw score = number correct.
- Scale score (USS): within a level, maps number correct onto a scale whose intervals are approximately the same size; between levels, maps number correct on different levels of the test onto a single, common, developmental scale.
- [Diagram: test levels A, B, C, D, ... located on the common USS scale.]
- [Figure: relationships among stanines, percentile ranks, and Standard Age Scores.]

Composite scores
- Partial composites: VQ, VN, QN.
- Full composite: VQN (or C); do NOT use it for screening.
- Primary Battery: V or (VQ) versus N. Multilevel Battery: V versus QN.

Consequential validity: score warnings
- Age out of range; age unusual for coded grade
- Estimated test level; level unusual for coded grade
- Targeted score; too few items attempted to score
- Many items omitted (slow and accurate)
- Extremely variable responses

Personal confidence intervals
- Is the pattern of item responses aberrant? Inconsistent across subtests within a battery?
- Personal Standard Error of Measurement (PSEM).
- [Example score report: V SAS 120 (PR 89), Q SAS 116 (PR 84), N SAS 125 (PR 94), each plotted with its confidence band on a percentile scale from 1 to 99.]

Score profiles: the CogAT 6 "ABC" profile system
Measuring the pattern:
- "A" profiles: confidence bands overlap for all three scores; scores are at roughly the sAme level.
- "B" profiles: one score is aBove or Below the other two scores, which do not differ.
- "C" profiles: two scores Contrast.
- "E" profiles: Extreme B or C profiles (a difference of 24 or more SAS points).

Examples
- "A" profile: V 120 (PR 89), Q 116 (PR 84), N 125 (PR 94).
- "B" profiles: V 120 (89), Q 116 (84), N 100 (50) is B (N–); V 95 (38), Q 92 (31), N 110 (73) is B (N+).
- "C" profile: V 120 (89), Q 110 (73), N 100 (50) is C (V+ N–).
- Extreme "C" profile: V 120 (89), Q 107 (67), N 92 (31); SAS max – SAS min = 28, so E (V+ N–).
- The full profile code combines the score level (the median, or middle, age stanine, e.g., 6, 5, 8, or 2) with the pattern (e.g., A, B (V+), C (Q+ V–), E (N+ V–)).
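As a rough illustration of the profile logic just described, the sketch below classifies a set of V/Q/N scores as A, B, C, or E using confidence bands of plus or minus one PSEM around each SAS score and the 24-point rule for extreme profiles. The band width (5 SAS points here), the tie-breaking details, and the exact labeling are assumptions for illustration only; the operational CogAT 6 rules are more detailed.

```python
# Minimal sketch of the CogAT 6 "ABC" profile logic described above.
# Assumptions (not the operational scoring rules): confidence bands are
# SAS +/- 1 PSEM, and "E" means the highest and lowest SAS differ by >= 24.

def overlaps(a, b, sem):
    """Do the +/- sem bands around scores a and b overlap?"""
    return abs(a - b) <= 2 * sem

def classify_profile(v, q, n, sem=5):
    scores = {"V": v, "Q": q, "N": n}
    names = list(scores)
    pairs = [("V", "Q"), ("V", "N"), ("Q", "N")]
    same = {tuple(sorted(p)) for p in pairs
            if overlaps(scores[p[0]], scores[p[1]], sem)}
    spread = max(scores.values()) - min(scores.values())

    if len(same) == 3:
        return "A"                            # all three bands overlap
    label = ""
    for x in names:                           # one score apart from the other two -> B
        others = [y for y in names if y != x]
        if (tuple(sorted(others)) in same
                and all(not overlaps(scores[x], scores[y], sem) for y in others)):
            sign = "+" if scores[x] > scores[others[0]] else "-"
            label = f"B ({x}{sign})"
            break
    if not label:                             # otherwise a contrast between extremes -> C
        hi = max(names, key=scores.get)
        lo = min(names, key=scores.get)
        label = f"C ({hi}+ {lo}-)"
    if spread >= 24:
        label = "E" + label[1:]               # extreme B or C profile
    return label

# Example: Naomi's SAS scores from the case study later in this deck
print(classify_profile(107, 85, 109))         # -> "E (Q-)"
```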
CogAT6 profile frequencies for students in the K–12 population and for students with two stanine scores of 9

  Profile     % of K–12 population   % of stanine-9 group
  A                  33                      37
  B                  42                      27
    B+               (21)                     (6)
    B–               (22)                    (21)
  E                   7                      19
    E(B+)             4                       (3)
    E(B–)             3                      (16)

Comparing CogAT with other tests

Reliability
- Many estimates exist for a given test, reflecting different sources of error.
- Correlation versus standard error of measurement (SEM): correlations depend on sample variability and are easily misinterpreted.
- SEM: the typical SD of the distribution of test scores if each student could be tested many times. A person-level estimate (the PSEM) is available only on CogAT.

SEM for SAS scores: standard errors of measurement for individual & group tests

                     WISC-IV   SB-V   CogAT 6   OLSAT8   InView
  Verbal               3.9      3.6     3.4       5.7      5.3
  Nonverbal            4.2      3.9     3.7       5.8      4.5
  Quantitative                  4.5     3.3                5.3
  Comp/Full Scale      2.8      2.8     2.2       5.7      3.5
  (Also: Raven 3.0; NNAT 6.1)

CogAT is more reliable than
- Individually administered tests: SB-V, WISC-IV
- Group-administered tests: InView, Otis-Lennon (OLSAT), NNAT

Conditional standard error of measurement
[Figure: conditional SEM for the CogAT 6 Verbal Battery, Level A; the SEMs of the USS score and of the raw score are plotted against number correct (0–65).]

[Table: conditional SEMs for CogAT6 Verbal USS scores, by test level (K, 1, 2, A, B, C, D) and USS score range (roughly 191–270). Within each level, conditional SEMs are smallest in the middle of the range (roughly 4–7 USS points at the higher levels) and rise sharply near the top of the level (roughly 13–17 USS points; about 15.4 at the 95th PR and 16.4 at the 99th PR).]

Out-of-level testing?
- SAS or PR scores?
- Primary Battery to Multilevel Battery? Requires individual testing; assumes the child can use a machine-readable answer sheet; the Quantitative Battery assumes familiarity with numerical operations.
- Levels A–H? Common time limits & directions.

Validity
- Construct representation: all three aspects of fluid reasoning ability.
- Predictive: excellent for predicting current and future academic achievement; predictions are the same for all ethnic groups.
- Consequential: no other test comes close.

Validity: construct representation
- Carroll's three-stratum theory of human abilities: Gf (fluid reasoning abilities) comprises verbal (sequential) reasoning, quantitative reasoning, and figural (inferential) reasoning.
- Correlation between the WISC Full Scale score and the CogAT Composite = .79.

Predictive validity: correlations with current and subsequent achievement
- Within-battery predictions are strong: Verbal with Reading and Social Studies (r = .4–.8); Quantitative with Mathematics (r = .4–.8); Figural/Nonverbal with Mathematics (r = .4–.7).
- Across batteries, multiple correlations are typically R = .8; negative for verbal achievement after controlling for g.
- Often better than prior achievement in the domain.
- The V and QN partial composite is especially useful.
- Within-ethnic-group correlations are the same.
- Implications for TALENT identification.
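To connect the SEM figures above with reliability coefficients: on a score scale with standard deviation SD, SEM = SD * sqrt(1 - reliability), so a reported SEM implies a reliability and a confidence band. The sketch below assumes an SAS-type scale with mean 100 and SD 16 (the values used in the NCE conversion at the end of this deck); the other tests in the table use similar but not identical IQ-type scales, so treat the implied reliabilities as approximations.

```python
import math

SD_SAS = 16  # CogAT SAS scale: mean 100, SD 16 (see the NCE conversion slide)

def sem_from_reliability(reliability, sd=SD_SAS):
    """SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1 - reliability)

def reliability_from_sem(sem, sd=SD_SAS):
    """Invert the SEM formula to recover the implied reliability."""
    return 1 - (sem / sd) ** 2

def confidence_band(score, sem, z=1.96):
    """Approximate 95% confidence band around an observed score."""
    return score - z * sem, score + z * sem

# A CogAT 6 Verbal SEM of 3.4 implies a reliability of about .955 ...
print(round(reliability_from_sem(3.4), 3))
# ... and an SAS of 120 carries an approximate 95% band of about 113 to 127.
print(confidence_band(120, 3.4))
```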
Consequential validity: advice on score interpretation?
- An early 20th-century theory of a "culture-fair measure of g" versus a 21st-century theory of reasoning abilities, drawing on:
  - Evidence from research on human abilities
  - Evidence from predictions of academic achievement
  - Evidence from ATI research
  - Evidence from cognitive psychology

Consequential validity: score use
- Does every child (and teacher) receive potentially useful information?
- Specific suggestions for how to use the level and profile of scores to assist the child in learning by adapting instruction to better meet his or her learning style, build on cognitive strengths, and shore up weaknesses.
- Interpretive Guide for Teachers & Counselors; Short Guide for teachers (free online); profile interpretation system (free online).

Norms
- Flynn effect (next slide).
- Shaunessy et al. (2004): the Cattell Culture Fair test scored 17.8 IQ points higher than the NNAT.
- Project Bright Horizon in Phoenix: 2,000 K–6 children, about half ELL, tested with CogAT, Raven, and NNAT; the Raven scored 10 SAS points higher than CogAT or NNAT.

Example of the Flynn effect
[Figure (shown earlier): gains in Wechsler–Binet IQ for the U.S. White population, 1910–1990.]

Mistakes in norming the NNAT
[Figure: NNAT score standard deviations by test level (A–G), estimated from George (2001), Naglieri & Ronning (2000), and Project Bright Horizon.]

True versus reported NAI scores, by NNAT test level

              True NAI score
  Level    100    115    130    145
  A        100    121    142    163
  B        100    119    139    158
  C        100    119    137    156
  D        100    117    134    151
  E        100    115    130    145
  F        100    116    132    149
  G        100    116    132    148

Over-identification rates for the number of students with NAI scores above 115, 130, and 145

              True NAI cut score
  Level    115    130    145
  A        1.5    3.4    11.9
  B        1.4    2.6     7.3
  C        1.3    2.3     5.8
  D        1.2    1.7     2.9
  E        1.0    1.0     1.0
  F        1.1    1.4     2.0
  G        1.1    1.4     1.9
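The over-identification rates above follow directly from inflated standard deviations: if scores at a test level are spread out by some factor relative to the nominal SD, the proportion of students pushed past a fixed cut score grows accordingly. Below is a minimal sketch of that arithmetic, not a reanalysis of the NNAT norming data; it assumes normally distributed scores, a nominal NAI scale with mean 100 and SD 15, and an inflation factor of 1.4 chosen to roughly match the level-A rows of the tables above.

```python
# Illustration only: how an inflated score SD at a test level translates
# into over-identification at fixed cut scores. The nominal scale (mean 100,
# SD 15) and the 1.4 inflation factor are assumptions for illustration.
from scipy.stats import norm

MEAN, SD_NOMINAL = 100, 15

def reported_score(true_score, inflation):
    """Reported score when the level's scale spreads scores by `inflation`."""
    return MEAN + (true_score - MEAN) * inflation

def over_identification(cut, inflation):
    """Ratio of students above `cut` under the inflated vs. correct scale."""
    z_true = (cut - MEAN) / SD_NOMINAL
    z_inflated = z_true / inflation      # inflated scores cross the cut sooner
    return norm.sf(z_inflated) / norm.sf(z_true)

for cut in (115, 130, 145):
    print(cut,
          round(reported_score(cut, 1.4)),           # roughly 121, 142, 163
          round(over_identification(cut, 1.4), 1))   # roughly 1.5, 3.4, 11.9
```

With an inflation factor of about 1.4 this simple model reproduces the level-A row of both tables almost exactly, which is the sense in which the SD errors and the over-identification rates are two views of the same problem.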
Interpreting CogAT scores

Primary uses of CogAT (as above): (1) to guide efforts to adapt instruction to the needs and abilities of students; (2) to provide an alternative measure of cognitive development; (3) to identify students whose predicted levels of achievement differ markedly from their observed levels of achievement.

Myths about adapting instruction
1. All students are pretty much alike.
   [Figure: Reading Vocabulary developmental standard scores across grades K–12, plotted at the 1st, 20th, 50th, 80th, and 99th percentiles.]
2. Every student is unique.
3. Adaptations should be based on self-reported learning styles.
4. If the method is right, the outcome will be good.
   Examples of correlations (predictor and criterion, r, N):
     Aspirin and reduced risk of death by heart attack                              .02   22,071
     General batting skill as a Major League baseball player and hit success
       on a given instance at bat                                                   .06       –
     Calcium intake and bone mass in premenopausal women                            .08    2,493
     Effect of nonsteroidal anti-inflammatory drugs (e.g., ibuprofen) on
       pain reduction                                                               .14    8,488
     Weight and height for U.S. adults                                              .44   16,948
5. Individualization requires separate learning tasks.

Important characteristics of students
- Cognition (knowing): domain knowledge & skill; reasoning abilities in the symbol systems used to communicate knowledge (verbal, quantitative, spatial).
- Affection (feeling): anxiety, interests, working alone or with others.
- Conation (willing): persistence, impulsivity.

Important characteristics of classrooms
- Structure
- Novelty / complexity / abstractness
- Dominant symbol system
- Opportunities for working alone or with others

General principles of instructional adaptation
- Build on strength
- Focus on working memory
- Scaffold wisely
- Emphasize strategies
- When grouping, aim for diversity

Case study: Naomi

                 No. of   Number      Raw            Age scores       Grade scores
                 items    attempted   score   USS    SAS   PR   S      PR   S
  Verbal           40        39        31     148    107   67   6      59   5
  Quantitative     40        38        18     109     85   17   3      11   2
  Nonverbal        40        40        30     147    109   71   6      60   6
  Composite                                   135    100   50   5      38   4

  Profile: 6E (Q–). [The V, Q, and N age percentile ranks of 67, 17, and 71 are plotted with their confidence bands on a 1–99 scale.]

Primary use 2: CogAT as an alternative measure of cognitive development

ITBS–CogAT correlation
[Figure: scatterplot of ITBS scores (low to high) against CogAT scores (low to high); with a cut score on each test, high scorers fall into three groups: identified by CogAT only, by both tests, or by ITBS only.]

Proportion of students identified by one test who are also identified by the second test

                       Correlation between tests
  Cut score      0.50    0.60    0.70    0.80    0.90
  Top 1%         0.13    0.19    0.27    0.38    0.54
  Top 2%         0.17    0.23    0.31    0.42    0.58
  Top 3%         0.20    0.26    0.35    0.45    0.60

"Do not use the Composite score to screen children for academic giftedness."
- Thorndike & Hagen (1984), CogAT Form 4
- Thorndike & Hagen (1992), CogAT Form 5
- Lohman & Hagen (2000), CogAT Form 6

Generally good news for low-achieving students
- The lower the student's score on an achievement test, the greater the likelihood that CogAT scores will be higher, especially on the Nonverbal Battery.

Primary use 3: identifying students whose predicted achievement differs markedly from their observed achievement

Predicting achievement from ability
[Figure: predicted achievement score plotted against Standard Age Score (70–130), with the distribution of achievement shown for students with an SAS of 110.]

Moderate correlation
[Figure: achievement plotted against ability with a moderate correlation, marking unexpectedly high achievement, the expected level of achievement, and unexpectedly low achievement (students A and B).]

Predicting achievement vs. flagging ability–achievement discrepancies
- Who are the students (at any achievement level) who are most likely to improve if given new motivation or instructional resources?
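One way to operationalize this flagging idea is to regress achievement on ability and flag students whose observed achievement falls well below (or above) the value predicted from their reasoning scores. The sketch below is a generic illustration of that idea, not the scoring service's procedure; the correlation of .6, the use of z-scores, and the one-standard-error threshold are all assumptions.

```python
# Generic sketch of regression-based discrepancy flagging (not CogAT's
# operational procedure). Both scores are assumed to be z-scores within
# the relevant norm group; r is the assumed ability-achievement correlation.

def expected_achievement(ability_z, r=0.6):
    """Regression-based prediction of achievement from ability."""
    return r * ability_z

def flag_discrepancy(ability_z, achievement_z, r=0.6, threshold=1.0):
    """
    Compare observed achievement with its predicted value, in units of the
    standard error of prediction, sqrt(1 - r**2).
    """
    residual = achievement_z - expected_achievement(ability_z, r)
    se_prediction = (1 - r ** 2) ** 0.5
    if residual <= -threshold * se_prediction:
        return "achievement well below prediction (possible underachievement)"
    if residual >= threshold * se_prediction:
        return "achievement well above prediction"
    return "achievement about as predicted"

# A student with strong reasoning scores (z = +1.5) but average achievement:
print(flag_discrepancy(ability_z=1.5, achievement_z=0.0))
```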
Reasoning ability > achievement
1. Underachievement: poor effort, instruction, etc.
2. Well-developed ability to transfer knowledge & skills to novel situations: evidence for practice in varied contexts.

Achievement > reasoning ability
1. Overachievement: unusual effort, good instruction.
2. Difficulty in applying knowledge and skills in unfamiliar contexts: need for integration and cross-course transfer.

General issues in selection

Golden rules of selection
- Identification criteria must be logically and psychologically tied to the requirements of the day-to-day activities that students will pursue. Mathematics? Literary arts? Visual arts?
- Differentiated selection implies differentiated instruction.

Example: r = .6
[Figure series: scatterplots of Mathematics Achievement against Nonverbal Reasoning, both in percentile ranks, with r = .6.]

Imprecision of even high correlations
- Given r = .8, what is the likelihood that a student who scores in the 60th–70th PR at Time 1 will score in the 60th–70th PR at Time 2? (See Lohman, D. F., 2003, Tables of prediction efficiencies, and the identification-overlap table above.)

Regression to the mean
- The tendency of students with high scores to obtain somewhat lower scores upon retest.
- It is 0 at the mean and increases with distance from the mean.
- Easily predicted from the correlation: Ypred = Mean + r(Y – Mean). For example, with r = .8 and a mean of 100, a student who scores 130 has a predicted retest score of 100 + .8 × 30 = 124.

Causes of regression to the mean
- "Errors" of measurement, often much larger for high-scoring students
- Differential growth rates
- Changes in the abilities measured by the tests at Time 1 and Time 2 (especially achievement tests)
- Changes in the norming population (school sample or national age sample)

Reducing regression
- Use the most reliable tests available (judge by the SEM on the reported score scale).
- Avoid accepting the highest score as the best estimate of ability.
- Average scores: ability and achievement test scores within a domain (e.g., math achievement with CogAT Q or QN); achievement at Time 1 and Time 2.
- Revolving-door policies.

Combining scores: "and," "or," or "average"
- "And": high scores on Test 1 and Test 2.
- "Or": high scores on Test 1 or Test 2.
- "Average": high scores on the average of Test 1 and Test 2.
- [Figure 5: plots of the effects of the three rules: (a) high scores on Test 1 and Test 2; (b) high scores on Test 1 or Test 2; (c) high scores on the average of Test 1 and Test 2.]
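To see how the three combination rules differ in practice, the simulation below draws correlated scores for a large synthetic cohort and counts how many students each rule would select. The bivariate-normal model, the correlation of .6, and the 95th-percentile cut are assumptions chosen to match the examples in this deck; the same code also reproduces the flavor of the identification-overlap table shown earlier.

```python
# Simulation of the "and", "or", and "average" selection rules for two
# correlated tests. Assumes bivariate-normal scores; r and the cut score
# are illustrative values taken from the examples in this deck.
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.6, 1_000_000
cov = [[1.0, r], [r, 1.0]]
t1, t2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

cut = np.quantile(t1, 0.95)          # roughly the 95th-percentile cut score

rules = {
    "and":     (t1 >= cut) & (t2 >= cut),
    "or":      (t1 >= cut) | (t2 >= cut),
    "average": (t1 + t2) / 2 >= np.quantile((t1 + t2) / 2, 0.95),
}
for name, selected in rules.items():
    print(f"{name:7s} selects about {selected.mean():.1%} of students")

# Of the students above the cut on test 1, what share also clear it on test 2?
both = ((t1 >= cut) & (t2 >= cut)).sum() / (t1 >= cut).sum()
print(f"overlap at the top 5%: about {both:.0%}")   # roughly 30% when r = .6
```

With r = .6, the "and" rule admits only about 1.5% of students, "or" admits about 8.5%, and "average" admits 5% by construction, which is why the choice of combination rule matters as much as the choice of cut score.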
Screening tests
- You administer a screening test to reduce the number of students who must be given the admissions test.
- Assume a correlation of r = .6 between the two tests, and that students must score at the 95th PR or higher on the admissions test.
- What cut score on the screening test will include all of those who would meet this criterion?

Proportion of students who exceed the admissions-test cut (top 5%, 3%, or 1%) who would be captured by a screening cut at the top X percent of the screening test (r = .6)

                         Admissions-test cut
  Screening-test cut    Top 5%   Top 3%   Top 1%
  Top 30%                0.80     0.84     0.91
  Top 25%                0.75     0.80     0.87
  Top 20%                0.68     0.73     0.82
  Top 15%                0.59     0.65     0.75
  Top 10%                0.48     0.54     0.65
  Top 5%                 0.31     0.36     0.48
  Top 3%                 0.22     0.26     0.36

Screening might make sense
- When the admissions test is expensive to administer
- When the correlation between the admissions and screening tests is very high
- When there are many more applicants than places in the program
- When the false-rejection rate is not an issue

Local versus national norms
- Except for regional or national talent searches, the PRIMARY reference group is not the nation or even the state but the school or school district.
- The need for special instruction depends on the discrepancy between the child's level of cognitive and academic development and that of his or her classmates.
- Take multiple perspectives: the nation, the local population, and opportunity-to-learn subgroups within the local population.

Identification of talent in special populations

ELL children
- Identify academic talent, not giftedness.
- The tradeoff: measuring the right things approximately for ELL students, or the wrong things with greater accuracy.

Inference of aptitude?
- Aptitude is inferred when someone learns in a few trials what others learn in many trials.
- Opportunity to learn is critical; common norms are appropriate only if experiences are similar.
- Placement by achievement.

Multiple perspectives
- The need for special programming depends most importantly on the discrepancy between a child's achievements and abilities and those of his or her classmates (except for regional talent searches, summer programs that draw from different schools, etc.).
- Make better use of local norms! For ELL students in grade 3, compare scores to other ELL students in grade 3, to other grade 3 students in the district or school, and to other grade 3 students in the nation.

Multiple programming options
- The current level of achievement is the primary guide.
- Programming goal: to improve achievement at a rate faster than would otherwise occur.
- For on- and below-grade-level achievement, options include tutors and after-school or weekend classes and clubs; the motivational component is critical.
- For achievement well in advance of peers, consider single-subject acceleration.

Combining ITBS and CogAT
- Grades K–2: average CogAT V and ITBS Reading Total; average CogAT Q and ITBS Math Total; CogAT NV stands alone.
- Grades 3–12: average CogAT V and ITBS Reading Total; average CogAT QN and ITBS Math Total.
- Use NCE scores; they can be averaged. Then sort by grade and OTL group.
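A minimal sketch of the combination just described: convert each score to an NCE, average the CogAT and ITBS scores that measure the same domain, and then rank students within grade and within opportunity-to-learn group. The PR-to-NCE conversion uses the inverse-normal formula given at the end of this deck; the student records and group labels here are made up for illustration.

```python
# Sketch of combining CogAT and ITBS scores on the NCE scale, as described
# above (grades 3-12: CogAT V with ITBS Reading Total, CogAT QN with ITBS
# Math Total). Example records and group labels are hypothetical.
from statistics import NormalDist

def pr_to_nce(pr):
    """NCE scale: mean 50, SD 21.06 (equivalent to Excel's NORMINV)."""
    return NormalDist(mu=50, sigma=21.06).inv_cdf(pr / 100)

students = [
    # name, grade, OTL group, CogAT V PR, ITBS Reading PR, CogAT QN PR, ITBS Math PR
    ("A", 3, "ELL",     88, 80, 95, 90),
    ("B", 3, "non-ELL", 97, 94, 85, 88),
    ("C", 3, "ELL",     70, 92, 75, 96),
]

for name, grade, group, v, rdg, qn, math_pr in students:
    verbal = (pr_to_nce(v) + pr_to_nce(rdg)) / 2       # same construct: average
    quant = (pr_to_nce(qn) + pr_to_nce(math_pr)) / 2
    print(f"grade {grade} {group:8s} {name}: "
          f"verbal NCE {verbal:5.1f}, quantitative NCE {quant:5.1f}")

# In practice these composites would then be sorted within grade and within
# each opportunity-to-learn (OTL) group, and local norms applied.
```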
Integrating ability, achievement, and teacher ratings
- See Lohman, D. F., & Renzulli, J. (2007). A simple procedure for combining ability test scores, achievement test scores, and teacher ratings to identify academically talented children.
- The matrix crosses a teacher rating on the Renzulli Scales (Learning Ability, Motivation, or Creativity; NOMINATED students only) with Verbal Ability or Quantitative/Nonverbal Ability (ALL students):

                                    Teacher rating (Renzulli Scales)
  Verbal or Quant/NV ability        Below average           Above average
  > 97th PR                         II: Admit but watch     I: Admit
  > 80th PR                         IV: Try next year       III: Enrichment

Final thoughts: using CogAT
- Examine the warnings and confidence intervals on score reports.
- Do not screen using the Composite score; use V and QN instead (at grade 3 and above), combined with Reading Total and Math Total.
- Average measures of the same construct; use "or" for measures of different constructs.
- To identify talent, measure the right aptitudes, but then compare scores to the proper norm group(s); emphasize local norms for in-school programs.

ELL
- Compare the performance of the ELL 3rd grader with that of other ELL 3rd graders.
- Be wary of national norms that you can purchase, especially on nonverbal tests (Raven, NNAT, ...).
- Nonverbal tests have a role to play, but they should never stand alone.
- Emphasize the identification of talent rather than the identification of giftedness.

General
- It is unwise to accept the highest score as the best estimate of ability.
- Combine ability and achievement test scores in principled ways.
- Teacher ratings are only as good as the teachers' training in making ratings; do not simply add teacher ratings and similar measures to ability/achievement scores.
- There is no way to measure innate ability; all abilities are developed. Measures of achievement and ability differ in degree, not kind.
- Future expertise is built on the base of current knowledge in a domain, the reasoning abilities needed for new learning in that domain, interest in the domain, and the ability to persist in the pursuit of excellence, all of which depend on opportunity and circumstance.

The End
www.cogat.com
http://faculty.education.uiowa.edu/dlohman

NCE scores
- Get them from the publisher for CogAT, or by table look-up (Table 32 in the CogAT Norms Manual).
- To convert PRs to NCE scores in Excel: NCE = NORMINV(PR/100, 50, 21.06).
- If SAS > 135: NCE = 21.06 × [(SAS – 100)/16] + 50.
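For completeness, here is the same conversion outside Excel: a direct translation of the two formulas above into Python. The only assumption beyond the slide is the use of the standard library's NormalDist in place of NORMINV.

```python
# Python translation of the NCE conversion formulas on the preceding slide.
from statistics import NormalDist

def nce_from_pr(pr):
    """NCE = NORMINV(PR/100, 50, 21.06)."""
    return NormalDist(mu=50, sigma=21.06).inv_cdf(pr / 100)

def nce_from_sas(sas):
    """For SAS > 135: NCE = 21.06 * (SAS - 100)/16 + 50."""
    return 21.06 * (sas - 100) / 16 + 50

print(round(nce_from_pr(89), 1))    # PR 89 -> about 75.8
print(round(nce_from_sas(148), 1))  # SAS 148 -> about 113.2
```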