Part 1: You may complete this "standardized test" portion of the exercise with your classmates, via discussion. However, to foster your learning, please refrain from simply copying classmates' answers!

For the following, use the Developmental Assessment of Young Children (DAYC): Examiner's Manual (Voress & Maddox, 1998; Austin, TX: Pro-Ed). For each item below, record the page number(s) where the relevant information is found.

Validity
- A test's validity needs to be assessed relative to its intended purpose/use. For what purpose(s)/use(s) was the DAYC intended?
- Content validity: how the test-makers developed their test items
- Content validity: data documenting that test items are selected to represent just the right level of difficulty (not too easy, not too hard)
- Content validity: data documenting that test items do not bias against some groups or give an advantage to others
- Criterion-related validity: correlations of the DAYC with other tests that measure the same construct
- Construct validity: the five basic constructs on which the DAYC is based
- Construct validity: testing the construct of developmental change (correlation of DAYC scores with chronological age)
- Construct validity: testing the construct that children who are at risk or who have disabilities will perform at developmentally lower levels (documentation that at-risk and disabled children receive lower scores than children who have not been so identified). (Note: Sensitivity and specificity are not reported; instead, the authors compare the scores of the clinical subgroups of children using a comparison statistic called a t-test.)
- Construct validity: because the test is developmental, the inter-correlations of the subtests should be significant (since all should show a developmental trend), even though the correlations themselves will be low to moderate (because the subtests examine different abilities)
- Construct validity: factor analysis. The traits found in factor analysis can be matched to the underlying developmental construct of the test, namely a single factor of "development" onto which all of the subtests load strongly. When factor analysis is conducted with different subgroups, again a single "developmental" factor emerges.
- Construct validity: item scores of individuals correlate with the corresponding subtest scores of those individuals, indicating that the items, as well as the test as a whole, measure the same developmental construct.

Reliability
- Internal-consistency reliability results across different age groups
- Internal-consistency reliability results across ethnic, gender, at-risk, and clinical groups
- Test-retest reliability results
- Inter-rater reliability results

- Standardization: the standardization (normative) sample
- Error in measurement: it is noted that the SEM is relatively small. (Note that, as a result, confidence intervals are not used.)
- The DAYC authors suggest that measurement error be addressed by subjectively recording any situational or examinee factors that may have influenced performance.

- The DAYC test items essentially consist of a set of criterion-referenced measures. Where is the six-line paragraph that tells you this? What are the three "testing" contexts in which these criterion-referenced measures are made?
- Qualifications of the tester
- Procedure for establishing entry points, basals, and ceilings
- How to calculate the child's chronological age (see the calculation sketch following this checklist)
- The names of the individuals who collected the standardization sample
- How the authors selected the individuals who would collect the normative sample
- The size of the standardization sample
- The demographic characteristics of the normative sample (i.e., what is the nature of the population represented by the normative sample?)
- A demonstration that demographic characteristics are comparably distributed across the different age groups
- Comparing examinee performance to that of the standardization sample
- Cross-battery approach
- Where to record the item scores
- How the item scores are summed (to yield the raw score)
- Conversion of the raw score into converted scores (standard score, percentile, and age equivalent)
- Conversion of summed standard scores into a quotient standard score called the General Development Quotient (GDQ)
- How to compare the child's DAYC converted scores to the converted scores on other developmental scales administered to the examinee
- Where to record/write testing results and summaries
- How to interpret raw and converted scores, e.g., using due caution when interpreting age-equivalent scores. (Note: Now that you have had SPHS 5780, this kind of information is "old hat" to you.)
- Procedures to follow when comparing the child's relative performance on different DAYC subtests, or when comparing the child's relative performance on the DAYC and other developmental tests. (Note: This looks a lot like cross-battery assessment, right?)
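The chronological-age item in the checklist above is the one purely arithmetic step, so a small sketch may help. The usual convention in developmental test manuals is to subtract the birth date from the test date, borrowing 30 days for a month and 12 months for a year whenever a column goes negative. Treat the sketch below only as an illustration under that assumed convention; the function name is hypothetical, and the DAYC manual's own procedure and rounding rules are what you should actually follow (and locate the page for).

```python
def chronological_age(birth_ymd, test_ymd):
    """Return (years, months, days) between a birth date and a test date.

    Uses the common convention of borrowing 30 days per month and
    12 months per year; a given manual (e.g., the DAYC) may specify
    different borrowing or rounding rules.
    """
    by, bm, bd = birth_ymd
    ty, tm, td = test_ymd

    years, months, days = ty - by, tm - bm, td - bd
    if days < 0:      # borrow one month, counted as 30 days
        days += 30
        months -= 1
    if months < 0:    # borrow one year, counted as 12 months
        months += 12
        years -= 1
    return years, months, days


# Example: born 2021-11-17, tested 2024-03-05 -> (2, 3, 18),
# i.e., 2 years, 3 months, 18 days old at testing.
print(chronological_age((2021, 11, 17), (2024, 3, 5)))
```

The age computed this way is typically what the manual then uses to set the entry point and to choose the appropriate normative table for converting raw scores.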
Part 2: Administer an oral mechanism examination with two other classmates, working in trios as discussed in class. Take turns acting as (1) examinee, (2) examiner, and (3) observer/recorder. Each person should turn in ONE completed oral mech form (three forms per group). At the top of the form, put the name of the examiner and the names of the other two people in the trio. (Please do NOT specify which of the other two was your examinee!)