2c. Procedures for ensuring fairness, accuracy, consistency, and freedom of bias for key assessments of candidate performance and evaluations of program quality and unit operations

The following procedures are in place for assessments of candidate performance:

Construct Accuracy
NCATE Definition: Key assessments are of the appropriate type and content such that they measure what they purport to measure. To this end, the assessments should be aligned with the standards and/or learning proficiencies that they are designed to measure.
Procedures:
- For assessments in which coursework grades are used, the topics and aspects of the course are aligned with the knowledge, skills, and dispositions described in the standards
- Faculty members annually review assessments to determine continued accuracy as standards and curricula are updated
- Learning outcomes are aligned with assessment strategies and communicated on syllabi
- Assessments utilize rubrics that include indicators representing developmental steps toward the standard

Consistency
NCATE Definition: Key assessments produce dependable results or results that would remain constant on repeated trials. Institutions can document consistency by providing training for raters that promotes similar scoring patterns, using multiple raters, conducting simple studies of inter-rater reliability, and/or comparing results to other internal or external assessments that measure comparable knowledge, skills, and/or professional dispositions.
Procedures:
- University supervisors, cooperating teachers, and candidates are trained on the Teacher Performance Assessment and on clinical/field-related forms
- Teacher Performance Assessment raters are nationally trained and calibrated
- Multiple raters are used to assess candidate performance and to evaluate clinical and field experiences
- Assessments utilize rubrics that include indicators representing developmental steps toward the standard
- When action plans are designed in response to dispositional issues, more than one faculty member is included in the meeting and in the design of the plan
- Field coordinators provide a second assessment and observation as soon as it appears that a problem may arise in clinical and field experiences

Fairness in Assessment and Avoidance of Bias
NCATE Definition: Fairness is the assurance that candidates have been exposed to the knowledge, skills, and dispositions that are being evaluated in key assessments and understand what is expected of them to complete the assessments. To this end, instructions and timing of the assessments should be clearly stated and shared with candidates. In addition, candidates should be given information on how the assessments are scored and how they count toward completion of programs. Avoidance of bias requires that the unit has addressed any contextual distractions and/or problems with key assessment instruments that introduce sources of bias and thus adversely influence candidate performance. Contextual distractions include inappropriate noise, poor lighting, discomfort, and the lack of proper equipment.
Problems with assessments include missing or vague instructions, poorly worded questions, and poorly reproduced copies that make reading difficult.
Procedures:
- Learning outcomes are aligned with assessment strategies and communicated on syllabi
- Dispositions are presented during new student orientation, reviewed during cohort application meetings, and provided in handbooks
- The timing of assessments is delineated for candidates, mentors, field supervisors, and faculty members in handbooks and syllabi
- To increase candidates' opportunity to address dispositional needs, a detailed disposition assessment was developed to provide observational data on focus areas
- Assessments that involve candidate performance are completed in the field, with support materials developed by the candidate in the setting of his or her choice
- Because of the complex nature of the Teacher Performance Assessment, candidates are supported in reviewing the syllabi and task demands in a step-by-step process during seminars or advanced methods courses while they are in the field
- In their annual review of assessments, faculty members examine each assessment to determine that instructions are clear and wording is explicit
- In all coursework, candidates are provided rubrics when assessments and assignments are initially discussed
- Using web-based assessments allows for consistency of presentation
- All data are aggregated and analyzed independently of programs

Procedures are also in place for the assessments that address program quality and unit operations:

Construct Accuracy
NCATE Definition: Key assessments are of the appropriate type and content such that they measure what they purport to measure. To this end, the assessments should be aligned with the standards and/or learning proficiencies that they are designed to measure.
Procedures:
- Assessments completed by employers, candidates, and graduates are all aligned with the conceptual framework
- After analyzing data for several years, instructor factors and course factors were delineated on the candidate evaluation of courses

Consistency
NCATE Definition: Key assessments produce dependable results or results that would remain constant on repeated trials. Institutions can document consistency by providing training for raters that promotes similar scoring patterns, using multiple raters, conducting simple studies of inter-rater reliability, and/or comparing results to other internal or external assessments that measure comparable knowledge, skills, and/or professional dispositions.
Procedures:
- Multiple assessments addressing the same areas (advising, clarity of curriculum, faculty quality and interactions, satisfaction, and program quality and relevance) are completed by faculty members, university supervisors, candidates, cooperating teachers, employers, and graduates
- Data are collected annually, allowing trend lines to be drawn and outliers to be identified
- All items are aligned with the conceptual framework
- All programs/courses/procedures are evaluated using the same core items; individual programs may choose to add others

Fairness in Assessment and Avoidance of Bias
NCATE Definition: Fairness is the assurance that candidates have been exposed to the knowledge, skills, and dispositions that are being evaluated in key assessments and understand what is expected of them to complete the assessments. To this end, instructions and timing of the assessments should be clearly stated and shared with candidates. In addition, candidates should be given information on how the assessments are scored and how they count toward completion of programs.
Avoidance of bias requires that the unit has addressed any contextual distractions and/or problems with key assessment instruments that introduce sources of bias and thus adversely influence candidate performance. Contextual distractions include inappropriate noise, poor lighting, discomfort, and the lack of proper equipment. Problems with assessments include missing or vague instructions, poorly worded questions, and poorly reproduced copies that make reading difficult.
Procedures:
- Assessment items are aligned with the conceptual framework
- In their annual review of assessments, faculty members examine each assessment to determine that instructions are clear and wording is explicit; an advisory panel reviews the satisfaction survey annually
- Efforts to collect candidate data on satisfaction surveys include a raffle, flyers posted throughout the college, postings on Blackboard, announcements in classrooms, emails to group lists, and faculty member announcements
- Unit operation and program evaluations are made available during the "lull" at the beginning of spring quarter
- All parties are assured of the anonymity of their responses; all data are collected and managed independently of the programs
- Using web-based assessments allows for consistency of presentation and anonymity
- When "paper" evaluations are used, faculty members are instructed to leave the room; evaluations are collected and turned in directly to the office by a candidate