2.1 What are the significant changes in how the unit uses its assessment system to improve candidate performance, program quality, and unit operations?

Significant changes in our use of the assessment system have emerged from our Transformation Initiative efforts:

1. Since it was initially designed, our assessment system has included (a) a candidate improvement cycle, tracking the collection, analysis, and use of data to improve candidate performance; and (b) a program and unit operations improvement cycle, tracking the collection, analysis, and use of data related to programs, procedures, and operations. With the Transformation Initiative, we have added a third improvement cycle: collecting, analyzing, and using data related to P-12 student outcomes in order to improve candidate performance, programs, and unit operations. This is a significant shift; it moves us from examining what we are doing, how we are doing it, and what our candidates are doing to examining the impact of our work in the world beyond the university.

2. Dean Johnson has proposed the development and institutionalization of a “mosaic of evaluations” that provides evidence of our impact on students, schools, and communities. This mosaic implies that a wide range of assessments and strategies is needed, and that the focus of assessment is on what happens as a result of our effort rather than on the effort itself. To encourage this shift in thinking, Dean Johnson released an RFP offering seed money for field testing of these efforts. In addition, a task force meets monthly to discuss ways to increase our assessment of our impact on students, schools, and communities. Three proposals that will begin collecting data during Spring 2012 include:

- Backtracking graduates’ performance at the University of Cincinnati and linking it to AYP and performance data for those employed in Cincinnati Public Schools
- A meta-analysis of candidates’ impact on P-12 students’ behavior change, generating effect sizes from single-case study designs
- A meta-analysis of candidates’ impact on P-12 students’ literacy skills, generating effect sizes from single-case study designs

3. We have moved to a transparent system. Assessments are web-based, and rubrics are presented to students. Candidates’ performance on those assessments is shared, and the use of those data to improve programming is shared with candidates, faculty members, and cooperating teachers.

4. We have become involved in a national effort to develop and implement the Teacher Performance Assessment. During the 2011-2012 academic year, we are engaged in a full pilot of this assessment, and feedback has consistently been submitted to the assessment development team at Stanford. In addition, programs have made changes in response to candidate and faculty discussions of the Teacher Performance Assessment, including teaching candidates the technology for documenting their performance on video, increasing opportunities to reflect in response to a prompt, and utilizing embedded formative assessments. In 2.3 we provide further information related to these changes.