NCATE: 2.1 Assessment System and Unit Evaluation

American University – School of Education, Teaching, and Health
2.1 How does the unit use its assessment system to improve candidate performance, program quality, and unit operations?
2a. Assessment System
SETH's program and assessment management system allows the EPP to operationalize the conceptual
framework's organizing principles of Community, Diversity, Equity, and Excellence and the EPP's
professional commitments of Knowledge, Beliefs, Practice, and Reflection. For initial programs, the
INTASC standards guide the system's multiple assessments, and the EPP's key assessments align to state
or SPA standards. Assessments for advanced programs align to learning outcomes identified by program
faculty.
The EPP's system includes comprehensive and integrated assessment and evaluation measures that
begin prior to initial program admission. Assessments are both formative and summative and provide
the EPP and its professional community with developmental and ongoing data about candidates,
programs, and overall EPP performance. These data inform candidate advisory actions and program
improvement decisions.
Initial and advanced program faculty and staff make decisions about candidate performance based on
multiple assessments at key decision points. Decision points for initial programs are (1) Program
Admission, (2) Student Teaching Entry, (3) Student Teaching Completion, (4) Program Completion, and
(5) Employment and Performance Follow-up. Decision points for advanced programs are (1) Program
Admission, (2) Advancement to Candidacy, (3) Program Completion, and (4) Employment and
Performance Follow-up. Details of each assessment at each decision point can be found in the
Assessment Guide in Exhibit 2.4.a.
The EPP employs several strategies to ensure all assessment procedures are fair, accurate, consistent,
and free of bias. First, key assessments are aligned with institutional, state, national, and professional
standards (Exhibit 2.4.a), ensuring that candidates are measured against recognized standards and the
organizing principles and professional commitments of the unit's conceptual framework. This
transparency allows candidates to reasonably predict evaluators' expectations. Second, all evaluators,
both clinical faculty and cooperating teachers, participate in at least one training session on overall
responsibilities and assessment tools. Third, the unit has developed rubrics for its assessments, which
are provided to candidates and evaluators. Fourth, the unit has developed policies for decision point
assessments that allow candidates to reflect on work they have completed and either resubmit
components that do not meet expectations or retake a practical component.
The EPP relies on its professional community, including full-time and part-time faculty, clinical faculty
supervisors, supervisor leads, methods leads, cooperating teachers, partner school leaders, and
program alumni, to evaluate the capacity and effectiveness of its assessment system. Supervisor and
methods leads (Exhibit 3.4.d: Clinical Experience Personnel Chart) provide support to the Director of the
Office of Teacher Education, ensuring a link among the cooperating teacher, the student, the faculty,
and program staff. Leads use assessments to monitor candidate progress and ensure data are available
to make decisions about candidate progression through Practicum and Student Teaching. Leads meet
with the Director of the Office of Teacher Education formally three times per year (August, January, and
June) and informally as needed to discuss candidate progress, provide feedback regarding practicum
and student teaching, report the experiences of cooperating teachers and other school personnel, and
discuss the quality of assessment data. Supervisor leads also work with individual supervisors to provide
mentoring and feedback to candidates. This lead structure was implemented in Fall 2012. The Special
Education Program follows a similar process among the Director of the Special Education Program,
faculty, clinical faculty supervisors, cooperating teachers, and school leaders. The faculty of the
Curriculum and Instruction program are currently developing that program's professional community,
which includes leaders from partner organizations and internship placement sites.
The directors of initial and advanced programs work closely with faculty to ensure the effective design
and implementation of key assessments. Each assessment includes a rubric or scoring guide that
indicates acceptable levels of performance and opportunities for resubmission. Given the EPP's cycle of
SPA resubmissions, review of the utility and validity of data produced through key assessments has
occurred mostly during the rejoinder process. Relevant feedback from SPA review is applied to the
assessments of initial programs reviewed by the state. Advanced program assessments are reviewed by
peer faculty through AU's Learning Outcomes process required for Middle States accreditation.
2b. Data Collection, Analysis, and Evaluation
The EPP uses a customized, web-based assessment and program management system called GoEd
(goed.american.edu), which is continuously maintained and regularly improved to meet the needs of
faculty, staff, candidates, and the professional community. Candidates in all programs enter the system
as prospects and maintain access after graduation.
GoEd allows access to data from the University's official database, including contact information, course
and schedule information, and grades. GoEd imports data directly from the ETS Praxis system. Access to
a candidate's account can be granted to internal users such as faculty, as well as external users such as
cooperating teachers.
GoEd provides regular, comprehensive data on program quality, unit operations, and candidate
performance at each decision point for initial and advanced programs. Candidates have access to
assessment rubrics and evaluative feedback. Each semester, candidates and faculty complete
assessments in the system. Users access the system to provide evaluative feedback about their
experiences and about the effectiveness of assessments. All data can be aggregated, summarized, and
reported. Data can be disaggregated by program and across time periods.
Data collected in GoEd are regularly and systematically compiled, aggregated, summarized, analyzed,
and reported internally and externally to improve candidate performance, program quality, and EPP
operations. GoEd data are used in conjunction with data from the AU Office of Institutional Research to
complete SPA reports, PEDS reports, Title II reports, CAEP Annual Reports, and the Learning Outcomes
report required by Middle States. Internally, data are reviewed to monitor candidate progress and program
effectiveness. See Exhibit 2.4.a for the 3-level Assessment Analysis Structure of Data Review and
Analysis, also described in Section 2c.
AU has a Policy on Student Academic Grievances that covers undergraduate and graduate students (see
Exhibit 2.4.e). The policy prescribes a 4-step process that includes review by an annually appointed EPP
grievance committee. The Dean holds the official grievance files, as well as a file of informal complaints.
GoEd also has an area for staff to track communication with an individual candidate, which allows all
involved staff to have a full understanding of actions taken regarding a candidate. Candidates may
access these comments upon request.
2c. Use of Data for Program Improvement
The EPP employs a 3-level assessment analysis structure to ensure data from assessments are reviewed
and analyzed regularly for program improvement (Exhibit 2.4.a). For initial programs, LEVEL I data are
reviewed each semester by faculty, clinical faculty, cooperating teachers, and the Director of the Office
of Teacher Education. These data are collected from course assessments, field experiences, student
teaching, and developmental portfolio tasks. Faculty and clinical faculty supervisors submit data and
review results to ensure candidates' successful program continuation. Faculty may also use results to
adjust instruction in a course or adjust how they approach supervisory tasks. If a faculty member or
clinical faculty member has evidence that an assessment may need to be adjusted, either to gain more
evidence of student performance or to better align with standards, they bring these suggestions to the
methods leads or supervisor leads, who present this information to the Office of Teacher Education or
the Director of the Special Education Program. During the June supervisor lead meetings, data from key
assessments are discussed. If there are compelling findings, program or assessment adjustments are
proposed. The program director then reviews these proposals with the Dean, and together they
determine whether changes to courses or non-key assessments should be made. The Teacher Education
Committee provides guidance on changes that affect an entire program or a key assessment. The
Director of the Special Education Program follows a similar process with faculty members, supervisor
leads, and the Special Education Committee.
The Dean, the Director of the Office of Teacher Education, and the faculty leaders who compile SPA and
state reports review LEVEL II data annually or biennially. LEVEL II data include results from program
assessments, providing a formal opportunity for the Dean and directors to review aggregated program
data with faculty leaders. Given the rejoinder process, each program has been reviewed approximately
every two years. Since the Fall 2012 and Spring 2013 SPA submissions, two programs have been nationally
recognized. We expect the remaining programs to become fully nationally recognized within the next
year. A proposed process for internal review is outlined in Section 2c under Moving Toward Target and
will begin in Fall 2016.
The Dean and Directors review data at the EPP level (LEVEL III) annually for initial programs. This analysis
occurs during the compilation of external reports, including PEDS reports, Title II reports, CAEP Annual
Reports, and the Learning Outcomes report required by Middle States. Advanced program data are
reviewed annually through the Learning Outcomes report.