WGU Competency Definition, 13 May 2011 (P. Schmidt)

DEFINING THE ELEMENTS OF COMPETENCE
Earning a Western Governors University (WGU) degree is based on a student’s demonstration of
competency. For purposes of definition and in the simplest terms, competency may be thought of as
possessing the knowledge, skills, and abilities to perform at the appropriate level for the degree being
awarded. In a traditional educational system, grades are based on individually developed curricula
and objectives created by individual course instructors; as a result, the means and criteria of
evaluation typically vary from instructor to instructor. In a competency-based system such as WGU’s,
the student demonstrates progression through mastery of multi-dimensional assessments built to
measure uniform statements of competence.
WGU’s comprehensive approach to competency-based education provides evidence, collected
through rigorous assessment during all phases of our programs, to ensure that our students are fully
competent. We define competency in terms of domains of knowledge and skill. Subject matter
experts draw upon a variety of resources such as practical experience, job task analysis, published
standards, and other research to define competence. Early on, WGU developed a methodology for
ensuring close alignment of content and pedagogy between its programs and key national, state, and
professional standards, commonly used national exams, and research data.
DOMAIN STRUCTURE
Once we define the elements of competency, we then organize them into a hierarchical domain
structure that supports development of courses of study and assessments. (See Figure 1, showing a
particular example from a Teachers College initial licensure program domain.)
Figure 1 - Domain Structure
The domain and subdomain levels serve strictly as organizational structures. The actual definition of
competency resides in the competency statements—integrated performances representing key
elements of competence—which are then operationally defined in terms of very specific, measurable
individual objectives. Every objective is directly measured at least once.
We then use several different types of assessment instruments (e.g., performance tasks, objective
exams, live observations, simulations, etc.), which address every individual competency statement
and objective, to measure the student’s actual competence in each subdomain. (See Figure 2.) In
other words, students receive credit only when they have passed every assessment, not just for
completing “activities.”
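To make the hierarchy concrete, here is a minimal sketch, in Python, of how such a domain structure might be modeled. The class names and the unmeasured_objectives check are illustrative assumptions, not WGU’s actual data model; the check simply enforces the rule above that every objective is directly measured at least once.

    # A minimal illustrative sketch (not WGU's actual data model) of the
    # hierarchy described above: domains contain subdomains, subdomains
    # contain competency statements, and each statement is operationally
    # defined as measurable objectives tied to assessments.
    from dataclasses import dataclass, field

    @dataclass
    class Objective:
        text: str
        assessments: list = field(default_factory=list)  # instruments that measure it

    @dataclass
    class Competency:
        statement: str
        objectives: list = field(default_factory=list)

    @dataclass
    class Subdomain:
        name: str
        competencies: list = field(default_factory=list)

    @dataclass
    class Domain:
        name: str
        subdomains: list = field(default_factory=list)

    def unmeasured_objectives(domain):
        """Return objectives not yet tied to any assessment, checking the
        rule that every objective is directly measured at least once."""
        return [obj
                for sub in domain.subdomains
                for comp in sub.competencies
                for obj in comp.objectives
                if not obj.assessments]

In such a model, credit for a subdomain would be granted only when every assessment attached to its objectives has been passed, mirroring the rule above.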
Figure 2 — Assessments for Each Domain/Subdomain (including Learning Progress Assessments)
[Figure 2 shows a domain and its subdomains, with each subdomain measured by performance tasks, objective exams, observations, and other assessments; passing these assessments demonstrates competence.]
Traditional objective exams tend to focus almost exclusively on content knowledge, which is only one
aspect of competence. By using a variety of assessment types designed to address the full range of
knowledge, skills, and abilities, we come away with a more complete picture of the student’s actual
competence. Appropriate assessment types are chosen based on the taxonomy level of the test objectives
measured. (See Figure 3; this example relates to Teachers College initial licensure programs.)
Figure 3 — Types of Assessment Used to Test Elements of Competence
[Figure 3 is a matrix crossing assessment types (exams, performance tasks, pre-clinical exercises, observations, work sample) with knowledge (“know”), skills (“know how”), and abilities (“do it!”); shading indicates whether each assessment type tests that element heavily, somewhat, or not at all.]
ONGOING PERFORMANCE ASSESSMENT
Student performance is measured during all phases of the degree program. All students must initially
take the WGU Readiness Assessment. Throughout their respective programs, students are given pre-assessments (learning progress assessments) to determine their pattern of mastery and non-mastery
of key competencies within each domain. These pre-assessments provide information critical to
defining the student’s individualized Academic Action Plan (AAP). The AAP lists all assessments
students must pass to complete their programs. Every student in a given program must pass the same
set of assessments.
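As an illustration of that relationship, the hypothetical sketch below (the names and data shapes are invented for this example) shows a pre-assessment mastery pattern shaping where study time goes while the required assessment list stays identical for every student in the program.

    # Hypothetical sketch: the required assessment list is fixed per
    # program; the pre-assessment result only shapes where study time goes.
    REQUIRED_ASSESSMENTS = ["Objective Exam A", "Performance Task B", "Observation C"]

    def build_plan(mastery):
        """mastery maps competency area -> True (mastered) / False."""
        focus = [area for area, mastered in mastery.items() if not mastered]
        return {"required_assessments": REQUIRED_ASSESSMENTS, "focus_areas": focus}

    plan = build_plan({"Assessment literacy": True, "Classroom management": False})
    # Every student's plan lists the same required assessments;
    # here plan["focus_areas"] == ["Classroom management"].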
Students complete many sets of performance tasks during their Course of Study. The tasks allow
students, with the help of their mentors, to judge their own capabilities and competencies relative to
the capabilities and competencies required for mastery. Then, students’ performance on summative
competency exams enables them to demonstrate that they have fully mastered the competencies and
performance skills of each domain.
At the beginning of each term, the student and mentor establish an Academic Action Plan, which is a
detailed blueprint of the required learning resources and assessments that comprise the program.
This plan identifies the competency areas that the student needs to develop and the assessments the
student will attempt, with agreed-upon required completion dates (RCDs) for each. Once this plan is
complete, the student accesses the Course of Study (syllabus) for a particular area. The Course of
Study lists and provides links to learning resources that are aligned with the competencies for that
area. The resources are organized and sequenced, and they provide a pacing/timing context. The student begins
to work through the Course of Study and in that process accesses as many of the different learning
resources as necessary. Depending on the level of competence a student brings to a certain area of
study and performance, he or she may move more quickly through the Course of Study than the
pacing guide indicates. While all learning resources are available to students, they may opt not to use
some in areas of proficiency. Students demonstrate the knowledge and skills they gain from learning
resources through the successful completion of WGU assessments. (Please see a description of WGU
library and information resources later in this document.)
ASSURING THE ASSESSMENTS ARE FAIR, ACCURATE, CONSISTENT, AND FREE FROM BIAS
The assessment system is designed to provide fair, valid, high quality assessment tools for managing
programs and operations for WGU students and staff. WGU employs a comprehensive development
process to ensure that every aspect of its assessment program meets the highest legal and
professional standards. The process begins with careful definition of program content and
development of a detailed test specification, followed by the writing and detailed, multi-step review
of assessment items and the setting of performance standards for those items. Items are field tested
prior to the building of live test forms. Finally, studies are conducted to establish the fairness, accuracy,
and consistency of WGU’s performance assessment procedures, and practices are revised in keeping
with the results of these studies.
Fairness, accuracy, and consistency are also directly addressed in the grading process. Professional
evaluators (graders) with backgrounds in the assessment area are trained, using established rubrics for
each assessment, to evaluate content and to provide appropriate feedback to students. They
evaluate a specific set of assessments and are not given the identity of the student, thus limiting
personal bias.
The key underlying concepts of validity, quality, fairness, and consistency are supported by a defined
process. Clearly, these key concepts can also be viewed as outcomes of our process. The overall
process is founded on best practices as defined by authoritative bodies. While not exhaustive, the
summary below describes how WGU addresses each foundational concept.
Validity: Processes are consistent with best measurement practices. Table 1 clearly shows that we
follow best practices and collect all the necessary validity evidence in the development of our
assessments. Validity is highly interrelated with quality, fairness, and consistency.
Quality: Our review process, reviewer qualifications and training, and management practices ensure
that every item is checked, rechecked, and then approved by appropriate subject matter experts. A
very rigorous review process covers all aspects of the testing materials including such things as
correctness, editorial quality, absence of potentially biasing language, technical merit, scoring, etc.
Fairness: Using best practices as the model, our process ensures all candidates are treated fairly and
that testing materials meet fairness requirements. For example, if statistical analysis determines that
an item is unfair (perhaps due to a computer error), we hand-score all students’ exams to determine
who was affected. If any students were unfairly failed on the exam, they are notified of the change in status.
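The source does not specify the exact rescoring method, but one common convention, sketched below under that assumption, is to drop the flagged item, recompute each student’s proportion correct on the remaining items, and identify students whose fail would become a pass.

    # Sketch of rescoring after an item is flagged as unfair. The
    # drop-and-rescale convention and the data shapes are assumptions.
    def newly_passing_after_drop(responses, flagged_item, cut):
        """responses: {student: {item_id: 1 or 0}}; cut: original
        proportion-correct cut score (e.g., 0.70). Returns students whose
        fail becomes a pass once the flagged item is removed."""
        affected = []
        for student, items in responses.items():
            old = sum(items.values()) / len(items)
            kept = [score for item, score in items.items() if item != flagged_item]
            new = sum(kept) / len(kept)
            if old < cut <= new:
                affected.append(student)
        return affected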
Consistency: Consistency comes in many forms. First, we use the same development process across all
development projects, ensuring consistency in the materials we produce. Next, we carefully control
the administration and scoring of exams to ensure consistent testing conditions. Further, we strive to
develop multiple test forms of like content and difficulty level. Finally, we ensure consistency of the
overall testing program by regularly reviewing statistical performance, anecdotal feedback, variance
reports from proctors, and periodic program reviews.
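As a sketch of how “multiple test forms of like content and difficulty” can be assembled, the snippet below samples items per content area until the form’s average difficulty lands near a target. It is an illustration under simplifying assumptions (a flat item pool with known p-values), not WGU’s production algorithm.

    import random

    # Illustrative blueprint-driven form assembly: sample items per
    # content area until the form's mean difficulty (p-value) is close
    # to a target, mirroring "pulled to match specifications and
    # expected difficulty parameters." All parameters are assumptions.
    def assemble_form(pool, blueprint, target_p, tol=0.03, tries=1000):
        """pool: list of {"id", "area", "p"}; blueprint: {area: n_items}."""
        by_area = {a: [i for i in pool if i["area"] == a] for a in blueprint}
        for _ in range(tries):
            form = [item for area, n in blueprint.items()
                    for item in random.sample(by_area[area], n)]
            mean_p = sum(i["p"] for i in form) / len(form)
            if abs(mean_p - target_p) <= tol:
                return form
        raise ValueError("no form met the difficulty target; widen tol or pool")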
Table 1 – Assessment Development Process
Program Definition: Created by the Associate Provost for Program Management, Senior Product Managers, and Product Managers; approved by Program Councils.
Content Area Definitions: Building and refining of the domains, subdomains, competencies, and objectives, with alignment to national, state, and professional standards as applicable.
Prioritization or Weighting of Content Elements: Determination by faculty and outside Subject Matter Experts (SMEs) of which critical KSAs (knowledge, skills, and abilities) to test.
Test Specification (Blueprint): Creation of a full specification that includes assessment method, content, taxonomy, and item type. These decisions determine how each KSA will be measured.
Item Development: Following a standard item development process using SMEs from across the country who represent a good mix of gender, ethnicity, and years of experience in the profession.
Item Review: Following a comprehensive item review process that includes content, technical, editorial, and sensitivity checks, utilizing content reviewers from across the country who represent a good mix of gender, ethnicity, and years of experience as certified teachers or school leaders. Reviewers are trained to identify and correct problems, including issues with item construction, correctness, clarity of communication, bias across gender, race/ethnicity, and regionalism, relevance to the exam, taxonomy, and alignment to objectives and learning resources.
Standard Setting: Using a Modified Angoff procedure with representative committees of judges (e.g., in teacher education programs, certified teachers or school leaders from various geographic areas and ethnic backgrounds). This method generates a value for every item in the pool; examination form cut scores are then computed from the values of the selected items. (A worked sketch follows the table.)
Form Development or Test Algorithm: Following test specifications precisely. Items are pulled (selected by random sampling from the item pool) to match specifications and expected difficulty parameters; additional forms are pulled to meet the same specifications and to match difficulty parameters.
Administration and Scoring Quality Control and Assurance: Following standardized WGU methodology for exam administration, including prior notification, administration by trained proctors, collection of results in secure databases, and standardized methods for scoring and reporting.
Committee Review (New Item Bank): Performance of secondary committee reviews using a diverse group of SMEs.
Student Questions and Comments Feedback Loop: Following standard WGU methodology to allow students, mentors, and evaluators to provide feedback. Official channels exist for reviewing feedback and for escalating formal challenges to appropriate administrators.
Test and Item Statistical Analysis and Ongoing Test Maintenance: Conducting periodic analyses to identify non-performing items and exams.
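To illustrate the Modified Angoff step above: each judge estimates the probability that a minimally competent candidate answers an item correctly, the item’s value is the mean across judges, and a form’s cut score is the sum of its items’ values. The ratings below are invented for illustration.

    # Invented ratings: one probability per judge per item.
    ratings = {
        "item1": [0.60, 0.70, 0.65],
        "item2": [0.80, 0.75, 0.85],
        "item3": [0.50, 0.55, 0.45],
    }

    # Angoff value for each item = mean of the judges' ratings.
    item_values = {item: sum(r) / len(r) for item, r in ratings.items()}

    def form_cut_score(form_items):
        """Cut score = expected raw score of a minimally competent
        candidate = sum of the selected items' Angoff values."""
        return sum(item_values[i] for i in form_items)

    print(form_cut_score(["item1", "item2", "item3"]))  # about 1.95 of 3 points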
While this process continues to grow with the University and to evolve as new competencies and
challenges change the landscape of online education, it is already characterized by demonstrated
fairness and reliability. For example, after the assessment team publishes new assessment items, it
closely monitors test performance for an initial period of time, typically until approximately thirty
students have completed the exam. Item statistics such as item difficulty and discrimination are
monitored in conjunction with overall student performance to ensure that items perform as expected.
Potentially nonperforming items are flagged for technical review. Affected students are identified and
pass/fail decisions are adjusted where appropriate.
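A hedged sketch of that monitoring step: compute each item’s difficulty (proportion correct) and discrimination (here the point-biserial correlation with total score, via statistics.correlation, available in Python 3.10+), and flag items that fall outside the chosen thresholds. The thresholds below are illustrative assumptions, not WGU’s published criteria.

    import statistics

    def flag_items(matrix):
        """matrix: list of per-student response dicts {item_id: 1 or 0},
        e.g., one dict per student once ~30 students have tested."""
        totals = [sum(row.values()) for row in matrix]
        flagged = {}
        for item in matrix[0]:
            scores = [row[item] for row in matrix]
            p = statistics.mean(scores)  # difficulty: proportion correct
            if len(set(scores)) == 1:    # no variance: correlation undefined
                flagged[item] = {"difficulty": p, "discrimination": None}
                continue
            r = statistics.correlation(scores, totals)  # point-biserial discrimination
            if not 0.20 <= p <= 0.95 or r < 0.15:       # illustrative thresholds
                flagged[item] = {"difficulty": p, "discrimination": r}
        return flagged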