Placement Assessment Principles, Annotated

Suggested Principles for a Placement Assessment System
(adapted from Robert Rothman, 2010, Alliance for Excellent Education)
1. Coherence
The placement assessment system reflects clear standards for college and career
readiness and aligns with key learning goals and the curriculum as defined by the
college and disciplinary faculty.
Placement results should be acceptable to other colleges, so that re-testing is not needed or
required. (Joe Montgomery)
Re-testing at the current or another college, as a strategy to achieve higher-level placement results,
should be minimized.
Placement criteria should be clearly connected to any differences in the available courses. If
scoring systems are used, scores should derive from criteria that grow out of the work of the
courses into which students are being placed. [CCCC]

Issues/Questions:
What's the role of the existing statewide college readiness standards (TMP math
standards, HEC Board's "college readiness definitions" in English) in terms of providing
coherence and clear standards for the placement assessment system? Should we propose
a process to formally align placement tests with these standards, and if not, how do we
justify the disconnect between the standards and the tests?
Should the placement tests reflect and be consistent with the first-year college curricula
in math and composition/writing, and if so, how do we engage faculty in a process to
produce that alignment?
How much do we know about current practice with respect to a) acceptance of placement
results across colleges in the system and b) re-testing policies and practices, especially
related to students trying to improve their placement, and should we gather information
about these practices more systematically across the state?
2. Comprehensiveness
The placement assessment system consists of a toolbox of assessments and sources
of evidence that meet a variety of different purposes and that provide various users
with information they need to make decisions.
Multiple instruments are used in the assessment and placement process.
Final decisions about placement should take into consideration other student characteristics and
factors that influence success (e.g., goals, attributes, special needs, etc.). [AMATYC]
If for financial or even programmatic reasons the initial method of placement is somewhat
reductive, instructors of record should create an opportunity early in the term to review and
change students’ placement assignments, and uniform procedures should be established to
facilitate the easy re-placement of improperly placed students.
3. Accuracy and Credibility
The information from the placement assessment system supports valid inferences
about student progress toward college and career readiness.
For each instrument used in placement, there is solid evidence of validity, including significant
correlations between the test scores and class performance or higher course success rates using the
instrument.
The placement process minimizes placement of students into unneeded remedial classes (false
negatives), minimizes placement of students into classes that are too difficult for them (false
positives), maximizes placement of poorly-prepared students into remedial classes (true negatives),
and maximizes placement of qualified students into college level courses (true positives).
Decision-makers should carefully weigh the educational costs and benefits of various
approaches (timed tests, portfolios, directed self-placement, etc.) recognizing that the method
chosen implicitly influences what students come to believe about the discipline.
Most of the following is paraphrased or quoted from the best study I have read on placement
assessment: "Assessing Developmental Assessment in Community Colleges" (CCRC Working
Paper No. 19, Assessment of Evidence Series), by Katherine L. Hughes and Judith Scott-Clayton, February 2011. URL: http://ccrc.tc.columbia.edu/Publication.asp?uid=856
(A summary is available at http://ccrc.tc.columbia.edu/Publication.asp?uid=857 )
Importance of context
Placement assessments exist as part of a larger context and are not ends in themselves. Here is
the conundrum:
"An analogy can be made to a clinical trial in which individuals’ medical history is assessed in
order to help estimate their ability to benefit from a certain treatment. If the individuals
selected for the treatment do not benefit, it could be because the treatment is universally
ineffective, because the initial assessment inadequately predicts who is likely to benefit, or
because the assessment does not provide enough information to accurately target variations of
the treatment to different people. Similarly, if developmental education does not improve
outcomes, is it because the "treatment" is broken per se or because the wrong students are
being assigned to it? Or is some different or additional treatment required?" (Hughes p.2)
In spite of this larger context, we must strive to improve our placement testing processes.
Accuracy in placement assessment
Accuracy is maximized when an assessment predicts the initial developmental English and math
classes in which a student has the highest probability of success. The placement process
minimizes placement of students into unneeded developmental classes (false negatives),
minimizes placement of students into classes that are too difficult for them (false positives),
maximizes placement of poorly-prepared students into developmental classes (true negatives),
and maximizes placement of qualified students into college-level courses (true positives). For
each instrument used to determine placement, there is solid evidence of validity, including
significant correlations between the test scores and class performance or higher course success
rates using the instrument.
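To make these four outcomes concrete, here is a minimal sketch, in Python, of how a college might tabulate them from its own records. Everything in it is hypothetical: the student records and the definition of "ready" (e.g., earning a C or better when allowed into the college-level course) are illustrative assumptions, not data from any study.

```python
# A minimal sketch (hypothetical data) of tabulating placement decisions
# into the four outcome cells described above.

def classify(placed_college_level, actually_ready):
    """Map one student's placement decision and outcome to a confusion-matrix cell."""
    if placed_college_level and actually_ready:
        return "true_positive"    # qualified student placed in college-level course
    if placed_college_level and not actually_ready:
        return "false_positive"   # placed into a class too difficult
    if not placed_college_level and actually_ready:
        return "false_negative"   # unneeded developmental placement
    return "true_negative"        # poorly prepared student placed in developmental course

# Hypothetical records: (test score, placed at college level?, ready?)
students = [
    (82, True, True), (45, False, False), (78, True, False),
    (60, False, True), (90, True, True), (55, False, False),
]

counts = {"true_positive": 0, "false_positive": 0,
          "false_negative": 0, "true_negative": 0}
for _score, placed, ready in students:
    counts[classify(placed, ready)] += 1

total = len(students)
for cell, n in counts.items():
    print(f"{cell}: {n} ({n / total:.0%})")
```

A placement process improves, in the sense described above, as the false cells shrink relative to the true cells.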
Multiple measures:
It is important to emphasize that there are countless internal and external factors that
influence a student's success or lack of success in a course: the college's ability to provide
at-risk students the support they need, the "grit and tenacity" of the student, the number of
hours they are working, the variability of course grading between instructors, and so on. For that
reason, many researchers suggest multiple measures for course placement: e.g.,
COMPASS scores plus high school or college GPA, grades in the last high school math course, and
surveys of affective characteristics such as the LASSI (Learning and Study Strategies Inventory),
with scales for attitude, motivation, time management, anxiety, and concentration, among
others.
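As an illustration of how such measures might be combined, here is a minimal sketch of a weighted-composite placement rule. The weights, scale maxima, and cutoffs are hypothetical assumptions for illustration; an actual rule would be derived from local validation data, and could equally be a regression model rather than fixed weights.

```python
# A minimal sketch of a multiple-measures placement rule. All weights,
# scale maxima, and cutoffs below are hypothetical, not actual policy values.

def composite_placement(compass_score, hs_gpa, lassi_motivation):
    """Combine a test score with high school GPA and an affective scale.
    Each measure is rescaled to 0-1 before weighting."""
    score = (0.50 * (compass_score / 100)      # COMPASS assumed scaled 0-100
             + 0.35 * (hs_gpa / 4.0)           # GPA on a 4.0 scale
             + 0.15 * (lassi_motivation / 40)) # LASSI scale maximum assumed 40
    if score >= 0.70:
        return "college-level"
    if score >= 0.50:
        return "highest developmental level"
    return "lower developmental level"

# Example: a middling test score offset by a strong GPA and motivation scale.
print(composite_placement(compass_score=68, hs_gpa=3.2, lassi_motivation=30))
```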
Are our current assessments accurate and credible?
I strongly recommend you read the entire Hughes study for an in-depth analysis of this
question, but if nothing else see the chart on page 14 and the analysis that follows. This is not an
issue that lends itself to simplistic answers: it is quite complex, as you will see from Hughes.
4. Fairness
The placement assessment system provides an opportunity for all students to
demonstrate what they know and are able to do.
Students perceive the placement process and results as being fair and accurate.
There is a readily-available appeals process, so that students can discuss their results with
placement staff/faculty and have an opportunity to present additional evidence in support of their
preferred placement option.
Students should have the right to weigh in on their assessment through directed self-placement, either alone or in combination with other methods.
Students should have the opportunity to prepare for the placement assessment.

Issues/Questions:
Would there be some value in proposing that we do some data-gathering with students
about their perceptions of these issues--do they think the existing system is "fair,"
"accurate," etc., how would they suggest we make it fairer...?
Should we consider Universal Design issues vis-à-vis both the existing tests and any
proposed alternative?
What kind of appeal processes and preparation opportunities currently exist around the
system, and do we need to gather additional information about promising practices in
these areas?
How might a "directed self-placement" process be implemented in a way that was also
efficient in terms of cost and staff time, and is it as applicable in math as in English?
5. Transparency
Clear information about the entire placement assessment system is available to
students and interested stakeholders and is easily accessible.
Students understand the purpose and implications of the assessment process.

Issues/Questions:
What information is currently available about the tests used, processes involved,
consequences of decisions...?
What information about the "placement assessment system" do we need to be
"transparent" about?
Is there anything that we can't or shouldn't be transparent about?
Does it make sense to create some common information approaches across the system
and put them on the web (e.g., on "checkoutacollege.com")?
Who else besides students...?
6. Utility
The placement assessment system makes a positive contribution to students’
chances for success at the college.
The assessment process creates a greater understanding of students’ strengths and weaknesses in
the areas being assessed. Assessment results can be used to support the development of
improvement strategies.
Placement processes should be continually assessed and revised in accord with course
content, overall program goals, shifts in the abilities of the student population, and empirical
evidence.
Notes/Comments:
For placement to make a positive contribution to student success it must:
1. ensure the student has the skills needed for future classes
2. prevent redundant time and effort
3. provide the best possible pathway
From Michael Kirst, lead researcher on the K-16 Bridge Project and co-editor, with Andrea
Venezia, of the book From High School to College: Improving Opportunities for Success in
Postsecondary Education (2004, Jossey-Bass):
“The content, cognitive demands, and psychometric quality of placement exams are a
‘dark continent’ in terms of the assessment research literature. Students are admitted to the
postsecondary institution under a low standard, but placed in credit courses or remediation on
another higher standard.” (http://thecollegepuzzle.blogspot.com/2008/07/understandingcollege-placement-exams.html )
“A federally financed study of Texas public-college students has found little evidence
that remedial programs there improve underprepared students' graduation chances or their
performance in the labor market soon after college." [Quote from the study: "If anything, we find
some evidence that remediation might worsen the outcomes of some students…"]
(http://thecollegepuzzle.blogspot.com/2008/06/texas-remedial-programs-does-not-help.html )
From Issues in College Success brief prepared by ACT, 2007
(http://www.act.org/research/policymakers/pdf/nonacademic_factors.pdf )
“By definition, success in college means fulfilling academic requirements: a student earns a
degree by taking and passing courses. Longitudinal research confirms that the best preparation
for college is academic. All factors considered, prior academic achievement and cognitive ability
surpass all other factors in their influence on student performance and persistence in college.
To be sure, nonacademic factors also matter, especially as they relate to academic activities.
Nonacademic factors can influence academic performance, but cannot substitute for it.
Relevant nonacademic factors can be classified into three groups:
1. Individual psychosocial factors, such as motivation (e.g., academic self-discipline,
commitment to school) and self-regulation (e.g., emotional control, academic self-confidence)
2. Family factors, such as attitude toward education, involvement in students’ school activities,
and geographic stability
3. Career planning that identifies a good fit between students’ interests and their
postsecondary work
A Brief Overview of the Ten Factors of the College Success Factors Index 2.0:
http://www.cengagesites.com/academic/?site=4584&SecID=1657
1. Responsibility/Control: If we do not take control over the responsibilities we assume in
college, less success is possible.
2. Competition: The need to compete is part of our culture and thus an aspect of college
and career success. For successful students, competition becomes internalized; they
compete with themselves.
3. Task Planning: A strong task orientation and a desire to complete a task in a planned
step-by-step manner are very important to college success.
4. Expectations: Successful students have goals that are related to assignments, areas of
study, and future careers.
5. Family Involvement: Family encouragement and/or participation in planning and
decision making are factors in a student's success.
6. College Involvement: Being involved in college activities, relating to faculty and
developing strong peer relationships are important factors in retention.
7. Time Management: How people maximize their time and prioritize class assignments
affects their productivity and success.
8. Wellness: People need ways to handle their problems. Stress, anger, sleeplessness,
alcohol/drug use, inadequate diet and lack of exercise are deterrents to college success.
9. Precision: To approach one's education by being exact, careful with details and specific
with assignments is a measure of success.
10. Persistence: To face a task with diligence, self-encouragement and a sense of personal
urgency even when some tasks take extra effort and are repetitious is a mark of
academic success.
From “The role of academic and non-academic factors in improving college retention,” ACT
Policy Report, 2004 (http://www.act.org/research/policymakers/pdf/college_retention.pdf )
“Our findings indicate that the non-academic factors of academic-related skills, academic
self-confidence, academic goals, institutional commitment, social support, certain contextual
influences (institutional selectivity and financial support), and social involvement all had a
positive relationship to retention.
The academic factors of high school grade point average (HSGPA) and ACT Assessment scores,
and socioeconomic status (SES) had a positive relationship to college retention, the strongest
being HSGPA, followed by SES and ACT Assessment scores. The overall relationship to college
retention was strongest when SES, HSGPA, and ACT Assessment scores were combined with
institutional commitment, academic goals, social support, academic self-confidence, and social
involvement.”
7. Cost-Effectiveness
The placement assessment system provides reasonable value to students and the
college at a reasonable cost.
Costs of the assessment process are acceptable, in view of the benefits that result.
There are several factors related to cost-effectiveness of placement assessment including:
1. Cost of assessment testing
2. Instructional costs
3. Opportunity cost of missing out on potential Financial Aid benefits (without testing)
Cost of assessment testing: The two predominant placement/assessment models in the
Washington State Community and Technical College system are Compass (ACT) and Accuplacer
(College Board). Both tests are administered on a per unit fee basis, with most colleges passing
on the cost of assessment and related administration to the students. Both Compass and
Accuplacer offer volume discount pricing models. A third option offered through Pearson called
MyFoundationsLab offers an integrated testing, diagnostics and instructional model.
Compass offers placement assessment in math, reading and writing, with diagnostic
assessment available for math and writing as well. All assessments are based on a per unit cost
including the diagnostics, with the exception of the more intensive writing assessment (e-write)
which is charged back at a rate of 3.5 units. Additionally, there is a 0.4 unit charge per student
for collection of demographic information. If a student takes the standard placement
assessment consisting of math, reading, writing and a demographic section, there is a charge
for 3.4 units.
Per unit pricing for Compass is as follows:
1 - 4,999 students: $1.66 per unit
5,000 - 14,999: $1.50 per unit
15,000 - 34,999: $1.42 per unit
35,000 - 99,999: $1.34 per unit
100,000 - 174,999: $1.27 per unit
175,000 or more: $1.21 per unit
The maximum savings per student is $.45 per unit ($1.53 or more for the standard battery) at
the system vs. institutional level. These fees are paid by students at most/all colleges so cost
savings for the colleges would be minimal.
Accuplacer offers assessment in reading comprehension, sentence skills, arithmetic,
elementary algebra and college-level math. Diagnostic assessments are also available in reading
comprehension, sentence skills, arithmetic, and elementary algebra. There is an essay
evaluation tool as well. Each of the placement assessments is charged back at a rate of one unit
per assessment. Diagnostic assessments are charged back at a rate of 2 units per assessment.
Per unit pricing for Accuplacer is as follows:
Standard: $2.10 per unit
College Board member discount: $1.95 per unit
State-approved discount (meaning the tools have been approved for an entire state): $1.75 per unit
The maximum savings per student is $.35 per unit ($1.05 or more for the standard battery) at
the system vs. institutional level. These charges are paid by students at most/all colleges.
The emergent player is Pearson, which offers a traditional assessment tool
(MyReadinessTest) that is comparable to Compass and Accuplacer at a rate of $10 per 10 tests.
There is a 20% volume discount offered for systems vs. individual institutions. Pearson also
offers an integrated testing and instructional system called MyFoundationsLab. Rather than
paying for a stand-alone assessment, students pay a fee which gives them access to testing and
instructional materials for a year at a time. Currently the access fee is $100, plus the cost of
e-books. At one institution in Washington, the cost of the e-books is approximately 40% of the
cost of traditional textbooks, rendering significant savings to students (particularly if they need
to take multiple developmental courses). As with the other assessment tests, fees are paid by
students.
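A minimal sketch of how these published rates translate into per-student testing costs appears below. The tier tables and unit counts come from the figures quoted above; the annual test volume is a hypothetical input, and the 3-unit Accuplacer battery (reading comprehension, sentence skills, one math test) is an assumption for illustration.

```python
# A minimal sketch turning the vendor pricing quoted above into per-student
# cost estimates. Volume figures used below are hypothetical inputs.

COMPASS_TIERS = [          # (minimum annual volume, price per unit)
    (175_000, 1.21), (100_000, 1.27), (35_000, 1.34),
    (15_000, 1.42), (5_000, 1.50), (0, 1.66),
]
ACCUPLACER_PRICES = {"standard": 2.10, "member": 1.95, "state": 1.75}

def compass_cost(volume, units=3.4):
    """Standard battery: math, reading, writing plus the 0.4-unit demographic
    section = 3.4 units, as described in the text."""
    price = next(p for floor, p in COMPASS_TIERS if volume >= floor)
    return units * price

def accuplacer_cost(tier, units=3.0):
    """Assumed 3-unit battery: three one-unit placement assessments."""
    return units * ACCUPLACER_PRICES[tier]

volume = 40_000  # hypothetical statewide annual test volume
print(f"Compass per student at {volume:,} tests/year: ${compass_cost(volume):.2f}")
print(f"Compass per student at a single college:      ${compass_cost(1_000):.2f}")
print(f"Accuplacer, state-approved discount:          ${accuplacer_cost('state'):.2f}")
```

At the top Compass tier the sketch reproduces the figure in the text: (1.66 - 1.21) x 3.4 units = $1.53 saved per student on the standard battery.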
Instructional costs: Developmental instructional costs are determined by the amount of
remediation required by students. The amount of remediation can be affected by testing (cut
scores resulting in lower placements often equate to more dollars being spent on remediation)
and by the instructional model. In the traditional, sequential instructional model, students place
at a particular level and take courses sequentially (sometimes repeating courses) until the
desired level is reached. In the newer, modularized instructional model, students take only
those pieces of instruction they need (based on diagnostic testing). For example, while one
student may need to take the entire level (5 credits) of pre-college algebra, another student
may need only a portion (for example, 2 credits). While integrated, diagnostic testing up front
may cost slightly more than non-diagnostic placement testing, the savings from fewer credits of
remediation are potentially great (depending, of course, at least in part on the instructional
model in use).
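The credit arithmetic behind that example can be made explicit with a short sketch; the per-credit cost figure is hypothetical, while the 5-credit level and 2-credit module mix come from the example in the text.

```python
# A minimal sketch of the sequential-vs-modular cost comparison just described.
# COST_PER_CREDIT is a hypothetical figure for illustration only.

COST_PER_CREDIT = 100.0  # hypothetical instructional cost per credit

def sequential_cost(levels_needed, credits_per_level=5):
    """Traditional model: each developmental level is taken whole."""
    return levels_needed * credits_per_level * COST_PER_CREDIT

def modular_cost(module_credits):
    """Modular model: students take only the modules diagnostics flag."""
    return sum(module_credits) * COST_PER_CREDIT

# Student A needs the whole 5-credit pre-college algebra level; student B's
# diagnostics flag only 2 credits' worth of modules within it.
print(sequential_cost(1))   # 500.0 -- full level for student A
print(modular_cost([2]))    # 200.0 -- targeted modules for student B
```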
The potential instructional cost savings through more precise placement of students (resulting
in fewer dollars spent on remediation) are substantial, particularly when you consider the large
proportion of students enrolling in developmental coursework. According to a recent study
published by the Community College Research Center ("Assessing Developmental Assessment
in Community Colleges"), more than half of entering community college students enroll in some
kind of remediation, with a great many more testing into but never taking a developmental
course. The study suggests that the most reliable and valid assessments are those that rely on
multiple indicators, such as placement exams, high school transcripts, and assessments of
noncognitive abilities. Multiple indicators are often not considered because of the additional costs
up front; however, the study suggests that the potential for increased validity through the
adoption of such measures is not widely recognized. Increased validity in placement assessment yields
higher rates of student success, an important aspect of cost-effectiveness for community
colleges.
Math reform currently underway at a handful of community colleges in Washington State has
the potential to further reduce instructional costs. Utilizing a combination of technology-supported
instruction and integrated assessment, the newer models of math education can
reduce per-student costs in several ways. A recent study by the National Center for Academic
Transformation ("Cost-Effective Strategies for Improving Student Performance: Redesign +
MyMathLab in Higher Education Mathematics Courses") suggests that cost savings can occur
through declining faculty workloads as logistical and other routine tasks are supported by
technology, through increased capacity to serve students as tasks such as quiz grading are
computerized, and through actual dollar savings realized by shortening the length of instruction
and reducing textbook costs. At one institution in Washington State, the combined savings from
math reform (reducing the total number of pre-college math courses from four to three by
eliminating redundancy in the curriculum) and from textbook conversion to e-books is $1,282
per student for the full sequence (academic year 2010-11 prices).
Opportunity costs related to Financial Aid: Currently, students entering community colleges with
no GED or high school diploma must receive an "Ability to Benefit" certification in order to
receive Financial Aid. To do so, they must pass a federally approved placement test with a
qualifying score (as determined by the U.S. Department of Education). Although this affects a
relatively small number of students, the opportunity cost of forgone financial aid would be
great without the availability of standardized testing.
Conclusion: Students typically pay their own testing fees. Testing centers require staffing,
regardless of which test or tests are administered there. Research suggests that increasing the
number of indicators used to place students, not limiting or reducing options, is the most
promising road to boosting student success. Some students need standardized testing to qualify
for financial aid, so it can't be eliminated. Diagnostic testing, although it costs more, has shown
more precision in accurately placing students. Cut scores vary, in some cases wildly, from
institution to institution, causing some confusion among the general public and even within
higher education. Although placement scores can correlate with student success, in most cases
the correlation is weak or even non-existent. Students can encounter difficulty in moving from
one institution to another where different placement tests, or cut scores, are in place.
Coordinating placement assessment across institutions has value, although much of the value
appears to be in the form of public perception and, potentially, student satisfaction.
More significantly, strategies intended to reduce time spent in developmental coursework and
increase student success have the potential to transform our institutional outcomes and
tremendously enhance cost effectiveness. Examples of emerging and promising practices
include using multiple indicators for placement, taking advantage of diagnostic aspects of
assessment and/or integrated testing and instruction, reducing or eliminating redundancy in
developmental curriculum, and providing contextualized learning whenever possible.
8. Engagement/Ownership
The placement assessment system functions in a way that engages faculty and
encourages them to share the responsibility of maintaining the ongoing quality of
the system.
Thoughts from Math Faculty:
1. The most important and easiest way to engage faculty in maintaining placement quality
is by monitoring the success rates of students in math, correlated with placement
(score, age of score, tool used, etc.). A sketch of such monitoring appears after this list.
2. Both Testing and Math Faculty need to buy in to the importance of reviewing placement
and talk to each other.
3. Since SFCC uses the MMT, math faculty have to be involved, as they are writing the test.
Continuous improvement is guaranteed (or forced, depending on your perspective) in
this way.
4. It's far easier to engage faculty in the responsibility of maintaining a quality placement
system if they trust that the system works. Faculty would have little engagement with a
placement tool that is not tied well to local curriculum and our student population,
such as the COMPASS, which our data have shown was not a reliable placer.
5. Lack of alignment across the state for the Intermediate Algebra Proficiency leads to
distrust of a common placement tool. Therefore, the SFCC math faculty feel a cross-walk
would really only be possible above the IAP level.
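Here is a minimal sketch of the monitoring suggested in point 1. The records and the 20-point score bands are hypothetical assumptions for illustration; a real report would group by the college's actual cut bands, placement tool, and score age.

```python
# A minimal sketch (hypothetical records) of course success rates grouped
# by placement score band, as suggested in point 1 above.

from collections import defaultdict

# (placement score, passed first math course?) -- illustrative records only
records = [(38, False), (52, True), (55, False), (61, True),
           (64, True), (71, True), (74, False), (88, True)]

def band(score):
    """Bucket placement scores into 20-point bands (cutpoints assumed)."""
    low = (score // 20) * 20
    return f"{low}-{low + 19}"

passed = defaultdict(int)
taken = defaultdict(int)
for score, success in records:
    taken[band(score)] += 1
    passed[band(score)] += int(success)

for b in sorted(taken):
    print(f"scores {b}: {passed[b]}/{taken[b]} passed ({passed[b] / taken[b]:.0%})")
```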
Thoughts from English Faculty:
1. A regularly scheduled report, attended to annually at an English Department meeting,
that shows how many students took the placement, the distribution of the placements,
the number of challenges, and the distribution of the challenge results. Another section
of the report should show course success rates in the writing sequence correlated to
placement, as well as ratings distributions for the Portfolio. The Composition Director
could write a summary of these data, pointing out patterns, potential areas of concern,
and possibilities for intervention. (A sketch of the counts such a report might draw on
appears after this list.)
2. An implication of maintenance responsibility is attending to the consequences of
placement. A test cannot be valid if it leads to unacceptable consequences for
stakeholders. This underpins point #1.
3. Directed Self Placement and Dynamic Criteria Mapping both require substantial faculty
engagement and are 21st-century alternatives to standardized placement tools.
However, because they are faculty driven, they haven't amassed the kind of statistically
impressive validation argumentation that educational measurement companies produce.
4. Ask faculty to provide frequent written feedback on pre-college students' success in
their classes (as we do for athletes).
5. Publicly celebrate the successes we experience with pre-college placement.
6. Our current challenge processes for the reading and writing COMPASS involve a high
degree of faculty engagement. Faculty apply faculty-constructed criteria to authentic
writing samples to determine appropriate placement.
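Here is a minimal sketch of the counts the annual report in point 1 might start from. The course names and records are hypothetical; the point is only to show how placement distribution, challenge volume, and challenge outcomes can be tallied from one table of records.

```python
# A minimal sketch (hypothetical course names and records) of the counts
# behind the annual placement report described in point 1 above.

from collections import Counter

# (initial placement, challenged?, placement after challenge)
records = [
    ("ENGL 090", False, "ENGL 090"),
    ("ENGL 099", True,  "ENGL 101"),
    ("ENGL 101", False, "ENGL 101"),
    ("ENGL 099", True,  "ENGL 099"),
    ("ENGL 090", True,  "ENGL 099"),
]

initial = Counter(placement for placement, _, _ in records)
challenges = [r for r in records if r[1]]
upheld = sum(1 for first, _, final in challenges if final == first)

print("Initial placement distribution:", dict(initial))
print(f"Challenges filed: {len(challenges)}; upheld: {upheld}; "
      f"changed: {len(challenges) - upheld}")
```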