Status Report on the Assessment Program at DeVry
Much effort has been put forth in planning and implementing DeVry's assessment plan, especially as it pertains to direct measures of student academic achievement such as senior/capstone project ratings and WRAP. However, the full range of activities necessary to implement the plan successfully across the DeVry system has not yet taken place. Three main areas need immediate improvement:
1. Multiple-rater recruitment and administration – Very few campuses have provided ratings for senior/capstone projects from more than one rater; usually the only rater is the faculty member overseeing the project. In some cases, especially in CIS and TCM, the client or project sponsor has also provided ratings. Still, very few general-education faculty, or program faculty not teaching the project course, have been involved. NCA regards this as a major flaw in the process.
2. Timeliness of rating entry – Very few campuses have entered the project ratings into the web-based data collection package in a timely fashion. Last term (Summer 2000), ratings for Spring 2000 were still being entered as late as weeks 11 and 12. Ratings must be entered very soon after students present their project results at the end of the term being rated, ideally in weeks 14 or 15, or during Admin. Week at the latest. Too much time is currently being spent "chasing down" late campuses to get their ratings entered.
3. Use of project rating results – Very few campuses are using the results of project ratings as intended. Deans must meet with the entire program faculty and other stakeholders (e.g., gen-ed faculty who provided ratings) each term to review the detailed results of the prior term's ratings. Faculty input regarding the use of the results to improve the program must be solicited and documented each term, and the effects of prior program changes on student achievement must be documented in subsequent terms.
Another limitation in using the project-rating results is that relatively few campuses are providing a sufficient number of useful comments to accompany their numerical ratings. Providing ratings without explaining, via comments, why they were chosen is not sufficient when the results are to be used to improve academic programs.
If senior/capstone project ratings continue to be entered and used in the current fashion, DeVry is in serious danger of being required by NCA to conduct a focused visit on assessment. Campus deans, DAAs, and campus Assessment Committees must take responsibility for ensuring that multiple ratings are provided, that ratings are provided in a timely fashion, and that rating results are used by faculty to improve academic programs. To that end, the following activities and timetables are required:
• Assemble a rater team consisting of at least the senior project faculty member, another program faculty member (or two), a general-education faculty member, and (if applicable) the client or project sponsor. Team members should be involved throughout the term, not just at the end when the report is presented; the other faculty can provide guidance and support along the way rather than merely judging the final product. The Chicago campus has provided an excellent model, which other campuses may emulate, of how to involve gen-ed faculty in technical senior projects.
• Provide appropriate training for members of assessment rating teams, including what each rating point represents, and emphasize the need to provide comments for each outcome. It is extremely difficult, if not impossible, for program faculty to use the results appropriately to improve the program without examining detailed and aggregated comments.
• Make clear the relationships among program deans, chairs, the campus DAA, and the campus Assessment Committee. Make explicit each person's or group's responsibilities, and assign accountability for each stage of the assessment process. Assign each aspect of reaching Level Three of NCA Assessment Implementation to the appropriate persons or groups (see the March 2000 NCA Addendum to the Handbook of Accreditation for detailed descriptions of “Level Three” implementation of assessment).
• Provide a clear and rigid timetable for the provision and use of senior/capstone project ratings. The following is suggested as a model (where Term X is the first term in which a campus may perform ratings):
TERM X:
  Weeks 1 & 2:   Agree on rater teams; train as needed
  Weeks 3-14:    Team members work with students as appropriate
  Week 15:       Conduct assessment of senior/capstone project results

TERM X + 1 (same as Term X for the next group of students, plus):
  Admin. Week:   Faculty complete entering ratings on the web site
  Week 3:        Clients complete entering ratings on the web site
  Week 4:        Training on using assessment ratings for program improvement;
                 program dean reviews results and debriefs with the OBT program
                 director
  Week 5 or 6:   Faculty meeting on results, program improvement, and improving
                 the outcomes assessment instrument and process
  Week 9 or 10:  Results of the faculty meeting shared with the campus
                 Assessment Committee

TERM X + 2 (same as Terms X and X + 1 for the next groups of students, plus):
  Week 5 or 6:   Faculty meeting on results, including an accounting of last
                 term's improvements and/or program changes
The above timetable is a suggestion based on input from campus- and OBT-based academic staff as to what works best in practice. There may be slight variations due to the timing of events such as program reviews, but in general the timetable should be adhered to each term.
• The faculty meeting on assessment results must occur each term. It should have the following attributes:
1. The assessment results coming in each term are the catalyst for the meeting.
2. It must include each faculty member's observations about the program in addition to their reactions to and analysis of the rating reports.
3. It must end with agreement or consensus on actions to be taken to improve the program and the teaching and learning process, as well as on how to improve the outcomes assessment instrument and process itself.
4. The entire meeting's activities, especially the analysis of the data, the observations of the faculty, and the actions agreed upon to improve the program, must be fully documented in a format easily shared with stakeholders.
5. It must account for last term's agreed-upon improvements and provide a mechanism to determine the efficacy of those actions.
We are at a critical stage in the assessment process at DeVry. If the actions discussed above are not taken throughout the system, we face a host of negative consequences from NCA. We have taken many positive initial steps in developing and implementing our assessment program, but the effort has stalled at its most critical phase: the provision and use of the full range of information from our capstone projects to improve teaching and learning at DeVry. The assessment program must urgently receive the full force of our efforts, or our standing as an institution of higher education will be seriously compromised in the eyes of our accrediting body.
Status of Assessment Progress by Academic Program
EET: All campuses are providing ratings, but few provide comments and few use multiple raters, especially general-education faculty. The use of results is inconsistent, and few if any campuses are conducting full faculty meetings to review the results and use them to change the program.
TCM: All campuses are providing ratings. Many campuses provide comments, and many use multiple raters, especially clients, though few are providing general-education faculty ratings. Some campuses, such as Kansas City, have been using the results appropriately. TCM is currently the strongest program in terms of providing ratings and using the results.
CIS: As of Summer 2000, all campuses are to provide results; prior to that term, only pilot campuses did so. Most campuses have provided some results, though the use of multiple raters is inconsistent and almost no gen-ed faculty have been included. It is too early to judge whether the results are being used.
AIS/BIS/OM: Most campuses have been providing results. Few have included multiple raters, apart from some clients on BIS projects. Small sample sizes (grad. classes) hamper the use of the results to their fullest potential.
ECT: All campuses are to provide ratings for Summer 2000, but as of this writing only Chicago has done so. ECT assessment is to include a test covering the outcomes under objective one, so the project ratings cover only objectives two through six. It is too early to judge whether the results will be used properly. Chicago's input has been very good, with multiple raters and copious comments provided.
IT: Currently nothing is being collected for IT.
BSTM: Currently nothing is being collected for BSTM.
Gen Ed: All campuses are to provide ratings for Summer 2000 HUMN-430 projects. As of this writing, only Kansas City and North Brunswick have provided partial ratings. It is too early to determine how the ratings will be used.