Assessing Faculty - Loyola University Chicago

Assessing Faculty
By “Sitting Beside”
Larry A. Braskamp
Loyola University of Chicago
Lbraska@luc.edu
Assess (ə-ses′) v.t.
• …To take stock of; evaluate; to assess the
situation
• …L. assidēre, to sit by (as a judge in court)…
• Source: Funk and Wagnalls New International Dictionary of the
English Language, 1993
To Assess is to
• Understand
• Judge
• Act (decide, assist, help develop)
Challenge
• Design and implement a faculty assessment
program that simultaneously fosters
individual faculty development and fulfills
the institutional mission
Assessing by “Sitting Beside”
[Diagram: “To Sit Beside” at the center, linked to Setting Expectations, Collecting Evidence, and Using Evidence]
• Source: Braskamp, Larry A., & Ory, John C. (1994). Assessing Faculty Work. San Francisco: Jossey-Bass
“To Sit Beside” as an image
• To humanize the process
• To understand as well as to judge
• To enhance the role of colleagues
• To build community
• To increase respect for diversity
• To demonstrate individual accountability
• To promote mutual and collective accountability
Assessing by “Sitting Beside”
[Diagram: “To Sit Beside” at the center, linked to Setting Expectations, Collecting Evidence, and Using Evidence]
Setting Expectations
• Division of faculty work
– Teaching
– Research and Creative Activity
– Outreach/Professional Practice/Engagement
– Citizenship
• Individual and Institutional
• Responsibilities vs. quality
• Quality/value/impact/influence
– Merit
– Worth
Setting Expectations
• To expand the variety of faculty work (roles
and responsibilities) to be recognized and
rewarded (e.g., teaching; research and
creative activity; outreach; citizenship)
The Work of Teaching
• Instructing
• Advising, Supervising, Guiding, and
Mentoring Students
• Developing Learning Activities
• Developing as a Teacher
The Work of Research and
Creative Activities
• Conducting Research
• Producing Creative Works
• Editing and Managing Creative Works
• Leading and Managing Funded Research
and Creative Projects
The Work of Practice and
Professional Service
• Conducting Applied Research and
Evaluation
• Disseminating Knowledge
• Developing New Products, Practices,
Clinical Procedures
• Participating in Partnerships with Other
Agencies
• Performing Clinical Service
Characteristics of Public Service
• They contribute to the public welfare or the
common good
• They call upon faculty members’ academic
and/or professional expertise
• They directly address or respond to real-world
problems, issues, interests, or concerns
The Work of Citizenship
• Contributing to the Local Campus
• Contributing to Disciplinary and
Professional Associations and Societies
• Contributing to civic, political, religious,
and other communities
Setting Expectations
• To distinguish between expectations for
(institutional) and expectations of
(individual) faculty
Setting Expectations
• To distinguish between what one does and
how well one does it, i.e., workload/
activities/ effort/ roles/ responsibilities
vs.
quality/ effectiveness/ influence/
impact/value/ merit/worth/excellence
Setting Expectations
• To distinguish between merit (“quality
according to standards of the profession”)
and worth (“value of work that is a benefit
to the institution”) (M. Scriven)
• Source: Scriven, Michael (1978). Value versus Merit. Evaluation News (8), 1-2
Assessing By “Sitting Beside”
[Diagram: “To Sit Beside” at the center, linked to Setting Expectations, Collecting Evidence, and Using Evidence]
Collecting and Organizing Evidence
• Multiple Perspectives
– Sources
– Methods
• Credibility of Evidence
• Trustworthiness of Evidence
– Validity
– Reliability
– Fairness
– Consequences
• Portrayal of Faculty Work
• Building a Case
Collecting and Organizing
Evidence
• Everything counts, but not everything needs
counting
• To think in terms of “building a case”
• To emphasize a multiple perspective
approach to collecting evidence from
multiple sources using multiple methods
Collecting and Organizing
Evidence
• Evidence must be credible
• Evidence must be trustworthy (i.e., valid,
reliable, fair, consequential)
• Faculty need to be able to portray their
work for others to review, critique, and
judge
Multiple Sources
• Oneself
• Faculty colleagues
• Campus administrators
• Faculty development professionals
• Students
• Parents
• Participants
• Alumni
• Citizens and community groups
• Public officials
• Professional and disciplinary colleagues
• Accreditation officials
• Board members
• Consultants
• Experts
• Customers
Multiple Methods
• Rating Scales
• Observations
• Interviews
• Written appraisals
• Measures of outcomes and achievements
• Documentation and records review
• Measures of eminence, quality, and impact
• Video and audio tapes
• Simulations
Types of Evidence
for Describing and Judging Teaching
• Descriptions of Teaching Activities
– Summary of responsibilities and activities
– Analyses of student learning and challenges
– Audio and videotapes
– Samples of teaching
– Participation in improvement activities
• Outcomes
– Student learning and achievements
– Student development
Types of Evidence for Describing
and Judging Teaching
• Judgments About Teaching
– Ratings from various sources
– Written appraisals from various sources
• Eminence Measures
– Honors and awards
– Invited presentations
• Self-Reflections and Appraisal
– Personal journals and logs
– Public self-appraisals
Types of Evidence for Describing and
Judging Research
• Descriptions of Research and Creative Activity
– Summary of responsibilities and activities
– Analyses of research and creative problems
– Participation in improvement activities
• Outcomes
– Publications in journals
– Papers presented at professional meetings
– Books (authored and edited)
– Chapters in books
– Monographs
– Grants and external funding
– Unpublished papers and reports
Types of Evidence for Describing and Judging
Research
• Judgments about Research
– Evaluations from faculty peers
– Evaluations from departmental chairs, deans
– Evaluations from experts (curators, critics)
• Eminence Measures
– Referee or editor of journal
– Honors and awards from profession
– Officer of national professional association
– Invited papers and guest lectures
– Invited exhibitions and performances
– Citation rate of published work
• Self-reflection and Appraisal
– Personal journals and logs
– Public self-appraisals
Types of Evidence for Describing and
Judging Practice
• Descriptions of Practice Activities
– Analyses of contemporary problems
– Audio and videotapes
– Samples of work
– Participations in improvement activities
• Outcomes
– Client feedback on progress
– Client behavioral outcomes
– Degree social problem addressed is understood
– Policy changes linked to work of faculty
– Influence on research and teaching within profession
– Influence on teaching and research within institution
– Inventions, improved clinical practices and procedures
Types of Evidence for Describing and
Judging Practice
• Judgments about Practice
– Evaluations from participants, clients, patients
– Evaluations from sponsoring organizations
– Evaluations and letters of appreciation
– Evaluations from faculty colleagues and experts
• Eminence Measures
– Honors and awards from profession
– Officer of professional association
– Invited exhibitions and performances
• Self-Reflection and Appraisal
– Personal journals and logs
– Public self-appraisals
Types of Evidence for Describing and
Judging Citizenship
• Descriptions of Activities
– Attendance records of committee work
– Representation at functions for institutional
advancement
– Support of campus activities (cultural and sporting
events)
– Degree of involvement in professional organizations
– Degree of participation in religious/public/civic affairs
• Outcomes
– Changes in policies in governance of campus and
professional associations
Types of Evidence for Describing and
Judging Citizenship
• Judgments About Citizenship
– Ratings of effectiveness by faculty peers and administrators
– Evaluation by fellow committee members and chair
– Modeling behavior as judged by colleagues and students
– Evaluation from participants of community programs, public
officials
• Eminence Measures
– Reappointment or reelection to public office
– Reelection or reappointment to leadership positions
• Self-reflection and Appraisal
– Personal journals and logs
– Public self-appraisals
Administration Influencing Student Ratings
of the Instructor or Course
Student anonymity -- Signed ratings are more positive
than anonymous ratings.
Instructor in classroom -- Ratings are more positive if the
instructor remains in the room.
Directions -- Ratings are more positive if the stated use is
for promotion.
Timing -- Ratings administered during final exam are
generally lower than those given during class.
Midterm -- Ratings are less reliable if the student raters
can be identified.
Nature of Course Influencing Student
Ratings
Required/elective -- Ratings in elective courses are higher
than in required courses.
Course level -- Ratings in higher-level courses tend to be
higher than in lower-level courses.
Class size -- Smaller classes tend to receive higher ratings, yet
low correlations between class size and student ratings
suggest class size is not a serious source of bias.
Discipline -- In descending order, higher ratings are given to
courses in arts and humanities, biological and social sciences,
business, computer science, math, engineering, and physical
sciences.
Instructor Characteristics
Influencing Student Ratings
Rank -- Professors receive higher ratings than teaching
assistants.
Gender of instructor -- No significant relationship exists
between gender of instructor and his or her overall evaluation,
although ratings do slightly favor women instructors.
Personality -- Warmth and enthusiasm are generally related to
ratings of overall teaching competence.
Years teaching -- Rank, age, and years of experience are
generally unrelated to student ratings.
Research productivity -- Research productivity is positively
but minimally correlated with student ratings.
Student Characteristics Influencing Student Ratings
• Expected grade -- Students expecting high grades in a course give
higher ratings than do students expecting low grades.
• Prior interest in subject matter -- Similar to elective courses,
students with prior interest give somewhat higher ratings.
• Major or minor -- Majors tend to rate instructors more positively than
nonmajors.
• Gender -- Gender of student and overall evaluations of instructors are
not related, although students tend to rate same-sex instructors slightly
higher.
• Personality characteristics -- No meaningful and consistent
relationships exist between the personality characteristics of the
students and their ratings.
Instrumentation Influencing Student Ratings
• Placement of items -- Placing specific items before or after
global items on the rating form has insignificant effect on
the global ratings.
• Number of scale points -- Using six-point scales yields
slightly more varied responses and higher reliabilities than
five-point scales.
• Negative wording of items -- Overall ratings of the course
and instructor are not significantly influenced by the
number of negatively worded items in the rating scale.
• Labeling all scale points versus only end-points --
Labeling only end-points yields slightly higher average
ratings.
Assessing by “Sitting Beside”
[Diagram: “To Sit Beside” at the center, linked to Setting Expectations, Collecting Evidence, and Using Evidence]
Using Evidence
• To distinguish between individual and
institutional uses of evidence
• “Assessment is everybody's business, but
not everybody else's business” (R. Stake)
• To promote individual as well as mutual and
collective accountability
Using Evidence for Self
Development
• Emphasize the informational rather than
controlling use
• Design assessment feedback so that it is intrinsic
to the task itself
• Rely on specific, diagnostic, descriptive
information that focuses on faculty work
• Encourage feedback on work in progress
• Develop mentoring relationships among faculty so
that discussions of work are encouraged
Using Evidence for
Accountability
• Rely on various types of descriptive and
judgmental evidence collected from multiple
sources to develop a composite portrayal
• Interpret evidence in a way that is consistent with
institutional goals
• Develop profiles of faculty over time
• Closely link assessment with both faculty and
institutional development
A Portrayal of Faculty Work
• Statement of personal goals, roles and institutional
expectations
• Teaching
– Responsibilities and Activities
– Assessment and its Use
• Research and Creative Activities
• Practice
• Citizenship
• Honors and Recognition
• Activities to Improve Faculty Work
Creating a Campus Culture of
Assessment
• Do you collect evidence about your work solely
for your own personal and professional use?
• Do you share this evidence with colleagues so that
you can discuss your own effectiveness, enhance
your own career development, and meet
institutional expectations?
A Possible Reorganization
• Does your current organizational structure
most effectively:
– Foster development of your faculty and staff?
– Fulfill the mission of your
institution/college/school/department?