A Model for Peer and Student Involvement in Course Assessment
Sheri Sheppard, Michelle Johnson, Larry Leifer
Mechanical Engineering Design
Stanford University
Stanford, CA 94305
Abstract - This paper discusses a protocol and rationale for
peer and student involvement in the assessment of courses
in Mechanical Engineering at Stanford. The protocol is
based upon elements of good teaching practice, and on
standards for peer review as used in journal publication. It
has been "prototyped" in nine engineering courses over the
past two years and has generally been found to be a good
mechanism for near real-time monitoring that creates
constructive feedback for teaching and learning quality
control. Major features of this ongoing project will be
summarized, including faculty attitudes and required time
commitment.
Background
In the 1970s and 1980s many evaluation programs focused
on using student opinion and work to assess faculty impact
in the classroom [1-6]. This focus generally proved
unsuccessful because it was limited and misleading.
Students were never equipped to comment on all aspects of
faculty teaching [2,3,7]. The summative use of these
evaluations in tenure and promotion situations negatively
affected faculty relationships with students, administrators,
and colleagues, as well as their feelings regarding
reappointment, promotion, tenure, and compensation [8,9].
Unfortunately, the often sincere but erroneous summative
evaluation attempts of the 1970s and 1980s have left many
faculty resistant to the 1990s move toward more balanced,
formative, peer-based evaluations [3,10].
According to Hutchings (1996), the new impetus for
Peer Review of Teaching was fueled by the Pister Report
(1991), a report of the Task Force on Faculty Rewards from
the University of California. It was the first such report to
proclaim nationally that teaching needed to be peer
reviewed in the same manner as research is peer reviewed.
Inspired by this report, a project entitled “From Idea to
Prototype: The Peer Review of Teaching” [11] was the first
notable activity that attempted to establish an
institutionalized collaborative atmosphere conducive to
fostering a Peer Review of Teaching culture among
faculty in private and public research institutions, and
regional and metropolitan universities. The two-year
project was launched in January of 1994 by the American
Association for Higher Education (AAHE) and involved
twelve universities studying peer assessment of teaching1.
The project is considered revolutionary because it was the
first of its kind to facilitate Peer Review pilot work at the
departmental level.

1 The workbook From Idea to Prototype: the peer review of
teaching (1995) [11] and the paper by Quinlan (1996) [12]
summarize many aspects of the overall project.
Stanford is one of the twelve participating universities,
and Mechanical Engineering is one of the three departments
at Stanford involved in the AAHE project. The prototype
Faculty Peer Review methodology developed by the
Mechanical Engineering group is known as ME-PEER. The
ME-PEER process can be partitioned into three stages: the
conceptualizing stage, where the participating faculty
developed unranked assessment criteria for evaluating
teaching excellence; the full-scale testing stage, where the
assessment criteria were used to evaluate faculty-student
learning dynamics in several courses; and the redesign
stage, where the prototype was examined for failure modes
so that a new and more beneficial methodology could be
proposed and documented.
Stage I: Conceptualizing/Testing and Initial
Prototyping
During the 1994-95 academic year, a group of Mechanical
Engineering faculty explored the role that peers could play
in curriculum assessment. The Mechanical Engineering
group became known as the "ME-PEER" project and
comprised Ed Carryer, Mark Cutkosky, John Eaton, Ken
Goodson, Tom Kenny, Larry Leifer, and Sheri Sheppard, with
consulting assistance by Dr. Michele Marincovich, Director
of the Center for Teaching and Learning (Stanford), and
Kathleen Quinlan, Ph.D. Candidate, School of Education
(Stanford).
The Stanford ME-PEER group began with the goal of
developing peer-review methods to assess teaching
effectiveness and to document teaching scholarship for
promotion and tenure purposes. Early in the formulation
process the group narrowed its focus to formative
assessment rather than summative assessment [13], with the
objective of providing input to improve the teaching
process, as reflected in student learning behavior. The
focus on peer-conducted student interviews emerged from
approximately 15 hours of discussion in which the group
attempted to identify the major elements of good teaching
and how they might best be assessed.
The deliverables from this first stage of the ME-PEER
project included:
• Issues Identified. Table 1 shows the issues important in
assessing teaching that were identified by the Stanford
ME-PEER group. This list became the basis of the
student interviewing process. The table also contains the
group's relative weighting of the roles of student and peer
input on the various issues. In the sense that an issue is a
variable, these weights express the group's perception of
who would be in the best position to "observe" that
variable. Weighting of the observation component is in line with the peer review of teaching consensus [e.g.,
2,3,9,12,14,15] that a well-designed evaluation program
would measure multiple outputs (e.g., student and peer
opinion) of the faculty teaching effort in order to build a
complete picture of their teaching effectiveness.
• Protocol Prototype Tested. The Protocol for peer
assessment includes peer-conducted student interviews.
The products of the protocol are a reflective memo, video
taped student interviews, and a summary memo. The
protocol is summarized in Table 2. In the first year of
the project, three courses2 were evaluated using this
protocol. The three products of this protocol allow the
ME-PEER process to be divided into the three phases many
scholars [e.g., 1,2,9,16] perceive as being necessary to
ensure that teaching results in effective student learning.
The reflective memo examines the pre-interactive
component of teaching by asking the professor to revisit
the methods used to plan and prepare their course
learning objectives. The video taped student interviews
examine the interactive phase of teaching by encouraging
discussion on delivery of instruction as well as the
communication of the learning objectives. The summary
memo facilitates the post-interactive phase by providing
feedback that allows the peer-reviewed faculty to
measure, reflect on, and revise his or her teaching process.
Greenwood and Ramagli (1980) [17] suggested that
“none of the means of evaluating college teaching used
alone seems to have a research base which indicates that
it is a sufficiently valid measure of teaching effectiveness
of a given professor". As evidenced here, the ME-PEER
process is based on three means of evaluating professors.
• Discussion. There were over 20 hours of in-depth
discussion about elements of good teaching with
effective sharing of perspectives between both junior and
senior faculty. The discussion tended to focus more on
the course and curriculum as a whole, rather than the
teacher. Many scholars [e.g., 3,13,18] would say that
this discussion and negotiation process was an important
aspect of ME-PEER because it gave the Mechanical
Engineering faculty the opportunity to collaborate and
share information on teaching and scholarship, to define
for themselves the aspects within their discipline that
constituted effective and ineffective teaching, and to
develop and control their own means of self-evaluation,
self-reflection, and teaching improvement.
• Publication and presentation. Work in progress was
published and presented like that of any other research
project (e.g., Leifer and Sheppard, 1995 and 1996
[19,20], annual meetings of the American Association
for Higher Education (AAHE); Leifer and Brereton,
1995 at the NSF Engineering Education Coalition
Assessment Workshop; Leifer and Sheppard, 1995, at a
Peer-Teaching-Review panel for the Frontiers in
Education '95 conference; and Sheppard, Leifer and
Carryer, 1996, Innovative Higher Education [21]).
• Professional Development. There has been continuous
interaction with AAHE personnel (e.g., Dr. Pat
Hutchings, Director of the AAHE Peer-Teaching
Assessment Project, was a guest speaker on an assessment-of-engineering-reform
panel at FIE in Nov. '95) and with faculty in the School
of Education (e.g., Dean Richard Shavelson and Profs.
Jim Greeno and Lee Shulman spoke at the Stanford
Workshop on Product-Based Learning, Aug. '95).
• Institutionalization. The project has been endorsed by
the Provost's office, and travel funds were made
available to attend national meetings of the Peer-Teaching
Assessment community. Department chairs
have endorsed the Peer-Teaching Assessment protocol
and encouraged faculty to participate. Endorsement by
the university's administration is extremely important.
According to Cavanagh [13] and Geis [22], in order
for any form of Peer Review of Teaching to become as
established as Peer Review of Research, the assessment
criteria must be linked to the university's mission and
reward system, as well as to the missions of the
department and the participating members.
2 1994-95 ME-PEER courses:
ME112: Mechanical Systems, a senior-level undergraduate design course.
ME117/220: Introduction to Sensors, a senior- or masters-level design course.
ME250: Introduction to Heat Transfer, a graduate-level thermoscience course.

Stage II: Full Scale Testing
Based upon the general satisfaction amongst the faculty
participants in the 1994-95 project, it was decided by the
ME-principals (Sheppard and Leifer, with support from the
department chair, Ronald Hanson) to recruit additional
participants for the 1995-96 project. Four members from
the first year (Sheppard, Leifer, Carryer and Kenny) elected
to participate in the second year, and six additional
teaching staff were recruited (Renate Fruchter from Civil
Engineering, and Mike Hill, Sanjiva Lele, Borjana Mikic,
William Reynolds, and Jim Widmann from Mechanical
Engineering). The objectives of the project for the 1995-96
academic year were to refine the prototype and document
the protocol. Particular attention was given to individuals
who had not been part of the original group that identified
the list of issues and defined the protocol. In addition, we
collected data on how much time participation required.
Table 1: Issues Relevant to Student Interviews. The first five issues were derived from the work of Way, 1992 [16] and
Hildebrand et al., 1971 [23]. The numbers listed under Student and Peer Input reflect the Stanford ME-PEER group's opinion
as to the relative observability students or peers could provide for each issue (expressed as a percentage).

Instructor-Group Interaction (Student Input: 90, Peer Input: 10)
Relates to rapport with the class as a whole, sensitivity to class response, and
skill at securing active class participation.

Instructor-Individual Interaction (Student Input: 90, Peer Input: 10)
Relates to mutual respect and rapport between the instructor and the individual
student.

Dynamism-Enthusiasm (Student Input: 90, Peer Input: 10)
Relates to the flair and infectious enthusiasm that comes with confidence,
excitement about the subject, and pleasure in teaching.

Analytic-Synthetic Approach
Relates to scholarship, with emphasis on breadth, analytic ability, design
ability, and conceptual understanding. Also relates to confidence as an
engineer and problem-solving abilities.
• as perceived by students (Student Input: 90, Peer Input: 10)
• as perceived by peer (Student Input: 10, Peer Input: 90)

Organization & Clarity
Relates to skill at presentation, but is subject-related, not student-related, and
not concerned merely with rhetorical skill. Includes concern for course design
and course implementation.
• Course Design (Student Input: 20, Peer Input: 80)
• Course Management (Student Input: 80, Peer Input: 20)
• Clarity (Student Input: 50, Peer Input: 50)

Putting in the Effort (Student Input: 60, Peer Input: 40)
Relates to work done to prepare materials, lectures and learning experiences
necessary to transform a curriculum design into an effective curriculum
experience. Also includes timeliness and clarity of feedback to students on
their homework, project reports, etc.

Continuous Development (Student Input: 10, Peer Input: 90)
Relates to continuing evolution of the curriculum content and process; reflects
continuous challenge and revision of the course content and process.
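To make the weighting scheme of Table 1 concrete, the following is a minimal sketch (in Python, which the paper itself does not use) that encodes the issues and their observability shares as data. The issue names and percentages come from Table 1; the blended_observation function is a hypothetical illustration of how such weights could combine student and peer ratings, not part of the ME-PEER protocol itself.

# Illustrative sketch only: Table 1's observability weights as a data
# structure. Issue names and percentages are from the paper; the blending
# function is a hypothetical use of the weights, not part of ME-PEER.

WEIGHTS = {  # issue: (student input %, peer input %)
    "Instructor-Group Interaction":      (90, 10),
    "Instructor-Individual Interaction": (90, 10),
    "Dynamism-Enthusiasm":               (90, 10),
    "Analytic-Synthetic (student view)": (90, 10),
    "Analytic-Synthetic (peer view)":    (10, 90),
    "Course Design":                     (20, 80),
    "Course Management":                 (80, 20),
    "Clarity":                           (50, 50),
    "Putting in the Effort":             (60, 40),
    "Continuous Development":            (10, 90),
}

def blended_observation(issue: str, student_obs: float, peer_obs: float) -> float:
    """Combine student and peer ratings of an issue, weighted by who the
    ME-PEER group judged best positioned to observe it."""
    s_w, p_w = WEIGHTS[issue]
    assert s_w + p_w == 100, "shares should partition observability"
    return (s_w * student_obs + p_w * peer_obs) / 100

# Example: clarity rated 4/5 by students and 3/5 by the peer team.
print(blended_observation("Clarity", 4.0, 3.0))  # -> 3.5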
The deliverables from the 1995-96 ME-PEER project
included further prototyping of the methodology. The issue
set and observability weights were further supported by six
additional cases3. These six additional cases enabled us to
estimate faculty time needed to carry out the Protocol, as
summarized in Table 3. Reducing the time needed to
complete the protocol was extremely important because we
recognized that time is commonly cited [13] as a major
barrier to faculty involvement in formative evaluation
programs. In general, participants developed a deeper
awareness of how their peers look at teaching, how they
themselves view their own teaching, and how students view
course offerings in the department. Besides increasing
participants' awareness, we achieved inter-departmental
awareness by providing consulting services to the Civil
Engineering department, which applied the protocol to one of
its courses. We also made inroads into institutionalization
by successfully getting the Provost's office to financially
support documentation efforts of the ME-PEER project. In an
effort to elevate teaching to scholarly levels, more
publications and presentations of the Protocol [e.g.,
20,24,25] were made.

3 1995-96 ME-PEER courses:
E14: Statics and Deformables, a sophomore-level strength of materials course.
ME112: Mechanical Systems, a senior-level undergraduate design course.
ME118: Introduction to Mechatronics, a senior-level undergraduate design course.
ME131b: Fluid Mechanics, a senior-level fluids course.
ME210b: Cross-functional rapid prototyping, a graduate-level design project course.
CE222: A/E/C Engineering, a graduate-level design project course in Civil Engineering.
Stage III: Evaluation and Redesign
To determine the impact of the ME-PEER project on its
faculty participants, two focus groups were held during the
fall of 1996 with 1994-95 and 1995-96 project participants.
Prior to convening the focus groups, a preliminary survey
polled each faculty member to identify key issues for focus
group discussion. Each ME-PEER faculty member was
assigned to a focus group based on scheduling availability.
Dr. Michele Marincovich, Director of Stanford's Center for
Teaching and Learning, facilitated each session. In
addition, minutes of each of the sessions were taken by a
graduate student familiar with the project, and each session
was videotaped. The dialogue from the sessions was then
transcribed and common themes were identified. Based
upon the dialogue record and session minutes, the
following benefits and shortcomings of the ME-PEER
protocol were identified.
Overall, the faculty focus groups revealed that
participants consider ME-PEER’s combination of the
reflective memo, student interviews and the summary
memo to be “a good mechanism for real-time
monitoring” (Research Associate Renate Fruchter) and
constructive feedback. The most frequently cited benefit
was the reflective memo, especially because it encouraged
self-examination and assessment immediately following a
course. One participant stated that the act of writing the
reflective memo “forces you to think about course
objectives and personal objectives ... and what (you)
want to achieve over and above technical content”
(Assistant Professor Sanjiva Lele). This reflection
process was declared an invaluable aspect of the protocol
that helped the instructor being reviewed to assess the
impact of his or her efforts on the students. Many of the
project participants have incorporated the reflective memo
in their personal course assessment protocols. We consider
this an important index of professional development.
Another extremely valuable outcome of the ME-PEER
protocol was the cross-collaborative, mentoring
atmosphere created by the process. One participant stated
that he “enjoyed the atmosphere of instructors all sitting
and sharing about teaching ... how they approach teaching
... and how they try and engage students” (Consulting
Associate Professor Edward Carryer). Cross-collaboration
focused on peer assessment of teaching
became a bridge between divisions (the Mechanical
Engineering Department is divided into three divisions:
Applied Mechanics, Design and Thermosciences), and
across departments (mechanical engineering and civil
engineering). Most participants felt that the atmosphere
created by the ME-PEER Protocol allowed them to identify
common philosophies regarding teaching excellence.
According to Cavanagh [13], the creation of this type of
collaborative community is a prerequisite to any peer-review
framework and will be necessary for any successful
Peer Review of Teaching program.
Both junior and senior faculty have been involved in
the ME-PEER project. In addition, at least two pairs of the
faculty participants had established mentor relationships
prior to entering the ME-PEER process. These pairs were
of special value in assessing the impact of ME-PEER
review on mentor-mentee performance. Typically,
one senior faculty member mentors a junior member of the
faculty. These teams found that the ME-PEER protocol
facilitated their interaction by providing a formal and richly
informative procedure for teaching assessment in a manner
that would inform the preparation of appointment papers.
Most participants felt that summative feedback
such as the School of Engineering's Tau Beta Pi
survey was “too little, too late and too important to
tenure”. Specifically, the Tau Beta Pi survey is “too little because it is
done too quickly (typically 15 minutes is allotted to
students to fill out the survey) and does not provide
enough feedback (rank scores only) ... too late because
there is no real chance of using the information that
quarter" (Professor William Reynolds). Because of
these limitations, some participants felt it was "too important
in the tenure decisions” (Professor William Reynolds);
but in spite of the limitations, Tau Beta Pi scores do weigh heavily
in tenure cases.
Student interviews that are part of the ME-PEER
protocol were found to provide more useful feedback than
the Tau Beta Pi survey. Student commentary gained through these
interviews was perceived as being rich with information
that went far beyond the Tau Beta Pi scores in that it tells the
instructor what to change and how. Many participants
viewed as invaluable the opportunity to interview students
in the role of the peer reviewer. One faculty member who
interviewed and reviewed several courses felt that
“students (were) thoughtful and educated customers”
(Assistant Professor Thomas Kenny), who are able to
provide useful and concise facts and opinions about their
learning process and the teaching methods that impact their
learning. Another faculty member felt that “it was good
for students to hear about other students’ experience, for
them to realize that different students learn in different
ways ... so they can get insight into the teacher’s
dilemma and about their own learning styles” (Associate
Professor Sheri Sheppard). In essence, through the peer-conducted
student interviews, ME-PEER provided the opportunity for a
faculty participant to observe and learn about how other
faculty teach, their methods and "tricks-of-the-trade."
Table 2: Summary of ME-PEER Protocol for Peer Assessment. The following steps were taken sequentially to implement
peer assessment.

Step 1. Faculty form assessment teams (typically 3 or 4 faculty), looking at 1 to
4 of their courses. Deliverable: n.a.

Step 2. The faculty member whose course is being reviewed (call him/her
Faculty Member #1) writes a reflective memo (guidelines for
reflective memos are available in the Cornell Teaching Evaluation
Handbook). Deliverable: reflective memo written by the instructor.

Step 3. Faculty Members #2 and #3 convene two or three focus groups of students
(5 to 7 students per interview session) from Faculty Member #1's class
to be interviewed. Interviews are held soon after the term is over, when
possible. Faculty Member #2 serves as the facilitator, and Faculty
Member #3 takes notes on student comments. The basis for discussion
is an expanded version of Table 1, which gives specific examples of,
for example, behaviors associated with a particular issue. These
examples help the facilitator guide student discussion. The list of issues in
Table 1 is made available to the students during the interview process.
In addition, the sessions may be videotaped to support development of
the summary memo. Tapes also serve to calibrate the interviewing
process. Videotapes are never seen by Faculty Member #1.
Deliverable: peer-facilitated focus group interviews (may be videotaped).

Step 4. Faculty Members #2 and #3 write a summary memo to Faculty Member
#1, using the reflective memo written by Faculty Member #1 and the
notes from the student interviews, and then present it to Faculty
Member #1. Deliverable: summary memo written by peer assessors.

Step 5. Faculty members change roles and review another class. Deliverable: n.a.
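The five steps of Table 2 form a simple ordered checklist, and the sketch below renders them as one in Python. The step descriptions are condensed from the table; the Step class and the printing loop are illustrative additions for tracking a review cycle, not part of the published protocol.

# Minimal sketch of Table 2's five steps as an ordered checklist. Step text
# is condensed from the paper; the Step class and loop are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    number: int
    activity: str
    deliverable: Optional[str]  # None where Table 2 lists "n.a."

PROTOCOL = [
    Step(1, "Form assessment teams of 3-4 faculty covering 1-4 courses", None),
    Step(2, "Reviewed instructor writes a reflective memo", "reflective memo"),
    Step(3, "Peers convene 2-3 focus groups of 5-7 students each",
         "focus-group interviews (may be videotaped)"),
    Step(4, "Peers write and present a summary memo to the instructor",
         "summary memo"),
    Step(5, "Faculty change roles and review another class", None),
]

for step in PROTOCOL:
    line = f"Step {step.number}: {step.activity}"
    if step.deliverable:
        line += f"  ->  deliverable: {step.deliverable}"
    print(line)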
Table 3: Time requirements for implementing the peer review protocol.

Activity 1. Reflective Memo Preparation. Reported task completion time: 3 to 6 hours.

Activity 2. Setting up Interviews. Reported task completion time: 1 to 1.5 hours.
• may involve visiting class; e-mail connection with students important
• good to give students specific meeting time options to choose from
• try to make "ordering lunch"/taping easy for peers to set up

Activity 3. Conducting Interviews. Reported task completion time: 1 to 1.5 hours per interview.
• may be 1.5 hrs. long if time is taken at the beginning of the session for students to
introduce themselves (2 interviews per class recommended)

Activity 4. Summary Memo. Reported task completion time: 3 to 6 hours.
• includes input from multiple interviewers

Activity 5. Class time "lost" for advertising. Reported task completion time: 0.2 to 0.4 hours.

Activity 6. TOTAL (sum of all task time required to complete the assessment of a
particular course): 8 to 15 hours per course (or roughly 3.5-5 hours per faculty
member involved).
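As a quick sanity check on Table 3's arithmetic, the sketch below sums the per-task ranges. The hour ranges come from the table; encoding them as (low, high) tuples, counting interview time once, and splitting the total evenly across an assumed three-person team are simplifications for illustration.

# Back-of-the-envelope check of Table 3's totals. Per-task ranges are from
# the table; the tuple encoding and even three-way split are assumptions.
TASK_HOURS = {
    "reflective memo preparation": (3.0, 6.0),
    "setting up interviews":       (1.0, 1.5),
    "conducting interviews":       (1.0, 1.5),   # per interview, counted once
    "summary memo":                (3.0, 6.0),
    "class time for advertising":  (0.2, 0.4),
}

low = sum(lo for lo, _ in TASK_HOURS.values())
high = sum(hi for _, hi in TASK_HOURS.values())
print(f"per course: {low:.1f} to {high:.1f} hours")
# -> 8.2 to 15.4 hours, i.e. the table's "8 to 15 hours per course"
print(f"per member of a 3-person team: {low/3:.1f} to {high/3:.1f} hours")
# -> an even split; the table's rough 3.5-5 hour figure presumably reflects
#    an uneven division of the tasks among team members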
The focus sessions also identified several aspects of the
ME-PEER protocol that need improvement. The most
frequently cited concern was the time requirement. Several
faculty felt that “too much time was required for the
value added” (Professor John Eaton). Many participants
agreed that the process should be streamlined by minimizing
the time needed to complete the reflective and summary
memos and to organize the student interview sessions. For
example, templates could be created for the reflective and
summary memos; such templates are likely to prove valuable in
recruiting new faculty and in establishing ME-PEER as a
mainstream assessment tool. Templates would also clarify
and simplify the protocol: several participants struggled
with constructing the summary memo and were unclear
on what should be reported.
One participant noticed that fellow reviewers “had a
tendency to put a brighter spin on the students’
comments” (Borjana Mikic, Ph.D.). One reason suggested
was that, unlike the "Peer Review of Research" process, the
ME-PEER process lacks anonymity; therefore,
reviewers may not feel that they can be totally candid.
While true, this does not negate the importance of
collaboration and development of shared values in
formative assessment. Anonymity is an issue for summative
assessment, and the raw data collected in the ME-PEER
Protocol could be used anonymously for that purpose. Some faculty
participants cautioned against the goal of mainstreaming
ME-PEER and using it as a summative tool. They
speculated on whether the process of standardization would
“make the process too formal” and thus constrain it.
Summary and Future Directions
A methodology for Peer Review of Teaching has been
developed in the Mechanical Engineering Department at
Stanford University as part of the larger AAHE "From Idea
to Prototype: The Peer Review of Teaching" Project. This
methodology is based on seven issues of effective teaching
and on input from self-reflection, student interviews conducted
by peers, and summary feedback. It has been successfully
used to review nine engineering courses.
It is our plan to expand the ME-PEER approach to
Faculty Peer Review beyond the School of Engineering.
Furthermore, we believe that it is important to:
1) encourage junior faculty and Ph.D. students to
participate in the process as a training ground that exposes
them to teaching excellence and teaching as a community
of practice activity,
2) encourage the participation of mentorship pairs as a
method that allows senior faculty to formally assess and
observe the teaching of junior faculty,
3) incorporate mechanisms for easy recruitment of faculty
and students in order to establish a self-sustaining process,
and possibly,
4) correlate the results to the current summative tools used
at Stanford.
Acknowledgments
The support and participation of Dr. Michele Marincovich
(Director of the Center for Teaching and Learning), Prof.
Bob Weisberg (the Provost's Office) and Dr. Kathleen
Quinlan are gratefully acknowledged. In addition, we
thank Prof. Lee Shulman and Dr. Pat Hutchings for
conceiving of the larger project.
References
1) Donald, J.G., and Shore B.M., (1977), "Student
Learning and the Evaluation of Teaching," In If
Teaching Is Important...The Evaluation of Instruction
in Higher Education. Edited by P. D. Naomi
Griffiths. Canada: Clarke, Irwin & Company
Limited, 1977, pp. 41-72.
2) Geis, G.L., (1977), "Evaluation: Definitions, problems
and strategies," In If Teaching Is Important..The
Evaluation of Instruction in Higher Education,
Edited by P. D. Naomi Griffiths. Canada: Clarke,
Irwin & Company Limited, 1977, pp. 8-41.
3) Hutchings, P., (1996), "The Peer Review of Teaching:
Progress, Issues and Prospects," Innovative Higher
Education, Volume 20, Number 4, Summer, 1996,
pp. 221-234.
4) McKeachie, W.J., (1987), "Can Evaluating Instruction
Improve Teaching," In Techniques for Evaluating
and Improving Instruction, New Directions for
Teaching and Learning, Number 41, Fall, 1987, pp.
3-7.
5) Seldin, P., (1980), Successful Faculty Evaluation
Programs. A Practical guide to Improve Faculty
Performance and Promotion/Tenure Decisions, First
Edition, Crugers, NY: Coventry Press, 1980.
6) Seldin, P., (1984), Changing Practices in Faculty
Evaluation. A Critical Assessment and
Recommendations for Improvement, First Edition.
The Jossey-Bass Higher Education Series. San
Francisco: Jossey-Bass Inc., 1984.
7) Nadeau, G.G., (1977), "Student Evaluation of
Instruction: The Rating Questionnaire," In If
Teaching Is Important...The Evaluation of Instruction
in Higher Education, Edited by P. D. Naomi
Griffiths. Canada: Clarke, Irwin & Company
Limited, 1977, pp. 73-128.
8) Aleamoni, L.M., (1987), "Typical Faculty Concerns
about Student Evaluation of Teaching," In
Techniques for Evaluating and Improving
Instruction, New Directions for Teaching and
Learning, Number 31, Fall, 1987, pp. 25-30.
9) Keig, L., and Waggoner, M.D., (1994), Collaborative
Peer Review: The Role of Faculty in Improving
College Teaching, Edited by B. Hollister. 1994
ASHE-ERIC Higher Education Report No. 2,
Washington, D.C.: The George Washington
University, School of Education and Human
Development.
10) Sullivan, A., (1977), "A Framework for the Evaluation
of Teaching: Self-Assessment and Formal
Evaluation," In If Teaching Is Important..The
Evaluation of Instruction in Higher Education,
Edited by P. D. Naomi Griffiths. Canada: Clarke,
Irwin & Company Limited, 1977, pp. 129-149.
11) From Idea to Prototype: the peer review of teaching (a
project workbook), 1995, Washington, DC:
American Association for Higher Education.
12) Quinlan, K.M., (1996) "Involving Peers in the
Evaluation and Improvement of Teaching: A Menu
of Strategies," Innovative Higher Education, Volume
20, Number 4, Summer, 1996, pp. 299-307.
13) Cavanagh, R.R., (1996), "Formative and Summative
Evaluation in the Faculty Peer Review of Teaching,"
Innovative Higher Education, Volume 20, Number
4, Summer, 1996, pp. 235-240.
14) Lewis, K.G., (1991), "Gathering Data for the
Improvement of Faculty: What Do I Need and How
Do I Get It?," In Effective Practices for Improving
Teaching, New Directions for Teaching and
Learning, Number 48, Winter, 1991, pp. 65-86.
15) Menges, R.J., (1991), "The Real World of Teaching
Improvement: A Faculty Perspective", In Effective
Practices for Improving Teaching, New Directions
for Teaching and Learning, Number 48, Winter,
1991, pp. 21-37.
16) Way, D.G., (1992), Teaching Evaluation Handbook,
Instructional Support, Cornell University, 1992.
17) Greenwood, G.E., and Ramagli Jr, H.J., (1980),
"Alternatives to Student Ratings of College
Teaching," Journal of Higher Education , Volume
51, pp. 673-84.
18) Astin, A., Comstock, C., Epperson, D., Greeley, A.,
Katz, J., and Kauffman, J., Faculty Development in a
Time of Retrenchment. New Rochelle: Change
Magazine Press, 1974.
19) Leifer, L.J., and Sheppard, S., invited panel,
"Opportunities for Innovative Assessment Methods
in Engineering Education," FIE'95, 25th Annual
Frontiers in Education Conference on Engineering
Education for the 21st Century, Atlanta, Georgia,
November 4, 1995
20) Leifer, L.J., and Sheppard, S., invited tutorial, "The
ME PEER Teaching Assessment Project," AAHE
Peer Teaching Assessment Conference and
Workshop, George Washington University, 26 June,
1996
21) Sheppard, S.D., Leifer, L., Carryer, J.E., (1996),
"Commentary on Student Interviews," Innovative
Higher Education, Volume 20, Number 4, Summer,
1996, pp. 271-276.
22) Geis, G.L., (1991), "The Moment of Truth: Feedback
Information About Teaching," In Effective Practices
for Improving Teaching, New Directions for
Teaching and Learning, Number 48, Winter, 1991,
pp. 7-19.
23) Hildebrand, M., Wilson, R.C., Dienst, E.R., (1971),
Evaluating University Teaching, Center for Research
and Development in Higher Education, University of
California, Berkeley, 1971.
24) Sheppard, S.D., (1996b), "Innovations in Mechanical
Engineering at Stanford University," a workshop on
Mechanical Engineering Undergraduate Education
for the Next Twenty-Five Years, Cambridge,
Massachusetts, Oct. 7-8, 1996.
25) Sheppard, S.D., and Leifer, L., "ME-PEER: one model
for peer involvement in course assessment," invited
presentation and paper, Proceedings from the
Conference on New Approaches to Undergraduate
Engineering Workshop VIII, July 25, 1996,
Kingston, ON.