360 Degree Evaluation

Goals of Student Assessment
Why does assessment matter?
Assessment drives student learning
George Miller
 Changing the curriculum or the teaching methods without changing the way examinations are conducted will lead nowhere.
 Changing the examination system, even without changing the curriculum, has a far deeper effect on the nature of learning than changing the curriculum without changing the examinations.
In the long run, the kinds of activities students undertake in order to learn, and what they actually learn, are determined by the kind of examinations they have to pass.
Guilbert
Types of Assessment
 Formative assessment
 Summative assessment
Goals of Formative Assessment
 Informing the student of what remains to be learned to reach the educational objectives
 Assessing the student's progress
 Identifying the student's learning strengths and weaknesses
 Helping the student in areas of weakness
 Helping the teacher make the necessary changes to the curriculum, the teaching methods, or even the educational objectives
 Carried out during the course of instruction
 Should not be used to pass judgment
 Should not be recorded in any official document
Goals of Summative Assessment
 The aim is to grade the student
 Usually carried out at the end of a course
 Tests are usually very comprehensive and detailed
 Used to decide on promoting students to the next year or awarding a degree
 Judging the effectiveness of the teacher's work and of the curriculum
How teachers view the teaching-learning process
 What content should be taught?
 What should students learn?
 Which teaching-learning methods are suitable?
 How can student learning be assessed?
Assessment receives the teacher's attention only at the last stage of the teaching-learning process.
How students view the teaching-learning process
 By what method will I be assessed?
 What do I need to know?
 What are the learning objectives?
 What methods should I use to study?
From the students' point of view, assessment usually comes first in the teaching-learning process.
Evaluation is for Improvement, not Provement
360 Degree Performance Appraisal
Definition
 An evaluation tool that draws on the opinions of the many different people who interact with the employee on a routine basis
 Generates more accurate feedback by "gathering information from people about an individual's performance as seen by the standards and expectations of their boss, self, peers, direct reports, and customers."
Key Features
 Usually based on a questionnaire, possibly web-based (a minimal data-model sketch follows this list)
 Choosing appraisers
– Done by the individual employee
– Done by HR
– Done randomly
 Feedback is usually anonymous
 Appraisal is normally followed up with actions for individual improvement and development
 Not to be used for decision-making; the only purpose is employee growth
 Utilizes many stakeholders inside and outside of the organization
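The features above imply a simple data model: a questionnaire, a fixed set of rater categories, and responses stored without rater names and reported only in aggregate. The sketch below is illustrative and not part of the original slides; the class and field names (Survey360, Response, RaterCategory) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from statistics import mean


class RaterCategory(Enum):
    SELF = "self"
    BOSS = "boss"
    PEER = "peer"
    DIRECT_REPORT = "direct report"
    CUSTOMER = "customer"


@dataclass
class Response:
    """One anonymous answer: only the rater category is kept, never a name."""
    category: RaterCategory
    item: str      # questionnaire item, e.g. "communicates clearly"
    rating: int    # e.g. 1 (never) .. 5 (always)


@dataclass
class Survey360:
    employee_id: str
    responses: list[Response] = field(default_factory=list)

    def add(self, category: RaterCategory, item: str, rating: int) -> None:
        self.responses.append(Response(category, item, rating))

    def summary(self) -> dict[tuple[RaterCategory, str], float]:
        """Average rating per (rater category, item); individual raters are never reported."""
        buckets: dict[tuple[RaterCategory, str], list[int]] = {}
        for r in self.responses:
            buckets.setdefault((r.category, r.item), []).append(r.rating)
        return {key: mean(vals) for key, vals in buckets.items()}
```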
Appropriate Uses
 Employee development
 Employee coaching
 Validate personal opinion of one's self (or not)
 Starting point for a personal development plan
 Seeking objective information, such as determining employee knowledge, skills, and behavior, not personality traits
 Benchmark individual performance against the peer group (see the sketch after this list)
 Provide a broader view of the employee's performance
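One of the uses above, benchmarking an individual against the peer group, can be illustrated with a short sketch. It is only an assumption about how such a benchmark might be computed, not a method prescribed by the slides; the function name peer_percentile is hypothetical.

```python
from statistics import mean


def peer_percentile(individual_scores: list[float],
                    peer_group_scores: list[list[float]]) -> float:
    """Percentage of peers whose mean rating falls below the individual's mean.

    `individual_scores` holds one employee's item ratings; each entry of
    `peer_group_scores` holds another peer's item ratings. The exact
    benchmarking rule is an illustrative assumption.
    """
    own = mean(individual_scores)
    peer_means = [mean(scores) for scores in peer_group_scores]
    below = sum(1 for m in peer_means if m < own)
    return 100.0 * below / len(peer_means) if peer_means else 0.0


# Example: an employee averaging 4.25 against three peers -> 66.7
print(peer_percentile([4, 4, 5, 4], [[3, 4, 3], [5, 5, 4], [4, 4, 4]]))
```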
Inappropriate Uses
 Rarely linked to decisions on pay
 Not recommended for promotion decisions
 Should not be a heavy determinant in bonus awards
 In small organizations where anonymity is unlikely or there is a lack of enough peers and direct reports to reduce outlying opinions
Pros and Cons
 Pros
– Combined opinions are more accurate
– Colleague comments tend to carry weight
– Some skills are best judged by peers and not management
– Increases the motivation of employees
– Helps engender a more honest organizational culture
 Cons
– Administratively burdensome
– Results can be difficult to interpret
– Feedback can be damaging unless handled appropriately
– Can generate an environment of suspicion and cynicism if not managed openly and honestly
Support For Use
 The US Office of Personnel Management supports research that shows "assessment approaches with multiple rating sources provide more accurate, reliable, and credible information"
 Spencer and Morrow indicate that 360-degree feedback systems could yield a return on investment as high as 700 percent
 In 1997, 8% of companies used 360°, a figure that increased to 10% by 2000
 Information currently posted at www.360degreefeedback.com states that nearly all Fortune 1000 companies have either already implemented a 360-degree approach or plan to do so shortly
Is The Environment Appropriate?
 Ask yourself the following questions:
– What is the desired outcome of the feedback?
– Do we have enough raters?
– Is this applicable to all of our employees or to an employee group?
– Are our employees mature enough to handle the feedback and to give feedback?
– Is there openness and trust between supervisors and their direct reports?
Is The Environment Appropriate?
 More questions to ask yourself:
– Are our employees and "managers willing to listen and learn and to effect any necessary changes as a result?"
– Are we willing to devote the time and energy to make this system work? (It won't work unless everyone in the organization is on board from the "get-go".)
– What do we want to do with the information that is gained? Help the employees grow, or are we looking for a way to determine pay and promotion?
Implementation
 Build credibility early on by seeking input from all levels of the organization; use this input to craft the feedback tool
 Assure employees that this will not be used to determine pay, promotion, or bonus
 Instill in management that this tool is to assist them in coaching their direct reports to grow in all aspects of their professional responsibilities
 Seek outside professional help to ensure smooth implementation
 Train appraisers to be constructive, positive, and specific with their feedback
Implementation
 The 360-feedback.com 9-Step Process
– Determine organizational readiness
– Develop an appropriate survey and process given organizational needs and objectives
– Generate enthusiasm among key decision makers and participants
– Ensure that participants and managers have the skills to support the process
– Provide an orientation briefing
– Administer the survey
– Coach participants in one-on-one meetings
– Provide organizational summary data
– Re-conduct the survey (in four to six months)
Participants
 Superiors
 Peers
 Direct Reports
 Customers
 Self
Participants
The following slides outline the pros and cautions associated with each participant in the 360 Degree Appraisal process. There may be occasions when one source or another is not chosen to participate.
For each individual being appraised, specific groups should be chosen to ensure that the feedback is appropriate and that a plan for improvement can be generated for the employee.
A pro is a positive outcome from that specific group. A caution is not necessarily negative, but it must be monitored so that it does not create a negative situation for all involved.
Superiors
 Pros
– First-line supervisors are often in the best position to carry out full-cycle performance management
– Superiors have the authority to redesign an employee's work based on individual and team performance
– Most Federal employees think that the best ratings come from first-line supervisors
 Cautions
– Relying solely on superiors reduces the validity of performance feedback
– Superiors may not be in the same location as the employee, preventing them from having hands-on knowledge of the employee's performance
– Training may be lacking on appropriate methods of evaluation
Peers
 Pros
– Peer pressure and peer approval are more effective motivators than supervisors
– Peer ratings have proven to be excellent predictors of future performance
– Peer ratings are remarkably valid and reliable in rating behaviors and manner of performance
– Peer ratings tend to average out bias from other groups in the rating process
– Increased use of self-directed teams encourages the use of peer evaluation
– Peer ratings help move supervisors into a coaching role as opposed to a pure judging role
Peers
 Cautions
– Should not be used to determine pay, bonuses, or promotions (creates animosity and prevents truthful responses from peers)
– Do not divulge the names of those providing feedback; in general, anonymity is preferred to prevent animosity and generate truthful responses
– Choose the peers wisely; don't choose at random, because the peers must be very familiar with the work requirements and performance
– Can be very time-consuming for peers to participate
– Can cause tension among employees and the breakdown of teams
– Ensure employee involvement in creation; otherwise no buy-in will be achieved from employees or their representatives
Direct Reports
 Pros
– Gives supervisors a more comprehensive picture of employee needs and issues
– Makes employees feel that they have a greater voice in organizational decision making
– Extremely effective in evaluating a supervisor's interpersonal skills
– Combine direct-report ratings to achieve an average rating; this adds validity and reliability (see the sketch after this slide)
– Supervisors are more responsive to direct-report feedback, creating more effective managers
 Cautions
– The need for anonymity is essential; if feedback is not anonymous, reprisal from supervisors is likely
– Supervisors may feel that their authority is undermined when they must take into consideration that their employees are rating them
– Allow only direct reports with at least a one-year relationship with the supervisor and no disciplinary action to comment
– If undergoing downsizing or reorganization, carefully weigh the need for direct reports in the process; it may add fuel to the fire
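As a small illustration of two points from this slide, combining direct-report ratings into one average while protecting anonymity, here is a hedged sketch; the MIN_RATERS threshold of 3 is an assumed policy, not a value taken from the slides.

```python
from statistics import mean
from typing import Optional

# Hypothetical policy: only report an average when enough direct reports
# have responded, so that no single rater can be identified.
MIN_RATERS = 3


def combined_rating(ratings: list[float], min_raters: int = MIN_RATERS) -> Optional[float]:
    """Average the direct-report ratings, or return None when releasing the
    number would risk identifying individual raters."""
    if len(ratings) < min_raters:
        return None  # withhold the result: too few responses for anonymity
    return round(mean(ratings), 2)


print(combined_rating([4, 5, 3, 4]))  # 4.0
print(combined_rating([5, 2]))        # None, below the anonymity threshold
```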
Customers
 Pros
– Serves as an "anchor" for all other performance factors
– Combined with peer evaluation, these data round out feedback and focus attention beyond only serving the supervisor's needs
– Ensures that employees concentrate their attention on the customer, as the customer will have some say with regard to their feedback
 Cautions
– Only ask customers to evaluate outputs, not processes; they can't always see the entire process
– The customer feedback process is time-consuming; focus this time on "big picture" items
– Don't ask the customer to evaluate a single employee unless the customer has a direct relationship with the employee
Self
 Pros
– Improves communication between supervisor and employee
– Particularly useful if the entire cycle focuses on self-assessment; forces the individual to keep track of successes and failures
– Develops the ability to see one's self as one really is
– Allows the supervisor to have a better handle on performance that cannot always be observed
 Cautions
– Research indicates a "low correlation between self-ratings and all other sources of ratings, particularly supervisor ratings"
– Self-ratings are consistently higher than other ratings
– If supervisors do not use appropriate feedback skills, the fact that a self-rating is higher than the supervisor's may cause alienation and defensiveness
Web Resources
 www.360-degreefeedback.com
 www.hr.com
 http://humanresources.about.com
 www.businessballs.com
 www.managers.org.uk
 www.quality.org
 www.opm.gov
Closing Thought
This is a performance development tool!
"In working with organizations, one of the biggest fears people have is that a group of anonymous people will determine their raises, promotions, and standing. I am a strong proponent of introducing 360 degree feedback as a developmental tool for individuals.
In a performance development environment, the question of whether 360 degree feedback should impact performance appraisal becomes irrelevant. The performance appraisal has transformed into the performance development tool. The measurements used to determine compensation in such a system include meeting measurable goals, attendance, and contribution."
- Susan M. Heathfield
The end
Who Should Assess?
 Faculty
 Self
 Peers
 Tutors
 Other team members
 Standardized patients
 External and internal examiners
 Public, patients, society, …
360°
360 Degree Evaluation
 Surveys of people who work with the resident
– Nurses
– Other residents
– Students
– Other health professionals
– Staff
 Given as feedback to the resident to help improve (few studies of effectiveness and reliability in GME)
360 Degree Evaluation
Craig McClure, MD
May 15, 2003
Educational Outcomes Service Group
Description
 Use of rating forms to report the frequency of observed behavior
 Multiple people in contact with the resident act as evaluators
 Often a survey-type form
 Ratings summarized by topic (a small sketch follows this slide)
 Include goal-setting
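One possible way to turn "frequency of observed behavior" rating forms into topic summaries is sketched below; the topics, behaviours, and frequency labels are illustrative assumptions, not items from any validated instrument.

```python
from collections import Counter, defaultdict

# Each record is (topic, behaviour, observed_frequency); the values here are
# made-up examples of what a survey-type form might collect.
observations = [
    ("communication", "explains plan to patient", "often"),
    ("communication", "explains plan to patient", "always"),
    ("professionalism", "arrives on time", "sometimes"),
    ("professionalism", "arrives on time", "often"),
]


def summarize_by_topic(records):
    """Count how often each frequency label was reported for each behaviour, grouped by topic."""
    summary: dict[str, dict[str, Counter]] = defaultdict(lambda: defaultdict(Counter))
    for topic, behaviour, freq in records:
        summary[topic][behaviour][freq] += 1
    return {t: {b: dict(c) for b, c in bs.items()} for t, bs in summary.items()}


print(summarize_by_topic(observations))
```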
Background
 Human resources in business
 ACGME found no published reports of use in GME
Use for “Soft” Areas
 More accurate for formative than summative feedback
 Interpersonal & communication skills
 Professional behavior
 Limited for:
– Patient care
– Systems-based practice
Decision to Utilize
 Accepted and used by residents, faculty, and staff?
 Develop or purchase?
 Cost?
 Who are the raters?
 How will the tool be used?
Decision to Utilize (2)
 To whom is the information available?
 What core competencies will be evaluated with this tool?
 How do we nurture trust that the process will remain confidential?
 Platform of evaluation
Acceptance
 Will all potential evaluators fully participate?
 Will raters be fair & honest?
 Will residents accept feedback from non-faculty?
Develop or Purchase
 Development permits tailoring
 Development time may be considerable
 Purchasing gives a ready-made product
 Purchasing: computer-based
Developing
 Expert in educational testing
 Programming expertise
 Pilot period
Purchase
 Are the items measured appropriate?
 Does it perform as claimed?
 Inter-rater reliability? (a rough check is sketched below)
 Degree of support and ability to customize
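For the inter-rater reliability question above, one rough, self-contained check is a one-way random-effects intraclass correlation, ICC(1,1). The sketch below assumes every resident is rated by the same number of raters and is meant only as a screening calculation, not a replacement for a vendor's published reliability data.

```python
from statistics import mean


def icc_1_1(ratings: list[list[float]]) -> float:
    """One-way random-effects ICC(1,1).

    Each inner list holds the k ratings given to one subject; every subject
    must have the same number of ratings.
    """
    n = len(ratings)         # number of subjects
    k = len(ratings[0])      # raters per subject
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    # Between-subjects and within-subjects mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)


# Three residents, each rated by four raters (illustrative numbers) -> about 0.83
print(round(icc_1_1([[4, 4, 5, 4], [2, 3, 2, 3], [5, 5, 4, 5]]), 2))
```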
Cost
 If purchasing, monetary cost
 If developing, personnel support
 Data management system
 Personnel time to complete forms
 Annual development plan
Cost (2)
 Addressing EEOC/grievance complaints
 Handling disputes over data
 Divisive & counterproductive for those resistant
Personnel Evaluation Time
 5 to 10 nurse evaluators per resident to give reproducible results
 More for faculty
 More for patients
Identify Raters
 Patients (how to explain the process?)
 Nursing staff
 Clerical staff members
 Physician faculty members
 Non-physician faculty members
 Residents
Identify Raters (2)
 Medical students
 Allied Health Personnel
 Self-assessment
Patients as Raters
 Literacy
 Language
 Culture (medical and otherwise)
 Personality
Intended Utility
 Intervals: monthly, quarterly, yearly
 Summative versus formative
 To support high-stakes decisions?
Access to Information
 Resident
 Advisor
 Program Director
Confidentiality & Trust
 Raters require anonymity
 Residents require confidentiality
 Both need the process to be positive & constructive
 Prior history conditions expectations
 Education about the process aids current participation
Platform of Evaluation
 PDA
 Paper
 Computer
Challenges
 Securing appropriate instruments for a variety of evaluators
 Managing data successfully
Advantages
 Electronic database for documentation
 Ease of access for raters
 Rapid turnaround for feedback
 "Gap" analysis (self-perception versus the image of others); a sketch follows this slide
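The "gap" analysis above can be computed directly once self-ratings and others' ratings are grouped by competency. A minimal sketch, assuming ratings on a common scale; the competency names are illustrative.

```python
from statistics import mean


def gap_analysis(self_ratings: dict[str, float],
                 others_ratings: dict[str, list[float]]) -> dict[str, float]:
    """Per competency, self-rating minus the average of everyone else's ratings.

    Positive gaps flag areas where self-perception exceeds the image others
    report; negative gaps flag the reverse.
    """
    return {comp: round(self_ratings[comp] - mean(scores), 2)
            for comp, scores in others_ratings.items()
            if comp in self_ratings}


self_view = {"communication": 5, "teamwork": 4}
others = {"communication": [3, 4, 3], "teamwork": [4, 4, 5]}
print(gap_analysis(self_view, others))  # {'communication': 1.67, 'teamwork': -0.33}
```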
Disadvantages
 Hardware/software costs
 Lack of validation in GME
 Potential information overload
 Selection bias
 Discoverability
 Potential for invalid feedback
References
 Hobgood CC, et al. Assessment of Communication and Interpersonal Skills Competencies. Academic Emergency Medicine 2002;9:1257-69.
 ACGME/ABMS Joint Initiative. Toolbox of Assessment Methods, September 2000.
References (2)
 Rodgers KG, et al. 360-degree Feedback. Academic Emergency Medicine 2002;9:1300-1304.
 Letter from the ADFM listserv, Goldsmith to Kikano.
Expanding our toolbox…
[Pyramid figure: Does / Shows how / Knows how / Knows]
Established technology of efficient written or computer-based high-fidelity simulations (MCQ, Key Feature, Script Concordance Test, MEQs…)
Expanding our toolbox…
[Pyramid figure: Does / Shows how / Knows how / Knows]
Established technology of structured high-fidelity in vitro simulations requiring behavioural performance (OSCE, SP-based testing, OSPE…)
Expanding our toolbox…
[Pyramid figure: Does / Shows how / Knows how / Knows]
Emerging technology of appraising in vivo performance (work-based assessment: clinical work-sampling, Mini-CEX, portfolio, practice visits, case orals…)
Expanding our toolbox…
[Pyramid figure: Does / Shows how / Knows how / Knows, spanning "domain-specific" and "domain-independent" skills]
Emerging technology of appraising in vivo performance (self-, peer-, co-assessment, portfolio, multisource feedback, learning process evaluations…)
Thank you