The Hardest Part of Teaching

March Faculty Development Workshop
Sponsored by PETAL
A Brief Note
• For most of us, the hardest part of teaching is not really the grading.
• It’s the waking up in time for the 8AM class.
Timeline of Events
• She shoots! She scores!
• Scooby Doo, who are you?
• Consider this, Batman!
• I’m a doctor, not a dictionary!
• Can we talk?
• Wrap up
She Shoots! She Scores!
Goals for the Workshop
• Understand some of the terminology of assessment as a springboard for thinking
• Define our goals in creating systems for assessing students here at Fisher
• Start the dialogue about grading and assessing students here at Fisher
Scooby Doo, Who Are You?
Let’s Get Acquainted
• Who am I?
– Dr. Kris Green, MST/CS/Mathematics
– I hate grading – reducing students to a single symbol
– I enjoy providing feedback to my students to help them learn
– I think tests, etc. should be a place to continue learning, rather than a proof of learning
– I rarely use the exact same anything twice
• Who are you?
– Name, Department, Ideas about grading
Consider This, Batman!
Case Studies for Comparison
• This is the tale of three students in a high school Latin II class. Each has an 85% average, but got there differently.
– Kris has received, despite his efforts, a score of 85% on every test, homework, and class exercise.
– Cindy started off in the 70% range, but has consistently been in the 90% range for the second half of the year.
– Mike is the opposite of Cindy. He started in the 90% range, then spent the second half of the year in the 70% range.
• Do all three deserve the same course grade, traditionally a B?
I’m a doctor, not a dictionary!
The Basic Terminology
From Grant Wiggins (Educative Assessment):
– The aim of assessment is primarily to educate and improve student performance, not merely to audit it.
– Assessment should be educative in two basic senses:
• It should be deliberately designed to teach (not just measure) by revealing to students what worthy work looks like (offering authentic tasks)
• It should provide rich and useful feedback to all students and to their teachers
The Guiding Light(s)
Where should we head?
1. Assessment reform must center on the purpose, not merely on the techniques or tools, of assessment.
2. Assessment reform is essentially a moral matter.
3. Assessment is central, not peripheral, to instruction.
4. Assessment anchors teaching, and authentic tasks anchor assessment.
5. Assessment-based performance improvement is local.
What Are Little Grades Made Of?
Components of Assessment
• Collecting the data
– Consider the sources of the data
– Consider the frequency of the data
– Consider the relevance of the data
• Evaluating the data
– Comparison against standards
– Comparison against other work
– Providing effective feedback
• Assigning a grade-symbol
Just the Facts, Ma’am
Some Possible Data Sources
Caveat Grader
Qualitative v. Quantitative
• But remember, the data we collect is qualitative data – how students are doing with the material, what students have done, what students are having trouble with.
• Consider the typical math scheme:
– Hand work in (qualitative)
– Put a percentage grade on work and average (quantitative)
– Assign a letter grade (qualitative)
• Multiple translations like this will lose meaning without clearly defined grade standards (not simply percentage point or total point requirements).
• We should provide “Grade Profiles” to our students – qualitative descriptions of what student performance at each letter grade looks like (good examples from the Foundation for Critical Thinking, www.criticalthinking.org).
Another Dichotomy
Objective v. Subjective
Objective grading measures performance relative to fixed, universal standards.
Subjective grading is based on more relative measures, like the rest of the class’s performance or a student’s earlier performance.
But all assessment requires judgment. Hiding the judgment in a single letter grade is dishonest and does not really help the student learn from his or her mistakes.
Another Dichotomy
Summative v. Formative
• Summative evaluation is like a final exam: a one-shot sampling of the topics covered, assessing whether you know/understand/can do them at that point only.
• Formative evaluation is on-going and is designed to help the student improve; thus, it is a part of the learning process: writing and revising a paper, for example.
Can We Talk?
Questions for Discussion
1. What do I want the students to know, understand, and be able to do? How does this affect my teaching and planning?
2. What does an A student look like? What about a B, C, D, or F?
3. Are these profiles of A, B, C, D, and F students consistent across the curriculum, or should they change as the level of the coursework changes?
4. What is the role of standards in assessing students: should we hold them up to a rigid ruler, or should the ruler flex based on the other students?
5. How can we avoid grade compression and grade inflation?
The Check(list) is in the mail
Checklist of Requirements
• Let’s come up with 3-5 items in each group that would be necessary components for any system to assess students here at Fisher.
• We’ll share these and generate a master list with descriptors. I’ll email this to everyone and place the information on my website, along with this PowerPoint: http://keep2.sjfc.edu/faculty/green (then look for teacher resources).
Selected Resources for Perusal
• Grant Wiggins, Educative Assessment
• Tom Bourner and Steve Flowers, Teaching and Learning Methods in Higher Education, www.bbk.ac.uk/asd/Bourne.htm
• Part III of the New York State MST Standards Guide at www.emsc.nysed.gov/guides/mst/
• Office of Academic Planning and Assessment, Univ. of Massachusetts Amherst, at www.umass.edu/oapa/oapafiles/oapaindex.html