Marilyn's Summary of Assigned Readings
- resources compiled by Brian Frank - engineering.queensu.ca/egad/resources_new.php
Specific Websites reviewed:
Michigan:
http://www.engin.umich.edu/teaching/assess_and_improve/handbook/plans.html
Stanford:
http://www.stanford.edu/dept/pres-provost/irds/assessment/downloads/AM.pdf
Methodology for Development of Assessment Plan:
Idea:
- We wish to answer the question: 'How well does this program achieve its educational outcomes and objectives?'
- Based on the answer to that question, develop ways to improve the program.
This is pretty much the same as what we are doing, where our outcomes and objectives include the CEAB
graduate attributes plus our own sustainability attribute, along with ensuring that we work towards
continuous improvement.
Steps from Michigan:
1. develop program objectives:
"program educational objectives are broad statements that describe the career and
professional accomplishments that the program is preparing graduates to achieve (ABET, 2004)"
(McMaster - done - these are our grad attributes + sustainability attribute)
2. develop program level learning outcomes:
"Program outcomes are statements that describe what students are expected to know and be
able to do by the time of graduation. These relate to the skills, knowledge, and behaviours that
students acquire in their matriculation through the program (ABET, 2006)"
(McMaster - done - these are our competencies)
3. Develop measurable performance criteria for each outcome:
"A performance criterion is a specific statement that describes a measurable aspect of
performance that is required to meet the corresponding outcome. Each performance criterion
must also specifically describe an acceptable level of measurable performance. For performance
criteria that are not directly assessable, indirect indicators of the performance can be identified.
There should be a limited number of performance criteria for each outcome. Set a schedule for
reviewing and updating the performance criteria."
(McMaster - This is trickier. We are working to develop the thresholds and targets (based on
modified Bloom's taxonomy - remembering, understanding, applying, ...) for each of our
competencies but have not defined (to my knowledge) exactly what a student can do under
each Bloom's taxonomy level for a specific competency. Here it seems that Michigan is doing
things differently from us. They seem to have what looks like a competency mapped to a
taxonomy level. See the table below (sorry that it is cut off!):
Outcome K: Graduates have an ability to use the techniques, skills, and modern engineering
tools necessary for engineering practice (Updated 6/12/99)

Knowledge:
- Bloom's definition: Remembering previously learned information
- Verbs: Arrange, define, describe, duplicate, identify, label, list, match, memorize, name, order, outline, recognize, relate, recall, repeat, reproduce, select, state
- Outcome element: Lists available techniques, skills, and tools available to a specific engineering discipline.

Comprehension:
- Bloom's definition: Grasping the meaning of information
- Verbs: Classify, convert, defend, describe, discuss, distinguish, estimate, explain, express, extend, generalize, give example(s), identify, indicate, infer, locate, paraphrase, predict, recognize, rewrite, report, restate, review, select, summarize, translate
- Outcome element: Classifies the role of each technique, skill, and tool in solving engineering problems, studying the performance of existing processes or systems, and/or developing designs.

Application:
- Bloom's definition: Applying knowledge to actual situations
- Verbs: Apply, change, choose, compute, demonstrate, discover, dramatize, employ, illustrate, interpret, manipulate, modify, operate, practice, predict, prepare, produce, relate, schedule, show, sketch, solve, use, write
- Outcome element: Uses engineering techniques, skills, and tools including computers to solve engineering problems.

Analysis:
- Bloom's definition: Breaking down objects into simpler parts and seeing how the parts relate and are organized
- Verbs: Analyze, appraise, break down, calculate, categorize, contrast, criticize, diagram, differentiate, discriminate, distinguish, examine, identify, illustrate, infer, outline, point out, question, relate, select, separate, test
- Outcome element: Compares results from software or simulators of system performance with those from alternative calculation methods...

Additional outcome elements (column placement unclear in the cut-off table):
- Uses modern engineering techniques, skills, and tools such as computer software, simulation packages, and diagnostic equipment.
- Uses engineering techniques, skills, and tools including computers to monitor performance of engineering systems and/or create engineering designs.
- Selects appropriate techniques and tools for a specific engineering task.
- Uses engineering techniques, skills, and tools to acquire information needed for decision-making.
We will need to think about this. This is also consistent with an email from Art (Feb 7, 2011)!
This is really a rubric (as discussed by Stanford). Does this mean that we need to work out rubrics
for each competency within the context of Bloom's taxonomy?
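To think this through concretely, here is a rough sketch of my own (not from Michigan or Stanford; the competency name, criteria, threshold, and target below are made-up placeholders) showing how one competency could be tied to Bloom's levels with a threshold and a target:

# Illustrative sketch only (not from Michigan or Stanford): one way to record,
# for a single competency, what a student should be able to do at each Bloom's
# taxonomy level, plus a threshold and a target level. The competency name and
# criteria are hypothetical placeholders.

BLOOM_LEVELS = ["remembering", "understanding", "applying",
                "analyzing", "evaluating", "creating"]

competency = {
    "name": "Use of modern engineering tools",   # hypothetical competency
    "criteria": {                                # one indicator per Bloom level
        "remembering": "Lists the tools available in the discipline.",
        "understanding": "Explains the role of each tool.",
        "applying": "Uses tools to solve a standard problem.",
        "analyzing": "Compares results obtained with different tools.",
    },
    "threshold": "understanding",                # minimum acceptable level
    "target": "analyzing",                       # level we aim for
}

def meets(level_achieved: str, level_required: str) -> bool:
    """True if the achieved Bloom level is at or above the required one."""
    return BLOOM_LEVELS.index(level_achieved) >= BLOOM_LEVELS.index(level_required)

print(meets("applying", competency["threshold"]))  # True  (above threshold)
print(meets("applying", competency["target"]))     # False (below target)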
4. Align the curriculum and supporting practices with the learning outcomes
"Make a table or matrix showing all the learning outcomes on one axis and all the required
learning experiences in the program (such as courses, advising, co-ops, etc.) on the other axis. In
the cells, note where skills for each outcome are taught. An example matrix may be helpful"
(McMaster - done - This is essentially our curriculum mapping.)
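A minimal sketch of what such a map could look like in practice (the outcome labels, course codes, and cell entries are made-up placeholders, not our actual curriculum):

# Illustrative sketch only: a curriculum map with outcomes on one axis and
# required learning experiences (courses, co-op, etc.) on the other. The cells
# note where each outcome is addressed. All labels are hypothetical.

curriculum_map = {
    "Problem analysis": {"ENG 1P01": "taught", "ENG 2X03": "assessed"},
    "Design":           {"ENG 2X03": "taught", "Capstone": "assessed"},
    "Sustainability":   {"ENG 3S01": "taught", "Co-op": "reinforced"},
}

# Where is a given outcome addressed?
for course, role in curriculum_map["Design"].items():
    print(f"Design is {role} in {course}")

# Which outcomes does a given course touch?
course = "ENG 2X03"
print(course, "addresses:",
      [outcome for outcome, cells in curriculum_map.items() if course in cells])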
5. Specify assessment methods for measuring each performance criterion
"Collecting evidence is about answering questions, such as sub-questions that help answer the
central question "How well does this program achieve its educational outcomes and, ultimately,
its educational objectives?" Specify your program's questions. Specify corresponding assessment
methods that will provide such information to program-level decision-makers about how well
each performance criterion is being met. Two sections of this handbook support faculty with this
step in their assessment planning: conducting direct assessments and conducting indirect
assessments"
(McMaster - this is the next step that Nik and Martin have been working on.) There is a lot of
information about direct and indirect assessment methods. A short summary (based on notes
from Stanford) of direct and indirect methods is given below:
Direct Methods:
These are tests, projects, assignments, case studies, etc. Stanford recommends the use of
'embedded assessments' which are tasks (competencies) that are integrated into specific
courses. From Stanford: ' embedded assessments usually involve classroom assessment
techniques but are designed to collect specific information on program learning outcomes'.
(McMaster - it would seem that what we need to do is ask the instructors of the courses that
we will be measuring next year to directly incorporate an assessment of a competency into their
final exams or projects.)
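As a rough picture of the bookkeeping this might involve (the course codes, item names, competencies, and scores below are all hypothetical, not anything from the readings), embedded assessment results could be rolled up per competency roughly like this:

# Illustrative sketch only: recording embedded assessments, i.e. exam or
# project items inside specific courses that are tagged with the program
# competency they measure. All course codes, items, competencies, and scores
# are hypothetical placeholders.

embedded_assessments = [
    {"course": "ENG 3X03", "item": "Final exam Q3", "competency": "Problem analysis",
     "scores": [3, 4, 2, 4, 3]},   # e.g. scores on a 1-4 rubric scale
    {"course": "ENG 4Y04", "item": "Design project report", "competency": "Design",
     "scores": [2, 3, 3, 4, 4]},
]

def competency_averages(assessments):
    """Roll item-level scores up to a per-competency average for program review."""
    totals: dict[str, list[int]] = {}
    for a in assessments:
        totals.setdefault(a["competency"], []).extend(a["scores"])
    return {c: sum(s) / len(s) for c, s in totals.items()}

print(competency_averages(embedded_assessments))
# {'Problem analysis': 3.2, 'Design': 3.2}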
Stanford also talks about 'objective vs performance' assessment. 'Objective' assessments are
short answer, multiple choice, etc. types of problems where there is essentially only one
answer. 'Performance' assessment questions require students to respond by selecting,
organizing, creating, performing and/or presenting ideas. They state that performance
assessment is better at measuring higher-order thinking - but requires expert judgement in
grading. (That should be fine if we have 'experts' marking the exams.)
Indirect Methods:
From Stanford: "Capture students' perceptions of their learning attitudes and experiences. May
also include informal observation of student behaviour, evaluation of retention rates, and
analysis of program procedures that are linked to student learning." An important point from
Stanford is that Indirect Methods are not sufficient for measuring student learning outcomes.
They must be supplemented with direct measures. (This is consistent, then, with our approach of
starting assessment based on performance on tests, projects, etc.)
6. Do the Assessment
Michigan also points out some practical stuff that needs to be done regarding schedules:
- determine assessment methods
- develop a schedule for conducting assessments
- develop a schedule for analyzing and reporting on assessments
- figure out who will assemble and analyze the report
- decide how the data will be used to determine causes of student weaknesses
7. Program Improvement
This is all about taking action (based on the assessment results) to improve the program. Obvious
stuff associated with this:
- what needs to be improved?
- what actions can be taken to achieve the improved program?
- who will be responsible for its implementation?
- what will the schedule be for implementation?
- who will assess the corrective action taken (to ensure it led to improvement)?
- who will document all this?
- I suspect we need to repeat steps 6 & 7 continuously.
Summary of Rubrics
This is well summarized in the Stanford document as well as in the two-page summary by Brian Frank.
A rubric is used as a way of grading an assignment or written report. It is essentially a matrix
where the elements in the first column are called 'dimensions'. Moving along a row gives a
description or scale of how well that dimension has been met. Stanford: 'The dimensions lay
out and describe the parts of the task'. The scale points could be linked to Bloom's taxonomy.
We also define the threshold and target level for each dimension.
Example from Stanford (only showing 2 dimensions)
Task Description: develop lab report for particular experiment

Dimension: Ability to formulate and test good scientific hypotheses
- Unacceptable: Formulates non-testable hypotheses. Designs poor experiments.
- Marginal: Formulates non-testable hypotheses but indicates some understanding... Designs weak experiments.
- Proficient: Formulates hypotheses that are testable but lack some clarity. Usually applies appropriate methodologies.
- Exemplary: Formulates elegant and easy-to-test hypotheses. Applies appropriate methodologies.

Dimension: Ability to analyze and interpret data
- Unacceptable: Fails to identify or apply methodologies.
- Marginal: Inconsistently identifies and applies method...
- Proficient: Usually applies appropriate methodology...
- Exemplary: Consistently identifies and correctly applies...
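If we do end up building rubrics per competency, here is a small sketch of the mechanics (the dimension names and descriptors loosely echo the Stanford example above; the 'Proficient' threshold is only an assumption, not something either document prescribes):

# Illustrative sketch only: a rubric as a dimension-by-scale matrix, plus a
# helper that reports what fraction of students reach an assumed threshold
# level. Descriptors loosely echo the Stanford example; threshold is assumed.

SCALE = ["Unacceptable", "Marginal", "Proficient", "Exemplary"]

rubric = {
    "Formulate and test hypotheses": {
        "Unacceptable": "Formulates non-testable hypotheses; designs poor experiments.",
        "Marginal": "Formulates non-testable hypotheses but shows some understanding.",
        "Proficient": "Formulates testable hypotheses that lack some clarity.",
        "Exemplary": "Formulates elegant, easy-to-test hypotheses.",
    },
    "Analyze and interpret data": {
        "Unacceptable": "Fails to identify or apply methodologies.",
        "Marginal": "Inconsistently identifies and applies methods.",
        "Proficient": "Usually applies appropriate methodology.",
        "Exemplary": "Consistently identifies and correctly applies methodology.",
    },
}

def fraction_at_or_above(scores: list[str], threshold: str = "Proficient") -> float:
    """Fraction of students scoring at or above the threshold scale level."""
    cutoff = SCALE.index(threshold)
    return sum(SCALE.index(s) >= cutoff for s in scores) / len(scores)

# Example: one dimension scored for a class of five students.
print(fraction_at_or_above(["Marginal", "Proficient", "Exemplary",
                            "Proficient", "Unacceptable"]))  # 0.6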