WFP - Enhancing Capacities in food security and response analysis (ENCAP) project
September 2009
Guide to Facilitators:
Designing Knowledge Assessment Tests (KATs)
Introduction
Knowledge Assessment Tests (KATs) are key components of any learning programme’s
evaluation system. In the case of the WFP Food Security Assessment Learning Programme,
KATs are intended to measure participants' understanding of the design and
implementation of food security assessments (FSAs).
The purpose of this Guidance Note is to assist facilitators in KAT design by outlining the
recommended approach and format for KAT development and implementation.
1. What are Knowledge Assessment Tests?
KATs are a series of interactive, theoretical and case-study based questions which serve three
primary purposes:
1) As a pre-training diagnostic tool, a KAT is used to assess participants' baseline food security
assessment knowledge prior to FSA workshops.
Submitted to participants as a pre-training test, it helps facilitators tailor workshops to
emphasize the modules where participants are weaker and to cover well-understood topics more quickly. This increases workshop efficiency.
2) As a post-training knowledge assessment tool measuring skills gained by participants, a
KAT provides immediate feedback to facilitators and participants on how well the knowledge
and skills presented are internalized and understood.
Participants are tested either straight after the learning modules, which helps identify
problems in understanding before moving on to more complex topics, or at the end of the entire
workshop to assess gains in knowledge and skills.
3) As a training evaluation tool (by comparing pre- and post-training test results), a KAT
measures the effectiveness of a training and identifies its strengths and weaknesses for the
benefit of similar future trainings.
Comparing participant scores from pre- and post-training KATs makes it possible to identify
progress per module, providing facilitators with guidance on which training approaches and
methods are more or less effective in achieving the learning objectives.
2. What are the key components of Knowledge Assessment Tests?
KATs should measure participants’ knowledge, comprehension and application of food security
assessment principles. Therefore, to measure the skills actually gained, KATs should ideally
combine theory-based questions with more applied, case-study based questions.
Theory-based questions test participants' knowledge and comprehension of workshop concepts
and techniques. Case-study based questions, on the other hand, measure participants' ability to
apply these concepts and techniques to real-life scenarios. To do so, facilitators must develop
concise, realistic case studies which challenge participants to “operationalize” the concepts
presented.
3. How to design effective Knowledge Assessment Tests?
Overall KAT design
Ideally, KATs should include both theory-based questions, which test knowledge and
comprehension, and case-study questions, which test internalization and the capacity to apply.
However, the development of case studies and related questions requires a heavier investment
from facilitators prior to the training workshop, and may thus not be feasible in all instances.
Should facilitators wish to include questions testing skills application in their final KATs, some
key principles to keep in mind when developing a case study are offered in Annex 1.
Case-study preparation (if included)
The first step in designing a KAT is to select the case-study against which participants'
capacity to apply the key skills they were taught will be tested. Case studies can be drawn from
existing reports and tailored to the needs of the exercise, or can be made up. However, to
simplify facilitators' tasks, the present tool proposes a series of ready-made case studies which
allow facilitators to test some of the core assessment skills taught in WFP.¹
Standard KAT template
A template is proposed below. When designing KATs for Food Security Analysis training
workshops, facilitators are encouraged to use this template to ensure a consistent approach
throughout training workshops.
The template asks facilitators to indicate not only the question but also the module and learning
objective for which the question has been designed, as well as the question type, the cognitive
domain the question refers to, the correct answer and an explanation regarding the answer(s).
This template is recommended as it is a useful tool for maintaining consistency between
learning objectives and KATs and helps facilitators organize their thoughts and materials more
concisely.
Standard Template for Knowledge Assessment Test

Session name and number: Fill in the module name and number
Learning objective: Fill in the learning objective the question is intended to address
Bloom taxonomy: Indicate which level of thought is required per the Bloom taxonomy classification system
Question type: Fill in the question type (multiple choice, fill in the blank, multiple response, etc.)
Question: State the question and provide the possible responses
Correct answer(s): Provide the answer
Feedback for correct/incorrect answer(s): Indicate why the incorrect answers are incorrect
¹ See Annexes: KAT case studies proposed for EFSA, Quantitative Analysis and Market Analysis.
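For facilitators who maintain their question banks electronically, each completed template can be stored as a simple record. The sketch below is a minimal illustration in Python; the field names simply mirror the template rows above, and the example content is hypothetical rather than drawn from an actual WFP question bank.

from dataclasses import dataclass

@dataclass
class KATQuestion:
    """One filled-in standard KAT template (hypothetical electronic form)."""
    session: str               # module name and number
    learning_objective: str    # learning objective the question addresses
    bloom_level: str           # level of thought per the Bloom taxonomy
    question_type: str         # e.g. "Multiple choice", "Fill in the blank"
    question: str              # question text and possible responses
    correct_answers: list      # correct answer(s)
    feedback: str              # why the incorrect answers are incorrect

# Illustrative entry only; module name, question and answers are invented
example = KATQuestion(
    session="Module 3: Sampling",
    learning_objective="Choose an appropriate sampling approach for an EFSA",
    bloom_level="Application",
    question_type="Multiple choice",
    question="No household list is available for the affected area. Which sampling method is most appropriate?",
    correct_answers=["Cluster sampling"],
    feedback="Simple random and systematic sampling both require a sampling frame, which is missing here.",
)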
4. How are Knowledge Assessment Tests implemented and evaluated?
Submitting the tests
Participants are asked to complete KATs both at the beginning of a workshop (for diagnostic
purposes) and after completion of learning modules (for knowledge-assessment or evaluative
purposes). All participants are required to complete KATs on an individual basis, to enable
facilitators to immediately identify participants likely to need additional attention and support.
Using the tests
The most appropriate way to evaluate responses to KATs depends on the purpose of the KATs.
• When KAT results are used for diagnostic purposes, they should first be reviewed qualitatively, to
determine the average baseline knowledge among participants and to look for collective gaps, so
that the workshop can be tailored to the audience.
Then, individual scoring of the pre-training KATs should be conducted to identify individuals on the
margins, i.e. those who appear most knowledgeable (and can thus be relied on to support other
participants) and those who appear least knowledgeable (and may thus need additional
support).
• When KATs are used as a post-training knowledge assessment and/or training evaluation tool,
individual scoring of the post-training KATs should be conducted and compared to the pre-test
results, to measure the knowledge and skills gained and to identify the modules that produced
more or less successful results.
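As an illustration of this pre/post comparison, the short sketch below computes the average gain per module from individual scores; the module names and scores are hypothetical.

# Hypothetical pre- and post-training percentage scores, per module
pre_scores = {"Sampling": [40, 55, 35, 60], "Market analysis": [60, 70, 65, 55]}
post_scores = {"Sampling": [80, 85, 70, 90], "Market analysis": [70, 75, 70, 60]}

def mean(values):
    return sum(values) / len(values)

# A large average gain suggests the module and its training methods worked well;
# a small gain on a low baseline may flag a module that needs a different approach.
for module in pre_scores:
    gain = mean(post_scores[module]) - mean(pre_scores[module])
    print(f"{module}: +{gain:.1f} percentage points on average")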
Standard scoring system
In terms of scoring KATs, it is recommended that participants' correct answers be expressed as a
percentage of the total number of points possible. To capture the difference in difficulty
between theory-based and applied questions (when both are included in the test), it is recommended that
facilitators use a simple weighting system. Under this system, more straightforward theory-based
questions would be assigned a weight of 1 and more difficult case-study based questions a
higher weight of 2.
Table 1 below illustrates how this weighting system would work for a hypothetical sub-module
with 8 learning objectives, 16 questions (2 per learning objective, one theory-based and one
case-study based) and the 1:2 weighting between theory-based and case-study based questions.
Table 1. Weighting system for KAT grading scheme

Type of question      Question weight    Number of questions    Total points possible
Theory-based                 1                    8                       8
Case-study based             2                    8                      16
All questions                -                   16                      24
In this case, if a participant were to answer all 16 questions correctly, that participant would
receive 24 of the possible 24 points, scoring 100%. If the participant were to answer one theory-based
question incorrectly, that participant would receive 23 of the possible 24 points, scoring
96%. If the participant were to answer one case-study based question incorrectly, that participant
would receive 22 of the possible 24 points, scoring 92%.
The formula below can be used to calculate an overall KAT score per participant, using the
weighting system above:

KAT score (%) = [(number of correct theory-based questions × 1) + (number of correct case-study based questions × 2)]
÷ [(total number of theory-based questions × 1) + (total number of case-study based questions × 2)] × 100
This guidance note recommends that percentage scores be communicated in real time to
participants so they can assess their own progress.
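For facilitators who prefer to automate the calculation, the sketch below applies the weighting formula above; the figures reproduce the worked example of one incorrect case-study answer (22 of 24 points, roughly 92%).

def kat_score(correct_theory, total_theory, correct_case, total_case,
              theory_weight=1, case_weight=2):
    """Weighted KAT score as a percentage, following the formula above."""
    points_earned = correct_theory * theory_weight + correct_case * case_weight
    points_possible = total_theory * theory_weight + total_case * case_weight
    return 100 * points_earned / points_possible

# 8 theory-based and 8 case-study questions, one case-study answer incorrect
score = kat_score(correct_theory=8, total_theory=8, correct_case=7, total_case=8)
print(f"{score:.0f}%")  # prints "92%" (22 of 24 points)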
Deciding on score thresholds
Facilitators are also recommended to devise aggregate scoring thresholds which, when
breached, trigger a set of remedial actions. While facilitators are encouraged to derive
thresholds and remedial actions that they feel are appropriate, this guidance note proposes the
following (Table 2):
Table 2. Aggregate scoring thresholds for KATs

• ≤ 25% of participants scored below 75%: concepts sufficiently understood to progress to the next sub-module.
• > 25% of participants scored below 75%: conduct a detailed review of key concepts and provide additional exercises to participants for further practice.
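A minimal sketch of how this threshold could be applied to a list of participant scores is shown below; the 75% mark and the 25% trigger are those proposed in Table 2, and the scores are hypothetical.

def recommended_action(scores, pass_mark=75, trigger_share=0.25):
    """Apply the Table 2 rule to a list of percentage scores for one sub-module."""
    share_below = sum(score < pass_mark for score in scores) / len(scores)
    if share_below <= trigger_share:
        return "Progress to the next sub-module"
    return "Review key concepts and provide additional exercises"

print(recommended_action([92, 68, 81, 74, 88]))  # 2 of 5 (40%) below 75% -> review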
To be prepared, facilitators should assume that a certain percentage of participants will struggle
with key concepts in workshops. Thus, it is recommended that facilitators compile additional
review materials, exercises and even KATs in advance of workshops in order to meet
participants’ needs.
Facilitator’s guide: Devising Knowledge Assessment Tests
ANNEXES
Annex 1: Principles for the design of tailored KATs
Case-study Design
The first step in designing effective KATs is to develop a case-study which describes either a
fictional or an actual food security related emergency. The nature of the emergency itself
(whether it is the result of political instability, natural disasters, etc.) is at the discretion of the
facilitator, but the crisis must have a clear impact on food availability and access. Information
provided to the participants may include:
1) A short description of the affected country/region (Natural, Physical, Social and Human capital), including baseline (pre-emergency) health and livelihood status
2) The scale of the emergency and the amount of food produced, the crop shortfalls expected as a result of the crisis, and any impending impacts on food prices and household purchasing power
3) Ongoing food security interventions (and how they are affected by the crisis)
4) Reference to any obstacles to the implementation of a food security assessment or intervention
5) Any other information specific to a given module (e.g. population numbers or sample household listings for the sampling module)
Case-studies should, whenever possible, be developed keeping in mind the field context which
participants will face when implementing FSAs.
Case-studies can be developed for each learning module of an FSA workshop, or one case-study
can be developed for all learning modules. Individual, module-specific case-studies
should be short and highly tailored to the concepts (and learning objectives) presented in
the pertinent module. More detailed case-studies intended to cover all modules must address
the learning objectives in each workshop module.
Overall KAT design
KATs should be developed according to the following guidelines:
1) Questions should be clear and concise and provide enough information to be unambiguous.
2) Questions should directly relate to module learning objectives and exercises. A suggested tool for ensuring coherence between KATs, learning objectives and exercises is the Food and Agriculture Organization's (FAO) Performance and Content Matrix (see Box 1 below).
3) KATs should require the participant to engage in various levels of cognitive thought as defined by the Bloom Taxonomy (see Box 2 below).
4) KATs can be offered in one of the following forms:
• True or False
• Multiple Choice
• Multiple Response (where there is more than one right answer)
• Matching
• Ordering
• Fill in the Blanks
As a general guideline, three to four questions are recommended per learning objective.
Box 1: The Food and Agriculture Organization’s (FAO) Performance and Content
Matrix
The FAO’s Performance and Content Matrix is a tool developed to help ensure consistency between
learning objectives, exercises and KATs. The matrix is shown below:
Performance levels (rows) are crossed with content types (columns); the cell entries describe what learners should be able to do:

• Application – Concepts: establish whether or not the object is part of the class; Procedures/Processes: reform the procedure/apply the process to a new situation; Principles/Rules: solve the problem by applying the principles
• Understanding – Concepts: reformulate the definition; Procedures/Processes: reformulate the procedure/process; Principles/Rules: reformulate the principles
• Memorization – Facts: repeat the facts; Concepts: list the characteristics of the concept; Principles/Rules: list the principles

Source: World Food Programme, Food security assessment facilitators' tool kit: Learning event design and planning guide, first edition.
As its foundation, this matrix suggests that learning objectives should encourage participants to apply,
understand and memorize workshop contents which have been sub-classified as Facts, Concepts,
Procedures/ Processes, and Principles/ Rules. For each type of performance required (application,
understanding and memorization), the matrix then defines the intellectual processes that exercises and
KATs should require for each of the content types. By organizing learning objectives, participant
performance and workshop content in such a manner, this matrix facilitates workshop planning and
development and is particularly useful for guiding what types of KATs are appropriate for each learning
objective and content type.
Box 2: Bloom Taxonomy Cognitive Domain
The Bloom Taxonomy is a hierarchical cognitive model which classifies intellectual activity
into six layers of complexity. The Bloom taxonomy model is used in FSA workshops to identify the
depth of learning required, to define and classify learning objectives, and to develop appropriate
exercises and KATs. The six layers of thought identified by the Bloom Taxonomy model are
listed below, with examples of exercises and KATs, and key words, for each category.
Knowledge: Recall of information.
Examples: Recite a policy. Quote prices from memory. Know safety rules.
Key words: define, describe, identify, know, label, list, match, name, outline, recall, recognize, reproduce, select, state.

Comprehension: Understanding and interpretation of instructions and problems; rephrasing of instructions and problems.
Examples: Rewrite the principles of test design. Identify the steps in a complex task. Translate an equation onto a computer spreadsheet.
Key words: comprehend, convert, defend, distinguish, estimate, explain, extend, generalize, give, infer, interpret, paraphrase, predict, rewrite, summarize, translate.

Application: Use of concepts in a new situation; application of classroom learning to work situations.
Examples: Use the manual to calculate an employee's vacation time. Apply principles of statistics to evaluate a written test.
Key words: apply, change, compute, construct, demonstrate, discover, manipulate, modify, operate, predict, prepare, produce, relate, show, solve, use.

Analysis: Separation of material or concepts into components, to examine their organizational structure; distinction between facts and inferences.
Examples: Use logical deduction to troubleshoot a piece of equipment. Recognize logical fallacies. Identify a department's training needs.
Key words: analyse, break down, compare, contrast, deconstruct, differentiate, discriminate, distinguish, identify, illustrate, infer, outline, relate, represent, select, separate.

Synthesis: Creation of structures or patterns from diverse elements; assembly of parts to form a whole, particularly a new meaning or structure; use of facts or experience to present arguments.
Examples: Write a manual for an operation or process. Design a machine for a specific task. Integrate learning from several sources to solve a problem. Revise a process to improve its outcome.
Key words: categorize, combine, compile, create, devise, design, explain, generate, modify, organize, plan, rearrange, reconstruct, relate, reorganize, revise, rewrite, summarize, tell, write.

Evaluation: Judgment of the value of ideas or materials.
Examples: Select the most effective solution. Select the most qualified candidate for a job. Explain and justify a new budget.
Key words: appraise, compare, conclude, contrast, criticize, critique, defend, describe, discriminate, evaluate, explain, interpret, justify, relate, summarize, support.
Source: World Food Programme, Food security assessment facilitators' tool kit: Learning event design and planning guide, first edition.