Reading Notes, Chapters 7-12

Ferdon, EDTECH503 (SU09)
Chapter 7: Analyzing the Learners
Analysis of Learners
 Understanding the target audience is essential for instruction to be used and usable. Learner-centered environments - careful attention to learner skills, knowledge, beliefs and attitudes.
 Diagnostic teaching - Thorough understanding of learner’s conceptual/cultural knowledge.
 Some aspects easily quantifiable; others are harder to define. Human needs: determine
content, delivery, how receptive. Captive audience or willing volunteer?
 Determine learner motivation (intrinsic/extrinsic); helps make instruction more appealing.
Popular Approaches to Analyzing Learners
 Mager – Working document, no need to format/organize, describe both differences and
similarities. Analyze: age, gender, physical characteristics, education, hobbies, interests,
reading ability, organizational membership, prerequisite skills, reason taking course, attitude
about course, biases and beliefs, need-gratifiers, terms and topics to avoid.
 Heinich, Molenda, Russell, and Smaldino – 1) General characteristics - help make informed
decisions about content and delivery, 2) Entry competencies - prior knowledge necessary for
success, and 3) Learning styles - attractiveness & effectiveness of instruction.
 Dick, Carey and Carey – entry behaviors, prior knowledge, attitudes toward content/delivery,
academic motivation, educational and ability levels, learning preferences, attitude towards
group providing instruction, group characteristics. Tremendous impact on instruction.
 Smith and Ragan – Learner characteristics: 1) stable similarities – same/unchanging, 2)
stable differences – different/unchanging (physical, personality traits), 3) changing
similarities – human development, processes same for all people, 4) changing differences –
change with time, i.e. knowledge, values, skills, beliefs, motivation.
 Morrison, Ross and Kemp – Personal and social characteristics: age/maturity level,
motivation and attitude, expectations and aspirations, work experience, talents, dexterity,
working conditions (noise, weather, etc.).
Learner Analysis Procedure
 Universal design of education – Accommodate all, embed in instruction, diverse needs.
 Typically two types of documents: 1) Chart of learner characteristics data (challenged,
average, gifted), handy reference but clinical. 2) Fictitious profile of typical learner – info
less accessible but more human perspective. Choose one, or do both then compare.
Evaluation of the Success of a Learner Analysis
 Summative – Compare learner analysis with what learners say about instruction.
 Formative – Member check; compare with someone who designed instruction for a similar
population, and/or check with a member of the target audience.
Synthesis:
A thorough understanding of learner characteristics and needs allows the designer to create
instruction that will be effective, efficient, and appealing. When it comes right down to it, there
is no one “best” way to conduct a learner analysis and, in some regards, it’s just your best guess.
The instructional designer must determine potential problems with various approaches to learner
analysis and select the one that will work best in that particular situation.
Chapter 8: Developing Instructional Goals and Objectives
Instructional Goals and Objectives
 Goal - Broad or abstract; not specific or observable features; intent of instruction. Can be
topic for subordinate instructional objectives.
 Instructional objective – Specific; how instruction will affect learner. Focus on what learner
will do (instructional systems design). Type depends on goal (performance, conceptual).
Popular Approaches to Setting Goals and Objectives
 Mager – most common approach, performance objectives: action, conditions, criterion.
 Dick, Carey and Carey – determine goals/objectives either by SME or a performance
technology approach (data gathered during needs analysis). Conduct a subordinate skills
analysis to determine performance objectives.
 Heinich et al. – “ABCD” approach: audience (describe learners), behavior (expectations after
instruction), conditions (setting and circumstances), degree (acceptable standard). Objectives
in four domains: cognitive, affective and psychomotor domains (more traditional) and
interpersonal (people centered).
 Morrison, Ross and Kemp – Terminal objective: major objective, overall outcome. Enabling
objectives – observable behaviors, indicate achievement of terminal objective.
Goal Setting/Translation of Goals into Objectives
 Goal setting is critically important step. Fairly easy with no past practice. Goals for ID
project often influenced by tradition, politics, decision maker’s perspective.
 Functional analysis system technique (FAST) – One way of working backwards from
existing interventions to arrive at overarching goal. Generate increasingly abstract verb-noun
pairs, (brush teeth, maintain hygiene, maintain health); helps determine why various
activities are important.
 Clear objectives, which are observable or measurable, make it easier for the design team.
 Wording of levels (know, comprehend, apply, analyze, synthesize, evaluate) in Bloom’s
taxonomy is a good place to start when developing instructional objectives.
 Gagne’s hierarchy of intellectual skills also works well as foundation for writing objectives.
Five types of learning outcomes are divided into three categories: declarative knowledge
(verbal information), procedural knowledge (motor skills, intellectual skill, cognitive
strategy), and affective knowledge (attitudes).
Evaluation of the Success of Instructional Goal Setting and Objective Specification
 Take time to carefully consider how well learner and task analysis has driven creation of
goals and objectives. Continue to compare goals and objectives against each other to make
sure they support one another, are realistic, and are clearly articulated.
Synthesis:
Learner and task analysis should drive creation of goals and objectives. Well-stated goals
(outcome of instruction) and objectives (outcome of instructional activity) provide focus and
drive instruction. Goals are always necessary, but objectives may or may not be, depending on
how learners will be evaluated. Levels by Bloom and Gagne provide common language for
writing measurable, observable objectives.
Chapter 9: Organizing Instruction
Scope and Sequence of Instruction
 Scope and sequence - Scope = amount of information; places restriction on topic. Sequence
= order of presentation; isolates knowledge, relates to whole.
 K-12: Curriculum – entire scope of what will be learned, measured in years. Units – large
sets of activities, measured in weeks/months. Lesson plans – specific day-to-day activities.
 University: Program of study – courses leading to degree. Syllabus – scope and sequence for
single course. “Sit down” class or distance learning. Business: competencies or certificates.
Events of Instruction/Continuum of Learning Experience
 Posner – Macro: education levels (e.g., elementary, secondary). Micro: relationship among
parts of lesson; events of instruction – activities within lesson (intro, body, concl., assess).
 Robert Gagne nine events of instruction: gain attention, inform of objective, recall prior
learning, present stimulus, provide guidance, learner performance, teacher feedback, assess
performance, retention and transfer. Create at least one instructional activity for each event.
 Smith and Ragan – Two aspects: 1) supplantive (supplied by instructor) 2) generative
(generated by student). Instructor presents activities/info, learner must actively engage.
 Continuum – One end concrete real-world activities; other end contrived abstract activities.
 Dale’s cone of experience categorizes learning continuum. Base – direct/real-world, middle
– video, pics and photos, top – visual/verbal symbols.
 Bruner – Learning experiences are 1) enactive – things we know but don’t have
words/images for; synthesize/apply; 2) Iconic – explanation via symbols or representations
(middle of Dale); 3) Symbolic – sounds/signs not directly associated with event (top of Dale).
Instructional Delivery
 Classroom teaching – Traditional; group students by experience or ability; roots in writings
of English educator Joseph Lancaster (1778-1838). Teacher-directed, teacher feedback.
 Programmed instruction – Students work independently, feedback from instructional media.
Instruction is the focus, not the delivery method (technology, print, etc.).
 Distance education – synchronous or asynchronous, may be programmed instruction. Often
uses a learning management system (LMS), computer-based home for the class/learners.
 Immediate feedback – 1) Student and instructor; revise based on observation/communication,
and 2) Student only – programmed instruction; less personalized; large groups of learners.
Instruction Activities in Noneducational Situations/Effective Instruction
 Provide supports and job aids (e.g., instructions on a fax machine), not learning or training.
 Effective instruction will include: 1) Content for extra support and challenge, 2) Learning
styles/types, 3) Appropriate activities for instructional events, 4) Job aids - focus on
concepts, and 5) Delivery method to suit individual’s or organization’s needs.
Synthesis:
Needs, task and learner analysis will determine goals and objectives, which are the basis for
determining scope and sequence of learning activities. Choice of delivery method is important
decision and affects instructional events (variety, concrete to abstract) as well as job aids.
Consideration of content, learning style and needs of individual and organization are key.
Ch. 10: Creating Learning Environments, Producing Instructional Activities
Development of Instruction
 Recommend specific activities, prescriptions, based on needs, task and learner analysis as
well as goals and objectives. This is the only part of the process that causes learning to occur.
Learning Environment
 Learning environment – where instruction occurs. Centered around learner, knowledge,
assessment (feedback/revision), community (learn from each other); not mutually exclusive.
 Hannafin, Land and Oliver – two learning environments: 1) Directed: specific objectives,
structured activities, develops same/similar knowledge, skills, attitudes; passive; 2) Open-ended:
problem-based, no specific objectives, divergent thinking, constructivist.
 Cruickshank et al – Direct teaching: instructor at center; directed learning environment.
Indirect teaching: instructor on periphery, support and guide; common in open-ended LEs.
 PBL – open-ended, refine understanding in meaningful way. Hannafin et al say include
enabling contexts (project perspective), resources, tools (engage, manipulate), scaffolding.
 Simulations – evolving case studies, long history, play out situation without risk (astronauts).
 Instructional games are a form of simulation. Practice/refine skills/knowledge, identify gaps,
review/summarize, new concepts. Activates learner, greater interest, relaxed atmosphere.
Research and Support for Instructional Practices
 Joyce – Meta-analysis of effective instruction, four models/systems of instruction: personal,
information processing, behavioral, social.
 Ellis and Fouts – Looked at research on three levels: I – basic, pure research/lab. II –
comparative, determine effectiveness of instruction/school setting. III – evaluative, large-scale
implementation. 12 educational innovations, mixed research support, but clear winners.
Activities Based on Proven Effective Strategies
 Marzano, Pickering, and Pollock – Classroom Instruction That Works: Meta-analysis
resulted in nine effective, research-based learning strategies. 1) Similarities/differences –
identify, classify, graphic organizers, scattergrams, analogy. 2) Summarizing & note taking
– 10&2, pair share, compare/revise, reflective writing. 3) Recognize/reward effort. 4)
Homework & practice – clear purpose, feedback. 5) Non-linguistic representations – info
storage linguistic & imagery. Illustrations, animations, graphic organizers, sound,
kinesthetic. 6) Cooperative learning – Johnson and Johnson’s five elements, well-structured,
clear goals. 7) Set objectives, provide feedback. 8) Generating/testing hypotheses – explain
thinking. 9) Questions, Cues and Advance Organizers – retrieve knowledge, analyze.
 Advance org.: skim content, tell story, agenda/outline, graphic organizer, writing prompt.
 Just-in-time instruction – direct instruction based on immediate needs, is adaptive, instructor
as facilitator providing instruction as needed.
Synthesis:
This is the heart of the instructional designer’s job. Effective instruction should be research-based
and include a variety of learning environments and activities to meet the needs of the target
audience. The designer must think beyond his or her own experiences and preferred learning style
to meet the needs of a diverse body of learners.
Chapter 11: Evaluating Learning Achievement
Evaluation, Assessment, & Measurement/Purpose of Evaluation/Goal of Learner Eval.
 Three major types of evaluations: learner, formative, summative.
 Assessment – procedures/techniques for gathering data. Measurement – data collected.
Instruments – devices used to collect data. Evaluation – process for determining success.
 Evaluations provide data analyzed to determine overall success of the instructional design
and determine any changes to be made. Learner evaluation – Based directly on goals and
objectives, measures impact of instruction on achievement.
Development of Learner Evaluations
 Validity – valid if it helps determine if learners met objectives. Face validity – experts judge
design of evaluation. Content validity – SMEs look at test items to judge appropriateness.
 Reliability – Evaluation is reliable if it will provide similar results whenever administered.
 Criterion referenced – learner judged on competency related to specific criteria; provides
useful information. Norm referenced – learners compared; info does not improve instruction.
 Determining if change is knowledge, skill or attitude (action/verb in objective) drives test
development. Millman and Green and Hopkins suggest questions to guide test development.
 Objective Tests - True-false (facts, minimal use/guessing), multiple-choice (considered most
useful), matching (appropriate for homogeneous lists), short-answer (no guessing).
 Constructed-Response Tests – Essay (higher levels of cognition, but unfair evaluation for
ELL and those with poor writing skills).
 Performance assessment – evaluate skill change. Direct testing (evaluate outcome),
performance ratings (actions not product; checklist, rating scale, rubric), observations and
anecdotal records (time consuming, subjectivity issue), portfolios (products/progress).
 Change in attitude is the most difficult to evaluate; highly subjective. Observation/anecdotal
records (same problems), surveys/questionnaires (more feasible, open-ended, fixed response,
rating scales), self-reporting inventories (yes/no statements), interviews (widely used, wide
range of topics, clarify/expand). Problems: social desirability, self-deception, semantics.
Implementation of Learner Evaluations
 Pre-assessment – enables adjustment of instruction for greater effectiveness. During
instruction - useful if instruction over multiple sessions – quick write, random call.
Summative - overall learner success.
Determination of Success of Learner Evaluations/Instructional Designer’s Role
 Successful evaluations provide enough data to make decisions about possible remediation or
revision of instructional materials. Instructional designer responsible for plan of action and
execution: 1) analyze goals/objectives, determine outcomes, 2) type of evaluations needed, 3)
choice of assessment techniques, 4) assist with data analysis and steps to take as a result.
Synthesis:
Formative and summative evaluation of the design’s success is necessary to ensure that
instruction meets both learner and client needs. Valid and reliable learner evaluation is necessary
to gauge effectiveness of instructional events. Choice of assessment technique must be based
upon the expected learner outcome. Use of evaluation data to improve learning is key.
Ch. 12: Determining Success of the Instructional Design Product and Process
Formative Evaluation
 Formative evaluation – used throughout ID process, gather data for feedback. Improves
instruction and shows the client design progress. Three approaches to formative evaluation:
 Morrison, Ross and Kemp – based on Gooler: 1) Planning – purpose, audience, issues
(questions to be answered), resources, evidence. 2) Conducting - data-gathering techniques
and analysis. 3) Reporting - includes conclusions/recommendations.
 Dick, Carey and Carey – Phase 1/one-on-one: address omissions, learner performance and
reactions. Phase 2/small group: apply changes, representative sample. Phase 3/field trial:
apply changes, larger audience, administered by instructor. SME addresses accuracy.
 Rapid Prototyping (prototypes evaluated by experts/users) and usability testing (observe
product in action, note problems inherent in design) are also forms of formative evaluation.
Summative Evaluation
 Results in a formal report that gauges success of instructional intervention. Goal – determine
if instruction resulted in desired change and if client’s goals were met.
 Program evaluators – experts who conduct summative evaluations; complicated process.
 Kirkpatrick (most cited): 1) Reactions - attitude survey, 2) Learning - pre-/post-tests, 3)
Transfer - difficult to assess; can be any point, 4) Results - measurable: increased sales,
fewer accidents, etc. *Begin with reactions, continue as time/$ permit; best to use all four.
 Smith and Ragan – 1) Determine goals of evaluation, 2) Indicators of success, 3) Orientation
of evaluation – objective (quantitative, reliable) or subjective (description, can be biased), 4)
Design evaluation (data collected how, when, conditions), 5) Design evaluation measures
(learning transfer, outcomes, attitudes, cost), 6) Collect data, 7) Analyze, 8) Report results.
 Morrison, Ross and Kemp – Three major areas: program effectiveness (major issue),
efficiency and costs. Seven steps similar to Smith and Ragan. 1) Specify objectives, 2)
Evaluation designed for each objective, 3) Data collection instruments/procedures, 4) Carry
out evaluation, 5) Analyze results, 6) Interpret results, 7) Disseminate results/conclusions.
 Dick, Carey and Carey – Very different; two phases: expert judgment (potential to meet
needs) and field trial (document effectiveness with target group).
Group Processing: Evaluating the Instructional Design Team
 Team member roles – SME, programmer, graphic artist, content editor, Web-developer,
media developer. Instructional designer may also be project manager. Group processing –
formative evaluation, group members’ reactions on their work and interactions.
 Johnson, Johnson and Holubec – Group processing helps 1) improve approach to task, 2)
increase accountability, 3) members learn from each other, 4) eliminate problems.
 Should include: feedback, reflection, goal improvement, and celebration.
Synthesis:
Regardless of the scope of the project, formative evaluation should be built into the process.
Instruction will be most effective when improvements are made throughout the design process.
Summative evaluation is more complex and requires skill and experience generally beyond the
scope of instructional designers (also, conflict of interest). Effective client communication
ensures that the project and evaluation will meet instructional goals and client needs.