
ASSESSMENT IN LEARNING 1 & 2

Assessment in Learning 1
&
Assessment in Learning 2
Prof. Ed 7 & Prof. Ed 8
Prepared by:
Ms. Mellite D. Periabras
ASSESSMENT
IN
LEARNING 1
Professional Education 7
TABLE OF CONTENTS
01 Nature and Role of Assessment
02 Principles of High Quality Assessment
03 Development of Tools for Classroom-based Assessment
04 Interpretation of Assessment Results
05 Large-Scale Student Assessment
01 – NATURE OF ASSESSMENT
Chapter 1 – Concepts and Relevance of Assessment
Teaching and learning involve many instructional decisions intended to enhance and
increase student learning; hence, the quality of instruction is strongly connected to the
quality of information on which these instructional decisions are made.
Linn (2003) stated that student learning requires the use of a number of
techniques for measuring achievement. For effective teaching to take place in
the classroom, teachers must use different techniques of assessment that correlate with
the goals they have set for their students.
Measurement is the process of determining the quantity of a learner's achievement by
means of appropriate measuring instruments. It is a systematic procedure for
determining the quantity or extent of all the measurable dimensions of the educative
process.
Simply, it is the quantification of what students have learned through the use of tests,
questionnaires, rating scales, checklists, and other devices.
Assessment refers to the full range of information gathered and synthesized by
teachers about their students and their classroom. It may also be defined as the systematic
collection, review, and use of information about educational programs, undertaken for the
purpose of improving student learning and development.
Evaluation is the process of determining the quality or worth of achievement in terms of
certain standards. It is a systematic procedure for determining the quality of the results of
measurement, with the end in view of improving and maximizing the acquisition of
desirable educational outcomes.
Simply, it is the process of making value judgments or decisions about the worth of
students' performance.
Types of Test
➢ Objective (Testing)
➢ Subjective (Perceptions)
SEVERAL TYPOLOGIES OF TESTS
1. According to Mode of Response
❖ Oral Test (viva voce) – answers are spoken; it measures oral communication skills.
❖ Written Test – students either select or provide a response to a prompt.
❖ Performance Test – students demonstrate their skills or ability to perform specific actions
(problem-based learning, inquiry tasks, demonstration tasks, exhibits, presentation tasks
and capstone performances).
2. According to Ease of Quantification of Response
❖ Objective Test – can be corrected and quantified quite easily. Scores can be easily
compared.
❖ Subjective Test – elicits varied responses. May have more than one answer.
3. According to Mode of Administration
❖ Individual Test – given to one person at a time.
❖ Group Test – administered to a class of students or group of examinees simultaneously.
4. According to Test Constructor
❖ Standardized Test – prepared by specialists who are well versed in the principles of
assessment.
❖ Non-standardized Test – prepared by teachers who may not be adept at the principles of
test construction.
5. According to Mode of Interpreting Results
❖ Tests that yield norm-referenced interpretations – evaluative instruments that measure a
student's performance in relation to the performance of a group on the same test.
❖ Tests that allow criterion-referenced interpretations – measure each student's performance
against an agreed-upon or pre-established criterion or level of performance.
6. According to Nature of Answer
❖ Personality Test – measures one's personality and behavioural style.
❖ Achievement Test – measures students' learning as a result of instruction and training
experiences.
❖ Aptitude Test – determines a student's potential to learn and do new tasks.
❖ Intelligence Test – measures a learner's innate intelligence or mental ability.
❖ Sociometric Test – measures interpersonal relationships in a social group.
❖ Trade or Vocational Test – assesses an individual's knowledge, skills and competence in
a particular occupation.
FUNCTIONS OF TESTING
A. Instructional Functions
1. Tests facilitate the clarification of meaningful learning objectives.
2. Tests provide a means of feedback to the instructor and the student.
3. Tests can motivate learning.
4. Tests can facilitate learning.
5. Tests are a useful means of overlearning.
B. Administrative Functions
1. Tests provide a mechanism of quality control.
2. Tests facilitate better classification and placement decisions.
3. Tests can increase the quality of selection decisions.
4. Tests can be a useful means of accreditation, mastery, or certification.
C. Research and Evaluation Functions
1. Tests are useful for program evaluation and research.
D. Guidance Functions
1. Tests can be of value in diagnosing an individual’s special aptitudes and abilities.
Why do we assess?
1. Diagnose student’s strengths and weaknesses or differences among
students
2. Evaluate student’s achievement and progress and provide feedback
3. As a vehicle to empower students to monitor and evaluate their own
progress
4. Determine teacher’s instructional effectiveness
5. Guide decision-making for designing interventions
6. Provide information to parents and administrators
PURPOSE OF ASSESSMENTS
1. Assessment for learning (AfL) is generally formative in nature and is used by teachers to
consider approaches to teaching and next steps for individual learners and the class. It could be done
before, during and after instruction.
➢ to determine the level of skills prior to instruction
➢ to diagnose learning difficulties or advanced knowledge
➢ to make necessary changes in teaching strategies
➢ to identify and correct learning errors
2. Assessment as learning (AaL) – when students reflect on the results of assessments and use the
results to chart their own progress and plan the next steps to improve performance; it builds
metacognition as it involves students in setting and monitoring their own learning goals (self-assessment)
3. Assessment of learning (AoL)
➢ assessment that is accompanied by a number, letter grade, or description (summative)
➢ compares one student’s achievement with standards
➢ results can be communicated to the student and parents
➢ occurs at the end of the learning unit
Purposes of Educational Measurement, Assessment and Evaluation (Kellough,
1993)
1. Improvement of Student Learning
2. Identification of Students’ Strengths and Weaknesses
3. Assessment of the Effectiveness of a Particular Teaching Strategy
4. Approval of the Effectiveness of the Curriculum
5. Improvement of Teaching Effectiveness
6. Communication with and Involvement of Parents in the Children's Learning
RELEVANCE OF ASSESSMENTS
Students – Through varied learner-centered and constructive assessment tasks, students
become actively engaged in the learning process.
Teachers – assessment results provide direction as to how teachers can help students more and what
teachers should do next.
Parents – use the information to design home-based activities to supplement their children's
learning.
Administrators and Program Staff – use the results to identify strengths and weaknesses of the program,
designate program priorities, assess options, and lay down plans for improvement.
Policymakers – assessment provides information about students' achievement, which in turn reflects the
quality of education being provided by the school.
Chapter 2 – Roles of Assessment
1. Placement Assessment
- used to determine a learner’s entry performance
- done at the beginning of instruction (pre-assessment)
2. Formative Assessment
- occurs during instruction
- used as feedback to enhance teaching and improve the process of
learning.
3. Diagnostic Assessment
- to identify learning difficulties during instruction
- can detect commonly held misconceptions in a subject
4. Summative Assessment
- done at the end of instruction to determine the extent to which
students have attained the learning outcomes.
Which of the following shows the relevance
of assessment to administrators?
a. Give feedback to students about their progress
b. Plan and conduct faculty development programs
c. Discover learning areas that require special attention
d. Diagnose and identify students' learning needs
Ans. B
Mr. Castro uses evidence of student learning to
make judgments on student achievement
against goals and standards. He does this at the
end of a unit or period. Which purpose does
assessment serve?
a. Assessment as learning
b. Assessment for learning
c. Assessment of learning
d. Assessment tool
Ans. C
02 – PRINCIPLES OF HIGH QUALITY ASSESSMENT
Chapter 3 – Appropriateness and Alignment of Assessment
Methods to Learning Outcomes
The Taxonomy of Educational Objectives
Taxonomy is a classification of materials arranged hierarchically; in education, it is a classification
system of the learning hierarchy. Educational objectives are classified into three domains: the
cognitive, affective, and psychomotor domains.
1. Cognitive Domain. This refers to objectives which emphasize recall or recognition of
knowledge and the development of intellectual abilities and skills.
2. Affective Domain. This domain refers to objectives which describe changes in interests,
attitudes, and values, and the development of appreciation and adequate adjustment.
3. Psychomotor Domain. This refers to objectives which emphasize some muscular or motor skill,
some manipulation of materials and objects, or some act which requires neuromuscular
coordination.
HIERARCHY OF THE COGNITIVE DOMAIN (Benjamin Bloom)
1. Remembering (Knowledge). Recalling and remembering previously learned material, including
specific facts, events, persons, dates, methods, procedures, concepts, principles and theories.
2. Understanding (Comprehension). Understanding and grasping the meaning of something,
including translation from one symbolic form to another, interpretation, explanation, prediction,
inference, restating, estimation and other uses that demonstrate understanding.
3. Applying (Application). This refers to the ability to use a learned rule, method, procedure,
principle, theory, law or formula to solve a new situation: using abstract ideas, rules, or
generalized methods in novel and concrete situations.
4. Analyzing (Analysis). This level refers to the ability to break down material into its component
parts to identify the relationships among them. This may include (1) identification of parts; (2) analysis of the
relationships between parts; and (3) recognition of the principles involved. This level is higher
than comprehension because it requires an understanding of both the content and the structural
form of the material.
5. Evaluating (Evaluation). This is concerned with the ability to judge the value of material for a
given purpose: judging the quality, worth or value of something according to established
criteria.
6. Creating (Synthesis). This refers to the ability to put parts together to form a new whole. This
level stresses creative behaviours, with emphasis on the formulation of new structures. It
concerns arranging and combining elements and parts into novel patterns and structures.
HIERARCHY OF AFFECTIVE DOMAIN (David Krathwohl)
Receiving. This refers to the student’s willingness to give attention to the materials being
presented.
Responding. This refers to the active participation on the part of the students. Students show
willingness to respond and find initial level of satisfaction.
Valuing. This level is concerned with the worth, value or importance a student attaches to a
particular object, situation or action. Something is perceived as holding a positive value, and a
commitment is made.
Organization. This is concerned with bringing together different values, resolving conflicts
between them, and organizing them into a value system. The student brings together a complex set of
values and organizes them in an ordered relationship that is harmonious and internally
consistent.
Characterization. At this level, the student has a value system that has controlled his
behaviour for a sufficiently long time. The organized value system becomes the person's life
outlook and the basis for a philosophy of life.
HIERARCHY OF PSYCHOMOTOR DOMAIN (Elizabeth J. Simpson)
1. Perception. This is concerned with the use of the sense organs to obtain cues that guide
motor activity. It ranges from awareness of a stimulus, to selection of cues, to translating cues into
action in a performance.
2. Set. This refers to readiness to act. It includes mental, physical, and emotional readiness to
act. Perception is an important prerequisite.
3. Guided Response. This is the early stage in learning a complex skill. It is concerned with
imitating the act of the teacher as a model, trying out different approaches, and choosing
the most appropriate one. It includes imitation and trial and error.
4. Mechanism. This is concerned with performance acts that have become automatic and can
be performed with some proficiency and confidence. This is also concerned with habitual
responses that can be performed with some confidence and proficiency.
5. Complex Overt Response. This is the skillful performance of motor acts that involve complex
movement patterns. Performance is quick, smooth, accurate and automatic, requiring a minimum
of effort.
6. Adaptation. This is concerned with well-developed skills. In this level, the individual can
modify movement patterns to fit special requirements or a problem situation.
7. Origination. Creating new movement patterns to fit a particular situation or specific problem.
Learning outcomes emphasize creativity based upon highly developed skills.
TYPES OF ASSESSMENT METHODS
1. Selected-Response Format
- students select from a given set of options to answer a question or a
problem
- there is only one correct or best answer
- are objective and efficient
2. Constructed-Response Format
- (subjective) demands that students create or produce their own answers
in response to a question, problem or task.
- categories: brief-response items; performance tasks; essay items; or oral
questioning
3. Teacher Observation
- by watching how students respond to oral questions and behave during
individual and collaborative activities, the teacher gets information on whether learning is
taking place in the classroom.
4. Student Self-Assessment
- a process where students are given a chance to reflect on and rate their
own work and judge how well they have performed in relation to a set of criteria.
Chapter 4 – Validity and Reliability
VALIDITY
- Validity is the most important quality of a good measuring instrument.
- Validity refers to the degree to which a test measures what it intends to
measure. It is the usefulness of the test for a given purpose. A valid test is
always reliable.
Types of Validity
1. Content Validity. Content validity means the extent to which the content or
topic of the test is truly representative of the course.
2. Concurrent Validity. Concurrent validity is the degree to which the test
agrees or correlates with a criterion set up as an acceptable measure.
3. Face validity. A test that appears to adequately measure the learning
outcomes and content.
4. Predictive Validity. Predictive validity is determined by showing how well
predictions made from the test are confirmed by evidence gathered at some
subsequent time.
5. Construct Validity. Construct validity of the test is the extent to which the
test measures a theoretical trait.
FACTORS THAT AFFECT THE VALIDITY OF A TEST
1. Inappropriateness of the test items.
2. Directions of the test items.
3. Reading vocabulary and sentence structure
4. Levels of difficulty of the test item
5. Poorly constructed test items
6. Length of the test items
7. Arrangement of the test items
8. Pattern of the answers
9. Ambiguity
RELIABILITY
Reliability means the extent to which a test is dependable, self-consistent
and stable. It refers to the consistency and accuracy of test results. If the test
measures the same thing to exactly the same degree each time it is administered, it is said to have high
reliability. To be reliable, a test should yield essentially the same scores when
administered twice to the same group of students.
FACTORS THAT AFFECT RELIABILITY
1. Length of the test.
2. Moderate item difficulty.
3. Objective scoring.
4. Heterogeneity of the student group.
5. Limited time.
Chapter 5 – Practicality and Efficiency
TIME REQUIRED
- It may be easier said than done, but a desirable assessment is short yet able to provide
valid and reliable results.
EASE IN ADMINISTRATION
- The test should be easy to administer. To avoid questions during the test or performance task,
instructions must be clear and complete. Vague instructions will confuse the students, and
they will consequently provide incorrect responses.
EASE OF SCORING
- Obviously, selected response formats are the easiest to score compared to restricted
and more so extended-response essays.
EASE OF INTERPRETATION
- Students are given a score that reflects their knowledge, skills or performance.
COST
- Citing cost as a reason for not being able to come up with valid and reliable tests is simply
unreasonable.
- It is relevant to know that excessive testing may just train students on how to take tests
but inadequately prepares them for a productive life as an adult.
Chapter 6 – Ethics
Students’ Knowledge of Learning Targets and Assessments
❑ Opportunity to Learn
❑ Prerequisite Knowledge and Skills
❑ Avoiding Stereotyping
❑ Avoiding Bias in Assessment Tasks and Procedures
❑ Accommodating Special Needs
❑ Relevance
- Assessment should reflect the knowledge and skills that are
most important for students to learn.
- Assessment should tell teachers and individual students about
opportunities to learn things that are important.
- Assessment should tell teachers and individual students
something that they do not already know.
❑ Ethical Issues
Which of the following is true about
selected-response items?
a. It assesses the affective domain.
b. It assesses only part of the cognitive domain.
c. It assesses the higher levels of the cognitive domain.
d. It assesses the psychomotor domain.
Ans. B
A Grade 7 Science teacher grouped her students for an oral
presentation about mitigation and disaster risk reduction.
She forgot to discuss with the class how they would be
graded. What ethical concern did the teacher fail to
consider?
a. Confidentiality
b. Fairness
c. Relevance
d. Transparency
Ans. D
03 – DEVELOPMENT OF TOOLS FOR CLASSROOM-BASED
ASSESSMENT
Chapter 7 – Planning the Test
Overall Test Development Process
a. Planning Phase
b. Item Construction Phase
c. Review Phase
Planning of the Test
1. Identifying Purpose of Test
2. Specifying the Learning Outcomes
3. Preparing a Test Blueprint
- Table of Specifications (TOS) –
spells out WHAT will be tested and HOW it
will be tested to obtain the information
needed.
Forms of Table of Specifications
1. One-Way TOS – often used for skill-oriented subjects like language and
reading, or for classroom formative tests focusing on specific skills.
2. Two-Way TOS
3. Expanded TOS
(a simple item-allocation example follows)
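As a quick arithmetic sketch of how a blueprint distributes items: a common convention (not prescribed in this handout, and using purely illustrative numbers) is to allocate items in proportion to instructional time.

% Illustrative only: proportional allocation of items by instructional time
\[
\text{items for a topic} \;=\; \frac{\text{hours spent on the topic}}{\text{total instructional hours}} \times \text{total number of items}
\]
\[
\text{Example: } \frac{3\ \text{hours}}{10\ \text{hours}} \times 40\ \text{items} \;=\; 12\ \text{items}
\]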
Chapter 8 – Selecting and Constructing Test Items and Tasks
After the preparation of the table of specifications, the next step is the construction of the
test proper. For classroom teachers, the construction of the test has become a routine activity,
although a number of them still feel that objective construction of the test can hardly be achieved.
Because of the essential and indispensable role that tests play in the educative process, teachers,
whether they like it or not, should possess a good understanding of the test. Hence, they are
expected to write good and purposeful questions.
Preliminary Steps in Constructing Teacher-Made Tests
1. Prepare a table of specifications.
2. The test should include various types of items.
3. Clear, concise, and complete directions should precede all types of tests.
4. There should only be one possible correct response for each item in the
objective test.
5. The test items should be carefully worded to avoid ambiguity.
6. The majority of the test items should be of moderate difficulty; only very few difficult
and easy items should be included.
7. The items included should be arranged in ascending order of difficulty, that is,
from the easiest to the most difficult.
8. A regular sequence in the pattern of responses should be avoided.
9. Each test item should be independent; that is, leading clues to other items
should be avoided.
10. The test should be neither too short nor too long, and should be one that can be completed
within the allotted time by all or nearly all of the students.
11. Prepare an answer key that contains all acceptable answers.
12. Decide upon the scoring values.
Characteristics of a Good Test
1. Validity - The extent to which a test measures what it intends to measure.
2. Reliability - The ability of the test to show similar results when it is repeated or
when a different form is used.
3. Usability - The test is within the comprehension of the students and easy to
administer and score. It is also suitable to test conditions and within budget
constraints.
OBJECTIVE TESTS
- Objective tests are item types that can be scored objectively.
Completion Test Items
This test consists of a series of items which require the students to fill in a word or
words in the blanks provided. These test items are useful for measuring knowledge of
factual information. They are applicable to the measurement of concepts and skills at
the lower levels of the cognitive domain.
Constructing True–False Test Items
These test items are simply stated in declarative form, which the students must
judge as either true or false. They are characterized by the fact that only two
answers are possible. They are not applicable to the measurement of concepts and skills
at the higher levels of the cognitive domain.
Constructing Matching Test Items
These test items consist of two columns in which each item in the first column has a
corresponding answer in the second column. Like completion and true–false test items, they
are only applicable to the measurement of concepts and skills at
the lower levels of the cognitive domain.
Constructing Multiple Choice Tests
The multiple choice item is the most versatile type of test item. It can measure a variety of
learning outcomes, from the most simple to the most complex, and is applicable to
almost all subject matter content.
A multiple choice item consists of a problem and a list of suggested solutions.
The parts of a multiple choice item are the stem and the alternatives or options, which consist
of distracters and a key.
Forms of multiple choice items
1. Correct answer variety
2. Best answer variety
3. Incomplete statement variety
4. Negative or exception variety
ESSAYS
Essays, classified as non-objective tests, allow for the assessment of higher-order thinking
skills. Such tests require students to organize their thoughts on a subject matter in coherent
sentences in order to inform an audience. In essay tests, students are requested to write one,
two, or more paragraphs on a specified topic.
Advantages of Essay Type of Test
1. The essay test can easily be prepared.
2. It trains students for thought organization and self-expression.
3. It is economical.
4. It affords students the opportunity to develop their critical thinking.
5. The essay test can be used to measure higher mental abilities among students.
6. It minimizes cheating and memorization.
7. It minimizes guessing.
Disadvantages of Essay Type of Test
1. Due to limited sampling of items, the test may become an invalid and unreliable measure of abilities.
2. Questions usually are not well-prepared.
3. It is difficult to score.
4. Scoring is highly subjective due to the influence of the teacher’s personal judgment.
5. It is time consuming on the part of the teachers and students.
Chapter 9 – Improving Classroom-Based Assessment
I. Teachers' Own Review
- To presume perfection right away after construction may lead to failure
to detect shortcomings of the test or assessment task.
II. Peer Review
- Teachers can openly review together the classroom tests and tasks they have
devised against some consensual criteria.
III. Student Review
- Engaging students in reviewing items has become a laudable
practice for improving classroom tests.
IV. Empirically-Based Procedures
- Item improvement using empirically-based methods is aimed at improving
the quality of an item using students' responses to the test; a minimal sketch of two
commonly used indices follows.
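Two indices commonly computed in such empirically-based item analysis are the difficulty index and the discrimination index. The formulas below are the standard classroom versions offered as an illustrative sketch, not a procedure taken from this handout; all numbers are hypothetical.

% Illustrative item-analysis indices (standard classroom formulas; numbers are hypothetical)
\[
p \;=\; \frac{\text{number of students who answered the item correctly}}{\text{total number of students}}
\qquad \text{(difficulty index; a higher } p \text{ means an easier item)}
\]
\[
D \;=\; \frac{U - L}{n},
\qquad U,\ L = \text{correct responses in the upper and lower scoring groups},\ n = \text{students per group}
\]
\[
\text{Example: } p = \frac{30}{40} = 0.75, \qquad D = \frac{9 - 3}{10} = 0.60
\]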
Formative assessments help teachers monitor progress of
student learning and at the same time guide them in
making instructional alterations. Choose the testing
practice which is NOT formative in use.
a. Pretest is given to form initial impression of what learners know
about the new unit of work.
b. Diagnosing the group’s learning needs to determine how they can
be better assisted and guided.
c. Planning instruction based on what needs to be emphasized and
managed.
d. Assigning grades for report based on results of periodical
examinations.
What is the only requirement for a test to
be considered an objective test?
a. Appropriateness of difficulty level of items to test
takers.
b. Observance of fairness in writing the items.
c. Presence of only one correct or nearly best answer
in every item.
d. Comprehensibility of test instructions.
04 – INTERPRETATION OF ASSESSMENT RESULTS
Chapter 10 – Utilization of Assessment Data
TYPES OF TEST SCORES
I. Percentile Rank – Percentile rank gives the percent of scores that are at or below
a raw or standard score. This should not be confused with the percentage of
correct answers.
II. Standard Scores – It is difficult to use raw scores when making comparisons
between groups on different tests, considering that tests may have different levels of
difficulty.
- A standard score is a derived score which utilizes the normal curve to show how a
student's performance compares with the distribution of scores above and below
the arithmetic mean.
A. z-score – gives the number of standard deviations a test score lies above
or below the mean.
B. T-score – a standard score with a mean of 50 and a standard deviation of 10;
- it indicates how much a result varies from the mean.
C. Stanine – a method of scaling scores on a nine-point standard scale.
(a brief worked example follows)
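The worked example below shows how these scores relate to one another. The raw score, mean, and standard deviation are hypothetical, the stanine line uses the usual 2z + 5 approximation, and the percentile-rank formula is the standard one rather than something stated earlier in this handout.

% Hypothetical values: raw score X = 85, mean = 75, standard deviation = 5
\[
z = \frac{X - \bar{X}}{SD} = \frac{85 - 75}{5} = 2.0
\]
\[
T = 50 + 10z = 50 + 10(2.0) = 70
\]
\[
\text{Stanine} \approx 2z + 5 = 2(2.0) + 5 = 9 \quad (\text{rounded and limited to the range } 1\text{–}9)
\]
\[
\text{Percentile rank} = \frac{B + 0.5E}{N} \times 100,
\qquad B = \text{scores below},\ E = \text{scores equal},\ N = \text{total number of scores}
\]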
TYPE OF TEST SCORE INTERPRETATIONS
A. Norm-Referenced Interpretations
- relate a learner's score to the average score of a group on the test.
- are explanations of a learner's performance in comparison with
other learners of the same age or grade.
B. Criterion-Referenced Interpretations
- provide meaning to test scores by describing what the learner
can and cannot do in light of a standard. Test scores allow for absolute
rather than comparative interpretations.
- compare a person's knowledge or skills against a predetermined
standard, learning goal, performance level, or other criterion.
Chapter 11 – Grading and Reporting Results
GRADING – is a process of assigning a numerical value, letter or symbol to
represent student knowledge and performance.
Grading Systems
A. Types of Comparison
1. Norm-referenced grading – focuses on how a student's performance compares with that of peers.
2. Criterion-referenced grading – focuses on defined learning targets.
B. Approaches to Grading
Several types of grade notations:
1. Numerical grades (100, 99, 98, …)
2. Letter grades (A, B, C, etc.)
3. Two-category grades (Pass-Fail; Satisfactory-Unsatisfactory)
4. Checklists
5. Standards-based (Advanced, Proficient, …, Beginning or Similar)
Reporting – A report card is a common method of reporting a learner's abilities and
progress.
When teachers use a norm-referenced
framework in interpreting student performance,
what does it mean?
a. A student’s performance is compared to specified standards of
mastery.
b. A student’s performance is compared to clear descriptions of
specific tasks.
c. A student’s performance is compared to other students in the
group.
d. A student’s performance is compared to his/her learning potential.
Grading and reporting are designed to serve
purposes in school. Which of the following
explains the guidance function of grading?
a. Assist students make realistic educational and vocational
plans.
b. Enhance student’s learning.
c. Inform parents (guardians) of learner’s progress.
d. Select students for promotion, graduation and honors.
Involvement of teachers and curriculum
experts is a must in the development of large-scale tests to ensure their
a. Content validity
b. Concurrent validity
c. Predictive validity
d. Construct validity
One of the things large-scale student
assessment cannot do is to
a. monitor progress of individual students
b. compare school performance amongst schools
c. determine the number of students reaching the
standard for mastery
d. draw profile of student performance across
learning outcomes
PRACTICE
TEST
ASSESSMENT
IN
LEARNING 2
Professional Education 8
Chapter 1 – 21st Century Assessment
❖ Educational assessment, as an agent of educational change, is of great
importance. Coupled with the traditional focus on teaching and learning, it produces
a strong and emerging imperative to alter our long-held conceptions
of these three parts: teaching, learning, and assessment.
❖ The characteristics of 21st century assessment are the essentials for assessment
that will be used, most especially by educators.
1. Responsive – teachers can adjust instruction, school leaders can
consider additional education opportunities, and policy makers can modify
programs and resources to cater to the present needs of the school community.
2. Flexible – assessments need to be adaptable to students' settings.
3. Integrated – assessments are to be incorporated into day-to-day
practice rather than as add-ons at the end of instruction or during a single
specified week of the school calendar.
4. Informative – desired 21st century goals and objectives are clearly stated
and explicitly taught.
– learning objectives, instructional strategies, assessment
methods, and reporting processes are clearly aligned.
5. Multiple Methods – An assessment continuum that includes a spectrum
of strategies is the norm.
6. Communicated – communication of assessment data is clear and
transparent for all stakeholders.
7. Technically Sound – adjustments and accommodations are made in the
assessment process to meet student needs and ensure fairness.
8. Systematic – Twenty-first century assessment is part of a
comprehensive and well-aligned assessment system that is balanced and
inclusive of all students, constituents, and stakeholders and designed to support
improvement at all levels.
❖ In assessment, teachers play various roles and have different goals. These are
as follows: mentor, guide, accountant, reporter, and program director.
❖ The assigning of value or importance to the results of the assessment is what
evaluation is all about.
❖ Assessments can be used as basis for decision-making at different phases of
the teaching-learning process.
TYPES OF EDUCATION DECISIONS
1. Instructional – decisions reached according to the results of tests administered to a class.
2. Grading – grades are assigned to students using assessment as one of the factors.
3. Diagnostic – decisions made to determine students' strengths and weaknesses and the
reason or reasons behind them.
4. Selection – involves accepting or rejecting an examinee, based on the results of
assessment, for admission or qualification to a program or school activity.
5. Placement – involves the process of identifying students who need remediation
or may be recommended for the enrichment program of the school.
6. Guidance and Counselling – teacher may use the results of socio-metric tests to
identify who among the students are popular or unpopular. Those who are
unpopular may be given help for them to gain friends and become more sociable.
7. Program or Curriculum – educational decisions may be reached: to continue,
discontinue, revise or replace a curriculum or program being implemented.
8. Administrative Policies – involves determining the implications for resources,
including financial considerations, in order to improve student learning as a
result of an assessment.
❖ Student Learning Outcome is the totality of accumulated knowledge, skills, and
attitudes that students develop during a course of study.
❖ In crafting student learning outcomes, the following sources must be
considered:
- institution's mission statement;
- policies on competencies and standards issued by government
education agencies;
- competencies expected by different professions;
- business and industry;
- thrusts and development goals of the national government and local
government;
- global trends and developments; and
- general education skills.
❖ A good student learning outcome is:
- specific
- integrates acquired knowledge
- realistic
- focused on the learner
- prepares learners for assessment and is time-bound
(SMART)
Chapter 2 – Types of Assessment
TRADITIONAL ASSESSMENT – indirect and inauthentic measures of student
learning outcomes. This kind of assessment is standardized and, for that reason,
one-shot, speed-based, and norm-referenced.
- Traditional assessments often focus on learners' ability to memorize
and recall, which are lower-level cognitive skills.
- Paper-and-pencil tests or quizzes are the best examples; they mainly
describe and measure student learning outcomes.
AUTHENTIC ASSESSMENT – focuses on analytical and creative thinking skills,
requires students to work cooperatively and collaboratively, and uses performances
(process or product) that reflect student learning, student achievement, and
student attitudes in relevant activities.
- Assessment is authentic when it measures performances or products
which have more realistic meaning than can be attributed to success in school.
The commonly reported dimensions of authenticity are grouped
into three broad categories (Frey, 2012):
a) The Context of Assessment
- Realistic activity or context
- The task is performance-based
- The task is cognitively complex
b) The Role of Student
- A defense of the answer or product is required.
- The assessment is formative.
- Students collaborate with each other or with the teacher.
c) The Scoring
- The scoring criteria are known or student-developed.
- Multiple indicators or portfolios are used for scoring.
- The performance expectation is mastery.
Authentic assessment has four basic characteristics:
1. the task should be representative of performance in the field;
2. attention should be paid to teaching and learning the criteria for
assessment;
3. self-assessment should play a great role; and
4. when possible, students should present their work publicly and defend it.
Other Types of Assessments
1. Formative and Summative Evaluation
Formative Assessments – occur during instruction, between lessons, and
between units.
Summative Assessments – are conducted at the end of each section or unit to
find out student achievement (unit tests, exams, essays, or projects that count
toward a student's final grade).
2. Norm- and Criterion-Referenced Assessment
Norm-referenced – describes what the student can perform in comparison with
other students.
Criterion-referenced – describes what the student can perform without reference
to the performance of others.
3. Contextualized and Decontextualized Assessment
Contextualized – the focus is on the students' construction of
functioning knowledge and the students' performance in applying
knowledge in the real-world context of the discipline area.
Decontextualized – includes written exams and term papers,
which are suitable for assessing declarative knowledge and do not
necessarily have a direct connection to a real-life context.
4. Analytic and Holistic Assessment
Analytic – a specific approach in the assessment of learning
outcomes.
- students are given feedback on how well they are doing on
each important aspect of the specific task expected of them.
Holistic – a global approach in the assessment of a student
learning outcome.
- takes the form of reflection papers and journals, peer
assessment, self-assessment, group presentations, and portfolios.
Twenty-first century skills are built on the core literacy and
numeracy that students are required to master. In order to
meet the demands of the 21st century, education needs
to focus on:
I. What to teach
II. How to teach
III. How to assess
a. I & II
b. I & III
c. III only
d. I, II, III
Who among the following teachers
is doing an evaluation?
a. Teacher Romnick who is computing the final grades based on
several criteria for assessment.
b. Teacher Ronnel who is administering the chapter exam to his
students.
c. Teacher Ronnie who is re-checking the test paper of his
students.
d. Teacher Michelle who is rating the finished project of her
students.
Chapter 3 – Nature of Performance-Based Assessment
Performance-Based Assessment
- is one in which the teacher observes and makes a judgement
about the student’s demonstration of a skill or competency in creating a
product, constructing a response, or making a presentation (McMillan,
2007).
Types of Performance Task
1. Solving a Problem – critical thinking and problem solving
2. Completing an inquiry – to collect data and develop understanding
3. Determining a position – make decision or clarify position
4. Demonstration task – skills to complete well-defined complex tasks
5. Developing exhibits – visual presentation
6. Presentation task – work or task performed in front of an audience
7. Capstone performances – tasks that occur at the end of a program of
study and enable students to show knowledge and skills in the context
that matches the world of practicing professionals.
Chapter 4 – Designing Meaningful Performance-Based Assessment
Steps in Developing a Meaningful Performance Assessment
1. Defining the purpose of assessment
2. Identifying performance task
3. Developing scoring schemes
4. Rating performance
Performance assessment is primarily used for four types of learning targets:
1. Deep understanding
2. Reasoning
3. Skills
4. Products
❖ The performance needs to be identified so that students know what
tasks are to be performed and against what criteria. In this case, a task
description must be prepared to provide a listing of the specifications of
the task that will elicit the desired performance of the students.
PRACTICE
TEST
Thank
You!
CREDITS: This presentation template was
created by Slidesgo, including icons by
Flaticon and infographics & images by Freepik