
LEARNING CONTENT TO RESEARCH

1. Basic Concepts: Educational Measurement; Assessment; Evaluation of Learning &
Programs
2. Types of Measurement, indicators, variables & factors
o Variables
 A quantity or function that may assume any given value or set of values.
 An educational variable (denoted by a letter of the English alphabet, like X) is a measurable characteristic of a student.
 It may be directly measurable (e.g. X = age of a student, X = height of a student).
 Most often it cannot be directly measured (e.g. X = class participation of a student).
o Indicators
 The building blocks of educational measurement upon which all other forms of measurement are built.
 A group of indicators constitutes a variable.
 They are introduced when direct measurements are not feasible.
 An indicator I denotes the presence or absence of a measured characteristic, thus:
I = 1, if the characteristic is present
I = 0, if the characteristic is absent
 For the variable X = class participation, we can let I₁, I₂, I₃, …, Iₙ denote the participation of a student in n recitations and let X = (sum of the I's) / n. Thus, if there were n = 10 recitations and the student participated in 5 of these 10, then X = 5/10 or 50% (see the sketch after this list).
o Factors
 A group of variables forms a construct or a factor.
 The variables which form a factor correlate highly with each other but have low correlations with variables in other groups.
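A minimal Python sketch of both ideas, using made-up data: the indicator values I₁…Iₙ are averaged into the variable X, and a correlation matrix shows how variables that form a factor correlate highly with each other but weakly with an unrelated variable. All data and variable names here are illustrative assumptions, not from the source.

```python
import numpy as np

# Hypothetical indicators: 1 = participated, 0 = did not, over n = 10 recitations
indicators = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
X = sum(indicators) / len(indicators)      # X = (sum of the I's) / n
print(f"Class participation X = {X:.0%}")  # 50%

# Hypothetical scores of 5 students on three variables. 'recitation' and
# 'homework' are meant to load on the same factor (say, diligence), so they
# correlate highly; 'height' belongs to no such factor.
recitation = np.array([80, 65, 90, 70, 85])
homework   = np.array([82, 60, 95, 72, 88])
height     = np.array([160, 172, 155, 168, 163])

corr = np.corrcoef([recitation, homework, height])
print(np.round(corr, 2))  # high recitation-homework entry, low entries with height
```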
3. Various roles of assessment
o Grading
o Learning
o Motivating students
4. Clarity of Learning Targets
a. Cognitive targets – skills, competencies & abilities targets
 Skills refer to specific activities or tasks that a student can proficiently do.
 Abilities are the qualities of being able to do something.
 It is important to recognize a student's abilities so that the program of study can be designed to optimize his/her innate abilities.
5. Learning Domains
o The cognitive learning domain involves intellect—the understanding of
information and how that develops through application on a scale that increases
from basic recall to complex evaluation and creation.
o The affective learning domain involves our emotions toward learning and how
that develops as we progress from a low order process, such as listening, to a
higher order process, like resolving an issue.
o The psychomotor learning domain involves our physicality and how that
develops from basic motor skills to intricate performance.
6. Bloom’s Taxonomy of cognitive objectives
 Bloom’s taxonomy of cognitive objectives describes learning in six levels, in ascending order: knowledge, comprehension, application, analysis, synthesis, and evaluation:
o Knowledge: rote memorization, recognition, or recall of facts
o Comprehension: understanding what the facts mean
o Application: correct use of the facts, rules, or ideas
o Analysis: breaking down information into component parts
o Synthesis: combination of facts, ideas, or information to make a new whole
o Evaluation: judging or forming an opinion about the information or situation
7. Appropriateness of Assessment Methods
o Written-Response Instruments - These include objective tests (multiple-choice, true or false, matching type, or short answer tests), essays, examinations, and checklists.
Examples:
Objective test – appropriate for the various levels of the hierarchy of educational
objectives.
Essay – when properly planned, can test the students’ grasp of high-level cognitive
skills particularly in areas of application, analysis, synthesis, and evaluation.
o Product-Rating Scale - These scales measure products that are frequently rated in education, such as book reports, maps, charts, diagrams, notebooks, essays, and creative endeavors of all sorts.
Example:
Classic “Handwriting” Scale – used in the California Achievement Test, Form W. A pupil's handwriting sample is moved along the scale of prototype handwriting specimens until its quality is most similar to one of the prototypes.
o Performance Test - One of these is the performance checklist which consists of the
list of behaviors that makes up a certain type of performance. It is used to
determine whether or not an individual behaves in a certain way when asked to
complete a particular task.
Example: Performance Checklist in Solving a Mathematics Problem
Behavior:
 Identifies the given information
 Identifies what is being asked
 Uses a variable to replace the unknown
 Formulates the equation
 Performs algebraic operations
 Obtains the answer
 Checks whether the answer makes sense
o Oral Questioning - An appropriate assessment method when the objectives are: to assess the students' stock knowledge; and to determine the student's ability to communicate ideas in coherent verbal sentences.
8. Properties of Assessment Methods
o Validity of test
- The instrument’s ability to measure what it purports to measure.
- The appropriateness, correctness, meaningfulness and usefulness of the
specific conclusions that a teacher reaches regarding the teaching-learning
situation
Types of Validity
- CONTENT VALIDITY
- FACE VALIDITY
- CRITERION-RELATED VALIDITY
- CONSTRUCT VALIDITY
 Content Validity – refers to the content and format of the instrument.
 Face Validity - refers to the outward appearance of the test.
- It is the lowest form of test validity
 Criterion-Related Validity - also called predictive validity.
- The test is judged against a specific criterion.
- It can also be measured by correlating the test with a known valid test (see the sketch below).
 Construct Validity - the test is loaded on a “construct” or factor.
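The correlation check mentioned under criterion-related validity might look like the following sketch, assuming two hypothetical score lists; a high Pearson r between the new test and a known valid test supports validity.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores of the same students on a new test and a known valid test
new_test   = [45, 60, 72, 80, 55, 90, 67]
valid_test = [48, 58, 75, 78, 60, 88, 70]

r = correlation(new_test, valid_test)  # Pearson correlation coefficient
print(f"criterion-related validity coefficient r = {r:.2f}")
```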
o Reliability of test
- Reliability is the degree to which a test consistently measures whatever it measures.
- Something reliable is something that works well and that you can trust.
- It is a term synonymous with dependability and stability.
Types of Reliability
- EQUIVALENCY RELIABILITY
- STABILITY RELIABILITY
- INTERNAL CONSISTENCY RELIABILITY
- INTER-RATER RELIABILITY
 Equivalency Reliability
- Also called equivalent-forms or alternative-forms reliability.
- The extent to which two forms measure identical concepts at an identical level of difficulty.
- Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association.
 Stability Reliability
- Sometimes called test-retest reliability.
- The agreement of measuring instruments over time.
- Stability reliability is determined by administering the same test at two different points in time and relating the two sets of scores to one another to highlight the degree of relationship or association (see the sketch after this list).
 Internal Consistency Reliability
- Used to assess the consistency of results across items within a test (consistency of an individual’s performance from item to item & item homogeneity).
- Used to determine the degree to which all items measure a common characteristic of the person.
 Inter-Rater Reliability
- Used to assess the degree to which different raters or observers give
consistent estimates of the same phenomenon.
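A minimal Python sketch of how these reliability estimates might be computed on hypothetical data. The score correlation covers the equivalency and stability (test-retest) cases; Cronbach's alpha and percent agreement are standard statistics for internal consistency and inter-rater reliability respectively, though the source names no specific formulas.

```python
import numpy as np

# --- Equivalency / stability: relate two sets of scores (hypothetical data)
form_a = np.array([70, 85, 60, 90, 75, 80])   # form A, or first administration
form_b = np.array([72, 83, 58, 92, 78, 77])   # form B, or retest
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"score-correlation reliability estimate: r = {r:.2f}")

# --- Internal consistency: Cronbach's alpha (one standard statistic).
#     Rows = students, columns = items (1 = correct, 0 = incorrect).
items = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of students' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")

# --- Inter-rater: simple percent agreement between two raters (assumed method)
rater1 = ["pass", "fail", "pass", "pass", "fail"]
rater2 = ["pass", "fail", "pass", "fail", "fail"]
agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
print(f"inter-rater agreement = {agreement:.0%}")
```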
o Fairness of assessment
- The concept that assessment should be 'fair' covers a number of aspects.
- Students need to know exactly what assessment methods will be used.
- Assessment has to be viewed as an opportunity to learn rather than an opportunity to weed out poor and slow learners.
o Practicality & efficiency of assessment
- Something efficient is able to accomplish its purpose while functioning effectively.
- Practicality is defined as being concerned with actual use rather than theoretical possibilities.
o Ethics in assessment
- Refers to questions of right and wrong.
- Webster defines ethical (behavior) as conforming to the standards of conduct of a given profession or group.
9. Types of Tests
· true-false type test
· matching type test
· supply type test
· multiple choice test
· short answer test
· essay test
10. Measures of Central Position
o Mean
- The mean (or average) is the most popular and well-known measure
of central tendency. It can be used with both discrete and continuous
data, although its use is most often with continuous data.
o Median
- Median is the value which occupies the middle position when all the
observations are arranged in an ascending/descending order. It divides
the frequency distribution exactly into two halves. Fifty percent of
observations in a distribution have scores at or below the median. Hence
median is the 50th percentile. Median is also known as ‘positional average’.
- It is easy to calculate the median. If the number of observations is odd, then the ((n + 1)/2)th observation (in the ordered set) is the median. When the total number of observations is even, it is given by the mean of the (n/2)th and (n/2 + 1)th observations (see the sketch below).
o Mode
- Mode is defined as the value that occurs most frequently in the data.
Some data sets do not have a mode because each value occurs only once.
On the other hand, some data sets can have more than one mode. This
happens when the data set has two or more values of equal frequency
which is greater than that of any other value. Mode is rarely used as a
summary statistic except to describe a bimodal distribution. In a bimodal
distribution, the taller peak is called the major mode and the shorter one
is the minor mode.
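A short sketch of the three measures on hypothetical score lists, using Python's statistics module; it also illustrates the odd/even median rule stated above, and multimode returns every value of maximal frequency, which covers bimodal data.

```python
from statistics import mean, median, multimode

odd  = [70, 85, 60, 90, 75]        # n = 5 (odd): median is the 3rd ordered value
even = [70, 85, 60, 90, 75, 80]    # n = 6 (even): mean of 3rd and 4th values

print(mean(odd))      # 76
print(median(odd))    # 75   -> the ((n + 1)/2)th ordered observation
print(median(even))   # 77.5 -> mean of the (n/2)th and (n/2 + 1)th observations

scores = [80, 75, 80, 90, 75, 60]  # bimodal: 80 and 75 each occur twice
print(multimode(scores))           # [80, 75]
```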
11. Measures of Variability
o Fractiles
- Fractiles are measures of location or position, which include not only central location but also any position based on the number of equal divisions of a given distribution. If a distribution is divided into four equal divisions, we have quartiles, denoted by Q1, Q2 and Q3. The most commonly used fractiles are the quartiles, deciles and percentiles.
o Quartile deviation
- Quartile Deviation (Q) - Next to the range, the quartile deviation is another measure of variability. It is based upon the interval containing the middle fifty percent of cases in a given distribution. One quarter means 1/4th of something, when a scale is divided into four equal parts. “The quartile deviation or Q is one-half the scale distance between the 75th and 25th percentiles in a frequency distribution,” i.e. Q = (Q3 − Q1)/2 (see the sketch below).
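A small sketch of the quartile deviation on hypothetical scores; numpy's default percentile interpolation is assumed, so other conventions may give slightly different quartile values.

```python
import numpy as np

scores = [55, 60, 62, 65, 70, 72, 75, 78, 80, 85, 90]  # hypothetical test scores

q1, q3 = np.percentile(scores, [25, 75])  # 25th and 75th percentiles
Q = (q3 - q1) / 2                         # one-half the interquartile distance
print(f"Q1 = {q1}, Q3 = {q3}, quartile deviation Q = {Q}")
```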
o Mean absolute deviation
- The mean absolute deviation is the average of the absolute differences between each value and the mean; like the standard deviation, it is expressed in the original units of the data.
o Standard deviation & variance
- Standard deviation - The standard deviation is the average amount of
variability in your dataset. It tells you, on average, how far each score lies
from the mean. The larger the standard deviation, the more variable the
data set is.
- Variance - is the average squared difference of the values from the mean. Unlike the previous measures of variability, the variance includes all values in the calculation by comparing each value to the mean. Variance is the square of the standard deviation, so its units are the squares of the data's units and its value can be much larger than a typical deviation. While the variance number is harder to interpret intuitively, it is important to calculate for comparing different data sets in statistical tests like ANOVAs (see the sketch below).
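A minimal sketch computing the mean absolute deviation, variance, and standard deviation of a hypothetical score list; the population formulas (pvariance, pstdev) are used here to match the "average squared difference" definition above.

```python
from statistics import mean, pstdev, pvariance

scores = [70, 85, 60, 90, 75, 80]          # hypothetical scores
m = mean(scores)

mad = mean(abs(x - m) for x in scores)     # mean absolute deviation
print(f"mean absolute deviation = {mad:.2f}")

print(f"population variance  = {pvariance(scores):.2f}")  # average squared deviation
print(f"population std. dev. = {pstdev(scores):.2f}")     # square root of variance
```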
12. Grading System
- The two most common types of grading systems used at the university level are
norm-referenced and criterion-referenced. Many professors combine elements of
each of these systems for determining student grades by using a system of
anchoring or by presetting grading criteria which are later adjusted based on
actual student performance.
o Norm-referenced grading
- In norm-referenced systems students are evaluated in relationship to one another
(e.g., the top 10% of students receive an A, the next 30% a B, etc.). This grading
system rests on the assumption that the level of student performance will not vary
much from class to class. In this system the instructor usually determines the
percentage of students assigned each grade, although this percentage may be
determined (or at least influenced) by departmental expectations and policy.
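A sketch of a norm-referenced scheme along the lines of the example above (top 10% receive an A, the next 30% a B); the remaining cutoffs and the score data are illustrative assumptions.

```python
def percentile_rank(score, scores):
    """Fraction of the class scoring strictly below this score."""
    return sum(s < score for s in scores) / len(scores)

def norm_grade(score, scores):
    # Hypothetical cutoffs: top 10% A, next 30% B, next 40% C, bottom 20% D
    pr = percentile_rank(score, scores)
    if pr >= 0.90:
        return "A"
    if pr >= 0.60:
        return "B"
    if pr >= 0.20:
        return "C"
    return "D"

scores = [95, 88, 76, 70, 65, 90, 82, 59, 73, 85]
for s in sorted(scores, reverse=True):
    print(s, norm_grade(s, scores))
```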
o Criterion-referenced grading system
- In criterion-referenced systems students are evaluated against an absolute scale (e.g. 95-100 = A, 88-94 = B, etc.). Normally the criteria are a set number of points or a percentage of the total. Since the standard is absolute, it is possible that all students could get As or all students could get Ds (see the sketch below).
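A sketch of the absolute scale in the example above (95-100 = A, 88-94 = B); the cutoffs below B are assumptions added only to complete the illustration.

```python
def criterion_grade(score):
    """Map a score to a letter grade on a fixed, absolute scale."""
    if score >= 95: return "A"   # 95-100, from the example above
    if score >= 88: return "B"   # 88-94, from the example above
    if score >= 80: return "C"   # assumed cutoff
    if score >= 75: return "D"   # assumed cutoff
    return "F"

for s in [98, 91, 84, 77, 70]:
    print(s, criterion_grade(s))
```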
o Alternative grading system
- Alternative grading emphasizes providing detailed and frequent feedback to
students, giving students further agency in how they will be assessed. These
methods are meant to reduce students' anxiety and fixation on grades by
emphasizing the learning process.
- Alternative grading forgoes a conventional points-based approach to grading and favors holistic and continuous forms of assessment and feedback.
o Cumulative & averaging grading system
- In the cumulative grading system, the grade of a student in a grading period equals his current grading period grade, which is assumed to carry the cumulative effects of the previous grading periods.
- In the averaging system, the grade of a student in a particular grading period equals the average of the grades obtained in the prior grading periods and the current grading period (see the sketch below).
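A short sketch of the averaging system on hypothetical period grades: each period's reported grade is the mean of all grades up to and including that period.

```python
from statistics import mean

period_grades = [85, 88, 90]   # hypothetical grades for grading periods 1-3

# Reported grade per period = average of prior and current period grades
running = [mean(period_grades[: i + 1]) for i in range(len(period_grades))]
print(running)   # [85, 86.5, 87.67] (last value rounded)
```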
LEARNING CONTENT TO RESEARCH
21st Century Assessment
1. Characteristics of 21st Century Assessment
 Responsive
 Flexible
 Integrated
 Informative
 Multiple Methods
 Communicated
 Technically Sound
 Systemic
2. Instructional Decisions in Assessment
2.1 Decision-Making at Different Phases of the Teaching-Learning Process
2.2 Assessment in Classroom Instruction
2.3 Types of Educational Decision
3. Outcome-Based Assessment
3.1 Student Learning Outcome
3.2 Sources of Student Expected Learning Outcome
3.3 Characteristics of Good Learning Outcome
Types of Assessment
4. Traditional and Authentic Assessment
4.1 Traditional as Direct and Indirect Measure
4.2 Authentic as Direct Realistic Performance-Based Activity
5. Formative Evaluation and Summative Evaluation
5.1 Formative as Measure of Teaching / Learning Effectiveness
5.2 Summative as Measure of Learning at the End of Instruction
6. Norm- and Criterion-Referenced Assessment
6.1 Norm-Referenced as Survey Testing
6.2 Criterion-Referenced as Mastery Testing
7. Contextualized and Decontextualized Assessment
7.1 Contextualized as Measure of Functioning Knowledge
7.2 Decontextualized as Assessment of Artificial Situation
8. Analytic and Holistic Assessment
8.1 Analytic as Specific Approach
8.2 Holistic as Global Approach
Nature of Performance-Based Assessment (PBA)
9. Meaning and Characteristics
9.1 PBA as Defined by Authorities
9.2 Features of PBA
10. Types of Performance Tasks
10.1 Solving a Problem
10.2 Completing an Inquiry
10.3 Determining Position
10.4 Demonstrations
10.5 Developing Exhibits
10.6 Presentation Tasks
10.7 Capstone Performances
11. Strengths and Limitations
11.1 Advantages
11.2 Disadvantages
Designing Meaningful Performance-Based Assessment
12. Designing the Purpose of Assessment
12.1 Learning Targets Used in Performance Assessment
12.2 Process and Product-Oriented Performance-Based Assessment
12.3 Authentic as Direct Realistic Performance-Based Activity
13. Identifying Performance Tasks
13.1 Suggestions for Constructing Performance Tasks
14. Developing Scoring Schemes
14.1 Rubric as an Assessment Tool
14.2 Types of Rubrics
14.3 Rubric Development
Affective Learning Competencies
15. Importance of Affective Targets
16. Affective Traits and Learning Targets
16.1 Attitude Targets
16.2 Value Targets
16.3 Motivation Targets
16.4 Academic Self-Concept Targets
16.5 Social Relationship Targets
16.6 Classroom Environment Targets
17. Affective Domain of the Taxonomy of Educational Objectives
17.1 Receiving
17.2 Responding
17.3 Valuing
17.4 Organization
17.5 Characterization
Development of Affective Assessment Tools
18. Methods of Assessing Targets
18.1 Teacher Observation
18.2 Self-Report
19. Utilizing the Different Methods or Combination of Methods in Assessing Affect
19.1 Type of Affect
19.2 Grouped or Individual Responses
19.3 Use of Information
20. Affective Assessment Tools
20.1 Checklists
20.2 Rating Scale
20.3 Likert Scale
20.4 Semantic Differential Scale
20.5 Sentence Completion
Nature of Portfolio Assessment
21. Purposes
21.1 Why Use Portfolio?
21.2 Characteristics
22. Types
22.1 Showcase
22.2 Documentation
22.3 Process
22.4 Product
22.5 Standard-Based
23. Elements
23.1 Parts and Designs
Grading and Reporting System
24. K to 12 Grading of Learning Outcomes
25. The Effects of Grading on Students
26. Building a Grading and Reporting System
26.1 Basis of Good Reporting
27. Major Purposes of Grading and Reporting
28. Grading and Reporting Methods
29. Developing an Effective Reporting System
30. Tools for a Comprehensive Reporting System
31. Guidelines for Better Practice
32. Planning and Implementing Parent-Teacher Conferences
Statistics and Computer: Tools for Analyzing Assessment Data
33. Statistics
34. Descriptive and Inferential Statistics
35. Statistical Tools for Grouped and Individual Data
35.1 Measures of Central Tendency
35.2 Measures of Variability
35.3 Standard Scores
35.4 Indicators of Relationship
Computer: Aid in statistical computing and data presentation