Teachers’ Assessment Skills in Practice at the Elementary to Senior High School
Department of a Private Educational Institution: A Cross-Sectional Analysis
Dr. David Cababaro Bueno
Dr. Segundo C. Redondo, Jr.
==========================================================
Abstract – This study focuses on the analysis of the classroom assessment skills and practices of teachers across levels and
departments towards a more authentic assessment. The descriptive-cross-sectional design was used to gather descriptive and
comparative data for the purpose of describing the characteristics of several groups of teachers relative to their classroom assessment
practices. All teachers from elementary, junior high school and the senior high school were included. The Classroom Assessment
Practices and Skills (CAPS) questionnaire was used. Reliability estimates of teachers’ perceived skill in classroom assessment
were done using Cronbach’s Alpha, which was α = .95 indicating high levels of internal consistency. The data gathered were
analyzed using the Mean and Analysis of Variance (ANOVA) at the .05 level of significance. The teachers across levels are very
skilled in calculating the central tendency of teacher-made tests, assessing students’ class participation, calculating grades, using
assessment results in planning, decision-making, communicating and providing feedback, problem solving, evaluating class
improvement, and writing true or false tests. Moreover, they are skilled in writing multiple-choice tests measuring higher-order
thinking skills (HOTS). They always prepare and employ multiple-choice, true or false and essay questions, HOTS items,
problem-solving tasks, assessment results for decision-making, and written feedback along with students’ grades. There is a significant
difference between the assessment skills of the elementary and the senior high school teachers, whereas no significant
differences in the assessment practices of teachers across levels are identified. Furthermore, there is a moderate positive correlation
between the assessment skills and practices of elementary, junior, and senior high school teachers. Traditional forms of assessment
are preferred by the teachers over alternative assessments.
Keywords – Classroom assessment, skills, practices, cross-sectional design, elementary, junior and senior high school,
department, private educational institution, Olongapo City
INTRODUCTION
Educational assessment is an essential component of the teaching profession. It is the process used in
the classroom by the teacher to obtain information about students’ performances on assessment tasks, using a
variety of assessment methods, to determine the extent to which students are achieving the target
instructional outcomes. In this regard, researchers suggest that a sound educational assessment requires a
clear conception of all intended learning outcomes of the instruction and a variety of assessment procedures
that are relevant to the instruction, adequate to sample student performance, and fair to everyone. This means
teachers should competently be able to choose and develop assessment methods appropriate for instructional
decisions; administer, score, and interpret results of externally produced and teacher-made assessment; use
assessment results when making educational decisions; develop valid grading procedures; communicate
assessment results to various audiences; and recognize unethical, illegal, and inappropriate methods and uses
of assessment (Alkharusi, Aldhafri, Alnabhani, & Alkalbani, 2012).
Thus, teaching is a multifaceted process that requires teacher competencies in measurement and
assessment skills. Such skills may include: test planning and construction; grading; interpretation of test
results; use of assessment results to inform teaching and learning; interpretation of standardized tests; and
communicating results to relevant stakeholders (Koloi-Keaikitse, 2017).
Assessment of students is very critical because effective teaching decisions are based on the ability of
teachers to understand their students and to match actions with accurate assessments (McMillan, 2008).
However, past research has shown that there are many problems associated with teachers’ classroom
assessment practices. These include teachers’ lack of an adequate knowledge base regarding the basic testing
and measurement concepts (Stiggins, 2014), limited teacher training in assessment and failure of teachers to
employ and adhere to measurement guidelines they learned in measurement courses (Campbell & Evans,
2000).
Teachers adopt different classroom assessment practices to evaluate students’ learning outcomes, and
they spend much of their classroom time engaged in student assessment related activities. Teachers control
classroom assessment environments by choosing how they assess their students, the frequency of these
assessments, and how they give students feedback. All these are a clear indication that classroom assessments
play an integral part of the teaching and learning process. Just like teachers everywhere, Columban College,
Inc. (CCI) teachers are the key drivers of the education process. Their instructional and classroom assessment
practices are a means by which the education system is enhanced and defined (Nenty, Adedoyin, Odili, &
Major, 2007). For this reason, it is imperative to understand the ways in which teachers feel about assessment
practices, their perceptions regarding assessment training and their experiences as they attempt to use various
assessment methods to evaluate students’ learning outcomes. It is also important to understand their thought
processes as they develop and use assessment methods, grade students’ work and interpret assessment results.
Teachers’ assessment practices are an essential element for addressing students’ learning needs, and they can
ultimately improve the education system and accountability. Understanding teachers’ assessment practices
serves as a way of finding out if teachers adopt or use quality assessment methods to meet the learning needs
of students (McMillan, 2008).
The role of student assessment at the various levels in the educational system is to generate
information to be used for making “high stakes” decisions, such as selecting and placing students in
appropriate training programs. Student assessment in a private educational institution also plays an important
role of helping students prepare for standardized examinations needed for those “high stakes” decisions.
However, few formal studies on teachers’ classroom assessment skills and practices have been
conducted. This makes it difficult to have a clear understanding about the nature and magnitude of
assessment issues of teachers in the elementary to senior high school. This study endeavors to bring an
awareness regarding how teachers generally perceive their classroom assessment skills and practices as
paradigm shift towards outcomes-based assessment practices.
Framework of the Study
This study assesses the teachers’ response patterns to a set of items that measured their perceived
skills in classroom assessment practices. In order to gain insights into teachers’ responses to the perceived
skill in assessment scale, an Item Response Theory (IRT) model was utilized. IRT refers to a set of models
that connect observed item responses to an examinee’s location on the underlying trait that is
measured by the entire scale (Mellenbergh, 1994). IRT models have been found to have a number of
advantages over other methods in assessing self-reported outcomes such as teacher beliefs, perceptions, and
attitudes (Hambleton, Swaminathan, & Rogers, 1991). IRT is a general statistical theory about examinee item
and test performance and about how performance relates to the abilities that are measured by the items in the
test. IRT models have the potential to highlight whether items are equivalent in meaning to different
respondents; they can be used to assess items with different response patterns within the same scale of
measurement and can therefore detect differing item response patterns in a given scale (Hays, Morales, &
Reise, 2000). Thus, IRT is regarded as an improved version of Classical Test Theory (CTT), as many different
tasks may be performed through IRT models, which provide more flexible information. Test items and traits
of the test taker are referenced on the same interval scale (Koloi-Keaikitse, 2017).
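To make the IRT framing concrete, a widely used member of this family is the two-parameter logistic (2PL) model for a dichotomously scored item; it is shown here only as a representative illustration, since the present study does not specify which IRT model was fitted to the CAPS responses. Under the 2PL model, the probability that respondent i endorses item j is

P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]}

where \theta_i is the respondent’s location on the latent trait, b_j is the item location (difficulty), and a_j is the item discrimination. Rating-scale items such as those in the CAPS questionnaire are typically handled by a polytomous extension of this form, such as the graded response model.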
In order to understand what students know or do not know, educators need assessment. Classroom
assessment is possibly the first and most important part of the teaching and learning process that includes
measurement, feedback, reflection, and change. Classroom assessments play an important role as they are
essential for generating information used for making educational decisions. Classroom assessments also serve
many purposes for teachers such as: grading, identification of students with special learning needs, student
motivation, clarification of students’ achievement expectations, and monitoring instructional effectiveness
(Stiggins & Bridgeford, 2014). Thus, classroom assessment must be transformed so that the content and use of
assessment information and insights become part of an ongoing learning process.
The purpose of classroom assessment is not just to generate information for decision making, but
also to foster learning improvement. For this reason, if properly offered on a frequent basis it would help
students to refine and deepen their understanding of what they learn. Classroom assessments are also
essential for conveying expectations that can stimulate learning (Wiggins, 2008). The more information we
have about students, the clearer the picture we have of their achievement, their learning challenges, and where
those challenges originate. For this reason, there is a need to pay attention to how assessment information is
used, as failure to do so may lead to inaccurate assessment of students’ achievement and may ultimately
prevent students from reaching their full academic potential (Stiggins & Bridgeford, 2014).
In other words, assessment serves as an important deciding factor for the future of students’ learning
outcomes. Educators must have a clear understanding of the assessment practices that teachers use as they
assess students, and the assessment challenges teachers face. The most efficient way to measure, understand,
and appreciate teachers’ assessment practices is to assess their perceptions about classroom assessment
methods. They should make decisions about the choice of assessment methods they want to employ and
establish if such methods are relevant for assessing the specific content and effective to help students reach
their academic potential. Teachers should also make decisions about how they are going to grade, give
students feedback, and how they will analyze, interpret, and use assessment results to inform decisions in
teaching and learning.
Classroom assessment involves a wide range of activities from designing paper-pencil tests and
performance measures to grading, communicating assessment results, and using them in decision-making
(Zhang & Burry-Stock, 2013). Although there is a great deal of research on teachers’ assessment practices,
few empirical research attempts have been made to link these practices to teachers’ skills in the classroom
assessment environment.
Studies which focused on classroom assessment showed that teacher assessment practices have been
affected by various subject areas such as English Language, Mathematics, Science, Social Studies, Music,
Visual Arts in the elementary, middle high school, and senior high school departments. The results suggested
that the overall use of and thinking about assessment practices in a secondary English context aligns with
prior formative assessment, although the practices reported as used in this study seemed to be more targeted
to individual students and their learning needs (Tolley, 2016); and both Filipino and Indonesian junior high
school English teachers used assessment for learning using written comments as their primary method for
providing feedback (Balinas, 2016). Given issues related to differences in learner characteristics, effective
sampling across the content domain, and recent emphases on assessing meaningfully contextualized abilities
and higher-order cognitive processes, the traditional mathematics test arguably does not provide a valid
measure of student ability. Dandis (2013) concluded that teachers mainly use written exams to assess their
students; they reported using some alternative assessments, but only sporadically, expressed dissatisfaction
with the methods they use, preferred direct observation to assess their students, and offered some suggestions
for improvement. Awoniyi (2016) examined school-based assessment among senior high school teachers and
their management of assessment towards improving assessment practices, and Nabie (2013) indicated
that many practicing Mathematics teachers integrated and used multiple assessment techniques in their
problem-solving and investigation lessons. Teachers identified pedagogical issues, motivation, social learning,
diagnosis, and student thinking as the reasons for their choice of assessment techniques. Moreover, a study of
primary and secondary mathematics teachers’ changing assessment practices in the context of policy,
stakeholder, and personal presses for change, findings revealed several trajectories of change in the interplay
between assessment forms and the functions that they serve (Saxe & Franke, 1997).
Moreover, a study on classroom assessment practices by secondary school mathematics teachers in
Nandi Central Sub-County concluded that discourse, observation, students’ self-assessment and peer
assessment were the common classroom assessment practices reported. Open-ended questions, select-type
items, and super items were the common assessment formats used across school categories. Assessment
information was mainly used to give students grades or marks, diagnose students’ learning problems, and
assign them to different programs or tasks, and the mathematical competencies often considered when
mathematics teachers prepared assessment tools across school categories included communication, problem
solving, mathematical reasoning, and the use of symbols and formal language (Eliud, 2015). Meanwhile, Bandele (2013)
investigated assessment literacy of science teachers in public secondary schools in Ekiti State using survey
design. Results showed that the majority of the teachers had low knowledge of assessment techniques and placed
more emphasis on formal assessment procedures than on informal ones. It was recommended that regular
seminar/workshop be organized for science teachers on assessment practices for quality assessment in the
classroom. In another science teachers’ assessment and grading practices study, assessment and grading
practices were found to be at odds with modern perspectives of assessment as well as its role in learning
(Carmen & Jakobsson, 2015). Furthermore, the currently described and enacted curriculum in K-8
classrooms is poorly designed for the purpose of building content knowledge according to analyses of science
curricula in the United States. Research on elementary science teachers and formative assessment
implementation in their classrooms found that teachers bear a great burden of using effective strategies in
an attempt to create learning opportunities that lead to greater understanding of science, and the importance
of solidifying effective formative assessment use for elementary science teachers cannot be underestimated in
the efforts to increase effective, research-based science instruction (Pierson, 2013).
The influences of teachers’ authentic assessment on their classroom practices were paramount to
both teachers and students when implemented effectively in the Social Studies classroom. The research found
that unless improvements are made in teacher knowledge and ability and in the policies and practices of
authentic assessment in schools, and unless teachers, students, parents and policymakers value and see the
potential for authentic assessment to improve teaching and learning, it will continue to be under-emphasized,
undervalued and poorly used. Thus, the researcher recommended that the teaching universities in Ghana should broaden
their scope on the teaching of assessment to incorporate authentic assessment (Kankam, Bordoh, Eshun,
Bassaw, & Korang, 2014). In music education and visual arts, researchers have found out that music teachers
in the southwestern region of the United States seldom received administrative guidance or altered
assessment approaches due to standards-based curriculum adoption. They based grades on a combination of
achievement and non-achievement criteria, with non-achievement criteria receiving greater weight in
determining grades. Although instructional time, number of students taught, and number of concert
performances prepared/ given had no substantive relationship with assessment decisions, grading practices
were influenced by teaching level and teaching specialization (Russell & Austin, 2010). Other research showed
that visual art teachers across Ghana and Nigeria had limited knowledge of educational measurement, largely
owing to the fact that most of them were studio-oriented teachers. Other studies reviewed indicated the need
for training of teachers, generally on the new assessment strategies (Mohammed, 2015).
Moreover, standardized test data from a southern suburban elementary school showed student scores lagging
behind those of students from similar settings. These scores suggested a disconnection
between teachers’ understanding of and practice in formative assessment. This project may lead to positive
social change by empowering teachers to design curriculum and assessment with authentic learning
experiences and providing students with goal-setting strategies to become responsible for learning. The
project’s positive social change may lead to this school and district closing the identified achievement gap. It
is recommended that further research on teacher perception of formative assessment should include more
elementary and middle schools (Bennett, 2015), while a study on teacher factor in assessment towards
enhancing universal Basic Education in Ijebu of Ogun State attempted to find out how teachers assess their
students and areas in which teachers need assistance. Results revealed that teachers indicated the need for
assistance on some assessment procedures such as: directing students to assess their own progress, skill of
test construction and item development procedure. Recommendations include staff re-orientation to correct
systematic defects in assessment practices (Kareem, 2011). In another elementary classroom study, results were
discussed in light of other research, indicating that a greater understanding of assessment beliefs and the
importance of practices can contribute to the development of relevant professional development aimed at
improving teachers’ assessment pedagogies and practices, which in turn can contribute to greater educational
success (Calveric, 2010). Meanwhile, an investigation of the knowledge and attitude of secondary school
teachers towards continuous assessment (CA) practices in Edo Central Senatorial District, Nigeria showed that
the majority of the teachers perceived CA practices as a systematic and comprehensive system of evaluation
but had inadequate knowledge of its cumulative and guidance-oriented characteristics (Akinlosotu, 2016). A
study in secondary schools in South Korea revealed that the classroom speaking assessment currently conducted has broadly
employed performance-based tasks and that somewhat informative feedback has been offered to students in
the form of criterion descriptions plus marking scores. However, there was still a strong tendency here
towards traditional formal testing to measure and report learning outcomes, one which resulted in teachers
having an overall pessimistic attitude towards the positive effects of such testing on teaching and learning. It
is evident from this study that there is a need for improvements in order to facilitate better learning outcomes
in the classroom (Lee, 2010). Another study examined how teachers implemented Formative Assessment (FA)
practices in Grade 9 Technology classrooms in the Fort Beaufort district; it found that teachers had no
knowledge of how to implement FA in their classrooms and had a negative attitude towards it. Thus,
practitioners need to be re-trained on how to implement the Formative Assessment policy in schools (Kuze
& Shumba, 2011).
While it is very true that assessment plays an important part in the Teaching and Learning (T&L)
process, it is the duty of the teachers to carry out assessment tasks that suit their intended purpose.
Research findings suggest that the value teachers place on assessment practices and their actual practices do
not differ greatly. It can be concluded that teachers’ views about Assessment for Learning (AFL) are at a
moderately high level. The mean scores of feedback for both practice and value are at the high level. Unlike
feedback, questioning records a mean score at the high level for value but at the moderately high level for
practice. Other aspects of AFL are at the moderately high level (Varatharaj, Ghani, Abdullah, & Ismail, 2014).
On the other hand, a multilevel linear modeling technique was used to examine the effects of teachers'
assessment practices on students’ perceptions of the classroom assessment environment. Results showed that
students’ perceptions of the assessment environment were shaped by student characteristics such as
self-efficacy; class contextual features such as the aggregate perceived assessment environment and the average
self-efficacy level of the class; the teacher’s teaching experience and assessment practices; and the interaction of
student characteristics, class contextual features, and the teacher’s assessment practices (Alkharusi, 2010).
There has been no empirical comparative analysis of the classroom assessment skills and practices of
basic education teachers from elementary to senior high school. Given the paucity of such research, Cavanagh
et al. (2005) suggest that two strategies can instead be applied: (1) examine the assessment skills in terms of
forms/approaches, and (2) examine the actual assessment practices that teachers use. Integrating teachers’
perceptions will build a foundation and rationale for the assessment practices they use in their classrooms,
through which one can learn to what extent and in what ways assessment impacts students’ learning.
Thus, the purpose of this study is to examine the assessment skills and practices of teachers,
particularly in a private educational institution.
OBJECTIVES OF THE STUDY
This study focuses on the classroom assessment practices of teachers across levels and departments
towards outcomes-based assessment model. The specific objectives of the study are to analyze: (1) the skills
of teachers in the areas of classroom assessment; (2) the practices of teachers related to classroom assessment;
(3) the significant variations in the beliefs, skills, and practices in classroom assessment; and (4) the
implications towards paradigm shift to outcomes-based assessment.
METHODOLOGY
A descriptive-cross-sectional design was used to gather descriptive and comparative data for the
purpose of describing the characteristics of several groups of teachers relative to their classroom assessment
practices. Descriptive cross-sectional design is used to describe characteristics of a population or
phenomenon being studied at a given time. It does not answer questions about how/when/why the
characteristics occurred. Rather, it addresses the "what" question. The characteristics used to describe the
situation or population are usually some kind of categorical scheme, also known as descriptive categories.
Surveys can be a powerful and useful tool for collecting data on human characteristics, such as their beliefs,
attitudes, thoughts, and behavior (Dillman, Smyth, & Christian, 2009; Gay, Mills, & Airasian, 2009; Mertens,
2014), hence the survey design fit very well within the framework of this study. All teachers from various
levels were covered in this study. Thus, there was no sampling technique used. The elementary school
teachers, junior high school and the senior high teachers were included. The Classroom Assessment Practices
and Skills (CAPS) questionnaire was used as the data collection instrument. The questionnaire contains
closed-ended items. The initial set of items was adopted from Assessment Practices Inventory (Zhang &
Burry-Stock, 2013). This instrument was created and used in the United States of America to measure
teachers’ skills and use of assessment practices across teaching levels and content areas, and teachers’
self-perceived assessment skills as a function of teaching experience. The Zhang & Burry-Stock (2013) instrument
consists of several items measured on two rating scales, "use" and "skill." The "use" scale was meant to measure
teachers’ usage of assessment practices on a scale from 1 (never) to 5 (always). The "skill" scale was designed to
measure teachers’ self-perceived skill from 1 (not at all skilled) to 5 (very skilled). To check the content validity of the
instrument, the draft questionnaire was given to content experts in classroom assessment and teacher training.
They were asked to review the items for clarity and completeness in covering most, if not all, assessment and
grading practices used by teachers in classroom settings, as well as to establish face and content validity of the
instrument and items. Necessary revisions were made based upon their analyses. The draft questionnaire with
various items was pilot tested with a total sample of 10 teachers from primary school, 10 junior high school,
and 10 senior high school to assess the strengths and weaknesses of the questionnaire in terms of question
format, wording and order of items. It was also meant to help in the identification of question variation,
meaning, item difficulty, and participants’ interest and attention in responding to individual items, as well as
to establish relationships among items and item responses, and to check item response reliability (Gay, Mills,
& Airasian, 2009; Mertens, 2014). The reliability of teachers’ perceived skill in classroom assessment
was estimated using Cronbach’s Alpha, which was α = .95, indicating a high level of internal consistency. The
researchers sought permission and approval of the school president to allow the data gathering from teachers.
The researchers took into account the ethical issues such as the confidentiality of the data gathered and the
anonymity of the respondents in the administration of the questionnaires. The data gathered were analyzed
using Percentage, Mean, and Analysis of Variance (ANOVA) at the .05 level of significance.
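As a point of reference for how such a reliability coefficient is obtained, the sketch below computes Cronbach’s Alpha from a respondents-by-items rating matrix. This is not the authors’ code, and the small matrix is purely hypothetical; with the actual CAPS data, the same computation would be applied to all items and all responding teachers.

# A minimal sketch (not the authors' code) of how a reliability estimate such as
# the reported alpha = .95 could be computed; the response matrix is hypothetical.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                          # number of items
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 6 teachers rating 5 CAPS items on the 1-5 scale.
ratings = np.array([
    [4, 5, 4, 3, 4],
    [5, 5, 4, 4, 5],
    [3, 4, 3, 3, 3],
    [4, 4, 4, 4, 4],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
])
print(round(cronbach_alpha(ratings), 2))

For the published estimate, the same function would simply be applied to the full matrix of teachers’ 1-5 skill ratings.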
RESULTS AND DISCUSSION
1. Skills on Classroom Assessment
The results of the previous study revealed that primary school teachers, particularly those with a
certificate, need more skill training in assessment applications, statistical applications and criterion referenced
testing. Primary school teachers reported relatively higher discrepancies on use than perceived skill for
statistical applications and objective items, and secondary school teachers reported more skill than use of
statistical applications and objective items (Koloi-Keaikitse, 2012).
The classroom assessment skills of the teachers are presented in Table 1. As reflected, the elementary
school teachers (EST) are very skilled in using assessment results for decision-making about individual students,
assessing individual student participation in whole class lessons, assessing problem solving skills, using
assessment results when planning
teaching, communicating classroom assessment results to others, including student improvement in the
calculation of grades, using assessment results when evaluating class improvement, writing true or false
questions, and providing written feedback comments along with grades. Moreover, the EST are skilled in
writing multiple-choice questions, writing essay questions, and writing test items for higher cognitive levels.
However, the EST are moderately skilled in conducting item analysis for teacher-made tests, revising a test
based on item analysis, using portfolio assessment, using peer assessments for student assessments, using a
table of specifications to plan assessments, developing rubrics for grading students’ assignments, and
calculating variability (standard deviation) for teacher-made tests. Thus, the overall mean assessment is 3.88.
This means that the elementary school teachers are skilled in conducting classroom assessment of students’
learning.
The classroom assessment skills of the Junior High School Teachers (JHST) are presented in Table 1.
As reflected, the JHST are very skilled in writing essay questions, calculating central tendency for teacher-made tests, assessing individual student participation in whole class lessons, assessing problem solving
skills, using assessment results for decision-making about individual students, using assessment results when
planning teaching, communicating classroom assessment results to others, including student improvement in
the calculation of grades, using assessment results when evaluating class improvement, writing true or false
questions, and providing written feedback comments along with grades. Moreover, the JHST are skilled in
writing multiple-choice questions, writing test items for higher cognitive levels, conducting item analysis for
teacher-made tests, and revising a test based on item analysis. However, the JHST are moderately skilled in
using portfolio assessment, using peer assessments for student assessments, using a table of specifications to
plan assessments, developing rubrics for grading students’ assignments, and calculating variability (standard
deviation) for teacher-made tests. Thus, the overall mean assessment is 4.00. This means that the junior high
school teachers are skilled in conducting classroom assessment of students’ learning.
Table 1. Teachers’ Skills on Classroom Assessment

Classroom Assessment Skills                                                 EST           JHST          SHST
                                                                            X     DR*     X     DR*     X     DR*
1. Writing multiple-choice questions.                                       3.42  S       3.46  S       3.89  S
2. Writing essay questions.                                                 4.01  S       4.22  VS      4.47  VS
3. Writing test items for higher cognitive levels.                          3.99  S       3.91  S       4.10  S
4. Calculating central tendency for teacher-made tests.                     4.23  VS      4.37  VS      4.65  VS
5. Conducting item analysis for teacher-made tests.                         3.39  MS      4.00  S       4.05  S
6. Revising a test based on item analysis.                                  3.37  MS      3.39  S       3.46  S
7. Assessing individual student participation in whole class lessons.       4.26  VS      4.44  VS      4.65  VS
8. Assessment of problem solving skills.                                    4.35  VS      4.52  VS      4.57  VS
9. Using portfolio assessment.                                              3.20  MS      3.25  MS      4.01  S
10. Using assessment results for decision-making about students.            4.32  VS      4.47  VS      4.47  VS
11. Using assessment results when planning teaching.                        4.43  VS      4.51  VS      4.66  VS
12. Communicating classroom assessment results to others.                   4.33  VS      4.49  VS      4.51  VS
13. Including student improvement in the calculation of grades.             4.26  VS      4.34  VS      4.42  VS
14. Using peer assessments for student assessments.                         3.21  MS      3.32  MS      4.19  S
15. Using a table of specifications to plan assessments.                    3.37  MS      3.39  MS      3.98  S
16. Developing rubrics for grading students’ assignments.                   3.12  MS      3.16  MS      3.27  MS
17. Using assessment results when evaluating class improvement.             4.21  VS      4.43  VS      4.51  VS
18. Writing true or false questions.                                        4.53  VS      4.66  VS      4.69  VS
19. Providing written feedback comments along with grades.                  4.67  VS      4.71  VS      4.72  VS
20. Calculating variability (standard deviation) for teacher-made tests.    2.91  MS      3.03  MS      3.39  MS
Overall Mean                                                                3.88  S       4.00  S       4.23  VS

*Descriptive Rating (DR): 1=Not Skilled; 2=Little Skilled; 3=Moderately Skilled (MS); 4=Skilled (S); 5=Very Skilled (VS)
The same table further illustrates that the Senior High School Teachers (SHST) are very skilled in
writing essay questions, calculating central tendency for teacher-made tests,
assessing individual student participation in whole class lessons, assessment of problem solving skills, using
assessment results for decision-making about individual students, using assessment results when planning
teaching, communicating classroom assessment results to others, including student improvement in the
calculation of grades, using assessment results when evaluating class improvement, providing written
feedback comments along with grades, and writing true or false questions. Moreover, the SHST are skilled in
writing multiple-choice questions, writing test items for higher cognitive levels, conducting item analysis for
teacher-made tests, revising a test based on item analysis, using portfolio assessment, using peer assessments
for student assessments, and using a table of specifications to plan assessments. However, the SHST are just
moderately skilled in developing rubrics for grading students’ assignments, and calculating variability
(standard deviation) for teacher-made tests. Thus, the overall mean assessment is 4.23. This means that the
senior high school teachers are very skilled in conducting classroom assessment of students’ learning.
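Several of the skill items in Table 1 (calculating central tendency and variability, conducting item analysis) refer to routine computations on teacher-made tests. The sketch below illustrates what these computations typically involve, using a small hypothetical set of dichotomously scored items rather than any data from this study: the mean and standard deviation of total scores, and a simple difficulty and discrimination index for each item.

# Illustrative only: hypothetical item-level results (1 = correct, 0 = wrong)
# for 8 students on a 5-item teacher-made test. Not data from this study.
import numpy as np

scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
])

totals = scores.sum(axis=1)
print("Mean (central tendency):", totals.mean())
print("Standard deviation (variability):", round(totals.std(ddof=1), 2))

# Item analysis: difficulty = proportion correct; discrimination = correlation
# between the item and the total score of the remaining items.
for j in range(scores.shape[1]):
    difficulty = scores[:, j].mean()
    rest = totals - scores[:, j]
    discrimination = np.corrcoef(scores[:, j], rest)[0, 1]
    print(f"Item {j + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")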
In order to gather information about teaching and learning, teachers use a variety of assessment
instruments such as written tests, performance assessment, observation and portfolio assessment (Airasian,
2011; Stiggins & Bridgeford, 2014; Popham, 2008). Ndalichako (2014) observed that most primary school
teachers prefer to use tests and examinations to evaluate students’ learning. However, use of multiple
methods of assessment is recommended due to its potentiality in yielding valuable information regarding
students’ strengths and weaknesses in their learning (Gonzales & Fuggan, 2012). There are various methods
that can be used to assess students’ learning, such as portfolios, projects, and performance assessments; such
methods offer rich information about teaching and learning. A portfolio is generally defined as a collection of
student work with a common theme or purpose (Wolf, 2011; Arter & Spandel, 2012; Damian, 2014; Popham,
2008). The key characteristic of portfolio assessment is that it highlights student effort, development, and
achievement over a period of time and emphasizes application of knowledge rather than simply recall of
information (Price, Pierson, & Light, 2011). The main advantage of using portfolios is the engagement of
students in assessing their own progress and achievement and in strengthening collaboration with their
teachers through establishing ongoing learning goals (Popham, 2008). Portfolios encourage self-reflection and
awareness among students as they review their previous assignments and assess strengths and weaknesses of
both the processes as well as the final products (Sweet, 2013). The main challenges associated with the use of
portfolios are the reliability of scoring and the time required to produce the product and to develop a credible
scoring system.
The findings of the present study affirmed the investigation on teachers’ assessment practices across
teaching levels and content areas, as well as teachers’ self-perceived assessment skills as a function of teaching
experience and measurement training (Zhang & Burry-Stock, 2003). Thus, classroom assessment has received
increased attention from the measurement community in recent years. Since teachers are primarily
responsible for evaluating instruction and student learning, there is a widespread concern about the quality of
classroom assessment (Mullis & Martin, 2015).
More research has confirmed this general picture. Elementary teachers appear to be unaware of the
assessment work and do not trust or use their authentic assessment results (Florez & Sammons, 2017). Both
in questioning and written work, teachers' assessment focuses on low-level aims, mainly recall. There is little
focus on such outcomes as speculation and critical reflection (Ndalichako, 2013), and students focus on
getting through the tasks and resist attempts to engage in risky cognitive activities (Chih-Min, S. & Li-Yi, W.,
2016). Although teachers can predict the performance of their pupils, their own assessments do not tell them
what they need to know about their students' learning (Bombly, 2013).
2. Classroom Assessment Practices
Proper choice of classroom assessment method allows teachers to diagnose problems faced by
students in attaining desirable learning outcomes and in devising appropriate remedial measures to redress the
situation (Looney, Cumming, Kleij, & Harris, 2017). In a nutshell, classroom assessment can be viewed as a
totality of all the processes and procedures used to gather useful information about the progress in teaching
and learning, which facilitates regulating the pace and strategies of teaching. Frequency of assessment is
also considered important in facilitating retention of material learned (Panadero, Brown & Courtney, 2014).
They observed that the frequency of assessment has a mediating effect on student engagement in learning.
Research by Pryor and Crossouard (2010) showed that when the frequency of testing is increased, there is
increased student involvement in responding to questions and in discussing the subject matter. Other scholars
maintained that frequent testing helps students to monitor their learning and reinforces their engagement
with the course as a result of immediate feedback provided (Lysaght & O’Leary, 2013). It has also been
established that frequent testing has positive impact on future retention of material learned (Looney, 2014).
Since retention of material is one of the important components of mastery learning (Panadero, Brown &
Courtney, 2014), it can be inferred that frequent testing contributes to mastery learning.
Table 2 presents the classroom assessment practices of teachers relative to the frequency of use of the
various assessment tools. As shown, the Elementary School Teachers (EST) are always using multiple-choice
questions, essay questions, test items for higher cognitive levels, assessment of problem solving skills, using
assessment results for decision-making about individual students, writing true or false questions, and always
providing written feedback comments along with grades. The EST oftentimes assess individual student
participation in whole class lessons, use assessment results when planning teaching, communicate classroom
assessment results to others, include student improvement in the calculation of grades, use a table of
specifications to plan assessments, and use assessment results when evaluating class improvement. Moreover, the
EST sometimes calculate central tendency for teacher-made tests, conduct item analysis for teacher-made
tests, revise a test based on item analysis, use portfolio assessment, use peer assessments for student
assessments, and develop rubrics for grading students’ assignments. However, the teachers seldom calculate
variability (standard deviation) for teacher-made tests. The overall mean assessment is 3.64. This means that
the elementary school teachers oftentimes use these assessment tools for students’ learning.
Table 2. Classroom Assessment Practices

Practices on Classroom Assessment                                           EST           JHST          SHST
                                                                            X     DR*     X     DR*     X     DR*
1. Writing multiple-choice questions.                                       4.33  A       4.41  A       4.48  A
2. Writing essay questions.                                                 4.21  A       4.33  A       4.39  A
3. Writing test items for higher cognitive levels.                          4.37  A       4.42  A       4.46  A
4. Calculating central tendency for teacher-made tests.                     3.39  SO      3.41  O       3.44  O
5. Conducting item analysis for teacher-made tests.                         2.67  SO      2.91  SO      3.31  SO
6. Revising a test based on item analysis.                                  2.66  SO      2.70  SO      2.87  SO
7. Assessing individual student participation in whole class lessons.       4.10  O       4.13  O       4.19  O
8. Assessment of problem solving skills.                                    4.21  A       4.26  A       4.43  A
9. Using portfolio assessment.                                              3.23  SO      3.41  O       3.52  O
10. Using assessment results for decision-making about students.            4.22  A       4.36  A       4.40  A
11. Using assessment results when planning teaching.                        3.42  O       3.50  O       3.61  O
12. Communicating classroom assessment results to others.                   3.53  O       3.61  O       3.72  O
13. Including student improvement in the calculation of grades.             3.67  O       3.72  O       3.80  O
14. Using peer assessments for student assessments.                         3.12  SO      3.25  SO      3.29  SO
15. Using a table of specifications to plan assessments.                    3.54  O       3.62  O       3.65  O
16. Developing rubrics for grading students’ assignments.                   2.97  SO      3.01  SO      3.21  SO
17. Using assessment results when evaluating class improvement.             4.11  O       4.19  O       4.18  O
18. Writing true or false questions.                                        4.43  A       4.51  A       4.47  A
19. Providing written feedback comments along with grades.                  4.37  A       4.40  A       4.51  A
20. Calculating variability (standard deviation) for teacher-made tests.    2.29  SE      2.42  SE      2.62  SE
Overall Mean                                                                3.64  O       3.73  O       3.83  O

*Descriptive Rating (DR): 1=Never; 2=Seldom (SE); 3=Sometimes (SO); 4=Oftentimes (O); 5=Always (A)
The Junior High School Teachers (JHST) are always using multiple-choice questions, essay questions,
true or false questions, writing test items for higher cognitive levels, problem solving skills, assessment results
for decision-making about individual students, and always providing written feedback comments along with
grades. The feedback provided by teachers’ written responses to students’ homework was studied in an
experiment involving teachers and their students in schools (Wyatt-Smith & Klenowski, 2013). They trained the
teachers to give written feedback which concentrated on specific errors and on poor strategy, with
suggestions about how to improve, the whole being guided by a focus on deep rather than superficial learning
(Wyatt-Smith & Looney, 2016). Analysis of variance of the results showed a large effect of the feedback
treatment on final achievement. The treatment also reduced the initial superiority of boys over
girls and had a large positive effect on attitudes towards the subject (Xu & Brown, 2016). Moreover, the
JHST oftentimes calculate central tendency for teacher-made tests, assess individual student participation in
whole class lessons, use portfolio assessment, use assessment results when planning teaching, communicate
classroom assessment results to others, include student improvement in the calculation of grades, use a table
of specifications to plan assessments, and use assessment results when evaluating class improvement.
Furthermore, the teachers sometimes conduct item analysis for teacher-made tests, revise test items, use
peer assessments for student assessments, and develop rubrics for grading students’ assignments. The
portfolio movement is more closely associated with efforts to change the impact of high-stakes, often
standardized, testing of school learning (Young & Jackman, 2014). There is a vast literature associated with
the portfolio movement. Much of it is reviewed by DeLuca and Klinger (2010), who set out some of the issues in
education. A portfolio is a collection of a student's work, usually constructed by selection from a larger
corpus and often presented with a reflective piece written by the student to justify the selection (Cizek,
Schmid, & Germuth, 2013). Others (Alkharusi et al., 2012) emphasize that it is valuable for students to
understand the assessment criteria for themselves, while Brookhart (2011) points out that the practice of
helping students to reflect on their work has made teachers more reflective for themselves. However, there is
little by way of research evidence that goes beyond the reports of teachers, to establish the learning
advantages. Attention has focused rather on the reliability of teachers' scoring of portfolios because of the
motive to make them satisfy concerns for accountability, and so to serve summative purposes as well as the
formative (Koh, 2011). In this regard, the tension between the purposes plays out both in the selection and in
the scoring of tasks. Lyon (2011) describes scoring approaches based on a multi-dimensional approach, with
the criterion that each dimension reflects an aspect of learning which can be understood by students and
which reflects an important aspect of learning. However, the Junior High School Teachers seldom calculate
variability (standard deviation) for teacher-made tests. Thus, the overall mean assessment is 3.73. This means
that the JHST oftentimes use these assessment tools for students’ learning.
The Senior High School Teachers (SHST) are always using multiple-choice questions, essay
questions, true or false questions, writing test items for higher cognitive levels, problem solving skills,
assessment results for decision-making about individual students, and always providing written feedback
comments along with grades. Moreover, the SHST oftentimes calculate central tendency for teacher-made
tests, assess individual student participation in whole class lessons, use portfolio assessment, use assessment
results when planning teaching, communicate classroom assessment results to others, include student
improvement in the calculation of grades, use a table of specifications to plan assessments, and use assessment
results when evaluating class improvement. Furthermore, the teachers sometimes conduct item analysis for
teacher-made tests, revise test items, use peer assessments for student assessments, and develop rubrics
for grading students’ assignments. However, the Senior High School Teachers seldom calculate variability
(standard deviation) for teacher-made tests. Thus, the overall mean assessment is 3.83. This means that the
SHST oftentimes use these assessment tools for students’ learning.
More than one assessment method should be used to ensure comprehensive and consistent
indications of student performance (Alkharusi et al., 2012). This means to obtain a more complete picture or
profile of a student’s knowledge, skills, attitudes, or behaviors and to discern consistent patterns and trends,
more than one assessment method should be used. Student knowledge might be assessed using completion
items; process or reasoning skills might be assessed by observing performance on a relevant task; evaluation
skills might be assessed by reflecting upon the discussion with a student about what materials to include in a
portfolio. Self-assessment may help to clarify and add meaning to the assessment of a written communication,
science project, piece of art work, or an attitude. Use of more than one method will also help minimize
inconsistency brought about by different sources of measurement error.
Before an assessment method is used, a procedure for scoring should be prepared to guide the
process of judging the quality of a performance or product, the appropriateness of an attitude or behavior, or
the correctness of an answer (Zhang & Burry-Stock, 2013). It means further that to increase consistency and
validity, properly developed scoring procedures should be used. Different assessment methods require
different forms of scoring. Scoring selection items (true or false, multiple-choice, matching) requires the
identification of the correct or, in some instances, best answer. Guides for scoring essays might include
factors such as the major points to be included in the “best answer” or models or exemplars corresponding to
different levels of performance at different age levels and against which comparisons can be made
(Committee, 2011). Procedures for judging other performances or products might include specification of the
characteristics to be rated in performance terms and, to the extent possible, clear descriptions of the different
levels of performance or quality of a product (Hendrickson, 2011).
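As a purely illustrative sketch of such a scoring guide, a rubric can be represented as a mapping from the characteristics to be rated to short descriptions of each performance level, with a helper that totals a rater’s judgments. The dimensions, descriptors, and 1-4 scale below are hypothetical and are not drawn from the CAPS instrument or from any rubric used in this study.

# Hypothetical rubric for scoring an essay question; the dimensions, descriptors,
# and the 1-4 scale are illustrative only, not taken from this study.
from typing import Dict

RUBRIC: Dict[str, Dict[int, str]] = {
    "content":      {1: "off-topic", 2: "partial", 3: "adequate", 4: "thorough"},
    "organization": {1: "unclear", 2: "loose", 3: "logical", 4: "coherent"},
    "mechanics":    {1: "frequent errors", 2: "some errors", 3: "few errors", 4: "error-free"},
}

def score_response(ratings: Dict[str, int]) -> int:
    """Sum the level assigned to each rubric dimension, validating the inputs."""
    total = 0
    for dimension, level in ratings.items():
        if dimension not in RUBRIC or level not in RUBRIC[dimension]:
            raise ValueError(f"Invalid rating: {dimension}={level}")
        total += level
    return total

# Example: one student's essay rated on the three dimensions (maximum score 12).
print(score_response({"content": 3, "organization": 4, "mechanics": 2}))  # -> 9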
Comments formed as part of scoring should be based on the responses made by the students and
presented in a way that students can understand and use them (Johnson, 2014). This further illustrates that
comments, in oral and written form, are provided to encourage learning and to point out correctable errors or
inconsistencies in performance. In addition, comments can be used to clarify a result. Such feedback should
be based on evidence pertinent to the learning outcomes being assessed.
Procedures for summarizing and interpreting results for a reporting period should be guided by a
written policy (Koloi-Keaikitse, 2017). This means that summary comments and grades, when interpreted,
serve a variety of functions. They inform students of their progress. Parents, teachers, counselors, and
administrators use them to guide learning, determine promotion, identify students for special attention and to
help students develop future plans. Comments and grades also provide a basis for reporting to other schools
in the case of school transfer and, in the case of senior high school students, post-secondary institutions and
prospective employers. They are more likely to serve their many functions and those functions are less likely
to be confused if they are guided by a written rationale or policy sensitive to these different needs. This policy
should be developed by teachers, school administrators, and other jurisdictional personnel in consultation
with representatives of the audiences entitled to receive a report of summary comments and grades.
The finding of the present study raises the issue of formative feedback by closely examining teachers’
responses to students’ work. For example, if the teacher asks students to provide more details about a written
work, the practice is characterized as formative; however, a concern arises as to whether the student knows
what the instructor means when he or she asks for elaboration and more details (Wiliam & Thompson, 2008).
Formative feedback contradicts the traditional evaluative comments teachers frequently use, such as well done,
good, or great work. Chappuis and Stiggins (2013) argue that judgmental feedback not only holds less value
for improvement and student learning, but also discourages students from learning. Black and
Wiliam (2013) assert that formative feedback illuminates students’ strengths and weaknesses, provides some
suggestion for improvement, and avoids comparing one student with his or her peers. In addition, Black and
Wiliam (2013) point out the importance of oral feedback provided by the teacher, enabling students to reflect
on their learning. They write, “the dialogue between pupils and a teacher should be thoughtful, reflective,
focused to evoke and explore understanding… so that all pupils have an opportunity to think and to express
their ideas”. Given the definitions and characteristics of formative feedback, it is an important component
that occurs during instruction and enables the instructor to adjust teaching based on students’ suppositions.
Thus, reporting of students’ progress takes the form of written reports and conferences (Roemer,
1999). Conferences are face-to-face events involving teacher, student and parents in various combinations for
different purposes; students taking the lead in sharing their learning with their parents serves the purpose of
encouraging them to take responsibility for their learning (Johnson, 2014).
Lastly, those who favor traditional assessments argue that, just like other forms of
assessments, traditional tests are also focused on improving the cognitive side of instruction, i.e. the skills and
knowledge that students are expected to develop within a short period of time (Segers & Dochy, 2001). A
study conducted by Kleinert, Kennedy, and Kearns (1999) revealed that teachers expressed levels of
frustration in the use of alternative assessments such as portfolio assessment. Some major issues that teachers
have against the use of alternative assessments are that they require more time for students to complete, and
for teachers to supervise and assess. The teachers are also generally concerned about the competencies they
have in reliably grading these forms of assessment and feel that such assessments are more teacher-based than
student-based.
3. Analysis of Variance (ANOVA) in the Assessment Skills and Practices of Teachers across Levels
Table 3 reflects the ANOVA in the assessment skills across levels. The F-ratio is 2.27588, with a
p-value of .111964. The result is not significant at p < .05. This means there are no significant variations in the
assessment skills of teachers across levels.
Table 3. ANOVA in the Assessment Skills across Levels

Source                 SS        df    MS       F-ratio
Between-treatments     1.2899    2     0.645    2.27588
Within-treatments      16.1531   57    0.2834
Total                  17.443    59
For the ANOVA in the assessment practices across levels, as shown in Table 4, the F-ratio is
0.43015, with a p-value of .652505. The result is not significant at p < .05. This means there are no significant
variations in the assessment practices of teachers across levels.
Table 4. ANOVA in the Assessment Practices across Levels

Source                 SS        df    MS       F-ratio
Between-treatments     0.3446    2     0.1723   0.43015
Within-treatments      22.8334   57    0.4006
Total                  23.178    59

The f-ratio value is 0.43015. The p-value is .652505. The result is not significant at p < .05.
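The comparisons in Tables 3 and 4 can be reproduced with standard statistical tools. The sketch below runs a one-way ANOVA across the three groups using scipy; the per-teacher scores are simulated stand-ins, since only group means were published, and the group sizes of 20 are inferred from the reported degrees of freedom (2 and 57) under the assumption of a balanced design.

# Hedged illustration of the one-way ANOVA reported in Tables 3 and 4.
# The per-teacher overall scores below are simulated, not the study's data;
# only the group structure (three levels, df = 2 and 57) mirrors the tables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
est = rng.normal(3.88, 0.5, 20)   # elementary teachers (hypothetical)
jhst = rng.normal(4.00, 0.5, 20)  # junior high school teachers (hypothetical)
shst = rng.normal(4.23, 0.5, 20)  # senior high school teachers (hypothetical)

f_ratio, p_value = stats.f_oneway(est, jhst, shst)
print(f"F(2, 57) = {f_ratio:.5f}, p = {p_value:.6f}")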
4. t-Test on the Assessment Skills and Practices
For the test of difference in the assessment skills between EST and JHST, the t-value is -0.69362,
with a p-value of .246068. The result is not significant at p < .05. This means there is no significant
difference between the assessment skills of the elementary and the junior high school teachers. Similarly, for the test of
difference in the assessment skills between JHST and SHST, the t-value is -1.39832, with a p-value of
.085062. The result is not significant at p < .05. This means there is no significant difference between the
assessment skills of the junior and the senior high school teachers.
However, there is a significant difference between the assessment skills of the elementary and the
senior high school teachers because the t-value is -2.20128, with a p-value of .016931. The result
is significant at p < .05.
Table 5. Differences between the Assessment Skills and Practices of Teachers

Assessment Skills       X1      X2      t-value      p-value     Result
EST vs. JHST            3.88    4.00    -0.69362     .246068     Not significant at p < .05.
JHST vs. SHST           4.00    4.23    -1.39832     .085062     Not significant at p < .05.
EST vs. SHST            3.88    4.23    -2.20128     .016931     Significant at p < .05.

Assessment Practices    X1      X2      t-value      p-value     Result
EST vs. JHST            3.64    3.73    -0.41871     .338891     Not significant at p < .05.
JHST vs. SHST           3.73    3.83    -0.50674     .307631     Not significant at p < .05.
EST vs. SHST            3.64    3.83    -0.93536     .177755     Not significant at p < .05.
Relative to the assessment practices, for the test of difference between EST and JHST, the t-value is
-0.41871, with a p-value of .338891. The result is not significant at p < .05. This means there is no
significant difference between the assessment practices of the elementary and the junior high school teachers.
A similar result is obtained for the test of difference in the assessment practices between JHST and SHST,
where the t-value is -0.50674, with a p-value of .307631. The result is not significant at p < .05. This means there
is no significant difference between the assessment practices of the junior and the senior high school teachers.
Moreover, there is no significant difference between the assessment practices of the elementary and
the senior high school teachers because the t-value is -0.93536, with a p-value of .177755. The result
is not significant at p < .05.
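Similarly, the pairwise comparisons in Table 5 correspond to independent-samples t-tests. The sketch below illustrates one such comparison (EST vs. SHST) with simulated scores in place of the unpublished per-teacher data; note that scipy’s ttest_ind returns a two-sided p-value by default.

# Hedged sketch of an independent-samples t-test such as the EST vs. SHST
# comparison in Table 5. The score arrays are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
est = rng.normal(3.88, 0.5, 20)   # hypothetical elementary-teacher scores
shst = rng.normal(4.23, 0.5, 20)  # hypothetical senior-high-teacher scores

t_value, p_value = stats.ttest_ind(est, shst)  # two-sided p-value by default
print(f"t = {t_value:.5f}, p = {p_value:.6f}")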
5. Relationship between Assessment Skills and Practices
Among EST as shown in Table 6, the value of R is 0.713 and the coefficient of determination is
0.508. Thus, there is a moderate positive correlation between assessment skills and practices of elementary
school teachers, which means there is a tendency for a skilled teacher in the preparation of the assessment
tool to frequently use the same tool in the classroom (and vice versa). Among JHST, the value of R is 0.634
and the coefficient of determination is 0.402.
Thus, there is a moderate positive correlation between assessment skills and practices of junior high
school teachers, which means by normal standards, the association between the skills and the practices of
teachers would be considered statistically significant.
Table 6. Correlation between Assessment Skills and Practices

Teachers    Assessment Skills (X1)    Assessment Practices (X2)    R-value    R²-value (coefficient of determination)    Correlation
EST         3.88                      3.64                         0.713      0.508                                       Moderate (+)
JHST        4.00                      3.73                         0.634      0.402                                       Moderate (+)
SHST        4.23                      3.83                         0.655      0.429                                       Moderate (+)
Moreover, a similar moderate positive correlation is obtained among SHST, where the value of R is 0.655 and the coefficient of determination is 0.429.
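As a worked sketch of how R and the coefficient of determination in Table 6 relate, the following Python fragment (scipy.stats.pearsonr) computes a Pearson correlation on hypothetical paired ratings and squares it; note that 0.713² ≈ 0.508, matching the EST row.

    # Pearson correlation between paired skill and practice ratings.
    # The values below are hypothetical placeholders, not the study's data.
    from scipy import stats

    skills    = [3.6, 3.9, 4.1, 3.8, 4.2, 3.7, 4.0]   # X1: perceived skill ratings (placeholder)
    practices = [3.4, 3.8, 3.9, 3.6, 4.1, 3.5, 3.8]   # X2: reported practice ratings (placeholder)

    r, p = stats.pearsonr(skills, practices)
    r_squared = r ** 2   # coefficient of determination, as reported in Table 6
    print(f"R = {r:.3f}, R^2 = {r_squared:.3f}, p = {p:.6f}")

    # Check against the reported EST figures: R = 0.713 gives R^2 = 0.713**2 ≈ 0.508.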
Therefore, teachers adopt a variety of classroom assessment practices to evaluate student learning outcomes and spend much classroom time engaged in assessment-related activities. Teachers typically control classroom assessment environments by choosing how they assess their students, how frequently they assess them, and how they provide assessment feedback. For these reasons, it is imperative for them to be competent in the various classroom assessment tools (Koloi-Keaikitse, 2017).
The findings of the study affirm that current assessment practices focus primarily on exams, classroom discussions, classroom assignments, projects, and seminars. In addition, the study found that faculty members had only informal exposure to formative assessment (an alternative approach) and that, based on students' responses, alternative assessment as a formal approach was generally regarded as a new paradigm (Ahmad & Mussawy, 2009). Teachers depend on classroom assessment information to
improve their instructional methods, and as such, that information plays an important role in student
learning. It is apparent that teachers should be made competent in the collection, analysis and use of
assessment information. Zhang and Burry-Stock (2003) argued that to be able to communicate assessment
results more effectively, teachers must possess a clear understanding of the limitations and strengths of various assessment methods. They must also use proper terminology when they report assessment results to inform others about decisions regarding student learning. For this reason, teacher educators must find
ways in which they can improve their assessment training methods that can equip teachers with needed skills
for using and communicating assessment results. This finding equally brings major challenges to school
administrators who rely on teachers to provide them with information about student learning that they collect
from assessment results.
The items that assessed teachers' perceived skills in test construction were less helpful in providing essential information about teachers' perceived skills in classroom assessment practices. This finding is important because it suggests that if school managers want to identify the assessment areas in which teachers need training, they need not ask about perceived skills in test construction; rather, they may want to establish whether teachers are confident in using assessment information to improve their instructional methods, or whether they are in a position to communicate assessment results for better decision-making about student learning. These results generally imply the need for teachers or assessment professional development specialists to focus assessment training on the skills teachers need most and on those in which they perceive themselves to be less skilled.
Teachers are one of the key elements in any school and effective teaching is one of the key propellers
for school improvement. This study is concerned with how to define a teacher’s effectiveness and what
makes an effective teacher in relation to their skills and actual practices in assessing students’ learning. It
draws out implications for policymakers in education and for improving classroom practice.
Thus, the results of this study suggest that, although most teachers claimed that their training did
have a certain impact on their assessment practices, the changes occurred mostly while the teachers were
novice teachers. This finding also indicates that teachers should be required to attend workshops or courses from time to time to acquire updated assessment knowledge. Teacher training programs can equip teachers with
assessment knowledge by offering assessment courses to pre-service teachers and assessment workshops to
in-service teachers.
CONCLUSIONS AND RECOMMENDATIONS
The teachers across levels, from elementary to senior high school, are very skilled in calculating the central tendency of teacher-made tests, assessing students' class participation, calculating grades, using assessment results in planning, decision-making, communicating and providing feedback, problem solving, evaluating class improvement, and writing true-or-false tests. Moreover, they are skilled in writing multiple-choice tests measuring higher order thinking skills (HOTS). However, they are only moderately skilled in developing rubrics and in calculating measures of variability for teacher-made tests. The teachers across levels always prepare and employ multiple-choice, true-or-false, and essay questions, HOTS items, problem solving, assessment results for decision-making, and written feedback along with students' grades. Moreover, they oftentimes assess individual students' class participation, use assessment results in lesson planning and in evaluating class and student improvement, communicate assessment results, and use a table of specifications. However, they sometimes use item analysis, test-item revision, peer assessment, and rubrics in classroom assessment, and they seldom use the standard deviation of teacher-made tests. Moreover, there are no significant variations in the assessment skills and assessment practices of teachers across levels; there is no significant difference between the assessment skills of the elementary and the junior high school teachers, nor between those of the junior and the senior high school teachers. However, there is a significant difference between the assessment skills of the elementary and the senior high school teachers. Furthermore, no significant differences in the assessment practices of teachers across levels are identified. There is also a moderate positive correlation between the assessment skills and the assessment practices of elementary, junior, and senior high school teachers. Additionally, the results showed that items asking teachers about their perceived skills in test construction and in calculating statistical indices such as measures of central tendency were the least useful in understanding overall perceptions of assessment skills. Further examination of the results showed that the item asking teachers about their perceived skill in portfolio assessment proved to be the most difficult for them, an indication that most of the teachers were less skilled in portfolio assessment. This means that traditional forms of assessment, such as true-or-false items, multiple-choice items, and essay questions, are preferred by the teachers over alternative assessments such as portfolio assessment. Thus, the findings of the study reveal the perceived strengths and weaknesses of teachers relative to their classroom assessment skills and practices.
These findings have major implications for teacher educators and school managers. For teacher
educators these results highlight classroom assessment areas that they may need to focus on as they teach
assessment courses. Assessment entails a broad spectrum of activities that includes collection of information
for decision-making. The responsibility of teachers is to collect information through various assessment
methods that can be used to make informed decisions about students’ learning progress. The question is: are
teachers competent enough to use or apply assessment information for making students’ learning decisions?
From these results it was very clear that teachers are less confident in using assessment information to make
informed instructional and learning decisions.
The teachers should continue bringing about change and preparing students for future endeavors through authentic assessment. It is therefore imperative to understand their teaching practices, particularly how they assess and evaluate student learning outcomes. The gathered information should be used to highlight the level of teachers' competence in conducting classroom assessments toward planning and conducting teacher education and professional development. It is now essential for researchers, educators, and policy-makers in the Philippine context to have a clear understanding of the perceived skills teachers hold about certain classroom assessment practices, as this can open avenues for informing policy and practice to address the needs that teachers have as they wrestle with their day-to-day classroom assessment practices. Furthermore, research to establish why teachers felt least competent in the use of portfolio assessment is highly recommended.
REFERENCES
Airasian, P. W. (2011). Classroom assessment: Concepts and applications. New York: McGraw-Hill.
Ahmad, S., & Mussawy, J. (2009). Assessment Practices: Student’s and Teachers’ Perceptions of Classroom Assessment.
Master's Capstone Projects. 9. http://scholarworks.umass.edu/cie_capstones/9
Akinlosotu, P. J. (2016). Knowledge and Attitude of Secondary School Teachers Towards Continuous Assessment
Practices in Esan Central Senatorial District of Edo State. Journal of Education and Practice, 7(10), 71–79.
Arter, J. A. & Spandel, V. (2012). Using Portfolios of Student Work in Instruction and Assessment. Educational Measurement: Issues and Practice, 26–44.
Alkharusi, H. (2010). A multilevel linear model of teachers’ assessment practices and students’ perceptions of the
classroom assessment environment. Procedia - Social and Behavioral Sciences, 5(2), 5–11.
https://doi.org/10.1016/j.sbspro.2010.07.041
Alkharusi, H., Aldhafri, S., Alnabhani, H., & Alkalbani, M. (2012). Educational Assessment Attitudes, Competence, Knowledge, and Practices: An Exploratory Study of Muscat Teachers in the Sultanate of Oman, 1(2).
https://doi.org/10.5539/jel.v1n2p217
Awoniyi, F. C. (2016). The understanding of senior high school Mathematics teachers of school-based assessment and
its challenges in the Cape Coast Metropolis. British Journal of Education, 4(10), 22-38
Balinas, E. S. (2016). English Teachers' Classroom Assessment Practices. International Journal of Evaluation and Research in Education (IJERE), 5(1), 82–92. ISSN: 2252-8822
Bandele, S. O. (2013). Assessing Assessment Literacy of Science Teachers in Public Secondary Schools in Ekiti State.
Journal of Education and Practice, 4(28), 56–63. ISSN 2222-1735 (Paper) ISSN 2222-288X (Online)
Bennett, D. P. (2015). Elementary School Teacher Perceptions of Using Formative Strategies To Improve Instruction. Walden University Dissertations and Doctoral Studies. http://scholarworks.waldenu.edu/dissertations
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2013). Assessment for learning: Putting it into practice.
Buckingham, UK: Open University Press.
Bombly, S. M. (2013). Reading Assessment Practices of Elementary General Education Teachers: A Descriptive Study. Graduate Theses and Dissertations. http://scholarcommons.usf.edu/etd/4866
Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues
and Practice, 30, 3-12. http://dx.doi.org/10.1111/j.1745-3992.2010.00195.x
Calveric, S. (2010). Elementary teachers’ assessment beliefs and practices. Virginia Commonwealth University, 2010.
Downloaded from http://scholarscompass.vcu.edu/etd/2332
Campbell, C., & Evans, J. A. (2000). Investigation of preservice teachers’ classroom assessment practices during student
teaching. Journal of Educational Research
Carmen, G., & Jakobsson, A. (2015). Science Teachers' Assessment and Grading Practices in Swedish Upper Secondary Schools. Journal of Education and Training, 2(2), 1–19. https://doi.org/10.5296/jet.v2i2.7107
Cavanagh, R.F., Waldrip, B.G., Romanoski, J.T., Fisher, D.L., and Dorman, J.P. (2005). Measuring student perceptions of
classroom assessment. Paper presented at the 2005 Annual Meeting of the Australian Association for Research in
Education: Parramatta. Retrieved September 5, 2009 at
http://docs.google.com/gview?a=v&q=cache:jokRTlUCmK4J:https://www.aare.edu.au/05pap/cav05748.pdf
+Measuring+student+perceptions+of+classroom+assessment&hl=en&gl=us
Chappuis, S., & Stiggins, R. J. (2013). Classroom assessment for learning. Educational Leadership, 60, 40-43.
Chih-Min, S. & Li-Yi, W. (2016). Factors Affecting English Language Teachers' Classroom Assessment Practices: A case study at Singapore secondary schools. NIE NTU Singapore Research Brief, 13, 10. www.nie.edu.sg
Cizek, G. J., Germuth, A. A., & Schmid, L. A. (2013). A checklist for evaluating K-12 assessment programs. Kalamazoo:
The Evaluation Center, Western Michigan University. Available from
http://www.wmich.edu/evalctr/checklists/
Damian, V. B. (2014). Portfolio assessment in the classroom. Helping children at home and school II: Handout for
families and educators, S3 - 129 – S3 - 131.
Dandis, M. A. (2013). The assessment methods that are used in a secondary mathematics class. Journal for Educators,
Teachers and Trainers, 4(2), 133–143. ISSN 1989 – 9572. http://www.ugr.es/~jett/index.php
DeLuca, C., & Klinger, D. A. (2010). Assessment literacy development: Identifying gaps in teacher candidates’ learning.
Assessment in Education: Principles, Policy & Practice, 17, 419–438. doi:10.1080/0969594X.2010.516643
Dillman, D. A., Smyth, J.D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd
ed.). Hoboken, New Jersey: John Wiley & Sons.
Eliud, K. (2015). Classroom assessment practices by Mathematics teachers in secondary schools in Kenya. Kenya
Classroom Assessment Journal, 5(6), 35-41.
Flórez, M.T. & Sammons, P. (2013). Assessment for learning: effects and impact. Oxford University Department of
Education. Copyright CfBT Education Trust 2013.
Gay, L. R., Mills, G. E., & Airasian, P. (2009). Educational research competencies for analysis and applications.
Columbus, GA: Pearson Merrill Prentice Hall.
Gonzales, R. & Fuggan, F. G. (2012). Exploring the conceptual and psychometric properties of classroom assessment.
The International Journal of Educational and Psychological Assessment, 9(2), 45-60.
Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory: Measurement
methods for the social sciences series (Vol. 2). New York, NY: Sage
Hays, R. D., Morales, L. S., & Reise, S. P. (2000). Item response theory and health outcomes measurement in the 21st
century. Medical Care, 38, II28–II42.
Hendrickson, K. A. (2011). Assessment in Finland: A Scholarly Reflection on One Country's Use of Formative, Summative, and Evaluative Practices. Mid-Western Educational Researcher, 25(25), 33–43.
Johnson, S. (2012). Assessing Learning in the Primary Classroom. London: Routledge.
Johnson, S. (2014). On the reliability of high-stakes teacher assessment. Research Papers in Education, 18(1), 91–105.
Kankam, B., Bordoh, A., Eshun, I., Bassaw, T. K., & Korang, F. Y. (2014). An investigation into authentic assessment practices of social studies teachers in the senior high schools (SHSs) in Ghana. American Journal of Social Sciences, 2(6), 166–172. http://www.openscienceonline.com/journal/ajss
Kareem, A. O. (2011). Universal basic education in Ijebu-Division of Ogun State. The African Symposium: An online journal
of the African Educational Research Network, 11(2), 106–112.
Koh, K. H. (2011). Improving teachers’ assessment literacy through professional development. Teaching Education, 22,
255-276. http://dx.doi.org/10.1080/10476210.2011.593164
Koloi-Keaikitse, S. (2012). Classroom Assessment Practices: a Survey of Botswana Primary and Secondary School
Teachers. Cogent Education, 38(1), 1–14. https://doi.org/10.1080/2331186X.2017.1281203
Koloi-Keaikitse, S. (2017). Assessment of teacher perceived skill in classroom assessment practices using IRT Models.
Cogent Education, 38(1), 1–14. https://doi.org/10.1080/2331186X.2017.1281202
Kuze, M. W., & Shumba, A. (2011). An investigation into formative assessment practices of teachers in selected schools
in Fort Beaufort in South Africa, 29(2), 159–170.
Lee, S. (2010). Current practice of classroom speaking assessment in secondary schools in South Korea. The University
of Queensland (41938553).
Looney, A. (2014). Assessment and the reform of education systems. In C. Wyatt-Smith, V. Klenowski, & P. Colbert
(Eds.), Designing assessment for quality learning. Heidelberg: Springer.
Looney, A., Cumming, J., Van der Kleij, F., & Harris, K. (2017). Reconceptualising the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice, (January), 1–26.
https://doi.org/10.1080/0969594X.2016.1268090
Lyon, E. G. (2011). Beliefs, practices and reflection: Exploring a science teacher’s classroom assessment through the
assessment triangle model. Journal of Science Teacher Education, 22, 417-435.
http://dx.doi.org/10.1007/s10972-011-9241-4
Lysaght, Z., & O’Leary, M. (2013). An instrument to audit teachers’ use of assessment for learning. Irish Educational
Studies, 32, 217–232. doi:10.1080/03323315.2013.784636
McMillan, J. M. (2008). Assessment essentials for student-based education (2nd ed.). Thousand Oaks: Crown Press.
Mellenbergh, G. J. (1994). Generalized linear item response theory. Psychological Bulletin, 115, 300–307.
doi:10.1037/0033-2909.115.2.300
Mertens, D. M. (2014). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (3rd ed.). California: Sage Publications, Inc.
Mohammed, Y. (2015). Visual Art Teachers and Performance Assessment Methods in Nigerian Senior Secondary
Schools. Mgbakoigba: Journal of African Studies, 4, 1–18.
Mullis, I. V. S., & Martin, M. O. (2015). Assessment Frameworks. TIMSS and PIRLS International Study Center, Boston
College.
Nabie, M. J. (2013). Integrating Problem Solving and Investigations in Mathematics: Ghanaian Teachers' Assessment Practices. International Journal of Humanities and Social Science, 3(15), 46–56.
Ndalichako, J. L. (2014). Towards an understanding of assessment practices of primary school teachers in Tanzania.
Zimbabwe Journal of Education Research, 16(3), 168-177. Available on
http://dx.doi.org/10.4314/zjer.v16i3.26046.
Ndalichako, J.L. (2017). Examining Classroom Assessment Practices of Secondary School Teachers in Tanzania.
National Examinations Council of Tanzania
Nenty, H. J., Adedoyin, O. O., Odili, J. N., & Major, T. E. (2007). Primary teachers’ perceptions of classroom
assessment practices as means of providing quality primary and basic education by Botswana and Nigeria.
Educational Research and Review
Panadero, E., Brown, G., & Courtney, M. (2014). Teachers’ reasons for using self-assessment: A survey self-report of
Spanish teachers. Assessment in Education: Principles, Policy & Practice, 21, 365–383.
doi:10.1080/0969594X.2014.919247
Pierson, D. R. (2013). Elementary teachers’ assessment actions and elementary science education: formative assessment
enactment in elementary science. University of Iowa. http://ir.uiowa.edu/etd/5043
Popham, W. J. (2008). Classroom assessment: What teachers need to know (5th ed.). Boston: Allyn and Bacon.
Price, J. K, Pierson, E. & Light D. (2011). Using Classroom Assessment to Promote 21st Century Learning in Emerging
Market Countries. Paper presented at Global Learn Asia Pacific 2011, Melbourne Australia
Pryor, J., & Crossouard, B. (2010). Challenging formative assessment: Disciplinary spaces and identities. Assessment &
Evaluation in Higher Education, 35, 37–41. doi:10.1080/02602930903512891
Roemer, K. L. (1999). Student evaluation practices used by Montessori elementary teachers. The University of Memphis
Review, 22(19), 23-25
Russell, J. A., & Austin, J. R. (2010). Assessment practices of secondary Music teachers. Journal of Research in Music
Education, 58(1), 37 –54. https://doi.org/10.1177/0022429409360062
Saxe, G. B., & Franke, M. L. (1997). Teachers' Shifting Assessment Practices in the Context of Educational Reform in Mathematics. Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Los Angeles, CA 90095-1522.
Stiggins, R. J., & Bridgeford, N. J. (2014). The ecology of classroom assessment. Journal of Educational Measurement,
22(4), 271–286
Sweet, D. (2013). Student Portfolios: Classroom Uses. Education Consumer Guide No. 8. Retrieved from
http://www2.ed.gov/pubs/OR/ConsumerGuides/classesuse.html.
Tolley, L. M. (2016). Assessing Formative Assessment: An Examination of Secondary English/Language Arts Teachers' Practices. SURFACE Dissertations - ALL. Paper 457.
Varatharaj, R. K., Ghani, A., Abdullah, K., & Ismail, A. (2014). Assessment practices among Malaysian cluster school teachers. International Journal of Research in Management & Business Studies, 1(3), July–September 2014.
Warsen, G. D. (2013). Making Grades Matter: Connections Between Teacher Grading Practices and Attention to State Assessment. Dissertations. Paper 156. http://scholarworks.wmich.edu/dissertations
Wiliam, D., & Thompson, M. (2008). Integrating assessment with learning: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning. New York: Lawrence Erlbaum Associates.
Wolf, P. (2011). Academic improvement through regular assessment. Peabody Journal of Education, 82(4), 690-702.
Wiggins, G. P. (2008). Educative assessment: Designing assessments to inform and improve student performance. San Francisco:
Jossey-Bass Publishers.
Wyatt-Smith, C. M., & Looney, A. (2016). Professional standards and the assessment work of teachers. In D. Wyse, L.
Hayward, & J. Pandaya (Eds.), The Sage handbook of curriculum, pedagogy and assessment (pp. 805–820).
London: Sage
Wyatt-Smith, C., & Klenowski, V. (2013). Explicit, latent and meta-criteria: Types of criteria at play in professional
judgement practice. Assessment in Education: Principles, Policy & Practice, 20, 35–52.
doi:10.1080/0969594X.2012.725030
Xu, Y., & Brown, G. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher
Education, 58, 149–162. doi:10.1016/j.tate.2016.05.010
Young, J. E. J., & Jackman, M. G. (2014). Formative assessment in the Grenadian lower secondary school: Teachers’
perceptions, attitudes and practices. Assessment in Education: Principles, Policy & Practice, 21, 398–411.
doi:10.1080/0969594X.2014.919248
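Zhang, Z., & Burry-Stock, J. A. (2003). Classroom assessment practices and teachers' self-perceived assessment skills. Applied Measurement in Education, 16(4), 323–342.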