Teacher Education Journal of South Carolina

This journal is published jointly by the South Carolina Association of Colleges for
Teacher Education (SCACTE) and the South Carolina Association of Teacher Educators
(SCATE). Volume 15, No. 1 has been funded by both SCACTE and SCATE. Views
expressed are those of the authors and not necessarily those of any organization or
college/university.
Editor
Dr. Christopher Burkett
Columbia College
Peer Reviewers for the Teacher Education Journal of South Carolina
Linda Anast-May, Coastal Carolina University ∙ Todd Cherner, Coastal Carolina
University ∙ Susan Fernandez, Lander University ∙ Rebecca Harris, Bridgewater College ∙
Falicia Harvey, Columbia College ∙ Beth Lloyd, College of Charleston ∙ Sandra
McLendon, Southern Wesleyan University ∙ Michael Murphy, Lander University ∙ Holly
Pae, University of South Carolina - Upstate ∙ Marla Sanders, Columbia College ∙ Meg
Walworth, Anderson University
Teacher Education Journal of South Carolina, 2015 Edition
SCATE Officers and Committee Chairs
2015-16
Officers
President – Michael Murphy, Lander University
President-elect – Windy Schweder, USC – Aiken
Past President – Shelly Myers, Limestone College
Treasurer – Ashlee Horton, Lander University
Secretary – Julie Jones, Converse College
Executive Director – Judy Beck, USC Aiken
2-Year Institution Representative – open
4-year Private Institution Representative – Marla Sanders, Columbia College
4-year Public Institution Representative – Susan Fernandez, Lander University
Public School Representative – Teresa White, Spartanburg District 7
Committee Chairs
Awards – Stacy Burr, USC Upstate
Legislative – Kathy Maness, Palmetto State Teachers Association
Membership – Abigail Armstrong, Winthrop University &
Erik Lowry, Francis Marion University
Nominations and Elections Committee - Courtney Howard, College of Charleston
Program Committee - Julie Jones, Converse College
Publications Committee - Chris Burkett, Columbia College
Publicity – Nan Li, Claflin University
South Carolina Association of Colleges for Teacher Education
Board of Directors
Officers
President- Shelly Myers, Limestone College
Past President- Edward Jadallah, Coastal Carolina University
Treasurer- Lienne Medford, Clemson University
Secretary- Valerie Harrison, Claflin University
Legislative Liaison- Larry Daniel, The Citadel
Board Members- Jennie F. Rakestraw, Winthrop University; Don Stowe, USC-Upstate;
Mona Williams Thornton, Southern Wesleyan University; Frances C. Welch,
College of Charleston; Rachel Harvey, SC Department of Education; George Metz,
Charleston Southern University
TEACHER EDUCATION JOURNAL
of
SOUTH CAROLINA
SUBMISSION GUIDELINES
The TEJSC is a peer-reviewed, scholarly journal that, starting with the Fall 2009 Issue,
will include articles based on thematic strands. Articles sent to the TEJSC for publication
consideration must adhere to the following guidelines:
1). Articles must be applicable to a particular strand of the Journal.
2). Submissions should be no longer than 3000 words in length (limit does not include
figures, tables, and references).
3). The TEJSC follows the American Psychological Association standards for publishing
guidelines.
4). Articles should be submitted to the editor as an attachment in Word format. Each
article will be sent for peer review after the submission deadline has passed.
5). To be considered for the 2016 Issue, all articles must be submitted by June 30, 2016.
Strands for the Fall 2016 Issue:
Accreditation and Policy
Administration
Assessment
Current Issues in Education
Teaching and Learning across the Curriculum
Technology
The Teacher Education Journal of South Carolina is now accepting applications for
peer reviewers for the journal. Please email Dr. Chris Burkett for an application.
Send article submissions and requests to become a peer reviewer to:
Dr. Chris Burkett
chrisburkett@columbiasc.edu
Contents

Technology

Technology Beliefs and Self-Efficacy—a Pilot Study
Michael Cook and Ron Knorr

Embracing the Future of Online Learning
Timothy Rayle and Michael Langevin

Assessment

The Science edTPA: Emphasis on Inquiry
D. Michelle Rogers and Lisa Barron

A Rubric for Revitalizing Teacher Reflection
Todd Cherner, Marcie Ellerbe, and Elena Andrei

Evaluating Teacher Preparation in Assessment Lite
Tara L. R. Beziat and Bridget K. Coleman

Teaching and Learning

Influences of Prior Experiences and Current Teaching Contexts on New Teachers’ Use of Manipulatives for Math Instruction
Elizabeth Lee Johnson

Understanding English Language Learners: A Qualitative Study of Teacher Candidates Teaching in Another Country, Culture, and Context
Lee Vartanian

Transitioning to ACT Standards in South Carolina Schools: Insights and Techniques for School Districts
Howard V. Coleman, Jeremy Dickerson, Cindy Ambrose, Edi Cox, and Dottie Brown

Building Problem Solving Skills
Gary Bradley

Current Issues in Education

Attracting Early Childhood Teachers to South Carolina’s High Needs Rural Districts: Loan Repayment vs. Tuition Subsidy
Henry Tran, Alison M. Hogue, and Amanda M. Moon
Technology Beliefs and Self-Efficacy—a Pilot Study
Michael Cook
Millikin University
Ron Knorr
Mercer University
Abstract
This research was a pilot study to validate an instrument used to measure changes
in teachers’ technology beliefs and self-efficacy after a course in the use of educational
technology. The research literature revealed high interest but low self-efficacy in the use
of such technology among teachers. After pilot survey validation, the survey was
administered pre- and post-course to a cohort of participants in a concentrated
graduate/professional development course. Results of this study indicated no statistically
significant change in teacher technology beliefs, but the results showed statistically
significant improvement in teacher technology self-efficacy across multiple educational
technology platforms. Explanations for these results are discussed and potential
directions for further implementation are indicated.
Introduction
As digital tools become increasingly available to teachers for use in classroom
instruction, discourse involving educational technology has expanded. The role of the
teacher has changed from writing traditional lesson plans and providing instruction to
designing relevant units and facilitating instruction by incorporating tools, skills, and
practices. Harnessing the power of technology is one approach, but the use of technology
within instruction represents a break from the traditional ways of learning.
The potential of technology to improve “…learning remains largely untapped in
schools today” (Lemke, Coughlin & Reifsneider, 2009, p. 2), as teachers struggle to
integrate supportive technology into instruction. As many teachers are digital immigrants,
the challenge to connect traditional pedagogy and content with technology is not easy, as
teachers transform (Considine, Horton & Moorman, 2009; Lapp, Moss & Rowsell, 2012)
and struggle with perceptions of technology use. Lack of time, access, knowledge, and
support contributed to negative perceptions (Hutchison, 2012). Li (2007) found that teachers’
attitudes toward integrating technology into their instruction tend to be negative.
One major area of concern is professional development (PD). Cook, Sawyer, and
Lee (2013) noted “…it would be inaccurate to suggest that teachers do not receive
professional development related to technology and digital tools; the problem goes
deeper than that, is much more insidious than a simple absence of PD” (p. 5134). The
issue lies more with how PD is provided, often in a discontinuous and generalized
manner lacking adequate time or focus.
In an effort to assist the integration process, teachers require high quality focused
PD with time and support to rethink their traditional notions of education and technology
(Spires, Lee & Turner, 2008). Administrators and educational policy makers must urge
teachers to alter their instructional practices by providing time for exploration with
technology and learning opportunities to be immersed in tech tools with the ongoing
scaffolding and support that lies at the heart of effective PD.
Review of the Literature
Lenses of Focus
Proponents of 21st century skills and instructional technology suggest the need for
new ways of seeing, thinking about, and practicing classroom instruction, yet teachers
sometimes see these changes as risky and difficult. To this point, teacher use of
technology has been low level (e.g., word processing and online searching); far less
classroom technology use appears to be higher level (e.g., application and creation).
Much of the technology used does not match best practices: low-level use parallels
teacher-centered instruction, while high-level use is associated with student-centered
practice (Ertmer, 2005; Ertmer & Ottenbreit-Leftwich, 2010).
Prompting a major shift in teachers’ instruction is difficult. Ertmer and
Ottenbreit-Leftwich (2010) suggest that pedagogical beliefs and self-efficacy/confidence
are major factors in teacher change. “Although teachers might believe that technology helps them
accomplish personal and/or professional tasks more efficiently, they are reluctant to
incorporate the same tools into the classroom for a variety of reasons” (p. 258), including
technology confidence (Paraskeva, Bouta, & Papagianni, 2008).
In a study measuring preservice teachers’ attitudes, perceptions, and
self-confidence toward computers, Pelton and Pelton (1996) found positive attitudes but
low self-confidence in technology use. Examining teachers’ attitudes toward web-based
PD, Kao and Tsai (2009) found that Taiwanese teachers’ beliefs and confidence were
strong predictors of attitudes toward technology-driven PD. Albion (2001) determined
that although pre-service teachers believe computers are useful, they lack confidence in
their ability to use them.
Teachers’ Beliefs about Technology. Technology is unlikely to be used if it does
not fit within teachers’ pre-existing instructional beliefs (Ertmer, 2005). However, Albion
(1999) suggests that teachers’ beliefs regarding the use of technology for instructional
purposes are significant factors in determining how they utilize technology in their
classrooms. Haney, Lumpe, Czerniak, and Egan (2002) suggest teacher beliefs are good
predictors of classroom action. While there are potentially useful strategies for facilitating
change in teachers’ beliefs, teachers themselves must place value on technology as a tool
for instruction to challenge their beliefs.
Findings from multiple research studies support the importance of considering
teachers’ beliefs about technology and technology integration. Pelton and Pelton (1996)
and Kao and Tsai (2009) found generally positive attitudes and perceptions about
technology. While Tsitouridou and Vryzas (2003) found that teachers have moderately
positive beliefs toward computers and Internet technologies, despite limited access to
both, Wozney, Venkatesh, and Abrami (2006) determined that teacher belief of success
was a strong predictor of varying levels of computer use. Abbitt and Klett (2007) noted a
significant increase in teachers’ beliefs about technology throughout a course specifically
designed to address technology integration.
Teachers’ Confidence with Technology. Even when teachers hold positive
beliefs about classroom technology use, they often lack the necessary confidence to
successfully teach with technology (Albion, 1999). Accordingly, Ertmer and
Ottenbreit-Leftwich (2010) suggest, “time and effort should be devoted to increasing teachers’
confidence for using technology…to achieve student learning objectives” (p. 261).
Several factors contribute to individual teachers’ confidence in using technology: age,
experience in school, home access, and level of personal use (Albion, 2001). Pelton and
Pelton (1996) found that participants self-reported their confidence as lower than their
perceptions of importance. Hogarty et al. (2003) found a positive relationship between
the confidence teachers feel with computers and level of computer use.
Kao and Tsai (2009) noted that teachers having high confidence in using
technology positively affected all other predictors of computer use. Wang, Ertmer, and
Newby (2004) examined the effects of vicarious learning on teachers’ confidence for
technology integration and found vicarious learning experiences have significant positive
effects on confidence for technology integration. Conrad and Munro (2008) investigated
the relationship between confidence, attitudes, and anxiety toward computers and
technology and determined that confidence with computers was highly related to
participants’ positive attitudes toward computer technology.
Theoretical Framework
The theory of self-efficacy (Bandura, 1993) was used to frame this survey and its
validation. A person’s self-efficacy determines whether he or she holds positive or
negative beliefs and how well he or she can stay motivated, rather than become
debilitated, when faced with difficulty. Bandura and
Locke (2003) state, “people act on their beliefs about what they can do as well as their
beliefs about the likely outcomes of performance” (p. 92).
Chien Pan and Franklin (2011) note individuals with high self-efficacy “may
accomplish tasks far beyond their capabilities” and those with low self-efficacy “might
underestimate their ability to cope with difficult tasks” (p. 29). Zimmerman (2000)
discusses self-efficacy as a predictor of motivation and learning, as well as motivational
outcomes, such as choice, effort and persistence; similarly, self-efficacy beliefs have been
found to predict two measures of effort: one’s rate of performance and the amount of
energy one expends.
Self-efficacy is a set of self-beliefs, differentiated by and linked to distinct
functions (e.g., performance tasks). In developing measures of self-efficacy, Bandura
(2006) notes the importance of reducing ambiguity and increasing relevance by tailoring
the assessment of specific domains of functioning. A major component of self-efficacy is
performance accomplishment, which is made up of prior performance and mastery
experiences. These mastery experiences “provide striking testimony to one’s capacity to
effect personal changes” (p. 308).
Chien Pan and Franklin (2011) found teachers’ self-efficacy to be a significant
predictor of the integration of technology tools into instruction; likewise, classroom
environments can be determined in part by teachers’ instructional efficacy (Bandura,
1993). The ways in which teachers feel about their own instructional abilities affects the
ways in which they approach the educational process and the instructional practices they
use; that is, teachers with high self-efficacy are more likely to create mastery experiences
for students. As Zimmerman (2000) suggests, efficacious teachers are therefore more
likely to undertake instructional tasks that are more challenging.
Method
Participants and Setting
This study was conducted within a Study Abroad course at North Carolina State
University. Participants for this study were 21 teachers from five North Carolina counties
(Table 1). All participants were members of the same Integrating Writing and Technology
course. Participant responses were coded to ensure anonymity. Neither reward nor
coercion was used with participants. Funding for this course was provided by the
Borchardt Fund through the Triangle Community Foundation.
Table 1
Participant Demographic Information
Age Range (N): 18-29 (14); 30-39 (2); 40-49 (5); 50+ (0)
Teaching Experience (N): 1-2 years (5); 3-5 years (4); 6-10 years (10); 11-15 years (1); 16-20 years (1); 20+ years (0)
Gender (N): Male (4); Female (17)
Education Level (N): Bachelors (12); Masters (9)
Primary Content (N): Elementary, all (5); English/ELA (10); Social Studies (1); English Language Learners, ELL (2); Media Center Specialist (1); Music (1); Technical Writing, higher ed. (1)
N.C. County of Employment (N): Wake (12*); Durham (4); Chatham (2); Henderson (1); Alamance (1); NCSU (1)
Race/Ethnicity (N): White (17); African-American (3); Asian-American (1)
*2 at non-traditional schools (alternative and magnet)
Four face-to-face classes were held at N.C. State prior to departing for England.
The remainder of the face-to-face classes (3 full days and 2 half-days) were held on the
University of Surrey campus. The primary platform for online class meetings was
Weebly. Each student received a user ID and password to edit their personal page on the
class site; instructors uploaded assignments, suggestions, tutorial handouts and general
course information on the site, and participants’ individual pages served as their online
portfolios.
This hybrid course was designed to bring together experienced teachers across
content areas and grade levels to investigate the integration of writing and technology.
Throughout the course, participants used digital and other media collected from visits to
historical and literary sites and interactions with other cultures to prompt their writing
across progressions and modes and the development of multimodal portfolios.
Instructors utilized multiple teaching strategies to guide the course, providing
traditional instruction (e.g., assigned readings and related lectures) and writing activities
representing the various modes of writing. Ultimately, the goal of the course was to
design and implement it as intensive, ongoing PD; in the study abroad format, this put
the instructors and participants together in a unique way—living, studying and traveling
together for two weeks. A variety of scaffolding was provided throughout the two-week
study abroad portion of the course to assist participants in two major ways. First, we
provided intentional, scaffolded instruction by putting the teachers in the position of the
students. For each of the writing activities and assignments, instructors modeled each
step. Participants were also provided mini-lessons and modeling on each of the
technology tools required for the course. Additionally, instructors provided regularly
scheduled sessions for ongoing technology assistance. That is, the study abroad nature of
the course allowed participants constant, almost 24/7, access to the course instructors to
ask questions and to receive assistance and guidance at any time. Additionally, this
allowed participants to go beyond traditional one-shot PD and to engage in ongoing
follow-up conversations throughout the course.
Traditional writing assignments and technology tools were paired together to form
19 course assignments. The objectives of the course included the use of digital tools for
writing in multiple genres across several digital platforms and technologies. As a
culminating artifact for each assignment, participants merged their writing and media
together to form coherent multimodal representations and were required to implement a
technology component into their own classrooms incorporating one of the new
technologies they learned in the course.
Construction of Questions and Validity
We determined the content we were interested in measuring in the scope of the
instrument. We reviewed the relevant research on teacher anxiety toward technology and
computers. With each draft of the instrument, we utilized a multi-stage review process
in which each researcher conducted an individual review and the team then met as a group
to review, making multiple revisions prior to the pre-test administration.
We used a Likert scale instrument to measure attitudes and opinions. When
writing and reviewing items, we kept four concepts in mind. First, we reviewed for
wording within items for vagueness or ambiguity. Second, we made sure all statements
were written positively, revising any item written negatively. Third, we reviewed all
items to remove any that were double-barreled. Fourth, we made certain participants
would have all the necessary information to respond to each item on the survey.
Face validity was assured as the survey questions aligned with the research
questions. Content validity was established by a review of the literature and close
examination of the survey questions by content and method experts on the research team.
Construct validity was established as the instrument was designed with support and
feedback from a psychometric expert to assure it elicits relevant information. To ensure
internal validity, we asked (1) Can these items really explain the outcome we want to
research? and (2) Do they help us look at relationships between independent and
dependent variables?
The instrument used for this study was developed prior to the beginning of the
course. The instrument (Appendix) was administered as a pre-test in April prior to the
course. Based on the pre-test data, an item analysis was conducted, a coefficient alpha
was obtained, and the survey was revised and given on the final day of class as the final
assignment before returning home.
Description of Questions
All items were constructed from examining the empirical research as well as our
knowledge of relevant indicators. Items were originally constructed to measure two facets
of teachers’ approaches to technology integration in their classrooms: beliefs and self-efficacy. Questions 1-18 were written to reflect beliefs. Participants were asked to
respond to each statement using the 5-point (1=strongly disagree to 5=strongly agree)
scale provided (e.g., I stay current with advances in educational technology). Items 19-53
were written to reflect self-efficacy, and participants were asked to indicate their level of
confidence in given situations. Students were again asked to read each statement and
respond via the 5-point scale (e.g., The effective use of technology in school can help
students learn).
Item Analysis
After the pre-test, we performed an item analysis for the entire survey and
calculated an initial coefficient alpha (α = 0.9163). Upon further examination, however,
we realized data were missing from three participants who had not provided answers to all
survey items. We used Person Mean Replacement (Crocker & Algina, 1986; Downey
& King, 1998) to substitute for the missing data. After utilizing mean replacement, there
was no significant change in the coefficient alpha (α = 0.9124).
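The two computations described above, Person Mean Replacement followed by coefficient alpha, can be sketched as follows. This is an illustration only, not the authors' analysis code, and the data are invented:

```python
def person_mean_replace(responses):
    """Replace a respondent's missing items (None) with the mean of the
    items that respondent did answer (Person Mean Replacement)."""
    filled = []
    for row in responses:
        answered = [x for x in row if x is not None]
        mean = sum(answered) / len(answered)
        filled.append([mean if x is None else x for x in row])
    return filled

def cronbach_alpha(responses):
    """Coefficient alpha: (k/(k-1)) * (1 - sum of item variances / variance
    of respondents' total scores), using sample variance (ddof = 1)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(responses[0])
    item_vars = [var([row[i] for row in responses]) for i in range(k)]
    total_var = var([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# invented data: 4 respondents x 3 Likert items, one missing answer (None)
data = [[4, 5, 4], [2, None, 3], [5, 4, 5], [3, 3, 2]]
alpha = cronbach_alpha(person_mean_replace(data))
```

Because the replacement uses each respondent's own mean, it preserves that respondent's overall response level rather than pulling the item toward the group mean.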
Upon review, there were potentially two separate constructs (technology beliefs
and technology self-efficacy) being measured by the survey. We split the survey into two
sub-surveys—technology beliefs (sub-survey 1) and technology self-efficacy (sub-survey
2)—and performed an item analysis for each. Researchers (Cronbach, 1951; Nunnally,
1967; Cortina, 1993) note that an alpha cutoff of 0.7-0.75 is acceptable for testing
instrument reliability. From the literature on reliability, we used alpha levels as follows:
0.7-0.89 = mediocre/good; > 0.9 = excellent. The coefficient alpha for sub-survey 1 was
mediocre (α = 0.8129) and required further exploration. The coefficient alpha for
sub-survey 2 was very high (α = 0.9791), and no additional action was taken.
The next stage of the item analysis involved reverse scoring and item removal in
sub-survey 1. Two items were negatively worded and were reverse scored; the new
coefficient alpha increased slightly (α = 0.901). Two items were found to be ambiguous
and vague and were removed, producing a further increase in the coefficient alpha (α = 0.9373).
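Reverse scoring a negatively worded Likert item is simple arithmetic; a minimal sketch (illustrative only, not the instrument itself):

```python
def reverse_score(item_scores, scale_min=1, scale_max=5):
    """Reverse a negatively worded Likert item: new = (min + max) - old.
    On the 5-point scale used here, 1 <-> 5, 2 <-> 4, and 3 is unchanged."""
    return [(scale_min + scale_max) - s for s in item_scores]

# five participants' invented answers to one negatively worded item
reversed_scores = reverse_score([1, 2, 3, 4, 5])
```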
Reliability data was calculated after the pre-test (Table 2) and post-test (Table 3).
Table 2
Reliability Data from Pre-Test

Factor | Alpha | No. of Items
Technology Beliefs | 0.937 | 20
Technology Self-Efficacy | 0.979 | 35
Table 3
Reliability Data from Post-Test

Factor | Alpha | No. of Items
Technology Beliefs | 0.941 | 18*
Technology Self-Efficacy | 0.979 | 35
*2 items removed after pre-test and item analysis
Results
Beyond examining the survey instrument for validity purposes, we analyzed the
participant data itself. As noted previously, the survey was originally designed with two
components in mind: beliefs and self-efficacy. After item analysis and post hoc analysis,
we realized the survey was in fact comprised of two related constructs: technology beliefs
and technology self-efficacy. Because this was a pilot study with only 21 participants,
factor analysis was deferred until a follow-up study with a statistically robust number of
participants can be performed.
We ran a series of dependent t-tests to examine potential changes in participants’
responses to the survey from their pre- and post-scores (Table 4). We found no significant
difference (p = .164) in participants’ technology beliefs. This finding may be due to
multiple factors. First, the course contained only two weeks of intensive PD. Second,
participants’ beliefs were already relatively high on the pre-survey (m = 3.91), leaving
little room for significant change.
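The dependent (paired) t-test used here compares each participant's pre- and post-scores on the same construct. A minimal sketch with invented scores; a p-value would then come from the t distribution with n - 1 degrees of freedom (e.g., via scipy.stats.ttest_rel):

```python
import math

def paired_t(pre, post):
    """Dependent t-test on per-participant differences d = post - pre:
    t = mean(d) / (sd(d) / sqrt(n)), with df = n - 1."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# invented pre/post scores for four participants
t_stat, df = paired_t([2, 3, 2, 4], [3, 4, 4, 4])
```

Because the test operates on within-participant differences, it controls for each teacher's baseline level, which matters when, as here, pre-survey means already vary widely across tools.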
Table 4
Results of t-tests comparing pre-scores and post-scores of participant responses by
technology survey construct.

Construct | No. of participants | No. of items in construct | Pre mean | Post mean | Sig. (2-tailed)
Technology Beliefs | 21 | 18 | 3.9121 | 3.9947 | p = .164
Technology Self-Efficacy | 21 | 35 | 2.6072 | 3.6296 | p < .001*
*Statistically significant at p < .05
A significant change in means was found in the second survey construct,
technology self-efficacy (p<.001). This finding suggests that the scaffolded instruction,
the intensive PD-like design, and the easy access to instructors benefited participants’
self-efficacy toward technology integration. The self-efficacy section of the survey
included 25 technology tools embedded in the course. We were also interested in
examining how and where teachers’ confidence toward specific tools increased
significantly.
To analyze the pre- and post-data, we ran an additional series of paired t-tests for
each of the 25 tools (Table 5). Participants demonstrated significant increases in 19 of the
25 technology competencies. Extensive instruction was given for each of the tools,
resulting in significant changes in teachers’ self-efficacy and suggesting the potential
benefits of intensive PD for teachers regarding technology integration. Six tools showed no
significant changes. Of those six, three (Google Docs, Google Maps, and YouTube)
had relatively high means on the pre-test, suggesting that teachers were already quite
familiar with these tools. The remaining three (Voicethread, iMovie/Movie Maker, and
Audacity) showed growth that was not statistically significant.
Table 5
Results of paired sample t-tests comparing changes in self-efficacy by individual
technology tool (n = 21 for each tool).

Tool | Pre mean (SD) | Post mean (SD) | t-test
Moodle | 2.71 (1.22) | 3.39 (0.855) | t(20) = -2.70, p = .010*
Weebly | 2.47 (1.26) | 3.89 (0.80) | t(20) = -4.46, p = .000*
Issuu | 1.68 (0.94) | 3.36 (0.83) | t(20) = -6.64, p = .000*
Google Docs | 3.42 (1.21) | 3.84 (0.76) | t(20) = -1.36, p = .190
Google Maps | 3.47 (1.07) | 3.77 (0.71) | t(20) = -1.14, p = .267
Prezi | 3.00 (1.29) | 3.77 (0.71) | t(20) = -2.81, p = .012*
Pixlr | 1.68 (0.88) | 4.03 (0.78) | t(20) = -8.85, p = .000*
Pinterest | 3.47 (1.46) | 4.24 (0.81) | t(20) = -2.26, p = .036*
Letters.com | 1.60 (0.92) | 4.08 (0.81) | t(20) = -8.73, p = .000*
TimeToast | 1.60 (0.92) | 3.17 (1.06) | t(20) = -5.22, p = .000*
Go Animate | 1.63 (0.89) | 3.26 (0.87) | t(20) = -5.14, p = .000*
Xtranormal | 1.92 (1.21) | 2.78 (1.13) | t(20) = -2.15, p = .045*
Glogster | 2.52 (1.26) | 3.42 (0.76) | t(20) = -2.84, p = .011*
Comic Strip Editors (Toon Doo and Bitstrips) | 2.08 (1.08) | 3.21 (0.91) | t(20) = -3.68, p = .002*
Voicethread | 2.57 (1.21) | 2.78 (1.08) | t(20) = -0.54, p = .593
Flickr | 2.41 (1.01) | 3.89 (0.87) | t(20) = -5.50, p = .000*
YouTube | 3.47 (0.96) | 3.78 (0.91) | t(20) = -1.39, p = .181
iMovie/Movie Maker | 2.58 (1.11) | 3.21 (1.03) | t(20) = -1.79, p = .090
Audacity | 2.41 (1.16) | 2.98 (1.09) | t(20) = -1.66, p = .114
Skype | 3.15 (1.06) | 3.78 (0.91) | t(20) = -2.12, p = .048*
Digital Video Cameras | 3.31 (0.94) | 3.94 (0.84) | t(20) = -2.20, p = .041*
Digital Still Cameras | 2.91 (1.08) | 4.01 (0.93) | t(20) = -3.91, p = .001*
Transferring Digital Images | 3.42 (1.16) | 4.36 (0.68) | t(20) = -3.25, p = .004*
Using Embed Codes | 2.68 (1.15) | 3.78 (0.78) | t(20) = -3.02, p = .007*
Connecting to Wifi | 3.42 (0.90) | 4.15 (0.68) | t(20) = -3.44, p = .003*
*Statistically significant at p < .05
Conclusion
Traditionally, the teacher is the “sage on the stage,” with information
transmitted from the teacher to the students. This course put teachers in the role and mindset of
students, requiring them to struggle and solve problems, while course instructors
facilitated and offered assistance. The course was designed this way to help teachers both
use technology in their teaching and provide their students with authentic learning
experiences by shifting their own role to that of facilitator.
From the data analysis, several findings emerged. First, no significant difference
was found between students’ pre- and post-survey belief responses. This finding may be
due to a short treatment time and high pre-survey mean. Even when teachers’ beliefs
regarding technology integration are already relatively high, beliefs are perhaps more
firmly rooted and more difficult to change.
Second, a significant change was found between teachers’ pre- and post-scores on
the self-efficacy construct. Upon further examination, significant changes were also
present in teachers’ self-efficacy toward 19 of the 25 technology tools utilized in the
course. As opposed to beliefs, self-efficacy is a relative term and is only meaningful to
each individual situation. Self-efficacy is perhaps more easily shaped than a person’s
belief system.
Several potential benefits exist from this model of intensive PD for teachers
regarding technology. Our study suggests three implications for technology training for
teachers. First, we attribute our results to the scaffolding provided participants. Our
instruction was intentional. It may benefit teacher educators and administrators to utilize
the scaffolded instruction we expect teachers to use with their students. Second, the
participants in our course had constant access to the instructors. This intensive
component allowed us to offer the guidance and assistance that traditional PD does not
Teacher Education Journal of South Carolina, 2015 Edition
16
provide, and our participants benefited from these interactions. While logistical issues
can be cumbersome, intensive PD can assist teachers in using technology. Third, the data
from this study suggest that further examination of this instrument can prove beneficial to
researchers investigating the implementation of technology at the classroom level. The
instrument provides the means to collect and analyze data assessing teachers’ beliefs and
self-efficacy toward implementing technology into their own classrooms.
References
Abbitt, J. T. & Klett, M. D. (2007). Identifying influences on attitudes and self-efficacy
beliefs towards technology integration among pre-service educators. Electronic
Journal for the Integration of Technology in Education, 6, 28-42.
Albion, P.R. (1999). Self-efficacy beliefs as an indicator of teachers’ preparedness for
teaching with technology. In Proceedings of the 10th International Conference of
the Society for Information Technology and Teacher Education. Chesapeake, VA:
AACE.
Albion, P. R. (2001). Some factors in the development of self-efficacy beliefs for
computer use among teacher education students. Journal of Technology and
Teacher Education, 9(3), 321-347.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning.
Educational Psychologist, 28(2), 117-148.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan
(Eds.), Self-efficacy beliefs of adolescents (Vol. 5, pp. 307-337). Greenwich, CT:
Information Age Publishing.
Bandura, A. & Locke, E.A. (2003). Negative self-efficacy and goal effects revisited.
Journal of Applied Psychology, 88(1), 87-99.
Chien Pan, S. & Franklin, T. (2011). In-service teachers’ self-efficacy, professional
development, and Web 2.0 tools for integration. New Horizons in Education,
59(3), 28-40.
Conrad, A. M. & Munro, D. (2008). Relationships between computer self-efficacy,
technology, attitudes and anxiety: Development of the computer technology use
scale (CTUS). Journal of Educational Computing Research, 39(1), 51-73.
Considine, D., Horton, J., & Moorman, G. (2009). Teaching and reading the millennial
generation through media literacy. Journal of Adolescent & Adult Literacy, 52(6),
471-481.
Cook, M.P., Sawyer, D., & Lee, S. (2013). Integrating technology into classroom
instruction: A teacher model made easy. In R. McBride & M. Searson (Eds.),
Proceedings of Society for Information Technology & Teacher Education
International Conference 2013 (pp. 5133-5138). Chesapeake, VA: AACE.
Cortina, J.M. (1993). What is coefficient alpha? An examination of theory and
applications. Journal of Applied Psychology, 78(1), 98-104.
Crocker, L. & Algina, J. (1986). Introduction to classical & modern test theory. Belmont,
CA: Thomson Learning.
Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests.
Psychometrika, 16(3), 297-334.
Downey, R.G. & King, C.V. (1998). Missing data in Likert ratings: A comparison of
replacement methods. Journal of General Psychology, 125(2), 175-191.
Ertmer, P.A. & Ottenbreit-Leftwich, A.T. (2010). Teacher technology change: How
knowledge, confidence, beliefs, and culture intersect. Journal of Research on
Technology in Education, 42(3), 255-284.
Hackbarth, G., Grover, V., & Yi, M. Y. (2003). Computer playfulness and anxiety:
Positive and negative mediators of the system experience effect on perceived ease
of use. Information & Management, 40, 221-232.
Hogarty, K. Y., Lang, T. R., & Kromrey, J. D. (2003). Another look at technology use in
classrooms: The development and validation of an instrument to measure
teachers’ perceptions. Educational and Psychological Measurement, 63(1), 139-162.
Hutchison, A. (2012). Literacy teachers’ perceptions of professional development that
increases integration of technology into literacy instruction. Technology,
Pedagogy and Education, 21(1), 37-56.
Joiner, R., Brosnan, M., Duffield, J., Gavin, J., & Maras, P. (2007). The relationship
between Internet identification, Internet anxiety, and Internet use. Computers in
Human Behavior, 23, 1408-1420.
Kao, C. & Tsai, C. (2009). Teachers’ attitudes toward web-based professional
development, with relation to Internet self-efficacy and beliefs about web-based
learning. Computers & Education, 53, 66-73.
Kotrlik, J. W. & Redmann, D. H. (2009). Technology adoption for use in instruction by
secondary technology education teachers. Journal of Technology Education,
21(1), 44-59.
Lapp, D., Moss, B., & Rowsell, J. (2012). Envisioning new literacies through a lens of
teaching and learning. The Reading Teacher, 65(6), 367-377.
Lemke, C., Coughlin, E., & Reifsneider, D. (2009). Technology in schools: What the
research says: An update. Culver City, CA: Metiri Group.
Li, Q. (2007). Student and teacher views about technology: A tale of two cities? Journal
of Research on Technology in Education, 39(4), 377-397.
Nunnally, J.C. (1967). Psychometric theory. New York, NY: McGraw-Hill.
O’Dwyer, L., Russell, M., & Bebell, D. (2003). Elementary teachers’ use of technology:
Characteristics of teachers, schools, and districts associated with technology use.
Technology and Assessment Study Collaborative: Boston College. Retrieved
from http://www.intasc.org
Paraskeva, F., Bouta, H., & Papagianna, A. (2008). Individual characteristics and
computer self-efficacy in secondary education teachers to integrate technology in
educational practice. Computers & Education, 50, 1084-1091.
Pelton, L. & Pelton, T. W. (1996). Building attitudes: How a technology course affects
preservice teachers’ attitudes about technology. In B. Robin et al. (Eds.),
Proceedings of Society for Information Technology & Teacher Education
International Conference 1996 (pp. 199-204). Chesapeake, VA: AACE.
Peterson, R.A. (1994). A meta-analysis of Cronbach’s coefficient alpha. Journal of
Consumer Research, 21(2), 381-391.
Popovich, P. M., Hyde, K. R., Zakrajsek, T., & Blumer, C. (1987). The development of
the attitudes toward computer usage scale. Educational and Psychological
Measurement, 47(1), 261-269.
Rahimi, M. & Yadollahi, S. (2011). Computer anxiety and ICT integration in English
classes among Iranian EFL teachers. Procedia Computer Science, 3, 203-209.
Rosen, L. R. & Weil, M. M. (1995). Computer anxiety: A cross-cultural comparison of
university students in ten countries. Computers in Human Behavior, 11(1), 45-64.
Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4),
350-353.
Scott, C. R. & Rockwell, S. C. (1997). The effect of communication, writing, and
technology apprehension on likelihood to use new communication technologies.
Communication Education, 46, 44-62.
Spires, H.A., Lee, J.K., & Turner, K.A. (2008). Having our say: Middle grade students'
perceptions on school, technologies, and academic engagement. Journal of
Research on Technology in Education, 40(4), 497-515.
Tsitouridou, M. & Vryzas, K. (2003). Computer and information technology: The case of
Greece. Information Technology in Childhood Education Annual, 2003(1), 187-207.
Wang, L., Ertmer, P. A., & Newby, T. J. (2004). Increasing preservice teachers’ self-efficacy
beliefs for technology integration. Journal of Research on Technology in
Education, 36(3), 231-250.
Wang, Y. S. (2007). Development and validation of a mobile computer anxiety scale.
British Journal of Educational Technology, 38(6), 990-1009.
Wheeless, L. R., Eddleman-Spears, L., Magness, L. D., & Preiss, R. W. (2005).
Informational reception apprehension and information from technology aversion:
Development and test of a new construct. Communication Quarterly, 53(2), 143-158.
Wozney, L., Venkatesh, V., & Abrami, P. C. (2006). Implementing computer
technologies: Teachers’ perceptions and practices. Journal of Technology and
Teacher Education, 14(1), 173-207.
Zhang, Y. (2007). Development and validation of an internet use attitude scale.
Computers & Education, 49, 243-253.
Zimmerman, B.J. (2000). Self-efficacy: An essential motive to learn. Contemporary
Educational Psychology, 25, 82-91.
About the Authors:
Michael Cook, Ph.D., is an Assistant Professor of English Education at Millikin
University in Decatur, Illinois, where he teaches and coordinates courses for the
Secondary English Education major. His research interests include multimodal literacy
and integrating technology into instruction.
Ron Knorr, Ph.D., is Chair of Teacher Education and Associate Professor of Education at
Mercer University, McDonough, Georgia, where he teaches research, middle grades
education, and foundations courses for graduate and undergraduate programs. His
research interests include teacher education and the History of Education in the American
South.
Appendix
Technology Beliefs and Self-Efficacy Scale
Demographic Data:
Name: (to be replaced with code and pseudonym)
Gender: (male/female)
Race: (options provided)
Years of Teaching Experience: (0, 1-2, 3-5, 6-10, 11-15, 16-20, 21 or more)
Grade Level Taught: (Elementary; Middle; Secondary; Higher Ed)
Primary Content Area Taught (open-ended question)
Describe the Technology Available in Your Classroom: (e.g., teacher laptop, computer
for every student, access to laptop carts only, etc.)
Number of Trips Taken Outside the U.S.: (0, 1-2, 3-5, 6-8, 9-11, 12 or more)
This scale comprises two main sections (one, technology beliefs; two, technology self-efficacy; self-efficacy includes three sub-categories).
Part 1
Directions for Part One: Please respond to each statement using a 5-point Likert-type
scale. Items are rated on the scale as follows: 1=Strongly Disagree, 2=Disagree,
3=Neither Agree nor Disagree, 4=Agree, and 5=Strongly Agree.
Technology Beliefs:
1. I stay current with advances in educational technology.
2. I prefer to use traditional pedagogies over technologically-delivered instruction.
3. I use technology in my teaching.
4. I use computers and technology in my professional development.
5. I use electronic collaboration and editing tools (e.g., Google Docs) for group work
projects.
6. I use electronic texts whenever available.
7. I write using a word-processing program.
8. New technologies intrigue me.
9. Staying abreast of new and important advances in technology is frustrating.
10. It is satisfying to be up to date on new and important advances in technology.
11. I can save and retrieve data and/or information electronically.
12. The effective use of technology in school can help students learn.
13. The effective use of technology can improve education.
14. The effective use of technology offers convenient ways to collaborate and share
information.
15. The effective use of technology can make learning meaningful.
16. Instructional time spent using technology is worthwhile.
17. Advanced technologies can increase students’ abilities to read, write, think, and
learn.
18. Technology tools can be repurposed for educational use.
*Two items were removed subsequent to the Item Analysis
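Scoring a scale like Part 1 typically involves reverse-coding negatively worded items (for example, items 2 and 9 above) before averaging. The authors do not describe their scoring procedure, so the following sketch is an assumption for illustration only; the item numbers treated as reverse-scored and the `composite_beliefs` helper are hypothetical.

```python
# Assumption: items 2 and 9 are negatively worded and reverse-scored.
NEGATIVE_ITEMS = {2, 9}

def composite_beliefs(responses):
    """Average a dict of {item number: 1-5 rating}, reverse-coding
    negatively worded items on the 5-point scale (1 <-> 5)."""
    recoded = [
        6 - rating if item in NEGATIVE_ITEMS else rating
        for item, rating in responses.items()
    ]
    return sum(recoded) / len(recoded)

# Partial, illustrative response set.
scores = {1: 4, 2: 2, 9: 1, 12: 5}
composite = composite_beliefs(scores)  # items 2 and 9 become 4 and 5
```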
Part 2
Technology Self-Efficacy—1:
Directions: Please rate the following statements on the following 5-point scale:
1=Strongly Disagree, 2=Disagree, 3=Neither Agree nor Disagree, 4=Agree, and
5=Strongly Agree.
1. I am sure of myself when working with/using computers and technology.
2. I am confident learning to use a new technology tool.
3. I am self-assured using computers, technology, and the Internet.
4. I am prepared to make the changes necessary to integrate technology into my
classroom.
Technology Self-Efficacy—2:
Directions: How confident do you feel in each of the following situations? Respond
using the scale provided—None, Some, Moderate, High, Very High.
1. Troubleshooting technology problems/issues without assistance.
2. Learning to use technology.
3. Comparing my technology skills with those of my colleagues.
4. Comparing my technology skills to those of my students.
5. Considering integrating collaborative, online tools into my instruction.
6. When faced with teaching using new technologies.
7. When asked about a technology I’m not familiar with.
8. Integrating instructional technology tools into my classroom instruction.
9. Familiarizing myself with new technologies.
10. Integrating Moodle (a course management system) into my instruction.
11. Integrating Weebly (a Web-page editor) into my instruction.
12. Integrating Issuu (an online publishing platform) into my instruction.
13. Integrating Google Docs (an electronic document sharing and editing tool) into my
instruction.
14. Integrating Google Maps into my instruction.
15. Integrating Prezi (a cloud-based presentation software) into my instruction.
16. Integrating Pixlr (an online photo editor) into my instruction.
17. Integrating Pinterest (pin-board style photo management and sharing tool) into my
instruction.
18. Integrating Lettrs.com (an online letter generator) into my instruction.
19. Integrating TimeToast (online interactive timelines) into my instruction.
20. Integrating GoAnimate (a tool for creating/sharing animated stories) into my
instruction.
21. Integrating Xtranormal (tool for creating/sharing animated stories) into my
instruction.
22. Integrating Glogster (an online, interactive poster program) into my instruction.
23. Integrating ToonDoo (a tool for creating/sharing comic strips) into my instruction.
24. Integrating Voicethread (a tool for sharing and commenting on photos/video) into my
instruction.
25. Integrating Flickr (photo sharing tool) into my instruction.
26. Integrating YouTube (video sharing tool) into my instruction.
27. Integrating iMovie or MovieMaker (movie editing tools) into my instruction.
28. Integrating Audacity (audio editing/podcasting tool) into my instruction.
29. Integrating Skype (online communication/phone tool) into my instruction.
30. Integrating digital video cameras (Flip or other) into my instruction.
31. Integrating digital still shot cameras into my instruction.
Embracing the Future of Online Learning
Timothy Rayle
Clay Community Schools
Michael Langevin
Equitable Education Solutions, LLC
Abstract
The use of online learning has become a common solution for high schools to meet the
needs of students. Many high schools have embraced online learning, particularly for
students at risk of not meeting the credit requirements for graduation. This study presents
an analysis of high school principals’ beliefs about the effectiveness and implementation
of online learning and the barriers to its use, as well as suggestions to improve its
effectiveness.
Introduction
The availability of online learning has the potential to provide students with
course choices and in some cases, the basic courses that should be part of every
curriculum (Picciano & Seaman, 2009). Online learning also has the conceivable
capability to provide academic options for secondary students in all content areas. The
Institute of Education research study for Pamoja Education (2014) recently stated that,
“the overall analysis of current published research suggest that online learning is growing
in schools in some countries in the world, but is still seen as rudimentary” (p. 2). If the
potential exists for secondary schools to benefit from the use of online learning, what are
the principals’ perceptions of effectiveness and implementation levels?
Background
Rayle (2011) stated that the use of technology to bridge the educational gap
within underserved populations was on the mind of politicians as early as 1996 as
evidenced in U.S. Education Secretary Richard Riley’s Report to the Nation on
Technology and Education (U.S. Department of Education, 1996). In the report, Riley
stated that during the past decade, the use of technology in American life exploded. This
explosion of technology characterized what is inevitable in American education—
computers and the Internet would play a vital role in classrooms (U.S. Department of
Education, 1996). Recognizing both the growing use of technology in schools and the
limited amount of applicable research and data collection, President Clinton’s Committee
of Advisors on Science and Technology identified an imperative need for extensive,
federally-sponsored research and evaluation on school technology (President’s
Committee of Advisors on Science & Technology, 1997).
Online learning has the potential to provide flexible access to content and
instruction, to pull together and distribute instructional subject matter more cost-efficiently,
to enable teachers to supervise additional students while maintaining learning
outcome quality, to provide affordable academic options, and to provide for the needs of
every student. Online technology enables students anywhere—poor inner cities, remote rural
areas, even at home—to take any course they like, from the best instructors in the world,
and to customize learning to their own needs, schedules, styles, interest, and academic
growth (Moe & Chubb, 2009). Only online courses can give a student access to the best
teachers and most rigorous and relevant courses regardless of where the student lives or
attends school (Rayle, 2011).
Companies providing prepackaged courses are making online learning
increasingly available for use in secondary schools. As more schools utilize the
technology, the use of online learning is being envisioned as a valuable tool. In a report
by Project Tomorrow (2013) researchers stated, “District leaders are increasingly seeing
the value of online classes as an effective learning strategy for a diverse set of
stakeholders including their administrative team, teachers, support staff and specific
populations of students” (p. 3). An analysis of online learning by Means, Toyama,
Murphy and Jones (2009) included more than 1,000 studies from 1996 to 2008
comparing online learning to traditional learning. The analysis investigated how the
effectiveness of online learning compared with traditional instruction and whether
supplementing traditional instruction with online teaching enhanced students’ learning.
The analysis revealed significant results in the area of effectiveness. As a result, the
researchers recommended continuing online learning in public schools and called for more
research on the impact of online courses (Means et al., 2009).
Methods and Findings
Rayle (2011) conducted a quantitative study to determine principals’
perceptions of the implementation and effectiveness of online courses in Indiana public
high schools and their relationship to demographic factors. The Principal Online Learning
Survey was used with high school principals to determine the implementation and
effectiveness of online learning. The survey was developed after reviewing the current
literature and establishing content validity. Practitioners’ views and field-testing of the
instrument provided face validity. The instrument was a 44-item survey developed by the
researcher to measure principals’ perceptions of the implementation and beliefs about the
effectiveness of online learning in high schools. The survey instrument was
patterned after the questionnaire developed by Picciano and Seaman (2007), who
developed their survey based upon a similar instrument used by Allen and Seaman
(2006).
An analysis was prepared to determine whether demographic factors played a role
in the principals’ perceptions of the implementation and effectiveness of online learning.
Factors examined included school location, school size, grade configuration, technology
costs, support costs, administrative costs, and existing technological infrastructure. Other
factors, which included the principals’ age, gender, number of years served as a principal,
number of years served as a teacher, ability to control the school budget, and highest
degree earned, were also examined.
In the fall of 2014, a follow-up study was conducted using these same methods
and instrument in order to identify trends in the principals’ perceptions about the
effectiveness, implementation level, and the barriers that prohibit the use of online
learning. Principals from 343 non-charter public high schools in Indiana encompassing
at least grades 10-12 were included in the study. The principals’ implementation and
effectiveness data were collected from 154 principals who responded to the survey. To
ensure reliability, Cronbach’s alpha values were computed: .94 for the 15 effectiveness
factors, .94 for the 15 implementation factors, and .89 for the seven barriers, indicating
internal consistency within the sections of the survey
instrument. Data from the 2014 study were compared with data from the 2010 study in
regard to: The principals’ perception of the level of implementation of online learning in
public high schools; the principals’ perception of the effectiveness of online learning for
public high schools; and the barriers that impede the development of online learning in
public high schools.
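The internal-consistency figures reported above come from Cronbach’s alpha, which compares the sum of the item variances to the variance of respondents’ total scores. A minimal sketch of the computation, using hypothetical 5-point responses rather than the study’s data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns collected from the
    same n respondents (items: list of k lists of length n)."""
    k = len(items)
    # Total score per respondent (sum across the k items).
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical Likert responses: three items, four respondents.
data = [
    [4, 5, 3, 4],  # item 1
    [4, 4, 3, 5],  # item 2
    [5, 5, 2, 4],  # item 3
]
alpha = cronbach_alpha(data)
```

Values near 1 (such as the .94 and .89 reported here) indicate that the items within a section vary together.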
Principals’ Belief in Effectiveness
The first section of the Principal’s Perception of Online Learning survey
examined the belief in effectiveness of online learning in meeting the needs of the
students. When the 2014 results were compared with those of 2010, the overall mean
decreased by .10 (M = 3.75, SD = .71), indicating a weaker belief among principals in the
general effectiveness of online learning. The greatest decreases in means occurred in the
belief in effectiveness to provide students an opportunity to succeed, a .36 decrease, and
to reduce scheduling conflicts, a .29 decrease. Means increased, indicating a stronger
belief in the effectiveness of online learning, for offering courses otherwise not available
and for providing courses without teachers, both showing a .17 increase. Table 1
illustrates principals’ responses to the 15 questions on the survey regarding perceptions
of effectiveness.
Table 1
Perception of Effectiveness of Online Learning

Item                                 2010 M   2010 SD   2014 M   2014 SD   Difference in M
Differentiating Instruction           3.82     0.90      3.69     0.89         -0.13
Growing Populations                   3.73     0.89      3.59     0.94         -0.14
Increasing Electives                  3.96     0.83      3.98     0.90          0.02
Offering Courses                      3.98     0.85      4.15     0.79          0.17
Offering AP Courses                   3.71     1.17      3.58     1.16         -0.13
Recovering Credits                    4.56     0.37      4.36     0.80         -0.20
Reducing Class Size                   3.31     1.21      3.18     1.02         -0.13
Individualizing Instruction           3.68     0.82      3.60     1.10         -0.08
Increasing Opportunities              3.95     0.87      3.59     1.08         -0.36
Providing Courses Without Teachers    3.50     1.20      3.67     1.03          0.17
Cost Effective Courses                3.78     0.98      3.63     0.96         -0.15
Meeting Needs of At-Risk              3.95     0.92      3.67     1.09         -0.28
Reducing Conflicts                    3.90     0.85      3.61     0.94         -0.29
Retaking Courses                      4.34     0.52      4.17     0.90         -0.17
Enrichment Needs                      3.63     1.09      3.74     1.04          0.11
Principals’ Beliefs in Level of Implementation
The second section of the Principal’s Perception of Online Learning survey
investigated the principals’ belief in the current level of implementation of online
learning in their respective high schools. All survey items were rated on a five-point
Likert scale (1 indicated not implemented, 3 indicated somewhat implemented, and 5
indicated fully implemented).
When the 2014 results were compared with those of 2010, the overall mean increased
by .51, indicating that online learning is being used in more schools (M = 3.29, SD =
1.00). Implementation levels showing the greatest gains in mean scores occurred in
offering courses not otherwise available, a .54 increase, and retaking courses, a .49
increase. Respondents indicated recovering credits and retaking courses as the top two
reasons for the use of online learning in their schools. A decrease in the mean, indicating
less implementation, occurred in using online learning to address growing populations
with limited space, a decrease of .25. Reducing class size and providing courses for
growing populations with limited space were reported as the least utilized rationales for
online learning. Table
2 illustrates principals’ responses to the 15 questions on the survey regarding perceptions
of the level of implementation.
Table 2
Perception of Implementation Level of Online Learning

Item                                 2010 M   2010 SD   2014 M   2014 SD   Difference in M
Differentiating Instruction           2.95     1.06      3.24     1.30          0.29
Growing Populations                   2.84     1.25      2.59     1.42         -0.25
Increasing Electives                  2.94     1.27      3.27     1.38          0.33
Offering Courses                      2.78     1.37      3.32     1.38          0.54
Offering AP Courses                   2.65     1.48      2.99     1.56          0.34
Recovering Credits                    4.20     1.03      4.45     0.89          0.25
Reducing Class Size                   2.39     1.32      2.39     1.42          0.00
Individualizing Instruction           3.03     1.23      3.30     1.36          0.27
Increasing Opportunities              3.24     1.30      3.52     1.33          0.28
Providing Courses Without Teachers    2.58     1.42      2.97     1.55          0.39
Cost Effective Courses                2.78     1.38      2.93     1.45          0.15
Meeting Needs of At-Risk              3.38     1.34      3.65     1.32          0.27
Reducing Conflicts                    3.09     1.38      3.19     1.47          0.10
Retaking Courses                      3.86     1.20      4.35     0.97          0.49
Enrichment Needs                      2.78     1.30      3.26     1.46          0.48
Principals’ Beliefs in the Barriers to Implementation of Online Learning
Principals who received surveys were asked to indicate on a five-point Likert
scale the degree to which each of seven factors was a barrier to their schools’ ability to
offer online learning courses. All survey items were rated on a scale of 1 to 5 (1
indicated very much a barrier, 3 indicated neutral, and 5 indicated not a barrier at all).
When the 2014 results were compared with data from 2010, the mean increased for each
of the seven factors, resulting in an overall mean increase of .49 (M = 3.68, SD = 1.08).
This indicated principals perceived the barriers were less of an impediment to the
implementation of online learning in their schools. Factors showing the greatest gains in
mean scores, indicating the barrier was decreasing, were course development and
purchasing costs, a .70 increase, and students will take online courses instead of
traditional, a .68 increase. Factors with the least change in the means were limited
infrastructure, a .17 increase, and available bandwidth, a .28 increase. Table 3 illustrates
principals’ responses to the seven questions on the survey regarding perceptions of the
barriers to implementation of online learning.
Table 3
Barriers to Offering Online Learning

Characteristic         2010 M   2010 SD   2014 M   2014 SD   Difference in M
Course Costs            2.59     1.35      3.29     1.33          0.70
Infrastructure          3.56     1.43      3.73     1.37          0.17
Bandwidth               3.52     1.43      3.80     1.35          0.28
Course Quality          3.06     1.21      3.49     1.37          0.43
Master Contracts        3.42     1.31      4.05     1.15          0.63
Teacher Training        3.13     1.27      3.75     1.28          0.62
Replacing Teachers      3.14     1.24      3.82     1.22          0.68
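The “Difference in M” column in Tables 1-3 is simply each item’s 2014 mean minus its 2010 mean. A sketch of how such shifts could be computed and ranked from raw responses; the response values and item set below are hypothetical, not the survey data:

```python
from statistics import mean

# Hypothetical item-level Likert responses from the two administrations.
surveys = {
    "Course Costs":   {"2010": [2, 3, 2, 3], "2014": [3, 4, 3, 3]},
    "Infrastructure": {"2010": [4, 3, 4, 3], "2014": [4, 4, 4, 3]},
}

# 2014 mean minus 2010 mean for each item, as in the tables' last column.
shifts = {
    item: round(mean(years["2014"]) - mean(years["2010"]), 2)
    for item, years in surveys.items()
}

# Items ranked by how much the perceived barrier eased.
ranked = sorted(shifts, key=shifts.get, reverse=True)
```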
Implications
When compared to the results of the 2010 study, the overall mean for the belief in
the effectiveness of online learning decreased, as principals rated 10 of the 15 items
lower; the overall mean for the level of implementation increased, as principals rated
implementation higher on 13 of the 15 items; and the overall mean for the seven barriers
increased, as principals rated the seven factors as lesser impediments to implementation.
Why has implementation increased, yet the belief in the effectiveness of online
learning decreased? The answer may be found in the following three areas:
1. The reduction of barriers prohibiting the use of online learning. The 2014
study revealed that all seven barriers to the implementation of online learning
had decreased. The greatest decrease in perception of a barrier occurred in the
area many schools consider the most vital, course development and
purchasing costs. Without funds to provide digital technology, schools cannot
pursue the option. In an effort to increase the number of students having
access to digital learning, the Indiana Department of Education Office of
eLearning has awarded school corporations approximately $14,430,000.00 in
state dollars since 2010 (H. Baker, personal communication, December 19,
2014). These funds have allowed schools to implement digital technology,
including online learning. In situations where individual schools could not
afford to offer online courses, consortiums were set up to pool resources. For
example, the Western Indiana Knoy Learning Center partnered with twenty
high schools to obtain a 21st Century Learning Center grant with a focus on
credit recovery for at-risk students. A total of 1,157 credits were earned by
898 students over a three-year span (E. Simpson, personal communication,
December 30, 2014).
2. School ownership. The top two rated reasons for offering online learning
were for students recovering credits and retaking courses. Developers of
prepackaged courses market their product to schools primarily to satisfy the
need to reduce the intense pressure to graduate students who are at-risk of not
meeting the credit requirements. Even though course developers align their
courses to the same standards that traditional courses must follow,
prepackaged online courses are seen by many educators as not meeting the
basic state standards that traditional classroom teachers are expected to
cover. A lack of involvement in the development and use of online courses,
and the resulting absence of a sense of ownership, could lead school
personnel to view online learning as a lesser alternative to traditional
teacher-driven classroom instruction. Educators working in schools using
prepackaged formats tend to believe that accountability is lacking and
students are not experiencing the authentic learning taking place in
traditional classrooms.
3. Student initiative. If the primary focus of online learning is to recover credits
and retake failed courses, and online learning courses are predominantly
available for students after they fail a course, then the message being sent to
students is that online learning is for failures and not a viable alternative for
students who are academically motivated. Schools that have created this
culture will find that students do not want to enroll in online courses and, if
enrolled, do not put forth their best efforts, thus living down to the school’s
low expectations for online learning.
Recommendations
The key to the effectiveness of the use of online learning may be found in school
ownership and student initiative. In 2010, online learning courses were primarily
prepackaged; today schools can easily create their own. Teachers should be provided
professional development opportunities where they are encouraged to explore, develop,
and use blended online designs, or to create their own fully virtual classrooms. The
shift from traditional methods to hybrid or fully teacher-driven online methods,
including instant feedback and chat features, would result in a major cultural
transformation, primarily by providing students multiple learning strategies based upon
individual needs. Once schools begin using and promoting online learning across the
curriculum, teachers’ fears and misunderstandings will subside and school ownership
will take on a positive character.
Students should be supported in enrolling in online courses. The Michigan
Department of Education (2006) requires all students to complete at least one online
course. Its rationale is:
Today, technology plays an integral role in school, at home and in the workplace.
Completing a meaningful online learning experience in Grades 9-12 will allow
students to become familiar with the key means of increasing their own learning
skills and knowledge. It also will prepare them for the demands they will
encounter in higher education, the workplace, and personal lifelong learning. (p.
39)
Students tend to exhibit the beliefs of their teachers and school leaders. In order for the
integration of online learning opportunities to be successful, students need those
providing direction for their learning to promote online learning as a viable educational
choice. The Michigan Department of Education perspective is an excellent point of focus
when supporting student usage of online learning. Once genuinely encouraged to explore
online options, students will thrive, and more than likely outpace the teachers in creating
new learning opportunities. In 2008, Spires, Lee, and Turner stated, “Transforming
education to meet the demands of the 21st century begins with an acknowledgement that
today’s students have opportunities to learn in different ways than those of previous
generations” (p. 498). When educators develop and integrate online learning with
fidelity, the end result will be engaged student learning.
References
Allen, I. E., & Seaman, J. (2006). Making the grade: Online education in the United
States 2006. Retrieved from http://www.sloan-c.org/publications/survey/pdf/
making_the_grade.pdf
Institute of Education for Pamoja Education. (2014). How pre-university online learning
experience can influence a successful transition into and through higher
education. Oxford: Institute of Education for Pamoja Education. Retrieved from
http://assets.cdnma.com/9136/assets/Research/IOE_research_overview.pdf
Means, B., Toyama, Y., Murphy, R., & Jones, K. (2009). Evaluation of evidence-based
practices in online learning: A meta-analysis and review of online learning
studies. Washington, DC: U.S. Department of Education.
Michigan Department of Education. (2006). Michigan merit curriculum high school
graduation requirements. Retrieved from
http://www.michigan.gov/documents/mde/111706-finalhsfaq_178578_7.pdf
Moe, T. M., & Chubb, J. E. (2009). Liberating learning: Technology, politics, and the
future of American Education. San Francisco, CA: Jossey-Bass.
Picciano, A., & Seaman, J. (2007). K-12 online learning: A survey of U.S. school district
administrators. New York, NY: The Sloan Consortium. Retrieved from
www.k12hsn.org
Picciano, A., & Seaman, J. (2009). K–12 online learning: A 2008 follow-up of the survey
of U.S. school district administrators. Retrieved from http://www.sloan-c.org/
publications/survey/k-12online2008
President’s Committee of Advisors on Science and Technology. (1997). Report to the
president on the use of technology to strengthen K-12 education in the United
States. Washington, DC: Panel on Educational Technology. Retrieved from http://
www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nov2007k12.pdf
Project Tomorrow. (2013). 2013 trends in online learning. Retrieved from
http://images.email.blackboard.com/Web/BlackboardInc/%7B829dd21f-b3f2409e-9a34-e37de5835c74%7D_k12Trends2013_web.pdf
Rayle, T. W. (2011). Principals’ perceptions about the implementation and effectiveness of
online learning in public high schools in Indiana (Doctoral dissertation).
Retrieved from
http://scholars.indstate.edu/bitstream/10484/1815/1/Rayle,%20Timothy.PDF
Spires, H. A., Lee, J. K., & Turner, K. A. (2008). Having our say: Middle grade student
perspectives on school, technologies, and academic engagement. Journal of
Research on Technology in Education, 40(4), 497-515. Retrieved from
http://www.unc.edu/world/having-our-say-middle-grade-student-perspectives-on-school-technologies-and-academic-engagement.pdf
U.S. Department of Education. (1996). Getting America’s students ready for the 21st
century: Meeting the technology literacy challenge: A Report to the Nation on
Technology and Education. Retrieved from http://www2.ed.gov/about/offices/list/
os/technology/plan/national/index.html
About the Authors:
Timothy Rayle, Ph.D., serves as the assistant superintendent for Clay Community
Schools and is an adjunct professor in the Educational Leadership department for Indiana
Wesleyan University. Dr. Rayle has been active in the Indiana Principal Leadership
Institute and serves as a mentor to principals throughout Indiana.
Michael Langevin, Ph. D., is the founder and CEO of Equitable Education Solutions,
LLC. His company works with schools throughout the Midwest and provides
consultation on research-based strategies to improve learning outcomes for all students.
He has also served as an adjunct professor at Indiana State University and is a member of
the design team for the Indiana Principal Leadership Institute.
The Science edTPA: Emphasis on Inquiry
D. Michelle Rogers
Lisa Barron
Austin Peay State University
Abstract
edTPA is a performance-based assessment used to evaluate the teaching skills of pre-service teacher candidates. The Middle Childhood Science and Secondary Science
edTPA assessments require that students are engaged in specific skills of scientific
inquiry. While inquiry has been a focus of science education reform at the national level
for years, there is little evidence to show that inquiry is common in the K-12 classroom.
This article explains what inquiry is, why it is valuable, and why its inclusion poses a
challenge and opportunity for new teachers being assessed with edTPA. The authors
also provide suggestions to help college faculty guide candidates to success in a science
edTPA.
Introduction
edTPA, formerly the Teacher Performance Assessment, is an assessment for
candidates in teacher education programs, and has been implemented nationally since the
2012 academic year. It was developed by Stanford University faculty and staff at the
Stanford Center for Assessment, Learning, and Equity (SCALE), in cooperation with
teachers and teacher educators across the United States. It was designed to assess if
graduating teacher candidates are equipped with the appropriate skills and ready for
licensure.
For edTPA, the teacher candidate plans, conducts and reflects on a learning
segment consisting of three to five consecutive, connected lessons (SCALE, Secondary
Science Assessment Handbook, 2015, p. 1). The format requires the candidate to provide
extensive documentation of the entire process, including lesson plans, video clips of
instruction, student work samples, analysis of instruction and assessment, and a series of
reflective commentaries. These materials are submitted and scored nationally by trained
and vetted subject-area specialists, including P-12 teachers and teacher educators.
Thirty-three states and the District of Columbia are now using edTPA at different
levels. Currently, fourteen states require edTPA as a step toward teacher licensure.
Other states are in an exploratory phase, using edTPA in their programs without it being
consequential for licensure or program completion as they await local or state policy
(edTPA, n.d.).
There are twenty-seven versions of edTPA relating to various licensure areas,
including a variety of secondary level, subject-specific edTPAs. Elementary teachers
may be required to use a mathematics or literacy edTPA. There is no elementary level
science edTPA. The Middle Childhood Science edTPA (for states with middle childhood
license grades 4-9) and the Secondary Science edTPA (for license in grades 7-12) are
quite similar to each other. Both require the teacher to actively engage students in a
complex set of skills: scientific inquiry.
Scientific Inquiry
Scientific inquiry is the set of actions undertaken by astronomers, biologists,
chemists, geologists, physicists, and all other scientists in their professions. On a grand scale,
when taken together, the practices of scientific inquiry represent the process that has
allowed science to expand the realm of human knowledge. For an individual student, the
practices of inquiry provide an insight into how science progresses. Perhaps more
importantly, the ability to conduct scientific inquiry allows a person to answer his or her
own questions, applying what is known about natural phenomena to expand one’s
knowledge, solve problems, think critically, and draw conclusions based on evidence.
Through the development of scientific “habits of mind,” students become effective
problem solvers (AAAS, 2009).
Inquiry in science education is a set of skills, a body of knowledge, and activities
that are practiced by students with the guidance of a skilled teacher/facilitator. According
to the National Research Council (2012), the practices of scientific inquiry in a K-12
science classroom may be broken down into several elements (Table 1).
Table 1. Practices of Scientific Inquiry for K-12 Science Classroom*
1. Asking questions
2. Developing and using models
3. Planning and carrying out investigations
4. Analyzing and interpreting data**
5. Using mathematics and computational thinking
6. Constructing explanations
7. Engaging in argument from evidence**
8. Obtaining, evaluating, and communicating information
*As published in A Framework for K-12 Science Education: Practices, Crosscutting
Concepts, and Core Ideas and The Next Generation Science Standards.
**Items 4 and 7 are specifically required by the edTPA
Some of the practices of inquiry listed in Table 1 may sound like familiar steps in
the scientific method and they may be used that way. However, aspects of these elements
may be taught explicitly or practiced individually.
In their article in Science and Children, Banchi and Bell (2008) provide useful
descriptions and examples of four levels of inquiry, which vary depending upon the
degree of guidance provided by the teacher. As students become more experienced with
the skills of inquiry, they might move from confirmation inquiry, in which all the steps of
the activity are prescribed, but students get the experience of performing the experiment
and analyzing data. The confirmation occurs when the results of the experiment confirm
a scientific principle as anticipated. Structured inquiry is similar except that students do
not know what result to expect and must draw their own conclusions at the end. In
guided inquiry, students are responsible for the design of the experimental procedures.
Only in open inquiry do students conduct research on a question which they themselves
generate, completing the entire process of scientific investigation with minimal guidance.
Even mature students typically require “extensive” experience with more teacher-directed
levels of inquiry to prepare them for open inquiry.
Inquiry in edTPA
As detailed in the Secondary Science edTPA Assessment Handbook (used with
permission, SCALE, 2015), the assessment requires the teacher candidate to thoughtfully
plan and conduct three to five consecutive, connected lessons (about three to five hours
of instruction). During instruction, all or some of the lessons must be video recorded and
the teacher candidate extracts and submits two continuous, unedited clips, each of no
more than 10 minutes in length.
To achieve success on this particular assessment, before beginning any detailed
planning of lessons, the teacher candidate should be aware of specific elements of inquiry
delineated in the scoring rubrics as indicative of teacher competency. Additionally,
certain of these activities need to be conducted in a manner that is clearly visible on the
brief video clips submitted with the assessment. Failure to take these requirements into
account before lesson planning may result in teaching that would be considered highly
skilled under another assessment tool, particularly an instrument not designed
specifically for science teaching, yet would likely earn a poor score on
edTPA.
To be considered successful as a beginning teacher using edTPA criteria, the
candidate should expect to demonstrate his or her role as a facilitator and to show that
students are actively engaged during the following classroom activities:
1) Students “present or record evidence and/or data in tables, maps, diagrams, or
other graphical or statistical displays.” (The teacher might provide raw data
from an experiment for this purpose, as the edTPA does not specifically require
that students collect the data themselves.)
2) Students analyze data in a discussion. In particular, students should identify
patterns and inconsistencies in the data. Optimally, students should also
discuss limitations of the research design. The first video clip should take
place during this discussion.
3) Using the data as evidence, along with knowledge of the science discipline,
students explain what has occurred or make predictions about future
occurrences. The second video clip should occur during this activity (SCALE,
2015).
Inquiry in the Classroom
For decades, science education reformers and influential organizations have
sought to expand the teaching of inquiry in elementary and secondary schools.
Nationally, reform efforts were formalized through the American Association for the
Advancement of Science (AAAS) and its 1993 publication of Benchmarks for Science Literacy, which
includes grade-level appropriate goals in science inquiry for grades K-12. In 1996, the
National Research Council (an agency of the National Academy of Sciences) published
the National Science Education Standards. Its “Science as Inquiry” standards include
detailed descriptions of skills that should be taught at each grade level. The National
Research Council felt elaboration was needed and published Inquiry and the National
Science Education Standards: A Guide for Teaching and Learning in 2000.
In recent years, the National Research Council, the National Science Teachers
Association, AAAS and others have continued to press for national standards for both
science content and science inquiry. A Framework for K-12 Science Education:
Practices, Crosscutting Concepts, and Core Ideas (National Research Council, 2012)
was the basis for the Next Generation Science Standards (NGSS Lead States, 2013). The
NGSS include practices of scientific inquiry interwoven with content. These standards
have been adopted in some states and will likely be adopted or serve as a foundation for
new standards in many more.
Despite what seems to be consensus on the importance of inquiry skills, other
trends have made it difficult for teachers to devote classroom time to their practice. A
common complaint by teachers, parents and others in all subject areas and grade levels is
that content standards have led to school curricula that are “a mile wide and an inch deep.”
Teachers are held accountable for covering an enormous volume of material. Students
are rushed through superficial coverage of content and rarely is there time to devote to
the long term inquiry projects necessary to develop the skills of scientific practice.
Frustration on the part of the teachers, students, parents and even administrators is a
common result.
Another nation-wide trend, the demand for accountability in education, has led to
an era of high-stakes testing. Efforts to assess the performance of all students on a
regular basis, in a cost-efficient manner, and with standardized assessment instruments
that allow easy comparison of teachers, schools, districts and states have resulted in the
widespread use of multiple-choice tests. Currently, tests tend to emphasize the
memorization of science content facts (National Research Council, 2014). Teachers are
held accountable for student performance using these tests (Tennessee Department of
Education, 2014).
If and when reforms lead to revision of standardized tests, will changing the
questions be enough? Research suggests that even a more elaborate test, such as the
Third International Math and Science Study (TIMSS), which includes elaborative written
responses along with multiple choice questions, may provide an inaccurate measure of
student understanding (Harlow & Jones, 2004).
In addition to the difficulty of assessing a complex skill such as scientific inquiry
using a standardized test, there is the issue of how test-performance pressure affects
teaching and learning. Grant Wiggins (1990) has suggested that neither teachers nor
students will be motivated to practice a complex task like scientific inquiry unless they
will be evaluated through authentic assessment. Such an assessment would necessarily
take place in the context of a real inquiry, rather than the artificially constructed context
of a standardized test item. Wiggins (1990) expressed the following concern:
While multiple-choice tests can be valid indicators or predictors of
academic performance, too often our tests mislead students and teachers
about the kinds of work that should be mastered.… it is the form, not the
content of the test that is harmful to learning…[students and teachers] are
led to believe that right answers matter more than habits of mind and the
justification of one's approach and results. (p. 3)
With teacher performance under constant scrutiny, educators are rewarded for
teaching the content and skills that provide the greatest return on standardized tests. In
many cases, there is no incentive and little opportunity for teachers to expend time and
effort and to devote scarce classroom time and resources to teaching inquiry.
Solid, recent research to document the frequency and quality of inquiry in U.S.
classrooms is scarce. One large-scale study in California in 2011 revealed that 40% of
K-5 classrooms include less than an hour of instruction in science per week and only about
10% of elementary students regularly participate in science lessons that engage them in
inquiry (Dorph et al., 2011).
Researchers Capps and Crawford (2013) describe existing research as consisting
principally of teacher self-reports and anecdotes that point to an absence of inquiry in
most classrooms. To gather new data, the researchers solicited experienced and
successful 5th-9th grade teachers from across the U.S. for their study. Using criteria such
as the number of college science classes a teacher had taken and prior experience
conducting research, Capps and Crawford selected teachers whom they expected to be
more capable and more likely than most to teach inquiry in their classrooms. They then
gathered a variety of evidence, including classroom observations and lesson plan
analysis, to examine the teachers’ use of inquiry practices. In a majority of the
classrooms, no scientific inquiry activities were found. In the classrooms that did involve
inquiry, the activities were usually isolated stages of an incomplete investigation, often
consisting only of using tools to take measurements or completing mathematical
calculations in a science context. Capps and Crawford came to the unfortunate conclusion
regarding implementation of inquiry-based teaching that, “even some of the best teachers
currently struggle” (p. 498).
Helping Candidates with the Science edTPA
As suggested by the National Research Council (2012), college science methods teachers
will want to continue to encourage teacher candidates to use a variety of strategies,
including inquiry, for teaching science. Teacher candidates will likely observe
other strategies in use during classroom observations. Given the current lack of evidence
for frequent use of inquiry in most classrooms, it is risky to assume that teacher
candidates will have observed inquiry-based science lessons in the field or that students
will have experience with the practices of inquiry.
Faculty will want to be sure that candidates understand that the science edTPA
has very specific requirements that will not be met by just any science content lessons, no
matter how expertly taught. College faculty can guide candidates in a careful reading of
the entire Assessment Handbook, addressing the specific components of inquiry that will
need to be included. Faculty can encourage candidates to consider how they might
engage students in data analysis and argument from evidence in a manner that will lend
itself to video recording.
Faculty should also keep in mind that teacher candidates may have limited
knowledge of the students in the school placement in which they will be assessed with
edTPA. Supervising faculty can assist candidates by emphasizing the need to design an
inquiry lesson appropriate to the current skill level of these particular students and remind
candidates not to make assumptions about what these students “should” already know.
Candidates can discuss this matter with their classroom mentor teacher and can conduct
appropriate pre-assessments of students’ knowledge of inquiry, mathematics, graphing,
laboratory skills, or other relevant areas.
Because the ability to conduct inquiry is such an important part of science
education, college education faculty may find further investigation, both into local K-12
classrooms and in their candidates’ undergraduate science coursework, worthwhile and
enlightening. Science methods teachers need to have some familiarity with what is
happening in the classrooms where their candidates are placed. How often is student
inquiry taking place? Is it confirmational, structured, guided or open? What do
candidates see their classroom mentor teachers doing on a regular basis?
Finally, college faculty should use caution in making assumptions about the
inquiry abilities of candidates themselves. The complex skills of inquiry require
extensive practice for mastery. Science coursework on a candidate’s transcript is no
guarantee of these skills. It is important for faculty to determine what experiences
candidates have had and what skills they have developed. If there are areas where
candidates are weak, do available science methods courses offer opportunities for
candidate growth? If additional resources are needed, are students aware of opportunities
for professional development, perhaps in additional coursework or through independent
work in a campus research laboratory?
Depending on a number of factors, including how science is currently being
taught in K-12 schools associated with teacher education programs and the skills of
candidates themselves, the science area edTPA may be a challenge for many candidates.
However, the guidance of college education faculty who are familiar with the specific
requirements of edTPA and issues relating to inquiry in education can help candidates
achieve success on this assessment and equip them with valuable skills for their teaching
careers.
References
American Association for the Advancement of Science. (2009). Benchmarks for science
literacy. Retrieved from http://www.project2061.org/publications
Banchi, H., & Bell, R. (2008). The many levels of inquiry. Science and Children, 46(2),
26-29.
Capps, D. K., & Crawford, B. A. (2013). Inquiry-based instruction and teaching about the
nature of science: are they happening? Journal of Science Teacher Education, 24(3):
497-526. doi: 10.1007/s10972-012-9314-z
Dorph, R., Shields, P., Tiffany-Morales, J., Harty, A., & McCaffrey, T. (2011). High hopes–
few opportunities: The status of elementary science education in California.
Sacramento, CA: The Center for the Future of Teaching and Learning at WestEd.
edTPA. (n.d.). Retrieved from http://edtpa.aacte.org
Harlow, A., & Jones, A. (2004). Why students answer TIMSS science test items the way
they do. Research in Science Education, 34(2): 221-239.
doi:10.1023/B:RISE.0000033761.79449.56
NGSS Lead States. (2013). Next generation science standards: For states, by states.
Washington, DC: National Academies Press.
National Research Council. (1996). National science education standards. Washington,
DC: National Academy Press.
National Research Council. (2000). Inquiry and the national science education
standards: A guide for teaching and learning. Washington, DC: National Academy
Press.
National Research Council. (2012). A framework for K-12 science education: Practices,
crosscutting concepts, and core ideas. Washington, DC: National Academy Press.
National Research Council. (2014). Developing assessments for the next generation
science standards. Washington, DC: The National Academies Press.
Stanford Center for Assessment, Learning and Equity (SCALE) (2015). edTPA
secondary science assessment handbook. Retrieved from http://edtpa.aacte.org
Stanford Center for Assessment, Learning and Equity (SCALE) (2015). edTPA middle
childhood science assessment handbook. Retrieved from http://edtpa.aacte.org
Tennessee Department of Education. (2014). Tennessee value-added assessment system.
Retrieved from http://www.tn.gov/education/data/TVAAS.shtml
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research &
Evaluation, 2(2). Retrieved from http://PAREonline.net
About the Authors:
D. Michelle Rogers, M.S., is an Instructor in the Department of Biology at Austin Peay
State University in Clarksville, Tennessee. Her research interests include the teaching of
critical thinking skills and science practices.
Lisa Barron, Ed.D., is an Assistant Professor in the Martha Dickerson Eriksson College
of Education at Austin Peay State University in Clarksville, Tennessee. Her research
interests include performance-based assessments and problem based learning.
A Rubric for Revitalizing Teacher Reflection
Todd Cherner, Ph.D.
Marcie Ellerbe, Ph.D.
Elena Andrei, Ed.D.
Coastal Carolina University
Abstract
The notion that teachers must be “reflective practitioners” has long been supported by
research, and teacher education programs typically promote reflection as one of their
central tenets. Additionally, several rubrics have been created and used to evaluate pre-service teachers’ reflections. The challenge, however, is that few of these rubrics are
research-based or have been validated. In response, this article presents a narrative that
explains how three teacher educators came together and used qualitative research
methods to construct and validate a rubric designed to evaluate pre-service teachers’
reflections. This rubric can be adopted by teacher educators to assess the quality of their
pre-service teachers’ reflections.
Introduction
Reflection has long been part of the teaching process (Bengtsson, 1995; Wildman
& Niles, 1987), and scholars recognize the intimate relationship between reflection and
instructional planning (Artzt & Armour-Thomas, 2008; Pewewardy, 2002). These ideas
are well-established by the “Reflective Practitioner” framework for teachers (Schön,
1983 & 1987), which purports that reflection can meaningfully inform teachers’
performance in the classroom. Furthermore, there are implications for reflection as it
relates to teacher education (Bullough & Gitlin, 1989; Roth, 1989). One task for teacher
educators is to develop pre-service teachers’ reflection skills, and scholars have
experimented with different techniques for doing so (Wubbels & Korthagen, 1990;
Zeichner & Teitelbaum, 1982). The challenge for teacher educators, however, does not
lie in establishing the importance of reflection as a key element in teacher preparation
programs; rather, it relates to evaluative instruments that name and describe the qualities
inherent to the act of reflection and how such instruments might be used to support pre-service teachers’ embodiment of reflective practice. As such, the research question for
this study is: How can a rubric to evaluate pre-service teachers’ reflections be designed
and validated? This question is important because creating such a rubric represents the
first step in establishing a clear and consistent set of expectations that can be used across
courses to evaluate pre-service teachers’ reflections. In this article, we first provide a
research-based definition for reflection and a theoretical framework. Next, we discuss the
methodology utilized to design and validate a rubric for reflection, and we conclude with
a discussion of the rubric and its implications for future research.
Defining Reflection
Before creating the rubric, we conducted a review of literature to define the term
“teacher reflection” using previously published works. We searched Google Scholar
using combinations of the following key terms: reflection, teacher, and reflective
practitioner. This search returned thousands of articles. To limit the results, we added the
terms definition, explained, means, and defined as. This reduced the total number of
articles from thousands to hundreds. To reduce it further, we enclosed the search terms
in quotation marks, using phrases such as “reflective practitioner
means” and “definition of teacher reflection”; this search returned fewer than 100 articles
that specifically defined teacher reflection.
Reflection is considered an essential element to a teacher’s planning, instruction,
and analysis of student learning (Borko & Livingston, 1989; Krajcik et al., 1994). When
pedagogy is viewed through a reflective lens, teachers conduct ongoing inquiry into their
own instruction with the purpose of deepening their understanding of their teaching
practice (McIntyre, 1993). As Artzt and Armour-Thomas (2008) explain, reflection is
“thinking about teaching. It involves the thoughts teachers have before, during, and after
the actual enactment of a lesson” (p. 6). Yet, defining teacher reflection as merely
“thinking” about one’s own teaching does not go deep enough if the aim is to consider
how to support the development of pre-service teachers’ ability to reflect.
Hatton and Smith (1995) explain that there are specific purposes for teacher
reflection. For example, reflection about how teachers interacted with students during a
lesson represents a different type of reflection than that made when analyzing formative
assessments to determine student learning. Similarly, reflecting on one’s beliefs about
education and how these have been influenced by personal ecology (Bronfenbrenner,
1986) is different from reflecting on the instructional choices made to support student
learning. The overarching concept pre-service teachers need to understand is that
reflection as it relates to teaching goes beyond merely thinking about general teaching
practices. Rather, it is intentional thinking about a component of teaching that one wishes
to improve in order to maximize student learning. This idea then feeds into Schön’s
(1983 & 1987) conceptualization of the reflective practitioner model.
Theoretical Framework: The Reflective Practitioner Model
The term “reflective practitioner” was first introduced by Schön (1983 & 1987),
and represents an individual who is “capable of analyzing situations, [and able to] choose
and use relevant knowledge and reflect on [his/her] own experiences” (as cited in
Kolmos, 2006, p. 175). Schön’s definition positions reflective practitioners as people who
can remove themselves from a specific situation and look at it from outside their
perspective. Thus, the act of reflecting is qualitative in nature (Kincheloe, 1995; Ottosson
& Björk, 2004) in that the “reflective practitioner” understands and interprets the
different elements, influences, and experiences related to the phenomenon at the center of
the reflection, internalizes them, and makes cogent decisions. These attributes make the
“reflective practitioner” model appealing to teacher preparation programs (Calderhead,
1989; Korthagen, et al., 2001; McKernan & McKernan, 2013; Zeichner, 1994).
The College of Education where we, the researchers of this study, teach adopted
the Reflective Practitioner model. Our College of Education requires the conceptual
framework shown in Figure 1 to appear in all education course syllabi, which ensures all
education majors are routinely exposed to it.
Figure 1. The Reflective Practitioner Model
According to this model, reflective practitioners are able to:
1. Work with diverse populations;
2. Apply content and pedagogical knowledge to the teaching and learning process;
3. Integrate technology to improve teaching and learning;
4. Demonstrate professional behavior and dispositions; and,
5. Engage in reflective practice to improve teaching and learning.
This model implies that students in the College of Education – inclusive of pre-service
and in-service teachers – develop their reflective practitioner stance as they progress
through their coursework and student teaching experiences. Because this model is applied
to a variety of programs, there are multiple ways students engage with it.
As teacher educators, we routinely design assignments with the intention of
(re)connecting our students with the Reflective Practitioner model. After utilizing the
framework for multiple semesters, we began questioning how our assignments were or
were not leading our students who are pre-service teachers to the more sophisticated
levels of reflection as they progressed through their respective teacher education
programs. Specifically, we began to discuss the exact requirements and expectations for
the reflections we assigned and realized that our differing perspectives of reflection may
impact our students’ conceptualization of reflection. The more we discussed our students’ growth as reflective practitioners, the more questions we had about best practices for teaching reflection and assessing the quality of reflections. As a result, we designed this
three-part study as shown in Figure 2 to first probe our personal understanding of
reflection before creating a validated rubric for assessing our pre-service teachers’
reflections.
Figure 2. The Study’s Three Phases
Phase One: Conceptualizing Reflection
The study’s first phase was designed (1) to meta-reflect (Hagström & Scheja, 2014) on our own understanding of reflection, and (2) to investigate the content pre-service teachers included in their reflections.
Investigate Personal Schema
To investigate our personal understandings and schema of reflection, we used an
online discussion forum to engage in four written conversations. To frame these
conversations, we used the prompts listed in Appendix 1 to uncover how we understand
and enact reflection in our professional lives and how we hold pre-service teachers
accountable for using reflection to support their learning. To frame our written
conversations, we unpacked the term reflection as a noun, verb, and adjective. Reflection
as a noun encouraged us to think about reflection as a product. Reflection as a verb
allowed us to think about the processes we use to reflect. Reflection as an adjective enabled
us to think about how we embody reflection as practitioners. We found unpacking our
understandings of reflection using these prompts useful because the different parts of
speech limited our writings to one aspect of reflection at a time. Without this structure, one of us might have written about wanting students to produce reflections (e.g., reflection as noun) while someone else wrote about what it means to identify one’s self as a reflective being (e.g., reflection as adjective). Using these prompts, we considered reflection through
different lenses in an organized, thoughtful fashion.
Analyzing Our Schema of Reflection
After we concluded our written conversations, we used constant comparative analysis (Lincoln & Guba, 1985) to identify patterns that emerged within our responses.
These responses represent our conceptualization of reflection at the study’s onset, and our
analysis revealed six patterns across three categories as shown in Table 1.
Table 1
Written Conversation Codes
Category                     Pattern
Content of Reflections       Other perspectives
                             Personal opinions & insight
Conventions of Reflection    Length and format
                             Language of the profession
Process of Reflection        Fostering a reflective stance
Content of Reflection. Our written conversations revealed that we define “content” as
including references to multiple perspectives, personal opinions, and personal insight.
Regardless of the type of reflection assigned, we expect reflection to include specific
acknowledgement of varying perspectives related to the stated topic. In our discussion,
Dr. Andrei wrote:
What makes reflection important is connecting the dots of what you read and
discuss in class and/or on your own, with what you see or do in the classroom and
what you think. Being a teacher is a craft and an art in the same time, which
means you cannot rely only on your experience and how you feel (Andrei, written
conversation, January 13, 2014).
In this way, reflection is seen as a tool for intentionally considering concepts and topics
from differing perspectives. Being able to view a phenomenon from multiple perspectives
begins fostering the appreciation of the multiple realities and contexts in which students
and teachers exist (Hatch, 2002; Healy & Perry, 2000).
Conventions of Reflection. Our conversation also revealed that word length and language
of the profession requirements may create artificial barriers for reflection. Dr. Cherner
said:
My expectation for language use and length of reflection are naturally and
organically going to be different than yours [in reference to Drs. Andrei and
Ellerbe]. And they should be. If I’m teaching graduate students who are in-service
teachers and you are teaching undergraduate students who are still pre-service, we
are talking about two very different kinds of students and our expectations for
reflection should reflect that (Cherner, spoken conversation, March 17, 2014).
The idea that there should be a “standard” length and language requirement is not
realistic. The development of teachers’ reflective abilities is related to their experiences as
students and as teachers (Korthagen, 2004). From our perspective, teacher educators
should differentiate their expectations for reflection based on the development and
experience of their students.
Process of Reflection. The third category we constructed related to how, if at all, we
intentionally support pre-service teachers in developing their reflective stance. Dr. Ellerbe
wrote:
What should we teach our students to help them develop their reflection
products? Maybe reflection is mostly invisible – like comprehension – so maybe
some of the same questions about assessing comprehension and building
comprehension apply to our work in thinking about reflection (Ellerbe, written
conversation, March 7, 2014).
We considered if it was possible to guide someone in developing a reflective process and
if we demonstrated this process to our students. We questioned whether our students were
developing their own reflective processes by completing the reflective work we assigned,
and whether we made visible the process we used to position ourselves as reflective
practitioners. We concluded our written conversation by discussing if deliberately
developing reflective practitioners would require us to revise our pedagogical practices.
Review Student Samples
To investigate the content our pre-service teachers include when reflecting, we
randomly selected 15 student reflections. We selected 10 reflections from two different
undergraduate literacy courses and five reflections from a secondary English language
arts methods course. We read the reflections and identified the following emerging
patterns (Erickson, 1986):
1. Summarizing a reading or experience;
2. Writing about what was learned from a reading or experience with inferences
aimed at applying what was learned to a new situation;
3. Making connections between past experiences and what was learned from the
reading or experience;
4. Planning how to use new knowledge in the future; and,
5. Questioning or evaluating the experience in an effort to make sense of it.
The content pre-service teachers already included in their reflections demonstrated how they conceptualized reflection. These reflections also provided a baseline for how our pre-service teachers were reflecting before an
accountability piece (e.g., the rubric) was put in place. As such, this analysis may serve as
meaningful data for a future study.
Phase Two: Rubric Creation
As we entered the study’s second phase, we identified explicit qualities of
reflection to be used across a variety of reflection-based assignments in our courses for
pre-service teachers. We searched for existing rubrics and analyzed them. The data from
this phase and from phase one were used to draft the reflection rubric.
Review of Rubrics
We searched Google Scholar for existing rubrics using the following search
terms: rubrics for reflection, reflection rubrics for education, and educational rubrics.
The first round of articles consisted of case studies and commentaries about reflection but
did not include specific rubrics. An additional Google search was conducted using the
same search terms. (We used Google Search because it broadened our search outside of
scholarly publications.) After exploring multiple search results, we noticed many
universities provided links to reflection rubrics on their websites. These practitioner
resources were, for the most part, developed by professors in education departments and
were meant to be utilized by colleagues and/or by pre-service teachers. After making this
observation, we collected 18 reflection rubrics. To analyze the rubrics, we first read them
to create a comprehensive list of common elements, which included elements found in
the rubrics’ criteria and indicators sections. We then tallied the frequency that those
elements occurred across the rubrics as shown in Table 2 and operationalized them using
the descriptions they had.
Table 2
Elements Found in Rubrics for Assessing Reflection
Element           Defined As                                                Frequency
Goal Setting      Planning next steps based on what was seen, done,             4
                  observed, or read
Analysis          Understanding and looking critically at what was said,        9
                  done, observed, or read to draw conclusions
Examples          Specific details about what was said, done, observed,         8
                  or read
Self-Assessment   Evaluation of what one learned                                5
Connections       Links to other areas such as previous experience,             4
                  outside readings, etc.
Organization      Structure and organization of the reflection                  7
Summary           Summary or retelling of what was read or observed             8
Mechanics         Grammar, fluency, writing conventions                        10
These elements represent a variety of content, ranging from higher-order thinking elements such as “evaluation” and “analysis” to format-oriented criteria like “organization” and “mechanics.” With this foundation in place, we created our rubric.
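For readers who track collected rubrics electronically, the tallying step described above amounts to counting how many rubrics mention each element. The sketch below is illustrative only; the element sets are invented stand-ins, not the 18 rubrics from our search.

```python
from collections import Counter

# Each collected rubric is represented by the set of elements found in its
# criteria and indicators sections (invented stand-in data, not the study's).
rubrics = [
    {"analysis", "mechanics", "summary"},
    {"analysis", "examples", "organization", "mechanics"},
    {"goal setting", "connections", "mechanics"},
]

# Tally how many rubrics mention each element.
frequency = Counter()
for rubric in rubrics:
    frequency.update(rubric)

# List elements from most to least frequent, as in Table 2.
for element, count in frequency.most_common():
    print(f"{element}: {count}")
```

With real data, each set would be transcribed from a rubric's criteria and indicators, and the resulting counts would populate the table's Frequency column.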
Creation of Rubric
We used the data collected from the study’s first phase coupled with the elements
noted across existing rubrics as the foundation for our “5+1 Reflection Traits” rubric (see
Appendix 2). We used the “6+1 Writing Traits” rubric (Education Northwest, 2014) as a
model due to its wide recognition. Table 3 presents our rubric’s elements, which include
defined criteria and one open-ended criterion.
Table 3
The 5+1 Reflection Traits’ Criteria
Contextual Framework: The reflection focuses on a specific topic or phenomenon (e.g., a reading, observation, or experience) that the author contextualizes.

Connections: The reflection makes associations with and between various sources (e.g., readings, discussions, and/or experiences).

Critical Analysis: The reflection demonstrates an understanding that there is more than one way for the phenomenon to be viewed and/or interpreted.

Self-Assessment & Personal Analysis: The reflection shares internal thinking in order to (out)grow and/or (re)affirm understanding of the reflection’s topic.

Disciplinary Language: The reflection uses language (e.g., key terms, content-area vocabulary, and overarching concepts) that is aligned to the discipline.

Conventions (organization and mechanics): There are no prescribed guidelines for the conventions section. Each instructor will individualize this section based on the type of reflection (e.g., book review, essay, video, etc.) assigned.
After operationalizing our rubric’s criteria, we developed the performance
indicators to serve as guidelines for teacher educators to evaluate reflections. The
performance indicators delineate each criterion into Exemplary, Accomplished,
Developing, Beginning, and Unacceptable. We intentionally selected language that we
viewed as positive and supportive because we recognize reflective writing as presenting
unique challenges to even the most seasoned teachers. By using positive language, we
envisioned the rubric as a support for pre-service teachers who are developing their
reflective stances. Lastly, the indicators for the Conventions criterion were intentionally
omitted so individual teacher educators could customize appropriate word count,
grammar, and language usage expectations based on the type of reflection assigned and
the type of students they were teaching.
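Because the rubric pairs six criteria with five ordered performance levels, its structure can be modeled as a small lookup table. The sketch below is our own illustration, not part of the study; the point values are invented (the rubric itself assigns no numbers), and in practice the Conventions criterion would carry instructor-defined indicators.

```python
# Ordered performance levels, highest to lowest, mapped to illustrative
# point values (the rubric itself assigns no numeric scores).
LEVELS = ["Exemplary", "Accomplished", "Developing", "Beginning", "Unacceptable"]
POINTS = dict(zip(LEVELS, [4, 3, 2, 1, 0]))

# The five defined criteria plus the open-ended Conventions criterion.
CRITERIA = [
    "Contextual Framework",
    "Connections",
    "Critical Analysis",
    "Self-Assessment & Personal Analysis",
    "Disciplinary Language",
    "Conventions",
]

def score_reflection(ratings):
    """Sum illustrative points for a dict mapping each criterion to a level."""
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"Unrated criteria: {sorted(missing)}")
    return sum(POINTS[level] for level in ratings.values())

# Example: a reflection rated Accomplished on every criterion.
example = {criterion: "Accomplished" for criterion in CRITERIA}
print(score_reflection(example))  # 18
```

This kind of tabular representation also makes it easy for individual teacher educators to swap in their own Conventions indicators or adjust weightings without altering the five defined criteria.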
Phase Three: Validation of Rubric
Because the creation of a rubric is a qualitative act, we used member checking as
our validation method (Cho & Trent, 2006; Lincoln & Guba, 1985). Three colleagues
from our College who hold advanced education degrees evaluated the rubric using the
Validation of the 5+1 Reflection Traits Rubric Analysis Form (see Appendix 3). Two of
the reviewers use rubrics regularly as part of their accreditation responsibilities to the
College, and the other reviewer instructs pre-service special education teachers in
creating and using rubrics as part of their teacher education program. Additionally, the
third reviewer provides professional development to professors across the University in
creating and using rubrics in their own courses. Based on their member checking, they
identified strengths and a weakness in the rubric that we report in Table 4.
Table 4
Strengths and Weakness of the 5+1 Reflection Traits Rubric
Strengths:
• Clarity of Language
• Defined Criterion
• Progression of Indicators

Weakness:
• Quantification of Indicators
Revision of Rubric
The reviewers identified three strengths and one weakness. They found strength in
the clarity of language, the parallelism of language and the alignment between the
criterion and performance indicators. Additionally, they found the statements used to
operationalize each criterion to be clear and definitive. They noticed how all the
performance indicators built on one another, thus distinguishing the expectations for each
indicator. However, the performance indicators did not include specific quantifications
for evidence, which was identified as a weakness. For example, words such as somewhat,
limited, and vague appear in the performance indicators for “Contextual Framework”. As
teachers and researchers, we grappled with the quantifiers and thought about how to
revise the rubric. As with the Conventions criterion in the rubric (see Appendix 2), we
believe a quantifier that is too strict could be limiting to some teacher educators and their
pre-service and/or in-service teachers based on the context of their reflections. We
recognize that these words used in the performance indicators are not measurable
numbers, but they do allude to the qualities we identified as being part of meaningful
reflection. Based on context, teacher educators can unpack and exemplify the various
performance indicators by providing examples of reflections.
Discussion and Implications
As pre-service teachers are continually introduced to new educational topics,
trends, and research, they need a space to write and discuss their developing
conceptualization of those ideas. That space is afforded to them in the context of
reflection. Moreover, if we are to develop pre-service teachers into the “reflective
practitioners” that Schön (1983 & 1987) described and our College supports, clear
markers delineating the qualities of meaningful reflections must be put forward. In
response, this study first identified the qualities of meaningful reflections before creating
and then validating a reflection rubric. However, a rubric alone will not develop pre-service teachers into reflective practitioners. It is the role of teacher educators to model
reflection and support pre-service and in-service teachers in developing their own
reflective stance. This rubric is one tool teacher educators can use to do so.
Before using this rubric, teacher educators need to consider their own teaching.
We know that there is a connection between reflection, teaching, and assessment (Borko & Livingston, 1989; Krajcik et al., 1994), and this connection means effective teacher educators cannot merely assign reflections. Rather, they need to model, critique, and
discuss reflections of both high and low quality. By doing so, teacher educators will be
welcoming pre-service and in-service teachers into the phenomenon of reflective practice
and preparing them to create meaningful reflections.
Limitations
This study has limitations. First, we each assign different types of reflections in our courses, but we did not intentionally investigate the specific teaching practices we used in conjunction with the assigned reflections that might impact our students’ understanding of reflection, or the specific processes students might use to complete the reflections.
Second, the pre-service teachers’ reflections we collected and analyzed in the
study’s first phase were based on different experiences (e.g., reflections on classroom
observations, learning experiences in class, assigned readings, etc.) across courses.
Because the experiences were different, the content of the analyzed student reflections
may have been influenced by the assignment.
Finally, rubrics used to evaluate pre-service teachers’ reflections have existed for a significant amount of time. Even though we reached saturation with our sampling, there are still potentially hundreds of rubrics, if not more, that we did not analyze.
We acknowledge that this study focused solely on creating a rubric that could be
used to assess reflection and not the instructional practices and specific assignments that
may also impact the depth of reflection. Although these limitations exist, the rubric
created does provide a research-based baseline for conceptualizing reflection.
Conclusion
As education continues into the 21st century, teacher educators and pre-service and in-service teachers need reflection perhaps more than ever before. With new standards,
teaching strategies, and assessments continually being rolled out with the aim of
preparing public school K-12 students for college and the workforce, reflection represents
a space for educators to consider the implications of these rollouts. The “5+1 Reflection
Traits” rubric presented here is meant to guide pre-service and in-service teachers in
using reflection for discovery, the consideration of multiple perspectives, and growth. If
we, as teacher educators, can support our pre-service teachers in developing their abilities
to reflect, we are taking a huge step in preparing them to become the reflective
practitioners that Schön wrote about more than three decades ago.
References
Artzt, A. F., & Armour-Thomas, E. (2008). Becoming a reflective mathematics teacher: A guide for observations and self-assessment. New York, NY: Routledge.
Bengtsson, J. (1995). What is reflection? On reflection in the teaching profession and teacher education. Teachers and Teaching: Theory and Practice, 1(1), 23-32. doi:10.1080/1354060950010103
Borko, H., & Livingston, C. (1989). Cognition and improvisation: Differences in mathematics instruction by expert and novice teachers. American Educational Research Journal, 26(4), 473-498. doi:10.3102/00028312026004473
Bronfenbrenner, U. (1986). Ecology of the family as a context for human development: Research perspectives. Developmental Psychology, 22(6), 723-742. doi:10.1037/0012-1649.22.6.723
Bullough Jr., R. V., & Gitlin, A. D. (1989). Toward educative communities: Teacher education and the quest for the reflective practitioner. International Journal of Qualitative Studies in Education, 2(4), 285-298. doi:10.1080/0951839890020403
Calderhead, J. (1989). Reflective teaching and teacher education. Teaching and Teacher Education, 5(1), 43-51. doi:10.1016/0742-051X(89)90018-8
Cho, J., & Trent, A. (2006). Validity in qualitative research revisited. Qualitative Research, 6, 319-340. doi:10.1177/1468794106065006
Coastal Carolina University. (n.d.). Conceptual framework. Retrieved from http://www.coastal.edu/education/framework.html
Education Northwest. (2014). 6+1 trait rubrics. Retrieved from http://educationnorthwest.org/traits/traits-rubrics
Erickson, F. (1986). Qualitative methods in research on teaching. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119-161). New York, NY: Macmillan.
Hagström, L., & Scheja, M. (2014). Using meta-reflection to improve learning and throughput: Redesigning assessment procedures in a political science course on power. Assessment & Evaluation in Higher Education, 39, 242-252. doi:10.1080/02602938.2013.820822
Hatch, J. A. (2002). Doing qualitative research in education settings. Albany, NY: SUNY Press.
Hatton, N., & Smith, D. (1995). Reflection in teacher education: Towards definition and implementation. Teaching and Teacher Education, 11(1), 33-49. doi:10.1016/0742-051X(94)00012-U
Healy, M., & Perry, C. (2000). Comprehensive criteria to judge validity and reliability of qualitative research within the realism paradigm. Qualitative Market Research: An International Journal, 3, 118-126. doi:10.1108/13522750010333861
Kincheloe, J. (1995). Meet me behind the curtain: The struggle for a critical postmodern action research. Critical Theory and Educational Research, 71-89.
Kolmos, A. (2006). Future engineering skills, knowledge and identity. In J. Christensen, L. B. Henriksen, & A. Kolmos (Eds.), Engineering science, skills, and Bildung. Aalborg, Denmark: Aalborg University Press.
Korthagen, F. A. (2004). In search of the essence of a good teacher: Towards a more holistic approach in teacher education. Teaching and Teacher Education, 20(1), 77-97.
Korthagen, F. A., Kessels, J., Koster, B., Lagerwerf, B., & Wubbels, T. (2001). Linking practice and theory: The pedagogy of realistic teacher education. New York, NY: Routledge.
Krajcik, J. S., Blumenfeld, P. C., Marx, R. W., & Soloway, E. (1994). A collaborative model for helping middle grade science teachers learn project-based instruction. The Elementary School Journal, 483-497. doi:10.1086/461779
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
McIntyre, D. (1993). Theory, theorizing and reflection in initial teacher education. In J. Calderhead & P. Gates (Eds.), Conceptualizing reflection in teacher development (pp. 39-52). Bristol, PA: Falmer Press.
McKernan, J., & McKernan, J. (2013). Curriculum action research: A handbook of methods and resources for the reflective practitioner. New York, NY: Routledge.
Ottosson, S., & Björk, E. (2004). Research on dynamic systems – some considerations. Technovation, 24(11), 863-869.
Pewewardy, C. (2002). A review of the literature and implications for practice. Journal of American Indian Education, 41(3), 23.
Roth, R. A. (1989). Preparing the reflective practitioner: Transforming the apprentice through the dialectic. Journal of Teacher Education, 40(2), 31-35.
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York, NY: Basic Books.
Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass.
Wildman, T. M., & Niles, J. A. (1987). Reflective teachers: Tensions between abstractions and realities. Journal of Teacher Education, 38(4), 25-31. doi:10.1177/002248718703800405
Wubbels, T., & Korthagen, F. A. (1990). The effects of a pre-service teacher education program for the preparation of reflective teachers. Journal of Education for Teaching, 16(1), 29-43.
Zeichner, K. M. (1994). Research on teacher thinking and different views of reflective practice in teaching and teacher education. Teachers’ Minds and Actions: Research on Teachers’ Thinking and Practice, 9-27.
Zeichner, K. M., & Teitelbaum, K. (1982). Personalized and inquiry-oriented teacher education: An analysis of two approaches to the development of curriculum for field-based experiences. British Journal of Teacher Education, 8, 95-117.
About the Authors:
Todd Cherner is an assistant professor of education. As part of the Spadoni College of
Education’s faculty, Todd teaches literacy courses and English teaching methods courses
to pre-service middle and high school teachers. Todd is originally from Orlando and
holds a professional teaching license issued by the state of Florida. Todd’s personal
mission for his work is to help support marginalized populations by preparing highly
qualified teachers to work in schools that traditionally serve these populations.
Additionally, he is very interested in using technology to support students’ digital literacy
skills.
Marcie Ellerbe is a 4th grade teacher at Burgess Elementary located in Horry County
Schools. Marcie received her PhD in Language and Literacy from the University of South
Carolina. Her previous work experience includes serving the elementary education and
graduate literacy programs at Coastal Carolina University. Marcie’s research focus is on
curriculum development with an emphasis on enhancing reading and writing instruction.
Elena Andrei is an assistant professor of literacy with emphasis on English for Speakers
of Other Languages (ESOL). Her previous work experiences include serving as an
English as a foreign language teacher in her native Romania and as an ESOL teacher in
North Carolina. She teaches both literacy and ESOL certification classes. Her research
interests are second language literacy, teacher education, and non-native English
speaking teachers.
Appendix 1: Prompts for reflecting about reflection
Prompt 1: Context. After reading our preliminary thinking about reflections, I identified
two themes and had one idea I want us to discuss. The first theme I identified was a need
for reflections and the stimulus (e.g. text and/or experience) to connect. In some fashion,
we each commented on students being able to synthesize different ideas together in the
form of a reflection. The second theme I identified was grading and assessment of
reflections. Whereas we need to hold students accountable for completing reflections as
part of the coursework we assign, we also are all aware that our grading of reflections
influences the content of the reflections. For instance, I touched on completion grades in
my preliminary thinking, Elena shared a rubric she uses, and Marcie articulated the
“counter-productive” influence grading has on reflection. Finally, the idea I had was
related to how we use reflection. In our preliminary thinking, we each focused on
students using reflection to advance their teaching practice (in one way or another).
However, I am curious about how we use reflection. These thoughts informed the
prompts I composed. For this week, please compose a single, double-spaced page in
response to each prompt.
Prompt 1A. What makes reflection important? Please compose a brief narrative that
demonstrates how YOU have used reflection in the past to solve a problem related to the
field of education or your teaching practice.
Prompt 1B. What is the best way to hold students accountable for reflecting as course
requirement? How will your accountability method have the least impact/influence on
the content of your students’ reflection(s)? Please outline a brief assessment method for
evaluating the meaningfulness of students’ reflections.
Prompt 2: What does reflection as a noun mean to you?
Prompt 3: Katie Wood Ray explains that we can think of curriculum as process or
product...as verb or noun. Curriculum of product is created away from students...like the
unit plan that is completely constructed with SLOs and assessments before the teacher
has even met the kids. Such curriculum can also be considered curriculum that is
standards driven. On the other hand, curriculum of process is constructed with
students...like a genre study that unfolds through student inquiry about the genre and the
work of authors who write in that genre. After reading our posts, I am thinking of
reflection in a similar fashion. Reflection as noun or product and reflection as process or
verb. Given this, please comment on the following:
Prompt 3A. Which seems as though it would be more personally beneficial to the
learner: reflection as noun or verb? Why do you think this is so?
Prompt 3B. What is your personal process of reflecting? How do you help your students
consider a personal process for reflecting?
Prompt 4: We have discussed a variety of topics related to reflection; now we turn to reflection as an adjective.
Prompt 4A. What does it mean to be a reflective person, a reflective practitioner?
Prompt 4B. If reflection as a verb is process-oriented and reflection as a noun is product-oriented, what does that mean for reflection as an adjective?
Appendix 2: 5 + 1 Reflection Traits Rubric
Contextual Framework: The reflection focuses on a specific topic or phenomenon (e.g., a reading, observation, or experience) that the author contextualizes

Exemplary: Clear and concise description of the topic/phenomenon with multiple relevant details is included
Accomplished: A somewhat clear description of the phenomenon with relevant details is included
Developing: Limited description of the phenomenon with mostly relevant details is included
Beginning: Vague description of the phenomenon with few relevant details included
Unacceptable: No description of the topic or phenomenon is offered

Connections: The reflection makes associations with and between various sources (e.g., readings, discussions, and/or experiences)

Exemplary: A variety of relevant associations are made and supported with appropriate evidence (e.g., references, quotes, and/or course objectives)
Accomplished: Relevant associations are made and supported with evidence (e.g., references, quotes, and/or course objectives)
Developing: Relevant associations are made but lack supporting evidence or evidence is inappropriate
Beginning: Some associations are made but they may be lacking relevance and/or supporting evidence
Unacceptable: No connection is offered

Critical Analysis: The reflection demonstrates an understanding that there is more than one way for the phenomenon to be viewed and/or interpreted

Exemplary: Thoughtful selection and in-depth analysis of possible implications are presented and various perspectives are carefully explored
Accomplished: Thoughtful selection and analysis of some relevant implications are presented and some perspectives are explored
Developing: Implications presented lack thoughtful analysis and only one alternative perspective is explored
Beginning: Implications presented lack analysis and no alternative perspectives are explored
Unacceptable: No critical analysis is offered

Self-Assessment & Personal Analysis: The reflection shares internal thinking in order to (out)grow and/or (re)affirm understanding of the reflection’s topic

Exemplary: Thorough analysis is provided that clearly explores or develops personal understandings (e.g., biases, stereotypes, preconceptions, or assumptions) of a topic and includes a detailed self-evaluation related to those understandings
Accomplished: Analysis is provided that explores personal understandings (e.g., biases, stereotypes, preconceptions, or assumptions) of a topic and includes a self-evaluation related to those understandings
Developing: Analysis is provided that has little exploration of personal understandings (e.g., biases, stereotypes, preconceptions, or assumptions) of a topic, accompanied by a minimal self-evaluation related to those understandings
Beginning: Analysis is provided that lacks exploration of personal understandings (e.g., biases, stereotypes, preconceptions, or assumptions) of a topic, and a self-evaluation related to those understandings is not included
Unacceptable: Neither analysis nor self-evaluation related to those understandings is provided

Disciplinary Language: The reflection uses language (e.g., key terms, content-area vocabulary, and overarching concepts) that is aligned to the discipline

Exemplary: Disciplinary language is well-developed, technically accurate, and professional
Accomplished: Disciplinary language is attempted, technically accurate, and professional
Developing: Disciplinary language is used in limited amounts or is used inaccurately
Beginning: Disciplinary language is seldom used or is often used inaccurately
Unacceptable: No language of the discipline is offered

Conventions (organization and mechanics): The instructor is to individualize these dimensions according to his/her discretion. Performance indicators for Exemplary, Accomplished, Developing, Beginning, and Unacceptable are left for the individual instructor to define.
Appendix 3: Validation of the 5+1 Reflection Traits Rubric Analysis Form
Thank you for your willingness to critique our instrument. Your feedback will provide us with
information about how we can better refine it. Please provide your feedback using the
following form.
Name:
Title:
Number of years that
you have been an
educator:
Your field of
expertise:
Directions: Please review each of the following dimensions to determine how “concise” and
“adequate” they are by rating them on a 5-point scale using the following breakdown:
5 = Very Good   4 = Good   3 = Average   2 = Below Average   1 = Poor
The “concise” rating focuses on how succinct and crisp the language used to describe the
dimension is. The “adequate” rating determines whether the language used satisfactorily
aligns the dimension’s description to its indicators. Lastly, please provide any additional
comments in the space following each dimension, especially if a rating of 1, 2, or 3 is assigned.
Contextual Framework
How concise is this dimension’s description? 5 4 3 2 1
How concise are this dimension’s indicators? 5 4 3 2 1
How adequately aligned is the dimension’s description to its indicators? 5 4 3 2 1
Additional Comment(s) and Suggestion(s):

Connections
How concise is this dimension’s description? 5 4 3 2 1
How concise are this dimension’s indicators? 5 4 3 2 1
How adequately aligned is the dimension’s description to its indicators? 5 4 3 2 1
Additional Comment(s) and Suggestion(s):

Critical Analysis
How concise is this dimension’s description? 5 4 3 2 1
How concise are this dimension’s indicators? 5 4 3 2 1
How adequately aligned is the dimension’s description to its indicators? 5 4 3 2 1
Additional Comment(s) and Suggestion(s):

Self-Assessment & Personal Analysis
How concise is this dimension’s description? 5 4 3 2 1
How concise are this dimension’s indicators? 5 4 3 2 1
How adequately aligned is the dimension’s description to its indicators? 5 4 3 2 1
Additional Comment(s) and Suggestion(s):

Disciplinary Language
How concise is this dimension’s description? 5 4 3 2 1
How concise are this dimension’s indicators? 5 4 3 2 1
How adequately aligned is the dimension’s description to its indicators? 5 4 3 2 1
Additional Comment(s) and Suggestion(s):
Evaluating Teacher Preparation in Assessment Literacy
Tara L.R. Beziat
Auburn University at Montgomery
Bridget K. Coleman
University of South Carolina- Aiken
Abstract
Classroom assessment literacy is a vital part of pre-service teachers’ preparation, as it allows
them to monitor student progress and document their effect on student learning. We conducted
two studies that measured participants’ knowledge of standards-based classroom assessment
strategies during their teacher preparation. Results revealed that pre-service teachers lack
assessment literacy, despite completing introductory undergraduate courses in classroom
assessment. Secondary education majors and those who had been admitted to a program showed
greater knowledge of assessment practices. Suggestions for improving pre-service teachers’
preparation are discussed.
Introduction
The current education system demands that teachers have a command of different forms
of classroom assessment. Specifically, teachers need to be able to create and implement valid
and reliable assessments in order to show the impact of their teaching on learning. Also, teachers
need to be able to discuss the results of assessments with parents and students, as well as use the
results of their assessments and standardized assessments to adjust instruction. Proper
assessment in the classroom plays a vital role in ensuring students are meeting instructional
objectives. Despite an emphasis being placed on classroom assessment for the past three
decades, evidence suggests deficiencies in the assessment knowledge of classroom teachers and
pre-service teachers (Plake, Impara, & Fager, 1993; Marso & Pigge, 1993; Haydel, Oescher, &
Banbury, 1995; Daniel & King, 1998; McMillan, Myran, & Workman, 2002; Alkharusi,
Aldhafri, Alnabhani, & Alkalbani, 2012).
Daniel and King (1998) made two recommendations to improve assessment practices in
the classroom. The first recommendation is that teachers should receive continuous training in
assessment practices. Daniel and King’s second recommendation is to find and study programs
that successfully prepare pre-service teachers in testing and measurement. By examining the
successful practices of education programs, pre-service teachers may be prepared for classroom
assessment. The present study addresses the issue of teacher preparation in classroom
assessment practices from a program evaluation perspective.
Assessment Literacy
In 1991, Richard Stiggins introduced the idea of assessment literacy. In his seminal
article, “Assessment Literacy,” he maintained that education and the research associated with it
spent too much time on just the process of learning or producing learning. To understand the
product of teaching, educators must be able to judge and analyze data to provide effective
instruction. Assessment literacy includes assessing what students know and can do, interpreting
the results from assessments, and applying the results to improve learning (Webb, 2002).
Educators need to be assessment literate because of the emphasis on measuring student outcomes
through standardized testing and the variety of assessment forms available.
In 1990, the American Federation of Teachers, the National Council on Measurement in
Education, and the National Education Association created the “Standards for Teacher
Competence in Educational Assessment of Students” in an attempt to address the assessment
literacy issue. The Council for the Accreditation of Educator Preparation (CAEP), the
accrediting body for teacher preparation programs, has also placed an emphasis on assessment
training for teacher candidates. In 2012, it published “Assessment Literacy Standards and
Performance Measure for Teacher Candidates and Practicing Teachers,” which presents its view
of assessment literacy.
If teachers want to take ownership of their classrooms and the assessments used to
measure student learning and progress, they need to be assessment literate. The evidence
indicates that pre-service teachers lack the skills to create, select and interpret assessments.
Alkharusi, Kazem, and Al-Musawai (2011) highlighted issues and discrepancies in
teacher assessment knowledge across a variety of measures of assessment attitudes,
perceived skills, and knowledge. Participants who had completed a measurement course were
more assessment literate, and gains in knowledge were associated with taking measurement
courses and completing a teaching practicum. Alkharusi et al. (2012) also found that teachers
lacked assessment literacy despite having positive attitudes about assessment and
perceiving high levels of competence in assessment. Ogan-Bekiroglu and Suzuk (2014) found
that the pre-service teachers were able to create and implement assessments but still needed
practice with some of the statistical and psychometric properties of assessment (e.g. validity and
reliability). Additional coursework in statistics and research measurement topics seems
necessary to address these areas.
The goals of the current research project are to measure and monitor pre-service teachers’
knowledge and application of assessment literacy and to make program adjustments to address
any deficiencies. The study was designed to pinpoint areas of strength and weakness among
programs of study in the knowledge and skills of classroom assessment. The research questions
addressed in the study were:
1. What is the level of assessment literacy, as measured by the Teacher Assessment Literacy
Questionnaire (Plake & Impara, 1992), of our pre-service teachers?
2. How does the assessment literacy of our pre-service teachers compare across professional
programs – early childhood, elementary, and secondary education?
Methodology
Overview
This study began as a longitudinal study at one university. However, because one of
the researchers changed universities, we were not able to collect enough data to measure
students’ progress over time. Data were collected at one campus during a spring semester and at
two campuses during a fall semester.
Instrumentation
The Teacher Assessment Literacy Questionnaire (TALQ; Plake & Impara, 1992), which
consists of 35 items, was used to assess pre-service teachers’ knowledge of assessment. The
survey aligns with the “Standards for Teacher Competence in Educational Assessment of
Students” (American Federation of Teachers, National Council on Measurement in Education,
& National Education Association, 1990), with five questions allocated to each of the seven
standards. Each item is a multiple-choice question with four possible answers and one correct
answer. In recent studies, the KR-20 reliability coefficient for the scores was .62 (Alkharusi,
Aldhafri, Alnabhani, & Alkalbani, 2012) and .78 (Alkharusi, Kazem, & Al-Musawai, 2011). In
the current study, the KR-20 reliability coefficient was .77.
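The KR-20 coefficients reported above follow the standard Kuder-Richardson Formula 20 for dichotomously scored items. As a minimal illustration, the sketch below applies the formula to an invented item-response matrix (not the study's data):

```python
# Kuder-Richardson Formula 20 (KR-20) for dichotomous (0/1) item scores:
# KR20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total)),
# where k is the number of items, p_i the proportion answering item i
# correctly, q_i = 1 - p_i, and var(total) the variance of total scores.

def kr20(scores):
    """scores: list of respondent rows, each a list of 0/1 item scores."""
    n = len(scores)          # respondents
    k = len(scores[0])       # items
    totals = [sum(row) for row in scores]
    mean_total = sum(totals) / n
    # Population variance of total scores, as in the original formula
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Sum of item variances p * (1 - p)
    pq_sum = 0.0
    for i in range(k):
        p = sum(row[i] for row in scores) / n
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Invented example: 5 respondents x 5 items
data = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0],
]
print(round(kr20(data), 3))  # prints 0.777
```

Higher values indicate that the 35 items hang together as a measure of a single construct; the .62–.78 range cited above is typical for knowledge tests of this length.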
Standards for Teacher Competence in the Educational Assessment of Students (1990)
 Standard 1: Choosing assessment methods
 Standard 2: Developing assessment methods
 Standard 3: Administering, scoring and interpreting assessment results
 Standard 4: Using assessment results for decision making
 Standard 5: Using assessment in grading
 Standard 6: Communicating assessment results
 Standard 7: Recognizing unethical practices
Procedures
Pre-service teachers’ knowledge of classroom assessment was measured during their
undergraduate teacher education using the TALQ. Students enrolled in classes identified as
teaching classroom assessment literacy were asked to complete an online survey. The survey
asked students to provide their student ID number as a way to track them over the course of the
semester; it also contained demographic questions, such as “What is your major?”, and the 35
items of the TALQ.
The first administration of the survey (pre-test) occurred during the first two weeks of
the class, and the second administration (post-test) during the last two weeks. Students
enrolled in educational psychology courses and courses specific to classroom
assessment (e.g. Measurement and Evaluation of Teaching or Classroom Assessment) were
given the questionnaire at the beginning and end of the course. The participants completed the
survey outside of class time with no time constraints, no incentives for their participation, and no
connection to their course grade.
Study 1
Participants
Participants in this study were pre-service teachers enrolled in a nationally accredited
teacher education program at a small university in the southeastern United States. Of those who
completed the pre-test and the post-test, the majority of students were enrolled in the elementary
program (n = 12) and in the undergraduate educational psychology class (n = 13). Just over half
had been officially admitted to the teacher education program (n = 14) and had a GPA between
2.5 and 3.0 (n = 11).
Results
Forty-nine students completed the first administration of the TALQ; however, only 26
(53.1%) completed both administrations. Table 1 presents means and standard deviations of the
TALQ items by assessment standard. Although slightly higher in the second administration, the
overall pre (17.92) and post (18.15) averages were not significantly different. The highest
percentage of items answered correctly related to Standard 1, choosing assessment methods.
The lowest percentage, indicating the weakest area, was for Standard 6, communicating
assessment results.
Table 1
Pre and Post TALQ Means and Standard Deviations (N = 26; 5 items per standard on TALQ)

                 First Administration    Second Administration
Variable            M        SD             M         SD
Total Score       17.92     4.82          18.15      5.98
Standard 1         3.08     1.09           3.346     1.06
Standard 2         2.54     1.02           2.538      .95
Standard 3         2.77     1.27           2.923     1.20
Standard 4         2.35     1.16           2.692     1.29
Standard 5         2.77      .90           2.538     1.07
Standard 6         1.50     1.10           1.462     1.21
Standard 7         2.92     1.16           2.6       1.26
Paired samples t-tests were used to determine whether students’ scores differed from the first
administration to the second on the total score and the standard scores. No significant
differences were found. Finally, scores were compared across programs of study (early
childhood, elementary, and secondary) and by whether students had been admitted to the
professional program. Students in the secondary program scored higher on both administrations
than those in the early childhood and elementary programs (See Figure 1).
Figure 1. Pre and Post TALQ Scores by Program Area (Spring)
Additionally, those who were officially admitted to the professional program outperformed those
who were not in the program on both administrations (See Figure 2).
Figure 2.
Pre and Post TALQ scores by Program Standing
Our initial results from this study indicate that our students are not improving in
assessment literacy. Specifically, their knowledge of classroom assessment practices appeared to
remain the same over the course of the semester. However, there are some differences to note.
Secondary students and students who have been admitted to the program have a better
understanding of assessment literacy as measured by the TALQ.
Study 2
Participants
Based on our initial results, we continued to collect data. In the subsequent semester, we
collected data from two campuses in the Southeast. The majority of students were
undergraduates (n = 7) majoring in early childhood or elementary education, and the majority
(n = 7) had a GPA between 3.5 and 4.0.
Results
A total of 26 students completed the initial questionnaire; however, only 11 completed
the post-test, yielding a 42.3% completion rate for both administrations. Table 2 shows the
means and standard deviations of the TALQ items by assessment standard for these 11 students.
As in Study 1, the overall score increased slightly on the post-test, but not significantly. In
Study 2, Standards 1, 2, and 7 received the highest percentages of correct responses. Standard 6
was again the weakest area, indicating deficiencies in communicating assessment results.
Table 2
Pre and Post TALQ Means and Standard Deviations, Fall (N = 11)

                 Pre-Test                Post-Test
Variable            M        SD             M        SD
Total             19.27     5.00          20.27     5.16
Standard 1         2.82     1.54           3.46     1.23
Standard 2         2.55      .934          3.18     1.08
Standard 3         3.09      .831          2.91     1.04
Standard 4         3.27      .904          2.91     1.14
Standard 5         2.91     1.04           2.64     1.29
Standard 6         1.64     1.36           2.09     1.38
Standard 7         3.00     1.00           3.09     1.04
Again, paired samples t-tests were used to compare students’ pre and post scores on the
total score and standard scores. The difference between the pre and post-test for Standard 2,
developing assessment methods appropriate for instructional decisions, approached significance
(t(10) = -2.05, p = .067). No other significant differences were found. Scores were also
examined for differences between majors and types of students. Again, secondary education
majors scored higher on the pre and post-test than elementary majors (See Figure 3).
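The paired-samples t statistics reported in both studies compare each student's pre- and post-test totals. As a minimal sketch of how such a statistic is computed (the pre/post scores below are invented, not the study's raw data):

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d = pre - post and sd uses the sample (n - 1) denominator."""
    n = len(pre)
    d = [a - b for a, b in zip(pre, post)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1  # t statistic and degrees of freedom

# Invented pre/post totals for 11 students (matching the study's n = 11)
pre  = [15, 18, 20, 14, 22, 19, 17, 21, 16, 23, 18]
post = [17, 19, 21, 16, 23, 20, 18, 22, 18, 24, 19]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")  # prints: t(10) = -9.04
```

A negative t indicates post-test scores above pre-test scores (since the differences are taken as pre minus post, as in the Standard 2 result above); the p-value would then be read from a t distribution with n - 1 degrees of freedom.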
Figure 3.
Pre and Post TALQ scores by Program Area
In the second data set, there were both undergraduate and graduate students; the graduate
students were enrolled in an alternative certification program and out-scored their
undergraduate counterparts on both the pre and post-tests (See Figure 4).
Figure 4.
Pre and Post TALQ scores by Academic Standing
Conclusions
Our results from these studies indicate that our students are not improving in
assessment literacy as measured by the TALQ. Specifically, participants’ knowledge of
classroom assessment seems to remain the same over the course of the semester. However, there
are some differences to note: secondary students, students who have been admitted to the
program, and graduate students have a better understanding of assessment literacy.
As noted previously, the goal of this research is to monitor and improve pre-service
teachers’ knowledge of classroom assessment practices. Students not only need to understand
how to design assessments but also to be able to interpret them and make instructional decisions
based on those interpretations. This study helped identify areas of weakness, including
communicating assessment results, which can be addressed in the curriculum and field
experiences. Our results also indicate that early childhood and elementary program students
may need a greater focus on classroom assessment literacy.
After examining differences across programs and the weaknesses identified by the
questionnaire, one of the universities is implementing a programmatic change for early
childhood, elementary, and special education. A statistics course is being added to the course
requirements for the purpose of strengthening pre-service teachers’ skills with data collection
and analysis. Education faculty will share current deficits in students’ knowledge with the math
faculty who will teach the statistics course. There will be an emphasis on how statistics can be
utilized to inform teachers of students’ progress and measure academic growth. Strengthening
teachers’ data analysis skills should increase their use of statistics in assessment and the analysis
of students’ scores. A better understanding of statistics should also strengthen their
communication of assessment results with others. Teachers should be better equipped to analyze
and use quantitative data to inform instruction.
Currently, the results indicate no significant change in assessment literacy as a result of
coursework in assessment. As DeLuca and Bellara (2013) noted, one possible problem to
examine is the alignment of the content in these classes to the assessment standards. Course
grades indicated that our pre-service teachers were satisfactorily completing the classroom
assessment courses offered at our universities, yet the TALQ results showed no growth; we
therefore need to examine the alignment of course objectives and assignments with the national
assessment standards. The TALQ is aligned with the seven assessment standards, and the
degree to which the classroom assessment courses align with those standards is a factor
affecting this study. As a result, the course content is being reviewed for alignment with the
standards.
Aligning the introductory classroom assessment courses is not in itself enough; the
entire program needs to increase its emphasis on the application of the standards. Teacher
preparation programs need to optimize opportunities to promote the development of
assessment-literate teachers and hold pre-service teachers accountable for applying the
assessment standards in PK-12 settings. For example, Standard 6, communicating assessment
results, was the weakest area revealed in our data. This is a more advanced skill involving the
application of data analysis and then relating that information to others, and pre-service
teachers could benefit from more practice doing this in their field experiences. Programs should
provide more opportunities for students to examine classroom assessment data and make
instructional decisions based on those data. This impacts the entire teacher education program,
as assessment outcomes are intentionally integrated throughout.
Calling attention to assessment standards and promoting their use among all education
faculty in coursework and programs will also bring the deficits to the forefront of undergraduate
education programs. Recognizing the importance of an overall assessment foundation along
with the data analysis skills can change the way faculty incorporate assessment concepts with
their students. A conscious effort to promote assessment literacy among faculty and pre-service
teachers can initiate immediate changes in education courses and long-term implementation
through programmatic improvements aligned with the Assessment Standards. Improving the
assessment literacy of our pre-service teachers supports improved teaching and learning.
References
Alkharusi, H., Aldhafri, S., Alnabhani, H., & Alkalbani, M. (2012). Educational assessment
attitudes, competence, knowledge, and practices: An exploratory study of Muscat
teachers in the Sultanate of Oman. Journal of Education and Learning, 1(2), 217-232.
Alkharusi, H., Kazem, A. M., & Al-Musawai, A. (2011). Knowledge, skills, and attitudes of
preservice and inservice teachers in educational measurement. Asia-Pacific Journal of
Teacher Education, 39(2), 113-123.
American Federation of Teachers, National Council on Measurement in Education, & National
Education Association. (1990). Standards for teacher competence in educational
assessment of students. Retrieved from
http://buros.org/standards-teacher-competence-educational-assessment-students
Council for the Accreditation of Educator Preparation. (2013). CAEP Commission
Recommendations to CAEP Board of Directors: Accreditation Standards and
Recommendations. Retrieved from
http://caepnet.files.wordpress.com/2013/09/final_board_approved1.pdf
Daniel, L.G., & King, D.A. (1998). Knowledge and use of testing and measurement literacy of
elementary and secondary teachers. Journal of Educational Research, 91, 331–343.
DeLuca, C., & Bellara, A. (2013). The current state of assessment education: Aligning policy,
standards, and teacher education curriculum. Journal of Teacher Education, 64(4),
356-372.
Haydel, J.B., Oescher, J. & Banbury, M. (1995, April). Assessing classroom teachers’
performance assessments. Paper presented at the annual meeting of the American
Educational Research Association, San Francisco.
Marso, R.N. & Pigge, F.L. (1993). Teachers’ testing knowledge, skills and practices. In S.L.
Wise (Ed.), Teacher training in measurement and assessment skills (pp. 129-185).
Lincoln, NE: Buros Institute of Mental Measurements, University of Nebraska-Lincoln.
McMillan, J. H., Myran, S., & Workman, D. (2002). Elementary teachers' classroom assessment
and grading practices. The Journal of Educational Research, 95(4), 203-213.
Ogan-Bekiroglu, F., & Suzuk, E. (2014). Pre-service teachers’ assessment literacy and its
implementation into practice. Curriculum Journal, 25(3), 344-371.
Plake, B.S., & Impara, J.C. (1992). Teacher competencies questionnaire description. Lincoln:
University of Nebraska.
Plake, B.S., Impara, J.C., & Fager, J.J. (1993). Assessment competencies of teachers: A national
survey. Educational Measurement: Issues and Practice, 12(4), 10–12.
Stiggins, R. J. (1991). Assessment Literacy. Phi Delta Kappan, 72(7), 534-39.
Stiggins, R. J. (1999). Evaluating classroom assessment training in teacher education
programs. Educational Measurement: Issues and Practice, 18(1), 23-27.
Stiggins, R. J. (2014). Improve assessment literacy outside of schools too. Phi Delta
Kappan, 96(2), 67-72.
Webb, N. (2002, April). Assessment literacy in a standards-based urban education setting.
Annual meeting of the American Educational Research Association, New Orleans, LA.
About the Authors:
Tara L.R. Beziat is an assistant professor of Educational Psychology at Auburn University at
Montgomery. Her research interests include metacognition, teaching strategies and assessment
practices.
Bridget K. Coleman is an associate professor of Education at the University of South Carolina
Aiken where she is the program coordinator for Secondary Mathematics Education and Middle
Level Education. Her research interests include best practices in education, assessment practices,
and interdisciplinary curriculum planning.
Influences of Prior Experiences and Current Teaching Contexts on New Teachers’ Use of
Manipulatives for Math Instruction
Elizabeth Lee Johnson
University of South Carolina at Beaufort
Abstract
In order to develop mathematical proficiency, students need access to a variety of
representations to make sense of mathematical ideas. This case study documents the perceived
influences on three novice elementary teachers’ use of math manipulatives for teaching
mathematics. Three sources of data were collected: video-recorded lessons, interviews, and a
focus group. Analyses indicated that, although concrete representations were accessible to all
three teachers, they were the least used among the available representations. While participating
teachers shared similar pre-service experiences in relation to their coursework and internship,
there were substantive differences between them in relation to how they viewed adopted
standards documents, interacted with colleagues, perceived their students, and perceived district
involvement. All three participants expressed concerns related to instructional time and
reported that district-led professional development was not helpful in supporting the use of
concrete representations. The findings from this study indicate that current teaching contexts impact
instructional practices more than pre-service experiences.
Introduction
The mathematics that students need to know today is different from the knowledge
needed by their parents. The mathematics curriculum from Pre-K through middle school has
many components, but the heart of mathematics in those years is the concepts relating to
numbers and operations. Quantitative reasoning requires the learner to create a meaningful
representation of a given problem, consider its parts, think about the meaning of quantities, and
use different objects and properties of operations (Common Core State Standards, 2010).
Manipulatives are effective learning tools in helping to develop quantitative reasoning. The use
of multiple representations and the ability to translate fluently among those different forms
facilitates student learning and helps deepen mathematical understanding (Lesh, 1987).
This study involved analysis of three beginning teachers’ practices related to the use of
math manipulatives in their mathematics instruction. All three teachers (Amy, Beth, and Carly)
had similar educational backgrounds and participated in similar coursework and field placement.
During the study, these three beginning teachers participated in interviews, video-recorded
mathematics lessons, and a focus group discussion. The interviews and discussions were
designed to investigate beginning teachers’ beliefs about their classroom practices relating to
traditional practices versus reform-based options for teaching mathematics. Discussion topics
included their pre-service and current teaching experiences in mathematics, their perception of
their use of math manipulatives, and their perceptions of their current teaching contexts. The
objective was to investigate what influenced beginning teachers’ implementation of mathematics
education reform practices in their classroom as novice teachers.
The purpose of this study was to examine 1) how beginning teachers made use of
concrete representations for teaching mathematics and 2) in what ways their prior experiences
and current teaching contexts impacted their use of concrete representations.
Review of the Literature
Effectiveness of Manipulatives
Conceptual understanding is essential to mathematics proficiency (Battista, 1999; Burns,
2005). Abstract mathematical concepts often create challenges for students in constructing
mathematical understanding (Devlin, 2000; Kamii et al., 2001); therefore, hands-on
manipulatives and graphic pictorial representations of mathematical concepts are helpful.
Hands-on experiences allow students to understand how numerical symbols and abstract
equations operate at a concrete level (Devlin, 2002; Maccini & Gagnon, 2001). Students can benefit
significantly from instruction that includes multiple models that approach a concept at different
cognitive levels (Lesh & Doerr, 2003; Lesh et al., 2003; Lesh & Fennewald, 2010). One type of
representation is the use of concrete models, or manipulatives, which can be used for
mathematical problem solving and everyday situations.
Wolfe (2001) contended that concrete experiences provide meaning for the learner, since
the representational or symbolic experiences may have little meaning without the concrete
experiences on which to build. It is difficult for teachers to help students make connections
because mathematics is so abstract (Hartshorn & Boren, 2000). One way of making those
abstract concepts more concrete is the use of math manipulatives, objects designed to concretely
represent mathematical ideas and concepts (Moyer, 2001), for mathematics instruction.
Manipulatives are often described as physical objects that are used as teaching tools to engage
students in hands-on exploration of mathematical concepts (Boggan et al., 2010).
Research indicates that the proper use of manipulatives (base ten blocks, fraction pieces,
counters, snap cubes, etc.) results in marked success in achievement and that manipulatives are
particularly helpful in assisting students in understanding mathematical concepts (Hartshorn &
Boren, 2000; Sowell, 1989). Sherman and Bisanz (2009) found that young children are able to
solve complex equivalence problems with manipulatives when problems are presented in a
nonsymbolic context. Johnson (2000) found that students using manipulatives for mathematics
usually outperformed those who did not. Research by D’Angelo and Nevin (2012) indicated that
manipulatives help solidify abstract mathematical concepts for better understanding.
Wolfe (2002) suggested that mathematics should be introduced to young children through
objects. Russell (2000) found that concrete materials enabled students to develop mental
pictures, thereby increasing their computational fluency. Cotter (2000) observed
mathematics instruction in two first-grade classrooms and concluded that the group using
manipulatives exhibited a better conceptual understanding of place value. Ruzic and O’Connell
(2001) found that the long-term use of manipulatives for mathematical instruction had a positive
effect on student learning by allowing students to use concrete objects to internalize more
abstract concepts. Sutton and Krueger (2002) found that the use of manipulatives engaged
students and increased their interest and enjoyment of mathematics, resulting in higher
achievement. Munger (2007) reported that the experimental group using math manipulatives
scored significantly higher in mathematical achievement than the control group. Baker (2008)
found that the benefits of using math manipulatives influenced both students and pre-service
teachers. Student engagement and content mastery increased, and preservice teachers
experienced less anxiety and a change in attitude toward mathematics.
In keeping with these findings, the National Council of Teachers of Mathematics, in its
Principles and Standards for School Mathematics (2000), recommends extensive use
of manipulatives. Teachers need to provide the necessary manipulatives, graphic illustrations,
Teacher Education Journal of South Carolina, 2015 Edition
68
and experiences to allow students to build and explore in order to support student development
of mathematical concepts. Learners must reflect on and communicate their experiences with
manipulatives in order to build meaning (Kelly, 2006; Krech, 2000). Some researchers have
found, however, that although teachers have learned appropriate strategies for using
manipulatives for mathematical instruction, their beliefs about how students learn mathematics
may influence how and why they use manipulatives the way that they do (Moyer, 2001; Philipp
et al., 2007).
Beliefs Influence Practices
Wilkins (2008) found that content knowledge, attitudes, and beliefs are all related to
teachers’ instructional practices. Prior research on pre-service elementary mathematics teachers
has mostly examined pedagogical beliefs in teaching mathematics, beliefs about one’s own
teaching ability, or content knowledge as individual constructs (Ball, 2003; Burton, 2006; Hart,
2004; Hill et al., 2008; Morris et al., 2009; Philipp, 2007; Swars et al., 2007; Wilkins & Brand,
2004). Understanding how these constructs change after these teachers enter their own
classrooms remains an open question.
Swars et al. (2007) noted that it is common for pre-service teachers to begin their
professional teacher preparation programs with a more traditional view of what it means to know
and teach mathematics, whereas university mathematics education programs are more likely to
promote the constructivist view of teaching and learning mathematics endorsed by the National
Council of Teachers of Mathematics (2000). Teachers’ beliefs about teaching and learning
mathematics influence their practice (Hofer & Pintrich, 2002; Holt-Reynolds, 2000a; Muis,
2004; Scott, 2001). Tirosh (2000) stated that there is a relationship among teachers’ beliefs
about mathematics, their use of manipulatives, and their purposes for using manipulatives.
Methodology
This study addresses the questions “How do beginning teachers make use of concrete
representations for teaching mathematics, and in what ways do prior experiences and current
teaching contexts impact beginning teachers’ use of concrete representations?”
Context and Participants
This study focused on the teaching of mathematics by new teachers who graduated
within the past four years from one southeastern university and who have access to concrete
representations in their current school settings. To diversify the sample, the researcher attempted
to select new teachers who were (a) currently teaching at different grade levels, (b) in different
schools, and (c) preferably in different local school districts. The volunteer participants were
three new teachers who had completed similar field experiences and coursework where math
manipulatives were introduced. At the time of the study, each participant was currently teaching
in her own classroom on a full-time, permanent basis and had less than three years of teaching
experience.
Data Collection and Analysis
In order to triangulate findings, the researcher conducted interviews, collected video
recorded lessons, and facilitated a focus group meeting. There were two goals for the interviews.
The first round of interviews related to beginning teachers’ prior experiences in their formative
years as well as their attitudes and beliefs relating to mathematics. The video recordings were
used to determine the actual amount of time math manipulatives were used for mathematics
instruction. The second round of interviews and the focus group concentrated more on novice
teachers’ perceptions of the influences of their current teaching practices.
The first steps for analysis were transcribing, coding, and analyzing the initial interviews
with the participants. The researcher used recordings of the interviews, which were conducted in
the classroom setting. Information was organized by research question. This process allowed the
researcher to address specifically the first research question, “How do beginning teachers make
use of concrete representations for teaching mathematics?”
The purpose of the first interview was to discuss participants’ perceptions of their current
teaching practices and their preparation to teach mathematics prior to entering their own
classroom. The researcher coded statements based on similar topics that arose. The second step
for analysis was transcribing, coding, and analyzing the lesson video recordings, which were
conducted in the classroom setting with participants’ own students. This process also allowed the
researcher to address specifically the first research question. Several perspectives were
considered. The first was an overview of the practices of all of the participants collectively. The
researcher specifically wanted to know, across all participants, the percentage of time
manipulatives were being used for mathematics instruction, particularly how much of the time
students were actively engaged versus passive learners. In
addition to looking at an overview of all participants, the researcher wanted to analyze each
participant's data separately from the whole group. The researcher constructed a frequency
distribution to compare observed frequencies of the use of various representations that were
collected through video observation for the whole group. Then a frequency distribution was
constructed in order to compare observed frequencies of the use of various representations used
in each participant’s classroom.
The next step for analysis was transcribing post-video interviews, which were conducted
in the classroom setting, in order to identify themes that arose during one-on-one discussions
with participants. This allowed the researcher to specifically address the second research
question “In what ways do prior experiences and current teaching contexts influence beginning
teachers’ use of concrete representations in their mathematics instruction?” Once the researcher
had transcribed all of the interviews, she looked for themes within each.
The final step for analysis was transcribing the focus group discussion. This step explored
possible themes that arose during one-on-one discussions with participants and helped identify
common themes and concerns within and among the group of participants. The data from the
focus group allowed the researcher to further address the second research question “In what ways
do prior experiences and current teaching contexts influence beginning teachers’ use of concrete
representations in their mathematics instruction?” After completing the transcription of the focus
group meeting the researcher formulated ideas about which statements could be grouped into
categories.
Results
Results of the quantitative analysis revealed that none of the beginning teachers in the
study used manipulatives for their primary instruction. All three participants had ample access to
manipulatives, yet they used them to significantly different degrees. Although they shared
similar pre-service experiences in relation to their coursework and internship, there were
substantive differences between them in relation to how they viewed the Common Core State
Standards for Mathematics (CCSSM), how they interacted with their colleagues, and how they
perceived district involvement. These differences seemed to result in different implementations
of concrete representations. For the remainder of the results, each participant has been given a
pseudonym in
order to create a readable description of the findings.
CCSSM were a subject of concern for all three. Amy and Beth, who were in their first
year of teaching, expressed concerns about learning and teaching to the standards. In contrast the
third year teacher, Carly, felt more comfortable with the CCSSM, having worked with them the
previous year. All three agreed that CCSSM were good for the students as long as expectations
were clarified to the teachers. All three participants felt that how the standards were implemented
within their districts had not been clarified by district personnel.
Carly perceived that she had a strong teacher/administration support group at her school.
Neither of the other two participants felt that they had ample support from colleagues unless they
specifically inquired, and both frequently made instructional decisions independently.
Due to perceived behavioral problems when manipulatives were used, Amy opted for
technology or pencil/paper activities in order to manage the
classroom. Rulers were distributed to the students to look at when making conversions along
with a conversion chart, and on one occasion they actually measured items in the classroom.
Amy cited episodes with students throwing the base ten blocks at each other during instruction
and arguments over fraction pieces as reasons for avoiding the use of manipulatives. Beth
reported similar issues with behavior management, although it was only a slight deterrent to
using manipulatives. She used three-dimensional objects while teaching a lesson on nets, but
Beth also frequently opted for technology in lieu of manipulatives. Most of the manipulatives in
Beth’s room were in packages in the back of the room. Carly did not feel that behavior while
using manipulatives was problematic (perhaps because she had more experience than the other
two participants). She felt that she had success in using measurement tools during instruction,
with students measuring objects in the classroom and distances in the hallway. Recordings of
classroom lessons showed that manipulatives such as counters and measurement tools were
readily available at each group’s table. Carly also reported that making and using “ladybug
clocks” was helpful for the students when calculating elapsed time. All three expressed concern
with the amount of time needed for student completion of tasks while using manipulatives.
All three participants also expressed frustration in the lack of guidance from district
personnel. Cursory in-service meetings were not perceived to be helpful. Meetings were brief
and infrequent, leaving participants unsure of expectations for their instruction. Presentation
of new material left little time for questions, clarification, exploration, or practice. This
disconnect between teachers and district left teachers to pursue strategies for mathematics
instruction individually.
Conclusion
According to all three participants, prior experiences exposing them to the use of
manipulatives in their math methods class and, to varying degrees, in their internships impacted
their knowledge of concrete representations. They unanimously expressed having exposure to
math manipulatives in their pre-service experiences, but differed in the amount of exposure they
felt they had. Participants communicated that their instructional decisions were based more
around their current teaching contexts than pre-service experiences.
Current teaching contexts significantly impacted beginning teachers’ use of concrete
representations. All three participants conveyed frustration regarding the vagueness of their
districts’ implementation of the CCSSM. Participants perceived a lack of guidance from their districts,
citing mixed messages relating to the implementation of their mathematics instruction, lack of
professional development, and minimal face-to-face contact time. Participants conveyed a lack
of explanation and follow-up relating to professional development, as well.
All participants agreed that time necessary for using manipulatives was an issue. This
issue included searching for different resources, preparing materials for lessons, and time to
actually finish a lesson without interruption. For the two less experienced participants, Amy and
Beth, behavior influenced the use of manipulatives for math instruction. For these same two
participants there was a lack of interactive team support in their mathematics planning. Although
they indicated that they felt that their pre-service experiences primarily influenced their teaching
practices, there was little evidence to support these statements. In the case of all three
participants, based on the observed lesson plans, individual interview discussions, and the focus
group discussion, primary influences were related to the context of their current teaching
environment.
Discussion
Recurring themes in the discussions with participants, suggesting possible reasons for the
participants’ overall practices, their limited use of manipulatives, and the particular differences
among them, were technology, behavioral issues, CCSSM, time constraints,
professional development, and collegial collaboration. Concerns relating to the Common Core
were intertwined with professional development that might support teachers and students in the
transition toward the implementation of CCSSM. The most evident difference among the
influences on instructional practices was the level of collaboration with participants’ colleagues.
Based on the researcher’s observations throughout this study, it seemed that the current
teaching contexts for beginning teachers had a greater influence on their teaching practices than
their pre-service experiences. Although more research is needed, there appears to be a strong
relationship between what is happening in beginning teachers’ classrooms and teacher
interactions within their teaching environment. This translates into a paradigm shift in how we
might approach instruction for our pre-service teachers. The more we can immerse pre-service
teachers into an authentic classroom setting during their learning process, the better the
probability that they will transition into their own teaching environment embracing the effective
strategies that are taught during their pre-service experience.
References
Baker, A. (2008). How do manipulatives affect a mathematics classroom? (Doctoral
dissertation). Accession No. OCLC 262835136.
Ball, D.L. (2003). What mathematical knowledge is needed for teaching mathematics?
Learning, 6, 1-9. Washington, D.C.
Battista, M.T. (1999). Fifth graders’ enumeration of cubes in 3D arrays: Conceptual progress in
an inquiry-based classroom. Journal for Research in Mathematics Education, 40(4), 417-448.
Boggan, M., Harper, S., & Whitmire, A. (2010). Using manipulatives to teach elementary
mathematics. Journal of Instructional Pedagogies. Retrieved from:
http://www.aabri.com/manuscripts/10451.pdf
Burton, M.E. (2006). Effects of a combined mathematics methods and content course on
mathematical content knowledge and teacher efficacy of elementary preservice teachers.
(Doctoral dissertation, University of Alabama). Retrieved from
http://search.proquest.com/docview/305350731
Cotter, J.A. (2000). Using language and visualization to teach place value. Teaching
Children Mathematics, 7, 108-114.
Common Core State Standards (2010). Common Core State Standards Initiative: Preparing
America’s students for college and career. Retrieved from http://www.corestandards.org
Devlin, K. (2002). Finding your inner mathematician. The Chronicle of Higher Education, 46,
B5.
D’Angelo, F. & Nevin, I. (2012). Teaching mathematics to young children through the
use of concrete and virtual manipulatives. Bloomsburg University of
Pennsylvania. Retrieved August 25, 2015 from
http://files.eric.ed.gov/fulltext/ED534228.pdf
Hart, L. (2004). Beliefs and perspectives of first-year, alternative preparation elementary
teachers in urban classrooms. School Science and Mathematics, 104, 79-88.
Hartshorn, R. & Boren, S. (2000). Experiential learning of mathematics: Using Manipulatives.
(Report No. EDO-RC-90-5). ERIC Digest. Charleston, WV: ERIC Clearinghouse on
Rural Education and Small Schools. (ERIC Document Reproduction Service No. ED
321967).
Hill, H.C., Ball, D., and Schilling, S. (2008). Unpacking pedagogical content knowledge:
Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal
for Research in Mathematics Education, 39(4), 372-400.
Hofer, B.K. & Pintrich, P.R. (2002). Personal Epistemology: The psychology of beliefs about
knowledge and knowing. Mahweh, NJ: Lawrence Erlbaum Associates.
Holt-Reynolds, D. (2000a). What does the teacher do? Constructivist pedagogies and prospective
teachers’ beliefs about the role of a teacher. Teaching and Teacher Education, 10(1), 21-23.
Johnson, E.L. (2014). Relationship between prior experiences, current teaching contexts, and
novice teachers’ use of concrete representation for mathematics instruction. (Doctoral
Dissertation, University of South Carolina).
Johnson, V.M. (2000). An investigation of the effects of instructional strategies on
conceptual understanding of young children in mathematics. New Orleans, LA:
American Educational Research Association.
Kelly, C.A. (2006). Using manipulatives in mathematical problem solving: A performance based
analysis [Electronic version]. The Montana Mathematics Enthusiast, 3(2), 184-193.
Krech, B. (2000). "Model with manipulatives." Instructor, 109(7):6–7.
Kilpatrick, J. Swafford, J. and Findell, B. (Eds). (2001). Adding it up: Helping Children learn
mathematics. Washington, DC: National Academy Press.
Lesh R., Post, T. & Behr, M. (1987). Representations and translations among representations in
mathematics learning and problem solving. In C. Janvier (Ed.), Problems of
Representation in the Teaching and Learning of Mathematics (pp. 33-40) Hillsdale, NJ:
Lawrence Erlbaum Associates.
Lesh, R. & Doerr, H. (2003). Beyond Constructivism: Models and Modeling Perspectives on
Mathematics Problem Solving, Teaching and Learning. Mahwah, NJ: Lawrence Erlbaum
Associates.
Lesh, R., Cramer, K., Doerr, H.M., Post, T., & Zawojewski, J. (2003). Using a translation model
for curriculum development and classroom instruction. In R. Lesh and H. Doerr (Eds.),
Beyond Constructivism: Models and Modeling Perspectives on Mathematics Problem
Solving, Learning and Teaching. Mahwah, NJ: Lawrence Erlbaum Associates.
Lesh, R., & Fennewald, T. (2010). Modeling: What is it? Why do it? In Modeling
Students’ Mathematical Modeling Competencies (pp. 5-15). New York, NY: Springer
Publishing.
Morris, A., Hiebert, J. & Spitzer, S. (2009). Mathematical knowledge for teaching in planning
and evaluating instruction: What can pre-service teachers learn? Journal for Research in
Mathematics Education, 40(5), 491-529.
Moyer, P.S. (2001). Are we having fun yet? How teachers use manipulatives to teach
mathematics. Educational Studies in Mathematics, 47, 175-197.
Munger, D. (2007, October 9). Children learn and retain math better using manipulatives
[Msg.1]. Message posted to
http://scienceblogs.com/cognitivedaily/2007/10/children_learn_and_retain_math.php
Muis, K.R. (2004). Personal epistemology and mathematics: A critical review of synthesis and
research. Review of Educational Research, 74(3), 317-377.
National Council of Teachers of Mathematics. (2000). Principles and standards for school
mathematics. Reston, VA: National Council of Teachers of Mathematics.
Philipp, R.A., Ambrose, R., Lamb, L., Sowder, J., Schappelle, B., Sowder, L., Thanheiser, E., &
Chauvot, J. (2007). Effects of early field experiences on mathematical content knowledge
and beliefs of prospective elementary school teachers: An experimental study. Journal
for Research in Mathematics Education, 38(5), 438-476.
Russell, S.J. (2000). Developing computational fluency with whole numbers. Teaching
Children Mathematics, 7(3), 154-158.
Ruzic, R. & O’Connell, K. (2001). Manipulatives. National Center on Accessing the General
Curriculum. Retrieved from http://www.cast.org/ncac/index.cfm?i_66
Scott, J. (2004). The forced autonomy of mathematics teachers. Educational Studies in
Mathematics, 55(1): 227-257.
Sherman, J., & Bisanz, J. (2009). Equivalence in symbolic and non-symbolic contexts:
Benefits of solving problems with manipulatives. Journal of Educational
Psychology, 101, 88-100.
Sowell, E. (1989). Effects of manipulative materials in mathematics instruction. Journal for
Research in Mathematics Education, 20(5): 498-505.
Sutton, J. & Krueger, A. (Eds). (2002). EDThoughts: What We Know About Mathematics
Teaching and Learning. Aurora, CO: Mid-Continent Research for Education and
Learning.
Swars, S., Hart, L.C., Smith, S.Z., Smith, M. & Tolar, T. (2007). A longitudinal study of
elementary pre-service teachers’ mathematics beliefs and content knowledge. School
Science and Mathematics, 107(9), 325-335.
Tirosh, D. (2000). Enhancing prospective teachers’ knowledge of children’s conceptions: The
case of the division of fractions. Journal for Research in Mathematics Education, 31(1),
5-25.
Vygotsky, L.S. (1978). Mind in Society: The Development of Higher Psychological Processes.
Edited by Michael Cole, Vera John-Steiner, Sylvia Scribner, Ellen Souberman.
Cambridge, MA: Harvard University Press.
Wilkins, J.L.M. (2008). The relationship among elementary teachers’ content knowledge,
attitudes, beliefs, and practices. Journal of Mathematics Teacher Education, 11, 139-164.
Wilkins, J.L.M. & Brand, B.R. (2004). Change in pre-service teachers’ belief: An evaluation of
mathematics methods course. School Science and Mathematics, 104, 226-233.
Wolfe, J. (2002). Learning from the past: Historical voices in early childhood education
(2nd ed.). Mayerthorpe, Alberta, Canada: Piney Branch Press.
Wolfe, P. (2001). Brain matters: Translating research into classroom practice. Alexandria, VA:
Association for Supervision and Curriculum Development.
About the Author:
Dr. Elizabeth Lee Johnson is an early childhood and elementary professor at the University of
South Carolina Beaufort and serves as the elementary education program coordinator. She was a
teacher in the elementary classroom for 20 years prior to teaching at the university level.
Understanding English Language Learners: A Qualitative Study of Teacher Candidates
Teaching in Another Country, Culture, and Context
Lee Vartanian
Lander University
Abstract
Despite the influx of English Language Learners (ELLs) in public schools, many pre-service
and in-service teachers are underprepared to teach them (O’Neal, Ringler, & Rodriguez,
2008). Study abroad experiences that include a teaching component may provide an option for
teacher candidates wishing to increase their ability to teach ELLs. In May 2014, nine teacher
candidates from South Carolina taught for two weeks in an elementary school in central
Guatemala. This qualitative study focused on how the experience of teaching in a new country,
culture, and context impacted participant perceptions of effective teaching practice, the diverse
needs of children, and their role as professional educators. Through a qualitative analysis of
their daily journals, individual & group interviews, and participant observation, the researcher
found that participants reflected on their pedagogical practice, expanded their empathy for
ELLs, and increased their confidence in communicating with ELLs.
Introduction
Throughout the United States, the numbers of English Language Learners (ELLs) in
public schools are increasing. In South Carolina, the numbers are rising particularly fast. From
2000 to 2010, South Carolina schools saw a 610% increase in ELLs (Horsford & Sampson,
2013). However, many pre-service and in-service teachers are not prepared for this growing
population (Menken & Antunez, 2001; O’Neal, Ringler, & Rodriguez, 2008). Although teacher
candidates take a broad base of courses, including child development, content pedagogy,
classroom management, and assessment, few programs have courses on the pedagogy of
teaching ELLs (Samson & Collins, 2012). Without direct guidance on how to instruct ELLs,
teachers consistently report feeling unprepared to meet the needs of the ELLs in their classrooms
(Gandara, Maxwell-Jolly, & Driscoll, 2005).
In addition to having pedagogical knowledge, two qualities are crucial for effectively
teaching ELLs: linguistic understanding and cultural awareness (Menken & Antunez, 2001;
Lucas, Villegas, & Freedson-Gonzalez, 2008). As teacher preparation programs attempt to
better prepare teacher candidates to effectively teach ELLs, study abroad programs may provide
a powerful tool for teaching linguistic and cultural knowledge. Study abroad programs have
demonstrated success in increasing student sensitivity to other cultures and
fluency in other languages (Quezada & Alfaro, 2007; Colville-Hall & Adamowicz-Hariasz,
2011).
Adding a service-learning component to a study abroad trip can impact the cultural
competence of college students. Study abroad experiences that involve a service-learning
component have a strong record of impacting teacher candidates’ personal and professional
growth (Tarrant, 2010). One study examined how international activities impacted various
global competencies across a broad spectrum of study abroad experiences (Stebleton, Cherney,
& Soria, 2013). When controlling for other factors, the researchers found that traveling
abroad for service opportunities was associated with increased ability and comfort in
working with people from other cultures (Stebleton, Cherney, & Soria, 2013). This
implies that there is something substantive about service learning experiences that strengthens
cultural competence during a study abroad trip.
For teacher candidates, a natural service learning option for study abroad is to teach in a
different culture, context, and country. Several studies have demonstrated how study abroad
programs that include a teaching component can impact teacher candidates’ linguistic and
cultural awareness (Alfaro, 2008; Palmer & Menard-Warwick, 2012). One study examined the
reflections of four bilingual teacher candidates who spent a semester teaching in Mexico.
Candidates repeatedly described the cultural differences they noticed and how they negotiated
those differences during their stay. They also repeatedly reflected on how this experience
impacted their ability and commitment to teach students from different cultural backgrounds in
their future classrooms (Alfaro, 2008). A similar study explored the experiences of seven
teacher candidates in a month-long study abroad experience in Mexico. While staying with host
families, participants studied Spanish, took a second language acquisition course, and served in
local schools. The researchers were interested in the connections participants made between
their Mexican students with the students they would encounter in their future classrooms in the
United States. The researchers found that participants developed a broad base of empathy for the
students they encountered and regularly reflected on how they might apply this deeper cultural
understanding to their future practice as teachers (Palmer & Menard-Warwick, 2012).
This study
In May 2014, I co-led a two-week study abroad trip to Guatemala. Nine teacher
education and two psychology majors, one psychology professor, and one teacher education
professor (the author) taught at a private elementary school that serves the indigenous
community of Santa Maria de Jesus in central Guatemala. Prior to the trip, participants took part
in six seminars covering Guatemalan culture, conversational Spanish for educators, cross cultural
psychology, and strategies for safe international travel. Additionally, each participant was
required to read I, Rigoberta Menchú, one of the few firsthand accounts of the tumultuous
history of the diverse indigenous communities of Guatemala and their fight against oppression
and injustice (Menchú, 1983). Once in Guatemala, participants lived with host families in the
historical, colonial town of Antigua. Participants provided tutoring in the school in the morning,
and had the afternoon free to take Spanish lessons and explore Antigua.
Methodology
This study focuses on how the experience of teaching in a new country, culture, and
context impacts participant perceptions of effective teaching practice, the diverse needs of
children, and their roles as professional educators. These themes were explored through daily
journals, individual & group interviews, and participant observation. Using a phenomenological
approach, this study focused on the experiences and reflections of nine teacher candidate
participants. Grounded in the experiences and perceptions of individuals, a phenomenological
study aims to capture the lived experience of individuals and how they make sense of it (Patton,
2002). With the exception of one African American female and one White male, the seven
remaining teacher candidate participants were White females. Pseudonyms were used in place of
the names of the participants.
As a participant observer, I worked alongside each of the participants. This allowed me
to achieve a level of access to and understanding of the shared experience. I kept field notes,
which I regularly reviewed and used to focus my observations and questions (Spradley, 1980).
Once the data was recorded and transcribed, I initiated a content analysis of the participant
interviews and journals. After reading and rereading all of the data, I unitized and categorized
the data according to similarities (Lincoln & Guba, 1985). Over numerous iterations of
exploring, defining, and redefining categories, three major themes emerged from the content
analysis: empathy for second language development, emerging strategies for teaching ELLs, and
cultural comparisons.
Findings
Empathy for Second Language Learners
A unique characteristic of this study abroad experience was that participants immersed
themselves in learning a second language while teaching in that language. They lived with host
families who spoke nothing but Spanish, taught students in Spanish, and received Spanish
instruction in the evenings. They were truly immersed in another culture and language. The
experience of being a learner in a foreign land—in need of assistance and understanding from
native speakers—led to comments expressing increased empathy towards ELLs:
It kinda puts you in their position, like “I don’t know anything that’s going on.” And, it
just makes you appreciate the patience for people who take the time out to help you
understand. You know, it takes a lot.
In addition to feeling empathy for the plight of second language learners, several
participants mentioned learning patience through the experience of speaking in another language.
Most applied this newly learned patience to the prospect of working with ELLs in their future
classrooms. One participant, Cierra, forecasted how the language immersion component of this
experience will lead her to be a more patient and effective communicator with ELL students and
parents:
When you’re just kind of talking about it in the States, you don’t really know how
difficult it will be. And you get here, and you’re completely immersed in it and you have
no choice. And, it is different… I think that’s going to be helpful to take back with me
when, in the future, I have students and parents from other native lands, or other cultures,
it’s going to help me to be more patient with them and communicate better.
Tristan explained how this immersive experience may lead her to work more
cooperatively and patiently with future ELL students:
It’s very beneficial, in my opinion, knowing that we have experienced it, and we can help
students out who have trouble. In my opinion, it’s beneficial to me because I know that
there are people out there. And, it’s not as much you need to help understand their
language, or that they need to understand us. It’s a cooperation thing. It’s patience. I
learned a lot of patience.
Emergent Strategies for ELL Instruction
For some participants, the challenge of teaching in Spanish brought out certain
innovations and inner teacher qualities that emerged during their teaching experiences in the
school. One day, Laticia and Jennifer’s kindergarten class covered the concept of capital and
lowercase letters. Some students had difficulty understanding the concept. Recognizing student
confusion, Laticia and Jennifer found strategies that spoke to their inner voices as teachers. For
example, Laticia made up a song to help her kindergartners understand how to draw a lowercase
"i":
I make little rhymes. Yesterday, they were doing the points on the "i's" really big. So, I
kept saying, "chiquitito puntito" like, "small point." So, as they were writing they were
like "Chiquitito-puntito" cause it just rhymes and it really clicks for them.
Jennifer, one of the most expressive participants in our group, used her theatrical nature
to bring physical movement into the classroom. To demonstrate capital letters, she became very
big with her arms and feet stretched wide. For lowercase letters, she crouched low. She
described how this experience helped her to find a way to teach anything:
You figure out ways to get your point across, or you start looking for things to
demonstrate or show… you do whatever you can to make sure you get your point across
and the children understand. And, I think that’s something I’ll definitely implement when
I get back.
Although the participants had varying levels of Spanish comprehension, most were at
linguistic levels similar to those of their students. Tobias was a beginning Spanish speaker who could not
rely on oral Spanish fluency to instruct his students. With a limited knowledge of Spanish, he
found that he physically modeled for the students much more than he did as a student teacher the
month prior, in South Carolina. This inspired a realization that he needed to boost the amount of
modeling he does with his students in his future classroom:
I think that's important, in any school, is to model it. So, I'll just say, you know, "mirar"
which means "watch" and I'll show them and they instantly know what to do. And, I think
a lot of times in American schools we just sit there and talk, talk, talk, when we should
just show them and then they would understand a lot better. So, it's kind of a blend of
oral communication and modeling what exactly it is you want them to do.
Cultural Comparison
Participants repeatedly shared observations of the culture within the school as compared
to school cultures they’ve experienced in the United States. For example, Penelope was struck
by how calmly the teachers guided the students to focus on quality, not quantity, which was in
contrast to the examples she had seen in South Carolina schools:
I don’t ever really take the time to say “Slow down. It’s OK to do your work slow.” It’s
more like: “OK, finish, finish, finish.” And here it’s like, the slower the better. It’s
interesting to watch the students, because there is no fast activity. The other day when we
were making bracelets and necklaces and drawing pictures, it took the whole day.
For some participants, this slow approach was a refreshing change from what they have
experienced in schools back home. However, Cierra expressed concern that this approach may
be causing these students to fall unnecessarily behind:
I’ve learned that the structure is completely different. At home, we spend a little time on
math, a little time on reading and all that. Here you just kind of go over a vowel, or a
consonant and a vowel. And, they put it together with words and then they just color it.
So, they’re doing more worksheets… They spend the first half of the day doing that. And
the second half might be pullout time, where you work individually on numbers and
vowels. But, it’s very repetitious. They do the same thing every day. Whereas we’re [in
the United States] always learning something new and trying to advance every day to get
the students to learn more. But sometimes I feel like these students are stuck in the same
place and are not really going anywhere.
Several participants remarked on how welcoming and positive the students were, which
stood in contrast to the perceived interest of students in the United States. In our group interview
at the end of the first week, Caitlyn typified the group sentiment about the students:
I think it’s really cool that not only the kids in your class love you and want to hug you,
but all the kids in the entire school… like, if you are standing by the door on the way out,
every single one of them will stop and hug you. And I think it’s really sweet because, “I
have no idea who you are, but thank you very much!” (laughs) I really like that.
In addition to commenting on the differences between this Guatemalan school and ones
they have experienced in the United States, some participants noticed a disturbing similarity.
Working one-on-one with students often led to a deeper understanding of the plight of these
children. The ugliness and implications of poverty were not lost on the
participants. As in our country, schools reflect the society around them. Problems with
alcoholism, child neglect and abuse were not uncommon in the lives of these students. Beatrice
found this out, firsthand:
There is this little boy named Jefferson… his sister was not present today. This is a scary
thought because their father is abusive. I hugged Jefferson for a very long time today
because he didn’t seem to want to let go. It hurts my heart to think this little boy is being
abused.
Tobias also noticed how the issues of poverty may be similar in Guatemala, as in the
United States:
A lot of the kids we're going to have, especially in Title I schools, are coming from the
same situations. We just may not think about that because we're in the United States.
Conclusions
The effective teacher of ELLs has strong content pedagogical knowledge, linguistic
understanding, and cultural awareness (Menken & Antunez, 2001; Lucas, Villegas, & Freedson-Gonzalez, 2008). Through on-site discussion and reflection, participants noted an increased
empathy for ELLs, an awareness of cultural differences, and emergent strategies for effectively
teaching ELLs. These findings represent a modest increase in their overall cultural awareness
and self-efficacy in their ability to teach ELLs. However, it is important to conduct future
research to see whether or not these participants will continue reflecting on what they saw and
experienced in Guatemala. Although many noted the impact this would have on them as future
teachers, will these lessons be forgotten? Follow-up interviews with these participants may
clarify the effect of this experience on their actual behaviors as in-service teachers.
Additionally, these interviews may provide valuable insight for how to enhance future study
tours and whether or not study tours like these have the potential to improve the efficacy of ELL
instruction.
References
Alfaro, C. (2008). Global student teaching experiences: Stories bridging cultural and
inter-cultural difference. Multicultural Education, 15(4), 20-26.
Colville-Hall, S., & Adamowicz-Hariasz, M. (2011). Franco-American teachers-in-training: A study of best practices in teaching and studying abroad. Frontiers: The
Interdisciplinary Journal of Study Abroad, 21, 275-288.
Gandara, P., Maxwell-Jolly, J. & Driscoll, A. (2005). Listening to teachers of English
language learners: A survey of California Teachers’ challenges, experiences, and
professional development needs. Santa Cruz, CA: The Center for the Future of Teaching
and Learning.
Horsford, S.D. & Sampson, C. (2013). High-ELL-growth states: Expanding funding
equity and opportunity for English Language Learners. Voices in Urban Education, 37,
47-54. Retrieved from
http://vue.annenberginstitute.org/sites/default/files/issuePDF/VUE37.pdf
Lincoln, Y. & Guba, E. (1985). Naturalistic inquiry. New York: Sage Publications.
Lucas, T., Villegas, A.M., & Freedson-Gonzalez, M. (2008). Linguistically
responsive teacher education: Preparing classroom teachers to teach English language
learners. Journal of Teacher Education, 59(4), 361-373.
Meaney, K., Bohler, H., Kopf, K., Hernandez, L., & Scott, L. (2008). Service-learning
and pre-service educators' cultural competence for teaching: An exploratory
study. Journal of Experiential Education, 31(2), 189-208.
Menchú, R. (1983). I, Rigoberta Menchú: An Indian woman in Guatemala. London:
Verso Books.
Menken, K., & Antunez, B. (2001). An overview of the preparation and certification of
teachers working with Limited English Proficient (LEP) students. Washington, DC:
National Clearinghouse for Bilingual Education.
O’Neal, D.D., Ringler, M. & Rodriguez, D. (2008). Teachers’ perceptions of their
preparation for teaching linguistically and culturally diverse learners in rural eastern
North Carolina. The Rural Educator, 30(1), 5-13.
Palmer, D.K., & Menard-Warwick, J. (2012). Short-term study abroad for Texas
preservice teachers: On the road from empathy to critical awareness. Multicultural
Education, 19(3), 17-26.
Patton, M.Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand
Oaks, CA: Sage Publications.
Quezada, R.L., & Alfaro, C. (2007). Biliteracy teachers' self-reflections of their
accounts while student teaching abroad: Speaking from “the other side.” Teacher
Education Quarterly, 34(1), 95-113.
Spradley, J.P. (1980). Participant observation. Orlando, FL: Harcourt Brace
Jovanovich.
Stebleton, M., Cherney, B., & Soria, K. (2013). Going global: College students'
international experiences and self-perceived intercultural competencies. Frontiers: The
Interdisciplinary Journal of Study Abroad, 22, 1-24.
Tarrant, M.A. (2010). A conceptual framework for exploring the role of studies
abroad in nurturing global citizenship. Journal of Studies in International Education, 14,
433-451.
About the Author:
Lee Vartanian, Ph.D., is an Associate Professor of Education at Lander University, where he
serves as Program Director of the Elementary Education Program and Campus Director of
Teaching Fellows. Since his first trip in 2008, he has taken two groups of students for extended
teaching experiences at an elementary school in Central Guatemala.
Transitioning to ACT Standards in South Carolina Schools:
Insights and Techniques for School Districts
Howard V. Coleman
Jeremy Dickerson
Coastal Carolina University
Cindy Ambrose
Edi Cox
Dottie Brown
Horry County Schools
Abstract
The adoption of ACT Standards in South Carolina for the 2015/2016 school year is requiring
rapid and systemic change for teachers, local school leaders, district administrators, and
university teacher/administrator preparation programs. This article presents descriptions of the
new standards and assessments, provides an overview of the current transition to ACT
Standards, and emphasizes the importance of planning and implementing relevant, high quality
professional development for teachers and local school leaders. The article concludes with
examples of techniques from the Horry County Schools ACT planning and implementation
process.
Introduction
Like many other states, South Carolina’s educational system continues to review and
reflect on different sets of standards to be used as benchmarks and curriculum guides in K-12
schools. The Common Core State Standards (CCSS) for English Language Arts and
Mathematics were adopted by the South Carolina State Department of Education in July, 2010
(SCSDE, 2012). The implementation schedule for the CCSS was a two-year transition period
from 2011 through 2013, followed by a 2013/2014 bridge year for instructional purposes, and
then full implementation by the 2014/2015 school year. After one year of full implementation,
the South Carolina General Assembly passed a law (Act 200) in 2014 requiring new ELA and
mathematics standards in South Carolina schools for the 2015/2016 school year to replace the
Common Core State Standards. This paper presents an overview of the current status of the
South Carolina student achievement standards, which will include the ACT Aspire,
the ACT Plus Writing, the ACT WorkKeys, the South Carolina Palmetto Assessment of State Standards
(SCPASS) tests in science and social studies, and the End-of-Course Examination Program
(EOCEP) subject assessments. The new standards will require university teacher education
programs to modify their curricula to effectively prepare educational leaders and teachers for
these changes.
An Overview: The ACT Standards
Act 200, passed by the General Assembly and signed by the Governor in June 2014, sets
forth numerous requirements for the statewide assessment system that includes: (a) being able to
compare the performance of South Carolina students to the performance of students in other
states on comparable standards; (b) being comprehensive, cohesive and able to signal a student’s
preparedness for the next educational level; (c) providing a clear indication of a student’s
preparedness for postsecondary success in college or career; and (d) satisfying federal and state
accountability requirements.
In an effort to meet these requirements for a statewide assessment system, South Carolina
awarded The ACT Company a contract in November, 2014 for ELA and math assessments in
grades 3 through 8 and grade 11. South Carolina is using three ACT Assessment Programs to
measure student achievement of the College and Career Ready Standards: (a) ACT Aspire; (b)
ACT College-Readiness Tests and (c) ACT WorkKeys (ACT, 2015). The tests are based on
descriptions of the essential skills and knowledge students need to become ready for college and
future careers. In addition, the Data Recognition Corporation (DRC) was selected as the testing
contractor for the South Carolina Science and Social Studies (SCPASS) and the End-of-Course
Examination (EOCEP) testing programs.
ACT Aspire
The ACT Aspire assessment covers five content areas: (a) English; (b) mathematics; (c) reading;
(d) science; and (e) direct writing at all grade levels. The Aspire assessment includes a
vertically scaled battery of achievement tests designed to measure student growth for Grades 3
through 8 for the five content areas and measures students’ progress toward college and career
readiness. The scale scores are linked to college and career data through scores on the ACT
College and Career Readiness assessment and the ACT National Career Readiness Certificate
program (2015).
The ACT Aspire score report contains information about skills that are designed to assist
students in answering three questions: (a) where do I stand right now; (b) how can I make goals
for the future; and (c) am I on target for high school, college and career. Reporting categories in
ACT Aspire include science, technology, engineering, and mathematics (STEM), justification
and explanation in mathematics, progress with text complexity in reading, and a progress toward
career readiness indicator (Aspire Summative Assessment, 2015). Aspire tests consist of a
writing prompt, multiple choice, and constructed response items.
ACT College-Readiness Tests
The ACT College-Readiness tests measure student achievement related to high school
subject-area curricula. The ACT College-Readiness Benchmark scores on the subject-area tests
represent the level of achievement required for students to have a 50% chance of obtaining a B
or higher or about a 75% chance of obtaining a C or higher in corresponding first-year college
courses (ACT College-Readiness, 2015). These college courses include English composition,
college algebra, introductory social science courses, and biology. The ACT English test
measures standard written English and rhetorical skills. The ACT math test measures
mathematical skills students usually acquire in courses taken through grade 11. The ACT
reading test measures reading comprehension. The ACT science test measures
the interpretation, analysis, evaluation, reasoning, and problem-solving skills required in the
natural sciences. The ACT Plus writing test measures writing skills emphasized in high school
English classes and in entry-level college composition courses. According to a research study
conducted by ACT Incorporated, a relationship was found between a student's ACT composite
score and the possibility of him or her earning a college degree (Radunzel & Noble, 2003).
The ACT College-Readiness Tests are being administered to all SC 11th grade students
with the exception of students taking alternate assessments. The ACT assessment consists of
multiple-choice items and one writing prompt. ACT operationally defines “college-readiness” on
the basis of student scores in each of the subtests areas that correspond to “skills” in entry-level
college courses in English, algebra, social science and the humanities (Using ACT Results,
2015). The ACT replaces the High School Assessment Program (HSAP) test that has been
given to high school students in previous years.
ACT Workkeys
ACT WorkKeys is a job skills assessment system that is designed to help employers
select, hire, train, develop, and retain a high-performance workforce (2015). As part of ACT's
Work Readiness System, ACT WorkKeys is designed to assist people in high schools, colleges,
professional associations, businesses, and government agencies in building their skills to develop
successful career pathways. Successful completion of ACT WorkKeys assessments can lead to
earning the ACT’s National Career Readiness Certificate (NCRC, 2015). The ACT NCRC is
issued at four levels and is intended to measure essential work skills needed for success in jobs in
a variety of industries and occupations. To earn an ACT NCRC, individuals must successfully
complete the following three ACT WorkKeys assessments: (a) applied mathematics; (b) locating
information; and (c) reading for information. The assessments are designed to measure a range of
work skills and abilities that include performing basic mathematical operations, reading and
understanding documents, finding data and information in graphics, and applying information
derived from graphics to work-related problems.
SC PASS and EOCEP Standards
Data Recognition Corporation (DRC) was selected as the contractor for the 2015 South
Carolina Palmetto Assessment of State Standards (SCPASS) Science and Social Studies and
End-of-Course Examination (EOCEP) testing programs. SCPASS is a statewide assessment
administered to students in grades four through eight. All students in these grade levels are
required to take the SCPASS except for those who qualify for the SC Alternate Assessment.
The EOCEP includes assessments in English 1, Algebra I, US History and Biology for
grades seven through twelve. All public middle school, high school, alternative school, virtual
school, adult education, and home school students who are enrolled in courses in which the
academic standards corresponding to the EOCEP tests are taught must take the appropriate tests.
Planning and Implementation
The transition from the 2014/2015 Common Core State Standards (CCSS) to the ACT
Standards follows a recent change in SC from the 2011 Accountability Standards to the CCSS.
Teachers and school leaders are now implementing another new set of standards, while at the
same time continuing to be required to assess students on the SCPASS and EOCEP standards.
School districts are facing immediate professional development needs for teachers and
administrators to understand the ACT Standards and to adjust, modify and adapt instruction,
lesson plans and support services to promote student achievement of the standards. School
system leaders are advised to establish and implement ongoing staff development to assist
educators in gaining comprehensive knowledge of the standards, the assessments, and effective
intervention strategies to help all students in meeting the standards.
Professional Development
Teachers and administrators will need sustained professional development on the ACT
Standards and assessments to effectively implement the new accountability model. The majority
of the available research on effective professional development focuses on its relationship to
student achievement (Marzano, 2003; Hord, 2004; Kedzior & Fifield, 2004). Sparks (2002) states
that the quality of teaching is determined by the teachers’ and principals’ learning via
professional development that is aligned with content, standards, and assessment, and that this is
the most effective means to directly impact student achievement. Numerous studies have
revealed that when teachers and school leaders receive well-designed professional development
for an average of forty-nine hours over a six to twelve month time period, student achievement
can increase by as much as twenty-one percentile points (Yoon, Duncan, Lee, Scarloss &
Shapley, 2007).
Richardson (2003) notes that effective professional development for transitioning to new
standards and assessments must be district-wide, long term with follow up, have adequate funds
for all materials and substitute teachers, encourage agreement among all participants, and have
repeated opportunities to practice new instructional applications (p. 402). Studies by Joyce and
Showers (2002) have revealed that it takes teachers an average of 20 separate instances of practice
to master new skills. Professional development that focuses on teachers analyzing the specific
skills, content and concepts they’ll teach in their discipline has been shown to improve both
teacher practice and student learning (Darling-Hammond et al., 2009).
To be successful, professional development must support teachers' and administrators'
understanding of the ACT Standards and assessments. The ACT Company provides a Learning
ACT service with free online product training via pre-recorded on demand videos, open
enrollment webinars and teacher-requested webinars. In addition, school districts may purchase
in-person professional development sessions on curriculum, instruction and assessment for
teachers and administrators.
One SC District’s Action Plan
Horry County Schools (HCS) is a large district in northeastern South Carolina. With 56
schools and over 42,000 students, its planning and transitioning process to the ACT Standards
and assessments is in progress. The analysis of these efforts may provide ideas for other districts
and for university educator programs.
Horry County Schools continuously communicated with internal and external
stakeholders to share the rigors of these new assessments. Released testing items were shared
with various stakeholder group meetings that included parents, the business community, school
advisory groups, and the board of education. The school system repeatedly shared the message
that while the district did not expect to perform as well on these assessments as it did prior to
their implementation, it was the same district, doing the same good work to educate students, and
as with any new assessment it would improve with repeated administrations. Horry County
Schools’ responses to and reflections on the ACT assessments are listed below.
ACT
Like many other districts, Horry County Schools was very familiar
with the ACT assessment and its parameters, since the district had been utilizing the Explore, Plan and
ACT assessment system (EPAS) and had encouraged high school students to take the ACT as a
college entrance examination. The school district had never administered the ACT to an entire
class of students prior to the state’s selection of the ACT as the assessment for high school juniors.
The school district made the decision to administer the optional science portion of the assessment
so that the students could utilize the results for college admissions. Horry County Schools also
decided to emphasize the strategies of timed writing, non-fiction texts, constructed response, and
claims and evidence to prepare students for the assessment.
ACT Aspire
There was a limited time period between the announcement of the ACT Aspire
assessment and administration, so Horry County Schools had to work quickly to prepare
students. Writing instruction was immediately targeted as a high priority. ACT Aspire includes
a timed writing prompt, a significant change from the South Carolina Palmetto Assessment
of State Standards (PASS) test, which is untimed. Students were given 30 minutes to complete a
writing prompt that required them to state a claim and support it with evidence or ideas.
Teachers had traditionally required students to generate a graphic organizer and a rough
draft before writing, so this was a huge paradigm shift to simply begin writing to a prompt. In
addition, each grade level focused on a different type of writing, which was also a new approach.
Third and sixth grade students were given a reflective narrative prompt, fourth and seventh
graders wrote to an analytical expository prompt, and fifth and eighth graders were required to
write a persuasive argument. The expectations for the writing were also different: ACT Aspire
used a more holistic rubric that focused on the content and ideas communicated, whereas the PASS
writing prompts had placed more emphasis on the structure and mechanics of
writing.
To prepare for these changes, committees of teachers were formed and writing prompts,
which mirrored the examples released from ACT, were developed and distributed to schools.
Each school administered three practice prompts in the time between the announcement of the
assessment and the administration of the actual test. Instructional coaches from each school met
to analyze the rubric released from ACT and collectively score and calibrate after the first
practice prompt. Coaching steps for teachers were generated from the analysis of those scores.
Another significant difference in these assessments required teachers to prepare students
to include constructed responses in both reading and math. In math, test items were more
conceptual with increased depth of knowledge levels. Many items were multi-operational and
multi-step, so more emphasis needed to be placed on conceptual understanding versus procedural
knowledge by both the teachers and the students. Teachers were asked to include open-ended
problems for students to solve on a more frequent basis and to provide frequent opportunities for
constructed written math responses. This approach required justification and explanation of
problem solving and demonstrated students’ conceptual understanding of the mathematical
functions and operations used.
Students were also given long nonfiction passages to read with numerous assessment
items to prepare them for the assessment. Coaches began working with teachers in social studies
and science to ensure that students were able to construct written responses that included a claim
and supporting evidence. The released items on the ACT Aspire web site assisted coaches and
teachers in their instructional focus, especially in depth of knowledge. Overall, it was evident
that the rigor of this assessment was a new experience for teachers and students.
ACT WorkKeys
Two schools in Horry County, The Academy for Arts Sciences and Technology and The
Academy for Technology and Academics, tested seniors with WorkKeys as a part of their second
year in the Work Ready SC initiative. All high schools had the opportunity to utilize Career
Ready 101 modules available through the ACT WorkKeys web site. Schools met with students
to explain the value of the WorkKeys results and the value of the certificate they could earn via
this assessment. The certificate levels and their value in business and industry were also
communicated to all students. All schools offered ACT preparatory sessions after school and on
Saturdays. Elementary, middle and high school principals allotted time in each of their monthly
meetings to review test blueprints and to discuss resources such as the NWEA’s MAP College
Readiness Linking Study and released items. Instructional coaches took time at their monthly
meetings to review resources and to develop strategies to ensure that students were familiar with
the test items and requirements.
Conclusion
The process of adapting to new accountability standards and assessments can be an
overwhelming process for educators. As South Carolina continues the implementation of new
ACT standards in public schools, it will be critical for teachers and school leaders to develop
ongoing district plans to meet the challenges of these accountability changes. This can be
accomplished by including practical, application-based professional development that is in
alignment with the assessments and the new standards. Horry County Schools’ action plan
provides an example of how educators can respond to the ever-changing assessment era in public
schools.
References
ACT National Career Readiness Certificate. (2015). Retrieved from
http://www.act.org/certificate/
ACT College and Career Readiness Standards. (2015). Retrieved from
http://www.act.org/standard/
ACT Aspire. (2015). Summative Test Assessment Bulletin #1. Retrieved from
http://www.discoveractaspire.org/pdf/2014_ACT
Darling-Hammond, L., Chung Wei, R., Andree, A., & Richardson, N. (2009).
Professional learning in the learning profession: A status report on teacher development
in the United States and abroad. Oxford, OH: National Staff Development Council.
Hord, S. M. (Ed.). (2004). Learning together, leading together: Changing schools
through professional learning communities. New York, NY: Teachers College Press.
Joyce, B. & Showers, B. (2002). Student achievement through staff development.
Alexandria, VA: Association for Supervision and Curriculum Development.
Kedzior, M., & Fifield, S. (2004). Teacher professional development. Education Policy
Brief, 15(21), 76–97.
Marzano, R. J. (2003). What works in schools: Translating research into
action. Alexandria, VA: Association for Supervision and Curriculum Development.
Radunzel, J., & Noble, J. (2012, April). Tracking 2003 ACT-tested high school graduates:
College readiness, enrollment, and long-term success. Retrieved
from http://www.act.org/research/researchers/reports/pdf/ACT_RR2012-2.pdf
Richardson, V. (2003). The dilemmas of professional development. Phi Delta Kappan,
84(5), 401–406.
South Carolina General Assembly. (2013-2014 Session). Act 200. Retrieved from
http://www.scstatehouse.gov/sess120_2013-2014/bills/3893.htm
South Carolina State Department of Education (2015). Retrieved from
https://ed.sc.gov/scde-grant-opportunities/NewSCStandards.cfm
Sparks, D. (2002). Designing powerful professional development for teachers and
principals. Oxford, OH: National Staff Development Council.
Using ACT Results. (2015). Retrieved from
http://www.act.org/aap/pdf/Using-Your-ACT-Results.pdf
Yoon, K. S., Duncan, T., Lee, S., Scarloss, B., & Shapley, K. (2007). Reviewing
the evidence on how teacher professional development affects student achievement.
Issues and Answers No. 033. REL Southwest. Retrieved from
http://ies.ed.gov/ncee/edlabs/regions/southwest/pdf/rel_2007033_sum.pdf
About the Authors:
Howard V. Coleman is an Associate Professor of Educational Leadership in the College of
Education at Coastal Carolina University. He has served as a high school principal, a
superintendent and as a professional consultant for public schools, government agencies and
private companies. Coleman has published articles and book chapters on leadership, technology
and instructional assessment.
Jeremy Dickerson is a Professor of Instructional Technology in the College of Education at
Coastal Carolina University. He has experience teaching and working in technology management
in universities and serving as an educational consultant in business and industry. Dickerson has
published numerous journal articles and book chapters and has presented at professional
conferences in the United States and abroad.
Cynthia Ambrose has been a teacher, assistant principal, principal, and district administrator in
public schools in South Carolina for 29 years. She currently serves as Chief Academic Officer
for Horry County Schools and was named South Carolina's District Administrator of the Year in
2011.
Edi Cox has been a high school business teacher and central office technology specialist in
public schools. She currently serves as the Executive Director of Online Learning and
Instructional Technology in Horry County Schools and has led the development of the Virtual
School Program for the school district.
Dottie Brown has been a teacher, a district instructional coach, an assistant principal, a principal,
and a consultant for schools across the country. She currently serves as Executive Director for
Elementary Schools in Horry County Schools and led a literacy reform effort for the school
district.
Building Problem Solving Skills
Gary Bradley
University of South Carolina Upstate
Abstract
Elementary, middle-grades, and secondary preservice teachers enrolled in mathematics methods
courses focused on problem solving worked with non-routine problems throughout the semester.
To assess the efficacy of the problem-solving focus, the preservice teachers were
given a non-routine problem at the beginning of the course and the same non-routine problem at
the end of the course. Although some growth in problem solving strategies was seen, there was
no increase in the number of correct answers to the problem. The findings indicate no
significant relationship between the preservice teachers' math background or self-rated
efficacy and their effective use of problem solving skills. However, positive changes were noted
in students' attitudes toward and interest in problem solving. In light of the data collected,
the capacity of a single math methods course to build the problem solving skills of preservice
teachers to a level where they are comfortable using and teaching these skills appears limited.
Introduction
Problem solving is an essential component of mathematics education. It is emphasized in
countries that consistently outperform the United States in middle level and secondary
mathematics. The Common Core State Standards (CCSS) call for K-12 teachers in the United
States to "engage all students in mathematical problem solving" (Young, 2013). Problem solving
also increases mathematical understanding, facilitates creative solutions, and develops the
process of inquiry. Preservice teachers need these skills to effectively present problem solving
strategies, scaffold the learning, and correctly evaluate their students' progress (McKeachie,
2012).
Problem solving strategies are widely used in Asian countries such as China, Japan,
and Singapore. The United States consistently ranks below these Asian countries in math test
scores (National Center for Education Statistics, 2012). Researchers believe that a likely reason
for this difference is that Asian students spend significantly more time on problem solving
than students in the US (Stigler & Hiebert, 1999). Students who spend
more time in problem solving may be able to close the gap in math test scores with those of
Asian countries.
Problem solving strategies are integrated in the objectives of the CCSS. The CCSS
standards for mathematics practice describe the areas of expertise that preservice teachers need
to develop in their students. “These practices rest on important ‘processes and proficiencies’
with longstanding importance in mathematics education. The first of these are the National
Council of Teachers of Mathematics (NCTM) process standards of problem solving…”
(Common Core State Standards Initiative, 2012). Although South Carolina has voted to suspend its
implementation of the CCSS, the current SC standards under consideration are nearly identical to
the Common Core. Therefore, preservice teachers need to have the problem solving skills needed
to confidently teach their students these standards.
Problem solving assists in the development of mathematical understanding. Cha, Kwon,
and Lee (2007) demonstrated that through problem solving activities such as solving puzzles,
students understood abstract concepts more thoroughly. The problem solving process can be
referred to as an open-ended investigative task that can help students with their own
understanding. These types of problems provide students with opportunities for finding new and
creative solutions. Preservice teachers can develop a balance of both the problem solving skills
and the math procedural skills necessary for successful mathematics learning (Kilpatrick, 2009).
Polya’s work in the 1950’s and 1960’s laid the foundation of problem solving strategies
(Polya, 1954, 1963). He claimed that new knowledge in mathematics can be obtained by using
an appropriate problem along with the previous knowledge of the student. Researchers such as
Hatfield, in 1978, further defined and tested Polya’s work by distinguishing three types of
teaching in problem solving: teaching for problem solving, teaching around problem solving,
and teaching inside problem solving. The first type emphasizes the educational material,
theorems, and skills. The second type is focused on the teacher who helps lead students towards
successful problem solving. The third type includes the presentation of new mathematics
content. These three types of problem solving are consistent with the problem solving strategies taught in
successful middle level and secondary math classrooms.
Additional researchers found that promoting various solutions to problems has the
potential to change the classroom environment from one that emphasizes procedures to one that
emphasizes process, enhances the quality of mathematics lessons, and contributes to students'
conceptual learning (Stigler & Hiebert, 1999; Silver & Kenney, 1995; Lajoie, 1995). Boaler
(1998) found that students who learned mathematics through open-ended activities developed
conceptual understanding while students who followed a traditional approach developed
procedural understanding. Yackel and Cobb (1996) concluded that classroom cultures which
promote different solutions to problems can contribute to the development of students’ academic
autonomy. This classroom culture can also increase students’ mathematical self-efficacy.
Teaching Problem Solving in a Math Methods Course
A math methods course is a logical place to teach problem solving skills. The successes
and challenges of problem solving experienced by the students in the class can help guide and
reaffirm the preservice teachers' own experience in the problem solving process. A math
methods course can help preservice teachers learn different problem solving strategies and the
word clues that can help identify which problem solving strategy to use. Entire lessons can be
created by the preservice teachers that display their ability to transfer their problem solving skills
to their students. Preservice teachers can share these lessons in a collaborative classroom
atmosphere and post them online facilitating additional collaboration (Kuh, Kinzie, Schuh, &
Whitt, 2010). In a math methods course where academic risk taking is fostered, preservice
teachers can solve non-routine problems that highlight many problem solving strategies.
Teaching problem solving in a math methods course is not free of challenges. Additional
time must be taken to pose and solve the problem (Sternberg, 1996). It also requires preservice
teachers to draw from a solid math skill set which they may not possess. Preservice teachers are
often not accustomed to thinking creatively because they may not have been required to use
problem solving skills in their own academic experience. They often discount problem solving
as something unnecessary and complicated.
Many preservice teachers face barriers to successfully learning and implementing
problem solving heuristics. These barriers stem from the preservice teachers’ student culture,
academic environment, and self-efficacy. The student culture manifests itself in the implicit
understanding of the students’ classroom responsibilities and behavioral expectations. These
student perceptions are often referred to as a didactical contract. If students are given a test or
assignment that violates this contract, they may not engage in the task in the manner that the
teacher anticipates, or may refuse to participate altogether (Brousseau, 1997). Stigler and Hiebert
(1999) found that in the United States, teachers of middle grade students rarely engage their
students in solving problems with multiple solutions. If preservice teachers have not had much
exposure to problem solving in their academic experience or have not experienced other math
teachers using problem solving in their clinical experiences, preservice teachers are likely to see
the introduction of problem solving in a math methods course as a violation of this didactical
contract.
The current academic environment in public K-12 schools can be a barrier to problem
solving. Teachers, principals, schools, and entire states are under tremendous pressure to have
their students perform at or above grade level. The No Child Left Behind legislation and the
Race to the Top program, although well-meaning, have placed a great deal of emphasis on
improved scores on standardized tests. In many states teachers’ pay and promotion are heavily
dependent on how well their students perform. Teachers may be less likely to take the time to
teach mathematical content using problem solving to their students if they believe that the
end-of-grade test is heavily biased toward procedures.
Given the strong support of problem solving in the literature and in practice, a research
study was designed to measure the impact of teaching problem solving skills to preservice
teachers in a math methods course. Qualitative and quantitative data were collected at the
beginning and end of the course to measure the change of problem solving skills and student
opinions about problem solving. In addition, data were collected to determine the strength of a
relationship between the number of math courses the preservice teachers took and their self-rated
efficacy on problem solving.
Methodology
This study examines how preservice teachers' problem solving strategies changed during
a semester-long course. These data were compared with the number of math courses the
students had taken and their self-rated math efficacy to look for relationships with effective
problem solving strategies. The preservice teachers also responded to questions relating to
the value they placed on problem solving.
Thirty-five preservice teachers from three sections of a required math methods course
were purposefully selected to participate in this study. One professor taught two sections for the
28 elementary majors and the other professor taught the seven middle and secondary majors.
Both the elementary and middle level / secondary classes were 3-credit-hour courses. The
preservice teachers were all seniors who were completing their final semester of coursework
before beginning their student teaching experience.
At the beginning of the course the preservice teachers were asked to complete a short
mathematics survey. The survey asked the students how many math classes they had taken in high
school and college and how they rated their mathematical ability on a scale of 1 to 5.
Preservice teachers were then also asked to complete a problem solving activity. Students were
requested to show all scratch work, even their work that did not lead to the final answer. Students
were given 12 minutes to complete the question. Students were given the same question at the
end of the semester. A correct solution to the problem solving task was not presented by the
professors until after the student responses from the post-test were collected.
During each weekly class a problem solving task was presented lasting approximately 20
minutes. Preservice teachers solved or attempted to solve the problems individually for 5-7
minutes and then with a partner for 5-7 minutes. Solution strategies were shared and explained
by the instructor. The weekly problems were selected to highlight the use of the 9 problem
solving strategies listed in Table 1.
Table 1. The 9 problem solving strategies presented during the course
1. Look for a Pattern
2. Work Backwards
3. Make an Organized List
4. Use a Table, Chart, or Tree Diagram
5. Draw a Picture
6. Guess and Check
7. Solve a Simpler Problem
8. Write a Mathematical Sentence or Formula
9. Use Logical Reasoning
Students were encouraged to use multiple problem solving strategies in solving their
problem. Their strategies were recorded on a large piece of chart paper or a PowerPoint slide. At
the conclusion of the exercise, the problem solving process was reviewed. Based on the work of
Polya (1963), the following process was used in each class:
a) Read / understand the problem
b) Devise a strategy / plan for solving the problem
c) Carry out the plan
d) Determine if the answer is appropriate.
Students were required to record their solutions and a correct solution (if it differed from
their work) for each problem solving task in a problem solving journal. This activity facilitated
the reflection on the problem solving skills they had just used. Preservice teachers were
encouraged to write comments in their journal on the problem solving skills that they found new
or useful. Following the journal entries, preservice teachers were asked how they might present
the problem solving activity they had just completed to their own students.
At the end of the 15-week semester, preservice teachers were once again presented with
the initial problem solving task and asked to arrive at an appropriate solution. Students were given
a maximum of 12 minutes to complete the problem. Correct solution strategies to the problem
were presented by preservice teachers and the instructor. The students could compare the
strategy that they used and the correct solution strategies presented. Multiple strategies were
presented as legitimate ways to solve the problem. The pre-test / post-test question read, "An
elevator holds 20 children and 15 adults. If 12 children get on the elevator, how many adults can
still fit in the elevator?" This problem and the other problems used in this study were collected
from the Palette of Problems found in the September 2009 edition of Mathematics Teaching in the
Middle School.
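The multiplicative (proportional) strategy emphasized in the course can be made concrete on this very task. The Python sketch below is an illustration added here, not material from the study; it treats each child as a fraction of an "adult unit" of elevator capacity:

```python
from fractions import Fraction

# The elevator's capacity is 20 children OR 15 adults, so one child
# occupies 15/20 = 3/4 of an "adult unit" of capacity.
child_in_adult_units = Fraction(15, 20)   # 3/4

# Twelve children therefore use 12 * 3/4 = 9 adult units of capacity.
used = 12 * child_in_adult_units

# The remaining capacity, measured in adults:
adults_that_fit = 15 - used
print(adults_that_fit)  # 6
```

This proportional trade between children and adults is exactly the reasoning the study's Multiplicative rubric category rewards; an additive (1:1 trade) approach would give a different, incorrect answer.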
The students were given a short survey asking them to relate their impressions of problem
solving during this semester. Students also made unprompted comments during the class about
using problem solving. These comments shed light on the value that preservice teachers give
problem solving in their academic experience and in their future classroom.
Each of the instructors evaluated all of the student responses (both pre-tests and
post-tests) with the following rubric: (5) Mature Strategies included both multiplicative and
additive strategies used effectively; (4) Multiplicative Strategies included the use of ratios
and proportions; (3) Additive Strategies included basic addition and subtraction skills; (2)
Emerging Strategies included some logical attempt to solve the problem but with no clear skill
set used; (1) Unknown Strategies, in which the work followed no recognizable pattern. The
instructors then compared their evaluations and discussed any differences they encountered.
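The article reports only that the instructors compared their evaluations and discussed differences. A standard way to quantify such inter-rater agreement is Cohen's kappa; since the raw ratings are not reported, the sketch below uses invented rubric scores purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (1-5) for ten responses -- the study's
# actual ratings are not reported, so these are invented for illustration.
instructor_a = [5, 4, 4, 3, 2, 1, 4, 3, 5, 2]
instructor_b = [5, 4, 3, 3, 2, 1, 4, 3, 4, 2]
print(round(cohens_kappa(instructor_a, instructor_b), 2))
```

Reporting kappa alongside the rubric scores would let readers judge how reliable the strategy classifications in Tables 2 and 4 are.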
Table 2. Problem Solving Strategy Categories

Code  Strategy        Strategy Description
M     Mature          Student used effective PS strategies
X     Multiplicative  Set up a proportion and solved traditionally; can't tell reasoning beyond this
A     Additive        Only evidence of additive strategies throughout the response
T     Transition      Incorrect ratios used and/or indication that a 1:1 trade won't work and/or some use of multiplication and/or division
N     Novice          Understands 1 child does not equal 1 adult; had the correct ratio but could not work with it, or could not find the correct ratio but could work with one they assigned
U     Unknown         The student used PS strategies that were not legible or logical
Findings
Significant differences were found between the percentages of elementary and middle
level / secondary students who gave the correct answer to the problem on the pretest: 70% of
middle level / secondary students answered correctly, while only 40% of the elementary
students did. Surprisingly, the percentage of correct answers among both the middle
level / secondary and elementary students declined by 10% on the post-test.
Table 3. Elementary and Middle Level / Secondary Survey and Test Data

Survey Categories, Pre-Test and Post-Test          Elementary    ML / Secondary
Average Number of High School Math Courses            3.95            4.14
Number of College Math Courses                        4.15            7.00
Self-rated Mathematical Ability (Scale 1-5)           3.33            3.83
Percentage of Correct Answers Before Treatment         40%             70%
Percentage of Correct Answers After Treatment          30%             60%
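The article describes the pretest gap as significant but does not report a test statistic. As a hedged illustration only, a pooled two-proportion z-test on counts approximated from the reported percentages (5 of 7 ML/secondary, 11 of 28 elementary; these are not the study's raw data) might look like:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Counts approximated from the reported percentages (not the raw data):
# ML / secondary: 5 of 7 correct (~71%); elementary: 11 of 28 correct (~39%).
z, p = two_proportion_ztest(5, 7, 11, 28)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With groups this small (n = 7 and n = 28), an exact test such as Fisher's would be more appropriate than the normal approximation sketched here, which is one reason reporting the underlying test matters.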
An analysis of the quantitative data on the number of classes taken by both groups and
their scores on the post-test was made. The R2 coefficient was calculated as 0.03 for both
groups, supporting the conclusion that there was no significant correlation between these two
variables. The correlation coefficient was also calculated for students' self-rated math
ability and their score on the post-test; its low value of 0.04 supports the conclusion that
there is also no relationship between these two variables. In short, for both groups, neither
the number of math classes taken nor self-rated math ability was significantly related to
answering the test problem correctly.
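The R2 values reported here are the squares of Pearson correlation coefficients. For readers unfamiliar with the calculation, the sketch below computes r from scratch; the data are invented for illustration, since the study's raw scores are not available:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient; the reported R2 is r squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data (the study's raw scores are not available): number of
# math courses taken vs. rubric score on the post-test for eight students.
courses = [3, 4, 4, 5, 6, 3, 7, 5]
scores = [2, 5, 1, 3, 4, 4, 2, 3]
r = pearson_r(courses, scores)
print(f"r = {r:.2f}, R2 = {r * r:.2f}")
```

An R2 near zero, as in the study, means the predictor explains almost none of the variance in post-test performance.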
Movement was seen in the problem solving strategies used by the elementary preservice
teachers between the pretest and the post-test. Although the number of preservice teachers
using Mature problem solving methods decreased by 2, four preservice teachers moved out of the
Unknown category. These 4 students moved into the Multiplicative and Transition categories.
While the middle level / secondary students remained
mostly unchanged, one student did change from the Mature to the Multiplicative problem solving
strategy.
Table 4. Summary of strategies used to solve pre-test and post-test

Problem Solving Question          Elementary Students               ML / Secondary Students
Pre-Test                          M=9, X=7, A=1, N=2, T=3, U=6      M=5, X=2, A=0, N=0, T=0, U=0
Post-Test                         M=7, X=11, A=0, N=2, T=5, U=2     M=4, X=3, A=0, N=0, T=0, U=0
Change in Problem Solving Level   M=-2, X=3, A=-1, N=0, T=2, U=-4   M=-1, X=1, A=0, N=0, T=0, U=0
The free response section of the survey was revealing. The comments tended to fall into
three categories. Most students saw value in learning problem solving strategies.
One student said, “I feel like this course has given me many ideas for activities to teach math,
and given me a better attitude toward teaching math.” The second category of comments shared
that although the students felt that they understood problem solving better after taking the course,
they did not feel completely ready to present problem solving strategies to their class. This
comment was reflective of the concern shared by several students, “I liked the non-routine
problems but I am just not comfortable with math without a lot more practice.” The third
category was a general discomfort with the non-routine nature of problem solving. One student
said, “The problems seemed to create more confusion than help.” Student comments were fairly
equally distributed among these three groups.
Conclusions
The disconnect between the number of math courses taken and the lack of problem
solving skills observed in the data for both elementary and middle level / secondary preservice
teachers may speak to a lack of problem solving exercises in these students' K-12 math courses.
A procedural mindset of mathematics instruction is prevalent in K-12 education. Cooperative
efforts between university education departments and math departments may help to address this
important issue.
Significant change was seen in the problem solving strategies used by the elementary
preservice teachers. The greatest area of growth was in the multiplicative problem solving
strategy. This may be due to the nature of the problem used in the pre-test and post-test or to the
problem solving emphasis in the methods course. Further research is needed to determine if
increases in Additive or in Mature problem solving strategies may be observed.
Several students commented that these problem solving exercises seemed to produce
more confusion and frustration than help. Students seemed to be confused by the number of
different ways that a non-routine problem can be solved. On several occasions the preservice
teachers asked the instructor which problem solving strategy is the best one to use. The
instructor pointed out that any strategy that leads to a valid answer should be supported by the
teacher. This could again be the result of a procedural mindset that researchers such as Stigler
and Hiebert point out as a weakness in the United States math education system (Stigler &
Hiebert, 1999).
One student suggested that problem solving should be introduced in other courses: “I feel
that this class would be better if paired with Math 232 & 233.” This exemplifies the need for the
education department to work more closely with the math department to align their curriculum.
When topics such as problem solving are covered in both education methods courses and content
courses, students will see the value of problem solving and have additional practice in using
problem solving strategies.
Further research may explore programs that incorporate problem solving skills in math
content courses and education courses. This research supports the conclusion that problem
solving cannot be effectively taught in a single methods course. However, gains were seen both
in student attitudes toward problem solving and in students' use of effective problem solving
strategies, even when problem solving is presented in just one course.
References
Boaler, J. (1998). Open and closed mathematics: Student experiences and understandings.
Journal for Research in Mathematics Education, 29 (1), 41-62.
Brousseau, G. (1997). Theory of didactical situations in mathematics, 1970-1990. Dordrecht, the
Netherlands: Kluwer.
Cha, S., Kwon, D., & Lee, W. (2007, October). Using puzzles: problem-solving and abstraction.
In Proceedings of the 8th ACM SIGITE conference on Information technology education,
135-140
Common Core State Standards Initiative. (2012). Common core state standards for mathematics.
Standards for mathematics practice. Retrieved August 14, 2015, from
http://www.corestandards.org/Math/Practice.
Hatfield, L. (1978). Heuristical emphases in the instruction of mathematical problem solving:
Rationales and research. In Mathematical problem solving: Papers from a research workshop
(pp. 21-42). Columbus, OH: ERIC/SMEAC.
Kilpatrick, J. (2009). The mathematics teacher and curriculum change. PNA, 3(3), 107–121.
Kuh, G. D., Kinzie, J., Schuh, J. H., & Whitt, E. J. (2010). Student success in college: Creating
conditions that matter. John Wiley & Sons.
Lajoie, S. P. (1995). A framework for authentic assessment in mathematics. Reform in school
mathematics and authentic assessment, 19-37.
McKeachie, W. (2012). McKeachie's teaching tips. Cengage Learning.
National Center for Education Statistics. (2012). Mathematics Achievement of Fourth- and
Eighth-Graders in 2011. Retrieved August 14, 2015, from
http://nces.ed.gov/timss/results11_math11.asp
Polya, G. (1954). Mathematics and plausible reasoning: Volume 1: Induction and analogy in
mathematics. Oxford University Press.
Polya G. (1963). On learning, teaching and learning teaching. American Mathematical Monthly,
605-619.
Rock, D., & Porter, M. K. (2008). Palette of problems. Mathematics Teaching in the Middle
School, 96-100.
Silver, E. A., & Kenney, P. A. (1995). Sources of assessment information for instructional
guidance in mathematics. Reform in School Mathematics and Authentic Assessment, 38.
Sternberg, R. J. (1996). How to develop student creativity. Alexandria, VA: Association
for Supervision and Curriculum Development.
Stigler, J., & Hiebert, J. (1999). The teaching gap. New York: The Free Press.
Yackel, E., & Cobb, P. (1996). Sociomathematical norms, argumentation, and autonomy in
mathematics. Journal for Research in Mathematics Education, 27(4), 458-477.
Young, P. (2013). Using teacher evaluation reform and professional development to support
common core assessments. Center for American Progress, Feb 2013. 7-12.
About the Author:
Gary Bradley is an Assistant Professor of Education at the University of South Carolina, Upstate.
He has considerable experience in teaching middle school and high school and is interested in
inquiry based instruction and the use of technology to promote student engagement.
Attracting Early Childhood Teachers to South Carolina’s High Needs Rural Districts:
Loan Repayment vs. Tuition Subsidy
Henry Tran
Alison M. Hogue
Amanda M. Moon
University of South Carolina
Abstract
South Carolina (SC) has enacted a rural teaching initiative designed to provide financial
incentives to attract teachers to their high needs rural districts. The plans have components that
address both loan repayments and tuition subsidies; however, it is unknown which of the two
options may be more incentivizing. Our analysis suggests that a reframing of the loan repayment
to a tuition subsidy is not associated with stronger preference for teaching for at least five years
in rural high needs districts. However, we uncovered positive associations between respondents’
degree of confidence in their own self-efficacy (b=1.24, p=.008) and general openness to teach
in high needs districts (b=1.63, p<.0001) with the response variable.
Introduction
The recruitment and retention of teachers in rural communities is a national problem
(Monk, 2007). Rural environments are unattractive to many teachers because they are often
associated with lower salaries, lack of amenities (e.g., shopping, medical services, and
recreational options), lack of school and community resources, as well as potential social and
cultural isolation (Tuck, Berman & Hill, 2009). While teacher recruitment is an issue for the
nation overall (Synar & Maiden, 2012), the teacher supply issue is especially critical in rural
areas because students from that environment are often low achieving and from impoverished
and undereducated families (Vaughn & Saul, 2013). To address the teacher supply problem,
many states and districts are offering incentives for individuals to teach and live in rural
communities (Jimerson, 2003). Missouri, for instance, has implemented the Employer-Assisted
Housing Teacher Program, which provides interest-free mortgage loans to teachers in critical
need areas (McClure & Reeves, 2004). North Carolina has constructed affordable housing units
that only licensed teachers are allowed to rent. There has also been a recommendation to offer a
monthly gas allowance to commuter teachers (Hines & Mathis, 2007). In states that are primarily
rural, such as Alaska, districts have “spent thousands of dollars recruiting teachers from other
states” (Mader, 2014).
The state of South Carolina (SC) is of particular concern given its context. Firstly, SC is
projected to confront obstacles related to rural teaching recruitment and retention. Despite having
almost half of its schools classified as rural (46.6%) and the majority (57.1%) of the state’s rural
students living in poverty, less than half (36.5%) of state education funds go toward their
districts (Strange, Johnson, Showalter, & Klein, 2012). In addition, 7.3% of teachers in high
poverty SC schools are not “highly qualified” (South Carolina Federal Report Card, 2014). The
Rural School and Community Trust, a national nonprofit organization, found SC to be the third
highest-need state “in terms of policymakers’ attention to rural education, behind only Alabama
and Mississippi”. This is because “students in the state’s rural schools performed among the
lowest third of states in the US in math and fared even worse in reading” (Johnson, Showalter,
Klein & Lester, 2014). To address the issue of high quality teacher recruitment in rural
communities to tackle these problems, the state is currently working on providing rural teaching
incentives.
Specifically, SC has proposed the Rural Teacher Recruitment Initiative, which provides
incentives for certified teachers to seek positions in the 21 rural SC school districts that have
been identified as “high needs” districts (i.e., with teacher turnover greater than 12%) (Wood,
2015). SC’s plan to attract and retain teachers through several financial incentive options
includes a pay-scale increase equivalent to five years of experience for new teachers; two years
of graduate school tuition in exchange for four years of teaching service; free SC public
university tuition for high school seniors graduating from eligible districts if they return to an
eligible district to teach for eight years; or $7,500 per year of loan repayment (for up to five
years) for teachers after graduation (Cary, 2015). While the final two options offer tuition
subsidies and loan repayment, respectively, as incentives to enter the rural teaching
profession for those who are about to begin college and those who have recently graduated, it is
unknown which of the two may be more incentivizing for teachers in training who have
already begun but have not yet completed college.
Field (2009) examined a similar question with New York University law students in order
to explore the effect of debt burden on career choice. Specifically, the theory of debt aversion
suggests that “individuals experience disutility from debt beyond the interest expense of
borrowing” (p. 1). In Field’s study, students were entered into a lottery in which they were
randomly assigned to one of two financial aid packages: an upfront tuition waiver, or
incremental loan repayment in exchange for ten years of service in
public interest law. The packages were of equal monetary value, and standard
economic theory would therefore predict no difference in career choices between the groups.
However, the study found that students who received the tuition waiver package were 33%
more likely to enter a job in public interest law after graduation than those who received
loan repayment.
Based on the results of Field’s (2009) study, it is conceivable that aspiring teachers may
also be differentially affected by the way financial incentives are framed. Therefore,
this paper examines whether pre-service early childhood teachers at the University of South Carolina
(USC) show the same preference for the tuition subsidy incentive over the loan repayment incentive as
the law students in Field’s (2009) NYU study did. We further explore several additional factors
that may affect their willingness to teach in rural districts after graduation.
Purpose
The purpose of this study is to compare the degree of interest in two of the core financial
incentives proposed by the state of SC to recruit pre-service teachers currently
enrolled in a teacher education program to teach in high needs rural districts. Specifically, we
sought to answer the following question:
Is there a differential preference for employment in South Carolina’s high needs rural
teaching positions between individuals who are offered hypothetical tuition subsidies vs.
hypothetical loan repayment after graduating from their early childhood teacher
education program?
Addressing this question is important because even though a $50,000 loan repayment and a $50,000
tuition subsidy have the same absolute dollar cost from the government’s perspective, the
strength of each incentive may differ. Discovering which of the two options is more
effective at generating interest in teaching at rural schools can help the state use public
dollars more cost-effectively.
Method
The population of this study is all early childhood (P-2nd grade) teacher preparation
students at USC. While this represents only one teaching program in SC, USC is the flagship
university of the state and trained 16% of the state’s early childhood graduates in 2014 (South
Carolina Commission on Higher Education, 2014). Undergraduate early childhood teacher
preparation students in three sections of a required class in USC’s College of Education were
surveyed electronically. Participation was voluntary, and there was no direct incentive to
participate.
The survey was open from April 5, 2015 to April 15, 2015 and contained 46 items,
most of which were Likert-scale items asking respondents to note their degree of agreement
(i.e., strongly disagree, disagree, somewhat disagree, somewhat agree, agree, and strongly agree)
with a series of statements. It was distributed to 87 individuals, achieving a response rate
of 74% (n=64). Respondents were overwhelmingly female (94.7% of the sample),
which aligns with the female majority in the field (Bureau of Labor Statistics, 2013).
Other selected sample demographic information can be found in Table 1.
Table 1. Demographic Data for the Sample

Variable                                                               Percent of Sample
Male                                                                   5.26
Female                                                                 94.74
Has observed a class in a rural school                                 59.32
Has not observed a class in a rural school                             40.68
Has interned at a rural school                                         11.48
Has not interned at a rural school                                     88.52
Program year: Sophomore                                                77.59
Program year: Junior                                                   20.69
Program year: Senior                                                   1.72
Has immediate family member who has worked in K12 education            40.35
Does not have immediate family member who has worked in K12 education  59.65
Has significant other who has worked in K12 education                  7.02
Does not have significant other who has worked in K12 education        92.98
To answer our core research question of whether tuition subsidies are
preferred over loan repayment, we created two versions of the survey that were
identical with the exception of one item. Survey A inquired about interest in teaching in a rural
high needs district for at least five years if respondents were offered tuition subsidies, while
Survey B inquired about that same interest if respondents were offered loan repayment. Half of
the survey recipients were randomly assigned to each version.
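The random half-and-half assignment described above can be sketched as follows. This is a minimal illustration, not the authors’ actual procedure; the recipient labels and seed are our own assumptions.

```python
import random

def assign_survey_versions(recipients, seed=2015):
    """Randomly split recipients into Survey A (tuition subsidy item)
    and Survey B (loan repayment item), as evenly as possible."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"A": shuffled[:half], "B": shuffled[half:]}

# With 87 recipients, one version necessarily receives one more person.
groups = assign_survey_versions([f"respondent_{i}" for i in range(87)])
```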
In addition to the central research question, supplementary questions asked respondents
about several areas, including where they were raised, whether they have family
who work in education, their degree of
confidence in their own self-efficacy as a teacher, and their attraction to working in a rural school
environment. Results from an internal consistency analysis suggested acceptable reliability
for the overall survey instrument (α = .88) despite the breadth of the survey. Because of the large
number of questions and the correlations among them, an exploratory factor analysis
was conducted to reduce the items to a smaller set of common dimensions. In order to
base the factor analysis on data from all respondents, care was taken to handle
missing data.
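For readers unfamiliar with the reliability statistic reported above, Cronbach’s α can be computed from a respondents-by-items matrix as in the generic sketch below. This is the standard formula, not the authors’ code; the data layout is our assumption.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

When all items are perfectly correlated, α equals 1; uncorrelated items drive it toward 0.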
Complete data for all variables were available for 51 of the 64 respondents. The
missing data were spread across most of the variables with no readily identifiable pattern, and were
therefore addressed with multiple imputation. In this approach, missing values are
imputed based on the normal distribution of the complete data from the variables in the
imputation model, which produces less biased results than simple mean substitution (Little &
Rubin, 2002). Specifically, in this study, the non-missing information in the dataset was
used to predict the missing values across 20 imputations, and the resulting estimates
were then pooled (Graham, Olchowski, &
Gilreath, 2007). This process explicitly recognizes that the substituted
values are estimates and treats them accordingly (i.e., they carry standard errors
that make the final test results more accurate).
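A simplified sketch of normal-model multiple imputation with pooled point estimates is shown below. This is an illustrative, univariate simplification under our own assumptions, not the authors’ implementation, which conditioned imputations on the other variables in the model.

```python
import numpy as np

def impute_once(data, rng):
    """Fill each missing cell with a draw from a normal distribution fit to
    that variable's observed values (a univariate simplification)."""
    filled = data.copy()
    for j in range(data.shape[1]):
        col = data[:, j]
        observed = col[~np.isnan(col)]
        missing = np.isnan(col)
        filled[missing, j] = rng.normal(observed.mean(),
                                        observed.std(ddof=1),
                                        missing.sum())
    return filled

def pooled_means(data, m=20, seed=0):
    """Generate m imputed datasets and pool (average) their column means,
    following the point-estimate step of Rubin's rules."""
    rng = np.random.default_rng(seed)
    return np.mean([impute_once(data, rng).mean(axis=0) for _ in range(m)],
                   axis=0)
```

Because each imputation draws different values, estimates are averaged across the m datasets rather than taken from any single fill-in.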
We then conducted the exploratory factor analysis on the full data set. Guided
by the scree plot and adherence to the Kaiser rule of dropping all factors
with eigenvalues under one (Bandalos & Boehm-Kaufman, 2008), five factors were kept. Based
on the questions associated with each factor, we named the five factors respondents’
1) confidence in their own self-efficacy, 2) desire to teach locally, 3) positive perception of rural
environments, 4) sense of public service, and 5) openness to teach in rural and high turnover
districts. Examples of the items and their associated factors are listed in Table 2.
Table 2. Factors and their associated sample items

Respondents’ confidence in their own self-efficacy
    I can keep students on task when working on difficult assignments.
    I can motivate students who show low interest in schoolwork.
    I can promote learning when there is lack of support from the home.
    I am capable of teaching children who struggle academically.
    I have an ability to successfully teach language arts subject content to students who struggle academically.
    I have an ability to successfully teach mathematics subject content to students who struggle academically.

Respondents’ desire to teach locally
    I plan to teach in South Carolina after graduating.
    I have a strong attachment to the community in which I was raised.
    I plan to teach in the same county in which I lived when I was a child (ages 0-12).
    I plan to teach in the same county in which I lived when I was an adolescent (ages 13-17).

Respondents’ positive perception of rural environments
    People live a high quality of life in rural communities.
    There would be opportunity for career advancement in a rural school district.
    Children in rural schools excel academically.

Respondents’ sense of public service
    I plan to work in the public school system.

Respondents’ openness to teach in rural and high turnover districts
    I would teach in a rural school district for at least five years if I was hired at the same pay rate as a teacher with five years of teaching experience.
    I am willing to teach in a school with high teacher turnover rates (> 12%) after graduation.
    I would rather work in a rural school system than pay for my college education myself.
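The Kaiser rule described above (retain factors whose eigenvalues on the item correlation matrix exceed one) can be illustrated on simulated data. This is a generic sketch with made-up responses, not the study’s data or software.

```python
import numpy as np

def kaiser_retained(responses):
    """Count factors retained under the Kaiser rule: eigenvalues of the
    item correlation matrix that are greater than 1."""
    corr = np.corrcoef(responses, rowvar=False)
    return int((np.linalg.eigvalsh(corr) > 1).sum())

# Simulated responses driven by two latent dimensions (three items each):
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))
items = np.column_stack([latent[:, 0]] * 3 + [latent[:, 1]] * 3)
items += 0.1 * rng.normal(size=items.shape)  # small item-specific noise
```

With two strong latent dimensions, exactly two eigenvalues exceed one, so two factors would be retained.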
An ordered logistic regression was conducted to determine the association between the
main predictor (tuition subsidy vs. loan repayment) and the aforementioned five factors
as they relate to respondents’ willingness to work in a high needs rural district for at least five
years. This method was selected given the ordinal nature of the response variable. In
order to better meet the assumptions of the model, the response variable was recoded.
Specifically, because few respondents strongly disagreed (n=1),
disagreed (n=3), or somewhat disagreed (n=4) with wanting to work in a high needs rural district
for at least five years, these three categories were collapsed into one (i.e., generally
disagreed). Recoding the outcome did not substantively change the results, but helped the model
meet the proportional odds assumption necessary for ordered logistic regression.
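The collapsing of the three sparse disagreement categories amounts to a simple recode, sketched below; the category labels follow the survey’s Likert anchors, but the code itself is our illustration.

```python
# Collapse the six-point agreement scale into four ordered categories,
# merging the three sparsely used disagreement levels into one.
COLLAPSE = {
    "strongly disagree": "generally disagreed",
    "disagree": "generally disagreed",
    "somewhat disagree": "generally disagreed",
    "somewhat agree": "somewhat agree",
    "agree": "agree",
    "strongly agree": "strongly agree",
}

def recode(responses):
    """Map raw six-category responses to the collapsed four-category outcome."""
    return [COLLAPSE[r] for r in responses]
```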
Results
Results of the analysis indicate that reframing loan repayment as a tuition subsidy
was not associated with a higher degree of respondents’ interest in teaching in rural high needs
districts for at least five years. That said, the model did uncover positive associations
between the response variable and respondents’ degree of confidence in their own self-efficacy
(b=1.24, p=.008) and openness to teach in high needs districts (b=1.63, p<.0001).
Respondents’ sense of public service was also positively related to their degree of agreement
with working in a high needs rural district for at least five years, but there is an increased
likelihood that this finding was a result of chance (b=.56, p=.086). The full results of the model
are displayed in Table 3.
Table 3. Ordered Logistic Regression Results for Willingness to Work in a High Needs Rural
District for at Least Five Years

Variable                                      Coefficient (SE)
Tuition Subsidy                               .02 (.65)
Confidence in Self-Efficacy                   1.24*** (.46)
Teach Locally                                 .31 (.37)
Positive Perception of Rural Environment      .24 (.30)
Sense of Public Service                       .56* (.32)
Openness to Teach in High Needs Districts     1.63*** (.44)
N                                             64

* p<0.1; ** p<0.05; *** p<0.01; standard errors in parentheses; p-values not provided for cuts
To determine whether salary increases would be an effective incentive for increasing
interest in high needs rural teaching, it is important to first ascertain potential teachers’
understanding of the current salary environment. In this study, respondents were asked to
provide their best estimate of the salary of a beginning SC teacher with a B.A. A one-sample
t-test showed that the difference between the mean salary respondents estimated
(M = $29,446.81, SD = $4,422.32) and the actual salary of SC teachers with a B.A. ($32,389),
as reported in the latest available teacher salary data (National Education Association, 2012),
was statistically significant, t(46) = -4.56, p < .0001, 95% CI [$28,148.37, $30,745.25]. This
suggests that respondents, on average, underestimated salaries for beginning
teachers with B.A.s.
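Using only the summary statistics reported above, the t statistic can be reproduced by hand. This is a quick arithmetic check, not the authors’ analysis script.

```python
import math

def one_sample_t(mean, sd, n, mu0):
    """One-sample t statistic for H0: population mean equals mu0."""
    return (mean - mu0) / (sd / math.sqrt(n))

# df = 46 implies n = 47 respondents provided salary estimates.
t_actual = one_sample_t(29446.81, 4422.32, 47, 32389)  # vs. actual salary, ≈ -4.56
```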
While respondents underestimated beginning SC teacher salaries, their estimates (M =
$29,446.81, SD = $4,422.32) were statistically indistinguishable from the minimum required
salary ($29,523) listed in the fiscal year 2014-15 state teacher salary schedule (SC
Department of Education, 2015), t(46) = -.12, p = .91, 95% CI [$28,148.37, $30,745.25]. In sum,
our sample provided salary estimates more aligned with the minimum that
districts are required to offer than with the salaries actually offered.
This underestimation of actual salaries among even teachers in training suggests that more
education is necessary to inform the public about teacher pay, especially if the underestimation
negatively affects interest in teaching in rural high needs districts. Fortunately, correlations
between the deviation of salary estimates from actual salaries and respondents’
agreement to work in a high needs rural district for at least five years were not significant and
showed no discernible pattern.
Discussion
While the reframing of loan repayment as tuition subsidies has been found to increase
individuals’ interest in working in the public sector (Field, 2009), in our study the differential
framing was not associated with respondents’ stated willingness to teach in a high
needs rural district for at least five years. Our analysis did, however, uncover two
factors related to our outcome of interest: respondents’ confidence in their own
self-efficacy and their general openness to teach in high needs rural districts
both positively predicted their interest in teaching in a high needs rural district for at
least five years.
This information is both enlightening and confirmatory. First, increasing salary and
providing other financial incentives have often been the primary espoused solutions to the teacher
supply problem. Our findings suggest a potential alternative for tackling teacher
recruitment: redirecting resources to help teachers gain more confidence
in their ability (e.g., more diversified training not only in content areas but in working
with different student populations) may help increase teacher interest
in teaching in rural high needs districts. Second, as expected, teacher candidates who were more
comfortable with teaching in high needs districts in general were more likely to be willing to
teach in a rural high needs district for at least five years. Finding ways to increase this
comfort is critical.
In addition, as noted earlier, one of the stated barriers to attracting a sufficient number of
teachers is pay (Tuck, Berman, & Hill, 2009; Jimerson,
2003). Specifically, there is a general perception among the public that teachers are underpaid (or
at least not paid well). In fact, in our study, we found that even teachers in training
underestimated the actual salaries that beginning teachers earn. Given this, it is likely
that other potential teacher candidates outside the education world (e.g., those not in teacher
training) would also underestimate teacher salaries. This underestimation may leave fewer
individuals willing to consider teaching as a profession and can serve as a barrier beyond
the actual dollar amounts teachers are paid. In other words, more accurate information could
potentially help address some aspects of the teacher supply problem. More research on this topic
is warranted.
Limitations
Our study provides a snapshot of many possibilities; however, it is important to note that,
like all studies, it is not without limitations. Specifically, we focused on a very particular
population (early childhood teacher aspirants in a specific teacher training program at a state
university), and broadening that focus may yield further enlightening findings. For
instance, researchers may seek to compare our findings to those from high school teacher
aspirants or from teacher aspirants in other teacher training programs.
In addition, it is important to note that in Field’s (2009) work, on which this study
is based, students actually received the subsidy, which resulted in higher interest in the public interest
law sector. It is possible that her results indicated a stronger inclination toward public service under the
tuition subsidy, as opposed to loan repayment, because participants in her study went through the
process of attaining their education knowing that it would end in public service. In
particular, her treatment (i.e., subsidy) sample went through three years of school with an
expectation of working in the public sector. Our sample consists of students to whom the tuition
subsidy proposal does not apply in reality. Had the proposal actually been offered, results of this
study could have differed.
Conclusion
Despite its limitations, this study provides insight into some novel aspects of the
teacher recruitment problem for high needs rural districts. To directly address the research
question posed for this study, we did not find a differential preference for employment in South
Carolina’s high needs rural teaching positions between individuals who were offered hypothetical
tuition subsidies and those offered hypothetical loan repayment after graduating from their early
childhood teacher education program. However, we found that respondents’ confidence in their
self-efficacy and their general openness to teach in high needs rural districts predicted their stated
interest in teaching in a high needs rural district for at least five years.
Ultimately, our study aimed to shed light on how to attract teachers to rural
schools. In an open-ended question about respondents’ primary motivation for
becoming a teacher, the majority (56.2%) indicated that it was to “help
children.” It is this noble motivation that we must appeal to, as students in rural high needs
districts often require the most help of all.
References
Bandalos, D. L., & Boehm-Kaufman, M. R. (2008). Four common misconceptions in exploratory
factor analysis. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and
Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the
Organizational and Social Sciences (pp. 61-87). Taylor & Francis.
Bureau of Labor Statistics. (2013). 2013 BLS Current Population Survey. Retrieved on June 12,
2015 from http://www.bls.gov/cps/aa2012/cpsaat11.pdf
Cary, N. (2015, January 14). Haley budget: Teacher incentives, more reading coaches.
Greenville Online. Retrieved from
http://www.greenvilleonline.com/story/news/education/2015/01/12/haley-budget-teacher-incentives-reading-coaches/21662411/
Collins, T. (1999, December). Attracting and Retaining Teachers in Rural Areas. Retrieved from
ERIC: http://eric.ed.gov/?id=ED438152
Field, E. (2009). Educational Debt Burden and Career Choice: Evidence from a Financial Aid
Experiment at NYU Law School. American Economic Journal: Applied Economics, 1(1),
1-21.
Graham, J. W., Olchowski, A. E., & Gilreath, T. D. (2007). How Many
Imputations are Really Needed? Some Practical Clarifications of Multiple Imputation
Theory. Prevention Science, 8(3), 206-213.
Hines, D., & Mathis, K. (2007). Regional specific incentives for teacher recruitment and
retention. Retrieved June 19, 2015, from www.dpi.state.nc.us/docs/internresearch/reports/incentives-trr.pd
Jimerson, L. (2003). The Competitive Disadvantage: Teacher Compensation in Rural
America. Policy Brief.
Johnson, J., Showalter, D., Klein, R., & Lester, C. (2014). Why Rural Matters 2013-2014: The
Condition of Rural Education in the 50 States. Rural School and Community Trust.
Little, R. J. A., & Rubin, D. B. (2002). Statistical Analysis with Missing Data (2nd ed.).
Hoboken, NJ: Wiley-Interscience.
Mader, J. (2014). Study Panel: Teacher Incentives May Boost Teacher Retention. EdWeek.
Retrieved from: http://blogs.edweek.org/edweek/rural_education/2014/12/study_
panel_teacher_incentives_may_boost_teacher_retention.html
McClure, C., & Reeves, C. (2004). Rural Teacher Recruitment and Retention: Review of the
Research and Practice Literature. AEL.
Monk, D. (2007). Recruiting And Retaining High-Quality Teachers In Rural Areas. The Future
of Children, 155-174.
South Carolina Commission on Higher Education. (2014). CHE Teacher Certification Numbers
2014.
South Carolina State Department of Education. (2014). South Carolina Federal Report Card
2014.
Strange, M., Johnson, J., Showalter, D., & Klein, R. (2012). Why Rural Matters 2011-12: The
Condition of Rural Education in the 50 States. A Report of the Rural School and
Community Trust Policy Program. Rural School and Community Trust.
Synar, E., & Maiden, J. (2012). A Comprehensive Model for Estimating the Financial Impact of
Teacher Turnover. Journal of Education Finance, 38 (2), 130-144
Tuck, B., Berman, M. & Hill, A. (2009). Local amenities, unobserved quality, and market
clearing: Adjusting teacher compensation to provide equal education opportunities.
Economics of Education Review, 28, 58-66
Vaughn, M. & Saul, M. (2013). Navigating the rural terrain: Educators’ visions to promote
change. Rural Educator, 34(2), 38-48
Wood, L. (2015, January 15). Gov. Haley Proposes Rural Teaching Incentive. Aiken Standard.
About the Authors:
Henry Tran, MPA, SHRM-CP, PHR, Ph.D., is an Assistant Professor at the University of South
Carolina’s Department of Educational Leadership and Policies who studies education human
resources (HR) and finance. He holds two national HR certifications and sits on the board of
advisors for the National Education Finance Conference. He can be reached at htr@sc.edu.
Alison M. Hogue is a student services manager at the University of South Carolina’s Career
Center and a doctoral student in its higher education administration program. She has served on
the board of the South Carolina Association of Colleges and Employers since 2012.
Amanda M. Moon is a graduate student at the University of South Carolina.