Aust. Educ. Res. (2015) 42:429–441
DOI 10.1007/s13384-014-0164-x
The effect of the type of achievement grouping
on students’ question generation in science
Sibel Kaya
Department of Primary Education, Kocaeli University College of Education, 41380 Izmit, Kocaeli, Turkey
e-mail: sibelkaya@gmail.com
Received: 7 August 2014 / Accepted: 22 November 2014 / Published online: 21 March 2015
© The Australian Association for Research in Education, Inc. 2015
Abstract This study aimed to examine the influence of different types of
achievement grouping on question generation. There were 46 participants from two
Grade 5 classrooms. Students completed a test to determine their achievement
levels. One of the classrooms was randomly assigned, to work in homogeneous
achievement groups and the other one in heterogeneous achievement groups. The
study lasted for 5 weeks during the spring semester of the 2013–2014 academic
year. Before the study, both classrooms received instruction on the taxonomy of
questions. Students were divided into corresponding achievement groups in the last
science lesson of each week and were asked to generate and discuss questions in
their groups regarding the topics covered in the week. The results were analysed
based on a comparison between the homogeneous and heterogeneous achievement
groups regarding the number of questions each student asked and the level of those
questions. The results showed no difference between heterogeneous and homogeneous achievement groups in terms of the number of total questions, lower order questions or higher order questions. High-achieving students generated more total questions and more higher order questions regardless of grouping type.
Keywords Student questions · Primary science · Achievement grouping
Introduction
One of the important roles of science education is to develop students’ questioning
capabilities (Bybee 2000; Chin and Osborne 2008; Hakkarainen 2003; National
Research Council 1996; Shodell 1995). However, research shows that the questions
posed by students in a classroom constitute only a very small fraction of all of the
questions asked during instruction (Graesser and Person 1994; Kaya and Kablan
2013; Nystrand et al. 2003). The fear of losing classroom control (Hand and
Treagust 1994; Jofili et al. 1999), the pressure of having to cover the content within
a specified period of time and the preference of didactic and teacher-centred
approaches lead to a lack of student questions in science classrooms (Chin and
Osborne 2008). Given that posing questions is fundamental to scientific inquiry, the
development of students’ abilities to ask questions, reason, solve problems and think
critically should be a central focus of current science education reform (Zoller
et al. 1997).
Classroom discourse tends to be monologic when teachers control the flow of
the lesson with minimal input from the students (Nystrand et al. 2003). In this type
of discourse students cannot have an active role in the construction of knowledge.
Classroom discourse is dialogic when students' ideas are exchanged through open discussion and the teacher sets the ground for students to construct knowledge.
Student questions play an important role in dialogic and discussion-like conversations. Students usually ask questions to obtain additional information or for
clarification of ideas. Therefore, teachers can easily use this opportunity to open the
floor for discussion rather than answering the question themselves (Nystrand et al.
2003).
Student questions serve different functions in classrooms. They reveal students’
understanding (Chin and Osborne 2008; Graesser and Olde 2003; Watts et al. 1997),
as well as give ideas about misconceptions regarding a topic (Etkina 2000; Etkina
and Harper 2002; Maskill and Pedrosa de Jesus 1997). Student questions can be
used as an assessment tool to monitor understanding since they reveal what students
know and what they do not know (Black et al. 2002; Chin and Osborne 2008).
Student questions might also influence the direction of the classroom instruction
(Chin and Brown 2002; Etkina 2000; Etkina and Harper 2002; Keys 1998; Maskill
and Pedrosa de Jesus 1997) by determining the concepts to be learned and processes
to be used (Keys 1998). Chin and Osborne (2008) state that incorporation of student
questions as part of teaching could be motivational for students.
Some student questions allow the instructor and other students to view the
instructional material from a different perspective (Marbach-Ad and Sokolove
2000). Question generation in science classrooms helps students to enhance their
creativity and critical thinking skills (Cuccio-Schirripa and Steiner 2000; Shodell
1995), as well as triggering students' curiosity and interest in a topic (Keys 1998).
Therefore, to actively engage in science learning, students need to be encouraged to
pose questions (Yager 1992). Dori and Herscovitz (1999) reported an association
between question-asking skills and improved problem solving. Question generation
is an important metacognitive strategy as it focuses students’ attention on content
and main ideas (Chin and Brown 2002; King 1994; Rosenshine et al. 1996).
Students may ask lower order factual questions that usually have a single answer
requiring memorisation, recall or simple observation (Chin and Brown 2000b; King
1994; Lai and Law 2013; Zhang et al. 2007) or they may ask higher order questions
that can only be answered through further investigation or by gathering more
information on the topic from sources other than the textbook (Hofstein et al. 2005). These
higher order questions usually arise when students try to relate new knowledge to existing information and to integrate complex information from multiple sources
(Chin and Brown 2000b). Higher order questions encourage learners to engage in
critical reasoning, synthesis and evaluation (Chin and Osborne 2008; Graesser and
Person 1994; Shodell 1995). This type of question is more influential in the process
of knowledge construction and the answers contribute to a deeper level of
understanding compared to lower order questions (Hakkarainen 2003; Graesser and
Olde 2003; Hofstein et al. 2005; Lee et al. 2006; Zhang et al. 2007).
Special attention should be given to promoting higher order questions by students
since understanding and achievement level are related to the quality rather than the
quantity of questions asked (Harper et al. 2003). Chin and Brown (2000a) state that
when students ask questions, they initiate a process of hypothesising, predicting,
thought-experimenting and explaining, which help them to construct knowledge and
resolve conflicts in their understanding.
The research literature suggests that students do not ask higher order questions
spontaneously (Chin and Brown 2002). They need to be encouraged or stimulated to
ask higher order questions. Various strategies have been recommended for teachers
to enhance students’ questioning ability. Rosenshine and colleagues (1996) stated
that the most important aspect is to scaffold; that is, to support students initially and then withdraw the support gradually as they learn how to ask questions. First of all,
students need explicit training in how to ask questions (Chin and Osborne 2008;
King 1994; Marbach-Ad and Sokolove 2000). Informing students about the
taxonomy and linguistics of question formulation helps them develop more thought-provoking questions (Chin and Kayalvizhi 2002; King 1994). Students need to know the difference between a fact-based question and an open-ended higher order question (Chin and Osborne 2008).
Another important factor for asking higher order questions is the existence of
group learning contexts. From a sociocultural perspective, knowledge is constructed
through social interactions (Chin and Brown 2002; Driver et al. 1994) and language
is an important mediator of learning (Vygotsky 1986). Ideas and explanations are
co-constructed socially during classroom discussions and internalised by individuals
(Mortimer and Scott 2003; Vygotsky 1978). Question generation is an essential
component of science talks (Lemke 1990) and collaborative group learning contexts
are more conducive to higher order student questioning (Chin and Osborne 2008;
Hofstein et al. 2004; Marbach-Ad and Sokolove 2000). King (1994) proposed that
peer questioning is more effective than self-questioning since self-questioning is
limited to what the individual student already knows. During group discussions, an
individual student’s questions could motivate and encourage other students in the
group to use thinking processes and ask further questions, thus helping the group construct knowledge together (Chin and Brown 2002). Group discussions
provide valuable opportunities for students to talk science (Bianchini 1997;
Richmond and Striley 1996) and shift instruction from a passive teacher-centred
style to active student-centred learning (Crouch and Mazur 2001; Ebert-May et al.
1997). Various factors, such as the nature of the task, student motivation or achievement
levels influence the level of student questions (Chin and Osborne 2008).
Achievement grouping
In general, small group work facilitates better learning compared to individual
learning (Johnson and Johnson 2009; Leonard 2001; Rohrbeck et al. 2003; Slavin
2004). However, successful group work requires careful formation to ensure
appropriate group composition (Blumenfeld et al. 1997; Carter et al. 2003; Leonard
2001). Studies on the effect of achievement grouping on student learning are inconclusive. Researchers argue that low-achieving students are encouraged by high-achieving students in heterogeneous achievement groups (Chang et al. 2009), whereas they are demotivated in homogeneous achievement groups (Chang et al. 2009; Saleh et al. 2005). Drawing on social interaction and motivation theory, Saleh and colleagues (2005) explain that in heterogeneous groups, high achievers set an example for low achievers, thus stimulating them to perform better.
Researchers point out that the effect of achievement grouping depends on
students’ achievement levels. In general, high-achieving students perform well
regardless of the type of grouping (Burris et al. 2006; Lou et al. 1996; Saleh et al.
2005); low achievers perform better in heterogeneous achievement groups (Leonard
2001; Lou et al. 1996; Saleh et al. 2005; Schumm et al. 2000). For average achievers, some studies find homogeneous groups beneficial (Lou et al. 1996; Saleh et al. 2005), and others heterogeneous groups (Leonard 2001; Schumm et al. 2000).
Few studies have compared achievement grouping in terms of social interaction. Among them, the study conducted by Fuchs and colleagues (1998)
examined high-achieving third and fourth grade students’ interactions on complex
mathematical tasks within homogeneous and heterogeneous pairings. They found that high-achieving students produced better quality work and showed greater cognitive activity when paired with other high-achieving students.
In terms of questioning, researchers indicate that questions of high complexity are generated through student–student interactions (Dori and Herscovitz 1999). Collaborative groups provide ample opportunity to generate questions (Chin
2004). How to group students to ensure an optimal learning environment for
productive student questioning is an area to be explored (Chin and Osborne 2008).
This study aims to examine the influence of different types of achievement grouping
on question generation. It is hypothesised that, through the stimulation of low-achieving students by high-achieving students, heterogeneous achievement groups generate more higher order questions than homogeneous achievement groups.
Method
Participants
The participants were 46 students, about eleven years old, from two Grade 5
classrooms. The school was located in a north-western province of Turkey. The
rationale for selecting this school was its average standing in terms of nationwide standardised test scores and the socioeconomic background of its students.
Table 1 Achievement test scores and distribution of students in each class

Group                          Mean test score (out of 33)   Female   Male
Classroom A (Homogeneous)      16.55 (6.34)                  12       10
Classroom B (Heterogeneous)    17.75 (6.39)                  12       12
Total                                                        24       22
Both classrooms were taught by the same science teacher. The distribution of
students in each classroom and the mean achievement test scores which were used
to determine groups are reported in Table 1. There was no statistically significant
difference between the two classrooms’ achievement test scores. Classroom A was
randomly assigned to work in homogeneous achievement groups and Classroom B
was assigned to work in heterogeneous achievement groups.
In order to determine the homogeneous achievement groups, students' achievement test scores were ranked from the highest to the lowest and students with similar scores were placed together in five groups. For the heterogeneous grouping, the same ranking was followed, but this time one student was randomly selected from each score band to form each group, so that five mixed-achievement groups were created, as sketched below.
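The procedure above can be summarised algorithmically. The following is a minimal Python sketch based on the description given; the function names, the random seed and the handling of uneven group sizes are illustrative assumptions, not the study's actual procedure.

import random

def split_even(seq, n):
    # Split seq into n contiguous chunks of nearly equal size.
    k, r = divmod(len(seq), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        chunks.append(seq[start:end])
        start = end
    return chunks

def make_groups(scores, n_groups=5, seed=1):
    # scores maps each student id to an achievement test score.
    rng = random.Random(seed)
    # Rank students from the highest to the lowest test score.
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Homogeneous groups: contiguous slices of the ranking, so students
    # with similar scores sit together.
    homogeneous = split_even(ranked, n_groups)
    # Heterogeneous groups: shuffle each homogeneous score band and deal
    # its members out one at a time, so every group mixes achievement levels.
    heterogeneous = [[] for _ in range(n_groups)]
    idx = 0
    for band in homogeneous:
        members = list(band)
        rng.shuffle(members)
        for member in members:
            heterogeneous[idx % n_groups].append(member)
            idx += 1
    return homogeneous, heterogeneous

For a class of 22 students, this sketch yields five homogeneous groups of four to five students and five heterogeneous groups of the same sizes, each drawing from at least four of the five score bands.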
Procedure
The study lasted for five weeks during the spring semester of the 2013–2014
academic year. The unit, Living Things, was covered in the science curriculum
during this period. To determine the achievement group in which a student should be placed, an achievement test was developed from the Trends in International Mathematics and Science Study (TIMSS) Grade 4 questions. The TIMSS questions were preferred
because they were developed by a panel of experts and because they measure
various important skills, such as the ability to recall, describe, classify, compare,
contrast, use models, interpret, analyse, synthesise, draw conclusions, hypothesise
and generalise (Martin et al. 2008). There were 30 questions in the science achievement test; three were worth two points and 27 were worth one point, so the highest possible score was 33. The test was composed of three cognitive domains:
knowing, applying, reasoning; and three content domains: life, physical and earth
science. The test duration was 45 minutes.
Before the study, both classrooms received instruction on the taxonomy of
questions. After the instruction, a one-week trial session was conducted to observe
question generation and group dynamics. During this process, any groups having
difficulty were further instructed on how to generate questions. Some students who had behavioural problems in their groups or could not get along with group members were moved to other groups of the corresponding type.
As part of the study, students were divided into homogeneous achievement
groups in Classroom A and heterogeneous achievement groups in Classroom B in
the last science lesson of every week. Student desks were arranged into cluster
groups of four to five students. Each group was given a bowl to collect questions.
Students were asked to generate questions in their groups regarding the topics
covered that week. All students generated questions by taking turns and discussing
the question with the group members. When a student could not come up with a
question he/she skipped his/her turn. After the discussion, each student wrote down
his/her question on a piece of paper, folded the paper and put it in the group’s bowl.
Students wrote their names on the paper so that the questions could be attributed
to them. The students were allowed to use their science textbooks but were not
allowed to write down the questions in the textbooks. The teacher and the researcher
walked among the groups to monitor discussions and to resolve conflicts. Each
session lasted 20–30 minutes. When question generation had been exhausted, the
teacher brought the groups together for whole class discussion. Selected student
questions from each group were discussed among the whole class under the
direction of the teacher.
Coding questions
The student questions were coded as lower order and higher order. Lower order
questions were related to simple facts and explanations of phenomena. Higher order
questions were those that could be answered through further investigation or by seeking more information from sources other than the textbook. Higher order questions
involved making inferences, reasoning, application of an idea, and the synthesis and
evaluation of a new idea. Some examples of lower order and higher order questions
generated by students are presented in Table 2. Generally, How and Why questions
are considered linguistically higher order, since they elicit analysis; however, the
actual level of a question cannot be judged from the wording alone (Nystrand et al. 2003). Questions were coded as lower order if they had a single answer that could easily be found in the textbook.
Table 2 Examples of lower order and higher order questions

Lower order questions:
Do plants eat meat?
Do fungi need sunlight?
Which is the largest animal on Earth?
Why do bees like flowers?
What do plant roots do?
Are butterflies vertebrates?
Do ships cause water pollution?
How do water turtles breathe under the water?
Do all plants need water?
Are all microscopic organisms bad for humans?
How do polar bears stay warm?
Why is geothermal water hot?

Higher order questions:
Why should we protect the environment?
How do animals benefit the environment?
How can we prevent erosion?
What would happen if we keep hunting animals?
What would happen if we don't recycle?
What would happen if we keep on cutting trees?
What should we do in order for people to keep the environment clean?
How does pollution harm humans?
How do cleaning products keep us clean, while they pollute the environment?
For example, as seen in Table 2, ‘Why is geothermal water hot?’ is coded as
lower order since there is only one possible answer that could be similar to: ‘Due to
the heat produced in magma’. However, the question: ‘Why should we protect the
environment?’ could be answered in several different ways depending on the
students’ views and experiences on environmental issues. Therefore, the second
Why question is coded as higher order.
The questions written by the students were validated according to their content.
The levels of questions were judged by three experts (two science educators and one
curriculum specialist). To measure reliability among the three experts, crosstab analyses were conducted in SPSS 18 and a Cohen's kappa statistic was computed for each pair of experts. The average Cohen's kappa across the pairs was 0.80. In order to resolve differences and reach 100 %
agreement, all questions were re-evaluated by the experts. One question, on which
no agreement was reached, was omitted from the final statistical analysis.
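As an illustration, the pairwise agreement check can be reproduced with scikit-learn rather than SPSS crosstabs; the expert labels and the toy ratings below are hypothetical, not the study's data (which averaged kappa = 0.80 over the three expert pairs).

from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# 0 = lower order, 1 = higher order; one entry per student question.
# Hypothetical ratings for illustration only.
ratings = {
    "expert_1": [0, 1, 0, 0, 1, 1, 0, 1],
    "expert_2": [0, 1, 0, 1, 1, 1, 0, 1],
    "expert_3": [0, 1, 0, 0, 1, 0, 0, 1],
}

# Cohen's kappa is defined for pairs of raters, so it is computed for
# each pair of experts and then averaged.
kappas = [cohen_kappa_score(ratings[a], ratings[b])
          for a, b in combinations(ratings, 2)]
print(f"mean pairwise kappa: {sum(kappas) / len(kappas):.2f}")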
Data analysis
The analysis of the results was based on a comparison between the homogeneous and
heterogeneous achievement groups regarding the number of questions each student
presented and the level of the questions. First, descriptive statistics were computed for the number and type of the questions. In order to test whether there were any
differences between the two classrooms in terms of the frequency of each type of question, a Chi-square (χ²) analysis was conducted using SPSS 18. Then, the means of
the number of total questions, higher order questions and lower order questions that
were asked by each student were compared using multivariate analysis of covariance
(MANCOVA). MANCOVA is an extension of the analysis of covariance
(ANCOVA), which evaluates whether population means of multiple dependent
variables are equal across the levels of a categorical independent variable while statistically controlling for the effects of other continuous variables, known as covariates (Howell 2009). In the current study, the number of total questions, the number of
lower order questions and the number of higher order questions were used as
dependent variables. The type of achievement group was used as a categorical
independent variable and the achievement test score as a covariate. The significance
level (alpha) of 0.05 was used in the evaluation of the results.
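As a rough sketch of this analysis plan outside SPSS 18, the snippet below runs the multivariate test and follow-up univariate ANCOVAs in Python with statsmodels; the synthetic data, column names and random seed are assumptions for illustration only, not the study's data or code. Since the total is the exact sum of the two question-type counts, it is left out of the multivariate test to keep the dependent-variable matrix non-singular, and is analysed only in the univariate step.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["homogeneous", "heterogeneous"], [22, 24]),
    "score": rng.normal(17, 6, 46).round(),  # achievement test score (max 33)
    "lower": rng.poisson(8, 46),             # lower order question count
    "higher": rng.poisson(2, 46),            # higher order question count
})
df["total"] = df["lower"] + df["higher"]

# Multivariate test: grouping type as the categorical factor and the
# achievement test score as the covariate.
manova = MANOVA.from_formula("lower + higher ~ group + score", data=df)
print(manova.mv_test())

# Univariate ANCOVA per dependent variable (Type III sums of squares,
# as labelled in Table 5; with no interaction term this equals Type II).
for dv in ("total", "lower", "higher"):
    fit = smf.ols(f"{dv} ~ group + score", data=df).fit()
    print(dv, sm.stats.anova_lm(fit, typ=3))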
Results
After four weeks of study, during the unit of Living Things, a total of 462 questions were generated across the two classrooms. Approximately 77 % of these questions
were rated as lower order; the remaining 23 % were rated as higher order by the
researchers. Table 3 displays the distribution of the questions in each class.
Classroom A, which worked in homogeneous groups, generated 236 questions;
185 (78.4 %) of which were lower order and 51 (21.6 %) were higher order.
Classroom B, which worked in heterogeneous groups, generated 226 questions; 169
(74.8 %) of which were lower order and 57 (25.2 %) were higher order.
Table 3 Distribution of questions in each class

Group           Total questions   Lower order questions   Higher order questions
Homogeneous     236               185 (78.4 %)            51 (21.6 %)
Heterogeneous   226               169 (74.8 %)            57 (25.2 %)
Total           462               354 (77 %)              108 (23 %)

χ² = 0.840, p = 0.359
Table 4 Descriptive statistics for the number of questions asked by each student

                           Group           Min   Max   Mean    SD
Total questions            Homogeneous     2     22    10.73   ±5.54
                           Heterogeneous   1     23    9.42    ±5.90
Lower order questions      Homogeneous     1     19    8.41    ±4.95
                           Heterogeneous   1     19    7.04    ±4.97
Higher order questions     Homogeneous     0     5     2.32    ±1.52
                           Heterogeneous   0     6     2.38    ±2.10
There was no difference between the two classrooms in terms of the frequency of each type of question (χ² = 0.840, p = 0.359).
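The reported statistic can be reproduced directly from the counts in Table 3; the snippet below is a quick consistency check in Python with scipy, not the authors' SPSS procedure. Omitting Yates' continuity correction matches the reported values.

from scipy.stats import chi2_contingency

counts = [[185, 51],   # homogeneous: lower order, higher order
          [169, 57]]   # heterogeneous: lower order, higher order
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # chi2 = 0.840, p = 0.359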
Table 4 displays the descriptive statistics of the average number of questions
generated in each group. Accordingly, on average, 10.73 total questions, 8.41 lower
order and 2.32 higher order, were asked by each student in the homogeneous group,
and 9.42 questions, 7.04 lower order and 2.38 higher order, were asked by each of
the students in the heterogeneous group during the course of the study.
In order to compare groups for the number of questions generated by each student,
multivariate analysis of covariance (MANCOVA) was performed by using the type of
achievement group as the categorical independent variable and the achievement test
score as the covariate (see Table 5). Results showed that there were no differences
between homogeneous and heterogeneous achievement groups in terms of total
number of questions, lower order questions or higher order questions (p > 0.05). This means that, regardless of the type of achievement grouping (homogeneous or heterogeneous), students in both classrooms generated, on average, a similar number of questions. However, there was a significant effect of the test score on
the total number of questions (p = 0.047) and higher order questions (p = 0.033).
That is, students with higher achievement test scores generated more total questions and more higher order questions under both grouping types. There was no influence of the test score on the number of lower order questions generated.
Discussion and recommendations
This study examined whether the type of achievement grouping influences the number of questions generated in groups.
Table 5 Multivariate analysis of covariance (MANCOVA) results

Source              Dependent variable   Type III Sum of Squares   F      p
Achievement group   Total q.             30.32                     0.99   0.325
                    Lower order q.       28.34                     1.18   0.283
                    Higher order q.      0.03                      0.01   0.918
Test score          Total q.             127.56                    4.17   0.047
                    Lower order q.       54.75                     2.29   0.138
                    Higher order q.      15.17                     4.83   0.033
It was hypothesised that, through the stimulation of low-achieving students by high-achieving students, heterogeneous achievement groups would generate more higher order questions. However, the results
showed that there were no differences between heterogeneous and homogeneous
groups in terms of the number of total questions, lower order questions or higher
order questions.
In general, high-achieving students generated more total questions and more higher
order questions in both homogeneous and heterogeneous achievement groups. This
finding was consistent with earlier research reporting that students who are at
conceptually higher levels tend to ask more higher order questions (Harper et al. 2003;
Graesser and Person 1994); and that high-achieving students perform well regardless
of the type of grouping (Burris et al. 2006; Lou et al. 1996; Saleh et al. 2005).
Although the current study was limited by its small sample size and its focus on a single science unit, some recommendations can be offered for
educators and researchers. In this study, the percentage of higher order questions
generated in both groups was relatively low. Special attention should be given to
promoting higher order student questions, since understanding and achievement level
are related to the quality of the questions asked (Harper et al. 2003). The students
that participated in this study received instruction on the taxonomy of questions and
how to generate higher order questions. Other strategies could be combined with the
instruction in order to help students ask better questions. For example, through a guided questioning strategy, students could be asked to use stems to formulate higher order questions (e.g., "What would happen if ____?", "Why is ____ important?") (King 1994).
As researchers point out, students should not be expected to ask higher order
questions spontaneously (Chin and Brown 2002). Providing stimulation and familiar materials tends to increase student questions in classrooms because students'
curiosity is often aroused by out-of-school everyday experiences (Chin and Osborne
2008). Furthermore, some students may need time and encouragement to be able to
ask higher order questions (Chin and Brown 2002). It may also be the case that
some students might not be motivated to generate questions when specifically asked
to do so. Therefore, these students should be identified and different scaffolding
techniques should be utilised for them.
Another important instructional approach to promote higher order questioning is
inquiry-based teaching (Chin and Osborne 2008; Hofstein et al. 2004; Marbach-Ad
and Sokolove 2000). This study might be replicated in a more stimulating, inquiry-based context. Crawford and colleagues (2000) reported that when opportunities for
scientific inquiry were provided, students were able to bring their informal science
experiences into class and observe, infer, draw conclusions and ask questions for an
investigation. The teacher acts as a fellow investigator rather than an authoritative
figure during this process. Inquiry type activities are natural stimulators for question
generation because students are actively involved in ‘open-ended-type’ experiences
where they hypothesise, investigate, plan and conduct experiments (Hofstein et al.
2005).
This study used students’ written questions as a data source in order to reach
several groups at once. It provided practicality in terms of data collection; however,
in doing so it might have sacrificed authenticity. Future studies on achievement
grouping can focus on oral questions generated by students in more authentic
contexts. Audio or video recording might provide rich data of student conversations.
This study used students’ achievement scores when creating homogeneous and
heterogeneous groups based on the notion that achievement level affects the type of
questions asked by students (Harper et al. 2003). Other measures used when creating groups, such as students' language skills or collaborative working skills, may influence group dynamics differently and thus yield different results in question generation.
Other lines of research could focus on group dynamics in different physical and
learning environments inside classrooms. Different instructional strategies could be
devised to promote higher order questioning, especially for low-achieving students.
Some strategies recommended by other researchers include using a question board to display student questions (Dixon 1996); utilising the KWHL (Know-Wonder-How-Learned) chart, where the teacher asks students what they already Know about a topic, what they Wonder about a topic, How they could find out, and what they have Learned (van Zee et al. 2001); or using question journals or diaries (Etkina 2000; Harper et al. 2003). Determining the optimal conditions in which
groups function will help teachers to establish a productive classroom discourse.
References
Bianchini, J. A. (1997). Where knowledge construction, equity, and context intersect: Student learning of
science in small groups. Journal of Research in Science Teaching, 34(10), 1039–1065.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2002). Working inside the black box:
Assessment for learning in the classroom. London: King’s College London.
Blumenfeld, P. C., Marx, R. W., Soloway, E., & Krajcik, J. (1997). Learning with peers: From small
group cooperative to collaborative communities. Educational Researcher, 25(8), 37–40.
Burris, C. C., Heubert, J. P., & Levin, H. M. (2006). Accelerating mathematics achievement using
heterogeneous grouping. American Educational Research Journal, 43(1), 105–136.
Bybee, R. W. (2000). Teaching science as inquiry. In J. Minstrell & E. H. van Zee (Eds.), Inquiring into
inquiry learning and teaching in science (pp. 20–46). Washington, DC: American Association for
the Advancement of Science.
Carter, G., Jones, M. G., & Rua, M. (2003). Effects of partner’s ability on the achievement and conceptual
organization of high-achieving fifth-grade students. Science Education, 87(1), 94–111.
Chang, M., Singh, K., & Filer, K. (2009). Language factors associated with achievement grouping in
math classrooms: A cross-sectional and longitudinal study. School Effectiveness and School
Improvement: An International Journal of Research, Policy and Practice, 20(1), 27–45.
Chin, C. (2004). Students’ questions: Fostering a culture of inquisitiveness in science classrooms. School
Science Review, 86(314), 107–112.
Chin, C., & Brown, D. E. (2000a). Learning deeply in science: An analysis and reintegration of deep
approaches in two case studies of Grade 8 students. Research in Science Education, 30(2), 173–197.
Chin, C., & Brown, D. E. (2000b). Learning in science: A comparison of deep and surface approaches.
Journal of Research in Science Teaching, 37(2), 109–138.
Chin, C., & Brown, D. E. (2002). Student-generated questions: A meaningful aspect of learning in
science. International Journal of Science Education, 24(5), 521–549.
Chin, C., & Kayalvizhi, G. (2002). Posing problems for open investigations: What questions do pupils
ask? Research in Science & Technological Education, 20(2), 269–287.
Chin, C., & Osborne, J. (2008). Students’ questions: a potential resource for teaching and learning
science. Studies in Science Education, 44(1), 1–39.
Crawford, T., Kelly, G. J., & Brown, C. (2000). Ways of knowing beyond facts and laws of science: An
ethnographic investigation of student engagement in scientific practices. Journal of Research in
Science Teaching, 37(3), 237–258.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American
Journal of Physics, 69(9), 970–977.
Cuccio-Schirripa, S., & Steiner, H. E. (2000). Enhancement and analysis of science question level for
middle school students. Journal of Research in Science Teaching, 37(2), 210–224.
Dixon, N. (1996). Developing children’s questioning skills through the use of a question board. Primary
Science Review, 44, 8–10.
Dori, Y. J., & Herscovitz, O. (1999). Question-posing capability as an alternative evaluation method:
Analysis of an environmental case study. Journal of Research in Science Teaching, 36(4), 411–430.
Driver, R., Asoko, H., Leach, J., Mortimer, E., & Scott, P. (1994). Constructing scientific knowledge in
the classroom. Educational Researcher, 23(7), 5–12.
Ebert-May, D., Brewer, C., & Allred, S. (1997). Innovation in large lectures: Teaching for active learning.
BioScience, 47(9), 601–607.
Etkina, E. (2000). Weekly reports: A two-way feedback tool. Science Education, 84(5), 594–605.
Etkina, E., & Harper, K. A. (2002). Closing the feedback loop in large enrollment physics courses.
Journal of College Science Teaching, 31(7), 476–480.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Karns, K. (1998). High-achieving students’ interactions and
performance on complex mathematical tasks as a function of homogeneous and heterogeneous
pairings. American Educational Research Journal, 35(2), 227–268.
Graesser, A. C., & Olde, B. A. (2003). How does one know whether a person understands a device? The
quality of the questions the person asks when the device breaks down. Journal of Educational
Psychology, 95(3), 524–536.
Graesser, A. C., & Person, N. K. (1994). Question asking during tutoring. American Educational
Research Journal, 31(1), 104–137.
Hakkarainen, K. (2003). Progressive inquiry in a computer-supported biology class. Journal of Research
in Science Teaching, 40(10), 1072–1088.
Hand, B., & Treagust, D. F. (1994). Teachers’ thoughts about changing to constructivist teaching/learning
approaches within junior secondary science classrooms. Journal of Education for Teaching, 20(1),
97–112.
Harper, K. A., Etkina, E., & Lin, Y. (2003). Encouraging and analyzing student questions in a large
physics course: Meaningful patterns for instructors. Journal of Research in Science Teaching, 40(8),
776–791.
Hofstein, A., Navon, O., Kipnis, M., & Mamlok-Naaman, R. (2005). Developing students’ ability to ask
more and better questions resulting from inquiry-type chemistry laboratories. Journal of Research in
Science Teaching, 42(7), 791–806.
Hofstein, A., Shore, R., & Kipnis, M. (2004). Providing high school chemistry students with opportunities to develop learning skills in an inquiry-type laboratory: A case study. International Journal of Science Education, 26(1), 47–62.
Howell, D. C. (2009). Statistical methods for psychology (7th ed.). Belmont: Cengage Wadsworth.
Jofili, Z., Geraldo, A., & Watts, M. (1999). A course for critical constructivism through action research: A
case study from biology. Research in Science & Technological Education, 17(1), 5–17.
Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social
interdependence theory and cooperative learning. Educational Researcher, 38(5), 365–379.
Kaya, S., & Kablan, Z. (2013). Assessing the relationship between learning strategies and science
achievement at the primary school level. Journal of Baltic Science Education, 12(4), 525–534.
Keys, C. W. (1998). A study of grade six students generating questions and plans for open- ended science
investigations. Research in Science Education, 28(3), 301–316.
King, A. (1994). Guiding knowledge construction in the classroom: Effects of teaching children how to
question and how to explain. American Educational Research Journal, 31(2), 338–368.
Lai, M., & Law, N. (2013). Questioning and the quality of knowledge constructed in a CSCL context: a
study on two grade-levels of students. Instructional Science, 41(3), 597–620.
Lee, E. Y. C., Chan, C. K. K., & van Aalst, J. (2006). Student assessment of collaborative learning in a
CSCL environment. International Journal of Computer-Supported Collaborative Learning, 1(1),
57–87.
Lemke, J. L. (1990). Talking science: Language, learning and values. Norwood, NJ: Ablex.
Leonard, J. (2001). How group composition influenced the achievement of sixth-grade mathematics
students. Mathematical Thinking & Learning, 3(2/3), 175–200.
Lou, Y., Abrami, P. C., Spence, J. C., Poulsen, C., Chambers, B., & d’Apollonia, S. (1996). Within-class
grouping: A meta-analysis. Review of Educational Research, 66(4), 423–458.
Marbach-Ad, G., & Sokolove, P. G. (2000). Can undergraduate biology students learn to ask higher level
questions? Journal of Research in Science Teaching, 37(8), 854–870.
Martin, M.O., Mullis, I.V.S., & Foy, P. (with Olson, J. F., Erberber, E., Preuschoff, C., & Galia, J.).
(2008). TIMSS 2007 International science report. Chestnut Hill: TIMSS & PIRLS International
Study Center, Boston College.
Maskill, R., & Pedrosa De Jesus, H. (1997). Pupils’ questions, alternative frameworks and the design of
science teaching. International Journal of Science Education, 19(7), 781–799.
Mortimer, E., & Scott, P. (2003). Meaning making in secondary science classrooms. Buckingham: Open
University Press.
National Research Council (NRC). (1996). National science education standards. Washington, DC:
National Academy Press.
Nystrand, M., Wu, L. L., Gamoran, A., Zeiser, S., & Long, D. A. (2003). Questions in time: Investigating
the structure and dynamics of unfolding classroom discourse. Discourse Processes, 35(2), 135–198.
Richmond, G., & Striley, J. (1996). Making meaning in classrooms: Social processes in small-group
discourse and scientific knowledge building. Journal of Research in Science Teaching, 33(8),
839–858.
Rohrbeck, C. A., Ginsburg-Block, M. D., Fantuzzo, J. W., & Miller, T. R. (2003). Peer- assisted learning
interventions with elementary school students: A meta-analytic review. Journal of Educational
Psychology, 95(2), 240.
Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to generate questions: A review of
the intervention studies. Review of Educational Research, 66(2), 181–221.
Saleh, M., Lazonder, A. W., & Jong, T. D. (2005). Effects of within-class ability grouping on social
interaction, achievement, and motivation. Instructional Science, 33(2), 105–119.
Schumm, J. S., Moody, S. W., & Vaughn, S. (2000). Grouping for reading instruction: Does one size fit all? Journal of Learning Disabilities, 33(5), 477–488.
Shodell, M. (1995). The question-driven classroom: Student questions as course curriculum on biology.
The American Biology Teacher, 57(5), 278–281.
Slavin, R. E. (2004). When and why does cooperative learning increase achievement. The
RoutledgeFalmer reader in psychology of education, 1, 271–293.
van Zee, E. H., Iwasyk, M., Kurose, A., Simpson, D., & Wild, J. (2001). Student and teacher questioning
during conversations about science. Journal of Research in Science Teaching, 38(2), 159–190.
Vygotsky, L. (1986). Thought and language. Cambridge: MIT Press.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge,
MA: Harvard University Press.
Watts, M., Gould, G., & Alsop, S. (1997). Questions of understanding: Categorising pupils’ questions in
science. School Science Review, 79(286), 57–63.
Yager, R. E. (Ed.). (1992). The status of science, technology, society: Reform efforts around the world.
Arlington: ICASE.
Zhang, J., Scardamalia, M., Lamon, M., Messina, R., & Reeve, R. (2007). Socio-cognitive dynamics of
knowledge building in the work of 9- and 10-year-olds. Educational Technology Research and
Development, 55(2), 117–145.
Zoller, U., Tsaparlis, G., Fatsow, M., & Lubezky, A. (1997). Student self-assessment of higher-order
cognitive skills in college science teaching. Journal of College Science Teaching, 27(2), 99–101.
Sibel Kaya graduated from Florida State University with a Ph.D. in elementary education. She currently teaches at Kocaeli University, Turkey. Her research interests include elementary science teaching and learning, classroom discourse and pre-service elementary teachers.