Bowland Maths Assessment

Impact analysis: Guidance Note
Overall aim: to compile evidence about the teaching of problem solving and its
impact, which could be used to demonstrate its effectiveness to others
Objective: simple qualitative assessment of student progress in problem solving
for and by teachers
Introduction:
Some enthusiastic teachers have expressed interest in monitoring and measuring
student progress, so that they can develop a more objective sense of what works.
This note provides guidance to help teachers carry out a simple impact analysis in
their class or school, within a small, informal network of teachers who could
then also learn from each other.
A research group in Japan, led by Keiichi Nishimura, will be conducting a parallel
study with a small group of Japanese teachers in the next academic year (April
2015 – March 2016); this will provide excellent opportunities for international
comparison. Dr. Nishimura is an Associate Professor at Gakugei University in
Japan who advises the Japanese government on maths education; he is also a
founder of Bowland Japan and a core member of IMPULS.
Design: Summary
 Use two Bowland assessment tasks as a ‘pre-test’ with a class in a single
period in September/October; keep the student worksheets
 Make a record of all problem-solving tasks taught in the class during the year
 Use the same two assessment tasks as ‘post-tests’ with the same class, but
in two separate periods in June/July; keep the student worksheets. Once
the ‘test results’ are collected, teachers could follow up by teaching the tasks
 Use the progression grids as guides to evaluate the progress made by students
Assessment Tasks to be used:
 For lower level classes: Day Out and Tuck Shop
 For higher level classes: Fares not fair and Tuck Shop
N.B. The above task combination is the one expected to be used in Japan. An
alternative ‘similar task’ is Soft Drinks. Teachers can select other assessment
tasks, although this could limit comparative discussions with Japanese colleagues.
Pre-test (September or October) - at the beginning of the year:
Use two assessment tasks (A and B) in a class as ‘tests’ in which half of the students
(S) do task A while the other half (S') do task B
 Distribute tasks A and B randomly in the class (put equal numbers of the
two tasks in a pile, mix roughly and distribute)
 Distribute worksheets – responses tend to be neater on grid paper
 Tests should be taken in silence, with a time limit of 40 minutes; students
work alone with no intervention from teachers
 Observe students working and record noteworthy events
 Collect worksheets - without teaching the tasks
Post-test (June/July) - at the end of the year:
Ask all the students in the class (S and S') to do task A in one period and task B in
another period; subsequently, each task can be 'taught' in class.
Control group – if possible
 It would be ideal if the pre- and post-tests could be conducted with more
than one class of similar ability (within the same school), with the classes
taught different levels of problem solving during the year – especially if
there was a class to which no problem solving was taught
 Variation across participating teachers/schools may provide useful insights
– although level/type of teaching would not be easy to compare
Qualitative comparisons
 The main focus of the analysis would be to look for progress made by
students, using the task progression grids
o So that teachers can ‘demonstrate’ progress with specific examples:
“with ‘x’ level of teaching, students made ‘this kind’ of progress”
o There should be no expectation of numerical analysis (e.g. counting how
many students moved from one level to another within a progression
grid) – rather, a qualitative analysis based on ‘checks’
 Three types of comparison to ‘triangulate’, although individual teachers
would not be expected to analyse all these exhaustively:
o Same students doing the same tasks
 SA(Sept) against SA(Jul);
 S'B(Sept) against S'B(Jul);
o Different students doing the same tasks at different times
 SA(Sept) against S'A(Jul)
 S'B(Sept) against SB(Jul)
o Same students doing different tasks
 SA(Sept) against SB(Jul)
 S'B(Sept) against S'A(Jul)
In the summer, we hope to hold a one-day workshop to explore worksheet
marking in more depth, using the progression grids and the students’ retained
worksheets.
Additional option to assess ‘pair work’
Each of the above three ‘test’ periods (one for the pre-test covering both tasks,
and one for each of the two post-tests) could be followed by a period in which
students pair up to do the same tasks, still under test conditions and without
teacher intervention
 Pairs could be given 2/3 of the period for joint work, with the final 1/3 of
the period used for individual students to write ‘reflections’ about what
changes they made as a result of working in pairs
(Dr. Nishimura’s team will be using this optional addition to help examine student
learning through pair work)