International Study & Language Centre
The impact of topic and sub-topic role on candidate performance in paired speaking tests
John Slaght
© University of Reading 2011
www.reading.ac.uk
TEEP SPEAKING TEST (example)
• Focus question: e.g. Which is better, (A) privatized services or (B) publicly-funded services?
• Monologue: Candidates A and B take 3-minute turns explaining the benefits of either privatized or publicly-funded services.
• Dialogue: candidates discuss a scenario related to the topic and negotiate an agreement about one of three options.
• Candidates finally return to the original focus question and try to reach an agreement over the two options.
Candidature etc.
• same 6-week pre-sessional course
• similar IELTS speaking level on entry
• TOEFL, PTE etc. entrants excluded
• familiar with the test format (practised)
• given 24 hours’ notice of partner
• partner from same Spoken Language group
[interlocutor effect – O’Sullivan, 2004:140]
Overall ‘topic effect’
• “considering topic effect is a critical step in validating any topic-based test” (Jennings, Fox, Graves & Shohamy, 1999)
• monologue = same overall topic with different viewpoints
• interactional stage (scenario & final discussion) = same overall topic with independent viewpoint
Table 1: IELTS/TEEP progression for Candidates A and B, 2010 and 2011

progression | 2010 Candidate A | 2010 Candidate B | 2011 Candidate A | 2011 Candidate B
-1.5 | 0 | 2 | 0 | 0
-1.0 | 2 | 1 | 1 | 0
-0.5 | 5 | 4 | 8 | 7
0.0 | 7 | 9 | 9 | 13
0.5 | 13 | 17 | 28 | 24
1.0 | 19 | 16 | 16 | 17
1.5 | 4 | 7 | 9 | 8
2.0 | 1 | 0 | 1 | 3
2.5 | 0 | 0 | 1 | 0
Totals | 51 | 54 | 73 | 72
Comparative performance 2010 & 2011

2010:
• 75% improved (TEEP exit vs IELTS entry)
• Candidate A: 72.5%
• Candidate B: 74%

2011:
• 73.5% improved (TEEP exit vs IELTS entry)
• Candidate A: 74%
• Candidate B: 72%
Perceived Gender Bias Questionnaire (28 respondents)

TOPIC | MALE | FEMALE | NEUTRAL
1. learning methods | 0 | 2 | 25 (92.5%)
2. GM farming | 8 | 0 | 19 (70.0%)
3. air travel | 2 | 0 | 25 (92.5%)
4. female roles | 0 | 20 | 7 (26.0%)
5. voluntary service | 18 | 0 | 9 (33.0%)
6. heritage | 4 | 0 | 23 (85.0%)
7. arts & sport | 6 | 1 | 20 (74.0%)
8. employment | 4 | 0 | 23 (85.0%)
9. future energy | 1 | 6 | 20 (74.0%)
10. language learning | 0 | 2 | 25 (92.5%)
Table 6: ranking of performance related to topic variable

variable | 1st | 2nd | 3rd | 4th | 5th | 6th   (ranking, percentage)
gender | 0 | 4 | 0 | 17.3 | 22 | 56.8
age | 4 | 0 | 17.8 | 26 | 21.8 | 30.4
nationality | 8.6 | 17.3 | 8.6 | 17.6 | 34.9 | 13
education | 13 | 21.7 | 39.5 | 8.6 | 8.6 | 8.6
knowledge | 34.7 | 34.7 | 8.6 | 14 | 4 | 4
proficiency | 39.2 | 30.6 | 13 | 8.6 | 8.6 | 0
The sub-topic effect
• the impact of having either sub-topic A or B on individual performance
• random allocation of order
• each candidate in the pairing allocated different information
• sub-topic effect and gender bias
Monologue (example)

Candidate A (privatisation of services)
• higher quality
• meeting customer needs
• choice for the consumer
• more competition between companies
• encouraging economic growth

Candidate B (publicly-funded services)
• free or cheaper access to some services
• open to everyone
• job security for employees
• government protection
• sense of local or national pride
Concerns about impact of topic effect on performance
• Impact of topic on performance depending on gender of candidate
• 10 versions of the test were administered but …
• only 3 versions administered in both 2010 and 2011:
  – female roles
  – future energy
  – language learning
ENTRY & EXIT SCORES BY TOPIC AND GENDER, 2010 & 2011
(rows = score progression; columns = test version, year and gender: FR = female roles, FE = future energy, LL = language learning; F = female, M = male)

scores | FR 2010 F | FR 2010 M | FR 2011 F | FR 2011 M | FE 2010 F | FE 2010 M | FE 2011 F | FE 2011 M | LL 2010 F | LL 2010 M | LL 2011 F | LL 2011 M
-2.0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
-1.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0
-1.0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
-0.5 | 0 | 1 | 0 | 1 | 0 | 0 | 2 | 3 | 2 | 0 | 1 | 1
0.0 | 0 | 1 | 2 | 1 | 2 | 0 | 4 | 2 | 0 | 1 | 3 | 1
0.5 | 0 | 1 | 0 | 3 | 4 | 1 | 7 | 14 | 5 | 4 | 7 | 2
1.0 | 0 | 1 | 2 | 0 | 5 | 0 | 5 | 1 | 1 | 2 | 4 | 3
1.5 | 0 | 0 | 0 | 2 | 1 | 2 | 4 | 2 | 1 | 0 | 1 | 2
2.0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0
mean | FR: Female 1.2, Male 1.6 | FE: Female 0.74, Male 0.71 | LL: Female 1.52, Male 0.56
Results using simple t-tests and ANOVA
• No evidence of differences caused by topic effect and/or sub-topic effect related to gender
• Sub-topic effect evaluated within each topic
• No evidence that score differences varied across the 3 versions
• Clear evidence of differences in scores related to entry score
(an illustrative sketch of these checks follows below)
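Checks of this kind can be run on comparable data with standard tools. The sketch below (Python, pandas and SciPy) is illustrative only, not the original analysis; the data file and column names (teep_gains.csv, gain, gender, version) are hypothetical stand-ins.

```python
# Illustrative only: test whether score gains differ by gender and by topic version.
# Hypothetical columns: gain    - exit score minus entry score for one candidate
#                       gender  - "F" or "M"
#                       version - e.g. "female roles", "future energy", "language learning"
import pandas as pd
from scipy import stats

df = pd.read_csv("teep_gains.csv")  # hypothetical file, one row per candidate

# Independent-samples t-test: do mean gains differ between female and male candidates?
female = df.loc[df["gender"] == "F", "gain"]
male = df.loc[df["gender"] == "M", "gain"]
t_stat, p_gender = stats.ttest_ind(female, male, equal_var=False)
print(f"gender: t = {t_stat:.2f}, p = {p_gender:.3f}")

# One-way ANOVA: do mean gains differ across the three topic versions?
groups = [g["gain"].to_numpy() for _, g in df.groupby("version")]
f_stat, p_version = stats.f_oneway(*groups)
print(f"topic version: F = {f_stat:.2f}, p = {p_version:.3f}")
```

A non-significant result in either test would be consistent with the slide's conclusion of no detectable gender or version effect on gains.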
Difference of exit related to entry score

entry \ exit | 5.5 | 6.0 | 6.5 | 7.0 | 7.5 | Overall
5 or less | 3 | 5 | 5 | 2 | 0 | 16
5.5 | 1 | 15 | 15 | 9 | 0 | 40
6.0 | 1 | 8 | 13 | 7 | 1 | 30
6.5 | 1 | 3 | 5 | 13 | 0 | 22
7 or more | 0 | 1 | 2 | 7 | 0 | 10
Overall | 6 | 32 | 41 | 38 | 1 | 118
Difference of exit related to entry score (percentage)

entry \ exit (%) | 5.5 | 6.0 | 6.5 | 7.0 | 7.5 | Count
5 or less | 19 | 31 | 38 | 13 | 0 | 16
5.5 | 3 | 38 | 38 | 23 | 0 | 40
6.0 | 3 | 27 | 43 | 23 | 3 | 30
6.5 | 5 | 14 | 23 | 59 | 0 | 22
7 or more | 0 | 10 | 20 | 70 | 0 | 10
Overall | 5 | 27 | 35 | 32 | 1 | 118
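The percentage table is the count table with each entry row rescaled to sum to 100%. As an illustration only, a minimal sketch of that step in Python with pandas; the DataFrame, its toy values and the column names entry_band and exit_band are hypothetical, not the study data.

```python
# Illustrative only: entry-by-exit cross-tabulation (counts) and its row-percentage version.
import pandas as pd

df = pd.DataFrame({
    "entry_band": ["5.5", "5.5", "6.0", "6.5", "6.0", "5 or less"],
    "exit_band":  ["6.0", "6.5", "6.5", "7.0", "6.0", "5.5"],
})  # toy data, one row per candidate

# Counts with row/column totals (the "Overall" margins).
counts = pd.crosstab(df["entry_band"], df["exit_band"],
                     margins=True, margins_name="Overall")

# Each entry row rescaled to percentages (rows sum to 100).
row_pct = (pd.crosstab(df["entry_band"], df["exit_band"], normalize="index") * 100).round(1)

print(counts)
print(row_pct)
```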
How can fairness be achieved?
• identify topics which clearly have a gender bias
• or find which other variable is the root of the problem
• confirm whether the topic is appropriate for a PST
• revise or reject
• review scoring system
• provide a public list of possible topics, like a reading list at the beginning of the course – good washback or consequential validity?
What do YOU think?
References
• Brookes, L. (2009). Interacting in pairs in a test of oral proficiency. Language Testing 26(3): 341-366. SAGE.
• Csépes, L. (2009). Measuring oral proficiency through paired-task performance. Frankfurt: Peter Lang.
• Fulcher, G. & Reiter, R. (2003). Task difficulty in language tests. Language Testing 20(3): 321-344. SAGE.
• Lazaraton, A. (2006). Process and outcome in paired oral assessment. ELT Journal 60(3): 287-289. OUP.
• Lumley, T. & O’Sullivan, B. (2005). The effect of test-taker gender, audience & topic on task performance in tape-mediated assessment of speaking. Language Testing 22(4): 415-437. SAGE.
• May, L. (2011). Interaction in a paired speaking test. Language Testing & Evaluation: Vol. 24. Frankfurt: Peter Lang.
• Norton, J. (2005). The paired format in the Cambridge Speaking Tests. ELT Journal 59(4): 287-297. OUP.
• O’Loughlin, K. (2002). The impact of gender in oral proficiency testing. Language Testing 19(2): 169-192. SAGE.
• O’Sullivan, B. (2002). Learner acquaintanceship and oral proficiency test pair-task performance. Language Testing 19(3): 277-295. SAGE.
• Saville, N. & Hargreaves, P. (1999). Assessing speaking in the revised FCE. ELT Journal 53(1): 42-57.