Interactive Communication
Dr Lynda Taylor
Interactive communication – the construct in co-construction
Overview
• Current understandings of ‘interactive communication’
• in spoken language behaviour
• in L2 speaking proficiency assessment
• Some issues and challenges for testers
• construct representation
• construct relevance
• rating procedures
• score interpretation
• technological developments
• Some directions for the future?
Current understandings
Spoken language as…
• reciprocal for the most part
• simultaneously proactive and responsive
– contingent speaker+listener roles
• dynamic and co-constructed
– evolving, emerging between/among interlocutors
• iterative – not necessarily linear, or predictable, or ‘tidy’
• shaped by diverse personal, cognitive and contextual factors
• spoken interaction as the ‘primordial site of sociality’ (Schegloff
1986:112)
Contribution of applied linguistics (i)
• 1970s onwards - steady expansion of the construct definition
underpinning operational approaches to assessing L2 ability
• Halliday (1976) and van Dijk (1977) in discourse analysis
• Hymes (1972) on sociocultural factors
... the performance of a person … takes into account the interaction
between competence (knowledge, ability for use), the competence
of others, and the cybernetic and emergent properties of events
themselves… (1972:283)
• growing awareness of the interactional dimension of
language ability (‘interactional linguistics’)
Contribution of applied linguistics (ii)
• Canale & Swain (1980) – ‘communicative competence’
• Kramsch (1986) – ‘interactional competence’
‘Interaction always entails negotiating intended meanings, i.e.
adjusting one’s speech to the effect one intends to have on the
listener. It entails anticipating the listener’s response and possible
misunderstandings, clarifying one’s own and the other’s intentions …’
(1986:367)
• Bachman (1990) – ‘communicative language ability (CLA)’
• growing awareness of the interactional nature of spoken
language ability in particular, with implications for assessment
Operationalisation in assessment (i)
• direct approach in speaking task design – attempting to
stimulate ‘talk-in-interaction’
• interview, i.e. one-on-one or singleton face-to-face interaction,
e.g. Cambridge Proficiency, OPI
• paired format, i.e. 2 test takers interacting, with/without
facilitator/examiner(s)
• group format, i.e. 3 or more test takers interacting, with/without
facilitator/examiner(s)
• a combination of two or more of the above (e.g. Cambridge
speaking tests)
Operationalisation in assessment (ii)
• conceptualisation and articulation of assessment criteria that
capture elements of interactional competence:
• Cambridge Tests – the ‘Interactive Communication’ criterion
• IELTS – some elements captured within ‘Fluency and Coherence’?
• Occupational English Test (OET) – ‘Overall communicative
effectiveness’ and ‘Appropriateness’
• How far do semi-direct speaking tests, e.g. TOEFL iBT and PTE
Academic, lend themselves to assessing interactional
competence (or some elements thereof)?
Defining Interactive Communication (i)
“Interactive communication refers to the candidate’s ability to
take part in the interaction appropriately using language to
achieve meaningful communication.”
(Taylor & Galaczi, 2011:185)
Defining Interactive Communication (ii)
“At lower levels, this includes initiating and responding, the
ability to use interactive strategies to maintain or repair
communication, and sensitivity to the norms of turn-taking.
Candidates are given credit for being able to ask for repetition
and clarification if necessary.”
(Taylor & Galaczi, 2011:185-6)
Defining Interactive Communication (iii)
“At higher levels this criterion refers to the candidate’s ability to
take a proactive part in the development of the discourse,
participating in a range of interactive situations in the test and
developing discussions on a range of topics by initiating and
responding appropriately. It also refers to the deployment of
strategies to maintain interaction at an appropriate level
throughout the test so that the tasks can be fulfilled.”
(Taylor & Galaczi, 2011:185-6)
Occupational English Test - OET
(2 out of 5 criteria)
• Overall communicative effectiveness – including how well
you are able to maintain meaningful interaction
• Appropriateness – including the use of suitable professional
language and the ability to explain in simple terms as
necessary; also, how appropriately you use language to
communicate with the patient given the scenario of each
roleplay
Some issues and challenges for testers
1. Construct representation
• How comprehensive is our current definition of the
‘interactive communication’ (sub)construct for the purposes
of language teaching, learning and assessment?
• To what extent does it take account of the full range of
elements which research suggests are core to interactional
competence? Is the (sub)construct too narrowly defined?
• What other elements of interactional competence might need
to be represented - and thus sampled and evaluated - for the
purpose of assessment (as well as teaching and learning)?
Existing elements of IC
• initiating and responding
• turn-taking norms
• communication maintenance and repair strategies
• proactive role in constructing the discourse
• topic shift and development
• achieving meaningful communication
• using language appropriately
• task fulfilment (?)
Additional elements? (i)
• Closings (as well as openings)
• Back-channelling, listener responses, response/reactive tokens
• Latching and interruptions
• Use of pausing, hesitation and silence
• Floor-holding: use of fillers – ‘smallwords’ but also fixed
phrases such as ‘now let me see…’, ‘that’s a good question…’
• Use of deixis and ellipsis
• Use of vague language
Additional elements? (ii)
• Establishing ‘common ground’
• Genre awareness (e.g. sharing personal stories or telling jokes)
• Sequencing practices in common speech acts, especially where
‘face’ is involved, e.g. the use of ‘face-saving pre-sequences’ when
framing invitations, offers or requests to avoid dis-preferred
responses (such as a rejection)
• Politeness control
• Prosodic features: speed, prominence, para/tone and key (inc. pitch
and volume)
• Laughter, sharp intake of breath, etc
• Paralinguistic features: posture, gaze and gestures
IC interface with other criteria
• Are some of these measures already covered under other
assessment criteria (an issue of separability)?
• prosodic features … Pronunciation
• fixed phrases, fillers … Vocabulary
• pausing, hesitation … Fluency
• Are some measures ignored completely (e.g. genre
awareness, sequencing practices, politeness mechanisms,
paralinguistic features)? If so, why?
2. Construct relevance (context)
• Savignon (1983:8-9) – communication as ‘context specific’
• ‘Communication takes place in an infinite variety of situations, and
success in a particular role depends on one’s understanding of the
context and on prior experience of a similar kind.’
• growing recognition of context-specific nature of spoken interaction
(findings from discourse and conversation analysis)
• To what extent is ‘context relevance’ or ‘context dependence’
satisfactorily accounted for in construct definition and
operationalisation as far as interactional competence is concerned?
• How far is a speaking test task actually designed to frame or ‘drive’
the nature of the spoken interaction? Are our speaking assessment
tasks sufficiently ‘context-rich’?
Overall communicative effectiveness (OET) – guidance to test takers
• In each roleplay, take the initiative, as a professional does
• Talk to the interviewer as you do to a patient
• Don’t expect the patient to lead or to move to the next issue for you – do
this yourself
• Deal clearly with the points given on the rolecard, asking questions and
explaining as necessary
• Link what you say clearly to the purpose of the communication
(coherence)
• Make sure the patient understands what you are saying and be
prepared to explain complicated issues in a simple way
• Remember that you are interacting with your patient, not just explaining
to him/her
Appropriateness (OET) – guidance to test takers
• Practise explaining medical and technical terms and procedures in
simple language – remember that the English you know as a
professional may be quite different from the English used by patients
• Notice what people say in different situations and copy these
phrases (checking what they mean first) – people choose what to say
depending on the situation (e.g., formal/informal, speaking to a
colleague, child, child’s parent)
• Consider asking questions to check that the patient has understood
what you are saying if this seems appropriate to the situation
Language performance...
‘… involves interactive responsiveness and the use of a rich
repertoire of linguistic choices to express meaning…’
(Fulcher 2013)
• If we wish to sample and evaluate
interactional competence effectively,
should we design our speaking test
tasks to be more context-specific/
relevant/rich (especially at higher
proficiency levels, e.g. C-levels)?
3. Rating procedures
• Separability/Divisibility
• How far do the IC rating scale and its descriptors capture essential
aspects of interactional competence, e.g. features of lexico-grammar
(fillers for floor-holding) and pronunciation (prosody)?
• Or are these actually located in other scales? And if so, are they
adequately represented there (or are they marginalised)?
• A place for a separate ‘interactive listening skills’ subscale?
• Scalability
• How scalable are certain criteria, such as ‘genre awareness’ or
‘sequencing practices’? Would, for example, a 3-point scale for
politeness (appropriate / somewhat appropriate / not appropriate) be workable?
3. Rating procedures
• Discriminability and differentiation
• What elements of interactional competence is it sensible/realistic
to ask human raters to attend to, whether in real-time
assessment or in post-hoc rating?
• Are differential scales needed according to whether the rater is
implicated or not in the co-constructed speech?
• Which aspects of interactional competence are best judged by a
human rater (using eye and ear)? And which by automated
scoring systems?
4. Score interpretation
• the co-constructed nature of spoken interaction
• McNamara (1997), Swain (2001), Chalhoub-Deville (2005)
• interactional competence does not reside in an individual, nor is
it independent of the interactive practice in which it is
constituted (He and Young, 1998)
• How far does it make sense to assign a score to an individual
test taker on the criterion of interactive communication?
• May (2009), Ducasse and Brown (2009)
• And what does such a score mean? Should it be ‘shared’?
• Fulcher and Davidson (2007), Taylor and Wigglesworth (2009)
5. Technological advances (i)
• steady shift away from traditional notion of ‘interactional
competence’ premised largely upon face-to-face, spoken
interaction involving physical proximity
• computer-mediated and internet-based communication, e.g.
Voice over Internet Protocol (VoIP) – Skype, FaceTime, Twitter
• Multi-modality and interactional competence
– spoken? written? ‘blended’?
• What are the implications for
construct definition and
operationalisation?
5. Technological advances (ii)
• developments in ASR systems and their capacity to readily
discern and measure interactional features of talk
• Which features can be meaningfully rated by machine, or at
least measured automatically in some way to inform a human
judgement on quality of interactional competence?
• e.g. prosodic patterns, ‘flow’ and ‘confluence’ (a rough illustrative sketch of simple timing-based measures follows below)
• Could automated assessment tools be used (in partnership
with human raters) to track and evaluate the more complex
and sophisticated features of interactional competence?
• e.g. genre awareness, or context-relevant sequencing practices
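By way of illustration only (not any operational system), the short Python sketch below shows the kind of simple, timing-based interaction measures – gaps between turns, latching and overlap, share of talk – that an automated pipeline might derive from ASR-timestamped turns and report to a human rater alongside the recording. The Turn structure, the example data and the 0.2-second latching threshold are invented assumptions for the sketch, not features of any existing scoring system.

# Minimal, illustrative sketch: derive crude turn-taking statistics from
# hypothetical ASR-timestamped turns in a paired speaking task.
# All data and thresholds below are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # e.g. "A" or "B"
    start: float   # seconds from the start of the recording
    end: float

def interaction_measures(turns: list[Turn]) -> dict:
    """Compute simple gap, latching, overlap and talk-share measures."""
    turns = sorted(turns, key=lambda t: t.start)
    gaps, latches, overlaps = [], 0, 0
    for prev, curr in zip(turns, turns[1:]):
        if curr.speaker == prev.speaker:
            continue          # only consider points of speaker change
        gap = curr.start - prev.end
        gaps.append(gap)
        if gap < 0:
            overlaps += 1     # next speaker starts before the previous one finishes
        elif gap < 0.2:
            latches += 1      # near-immediate uptake ('latching')
    talk_time: dict[str, float] = {}
    for t in turns:
        talk_time[t.speaker] = talk_time.get(t.speaker, 0.0) + (t.end - t.start)
    total = sum(talk_time.values()) or 1.0
    return {
        "mean_gap_s": sum(gaps) / len(gaps) if gaps else 0.0,
        "latches": latches,
        "overlaps": overlaps,
        "talk_share": {s: round(d / total, 2) for s, d in talk_time.items()},
    }

# Invented example: two candidates ("A", "B") in a paired task
example = [
    Turn("A", 0.0, 4.1), Turn("B", 4.2, 9.0),
    Turn("A", 8.8, 12.5), Turn("B", 13.9, 18.0),
]
print(interaction_measures(example))

Measures of this kind could at most inform a human judgement of interactional competence; they say nothing in themselves about genre awareness, sequencing practices or politeness.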
Issues and challenges for testers
• construct representation
• construct relevance
• rating procedures
• score interpretation
• technological developments
Possible directions for the future
Assessing interactional competence
• Expanding the construct (resisting reductionism)
• Improving specification of high-level/expert behaviour
• Analysing the nature of ‘blended’ interactions (social media)
• Developing and optimising automated assessment systems
• Reappraising computer-based (CB) speaking assessment
• Increasing inter-disciplinarity (inter/cross-cultural studies,
computational linguistics, linguistic ethnography, behavioural
and linguistic studies in professional/organisational domains)
Fitness for purpose in testing
• Acknowledging the complex interaction of the linguistic AND the
non-linguistic in spoken discourse
• interactional competence linked to ‘forms of meaningful semiotic
human activity seen in connection with social, cultural, and historical
patterns and developments of use’ (Blommaert 2005:2-3)
• Acknowledging the complex communicative needs of a
postmodern globalised world
• ‘We need to develop instruments with imagination and creativity to
assess proficiency in the complex communicative needs of English as
a lingua franca … they should feature social negotiation … and
demonstrate pragmatic competence … We need tests that are
interactive, collaborative and performative.’ (Canagarajah 2006:240)
Thank you for your interest and your attention!
Questions and comments?