Higher education teaching preparation

Identification and implementation of measures of effectiveness of academic teaching preparation programs in higher education (SP10-1840)
Appendices to OLT Report
Appendix A: Literature Review .......... Page 1
Appendix B: Excerpt from UWA TPP Audit .......... Page 42
Appendix C: Academic Professional Development Effectiveness Framework: Formal Programs .......... Page 43
Appendix D: Academic Professional Development Effectiveness Framework: Informal Programs .......... Page 44
Appendix E: An introduction to the Academic Professional Development Effectiveness Framework .......... Page 45
Appendix F: Narrative Guidelines .......... Page 56
Appendix G: Narrative Exemplar 1: Overall provision .......... Page 64
Appendix H: Narrative Exemplar 2: Postgraduate Teaching Internship Scheme .......... Page 70
Appendix I: Narrative Exemplar 3: Informal Programs .......... Page 77
Appendix J: Narrative templates .......... Page 79
Appendix A
LITERATURE REVIEW
Introduction
Professional development programs and activities to enhance teaching and learning have been a
feature of the academic culture of many higher education institutions throughout the world for
more than 50 years. During this time there have been significant changes in the teaching
environment in universities. Pedagogical understandings have developed leading to more varied
teaching methods, technology has provided unprecedented opportunities of access and
enrichment, academic staff have engaged in dialogue and reflections on their teaching, and
ethnic and cultural diversity has demanded new understandings and skills of academic staff.
More recently, a growing awareness that university students of the 21st century expect
educational experiences which cannot be met by the teaching methods of the past has motivated
higher education institutions to take action to raise the quality of teaching and enhance the
student learning experience (Knapper, 2003; Hanbury, Prosser & Rickinson, 2008). Countries
such as Norway, the UK and Sri Lanka have even gone as far as making pedagogical training of
university teachers compulsory as one step towards quality assurance (Gibbs & Coffey, 2004).
These developments have thrown the spotlight on academic development in higher education
which faces the challenge of raising the quality and status of teaching despite pockets of
resistance to change and some lingering scepticism towards professional development
programs (Lipsett, 2005). Although many academic staff now recognise the importance of
staying abreast of new teaching ideas and responding to new educational challenges, others
prefer to immerse themselves in the ‘academic respectability’ of their discipline. Knapper
(2003) refers to this as the tyranny of the academic disciplines, which inhibits the application of
knowledge and insights from other fields, including Education (p. 6). Brew (1995) suggests
that what underpins such attitudes is a limited conception of teaching and the perception that
professional development programs lack a substantial discipline base and are therefore
‘service’ activities rather than scholarly pursuits. This view is now being challenged with
accreditation for academic professional development programs, the tentative step by a
number of institutions towards making teacher preparation compulsory for new staff, the
recognition of excellence in teaching by institutions, and a growing interest in research into
teaching and learning in higher education. This emerging body of research includes a
significant number of studies related to academic development.
Several scoping studies contribute to an international perspective of academic development
programs. Gosling’s report (2008) is a substantial account of the purpose, strategic
importance, structure and status of academic development units (ADUs) in the UK,
identifying the growth and breadth of their work, but also highlighting the sense of
marginalisation and demoralisation felt by many staff in these units. Gosling’s report
concludes by identifying ten principles for maximising the operation and influence of ADUs
and making their work more visible. Similarly, the Taking Stock Report (2010), published by
Ako Aotearoa, The National Centre for Tertiary Teaching Excellence in New Zealand,
acknowledges that the key to high quality tertiary education is the capability of the staff in
the sector and the way in which they are supported to develop their practice (p. 3). Not
surprisingly the Report reveals that the majority of tertiary teachers in New Zealand are
appointed on the basis of their credentials in their discipline and have no formal preparation
for teaching. Nevertheless, the comprehensive provision of academic development programs
nationwide is evidence of the opportunities available to staff to up-skill, and participation
rates suggest that many are willing to do so (p. 7). While provision is extensive, the report
notes considerable variation in structural and management arrangements, in intended
outcomes, and in the nature, type and availability of programs. The situation regarding
academic development in the USA also has been increasingly documented (Donald, 1977;
Sullivan, 1983; Diamond, 1988; Gaff & Simpson, 1994; Crawley, 1995; Lewis, 1996)
providing an overview of progress especially since the 1960s and 1970s when student
protests about uninspired teaching put academic development under pressure (Gaff &
Simpson, p. 168). With calls for accountability from parents and governing bodies in the
1990s, the quality of university teaching attracted further attention, resulting in significant
growth in the number of ADUs across the USA. Subsequent research into teaching in higher
education led to tentative claims of improved teaching as a result of this increased
professional development activity.
In Australia the Development of Academics and Higher Education Futures Report (Ling, 2009)
was the result of a scoping project documenting current academic development activities in
Australian universities. The Report reveals consistency in that nearly all Australian universities
have a central academic development unit, but a significantly varied landscape of mission
statements, organisational structure, provision, intended audience, institutional support, and
resourcing. It identifies a number of key issues and makes recommendations for the sector,
institutions and academic development units within institutions. With respect to these, it calls for
continued financial support at the Federal Government level, institution-wide policies, structure
and strategies to consistently encourage, support and recognise quality teaching, and the
development of continuing career-related professional development opportunities. It also
emphasises the need for programs to be underpinned by research, scholarship and evidence-based practice, and for academic developers to engage in forms of evaluation which will
indicate the impact of their activities in the long term (p. 62).
This is a timely challenge given the increasing attention directed to academic development
programs in higher education over recent years. Indeed, throughout the last two decades a
government agenda of quality, value for money and enhanced participation has resulted in
persistent attention to the quality of higher education in Australia leading to a number of
inquiries with subsequent reports and recommendations (Ramsden, 2003, p. 233; Bradley et
al, 2008; ACER, 2008). This attention has not only focused on policy and practice at the
institutional level, but also on teaching practices, the gulf between research and teaching
quality in universities and the expectations of students who are now seen as fee-paying
‘consumers’ (Clark et al., 2002, p. 129). In response to this, a number of Australian
universities now require academic staff new to teaching to undertake an initial teaching
preparation program in the first years of their appointment, while others encourage their staff
to participate in professional development related to teaching through an extensive range of
program offerings. With greater attention being paid to the quality of teaching in universities
more broadly, and in individual performance reviews and promotion more specifically, there are
clear expectations that teaching staff will provide evidence of the quality of their teaching
and of ongoing participation in teacher development programs. This, in turn, leads to
questions of accountability and calls for academic developers to demonstrate that their
actions are not just in line with institutional strategic initiatives, but have improved student
learning experiences and outcomes (Brew, 2007, p. 69). In order to respond to this there is an
expectation that academic developers engage in systematic evaluation of their programs and
their impact.
Types and purposes of teaching preparation programs for academics (TPPs)
Hicks (1999) defined academic development as the provision of pedagogically sound and
discipline relevant development for academic staff across the broad spectrum of disciplines
present within a university so as to impact effectively on student learning (p. 44). While
Hicks’ definition is not universally agreed, it nevertheless suggests that academic
development involves a wide range of activities which includes TPPs (a term used throughout this paper for teaching preparation programs for academics in higher education institutions, as distinct from courses which prepare teachers for the school sector). This is supported by
a number of scoping studies which have documented the range and intended outcomes of
various types of TPPs both in Australia and overseas (Stefani, 2011; Ako Aotearoa, 2010;
Hicks, Smigiel, Wilson & Luzeckyj, 2010; Holt, 2010; Ling, 2009; Viskovic, 2009; Dearn,
Fraser & Ryan, 2002; Gibbs, Habeshaw & Yorke, 2000; Hilton, n.d.).
In general, what we do know is that teaching preparation activities for academics vary in
scope, content, delivery mode, intended outcomes and audience. They can be formal or
informal, short or extended, planned or ad hoc, face-to-face or on-line. Furthermore, they may
be centrally designed and delivered, be more decentralised including faculty/school or
discipline activities, be offered through professional associations or occur through
collaborative, peer, mentoring or partnership arrangements.
Prebble et al (2004), in their synthesis of the research on the impact of academic development
programs, group TPPs into five distinct categories: short training courses; in-situ training;
consulting, peer assessment and mentoring; student assessment of teaching; and intensive staff
development. Kreber and Brook (2001) developed a similar categorisation of TPPs but
excluded student assessment of teaching as a distinct category. Their four categories are: a
centralised course on learning to teach in higher education; individual consultations with
teaching staff, seminars/workshops; collaborative action research studies on teaching; and peer
consultation programs (Kreber and Brook, 2001, p. 101). In reviewing the literature on the
impact of academic development programs, Southwell & Morgan (2010) grouped TPPs into two
broader categories: Group-based staff development programs (which included short training,
intensive staff development and in-situ/ collegial communities), and individual-based academic
staff development programs (which included mentoring and peer review) (p. 47).
Effectiveness of various types of TPPs
It could be argued that all of these programs or strategies have the underlying, if not explicitly
stated, goal of improving student learning. However, a scan of the literature reveals that there is
little research to suggest which, if any, of these types of programs is effective in achieving this
implicit goal, or indeed, if any of the intended outcomes explicitly stated in TPP documents are
achieved. According to Devlin (2008) the questions of whether or not various teacher
development interventions actually work and, if so, in what ways such interventions influence
skills, practices, and foci, and/or ultimately lead to improved learning, remain largely
unanswered in higher education (p. 15). This is a problem for academic developers who would
benefit from evidence of what works, not only in terms of understanding what would make
professors eager to take advantage of these programs (Lewis, cited in Lycke, 1999, p. 131), but
also so that the limited resources available can be directed to the most appropriate TPPs for each
particular context (Postareff et al, 2007).
Two scoping studies reveal some tentative conclusions relating to the effectiveness of
particular types of TPPs (Prebble et al, 2004; Southwell & Morgan, 2010). Together their
findings suggest that short training courses which present discrete, skills-based topics have
little impact on teaching and learning as there is limited opportunity to change teachers’
conceptions of teaching and no opportunity for teachers to apply the new techniques within
the discipline-specific context. On the other hand, intensive, more comprehensive programs
can influence teacher beliefs and behaviour and may lead to a more student-focused approach
in teaching.
There is also general agreement that discipline-based programs or ‘in-situ’ training provide a
more effective setting for TPPs, especially when they involve participation in communities of
practice, mentoring, reflective practice, and action learning (Warhurst, 2006; Rindermann et
al., 2007; Spronken-Smith & Harland, 2009; Ortlieb et al., 2010). This type of academic
development, which is collaborative, authentic and presented within communities of practice,
also has been shown to be valued by staff (Ferman, 2002; Reid, 2002; Peseta & Manathunga,
2007). Wahr (2008) emphasises the importance of context, demonstrating that professional
development designed specifically to meet the needs of staff within their discipline mitigates
barriers to learning, a view generally supported in the literature (Knight and Trowler, 2000;
Healey and Jenkins, 2003; Taylor, 2006).
Within the last decade an increasing number of studies have focused on the effectiveness of
peer review or observation as a strategy for professional development (Hammersley-Fletcher
& Orsmond, 2005; Gosling & O’Connor, 2006; McMahon et al., 2007; Bennet & Barp, 2008;
Toth & McKey, 2010). Byrne et al. (2010) suggest that peer observation is often dealt with
quite unimaginatively, with the traditional and somewhat superficial observation of a one-off
teaching session dominating the process (p. 216). Given the diversity in teaching settings
today, including on-line learning, they call for more varied and in-depth approaches and for
more engagement with pedagogical theory, critical reflection and collaboration as a
foundation for improvement in teaching from the peer review process. Their study, motivated
by discontent with an existing compulsory peer review program, was based on the belief that
the peer development process is the opportunity to develop one’s own practice in a
meaningful way by engaging in dialogue with others about pedagogy (p. 218). Participants
developed a year-long plan for peer development with colleagues with mutual interests,
engaging in dialogue and scholarship to clarify their understandings of teaching and learning
issues. At the conclusion of the study they reported genuine professional development,
increased collegiality and autonomy, and that the process of peer development was more
effective than observation or review (p. 221). It appears that the success of peer development
in this case was largely due to the ownership of the process by the academics involved. In the
absence of such ownership, what remains is a ‘top-down’ approach, which Byrne et al. warn
may encourage compliance with policy or procedures rather than engagement with practice,
thereby perpetuating the audit culture in higher education rather than developing a teaching
and learning culture.
There is a great deal of support in the literature for peer learning within faculty learning
communities as a basis for developing a teaching and learning culture (Shulman, 1993; Senge,
2000; Cranton & Carusetta, 2002; Cox, 2004; Boreham, 2004; Feger & Arruda, 2008;
McCluskey de Swart, 2009). Such learning communities may be cohort based (where the
participants have similar needs and concerns) or topic based (where the focus is on a
particular teaching and learning need) (Cox, 2004, p. 8). Their structure may be formal with
group members selected by senior staff, a specific purpose identified and financial support
provided, or informal with voluntary participation in less structured situations. Regardless of
the structure, a number of benefits of this process are reported. These include powerful
learning from informal interactions in a stress free environment, improved collegiality, the
emergence of a sense of citizenship within the institution which leads to improved outcomes
for students and a ‘ripple effect’ to other staff (Cox, 2004). Cox also reported that
participation in a community of learning motivated staff not only to try new approaches to
teaching, but also to analyse the impact of these new approaches which included increased
engagement of students in class, increased student interest, a more positive classroom
environment and improved student evaluations. Perhaps more significant, however, were the
reports of an increase in students’ abilities to apply principles and concepts to new problems,
to ask good questions, to work productively with others, to think independently and to
synthesise and integrate information (p. 12). Much of the reported improvement in student
performance was attributed to the use of a more student focused approach which included
collaborative learning, active learning, discussion and the use of technology to engage
students. The evidence supporting these claims was in the form of surveys, teaching project
reports and teaching portfolios which incorporated exemplars of student work.
Several studies have reported the impact of peer learning and learning communities on mid-career academics, suggesting that it is not only new academics who benefit from peer interaction
and communities of practice (Blaisdell et al., 2004; McMahan and Plank, 2003; Smith and
Smith, 1993). McMahan and Plank (2003) describe peer support between experienced
academics as ‘old dogs teaching each other new tricks’. They report that mid-career academics,
having developed confidence in their discipline, teaching techniques and assessment, often shift
their focus to student learning and are keen to try new approaches to teaching related problems.
One method of supporting the learning of mid-career academics, reported by Smith & Smith
(1993), is through establishing ‘partners in learning’ and encouraging engagement with the
scholarship of teaching and learning.
In a multi-layered study, Allern (2010) examined the intersection of peer review, the
scholarship of teaching, and teaching portfolios as a means of academic development. Based
on the notion that peer review of research is an accepted practice, participants’ teaching
portfolios were reviewed by peers, challenging academic staff to go beyond recording and
providing evidence of their practice to interrogating their teaching beliefs and practices based
on the pedagogical frameworks explored during TPPs. The study found that by using their
theoretical understandings to review their portfolios and those of their peers, participants
moved beyond an interest in teaching ‘tricks’ to deeper analysis of teaching, eventually
leading to improved practices.
While these findings suggest strong support for peer learning and learning communities as a
form of academic development there are also limitations reported in the literature (Hollins et
al, 2004; Stoll et al, 2006; Wells & Feun, cited in Feger & Arruda, 2008; Vescio, Ross &
Adams, 2008). The most common issues include time constraints due to competing demands,
size of staff (both too small and too large), lack of institutional support, uncertainty about
effectiveness, divergent views on teaching, the quality of leadership and flawed program
design and facilitation, all of which can limit the impact and sustainability of learning
communities. A study reported by Clegg et al (2006) attempted to mitigate the problem of
time constraints by using email communication as a basis for collegial support during the
implementation of a teaching innovation. Analysis of email transcripts and interviews at
various stages of the innovation indicated that peer support and learning through email
interactions had been an effective form of academic development (p. 98).
A number of the more recent studies of the impact of various types of TPPs report the evaluation
of a particular strategy with a focus on a single outcome. Many of these focus on formal
extended strategies for groups of participants, such as Foundations-type programs, although there
have been a number of studies that have addressed individualised activities such as the use of
'consultations' as a development strategy (see Table 1 for a summary of these).
Table 1: Indicative Studies of Impact of Teaching Preparation Programs
Outcomes foci examined: teacher approaches; teaching behaviour; institutional culture; student approaches.

Study Reference | Teaching Preparation Strategy
Akerlind, 2007 | Extended/short courses
Allern, 2010 | Teaching portfolios
Asmar, 2002 | Academic development
Bennet & Barp, 2008 | Peer review
Blaisdell & Cox, 2004 | Peer review/community of practice
Breda, Clement & Waytens, 2003 | Short course
Buckridge, 2008 | Teaching portfolios
Byrne, Brown & Challen, 2010 | Peer review
Cilliers & Herman, 2010 | Extended course
Cox, 2004 | Community of practice
Devlin, 2008 | Extended course
Donnelly, 2008 | Accredited program
Edwards, McArthur & Earl, 2004 | Accredited program
Feger & Arruda, 2008 | Community of practice
Gibbs & Coffey, 2004 | Extended course
Giertz, 1996 | Short course
Ginns, Kitay & Prosser, 2008 | Extended course
Godfrey, Dennick & Walsh, 2004 | Short course
Gosling & O’Connor, 2006 | Peer review
Hammersley-Fletcher & Orsmond, 2005 | Peer review
Hanbury, Prosser & Rickinson, 2008 | Accredited program
Healey & Jenkins, 2003 | Discipline specific
Ho, Watkins & Kelly, 2001 | Short course
Horsburgh, 1999 | Extended/short courses
Kinchin, 2005 | Extended/short courses
Knapper, 2003 | Academic development
Kogan, 2002 | Academic development
Light, Calkins, Luna & Dane, 2009 | Extended course
Lomas & Kinchin, 2006 | Academic development
McCluskey de Swart, 2009 | Peer learning
McMahan & Plank, 2003 | Peer learning
Ortlieb, Biddix & Doepker, 2010 | Community of practice
Peseta & Manathunga, 2007 | Community of practice
Piccinin & Moore, 2002 | Consultations
Piccinin, Cristi & McCoy, 1999 | Consultations
Postareff, Lindblom-Ylanne & Nevgi, 2007 | Extended course
Reid, 2002 | Discipline specific
Rindermann, Kohler & Meisenberg, 2007 | Consultations
Rust, 1998 | Workshops
Rust, 2000 | Extended course
Southwell & Morgan, 2010 | Extended/short courses
Spafford Jacob & Goody, 2002 | Extended course
Spronken-Smith & Harland, 2009 | Community of practice
Stes, Clement & van Petegem, 2007 | Extended course
Toth & McKey, 2010 | Peer review
Trowler & Bamber, 2005 | …
Wahr, 2010 | Discipline specific
Warhurst, 2006 | Community of practice
Weimer, 2007 | Academic development
Weurlander & Stenfors-Hayes, 2008 | Short course
These studies are often localised, context-bound case studies and are therefore not generalisable
across TPPs more broadly.
Measuring effectiveness
While evaluating the impact of TPPs might seem a relatively straightforward matter on the
surface, there is, in fact, considerable debate about how to determine the impact of academic
development programs, what aspects to measure, how to measure them and how conclusive
attempts to assess impact can be. Furthermore, any consideration of the impact of academic
development programs can only be meaningful when contextualised against the size and type of
institution, the resources available and the intended outcomes, adding to the complexity of the
task. This, however, is not a new problem.
In 1975 Jerry Gaff lamented the lack of impact evaluation of academic development
programs, suggesting that unless we evaluate our programs and demonstrate that they
produce results in terms of better courses or better educated students, more knowledgeable,
sensitive, effective, or satisfied faculty members, or more effectively managed organizations,
we will all be out of business (p. 4). Decades later Kreber and Brook (2001) continued to
argue that serious evaluation was long overdue and pointed to the difficulty of developing a
framework when most academic development outcomes, rather than being end points in
themselves, were part of the process of becoming (p. 54). Similarly, Sword (2011) more
recently maintained that changes which might occur as a result of participation in TPPs are
designed to unfold slowly over time rather than be observable at a point in time. Indeed, the
very fact that academic teaching development deals with human responses suggests that
attempts to measure outputs or outcomes will be challenging. The complexity of the task of
evaluation is further complicated by the increasing attention directed to the culture of the
institution and the extent to which teaching and learning quality is supported and rewarded
(Chalmers, 2011).
These challenges appear to have inhibited evaluation initiatives, with few studies addressing the
long-term impact of TPPs (Gibbs & Coffey, 2004; Prebble et al., 2004; Trowler & Bamber,
2005). In his report on educational development in the United Kingdom, Gosling (2008) noted
that academic development strategies are monitored, but there appears to be little systematic
investigation of the impact of strategies (p. 38). Although this appears to be a universal issue
according to reports and studies emanating from North America, the United Kingdom, Europe,
Scandinavia, Africa and Australasia, it should not diminish the urgency of determining how to
evidence impact beyond participant satisfaction as a basis for making the most effective use of
limited resources (Postareff et al 2007).
A number of studies have attempted to address the question of how to measure effectiveness and
impact of teaching preparation activities (Bowie et al, 2009; Hanbury et al, 2008; Devlin, 2008;
Postareff et al, 2007; Gibbs & Coffey, 2004). Rust (1998), in a UK based study, reported that
although the academic development workshops were highly rated and the feedback positive, this
could be seen mainly as a measure of participants' satisfaction, providing no evidence that the
workshops achieved any changes in participants' practice (p. 72). While there has been some
research in the intervening years into the effectiveness of TPPs, the ‘happy sheet’ that reports
participant satisfaction at the end of the TPP arguably remains the dominant form of evaluation
of the majority of TPPs. A review of tertiary practitioner education training and support in
New Zealand commissioned by Ako Aotearoa (2010) confirmed that participant feedback on
the program was the predominant formal mechanism used for evaluation and that academic
developers relied on participants' anecdotal evidence of changes in their teaching practice. In
highlighting the inherent difficulty of identifying the impact of academic development
activities on the quality of teaching and learning the report alludes to the challenge not only of
determining how to evaluate, but also of isolating what can be evaluated.
In general, the literature relating to the evaluation of academic development uses the terms
‘effectiveness’, ‘impact’, ‘influence’ and ‘strengths’ interchangeably without attempting to
define them, thereby lacking clarity of purpose. The American Evaluation Association (2011)
defines evaluation as assessing the strengths and weaknesses of programs, policies, personnel,
products, and organisations to improve their effectiveness. Moon (2004) asserts that an effective
intervention, such as TPPs for academics, will result in changes in knowledge and practice
appropriate to the teaching-learning context. This suggests that an evaluation of the
effectiveness of TPPs for academics must direct attention not only to the effectiveness of the
practices and processes as reported in participant satisfaction surveys, but also to the changes
which occur as a result of these practices and processes. A number of studies have investigated
the changes which occur as a result of participation in TPPs.
Conceptions of teaching
While in general there would be no argument with the premise that improving the
performance of teachers will lead to better student outcomes, generating data to support this
is more difficult. Recognising this as problematic, a number of studies have instead focused
on the impact of TPPs on teachers’ conceptions of teaching, reasoning that in the longer term
these changes would have some impact on student learning (Marton & Booth, 1997; Van
Rossum & Schenk, 1984; Marton & Saljo, 1997). Research has shown that all teachers hold
personal conceptions of teaching which are the result of their own experiences, both as
students and teachers (Dall’Alba, 1991; Fox 1983; Martin and Balla, 1991; Pratt 1992;
Prosser, Trigwell and Taylor 1994; Ramsden, 1992; Samuelowicz and Bain, 1992). Such
conceptions usually range from those of teachers who hold fast to the teacher-focused and content-oriented transmission model of teaching (Entwistle & Walker, 2000), to those who place the
student at the centre of decisions related to teaching and learning and see teaching as
synonymous with the facilitation of student learning and conceptual understanding (Kember,
1997; Prosser et al., 1994). Samuelowicz and Bain (1992) suggest that in reality this is a
more complex position, with teachers having ‘ideal’ conceptions of teaching and ‘working’
conceptions of teaching so that even if they believe that student centred approaches are best,
they may not act on this belief (p. 110).
A number of studies support the position that in order to effect change in teaching, higher
education teachers must begin to think about teaching and learning differently (Bowden,
1989; Gibbs, 1995; Gow and Kember, 1993; Ramsden, 1992; Trigwell, 1995). This is
somewhat of a challenge given the literature which acknowledges the difficulties in changing
conceptions of teaching (Holt-Reynolds, 1991; Kennedy, 1991; Simon and Schifter, 1991;
Taylor, 1990). Despite the difficulties there have been attempts to change participants’
conceptions of teaching by increasing their awareness of other conceptions which are more
conducive to student learning (Martin & Ramsden, 1992; Ramsden, 1992; Trigwell, 1995)
and through action research (Gibbs, 1995; Kember et al, 1997). The use of action research is
in line with Ramsden’s (2003) position that teachers in higher education need to think about
teaching in a scholarly way in order to challenge their long held conceptions.
Ho, Watkins and Kelly (2001) implemented a program intended to challenge and change
conceptions of university teachers based on a model of change developed from the work of
several theorists (Argyris and Schön, 1974; Posner, Strike and Hewson, 1982; Lewin, 1947;
Shaw et al, 1990). Participants were engaged in a four-stage process of self-awareness,
confrontation, exposure to alternative conceptions and finally, commitment (p. 147). The
impact of the program was assessed through a comparison of conceptions of teaching using
recorded interviews at the Pre, Immediate Post (at the end of the program) and Delayed Post
(one year later) stages. Data on student perceptions of participants' teaching through Course Experience Questionnaire (CEQ)
scores and students’ approaches to studying, both before and after the program, was also
analysed. Of the participants, approximately 25 per cent demonstrated significant changes in
conceptions of teaching which had a positive effect on their teaching practices and student
learning approaches. A further 50 per cent of the participants demonstrated changes to a
lesser extent and approximately 25 per cent exhibited minor or no change in conceptions of
teaching. While the conclusions were drawn with caution due to the small number of
participants, the results suggest that professional development programs can lead to changed
conceptions of teaching with consequential changes in teaching practice and student
approaches to learning, and that these changes occur over time (Ho et al, p. 163).
Giertz (1996), in her study of the long-term effects of an intensive three-week training program
for academic staff, focused on the effects of teacher preparation on pedagogic understandings
and competence. The data gathered, which included self-reported progress and observations by
‘supervisors’, identified changes in the teaching of the participants and, perhaps more
significantly, the spread of their influence over other staff. Evidence of a shift from teachers' to
learners' needs was seen in the preparation for teaching, in the presentation of classes and in
collegial conversations about teaching and learning. Participants typically reported that they
were ‘less focused on myself, and more aware of the students' need to understand’, demonstrating
changed conceptions of teaching and learning (p. 70).
A further perspective on the effect of TPPs on teacher beliefs was provided by Akerlind
(2007), who argued that individual academics experience the world of teaching differently
and therefore hold different conceptions of it. She identified five different approaches to
developing as a university teacher and suggested that these are hierarchical, with each stage
building on the previous one. The five stages are: a focus on improving content knowledge, or
what to teach; building experience in how to teach; building a repertoire of teaching
strategies to be a more skilful teacher; determining which strategies work in order to be an
effective teacher; and finally, understanding what works and why in order to effectively
facilitate student learning (p. 27). The findings of the study suggest that only when academics
are focused on the development of a repertoire of teaching strategies will they be interested in
development workshops. Any attempt to introduce them to notions of reflection or teaching
portfolios will have little effect if their view of teaching development is focused on
improving their content knowledge. This is an interesting finding in terms of debates about
whether TPPs should be compulsory.
Similarly Entwistle and Walker (2000) suggest that there is a ‘nested hierarchy of
conceptions of teaching’ and that more sophisticated understandings emerge from experience
as individuals create their own knowledge of teaching over time (p. 352). They suggest that
by developing ‘strategic alertness’ teachers can use their experiences to improve their
teaching (p. 357). Other studies (Woods & Jeffrey, 1996; Forest, 1998; Trigwell & Prosser,
1997) report similar conclusions suggesting that seizing on classroom opportunities for
‘teachable moments’ or ‘learning moments’ can be significant events in changing
conceptions of teaching. However, this assertion has been challenged by Norton et al (2005)
who claim genuine development only comes about by changing teachers' underlying
conceptions of teaching and learning.
Prebble et al (2004), Knight (2006) and Ginns et al (2008) concluded that teaching development
programs which are based on conceptual change models can be effective in shifting teachers’
beliefs from a teacher-centred to a more student-focused approach to teaching. These
changed conceptions then have the potential to influence student learning if there is a
consequential shift from teaching and learning experiences which focus on memorisation and
routine tasks for surface learning, to approaches which emphasise understanding and
application of concepts which promote deep learning (Hanbury et al, 2008). Kember &
Kwan (2000) adopt a slightly different schema, characterising lecturers' conceptions of
teaching as content-centred and learning-centred (p. 475). They then identified four
subcategories of teaching: teaching as passing on information; teaching as making it easier
for students to understand; teaching as meeting learning needs; and teaching as facilitating
students to become independent learners (pp. 477-478). Each of these conceptions is
manifested through teacher behaviours such as motivational approaches, teaching strategies,
attention to and engagement of students and assessment practices. Other studies also have
indicated a close relationship between conceptions of teaching and approaches to teaching,
and between approaches to teaching and student approaches to learning (Hanbury et al.,
2008; Kember, 1996; Biggs, 1987). McAlpine and Weston (2000) state this relationship quite
categorically: ‘Fundamental changes to the quality of university teaching . . . are unlikely to
happen without changes to professors' conceptions of teaching’ (p. 377).
Eley (2006) suggests that the relationship between teacher conceptions and teaching practice is
more tenuous, being one of espoused conceptions and reported approaches (p. 193). This view
is supported by Kane et al (2002) who caution that we should view the connection between
teacher beliefs and their actions as tentative. They challenge the legitimacy of a number of
studies which claim to measure the impact of TPPs on conceptions of teaching, arguing that in
most cases the studies rely on self-reporting rather than observation of teachers' practice and are
therefore unable to discriminate between what are the espoused theories-of-action and the
theories-in-use. According to Argyris et al. (1985) the difference between these is that an
espoused theory communicates aims and intentions about behaviour in a certain situation, while
theories-in-use determine the actions. Despite these cautionary conclusions, Postareff et al
(2004) confirm that, in general, academics who participate in extended pedagogical training
which challenges conceptions of teaching demonstrate positive self-efficacy in regard to
teaching beliefs, considerably more so than those who take short courses and remain uncertain
of their understandings of teaching and their abilities. What becomes apparent from the
literature is that in order to measure the impact of TPPs on conceptions of teaching, longitudinal
studies which include observation and analysis of teaching materials are likely to provide more
reliable evidence than self-reports alone.
Teacher behaviour
According to Rust (2000), there is evidence that higher education teachers with a postgraduate
teaching qualification receive better student feedback ratings than those who do not
hold such qualifications (p. 255). Leaving aside variables such as motivation, aptitude and
experience, it would be reasonable to expect that completing such a qualification would have
some effect on teacher behaviour and a number of studies have drawn this conclusion (Breda
et al, 2004; Cilliers & Herman, 2010; Devlin, 2008; Donnelly, 2008; Stes et al, 2007;
Weurlander & Stenfors-Hayes, 2008). Donnelly (2006) reported that the three main effects
on teacher behaviour following participation in an academic development program were the
development of new instructional strategies, the implementation of new teaching approaches,
and changed beliefs about teaching and learning theories.
A number of studies support Donnelly’s findings. Godfrey et al (2004) reported improved
teaching practices as a result of a training program for medical teachers. Using pre- and
post-training surveys, participants reported using a greater range of teaching techniques, more
successful implementation of strategies and sustained changes in their teaching following
training. While acknowledging the weakness of self-reporting instruments, the researchers
concluded that TPPs can lead to changes in teaching behaviours (p. 847). Similar findings were
evident in the study by Spafford Jacob et al (2002) with participants reporting the use of new
techniques to make lectures interesting, stimulate group discussion, encourage student
participation and elicit questions and feedback from students. McArthur et al (2004) undertook
a comparison of two groups of academics: one had completed a graduate certificate in higher
education while the other had not. They found that both groups displayed a good level of student
focus and the use of various interactive techniques to sustain student involvement and attention
during lessons (p. 4). However, on closer examination of their responses it became clear that the
group who had completed formal training used more problem solving, group discussion,
interactive tasks and adopted a more sophisticated structure to classes using pacing and variety
to engage students. The group with no formal training relied on their content knowledge and
personality to maintain attention. The researchers viewed their findings with caution, however,
concluding that the difference between the two groups was minimal and that the benefit of the
training appeared to be in assisting less experienced teachers to develop more quickly.
Several studies have concluded that participation in TPPs can have an impact on overall
approaches to teaching. Participants in Knight’s (2006) study reported that following completion
of the PGCE, their approach to teaching practice was more informed by educational theory,
more geared to student-centred learning, more competent in the theatre of learning, and
reflected a better understanding of the student perspective (p. 20). Hanbury, Prosser and
Rickinson (2008) confirm these findings, reporting that following completion of a teaching
preparation program, participants perceived their teaching to be more student-focused, which
was reflected in their teaching practices, curriculum development and confidence in taking risks
and trying new approaches (p. 477). They attributed the latter to having a theoretical framework
within which to evaluate their teaching. Significantly, participants reported that these
improvements occurred only after they had opportunities for practice and reflection over time
(p. 475). Similar findings of delayed transfer of learning have been reported in other studies
(Kirkpatrick, 1996; Cilliers & Herman, 2010).
There are a number of studies which report a link between the impact of TPPs on teacher
behaviour and an increased interest in the scholarship of teaching and learning. Healey
(2000) takes the position that teaching and learning in higher education are inextricably
linked to the scholarship of teaching within disciplines and that engagement with the
scholarship of teaching will lead to improvements in teaching and learning. The term
‘scholarship of teaching’ is used often in the field of academic development, yet is rarely
defined. Martin, Benjamin, Prosser and Trigwell (1999) drew on the relevant literature to
identify three strands which define the scholarship of teaching. These are engagement with
the research on teaching and learning, reflecting on personal teaching and student learning
experiences within a particular discipline, and sharing knowledge and practice about teaching
and learning, both within the discipline and more generally. Huber and Hutchings (2005)
conclude that encouraging and supporting scholarship in teaching and learning is
fundamental to improving student learning, and that TPPs, and centres for teaching in general, can
provide the backbone for ongoing efforts towards faculty’s development as scholars of
teaching and learning (p. 121).
Despite the positive relationship between TPPs and improvements in teaching practices
reported in the literature, some studies have suggested that one factor which limits the impact
of TPPs on teacher behaviour is the lack of discipline-specific material presented during the
programs. Jenkins (1996) argued that when planning programs, academic developers must
recognise the importance that staff attach to their discipline if they wish to maximise their
impact. Others emphasise the importance of the discipline in academic identity (Becher,
1994; Diamond & Adam, 1995b; Gibbs, 1996; Jenkins, 1996; Boud, 1999; Reid, 2002),
while Kolb (1981) identified the different learning styles of different disciplines and the
implications of this for teaching preparation programs. Shulman (1986, 1987) also
emphasised the need for teachers to have not only knowledge of their discipline, but also of how
best to present the particular concepts, methodologies, issues and insights to their students.
Lindblom-Ylanne et al (2006), in their study of over 300 higher education teachers, found a
strong relationship between approaches to teaching and academic discipline. Predictably,
teachers of ‘hard disciplines’ (physics, chemistry and mathematics) adopted more
teacher-focused approaches, while those in the ‘soft disciplines’ (history, art, philosophy) were more
student-focused (p. 294). Furthermore, teachers of the ‘hard disciplines’ reported the highest
levels of self-efficacy, perhaps because of the linear and straightforward teaching content (p.
295).
The studies which report attempts to measure the impact of TPPs on teacher behaviour tend
to rely on interviews, Likert-type scale surveys or observations. These are variously
administered immediately following the program, a short time later or after a longer period,
for example a year. Gibbs (2003) warns that studies which rely solely on self-reporting with
no corroborating evidence are likely to draw flawed conclusions about the impact of TPPs on
teacher behaviour. There is also some suggestion that the timing and nature of the
measurement tool affects the data collected, with surveys unlikely to reveal the deeper
understandings which can be communicated through guided conversation (Rust, 2000). In his
study, Rust illustrates the advantages of drawing on a range of data sources including
participant feedback via surveys and guided conversations, observations of teaching and the
submission of a teaching portfolio. In almost all cases behavioural change following
participation in the TPP was not only reported, but also observed in the organisation of
lectures, the use of new activities in lectures, the alignment of course materials and
assessment, engagement with reflection for improvement, and consideration of the students’
expectations and views in the development of course and teaching materials. Rust
acknowledged that although the study reported and evidenced changes in teacher behaviour,
no conclusions could be drawn about outcomes for students (Rust, 2000, p. 259).
In general, the literature confirms that it is possible to evidence a shift from teacher-focused
to student-focused approaches in teaching and learning following participation in
professional development programs. Academics also report feeling more confident and less
stressed about teaching, having expanded repertoires of teaching strategies and being more
aware of improved outcomes for their students (Smith and Bath, 2003; Giertz, 1996). Kinchin
(2005) cautions that multiple sources of evidence are needed, as self-reported improvements
may, in fact, be quite superficial.
Student engagement and approaches to learning
The role of academic developers is frequently described as facilitating improvement in the
quality of teaching and learning (Eggins & MacDonald, 2003). However, those conducting TPPs
have little direct contact with students, making it difficult to assess their effectiveness in achieving this
goal. Furthermore, the literature on the relationship between TPPs and student learning is not
only scant, but at times confusing or contradictory. For example, some studies have concluded
that there is little evidence regarding the impact of training on teaching and even less evidence
of impact on student learning (Gilbert and Gibbs, 1999; Weimer and Lenze, 1997), while others
suggest a positive, albeit indirect, relationship (Gibbs and Coffey, 2004; Hanbury et al, 2008).
Horsburgh’s (1999) study into the factors which impact on student learning suggests that
TPPs can influence student learning by assisting teachers to adopt teaching approaches which
encourage deep learning. Students interviewed in the study revealed that how teachers teach,
assess and help them learn has a direct influence on their learning. They listed expertise in
the discipline and an ability to relate this to the workplace as important characteristics of
effective teachers in addition to giving clear explanations, being well organised, presenting
learning material at an appropriate level, being enthusiastic, giving helpful feedback, using
relevant assessment tasks and having realistic workload expectations, all characteristics of
good teaching as described by Ramsden (1991). Furthermore, the students identified and
valued teachers who ‘asked critical and challenging questions’ and ‘stimulated discussion’
thereby encouraging a deep approach to learning. Simply put, the more students practise,
discuss, write, get feedback, problem-solve, study and so on, the more they are likely to learn
(Shulman, 2002; Kuh, 2003). The researchers found a link between the teachers who used
these techniques and participation in TPPs. However, other studies suggest that the
relationship between engagement in class and student learning may not be as strong as
expected since learning outcomes are influenced by many factors (Ewell, 2002; Klein et al,
2005; Carini et al., 2006).
Perhaps the most cited study in terms of the impact of TPPs on student learning is that of Gibbs
and Coffey (2004). Their extensive international investigation, which drew on evidence from
student ratings of teachers, teachers' self-perceptions of being either student- or teacher-focused,
and whether students had deep or surface approaches to learning, makes a convincing case for
the importance of TPPs in higher education institutions. The participants included ‘trained’ and
‘untrained’ academics and their students, and data related to teaching quality and student
learning was collected before, during and a year after the completion of the program. They
concluded that participation in a TPP increased the adoption of student focused approaches in
teaching and improved the quality of teaching in aspects such as enthusiasm, organisation of
learning experiences, use of group techniques and rapport with students. They also reported a
strong correlation between improved teaching and the adoption of deep learning approaches by
the students. These findings were confirmed some years later by Hanbury et al (2008), who also
found that when teachers adopt a student focused approach to teaching following participation in
TPPs, their students adopt deep learning approaches to their studies. This was determined
through classroom surveys which gathered data on the amount of time students spent on a
course, their participation in class, their interest in learning, the degree of personal responsibility
taken for their learning, the quality of their assignments, their exam results, and perceived
quality of their education program as a whole. It is important to note, however, that these
student perception related indicators are not a measure of student learning, a connection which is
difficult, if not impossible, to establish (Meiers & Ingvarson, 2003; Prebble et al., 2004; Knight,
2006; Gibbs and Coffey, 2004). Rather, changes in these indicators following staff participation
in TPPs may provide a basis for conclusions about the approach students take to their learning.
This position is supported by the comprehensive review of the literature by Prebble et al (2004), which
concluded that TPPs for academics can have a direct impact on teaching quality, which in turn
can have a positive influence on student learning, but that the relationship between TPPs and
student learning outcomes was indirect. Gosling’s (2008) report on the effectiveness of teacher
development activities in the UK also notes that despite the belief that students will benefit from
professional development workshops, the link between professional development of staff and
improved student learning is indirect and in some cases unproven (p. 45). McAlpine et al (2008)
also highlighted the difficulty in tracking the impact of TPPs to student learning. In their highly
structured study, lecturers participated in a workshop on curriculum design for improved student
engagement. Over 1000 students then completed a questionnaire comparing their learning in the
redesigned courses with that in previous courses. While their responses suggest improvements
in their learning, the researchers concluded that this revealed more about the curriculum than
about the individual teaching practices of the lecturers.
This highlights the ambiguity of some data, which was also reflected in an earlier study by
Horsburgh (1999) which tracked the introduction of a teaching and learning innovation. Prior
to implementation, teachers attended professional development which they reported led to
increased collegial dialogue around teaching and learning, a greater focus on student oriented
approaches and a heightened interest in improving their teaching. When surveyed the student
responses indicated that the program was ‘more academically rigorous…, more problem and
case study-based, more capability-focused… and taught in such a way as to encourage
critical thinking and independence’, compared with responses to previous teaching, which was
perceived to be ‘technical and task-based’ (p. 19). While the study concluded that professional
development activities can have an effect on student learning, this again may be a case of the
curriculum design contributing to improved student outcomes rather than the teaching per se.
Further evidence of the complexity of trying to measure effectiveness of TPPs on student
learning is outlined by Devlin (2008), who notes that designing and conducting research to this end is a
challenging endeavour (p. 12). Devlin's year-long study involved two groups of academics, with
one undergoing a teacher development program intended to improve their teaching practices,
increase their focus on student approaches to teaching and learning and improve student
learning. The effectiveness of the program was then assessed using student evaluations of
teaching, staff self-evaluations, a teaching orientation questionnaire, student learning outcomes,
teachers' journal entries and a ‘treatment package effect’ measure, a questionnaire which
asked participants to rank aspects of the intervention (p. 13). While it is claimed that the
intervention was somewhat effective in improving specific aspects of the quality of participants’
teaching, a number of challenges emerged in relation to drawing meaningful conclusions from
the data (p. 13). In particular it was noted that changes in classroom practice following
participation in TPPs are not immediate and therefore any impact on student learning outcomes
is difficult to track. There was also evidence of a cyclical relationship in which changes in
teaching practice result in improved student learning outcomes, which then stimulate further
changes in teaching practice. This cycle occurs over time, which complicates
attempts to establish a relationship between TPPs and student learning.
The complex nature of student learning further exacerbates efforts to determine the impact of
staff participation in TPPs. Shavelson (2010) defines student learning in higher education as a
‘permanent change in observable behaviour over time’ (p. 9). Typically this is measured in higher
education institutions by indirect measures such as graduation rates, progress and retention rates,
student satisfaction or employment rates (Crosling et al., 2006). However, these measures do not
describe the actual learning or changes in student behaviour. Shavelson believes that instead,
universities should be interested in measuring not only knowledge and reasoning in the
disciplines, but also students’ broad abilities in critical thinking, analytical reasoning, problem
solving, communicating and individual and social responsibility, all of which take time to
develop and mature (p. 186). Given that these attributes develop over time it is likely that
student learning will be influenced by a number of teachers, rather than by one semester of study
with one teacher, casting further doubt on whether TPPs can have a direct influence on student
learning. In addition, leaving aside personal circumstances, a number of other ‘educational’
factors may impact on student learning, further confounding attempts to draw strong links
between participation in TPPs and student learning outcomes.
Horsburgh identified the significance of these other 'educational' factors, asserting that even if
there are changes in teacher behaviour, unless other characteristics of the learning
environment and education process are transformative in enhancing and empowering the
student, it is unlikely that quality teaching alone will lead to improved student learning (p.
10). In this sense, transformation refers to the ways in which students acquire knowledge and
skills and apply them to the wider context. Most universities now identify graduate attributes
which acknowledge employer expectations such as a high level of discipline knowledge,
analytical abilities, synthesis and critical appraisal, personal qualities of adaptability,
flexibility, self-motivation and confidence, social skills for effective interaction in teams, and
interpersonal and communication skills. Quality learning experiences are those which
transform the conceptual abilities and self-awareness of the student and provide the skills and
abilities to contribute actively to a rapidly changing world (Ramsden, 1993). Horsburgh
identifies eleven elements which are necessary for a transformative educational experience:
curriculum, learning processes, assessment, teaching and teachers, learning resources and
support, institutional systems and structures, organisational culture and socio-economic and
political context (p. 11). On closer examination, more than half of these can be said to be of an
‘institutional’ nature rather than a ‘teaching’ nature, suggesting that universities need to pay
attention to the culture surrounding teaching and learning activities.
Institutional Culture
A number of studies have investigated the relationship between institutional culture and the
impact of the work of academic developers (Buckridge, 2008; Cilliers & Herman, 2010;
Hanbury et al., 2008; Toth & McKey, 2010). Trowler (2002) noted that instigating and
managing change in a university can be problematic with academics often reluctant to engage
with innovation. Weimer (2007) considered the overall impact of academic development
efforts on institutional culture, the valuing of teaching and predominant pedagogical practice.
She argued that over the past 30 years academic development has had little impact on the quality of teaching in higher education in terms of the valuing of teaching and pedagogical practices, but suggested that it has made a difference in the lives and teaching of many individual faculty members and therefore has benefited the learning experiences of even more students (p. 6). In
seeking to understand why the impact is limited to individuals and why so many university
staff continue to teach with very little understanding of teaching and learning, Weimer
addresses issues of nomenclature, understaffing of development units and resultant staff
burnout, insufficient funding, frequently changing administrative structures, and increasing
pressure to demonstrate accountability in ways which are at odds with the very goals of
academic development (p. 6). Knapper (2003), in his report card on educational development, suggests that in terms of effects on higher education practice … we (academic development) would earn at best an A for effort, but probably a C for impact (p. 7). He points to the
financial, social and political context within which universities operate as limiting factors,
and concludes that an A for effort and a C for impact could be considered commendable
under the circumstances (p. 7).
Trowler & Bamber (2005) explore the intersection of policy and institutional capacity and
culture and highlight the gulf which exists between effecting change in individual teacher
behaviour and achieving more widespread institutional change. In particular, they contend
that the expectation that change at one level of an institution will automatically translate to
change at another broader level is fundamentally flawed and exaggerates the power of
agency over that of structure, seeing individual actors as prime movers and shakers in social
change (p. 6). Dill (1999) agreed that while academic development units certainly enable
individual academics to enhance or improve their teaching, the dominant orientation of the
programs is to individuals rather than to departments, disciplines or institutions. What is
needed to bridge this gap, according to Dill, is an understanding of the theory of change,
intelligently applied to realistic policy development and implementation based on exemplars
of best practice, genuine support and expectations of gradual change. However, Trowler and
Bamber (2005) caution that even well formulated policy needs to be complemented by an
‘enhancement culture’, and ‘institutional architecture’ if TPPs are to effect broader change.
An enhancement culture is one in which staff are encouraged to share and question their
current teaching beliefs, knowledge and practices with a view to solving problems, improving
practice and reflecting on the gap between their knowledge and practice. This culture must be
complemented by a learning architecture which includes mechanisms for systematic review
processes, sharing of successful practices and benchmarking activities (Trowler & Bamber,
2005). An absence of such a culture severely limits the potential ripple effect of academic
development programs.
The significance of the enhancement culture was a major finding of a study by Ginns et al. (2008), which investigated the impact of a GCHE on conceptions of teaching and approaches
to teaching. Their findings, which correlate with those of earlier studies (for example
Colquitt, LePine and Noe, 2000) suggest that both individual and organisational factors such
as support from senior managers and peers, and collegial networks are critical to the transfer
of learning from academic development programs. Cilliers and Herman (2010) and Asmar
(2002) also emphasise the importance of a supportive institutional environment. Such an
environment is characterised by providing academic development opportunities for staff who
wish to improve their teaching; attending to the status of teaching within the institution
through recognition and reward of teaching achievements; providing funding to support
initiatives aimed at improving teaching, funding the scholarship of teaching, and creating an
enabling environment for academics (Cilliers & Herman, p. 7). Improvement in teaching in this context is viewed as progress along a continuum from non-reflective teaching practice through reflective teaching practice to scholarly teaching and finally to a contribution to the scholarship of teaching and learning (p. 7). An enabling environment is then seen as one in which academics are not expected to progress to a prescribed level, but rather one which encourages all to become at least reflective teachers and some to become scholarly, perhaps leading to more
widespread institutional change over time. This approach acknowledges the stress of ‘change
overload’ felt by academics under pressure to improve their teaching, lift their publication
rate and cope with the demands of a diverse student body (Lomas and Kinchin, 2006).
Spafford Jacob et al. (2002) identified a number of contextual and cultural barriers to the development of an enhancement culture. These included lack of school/faculty support for programs, with attendance having to be squeezed into a schedule that is dominated by research agendas, administration duties and the actual preparation and delivery of lectures (p. 1), lack of funding and resources, lack of interest from colleagues and resistance to change or new teaching ideas (p. 7). As a consequence of these barriers, the majority of the participants in their survey indicated that they had very little or no influence on the enhancement of teaching in their school (p. 8). Gibbs and Coffey (2004) describe this dichotomous situation as an alternative culture which exists within the training program, but is not representative of what occurs in departments or faculties (p. 98). A number of studies
(Walker and Entwistle, 1999; Kogan, 2002; Ginns et al., 2008; Southwell and Morgan, 2010)
agree that changing the teaching practice of individual academics through TPPs is of little
value without support at all levels of the institutional hierarchy and a genuine commitment to
cultural change.
Other studies (Buckridge, 2008; Ginns et al., 2008; Klenowski, Askew & Carnell, 2006) have
pursued the relationship between the scholarship of teaching, TPPs and the impact on
institutional culture. In particular, Buckridge investigated the impact of teaching portfolios developed during TPPs, suggesting that they should stimulate collegial dialogue about teaching and learning, a strengthening of the nexus between teaching and research, a more sophisticated institutional conception of teaching and a greater focus on the scholarship of teaching (p. 123). In reality, however, the success of portfolios in stimulating such institutional change is limited by the culture of institutions, which tends to value quantitative data related to research performance above qualitative teaching-related data, particularly for promotional purposes. Within such a culture, academics are likely to be satisfied with reasonable teaching results and devote more energy to research activities which will be more highly valued by the institution (Buckridge, 2008). Boyer (1990) believes that institutions have a
responsibility to recognise ‘the scholarship of teaching’ equally with research if they are
serious about effecting change in teaching practices in higher education. Rather than the
scholarship of teaching and learning and disciplinary research being competing priorities
where ‘more of one necessarily leads to less of the other’ (Asmar, 2002), there is an argument
that there should be a ‘synergy’ of teaching and research which would improve the status of
teaching and learning in institutions (Boyer, 1998). This was certainly not the case in
Donnelly’s (2000) study where participants lamented the lack of interest in discussing,
changing, supporting or researching student learning within their departments (p. 209).
Nevertheless, Donnelly challenges academic staff not to give up despite the prevailing
attitudes, but to continue to seek opportunities for collaboration within their departments
since it is only by these individuals continuing to take action to alter their own environments
that there is any chance for deep change (p. 215).
The notion of ‘learning architecture’ (Dill, 1999) is associated with the procedures within
universities for accountability and improvement which might include processes for
systematic review and benchmarking, for dissemination of best practice in teaching and
learning and for supporting innovation and experimentation in teaching (Trowler and
Bamber, 2005). Institutions which focus solely on internal review procedures which mirror
external processes risk developing a culture of compliance which results in the production of
documents and policies rather than the development and sharing of new knowledge and skills
related to teaching and learning (Dill, p. 134). In contrast to this, the architecture of a learning organisation will privilege processes and activities which result in improved practice. Such organisations tend to have embedded procedures for: the use of evidence as a basis for solving teaching and learning related problems; the coordination of teaching and learning activities within academic units; a systematic approach to gathering and using information from research, staff and past graduates to inform curriculum innovations; system-wide coordination, support and review of the quality of teaching and learning; and institution-wide dissemination of new knowledge and best practice related to improving teaching and
learning. Other studies (Huber, 1991; Szulanski, 1996) confirm that institutional learning
requires not only a supportive culture, but also embedded processes which encourage and
facilitate the sharing, borrowing and adoption of new ideas and practices from one section to
another.
Hanbury, Prosser and Rickinson (2008) investigated the effects of academic development
programs on individuals and the relationship between programs and institutional missions and departments (p. 1). Individual participants reported improvements to their own teaching (being more confident and student-focused) and emphasised the importance of institutional support, particularly at the department level, in sustaining the benefits from TPPs (p. 475). A
number of department heads reported that staff who participated in TPPs acted as catalysts
for the sharing of ideas about teaching and developed the language and confidence to discuss
and challenge existing teaching and learning practices (p. 478). This helped raise the profile
of pedagogy and encouraged new staff to take a professional approach to their teaching.
Other institutional benefits reported included the initiation of interdisciplinary collaboration,
a greater awareness of the need to understand pedagogy in addition to discipline knowledge,
and the development of processes for mentoring new staff. However, this was not a
unanimous view, with some heads of departments commenting that TPPs were something to be got out of the way so that they could concentrate on research and strategic planning (p. 480).
In this context it is not surprising that many participants reported that although they
appreciated the collegial networks they developed, they did not anticipate any career benefit
from having completed a TPP, confirming their belief that their research portfolios would
determine their success in promotion. This perception appears to be widespread as evidenced
in Wright’s (1995) study in which academics across the world identified acknowledged
greater recognition of teaching in the promotional process as having the potential to improve
their teaching more than any other action, suggesting a significant gap exists between
university culture and academic motivation (Ramsden & Martin, 1996). The challenge is for
institutions to identify ways to define and measure excellence in teaching and the scholarship
of teaching and learning just as they do for research (Jenkins, 1997).
Evaluation Frameworks
A number of studies have focused on the development of evaluation frameworks. Kirkpatrick's (1998) evaluation framework, which comprises the four levels of Reaction, Learning, Behaviour and Results (all four of which need to be considered for a complete picture), appears to have informed the design of a number of models for evaluating academic development activities. Guskey (2000), as cited by Stes, Clement and Van Petegem (2007), identified five levels of professional development evaluation (p. 100), and Kreber and Brook (2001) propose six levels at which academic development programs can be evaluated for impact. Together, these include:
1. participants’ perceptions/satisfaction with the program;
2. participants’ beliefs about and approaches to teaching and learning (Trigwell & Prosser,
2004);
3. participants’ teaching performance (including their use of new knowledge and skills);
4. students’ perceptions of staff’s teaching performance;
5. students’ learning; and
6. the culture of the institution (including institutional change). (See also Trowler & Bamber,
2005).
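To make the combined framework concrete, the sketch below illustrates (in Python) one way an evaluator might represent the six levels together with the kinds of evidence that could be collected at each. It is offered as an illustration only: the level descriptions follow the list above, while the example evidence sources are assumptions introduced for the purpose of this sketch rather than recommendations drawn from the cited studies.

from dataclasses import dataclass

@dataclass
class EvaluationLevel:
    number: int             # position in the combined framework
    focus: str              # what is evaluated at this level
    example_evidence: list  # hypothetical data sources, for illustration only

framework = [
    EvaluationLevel(1, "participants' satisfaction with the program",
                    ["end-of-program survey"]),
    EvaluationLevel(2, "participants' beliefs about and approaches to teaching",
                    ["pre- and post-program teaching inventory"]),
    EvaluationLevel(3, "participants' teaching performance",
                    ["peer observation notes", "teaching portfolio"]),
    EvaluationLevel(4, "students' perceptions of teaching performance",
                    ["student evaluation of teaching surveys"]),
    EvaluationLevel(5, "students' learning",
                    ["assessment results", "unit pass rates"]),
    EvaluationLevel(6, "the culture of the institution",
                    ["policy analysis", "interviews with heads of department"]),
]

# Print the evaluation plan, one line per level of the framework.
for level in framework:
    print(f"Level {level.number}: {level.focus} -> {', '.join(level.example_evidence)}")

Laying the levels out in this way makes explicit that each level demands its own evidence sources and, as the following paragraphs argue, its own time frame.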
While these frameworks are helpful in identifying what to evaluate, a number of contextual
factors must also be considered. The time frame in which change can be expected will vary
depending on what is being measured. For example, participant satisfaction can be identified
during, immediately after or some time following the TPP. However, identifying changes in
teaching practices or in student learning would be expected to take considerably longer to
become evident. Ramsden (2003) estimates that it can take five to ten years after changes to teaching practice before evidence of improvement to the student learning experience can be expected. This position is supported by Knight's (2006) investigation of the effectiveness of preparation programs, in which past and current participants of a TPP were surveyed. Past participants tended to rate the program as more effective than current participants did, suggesting that further experience and reflection over time are integral to the ultimate success of the programs. Furthermore, other factors such as teaching experience, the duration of the teaching preparation program, the history and context surrounding the formation of initial conceptions of teaching (Postareff, 2007) and institutional architecture and climate (Trowler & Bamber, 2005) can all influence the impact of TPPs.
Frameworks for the evaluation of TPPs must also consider the purpose or intended outcomes of
the programs, which vary considerably. Longer, intensive, more formal programs tend to focus
on building understanding and capacity in terms of pedagogical approaches appropriate to
learners in higher education. Shorter courses and workshops tend to have a single intention such
as providing orientation, disseminating information or instructing in particular skills. Lewis (1996) suggested that, in general, there were three main intentions of professional development activities: to initiate instructional improvement, to facilitate organisational improvement and to foster the personal development of participants. Gilbert and Gibbs (1998) identified five theoretical
frameworks or models that commonly inform the purposes of teaching preparation programs.
These change models include:
1. Behavioural Change - changing classroom teaching behaviour;
2. Development Change - changing from a focus on teacher and subject, to the subject and
student (passive), to teacher and student (active) and finally to students (independent);
3. Conceptual Change - linking the teacher's conceptions of teaching to their teaching
intentions and the strategies they use;
4. Reflective Practice - development of teachers as reflective practitioners;
5. Student Learning - changing focus from teaching to student learning, encompassing
students' approaches to learning, students' perceptions of their learning environments and
their learning outcomes.
McLean, Cilliers and Van Wyk (2008) take a broader approach, identifying the five main purposes of TPPs as being: to orient new staff to the policies, processes and academic culture of the institution; to develop specific skills in line with institutional priorities; to enhance teaching and learning; to develop the scholarship of teaching; and to develop educational leadership. Nickols (2000) views professional development as a management tool, identifying 14 purposes of professional development, none of which relates to student learning. More recently, Grasgreen (2010) adopted a more institutional focus, claiming that the purpose behind the surge of interest in encouraging participation in teaching certificate courses is investment in future staff and quality assurance.
Table 1 lists a number of studies that are indicative of those investigating the impact of TPPs
in one or more of the domains of change. Table 1 also identifies the different types of
programs and their different purposes and outcomes. As the outcomes from these various
models and types of TPPs differ depending on their purpose, there would be little sense in
applying the same measures of effectiveness and impact to them all. The challenge, then, is to determine a framework of indicators which is adaptable to the various contexts and purposes. The Preparing Academics for Teaching in Higher Education (PATHE) project, which articulated principles and developed a framework for the evaluation of foundations of university teaching programs, also acknowledged the importance of clarity of purpose and contextual issues in designing and applying evaluation frameworks (Bowie et al., 2009).
Indicators of Impact
The last twenty-five years have heralded significant changes in higher education institutions, many of which have been directed at improving the quality of educational experiences for students. A consequence of this has been the push to provide evidence of efficiency, effectiveness and improved quality in teaching and learning (Guthrie & Neumann, 2007). Academic development units, which have a central role in advancing the quality of teaching and learning within institutions, are also increasingly being expected to demonstrate the effectiveness of their TPPs. While Gray and Radloff (2008) suggest that this will necessarily rely on external evaluation and validation, with benchmarking and self-regulation as starting points (p. 104), it is also important for institutions to gather a variety of data from different sources to develop a comprehensive picture of the impact of all types of TPPs offered (Stefani, 2011; Poole & Iqbal, 2011).
Although evaluations of academic development programs have been a part of the higher
education landscape for some time (Carroll, 1980; Abbott, Wulff & Szego, 1989; Weimer &
Lenze, 1997; Rust, 2000; Adamson & Duhs, 2004), the question of when and how best to
measure the impact of academic development programs remains largely unanswered. If we understand the term impact to refer to final or longer-term effects, influences or changes resulting from a project or program of activities then, in an ideal situation, indicators of impact would be developed concurrently with academic programs, enabling not just the collection of baseline data related to intended outcomes, but the ongoing collection of data. The more common practice, however, is that institutions use a range of post-program measures or indicators of effectiveness or satisfaction. Cave et al. (1997) identified three kinds of indicators:
1. Simple indicators are usually expressed in the form of absolute figures and are intended to
provide a relatively unbiased description of a situation or process.
2. Performance indicators differ from simple indicators in that they imply a point of
reference; for example, a standard, objective, assessment, or comparison, and are therefore
relative rather than absolute in character.
3. General indicators are commonly externally driven and are not indicators in the strict
sense; they are frequently opinions, survey findings or general statistics. (p. 9)
Higher education institutions tend to use performance indicators to monitor their own
performance, which not only assists in the evaluation of their operations but also enables
them to collect data for external audits, and government accountability and reporting
processes (Rowe, 2004). Guskey and Sparks (2002) argued that the most important qualities
of professional development can be grouped into three categories: content characteristics,
process variables and context characteristics, which should form the basis of the development
of indicators. There is, however, considerable agreement that four types of indicators are
suitable: Input, Process, Output, and Outcome (Cave, Hanney & Kogan, 1991; Carter, Klein
& Day, 1992; Borden & Bottrill, 1994; Richardson, 1994; Chalmers, 2008; Shavelson, 2010).
These can be more broadly categorised as Quantitative and Qualitative indicators.
Quantitative indicators are based on numerical assessments of performance and are typified by Input and Output indicators. Input indicators refer to the human, physical and financial resources dedicated to particular programs, while Output indicators refer to the measurable results of the programs. Qualitative indicators use non-numerical assessments of performance and include Process and Outcome indicators. Process indicators reveal how programs are delivered within the particular context, referring to policies and practices related to learning and teaching, performance management and professional development of staff, quality of curriculum and the assessment of student learning, and quality of facilities, services and technology (Chalmers, 2008, p. 12). Outcome indicators focus on the quality of provision, satisfaction levels and the value added from learning experiences.
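By way of illustration only, the sketch below encodes this four-way typology and its broad quantitative/qualitative grouping; the example indicators are hypothetical and are not drawn from the cited frameworks.

from dataclasses import dataclass

# Input and Output indicators rest on numerical assessment; Process and
# Outcome indicators rest on non-numerical judgements of quality.
QUANTITATIVE_TYPES = {"Input", "Output"}

def assessment_kind(indicator_type):
    return "quantitative" if indicator_type in QUANTITATIVE_TYPES else "qualitative"

@dataclass
class Indicator:
    description: str  # hypothetical example indicator
    type_: str        # "Input", "Process", "Output" or "Outcome"

examples = [
    Indicator("staff hours and budget devoted to a TPP", "Input"),
    Indicator("alignment of TPP delivery with teaching and learning policy", "Process"),
    Indicator("number of staff completing the TPP each year", "Output"),
    Indicator("participant accounts of changed teaching practice", "Outcome"),
]

for indicator in examples:
    print(f"{indicator.type_} ({assessment_kind(indicator.type_)}): {indicator.description}")

Grouping indicators in this way also anticipates the caution raised below: because each type captures something different, no one type should be relied on alone.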
There is increasing commentary on the validity of the various types of indicators (Guthrie &
Neumann, 2006; Blackmur, 2007; Rutkowski, 2007; Coates, 2007; Ramsden, 1991). Burke et al. (2002) and Chalmers (2008) comment on the limitations of Input and Output indicators, which tend to generate statistics revealing how much or how many but saying little about quality, and which may actually inhibit the analysis of teaching and learning processes, experiences and interactions which could be more enlightening. Outcome and Process indicators do provide information about quality, but because they are more difficult to measure and often produce tentative results they are used less frequently (Romainville, 1999). Other studies
have concluded that an emphasis on Output or Outcome indicators over Input and Process
indicators is likely to result in unbalanced information which could lead to inaccurate
conclusions about provision (Borden & Bottrill, 1994; Burke & Minassians, 2001). This is at
odds with the prevalence of an accountability focus on inputs, outputs and systems, rather
than processes and outcomes which are seen as more ‘troublesome’ indicators of quality and
therefore avoided (Cave et al., 1990).
Despite the reservations, there is general agreement that the use of Process indicators is
appropriate and useful for generating information related to teaching and learning in higher
education (DEEWR Report, 2008). The information generated then needs to be interpreted
and contextualised with data provided from the other types of indicators. Collectively these
indicators can provide a comprehensive picture of the quality of teaching and learning
activities suggesting that no one type should be privileged over another (Chalmers, p. 7) and
indeed, multiple sources of both quantitative and qualitative data will avoid erroneous
interpretations being made (Canadian Education Statistics Council, 2006; Rojo, Seco,
Martinez & Malo, 2001). Guthrie & Neumann (2006) also emphasise that indicators must be
understood as interrelated and linked and should only be interpreted in light of contextual
information concerning institutional operation, and with the purpose for which the
information is being used, made explicit.
Because the data resulting from the use of indicators are typically diverse, the process of determining 'fitness for purpose' is extremely important and requires extensive consultation (Cabrera, Colbeck & Terenzini, 2001). This raises questions about the integrity of indicators
and measures of effectiveness applied broadly to make judgements on quite discrete
programs. The selection or development of indicators of impact should take account of the
intended outcomes of TPPs, the context of provision, the reliability of the data to be
collected, the need for an efficient and informative process and the importance of varied data
types and sources.
The most common sources of data used in studies of the impact of TPPs include pre- and post-inventories, surveys, questionnaires, interviews, document analyses, comparative analyses, multivariate analyses, conceptual analyses, phenomenography and peer observation (Tight, 2003, p. 397). Specifically, these may include staff self-assessments, student surveys of
satisfaction, changes in student performance, inventories of teaching or learning approaches,
teaching awards and peer assessment. Questions have been raised about the validity of the
data generated by these quantitative and qualitative techniques. Quantitative methods can be
problematic if the data generated from a range of programs are aggregated, since aggregation might obscure the effectiveness of a particular program and limit the validity of the conclusions (McArthur et al., 2004). Kane et al. (2002) signal a potential problem with the use of
questionnaires or multiple-choice-type inventories about teacher conceptions and beliefs
claiming that they limit the thinking of participants and may reflect a self-fulfilling prophecy rather than reality. Richardson (1996) noted that these methods are too
constraining and may not validly represent teachers' beliefs (p. 107). McArthur drew a similar conclusion, citing the example of a study which made comparisons between teachers
who had completed a Graduate Certificate in Higher Education and those who had not. While
the quantitative data suggested that there was little difference between the groups on a range
of measures, the qualitative data revealed a number of subtle yet significant differences.
McDowell (1996), however, suggested that interviews and surveys can also be problematic in that participants might in fact respond with what they consider to be 'right' answers, making
it difficult to draw meaningful conclusions (p. 140).
Gardner et al. (2008) support the use of what they term 'soft' indicators as evidence of impact
when ‘hard’ indicators such as improved learning, higher standards and changed behaviour
are difficult to measure (p. 96). ‘Soft’ indicators include real life stories...bottom up evidence
and anecdotes (Campbell et al., 2007, p. 22) and although they may be considered untidy
(Gardner, p. 97) they nevertheless provide a useful source of evidence in situations where
complex causal relationships make it difficult to draw conclusions. Gardner's study, which focused on indicators of the impact of an educational intervention, identified media references,
publications, presentations, ‘good news stories’, project cameos, short anecdotal quotes and
accounts of engagement within the community as having value in the absence of more
precise quantitative data (p. 101). Similar indicators of the impact of TPPs were identified in a study by Light et al. (2009), which collected evidence from participants using an Approaches to Teaching Inventory, teaching project reports detailing actual changes implemented, and post-program interviews. Based on this evidence they reported that as a result of the TPP
teachers had:
1. Redesigned assessment in their units to better engage students in learning required
concepts, in one instance reducing the failure rate in the unit from 28 per cent to 5.5 per
cent;
2. Formed strong ongoing relationships with colleagues they met through the program, and
continued developing their teaching practice through dialogue or collaboration;
3. Drawn on their TPP experiences and subsequent improvements in teaching in preparing
successful promotion applications;
4. Been nominated for and/or received teaching awards;
5. Published journal articles on work that grew out of the TPPs; and
6. Pursued further study in teaching and learning in higher education (p.107).
The report of the European Science Foundation (2010), The Impact of Training for Teachers in Higher Education, supported the use of such indicators, stating that there is a need for reflective or 'thick' studies (Day 3, para 3) which will provide a different view from
quantitative studies which tend to have a single focus and attempt to define causal
relationships. Qualitative studies are able to give voice to successful interventions, why they
work in particular contexts and, perhaps more importantly, to reveal unintended outcomes.
Asmar (2002), in reporting on institution-wide changes in the culture surrounding teaching, learning and research, details how university-wide staff-student forums were used to enrich the quantitative data collected through a modified CEQ, and how collectively the data were used not only to challenge faculties to think about changing their teaching behaviour, but also to re-examine their beliefs about teaching in a research-intensive environment.
Kreber (2011) supports the use of comprehensive case studies which use thick descriptions of context and careful attention to the particulars of the situation in the evaluation of TPPs (p. 55). Similarly, Sword (2011) reported on a longitudinal case study which involved the collection of a range of documents over time, providing much richer evidence of change following participation in a TPP than the more usual question of 'what did you find most helpful about this program?' (p. 128). Trigwell (2010), cited in The Impact of Training for Teachers in Higher Education, agrees that frameworks of indicators tend to neglect a focus on the question of why courses have an impact, whether it be subtle or obvious. Making such a determination requires what Pawson and Tilley (2004) describe as realist evaluation (see also Pawson & Tilley, 1997 for realistic evaluation), which poses questions such as 'What works for different individuals or groups?' and 'In what contexts and ways does it work?' rather than the more usual question of 'Has this program worked?' (p. 2). Drawing a distinction
between program intentions and the reality of their delivery, they suggest that it is important
to recognise that programs do not work in the same way for all participants. Furthermore,
programs are embedded in social systems which can either enhance or limit the extent to
which they benefit participants. Innovative pedagogical approaches can be presented and
inspirational stories of success shared, but whether individuals ultimately take up the
challenge of changing their practice depends on their capacity, their circumstances, the
support of colleagues and their perception of institutional recognition of their endeavours.
Pawson and Tilley caution that considerable variation should be expected in the pace, extent and depth of change which individuals might demonstrate following participation in TPPs. Clearly this limits the value of using quantitative measures to determine the impact of TPPs. The unique quality of realist evaluation is that an end verdict of success or failure is not its intent; rather, it seeks to develop a clearer understanding of how programs might lead to diverse effects, or what Weiss and Bucuvalas (1980) term 'enlightenment' rather than 'political arithmetic'. They acknowledge that this position may not satisfy those who want to 'see' results and remain sceptical when the response to their questions of 'Did that work?' and 'Do the
results justify the means?’ is ‘That depends’. Although the conclusions drawn from realist
evaluations might be partial and provisional they are nonetheless useful since they can lead
to more tightly focused programs in the future (Pawson & Tilley, 2004, p. 16).
Burke and Modaressi (2000) raise the issue of consultation in their review of the introduction of state-level performance indicators in higher education in the US. They found that where
there was extensive consultation prior to the introduction of performance indicators there was
a greater commitment to their long term use, unlike the resistance experienced where
consultation was minimal. Given the disparate views about the value of the various types of
indicators it seems that any meaningful instrument to measure the impact of TPPs should
include both quantitative and qualitative indicators, reflect the intended outcomes of the
programs, take account of institutional context and engage the various stakeholders.
Training and Development
Many of the issues related to demonstrating the effectiveness of teaching preparation
programs are not unique to the field of academic development. The business sector is equally
concerned with the effectiveness of training and development initiatives. During the 1990s a
great deal of attention was directed to the quality of training in an attempt to explain what,
when, why and for whom training was most effective (Machin & Fogarty, 1997). This
coincided with significant changes in the workplace, many precipitated by rapid
technological advances which created an unprecedented demand for training in the use of
new technologies (Thayer, 1997). More recently organisations have engaged in training and
development activities in an attempt to improve their performance and competitiveness
(Cromwell & Kolb, 2004). Training employees is seen as one of the most effective methods
of improving productivity, communicating organisational goals, retaining valued employees
and managing change in a competitive market situation (Rothwell & Kolb, 1999; Barrett & O'Connell, 2000; Arthur et al., 2003; Bassi & McMurrer, 2008).
Despite these perceived benefits, organisations are often reluctant to invest in training
programs without evidence of satisfactory returns for their investment. Generally speaking,
for training to be deemed beneficial, individuals participating in the training need to take new
knowledge back to the workplace and apply what they have learned (Hatala & Fleming,
2007; Wang & Wilcox, 2006). The effective and continued application of the knowledge and
skills gained by trainees is known as transfer of training. The reluctance to commit resources
to training programs is not unfounded in light of the number of studies concluding that the
rate of transfer of training is poor, with estimates ranging from as low as 10 per cent to an optimistic 50 per cent take-up rate (Broad & Newstrom, 1992; Kirwan & Birchall, 2006;
Burke & Hutchins, 2007; Blume et al, 2010). Consequently, while much of the early
evaluation of training programs centred on feedback to the providers, what is of greater
interest to organisations today is whether trainees have developed new understandings, skills
or attitudes and the extent of transfer of these into the workplace. Therefore much of the
research in the field has focused on attempts to isolate the factors which facilitate the transfer
of training.
Transfer of training is said to have occurred when there is evidence of a change in work behaviour as a result of training interventions (Foxon, 1993, p. 130). However, attempts to define transfer in terms of post-training implementation are fraught with difficulty because they suggest that transfer happens at a point in time and is absolute, when in fact transfer may be partial at first and develop over time. Such definitions also assume that the learning can be identified, quantified and measured. In response to these issues Foxon developed a five-step model, comprising the intention to transfer, initiation, partial transfer, conscious maintenance and unconscious maintenance, and suggests that this is a more realistic portrayal of what happens as learners experiment with new skills (p. 131). Attempts to treat transfer of learning as a product fail to identify many of the nuances of transfer and to assess which skills have been tried, when, with what degree of success and why others have not been used (Foxon, 1994, p. 1).
In reality the transfer of learning into the workplace is a complex process and many factors
have been identified as having an impact on it (Marx, 1986; Georges, 1988; Broad &
Newstrom, 1992; Burke & Hutchins, 2007). These tend to be clustered around the themes of
motivation levels and trainee characteristics, training design and delivery and workplace
environment (Holton et al., 2000; Cheng & Ho, 2001). Some studies classify these more broadly as input factors (program design, the working environment and trainee characteristics), outcome factors (scope of learning expected and the retention of learning) and conditions of transfer (relevance of the training to the work context and supervisor support) (Baldwin & Ford, 1988; Foxon, 1993). More complex models designed to explain the effectiveness of training take a more systemic approach, pointing to the relationship between the context before and after training, trainee characteristics, organisational variables and whether the focus of the training was cognitive, skills-based or interpersonal (Cannon-Bowers et al., 1995). Despite these variations there is considerable agreement that the factors influencing transfer are learner characteristics, training design and delivery, and organisational climate.
Learner Characteristics
A number of studies have investigated the effect of motivational factors on the transfer of
learning and identify the importance of trainees’ attitudes, interest, values and expectations
(Noe, 1986; Baldwin & Magjuka, 1991; Tannenbaum, Mathieu, Salas & Cannon-Bowers, 1991; Foxon, 1993; Cheng & Ho, 2001; Pugh & Bergin, 2006; Pham et al., 2010). It has also been found that motivation to learn will be increased by a positive pre-training attitude, by training being mandatory and by expectations of post-training accountability. This suggests that initiatives which enhance pre- and post-training motivation are worthy of consideration by organisations investing in training programs.
More recently, Grossman and Salas (2011) reported, in their meta-analysis of the sometimes contradictory research findings, that trainees with higher cognitive abilities and self-efficacy who are highly motivated and see the utility of training are more likely to transfer their learning. While it is not suggested that organisations should select only employees who display these characteristics for training, there may nevertheless be lessons for aligning recruitment practices with the goals of the organisation (Kirwan & Birchall, 2006).
Training Design and Delivery
The extent of training transfer also appears to vary as a function of the training design and
delivery method, the skill of the trainer and the target skill or task. In terms of program
design, Goldstein & Ford (2002) suggest that organisations which conduct a needs analysis to
inform the design, development, delivery and evaluation of training programs are more likely
to deliver an effective training program. In their meta-analysis of training design and evaluation, Arthur et al. (2003) found that training programs which adopt a mixed-method approach incorporating lectures and practical tasks appear to be effective. Despite their poor image, however, lectures have been shown to be effective as a method of training for specific skills, suggesting that web-based training might also be effective (p. 242).
Lyso et al. (2011) investigated the use of action learning projects in training and development and reported that the approach was effective for the transfer of learning, particularly when immediate supervisors also participated in the project. Other studies have found that incorporating goal-setting and self-management strategies into training programs facilitates the transfer of learning (Tziner, Haccoun & Kadish, 1991; Burke & Hutchins, 2007; Robbins & Judge,
2009). This is in line with studies which found that training which is conducted in a realistic
setting, is designed to include behaviour modelling and encourages the anticipation of
problems, is conducive to the transfer of learning (Kraiger, 2003; Taylor, 2005; Keith &
Frese, 2008; Grossman & Salas, 2011).
The timing of training programs has also been identified as an important consideration, since
the benefits of many training programs are inhibited by the lack of, or delay in, opportunity to
apply the new knowledge and skills (Clarke, 2002; Arthur et al, 2003; Burke & Hutchins,
2007). Training which builds in both opportunity and time for participants to develop new
skills in the authenticity of the workplace is likely to result in higher levels of transfer than
programs which are delivered in isolation from context and need (Cromwell & Kolb, 2004).
Other studies have focused on the importance of follow-up for the transfer of training,
asserting that the training program should not be an end in itself, but part of an ongoing
process of development which is supported by the trainer and supervisor, emphasising the
importance of the organisational context for effective training (Velada et al, 2007; Baldwin et
al, 2009).
Organisational Factors
A number of studies have investigated the significance of the work environment to the
transfer of learning. A positive transfer climate is one in which trainees are reminded,
encouraged and rewarded for using new skills, are assisted when having difficulty and are
supported by supervisors and peers (Clarke 2002; Holton et al., 2003; Colquitt et al, 2006;
Gilpin-Jackson & Busche, 2007; Blume et al, 2010). Consequently organisations which
embed cues, incentives, feedback methods and collegial support into the work culture are
more likely to maximise the benefits of training programs. In particular, where supervisors
are encouraging role models and the training is valued in the context of the work
environment, trainees are more likely to adopt new skills.
The importance of support appears to be underestimated in organisations, with a number of studies suggesting that the perceived lack of supervisor support is a significant barrier to the transfer of learning (Broad & Newstrom, 1992; Richey, 1992; Cromwell & Kolb, 2004; Chiaburu & Marinova, 2005; Salas, 2006; Salas & Stagl, 2009). Peer support has also been reported as influencing the transfer of training (Hawley & Barnard, 2005). The research suggests that the critical elements of peer support are the observation of others performing new skills, coaching and feedback while attempting new skills, and opportunities to share ideas and experiences (Chiaburu & Marinova, 2005; Hawley & Barnard, 2005; Lim &
Morris, 2006; Nijman et al., 2006; Gilpin-Jackson & Busche, 2007). In terms of feedback, the important characteristics have been identified as the nature (positive or negative), the
frequency, the source and the helpfulness of the feedback (DeNisi & Kluger, 2000; Maurer et
al., 2002; Hatala and Fleming, 2007; Van den Bossche, Segers & Jansen, 2010).
Lyso et al. (2011) also found that the work environment has the potential to be a significant inhibitor of the transfer of learning. Their study concluded that although individual participants reported benefits for their own practice, a year after the training program there was no evidence of impact of the training on the organisation, as a result of obstacles in the workplace (p. 210). This is not a new problem: Marx (1986) suggested that during training, organisations should address the possible difficulties which might be experienced back in the workplace so that trainees can develop coping mechanisms. This highlights the importance of bridging the gap between the training and work environments to maximise the benefits of training programs. One way of doing this is to incorporate action planning into training courses so that strategies to facilitate the transfer of learning can be embedded in the work environment.
Measuring Effectiveness of Training
Effects of training can be analysed in terms of impact on individuals, organisations and national performance, with the first two dominating the literature on measuring effectiveness. Studies into the effectiveness of training programs show that, despite newer models being available (Day, Arthur & Gettman, 2001; Kraiger, Ford & Salas, 1993), the dominant form of evaluation of programs is based on Kirkpatrick's (1975) four-level model (reaction, learning, behaviour, results).
Reaction criteria, which often use self-report measures, represent trainees’ affective and
attitudinal responses to the training program. Learning criteria measure the learning outcomes
of training, usually through a testing procedure, but are not measures of job performance.
Behaviour criteria on the other hand are measures of actual on-the-job performance and can
be used to identify the impact of training on actual work performance, usually determined
using supervisor assessment (Arthur et al, 2003). Results criteria often involve an assessment
of the dollar value gained from specific training.
Despite the lack of correlation between participant reactions and measures of learning and behaviour change, reaction measures are the most commonly used measures of training effectiveness, with as many as 78 per cent of organisations adopting this form of measurement (Van Buren & Erskine, 2002, p. 239). However, how trainees feel about or whether they like a training program does not elucidate what they learned, how their behaviour or performance was influenced or what the effect was on the organisation (Colquitt, LePine & Noe, 2000; Arthur, Tubre, Paul & Edens, 2003). It is also interesting to note that when reaction criteria are used there is a trend to more positive results than when the other levels of the model are used (Van Buren & Erskine, 2002), suggesting that organisations which rely solely on this first level for evaluation do not have a complete picture of the success of their training initiatives.
While a number of studies have used other evaluation models, many have methodological problems in that they focus on quite specific programs with little generalisation possible (Doucougliagos & Sgro, 2000), and others lack a strategy to isolate the training effect (Bartel, 2000). One approach which seems to have a degree of credibility is utility analysis, which relies on an analysis of the competencies that have been developed and transferred in the workplace as a result of training, rather than on the impact of training on organisational results (Phillips & Schirmer, 2008). This approach is particularly appropriate for assessing the
influence of training on soft skills. The findings of several studies which have used utility
analysis indicate that trainee job classification has the largest effect on program impact
(Honeycutt et al., 2001; Collins & Holton, 2004; Powell & Yalcin, 2010).
Despite the number of studies in the training and development literature which focus on the
impact of training on organisational performance, there is some doubt about the scientific validity of the studies (Thang et al., 2010). While most organisations invest in training because they believe that higher performance will result, the theoretical framework underpinning a relationship between training and organisational performance has been the subject of some debate (Alliger et al., 1997; Kozlowski et al., 2000; Fey et al., 2000; Faems et al., 2005; Zwick, 2006). Some frameworks are based on the notion that improvement in individual performance will result in improved organisational performance (Devanna et al., 1984), while others point to the importance of policies (Guest, 1987) or the relationship between HR practices and organisational strategies as fundamental to organisational outcomes (Wright & McMahon, 1992). More recent studies focus on the theoretical models of needs assessment, training design and evaluation strategies required to affect organisational outcomes (Kozlowski & Klein, 2000). In general, what is common throughout
these studies is an acknowledgement of the difficulty in demonstrating the link between
individual training and organisational performance, congruent with the findings in relation to
the impact of TPPs on student learning.
Conclusion
While the literature paints a picture of widespread and diverse TPP related activities in higher
education in Australia and overseas, a great deal of uncertainty surrounds the question of
whether these activities are having an impact. In Australia this uncertainty has been fuelled
by the Bradley Report on Higher Education which revealed a worrying decline in student
satisfaction with some aspects of their learning experiences (Bradley et al., 2008). This has
generated a degree of reservation about the effectiveness of TPPs on teachers, students and
the wider academic community at a time when teaching and learning in higher education are
under scrutiny amid calls for accountability (Stefani, 2011). This lack of confidence is perhaps not surprising, given the absence of a commonly understood framework for evaluation. Kreber (2011) points to the difficulty of developing such a framework when most academic development outcomes are intangible and, rather than being end points in themselves, are part of the process of becoming, which demands considerable investment of self by participants (p.
54). The changes which might occur as a result of participation in TPPs are designed to
unfold slowly over time rather than be observable at a point in time (Sword, 2011). The
implication of this is that evaluations of academic activity should include a focus on process
not just outputs, and the alignment between process and the intended outcomes of the
program. The very fact that academic development deals with human responses suggests that
attempts to measure outputs or outcomes will be challenging if not impossible.
Notwithstanding the inherent difficulties in evaluating the effectiveness of TPPs, some conclusions can be drawn from the literature which are relevant to this project. The studies cited indicate that it is possible to gather evidence related to changes in teachers' conceptions of teaching, in teachers' behaviour, in teaching approaches, and in student engagement and approaches to learning. It is also possible to examine the context of provision in terms of the institutional architecture and enhancement culture. The evaluation
process should involve the systematic collection of a wide range of data and, while not privileging one form over another, it is acknowledged that rich qualitative data can provide
deeper understanding of the impact of TPPs than might be possible from quantitative
measures alone. Bamber’s (2008) review of the literature related to the evaluation of TPPs
supports this position and raises the question of whether the notion of measurement is even
appropriate. She argues that academic developers should gather and interpret a broad
spectrum of data within their local context – a long-term, complex process avoiding
simplistic attempts to link professional development and student learning directly (p. 108).
The complexity brings with it uncertainty about the effectiveness of TPPs and it may be that
causal proof should not be an expectation within the accountability agenda. The literature
also supports contextualised evaluation, highlighting that while large scale multi institutional
evaluations might provide information on general trends and issues related to academic
development, they are not always relevant to individual institutions (Knight, 2006; Prosser et al., 2006; Bamber, 2008).
There is also evidence in the literature that changes in teacher beliefs and behaviour and
student learning approaches occur over time, suggesting that evaluation should be
longitudinal (Ho, 2000; Rust, 2000; Meiers & Ingvarson, 2003; Knight, 2006; Hanbury,
Prosser and Rickinson, 2008; Cilliers & Herman, 2010). Academic developers need to
identify the kinds of information that would constitute evidence of change over time and then
systematically collect this year in and year out, making it possible to track changes which
might be small, yet significant. Decisions about what kinds of evidence to collect might be
informed by the particular model of change which underpins the TPP. For example, some studies have been predicated on the belief that changes in teachers' conceptions of teaching are necessary before changes in teacher behaviour towards more student-oriented approaches can be expected (Ho, 2000). This demands an evaluation methodology which reveals participants' conceptions of teaching, their teaching practices and their students' approaches to study over time (Bamber, 2008). In a reflective practice model of change, relevant data might be generated from structured reflection, including journal writing, interviews with small groups or document analysis, to provide evidence of the achievement of the desired
outcomes of TPPs. Where the scholarship of teaching and learning underpins change,
academic developers may seek data derived from teaching portfolios, promotion applications,
grant applications and publications. Given the substantial support for viewing institutional
architecture and climate as a fundamental agent of change, academic developers may
interrogate the level of decentralised support through interviews with heads of
faculty/department, tracking of collaborative initiatives and questionnaires related to
participant perceptions of support.
The large body of literature around the evaluation of TPPs, while at times contradictory, nevertheless provides a starting point for the conceptualisation and development of an evaluation framework which addresses the challenges of relevance, adaptability, rigour and inclusivity.
References
Abbott, R. D., Wulff, D. H. & Szego, C. K. (1989). Review of research on TA training. In J. D. Nyquist, R. D. Abbott & D. H. Wulff (Eds.), New directions for teaching and learning: teaching assistant training in the 1990s. San Francisco: Jossey-Bass.
Adamson, L. & Duhs, R. (2004). Spreading good practice or struggling against resistance to
change: how are we doing? Issues from a Swedish case. Paper presented at the Ninth
Annual SEDA Conference for Staff and Educational Developers, Birmingham, 16–17
November.
Åkerlind, G. (2007). Constraints on academics’ potential for developing as a teacher. Studies
in Higher Education, 32: 1, 21–37.
Ako Aotearoa (2010). Tertiary practitioner education training and support: Taking stock. Ako
Aotearoa – The National Centre for Tertiary Teaching Excellence, Wellington, NZ.
Retrieved 27 October 2010 from http://akoaotearoa.ac.nz/download/ng/file/group-4/takingstock---tertiary-practitioner-education-training-and-support.pdf
Allern, M. (2010). Developing scholarly discourse through peer review of teaching. Paper
presented at the Designs for Learning 2nd International Conference - Towards a new
conceptualization of learning. Stockholm University.
Alliger, G. M., Tannenbaum, S. I., Bennett, W., Traver, H. & Shortland, A. (1997). A
meta-analysis on the relations among training criteria. Personnel Psychology, 50,
341–358.
Arthur, W., Bennett, W., Edens, P. S. & Bell, S. T. (2003). Effectiveness of training in
organisations: A meta-analysis of design and evaluation features. Journal of Applied
Psychology, Vol. 88, No. 2, 234–245.
Arthur, W., Tubre, T. C., Paul, D. S. & Edens, P. S. (2003). Teaching effectiveness: The
relationship between reaction and learning criteria. Educational Psychology, 23, 275–285.
Asmar, Christine (2002). Strategies to enhance learning and teaching in a research-extensive
university. International Journal for Academic Development, 7: 1, 18–30.
Australian Council for Educational Research (2008). Australian Survey of Student
Engagement (AUSSE) Australasian University Scale Statistics Report, January.
Baldwin, T. T. & Ford, J. K. (1988). Transfer of training: A review and directions for future
research. Personnel Psychology, 41, 63–105.
Baldwin, T. T. & Magjuka, R. J. (1991). Organisational training and signals of importance:
Linking pretraining perceptions to intentions to transfer. Human Resource Development
Quarterly, 2(1), 25–36.
Bamber, Veronica (2008). Evaluating lecturer development programmes: received wisdom or
self knowledge? International Journal for Academic Development, 13: 2, 107–116.
Barnett, R. & Coate, K. (2005). Engaging the curriculum in higher education. Maidenhead,
Open University Press.
Bartel, A. P. (2000). Measuring the employer's return on investments in training: evidence
from the literature. Industrial Relations, 39, 502–24.
Barrett, A. & O'Connell, P. J. (2001). Does training generally work? The returns to
in-organization training. Industrial and Labor Relations Review, 54: 3, 647–62.
Bassi, L. J. & McMurrer, D. P. (2008). Toward a human capital measurement methodology.
Advances in Developing Human Resources, 10: 6, 863–81.
Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education,
19(2), 151–161.
Bennett, S. & Barp, D. (2008). Peer observation – a case for doing it online. Teaching in
Higher Education, 13(5), 559–570.
Blackmur, D. (2007). When is an auditor really a regulator? The metamorphosis of The
Australian Universities Quality Agency: Implications for Australian higher education public
policy. University of Queensland Law Journal, 26(1), 143–157.
Blaisdell, Muriel L. & Cox, Milton D. (2004). Midcareer and senior faculty learning
communities: Learning throughout faculty careers. New Directions for Teaching and
Learning, No. 97, 137–148.
Blume, B. D., Ford, J. K., Baldwin, T. T. & Huang, J. L. (2010). Transfer of training: a
meta-analytic review. Journal of Management, 39, 96–105.
Borden, V. & Bottrill, K. (1994). Performance indicators: Histories, definitions and methods.
New Directions for Institutional Research, 82, 5–21.
Boreham, N. (2004). A theory of collective competence: challenging the neo-liberal
individualisation of performance at work. British Journal of Educational Studies, 52 (1), 5–
17.
Boud, D. (1999). Situating academic development in professional work: Using peer learning.
International Journal for Academic Development, 4: 1, 3–10.
Bouteiller, D. & Cossette, M. (2007). Apprentissage, Transfert, Impact, Une Exploration des
Effets de la Formation dans le Secteur du Commerce de Détail (Montréal, CA: PSRA,
CIRDEP, UQAM/DEFS, HEC Montreal). Cited in Y. Chochard and E. Davoine. (2011).
Variables influencing the return on investment in management training programs: a utility
analysis of 10 Swiss cases. International Journal of Training and Development, 15: 225–
243.
Bowie, C., Chappell, A., Cottman, C., Hinton, L. & Partridge, L. (2009). Evaluating the
impact of foundations of university teaching programs: A framework and design principles.
Sydney, ALTC.
Boyer, E. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ,
Carnegie Foundation for the Advancement of Teaching.
Boyer Commission. (1998). Reinventing undergraduate education: A blueprint for America’s
research universities. Stony Brook, NY, State University of New York. Retrieved 29
August 2011, http://naples.cc.sunysb.edu/Pres/boyer.nsf/
Bradley, D., Noonan, P., Nugent, H. & Scales, B. (2008). Review of Australian higher
education: Final report. Canberra, Commonwealth Government. Retrieved 23 May, 2011,
from
http://www.deewr.gov.au/highereducation/review/pages/reviewofaustralianhighereducationr
eport.aspx
Breda, J., Clement, M. & Waeytens, K. (2003). An interactive training programme for
beginning faculty: Issues of implementation. International Journal for Academic
Development, 8:1, 91–104.
Brew, A. (Ed.) (1995). Directions in staff development. Buckingham, UK, Society for
Research into Higher Education and Open University Press.
Brew, Angela (2007). Evaluating academic development in a time of perplexity.
International Journal for Academic Development, 12: 2, 69–72.
Broad, M. L. & Newstrom, J. W. (1992). Transfer of training. Massachusetts,
Addison-Wesley Publishing Company Inc.
Buckridge, M. (2008). Teaching portfolios: Their role in teaching and learning policy.
International Journal for Academic Development, 13: 2, 117–127.
Burke, L. & Hutchins, H. (2007). Training transfer: an integrative literature review. Human
Resource Development Review, 6, 263–96.
Burke, J.C. & Modarresi, S. (2000). To keep or not to keep performance funding: Signals
from stakeholders. Journal of Higher Education, 71(4), 432–453.
Byrne, J., Brown, H. & Challen, D. (2010). Peer development as an alternative to peer
observation: a tool to enhance professional development. International Journal for
Academic Development, 15: 3, 215–228.
Cabrera, A.F., Colbeck, C. L. & Terenzini, P. T. (2001). Developing performance indicators
for assessing classroom teaching practices and student learning: The case of engineering.
Research in Higher Education, 42(3), 327–352.
Campbell, S., Benita, S., Coates, E., Davies, P. & Penn, G. (2007). Analysis for policy:
Evidence-based policy in practice. London, Government Social Research Unit.
Cannon-Bowers, J. A., Salas, E., Tannenbaum, S. I. & Mathieu, J. E. (1995). Toward
theoretically based principles of training effectiveness: A model and initial empirical
investigation. Military Psychology, 7, 141–164.
Carini, Robert M., Kuh, George D. & Klein, Stephen P. (2006). Student engagement and
student learning: Testing the linkages, Research in Higher Education, Vol. 47, No. 1,
February.
Carroll, J. G. (1980). Effects of training programs for university teaching assistants: a review
of empirical research. Journal of Higher Education, 51(2), 167–183.
Carter, N., Klein, R. & Day, P. (1992). How organisations measure success: the use of
performance indicators in government. London, Routledge.
Cave, M., Hanney, S., Henkel, M. & Kogan, M. (1997). The Use of Performance Indicators in
Higher Education: The Challenge of the Quality Movement (3rd ed.). London, Jessica
Kingsley.
Chalmers, D. (2008). Indicators of university teaching and learning quality. ALTC, Sydney.
Chalmers, D. & Thomson, K. (2008). Snapshot of teaching and learning practice in
Australian higher education institutions. Carrick Institute for Learning and Teaching in
Higher Education Ltd, Sydney.
Cheng, E. & Ho, D. (2001). The influence of job and career attitudes on learning motivation
and transfer. Career Development International, 6: 1, 20–7.
Chiaburu, D. & Marinova, S. (2005). What predicts skills transfer? An exploratory study of
goal orientation, training self efficacy and organizational supports. International Journal of
Training and Development, 9: 2, 110–23.
Cilliers, F. J. & Herman, N. (2010). Impact of an educational development programme on
teaching practice of academics at a research-intensive university. International Journal for
Academic Development, 15: 3, 253–267.
Clark, G., Blumhof, J., Gravestock, P., Healey, M., Jenkins, A. & Honeybone, A. (2002).
Developing new lecturers: The case of a discipline-based workshop. Active Learning in
Higher Education, 3(2), 128–144.
Clarke, N. (2002). Job/work environment factors influencing training transfer within a human
service agency: some indicative support for Baldwin and Ford's transfer climate construct.
International Journal of Training and Development, 6, 146–62.
Clegg, Sue, McManus, Mike, Smith, Karen & Todd, Malcolm J. (2006). Self-development in
Support of Innovative Pedagogies: Peer support using email. International Journal for
Academic Development, 11: 2, 91–100.
Coaldrake, P. & Stedman, L. (1998). On the brink: Australia’s universities confronting their
future. Australia, University of Queensland Press.
Coates, H. (2007). Excellent measures precede measures of excellence. Journal of Higher
Education Policy and Management, 29 (1), 87–94.
Collins, D. B. & Holton, E. F. (2004). The effectiveness of managerial leadership
development programs: a meta-analysis of studies from 1982 to 2001. Human Resource
Development Quarterly, 15: 2, 217–48.
Colquitt, J.A., LePine, J.A. & Noe, R.A. (2000). Toward an integrative theory of training
motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied
Psychology, 85, 678–707.
Cox, M. (2004). Introduction to faculty learning communities. New Directions for Teaching
and Learning, No. 97, Spring.
Cranton, P. & Carusetta, E. (2002). Reflecting on teaching: The influence of context.
The International Journal for Academic Development, 7 (2), 167–176.
Crawley, A. L. (1995). Faculty development programs at research universities: Implications
for senior faculty renewal. To Improve the Academy, 14, 65–90.
Cromwell, S. & Kolb, J. (2004). An examination of work-environment support factors
affecting transfer of supervisory skills training to the workplace. Human Resource
Development Quarterly, 15: 4, 449–71.
Crosling, G., Heagney, M. & Thomas, L. (2009). Improving student retention in higher
education: Improving teaching and learning. Australian Universities’ Review, Vol. 51, No.
2, 9–18.
Day, E. A., Arthur, W. & Gettman, D. (2001). Knowledge structures and the acquisition of a
complex skill. Journal of Applied Psychology, 86, 1022–1033.
Dearn, J., Fraser, K. & Ryan, Y. (2002). Investigation into the provision of professional
development for university teaching in Australia: A discussion paper. Retrieved 16 October
2010 from www.dest.gov.au/NR/rdonlyres/D8BDFC55-1608-4845-B1723C2B14E79435/935/unLteaching.pdf
DeNisi, A. & Kluger, A. (2000). Feedback effectiveness: can 360-degree feedback appraisals
be improved? Academy of Management Executive, 14: 1, 129–39.
Devanna, M. A., Fombrun, C. J. & Tichy, N. M. (1984). A framework for strategic
human resource management. In C. J. Fombrun, N. M. Tichy & M. A. Devanna
(Eds), Strategic human resource management. New York, Wiley, pp. 33–55
Devlin, M. (2008). Research challenges inherent in determining improvement in university
teaching. Issues in Educational Research, 18(1). Retrieved 14 October 2010 from
http://www.iier.org.au/iier18/devlin.pdf
Diamond, R. M. (1988). Faculty development, instructional development, and organizational
development: Options and choices. In E.C. Wadsworth (Ed.), A handbook for new
practitioners. Professional & Organizational Development Network in Higher Education
(POD Network). Stillwater, OK, New Forums Press, pp. 9–11.
Diamond, R. M. & Adam, B. A. (Eds). (1995). The disciplines speak: Rewarding the
scholarly, professional, and creative work of faculty. Washington, DC, American
Association for Higher Education.
Dill, D. (1992). Quality by design: Towards a framework for academic quality management.
In J.C. Smart (Ed.), Higher Education: Handbook of Theory and Research, Vol. VIII. New
York, Agathon Press, 37–83.
Dill, D. (1999). Academic accountability and university adaptation: the architecture of an
academic learning organisation. Higher Education, 38: 2, 127–154.
Donald, J. G. (1977). The search for teaching competencies in higher education. In A. B.
Smith (Ed.), Faculty Development and Evaluation Newspaper, 3 (2), 11–14.
Donnelly, R. (2006). Exploring lecturers' self-perception of change in teaching practice.
Teaching in Higher Education, 11: 2, 203–217
Donnelly, R. (2008). Lecturers' self-perception of change in their teaching approaches:
Reflections on a qualitative study. Educational Research, 50: 3, 207–222.
Doucouliagos, C. & Sgro, P. (2000). Enterprise return on a training investment. Adelaide:
National Centre for Vocational Education.
Edwards, V., McArthur, J. & Earl, S. (2004). Postgraduate certificates in teaching and
learning: Different insights into their impact. Paper presented at the Inaugural All Ireland
Society for Higher Education (AISHE) Conference, Trinity College Dublin. Retrieved 17
October 2010 from http://www.aishe.org/conf2004/proceedings/
Eggins, H. & Macdonald, R. (Eds.) (2003). The Scholarship of Academic Development.
Buckingham, Open University Press.
Eley, M. (2006). Teachers conceptions of teaching, and the making of specific decisions in
planning to teach. Higher Education, 51, 191–214.
Entwistle, N. & Walker, P. (2000). Strategic alertness and expanded awareness within
sophisticated conceptions of teaching. Instructional Science, 28, 335–361.
Ericsson, K.A. & Simon, H.A. (1984). Protocol analysis: Verbal reports as data. Cambridge,
MA, MIT Press.
European Science Foundation (2010). The impact of training for teachers in higher education.
Standing Committee for Social Sciences.
Ewell, P. T. (2002). An analysis of relationships between NSSE and selected student learning
outcomes measures for seniors attending public institutions in South Dakota. National
Center for Higher Education Management Systems, Boulder, CO.
Faems, D., Sels, L., DeWinne, S. & Maes, J. (2005). The effect of individual HR
domains on financial performance. International Journal of Human Resource
Management, 16 (5), 676–700.
Feger, S. & Arruda, E. (2008). Professional learning communities: Key themes from the
literature. RI, Brown University.
Ferman, T. (2002). Academic professional development practice: What lecturers find
valuable. International Journal for Academic Development, 7, 146–158.
Fey, C. & Björkman, I. (2001). Effect of human resource management practices on
MNC subsidiary performance in Russia. Journal of International Business Studies,
32(1), 59–75.
Foxon, M. J. (1993). A process approach to the transfer of training. Part 1: The impact of
motivation and supervisor support on transfer maintenance. The Australian Journal of
Educational Technology, 9(2), 130–143.
Foxon, M. (1994). A process approach to the transfer of training. Part 2: Using action
planning to facilitate the transfer of training. Australian Journal of Educational Technology,
10(1), 1–18. Retrieved 5 September 2011 from
http://www.ascilite.org.au/ajet/ajet10/foxon.html
Gaff, J. G. & Simpson, R. D. (1994). Faculty development in the United States. Innovative
Higher Education, 18 (3), 167–176.
Gappa, J. M. & Leslie, D. W. (1993). The invisible faculty: Improving the status of part-timers
in higher education. San Francisco, Jossey-Bass.
Gardner, John, Holmes, Bryn & Leitch, Ruth (2008). Where there is smoke, there is (the
potential for) fire: Soft indicators of research and policy impact, Cambridge Journal of
Education, 38: 1, 89–104.
Georges, J. C. (1988). Why soft skills training doesn't take. Training and Development
Journal, 25(4), 42–47.
Gibbs, G. (1996). Supporting educational development within departments. International
Journal for Academic Development, 1(1), 27–37.
Gibbs, G. (2003). Researching the training of university teachers: Conceptual frameworks
and research tools. In H. Eggins & R. Macdonald (Eds), The scholarship of academic
development. Buckingham, Open University Press.
Gibbs, G. & Coffey, M. (2004). The impact of training of university teachers on their
teaching skills, their approach to teaching and the approach to learning of their students.
Active Learning in Higher Education, 5: 1, 87–100.
Gibbs, G., Habeshaw, T. & Yorke, M. (2000). Institutional learning and teaching strategies in
English higher education. Higher Education, 40: 351–372.
Giertz, B. (1996). Long-term effects of a programme for teacher training. International
Journal for Academic Development, 1: 2, 67–72.
Gilbert, A. & Gibbs, G. (1998). A proposal for an international collaborative research
programme to identify the impact of initial training on university teachers. Research and
Development in Higher Education, 21, 131–143.
Gilpin-Jackson, Y. & Bushe, G. R. (2007). Leadership development training transfer: a case
study of post-training determinants. Journal of Management Development, 26, 980–1004.
Ginns, Paul, Kitay, Jim & Prosser, Mike (2008). Developing conceptions of teaching and the
scholarship of teaching through a Graduate Certificate in Higher Education. International
Journal for Academic Development, 13: 3, 175–185.
Godfrey, J., Dennick, R. & Welsh, C. (2004). Training the trainers: do teaching courses
develop teaching skills? Medical Education, 38: 844–847.
Goldstein, I. L. & Ford, J. K. (2002). Training in organizations: Needs assessment,
development, and evaluation (4th ed.). Belmont, CA, Wadsworth.
Gosling, D. (2008). Educational development in the United Kingdom. Report for the heads of
educational development group. London, Heads of Educational Development Group
(HEDG) UK. Retrieved 17 October 2010 from
http://www.hedg.ac.uk/documents/HEDG_Report_final.pdf
Gosling, D. & O’Connor, K. (2006). From peer observation of teaching to review of
professional practice (RPP): A model for continuing professional development.
Educational Developments, 7(3), 1–4.
Gow, L. & Kember, D. (1993). Conceptions of teaching and their relationship to student
learning. British Journal of Educational Psychology, 63: 20–33.
Grasgreen, A. (2010). Preparing professors to teach, Inside Higher Ed. Retrieved 8 August
2011 from www.highered.com/2010/10/15/mit
Gray, K. & Radloff, A. (2008). The idea of impact and its implications for academic
development work. International Journal for Academic Development, 13: 2, 97–106.
Grossman, R. & Salas, E. (2011). The transfer of training: what really matters.
International Journal of Training and Development, 15, 2, 103–120.
Guest, D. E. (1987). Human resource management and industrial relations. Journal of
Management Studies, 24(5), 503–521.
Guskey, T. (2002). Does it make a difference? Evaluating professional development.
Educational Leadership, 59(6), 45–51.
Guskey, T. & Sparks, D. (2002). Linking professional development to improvements in
student learning. Paper presented at the annual meeting of the American Educational
Research Association, New Orleans, LA (April).
Guthrie, J. & Neumann, R. (2007). Economic and non-financial performance indicators in
universities. Public Management Review, 9(2), 231–252.
Hammersley-Fletcher, L. & Orsmond, P. (2005). Reflecting on reflective practices within
peer observation. Studies in Higher Education, 30(2), 213–224.
Hanbury, A., Prosser, M. & Rickinson, M. (2008). The differential impact of UK accredited
teaching development programmes on academics' approaches to teaching. Studies in Higher
Education, 33: 4, 469–483.
Harvey, L., Moon, S. & Geall, V. (1997). Graduates work: Organisational change and
students' attributes. Birmingham, Centre for Research into Quality, UCE.
Hatala, J. & Fleming, P. (2007). Making transfer climate visible: utilizing social network
analysis to facilitate transfer of training. Human Resource Development Review, 6: 1, 33–
63.
Hawley, J. & Barnard, J. (2005). Work environment characteristics and the implications for
training transfer: a case study of the nuclear power industry. Human Resource Development
International, 8: 1, 65–80.
Healey, Mick (2000). Developing the scholarship of teaching in higher education: A
discipline-based approach. Higher Education Research & Development, 19: 2, 169–189.
Healey, M. & Jenkins, A. (2003). Discipline-based educational development. In H. Eggins &
R. Macdonald (Eds), The scholarship of academic development. Buckingham, The Society
for Research into Higher Education and Open University Press.
Herbert, D. M. B., Chalmers, D. & Hannam, R. (2002). Enhancing the training, support and
management of sessional teaching staff. Paper presented at the Australian Association for
Research in Education, University of Queensland, Brisbane.
Hicks, M., Smigiel, H., Wilson, G. & Luzeckyj, A. (2010). Preparing academics to teach in
higher education. Final Report. Sydney, Australian Learning and Teaching Council.
Hicks, O. (1999). Integration of central and departmental development – reflections from
Australian universities. International Journal for Academic Development, 4: 1, 43–51.
Hilton, A. (n.d.). Comparative indicators in educational development (CIED). A series of
short institutional case studies arising from the CIED project. Learning and Teaching
Support Network Generic Centre.
Ho, A., Watkins, D. & Kelly, M. (2001). The conceptual change approach to improving
teaching and learning: An evaluation of a Hong Kong staff development programme.
Higher Education, 42, 143–169.
Hollins, E. R., McIntyre, L. R., DeBose, C., Hollins, K. S. & Towner, A. (2004). Promoting a
self-sustaining learning community: Investigating an internal model for teacher
development. International Journal of Qualitative Studies in Education, 17 (2), 247–267.
Holt, D. (2010). Strategic leadership for institutional teaching and learning centres:
Developing a model for 21st century. Sydney, Australian Learning and Teaching Council.
Holton, E. F., Bates, R. & Ruona, W. E. A. (2000). Development of a generalized learning
transfer system inventory. Human Resource Development Quarterly, 11: 4, 333–60.
Honeycutt, E. D., Karande, K., Attia, A. & Maurer, S. D. (2001). A utility based framework
for evaluating the financial impact of sales force training programs. Journal of Personal
Selling and Sales Management, 21: 3, 229–38.
Horsburgh, Margaret (1999). Quality monitoring in higher education: the impact on student
learning. Quality in Higher Education, 5: 1, 9–25.
Huber, G. (1991). Organizational learning: The contribution process and the literatures.
Organization Science 2 (1), 88–115.
Huber, M. & Hutchings, P. (2005). The advancement of learning. San Francisco, Jossey-Bass.
Jenkins, A. (1996). Discipline-based educational development. International Journal for
Academic Development, 1: 1, 50–62.
Jenkins, A. (1997). Twenty-one volumes on: Is teaching valued in geography in higher
education? Journal of Geography in Higher Education, 21(1), 5–14.
Kane, R., Sandretto, S. & Heath, C. (2002). Telling half the story: A critical review of
research on the teaching beliefs and practices of university academics. Review of
Educational Research, 72: 177–228.
Keith, N. & Frese, M. (2008). Effectiveness of error management training: a meta-analysis.
Journal of Applied Psychology, 93, 59–69.
Kember, D. (1997). A reconceptualization of the research into university academics’
conceptions of teaching. Learning and Instruction, 7(3): 255–275.
Kember, D. & Kwan, K. (2000). Lecturers’ approaches to teaching and their relationship to
conceptions of good teaching. Instructional Science, 28: 469–490.
Kinchin, I. (2005). Evolving diversity within a model of peer observation at a UK university.
BERA (http://www.leeds.ac.uk/ed/educol/).
Kirkpatrick, D. L. (1975). Techniques for evaluating training programs. In D. L. Kirkpatrick
(Ed.), Evaluating training programs. Alexandria, VA, ASTD.
Kirkpatrick, D. L. (1996). Great ideas revisited. Training and Development, 50 (1), 54–59.
Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels (2nd ed.). San
Francisco, CA: Berrett-Koehler.
Kirwan, C. & Birchall, D. (2006). Transfer of learning from management development
programmes: testing the Holton model. International Journal of Training and Development,
10, 252–68.
Klein, S. P., Kuh, G. D., Chun, M., Hamilton, L. & Shavelson, R. (2005). An approach to
measuring cognitive outcomes across higher education institutions. Research in Higher
Education 46(3), 251–276.
Klenowski, V., Askew, S., & Carnell, E. (2006). Portfolios for learning, assessment and
professional development in higher education. Assessment & Evaluation in Higher
Education, 31(3), 267–286.
Knapper, C. (2003). Editorial. International Journal for Academic Development, 8: 1, 5–9.
Knight, P. (2006). The effects of postgraduate certificates: A report to the project sponsor and
partners. The Institute of Educational Technology, The Open University. Retrieved 02
November 2010 from http://kn.open.ac.uk/public/document.cfm?docid=8640
Kogan, M. (2002). The role of different groups in policy-making and implementation:
Institutional politics and policy-making. In P. Trowler (Ed.), Higher education policy and
institutional change: Intentions and outcomes in turbulent environments Buckingham,
SRHE/Open University Press, pp. 46–63.
Kozlowski, S., Brown, K., Weissbein, D., Cannon-Bowers, J. & Salas, E. (2000). A
multi-level approach to training effectiveness. In K. Klein, & S. Kozlowski (Eds),
Multi-level theory, research, and methods in organizations: Foundations, extensions,
and new directions. San Francisco, Jossey-Bass, pp. 157–210.
Kozlowski, S. W. J. & Klein, K. J. (2000). A multilevel approach to theory and
research in organizations: Contextual, temporal, and emergent processes. In K. J.
Klein & S. W. J. Kozlowski (Eds), Multi-level theory, research, and methods in
organizations: Foundations, extensions, and new directions (3–90). San Francisco,
Jossey-Bass.
Kraiger, K. (2003). Perspectives on training and development. Handbook of Psychology:
Volume12, Industrial and Organizational Psychology. Hoboken, NJ, Wiley, pp. 171–92.
Kraiger, K., Ford, J. K. & Salas, E. (1993). Application of cognitive, skill-based, and
affective theories of learning outcomes to new methods of training evaluation. Journal of
Applied Psychology, 78, 311–328.
Kreber, C. (2011). Demonstrating fitness for purpose: Phronesis and authenticity as
overarching purposes. In L. Stefani (Ed.), Evaluating the effectiveness of academic
development: Practice and principles. NY, Routledge.
Kreber, C. & Brook, P. (2001). Impact evaluation of educational development programmes.
International Journal for Academic Development, 6: 2, 96–108.
Kuh, G. D. (2003). What we’re learning about student engagement from NSSE. Change,
35(2): 24–32.
Lewis, K. G. (1996). Faculty development in the United States: A brief history. International
Journal for Academic Development, 1: 2, 26–33.
Lewis, P. (1997). A framework for research into training and development. International
Journal of Training and Development, 1: 1, 1–7.
Light, G., Calkins, S., Luna, M. & Drane, D. (2009). Assessing the impact of a year-long
faculty development program on faculty approaches to teaching. International Journal of
Teaching and Learning in Higher Education, 20: 2, 168–181.
Lim, D. and Morris, M. (2006). Influence of trainee characteristics, instructional satisfaction,
and organizational climate on perceived learning and training transfer. Human Resource
Development Quarterly, 17: 1, 85–115.
Lindblom-Ylänne, Sari, Trigwell, Keith, Nevgi, Anne & Ashwin, Paul (2006). How
approaches to teaching are affected by discipline and teaching context. Studies in Higher
Education, 31: 3, 285–298.
Ling, P. (2009). Development of academics and higher education futures. Report Vol. 1.
Sydney, ALTC.
Lipsett, A. (2005). Lecturers bored by lessons in teaching. Times Higher Education
Supplement, (April 22), p.1 and p.10.
Lomas, L. & Kinchin, I. (2006). Developing a peer observation program with university
teachers. International Journal of Teaching and Learning in Higher Education, Vol.18, No.
3, 204–214.
Lycke, K. H. (1999). Faculty development: Experiences and issues in a Norwegian
perspective. International Journal for Academic Development, 4: 2, 124–133.
Lyso, I.H., Mjoen, K. & Levin, M. (2011). Using collaborative action learning projects to
increase the impact of management development. International Journal of Training and
Development, 15: 3, 210–224.
Machin, M. A. & Fogarty, G. J. (1997). The effects of self-efficacy, motivation to transfer,
and situational constraints on transfer intentions and transfer of training. Performance
Improvement Quarterly, 10(2), 98–115.
Manathunga, C. (2006). Doing educational development ambivalently: Applying postcolonial metaphors to educational development? International Journal for Academic
Development, 11, 19–29.
Martocchio, J. J. & Baldwin, T. (1997). The evolution of strategic organisational training. In
G. R. Ferris (Ed.), Research in personnel and human resource management, Vol. 15,
Greenwich, JAI, pp. 1–46.
Marton, F. & Booth, S. (1997). Learning and Awareness. New Jersey, Lawrence Erlbaum
Associates.
Marx, R. D. (1986). Self-managed skill retention. Training and Development Journal, 40(1),
54–57.
Maurer, T. J., Mitchell, D. R. D. & Barbeite, F. G. (2002). Predictors of attitudes toward a
360-degree feedback system and involvement in post-feedback management development
activity. Journal of Occupational and Organizational Psychology, 75, 87–107.
McAlpine, L., Oviedo, G. & Emrick, A. (2008). Telling the second half of the story: Linking
academic development to student experience of learning. Assessment & Evaluation in
Higher Education, 33(6), 661–673.
McAlpine, L. & Weston, C. (2000). Reflection: Issues related to improving professors’
teaching and students’ learning. Instructional Science, 28, 363–385.
McArthur, J., Earl, S. & Edwards, V. (2004). Impact of courses for university teachers.
Melbourne, AARE.
McDowell, L. (1996). A different kind of R&D? Considering educational research and
educational development. In: G. Gibbs (Ed.), Improving student learning: using research to
improve student learning. Oxford, Oxford Centre for Staff Development.
McLean, M., Cilliers, F. & Van Wyk, J. (2008). Faculty development: Yesterday, today and
tomorrow. Medical Teacher, 30(6), 585–591.
McMahan, M. & Plank, K. M. (2003). Old dogs teaching each other new tricks: Learning
communities for post-tenure faculty. Paper presented at the 23rd annual Lilly Conference
on College Teaching. Miami University: Oxford, Ohio.
McMahon, T., Barrett, T. & O’Neill, G. (2007). Using observation of teaching to improve
quality: Finding your way through the muddle of competing conceptions, confusion of
practice and mutually exclusive intentions. Teaching in Higher Education, 12(4), 499–511.
McCluskey de Swart, S. (2009). Learning fellows seminars: A case study of a faculty
development program using experiential learning theory to improve college teaching. PhD
Thesis, Case Western Reserve University.
Meiers, M. & Ingvarson, L. (2003). Investigating the links between teacher professional
development and student learning outcomes, Vol.1. Canberra, ACER.
Moon, J. (2004). Using reflective learning to improve the impact of short courses and
workshops. Journal of Continuing Education in the Health Professions, 24(1), 4–11.
Morrow, C., Jarrett, Q. & Rupinski, M. (1997). An investigation of the effect and economic
utility of corporate-wide training. Personnel Psychology, 50, 91–119.
Nickols, F. (2000). Evaluating training: There is no “cookbook” approach. Retrieved
August 13, 2009, from http://home.att.net/~nickols/evaluate.htm
Nickols, F. (2010). Leveraging the Kirkpatrick Model: Validation versus evaluation. Retrieved
24 May, 2011 from http://www.nickols.us/LeveragingKirkpatrick.pdf
Nijman, D., Nijhof, W., Wognum, A. & Veldkamp, B. (2006). Exploring differential effects of
supervisor support on transfer of training. Journal of European Industrial Training, 30: 7,
529–49.
Noe, R. A. (1986). Trainee attributes and attitudes: Neglected influences in training
effectiveness. Academy of Management Review, 11, 736–749.
Norton, L., Richardson, J., Hartley, J., Newstead, S. & Mayes, J. (2005). Teachers’ beliefs
and intentions concerning teaching in higher education. Higher Education, 50 (4), 537–571.
Ortlieb, E.T., Biddix, J. P. & Doepker, G.M. (2010). A collaborative approach to higher
education induction. Active Learning in Higher Education, 11, 108–118.
Pawson, R. & Tilley, N. (1997). Realistic Evaluation. London, Sage.
Pawson, R. & Tilley, N. (2004). Realist evaluation. Paper funded by British Cabinet Office.
Peseta, T. & Manathunga, C. (2007). The anxiety of making academics over: Resistance and
responsibility in academic development. In I. Morley (Ed.), The value of knowledge.
Oxford, Interdisciplinary Press.
Pham, N., Segers, M. & Gijselaers, W. (2010). Understanding training transfer effects from a
motivational perspective. Business Leadership Review, VII: III, July. Retrieved 14
November, 2011 from http://www.mbaworld.com/blr-archive/issues-73/5/index.pdf
Phillips, J.J. & Schirmer, F. (2008). Return on Investment in der Personal-Entwicklung, Der
5-Stufen-Evaluationsprozess (2nd ed.). Berlin, Springer-Verlag.
Piccinin, S., Cristi, C. & McCoy, M. (1999). The impact of individual consultation on student
ratings of teaching. International Journal for Academic Development, 4: 2, 75–88.
Piccinin, S. & Moore, J. (2002). The impact of individual consultation on the teaching of
younger versus older faculty. International Journal for Academic Development, 7: 2, 123–
134.
Poole, G. & Iqbal, I. (2011). An exploration of the scholarly foundations of educational
development. In J. C. Smart & M. B. Paulsen (Eds), Higher education: Handbook of theory
and research, 26, pp. 317–354.
Postareff, L., Lindblom-Ylänne, S. & Nevgi, A. (2007). The effect of pedagogical training on
teaching in higher education. Teaching and Teacher Education, 23, 557–571.
Powell, K. S. & Yalcin, S. (2010). Managerial training effectiveness, a meta-analysis 1952–
2002. Personnel Review, 39: 2, 227–41.
Prebble, T., Hargraves, H., Leach, L., Naidoo, K., Suddaby, G. & Zepke, N. (2004). Impact of
student support services and academic development programmes on student outcomes in
undergraduate tertiary study: a best evidence synthesis. Wellington, Ministry of Education.
Prosser, M., Rickinson, M., Bence, V., Hanbury, A. & Kulej, M. (2006). Formative
evaluation of accredited programmes. York, The Higher Education Academy.
Prosser, M. & Trigwell, K. (1999). Understanding teaching and learning: The experience in
higher education. Buckingham, Society for Research into Higher Education and the Open
University Press.
Pugh, K.J. & Bergin, D. (2006). Motivational influences on transfer. Educational
Psychologist. 41(3), 147–160.
Ramsden, P. (1991). A performance indicator on teaching quality in higher education: The
course experience questionnaire. Studies in Higher Education, 16, 129–150.
Ramsden, P. (2003). Learning to teach in higher education. (2nd ed.). London, Routledge.
Ramsden, P., & Martin, E. (1996). Recognition of good university teaching: Policies from an
Australian study. Studies in Higher Education, 21, 299–315.
Reid, A. (2002). Is there an “ideal” approach for academic development? In A. Goody & D.
Ingram (Eds), 4th World Conference of the International Consortium for Educational
Development, Perth, Organisational and Staff Development Services, UWA.
Report to DEEWR (2008). Review of Australian and international performance indicators
and measures of the quality of teaching and learning in higher education 2008. Sydney,
ALTC.
Richardson, J. T. E. (1994). A British evaluation of the Course Experience Questionnaire.
Studies in Higher Education, 19, 59–68.
Richey, R. C. (1992). Designing Instruction for the Adult Learner. London, Kogan Page Ltd.
Rindermann, H., Kohler, J. & Meisenberg, G. (2007). Quality of instruction improved by
evaluation and consultation of instructors. International Journal for Academic
Development, 12: 2, 73–85.
Robbins, S. P. & Judge, T. A. (2009). Organizational behavior. NJ, Pearson Prentice Hall.
Romainville, M. (1999). Quality evaluation of teaching in higher education. Higher
Education in Europe, 24(3), 414–424.
Rothwell, W. & Kolb, J. (1999). Major workforce and workplace trends influencing the
training and development field in the USA. International Journal of Training and
Development, 3, 44–53.
Rowe, K. (2004). Analysing & reporting performance indicator data: 'Caress’ the data and
user beware! Background paper for The Public Sector Performance & Reporting
Conference, under the auspices of the International Institute for Research
(IIR), Canberra: ACER.
Rust, C. (1998). The impact of educational development workshops on teachers' practice.
International Journal for Academic Development, 3:1, 72–80.
Rust, C. (2000). Do initial training courses have an impact on university teaching? The
evidence from two evaluative studies of one course. Innovations in Education and Teaching
International, 37: 3, 254–262.
Rutkowski, D. J. (2007). Converging us softly: How intergovernmental organizations
promote neoliberal educational policy. Critical Studies in Education, 48(2), 229–247.
Salas, E. & Stagl, K. C. (2009). Design training systematically and follow the science of
training. In E. Locke (Ed.), Handbook of Principles of Organizational Behaviour:
Indispensible Knowledge for Evidence-Based Management (2nd ed.). Chichester, John
Wiley & Sons, pp. 59–84.
Salas, E., Wilson, K., Priest, H. & Guthrie, J. (2006). Design, delivery, and evaluation of
training systems. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (3rd
ed.). NJ, John Wiley & Sons, pp. 472–512.
Senge, P. M. (2000). The academy as learning community: Contradiction in terms or
realizable future? In A. F. Lucas & Associates (Eds), Leading academic change: Essential
roles for department chairs. San Francisco, Jossey-Bass, pp. 275–300.
Shavelson, R. J. (2010). Measuring college learning responsibly: Accountability in a New
Era. California, Stanford University Press.
Shulman, L.S. (1986). Those who understand: knowledge growth in teaching. Educational
Researcher, Vol. 15, 4–14.
Shulman, L.S. (1987). Knowledge & teaching: foundations of the new reform, Harvard
Educational Review, 57 (1), 1–22.
Shulman, L. S. (1993). Teaching as community property. Putting an end to pedagogical
solitude. Change, 25 (6), 6–7.
Shulman, L. S. (2002). Making differences: A table of learning. Change, 34(6): 36–45.
Smith, Calvin & Bath, Debra (2003). Evaluation of a networked staff development strategy
for departmental tutor trainers: benefits, limitations and future directions, International
Journal for Academic Development, 8: 1, 145–158.
Smith, B. L. & Smith, M. J. (1993). Revitalizing senior faculty through statewide efforts. In
M. J. Finkelstein & M. W. LaCelle-Peterson (Eds), Developing Senior Faculty as
Teachers. New Directions for Teaching and Learning, No. 55. San Francisco, Jossey-Bass.
Southwell, D. & Morgan, W. (2010). Leadership and the impact of academic staff
development and leadership development on student learning outcomes in higher education:
A review of the literature. Sydney, ALTC.
Spafford Jacob, H. & Goody, A. (2002). Are teaching workshops worthwhile? In A. Goody &
D. Ingram (Eds.), Spheres of influence: Ventures and visions in educational development.
Perth: The University of Western Australia. Retrieved 9 October, 2010 from
http://www.csd.uwa.edu.au/ICED2002/publication/
Spronken-Smith, R. & Harland, T. (2009). Learning to teach with problem-based learning.
Active Learning in Higher Education, 10: 2, 138–153.
Stefani, L. (Ed.) (2011). Evaluating the effectiveness of academic development: Principles
and practice. New York, Routledge.
Stes, A., Clement, M. & Van Petegem, P. (2007). The effectiveness of a faculty training
programme: Long-term and institutional impact. International Journal for Academic
Development, 12: 2, 99–109.
Stoll, L., Bolam, R., McMahon, A., Wallace, M. & Thomas, S. (2006). Professional learning
communities: A review of the literature. Journal of Educational Change, 7 (4), 221–258.
Sullivan, L. L. (1983). Faculty development: A movement on the brink. College Board
Review, 127, 29–31.
Sword, H. (2011). Archiving for the future: A longitudinal approach to evaluating a
postgraduate certificate program. In L. Stefani (Ed.), Evaluating the effectiveness of
academic development: principles and practice. New York, Routledge.
Szulanski, G. (1996). Exploring internal stickiness: Impediments to the transfer of best
practice within the firm. Strategic Management Journal, 17 (Special Issue), 27–43.
Tannenbaum, S. L., Mathieu, J. E., Salas, E. & Cannon-Bowers, J. A. (1991). Meeting
trainees' expectations: The influence of training fulfilment on the development of
commitment, self efficacy, and motivation. Journal of Applied Psychology, 76(6), 759–769.
Taylor, K. (2006). Autonomy and self-directed learning: A developmental journey. In C.
Hoare (Ed.), Handbook of adult development and learning. Oxford, Oxford University
Press.
Taylor, P. J., Russ-Eft, D. F. & Chan, D. W. L. (2005). A meta-analytic review of behaviour
modeling training. Journal of Applied Psychology, 90, 692–709.
Thang, N. N., Quang, T. & Buyens, D. (2010). The relationship between training and firm
performance: A literature review. Research and Practice in Human Resource Management,
18(1), 28–45.
Thayer, P. W. (1997). A rapidly changing world: Some implications for training systems in
the year 2001 and beyond. In M. A. Quiñones & A. Ehrenstein (Eds), Training for a rapidly
changing workforce. Washington, DC, American Psychological Association, pp.15–30.
Thew, N. & Clayton, S. (2004). More bang for our buck? Using an impact assessment tool to
enhance teacher training programmes. Paper presented at the Ninth Annual SEDA
Conference for Staff and Educational Developers Birmingham.
Tight, M. (2004). Research into higher education: An a-theoretical community of practice?
Higher Education Research & Development, 23: 4, 395–411.
Toth, K. E. & McKey, C. A. (2010). Identifying the potential organizational impact of an
educational peer review program. International Journal for Academic Development, 15:1,
73–83.
Trigwell, K. & Prosser, M. (2004). Development and use of the approaches to teaching
inventory. Educational Psychology Review, 16, 409–426.
Trowler, P. (2002). Higher education policy and institutional change: Intentions and
outcomes in turbulent environments. Buckingham: SRHE/Open University Press.
Trowler, P. & Bamber, R. (2005). Compulsory higher education teacher training: Joined-up
policies, institutional architectures and enhancement cultures. International Journal for
Academic Development, 10: 2, 79–93.
Tziner, A., Haccoun, R. R. & Kadish, A. (1991). Personal and situational characteristics
influencing the effectiveness of transfer of training improvement strategies. Journal of
Occupational Psychology, 64,167–177.
Van Buren, M. E. & Erskine, W. (2002). The 2002 ASTD state of the industry report.
Alexandria, VA, American Society of Training and Development.
Van den Bossche, P., Segers, M. & Jansen, N. (2010). Transfer of training: The role of
feedback in supportive social networks. International Journal of Training and
Development, 14: 2, 81–94.
Vanhoof, J. & Van Petegem, P. (2007). Designing and evaluating the process of school
self-evaluations. Improving Schools, 14, 200–212.
Velada, R., Caetano, A., Michel, J. W., Lyons, B. D. & Kavanagh, M. J. (2007). The effects of
training design, individual characteristics and work environment on transfer of training.
International Journal of Training and Development, 11, 282–94.
Vescio, V., Ross, D. & Adams, A. (2008). A review of research on the impact of professional
learning communities on teaching practice and student learning. Teaching and Teacher
Education, 24, 80–91.
Viskovic, A. R. (2009). Survey of literature relating to tertiary teacher development and
qualifications. Wellington, NZ, Ako Aotearoa National Centre for Tertiary Teaching
Excellence.
Wahr, F. (2010). Attempting to decolonise academic development - designing an action
learning approach to integrate sustainability into the higher education curriculum. Paper
presented at the 8th World Congress on Participatory Action Research and Action Learning,
6–9 Sept, Melbourne, Australia.
Walker, P. & Entwistle, N. (1999). Conceptions of teaching and levels of understanding:
Emerging structures and shifting awareness. In C. Rust (Ed.), Proceedings of the 1998 6th
International Improving Student Learning Symposium: Improving students’ learning
outcomes. Oxford, Oxford Centre for Staff and Learning Development, pp. 309–318
Warhurst, R. P. (2006). We really felt part of something: Participatory learning among peers
within a university teaching-development community of practice. International Journal for
Academic Development, 11: 2, 111–122.
Weimer, M. (2007). Intriguing connections but not with the past. International Journal for
Academic Development, 12: 1, 5–8.
Weimer, M. & Lenze, L. F. (1997). Instructional interventions: a review of the literature on
efforts to improve instruction. In R. P. Perry & J. C. Smart (Eds), Effective teaching in
higher education: research and practice. New York, Agathon Press.
Weiss, C. & Bucuvalas, M. (1980). Social Science Research and Decision-making. Sage,
Newbury Park.
Weurlander, M. & Stenfors-Hayes, T. (2008). Developing medical teachers' thinking and
practice: Impact of a staff development course. Higher Education Research & Development,
27: 2, 143–153.
Woods, P. & Jeffery, B. (1996). Teachable Moments. Buckingham, Open University Press.
Wright, A. (1995). Teaching improvement practices: International perspectives. In A. Wright
(Ed.), Successful faculty development: Strategies to improve university teaching. Bolton,
Anker.
Wright, P. M. & McMahan, G. C. (1992). Theoretical perspectives for strategic human
resource management. Journal of Management, 18(2), 295–320.
Zwick, T. (2006). The impact of training intensity on establishments’ productivity. Labour
Economics, 11, 715–740.
Appendix B
Excerpt from UWA TPP Audit (page 1 of 4)
UWA: Type of Preparation — Grad Cert in Tertiary Teaching; Foundations of Uni T and L;
Introduction to University Teaching; Post Grad. Teaching Internship Scheme. The audit
records each program against the following features.
1. Location (a. Central Teaching Unit; b. Faculty/Department/School-based; c. Discipline-based).
All four programs are located in the Centre for the Advancement of T and L (CATL); the Grad
Cert is offered jointly with the Faculty of Education.
2. Number/Frequency (number offered per semester; number attending, e.g. approx. 23 per
year). Programs are offered either once per semester (Foundations; Introduction) or as one
intake per year (Grad Cert; Internship Scheme).
3. Duration (a. short, e.g. 1–3 hour workshops; b. intensive, 1–3 days; c. extended, e.g. over 1
semester or more). Grad Cert: extended over 1 yr. Foundations: intensive, 3 days and a 3 hr
on-line module, plus 6 follow-up sessions over the semester. Introduction: intensive, 3 x 1/2
day, plus extended, 5 seminars over the semester. Internship Scheme: intensive, 3 days, plus
extended, seminars over the semester.
4. Status (a. mandated/monitored; b. optional; c. accredited). Grad Cert: optional; accredited.
Foundations: new staff expected to complete. Introduction: optional; advanced standing into
the GCTT. Internship Scheme: candidates apply for limited places.
5. Participants (a. new staff; b. inexperienced staff; c. available to all staff). Entries include
post grads teaching 24 hrs in labs, tutes etc., and doctoral students with limited teaching
experience, who can apply for the Internship Scheme.
6. Format (a. face to face, approx. 90%; b. flexible/on-line, approx. 10%; c. submission of
work, e.g. assignments, completion of tasks, reflective statement, teaching/learning project,
portfolio development).
7. Intended outcomes/Impact. Teacher focused: a. develop pedagogical knowledge (pedagogy
on tertiary teaching; teaching to promote learning; intro to learning theory); b. encourage
reflective practice (and peer support); c. develop teaching approaches/skills/strategies
(digital technologies; assessment, measurement and learning; curriculum design; assessment
of learning; context specific teaching methods; teaching and communication skills; effective
questioning; managing groups; responding to student feedback; evaluation of teaching);
d. change teacher behaviour (developmental focus). Learner focused: e. improve student
learning (the students' perspective of T and L); f. engage students in learning (active
learning; cooperative learning); g. enhance student experience. Institutional focus:
h. policy/orientation; i. change institutional culture; j. leadership development.
8. Generic/discipline specific (a. generic teaching approaches; b. discipline specific delivery).
Each program combines generic content with discipline specific assignments, focus,
application of learning to discipline/context, or learning partners.
9. Evaluation (a. follow up of participants in the short term; b. institutional processes for
review of programs; c. participant evaluation). Entries include ongoing follow up during the
semester; ongoing monitoring; external review; ongoing learning partnerships encouraged;
formal cycles of review; evaluation conducted; and nil to date.
10. Institutional Climate (a. policy; b. recognition; c. promotion criteria aligned; d. support at
Faculty/School level).
Appendix C
Academic Professional Development Effectiveness Framework
Formal/Extended Programs
Program Level
[To what extent and in what ways do TPPs for academics have an effect on:]
Focus: Teacher knowledge, skills and practice
Input Indicators:
1. TPPs delivered by staff with appropriate qualifications and experience
2. The range and mode of TPPs is aligned with University guidelines on good teaching and staff needs
Process Indicators:
3. TPPs provide a pedagogical framework for understanding teaching and learning in higher education
4. Delivery of TPPs models teaching and learning strategies, resources and assessment practices which enhance the quality of teaching and learning
Output Indicators:
5. No. of completions of formal programs
6. TPP evaluations
Outcome Indicators:
7. Teacher perceptions of changes in their approach to teaching and learning following completion of TPPs, as evidenced by portfolio, improved student evaluations, teaching awards, peer review and self-reflection
8. Quality of teaching, as evidenced through promotion applications, PDRs etc., following completion of TPPs
9. Evidence of a student focused approach in course/teaching materials
Focus: Teacher reflective practice and scholarship of teaching and learning
Input Indicators:
10. TPPs align with institutional commitment to self-reflective practice and research informed teaching practices
Process Indicators:
11. TPPs encourage critical reflection on participants’ beliefs and practices regarding teaching, learning and assessment
12. TPPs incorporate research which informs teaching and learning in higher education
Outcome Indicators:
13. TPP participants report the use of student feedback when reviewing courses and teaching
Focus: Student engagement, learning experience
Input Indicators:
14. TPPs align with espoused priorities related to student learning experiences and engagement
Process Indicators:
15. TPPs draw on a framework of evidence based teaching and learning practices (e.g. HEA professional standards framework)
16. Student perceptions of teaching are incorporated into TPPs
Outcome Indicators:
17. Unit evaluations
Focus: Student approaches to learning
Input Indicators:
18. TPPs incorporate University graduate attributes
Process Indicators:
19. TPPs highlight the importance of relevant, authentic and inclusive assessment tasks
Outcome Indicators:
20. TPP participant perceptions of the quality of student assessment tasks
Institution Level
[To what extent and in what ways does institutional architecture and climate contribute to the effectiveness of TPPs?]
Focus: Policy
Input Indicators:
1. University policies and priorities recognise the role of TPPs in enhancing the quality of teaching and learning, e.g. by
a. requiring, and providing financial support for, the completion of a formal TPP for new academic appointments
b. recognising and rewarding teaching (through career progression, grants etc.)
c. faculty/dept recognition of staff participation in TPPs in workload formulas
Process Indicators:
2. PDR process and promotion criteria record/recognise completion of TPPs
Output Indicators:
3. Number and proportion of staff completing TPPs
4. Non-completion rates
5. Number and proportion of new appointments enrolled in TPPs
6. Number and proportion of staff completing TPPs who are nominated for Teaching Awards
7. Number and proportion of staff completing TPPs who receive Teaching Awards/Grants
Outcome Indicators:
8. Annual report of TPP activities
9. Satisfaction as reported through TPP evaluations
10. Periodic external review of program and benchmarking report to University Teaching and Learning Committee
Focus: Resourcing
Input Indicators:
11. University allocates adequate resources to TPP provision (funding, staff and facilities), e.g.
a. an adequate number and range of TPPs are planned
b. appropriately qualified and experienced staff appointed
Focus: Culture
Input Indicators:
12. TPPs delivered within a culture of supportive learning communities
Output Indicators:
13. Number and proportion of TPP participants attending teaching and learning events
Appendix D
Academic Professional Development Effectiveness Framework
Informal/Short/Ad Hoc Programs
Program Level
[To what extent and in what ways do TPPs for academics have an effect on:]
Focus: Teacher knowledge, skills and practice
Input Indicators:
1. TPPs responsive to audit of staff needs and institutional priorities (e.g. LMS)
2. Number and range of TPPs focused on knowledge, skills and strategies
3. Ongoing teaching development is an expectation of all staff
Process Indicators:
4. TPPs are varied and designed to meet staff and institutional needs
5. TPPs highlight and model a range of evidence-based teaching and learning strategies, resources and assessment techniques which can be adapted to discipline specific contexts
Output Indicators:
6. Number of workshops offered
7. Number and proportion of staff attending workshops
Outcome Indicators:
8. Workshop evaluations
9. Staff perceptions of their teaching skills following participation in TPPs
Focus: Teacher orientation/awareness of institutional policies, practices and support
Input Indicators:
10. Number and range of TPPs related to orientation/dissemination of university policies, processes and priorities related to teaching and learning
Process Indicators:
11. TPPs clarify institutional expectations/requirements regarding teaching and learning (e.g. assessment, academic conduct, grading, international students, supervision)
Output Indicators:
12. Number and proportion of staff attending TPPs
13. Range and scope of TPPs
Outcome Indicators:
14. Workshop evaluations
Focus: Student engagement and learning experience
Input Indicators:
15. Number of TPPs with a focus on student engagement and learning experience
Process Indicators:
16. Reviews of student experience inform TPPs
Output Indicators:
17. Number and proportion of staff attending TPPs
Outcome Indicators:
18. Unit evaluations
Institution Level
[To what extent and in what ways does institutional architecture and climate contribute to the effectiveness of TPPs?]
Focus: Policy
Input Indicators:
1. University policies and priorities recognise the role of TPPs in enhancing the quality of teaching and learning, e.g.
a) adoption of an appropriate model of TPP provision (central, distributed, faculty-based)
b) faculty/dept recognises staff participation in informal TPPs in workload formulas
c) recognition and reward of quality teaching
Process Indicators:
2. Range and mode of TPPs offered
Outcome Indicators:
3. Workshop evaluations
4. Periodic external review of offerings and benchmarking report to University Teaching and Learning Committee
Focus: Resourcing
Input Indicators:
5. University allocates adequate resources to TPP provision, e.g.
a) an adequate number and range of TPPs are planned
b) appropriately qualified and experienced staff are appointed
c) financial and in-kind support is available for various staff needs, e.g. sessional staff, post grad and clinical tutors
Output Indicators:
6. Number, timing, range and mode of programs offered
Outcome Indicators:
7. Annual Report of TPP activities
Appendix E
An introduction to the Academic Professional Development Effectiveness
Framework
These notes were prepared initially to support the teams participating in the trial of the draft APD Effectiveness
Framework and have now been modified to serve as an introduction to the use of the Framework.
Project Team
Leader: Professor Denise Chalmers, UWA: denise.chalmers@uwa.edu.au
Members: Professor Sue Stoney, ECU: s.stoney@ecu.edu.au
         Ms Veronica Goerke, Curtin University: V.Goerke@exchange.curtin.edu.au
         Dr Allan Goody, Curtin University: A.Goody@curtin.edu.au
         Dr Di Gardiner, UWA: di.gardiner@uwa.edu.au (Project Officer)
Understanding the Academic Professional Development Effectiveness Framework
The following notes provide a brief overview of the project, the development and structure of the APD
Effectiveness Framework and how it can be used. Each section concludes with a brief summary.
A. Introduction: Why develop an evaluation framework?
Professional development programs and activities to enhance teaching and learning have been a feature of
the academic culture of many higher education institutions throughout the world for more than 50 years. A
number of scoping studies (Gosling, 2008; Ling, 2009; Ako Aotearoa, 2010; Southwell and Morgan, 2010;
Stefani, 2011) capture the breadth and depth of provision, both nationally and internationally, revealing
diverse organisational structures and program delivery. There is also a growing body of research into
various types of programs (Prebble, 2004; Gibbs & Coffey, 2004; Knight, 2006; Hanbury et al, 2008;
Southwell and Morgan, 2010)1 documenting the varied and substantial teacher preparation offerings of
academic development units. What is less apparent in the literature is evidence of whether these activities
have had an impact on enhancing teaching, student satisfaction and learning experiences, or the
institutional architecture and culture which supports, rewards and recognises teaching.
In Australia, the Development of Academics and Higher Education Futures Report (Ling, 2009) revealed
consistency in that nearly all Australian universities have a central academic development unit, but a
significantly varied landscape of mission statements, organisational structure, provision, institutional
support, and resourcing. The Report also identified issues and made recommendations, one of which was
that academic developers need to engage in forms of evaluation which will indicate the impact of their
activities in the long term (p. 62). This was in line with Devlin’s (2008) view that the questions of whether or
not various teacher development interventions actually work and, if so, in what ways such interventions
influence skills, practices and foci, and/or ultimately lead to improved learning, remain largely unanswered
in higher education (p.15).
Addressing such questions is a timely challenge given the increasing attention directed to academic
development programs in higher education over recent years. Indeed throughout the last two decades a
government agenda of quality, value for money and enhanced participation has resulted in persistent
attention to the quality of higher education in Australia leading to a number of inquiries with subsequent
reports and recommendations (Ramsden, 2003, p. 233; Bradley et al, 2008; ACER, 2008). With greater
attention being paid to the quality of teaching in universities more broadly, and in individual
performance reviews and promotion more specifically, there are clear expectations that teaching staff will provide
evidence of the quality of their teaching and of ongoing participation in teacher development programs.
This, in turn, leads to questions of accountability and calls for academic developers to engage in systematic
evaluation of their programs to demonstrate that their activities are not just in line with institutional
strategic initiatives, but that they have improved student learning experiences (Brew, 2007, p. 69). We also
need the evidence of what ‘works’ so that the limited resources available are directed to the most
appropriate teaching preparation activities for each particular context. While on the surface this seems a
relatively straightforward matter there is, in fact, considerable debate about how to determine the impact
of academic development programs, what aspects to measure, which outcomes can be measured, how to
measure them and whether attempts to assess effectiveness can be conclusive. Furthermore, any
consideration of the impact of academic development programs can only be meaningful when
contextualised against the size and type of institution, the resources available and the intended outcomes,
adding to the complexity of the task.
The APD Effectiveness Framework is intended to be one tool to assist academic developers to evaluate the
effectiveness of their teaching preparation programs, to understand the factors influencing the
effectiveness of such programs on teacher beliefs and behaviours, student approaches to learning and
1 For more detail see the full Literature Review available at http://www.catl.uwa.edu.au/projects/tpp/effectiveness
institutional culture, and to enhance sector-wide understanding of the different purposes and impacts of
different types of teaching preparation programs for academics (TPPs).2
In brief: While the literature paints a picture of widespread and diverse TPP related activities in higher
education in Australia and overseas, a great deal of uncertainty surrounds the question of whether these
activities are having an impact. In Australia this uncertainty has been fuelled by the Bradley Report on
Higher Education which revealed a worrying decline in student satisfaction with some aspects of their
learning experiences (Bradley et al, 2008). This has generated a degree of reservation about the
effectiveness of TPPs on teachers, students and the wider institutional community and culture at a time
when teaching and learning in higher education are under scrutiny amid calls for accountability (Stefani,
2011). In an era of accountability TPPs are not immune from this scrutiny.
B. Development of the Framework: What principles underpin the APD Effectiveness Indicator
Framework?
Considerable scholarship and effort goes into planning and delivering TPPs which are supported by the
universities and faculties through centrally managed units and/or faculty and school based programs. The
range and types of TPPs and activities vary considerably from formally accredited programs such as
Graduate Certificates in Tertiary Teaching, to Foundations of University Learning and Teaching programs for
academics new to teaching, to less formal programs with incidental workshops run through a central unit
or instigated within faculties or departments. These might also include formal or informal peer review of
teaching, and processes and practices that encourage self-reflection and teacher discussions through
networks and communities of practice. Furthermore these programs, in their varied forms, are provided
on-shore, off-shore and on-line. The challenge in developing a framework was in not privileging one
particular type of TPP, and taking account of the variations in duration, mode and purpose of the various
types of programs. The Framework is based on a broad definition of TPPs to allow the investigation of the
effectiveness of the range of programs in varied contexts and is underpinned by the following principles:
1. The Framework should be relevant to the intentions of the formal and informal teaching preparation
programs offered by academic developers in Australian universities.
2. The Framework should be informed by the literature and studies relating to measuring the
effectiveness of TPPs.
3. The Framework should be adaptable to the variety of programs and contexts.
4. The Framework should be based on a clear understanding of the term effectiveness.
Principle 1: The Framework must be relevant to current practices
Prior to the development of the Framework an extensive audit of TPPs for Australian academics was
conducted, initially using each university's website. Subsequently each audit was sent to institutions for
checking, verification or amendment and return. The audits were then used to determine the types of
programs offered (e.g. formal or informal), the features of the programs (e.g. duration, status, participants,
mode, frequency etc.), the intended outcomes, and the degree of alignment with institutional architecture
(e.g. policy, resourcing etc.) and culture (e.g. recognition and reward of quality teaching etc.). The findings
of the audit informed the design of the Framework.
Principle 2: The Framework must be informed by research
2 Throughout this report 'TPP' is used to refer to teaching preparation programs for academics. However the title of the Framework uses the more commonly accepted term Academic Professional Development to avoid confusion with programs which prepare teachers for the school sector.
Prior to the development of the draft Framework an extensive review of the literature was undertaken. The
focus was on summarising the findings of the studies that have investigated the effectiveness and impact of
the range of teaching preparation programs for academics. The findings were then used as a basis for
identifying the aspects of TPPs which could reasonably be expected to be evaluated. In general the main
findings which are reflected in the design of the draft Framework are:
 The difference in effectiveness of long versus short programs.
 The difference in effectiveness between discipline based and generic programs and between in-situ, peer learning etc. and large group programs.
 The lack of evidence supporting a direct relationship between TPPs and student learning outcomes (although there is strong research-based evidence that good teaching has positive effects on student approaches to learning and that teaching preparation programs can have positive effects on the quality of teaching).
 The evidence which indicates that it is possible to identify changes in conceptions of teaching; teacher knowledge, skills and understanding; curriculum design (in particular the alignment of assessment with course outcomes); the use of varied and student focused learning strategies; engagement with reflective practice and the scholarship of teaching; student engagement, learning experiences and approaches to learning, and institutional architecture and culture.
 The evidence that both qualitative and quantitative measures are necessary to understand how and why programs are effective.
Principle 3: The Framework must be adaptable to the variety of programs and contexts
While most universities have dedicated academic development units, there is diversity in terms of the
position of units within the university hierarchy, their size, structure, resourcing and responsibilities, and
the number and range of TPPs which are offered. This has implications for the design of the Framework and
the degree to which one matrix can be adaptable to all contexts and activities. The Framework has
therefore been designed to incorporate a collection of possible indicators from which academic developers
can choose those relevant to their programs and context. Three areas of Outcomes Focus have been used
to enable the Framework to be adaptable to the variety of programs offered. These, informed by the two
previous principles, are:
 Teacher: Knowledge, skill, behaviour, reflective practice and scholarship of teaching
 Student: Engagement and enhancement of learning and approaches to learning
 Institution: Architecture and culture
Principle 4: The Framework must be based on a clear understanding of effectiveness.
The Framework is intended to be an evaluation tool for those involved in academic development to judge
the effectiveness of their programs and for institutional review of the effectiveness of their academic
development initiatives. It is also compatible with other strategies of evaluation such as benchmarking,
internal review and monitoring as part of an ongoing cycle of quality enhancement. As a form of such
evaluation the Framework includes indicators of change over both the short and long term and it is this
emphasis on change which underpins the use of the term effectiveness. Much of the literature related to
evaluating academic development programs uses the terms ‘effectiveness’, ‘impact’, ‘influence’ and
‘strengths’ interchangeably without attempting to define any of them. The American Evaluation Association
(2011) defines evaluation as assessing the strengths and weaknesses of programs, policies, personnel,
products, and organisations to improve their effectiveness. This definition mirrors the intent of the APD
Effectiveness Framework which is based on the notion that an effective intervention, in this case TPPs, will
result in change which is appropriate to the situation (Moon, 2004), and in particular, in knowledge and
practice appropriate to the teaching-learning context. In evaluating the success of TPPs, two aspects
require attention: the effectiveness of the practices and processes involved, and the changes which occur
as a result of these practices and processes. Consequently the Framework includes indicators of change
which require looking at evidence beyond the immediate results of participant satisfaction and quality of
program delivery, to the intermediate and longer term effects of programs on teacher and student
behaviours, to the institutional teaching and learning policies and culture and to data which demonstrates
sustained and sustainable improvement.
In brief: The development of the Framework has been informed by the audit of TPPs in Australian
universities and the literature in the field, and is underpinned by the need to provide an adaptable, relevant
evaluation Framework incorporating evidence-based indicators. It is intended to be a collection of possible
indicators to assist academic developers in evidencing the effectiveness of their work. The term
effectiveness is used to reflect processes and product, and short and long term change.
C. Structure of the Framework: How is it organised?
The structure of the Framework is designed to assist academic developers to document the effectiveness of
their TPPs drawing on data they have related to the indicators. A separate framework is presented for
formal and informal TPPs. Each Framework includes indicators of effectiveness for Programs and the
Institution, four types of indicators, areas of evidence focus for each level and effectiveness indicators
related to each area of evidence focus in the following structure:
[Diagram: structure of the Framework, showing categories of TPPs, levels of focus, types of indicators, areas of Outcomes Focus and specific effectiveness indicators]
Categories of TPPs
 Formal: These include TPPs for which there is formal accreditation, or which are mandated or required by the university (for example Grad. Certs or similar, foundations of university learning and teaching, compulsory orientation for new or sessional staff), and which are delivered over extended time periods (for example several days, a semester or full year).
 Informal: These include all optional workshops, seminars, on-line tutorials, teaching and learning events, peer learning, ad hoc sessions, faculty based programs, etc. which typically have a single focus and are short in duration (for example one to three hours).
Levels of focus
As a consequence of the focus on both teaching preparation programs and the context in which these
activities take place it was necessary to present the framework in two levels:
 Program: This level focuses on the intended outcomes of TPPs.
 Institution: This level focuses on the alignment between institutional architecture (e.g. policy, resourcing, review procedures) and enhancement culture (e.g. support for the transfer of learning), and TPPs.
The inclusion of the institutional level outcomes emanates from the work of Trowler and Bamber (2005)
which highlights the importance of the context and the alignment between institutional architecture,
culture and TPPs. In recognising the importance of the context within which TPPs are delivered, the project
team determined that it was essential to co-locate the indicators for each level within one framework,
rather than separating them, which could divert attention to the program level only.
Types of Indicators
There is considerable agreement that Input, Process, Output and Outcome indicators, which can be more
broadly categorised as quantitative and qualitative, are suitable for the purposes of evaluation (Chalmers,
2008; Shavelson, 2010). Quantitative indicators are based on numerical assessments of performance and
are typified by Input and Output indicators while qualitative indicators use non numerical assessments of
performance and include Process and Outcome indicators. The Framework is based on the following
understanding of each type of indicator:
 Input indicators refer to the human, physical and financial resources dedicated to particular programs;
 Output indicators refer to the results or outcomes of the programs which are measurable, such as the number of program participants;
 Process indicators reveal how programs are delivered within the particular context, referring to policies and practices related to learning and teaching, performance management and professional development of staff, quality of curriculum and the assessment of student learning, and quality of facilities, services and technology (Chalmers, 2008, p.12);
 Outcome indicators focus on the quality of provision, satisfaction levels and the value added from learning experiences.
All indicators have some limitations. For example, Input and Output indicators tend to generate statistics
which reveal how much or how many, but say little about quality and may actually inhibit the investigation
of teaching and learning processes, experiences and interactions which could be more enlightening.
Outcome and Process indicators provide information about quality, but are more difficult to measure.
While there is sufficient evidence that the use of Process indicators is most appropriate and useful for
generating information related to quality in teaching and learning (Chalmers, DEEWR Report 2008) there is
also recognition of the need for quantitative information. The important point is that information needs to
be drawn from a variety of indicators and that the information generated needs to be interpreted and
contextualised.
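To make the relationship between indicator types, levels and evidence more concrete, the following minimal sketch (in Python, illustrative only; the class, field names and sample indicators are ours, not part of the Framework) shows one way a unit might catalogue its indicators and separate the broadly quantitative kinds (Input, Output) from the broadly qualitative kinds (Process, Outcome):

```python
# Illustrative only: one possible way to organise indicators and evidence
# against the four indicator types and two levels described above.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    code: str         # e.g. "FI 11" (Formal framework, Institution level, no. 11)
    kind: str         # "input" | "process" | "output" | "outcome"
    level: str        # "program" | "institution"
    description: str
    evidence: list = field(default_factory=list)   # links, attachments, data sets

register = [
    Indicator("FI 11", "input", "institution",
              "University allocates adequate resources to TPP provision"),
    Indicator("IP 7", "output", "program",
              "Number and proportion of staff attending workshops"),
    Indicator("IP 8", "outcome", "program", "Workshop evaluations"),
]

# Quantitative (input/output) versus qualitative (process/outcome) indicators:
quantitative = [i for i in register if i.kind in ("input", "output")]
qualitative = [i for i in register if i.kind in ("process", "outcome")]
```

A catalogue of this kind simply makes explicit the balance the text recommends: information drawn from a variety of indicator types, interpreted within its context.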
Outcomes Focus
The areas of Outcomes Focus represent the intended outcomes most commonly identified by the audit of
TPP activities for academics in Australian universities and those outcomes of TPPs which the literature
suggests can be evaluated. The way in which the intended outcomes are articulated varies from broad
general statements to those which are specific. To manage this difference and the breadth of offerings the
outcomes have been categorised into the following common areas:
 Teacher knowledge, skills and practice: includes intended outcomes relating to, for example, pedagogy, curriculum design, assessment and feedback, teaching approaches, strategies, and skills, deep and surface learning, large and small group teaching, use of technology, etc.
 Teacher reflective practice and scholarship of teaching: includes intended outcomes relating to, for example, use of student feedback, techniques for reflecting on and evaluating teaching, peer review, innovations in teaching, communities of practice, researching teaching, etc.
 Student engagement, learning experience: includes intended outcomes relating to, for example, effective group teaching, active learning, questioning and communication techniques, use of ICT and LMS to engage students, dealing with diversity, inclusive teaching, dealing with difficult students, enhancing learning experiences etc.
 Student approaches to learning: includes intended outcomes relating to, for example, student focused approaches to teaching and learning, authentic assessment, PBL, work integrated learning, group tasks, critical and creative questioning etc.
 Policy: includes the extent to which institutional organisation, policies and strategic priorities recognise, support and value quality teaching and learning and participation in TPPs through, for example, promotion criteria, financial and workload support for participation in TPPs, embedded review processes, recognition and reward for excellence in teaching through promotion criteria etc.
 Resourcing: includes the extent to which institutions commit resources to TPPs both centrally and at faculty/department level, to the recognition and reward of quality teaching and to activities which promote quality teaching, etc.
 Culture: includes the extent to which institutional culture encourages participation in TPPs, promotes the sharing of teaching and learning ideas and issues, celebrates excellence in teaching, encourages and rewards the scholarship of teaching, supports communities of practice, values teaching and learning related events, etc.
Specific Effectiveness Indicators
Each of the Frameworks includes a collection of specific Effectiveness Indicators at the Program and
Institution level. These numbered indicators are presented within areas of Outcomes Focus representing
the intended outcomes of the TPPs. The numbering does not indicate a hierarchy of indicators nor is there
any horizontal relationship between the indicators. The Effectiveness Indicators have been derived
following consideration of:
 the intended outcomes of TPPs as revealed by the audit;
 evidence of which outcomes of TPPs can be measured from the extensive literature review;
 the need for a balance between quantitative and qualitative indicators to provide objective credibility and rich information;
 the need for both short term and long term data;
 the need to balance participant surveys with other sources of evidence (triangulation);
 the role of the Framework in providing a basis from which to create a narrative about the effectiveness and impact of TPPs;
 feedback from the academic development community during Stage 1 of the Project;
 ease and flexibility of use.
In brief: The structure and content of the APD Effectiveness Framework has been informed by the audit of
TPPs in Australian universities, the literature review and feedback from the academic development
community. While there are separate Frameworks for Formal and Informal TPPs, they have a common
structure. Each includes two levels: Programs and Institution; four types of indicators: Input, Process,
Output and Outcome; areas of Outcomes Focus: teachers, students, policy, resourcing and culture, which
represent the intended outcomes of TPPs; and specific indicators for each area of Outcomes Focus (see
diagram above).
D. Interpreting the Framework: How can it be used?
It is expected that the APD Effectiveness Framework will serve as an evaluation or accountability tool as
well as informing future curriculum design.
Evaluation Tool
The APD Effectiveness Framework is intended to assist academic developers in evidencing the effectiveness
of their TPPs and the consequential changes in teaching and learning related activities and processes. One
way of achieving this might be to develop a narrative3, using the APD Effectiveness Framework to select
indicators around which to frame the discussion and which can be supported by data.
The findings of the literature review suggest that in developing such a narrative academic developers
should select both qualitative and quantitative indicators, and short and long term indicators in order to
present a comprehensive account of the effectiveness of their TPPs. Throughout the discussion the
indicators may be interrelated and linked to data all of which should be interpreted in light of the
institutional context and the explicit purpose for which the information is being presented. Although there
is evidence in the literature of tension between proponents of ‘hard’ or ‘soft’ indicators, there is growing
support for the use of ‘bottom up’ evidence in the form of, for example, real life stories, anecdotes, forums,
reflections and case studies which give voice to successful interventions and why they work in particular
contexts. This seems particularly relevant for evaluating the effectiveness of TPPs, where complex causal
relationships and the considerable variation in pace, extent and depth of change which individuals might
demonstrate following participation in TPPs make it difficult to draw linear conclusions. By developing a
narrative academic developers are able to integrate both kinds of data in evidencing the effectiveness of
their TPPs.
The Institutional level indicators will assist in determining the alignment between institutional priorities,
TPPs and the teaching and learning culture.
Curriculum Design
The APD Effectiveness Framework might also be used to inform the design of TPPs. Sometimes called
‘backward design’, the process of looking at the intended outcomes and indicators of effectiveness at the
beginning of the design process allows for stronger alignment not only of the curriculum, instructional
strategies and evaluation processes, but also of TPPs with institutional values and priorities regarding
teaching and learning. The use of the Framework in the design stage also facilitates the identification of
data which will be useful for evaluation, and how and when to collect it, reducing reliance on participant
surveys for evidence of effectiveness.
In brief: The Framework represents a collection of indicators to support the development of a narrative
demonstrating the effectiveness/impact of TPPs. It is not intended that all indicators be addressed, but
rather that academic developers select those of particular interest, identify data which will provide
evidence of the indicator and systematically collect and analyse the relevant data over time. Determining
the effectiveness or impact of TPPs serves two purposes: it provides evidence for accountability and
provides feedback for future planning and delivery.4
3 Exemplars are available on the project website.
4 For a summary of other uses identified by the trial see the Project Report on the OLT website.
E. Collaboration
It is well recognised that engaging academics at the development stage is fundamental to their acceptance
of new initiatives (Knight and Trowler, 2000; Gosling et al, 2006; Bryman, 2007). This was a key conclusion
from the ALTC project Learning Leaders in Times of Change (Scott et al, 2008) which acknowledged that
one’s peer group is an important source of motivation (or demotivation). A key feature of this project
therefore, was the involvement of academic developers throughout the project. The CADAD network,
which endorsed the project from the outset, was invited to review and discuss early iterations of the
Framework and to nominate for participation in the trial. The academic development community, in
general, was invited to participate in a workshop (following the HERDSA 2011 Conference) outlining the
purpose and structure of the draft Framework and to nominate for the trial. Nine teams subsequently took
part in the trial.
The trial teams represented a diverse range of institutions encompassing Go8s, regional, multi campus and
cross sector institutions, with a range of distinctive features including a focus on technology and design,
distance education, research, off shore teaching, professional education and industry partnerships. There
was also considerable range in size and age of institution: five teams were based in universities with 30 000
students or fewer and the remainder in larger institutions reporting between 40 000 and 74 000 students;
and five were based in universities established prior to the 1960s and the remainder in newer institutions.
Trial teams consisted of between two and five members, with a team leader responsible for coordinating
the trial activities in their institution and the final presentation and written report. There was a range of
experience within trial teams from senior academic developers with more than five years’ experience to
those newer to the field.
Each trial team developed an action plan focused on an institution specific issue or interest and although
they were provided with an introduction to the APD Effectiveness Framework and its intended application
they were encouraged to adapt and modify the Framework to suit their purposes and context. This was a
particularly important focus of the trial given the design principles of relevance and adaptability. The trial
teams embraced this challenge and demonstrated a variety of applications of the Framework which went
beyond the expectations of the project team.
Conclusion
Each of the trial teams was able to engage with the support materials which are available on the project
website to document the effectiveness of their TPPs in a variety of ways. In their feedback the trial teams
identified the following strengths of the Framework:
Flexibility: it is adaptable to varying institutional contexts, programs and sectors and can also be
adapted to an institution specific need.
Comprehensive: the range of both qualitative and quantitative indicators allows for an understanding
of how and why programs are effective or not and can also be used for a detailed investigation of
particular outcomes or a broad overview of provision to be evidenced in the long and short term.
Educative: the Framework facilitates decisions on what data to collect, when to collect and how to
organise it, and reveals gaps in current evaluation practices.
Scaffold for planning: the Framework provides a structure for the development of a systematic long
and short term approach to evaluation and provides a repository for evidence.
Inclusion of institutional culture: provides a basis for conversations which will promote an awareness
of the need for alignment between policy, resourcing and recognition of quality teaching and
learning and for the development of university-wide enhancement cultures.
Curriculum development: the Framework is useful as a standard in the design of teaching preparation
programs.
Benchmarking: the Framework is helpful for both internal benchmarking of programs and for
integration into the CADAD benchmarking process.
References
Ako Aotearoa (2010). Tertiary practitioner education training and support: Taking stock. Ako Aotearoa –
The National Centre for Tertiary Teaching Excellence, Wellington, NZ. Retrieved 27 October 2010 from
http://akoaotearoa.ac.nz/download/ng/file/group-4/taking-stock---tertiary-practitioner-education-training-and-support.pdf
American Evaluation Association Retrieved August 8, 2011 from
http://www.evaluationwiki.org/index.php/American_Evaluation_Association_%28AEA%29
Chalmers, D. (2008). Indicators of university teaching and learning quality. Sydney, ALTC.
Gibbs, G. & Coffey, M. (2004). The impact of training of university teachers on their teaching skills, their
approach to teaching and the approach to learning of their students. Active Learning in Higher Education,
5: 87, 87-100.
Hanbury, A., Prosser, M. and Rickinson, M. (2008). The differential impact of UK accredited teaching
development programmes on academics' approaches to teaching. Studies in Higher Education, 33: 4, 469-483.
Knight, P. (2006). The Effects of Postgraduate Certificates: A report to the project sponsor and partners. The
Institute of Educational Technology, The Open University. Retrieved 02 November 2010 from
http://kn.open.ac.uk/public/document.cfm?docid=8640
Ling, P. (2009). Development of academics and higher education futures. Report Vol. 1., Sydney, ALTC.
Moon, J. (2004). Using reflective learning to improve the impact of short courses and workshops. Journal of
Continuing Education in the Health Professions, 24(1), 4-11.
Prebble, T., Hargreaves, H., Leach, L., Naidoo, K., Suddaby, G., & Zepke, N. (2004). Impact of student
support services and academic development programmes on student outcomes in undergraduate tertiary
study: a best evidence synthesis. Wellington, NZ, Ministry of Education.
Report to DEEWR (2008). Review of Australian and international performance indicators and measures of
the quality of teaching and learning in higher education 2008. Sydney, ALTC.
Shavelson, R. J. (2010). Measuring College Learning Responsibly: Accountability in a New Era. California,
Stanford University Press.
Southwell, D. & Morgan, W. (2010). Leadership and the impact of academic staff development and
leadership development on student learning outcomes in higher education: A review of the literature. A
report for the Australian Learning and Teaching Council (ALTC). Sydney, ALTC.
Stefani, L. (Ed.) (2011). Evaluating the effectiveness of academic development: Principles and practice. New
York, Routledge.
Appendix F
Narrative Guidelines: Using the APD Effectiveness Framework
Introduction
The following is intended to support academic developers in demonstrating the effectiveness of their
TPPs for academics. One way to achieve this is to develop a narrative which integrates the indicators
from the APD Effectiveness Framework and relevant data in creating a comprehensive picture of the
effectiveness of the design and delivery of TPPs and the consequential changes in teaching and
learning. These guidelines provide the basis for developing the narrative and can be adapted to the
particular context or concerns.
In the sections which follow you will find a set of indicative questions to prompt your thinking about
the particular outcome you are demonstrating, the relevant indicators and some suggested sources of
evidence. Each of the indicators is labelled with a set of letters and numbers in brackets to provide for
easy reference back to the APD Effectiveness Framework: Formal [F] or Informal [I]; the level of
interest, i.e. Program [P] or Institution [I]; and the number of the particular effectiveness indicator
within the Framework. For example, [FP 10] is Formal Framework, Program Level, Indicator 10.
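As a quick illustration of this labelling scheme, the hypothetical snippet below (ours, not part of these Guidelines) unpacks a label such as [FP 10] into its framework, level and indicator number; sub-lettered indicators such as [FI 1 a] are not handled in this sketch:

```python
# Illustrative only: unpack an indicator label such as "[FP 10]" into its parts.
import re

FRAMEWORKS = {"F": "Formal", "I": "Informal"}
LEVELS = {"P": "Program", "I": "Institution"}

def parse_label(label: str) -> tuple[str, str, int]:
    """Return (framework, level, indicator number) for a label like '[FP 10]'."""
    m = re.fullmatch(r"\[([FI])([PI])\s*(\d+)\]", label.strip())
    if m is None:
        raise ValueError(f"not an indicator label: {label!r}")
    return FRAMEWORKS[m.group(1)], LEVELS[m.group(2)], int(m.group(3))

print(parse_label("[FP 10]"))   # ('Formal', 'Program', 10)
print(parse_label("[II 5]"))    # ('Informal', 'Institution', 5)
```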
The following sequence may be helpful in using the APD Effectiveness Framework and these
guidelines:
1. Select the Formal or Informal Framework as appropriate to your evaluation.
2. Select the Program or Institution level as your focus.
3. Select the Outcome Focus (i.e. intended outcomes) you are interested in demonstrating.
4. Consider the indicators of achievement of this outcome.
5. Look at the indicative questions in the template for addressing the outcomes.
6. Identify, collect and analyse the relevant data/evidence.
7. Begin the narrative with an introductory section outlining the Context.
8. Using the indicative questions and suggested evidence develop the narrative around the particular Outcome Focus, indicators and evidence.
9. Consider using both qualitative and quantitative indicators if appropriate.
10. Remember there is no hierarchy of the indicators, no horizontal links and no expectation that all indicators will be addressed for each area of focus.
Introduction
Your narrative might start with an elaboration of the context in which the professional development
programs take place. The following section provides some suggested headings to guide this
contextualisation. This can be written in narrative style, bullet points or tabular form – whatever best suits
your preferences. As this section develops it is possible to cross reference your comments with the
indicators in the APD Effectiveness Framework. Those most relevant to this section have been identified
in the column on the right, but there may be others which are also relevant to your circumstances.
Indicative Headings and Detail

Context:
 Name of institution
 Location, age, nature, size of the university
 Number of staff
 Position of Academic Development Unit (ADU) within university administrative structure
 Status of Teaching and Learning within the institution

Profile of the ADU:
 When established
 Administrative structure of ADU
 Mission/Vision/Priorities of ADU
 Number of staff
 Qualifications of staff
 Facilities
 Budget
 Responsibilities

Strategic Plan for short and long term:
 Current priorities
 Future needs
 Issues/challenges
 Focus on demonstrating effectiveness

Indicator/s
 University allocates adequate resources to TPP provision [FI 11]
 TPPs delivered by staff with appropriate qualifications and experience [FP 1]
 Annual Report of TPP activities [FI 8]
Conclusion
Overview of Programs
This section provides an opportunity to develop an overview of the TPPs for academics delivered or coordinated by the academic development unit.
Indicative questions
 What formal and informal programs are offered?
 How frequently are they offered?
 What is the duration of the programs?
 Which programs are mandated/optional?
 What particular needs are met by the programs offered? (e.g. sessional staff, lab demonstrators, tutors etc.)
 What modes of delivery are used?
 Who delivers the programs?
 Are the programs generic or discipline specific?
 Are there any programs delivered in collaboration with faculties/departments?
 Who is responsible for program design?
 How are decisions about program design made?
 What short term/long term, formal/informal evaluation procedures are in place?
 How do the evaluation procedures align with program intentions?
 What follow up is there of individual participants to determine whether program intentions have been achieved?

Indicators
 TPPs for academics delivered by staff with appropriate qualifications and experience [FP 1]
 The range and mode of TPPs is aligned with University guidelines on good teaching and staff needs [FP 2]
 TPPs draw on a framework of evidence based teaching and learning practices (e.g. HEA professional standards framework) [FP 15]
 TPPs incorporate research which informs teaching and learning in higher education [FP 12]
 Student perceptions of teaching are incorporated into TPPs [FP 16]
 TPPs incorporate University graduate attributes [FP 18]
 TPPs responsive to audit of staff needs and institutional priorities (e.g. LMS) [IP 1]
 TPPs are varied and designed to meet staff and institutional needs [IP 4]
 Reviews of student experience inform TPPs [IP 16]
 Range and scope of TPPs [IP 13]
 University allocates adequate resources to TPP provision e.g. an adequate number and range of TPPs are planned, appropriately qualified and experienced staff are appointed, financial and in-kind support is available for various staff needs, e.g. sessional staff, post grad and clinical tutors [II 5]
 Number of workshops offered [IP 6]
 Number, timing, range and mode of programs offered [II 6]
 TPP/workshop evaluations [FP 6] [IP 8] [IP 14] [II 3]

Evidence (Guidelines only)
 Staff qualifications/experience
 Schedule of programs
 University policies
 Mapping of program foci against university policies, graduate attributes, student feedback
 Basis of curriculum design model
 Responsiveness of program design to staff feedback and requests
Conclusion
Overview of Participation
This section focuses on evidencing the effectiveness of TPPs for academics through an evaluation of participation rates and draws on the Institutional level indicators. The particular focus should be clarified in an introduction to this section.

Indicative questions
 How many staff have completed Grad Certs?
 How many participants attend formal programs?
 How many participants attended each informal program?
 How many participants attend more than one program?
 How have participation rates changed over time?
 Which programs are in greatest demand and why?
 How many requests are made by faculties/schools for discipline specific workshops?
 How are ad hoc requests for TPPs managed?
 How is participation recognised?

Indicators
 Number of completions of a Graduate Certificate by staff [FP 5]
 Number and proportion of staff completing TPPs (attrition rates from programs, non-completions) [FI 3]
 Number and proportion of new appointments enrolled in TPPs [FI 4]
 University policies and priorities recognise the role of TPPs in enhancing the quality of teaching and learning e.g. a) requiring, and providing financial support for, the completion of a formal TPP for new academic appointments, c) faculty/dept recognition of staff participation in TPPs in workload formulas [FI 1]
 Ongoing teaching development is an expectation of all staff [IP 3]
 Number and proportion of staff attending workshops [IP 7]

Evidence (Guidelines only)
 Completions of Grad Certs or similar over time
 Number and proportion of staff who complete FULT programs
 Attendance at each program/workshop
 Patterns of attendance over time
 Feedback from participants regarding why they have chosen to attend
 Links between university strategic priorities and attendance
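Several of the quantitative indicators above reduce to simple counts and proportions. As a minimal sketch (hypothetical data and field names, ours rather than part of these Guidelines), the number and proportion of staff completing TPPs over time might be derived from a completion log as follows:

```python
# Illustrative only: number and proportion of staff completing TPPs per year,
# computed from a hypothetical completion log.
from collections import defaultdict

completions = [            # (year, staff_id) for each recorded completion
    (2010, "a"), (2010, "b"), (2011, "a"), (2011, "c"), (2011, "d"),
]
academic_staff = {2010: 120, 2011: 125}   # total academic staff per year

by_year = defaultdict(set)
for year, staff_id in completions:
    by_year[year].add(staff_id)           # count each staff member once per year

for year in sorted(by_year):
    n = len(by_year[year])
    print(f"{year}: {n} completions ({n / academic_staff[year]:.1%} of staff)")
```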
Conclusion
Achievement of Intended Outcomes
This section provides guidelines for demonstrating effectiveness through achievement of the intended outcomes of individual programs. It can be adapted to either formal or
informal programs, and the questions, indicators and evidence can be varied according to whether the focus is on a program as a whole or on particular outcomes only. The
particular purpose of the evaluation should be clarified in an introductory section.
Indicative questions
 What are the pedagogical/theoretical/research-based foundations of the programs?
 What are the intended outcomes of the various programs offered?
 How are decisions made about the intended outcomes of each program?
 How are the intended outcomes reflected in the design and delivery of TPPs?
 What do you expect participants will know and be able to do following the program?
 How have staff responded to programs which focus on pedagogy, teaching skills, reflective practice, the student learning experience, orientation to university policies, etc.?
 What evidence is there that participants are achieving the intended outcomes?
 In what ways have participants changed their teaching practices following participation in TPPs?
 To what extent do participants engage with reflective practices?
 To what extent have TPPs enhanced the student learning experience?
 To what extent have TPPs contributed to changes in student approaches to learning?
 To what extent are participants aware of institutional priorities, policies etc. as a result of TPPs?

Indicators
 TPPs provide a pedagogical framework for understanding teaching and learning in higher education [FP 3]
 Delivery of TPPs models teaching and learning strategies, resources and assessment practices which enhance the quality of teaching and learning [FP 4]
 TPPs encourage critical reflection of participants' beliefs and practices regarding teaching, learning and assessment [FP 11]
 Teacher perceptions of changes in their approach to teaching and learning following completion of TPPs as evidenced by portfolio, improved student evaluations, teaching awards, peer review, self-reflection [FP 7]
 Evidence of student focused approach in course/teaching materials [FP 9]
 Student perceptions of teaching are incorporated into TPPs [FP 16]
 TPPs incorporate research which informs teaching and learning in higher education [FP 12]
 TPPs draw on a framework of evidence based teaching and learning practices (e.g. HEA professional standards framework) [FP 15]
 Quality of teaching as evidenced through promotion applications, PDRs etc., following completion of TPPs [FP 8]
 Unit evaluations [FP 17]
 TPP participant perceptions of quality of student assessment tasks [FP 20]
 TPPs clarify institutional expectations/requirements regarding teaching and learning (e.g. assessment, academic conduct, grading, international students, supervision) [IP 11]
 TPPs highlight importance of relevant, authentic assessment tasks [FP 19]
 TPPs incorporate University graduate attributes [FP 18]
 TPP participants report the use of student feedback when reviewing courses and teaching [FP 13]
 Unit evaluations [FP 17] [IP 18]
 Number and proportion of staff attending programs related to student learning experiences [IP 17]
 Number and proportion of staff attending programs related to orientation [IP 12]
 Number and range of TPPs related to orientation/dissemination of university policies, processes and priorities related to teaching and learning [IP 10]
 Number and range of TPPs focused on knowledge, skills and strategies [IP 2]
 TPPs highlight and model a range of evidence-based teaching and learning strategies, resources and assessment techniques which can be adapted to discipline specific contexts [IP 5]
 Number of TPPs with a focus on student engagement and learning experience [IP 15]

Evidence (Guidelines only)
 Program descriptors
 Mapping of program intended outcomes, content and delivery strategies against university policy, student feedback, professional standards frameworks (e.g. HEA), graduate attributes
 Participants' unit outlines, teaching materials etc. (compared with previous outlines)
 Excerpts from participant journals
 Interview/survey/observation of participants
 Excerpts from promotion applications, teaching portfolios, PDR submissions
 Comparison of assessment tasks pre and post participation in a TPP
 Annotated exemplars of student assessment submissions
 Formal and informal student feedback
 Analysis of unit evaluations over time
 Changes in teaching philosophy as evident in teaching portfolios
 Analysis of workshop evaluations
 Reports from follow up sessions in which participants share their use of new strategies, ICTs, LMS, etc.
 Focus group discussions with students
 Analysis of student unit evaluations and changes made to unit outlines, teaching materials, teaching strategies etc.
 Tracking of use of PBL, work integrated learning etc. following participation in TPPs
 Student participation in on-line forums, class discussions etc.
Conclusion
Institutional Architecture (Policy and Resourcing) and Culture
This section provides an opportunity to consider the extent of alignment between institutional architecture and culture and the intended outcomes of TPPs for academics.
Institutional architecture refers to the degree to which university policies, priorities, resourcing and review processes support and encourage participation in TPPs.
Institutional culture refers to the extent to which there is recognition and reward of teaching, support for activities related to the scholarship of teaching and learning and
the development of a culture which fosters a focus on teaching and learning and supports the transfer of learning from TPPs.
Indicative questions

Policy, resourcing, alignment
 In what ways and to what extent do institutional policies and strategic priorities encourage, support and recognise participation in TPPs?
 How do decisions regarding resourcing of TPPs influence provision and effectiveness?
 How does the design and delivery of TPPs reflect institutional priorities?
 In what ways does the institution recognise and reward teaching?
 To what extent does recognition and reward of teaching influence participation in TPPs?
 What formal review processes of TPPs are required?
 What additional review processes of TPPs are undertaken?
 How responsive are TPPs to recommendations from institutional review processes?

Culture
 How has institutional culture influenced provision or participation in TPPs?
 To what extent do past TPP participants support teaching and learning related events?
 How does faculty/department culture support the transfer of learning from TPPs?
 To what extent do peer learning, communities of practice etc. contribute to a supportive culture for TPP participants?

Indicators
 University policies and priorities recognise the role of TPPs in enhancing the quality of teaching and learning e.g. encourage and support completion of TPPs, recognise and reward teaching, recognition of participation in TPPs in workload formulas [FI 1]
 University allocates adequate resources to TPP provision (funding, staff and facilities) [FI 11] [II 5]
 Annual Report of TPP activities [FI 8] [II 7]
 Number, timing, range and mode of TPPs offered [II 6]
 Range and mode of TPPs offered [II 2]
 PDR process and promotion criteria recognise completion of TPPs [FI 2]
 Number and proportion of new appointments enrolled in TPPs [FI 5]
 Non completion rates [FI 4]
 Number and proportion of TPP participants nominated for teaching awards [FI 6]
 Periodic external review of program and benchmarking report to T and L committee [FI 10] [II 4]
 Workshop evaluations [II 3]
 TPPs delivered within a culture of supporting learning communities [FI 12]
 Number and proportion of TPP participants attending teaching and learning related events [FI 13]
 Number and proportion of TPP participants who receive teaching grants [FI 7]

Evidence (Guidelines only)
 Policies regarding appointment, promotion, professional development, teaching awards etc.
 Alignment between institutional policies and TPPs
 Number of new staff completing TPPs
 Calendar of TPP activities and changes in this over time
 Teaching awards recipients
 Teaching grants recipients
 External review commendations and recommendations
 Evaluation policy relating to TPPs
 Analysis of evaluations over time
 Involvement of TPP participants in Teaching and Learning events
 Publication related to teaching and learning
 Faculty/department policies related to teaching and learning
 Peer learning activities
Conclusion
Conclusion
The final task is to conclude the narrative. In doing so you might:
 Note the indicators which have been met,
 Identify progress since any previous review,
 Articulate short and long term goals for the future based on the data collected,
 Identify indicators which still need to be addressed and suggest possible courses of action.
Appendix G
Academic Professional Development Effectiveness Framework
Evaluation Focus: Overview of Provision
Introduction

This evaluation is the first of a number which will focus on the effectiveness of teaching preparation programs for academics (TPPs) offered by the Centre for the Advancement of Teaching and Learning (CATL) at UWA. The APD Effectiveness Framework has been used as a basis for evaluation as it identifies indicators of effectiveness related to various aspects of TPPs and as such enables attention to be directed to areas of particular interest or concern. This evaluation reviews the design, delivery and diversity of programs offered and the extent to which this provision reflects the indicators of effectiveness of practice, primarily input and process indicators, of the APD Effectiveness Indicator Framework. The analysis of the effectiveness of practice is an important foundation to the next step of evaluating the achievement of the intended outcomes of specific programs.

Where indicators of the APD Effectiveness Framework are met they are noted, with the related evidence (see attachments or links), in the bracketed notes following each paragraph. Key phrases related to specific indicators are in bold for easy reference to the APD Effectiveness Framework.

[Relevant indicators and evidence: [FI 1] [FI 11]: see CATL Organisational Structure; see CATL OPP]

At UWA the majority of the TPPs are delivered through CATL. CATL consists of three teams working together closely: Administrative, eLearning and Academic. CATL employs a total of 11.8 FTE (12 individuals) ongoing staff, and currently 2.6 FTE fixed-term contract or casual staff, in support of over 1300 academic staff at the University. All academic staff delivering formal TPPs have teaching qualifications and more than five years' experience teaching in the higher education sector. Professional staff who are appointed to professional development roles have experience in providing training appropriate to the audience.

[Relevant indicators and evidence: [FI 11]: see Organisational Structure; [FP 1]: staff qualifications (Attach. 1); [FI 11 b)] [FI 8]: program schedule and presenters (Attach. 2); see Annual Report]

CATL is committed to sustaining the quality of the staff through rigorous appointment procedures and has recently attracted a strong field of applicants for appointment as education developers for the LMS project. The successful candidates have undergone professional development specific to their role in delivering workshops to support the introduction of Moodle at UWA and their work will be monitored by the Leader of the eLearning team and the LMS Implementation Manager. Consultation through staff surveys and advice from reference groups, which include academic staff, have been instrumental in the design and delivery of the Moodle workshops.

[Relevant indicators and evidence: [II 5]: presenters' qualifications/experience (Attach. 3); [II 6]: see Schedule of workshops; [IP 1]: see Moodle Update]

The evaluation should be considered within the contextual information about CATL which is available in the Organisational Structure and OPP.

Overview of Provision

1. Introduction

The appointment of appropriately qualified and experienced staff is fundamental to the design and delivery of high quality and effective TPPs and to the status of such courses within the academic community.

2. Development of programs/events
The focus of CATL is on providing teaching and learning support in areas which are most relevant to the UWA context, priorities, teaching and learning related policies and Educational Principles (graduate attributes). To this end it hosts a range of programs, workshops, visiting speakers and events aimed at enhancing teaching and learning practice at UWA. During the annual review of CATL activities the alignment of programs to institutional priorities is monitored.

[Relevant indicators and evidence: [FP 2] [FP 18] [FP 14]: links between programs offered and UWA policies and Educational Principles; see Calendar of events; see Operational Objective 1a and performance review (Attach. 4, p.2)]

In addition to supporting the key areas of teaching practice and scholarship at UWA, the program/workshop calendar is developed on the basis of the strategic and operational plan for the Centre and feedback from participants of TPPs.

[Relevant indicators and evidence: [FP 2] [FP 3] [FI 1 a]: links between program aims and institutional priorities; [FI 1]: Staff Development Policy]

In addition to centrally organised programs and events CATL is also actively engaged with faculties. It has initiated faculty based projects aimed at enhancing teaching and learning, such as the CATLyst Network, and participated in teaching and learning projects, such as the current assessment and feedback project which is embedded within the faculties.

[Relevant indicators and evidence: [IP 11]: see CATLyst and Assessment Project websites]

3. Formal Programs

Formal programs are semester long or full year programs for specific groups of UWA staff and post graduate students which provide pedagogical knowledge and practical skills in preparation for teaching in a higher education setting. These include:
 Foundations of Teaching and Learning
 Introduction to University Teaching
 Postgraduate Teaching Internship Scheme.

a. Foundations of Teaching and Learning

The Foundations program is offered every semester to support staff relatively new to the University and those with teaching experience who wish to refine, test, validate or develop their present conceptions of good teaching and their current teaching practice. The program is taught either as a semester long programme or in an intensive mode. Completion of the Foundations of Teaching and Learning program is a requirement for new staff and enables advanced standing into the Graduate Certificate in Tertiary Teaching, offered by the Faculty of Education. HR provides a list of new staff to CATL and letters are sent to the individuals and their Heads of School advising them of the upcoming programs.

The program is underpinned by a theoretical framework for teaching and learning in a research intensive university, and provides opportunities for staff to explore a range of teaching and learning strategies. The course outcomes clearly articulate the importance of research informed teaching, and display the core understandings and competencies as represented by the HEA Professional Standards Framework which has been modified and adopted as the UWA Teaching Criteria Framework (TCF). The TCF focuses on teaching practices, core knowledge and professional values all of which are directed to providing a high quality learning experience for students. The pedagogical framework for understanding teaching and learning in higher education which is fundamental to the program is evident in the course outcomes which are designed to enable participants to:

[Relevant indicators and evidence: [FP 2] [FP 3] [FP 12] [FP 14] [FP 15]: see Program description; see UWA Teaching Criteria Framework; [FP 3] [FP 4] [FP 11] [FP 12] [FP 17]: see Course outcomes and activities]
3. Formal Programs
Formal programs are semester long or full year programs for specific groups of UWA staff
and post graduate students which provide pedagogical knowledge and practical skills in
preparation for teaching in a higher education setting. These include:
 Foundations of Teaching and Learning
 Introduction to University Teaching
 Postgraduate Teaching Internship Scheme.
a. Foundations of Teaching and Learning
The Foundations program is offered every semester to support staff relatively new to the
University and those with teaching experience who wish to refine, test, validate or develop
their present conceptions of good teaching and their current teaching practice. The
program is taught either as a semester long programme or in an intensive mode.
Completion of the Foundations of Teaching and Learning program is a requirement for new
staff and enables advanced standing into the Graduate Certificate in Tertiary Teaching,
offered by the Faculty of Education. HR provides a list of new staff to CATL and letters are
sent to the individuals and their Heads of School advising them of the upcoming programs.
The program is underpinned by a theoretical framework for teaching and learning in a
research intensive university, and provides opportunities for staff to explore a range of
teaching and learning strategies. The course outcomes clearly articulate the importance of
research informed teaching, and display the core understandings and competencies as
represented by the HEA Professional Standards Framework which has been modified and
adopted as the UWA Teaching Criteria Framework (TCF). The TCF focuses on teaching
practices, core knowledge and professional values all of which are directed to providing a
high quality learning experience for students. The pedagogical framework for
understanding teaching and learning in higher education which is fundamental to the
program is evident in the course outcomes which are designed to enable participants to:
 Articulate an approach to, or philosophy of, teaching and learning;
 Engage in critically reflective practice to enhance teaching;
 Critically analyse incidents and issues in teaching practice and student learning and
develop action plans (if appropriate) to resolve them;
 Design and deliver a unit of study that aligns student learning outcomes, teaching and
learning strategies and assessment strategies;
 Use a variety of teaching strategies to enable effective student learning;
 Identify diverse student needs and develop appropriate teaching approaches to
support these;
 Incorporate elements of eLearning into teaching practice, especially the LMS;
 Evaluate teaching using a range of techniques, including SPOT and peer reviews;
 Apply the outcomes of contemporary research into teaching and learning to
practice;
 Demonstrate the relationship between teaching, research and scholarship in teaching
practices.
A highly valued component of the Foundations program is the panel of students who are
invited to share with the participants their perceptions of effective teaching and what they
value in learning experiences. Similarly, a panel of experienced, award-winning teachers
who describe their approach to teaching their students draws comments of appreciation
from the participants.


[FP 1] [FP 16] See Key Activities
[FP 3] [FP 4] [FP 11] [FP 12] [FP 15] See Key Activities
[FI 1] See Induction for Casual Staff; See eligibility for funding
[FP 13] See Reflective Task
[FI 1a]
[FP 3] [FP 4] [FP 12] [FP 13] [FP 14] [FP 17] [FP 19] Course outcomes and content as listed; Sample class activities (A 5)
[FP 4] Program activities as listed
b. Introduction to University Teaching (IUT)
This program is specifically designed for postgraduate students who are teaching at UWA in
seminars, tutorials or laboratories, some of whom are eligible to be paid casual tutor rates
for attending. Offered once each semester, the program is presented in three half-day
workshops followed by five seminars. It provides an introduction to the body of research
informing student learning and effective teaching as a foundation upon which to develop
new understandings and teaching skills. For those new to teaching it provides a theoretical
framework for thinking critically and reflectively about their teaching and offers a host of
practical tips and strategies. Effective teaching involves the application of principles and
practices modelled in the program, and the role of the teacher in adapting these is
emphasised. More specifically, participants are provided with an opportunity to:
 Use a variety of teaching strategies to enable effective student learning;
 Gather feedback on their teaching using a range of techniques;
 Reflect upon their approach to, or philosophy of, teaching and learning;
 Engage in critically reflective practice to enhance their teaching;
 Work with a colleague in a learning partnership to provide each other with support and
feedback on their teaching and to engage in meaningful dialogue about teaching and
learning.
c. Postgraduate Teaching Internship Scheme
The Postgraduate Teaching Internship Scheme allows promising doctoral students to
develop teaching skills in their fields and to undertake a program of professional
development activities during the course of their PhD candidature. The participants receive
payment to attend the program and the school in which they are located receives financial
support for their teaching costs.
The program commences with an intensive three-day workshop prior to the beginning of
the academic year and is then followed by regular meetings (usually five) throughout each
teaching semester, providing participants with opportunities to reflect, individually and
with peers, on their practice and to investigate relevant and current issues, for example
peer review, developing a teaching portfolio, and conversations around race and
inclusivity. These are complemented with on-line activities and written assignments all of
which model a range of teaching and learning techniques. The workshops include
contributions from staff in CATL, faculty-based course coordinators, experienced UWA
lecturers and students. The introductory workshops focus on:
 Approaches to learning. The student's perspective: how, what and why do
students learn? Current research on student learning and its implications for
practice.
 Approaches to teaching, especially teaching to promote student learning.
 Learning environments.
 Teaching methods, activities and practices such as lecturing, small group
teaching, cooperative learning, active learning and eLearning.
 Curriculum design and assessment of learning.
 Evaluation of teaching.
 Teaching as critically reflective professional practice.
A variety of activities which have been designed to enable the achievement of these
outcomes includes:
 Experiential learning exercises;
 Mini-lectures;
 Panel discussions;
 Cooperative learning, small group and large group work;
 Micro-teaching with videotaping and peer feedback;
 Independent study of materials;
 On-line tasks.
Successful completion of the program requires a number of tasks to be submitted. These
include:
 Feedback on teaching including peer observation of teaching, the use of formal
feedback strategies such as Student's Perceptions of Teaching (SPOT) and informal
feedback strategies.
 A reflective statement based on a critical incident which draws upon a learning
journal, feedback on teaching and other program activities.
 A teaching and learning folio.
 A Curriculum Development Report.


[FP 11] [FP 7] Sample Journal entry (A 6)
[FP 8] [FP 9] Excerpts from teaching Folios (A 7)
[FP 17] Analysis of unit evaluations (A 8)
4. Informal Programs
Complementing the formal programs offered by CATL is a suite of workshops, events and
resources designed to provide ongoing professional development opportunities for staff,
reflecting university teaching and learning priorities, new initiatives or current issues, and
staff needs. CATL's Strategic and Operational Plan shows a commitment to increasing the
variety and number of workshops and resources and to responding to faculty demand for
discipline specific support. These programs are typically quite short (between 1 and 3
hours) and are offered more than once a year to accommodate demand and staff
preferences for attendance. The programs support one of the following key focus areas:
 Curriculum Development Support for the New Courses
 Developing your Teaching Career
 Evaluation of Teaching
 Orientation and Induction
 Teaching and Research
 Teaching for Learning
 Teaching with Technology







[II 1] See Strategic and Operational Plan and Review of Performance (Attach. 4)
[IP 1] [IP 2] [IP 10] [IP 11] [IP 13] [IP 6] See workshop topics
In particular, the following workshops, seminars and teaching and learning related events
are typical of the range presented each year:
 Assessment: Learning's Destination or Journey?
 Developing your Teaching Portfolio
 Embedding Communication Skills across the curriculum 1
 English Language Support for Teaching Staff 1
 Quickstart Guide to developing your LMS Unit (multiple sessions)
 Quickstart Guide to UWA's Lecture Recording System, Lectopia (multiple sessions)
 WebCT & Lectopia (combined session)
 An introduction to PBL
 Group work in problem based learning
 Sessional staff Orientation
 Small Group Teaching: Seminars and Tutorials
 Supervising International Students
 Supervising Postgraduate Students
 Teaching and Learning Orientation
 Teaching Large Classes: Active learning in lectures
 Teaching Offshore
 Teaching Smarter @ UWA
 Teaching with PowerPoint
 What Counts as evidence of good teaching?
 Evaluation of Teaching: other forms of evaluation
 Evaluation of Teaching: SPOT strategies and core items
 Peer review of Teaching
 Engaging students in research and enquiry
 Indigenous teaching: Sharing research based exemplars for good practice




[IP 2] List of programs
[IP 5] [IP 11] [IP 15] See Program descriptions
[IP 6] [II 6] [II 7] See Workshop Schedule
[IP 1] See Student Learning and Teaching Policies
 Marking and Grading: workshop on sustainability, reliability and defensibility
 Service Learning: Evidence, Quality, Challenges
 Student plagiarism ten years on: new issues, new threats, new approaches
 Ten years' experience of enrolling international students: lessons for campus,
classroom and curriculum.
A number of these workshops focus on engaging students and creating positive learning
experiences while others provide clarification of institutional policies and processes
related to teaching and learning.



Utilising the expertise of new CATL academic development staff, a number of new
workshops have been introduced to support staff in interpreting the results of student
feedback, in peer review, and in eLearning, all of which are current priorities within the
university. In addition to the workshops, staff are supported through a range of quality
on-line resources designed to enhance teaching and the student learning experience.
In 2010, CATL introduced ‘on request’ workshops for the first time, reducing the number of
workshops which had scheduled dates and times, but offering the remainder of the
workshops ‘on request’ to groups of interested staff. This change was made to enable CATL
to focus on workshop topics with greater levels of demand, whilst still offering support in
other areas and being responsive to individual and discipline group needs. Similarly, in the
area of eLearning, training workshop offerings were streamlined, with Quickstart
workshops for WebCT and Lectopia being scheduled regularly, and more advanced training
available on request.
Other informal activities include ad hoc workshops in collaboration with schools and
faculties (for example, the Schools of Physics, Population Health and Psychology), visiting
teachers' workshops (for example, the assessment workshop by Jude Carroll), and more
than 30 individual consultations with staff. A number of faculties also participated in the
CATLyst network (a collaborative program between CATL and the Faculties) designed to
promote teaching and learning in the faculties. Over the last two years the focus of the
CATLysts has been on supporting sessional staff which was identified as a university
priority. Approximately 80 sessional teaching staff attended an inaugural Sessional Staff
Professional Development Day in March 2010, which was repeated in March 2011.
Approximately one third of those attending were new to university teaching, and one third
had less than three years' experience. The program was designed with input from CATL
staff, faculty based course coordinators and feedback from sessional staff. On the basis of
positive feedback the university has agreed to continue funding this initiative. The faculty
based initiative for 2011 was Assessment and Feedback, where a consultant has worked
with CATL and Faculty leaders to review assessment practices on a course by course basis.
Funding has been provided by the Teaching and Learning Committee to support the
initiative.
5. Summary
CATL is satisfied that the design, delivery and diversity of TPPs meet a significant number of
the TPP Effectiveness Indicators. The broad suite of programs demonstrates alignment with
University priorities related to teaching and learning, is responsive to staff needs and
acknowledges the importance of the student learning experience. The programs provide a
strong foundation in pedagogy, model effective teaching practices, encourage reflection
and are underpinned by the Teaching Criteria Framework. Given the size of the academic
development team charged with delivering the programs, it is considered that these
achievements are satisfactory. CATL acknowledges the support of the Teaching and
Learning Committee in providing funding for a number of the initiatives such as Teaching
and Learning Month, visiting teachers’ workshops, Sessional Staff PD and the Postgraduate
Teaching Internship Scheme.
[IP 15] See Workshop Descriptions
[II 1] See UWA OPP ED3, ED5
[IP 1] See UWA OPP, p.12
[II 2] Comprehensive list of workshops above; See On-Line Resources
[IP 1] [IP 2] See Strategic and Operational Plan Review of Targets and Progress, Sem 1, 2011, p.2 (A 9); Links between participant feedback and course content (A 10)
[IP 4] List of faculty based collaborations and visiting teacher workshops (A 11)
[IP 1] See position paper Sessional Staff; See Program
[II 5 c)] See T and L Committee Minutes
[IP 1] [IP 4] [IP 11] See Assessment Project website
[FP 2] See OPP
[FP 15] See CATL Teaching Criteria Framework
[FI 11] See T and L Committee budget
6. Further progress
While the evidence related to the APD Effectiveness Framework confirms that the
programs offered are varied in type, number and focus to meet UWA Teaching and
Learning priorities and staff needs, the introduction of a new LMS and New Courses in
2012 is expected to place additional demands on staff. While a number of appointments
have been made to address this in the short term, it is critical that staffing levels be
reviewed in order to provide ongoing support during the transition phase for these new
initiatives and to meet the priority of working to a greater extent with staff in their
disciplinary and faculty context in the future. Within the context of the APD Effectiveness
Framework further attention also needs to be directed to the following:






[FI 15] [II 1] [FP 2] See PDR Review
[FI 1c)] See Teaching Criteria Framework
[FI 2]
[FI 1c.]
 Although the completion of the Foundations program is noted as being a requirement
for new staff to UWA, CATL believes that more specific attention could be given to
tracking the completion of the formal TPPs through HR and in consultation with the
Heads of School.
 In line with Recommendations 10, 13 and 14 of the 2008 Review of the PDR Process
and Q7 of the new PDR guidelines, CATL will also pursue ways of encouraging staff to
evidence their learning from participation in TPPs for the purpose of their PDR.
Similarly, CATL believes that those conducting PDRs should be aware of the extensive
range of formal and informal programs available when advising staff to seek further
opportunities to enhance their teaching and learning practices.
 It appears that participation in professional development is not a consideration in
workload formulas (although not all faculty formulas have been accessed). Given that
the criteria for promotion refer to the Teaching Criteria Framework, which includes a
commitment to ongoing professional development in the Areas of Activity, Core
Knowledge and Professional Values, it seems that this does require further attention at
the faculty/school level. CATL will consider ways of further extending its programs in
relation to the Teaching Criteria Framework.
Appendix H
APD Evaluation: Effectiveness of PTIS on Approaches to Teaching
Program Review: Postgraduate Teaching Internship Scheme
1. INTRODUCTION
This review is focused on the Postgraduate Teaching Internship Scheme and seeks to
determine the extent to which it achieves the outcomes related to enhancing teacher
knowledge, skills and practice. The APD Effectiveness Framework has been used as the
basis for the assessment and the relevant indicators and related evidence are identified in
the panel on the right. Where appropriate, attachments of relevant data have been
included.
Having commenced in 2000, the Postgraduate Teaching Internship Scheme (PTIS) was
recognised for its achievements when it received the program award in the national Carrick
Institute Awards for Teaching Excellence in 2006. The Scheme reflects the University's goals
in supporting high quality teaching and learning and fostering the nexus between teaching
and research, as expressed in its Strategic Plan and associated documents.
The program is designed and delivered by highly qualified and experienced staff, as
evidenced in the Evaluation of the Overall Provision of TPPs by CATL. Over the eleven years
since its inception 228 interns have completed the PTIS. A number of graduates of the
program have succeeded in gaining academic positions in Australia and overseas and rate
the program as a significant part of their success. While this suggests that the PTIS has the
potential to achieve its intended outcomes, the extent to which this is demonstrated has
not been fully investigated. Therefore, this analysis focuses on the teacher knowledge,
skills and practice outcomes and seeks to determine the extent to which participants of
the PTIS develop their understanding of and change their approach to teaching as a result
of the program.

[FP8] Recipient of Carrick Institute Award and Commended by AUQA
[FP1] Evaluation of Provision of TPPs (Attached)
[FP6] See comments from past Interns
2. INTENDED OUTCOMES OF THE PTIS
a. The broad outcomes of the PTIS which relate to teacher knowledge, skills and practice
are that participants should be able to:

 Develop an awareness of contemporary research on student learning and its
implications for curriculum design, university teaching, assessment of learning and
teaching evaluation; [FP3] [FP4]
 Refine, modify, or confirm existing conceptualisations of the nature of teaching and
learning in higher education; [FP12]
 Reflect on students' perspectives of teaching and learning; [FP13]
 Consider a range of standards for, and informed alternatives to, conventional practice
across many aspects of the teaching role; [FP14] [FP15]
 Develop teaching strategies with a focus on the learner; [FP16]
 Explore teaching strategies to engage students and promote deep learning (e.g. large
group teaching, small group teaching, cooperative learning, active learning and
eLearning); [FP17] See 2011 Program Outline, pp 1, 4, 5
 Recognise the influence of the learning environment on learning;
 Explore issues of teaching and learning of particular interest to you.
b. Specific Behaviours
More specifically these broad outcomes are translated into action requiring participants to:

 Articulate an approach to, or philosophy of, teaching and learning.
 Design and deliver a unit of study that aligns student learning outcomes, teaching and
learning strategies and assessment strategies.
 Use a variety of teaching strategies to enable effective student learning.
 Identify diverse student needs and develop appropriate teaching approaches to
support these. (See Program Outline, p. 1, 6-10)
 Incorporate elements of eLearning into teaching practice, both using WebCT and other
digital learning tools and approaches. (See sample of class activities, Attached)
 Develop a teaching and learning folio.
 Apply the outcomes of contemporary research to teaching and learning practice.
3. RELEVANT INDICATORS
This evaluation will seek to evidence the following Outcome Indicators from the APD
Effectiveness Framework:
 Teacher perceptions of changes in their approach to teaching and learning following
completion of TPPs as evidenced in the teaching portfolio, improved student
evaluations, teaching awards, peer review, self-reflection;
 Quality of teaching as evidenced through promotion applications, PDRs etc.;
 Evidence of student focused approach in course/teaching materials.
4. DATA COLLECTION
a. Participants
[FP4] [FP7] [FP8] [FP9]
[FP7] [FP8] [FP9]
Participants were identified from those who completed the PTIS in 2008, 2009 or 2010 and
included a range in terms of age, gender and teaching experience.
b. Data
In determining the data to be collected consideration was given to the indicator/s to be
evidenced (i.e. changes in teacher knowledge, skills and practice), the identification of
relevant and readily available data and the need to balance participant and observer
perceptions with evidence in documents such as course materials etc.
Data related to the indicators was collected from informal discussions, teaching portfolios
compiled during the internship, success in university teaching awards, and an internal
review of the program. The comprehensive teaching portfolios prepared by the Interns
include a variety of data which is evidence of the way in which their approach to teaching
has been influenced by the program. This data includes both formal and informal
evaluations, self and observer reports of their teaching practices, analysis of a critical
incident related to teaching, and teaching notes and teaching materials which included
lecture/tutorial preparation, class materials, assessment tasks and marking rubrics, and
group or team tasks. Data related to nominations for and recipients of teaching awards has
also been useful. Having completed this review it is acknowledged that encouraging
ongoing reflections by the interns and a discussion focused on their teaching approaches
and materials over the subsequent years would provide evidence of the longer term effects
of participation in the PTIS.
5. DATA ANALYSIS
The data analysis, which revealed significant changes in conceptions of teaching followed
by consequential changes in approaches to teaching, is summarised below.
A. Informal discussion
During informal focus group discussions participants identified a number of critical learning
experiences during the program which changed their understanding of teaching and which
in turn influenced their teaching practice. These included the introduction to a theoretical
framework within which to develop and analyse their teaching, the demonstration of
techniques for engaging students and opportunities to observe each other and appreciate
the power of the teacher to inspire and connect students with the subject rather than
focusing on delivering content. The comments below have been clustered within common
themes emerging from the discussions and offered as examples of the growth experienced
by participants.
Changed understandings of teaching: from transmission to facilitator of learning:
 I now realise that university classrooms should facilitate an open dialogue between
teachers and students assisting each other in their learning.
 My attitude to teaching and my conception of the power of the teacher to influence
student motivation and learning was completely changed by the program.
 The program modeled the theories of teaching and learning and so once I experienced
a constructivist environment I was able to apply it to my own teaching.
 I don't think I really knew what curriculum meant before, but having been involved in
the design of the unit I think I now understand about the connection between the
curriculum and assessment… or that assessment is actually part of the curriculum.
 I am so glad I am now aware of the existence and importance of pedagogical research
and how it can help my teaching.
 I learned about constructive alignment of the curriculum to promote deep learning and
now am surprised at how many teachers don't think about this.
[FP 3] [FP 4] [FP 7] [FP 9]
Changes in teaching practice:
 I thought about the research into adult attention spans and then decided to break up
my lectures with video, power point and questions to encourage participation.
 I was inspired to take a risk as a result of the program and rather than present the
competing perspectives about ordinary Germans and the Holocaust I divided the
students into teams, asked them to interpret documents and then give a reasoned
explanation to the class. Opposing teams argued their cases. The students were really
excited and talked about it for weeks afterwards.
Full notes of focus group discussions available.

[FP7] [FP11] [FP14] [FP15]
 Although I was scared I tried some interactive elements in the big lecture and I was
surprised at how much I enjoyed it – I think it was because I could see the students
enjoying working on the question I had given them.
 I realised that structured learning activities are much better for first years and students
struggling with big concepts and I now understand why this is called scaffolding.
 The planning templates shared during the program helped me to think about the
structure of every lecture and how to break it into segments.
 The ideas on creating teaching materials to stimulate higher order thinking made a big
difference to my teaching.
Understanding the need for active rather than passive students:
 I was worried about how few students actually spoke during tutorials so I decided to
give group work a go and assigned questions and tasks to different groups within the
tutorial group.
 I designed problems which require students to use the readings not just remember
them, and then had them try to solve the problems in class. They really seemed to enjoy
it and they talked to each other a lot more instead of just to me.
 I can see that the use of small groups in my tutes is helping the students to know each
other and have more confidence to participate.
 I enjoyed the challenge of devising ways of enhancing student interest and motivation
which I wouldn't have even thought of without the program.
[FP3] [FP4] [FP14] [FP7] [FP8] [FP9]
B. Teaching Portfolios
Excerpts from these portfolios are provided as evidence that the PTIS does change the way
in which the participants think about teaching, prepare and present classes and interact
with their students.
1) Reflective Statements
During the internship participants are expected to engage in reflective practice and in the
final portfolio must submit a reflective statement. Analysis of these statements confirms
that the PTIS provides them with the knowledge, confidence, skills and experience to adopt
new approaches in their teaching, and that these can have positive effects on students.
Examples from statements include:
 The interactive exercise I used in the lecture was the part that elicited the most positive
feedback from students. The reason I tried this exercise was solely based on the fact
that we had been encouraged in Internship meetings to use interactive elements in our
lectures, even if the group of students was large.
 The Internship has encouraged me to engage with different teaching methods which
otherwise I might not have come across.
 A couple of students said that they had never written a poem in their lives until my
lecture and they were proud that they had tried.
 Being a good tutor meant learning to keep my mouth shut or respond to questions with
questions rather than answers and this was something I learnt… particularly while
undergoing the PTIS.
 I have learned that a successful tutorial depends less on how well I know the subject
and more on how well I have prepared a lesson plan and learning activities that are
appropriate and interesting for the students.
See other excerpts from reflective statements (Attached)
 Following the stilted discussion in the first part of the tutorial I decided to adopt
'think, pair, share' with the hope of generating a more lively exchange… it was a real
success as it got the students talking to one another and contributing to the wider
discussion.
 I am hopeful that structuring of my class activities ensures students are actively
critiquing, analysing and challenging themselves instead of passively listening to others
or recalling basic information.
2) Teaching Materials
An analysis of teaching materials used by the participants demonstrates a student-centred
approach to class preparation and tasks: the setting of clear outcomes; the linking of class
activities to the outcomes; time allocated to opportunities for interaction through activities
such as questioning, discussions, simulations, problem-solving and debates; individual tasks
followed by feedback; checking for achievement of outcomes at the conclusion of classes;
and the use of marking rubrics providing clear guidelines for students and constructive
feedback rather than purely summative comments. [FP7, FP8, FP9]

[FP7] [FP8] [FP9] See excerpts from teaching plans (Attached)
This evidence corroborates the claims made in the reflective statements regarding the use
of student centred approaches to teaching.
3) Unit evaluations
The participants of the PTIS are required to conduct and reflect on student evaluations of
their first and second semester teaching during the program. Reflections which evidence
the effectiveness of the PTIS on approaches to teaching [FP7] include:
 As evidenced from my SPOT results from semester one and two I clearly progressed in
teaching and I attribute this to the skills and knowledge gained…the internship enabled
me to develop my teaching style, become clear about my aims and ways to achieve
them.
 Before I undertook the PTIS I thought that preparing for a tutorial simply required me
to know the content, but I have come to realise that having knowledge myself is of no
use if I can't get the students interested and motivated to learn.
 It is only once content can be packaged within an engaging learning activity that both
the activity and the content can have purpose.
[FP7] Excerpts from unit evaluations (Attached)
Feedback from the Interns’ students further demonstrates the adoption of student centred
approaches to teaching through comments such as ‘stimulating discussions’, ‘group work is
insightful’, ‘looking forward to the role play’, ‘tutor (intern) kept the class interacting
through the unit’, ‘enjoy the depth of our discussions’, ‘good interactive learning’.
Without exception the participants in this review demonstrated an improvement in their
unit evaluations between the first and second semester. Allowing for the different groups
of students and possibly different subject matter, this is nevertheless compelling evidence
that the PTIS has an impact on teaching quality. Furthermore, the majority of comments
offered by students refer to being engaged and stimulated in a positive environment and
enjoying challenging interactions with their peers.
4) Supervisor review
Interns were required to organise observations of their teaching by a supervisor and a
learning partner. Comments from the supervisors confirm that teaching practices improved
throughout the duration of the program:
 She significantly enhanced and extended her teaching ability.
[FP7] Peer Review
 Videos used at the beginning of the lecture and again later on…were a great way of
getting students' attention.
 The quotes you used throughout the lecture to illustrate points were great. You related
them back to the material really well.
 You took the explanation to a deeper level.
 …the lecture was particularly good in how it modeled the art of thinking and teasing
out ideas.
 …demonstrated an excellent pedagogical praxis.
 Their [lecture] clear structure helped students follow complex arguments and several
changes of delivery method kept the audience engaged.
Analysis of the various sources of data in the teaching portfolios and comparison of the
evidence consistently confirms that the PTIS is effective in developing teacher knowledge,
skills and practice, as demonstrated through a student-centred approach to teaching both
small and large classes following participation in the program.
C. Teaching Awards
In 2011 eight participants of the PTIS were nominated for teaching awards in their faculty.
Of the eight, two had completed the program in 2010, while the remainder had completed
in 2008 or 2009. These highly competitive awards are student nominated and require
nominees to address and demonstrate the following criteria:
 Approaches to teaching that influence, motivate and inspire students to learn,
 Development of curricula and resources that reflect a command of the field,
 Approaches to assessment and feedback that foster independent learning,
 Respect and support for the development of students as individuals,
 Scholarly activities that have influenced and enhanced learning and teaching.
[FP7] Excerpts from nominee submissions (Attached)
While the nominations themselves represent a significant achievement, especially when
considered within the context of the large number of experienced colleagues, the winning
of Excellence in Teaching Awards by 7 participants is further evidence of the way in which
the PTIS has influenced approaches to teaching.
D. Program Evaluation
 A 2004 review of the PTIS graduates found that they had an influence on student
engagement, enthusiasm and approaches to learning and that they demonstrated
pedagogically informed approaches to teaching.

 A further review conducted in 2010 surveyed over 192 graduates of the PTIS and
concluded that the PTIS met its intended outcomes, was endorsed by all participants as
worthwhile and had enabled all to develop teaching skills and understandings which
they had also found to be valuable in other aspects of their professional and personal
lives.
6. CONCLUSION
The analysis of the data relevant to the APD Effectiveness Framework confirms that the
PTIS does result in changes in approaches to teaching. The data available has related to
the short term and the recommendation of this evaluation is that the participants be
followed up over the longer term to determine whether the gains from the program are

sustained and to what extent their career progression might be enhanced by participation
in the PTIS. While this evaluation has focused on the Process and Outcome Indicators, the
Input and Output Indicators have been addressed in the Evaluation of Overall Provision of
TPPs for academics by CATL.
[FI9] [FP3] See UWA News, p. 12
[FP 1] [FP 2]

[FP 5] See Evaluation Overall Provision
Appendix I
Academic Professional Development Evaluation
Informal Programs: ‘Developing Your Teaching Portfolio’ and ‘What Counts As Good Evidence?’
Program Level
This evaluation of the informal workshops 'Developing your Teaching Portfolio' and 'What Counts as Good Evidence?' uses the APD Effectiveness Framework as a basis for determining
the extent to which the design and delivery of these programs supports staff in meeting the relevant institutional guidelines. These programs are part of a suite of informal programs
designed to assist academics to think about their teaching and how to evidence their practice, particularly those preparing teaching portfolios or submissions for teaching awards.
Together with workshops on 'Peer Review' and 'Evaluating Your Teaching' they also encourage reflection on the quality of teaching and learning as the basis for improvement.
Focus
[To what extent and in what ways do TPPs for academics have an effect on:]
Teacher orientation/awareness of institutional policies, practices and support
Indicators (Input, Process, Output and Outcome)
19. Number and range of TPPs related to orientation/dissemination of university policies, processes and priorities related to teaching and learning
20. TPPs clarify institutional expectations/requirements regarding teaching and learning (e.g. assessment, academic conduct, grading, international students, supervision, academic development)
21. Number and proportion of staff attending TPPs
22. Range and scope of TPPs
23. Workshop evaluations
Response to indicators
 'Developing your Teaching Portfolio' and 'What Counts as Good Evidence' are both offered twice a year, once in each semester, to enable staff to schedule attendance at an appropriate time.
 The 'Developing your Teaching Portfolio' and 'What Counts as Good Evidence' workshops are directly aligned with university policies related to promotion, PDR and teaching awards.
 It is a requirement for all staff to have a teaching portfolio (for promotion, PDRs, Study Leave Applications), which must include an organised collection of evidence.
 These 2 workshops enable staff to leave with practical strategies related to:
a. Using the Teaching Criteria Framework and promotional level descriptors as a framework for thinking about the contents and structure of their teaching portfolios.
b. Identifying possible sources of evidence.
c. Critiquing exemplars of evidence.
d. Considering how best to present evidence.
 Staff are also provided with a template to assist in the preparation of their portfolio and are made aware of other resources which they can access.
 The two complementary workshops are sufficient to support staff in understanding how to prepare a portfolio and evidence their teaching.
 The number and proportion of staff attending these workshops tends to be low, which might be an indication that only those new to teaching or needing to review their portfolio for promotion are attending.
 Further investigation of the motivation to attend might be warranted in terms of marketing the workshops to attract higher numbers. Gathering and reflecting on evidence requires time and is best done regularly to capture the range of evidence and demonstrate sustained performance.
 Participant evaluations of these workshops are very positive with mean scores between 4.13 and 4.88 (max. 5).
 Comments from participants highlight the high standard of presentation, the practical and informative nature of the workshops, the clarification of university requirements, the helpfulness of the exemplars shared, the value of the opportunity to think and write about their teaching philosophy, the supportive environment and leadership, and the guidance in collecting appropriate evidence.
 The small numbers do however enable a more personalised approach to be adopted, which is appreciated by the participants.
Evidence
Workshop Calendar
Policy on Teaching Portfolios
Notes on Developing Your Teaching Portfolio
Template for portfolio development
Notes on Evidencing Your Teaching
Workshop Evaluations attached
Attendance:
'Developing your Teaching Portfolio' – 2010: 7 (once w/shop); 2011: 18
'What Counts as Good Evidence' – 2010: 17; 2011: 15
Conclusion/Recommendations:
CATL is satisfied that the design and delivery of these two workshops is of a high quality as it meets the Input, Process and Outcome Indicators of the APD Effectiveness Framework. Further
consideration needs to be given to the marketing of these workshops so that staff can be supported to begin the preparation of their portfolios earlier in their careers. As further evidence of the
effectiveness of the workshops, a follow-up of participants once they have commenced their portfolios might be warranted; informal discussion with past participants to determine the worth of
this will be pursued.
Appendix J
Blank Narrative Templates
Focus: (Insert program or Institutional Focus)
Introduction
Data/Evidence
Relevant Indicators and Evidence (see attached or links)
Academic Professional Development Effectiveness Framework
Focus
[To what extent and in what ways do TPPs have an effect on:]
(Insert program or Institutional focus)
Input Indicators
Process Indicators
Output Indicators
Outcome Indicators
Response to indicators
Evidence
Conclusion/Recommendations: