Going with the grain? Issues in the evaluation of educational innovations.
Charles Anderson, Kate Day, Jeff Haywood, Ray Land and Hamish Macleod
Department of Higher and Further Education, University of Edinburgh.
Paper presented at the 8th European Conference for Research on Learning and Instruction,
Gothenburg, 25-29 August 1999.
Abstract
Whilst considerable attention has been paid in both the evaluation and innovation
literatures to the value of setting in context the object of enquiry and viewing it from
differing perspectives, relatively little account has been taken of the related and
important issue of what constitutes an appropriate grain of description. This article
takes up the challenge of raising and exploring a number of distinct issues concerning
the 'granularity' of an evaluation, and in the process sets out a framework to guide
reflection on these issues. General concerns and points of principle are illustrated with
reference to a recent large-scale evaluation, conducted by the authors, of a programme
promoting the use of ICT for learning and teaching within higher education.
Background
A central theme of the literature on educational evaluation in the last few decades has
been the need to ensure that innovations are not assessed solely on the basis of a
disembedded set of outcome measures, but are studied within the wider contexts that frame
them, with an eye to identifying the factors that constrain or enable their immediate
uptake and longer-term existence within a school or a whole educational system. This
stance concerning evaluation has been argued for from a number of different
theoretical perspectives, but it can also be justified on the more pragmatic grounds of
the overwhelming evidence from existing studies of the ways in which
innovations are powerfully shaped, or indeed 'rejected', by individual contexts. For
instance, a recent study reported by Cole (1995, 1996) of the 'life history' of the "Fifth
Dimension", a programme of computer-mediated after-school activities for children
implemented in different sites, revealed very distinct differences across the sites
studied in the sustainability of this particular programme – differences that were
clearly linked to the pre-existing purposes, ethos and patterns of interaction in the
individual sites.
Agreement on the value of examining the wider context within which innovations are
embedded can, however, mask important conceptual difficulties surrounding the
definition of context and its identification.¹ Attention to context also confronts an
evaluation team with difficult operational decisions in designing and implementing a
particular study. How in practice does one separate out the 'figure' of an innovation
from the 'ground' of its context and then view the relationship between them? To what
extent does one focus on illuminating the immediate context of application of an
innovation, or also attempt to consider the effects of more distal factors?

¹ A very clear overview of the conceptual problems surrounding the definition of context, and an argument for a
more relational interpretation of context based around the metaphor of weaving, is provided in Cole (1996). A
fruitful reconceptualisation of this construct that breaks away from the metaphorical view of context as a
'container' for actions has emerged from the work of scholars such as Lave (1988) and Engeström (1993), who draw
on activity theory (Leont'ev, 1978). Viewed from within activity theory, contexts can be conceived as integrated,
complex systems; and Lave and Engeström stress that current systems of activity cannot be fully understood
ahistorically, in terms of the immediate actions of the participants, but need to be set within a history of interactions
and of culturally derived resources and goals.
Grain of description
This article presents the ways in which, as a research group carrying out evaluations of
the use of ICT in higher education, we have attempted to take account of the immediate
and wider institutional contexts in which the innovations we have been studying were
framed, and our reflections on how the attempt to view innovations from different
perspectives affected the detail and type of description that it was possible and/or
desirable to achieve. It will be argued, largely by means of illustrative material from a
recent evaluation project, that it may help evaluators to clarify their purposes and
communicative intentions if they direct attention explicitly to the question of the grain
of description that is appropriate or feasible when situating innovations in context or
presenting the perspectives of different actors on the processes and outcomes of an
innovation. Before developing this central theme, it is necessary first to consider the
crucial matter of audience requirements.
Audience requirements
The question of providing a well-contextualised assessment of an innovation is
complicated by the fact that evaluators are hardly ever, and some would argue should
never be, able to aspire to the role of mythical neutral observers independently
determining how they will depict the terrain that they are surveying. They must take
into account the requirements of the evaluation’s audiences. Professional researchers,
policy makers and practitioners have, in Bell and Raffe's useful phrase, different
"normative world views of research" ( Bell and Raffe, 1991, p.133). In particular,
judgements are likely to vary between these three constituencies as to what makes for a
good evaluation report. Researchers will judge a report according to the ‘canons’ of
sound professional practice, including the degree to which observations and claims are
well supported by a full, clear presentation of evidence. Policy-makers are likely to
want a succinct, synoptic report which makes appropriate recommendations and is
guided by a realistic appreciation of the constraints within which policies have to be
pursued.
pursued. On the other hand, practitioners will see merit in reporting which focuses
closely on the day-to-day challenges involved in implementing an innovation and
provides insights and suggestions that are of clear practical relevance.
Viewing the needs of practitioners from a different angle, educational psychologists are
recognising once again that, if the goal of improving practice is to be achieved, serious
attention needs to be given to the nature and level of description provided
within studies of teaching and learning. For instance, Entwistle, McCune and Walker
(2000) recognise that, with an audience of practitioners in mind, judgements concerning
the level of explanation and the choice of conceptual frameworks need to be guided
both by "realism (to what extent is the explanation recognised by participants) and
what has been called pedagogical fertility", i.e. the capacity to generate innovations
within, or new perspectives on, learning and teaching (Entwistle and Hounsell, 1987).
Very often evaluators will wish, or be obliged, to respond to at least some of these
expectations of policy-makers and practitioners in writing up their findings, sometimes
by producing separate versions of a report targeted at different audiences. Being alert
to the needs of different audiences clearly has implications not only for report writing
but for all stages of an evaluation. Figures 1 and 2 present some of the key points
concerning the nature and level of description that would seem to flow from an attempt to
respond to the separate expectations of practitioners and policy makers.
The level and nature of description and explanation identified as desirable in Figure 1
could be provided by following one of the illuminative approaches to evaluation that
have held such sway since Parlett and Hamilton's (1972) groundbreaking efforts in the
early seventies. Similarly the desiderata in terms of type and detail of account
highlighted in Figure 2 can be seen to be compatible, to some degree at least, with a
systems approach to evaluation. However, there is a clear contrast between the
implications for the 'grain' of evaluation identified in Figure 1 and those identified in
Figure 2 – a contrast which poses a challenge to evaluators who wish to bear in mind
the needs of the different audiences of practitioners, policy makers and peers within the
research community and still conduct their study in a principled manner. At the stage
of report writing there is also the need to avoid the problems that may arise from
employing a “blurred genre” (Geertz, 1980). These challenges are made particularly
acute by the fact that evaluators will customarily be working to a contract which
imposes (possibly considerable) constraints on their freedom of manoeuvre.
Figure 1: Practitioners' Expectations and Implications for the 'Grain' of an Evaluation

Figure 2: Policy Makers' Expectations and Implications for the 'Grain' of an Evaluation
The preceding paragraphs have established the desiderata that policy-makers and
practitioners are likely to hold concerning the grain of description provided by
evaluators, and the general challenges faced in providing information for more than
one audience. A later section of the paper will detail the attempts made to meet these
challenges in a recent evaluation carried out by our research team. The following
section will briefly describe the ICT project whose products were being researched and
the key tasks of the evaluation. Having set the scene, attention will then turn to
highlighting some of the central questions which arose concerning detail and type of
description within this evaluation. Examining these questions will allow us to illustrate
general issues that need to be addressed in making principled decisions about what
constitutes an appropriate grain of investigation, analysis and reporting in evaluative
research.
Detail, Level and Type of Description
The Teaching and Learning Technology Programme
In February 1992, the Universities Funding Council of the United Kingdom launched
the first phase of a national Teaching and Learning Technology Programme (TLTP)
with the intention of making "teaching and learning [in higher education] more
productive and efficient by harnessing modern technology" and of helping "institutions
to respond effectively to the current substantial growth in student numbers and to
promote and maintain the quality of their provision" (UFC Circular 8/92). In this first
phase of the programme, £7.5 million funding was provided each year for three years
and academics were invited to bid for monies for projects to develop new methods of
learning and teaching using ICT. Of the 43 projects which were funded during this first
phase, 11 were concerned with implementing the use of learning technology within
mainstream teaching and learning in single universities. The remaining 32 projects
involved academics from different universities working as consortia and focused on
courseware development. These consortia ranged in size from 2 to 44 member
institutions and the projects covered a wide range of disciplines. A second phase of the
programme was later announced, under which a further 33 projects were funded.
These first two phases spanned the period 1992-1996, with over £11 million being
invested in the programme by the UK Funding Councils, plus investments made by the
universities themselves.
Range of materials created
The 76 TLTP projects supported under Phases 1 and 2 (some of which received
additional continuation funding after the end of the second phase) created a wide
range of different products. The majority of the projects produced computer-based
materials, some with supporting 'teacher user guides'. There were, however, a few
projects in this group which used video as a major or minor part of their output.
Products of the subject-based projects were most likely to be used in academic
departments or supplied by central services to students via networks. The institutional
projects, and some of the subject-based ones, aimed mainly to influence the way in which
teaching and learning took place rather than to produce courseware, computer-based
or otherwise. The majority of projects, however, set out to create tangible materials which
could be used within courses; these projects shared the broad aim of affecting teaching
practice, but it was not centre stage in their activities.
The First TLTP evaluation
An independent evaluation of these first two phases of TLTP was commissioned from a
research team drawn from Coopers & Lybrand, The Institute of Education and the
Tavistock Institute; and the report of this evaluation was published in June 1996
(Coopers & Lybrand et al., 1996). This report made recommendations for greater
central co-ordination of the programme, progress monitoring of projects and both
formative and summative project evaluation. These recommendations became
conditions of the TLTP Phase 3 funding announced in May 1998. Although the
evaluation that reported in 1996 was able to give a clear view of the management and
general direction taken by the TLTP projects, many of the learning materials they had
produced had just been released, or were still in the pipeline. It was, therefore,
deemed at that point to be premature to take on the task of assessing the extent to
which TLTP products, broadly defined, had been disseminated throughout the UK
higher education sector and the degree of integration that these products had achieved
within day-to-day teaching and learning. This task fell to our own research group.
Evaluating the use of TLTP materials in UK higher education
In January 1998 we were commissioned by the Higher Education Funding Council for
England to conduct a study of the use within UK higher education of the learning
materials produced by TLTP Phases 1 and 2. The study itself was carried out to a tight
schedule between February and August 1998 (Haywood et al., 1999).
Key tasks in this evaluation were:
 to find out which TLTP products were being used, where and how
 to examine the pattern of usage of TLTP courseware in relation to other uses of ICT within learning and teaching in UK higher education
 to explore how 'contextual' factors might have influenced the uptake of TLTP products
 to assess the overall impact of TLTP
 to conduct a bibliographic search to track existing studies of TLTP use.
These tasks were pursued with surveys of:
 all TLTP projects (76),
 all teaching departments in all Higher Education institutions in the UK (3854),
 courses/modules using TLTP in those departments and
 key informants in 102 medium and large Higher Education institutions.
(The surveys included a considerable number of open questions that required a
qualitative analysis.)
Data were also gathered from the TLTP Co-ordinator and the TLTP central collection of
materials. Case studies of the implementation and use of TLTP materials were
constructed, largely through interviews.
A considerable proportion of higher education courses are now delivered through
franchising arrangements with further education colleges. For the sake of
completeness, it was therefore appropriate to gain information on the extent of TLTP
usage from the further education colleges thought to deliver higher education courses.
In carrying out these tasks the research team was faced with a number of difficult
general methodological challenges, which will be the subject of a forthcoming paper.
In this present article attention will remain focused on methodological problems and
decisions which had very direct implications for the grain of description that could be
achieved within this evaluation, starting with the matter of the timescale over which
the TLTP learning materials had been produced.
Timescale of the Teaching and Learning Technology Programme
The TLTP materials which were the object of our study were created by seventy-six
projects over the six year period 1992 to 1998. Thus the timescale over which materials
were produced and brought into use was extensive, with some projects delivering their
early, or at least prototype, products within a few months of start up and others only
just reaching delivery by mid 1998. Accordingly an evaluation taking place in 1998
required some members of staff in universities to think back several years to when
TLTP began, whereas others had only recently begun to see relevant products
appearing. We thus faced the difficulties common to all retrospective surveys: how
to achieve reliable recall as memories fade, and how to deal with the tendency to view the
past through the screen of hindsight and rationalisation. An innovation implemented
several years ago that has taken root may be enfolded within ongoing practices, and
therefore be harder to separate out as a distinct object in itself. An added complication
in our evaluation was the considerable variability within the population in the time
period (extent and recency) on which they could comment, and correspondingly in the
detail of description that it was reasonable to seek. Technological changes had also
occurred over Phases 1 and 2 of TLTP that needed to be taken into account.
These difficulties associated with the gathering of retrospective accounts, and in
particular the question of the degree of detail that informants could be relied on to
provide, were at the forefront of our thinking in the design phase of the evaluation.
Judgements concerning the types and specificity of description that informants could
give of past actions played a central part in guiding both our general strategy and the
fine shading of individual survey questions. Keeping these difficulties constantly in
mind led us to scale down our ambitions for certain aspects of the study; and to be
appropriately tentative about the claims that could be made for the whole period of
1992 to 1998 from a ‘snapshot’ taken at the end of the period. Meeting the challenge
posed by timescale and variability in informants’ length of experience of using
products had to be achieved largely at the level of the design of individual survey
items, taking into account the degree of detail that could be achieved on any individual
inquiry and assisting informants by directing them to pin their comments to a precise
period in time.
The problems that we faced in collecting retrospective data in this particular study may
have been particularly acute; but they are clearly not unique given that many
evaluations of publicly funded programmes of educational innovation are carried out
post facto, and are therefore collecting what are in effect historical data. We would
suggest from our own experience that directing attention closely to the specificity of the
account that informants can be expected to provide of 'historical' events provides a
useful focus, during the process of design, for debating and attempting to resolve the
difficulties inherent in researching the past.
Different perspectives, different levels of description
Given these concerns about the attrition of data and changes of view over time, it was
imperative to maximise access to information on the use of TLTP products by following
different routes into different sources. Independently of this aim to maximise access to
available data, it appeared important in terms of ensuring the validity and reliability of
our research findings to view the usage of TLTP products from a number of different
perspectives. This enabled cross-checks to be carried out on the pattern of findings and
a fuller, and thus possibly more nuanced, picture of product use to be constructed.
Gaining a view from differently located ‘fixed points’ also allowed information to be
gathered at contrasting levels and detail of description. Figure 3 shows the main
perspectives that we set out to capture.
Figure 3: Perspectives on the use of TLTP materials

[The figure places the use of TLTP materials at the centre, viewed from six vantage points: a university-wide perspective, the FE colleges' perspective, a departmental/school perspective, a course/module perspective, the projects' perspective and published studies.]
To illustrate the consequences that the choice of perspective had for the level of detail
and description that could be achieved: the course organisers whom we surveyed, and the
lecturers whom we interviewed in the course of constructing case studies, could provide
a fine-grained, narrowly-focused account of the use of TLTP materials. Published
articles and reports on the incorporation of TLTP materials into courses were, by and
large, written from the viewpoint of developers, enthusiastic adopters and those making
innovative use of the products; the existing literature thus also gave a detailed, close-up
picture of product use. By contrast, informants with an
institution-wide perspective could contribute a more broad-brush account, situating
the use of TLTP products within the wider context of a university’s overall use of ICT
for learning and teaching and in relation to other ICT initiatives and developments.
There was thus an inverse relationship between the 'width' of view and the 'depth' of
description that could be achieved from any of the perspectives that featured in our
study, as Figure 4 reveals. To fulfil our own ambitions for the evaluation and to meet
the needs of different audiences, it was important that we gained both:
 a wider-angled, more distant view of TLTP products, that admittedly could only
give us a 'coarser' grain of definition, and
 a 'close-up shot' with a fine grain of definition that would not, however, illuminate
objects that fell outside of its narrow field of vision.
The need to take both wide-angled, long-distance and close-up shots applied not only
to the investigation of the use of TLTP products themselves but also to the task, which
is described in the following section, of mapping the territory in which these TLTP
products were situated.
Figure 4: Perspective metaphors

[The figure contrasts a wide-angled, distant view, which takes in the wider contexts and the relationship to other ICT initiatives and developments with a 'coarser' grain of definition, against a close-up shot of the site and processes of use of TLTP materials, which offers a fine grain of definition and a narrower field of vision.]
Exploring the context of use
Carrying out the central evaluative task of viewing and understanding the uptake and
use of TLTP materials in relation to their immediate and wider contexts required us to
build up a sector-wide picture of:
 the use of ICT for learning and teaching in higher education
 the contextual resources and constraints influencing both ICT use and the progress of educational innovation.
Surprisingly little published research was available that documented the extent of
either general, or very specific, uses of ICT for learning and teaching in UK higher
education. Given the pace of technological change, it would also have been dangerous
to assume that what data were available accurately represented the current pattern of
activity in the sector. Accordingly we needed to build up a wider picture of how ICT
was being used for learning and teaching, within which the use of TLTP materials
could be viewed.² Measures were taken of departments' reported levels of general ICT
usage and of the extent and nature of its use in learning and teaching. As a result, a
clear summary picture of current ICT use for teaching and learning in higher education
in the UK has emerged from our survey work.

² The importance of doing so had been highlighted by our earlier findings concerning knowledge about, and
attitudes towards, the deployment of ICT for learning and teaching within Scottish HEIs, which formed part of an
evaluation of the impact of the Learning Technology Dissemination Initiative on Scottish higher education
institutions. Day, K., Haywood, J. and Macleod, H. 1997. Evaluation of the Learning Technology Dissemination
Initiative. Final report for the Scottish Higher Education Funding Council.
[http://www.flp.ed.ac.uk/LTRG/LTDI.html]
A key consideration in pursuing this task of building up the wider picture of the role of
ICT in learning and teaching was that of ensuring that the grain of description we
gained of this general role of ICT in higher education was proportionate to the level of
detail gained from the questions concerning the TLTP materials themselves. Although
this may seem a very straightforward point of design, it was not at all trivial to
implement given the diversity across the higher education sector in the size, structure
and function of the departments/units that we were surveying.
Turning to the matter of contextual resources and constraints, the difficulties inherent
in both conceptualising contexts and in defining them for the practical purposes of a
research study have been referred to earlier in this article. For purely heuristic
purposes we found it useful, both at the design phase and later when communicating with
our audiences, to conceive of contextual resources in terms of a number of layers, as Figure
5 illustrates. (We are aware, however, of the limitations of this representation of
context and, as with other handy but imprecise and potentially 'dangerous'
conceptual tools, have recognised the need to deploy it with caution.) One potentially
important determinant of the uptake of TLTP materials could have been the
discipline/subject area taught by a particular department: the degree to which TLTP
courseware was seen as consonant with the practices of a discipline, and the current
level of commitment in that discipline to using ICT in learning and teaching. Course
characteristics, including those listed in Figure 5 (level, size, mode of delivery and
student profile), also needed to be taken into consideration as factors which might have
influenced the use of TLTP materials. The use of ICT within courses will itself in turn
be affected by a department's general use of ICT, its access to ICT resources and its
openness to innovation. Finally, the characteristics of individual higher education
institutions, including their particular strategy for supporting ICT in teaching and
learning and the extent to which a defined strategy has been implemented, may have
impacted on the use of TLTP courseware and, therefore, required investigation.
Figure 5: TLTP materials in context

[The figure shows the use of TLTP materials nested within successive layers of context: disciplinary scope and interest; course characteristics (level, size, mode of delivery, student profile); departmental characteristics (access to ICT resources, general use of ICT, openness to innovation); and institutional characteristics (size, age, ICT resources, strategy).]
Exploring the contexts within which TLTP materials were being utilised, or were failing
to take root, posed very considerable challenges. Many of these challenges revolved
round the practical, pressing problem of finding an exact wording for individual
survey items that would allow respondents across a diverse sector to answer at a level
of description that they would find feasible and enable us to obtain data that would
permit reliable, valid comparisons to be made. The difficulties associated with
deciding the level of detail at which to pitch particular questions in a large-scale survey
to a heterogeneous population can be illustrated by the especially taxing problem of
dealing with institutional and departmental differences in course structure. It was not
possible to find any completely satisfactory resolution to this particular problem.
In the UK, different institutions structure the courses that their students will take
during an academic year in quite different ways. In one university, for example, a first
year student might take only three courses, whereas in another university a first year
student might take a quite large number of individual modules. There was a danger,
therefore, that in asking questions of an individual department about the number of its
courses/modules that employed TLTP materials, one took a measure not solely of
TLTP usage but also of the way in which that department happened to organise its
teaching. The wording of individual items in the departmental-level and course-level
questionnaires was designed to manage this problem as far as was possible; the
course-level questionnaire asked respondents who identified a course as using TLTP
materials to state: "What percentage of a full-time student's annual workload does the
course/module comprise?" It was, however, a problem which could not be
side-stepped. Accordingly, we have been appropriately cautious in our analysis and
conservative in our interpretation of questions which asked about the number of
students enrolled on courses/modules using TLTP materials and about the number of
a department’s courses/modules which had taken up TLTP materials.
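To make the workload adjustment concrete, the sketch below illustrates the arithmetic involved. It is a minimal illustration rather than the analysis we actually conducted: the departments, field names and figures are hypothetical, and the weighted measure simply sums, for each TLTP-using course/module, its reported percentage of a full-time student's annual workload.

```python
# Minimal sketch of a workload-weighted measure of TLTP uptake.
# Hypothetical data: each department lists its courses/modules, whether
# each uses TLTP materials, and the percentage of a full-time student's
# annual workload that the course/module comprises.

departments = {
    "University A (3 large courses)": [
        {"uses_tltp": True, "workload_pct": 33.3},
        {"uses_tltp": False, "workload_pct": 33.3},
        {"uses_tltp": False, "workload_pct": 33.3},
    ],
    "University B (12 small modules)": (
        [{"uses_tltp": True, "workload_pct": 8.3}] * 4
        + [{"uses_tltp": False, "workload_pct": 8.3}] * 8
    ),
}

for name, courses in departments.items():
    # A raw count of TLTP-using courses/modules partly reflects how the
    # department happens to divide up its teaching.
    raw_count = sum(1 for c in courses if c["uses_tltp"])
    # Weighting by workload share instead estimates how much of a
    # student's year is touched by TLTP materials.
    weighted_pct = sum(c["workload_pct"] for c in courses if c["uses_tltp"])
    print(f"{name}: {raw_count} TLTP courses/modules, "
          f"~{weighted_pct:.0f}% of annual workload")

# The raw counts differ fourfold (1 vs. 4), yet both departments expose
# students to TLTP materials for roughly a third of the year.
```

A measure of this kind is what the workload question quoted above was designed to make possible, although, as noted, no such weighting can fully remove the effect of differing course structures.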
Decisions concerning the ‘grain’ of an evaluation
Returning from this illustration of the difficulties involved in constructing survey items
at an appropriate level of detail to the general strategies of the evaluation: as
with the use of TLTP materials themselves, we tried to build up a picture of their
contexts of use from the different perspectives detailed in Figure 3. To pursue
one of the evaluation’s key aims of gaining a clear, full picture of how the usage of
TLTP products was influenced by contextual resources and constraints, it was
necessary to take both sharply-focused near views of the immediate context of TLTP
product use and a more distant, wider representation of potentially important
contextual factors. Capturing narrowly-focused, fine-grained and wider, ‘broad-brush’
perspectives, and the gathering of both quantitative and qualitative data, also afforded
a certain degree of flexibility at the stage of reporting, allowing us to tailor write-ups to
meet the needs of different audiences. At the same time the level and detail of
description that we could achieve in investigating specific topics was constrained by
the timescale of the innovation itself and the diversity within the population that we
were studying.
Figure 6: Determining the 'grain' of description

[The figure shows the 'grain' of description of an innovation and its contexts as jointly determined by: the audience(s); the methodological approach, with the general and specific purposes of the evaluation; the time the innovation has been in existence; the period of informants' recall; the diversity/homogeneity within the population being studied; and the perspective of informants, whether distant and wide or close and narrow.]
Figure 6 represents the interlinked set of chief ‘drivers’ and constraints that determined
the grain of description that it was desirable or possible to achieve within this
particular evaluation. The features identified in Figure 6 as determining the grain of
description in our recent study are likely to figure prominently in any large-scale
evaluation:
 methodological approach and purposes,
 audience design,
 history, i.e., time period of the innovation itself and the period that informants are
being asked to recall,
 diversity or homogeneity of the population that is being studied,
 the perspective-point(s) from which informants are able to view the innovation.
While models of evaluation always necessarily contain assumptions, and sometimes
provide explicit recommendations, concerning the level and detail of investigation and
reporting that should be provided, it is not at all common in the evaluation literature to
find the grain of description foregrounded as a topic in its own right. This present
article has addressed the topic of the grain of description from the point of view of
researchers trying to grapple with the day-to-day tasks of evaluation in a principled
way and has highlighted how some of the difficulties that we encountered impacted on
the grain of description that we were able to achieve. Considerable attention has also
been directed to the way that the needs of audiences powerfully influence decisions
about the level and detail of description that should be pursued within an evaluation.
It is to be hoped that these reflections from the standpoint of ‘working’ evaluators will
prompt others to examine points raised in this article from a more theoretical
perspective.
A sceptical reader, while conceding that the topic of what makes for an appropriate
detail and level of description in an evaluation is clearly of some interest and
importance, might challenge this claim that it deserves to achieve a more central place
in the methodological literature. For example, it might be pointed out that questions of
level and detail of description are intricately intertwined with wider issues concerning
the value positions that underpin an evaluation and the approach that is pursued. If
there is clarity concerning values and approach, surely decisions about the grain of
description will take care of themselves? Our response to such a line of argument
would involve stressing that an evaluation cannot straightforwardly follow a standard
set of guidelines for, and a particular value orientation to, practice. Decisions will have
to be made at points between competing directions and values; and accommodations
sought to meet real-world constraints. These decisions between competing purposes
and strategies for meeting difficulties will very often revolve around the detail of
definition within data-gathering, analysis or reporting. Keeping attention closely fixed
on the question of what will make for an appropriate grain of description can,
therefore, help to make more explicit the tensions between competing purposes and aid
the process of making design decisions in a principled manner.
References
Bell, C. and Raffe, D. 1991. Working together? Research, Policy and Practice. The
experience of the Scottish evaluation of TVEI. In G. Walford (ed.) Doing Educational
Research. Routledge: London and New York.
Cole, M. 1995. Socio-cultural-historical psychology: some general remarks and a
proposal for a new kind of cultural-genetic methodology. In J.V. Wertsch, P. del Río, A.
Alvarez (eds.) Sociocultural Studies of Mind. Cambridge: Cambridge University Press.
Cole, M. 1996. Cultural Psychology: A Once and Future Discipline. Cambridge, Mass.
and London: The Belknap Press of Harvard University Press.
Coopers & Lybrand, The Institute of Education, University of London and the
Tavistock Institute, Evaluation Development and Review Unit. 1996. Evaluation of the
Teaching and Learning Technology Programme Final Report. London.
Engeström, Y. 1993. Developmental studies of work as a testbench of activity theory:
the case of primary care medical practice. In S. Chaiklin and J. Lave (eds.)
Understanding practice. Cambridge: Cambridge University Press.
Entwistle, N., McCune, V. and Walker, P. 2000. Conceptions, styles and approaches
within higher education: analytic abstractions and everyday experience. In R. J.
Sternberg and L-F. Zhang (eds.) Perspectives on Cognitive, Learning and Thinking Styles.
Hillsdale, NJ: Erlbaum.
Entwistle, N.J. and Hounsell, D. J. 1987. Concepts and communication in student
learning and cognitive psychology. Paper given at the British Psychological Society,
Cognitive Psychology Section Conference, St. John’s College, Oxford, April, 1987.
Geertz, C. 1980. Blurred genres. American Scholar, 49, 165-179.
Haywood, J., Anderson, C., Day, K., Haywood, D., Land, R. and Macleod, H. 1999. Use of
TLTP Materials in UK Higher Education. Bristol: The Higher Education Funding Council
for England. Report 39. [http://www.flp.ed.ac.uk/LTRG/TLTP.html]
Lave, J. 1988. Cognition in practice: mind, mathematics and culture in everyday life.
Cambridge: Cambridge University Press.
Leont’ev, A. N. 1978. Activity, consciousness and personality. Englewood Cliffs, NJ:
Prentice Hall.
Parlett, M. and Hamilton, D. 1972. Evaluation as Illumination. Occasional Paper No. 9.
University of Edinburgh: Centre for Research in Educational Sciences.