
An investigation into the contribution of theories of multiple intelligences to the
promotion of a deep approach to learning through the assessment of learning in higher
education
Charles de Jongh
Malyon College, Australia
Sarah Gravett
University of Johannesburg, South Africa
The overwhelming consensus is that assessment is one of the key factors influencing whether students adopt a
deep or surface approach to learning, with the students’ experience and perception of assessment a
determinative factor. While theories of multiple intelligences have been widely applied to teaching, they have
not been meaningfully applied to the assessment of learning or to the promotion of deep learning. This paper
reports on an action research project that investigated the potential contribution of theories of multiple
intelligences to the promotion of deep learning through assessment in higher education. Four aspects of the
research will be addressed, namely: the research problem, questions, purpose and aims; deep learning,
assessment and theories of multiple intelligences; the research design and methodology; and the key research
findings. Significantly, the paper reports on the development and application of principles for Multiple
Intelligence Based Assessment for Deep Learning.
1. Introduction
This paper reports on an action research project that investigated the potential contribution of theories of
multiple intelligences to the promotion of deep learning through the assessment of learning in higher education
(de Jongh, 2010). The term ‘approaches to learning’ describes the qualitative aspect of learning, related to the
intention and quality of learning, rather than to how much is remembered (Biggs, 2003; Ramsden, 2003 and
Rhem, 1995). Students’ approaches to learning were initially researched by Marton, Säljö and colleagues at
Gothenburg University (Entwistle, 1991 and Marton & Säljö, 1984), with a focus on the relationship between
students and their learning in context (Bowden & Marton, 1998 and Ramsden, 2003). Marton (Marton & Säljö,
1984) initially referred to deep and surface processes, while Entwistle (1991) used the term approach.
Subsequently, Marton adopted Entwistle’s preference for approaches (Bowden & Marton, 1998). A deep
approach to learning (hereafter, deep learning) refers to internally motivated learning that results in integration
of learnt material with existing experiences, knowledge and interests (Chalmers & Fuller, 1996; Fry, Ketteridge
& Marshall, 2003; Lambert & Lines, 2000; Merrow, 2003 and Ramsden, 2003). By contrast, a surface approach
(hereafter, surface learning) is characterized by an attitude in which learning is regarded as external to the
student (Lambert & Lines, 2000; Nightingale, Te Wiata, Toohey, Ryan, Hughes & Magin, 1996 and Prosser &
Trigwell, 1999), demanding minimum effort (Biggs, 2003), often motivated by anxiety and fear (Entwistle,
1991). A strategy approach (hereafter, strategy learning) is pragmatic or functional in that a student adopts an
approach appropriate to their intentions, resulting in either deep or surface learning (Atherton, 2005; Chalmers
& Fuller, 1996; Fry et al, 2003 and Lambert & Lines, 2000).
The approach a student takes to learning is variously influenced by internal and external factors or
motivations (Atherton, 2005; Biggs, 2003; Bowden & Marton, 1998; Marton & Säljö, 1984; Prosser & Trigwell,
1999 and Ramsden, 2003). With deep learning, the most significant influencing factors are internal and relate to
a belief that studies are an opportunity to develop one’s thinking, to satisfy a curiosity and an expectation of
enjoyment (Marton & Säljö, 1984; Nightingale et al, 1996 and Prosser & Trigwell, 1999). This does not mean
that lecturers are unable to influence students’ approaches, as there are a number of strategies that may be
utilized to encourage deep learning, including the appropriate use of the assessment of learning (Biggs, 2003;
Ramsden, 2003; Rowntree, 1977 and Tim, 2004). The overwhelming consensus is that assessment is one of the
most significant influencing factors on students’ approach, with the students’ experience and perception of
assessment a determinative factor (Biggs, 2003; Boud, 1995; Bowden & Marton, 1998; Chalmers & Fuller,
1996; Entwistle, 1991; Haines, 2004; Lambert & Lines, 2000; Prosser & Trigwell, 1999; Ramsden, 2003 and
Rhem, 1995). This means that students are inclined to evaluate the required assessment and then make a choice
regarding their approach (Biggs, 2003 and Chalmers & Fuller, 1996); because assessment requirements send
coded messages which influence approaches to learning (Boud, 1995); therefore, “in trying to change
approaches, we are not trying to change students, but to change the students’ experiences, perceptions or
conceptions of something” (Ramsden, 2003:45).
In an endeavour to contribute to an understanding of the promotion of deep learning in higher
education, an action research project was carried out investigating the potential contribution of theories of
multiple intelligences to the promotion of deep learning through the assessment of learning in higher education
(de Jongh, 2010). The significance of the research is that it was the first known application of theories of
multiple intelligences to the promotion of deep learning through assessment in higher education. Four aspects of
the research will be addressed in this article; namely, the research problem, questions, purpose and aims; deep
learning, assessment and theories of multiple intelligences; the research design and methodology; and the key
research findings.
2. Research problem, questions, purpose and aims
2.1 Research problem and questions
The research problem was derived from a consideration of assessment and deep learning, and the potential contribution of theories of multiple intelligences to the promotion of deep learning; theories of multiple intelligences hold that people are intelligent in different ways. As previously indicated, assessment influences the choice of learning approach; consequently, this relationship is of significance to the lecturer endeavouring to promote deep learning. The key question is, therefore, how assessment may be constructed to promote deep learning and what the potential contribution of theories of multiple intelligences might be. Theories of multiple intelligences were popularized by Howard Gardner’s Frames of Mind (1983) and developed in Robert Sternberg’s academic work, Beyond IQ: A Triarchic Theory of Human Intelligence (1985). These works propelled theories of multiple intelligences into education, with Gardner being more influential at a popular level, although not without his critics (Chen, 2004; Denig, 2004; Smith, 2002 and Willingham, 2006).
The fundamental thesis is that the historical understanding of intelligence as a single attribute is too limited;
rather, it is argued that intelligence is multi-faceted (Gardner, 1983, 1993 & 1999 and Sternberg, 1985, 1988 &
1998). Based on the understanding that people are intelligent in different ways, it is argued that theories of
multiple intelligences have the potential to meaningfully contribute to the promotion of deep learning through
assessment. This view is consistent with that of many supporters of Gardner, who argue that students should be afforded the opportunity to explain material in ways shaped by different intelligences (Brualdi, 1996).
Few, if any, works could be found that apply Gardner’s theory to assessment; one of the exceptions
being Lazear’s application in Multiple Intelligence Approaches to Assessment (1999). At the commencement of
the research, the same could be argued in relation to Sternberg; for example, Sternberg and Williams’
Intelligence, Instruction and Assessment – Theory into Practice (1998) only briefly considers assessment of
learning. However, parallel to the commencement of the research, Sternberg produced limited literature in the
area (for example, Sternberg, 2004a & 2007 and Sternberg & Grigorenko, 2003). Lazear applies each of Gardner’s proposed eight intelligences to the assessment of learning in primary and secondary education; the limitation is his proposal that each student be given individual assessment matching their intelligence strengths, which would be very difficult to implement in the majority of higher education contexts.
While significant work has been done on the application of theories of multiple intelligences to teaching, there
remains a gap in their manageable and practical application in higher education, particularly related to
assessment, which is, as argued before, a crucial determining factor related to students’ approaches to learning.
However, it was our contention that theories of multiple intelligences have a significant contribution to make to
both assessment and the promotion of deep learning through assessment. The research problem was, therefore,
that insufficient consideration has been given to the contribution of theories of multiple intelligences to the
promotion of deep learning through the assessment of learning.
The first research question asked how the demands of deep learning may be promoted through assessment; the second asked how theories of multiple intelligences could be utilized to contribute to assessment while promoting deep learning; and the third asked what principles may be derived from the demands of deep learning and theories of multiple intelligences for the assessment of learning. These questions elicited the development of principles for Multiple Intelligence Based Assessment for Deep Learning (abbreviated, MIBADL).
2.2 Research purpose and aims
The research purpose was to examine the potential contribution of theories of multiple intelligences to the
promotion of deep learning through the assessment of learning. Derived from the research purpose were the
research aims, which were to examine the contribution of theories of multiple intelligences to assessment and
the promotion of deep learning through assessment in the context of theories of multiple intelligences. An
extensive review of the relevant literature was utilized to derive principles for deep learning assessment and for
Multiple Intelligences Based Assessment (hereafter, MIBA), followed by the synthesizing of these into
MIBADL principles. The MIBADL principles were then applied in a higher education setting, with the
empirical research contributing to conclusions relating to the findings regarding the contribution of theories of
multiple intelligences to assessment and the promotion of deep learning through assessment in the context of
theories of multiple intelligences.
3. Deep learning, assessment and theories of multiple intelligences
3.1 Deriving deep learning assessment principles
It has been argued that one of the key effects of assessment on a student’s approach to learning is observed in
the backwash effect in which the student’s approach is shaped by their expectations of assessment, with the
challenge being to align assessment to the desired approach to learning (Biggs, 2003; Boud, 1995; Chalmers &
Fuller, 1996 and Madaus, 1998). However, some have proposed that such efforts may simply contribute to
complex surface or strategy learning (Atherton, 2005); while others argue that teaching quality can be equally
determinative (Nightingale, 1996). Assessment that promotes deep learning is characterized by good preparatory
guidance, lecturer support, setting of high expectations, clear requirements, choice of tasks, appropriate
resourcing, adequate time, emphasizing principles and structures, building on existing knowledge, emphasizing
depth, integrative requirements, explicit aims and objectives, and meaningful feedback (Biggs, 2003; Campbell,
1998; Engineering Subject Centre, 2005; Haines, 2004; Logan, 1971 and Walvoord & Anderson, 1998). Based
on the relationship between assessment and learning, seven interdependent principles were derived for
assessment that promotes deep learning (Angelo & Cross, 1993; Entwistle, 1991; Qualifications and Curriculum
Authority, 2006; Race, 1995; Rhem, 1995; Sternberg, 2007; Trigwell & Prosser, 1996; Trigwell, Prosser &
Waterhouse, 1999 and Walvoord & Anderson, 1998):
1. Assessment is integral to course design and centred on envisaged achievement;
2. Assessment focuses on significant principles and structures;
3. Assessment is based on clearly stated objectives and outcomes, directly associated with the aims and purpose of the course;
4. Assessment for deep learning uses a wide variety of methods and types;
5. Assessment requirements and criteria are clearly and explicitly stated;
6. Assessment for deep learning is supported by good preparatory guidance, material and personal support, and appropriate resourcing;
7. Assessment gives early and comprehensive feedback, addressing weaknesses and improving learning.
3.2 Deriving MIBA principles
Based on Gardner and Sternberg’s understandings of multiple intelligences, principles were then derived for
MIBA of learning. In line with Gardner (1999), the research favoured the term intelligences, while
acknowledging that not all scholars agree on its use (Brualdi, 1996 and Sternberg & Grigorenko, 2003). The
MIBA principles are those exclusively deriving from theories of multiple intelligences.
1. Acknowledge that students have different intelligence strengths;
2. Acknowledge that students achieve differently;
3. Assessment options acknowledge differences and are characterized by variety;
4. Variety and choice apply in the specific assessment of all objectives and outcomes;
5. Variety in assessment is based on different intelligences.
3.3 Synthesizing MIBADL principles
The final stage was to synthesize the deep learning assessment principles (excluding principles four to seven as
related to all good practice) and the MIBA principles into the MIBADL principles:
1. Student’s envisaged achievement is integral to course design, acknowledging that students have different intelligence strengths and achieve differently;
2. The focus is on the significant principles and structures of the course material; therefore, allowance is made for different intelligence strengths, different ways of achieving, and for variety and choice;
3. Variety and choice in assessment is based on clear and stated objectives and outcomes, which are directly associated with the aims and purpose of the course;
4. A wide variety of methods and types of assessment is utilized, based on an intentional consideration of different intelligences.
4. Research design and methodology
4.1 Choosing an action research design
The choice of research design type is a vital aspect of any research, as it has a major impact on the quality of the
resultant research findings. While the research had a significant theoretical component, the main focus was on
the challenge of promoting a deep approach to learning in higher education students, which was also the
researchers’ context. As such, a research design that contributed to and enhanced the practice of the researchers
was required. To this end, practitioner action research was the chosen research design type.
In education, action research is growing in importance in the development and improvement of
educational practice and theory (Bradbury & Reason, 2003; Glanz, 1999; Herr & Anderson, 2005; Koshy, 2005;
Masters, 2000; Nolen & Vander Putten, 2007; Riding, Fowell & Levy, 1995 and Tomal, 2003). It has been
variously defined and is regarded as a group of approaches to research (Bradbury and Reason, 2003; Dick, 2000;
Herr & Anderson, 2005; Mertler, 2009 and Quigley 1997), with two main approaches, practitioner- and
participatory action research (Dick, 2000; MacIsaac, 1996; Masters, 2000 and O’Brein, 1998). Practitioner
action research is that in which practitioners research and improve their own practice (Henning, van Rensburg &
Smit, 2004; Huysamen, 1994; McNiff, 2002; Mertler, 2009; Pring, 2000 and Quigley & Kuhne, 1997). Action
research was appropriate for the research as the intention was to investigate the three areas brought together in
MIBADL, so contributing to the improvement of both educational practice and the development of theory.
4.2 Positioning the research
A vital aspect of action research is its cyclical nature and related elements (McNiff & Whitehead, 2002; Mertler, 2009 and Stringer, 2008), with action research often described in terms of a spiral of research, in which the entry point is previous experience and research and the exit point is the entry point for further research (Koshy, 2005; McNiff, 2002 and Stringer, 2008). Understandings of the specific nature of an action cycle vary; however, each commences with the awareness of an aspect of educational practice or theory justifying research, followed by a consideration of input sources, the application of a possible intervention, reflection on the intervention, and consideration for future practice, educational theory and/or research (MacIsaac, 1996; Mertler, 2009; McNiff, 2002 and Quigley
& Kuhne, 1997). As action research does not have a common approach to application, consideration was given
to five proposals (Ferrance, 2000; Glanz, 1999; McNiff, 2002; Quigley & Kuhne, 1997 and Wiersma & Jurs,
2005), with all reflecting a common essential process which was applied as follows:
Step 1, the issue for research – clarifying the issue for research, which was to examine the contribution of theories of multiple intelligences to assessment and the promotion of deep learning through assessment in the context of theories of multiple intelligences;
Step 2, proposing a possible solution – the development and proposal of a possible solution for the utilization of theories of multiple intelligences for the promotion of deep learning through the assessment of learning, referred to as MIBADL;
Step 3, application of a possible solution – MIBADL applied in a higher education context;
Step 4, reflection on the application – the application was reflected on and the empirical findings presented;
Step 5, considerations for the future – a presentation of the research findings and considerations for the future.
4.3 The research sample
Given that practitioner action research focuses on the researcher’s own practice, it is accepted that action
researchers are not required to use a representative sample; therefore, use is typically made of a convenience or
purposive sample (Koshy, 2005; Quigley & Kuhne, 1997; Stringer, 2008; Tomal, 2003 and Wiersma & Jurs,
2005). As the research was located in higher education, the sample was located in the existing context of the
lead researcher, being a Private Higher Education Institution (abbreviated, PHEI). The PHEI had an enrolment
of 225 students, offering programmes of study up to a four-year Bachelor of Theology, with instruction in
English.
The sample was self-selected in that it comprised the students taught in a given semester, while purposive in relation to the course chosen, in an endeavour to increase the likelihood of applying the research findings to other contexts. Twenty fulltime students participated in the research, with most of the students
falling into the age group 20-29. In terms of race (by South African Department of Education definition), there
were eight black and twelve white students, with the majority being South African citizens. Thirteen listed
English as their first language, with six regarding English as their second language. Academically, seventeen
had completed Grade (or Year) 12 and three Grade 10-11 (these having been required to successfully complete a
one-year Certificate of Theology for entrance); furthermore, nine had completed tertiary qualifications, two had
Honours degrees (four years), one a first degree (three years) and six a Diploma or Technical Certificate. The
biographical data points to a diverse sample, although it did not represent the typical cross-section of students in
South African higher education.
A decision was made to implement MIBADL in a second-year level, semester-long Biblical Studies
course for two reasons. Firstly, Biblical Studies is regarded as a knowledge or theory-focused field of studies,
which endeavoured to address the potential critique that MIBADL may be useful for practical subjects, but not
for knowledge or theory-focused study. Secondly, being a second level course meant that most of the students
were familiar with the researcher, which it was anticipated would enhance the research quality. The specific
course was entitled ‘BBS 225, The Pentateuch,’
examining key background issues to the study of the Old Testament. Thereafter, the focus is on the
Pentateuch (Genesis to Deuteronomy), with attention being given to introductory considerations,
theological themes, and textual exegesis from the book of Genesis (De Jongh, 2006:3).
The outcomes were presented as follows:
On completion of this course, the learner will be expected to:
1. Discuss the nature of authorship and oral tradition in ancient times, together [with] their impact on the study of the Old Testament.
2. Demonstrate an ability to examine the key introductory considerations of the Pentateuch, as a whole and with reference to its constituent books.
3. Discuss the theological theme of the Pentateuch as a whole, as well as the themes of its constituent books.
4. Meaningfully exegete a selected passage or passages from the Book of Genesis.
(De Jongh, 2006:3)
4.4 Operationalizing the MIBADL principles
The MIBADL principles were operationalized in an assessment instrument for use in higher education.
Gardner’s eight intelligences were regarded as too cumbersome, while Sternberg’s three abilities were manageable but exclude Gardner’s personal intelligences. Therefore, four intelligence emphases were derived, as in Figure 1 below, each related to an assessment emphasis.
Figure 1: Proposed Intelligence and Assessment Emphases
Academic emphasis – Gardner’s intelligences: Linguistic, Logical-mathematical; Sternberg’s ability: Analytical/academic; Assessment emphasis: written work, ‘Write about’; for example, an assignment or examination.
Creative emphasis – Gardner’s intelligences: Musical, Spatial; Sternberg’s ability: Creative; Assessment emphasis: creative work, ‘Create afresh’; for example, an artwork or inventive solution.
Practical emphasis – Gardner’s intelligences: Bodily-kinesthetic, Naturalistic; Sternberg’s ability: Practical; Assessment emphasis: practical task, ‘Make anew’; for example, a practical task or situational application.
Relational emphasis – Gardner’s intelligences: Interpersonal, Intrapersonal; Sternberg’s ability: none corresponding; Assessment emphasis: relational task, ‘Relate amongst’; for example, a relationship report or reflective evaluation.
The process of constructing assessment items was then based on the following instrument.
Figure 2: MIBADL Assessment Instrument
Column 1 – Outcome/s statement: statement of outcome/s
Column 2 – Intelligence emphasis: Academic, Creative, Practical, Relational
Column 3 – Assessment items
Column 4 – Assessment criteria: individual or generic
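To make the structure of the instrument concrete, the sketch below models one course outcome with an assessment item offered under each of the four intelligence emphases. It is an illustration only, under the assumption that the paper instrument can be represented as a simple data structure; the class names, fields and example items are hypothetical and were not part of the research instruments.

```python
# Minimal sketch of the MIBADL instrument in Figure 2; illustrative only.
# The study used a paper instrument; all names and example items here are hypothetical.
from dataclasses import dataclass, field
from typing import List

EMPHASES = ("Academic", "Creative", "Practical", "Relational")

@dataclass
class AssessmentItem:
    emphasis: str      # one of EMPHASES
    description: str   # the assessment item as presented to students
    criteria: str      # assessment criteria: "individual" or "generic"

@dataclass
class OutcomeAssessment:
    outcome: str                                   # statement of the outcome/s
    items: List[AssessmentItem] = field(default_factory=list)

    def covers_all_emphases(self) -> bool:
        # MIBADL principle 4: variety based on an intentional consideration of
        # different intelligences, so every emphasis should have at least one item.
        offered = {item.emphasis for item in self.items}
        return all(e in offered for e in EMPHASES)

# Hypothetical example for course outcome 4 (exegesis of a Genesis passage).
outcome4 = OutcomeAssessment(
    outcome="Meaningfully exegete a selected passage from the Book of Genesis.",
    items=[
        AssessmentItem("Academic", "Write an exegetical essay on the passage", "generic"),
        AssessmentItem("Creative", "Present the passage as an artwork with a short commentary", "individual"),
        AssessmentItem("Practical", "Prepare a teaching session applying the passage", "individual"),
        AssessmentItem("Relational", "Write a reflective evaluation of the passage's personal impact", "individual"),
    ],
)
print(outcome4.covers_all_emphases())  # True: all four emphases are offered
```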
4.5 Implementing the MIBADL instrument
As the focus of the research was the application of the MIBADL assessment instrument, the use of the
instrument was the only adjustment made to the course as previously presented. Considering the challenges
presented by each of the intelligence emphases, the academic emphasis was both simple and complex: simple, in that it was the emphasis most common in higher education; complex, in that the items needed to promote deep learning. The creative and practical emphases were challenging in their difference from the more typical academic
emphasis. The main concern was the marking of the assessment items, especially in relation to the technical
aspects of the submissions; however, the assessment criteria were intended to guide students.
The relational emphasis was the most difficult and the one in which the assessment options were arguably most forced. In developing the options, the starting point was a student-others dynamic. However, in the light of Gardner’s inter- and intrapersonal intelligences, and as the field was Christian ministry and theology, a student-self and a student-God dynamic were added, without intentionally moving to spiritual intelligence (McMullen, 2003 and Vaughan, 2004). One question that was not addressed was whether the forcing of assessment in the relational emphasis was problematic; another was whether all the relational items were actually relational and not practical. In relation to all the emphases developed, the main challenge was the communication of requirements for emphases other than the academic.
A final consideration was whether limits would be placed on the students’ choice of options; the concern was that most would opt for the academic emphasis (later shown to be unfounded), while not wanting to compel students to make choices they did not want to make. Responding to these concerns, students were permitted to choose a maximum of three options of a given emphasis. To achieve this without revealing the nature of the given emphasis, the same intelligence emphasis was given the same number across the four requirements (for example, all academic options were numbered ‘1’); a simple check of this limit is sketched below.
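The following minimal sketch illustrates that limit, assuming a student's choices are recorded simply as the option numbers (1 to 4) selected for the four course requirements; the function name and data layout are hypothetical and not drawn from the original research materials.

```python
# Illustrative check of the choice limit described above: no more than three
# options of the same emphasis, where the same option number denotes the same
# emphasis across all four requirements (e.g. '1' = academic). Hypothetical sketch.
from collections import Counter

MAX_PER_EMPHASIS = 3

def choices_within_limit(option_numbers: list[int]) -> bool:
    """Return True if no option number (emphasis) appears more than three times."""
    return all(count <= MAX_PER_EMPHASIS for count in Counter(option_numbers).values())

print(choices_within_limit([1, 2, 1, 3]))  # True: academic chosen twice, within the limit
print(choices_within_limit([1, 1, 1, 1]))  # False: the academic emphasis chosen four times
```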
4.6 Data collection
As action research is open to a variety of data collection methods (McNiff & Whitehead, 2002; Mertler, 2009 and
O’Brein, 1998), data collection was carried out by means of questionnaires, interviews and a research journal.
The first motivation for the use of questionnaires was that they were the best way in which to collect baseline
data, including biographical details and initial responses; while the second motivation was to use the
questionnaires as a basis for personal interviews (Koshy, 2005 and McNiff & Whitehead, 2002). Six
questionnaires were utilized at key moments in the semester: shortly after the introduction to the course, after
the submission of each of the four course requirements, and at the end of the course. Use was made of both
closed and open questions (McNiff & Whitehead, 2002; Mertler, 2009; Quigley & Kuhne, 1997 and Wiersma &
Jurs, 2005) to obtain both objective data (including biographical information and personal choices) and
subjective responses (including students’ experiences of and response to the assessment). The initial
questionnaire focused on biographical data and the students’ initial responses to the assessment. The following
four questionnaires gave students an opportunity to give feedback on the assessment options chosen, reasons for
the choice, and to indicate any changes they would make to the options. The final questionnaire gave an
opportunity to reflect on the course as a whole, making use of one open-ended item.
As interviews provide detailed responses, greater clarification, deeper insights and useful perspectives
(Altrichter, Posch & Somekh, 1993; Koshy, 2005; McNiff & Whitehead, 2002; Schwalbach, 2003; Stringer,
2008 and Tomal, 2003), each student was subjected to one semi-structured and open interview to enhance an
understanding of their experiences (Quigley & Kuhne, 1997 and Wiersma & Jurs, 2005). Groups of different
students were allocated to each of the questionnaires so that each student was subject to one interview based on
the questionnaire submitted at that time. Interviews were commenced by asking questions of clarification with
respect to the student’s completed questionnaire, followed by open questions in which they were able to direct
the interview. Questionnaire answers were clarified where necessary, with the interview either focused on the
course and issues specific to the questionnaire or broader, exploring different aspects of the course. The
combination of questionnaires and interviews was effective, as the questionnaires ensured that all students gave
feedback throughout the research period, while the interviews gave each student one meaningful opportunity to
give more significant input with regards to their experience of the assessment. The final data collection method
was a research journal, which could be used in a variety of ways, from detailed recording to general
observations (Koshy, 2005; McNiff & Whitehead, 2002; Mertler, 2009 and Schwalbach, 2003). The focus of the
journal was to record important observations and personal experiences throughout the research.
4.7 Data analysis
Most of the data was qualitative, where analysis “… usually focus[es] on understanding peoples’ experience
and perspectives as a common outcome of the research process” (Stringer, 2008:39). Therefore, the aim of the
analysis was to systematically analyze the data and extract content relevant to the research aims by content
analysis (Huysamen, 1994 and Stringer, 2008), a process in which “… the researcher begins with specific
observations (i.e., data), notes any patterns in those data, formulates one or more tentative hypotheses, and
finally develops general conclusions and theories” (Mertler, 2009:14). The analysis involved the processing and
reduction of data, presentation and description of the main characteristics and features, interpretation of the data
and drawing of conclusions (Altrichter et al, 1993; Koshy, 2005; Mertler, 2009; Stringer, 2008 and Tomal,
2003). The data was presented, as far as possible, in “the participants’ own talk…” (Stringer, 2008:99), while
also presenting as much of the data as possible (Costello, 2003).
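As a rough illustration of the reductive step of such content analysis, the sketch below groups researcher-coded response excerpts by code so that recurring patterns can be inspected before interpretation. It assumes codes and excerpts are captured as simple pairs; all codes and excerpts shown are invented for illustration and are not data from the study.

```python
# Sketch of a first reductive pass of content analysis: group researcher-coded
# response excerpts by code so recurring patterns can be inspected and interpreted.
# All codes and excerpts are hypothetical.
from collections import defaultdict

def group_by_code(coded_responses: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group (code, excerpt) pairs into a dict of code -> list of excerpts."""
    groups: defaultdict[str, list[str]] = defaultdict(list)
    for code, excerpt in coded_responses:
        groups[code].append(excerpt)
    return dict(groups)

sample = [
    ("motivation", "I wanted to keep reading after finishing the creative task."),
    ("choice", "Having options made the assessment feel fairer."),
    ("motivation", "The practical task connected the course to my ministry context."),
]
for code, excerpts in group_by_code(sample).items():
    print(code, len(excerpts), excerpts)
```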
5. Key research findings
The focus of the research was to examine the contribution of theories of multiple intelligences to assessment and
the promotion of deep learning through assessment in the context of theories of multiple intelligences. An
analysis of the empirical and research data produced the following findings, presented first with respect to
assessment and then with respect to promoting deep learning.
5.1 Findings related to the assessment of learning
With respect to the contribution of theories of multiple intelligences to the assessment of learning, the finding
was that theories of multiple intelligences have a positive contribution to make to the aims of and requirements
for the assessment of learning.
Contribution to the aims of the assessment of learning
An extensive study of the relevant literature indicates that the main aims of assessment are the measurement of
achievement, motivation of learning, monitoring of progress and supporting of learning (de Jongh, 2010). The
most common aim of the assessment of learning is indicated as being the measurement of achievement (Stringer, 2008), the measurement of achievement being the scoring and ranking of the students’ mastery of that which they are expected to have learnt (Ebel, 1998). In the light of the focus of the particular research, no finding was made regarding this aim.
utilization of theories of multiple intelligences in the construction of assessment made a definite contribution to
the increasingly important aim of the motivation of learning (Gardner, 1993 and Siebörger & Macintosh, 2004).
Student feedback indicated that students were more motivated than they generally were, with specific indications of a positive response to the assessment in an intention to learn more. Certain students also
reflected their appreciation at not having to complete typical written assessment, which is not always well
regarded by students (Logan, 1971). In relation to the aim of monitoring progress (Lambert & Lines, 2000), the
application of theories of multiple intelligences made no notable contribution as it fell outside of the scope of
the research. While the use of assessment for the monitoring of progress may be achieved in most assessment,
an unanswered question was the extent to which the use of MIBA provides a more authentic indication of a
student’s progress. Finally, with respect to the supporting of learning (Lazear, 1999; Walvoord & Anderson,
1998 and South African Qualifications Authority, 2005), no finding could be made as it also fell outside the
scope of the research. It could be argued that assessment in line with student intelligence strengths will probably
produce better insights into progress, so contributing to better student support; however, no such finding could
be made.
Contribution to the requirements for assessment
A study of the relevant literature reveals that validity, reliability, fairness and practicability are the main
requirements of assessment (de Jongh, 2010). Validity of an assessment item means that it measures what it is
supposed to measure (Haines, 2004 and Solomon, 2002); the application of theories of multiple intelligences
made a positive contribution to the reduction of the indirect impact of the form of assessment. Walvoord and
Anderson (1998) emphasize that assessment should elicit the intended response; a requirement reinforced by
Luckett and Sutherland (2000) who refer to ‘fitness of and for purpose.’ An application of theories of multiple
intelligences reduces the impact of the form of assessment, increasing the likelihood that students demonstrate
their learning, so increasing validity. In relation to reliability, the consistency of assessment across varied
circumstance and contexts (Nightingale et al, 1996b and Wojtczak, 2002), it appears that the application of
MIBA produces no measurable impact on reliability; however, a definitive finding cannot be made.
The third requirement, fairness, relates to the construction of assessment items (Haines, 2004) and the
students’ experience of the assessment (Siebörger & Macintosh, 2004). In the context of MIBA, the finding was
that a positive contribution is made. Firstly, items considerate of multiple intelligences are more likely to be fair
in that different intelligence strengths are catered for. Secondly, the students’ experience is more likely to
include a sense of fairness, not always present in typical written assessment (Sternberg, 2006a, 2006b & 2007);
an observation borne out in a number of students who expressed appreciation for the assessment choice and
variety. Finally, with respect to practicability (Geyser, 2004 and Lambert & Lines, 2000), the finding was that
assessment considerate of multiple intelligences at least did not add any additional burden. Wojtczak (2002)
proposes that variety in assessment may positively contribute to practicability; however, that was not a
consideration of the research and a finding was not made.
5.2 Findings related to the promotion of deep learning
With respect to the promotion of deep learning through the assessment of learning in the context of theories of
multiple intelligences, the findings were that theories of multiple intelligences make a positive contribution to
the construction of assessment items that promote deep learning, and that assessment that is shaped by theories
of multiple intelligences contributes to the promotion of deep learning in certain students.
Constructing assessment items
The four principles for MIBADL were presented earlier in this article; the finding was that the application of
theories of multiple intelligences not only informs those principles, it also makes a positive contribution to the
construction of the related assessment items. An examination of the four principles justifies this finding. In the
following presentation of the principles, the impact of the application of theories of multiple intelligences can be
seen in the italicized sections of the principles:
1. The student’s envisaged achievement is integral to course design, acknowledging that students have different intelligence strengths and achieve differently.
2. The focus is on significant principles and structures; therefore, allowance is made for different intelligence strengths, different ways of achieving, and for variety and choice.
3. Variety and choice in assessment is based on clear and stated objectives and outcomes, which are directly associated with the aims and purpose of the course.
4. A wide variety of methods and types of assessment is utilized, based on an intentional consideration of different intelligences, with comprehensive guidelines available for all forms of assessment.
It is apparent that the principles are shaped by theories of multiple intelligences in that differences
between students are acknowledged and that assessment items take those differences into consideration. It may
be argued that the acceptance of theories of multiple intelligences legitimizes the use of variety in the
construction of assessment items. In principle one, reference is made to students achieving differently; if they
achieve differently, then they should be assessed differently. If their differences in achievement are associated
with the different intelligences, then it follows that the different intelligences should be considered in the
construction of assessment options. The second principle requires an allowance for different intelligence
strengths in the construction of assessment items, and reiterates that students achieve in ways that differ on the
basis of the intelligence strengths. Principle three associates with theories of multiple intelligences in the
requirement for variety and choice, which can be achieved through a consideration and application of theories of
multiple intelligences. The final principle is a clear statement of what had been argued with respect to the first
three principles; namely that variety in assessment is best achieved ‘based on an intentional consideration of
different intelligences….’ Therefore, it was found that the application of theories of multiple intelligences can
make a discernible and positive contribution to the construction of assessment items that promote deep learning;
however, such promotion of deep learning is not guaranteed.
Promoting deep learning in certain students
Positively, the choice and variety presented in assessment items and options opened up the opportunities for
students to express their mastery of the course outcomes in different ways based on an appreciation of their
different intelligence strengths. This was particularly observable in a number of submissions that reflected key
aspects of deep learning, including the relating of material being studied to everyday experience (Ramsden,
2003), intentional and relevant references to existing knowledge (Marton & Säljö, 1984), clear expression of
and references to internal or intrinsic motivation (Atherton, 2005), appropriate and meaningful reflection on
personal meaning (Ramsden, 2003), and various and varying demonstrations of complex understanding
(Bowden & Marton, 1998). It was notable that most of these aspects of deep learning were expressed in the assessment intelligence emphases other than academic, which is significant in that the academic emphasis is the one most commonly utilized in higher education. In other words, the simple inclusion of creative, practical and
relational intelligence emphasis assessment items appears to have made an important contribution to the
promotion of deep learning.
However, it was apparent from the empirical research that the application of MIBADL did not ensure the adoption of deep learning either throughout the course or in all students, this being especially noticeable towards the end of the course as students began to feel fatigued and pressured by their overall study load.
Throughout the course, students indicated that they were often influenced by a variety of factors, including time
pressure, uncertainties and familiarity with certain forms of assessment. Towards the end of the course, they
indicated that they were fatigued by the demands across their courses (most students were enrolled in a fulltime
load of five units) and by the challenge of completing the specific course in which the research was taking place.
It could therefore not be concluded that the use of MIBADL guarantees a consistent deep learning approach, and it was apparent that MIBADL alone does not guarantee the adoption of a deep approach to learning by students. The overall finding was that assessment shaped by theories of multiple intelligences contributes to the promotion of deep learning in certain students, although those students did not necessarily maintain a deep approach to learning throughout the course.
6. Conclusion
The final conclusion is that the research has made a valuable contribution to the development of educational
practice and theory: specifically, that MIBADL contributes to the promotion of deep learning; that MIBA principles are valuable in the assessment of learning; and that a consideration of theories of multiple intelligences
contributes to enhanced assessment practices.
Keywords: action research, learning, assessment, multiple intelligences, higher education, student motivation
Acknowledgements
The research reported here was carried out in completion of a DEd degree at the University of Johannesburg,
with Charles de Jongh the researcher and Sarah Gravett the supervisor. The research was variously funded and
supported by the University of Johannesburg, The Baptist Theological College of Southern Africa, Malyon
College (The Queensland Baptist College of Ministries) and Queensland Baptists.
Authors
Charles de Jongh is Academic Dean and Lecturer in Biblical Studies at Malyon College (The Queensland
Baptist College of Ministries) in Brisbane, Queensland.
E-mail: charles@malyon.edu.au
Sarah Gravett is Executive Dean of the Faculty of Education at the University of Johannesburg, South Africa.
E-mail: sgravett@uj.ac.za
References
Altrichter, H., Posch, P. & Somekh, B. (1993). Teachers Investigate Their Work: An Introduction to the
Methods of Action Research. London: Routledge.
Angelo, T.A. & Cross, K.P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San
Francisco: Jossey-Bass.
Atherton, J.S. (2005). Learning and teaching: approaches to study ‘deep’ and ‘surface’. Retrieved September
15, 2005, from http://www.learningandteaching.info/learning/deepsurf.htm
Biggs, J. (2003). Teaching for Quality Learning at University. Second Edition. Maidenhead: Open University.
Boud, D. (1995). Assessment and learning: contradictory or complementary? In P. Knight (Ed.), Assessment for
Learning in Higher Education. London: Kogan Page.
Bowden, J. & Marton, F. (1998). The University of Learning – Beyond Quality and Competence. London:
RoutledgeFalmer.
Bradbury, H. & Reason, P. (2003). Action research: An opportunity for revitalizing research purposes and
practices. Qualitative Social Work, 2. Retrieved March 17, 2009, from
http://qsw.sagepub.com/cgi/content/abstract/2/2/155
Brualdi, A.C. (1996). Multiple intelligences: Gardner’s theory (ERIC Identifier ED410226). Washington: ERIC
Clearinghouse on Assessment and Evaluation (accessed September 13, 2004, from ERIC database).
Campbell, E. (1998). Teaching strategies to foster “deep” versus “surface” learning. Teaching Options,
November. Retrieved September 15, 2005, from
http://www.uottawa.ca/academic/cut/options/Nov_98/Teaching Strategies_en.htm
Chalmers, D. & Fuller, R. (1996). Teaching for Learning at University. London: Kogan Page.
Chen, J.-Q. (2004). Theory of multiple intelligences: Is it a scientific theory? Teachers College Record, 106(1),
17-23.
Costello, P.J.M. (2003). Action Research. London: Continuum.
De Jongh, C. (2006). Study Guide: BBS 225 – The Pentateuch. Randburg: BTCSA.
De Jongh, C. (2010). Theories of multiple intelligences and learning assessment for deep learning in higher
education. Unpublished doctoral dissertation, University of Johannesburg, Johannesburg, Gauteng,
South Africa.
Denig, S. (2004). Multiple intelligences and learning styles: Two complementary dimensions. Teachers College
Record, 106(1). Retrieved October 3, 2005 from
http://www.tcrecord.org/PrintContent.asp?ContentID=11513
Dick, B. (2000). A beginner’s guide to action research. Retrieved March 4, 2009, from
http://www.scu.edu.au/schools/gcm/ar/arp/guide.html
Ebel, R. (1998). The ‘essentials’ of educational measurement. In J. Gultig et al (Eds.), Understanding
Outcomes-Based Education: Teaching and Assessment in South Africa – Reader. Cape Town:
SAIDE/Oxford.
Engineering Subject Centre. (2005). Deep and surface approaches to learning. Retrieved September 15, 2005,
from http://www.engsc.ac.uk/er/theory/learning.asp
Entwistle, N. (1991). Learning and studying: contrasts and influences. Creating the Future: Perspectives on
Educational Change. Retrieved 16 December, 2009, from
http://www.newhorizins.org/future/Creating_the_Future/crfut_entwistle.html
Ferrance, E. (2000). Action Research. Providence: LAB.
Fry, H., Ketteridge, S. & Marshall, S (2003). Understanding student learning. In H. Fry, S. Ketteridge & S.
Marshall (Eds.), A Handbook for Teaching and Learning in Higher Education. London: Kogan Page.
Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. 20th Anniversary Edition. New
York: Basic.
Gardner, H. (1993). Multiple Intelligences: The Theory in Practice - A Reader. New York: Basic.
Gardner, H. (1999). Intelligence Reframed: Multiple Intelligences for the 21st Century. New York: Basic.
Geyser, H. (2004). Learning from assessment. In S. Gravett & H. Geyser (Eds.), Teaching and Learning in
Higher Education. Pretoria: Van Schaik.
Glanz, J. (1999). Action research. Journal of Staff Development, 20(3). Retrieved December 1, 2005 from
http://www.nsdc.org/library/publications/jsd/glanz203.cfm
Haines, C. (2004). Assessing Students’ Written Work. London: RoutledgeFalmer.
Henning, E., Van Rensburg, W. & Smit, B. (2004). Finding Your Way in Qualitative Research. Pretoria: Van Schaik.
Herr, K. & Anderson, G.L. (2005). The Action Research Dissertation: A Guide for Students and Faculty.
Thousand Oaks: Sage.
Huysamen, G. K. (1994). Methodology for the Social and Behavioural Sciences. Johannesburg: Thomson.
Koshy, V. (2005). Action Research for Improving Practice: A Practical Guide. London: Paul Chapman.
Lambert, D. & Lines, D. (2000). Understanding Assessment: Purposes, Perceptions, Practice. London:
RoutledgeFalmer.
Lazear, D. (1999). Multiple Intelligence Approaches to Assessment – Solving the Assessment Conundrum
(Revised). Chicago: Zephyr.
Logan, D. (1971). Students’ views on assessment. In ULIE (Ed.), Conference on Assessment of Learning,
Courses and Teaching. London: ULIE.
Luckett, K. & Sutherland, L. (2000). Assessment practices that improve teaching and learning. In S. Makoni
(Ed.), Improving Teaching and Learning in Higher Education: A Handbook for Southern Africa.
Johannesburg: WITS University.
MacIsaac, D. (1996). An introduction to action research. Retrieved March 4, 2009, from
http://physicsed.buffalostate.edu/danowner/actionrsch.html
Madaus, G. (1998). The influence of testing on the curriculum. In J. Gultig et al (Eds.), Understanding
Outcomes-Based Education: Teaching and Assessment in South Africa – Reader. Cape Town:
SAIDE/Oxford.
Marton, F. & Säljö, R. (1984). Approaches to learning. In F. Marton et al (Eds.), The Experience of Learning.
Edinburgh: Scottish Academic.
Masters, J. (2000). The history of action research. Retrieved March 4, 2009, from
http://www2.fhs.usyd.edu.au/arow/arer/oo3.htm
McMullen, B. (2003). Spiritual intelligence. Student BMJ, 11, 60-61.
McNiff, J. (2002). Action research for professional development. Retrieved March 29, 2005, from
http://www.jeanmcniff.com/Copy%20booklet%20for%20web%20site.doc
McNiff, J. & Whitehead, J. (2002). Action Research: Principles and Practice. London: RoutledgeFalmer.
Merrow, J. (2003). Easy grading makes ‘deep learning’ more important. Retrieved September 15, 2005, from
http://www.usatoday.com/news/opinions/editorials/2003-02-04-merrow_x.htm
Mertler, C.A. (2009). Action Research: Teachers as Researchers in the Classroom. Second Edition. Los
Angeles: Sage.
Nightingale, P. (1996). Accessing and managing information. In P. Nightingale et al (Eds.), Assessing Learning
in Universities. Sydney: University of NSW.
Nightingale, P., Te Wiata, I., Toohey, S., Ryan, G., Hughes, G. & Magin, D. (1996). Assessment project
glossary. In P. Nightingale et al (Eds.), Assessing Learning in Universities. Sydney: University of
NSW.
Nolen, A.L. & Vander Putten, J. (2007). Action research in education: Addressing gaps in ethical principles and
practices. Educational Researcher, 36(7). Retrieved March 17, 2009 from
http://www.aera.net/uploadedFiles/Publications/Journals/Educational_Researcher/3607/10EDR07_401
-407.pdf
O’Brein, R. (1998). An overview of the methodological approach to action research. Retrieved March 4, 2009,
from http://www.web.net/~robrein/papers/arfinal.html
Pring, R. (2000). Philosophy of Educational Research. London: Continuum.
Prosser, M. & Trigwell, K. (1999). Understanding Learning and Teaching: The Experience in Higher
Education. Buckingham: SRHE & Open University.
Qualifications and Curriculum Authority. (2006). The ten principles. Retrieved March 17, 2006, from
http://www.qca.org.uk/907.html
Quigley, B.A. (1997). The role of research in the practice of adult education. In B.A. Quigley & G.W. Kuhne
(Eds.), Creating Practical Knowledge Through Action Research. San Francisco: Jossey-Bass.
Quigley, B.A. & Kuhne, G.W. (1997). Understanding and using action research in practice settings. In B.A.
Quigley & G.W. Kuhne (Eds.), Creating Practical Knowledge Through Action Research. San
Francisco: Jossey-Bass.
Race, P. (1995). What has assessment done for us – and to us? In P. Knight (Ed.), Assessment for Learning in
Higher Education. London: Kogan Page.
Ramsden, P. (2003). Learning to Teach in Higher Education. Second Edition. London: RoutledgeFalmer.
Rhem, J. (1995). Deep/surface approaches to learning: an introduction. The National Teaching and Learning
Forum, 5(1), 1-5.
Riding, P., Fowell, S. & Levy, P. (1995). An action research approach to curriculum development. Information
Research, 1(1). Retrieved December 1, 2005, from http://informationr.net/ir/1-1/paper2.html
Rowntree, D. (1977). Assessing Students. London: Harper & Row.
Schwalbach, E.M. (2003). Value and Validity in Action Research: A Guidebook for Reflective Practitioners.
Lanham: ScarecrowEducation.
Siebörger, R. & Macintosh, H. (2004). Transforming Assessment. Cape Town: Juta.
Smith, M.K. (2002). Howard Gardner, multiple intelligences and education. Retrieved September 13, 2004,
from http://www.infed.org/thinkers/gardner.htm
Solomon, P.G. (2002). The Assessment Bridge. Thousand Oaks: Corwin.
South African Qualifications Authority (2005). NQF objectives and what does our NQF look like? Retrieved
November 17, 2005, from http://www.saqa.co.za/show.asp?main=about/nqfobjectives.htm
Sternberg, R.J. (1985). Beyond IQ: A Triarchic Theory of Human Intelligence. Cambridge: Cambridge University Press.
Sternberg, R.J. (1988). The Triarchic Mind: A New Theory of Human Intelligence. New York: Penguin.
Sternberg, R.J. (1998). Applying the triarchic theory of human intelligence in the classroom. In R.J. Sternberg &
W.M. Williams (Eds.), Intelligence, Instruction, and Assessment: Theory into Practice. London:
Lawrence Erlbaum.
Sternberg, R.J. (2006a). Examining intelligence. BizEd, 5(2), 22-27.
Sternberg, R.J. (2006b). Recognizing neglected strengths. Educational Leadership, 64(1), 30-35.
Sternberg, R.J. (2007). Assessing what matters. Educational Leadership, 65(4), 20-26.
Sternberg, R.J. & Grigorenko, E.L. (2003). Teaching for successful intelligence: Principles, procedures, and
practices. Journal for the Education of the Gifted, 27(2-3), 207-208.
Sternberg, R.J. & Williams, W.M. (Editors). (1998). Intelligence, Instruction, and Assessment: Theory into
Practice. London: Lawrence Erlbaum.
Stringer, E. (2008). Action Research in Education. Second Edition. Columbus: Pearson.
Tim, C. F. (2004). Encouraging deep learning. Retrieved September 15, 2005, from
http://www.cdtl.nus.edu.sg/link/mar2004/learn1.htm
Tomal, D.R. (2003). Action Research for Educators. Lanham: Rowman & Littlefield.
Trigwell, K. & Prosser, M. (1996). Towards an understanding of individual acts of teaching. Retrieved
December 10, 2009, from http://www.herdsa.org.au/confs/1996/trigwell1.html
Trigwell, K., Prosser, M. & Waterhouse, F. (1999). Relations between teachers’ approaches to teaching and
students’ approaches to learning. Higher Education, 37, 57-70.
Walvoord, B.E. & Anderson, V.J. (1998). Effective Grading. San Francisco: Jossey-Bass.
Wiersma, W. & Jurs, S.G. (2005). Research Methods in Education. Eighth Edition. Boston: Pearson.
Willingham, D.T. (2006). Reframing the mind: A response to Howard Gardner. Retrieved August 2, 2006, from
http://www.educationnext.org/unabridged/20043/willigham.pdf
Wojtczak, A. (2002). Evaluation of learning outcomes (revised). Retrieved December 8, 2009, from
http://www.iime.org/documents/elo.htm