Using Formative Assessment to Improve MBA Students’ Understanding of Economics

2014 Cambridge Conference Business & Economics
ISBN : 9780974211428
Dr. Rebecca Stratling
Durham University Business School
Mill Hill Lane
Durham DH1 3LB
UK
e-mail: Rebecca.stratling@durham.ac.uk
phone: 0044 (0)191 3345379
July 1-2, 2014
Cambridge, UK
Using formative assessment to improve MBA students’ understanding of
economics
ABSTRACT
Economics is a compulsory part of the syllabus of most accredited MBA programmes.
However, many MBA students complain that the subject is inaccessible and too theoretical to
be useful for managerial practice. This disenchantment with the subject is of concern both
regarding students’ academic success and their development of professional competency, as
the ability to apply economic analysis is widely recognised as an important management skill.
Despite increasing numbers of postgraduate students, there has so far been little research into
mature students’ use and perceptions of formative assessment. With regard to post-experience MBA programmes, educators frequently assume that, as highly motivated and
experienced learners, students are predominantly interested in group and case study based
learning activities rather than individual formative assessment. If this assumption is correct, it
suggests a dissonance between widely applied formative assessment methods in economics
and students’ learning orientation.
This exploratory research aims to contribute to a better understanding of mature MBA
students’ perceptions of different types of formative assessment in the context of a
compulsory business economics course. The results of a survey of a MBA class with
experience in formative assessment by audience response systems, online quizzes, and group
presentations indicate that, in contrast to received wisdom, both multiple choice based
audience response and online quizzes were received more positively than group
presentations. While these findings are based on a survey of students on a single MBA
programme, they highlight the need for more research into mature students’ use of and
attitudes towards formative assessment.
Keywords: teaching economics; formative assessment; assessment for learning; mature
students; MBA; group presentations; online quizzes; audience response systems
INTRODUCTION
While the ability to critically engage in economic analysis both at micro and at
macroeconomic level is widely recognised as an important management skill (Harris, 1984;
Healey, 1993; Polutnik, 2010), in many MBA programmes business economics courses are
positively loathed by students. Standard complaints about MBA economics courses are that
they are perceived to be very difficult and too theoretical to be of any practical use to
managers (Gregorowicz & Hegji, 1998; Jain, 2006; Poltinik, 2010; Webber & Mearman,
2012).
Regular formative assessment and feedback in Business Economics courses on MBA
programmes therefore serves multiple purposes. In addition to providing students with the
ability to practice their economic reasoning skills and to receive diagnostic feedback,
formative assessment is also supposed to encourage students to revise regularly, to aid their
ability to self-regulate their learning behaviour, to improve their self-efficacy beliefs by
giving them confidence in their learning progress, and to improve their perception of the
usefulness of economic reasoning skills to managers (Walstad, 2001).
This raises the question which types of formative assessment are the most appropriate given
the subject and the characteristics of the students. A survey of prior academic research
reveals that investigations into students’ perception of formative assessment and feedback
tend to be focussed on school children and undergraduate students, with little attention being
paid to students on post-graduate and, in particular, post-experience programmes. However,
mature learners tend to have different learning needs and skills than younger learners. Adult
education theories tend to presuppose, for instance, that compared to adolescents or
traditional undergraduate students, mature learners are more intrinsically motivated to learn,
more able and prepared to engage in self-directed learning, as well as more focussed on the
relevance of knowledge to practice (Devlin, 1996; Forrest & Peterson, 2006). Moreover,
mature learners – particularly those attending post-experience programmes – often have
different learning goals than younger learners. Whereas for younger learners achieving
academic qualifications to enable them to successfully enter the world of work tends to be a
key objective, practitioners are likely to be more interested in developing the ability to
engage in “reflective practice” via the development of connective, critical, and personal
thinking (De Dea Roglio & Light, 2009).
These differences are likely to have an impact on students’ perception of learning related
activities. Given the cost and effort involved in designing and implementing formative
assessment and feedback (Duijnhouwer, Prins, & Stokking, 2012; Geide-Stevenson, 2009;
Latif & Miles, 2011) as well as the potential negative impact of misaligned didactic strategies
on students’ learning (Biggs, 1993; Biggs, Kember, & Leung, 2001; Keough, 2012; Kyndt,
Dochy, Struyven, & Cascallar, 2011), a better understanding of mature students’ perception
of formative assessment would be helpful.
This exploratory research aims to contribute to this endeavour by investigating the views of
post-experience MBA students about the usefulness of formative assessment via audience
response systems (clickers), online quizzes, and group presentations in the context of a
Business Economics course. While the research is exploratory in nature, as it focuses on the
perceptions of students in a single subject in a specific MBA programme, the results give
some indication that mature students’ attitudes towards formative assessment might not
conform to widely held assumptions and therefore require more thorough investigation.
THE PURPOSE OF FORMATIVE ASSESSMENT
In contrast to summative assessment, students’ performance or indeed participation in
formative assessment does not directly affect their final grades. Without the pressure of ‘hard
sanctions’, formative assessment and feedback are supposed to provide clear guidance to
students about learning goals, gaps between their current and expected performance levels as
well as means to bridge the gaps, e.g. by identifying particular topics which require more
revision, explaining how to develop relevant arguments, or encouraging more effective
learning strategies (Crooks, 1988; Hattie & Timperley, 2007; Nicol & Macfarlane-Dick,
2006).
Good formative assessment and feedback are expected to positively affect students’
motivation towards a subject (Duijnhouwer et al., 2012) as well as help them develop positive
and realistic self-efficacy beliefs (Crooks, 1988; Duijnhouwer et al., 2012). They should
discourage students from applying a surface approach to learning which focuses on
memorising disjoint items of information (Biggs et al., 2001), and instead motivate them to
employ deep learning strategies to develop a comprehensive and critical understanding of the
subject.
In addition, formative assessment can help lecturers improve their teaching by identifying
topics which are poorly understood. This can also prompt lecturers to consider the
effectiveness of their teaching methods (Geide-Stevenson, 2009; Kay & LeSage, 2009; Nicol
& Macfarlane-Dick, 2006; Roediger, Agarwal, McDaniel, & McDermott, 2011; Roediger &
Karpicke, 2006).
While experimental research into the impact of assessment on material retention suggests that
even just testing students’ knowledge can improve their understanding and recall (Crooks,
1988; Mayer et al., 2009; Roediger et al., 2011; Roediger & Karpicke, 2006; Trees &
Jackson, 2007), in educational practice, the provision of feedback tends to be regarded as a
crucial element of formative assessment. Feedback can not only help improve students’
understanding of the subject (Roediger & Karpicke, 2006; Trees & Jackson, 2007). It can
also help students to develop more effective encoding strategies and to improve their ability
to self-regulate learning (Cassidy, 2011; Kelemen, Winningham, & Weaver, 2007; Roediger
& Karpicke, 2006; Trees & Jackson, 2007).
However, research into the use of homework, audience response systems, and classroom tests
in schools and higher education institutions suggests that formative assessment often focuses
more on factual knowledge or the ability to define concepts than on deductive or inductive
reasoning (Biggs, 1993; Crooks, 1988; Duijnhouwer et al., 2012; Hattie & Timperley, 2007;
Mayer et al., 2009; Roediger & Karpicke, 2006). Moreover, the feedback provided is often
limited to information about correct or incorrect answers (Angus & Watson, 2009;
Duijnhouwer et al., 2012; Geide-Stevenson, 2009; Hattie & Timperley, 2007; Roediger et al.,
2011). This focus on memorisation skills and the lack of diagnostic feedback suggests that
there is often a degree of incongruity between assessment and feedback design on the one
hand and the didactic objectives typically expected of modern education programmes,
particularly in higher and adult education, on the other (see e.g. Aggarwal & Lynn, 2012;
Entwistle & Peterson, 2004). This is of particular concern since poor formative assessment
and feedback methods can negatively affect students’ motivation towards a course and
incentivise students to opt for surface or strategic rather than deep approaches to learning
(Biggs, 1993; Crooks, 1988; Duijnhouwer et al., 2012; Hattie & Timperley, 2007).
Even when formative assessment and feedback are well designed by didactic standards,
students’ perception of the appropriateness of assessment and feedback methods depends on a
variety of factors, some of which are outside of the control of a lecturer of a particular class
(Baeten, Kyndt, Struyven, & Dochy, 2010; Biggs, 1993; Entwistle & Peterson, 2004). In
addition to the physical, social, and instructional context in which the assessment and
feedback is provided, students’ perception of their usefulness tends to be affected by their
subject specific skills, their motivation, their metacognitive abilities, and their self-efficacy
beliefs (Baeten et al., 2010; Biggs et al., 2001; Entwistle & Peterson, 2004).
Furthermore, some of the effects of formative assessment which lecturers see as beneficial,
e.g. with regard to encouraging students to engage actively with the material, practice their
skills, and keep up with their studies by revising regularly (Lee, Courtney, & Balassi, 2010),
might be perceived negatively by students. For example, students might reject formative
assessment which requires them to critically analyse situations or utilise their skills to solve
problems because of the effort or time this requires (Crooks, 1988; Entwistle & Peterson,
2004; Kay & LeSage, 2009; Trees & Jackson, 2007). In addition, formative assessment is
often designed to force students to time their revision and practice in line with the assessment
schedule set by the lecturer and thereby restricts their ability to follow their own preferences
(Geide-Stevenson, 2009). So while, ideally, formative assessment and feedback are designed to
facilitate students’ self-regulatory learning behaviours, learners might perceive them as
restrictive impositions. Prior research into differences between mature and traditional
undergraduate students suggests that mature students tend to have more positive perceptions
of their subject knowledge as well as their ability to manage time, self-regulate learning, and
concentrate on their studies (Devlin, 1996). This is likely to apply particularly to mature
learners who have previously successfully studied for degrees or professional qualifications
and have successful careers. MBA students might therefore perceive formative assessment
and diagnostic feedback as unnecessary or indeed unwarranted coercion.
Students might even interpret the provision of diagnostic feedback “as an indication of the
teacher’s underestimation of their capacities” (Duijnhouwer et al., 2012, p. 181). This might
lead students to ignore the feedback and to develop a negative attitude towards the subject
and its formative assessment, which would be detrimental to their learning progress (Kyndt et
al., 2011).
However, research into undergraduate students’ metacognitive skills suggests that most
students noticeably overestimate their subject knowledge and understanding (Kelemen et al.,
2007; Roediger et al., 2011). Moreover, self-regulatory behaviour relies not only on students’
metacognitive skills to monitor their learning progress, develop strategies to improve the
effectiveness of their learning, and set realistic achievement and mastery goals; it also requires
students to be able to execute them (Edens, 2006). Given the demanding nature of their
programmes of study, family responsibilities or professional commitments, mature learners
might face important rival claims on their time and energy, which limit their ability to
implement effective self-regulatory learning strategies. Formative assessment might therefore
act as an important prompt for students to make time for their studies.
Consequently a better understanding of mature students’ perceptions of and attitudes towards
different types of formative assessment and feedback could help lecturers improve teaching
in postgraduate and – in particular – post-experience programmes. This is particularly
important for subjects like Business Economics, which are frequently perceived as “difficult”
or “inaccessible”.
As previously highlighted, students joining MBA programmes are often apprehensive about
Business Economics due to the subject’s reputation as difficult and impractical (Gregorowicz &
Hegji, 1998; Jain, 2006; Polutnik, 2010; Webber & Mearman, 2012). Moreover, many
students have strong preconceived notions about how economic relationships work, based on
previous professional and private experiences (Davies & Mangan, 2007; Ping, 2003). In order
to help students engage with economic theory based reasoning, it is essential to identify and
challenge their misconceptions about economics. Apart from the problem of identifying
misconceptions early on, challenging students’ convictions on the basis of often very
elementary and abstract economic concepts and theories can add to the resentment students
feel towards the subject. In this context it is vital to establish and maintain students’
engagement and active participation in the course despite initial frustration and possibly
incredulity.
However, formative assessment and feedback can be time consuming (Geide-Stevenson,
2009). Given the demanding nature of MBA programmes and the wide range of micro and
macroeconomic topics typically covered in business economics courses, the question of
whether formative assessment and feedback are an effective use of students’ class and
independent study time is particularly acute. Nonetheless, research into the impact of
formative assessment and feedback on student learning in economics is limited (Ball, Eckel,
& Rojas, 2006; Geide-Stevenson, 2009; Jain, 2006; Latif & Miles, 2011; Lee et al., 2010), in
particular in relation to postgraduate students. Moreover, existing research tends to focus on
the impact of formative assessment on students’ exam performance and fails to consider
students’ perception and evaluation of formative assessment.
RESEARCH METHOD AND RESULTS
This research investigates the views of mature students, enrolled on a one-year full-time MBA
programme at a UK Russell Group university, on three different types of formative
assessment employed in a core Business Economics course: in-class quizzes using audience
response systems, self-assessment quizzes using online assessment, and group presentations.
The key didactic purpose of all three forms of formative assessment was to facilitate a deep
approach to learning by encouraging students to actively apply relevant economic concepts
and theories and by assisting them in identifying and remedying knowledge gaps and
misconceptions. Moreover, all three forms of formative assessment were designed to
motivate students to revise regularly.
The class consisted of 46 students who hailed from 26 different countries. In order to qualify
to join the MBA programme, applicants need to have at least three years’ management
experience and usually either an undergraduate degree with a 3.3 US GPA classification (or
equivalent) or a professional qualification of a similar standard. Students’ average age was 31
years, 34% of them were female, and 19.5% relied on a professional qualification to join.
The Business Economics course covered both micro and macroeconomic topics and was
designed to account for about 150 study hours consisting of eleven 4-hour lectures (including
30-minute breaks) and about 110 hours of independent individual and group study.
Lectures consisted of a mix of traditional lectures, class discussions, organised group work
and simulations.
Student feedback indicated that the course was well received in terms of the quality of
teaching and the usefulness of the course content to managers. However, students also tended
to rate the course as the most challenging and most labour-intensive of all the courses they
studied.
Throughout the course students were expected to engage in weekly formative assessment
using all three different modes. In order to incentivise students to participate, the rationale for
the different forms of formative assessment was explained in the first session and whenever
participation rates appeared to slip. There were no formal sanctions for a failure to engage in
formative assessment.
At the end of the course, students were asked to participate in an anonymous online
questionnaire on their opinions about the key didactic features of the course. The
response rate for the questionnaire was 65.2%. Respondents came from 19 different
countries, their average age was 31.8 years, and 40% of respondents were female.
Clicker questions
Audience response systems, often called “clickers”, are hand-held devices which allow
students to choose answers to multiple choice questions projected on a screen (for a typical
example, please refer to the question set in Appendix A). Instructors can then immediately
make the aggregate of the students’ responses visible by displaying frequency distributions.
Depending on students’ performance, clicker questions can be used to facilitate a detailed
discussion of the reasons for correct or incorrect answers.
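To illustrate the aggregation step, the following minimal Python sketch tallies a set of clicker answers into the kind of frequency distribution an instructor would project; the responses are hypothetical and the sketch is not the audience response software actually used in the course.

    from collections import Counter

    # Hypothetical responses to one multiple choice clicker question (options A-D)
    responses = ["C", "A", "C", "B", "C", "D", "C", "A", "C", "B",
                 "C", "C", "B", "A", "C", "D", "C", "C", "B", "C"]

    counts = Counter(responses)
    total = len(responses)

    # The frequency distribution displayed to the class once polling closes
    for option in "ABCD":
        n = counts.get(option, 0)
        print(f"{option}: {n:3d} responses ({n / total:6.1%})")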
Prior research into the use of clickers in undergraduate programmes suggests that they help
improve students’ lecture attendance, attention during lectures and engagement with the
material (Keough, 2012). Students also frequently report that using clickers facilitates peer
and class discussion and allows them to immediately identify misconceptions or gaps in their
knowledge (Crooks, 1988; Mayer et al., 2009). While similar effects could be achieved by
asking verbal questions in class, the use of clickers allows lecturers to gain feedback on all
students’ understanding and is thereby expected to reduce the risk that they overestimate their
students’ learning progress (Kay & LeSage, 2009). Moreover, clickers improve the
opportunities for introverted or insecure students to participate actively in class (Trees &
Jackson, 2007).
However, the need to allocate lecture time for students to work out answers and to provide
diagnostic feedback means that less time is available e.g. for a wider or more detailed
coverage of the subject (Kay & LeSage, 2009) or for alternative learning activities such as
group work and case studies.
Moreover, the use of clickers requires attention, cognitive energy and active participation
from students (Kay & LeSage, 2009; Trees & Jackson, 2007). While this increased effort is
one of the key objectives of the use of clickers, it might not be appreciated by students who
are comfortable with a passive role in lectures.
Finally, Kay & LeSage (2009) point out that contingent teaching, ideally facilitated by clicker
questions, requires lecturers to be able to set good questions and to respond positively to poor
results, e.g. by improving the way they explain arguments. Poorly phrased questions,
questions which focus on memorisation rather than understanding, or a lack of flexibility by
lecturers might lead students to perceive clicker questions as an annoying gimmick rather
than a useful learning tool.
In the Business Economics course, multiple choice clicker questions were used for two
distinct purposes: To review material from the previous lecture at the start of each subsequent
lecture, and then at regular intervals throughout the lecture to allow the students to apply and
practice the new material discussed.
In order to reduce the risk that students might simply guess the most likely answer and to
encourage students to practice verbalising economic arguments, pairs of students were
required to share one clicker. This meant that students had to discuss the questions and agree
answers with their partner.
Online quizzes
The Business Economics course required all students to participate in weekly online quizzes
during their independent study-time, to give both students and the lecturer feedback on
students’ learning progress. Students were instructed to first revise the preceding week’s
lecture material and then to take the tests, which consisted mainly of multiple choice and
multiple answer questions. After submitting the tests, students immediately received
information on their score, on questions they answered correctly or incorrectly, and
diagnostic feedback which explained the reasoning behind the correct (and incorrect) answers
(for an example, please refer to Appendix A).
Students who failed to take the test or who answered less than 70% of questions correctly
were reminded by e-mail two days before the subsequent lecture to engage in revision and
(re)take it. By this time on average 83.5% of students had participated in the online quizzes
(SD = 11.1). The reminder e-mails included short explanations of the didactic rationale for
engaging in revision and taking the quiz. Moreover, as part of the introductory explanation of
the didactic design of the course, students were alerted to the fact that 30% of the marks of
the final summative exam would be allocated to a multiple choice test. While exam questions
would not be identical to the questions set in the online quizzes, it was pointed out that the
quizzes would provide a good practice for this part of the exam.
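The monitoring and reminder step described above can be sketched as follows; the data structures, score representation and reminder action are illustrative assumptions rather than a description of the virtual learning environment actually used.

    from typing import Optional

    PASS_THRESHOLD = 0.70  # proportion of quiz questions answered correctly

    def needs_reminder(score: Optional[float]) -> bool:
        """A student is reminded if the quiz was not attempted (None)
        or less than 70% of the questions were answered correctly."""
        return score is None or score < PASS_THRESHOLD

    # Hypothetical weekly quiz results (None = quiz not attempted)
    quiz_scores = {"student_01": 0.85, "student_02": 0.55, "student_03": None}

    for student, score in quiz_scores.items():
        if needs_reminder(score):
            # In the course, an e-mail two days before the next lecture restated the
            # didactic rationale and asked the student to revise and (re)take the quiz.
            print(f"Remind {student} to revise and (re)take this week's quiz.")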
Prior research suggests that undergraduate students tend to perceive the opportunity to take
online tests as part of their learning as helpful, provided that the systems operate reliably (Angus
& Watson, 2009). However, mature students who are confident in their ability to self-regulate
their learning might perceive the requirement to take online quizzes as a needless imposition.
Moreover, the limited complexity of answers permitted by multiple choice questions might
frustrate students’ ambitions to display their own critical understanding. This is likely to be
particularly acute for mature students who can draw on their own professional experience. If
students do not believe that assessment is important or reflects their performance and effort
accurately, they are less likely to take it seriously (Crooks, 1988). Consequently mature
students might find online quizzes aggravating rather than helpful.
Group presentations
Throughout the course the lecturer used the example of a German equipment manufacturer
called “KRONES AG” as a running case study to illustrate the application of relevant
economic theories and concepts. At the end of each lecture, a list of “Pause for Thought”
questions was published (for an example, please refer to Appendix B), which required
students to analyse issues discussed in the lecture in their tutorial groups using another firm.
Each tutorial group consisted of six to seven students. Students were expected to collaborate
in the data collection and analysis and were required to prepare a 10 minute presentation on
the questions set. This work had to be conducted during students’ independent study time. In
the next lecture one group would be chosen to present and receive feedback from the rest of
the class as well as the lecturer.
This means that whereas feedback on students’ performance in relation to clicker questions
and online quizzes was immediate and direct, feedback on students’ group work was delayed
until after the presentation. Moreover, as only one group was able to present each week,
feedback to all other groups was indirect, as students needed to draw conclusions about their
own performance based on the discussion of the chosen group’s presentation.
To provide students with a positive incentive to engage in the group work and to reduce
free-rider problems, the “Pause for Thought” questions were designed to contribute to the
development of an answer to a pre-seen exam question which accounted for 50 per cent of the
summative exam mark.
Collaborative learning is expected to support student learning as students’ skills of
elaboration, justification and argumentation are developed (Yadin & Or-Bach, 2010). As
verbalising economic arguments improves students’ grasp of the subject (Walstad, 2001), this is
of particular importance in economics.
Effective collaborative learning requires students to have a high degree of motivation,
learning autonomy as well as organisational and communication skills (Yadin & Or-Bach,
2010). This suggests that collaborative group work is likely to be particularly well suited to
mature students, who are expected to be highly motivated and well organised (Devlin, 1996;
Forrest & Peterson, 2006).
Moreover, the ability to apply their knowledge to a real-life company is also likely to be more
in tune with mature students’ interest in reflective practice (De Dea Roglio & Light, 2009)
than clicker questions or online quizzes.
The introduction of group work in many aspects of teaching on MBA programmes partly
reflects demands by business representatives and academics that, in order to equip students
for an increasingly team-oriented approach to management, business schools should improve
the constructive use of team work in management education (Blaylock, Falk, Hollandsworth,
and Kopf, 2009). However, in the context of undergraduate management programmes or
MBA programmes for students without management experience, group work is often mainly
directed at improving students’ team working skills (Isabella, 2005; Willcoxson, 2006). By
contrast, in the context of post-experience MBA programmes, group work tends to be
focussed on helping students learn from each other’s experiences and to critically re-evaluate
their previous views and understandings in the light not only of newly acquired theories but
also of the views of other students (De Dea Roglio & Light, 2009).
Findings and Analysis
As previously discussed, students’ attitudes towards formative assessment tend to be related
to their learning approaches. Students with a deep approach to learning are expected to
perceive formative assessment more positively than students with a surface approach to
learning.
Consequently, students were requested to fill in the revised two-factor Study Process
Questionnaire developed by Biggs et al. (2001) as part of the survey. This well established
questionnaire consists of 20 questions which are rated on a 5-point Likert scale. The
questions are designed to elicit the degree to which students have a deep or surface motive for
learning as well as whether they employ a deep or surface learning strategy.
Whereas students with a deep learning motive (DM) show an intrinsic interest in learning,
students with a surface learning motive (SM) tend to be stimulated by a fear of failure.
Students who adopt a deep learning strategy (DS) aim to maximise meaning, whereas
students who follow a surface learning strategy (SS) tend to focus on rote learning and
covering only a limited range of material. Given the close relationship between motive and
strategy, Biggs et al. (2001) suggest consolidating both into a more general construct of
learning approach, which can be either deep (DA) or surface oriented (SA).
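As an illustration, the consolidation of the 20 item ratings into subscale and approach scores can be sketched as below, assuming the standard R-SPQ-2F item key in which items 1, 5, 9, 13 and 17 measure deep motive, items 2, 6, 10, 14 and 18 deep strategy, items 3, 7, 11, 15 and 19 surface motive, and items 4, 8, 12, 16 and 20 surface strategy; the example respondent is hypothetical.

    # Item key for the R-SPQ-2F subscales; ratings are on a 1-5 Likert scale
    DEEP_MOTIVE      = [1, 5, 9, 13, 17]
    DEEP_STRATEGY    = [2, 6, 10, 14, 18]
    SURFACE_MOTIVE   = [3, 7, 11, 15, 19]
    SURFACE_STRATEGY = [4, 8, 12, 16, 20]

    def score_rspq(ratings: dict) -> dict:
        """ratings maps item number (1-20) to a 1-5 Likert rating."""
        dm = sum(ratings[i] for i in DEEP_MOTIVE)
        ds = sum(ratings[i] for i in DEEP_STRATEGY)
        sm = sum(ratings[i] for i in SURFACE_MOTIVE)
        ss = sum(ratings[i] for i in SURFACE_STRATEGY)
        return {"DM": dm, "DS": ds, "SM": sm, "SS": ss,
                "DA": dm + ds,   # deep approach = deep motive + deep strategy
                "SA": sm + ss}   # surface approach = surface motive + surface strategy

    # Hypothetical respondent rating every item at the scale midpoint of 3
    example_scores = score_rspq({i: 3 for i in range(1, 21)})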
[Table 1 about here]
While the Cronbach alpha values to test the subscale reliability compare well with those
found by Biggs et al. (2001, p. 142), the Cronbach alpha values for the deep learning strategy
(DS) and, in particular, the surface learning strategy (SS) are below 0.7. Moreover,
exploratory factor analysis found that clean factor loadings could only be obtained when
forced extraction of two factors was prescribed. As in this case the deep (DA) and the surface
learning approach (SA) could be clearly delineated and as the Cronbach alpha values for both
scales were comfortably above the 0.7 threshold, the analysis was limited to the two-factor
model. The results reported regarding the deep and surface learning approach were calculated
using Biggs et al.’s (2001) scoring method. However, as part of the sensitivity analysis all
tests were repeated using the data from the exploratory factor analysis. The results of both
methods are consistent.
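For reference, the subscale reliability statistic used above is Cronbach’s alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a minimal sketch on illustrative ratings (not the survey responses) is:

    import numpy as np

    def cronbach_alpha(items) -> float:
        """items: respondents x items matrix of Likert ratings.
        alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    # Illustrative ratings: six respondents by five items of one subscale
    alpha = cronbach_alpha([[4, 5, 4, 4, 5],
                            [2, 2, 3, 2, 2],
                            [5, 4, 5, 5, 4],
                            [3, 3, 3, 4, 3],
                            [4, 4, 5, 4, 4],
                            [2, 3, 2, 2, 3]])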
In line with assumptions in adult education theories (Devlin, 1996; Forrest & Peterson, 2006),
the students in this sample have a statistically significantly higher proclivity towards a deep
rather than a surface approach to learning, Wilcoxon Signed Rank test, T = 1.5, p < .001. As
Table 1 suggests, the mean scores for the deep learning approach are 77.5% higher than those
for the surface learning approach. Only three students in the sample scored higher for the
surface learning approach than for the deep learning approach.
To elicit whether the perception of formative assessment and feedback was dependent on
students’ prior knowledge, students were asked to rate their economic knowledge prior to
starting their MBA (PK) on a 5-point Likert scale, with higher scores indicating little prior
knowledge, M = 3.43, SD = 1.01. Students were also asked to rate their confidence at the
beginning of the programme in their ability to cope well with the course (PC) on a 6-point
Likert scale, with a high score indicating low levels of confidence, M = 2.77, SD = 1.25.
Additional controls for gender, age and nationality were introduced. However, these rarely
yielded statistically significant results and are therefore not reported.
Due to the non-normal distribution of most of the survey data and the limited sample size,
correlations between students’ perceptions of the formative assessment methods and student
characteristics were measured using Kendall’s tau correlations. As required, additional non-
parametric analysis was conducted using, for example, Mann-Whitney U and Wilcoxon Signed Rank
tests.
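For readers wishing to replicate the approach, the tests named above are available in standard statistical software; the following sketch uses scipy on hypothetical data, with variable names and values chosen purely for illustration rather than taken from the survey.

    from scipy import stats

    # Hypothetical paired data for eight respondents (not the actual survey responses)
    deep_approach    = [38, 41, 35, 44, 39, 42, 37, 40]   # DA scores
    surface_approach = [22, 19, 25, 18, 24, 20, 26, 21]   # SA scores
    item_rating      = [5, 6, 4, 6, 5, 6, 4, 5]           # a 6-point Likert item
    prior_knowledge  = [2, 4, 3, 5, 2, 4, 3, 5]           # PK: high = little prior knowledge

    # Kendall's tau correlation between an item rating and prior knowledge
    tau, p_tau = stats.kendalltau(item_rating, prior_knowledge)

    # Wilcoxon Signed Rank test on the paired deep and surface approach scores
    t_stat, p_wilcoxon = stats.wilcoxon(deep_approach, surface_approach)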
Student perceptions of clicker questions.
As indicated in Table 2, students rated the usefulness of clicker questions for the
development of their understanding of economics very positively. On a 6-point Likert scale
students rated the benefit of thinking about clicker questions on average at 5.4, SD = 0.62,
and the explanation of answers to clicker questions on average at 5.53, SD = 0.57. While
there is no statistically significant correlation between students’ overall
opinion regarding the helpfulness of clicker questions and their learning approaches (DA,
SA), prior knowledge (PK) and prior confidence (PC), the results suggest that student
characteristics are related to how students perceive the use of clicker questions.
[Table 2 about here]
As expected, the results suggest that students who employ a deep learning approach (DA) are
more positive about clicker questions (2f, 2j, 2n) than students with a surface approach (SA)
to learning (2j, 2k, 2l, 2n, 2o, 2p). In particular the findings suggest that students with a
surface approach to learning are more likely to feel that too many clicker questions were used
(2j) and that too much time was spent on allowing students to work out answers and on
providing feedback (2k). These students were also less inclined to see the discussion of
answers to clicker questions with their neighbours as useful to their learning (2p).
The positive significant correlation between a surface approach to learning and the perception
that clicker questions lead to a reduction of students’ opportunity to actively engage in class
(2l) was somewhat surprising. This might indicate that students with a surface approach to
learning might not necessarily be passive in class, but that they might prefer general
discussions, e.g. in the sense of wider-ranging conversations, rather than tightly
focussed analytical debates. As students had been asked to rate the usefulness of the different
didactic features of the course on a 4-point Likert scale, it was possible to establish whether
students with a surface approach to learning perceived general group discussions as
particularly helpful to their learning.
[Table 3 about here]
The Kendall’s tau correlations revealed that, while students’ approach to learning is
significantly related to their perception of the usefulness of general class discussions, this
perception is significantly positively correlated with a deep rather than a surface approach to learning
(see Table 3). This suggests that the significant positive correlation between a surface
approach to learning and the perception that clicker questions lead to a reduction of students’
opportunity to actively engage in class is not driven by students’ perception that general class
discussions are particularly helpful to develop their understanding of economics.
Students who recorded that they had little knowledge of economics prior to joining the MBA
programme (PK) tended to find clicker questions particularly helpful to identify their subject
mastery (2e) and knowledge gaps (2f). They were also more likely to indicate that getting
clicker questions wrong made them more determined to work harder (2n). This suggests that
clicker questions may help students with little prior subject knowledge develop realistic and
positive self-efficacy beliefs.
In order to substantiate the correlation results, the responses to the question regarding
students’ prior knowledge of economics were recoded into a dummy variable using the mean to
split the distribution. The outcomes of the Mann-Whitney U tests supported the interpretation of the
results of the correlation analysis discussed above (see Table 4).
[Table 4 about here]
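In outline, the mean-split recoding and subsequent comparison described above amount to the following sketch, again on hypothetical ratings rather than the actual survey data.

    import numpy as np
    from scipy import stats

    prior_knowledge = np.array([2, 4, 3, 5, 2, 4, 3, 5])   # PK: high = little prior knowledge
    item_rating     = np.array([4, 6, 5, 6, 4, 5, 5, 6])   # hypothetical clicker item ratings

    # Recode prior knowledge into a dummy by splitting the distribution at its mean
    pk_dummy = (prior_knowledge > prior_knowledge.mean()).astype(int)

    # Mann-Whitney U test on the item ratings of the two resulting groups
    u_stat, p_value = stats.mannwhitneyu(item_rating[pk_dummy == 1],
                                         item_rating[pk_dummy == 0],
                                         alternative="two-sided")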
The results of the Kendall’s tau correlations moreover indicate a positive significant
correlation between students’ concerns about their ability to cope with the subject prior to the
programme (PC) and the perception that clicker questions made students feel better about
their performance in the course (2o) and that getting answers wrong when others got them
right inspired students to work harder (2n). This further indicates that clicker questions might
be helpful to develop realistic and positive self-efficacy beliefs in students who are
apprehensive about the course.
Student perceptions of online quizzes.
Students also generally perceived online quizzes as very useful for the development of their
understanding of business economics. As reported in Table 5, on a 6-point Likert scale online
quizzes scored a mean of 5.57, SD = 0.57. While no statistically significant correlation
between students’ overall opinion regarding the helpfulness of online quizzes and
their learning approaches (DA, SA), prior knowledge (PK) and prior confidence (PC) could
be found, the results reveal that students’ characteristics are relevant for how students use and
perceive online quizzes.
[Table 5 about here]
As expected, the results suggest that students who strive for a deep learning approach are
more positive about the requirement to take quizzes from the start of the course (5o), and
make more intensive use of the feedback provided than students with a surface approach to
learning (5e, 5i). Students with a surface approach to learning also appear to be less
incentivised by the online quizzes to increase their revision effort (5d, 5g) and find them less
helpful for identifying gaps in their knowledge (5l).
Students who thought they had little knowledge of economics prior to joining the MBA
programme (PK) appear to have been particularly incentivised to revise carefully between
lectures due to the requirement to participate in online quizzes (5b, 5d). Their responses also
indicate that they found the online quizzes particularly valuable for identifying their subject
mastery (5k) and knowledge gaps (5l). This suggests that for students with little prior subject
knowledge, the online assessment and feedback was useful to improve their ability to self-regulate
their study behaviour. In addition, students with little prior knowledge appear to
have found the feedback on the online quizzes particularly helpful to develop their
understanding of economics (5j).
As above, in order to substantiate the correlation results, Mann-Whitney U tests were
conducted using a dummy to control for students’ prior knowledge of economics. The results
support the interpretation of the correlation analysis discussed above (see Table 6).
[Table 6 about here]
The correlations and Mann-Whitney U tests suggest that students with less prior knowledge
tended to feel particularly positive about the requirement to engage with online quizzes after
the course had finished (5p). By contrast, prior to the start of the course, students with little
subject knowledge did not have statistically significantly different views about the requirement
to engage with online quizzes than students with high subject knowledge (5o). The Wilcoxon
Signed Rank test for all students indicates that significantly more students thought it was a
good idea to conduct online tests after the course had finished, 5p Mdn = 6.0, than at the
beginning of the course, 5o Mdn = 5.0, T = 126.5, p < .05.
This implies that students’ perception of the usefulness of formative assessment can change
over time as they gain more experience with assessment and feedback methods or as they
become more aware of their intrinsic benefits.
Although mature students are often expected to be successful self-regulated learners (Devlin,
1996; Forrest & Peterson, 2006), it cannot be expected that they are able, ex ante, to reliably
evaluate whether a particular form of formative assessment might be beneficial in a particular
subject context. This indicates that it might be worthwhile, even when teaching mature
students, to explicitly encourage and monitor participation in formative assessment.
Finally, the results of the Kendall’s tau correlations show a positive significant correlation
between students’ concerns about their ability to cope with the subject prior to the
programme (PC) and their perception that the online quizzes were useful to aid their regular
revision of the lecture material (5b, 5d). As supposedly successful self-regulated learners,
mature students who are apprehensive about their performance in a course might be expected
to ease these concerns by engaging in diligent revision of their own accord.
Nevertheless, these findings indicate that even these students appreciate the revision
support provided by online quizzes.
Student perceptions of group presentations.
Adult education theories (Devlin, 1996; Forrest & Peterson, 2006) and the literature on
reflective practice (De Dea Roglio & Light, 2009) tend to assume that group work is not only
beneficial for mature students in general and post-experience MBA students in particular, but
that these students are likely to be particularly appreciative of collaborative learning
activities. De Dea Roglio & Light (2009) highlight that case study focussed group work is
particularly suited for post-experience MBA students, as it allows managers to draw on their
prior knowledge and reflect critically on their current and prior experiences. Moreover, the
confrontation with the different perspectives of other group members adds to the constructive
dissonance in learning, which is expected to prompt students to critically reconsider their
previous views and understandings (De Dea Roglio & Light, 2009; Entwistle & Peterson,
2004). This suggests that group work is likely to be more positively perceived than clicker
questions and online quizzes.
However, the process of learning via constructive dissonance can be very uncomfortable for
students as it might challenge their current identity and view of the world (Lund Dean and
Jolly, 2012). This is likely to be particularly acute in highly international cohorts with
students from a wide variety of cultural and professional backgrounds.
Indeed, while the results of the student feedback on the group presentations indicate that most
students found them worthwhile, this form of formative assessment attracted the lowest mean
score (4.47, SD = 1.47) on a 6-point Likert scale (see Table 7).
[Table 7 about here]
Comparing the scores for group presentations (7a) and online quizzes (5a) using a Kendall’s
W test, Fr = 0.474, p < .01, yielded statistically significant results which indicate that students
found online quizzes more helpful to their learning. The same applies for the comparison of
group presentations (7a) and clicker questions (2a), Fr = 0.540, p < .01. By contrast,
comparing scores for online quizzes (5a) and clicker questions (2a) yielded no statistically
significant results, Fr = .003, p > .10. This suggests that these two forms of formative
assessment are perceived to be of similar use.
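Kendall’s W for a set of related ratings can be computed from respondents’ within-person ranks and relates to the Friedman statistic via chi-square = n(k - 1)W; the sketch below uses hypothetical paired ratings, not the survey data, and omits the usual correction for ties within a respondent’s row.

    import numpy as np
    from scipy import stats

    def kendalls_w(ratings) -> float:
        """ratings: respondents x assessment forms matrix of Likert scores.
        Each respondent's scores are ranked across the forms; W measures the
        agreement of these rankings (0 = no agreement, 1 = complete agreement)."""
        ranks = np.apply_along_axis(stats.rankdata, 1, np.asarray(ratings, float))
        n, k = ranks.shape
        rank_sums = ranks.sum(axis=0)
        s = ((rank_sums - n * (k + 1) / 2) ** 2).sum()
        return 12 * s / (n ** 2 * (k ** 3 - k))

    # Hypothetical 6-point ratings: columns are online quizzes (5a) and group
    # presentations (7a); rows are respondents
    pairs = np.array([[6, 4], [6, 5], [5, 3], [5, 4], [6, 5], [6, 2]])
    w = kendalls_w(pairs)
    chi_square_friedman = pairs.shape[0] * (pairs.shape[1] - 1) * w   # chi2 = n(k-1)W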
To substantiate these findings, Wilcoxon Signed Rank tests were conducted on the survey
data on students’ perception of different didactic features of the course (see Table 3). In line
with the previous findings, the Wilcoxon Signed Rank tests suggest that students found
online quizzes and clicker questions to be of similar use and that they perceived the group
work on the “Pause for Thought” questions, i.e. the group presentations, as less helpful (see
Table 8).
[Table 8 about here]
With regard to research on undergraduate students, Koppenhaver (2006, p. 29) warns that
group activities “often require scarce student time to coordinate and hold meetings, increase
the opportunity for interpersonal conflict, and can create a feeling, especially among
high-achievement students, that they are doing the instructor’s job”. It is possible that despite
post-experience MBA students’ maturity and the frequently held expectations regarding their
ability to manage their own as well as collaborative learning, these students might indeed
face similar challenges to those reported by undergraduate students. Disparities in the
participation, effort, and contribution of group members can cause friction between
students (Friedman, Cox, and Maher, 2008), which cannot always be effectively resolved, as in
the university setting students lack the resources, incentives, and sanctions which are
available to them in their workplace environment (Willcoxson, 2006).
Another potential reason for students’ less positive perception of the formative group work
might be that feedback was not individualised and students often needed to draw conclusions
from the feedback other groups received for their own work. However, a comparison of the
results of the responses to questions 7f and 7g did not yield any statistically significant
differences, Kendall’s W test, Fr = .005, p > .10. This indicates that students rated the
helpfulness of the feedback provided on their own presentations similarly to that provided on
other groups’ presentations.
As group work is likely to be more time-consuming than, for example, participation in clicker
questions, and student feedback highlighted that many students thought Business Economics
was by far the most time-intensive course in the core programme, further analysis was
conducted to establish whether the amount of time students spent on group work or on the
course overall influenced students’ perception of the usefulness of the group presentations.
However, despite extensive testing, no statistically significant link between time spent
studying and students’ perception of group feedback could be established.
To identify whether students’ perception of the group presentations might be affected by a
less positive attitude towards group work in general, the results of students’ rating of the
usefulness of different didactic features of the course (see Table 3) were considered. The
Wilcoxon Signed Rank tests on the data indeed suggest that students perceived group work
and case studies as less beneficial for their understanding of economics than any other
didactic feature with the exception of the textbook (see Table 8). The results also indicate that
there is no statistically significant difference in students’ perception of different
types of group work and case studies. This suggests that students’ opinions about the group
presentations were influenced mainly by their perception of group work in general, and not
merely by particularities of the group work and feedback on the “Pause for Thought”
questions.
Moreover, with regard to concerns that case studies and group work might be perceived less
positively because they require students to prepare during their independent study time, it is
important to note that the Wilcoxon Signed Rank tests suggest that online quizzes and the use
of lecture notes, which also require independent study time, are perceived as positively as
clicker questions (see Table 8). This indicates that the requirement to work outside of class is
probably not a major factor in students’ perception of formative group presentations
compared to alternative forms of formative assessment.
CONCLUSION
The findings of this exploratory research project suggest that some frequently held
preconceptions about mature students’ attitudes towards formative assessment and feedback
might be misconceived. The overwhelming majority of MBA students surveyed found all
three types of formative assessment (clickers, online quizzes and group presentations) helpful
to the development of their understanding of economics. Surprisingly, given the literature on
adult education theories (Devlin, 1996; Forrest & Peterson, 2006) and on reflective practice
(De Dea Roglio & Light, 2009), group work and group presentations were perceived less
positively than more traditional lecture activities and formative assessment by clickers and
online quizzes. This indicates that the fact that formative assessment in economics is often
conducted using multiple choice exercises is unlikely to taint students’ perception of the
subject.
While the research is limited in terms of scope, and therefore generalisability, it suggests that
more research into mature students’ perceptions and use of formative assessment methods
would be beneficial.
It is important to note that the findings do not imply a recommendation to abandon group
work in favour of more individualised learning activities. As highlighted by Entwistle &
Peterson (2004), formative assessment and feedback should be focused not on students’
preferences but on the best way to facilitate their learning. Students’ perception of group
work and its assessment is likely to be affected by the constructive dissonance group work
tends to generate. Although this is uncomfortable for students, it is an important tool to
prompt them to re-evaluate their previously held views (De Dea Roglio & Light, 2009;
Entwistle & Peterson, 2004). However, a better understanding of students’ perception of
group work might help develop didactic tools to support students in managing group work
more effectively.
The research also indicates that students’ learning approach, prior subject knowledge and
confidence impact on their perception and use of formative assessment and feedback.
However, student feedback questionnaires issued as part of quality assurance procedures at
institutional or programme levels typically fail to take account of such differences. As
formative assessment is aimed to encourage a deep approach to learning and to support
particularly students with little subject knowledge and confidence, these findings suggest that
course and programme reviews would benefit from exploring how these characteristics affect
students’ perceptions of the usefulness of formative assessment.
REFERENCES
Aggarwal, A. K., & Lynn, S. A. (2012). Using continuous improvement to enhance an online
course. Decision Sciences Journal of Innovative Education, 10, 25-48.
Angus, S. D., & Watson, J. (2009). Does regular online testing enhance student learning in
the numerical sciences? Robust evidence from a large dataset. British Journal of
Educational Technology, 40, 255–272.
Baeten, M., Kyndt, E., Struyven, K., & Dochy, F. (2010). Using student-centred learning
environments to stimulate deep approaches to learning: Factors encouraging or
discouraging their effectiveness. Educational Research Review, 5, 243-260.
Ball, S. B., Eckel, C., & Rojas, C. (2006). Technology improves learning in large principles
of economics classes: Using our WITS. The American Economic Review, 96, 442-446.
Biggs, J., Kember, D., & Leung, D. Y. P. (2001). The revised two-factor study process
questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133-149.
Biggs, J. B. (1993). From theory to practice: A cognitive systems approach. Higher Education
Research & Development, 12, 73-85.
Blaylock, B. K., Falk, C. F., Hollandsworth, R., & Kopf, J. M. (2009). A borrowed approach
to more effective business education. Journal of Management Education, 33, 577-595.
Cassidy, S. (2011). Self-regulated learning in higher education: Identifying key component
processes. Studies in Higher Education, 36, 989-1000.
Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Review of
Educational Research, 58, 438–481.
Davies, P., & Mangan, J. (2007). Threshold concepts and the integration of understanding in
economics. Studies in Higher Education, 32, 711-726.
De Dea Roglio, K., & Light, G. (2009). Executive MBA programs: The development of the
reflective executive. Academy of Management Learning & Education, 8, 156–173.
Devlin, M. (1996). Older and wiser? A comparison of the learning and study strategies of
mature age and younger teacher education students. Higher Education Research &
Development, 15, 51-60.
Duijnhouwer, H., Prins, F. J., & Stokking, K. M. (2012). Feedback providing improvement
strategies and reflection on feedback use: Effects on students’ writing motivation, process,
and performance. Learning and Instruction, 22, 171-184.
Edens, K. M. (2006). The interaction of pedagogical approach, gender, self-regulation, and
goal orientation using student response system technology. Journal of Research on
Technology in Education, 41, 161-177.
Entwistle, N. J., & Peterson, E. R. (2004). Conceptions of learning and knowledge in higher
education: Relationships with study behaviour and influences of learning environments.
International Journal of Educational Research, 41, 407-428.
Forrest, S. P., & Peterson, T. O. (2006). It’s called andragogy. Academy of Management
Learning & Education, 5, 113-122.
Friedman, B. A., Cox, P. L., & Maher, L. E. (2008). An expectancy theory motivation
approach to peer-assessment. Journal of Management Education, 32, 580-612.
Geide-Stevenson, D. (2009). Does collecting and grading homework assignments impact
student achievement in an introductory economics course? Journal of Economics and
Economic Education Research, 10, 3-14.
Gregorowicz, P., & Hegji, C. E. (1998). Economics in the MBA curriculum: Some
preliminary survey results. Journal of Economic Education, 29, 81-87.
Harris, R. G. (1984). The values of economic theory in management education. The
American Economic Review, 74, 122-126.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research,
77, 81-112.
Healey, N. M. (1993). What role for economics in business and management education?
Journal of Further and Higher Education, 17, 34-39.
Isabella, L. A. (2005). Using existing teams to teach about teams: How an MBA course in
managing teams helps students and the program. Journal of Management Education, 29,
427-452.
Jain, A. (2006). Students’ perceptions of workshop based introductory macro-economics
tutorials: A survey. Economic Papers, 25, 235-251.
Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience
response systems: A review of the literature. Computers & Education, 53, 819-827.
Kelemen, W. L., Winningham, R. G., & Weaver, C. A. (2007). Repeated testing sessions and
scholastic aptitude in college students’ metacognitive accuracy. European Journal of
Cognitive Psychology, 19, 689-717.
Keough, S. W. (2012). Clickers in the classroom: A review and a replication. Journal of
Management Education, 36, 822-847.
Koppenhaver, G. D. (2006). Absent and accounted for: Absenteeism and cooperative
learning. Decision Sciences Journal of Innovative Education, 4, 29-49.
Kyndt, E., Dochy, F., Struyven, K., & Cascallar, E. (2011). The perception of workload and
task complexity and its influence on students’ approaches to learning: A study in higher
education. European Journal of Psychology of Education, 26, 393-415.
Latif, E., & Miles, S. (2011). The impact of assignments on academic performance. Journal
of Economics and Economic Education Research, 12, 1-11.
Lee, W., Courtney, R. H., & Balassi, S. J. (2010). Do online homework tools improve student
results in principles of microeconomics courses? American Economic Review: Papers &
Proceedings, 100, 283-286.
Lund Dean, K., & Jolly, J. P. (2012). Student identity, disengagement, and learning.
Academy of Management Learning & Education, 11, 228-243.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M.,
Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering
learning with questioning methods in large lecture classes. Contemporary Educational
Psychology, 34, 51–57.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated
learning: A model and seven principles of good feedback practice. Studies in Higher
Education, 31, 199-218.
Ping, L. C. (2003). Information and communication technologies (ICT): Addressing the
challenges of economic education. International Review of Economics Education, 2, 25-54.
Polutnik, L. (2010). The case for economic reasoning in MBA education revisited. The
American Journal of Economics and Sociology, 69, 78-84.
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced
learning in the classroom: Long-term improvements from quizzing. Journal of
Experimental Psychology: Applied, 17, 382-395.
Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and
implications for educational practice. Perspectives on Psychological Science, 1, 181-210.
Trees, A. R., & Jackson, M. H. (2007). The learning environment in clicker classrooms:
Student processes of learning and involvement in large university-level courses using
student response systems. Learning, Media and Technology, 32, 21-40.
Walstad, W. B. (2001). Improving assessment in University economics. The Journal of
Economic Education, 32, 281-294.
Webber, D. J., & Mearman, A. (2012). Students’ perceptions of economics: Identifying
demand for further study. Applied Economics, 44, 1121-1132.
Willcoxson, L. E. (2006). “It is not fair!”: Assessing the dynamics and resourcing of
teamwork. Journal of Management Education, 30, 798-808.
Yadin, A., & Or-Bach, R. (2010). The importance of emphasizing individual learning in the
“collaborative learning era”. Journal of Information Systems Education, 21, 185-194.
APPENDIX A
Example of a multiple choice question and its feedback in an online quiz
Question
Which of the following will help to bring down the price of fuel in the UK (other things
equal)?
- A reduction in oil supply quotas agreed by the OPEC countries;
- A successful government campaign to reduce new car prices in the UK leading to an
expansion of car ownership;
- Release of oil from the US oil reserve stocks;
- A fall in the exchange rate at which the UK pound is traded against the US dollar (i.e. the
pound depreciates);
- Higher than expected rates of economic growth recorded in the advanced industrialised
countries.
Feedback
Correct answer: Release of oil from the US oil reserve stocks
- If the US released oil from its reserves, the oil supply would increase (i.e. the
supply function would shift to the right), thereby driving down prices.
- Increased economic growth and increased car ownership would increase the demand for
oil (i.e. shift the demand curve for oil to the right) and therefore increase the price of oil.
- A reduction of output quotas would reduce the supply of oil (i.e. the supply function
would shift to the left) and therefore the price of oil would increase.
- The depreciation of the UK pound against the US dollar (which is the currency in which
oil is traded) means that more pounds now need to be spent to buy the same amount of oil,
i.e. the price of oil in UK pounds increases.
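The shift logic in this feedback can be checked numerically with a simple linear supply-and-demand sketch. The functional forms and parameter values below are illustrative assumptions, not figures used in the course.

```python
# Illustrative only: linear demand and supply for oil, chosen to make the
# shift reasoning in the feedback concrete. All parameters are assumptions.
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve Qd = a - b*P and Qs = c + d*P for the market-clearing price and quantity."""
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

# Baseline market: Qd = 100 - 2P, Qs = 20 + 2P  ->  P* = 20
p0, q0 = equilibrium(100, 2, 20, 2)

# Release of reserve stocks: supply shifts right (intercept rises to 40) -> price falls
p_release, _ = equilibrium(100, 2, 40, 2)

# Faster growth or wider car ownership: demand shifts right (intercept rises to 120) -> price rises
p_growth, _ = equilibrium(120, 2, 20, 2)

# OPEC quota cut: supply shifts left (intercept falls to 10) -> price rises
p_quota, _ = equilibrium(100, 2, 10, 2)

print(f"baseline price {p0:.1f}, after reserve release {p_release:.1f}, "
      f"after demand growth {p_growth:.1f}, after quota cut {p_quota:.1f}")
```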
APPENDIX B
Example of a “Pause for Thought” Question
Please consider the degree to which your chosen firm is likely to be affected by exchange
rate movements.
- Consider in particular how exchange rate movements are likely to
* affect the demand for your firm’s goods and services.
* affect your firm’s cost of production.
* influence decisions about investment and the use or location of production facilities.
* influence decisions about target markets.
* influence trading costs and risks.
- Consider how your firm deals with the risks of exchange rate fluctuations.
TABLES
Table 1: Deep and surface learning approaches

              DM      DS      SM      SS      DA      SA
M             16.5    17.1    8.4     10.7    33.9    19.1
Mdn           19.0    17.0    7.5     10.0    33.0    19.0
SD            3.9     3.5     3.5     3.5     7.07    6.3
Cronbach α    .777    .693    .809    .554    .867    .832
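For readers who wish to reproduce reliability coefficients of this kind for their own cohorts, Cronbach's α can be computed directly from an item-response matrix. The sketch below uses an invented 30 x 5 response matrix rather than the study's data; the function name and sample values are assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of 30 students to a 5-item subscale (1-5 Likert)
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(30, 5)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```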
Table 2: Kendall's τ correlation of students’ perceptions of the use of clickers

Entries in the DA, SA, PK and PC columns are Kendall's τ with two-tailed p-values in parentheses.

(2a) Thinking about how to answer clicker questions in lectures helped my understanding of economics.
     M = 5.4 (SD = 0.62); DA: .050 (.746); SA: -.212 (.169); PK: .046 (.783); PC: .010 (.950)
(2b) The explanation of the answers to clicker questions helped my understanding of economics.
     M = 5.53 (SD = 0.57); DA: .120 (.442); SA: -.253 (.105); PK: .235 (.163); PC: .102 (.537)
(2c) The use of clickers in questions increased my motivation to attend lectures.
     M = 4.8 (SD = 1.19); DA: .124 (.396); SA: -.080 (.584); PK: .129 (.416); PC: .060 (.698)
(2d) The use of clickers helped my attention in lectures.
     M = 5.3 (SD = 0.65); DA: .232 (.128); SA: -.206 (.179); PK: .010 (.950); PC: -.080 (.622)
(2e) The clicker questions helped me to identify which concepts I understood well and which I didn’t.
     M = 5.3 (SD = 0.65); DA: .208 (.173); SA: -.058 (.703); PK: .372** (.024); PC: -.033 (.837)
(2f) Participating in clicker questions helped me identify issues I needed to learn more about.
     M = 5.4 (SD = 0.67); DA: .263* (.085); SA: -.147 (.336); PK: .355** (.032); PC: -.003 (.984)
(2g) I think that the result of the clicker questions encouraged the lecturer to explain concepts and their applications more clearly the second time around.
     M = 5.4 (SD = 0.62); DA: .149 (.332); SA: -.206 (.182); PK: .184 (.270); PC: -.041 (.804)
(2h) The use of clickers made me more interested in the topics we covered in this course.
     M = 4.77 (SD = 1.04); DA: .102 (.486); SA: .047 (.749); PK: -.094 (.552); PC: -.123 (.428)
(2i) The use of clickers helped me to participate more actively in this class.
     M = 4.87 (SD = 0.97); DA: .153 (.305); SA: .057 (.702); PK: .041 (.800); PC: -.036 (.820)
(2j) There were too many clicker questions in lectures.
     M = 2.27 (SD = 1.28); DA: -.244* (.092); SA: .367** (.011); PK: -.129 (.408); PC: -.133 (.387)
(2k) We spent too much time waiting for responses and discussing answers to clicker questions.
     M = 2.07 (SD = 1.17); DA: -.170 (.246); SA: .347** (.018); PK: -.108 (.497); PC: -.085 (.585)
(2l) I think that in lectures where we used more clicker questions, there was usually less opportunity for students to actively engage in discussions in class.
     M = 2.5 (SD = 1.22); DA: -.166 (.251); SA: .414** (.004); PK: -.050 (.748); PC: -.151 (.327)
(2m) I liked the fact that clicker responses are anonymous.
     M = 4.6 (SD = 1.40); DA: .166 (.422); SA: -.100 (.489); PK: .285* (.068); PC: .073 (.632)
(2n) I became determined to work harder if I got answers to clicker questions wrong when most other students got them right.
     M = 4.57 (SD = 1.22); DA: .488*** (.001); SA: -.449*** (.002); PK: .275* (.080); PC: .391** (.011)
(2o) In general, participation in the clicker questions made me feel better about my own performance in the course.
     M = 5.0 (SD = 0.87); DA: .189 (.202); SA: -.472*** (.001); PK: .151 (.347); PC: .316** (.044)
(2p) I think I learned more using clickers because I had to discuss the answer with my neighbour before voting.
     M = 4.87 (SD = 1.04); DA: .164 (.268); SA: -.338** (.023); PK: .170 (.290); PC: .144 (.362)

Note. N = 30. 6-point Likert scale.
* p < .10, ** p < .05, *** p < .01 (2-tailed).
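Correlations of the kind reported in Tables 2, 3 and 7 can be reproduced with standard statistical software. A minimal sketch follows; the response vector and approach scores are invented for illustration, and scipy's kendalltau (which computes tau-b, the variant typically reported by SPSS) is assumed as the implementation.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
clicker_item = rng.integers(1, 7, size=30)   # hypothetical 6-point Likert responses
deep_approach = rng.normal(34, 7, size=30)   # hypothetical deep-approach scores

tau, p_value = kendalltau(clicker_item, deep_approach)  # two-sided p by default
print(f"Kendall's tau = {tau:.3f}, p = {p_value:.3f}")
```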
Table 3: Kendall's τ correlation of students’ perceptions of different didactic features of the course

How helpful would you rate the following didactic features of the course to the development of your understanding of economics?

Entries in the DA, SA, PK and PC columns are Kendall's τ with two-tailed p-values in parentheses.

(3a) Online Quizzes
     M = 3.63 (SD = 0.56); DA: .121 (.310); SA: -.124 (.304); PK: .406*** (.002); PC: .140 (.269)
(3b) Clickers
     M = 3.60 (SD = 0.62); DA: .058 (.620); SA: -.066 (.577); PK: .135 (.284); PC: .035 (.776)
(3c) Explanations by the lecturer
     M = 3.70 (SD = 0.47); DA: .237** (.044); SA: -.170 (.155); PK: .165 (.193); PC: .042 (.735)
(3d) Discussions in class (excluding clickers)
     M = 3.53 (SD = 0.51); DA: .208* (.071); SA: .060 (.650); PK: -.035 (.782); PC: -.076 (.534)
(3e) Lecture notes
     M = 3.57 (SD = 0.50); DA: .243** (.043); SA: .037 (.759); PK: .071 (.588); PC: .111 (.385)
(3f) Group work on “Pause for Thought Questions”
     M = 2.73 (SD = 1.01); DA: .016 (.910); SA: .161 (.269); PK: .050 (.752); PC: -.148 (.336)
(3g) Group work in class excluding “Pause for Thought”
     M = 2.87 (SD = 0.97); DA: .097 (.388); SA: .128 (.260); PK: .133 (.355); PC: .055 (.648)
(3h) Krones case
     M = 3.00 (SD = 0.74); DA: .079 (.487); SA: -.078 (.497); PK: .187 (.129); PC: .199* (.099)
(3i) Other case studies used in class
     M = 3.23 (SD = 0.77); DA: .119 (.294); SA: -.093 (.419); PK: .150 (.225); PC: .254** (.035)
(3j) Textbook
     M = 2.70 (SD = 0.99); DA: .111 (.319); SA: -.071 (.530); PK: .214* (.077); PC: .172 (.148)

Note. N = 30. 4-point Likert scale.
* p < .10, ** p < .05, *** p < .01 (2-tailed).
Table 4: Mann-Whitney U tests on students’ perceptions of the use of clickers

                                2a      2b      2c      2d      2e*     2f**    2g      2h
Mann-Whitney U                  90.0    73.5    88.0    99.0    67.5    58.5    90.0    99.0
Wilcoxon W                      261.0   244.5   259.0   270.0   238.5   229.5   261.0   270.0
Z                               -.853   -1.681  -.887   -.423   -1.904  -2.327  -.853   -.423
Asymp. Sig. (2-tailed)          .393    .093    .375    .672    .057    .020    .393    .672
Exact Sig. [2*(1-tailed Sig.)]  .465a   .146a   .415a   .723a   .087a   .035a   .465a   .723a
Monte Carlo Sig. (2-tailed)     .462b   .127b   .412b   .732b   .082b   .023b   .462b   .681b
Monte Carlo Sig. (1-tailed)     .247b   .085b   .203b   .355b   .046b   .016b   .247b   .344b

                                2i      2j      2k      2l      2m      2n**    2o*     2p
Mann-Whitney U                  105.5   85.5    89.0    80.0    80.0    56.0    70.0    84.0
Wilcoxon W                      276.5   163.5   167.0   158.0   251.0   227.0   241.0   255.0
Z                               -.118   -.991   -.849   -1.258  -1.229  -2.317  -1.700  -1.114
Asymp. Sig. (2-tailed)          .906    .322    .396    .208    .219    .020    .089    .265
Exact Sig. [2*(1-tailed Sig.)]  .917a   .346a   .439a   .249a   .249a   .028a   .113a   .325a
Monte Carlo Sig. (2-tailed)     1.000b  .342b   .413b   .207b   .230b   .018b   .092b   .292b
Monte Carlo Sig. (1-tailed)     .523b   .166b   .203b   .100b   .116b   .008b   .057b   .159b

Note. Grouping variable: PK. a. Not corrected for ties. b. Based on 10000 sampled tables
with starting seed 605580418.
* p < .10, ** p < .05, *** p < .01 (2-tailed).
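Group comparisons of this kind can be sketched as follows. The 18/12 split of the class and the response data below are invented for illustration, and the Monte Carlo p-value is approximated by a simple label-permutation scheme rather than the exact procedure used to produce the table.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical 6-point Likert responses, split into students with (n=18) and
# without (n=12) prior economics knowledge -- invented data for illustration.
with_pk = rng.integers(3, 7, size=18)
without_pk = rng.integers(2, 7, size=12)

u_stat, p_asymp = mannwhitneyu(with_pk, without_pk, alternative="two-sided")

# Monte Carlo approximation of the two-sided p-value: shuffle the group labels
# and count permutations whose U is at least as far from its null mean as the
# observed U. (The software used for the table follows a different procedure.)
pooled = np.concatenate([with_pk, without_pk])
n1, n2 = len(with_pk), len(without_pk)
null_mean = n1 * n2 / 2
n_samples = 10_000
extreme = 0
for _ in range(n_samples):
    perm = rng.permutation(pooled)
    u_perm, _ = mannwhitneyu(perm[:n1], perm[n1:], alternative="two-sided")
    if abs(u_perm - null_mean) >= abs(u_stat - null_mean):
        extreme += 1

print(f"U = {u_stat:.1f}, asymptotic p = {p_asymp:.3f}, "
      f"Monte Carlo p ~ {extreme / n_samples:.3f}")
```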
Table 5: Kendall's τ correlation of students’ perceptions of online quizzes

(5a) Online quizzes were helpful to develop my understanding of business economics. M = 5.57 (SD = 0.57)
(5b) My incentive to revise the lecture material between lectures increased because I was required to participate in online quizzes. M = 5.03 (SD = 1.03)
(5c) My incentive to take the tests increased because I knew the lecturer monitored my participation. M = 4.63 (SD = 1.30)
(5d) Because of the online quizzes I tended to revise the lecture material more carefully between lectures than I otherwise would have done. M = 4.73 (SD = 1.36)
(5e) If I scored less than 70 per cent I did some more revision of the lecture material before I retook the quiz. M = 4.45
(5f) When taking the online quizzes I first read the questions set and then scanned my lecture slides and notes to find the correct answer. M = 3.03 (SD = 1.43)
(5g) I mainly guessed the answers to the online quizzes. M = 2.33 (SD = 1.18)
(5h) For the online quizzes I tended to consider carefully the feedback on questions I answered wrong. M = 5.13 (SD = 0.97)
(5i) For the online quizzes I tended to consider carefully the feedback on questions I answered correctly. M = 4.13 (SD = 1.41)
(5j) The explanation of the answers to the questions in the online quizzes helped my understanding of economics. M = 5.07 (SD = 0.98)
(5k) Participating in online quizzes helped me identify concepts which I understood well and which I didn’t. M = 5.27 (SD = 0.74)
(5l) Participating in online quizzes helped me identify issues I needed to learn more about. M = 5.20 (SD = 1.03)
(5m) Participating in online quizzes took up too much time. M = 3.9 (SD = 1.45)
(5n) I became determined to work harder if I got answers to online quizzes wrong. M = 4.57 (SD = 1.10)
(5o) At the beginning of the course I thought it was a good idea to require students to take online quizzes. M = 4.63 (SD = 1.16)
(5p) After the course finished I thought it had been a good idea to require students to take online quizzes. M = 5.40 (SD = 0.97)

Note. N = 30. 6-point Likert scale.
Table 6: Mann-Whitney U tests on students’ perceptions of online quizzes

                                5a      5b**    5c      5d*     5e      5f      5g      5h
Mann-Whitney U                  93.0    52.0    76.5    68.5    74.0    100.0   105.5   75.0
Wilcoxon W                      264.0   223.0   247.5   239.5   227.0   271.0   183.5   246.0
Z                               -.703   -2.510  -1.383  -1.836  -1.290  -.347   -.110   -1.511
Asymp. Sig. (2-tailed)          .482    .012    .167    .066    .197    .729    .912    .131
Exact Sig. [2*(1-tailed Sig.)]  .545a   .017a   .185a   .095a   .227a   .755a   .917a   .172a
Monte Carlo Sig. (2-tailed)     .482b   .013b   .172b   .067b   .198b   .727b   .914b   .149b
Monte Carlo Sig. (1-tailed)     .272b   .007b   .081b   .029b   .100b   .365b   .454b   .082b

                                5i      5j*     5k**    5l**    5m      5n**    5o      5p**
Mann-Whitney U                  99.0    70.0    54.0    58.0    92.5    59.5    92.0    49.5
Wilcoxon W                      270.0   241.0   225.0   229.0   170.5   230.5   263.0   220.5
Z                               -.390   -1.809  -2.538  -2.313  -.678   -2.127  -.702   -2.845
Asymp. Sig. (2-tailed)          .696    .070    .011    .021    .498    .033    .483    .004
Exact Sig. [2*(1-tailed Sig.)]  .723a   .113a   .022a   .035a   .518a   .039a   .518a   .012a
Monte Carlo Sig. (2-tailed)     .708b   .066b   .019b   .026b   .517b   .036b   .498b   .004b
Monte Carlo Sig. (1-tailed)     .352b   .038b   .011b   .015b   .260b   .016b   .239b   .003b

Note. Grouping variable: PK. a. Not corrected for ties. b. Based on 10000 sampled tables
with starting seed 2000000.
* p < .10, ** p < .05, *** p < .01 (2-tailed).
Table 7: Kendall's τ correlation of students’ perceptions of group presentations

Entries in the DA, SA, PK and PC columns are Kendall's τ with two-tailed p-values in parentheses.

(7a) The group work for the "Pause for Thought Questions" was helpful to develop my understanding of economics in the business context.
     M = 4.47 (SD = 1.17); DA: -.177 (.224); SA: .095 (.518); PK: .019 (.905); PC: -.122 (.470)
(7b) The group work for the "Pause for Thought Questions" was helpful to develop my research skills.
     M = 4.43 (SD = 1.25); DA: .003 (.985); SA: .027 (.853); PK: .036 (.815); PC: -.035 (.819)
(7c) The group work for the "Pause for Thought Questions" was helpful to develop my team working skills.
     M = 4.33 (SD = 1.27); DA: -.030 (.835); SA: -.003 (.985); PK: .043 (.781); PC: -.089 (.560)
(7d) My understanding of business economics benefitted from the discussion with my group members.
     M = 4.13 (SD = 1.38); DA: -.011 (.940); SA: .035 (.806); PK: .160 (.302); PC: -.018 (.907)
(7e) The requirement to prepare short presentations on the "Pause for Thought Questions" just in case we were asked to present in the next lecture helped my group to work more consistently towards the pre-seen exam question than we would otherwise have done.
     M = 4.47 (SD = 1.48); DA: .062 (.666); SA: -.024 (.866); PK: .138 (.374); PC: -.047 (.758)
(7f) The feedback on the short presentations on the "Pause for Thought Questions" for my firm was useful.
     M = 4.57 (SD = 1.28); DA: .116 (.422); SA: -.326** (.025); PK: .243 (.122); PC: .181 (.239)
(7g) The feedback on the short presentations on the "Pause for Thought Questions" for other students' firms was useful.
     M = 4.60 (SD = 1.45); DA: .132 (.362); SA: -.207 (.154); PK: .185 (.238); PC: .138 (.370)

Note. N = 30. 6-point Likert scale.
* p < .10, ** p < .05, *** p < .01 (2-tailed).
Table 8: Wilcoxon signed rank tests on students’ perceptions of different didactic features of the course

How helpful would you rate the following didactic features of the course to the development of your understanding of economics?

                                                                    M      Mdn    SD
(3a) Online Quizzes                                                 3.63    4     0.56
(3b) Clickers                                                       3.60    4     0.62
(3c) Explanations by the lecturer                                   3.70    4     0.47
(3d) Discussions in class (excluding clickers)                      3.53    4     0.51
(3e) Lecture notes                                                  3.57    4     0.50
For items (3a) to (3e), the null hypothesis that the median of the differences between the variables equals 0 can be retained.

(3f) Group work on “Pause for Thought Questions”                    2.73    3     1.01
(3g) Group work in class excluding “Pause for Thought Questions”    2.87    3     0.97
(3h) Krones case                                                    3.00    3     0.74
(3i) Other case studies used in class                               3.23    3     0.77
(3j) Textbook                                                       2.70    3     0.99
For items (3f) to (3j), the null hypothesis that the median of the differences between the variables equals 0 can be retained.

Note. N = 30. 4-point Likert scale.
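Pairwise comparisons of the kind summarised in Table 8 can be sketched with a Wilcoxon signed-rank test on paired ratings. The two rating vectors below are invented for illustration and do not come from the survey.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
# Hypothetical paired 4-point helpfulness ratings of two didactic features by 30 students
online_quizzes = rng.integers(3, 5, size=30)   # ratings of 3 or 4
group_work = rng.integers(2, 5, size=30)       # ratings of 2, 3 or 4

# Zero differences are common with Likert data; Pratt's method keeps them in the ranking
stat, p_value = wilcoxon(online_quizzes, group_work, zero_method="pratt")
print(f"W = {stat:.1f}, p = {p_value:.3f}")
```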