
Running head: CRITICAL LITERATURE REVIEW #1: DIGITAL GAME-BASED LEARNING
Critical Literature Review #1: Digital game-based learning
Preston A. Clark
Central Michigan University
Author Note
This is a critical review of an article written by Séverine Erhel and Eric Jamet.
Table of Contents
The Problem
Theoretical Perspective and Literature Review
Research Design and Analysis
Interpretation and Implication of Results
Closing Remarks
References
The Problem
Erhel and Jamet (2013) were interested in exploring how digital game-based learning (DGBL) is impacted when learning instructions are incorporated into the game versus the usual entertainment instructions. The authors hypothesized that changing the type of instructions presented to the learner at the beginning of the game would either enhance or hinder the learner's motivation. The authors were attempting to increase the cognitive processes that would lead to deeper learning of medical content. Though the authors did not explicitly list their research questions, the following appear to be the questions they wished to address:
1. What are the effects of providing either learning instructions or entertainment
instructions, within a digital learning game, on motivation and deeper cognitive
learning? Will it increase deeper learning?
2. Will entertainment instruction significantly lower intrinsic motivation?
3. Does entertainment instruction encourage performance by promoting fear of failing?
4. For Experiment 2: Does the addition of providing knowledge of the correct responses
(KCRs) for quiz questions increase deep learning responses? (pp. 158-159)
This writer believes it would have been beneficial if Erhel and Jamet had simply written out their research hypotheses and questions instead of embedding them within the body of their paper. A clearer presentation would have made it easier to compare the stated research questions to the subsequent results. Additionally, listing their hypotheses would have minimized any confusion as to what they were studying. This writer found the need to constantly recheck whether the questions presented were indeed the questions being addressed in the later discussions.
The authors based their research on a value-added research method taken from Adams, Mayer, MacNamara, Koenig, and Wainess (2012) to examine how learning instructions within a digital game would enhance cognitive processes versus the commonly used entertainment instructions that focus on the elements of the game. The authors believed that this was a validated approach for comparing their single independent variable of instruction type. They also believed that this method was the most rigorous approach and would ensure a quality study (p. 157).
Based on the studies they cited, Erhel and Jamet found that DGBL had great potential as
a learning tool but noted that a missing element of DGBL promoting deep learning was in how
instructions were delivered in the games. They wanted to see if the use of learning instructions
rather than the usual entertainment instructions would lead to deep learning of content in
participants. If so, they would provide further support for DGBL software as a useful, learning- and cognition-enhancing activity. The authors stated a legitimate issue to explore
with the potential to provide new knowledge in the area of digital game instruction for the
education community.
The authors’ description of the types of data they wished to collect in the study seemed to
be reasonable, as they were comparing the independent variable of learning instructions or
entertainment instructions, to address their primary research question. However, confusion arises over the additional questions the authors had and whether the methods they used were appropriate for the study. In the following sections, the approach and subsequent outcomes will be examined.
Theoretical Perspective and Literature Review
Erhel and Jamet endeavored to provide a clear history of research from the point where
digital learning games were seen only as digital forms of entertainment to where the games
became tools for enhancing learning. The authors described how research initially looked at
multimedia and hypermedia, as it was believed that work in those areas would yield quality research and promote higher cognitive processing (p. 156). They then presented studies showing how the focus of research shifted to digital games and DGBL in recent years.
The literature review for this study was grouped according to the different areas of focus
where the authors believed the studies fit. The literature review was effectively organized. It
allowed the reader to see the connections Erhel and Jamet were trying to make as they tied one
group of studies to the next. The areas identified focused on the use of digital games in learning,
DGBL and motivation, a comparison of digital learning games to conventional media, and using
instructions to improve learning effectiveness (pp. 156-158).
A weak point in the literature review was that the authors did not summarize the review to provide the reader with an introduction to their hypotheses. The writers moved immediately to the first of two experiments before one was able to discern the initial areas of interest that the authors wished to pursue in their study. As a result, the reader cannot fully follow the leap to the purpose of the current study. It is only after the authors provided mini-literature reviews within the introductory sections of their experiments that the connection was made clear.
It is during the mini-literature review for the second experiment that a description of the
knowledge of correct response (KCR) construct is provided. The authors adapted the use of
KCRs to their experiment with the plan to capture data that would be relevant to their question
on influencing cognitive processing with the use of entertainment instructions for DGBL.
The three research questions inferred from the authors’ statements about experiment one
were consistent outgrowths of the literature reviews presented. However, there is a disconnect between studying the independent variable of instruction type and describing the other variables of motivation and learning performance as dependent. This writer feels that motivation and learning performance could also have been analyzed as independent variables. The assumption that instructions alone are the sole driver of participant behavior would need to be examined further for research clarity.
The use of the feedback construct of KCRs in the second experiment seemed to be
identified more as an afterthought based primarily on one previous study (p. 162). There did not appear to be enough literature presented to warrant the rationale the authors provided. This reader would have preferred more explanation of the thinking behind the experiment that linked the use of KCRs to introductory instructions.
Research Design and Analysis
The authors created two experiments and applied the value-added method of Adams et al. (2012), in which only the change (i.e., one independent variable) in instruction type was applied differently across the two groups of participants in each experiment. In the first experiment, Erhel and Jamet compared two types of instructions, learning instructions and entertainment instructions, to see how they impacted intrinsic motivation and how well participants achieved goals.
The researchers stated there were 46 participants for the first experiment and divided
them into two similar groups of nine men and 15 women with an average age of approximately
20.5. Immediately one sees a discrepancy as the total number of the participants would equal 48
and not the described 46 participants. The following provides a breakdown of the rest of the
approach to collecting data (p. 159):
• One room was divided into six booths for participants.
• Participants were assigned to one of two experimental conditions, where they were introduced to the digital game-based learning software called ASTRA.
• A pre-test questionnaire (prior knowledge) excluded those participants whose scores were higher than three points. The maximum possible score was six.
• There were five phases to the experiment:
1. Participants wore headphones and followed the simulation by first reading the initial instructions and then using the ASTRA simulation. The order of diseases presented and the instruction condition were counterbalanced. No mention was made that the study was also examining participants' motivation or knowledge during this phase.
2. A four-question quiz was administered after each of the four presentations.
3. After the ASTRA session, a series of questionnaires was administered. The first was a 15-question Likert survey that addressed motivation.
4. The second questionnaire had four paraphrase-type questions about participant knowledge and memorization.
5. The final questionnaire had four inference-type questions.
This approach was similar to the value-added method used by Adams et al. However,
Erhel and Jamet differed in the number of questionnaires used. One could be concerned as to the
amount of time and overall interest required for participants to complete them. That is to say,
does the number of questions/questionnaires used within the experiment impact the methodology
chosen in any way? While it is understandable that the authors identified a number of dependent
variables to study, it seems to this reader that there was no need to study them all at the same
time.
In the second experiment, the authors added KCRs to the ASTRA quizzes to examine whether they could elicit deeper learning in participants who were given entertainment instructions.
All other aspects of the experiment were the same as in the first experiment. They identified a
similar group of 44 new participants and performed the experiment with the added element of the
KCRs (p. 162).
Unlike Experiment 1, Experiment 2 had a clearly identified purpose. The primary focus was the introduction of the new independent variable of adding KCRs to the quizzes of those participants who were given introductory entertainment instructions. The authors hoped that by adding KCRs, participants would show an increase in cognitive processes while learning content.
Interpretation and Implication of Results
The authors were able to convey enough of the literature review to explain their rationale
for the research they wished to conduct. The challenge for the authors was in being more explicit
with their research questions and in evaluating their chosen method to be certain that they were
getting the type of data they needed for their study.
The authors’ attempt to identify how DGBL could be strengthened to ensure deep
learning and increased cognitive capability was a valuable endeavor. As more DGBL
environments are being utilized in the classroom, their choice to identify instruction type as an
independent variable for looking at how it may influence deep learning was a good step toward
providing useful knowledge to the academic and educational communities.
The experiments did yield some questions, however. The authors may have combined too many dependent variables in this study when perhaps a simpler experiment would have addressed their primary question better. The additional variables
identified in the study could have been set aside for a later study once their primary component
was resolved. For example, given the number of dependent variables, one might wonder if
performing an analysis on just the type of instruction was enough.
In addition, the question of co-influence of the other dependent variables may have been
an issue to consider. For example, was motivation only impacted by a change in instruction?
How can one be sure that it was instructions and not expectation based on the type of instruction
that could have influenced the participant? It may not be as clearly identifiable as the authors
made it seem.
The authors noted a number of their own limitations within the experiments. In particular, they recognized that their methodology was problematic in that their quizzes may not have been challenging enough, given the high scores participants received (p. 165). It was good that they noticed this flaw, as it limited their ability to see any range in the data when most participants reached the upper end of achievement.
Moreover, a concern that was not mentioned arises with the use of KCRs in the second experiment. It is possible to interpret the KCRs as a learning instruction, and therefore as no different from the learning instructions provided at the beginning of the ASTRA session. When seen as an instruction, the KCR is perhaps more effective than the original learning instructions used during the introduction of the game because of its immediate proximity to the knowledge questions that follow. Having the KCRs occur during the quizzes may indeed have increased participant learning of the material, giving participants better recall for the deeper cognitive questioning that followed immediately after the ASTRA session. The question of whether this phenomenon reflects deeper learning or simply rote learning that occurred as a result of providing and repeating answers challenges the authors' interpretation of their results, since the KCRs are studied just before participants are tested for knowledge.
Closing Remarks
In this study, Erhel and Jamet attempted to demonstrate the relevance of how the type of
instruction in a digital game that is designed for learning can be more effective at increasing
motivation and enhancing content knowledge (i.e., deep learning). While some of their results lend support to their hypotheses, others, such as the KCR results, invite further discussion and exploration. Questions of methodology and written organization caused some confusion in this study. Had the authors rewritten sections of their literature review to formalize their research questions, and made the elements within their quizzes more rigorous, there would have been greater clarity in the knowledge they gleaned from their work, and an even better contribution to share with the learning community.
References
Adams, D. M., Mayer, R. E., MacNamara, A., Koenig, A., & Wainess, R. (2012). Narrative
games for learning: Testing the discovery and narrative hypotheses. Journal of
Educational Psychology, 104(1), 235-249.
Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback
on motivation and learning effectiveness. Computers & Education, 67, 156-167.