Critical Review of Research #2


Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

Katie Krieg

EDU 800


Critical Review

Problem

1. Clarity of the Problem

The problem this article addressed was clearly stated in the introduction of the paper. The authors explored the use of peer feedback to promote higher levels of thinking during online discussions. The introduction began by explaining that “student discussion has been identified as a key component of interactive online learning environments; both instructors and researchers agree that this is where the ‘real’ learning takes place” (Ertmer et al., 2007, p. 412). While feedback has proven effective in face-to-face classes, the authors pointed out that very little research existed regarding feedback in online learning environments, with even less focus on how feedback enhances the quality of discussion responses. The authors also stated that “the purpose of this exploratory study was to fill this gap by examining students’ perceptions of the value of giving and receiving peer feedback regarding the quality of discussion postings in an online course” (p. 416). From the introduction and the purpose of the study section, the authors made their purpose and the problems they were addressing very clear.

2. Need and Educational Significance

The authors used the introduction to support the idea that good student discussion leads to valid understanding, but that online discussions often center on personal stories and do not include reflection or critical thinking. To help improve reflection and critical thinking during discussion, the role of feedback in instruction was introduced. Morey (2004) stated that “in general, instructional feedback provides students with information that either confirms what they already know or changes their existing knowledge and beliefs” (p. 413). As Lynch (2002) and Palloff & Pratt (2001) further pointed out, “feedback may be even more important in online learning environments than in traditional classrooms” (p. 414). Students agreed that feedback was an important factor in online environments and expected it to be:

1. Prompt, timely, and thorough
2. Ongoing formative and summative
3. Constructive, supportive and substantive
4. Specific, objective, and individual
5. Consistent (p. 414)

Because of the amount of time this level of feedback would require of instructors, peer feedback was suggested as a possible solution. After addressing the importance of feedback in an online learning environment, the authors pointed out the lack of research that existed on the use of feedback in online courses, particularly peer feedback. If discussions are an important element of online courses, and feedback is essential in reaching higher levels of thinking but very time consuming, then it is justifiable to research how peer feedback can be utilized with online discussion postings.

3. Researchable

This exploratory study focused on the impact of peer feedback on the quality of discussion responses, and on whether that quality could be maintained or increased through peer feedback. While higher levels of thinking are measurable, they can be difficult to assess. The authors addressed this challenge by utilizing the levels of Bloom’s taxonomy: “Postings at the knowledge, comprehension, and application levels received one point; postings demonstrating analysis, synthesis, or evaluation received two points; non-substantive comments received zero points” (p. 417).
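Because the rubric simply maps each Bloom level to a point value, it is straightforward to operationalize. Below is a minimal sketch in Python; the level labels (including “non_substantive”) are hypothetical names chosen for illustration, not terms taken from the study’s materials.

```python
# A minimal sketch of the scoring rubric described above (p. 417).
# Level labels are illustrative; the study's raters judged levels by hand.
BLOOM_SCORES = {
    "non_substantive": 0,
    "knowledge": 1,
    "comprehension": 1,
    "application": 1,
    "analysis": 2,
    "synthesis": 2,
    "evaluation": 2,
}

def score_posting(level: str) -> int:
    """Return the rubric score for a posting rated at the given Bloom level."""
    return BLOOM_SCORES[level]

# Example: a posting judged to demonstrate analysis earns two points.
assert score_posting("analysis") == 2
```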


Student perceptions of the value of giving and receiving peer feedback were also examined in this study. The researchers used entry and exit surveys along with interviews to obtain both quantitative and qualitative data regarding students’ perceptions of the value of receiving instructor feedback compared to peer feedback, and of giving peer feedback. These methods would clearly allow the authors to research the problem.

Theoretical Perspective and Literature Review

4. Conceptual Framework

The conceptual framework of this paper focused on the relationships among three distinct areas: discussions, critical thinking, and feedback. Ideas from these three areas are consistently intertwined throughout the article. The authors pointed out the importance of student discussion at the beginning of the paper; Cunningham (1992) noted “that it is the dialog among community members that promotes learning” (pp. 412-413). As the authors emphasized the importance of discussion, they also noted the lack of critical thinking in responses. Deeper thinking is an important part of the learning process, and therefore an essential part of learner responses in an online discussion. To help promote deeper levels of thinking in online discussions, the authors remarked on the importance of feedback and noted that good feedback performs these functions:

1. Clarifies what good performance is (goals, criteria, standards)
2. Facilitates the development of self-assessment and reflection
3. Delivers high quality information to students about their learning
4. Encourages teacher and peer dialogue around learning
5. Encourages positive motivational beliefs and self esteem
6. Provides opportunities to close the gap between current and desired performance
7. Provides information to teachers that can help shape teaching (pp. 413-414)

The authors continued to build upon the three main concepts: discussions, critical thinking, and feedback. Feedback was addressed from two perspectives: the instructor’s and the peers’. By utilizing Bloom’s taxonomy, the instructors and peers evaluated the critical thinking levels of discussion postings and provided feedback to participants. Feedback helps learners build upon what they know, and therefore can help them reach higher levels of thinking, and deep thinking is a crucial part of effective online discussions.

5. Relevant Theory and Prior Research

The authors began with evidence supporting the value of discussions in the learning process. When focusing on online environments, the authors noted how discussions often centered on personal stories and lacked the deep thinking necessary to achieve critical levels of learning. This led to information on the benefits of feedback for improving learning. The authors first focused on the general role of feedback in instruction and went further in depth with information about feedback in online learning environments. Feedback was deemed to be possibly even more important in online learning than in face-to-face settings (p. 414). Specifically, Ertmer and Stepich (2004) pointed out how “research has shown that the quality of student discussion responses can be increased through the use of constructive feedback that is prompt, consistent, and ongoing” (p. 414). Dunlap (2005) noted how instructors would have to be constantly online in order to provide this level of feedback, something that would be rather impractical (p. 414).

The literature review continued with advantages and challenges of peer feedback. Previous research pointed out the benefits of both receiving and giving feedback. Challenges focused on the general difficulties in giving meaningful feedback. This literature review led to the purpose of the study, where the authors pointed out that little research existed examining the impact of feedback in online learning environments. “Additionally, very few, if any, studies have examined the impact of using peer feedback to shape the quality of discourse in an online course” (p. 416). Due to the limited amount of existing research related to this study, the authors effectively connected their investigation to prior literature.

6. Summary of Literature Review

The literature review flows clearly, building from one idea to the next. Although it does not structurally end with a summary of the literature, the authors did address the brevity of previous literature on online feedback in the purpose of the study. Their summary of the literature is very brief, containing only a fraction of a sentence before moving on to the reasons for this study.

While feedback has been demonstrated to be an effective strategy in traditional learning environments, limited research has been conducted that examines the role or impact of feedback in online learning environments in which learners construct their own knowledge, based on prior experiences and peer interactions (p. 416).

This summary only hints at a tiny portion of the literature that was addressed in the beginning of the article. It does not acknowledge discussion postings, nor does it summarize any perceptions of feedback received by instructors and peers, or perceptions of giving peer feedback.

7. Clarity of Research Question

The research questions focused on the impact of feedback on the quality of postings, as well as the perceptions of giving and receiving peer feedback. The authors very clearly stated the research questions in the purpose of the study (p. 416):


1. What is the impact of peer feedback on the quality of students’ postings in an online environment? Can the quality of discourse/learning be maintained and/or increased through the use of peer feedback?
2. What are the students’ perceptions of the value of receiving peer feedback? How do these perceptions compare to the perceived value of receiving instructor feedback?
3. What are the students’ perceptions of the value of giving peer feedback?

To address the appropriateness of their research questions, the authors emphasized the gap that existed in information about feedback in online learning environments. The authors had previously stressed the importance of discussion as part of the learning process. They also addressed how critical thinking during discussion is necessary for deep learning to occur (p. 413), supporting the first research question. Since time is a crucial factor when providing feedback, peer feedback was suggested as an alternative to instructor feedback (p. 414). The suggestion of peer feedback led to the second and third research questions, focusing on both receiving and giving peer feedback.

Research Design and Analysis

8. Study Design

Through this exploratory study, the authors used both descriptive and evaluative approaches to examine not only the impact of feedback on discussion postings, but also the perceptions of giving and receiving feedback (p. 416). Descriptive studies gather information without changing or manipulating the environment; they intend to collect data to demonstrate relationships and describe situations, and they do not make predictions or answer how, when, or why. Evaluation research looks to assess a situation, providing information beyond what one might gain from a simple observation.

This study focused purely on “what” was happening. The authors wanted to know what impact feedback had on discussion postings, and what the perceptions were of giving and receiving feedback. These questions do not require an experimental study that changes the environment, nor do they ask how, when, or why. They require the researcher to gather information about a situation, making the descriptive and evaluative methods an appropriate choice for this study.

9. Sampling Methods

The study included 15 graduate students, 10 female and 5 male. All participants were enrolled in an online technology integration course during the spring semester of 2005. The participants included eight administrators, four former or current teachers, two international students with no previous K-12 experience in the United States, and one student with a teaching degree but no teaching experience. Five of the participants were pursuing a master’s degree and nine were doctoral students (p. 417).

Holton and Burnett (1997) stated that “one of the real advantages of quantitative methods is their ability to use smaller groups of people to make inferences about larger groups that would be prohibitively expensive to study” (p. 71). While a small sample size may be acceptable in qualitative research, it can reduce reliability. Apart from education and gender, little is known about the participants. The sample does not include much variety, with 13 of the 15 participants having a background in education, which could make generalizing results difficult. Also, with 10 of the 15 participants being female, this too may limit the ability to generalize results.


10. Procedures and Materials

Students provided feedback to one another through a numerical score (0-2) based on Bloom’s taxonomy, along with descriptive comments. Bloom’s taxonomy was chosen because of participants’ familiarity with it, and because of the researchers’ successful implementation of it in a similar graduate course. The instructors provided feedback for the first six weeks to model the feedback process, and students provided peer feedback using the same method for the following six weeks. The instructors received all peer feedback before distributing it to students to ensure anonymity (p. 417). The scoring method using Bloom’s taxonomy and the scaffolding techniques used by the instructors helped provide a consistent method for providing peer feedback.

This study gathered both quantitative and qualitative data from participant interviews, scored weekly discussion posts, and entry and exit surveys. Because postings were rated by different students throughout the course and not all postings received a peer score, the feedback scores from the students were inconsistent and incomplete. For data collection purposes, two researchers scored all postings. All date, time, and discussion question numbers were removed from postings to ensure this information did not influence results. The researchers began by scoring a randomly selected discussion question and compared scores after every 10 postings. With 86% agreement between the two raters after processing all posts from the first question, the two researchers divided and independently rated the remaining 16 discussion questions. This process provided a uniform procedure for collecting data from students’ posts in order to identify what, if any, impact peer feedback had on the quality of postings (pp. 418-420).
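Percent agreement of this kind is simple to compute: the number of postings on which the two raters assigned the same score, divided by the total number of postings compared. A minimal sketch follows; the two score lists are hypothetical illustrations, not the study’s data.

```python
# A minimal sketch of a simple percent-agreement check between two raters.
def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Fraction of postings on which two raters assigned the same score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of postings.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical example: agreement on 6 of 7 postings is roughly 86%.
scores_a = [1, 2, 0, 1, 1, 2, 1]
scores_b = [1, 2, 0, 1, 2, 2, 1]
print(f"{percent_agreement(scores_a, scores_b):.0%}")  # -> 86%
```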

Students completed a survey with Likert-style items and open-ended questions at the end of week five, prior to the start of peer feedback, and again in week 16. Participants also completed an interview with one of the researchers, via phone or in person. The survey allowed students to rank the importance of various aspects of feedback. Through the open-ended questions and interviews, researchers were able to gain more information about views on giving and receiving feedback (p. 420). This combination of data collection procedures offered consistency as well as insights about feedback perceptions, the exact aspects of the research questions.

11. Reliability and Validity

The authors addressed both the reliability and validity of their study, although validity was handled in greater depth than reliability. The use of Bloom’s taxonomy provided a high degree of face validity, as both the participants and researchers had experience using it to differentiate levels of thinking (p. 421). Face validity is a subjective judgment, and although it is necessary throughout the research process, it is often seen as a weak measure of validity (Drost, 2011, p. 116). Instructors also modeled the scoring process by supplying students with scores on their discussion postings prior to asking them to provide peer feedback.

Validity was also addressed by triangulating entry and exit surveys with the interviews. The entry survey indicated how participants felt about instructor and peer feedback. The interviews provided details and examples of these initial thoughts. The exit survey provided further support for comments made during interviews (p. 421). This allowed feedback perceptions to be compared across the open-ended questions and interview responses. Furthermore, NUD*IST qualitative analysis software was used to help identify recurring patterns and themes from the interview data.

The use of a standardized interview protocol increased reliability, while the use of multiple interview subjects and evaluators reduced interviewer biases. Interrater reliability was established by having researchers code the discussion responses and compare results. Despite these measures, the short duration of the study, the small sample size, and the fact that only 12 of the 15 entry surveys were returned affect the reliability of the research.

Interpretation and Implications of Results

13. Conceptual Limitations

The authors included a brief section specific to the limitations of this study. They noted their limitations as “the small sample size, the relatively short duration of the study, and the fairly limited scale used to judge the quality of student postings” (p. 428). The authors pointed out in their discussion section how using a scoring rubric with “only two meaningful levels of quality may not have provided enough room for growth, thus causing a ceiling effect to occur” (pp. 425-426). Students started out with relatively high scores on posts, leaving little room for growth. Also, many of the discussion questions were not conducive to high-level responses, again making it difficult for students to show improvement.

Throughout the paper the authors expressed other limitations as they arose. Several times the authors noted how “the ability to give meaningful feedback, which helps others think about the work they have produced, is not a naturally acquired skill” (Palloff & Pratt, 1999, pp. 415, 427). Providing more time upfront to train students to use the scoring rubric might have impacted their perceptions of feedback. The lack of training was evident as students stressed their difficulties using Bloom’s taxonomy to rate the level of thinking. They felt it was difficult to apply the rubric throughout the course. One student in particular felt confused when trying to decide which level of Bloom’s taxonomy applied to responses, and therefore often gave out the top score (pp. 424-425).


14. Conclusions

The authors included descriptions of their results and offered several quotes from interviewees. Some of the interview responses supported the data, whereas other comments went against the quantitative results. While the quantitative data did not demonstrate a significant improvement in the quality of postings, the authors pointed out how, during the interviews, eight of the 15 participants described how they used information from the feedback process to improve their postings (p. 422). Six students specifically discussed how providing peer feedback increased their learning (p. 423). These comments indicate a level of inconsistency with the quantitative results.

Participants perceived feedback from the instructor as more important than peer feedback at both the beginning and the end of the course. These scores were supported by student comments that the instructor was more knowledgeable, or that not everyone was motivated to provide quality feedback. On the other hand, one student noted how “peers are more often on the same level and may be able to explain in a manner that makes more sense than the instructor” (p. 423). The authors addressed this inconsistent comment by noting how, despite instructor feedback being the preferred choice, 13 of the participants still valued peer feedback. This was also validated by comments from students stating peer feedback confirmed that their ideas were meaningful to others and that they learned from their peers’ perspectives.

Giving feedback was rated at the same level as receiving feedback. The interview comments were consistent with this data, stating how students “reflected on the feedback they had given to peers as they formed their own response to discussion questions” (p. 423). Students also considered how others would view their posts and made additions to their responses.


15. Relationship of Results to Theory


In the detailed results section, the authors connected their findings back to both the initial research questions as well as the three main areas of their theoretical base: discussions, critical thinking, and feedback. Most of the discussion of the results centered on feedback and included information on its relationship to discussions and critical thinking, as well as the research questions.

The results showed that students considered the quality of feedback to be more important than quantity. The importance that feedback be timely also increased by the end of the course (p. 421). These items relate back to the initial discussion on the role of feedback, and the expectation from students that feedback be prompt, timely, thorough, and specific (p. 414). Although the quality of students’ postings did not improve with peer feedback, it also did not decrease. This relates back to the first research question about whether the quality of learning can be maintained or increased through peer feedback. It also demonstrates the relationship among discussions, critical thinking, and feedback.

In regards to the second research question, feedback from the instructor was viewed as more important than peer feedback on both the entry and exit surveys. This data was supported by several comments from interviewees. Although instructor feedback was viewed as more important, 13 of the participants still valued peer feedback (p. 423). Students rated the importance of giving and receiving feedback at the same level on the exit survey, indicating that, on average, neither aspect was more important. Interview comments suggested that both giving and receiving feedback resulted in learning. This ties back to both the second and third research questions about the perceptions of giving and receiving feedback.


16. Significance of Study

Considering this article has been cited by 160 other articles since it was published eight years ago, the study clearly has significance. Given the importance of feedback in general, and more specifically the significance of feedback in online learning environments, knowledge about the impact of feedback is vital. Combine that with the fact that little research existed regarding the impact of feedback in online environments, and the authors easily demonstrate a need for their study.

The main implication in the conclusion of this article was that “if the use of peer feedback can reduce an instructor’s workload in an online course yet help maintain a high quality of postings, this may offer an effective strategy for facilitating learning in an online course” (p. 429). While this may be true, it pays little attention to the three initial research questions, which focused on the impact of peer feedback and the perceptions of giving and receiving feedback.

While this study offered information on the perceptions of feedback, it left the door wide open for future research. Additional research should look at an alternate scoring rubric, allowing for greater variation among scores. Studies should also offer greater clarity in the number and type of postings that will be scored, allowing students to post social comments that are not included in the data. Further research is also necessary to determine the most effective means for facilitating peer feedback in online learning environments.


References

Cunningham, R. D. (1992). Beyond educational psychology: Steps toward an educational semiotic. Educational Psychology Review, 4, 165-194.

Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105-123.

Dunlap, J. C. (2005). Workload reduction in online courses: Getting some shuteye. Performance Improvement, 44(5), 18-25.

Ertmer, P. A., & Stepich, D. A. (2004, July). Examining the relationship between higher-order learning and students' perceived sense of community in an online learning environment. Paper presented at the proceedings of the 10th Australian World Wide Web conference, Gold Coast, Australia.

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., . . . Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12, 412-433.

Holton, E. H., & Burnett, M. B. (1997). Qualitative research methods. In R. A. Swanson & E. F. Holton, Human resource development research handbook: Linking research and practice (pp. 65-87). San Francisco: Berrett-Koehler Publishers.

Lynch, M. M. (2002). The online educator: A guide to creating the virtual classroom. New York: RoutledgeFalmer.

Morey, E. H. (2004). Feedback research revisited. In D. H. Hansen, Handbook of research on educational communications and technology (pp. 745-783). Mahwah, NJ: Lawrence Erlbaum.


Palloff, R. M., & Pratt, K. (1999). Building learning communities in cyberspace. San Francisco, CA: Jossey-Bass.

Palloff, R. M., & Pratt, K. (2001). Lessons from the cyberspace classroom: The realities of online teaching. San Francisco, CA: Jossey-Bass.
