Web-Homework Platforms: Measuring Student Efforts in Relationship to Achievement

Michael J. Krause

INTRODUCTION AND GENERAL OBSERVATIONS

Since the fall 2007 semester, I have used web-based accounting textbook homework platforms with two different elementary accounting textbooks and with two different intermediate accounting textbooks.

Because of class size, I have had more viable measurement opportunities with beginning-level students for analyzing the effectiveness of these web-based systems.

My definition of “effectiveness” centers on the ability of the web-based system to link a measurement of student effort with exam performance outcomes.

Additionally, when a text’s test bank provides Bloom’s Taxonomy measures, I can differentiate performance outcomes between higher and lower levels of learning.

I categorized class performance on a unit exam by subdividing the population into thirds – top, middle, and bottom achievers. I then tried to link test performance to student efforts to prepare for the exam. The artifact available to reveal this link between efforts and outcomes is generated by the web-homework platform itself. After grading homework and extra review assignments according to my articulated parameters, the platform awards points. These platform points have no intuitive meaning, since they vary with the number of questions assigned in anticipation of a particular unit exam. However, when compared across population sub-groups, the platform points should give insight into the fundamental factors behind student achievement or the lack of it.
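To make that grouping concrete, here is a minimal sketch in Python – the score pairs are hypothetical stand-ins, not data from this study – that ranks a class by exam score, splits it into thirds, and averages the platform points within each third:

```python
# Minimal sketch: rank students by exam score, split into thirds, and
# average the web-platform points within each third. The (exam, platform)
# pairs are hypothetical, not data from the study.

def thirds_summary(records):
    """records: list of (exam_score, platform_points) tuples."""
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    cut = len(ranked) // 3
    groups = {
        "Top 3rd": ranked[:cut],
        "Middle 3rd": ranked[cut:len(ranked) - cut],
        "Bottom 3rd": ranked[len(ranked) - cut:],
    }
    for label, group in groups.items():
        exam_avg = sum(r[0] for r in group) / len(group)
        plat_avg = sum(r[1] for r in group) / len(group)
        print(f"{label:11s} exam avg = {exam_avg:6.2f}   platform avg = {plat_avg:6.2f}")

# Hypothetical class of six: exam out of 100, platform max 165.
thirds_summary([(92, 150), (81, 128), (70, 101), (64, 88), (55, 70), (41, 42)])
```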

The observations presented here were made during the 2010–2011 academic year, when I taught four sections of Financial Accounting, two sections per semester. I measured student performance on two unit exams – the same exams each semester. Fall 2010 enrollment started at 79 and ended at 73, while spring 2011 enrollment started at 67 and ended at 58. Each exam had thirty multiple-choice questions, each of which could be categorized, in keeping with the Bloom’s model, as either “knowledge or comprehension” (lower levels) or “application or above” (higher levels). An initial analysis of each exam estimated which students actually used the web system: web users were projected to be the top two-thirds of the class by platform unit score, which truncated the population by a third.

My past efforts to link Bloom’s Taxonomy with outcome measurements have produced a consistent finding. When multiple-choice questions designed to measure knowledge, comprehension, and application are used, an error pattern emerges relative to the question population. In short, I compare each question category’s error rate with its existence rate – the share of the exam’s answers that fall in that category. The error rate for “knowledge” questions appears to be less than that category’s existence percentage on an exam, while the error rate for “application” questions appears to be greater than its existence rate. Finally, the error rate and the existence rate appear to be about the same for “comprehension” questions. This study therefore seeks to replicate the prior findings, but in a different manner: “knowledge” questions are combined with “comprehension” questions, and “application” questions are grouped with higher levels of learning such as “analysis”. The text’s test bank provided the designation of the learning level that each question measures.
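As a worked illustration of that comparison – assuming the per-category error rate is wrong answers in the category divided by total answers in the category, the reading consistent with the “All” rows of Exhibits 5 and 6 – the sketch below reproduces the whole-class Exam #1, fall 2010 figures from Exhibit 5 (the wrong-answer counts are back-calculated from the reported rates):

```python
# Error rate vs. existence rate per Bloom's category, assuming:
#   error rate     = wrong answers in category / total answers in category
#   existence rate = category's answers / all answers on the exam
# Counts are whole-class Exam #1, fall 2010 (Exhibit 5); wrong-answer
# counts are back-calculated from the reported error rates.

categories = {
    "K or C": (632, 1580),  # knowledge or comprehension
    "A or A": (395, 790),   # application or analysis
}

all_answers = sum(total for _, total in categories.values())  # 2370
for name, (wrong, total) in categories.items():
    error_rate = wrong / total
    existence_rate = total / all_answers
    verdict = "below" if error_rate < existence_rate else "above"
    print(f"{name}: error rate {error_rate:.0%} vs. "
          f"existence rate {existence_rate:.0%} ({verdict})")
```

Run as written, this prints 40% vs. 67% (below) for knowledge/comprehension and 50% vs. 33% (above) for application/analysis, matching the pattern described above.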

EXHIBIT 1 – Exam #1 Test Score vs. Web-Platform Unit Score

Population: N = 79 (fall); N = 67 (spring)

Exam #1          Fall 2010        Fall 2010        Spring 2011      Spring 2011
                 Test Score       Platform Score   Test Score       Platform Score
                 Max = 100        Max = 165        Max = 100        Max = 258
Top 3rd            79.88            127.31            78.23            186.45
Middle 3rd         61.78             95.04            58.57            180.65
Bottom 3rd         48.27             58.69            44.73            122.23
Whole Class        63.29             93.70            60.48            163.37

EXHIBIT 2 – Exam #1 Test Score vs. Web-Platform Unit Score (Common-Sized)

Population: N = 79 (fall); N = 67 (spring)

Exam #1          Fall 2010        Fall 2010        Spring 2011      Spring 2011
                 Test Score       Platform Score   Test Score       Platform Score
Top 3rd           100.00%          100.00%          100.00%          100.00%
Middle 3rd         77.34%           74.65%           74.87%           96.89%
Bottom 3rd         60.43%           46.10%           57.18%           65.56%
Whole Class        79.23%           73.60%           77.31%           87.62%

EXHIBIT 3 – Exam #2 Test Score vs. Web-Platform Unit Score

Population: N = 73 (fall); N = 58 (spring)

Exam #2          Fall 2010        Fall 2010        Spring 2011      Spring 2011
                 Test Score       Platform Score   Test Score       Platform Score
                 Max = 100        Max = 155        Max = 100        Max = 230
Top 3rd            73.13            103.83            75.05            160.05
Middle 3rd         57.64             76.92            55.05            125.75
Bottom 3rd         41.13             34.50            41.47             77.79
Whole Class        57.30             71.82            57.16            121.28

EXHIBIT 4 – Exam #2 Test Score vs. Web-Platform Unit Score (Common-Sized)

Population: N = 73 (fall); N = 58 (spring)

Exam #2          Fall 2010        Fall 2010        Spring 2011      Spring 2011
                 Test Score       Platform Score   Test Score       Platform Score
Top 3rd           100.00%          100.00%          100.00%          100.00%
Middle 3rd         78.82%           74.08%           73.35%           78.57%
Bottom 3rd         56.24%           33.23%           55.26%           48.60%
Whole Class        78.35%           69.17%           76.16%           75.78%
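Exhibits 2 and 4 “common-size” Exhibits 1 and 3 by stating each group’s mean as a percentage of the corresponding top third’s mean, so the top third is always 100%. A minimal check of that arithmetic, using the fall 2010 Exam #1 test scores from Exhibit 1:

```python
# Common-sizing check: each group's mean as a percentage of the top
# third's mean. Values are the fall 2010 Exam #1 test scores (Exhibit 1);
# the output reproduces the first column of Exhibit 2.
means = {"Top 3rd": 79.88, "Middle 3rd": 61.78, "Bottom 3rd": 48.27,
         "Whole Class": 63.29}
top = means["Top 3rd"]
for group, mean in means.items():
    print(f"{group:11s} {mean / top:7.2%}")
```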

BLOOM’S TAXONOMY OBSERVATIONS

EXHIBIT 5 – Exam #1, Fall 2010 Analysis (Error Rate vs. Existence Rate)

Exam #1,       Error     Error       Error       Error     Total      Exist.
Fall 2010      Top 3rd   Middle 3rd  Bottom 3rd  Class     Answers    Rate
K or C          24%       42%         55%         40%       1580       67%
A or A          31%       53%         64%         50%        790       33%
All             26%       46%         58%         43%       2370      100%

K or C = Knowledge or Comprehension; A or A = Application or Analysis

EXHIBIT 6 – Exam #2, Fall 2010 Analysis (Error Rate vs. Existence Rate)

Exam #2,       Error     Error       Error       Error     Total      Exist.
Fall 2010      Top 3rd   Middle 3rd  Bottom 3rd  Class     Answers    Rate
K or C          25%       38%         47%         37%       1095       50%
A or A          31%       50%         65%         49%       1095       50%
All             28%       44%         56%         43%       2190      100%

WEB-PLATFORM SPECIFIC OBSERVATIONS

EXHIBIT 7 – Test Scores of Exams #1 & 2 (All Results vs. Web-Platform Users)

By thirds: “All” includes every student in that third; “Only” includes only the
students who used the web platform (N = whole class / web users).

                           Top      Top      Mid      Mid      Bot.     Bot.
                           All      Only     All      Only     All      Only
Exam #1:
  Fall 2010 (N=79/53)      79.88    81.94    61.78    66.59    48.27    51.28
  Spring 2011 (N=67/45)    78.23    80.80    58.57    61.93    44.73    48.60
Exam #2:
  Fall 2010 (N=73/49)      73.13    76.81    57.64    60.76    41.13    46.69
  Spring 2011 (N=58/39)    75.05    77.92    55.05    60.85    41.47    46.69

CONCLUSIONS

1. Web-platform users at all ability levels consistently outperformed classmates who did not use the platform.

2. The bottom third of the class fell significantly below the class average on both test scores and web-platform use.

3. The bottom third performed below expectations on Bloom’s lowest levels of learning.

4. The top third performed above expectations on Bloom’s highest levels of learning.
