References Regarding Online Course Evaluations
Samuels, B. (2013). “Increasing Number of Academic Departments Use Online Course
Evaluations”, CampusWest, April 30, 2013.
Most departments at Syracuse University (SU) recently switched from paper to online course evaluations because of the
efficiency of quick processing and the cost savings of a computer-based system. The administration at SU reported that
class time is more productive when course evaluations are completed outside of class, and that students can spend more
time reflecting on the course and giving more detailed responses. According to SU, the quality of the data collected is
improved since students do not have to fill in circles on a form, and open-ended responses are richer because students
are better at typing than at writing comments out by hand. SU sends reminders to students, and individual departments
also remind students that their feedback is important.
Miller, M.H. (2010). “Online Evaluations Show Same Results, Lower Response Rates”, The
Chronicle of Higher Education, May 6, 2010.
A study conducted by Kansas State University's IDEA Center (a nonprofit group that works to improve how colleges
use course evaluations) revealed that the only difference between student ratings completed online and on paper was
that students who took online surveys gave their professors higher ratings for using educational technology to
promote learning. On average, the IDEA Center reports a 78% response rate for paper evaluations and a 53% response
rate for online evaluations; however, its website lists several ways colleges can encourage higher student response
rates. Because of the many benefits of online course evaluations (saving class time, eliminating paper use, faster data
processing, etc.), more and more colleges are moving to an online system. The article concludes that, as institutions
become more environmentally conscious, paper-based evaluations will become a thing of the past.
Emery, L., Head, T., Zeckoski, A., and Yu Borkowski, E. (2008) “Deploying an Open Source,
Online Evaluation System: Multiple Experiences.” Presentation at Educause 2008, October 31,
Orlando, FL.
Four institutions, University of Michigan Ann Arbor, Virginia Tech, University of Cambridge and University of
Maryland, collaborated on an open source online evaluation system within Sakai. Response rates in the various
pilots ranged from 32% to 79%. They found the key benefits of online evaluations to be security, validity,
efficiency, cost savings, rapid results turnaround and higher quality student comments.
eXplorance Inc., “A Fresh Look at Response Rates.” White Paper.
http://www.explorance.com/Education/brochures/A%20Fresh%20Look%20at%20Response%20Rates.pdf
This white paper outlines 9 best practices for moving to online course evaluations. Key benefits of moving online
are listed, as well as strategies for building response rates.
Office of Institutional Research and Assessment, Marquette University, “On-line Course
Evaluation Pilot Project at Marquette University." (2008). http://www.marquette.edu/oira/ceval/
Marquette University moved from a copyrighted instrument, IAS, to its own instrument, MOCES. Because of copyright
concerns, the new instrument uses re-worded items that maintain the intent of the IAS items. In the spring semester of
2008 a pilot was conducted in 124 course sections with 3837 students. They evaluated the effectiveness of an online
approach versus paper and pencil, as well as the software used to deliver the evaluations. Online response rates were
lower in 3 of the 5 pilot departments, comparable in 1, and higher in 1 when compared to three-semester averages of
paper-and-pencil forms. A "power analysis" of the response rates indicated they were high enough to support 95%
confidence in the results (an illustrative calculation of this kind of check follows this entry). There was no significant
difference in the mean ratings for the 4 core questions between the old IAS form and the MOCES online form.
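The report does not describe its "power analysis" in detail; one plausible reading is a sample-adequacy check, i.e.,
asking whether the number of completed evaluations is large enough that an estimate based on the respondents has a
small margin of error at the 95% confidence level. The short Python sketch below illustrates that kind of calculation
under this assumption; the 2,000-response figure and the helper function are illustrative only and are not taken from
the Marquette report.

    import math

    def margin_of_error(responses, population, z=1.96, p=0.5):
        # Half-width of an approximate 95% confidence interval for a proportion,
        # using the conservative p = 0.5 and the finite-population correction.
        se = math.sqrt(p * (1 - p) / responses)
        fpc = math.sqrt((population - responses) / (population - 1))
        return z * se * fpc

    # Hypothetical example: roughly 2,000 completed evaluations out of the
    # 3,837 students in the pilot would give a margin of error near +/- 1.5%.
    print(round(margin_of_error(2000, 3837), 3))  # prints 0.015

A check like this can show that, even with imperfect response rates, the collected responses are numerous enough to
support conclusions at the stated confidence level.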
Ardalan, A., Ardalan, R., Coppage, S., and Crouch, W. (2007) “A comparison of student
feedback obtained through paper-based and web-based surveys of faculty teaching.” British
Journal of Educational Technology. Volume 38 Number 6 2007.
This paper provides a summary of the current research on online vs. paper evaluations as well as results from a
study comparing the feedback obtained by the two methods. The same form was given to 46 section pairings – one paper
and one online. The online response rate was 31% (392 out of 1276 possible responses) and the paper rate was 69%
(972 out of 1415). No significant difference was found in the quantitative ratings between the two methods. They
examined differences on an "overall effectiveness" question separately for faculty rated above the college average and
for faculty rated below it. Faculty above the average scored slightly lower online, and faculty below the college
average scored higher online. There was no significant difference in the number of students giving open-ended feedback
online; however, there was a significant increase in the length of open-ended feedback online.
Anderson, J., G. Brown, and S. Spaeth. (2006) “Online Student Evaluations and Response Rates
Reconsidered.” Innovate 2 (6). http://www.innovateonline.info/index.php?view=article&id=301
Synopsis from Innovate: “Many administrators are moving toward using online student evaluations to assess courses
and instructors, but critics of the practice fear that the online format will only result in lower levels of student
participation. Joan Anderson, Gary Brown, and Stephen Spaeth claim that such a concern often fails to acknowledge
how the evaluation process already suffers from substantial lack of engagement on the part of students as well as
instructors; the online format, they assert, merely inherits the fundamental problem of perceived irrelevance in the
process itself. After addressing the reasons behind this problem and discussing how well-designed online
evaluations can still make a positive difference, the authors describe the development and implementation of a
comprehensive, college-wide online evaluation survey at Washington State University's College of Agricultural,
Human, and Natural Resources. In reviewing the survey results, they found that class size, academic discipline, and
distribution method played a negligible role in student response rates. However, they found that variances in
response rate were significantly influenced by the relative level of participation among faculty members and
department heads in the original development of the survey. The authors maintain that online surveys can make the
process more relevant and meaningful to students, but they conclude that eliciting greater response rates will still
require sustained support, involvement, and advocacy by faculty members and administrators.”
Avery, Rosemary J., Bryant W.K., Mathios, A., Kang, H., and Bell, D. (2006) “Electronic
Course Evaluations: Does an Online Delivery System Influence Student Evaluations?" Journal of
Economic Education. Washington: Winter 2006. Vol. 37, Iss. 1 p21-38 (ProQuest document ID
973267691).
The Department of Policy Analysis and Management at Cornell University did a study of course evaluation data
from 1998-2001. Using the same form, data were analyzed from 29 courses (20 using the paper version, 9 using the
online version). The study examined response rates and mean scores between the methods. While specific response
rates varied, online was typically lower than the paper form; for example, in fall 2000 paper was 69% compared
with 47% online. Of the 13 questions, rated on a 5-point scale, 4 had a significant difference in mean scores between
methods (a difference greater than 0.10, with the web version having the higher mean). The other 9 questions had
differences of less than 0.10, again with the web version having the higher means.
Donovan, J., Mader, C., and Shinsky, J. (2006) "Constructive student feedback: Online vs.
traditional course evaluations.” Journal of Interactive Online Learning. Volume 5, Number 3,
Winter 2006.
Abstract: Substantial efforts have been made recently to compare the effectiveness of traditional course formats to
alternative formats (most often, online delivery compared to traditional on-site delivery). This study examines, not
the delivery format but rather the evaluation format. It compares traditional paper and pencil methods for course
evaluation with electronic methods. Eleven instructors took part in the study. Each instructor taught two sections of
the same course; at the end, one course received an online course evaluation, the other a traditional pencil and paper
evaluation. Enrollment in these 22 sections was 519 students. Researchers analyzed open-ended comments as well
as quantitative rankings for the course evaluations. Researchers found no significant differences in numerical
rankings between the two evaluation formats. However, differences were found in number and length of comments,
the ratio of positive to negative comments, and the ratio of formative to summative comments. Students completing
faculty evaluations online wrote more comments, and the comments were more often formative (defined as a
comment that gave specific reasons for judgment so that the instructor knew what the student was suggesting be
kept or changed) in nature.
Ernst, D. (2006) “Student Evaluations: A Comparison of Online vs. Paper Data Collection.”
Presentation at Educause 2006, October 10, Dallas, TX.
The College of Education and Human Development at the University of Minnesota did a study on 314 class pairs
(14,154 student evaluations) from fall 2002 to fall 2004. The goals were to determine whether there was a difference
in response rate, response distributions, or average ratings between the two methods, and what the common perceptions
of each method were. In the study group the online form averaged a 56% response rate whereas the paper version
averaged 77%. Slightly more students responded on the high and low ends of the 7-point scale than in the middle.
There was no significant difference in the mean rating on the 4 required questions.
Laubsch, P. (2006). “Online and in-person evaluations: A literature review and exploratory
comparison.” Journal of Online Learning and Teaching, 2(2).
http://jolt.merlot.org/Vol2_No2_Laubsch.htm
The Master of Administrative Science program at Fairleigh Dickinson University performed a study on courses
taught by adjunct faculty. The online evaluations received a 61% response rate and the in-class evaluations received
an 82.1% response rate. They found that the online evaluations received twice as many comments (counting total
words) as the in-class evaluations. On the question about "materials being clearly presented" (focused on the faculty
member), the mean scores online and in class differed by 0.33 on a 5-point scale, with online having the less positive
rating; this is a statistically significant difference. Administrators noted that both means were better than "agree"
and were not considered poor ratings.
Lovric, M. (2006). Traditional and web-based course evaluations-comparison of their response
rates and efficiency. Paper presented at 1st Balkan Summer School on Survey Methodology.
http://www.balkanprojectoffice.scb.se/Paper%20Miodrag%20Lovrich_University%20of%20Bel
grade.pdf
The paper presents a short literature review comparing online evaluations with paper. The Economics department at
the University of Belgrade, Serbia, conducted a small pilot in a course of 800 students in May of 2006. Half the students
received paper evaluations in class and half were directed to complete an identical online evaluation. The paper
evaluation received a 92.5% response rate and the online received a 52% response rate after an incentive was
introduced. They found that nearly twice as many students filled out the open-ended question online when compared
to the paper group. On the instructor-related questions they found a variation of 0.09 to 0.22 on a 10-point scale. No
statistical analysis was done for significance.
Anderson, H., Cain, J., Bird, E. (2005) “Online Student Course Evaluations: Review of
Literature and a Pilot Study.” American Journal of Pharmaceutical Education 2005; 69 (1)
Article 5.
The literature review revealed several studies that found no statistically significant differences between delivery
modes. Two also noted that students provided more comments in the online forms. Response rates varied widely.
The University of Kentucky College of Pharmacy, driven by the faculty's desire for more timely return of results
(typically 3-4 months), launched a pilot study of online evaluations in 3 courses. The response rates for the 3 courses
were 85%, 89%, and 75%. The 9 courses using the paper forms averaged an 80% response rate (consistent with the
2 previous years, also about 80%). The comments on the online forms were more frequent and longer than on the paper
forms. Students liked the online form better than the paper form and thought they could provide more effective and
constructive feedback online.
Online CTE Project Team. (2005). Online course and teaching evaluation: Report on a trial run
with recommendations. Teaching and Learning Center, Lingnan University.
http://www.ln.edu.hk/tlc/level2/pdf/online%20cte%20report%20050411.pdf
OnSet (2005). Online student evaluation of teaching in higher education. Online student
evaluation: Bibliography. http://onset.byu.edu/index.php?title=Bibliography
Monsen, S., Woo, W., Mahan, C., Miller, G. & W (2005). "Online Course Evaluations: Lessons
Learned.” Presentation at The CALI Conference for Law School Computing 2005.
Yale Law started online course evaluations in 2001 with a less than 20% response rate. The current 8-question form
is run by student representatives and has a 90% response rate. Students cannot see their grades until they fill out the
evaluation. Northwestern University School of Law started online course evaluations in 2004. So far they have a
68% response rate which compares to a 70-80% paper response rate. Northwestern is against using any penalties
(withholding information from a student until they fill out an evaluation). The University of Denver Sturm College of Law
started online course evaluations in 2002 with a pilot of 10 courses. The pilot had an 83% response rate. Continuing
into 2003 the pilot expanded to 80 courses (with an 81% response rate) and then expanded to all of their offerings
(with a 64% response rate). Currently they maintain a response rate around 70%. Duke Law started online course
evaluations in 2003 when their scantron machine broke and the expense of replacing it was too great. They proposed a
goal of 70% response rate and used the same form online. The first term averaged a 66% response rate (with 29% of
the 82 courses reaching the 70% goal). In spring 2004 the average was 60% (with 30% of the 119 courses reaching
the 70% goal). In fall 2004 the average was 52% (with 8% of the 93 courses reaching the 70% goal). In spring 2005,
after dropping non-law students from the pool, the average was 67% (with 41% of the 117 courses reaching the 70%
goal). The school is considering several penalties for failure to fill out an evaluation – withholding registration,
withholding grades, or withholding free printing.
Norris, J., Conn, C. (2005). "Investigating strategies for increasing student response rates to
online-delivered course evaluations." Quarterly Review of Distance Education 2005; 6 (1) p13-32
(ProQuest document ID 975834871).
This paper reports the findings of 2 studies done at Northern Arizona University. The first study looked at
historic data from 2000-2002 to examine student responses to online course evaluations in 1108 course sections.
This group had an average response rate of 31%. A follow-up questionnaire was sent to 50 faculty in the group to
explore what strategies improved response rates. These results informed the second study, covering 39 online course
sections and 21 sections of a required freshman face-to-face course. The second study used some basic strategies
(none involving penalties) in the implementation of the online course evaluations: 2 weeks before the end of the
course the URL to the evaluation was posted in the course management system, an announcement containing a statement
of the course evaluation's value and its due date was sent in a method appropriate to the class (email, online
syllabus, or discussion board), and a reminder email containing the URL and due date was sent 1 week before the class
ended. The 39 online course sections averaged a 74% response rate and the 21 face-to-face courses averaged a 67%
response rate. In addition, 11 sections of the face-to-face course used paper evaluations and received an 83%
response rate. These suggestions are very similar to the emerging findings from the TLT Group's BeTA project.
Schawitch, M. (2005) “Online Course Evaluations: One Institute’s Success in Transitioning from
a Paper Process to a Completely Electronic Process!” Presentation at the Association for
Institutional Research Forum, June 2005.
The Rose-Hulman Institute of Technology piloted an online course evaluation in 2002 with a small group of faculty.
Over the academic year the pilot had a 70% response rate. 77% of students preferred the online mode and faculty
reacted positively to the pilot. In 2003 the entire campus adopted the online form. Over the 3 terms, the online
evaluations had response rates of 86%, 78% and 67%. In 2004 the 3 terms had 75%, 71% and 67%. Historically
paper evaluations had an 85-87% response rate. They are investigating various incentive possibilities.
Ballantyne, C.S., (2004) Online or on paper: An examination of the differences in response and
respondents to a survey administered in two modes. Paper presented to the Australasian
Evaluation Society Annual Conference, Adelaide, South Australia, 13-15 October, 2004,
http://www.aes.asn.au/conference2004/index.htm#fri
Brigham Young University. Student ratings: Frequently asked questions. Retrieved 13th
September 2004 from http://studentratings.byu.edu/docs/FAQ.htm
Carini, R.M., Hayek, J.C., Kuh, G.D. & Ouimet, J.A. "College student responses to web and
paper surveys: Does mode matter?", Research in Higher Education, 2003, 44(1), p1-19.
Retrieved 13th September 2004 from http://www.kluweronline.com/issn/0361-0365/contents
Collings, D. & Ballantyne, C. S. (2004, November 24-25). Online student survey comments: A
qualitative improvement? Paper presented at the 2004 Evaluation Forum, Melbourne, Victoria.
http://www.tlc.murdoch.edu.au/pubs/docs/Eval_forum_paper.pdf
Dommeyer, C.J., Baum, P., Hanna, R.W & Chapman, K.S. (2004) "Gathering faculty
evaluations by in-class and online surveys: Their effects on response rates and evaluations".
Assessment and Evaluation in Higher Education, Vol 29, No. 5, October 2004, p611-623
Liegle, J O and D S McDonald. (2004) Lessons Learned From Online vs. Paper-based Computer
Information Students' Evaluation System. In The Proceedings of the Information Systems
Education Conference 2004, v 21 (Newport): §2214. ISSN: 1542-7382. (A later version appears
in Information Systems Education Journal 3(37). ISSN: 1545-679X.)
http://proc.isecon.org/2004/2214/index.html
Georgia State University's College of Business ran a voluntary pilot from 2002 to 2003 using an identical online
version of its paper course evaluation form in the Department of Computer Information Systems. Faculty feared
an online form would yield lower scores and lower response rates; in particular, the fear was that few students would
submit online evaluations, poor students would "take revenge" on the faculty, and good students wouldn't bother.
The paper form had a 67% response rate and the online form had an 82% response rate, likely because the CIS
department had easy access to computer labs where students could take the evaluations online. Using a question on
teacher effectiveness, the study found no significant difference between the methods. Good students participated in
the same numbers, and weaker students completed fewer online evaluations.
Ballantyne, C.S. (2003), Online evaluations of teaching: An examination of current practice and
implications for the future. In Sorenson, D.L & Johnson, T.D (Eds) Online Student Ratings of
Instruction, New Directions for Teaching and Learning, No. 96, Winter 2003, Jossey-Bass
Bothell, T.W & Henderson, T. (2003) “Do online ratings of instruction make sense?” In
Sorenson, D.L & Johnson, T.D (Eds) Online Student Ratings of Instruction, New Directions for
Teaching and Learning, No. 96, Winter 2003, Jossey-Bass
Carini, R.M., Hayek,J.C., Kuh, G.D., & Ouimet, J.A. (2003). College student responses to web
and paper surveys: Does mode matter? Research in Higher Education, 44(1), 1-19.
Chang, T. S. (2003, April 21-25). The results of student ratings: The comparison between paper
and online survey. Paper presented at the American Education Research Association, Chicago,
IL. http://www.nhlue.edu.tw/~achang/data/a7.pdf
Dommeyer, CJ., Baum, P., Chapman, KS., and Hanna, RW. (2003). “An experimental
investigation of student response rates to faculty evaluations: The effect of the online method and
online treatments.” Paper presented at Decision Sciences Institute; Nov. 22-25, 2003;
Washington, DC.
The College of Business and Economics at California State University, Northridge did a study with 16 professors to
see how the method of evaluation affects response rate and whether online treatments (incentives) affect the response
rate. Each professor taught 2 sections of the same undergraduate business course, and the same form was used in both
methods. Instructors were randomly assigned to 1 of 4 groups using different incentives: a 0.25% grade incentive
for completion of an online evaluation (4 courses), an in-class demonstration of how to do the online evaluation (2
courses), early release of final grades if 2/3 of the class submitted online evaluations (2 courses), or a control
group (8 courses). The online evaluations averaged a 43% response rate and the paper evaluations averaged 75%.
Looking at just the control group, the average online response rate was 29%. Each incentive increased the online
response rate (grade incentive 87%, demonstration 53%, and early final grades 51%).
Hardy, N. (2003) "Online ratings: Fact and fiction." In Sorenson, D.L & Johnson, T.D (Eds)
Online Student Ratings of Instruction, New Directions for Teaching and Learning, No. 96,
Winter 2003, Jossey-Bass
Johnson, T.D. (2003) “Online student ratings: Will students respond?” In Sorenson, D.L &
Johnson, T.D (Eds) Online Student Ratings of Instruction, New Directions for Teaching and
Learning, No. 96, Winter 2003, Jossey-Bass
Muffo, J.M., Sinclair, A. & Robson, V. (2003) A comparison of web versus paper based alumni
surveys. Paper presented at the Annual Forum of the Association for Institutional Research,
Tampa, Florida, May 20, 2003
McGhee, D.E & Lovell, N. (2003) “Psychometric properties of student ratings of instruction in
online and on-campus courses." In Sorenson, D.L & Johnson, T.D (Eds) Online Student Ratings
of Instruction, New Directions for Teaching and Learning, No. 96, Winter 2003, Jossey-Bass
Dommeyer, C.J., Baum, P. and Hanna, R.W (2002) "College students’ attitudes towards methods
of collecting teaching evaluations: In-class versus on-line". Journal of Education for Business,
2002, 78, (1), p11-15
Fraze, S., Hardin, K., Brashears, T., Smith, J., Lockaby, J. (2002) “The Effects Of Delivery
Mode Upon Survey Response Rate And Perceived Attitudes Of Texas Agri-Science Teachers.”
Paper presented at the National Agricultural Education Research Conference, December 11-13,
Las Vegas, NV.
Texas Tech University studied 3 modes of surveying a random group of Texas Agri-Science teachers. The 3 modes
were e-mail, web, and paper. No significant difference in the reliability of the responses was found. However, the
response rates were 60%, 43%, and 27% for paper, web, and e-mail respectively.
Kasiar, J. B., Schroeder, S. L. & Holstad, S. G. (2002). Comparison of traditional and web-based
course evaluation processes in a required, team-taught pharmacotherapy course. American
Journal of Pharmaceutical Education, 66(3), 268-70.
Krajewski, S. (n.d.) Annotated Bibliography.
Sax, L., Gilmartin, S., Keup, J., Bryant, A., and Plecha, M. (2002). Findings from the 2001 pilot
administration of Your First College Year (YFCY): National norms. Higher Education Research
Institute, University of California.
The YFCY survey, which assesses student development during the first year of college, was distributed using 3
methods: online only, online or paper, and paper only. In a pool of 57 schools, 16 used the alternative methods of
distribution. The study found no significant difference in responses between the methods. The overall response rate
was 21%; the online-only method had a 17% response rate and the online-or-paper group had a 24% response rate.
Sheehan, K.B. (2002). “Online Research Methodology: Reflections and Speculations”, Journal
of Interactive Advertising, vol. 3, no. 1, Fall 2002, Retrieved 13th September 2004 from
http://www.jiad.org/vol3/no1/sheehan
Thorpe, S.W. (2002). Online student evaluation of instruction: An investigation of non-response
bias. Paper presented at the 42nd Annual Forum of the Association for Institutional Research in
Toronto Canada. Retrieved 9th November 2004, from http://www.airweb.org/forum02/550.pdf
Drexel University studied whether significant differences exist in student responses to course evaluations given on
paper and online in 3 courses. Response rates in the 3 classes for paper and online (respectively) were 37% and
45%, 44% and 50%, 70% and 37%. In comparing students who responded to the evaluations across the 3 courses the
study found that women were more likely than men to respond, students who earned higher grades were more likely
to respond, and students with a higher overall GPA were more likely to respond. For two courses the online
evaluations had a slightly higher average item rating. For the other course 2 significant differences were found:
students doing the online evaluation were less likely to participate actively and contribute thoughtfully during class
and to attend class when compared to the paper evaluation group. But the responses overall were not significantly
different.
Handwerk, P., Carson, C., and Blackwell, K. (2000). “On-line vs. paper-and-pencil surveying of
students: A case study.” Paper presented at the 40th Annual Meeting of the Association of
Institutional Research, May 2000 (ERIC document ED446512).
The University of North Carolina at Greensboro did a study using an online version of a feedback survey for
determining why students selected or did not select Greensboro. They found the online version generated more
comments, though it had a lower response rate (26%) than the paper version (33%). No significant difference was
found in the response content between the two methods.
Hmieleski, K. (2000) "Barriers to online evaluation: Surveying the nation’s top 200 most wired
colleges". Interactive and Distance Education Assessment (IDEA) Laboratory, Rensselaer
Polytechnic Institute, Troy, NY, 2000
Hmieleski, K. and Champagne, M. (2000) "Plugging in to course evaluation". Assessment,
September/October 2000.
Dillman, D.A (2000). Mail and internet surveys: The tailored design method. New York: John
Wiley & Sons.
Cody, A. (1999) "Evaluation via the web". Teaching and Education News, 1999, 9, (6)
University of Queensland. Retrieved 13th September 2004 from
http://www.tedi.uq.edu.au/TEN/TEN_previous/TEN4_99/index.html
Cummings, R.J. and Ballantyne, C.S. (1999). "Student feedback on teaching: Online! On target?"
Paper presented at the Australasian Evaluation Society Annual Conference, Perth, Western
Australia, 6-8 October, 1999. http://www.tlc.murdoch.edu.au/pubs/docs/AES_1999_Conference_Final.rtf
Murdoch University's School of Engineering ran a pilot in 1999 of online course evaluations using the same form
online as on paper. Students found the online form easier and faster and felt it offered greater anonymity. The school
has a 50% response rate mandate for course evaluations; paper evaluations typically had a 65% response rate. The
online pilot averaged 31%, with 4 of the 18 courses exceeding the 50% mandate, and response rates ranged widely, from
3% to 100%. Because the pilot was inadequately promoted, some faculty did not know they were using online forms and
did not adequately prepare students, and students noted that they felt no pressure to fill out the online evaluations.
The investigators concluded that the quality of responses was the same because they received the same number of
comments online, and comments are the part of the evaluation form that is used most.
Layne B.H., DeCristofor J.R., McGinty D (1999). “Electronic versus traditional student ratings
of instruction.” Res Higher Educ. 1999; 40:221-32.
At a southeastern university, 66 courses comprising 2453 students were used to compare response effects between
paper-and-pencil and online administration of the same form. Half completed the form online and half on paper. The
online response rate was 47% and the traditional group's was 60%. Also, 76% of the online forms included comments
compared to 50% of the traditional forms. No significant difference was found between the methods.
Matz, C. (1999). “Administration of web versus paper surveys: Mode effects and response
rates.” Masters Research Paper, University of North Carolina at Chapel Hill. (ERIC document
ED439694).
In a survey of academic reference librarians in North Carolina, Matz found no significant difference in response
contents between the methods used. The online form had a 33% response rate and the paper form had a 43%
response rate.
Schaefer, D.R. & Dillman, D.A. (1998) "Development of a standard e-mail methodology: Results
of an experiment” Public Opinion Quarterly, 62(3), p378-397
Cates, W. M. (1993). A small-scale comparison of the equivalence of paper-and-pencil and
computerized versions of student end-of-course evaluations. Computers in Human Behavior, 9,
401-409.