Course Evaluation FAQs


Online course evaluations are cost-effective, decrease staff workload, lower the margin for error, preserve class time that would otherwise be spent on in-class evaluations, and allow for quick feedback. Nevertheless, administering course evaluations online versus by paper represents a significant change in our process of evaluation. Below are responses to frequently asked questions:

What are the all-University questions on the course evaluation form?

The scale for ALL questions is 0-4, with higher numbers indicating more favorable responses.

Classroom Climate

  • My instructor welcomed students' questions.
  • My instructor offered opportunities for active participation to understand course content.
  • My instructor demonstrated concern for student learning.

Student Learning

  • In this course I learned to analyze complex problems or think about complex issues.
  • My appreciation for this topic has increased as a result of this course.
  • I have gained an understanding of this course material.

Are digitally administered evaluations as accurate as paper evaluations?

Burton et al. (2012) found that out of eighteen studies exploring the differences between quantitative feedback provided on paper vs. online evaluations, fourteen of those studies reported no difference between the delivery methods and two reported slightly higher ratings online. In their own experiment, Burton et al. (2012) determined that online ratings were significantly higher than those collected via paper evaluations. Heath et al. (2007) found that online formats garner more positive and useful comments than comments offered via paper evaluations.

Will the quality of the qualitative feedback given online be as high as that offered in a paper version?

Most studies show that a higher percentage of students include qualitative feedback when they respond to evaluations given online (Donovan et al. 2006; Heath et al. 2007; Kasiar et al. 2002; Laubsch 2006). The amount of online qualitative feedback is also greater than that in paper evaluations: studies analyzing word count find that qualitative feedback from online evaluations exceeds that of paper evaluations, often by a wide margin (Burton et al. 2012; Heath et al. 2007; Kasiar et al. 2002; Hardy 2003; Hmieleski and Champagne 2000). Perhaps most importantly, several studies that examined the quality of the comments submitted through both formats found that online comments were more substantive and informative, as indicated by more words per comment, more descriptive text, and more detailed feedback (Ballantyne 2003; Burton et al. 2012; Collings and Ballantyne 2004; Donovan et al. 2006; Johnson 2002).

Will allowing "absentee" students to participate in online evaluations lower the course evaluation scores?

According to Perrett (2013), course and instructor ratings are not related to student attendance. In addition, students with a higher cumulative GPA and higher SAT scores complete online evaluations at higher rates than students with poor GPAs and lower SAT scores (Thorpe 2002). Students expecting higher grades also evaluate at a higher rate (Adams and Umbach 2012). Finally, students expecting poor grades in a class are no more likely to score an instructor below the class mean than students expecting good grades (Avery et al. 2006; Thorpe 2002).

Although these studies may help to alleviate concerns, it is also important to remember that there may be value in gaining feedback from students who attend the course meetings infrequently. Knowing why a student stopped engaging with a course may provide insights into ways of improving the course in the future.

Is the return rate of online evaluations the same as those administered on paper?

Studies comparing response rates of online vs. paper evaluations find that online evaluations generally have 9-10% lower response rates than paper evaluations. Adding incentives or interventions can boost response rates by 7-25%, depending upon which are used (Ravenscroft & Enyeart, 2009; Norris & Conn, 2005; Johnson, 2002).

Response rates increase when faculty make it a point to let their students know how to find the evaluations, that the students’ comments are valued, and how the data will be used overall. See below for more suggestions on how to raise response rates.

Will online response rates be high enough to have statistical validity?

Nulty (2008), using an 80% confidence interval together with a series of assumptions and corrections for bias, calculated that classes with fewer than 20 students need a minimum 58% response rate to be considered valid, while courses with more than 50 enrollees can use 35% as their bar. Since instituting online evaluations, Miami's average response rate has exceeded these thresholds.
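For readers curious where thresholds like these come from, the sketch below applies a standard finite-population sample-size formula. This is not Nulty's exact method (his published figures include additional corrections for response bias), so the numbers it produces are illustrative only; the function name and defaults are our own:

```python
from math import ceil
from statistics import NormalDist

def required_response_rate(class_size: int,
                           confidence: float = 0.80,
                           margin_of_error: float = 0.10,
                           p: float = 0.5) -> float:
    """Fraction of a class that must respond for a proportion estimated
    from the responses to fall within the given margin of error at the
    given confidence level, using the finite-population correction."""
    # Two-tailed z-score for the requested confidence level.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Required sample size for an infinite population (p = 0.5 is the
    # most conservative choice).
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite-population correction: small classes shrink the requirement
    # far less than large ones, so their required *rate* stays high.
    n = n0 / (1 + (n0 - 1) / class_size)
    return min(1.0, ceil(n) / class_size)
```

With liberal settings like Nulty's (80% confidence, 10% margin of error), the required rate falls steadily as class size grows, which is why small classes need much higher response rates than large lectures to yield statistically defensible results.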

How can the response rates for online evaluations be improved?

Studies show that the course instructor can have the biggest influence on raising response rates. Many students believe that faculty do not take evaluations seriously and do not make changes as a result of the students' reviews (Marlin, 1987; Nasser & Fresco, 2002; Spencer & Schmelkin, 2002). Indeed, when asked, very few instructors report having made changes in direct response to student evaluation input (Beran & Rokosh, 2009).

Taking time to educate students on how evaluations are used, and to emphasize that their input will be taken seriously, has a positive effect on response rates (Gaillard et al., 2006). Constructive, informative, and encouraging instructor-student engagement around the course evaluation process is also critical to maintaining or improving response rates (Norris & Conn, 2005; Johnson, 2002; Anderson et al., 2006; Ballantyne, 2003).

Below are some other tips for improving response rates. Select those that work best with your teaching philosophy and personal style of working with students:

  • Remind students about the evaluation two to three weeks before the semester or term ends. Norris and Conn (2005) found that giving students early notification that evaluations were approaching, ideally about two to three weeks before the end of the term, raised response rates by an average of 17%.
  • Inform students how to complete the evaluations and where to find them. One study found that courses in which instructors demonstrated how to find and use the evaluation system had a 24% higher response rate than courses with no demonstration (Dommeyer et al., 2004).
  • Consider making the evaluation an assignment. Even with no point value attached, this raised response rates in one study by 7% (Johnson, 2002).
  • Emphasize the importance of evaluation. Students are more likely to complete course evaluations if they understand how the evaluations are used and believe their opinions matter (Gaillard et al., 2006). Explain how the University and you personally use the feedback. Chen and Hoshower (2003) found that students consider improvement in teaching to be the most important outcome of an evaluation system, followed closely by improvement in course content and format.
  • Ask students to bring their laptops to class and allow time in class to complete the evaluation.


References

Adams, M., & Umbach, P. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53: 576-591.

Anderson, J., Brown, G., & Spaeth, S. (2006). Online student evaluations and response rates reconsidered. Innovate, 2(6).

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations? Journal of Economic Education, 37(1): 21-37.

Ballantyne, C.S. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning #96: Online students ratings of instruction (pp. 103-112). San Francisco, CA: Jossey-Bass.

Beran, T., & Rokosh, J. (2009). Instructors' perspectives on the utility of student ratings of instruction. Instructional Science, 37(2): 171-184.

Burton, W., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1): 58-69.

Chen, Y., & Hoshower, L. B. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment and Evaluation in Higher Education, 28(1): 71-88.

Collings, D., & Ballantyne, C. (2004). Online student survey comments: A qualitative improvement? Paper presented at the 2004 Evaluation Forum, Melbourne, Australia.

Dommeyer, C. J., Baum, P., Hanna, R. W., and Chapman, K. (2004). Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment and Evaluation in Higher Education, 29(5): 611-623.

Donovan, J., Mader, C. E., & Shinsky, J. (2006). Constructive student feedback: Online vs. traditional course evaluations. Journal of Interactive Online Learning, 5(3): 283-295.

Gaillard, F., Mitchell, S., & Kavota, V. (2006). Students', faculty, and administrators' perception of students' evaluations of faculty in higher education business schools. Journal of College Teaching & Learning, 3(8): 77-90.

Hardy, N. (2003). Online ratings: Fact and fiction. New Directions for Teaching and Learning, 96: 31-41.

Heath, N. M., Lawyer, S. R., & Rasmussen, E. B. (2007). A comparison of web-based versus pencil-and-paper course evaluations. Teaching of Psychology, 34: 259-261.

Hmieleski, K., & Champagne, M. V. (2000). Plugging in to course evaluation. The Technology Source Archives, Sept./Oct.

Johnson, T. (2002). Online student ratings: Will students respond? Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Kasiar, J. B., Schroeder, S. L. , & Holstad, S. G. (2002). Comparison of Traditional and Web-Based Course Evaluation Processes in a Required, Team-Taught Pharmacotherapy Course. American Journal of Pharmaceutical Education, 66: 268-270.

Laubsch, P. (2006). Online and in-person evaluations: A literature review and exploratory comparison. Journal of Online Learning and Teaching, 2(2).

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40: 221-232.

Liegle, J. O., & McDonald, D. S. (2004, November 5). Lessons learned from online vs. paper-based computer information students' evaluation system. Information Systems Education Journal, 3(37).

Marlin, J. (1987). Student Perceptions of End-of-Course Evaluations. The Journal of Higher Education, 58(6): 704-716.

Nasser, F., & Fresko, B. (2002). Faculty Views of Student Evaluation of College Teaching. Assessment & Evaluation in Higher Education, 27(2): 187-198.

Norris, J., & Conn, C. (2005). Investigating Strategies for Increasing Student Response Rates to Online-Delivered Course Evaluations. Quarterly Review of Distance Education, 6: 13-29.

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3): 301-314.

Perrett, J. (2013). Exploring graduate and undergraduate course evaluations administered on paper and online: A case study. Assessment & Evaluation in Higher Education, 38(1): 85-93.

Ravenscroft, M., & Enyeart, C. (2009). Online student course evaluations: Strategies for increasing student participation rates (Custom research brief). Washington, DC: Education Advisory Board.

Spencer, K. & Pedhazur Schmelkin, L. (2002). Student Perspectives on Teaching and its Evaluation. Assessment & Evaluation in Higher Education, 27(5): 397-409.

Thorpe, S. W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd annual Forum for the Association for Institutional Research, Toronto, Ontario, Canada.

University of British Columbia, Vancouver. (2010, April 15). Student evaluations of teaching: Response rates.