Each academic department or program is required to develop a teaching evaluation plan with the aim of improving the quality of instruction and student learning (MUPIM 7.2.B). These plans should include formal evaluation of courses that can be used for both self-improvement and summative assessment (MUPIM 7.2.C).
Beginning in 2013, all formal course evaluations were administered via an online platform.
Below are key points that should be considered in administering course evaluations to ensure credibility, integrity and instructional improvement:
Note: Many classes consist of multiple contact times that may or may not have the same instructor. For example, a large lecture course taught by Dr. Wong may have multiple lab sections with different instructors (Professors Smith, Taylor, and Jones) and meeting times. Because departments do not assign CRNs consistently in this situation, it can be difficult to identify the “unit of observation” for digital evaluations.
Online course evaluations are cost-effective, decrease staff workload, lower the margin for error, preserve class time that would otherwise be spent on in-class evaluations, and allow for quick feedback. Nevertheless, administering course evaluations online versus by paper represents a significant change in our process of evaluation. Below are responses to frequently asked questions:
The scale for ALL questions is 0-4, with higher numbers being better.
Consult your academic dean’s office or see the Center for the Enhancement of Learning, Teaching and University Assessment (CELTUA)’s website for division questions.
Consult the CELTUA website for more information.
Burton et al. (2012) reviewed eighteen studies comparing quantitative feedback on paper versus online evaluations: fourteen of those studies reported no difference between the delivery methods and two reported slightly higher ratings online. In their own experiment, Burton et al. (2012) determined that online ratings were significantly higher than those collected on paper evaluations. Heath et al. (2007) found that online formats garner more positive and useful comments than paper evaluations.
Most studies show that a higher percentage of students include qualitative feedback when they respond to evaluations online (Donovan et al. 2006; Heath et al. 2007; Kasiar et al. 2002; Laubsch 2006). Research analyzing word count also finds that the amount of qualitative feedback from online evaluations exceeds that of paper evaluations, often by a wide margin (Burton et al. 2012; Heath et al. 2007; Kasiar et al. 2002; Hardy 2003; Hmieleski and Champagne 2000). Perhaps most importantly, several studies have examined the quality of the comments submitted through both formats and discovered that online comments were more substantive and informative, as defined by more words per comment, more descriptive text, and more detailed feedback (Ballantyne 2003; Burton et al. 2012; Collings and Ballantyne 2004; Donovan et al. 2006; Johnson 2002).
According to Perrett (2013), course and instructor ratings are not related to student attendance. In addition, students with a higher cumulative GPA and higher SAT scores complete online evaluations at higher rates than students with lower GPAs and SAT scores (Thorpe 2002). Students expecting higher grades also evaluate at a higher rate (Adams and Umbach 2012). Finally, students expecting poor grades in a class are no more likely to score an instructor below the class mean than students expecting good grades (Avery et al. 2006; Thorpe 2002).
Although these studies may help to alleviate concerns, it is also important to remember that there may be value in gaining feedback from students who attend the course meetings infrequently. Knowing why a student stopped engaging with a course may provide insights into ways of improving the course in the future.
Studies comparing response rates of online versus paper evaluations find that online evaluations generally have 9-10% lower response rates than paper evaluations. Adding incentives can boost response rates by 7-25%, depending upon which incentives or interventions are used (Ravenscroft & Enyeart, 2009; Norris & Conn, 2005; Johnson, 2002).
Response rates increase when faculty make a point of letting their students know how to find the evaluations, that the students’ comments are valued, and how the data are used overall. See below for more suggestions on how to raise response rates.
Nulty (2008) used and justified an 80% confidence level for his calculations and, after a series of assumptions and corrections for bias, concluded that classes with fewer than 20 students need a response rate of at least 58% for results to be considered valid. Courses with more than 50 enrollees can use 35% as their bar. Since instituting online evaluations, Miami’s average response rate has exceeded these thresholds.
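The kind of calculation behind such thresholds can be illustrated with the standard finite-population sample-size formula. The sketch below is a simplification for illustration only: it omits the additional bias corrections Nulty applies, so its thresholds differ somewhat from his published figures, but it shows why smaller classes need proportionally higher response rates.

```python
import math

def required_response_rate(class_size, z=1.28, margin=0.10, p=0.5):
    """Approximate minimum response rate for a class of a given size.

    class_size -- number of enrolled students (the population N)
    z          -- z-score for the confidence level (1.28 ~ 80%)
    margin     -- acceptable sampling error (0.10 = +/- 10 points)
    p          -- assumed proportion (0.5 is the most conservative choice)
    """
    # Sample size needed if the population were infinite.
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite-population correction: small classes need a larger fraction.
    n = n0 / (1 + (n0 - 1) / class_size)
    return math.ceil(n) / class_size

for size in (10, 20, 50, 100):
    print(size, round(required_response_rate(size), 2))
```

Under these simplified settings a 20-student class needs roughly 70% participation while a 100-student class needs about 30%, which mirrors the pattern (though not the exact numbers) in Nulty’s thresholds.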
Studies show that the course instructor can have the biggest influence on raising response rates. Many students believe that faculty do not take evaluations seriously and do not make changes as a result of the students’ reviews (Marlin, 1987; Nasser & Fresco, 2002; Spencer & Schmelkin, 2002). In fact, when asked, very few instructors report having made changes in direct response to student evaluation input (Beran & Rokosh, 2009).
Taking time to educate students on how evaluations are used, and to emphasize that their input will be taken seriously, has a positive effect on response rates (Gaillard et al., 2006). Constructive, informative, and encouraging instructor-student engagement around the course evaluation process is also critical to maintaining or improving response rates (Norris & Conn, 2005; Johnson, 2002; Anderson et al., 2006; Ballantyne, 2003).
Below are some other tips for improving response rates. Select those that work best with your teaching philosophy and personal style of working with students:
Adams, M. and Umbach, P. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53: 576-591.
Anderson, J., Brown, G. & Spaeth, S. (2006). Online student evaluations and response rates reconsidered. Innovate, 2(6). Retrieved from http://www.innovateonline.info/index.php?view=article&id=301
Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations? Journal of Economic Education, 37(1): 21-37.
Ballantyne, C.S. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning #96: Online students ratings of instruction (pp. 103-112). San Francisco, CA: Jossey-Bass.
Beran, T., & Rokosh, J. (2009). Instructors' perspectives on the utility of student ratings of instruction. Instructional Science, 37(2): 171-184.
Burton, W., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1): 58-69.
Chen, Y. & Hoshower, L.B. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment and Evaluation in Higher Education, 28(1): 71-88.
Collings, D., & Ballantyne, C. (2004). Online student survey comments: A qualitative improvement? Paper presented at the 2004 Evaluation forum, Melbourne, Australia. Retrieved from http://our.murdoch.edu.au/Educational-Development/_document/Publications...
Dommeyer, C. J., Baum, P., Hanna, R. W., and Chapman, K. (2004). Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment and Evaluation in Higher Education, 29(5): 611-623.
Donovan, J., Mader, C. E., & Shinsky, J. (2006). Constructive student feedback: Online vs. Traditional course evaluations. Journal of Interactive Online Learning. 5(3), 283-295.
Gaillard, F., Mitchell, S., & Kavota, V. (2006). Students, faculty, and administrators’ perception of students’ evaluations of faculty in higher education business schools. Journal of College Teaching & Learning, 3(8): 77-90.
Hardy, N. (2003). Online ratings: Fact and fiction. New Directions for Teaching and Learning, 96: 31-41. Retrieved from http://www.google.com/url?sa=t&rct=j&q=northwestern%20course%20evaluatio...
Heath, N. M., Lawyer, S. R., & Rasmussen, E, B. (2007). A comparison of web-based versus pencil-and-paper course evaluations. Teaching Psychology, 34, 259-261. Retrieved from http://www.isu.edu/psych/fac_rasmussen.shtml
Hmieleski, K. & Champagne, M. V. (2000). Plugging in to course evaluation. The Technology Source Archives, Sept./Oct. Retrieved from http://technologysource.org/article/plugging_in_to_course_evaluation/.
Johnson, T. (2002). Online student ratings: Will students respond? Paper presented at the annual meeting of the American Educational Research Association, New Orleans, 2002. Retrieved from http://www.armstrong.edu/images/institutional_research/onlinesurvey_will...
Kasiar, J. B., Schroeder, S. L. , & Holstad, S. G. (2002). Comparison of Traditional and Web-Based Course Evaluation Processes in a Required, Team-Taught Pharmacotherapy Course. American Journal of Pharmaceutical Education, 66: 268-270.
Laubsch, P. (2006). Online and in‐person evaluations: A literature review and exploratory comparison. Journal of Online Learning and Teaching, 2(2). Retrieved from http://jolt.merlot.org/Vol2_No2_Laubsch.htm
Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40: 221-232.
Liegle, J. O., & McDonald, D. S. (2004, November 5). Lessons Learned From Online vs. Paper‐based Computer Information Students' Evaluation System. Information Systems Education Journal, 3(37). Retrieved from http://isedj.org/3/37/ISEDJ.3%2837%29.Liegle.pdf
Marlin, J. (1987). Student Perceptions of End-of-Course Evaluations. The Journal of Higher Education, 58(6): 704-716.
Nasser, F., & Fresko, B. (2002). Faculty Views of Student Evaluation of College Teaching. Assessment & Evaluation in Higher Education, 27(2): 187-198.
Norris, J., & Conn, C. (2005). Investigating Strategies for Increasing Student Response Rates to Online-Delivered Course Evaluations. Quarterly Review of Distance Education, 6: 13-29.
Nulty, D. (2008, June). The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education, 33(3), 301-314. Retrieved from http://public.clunet.edu/~mondsche/misc/Nulty.pdf.
Perrett, J. (2013). Exploring graduate and undergraduate course evaluations administered on paper and online: A case study. Assessment & Evaluation in Higher Education, 38(1): 85-93.
Ravenscroft, M. & Enyeart, C. (2009). Online Student Course Evaluations: Strategies for Increasing Student Participation Rates: Custom Research Brief. Education Advisory Board, Washington D.C. Retrieved from http://tcuespot.wikispaces.com/file/view/Online+Student+Course+Evaluatio...
Spencer, K. & Pedhazur Schmelkin, L. (2002). Student Perspectives on Teaching and its Evaluation. Assessment & Evaluation in Higher Education, 27(5): 397-409.
Thorpe, S. W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd annual Forum for the Association for Institutional Research, Toronto, Ontario, Canada.
University of British Columbia, Vancouver. (2010, April 15). Student Evaluations of Teaching: Response Rates. Retrieved from http://teacheval.ubc.ca/files/2010/05/Student-Evaluations-of-Teaching-Re...